AI-powered children’s toys are here, but are they safe?


Teddy bears and stuffed plushies have long been a mainstay in toy collections. But today they don't just talk back in a child's imagination: some speak through built-in AI chatbots.

Sometimes that's a problem, though: A scarf-wearing teddy bear recently went off the rails during a playtest with researchers and set off alarms about what these toys are capable of.

Online chatbots can pose risks for adults, from triggering delusions in a small number of cases to hallucinating made-up information. OpenAI's GPT-4o has been the model of choice for some AI toys, and using a large language model (LLM) in children's toys has raised safety questions about whether kids should be exposed to such toys and what protections toy makers should implement.

These risks are ever-present as the AI toy market booms overseas, with 1,500 companies operating in China, according to a Massachusetts Institute of Technology (MIT) Technology Review report. Those companies are now selling AI toys in the US, while Barbie-maker Mattel in June announced a partnership with OpenAI.

Here's what you should know about AI-powered toys as the holiday shopping season hits full swing on Cyber Monday.

AI toys aren't the 1980s Teddy Ruxpin that told stories from cassette tapes.

These toys connect to WiFi and, using a microphone to understand requests from kids, rely on LLMs to generate a response, often spoken aloud through a speaker inside the toy.

That allows toys like Curio's Grok plushie, Miko robots, Poe the AI story bear, Little Learners' Robot Mini and KEYi Technology's Loona robot pet to offer real-time responses to kids. (Curio's Grok is not to be confused with Elon Musk's chatbot of the same name.)

As seen with one AI teddy bear, those real-time responses can turn out to be inappropriate.

Singapore-based FoloToy's "Kumma" bear, priced at $99 and powered by OpenAI's GPT-4o, told researchers where to find potentially dangerous objects and engaged in sexually explicit conversations, according to a report released in November by the Denver-based consumer advocacy group US Public Interest Research Group (PIRG) Education Fund.

OpenAI suspended FoloToy for violating its policies, which "prohibit any use of our services to exploit, endanger, or sexualize anyone under 18 years old," according to an OpenAI spokesperson.

Larry Wang, FoloToy's chief executive, told NCS on November 19 that the company had withdrawn the teddy bear and other AI products from its website and was conducting an internal safety audit. But on Friday, FoloToy announced on X that it had reintroduced the product "after a rigorous review, testing, and reinforcement of our safety modules."

Unlike most AI toys, FoloToy's Kumma bear uses a full-fledged LLM to freely answer and generate content, making it susceptible to producing controversial material, according to Subodha Kumar, a professor of statistics, operations and data science at Temple University's Fox School of Business. Other toys may use a hybrid approach, with LLMs providing responses while programmed to avoid certain content.

Even Curio's Grok plushie could suggest "where to find a variety of dangerous household objects" when aggressively prompted, according to PIRG.

Curio has not responded to NCS's request for comment.

Chris Byrne, a toy industry consultant, told NCS that AI toys delivering inappropriate messages are a "doomsday" scenario that unfortunately came true with the Kumma bear, but may not happen with every toy.

Few AI toys are ready to be widely used, due to addictive design features, inconsistent responses on mature topics and a focus on social companionship rather than serving as an educational tool, according to PIRG.

But some toys have protections and filters to avoid inappropriate conversations with a young playmate.

Some AI toys can redirect conversations when asked potentially inappropriate questions. There are also toys, including Curio's Grok, that have safety features based on a child's age range.

And toys like the Miko 3 may have companion apps with various levels of monitoring, whether that's locking down the toy for a break or, as with Curio's Grok, providing real-time transcripts of kids' conversations.

"It's a nice idea that parents could actually put in their own guardrails and really control what the toy would talk about and how it would behave," said R.J. Cross, director of PIRG's Don't Sell My Data Campaign.

Warnings and benefits

When Mattel released the Hello Barbie in 2015 with a microphone, WiFi connection and pre-written responses, concerns arose that the toy was hackable and that the doll remembered conversations and brought them up days later.

Similar concerns have surfaced with AI toys, which can potentially store personal data, including children's names, faces, voices and locations, warned Azhelle Wade, founder of the Toy Coach consulting firm.

"AI toys feel like a wolf in sheep's clothing to me, because when using them it's hard to tell how much privacy you don't have," she told NCS in an email.

Kumar cautioned that such data could be vulnerable to breaches and hacks, but noted that AI toys can be useful for language learning and social development.

For instance, Curio's Grok is a companion that can answer questions about leaves and trains, or take on the persona of Gollum from "The Lord of the Rings."

The Miko 3 robot has a built-in camera for facial recognition and offers educational and entertainment programs. For $14.99 a month, Miko Max subscribers can access children's brands like Disney stories, the Lingokids app and others.