New York
Nearly a third of US teens say they use AI chatbots every day, a new study finds, shedding light on how young people are embracing a technology that has raised significant safety concerns around mental health impacts and exposure to mature content for kids.
The Pew Research Center study, which marks the group’s first survey of teens on their general AI chatbot use, found that nearly 70% of American teens have used a chatbot at least once. And among teens who use AI chatbots daily, 16% said they did so multiple times a day or “almost constantly.”
AI chatbots have been pitched as learning and schoolwork tools for young people, but some teens have also turned to them for companionship or romantic relationships. That has contributed to questions about whether young people should use chatbots in the first place. Some experts have worried that their use even in a learning context could stunt development.
Pew surveyed nearly 1,500 US teens between the ages of 13 and 17 for the report, and the pool was designed to be representative across gender, age, race and ethnicity, and household income.
ChatGPT was by far the most popular AI chatbot, with more than half of teens reporting having used it. The other top players were Google’s Gemini, Meta AI, Microsoft’s Copilot, Character.AI and Anthropic’s Claude, in that order.
A nearly equal share of girls (64%) and boys (63%) say they have used an AI chatbot. Teens ages 15 to 17 are somewhat more likely (68%) to say they’ve used chatbots than those ages 13 to 14 (57%). And usage rises slightly as household income goes up, the survey found.
Just shy of 70% of Black and Hispanic teens say they have used an AI chatbot, slightly higher than the 58% of White teens who say the same.
The findings come after two leading AI companies, OpenAI and Character.AI, faced lawsuits from families who alleged the apps played a role in their teens’ suicides or mental health struggles. OpenAI subsequently said it would roll out parental controls and age restrictions. And Character.AI has stopped allowing teens to engage in back-and-forth conversations with its AI-generated characters.
Meta also came under fire earlier this year after reports emerged that its AI chatbot would engage in sexual conversations with minors. The company said it had updated its policies and that next year it will give parents the ability to block teens from chatting with AI characters on Instagram.
At least one online safety group, Common Sense Media, has advised parents not to allow children under 18 to use companion-like AI chatbots, saying they pose “unacceptable risks” to young people.
Some experts have also raised concerns that using AI for schoolwork could encourage cheating, though others say the technology can provide more personalized learning support.
Meanwhile, AI companies have pushed to get their chatbots into schools. OpenAI, Microsoft and Anthropic have all rolled out tools for students and teachers. Earlier this year, the companies also partnered with teachers unions to launch an AI instruction academy for educators.
Microsoft, in particular, has sought to position its Copilot as the safest choice for parents, with AI CEO Mustafa Suleyman telling NCS in October that it will never allow romantic or sexual conversations with adults or children.