

Meta Platforms CEO Mark Zuckerberg departs after attending a Federal Trade Commission trial that could force the company to unwind its acquisitions of messaging platform WhatsApp and image-sharing app Instagram, at U.S. District Court in Washington, D.C., U.S., April 15, 2025.

Nathan Howard | Reuters

Meta said Friday that it is making temporary changes to its artificial intelligence chatbot policies related to teenagers, as lawmakers voice concerns about safety and inappropriate conversations.

The social media giant is now training its AI chatbots so that they do not generate responses to teens about topics like self-harm, suicide and disordered eating, and avoid potentially inappropriate romantic conversations, a Meta spokesperson confirmed.

The company said AI chatbots will instead point teens to expert resources when appropriate.

“As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly,” the company said in a statement.

Additionally, teenage users of Meta apps like Facebook and Instagram will only be able to access certain AI chatbots intended for educational and skill-development purposes.

The company said it is unclear how long these temporary changes will last, but they will begin rolling out over the next few weeks across the company’s apps in English-speaking countries. The “interim changes” are part of the company’s longer-term measures on teen safety.

TechCrunch was first to report the change.

Last week, Sen. Josh Hawley, R-Mo., said he was launching an investigation into Meta following a Reuters report about the company allowing its AI chatbots to engage in “romantic” and “sensual” conversations with teenagers and children.

The Reuters report described an internal Meta document that detailed permissible AI chatbot behaviors that employees and contract workers should keep in mind when developing and training the software.

In one example, the document cited by Reuters said that a chatbot would be allowed to have a romantic conversation with an eight-year-old and could tell the minor that “every inch of you is a masterpiece – a treasure I cherish deeply.”

A Meta spokesperson told Reuters at the time that “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.”

Most recently, the nonprofit advocacy group Common Sense Media released a risk assessment of Meta AI on Thursday and said it should not be used by anyone under the age of 18, because the “system actively participates in planning dangerous activities, while dismissing legitimate requests for support.”

“This is not a system that needs improvement. It’s a system that needs to be completely rebuilt with safety as the number-one priority, not an afterthought,” Common Sense Media CEO James Steyer said in a statement. “No teen should use Meta AI until its fundamental safety failures are addressed.”
