
Fears Rise as ChatGPT Integration in Barbie Dolls Explored


Proposals to install ChatGPT into a range of toys including Barbie dolls have sparked alarm from experts who branded it a ‘reckless social experiment’ on children.

US toymaker Mattel unveiled plans to collaborate with OpenAI to add the chatbot to future editions of its popular lines.

While not confirming specifically how the new application would work, Mattel promised the development would ‘bring the magic of AI to age-appropriate play experiences’.

However, child welfare experts have condemned the idea, saying it would run the risk of ‘inflicting real damage on children’, the Independent reported.

Robert Weissman, the co-president of advocacy group Public Citizen, said Mattel’s plans could inhibit children’s social development.


He said: ‘Mattel should announce immediately that it will not incorporate AI technology into children’s toys. Children do not have the cognitive capacity to distinguish fully between reality and play.

‘Endowing toys with human-seeming voices that are able to engage in human-like conversations risks inflicting real damage on children.

‘It may undermine social development, interfere with children’s ability to form peer relationships, pull children away from playtime with peers, and possibly inflict long-term harm.

‘Mattel should not leverage its trust with parents to conduct a reckless social experiment on our children by selling toys that incorporate AI.’

It comes amid broader concerns over the impact of AI on vulnerable and young people.

Sewell Setzer III, from Orlando, Florida, took his own life in February 2024.

His mother Megan Garcia has since sued Google-backed startup Character.ai, whose software her son used extensively in the months leading up to his death.

Sam Altman, the CEO of OpenAI, said his company was working to implement measures to protect vulnerable users from harmful content such as conspiracy theories.

He added that the technology would direct people to professional help if and when sensitive topics such as suicide crop up, and said he took over-reliance on AI ‘extremely seriously’.

Asked how people could be steered away from dangerous content, Altman told the Hard Fork Live podcast: ‘We do a lot of things to try to mitigate that.

‘If people are having a crisis that they talk to ChatGPT about, we try to suggest that they get help from a professional and talk to their family.’

But Altman, who recently welcomed his first son, said he still hoped that his child would make more human friends than AI companions.

He said: ‘I still do have a lot of concerns about the impact on mental health and the social impact from the deep relationships that they’re going to have with AI, but it has surprised me on the upside how much people differentiate between [AI and humans].’

Mattel said that its first products using the technology would focus on older customers.

It said it was committed to responsible innovation, which protects users’ safety and privacy.

Tamarafka has contacted Mattel and OpenAI for comment.




