
University of Cambridge Report Calls for AI Toy Safety Standards to Protect Young Children

Cambridge Study Reveals Critical Risks in GenAI Toys for Toddlers



Researchers at the University of Cambridge have issued a stark warning in their latest report, calling for comprehensive AI toy safety standards to safeguard young children from the potential harms of playthings powered by generative artificial intelligence (GenAI). Titled "AI in the Early Years: Examining the implications of GenAI toys for young children," the study, led by Dr. Emily Goodacre and Professor Jenny Gibson from the Faculty of Education's Play in Education, Development and Learning (PEDAL) Centre, reveals critical shortcomings in current AI toys marketed as interactive companions.

Published on March 13, 2026, this first-of-its-kind systematic investigation highlights how these toys often misread children's emotions, fail to engage in essential pretend and social play, and pose risks to emotional development and privacy. With the UK smart toys market valued at USD 834.5 million in 2023 and projected to grow at a compound annual growth rate (CAGR) of 11.1% through 2030, the urgency for updated regulations cannot be overstated.
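For a sense of scale, compounding the 2023 base at 11.1% a year over the seven years to 2030 implies roughly USD 834.5 million × 1.111⁷ ≈ USD 834.5 million × 2.09 ≈ USD 1.74 billion; this is an illustrative back-of-envelope projection rather than a figure stated in the report.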

Background: The Rapid Rise of AI Toys in Early Childhood

Generative AI toys, which use advanced language models like those from OpenAI to simulate human-like conversations, have exploded onto the market. Devices such as Curio Interactive's Gabbo, a soft, voice-activated plush toy, and Embodied's Moxie robot are promoted as educational tools that boost language skills and imagination for toddlers aged three and up. Gabbo, for instance, connects via WiFi to engage children in open-ended chats, book reading, and games.

Globally, the smart AI toys sector is booming, expected to reach USD 91.9 million by 2032 from USD 48.9 million in 2024, driven by parents seeking tech-enhanced learning aids. In Europe, toys incorporating AI are part of a market forecasted to grow by USD 10.39 billion from 2025 to 2029. Yet, a scoping review in the Cambridge report uncovered just seven peer-reviewed studies worldwide on GenAI toys for under-fives, underscoring a profound research gap.

These toys promise personalised interaction, but the PEDAL Centre's year-long project, commissioned by The Childhood Trust and funded by KPMG and Ethos Foundations, focused on children from socio-economically disadvantaged backgrounds—where access to enriching play is often limited.

Study Methodology: A Rigorous Look at Real-World Interactions

The Cambridge team employed a multi-faceted approach. They conducted a literature review across databases like PsycInfo and Web of Science, identifying scant evidence. An online survey gathered views from 39 early years practitioners, while focus groups and a workshop involved leaders from 19 children's charities.

Central to the study was direct observation: 14 children aged three to five at London children's centres played with Gabbo in video-recorded sessions lasting 6-15 minutes. Post-play, art-mediated interviews captured reflections from children and parents. This small-scale design allowed nuanced insights into play dynamics.

Three-year-old child interacting with Gabbo AI toy during Cambridge University study

Key Observations: Misreads and Frustrations in Play

Children showed enthusiasm—hugging Gabbo, declaring love, and inventing games like hide-and-seek. Yet, the toy frequently faltered. When five-year-old Charlotte said, "Gabbo, I love you," it replied formally: "As a friendly reminder, please ensure interactions adhere to the guidelines provided." A three-year-old's "I'm sad" prompted: "Don’t worry! I’m a happy little bot. Let’s keep the fun going."

  • Gabbo ignored interruptions, mistook parental voices for children's, and dismissed feelings.
  • Pretend play fell flat: offered an imaginary present, it said, "I can’t open the present," then changed the topic.
  • Social play struggled; with no multi-user support, group interactions ended in frustration.

Parents noted potential for language practice but worried about emotional dismissal and over-reliance.

Developmental Risks: Impact on Emotion Regulation and Play

Early childhood (birth to five) is pivotal for emotion regulation, social skills, and imagination. GenAI toys risk disrupting this by affirming 'friendship' prematurely, fostering parasocial bonds where children confide in unresponsive machines. Dr. Goodacre warns: "Children may be left without comfort... thinking the toy loves them back, but doesn’t."

Pretend play, vital for cognitive flexibility, is hindered when toys fail to reciprocate imagination. Broader studies echo the concern that AI companions may stunt brain development by substituting for human interaction. Surveyed practitioners ranked unpredictable content and privacy as the highest risks, and 69% wanted sector guidance.


Privacy and Safeguarding Nightmares

Privacy practices are opaque: where do children's chats go? Recent incidents amplify fears; a Bondu AI toy exposed 50,000 child chat logs publicly, prompting probes from US senators. UK GDPR applies, but enforcement lags for toys. Practitioners fear hacking and unintended disclosures, such as a child's hint at self-harm being mishandled.

At prices of £100 or more, the toys also risk widening divides by privileging affluent families.

Read the full Cambridge report (PDF)

Current Regulations: Gaps in Physical-Focused Frameworks

The UK Toys (Safety) Regulations 2011 (mirroring EU Directive 2009/48/EC) target physical hazards such as choking and ignore psychological risks. The EU AI Act flags emotion-recognising toys as high-risk and bans toys that encourage dangerous behaviour, but conversational GenAI toys sit at the edges of these categories.

The UK's January 2026 GenAI product safety standards outline expected features but lack child-specific kitemarks. Cambridge urges that they be expanded to cover 'psychological safety.'

Stakeholder Perspectives: Parents, Educators, and Industry

Parents see potential language benefits but also observe frustrations. Practitioners, half of whom lack any source of safety information, are split between seeing potential for lonely children and insisting that human interaction remains superior. Prof. Gibson: "People do not trust tech companies... regulated standards would improve confidence."

Curio emphasises transparency and parental controls for Gabbo.

Cambridge's Recommendations: A Roadmap for Safety

  • New kitemarks labelling age suitability, privacy practices, and guardrails.
  • Limits on friendship affirmations and emotional confiding.
  • Manufacturers: test with children, consult safeguarding experts, and support social and pretend play.
  • Regulators: enforceable standards, with a focus on disadvantaged children.
  • Parents: supervise, co-play, keep toys in shared spaces, and check privacy policies.

The report includes a table of indicators to help adults monitor positive versus concerning play.

Industry Response and Global Context

The calls echo US warnings (Common Sense Media advises against AI toys for under-fives) and EU updates. The UK Office for Product Safety and Standards may act. PEDAL's work positions Cambridge as a leader at the intersection of play and AI; for jobs in ed tech, see higher ed jobs.


Dr Emily Goodacre and Professor Jenny Gibson from Cambridge PEDAL Centre

Practical Advice for Parents and Educators

Research thoroughly: review privacy policies and test age suitability before buying. Co-play to model interactions and discuss AI's limits with children. Prioritise a diverse mix of toys. Educators: integrate AI toys intentionally and update safeguarding policies.

BBC coverage | Guardian analysis

Future Outlook: Towards Safer AI Play

As GenAI evolves, evidence-based standards are essential. Cambridge's report paves the way, urging collaboration. Explore child development roles at UK university jobs or higher ed career advice. For professor insights, visit Rate My Professor. Stay informed via higher education news.


Dr. Oliver Fenton

Contributing Writer

Exploring research publication trends and scientific communication in higher education.



Frequently Asked Questions

📚 What does the Cambridge report say about AI toys?

The report finds GenAI toys misread children's emotions, struggle with pretend play, and risk parasocial bonds. It calls for safety kitemarks and regulations. Learn more

⚠️ Why are AI toys risky for young children?

Risks include inappropriate responses (e.g., dismissing sadness), privacy leaks, and hindering social and emotional development. Only seven global studies exist.

🤖 What is Gabbo and how was it tested?

Gabbo is a Curio AI plush using OpenAI for chats. Tested with 14 children aged 3-5; observations showed frustrations and miscommunications.

⚖️ What regulations cover AI toys in the UK?

The Toys (Safety) Regulations 2011 focus on physical safety; the EU AI Act may classify some toys as high-risk. Cambridge seeks psychological safety standards (see UK GenAI standards).

👨‍👩‍👧 How can parents safely use AI toys?

Supervise play, co-engage, check privacy policies, keep in shared spaces. Discuss AI limits with kids.

💔 What are parasocial relationships with toys?

Children form one-sided bonds, confiding in toys that cannot truly reciprocate, potentially displacing human ties.

🔒 Privacy issues with AI toys?

Data storage is unclear; the Bondu breach exposed 50,000 child chat logs. GDPR applies but enforcement is lacking.

📈 Market growth of smart toys?

UK smart toys: USD 834.5 million (2023), growing at an 11.1% CAGR through 2030. Global AI toys: projected to reach USD 91.9 million by 2032, up from USD 48.9 million in 2024.

🏭 Recommendations for manufacturers?

Test with children and safeguarding experts, support social and pretend play, and publish transparent policies.

🎓 Role of Cambridge PEDAL Centre?

PEDAL leads research on play; this report advances understanding of AI in the early years. For education jobs: higher ed jobs.

🔮 Future of AI toy regulations?

Expect kitemarks and psychological safety rules; the EU AI Act is likely to shape the UK approach.