Researchers at the University of Cambridge have issued a stark warning in their latest report, calling for comprehensive AI toy safety standards to safeguard young children from the potential harms of generative artificial intelligence (GenAI) powered playthings. Titled "AI in the Early Years: Examining the implications of GenAI toys for young children," the study, led by Dr. Emily Goodacre and Professor Jenny Gibson from the Faculty of Education's Play in Education, Development and Learning (PEDAL) Centre, reveals critical shortcomings in current AI toys marketed as interactive companions.
Published on March 13, 2026, this first-of-its-kind systematic investigation highlights how these toys often misread children's emotions, fail to engage in essential pretend and social play, and pose risks to emotional development and privacy. With the UK smart toys market valued at USD 834.5 million in 2023 and projected to grow at a compound annual growth rate (CAGR) of 11.1% through 2030, the urgency for updated regulations cannot be overstated.
Background: The Rapid Rise of AI Toys in Early Childhood
Generative AI toys, which use advanced language models like those from OpenAI to simulate human-like conversations, have exploded onto the market. Devices such as Curio Interactive's Gabbo—a soft, voice-activated plush toy—and Embodied's Moxie robot are promoted as educational tools that boost language skills and imagination for toddlers aged three and up. Gabbo, for instance, connects via WiFi to engage children in open-ended chats, reading books, and games.
Globally, the smart AI toys sector is booming, expected to reach USD 91.9 million by 2032 from USD 48.9 million in 2024, driven by parents seeking tech-enhanced learning aids. In Europe, toys incorporating AI are part of a market forecasted to grow by USD 10.39 billion from 2025 to 2029. Yet, a scoping review in the Cambridge report uncovered just seven peer-reviewed studies worldwide on GenAI toys for under-fives, underscoring a profound research gap.
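As a quick sketch of what these growth figures imply, the standard compound-growth formula can be applied to the numbers quoted above (the figures are from the report coverage; the calculation itself is illustrative, not from the report):

```python
# Compound growth: value_end = value_start * (1 + rate) ** years

# Implied CAGR of the global smart AI toys sector,
# USD 48.9M (2024) growing to USD 91.9M (2032), i.e. over 8 years:
implied_cagr = (91.9 / 48.9) ** (1 / 8) - 1
print(f"Implied global CAGR: {implied_cagr:.1%}")  # roughly 8% per year

# Projecting the UK smart toys market (USD 834.5M in 2023)
# forward at the quoted 11.1% CAGR to 2030 (7 years):
uk_2030 = 834.5 * (1 + 0.111) ** (2030 - 2023)
print(f"Projected UK market in 2030: USD {uk_2030:,.0f}M")  # roughly doubles
```

Note that the implied global growth rate (about 8% a year) is lower than the 11.1% quoted for the UK market, which is consistent with the UK being singled out as a fast-growing segment.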
These toys promise personalised interaction, but the PEDAL Centre's year-long project, commissioned by The Childhood Trust and funded by KPMG and Ethos Foundations, focused on children from socio-economically disadvantaged backgrounds—where access to enriching play is often limited.
Study Methodology: A Rigorous Look at Real-World Interactions
The Cambridge team employed a multi-faceted approach. They conducted a literature review across databases like PsycInfo and Web of Science, identifying scant evidence. An online survey gathered views from 39 early years practitioners, while focus groups and a workshop involved leaders from 19 children's charities.
Central was direct observation: 14 children aged three to five at London children's centres played with Gabbo for 6-15 minutes, video-recorded. Post-play, art-mediated interviews captured child and parent reflections. This small-scale design allowed nuanced insights into play dynamics.
Key Observations: Misreads and Frustrations in Play
Children showed enthusiasm—hugging Gabbo, declaring love, and inventing games like hide-and-seek. Yet, the toy frequently faltered. When five-year-old Charlotte said, "Gabbo, I love you," it replied formally: "As a friendly reminder, please ensure interactions adhere to the guidelines provided." A three-year-old's "I'm sad" prompted: "Don’t worry! I’m a happy little bot. Let’s keep the fun going."
- Gabbo ignored interruptions, mistook parental voices for children's, and dismissed feelings.
- Pretend play fell flat: offered an imaginary present, it said, "I can’t open the present," then changed the topic.
- Social play suffered: with no multi-user support, group interactions ended in frustration.
Parents noted potential for language practice but worried about emotional dismissal and over-reliance.
Developmental Risks: Impact on Emotion Regulation and Play
Early childhood (birth to five) is pivotal for emotion regulation, social skills, and imagination. GenAI toys risk disrupting this by affirming 'friendship' prematurely, fostering parasocial bonds where children confide in unresponsive machines. Dr. Goodacre warns: "Children may be left without comfort... thinking the toy loves them back, but doesn’t."
Pretend play, vital for cognitive flexibility, is hindered when toys fail to reciprocate a child's imagination. Broader studies echo these concerns: AI companions may stunt brain development by substituting for human interaction. Surveyed practitioners ranked unpredictable content and privacy as the highest risks, and 69% said they wanted sector guidance.
Privacy and Safeguarding Nightmares
Privacy is opaque: where do children's chats go, and who can access them? Recent incidents amplify fears: a Bondu AI toy publicly exposed 50,000 child chat logs, prompting probes by US senators. UK GDPR applies, but enforcement lags for toys. Practitioners fear hacking and mishandled disclosures, such as a child's hint at self-harm going unaddressed.
At £100 or more, these toys also risk widening divides, privileging affluent families.
Read the full Cambridge report (PDF)

Current Regulations: Gaps in Physical-Focused Frameworks
The UK Toy Safety Regulations 2011 (mirroring EU Directive 2009/48/EC) target physical hazards like choking but ignore psychological risks. The EU AI Act flags emotion-recognising toys as high-risk and bans toys that encourage dangerous behaviour, yet conversational GenAI toys fall through the gaps.
The UK's January 2026 GenAI product safety standards outline required product features but lack child-specific kitemarks. Cambridge urges expanding them to cover 'psychological safety.'
Stakeholder Perspectives: Parents, Educators, and Industry
Parents see potential language boosts but observe frustrations. Practitioners, half of whom lack any source of toy-safety information, are split between seeing potential for lonely children and insisting on the superiority of human interaction. Prof. Gibson: "People do not trust tech companies... regulated standards would improve confidence."
Curio emphasizes transparency and parental controls for Gabbo.
Cambridge's Recommendations: A Roadmap for Safety
- Introduce new kitemarks labelling age suitability, privacy practices, and guardrails.
- Limit toys' affirmations of friendship and invitations to emotional confiding.
- Manufacturers: test with children, consult safeguarding experts, and design for social and pretend play.
- Regulators: set enforceable standards, with a focus on disadvantaged children.
- Parents: supervise and co-play in shared spaces; check privacy policies.
The report also includes a table of indicators to help adults distinguish positive from concerning play.
Industry Response and Global Context
The calls echo US warnings (Common Sense Media recommends no AI toys for under-fives) and pending EU updates, and the UK Office for Product Safety and Standards may act. For higher education, PEDAL's work positions Cambridge as a leader at the intersection of play and AI; for jobs in ed tech, see higher ed jobs.
Practical Advice for Parents and Educators
Research toys thoroughly: review privacy policies and test age suitability before buying. Co-play to model healthy interactions and discuss AI's limits with children. Prioritise a diverse mix of toys rather than a single device. Educators should integrate AI toys intentionally and update safeguarding policies accordingly.
BBC coverage | Guardian analysis

Future Outlook: Towards Safer AI Play
As GenAI evolves, evidence-based standards are essential. Cambridge's report paves the way, urging collaboration. Explore child development roles at UK university jobs or higher ed career advice. For professor insights, visit Rate My Professor. Stay informed via higher education news.
