
York University Study Reveals Hidden Mismatches in Brain-Like AI Models

Challenging the 'Brain-Like' Claim in Modern AI Systems




York University Researchers Challenge Assumptions in Brain-Inspired AI Development

A groundbreaking study from York University has exposed significant limitations in the claim that modern artificial neural networks (ANNs)—the core technology behind today's artificial intelligence (AI) systems—truly mimic the human brain. Published on March 25, 2026, in Nature Machine Intelligence, the research introduces a novel diagnostic tool called the "reverse predictivity test," revealing hidden mismatches between how AI processes visual information and how primate brains, including ours, handle the same tasks.

Lead researcher Kohitij Kar, an Assistant Professor in York University's Department of Biology and Canada Research Chair in Visual Neuroscience, explains that while AI models excel at predicting brain activity during object recognition, the reverse isn't true. "The results were striking," Kar notes. "While AI models can predict the neurons we recorded in the brain fairly well, the brain cannot equally predict many of the model’s internal features." This asymmetry suggests AI relies on computational shortcuts or "internal strategies" that diverge from biological processes, potentially undermining their use in neuroscience and clinical applications.

Understanding the Reverse Predictivity Test: A Bidirectional Approach

Traditional benchmarks for brain-like AI focus on forward predictivity: how well an ANN's internal activations forecast neural responses in the brain's inferior temporal (IT) cortex, the region key to object recognition. These models often explain up to 50% of neural variance, earning them the "brain-like" label. However, York researchers flipped this paradigm with reverse predictivity, measuring how effectively brain activity predicts ANN unit activations.

The test uses linear regression mappings between neural populations and model features, providing a scalable, conservative metric. In monkey-to-monkey comparisons, predictivity is symmetric—a biological baseline. Yet, for ANNs, even top performers like convolutional neural networks (CNNs) and transformers show pronounced asymmetry, highlighting biologically inaccessible dimensions in their representations.
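The asymmetry can be sketched with toy data. In this hypothetical Python example (entirely synthetic; the sizes, noise levels, and regression setup are illustrative assumptions, not the study's pipeline), the simulated model encodes everything the simulated neurons do plus many dimensions of its own, so the model-to-brain ridge mapping scores far higher than the brain-to-model one:

```python
# Toy demonstration of forward vs. reverse predictivity (synthetic data;
# sizes, noise levels, and the regression setup are illustrative assumptions).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, n_neurons, n_model_units = 400, 50, 200

# A 10-dimensional "shared" signal drives the simulated neurons; the model
# contains that signal plus 190 extra dimensions the neurons never see.
shared = rng.normal(size=(n_images, 10))
neural = shared @ rng.normal(size=(10, n_neurons)) + 0.1 * rng.normal(size=(n_images, n_neurons))
extra = rng.normal(size=(n_images, n_model_units - 10))
model = np.hstack([shared, extra]) @ rng.normal(size=(n_model_units, n_model_units))

def predictivity(X, Y, alpha=1.0):
    """Mean held-out R^2 of a ridge mapping from X to the columns of Y."""
    Xtr, Xte, Ytr, Yte = train_test_split(X, Y, test_size=0.25, random_state=0)
    pred = Ridge(alpha=alpha).fit(Xtr, Ytr).predict(Xte)
    ss_res = ((Yte - pred) ** 2).sum(axis=0)
    ss_tot = ((Yte - Yte.mean(axis=0)) ** 2).sum(axis=0)
    return float(np.mean(1 - ss_res / ss_tot))

forward = predictivity(model, neural)  # model features -> "brain"
reverse = predictivity(neural, model)  # "brain" -> model features
print(f"forward: {forward:.2f}, reverse: {reverse:.2f}")  # forward is much higher
```

Here the gap is built in by construction; the study's point is that real ANNs show a similar gap even though nothing obvious forces them to.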

Figure: Forward and reverse predictivity between AI models and brain activity.

This bidirectional framework distinguishes "common" units—those aligned with brain activity, behaviorally relevant, and generalizable across species—from "unique" units that boost task performance but lack biological grounding.

Detailed Methodology: Testing Across Diverse Visual Stimuli

To rigorously probe these mismatches, the team curated a dataset of 1,320 naturalistic images featuring everyday objects like bears, elephants, faces, apples, cars, dogs, chairs, planes, birds, and zebras against varied backgrounds (natural, indoor, outdoor). An additional 300 images rendered these objects in non-photorealistic styles—outlines, drawings, schematized forms, and artistic variations—to stress-test generalization.

  • ANNs were evaluated on vision tasks, with neural data from macaque IT cortex.
  • Forward and reverse predictivity computed via ridge regression over 20 repetitions.
  • Behavioral relevance assessed through human and monkey psychophysics data.
  • Influencing factors analyzed: feature dimensionality (via PCA), training objectives (e.g., joint recognition-memorability), adversarial robustness.

Ablation experiments confirmed common units drive consistent behavioral predictions across models and primates, while unique units do not.
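The ablation logic above can be illustrated with a minimal sketch (entirely synthetic data; the 0.5 threshold and all sizes are arbitrary assumptions, not the paper's values): units a simulated "brain" can predict are labeled common, the rest unique, and only the common set predicts a behavior driven by the shared signal.

```python
# Toy sketch of the common-vs-unique ablation (synthetic data; the 0.5
# threshold and all sizes are arbitrary assumptions, not the paper's values).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 500
shared = rng.normal(size=(n, 10))                       # signal both systems encode
neural = shared @ rng.normal(size=(10, 40)) + 0.1 * rng.normal(size=(n, 40))
model = np.hstack([shared @ rng.normal(size=(10, 50)),  # 50 "common" units
                   rng.normal(size=(n, 150))])          # 150 "unique" units
behavior = shared @ rng.normal(size=10) + 0.1 * rng.normal(size=n)

def heldout_r2(X, Y):
    """Per-column held-out R^2 of a ridge mapping from X to Y."""
    Xtr, Xte, Ytr, Yte = train_test_split(X, Y, test_size=0.25, random_state=0)
    pred = Ridge(alpha=1.0).fit(Xtr, Ytr).predict(Xte)
    return 1 - ((Yte - pred) ** 2).sum(0) / ((Yte - Yte.mean(0)) ** 2).sum(0)

# Label units by how well the simulated brain predicts them.
unit_r2 = heldout_r2(neural, model)
common, unique = model[:, unit_r2 > 0.5], model[:, unit_r2 <= 0.5]

# Only the common set carries the behaviorally relevant signal.
common_score = float(heldout_r2(common, behavior[:, None]).mean())
unique_score = float(heldout_r2(unique, behavior[:, None]).mean())
print(f"common units -> behavior R^2: {common_score:.2f}")
print(f"unique units -> behavior R^2: {unique_score:.2f}")
```

The unique units boost the model's dimensionality without contributing anything a linear readout of behavior can use, mirroring the study's finding in miniature.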

Key Findings: Asymmetry and Its Drivers

The study uncovered that high forward predictivity (~50% variance explained) coexists with low reverse predictivity, indicating ANNs solve visual recognition via strategies alien to the brain. Factors exacerbating mismatches include:

  • High dimensionality: Reducing via PCA boosts reverse predictivity.
  • Training objectives: Multi-task learning (e.g., recognition + memorability) yields more symmetric models.
  • Adversarial vulnerability: Robust models show better alignment.
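The first factor, dimensionality, can be demonstrated in a toy setting (hypothetical data; the scale factor, component count, and sizes are assumptions for demonstration only): projecting high-dimensional model features onto their top principal components concentrates the shared, brain-predictable variance and raises the brain-to-model score.

```python
# Toy illustration of the dimensionality factor (synthetic data; the scale
# factor, component count, and sizes are assumptions for demonstration only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500

# High-variance shared signal (also driving the simulated neurons) plus
# 190 low-variance model-only dimensions.
shared = 3.0 * rng.normal(size=(n, 10))
neural = shared @ rng.normal(size=(10, 40)) + 0.1 * rng.normal(size=(n, 40))
model = np.hstack([shared, rng.normal(size=(n, 190))])

def reverse_r2(X, Y):
    """Mean held-out R^2 of a ridge mapping from X to the columns of Y."""
    Xtr, Xte, Ytr, Yte = train_test_split(X, Y, test_size=0.25, random_state=0)
    pred = Ridge(alpha=1.0).fit(Xtr, Ytr).predict(Xte)
    return float(np.mean(1 - ((Yte - pred) ** 2).sum(0) / ((Yte - Yte.mean(0)) ** 2).sum(0)))

raw = reverse_r2(neural, model)  # brain -> all 200 model dimensions
reduced = reverse_r2(neural, PCA(n_components=10).fit_transform(model))  # brain -> top PCs
print(f"raw: {raw:.2f}, after PCA: {reduced:.2f}")  # PCA reduction raises the score
```

Averaging over all 200 raw dimensions dilutes the score with unpredictable ones; keeping only the dominant components discards them, which is one reading of why dimensionality reduction helps.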

Common units not only mirror IT cortex activity but also predict human behavior better than unique units, and they generalize across monkeys. This diagnostic pinpoints pathways toward biologically plausible AI.

Meet the Researchers: Pioneers at York University

Senior author Kohitij Kar heads the ViTA Lab at York, focusing on computational visual neuroscience. A member of the Centre for Vision Research and the Centre for Integrative and Applied Neuroscience (CIAN), Kar's work bridges AI and biology, with applications in autism research. "Our approach helps identify which parts of an ANN truly match brain activity," he says, pointing to the use of brain-aligned models as neurotypical baselines.

Co-author Sabine Muzellec, a postdoctoral fellow and Connected Minds trainee, highlights the metric's field-wide utility: "We provide a well-vetted diagnostic for the field." Their collaboration leverages York's ecosystem, including the $318.4 million Connected Minds initiative—Canada's largest York-led program—for neural-machine intelligence.

Implications for AI Development and Neuroscience

These mismatches challenge AI's role in hypothesizing brain function or designing behavioral experiments. As ANNs inform clinical tools for post-traumatic stress disorder (PTSD) or autism, unaddressed divergences risk invalid baselines. The study urges developers to prioritize reverse predictivity alongside accuracy.

York's open-source toolkit (reverse-pred on PyPI, GitHub code at github.com/vital-kolab/reverse_pred) democratizes testing, with data on OSF. For full details, access the paper via DOI: 10.1038/s42256-026-01204-0.

York University's Leadership in Canadian AI-Neuroscience Research

York exemplifies Canada's push in brain-inspired AI. Connected Minds, partnering with Queen's University, funds brain-computer interface (BCI) and mental health technologies with $105.7 million from the New Frontiers in Research Fund. York's Lassonde School of Engineering advances neuromorphic computing, which mimics the brain's energy efficiency. Nationally, Nengo software at universities like Waterloo enables large-scale brain modeling, while Queen's photonic neuromorphic chips promise energy savings.

Funding like CFI's $1.5 million to York supports AI infrastructure, positioning Canadian higher education as a global hub.

Challenges in Brain-Like AI and Paths Forward

Mismatches risk amplifying over time, as Kar warns: "This difference... will widen if not corrected now." In higher education, overreliance on misaligned models undermines AI ethics teaching, neuroscience curricula, and interdisciplinary research. Solutions include multi-objective training and dimensionality controls.

  • Incorporate reverse predictivity in benchmarks.
  • Leverage toolkits for iterative improvement.
  • Foster collaborations like Connected Minds.

For students and faculty, this underscores hybrid human-AI workflows, emphasizing biological plausibility.


Future Outlook: Toward Truly Brain-Aligned AI in Canada

By addressing these flaws, Canadian researchers can lead in neuromorphic and plausible AI, enhancing robustness against adversarial attacks and improving clinical translations. York's autism program exemplifies potential: brain-aligned models as neurotypical baselines. With initiatives like Nengo Summer School 2026, the next generation is poised to bridge gaps.

As AI integrates into higher education—from research to remote jobs—studies like this ensure ethical, effective progress. Explore opportunities in Canada's vibrant AI ecosystem.

Prof. Evelyn Thorpe

Contributing Writer

Promoting sustainability and environmental science in higher education news.




Frequently Asked Questions

🔄 What is reverse predictivity?

Reverse predictivity measures how well brain neural activity predicts artificial neural network (ANN) activations, complementing traditional forward predictivity.

🧠 Why do AI models mismatch the brain?

ANNs use high-dimensional, biologically inaccessible strategies for visual tasks, despite strong forward prediction of brain activity.

👨‍🔬 Who led the York University study?

Kohitij Kar (senior author) and Sabine Muzellec, from York's Biology Department and centres like CVR and CIAN.

🖼️ What images were used in the study?

1,320 naturalistic object images (e.g., bear, car) plus 300 stylized versions (outlines, drawings) to test generalization.

💻 How to use the reverse-pred toolkit?

Install via pip install reverse-pred; compute metrics with functions like compute_model_to_monkey(). Full code on GitHub.

⚖️ What are common vs. unique ANN units?

Common units align with brain activity and predict behavior; unique units enhance performance but lack biological plausibility.

🏥 Implications for clinical applications?

Better alignment could improve models for PTSD, autism; current mismatches risk invalid neurotypical baselines.

🇨🇦 York's role in Canadian AI research?

Leads Connected Minds ($318M initiative) for neural-machine systems; supports neuromorphic and vision research.

📈 Factors improving brain-AI alignment?

Multi-task training, dimensionality reduction, adversarial robustness training.

📊 Where to access the study data?

Neural responses, features, results on OSF: osf.io/y3qmk; paper DOI 10.1038/s42256-026-01204-0.

🚀 Future of neuromorphic AI in Canada?

Initiatives like Nengo and photonic chips at Queen's position Canada as leader in energy-efficient, brain-like computing.