Postdoctoral Associate (Levine/Muthukrishna Lab)
Dr. Sydney Levine and Dr. Michael Muthukrishna are jointly recruiting a Postdoctoral Associate to work across their two labs at NYU, starting this summer. You will work at the intersection of cultural evolution, computational moral cognition, and AI safety.
Human societies have solved an extraordinary version of the multi-agent alignment problem, getting millions to billions of individuals to cooperate, coordinate, and enforce shared moral norms. Cultural evolution built these solutions over generations, encoding and transmitting them through interconnected beliefs, values, behaviors, norms, and institutions. The cognitive mechanisms that underpin moral judgment (rule-following, outcome evaluation, bargaining) are themselves products of this evolutionary process, shaped to facilitate cooperation under human constraints on time, information, and cognition.
Today, we’re building AI systems that navigate the same landscape of pluralistic and often competing human values and incentives, but under a different set of constraints, and without some of the constraints that shaped human cognition.
Our goal is to bring these three fields together, asking how insights from millennia of cultural evolution and from the cognitive science of moral judgment can be applied to the challenge of AI alignment.
To offer a glimpse of the questions and approaches you would be working on, consider: how do the mechanisms of cooperation operate at scale, how are they made concrete in moral norms, and can analogous mechanisms be engineered into multi-agent AI systems? You would address these questions by building and testing computational models; developing multi-agent simulations in which agents, aligned to different users’ values, must coordinate, negotiate, and resolve conflicts; and designing and testing methods for aligning individual AI agents to user values in ways that reflect how humans transmit those values and align to one another. These synchronic models of moral cognition will be complemented by diachronic models of how moral consensus emerges and expands over cultural evolutionary time. Human values are themselves diverse and continually evolving: how might they coevolve with AI agents?
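To give a purely illustrative flavor of the diachronic side of this work, below is a minimal sketch of payoff-biased norm transmission, a standard cultural-evolution mechanism, in which consensus on a shared norm can emerge from an initially pluralistic population. This is a hypothetical toy model, not the labs’ actual code; every name and parameter in it is an assumption.

# A hypothetical toy model (illustrative assumptions throughout): agents
# holding different "norms" earn payoffs by coordinating with partners,
# and successful norms spread via payoff-biased social learning.
import random

N_AGENTS = 100       # population size (assumption)
N_NORMS = 4          # number of competing norms (assumption)
GENERATIONS = 200    # cultural-evolutionary time steps (assumption)
COPY_PROB = 0.1      # chance of copying a more successful peer (assumption)

def coordination_payoff(a, b):
    # Agents earn 1 when their norms match, 0 otherwise.
    return 1.0 if a == b else 0.0

def run(seed=0):
    random.seed(seed)
    # Start pluralistic: each agent draws a norm at random.
    norms = [random.randrange(N_NORMS) for _ in range(N_AGENTS)]
    for _ in range(GENERATIONS):
        # Each agent plays one round with a random partner.
        payoffs = [0.0] * N_AGENTS
        for i in range(N_AGENTS):
            j = random.choice([k for k in range(N_AGENTS) if k != i])
            payoffs[i] += coordination_payoff(norms[i], norms[j])
        # Payoff-biased transmission: sometimes copy a more successful model.
        new_norms = norms[:]
        for i in range(N_AGENTS):
            model = random.randrange(N_AGENTS)
            if payoffs[model] > payoffs[i] and random.random() < COPY_PROB:
                new_norms[i] = norms[model]
        norms = new_norms
    # Degree of moral consensus: share of the most common norm.
    dominant = max(set(norms), key=norms.count)
    print(f"norm {dominant} held by {norms.count(dominant) / N_AGENTS:.0%} of agents")

if __name__ == "__main__":
    run()

Under these assumptions a single norm tends to spread through the population within a few hundred generations; the research questions above concern how such dynamics play out in far richer settings, including populations that contain negotiating AI agents.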
This is a collaboration between Levine’s lab (computational moral cognition, AI safety) and Muthukrishna’s lab (cultural evolution, cooperation, cross-cultural and historical variation), and may extend to their respective networks both within and outside academia. The position is housed in the NYU Psychology Department, bridging social and cognitive psychology, with natural links to philosophy, economics, and computer science.
This position is based in New York and the selected candidate will be expected to work onsite as of their effective start date.
In compliance with NYC’s Pay Transparency Act, the annual base salary range for this position is $62,500-$90,000. New York University considers factors such as (but not limited to) the specific grant funding and the terms of the research grant when extending an offer.
You have a PhD (or are close to completing one) in computer science, computational cognitive science, or another quantitative science, and a genuine interest in working across disciplinary boundaries. We’re especially excited about candidates with backgrounds or serious interest in one or more of: computational modeling of social learning, norms, or moral cognition; cultural evolution and gene-culture coevolution; evolutionary game theory and the evolution of cooperation; mechanism design and institutional design; cooperative AI and multi-agent systems; the study and development of large language models; cross-cultural psychology and large-scale behavioral data; philosophy of morality or social contract theory.
You don’t need all of these disciplinary backgrounds, but you should be someone who finds it natural to move between individual cognition, population dynamics, and engineered systems.
Please include a CV, a brief research statement describing how your background connects to the themes above, and the names of two references.