The ‘fatal conceit’ that central planning can solve complex social problems arises in a new form.
By Gary Saul Morson and Julio M. Ottino
It keeps happening—some shiny new idea or technology promises to solve all our problems. Give power to experts to arrange affairs “scientifically,” and poverty, oppression, disease, war and all human ills will disappear. Today, we are asked to trust artificial intelligence.
The International Monetary Fund promises that “AI can enhance democratic institutions by ensuring citizens’ voices are truly heard.” Power wielded by a few experts can enhance democracy? Isn’t that what the early 20th-century Progressive movement promised? For that matter, isn’t that the thinking behind Soviet “scientific socialism”?
Google researchers recently unveiled their “Habermas Machine,” an AI system designed to help people find common ground on divisive issues such as Brexit and climate change. No need for plebiscites, which may yield the wrong answers, or debates, which muddy the issues. Politics, in the sense of compromise and give-and-take, is old-fashioned. Named after Jürgen Habermas, the German philosopher who championed rational discourse, the Habermas Machine assumes that our deepest conflicts stem primarily from communication failures rather than genuinely incompatible values.
This optimism reflects what we might call “techno-solutionism”: the belief that complex social problems can be solved by some algorithm, with minimal regard for unintended consequences or unexpected emergent properties. Some tech leaders have a quasi-religious faith that they can, as the book of Revelation promises, “make all things new.” Past innovations may have been imperfect or even harmful, but now, as a character in the recent film “Mountainhead” declares, “The antidote to bad tech is good tech!”
According to the editor of Science magazine, AI generates statements that are “more clear, logical, and informative without alienating minority perspectives.” Who needs diplomacy? Do we really need a Federal Reserve or 6-3 decisions on the Supreme Court? If humans fail to solve conflict in the Middle East, why not give ChatGPT a try?
This enthusiasm reflects a dangerous misunderstanding of complexity itself. We are witnessing the emergence of a new version of the old central-planning fallacy—the belief that complex social systems can be optimized through technological intervention, much as software systems are engineered.
To be sure, technological achievement has been accelerating. Facebook took 10 months to reach a million users in 2004; ChatGPT achieved this milestone in five days in 2022. Yet our record of anticipating the social consequences of new technologies remains abysmal. Once we were told that the internet would make authoritarianism obsolete. No one foresaw internet mobs intimidating people out of exchanging opinions. The internet itself serves as a sobering reminder of how transformative technologies can generate effects their creators never imagined.
Soviet planners were confident in their ability to organize society rationally. Instead of the waste and unemployment of capitalist countries, experts would allot resources selflessly and efficiently. But there are limits to what any group of experts can know, and central planning is an inferior way of coping with those limits. Vladimir Lenin and his successors failed catastrophically because, as Friedrich Hayek observed, knowledge is inherently decentralized and dispersed throughout society. The emergent properties of complex systems simply can’t be captured in centralized models, no matter how sophisticated.
Today’s tech leaders risk falling into the same trap. The assumption that human behavior and social systems can be engineered like software reflects the overconfidence in prediction and control that inspired five-year plans. It isn’t that nothing is accomplished. The Soviets succeeded, albeit at unspeakable human and environmental cost, in industrializing their society and expanding their empire.
In much the same way, modern AI systems are, within bounds, successful. Large language models process vast amounts of information and generate seemingly sophisticated outputs about complex social issues, creating an illusion of comprehensive knowledge that may be even more dangerous than crude economic models. Once again, we are being seduced by utopian promises that underestimate the complexity of human affairs.
Consider the Middle East conflicts, which are precisely the kind of problems that resist technological solutions. They arise from grievances spanning centuries, multiple overlapping ethnicities and identities, theological claims to sacred sites, and the belief that the other side shouldn’t exist. No Habermas Machine can resolve such hostilities. These aren’t communication problems amenable to clearer AI-generated statements; some are zero-sum conflicts rooted in incompatible worldviews. No amount of algorithmic optimization can bridge gaps where compromise is perceived as betrayal or where the stakes involve survival itself.
Hayek called this “the fatal conceit”—the assumption that central authority can gather and use all relevant knowledge. Just as Soviet planners couldn’t capture the distributed knowledge embedded in economic decisions, today’s AI systems can’t aggregate and optimize all relevant social knowledge. Human behavior is too complex. Cultural context is too important and can’t be formalized.
This isn’t an argument against AI, but rather for humility about its limits. AI works best as a tool that enhances rather than replaces human judgment. It can help us process information, identify patterns and generate options. But it can’t substitute for the irreducibly human work of navigating competing values, managing trade-offs and living with uncertainty.
History suggests that attempts to engineer human complexity away don’t eliminate it. They merely drive it underground, where it erupts in unpredictable and often destructive ways.
(Wall Street Journal, Opinion, June 30, 2025)