AI’s exponential progress shows signs of slowing. On the road to AGI, two visions emerge: AI as a normal technology, or AI as a humanlike, superintelligent, and disruptive force.
It is an open secret in the AI industry: for over a year now, frontier models appear to have reached their ceiling. The scaling laws that powered the exponential progress of Large Language Models (LLMs) like GPT-4, and that fueled bold predictions of Artificial General Intelligence (AGI) by 2026 from Sam Altman (OpenAI) and Dario Amodei (Anthropic), have started to show diminishing returns. Inside the labs, a consensus is growing that simply adding more data and compute will not create the “all-knowing digital gods” once promised (TechCrunch). Many respected voices, from Yann LeCun to Michael I. Jordan, have long argued that LLMs will not get us to AGI; instead, progress will require new breakthroughs as the curve of innovation flattens. The disappointment and backlash surrounding the release of GPT-5 have only made this ceiling more visible.
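As a rough illustration (my own sketch, not drawn from the sources reviewed below): the empirical scaling laws of Kaplan et al. (2020) take a power-law form, which makes diminishing returns a built-in feature rather than a surprise.

```latex
% Empirical scaling law (Kaplan et al., 2020): test loss L falls as a
% power law in training compute C, with C_c a fitted constant and an
% exponent alpha_C of roughly 0.05.
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}, \qquad \alpha_C \approx 0.05
```

With an exponent that small, halving the loss requires roughly $2^{1/\alpha_C} \approx 2^{20}$, i.e. about a million times more compute: that is the arithmetic behind the ceiling the labs are now running into.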
I find it particularly fascinating that this turning point has opened a wider debate on what to expect, or fear, from AI in the years ahead. It even arms me with answers to my children’s difficult questions: ‘Mummy, will robots replace humans?’
With reasoning models such as OpenAI’s o1, Claude 3.7 Sonnet, and DeepSeek R1, all built on post-training techniques (reinforcement learning), are we witnessing “the emergence of new scaling laws” (Satya Nadella)? Or merely the “illusion of thinking” (Apple Machine Learning Research), a narrative designed to keep investment flowing into ever-larger compute infrastructure and massive data-center projects? The stakes are high in this winner-takes-all race toward AGI.
What can we expect from AI in the year to come, and what will be the economic and societal impacts? Will half of all entry-level white-collar jobs vanish within the next five years due to AI automation, as Dario Amodei has predicted? Sam Altman, the CEO of OpenAI, foresees AI automating 30–40% of jobs worldwide by 2030. Both leaders envision massive disruption ahead, with the potential for job displacement on an unprecedented scale.
Skeptics counter that there is little empirical evidence of large-scale job loss so far, suggesting instead that AI will automate tasks rather than entire occupations. In their view, the near future lies in smaller, industry-focused agents designed to enhance productivity rather than replace workers outright.
At the heart of the debate are two competing visions of AI’s pace of evolution and adoption. Is AI best understood as a “normal technology” (Arvind Narayanan, Sayash Kapoor), like electricity or the internet: transformative, but gradually integrated into society? Or are we instead entering the era of the biggest disruption in human history, with humanlike superintelligent entities (the “superhuman AIs” of AI 2027) taking over jobs, reshaping economies and societies, and perhaps even threatening human survival?
Press review sources
What if A.I. Doesn’t Get Much Better Than This?
Cal Newport, The New Yorker, August 12, 2025
A computer scientist’s reflection on AI’s “plateau” moment: what it means for expectations, innovation, and human work.
AI Scaling Laws Are Showing Diminishing Returns, Forcing AI Labs to Change Course
Maxwell Zeff, TechCrunch, November 20, 2024
The end of exponential scaling: why AI labs are rethinking strategies after diminishing returns set in.
Faith in God-like Large Language Models Is Waning
The Economist, September 8, 2025
Investor and public enthusiasm cools as LLMs fall short of their promise to become “digital gods.”
AI as Normal Technology
Arvind Narayanan & Sayash Kapoor, Normal Tech, April 15, 2025
A compelling argument for seeing AI not as magic but as a transformative technology with slow, steady adoption.
AI 2027
Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean, ai-2027.com
Speculative scenarios of “superhuman AI” by 2027: will machines surpass us in intelligence and control?