From Predictive AI to Generative AI: Understanding the ChatGPT Breakthrough
In 2025, AI remains a helpful tool – but for how long? When artificial general intelligence becomes available, could Isaac Asimov's I, Robot shift from science fiction to reality? With these questions in mind, we need to take stock of the AI revolution and of its most famous application, ChatGPT. That begins with understanding the often-overlooked workings of the Generative Pre-trained Transformer.
Vincent Fraitot, an associate professor at HEC Paris and author, addressed these issues during a masterclass. He examined the use of AI in business prior to ChatGPT, the actual capabilities of generative models, and their underlying mechanisms. Fraitot also discussed the primary challenges posed by this new wave of AI, including implications for employment, energy consumption, and digital sovereignty.
Before ChatGPT: a brief history of AI in business
Before discussing generative AI, Vincent Fraitot stresses the importance of clear terminology.
The foundations of artificial intelligence date back to the post-war period, specifically the 1950s and 1960s. Although researchers had conceptual frameworks, they lacked the computational power and digitized data necessary for implementation.
A turning point occurred in the early 2000s with the rise of data science, enabling large-scale data collection and analysis. Without digitized data, algorithms could not be trained. Once data and computing power reached sufficient levels, AI development accelerated.
Over the next fifteen years, companies widely adopted machine learning: algorithms capable of learning from data. Their main objective is to predict and optimize, anticipating the future based on the past. In the energy sector, for example, operators can estimate the output of a wind farm from wind forecasts, then adjust the turbine blades to maximize the energy produced.
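The "predict and optimize" pattern can be sketched in a few lines. The wind-speed and power figures below are hypothetical toy data, not measurements from the masterclass; the point is only the workflow of fitting a model to past observations and using it on a forecast.

```python
import numpy as np

# Hypothetical toy data: wind speed (m/s) vs. measured power output (kW)
# for a single turbine. Real projects would use years of sensor data.
wind_speed = np.array([4.0, 6.0, 8.0, 10.0, 12.0])
power_kw = np.array([120.0, 350.0, 780.0, 1400.0, 2100.0])

# "Predict": learn a simple curve from past data...
coeffs = np.polyfit(wind_speed, power_kw, deg=2)
model = np.poly1d(coeffs)

# ..."optimize": use it to anticipate output for a forecast wind speed,
# so blade pitch and grid commitments can be planned in advance.
forecast = 9.0
print(f"Predicted output at {forecast} m/s: {model(forecast):.0f} kW")
```

This is classic machine learning in miniature: the model interpolates within the range of what it has seen, which is exactly the strength (and the limit) discussed later in this article.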
As real-time data volumes grew, existing methods needed reassessment. For example, early autonomous vehicles used basic sensors and limited data, but the addition of high-definition cameras and LIDAR revealed the shortcomings of traditional machine learning, which was too slow and inflexible.
As Fraitot notes, “The algorithm would take four or five seconds to decide to brake... on the road, that's just impossible.”
This led to the adoption of deep learning, which also learns from data but uses artificial neural networks inspired by the human brain. Progress continued until the mid-2010s, when development stalled.
A major breakthrough occurred in 2017 when Google researchers published “Attention Is All You Need”, introducing the Transformer architecture. This approach replaced traditional recurrent mechanisms with attention-based processing, enabling faster training, improved parallelization, and the ability to handle complex sequences. Without Transformers, large language models and ChatGPT would not exist.
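The attention mechanism at the heart of the Transformer can be illustrated with a minimal sketch. This is a simplified toy version of the scaled dot-product attention described in "Attention Is All You Need", with random vectors standing in for learned token embeddings; real models add learned projections, multiple heads, and masking.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a weighted average of the value vectors V,
    with weights derived from the similarity between queries Q and
    keys K. Every token can 'attend' to every other token at once,
    which is what makes Transformers so parallelizable."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise similarity scores
    # Softmax over each row turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy example: 3 tokens, each represented by a 4-dimensional vector
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one attended vector per token
```

Unlike the recurrent networks it replaced, nothing here processes tokens one after another, which is why training could be parallelized so effectively.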
When AI arrived in our offices
For years, artificial intelligence remained limited to specific tasks. This changed with a significant breakthrough.
The ChatGPT breakthrough
Vincent Fraitot dates this breakthrough to the end of 2022, with the arrival of GPT-3.5. Generative AI then became widely accessible, available to anyone at any time for about $20 per month.
The term “GPT” stands for Generative Pre-trained Transformer: “Generative” because the model produces text, “Pre-trained” because it is trained in advance on a vast corpus, and “Transformer” in reference to the architecture introduced in 2017 by Google teams.
Another key term is LLM, or Large Language Model. These models are large in both size and scope, built on massive neural networks with hundreds of billions of parameters. They have been trained on extensive datasets, including web texts, books, and various documents.
How generative AI really works
These models generate content and translate effectively into English, French, and German, but perform less well with languages that have limited data. They can also summarize texts, a capability not originally anticipated. “A side effect”, notes Vincent Fraitot: since the model can produce coherent text, it can also ingest and synthesize it.
Vincent Fraitot clarifies a common misconception: ChatGPT does not retrieve answers from a database or use a hidden search engine. Instead, it learns probabilities from large volumes of text, compressing this knowledge into billions of parameters. When responding, it calculates the most relevant word sequence and may revise its output for clarity.
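The idea of learning probabilities from text, rather than looking answers up, can be shown with a deliberately tiny sketch. This bigram model counts which word follows which in a toy corpus; a real LLM compresses the statistics of trillions of words into billions of parameters, but the underlying principle, predicting a probable continuation, is the same.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; real models train on web text, books, etc.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model)
follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def next_word_probs(word):
    """Return the learned probability of each possible next word."""
    counts = follow[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))
# "cat" comes out as the most probable continuation of "the",
# because that is the pattern the corpus contains
```

No database lookup happens at generation time: the training text is gone, and only the learned probabilities remain. Hallucinations follow naturally, since a fluent but false continuation can be just as probable as a true one.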
This explains why two people may receive different answers to the same question. The first reason is “temperature”: higher settings make the model more creative and variable, while lower settings produce more cautious and predictable responses. High temperatures can increase creativity but may reduce accuracy. Another factor is user history. The system gradually adapts to each user, recognizing context, communication style, and sometimes even assigned roles. As Vincent Fraitot humorously notes, some users identify as “boss,” while others are seen as teachers.
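Temperature has a precise mathematical meaning: it rescales the model's raw scores before they are turned into probabilities. The sketch below uses invented scores for three candidate tokens to show the effect; the mechanism (a temperature-scaled softmax) is standard, even if the numbers are illustrative.

```python
import math
import random

def sample_with_temperature(logits, temperature, seed=0):
    """Turn raw model scores (logits) into probabilities with a
    temperature-scaled softmax, then sample one token index.
    Low temperature -> sharper distribution (predictable);
    high temperature -> flatter distribution (more varied)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    rng = random.Random(seed)
    choice = rng.choices(range(len(logits)), weights=probs)[0]
    return choice, probs

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens
_, cold = sample_with_temperature(logits, temperature=0.5)
_, hot = sample_with_temperature(logits, temperature=2.0)
print(cold[0], hot[0])  # the top token dominates more at low temperature
```

At temperature 0.5 the best-scoring token takes most of the probability mass; at 2.0 the alternatives become much more likely, which is why two users asking the same question can get different answers.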
Responses evolve based on tone, topics, and user feedback.
These mechanisms also explain AI errors. The model does not reason as humans do: it assembles billions of simple rules, sometimes contradictory. Taken individually, each rule lacks intelligence, but together they often produce convincing answers... as long as the question stays within the scope of the training data. Whereas humans can transfer knowledge from one field to another, AI remains largely confined to what it has learned.
The true cost of the revolution
The question is no longer just “what can generative AI do?”, but “how much does it cost, and who really benefits from it?”
The shockwave in employment
Hundreds of millions of jobs could be affected. In some sectors, up to 70% of tasks may be automated, according to McKinsey. This does not signal the end of jobs, but rather their transformation: there will be fewer entry-level roles, more expert positions, and a shrinking middle tier.
A colossal ecological footprint
A single ChatGPT query consumes significantly more energy than a Google search. Generating one email can use as much energy as driving several dozen miles in an electric car. This high energy demand affects the entire supply chain, including electricity for data centers, water for cooling, and large quantities of electronic components. As a result, infrastructure is under significant strain.
The global race for power
Excelling in AI requires three ingredients: vast amounts of data, significant computing power, and large budgets. Only a few countries, notably the United States and China, are leading, while Europe struggles to keep pace. Reduced investment leads to less innovation and growth, creating a vicious cycle.
Web giants in the hot seat
Google and Amazon have built their businesses on capturing user attention. However, if users begin interacting with AI to find information or make purchases, this model will be disrupted. Instead of searching and comparing, users may simply ask and follow AI recommendations.
Despite current enthusiasm, there is a risk of rapid market consolidation. Similar to the dot-com era, we may see mergers, acquisitions, and company failures. The market could soon be dominated by a few players able to absorb the high costs.
A highly political revolution
Ultimately, decisions about investment, priorities, and infrastructure are increasingly made by private companies with resources comparable to those of states, rather than by governments or citizens. The risk is clear: by allowing these “corporate states” to make structural choices, we gradually transfer part of our democratic power to them.
The revolution is underway, but like all major changes, it comes at a cost.