• France is betting on collaboration between research clusters, rather than a single national champion.
• Energy use is shaping both research priorities and public investment choices.
• For Europe, the trade-off is now explicit: protecting core values while building the scale needed for wide adoption.
• At a pre-summit conference in Paris, researchers say the next step is not “more AI” but better proof: reliability, validation, and clear limits of applicability.
On February 9, 2026, a cross-institution conference at the Collège de France took stock of what has changed “one year after the Paris AI summit”, and what might carry into the next global meeting in New Delhi later this month. Eight top researchers, three French ministers and around 200 attendees wrangled with issues that will be at the heart of the New Delhi Summit kicking off on February 19. The tone was diagnostic. Speakers returned to the same idea: the next phase of AI is defined less by headlines about new models, and more by constraints, infrastructure, and legitimacy.
Philippe Baptiste, France’s Minister of Higher Education, Research and Space, described to Dare what he called “a growing awareness of what is at stake worldwide”: “The government has invested heavily in research, in particular clusters like Hi! Paris and Pr(AI)rie.” But Baptiste also argued that France cannot simply mirror the dominant path of “hard and pure generative AI”: “We’ll probably hit a wall at one point or the other as we compete against the great AI powers like the USA or China,” he said. For him, the energy constraint forces a shift: “We simply can’t continue with the models we have today, with data centers dominating our space. It’s simply not realistic.” The policy implication, in his view, is to invest in research paths that are both competitive and energy-feasible.
Collaboration as infrastructure, not branding
The conference’s opening message was that scale is built by coordination. Université Paris Dauphine-PSL President El Mouhoub Mouhoud described a deliberate choice to avoid fragmentation across the Paris research ecosystem. “We prefer ‘co-petition’ to competition,” he confided after the conference. “That is, we cooperate. That is our strength.” He summarized the goal in one line: “We must not be segmented, we must not be fragmented.”
“Frugal” generative AI, and why constraints matter
If the political framing is energy and compute, the scientific question becomes: can performance improve without simply multiplying data and parameters?
Vicky Kalogeiton, a professor at École Polytechnique and Hi! Paris Chair, works on visual generative AI. She summarized the dominant recipe plainly: “We collect a lot of data, we have a lot of models, we train them, and we have great results.” Her pitch was that this is not the only path. “We can have great results with fewer data and fewer compute and fewer parameters and models,” she said.
Two motivations stand out for Kalogeiton. The first is deployment. In robotics or other “on-device” settings, she noted, “the model cannot be very, very big right now.” The second is traceability, especially when copyright and privacy debates intensify. Smaller, better understood datasets make it easier to answer questions about provenance: “We want to know exactly what our models have seen in order to produce an output so that we don’t face copyright issues or privacy violation issues.”
Kalogeiton shared a concrete illustration of why “frugal” visual generative AI matters for traceability. She imagined a small company training a text-to-image model on data it fully controls, so it can generate a new line of products without legal uncertainty. “We can now train models where we know exactly what data they have seen, which allows us to avoid copyright issues and privacy violations.”
Kalogeiton, a former research fellow at the University of Oxford, also targets a familiar obstacle in her recent work: real-world data is noisy. In Don’t Drop Your Samples! Coherence-Aware Training Benefits Conditional Diffusion (CVPR 2024), she and her co-authors propose a training approach for conditional diffusion models that can use unreliable conditioning information instead of discarding data. Asked what she would tell decision-makers in New Delhi, Kalogeiton was direct. “Compute is a key. More compute can help a lot.” But she also pointed to upstream choices: “Data is a big thing and how we collect data, how we use data, what evaluations will we do with data, that’s also an important part.”
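To make the idea tangible, here is a minimal, hypothetical PyTorch sketch of coherence-aware conditioning. It is not the authors’ code or architecture; every module name, dimension and the simplified noise step below are illustrative assumptions. The structural point is the one the paper’s title describes: a per-sample coherence score is embedded and fed to the denoiser as extra conditioning, so unreliable image-caption pairs still inform training instead of being dropped.

# Toy sketch (not the paper's implementation): keep noisy image-caption
# pairs and expose their unreliability to the model as extra conditioning.
# All names, sizes and the simplified noise step are hypothetical.
import torch
import torch.nn as nn

class CoherenceAwareDenoiser(nn.Module):
    def __init__(self, dim=64, cond_dim=32):
        super().__init__()
        # Embed the scalar coherence score alongside the text condition.
        self.coherence_embed = nn.Linear(1, cond_dim)
        self.net = nn.Sequential(
            nn.Linear(dim + 2 * cond_dim, 128),
            nn.SiLU(),
            nn.Linear(128, dim),
        )

    def forward(self, x_noisy, text_cond, coherence):
        c = self.coherence_embed(coherence.unsqueeze(-1))
        return self.net(torch.cat([x_noisy, text_cond, c], dim=-1))

# One simplified denoising-style training step on random stand-in data.
model = CoherenceAwareDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x0 = torch.randn(16, 64)        # clean latents (stand-in for images)
text = torch.randn(16, 32)      # caption embeddings, possibly unreliable
coherence = torch.rand(16)      # e.g. an image-text alignment score in [0, 1]
noise = torch.randn_like(x0)
x_noisy = x0 + noise            # real noise schedules are omitted for brevity

pred = model(x_noisy, text, coherence)
loss = ((pred - noise) ** 2).mean()  # standard epsilon-prediction objective
loss.backward()
opt.step()

At generation time, one could set the coherence input to its maximum so the model samples as if conditioned on fully reliable captions; that design choice is also an assumption here, not a claim about the paper.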
AI for science: what counts as proof?
The prolific researcher Tony Lelièvre is a professor of applied mathematics at École des Ponts and Institut Polytechnique de Paris. He approaches AI as a tool inside scientific simulation. In molecular dynamics, machine learning is attractive because it can help with problems that are too slow or too high-dimensional for conventional methods. After the conference, he listed three areas where ML is being explored: 1) “the ability of neural networks to approximate high-dimensional functions,” 2) “the ability of neural nets to uncover latent structures in data,” and 3) “the power of generative models, which could be used to get better sampling.” He situates molecular dynamics as a workhorse used “in chemistry to study chemical reactions, in biology to study conformations of proteins - [how] a drug interact[s] with proteins… [and] in material science.” In this way, improvements in modeling and sampling can catalyze drug discovery and materials development. When Lelièvre explains the “assets” of machine learning, he points to generative models as a way to get “better sampling,” and gives a concrete research direction: “We’re working on invertible neural networks [and] diffusion models… In molecular dynamics, you need to get samples… And so, these are new ways to get samples.”
But Lelièvre framed this as a research frontier, not a solved engineering task. In molecular dynamics, he said, “we know what we should sample, according to the Boltzmann-Gibbs measures.” The question is whether new generative tools can sample it efficiently, without drifting away from the target distribution. For Lelièvre, that is why validation matters. Beyond performance, he emphasized “the quality, the accuracy, and maybe the range of applications.” In scientific settings, it is not enough to get good outputs on one dataset. You need to know when a method stops being reliable, and why.
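For readers outside the field, the target Lelièvre names can be written down. The LaTeX fragment below states the Boltzmann-Gibbs measure, a standard textbook definition (the notation is ours, added for illustration, not taken from his talk): configurations x with potential energy V(x) are weighted by their energy at inverse temperature β.

% Standard definition (our notation, added for illustration):
% configurations x, potential energy V(x), inverse temperature
% \beta = 1/(k_B T), and normalizing constant Z.
\[
  \mu(\mathrm{d}x) \;=\; Z^{-1}\, e^{-\beta V(x)}\,\mathrm{d}x,
  \qquad
  Z \;=\; \int e^{-\beta V(x)}\,\mathrm{d}x .
\]
% A generative sampler is valid for molecular dynamics only if its
% samples follow \mu, i.e. empirical averages of an observable
% \varphi converge to \(\int \varphi(x)\,\mu(\mathrm{d}x)\).

The formula makes his criterion concrete: a neural sampler is useful here only if its outputs demonstrably follow this measure, which is what “drifting away from the target distribution” means in practice.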
This focus aligns with his work on sampling and free energy computation. In particular, he co-authored, with Mathias Rousset and Gabriel Stoltz, Free Energy Computations: A Mathematical Perspective (Imperial College Press, 2010), which highlights such sampling problems. “Machine learning techniques shed new light on sampling problems that remain challenges for molecular simulation,” he concluded.
Lelièvre also offered an ecosystem diagnosis. In AI, he confided, “many things are done in private companies… Companies are hiring our students, but it seems more difficult to set up common research projects with them than in more traditional and well-established fields in scientific computing such as fluid mechanics, or computational chemistry.” He added: “There is little feedback for us.”

Europe’s policy dilemma, backed by evidence
The author of over 100 articles, Lelièvre also offers a widely recognizable reference point to help non-specialists picture the kind of scientific breakthrough people hope for: “I highlighted the example of AlphaFold and more recently the advent of machine-learning interatomic potentials.” The key caution he adds is that scientific use cases are often “small data” and require reliability, so the challenge is not just using AI, but validating it in contexts where you already know what “correct” sampling should look like.
Why AI summits matter
Antonin Bergeaud, a professor in the Economics and Decision-making department of HEC Paris, focused on diffusion. AI summits matter, he argued, because AI will involve “a lot of public investment” and “a radical change in public policies.” But he also stressed the limits of certainty. On the future of white-collar work, he said, “we do not have the answer.”
During the high-profile roundtable ending the researchers’ afternoon of exposés, Bergeaud described Europe’s trade-off in direct terms. He called the European model “very valuable” because Europe is “almost the only place in the world [that] still seems to care about the environment [and] the security of data, the security of our privacy lives.” “At the same time,” he said, “it’s very costly economically,” because firms face “this tsunami of regulations … in Europe, which they don’t have to face in the US or in China.” His conclusion was blunt: “We have to make a choice,” because “we can’t have on the one hand the growth rate of the US or China and on the other, the preservation of those values that we care about.”
Bergeaud did not argue for abandoning those priorities. “We should keep them because they define where we are,” he said. But he argued for “a middle ground” where Europe “deregulates a little bit in some directions” to gain leverage for “growth and diffusion of AI,” and to “create our own economic champions”.
This connects to evidence Bergeaud has helped produce on how regulation shapes innovation incentives. In The Impact of Regulation on Innovation (American Economic Review, 2023), Philippe Aghion, Antonin Bergeaud and John Van Reenen implement a model using French firm-level panel data and a sharp increase in labor-regulation burden at the 50-employee threshold. The setting is not AI, but the mechanism is relevant to an AI industrial strategy: rules shape firm size, incentives, and innovation decisions.
Bergeaud’s second governance point was about direction. “The market is spontaneously going to invest into the direction that maximizes profits,” he said as the conference came to a close, “and that’s not necessarily the direction that maximizes welfare.” That implies a role for the state, not only to prevent harm, but to steer incentives toward social value.
What changed in a year, and what New Delhi may test
Across these interviews, the “one year later” shift appears clear. The debate has become more concrete, and more constrained. Baptiste’s message was that energy is a hard limit. Kalogeiton’s message was that efficiency and traceability can be first-class research goals. Lelièvre’s analysis was that in scientific AI, the key question is proof: quality, accuracy, and clear limits of applicability. Bergeaud’s message was that Europe must make its trade-offs explicit, and design incentives that speed up diffusion without abandoning core safeguards.
The India AI Impact Summit in New Delhi on February 19-20, 2026, will be a useful test of whether this practical framing becomes global common sense. The question the Collège de France conference kept circling is likely to travel well: can AI scale in ways that are energy-feasible, scientifically reliable, and socially legitimate?
Further reading
India AI Impact Summit 2026 (New Delhi): https://impact.indiaai.gov.in/about-summit
The Impact of Regulation on Innovation (Aghion, Bergeaud, Van Reenen, American Economic Review, 2023): https://researchonline.lse.ac.uk/id/eprint/120206/2/Manuscript_002_.pdf
Don’t Drop Your Samples! Coherence-Aware Training Benefits Conditional Diffusion (Dufour, Besnier, Kalogeiton, Picard, CVPR 2024): https://openaccess.thecvf.com/content/CVPR2024/html/Dufour_Dont_Drop_Your_Samples_Coherence-aware_Training_Benefits_Conditional_Diffusion_CVPR_2024_paper.html
Smoothed Biasing Forces Yield Unbiased Free Energies with the Extended-System Adaptive Biasing Force Method (Lesage, Lelièvre, Stoltz, Hénin, Journal of Physical Chemistry B, 2017): https://pubs.acs.org/doi/pdf/10.1021/acs.jpcb.6b10055