
AI on the Fault Line: Scholars and Companies Tackle New World

With the EU phasing in transparency rules and prohibitions over 2025-2026, major players discuss logs, liability - and what counts as proof.

Professor Daniel Sokol, one of several leading academics at the HEC Executive Forum © Daniel Brown

Key takeaways

  • The AI Act is no longer theoretical. With first prohibitions already in force and GPAI transparency rules due in August 2025, companies are shifting from “principles” to operational compliance.
  • AI is reshaping management, finance, manufacturing, operations, and sustainability, according to speakers at the HEC Executive Forum on Digital Innovation & Analytics.
  • Europe’s rules framed the debate, but speakers added a global edge. Technologist Francesca Bria has warned of a U.S. “authoritarian tech stack” challenging democratic precepts.
  • HEC research proposes a new safety playbook, and one professor argued algorithms should be evaluated “like prescription drugs.”
  • As firms brace for the Act’s auditability demands, field evidence from a large savings bank in Germany suggests customers are far likelier to follow AI-generated advice when a human advisor oversees it.

Inside a modest auditorium in central Paris, the conversations were all about logs, liability and proof. As Europe edges into the world’s first comprehensive AI law, October’s third HEC Executive Forum on Digital Innovation & Analytics dived deep into the EU’s artificial-intelligence law - and beyond. As the legislation moves from principle to practice, academic heavyweights and corporate operators compared notes on what, exactly, will be legal next year… and what will still make money.

Timely Conference

The one-day meeting, co-hosted with USC and IMT-BS at the HEC Alumni headquarters, gathered academics and corporate leaders to stress-test the EU’s new rules against day-to-day realities in finance, retail, logistics and media. But it also looked beyond: in a much-circulated essay this month, technologist Francesca Bria warned of a U.S. “authoritarian tech stack” - a vertically integrated complex linking cloud, AI, payments, drones, and satellites, with private platforms taking on quasi-sovereign roles. That specter hovered over the Paris conference: if power aggregates in the stack, do democracies lose leverage? And do companies become collateral?

The conference couldn’t have been timelier: Europe’s AI Act is already biting as “unacceptable risk” practices are outlawed, and the Commission has rebuffed calls from Big Tech and European champions to delay upcoming deadlines. Transparency duties for general-purpose models are kicking in; higher-risk regimes follow. The message from Brussels: no pause is coming. Companies must adapt.

Against that timeline, the conference asked tougher questions than “What can AI do?”: what should be deployed, who is accountable, and how can major players prove it works without causing harm?

Who Owns the Stack Now?

In the morning, Lynn Wu, a business professor at the Wharton School of the University of Pennsylvania, mapped an “AI stack” that has evolved from loose layers into a more vertically integrated pipeline - from chips and cloud to models, fine-tuned data and consumer apps. After her talk, Wu argued the shift is both economic and political: compute is scarce; the state of the art is expensive; and open-weights without open data or replicable code won’t satisfy scientific scrutiny, let alone regulators. “Only a few players have the money and resources,” she warned, which “somewhat inhibits open science.” 

Wu described the industrial structure underneath these choices. “Expect a turn to smaller, purpose-built models, distillation, and edge inference.” These systems do more with less and are easier to defend to regulators (and CFOs). “And look East: China’s ecosystem scaled fast; Europe has strong rules, few players.” 

Competition on a Razor’s Edge

That transition is not just technical; it’s political economy. If platform concentration is the upstream fact that animates fears of an “authoritarian stack,” the downstream fact is that right-sized tooling lowers barriers for mid-market firms and public agencies that don’t own compute. The policy question becomes whether conduct remedies - access terms, API non-discrimination, portability - can keep the ladder open while letting integration deliver efficiencies. USC law professor Daniel Sokol argued for exactly that focus. 

If Wu drew the architecture, Sokol supplied the market dynamics. In AI, he argued, two truths coexist: at the stack level, consolidation gives incumbents strategic leverage; at the application edge, lower costs let small firms do what was once unaffordable, creating a bloom of new entrants in games, music and productivity software. “It’s doing both at the same time,” the Californian said in a one-on-one exchange with HEC Paris. “It’s democratizing and entrenching. That puts regulators and general counsels on what you could call a ‘gray zone’, where bets on product direction collide with rules that are still being written.” 

U.S.-Europe Convergence or Divergence?

Europe’s instinct to regulate early is understandable, Sokol suggested, but ex-ante rules can mis-scope risks when the technology leaps ahead. “Europe’s first AI Act drafts didn’t even imagine today’s generative systems.” The U.S., for its part, relies on a patchwork: sector supervisors, state bills and “private ordering” via contracts between companies. Whether the transatlantic approaches converge toward a race to the top or a race to the bottom remains an open question.

For companies, his practical counsel was unromantic: build cross-functional governance, test for bias and fragility, and publish enough to keep customers, regulators and investors confident. “If you hide,” Sokol insisted, “society gets scared and competitors use it against you.” 

Testing algorithms like medicines

One of the forum’s most concrete governance metaphors belonged to Christophe Pérignon, HEC Paris professor of finance and Associate Dean for Research. Many firms vet models only for accuracy; Pérignon wants them vetted for side effects before deployment and throughout their life cycle – a common practice in pharmacology. He proposes equivalence tests, post-market surveillance and recall mechanisms when systems misbehave. The analogy resonates in finance, where model risk is regulated, and in the EU context, where the AI Act expects traceability, logging, and risk management rigor. Indeed, the Act is pushing firms toward exactly this auditable lifecycle. In a climate where private platforms may absorb public functions, rigorous ex-ante and ex-post controls are less red tape than social license.
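To make the pharmacology analogy tangible, here is a minimal sketch - an illustration of the general idea, not Pérignon’s own methodology - of an equivalence test: a two-one-sided test (TOST) checking that a candidate model’s accuracy sits within a tolerated margin of a benchmark’s before it is cleared for production. Margin, alpha, and sample sizes are hypothetical choices.

```python
# Illustrative sketch (not Pérignon's method): a two-one-sided test (TOST)
# asking whether a candidate model's accuracy is statistically "equivalent"
# to a benchmark's within a tolerance margin, before deployment is approved.
from math import sqrt
from statistics import NormalDist


def equivalence_test(acc_new, acc_ref, n_new, n_ref, margin=0.02, alpha=0.05):
    """True if the accuracy gap is shown to lie within +/- margin."""
    # Standard error of the difference between two proportions
    se = sqrt(acc_new * (1 - acc_new) / n_new + acc_ref * (1 - acc_ref) / n_ref)
    diff = acc_new - acc_ref
    z_lower = (diff + margin) / se   # tests H0: diff <= -margin
    z_upper = (diff - margin) / se   # tests H0: diff >= +margin
    crit = NormalDist().inv_cdf(1 - alpha)
    # Equivalence is declared only if BOTH one-sided nulls are rejected
    return z_lower > crit and z_upper < -crit


# A candidate matching the benchmark on large samples passes...
print(equivalence_test(0.910, 0.908, 20000, 20000))  # True
# ...while a clearly degraded one is held back.
print(equivalence_test(0.850, 0.910, 20000, 20000))  # False
```

The asymmetry is the point of the drug analogy: the burden of proof sits with the new model, which is blocked by default until equivalence is demonstrated.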

Pérignon’s recent work and teaching push for auditable, interpretable pipelines that behave under stress, not just on a held-out test set. In Europe’s compliance regime, that mindset is a strategic asset: prove your model is fair, stable and understandable, or prepare to pause it.
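What “traceability and logging” can mean in practice is easy to sketch. The snippet below - hypothetical names throughout, an illustration rather than a compliance tool or anything endorsed by the speakers - wraps any model behind an append-only, hash-chained audit log, so every prediction is recorded with its inputs, output, and model version.

```python
# Minimal illustration (hypothetical, not a compliance tool): wrap a model so
# every prediction is appended to a hash-chained JSON-lines audit log - the
# kind of traceability regime the AI Act expects for higher-risk systems.
import datetime
import hashlib
import json


class AuditedModel:
    def __init__(self, model, version, log_path="audit.jsonl"):
        self.model, self.version, self.log_path = model, version, log_path
        self._prev_hash = "0" * 64  # start of the hash chain

    def predict(self, features):
        output = self.model(features)
        record = {
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model_version": self.version,
            "features": features,
            "output": output,
            "prev": self._prev_hash,  # chaining makes silent edits detectable
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return output


# Usage: any callable model can be wrapped, e.g. a toy scoring rule.
scorer = AuditedModel(lambda x: sum(x) > 1.0, version="credit-v1.3")
print(scorer.predict([0.4, 0.9]))  # True - and the decision is now on record
```

Nothing here is sophisticated; the strategic point from the forum is that such plumbing must exist before a regulator, customer, or court asks for it.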

HEC Professor Christophe Pérignon © Daniel Brown

The Human Still Matters

Where does the human fit in the wider scheme of things? Xitong Li, the forum’s organizer and HEC Paris professor of information technologies, presented field evidence from a partnership with a major German savings bank on AI-generated investment advice. The experiment compared AI-only, human-only, and human-AI collaborative advice. In Li’s account, customers were dramatically more likely to accept recommendations when a bank advisor oversaw the AI output, and the hybrid approach improved investor payoffs versus AI alone. Li cautions against easy generalization (“heterogeneity” across firms and cultures is real), but the direction is clear: human-in-the-loop can add value by improving downstream customer welfare. 

Li also offered a temperature check from three years of running the forum. The pace has quickened; the questions have matured. Corporate appetite for academic insight is growing, especially in Europe, where firms are “ambitious” but still catching up to the U.S. and China in adoption. And for 2026? Don’t predict themes too early, he smiled - except to say that ROI will dominate. CEOs now want “reasonably quick” demonstrations of value. 

“Regulation doesn’t kill innovation - it gives it credibility,” Li confided. “The challenge is turning ethical intent into operational discipline.” 

Co-organizer of the event, HEC professor Xitong Li © Daniel Brown

“We Don’t Allow Black-box Models”

In the boardroom, that discipline is spelled out in policies. Louis Dreyfus Company (LDC) - a leading global merchant and processor of agricultural goods - has formalized responsible AI guidelines that ban “black-box models” in production. “We do not use models for which we do not clearly understand the relationships between model inputs, model operations and model outputs,” explained Alex Rykhva, who heads up LDC’s Data Science & Analytics team, and his colleague Léa Yue Xu, Research & Data Science and Analytics Portfolio Manager. LDC runs AI Labs in France and Singapore to sandbox frontier models away from critical data, learning what’s possible before anything moves to production. 

Rykhva and Xu (who holds a 2020 Master in Management degree from HEC Paris) are driven not only by the opportunities AI provides, but also - and primarily - by the need to protect confidential information (data, algorithms, use cases). That means moving fast to stay ultra-competitive while building a proper audit trail for data and algorithm usage across the Group. “We define and follow best practices, making sure we do everything appropriately for the long run,” notes Rykhva. 

Beauty’s CFO on “ethical agility”

Il Yeon You, CFO of L’Oréal Dermatological Beauty, introduced the concept of “ethical agility”: analytics only matter if they move decisions swiftly and ethically, with clarity about who accesses which signals, for what purpose, and how the results can be explained. Crucially, he stated: “If a model can’t be explained clear and simple, it doesn’t ship,” advocating right-sized models, purpose-limited data, and human checkpoints for high-stakes situations.

Logistics Looks to 2035

Helena Garriga, who joined Körber Supply Chain’s executive board in 2024, brought the operations perspective. The plan: AI-driven automation across warehouses, a push into pharma and health, and a workforce retooled around robotics. E-commerce in the West still trails Asia; climate pressure and geopolitics add a sense of urgency. The conclusion is unapologetically tech-forward: “Soon, warehouses will be fully automated and powered by AI,” she said in her presentation. Asia, notably China, is a growth engine; Körber is realigning its strategy to match.

If Bria’s “authoritarian stack” worries hinge on private control of critical infrastructure, Garriga’s case shows the counter-pressure: enterprise buyers are demanding traceability and safety cases in contracts. Auditability is more a procurement checkbox than a political slogan.

What’s Different in 2025

The step-change isn’t just a legal one. Energy and compute were persistent refrains. Lynn Wu noted that data-center power demand is surging and that chip metrics are shifting to performance per watt - a reminder that the “free” scale of 2023 is over. “Are we reaching a point where energy costs constrain innovation?” the Wharton academic wondered. “This is not like driving a car; we don’t have enough energy, period. Even if you combine traditional oil, gas, nuclear and renewables, we may not have enough to run this machine.” That argument, in turn, strengthens the case for edge inference, distillation, and retrieval-augmented designs that do more with less. 

LDC’s Alex Rykhva, meanwhile, sees a transition from awareness to accountability: initial skepticism among employees about the business outcomes promised by AI has given way to embracing the change. If last year was “let’s try it”, this year it’s “prove it, document it and keep it under control.” 

With that objective in mind, LDC has been turning to some of the fresh minds at HEC Paris: “We like to develop forward-looking projects with students. We currently have two AI-related projects with two HEC student cohorts. One is helping us enhance finance mechanisms. The other is assisting us in defining ways to leverage AI for macroeconomic analyses.”

Rykhva praised the forum’s small-room format for generating immediate, applied value. Last year, one speaker mentioned a model-building tool he’d never heard of. The following Monday, he tried it: “It was fantastic! A quick hack turning into adoption.” Another serendipitous hallway debate on cotton traceability connected LDC to an apparel retailer wrestling with supply-chain transparency. 

Why This Matters Beyond Paris

Europe’s bet is that credible governance is not the enemy of innovation but its license to operate. The Commission has reaffirmed the Act’s cadence and signaled only limited simplifications to reduce burdens on smaller firms. That stance puts pressure on boards to build the plumbing - documentation, logs, red-teaming, incident response - now, not when an authority knocks.

But credibility will also hinge on examples that feel real. That’s where the forum delivered: a trader refusing black boxes; a bank that makes AI advice more persuasive by inserting a person; a systems scholar reminding us that watts and wafers are policy variables; a lawyer insisting that transparency can be strategy.

If there was a single fil conducteur in the daylong event, it was this: the firms that win won’t merely have the best model; they’ll have the clearest evidence that it works, the fastest way to fix it when it doesn’t, and the records to prove both. And for this, collaboration with students and academics remains crucial.
