
From Algorithms to Ethics: HEC Paris MBA Students Tackle the Boundaries of AI

Love it or hate it, AI is rapidly becoming an unavoidable technology that nearly everyone encounters, from social media to day-to-day work. Still, there is a lingering hesitancy worldwide about the limits of AI. It’s a genie that can’t be put back into the bottle, and perhaps this genie needs boundaries — the question HEC Paris MBA students explored in the course From Algorithms to Ethics: Managing AI Responsibly. The course reflects our commitment to embedding AI learning throughout the MBA journey.


During the HEC Paris MBA’s fall intensive courses week, Professor Michael Impink polled the class: “I think AI development is progressing too quickly: infrastructure (government, policies, legal) and business (hiring, restructuring organizations) can’t keep up.”

Most of the 50-plus students raised their hands in agreement.

“This is why you’re taking my class,” Michael said. “You want to know what’s going on, because you think AI is moving too fast. It’s hard to talk about AI now without talking about the ethical issues.”

AI is part of our daily lives, so understanding it isn’t optional, said Soha Abou Ibrahim, MBA ’26. She said the course was an engaging way to learn how AI is developing across industries, countries and economies, and especially its ethical dimension.

“Sometimes I believe we can’t see the difference between what a human or a machine is generating,” she said. “I used to be a professor and now I’m afraid someone can say one day, ‘I published this,’ but it’s not the case. For me this is one of those things we have to take into consideration and look at limitations.”

Every industry requires some level of checks and balances. As managers begin to integrate AI tools into their everyday work projects, it’s important to understand the limitations of AI for prediction and decision-making, especially when using data about people, Michael said. For this reason, the course examines the macro-level impact of AI on economies and professions, risks to consumers, bias and fairness issues, data representation, and the various frameworks for remedying these issues.

For example, one of the cases discussed was the image results users see when searching Google for “CEO.” Ten years ago, those results primarily showed white men, because a majority of internet sources are reports about male CEOs. Students debated whether the results should reflect 11 percent female CEOs, which is what sources reported, or whether Google should change the algorithm to reflect the actual number, 27 percent.

“You have this question of who is responsible for what,” said Rafael Bonkowski, MBA ’27. “Ethics is really gray, and it’s less of a discussion of this is right, this is wrong. The question becomes what is the role of Google in this case, is it a question of giving back what people are talking about or reflecting the world around you, or is it a responsibility if your daughter Googles CEO and sees only men? Is that the responsibility of Google? When we work on products with AI embedded, that obviously affects the world around us, whether it’s businesses or consumers, people use these products, and it shapes their reality. It shapes how people perceive the world.”

One solution is human-in-the-loop, commonly used in finance and healthcare: a human is paired with the AI to provide input at multiple points in the process, Michael explained.

From hands-on experiences to conceptual understanding, AI is now embedded throughout the MBA journey. In one standout example, students work alongside a robotic humanoid AI agent to iterate and validate startup ideas — contrasting AI-only with human-only innovation development. This unique experience deepens students' understanding of AI's strengths and limitations in entrepreneurial contexts.

Each AI process is different, and each approach needs to be different, Michael added. It’s about knowing when one approach is better than another. Big firms find it much easier to monitor their AI in production, while small firms lack many of the resources needed and so don’t monitor as closely. That changes your approach as a manager.

Michael said he wants students to understand how they can be a positive force and make AI processes fairer and less biased — that they have some agency, even though AI is a black box. Generative AI, for example, is just better at doing something we were already doing, he said, but there are things humans still do better: empathy, creativity and innovation.

A majority of people thought AI was revolutionary, but Michael’s course reminded the class that the world has already faced similar moments with past technologies, said Anass Hliba, MBA ’26. “We should take this with a grain of salt: in the past, some innovations created huge enthusiasm but didn’t ultimately change everything, even if they reshaped many things. For sure, we need to try to understand how AI can evolve and keep ethical questions at the center, but we shouldn’t panic about its impact.”

Valentina Tomic Pascal was one of the students who agreed AI was moving too fast. By the end of the three-day intensive, her concern remained. “It was comforting to see other [students] were just as fearful but more optimistic than myself,” Valentina, MBA ’27, said. “No one said we shouldn’t have regulations, but some said it hinders the speed of innovation in the EU, but I’d rather it be slower if we put safeguards around data, people’s information and their lives.”

The question becomes how to use it, Rafael added. Leaders have a responsibility to sit with their team and say these are the resources we have, here is how we encourage use of AI, and these are the boundaries. “I’m optimistic we’re getting there. This course opened up the discussion of what ethics means in AI.”