Recent surveys converge on a picture in which AI is now mainstream in student life: in higher education, around 80–90% of students report using generative AI for their studies. The “AI in education” 2026 report, for instance, finds that about 86% of students now use AI for their studies, with rapid growth between 2024 and 2025.
Explaining concepts and tutoring, summarizing and note-making, searching for information, generating ideas, editing portions of text, or even writing whole essays: the dominant use cases show how deeply generative AI tools are already embedded in students’ routines.
This creates a serious challenge for educators and students alike: how can they benefit from this technology without letting AI do the work for them? How can we ensure that students ask the right questions, check evidence, interpret results, and continue to develop their reasoning, curiosity, and critical thinking skills?
At the same time, employers increasingly expect graduates to work with AI fluently. The question is no longer whether students will use these tools, but how education should change around them.
That tension surfaced during the Hi!ckathon, Hi! PARIS’s flagship annual AI & Data Science hackathon, held Nov. 28–Dec. 1, 2025, where student teams used AI to work with education data and prototype learning tools, only to run into a deeper challenge: AI can accelerate work, but it can’t decide what matters.
Each year at the Hi!ckathon, student teams build prototypes from real datasets and pitch them to a jury.
- AI can speed up analysis and execution, but it doesn’t replace interpretation: students still need to define questions, test assumptions, and judge conclusions.
- “Personalized learning” doesn’t have to mean isolated learning: AI can free time for collective work, discussion, projects, and peer critique.
- Generative AI weakens some traditional assessment signals, pushing schools to rethink what counts as proof of learning.
Teach students to master AI while sharpening their analytical skills
One Hi!ckathon challenge asked teams to work with PISA data, one of the most widely used international assessments in education. For many participants, the dataset was unfamiliar. That mattered. Before building anything, teams had to understand what PISA measures, how it is constructed, and why its results recur in education debates.
Teams had to combine two layers of information:
- performance results in reading, math, and science (often focusing on mathematics), and
- questionnaire data capturing socio-economic and cultural context: access to books, parents’ professions, well-being, bullying, and access to digital tools.
Scores alone did not tell the story. Context changed interpretation.
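To make that two-layer structure concrete, here is a minimal pandas sketch of the kind of join the teams performed. The tables and variable names are invented stand-ins, not the actual PISA schema.

```python
import pandas as pd

# Toy stand-ins for the two PISA layers; real PISA files use
# different variable names, this only mirrors the structure.
scores = pd.DataFrame({
    "student_id": [1, 2, 3],
    "math_score": [512.0, 0.0, 430.5],
})
context = pd.DataFrame({
    "student_id": [1, 2, 3],
    "books_at_home": ["26-100", "0-10", "101-200"],
    "bullied": [False, True, False],
})

# Layer the questionnaire context onto the performance results.
df = scores.merge(context, on="student_id", how="left")
print(df)
```

Nothing in the merge is sophisticated; the analytical work starts afterward, when the joined table has to be read against what each variable actually measures.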
Teams encountered results that demanded careful judgment. One finding surprised many: a meaningful share of students recorded a score of zero because they did not answer the questions at all. That does not point neatly to academic ability. It raises questions about engagement and motivation, factors a dataset can hint at but not fully explain.
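As a sketch under the same invented schema, that distinction can be made explicit before any modeling, so non-response is not silently averaged into ability estimates:

```python
import pandas as pd

# Toy data standing in for the merged table above (invented values).
df = pd.DataFrame({
    "student_id": [1, 2, 3, 4],
    "math_score": [512.0, 0.0, 430.5, 0.0],
})

# A zero here may mean the student never answered, so flag possible
# non-response separately instead of treating it as measured ability.
df["no_answer"] = df["math_score"] == 0
print(f"Possible non-response: {df['no_answer'].mean():.0%} of students")
```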
AI tools helped teams work through the dataset faster. The tools did not tell students what the results meant, which indicators deserved attention, or where the limits of the data lay. That part remained human work: slowing down, debating interpretations, and deciding which conclusions were defensible.
If AI supports personalized learning, can it also strengthen collective learning?
Much of AI’s promise in education focuses on individual gains: adaptive quizzes, tutoring tools, and assistants that respond at a learner’s pace. Personalization at scale is real, and it can help students learn more efficiently.
But the Hi!ckathon showed another side of the story. When teams used AI for routine tasks or early-stage analysis, they often gained time for the part that education struggles to scale: human interaction. Discussion, project work, and collaborative reasoning moved to the center.
Rather than replacing group work, AI sometimes made it more intense. Students debated what the data actually showed. They challenged each other’s assumptions. They refined conclusions collectively.
Several teams also explored conversational systems meant to detect signals tied to concentration, motivation, or emotional state. Chatbots and sentiment analysis appeared frequently. The tools themselves were not the main story; the concerns behind them were. Across teams, concentration came up repeatedly, followed closely by anxiety: issues students recognized in their own academic lives.
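None of those prototypes is public, but the sentiment-analysis building block many teams reached for can be illustrated in a few lines. This sketch uses the default Hugging Face transformers sentiment pipeline; the messages are invented, and a general-purpose English model is only a crude proxy for classroom signals like concentration or anxiety.

```python
from transformers import pipeline

# General-purpose sentiment classifier (downloads a default model);
# not tuned for educational signals, shown purely as an illustration.
classify = pipeline("sentiment-analysis")

messages = [
    "I don't get this exercise at all, I give up.",
    "Oh, that explanation finally makes sense!",
]

for msg, result in zip(messages, classify(messages)):
    print(f"{result['label']:>8} ({result['score']:.2f})  {msg}")
```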
Team diversity amplified this dynamic. Students came from different countries and education systems. Differences in experience shaped how they interpreted stress, engagement, and support. Collaboration was not just a method. It became part of the inquiry.
The broader question is not whether AI can personalize learning. It is whether schools can design AI use to protect the social dimension of learning: argument, collective standards, and shared judgment.
Does AI put traditional higher education models at risk?
Generative AI is already challenging long-standing academic routines. Take-home writing assignments, for example, are harder to use as clean signals of understanding when students can outsource drafting and structure. That creates discomfort, but it also forces clarity about what educators are truly assessing.
At the same time, AI opens practical possibilities: faster feedback, tools that help identify strengths and weaknesses, and learning paths that adapt to student needs. During the Hi!ckathon, this took the form of ideas such as interactive platforms, engagement tracking, short video formats inspired by social media codes, and conversational systems designed to sustain attention over time.
None of these prototypes offer a single model for the future of education, but they do sharpen the decision institutions now face.
What should education stop treating as proof of learning, and what should it protect more firmly than ever?
Three ways educators can integrate AI effectively right now
- Redesign assessment around in-class reasoning, oral defense, and process, not just polished outputs.
- Teach “AI critique” as a skill: verification, bias checks, and assumption testing.
- Use AI to automate routine steps, so class time goes to discussion, collaboration, and judgment.
No one left with a final answer. But the Hi!ckathon pointed to a shift in priorities: as AI makes answers cheaper, education may matter less for producing outputs, and more for training students to question, verify, and judge what those outputs are worth.