Key findings & research contributions:
• Sociotechnical imaginaries associated with AI played a key role in the emergence of a Franco-German AI innovation ecosystem
• These shared imaginaries initially aligned partners but later became sources of tension
• The ecosystem did not collapse, but morphed into a smaller, stronger configuration
• The primary risks in AI innovation ecosystems are relational, not technical
• Managing such ecosystems requires not only technical coordination but also continuous negotiation of shared imaginaries
To address grand challenges through ecosystemic innovation, innovators must build a shared understanding that aligns their expectations. We studied an emerging AI-driven innovation ecosystem and found that the members’ imaginaries associated with AI motivate the emergence of innovation despite high uncertainty. Yet we also found that these same AI imaginaries can generate tensions that hinder innovation.
Shared imaginaries make AI ecosystems emerge
First, let’s define the key concept of our study, “sociotechnical imaginaries”. Sociotechnical imaginaries are collective visions of a desirable future, shaped by shared ideas about how science and technology can support a certain social order (Jasanoff & Kim, 2019).
New innovation ecosystems are emerging - often initiated by governments, involving corporate partners, startups, academia and public institutions. These ecosystems intend to transform entire sectors through the deployment of AI. But they also raise a fundamental question: how do such diverse actors manage to collaborate on something as uncertain, abstract, and ambitious as AI to solve complex problems?
This was the starting point of our research. We wanted to understand what makes these early AI ecosystems emerge - and what threatens their development. So we studied how players build a shared understanding of what AI is, and what they want to do with it.
This process unfolds through a “test and learn” dynamic, where the meaning attributed to the technology evolves gradually, shaped by the emergence of POCs (proofs of concept), POVs (proofs of value), and early prototypes. These milestones open the door to potential industrialization, often as part of new project opportunities.
In such a context, we identified the key ingredient for AI ecosystems to emerge. It is not technical alignment, nor contractual obligation: it is shared imaginaries.
Imagining the future together: a case study
Our study focused on a Franco-German consortium launched in 2022 and funded by both states. Its objective: to design AI software that could support large-scale, sustainable and resilient renovation of existing social housing. The project brought together a major construction firm, a national social housing provider and several of its subsidiaries, an AI research lab, and a start-up - all with different priorities, capabilities, and time horizons.
At first, the ecosystem formed remarkably fast under the influence of public funding. We show that this was due to three powerful and shared sociotechnical imaginaries associated with AI:
1. Innovation imaginary: the belief that AI is the next essential step for any serious organization, and that failing to invest in it could mean falling behind. This vision created a sense of urgency and gave legitimacy to the project.
2. Cybernetic imaginary: AI was seen as a system for optimizing workflows, enabling augmented data-driven decisions, and reducing uncertainty in renovation projects.
3. Techno-solutionist imaginary: AI was positioned as a key technology to address systemic challenges - climate change, energy poverty, or substandard housing.
These imaginaries allowed project members to share a common language and a common ambition. They aligned actors who otherwise had little in common and gave meaning to their collaboration. Yet as the project moved from ideas to execution, these imaginaries became sources of tension.
From shared vision to strategic friction
Once the project was launched, expectations started to diverge. For some partners, especially operational teams, the narratives seemed increasingly disconnected from everyday practices. Field managers in housing subsidiaries had trouble identifying relevant use cases.
Some, despite being essential contributors to the project’s success, postponed their engagement, waiting for the AI to "prove itself" before investing further.
This situation created a vicious circle - a chicken-and-egg paradox: on the one hand, the projects’ progress slowed significantly for lack of input from housing and renovation experts; on the other, those partners attributed their limited involvement to the absence of tangible intermediate results.
At the same time, the software developed did not live up to the high expectations raised by the initial visions. For example, at one point, an AI prototype took enormous effort to deliver results that ultimately proved less useful than a dashboard already in use in one of the partner companies and based on simple statistics.
Another key challenge concerned the translation of local expertise into formalized, computable models. Renovation practices differed widely between regions, subsidiaries, teams and individual project managers. Efforts to build a standardized “digital twin” of these buildings and the way they should be renovated ran into cultural, technical, and epistemic obstacles. In some cases, actors withdrew from the modeling efforts altogether, preferring to rely on informal, experience-based decision-making.
As a result, the roadmap had to be revised. Some contributors scaled back their involvement, others redefined their roles, and the consortium moved towards more modest goals.
Ecosystem resilience requires more than early alignment
What these frictions revealed is that early innovation ecosystems are held together by shared high expectations - but also made vulnerable by them. When expectations diverge, so does commitment. And when the software developed fails to meet the visions that mobilized initial support, trust can erode quickly, and members leave.
Rather than a collapse, what we observed was a transformation. The ecosystem morphed - its initial ambition narrowed, but its internal cohesion improved as it navigated challenges together and built trust among partners. A smaller, more aligned group of contributors continued the work, while new partners joined the ecosystem to challenge and help revise use cases.
This process offers important lessons for those coordinating AI-driven innovation ecosystems. First, aligning imaginaries is key but not enough. It must be complemented by careful expectation management, regular reality checks, and iterative recalibration.
Second, the primary risks in AI ecosystems are relational, not technical. Misunderstandings, role ambiguities, and mismatched incentives do more damage than failed code.
And third, ecosystem emergence is chaotic, nonlinear, and adaptive. Its development depends on the ability to pivot, to reformulate shared goals, and to identify a minimum viable ecosystem, or proto-ecosystem (Marcocchia & Maniak, 2018): a small, stable group of aligned contributors who can keep the project moving. Their work helps attract new partners who bring fresh value - and, eventually, engages final customers.
Managing AI through imagination and reflection
Our research suggests that sociotechnical imaginaries are not just side effects of innovation; they are central to it. They are the glue that holds actors together at the outset. But they are also potential fault lines if they are not revisited, negotiated, and translated into credible roadmaps capable of mobilizing key contributors.
As AI continues to spread into public and private sectors alike, the question is not only how to build better algorithms, but how to design better collaboration. This means designing not only technologies but also spaces for shared learning and reflection. The challenge is to combine vision and pragmatism: to inspire with imagination, to manage risk, and to stay rooted in real-world practice.
Ultimately, AI is not only a technical issue, but also a matter of governance and organizational design. Its ability to address grand challenges will depend on our capacity to manage both dimensions effectively.
References:
Jasanoff, S., & Kim, S. H. (Eds.). (2019). Dreamscapes of modernity: Sociotechnical imaginaries and the fabrication of power. University of Chicago Press.
Marcocchia, G., & Maniak, R. (2018). Managing 'proto-ecosystems' - two smart mobility case studies. International Journal of Automotive Technology and Management, 18(3), 209-228.