Governing AI in Education Together: Reflections from Strasbourg
The Council of Europe’s 3rd Working Conference on Artificial Intelligence and Education (Strasbourg, 8–9 October 2025) took place against a complex landscape – globally, we are experiencing growing democratic backsliding, rising disinformation, and an erosion of public trust in institutions, all of which are significantly reshaping public discourse. In education, these trends intersect with calls to ban smartphones and technology in schools, and with broader narratives portraying technology as incompatible with good teaching.
Yet as artificial intelligence begins to shape how knowledge is accessed, produced, and valued, it brings both promise and risk – the potential to widen learning opportunities and drive innovation, but also to amplify misinformation, displace human judgement, deepen social divides, and further strain the fragile trust between citizens, educators, and technology.
The Council of Europe’s approach to AI governance in education is distinctive. Its Compass for AI and Education is being built around four interrelated components: regulation and governance, education about AI, research and evidence, and education with AI. Each is anchored in a shared commitment to translate Europe’s core values – human rights, democracy, and the rule of law – into concrete educational practice.
The conference’s format mirrored the very values it sought to promote. Combining plenary discussions with hands-on Policy Labs, it brought together ministries, researchers, NGOs, teachers, parents, students, and EdTech actors in genuine co-creation. Rapporteurs wove insights iteratively across labs and plenaries, turning the event itself into a living example of participatory governance – a model of the inclusive, evidence-informed dialogue that any future AI framework will require.
The proposed regulation and governance component builds on broader legal instruments such as the EU AI Act, the GDPR, and the UN Convention on the Rights of the Child. It focuses on ensuring coherence and protecting children’s rights within the education sector, aiming to establish safeguards before AI systems are used in schools. Participants agreed that a future legal instrument should be human-rights-anchored and capacity-oriented, offering not only legal clarity but also practical support to make obligations feasible for schools and educators.
Among all the themes discussed during the conference, education about AI (AI literacy) stood out as both pressing and fragmented. Young people are already interacting with AI daily – through recommendation systems, language models, and automated assessments – yet their capacity to question or influence these systems is uneven. As Bálint Koós of the European Students’ Union, representing more than twenty million students across Europe, observed, today’s AI-literacy landscape is marked by inequality and inconsistency: unequal access to tools and unclear guidance. His intervention reframed AI literacy as a matter of equity and democracy, calling for systems that students help design rather than merely receive. For him and for many young people, this is not an abstract policy question but a lived experience. While regulation may define boundaries, education determines agency – and a coherent European approach to AI literacy is urgently needed to prevent new inequalities from deepening old ones.
The Council of Europe’s conceptual definition of AI literacy encompasses three dimensions: technological (understanding how AI works), practical (using AI tools effectively and responsibly), and human (recognising and reflecting on the impact of AI on individuals, rights, democracy, and the rule of law). The human dimension, less emphasised elsewhere, is central to the Council of Europe’s approach.
Participants agreed that the AI-literacy landscape is both crowded and incomplete. Existing frameworks mostly originate from computer science or digital-skills perspectives, prioritising how AI functions or how to use it effectively while leaving aside broader civic and ethical questions. The result is that learners often lack critical understanding, and teachers lack clear guidance on how to translate complex technological and social issues into classroom practice.
Independent, transparent research and evidence are essential for evaluating the impact of AI-based technologies in classrooms. While there was convergence on the value of a European Reference Framework linking ethics, quality, and evidence, participants identified challenges of coordination, feasibility, and scope. Each member state faces different contexts and priorities, yet all expressed a desire for coherence and comparability.
Across all discussions, several cross-cutting themes emerged. Conceptually, Europe must redefine what quality education means in an AI age; culturally, inclusive dialogue and participatory governance were repeatedly highlighted as preconditions for legitimacy and trust in AI.
From the perspective of the European EdTech Alliance, this dialogue is highly relevant. It reinforces the importance of a responsible and well-informed narrative about technology, AI, and education – one that prioritises transparent evidence practices, responsible data use, and trust. A coherent European approach can strengthen the conditions under which innovation and public confidence grow together. As Professor Daniel Burgos reminded us, “Let’s not forget the human touch” – technology evolves fast, but strong human foundations must evolve with it.