What does it mean to truly centre people in artificial intelligence? Serving as a follow-up to the 23rd Informal ASEM Seminar on Human Rights held in Copenhagen on 29–31 October 2026, the programme brought together 26 early-career policymakers, academics, civil society representatives, and human rights practitioners from 21 countries to explore AI governance, digital equity, and rights protection across Asia and Europe. Held in Yogyakarta from 30 March to 1 April 2026 and delivered in partnership with the Center for Digital Society, the programme took participants through lectures, problem mapping, solution mapping, and cross-group discussions, moving deliberately from diagnosis to action.
Setting the Scene
Welcome remarks from Dr Wawan Masudi, Dean of the Faculty of Social and Political Sciences at Gadjah Mada University, Mr Zhang Lei, Deputy Director of ASEF, and Ambassador Andri Hadi, ASEF Governor for Indonesia, framed the event as a necessary space for cross-regional dialogue. In his opening remarks, Indonesia’s Vice-Minister for Communications and Digital Affairs, Nezar Patria, outlined the country’s emerging AI policy pathways, anchored in human centricity. Undral Ganbaata, Programme Specialist for Social and Human Sciences at UNESCO, delivered the keynote, offering a comparative view of global governance approaches and underscoring that meaningful human involvement in AI systems goes well beyond simply “looping people in.”
Before the discussions could go deep, they needed a shared foundation. Hafiz Noer, Visiting Lecturer and Researcher at the Faculty of Social and Political Sciences, Gadjah Mada University, provided that with a trainer-led overview of key concepts that gave participants a common language for the days ahead. He traced AI governance across the full lifecycle, from design and data collection through to deployment and decommissioning, making clear that governance is not a single intervention but a continuous responsibility.
He drew a careful distinction between ethics and human rights: where ethics offers principles that are aspirational and self-imposed, human rights represent one of the closest formulations we have of a global normative framework, translating shared values into binding obligations with legal force.
Governance is Everyone’s Business
Multistakeholder governance was not just a talking point at Yogyakarta but a demand that surfaced repeatedly across sessions. Michaela Sullivan-Paul, Project Manager for Artificial Intelligence at the European Commission, outlined the risks and harms associated with AI and good practices around risk mitigation, while pushing back against the idea that innovation and regulation are in tension.
Debby Kristin of Digital Rights Indonesia and Engage Media introduced an incident tracker monitoring alleged AI-related harms across the full lifecycle, from design through deployment, making the case that human rights due diligence is both a legal obligation and sound business practice. Dinah Van Der Geest from ARTICLE 19 tied her concerns together under a single diagnosis: governance asymmetry. Those who build AI, those who suffer its harms, and those with oversight power are rarely the same actors, and the gaps between them are structural. Regulators cannot see inside the systems they nominally govern, while harms surface only after damage is done. At the same time, AI oversight capacity is concentrated in a handful of jurisdictions while deployment is global. Prof. Jack Linchuan Qiu of Nanyang Technological University pushed the critique further back: existing internet governance frameworks, he suggested, were never built to hold today’s technology giants to account in the first place.
The most striking contributions came from outside dominant Western frameworks. Suci Lestari Yuana of Gadjah Mada University asked how the Global South could participate as a genuine contributor rather than merely a labour source or passive consumer of externally designed technology. A participant from India raised a related tension: how do you advocate for AI and human rights to communities whose immediate concerns are basic needs? The response was instructive: not to abandon the human rights frame, but to develop a contextualised vocabulary that countries could shape together. Qiu’s metaphor of an “ant society” gave this impulse theoretical grounding: an approach that centres the experiences and knowledge of the communities most affected by AI and digital platforms, looking at how people actually live with, use, and resist these technologies, rather than viewing them purely through the lens of policy or market logic.
Participants left the programme with more than new ideas – they left with a network of peers committed to shaping AI from the ground up, with human rights as the foundation rather than an afterthought. That commitment found concrete expression in four declarations from Yogyakarta: that human rights must anchor AI governance; that meaningful human oversight is non-negotiable; that Asia and Europe must coordinate rather than diverge; and that shared frameworks must be matched by real access to remedies for those who are harmed. The work ahead is demanding. But Yogyakarta was proof that the will to do it together exists.