Human Rights and Artificial Intelligence: Time to Address the Harms

Project Updates

Published:
23 Jul 2025

When debates are deeply polarised, a cautious, holistic understanding becomes essential, especially with AI, whose reach and risks are unprecedented. While AI is lauded for its speed and efficiency in processing and analysing data, critics increasingly caution against its colonising and dehumanising effects. Notably, OpenAI CEO Sam Altman has stated that “If this technology (AI) goes wrong, it can go quite wrong.”

Only future technological advancements will reveal what ‘quite wrong’ might truly look like; nonetheless, the gravest concern remains AI’s potential to dehumanise individuals. The easiest and most brutal way to dehumanise is to violate the inherent basic rights bestowed on humans. Unfortunately, the infringement of human rights by AI is occurring gradually and clandestinely worldwide, with both immediate and far-reaching effects on individuals. A human rights–centric approach is therefore essential to monitor and evaluate the impact of AI on individuals and society. Although realising this goal raises more questions than it answers, now is the most opportune time to study and deconstruct the “thinking machine” and to safeguard humanity from the negative externalities of AI through effective legislation, regulation, and governance.

Although AI has both advantages and disadvantages, the lack of transparency about how these systems work means that the negative implications often outweigh the positive. The consequences of AI also depend heavily on the intent behind its use and its specific applications. The scale of the problem is illustrated by the rise in the global number of reported AI incidents and controversies, from 13 in 2014 to 233 in 2024. Notably, the right to privacy and the right to equality and non-discrimination appear to be the human rights most frequently violated by AI. Such violations have been reported across sectors including education, healthcare, finance, defence, government, e-commerce, media and entertainment, and legal and justice systems, with varying degrees of repercussions. Prominent instances of AI misuse include, but are not limited to, AI recruitment tools rejecting applicants based on gender, race, or age; racial profiling; surveillance without consent; deepfake images and videos of prominent political leaders, particularly women; and the amplification of disinformation on social media platforms.

All of this highlights the urgency of establishing due process for creating guardrails against such human rights violations and of devising a mechanism to compensate those harmed by AI. Today, governments and judicial bodies worldwide are applying existing legislation, including privacy and data protection acts, to AI and other emerging technologies with varying degrees of success. Some progress has also been made toward regulating AI directly, as evidenced by the EU AI Act in Europe and the ASEAN Guide on AI Governance and Ethics in Asia. However, these frameworks do not address the adaptive and often unpredictable nature of AI. Whether AI algorithms, especially those that learn and evolve autonomously, can truly be managed and controlled to safeguard human rights remains an open question. At the same time, given the acknowledgment of AI’s “potential for serious, even catastrophic, harm” in the 2023 Bletchley Declaration, the world’s first agreement on AI safety, a discussion on remedies and reparations for harms caused by AI is long overdue. That discussion should cover the roles of governments and the private sector, including companies developing AI as well as those deploying it, and potential regulatory frameworks that ensure human rights are not violated.

Given these growing concerns and the urgent need for robust governance, platforms for international dialogue have become increasingly vital. In this context, the upcoming 23rd Informal ASEM Seminar on Human Rights: Human Rights & Artificial Intelligence, to be held in Copenhagen, Denmark from 29-31 October 2025, offers a timely and significant platform for a systematic, multi-stakeholder discussion on the sub-themes of ‘Remedies and Reparations of Harms (Access to Justice),’ ‘Privacy and Data Protection,’ and ‘Equality and Non-Discrimination.’ The programme will bring together key stakeholders from Asia and Europe, including representatives of academia, government, non-governmental organisations, think tanks, and civil society organisations, for in-depth deliberations toward shaping human rights-centric AI.

Find out more about the Seminar here.

 

About the author:

Mukta Amale is currently an intern with the Governance Department at the Asia-Europe Foundation (ASEF). She will soon be heading to the University of St. Gallen, Switzerland, for an exchange semester as part of her Master in International Affairs program at the National University of Singapore. Her academic interests lie in geoeconomics, maritime conflicts in Southeast Asia, and human rights violations in conflict zones.

References:

Dario Amodei — The Urgency of Interpretability

Global annual number of reported artificial intelligence incidents and controversies
