Ethical and regulatory challenges posed by AI in relation to human rights.
“Artificial Intelligence and Human Rights: Recommendations for Companies”
by the Global Compact Network Germany and the Human Rights & Labour Standards
The document provides recommendations for companies to responsibly develop and implement AI while safeguarding human rights across various sectors, such as healthcare, recruitment, and law enforcement.
EXECUTIVE SUMMARY
Various frameworks, including the UN Global Compact and the UN Guiding Principles on Business and Human Rights, already set out general corporate human rights due diligence obligations.
However, the rapid development of artificial intelligence (AI) in business and society raises new ethical and regulatory questions. Notably, the EU AI Act presents a challenge for business enterprises, as the Act requires them to consider risks to fundamental and human rights when implementing AI systems.
This publication therefore examines how the development and implementation of AI will affect corporate human rights due diligence obligations.
In line with other international developments in the field of AI and human rights, the AI Act provides an initial basis for assessing the human rights risks associated with AI.
As AI is rolled out internationally, however, human rights risks will persist, especially as regards the potential misuse of AI solutions by third parties. Divergent legal frameworks across jurisdictions will also be a factor.
To protect human rights along the entire supply and value chain, companies urgently need to integrate AI-specific risks into their existing corporate due diligence processes.
They must also take appropriate steps to reduce human rights risks preemptively and to develop appropriate mitigation measures.
