Our Foundational Values

We stand by a defined set of values that serves as the foundation for AI systems that benefit humanity, individuals, societies, and the environment:

Human rights and human dignity

We pledge to respect, protect, and promote human rights, fundamental freedoms, and human dignity.

Living in peaceful, just, and collaborative societies
Ensuring diversity and inclusiveness at all times
Environment and ecosystem building and growth

The Core Principles

We take a human rights approach to AI: our core principles set out a human-rights-centred view of the ethics of AI. We put the human back in technology.

A No Harm Approach

The use of AI systems must not go beyond what is necessary to achieve a legitimate aim. Risk assessment should be used to prevent harms that may result from such uses.

Multi-stakeholder & Adaptive Governance & Collaboration

International law & national sovereignty must be respected in the use of data. Additionally, participation of diverse stakeholders is necessary for inclusive approaches to AI governance.

Human Oversight and Determination

Member States should ensure that AI systems do not displace ultimate human responsibility and accountability.

Safety and Security

Safety risks as well as vulnerabilities to attack (security risks) should be avoided and addressed by AI actors.

Responsibility and Accountability

AI systems should be auditable and traceable. There should be oversight, impact assessment, audit and due diligence mechanisms in place to avoid conflicts with human rights norms and threats to environmental wellbeing.

Sustainability

AI technologies should be assessed against their impacts on ‘sustainability’, understood as a set of constantly evolving goals including those set out in the UN’s Sustainable Development Goals.

Fairness and Non-Discrimination

AI actors should promote social justice, fairness, and non-discrimination while taking an inclusive approach to ensure AI’s benefits are accessible to all.

Right to Privacy and Data Protection

Privacy must be protected and promoted throughout the AI lifecycle. Adequate data protection frameworks should also be established, and clear guardrails need to be defined.

Transparency and Explainability

The ethical deployment of AI systems depends on their transparency & explainability (T&E). The level of T&E should be appropriate to the context, as there may be tensions between T&E and other principles such as privacy, safety and security.

Awareness & Literacy

Public understanding of AI and data should be promoted through open and accessible education, civic engagement, digital skills and AI ethics training, and media and information literacy.

The Global Alliance for Ethical AI Innovation

Getting AI governance right is one of the most important challenges of our day, requiring mutual learning based on the lessons and best practices emerging from many jurisdictions around the world.

This Forum brings together the experiences and expertise of professionals, companies and countries at various stages of technological and policy development, allowing for a targeted exchange of knowledge and a debate with the commercial sector, academia, and civil society as a whole.

High-level decision makers, industry leaders, representatives from scientific and research institutions, and non-governmental organisations will share their perspectives and best practices on AI governance at the global, regional, and national levels. We shall also initiate conversations around the opportunities and challenges presented by AI, such as the technology's potential to advance the agenda of equity, diversity, and non-discrimination, emerging best practices of AI supervision, partnerships with the private sector through ethical impact assessments, and the impact of AI on gender equality.

The rapid emergence of artificial intelligence (AI) has provided numerous opportunities worldwide, ranging from aiding healthcare diagnostics to enabling human relationships via social media and increasing labour efficiency through automation. However, these rapid developments present significant ethical considerations, originating in AI systems' capacity to incorporate biases, contribute to climate change, and endanger human rights. Such AI-related hazards have already begun to exacerbate existing disparities, causing further harm to marginalised communities.

The Global Alliance for Ethical AI Innovation © 2024.
