Ethical AI: IEEE’s Ethically Aligned Design Principles

How would you ensure that your artificial intelligence (AI) and Autonomous and Intelligent Systems (A/IS) respect human rights, promote well-being, and uphold accountability? These are some of the questions that the Institute of Electrical and Electronics Engineers (IEEE), the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity, has been exploring in its Ethically Aligned Design document. The document provides a comprehensive framework for creating ethical AI systems that prioritize human values and social good. In this post, we will explore the document and its five principles: human rights, well-being, accountability, transparency, and awareness of misuse.

The IEEE’s Ethically Aligned Design document aims to provide a common vision and a set of guidelines for the ethical design and development of AI algorithms that align with human values and respect human dignity. It is organized into five principles that cover different aspects of ethical AI systems: human rights, well-being, transparency, accountability, and awareness of misuse. Each principle is accompanied by a set of recommendations and best practices that provide concrete guidance for AI creators and operators. The document is intended to be a living document that evolves with advances in AI technology and feedback from stakeholders.

The five principles of Ethical AI according to IEEE’s Ethically Aligned Design document are:
  • Human Rights: Ensure they do not infringe on internationally recognized human rights.
  • Well-being: Prioritize metrics of well-being in their design and use.
  • Accountability: Ensure that their designers and operators are responsible and accountable.
  • Transparency: Ensure they operate in a transparent manner.
  • Awareness of Misuse: Minimize the risks of their misuse.

In the following sections, we will explore each of these principles in detail, using examples and case studies to illustrate their relevance and application.

Human Rights: Ensure they do not infringe on internationally recognized human rights.

One of the most important principles in this framework is the principle of human rights. According to this principle, AI and A/IS should be created and operated to respect, promote, and protect internationally recognized human rights, such as the right to life, liberty, security, privacy, equality, and freedom of expression. This means that AI and A/IS should not harm or discriminate against any human being or group of human beings on the basis of their race, gender, age, religion, disability, or any other characteristic. It also means that AI and A/IS should not infringe on the autonomy and agency of human beings or manipulate their choices or preferences.

But how can we ensure that AI and A/IS adhere to this principle in practice? How can we design and deploy AI and A/IS that respect human rights in different contexts and scenarios? How can we monitor and evaluate the impact of AI and A/IS on human rights over time? These are some of the questions that I will explore in this section, using examples and case studies from various domains and sectors.

One domain where human rights are particularly relevant is health care. AI and A/IS have the potential to improve health outcomes, reduce costs, and enhance access to quality care for millions of people around the world. However, they also pose significant risks to human rights, such as the rights to privacy, consent, information, and non-discrimination. For instance, AI and A/IS could be used to collect, process, and share sensitive personal data without the knowledge or consent of patients or users. They could also produce inaccurate or discriminatory recommendations when trained on biased public health data. And they could be used to replace or override the judgment or expertise of human health professionals.

To address these risks, we need to make sure that AI and A/IS in health care are ethical and respectful of people’s rights. How can we do that? By following key principles such as transparency, accountability, fairness, privacy, security, and explainability, which should guide the design and operation of AI and A/IS in health care from start to finish. To do this well, it is vital to involve the different people affected by these systems, such as patients, users, health professionals, and others.

Well-being: Prioritize metrics of well-being in their design and use.

The next of the IEEE’s Ethically Aligned Design principles is well-being: A/IS creators shall adopt increased human well-being as a primary success criterion for development. This principle reflects the idea that A/IS should not only respect, promote, and protect human rights, but also enhance human well-being in various dimensions. But what does this mean in practice? How can A/IS creators measure and evaluate the impact of their systems on human well-being? And what are some examples of A/IS that embody this principle?

According to the IEEE, human well-being is a broad concept that encompasses physical, mental, emotional, social, and economic aspects. It is influenced by both objective and subjective factors, such as health, happiness, satisfaction, meaning, purpose, dignity, autonomy, agency, and justice. A/IS creators should consider these factors when designing and developing their systems, and ensure that they align with the values and preferences of the intended users and stakeholders.

One way to operationalize this principle is to use a framework such as the OECD’s Better Life Index, which measures well-being across 11 dimensions: income and wealth, jobs and earnings, housing, health status, work-life balance, education and skills, social connections, civic engagement and governance, environmental quality, personal security, and subjective well-being. A/IS creators could use this framework to identify the potential benefits and risks of their systems for each dimension of well-being, and to design solutions that maximize the positive outcomes and minimize the negative ones.
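As a rough illustration, here is a minimal Python sketch of how a team might record such an assessment. The dimension names come from the Better Life Index, while the scoring scale, class, and helper names are hypothetical assumptions, not part of any OECD or IEEE specification.

```python
from dataclasses import dataclass, field

# The 11 well-being dimensions of the OECD Better Life Index.
OECD_DIMENSIONS = [
    "income and wealth", "jobs and earnings", "housing", "health status",
    "work-life balance", "education and skills", "social connections",
    "civic engagement and governance", "environmental quality",
    "personal security", "subjective well-being",
]

@dataclass
class WellBeingAssessment:
    """Expected impact of an A/IS on each well-being dimension.

    Scores use a hypothetical scale from -2 (strong negative impact)
    to +2 (strong positive impact); unrated dimensions are omitted.
    """
    system_name: str
    scores: dict = field(default_factory=dict)

    def rate(self, dimension: str, score: int) -> None:
        if dimension not in OECD_DIMENSIONS:
            raise ValueError(f"Unknown dimension: {dimension}")
        if not -2 <= score <= 2:
            raise ValueError("Score must be between -2 and +2")
        self.scores[dimension] = score

    def risks(self) -> list:
        """Dimensions with an expected negative impact, for mitigation work."""
        return [d for d, s in self.scores.items() if s < 0]

# Example: assessing a hypothetical care robot.
assessment = WellBeingAssessment("care-robot-prototype")
assessment.rate("health status", +2)       # assists with daily tasks
assessment.rate("social connections", -1)  # may reduce human contact
print(assessment.risks())  # ['social connections']
```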

Some examples of A/IS that aim to increase human well-being are:
  • Care robots that assist elderly or disabled people with daily tasks, such as bathing, dressing, eating, or moving around. These robots can help to improve the quality of life and independence of their users, as well as reduce the burden on caregivers.
  • Intelligent personal assistants that provide information, guidance, or support to their users in various domains, such as health, education, finance, or entertainment. These assistants can help to enhance the knowledge, skills, productivity, or enjoyment of their users.
  • Chatbots that offer emotional or psychological counseling to their users in a confidential and empathetic manner. These chatbots can help to reduce stress, anxiety, depression, or loneliness among their users.

These are just some examples of how A/IS can contribute to human well-being. However, A/IS creators should also be aware of the potential challenges and trade-offs that may arise when applying this principle. For instance: How to balance the well-being of different groups or individuals who may have conflicting interests or values? How to ensure that A/IS do not infringe on other ethical principles or human rights? How to monitor and evaluate the long-term effects of A/IS on human well-being?

These are some of the questions that A/IS creators should ask themselves when designing and developing their systems. By adopting increased human well-being as a primary success criterion for development, A/IS creators can demonstrate their commitment to ethical AI and create systems that benefit humanity and the natural environment.

Accountability: Ensure that their designers and operators are responsible and accountable.

The accountability principle states that A/IS should be designed and operated to provide an unambiguous rationale for the decisions they make, and that the people or organizations responsible for creating or deploying A/IS should be identifiable and answerable for the consequences of their use.

Why is accountability important for A/IS? First of all, accountability promotes trust and transparency, which are essential for building public confidence and acceptance of A/IS. If we can understand how and why A/IS make decisions, we can evaluate their performance, reliability, and fairness. We can also identify and correct any errors, biases, or harms that may arise from their use.

Secondly, accountability fosters responsibility and learning, which are crucial for improving the quality and safety of A/IS. If we can trace the causes and effects of A/IS decisions, we can assign appropriate roles and duties to the people or organizations involved in their development and deployment. We can also learn from the feedback and outcomes of A/IS use, and apply the lessons to enhance their design and operation.

Thirdly, accountability supports redress and justice, which are necessary for protecting the rights and interests of the people affected by A/IS. If we can verify and explain the actions and outcomes of A/IS, we can enforce the relevant laws, regulations, and standards that govern their use. We can also provide remedies and redress for any harms or damages that may result from their use.

There are several strategies that can help implement accountability for A/IS, such as:
  • Developing clear and consistent ethical guidelines and codes of conduct for A/IS creators and users.
  • Incorporating explainable AI techniques that enable A/IS to provide understandable and meaningful reasons for their decisions.
  • Establishing audit trails and logs that record the inputs, outputs, processes, and interactions of A/IS (a minimal sketch follows this list).
  • Creating oversight mechanisms and institutions that monitor, review, and regulate the use of A/IS.
  • Providing accessible and effective channels for reporting, complaining, appealing, and resolving disputes involving A/IS.
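To make the audit-trail strategy concrete, here is a hedged Python sketch of a tamper-evident decision log. The class, field names, and hash-chaining scheme are illustrative assumptions; a production system would also need durable storage and access controls.

```python
import hashlib
import json
import time

class DecisionLog:
    """Append-only, hash-chained log of A/IS decisions.

    Each entry records the inputs, output, and model version, and embeds
    the hash of the previous entry so later tampering is detectable.
    """

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis hash for the first entry

    def record(self, inputs: dict, output, model_version: str) -> None:
        entry = {
            "timestamp": time.time(),
            "inputs": inputs,
            "output": output,
            "model_version": model_version,
            "prev_hash": self._last_hash,
        }
        serialized = json.dumps(entry, sort_keys=True)
        entry["hash"] = hashlib.sha256(serialized.encode()).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the hash chain to detect tampering after the fact."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev or recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = DecisionLog()
log.record({"applicant_income": 50000}, "approved", "model-v1.3")
print(log.verify())  # True; editing any recorded field makes this False
```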

To illustrate how these strategies can work in practice, let us consider some examples and case studies of accountability for A/IS.

In 2018, the European Union’s General Data Protection Regulation (GDPR) came into effect, granting individuals the right to access, rectify, erase, restrict, or object to the processing of their personal data, including by A/IS. The GDPR also requires data controllers and processors to implement data protection by design and by default, as well as to conduct data protection impact assessments for high-risk processing activities. Moreover, the GDPR imposes strict obligations and penalties for data breaches and non-compliance.

In 2019, IBM launched its AI Explainability 360 toolkit, which provides a comprehensive set of algorithms, libraries, tutorials, and demos that help developers and users understand how A/IS make decisions. The toolkit covers various approaches to explainability, such as feature importance, contrastive explanations, prototype selection, and interpretable rule-based models.
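To give a flavor of one technique in this family, here is a from-scratch Python sketch of a counterfactual-style explanation; it does not use the AIX360 API, and the toy scoring function, weights, and approval threshold are all invented. It searches for the smallest single-feature change that would flip a rejection into an approval, which is one way to give a person an actionable reason for a decision.

```python
import numpy as np

def credit_score(features: np.ndarray) -> float:
    """A toy, invented scoring function (not a real credit model)."""
    weights = np.array([0.5, 0.3, -0.4])  # income, payment history, debt
    return float(features @ weights)

def counterfactual(features: np.ndarray, threshold: float = 1.0,
                   step: float = 0.05, max_iters: int = 200):
    """Find the smallest single-feature change that flips a rejection.

    A brute-force sketch of the counterfactual-explanation idea:
    nudge one feature at a time and report the cheapest flip found.
    """
    if credit_score(features) >= threshold:
        return None  # already approved, nothing to explain
    best = None
    for i in range(len(features)):
        for direction in (+1, -1):
            candidate = features.astype(float).copy()
            for _ in range(max_iters):
                candidate[i] += direction * step
                if credit_score(candidate) >= threshold:
                    change = abs(candidate[i] - features[i])
                    if best is None or change < best[2]:
                        best = (i, candidate[i], change)
                    break
    return best

applicant = np.array([1.0, 0.8, 1.2])  # score 0.26, below the threshold
feature_idx, new_value, change = counterfactual(applicant)
print(f"Approved if feature {feature_idx} changed to {new_value:.2f} "
      f"(a change of {change:.2f})")
```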

In 2020, the Partnership on AI released its AI Incident Database (AIID), which collects and analyzes publicly reported cases of harms or near-misses caused by A/IS. The AIID aims to facilitate learning from past incidents, identify common patterns and root causes of failures or risks, and inform best practices and policies for mitigating or preventing future incidents.

These examples show how accountability for A/IS can be achieved through different means and at different levels. However, they also highlight the challenges and trade-offs that may arise in implementing accountability for A/IS. For instance:

  • How can we balance the need for explainability with the demand for efficiency or accuracy?
  • How can we ensure that audit trails or logs are accurate, complete, and tamper-proof?
  • How can we cope with the complexity or uncertainty of A/IS decisions or outcomes?
  • How can we deal with the conflicts or inconsistencies between different ethical principles or legal frameworks?
  • How can we address the power asymmetries or information gaps between different stakeholders?

These questions require further research, discussion, and collaboration among various disciplines, sectors, and communities, as well as future posts 😉 By applying accountability to your work with A/IS, you can not only enhance their quality and safety but also contribute to their positive impact on humanity and society.

Transparency: Ensure they operate in a transparent manner.

According to IEEE, transparency means that the users and stakeholders of an A/IS should be able to understand how and why it makes decisions. This is a more technical definition of transparency than the broader, more inclusive one used in the Rome Call for AI Ethics, i.e. “AI systems must be understandable and accessible to all.” Yet, both agree that transparency is essential for building trust, accountability, and fairness in AI.

But what does transparency mean in practice? How can we ensure that the A/IS we design and use are transparent enough? How can we balance the need for transparency with other factors such as privacy, security, and efficiency?

These are some of the questions that the IEEE’s Ethically Aligned Design initiative attempts to answer.

Transparency means that:
  • The purpose, capabilities, limitations, and expected outcomes of an A/IS should be clearly communicated to its users and other stakeholders (see the sketch after this list).
  • The data sources, methods, assumptions, and algorithms of an A/IS should be accessible and understandable to its users and other stakeholders, as appropriate to the context and subject to reasonable restrictions.
  • The decisions and actions of an A/IS should be traceable and explainable to its users and other stakeholders, as appropriate to the context and subject to reasonable restrictions.
  • The feedback mechanisms and grievance procedures of an A/IS should be visible and accessible to its users and other stakeholders.
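One lightweight way to meet the first requirement is a structured disclosure in the spirit of “model cards.” Here is a minimal, hedged Python sketch; every field name and example value is hypothetical, not a formal standard.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """A minimal transparency disclosure for an A/IS; the fields and
    example values below are illustrative, not a formal standard."""
    name: str
    purpose: str
    capabilities: list
    limitations: list
    data_sources: list
    contact_for_appeals: str

    def summary(self) -> str:
        return (
            f"{self.name}: {self.purpose}\n"
            f"  Can: {', '.join(self.capabilities)}\n"
            f"  Cannot: {', '.join(self.limitations)}\n"
            f"  Data: {', '.join(self.data_sources)}\n"
            f"  Appeals: {self.contact_for_appeals}"
        )

card = ModelCard(
    name="loan-screener-v2",
    purpose="Rank loan applications for human review",
    capabilities=["score applications", "flag incomplete files"],
    limitations=["not validated for business loans", "makes no final decisions"],
    data_sources=["2015-2020 retail loan outcomes (anonymized)"],
    contact_for_appeals="appeals@example.com",
)
print(card.summary())
```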

To illustrate the importance and application of transparency in AI, let us look at the use of AI for credit scoring as a case study.

Credit scoring is the process of assessing the creditworthiness of a borrower based on factors such as income, assets, debts, and payment history. Credit scores affect both the access to and the cost of credit for individuals and businesses. Traditionally, credit scoring was done by human experts using predefined rules and criteria. However, with the advent of big data and machine learning, credit scoring can now be done by AI systems that analyze large amounts of data and find complex patterns and correlations.
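As a hedged illustration of what an AI-based credit scorer with a basic transparency mechanism might look like, the following Python sketch trains a toy model on synthetic data (every feature name, weight, and value is invented) and decomposes each decision into per-feature contributions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicants: income, payment history, debt ratio (all invented).
X = rng.normal(size=(500, 3))
true_w = np.array([1.0, 1.5, -2.0])
y = (X @ true_w + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

FEATURES = ["income", "payment history", "debt ratio"]

def explain(applicant: np.ndarray) -> None:
    """Print each feature's exact contribution to the decision logit.

    For a linear model the logit decomposes as intercept + sum(coef * x),
    so this rationale is faithful; non-linear models would need
    approximation methods such as SHAP or LIME instead.
    """
    contributions = model.coef_[0] * applicant
    logit = model.intercept_[0] + contributions.sum()
    decision = "approve" if logit > 0 else "deny"
    print(f"Decision: {decision} (logit {logit:+.2f})")
    for name, c in zip(FEATURES, contributions):
        print(f"  {name:>16}: {c:+.2f}")

explain(np.array([0.2, -1.0, 1.1]))  # a hypothetical applicant
```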

While AI-based credit scoring can potentially improve the accuracy and efficiency of credit decisions, it also raises some ethical concerns. One of them is transparency. How can we ensure that the AI system is transparent enough to its users and other stakeholders? How can we know what data sources, methods, assumptions, and algorithms it uses to generate credit scores? How can we understand the rationale behind its decisions? How can we challenge or appeal its decisions if we disagree or find them unfair?

These questions are not hypothetical. They are real issues that have been faced by many people affected by AI-based credit scoring systems. For instance, in 2019, Apple launched a new credit card in partnership with Goldman Sachs. The card used an algorithm to determine the credit limit for each applicant. However, soon after its launch, several customers complained that the system was biased against women, reporting that they received lower credit limits than male counterparts with similar or even worse financial profiles. One of them was David Heinemeier Hansson, a prominent software developer and entrepreneur. He tweeted that his wife received a credit limit 20 times lower than his, despite her having a higher credit score and sharing all assets and accounts with him. He also claimed that when he contacted Apple’s customer service, they could not explain or justify the decision or offer any recourse.

This case sparked a public outcry and prompted an investigation by the New York Department of Financial Services. It also highlighted the need for more transparency in AI-based credit scoring systems. Without transparency, it is hard to detect, prevent, or correct any errors or biases that may arise in such systems. It is also hard to ensure that the systems are fair, accountable, and trustworthy.

Awareness of Misuse: Minimize the risks of their misuse.

Awareness of Misuse recognizes that A/IS can be used for good or evil, and that the creators of such systems have a responsibility to anticipate and prevent harmful outcomes. In this section, I will explain what this principle means, why it is important, and how it can be applied in practice.

The principle of Awareness of Misuse states that A/IS creators should consider the possible negative impacts of their systems on human rights, well-being, data agency, effectiveness, transparency, and accountability. For example, A/IS creators should ensure:

  • That their systems do not violate human dignity, privacy, or autonomy.
  • That they do not cause harm or suffering to humans or the environment.
  • That they do not manipulate or deceive users or stakeholders.
  • That they do not malfunction or fail to perform as intended.
  • That they do not obscure or hide their decision-making processes or outcomes.
  • That they do not evade or avoid responsibility or liability for their actions.

The principle of Awareness of Misuse also implies that A/IS creators should actively monitor and evaluate their systems for any signs of misuse or abuse by others. This means that A/IS creators should design their systems with safeguards and mechanisms to detect, report, and mitigate any unauthorized or unethical use of their systems. For example, A/IS creators should:

  • Implement security measures to prevent hacking, tampering, or theft of their systems.
  • Establish clear and enforceable policies and protocols for the use and access of their systems.
  • Provide users and stakeholders with information and education on the proper and ethical use of their systems.
  • Cooperate with authorities and regulators to address any legal or social issues arising from their systems.

The principle of Awareness of Misuse is important because it acknowledges the reality and complexity of the social and ethical implications of A/IS. A/IS are not neutral or value-free technologies; they reflect the values, assumptions, and biases of their creators and users. A/IS can also have unintended or unforeseen consequences that may not be apparent or predictable at the time of design or deployment. Therefore, A/IS creators have a moral duty to ensure that their systems are aligned with human values and ethical principles that prioritize human well-being. By doing so, A/IS creators can foster trust, confidence, and acceptance of their systems among users and stakeholders.

The principle of Awareness of Misuse can be applied in practice by following some practical guidelines and recommendations. According to The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, some examples of how A/IS creators can implement this principle are:

  • Conducting risk assessments and ethical impact assessments to identify and evaluate the potential misuses and risks of their systems (a minimal sketch of a risk register follows this list).
  • Developing codes of ethics and codes of conduct to guide their professional behavior and decision-making.
  • Adopting design methods and tools that incorporate ethical considerations and human values into the development process.
  • Engaging with diverse and inclusive stakeholders and experts to solicit feedback and input on their systems.
  • Testing and validating their systems in realistic scenarios and contexts to ensure their safety, reliability, and robustness.
  • Documenting and disclosing the design choices, assumptions, limitations, and trade-offs of their systems.
  • Providing mechanisms for reporting, auditing, correcting, and updating their systems.
  • Participating in standardization, certification, regulation, and governance initiatives to promote ethical best practices.
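To make the first recommendation more tangible, here is a minimal, hedged Python sketch of a misuse-risk register; the scenarios, scoring scales, and field names are all hypothetical and would need to be adapted to a real assessment process.

```python
from dataclasses import dataclass

@dataclass
class MisuseRisk:
    """One row of a hypothetical misuse-risk register for an A/IS."""
    scenario: str      # how the system could be misused
    likelihood: int    # 1 (rare) to 5 (almost certain) -- illustrative scale
    severity: int      # 1 (minor) to 5 (catastrophic)
    mitigation: str    # planned safeguard

    @property
    def priority(self) -> int:
        # Simple likelihood x severity ranking, as in many risk matrices.
        return self.likelihood * self.severity

register = [
    MisuseRisk("Scraped outputs used for mass surveillance", 3, 5,
               "Rate limiting and usage policy enforcement"),
    MisuseRisk("Prompting the system to produce deceptive content", 4, 3,
               "Output filtering and user education"),
]

# Review the highest-priority misuse scenarios first.
for risk in sorted(register, key=lambda r: r.priority, reverse=True):
    print(f"[{risk.priority:>2}] {risk.scenario} -> {risk.mitigation}")
```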

Conclusion

We have reached the end. We have explored the IEEE’s Ethically Aligned Design document and its five principles for creating ethical AI systems: human rights, well-being, accountability, transparency, and awareness of misuse. This document is a valuable resource for anyone who is interested in developing or using AI systems that respect human dignity and promote social good. It is also a living document that invites public feedback and participation from diverse stakeholders and perspectives.

If you are curious to learn more about the IEEE’s Ethically Aligned Design document and its ongoing development process, I encourage you to join the conversation and share your insights and questions. You can access the document online here. Tweet me at @PaulWagle and let me know what you think about the document and how you apply its principles to your work.

Thank you for reading this blog post and I hope you found it helpful. I look forward to hearing from you and learning together about the IEEE’s work on ethics in technology and its standards development process.
