Algorithms are sets of rules or instructions that tell a computer how to solve a problem or perform a task. They are essential for many domains and applications, such as search engines, social media, e-commerce, health care, education, finance, and more. Algorithms can help us find information, make decisions, optimize processes, and create value.
However, algorithms are not neutral or objective. They raise ethical issues and challenges that affect individuals and society. For example, algorithms can produce inaccurate results, violate privacy or freedom, harm human dignity or rights, or create unfair or unjust outcomes.
In this essay, we will explore the ethical implications of algorithms and how to address them. We will discuss the epistemic problems of algorithms, such as how they generate and use evidence; the normative problems of algorithms, such as how they affect human values and rights; and the possible solutions and frameworks for the governance of algorithms, such as how to ensure transparency, explainability, trustworthiness, and human oversight. This essay is heavily indebted to Mittelstadt et al.’s six types of ethical concerns raised by algorithms.
What are the epistemic problems of algorithms?
Epistemic problems of algorithms refer to the issues that arise from the quality and validity of the evidence that algorithms produce and use. Evidence is the information or data that algorithms rely on to perform their functions: for example, the input data that algorithms process, the output data that algorithms generate, or the feedback data that algorithms receive. Machine learning algorithms draw on all of these data points to produce their outcomes.
Some of the epistemic problems of algorithms are:
- Inconclusive evidence: Algorithms may use evidence that is incomplete, insufficient, or uncertain. For example, an algorithm may not have enough data to make a reliable prediction or recommendation; alternatively, an algorithm may have missing or corrupted data that affect its accuracy; or an algorithm may even have conflicting data that affect its consistency.
- Inscrutable evidence: Algorithms may produce evidence that is difficult to understand or explain. For example, an algorithm may use opaque methods that are difficult to interpret. Algorithms often make use of unknown factors that are impossible to identify. Other algorithms use quickly adapting methods that change too rapidly to follow.
- Misguided evidence: Algorithms may produce evidence that is misleading or deceptive. For example, an algorithm may use inaccurate or manipulated data that compromise its fairness; it may use irrelevant or outdated data that weaken the connection between its inputs and its outputs; or it may simply use false data that undermine its authenticity.
These epistemic problems affect the reliability and validity of algorithmic outcomes. Algorithms must be reliable, producing consistent and accurate results, and valid, producing relevant and meaningful results. As the old saying goes, correlation does not imply causation. Nevertheless, there are deeply pragmatic connections between the correlations machine learning systems are able to find and the outcomes they produce. Accountability for reliable and valid algorithmic outcomes should be a priority for all AI scientists.
Examples of these epistemic problems and their consequences are:
- In 2016, Microsoft launched Tay, a chatbot that was supposed to learn from human conversations on Twitter. However, within 24 hours, Tay started to produce racist, sexist, and offensive tweets. This was because Tay used inconclusive evidence from Twitter users who deliberately fed it with hateful and inappropriate messages. Tay’s reliability was compromised by its lack of data quality control and filtering mechanisms.
- In 2017, Amazon abandoned an AI tool that was designed to help with recruiting. Engineers found that the tool penalized female candidates. They believed this was because it had been trained on resumes that came predominantly from men.
- In 2016, the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm, used in US court systems to predict the likelihood that a defendant would become a recidivist, was found to produce false positives for recidivism among Black offenders at roughly twice the rate (45%) of white offenders (23%). The data that was used, the model that was chosen, and the overall process of creating the algorithm all contributed to this disparity.
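The disparity in that last example can be made concrete with a small calculation. The sketch below computes the false positive rate (the share of people who did *not* reoffend but were flagged as high risk) separately for two groups. The records are invented placeholders for illustration, not COMPAS data.

```python
# False positive rate per group: of those who did NOT reoffend,
# what fraction were wrongly flagged as high risk?

def false_positive_rate(records):
    negatives = [r for r in records if not r["reoffended"]]
    false_positives = [r for r in negatives if r["flagged_high_risk"]]
    return len(false_positives) / len(negatives)

# Hypothetical records (group labels "A" and "B" are placeholders).
records = [
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": False, "reoffended": False},
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": False, "reoffended": True},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": True,  "reoffended": False},
    {"group": "B", "flagged_high_risk": True,  "reoffended": True},
]

for group in ("A", "B"):
    subset = [r for r in records if r["group"] == group]
    print(group, round(false_positive_rate(subset), 2))
```

Even when overall accuracy looks acceptable, comparing error rates group by group, as above, is what exposed the kind of imbalance reported for COMPAS.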
What are the normative problems of algorithms?
Normative problems of algorithms refer to the issues that arise from the impact and influence of algorithms on the values and rights of individuals and groups. Values are the principles or standards that guide our actions and judgments; rights are the entitlements or freedoms that protect human interests and well-being.
Some normative problems of algorithms are:
- Justice: Algorithms may affect the distribution of benefits and burdens among individuals and groups. For example, an algorithm may discriminate against certain people based on their characteristics or preferences; an algorithm may favor certain people based on their connections or influences; or an algorithm may exclude certain people based on their access or participation.
- Security: Algorithms may affect the protection of personal information and identity among individuals and groups. For example, an algorithm may collect or disclose sensitive data without consent or notification; an algorithm may infer or reveal hidden attributes or behaviors without permission or awareness; or an algorithm may track or monitor activities or movements without authorization or control.
- Freedom: Algorithms may affect the expression of agency and choice among individuals and groups. For example, an algorithm may manipulate or coerce actions or decisions without transparency or explanation; an algorithm may replace or override actions or decisions without consultation or consent; or an algorithm may limit or constrain actions or decisions without justification or appeal.
- Accountability: Algorithms may affect the attribution of liability among individuals and groups. For example, an algorithm may cause harm or damage without remedy; an algorithm may obscure accountability without disclosure; or an algorithm may attempt to delegate responsibility without agreement or acceptance.
These normative problems affect the human values and rights of individuals and the common good. Human values are essential for ensuring dignity and the common good for all persons, with special attention to marginalized groups. They are also important for sustaining the trust and harmony of individuals and groups.
Examples of these normative problems and their implications are:
- In 2014, Amazon launched a same-day delivery service for its Prime members in selected US cities. However, many people noticed that the service excluded predominantly black neighborhoods. This was because Amazon used an algorithm that based its delivery areas on the number and density of Prime members, which indirectly correlated with income and race. Do you think that Amazon’s algorithm discriminated against certain groups based on their characteristics and preferences? How does this unintended consequence violate the human value of justice and each person’s dignity?
- In 2017, Tesla faced several lawsuits from customers who claimed that their cars crashed while using the Autopilot feature. This was because Tesla used an algorithm that controlled the steering, braking, and speed of the cars, but also required human supervision and intervention. Who is responsible for the harm and damage that Tesla’s algorithm seemed to cause? How do the human value of responsibility and the right to redress apply here?
How can we address the ethical problems of algorithms?
Algorethics proposes ways to address these ethical problems of algorithms. One of these proposals is to develop frameworks for the governance of algorithms. Governance refers to the processes and structures that regulate and guide the design, development, and deployment of algorithms. The aim is to ensure that algorithms are ethical, beneficial, and trustworthy.
Some of the proposed frameworks for the governance of algorithms are:
- Transparency: Algorithms should be open and accessible to scrutiny and inspection by relevant stakeholders. Transparency means that algorithms should disclose and explain their methods, data, results, and impacts. Transparent verification, evaluation, and feedback improve the reliability, validity, and accountability of algorithms.
- Explainability: Algorithms should be understandable and interpretable by relevant stakeholders. Algorithms should provide reasons and justifications for their methods, data, results, and impacts. Explainable outputs allow stakeholders to judge the validity, relevance, and meaningfulness of algorithms.
- Trustworthiness: Algorithms should be reliable and consistent in their methods, data, results, and impacts. Trustworthiness implies that algorithms adhere to ethical principles and standards that ensure their quality and integrity. Systems that do so earn consumer confidence and a trustworthy reputation.
- Human oversight: As proposed by the Rome Call for AI Ethics, algorithms should be subject to human control and intervention. AI is meant to augment human activity, and algorithms should respect and support human values and rights in their methods, data, results, and impacts. Human oversight is essential to just, secure, and responsible algorithms because it enables participation, representation, empowerment, and protection.
Examples of these solutions and frameworks and their applications are:
- In 2018, the European Union adopted the General Data Protection Regulation (GDPR), a law that regulates the processing of personal data by organizations. The GDPR requires organizations to provide transparency and explainability to data subjects about how their data is collected, used, and shared by algorithms. It also grants data subjects the right to access, correct, delete, or object to their data processed by algorithms. The overall aim is to enhance the trustworthiness and human oversight of algorithms by ensuring their compliance with ethical principles and standards.
- In 2019, the High-Level Expert Group on Artificial Intelligence (AI HLEG) of the European Commission published the Ethics Guidelines for Trustworthy AI, a document that provides recommendations and requirements for the development and use of AI systems. The guidelines propose seven key requirements for trustworthy AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability. The guidelines promote the transparency, explainability, trustworthiness, and human oversight of AI systems by ensuring their alignment with ethical values and human rights.
- In 2020, Google launched Explainable AI, a set of tools and frameworks that help developers and users understand how AI models work and why they produce certain outputs. Explainable AI provides methods and techniques for generating explanations for different types of AI models, such as deep neural networks, decision trees, or linear models, and helps visualize and communicate those explanations to different types of users, such as developers, analysts, or end users. It thereby improves the transparency, explainability, trustworthiness, and human oversight of AI models by enabling inspection, interpretation, evaluation, and feedback.
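To make "explanation" concrete, here is a minimal sketch of one common technique for linear models: attributing a prediction to each feature as its weight times its value, so a user can see which inputs pushed the score up or down. The weights and feature names are invented for illustration; this is a generic example, not Google's Explainable AI API.

```python
# Toy linear model with hypothetical weights: each feature's contribution
# to the score is simply weight * value, which makes the prediction auditable.
weights = {"income": 0.8, "debt": -1.2, "age": 0.1}
bias = 0.5

def predict_with_explanation(features):
    """Return the model score and a per-feature breakdown of it."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

score, why = predict_with_explanation({"income": 2.0, "debt": 1.0, "age": 3.0})
print(round(score, 2))  # 1.2
# List contributions from most to least influential (by magnitude).
for name, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")
```

For deep neural networks the attribution methods are more involved (e.g. gradient-based techniques), but the goal is the same: a per-feature account of why the model produced a given output.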
In this essay, we have explored the ethical implications of algorithms and how to address them. We have discussed the epistemic problems of algorithms, such as how they generate and use evidence; the normative problems of algorithms, such as how they affect values and rights; and the possible solutions and frameworks for the governance of algorithms, such as how to ensure transparency, explainability, trustworthiness, and human oversight.
We have seen that algorithms are not neutral or objective but raise ethical issues and challenges that affect individuals and society. Therefore, we need to be aware and reflective of the ethical implications of algorithms and propose solutions to address them. We also need to continue to develop frameworks for the governance of algorithms that ensure they uphold the dignity of each person to pursue the common good.
Algorithms are powerful and pervasive tools that can help us solve problems and create value. But they can also pose ethical problems and challenges that require our attention and action. By exploring the ethical implications of algorithms and how to address them, we can ensure that algorithms are not only pragmatic but also ethical.
Thank you for reading this essay. I hope you found it informative and interesting. If you have any thoughts or comments on this topic, please share them below or with me on Twitter @PaulWagle. I would love to hear from you!