Since 2017, more than eighty sets of ethical principles and values guidelines for AI and big data have been published. Because many originate in ethics, the contributions tend to break along the lines of academic applied ethics. Corresponding with libertarianism, there are values of personal freedom. Extending from a utilitarian approach, there are values of social wellbeing. Then, on the technical side, there are values focusing on trust in, and accountability for, what an AI does.
Here is the AI Human Impact breakdown:
| Domain | Principles/Values |
| --- | --- |
| Personal | Self‑determination, Privacy |
| Social | Fairness, Society |
| Technical | Performance, Accountability |
Each of the mainstream collections of AI ethics principles has its own way of fitting onto that three-part foundation, but the Ethics Guidelines for Trustworthy AI sponsored by the European Commission is representative, and it closely aligns with AI Human Impact.
| AI Human Impact | EC Guidelines Trustworthy AI |
| --- | --- |
| Self‑determination | Human agency and oversight |
| Privacy | Privacy and data governance |
| Fairness | Diversity, non-discrimination, fairness |
| Society | Societal and environmental wellbeing |
| Performance | Technical robustness and safety |
| Accountability | Accountability, Transparency |
Privacy
How much intimate information about myself will I expose for the right job offer, or an accurate romantic match?
Originally, health insurance enabled adventurous activities (like skiing the double black diamond run) by promising to pay the emergency room bill if things went wrong. Today, dynamic AI insurance converts personal information into consumer rewards by lowering premiums in real time for those who avoid risks like the double black diamond. What changed?
An AI chatbot mitigates depression when patients believe they are talking with a human. Should the design – natural voice, and human conversational indicators like the occasional cough – encourage that misperception?
If my tastes, fears and urges are perfectly satisfied by predictive analytics, I become a contented prisoner inside my own data set: I always get what I want, even before I realize that I want it. How can – and should – AI platforms be perverted to create opportunities and destinies outside those accurately modeled for who my data says I am?
What’s worth more: freedom and dignity, or contentment and health?
Fairness
Society
Fairness as solidarity, or as social justice?
Which is primary: equal opportunity for individuals, or equal outcomes for race, gender and similar identity groups?
AI catering to individualized tastes, vulnerabilities, and urges effectively diminishes awareness of others' tastes, vulnerabilities and urges – users are decreasingly exposed to others' music, literature, values and beliefs. On the social level, is it better for people to be content, or to be together?
An AI detects breast cancer from scans earlier than human doctors, but it trained on data from white women. Should deployment pause until data can be accumulated – and efficacy proven – for all races?
Those positioned to exploit AI technology will exchange mundane activities for creative, enriching pursuits, while others inherit joblessness and tedium. Or so it is said. Who decides what counts as creative, interesting and worthwhile versus mundane, depressing and valueless – and do they have a responsibility to uplift their counterparts?
What counts as fair? Aristotle versus Rawls.
Is equality about verbs (what you can do), or nouns (who you are, what you have)?
In the name of solidarity, how much do individuals sacrifice for the community?
Performance
Accountability
A chatbot responds to questions about history, science and the arts instantly, and so delivers civilization's accumulated knowledge with an efficiency that withers the ability to research and to discover for ourselves. (Why exercise thinking when we have easy access to everything we want to know?) Is perfect knowledge worth intellectual stagnation?
Compared to deaths per car trip today, how great a decrease would be required to justify switching to only driverless cars – ones prone to the occasional glitch and the consequent, senseless wreck?
If an AI picks stocks, predicts satisfying career choices, or detects cancer, but no one can understand how the machine generates its conclusions, should it be used?
What’s worth more, understanding or knowledge? (Knowing, or knowing why you know?)
Which is primary: making AI better, or knowing whom to blame, and why, when it fails?
What, and how much, will we risk for better accuracy and efficiency?
What counts as risk, and who takes it?
A driverless car AI system refines its algorithms by imitating the driving habits of the human owner (following distance, accelerating, braking, turning radii). The car later crashes. Who is to blame?
While every development and application is unique, this list of questions orients human impact evaluators toward potential ethical problems and dilemmas surrounding AI technology.
The checklist is modified from the European Commission's Assessment List for Trustworthy Artificial Intelligence.