
Social Impacts of Ethical Artificial Intelligence and Autonomous System Design


Nathan Hutchins, Zack Kirkendoll, and Dr. Loyd Hook
The Department of Electrical and Computer Engineering
The University of Tulsa
Tulsa, Oklahoma, USA
Nathan-hutchins@utulsa.edu

Abstract—With the introduction of autonomous systems becoming more and more prevalent in modern society, a major focus of research on Artificial Intelligence (AI) and Autonomous Systems (AS) ethics is becoming important. There are many questions in the public about how these AI/AS programs will make ethical decisions, as well as which ethical issues will be dealt with at a developmental level. During AI/AS development, these choices are mainly driven by the personal choices of the developers, which is a subjective view of any issue on how to handle difficult situations. In most conversations about AI/AS control, people take issue with how ethical decisions are made, or with which decision is made, prompting them to want human intervention during these situations. The IEEE Standards Association (SA) has been developing a set of basic guidelines to help alleviate some of the ambiguity of these sensitive issues. This paper is a literature review of AI/AS design guidelines and a case study of different incidents that AI/AS systems could be involved in, producing many of the social and ethical quandaries brought on by AI/AS system development.

Keywords—Artificial Intelligence, Autonomous Systems, Ethics

I. INTRODUCTION

As autonomous system technology improves, and its corresponding human safety improvements are undeniably proven, society will unavoidably adopt this new technology at a rapid pace. Once the idea of autonomous transportation becomes culturally accepted by enough of society, there will be very few ways to slow its mass implementation across a variety of systems. Currently, four states and Washington D.C. allow self-driving cars in some capacity. Google's Waymo has staged itself to keep pushing forward with autonomous transportation, with potential for both personal ownership and transportation as a service. This rapid progress and adoption by both the government and the people has shown that the transition from manual to autonomous systems is rapidly approaching. That is why the discussion, adoption, and iterative improvement of both artificial intelligence and autonomous system ethical decision making, especially related to safety critical systems, is of the utmost importance for the near future, both domestically and internationally.

Motor vehicle deaths in the United States increased 6% in 2016 from 2015 [1]. Nearly 1.3 million people die in road crashes each year on a global scale, with an additional 20-50 million injured or disabled. Road traffic crashes rank as the 9th leading cause of death and account for 2.2% of all deaths globally. Road crashes cost $518 billion per year globally, of which $230 billion belongs to the United States [2]. A 2015 crash analysis by the National Highway Traffic Safety Administration found that approximately 94% of vehicle crashes are driver response related, due to recognition errors, decision errors, performance errors, and non-performance errors [NHTSA]. The majority of the driver response issues that caused car crashes are due to distractions [2].

The adoption of autonomous systems could greatly reduce these preventable deaths, injuries, and damages, both in the United States and eventually on a global scale, in this single category of automotive transportation. This reduction in human harm and financial burden will necessitate and drive the inevitable adoption of artificial intelligence and autonomous systems within safety critical systems. This could be the first step in implementing safety critical fully autonomous systems on a mass scale, paving the way for how society designs and creates entities capable of making ethical decisions on human life en masse.

II. ETHICAL DESIGN CONSIDERATIONS



There are many ethical, moral, security, privacy, and safety concerns with AI/AS systems that need to be addressed and understood in a transparent and easy to comprehend way before society should fully accept them. Scientists and philosophers have long debated which pure form of ethics, or which combination of ethics, would be most beneficial to utilize for artificial intelligence and autonomous system decision making. There is still no definitive answer, and yet that fact will not slow the creation or implementation of those very systems within our society. If no guidelines exist, then corporations and individuals will create their own to serve their specific purposes and goals, irrespective of what the best or optimal ethical decision-making process should be. This situation does not necessarily mean that harmful ethics will be created and propagated to AI/AS; however, the potential does exist, and that potential, no matter how small, represents a problem to the entirety of the society in which that AI/AS exists. A new paradigm of laws, regulations, and guidelines will need to be created and implemented to ensure AI/AS systems abide by the rules that society has established, while presenting a smooth transition from our current methodology to one that takes AI/AS into account.

At present, machines cannot effectively work directly with natural human language and context, so they cannot simply be fed the basic ideas of ethics with the assumption that an autonomous system will behave in accordance with them. The approach must emphasize formal ethical rules and logic that can accurately be interpreted by the machine. Science and engineering are great at answering specific questions with quantitative results and at solving well defined problems, but they are not yet fully applicable to the nonspecific question of how to make the world a better place. Differentiating between direct and indirect consequences is difficult to quantify. For example, the AI/AS might be required to consider consequences in a distant time, place, or sequence of events that no human could possibly be expected to consider or comprehend. Human intuition dictates that there should be a difference in ethical value between various consequences, for example, the life of a human, the life of a single cell, or the life of a tree. A proper methodology is required to program transparent ethical decision making into AI/AS. In general, ethical systems are divided into three categories: deontological, consequentialist, and virtue ethics. We will look at deontological and consequentialist methods in this paper.

Deontological ethics attempts to provide rules based on the rightness or wrongness of the action being more important than the rightness or wrongness of the consequences; the agent's good intentions matter more than the result when quantifying the benefit of that action. The intentions of the action provide a standard for what is good and what is bad. For example, lying would always be interpreted as a bad action, even in the case of lying to a terrorist, which could potentially lead to harm to people. However, it is difficult to apply a non-consequentialist approach to all of machine ethics, because even non-consequentialists define actions by their consequences, even if not explicitly. Certain actions are not considered inherently bad until those actions cause harm to someone else. This ethical approach is currently too broad, leaving a lot open to interpretation, to provide a clear and logical approach to implement within AI/AS. Implementing a "do not kill" rule sounds great; however, for this to be implemented programmatically so that a robot understands it explicitly, the rule would not meet universal agreement under all conditions, such as killing a terrorist who would otherwise have killed hundreds of innocents. This could potentially be a useful ethical system to utilize in certain situations, but it might require human intervention in life or death situations to regulate behavior. The most important metric for society's consideration of AI/AS will be the consequences of the perceived actions or inactions, given the breadth of the specific situation. A sufficiently capable autonomous system is still a mechanical system, not much different than a storm: society does not care what the system or the storm's intentions appear to be, but strictly cares about the end result, reducing or eliminating overall harm to us.
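To make the contrast concrete, a deontological controller can be read as a hard rule filter that runs before any weighing of outcomes. The sketch below is only an illustration of that reading, assuming a hypothetical Action record and rule list that are not drawn from any cited standard.

    # Minimal sketch of a deontological (rule-first) action filter.
    # The Action fields and the rule list are hypothetical, chosen only to
    # illustrate that actions are judged by the rules they violate rather
    # than by a comparison of their outcomes.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        involves_deception: bool
        expected_fatalities: int

    RULES = [
        ("do not deceive", lambda a: not a.involves_deception),
        ("do not kill", lambda a: a.expected_fatalities == 0),
    ]

    def permitted(action: Action) -> bool:
        # An action is permitted only if it violates no rule, regardless
        # of how good or bad its consequences might be.
        return all(check(action) for _, check in RULES)

    def choose(actions: list[Action]) -> Action | None:
        # Return the first permitted action; None signals that every
        # option breaks a rule and a human would have to intervene.
        for action in actions:
            if permitted(action):
                return action
        return None

Note that when every available option violates some rule, as in an unavoidable crash where every maneuver causes harm, the filter returns no action at all, mirroring the point that a pure rule system may need human intervention in life or death situations.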
Utilitarian ethics attempts to maximize the benefit for the greatest number of people, regardless of the morality of the consequences. For example, in the case of an unavoidable car crash where the vehicle must choose between crashing into two different groups of people, the utilitarian approach will always choose the option with fewer casualties, without considering any other metrics such as the importance of the individuals or their reason for being in that situation. This approach offers a clear goal for the AI/AS, reducing harm to humans, resulting in the least loss of life compared to other ethical approaches and providing an intuitive expectation for behavior. Utilization of this ethical approach will result in AI/AS that does not take emotion into account, providing a rational and logical decision-making process once the statistical results have been quantified. Within a world of only utilitarian agents, this approach would be practical. However, in a world where these agents must interact with many other agents that act under different perspectives, especially other humans, this approach can be much more unfair. If the autonomous vehicle interacts with a human driver who is behaving recklessly, then the autonomous car should not crash in order to save the driver who chose to drive recklessly; instead, the consequences should lie with the reckless driver. Another potential issue is formalizing a non-binary scale of quantifiable weights given to specific situations. For example, should the autonomous vehicle crash to avoid a pedestrian who has chosen to endanger himself by walking into a busy street? What if the autonomous car has passengers? Should one passenger be endangered to save multiple individuals who chose to put themselves in an avoidable dangerous scenario by walking into a busy street?
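Stripped to its core, the utilitarian rule described above is a minimization over expected casualties. The sketch below illustrates only that comparison; the Outcome record and the example numbers are hypothetical assumptions, not data from any cited source.

    # Minimal sketch of a utilitarian choice: pick the maneuver with the
    # lowest expected casualties, ignoring intent, blame, and emotion.
    # The Outcome record and the example numbers are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        maneuver: str
        expected_casualties: float  # probability-weighted estimate

    def utilitarian_choice(outcomes: list[Outcome]) -> Outcome:
        # Always return the option that minimizes expected casualties.
        return min(outcomes, key=lambda o: o.expected_casualties)

    # Unavoidable-crash example: two groups, one forced choice.
    options = [
        Outcome("continue into a group of five", 5 * 0.8),
        Outcome("swerve into a group of two", 2 * 0.8),
    ]
    print(utilitarian_choice(options).maneuver)  # "swerve into a group of two"

Every difficulty raised above, such as blame, passengers, and recklessness, is invisible to this comparison unless it is somehow folded into the single casualty estimate.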
These are complex scenarios that a simple utilitarian approach does not explicitly address. The social dilemma study found that people favored the utilitarian approach for autonomous cars; however, they stated that they would rather not purchase utilitarian vehicles for themselves if given the option [3]. This would imply that regulation is needed to enforce utilitarian vehicles; however, people were also opposed to strict regulation of autonomous vehicle ethical approaches. This presents a very real problem for utilitarian approaches to AI/AS: providing the public with a viable, desired option that society is willing to purchase. In retrospect, the survey results are not completely applicable, since autonomous cars are not currently on the market. Individuals must think about this scenario in the abstract, without fully understanding the vehicles' features except for how they would behave in highly specific and unlikely dilemmas. This is still an unknown technology, and the survey responses would not necessarily match behavior if a utilitarian autonomous vehicle became available on the market today.

In contrast to pure ethical approaches, a merged ethical approach could be viable, even if unpopular with philosophers. This could be dependent on the type of AI/AS agent and the situation, blending the ethical theories to achieve the best result for various situations using some predefined ethical code construct [4]. The parliamentary model is one such example. This model uses a set of mutually exclusive moral theories, each with an assigned probability, with a given weight proportional to the probability of that theory. The parliament then reaches a decision, allowing one theory to be used in that situation [5].
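One computational reading of the parliamentary model is a credence-weighted delegation: each theory receives weight proportional to the probability assigned to it, and a single theory is selected to decide the situation at hand. The sketch below follows that reading under stated assumptions; the theory names, the credences, and the sampling scheme are hypothetical choices rather than a definitive implementation of [5].

    # Loose sketch of the parliamentary model: each moral theory holds
    # weight proportional to the credence assigned to it, and one theory
    # is delegated the decision for the current situation. Theory names,
    # credences, and the sampling scheme are illustrative assumptions.
    import random

    credences = {
        "utilitarian": 0.6,    # assigned probability that the theory is correct
        "deontological": 0.3,
        "virtue": 0.1,
    }

    def delegate_theory(credences: dict[str, float], rng=random) -> str:
        # Pick the theory that decides this situation, with probability
        # proportional to its assigned credence.
        theories = list(credences)
        weights = [credences[t] for t in theories]
        return rng.choices(theories, weights=weights, k=1)[0]

    print(delegate_theory(credences))  # "utilitarian" roughly 60% of the time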
A convergence of multiple ethical approaches could potentially be preferable, if given structural guidance, and could be developed and iteratively improved upon by industry, allowing the public to essentially vote on their preferred approach with their purchasing power, given that all available options meet or exceed minimal AI/AS guidelines, rules, and laws explicitly dictated by a governing body. AI/AS ethics needs basic assumptions, or axioms, that define certain ethics in a way that everyone can agree on, or that at least provide a standard for discussing these principles together. This approach, or really any ethical approach, will heavily rely on a set of ethical guidelines including fairness, consistency, impartiality, non-subjectivity, robustness, inclusivity, transparency, and a full collaboration between society and the developers of AI/AS.

Security for AI/AS should be a primary concern moving forward. Security has come to the forefront of the technology age, and for a very good reason that is also applicable to autonomous systems. AI/AS could be hacked, just like any other computer. This poses serious safety concerns in terms of general control of these systems and the corruption of their ethical decision-making processes. If the ethics of an autonomous agent could be corrupted, then the agent could be manipulated to cause harm to other people, potentially undetected or without consequence. This issue will most likely be an arms race between manufacturers and those wishing to manipulate AI/AS.

Privacy for AI/AS presents a new paradigm of potential Fourth Amendment violations: "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated". Mass production and implementation of AI/AS will provide ample opportunities for entities to overextend their use of power, creating a continued state of being watched through data collection and storage. UAVs provide a great tool for warranted surveillance through the use of zoom lenses, thermal vision, facial recognition, and behavior profiling. However, these devices could easily be transitioned to public monitoring for tracking individuals, groups, or other vehicles, with potential for voyeurism and discriminatory targeting and profiling. Once mass acceptance of AI/AS has occurred, society's civil liberties and privacy could easily and quickly be at stake, without sufficient protection or oversight.

Safety for AI/AS is another area of consideration for ethical approaches. By what metrics are AI/AS considered safe, or safe enough, for public use? By what ethical guidelines will society determine that an autonomous system has made a morally good decision under specific circumstances? If a worker is injured on the job by an unmanned system, would the liability be attributed to a human supervisor, to the company operating the unmanned machine in the workplace, or to the original manufacturer? These are complex scenarios that require new perspectives on what is considered the best ethical approach. Currently, most autonomous systems still require supervision by a human, but in the near future that may not be the case.

There are a multitude of considerations for designing AI/AS ethical approaches and how those decisions are made. For now, there is no clear-cut, easy way to design algorithms that would reconcile moral values and self-interest, through explicit logic or machine learning, especially when considering a spectrum of cultures with varying moral attitudes. However, the creation, adoption, and iterative improvement of specific guidelines should move closer towards that goal while taking these considerations into account. Currently, designers and manufacturers are creating the rules as they go along, typically falling back on personal or cost-effective choices. Designers are not lawyers, and they will need to be guided to balance protective laws and human rights standards with normal engineering capability. There is also the concern that AI/AS will be put into situations that the designers did not foresee. Under the current circumstances, the chosen ethical approach may be the one that allows for the most rapid adoption of autonomous systems, as it is becoming generally accepted that autonomous vehicles will perform much better than human drivers in more than enough situations. Getting to that point will still require the creation of quality laws and regulations, and the avoidance of bad decisions that could result in dire controversies or lawsuits. The growth and increasing visibility of organizations paving the way for these standards will hopefully provide a better approach going forward, especially once governing bodies begin adopting these principles.

III. CURRENT AI/AS ETHICS STANDARD CONSIDERATIONS

The IEEE Standards Association (SA) has been developing a set of basic guidelines to help alleviate some of the ambiguity of sensitive ethical concerns across a broad spectrum of circumstances. The IEEE's ethical considerations take a high-level look at five principles, separated into human rights, accountability, transparency, misuse, and measuring well-being, that can be applied to all of AI/AS.

The human rights principle dictates that AI/AS should be designed to respect human rights and freedoms, be verifiably safe and secure, and provide traceability in the case of the system causing harm, suggesting the need for governance frameworks to oversee these processes and build public trust in AI/AS. The accountability principle dictates the need for legislative bodies to clarify issues of responsibility and liability for AI/AS, and for the creation of a registration system for key parameters available within the AI/AS system. The transparency principle dictates the need for a simple way for users to understand what actions the system is performing and why, allowing for the understanding of actions that result in bad consequences. The misuse principle dictates the importance of raising awareness of ethical and security risks among a broad audience to avoid confusion and misunderstanding of AI/AS. The measure of well-being principle dictates the need for a quantifiable method of capturing society's value of specific AI/AS, in order to prioritize human well-being [6].

Germany has recently released twenty initial ethics guidelines specifically for autonomous vehicles, some of which can be applied across the entirety of AI/AS [7]. A high-level look at these principles shows a focus on human safety, accountability, and transparency. These principles go into more depth, assigning weights to safety concerns and risk, explicitly giving human safety precedence over all else while indirectly justifying the use of a utilitarian approach in broad terms. The main concern here is that these autonomous systems provide a net gain in safety at a bare minimum. These principles are much more explicit and relevant to manufacturers and engineers than the generic IEEE guidelines, while still maintaining enough vagueness to not hold to any specific pure ethical approach.

The Engineering and Physical Sciences Research Council has released five principles specifically for designers and builders of robots that would also be applicable to AI/AS to some extent [8]. A high-level look reveals these principles to focus on accountability and transparency. AI/AS robots are tools to be designed to comply with existing laws and with fundamental rights and freedoms, including the privacy of society. Robots should not be designed in a way that deceives vulnerable users through exploits, but should register key parameters within the system for public viewing. Robots should have an attributed human to whom legal responsibility is given, similar to a license and registration for vehicles or pets, so as to provide clear liability.

A new research group called the Partnership on Artificial Intelligence to Benefit People and Society has begun to develop a standard of ethics for the development of AI [9]. This group currently has over thirty partners, including Amazon, Apple, DeepMind, Facebook, Google, Intel, and Microsoft. Their main 8-point plan focuses on human rights, transparency, accountability, privacy, and security. Their goals align closely with the IEEE standard, even with an initially vague description. There are more explicit promises to defer to the public and to actively engage society as AI progresses. Although it appears to still be in its initial phase, this collection of major corporations partnering together shows the high-profile importance of AI/AS in the near future, and that there is a strong need for guidelines and accountability for AI/AS ethical considerations.

These committees and organizations are taking a first leap into providing concrete and contextual guidelines for ethics within AI/AS. Without explicit recommendations, AI/AS could pose an unknown threat to society or, at a minimum, greatly limit its quick and smooth adoption. A real world example would be regulators such as the FAA, which have placed high standards on autonomous vehicles to provide an equivalent or higher level of safety. The main goal for AI/AS is to provide a quantifiable benefit to humanity and the environment while mitigating risk. By aggregating these standard principles and providing well-defined, if still somewhat vague, backgrounds, reasoning, and recommendations, there will be an increase in understanding and discussion regarding ethics in AI/AS. Providing a starting point of common ground, from which discussion can take place, iterative objective improvements can occur, and more involvement between disciplines and industries can grow, will result in a more pragmatic approach to developing these standards over time.

IV. CASE STUDY

To begin the case studies, we will start with a modern version of the classic trolley problem. This problem was originally stated with a human decider but can easily be changed to an artificial decision maker, as is the case for autonomous vehicles. The case is as follows: an autonomously operated vehicle traveling at high speed comes upon a situation where it is forced into making one of two decisions; the vehicle can either attempt to stop, skidding through a crowd of people in the road, or swerve, hitting one pedestrian on the sidewalk.

First, we will investigate the possible losses at stake. Without considering the losses contained within the vehicle, there are two choices: several injuries and possible deaths by continuing into the crowd, or one almost guaranteed death by swerving onto the sidewalk. Several factors come into play when investigating this incident. Is the crowd in the street in the right to be in the street? Are they in a crosswalk, or are they jaywalking across a busy roadway? Should the person on the sidewalk be sacrificed to protect the many who are not following the rules?
In the second case, an autonomous vehicle traveling at high speed on a freeway becomes endangered by a recklessly operated, human-driven vehicle, which endangers the autonomous vehicle as well as several other vehicles on the freeway. In the given situation, the autonomous vehicle can attempt to evade the collision with the reckless human-driven vehicle by smashing into the central barricade of the freeway, injuring the occupants of the autonomous vehicle and endangering several other vehicles, or the autonomous vehicle can stand its ground, potentially crashing with the reckless vehicle and others around it. The investigation of this scenario is based around the idea of the autonomous vehicle sacrificing itself and others to avoid an incident with a reckless vehicle. Should the autonomous vehicle avoid the reckless vehicle, endangering other vehicles, or should it stand its ground and "take the hit"?

In the third case, a new autonomous vehicle development company has just released its two new models of autonomous vehicle for commercial distribution. As a potential buyer, you have the choice of color, style, and level of safety. The model A autonomous vehicle was developed for the general public and has all the capabilities of other current autonomous vehicles. The model X autonomous vehicle is almost exactly the same, with the same technology and features as the model A, with one exception: in an incident, given the ability to make a decision on loss, the model X will always minimize the loss inside the vehicle over losses outside the vehicle, adding a guarantee on the vehicle occupants' lives for an extra cost. This example is all but explicitly frowned upon by all developers of autonomous vehicles but, under current regulation, it is not illegal and thus must be considered when developing rules for ethical autonomous system design.
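The difference between the two hypothetical models can be reduced to a single occupant-priority weight inside an otherwise identical loss comparison. The sketch below is purely illustrative: the weights, the loss estimates, and the maneuver names are assumptions made for this example, and no real manufacturer's policy is implied.

    # Illustrative sketch of the model A / model X distinction as a single
    # occupant-priority weight in a weighted-loss comparison. All values
    # and the weighting scheme itself are hypothetical.
    def weighted_loss(occupant_loss: float, external_loss: float,
                      occupant_weight: float) -> float:
        # Combine expected harm inside and outside the vehicle, scaling
        # occupant harm by how strongly the vehicle protects its occupants.
        return occupant_weight * occupant_loss + external_loss

    def choose_maneuver(maneuvers: dict[str, tuple[float, float]],
                        occupant_weight: float) -> str:
        # Pick the maneuver with the lowest weighted loss; each entry maps
        # a maneuver name to (occupant_loss, external_loss).
        return min(maneuvers,
                   key=lambda m: weighted_loss(*maneuvers[m], occupant_weight))

    maneuvers = {
        "swerve into barrier": (0.8, 0.1),  # harms occupants, spares others
        "hold course": (0.1, 0.9),          # spares occupants, harms others
    }

    print(choose_maneuver(maneuvers, occupant_weight=1.0))    # model A-like: "swerve into barrier"
    print(choose_maneuver(maneuvers, occupant_weight=100.0))  # model X-like: "hold course"

Deciding what range of such an occupant weight is acceptable, or whether it may be sold as a paid option at all, is exactly the kind of rule the preceding sections argue a governing body would need to dictate.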
V. CONCLUSION

As we investigate these case studies, it is important to emphasize how important it is for researchers to take this problem seriously. Public view, and thus opinion, will be greatly affected by how these studies are portrayed and by how strenuous the effort of developing these systems and rules was. No matter how simplistic the example under review is, we must understand that in the future there will be thousands of lives riding on the decisions that are made.

Given the included case studies about autonomous systems and artificial intelligence ethics, it is shown that the importance of these arguments is only going to increase over the next few years as the development of these systems continues and they become more and more available for commercial consumption, increasing not only the likelihood of incidents like these but also the number of real examples that we have not yet conceived.

With the investment of research time and energy into developing rules for AI/AS development, such as the IEEE Standards Association rules for ethically aligned design and the German government ethics guidelines for autonomous vehicles, it is clear that the future of autonomous system and artificial intelligence development will require many guidelines to help with the process of making these difficult ethical decisions.

REFERENCES

[1] National Safety Council, "NSC Motor Vehicle Fatality Estimates."
[2] ASIRT, "Road Crash Statistics." [Online]. Available: http://asirt.org/initiatives/informing-road-users/road-safety-facts/road-crash-statistics. [Accessed: 01-Sep-2017].
[3] J.-F. Bonnefon, A. Shariff, and I. Rahwan, "The social dilemma of autonomous vehicles," 2015.
[4] S. Bringsjord, K. Arkoudas, and P. Bello, "Toward a General Logicist Methodology for Engineering Ethically Correct Robots," IEEE Intelligent Systems.
[5] N. Bostrom, "Moral uncertainty – towards a solution?" Overcoming Bias, 2009. [Online]. Available: http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html. [Accessed: 01-Sep-2017].
[6] IEEE Standards Association, "IEEE Ethically Aligned Design," 2016.
[7] Federal Ministry of Transport and Digital Infrastructure, "Ethics Commission Creates World's First Initial Guidelines For Autonomous Vehicles." [Online]. Available: http://www.germany.info/Vertretung/usa/en/__pr/P__Wash/2017/06/21-AutonomousVehicles.html?archive=3393378. [Accessed: 01-Sep-2017].
[8] EPSRC, "Principles of robotics," 2010. [Online]. Available: https://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/principlesofrobotics/. [Accessed: 01-Sep-2017].
[9] Partnership on AI, "Tenets | Partnership on Artificial Intelligence," 2017. [Online]. Available: https://www.partnershiponai.org/tenets/. [Accessed: 01-Sep-2017].

