The Ethics Of Artificial Intelligence


Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines. These machines are built to perform tasks that involve learning, planning, and problem solving, and knowledge engineering is central to their development. Artificial intelligence is pursued with the goal of creating a machine capable of thinking and reacting like a human (What Is Artificial Intelligence (AI)?). As the field expands rapidly, AI is becoming more complex, and the lifelike quality of these new technologies has led to discussions of the ethics involved in creating them. Most philosophers and scientists believe that moral guidelines must be in place when stepping into this field of science, and roboethics has emerged as one such area of concern.

These morals involve ensuring that machines do not harm humans, as well as evaluating the ethical status of the machines themselves, such as the rights given to them and how they are treated (Bostrom and Yudkowsky). Nick Bostrom, a researcher at the Future of Humanity Institute, and Eliezer Yudkowsky of the Machine Intelligence Research Institute collaborated on a paper regarding the ethics of artificial intelligence. According to them, future AI systems may attain moral status as they develop independent thinking and learning abilities. With this in mind, problems arise, such as machines developing unfair decision algorithms that discriminate against different races and genders. When AI algorithms take on cognitive tasks previously performed by humans, they inherit the social requirements of those tasks and can carry them out in ways that do not benefit everyone who uses them. Bostrom and Yudkowsky believe it is important that artificial intelligence be made "robust against manipulation." The AI must be able to stay true to the task it is given and not be influenced by outside forces. An individual mind must be created to stay on task.

The authors also describe a type of moral assessment known as the principle of substrate non-discrimination, which holds that even though an AI is not a living, biological being, it can still be morally relevant. The paper closes with the idea that artificial intelligence currently poses few ethical issues, but as the field expands and technologies become more humanlike, it is important to build moral capabilities into AI so that machines are fair and do not obtain too much power over humans. Limitations need to be put in place to ensure that artificial intelligences are morally sound machines that neither harm humans nor perform their tasks unfairly (Bostrom and Yudkowsky). Writers for Nature share the idea that ethical limits need to be taken more seriously when creating artificial intelligence. Stuart Russell, a professor at the University of California, Berkeley, writes on the use of AI as weapons. Super-intelligent machines could potentially be used as weapons, which creates a debate over whether or not it is ethical to use weapons that are out of human control.
