Document Type

Honors Project

Publication Date

5-31-2017

Abstract

Humanity seems well on its way to creating artificial general intelligence, or AGI, within the next century. Such a creation poses great existential risk to humanity, as an AGI of suitable power could conceivably wipe us all out, either by accident or through actual malevolence, and this threat has led many to search for a solution to the “Control Problem”. Current theories propose various kinds of rule-based solutions, like Asimov’s Three Laws of Robotics, supposing that a rule-based system would be sufficient for creating a cooperative AGI. I argue that this is not the case; rather, what is necessary is an AGI with a human-like moral system. Building on the work of Rawls and Mikhail, I have created a property-based, explanatorily accurate theory of moral grammar, which I believe will allow us to instill a human-like moral grammar in AGI. This, I argue, is the only way to ensure that our creations will be cooperative and work for the betterment of humanity rather than ending it.

Level of Honors

cum laude

Department

Philosophy

Advisor

Mark Phelan

