Document Type

Honors Project

Publication Date

5-29-2013

Abstract

Artificial intelligence seeks to create intelligent agents. An agent can be almost anything: an autopilot, a self-driving car, a robot, a person, or even an anti-virus system. While the current state of the art may not achieve intelligence (a rather dubious thing to quantify), it certainly achieves a sense of autonomy. A key aspect of an autonomous system is its ability to maintain and guarantee safety—defined as avoiding some set of undesired outcomes. The piece of software responsible for this is called a planner, which is essentially an automated problem solver. An advantage computer planners have over humans is their ability to consider and contrast far more complex plans of action.

Safety may be defined probabilistically, in which case the probability of “failure” must be below some given threshold θ. The process of deciding the level of safety a plan achieves is called verification. The plans considered in this work are too complex to analyze analytically (the process would take too much time and/or memory to complete). This motivates a statistical sampling-based approach, which works by generating “sample traces” of the plan—like simulating a roll of dice. DAGification—the systematic expansion of the plan's representation into a directed acyclic graph—allows the computation of the required probabilities for safety with bounded levels of error and in a reasonable number of samples. This work presents several new DAGification schemes with a detailed discussion of their correctness.
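The sampling-based verification idea above can be sketched in a few lines. This is not the thesis's implementation—only a minimal illustration of the general technique: draw enough sample traces that the empirical failure rate is within ε of the true rate (via the standard Hoeffding bound), then test it against the threshold θ. The `run_trace` function is a hypothetical stand-in; a real verifier would simulate the plan's stochastic actions step by step.

```python
import math
import random

def samples_needed(epsilon, delta):
    """Hoeffding bound: with this many samples, the empirical failure
    rate lies within epsilon of the true rate with prob. >= 1 - delta."""
    return math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))

def run_trace(failure_prob, rng):
    """Hypothetical stand-in for executing one sample trace of a plan.
    Here the 'plan' is reduced to a single failure probability."""
    return rng.random() < failure_prob  # True: this trace ended in failure

def verify(failure_prob, theta, epsilon=0.01, delta=0.05, seed=0):
    """Estimate P(failure) by sampling, then test it against theta."""
    rng = random.Random(seed)
    n = samples_needed(epsilon, delta)
    failures = sum(run_trace(failure_prob, rng) for _ in range(n))
    p_hat = failures / n
    # Declare the plan safe only if the estimate is below theta by
    # more than the error bound, so the conclusion is reliable.
    return p_hat, p_hat + epsilon <= theta

p_hat, safe = verify(failure_prob=0.02, theta=0.05)
```

With ε = 0.01 and δ = 0.05 the bound requires 18,445 traces—a large but tractable number, which is the sense in which sampling trades exactness for feasibility on plans too complex to analyze exactly.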

Level of Honors

summa cum laude

Department

Computer Science

Advisor

Kurt D. Krebsbach