Stanford University
Center for Automotive Research at Stanford (CARS)
Dynamic Design Lab (DDL)

Autonomous Vehicles and Ethics: A Workshop
Palo Alto, California
14-15 Sept 2015


Purpose

This workshop will focus on the ethics of autonomous vehicles, examining near-term practical issues for industry, especially as they relate to programming.

It will extend discussions from our larger meeting earlier this summer.

Structure

The workshop will be a closed, invitation-only meeting with about 30 participants from academia, law, and industry, conducted under the Chatham House Rule to promote free discussion.

Unlike traditional academic workshops, at least in the US, this will truly be a working meeting, not merely a series of lectures. Each working session will begin with a 15-minute briefing on a particular issue, followed by an hour-long open discussion. The discussion will be moderated to keep it focused and productive and to explore specific scenarios and positions.

The goal is to draw out expert insights and to identify further points of contention as we continue this line of research.


Working Agenda

Day One (14 Sept 2015)

0830-0900:   Welcome + breakfast/coffee

0900-0915:   Introductory remarks and participant introductions

0915-0930:   Crash course on ethical theories

0930-1045:   What is the prime directive?

  • Questions: Should the car obey the law first and foremost, or is the primary goal to avoid collisions, to minimize net harm, to maximize total utility, to obey a set of conditional ethical rules, or something else? What if the ethical response is illegal, or the legally permissible (or obligatory) response is unethical? Is ethical design the same as functional safety? What are some scenarios to consider here and throughout the workshop, and do edge cases matter? Is it hyperbole that cars may have to make life-or-death decisions? (A sketch of one candidate rule ordering follows this item.)
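
To make the legality-versus-harm tension concrete, below is a minimal sketch in Python, assuming a hypothetical lexical ordering that ranks expected harm above legality; the maneuver names, flags, and numbers are invented for illustration, not drawn from any actual system.

```python
# Hypothetical illustration only: encoding a "prime directive" as a
# fixed priority ordering over candidate maneuvers. All names, flags,
# and numbers here are invented for discussion, not a proposed standard.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    breaks_law: bool      # e.g., crossing a double yellow line
    expected_harm: float  # estimated harm to persons, arbitrary units

def choose(options):
    # Lexical ordering: minimize expected harm first; break ties by
    # preferring the legal option (False sorts before True).
    return min(options, key=lambda m: (m.expected_harm, m.breaks_law))

# A dilemma where the lower-harm maneuver is the illegal one.
options = [
    Maneuver("brake in lane", breaks_law=False, expected_harm=0.8),
    Maneuver("swerve across double yellow", breaks_law=True, expected_harm=0.2),
]
print(choose(options).name)  # -> "swerve across double yellow"
```

Swapping the tuple order to (m.breaks_law, m.expected_harm) encodes the opposite directive, obey the law first; the point of the session is that the ordering itself is the design decision.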

1045-1100:   Coffee and networking break

1100-1215:   Values and weights for decision algorithms

  • Questions: How should crash-avoidance/optimization algorithms be designed: what are the classes of objects to account for, and how much weight should each get? What is the process for arriving at those weights or values? Should certain classes (e.g., passengers, pedestrians) have special status? What about purchasers and transferees, including when they are business entities? How should property damage be weighed against harm to persons? How should uncertainties be accounted for? (A sketch of a probability-weighted cost over object classes follows this item.)
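
As one way to frame the discussion, below is a minimal sketch of a probability-weighted cost function over object classes; the classes, weights, probabilities, and severities are invented placeholders, and settling on such values is precisely the open question this session raises.

```python
# Hypothetical illustration only: a crash-optimization cost that assigns
# weights to classes of objects. The classes, weights, probabilities,
# and severities below are invented placeholders.

CLASS_WEIGHTS = {
    "pedestrian": 10.0,  # special status for unprotected persons?
    "cyclist":     9.0,
    "passenger":   8.0,
    "property":    1.0,  # how should property trade off against persons?
}

def expected_cost(outcomes):
    """Probability-weighted harm for one maneuver.

    outcomes: list of (object_class, probability_of_impact, severity),
    so uncertainty enters through the probability term.
    """
    return sum(CLASS_WEIGHTS[cls] * prob * severity
               for cls, prob, severity in outcomes)

# Compare two candidate maneuvers under uncertainty.
brake  = [("pedestrian", 0.10, 1.0)]                        # cost 1.00
swerve = [("property", 0.90, 1.0), ("cyclist", 0.02, 1.0)]  # cost 1.08
print(min([("brake", brake), ("swerve", swerve)],
          key=lambda kv: expected_cost(kv[1]))[0])  # -> "brake"
```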

1215-1345:   Lunch

1345-1500:   Adjustable ethics settings

  • Questions: If there’s no clear consensus on values/weights, would it be legally or ethically permissible (or obligatory) to give operators a choice in setting those weights? May different models have different ethics profiles or “personalities”? Would it be better to have some standard for this “ethics setting,” and who should determine that standard? Is there a role for random-number generators in resolving dilemmas? What is the role for legislation or regulation? What is the effect of political and social dynamics? (A sketch of a standards-bounded ethics setting follows this item.)
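
As a strawman for discussion, below is a minimal sketch of an operator-adjustable setting clamped to a hypothetical regulator-defined range; the dial, its bounds, and its semantics are all assumptions, and whether such a dial should exist at all is the session’s question.

```python
# Hypothetical illustration only: an operator-adjustable "ethics setting"
# clamped to a range a standards body might mandate. The dial, bounds,
# and semantics are assumptions, not an existing or proposed regulation.

STANDARD_MIN = 0.3  # hypothetical lower bound on self-protection bias
STANDARD_MAX = 0.7  # hypothetical upper bound

def effective_setting(owner_setting: float) -> float:
    """Owner picks a self-protection bias in [0, 1] (0 = altruistic,
    1 = self-protective); the standard clamps what the car will accept."""
    return min(STANDARD_MAX, max(STANDARD_MIN, owner_setting))

print(effective_setting(1.0))  # -> 0.7: the standard caps the dial
print(effective_setting(0.0))  # -> 0.3: and floors it
```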

1500-1515:   Coffee and networking break

1515-1630:   Legal liabilities

  • Questions: What are the sources of legal liability, for the manufacturer as well as owners, operators, pedestrians, and infrastructure providers, arising from the preceding discussions and beyond? Does algorithmic transparency help (or hurt)? Should there be special immunities under the law, or should default negligence, strict liability, and other liability doctrines become the standard? What does this issue mean for insurance, and who will be legally required to carry it?

1630-1645:   End-of-day remarks

1700-1830:   Evening reception


Day Two (15 Sept 2015)

0830-0900:   Breakfast/coffee

0900-1015:   Human-computer interface

  • Questions: How serious is the “handoff problem,” and what are some solutions? Under what conditions would a handoff of control release the manufacturer from liability, and when would it not? Can the “handoff problem” itself be a source of liability? Will legal standards require full automation?

1015-1030:   Coffee and networking break

1030-1145:   Abuse

  • Questions: How should the car deal with abusive behavior by other drivers, e.g., playing “chicken” with the car? How about abuse or misuse by owners? Should cars have self-defense mechanisms, e.g., in the event of a carjacking; and if so, what kind? How should hacking by owners and malicious actors be addressed? Is it ever permissible for a car to act deceptively or issue false information—would it ever need to? Should vehicles have a “kill switch” by which law enforcement agencies could stop them?

1145-1215:   The road ahead

  • Questions: How should privacy and information security be addressed? What other issues need to be addressed in the near term? In the mid and far term? What are ways to better account for ethics in engineering, and can we embed ethics by design?

1215-1230:   Concluding remarks




Last updated on 5 Sept 2015.


Project Sponsors

  • California Polytechnic State University, San Luis Obispo
  • Stanford University
  • US National Science Foundation

Contact

  • Patrick Lin, Ph.D., Director
    palin [at] calpoly.edu


Copyright © Ethics + Emerging Sciences Group