Autopilot Cars

30 Nov 2017

One of the most controversial topics in the recent automobile industry is the autopilot feature for cars. Tesla Motors, widely known for its all-electric vehicles, is one of the automobile companies implementing autopilot features. The feature allows passengers to reach their desired destination without manually steering the wheel; the car drops them off at the destination, then finds a parking spot and parks itself. Tesla believes that the autopilot system is capable of driving more safely than a human driver, and so it could reduce unfortunate car accidents.

Personally, I think autopilot for cars is, in general, a positive feature to have, since it could decrease car accidents by a huge amount. It is difficult to take a side on this issue because, as many people point out, self-driving systems are not perfect; but the fact is that right now, car accidents involving human drivers are one of the biggest causes of death in the nation. If implementing the system saves a large number of lives, I think it is a good thing. However, it is obvious that creating a reliable system is not easy, and it would require careful thought about ethical obligations.

First of all, the whole point of the self-driving implementation is to decrease accidents and save more lives, so it must put passengers’ safety first. Obviously, it must be able to detect objects or people around the car to avoid collisions, and it must follow traffic law: not changing lanes across double yellow lines, stopping at signs and lights, and so on. This obligation relates to the ACM Code of Ethics and Professional Conduct 1.1, “Contribute to society and human well-being.”
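As a toy illustration of what such rule-following logic might look like, here is a minimal sketch in Python. Everything in it (the `RoadState` fields, the `is_action_allowed` check, the specific rules) is a hypothetical simplification for this post, not any real autopilot system’s API:

```python
from dataclasses import dataclass

@dataclass
class RoadState:
    """Hypothetical snapshot of what the car's sensors report."""
    obstacle_ahead: bool  # person or object detected in the lane
    lane_marking: str     # e.g. "dashed" or "double_yellow"
    signal: str           # e.g. "green", "red", "stop_sign"

def is_action_allowed(action: str, state: RoadState) -> bool:
    """Rule-based safety check: reject actions that risk a collision
    or break basic traffic law. A real system would be far more complex."""
    if action == "proceed" and (
        state.obstacle_ahead or state.signal in ("red", "stop_sign")
    ):
        return False  # must stop for obstacles, red lights, and stop signs
    if action == "change_lane" and state.lane_marking == "double_yellow":
        return False  # crossing a double yellow line is illegal
    return True

# Example: the car may proceed on green, but may not cross a double yellow line.
state = RoadState(obstacle_ahead=False, lane_marking="double_yellow", signal="green")
assert is_action_allowed("proceed", state)
assert not is_action_allowed("change_lane", state)
```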

However, even working with a “safety-first” intention, I imagine it would be incredibly difficult to train the AI to handle every dangerous case that may or may not happen. For example, in unfortunate cases like the one described in “Why Self-Driving Cars Must Be Programmed to Kill,” neither the ten people crossing nor the passengers in the car should be prioritized over the other. Instead, developers of the system should try, to the best of their abilities, to find a way to save everyone. Thus, as the ACM Code of Conduct states in section 1.2, “Avoid harm to others,” one of the developers’ obligations would be to anticipate as many possible cases as they can and prepare for them. In order to do so, developers must strive to “acquire and maintain professional competence” (ACM Code of Conduct section 2.2).