On March 18, at around 10 p.m., an Uber self-driving car struck and killed a woman as she was crossing a street in Tempe, Arizona. The vehicle was in autonomous mode at the time of the accident, although a safety driver was behind the wheel.
The weather was clear when the fatal crash occurred, so conditions were unlikely to have been a factor.
Uber suspended all road tests of its self-driving cars after the incident.
Arizona is one of the US states friendliest to auto and technology businesses, and it has no licensing system for autonomous vehicles. By contrast, California requires companies to submit detailed testing data, including accident reports, routes, and how often safety drivers have to intervene.
The US House of Representatives moved quickly to pass the first federal bill governing self-driving cars, the Self-Drive Act, but it has stalled in the Senate.
Some lawmakers believe the authorities should step up regulation of the new technology, and the Uber crash may further heighten safety concerns about self-driving systems.
Many companies are investing heavily in autonomous cars, hoping to commercialize the technology in the near future.
But the accident highlights the need to balance innovation with safety.
Should companies be more transparent about their new technologies?
What safety standards should be met before self-driving technology can be considered ready for public roads? We should remember that even human drivers do not have a perfect safety record.
Should the public be properly informed that they might be involved in such tests? And under what circumstances should testing data be published and monitored?
These questions are likely to feature more prominently in public debate in the years ahead.
The full article appeared in the Hong Kong Economic Journal on March 28.
Translation by Julie Zhu