
Self-driving Cars Need Virtual Rails

The alternative is more needless fatalities

Self-driving cars have been involved in many widely publicized mishaps. Without all the data, we cannot determine that the self-driving cars caused these problems; that determination is the role of the National Highway Traffic Safety Administration (NHTSA). But the evidence certainly points in that direction.

In one recent crash, a Tesla Model 3 collided with a trailer, which sheared off the entire roof of the car, killing the passenger. Several facts indicate that the car was in self-driving mode: it did not brake, though even an inattentive driver would likely have made a last-minute attempt to avoid the crash. Also, the car continued to drive for a third of a mile after the impact. These circumstances are eerily reminiscent of a fatal 2016 self-driving crash.

A Reddit user hacked Tesla's software to see what the car may have been “thinking.” The software does detect the trailer, but the trailer is apparently high enough off the road that the software can still see the road ahead beneath it. It apparently mistakes the trailer for an overhanging object (like a highway sign).
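
To illustrate this failure mode, here is a minimal, hypothetical sketch of the kind of height-based heuristic that could produce such a misclassification. The function name, threshold, and logic are all invented for illustration; Tesla's actual perception software is proprietary and far more complex.

```python
# Hypothetical obstacle-classification heuristic. All names, thresholds,
# and logic are invented for illustration; the real perception stack is
# not public.

def classify_detection(bottom_edge_height_m: float,
                       road_visible_beneath: bool) -> str:
    """Classify a detected object in the vehicle's path."""
    # If the object's underside sits well above the road and the road
    # surface is still visible beneath it, a naive rule may file the
    # object with bridges and highway signs rather than obstacles.
    if road_visible_beneath and bottom_edge_height_m > 1.0:
        return "overhead_object"  # planner ignores it and drives on
    return "obstacle"             # planner must brake or steer away

# A trailer bed roughly 1.2 m off the ground leaves the road visible
# beneath it, so this heuristic waves the car through -- precisely the
# failure mode described above.
print(classify_detection(1.2, road_visible_beneath=True))  # overhead_object
```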

A similar video, uploaded to YouTube by another driver, shows a Tesla trying to drive into the same barrier that had killed another Tesla driver who was using Autopilot.

I should note that NHTSA generally assumes that problems with self-driving vehicles are the driver’s fault. The last slide from a presentation at a 2018 government–industry meeting informs us:

6. Per the 2015 Tesla Model S Owner’s Manual, it is the driver’s responsibility to “stay alert, drive safely and be in control of the vehicle at all times.” In addition, “…be prepared to take corrective action at all times.” — Harold Herrera, Crash Investigation Division, “Special Crash Investigations Program: Investigation of a Fatal Crash Involving a Vehicle with Level 2 Automation,” SAE Government/Industry Meeting, January 24–26, 2018, Washington, DC

That is, the software is seen as merely providing driver assistance; it does not assume the responsibility of the driver. As we discussed in a previous article, the moral responsibility of the operator of the vehicle changes at different levels of self-driving. Until cars reach at least Level 4, the moral responsibility for the actions of the vehicle rests with the driver.

Unfortunately, hype-mongers have misled the public about both the technical and moral aspects of self-driving vehicles. Tesla, for instance, offers a video on its website claiming that “the person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself.”

But the person in the driver’s seat is doing something very important—he is responsible for all of the actions of the car. He is responsible for being aware and taking over should the car misbehave. That, as we have seen, happens. The fact that an automaker can so blatantly mislead the public on both the technical capabilities of the vehicle and the driver’s moral and technical responsibilities is appalling.

To add insult to injury, this same company (with the technical problems outlined above) is releasing a product this year that it calls “Full Self-Driving,” despite the fact that it isn’t fully self-driving. In fact, many in the auto industry have called out Tesla for misleading labeling. But the company has a history of dubious claims about cars that can drive themselves (Level-5 self-driving).

These technical problems affect other manufacturers as well. The difference is that their software is marketed as driver assistance, not as self-driving or autopilot, and certainly not as full self-driving.

While numerous technical hurdles remain, there is also the lurking problem of making sure that self-driving vehicles are compatible with the human drivers who predominate on the road. Will prediction algorithms succeed both when self-driving vehicles encounter each other and when they encounter human drivers? What about self-driving vehicles with software from different manufacturers? How should the public behave around a self-driving car? Who will decide who is responsible in a Level-5 self-driving vehicle wreck?

Earlier this year, Tesla said that its forthcoming software relies heavily on geocoding various facts about roads and intersections. What happens when roads change, signage changes, and stop signs get added, removed, or upgraded to stoplights? Will Tesla send representatives to every city planning meeting to know when to update its maps?
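
A minimal sketch of the staleness problem, assuming a hypothetical record format for geocoded intersection facts (the field names and example data are invented for illustration):

```python
# Hypothetical record of a geocoded road fact. The format is invented
# for illustration; Tesla's actual map data is proprietary.
from dataclasses import dataclass

@dataclass
class IntersectionRecord:
    lat: float
    lon: float
    control: str        # "stop_sign", "stoplight", or "uncontrolled"
    last_verified: str  # when this fact was last checked against reality

# The stored map says this intersection is controlled by a stop sign...
stored = IntersectionRecord(lat=36.15, lon=-95.99,
                            control="stop_sign", last_verified="2019-01")

# ...but the city has since upgraded it to a stoplight. Until the
# manufacturer learns of the change and ships a map update, the planner
# approaches the intersection with the wrong expectations.
reality = "stoplight"
if reality != stored.control:
    print(f"Stale map: expected {stored.control}, found {reality}")
```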

One solution, suggested by several Mind Matters authors, would address the technical, social, and moral problems at the same time: virtual rail systems. A virtual rail is essentially a road that is built expressly for driverless cars. It would have:

● Signage to make clear that the roadway is a virtual rail and thus that the cars may not have drivers
● Beacons to mark the roadways clearly at night
● A publicly available map of the roadways that is updated when the roads are updated
● Traffic laws specific to virtual rail
● Test criteria with which vehicle manufacturers must comply if their vehicles are to be approved for virtual rail
● Beacons embedded in the roads that tell the cars things like (a) where the road is, (b) mile markers, and (c) other relevant details, possibly including road conditions, using a defined, accepted communication and behavior protocol (see the sketch after this list)
● Appropriate (and well-defined) exception policies for exceptional events
● Entrance/exit ramps to the self-driving roadway where the car/driver can transition safely from ordinary driving to self-driving
● An autonomous vehicle “rest stop” at the end of a virtual rail roadway where the car can be parked if the driver cannot immediately reassume control.
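
As a sketch of the beacon idea above, here is one hypothetical shape such a standardized message might take. The field names, enum values, and versioning scheme are assumptions for illustration, not a proposed standard:

```python
# Hypothetical beacon message for a virtual-rail roadway. All field
# names and values are invented for illustration.
from dataclasses import dataclass
from enum import Enum

class RoadCondition(Enum):
    CLEAR = "clear"
    WET = "wet"
    ICY = "icy"

@dataclass(frozen=True)
class BeaconMessage:
    road_id: str                 # which virtual-rail roadway this beacon marks
    mile_marker: float           # (b) position along the roadway
    lane_center_offset_m: float  # (a) where the road is, relative to the beacon
    condition: RoadCondition     # (c) other details, e.g., road conditions
    protocol_version: str        # lets cars reject messages they don't understand

# Example message a car might receive as it passes a beacon:
msg = BeaconMessage(road_id="VR-001", mile_marker=42.3,
                    lane_center_offset_m=0.0,
                    condition=RoadCondition.WET,
                    protocol_version="1.0")
print(f"{msg.road_id} @ mile {msg.mile_marker}: {msg.condition.value}")
```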

These policy recommendations would transform autonomous vehicles from a “celebrity inventor” cultural statement into a practical engineering project. Self-driving roads would require quite a bit of investment capital but not significantly more than other kinds of roads.

If a standard were established, future roads could be built with special self-driving lanes, similar to the high-occupancy vehicle (HOV) lanes used in some cities. As these special lanes expand and interconnect, they would eventually enable significant trips in a truly self-driving vehicle.

The current mad dash to replace solid transportation planning with an AI bandage is doomed to failure, and possibly to more fatalities. Virtual rails are the solution, one that accounts for both the social and technological realities of the situation. But they require us to put our enchantment with all things AI behind us and create a viable plan for the future.

Also by Jonathan Bartlett: Who assumes moral responsibility for self-driving cars?

and

Guess what? You already own a self-driving car

See also: Virtual roads and West Virginia back roads: AI’s temptation to theft over honest toil (William A. Dembski)


Jonathan Bartlett

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Jonathan Bartlett is a senior software R&D engineer at Specialized Bicycle Components, where he focuses on solving problems that span multiple software teams. Previously he was a senior developer at ITX, where he developed applications for companies across the US. He also offers his time as the Director of The Blyth Institute, focusing on the interplay between mathematics, philosophy, engineering, and science. Jonathan is the author of several textbooks and edited volumes which have been used by universities as diverse as Princeton and DeVry.
