Note: these are purely the personal views of Arthur Richards, and do not necessarily represent the views of the VENTURER project or any of its members.
I'm delighted to be part of the VENTURER consortium, trialling autonomous cars in Bristol and South Gloucestershire. My role in VENTURER at Bristol Robotics Lab is to deliver a decision-making system for the car, figuring out where to move next given sensor information from other partners' equipment. We'll not be developing a complete car automation solution, but we will give it just enough brains to do something interesting in the test scenarios. In particular, we'll be looking at motion planning as a tool to resolve those scenarios that don't really appear in the Highway Code, but occur often enough round Bristol. Consider a bus trying to pass a recycling lorry opposite some roadworks and with a car trying to pull out of a drive in between...
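To make that concrete, here's a deliberately toy sketch of the kind of rule a decision-maker has to encode for the recycling-lorry scenario: given an obstruction in your lane and oncoming traffic, do you pull out, wait, or creep forward for a better view? This is not VENTURER code - the function, thresholds and timings are all invented for illustration - and a real planner would reason over trajectories and uncertainty rather than three if-statements.

```python
# Toy illustration only: a hand-rolled decision rule for "pass the parked
# lorry against oncoming traffic". All names and numbers are invented.

def choose_action(gap_to_oncoming_s, time_to_pass_s, view_blocked):
    """Decide how to deal with an obstruction in our own lane.

    gap_to_oncoming_s: estimated time (s) before the nearest oncoming
                       vehicle reaches the obstruction.
    time_to_pass_s:    estimated time (s) we need to clear the obstruction.
    view_blocked:      True if the sensors cannot see far enough up the road.
    """
    SAFETY_MARGIN_S = 3.0  # made-up buffer on top of the passing time

    if view_blocked:
        # Can't see the oncoming lane, so don't commit: edge forward for a better view.
        return "creep"
    if gap_to_oncoming_s > time_to_pass_s + SAFETY_MARGIN_S:
        return "overtake"
    return "wait"


if __name__ == "__main__":
    print(choose_action(12.0, 6.0, False))  # overtake: plenty of gap
    print(choose_action(7.0, 6.0, False))   # wait: gap too tight once the margin is added
    print(choose_action(20.0, 6.0, True))   # creep: can't trust the gap estimate
```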
Since VENTURER started I've had a few regular questions come up. Google's recent accident raises plenty more (and the reports give us a fascinating view into how the car sees the world). Here is my FAQ.
Will self-driving cars have steering wheels?
Apparently lots of people want them, because they don't trust technology and want to remain in control. That seems fair enough: given the frustrating experience of automated checkouts, we're accustomed to the idea that automation needs a bit of supervision. In a way it makes life easier for us technology developers too: if the driver is still in charge of the vehicle, alert and ready to take control at a moment's notice, then the technology is really only there to assist the driver and doesn't have to be capable of handling all situations. In short, it doesn't have to be as reliable, because there's a human backstop if it fails. We call this capability "handover" - the ability to hand over control from computer to human, or vice versa.
But in my view, handover is a fallacy for two reasons. First, if I still have to remain as alert and responsible for the vehicle as if I'm driving it, what's the point in all this shiny expensive kit to automate it? Second, if I'm responsible for stepping in when it does something wrong, I need to be very sure that I can recognize "wrong" and know better what "right" is. That means I have to understand what the automated driving system is doing and why, so I can tell when it's gone wrong. So, now, not only do I still need to be as alert and capable a driver as I ever was, but I've got a whole load of extra techie stuff to learn about how this fancy gadget works.
You can see more problems with handover if you think about the liability issues. So let's suppose I'm in my car in auto-drive mode. I'm as alert as I am now, but I can still easily miss something: 42% of all accidents identify "driver failed to look properly" as a contributing factor. My car picks up on a movement of another car that I've not seen, and reacts. I don't like the reaction, take control, and have an accident. Who's at fault?
Meanwhile, if my car is actually pretty good in auto-drive mode, I get used to letting it run. My own driving gets worse as I'm out of practice, and since I so rarely need to do anything in the car, like it or not, my attention wanders. So, I'm not as alert as I need to be, and I'm not as practised at what to do.
But can't the car tell me when it needs me to take control?
This too is a fallacy. The handover capability is there so that if something goes wrong with the auto-driving system, you can take over control of the car. If the auto-drive is capable of detecting when it's gone wrong, then it should also be able to fix the problem. This is like saying "I'll call you if I have a problem with my phone."
So there won't be a steering wheel?
Hold on! I've argued that taking control during driving is a bad idea. However, a self-driving car is going to be a complicated bit of kit, with lots of sensors, computers and electronics all over it. Things will break. Given everything I've said above, the car needs to be able to get you safely to the side of the road, without your intervention, in the event of a failure, and this kind of "fail-safe" engineering is common. Even so, a secondary manual driving capability from kerbside to garage is probably essential. It would be awful if someone only had to back gently into your bumper, knocking out a couple of sensors, to immobilize the whole car. Your self-driving car is going to be covered with gadgets, and they make most sense at the periphery, where they've got the best view, so they're going to get bumped in traffic. The steering wheel here is in a role similar to that of a get-you-home spare tyre: it's not for regular use, but it's there if there's no alternative.
So there is a steering wheel, but I shouldn't be tempted to use it?
Pretty much. The wheel is there if you want to use it or if, due to some failure, the car can't be started in auto-drive mode. It's not there for you to take over from auto-drive mode while in motion.
But I still want to be in control? How can I gain trust in the auto-drive before I'm ready to go all the way?
This is the best question yet, and one to which I don't have a good answer. This kind of trust only comes from accumulated experience. I'm reasonably happy to sit on a flight without having watched the pilot fly a few times before, or studied the design of the aircraft. Instead, I base my trust in the history and culture of the airline and aviation business. So, somehow, all the stakeholders in auto-driving cars need to find a way to earn this trust. Easy to say...
Will my self-driving car be more efficient? Will it help the environment?
Interesting question. There are many different ways in which this technology will impact the environment, especially vehicle emissions. I've not run the numbers, but we can think about it qualitatively at least. For the purposes of this thought experiment, assume that a self-driving car means one that can drive itself with no human driver attention, but is otherwise identical to any other car. (People tend to think of self-driving cars as futuristic electric vehicles, but you could apply electric propulsion to manually driven cars just as well. The same goes for regenerative braking or auto-engine-stopping in traffic. Let's keep those separate from self-driving technology.)
On the plus side, how will self-driving cars help the environment?
The automated driving system ought to improve efficiency by optimizing acceleration and braking profiles, and gear changes if necessary. I'm not sure how significant these are, though. Optimizing speed would be a more meaningful impact, but how often is speed free to be optimized, rather than set by congestion? Not that much.
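As a rough, back-of-envelope illustration of why braking profiles matter (the car mass and speeds below are assumed, not measured): every time the car brakes, the kinetic energy it had built up is thrown away as heat, so a driving profile with fewer or gentler slow-down cycles wastes proportionally less energy.

```python
# Back-of-envelope sketch: kinetic energy discarded by braking in a stop-start
# profile versus a smoother one. All figures are illustrative assumptions.

def braking_energy_kj(mass_kg, speed_drops):
    """Energy (kJ) dissipated braking through each (v_high, v_low) pair, in m/s."""
    return sum(0.5 * mass_kg * (v_hi**2 - v_lo**2) for v_hi, v_lo in speed_drops) / 1000.0

CAR_MASS_KG = 1500.0  # assumed typical car

# Jerky urban run: ten cycles of accelerating to ~50 km/h (14 m/s) then braking to rest.
jerky = [(14.0, 0.0)] * 10
# Smoother run over similar ground: ten gentler slow-downs from 14 m/s to 8 m/s.
smooth = [(14.0, 8.0)] * 10

print("jerky profile wastes  %.0f kJ in braking" % braking_energy_kj(CAR_MASS_KG, jerky))   # ~1470 kJ
print("smooth profile wastes %.0f kJ in braking" % braking_energy_kj(CAR_MASS_KG, smooth))  # ~990 kJ
```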
One significant positive impact could be in the reduction of accidents. If self-driving really does lead to fewer accidents - and if it doesn't, there was very little point - then there could be significant reductions in knock-on congestion. A reduction in accidents could also cut the carbon costs of building spare parts and replacement vehicles, because fewer cars would be damaged or written off.
Long term, after a significant uptake of self-driving cars, congestion could be ameliorated by cooperative routing strategies. Self-driving technology goes hand-in-hand with the connected car concept, and although you could add vehicle-to-vehicle (V2V) or vehicle-to-infrastructure (V2I) capability to any car, they seem pretty likely to come together. Imagine an optimized response to high-bandwidth traffic reporting.
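As a toy illustration of what cooperative routing could buy (this is the classic Pigou example from routing-game theory, with invented numbers, not anything VENTURER-specific): if every car independently picks whatever looks fastest for itself, everyone can pile onto the congestible road, whereas a coordinated split reduces the average journey time.

```python
# Toy "Pigou network": two roads from A to B, invented numbers.
#   Road 1: fixed 20-minute journey regardless of traffic.
#   Road 2: journey time of 20*x minutes, where x is the fraction of cars on it.

def average_time(x_on_road2):
    """Average journey time (minutes) when fraction x_on_road2 uses road 2."""
    t_road1 = 20.0
    t_road2 = 20.0 * x_on_road2
    return (1.0 - x_on_road2) * t_road1 + x_on_road2 * t_road2

# Selfish choice: road 2 never looks slower to an individual driver,
# so everyone ends up on it (x = 1).
selfish = average_time(1.0)

# Coordinated (e.g. V2I-informed) assignment: pick the split that minimizes the average.
coordinated = min(average_time(x / 100.0) for x in range(101))

print("everyone choosing selfishly: %.1f min average" % selfish)      # 20.0
print("coordinated split:           %.1f min average" % coordinated)  # 15.0
```

The gap (20 versus 15 minutes here) is the sort of congestion saving a connected, cooperatively routed fleet could chase; real road networks are vastly messier, but the principle is the same.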
OK, the negative side: how will they hurt the environment?
Potentially, self-driving cars could lead to an increase in traffic levels, with all the knock-on congestion and environmental detriment that would bring. We all make value-driven choices when choosing modes of transport. I often prefer to take a train instead of drive because I can work on the way, or arrive refreshed instead of tired after driving. The train's rarely cheaper though, and I still have to cover the door-to-station bits myself. But, if my car can drive itself, I can go door-to-door and still get the work done or rest. Trouble parking at the far end? No problem - I'll send the car off to park itself and call it back later on.
Also, full self-driving opens up the possibility of personal car transport for people who can't actually drive manually. Would we still expect people to stop using their own cars due to eyesight, illness or just plain old age? Inevitably there will be limits, but if self-driving technology emerges, it would be very harsh to withhold it from those who could benefit most in terms of independence. There are some interesting possibilities for the school run too, but let's not get too carried away.
The main point is that fully-automated driving technology removes a number of disincentives to the use of your own car for any given journey: the need to be capable (and perhaps qualified) to drive the car; the mental exertion of driving the journey; your own time spent driving the journey; and potentially some of the time and effort in parking at the other end.
I've not run the numbers on any of these, and it'd be a big project to do properly anyway. What would be the net effect? Hard to say, but it's not a "given" that self-driving cars will be good for the environment.
If two people run into the road from different sides, and the car can only physically miss one of them, how will it choose?
There are lots of variations on this question, involving various combinations of unfortunate potential victims. It turns out that this is a variant of the well-studied trolley problem in ethics, and some argue that this question is at the heart of self-driving car technology. I disagree.
That's not to say I'm flippant about safety. Far from it. I will do all that I can to make sure that any decision-making system I develop has the minimum possible chance of ever ending up in a situation where it might have to choose who to kill. Then I'll do some more to reduce that chance. But once in that situation, things have already gone badly wrong. The boundary of what's acceptable vs. what isn't lies before that judgement point is reached. Because things do sometimes go wrong, I cannot guarantee that we'll never cross that boundary, but as an engineer my priority is to minimize the chance of crossing it.
There's nothing in the Highway Code about how a human driver should choose, nor in any other legislation that I'm aware of. It's suggested that human drivers can be forgiven for poor choices in trolley-problem situations because they've not had the chance to consider them ahead of time - but then, if it's so important, why isn't it in the driving test?
Personally I'm also a little chilled by the idea of programming my computer to make value judgements about human lives. Presumably if it got it right, it'd tell me my journey isn't worth the risk anyway. I'd rather spend my time improving the car's ability to anticipate where people might run out in front of it. I hope I've not been at all flippant about this question, but it really is a distraction.