This is the trolley problem. Everyone has heard something similar and there are an infinite number of variants. I remember them featuring heavily in high school R.E. and Philosophy lessons as a way to engage the class in debate and to help us easily understand the different philosophical approaches.
Recently, though, they've been used by many as a way to ask what an autonomous vehicle would do in a situation where it could, let's say, either kill a group of pedestrians or kill the vehicle's passengers. I have a problem with this.
The first issue is that the situations described are often entirely unrealistic. I've seen everything from a cyclist pulling out to a group of children suddenly appearing in the road. When was the last time a group of pedestrians suddenly entered the road right in front of you? I've been driving for something like 8 years and it's certainly never happened to me. Yes, people can step out into the road, but the chances are you've seen them on the pavement, perhaps even looking as though they're about to cross. They certainly don't appear in front of your vehicle so suddenly that an emergency stop isn't sufficient.
Secondly, we’re saying that a vehicle will have the ‘intelligence’ to make reasoned and complex decisions within a matter of milliseconds but will not have the capability to either a) identify the situation unfolding and take proactive action (e.g. slowing down, pre-engaging brakes, changing lane) or b) take emergency action and very quickly bring the vehicle to a stop or avoid a collision altogether by some other means. In a situation where proactive action could not be taken why would the vehicle do anything but perform an emergency stop just as we do today? Surely if the vehicle has the technology to make complex decisions it will also have the technology to simply avoid a collision altogether (by a combination of reacting quickly and with advanced mechanics and materials i.e. braking systems).
Additionally, there is a huge degree of uncertainty in these situations, even the mildly realistic ones. How can the vehicle ever know how external actors are going to behave? For example, if there is going to be a collision between a vehicle and a cyclist, how can the vehicle ever know how the cyclist is going to behave in such a situation? Let's say the vehicle makes the decision to put itself into a wall to avoid hitting the cyclist, but the cyclist manages to move out of the way anyway. The vehicle has now potentially injured the passengers for no reason. My point here is that there are far too many variables and unknowns for a vehicle to ever do more than perform an emergency stop, or take proactive action if it recognises a potential incident might happen. It is not unreasonable to say that the vehicle will always perform the action with the greatest certainty in its outcome, and that will always be protecting itself (and therefore the passengers), as that's the only thing it has control over and the only thing whose behaviour it knows.
Now I’m not saying I don’t like trolley problems in their entirety, they certainly have their place, as I mentioned at the start but I don’t think that the application of autonomous vehicles is one of them. I will say this though, it has been great that these sorts of debates have and are being had but I don’t see there ever being a decision like this being made by a vehicle.