Autonomous vehicles already operate in the United States, and their continued advancement and growing presence on the road alongside human drivers seems a certainty. There is little doubt that the sophistication of these vehicles and their "vision" is superior in nearly every way to that of humans, who are prone to error, fatigue, impairment, and distraction. Even so, autonomous vehicles must ultimately be programmed to make difficult and rare decisions in which somebody will be hurt or even killed.
That is to say, not all accidents and collisions happen because of human error; some are inevitable, caused by weather, brakes that fail, tires that lose their grip, pedestrians who dart into the street, blind spots created by curves or abrupt inclines, and so on. Given the speeds at which vehicles travel, it is not always possible to stop on a dime, because the time before impact is simply not enough.
Driverless cars will therefore crash: into other driverless cars, into cars with human drivers, and into pedestrians. When all this is happening, a human driver will in most instances make some decision, and that decision could be construed as reasonable and selfish, as reasonable and unselfish, or as unreasonable and ill-advised. With human drivers, nearly any possibility in decision making, or the ignoring of it, is on the table. An autonomous vehicle, on the other hand, does exactly what its programming instructs it to do. It is therefore fair to assume that decisions such as whether to crash into a light pole rather than into a stream of pedestrians have already been weighed and determined in advance.
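To make the idea of a pre-weighed decision concrete, a simplified and entirely hypothetical sketch of such a rule might look like the following. The function name, maneuver labels, and harm counts are invented for illustration; no real vehicle is known to be programmed this way.

```python
# Hypothetical sketch of a harm-minimizing crash rule.
# All names and numbers here are illustrative assumptions.

def choose_maneuver(options):
    """Pick the maneuver whose expected harm is lowest.

    `options` maps a maneuver name to the expected number
    of people harmed if that maneuver is taken.
    """
    return min(options, key=options.get)

# Swerving into a light pole harms only the single occupant;
# staying the course harms a stream of pedestrians.
outcome = choose_maneuver({
    "hit_light_pole": 1,    # one occupant injured
    "hit_pedestrians": 4,   # four pedestrians struck
})
# With these assumed counts, the rule selects "hit_light_pole".
```

The point of the sketch is only that the weighing happens before the crash ever occurs: the trade-off is resolved at programming time, not in the moment.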
All of this assumes that autonomous vehicles are programmed for the greater good, which raises the question of whether what the programmers consider the greater good actually is. Additionally, most people hold a firm opinion of their own value, a value reasonably determined by their status within their family and among friends, and by their worth to society at large. By that measure, not every passenger within the confines of an autonomous vehicle would, or should, carry the same value. The CEO of a major technology firm, for instance, would probably be considered of far more value to society than an old man with but six months to live, yet an autonomous vehicle would not know this unless such information were somehow taken into account.
It is therefore conceivable that autonomous vehicles, in the future if not already today, will be programmed so that those of "high value" are always protected over those of "low or unknown value." Society is deeply unequal today, but at least those considered "lesser" still have a conscious choice about what they would do in the trying moments before an imminent collision. In a world where the passengers of autonomous vehicles are valued at different rates, those automated decisions will protect the more valued foremost, at the expense of the less valued.
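The troubling variant described above can be sketched in the same hypothetical style: instead of counting people harmed, the rule sums an assigned "value" for each person harmed. Every name and weight below is invented for illustration; nothing suggests any manufacturer actually does this.

```python
# Hypothetical sketch of value-weighted harm minimization.
# The per-person "value" weights are invented assumptions.

def weighted_harm(people):
    """Total assigned value of everyone harmed by a maneuver."""
    return sum(person["value"] for person in people)

def choose_maneuver(options):
    """Pick the maneuver with the lowest value-weighted harm.

    `options` maps a maneuver name to the list of people
    harmed if that maneuver is taken.
    """
    return min(options, key=lambda m: weighted_harm(options[m]))

# Under unequal weights, harming three "low value" bystanders
# (0.2 each, total 0.6) scores better than harming one
# "high value" passenger (1.0), so the passenger is protected.
decision = choose_maneuver({
    "protect_passenger": [
        {"value": 0.2}, {"value": 0.2}, {"value": 0.2},
    ],
    "protect_bystanders": [{"value": 1.0}],
})
```

The sketch shows how a single change, replacing a head count with a weighted sum, converts a harm-minimizing rule into one that systematically sacrifices the less valued for the more valued.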