The concept of self-driving vehicles is far older than most would suspect. And although the technology making it possible is only now becoming sophisticated, that didn’t stop Milwaukee scientists from experimenting with the concept almost 100 years ago, in the 1920s.
[clip a picture]
And because it’s finally coming to fruition, people are beginning to imagine what exactly that would entail.
In October last year, TechnologyReview asked “Why Self-Driving Cars Must Be Programmed to Kill.” On the surface, it seems like a truly bizarre claim to make, but dig around and you’ll find it’s not so far out there.

What kind of morality do you want your car to have?

Regardless of how sophisticated the sensors, the steering mechanism, the on-board processor, or the algorithms get, one thing will always be true: unexpected things happen.
A child runs into the road unexpectedly, a tire blows, a sensor malfunctions, the car in front of you makes a bad decision…
However improbable you think these scenarios are on a given day, the probability will never be 0%. And if your car has to choose between crashing into one person or into a group of 20 children, everyone would agree the car needs to be capable of making that choice.
In some cases, crashing the driver into a wall rather than into a group of people would be the morally desirable outcome; unsurprisingly, although morally defensible, it’s not one many drivers are excited about.
But let’s take this to a logical extreme. Say the car must hit either a group of 2 little girls or a group of 3 veterans, and there’s a 30% chance that hitting the veterans will also harm the passengers, who are a family of 5, while hitting the girls carries a 20% chance of harming the passengers.
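One crude way to compare such scenarios is to compute the expected number of people harmed under each choice. Here is a minimal sketch of that arithmetic, assuming (purely for illustration) that all harms count equally and that the stated probability of “harming the passengers” applies to the whole family of 5:

```python
# Illustrative only: all harms are weighted equally, and the passenger-harm
# probability is assumed to apply to the entire family of 5.

passengers = 5

def expected_harm(pedestrians_hit, p_passenger_harm):
    """Expected number of people harmed by a candidate maneuver."""
    return pedestrians_hit + p_passenger_harm * passengers

print(expected_harm(3, 0.30))  # hit the veterans: 3 + 0.30 * 5 = 4.5
print(expected_harm(2, 0.20))  # hit the girls:    2 + 0.20 * 5 = 3.0
```

A pure head-count calculation like this ignores everything that makes the question hard in the first place — age, culpability, social role — which is exactly the point of the dilemma.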

A project at MIT, the Massachusetts Institute of Technology, is attempting to answer exactly these kinds of questions.
Presented as “a platform for gathering a human perspective on moral decisions made by machine intelligence”, the Moral Machine’s stated objective is to raise discussion about this topic.

This is obviously a complex question, but does the car really have to make such hard choices?

Well… Yes! If drivers are fully relying on the car to drive, which is safer, more productive, easier, and simply better in almost all ways, then somebody needs to make the moral choices. Leaving a trolley problem to a coin flip is probably the most morally indefensible position you could take.

Elon Musk stated in December last year, just two months after Tesla’s introduction of Autopilot, that fully autonomous driving would be ready for the market within 2 years. And now, only just over half a year later, we’ve experienced the first casualty of self-driving.
This ultimately raises the questions: what exactly does full autonomy mean? How safe can it be?

A car making moral decisions requires just 2 things:
1) being able to accurately predict the cause and effect of its actions
2) having a hierarchy of desired outcomes, with the least human suffering being the most desirable.
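As a toy illustration of how these two requirements might fit together — all maneuvers and numbers below are invented for the sake of the sketch, not drawn from any real system:

```python
# Toy sketch: each candidate maneuver comes with a predicted outcome
# (requirement 1, assumed given here), and the planner picks the maneuver
# ranked best by a simple hierarchy (requirement 2): least expected human
# harm first, vehicle damage only as a tiebreaker.

candidates = [
    # (maneuver, expected people harmed, vehicle damage on a 0-1 scale)
    ("swerve into wall", 0.0, 1.0),
    ("brake in lane",    0.4, 0.2),
    ("swerve left",      1.2, 0.1),
]

def rank(outcome):
    maneuver, harm, damage = outcome
    # Lexicographic hierarchy: human harm strictly dominates property damage.
    return (harm, damage)

best = min(candidates, key=rank)
print(best[0])  # -> swerve into wall
```

Note that the ranking sacrifices the vehicle entirely because human harm sits above property damage in the hierarchy — the same logic that makes drivers uneasy about the wall scenario above.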

Currently, a Tesla is not very smart: it can only drive itself during the safest parts of your journey, and although that’s a great feature for making your drive more comfortable and relaxing, it’s far from autonomous and in no way does it yet merit these kinds of discussions.
One of the few things we can say for sure is that the current sensors are incapable of handling real-world complexities. Another is that the software is improving very, very fast.
Speculation about a new sensor suite in Tesla vehicles, with improved redundancies and faster processing, is being fueled by Tesla’s hires of AMD chip experts Junli Gu and Jim Keller early this year, along with frequent mentions in conference calls and presentations. And we at The Next Avenue are betting it will.
So how far software improvements alone can take the driving, without requiring hardware updates, remains to be seen. And with Tesla’s 8.0 firmware upgrade just around the corner, drivers are dying to find out.