mt_42 wrote:
opie wrote:
Protect self first or save as many lives as possible regardless of self.
This is the part that I see causing the most contention...At what point does the code dictate that it's "better" to take an action that would probably kill the car's own driver rather than some other potentially "worse" alternative?
Crazy hypothetical example - The car suffers a major failure in the braking system as you are driving into a school zone. It sees a group of children crossing the road and, with no other way to stop, it needs to choose between running them over and sacrificing itself (and the driver) by crashing into a nearby row of trees/ditch/school bus...
Would you drive (ride?) in a car knowing that it contains code that would potentially kill you if certain conditions were met? Is this something that would be standard across all future autonomous car manufacturers or is this a marketing campaign/nightmare? "Pick Volvo instead of Tesla as we won't try to kill you when things go bad"...
Hypotheticals like this are fun to debate, but does anyone here actually code?
A compiled program does not "decide" anything. At the most basic level it reads input data from its sensors, runs through a series of predefined (coded) logic checks using the gathered sensor data, and makes control adjustments accordingly. This process loops at a ballpark frequency of ~120 times per second, though that number depends heavily on processor speed and code complexity/efficiency (i.e. it is likely much faster).
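To make that loop concrete, here is a minimal sketch in C. The platform hooks (read_sensors, apply_controls, sleep_until_next_tick), the sensor fields, and the threshold values are all made-up illustrations, not anyone's real vehicle code; only the ~120 Hz ballpark rate comes from the paragraph above.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    double speed_mps;        /* wheel-speed estimate              */
    double obstacle_dist_m;  /* nearest obstacle from lidar/radar */
    bool   brake_ok;         /* brake-circuit self-test result    */
} SensorFrame;

typedef struct {
    double throttle; /* 0.0 .. 1.0  */
    double brake;    /* 0.0 .. 1.0  */
    double steer;    /* -1.0 .. 1.0 */
} Controls;

/* Stubbed platform hooks so the sketch stands alone; a real controller
 * would talk to actual hardware here. */
static SensorFrame read_sensors(void) {
    SensorFrame f = { .speed_mps = 10.0, .obstacle_dist_m = 20.0, .brake_ok = true };
    return f;
}
static void apply_controls(const Controls *c) {
    printf("throttle=%.1f brake=%.1f steer=%.1f\n", c->throttle, c->brake, c->steer);
}
static void sleep_until_next_tick(uint32_t hz) { (void)hz; /* would sleep ~1/hz s */ }

int main(void)
{
    const uint32_t LOOP_HZ = 120;           /* ballpark rate from the post  */

    for (int tick = 0; tick < 3; ++tick) {  /* a few passes for the demo    */
        SensorFrame f = read_sensors();
        Controls    c = {0};

        /* Predefined logic checks, not "decisions": */
        if (f.obstacle_dist_m < 30.0 && f.speed_mps > 5.0)
            c.brake = 1.0;       /* obstacle close: brake hard     */
        else
            c.throttle = 0.2;    /* otherwise hold a gentle cruise */

        apply_controls(&c);
        sleep_until_next_tick(LOOP_HZ);
    }
    return 0;
}
```

Nothing in there weighs lives against each other; it just maps sensor readings to actuator outputs, over and over.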
In your "crazy hypothetical example" I propose a likely hypothetical counter: Once the program detects that it can no longer decelerate via standard hydraulic breaks, redundancy kicks in and an attempt is made using emergency braking, if that fails down shifting while calculating control probabilities for collision avoidance. If all else fails? Hand the controls back to the driver with the message "
Done everything I can, here you go, good luck". Furthermore, I will postulate that a system like this would run a self diagnostic check (possibly each logic pass) and would detect system failures and halt the vehicle / warn the driver long before your total break failure.
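As a rough illustration of that redundancy cascade (a hedged sketch, not any manufacturer's actual implementation), here is the same idea in C; the self-test hooks and their canned return values are purely hypothetical, wired up to simulate the hydraulic failure from the example:

```c
#include <stdbool.h>
#include <stdio.h>

typedef enum {
    BRAKE_HYDRAULIC,   /* normal service brakes                         */
    BRAKE_EMERGENCY,   /* emergency/parking brake circuit               */
    BRAKE_DOWNSHIFT,   /* engine braking via downshift                  */
    BRAKE_HANDOVER     /* nothing left: alert driver, hand back control */
} BrakeStrategy;

/* Hypothetical self-test hooks; here they just return canned values
 * to simulate the hydraulic failure in the example. */
static bool hydraulic_brakes_ok(void) { return false; }
static bool emergency_brake_ok(void)  { return false; }
static bool downshift_possible(void)  { return true;  }

/* Walk down the fallback chain and pick the first option that still works. */
static BrakeStrategy select_brake_strategy(void)
{
    if (hydraulic_brakes_ok()) return BRAKE_HYDRAULIC;
    if (emergency_brake_ok())  return BRAKE_EMERGENCY;
    if (downshift_possible())  return BRAKE_DOWNSHIFT;
    return BRAKE_HANDOVER;
}

int main(void)
{
    switch (select_brake_strategy()) {
    case BRAKE_HYDRAULIC: puts("Braking normally.");                              break;
    case BRAKE_EMERGENCY: puts("Hydraulics out: applying emergency brake.");      break;
    case BRAKE_DOWNSHIFT: puts("Downshifting to shed speed.");                    break;
    case BRAKE_HANDOVER:  puts("Done everything I can, here you go, good luck."); break;
    }
    return 0;
}
```

Again: a fallback chain, not a morality engine. Each branch is a predefined check, and the "sacrifice the driver" case simply never appears in the logic.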
I'll end with the fact that modern airliners are heavily automated and can essentially fly themselves from shortly after departure to destination (provided both airports are properly equipped and the pilot isn't bored). There is no "morality" subroutine coded into the FMS, simply because that high-level task is delegated to the human operator. Now, would you like to know the primary culprit in almost all airline crashes over the past 20 years? Hint: definitely not the autopilot.