HawthorneVillager.com

Hawthorne Village (Milton) Discussion Board
PostPosted: Mon Aug 01, 2016 8:46 am 
Joined: Sun Jun 01, 2008 10:14 am
Posts: 4834
Location: Milton
:idea: Gathering the human perspective for driverless cars.

:arrow: Welcome to MIT's moral machine! - http://moralmachine.mit.edu/

:?: If faced with an impossible-to-avoid dilemma, who would you want your driverless car to kill? Three elderly people crossing the street, or a young mother with a stroller?

_________________
For Home Inspection services call Andy Shaw at Halton Home Inspection Service. 905 876 4761


Last edited by Halton Home Inspector on Mon Aug 01, 2016 12:20 pm, edited 1 time in total.

PostPosted: Mon Aug 01, 2016 9:04 am 
Joined: Fri Mar 28, 2008 10:15 am
Posts: 885
Location: HVE
This is an old topic that the likes of Google and Tesla have been debating for years. The reason it's a moral dilemma is that you have to build this logic into the software's decision tree beforehand, which implies premeditation in legal terms, and therefore potential liability. With traditional (human-driven) cars, a case like this wouldn't raise such questions: it's understood that a human would have maybe half a second to decide, and that's not enough time to make any kind of rational decision - the human's choice would essentially be random.

I think the software should follow suit - just randomize the decision and let it be clearly shown that it is randomized (i.e. make that part of the source code publicly available). Anything else would raise the question, in legal circles, of favouring one group over the other. I get that it's "3 old people vs. a baby", but the families of those 3 old people could still sue... and the moral and legal arguments can and will go both ways. There is no win. The software has to make a random decision.
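To make it concrete, here's a rough sketch in Python (invented names, not anyone's actual vehicle code) of how small and auditable the published tie-break could be:

Code:
import secrets

def choose_unavoidable_path(candidate_paths):
    # Only called after the planner has judged every candidate path equally
    # unavoidable. No weighting by age, headcount or anything else: the pick
    # is uniformly random, and the code saying so is public.
    return secrets.choice(candidate_paths)

# e.g. choose_unavoidable_path(["swerve_left", "stay_course"])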


PostPosted: Mon Aug 01, 2016 9:25 am 
Joined: Sun Jun 01, 2008 10:14 am
Posts: 4834
Location: Milton
Gecko wrote:
This is an old topic that the likes of Google and Tesla have been debating for years.


This may be an old debate, but it's an ongoing one. MIT's Moral Machine is an interactive way for everyone to get involved online and weigh in on these challenging questions.

BTW, I'm not looking for answers on this. I just thought it would be interesting for others.

_________________
For Home Inspection services call Andy Shaw at Halton Home Inspection Service. 905 876 4761


PostPosted: Tue Aug 02, 2016 12:19 am 
Joined: Thu Sep 13, 2007 3:25 pm
Posts: 3641
I just kept telling the car to run itself into the barrier. I know the graphic showed that everyone in the car would die, but with cars getting safer and safer, I don't think those fatalities are as certain as the ones from running a car through a crowd of people.


PostPosted: Tue Aug 02, 2016 7:42 am 
Joined: Mon Jul 10, 2006 7:54 pm
Posts: 5224
Location: HV
I rode in my brother-in-law's Tesla Model S this past weekend, and he demonstrated the semi-autonomous driving on the 401. It can change lanes on its own with one flick of the signal stalk. It's quite amazing, but you still feel somewhat nervous and poised to intervene. It even steered perfectly between lanes on slight curves, and it coped with the orange construction lane markings.

Every semi-autonomous car should have a black box to help with the legal side in case a tragedy occurs.

_________________
What is the difference between ignorance and apathy? I don't know and I don't care.


PostPosted: Tue Aug 02, 2016 8:06 am 
Joined: Wed May 21, 2014 1:57 pm
Posts: 1717
Kevin&Amanda wrote:
I just kept telling the car to run itself into the barrier. I know the graphic showed that everyone in the car would die, but with cars getting safer and safer, I don't think those fatalities are as certain as the ones from running a car through a crowd of people.


I kept wondering who the hell put these barriers in the middle of the road in the first place.

Also, anyone fat and/or old is culled from the herd. Sorry.



PostPosted: Tue Aug 02, 2016 8:24 am 
Joined: Wed May 28, 2008 11:45 am
Posts: 244
There is no answer to this question currently, only opinion.

The cars in the coming years will be more self-aware and, as said above, vehicles will become safer. So it comes down to the code written for a scenario like the one presented in MIT's tool. However, the self-aware vehicle will not know whether you are a doctor, realtor, mother, infant, grandfather, lawyer or crook. If the code is written to protect the occupants of the vehicle and minimize damage to the vehicle, the autonomous vehicle will take the path of least resistance regardless of loss of life outside the vehicle. If instead the code is written to preserve life overall, it will take the course where the fewest people are, regardless of the impact on the occupants inside, as long as the least amount of life is lost. Two similar-sounding policies, but not the same: protect self first, or save as many lives as possible regardless of self.
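To illustrate the difference between those two policies, here is a made-up Python sketch (invented names and fields, not any manufacturer's actual code):

Code:
from dataclasses import dataclass

@dataclass
class Outcome:
    occupant_risk: float       # estimated chance of serious harm inside the vehicle
    external_casualties: int   # estimated people harmed outside the vehicle
    vehicle_damage: float      # estimated damage to the vehicle itself, 0..1

def protect_self_first(outcomes):
    # Path of least resistance for the occupants, regardless of life outside.
    return min(outcomes, key=lambda o: (o.occupant_risk, o.vehicle_damage))

def save_most_lives(outcomes):
    # Least loss of life overall, regardless of impact to the occupants.
    return min(outcomes, key=lambda o: (o.external_casualties, o.occupant_risk))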

opie


PostPosted: Wed Aug 03, 2016 8:02 am 
Joined: Sun May 07, 2006 12:23 pm
Posts: 526
Location: Hawthorne Village
opie wrote:
Protect self first, or save as many lives as possible regardless of self.


This is the part that I see causing the most contention...At what point does the code dictate that it's "better" to take an action that would probably kill the car's own driver rather than some other potentially "worse" alternative?

Crazy hypothetical example - The car suffers a major failure in the braking system as you are driving into a school zone. It sees a group of children crossing the road and with no other way to stop, it needs to choose between running them over or sacrificing itself (and the driver) by crashing into a nearby row of trees/ditch/school bus...

Would you drive (ride?) in a car knowing that it contains code that would potentially kill you if certain conditions were met? Is this something that would be standard across all future autonomous car manufacturers or is this a marketing campaign/nightmare? "Pick Volvo instead of Tesla as we won't try to kill you when things go bad"...


PostPosted: Sun Aug 07, 2016 10:23 pm 
Joined: Tue Jun 05, 2012 7:03 pm
Posts: 181
mt_42 wrote:
opie wrote:
Protect self first, or save as many lives as possible regardless of self.


This is the part that I see causing the most contention...At what point does the code dictate that it's "better" to take an action that would probably kill the car's own driver rather than some other potentially "worse" alternative?

Crazy hypothetical example - The car suffers a major failure in the braking system as you are driving into a school zone. It sees a group of children crossing the road and with no other way to stop, it needs to choose between running them over or sacrificing itself (and the driver) by crashing into a nearby row of trees/ditch/school bus...

Would you drive (ride?) in a car knowing that it contains code that would potentially kill you if certain conditions were met? Is this something that would be standard across all future autonomous car manufacturers or is this a marketing campaign/nightmare? "Pick Volvo instead of Tesla as we won't try to kill you when things go bad"...


Hypotheticals like this are fun to debate, but does anyone here actually code?

A compiled program does not "decide" anything. At the most basic level it reads input data from its sensors, runs through a series of predefined (coded) logic checks using the gathered sensor data, and makes control adjustments accordingly. This process loops at a ballpark frequency of ~120 times per second, with that number depending heavily on processor speed and code complexity/efficiency (i.e. it's likely much faster).
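In simplified, made-up Python (placeholder objects, nothing from a real controller), the shape of that loop is basically:

Code:
import time

LOOP_HZ = 120  # ballpark figure from above; real controllers run much faster

def control_loop(sensors, logic, actuators):
    period = 1.0 / LOOP_HZ
    while True:
        start = time.monotonic()
        data = sensors.read()            # read input data from the sensors
        commands = logic.evaluate(data)  # predefined (coded) logic checks
        actuators.apply(commands)        # make control adjustments accordingly
        time.sleep(max(0.0, period - (time.monotonic() - start)))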

In your "crazy hypothetical example" I propose a likely hypothetical counter: Once the program detects that it can no longer decelerate via standard hydraulic breaks, redundancy kicks in and an attempt is made using emergency braking, if that fails down shifting while calculating control probabilities for collision avoidance. If all else fails? Hand the controls back to the driver with the message "Done everything I can, here you go, good luck". Furthermore, I will postulate that a system like this would run a self diagnostic check (possibly each logic pass) and would detect system failures and halt the vehicle / warn the driver long before your total break failure.

I'll end with the fact that all modern airliners today are fully automated and can fly themselves from departure to destination (provided both airports are properly equipped and the pilot is not bored). There is no "morality" subroutine coded into the FMS, simply because that "high-level" task is delegated to the human operator. Now, would you like to know the primary culprit in almost all airline crashes over the past 20 years? Hint: definitely not the autopilot.


PostPosted: Wed Aug 10, 2016 9:56 am 
Joined: Mon Jan 24, 2011 11:23 am
Posts: 553
Location: Milton
If we're at the point of self-driving cars, we can definitely push the envelope further. Who says the car needs to stay on solid ground? Give the car the ability to momentarily thrust itself upwards and over ... have it land back when it's safe. Hovering/rocket propulsion should be a definite prerequisite for these cars.


PostPosted: Wed Aug 10, 2016 10:28 am 
Joined: Wed May 21, 2014 1:57 pm
Posts: 1717
muzee wrote:
If we're at the point of self-driving cars, we can definitely push the envelope further. Who says the car needs to stay on solid ground? Give the car the ability to momentarily thrust itself upwards and over ... have it land back when it's safe. Hovering/rocket propulsion should be a definite prerequisite for these cars.


The weed thread is -----> that way, man.



PostPosted: Wed Aug 10, 2016 10:50 am 
Joined: Mon Jan 24, 2011 11:23 am
Posts: 553
Location: Milton
Hodor wrote:
muzee wrote:
If we're at the point of self-driving cars, we can definitely push the envelope further. Who says the car needs to stay on solid ground? Give the car the ability to momentarily thrust itself upwards and over ... have it land back when it's safe. Hovering/rocket propulsion should be a definite prerequisite for these cars.


The weed thread is -----> that way, man.

Hah ... but I'm serious :)


PostPosted: Wed Aug 10, 2016 1:16 pm 
Joined: Wed May 21, 2014 1:57 pm
Posts: 1717
muzee wrote:
Hodor wrote:
muzee wrote:
If we're at the point of self-driving cars, we can definitely push the envelope further. Who says the car needs to stay on solid ground? Give the car the ability to momentarily thrust itself upwards and over ... have it land back when it's safe. Hovering/rocket propulsion should be a definite prerequisite for these cars.


The weed thread is -----> that way, man.

Hah ... but I'm serious :)


Oh okay. Well then I think all cars should have super-strong electromagnets surrounding them. That way, the computers can flip the polarity as needed to make sure they never touch each other.



PostPosted: Wed Aug 10, 2016 10:33 pm 
Joined: Tue Jun 05, 2012 7:03 pm
Posts: 181
If we're going to head down that road, then why even bother driving? Provided the current VR 2.0 bubble doesn't burst (which I don't see happening unless they reboot the Lawnmower Man franchise), in a decade (or two) you'll be able to jack into a drone avatar at any remote point of interest that you fancy. All from the comfort of your own rat cag... erm, "living room".

Absolutely safe, just ask Douglas Quaid.

Another hit?

Ok, if our automotive messiah's prophecy is true, then that is exactly what we're doing at this very moment:

http://www.theverge.com/2016/6/2/11837874/elon-musk-says-odds-living-in-simulation

I've seen this before - the sequels sucked. Right?

One more?


Last one...

Well, what if I told you, Mr. Enderson, that said movie was simply a preemptive satirical mind-f*ck to desensitize us plebs into utopian bliss while the real world burned.

https://www.youtube.com/watch?v=7uW47jWLMiY

