Politic?

This is a blog dedicated to a personal interpretation of political news of the day. I attempt to be as knowledgeable as possible before commenting and committing my thoughts to a day's communication.

Thursday, June 29, 2017

The Human Reliability Issue in Self-Driving Cars

"Do you really want last-minute handoffs?"
"There is a really good debate going on over whether it will be possible to solve the handoff problem."
Stefan Heck, head, Nauto, Palo Alto, California

"Imagine if the autopilot disengages once in 10,000 miles."
"You will be very tempted to over-trust the system. Then when it does mess up, you will be unprepared."
Gill Pratt, roboticist, Toyota

"Level three autonomous driving is unsolvable."
"The notion that a human can be a reliable backup is a fallacy."
John Leonard, professor, Massachusetts Institute of Technology

"They gave the message when I was close to getting a high score [playing a game]."
"I just realized it's not so easy to put the game away."
"Humans are inattentive, easily distracted, and slow to respond. That problem's just too difficult."
Erik Coelingh, head of safety and driver assist technologies, Volvo
Volvo, self-driving vehicle simulator
The manufacturers of self-driving cars have a terrific product. Who doesn't want to be driven around without having to be constantly alert and in charge of a vehicle? Computerized directions tell your vehicle where you want to go and the technology takes charge. Relax, do something else, that self-driving car will get you to your destination; safe and sound and no sweat on your part. Oh sure, there will be times when the computer suddenly signals that it is unable to interpret what is happening on the road, and nudges you, the 'driver' to use your human intelligence to decipher the situation and respond.

Immediately.

Except that you'd immersed yourself in a fascinating game on your smartphone and weren't paying attention. Why should you? The car was being computer-guided, and you were completely out of synch with what was happening. And then when the signal came, 'over to you', you didn't know what was happening any more than the computer did. Google discovered to its great chagrin that even its own employees, who should have known what was happening and what their responsibilities were in that kind of situation, failed the test.
Google’s self-driving cars are going longer and longer before deciding to disengage and hand control back to humans.
After deciding to reward employees by lending them self-driving cars for their commutes to work and home, Google checked the automatic recordings to see how swimmingly things had gone. And what they saw gave those executives splitting headaches. Cameras recorded employees clambering into the back of the vehicle, climbing out an open car window, smooching, and indulging in other undriverly escapades, according to two former Google engineers. "We saw stuff that made us a little nervous", admitted roboticist Chris Urmson, at the time in charge of the project.

We all like to believe in the 'intelligence' of robotic cars. They've been programmed, after all, to interpret and rationalize to a degree. There are set patterns they recognize and respond to. Kind of like the human brain. But not quite, not quite yet. Optimistically, engineers feel another decade of experimental research and tinkering should do the trick. For the present, autonomous driving cannot be completely independent of a driver's attention when a sudden hand-off demands an immediate response.

Computers may be capable at this juncture of winning chess games and sending chess masters of global renown into the sulks, but they are not yet at that stage when they are able to judge situations on the road and instantly react as a human should and would. And the human sitting in the car who has to be prepared for that moment when the computer tells him that it's his turn, and quickly, to avoid disaster, may not be prepared for that moment though he/she should be. We're dealing with human nature here, not a mechanical device.

There are six levels of driving automation that the automotive industry has established to date. Level 0 represents fully manual driving and Level 5 complete autonomy, while Level 3 represents the point at which the artificial intelligence manoeuvring the car turns to the human to address an emergency. Except that though the human relies on the artificial intelligence to do its thing, the artificial intelligence, though it may not be aware of it, cannot entirely depend on the human to spell it during that emergency situation.

According to Nauto, a startup that has developed a system capable of simultaneously observing both the driver and the outside environment and issuing alerts as required, a "driver distraction event" occurs every six kilometers, on average. Researchers at Stanford University have shown that most drivers need more than five seconds to successfully regain control of a car. And Google saw what is called "overtrust" even on the part of automotive engineers when it matched employees with its prototype self-driving cars.

Audi's commercial vehicle with Level 3 autonomy, expected out next month, will be geared to give drivers eight to ten seconds to intervene in an emergency situation while the car moves at 60 kilometers an hour in stop-and-go traffic. The concern, needless to say, is human operators who may be engrossed in email or deep in a game they're trying to master. Perhaps a template for solving the problem can be found in the challenges faced by the airline industry in ensuring that pilots remain vigilant while their planes are on automatic pilot.


