Politic?

This is a blog dedicated to a personal interpretation of political news of the day. I attempt to be as knowledgeable as possible before commenting and committing my thoughts to a day's communication.

Monday, June 18, 2018

Eerily Prophetic and Mind-Chilling

"The kind of systems we are creating are very powerful."
"And we cannot understand their [potential] impact."
Bart Selman, computer science professor, Cornell University, Ithaca, New York

"This is going to be a very central question for how we think about A.I. systems."
"Right now, a lot of our A.I. systems make decisions in ways that people don't really understand."
"People who are naysayers and kind of try to drum up these doomsday scenarios -- I just, I don't understand it."

Mark Zuckerberg, CEO, founder, Facebook

"We need to use the downtime, when things are calm, to prepare for when things get serious in the decades to come."
"The time we have now is valuable and we need to make use of it."
Demis Hassabis, head, DeepMind, A.I. research laboratory, London
The T-800 from the Terminator film franchise. Photograph: Melinda Sue Gordon/Allstar/Paramount Pictures

Many of us, when we think of a non-human, mechanical "superintelligence", think back to Arthur C. Clarke's profound and mysterious book 2001: A Space Odyssey and the collaborative film by Stanley Kubrick, which brought HAL 9000 (Heuristically programmed ALgorithmic computer) to virtual life as the controlling intelligence aboard the spaceship Discovery. HAL was programmed to direct the ship's mission and ostensibly to support the astronauts in carrying it out. Yet the astronauts came to suspect that behind the mechanical facade lay something sinister, and the hostility their suspicion elicited from HAL spelled the end of their mission.

It isn't surprising, after all, that integrating biological and mechanical science to produce a superintelligence would raise the spectre of rivalry, challenge and threat when brilliant human minds produce a mechanism capable of emulating the human brain, and capable further of teaching itself, through careful observation, how to become more intelligent; in the process not only out-thinking man but endowing itself with emotions and ambition as well. Artificial intelligence of that kind could very well become a competitive threat through its ability to streamline its own cognitive potential.

In 2016, Google's DeepMind laboratory revealed AlphaGo, a program built to compete with human experts at the ancient Chinese board game of Go. AlphaGo defeated the European champion five games to none, astonishing A.I. researchers with an advance in intelligent analysis that allowed it to take on the best players humanity could muster and out-calculate them. Except that Go is mastered not by calculation alone but by intuition. How, then, could an artificial intelligence acquire intuition? When it went on to beat one of the world's strongest players, AlphaGo produced winning moves that experts in computer A.I. could not explain.

The episode established that computers are capable of teaching themselves to achieve a kind of superintelligence far beyond the functioning of even a genius-level human brain. Then came another revelation, when OpenAI "trained" a computer system to play a video game featuring boat races. The system was rewarded for racking up as many points as it possibly could, and it did, by spinning in circles, colliding with stone walls and ramming other boats, outscoring human players without ever finishing the race. That lunacy of unpredictability resonates with those who fear that superintelligent control would be out of the hands of humans.

All of which leaves many experts more than a little nervous about the A.I. enterprise altogether. With no control, no way of aborting, stopping, halting, of measurably being in charge, what kind of computerized Frankenstein might human ambition unleash on an unready world? Computers capable of out-thinking, out-planning and out-manoeuvring human beings are not likely to surrender to their human "authority". How could they be kept in check?

These deeply concerning questions have seen both OpenAI and DeepMind establish groups dedicated to the urgent work of "A.I. safety". And we wish them well.

HAL 9000 on the wall in Discovery's centrifuge.
