It’s Beginning To Look A Lot Like AI

Have you been good this year? Because Santa’s little helpers (and AI) are watching…

We interact with AIs every day. They route our phone calls, approve our credit card transactions and help our doctors interpret results. AI augments our decision-making, without ever getting tired.

But we have a wonderfully naïve view of Artificial Intelligence, its potential and how we control its development (before IT starts calling the shots).

Sometime in the future the intelligence of machines will exceed that of the human brain. So will this herald a new Utopia, a gift-that-keeps-on-giving, an ‘Intelligential Revolution’, with machines doing a far better job than us at complex and repetitive tasks? Or are we teetering on the edge of “a ghost of Christmas future”, with super-intelligent devices superseding humanity?

Driverless cars will soon be on our roads, driving far more safely than the average driver. (Not me, of course; like everyone else I know, I am better than average!) But these speeding machines will have a decision-making computer in charge, constantly learning to improve its skills (unlike us).

Where does the AI threat really lie?

But how do machines actually think and learn? They are black boxes: layers of computation driving algorithms that cannot simply be paused, opened and dissected. By design, they learn faster and more methodically than we can keep up with. All we can do is roughly steer the parameters.
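To make the “black box” point concrete, here is a minimal sketch (in Python with scikit-learn, our own illustration rather than anything cited in this piece): the human sets a handful of steering parameters, but what the machine actually learns ends up encoded in stacks of numeric weights that nobody can read directly.

```python
# A minimal sketch (Python with scikit-learn, chosen purely for illustration):
# we set the outer "steering" parameters, but what the machine learns ends up
# encoded in stacks of numeric weights that resist being opened and dissected.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Toy data standing in for, say, credit card transactions labelled fraud / not fraud.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# We choose the hyperparameters -- the rough steering referred to above...
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X, y)  # ...but the learning itself happens inside the box.

# The learned "reasoning" is just weight matrices: effective, yet opaque to us.
for i, w in enumerate(model.coefs_):
    print(f"layer {i}: weight matrix of shape {w.shape}")
print("prediction for the first example:", model.predict(X[:1]))
```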

But where will this machine learning take us? What are these (human-defined, machine-managed) safety parameters? As Stephen Hawking said in a recent interview [Wired, Dec 2017]: “The genie is out of the bottle. We need to be mindful of its very real dangers.”

Artificial intelligence is being programmed by disparate teams in an arms race. The temptation is to build for power and speed, rather than safety.

Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

Sarah Connor: “Skynet fights back.” [Terminator 2: Judgment Day, 1991]

Yes, many of these machine-learning intelligences are smart, intuitive marketing tools. But others are industrial machines, and some are autonomous, armed military drones. And who is setting the parameters, based upon what moral mazes? [Radio 4: The Moral Maze, 22 Nov 2017 – The Morality of AI]

And all that is before the Hackers come to spoil the festive party.

Or maybe humans are merely an “evolutionary phase”, before the next, machine-created phase supersedes us. If so, is the self-aware Skynet of fantasy far closer than we appreciate?

So break out the mince pies and grab a glass of mulled wine… It’s time to talk.

Value Engineers