Artificial Intelligence is getting brainier: when will the machines leave us in the dust?

To usher in the ‘Singularity’ – when computers match human intelligence – superintelligent one-trick ponies like DeepMind’s AlphaGo must become jacks of all trades

The road to human-level artificial intelligence is long and wildly uncertain. Most AI programs today are one-trick ponies. They can recognise a face or the sound of your voice, translate foreign languages, trade stocks or play chess. They may have the trick down pat, but one-trick ponies they remain. AlphaGo, the program built by Google’s DeepMind, can beat the best human players at Go, but it hasn’t a clue how to play tiddlywinks, shove ha’penny, or tell one end of a horse from the other.

Humans, on the other hand, are not specialists. Our forte is versatility. What other animal comes close as the jack of all trades? Put humans in a situation where a problem must be solved and, if they can leave their smartphones alone for a moment, they will draw on experience to work out a solution.

The skill, already evident in preschool children, is the ultimate goal of artificial intelligence. If it can be distilled and encoded in software, then thinking machines will finally deserve the name.

DeepMind’s latest AI, unveiled yesterday, has cleared one of the important hurdles on the way to human-level artificial general intelligence, or AGI. Most AIs can perform only one trick because to learn a second, they must forget the first. The problem, known as “catastrophic forgetting”, occurs because the neural network at the heart of the AI overwrites old lessons with new ones.
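The failure is easy to reproduce. Below is a minimal toy sketch in Python (entirely synthetic data, with a simple linear model standing in for a neural network; it is not DeepMind’s code): a single set of weights is trained on one task, then on a second, and its performance on the first collapses because the same weights have been reused.

```python
# Toy illustration of catastrophic forgetting (synthetic data, not
# DeepMind's code): one set of weights, trained by gradient descent on
# task A and then on task B, overwrites what it learned for task A.
import numpy as np

rng = np.random.default_rng(0)

def make_task(true_w, active):
    """Regression task whose inputs only excite the 'active' features."""
    X = np.zeros((500, 3))
    X[:, active] = rng.normal(size=(500, len(active)))
    y = X @ true_w + 0.05 * rng.normal(size=500)
    return X, y

def mse(w, X, y):
    return np.mean((X @ w - y) ** 2)

def grad(w, X, y):
    return 2 * X.T @ (X @ w - y) / len(y)

# Task A uses only features 0 and 1; task B uses all three and wants
# different weights on the features the two tasks share.
task_a = make_task(np.array([2.0, -1.0, 0.0]), active=[0, 1])
task_b = make_task(np.array([0.5, 1.0, 3.0]), active=[0, 1, 2])

w = np.zeros(3)
for _ in range(500):                    # learn task A
    w -= 0.1 * grad(w, *task_a)
print("after A, loss on A:", round(mse(w, *task_a), 3))   # low: A learned

for _ in range(500):                    # learn task B with the same weights
    w -= 0.1 * grad(w, *task_b)
print("after B, loss on A:", round(mse(w, *task_a), 3))   # high: A forgotten
print("after B, loss on B:", round(mse(w, *task_b), 3))   # low: B learned
```

Running it shows the loss on task A jumping once task B has been learned: the old lesson has been overwritten.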

DeepMind solved the problem by mirroring how the human brain works. When we learn to ride a bike, we consolidate the skill. We can go off and learn the violin, the capitals of the world and the finer rules of gaga ball, and still cycle home for tea. The new program mimics that process by making the important lessons of the past hard to overwrite in the future. Instead of forgetting old tricks, it draws on them to learn new ones.
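The remedy can be sketched in the same toy setting. DeepMind’s method, which its paper calls elastic weight consolidation, adds a penalty that anchors each weight to its old value in proportion to how much that weight mattered for the old task. In the sketch below, the importance measure (a crude stand-in for the Fisher information the paper uses) and the penalty strength lam are illustrative assumptions, not the published recipe.

```python
# Sketch of the consolidation idea (DeepMind's paper calls it "elastic
# weight consolidation"): weights that mattered for task A are pulled
# back toward their old values while task B is learned. Toy data as in
# the previous sketch; importance measure and lam are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def make_task(true_w, active):
    X = np.zeros((500, 3))
    X[:, active] = rng.normal(size=(500, len(active)))
    return X, X @ true_w + 0.05 * rng.normal(size=500)

def mse(w, X, y):
    return np.mean((X @ w - y) ** 2)

def grad(w, X, y):
    return 2 * X.T @ (X @ w - y) / len(y)

task_a = make_task(np.array([2.0, -1.0, 0.0]), active=[0, 1])
task_b = make_task(np.array([0.5, 1.0, 3.0]), active=[0, 1, 2])

# Learn task A, then record the weights and each weight's importance
# (here, the curvature of task A's loss per weight: a crude Fisher proxy).
w = np.zeros(3)
for _ in range(500):
    w -= 0.1 * grad(w, *task_a)
w_a = w.copy()
importance = 2 * np.mean(task_a[0] ** 2, axis=0)

# Learn task B, pulling important weights back toward their task-A values.
lam = 20.0                              # illustrative penalty strength
for _ in range(500):
    w -= 0.02 * (grad(w, *task_b) + lam * importance * (w - w_a))

print("loss on A:", round(mse(w, *task_a), 3))  # stays low: A is retained
print("loss on B:", round(mse(w, *task_b), 3))  # improved, but not mastered
```

Because the penalty only bites on weights that mattered for task A, the model keeps its old skill while using its remaining freedom to pick up the new one; the price, as the Atari scores in the next paragraph show, is that neither task is learned as well as a dedicated model would manage.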

Because it retains past skills, the new AI can learn one task after another. When it was set to work on the Atari classics – Space Invaders, Breakout, Defender and the rest – it learned to play seven out of 10 as well as a human can. But it did not score as well as an AI devoted to each game would have done. Like us, the new AI is more a jack of all trades than a master of any one.

There is no doubt that thinking machines, if they ever truly emerge, would be powerful and valuable. Researchers talk of pointing them at the world’s greatest problems: poverty, inequality, climate change and disease.

They could also be a danger. Serious AI researchers, and plenty of prominent figures who know less of the art, have raised worries about the moment when computers surpass human intelligence. Looming on the horizon is the “Singularity”, a time when super-AIs improve at exponential speed, causing such technological disruption that poor, unenhanced humans are left in the dust. These superintelligent computers needn’t hate us to destroy us. As the Oxford philosopher Nick Bostrom has pointed out, a superintelligence might dispose of us simply because it is too devoted to making paper clips to look out for human welfare.

In January the Future of Life Institute held a conference on “Beneficial AI” in Asilomar, California. When it came to discussing threats to humanity, researchers pondered what might be the AI equivalents of nuclear control rods, the sort that are plunged into nuclear reactors to rein in runaway reactions. At the end of the meeting, the organisers released a set of guiding principles for the safe development of AI.

While the latest work from DeepMind edges scientists towards AGI, it does not bring it, or the Singularity, meaningfully closer. There is far more to the general intelligence humans possess than the ability to learn continually. The DeepMind AI can hold on to the skills it picked up on one game while it learns the next, but it cannot generalise an old skill to an unfamiliar task. It cannot ponder a new challenge, reflect on its capabilities, and work out how best to apply them.

The futurist Ray Kurzweil sees the Singularity rolling in 30 years from now. But for other scientists, human-level AI is not inevitable. It is still a matter of if, not when. Emulating human intelligence is a mammoth task. What scientists need are good ideas, and no one can predict when inspiration will strike.

By Ian Sample