When Machines Learn Like Humans: “Our Last Great Invention?”





People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms—for action, imagination, and explanation.

Researchers have created a computer model that captures humans' unique ability to learn new concepts from a single example. Though the model currently learns only handwritten characters from alphabets, the approach underlying it could be broadened to other symbol-based systems, such as gestures, dance moves, and the words of spoken and signed languages.

On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. The researchers also present several “visual Turing tests” probing the model’s creative generalization abilities, which in many cases are indistinguishable from human behavior.

Recent years have seen steady advances in machine learning, yet people are still far better than machines at learning new concepts, often needing just an example or two compared to the tens or hundreds machines typically require. What's more, after learning a concept for the first time, people can typically use it in rich and diverse ways.

Brenden Lake, at New York University, and colleagues sought to develop a model that captured these human learning abilities. They focused on a large class of simple visual concepts, handwritten characters from alphabets around the world, building their model to "learn" this large class of visual symbols, and to make generalizations about it, from very few examples.

They call this modeling scheme the Bayesian program learning framework, or BPL. After developing the BPL approach, the researchers directly compared people, BPL, and other computational approaches on a set of five challenging concept learning tasks, including generating new examples of characters only seen a few times.
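To make the one-shot classification task concrete, here is a toy sketch of its setup, not the BPL model itself: the learner sees exactly one example image per character class and must label a new test image. This illustrative baseline simply picks the class whose single example differs in the fewest pixels; the images, labels, and metric are all invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)


def one_shot_classify(test_img, support):
    """Return the label whose single support example is closest to
    test_img under pixel-wise Hamming distance."""
    return min(support, key=lambda label: int(np.sum(test_img != support[label])))


# One example each for three made-up character classes, as random
# 5x5 binary arrays standing in for handwritten characters.
support = {label: rng.integers(0, 2, (5, 5)) for label in "ABC"}

# A noisy copy of class "B" (two flipped pixels) as the test image.
test = support["B"].copy()
test.flat[[0, 7]] ^= 1
print(one_shot_classify(test, support))
```

BPL attacks the same task very differently, by inferring a generative "program" of pen strokes that could have produced the example, but the input/output contract shown here is the same: one labeled example per class, then classify.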

On a challenging one-shot classification task, the BPL model achieved human-level performance while outperforming recent deep learning approaches, the researchers show. Their model classifies, parses, and recreates handwritten characters, and can generate new letters of the alphabet that look 'right' as judged by Turing-like tests of the model's output in comparison to what real humans produce.

So, what's in store for our future? "Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever," wrote I.J. Good, a British mathematician who worked as a cryptologist at Bletchley Park with Alan Turing, originated the idea behind the "technological singularity," and served as a consultant on supercomputers to Stanley Kubrick, director of the 1968 film 2001: A Space Odyssey. "Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make."

Good also predicted in 1965 that within 30 years humans would possess the capability to build machines smarter than ourselves. Although he subsequently revised this estimate upward, he felt certain it would happen no later than 2030. Given that we are just beginning to explore what's possible with nanotechnology, and that Moore's Law shows no signs of slowing down, he may well be right.

The phrase "technological singularity" is a term of art among futurists, referring to the creation of an AI or enhanced human intelligence that begins to drive technological advancement farther and faster than humans can participate in. While there is substantial disagreement among futurists as to when this will happen and what its impact will be, there is surprising agreement that superhuman intelligence will arrive.

As far back as 1958, some scientists were already predicting that technology would one day drive mankind to the point where radical changes would occur in life as we know it. In a statement often repeated for its prescience, Stanislaw Ulam, a Polish mathematician who contributed greatly to the Manhattan Project, recalled a conversation with John von Neumann: "One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."

Futurist Ray Kurzweil has this to say: "Within 25 years, we'll reverse-engineer the brain and go on to develop super-intelligence. Extrapolating the exponential growth of computational capacity (a factor of at least 1000 per decade), we'll expand inward to the fine forces, such as strings and quarks, and outward. Assuming we could overcome the speed of light limitation, within 300 years we would saturate the whole universe with our intelligence."
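Kurzweil's quoted rate is easy to unpack with a little arithmetic: a factor of 1000 per decade is roughly one doubling per year (since 2^10 = 1024), and over his 25-year horizon it compounds to a capacity increase of about thirty million times. A quick check of those figures:

```python
import math

# Kurzweil's stated rate: computational capacity grows by a factor of
# at least 1000 per decade.
growth_per_decade = 1000

# Implied doubling time in years: 10 * ln(2) / ln(1000).
doubling_time_years = 10 * math.log(2) / math.log(growth_per_decade)
print(round(doubling_time_years, 2))  # about 1.0, i.e. one doubling per year

# Compounded over his 25-year reverse-engineering horizon: 1000 ** 2.5.
factor_25_years = growth_per_decade ** 2.5
print(f"{factor_25_years:.1e}")  # about 3.2e7, a ~30-million-fold increase
```

Whether the underlying hardware trend actually sustains that rate is, of course, the contested part of the prediction; the arithmetic itself is not.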

The Daily Galaxy

Image credit: With thanks to anthropomatics-robotics.kit

