To evolve, AI must face its limits

From medical imaging and language translation to facial recognition and self-driving cars, examples of artificial intelligence (AI) are everywhere. And let’s face it: while not perfect, AI’s capabilities are pretty impressive.

Even something as seemingly simple and routine as a Google search represents one of the most successful examples of AI: it can search far more information at far greater speed than is humanly possible, and it consistently returns results that are (at least most of the time) exactly what you were looking for.

However, the problem with all these examples is that the artificial intelligence on display is not actually intelligent. While today’s AI can do some extraordinary things, the functionality underlying its achievements works by analyzing huge data sets for patterns and correlations without understanding any of the data it processes. As a result, a system built on today’s AI algorithms and trained on thousands of labeled samples gives only the appearance of intelligence. It lacks real common sense. If you don’t believe me, just ask a customer service bot an off-script question.
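To make that concrete, here is a minimal sketch (in Python, with made-up toy data) of the pattern-matching paradigm described above: a model fit to thousands of labeled samples picks up statistical structure, but has no notion of what the labels mean, so an off-script input still gets a confident answer.

```python
# A minimal sketch of pattern matching over labeled samples.
# The data and classifier are illustrative assumptions, not any real product.
import numpy as np

rng = np.random.default_rng(0)

# Thousands of labeled samples: two Gaussian "classes" in a 2D feature space.
X = np.vstack([rng.normal(-1, 1, (2000, 2)), rng.normal(+1, 1, (2000, 2))])
y = np.array([0] * 2000 + [1] * 2000)

# "Learning" here is just estimating per-class means (a centroid classifier).
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    # Assign to the nearest centroid: pure correlation with the training
    # data, with no understanding of what class 0 or class 1 actually is.
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

print(predict(np.array([-1.2, -0.8])))  # -> 0, matches the learned pattern
# An "off-script" input far from anything seen in training still gets a
# confident label, with no sense that the question itself is nonsense:
print(predict(np.array([50.0, 50.0])))  # -> 1
```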


The fundamental flaw in AI can be traced back to the core assumption underlying most AI development over the past 50 years: that if the difficult problems of intelligence could be solved, the easy ones would fall into place. This turned out to be wrong.

In 1988, Carnegie Mellon roboticist Hans Moravec wrote: “It is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” In other words, the difficult problems turn out to be easier to solve, and the seemingly simple problems can be prohibitively difficult.

Two other assumptions that have been prominent in AI development have also been proven wrong:

– First, it was assumed that if enough narrow AI applications (i.e. applications that solve a specific problem using AI techniques) were built, they would coalesce into a form of general intelligence. However, narrow AI applications do not store information in a generalized form, so it cannot be used by other narrow AI applications to extend their breadth. While stitching together applications such as speech processing and image processing is possible, those apps cannot be integrated the way a child integrates hearing and vision.

– Second, some AI researchers assumed that if a large enough machine learning system could be built with enough computing power, it would spontaneously exhibit general intelligence. As the expert systems that attempted to capture the knowledge of a specific field clearly demonstrated, it is simply impossible to create enough cases and example data to overcome a system’s underlying lack of understanding.

If the AI industry knows that its key development assumptions have been proven wrong, why has no one taken the steps needed to overcome them and push AI toward true thinking? The answer can probably be found in AI’s main competitor; let’s call her Sally. She is about three years old, and she already knows many things no AI knows and can solve problems no AI can solve. If you think about it, many of the problems we have with AI today are things any three-year-old could handle.

Think of the knowledge Sally needs to stack a group of blocks. At a basic level, Sally understands that blocks and other physical objects exist in a 3D world. She knows they continue to exist even when she can’t see them. She inherently knows that they have physical characteristics such as weight, shape, and color. She knows she can’t stack more blocks on top of a ball. She understands causality and the passage of time. She knows she must first build a tower of blocks before she can topple it.

What does Sally have to do with the AI industry? Sally has what today’s AI lacks: situational awareness and contextual understanding. Sally’s biological brain can interpret everything it encounters in the context of everything else it has previously learned. More importantly, three-year-old Sally will turn four, and five, and 10, and so on. In short, three-year-old Sally has the natural ability to grow into a fully functional, intelligent adult.

In stark contrast, AI analyzes huge data sets looking for patterns and correlations without understanding any of the data it processes. Even the newest “neuromorphic” chips rely on capabilities that are absent in biology.

For today’s AI to overcome its inherent limitations and evolve into its next phase, defined as artificial general intelligence (AGI), it must be able to understand or learn any intellectual task that a human can. It must gain consciousness. That would allow it to continuously increase its intelligence and abilities in the same way that a human three-year-old grows to possess the intelligence of a four-year-old, and eventually a 10-year-old, a 20-year-old, and so on.

Unfortunately, the research required to shed light on what it will ultimately take to replicate the contextual understanding of the human brain, and to enable AI to gain true consciousness, is highly unlikely to be funded. Why not? Quite simply, nobody, at least not so far, has been willing to pour millions of dollars and years of development into an AI application that can do only what a three-year-old can do.

And that inevitably leads us to the conclusion that today’s artificial intelligence really isn’t that intelligent. Of course, that won’t stop numerous AI companies from boasting that their AI applications “work just like your brain.” The truth is that if they admitted their apps are built on a single algorithm, backpropagation, and that what they offer is a powerful statistical method, they would be far closer to the truth. Unfortunately, the truth just isn’t as interesting as “works just like your brain.”
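For readers curious what that single algorithm actually looks like, here is a minimal, self-contained sketch of backpropagation on a tiny two-layer network. The data, layer sizes, and learning rate are all made up for illustration; the point is simply that everything happening here is gradient-driven statistics, not understanding.

```python
# A toy two-layer network trained by backpropagation (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                  # made-up inputs
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # made-up binary labels

W1, b1 = rng.normal(size=(3, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)
lr = 0.1

for step in range(500):
    # Forward pass: statistics in, statistics out.
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))       # sigmoid output

    # Backward pass: propagate the error gradient layer by layer.
    dz2 = (p - y) / len(X)                     # grad of cross-entropy wrt output logit
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dh = dz2 @ W2.T * (1 - h ** 2)             # back through tanh
    dW1, db1 = X.T @ dh, dh.sum(axis=0)

    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("training accuracy:", ((p > 0.5) == y).mean())  # high on this toy task
```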

This article was originally published by Ben Dickson on TechTalks, a publication that examines technology trends, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new technologies, and what to look out for. You can read the original article here.
