
Child Ex Machina

What artificial intelligence can learn from toddlers

Top development teams around the world are trying to create a neural network similar to a curious but bored three-year-old kid. IQ.HSE explains why this approach is needed and how such methods could bring us closer to creating strong artificial intelligence.

Many concepts and ideas at the core of modern developments in the field of artificial intelligence (AI) originate in Alan Turing's classic article ‘Computing Machinery and Intelligence’, published back in 1950. While the ‘Turing test’ from this article has since gained the most popularity, other aspects of this work remained unnoticed for a long time.

At the time, the famous mathematician wondered: ‘Why don't we, instead of trying to create a program that mimics the mind of an adult, try to create a program that would mimic the mind of a child?’ However, AI has developed in a completely different way.

For decades, intelligence has been regarded as a ‘universal problem solver’, and a child's mind seemed to be a completely unnecessary intermediate stage in achieving it. Why do we need to pay attention to it if it is only an extra step between the initial ‘blank page’ and ‘adult’ skills that we strive to reproduce? Scientists have achieved significant success within the framework of this approach, although not quite the success that they initially expected.

For example, neural networks and deep learning have allowed AI to master some very complex intellectual tasks: it can play chess and Go at an expert level, summarise texts and generate ‘works’ of its own. At the same time, many skills that are elementary for humans remain practically inaccessible to machines.

They are able to file lawsuits, but they cannot make a cup of coffee in an unfamiliar kitchen or even ask ‘Why?’ unprompted; that is, they cannot do things that even small children manage easily. AI can achieve impressive results in certain narrow tasks, but it is not autonomous in its learning and is unable to experiment and discover something new on its own.

A Child's Approach

According to Alison Gopnik, Professor of Psychology and Affiliate Professor of Philosophy at the University of California, Berkeley, the fundamental difference lies in the approach to learning new information. Modern neural networks require huge amounts of data that have been pre-sorted and labelled by people. Children need far fewer examples and do not act on pure statistics: they are simply curious. In other words, kids constantly put forward hypotheses about the world around them and test them against experience, by trial and error.

In recent years, having recognised these problems, developers have been trying to implement various aspects of such ‘childlike’ AI. However, these projects remain on the fringes of the industry and do not even have a generally accepted name: they are called ‘Life-long Learning’, ‘No-task Learning’, and so on.

The ‘Robot Open-ended Autonomous Learning’ (REAL) competitions are centred on the independent formation of new skills ‘based on mechanisms such as curiosity, learning without reinforcement, and independently set goals’, but such projects are not yet mainstream.

The problem is that all of today's learning paradigms lack some key features that allow people to study ‘freely’, in an open environment and without a predetermined external reward. In fact, children need no prodding or encouragement to stack cubes and discover that one structure is more stable than another. They are guided by inner interest. And this mechanism is the first thing that modern AI lacks.

Fake Interest

A person's ‘natural’ curiosity is connected with the work of the brain's internal reward systems. They ‘trigger’ every time new and potentially useful connections emerge in the brain, corresponding to new concepts or to associations between already familiar ones. Conventional artificial neural networks do not operate with concepts and are guided only by event statistics.

In other words, when children see a cube in front of them, they may not yet know what the object is called or how many of them are nearby, but they already begin to form a holistic image of it. If a child puts one cube on another and it does not fall, the child intuitively forms a new concept of ‘support’. Associative connections form in the brain, dopamine is released and a pleasant feeling emerges: the natural reinforcement of cognitive behaviour in humans.

Unlike us, neural networks in their traditional form lack such a positive feedback mechanism: they can only correct errors when their output fails to match the desired result. Thus, creating a ‘childlike’ self-learning agent requires implementing two key capabilities: operating with connections to create and modify holistic concepts, and curiosity. Today, the efforts of many developers are aimed at exactly this task.

A few years ago, scientists from the University of California at Berkeley presented an AI that played Super Mario ‘out of pure interest’, without point-based rewards. The model imitates human learning in many ways: it tries to predict changes in the environment associated with certain actions as accurately as possible, and therefore strives to perform actions that give a result that is still unknown to it.
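
At the core of such ‘artificial curiosity’ is prediction error. Below is a minimal sketch of the idea in Python; the class, the linear forward model and all constants are illustrative assumptions, not the Berkeley team's actual implementation.

```python
import numpy as np

# A sketch of prediction-error curiosity (all names and constants are
# illustrative). A forward model predicts the next observation from the
# current one plus the action; its error doubles as the intrinsic reward.

rng = np.random.default_rng(0)

class ForwardModel:
    """A single linear layer trained online to predict the next observation."""

    def __init__(self, obs_dim, act_dim, lr=0.01):
        self.W = rng.normal(0.0, 0.1, (obs_dim, obs_dim + act_dim))
        self.lr = lr

    def update(self, obs, act, next_obs):
        """One gradient step on the squared prediction error; returns the
        error, which serves as the curiosity reward for this transition."""
        x = np.concatenate([obs, act])
        err = self.W @ x - next_obs
        self.W -= self.lr * np.outer(err, x)
        return float(err @ err)

# Toy loop: the environment is a fixed linear dynamic the model can learn,
# so the curiosity reward fades as transitions become predictable.
obs_dim, act_dim = 4, 2
A = rng.normal(0.0, 0.3, (obs_dim, obs_dim + act_dim))  # assumed 'true' dynamics
model = ForwardModel(obs_dim, act_dim)
obs = rng.normal(size=obs_dim)
for step in range(2000):
    act = rng.normal(size=act_dim)             # random policy, for the sketch
    next_obs = A @ np.concatenate([obs, act])  # environment transition
    reward = model.update(obs, act, next_obs)  # big while the world surprises
    obs = next_obs
print(f"curiosity reward after training: {reward:.5f}")  # shrinks toward zero
```

A full agent would pick the actions that maximise this reward rather than acting randomly; the point of the sketch is only that the reward dries up wherever the world becomes predictable.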

However, without the ability to operate with concepts, even ‘artificial curiosity’ will not produce the desired effect. Some engineers believe that achieving this goal requires a completely different paradigm for building neural networks, one based on spiking (‘impulse’) neurons that are closer to the physiology of real human brain cells.

Conceptual Ensembles

A spiking neuron accumulates incoming signals, and once the accumulated charge exceeds a certain activation threshold, it fires, passing a signal to the next neuron. Its operation has a built-in time frame: for the signal to travel further along the chain, charge must build up within a period short enough that the neuron does not have time to ‘relax’ and leak it away. It is easy to see that the neurons of modern artificial networks usually work differently.
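
The behaviour described above is usually modelled as a leaky integrate-and-fire neuron. Here is a minimal sketch, with time constants and inputs invented purely for illustration:

```python
import numpy as np

# A minimal leaky integrate-and-fire neuron -- one standard way to model
# the behaviour described above. All constants are invented for illustration.
def simulate_lif(input_current, dt=1.0, tau=20.0, threshold=1.0):
    """Integrate input over time; fire and reset when the threshold is crossed.

    tau is the leak time constant: how quickly accumulated charge 'relaxes' away.
    Returns the time steps at which the neuron fired.
    """
    v = 0.0
    spike_times = []
    for t, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)   # leaky integration of incoming charge
        if v >= threshold:            # threshold crossed: the neuron fires
            spike_times.append(t)
            v = 0.0                   # reset after the spike
    return spike_times

slow = np.zeros(100); slow[::20] = 0.5   # the same pulses, widely spaced
fast = np.zeros(100); fast[:10] = 0.5    # ...and delivered in a quick burst
print(simulate_lif(slow))  # [] -- charge leaks away between pulses
print(simulate_lif(fast))  # [2, 5, 8] -- rapid input drives it over threshold
```

The same total input that leaks away when spread out in time drives the neuron over its threshold when delivered quickly; this is exactly the time dependence that conventional artificial neurons lack.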

They operate on information in the form of real numbers, so their operation does not depend on time: incoming values are multiplied by weights, summed and immediately passed on to the next layer. Such neural networks form a continuous flow of information from the first layer to the last. But if a network is implemented within a spiking architecture, everything changes: it becomes possible to create neuronal ensembles with coordinated activity. According to a hypothesis that goes back to the work of the great Canadian neurophysiologist Donald Hebb, such structures are the neural correlates of various concepts and representations in our brain.
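
The contrast with the spiking unit sketched earlier is easy to see in code: a conventional neuron has no notion of timing at all (weights and activation below are arbitrary choices).

```python
import numpy as np

# A conventional 'rate' neuron: no accumulation, no leak, no time -- inputs
# are weighted, summed and passed through an activation in a single step.
def rate_neuron(inputs, weights, bias=0.0):
    return np.tanh(weights @ inputs + bias)

x = np.array([0.5, -0.2, 0.8])
w = np.array([0.4, 0.9, -0.3])
print(rate_neuron(x, w))  # identical inputs always yield an identical output
```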

Simply put, the cells of an ensemble seem to ‘sing in chorus’. Say an ensemble includes a hundred neurons: activate a certain fraction of them, and they all ‘light up’ together. The whole ensemble corresponds to a holistic image, a concept recalled from memory, and it keeps ‘pulsing’ for some time, remaining active. At the same time, each individual neuron can belong to a whole set of ensembles corresponding to different, interrelated concepts.
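
This ‘chorus’ behaviour can be imitated with a classic Hopfield-style network trained by a Hebbian rule. The sketch below is a toy illustration (sizes, patterns and the settling loop are all invented): cueing part of a stored pattern makes the whole ensemble light up.

```python
import numpy as np

# A toy Hopfield-style network with Hebbian learning. Neurons that fire
# together get stronger mutual connections, so activating part of a stored
# pattern pulls the whole 'ensemble' back into coordinated activity.
rng = np.random.default_rng(1)
n = 100                                       # neurons in the network
concepts = rng.choice([-1, 1], size=(3, n))   # three stored 'concepts'

# Hebbian rule: strengthen the connection between every co-active pair.
W = sum(np.outer(p, p) for p in concepts) / n
np.fill_diagonal(W, 0.0)                      # no self-connections

# Cue the network with a badly corrupted fragment of concept 0.
state = concepts[0].copy()
flipped = rng.choice(n, size=30, replace=False)
state[flipped] *= -1                          # 30% of the neurons start 'wrong'

for _ in range(10):                           # let the activity settle
    state = np.sign(W @ state)

print(f"match with the stored concept: {(state == concepts[0]).mean():.0%}")
```

Note that the same neuron takes part in several stored patterns at once, which is the code-level analogue of one cell belonging to many ensembles.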

The Birth of a Toddler

Currently, a number of scientific and engineering teams are working to implement the ideas described above. There are already experimental systems based on spiking neural networks that form connections (ensembles) corresponding to different representations; as they learn, they form new groups and new associations between them.

The goal of such developments is to create a program that demonstrates abilities unavailable to existing AI systems. These are the skills familiar to a three-year-old child: following simple commands (left, right), understanding part-whole, logical (if-then), spatial (over, under, behind) and temporal (before, after, then) relationships, understanding pronouns and tenses, and so on.

The source of constant curiosity in such models is a mechanism jokingly referred to as a ‘dopamine addict’. It rewards the system for each newly formed concept: a source inside it ‘pulsates’ constantly, pushing the system to keep searching, and quiets down only when it gets its ‘dopamine hit’. The respite does not last long, and soon the ‘addict’ demands new sources of dopamine, launching an endless search for novelty.

It is worth noting a potential danger for such a system, one already encountered by the developers of the Super Mario-playing AI. They found that the model quickly discovers the most stochastic circumstances in the game, such as a coin toss, and gets stuck on them forever: it can neither predict the result nor stop being ‘surprised’ by it. In the same way, a ‘dopamine addict’ can get stuck in a situation that lets it endlessly generate new, simple and meaningless connections and concepts.

Something similar happens with young children, who, for example, constantly end up glued to the TV. Such situations can be avoided by taking into account the context of what is happening: new concepts arising in familiar circumstances should be ‘devalued’ so that they no longer give the same ‘dopamine rush’. This builds something like boredom with monotony into the system and stimulates the search for something genuinely new.
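
Put together, the ‘addict’ and its boredom can be sketched in a few lines of Python; the class name, the context keys and the decay rate below are invented for illustration.

```python
from collections import defaultdict

# A toy 'dopamine addict' with built-in boredom. A first-ever concept earns
# a full reward; repeats of the same concept in the same familiar context are
# devalued exponentially, so cheap tricks -- like the coin toss that trapped
# the curious Super Mario agent -- quickly stop paying out.
class DopamineAddict:
    def __init__(self, decay=0.5):
        self.seen = defaultdict(int)  # how often each (context, concept) occurred
        self.decay = decay            # how fast familiarity breeds boredom

    def reward(self, context, concept):
        """Full 'hit' for a first encounter, exponentially less for repeats."""
        key = (context, concept)
        hit = self.decay ** self.seen[key]
        self.seen[key] += 1
        return hit

addict = DopamineAddict()
print(addict.reward("kitchen", "support"))   # 1.0  -- a brand-new discovery
print(addict.reward("kitchen", "support"))   # 0.5  -- already familiar here
print(addict.reward("kitchen", "support"))   # 0.25 -- boredom sets in
print(addict.reward("playroom", "support"))  # 1.0  -- a new context revives interest
```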

Such a curious agent cannot yet be released into a large open world for autonomous learning, so for now researchers work with it ‘manually’, giving it images and associated phoneme sequences that the system can link together. In the future, however, such models could be placed in a virtual universe of their own, where they could act more or less independently. In that simplified space, the ‘child’ model will be able to move itself and individual objects, satisfying its curiosity and learning until it becomes, if not an adult, then at least a semblance of a real toddler.
IQ

Authors: Roman Fishman, Daniil Kuznetsov

October 30, 2023