
Machines with Common Sense

An AI enthusiast's four-decade journey to teach robots how the world works—so they may eventually be capable of brewing coffee in any kitchen

'Intelligence is ten million rules,' said Douglas Lenat, one of the pioneers of artificial intelligence (AI). For nearly four decades, he worked to instil 'common sense' in computers, painstakingly describing hundreds of thousands of concepts and millions of relationships between them.

While computers excel at statistics, logic is not their strong suit. Machines can effortlessly process vast amounts of data to locate the information they require, and deep learning enables them to discern intricate connections and patterns within extensive data sets. Ask a search engine or a smart speaker for Yuri Gagarin's year of birth, and the response will be instant. But ask who was the US president in the year Gagarin was born, and the system will falter: answering requires retrieving one fact and then using it to look up another.
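To make the gap concrete, here is a minimal Python sketch (purely illustrative, not the code of any real search engine; the fact store and relation names are invented) of why the second question is harder: it cannot be resolved by a single lookup, only by explicitly chaining two stored facts.

```python
# A toy, hand-written fact store. Entities and relation names are purely
# illustrative; no real search engine stores knowledge this way.
FACTS = {
    ("Yuri Gagarin", "born_in_year"): 1934,
    ("USA", "president_in_1934"): "Franklin D. Roosevelt",
}

def single_hop(entity, relation):
    """Direct lookup: the kind of query that gets answered instantly."""
    return FACTS.get((entity, relation))

def president_at_birth(person):
    """Chained lookup: find the birth year first, then reuse it in a second query."""
    year = single_hop(person, "born_in_year")
    if year is None:
        return None
    return single_hop("USA", f"president_in_{year}")

print(single_hop("Yuri Gagarin", "born_in_year"))   # 1934
print(president_at_birth("Yuri Gagarin"))           # Franklin D. Roosevelt
```

The second function only works because someone wrote down the rule for chaining the two lookups; a purely statistical retrieval system has no such rule unless it is given one or learns an equivalent.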

What kind of facial expression might you see on a father helping his young child learn to walk? Most people would agree that the father would be smiling, as we recognise these as joyful moments reflected in appropriate facial expressions. But this concept is entirely foreign to computers. Even the most advanced neural network is essentially just a statistical model that processes requests without comprehending the purpose of the task or its context. Machines are not aware of the myriad simple logical patterns that humans naturally acquire while growing up in the real world.

Knowledge Systems

Obviously, without basic common sense, no strong AI (or artificial general intelligence, AGI) is possible. However, neural network developers seldom view this as a challenge and focus instead on crafting solutions for specific practical tasks. At best, they anticipate an eventual qualitative leap in which a 'neural network of neural networks', integrating numerous narrowly focused, statistically driven weak artificial intelligences, somehow attains a comprehensive understanding of how the world works.

Developing a Genuine Test of AI Strength

Many people know Stephen Gary Wozniak as a co-founder of Apple Inc. and the creator of the company's first personal computers. However, Woz, as he is known in the IT community, is more than an exceptional engineer, hacker, and philanthropist; he is also one of Silicon Valley's pioneering thinkers and visionaries, including in the realm of artificial intelligence.

In one of his interviews, Wozniak proposed a novel 'Coffee Test' to confirm human-level artificial general intelligence (AGI), as an alternative to the classical Turing Test and its various recent modifications.

AI should not aim to imitate humans; rather, it should engage in seamless interaction with both humans and the physical environment. Most importantly, AI should have the ability to navigate a novel environment successfully, even without prior training. How can this ability be ascertained? By having the robot brew a cup of coffee!

It is not as straightforward as it may seem, though. A robot operated by an AI system should be able to enter an unfamiliar home, one it has never encountered previously, with no pre-uploaded floor plan, and figure out how to make coffee: find the coffee maker, find the coffee, and brew the drink. So far, this task remains well beyond the capabilities of existing algorithms and robotics.

This bottom-up approach has only recently gained popularity, having historically been preceded by the opposite, top-down trend. For decades, programmers diligently coded complex systems in which they described concepts, entities, and the logical relationships between them. Information about objects and their properties was meticulously categorised and organised within extensive knowledge bases. This approach even made it possible to build expert systems capable of automatically drawing conclusions within certain narrow domains.

The enthusiasm behind this is understandable. In the 1950s, the early pioneers of artificial intelligence found that a minimal set of basic rules could give rise to remarkably complex behaviours. Could it be possible that encoding a more comprehensive dataset might lead to, if not consciousness, then at least rudimentary intelligent thinking?

Inspired by this concept, American computer scientist Douglas Lenat, along with his extensive team, dedicated nearly four decades to curating and expanding a comprehensive knowledge graph in the ambitious Cyc project, which aims to describe the myriad entities of our world and the intricate web of relationships between them while accounting for exceptions and even exceptions from exceptions.

Countering the Japanese AI Project

As early as the 1980s, researchers recognised the limitations of artificial intelligence in dealing with a broad spectrum of tasks and a multitude of diverse concepts. In 1983, while a professor at Stanford, Lenat estimated that progress toward artificial general intelligence would require a substantial knowledge base and potentially several thousand person-years to build. Research projects rarely have access to resources on that scale, but in this case global competition came to the aid of developers.

At the time, Japan's economy was growing fast, rapidly displacing American manufacturers from high-tech markets such as microelectronics, household appliances, and the automotive and shipbuilding industries. Fearing that a breakthrough in AI by competitors in the Far East would leave the US behind in this crucial domain as well, the US government and major corporations extended funding to accelerate AI development. Among the supported projects was the Microelectronics and Computer Technology Corporation (MCC) consortium. The appointment of Bobby Ray Inman, former Director of the National Security Agency and Deputy Director of the Central Intelligence Agency, as the first president of the MCC speaks volumes about the significance attached to this endeavour.

Lenat was invited to oversee the practical implementation. He left his teaching position at Stanford to join the Cyc project (pronounced [saɪk]). His responsibilities entailed:

• developing the CycL language, suitable for coding a universal knowledge base

• developing ontologies (concepts and the connections between them) covering all areas of human knowledge down to some appropriate level of detail

• developing a knowledge base able to answer questions automatically, grounded in the 'common sense' derived from those ontologies (a toy illustration of what this means follows below)
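CycL itself is a Lisp-like formal language; the following is only a rough, hypothetical Python sketch of what the last two points amount to: a tiny ontology of generalisation links and a query function that answers a question nobody has asserted explicitly, by inheriting facts from more general concepts.

```python
# A toy ontology and a 'common sense' query over it. Illustrative Python
# only: this is neither CycL syntax nor Cyc's actual knowledge base.
GENLS = {  # concept -> more general concept ('generalisation' links)
    "coffee maker": "kitchen appliance",
    "kitchen appliance": "household device",
    "household device": "physical object",
}

FACTS = {  # statements asserted explicitly, at the most general level possible
    ("kitchen appliance", "is usually found in a kitchen"),
    ("physical object", "occupies space"),
}

def generalisations(concept):
    """Yield the concept itself, then everything it generalises to."""
    while concept is not None:
        yield concept
        concept = GENLS.get(concept)

def holds(concept, statement):
    """A statement holds for a concept if asserted for it or for any generalisation."""
    return any((c, statement) in FACTS for c in generalisations(concept))

# Nothing is asserted about coffee makers directly, yet both queries succeed:
print(holds("coffee maker", "is usually found in a kitchen"))  # True
print(holds("coffee maker", "occupies space"))                 # True
```

Cyc does something of this kind at the scale of millions of concepts, with a far richer logic than simple inheritance.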

An Endless Impasse

Douglas Lenat's initial expectations were optimistic. During the project's first decade, its database grew to approximately 100,000 concepts and rules, and by 2017 their number had surpassed 1.5 million. Cyc has taken more than 1,000 person-years of effort to describe around 24.5 million rules and relationships, and there is no foreseeable end to the project. The system handles numerous complex tasks effectively, but it is still a long way from anything that could be considered strong AI.

'This is called a combinatorial explosion,' explains Dmitry Salikhov, a specialist in strong AI at the company Temporal Games. 'As the number of concepts and entities described increases, the number of relationships between them grows exponentially, rapidly outstripping any physical possibility of envisioning and encoding them all, and this is where it comes to an end.'
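A rough, purely illustrative Python calculation gives a feel for the scale Salikhov describes: even counting only directed pairwise links, the space grows quadratically with the number of concepts, and the possible chains of relations an inference engine might have to consider grow far faster still.

```python
# Purely illustrative arithmetic: n concepts allow n*(n-1) directed pairwise
# links, and roughly n**k possible chains of k relations.
for n in (1_000, 100_000, 1_500_000):
    pairs = n * (n - 1)
    three_step_chains = n ** 3
    print(f"{n:,} concepts: {pairs:,} possible links, "
          f"about {three_step_chains:.1e} three-step chains")
```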

Fortunately, Lenat had a strategic mindset and seems to have anticipated from the start that this would be a long-term endeavour. Back in 1994, he left the MCC consortium to launch Cycorp, an independent company with about fifty employees engaged in building and improving Cyc. The project is self-sustaining and, as such, does not rely on the goodwill or preferences of investors.

As Lenat explained, Cycorp received a large portion of its revenues from selling 'semantic maps' that helped users pull information from various databases with a single query. This allowed the project not only to stay afloat, but to continue its development.

Working for the Future

Since 2016–2017, the knowledge base has been offered commercially. Among its applications are the MathCraft teaching system, the ResearchCyc ontology, and more. The Cleveland Clinic has used Cyc algorithms to inform patient diagnoses, while the National Memorial Institute for the Prevention of Terrorism (MIPT) has employed the system to identify potential criminals.

'The knowledge in Cyc has gotten quite good,' explains Ken Forbus, professor at Northwestern University and user of ResearchCyc. 'Is it perfect? No. Is it comprehensive? No. Is it broader than anything else out there? Yes.'

Numerous experts envision the future of artificial general intelligence as a combined approach, in which neural networks and knowledge graphs like Cyc are used simultaneously. The Google Knowledge Graph system is constructed on similar principles. This knowledge base incorporates data from Wikipedia, the WordNet dictionary, and several other ontologies, all encoded within an algorithmic system. In response to a search query such as 'What was Pushkin's year of birth?', the data retrieved from the graph is presented as an instant answer. The system's functionality is complemented by neural networks that help parse the input, for example by correctly identifying a term even when it is typed with errors.
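That division of labour can be sketched in a few lines of hypothetical Python (this is not Google's actual code; a real system would use learned models rather than string similarity, and the graph entries here are invented for illustration): a statistical component normalises the noisy query to a known entity, and the symbolic graph then supplies the stored fact.

```python
import difflib

# A hypothetical sketch of the hybrid division of labour described above,
# not Google's actual code: a statistical step normalises noisy input to a
# known entity (crudely approximated here with string similarity), and the
# symbolic knowledge graph returns the stored fact.
KNOWLEDGE_GRAPH = {
    ("Alexander Pushkin", "year_of_birth"): 1799,
    ("Yuri Gagarin", "year_of_birth"): 1934,
}

ENTITIES = sorted({entity for entity, _ in KNOWLEDGE_GRAPH})

def normalise_entity(raw_name):
    """Stand-in for the neural step: map a possibly misspelled name to a known entity."""
    matches = difflib.get_close_matches(raw_name, ENTITIES, n=1, cutoff=0.6)
    return matches[0] if matches else None

def year_of_birth(raw_name):
    entity = normalise_entity(raw_name)
    if entity is None:
        return None
    return KNOWLEDGE_GRAPH.get((entity, "year_of_birth"))

print(year_of_birth("Aleksander Pushkn"))  # 1799, despite the typos
```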

Chatbots and numerous voice assistants are similarly structured as 'hybrids' that incorporate knowledge bases and neural networks. However, it is unlikely that even a project as extensive as Cyc will ever constitute an artificial general intelligence. 'I believe that even in Lenat's team, few people still hold faith in accomplishing the project's original objectives,' says Salikhov. It might appear that all this painstaking work has come to naught, but that's not entirely accurate.

'Everything taking place in our field can be likened to reconnaissance in force,' Salikhov adds. 'There is a vast, open, and entirely uncharted expanse lying ahead of us. That's why we need to dig here and dig there, explore various paths, sometimes only to confirm that the wrong direction has been chosen. If it weren't for Lenat, we might have invested millions more person-hours in exploring this dead-end path. For that, we owe him our thanks.'

It appears that achieving strong artificial intelligence will necessitate a blend of both approaches, integrating deep-learning neural networks with symbolic knowledge systems akin to Douglas Lenat's project. But while advancements in neural networks are progressing rapidly, Cyc remains a rare instance of a comprehensive and universal ontological database. When the time comes to combine the statistical prowess of deep learning with the heuristic wisdom of common sense, Cyc could potentially serve as the foundation for a future artificial general intelligence.

Text authors: Roman Fishman, Daniil Kuznetsov

Author: Daniil Kuznetsov, October 24, 2023