
How to solve the “common sense” problem of artificial intelligence

PHPz | Forwarded 2023-04-14

Translator | Li Rui

Reviewer | Sun Shujuan

In recent years, deep learning has made great progress in some of the most challenging areas of artificial intelligence, including computer vision, speech recognition, and natural language processing.


However, deep learning systems still have unsolved problems: they are not good at handling new situations, they require large amounts of data to train, and they sometimes make bizarre mistakes that confuse even their creators.

Some scientists believe these problems can be solved by creating ever-larger neural networks and training them on ever-larger datasets. Others believe that what the field of artificial intelligence needs is a dose of human “common sense.”

In their new book, “Machines Like Us,” computer scientists Ronald J. Brachman and Hector J. Levesque offer their thoughts on this missing piece of the artificial intelligence puzzle, a “common sense” conundrum that has puzzled researchers for decades, and on possible solutions. In an interview with industry media, Brachman discussed what common sense is and is not, why machines don’t have it, and how the concept of “knowledge representation,” which has been around for decades but was shelved during the deep learning craze, can guide the artificial intelligence community in the right direction.

Although still in the realm of hypotheticals, “Machines Like Us” offers a new perspective on potential areas of research, thanks to the work of these two scientists, who have studied artificial intelligence in depth together since the 1970s.

Good AI Systems Make Weird Mistakes

Brachman said, “In the past 10 to 12 years, as people have shown extraordinary enthusiasm for deep learning, there has been a lot of discussion about whether deep learning-based systems can do everything we originally wanted an AI system to do.”

In the early days of AI, the vision was to create self-sufficient autonomous systems, perhaps in the form of robots, that could act independently with little or no human intervention.

Brachman said, “Today, with so many people excited about what deep learning can achieve, the scope of research has narrowed considerably. Especially in industry, large amounts of funding and talent recruitment have driven an intense focus on systems trained on experience or examples, which many claim are close to general artificial intelligence, while ‘good old-fashioned artificial intelligence’ (GOFAI), or symbolic approaches, are dismissed as simply outdated or unnecessary.”

However, as impressive as they are, deep learning systems still face puzzling problems that have yet to be solved. Neural networks are susceptible to adversarial attacks, in which specially crafted modifications to input values cause a machine learning model to abruptly produce wrong outputs. Deep learning also struggles to understand simple cause-and-effect relationships and is poor at forming concepts and composing them. Large language models, an area of special interest recently, generate coherent and impressive text but can sometimes make very silly mistakes.
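The brittleness behind adversarial attacks is easiest to see on a linear model. The sketch below is an illustrative toy, not from the article: every feature of an input is nudged by the same tiny amount `eps` in a carefully chosen direction (the spirit of the fast-gradient-sign method), and the classifier’s decision flips.

```python
import numpy as np

# Toy adversarial perturbation against a linear classifier (illustrative only).
# The "model" predicts the sign of w @ x; we push every feature of x by at
# most `eps` in the direction that shrinks the score until it crosses zero.
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # fixed model weights
x = rng.normal(size=100)   # a benign input

score = w @ x                             # sign(score) = predicted class
budget = abs(score) / np.abs(w).sum()     # smallest uniform step reaching the boundary
eps = 1.1 * budget                        # step just past the boundary
x_adv = x - eps * np.sign(w) * np.sign(score)

print(np.sign(score), np.sign(w @ x_adv))  # opposite signs: the prediction flipped
```

Each feature moves by only `eps`, a small fraction of its typical magnitude, yet the prediction reverses. Pixel-level adversarial perturbations against deep networks exploit the same kind of sensitivity, just through a learned, nonlinear decision surface.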

Brachman said, “People’s perception of these mistakes made by artificial intelligence is that they look stupid and ignorant, and humans rarely make them. But the important thing is that the reasons behind these mistakes are hard to explain.”

These errors led Brachman and Levesque to reflect on what is missing from today’s AI technology and what is needed to complement or replace example-trained neural networks.

Brachman said, “If you think about it, what these systems are clearly missing is what humans call common sense: the ability to see things that are obvious to most people, to quickly come to simple and obvious conclusions, and to stop yourself when you are about to do something you immediately realize is ridiculous or the wrong choice.”

What is common sense?

The artificial intelligence community has been talking about common sense since its early days. In fact, one of the earliest artificial intelligence papers, written by John McCarthy in 1958, was titled “Programs with Common Sense.”

Brachman said, “This is nothing new, and it’s not a name we invented, but the field has lost sight of the core meaning of what the pioneers of artificial intelligence said. If you ask what common sense is, what it means to possess it, and, more importantly for us, how it works and how it could be implemented, you find little guidance in the psychological literature.”

In “Machines Like Us,” Brachman and Levesque describe common sense as “the ability to effectively use ordinary, everyday, experiential knowledge to achieve ordinary, everyday, practical goals.”

Common sense is essential for survival. Humans and higher animals have evolved to learn through experience, developing routines and autopilot skills that handle most situations they face every day. But daily life is more than just the routines people see over and over again. People are often faced with situations they have never seen before. Some of these may be very different from the norm, but most of the time people encounter things only slightly different from what they are used to. In AI discussions, this is sometimes referred to as the “long tail.”

Brachman said, “It seems to us that when these routines are interrupted, common sense is actually the first thing that is activated, allowing people to quickly understand the new situation, recall what they have done before, rapidly adapt those memories, apply them to the new situation, and move on.”

In some ways, common sense differs from the dual-system thinking paradigm popularized by psychologist and Nobel laureate Daniel Kahneman. Common sense is not the fast, autopilot System 1 thinking that performs most daily tasks people can do without deliberate concentration (e.g., brushing teeth, tying shoes, buttoning a shirt, driving in a familiar area); it requires active thinking to break out of the current routine.

At the same time, common sense is not System 2 thinking, the slow mode that requires full concentration and step-by-step reasoning (for example, planning a six-week trip, designing software, or solving a complex mathematical equation).

Brachman said, “People can think deeply in response to challenges, but this kind of thinking is tiring and slow. Common sense lets people avoid it in almost all of daily life, because there is no need to think deeply about what to do next.”

Brachman and Levesque emphasize in the book that common sense is a “superficial cognitive phenomenon”: compared with thoughtful, methodical analysis, it runs faster.

“It’s not common sense if it takes a lot of thinking to figure it out. We can think of it as ‘reflective thinking,’ and ‘reflective’ is just as important as ‘thinking,’” he said.

The dangers of artificial intelligence without common sense

Common sense requires predictability, trust, explainability and accountability.

Brachman said, “Most people don’t make weird mistakes. Although people may do stupid things, they can often avoid such mistakes on reflection. Humans are not perfect, but their errors are predictable to a certain extent.”

The challenge for AI systems without common sense is that they may make mistakes when they reach the limits of their training. Brachman said those errors are completely unpredictable and unexplainable.

Brachman said, “AI systems without common sense have no such perspective and no fallback to stop themselves from doing strange things, so they will be vulnerable. When they make mistakes, the mistakes mean nothing to them at all.”

These errors can range from harmless, such as mislabeling an image, to extremely harmful, such as causing a self-driving car to drive into the wrong lane.

Brachman and Levesque write in the book that if an artificial intelligence system encounters nothing but the game of chess, and its only concern is winning, common sense never comes into play for it, whereas common sense does come into play when people play chess.

So as AI systems move into sensitive, open-domain applications, such as driving cars, collaborating with humans, or engaging in open-ended conversation, common sense will play a crucial role. In these areas, something new and unexpected is always happening.

Brachman and Levesque write in “Machines Like Us,” “If we want artificial intelligence systems to deal with common occurrences in the real world in a reasonable way, we need more than expertise derived from sampling what has already happened. Predicting the future based solely on seeing and internalizing what happened in the past won’t work. We need common sense.”

Revisiting Symbolic Artificial Intelligence

Most scientists agree that current artificial intelligence systems lack common sense. However, there is often disagreement about solutions. One popular trend is to continue making neural networks larger and larger. There is evidence that larger neural networks keep making incremental improvements, and in some cases large neural networks exhibit zero-shot learning skills, performing tasks for which they were not trained.

However, a large body of research and experiments shows that more data and computation do not solve the problem of AI systems lacking common sense; they merely hide it inside larger and more confusing tangles of numerical weights and matrix operations.

Brachman said, “These systems notice and internalize correlations or patterns; they do not form ‘concepts.’ Even when these systems interact through language, they merely imitate human behavior without the underlying psychological and conceptual mechanisms people assume they have.”

Brachman and Levesque advocate the creation of a system that encodes commonsense knowledge and a commonsense understanding of the world.

They write in the book: “Commonsense knowledge is about the things in the world and the properties they have. It is mediated by what we call a conceptual structure: a set of ideas about the various kinds of things that may exist and the various properties they may have. The knowledge is held in symbolic representations, and commonsense decisions are made by performing computational operations on these symbolic structures. Deciding what to do amounts to using this represented knowledge to consider how to achieve a goal and how to respond to what is observed.”

Brachman and Levesque believe the field needs to look back and revisit some of its earlier work on symbolic artificial intelligence to bring common sense to computers. They call this the “knowledge representation” hypothesis. The book details how to build knowledge representation (KR) systems and how to combine different pieces of knowledge into more complex forms of knowledge and reasoning.

According to the knowledge representation (KR) hypothesis, the representation of commonsense knowledge is divided into two parts: “a world model that represents the state of the world, and a conceptual model that represents the conceptual structure, a general framework for classifying the items in the world.”
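To make the two-part split concrete, here is a minimal sketch, with all names and facts invented for illustration rather than taken from the book: a conceptual model holds a taxonomy of kinds with default properties, a world model holds particular individuals, and a question is answered by walking up the hierarchy.

```python
# Minimal sketch of the KR split: conceptual model + world model.
# All kinds, individuals, and properties here are illustrative.

# Conceptual model: each kind names its parent kind and default properties.
CONCEPTS = {
    "animal": {"parent": None,     "props": {"alive": True}},
    "dog":    {"parent": "animal", "props": {"legs": 4, "barks": True}},
    "robot":  {"parent": None,     "props": {"alive": False}},
}

# World model: particular individuals and the kind each one instantiates.
WORLD = {
    "rex":  {"kind": "dog"},
    "r2d2": {"kind": "robot"},
}

def lookup(individual, prop):
    """Resolve a property by walking up the conceptual hierarchy."""
    kind = WORLD[individual]["kind"]
    while kind is not None:
        node = CONCEPTS[kind]
        if prop in node["props"]:
            return node["props"][prop]
        kind = node["parent"]
    return None  # no commonsense default applies

print(lookup("rex", "legs"))   # → 4 (from "dog")
print(lookup("rex", "alive"))  # → True (inherited from "animal")
```

A real KR system would go far beyond dictionary lookups (defeasible defaults, relations between individuals, inference rules), but the division of labor is the same: the conceptual model says what kinds of things there can be, and the world model says which things there are.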

Brachman said, “Our point is to go back to some of the early thinking in artificial intelligence, where symbols and symbol-manipulation procedures (what people used to call inference engines) could encode and use what people call commonsense basic knowledge of the world: intuitive or naive physics; a basic understanding of how humans and other agents behave and have intentions and beliefs; how time and events work; cause and effect; and so on. This is all knowledge we acquire in our first year or two. Formally represented knowledge of the world can actually have a causal impact on the machine’s behavior, can support things like compositionality by manipulating symbols, and can present familiar things in new ways.”

Brachman emphasized that the hypotheses they put forward in the book may be overturned in the future.

Brachman said, “In the long term, I don’t know whether all this knowledge should be pre-built and pre-coded, or whether artificial intelligence systems should learn it in some other way. But as a hypothesis and an experiment, I think the next step for artificial intelligence should be to try to build these knowledge bases and have systems use them to deal with the unexpected events of daily life, making rough guesses about how to handle familiar and unfamiliar situations.”

Brachman and Levesque’s hypothesis builds on previous efforts to create large symbolic commonsense knowledge bases such as Cyc, a project dating back to the 1980s that has collected millions of rules and concepts about the world.

Brachman said, “I think we need to go even further. We need to look at how autonomous decision-making machines can use these things in everyday decision-making contexts. Building up factual knowledge and being able to answer Jeopardy-style questions is one thing; operating in this noisy world and responding rationally and promptly to unforeseen surprises is another thing entirely.”

Does machine learning have a role in common sense?


Brachman said that systems based on machine learning will continue to play a key role in the perception of artificial intelligence.

He said, “I would not push for using first-order predicate calculus to process pixels on an artificial retina, or symbolic systems for speech signal processing. These machine learning systems are very good at low-level sensory recognition tasks. It’s not yet clear how far up the cognitive chain they will go, but they won’t make it all the way, because they don’t form concepts, or connections between what people see in a scene and natural language.”

The combination of neural networks and symbolic systems is an idea that has become increasingly prominent in recent years. Gary Marcus, Luis Lamb, and Joshua Tenenbaum, among others, have proposed the development of “neuro-symbolic” systems that would combine the best of symbolic and learning-based systems to address current challenges in artificial intelligence.

Although Brachman agrees with much of the work being done in the field, he also said that the current view of hybrid artificial intelligence needs some adjustment.

He said, “I think any current neuro-symbolic system would struggle to explain the difference between common sense and more structured, deeper symbolic reasoning that involves math, heavy planning, and deep analysis. What I would like to see in this hybrid AI world is a real consideration of common sense: having the machine use common sense the way humans do, and having it do the same things humans do.”

Original title: How to solve AI’s “common sense” problem, author: Ben Dickson


Statement: This article is reproduced from 51cto.com.