
Featured in the top international journal PNAS! Starting from theoretical computer science, scientists propose a consciousness model: the "Conscious Turing Machine"

PHPz
2023-04-12 22:34:08


In late May, the top international journal Proceedings of the National Academy of Sciences (PNAS) published an article that had been under review since October of the previous year. The work has a solid lineage: influenced by Turing's model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness, the authors propose, from a theoretical computer science perspective and drawing on computational complexity theory and machine learning, a theoretical computer model named the "Conscious Turing Machine" (CTM), which may help us further understand "consciousness".


Paper link: https://www.pnas.org/doi/epdf/10.1073/pnas.2115934119

For example, the authors make the point that computation takes time. From this standpoint, the theoretical computer science perspective can change our definition of "free will": free will is the freedom to compute the consequences of different courses of action, or to compute as many of those consequences as possible within the available resources (time, space, computing power, and information), and to choose the course of action that best suits one's goals.

The authors' view is that consciousness is a property of all appropriately organized computing systems, whether made of flesh and blood or of metal and silicon. From this point of view, the CTM does not model the brain, nor does it address the neural correlates of consciousness; it is a simple, abstract computational model of consciousness that attempts to explain consciousness and its related phenomena. The paper is quite long, and AI Technology Review has summarized the key points as follows.

1 Looking at "consciousness" from the perspective of theoretical computer science

1.1. Theoretical computer science

Alan Turing's seminal paper "On Computable Numbers, with an Application to the Entscheidungsproblem" can be said to be the origin of theoretical computer science. The paper gave a mathematical definition of a "computing machine," now known as the Turing machine (TM). By Turing's definition, this computing machine can compute any function that a computer or supercomputer can compute. Theorems are the raison d'être of mathematical theories, and Turing proved what may be called the first theorem of theoretical computer science: the unsolvability of the halting problem.

In modern parlance, this theorem proves that it is impossible to have a general-purpose (debugging) program that determines which computer programs halt and which do not; no such program can be constructed. The unsolvability of the halting problem is equivalent to the undecidability of elementary number theory and implies a weak form of Gödel's first incompleteness theorem. After Gödel and Turing, mathematical logicians began to classify which problems are solvable and which are unsolvable, and to study the deep hierarchy of unsolvable problems.

As computing machines emerged and became widely available in the 1960s, we quickly learned that many important problems that are solvable in principle are practically impossible to solve, even on the fastest computers. This is not a technological limitation but a deeper problem. Researchers in the emerging field of theoretical computer science (notably Jack Edmonds, Stephen Cook, Richard Karp, and Leonid Levin) realized that among naturally finite (and therefore solvable) problems there seemed to be a dichotomy between the feasible and the infeasible, mirroring the earlier dichotomy between solvable and unsolvable. A problem with a feasible solution can be formalized mathematically as one solvable in polynomial time (P) by some computer program.

Furthermore, a problem being solvable in polynomial time and a problem being checkable in polynomial time (NP) may not be equivalent; indeed, settling that equivalence would answer the famous million-dollar P =? NP problem. In addition to defining a hierarchy of fast serial (polynomial-time) complexity classes, theoretical computer science also defines a hierarchy of ultra-fast parallel (polylog-time) complexity classes. Both hierarchies supply definitions and choices used in the CTM model. The understanding and implications of the dichotomies between easy and hard, fast and slow, set off a complexity revolution, with rich theories, reconstructed ideas, novel concepts, and surprising applications. In fact, developments in computational complexity over the past 40 years have shown how difficulty itself can be harnessed to tackle seemingly impossible problems. We illustrate with computer-generated random sequences, known as "pseudo-random sequences".

On its face, the concept of a pseudo-random sequence is so jarring that von Neumann joked: "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin." More precisely, a pseudo-random sequence generator is a feasible (polynomial-time) computer program that generates sequences which no feasible computer program can distinguish from truly random sequences (such as those produced by independently tossing a fair coin). Therefore, in the polynomial-time world in which humans live, pseudo-random sequences are effectively truly random. This understanding would not have been possible without the theoretical computer science account of the difference between polynomial and superpolynomial complexity. One application of this idea is to replace the random sequences in a probabilistic CTM with sequences generated by a pseudo-random generator supplied with a (short) random seed. In particular, if the probabilistic CTM has "free will", then so does the deterministic CTM, a conclusion that runs contrary to some (perhaps most) deterministic views.
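As a concrete illustration of that idea (not code from the paper), the sketch below contrasts truly random coin flips with a pseudo-random stream expanded from a short seed; the SHA-256 counter-mode construction and all names are illustrative assumptions.

```python
# Minimal sketch (assumptions only): a probabilistic CTM can draw its coin
# flips from a true entropy source, while a deterministic CTM replaces them
# with a pseudo-random stream expanded from a short random seed.
import hashlib
import secrets

def prg_bits(seed: bytes, n: int) -> list[int]:
    """Expand a short seed into n pseudo-random bits (SHA-256 in counter mode)."""
    bits: list[int] = []
    counter = 0
    while len(bits) < n:
        digest = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        bits.extend((byte >> i) & 1 for byte in digest for i in range(8))
        counter += 1
    return bits[:n]

true_coins = [secrets.randbits(1) for _ in range(16)]   # probabilistic CTM
seed = secrets.token_bytes(16)                          # short random seed
pseudo_coins = prg_bits(seed, 16)                       # deterministic CTM
print(true_coins)
print(pseudo_coins)   # indistinguishable to any polynomial-time observer
```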

1.2. Now let’s talk about consciousness

The definition of the CTM adopts the perspective of theoretical computer science. The CTM is a simple machine that formalizes (and, through its dynamics, adapts) the global workspace theory (GWT) of consciousness. GWT originated with cognitive neuroscientist Bernard Baars and was extended by Dehaene, Mashour, and others in their global neuronal workspace theory (GNWT). In the theater of consciousness, Baars likens consciousness to actors performing on the stage of working memory, observed by an audience (the unconscious processors) sitting in the dark. In the CTM, the stage of GWT is represented by a short-term memory (STM) that holds the CTM's conscious content at any moment.

The audience is represented by powerful processors, each with its own expertise; together these processors constitute the CTM's long-term memory (LTM). The LTM processors make predictions and receive feedback from the CTM's world, and learning algorithms inside each processor improve its behavior based on this feedback. Each LTM processor competes to get its questions, answers, and information, packaged into chunks, onto the stage, from which this content is immediately delivered to the audience.

Conscious awareness, sometimes also called attention, is formally defined in the CTM as the reception by the LTM processors of the broadcast of the CTM's conscious content. Over time, some processors become connected by links, and these LTM processors shift from conscious communication through the STM to unconscious communication through the links. Chunks propagated through links can strengthen conscious awareness, a process Dehaene and Changeux call ignition. Inspired by Baars' GWT architecture, the CTM also integrates additional features that are crucial to the feeling of consciousness. These include its dynamics, its rich multimodal inner language (which the authors call Brainish), and the special LTM processors that enable the CTM to create models of its world.

1.3. Complexity considerations

The consequences of limited resources play a crucial role in the high-level explanations of consciousness-related phenomena such as change blindness and free will. These consequences also shape the detailed definition of the CTM. Details include:

The formal definition of a chunk: a chunk is the information that each LTM processor puts into the competition for consciousness on every tick of the clock;

A fast probabilistic competition algorithm that selects one of the competing chunks to reach consciousness;

A machine learning algorithm in each processor that uses feedback from global broadcasts, other processors, and the outside world to improve processor competitiveness and reliability.

Although inspired by Turing's model of computation, the CTM is not a standard Turing machine. This is because what gives the CTM a "feeling of consciousness" is neither its computational power nor its input-output mapping, but its global workspace architecture, its predictive dynamics (cycles of prediction, feedback, and learning), its rich multimodal inner language, and certain special LTM processors, such as the model-of-the-world processors. As noted earlier, the authors are not looking for a model of the brain, but for a simple model of consciousness.

2 Overview of CTM model

2.1 The basic structure of CTM and the definition of consciousness in CTM

Assume the CTM has a finite lifetime T. Time is measured in discrete clock ticks, t = 0, 1, 2, …, T ~ 10^10 (roughly ten ticks per second, the alpha brain-wave rhythm). The CTM is born at time 0. The CTM is defined as a seven-tuple whose components (the STM, the LTM processors, the up-tree, the down-tree, the links, and the input and output maps) are described below.

2.1.1. STM and LTM processors

In the CTM, the STM is a small memory that can hold a single chunk, as defined in Section 2.2. The LTM is a large collection of N processors (N > 10^7), each a random-access machine with enough random-access memory to hold a fraction of T chunks. Processors exist only in the LTM, not in the STM, so when the article speaks of processors it means LTM processors. Certain special LTM processors are responsible for the CTM's feeling of consciousness. These include the model-of-the-world processors, the inner-speech processor, and other inner generalized-speech processors that handle inner vision, inner touch, and so on.

2.1.2. Up-tree competition and down-tree broadcast

The down-tree is a simple downward-directed tree of height 1, with its root in the STM and N edges pointing from the root to the leaves, one leaf in each LTM processor. The up-tree is an upward-directed binary tree of height h, with N leaves, one leaf per LTM processor, and a (single) root in the STM. Each LTM processor has its own expertise. It gets its questions, answers, and information to the STM by competing in the up-tree, and the winner is immediately broadcast through the down-tree to the audience of all LTM processors. To keep the CTM simple, all LTM processors submit information to the STM competition, and all processors receive every broadcast from the STM. In humans, by contrast, the dorsal visual pathway is never conscious (never reaches the STM); only the ventral pathway is conscious. This bottom-up/top-down cycle is similar to the global neuronal workspace (GNW) hypothesis, which holds that "conscious access occurs in two successive stages... In the first stage, lasting from roughly 100 to roughly 300 milliseconds, the stimulus climbs up the cortical processing hierarchy in a bottom-up, unconscious manner; in the second stage, if the stimulus is judged relevant to current goals and attention, it is amplified top-down and maintained by the sustained activity of a small subset of GNW neurons, while the rest are inhibited. The entire workspace is globally interconnected, and at any given time only one such conscious representation can be active."
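A minimal sketch of the two trees follows, under the assumption that the up-tree is a complete binary tree built by pairing nodes level by level; the `Node` class and `build_up_tree` helper are illustrative placeholders, not code from the paper.

```python
# Minimal sketch (assumptions only): an up-tree over N LTM-processor leaves
# and a down-tree that is simply one root with an edge to every processor.
import math

class Node:
    def __init__(self, left=None, right=None, processor=None):
        self.left, self.right, self.processor = left, right, processor
        self.chunk = None          # chunk currently held at this node

def build_up_tree(num_processors: int) -> Node:
    """Return the root (in STM) of a binary up-tree with one leaf per processor."""
    nodes = [Node(processor=p) for p in range(num_processors)]
    while len(nodes) > 1:                      # pair nodes level by level
        if len(nodes) % 2:                     # pad so the tree stays binary
            nodes.append(Node())
        nodes = [Node(left=nodes[i], right=nodes[i + 1])
                 for i in range(0, len(nodes), 2)]
    return nodes[0]

N = 8                                          # tiny example; the paper has N > 10^7
root = build_up_tree(N)
height = math.ceil(math.log2(N))               # h = ceil(log2 N) levels of competition
down_tree_edges = list(range(N))               # the down-tree: root -> every processor
print(height, len(down_tree_edges))
```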

2.1.3. Chunks, Conscious Content, Conscious Awareness, and Stream of Consciousness

Questions, answers, and information are carried in chunks. The chunk that wins the competition to enter the STM is called the conscious content of the CTM. In the CTM, unlike Baars' theater metaphor, exactly the same actor is always on the STM (stage). At each time step, the actor receives the winning chunk, which serves as the script that is immediately played through the down-tree. The CTM becomes consciously aware of this content when all LTM processors receive it via the broadcast. The authors define conscious awareness as the reception of the STM broadcast by all LTM processors, rather than the arrival of the winning chunk in the STM. This definition emphasizes that the feeling of consciousness is generated after the processors, especially the model-of-the-world and inner-speech processors, receive the broadcast. In the CTM, this definition of conscious awareness roughly corresponds to what cognitive neuroscientists call "attention", while what is here called the feeling of consciousness roughly corresponds to what they call "awareness" or "subjective awareness". Chunks bubble up the competition toward the STM, and the winning chunk is then broadcast from the STM to the LTM processors. The time-ordered sequence of chunks broadcast from the STM to the LTM forms the stream of consciousness. As mentioned in Section 3, this stream is part of the subjective feeling of consciousness.

2.1.4. Links, Unconscious Communication, Global Ignition

All communication between processors initially occurs through the STM. For example, processor A can submit a question to the STM via the up-tree competition. If the question wins the competition, it is broadcast to all LTM processors. Processor B can then submit an answer via the competition; if B wins, the answer is broadcast, and so on. If A finds B's answer useful enough, a two-way link forms between A and B. This is reminiscent of the Hebbian principle that "neurons that fire together wire together." In addition to sending chunks into the up-tree competition, processors also send chunks across links. In this way, the conscious communication between A and B (via the STM) can become direct unconscious communication through the chunks sent between A and B (via the link); in the authors' words, the link between A and B is strengthened. A link is a channel for transferring information between processors. As the CTM's conscious content is broadcast, the chunks sent between linked processors can enhance and sustain conscious awareness. This strengthening is related to what Dehaene and Changeux call "global ignition" in their GNWT. As Dehaene writes, global ignition occurs when the broadcast exceeds a certain threshold and becomes self-reinforcing: some neurons excite others, which in turn send excitation back, and the wired-together cells suddenly enter a self-sustaining state of high activity, a reverberating "cell assembly", as Hebb called it.

2.1.5. Input and output mapping: sensors and actuators

The CTM's environment (Env) at each time t (a non-negative integer) is a subset of ℝ^m, where ℝ denotes the real numbers and m is a positive integer dimension. The input map sends time-varying environmental information acquired by the CTM's sensors to designated LTM processors (for simplicity, the sensors are assumed to be part of the input map), which convert the environmental information into chunks. The output map passes command information from LTM processors to actuators (likewise assumed to be part of the output map), which act on the environment.

2.1.6. Summary of connections

In the CTM, five types of connections provide the paths and mechanisms for transmitting information. The figure below shows these five connections among the environment, the STM, and the LTM processors:

  • Env→LTM: directed edges from the environment, through the sensors, to the processors of sensory data;
  • LTM→STM: via the up-tree;
  • STM→LTM: via the down-tree;
  • LTM↔LTM: bidirectional edges (links) between processors;
  • LTM→Env: directed edges from specific processors, through the actuators, to the environment (for example, processors that generate finger-movement commands send them to the fingers acting as actuators, and those finger movements operate on the environment).


Illustration: the five connections among the environment, the STM, and the LTM processors

2.2. Brainish (the multimodal inner language of the CTM), gists, and chunks

Brainish is the inner language of the CTM, used for communication between processors, either through competition and broadcast or directly through links. The languages used internally by individual processors, on the other hand, vary from processor to processor, and there are other languages besides Brainish. Brainish is the language used to express inner speech, inner vision, inner sensations, imaginings, and dreams. It consists of coded representations of inputs and outputs, expressed in succinct, multimodal Brainish words and phrases called "gists". A gist can capture the essence of a scene, or the main idea of a proof. A gist can also be the answer to a question, an insight, an image from a dream, (a description of) a pain, and so on. Brainish is better able to express and manipulate images, sounds, touches, and thoughts, including unsymbolized thoughts, than outer languages such as English, Chinese, or Doggish. The authors argue that this expressive inner language is an important part of the feeling of consciousness (see Section 3). Information travels in chunks on all edges: between processors, between the STM and the LTM, from the input to the LTM, and from the LTM to the output. A chunk is a six-tuple:

⟨address, t, gist, weight, intensity, mood⟩. Here, address is the address of the LTM processor that generated the chunk, t is the time at which the chunk was generated, and gist is the information, "succinctly expressed" in Brainish, that the processor intends to communicate. Weight is a signed number the processor attaches to the gist. Intensity and mood start out at time t as |weight| and weight, respectively. The researchers note that the size of a chunk (and of its components, including the gist) is necessarily bounded by computational-complexity considerations.
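To make the six-tuple concrete, here is a minimal Python sketch (an illustrative assumption, not the paper's code); the `Chunk` dataclass and `make_chunk` helper simply mirror the components just listed.

```python
# Minimal sketch (assumptions only) of a chunk as the six-tuple
# <address, t, gist, weight, intensity, mood> described above.
from dataclasses import dataclass

@dataclass
class Chunk:
    address: int      # address of the LTM processor that produced the chunk
    t: int            # clock tick at which the chunk was produced
    gist: str         # Brainish content (a plain string is used here as a stand-in)
    weight: float     # signed number the processor attaches to the gist
    intensity: float  # starts out as |weight| at time t
    mood: float       # starts out as weight at time t

def make_chunk(address: int, t: int, gist: str, weight: float) -> Chunk:
    """Create a chunk with intensity and mood initialized from the weight."""
    return Chunk(address, t, gist, weight, intensity=abs(weight), mood=weight)

c = make_chunk(address=42, t=7, gist="a red ball on the left", weight=-0.8)
print(c)
```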

2.3. Probabilistic up-tree competition: coin-flip neurons and competition functions

Up-tree competition is the mechanism that determines which LTM processor gets its chunk into the STM. At each time t = 0, 1, …, T, when the t-th competition begins, each processor p places its chunk into its leaf node of the up-tree. As a chunk moves up the competition tree, its address, t, gist, and weight remain unchanged, but its intensity and mood are updated to incorporate more global information.

2.4. Computational complexity and time delay of conscious perception

For t > 0 and s > 0, updating the chunk at node v_s in the up-tree competition requires the following computations: 1) two fast evaluations of the competition function f, the summation and division of its values, and one fast probabilistic selection; 2) placing the address, gist, and weight of the selected chunk into node v_s; 3) summing the intensities and the moods of the chunks at the children of v_s, and setting these sums as the intensity and mood of the chunk at v_s. All of these computations must be completed within one unit of time, which bounds the size of a chunk at a node and the amount of computation that can be performed there.
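The following sketch shows one such node update, reusing the `Chunk` dataclass sketched above. The competition function `f` here is a placeholder choice (the paper leaves room for different competition functions), and the probabilistic selection stands in for the coin-flip neurons.

```python
# Minimal sketch (assumptions only) of a single up-tree node update: the winner's
# address, gist, and weight move up unchanged, while intensity and mood become
# sums over both children, incorporating more global information.
import random
from dataclasses import replace   # reuses the Chunk dataclass sketched above

def f(chunk: "Chunk") -> float:
    """Placeholder competition function; must be fast to evaluate."""
    return abs(chunk.weight) + chunk.intensity

def update_node(left: "Chunk", right: "Chunk") -> "Chunk":
    """Compete the two child chunks at an internal node v_s of the up-tree."""
    f_left, f_right = f(left), f(right)
    total = f_left + f_right
    # coin-flip neuron: probabilistic selection biased toward the stronger chunk
    winner = left if random.random() < (f_left / total if total else 0.5) else right
    return replace(winner,
                   intensity=left.intensity + right.intensity,
                   mood=left.mood + right.mood)

print(update_node(make_chunk(1, 0, "gist A", 0.9), make_chunk(2, 0, "gist B", 0.3)))
```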

2.5. Memories and the high-level story

Each processor p is assumed to store in its internal memory a time-ordered sequence of tuples, including the chunk that p submitted to the competition at time t, the chunk that p received via the STM broadcast, and a selected subset of the chunks p received from links or from the input map at time t. These sequences are an important part of the CTM's memory. This "history" provides a high-level story of what p has seen and done. The high-level story goes a long way toward explaining the CTM's sense of self within its feeling of consciousness. The CTM needs the high-level story, combined with predictive algorithms, to create dreams (see Section 4.5). The stored information may be pruned periodically so that only "significant" chunks remain, most notably those representing terrifying, wonderful, or unexpected events. Typically, each processor makes predictions about the chunks it generates, modifies, and stores.

2.6. Predictive dynamics = prediction, feedback, and learning (Sleeping Experts Algorithms, SEAs)

Processors need feedback to evaluate the correctness of their predictions, detect errors, and learn how to improve accuracy and to reduce and correct errors.

  • LTM processors make the CTM's predictions for all chunks, whether submitted to the STM competition, sent to other processors through links, or sent to the actuators that act on the environment.
  • Feedback comes from the chunks received via the STM broadcast, the chunks received via links, and the chunks received from the environment via the input map.
  • All of the CTM's learning and error correction takes place in the processors.

In the CTM there is a continuous cycle of prediction, feedback, and learning. The CTM must stay alert to anything out of the ordinary and to surprises of any kind, so that it can deal with them when necessary and keep improving its understanding of the world. Through this cycle, prediction errors ("surprises") are minimized. In particular, a processor needs to know whether it is setting its weights too conservatively or too boldly, so that it can revise its weight-assignment algorithm. Sleeping Experts Algorithms (SEAs) are a class of learning algorithms used by LTM processors to achieve this. One of the simplest versions of an SEA works as follows. Encourage a processor (increase the weight it assigns to its chunks) when:

  1. its chunk does not enter the STM, and
  2. its information is (in the SEA's judgment) more valuable than the information that did enter the STM.

Suppress a processor (decrease the weight it assigns to its chunks) when:

  1. its chunk enters the STM, and
  2. its information turns out (possibly later) to be less valuable than the information in some of the chunks that failed to enter the STM.

SEAs influence whether processors get their chunks into the STM. SEAs also influence whether processors "pay attention" to the gists of chunks sent to them through links. The absolute value of a chunk's weight indicates how important the generating processor considers its gist, which affects whether a processor receiving the chunk will attend to it.
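A minimal sketch of the simple SEA update just described; the multiplicative factor and parameter names are illustrative assumptions rather than the paper's specification.

```python
# Minimal sketch (assumptions only): a processor scales the weights it assigns
# to future chunks up or down depending on whether it was too timid or too bold.
def sea_update(weight_scale: float, entered_stm: bool,
               own_value: float, best_other_value: float,
               rate: float = 1.1) -> float:
    """One feedback step. best_other_value is the value of the chunk that won
    the STM (if ours lost) or of the best chunk that failed to enter (if ours won)."""
    if not entered_stm and own_value > best_other_value:
        return weight_scale * rate      # encourage: the processor was too timid
    if entered_stm and own_value < best_other_value:
        return weight_scale / rate      # suppress: the processor was too bold
    return weight_scale                 # otherwise leave the scale unchanged

scale = 1.0
scale = sea_update(scale, entered_stm=False, own_value=0.9, best_other_value=0.4)
print(scale)   # the processor will bid more strongly next time
```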

2.7. Comparison between CTM and GWT models

The researchers compared CTM and Baars’ GWT models, as shown in the figure below.


Caption: Model sketches of Baars' GWT model (left) and the CTM model (right). For simplicity, the diagram omits many features. For example, the CTM has only one "actor" on the stage, and this actor holds only one chunk at a time. In addition, all processors in the CTM are in the LTM. The central executive is eliminated, since its functions can be taken over by the processors. In the CTM, inputs and outputs go directly to and from the LTM processors rather than through the STM. In the CTM, chunks reach the stage (STM) through a well-defined competition.

Conscious awareness (attention) is the reception by all LTM processors of the broadcast winning chunk (i.e., the CTM's conscious content), rather than an event occurring between the input and the STM. The roles of Baddeley and Hitch's verbal-rehearsal (phonological) loop and visuospatial sketchpad are taken over by LTM processors. Predictive dynamics (cycles of prediction, feedback, and learning), the multimodal inner language (Brainish), and computational and complexity considerations are salient, key features of the CTM.

Finally, as in the "extended mind" thesis, the CTM can access existing technologies in the form of LTM processors, such as Google, Wikipedia, WolframAlpha, AlphaGo, and so on; the task of certain LTM processors is to use these applications. This is one way to ensure that the CTM has a large and powerful collection of processors at the start of its life (t = 0), a collection that can be expanded throughout its life.

The key features of the CTM model and its dynamics are similar to the properties of consciousness outlined by Dennett: there is no master scheduler, no boss neuron, no homunculus, no res cogitans controlling the transformation of our conscious thoughts. Control must be implemented by a dynamic and somewhat competitive process. What, exactly, determines who wins?

It should be something like the micro-emotions, the positive and negative valences and intensities, that accompany and control the fate of all contents: not just emotionally salient events like obsessive memories of pain, embarrassment, or desire, but also the most profound and abstract theoretical thinking. Although inspired by Baars' GWT architecture, the CTM integrates the functionality necessary for its conscious experience. That is the focus of the next section.

3 The feeling of consciousness

Although the CTM is conscious according to the definition in terms of conscious content broadcast by the STM, this definition does not explain what creates the feeling of consciousness in the CTM. The authors argue that the CTM's feeling of consciousness arises mainly from its extremely expressive Brainish language, coupled with the CTM's architecture, certain special processors, and the CTM's predictive dynamics (prediction, feedback, and learning).

1) Brainish

The multimodal Brainish language describes the world as the CTM perceives it, a perception composed of multimodal gists. Its vocabulary includes breath (what is smelled in the nostrils), pain (an extremely unpleasant hurting sensation), face (what is seen when looking at someone else's face), and more. Dreams are important because they show what gists can express when the CTM has neither input nor output.

2) Architecture

This includes the up-tree competition for access to the STM, the subsequent global down-tree broadcast of the winner, and especially all the special-role processors involved in generating the feeling of consciousness.

3) Special processors

The authors single out several processors that have special algorithms built in at birth.

  • The Model-of-the-World processor builds models of the CTM's worlds from information obtained from the environment, or from information retrieved (and possibly modified) from internal memory. Within the model of the world it creates a sparse model of the CTM itself, labeled "CTM", which defines the CTM's inner world; the CTM's outer world is defined by the labels and Brainish descriptions the processor attaches to it, including the feelings its agents (might) have and the actions they (might) perform.
  • The inner-speech processor takes any speech encoded in a gist broadcast from the STM and sends it to the same location to which the input map sends outer speech (the gists created by the input map). Initially this goes via the STM, and then, once links have formed, directly via the links. "Inner speech", generated by the inner-speech processor, enables the CTM to recall the past, predict the future, and make plans. The gist of inner speech (what is said and heard when talking to oneself or in dreams) is almost indistinguishable from the gist of outer speech. Human inner speech sounds so much like outer speech that the two can be hard to tell apart, as happens in schizophrenia.
  • The inner-vision and inner-sensation (touch) processors map any images and sensations broadcast from the STM to the locations to which the input map sends outer scenes and outer sensations. There is little difference between inner vision and outer vision (the visual gists produced by the input map). The CTM's memory and predictive abilities enable it to create inner images and sensations, giving rise to imaginings and dreams. To prevent hallucinations of the kind seen in schizophrenia, the human brain must distinguish internal from external images; it has various tricks for doing so, one of which is making dreams difficult to remember.

These processors inform the "eyes" and "skin" in the CTM's model of the world, allowing them to "see" whatever the CTM recalls from visual memory and to "feel" whatever it recalls from tactile memory. These eyes and skin are the CTM's mind's eye and mind's skin. The authors regard these processors as inner generalized-speech processors.

4) Predictive Dynamics

In addition, the authors argue that the continuous cycle of prediction, feedback, and learning contributes to the CTM's feeling of consciousness. This feeling is further enhanced by the (parallel) predictive dynamics in the CTM's model-of-the-world processors, where the CTM is constantly making and testing plans. Positive feedback signals to the CTM that it understands what is going on; negative feedback, unless it concerns something unpredictable such as an unexpected loud bang, gives the CTM evidence that there is something it does not know or understand. The CTM's feeling of consciousness also depends on the following additional factors:

5) Basic (general) ability to think and make plans

6) Motivation (drive) to make and carry out plans.

Returning to the model-of-the-world processor: one of its central tasks is to label the various components of its model as self, not-self, or unknown. How does it decide what is self and what is not? If, immediately after the broadcast of a chunk (a thought of the CTM), an actuator performs an action in the environment, and that thought repeatedly causes the same action whenever it is broadcast, then this indicates that the actuator is part of the self.

The model-of-the-world processor also has other important jobs that give the CTM self-awareness, including creating imaginings, building maps of the environment and representing movement within it, helping to plan behavior in the environment, helping to predict the behavior of self and not-self, and correcting those predictions.

When the CTM finds itself, through the broadcasts, thinking about its own consciousness, the model-of-the-world processor labels the "CTM" in its model as "conscious". Now consider why the CTM believes it is conscious. It cannot be because the model-of-the-world processor, or any other processor, believes it is conscious, since processors are just machines running algorithms, and such machines have no feelings.

The authors hold that the CTM as a whole is conscious, in part because the model-of-the-world processor treats the "CTM" in its model of the world as conscious and propagates this view to all processors. Here "CTM" is a simple learned representation of the more complex CTM itself.

4 High-level explanations

This section explores how the CTM might experience various phenomena generally associated with consciousness. The authors argue that the explanations derived from the model provide a high-level understanding of how conscious experience arises, or might arise, and that these explanations are broadly consistent with the psychology and neuroscience literature.

4.1. Blindsight

In the first of the examples below, blindsight illustrates the difference between conscious and unconscious awareness. In blindsight, a person does not consciously see the outside world. When asked to pick up an object in a cluttered room, such a person typically responds, "I can't see where it is," yet if urged to try anyway, they can often perform the task adeptly. What is happening? In the CTM, visual input goes directly from the vision sensors to the subset of LTM processors that handle visual input. In a blindsighted CTM, however, some malfunction, perhaps a break in the up-tree, or the vision processors' inability to get their chunks to win the competition, prevents this information from reaching the STM and hence from being globally broadcast. For this reason, the CTM is not consciously aware of what it sees. Information can still pass between (unconscious) processors via links, however, so the visual information received by the vision processors can be sent over links to the walking processors that control the leg actuators.

4.2. Inattentional blindness

Inattentional blindness occurs when a person fails to detect a visual stimulus that is plainly in front of them; it is "the failure to notice the presence of something unexpected while your attention is engaged in another task." For example, in the famous selective-attention test, experimenters showed viewers the video "The Invisible Gorilla" and asked them to "count the number of passes made by the players in white shirts." Nearly all viewers gave roughly correct counts, but when asked "Did you see the gorilla?" they were stunned. What is going on here? Suppose the CTM is watching the gorilla video.

The input query about the white-shirted players gains access to the STM and is immediately broadcast to all LTM processors. To perform the task, the CTM's vision processors assign a high weight to gists about white shirts and a very low weight to anything black, so chunks whose gists concern the "gorilla" have little chance of entering the STM.

The CTM does not consciously see the gorilla. The CTM's explanation of inattentional blindness is this: giving high weights to task-relevant gists and low weights to irrelevant ones means that higher-weighted chunks enjoy a large competitive advantage. According to simulations reported in the cited reference, in certain "kindled" states "spontaneous activity can prevent external sensory processing," and the authors of that work link this blocking to inattentional blindness. In the CTM view, blocking the human brain's sensory processing of black objects corresponds roughly to the CTM drastically reducing the weights of chunks with black-related gists, thereby reducing their chance of entering the STM. The effect of differing weights in the CTM is also consistent with the theoretical suggestion that human inattentional blindness "may act as a filter for irrelevant information, potentially filtering out unexpected events."
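A toy sketch of this explanation, with weights chosen purely for illustration: if the competition selects a gist with probability proportional to its weight (itself a simplifying assumption), the "gorilla" gist almost never reaches the STM.

```python
# Tiny illustrative sketch (not from the paper): task-driven weights make the
# "gorilla" chunk lose the STM competition almost every time.
import random

weights = {"white-shirt pass": 10.0, "black-shirt player": 0.1, "gorilla": 0.1}

def winner(weights: dict) -> str:
    """Pick a gist with probability proportional to its weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for gist, w in weights.items():
        r -= w
        if r <= 0:
            return gist
    return gist   # fallback for floating-point edge cases

wins = [winner(weights) for _ in range(10_000)]
print(wins.count("gorilla") / len(wins))   # roughly 0.01: almost never conscious
```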

4.3. Change blindness

Change blindness occurs when a person fails to notice large changes in a picture or scene; it is "the failure to notice that something is different from one moment to the next."

An instructive example is a detective video. A detective enters a murder scene, says, "Clearly, someone in this room murdered Sir Smythe," and immediately questions each suspect in turn. The maid says, "I was polishing the brass in the master bedroom." The housekeeper says, "I was buttering the master's scones." And Mrs. Smythe says, "I was planting morning glories in the potting shed." This information is enough for the clever detective to solve the murder on the spot.

Yet why do we fail to notice the many jarring scene changes between the opening shot and the final one?

From the CTM's perspective, while watching the detective video the CTM forms an overall impression but does not notice the changes when the trench coat, flowers, paintings, and so on are replaced by other things. This is for the following reasons:

1) During filming, the director cleverly staged the changes to the whole scene and even to individual characters across cuts: the dark trench coat becomes a white one, the bear becomes a suit of armor, the rolling pin becomes a candlestick, the dead man's clothes change and his legs shift position, and so on. The video input therefore never signals to the CTM's vision processors that the "scene" has been modified.

2) Crucially, the same gist describes the opening and closing scenes in the same way: "A drawing room in a mansion, with a detective, a housekeeper, a maid, others, and a dead man on the floor."

Under these conditions, the CTM exhibits change blindness.

Again, the CTM explanation is consistent with the literature on change blindness in humans: given that change detection requires adequate representations of the pre- and post-change scenes as well as a comparison between them, any task characteristic that affects the richness of the representations or the propensity to compare them should influence detection. The semantic importance of the changing object appears to have the greatest impact on whether subjects attend to, and therefore notice, the change.

4.4. Illusions: inattentional blindness and change blindness may be considered examples of illusions

By definition, the CTM is consciously aware of the gists in the chunks broadcast from the STM. (These gists reach the STM from the LTM processors, which obtain them from the sensors via the input map, from other LTM processors via links, or from the STM via broadcasts.) Gists are stored in LTM memories for many reasons, one of which is to provide processors with the high-level story, for example of what happens in a dream.

In the CTM, the stream of consciousness is the sequence of gists broadcast from the STM. Each visual gist at each moment gives the CTM the feeling that it is seeing the entire scene before it, even though it sees at most a small part of it. There are several explanations for this global illusion, chief among them that a single multimodal Brainish gist can describe an extremely complex scene, such as "I am standing in a Japanese garden, with streams, paths, bridges, and trees ahead of me." Does this gist contain the detail of a 12-megapixel photo captured by an iPhone camera (as it feels like we see)? The global illusion is the result of highly suggestive (succinct) information in the gist: the CTM conjures up the scene as if by magic. Keith Frankish calls this the illusionist theory of consciousness.

4.5. The Creation of Dreams

Dreams are the ultimate illusion. Some people claim not to dream, but most people do. Dreams may be visual, auditory, tactile, and so on. They are often tied to emotional processes and can express great pain and fear (nightmares) or great joy (as in flying dreams). One can feel crippling leg pain, only to wake and find the pain entirely illusory, and one can fall asleep face down and wake up face up.

In the CTM, a built-in sleep processor tracks time, habits, day and night, and so on, and has internal algorithms that monitor the need for sleep. If the sleep processor determines that sleep is needed, it raises the weights of its own chunks so that they enter the STM and keep other chunks out; this has roughly the same effect as lowering the weights of the chunks of all other LTM processors. The sleep processor also blocks, or greatly attenuates, the various inputs (sights and sounds) and blocks the signals that would activate outputs (such as those sent to the limbs). This is the sleep state. The sleep processor continues to monitor the need for sleep and, as that need decreases, proportionally lowers the weights of its own chunks. Eventually this allows dream gists (in chunks) to reach the STM. This is the dream state.
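A toy sketch of the mechanism just described, with all numbers and thresholds invented for illustration: the sleep processor's blocking chunk carries a weight proportional to the current need for sleep, so as that need falls, dream gists can start winning the competition.

```python
# Minimal sketch (assumptions only): the sleep processor modulates the weight
# of its own empty-gist chunk, reusing the weighted-competition idea above.
def sleep_chunk_weight(need_for_sleep: float, max_weight: float = 100.0) -> float:
    """Weight of the sleep processor's blocking chunk, proportional to need."""
    return max_weight * max(0.0, min(1.0, need_for_sleep))

for need in (1.0, 0.6, 0.2, 0.0):          # falling need for sleep over the night
    w = sleep_chunk_weight(need)
    state = "deep sleep" if w > 50 else "dreaming" if w > 0 else "awake"
    print(f"need={need:.1f}  sleep-chunk weight={w:5.1f}  -> {state}")
```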

Finally, the CTM wakes up when the sleep processor lifts its restrictions on input and output. In humans, non-rapid-eye-movement and rapid-eye-movement sleep may alternate several times before awakening.

When the CTM is in the dream state, a Dream Creator processor becomes active (that is, it starts sending its chunks to the STM). The gists in these chunks contain kernels of thought, usually based on the CTM's earlier activities, concerns, and imaginings. When these chunks are broadcast, all processors, including those that play key roles in the feeling of consciousness, receive the broadcasts and compete to react. This gives the CTM the same feeling of being alive in dreams as when awake.

The dream processor and the other processors then take turns interacting. This back-and-forth dialogue between the dream processor and the others is the sequence of gists that makes up the dream, and this sequence is the dream's stream of consciousness.

A dream is essentially pieces of this sequence spliced together to produce a dream stream of consciousness, an inner movie, in which the CTM 1) sees, hears, and feels the dream world and 2) acts on what appears in that world. Such an (interactive) inner movie presents a sequence of sensory inputs (images, smells, and sounds) and generates a sequence of actions.

While the CTM is sleeping but not dreaming, most processors cannot get their chunks into the STM; the exceptions are a loud-noise detector and the sleep processor itself. The presence of the sleep processor's chunks in the STM keeps most other processors' chunks out. By design, the sleep processor's chunk holds an empty gist, so the CTM has little or no awareness.

After the CTM leaves the sleep state and enters the dream state, some LTM processors, such as the inner-vision processor, can send their chunks to the STM. Therefore, while dreaming, the CTM is conscious and can experience events vividly. As discussed in Section 3, key processors, such as those for inner speech, inner vision, inner sensation, and the model of the world, play a special role in generating the CTM's feelings of consciousness.

These processors play similar roles when the CTM is dreaming. Here are some examples of how the processors create dreams for the CTM:

  • The inner-speech processor extracts inner speech from the multimodal gists broadcast by the STM and sends that speech to the same processor that receives outer speech. This makes dream speech sound like outer speech. The inner-vision and inner-sensation processors help create dreams in similar ways.

Dreams demonstrate the power of Brainish gists. Whatever the CTM sees, hears, feels, and does in a dream must have been fabricated by processors capable of recalling, revising, and submitting their creations to the STM competition. These fabrications feel realistic because they use the same gists that are produced while awake.

Thus dreams can produce the feel of the real world even when the CTM is completely cut off from external input. As a result, dreams can seem so realistic that the CTM may have difficulty distinguishing dreams from reality (humans, however, find dreams harder to remember, which mitigates the problem for us). The literature shows that after a person sees a face, the same pattern of neural activity appears whether the face is retrieved from memory or appears in a dream. The literature also notes that in REM sleep, when people have the sensation of movement, motor-cortex activation in dreams matches that in wakefulness.

  • The model-of-the-world processor predicts the effects that the CTM's actions will have in its (inner and outer) worlds, doing so from the effects of those actions in its model of the world. The dream processor can use this same prediction machinery to create dreams.

Dreams also allow the CTM to test itself in unknown and possibly dangerous situations. In both humans and the CTM, dreams can serve as laboratories for trying out various possible solutions. Unlike in waking consciousness, however, since the "consistency checker" in the CTM's model-of-the-world processor receives no input from the environment, inconsistencies can slip by unnoticed more easily in dreams than in wakefulness.

That is why the CTM can fly in its dreams. Zadra and Stickgold assert that in humans "dreams do not exactly recreate memories; dreams create a narrative that has the same gist, and may have the same title, as some recent memory." They note that REM sleep provides a brain state in which weak and unexpected associations are activated more strongly than normally strong associations, which explains how REM sleep helps the mind find weakly related, distant associations, and perhaps why our REM-sleep dreams are so strange.

4.6. Free Will

The problem of free will is ancient, appearing as early as the first century BC in Lucretius (De Rerum Natura): "If all motions are always linked, new ones arising from old in a fixed order, if the atoms never swerve so as to set off some new motion that breaks the bonds of fate, the everlasting sequence of cause and effect, then what is the source of the free will possessed by living creatures throughout the earth?" Samuel Johnson (1709–1784) captured the paradox of free will: "All theory is against the freedom of the will; all experience is for it." Stanislas Dehaene gives a contemporary voice: our brain states are obviously not uncaused and cannot escape the laws of physics, nothing can; but as long as our decisions are based on conscious deliberation, made autonomously and without hindrance, with the pros and cons carefully weighed before committing to a course of action, that is true freedom, and when this happens we rightly speak of a voluntary decision, even if it is ultimately caused by our genes and our environment.

" The author of this article added based on Dehaene that the calculation takes time. To make a decision, the CTM evaluates its alternatives in an evaluation that takes time, during which the CTM is free, in fact can feel free, to choose what it considers (or calculated) to be best. a result.

Thus the theoretical computer science view shapes our definition of free will: free will is the freedom to compute the consequences of different courses of action, or to compute as many of those consequences as possible within the limits of the available resources (time, space, computing power, and information), and to choose among them the course of action that best suits one's goals.

This definition encompasses both predictive dynamics (computing the consequences of different courses of action) and resource constraints (time, space, computing power, and information). For example, suppose a CTM is asked to play a particular chess position. Different processors suggest different moves. The CTM's main chess-playing processor (assuming such a processor exists, or at least one with a "high-level" view of the game) broadcasts a chunk through the STM indicating that it recognizes it has a choice of moves and that the consequences of each move deserve careful study. At this point, faced with possible moves whose consequences have not yet been evaluated, the CTM is free to choose, within its time limit, the move it judges best. Does the CTM feel that it has free will?
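A minimal sketch of this notion of free will as resource-bounded computation; the move list, evaluation function, and time budget are all placeholders for illustration, not anything specified in the paper.

```python
# Minimal sketch (assumptions only): evaluate as many consequences of the
# candidate actions as the time budget allows, then pick the best-looking one.
import time

def choose_action(candidates, simulate, evaluate, budget_seconds: float):
    """Score candidate actions until the budget runs out, then pick the best."""
    deadline = time.monotonic() + budget_seconds
    best_action, best_score = candidates[0], float("-inf")
    for action in candidates:
        if time.monotonic() >= deadline:      # resources exhausted: stop early
            break
        score = evaluate(simulate(action))    # compute this action's consequence
        if score > best_score:
            best_action, best_score = action, score
    return best_action

# Toy usage: "consequences" are just numbers and "evaluation" is the identity.
moves = ["e4", "d4", "Nf3", "c4"]
consequence = {"e4": 0.3, "d4": 0.5, "Nf3": 0.4, "c4": 0.2}
print(choose_action(moves, simulate=consequence.get, evaluate=lambda x: x,
                    budget_seconds=0.01))
```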

1) Consider the moment at which the CTM asks itself, "What action should I take?" This means the question has risen to the STM stage and reached the audience of LTM processors via the broadcast. In response, some audience members contribute their own suggestions, and the winner of the competition gets broadcast from the stage. Because gists are succinct, something like these brief exchanges is a reasonable picture of what gets broadcast.

2) The continual stream of comments, commands, questions, suggestions, and answers that appear in the STM and are broadcast globally to the LTM makes the CTM aware of being in control. When the CTM is asked how it arrived at a particular suggestion (that is, what thinking led to it), its processors can shed light on the part of the conversation that reached the stage (though perhaps not on what never got there).

3) Many LTM processors contribute, through the competition, to the CTM's final decision, but the CTM is consciously aware only of what enters the STM, not of everything submitted to the competition. Moreover, most of the CTM, that is, most of its processors, is unaware of the unconscious conversations between processors (via links). When enough of the decision process bypasses consciousness in this way, decisions can sometimes seem to come out of thin air. Even so, although the CTM cannot consciously know how its suggestions were arrived at, beyond the high-level content broadcast by the STM, it knows that the suggestions come from within itself. The CTM deserves credit for its suggestions (they do, after all, come from within the CTM); some can be explained with a high-level narrative, and the unexplained can be met with "I don't know" or "I don't remember." It is precisely this partial knowledge of its choices (the CTM both understands and does not understand them) that generates the CTM's feeling of free will. Deterministic or not, this experiential feeling is a form of free will.

How important is randomness to this feeling of free will? Note that the above explanation does not invoke quantum physics. The only randomness lies in the coin-flip neurons of the up-tree competition and in whatever randomness the processors use in their probabilistic algorithms. Moreover, it can be shown that the above argument for the feeling of free will still applies to a fully deterministic CTM (for example, one that uses pseudo-randomness). It follows (and this will predictably spark heated debate) that even in a completely deterministic world, the CTM would feel that it has free will.

