
The future of artificial intelligence is human-machine environment system intelligence

王林 (forwarded)
2023-05-16 19:52:04


Military intelligence, like war itself, is shrouded in fog: it is rife with uncertainty and resists prediction. Judging from the current trajectory of artificial intelligence, the foreseeable wars of the future harbor many unresolved hidden dangers in human-machine integration. Specifically:

(1) In the complex information environment of war, humans and machines can absorb, digest, and use only limited information within a given period of time. For humans, the greater the pressure, the more information is misunderstood and the more easily confusion and accidents arise. For machines, learning, understanding, and predicting cross-domain unstructured data remains very difficult.

(2) The information required for wartime decision-making is widely distributed across time and space, which means some key information remains difficult to obtain. Moreover, the objective physical data collected by machines is hard to coordinate and fuse with the subjectively processed information and knowledge held by humans.

(3) Future wars will exhibit strongly nonlinear characteristics and sudden variability, which often make both the course and the outcome of combat unpredictable. Formal logical reasoning based on axioms falls far short of the decision-making needs of complex, ever-changing battle situations. Given the continued spread and proliferation of nuclear weapons, the cost of future wars between countries, large or small, will keep rising. However artificial intelligence develops, the future belongs to mankind: human beings should jointly define the rules of the game for future wars and determine the fate of artificial intelligence, rather than letting artificial intelligence decide the fate of mankind. The reason is that artificial intelligence is logical, whereas future wars contain not only logic but also a large number of illogical factors.

(4) Given that countries classify autonomous equipment differently and diverge widely in how they define and understand strong (or general) artificial intelligence weapons, the most important work at present is not solving specific technical problems (technical iteration moves very quickly) but reaching consensus on the basic concepts and definitions underlying artificial intelligence applications, such as: ① What is AI? ② What is autonomy? ③ What is the difference between automation and intelligence? ④ What is the difference between machine computation and human reckoning? ⑤ Where does the boundary lie in the distribution of human-machine functions and abilities? ⑥ What is the relationship between data, AI, and risk responsibility? ⑦ What is the difference between computability and decidability? And so on.
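Question ⑦, the gap between computability and decidability, can be made concrete with a minimal sketch (the toy programs and names here are illustrative assumptions, not part of the article): whether a program halts within a fixed number of steps is decidable by simply running it, but whether it halts at all is only semi-decidable, since a non-halting program never yields a "no" answer.

```python
def run(program, state, max_steps):
    """Run a toy program given as a dict mapping state -> next state,
    where reaching 'HALT' means the program terminates.

    Bounded halting ("does it halt within max_steps?") is decidable:
    we always get an answer. Unbounded halting is only semi-decidable:
    'unknown' never distinguishes "loops forever" from "halts later".
    """
    for n in range(max_steps):
        if state == 'HALT':
            return ('halted', n)
        state = program[state]
    return ('unknown', max_steps)


halting = {'A': 'B', 'B': 'HALT'}   # terminates after two steps
looping = {'A': 'B', 'B': 'A'}      # cycles forever

print(run(halting, 'A', 100))       # ('halted', 2)
print(run(looping, 'A', 100))       # ('unknown', 100)
```

No matter how large `max_steps` is made, the `'unknown'` branch can never be eliminated in general; that irreducible residue is exactly what separates decidable questions from merely computable approximations of them.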

Some definitions are still very rough and need further refinement. For example, from the perspective of human security, prohibiting autonomous weapons that leave "people outside the loop" accords with universal values and reduces the risk of loss of control, and is therefore necessary; but what kind of person sits inside the system loop is often ignored, and an irresponsible person in the loop may make the system even worse.

(5) Regarding the worldwide development of autonomous technologies, it is recommended to set up a joint assessment team to regularly conduct detailed assessments of and early warnings on their development, check technological milestones, perform predictive analysis of technological trends, carry out targeted supervision of key institutions and R&D personnel working on sensitive technologies, and establish a certain degree of academic-openness requirements.

(6) The security risks and challenges posed by the militarization of AI mainly include:

① Artificial intelligence and autonomous systems may lead to unexpected escalation and crisis instability;

② Artificial intelligence and autonomous systems will reduce strategic stability between adversaries (for example, the strategic relationships between China and the United States, and between the United States and Russia, will become more tense);

③ Different pairings of humans and autonomous systems (humans judging human decisions, humans judging machine decisions, machines judging human decisions, and machines judging machine decisions) will affect how the situation escalates for both parties;

④ Machines understand poorly the deterrence signals (especially de-escalation signals) sent by humans;

⑤ Accidents in which autonomous systems unintentionally attack friendly forces or civilians will raise further questions;

⑥ Artificial intelligence and autonomous systems may fuel arms-race instability;

⑦ The proliferation of autonomous systems could trigger a serious search for countermeasures, heightening uncertainty and raising security concerns.

Computation deals with "complexity." Writing a composition is itself a computational process, except that it operates on textual symbols rather than numbers and graphics.

Human beings cannot completely master the world, but they can try to understand it. Such intelligence will give birth to new philosophical categories and modes of thinking.

In The Computer and the Brain, his last book on the relationship between the brain and the computer, published shortly before his death, von Neumann summarized his views by acknowledging that the brain was not only far more complex than a machine, but also seemed to carry out its functions along lines different from those he had originally envisioned. He concluded, almost definitively, that binary computers were entirely unsuitable for simulating the brain, because he had all but determined that the logical structure of the brain differs completely from that of logic and mathematics. Hence, "judging from the mathematics or logical language actually used by the central nervous system, the external form of the mathematics we use is completely unsuited to such work."

Recent scientific research also bears this out. The findings of the French neuroscientist Romain Brette fundamentally question the coherence of neural coding, the supposed architecture that brains and computers share. Influenced by the brain-computer metaphor, scientists have shifted the relationship between stimuli and neurons from a technical sense of mere correlation to a representational sense, in which the neuronal code fully represents the stimulus. In fact, how a neural network delivers a signal, in an optimally decodable form, to an idealized observer in the brain's "downstream structures" remains unknown; even in simple models it is unclear and remains a hypothesis. The metaphor thus leads scientists to focus only on the connections between sensations and neurons, while ignoring the real impact of an animal's behavior on its neurons.
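The "idealized observer" reading of neural coding that Brette criticizes can be sketched in a toy model (every name and number here is an illustrative assumption, not taken from his work): a single neuron responds linearly to a stimulus with noise, and a least-squares decoder plays the role of the hypothetical downstream observer reconstructing the stimulus from the response.

```python
import random

random.seed(0)

# Hypothetical linearly tuned neuron: response = gain * stimulus + noise.
gain = 2.0
stimuli = [random.uniform(-1, 1) for _ in range(200)]
responses = [gain * s + 0.05 * random.gauss(0, 1) for s in stimuli]

# The "idealized observer": a least-squares decoder mapping response
# back to stimulus. For one neuron this is a single scalar coefficient.
sxy = sum(r * s for r, s in zip(responses, stimuli))
sxx = sum(r * r for r in responses)
decode = sxy / sxx  # close to 1/gain when the noise is small

# Reconstruction error of the decoded stimulus estimate.
mse = sum((decode * r - s) ** 2
          for r, s in zip(responses, stimuli)) / len(stimuli)
print(decode, mse)
```

The decoder here recovers the stimulus almost perfectly, which is precisely what makes the metaphor seductive; Brette's point is that this decoder is a construct of the analyst, and no such observer has been demonstrated to exist anywhere downstream in the brain.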

The conclusions of the Hungarian neuroscientist György Buzsáki are even more radical. In his book The Brain from Inside Out, Buzsáki argues that the brain does not represent information by encoding it at all; rather, it constructs information. In his view, the brain does not passively receive stimuli and then represent them through a neural code; it actively searches among various possibilities and tests the possible options. This amounts to a complete overthrow of the metaphor of the brain as a computer.

Whether viewed from brain science or from computer science, the metaphor of the brain as a computer may not have much life left in it. Cobb keenly points out that the metaphor has also been carried back into research on computers themselves, blinding researchers and narrowing the scope of genuine inquiry.


Statement:
This article is reproduced from 51cto.com. In case of infringement, please contact admin@php.cn for deletion.