


DeepMind has made a new breakthrough in game AI, this time in the board game Stratego.
In AI research, progress is often demonstrated through board games, which measure and evaluate how humans and machines develop and execute strategies in controlled environments. For decades, the ability to plan ahead has been key to AI's success in perfect-information games such as chess, checkers, shogi, and Go, as well as in imperfect-information games such as poker and Scotland Yard.
Stratego has become one of the next frontiers of AI research. A visualization of the game's stages and mechanics is shown in Figure 1a below. The game poses two main challenges.
First, Stratego's game tree has 10^535 possible states, more than the well-studied imperfect-information game no-limit Texas hold'em (10^164 possible states) and even the famously complex Go (10^360 possible states).
Second, acting in a given situation in Stratego requires reasoning over the 10^66 possible deployments available to each player at the start of the game, whereas poker involves only 10^3 possible pairs of private cards. Perfect-information games such as Go and chess have no private deployment phase, and so avoid this source of complexity entirely.
Consequently, neither state-of-the-art model-based planning techniques for perfect-information games nor imperfect-information search techniques that decompose the game into independent subgames can be applied to Stratego.
For these reasons, Stratego provides a challenging benchmark for studying strategic interaction at scale. Like most board games, it tests the ability to make relatively slow, deliberate, and logical decisions sequentially. Because the game's structure is so complex, the AI research community had made little progress on it, with artificial agents reaching only the level of human amateur players. Developing an agent that learns, end to end and from scratch without human demonstration data, to make optimal decisions under Stratego's imperfect information therefore remained one of the grand challenges of AI research.
Recently, in a new paper from DeepMind, researchers proposed DeepNash, an agent that learns to play Stratego through model-free self-play, without human demonstrations. DeepNash defeated previous SOTA AI agents and reached the level of expert human players in the game's most complex variant, Stratego Classic.
Paper address: https://arxiv.org/pdf/2206.15378.pdf.
The core of DeepNash is a structured, model-free reinforcement learning algorithm that the researchers call Regularized Nash Dynamics (R-NaD). DeepNash combines R-NaD with a deep neural network architecture and converges to a Nash equilibrium, meaning it learns to play well under the game's incentives and is robust to opponents trying to exploit it.
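For intuition, the "regularization" in R-NaD can be pictured as a penalty on the reward for drifting away from a reference (regularization) policy. Below is a minimal sketch of such a KL-style log-ratio penalty; the function name, the coefficient eta, and the exact form are our illustration, not DeepMind's implementation:

```python
import numpy as np

def regularized_reward(r, pi_a, pi_reg_a, eta=0.2):
    """Illustrative R-NaD-style reward transformation: the raw reward r
    for an action taken with probability pi_a under the current policy
    is penalized by the log-ratio against the regularization policy's
    probability pi_reg_a. The penalty vanishes when the current policy
    matches the regularization policy."""
    return r - eta * np.log(pi_a / pi_reg_a)

print(regularized_reward(1.0, 0.5, 0.5))  # no drift -> reward unchanged: 1.0
print(regularized_reward(1.0, 0.9, 0.5))  # drifted upward -> penalized: ~0.88
```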
Figure 1b below gives a high-level overview of the DeepNash method. The researchers systematically compared its performance against various SOTA Stratego bots and against human players on the Gravon gaming platform. DeepNash defeated all current SOTA bots with a win rate of more than 97%, and it competed strongly with human players, achieving an 84% win rate and a top-3 placing on Gravon's 2022 and all-time leaderboards.
The researchers say this is the first time an AI algorithm has reached human expert level in a complex board game without deploying any search method in the learning algorithm, and also the first time an AI has reached human expert level at Stratego.
Method Overview
DeepNash learns an end-to-end strategy for playing Stratego, beginning with the strategic placement of its pieces on the board at the start of the game (see Figure 1a). For the game-play phase, the researchers combined deep RL with game-theoretic methods; through self-play, the agent aims to learn an approximate Nash equilibrium.
This work takes an orthogonal, search-free route: it proposes a new method that couples model-free reinforcement learning in self-play with a game-theoretic algorithmic idea, Regularized Nash Dynamics (R-NaD).
The model-free part means the method builds no explicit opponent model tracking the opponent's possible states; the game-theoretic part uses the reinforcement learning procedure to steer the agent's behavior toward a Nash equilibrium. The main advantage of this combination is that private states never need to be explicitly modeled from public states. The additional, harder challenge was to scale this combination of model-free reinforcement learning and R-NaD so that self-play in Stratego can compete with human expert players, something that had not been achieved before. This combined DeepNash method is shown in Figure 1b above.
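A useful way to picture what converging to a Nash equilibrium buys is exploitability: how much a best-responding opponent could still gain against the learned policy. In full Stratego this quantity is intractable, but for a small zero-sum matrix game it is one line. A sketch for intuition only, not a measurement from the paper:

```python
import numpy as np

def exploitability(payoff, pi, opp):
    """Best-response gap in a zero-sum matrix game where the row player
    plays mixture pi and the column player plays mixture opp. It is
    zero exactly when (pi, opp) is a Nash equilibrium."""
    return (payoff @ opp).max() - (pi @ payoff).min()

# Rock-paper-scissors: uniform play is the Nash equilibrium.
rps = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
print(exploitability(rps, np.ones(3) / 3, np.ones(3) / 3))      # 0.0
print(exploitability(rps, np.array([1.0, 0, 0]), np.ones(3) / 3))  # 1.0 (exploitable)
```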
Regularized Nash Dynamics Algorithm
The R-NaD learning algorithm at the core of DeepNash relies on the idea of regularization to achieve convergence. R-NaD relies on three key steps, as shown in Figure 2b below:
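Concretely, the three steps form an iterated loop: transform the reward with respect to the current regularization policy, run the learning dynamics until they reach (approximately) a fixed point, then adopt that fixed point as the next regularization policy and repeat. The toy below runs this loop on a tiny zero-sum matrix game; it is our simplified illustration of the scheme, with damped smoothed-response dynamics standing in for the paper's full reinforcement learning machinery:

```python
import numpy as np

def smoothed_response(payoff, opp, pi_reg, eta):
    """Step 1 folded in: the KL-regularized player's fixed-point response
    is the regularization policy re-weighted by exponentiated payoffs."""
    q = payoff @ opp                      # expected payoff of each action
    logits = np.log(pi_reg) + q / eta
    p = np.exp(logits - logits.max())
    return p / p.sum()

def r_nad_matrix_game(payoff, eta=0.5, lr=0.1, outer=100, inner=300):
    n, m = payoff.shape
    pi, opp = np.ones(n) / n, np.ones(m) / m
    pi_reg, opp_reg = pi.copy(), opp.copy()
    for _ in range(outer):
        for _ in range(inner):            # Step 2: dynamics to a fixed point
            pi = (1 - lr) * pi + lr * smoothed_response(payoff, opp, pi_reg, eta)
            opp = (1 - lr) * opp + lr * smoothed_response(-payoff.T, pi, opp_reg, eta)
        pi_reg, opp_reg = pi.copy(), opp.copy()  # Step 3: update the regularizer
    return pi, opp

# An asymmetric matching-pennies-style game whose unique Nash
# equilibrium mixes the two actions 0.4 / 0.6 for both players.
game = np.array([[2.0, -1.0], [-1.0, 1.0]])
print(r_nad_matrix_game(game))  # both mixtures approach (0.4, 0.6)
```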
DeepNash consists of three components: (1) the core training component, R-NaD; (2) fine-tuning of the learned policy to reduce the residual probability of highly unlikely actions; and (3) test-time post-processing to filter out low-probability actions and correct mistakes.
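The third component is simple to picture: at test time, action probabilities below some cutoff are zeroed and the remainder renormalized. A minimal sketch (the threshold value here is an assumption, not the paper's setting):

```python
import numpy as np

def postprocess_policy(probs, threshold=0.03):
    """Zero out low-probability actions and renormalize, so residual
    probability mass on highly unlikely moves cannot leak into play."""
    filtered = np.where(probs < threshold, 0.0, probs)
    return filtered / filtered.sum()

print(postprocess_policy(np.array([0.65, 0.30, 0.04, 0.01])))
# the 0.01 action is dropped; the remaining mass is renormalized
```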
DeepNash's network consists of a U-Net backbone with residual blocks and skip connections, plus four heads. The first head outputs the value function as a scalar, while the remaining three heads encode the agent's policy by outputting probability distributions over its actions during deployment and game play. The structure of the network's input observation tensor is shown in Figure 3:
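To make this description concrete, here is a heavily simplified PyTorch sketch. The channel count, action-space sizes, and the exact division of labor among the three policy heads are illustrative assumptions, and a plain convolutional stack stands in for the paper's residual U-Net:

```python
import torch
import torch.nn as nn

class DeepNashNetSketch(nn.Module):
    """Simplified stand-in for DeepNash's network: a convolutional torso
    feeding one scalar value head and three policy heads over the
    10x10 Stratego board."""

    def __init__(self, in_channels=82, hidden=64, n_deploy=400, n_moves=1800):
        super().__init__()
        self.torso = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        flat = hidden * 10 * 10
        self.value = nn.Linear(flat, 1)          # scalar value estimate
        self.deploy = nn.Linear(flat, n_deploy)  # deployment-phase policy
        self.select = nn.Linear(flat, 100)       # which square's piece to move
        self.move = nn.Linear(flat, n_moves)     # where to move it

    def forward(self, obs):
        h = self.torso(obs)
        return (self.value(h),
                self.deploy(h).softmax(-1),
                self.select(h).softmax(-1),
                self.move(h).softmax(-1))

net = DeepNashNetSketch()
value, deploy, select, move = net(torch.zeros(1, 82, 10, 10))  # dummy observation
```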
Experimental results
DeepNash was also evaluated against several existing Stratego computer programs: Probe, winner of the Computer Stratego World Championship in three years (2007, 2008, 2010); Master of the Flag, the 2009 champion; Demon of Ignorance, an open-source Stratego implementation; and Asmodeus, Celsius, Celsius1.1, PeternLewis, and Vixen, programs submitted to a 2012 Australian university programming competition, which PeternLewis won.
As Table 1 shows, DeepNash won the vast majority of games against all of these agents, even though it had no adversarial training against them and learned only through self-play.
Figure 4a below illustrates some of the deployments DeepNash uses frequently. Figure 4b shows a position in which DeepNash (blue) is behind in material, having lost a 7 and an 8, but ahead in information, because the red opponent has revealed its 10, 9, 8, and two 7s. A second example, in Figure 4c, shows DeepNash passing up the chance to capture the opponent's 6 with its 9, presumably because it judged keeping the 9's identity hidden to be worth more than the material gain.
In Figure 5a below, the researchers demonstrate positive bluffing, in which a player pretends a piece is worth more than it actually is. DeepNash chases the opponent's 8 with an unknown Scout (a 2), pretending it is the 10. The opponent, believing the piece may indeed be the 10, guides it toward its Spy (the one piece that can capture a 10). But in going after the supposed 10, the opponent's Spy is lost to DeepNash's Scout.
The second type of bluffing is negative bluffing, shown in Figure 5b below. It is the opposite of positive bluffing: the player pretends a piece is worth less than it actually is.
Figure 5c below shows a more sophisticated bluff: DeepNash moves its unrevealed Scout (a 2) close to the opponent's 10, where it could be read as a Spy. This threat lets Blue capture Red's 5 with a 7 a few moves later, gaining material, preventing the 5 from capturing the Scout, and revealing that the Scout is not in fact a Spy.