ainerd August 17, 2020

Gaming rules – but who really is the better player?

John McCarthy, who coined the term artificial intelligence, famously called chess the ‘Drosophila of AI’, the field’s model organism. Yet there is a strange paradox that has gone largely unnoticed: although AI can quickly master board games created by humans, such as chess, it has long been less competent than humans at complex computer games. Since IBM’s Deep Blue defeated world chess champion Garry Kasparov in 1997, AI has found classic board games far easier to master than video games.

The nonprofit OpenAI, backed by billionaire tech titan Elon Musk, has built a bot that beat a team of humans at the complex video game Dota 2. On Tuesday, Bill Gates tweeted that the win was a big deal, because victory required the kind of teamwork and collaboration that marks a real advance in artificial intelligence. Earlier, Elon Musk had boasted on Twitter of OpenAI’s conquest, saying its bots were the first to beat the world’s best players in competitive e-sports.

In a video released Monday, a machine learning engineer at the University of California, San Diego, noted that individual bots had previously won the one-on-one version of Dota.

Dota is played by coordinated, focused teams of five, and OpenAI’s bot ‘OpenAI Five’ has prevailed in a number of matches against teams of professional players as well as mixed teams of bots and humans.

Although the bots lost that match, it provided a great example of how reinforcement learning is changing the game in artificial intelligence.
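
To make the idea concrete, here is a minimal sketch of tabular Q-learning, the simplest member of the reinforcement learning family these systems build on. The toy one-dimensional grid world and all parameter values are illustrative choices, not anything OpenAI actually used:

```python
import random

# Toy 1-D grid world: states 0..4, reaching state 4 yields reward 1.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                   # step left or right
alpha, gamma, eps = 0.5, 0.9, 0.1    # learning rate, discount, exploration

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: usually exploit the best known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = 0.0 if s2 == GOAL else max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy marches straight toward the goal.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```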

Back at BlizzCon in November, DeepMind said that its machine learning platform had managed to beat the game’s built-in AI on its highest difficulty in a very short space of time. Game developers face a difficult balancing problem here: if the built-in AI beats the player every time, it is no fun to play against, but it cannot be a pushover either. It is safe to say that game AI has earned a certain reputation, and many players who find themselves matched against a strong bot treat the game as an instant loss, typing ‘GG’ (‘good game’) and conceding before things go badly.

After intensive training against competing models, DeepMind was able to teach AlphaStar to play the game as well as the best human players. The team started with replays of professional games to give AlphaStar a starting point before the competitive phase began.
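
That bootstrapping step is essentially supervised imitation learning: a policy is trained to predict which action a professional took in each recorded game state. Below is a minimal sketch of the idea using a softmax classifier; the feature dimensions and the synthetic stand-in for replay data are invented for illustration and are nothing like DeepMind’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_ACTIONS = 8, 4

# Synthetic stand-in for replay data: game states paired with the pro's action.
states = rng.normal(size=(1000, N_FEATURES))
true_W = rng.normal(size=(N_FEATURES, N_ACTIONS))
actions = np.argmax(states @ true_W, axis=1)   # the "expert" policy to imitate

W = np.zeros((N_FEATURES, N_ACTIONS))
lr = 0.1
for _ in range(300):
    logits = states @ W
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    # Cross-entropy gradient: push the predicted distribution toward the expert's choice.
    probs[np.arange(len(actions)), actions] -= 1.0
    W -= lr * states.T @ probs / len(actions)

accuracy = (np.argmax(states @ W, axis=1) == actions).mean()
print(f"imitation accuracy on replay data: {accuracy:.2%}")
```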

Not to be outdone, Google’s DeepMind recently took on and beat several professional players at StarCraft II. Meanwhile, the team of bots developed by OpenAI, the so-called ‘OpenAI Five’, went through a series of Dota 2 games against teams of professional players, which it initially lost. After a few more weeks of training, the improved agents returned to face, and beat, the best human team in the world.

This victory raises the question of whether humans are still the better players, and whether there is evidence that AI is acquiring human-like skills, such as the ability to play first-person video games.

In 2019, AI research reached several milestones in other multiplayer strategy games. In Dota 2, a professional e-sports team was defeated by a squad of five AI-controlled bots, and AI also beat professional human players at StarCraft II.

In 2016, DeepMind’s AlphaGo, a bot that combines complex search algorithms with learning from its own play, beat one of the world’s best Go players, Lee Sedol, the first time an AI had beaten a top-level player at the game.

DeepMind went on to develop software that learned to beat the best Go players in the world, and the best chess players too. The program, now called AlphaZero, uses self-play and search to learn a game quickly: it reached world-class chess strength after just four hours of training. Besides chess and Go, AlphaZero also taught itself shogi, mastering the game quickly and then beating Elmo, one of the strongest shogi programs.
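
The self-play loop at the heart of such systems can be sketched in miniature. The toy program below learns a value table for tic-tac-toe purely by playing against itself; it illustrates only the self-play idea, since AlphaZero actually pairs a deep neural network with Monte Carlo tree search rather than a lookup table:

```python
import random

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' for a win, 'draw' for a full board, else None."""
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if " " not in board else None

V = {}                     # state value from X's perspective (1.0 = X wins)
alpha, eps = 0.2, 0.1      # learning rate and exploration rate

def value(board):
    return V.setdefault("".join(board), 0.5)

for game in range(20000):
    board, history, player = [" "] * 9, [], "X"
    while winner(board) is None:
        moves = [i for i, c in enumerate(board) if c == " "]
        def after(m):          # value of the position after playing move m
            b = board[:]
            b[m] = player
            return value(b)
        if random.random() < eps:
            move = random.choice(moves)                      # explore
        else:
            # X steers toward high-value states, O toward low-value ones.
            move = (max if player == "X" else min)(moves, key=after)
        board[move] = player
        history.append("".join(board))
        player = "O" if player == "X" else "X"
    # Back the final result up through every position of the game.
    target = {"X": 1.0, "O": 0.0, "draw": 0.5}[winner(board)]
    for state in reversed(history):
        v = V.get(state, 0.5)
        V[state] = v + alpha * (target - v)
        target = V[state]

print(f"learned values for {len(V)} positions")
```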

Video games are a clean way to measure the progress of artificial intelligence and to compare computers with humans. DeepMind has developed several algorithms that learn to play simple video games, such as classic Atari 2600 titles, and other researchers have done the same with the Super Mario Bros. series.
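
DeepMind’s breakthrough Atari agent, DQN, combined Q-learning with a deep network and an experience replay buffer: transitions are stored as the agent plays and later sampled at random, which breaks the correlation between consecutive frames. Here is a minimal sketch of such a buffer; the capacity, batch size, and dummy transitions are illustrative choices, not DeepMind’s published settings:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of past transitions, sampled uniformly at random."""

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)   # old transitions drop off the back

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Random sampling decorrelates the batch from the current episode.
        batch = random.sample(list(self.buffer), batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

# Usage: store transitions while playing, then train on random minibatches.
buf = ReplayBuffer()
for t in range(1000):
    buf.push(state=t, action=t % 4, reward=0.0, next_state=t + 1, done=False)
states, actions, rewards, next_states, dones = buf.sample(32)
print(f"sampled a batch of {len(states)} transitions")
```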

However, this is also a very narrow test, and AlphaStar, like its predecessors, can do only one task at a time, and only for a short time.

OpenAI researchers I spoke to pointed out that much of what wins these games is ‘micro’: the second-to-second positioning and attacking skill involved in controlling the detailed movement of many units, where a computer’s reflexes are a huge advantage. OpenAI Five’s micro is so good that human players who have watched it now try to mimic it, and its success is not down to superior reflexes alone. In assessing the performance of AI systems in strategy games, then, the question is not just who wins the second-to-second positioning and attacking battles, where computer reflexes offer huge advantages, but who plays the longer game: AI is very good at the former, yet still not as good as humans at planning more than a few seconds ahead.
