Blizzard, Google team up to teach next-generation AIs how to play StarCraft II
11-18-2016 at 01:51 AM
Earlier this year, Google’s AlphaGo AI beat world-class Go champion Lee Se-dol in four games out of five. This was a significant milestone, given the sheer number of positions that are possible within the game, and the difficulty of creating an AI that could efficiently evaluate them before the heat death of the universe. Now, Blizzard is teaming up with Google to create a next-generation AI capable of playing an actual computer game: StarCraft II.
At first glance, this might not seem to make much sense. After all, playing against an “AI” has been a feature of computer games for decades, in everything from first-person shooters to RPGs to chess simulators. The difference between game AI and the kind of AI Google is developing is simple: most of what we call artificial intelligence in gaming is remarkably bereft of anything resembling intelligence. In many titles, increasing the difficulty level simply gives the computer player more resources, faster build times, inside information about player activities, or looser constraints on how many actions the CPU can perform simultaneously. That turns the bots into overpowered thugs, but it doesn’t make them any better at what they do.
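That style of difficulty scaling can be sketched in a few lines. Everything below (names, multipliers, the idea of a per-tick action cap) is illustrative, not taken from any shipping game:

```python
# Illustrative sketch: "harder" difficulty as raw stat bonuses, not smarter play.
# All names and numbers here are invented for the example.

DIFFICULTY_BONUSES = {
    "easy":   {"resource_mult": 1.0, "build_time_mult": 1.0, "sees_player": False},
    "normal": {"resource_mult": 1.0, "build_time_mult": 0.9, "sees_player": False},
    "hard":   {"resource_mult": 1.5, "build_time_mult": 0.5, "sees_player": True},
}

def apply_difficulty(base_income: int, base_build_time: float, level: str):
    """Return the CPU player's effective income and build time.

    Note that nothing here changes decision-making: the bot follows the
    same script at every level and simply gets cheaper, faster units
    (and, on "hard", free intel about the player).
    """
    bonus = DIFFICULTY_BONUSES[level]
    income = int(base_income * bonus["resource_mult"])
    build_time = base_build_time * bonus["build_time_mult"]
    return income, build_time
```

The point of the sketch is that the knob being turned is resources and speed, never the quality of the bot's decisions.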
Game AI isn’t really what you’d call “intelligent,” and when it breaks, the results can be hilarious
Game AI typically makes extensive use of scripts to determine how the computer should respond to player activities (we know StarCraft’s AI works this way because it has been studied in great depth). At the most basic level, this consists of a build order for units and buildings, plus some rules for how the computer should respond to various scenarios. To seem even somewhat realistic, a game AI has to respond differently to an early rush, to an expansionist player who builds a second base, and to a player who turtles up and plays defensively. In an RPG, a shopkeeper might move around his store unless he notices you stealing something, at which point a new script governs his responses to the player.
An example of AI scripting in Age of Kings
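A toy version of that kind of scripting might look like the following sketch. All unit names and trigger conditions are invented for illustration; real games use far larger rule sets, but the shape is the same:

```python
# Minimal sketch of a scripted RTS opponent: a fixed build order plus
# condition/response rules. Names and triggers are made up for the example.

BUILD_ORDER = ["worker", "worker", "barracks", "soldier", "soldier"]

# (observed condition, scripted response) pairs, checked in order each tick.
RULES = [
    ("early_rush",    "build_defenders"),
    ("enemy_expands", "harass_expansion"),
    ("enemy_turtles", "tech_up_and_siege"),
]

def next_action(step: int, observations: set) -> str:
    """Pick the bot's next action: rules fire first, else follow the build order."""
    for condition, response in RULES:
        if condition in observations:
            return response
    if step < len(BUILD_ORDER):
        return "build_" + BUILD_ORDER[step]
    return "attack"  # default behavior once the opening script is exhausted
```

Nothing in this loop remembers past games or updates the rules; the "intelligence" is whatever the designer wrote into `BUILD_ORDER` and `RULES` ahead of time.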
Game AI, therefore, is largely an illusion, built on scripts and carefully programmed conditions. One critical difference between game AI and the type of AI that DeepMind and Blizzard want to build is that game AI doesn’t really learn. It may respond to your carrier rush by building void rays, or counter your siege tanks with a zergling rush. But the game isn’t actually learning anything at all; it’s just reacting to conditions. Once you quit the match, the computer doesn’t remember anything about your play, and it won’t make adjustments to its own behavior based on who it’s facing.
The AI that Google and Blizzard want to build would be capable of learning, adapting, and even teaching new players the ropes of the game in ways far beyond anything contemplated by current titles. It’ll still be important to constrain the AI in ways that allow humans to win, since games like StarCraft are (to a computer) basically just giant math problems, and an unconstrained CPU opponent can micro at speeds that would make the best Korean players on Earth weep.
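One plausible way to impose such a constraint is a rolling actions-per-minute (APM) cap, so a learning agent has to win through better decisions rather than inhuman micro. This sketch is purely illustrative; the cap value and the design assume nothing about Blizzard's or DeepMind's actual interface:

```python
# Hedged sketch of an actions-per-minute limiter for a bot. The 300 APM
# default and the rolling-window design are assumptions for illustration.
from collections import deque

class ApmLimiter:
    def __init__(self, max_apm: int = 300):
        self.max_apm = max_apm
        self.timestamps = deque()  # times (in seconds) of recently allowed actions

    def allow(self, now: float) -> bool:
        """Return True if another action fits under the rolling 60-second cap."""
        # Drop actions that have aged out of the one-minute window.
        while self.timestamps and now - self.timestamps[0] >= 60.0:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_apm:
            self.timestamps.append(now)
            return True
        return False  # over the cap: the bot must wait, like a human would
```

The bot would call `allow()` before issuing each command and skip the action when it returns False, forcing it to prioritize rather than micro every unit at once.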