Google’s AI subsidiary DeepMind has built its reputation on systems that learn to play games by playing each other, starting with little more than the rules and a definition of what constitutes a win. That Darwinian approach of improvement through competition has allowed DeepMind to tackle complex games like chess and Go, where there are vast numbers of potential moves to consider.
But for board games like those, the potential moves are discrete and don’t require real-time decision-making. It wasn’t unreasonable to question whether the same approach would work for entirely different classes of games. Those questions, however, appear to be answered by a report in today’s issue of Science, in which DeepMind reveals an AI system that has taught itself to play Quake III Arena and can consistently beat human opponents in capture-the-flag matches.
Not a lot of rules
Chess’s complexity is built from an apparently simple set of rules: an 8×8 grid of squares and pieces that can each move only in very specific ways. Quake III Arena, to an extent, gets rid of the grid. In capture-the-flag mode, both sides start in a spawn area and have a flag to defend. You score points by capturing the opponent’s flag. You can also gain a tactical advantage by “tagging” (read: shooting) your opponents, which, after a delay, sends them back to their spawn area.