April 18, 2012
James Morrow
When I tell you to think about “game theory,” the first thing that springs to mind might be your favorite poker or chess technique, a multiplayer strategy for Call of Duty, or a foolproof way to win Civilization. We use strategies to play these games, but game theory as a science has traditionally had very little to do with video games or even common board games.
Game theory is the study of strategic interaction: situations where one person's decisions affect one or more other people, and ultimately the outcome of the interaction. Political scientists like me use these theories to analyze and predict the potential outcomes of human interactions, whether in diplomacy, security situations, elections, or “the game of life.”
My expertise is in applying game-theory principles to international conflict and counterterrorism. The terrorists and security forces are players in a high-stakes, real-life game, where success depends on their ability to accurately predict the behaviors and decisions of the other players involved. So, for instance, terrorists will survey possible targets, looking for weak spots and times when the target's guard is down. In response, government agents will try to defend high-priority targets, and make their patrols unpredictable. Both sides are making calculations and predictions to figure out their opponent's next move, and using those conclusions to determine their own decisions. That's game theory.
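The patrol logic above can be made concrete with a toy model (the targets, payoff values, and function below are my own illustrative assumptions, not anything from Morrow's work). Suppose a defender can guard only one of two targets, and an attack succeeds only if its target is unguarded. In the mixed-strategy equilibrium, the defender randomizes so that the attacker gains nothing by favoring either target:

```python
# Toy two-target attacker-defender game with made-up payoffs.
# The defender guards one target at a time; an attack succeeds
# only when its target is unguarded.

def defender_mix(v1, v2):
    """Probability of guarding target 1 that leaves the attacker
    indifferent between targets worth v1 and v2 to the attacker.

    Attacker's expected gain: (1 - p) * v1 for target 1, p * v2 for
    target 2. Setting them equal gives p = v1 / (v1 + v2)."""
    return v1 / (v1 + v2)

# Example: target 1 is worth three times as much as target 2.
p = defender_mix(3.0, 1.0)
print(f"Guard target 1 with probability {p:.2f}")   # prints 0.75
print(f"Attacker's best expected payoff: {(1 - p) * 3.0:.2f}")  # prints 0.75
```

The result matches the intuition in the paragraph above: the high-value target gets guarded more often, but never with certainty, precisely so the patrols stay unpredictable.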
But these strategic calculations may not help you in virtual games, which can be too complex for game theory to accurately predict—sometimes even more complex than real human decision-making processes. Take, for example, the classic game Civilization. Tasked with "building an empire to stand the test of time," players are challenged to build and expand a virtual society against computer-made obstacles.
Civilization should be a prime use case for game theory, because it requires strategic thinking to succeed. But any game theorizing is going to be undermined when you're playing against artificial intelligence.
AI players tend to gum up intricate game theory in a couple ways:
- AI is limited to what programmers can code. Its responses are algorithmically derived and pattern-based, and while the algorithms can be quite sophisticated, it's generally quite clear that you're playing a machine. Human gamers must therefore try to figure out how non-human players might respond to various scenarios, challenges, and motivations.
- To keep a game challenging over time, developers compensate for fixed AI by letting the computer do things a human player can't, which is how different difficulty levels are typically provided. In essence, the computer can cheat. Most computer games also offer so many options that an optimal strategy is almost impossible for a human player to figure out.
Of course, video games have come a long way since the days of the original Civilization (as has the Civilization franchise itself). Strategy games, shooters, and MMORPGs (massively multiplayer online role-playing games) have robust multiplayer modes, where humans are pitted against humans and game theory really comes into play.
Perhaps it's this human element that makes games fun. Your competitors' decisions, good and bad, become evidence of their humanity instead of AI algorithms, and your insights into human behavior become a competitive differentiator that can help you beat your foes.
This article is commissioned by Qualcomm Incorporated. The views expressed are the author’s own.