- [Our own Mathew Kumar attended a recent lecture that saw Bungie's Damian Isla detail the "30 seconds of fun" approach to game AI in the Halo franchise - discussing the advantages of "territorial" enemies that make "smart mistakes", giving the player opportunities to feel smart. Neat.]

The "30 seconds of fun" model Bungie has used to describe Halo's design mentality has been cited countless times.

During a recent programming keynote at the Develop conference in Brighton, Bungie's Damian Isla framed that model in terms of AI, and shared with the audience a number of AI development stories drawn from throughout the series.

"I would guess that it's one of the most sophisticated AI systems out there," said Isla of his studio's work on Halo. But he then noted that the series' AI is not so much an attempt to create a true artificial intelligence as it is a system to facilitate "a player experience," an experience that has been under "constant revision."

That experience hinges on the "30 seconds of fun" -- the notion that with "200 or 300 of these experiences laid back to back, you can make a game out of that."

Exploring Primal Games

A single 30-second encounter consists of a player entering a space, planning an attack approach, and then executing on that approach. "The thing that bears stating is that every one of these steps has their own associated pleasures and rewards," Isla noted. "A lot of the AI is specifically designed to address one or the other of these phases."

Halo's designers wanted the title's gameplay to explore mankind's "primal games" such as hide and seek, tag, and king of the hill, and the game's encounters were created with them in mind.

"It's evolution that taught us these primal games," said Isla. "They're the ones that are played with our reptilian brains. The idea was for the AI [to] play them back with you."

Out of that goal was born three main principles that stayed at the center of the game's design process: territory, a limited knowledge model, and satisfying enemies.

Territory

Isla pointed out that the importance of territory in Halo's encounter design is closely connected to the recharging shield mechanic that has appeared since the original game.

"Part of that recipe demands that at some point you have a safe zone," he explained. "In a sense we needed to make the AI territorial. Once you have this idea, you have to think about the problem of encounter progression as the player expands their safe zone. That itself is a pretty fun process. It gives the player a sense of progress, and is extremely plannable.

"If AI are given a territory they are supposed to occupy, it's very possible for a player to walk into a space and know that if he will open fire the whole room of enemies isn't going to come crashing down on him based on the distribution of enemies across it," he continued. "This helps build the concept of encounters as puzzles."

Enemy AI in Halo sees areas as maps of squares. If an enemy needs to take cover, it determines which squares do not fall within the player's line of sight, then plots its route from square to square. Isla pointed out that this "prolongs the 30 seconds," as well as introduces variety to the encounters, as enemy behavior is dependent on player behavior.
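The square-map scheme described above can be sketched roughly as follows (an assumed, simplified reconstruction, not Bungie's implementation; the sampling-based line-of-sight test and BFS route are my own stand-ins):

```python
# Illustrative grid-cover sketch: mark squares hidden from the player's line
# of sight, then plot a square-to-square route to the nearest hidden one.
from collections import deque

def line_of_sight(a, b, walls):
    """Crude LOS test: sample points along the segment; blocked by any wall."""
    steps = max(abs(b[0] - a[0]), abs(b[1] - a[1]), 1)
    for i in range(steps + 1):
        x = round(a[0] + (b[0] - a[0]) * i / steps)
        y = round(a[1] + (b[1] - a[1]) * i / steps)
        if (x, y) in walls:
            return False
    return True

def path_to_cover(start, player, walls, width, height):
    """BFS from the enemy's square to the nearest square the player can't see."""
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        cell, path = frontier.popleft()
        if not line_of_sight(player, cell, walls):
            return path  # first hidden square found is the closest in steps
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dx, cell[1] + dy)
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in walls and nxt not in seen):
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # no reachable cover

walls = {(3, 2), (3, 3), (3, 4)}  # a low wall between player and enemy
route = path_to_cover(start=(5, 0), player=(0, 0), walls=walls, width=8, height=5)
print(route[-1])  # the chosen cover square, hidden behind the wall
```

Because the set of hidden squares changes whenever the player moves, the same encounter yields different cover routes on different playthroughs, which is the variety Isla describes.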

Added Isla, "We recapitulated this system many times in Halo 2 and 3, and formalized it with deeper topologies."

The Limited Knowledge Model

In order for hide and seek, one of the crucial "primal games," to work, the AI cannot always be aware of the player's location. Without that limited knowledge model, the player would be unable to sneak up on enemies.

When the AI makes mistakes, it gives the player opportunities to feel smart. There are "bad mistakes" -- NPCs running into walls, or standing atop one another -- which break the fourth wall, but there are also good mistakes -- if the large "Hunter" enemy misses with its melee attack, it has a long recovery time during which the player is able to flank it.

Isla made special mention of AI misperception -- "the most interesting form" of good AI mistakes. If the player moves stealthily, the AI will assume the player is still sitting where the AI last knew him to be. The team took that concept further in Halo 2, explicitly designing levels to allow for plenty of cover with which to outmanoeuvre enemies.

"Each AI has an internal model of each target, and that model can be wrong," Isla summarized. "This allows the AI to be surprised by you, and this is very fun."

Satisfying Enemies

Still, Isla stressed, enemies shouldn't be dumb. "It's more fun to kill an enemy that's cunning, formidable, and tactical," he said, pointing out that that goal is not just an AI problem but also related to convincing animation and game fiction.

For one thing, enemies must be reactive: "They have to let the player know that his presence is important to them one way or the other, and they have to make the player understand how his actions affect them."

They must also be capable: "They have to be roughly player-equivalent in terms of capabilities, which doesn't mean that we're making bots, but [they must be able] to use and board all the vehicles, and use all the weapons."

Halo 3

Isla took some time to speak on some of the AI changes for Halo 3, most of which dealt with scale -- environments became more sophisticated, enemy and vehicle counts rose, and scripting grew more complex.

Still, the AI performance challenges from the original Halo, such as perception raycasting and pathfinding, remained constant. Additionally, the team could afford less spatial processing than in the original game, as the levels' increases in geometric complexity outstripped the increases in analytical capability. "That's depressing," commented Isla.

Changes in scale affected all three major AI focal points. Territory was affected by massive battles, more characters, and more complex scripting; the limited knowledge model was affected by more realistic enemy perception, group perception, and group dialogue; and the goal of satisfying enemies was affected by an increase in enemy variety and ability, as well as the simple increase in enemy count.

For example, the introduction of more pervasive physics created significant AI issues. "I sometimes forget this when I play Halo 1, but all of the crates are static," Isla recalled. "By the time we've reached Halo 2 all of the crates are exploding and moving everywhere, and that created a terrible problem for our pathfinding. All of this is great with me, however, because I believe that more is better when it comes to AI -- the more things that you have to react to, the more things you can do, and the more AI seems to have common sense."

The Social Contract

One particularly wide-reaching issue is the interaction between the player character and AI-controlled friendly characters.

Isla elaborated by way of an example: "If I'm an AI, and the player drives up to me, which seat do I get in? This is actually quite a difficult proposition, because if I'm carrying a rocket launcher I should really get in the passenger seat so I can use it. When we tip over, what do I do? Do I just walk off? Or do I stick around and wait for the player to right it? And what does sticking around even mean? Because I should be doing all my normal behaviors -- taking cover, fighting."

The real answer, he offered, is telepathy. "This is not just a flip example," he added. "We run up against the problem of a lack of telepathy all the time."

The team took different approaches to answering those questions in Halo 2 and Halo 3. In Halo 2, it attempted to solve the problem behaviorally ("It was not pretty," conceded Isla). In Halo 3, the team created "concepts" for territories or objects that require interaction; each concept holds the knowledge of what a character will do with that thing. So, rather than having behavior determine how an NPC interacts with a vehicle, the vehicle itself contains all of that interaction knowledge.

"In Halo 2, if an AI tips over his vehicle, he walks off and forgets completely he was ever in one," said Isla. "In Halo 3, if he tips it, he remains in its vicinity fighting until there is a point where he can right it again."

According to Isla, the latter approach is "the way things should be going" -- as he puts it, "behavior should be a very thin layer on top of a world of concepts."
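A toy sketch of the concepts approach as described (the class and method names are my own invention, not Bungie's): the vehicle, not the character's behavior code, owns the knowledge of how it is used.

```python
# Hypothetical sketch of the Halo 3 "concepts" idea: the object carries its
# own interaction knowledge, and character behavior is a thin layer on top.
class VehicleConcept:
    def __init__(self):
        self.tipped = False

    def choose_seat(self, weapon):
        # A rocket-launcher carrier belongs in the passenger seat, free to fire.
        return "passenger" if weapon == "rocket_launcher" else "driver"

    def advice_when_tipped(self):
        # The Halo 3 answer: stay nearby and fight until it can be righted.
        return "fight_in_vicinity_until_rightable"

class NPC:
    def __init__(self, weapon):
        self.weapon = weapon

    def interact(self, concept):
        # Behavior as a thin layer: ask the concept what to do with it.
        if concept.tipped:
            return concept.advice_when_tipped()
        return concept.choose_seat(self.weapon)

warthog = VehicleConcept()
marine = NPC(weapon="rocket_launcher")
print(marine.interact(warthog))  # passenger
warthog.tipped = True
print(marine.interact(warthog))  # fight_in_vicinity_until_rightable
```

The design payoff is that a new vehicle (or door, or turret) brings its own interaction rules with it, instead of every character's behavior code needing to know about every object.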

Plenty Left To Solve

Isla wrapped up his talk by admitting that there are a multitude of big AI problems still unsolved.

Some of these problems are related to other development fields. "Animation technology still sucks," he offered. "I'm out of luck if I want to make a game that involves picking a pencil up."

Design interfaces are in a similar state, he said; they are still incredibly abstract and require a huge amount of training for designers to learn to use effectively.

He wondered aloud if it is possible to create a WYSIWYG interface for AI design. "What would it even look like?" he asked, adding, "I imagine something like dog training."

Rather than attempting to create a 100 percent accurately modeled AI -- and, after all, even humans aren't 100 percent accurate when it comes to the type of behavior that needs to be convincingly modeled -- Isla is more excited about improving communication between players and AI.

"The problem is not that mistakes are made, but there are no ways to correct it," he explained. "I have no way of saying, 'No, try again.' What I want is a 'you're doing it wrong' button. But I don't want to give them orders; I want to have a conversation."

He was quick to note that he is not calling for a natural language solution, but something more like a symbol language, or even a binary system like "good job" and "bad job."

Of course, as he concluded, "The ultimate tragedy for the AI programmer is that this is not an AI problem but a design problem."