GDC 2000 AI Round Table Moderators Report
By Neil Kirby
Day 1: General Session
This session was the "regular" AI roundtable. We had 28 attendees, and numerous people filed in once Bill Gates finished.
State of the Industry questions:
We covered AI in 24x7 large, persistent-world environments. The discussion emphasized keeping the AI free of exploitable holes. We also touched on the problem of empire maintenance: who guards the castle when the player is offline? The game AI is expected to take over and do at least as well as the player! At the very least it should be predictable to the player who owns the castle, but not to any other player [go figure]. Automated agents for players should have player-adjustable settings, including how much latitude the agent has. Time zones and player rhythms have to be accommodated as well; the NPC AI never sleeps, but the players may have "a life" that takes them away. One of the brighter spots is that this environment does allow for long-term data acquisition on AI performance.
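The adjustable-latitude idea can be sketched as a simple caretaker agent. Everything here is hypothetical for illustration: the class name, the action names, and the resource thresholds are assumptions, not taken from any game discussed in the session.

```python
from dataclasses import dataclass


@dataclass
class CaretakerAgent:
    """Stands in for an offline player (names and thresholds are invented)."""
    latitude: int  # 0 = defend only; higher values grant more initiative

    def choose_action(self, threat_nearby: bool, resources: int) -> str:
        # Defending the castle is the minimum the player expects.
        if threat_nearby:
            return "defend"
        # Beyond defense, act only as far as the player's latitude setting allows,
        # checking the most ambitious permitted action first.
        if self.latitude >= 2 and resources >= 500:
            return "expand"
        if self.latitude >= 1 and resources >= 100:
            return "repair_walls"
        return "idle"
```

A zero-latitude agent stays predictable to its owner (defend or idle, nothing else), which matches the predictability point above.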
There were no off-the-shelf development tools mentioned as solutions. People use internal finite state machines and scripting.
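An in-house finite state machine of the kind mentioned is often little more than a transition table. A minimal sketch, with states and events made up for the example:

```python
# Table-driven finite state machine: unknown (state, event) pairs
# leave the current state unchanged.
class FSM:
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions  # {(state, event): next_state}

    def handle(self, event):
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state


# Illustrative guard NPC: patrol until the player is seen, chase,
# search when contact is lost, give up after a timeout.
guard = FSM("patrol", {
    ("patrol", "see_player"): "attack",
    ("attack", "lost_player"): "search",
    ("search", "see_player"): "attack",
    ("search", "timeout"): "patrol",
})
```

Because the behavior lives in a data table rather than in code, scripting layers can build or tweak these tables without recompiling.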
Realism got mentioned with two major points: avoid indecision, and avoid predictability.
Further online issues came up. AI NPCs should be social, fill in the gaps, and be worth talking to. They should "make real friends too." The point is that interaction has progressed a great deal since Eliza (witness the best of the chat-bots), and this progress needs to go into online games as much as it can.
No one does deep analysis; the CPU cycles and real time simply aren't available.
Terrain analysis, dubbed "inverse pathfinding," is tough. The ability to generate terrain on the fly is there, but the ability to get the AI to deal with it is not. Pre-processed terrain is the norm.
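A minimal sketch of the pre-processed approach: walkability is baked into a grid ahead of time, and at run time the AI only searches that data. Plain breadth-first search is used here for brevity (a shipping game would more likely use A*), and the map itself is invented.

```python
from collections import deque

# Walkability baked offline into a grid: "." walkable, "#" blocked.
GRID = [
    "....#",
    ".##.#",
    "....#",
    ".#...",
]


def walkable(x, y):
    return 0 <= y < len(GRID) and 0 <= x < len(GRID[0]) and GRID[y][x] == "."


def path_length(start, goal):
    """Breadth-first search over the precomputed grid; returns step count
    of a shortest path, or None if the goal is unreachable."""
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        (x, y), d = frontier.popleft()
        if (x, y) == goal:
            return d
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if walkable(*nxt) and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None
```

The point of the pre-processing is exactly this split: all the expensive terrain judgment happens offline, and the run-time AI only ever consults the finished grid.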
A rather interesting thought was that the user ought to be informed somehow of what the AI is doing. This should avoid any mention of how the AI decided on its course of action (since the AI may be silently cheating). But much like reading the other side's propaganda or having spies on the payroll, the strange and bizarre behavior of the AI becomes understandable, and thus much better, when the player has a clue as to what the AI was trying to do.
Day 2: RPG and Action Games
My room filled to capacity 15 minutes early, so at 35 attendees strong I shut the door and started the discussion.
One of the high points was, "hit chair, start shooting!" In first-person shooters especially, when all else fails (when anything fails), cut loose at the player; it's what they are there for. This acknowledges that great AI is appreciated but not mandated in the genre. Generalizing, having a reasonable default set of actions really helps.
The other high point I termed "finding a softer wall to pound your head against." Non-verbal communication of emotion is easier and more effective than talking/typing methods. While small hand gestures vary widely from culture to culture [examples were given that were complete opposites], large motions and facial expressions are universal. Gestures and expressions "fill in" the illusion far more effectively than canned phrases. This also solves internationalization issues quite handily. Avoiding word-based language gets you around two huge problems. The first is language input and output. People expect speech if words are being used. Spoken output is either canned or of poor quality; both leave a lot to be desired. Spoken input is computationally expensive, but doable. But speech input smacks you right into its sibling problem, natural language processing. With non-verbal communication, the AI doesn't talk to you, so you don't expect to talk back.
In the area of tools, some used custom tools complete with language editors and debuggers. Others used languages such as Scheme, Lua, and Common Lisp.
It was pointed out that there is still a lot of data embedded in the AI. This makes it very hard to consider off-the-shelf packages or even much re-use from project to project. Bucking that trend, The Sims uses smart terrain: the terrain broadcasts to nearby agents what it has to offer, so they can react realistically. When a hungry agent walks near a restaurant, the "I have food" broadcast allows it to decide to go on in. The restroom broadcasts that it offers hygiene, and so on. This idea could be used in other genres as well. Since many first-person shooters have very little terrain change due to game play, having the NPC next to you get nervous as the two of you approach the site of forty total-loss ambushes has some appeal: the terrain would broadcast that the place has seen a lot of frags recently, and the NPC would react to it.
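The broadcast idea can be sketched as follows. The class names, the need values, and the scoring function are all assumptions for illustration; this is the idea as described at the table, not Maxis's actual design.

```python
# Smart-terrain sketch: objects advertise what they offer, and an agent
# picks whatever best satisfies its current needs.
class SmartObject:
    def __init__(self, name, offers):
        self.name = name
        self.offers = offers  # e.g. {"food": 0.8} -- what this spot advertises


class Agent:
    def __init__(self, needs):
        self.needs = needs  # e.g. {"food": 0.9} -- higher means more urgent

    def pick(self, nearby):
        """Return the name of the nearby object whose broadcast best matches
        this agent's needs, or None if nothing relevant is advertised."""
        def score(obj):
            return sum(self.needs.get(k, 0) * v for k, v in obj.offers.items())
        best = max(nearby, key=score, default=None)
        return best.name if best is not None and score(best) > 0 else None


restaurant = SmartObject("restaurant", {"food": 0.8})
restroom = SmartObject("restroom", {"hygiene": 0.9})
hungry = Agent({"food": 0.9, "hygiene": 0.1})
```

The appeal is that all the knowledge lives in the objects: adding a new object to the world requires no change to any agent. The shooter variant above would just be another broadcast, something like a "recent frags" value that a fearful NPC weights highly.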
There was a new reference given: "Understanding Comics" by Scott McCloud.
And an interesting thought problem: First person shooters for the blind.
Day 3: AI for Beginners
As the person generally calling on people and moderating the third day, I don't have notes. Steve Woodcock again was the perfect note taker for this session, and it was great having Eric Dybsand helping make sure that everyone got their questions out.
We had 64 attendees and really wished we could have opened the divider between two of the rooms for more seating; we could have used it without loss of quality. The initial round of asking for questions was slow to get under way. Hindsight (and talking to people afterwards) showed that a reasonable fraction of our audience didn't have sufficient context to start asking questions right away. This was a case where our philosophy on round table moderation, in which the moderator talks least and lets the experts in the room talk most, was getting in the way. Next time a handout and a ten-minute warm-up should go a long way toward helping everyone get the most out of the session.
Even so, it felt like it went quite well. There were experts present who thankfully could say their piece well and stop in a way that helped encourage more questions. Without going into topics, I can say that I relied heavily on what we have been hearing year over year in the round tables to answer the questions. I could cite evolutionary trends as well as unmet needs. Once things were going, the discussion flowed well, until I suddenly ran completely out of voice and Steve handed me a bottle of water. Thus revived (thank you, Steve!) we took right up where we left off until we simply ran out of time. It felt like it went as well as last year; I was drained but very pleased by the time it was over.