Alice is, first and
foremost, a chatter-bot. A chatter-bot is, basically, an AI built to do
just one thing--hold a credible conversation. They can do lots of other
things, of course--most are built to try to help lost humans in some
fashion--but when you boil it down that's the main point of a chatter-bot.
Chatter-bots got their start in the text-based online interactive worlds of MUDs, MUSHes, and the like, long before the World Wide Web came around. They've continued to evolve as the WWW has taken off, and they can be excellent references for studying how one goes about building a conversational NPC (something players have long complained is missing from most RPGs).
Alice is one of the best of the chatter-bots, having won the prestigious Loebner Prize in 2000. The Alice-bot site (above) serves as a coordination center for work done with and on the Alice chatter-bot.
There are source code downloads and discussion forums, and lots of support for
anybody who wants to try to adapt Alice for their project. There are
also some online links to other chatter-bots, as well as papers on the theory
behind these interesting bits of code.
Recommended study for anybody
who needs to build an NPC or two that they want players to actually
care about.
ACE (Agent-based Computational Economics) studies the evolution and interaction of economic systems using the approach of interacting autonomous agents. It tries to understand why economies work the way they do by studying the massed interactions of individual agents (i.e., people in the Real World (TM)).
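Just to make the flavor of the agent-based approach concrete, here's a tiny, completely made-up sketch in C++ (none of it comes from the ACE site itself): a population of traders, each armed with nothing more than a private notion of what a unit is worth and a crude adjustment rule, out of which a going market price emerges.

// A minimal sketch of the agent-based idea behind ACE: no equations for the
// "market", just individual traders with private reservation prices that
// adjust from experience.  Everything here (names, rules, constants) is
// illustrative, not taken from any ACE software.
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

struct Trader {
    double price;   // what this agent currently thinks a unit is worth
    bool   buyer;   // half the population buys, half sells
};

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> start(5.0, 15.0);

    std::vector<Trader> agents(200);
    for (size_t i = 0; i < agents.size(); ++i)
        agents[i] = {start(rng), i % 2 == 0};

    for (int round = 0; round < 50; ++round) {
        std::shuffle(agents.begin(), agents.end(), rng);
        double sum = 0; int trades = 0;
        // Pair agents off at random; a trade happens only if the buyer's
        // reservation price meets the seller's.  Each agent then nudges its
        // own price based on the deal it saw (or didn't see).
        for (size_t i = 0; i + 1 < agents.size(); i += 2) {
            Trader &a = agents[i], &b = agents[i + 1];
            if (a.buyer == b.buyer) continue;
            Trader &buy  = a.buyer ? a : b;
            Trader &sell = a.buyer ? b : a;
            if (buy.price >= sell.price) {                // deal at the midpoint
                double deal = 0.5 * (buy.price + sell.price);
                buy.price  -= 0.1 * (buy.price - deal);   // try to pay less next time
                sell.price += 0.1 * (deal - sell.price);  // try to ask more next time
                sum += deal; ++trades;
            } else {                                      // no deal: concede a bit
                buy.price  *= 1.02;
                sell.price *= 0.98;
            }
        }
        if (trades > 0)
            std::printf("round %2d: %3d trades, avg price %.2f\n",
                        round, trades, sum / trades);
    }
}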
There's a lot of good information on the ACE site for the programmer charged with building an economic AI, especially if it's for a massively multiplayer online role-playing game (a genre usually just filled with all kinds of autonomous agents).
There are all kinds of papers, pointers to agent-based economics software,
etc.
Be very clear--ACE is a research site more than a "grab
software and run" kind of site. But if you're interested in understanding some
of the underlying rules that can guide an economic AI, this is the place.
Gamebots is a project being worked on by several aficionados of both Unreal and John Laird's SOAR project.
The basic premise of the Gamebots project is to use
UnrealTournament as a highly-dynamic, multi-agent environment for AI
research. Rather than try to develop bots within the game, they've taken an
approach similar to that of the SOAR-based Quakebot built by John Laird: socket-based communication is used by an external program to control bots in the
game.
It's pretty simple to use: You fire up their gametype, which
starts listening for TCP connections on port 3000. On either the same computer
or another one, you fire up a bot and have it connect to the
UnrealTournament game on that port. The game creates a bot, and begins
passing sensory messages over the socket to your program. In return, your
program can send commands to the game, which will then be executed by the bot
in the game. There are a couple dozen different kinds of sensory information
that the client can use to guide its actions and over a dozen actions that can
be taken (running, tracking, shooting, changing weapon...).
This approach has two advantages over building bots in the native game code: the controlling AI can be written in any language (and with any tools) that can talk over a socket, and the game itself never has to be modified or recompiled.
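For the curious, here's roughly what the client side of that setup looks like as a C++ sketch. The socket plumbing is standard POSIX; the two message strings are placeholders of my own invention standing in for the actual Gamebots command grammar, which you'll want to pull from the project's documentation.

// A bare-bones sketch of the client side of the setup described above:
// connect to the UnrealTournament server's listener, read line-oriented
// sensory messages, and write commands back.  The socket calls are standard
// POSIX; the two message strings are placeholders, NOT the real Gamebots
// protocol -- consult the project's docs for the actual grammar.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <string>

int main() {
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    sockaddr_in server{};
    server.sin_family = AF_INET;
    server.sin_port   = htons(3000);               // the Gamebots listener port
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    if (connect(sock, reinterpret_cast<sockaddr*>(&server), sizeof server) < 0) {
        perror("connect");
        return 1;
    }

    // Placeholder command asking the server to spawn our bot.
    const std::string hello = "INIT {Name MyBot}\r\n";
    send(sock, hello.data(), hello.size(), 0);

    char buf[4096];
    for (;;) {
        // Each read hands us one or more sensory messages from the game.
        ssize_t n = recv(sock, buf, sizeof buf - 1, 0);
        if (n <= 0) break;
        buf[n] = '\0';
        std::printf("sensed: %s", buf);

        // A real bot would parse the message and pick an action here;
        // this placeholder just keeps telling the bot to run somewhere.
        const std::string cmd = "RUNTO {Location 100,200,0}\r\n";
        send(sock, cmd.data(), cmd.size(), 0);
    }
    close(sock);
}

A real client would obviously keep internal state and parse the incoming messages properly, but even this skeleton shows how little stands between an external AI program and a bot running around inside UnrealTournament.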
AI~Wheel (tilde
intended) is an interesting rules-based agent toolbox developed by one David
Albert Harrell, a cognitive scientist who specializes in something called
"Multidimensional Cognitive Physics". He's written a number of articles on the
subject (all available through his web site above) and he's made available a
tool he's written called AI~Wheel.
AI~Wheel appears to
be a somewhat spreadsheet-oriented tool that allows you to build a "challenge"
environment and various rules for interacting with that environment. You can
then drop an "agent" into this world and set it to some task. Over several
generations, the agent will learn what series of sub-tasks best achieve
whatever the desired result is, giving you (at the end) a somewhat "evolved"
and "smarter" agent.
The software is fairly complex but looks rather powerful...a kind of "Excel for AI" application. It might have utility in helping developers solve tricky production- and supply-AI problems.
ISAAC/EINSTein is an interesting project I first saw mentioned
over on Gamasutra. Its purpose is to provide an environment for multi-agent models of land combat. The idea (one often
espoused by followers of these pages, including the author) is that certain
aspects of land combat can be viewed as emergent
behavior that evolves from the collective, non-linear, decentralized
interactions between agents following simple, personal goals.
ISAAC (Irreducible Semi-Autonomous
Adaptive Combat) is the component that provides the basic
'engine' for building the agents themselves, while EINSTein
(Enhanced ISAAC Neural Simulation Tool)
provides the environmental "laboratory" in which ISAAC agents can interact.
ISAAC takes a bottom-up approach to modelling combat by specifying
individual agent behaviors and then observing them interact en masse.
This is somewhat different from the more traditional top-down approach of
specifying how units work together and then setting up a combat situation.
ISAAC's main purpose is to answer the question, "To what extent is land combat a self-organized emergent phenomenon?" Order evolves from
the interaction of the agents, rather than from an imposed rules-set.
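If you want to see how little it takes to get that kind of bottom-up behavior, here's a toy C++ agent loosely in the spirit of what the ISAAC material describes. The "personality" weights and movement rules below are invented for illustration; the real engine is considerably richer.

// A toy, bottom-up combat agent loosely in the spirit of ISAAC -- the weights
// and rules are invented, not taken from the real code.  Each agent only
// knows its own "personality" (how strongly it is drawn toward enemies,
// friends, and its goal); anything that looks like coordinated behavior has
// to emerge from many such agents acting at once.
#include <cmath>
#include <cstdio>
#include <vector>

struct Agent {
    double x, y;
    int    side;            // 0 = red, 1 = blue
    double wEnemy;          // >0 aggressive, <0 timid
    double wFriend;         // >0 clusters with friends
    double wGoal;           // pull toward this side's objective
};

// Score a candidate step purely from this agent's local point of view.
double desirability(const Agent& a, double nx, double ny,
                    const std::vector<Agent>& all, double gx, double gy) {
    double s = -a.wGoal * std::hypot(gx - nx, gy - ny);
    for (const Agent& o : all) {
        double d = std::hypot(o.x - nx, o.y - ny);
        if (d < 1e-6 || d > 10.0) continue;           // limited sensor range
        s += (o.side == a.side ? a.wFriend : a.wEnemy) / d;
    }
    return s;
}

void step(std::vector<Agent>& all) {
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    for (Agent& a : all) {
        double gx = a.side == 0 ? 50.0 : 0.0, gy = 25.0;   // each side's goal
        double best = desirability(a, a.x, a.y, all, gx, gy);
        double bx = a.x, by = a.y;
        for (int k = 0; k < 4; ++k) {                      // try the four moves
            double s = desirability(a, a.x + dx[k], a.y + dy[k], all, gx, gy);
            if (s > best) { best = s; bx = a.x + dx[k]; by = a.y + dy[k]; }
        }
        a.x = bx; a.y = by;
    }
}

int main() {
    std::vector<Agent> agents;
    for (int i = 0; i < 20; ++i) {
        agents.push_back({0.0, 2.5 * i, 0,  1.0, 0.5, 1.0});   // red: aggressive
        agents.push_back({50.0, 2.5 * i, 1, -0.5, 1.5, 1.0});  // blue: timid, clusters
    }
    for (int t = 0; t < 100; ++t) step(agents);
    for (const Agent& a : agents)
        std::printf("side %d at (%.0f, %.0f)\n", a.side, a.x, a.y);
}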
EINSTein, on the other hand, provides the laboratory in which
multiple ISAAC agents can play. Using an object-oriented C++ code base,
EINSTein allows the user to set up the environment in any way they
wish, specifying terrain types and conditional behavior as desired. It then
provides various data collection and analysis tools for the user to study how
the ISAAC agents interacted, what they were "thinking" at any one point
in time, etc.
A number of screenshots and movies of
ISAAC/EINSTein in action are available, in addition to several
briefings and status reports on the project's progress. The site also provides
a wealth of tools, information, research abstracts, and general AI links that
will prove valuable to any developer.
The best bit is the fact that
you can download your very own copy of ISAAC/EINSTein to play
with! I've only scratched the surface of the engine's capabilities, but it
looks like somebody interested in the possibilities of developing an
agent-based AI could definitely do some good experimentation with it.
If I sound excited about this project, I am. Check this one out
if you have any interest whatsoever in developing an agent-based AI...you won't regret it.
RARS
--short for Robot Auto Racing Simulator--is a
fantastic net.project that has been exploring
adaptive and learning AIs for years. Your task with RARS is to make a
program that can drive a car around a closed racetrack. Any kind of AI is
allowed (though most are clearly rules-based agents of some kind) and many of
the resulting "robots" are made available for others to download and study.
Periodic competitions pit the creations of enthusiasts against each other, and
there are many awards and honors given to those whose robots do the best.
Robots are written in C or C++, and there are several tutorials at the
RARS web site to help get you started. I strongly recommend this site
to newbie AI developers who are looking for some kind of environment in which
to learn.
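To give a sense of what a RARS-style robot boils down to, here's a hedged C++ sketch: each tick the simulator reports the car's situation and the robot answers with a steering correction and a target speed. The struct and function names below are stand-ins I've made up; the actual RARS interface and its units are spelled out in the tutorials on the site.

// A sketch of what a RARS-style driver boils down to: each tick the
// simulator hands the robot a little packet of state about the car and the
// track ahead, and the robot answers with steering and a target speed.  The
// names here are invented for illustration -- the real RARS interface is
// defined in the tutorials on the site.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct CarState {          // hypothetical stand-in for the simulator's report
    double toLeftWall;     // distance to the left edge of the track
    double toRightWall;    // distance to the right edge
    double curveRadius;    // radius of the curve ahead (large = straight)
    double speed;          // current speed
};

struct Controls {          // hypothetical stand-in for the robot's reply
    double steer;          // positive steers right, negative steers left
    double targetSpeed;
};

// The simplest believable rules-based driver: stay near the middle of the
// track and slow down in proportion to how tight the upcoming corner is.
Controls drive(const CarState& s) {
    Controls c{};
    // Positive when we're closer to the left wall, so we steer right back
    // toward the middle (and vice versa); the 0.05 gain is pure guesswork.
    c.steer = 0.05 * (s.toRightWall - s.toLeftWall);

    // Corner speed ~ sqrt(radius * grip); clamp to a straightaway maximum.
    const double grip = 10.0;                       // made-up constant
    c.targetSpeed = std::min(80.0, std::sqrt(s.curveRadius * grip));
    return c;
}

int main() {
    CarState approach{4.0, 8.0, 120.0, 60.0};       // offset left, gentle curve ahead
    Controls c = drive(approach);
    std::printf("steer %+.2f, target speed %.1f\n", c.steer, c.targetSpeed);
}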
Mathematiques
Appliquees, a France-based company, has an interesting little SDK they
call Direct IA (for Direct Intelligent
Adaptation). I first saw it at the 1999 GDC and spoke extensively with
its creators. Direct IA is, essentially, an SDK which allows developers
to build autonomous agents (or groups of agents) that have the capacity to
learn, anticipate, and select their actions. Agents controlled by this SDK can
reportedly adapt to their environments and learn in realtime. A variety of
modules come with the SDK supporting such things as Reactive Behaviors,
Perception, and Motivations. Of course, you can add your own custom-rolled
bits of code.
At first glance I had thought that Direct IA
supported agents built with rules-based approaches or Finite/Fuzzy State
Machines, but after some correspondence with the Direct IA folks I see
that I was in error. Direct IA instead uses a biological approach of sorts, using a proprietary
technology developed by Mathematiques Appliquees themselves. To quote:
"Direct IA...uses both biological and cognitive modeling. These two models define the current state of the agent in real time. Analogous to the discrete state used in finite state machine, the dynamical state used in DirectIA SDK generates the behavior of the agent, but in a more realistic and non deterministic manner. To summarize, one could compare the behavioral system of DirectIA SDK with an "infinite state machine".The Direct IA package also provides an English-like scripting language for use by the developer in building his own AIs, through which he can access many of the SDK's internal methods. This language uses production rules that differ from somewhat from traditional AI rules in that they are not used to directly create the behavior of an agent; rather, they are define the influence of external or internal factors in the evolution of the agent's internal dynamic state. These rules can be activated at each time step and can influence behavior at any point. Effects are carried over from time step to time step so that one rule can have a large influence on the resulting behavior in one context and a smaller in another context.
The Angband Borg --
Angband is an adventure game much like the classic
Rogue, so it's not exactly cutting edge. Nevertheless, it's got
its share of followers and contributors, one of whom is a certain Ben
Harrison. Ben has written an "automatic player"...a 'bot...which he has named
the Angband Borg. The Borg attempts to
play the game in exactly the same way a human would, using the same input and
controls (although, for efficiency, it doesn't actually parse the screen,
unless you set an option). It's not as good as a moderate human player, but
it's easy to modify (and hopefully improve).
The basic algorithm is
simple. The Borg has a list of 'goals' (heal yourself, kill
monsters, find the next level, etc.), and each turn it tries to take the
action which best accomplishes its most important goal. Reports are that the
Borg is a fairly good AI, considering the complexity of its
environment, with about 500 object types and 550 different monster types, as
well as dozens of spells and over a hundred unique artifacts.
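Reduced to a cartoon, that top-level loop looks something like the C++ sketch below. This is not Ben Harrison's code, just the shape of the algorithm: walk a priority-ordered list of goals and take the first action that applies.

// The Borg's top-level loop, reduced to a cartoon -- not Ben Harrison's
// code, just the algorithm described above: walk a fixed priority list of
// goals and take the first action that applies this turn.
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

struct GameView {             // whatever the bot can "see" this turn (invented)
    int  hitPoints   = 30;
    bool monsterSeen = false;
    bool stairsSeen  = true;
};

struct Goal {
    std::string name;
    std::function<bool(const GameView&)>        applies;  // is it worth doing now?
    std::function<std::string(const GameView&)> act;      // the chosen action
};

int main() {
    // Most important goal first, exactly as the prose describes.
    std::vector<Goal> goals = {
        {"heal yourself",
         [](const GameView& g) { return g.hitPoints < 10; },
         [](const GameView&)   { return std::string("quaff a healing potion"); }},
        {"kill monsters",
         [](const GameView& g) { return g.monsterSeen; },
         [](const GameView&)   { return std::string("attack the nearest monster"); }},
        {"find the next level",
         [](const GameView& g) { return g.stairsSeen; },
         [](const GameView&)   { return std::string("walk toward the stairs"); }},
    };

    GameView turn;                            // one sample turn
    for (const Goal& g : goals) {
        if (g.applies(turn)) {
            std::printf("goal '%s' -> %s\n", g.name.c_str(), g.act(turn).c_str());
            break;                            // take one action per turn
        }
    }
}

The real Borg's cleverness lies in the hundreds of goals and the evaluation of which action best serves each one, but the skeleton really is this simple.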
The source code for the Borg, and for Angband itself, is available at the Angband Borg site above. It makes an excellent environment for anybody who wants a "real game" in which to dabble at developing AI.
GalCiv/Entrepreneur -- AI designer Brad Wardell, the man behind the AI in the games Galactic Civilizations and Entrepreneur, has created an AI-related page that discusses the AI in those games. There's not much here beyond what he's personally done for those particular games, but it's interesting reading nonetheless...his games have gotten wide praise for the strength of their AIs. Definitely recommended as a study in the design of a good rules-based, personality-based strategy AI.
Intelligent Agents that Learn -- This page describes a research project carried out by two students at MIT for their "Learning Strategies for Intelligent Agents" class. The students elected to use the rather popular 'Net game Bolo as their test bed, with good results. The approach taken to building a learning mechanism into their robot (named Inducting Indy) is quite intriguing.
Intelligent Agents -- Milind Tambe, an AI researcher at the Information Sciences Institute at the University of Southern California, is doing some fascinating work regarding intelligent agents. He's written several papers on the subject (which seem exhaustive), and is participating in the SOAR/IFOR project. One of the things he's built intelligent agents for is the RoboCup Soccer tournament, so he's not all theory.
Professor Pattie Maes and her crew of assistants and colleagues are doing some very interesting AI work. It's only tangentially game-related, but many of their theories and techniques would translate well to the game field. You can either jump to her home page (above) or straight to the group's Autonomous Agents page.