
So, You Want to Write a Bot? 01

By John Trindle

My lovely wife has covered some of the social implications of bots in Dr. Strangecat. Since I have had some experience writing my own IRC bot, I'll try to chime in with some technical details.

Brief History

For millennia, clever engineers (often clockmakers) created mechanical marvels simulating birds, animals, and people. These devices fell into two types. The first, most like a clock or music box, would run through a series of lifelike movements when activated. Their behavior did not change in response to interaction with people. In other words, they did not learn.

The second type was more versatile. These devices could respond intelligently to the spoken word, and learn to recognize the noble patrons who paid for their construction. Up until the 20th century, however, examples of the second type were almost exclusively feats of puppetry or remote control, driven by the intelligence of their operator more than that of their designer.

In the 1940s, war research resulted in the first complex machines with stored programs, whose behavior could be altered easily and could even adapt as data was gathered. This new adaptability stimulated interest in the question "Could machines think?"

In 1950, Alan Turing addressed this issue, setting aside the question of machine thought in favor of a behavioral experiment he called the Imitation Game. This test, now known as the Turing Test, can be summed up by the question "Is the subject of the test indistinguishable from a human being operating within the constraints of the test?" The constraints, such as communicating only through a teletype or being restricted to a certain subject area, are important to mask details unimportant to the particular test, such as expertise, language facility, or body odor.

The first programs which approached this goal were the chess and checkers solvers of the 1950s. They used general rules of thumb called "heuristics" to evaluate the possible outcomes of moves within the games, and to avoid evaluating whole families of solutions which would be "obviously" unproductive to a human. A trivial example might be to avoid exposing your king in a way which would allow your opponent to put him in check on the next move. The rudimentary examples of these programs used a fixed rule set, and substituted brute force (processing many unproductive solutions on the way to the best answer) for analysis. The more advanced game programs stored the actual moves of the games played, and adjusted their rules based on their history of success or failure with specific opponents.
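
To make the idea concrete, here is a toy sketch in Python (not anything from the 1950s, which predates the language entirely) of how a heuristic can discard whole families of moves before any deep analysis. The Move fields and the cutoff value are invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class Move:
        material_gain: int     # rough value of any captured material
        exposes_king: bool     # does this move invite an immediate check?

    def heuristic(move: Move) -> float:
        # The rule of thumb from above: never expose your own king.
        return float("-inf") if move.exposes_king else move.material_gain

    def candidate_moves(moves, keep=5):
        # Discard whole families of "obviously" bad lines, then hand only
        # the most promising few to the expensive deep search.
        good = [m for m in moves if heuristic(m) > float("-inf")]
        return sorted(good, key=heuristic, reverse=True)[:keep]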

In 1966, the program Eliza (named for Eliza Doolittle in George Bernard Shaw's "Pygmalion") by Joseph Weizenbaum emulated a Rogerian therapist. Rogerian therapy consists of asking the patient questions in response to their statements, and being generally supportive. Beyond being a good listener, the therapist's job is merely to facilitate the patient's own self-analysis and illumination. Since the therapy is entirely patient-driven, it is relatively simple to write a program which dissects a sentence and constructs a relevant and (relatively) grammatical question in response. When the program fails in its parsing, it throws out a stock statement at random, such as "That is very interesting. Why do you say that?"
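
Eliza's trick is simple enough to sketch in a few lines. The following rough Python approximation is not Weizenbaum's actual code; the patterns and stock replies are invented, apart from the stock line quoted above:

    import random
    import re

    # Pattern rules rewrite the patient's statement into a question; a
    # stock reply covers anything the parser can't handle.
    RULES = [
        (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
        (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
    ]

    STOCK = [
        "That is very interesting. Why do you say that?",
        "Please go on.",
        "I see. How does that make you feel?",
    ]

    def respond(statement: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(statement)
            if match:
                return template.format(match.group(1).rstrip(".!?"))
        # Parsing failed: throw out a stock statement at random.
        return random.choice(STOCK)

    print(respond("I am unhappy at work"))  # How long have you been unhappy at work?
    print(respond("Nice weather today"))    # one of the stock replies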

I ran a BBS from 1987 to 1991, and at one point I was able to install a copy of Eliza for use by callers. Unfortunately for them, all the output of the Eliza sessions was echoed to my screen. It was quite amazing how emotionally involved people became with what was essentially 1,000 lines of code. Many of my callers were unsophisticated, and perhaps thought that there was a real person (probably me) pulling the strings. Not so, and this was the first indication I had that people's emotional needs might be satisfied, however inadequately, by a machine.

In April 1997 I joined an IRC channel which emulated a virtual Irish pub from a popular science fiction series. This emulation required minor role-playing moves, such as "GreyMan sidles up to the bar, and orders a pint of Guinness," as well as the usual dialog. Some of the patrons seemed to have assigned themselves permanent roles, in particular a bartender and a jukebox. Their vocabulary was limited, and they seemed obsessed with certain activities (such as polishing the glassware, or playing a song every five minutes). As it turned out, of course, these patrons weren't real at all, but computer programs. I was inspired by the idea, and decided to write my own bot.

It seemed obvious to me that the number one problem with attempts at implementing Artificial Intelligence was Scope. There was no way that I could write a program that would respond as a human would to any possible input. Since the jobs of Bartender and Jukebox were already taken, I decided to model a subject close at hand: my cat Max. Even the most enthusiastic cat owner would admit that cats have a limited number of characteristic behaviors they practice 99% of the time: Sleeping, Eating, Demanding Attention, and Going In And Out.

Bot Metabolism

The first task is to give our program a metabolism. MaxCat ran as a script within IRC, which meant (at least to a programmer first learning the script language) that his code would only execute as a result of something happening in channel. Each time someone said something, or performed an action, code would run to change variables. One variable was his stomach counter. Every time he would eat, he would add a number (say 10) to his stomach counter. Every time some event occurred in channel, his stomach counter would be decremented. After a certain amount had been added to his stomach and been digested in this manner, MaxCat would use the litter box, reducing another counter we could euphemistically call his "box score". When his stomach counter fell below a certain threshold (say, 3), Max would start asking for food.

There was a similar counter for attention. The more people would pet Max, the more would be added to his happiness counter. Just as above, every time an event occurred in channel his happiness counter would be reduced. If his happiness counter fell below a certain level (and he wasn't eating), he would bump up against someone's leg, or even jump into someone's lap.
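
For the curious, the whole metabolism boils down to a handful of counters. Here is a rough translation into Python (MaxCat was really an IRC script, and the class and method names below are mine; the 10-point meal and the hunger threshold of 3 come from the description above, while BOX_LIMIT is a guess):

    MEAL_SIZE = 10          # eating adds 10 to the stomach counter
    HUNGER_THRESHOLD = 3    # below 3, Max starts asking for food
    BOX_LIMIT = 20          # assumed: digestion needed before a litter-box trip

    class CatState:
        def __init__(self):
            self.stomach = MEAL_SIZE    # start out fed
            self.happiness = 5
            self.box_score = 0          # euphemism intact

        def eat(self):
            self.stomach += MEAL_SIZE

        def pet(self):
            self.happiness += 1

        def on_channel_event(self):
            # Every message or action in channel drives the metabolism.
            if self.stomach > 0:
                self.stomach -= 1
                self.box_score += 1
            self.happiness = max(0, self.happiness - 1)

            if self.box_score >= BOX_LIMIT:
                self.box_score = 0
                print("* MaxCat visits the litter box")
            if self.stomach < HUNGER_THRESHOLD:
                print("* MaxCat meows plaintively for food")
            elif self.happiness == 0:
                # attention-seeking only when not busy begging for food
                print("* MaxCat bumps up against someone's leg")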

Bot Memory

A real cat remembers you, with some level of trust. It seemed important that a bar cat have a clear idea of who liked him and who didn't. In some of the more interesting revisions of MaxCat, he had both a Friends list and an Enemies list. You could become Max's friend by paying him positive attention (stroking or feeding) multiple times during a channel visit. When a patron joined the channel, his first action prompted MaxCat to add him to a temporary list. If the patron's nickname was already on Max's friends list, he would be recognized as a friend immediately; otherwise he would be assigned a neutral score. If a friend behaved badly toward the bot, he would be removed from the friends list. If Max didn't know you already and you mistreated him, you'd be placed on the enemies list. I quickly learned that MaxCat needed to bear grudges, as abusers tended to cycle from friend to enemy and back again; thus there was no automatic way to be removed from Max's enemies list. This actually seemed to be a more realistic cat characteristic, too.
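
The list-keeping logic, again translated loosely into Python; the class name and threshold are mine, but the rules are the ones just described:

    FRIEND_SCORE = 3   # assumed: kind acts per visit needed to become a friend

    class CatMemory:
        def __init__(self):
            self.friends = set()
            self.enemies = set()    # note: no automatic way off this list
            self.session = {}       # temporary, per-visit scores

        def on_first_action(self, nick):
            # A returning friend is recognized at once; strangers start neutral.
            self.session[nick] = FRIEND_SCORE if nick in self.friends else 0

        def on_kindness(self, nick):   # stroking or feeding
            if nick in self.enemies:
                return                 # Max bears grudges
            self.session[nick] = self.session.get(nick, 0) + 1
            if self.session[nick] >= FRIEND_SCORE:
                self.friends.add(nick)

        def on_abuse(self, nick):
            self.friends.discard(nick)    # misbehaving friends are dropped
            if self.session.get(nick, 0) <= 0:
                self.enemies.add(nick)    # abusive strangers become enemies
            self.session[nick] = 0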

The bot had other lists of patrons, too. For instance, some patrons were friendly toward MaxCat but did not want him in their laps. One patron, whose persona was a six-inch-tall stuffed bear, found the idea of a 15 lb cat in his lap ludicrous. Another, whose persona was a small, fastidious tuxedo cat, had a similar lap accommodation problem.

MaxCat had temporary memory variables too, for example his physical location. It was important for Max to know whether he was on the bar top, on the floor, or in a patron's lap. When things were working properly, Max would know to jump down from a person's lap when that person left the channel. He also resisted suggestions from other people to move when he was comfortable in an active lap.
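
One last sketch covers this temporary state: location tracking plus the no-lap list from the previous paragraph. The event hooks and nicknames are invented stand-ins for the IRC script's own:

    class CatLocation:
        def __init__(self):
            self.place = "floor"    # "floor", "bar top", or an occupied lap's nick
            self.no_lap = {"StuffedBear", "TuxedoCat"}   # hypothetical nicknames

        def jump_to_lap(self, nick):
            if nick not in self.no_lap:
                self.place = nick

        def on_part(self, nick):
            # Jump down if the lap Max occupies just left the channel.
            if self.place == nick:
                self.place = "floor"
                print("* MaxCat jumps down as", nick, "leaves")

        def on_move_suggestion(self, requester):
            # Max resists other people's suggestions while in an active lap.
            if self.place not in ("floor", "bar top") and requester != self.place:
                print("* MaxCat flicks his tail and stays put")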

I'll cover the language parser, bot abuse, and applications in Part II.

Article © John Trindle. All rights reserved.
Published on 2005-10-17