Er, I mean human thinking about AI.
In the middle of the night, an idea occurred to me: maybe the reason my AIs aren't doing predictable things is that the interactions produce such small effects. In particular, the results they're trying to get are not much greater than the side-effects. Basically, the signal-to-noise ratio is too small.
So I set about amping up the interaction effects. My goal was to make 1-3 iterations of an interaction create or remove a serious condition on an AI. E.g. three insults create a self-esteem crisis, or a single lost family member creates a family crisis.
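To make that concrete, here's a minimal sketch of what threshold-triggered conditions might look like. All the names here (the `AI` class, the "insult" interaction, the condition labels) are illustrative, not the actual game's code:

```python
INSULT_THRESHOLD = 3  # e.g. three insults trigger a self-esteem crisis


class AI:
    def __init__(self, name):
        self.name = name
        self.insults_taken = 0
        self.conditions = set()

    def receive_insult(self):
        """A few big steps toward a crisis, instead of many tiny nudges."""
        self.insults_taken += 1
        if self.insults_taken >= INSULT_THRESHOLD:
            self.conditions.add("SelfEsteemCrisis")

    def lose_family_member(self):
        """A single severe event creates a condition outright."""
        self.conditions.add("FamilyCrisis")


crew = AI("Dana")
for _ in range(3):
    crew.receive_insult()
print(crew.conditions)  # → {'SelfEsteemCrisis'}
```

The point of the sketch is the ratio: each interaction moves the AI a meaningful fraction of the way to a visible condition, so the intended effect isn't drowned out by incidental side-effects.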
In the process, however, I was reviewing some of the old effects, and starting to question whether they made sense. Does doing push-ups have an effect on one's need for contact with others? What about one's sense of altruism? What does it even mean to have more or less altruism? Is this a measure of the AI's likelihood of doing something altruistic next? Or is it just a measure of their discomfort with their current state of altruism?
As I dug deeper, I decided to map out the existing stats and their meanings, for easier reference. And upon doing so, I realized that I had positive traits, like self-esteem, but they were being used to measure discomforts and crises, like hopelessness and futility. I started thinking of ways to express these same positive needs as the equivalent lack thereof, and turned to Twitter for help.
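One way to think about that inversion: keep a mapping from each positive trait to its negative counterpart, so every stat ends up measuring a discomfort on the same scale. This is just a sketch under assumed names (the stat labels and the 0-100 scale are both hypothetical):

```python
# Hypothetical mapping from positive traits to their "lack thereof" equivalents.
POSITIVE_TO_DISCOMFORT = {
    "SelfEsteem": "Hopelessness",
    "Purpose": "Futility",
}


def invert_stat(name, value, max_value=100):
    """Re-express a positive-trait value as the equivalent discomfort level."""
    discomfort = POSITIVE_TO_DISCOMFORT.get(name, name)
    return discomfort, max_value - value


print(invert_stat("SelfEsteem", 80))  # → ('Hopelessness', 20)
```

The appeal is consistency: if everything is a discomfort, then "zero" always means "fine," and interactions only ever have to push one kind of number around.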
And as a result of that discussion, Lars pointed out an interesting article about Dwarf Fortress, discussing how the Adams brothers built the AI/narrative system in DF. Their approach was pretty much the opposite of mine. Instead of setting up a series of variables and trying to make stories out of their random interplay, the DF devs came up with example stories first, and built a system to generate those stories.
I apologize if someone already told me about this, as it kind of sounds familiar. But anyway, it's got me thinking. Thinking AI thoughts.
What I may do tomorrow (or soon, anyway) is perform a similar exercise. What are some example interactions and dramas I want to see play out on the ship? And can they be broken down into components or a system?
It's something I've kind of wanted to do anyway, to see if my system could generate such stories. So maybe it's getting to be time to do so.