AI Thoughts

Er, I mean human thinking about AI.

In the middle of the night, an idea occurred to me: maybe the reason my AIs aren't doing predictable things is that the interactions produce such small effects. In particular, the results they're trying to get are not much greater than the side effects. Basically, the signal-to-noise ratio is too small.

So I set about amping up the interaction effects. My goal was to make 1-3 iterations of an interaction create or remove a serious condition on an AI. E.g. three insults create a self-esteem crisis, or a single lost family member creates a family crisis.
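
Roughly speaking, the shape I'm after is something like this (the stat names, numbers, and thresholds below are purely illustrative, not actual game data):

    # Illustrative only: a few strong interactions create or remove a condition.
    SELF_ESTEEM_CRISIS_AT = 25

    class CrewAI:
        def __init__(self):
            self.self_esteem = 50
            self.conditions = set()

        def receive_insult(self):
            # Each insult is a big hit; roughly three in a row cross the line.
            self.self_esteem -= 10
            if self.self_esteem <= SELF_ESTEEM_CRISIS_AT:
                self.conditions.add("SelfEsteemCrisis")

        def lose_family_member(self):
            # A single severe event creates a serious condition immediately.
            self.conditions.add("FamilyCrisis")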

In the process, however, I was reviewing some of the old effects, and starting to question whether they made sense. Does doing push-ups have an effect on one's need for contact with others? What about one's sense of altruism? What does it even mean to have more or less altruism? Is this a measure of the AI's likelihood of doing something altruistic next? Or is it just a measure of their discomfort with their current state of altruism?

As I dug deeper, I decided to map out the existing stats and their meanings, for easier reference. And upon doing so, I realized that I had positive traits, like self-esteem, but they were being used to measure discomforts and crises, like hopelessness and futility. I started thinking of ways to express these same positive needs as the equivalent lack thereof, and turned to Twitter for help.

And as a result of that discussion, Lars pointed out an interesting article about Dwarf Fortress. In it, they discuss how the Adams brothers built the AI/narrative system in DF. And their approach was pretty much the opposite of mine. Instead of setting up a series of variables and trying to make stories out of their random interplay, the DF devs came up with example stories first, and built a system to generate those stories.

I apologize if someone already told me about this, as it kind of sounds familiar. But anyway, it's got me thinking. Thinking AI thoughts.

What I may do tomorrow (or soon, anyway) is perform a similar exercise. What are some example interactions and dramas I want to see play out on the ship? And can they be broken down into components or a system?

It's something I've kind of wanted to do anyway, to see if my system could generate such stories. So maybe it's getting to be time to do so.

Comments

Fins

It's limiting, though, if the system is designed the DF way. The number of pre-made stories is obviously limited. The number of potential stories that are entirely the result of AI interaction is not.

Perhaps something "in the middle" can be created. I am thinking of something like Diablo maps. The very first Diablo game, I mean. Its maps, IIRC, are built from pre-made small blocks which "make sense", but the entire structure is generated randomly, giving an endless variety of maps.

Can this be done with stories, I wonder? I don't know. I'd try to dig into it a bit.

Another thought I got while reading the last two entries, Daniel, is about antipathy. Humans tend to have it towards _some_ other humans. Sometimes John can't help but feel strong antipathy toward Mary, while not feeling it toward Nancy. And when John has to interact with both, his feelings change the outcome of such interactions significantly. At the same time, antipathy is not the "deciding" factor. Maybe introducing a corresponding variable to AIs could enrich the process?

... our lifestyles, mores, institutions, patterns of interaction, values, and expectations are shaped by a cultural heritage that was formed in a time when carrying capacity exceeded the human load. (c) William R. Catton, Jr

Malacodor

That's exactly what I meant with my suggestion from yesterday. Each AI has a number that somewhat represents their personality, and the closer the numbers of interacting AIs are, the better they get along with each other.

In your example John could have an interaction coefficient of 2, Nancy 3 and Mary 6. Since 2 is much closer to 3 than to 6, talking with Nancy makes John happier than talking with Mary.
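
In rough code, it could be as simple as this (the falloff formula is just one way to do it):

    # Closer personality numbers -> higher affinity (illustrative formula).
    def affinity(p1, p2):
        return 1.0 / (1.0 + abs(p1 - p2))

    john, nancy, mary = 2, 3, 6
    print(affinity(john, nancy))  # 0.5 -> John enjoys talking with Nancy
    print(affinity(john, mary))   # 0.2 -> talking with Mary goes worse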

Ran around with a clown mask before it was cool

Fins

Then we both independently developed the same idea, as I didn't see that comment of yours somehow. Funny, and at the same time, it gives the idea extra strength. If the two of us have it, I bet many other people/players would, too. :)

... our lifestyles, mores, institutions, patterns of interaction, values, and expectations are shaped by a cultural heritage that was formed in a time when carrying capacity exceeded the human load. (c) William R. Catton, Jr

dcfedor

I agree that there are some people who just "rub you the wrong way." Sometimes you can't explain it, but that person annoys you, or you don't trust them, etc. So yeah, I agree with the concept. And the random assignment of a number is probably sufficient to get things started.

E.g. "he's personality 1, she's personality 2, and that guy is personality 7. 1 and 2 get along, but can't stand 7. 4 comes along and doesn't particularly like any of them more, but in a pinch would probably favor 2, 1, and then 7."

As for the story modularity, I get what you're saying, but it'd depend on the granularity. I think DF's model is closer to what you're likening to Diablo, where they've written stories in advance to get an idea of the pieces that make them up. So they don't have a limited number of stories, just story elements.

Check out some examples here:
http://www.bay12games.com/dwarves/story/tt_gornon.html

The analyses at the bottom show the granularity they're dealing with. Things like "the existence of specific capitals," "stealth," and "swiping items from another."

Some of these are more mechanical, and others are just set dressing. But they're not enough to tell a story on their own. They're just the fibers of a narrative thread.
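
If I sketched that out in code, it might look something like the following. (These elements are invented for illustration; they're not DF's actual data or mine.)

    # Tiny story "fibers": each has preconditions and adds new facts.
    import random

    ELEMENTS = [
        {"text": "{a} swipes an item from {b}",  "needs": set(),         "adds": {"theft"}},
        {"text": "{b} notices the missing item", "needs": {"theft"},     "adds": {"suspicion"}},
        {"text": "{a} slips away in stealth",    "needs": {"suspicion"}, "adds": set()},
    ]

    def weave(a, b):
        facts, story, remaining = set(), [], list(ELEMENTS)
        while True:
            options = [e for e in remaining if e["needs"] <= facts]
            if not options:
                break
            e = random.choice(options)
            remaining.remove(e)
            facts |= e["adds"]
            story.append(e["text"].format(a=a, b=b))
        return story

    print(weave("Urist", "Gornon"))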

Dan Fedor - Founder, Blue Bottle Games

Anase Skyrider

Here's my small loan of a million dollars (instead of two cents):

-----

1. How are you naming your stats? Are you giving them names where a higher value means more of that quantity, or is it inconsistent?

Example: "SelfEsteem", if you have a higher value of this, then you have more self esteem. But then do you have the value "Hunger", and a person needs to eat when the value is low (despite having a high value being represented as "High hunger", or "Really hungry")?

Whatever you do, for the sake of keeping things easy for players to understand, name your stats so the values are consistent with the name: a higher happiness value means the person is happier, a higher anger value means the person is angrier, that kind of thing. I can't offer anything more specific than this, since I don't know all of the AI's needs.

-----

2. If you're looking to deconstruct conflicts into many components, then I suggest looking at the Firefly episode where Jayne sold out Simon to the feds. Look at what each character did, and what they valued.

-----

3. For more interesting and realistic people doing people-y things, you're going to want each person to have specific personality traits. Everyone will have the same set of complex needs and desires, but their tolerances, and how they react, will change because of these traits.

Example: Let's say -100 hunger means that the crew member dies, and the characters feel pretty hungry at around -50. Someone with a "Gluttony" trait will not only get hungrier faster, but will require more food to get the same hunger satiation (but will have more energy reserves, perhaps). Or, instead of getting hungrier faster, this person simply has a lower tolerance for when he/she eats. Instead of at -50 feeling very hungry, he/she will feel very hungry at -30, while eating the same amount as everyone else. How you balance this trait will vary, but you get the idea.
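
A rough sketch of that idea, treating hunger as a value that drops toward -100 like in the example (trait effects and numbers are just placeholders):

    # Illustrative: a trait shifts both the hunger rate and the tolerance threshold.
    from dataclasses import dataclass, field

    @dataclass
    class CrewMember:
        hunger: float = 0.0             # 0 = sated, -100 = dead
        hunger_rate: float = 1.0        # drop per tick
        very_hungry_at: float = -50.0   # where "very hungry" kicks in
        traits: set = field(default_factory=set)

        def apply_traits(self):
            if "Gluttony" in self.traits:
                self.hunger_rate *= 1.5      # gets hungrier faster...
                self.very_hungry_at = -30.0  # ...or just complains sooner

        def tick(self):
            self.hunger -= self.hunger_rate
            if self.hunger <= self.very_hungry_at:
                print("feels very hungry")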

For personality things, you can see this unfold in the Firefly example I gave earlier. Jayne had a much higher priority on money than he did on anyone's lives, compared to his fellow crew members. After being offered enough money, this priority crossed a threshold where it beat his loyalty and sense of morality toward his crew, and he sold out Simon. Simon, meanwhile, had a lot of loyalty and love for his sister, which is why he put so much effort into going through that hospital to help her. And to get the crew to come along, he offered them the money they'd get from the drugs in the hospital. But the reason the rest of them didn't sell out Simon is that they all place strong value on comradery. Malcolm could even be said to have a trait called "Extreme Comradery", which means he won't sell out any of his team members, but also that he reacts somewhat aggressively to any potential threat to his crew. Jayne, of course, came to regret his actions: he still felt what he did was wrong, but was persuaded by the money, creating a kind of mental distress. The distress was amplified when Malcolm, having learned of the betrayal, admonished Jayne.

This leads into another sub-point: people can be persuaded to do something they know is wrong if that knowledge is overridden by some other desire (either within them, or given to them by someone else). If someone gets away with it, they'll be more likely to do it in the future; if they're caught and it has negative consequences (e.g. "You betrayed my team, Jayne. Off the boat from the sky you go! ... Aww, fuck it; you can stay, but you're on thin ice!"), they'll be less likely to do it. This also makes the leader's role more important: if they don't punish someone sufficiently (punishment being what makes that person less likely to do something like that again), they might end up with a destabilized crew.

-----

4. People should also have biases, just like IRL. One is a negativity bias (focusing more strongly on the negative than the positive). Another is confirmation bias: you're more likely to accept something that confirms what you believe than something that contradicts it. You can even play with these biases in the traits I mentioned in #3. Someone might have an "Optimism" trait, meaning they have the reverse problem: they focus more strongly on the positive than the negative. Someone might have a "Pessimism" trait, which makes the negativity bias more extreme. And then you might have a "Realism" trait, which would let the person assess things on a 1-to-1 scale.
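
In code terms, those traits could just be weights on how strongly positive and negative outcomes register (all numbers invented):

    # Default is a mild negativity bias; traits override the weights.
    BIAS_WEIGHTS = {
        "Optimism":  {"pos": 1.5,  "neg": 0.75},
        "Pessimism": {"pos": 0.75, "neg": 1.5},
        "Realism":   {"pos": 1.0,  "neg": 1.0},
    }

    def perceived_outcome(actual, trait=None):
        w = BIAS_WEIGHTS.get(trait, {"pos": 1.0, "neg": 1.25})
        return actual * (w["pos"] if actual >= 0 else w["neg"])

    print(perceived_outcome(+10, "Pessimism"))  # 7.5
    print(perceived_outcome(-10, "Pessimism"))  # -15.0
    print(perceived_outcome(-10))               # -12.5 (baseline negativity bias)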

EDIT: Such a negativity bias can even help capture the truth that "first impressions are important." If people generally focus more strongly on the negative than the positive, then a negative first experience with a new crew member would naturally carry extra weight. And here's a fun video about negativity bias and loss aversion:
https://www.youtube.com/watch?v=vBX-KulgJ1o

Another possibly relevant thing is that humans think logarithmically. A basic example: as a hunter-gatherer out on the savannah, you hear rustling in the bushes and ask yourself, "Is there one tiger over there, or two?" But the next time, a hundred are coming after you, and asking "Are there one hundred tigers over there, or one hundred and one?" is pointless. Even though the absolute difference is the same in both cases (one tiger), the proportions are very different. Likewise, a human might try to save five dollars if all they have is twenty, but if they have a million dollars, then what's an extra five dollars?
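
A quick way to model that would be to score changes by their ratio rather than their absolute size (a Weber-Fechner-style log scale; this is just a sketch):

    import math

    # Felt size of a change depends on the ratio, not the difference.
    def felt_change(before, after):
        return math.log(after / before)

    print(felt_change(1, 2))                # ~0.69: one tiger vs. two is a big deal
    print(felt_change(100, 101))            # ~0.01: 100 vs. 101 tigers barely registers
    print(felt_change(20, 15))              # ~-0.29: losing $5 of $20 stings
    print(felt_change(1_000_000, 999_995))  # ~-0.000005: losing $5 of $1M doesn't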

I know you did some research in an attempt to figure out a decent system for things like wants and desires, so I hope you can also look into biases and other peculiarities in human thinking. This might be way more than can be realistically managed. I hope that this was useful. Good luck!

dcfedor

@Anase Skyrider, very useful! And I think we're on the same page in many respects.

Regarding terminology, I agree wholeheartedly. But English is not a science-friendly language :)

For example, I have stats for one's sense of achievement, but what I'm really doing is trying to alert the player when an AI feels a lack of achievement (which drives it to do something). What is "lack of achievement?" Underachievement? Failure? Ennui? Some words seem to have no exact complementary term.

Another example of trouble is the quartet of contact, friendship, family, and intimacy. These are distinct things, and require different people to fulfill. (Except maybe contact.) However, what are their respective complements? What does one feel when they lack family? Friends? How do you distinguish one missing thing from the other? Words like isolation, loneliness, and abandonment approximate the ideas, but overlap sloppily.

I agree that the player's going to need to understand these things, though, so that's part of the reason I'm spending extra time on this system.

As for personality traits, I'd like to do this, as well. Right now, there isn't a way to specify this in data, except for fairly unchanging/superficial traits. I could label an AI as "hard-ass" and make certain interactions off-limits, but that's not realistic.

One thing I'm considering, though, is making it possible for the data to reference itself. E.g. instead of an AI experiencing stress when "StatAchievement=25," the data could say "StatAchievement=StatAchievementTolerance." Then, the AI could check against its own StatAchievementTolerance for feeling stress, and certain personality traits and events could alter that StatAchievementTolerance up or down.
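
In pseudocode, the idea is that a trigger value can be either a literal number or the name of another stat on the same AI (the names and threshold direction here are illustrative):

    # A trigger value may be a literal number or a reference to another stat.
    stress_trigger = {"stat": "StatAchievement", "below": "StatAchievementTolerance"}

    ai = {"StatAchievement": 20, "StatAchievementTolerance": 25}

    def resolve(ai, value):
        return ai[value] if isinstance(value, str) else value

    def feels_stress(ai, trigger):
        return ai[trigger["stat"]] < resolve(ai, trigger["below"])

    print(feels_stress(ai, stress_trigger))  # True: 20 < 25

    # A personality trait or life event could then shift the tolerance:
    ai["StatAchievementTolerance"] -= 10
    print(feels_stress(ai, stress_trigger))  # False: 20 >= 15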

With biases, this might be possible if the AI has some modifier that applies to how they remember outcomes. Right now, it's 1-to-1 with what happened, but it could also be made to ignore positive or negative outcomes.

I guess the bottom line is it's all doable, just a matter of time, tenacity, and budget :)

Dan Fedor - Founder, Blue Bottle Games

Anase Skyrider

It may help to have a chart of every single need that a crew member has. Visual references may help you name and categorize everything, and maybe allow you to align things so that each name stays consistent with the idea of keeping the bar filled up (or emptied). I'm sure you've done everything you could have, though; you've spent many months on this.

If you can't get everything consistent and aligned, then perhaps have the bars act in a way that is consistent with the name. An example I like to use for the positive/negative thing, and wish I'd used earlier, is "Stamina" and "Fatigue". If you name the bar "Stamina" and it's full, what that tells me is that the person has a lot of stamina he/she can use for activities. If it's named "Fatigue" and it's full, what that tells me is that the person is incredibly tired and has little energy left to do anything. But in the case where "Stamina" doesn't exist and all you have is "Fatigue", you'd just go with "keep this bar empty", as opposed to treating it like stamina.

This compromise will make each bar act in a way that's consistent with its name, but the bars will be inconsistent with each other; some you want to keep low, and some you want to keep high. That can be difficult, especially when you're glancing at a long list of needs and have to ask yourself, "Does this one need to be high, or low?" (which can be made easier by coloring bars red when they're at a bad value and green when they're at a good value, just like NEO Scavenger).
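
A tiny sketch of that red/green coloring compromise (thresholds invented):

    # Each bar declares whether "full" is good; the UI colors it accordingly.
    BARS = {
        "Stamina": {"value": 80, "full_is_good": True},
        "Fatigue": {"value": 80, "full_is_good": False},
    }

    def bar_color(bar, ok_threshold=50):
        healthy = bar["value"] >= ok_threshold if bar["full_is_good"] else bar["value"] < ok_threshold
        return "green" if healthy else "red"

    for name, bar in BARS.items():
        print(name, bar_color(bar))  # Stamina green, Fatigue red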

That would depend on user feedback, as what's intuitive and easiest may vary. If at all possible, leave it as an option in the menus.

-----

"One thing I'm considering, though, is making it possible for the data to reference itself. E.g. instead of an AI experiencing stress when "StatAchievement=25," the data could say "StatAchievement=StatAchievementTolerance." Then, the AI could check against its own StatAchievementTolerance for feeling stress, and certain personality traits and events could alter that StatAchievementTolerance up or down."

This is what I was imagining. In Unreal Engine 4, with blueprints, it's called promoting a value to a variable (dunno if it's ever called that elsewhere). This way, you don't have raw values floating around, but variables that you can manipulate based on context. Instead of having "Max Stamina = 100", it'd be "Max Stamina = Max Stamina Variable", and maybe my guy grabs a Coca-Cola, which adds +15 to the "Max Stamina" variable, and he now has more max energy (or something like that).
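
Something like this, in other words (names made up; the +15 is just the soda example):

    # "Max Stamina" is computed from a base plus modifiers, not hard-coded.
    class Stats:
        def __init__(self):
            self.max_stamina_base = 100
            self.max_stamina_mods = []   # from items, traits, events, etc.

        @property
        def max_stamina(self):
            return self.max_stamina_base + sum(self.max_stamina_mods)

    guy = Stats()
    guy.max_stamina_mods.append(15)   # grabbed a Coca-Cola: +15 for a while
    print(guy.max_stamina)            # 115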

Based on the example with Jayne, you'd probably have a variable that we'll just call "Money Greed" (to specify the kind of greed it is). You had, in your latest video, a demonstration of priorities based on certain values. For someone like Jayne, even small amounts of money would update this priority much more significantly than for a normal person. And someone like Malcolm, who is a leader and has to be concerned with money for survival, would have an exception to his actions: he'd never choose greed over his team. Or maybe his "Comradery" stat just can't be exceeded by anything else, like money. Thus the selfish route will not be chosen, unlike with Jayne.

And maybe after betraying Simon, and being admonished by Malcolm, Jayne would have a penalty to his desire for greed. Or maybe the penalty would be applied to actions where greed would supersede things like "Comradery".
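
Pulling that together in a rough sketch (all numbers arbitrary):

    # The same bribe moves "Money Greed" further for a greed-sensitive AI,
    # and a betrayal only happens if it exceeds that AI's comradery.
    def offer_bribe(ai, amount):
        ai["MoneyGreedPriority"] += amount * ai["GreedSensitivity"]

    def chooses_betrayal(ai):
        return ai["MoneyGreedPriority"] > ai["Comradery"]

    jayne   = {"GreedSensitivity": 2.0, "MoneyGreedPriority": 10, "Comradery": 40}
    malcolm = {"GreedSensitivity": 0.5, "MoneyGreedPriority": 10, "Comradery": 999}

    for ai in (jayne, malcolm):
        offer_bribe(ai, 30)

    print(chooses_betrayal(jayne))    # True:  10 + 60 = 70 > 40
    print(chooses_betrayal(malcolm))  # False: 10 + 15 = 25 < 999

    # After being caught and admonished, apply the penalty described above:
    jayne["MoneyGreedPriority"] -= 30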

This entire thing is starting to feel like a complex game of D&D.