AI Socialization

Hey Folks! Quick update today, as dinner's in the oven and will need tending.

Today was about watching the AI do its thing, and taking notes to see what needs fixing or improvement. The AI is pretty stable now, and what I'm looking for is whether it:

a) makes sense
b) is entertaining to watch

As mentioned way back when, part of this game is about procedural drama and crew management during longer voyages, so the AI needs to do interesting things. So far, it's interesting to watch, but probably not dramatic. More like watching insects. I'm not sure I'm seeing much reason in their behavior beyond initial attempts to satisfy needs. And part of that is because AIs only plan their next move when they're the ones starting a conversation/interaction.

Mid-interaction, AIs just respond in whatever way satisfies their needs best, without any regard for their feelings about, or relationship with, the other AI. This means Abner can chat with Bruce, and Bruce might insult Abner, but if Abner later flirts with Bruce, Bruce may flirt back if he needs intimacy. There's no consideration of the previous negativity. Bruce just needs intimacy, and doesn't care who it comes from.

That may be tricky to solve. Though I may already have some data stored by each AI that helps. (Basically, the same data they use to start a conversation/interaction.)
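For illustration only, here's roughly what relationship-aware reply scoring could look like. None of this is the actual game code; score_reply, need_effects, and the numbers are all made up:

    # Sketch: score candidate replies by need satisfaction, biased by a
    # remembered opinion of the other AI. All names are hypothetical.
    def score_reply(reply, needs, opinion):
        # Base score: how well the reply satisfies the responder's needs.
        base = sum(needs.get(n, 0.0) * amount
                   for n, amount in reply["need_effects"].items())
        # Grudges suppress friendly replies; goodwill suppresses hostile ones.
        bias = opinion if reply["friendly"] else -opinion
        return base + bias

    replies = [
        {"name": "flirt back", "friendly": True,  "need_effects": {"intimacy": 0.8}},
        {"name": "rebuff",     "friendly": False, "need_effects": {"privacy": 0.3}},
    ]
    bruce_needs = {"intimacy": 0.9, "privacy": 0.2}
    opinion_of_abner = -0.7  # relationship soured by the earlier insult

    best = max(replies, key=lambda r: score_reply(r, bruce_needs, opinion_of_abner))
    print(best["name"])  # "rebuff": the grudge outweighs the intimacy need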

I also noticed a few smaller issues with the AI, now that it's running (mostly) smoothly. The AI was accidentally using replies as opening statements in conversations, which turned out to be a bug in the code that remembers conversation results: it was storing replies as if they were opening statements, and later picking its favorites to open new conversations.
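The fix amounts to tagging each remembered line with the role it was used in, so openers only ever come from actual openers. A toy sketch (remember, pick_opener, and the data layout are invented for illustration):

    # Sketch: tag remembered lines by role so replies can't leak into
    # the pool of opening statements.
    memory = {"opener": [], "reply": []}

    def remember(line, role, outcome):
        memory[role].append({"line": line, "outcome": outcome})

    def pick_opener():
        # Draw only from lines actually used as openers, best outcome first.
        openers = sorted(memory["opener"], key=lambda m: m["outcome"], reverse=True)
        return openers[0]["line"] if openers else None

    remember("Nice weather on deck.", "opener", 0.6)
    remember("Get lost.", "reply", 0.4)  # before the fix, this could become an opener
    print(pick_opener())  # "Nice weather on deck."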

I also noticed missing data in a few negative replies, which caused AIs to keep trying interactions that resulted in denial. They just weren't getting punished, so they didn't care.
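In spirit, the fix is just making sure a denial applies a cost the initiator remembers. A toy version (record_result and the numbers are illustrative, not the game's actual values):

    # Sketch: denials should cost the initiator something, so repeated
    # rejection discourages retrying the same doomed interaction.
    opinions = {}  # (initiator, target) -> accumulated opinion

    ACCEPT_REWARD = 0.1
    DENIAL_PENALTY = 0.4

    def record_result(initiator, target, accepted):
        key = (initiator, target)
        delta = ACCEPT_REWARD if accepted else -DENIAL_PENALTY
        opinions[key] = opinions.get(key, 0.0) + delta

    record_result("Abner", "Bruce", accepted=False)
    record_result("Abner", "Bruce", accepted=False)
    print(opinions[("Abner", "Bruce")])  # -0.8: Abner should start avoiding Bruce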

I think there's also an issue where AIs allow interactions to stay too one-sided. Basically, Abner keeps starting interactions with Bruce, Bruce keeps turning him away, and there's no end in sight. This might be the result of that missing denial penalty, but it might also mean AIs need a way to bail out.

Finally, I started looking into ways of making AIs do more than just say "no" when they're done with another AI. I think it's possible for AIs to pathfind to a new point as part of their reply, which might make for an interesting way to get insulted and walk off. It might also solve the above problem of not being able to bail out.
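Roughly, the reply could carry an optional destination along with the line. A sketch of the idea (choose_negative_reply, REJECTION_LIMIT, and the structure are hypothetical; the real pathfinding would be the game's own):

    # Sketch: a reply can carry an optional move, so "no" can escalate into
    # walking away, which doubles as a bail-out from one-sided interactions.
    import random

    REJECTION_LIMIT = 3  # bail out after this many denials in a row

    def choose_negative_reply(denials_so_far, nearby_points):
        if denials_so_far >= REJECTION_LIMIT:
            return {"line": "Leave me alone.",
                    "end_interaction": True,
                    "move_to": random.choice(nearby_points)}  # pathfind away
        return {"line": "No, thanks.",
                "end_interaction": False,
                "move_to": None}

    print(choose_negative_reply(3, [(2, 5), (7, 1)]))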

More on that tomorrow!

Comments

Azur

Hello,
Maybe you could add a kindness metric (each AI seeking a certain % of kind acts over a period of time), so that an AI that doesn't want to do something would still do it to be kind to the other; they live in a community, after all.
Or a karma metric, where every time an AI refuses to do something, each consecutive request from the same AI has a higher chance of success.
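Something like this, maybe (just a rough sketch, made-up numbers):

    # Rough idea: each refusal from B raises the success chance of A's
    # next request to B, up to some cap.
    def success_chance(base, consecutive_refusals, step=0.125, cap=0.95):
        return min(cap, base + step * consecutive_refusals)

    print(success_chance(0.5, 0))   # 0.5
    print(success_chance(0.5, 2))   # 0.75: persistence slowly pays off
    print(success_chance(0.5, 10))  # 0.95: capped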

Best,

Malacodor

Give each AI a random number between 1 and 10. If Abner has 1 and Bruce has 3 the difference is 2 which results in a 20% penalty on interaction benefits due to differing interests and behaviours. This way it will matter with whom they interact.

If the difference is > 5 then it's subtracted from 10, the new value is then used instead of the actual difference. This creates a circular relation similar to the numbers on a clock-face to avoid some numbers being more favourable than others. The effective difference between 1 and 8 would be 10 - (8 - 1) = 3 for example.

Memories of previous interactions increase or reduce the aforementioned penalty (and can even turn it into a bonus) depending on whether they were pleasant or not. This way the AIs can overcome initial differences or worsen them. This modifier probably needs diminishing returns or a decay over time to avoid relations becoming too extreme.
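In code it could look roughly like this (function names are just for the example):

    # Sketch of the above: clock-face difference plus a memory modifier.
    def effective_difference(a, b):
        d = abs(a - b)
        return 10 - d if d > 5 else d  # wrap around like a clock face

    def interaction_penalty_pct(a, b, memory_bonus_pct=0):
        # 10 percentage points of penalty per point of effective difference;
        # pleasant memories shrink it and can even flip it into a bonus.
        return effective_difference(a, b) * 10 - memory_bonus_pct

    print(effective_difference(1, 8))         # 3, as in the example above
    print(interaction_penalty_pct(1, 3))      # 20 -> 20% penalty
    print(interaction_penalty_pct(1, 3, 30))  # -10 -> net 10% bonus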

Ran around with a clown mask before it was cool

dcfedor

@Azur, there is sort of a system like the one you propose. Each AI has certain psychological needs, including "altruism" and "family," and certain interactions fill or deplete those needs. But it's still not personal. They just do things to fulfill needs, not to affect another person or the relationship.

@Malacodor, I'm not sure I completely understand how your system works, but I like the cyclical nature of it. Good narrative and drama revolves around tension and release, and your system definitely seems to do that.

Dan Fedor - Founder, Blue Bottle Games