AI: Actually Intelligent? Also, OOO Monday

Hey Folks! I think we're getting somewhere with the AI code. After another productive day of tweaking, fixing, and cleaning, the AI is starting to be both intelligent and predictable.

I had to revert a change I made yesterday, and make the AI start considering random new interactions only after all known options were exhausted. Trying to do both at once was causing the AI to do harmful things to itself. But once I did that, I was starting to see progress. I could spawn a lone AI, with no experience, and watch it start trying options out. Then, as it found random interactions that made a difference, it started using them deliberately to address needs it had. Neat!
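The "known options first, random experiments second" loop above can be sketched roughly like this. This is a minimal Python illustration, not the game's actual code; the `AI` class, `choose`, and `record` names are all hypothetical.

```python
import random

class AI:
    """Illustrative sketch: prefer remembered solutions, and only
    experiment with untried interactions once memory offers nothing."""

    def __init__(self, interactions):
        self.interactions = interactions  # all interactions in the world
        self.memory = {}                  # interaction -> observed stat deltas

    def choose(self, need):
        # 1) Prefer any remembered interaction known to help this need.
        known = [i for i, delta in self.memory.items() if delta.get(need, 0) > 0]
        if known:
            return max(known, key=lambda i: self.memory[i][need])
        # 2) Otherwise, experiment with an interaction we haven't tried yet.
        untried = [i for i in self.interactions if i not in self.memory]
        return random.choice(untried) if untried else None

    def record(self, interaction, deltas):
        # Store what actually happened so later choices are deliberate.
        self.memory[interaction] = deltas
```

A fresh AI picks randomly; after one successful trial, the same need is addressed deliberately, which is the behavior described above.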

However, once a second AI entered the picture, I started seeing some bugs. First, AIs were using "self" interactions, such as seeking privacy, on others. That didn't make any sense. Also, AIs were seemingly having conversations with others that were already busy doing something else, resulting in a confusing jumble of interactions for the player to follow.

I made some changes and added some checks, and by the end of the day, even multiple AIs were starting to make sense. I could spawn a lone AI, let it start experimenting with a "self" interaction, then spawn a new AI, and it would avoid bothering the busy other AI when trying interactions. And later, when both were free at the same time, they could use each other to try interactions.


Edison randomly tries SeekFamily, sees that it helps her StatFamily, so then "Plans" on using SeekFamily again to help StatFamily before trying out SeekAutonomy on Bedford.

So, as mentioned above, I think we're where we want to be with AI code. The next step will be to start updating the rest of the interaction data to make use of this new code (only about half of them were upgraded for this testing phase).

After the above was finally working, I was able to rip out the old decision logic, buggy relationships system, and other redundant code/data. I'm pretty excited to get cracking on it next week!

Speaking of next week, Monday is a stat holiday in the US (Memorial Day), so I'll be taking the day off to be with my family. So enjoy the weekend, all, and see you Tuesday!

Uncharted AI Territory

Hey Folks! AI work continues today, as I focused on the AI's ability to try new ideas if they don't have experience solving their current problems.

Yesterday's work was able to get AIs to the point where they can scan their memory for interactions that could help their current priority, and gauge which other AI/items would help most as a target of the interaction. It turned out there were still a few bugs and improvements to be made with this system.

The biggest, by far, was getting the AI memory to store not only the opening interaction in a "conversation," but also all subsequent interactions until it was over. This way, the memory it draws upon later is informed by the results of the opening line.

E.g. if Brickell uses "SeekFriendship" on "Jones," there are some immediate benefits/drawbacks to her stats, and this is stored. The new code, however, will also take into account how Jones replies, and other back-and-forth in this chain, and store all results under this opening "SeekFriendship" datum.
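To make the example concrete, here's a rough sketch of storing a whole conversation under its opening interaction, so later recall reflects the entire exchange rather than just the opening line. The class and data shapes are illustrative, not the game's real structures.

```python
from collections import defaultdict

class ConversationMemory:
    """Illustrative sketch: accumulate every stat delta from a
    conversation under the interaction that opened it."""

    def __init__(self):
        self.log = defaultdict(lambda: defaultdict(int))

    def record_conversation(self, opener, exchanges):
        # exchanges: list of stat deltas, one per back-and-forth step.
        for deltas in exchanges:
            for stat, amount in deltas.items():
                self.log[opener][stat] += amount

    def expected(self, opener, stat):
        # What the AI now expects from using this opener again.
        return self.log[opener][stat]
```

So if the opening "SeekFriendship" gains a little, but the rest of the exchange goes badly, the net memory for "SeekFriendship" ends up negative, informing the next decision.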

The result, I'm hoping, is that each AI will make better decisions.

Once that was done, I started working on code to try new things in cases where memory/experience offers nothing. My first attempt was only partially successful, as it resulted in the AI trying new things over and over even though they knew it wouldn't work.

As it turns out, I was forgetting to verify the results/validity of the random trial. I should've checked to make sure it was a benefit, and skip it if not. So I've started fixing it up so it does that.

I also needed to move this random trial code into the loop where I check each priority for memory solutions. It turns out there's some important info in that loop that I'd be better off piggybacking on, rather than rebuilding/loading in a separate, subsequent loop. It shouldn't hurt the AI decision code to do this, but it'll affect the personality a bit. Namely, for each priority an AI tries to solve, it checks known solutions first, then unknown ones, then moves on to the next priority. (Previously, it would loop through all priorities and only check known solutions, then spot check some random ones.)
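The reordering described above amounts to something like the following sketch: known solutions and random trials are both checked inside the per-priority loop, instead of random trials happening in a separate pass. All names here are illustrative.

```python
def choose_action(priorities, known, trials):
    """Illustrative sketch: for each priority in turn, check known
    solutions first, then random trials, before moving on to the
    next priority."""
    for need in priorities:          # highest priority first
        if need in known:
            return known[need]       # remembered solution wins
        if need in trials:
            return trials[need]      # else experiment for this need
    return None                      # nothing applicable at all
```

With this ordering, an AI will try an unknown fix for its top need before falling back to a known fix for a lesser need, which is the personality shift mentioned above.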

We're getting there, I think. I'm hoping to see it in action tomorrow!

AI V2 Coming Online, Other Minor AI Buffs

Hey Folks! Managed to get a lot done today, and I think I'm out of the "bug bog" that was slowing AI development down.

First of all, I finally fixed the bug that was causing AIs to be able to use the same object simultaneously. This was a combo of cleaning-up the code and reworking the way "waiting" is ascribed to things during interactions. "Waiting" is more reliable now, and AI can better find things that are available to be used.

I also fixed a bug that caused AI to immediately use objects from across the ship if they couldn't pathfind to them. And while I was at it, I also forced AI to pathfind to the center of tiles instead of the closest edge, since this makes it easier to see what's going on. Along with some improvements to the way AI determines where to stand when interacting, they're no longer standing atop each other when doing stuff, and it's much easier to follow their activities. Yay!

Once that was done, I could finally resume working on the "V2" AI I mused about earlier. This new AI figures out what to do next by using its memory, instead of some "hint" data I was putting into objects. The "hints" would tell the AI that "food will decrease hunger," which is fine, but I would have to remember to add every possible hint to every possible item interaction. And this setup would prevent AI from doing novel things and trying new ideas.

The new system, instead, looks at its own priorities first. Then, it gets a list of all interactions it knows of that could help. Finally, it checks the area for all objects that could accept those interactions, and figures out which one has the best effect.

So far, so good.
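That three-step lookup might look something like this in rough form. This is a hypothetical Python sketch; the memory and object data shapes are assumptions for illustration.

```python
def pick_target(priorities, memory, objects):
    """Illustrative sketch of the 'V2' lookup: take the top priority,
    list remembered interactions that help it, then score every nearby
    object that accepts one of those interactions."""
    need = priorities[0]
    # Interactions this AI remembers helping the need, with expected benefit.
    helpers = {i: d[need] for i, d in memory.items() if d.get(need, 0) > 0}
    best, best_score = None, 0
    for obj, accepted in objects.items():
        for interaction in accepted:
            score = helpers.get(interaction, 0)
            if score > best_score:
                best, best_score = (obj, interaction), score
    return best  # (object, interaction) pair, or None if nothing known
```

Note that an AI with an empty `memory` returns `None` here, which is exactly the "dumb as a rock" priming problem discussed below.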

There's one problem, though. This requires that the AI have some experience already in order to draw upon it. And at the moment, all AI starts out dumb as a rock. I need a way to "prime" the AI with some knowledge before setting them loose upon the world.

One approach would be to fall back on the "hint" system. I know it works, and I could force the AI to try the new "V2" mode before the old "hint" one. However, that still requires me to put a lot of data into each interaction. Plus, I think this might undermine the "V2" system since the only interactions it would "learn" are those provided by hints.

Tomorrow, I'd like to look into other ways to get the AI trying new ideas on objects in order to increase their memory and list of options. Maybe they can simulate the interaction on a dummy object in their brain, check the result, and decide if they want to do it in real life? One cool aspect of this approach is that their mental model would be based only upon conditions they can see in the target, and it might not work out the same in reality (e.g. the target has a hidden emotional state that causes a different outcome).
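The dummy-object idea could be sketched like this: the prediction uses only conditions the AI can see, while the real resolution uses every condition, hidden ones included, so the two can diverge. Function and condition names are hypothetical.

```python
def predict(memory, interaction, visible_conditions):
    """Illustrative sketch: the AI's mental simulation, built only
    from conditions it can see on the target."""
    return sum(memory.get((interaction, c), 0) for c in visible_conditions)

def resolve(effects, interaction, all_conditions):
    """The 'real life' outcome uses every condition, including hidden
    emotional state the AI couldn't see."""
    return sum(effects.get((interaction, c), 0) for c in all_conditions)
```

So the AI might predict a pleasant chat, try it for real, and get burned by a hidden condition, which then updates its memory.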

Could be exciting!

Cleaning-up AI

Still working out the AI bugs today. I found quite a few minor issues in the AI code that were probably contributing to errors in their behaviors. Things like AIs clearing the wrong interaction when told to stop waiting, and reliance on a "TaskBusy" condition which was unreliable. There may still be some confusion between the AI's "busy" status and the presence of "TaskBusy" conditions. The pattern is probably redundant now.

I also finally bit the bullet and added code to explicitly check if an AI's target object has been deleted. If so, it'll remove the interaction targeting that object and move on to a new task. I was worried this would just put a band-aid on a symptom without solving the problem. And in fact, patching it up may have revealed another problem!

It seems that the AI is releasing the target object prematurely after it finishes walking over to it. This could be a regression caused by my tinkering today, or maybe a new bug. Whatever the case, it's a clear place to start tomorrow. And should hopefully allow me to start working on higher level AI stuff again.

Memory Matrix, and AI Bugs

Hey Folks! Hope everyone had a good weekend. This was perhaps our most normal one yet here in Seattle. Mostly household chores, errands, and even some family outings! We're looking forward to more like this.

Back on the prototype today, I resumed work on the new AI code. By the time lunch came, I had it mostly doing what I was aiming for. Namely, it was keeping an internal record of which interactions had what effect on certain stats. That, and for each visible condition on the target of the interaction (a.k.a. "them"), it tracked the individual effects.

My intention is for this to help AIs determine not only which interaction they should be trying next, but also which target is best to try it on. I want the AI to come up with some sort of score for each possible move, and make it choose that. And the way it records data, it'll learn a bit as it goes, hopefully forming something that appears to be a relationship.

E.g. Sreyovich always avoids socializing with Mann, and instead prefers chatting with Brickell. One day, however, Brickell is really annoyed at something, and snaps at Sreyovich. Srey logs this and it starts affecting his choice of whom to socialize with, possibly preferring Mann or that new crew member that joined at the last port.
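The memory "matrix" described above might be structured roughly like this: per interaction, effects are tracked per visible condition on the target, and candidates are scored by summing over their conditions. The class and condition names are illustrative, not the game's real data.

```python
from collections import defaultdict

class MemoryMatrix:
    """Illustrative sketch: track stat effects of each interaction per
    visible condition on the target, then score candidate targets."""

    def __init__(self):
        self.by_condition = defaultdict(lambda: defaultdict(int))

    def record(self, interaction, target_conditions, stat_deltas):
        for cond in target_conditions:
            for stat, amount in stat_deltas.items():
                self.by_condition[(interaction, cond)][stat] += amount

    def score(self, interaction, target_conditions, stat):
        # Higher score = better expected payoff on this target.
        return sum(self.by_condition[(interaction, c)][stat]
                   for c in target_conditions)
```

After a few pleasant chats with one target and one blow-up with another, the scores diverge, and that divergence is what starts to look like a relationship.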

Before I could go much further, though, there was a bug to solve. My AIs would occasionally get stuck waiting forever, usually trying to eat a food packet. It seems they may be trying to access the same food packet as another AI at the same time, and one of them gets it while the other keeps waiting for a packet that no longer exists.

There are a lot of ways to deal with this scenario, including some colorful drama. But the tricky part is detecting when it happens so the AI can act accordingly. Ideally, I will find a way to avoid this bug before it happens, so AI don't unrealistically wait for nonexistent items. But worst-case scenario, I could also have them periodically check that their target is still there, and move on if not.
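The worst-case fallback mentioned above — periodically confirming the target still exists — could look something like this. The dict-based AI state and field names are purely illustrative.

```python
def tick_waiting(ai, world):
    """Illustrative sketch: while waiting, the AI checks that its target
    still exists and abandons the task if it has vanished."""
    target = ai.get("target")
    if target is not None and target not in world:
        ai["target"] = None   # item vanished (someone else got the packet)
        ai["task"] = "Idle"   # move on instead of waiting forever
    return ai
```

The nicer fix is still to avoid the race in the first place, but a check like this at least caps how long an AI can wait on a nonexistent food packet.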

Once that bug is solved, I should be able to watch a large crew go at it for a while, and dump a log of their memory "matrix" for review. It should give me a better idea of how the AI could work with that memory to make better (i.e. more entertaining) decisions.

Have a good night, all!

More AI Design

Still plugging away at the new AI feature. The exact workings are a bit hard to nail down, but I'm starting to see a few neat opportunities.

Right now, I'm trying to set up a sort of needs vs. interaction memory for each AI that helps them decide whom to interact with, and how, in order to satisfy their needs. On one hand, I want it to help them discover new things to try, and on the other, I want them to remember what works so they can do it again if they need it.

The system works top-down, starting with the AI's specific need from the existing stats I have in place (e.g. hunger, security, etc.). Then, I want the AI to narrow down the list of potential interactions to try that will satisfy that need. And finally, I want the AI to compare that list of interactions against known and newly discovered tools/AIs to see if any of them match.

My hope is that they'll be able to do this top-down search in a way a lot like you and I do, allowing them to adapt to what's available. And so far, this seems like it should be doable without too much building.

One of the cool opportunities I'm seeing is that I can use some of the item crafting properties from NEO Scavenger to tag objects in this game. NEO Scavenger already had a pretty good basis for describing item functionality with terms like "rigid," "flammable," and "liquid." If I copy these over and make them into conditions, like all the other stats in the game, I can use these in the existing interaction/trigger system. This could allow me to start tracking things like "items with the liquid and hydrator conditions tend to quench thirst" while "items with solid and absorbent don't quench thirst." AIs can start analyzing an item based on properties and their previous experiences to decide if a new item is worth trying.

Another cool opportunity that emerges from this is that I may not have to specify possible interactions for every object in the game. Instead, I may be able to simply define relevant properties on every item, and the AI may be able to figure out which interactions would apply. E.g. instead of AIs only ever sleeping on beds, they may discover a couch or even a stray pillow has the "cushion" and "flat" properties, and try sleeping on those instead. It could end up just being silly, or it could be really interesting.
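The property-matching idea above reduces to a simple subset check: each interaction declares the properties it needs, and any item tagged with those properties qualifies. A minimal sketch, with hypothetical property and interaction names:

```python
def usable_interactions(item_properties, requirements):
    """Illustrative sketch: return every interaction whose required
    properties are all present on the item."""
    return [name for name, needed in requirements.items()
            if needed <= item_properties]  # subset test on property sets
```

This is how a pillow could qualify for sleeping without anyone ever listing "sleep" on the pillow itself.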

The trick is going to be getting the AI to see this info. It's all floating in front of me right now, and I'm trying to organize it in a way I can use. My brain's going in circles, though.

Fortunately, the weekend is upon us, so perhaps a break will help. Have a good one, all!

AI Relations and Decision Making

Hey Folks! I finally got the new interaction data format working, and was able to start testing again.

The good news is that it seems to be working as designed. I'm now seeing the upgraded stat counter values change accordingly, including the poop relief of -25 units. (Basically, clearing a full day's worth of food from the bowels.)

The bad news is, I'm starting to notice problems in my AI decision-making. AI still seems to bee-line for the food despite having bigger priorities in their list. And it seems to be stemming from the first draft of the relationship system I added way back.

Basically, whenever an AI is deciding what to do next, they check their current priority list for top issues. Then, for each in turn, they check all shipboard ConditionOwners (COs, or basically interactive objects) for interactions that could satisfy their need. Whichever one has the best combo of high priority and high reward wins, and they walk to that. So far, so good.

However, the way the AI is calculating the reward of the interaction is a bit wonky. I have a relationship object for each pair of COs in the world which is basically their mental image of each other. They decide whether to try an interaction based on this relationship. But the values for these relationships aren't really balanced. It's easy for one relationship to blind the AI to all others until it cannot be chosen anymore. E.g. raiding the fridge until it's empty, despite being super tired or needing to poop.

The solution, I'm thinking, is to make the relationship more about properties of the target, not just the target itself.

For example, the AI will have a list of conditions it knows about, like "Malfunctioning," "Kitchen," "Infirmary," and "Fridge." If it successfully gets food from the Kitchen Fridge, both "Fridge" and "Kitchen" get a bump in their score. If the AI fails to get food from the Infirmary Fridge, "Infirmary" and "Fridge" get docked points. Over time, the AI starts to build a mental model of things that work vs. not, so they might end up knowing that "Fridge" can be hit or miss, but things like "Kitchen" increase their chances of finding food, while "Infirmary" reduce them. They may still try a non-kitchen fridge in a pinch, but will definitely prefer the one in the kitchen if they have one.
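The bump/dock scheme in the fridge example can be sketched in a few lines. This is a hypothetical illustration of the idea, with made-up scoring values of +/-1 per outcome.

```python
from collections import defaultdict

class ConditionScores:
    """Illustrative sketch: successes raise the score of every condition
    on the target, failures lower them; candidates are ranked by the
    sum of their conditions' scores."""

    def __init__(self):
        self.scores = defaultdict(int)

    def update(self, conditions, success):
        for cond in conditions:
            self.scores[cond] += 1 if success else -1

    def rate(self, conditions):
        return sum(self.scores[c] for c in conditions)
```

One success at the Kitchen Fridge and one failure at the Infirmary Fridge leaves "Fridge" itself at zero (hit or miss), while "Kitchen" and "Infirmary" pull the two candidates apart.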

There's more to it than that. I think it'll need to account for specific instances of objects, like "Jones." Things like "Man" and "Old" are good info to work from, but for drama to take place, the AI is going to need relations to specific people, not just people categories.

I'm also thinking I might have to tweak the way scores are assigned to each value so that it accounts for multiple interactions in a chain. So if an AI starts a conversation with "Jones" that results in a blow-out argument, that initial interaction should get colored with that negative data to reduce the desire to try it again in the future.

I think. This is pretty heavy stuff, and pretty theoretical. I'll have to be careful not to fall too far down this rabbit hole :)

Website and Business Planning

Got side-tracked today with website stuff. The Jibe is starting to map out the new home and games listing pages for my site, so we're working on what needs to be a part of each of those. And I wanted to give that the time it deserved to make the new site more interesting, engaging, and effective.

I also had a bit of research to do about setting up the new business, and whether I'll want a registered agent or not. So far, I'm thinking I will, since it keeps my personal and business addresses separate, and also allows me to keep the same address if I move house. Plus, the registered agent fee is actually a bit cheaper than equivalent PO Boxes.

I started researching alternative press key handling services, too, since I may switch from dodistribute() on the new site. There are a couple of services I've heard about and am checking out, wondering if they'll make my life easier, as well as whether they'll be able to amplify my marketing efforts when the new game announcement is official.

Finally, I solved a few minor bugs today in some code, related to the interaction refactor I've been working on. My triggers needed to handle cases where there are no limitations (i.e. null settings) and should always pass. Plus, there seems to be an array that's not initialized correctly causing a null-pointer.

Sorry for the dull news day. Believe me, I'm hoping to get back to the code tomorrow. And more to the point, having something interesting to show soon!

Interaction Data Refactor

Hey Folks! I had a whole day of dev today! It's been a while since I could say that, and it was a nice change. And probably a good thing, too, as the day involved a pretty major data refactor.

I decided to move forward with the interaction refactoring. This meant changing the way interactions list:

  • Conditions to apply to Us
  • Conditions to apply to Them
  • Conditions We must have/not-have to make interaction available.
  • Conditions They must have/not-have to make interaction available.
  • List of items to add to Us
  • List of items to add to Them
  • List of items to remove from Us
  • List of items to remove from Them
  • List of items to transfer from Us to Them
  • List of items to transfer from Them to Us

As you can see, quite a lot of data per interaction. And the way it was written, there was a lot of redundancy. (Like listing the same thing multiple times in a row to stack its effects.)

The new data format makes better use of things like Condition Triggers and Loot specifications, which each can list multiple things. I can just say "5x this plus 3x that" instead of listing the same thing 5 times in a row. It also lets me do some randomization, though I'm unsure if I'll use it since that may complicate things too much.
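The "5x this plus 3x that" idea boils down to expanding compact (name, count) entries, with an optional (min, max) range for the randomization mentioned. A minimal sketch; the data shapes are assumptions, not the real format.

```python
import random

def expand(spec):
    """Illustrative sketch: expand compact entries like (name, count),
    or (name, (min, max)) for a randomized quantity, into a flat list."""
    out = []
    for name, count in spec:
        if isinstance(count, tuple):
            count = random.randint(*count)  # optional randomized quantity
        out.extend([name] * count)
    return out
```

One line of data per effect, instead of the same condition repeated five times in a row.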

Changing the code wasn't too hard. A few hours of tracking down all the references and it was done. Even the GUI data editor wasn't too hard to update.

However, the data is taking a while. I have something like 60 interactions in the game right now, and each requires some manual data-entry to convert. I briefly considered trying to write code to automate it, but I think it'd be a wash in terms of effort. Plus, doing it by hand means I get to review all the data I haven't seen in a while, to refamiliarize myself with it. (E.g. I can see that most social interactions get 1-5 stat points and a few major ones get 10, so this helps me continue adding more with consistent values.)

I'd say I'm 80% done with the conversion now, which is further than I expected to be by the end of the day. Hopefully, I'll be able to test this out tomorrow. The results should let me create new ship items more quickly, and with more options!

Toilets Working, Data Rethink

Hey Folks! Hope everyone had a good weekend. Ours was mercifully light on chores and errands, for once. We even socialized with outsiders. Almost like normal people!

Back in space, I tried to focus on getting the toilets working, and I think they're sorted. It's a simple interaction for now. AI decides they need to poop, seeks a toilet, and for now, the toilet always allows pooping, so they proceed. There's room for variation there, of course. Things like a malfunction, or health condition preventing poop. But this was mainly about getting another physiological need operating, and that's done.

After testing a ship out with 4 beds, 2 stocked fridges, and 2 toilets, I was starting to see some AI patterns. And my first obstacles.

First of all, AI loves food. Like, no matter what is highest on their priority list, if food is in the top 3-4, it's what they go for first. I know you're probably thinking, "duh," but this isn't working as designed. Something is causing AI to think satisfying a greater need isn't as important as a lesser need, which is concerning.

One possibility is that the various interactions have inconsistent hint data. Each interaction has a list of things it promises to an AI. What they actually get depends on whether the interaction succeeds. But for the purposes of deciding what to do next, the AI needs to know which interactions might give them what they want. And whichever one promises the most for a high priority wins. So maybe food is promising too much.

It could also be something less obvious, so I'll need to debug to know for sure.

A second thing I noticed had to do with the data editing. When I specify the results of an interaction, I list the condition triggers that are applied. They'll be things like +1 achievement stat, or -1 food. Provided the trigger passes its test, the specified condition changes by the specified amount.

The problem with this is that adding/subtracting something like +5 to a stat means I have to either:

1) Create a condition trigger that's +5 to the stat, or
2) List the +1 stat condition 5 times

And if I want to subtract 5, I need a separate copy that's -5 or 5 copies of the -1. And ditto for +/-10, 15, 20, etc. And when you start to do this for each stat, you end up with long lists of almost redundant data and bloated condition trigger definitions.

I'm thinking a better way to do this is to just have a single condition for each stat and let the condition trigger multiply it by +1, -1, +5, -5, etc. as needed. This way, the "EatFood" interaction can just apply a "Hunger x-20" instead of listing "HungerDown" 20 times.

And in fact, as I was looking this over, I realized this is basically a copy of the new loot system I put into place. E.g. "Fridge01" is shorthand for 10-30 yellow food packets, so I only have to list "Fridge01" in the data, and the game knows that's referring to a loot table entry. I should be able to do this for conditions, as well, and maybe that'll simultaneously give me lots of versatility, reduce data complexity/redundancy, and reuse code.
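Combining the two ideas above — a multiplier per stat, plus loot-table-style named shorthand — might look something like this. The trigger shapes and table names here are hypothetical.

```python
def apply_triggers(stats, triggers, tables=None):
    """Illustrative sketch: a trigger is either (stat, amount), applying
    the stat change with a single signed multiplier, or a string that
    expands from a named table, loot-style."""
    tables = tables or {}
    for trig in triggers:
        if isinstance(trig, str):            # named shorthand, like "Fridge01"
            apply_triggers(stats, tables[trig], tables)
        else:
            stat, amount = trig              # e.g. ("Hunger", -20)
            stats[stat] = stats.get(stat, 0) + amount
    return stats
```

So "EatFood" could apply a single ("Hunger", -20) entry, or just reference a named bundle, instead of listing "HungerDown" twenty times.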

I'll have to see if there are any gotchas to that approach, and if the refactoring is as worth it as it seems. But I've been meaning to do an optimization like this for a while, so I don't think this need is going away soon.