AI and Containers

Hey Folks! Hope everyone had a good weekend. It was a scorcher here (relatively speaking), and it would appear sweating is in the forecast for the near term. The upstairs semi-attic-office is, shall we say, not the coolest room in the house :)

I managed to make some progress on the AI/container issue I found last week. In the end, I decided to go the bookkeeping route, despite my reservations. It wasn't as hard as I anticipated, and it's the best way to be sure Bruce doesn't agree to do something for Abner if Bruce doesn't have the goods to do it.
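The post doesn't show the actual code, but the "bookkeeping" idea could be sketched roughly like this: every promise reserves goods, and promises are only accepted if unreserved stock covers them. The class, method names, and the Bruce/Abner usage are all illustrative, not the game's real implementation.

```python
# Hypothetical sketch of the "bookkeeping" approach: promised goods are
# reserved, so an AI can't agree to hand over items it doesn't have free.
class Inventory:
    def __init__(self):
        self.stock = {}      # item -> count physically held
        self.reserved = {}   # item -> count promised to others

    def add(self, item, n=1):
        self.stock[item] = self.stock.get(item, 0) + n

    def available(self, item):
        return self.stock.get(item, 0) - self.reserved.get(item, 0)

    def try_promise(self, item, n=1):
        """Agree to hand over n items only if unreserved stock covers it."""
        if self.available(item) < n:
            return False
        self.reserved[item] = self.reserved.get(item, 0) + n
        return True

    def fulfill(self, item, n=1):
        """Hand over promised goods, releasing the reservation."""
        self.reserved[item] -= n
        self.stock[item] -= n

# Bruce can't agree to give Abner two rations if he only has one left:
bruce = Inventory()
bruce.add("ration", 1)
assert bruce.try_promise("ration", 2) is False
assert bruce.try_promise("ration", 1) is True
```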

As of now, I can see my AI trying out different containers in the ship, and successfully taking items from them. And in some cases, even consuming those items afterward! So I think it's working at a basic level.

Now the question is one of training. This AI would also be at risk of forming some pretty weird ideas about food-gathering. Things like searching toilets for food because it makes them feel productive, or not bothering with fridges after failing to find food in a toilet, because both fridges and toilets are containers and the toilet failed.

The good news is that I don't think this is a coding issue anymore. This is a data issue. My AIs have literally zero experience with the world, so their data points are, like, a list of 2. They'll try 1-2 things, and their world view will consist of 100% those two experiences.
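To illustrate the "list of 2" problem in miniature: with only a couple of experiences, a naive success-rate estimate is all-or-nothing, while even a weak prior (here, Laplace-style smoothing, which is just one possible fix, not necessarily the game's) keeps untried containers from being written off.

```python
# Toy statistics sketch of the tiny-sample problem described above.
def naive_rate(successes, attempts):
    return successes / attempts if attempts else 0.0

def smoothed_rate(successes, attempts, prior_hits=1, prior_tries=2):
    # Pretend every container starts with one imagined success in two tries.
    return (successes + prior_hits) / (attempts + prior_tries)

# One failed toilet search, one lucky locker find, fridge never tried:
experience = {"toilet": (0, 1), "locker": (1, 1), "fridge": (0, 0)}

naive = {c: naive_rate(s, a) for c, (s, a) in experience.items()}
smooth = {c: smoothed_rate(s, a) for c, (s, a) in experience.items()}

print(naive)   # the untried fridge looks exactly as hopeless as the toilet
print(smooth)  # the untried fridge stays mid-range instead of at zero
```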

One way I was thinking this could be solved is to just set up some AIs in an area and let them learn over time until some come out acting like relatively normal people. Then, I can save their knowledge as templates and load it into future AIs as a starting point.


The other way would be to manually generate some knowledge, either in the native AI knowledge tree format, or some special hints file that's easier to hand-edit.
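One possible shape for that hand-editable hints file: a flat JSON map of container type to expected food odds, converted into pseudo-experience on load so normal learning can still override it. The keys, the weighting, and the loader are all invented for illustration.

```python
# Sketch of a hand-editable "hints" file as seed knowledge (hypothetical
# format -- the game's native knowledge tree format is not shown here).
import json

HINTS = """
{
  "fridge":   0.9,
  "cupboard": 0.5,
  "locker":   0.2,
  "toilet":   0.0
}
"""

def load_seed_knowledge(text):
    hints = json.loads(text)
    # Convert each hinted odds value into pseudo-experience counts
    # (successes, attempts), so real experience can still override it.
    weight = 4  # how many pretend attempts each hint is worth
    return {c: (round(p * weight), weight) for c, p in hints.items()}

knowledge = load_seed_knowledge(HINTS)
print(knowledge["fridge"])  # (4, 4): starts out confident fridges hold food
```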

Since the former sounds more fun, maybe I'll reward myself by trying that first :)

Comments

matsy

Have you thought of adding a layer above individual containers? or maybe a better explanation would be grouping them by room?

You work out 'rooms', predefined by mapping constraints e.g. Enclosed area with a door = room.

Then each room has its list of containers, so the Bathroom would look like {Toilet, Mirror, Cabinet A, Toiletry Bag}, and the Kitchen would be {Cooker, Sink, Cupboard A, Cupboard B, Fridge}.

Obviously the AI wouldn't know it as a 'Bathroom' as we would, but merely as 'Room 1'. Assuming we don't keep all the food in the bathroom, then per your current system it should have a higher success rate in 'Room 2', the kitchen. So adding a higher level that favors the room with the highest success rate across its containers should steer random checks away from the toilet.

Obviously this is constrained by the fact that we need a rooms concept, but I'm assuming that will come, as you've alluded to toxic gas and radiation being emitted into the environment, so bulkheads (rooms) will be a must to deal with that issue.

The same system would work well for finding a toilet, or bunks that are underused.
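The room-grouping idea above can be sketched as a two-level pick: score each room by the pooled success rate of its containers, choose the best room first, then choose a container within it. Room names, container names, and the counts are placeholders.

```python
# Sketch of room-level scoring: (successes, attempts) per container,
# pooled per room. Purely illustrative data and structure.
rooms = {
    "Room 1": {"Toilet": (0, 3), "Cabinet A": (0, 1)},    # the bathroom
    "Room 2": {"Fridge": (4, 5), "Cupboard A": (1, 2)},   # the kitchen
}

def room_score(containers):
    hits = sum(s for s, _ in containers.values())
    tries = sum(a for _, a in containers.values())
    return hits / tries if tries else 0.0

def best_room(rooms):
    return max(rooms, key=lambda r: room_score(rooms[r]))

print(best_room(rooms))  # the kitchen wins despite one mediocre cupboard
```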

Fins

I'd vote for manual generation. And if possible, some ~dozen templates, random one being assigned to each AI. Representing "before the game started" experiences which humans in the ship are expected to have - after all, they are not new-borns and had some life before. So, like, AI Mary could be a "rich girl all her life before the game's events", who had her food served to her on silver plates and has very little idea where food comes from, while AI John could be a "lone father" in his past life, who raised 3 kids all by himself, and is very well aware about all sorts of containers various food can happen to be in. Etc.

... our lifestyles, mores, institutions, patterns of interaction, values, and expectations are shaped by a cultural heritage that was formed in a time when carrying capacity exceeded the human load. (c) William R. Catton, Jr

dcfedor

@matsy, I think rooms are a useful tool, but not without complexity. We kinda touched on this way back in January:

http://bluebottlegames.com/main/node/5007

I was reminded of it just now as I thought about potential situations that are room-like without being rooms. And there's also the situation where one area serves multiple purposes.

However, it might be possible to get spatial context to play a part in AI. One thing I'm considering is if items have some info on neighboring items or tiles. This way, containers near feces are considered different from containers near cupboards. (Crude example.)
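That neighbor-info idea could look something like this: a container's identity for learning purposes includes tags from nearby items or tiles, so a "locker near feces" and a "locker near cupboards" are learned about separately. The tag names and key format are invented.

```python
# Sketch of spatial context as part of a container's learning key.
def context_key(container_type, neighbor_tags):
    # Sort tags so the same surroundings always yield the same key.
    return (container_type, tuple(sorted(neighbor_tags)))

bathroom_locker = context_key("locker", {"feces", "sink"})
galley_locker = context_key("locker", {"cupboard", "stove"})

# Experience indexed by context key instead of bare container type:
knowledge = {bathroom_locker: (0, 2), galley_locker: (3, 4)}

assert bathroom_locker != galley_locker  # two distinct things to learn about
```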

For now, though, AI may be able to side-step some of these issues simply by virtue of having lots of moving parts. They may get road-blocked away from learning about fridges providing food based on a failed toilet (container) food check. But they may find their way back to the fridge later while trying to satisfy another need randomly (such as autonomy, self esteem, etc.). It would offset the failure in their memory, correcting the issue.
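One way that offsetting could fall out naturally, assuming memories are updated with something like a running (exponential) average rather than overwritten: one bad toilet check drags the belief down, and a later fridge success pulls it back up. This is a toy model, not the actual memory system.

```python
# Toy model of a later success offsetting an earlier failure in memory.
def update(estimate, outcome, rate=0.3):
    """Nudge the current estimate toward the latest outcome (0 or 1)."""
    return estimate + rate * (outcome - estimate)

belief = 0.5                 # neutral belief in "containers hold food"
belief = update(belief, 0)   # failed toilet check drags it down
low = belief
belief = update(belief, 1)   # later fridge success pulls it back up
assert belief > low
```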

We'll have to see. Things like privacy and security might require more deliberate data to be set, which may necessitate spatial info.

@Fins, I'll definitely use manual generation if organic/automatic learning fails. However, the going theory is that if I let AI loose overnight on a "trainer" ship layout, they'll each develop an AI knowledgebase that I can save out to data files and use these templates to "prime" newly-spawned AI in the actual game. And if it works as I hope, we get a wide range of socially dysfunctional AIs with less effort from me :)
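The "prime newly-spawned AI from saved templates" plan, with Fins's random-assignment twist, might look roughly like this. The file format, field names, and template contents are all illustrative.

```python
# Sketch of saving trained knowledgebases and priming new AIs from them.
import json, random

templates = [
    {"name": "sheltered", "knowledge": {"fridge": [1, 1], "toilet": [0, 0]}},
    {"name": "scrounger", "knowledge": {"fridge": [3, 4], "toilet": [0, 5]}},
]

def save_templates(path, templates):
    with open(path, "w") as f:
        json.dump(templates, f)

def spawn_ai(path, rng=random):
    with open(path) as f:
        pool = json.load(f)
    # Each new AI starts with a random "past life" as seed knowledge.
    return rng.choice(pool)["knowledge"]

save_templates("templates.json", templates)
print(spawn_ai("templates.json"))
```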

Dan Fedor - Founder, Blue Bottle Games