Hey Folks! Hope everyone had a good weekend. We had spectacular Memorial Day weather, and did a sort of family/friends park and beach day. I think this was the first time I was at a salt water beach in...years? It was nice being back.
Back at work, I had a pile of business tasks to sort through before I could get back to the fun stuff. First of all, I had to review some more paperwork for moving the company to the US from the lawyers. Like with all legal documents, they take time to read and check each detail, so there's unfortunately no "coasting" through it. Fortunately, it was a bit more interesting than the usual work contract or EULA, since it was about company formation and operations. A first for me!
With that done, I caught up with Tiago on the mobile port of NEO Scavenger. And he's moving fast! I think he's as far now as the former dev was after almost a year. So this is promising! He's thinking we may even have Android-ready builds to test as soon as next week. Maybe not playable yet, but functional. Exciting!
I also had a call with the web developers today, and we reviewed their first draft of the homepage wireframe design. And I think you'll like where it's going. Relief from the wall o' text may be at hand :)
Finally, there were some other miscellaneous emails to catch up on. I think I was able to get back to work a little after 4pm today, leaving me about an hour to check AI sleeping interactions, and notice a bug. So far, it looks like there might be a problem in the way AIs choose how to reply to an interaction. I think I might have mixed up which AI chooses the reply. E.g. if Abner asks Bruce for a hug, Abner gets to decide what Bruce will do. Should be a simple switch, but I have to double check some things to make sure there isn't a gotcha in there somewhere.
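In rough sketch form, the bug boils down to asking the wrong AI for its preferences. The names here (`choose_reply`, the preference dict) are illustrative, not the actual prototype code:

```python
# Hypothetical sketch of the reply-selection bug and its fix.

class AI:
    def __init__(self, name, preferences):
        self.name = name
        # Maps interaction names to how willing this AI is to accept them.
        self.preferences = preferences

    def choose_reply(self, interaction):
        # The responder consults its OWN preferences to pick a reply.
        return "accept" if self.preferences.get(interaction, 0) > 0 else "decline"

def resolve_interaction_buggy(initiator, target, interaction):
    # Bug: the initiator decides what the target will do.
    return initiator.choose_reply(interaction)

def resolve_interaction_fixed(initiator, target, interaction):
    # Fix: the target decides its own reply.
    return target.choose_reply(interaction)

abner = AI("Abner", {"Hug": 1})   # Abner likes hugs
bruce = AI("Bruce", {"Hug": -1})  # Bruce does not
```

With the buggy version, Abner's fondness for hugs makes Bruce "accept"; with the fix, Bruce's own preference leads him to decline.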
Once that's done tomorrow, I think I should have sleeping working again. And I'll continue updating the rest of the physiological interactions I have so far (eating, pooping), and see how the whole crew behaves in practice.
Hey Folks! I think we're getting somewhere with the AI code. After another productive day of tweaking, fixing, and cleaning, I think the AI is starting to be both intelligent and predictable.
I had to revert a change I made yesterday, and make the AI start considering random new interactions only after all known options were exhausted. Trying to do both at once was causing the AI to do harmful things to itself. But once I did that, I was starting to see progress. I could spawn a lone AI, with no experience, and watch it start trying options out. Then, as it found random interactions that made a difference, it started using them deliberately to address needs it had. Neat!
However, once a second AI entered the picture, I started seeing some bugs. First, AIs were using "self" interactions, such as seeking privacy, on others. That didn't make any sense. Also, AIs were seemingly having conversations with others that were already busy doing something else, resulting in a confusing jumble of interactions for the player to follow.
I made some changes and added some checks, and by the end of the day, even multiple AIs were starting to make sense. I could spawn a lone AI, let it start experimenting with a "self" interaction, then spawn a new AI, and it would avoid bothering the busy other AI when trying interactions. And later, when both were free at the same time, they could use each other to try interactions.
So, as mentioned above, I think we're where we want to be with AI code. The next step will be to start updating the rest of the interaction data to make use of this new code (only about half of them were upgraded for this testing phase).
After the above was finally working, I was able to rip out the old decision logic, buggy relationships system, and other redundant code/data. I'm pretty excited to get cracking on it next week!
Speaking of next week, Monday is a stat holiday in the US (Memorial Day), so I'll be taking the day off to be with my family. So enjoy the weekend, all, and see you Tuesday!
Hey Folks! AI work continues today, as I focused on the AI's ability to try new ideas if they don't have experience solving their current problems.
Yesterday's work was able to get AIs to the point where they can scan their memory for interactions that could help their current priority, and gauge which other AI/items would help most as a target of the interaction. It turned out there were still a few bugs and improvements to be made with this system.
The biggest, by far, was getting the AI memory to store not only the opening interaction in a "conversation," but also all subsequent interactions until it was over. This way, the memory it draws upon later is informed by the results of the opening line.
E.g. if Brickell uses "SeekFriendship" on "Jones," there are some immediate benefits/drawbacks to her stats, and this is stored. The new code, however, will also take into account how Jones replies, and other back-and-forth in this chain, and store all results under this opening "SeekFriendship" datum.
The result, I'm hoping, is that each AI will make better decisions.
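A minimal sketch of that chain-aware memory, assuming each interaction's results arrive as stat-delta dicts (the class and method names are assumptions, not the prototype's actual code):

```python
# Chain-aware interaction memory: every result in a conversation is filed
# under the name of the opening interaction that started it.

from collections import defaultdict

class InteractionMemory:
    def __init__(self):
        # opening interaction name -> accumulated stat deltas
        self.results = defaultdict(lambda: defaultdict(float))

    def record_conversation(self, opener, chain_results):
        # chain_results: list of {stat: delta} dicts, one per interaction
        # in the back-and-forth, including the opener itself.
        for deltas in chain_results:
            for stat, delta in deltas.items():
                self.results[opener][stat] += delta

    def expected_effect(self, opener, stat):
        return self.results[opener][stat]

mem = InteractionMemory()
# "SeekFriendship" opens well, but the reply chain costs some mood.
mem.record_conversation("SeekFriendship",
                        [{"social": +5}, {"mood": -2}, {"social": +1}])
```

Later decisions then see the net effect of the whole exchange (social +6, mood -2), not just the opening line's immediate result.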
Once that was done, I started working on code to try new things in cases where memory/experience offers nothing. My first attempt was only partially successful, as it resulted in the AI trying new things over and over even though they knew it wouldn't work.
As it turns out, I was forgetting to verify the results/validity of the random trial. I should've checked to make sure it was a benefit, and skip it if not. So I've started fixing it up so it does that.
I also needed to move this random trial code into the loop where I check each priority for memory solutions. It turns out there's some important info in that loop that I'd be better off piggybacking on, rather than rebuilding/loading in a separate, subsequent loop. It shouldn't hurt the AI decision code to do this, but it'll affect the personality a bit. Namely, for each priority an AI tries to solve, it checks known solutions first, then unknown ones, then moves on to the next priority. (Previously, it would loop through all priorities and only check known solutions, then spot check some random ones.)
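The new per-priority order might look something like this sketch, where known solutions are checked first and random trials only fire if a simulated result is actually a benefit (all names here are illustrative assumptions):

```python
# Per-priority decision order: known solutions first, then vetted random
# trials, then move on to the next priority.

import random

def choose_action(priorities, memory, all_interactions, simulate):
    for priority in priorities:                           # highest need first
        known = memory.get(priority, [])
        if known:
            return max(known, key=lambda act: act[1])[0]  # best known solution
        tried = {a for acts in memory.values() for a, _ in acts}
        untried = [i for i in all_interactions if i not in tried]
        random.shuffle(untried)
        for candidate in untried:
            if simulate(candidate, priority) > 0:         # only if it helps
                return candidate
    return None  # nothing known or promising; idle

memory = {"hunger": [("EatFood", 10)], "social": []}
sim = lambda act, need: 3 if (act, need) == ("Chat", "social") else -1
```

With these inputs, a hungry AI goes straight for its known "EatFood" fix, while one with only a social need falls through to trying "Chat" at random, and skips "Nap" because the simulated result isn't a benefit.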
We're getting there, I think. I'm hoping to see it in action tomorrow!
Hey Folks! Managed to get a lot done today, and I think I'm out of the "bug bog" that was slowing AI development down.
First of all, I finally fixed the bug that was causing AIs to be able to use the same object simultaneously. This was a combo of cleaning-up the code and reworking the way "waiting" is ascribed to things during interactions. "Waiting" is more reliable now, and AI can better find things that are available to be used.
I also fixed a bug that caused AI to immediately use objects from across the ship if they couldn't pathfind to it. And while I was at it, I also forced AI to pathfind to the center of tiles instead of the closest edge, since this makes it easier to see what's going on. Along with some improvements to the way AI determines where to stand when interacting, now they are not standing atop each other when doing stuff, and it's much easier to follow their activities. Yay!
Once that was done, I could finally resume working on the "V2" AI I mused about earlier. This new AI figures out what to do next by using its memory, instead of some "hint" data I was putting into objects. The "hints" would tell the AI that "food will decrease hunger," which is fine, but I would have to remember to add every possible hint to every possible item interaction. And this setup would prevent AI from doing novel things and trying new ideas.
The new system, instead, looks at its own priorities first. Then, it gets a list of all interactions it knows of that could help. Finally, it checks the area for all objects that could accept those interactions, and figures out which one has the best effect.
So far, so good.
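As a rough sketch, the "V2" flow is a three-level search: priorities, then known interactions, then nearby objects scored by remembered effect. Field and function names here are assumptions for illustration:

```python
# "V2" decision flow: own priorities -> known helpful interactions ->
# best nearby object that accepts one of them.

class Obj:
    def __init__(self, name, accepts):
        self.name, self.accepts = name, accepts

class Crew:
    pass

def pick_next_move(ai):
    for need in sorted(ai.priorities, key=ai.priorities.get, reverse=True):
        helpful = [i for i in ai.known_interactions
                   if need in ai.memory.get(i, {})]
        best = None
        for interaction in helpful:
            for obj in ai.nearby_objects:
                if interaction in obj.accepts:
                    score = ai.memory[interaction][need]
                    if best is None or score > best[2]:
                        best = (interaction, obj, score)
        if best:
            return best[0], best[1].name
    return None

ai = Crew()
ai.priorities = {"hunger": 8, "rest": 3}
ai.known_interactions = ["EatFood", "Sleep"]
ai.memory = {"EatFood": {"hunger": 12}, "Sleep": {"rest": 9}}
ai.nearby_objects = [Obj("Fridge", {"EatFood"}), Obj("Bed", {"Sleep"})]
```

Here, hunger outranks rest, "EatFood" is the known fix, and the fridge is the object that accepts it, so the AI heads for the fridge.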
There's one problem, though. This requires that the AI have some experience already in order to draw upon it. And at the moment, all AI starts out dumb as a rock. I need a way to "prime" the AI with some knowledge before setting them loose upon the world.
One approach would be to fall back on the "hint" system. I know it works, and I could force the AI to try the new "V2" mode before the old "hint" one. However, that still requires me to put a lot of data into each interaction. Plus, I think this might undermine the "V2" system since the only interactions it would "learn" are those provided by hints.
Tomorrow, I'd like to look into other ways to get the AI trying new ideas on objects in order to increase their memory and list of options. Maybe they can simulate the interaction on a dummy object in their brain, check the result, and decide if they want to do it in real life? One cool aspect of this approach is that their mental model would be based only upon conditions they can see in the target, and it might not work out the same in reality (e.g. the target has a hidden emotional state that causes a different outcome).
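That "dry run in their head" idea could be sketched like this: clone only the visible conditions of the target, simulate the interaction on the clone, and act only if the prediction helps. Hidden conditions stay behind, so reality can surprise them. All names here are hypothetical:

```python
# Mental simulation on a "dummy" copy of the target, built only from
# conditions the AI can actually see.

def mental_model(target):
    # The AI only sees public conditions; hidden ones (e.g. mood) are omitted.
    return {c: v for c, v in target.items() if not c.startswith("hidden_")}

def simulate(interaction_effects, model):
    # Predict the stat change using only the conditions present in the model.
    return sum(delta for cond, delta in interaction_effects.items()
               if model.get(cond))

jones = {"friendly": True, "hidden_annoyed": True}
effects = {"friendly": +4, "hidden_annoyed": -6}

predicted = simulate(effects, mental_model(jones))  # looks like a good idea
actual = simulate(effects, jones)                   # actually backfires
```

The AI's model predicts +4, so it goes ahead, but Jones's hidden annoyance makes the real outcome -2, which is exactly the kind of mismatch that could produce interesting drama.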
Still working out the AI bugs today. I found quite a few minor issues in the AI code that were probably contributing to errors in their behaviors. Things like AIs clearing the wrong interaction when told to stop waiting, and reliance on a "TaskBusy" condition which was unreliable. There may still be some confusion between the AI's "busy" status and the presence of "TaskBusy" conditions. Probably the latter is redundant now.
I also finally bit the bullet and added code to explicitly check if an AI's target object has been deleted. If so, it'll remove the interaction targeting that object and move on to a new task. I was worried this would just put a band-aid on a symptom without solving the problem. And in fact, patching it up may have revealed another problem!
It seems that the AI is releasing the target object prematurely after it finishes walking over to it. This could be a regression caused by my tinkering today, or maybe a new bug. Whatever the case, it's a clear place to start tomorrow. And should hopefully allow me to start working on higher level AI stuff again.
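The deleted-target check from today amounts to a guard at the top of each interaction step. This is a hedged sketch, with the world/task structure assumed for illustration:

```python
# Before each step of an interaction, confirm the target still exists in
# the world; if not, drop the interaction and re-plan.

class World:
    def __init__(self, objects):
        self.objects = set(objects)

class Crew:
    def __init__(self, task):
        self.current_task = task
        self.needs_replan = False

def step_interaction(ai, world):
    task = ai.current_task
    if task and task["target"] not in world.objects:
        ai.current_task = None     # target was deleted; abandon the task
        ai.needs_replan = True     # pick a new task next tick
        return "replanned"
    return "continuing"

world = World({"toilet_1"})
ai = Crew({"target": "food_packet_3"})  # another AI already ate this packet
```

Instead of waiting forever on a food packet that no longer exists, the AI abandons the task and queues up a re-plan.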
Hey Folks! Hope everyone had a good weekend. This was perhaps our most normal one yet here in Seattle. Mostly household chores, errands, and even some family outings! We're looking forward to more like this.
Back on the prototype today, I resumed work on the new AI code. By the time lunch came, I had it mostly doing what I was aiming for. Namely, it was keeping an internal record of which interactions had what effect on certain stats. That, and for each visible condition on the target of the interaction (a.k.a. "them"), it tracked the individual effects.
My intention is for this to help AIs determine not only which interaction they should be trying next, but also which target is best to try it on. I want the AI to come up with some sort of score for each possible move, and make it choose that. And the way it records data, it'll learn a bit as it goes, hopefully forming something that appears to be a relationship.
E.g. Sreyovich always avoids socializing with Mann, and instead prefers chatting with Brickell. One day, however, Brickell is really annoyed at something, and snaps at Sreyovich. Srey logs this and it starts affecting his choice of whom to socialize with, possibly preferring Mann or that new crew member that joined at the last port.
Before I could go much further, though, there was a bug to solve. My AIs would occasionally get stuck waiting forever, usually trying to eat a food packet. It seems they may be trying to access the same food packet as another AI at the same time, and one of them gets it while the other keeps waiting for a packet that no longer exists.
There are a lot of ways to deal with this scenario, including some colorful drama. But the tricky part is detecting when it happens so the AI can act accordingly. Ideally, I will find a way to avoid this bug before it happens, so AI don't unrealistically wait for nonexistent items. But worst-case scenario, I could also have them periodically check that their target is still there, and move on if not.
Once that bug is solved, I should be able to watch a large crew go at it for a while, and dump a log of their memory "matrix" for review. It should give me a better idea of how the AI could work with that memory to make better (i.e. more entertaining) decisions.
Still plugging away at the new AI feature. The exact workings are a bit hard to nail down, but I'm starting to see a few neat opportunities.
Right now, I'm trying to setup a sort of needs vs. interaction memory for each AI that helps them decide who to interact with, and how, in order to satisfy their needs. On one hand, I want it to help them discover new things to try, and on the other, I want them to remember what works so they can do it again if they need it.
The system works top-down, starting with the AI's specific need from the existing stats I have in place (e.g. hunger, security, etc.). Then, I want the AI to narrow down the list of potential interactions to try that will satisfy that need. And finally, I want the AI to compare that list of interactions against known and newly discovered tools/AIs to see if any of them match.
My hope is that they'll be able to do this top-down search in a way a lot like you and I do, allowing them to adapt to what's available. And so far, this seems like it should be doable without too much building.
One of the cool opportunities I'm seeing is that I can use some of the item crafting properties from NEO Scavenger to tag objects in this game. NEO Scavenger already had a pretty good basis for describing item functionality with terms like "rigid," "flammable," and "liquid." If I copy these over and make them into conditions, like all the other stats in the game, I can use these in the existing interaction/trigger system. This could allow me to start tracking things like "items with the liquid and hydrator conditions tend to quench thirst" while "items with solid and absorbent don't quench thirst." AIs can start analyzing an item based on properties and their previous experiences to decide if a new item is worth trying.
Another cool opportunity that emerges from this is that I may not have to specify possible interactions for every object in the game. Instead, I may be able to simply define relevant properties on every item, and the AI may be able to figure out which interactions would apply. E.g. instead of AIs only ever sleeping on beds, they may discover a couch or even a stray pillow has the "cushion" and "flat" properties, and try sleeping on those instead. It could end up just being silly, or it could be really interesting.
The trick is going to be getting the AI to see this info. It's all floating in front of me right now, and I'm trying to organize it in a way I can use it. My brain's going in circles, though.
Fortunately, the weekend is upon us, so perhaps a break will help. Have a good one, all!
Hey Folks! I finally got the new interaction data format working, and was able to start testing again.
The good news is that it seems to be working as designed. I'm now seeing the upgraded stat counter values change accordingly, including the poop relief of -25 units. (Basically, clearing a full day's worth of food from the bowels.)
The bad news is, I'm starting to notice problems in my AI decision-making. AI still seems to bee-line for the food despite having bigger priorities in their list. And it seems to be stemming from the first draft of the relationship system I added way back.
Basically, whenever an AI is deciding what to do next, they check their current priority list for top issues. Then, for each in turn, they check all shipboard ConditionOwners (COs, or basically interactive objects) for interactions that could satisfy their need. Whichever one has the best combo of high priority and high reward wins, and they walk to that. So far, so good.
However, the way the AI is calculating the reward of the interaction is a bit wonky. I have a relationship object for each pair of COs in the world which is basically their mental image of each other. They decide whether to try an interaction based on this relationship. But the values for these relationships aren't really balanced. It's easy for one relationship to blind the AI to all others until it cannot be chosen anymore. E.g. raiding the fridge until it's empty, despite being super tired or needing to poop.
The solution, I'm thinking, is to make the relationship more about properties of the target, not just the target itself.
For example, the AI will have a list of conditions it knows about, like "Malfunctioning," "Kitchen," "Infirmary," and "Fridge." If it successfully gets food from the Kitchen Fridge, both "Fridge" and "Kitchen" get a bump in their score. If the AI fails to get food from the Infirmary Fridge, "Infirmary," and "Fridge" get docked points. Over time, the AI starts to build a mental model of things that work vs. not, so they might end up knowing that "Fridge" can be hit or miss, but things like "Kitchen" increase their chances of finding food, while "Infirmary" reduce them. They may still try a non-kitchen fridge in a pinch, but will definitely prefer the one in the kitchen if they have one.
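A minimal sketch of that property-level scoring, assuming each object exposes its condition tags as a set (class and method names are illustrative):

```python
# Property-based relationship memory: successes bump every condition tag
# on the target; failures dock them. A candidate object is scored by
# summing the learned scores of its tags.

from collections import defaultdict

class PropertyMemory:
    def __init__(self):
        self.scores = defaultdict(float)   # condition tag -> learned score

    def record(self, tags, success):
        for tag in tags:
            self.scores[tag] += 1.0 if success else -1.0

    def score(self, tags):
        return sum(self.scores[t] for t in tags)

mem = PropertyMemory()
mem.record({"Kitchen", "Fridge"}, success=True)     # got food here
mem.record({"Infirmary", "Fridge"}, success=False)  # struck out here

kitchen_fridge = mem.score({"Kitchen", "Fridge"})
infirmary_fridge = mem.score({"Infirmary", "Fridge"})
```

"Fridge" nets out to zero (hit or miss), but "Kitchen" vs. "Infirmary" breaks the tie: the kitchen fridge scores 1.0 and the infirmary fridge -1.0, so a hungry AI prefers the kitchen but can still try the infirmary in a pinch.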
There's more to it than that. I think it'll need to account for specific instances of objects, like "Jones." Things like "Man" and "Old" are good info to work from, but for drama to take place, the AI is going to need relations to specific people, not just people categories.
I'm also thinking I might have to tweak the way scores are assigned to each value so that it accounts for multiple interactions in a chain. So if an AI starts a conversation with "Jones" that results in a blow-out argument, that initial interaction should get colored with that negative data to reduce the desire to try it again in the future.
I think. This is pretty heavy stuff, and pretty theoretical. I'll have to be careful not to fall too far down this rabbit hole :)
Got side-tracked today with website stuff. The Jibe is starting to map out the new home and games listing pages for my site, so we're working on what needs to be a part of each of those. And I wanted to give that the time it deserved to make the new site more interesting, engaging, and effective.
I also had a bit of research to do about setting up the new business, and whether I'll want a registered agent or not. So far, I'm thinking I will, since it keeps my personal and business addresses separate, and also allows me to keep the same address if I move house. Plus, the registered agent fee is actually a bit cheaper than equivalent PO Boxes.
I started researching an alternative press key handling service, too, since I may switch from dodistribute() on the new site. I've heard about Promoterapp.com, and I'm checking out Keymailer.co. Just wondering if these will make my life easier, as well as whether they'll be able to amplify my marketing efforts when the new game announcement is official.
Finally, I solved a few minor bugs today in some code, related to the interaction refactor I've been working on. My triggers needed to handle cases where there are no limitations (i.e. null settings) and should always pass. Plus, there seems to be an array that's not initialized correctly, causing a null-pointer error.
Sorry for the dull news day. Believe me, I'm hoping to get back to the code tomorrow. And more to the point, having something interesting to show soon!
Hey Folks! I had a whole day of dev today! It's been a while since I could say that, and it was a nice change. And probably a good thing, too, as the day involved a pretty major data refactor.
I decided to move forward with the interaction refactoring. This meant changing the way interactions list:
Conditions to apply to Us
Conditions to apply to Them
Conditions We must have/not-have to make interaction available.
Conditions They must have/not-have to make interaction available.
List of items to add to Us
List of items to add to Them
List of items to remove from Us
List of items to remove from Them
List of items to transfer from Us to Them
List of items to transfer from Them to Us
As you can see, quite a lot of data per interaction. And the way it was written, there was a lot of redundancy. (Like listing the same thing multiple times in a row to stack its effects.)
The new data format makes better use of things like Condition Triggers and Loot specifications, which each can list multiple things. I can just say "5x this plus 3x that" instead of listing the same thing 5 times in a row. It also lets me do some randomization, though I'm unsure if I'll use it since that may complicate things too much.
Changing the code wasn't too hard. A few hours of tracking down all the references and it was done. Even the GUI data editor wasn't too hard to update.
However, the data is taking a while. I have something like 60 interactions in the game right now, and each requires some manual data-entry to convert. I briefly considered trying to write code to automate it, but I think it'd be a wash in terms of effort. Plus, doing it by hand means I get to review all the data I haven't seen in a while, to refamiliarize myself with it. (E.g. I can see that most social interactions get 1-5 stat points and a few major ones get 10, so this helps me continue adding more with consistent values)
I'd say I'm 80% done with the conversion now, which is further than I expected to be by the end of the day. Hopefully, I'll be able to test this out tomorrow. The results should let me create new ship items more quickly, and with more options!
Hey Folks! Hope everyone had a good weekend. Ours was mercifully light on chores and errands, for once. We even socialized with outsiders. Almost like normal people!
Back in space, I tried to focus on getting the toilets working, and I think they're sorted. It's a simple interaction for now. AI decides they need to poop, seeks a toilet, and for now, the toilet always allows pooping, so they proceed. There's room for variation there, of course. Things like a malfunction, or health condition preventing poop. But this was mainly about getting another physiological need operating, and that's done.
After testing a ship out with 4 beds, 2 stocked fridges, and 2 toilets, I was starting to see some AI patterns. And my first obstacles.
First of all, AI loves food. Like, no matter what is highest on their priority list, if food is in the top 3-4, it's what they go for first. I know you're probably thinking, "duh," but this isn't working as designed. Something is causing AI to think satisfying a greater need isn't as important as a lesser need, which is concerning.
One possibility is that the various interactions have inconsistent hint data. Each interaction has a list of things it promises to an AI. What they get depends on whether the interaction succeeds. But for the purposes of deciding what to do next, the AI needs to know which interactions might give them what they want. And whichever one promises the most for a high priority wins. So maybe food is promising too much.
It could also be something less obvious, so I'll need to debug to know for sure.
A second thing I noticed had to do with the data editing. When I specify the results of an interaction, I list the condition triggers that are applied. They'll be things like +1 achievement stat, or -1 food. Provided the trigger passes its test, the specified condition changes by the specified amount.
The problem with this is that adding/subtracting something like +5 to a stat means I have to either:
1) Create a condition trigger that's +5 to the stat, or
2) List the +1 stat condition 5 times
And if I want to subtract 5, I need a separate copy that's -5 or 5 copies of the -1. And ditto for +/-10, 15, 20, etc. And when you start to do this for each stat, you end up with long lists of almost redundant data and bloated condition trigger definitions.
I'm thinking a better way to do this is to just have a single condition for each stat and let the condition trigger multiply it by +1, -1, +5, -5, etc. as needed. This way, the "EatFood" interaction can just apply a "Hunger x-20" instead of listing "HungerDown" 20 times.
And in fact, as I was looking this over, I realized this is basically a copy of the new loot system I put into place. E.g. "Fridge01" is shorthand for 10-30 yellow food packets, so I only have to list "Fridge01" in the data, and the game knows that's referring to a loot table entry. I should be able to do this for conditions, as well, and maybe that'll simultaneously give me lots of versatility, reduce data complexity/redundancy, and reuse code.
I'll have to see if there are any gotchas to that approach, and if the refactoring is as worth it as it seems. But I've been meaning to do an optimization like this for a while, so I don't think this need is going away soon.
Hey Folks! As some of you have no doubt noticed from the glaring red box at the top, we have (had?) an email problem.
A short while ago, Hotmail users stopped receiving emails from both the site and my personal email address. No error messages. No bounced email warnings. No emails in junk folders. Just emails sailing unannounced into a digital black hole.
My initial attempts to tackle the problem all failed, largely because I couldn't identify the problem. The bluebottlegames.com domain seemed to be clear of any spam blacklists. So did my mail server's IP. I tried checking Hotmail's site to see if they had any tools to debug my issue, but asking Microsoft a question is like interrogating a mountain. I did find a direct contact form at one point, and actually received a prompt reply! Unfortunately, it amounted to this:
Our investigation has determined that the above IP(s) do not qualify for mitigation.
I'm not sure what to do with that information. Does this mean there is no problem? That the problem won't be dealt with? Correction, I am now interrogating a mountain that replies in riddles.
Fortunately, I blindly stumbled upon a step which seems to have helped. I queried my own personal IP against any blacklists, and it turns out it was on a few. Note, this is my house's dynamic IP address. The "originating IP" in the email, not the outgoing mail server's IP. Sheesh.
Fortunately, one of the blacklists has a self-serve tool for de-listing an IP for, I guess, people in my situation who don't control our dynamic IPs. I also made another change to my mail client to ensure the outgoing server was in-line with my domain's designated sender list.
Even more fortunately, it appears this may have done the trick. I saw my first few emails appear in my Hotmail junk folder, which is an improvement! And after some flagging as "not spam," the rest were appearing in my inbox again.
Is it solved? Hard to say. I'm going to leave that red box up there over the weekend just in case. It'd stink to be a Hotmail customer waiting for a registration email or reply from me only to never get it and not know why.
After wasting most of my day on stupid email issues (stern glance at you, Microsoft), I did get a bit of game dev done. Namely, I finished my emissive model tests and know how I can fake that adequately with a "cookie" spotlight if I need to. Until I know exactly how to proceed with graphics, though, I decided to turn my attention back to gameplay.
So I tackled two of the more annoying issues in my gameplay that have been around for a while. First, I sorted my AI priorities by highest need, rather than age. I think this is working now, though the way AI evaluates goals seems like it'll need some tweaking.
And, I also adjusted my ship-building mode to have depth-sorted clicking when deleting parts. Previously, it would just delete the first object it detected under the mouse, which unfortunately sometimes deleted floors and structure before furniture, lights, or walls. Now, it appropriately checks the depth of all objects under the mouse and deletes from top to bottom.
They're fairly minor code changes, but they make a big difference in predictability.
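The depth-sorted deletion is small enough to sketch in a few lines, assuming each object under the cursor reports a depth value (names are illustrative):

```python
# Depth-sorted deletion: of all objects under the mouse, delete the one
# drawn on top, so floors don't vanish out from under furniture.

def object_to_delete(hits):
    # hits: list of (name, depth); higher depth = drawn on top.
    if not hits:
        return None
    return max(hits, key=lambda h: h[1])[0]

under_mouse = [("floor", 0), ("wall", 1), ("light", 2)]
```

Clicking this stack deletes the light first, then the wall, and only then the floor, which matches what a player would expect.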
Next week, I hope to finish adding toilets and maybe another physiological need/item pair, and start watching to see how crew balance emotional and physical needs. At that point, it might be good to start exploring ship duties and how those fit in. (E.g. repairs, cleaning, operating ship equipment) And maybe, if I can figure out how to proceed, it'd finally be time to add a control panel to control where the ship is going? And how the user will interact with it?
Could be fun! Hopefully, I get to do more of that instead of sysadmin and biz admin for once :)
Still running at half-speed today, as I had to head downtown for a meeting this morning. But during the time I had at the computer, I experimented with different lighting techniques.
While looking into voxel meshes, I wondered whether it'd be possible for the user to set certain pixels on the surface to be bright colors, and have the game use those as sources of light. An example might be a computer screen, a neon sign, or vending machine front panel.
At first, I thought this was going to be nicely built-into Unity. They have an emissive material system which basically does exactly this. You supply a texture describing the lit surface, and the game calculates illumination on nearby objects.
However, there's a catch: it only works on static objects. And here, "static" unfortunately means placed within the Unity editor. I.e. by me, before the game ships. It won't work on items placed by users, which is kind of the whole point of the game. Basically, it's a lightmapping system, and it can only be run from within the editor. Bummer.
After poking around a bit, it seems the accepted way around this is to just add lights to the scene where the emissive surface is. Maybe I'm old-fashioned, but that seemed like a really inefficient way to go. But I guess modern engines and hardware make this more feasible than back in the day.
And what's more, there's a "cookie" setting on many light types that allows one to control the shape of the light on nearby objects. E.g. make the light conform to a rectangle, a line, or even more complex shapes like a dirty window. I've only started delving into this, but it might be a way to go.
Nothing fancy to show just yet, though. Hopefully soon!
No game dev today, folks. Sorry. Instead, I had a chat with my accountant, and then with some lawyers about the company. Nothing bad, just necessary business stuff to correspond with our move to the States.
I briefly dug into more voxel stuff in my spare time, though. Not enough to make anything interesting, but I'm trying to figure out if I can get certain voxels to emit light using the material settings. It seems like it should be possible, and I think it'd be very useful for things like control panels, signage, indicator lights, and other things. This way, we can have some really moody ship interiors.
But I'll need more time to dig into that. Hopefully, more on that tomorrow!
Hey Folks! Still digging away at admin tasks today, a lot of which involved phone calls with bank personnel. However, once that was done, I took a bit of a side trip today to look into another graphic tool.
So far, my ship pieces have been one of two things. The first is a flat sprite hovering over the floor with a rough column of mesh underneath to cast shadows. (Picture one of those single-legged cafe tables, or a really flat, wide tree.) This is for walls and other shadow-casters.
The second is just a flat sprite at floor level for, you guessed it, floor (and some items like beds). No shadows, no height.
From a top-down perspective, these look pretty good. Add normal maps and lights to them, and they actually look pretty great.
However, one problem which my contract artist and I discovered was that certain objects cannot cast a good shadow this way. When they start living together in the same scene, or interacting with a human that has height, the illusion breaks a bit. Things like chairs, grills/jail-cells, and other non-cube/perforated shapes are going to look a bit off, either casting not enough or too much shadow.
The traditional solution to this problem (and to games with lighting in general) is to make 3D models for things. They have volume, surface textures, and basically look good from all sides and cast appropriate shadows. However, one reason I prefer not to use 3D is that it's hard to create art for, both for me and for modders. I also like the pixel art look, so 3D models seem like a bit of a step backward.
I forgot about voxels, though. Or more accurately, I figured they were either too hard to set up or wouldn't look good.
Perusing some art styles yesterday, I realized the error of my thinking. It turns out that voxel editors are actually pretty easy to use. They integrate with Unity almost seamlessly. And in one case, MagicaVoxel, they're also free!
I decided to give MagicaVoxel a try, and this is what my first 30 minutes produced:
and here it is in the prototype:
That's not too bad. It's slightly more work than creating a pixel art tile (because it has depth), but the voxel editor is pretty easy. And it can still be loaded at run-time, so modders can create stuff. More importantly, I can create non-cube and non-plane objects with a pixel-art-like editor, and they'll exist in 3D in the game.
I won't say it's perfect. Voxels do have this "voxel" look to them, and this may be unavoidable. But at the very least, the top-down can look almost identical to a sprite with the advantage of accurate volume/shadow data.
And, perhaps dangerously, it makes isometric art attainable...
Hey Folks! Hope everyone had a good weekend. Ours was a bit rough, as a nasty cold has been sweeping through the family the past few days. It even has a hold on yours truly, though I'm waiting to see if I've got a weaker variant or if it's just taking longer to tackle me. Needless to say, the weekend was derailed a bit.
As it turns out, work was thrown off a bit, too. Though thankfully, that was less to do with illness and more to do with general business stuff.
I have a few contractor things to catch up on, and I'm trying to give those the attention they need to keep things moving. Tiago's plugging away at the mobile version, and has a basic framework running. He had some questions about the source code, so I did what I could to help there. It's also his first week of work with me, so we were getting some things sorted out with time recording, payment, etc.
The artist sent me a new iteration as well. It uses a new 16x16 tile size, which might be an interesting route to take. However, the more work I see from him, the more questions I have about how to proceed. I may need to take a few steps back and rethink my options re: perspective, resolution, detail level, palette, etc.
I'm hoping I can get back to coding soon. But some of these higher-level things should hopefully pay off in the long run. Especially as delegation amplifies the rate of work being done across all fronts!
Still plugging away at some more physiological modeling in the prototype.
In NEO Scavenger, I did a lot of research to come up with a simulation model for hunger, thirst, fatigue, and other bodily effects. I'd like to carry all of this over to the ship crew simulation in the new game. And I'd like to take that a bit further with things like waste management, hygiene, and other things which become critical in a closed system (i.e. spaceship).
So today, I learned that the average human defecates 1.2 times per day, passing 95-450 g of matter, and urinates about 1.4 L per day. All of that has to go someplace on a spaceship, and making sure those tubes and storage/processors keep running will be a big part of the fun. (For real, I think this will be fun.)
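Out of curiosity, those daily averages translate neatly into per-tick accumulation rates for a simulation. Here's a minimal sketch of the idea, using the figures from above; the class, threshold value, and tick size are hypothetical, not actual game data:

```python
# Sketch: turn daily waste averages into per-hour accumulation with a
# trigger threshold. Figures (1.4 L urine/day) are from the post; the
# class name and threshold are illustrative, not the game's real code.

HOURS_PER_DAY = 24.0

class WasteNeed:
    def __init__(self, daily_rate, threshold):
        self.level = 0.0                        # accumulated "pressure"
        self.rate = daily_rate / HOURS_PER_DAY  # gained per simulated hour
        self.threshold = threshold              # level that triggers relief-seeking

    def tick(self, hours=1.0):
        self.level += self.rate * hours

    @property
    def urgent(self):
        return self.level >= self.threshold

# Urine: 1.4 L/day; suppose the AI seeks a toilet at 0.4 L accumulated.
bladder = WasteNeed(daily_rate=1.4, threshold=0.4)
for _ in range(8):   # simulate an 8-hour shift
    bladder.tick()
print(bladder.urgent)  # True: 8 h * (1.4/24) ≈ 0.47 L >= 0.4
```

The nice thing about rate-plus-threshold needs is that they chain naturally into escalating conditions (mild discomfort, urgency, accident), much like the hunger stages described for NEO Scavenger.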
I've started creating a chain of conditions and trigger thresholds for poop, similar to the way hunger worked in NEO Scavenger. I'll probably do the same for urination. Then, I need to make some interactions to relieve these pressures periodically. And for that, we need a proper tool:
They're just quick-n-dirty sprites for space toilets. But I hope to have them hooked up such that AIs use them periodically during their tour of socializing and need-fulfilling. And the beds? Well, I just redid them to make them nicer than the old, super-long foam slabs that didn't make sense in zero-g.
Not quite running yet, so I can't tell how this looks. But it should be happening soon. (Exciting, I know!)
Anyway, hope you have a good weekend, and see you Monday!
Hey Folks! I tried focusing entirely on the prototype today, as I've been spread pretty thin on other business stuff lately.
It was a slow start, as I think I was in another of those development ruts where one doesn't know how to proceed. My AIs were not correctly getting items from containers, and fixing it meant a slight architectural change to make it work.
Eventually, though, I was able to get things moving. I changed some things around so that AIs can ask the ship for all items, even those inside containers. I had to suppress loot on certain containers if they were loaded from a save file vs. spawned as a new object by the user (to avoid redundant loot). I also had to rework the data format to know about parent/child relations between items, and fix some display issues with contained items appearing on top of containers.
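For the curious, the parent/child rework described above amounts to making the item hierarchy walkable, so the ship can report contained items too. A rough sketch of the shape of it (all names here are illustrative, not the prototype's actual code):

```python
# Hypothetical sketch of items with parent/child relations, so a query
# for "all items" also surfaces things nested inside containers.

class Item:
    def __init__(self, name, parent=None):
        self.name = name
        self.children = []
        self.parent = parent
        if parent is not None:
            parent.children.append(self)

def all_items(roots):
    """Walk the hierarchy depth-first so contained items are visible."""
    for item in roots:
        yield item
        yield from all_items(item.children)

fridge = Item("fridge")
packet = Item("yellow food packet", parent=fridge)
print([i.name for i in all_items([fridge])])
# ['fridge', 'yellow food packet']
```

Recording the parent link in the save data is also what lets the display layer know to hide (or offset) contained items instead of drawing them on top of their containers.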
Once this was done, however, my AI could happily wander over to the fridge, snag a yellow food packet from it, eat it, and move on to the next thing. The food packet would be consumed, and the AI would get the boost. Item transfers are still a bit sloppy, but functional enough to work with. Maybe enough to start testing more macroscopically.
In order to get a broader picture of what's going on, I decided to add a few more "procedural" items. Here, I mean "procedural" in the Hamlet's Hit Points sense: something the AI needs to accomplish for practical, physical reasons, as opposed to emotional ones. I want to see how AIs balance their emotional needs (which are already sort of working) with procedural ones. I have sleep and food, but there should probably be more.
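One simple way to let AIs balance emotional and procedural needs is to score each need's urgency and pick the most pressing one. The post doesn't describe the actual scoring, so this is purely an illustrative sketch:

```python
# Illustrative need arbitration: each need gets an urgency in [0, 1],
# and the AI pursues whichever is currently highest. Names and values
# are hypothetical, not taken from the prototype.

def pick_next_interaction(needs):
    """needs: dict mapping need name -> urgency. Highest urgency wins."""
    return max(needs, key=needs.get)

crew_needs = {"socialize": 0.3, "sleep": 0.6, "eat": 0.8, "toilet": 0.4}
print(pick_next_interaction(crew_needs))  # 'eat'
```

A pure argmax tends to cause thrashing between near-tied needs in practice, which is one reason commitment or hysteresis often gets layered on top, but it's enough to watch the crew's balance in broad strokes.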
So I added a toilet item. The normal map is a bit wonky, so I want to wait for it to look better before posting a pic. And it still needs game data to be usable. But I now have placeable toilets.
I also revamped the bed art, just because. 2x1 instead of 3x1, and a bit more like a zero-G sleep sack from NASA. These do look better than the old beds, but I'll save the pic until later :)
Tomorrow, I'll try to get some game data into those toilets, and decide on whether or not more procedural items would be best. It may also be a good time to start exploring things like oxygen and temperature within the ship, which are a bit more complex as they involve ship integrity. But they could also be interesting to work on!
Hey Folks! Today was all about visual style brainstorming, with a bit of web maintenance thrown in.
As mentioned yesterday, I'm still in talks with the contract artist about different visual styles for the game. To aid the discussion, I did some searching for games with compatible art styles that I liked, and made a document to show pros and cons. So far, we think we're on the same page as to what would fit, and he's going to mock-up a few more things.
I'm still thinking top-down, mostly due to the simplicity of the art. (No meshes needed.) But just for the heck of it, I decided to do a quick hack to see what rotating the camera in the prototype looked like:
I do love me some isometric art. Even with the lighting and z-fighting bugs, that's really tempting. We'll see, though.
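For reference, the classic 2:1 "isometric" (strictly, dimetric) screen projection used by many pixel-art games is just a small coordinate transform, which makes it easy to preview how top-down tiles would land under a rotated camera. This is a generic sketch, not the prototype's actual camera hack:

```python
# Classic 2:1 isometric-style projection: map grid coordinates to screen
# coordinates. Tile dimensions here are illustrative.

def iso_project(x, y, tile_w=32, tile_h=16):
    """Project grid coords (x, y) to screen coords for a 2:1 dimetric view."""
    screen_x = (x - y) * (tile_w // 2)
    screen_y = (x + y) * (tile_h // 2)
    return screen_x, screen_y

print(iso_project(0, 0))  # (0, 0)
print(iso_project(1, 0))  # (16, 8)
print(iso_project(0, 1))  # (-16, 8)
```

In a 3D engine like Unity, the equivalent is typically just angling an orthographic camera, which is presumably where the z-fighting in the screenshot comes from.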
The website thing was mostly a back-end fix to get an old update script working. It helps me keep the Steam keys available to new users, and needed some patches since the new website went live. Seems to be working now!
Hey Folks! A little bit of loot work today as I fixed the issue mentioned yesterday, and a lot of graphic style discussion.
First, the loot. It turns out the issue I was having was that none of my Condition Owners (COs, basically all active objects in the game) had their containers setup when initialized. Any time I tried to add a CO to another CO's inventory, it would silently fail because there was no container installed. After making each CO have a container by default, I was able to spawn a fridge with a random number of yellow food packets inside. So far, so good!
Obviously, I'll need a more sophisticated system in the future. Not all COs will contain other COs, and there may be rules involved. But for now, this lets me get back to testing.
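The silent-failure-versus-default-container bug described above reduces to a simple pattern. This sketch is illustrative only; the class names mirror the post's terminology but not the actual game code:

```python
# Sketch of the loot bug: adding an item to a Condition Owner with no
# container silently failed, so every CO now gets a container by default.

class Container:
    def __init__(self):
        self.items = []

class ConditionOwner:
    def __init__(self):
        # Before the fix, this was None, and add_item() quietly did nothing.
        self.container = Container()

    def add_item(self, item):
        if self.container is None:
            return False  # the old silent failure
        self.container.items.append(item)
        return True

fridge = ConditionOwner()
print(fridge.add_item("yellow food packet"))  # True
print(len(fridge.container.items))            # 1
```

Returning (or logging) a failure instead of silently dropping the item would also have made the original bug much easier to spot.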
The other major thing going on today was a discussion of art style with my contract artist. We talked about a number of things, such as:
Camera point of view
Complex Ship Parts (e.g. non-cube/non-square)
AI Mesh Setup
Font size and style
All of these variables are up for negotiation, but each comes with a trade-off. For example, straight top-down vs. isometric or skewed top-down is a big decision. The former is easier to support and mod, but it looks a bit flat. The latter looks awesome, but makes the art more complex and harder to maintain/mod.
That said, even the top-down perspective I have now is starting to run into issues. Things with complex shadows will still look a bit odd in this camera angle, since the sides of the object are still a cube. Imagine a chair, for example, with a light nearby. Or a jail cell door. Each would cast a detailed shadow in real life, but the game would only allow for a single blob shadow. Not a game-breaker or anything, but also not ideal, and maybe distracting/inconsistent with the rest of the art style's fidelity.
We also discussed whether the 32x32 tiles might be too big, with too much detail. Looking at some other games out there, lower res can actually appear nicer since the detail level is easy to maintain. It's not exactly "uncanny valley," but pixel art suffers from a similar issue. If the sprites are too large, it becomes really hard to make them look nice.
And the subject of fonts will come up soon, as we figure out resolution. If the fonts are super crisp against a gritty pixel art style, it can look weird. But if the fonts are too pixelated, they can be hard to read.
Both of us walked away from the chat with questions unanswered. But they're important questions, and we're going to do some experimenting to see if any solutions come up. Hopefully, we can arrive at a nice compromise between awesome-looking and sustainable!