Hey Folks! Hope everyone had a good weekend. Friday's laptop failure caused a bit of stress in mine, but I think I've found a replacement laptop that'll do the trick. Should be here soon!
Getting back to the prototype, my first task today was to sort out the render order of the sprites in the ship editor. Normally, OpenGL has a thing called "depth testing" to help it decide which models are in front of others (and to reduce overdraw). However, no matter how much I tried, I couldn't get it to work. I could get variations of missing OpenGL renders and renders with nothing at all, but no sorted sprites. In the end, I just hacked together a 2D array to store each group of sprites based on their layer info from the JSON data. Good enough for now.
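For the curious, the layer hack amounts to bucketing sprites by their layer index and drawing the buckets back-to-front. A minimal sketch in Python (the prototype itself is Haxe/OpenFL, and all names here are invented for illustration):

```python
# Group sprites into layer buckets, then draw back-to-front.
# "layer" comes from each sprite's JSON data; lower = further back.

def bucket_by_layer(sprites, num_layers):
    """Return a list of lists: buckets[i] holds all sprites on layer i."""
    buckets = [[] for _ in range(num_layers)]
    for sprite in sprites:
        buckets[sprite["layer"]].append(sprite)
    return buckets

def render_order(sprites, num_layers):
    """Flatten the buckets so back layers get drawn first."""
    order = []
    for bucket in bucket_by_layer(sprites, num_layers):
        order.extend(bucket)
    return order

sprites = [
    {"name": "wall",  "layer": 2},
    {"name": "beam",  "layer": 0},
    {"name": "floor", "layer": 1},
]
print([s["name"] for s in render_order(sprites, 3)])
# → ['beam', 'floor', 'wall']
```

It's a brute-force substitute for the depth buffer, but it only costs one pass over the sprite list per frame.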
Once that was done, I was still faced with another issue: shadow-casting. The old tilemap prototype used a special line-of-sight algorithm to cast shadows. This worked well for game mechanics, but didn't look so hot when it came to objects that spanned multiple tiles.
So one thing I'm trying to figure out is if I can do that fancy shadow-casting you see in some games. Combined with the normal mapped lighting I already have, I think it could really improve the visuals.
To this end, I decided to start looking into framebuffer objects (FBOs). OpenGL allows you to render a scene not only to the screen, but also to one or more textures, and then run shaders and other operations on those textures as a group.
A popular use of this is to render all the normal maps to a single texture so that lighting need only be done once per screen pixel, instead of once per model per pixel (i.e., better performance). It can also be used for things like shadows, so I want to see how hard it is to figure out.
So far, the concept seems easy enough. Just set up some extra FBOs, tell OpenGL where to find them, and when it comes time to render, switch the render target from the screen to an FBO (or multiple FBOs, if there are different layers to combine). When done, switch the target back to the screen and have shaders combine them appropriately.
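To make the target-switching idea concrete, here's a toy Python model of the flow (not real GL code; all names invented): draws land on whichever target is currently bound, offscreen targets act as textures, and the final screen pass combines them. The real version would use calls like glBindFramebuffer and glFramebufferTexture2D.

```python
# Toy model of render-target switching: draws go to whichever target is
# currently bound, and the final composite samples the offscreen "textures".

class ToyGL:
    SCREEN = "screen"

    def __init__(self):
        self.targets = {self.SCREEN: []}
        self.bound = self.SCREEN

    def create_fbo(self, name):
        self.targets[name] = []   # each FBO owns a "texture" (draw list)
        return name

    def bind(self, name):
        self.bound = name         # subsequent draws land here

    def draw(self, what):
        self.targets[self.bound].append(what)

gl = ToyGL()
diffuse = gl.create_fbo("diffuse")
normals = gl.create_fbo("normals")

gl.bind(diffuse); gl.draw("ship colors")
gl.bind(normals); gl.draw("ship normals")

# Back to the screen: a shader would combine the two textures here.
gl.bind(ToyGL.SCREEN)
gl.draw(("combine", gl.targets[diffuse], gl.targets[normals]))

print(gl.targets[ToyGL.SCREEN])
# → [('combine', ['ship colors'], ['ship normals'])]
```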
Except, I can't figure out how OpenFL tells OpenGL to do this. I've seen a few examples out there, but I haven't been able to get them to work. Either the code relies on custom libraries I don't want to use, or it uses outdated OpenFL code.
In theory, I should be able to piece together working modern OpenFL code from these examples. But no dice yet. I've only spent a few hours on that so far, though, so maybe a fresh brain will help tomorrow. For now, have a good night, all!
So, ah, looks like my "trusty" old MacBook Pro may have gone to the big Apple Store in the sky.
As I was browsing a twitter feed during lunch, the video glitched out and froze the machine. Hard-resetting resulted in a glitchy-looking Apple boot screen, then the "Kernel Panic" message. Several hours of research and debugging steps later, I may have to call it: my GPU is fried.
Unfortunately, it's in that sweet spot of repair cost ~= replacement cost. And even then, I'd have a "new" MacBook Pro with the same possible issue. (Faulty nVidia 8600)
This laptop was how I made Mac and Linux builds for NEO Scavenger. I could probably find another way to do it with my desktop, but this MacBook also was one of those machines that ran OSX/Windows 7/Ubuntu with a triple boot setup, and that was a killer feature for working away from home. And besides, it helps to have a natural Mac for testing Mac-specific things (e.g. trackpad, hardware weirdness, etc.)
Soooo, I'm looking at options. I could try to get a new MacBook and set it up similarly. This'd likely be $1200-2000. I could also try to replace exactly what I had (2008 model), which will probably run about $500. There are also "Hackintoshes," which are non-Mac machines loaded with OSX. These tend to be cheaper, since one can buy non-Mac hardware. But they are also more dodgy in their support of OSX. Plus, they wouldn't allow me to test Mac-specific issues as well (and might even cloud my testing).
This was not the problem I hoped to be tackling today. I hoped to be working on shader and lighting stuff. I only barely got some of that done, but not much more to show. (I did happen to see some old OpenFL shader example code with new eyes, so to speak. I may have missed some tricks they did when I first saw them because I was still figuring out how OpenGL worked.)
Anyway, kind of a downer to end the week on. But hey, could be worse! Anyway, have a good weekend!
Hey Folks! Those of you following my Twitter account (@dcfedor) may have already seen hints of this, but it appears I've got the OpenGL lighting shader working in the ship editor!
Above, I've split the screenshot down the middle with flat-shaded HaxeFlixel sprites on the left, and the OpenGL normal-mapped shader sprites on the right. Essentially, the right side is a 3D scene with a dynamic light near the mouse and ambient light everywhere else.
I can place and remove prefabs, and both the HaxeFlixel and OpenGL sprites will add/remove to the scene accordingly. And I've even hooked up the OpenGL view matrix to the WASD scrolling in the editor, so both will scroll the same amounts, keeping things lined-up. Looks pretty neat so far!
That said, there are some issues. For one thing, you can see that the z-sorting is wrong, so some sub-structure stuff appears above the floor, walls are hidden by sub-structure, etc. It's weird that this is an issue in a 3D scene, but I think I may have the depth-testing feature turned off. So this may be solvable via a simple switch (each prefab already has a z-value for layer-hiding purposes, and I can just use that).
Secondly, performance is a bit rough. I'm getting maybe 21fps with debug code on Windows in this scene, and this scene is probably a bit simpler than the final scene complexity I'm picturing. Now, performance is usually slow at first in a project, and I have a lot to learn about optimization, but this was the best I could get with quick optimizations to the code. "Release" mode on Windows is 60fps, so that's encouraging. But it'll dip down into the 40s with just a few dozen more parts on-screen. I might have to learn a lot more about OpenGL to make this work, or else explore the OpenFL implementation of GL to see if I can leverage that instead. (My code kind of bypasses the "right" way to do things in OpenFL.)
Thirdly, this nice-looking lighting ignores shadows. Nothing is in place yet for walls blocking light. There are ways to do this, and with this shader framework in place, it might be doable. But again, not there yet.
Finally, while it looks nice, I'm not sure it looks that nice. The subtle shading is pretty, but I was expecting more highlights on the walls. This could be down to the way I hand-drew the normal maps, since they have razor-thin "edges" drawn in.
It's progress, but it'll warrant some careful consideration of the pros vs. cons.
I decided to stress test my normal-mapped OpenGL project today, to get an idea of how feasible it would be. For reference, my early estimates put ship complexity at about 200 sprites. That's a ship larger than the screen, about the scale of the Millennium Falcon for a crew 32 pixels in diameter. And that ship has skeletal beam sprites underneath floor and wall sprites.
The test setup used the same 512x512 cushion texture with diffuse and normal maps, and the point-light-plus-ambient shader I showed yesterday. Using the Neko VM, I was able to get about 300-350 sprites before the framerate dipped below 60. Not bad! Neko, though, is mainly for testing purposes due to its quick compile time; the VM makes it a bit slower than native apps.
Switching to a native Windows target in release mode, I was able to get 1350 sprites with 54fps. I think I can work with that :) Scaling it down to 500 sprites kept framerate pegged at 60, so I think it'll work.
Now the tricky part: how do I integrate this with the ship-building and simulation prototype I've been making in HaxeFlixel?
The stuff I've been doing is raw OpenGL. HaxeFlixel abstracts a lot of that to make things simpler, and to protect me from doing dumb things that break on certain platforms. And realistically, HaxeFlixel would probably run faster than what I could write in raw OpenGL. But the problem is that it doesn't support the arbitrary shader code that I've written for this lighting.
OpenFL (note, not GL, FL) is the medium-level code that HaxeFlixel is built upon. And it's also where I wrote my raw GL code. So in theory, there's a way to smush these together.
So far, however, I haven't seen a clean way to do so. I could probably get my ship code to initialize a GL sprite at the correct position and orientation, and shim a GL render call in there when the HaxeFlixel stuff renders. But I'm not sure how those would layer. (Probably HaxeFlixel on top of GL, at a guess.)
But then, I worry that my raw GL code is going to fall apart as soon as it goes to, say, Linux, or Android. In theory, it should be okay, since OpenFL is designed to catch this stuff. But I'm not sure if I'm using OpenFL correctly. Like, are there certain conditional compiler branches that OpenFL does behind the scenes when using its built-in sprite stuff? Branches that I'd be missing if I skipped OpenFL's sprite.shader technique, and just jammed vertex arrays and shaders straight into GL?
That's where I am now. And this is pretty virgin territory. Little, if any, documentation is out there on doing this in HaxeFlixel. I know of a few folks out there doing experiments with this, and even OpenFL/HaxeFlixel devs on GitHub are building better tools for it. But for now, I'm just kinda bobbing on a rickety scaffold I cobbled together from moldy blueprints and various sources.
Hey Folks! Hope everyone had a good weekend. I kept meaning to sit down and work on an RPG design presentation for a possible developer's talk, but I'd barely get into it before baby stuff came up. Such is parent life :)
On the other hand, I did manage to finally sort out a working lighting shader in OpenFL using OpenGL Views. Almost another day passed with little success, and lots of wrestling with transforms. Every time I suspected I'd found the error in the shader, the result would still turn out wrong. And all of the debugging tricks I could think of (rendering colors based on direction, distance, etc.) weren't yielding any answers.
Thinking the removal of view-space would uncomplicate things a tad, I fired it up...and it worked?
I was dumbfounded. I still am, a bit. I mean, it seemed like everything I was doing before was by the book. As far as I can tell, I must've been feeding the shader world-space data when it expected view-space, and now that the shader is set up for world-space, things work. In all likelihood, the culprit was the setup of one of my transform matrices in the app, or the TBN matrix in the vertex shader.
Whatever the case, here's what I've been striving for:
Hopefully, the improvements over last week's screenshot are obvious. We now have lighting which correctly highlights each surface from an appropriate direction, and it works regardless of the plane's orientation or position.
And, as a bonus, you can see a few additional lighting components at work. For one thing, the light has a color (yellowish orange), and it casts a warm color contrasting with a cooler, ambient blue-violet that lights everywhere else. There's also a falloff component which fades light intensity based on distance from the light, which helps illuminate areas directly under the light (which were dark without it).
This means I can have separate control over ambient lighting and point-source lighting, each with their own colors and intensities. Theoretically, additional lights are possible, too.
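As a sketch of the math involved (Python for illustration; the real version lives in the GLSL fragment shader, and this simplified version skips the normal-map term and assumes a linear falloff), per-pixel color is roughly ambient plus the point light's contribution scaled by distance:

```python
import math

def shade(base_color, ambient, light_color, light_pos, pixel_pos, radius):
    """Combine ambient and a distance-attenuated point light.
    Colors are (r, g, b) tuples in 0..1; falloff is linear for simplicity."""
    dist = math.dist(light_pos, pixel_pos)
    falloff = max(0.0, 1.0 - dist / radius)   # 1 at the light, 0 at radius
    return tuple(
        min(1.0, base * (amb + lc * falloff))
        for base, amb, lc in zip(base_color, ambient, light_color)
    )

# Warm orange light over a cool blue-violet ambient:
ambient = (0.2, 0.15, 0.35)
light   = (1.0, 0.7, 0.3)

near = shade((1, 1, 1), ambient, light, (0, 0), (10, 0), radius=100)
far  = shade((1, 1, 1), ambient, light, (0, 0), (200, 0), radius=100)
print(near)  # warm: red channel dominates near the light
print(far)   # beyond the radius, only the cool ambient remains
```

Separate ambient and point terms are what make the independent color/intensity control possible.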
Now, this isn't the end goal. It's a huge step forward, and the backbone of the proposed lighting system. However, there is at least one additional piece missing: composing multiple ship parts of various orientations into a scene with lighting.
For that to work, I can see two possible approaches. The first is to simply render each ship part as planar geometry with this shader attached, and appropriate textures, and let the GPU do all the work. This would basically be a 3D engine rendered from a 2D, top-down perspective, and is maybe overkill.
The second approach would be to render all the ship parts onto a scene with their relative positions and orientations, and then do one giant lighting calculation on the whole screen. For this to work, I'd have to render the color information to one screen-sized texture, render the normal map info to another texture, then combine the two in a shader with lighting info. Also, that normal map texture would need each sprite's normals adjusted for orientation, which means transforming their normal map colors based on orientation. This, too, is pretty complex.
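That "adjust normals for orientation" step is the subtle part: a normal map stores a direction encoded as a color, so if a sprite is rotated 90 degrees, its normal texels have to be rotated the same way, or the lighting will come from the wrong side. A sketch in Python (illustrative only, not the prototype's code):

```python
import math

def decode(rgb):
    """Map color channels 0..1 to a normal in -1..1."""
    return tuple(2.0 * c - 1.0 for c in rgb)

def encode(n):
    return tuple((c + 1.0) / 2.0 for c in n)

def rotate_normal(rgb, degrees):
    """Rotate the XY part of an encoded normal by the sprite's orientation.
    Z (how much the surface faces the camera) is unchanged."""
    x, y, z = decode(rgb)
    a = math.radians(degrees)
    rx = x * math.cos(a) - y * math.sin(a)
    ry = x * math.sin(a) + y * math.cos(a)
    return encode((rx, ry, z))

# A texel whose normal leans "right" (+x), on a sprite rotated 90 degrees,
# should lean "up" (+y) afterward:
leaning_right = (1.0, 0.5, 1.0)   # encodes normal (1, 0, 1)
print(rotate_normal(leaning_right, 90))
```

In the shader, this rotation would be folded into the per-sprite matrix rather than done per texel on the CPU.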
Before I can do that, however, I think I need to figure out how OpenGL handles multiple render targets, so I can have one for each scene texture (diffuse color, normal map, and any others I might need, like depth or occlusion info).
More on that tomorrow. For now, my brain needs a rest!
Just a quick update today as I need to head out in a few minutes.
A chunk of today was lost to an errand, but the time I did work was focused on memory optimization. Steve pointed out that NEO Scavenger was starting to consume 200+MB of memory when all the data was loaded, and that'd cause problems on some mobile hardware. We needed to look into trimming that fat.
It took a few hours of trial and error, and learning how Haxe/OpenFL deals with memory, but I think we found a solution. Namely, the XML parsing we do needed some explicit clean-up of XML.parse objects and strings when it was done.
That, and the encounters file needed to be split up. The file was originally larger than all other data files combined, and it was causing memory usage to balloon unnecessarily. Splitting that up, combined with the parser object nulls between each step, seems to have kept memory allocation under control. Current load is about 80MB.
So hopefully, crisis averted! I guess we'll have to keep an eye on this as we develop further, though.
That's all for this week. Have a good weekend, all!
I think I'm making progress on the OpenFL+GLSL shaders. It was a close call, but by the end of the day, I finally wrote my own normal map vertex and fragment shaders, and they seem to be displaying like they should.
If nothing else, I'm learning OpenGL, GLSL, and a lot about 3D display math. For example, one thing my previous attempts did wrong was to use deprecated OpenGL uniforms. Namely, things like gl_Normal, gl_NormalMatrix, gl_ModelViewMatrix, etc. Silly me thought, "The shader already has those values? That's really handy!"
No, no it doesn't. Instead, I needed to feed my shader these values, or else piece them together within the shader. So when I finally figured that out, I was able to make some headway on a shader that uses actual data instead of unknown values.
And the result?
So what we're looking at here is a 512x512 plane with a cushion texture applied to it, and a light where the crudely-drawn white starburst is. This cushion texture has a "normal map" (which is a texture that sort of defines surface bumpiness), and the light highlights and shadows the bumpy parts.
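The core of that highlight/shadow effect is a single dot product: each texel's normal is decoded from the normal map's color, and the surface gets brighter the more directly it faces the light. A minimal Python sketch of the idea (the real thing is a GLSL fragment shader):

```python
import math

def lambert(normal_rgb, light_dir):
    """Diffuse brightness for one texel: decode the normal map color to a
    unit normal, then take max(0, N . L)."""
    n = tuple(2.0 * c - 1.0 for c in normal_rgb)     # 0..1 color -> -1..1
    length = math.sqrt(sum(c * c for c in n)) or 1.0
    n = tuple(c / length for c in n)
    return max(0.0, sum(a * b for a, b in zip(n, light_dir)))

light = (0.0, 0.0, 1.0)  # light shining straight down at the texture

flat_texel  = (0.5, 0.5, 1.0)   # normal points at the camera -> fully lit
steep_texel = (1.0, 0.5, 0.5)   # normal leans hard sideways  -> unlit
print(lambert(flat_texel, light))    # 1.0
print(lambert(steep_texel, light))   # 0.0
```

Texels whose normals face away from the light clamp to zero, which is what reads as "shadow" on the back sides of the bumps.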
A few days ago, I had a version of this working almost flawlessly, with one drawback: it didn't change if the cushion square moved around or rotated. The lighting would always come from the same direction (like it was drawn permanently onto the cushion.)
This newer shader updates the highlights and shadows based on both the position and rotation of the cushion. And this is important because I'd like to try ship parts that can be rotated and placed and have the lighting react accordingly. Without this, I'd probably have to have 4 separate copies of each ship part, with normal maps for each rotation (costly/tedious to draw, and uses lots of memory).
Now, this isn't perfect. For example, there's an area directly underneath the light that should probably be lit, but seems to be in shadow. Also, the "peaks" of the cushions seem to have a rough texture to them. I'm thinking this may be a problem with the way I'm combining cushion color with the light component, or else the light is so close to the cushion that individual threads in the cushion material are causing shadows.
It's a start. I've been "wasting" a lot of time on this, but I'm still optimistic the outcome will be worth it. Especially since figuring this out now should mean I don't have to mess with it much again later, as it'll be the basis for most lighting effects used in the prototype. Just gotta figure this beast out...
Hey Folks! Today started off with a bit more tablet parsing work. And later, more shader investigation.
I think I've finished putting all the data type and parser code together, and Steve is going to plug those into his encounter processing to see if things run normally. Depending on how that goes, there may be more stuff I can do to help shoulder the burden of the mobile port. 4 years of code is a lot to convert!
Once that was done, I turned my attention back to shaders. I'm still having trouble wrapping my head around the transforms necessary to do normal-mapping. Though, as I'm discovering, a lot of the problem lies in differences between conventions in one language vs. another, and from site to site.
For example, some sites list 4x4 transform matrices as having their translation components in the final column, while others list them in the final row (i.e. transposed). Even after looking up column-major and row-major, I was unsure which is which. Thankfully, OpenGL documentation clears this up by explicitly stating that the translation components live at indices 12, 13, and 14 of the 16-element matrix array. No confusing that!
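To make that concrete, here's the column-major layout as a flat 16-element array, sketched in Python (this matches the convention you'd use when handing a matrix to glUniformMatrix4fv without transposing):

```python
# Column-major 4x4 matrix as a flat array: element at column c, row r is
# m[c*4 + r], so the translation column occupies indices 12, 13, 14.

def translation_matrix(tx, ty, tz):
    m = [0.0] * 16
    m[0] = m[5] = m[10] = m[15] = 1.0   # identity diagonal
    m[12], m[13], m[14] = tx, ty, tz    # translation lives in column 3
    return m

def transform_point(m, p):
    """Multiply a point (x, y, z, 1) by a column-major matrix."""
    x, y, z = p
    return tuple(
        m[0*4+r] * x + m[1*4+r] * y + m[2*4+r] * z + m[3*4+r] * 1.0
        for r in range(3)
    )

m = translation_matrix(10.0, 20.0, 0.0)
print(transform_point(m, (1.0, 2.0, 3.0)))   # (11.0, 22.0, 3.0)
```

A "row-major with translation in the last row" layout produces the exact same flat array, which is why the two descriptions are so easy to confuse.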
Another example: the initial normal map tutorial I followed created a "ModelViewMatrix," while other tutorials had separate "Model" and "View" matrices. I later learned that the combined ModelViewMatrix is more common, but at the time, I couldn't for the life of me figure out how I was supposed to tease these apart. (In the process, though, I learned how OpenFL's Matrix3D.Create2D() function works, which is basically a translated top-down view matrix pointed at 0,0.)
Combine the above confusion with a general rustiness (or lack of experience) with matrices, and we get a pretty wide gap between what I know and what I'm trying to do.
I still think it's worth it, though. Matrix math has haunted me for years, and this is pretty integral stuff to game dev. Maybe slightly less so for 2D games, but as per yesterday's example shaders, it can add some really juicy visuals if used cleverly.
I think tomorrow will continue this line of work. I feel like I'm making progress, albeit slowly. Learning how the 2D view matrix is made, how translation matrices look, and now, getting a better handle on tangent and binormal calcs, I think I'll be getting there soon. Just have to hang in there, brain!
In other words, shaders can do some pretty cool visual tricks. And since most desktops and many handheld/mobile devices have GPU hardware designed to use shaders, it's pretty fast, too!
In theory, Haxe/OpenFL has a way to display these shaders. And in practice, I've got this working no problem. (They have several helpful samples and demos to get you started.)
The tricky part, however, is getting this to play nice with HaxeFlixel. Right now, HaxeFlixel will support shaders that get applied to the whole camera. So this can be useful for post-process effects (antialiasing, scaling, blur, etc.).
However, it's harder to get these shaders per-sprite. And I think that's what I'm going to need if I want to try to apply lighting/highlights to individual ship parts. (An alternative to the grid-based lighting I was trying before.)
Although, now that I write about it a bit, maybe that isn't the case. Maybe I don't need to have individual sprites with shaders if I can render a whole scene to a texture, and feed that texture to a shader that selectively processes regions of the scene? E.g. one texture shows the ship sprites' actual colors, another texture shows those sprites' normals (bumps), and a third texture highlights areas where the shader should be active vs. ignored. This way, the shader would just process lighting for pixels in one area of the mask, and ignore other areas entirely (such as the GUI).
Hmm. This might save me from a lot of matrix transformation headaches.
Or, no. No it won't. I still have the problem of a rotated ship piece having normal maps that are pointing the wrong way. The shader would need to know that these ship pieces were rotated in order to light them correctly.
Anyway, the good news is that I've got a normal map shader already running. I can move it around and see the lighting on it change accordingly. The missing parts are:
1 - if that item gets rotated, the lighting doesn't behave correctly
2 - this is entirely an OpenFL GLView scene, and bypasses HaxeFlixel.
The first issue is a well-known problem, and it's just a matter of me learning how to transform my vectors appropriately in the vertex shader and pass that info to the fragment shader where lighting is figured out.
The second issue is more of an unknown. I know I can composite OpenFL (or HaxeFlixel) sprites on top of a GLView scene, but I haven't been able to have a GLView scene blend with them at all. I suspect it's doable by having the shader sample the game's frameBuffer or something, but I haven't learned enough about that to sling those around yet.
But at least I made progress today! It was damned frustrating, and my brain is starting to show its age. But I'm excited for the prospects of playing with shaders in my next game!
NEO Scavenger is now officially updated to v1.13! Since the test builds have been relatively stable, I've just finished updating the default builds to 1.13 on all sites. The "test" links are no longer necessary, and have been removed for now.
Changed SampleMod to be self-contained in one folder.
Changed old _readme.txt about modding to be stored in SampleMod's folder.
Changed loading screen to only show last line of log, and to tell user how to see whole log.
Fixed a bug in treasures with a variable output that would return one less than the specified max.
Fixed a bug that allowed the dome light to be used in making noise traps.
Fixed a bug that caused 4-week pickup truck extension at St. James to be free.
Fixed a bug that deleted arrows/spears that stuck in wounds as a creature died.
The major changes here are the sample mod and loading screen. The sample mod is now stored neatly away inside its own folder, instead of having some files and other folders littering the game's main folder. And the loading screen now shows only the last line of the log messages, as well as a tip on how to see the rest. Both of these changes should help avoid confusion in new users, and prevent mod users from reporting harmless "file not found" log messages.
The bug fixes for loot will slightly increase some treasure amounts when a variable number was returned, and dying targets will no longer destroy ranged weapons impaled in their corpse. Also, the St. James parkade should behave better now.
As always, if there are any issues with the new build, let me know on the forums!
Head down and shoulder to the wheel again. More data parser code.
It's pretty mechanical work so far, but not mechanical enough to be automated. There are little exceptions to each data type which require special handling. For example, the way headlines are defined in the data, they all share a common base item 7.0, and have their description and subgroup IDs updated based on headlines.xml. A similar thing is done for datafiles.
Next week, recipes among other things. And I know those will need special handling for the recipe hint scraps. So more busy work to come! Though, after a week of no reported bugs in NEO Scavenger v1.13, maybe I'll take some time aside to update all sites with the official build.
Until then, have a good weekend, and hope everyone's safe out there!
Most of the day was spent writing xml-parsing code for the mobile version. I added data types and parsers for items, hexes, factions, and treasures, and I'm getting started on creatures now. Just pure data parsing right now, so it rips all the values from the xml, converts to basic data types, and stores them in classes to match the data type. Nothing fancy, no logic, etc. Once all this is done, these data classes can be read by game objects to get the info they need to execute.
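The pattern is deliberately dumb: read attributes, convert to basic types, stash them in a plain data class. A Python stand-in for the idea (the real code is Haxe, and the fields here are invented for illustration, not NEO Scavenger's actual schema):

```python
import xml.etree.ElementTree as ET

class ItemData:
    """Plain data holder: no game logic, just typed fields from the xml."""
    def __init__(self, id, name, weight, stackable):
        self.id = id
        self.name = name
        self.weight = weight
        self.stackable = stackable

def parse_items(xml_text):
    root = ET.fromstring(xml_text)
    items = []
    for node in root.findall("item"):
        items.append(ItemData(
            id=int(node.get("id")),
            name=node.get("name"),
            weight=float(node.get("weight")),
            stackable=(node.get("stackable") == "1"),
        ))
    return items

sample = """
<items>
  <item id="7" name="water bottle" weight="1.0" stackable="0"/>
  <item id="12" name="rag" weight="0.1" stackable="1"/>
</items>
"""
for item in parse_items(sample):
    print(item.id, item.name, item.weight, item.stackable)
```

Game objects then read these data classes later, so the parser never has to know what any field means.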
I had a few false starts this morning, though. At first, I was trying to parse the data into game-ready objects. However, I realized later that Steve was doing this intermediate data object loading step so the parser wouldn't have to do anything complicated. Let the complex stuff happen later, once the data is loaded and stored. Makes sense, and I soon figured it out and corrected my contributions.
I spoke with Steve last night about progress on NEO Scavenger Mobile. He's currently bogged-down in data-parsing code (reading items, creatures, etc. from xml), and this tends to be more repetitive/rote work. Since he already has a few examples done, we decided I could speed up the porting process if I took this part over while he focused on more structural/front-end things. Even though I wrote most of the original code, Steve's more of an expert on mobile game dev, so I feel better having an experienced eye on the more system-critical design (e.g. graphics, UI, class structures).
So for the next little while, I'm going to be getting the mobile port loading the remaining data types while he gets encounters and hex map stuff working.
We're also hoping that as I get in there and work a bit, I'll start to notice other ways I can speed up the process without conflicting with his work. It'll mean a break from space prototyping for a bit, but I think it's a good cause :)
Hey Folks! Not too much to report today, as most of the day was spent researching shaders on OpenFL and HaxeFlixel.
First, why are shaders important? Basically, shaders are what allow games to do things like lighting, bump maps, and other interesting graphics tricks. Traditionally, they're used on 3D geometry like meshes and full-screen effects. However, they can also be applied to plain old bitmaps, which is what most 2D engines like HaxeFlixel use. Getting shader support in HaxeFlixel can provide the ability to make sprites look more interesting/attractive.
And as it stands, HaxeFlixel already supports shaders. But there's one major limitation: any active shader gets applied to everything in the scene (a.k.a. the camera). This is fine for things like a screen overlay, but in my ship example, where there are bits of walls and lights, it won't really work (without serious hacking behind the scenes). Still, it's easy to do, and I was able to apply shaders to thousands of sprites without any performance issues.
Later, I managed to get a hold of a pull request on GitHub which introduces some shader samples to OpenFL. Included is a lighting (i.e. normal map) shader, which is something I'm interested in trying with the ship prototype.
Getting it to run wasn't too bad. And I was even able to manipulate some things to apply it to sprites moving around the screen. Performance only gets bogged down around 11k sprites, which is great. And if I have a shader instance per sprite, it bogs down at 3k. Still impressive.
However, I had trouble when it came time to make each sprite show positional lighting changes, i.e., sprites behaving like the light is static in the scene as they move around it. I haven't yet figured out how to feed the fragment shader positional info per sprite, so they all render as if a light was in the same position relative to each sprite (e.g., they all have highlights on the left, all the time).
In 3D engines, I think this'd be handled by the vertex shader passing info to the fragment shader. However, I haven't had a lot of experience writing shader code. Particularly in OpenFL. There is one example out there, the HerokuShaders, which seems to do both vertex and fragment shading in OpenFL. So it may be worth taking a look at that to see if I can piece it together.
The end result, I'm hoping, is some way to render interesting lighting/shadow effects without needing to resort to my tilemap grid hacks. These visuals are not exactly critical to the gameplay, but I'd really like for my next game to be a bit more visually enticing. I envision inhabiting the ship to be an atmospheric experience, whether it be sitting quietly in the glow of the command console, navigating a dark corridor in a derelict, or even being blinded by a methane leak while gas-mining an outer planet.
I can probably do without lighting and just diffuse-color everything (like NEO Scavenger). I'm sure it'd be fine. But while I'm here at the root of the game's architecture, it'd be good to at least know what my options are!
Hey Folks! Hope everyone had a good weekend. I tried not to stress out too much about the dev issues over the weekend, so as to come back refreshed. I couldn't help it though, and some dev thinking snuck in :)
The good news is that I was able to get some performance gains this morning. It turns out that the bottleneck wasn't the item state updates or line of sight checks. It was actually the sprite rendering.
And weirdly, the framerate drop was apparent with as few as 300-500 sprites on the screen. I knew for a fact that HaxeFlixel could throw more sprites around than that, so I dug into things a little. There's a HaxeFlixel demo called BunnyMark which is used to demonstrate this very thing. And typically, native targets such as Windows should be able to achieve tens of thousands of sprites on-screen at once.
So I tried building that demo and running it, and sure enough, I was able to lob almost 2000 sprites at the screen before I saw framerate dip to 40 like in my prototype. I looked at the demo's code, and noticed that it didn't do anything too tricky. Basically, it added sprites to a FlxTypedGroup<FlxSprite> instead of a generic FlxGroup.
In my own code, I was adding arbitrary things to my draw groups. Sometimes sprites, sometimes whole groups of sprites. And I think this ruined HaxeFlixel's rendering optimizations. After refactoring the render code to use typed groups and just the necessary sprites, I got a pretty big improvement.
Also, I hadn't really considered debug vs. release performance since, in the past, that didn't yield much of a gain in Flash. However, HaxeFlixel's release mode blows debug performance away. Where 2000 sprites would start to slow down in debug, I saw no dip in framerate at all in release. Score!
I may still need to revisit the way I store and render things in the future. But for now, we can proceed with the prefab layouts!
And proceed I did. I started adding tools to insert padding to the layout on the left/right/top/bottom so I can scale-up the size to fit any ship layout. And then, I decided to try tackling a ship of the size I'm hoping for:
Yeah, it's ugly as sin. And unfinished. But the size is what I'm aiming for in this prototype: a Millennium Falcon or Firefly sized ship with 2-10 crew. And it runs at 30fps in debug, 60fps in release, without much more than cursory optimizations. That's promising enough for me to feel confident moving forward with this.
I'm still seeing some line-of-sight leakage through corners and diagonals. However, I think this will go away as I fill in the backsides of the walls, since many of the leaks happen at one-sided wall corners.
Also, those hacky wall sections aren't going to cut it when it comes to sloped and curved walls. I may need to experiment with larger prefabs of curves and slopes. And probably floor tiles to match them, so the lighting doesn't stick out behind walls via the floors.
And I should really get a few more items on board, and see if I can hook this layout up to the solar-system flight and planet landing modules to create a seamless play experience. I'm still sort of banking on finding the fun once these are all connected, so it'll be useful to know if that's true.
So much to do! But the good news is that I think I'm back into fun coding and design territory again for a while. Some juicy tasks ahead!
I've got the prefab-based tilemap working again. Looks like my problems were due to some bad calculations in the trimming/padding functions. Things like sign and rounding errors were causing null pointers in some cases. As of this morning, I was able to build ships from prefab items and then watch my crew walk around them. And lighting could be toggled on and off.
That's the good news.
The bad news is that it has lots of problems. Namely, the lighting looks bad on diagonal walls, and performance is pretty poor.
The lighting issue may not be fixable in the current system. Basically, the line-of-sight code spills through some "wall" tiles where one tile's corner meets another. And the stair-stepping pattern still appears. Even if I reduce the tile sizes and/or make the diagonals thicker, I'm still going to get this staircase pattern in the floor along walls.
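For illustration, here's one common mitigation for corner leakage in tile-based line-of-sight, sketched in TypeScript (the prototype itself is Haxe, and `diagonalBlocked`, the grid layout, and the wall flags are all hypothetical names): treat a diagonal step as blocked when both orthogonally adjacent tiles are walls, so light can't squeeze through the point where two wall corners touch.

```typescript
// Hedged sketch, not the prototype's actual code: a corner rule for a
// tile-based visibility check. `walls[y][x]` is true where a wall tile sits.
type Grid = boolean[][];

// A diagonal step from (x, y) to (x+dx, y+dy) is blocked if the two
// orthogonally adjacent tiles are both walls -- light may not pass
// through the point where two wall corners meet.
function diagonalBlocked(walls: Grid, x: number, y: number, dx: number, dy: number): boolean {
  if (dx === 0 || dy === 0) return false; // not a diagonal step
  return walls[y][x + dx] && walls[y + dy][x];
}
```

This doesn't remove the staircase pattern along diagonal walls, but it does stop the light spilling through touching corners.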
And on the performance side, things don't look much better. Rendering all those overlapping prefabs really takes a toll on the draw time. Each "tile" can have a skeletal substructure, floor panel, and possible wall piece all drawn in the same space. And in a 10x10(ish) sized area, this consumes a lot of drawing time.
And to make matters worse, each of these prefabs has an update cycle that tracks changing conditions on itself. It's unused at the moment, but I had envisioned using it to handle things like deterioration, heat transfer, etc. Even now, when each item has an empty task list and each one only updates once per second, just checking them all is slowing the game down.
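One way around paying for idle items, sketched here in TypeScript for illustration (the prototype is Haxe; `UpdateScheduler`, `SimItem`, and the task lists are hypothetical names, not the game's classes): only keep items in the update loop while they actually have pending work, so a ship full of idle prefabs costs nothing per tick.

```typescript
// Illustrative sketch: a scheduler that skips idle items entirely instead
// of polling every prefab every cycle.
interface SimItem { tasks: (() => void)[]; }

class UpdateScheduler {
  private active: SimItem[] = [];

  // Items register only when they gain work; idle items are never visited.
  enqueue(item: SimItem): void {
    if (item.tasks.length > 0 && !this.active.includes(item)) this.active.push(item);
  }

  // Runs all pending tasks and returns how many were executed.
  tick(): number {
    let ran = 0;
    for (const item of this.active) {
      while (item.tasks.length > 0) { item.tasks.shift()!(); ran++; }
    }
    // Drop items that finished all their work.
    this.active = this.active.filter(i => i.tasks.length > 0);
    return ran;
  }
}
```

The design choice here is to invert the polling: instead of every item asking "do I have work?" once per second, items announce work when it appears.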
One thing I considered is whether to give up on real-time simulation and switch back to something more turn-based. The advantage of such a system is that calculations only need to happen when the user clicks something (probably an end-turn button, or another action). This limits the number of times calculations need to be made.
It's also advantageous because it gives players more time to relax and think. This is a very good thing, and one I think was important in NEO Scavenger.
The main downside I can think of is long voyages. Turn-to-turn crew management might be okay during critical moments, but what about the weeks (or months) of interplanetary travel? How will I handle simulation during this time? Do I just run a series of simulation steps one after another? And how is that any different from running it in real-time? Won't the multi-turn simulation have the same performance issues?
I guess this isn't a problem exclusive to turn-based play. I'd probably need to fast-forward the game in a real-time model, too. I was originally thinking I'd fill that long, boring travel time with procedural drama to keep it interesting, which would remove the need to fast-forward. But I suppose even the best-written sci-fi has to fast-forward some of the time. A scene ends, a journey finishes without incident. We'll need to move on to the next interesting bit rather than assume every single second of simulation time will be interesting enough to watch.
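The fast-forward idea can be sketched in a few lines (TypeScript for illustration; the real state is far richer than one struct, and `step`, `fastForward`, and the supply numbers are all hypothetical): run the same step function, but with a coarser timestep during uneventful stretches, so a month of travel is a handful of big steps rather than thousands of small ones.

```typescript
// Minimal sketch of coarse-grained simulation steps during uneventful travel.
interface ShipState { day: number; supplies: number; }

// One consumption rate per day, scaled by the timestep, so a 7-day step
// produces the same totals as seven 1-day steps (for linear effects).
function step(state: ShipState, days: number): ShipState {
  return { day: state.day + days, supplies: state.supplies - 2 * days };
}

// During interesting moments, simulate day by day; during a quiet leg,
// take large steps until the journey is covered.
function fastForward(state: ShipState, totalDays: number, coarseStep: number): ShipState {
  let s = state;
  let remaining = totalDays;
  while (remaining > 0) {
    const dt = Math.min(coarseStep, remaining);
    s = step(s, dt);
    remaining -= dt;
  }
  return s;
}
```

The catch, of course, is that only roughly linear effects survive this trick unchanged; anything with feedback (heat transfer, cascading failures) would diverge from the fine-grained result.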
More food for thought. Good thing the weekend is here! This is the sort of big-picture problem that's best suited to thinking away from the code editor. Or maybe I need to brush up on some turn-based games like Dwarf Fortress to see how it's done in those? Hm.
I spent today doing a lot of clean-up on the old tilemap code. As mentioned yesterday, the old code made a lot of assumptions about the tilemap data. Most notably, it didn't care about synchronizing additional tile data arrays such as those used for lighting and sockets.
So, I replaced the old code that manipulated tilemap sizes with code that's friendlier to this metadata. This meant rewriting the tilemap trimming and padding functions.
So far, I think the functions are roughly working. They basically strip out all the items, rebuild the tilemap to the requested new size, erase the old sockets and lighting data, and then re-add the items in their new positions to populate socket and lighting data. The delete, move, and re-add process was simpler to do (and more reliable) than trying to manually update all the item metadata while they were still installed on the tilemap.
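The strip/rebuild/re-add approach might look something like this (a TypeScript sketch; the prototype is Haxe, the real ship carries lighting and visibility arrays beyond the single socket grid shown here, and `resizeShip`, `Item`, and the field names are hypothetical):

```typescript
// Hedged sketch of resizing a tilemap by removing items, rebuilding empty
// metadata, and re-installing each item at its offset position.
interface Item { x: number; y: number; id: string; }

interface Ship {
  width: number;
  height: number;
  sockets: string[][]; // stands in for all per-tile metadata arrays
  items: Item[];
}

function makeGrid(w: number, h: number): string[][] {
  return Array.from({ length: h }, () => new Array<string>(w).fill(""));
}

// Pad the map by (left, top) tiles and set a new size. Rather than shifting
// every metadata array in place, strip the items, rebuild fresh grids, and
// re-add each item so all metadata repopulates from scratch.
function resizeShip(ship: Ship, newW: number, newH: number, left: number, top: number): Ship {
  const moved = ship.items.map(i => ({ ...i, x: i.x + left, y: i.y + top }));
  const out: Ship = { width: newW, height: newH, sockets: makeGrid(newW, newH), items: [] };
  for (const item of moved) {
    out.items.push(item);
    out.sockets[item.y][item.x] = item.id; // re-adding repopulates metadata
  }
  return out;
}
```

The appeal of this shape is exactly what the entry describes: the only coordinate math is the single (left, top) offset applied to each item, instead of per-array shifting logic that can silently drift.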
The one issue I'm running into, however, is the trimming/padding calculations. I have some offsets and rounding errors which are causing items to be placed in the wrong places, so tomorrow, I'll have to fix that.
I feel better about where I'm leaving off today, as the code is a bit cleaner than the old mess. But little to show for the work. So far, anyway. Until tomorrow!
Hey Folks! With the NEO Scavenger v1.13 test build live, I'm going to give it a couple days to see if it has any critical issues before promoting it to the default build. So with that, it's back to prototyping!
Today's work continued efforts to build the ship from arbitrary items (prefabs) instead of square tiles. These items still get placed on the tilemap grid, but the tilemap itself is invisible and only used for things like pathfinding, light/collision tests, and socket info for placing items.
I've made some progress on this front. For one thing, I'm now able to place these parts onto separate layers based on their role. Things like structural beams render below floors, which render below items (e.g. the fridge), which render below walls, and so on, so I can get a nice layered ship on screen. Plus, since the layers are well-organized, I can show/hide them incrementally (think Z-levels in The Sims or Gnomoria).
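Role-based layering like this boils down to a fixed draw order, sketched here in TypeScript (the prototype is Haxe, and the role names, `layerIndex`, and `visibleRoles` are illustrative stand-ins, not the game's identifiers):

```typescript
// Illustrative sketch: each prefab declares a role, and a fixed role order
// decides its draw layer. Lower index draws first (i.e. underneath).
const ROLE_ORDER = ["structure", "floor", "item", "wall"] as const;
type Role = typeof ROLE_ORDER[number];

function layerIndex(role: Role): number {
  return ROLE_ORDER.indexOf(role);
}

// Incremental show/hide (the Z-level effect) is just a filter on the index:
// everything at or below the chosen layer stays visible.
function visibleRoles(maxLayer: number): Role[] {
  return ROLE_ORDER.filter(r => layerIndex(r) <= maxLayer);
}
```

Sorting arbitrary prefabs by `layerIndex` yields structure, then floors, then items, then walls, regardless of the order they were placed in.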
I was also able to get lighting info to transmit to these items, based on the tilemap info below. It seemed to behave as designed, with shadows cast by placed wall items. The problem is that I still get some really harsh transitions from lit to black, and it looks pretty jarring.
One small hack I tried actually made a big difference: I simply made the minimum lighting dim instead of black. So areas in complete shadow still had some color and definition, and it looked pretty good.
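That hack is essentially a one-liner, shown here in TypeScript for illustration (the 0..1 light scale and the 0.15 floor are assumed values, not the prototype's actual numbers): clamp the computed light to a minimum ambient level instead of letting it reach full black.

```typescript
// The "dim instead of black" hack: areas in full shadow keep a little
// color and definition. minAmbient = 0.15 is an illustrative choice.
function applyAmbientFloor(light: number, minAmbient: number = 0.15): number {
  return Math.max(minAmbient, Math.min(1, light));
}
```
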
On the other hand, a more complicated hack I introduced had a less-desirable effect. Adding blur/feathering to the lightmap to smooth the light/shadow boundaries takes us back to lightmaps that spill through walls. That's not really going to work.
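For what it's worth, one way to feather a lightmap without the spill-through is a wall-aware blur, sketched here in TypeScript (the prototype is Haxe; `wallAwareBlur` and the grid layout are hypothetical): average each tile with its neighbors, but refuse to sample any neighbor marked as a wall, so light values never bleed across a wall boundary.

```typescript
// Hedged sketch: a 3x3 box blur over a tile lightmap that never averages
// light across wall tiles, preventing the feathering from leaking through.
function wallAwareBlur(light: number[][], walls: boolean[][]): number[][] {
  const h = light.length, w = light[0].length;
  const out = light.map(row => row.slice());
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      if (walls[y][x]) continue; // wall tiles keep their own value
      let sum = 0, n = 0;
      for (let dy = -1; dy <= 1; dy++) {
        for (let dx = -1; dx <= 1; dx++) {
          const ny = y + dy, nx = x + dx;
          if (ny < 0 || ny >= h || nx < 0 || nx >= w) continue;
          if (walls[ny][nx]) continue; // never sample across a wall
          sum += light[ny][nx];
          n++;
        }
      }
      out[y][x] = sum / n;
    }
  }
  return out;
}
```

Whether this is fast enough per-frame is another question; it may only be viable if the lightmap is recomputed on change rather than every frame.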
Before going any further with this, however, I needed to solve a problem. Every time I tested one of these lighting models, I needed to rebuild a ship from items by hand. My old save/load layout system only cared about tiles. So I quickly added items to the data, and...
They're missing on load. Or more specifically, they're loaded, but not visible. And that's where most of my afternoon went: trying to figure out why they weren't appearing. It was only late in the day that I realized it worked normally in the crew sim state, but not the build state. It came down to reloading the graphics for each item after setting the ship's layout. (Switching states seems to kill the graphics for sprites.)
And then, I realized a big, deep problem: all my ship loading/saving/state-transition code does a lot of tilemap operations by manipulating the tilemap array directly. This was fine back when the ship only had tiles in an array. But now, it has tile tinting info, socket info, and visibility info, each in their own arrays. And these arrays get out-of-sync with the tilemap array when I muck with it.
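The usual cure for parallel arrays drifting out of sync is to route every change through one accessor, sketched here in TypeScript (the prototype is Haxe, and `TileStore`, its field names, and the packed index are hypothetical stand-ins for the actual arrays):

```typescript
// Illustrative sketch: tile data and its metadata live behind one class,
// and every mutation goes through setTile, so the arrays cannot diverge.
class TileStore {
  tiles: number[];
  tint: number[];
  socket: number[];
  visible: boolean[];

  constructor(public width: number, public height: number) {
    const n = width * height;
    this.tiles = new Array(n).fill(0);
    this.tint = new Array(n).fill(0xffffff);
    this.socket = new Array(n).fill(0);
    this.visible = new Array(n).fill(true);
  }

  // The one legal way to change a tile: callers never touch the arrays
  // directly, so resizing or state transitions can't leave stale metadata.
  setTile(x: number, y: number, id: number, tint = 0xffffff, socket = 0, visible = true): void {
    const i = y * this.width + x;
    this.tiles[i] = id;
    this.tint[i] = tint;
    this.socket[i] = socket;
    this.visible[i] = visible;
  }
}
```

The trade-off is a pile of call-site rewrites up front, which sounds like exactly the "quite a few places that are going to need fixing" situation described here.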
There are quite a few places that are going to need fixing, or replacing, to get these lined-up again. Though, at least I have a clear problem to tackle tomorrow!