Grid Placement, Resolution, and Shadows

I continued working on the prototype today, focusing on units, scaling, and grid placement.

Up until now, I've been using an orthographic camera (i.e. 2D projection) viewing a 3D scene. Most of my objects are either quads (squares) or cubes, stretched to match the texture dimensions. The result was pretty much identical to 2D engines like HaxeFlixel (indeed, the rendering behind the scenes is identical). The difference is that using Unity's 3D meshes allows for things like mesh materials that support lighting, shadow-casting, etc. Unity's 2D setup only allows a subset of these features.

One downside to this 3D-viewed-as-2D setup is that the on-screen elements can get stretched or squashed if you're not careful. This can ruin the pixel art as it gets aliased (or anti-aliased). Fortunately, there are some tricks to setting that up. Most of my morning was spent getting these settings fixed, as my first attempts were fairly arbitrary "looks good" eyeballing.
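The core of that setup is one ratio: for a pixel-perfect orthographic camera, the camera's half-height in world units has to match the screen height divided by twice the pixels-per-unit. Here's a minimal sketch of that arithmetic in Python (the function name and the 720p/32ppu numbers are just illustrative, not from my actual project settings):

```python
def ortho_size(screen_height_px: float, pixels_per_unit: float) -> float:
    """Orthographic half-height in world units for a 1:1 pixel mapping.

    Unity's Camera.orthographicSize is half the vertical view in world
    units, so each screen pixel maps to exactly one texture pixel when:
        orthographicSize = screen_height / (2 * pixels_per_unit)
    """
    return screen_height_px / (2.0 * pixels_per_unit)

# e.g. a 720-pixel-tall window at 32 pixels per unit:
print(ortho_size(720, 32))  # → 11.25
```

If that value drifts from the formula (say, by zooming the camera without adjusting it), sprites land between pixels and the art starts to shimmer.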

Once I had sorted out some pixels-per-unit and units-per-screen ratios, I moved on to grid placement. This was just a matter of figuring out the appropriate grid size in terms of Unity units, and adjusting item positions so they snap to the nearest grid point.
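The snapping itself boils down to rounding each coordinate to the nearest multiple of the cell size. A quick sketch (names are mine; the cell size would be the tile size in pixels divided by pixels-per-unit):

```python
def snap_to_grid(x: float, y: float, cell: float) -> tuple:
    """Snap a world position to the nearest grid point.

    cell is the grid spacing in world units, e.g. a 32-pixel tile at
    32 pixels-per-unit gives cell = 1.0.
    """
    return (round(x / cell) * cell, round(y / cell) * cell)

print(snap_to_grid(1.4, 2.6, 1.0))  # → (1.0, 3.0)
```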

This had a few hurdles. First, I was mistakenly changing pixels-per-unit when changing camera zoom and screen size. Second, the mesh-scaling I was doing to match sprite sizes to the images was causing alignment issues. Meshes are placed according to their center, not their top-left, so a scaled mesh aligns differently than meshes of a different scale. I had to add some offset adjustments to my code for this, too.
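The offset adjustment is just half the scaled size in each axis: to put a center-pivoted quad's top-left corner on a grid point, you shift the center right by half the width and down by half the height. A sketch of that conversion (hypothetical names, y growing upward as in Unity):

```python
def center_from_top_left(tlx: float, tly: float,
                         width: float, height: float) -> tuple:
    """Center position a center-pivoted quad needs so that its
    top-left corner lands at (tlx, tly). y grows upward."""
    return (tlx + width / 2.0, tly - height / 2.0)

# A 2x1-unit quad whose top-left corner should sit at the origin:
print(center_from_top_left(0.0, 0.0, 2.0, 1.0))  # → (1.0, -0.5)
```

With this, meshes of different scales all align to the same corner instead of sharing a center.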

With that done, I was able to move items around the scene and place them on a grid that matched the 32-pixel minimum tile size. I also fixed the bug that caused parts to be placed when I clicked a button to select a new part type.

Lastly, I decided to start looking at whether I could dynamically generate shadow-casters for my parts based on the json data. If you recall from my earlier shadow testing, diagonal walls had some problems. Meshes can cast shadows wherever they are opaque, and I created a special mesh cube that "smeared" the texture down the sides. This works well if the wall is flush against the sprite's edge, but the diagonal cuts across the middle, and the smearing doesn't cover that hole.

To solve this, I pictured a system where the user specifies a couple of coordinates in their part data, and the game extrudes that into a shadow-casting mesh. Kinda like this:

IMAGE: Rough shadow-casting mesh concept.

Basically, the user draws the gray diagonal wall sprite, and the orange box is the bounds of the image. Within their part data, they provide a list of coordinates (green dots), and the game will connect these dots with a shadow-casting mesh at runtime. The white numbers are the coordinate system of the image, so the user would write something like "[0, 0, 1, 1]" in this case.

All the game is doing is reading those dot coordinates and extruding them towards the camera (up from the floor), so that the resulting wall blocks the light (white dot).
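The extrusion step can be sketched as plain geometry: each dot becomes a bottom vertex on the floor and a top vertex raised toward the camera, and each pair of adjacent dots gets two triangles between them. Here's a minimal Python version of that idea (my actual code builds a Unity Mesh in C#, but the vertex/triangle layout is the same; names are illustrative):

```python
def extrude_shadow_mesh(dots, height):
    """Extrude a polyline of floor coordinates into a vertical ribbon.

    dots:   [(x, y), ...] in the sprite's coordinate system
    height: how far to extrude toward the camera
    Returns (vertices, triangles): vertices as (x, y, z) tuples with
    z = 0 (floor) or z = height, and triangles as a flat index list,
    two triangles per segment (Unity-Mesh-style).
    """
    vertices, triangles = [], []
    for x, y in dots:
        vertices.append((x, y, 0.0))      # bottom vertex on the floor
        vertices.append((x, y, height))   # top vertex toward the camera
    for i in range(len(dots) - 1):
        b0, t0 = 2 * i, 2 * i + 1         # this segment's bottom/top
        b1, t1 = 2 * i + 2, 2 * i + 3     # next segment's bottom/top
        triangles += [b0, t0, b1,  t0, t1, b1]  # one quad per segment
    return vertices, triangles

# The "[0, 0, 1, 1]" diagonal from the sketch, extruded to height 1:
verts, tris = extrude_shadow_mesh([(0, 0), (1, 1)], 1.0)
print(len(verts), tris)  # → 4 [0, 1, 2, 1, 3, 2]
```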

So far, the mesh creation has been pretty easy. I can turn any arbitrary collection of dots into a mesh and attach it to whatever game object I want. The tricky part is figuring out how to get it to cast a shadow.

So far, I've been trying to replace the item's collision mesh. For some reason, I thought that was what cast the shadows. But now that I write this, I think I was mistaken. It's probably the regular render mesh (which is why the texture on the mesh can cast a shadow in the first place). I must have been confusing an earlier editor test with a collision mesh casting shadows.

Anyway, maybe I'll try adding this green-dot-shadow-mesh to the existing mesh, and see if that works. In theory, since this shadow mesh is flat and its edge faces the camera, it should be invisible to the player (even if the sprite wasn't covering it).

Hopefully, I'll get that working tomorrow!