Biome beginnings…

I’ve been wanting to make a system for a 2D game that would offer varying biomes. The two pieces I started on were creating noise-based shaders for each type of block/square, and working on deforming the individual quads so that they weren’t perfectly square, but did always match up along seams.

The beginning of this video briefly shows the near-infinite world space variance for the four shaders that I’ve got working so far. Clockwise from the top left, the materials are: iron, copper, coal, and gold. The movement is actually the whole tileset being moved around the world space (the camera auto-follows the center of the tileset). The second part of the video shows the deformed quads – which actually brings me to the “right” and “wrong” way to devise triangles for a mesh.

Pedants would say this is very much the “wrong” way to do so – that near-square shapes should be built from pairs of triangles wherever possible. For mapping textures to quads, that is certainly true; however, the use case here is different.

First, there are no textures being utilized. With it being strictly shader-based, and with the shader specifying values based on world positions, the triangle design doesn’t matter at all for the visual effects. Additionally, the method I’m using allows for easy programmatic deformation, with a virtually unlimited number of points along each side of the quad available for the deformation. Currently that’s being controlled by specifying the number of line segments each side should be broken down into.

Before and after deformation with two segments per side

This image is just breaking the quad’s sides into two segments each.

Before and after deformation with five segments per side

Here’s five segments per side.

Before and after deformation with ten segments per side

And lastly ten segments per side.

You can see that with five and ten segments, the actual deformation in the upper left is not manifold in two dimensions. But because the visual is being driven by shaders, it doesn’t actually cause any issues, and the tiles to the left and above this tile still line up properly, meaning there’s also no z-fighting. While I still need to clean up the noise used for the deformations a bit, there’s a benefit to knowing that even errant geometry isn’t going to cause issues (because in a randomly generated world with tens of thousands of tiles, there’s always the chance for float-based math to be off).

The additional benefit to using “non-standard” triangles radiating out from the center is that it makes programmatic variation much easier to accomplish, as no tile needs to know anything about any of its neighbors to deform and still fit properly. This actually leads into another useful factor noted below. But here, the math and calculations are just much easier. In the list of vertices, vertices[0] is also point (0, 0) of the quad – the center. All other vertices radiate outward, starting from the upper-lefthand corner and moving clockwise around the quad. This also means that setting the mesh triangle[] array is easier because every triplet starts with 0 and then stutter-counts upward, e.g. – (0, 1, 2, 0, 2, 3, 0, 3, 4, 0, 4, 5, 0, 5, 6, …)

With such a simple set, a basic for-loop allows this to be done without prior knowledge of how many segments each quad has.

As mentioned above, there’s another useful bit here. If you look more closely at one of the before and after images, you’ll notice that the quad is centered on the world grid, which is not necessarily the default in Unity. Typically, a quad at coordinates (0, 0) would have its upper-lefthand corner at (0, 0) and its opposite corner at either (1, 1) or (1, -1) depending on how you have things set up. Here, when building the quad programmatically, rather than using the typical range of (0, 1) I use the range (-0.5, 0.5).

There are two primary reasons for this. From an object control perspective, this means that the tile at (5, -6) is centered at (5, -6), so destruction of that tile is the destruction of a 1×1 area centered at that position; there’s no need to worry about which direction the quad expands from its origin because the origin is the center. The second reason is from a programmatic geometry view. Because all tiles are centered, deforming the geometry along each x- and y-value between tiles is consistent between negative and positive worldspace.

Let’s take a little tutorial approach here to see what the code looks like. Here’s the creation of the quad itself. Yes, there’s some housekeeping to do with this yet, but it’s functional and fast.

	void CreateQuad()
	{
		Mesh mesh = new Mesh();
		mesh.name = "ScriptedMesh";

		Vector3[] vertices = new Vector3[1 + (4 * (stepsXY.Length - 1))];

		//Center
		vertices[0] = new Vector3(0f, 0f, 0f);

		for (int i = 0; i < stepsXY.Length - 1; i++)
		{
			//Top
			vertices[(0 * (stepsXY.Length - 1)) + i + 1] = new Vector3(stepsXY[i], stepsXY[stepsXY.Length - 1], 0f);
			//Right
			vertices[(1 * (stepsXY.Length - 1)) + i + 1] = new Vector3(stepsXY[stepsXY.Length - 1], stepsXY[stepsXY.Length - 1 - i], 0f);
			//Bottom
			vertices[(2 * (stepsXY.Length - 1)) + i + 1] = new Vector3(stepsXY[stepsXY.Length - 1 - i], stepsXY[0], 0f);
			//Left
			vertices[(3 * (stepsXY.Length - 1)) + i + 1] = new Vector3(stepsXY[0], stepsXY[i], 0f);
		}

		Vector3[] normals = new Vector3[vertices.Length];

		for (int i = 0; i < normals.Length; i++)
			normals[i] = Vector3.forward;

		int[] triangles = new int[4 * (stepsXY.Length - 1) * 3];

		int innerIndex = 0;
		for (int i = 0; i < 4 * (stepsXY.Length - 1); i++)
		{
			triangles[innerIndex++] = 0;
			triangles[innerIndex++] = i + 1;
			triangles[innerIndex++] = i + 2;
		}

		triangles[innerIndex - 1] = 1;

		mesh.vertices = vertices;
		mesh.normals = normals;
		mesh.triangles = triangles;

		mesh.RecalculateBounds();

		GameObject quad = new GameObject("Block");
		quad.transform.position = position;
		quad.transform.parent = this.parent.transform;

		MeshFilter meshFilter = (MeshFilter)quad.AddComponent(typeof(MeshFilter));
		meshFilter.mesh = mesh;

		MeshRenderer meshRenderer = (MeshRenderer)quad.AddComponent(typeof(MeshRenderer));
		meshRenderer.material = this.bMat.Material;

		this.self = quad;
	}

We create the mesh, and then determine the number of vertices. Again, because we aren’t turning a quad into a bunch of squares, this is an easy calculation, and the center is the only vertex that needs to be added beyond those on the perimeter.

Vector3[] vertices = new Vector3[1 + (4 * (stepsXY.Length - 1))];

Here, the number of vertices is 1 for the center, plus 4 * (stepsXY.Length - 1), where stepsXY holds the coordinate values along each side (so stepsXY.Length is the number of vertices per side). We’re subtracting 1 from each side because the second corner vertex of a given side will be the first vertex of the next side. In other words, if we’re breaking each side into two segments, you’d have {(-0.5, 0.5), (0, 0.5), (0.5, 0.5)} for the top, and {(0.5, 0.5), (0.5, 0), (0.5, -0.5)} for the right side. We don’t want or need to have (0.5, 0.5) listed twice in the array of vertices, so the -1 prevents that from happening.
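
The stepsXY array itself isn’t shown in these snippets; as a hedged reconstruction (BuildSteps is an illustrative name, not the project’s), it would just be the evenly spaced coordinate values across the centered (-0.5, 0.5) range:

	float[] BuildSteps(int segments)
	{
		// One coordinate per vertex along a side: segments + 1 values
		float[] steps = new float[segments + 1];
		for (int i = 0; i <= segments; i++)
			steps[i] = -0.5f + ((float)i / segments); // -0.5 ... 0.5
		return steps;
	}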

We then add the center vertex to the array, and run through a for-loop that also only needs to execute stepsXY.Length - 1 times, as each pass of the loop hits the same index on all four sides. Yes, I used 0 * … in the first (top) calculation – this is just for clarity. You’ll also notice that in the right and bottom calculations, we’re subtracting i rather than adding it. This is so that the triangle calculation later stays simple, and all vertices in the array exist in clockwise order starting at vertices[1].

All normals are forward-facing, so it’s easy to just fill the normals array with Vector3.forward in as many places as you have vertices.

Now the triangles array is initialized (remember to multiply by 3, since each triangle has three vertices). Using an index value that lives outside the for-loop allows for quick calculation; remember from above that the whole array is sets of 0, x, y where x and y stutter-step upwards. Finally, we set the very last triangle point back to 1 – the first value in the vertex array that is on the outer edge (this completes the circuit around the quad).

The rest is just building the mesh out. You may have noticed that I don’t build an array of UVs, nor plug UVs into the mesh building. Again, because I’m not using textures, there’s no mapping from a texture to the mesh, and therefore UVs are not needed. The shader doesn’t care about UVs. Of course, you could build shaders that DO care about UV calculations, in which case UVs would also need to be added (which may be a bit more complicated given the triangle geometry here).
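
If you ever did need UVs here, a minimal sketch (my assumption, not code from the project) could planar-map them, since every vertex already lies in the (-0.5, 0.5) range on both axes:

	Vector2[] uvs = new Vector2[vertices.Length];
	for (int i = 0; i < vertices.Length; i++)
		uvs[i] = new Vector2(vertices[i].x + 0.5f, vertices[i].y + 0.5f); // shift (-0.5, 0.5) into (0, 1)
	mesh.uv = uvs;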

Now we’ll look at the deformation code.

	public void DeformQuad()
	{
		MeshFilter mf = this.self.GetComponent<MeshFilter>();
		Mesh m = mf.mesh;
		Vector3[] vertices = m.vertices;

		for (int i = 0; i < vertices.Length; i++)
		{
			if ((vertices[i].x == stepsXY[0] || vertices[i].x == stepsXY[stepsXY.Length - 1]) && (vertices[i].y == stepsXY[0] || vertices[i].y == stepsXY[stepsXY.Length - 1]))
				continue;

			// Top
			if (vertices[i].y == stepsXY[stepsXY.Length - 1])
			{
				float noiseValue = Map(Mathf.PerlinNoise(this.position.x + vertices[i].x, this.position.y + stepsXY[stepsXY.Length - 1]), 0f, 1f, -0.3f, 0.3f);
				vertices[i] = new Vector3(vertices[i].x + noiseValue, vertices[i].y + noiseValue, 0f);
			}

			// Bottom
			if (vertices[i].y == stepsXY[0])
			{
				float noiseValue = Map(Mathf.PerlinNoise(this.position.x + vertices[i].x, this.position.y + stepsXY[0]), 0f, 1f, -0.3f, 0.3f);
				vertices[i] = new Vector3(vertices[i].x + noiseValue, vertices[i].y + noiseValue, 0f);
			}

			// Right
			if (vertices[i].x == stepsXY[stepsXY.Length - 1])
			{
				float noiseValue = Map(Mathf.PerlinNoise(this.position.x + stepsXY[stepsXY.Length - 1], this.position.y + vertices[i].y), 0f, 1f, -0.3f, 0.3f);
				vertices[i] = new Vector3(vertices[i].x + noiseValue, vertices[i].y + noiseValue, 0f);
			}

			// Left
			if (vertices[i].x == stepsXY[0])
			{
				float noiseValue = Map(Mathf.PerlinNoise(this.position.x + stepsXY[0], this.position.y + vertices[i].y), 0f, 1f, -0.3f, 0.3f);
				vertices[i] = new Vector3(vertices[i].x + noiseValue, vertices[i].y + noiseValue, 0f);
			}
		}

		m.vertices = vertices;
		m.RecalculateBounds();
	}

Currently, I’m just using Unity’s built-in Perlin noise method from the Mathf library (Mathf.PerlinNoise). This leaves much to be desired, but before I dove into creating a noise function, I wanted to ensure this all worked as I expected. Basically, this just extracts the vertices from the mesh, performs the calculations on them, and rebuilds the mesh. The first if-statement is intended to keep the corners of each quad from falling out of alignment. It probably won’t be kept, but it was something I was trying out.

You can also see that I map the noise function’s return values from (0, 1) to (-0.3, 0.3) as I don’t want any deformed vertex landing at or close to the center of another tile. I need to play around with this value some still, but it will depend on the noise function I end up with later on.
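
The Map helper isn’t shown in the post either; assuming a plain linear remap between ranges, a minimal version would be:

	float Map(float value, float fromMin, float fromMax, float toMin, float toMax)
	{
		// Linearly remap value from (fromMin, fromMax) to (toMin, toMax),
		// e.g. (0, 1) -> (-0.3, 0.3)
		return toMin + ((value - fromMin) * (toMax - toMin) / (fromMax - fromMin));
	}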

From a performance stance, it might be better to deform the vertices as the mesh is being created rather than creating a perfectly square mesh and then deforming it. But this is the type of optimization that will almost certainly hinder legibility of the code, and keeping the two functions separate allows for more easily making changes to either one. And really, the amount of time taken is pretty small. Even with large sets of tiles, it takes no more than ~40μs to generate and ~27μs to deform each quad, which works out to about 17s for over 250,000 tiles. The obvious plan would eventually be to chunk them (à la Minecraft), and there’s almost definitely some room for fine-tuning the process. Plus, this is just executing in the editor, so it would almost certainly perform better in a release executable.

What to do, what to do…?

Still playing with some new Unity 2020 features, still dabbling on Labyrintheer as well as a few other projects. Learning a bit of Machine Learning just for fun, figuring out the ins and outs of HDRP and RT, and generally using this lovely COVID pandemic as time to reset a bit and figure out what I actually want to develop sooner rather than later.

For whatever it may be worth to you, if you are interested in Machine Learning, either specific to Unity, or more generally, I cannot recommend Penny deByl’s courses on Udemy highly enough. All of her courses are great, and the ML and AI courses are no different.

https://www.udemy.com/course/machine-learning-with-unity/

While the course is ostensibly about Unity and the development takes place within Unity, it isn’t specific to the Unity ML-Agents (though there are sections for that). The bulk of the course is geared toward developing your own agents and brains in C#, which is fantastic whether you want to use Unity’s ML-Agents or not.

In the immediate now, I’ve been working on some Unity code to create a system of CCTVs and monitors to show them. It’s a core component of a potential game idea I’m futzing with at the moment. I expect to have some pictures or videos in the next week or two to show off. Until then, Happy Thanksgiving 2020.

WIP: Wednesday Thursday

I’ve been working on a few items, the most recent of which has been lootable chests that have open and close animations and audio.  Animations and audio are not areas I have a lot of experience with, but the chest is thankfully fairly simple.

Right now I’ve defined a very specific Chest Controller, though I suspect I will transform this into a Usable Object Controller that will take an enum of its usability type (for environmental objects) like open/close, on/off, pick or gather (for reagents), and the like.  The video above is super simple, and is still a WIP.
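
As a hedged sketch of that generalization – every name here is illustrative, not the project’s actual code – the controller might look like:

	using UnityEngine;

	public enum UsabilityType { OpenClose, OnOff, Pick, Gather }

	public class UsableObjectController : MonoBehaviour
	{
		public UsabilityType usability;
		private bool state; // open/on when true

		public void Interact()
		{
			switch (usability)
			{
				case UsabilityType.OpenClose:
				case UsabilityType.OnOff:
					state = !state; // toggle, then play the matching animation/audio
					break;
				case UsabilityType.Pick:
				case UsabilityType.Gather:
					// hand the reagent to the player's inventory, then despawn
					Destroy(gameObject);
					break;
			}
		}
	}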

Currently, the player will interact simultaneously with every interactable object within a radius.  The next step is to raycast or use a hidden collider to find the object most centered in front of the player and prevent interactions with objects behind the player.
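
For the raycast route, a minimal sketch might be the following (the key binding, the range, and the UsableObjectController component from the sketch above are all assumptions):

	using UnityEngine;

	public class InteractionRaycaster : MonoBehaviour
	{
		public float interactRange = 2f; // assumed reach; tune to taste

		void Update()
		{
			if (Input.GetKeyDown(KeyCode.E))
			{
				RaycastHit hit;
				// Only the object straight ahead of the player can be hit,
				// so objects behind or beside the player are ignored
				if (Physics.Raycast(transform.position, transform.forward, out hit, interactRange))
				{
					UsableObjectController usable = hit.collider.GetComponent<UsableObjectController>();
					if (usable != null)
						usable.Interact();
				}
			}
		}
	}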

It’s a start.

Unity Profiling for Fun (and Profit?)

For those who may have been following since before this blog began, you may have seen the Iceglow Gel death sequence that includes the possibility of spawning smaller Mini Iceglow Gels.  This was, by and large, a stroke of sheer genius on my part (yeah yeah yeah, just let me have this one).  It seemed like a cool idea and has led to some other cool ideas for deaths with other mobs.  In fact, each mob has a DeathScript requirement, even if it’s just to, you know… die.

But this has also been a thorn in my side.  When an Iceglow dies, there is a major stutter before the new minis are spawned in.  I finally decided to toss up the profiler to see what was going on, and decided to write this brief post on profiling, because damn, it’s handy!

For those new to Unity or development, the profiler is a pretty common tool used to “debug” the actual running game.  It can attach to a development build for better and more accurate profiling, but if you’re simply looking for bottlenecks where the definitive time and resources aren’t as important as discovering the spike, you can also simply attach the profiler to the editor itself.

Ice Glow Death Profile 1

This was a capture at the time of the resource spike right when the Iceglow dies.

Ice Glow Death Profile 2

The details of CPU usage show that the >1s spike is caused by the PhysX core baking the mesh for the minis.  I was hoping that, since this was running in a coroutine, it wouldn’t impact the whole of the game.  Maybe I’m doing it wrong (or, at least, not the best way).  Maybe I can pre-bake those meshes.  Maybe I can use simple colliders instead (a cube is far easier to calculate).  I’m just delving into this, and I’m not sure what the best solution is yet (stay tuned for updates on that, or follow my plea for help), but for now, the profiler is my handy tool to figure out what calls are b0rking the game.
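
On the pre-baking idea: newer Unity versions (2019.3+) expose Physics.BakeMesh, which bakes a mesh’s collision data ahead of time so that assigning it to a MeshCollider later doesn’t trigger an inline PhysX bake.  A hedged sketch, assuming a Unity version with that API:

	using UnityEngine;

	public static class ColliderPreBaker
	{
		// Bake the physics representation up front (before the death sequence),
		// so the spawn-time MeshCollider assignment is cheap.
		public static void PreBake(Mesh mesh)
		{
			Physics.BakeMesh(mesh.GetInstanceID(), false); // false = non-convex
		}
	}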

Easily as important as debugging code, and definitely more important for tuning: I can’t recommend highly enough that you learn to love your profiler.

Making Materials and Models! Mmmmmmmm (M)Normal Maps?

Not being an artist can seem like one of the most daunting parts of going it alone (or mostly alone) as an indie dev.  I’m a fair photographer, but that’s as far as my artistic capabilities have previously taken me.  Most of my stick figures look, well… disfigured, and things like straight lines are as simple for me as writing in Martian.  But I’ve been really working to increase my skill set here, and be able to create things in Labyrintheer that actually LOOK pretty decent.

In light of that, I started by picking up some great models from InfinityPBR.  Andrew, the awesome dude behind iPBR (as I reference it for myself), includes some great PBR materials as SBSAR files (and recently SBS files) that really helped me delve into how materials work: what is meant by all of the components (normal, roughness, metallic, height, or specular and glossiness depending on your preferred workflow), the difference between the metallic/roughness and specular/glossiness workflows, and a bit about what all of those components can do.

But this wasn’t enough customization for me.  As I’ve mentioned before I started using Archimatix to design some architectural models (which is still something I’m getting a handle on, simply due to the sheer variability of AX and its nodes).  As I worked through some (very simple) wall models and such, I realized that I also wanted more control over the materials themselves.  iPBR offers an INCREDIBLE amount of customization, but I’m just a pain in my own arse that way.  So the next step was…  Substance Designer.

For those new to the art game, Substance Designer is an amazing software package that lets you node-author your own materials and export SBSAR files that can be used to procedurally create materials in Unity (and I believe Unreal Engine).

The beauty of the node-driven design is that, while artist-type folk seem to settle into it easily enough, we logic-driven code monkey types can also create some really stunning work since everything can be reduced to parameters and inputs and outputs.  But before you can do any of this, you need to grasp some of the basics.  I’m not high-speed enough yet to really offer a tutorial, but I can share the wealth of tutorial love that I’ve been using.  So, let’s start with normal maps.

Normal Maps

I ran across this tutorial literally this morning, which is what brought me to create this post and share this info.  More blog post than actual tutorial, it presents the information about normal maps concisely and with excellent clarity.  Even someone who has never heard of a material in game parlance could probably understand what’s being explained.

The gist of normal maps is to control how a material interacts with light in your scenes by perturbing the surface normal used in lighting calculations.  This is not the same as the metallic/roughness aspects; it’s more to “pretend” that there’s dirt, smudges, small deformities, or other similar features on your object.  When making a material, you often preview it on a perfectly flat surface.  But you still want to see the details – details that offer a 3D appearance on a completely flat 2D plane.  This is where normal maps come in.

Let’s look at the image below, for instance:

Demon Head Emboss – Coin Face

This is meant to be the head of a coin I’m working on as sort of a self-tutorial.  The eye can easily see this as an embossed image, but due to the normal map, moving the light around changes how and where shadows happen.  Here the light is off to the left, so left of the “ridges” (it’s still just a flat plane) looks brighter, and right of them produces shadows.  If I were to move the light source to the other side, the opposite would be true.  This is how normal maps help reinforce the 3D appearance of an object that doesn’t have detailed modeling done.  This is a HUGE benefit to game performance – it’s much easier to draw a coin that is perfectly flat on both sides, and apply this material to make it appear 3D than it is to produce a 3D model with this level of detail.  Easier both in actual creation of the object as well as on your GPU for rendering it.

This video shows the coin in Unity.  The scuffs and scratches are both part of the base color of the item, but the deeper scratches are also mostly in the normal map, and allow light to fall into them or be occluded from them based on the light angle.  Note that in the above video, the edges of the coin are NOT flat; those are actually angled on the model itself.  Faking that with normal maps would not work nearly as well (at least not in any way I would be able to do it).

That’s what I have for normal maps, for now.  But I plan to continue this as a growing series of posts on PBR materials to help demystify them for those of us new to this whole thing.

Substances, Dungeon Floors, New Workflow

I’ve picked up Substance Designer and have started working on some better assets for the Dungeon biome.  Right now what I have is a bit busy, but I thought I’d share some of what I’m doing.  I started with a great tutorial on Youtube, where the presenter was kind enough to offer up his SBS file.  I made a variety of modifications and exposed some parameters, kicked out the SBSAR, and loaded it into my staging project in Unity.

I created three separate materials and applied them to their own GameObjects that Dungeon Architect uses to generate the floor.  Right off the bat, it looked like this:

Attempt 1

Like I said… busy.  But at least it’s more interesting than what I had before.  I decided to up the game by adding a random rotation – each floor is randomly rotated by 0, 90, 180, or 270 degrees (a quick sketch of this follows the image below).  That looked like this:

Attempt 2
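
As a minimal sketch of that random rotation (assuming each floor tile is its own GameObject; the component name is illustrative):

	using UnityEngine;

	public class RandomTileRotation : MonoBehaviour
	{
		void Start()
		{
			int quarterTurns = Random.Range(0, 4); // 0, 1, 2, or 3
			transform.Rotate(0f, quarterTurns * 90f, 0f); // 0, 90, 180, or 270 degrees
		}
	}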

Still busy, but at least a little more random.  Then I felt that the stones shouldn’t always be the same size, so I set each to have slightly different numbers of tiled stones per object:

Attempt 3

Lastly, it still seemed too busy, so I lowered the overall count of each.  One was 4×4, one 5×5, and the last was 4×5, making some stones non-square.  That looks like this:

Attempt 4

Now it’s less busy, more random feeling, and looks decent.  I think I’ll probably go back to the Substance a bit and see what I can do about breaking them up a bit more, but for my first modification of a substance I’m pretty pleased.

Random Class, Singletons, and the “Big Debate”™ (does this sound familiar?)

A few weeks ago I wrote a VERY similar post about my Logger class and the debate over the use of Singletons.

I’ve done the same with my new randomization class, LRandom().  LRandom does several things under the hood, and is a singleton for a very specific reason.  First, I wanted to use it to replace the random class that Dungeon Architect uses during initialization of a dungeon or map – in part because I wanted more control over randomization, and in part because I wanted all aspects to produce the same values with the same seed.  Previously, at least in earlier versions, DA would produce the same dungeon layout, but things like decorations and extras would not always be the same.

Additionally, I wanted the same random class to be able to control other randomizations during play: loot, die rolls, et cetera.  The current implementation uses a single System.Random instance, though I may extend this to have several – one for dungeon building, one for loot, one for mob AI, and so forth.

Also, the state of the System.Random is stored in a serialized file on exit.  This should, in theory and after a lot more work, allow a player to pick up where they left off, with the randomizer in the same state it was in before.  The main goal here is to prevent “known randoms” with a given seed – the first chest you run across won’t always have the same stuff.

The class also contains the original DA methods for NextGaussianFloat() and GetNextUniformFloat() to integrate without changing a lot of the DA code.  And lastly, it contains methods that allow the instance to be reseeded if need be, which is mostly good for debugging, but may have benefits during runtime as well.
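
As a hedged sketch of what such a class might look like – NextGaussianFloat() and GetNextUniformFloat() are named in this post, everything else is illustrative:

	using System;

	public sealed class LRandom
	{
		private static readonly LRandom instance = new LRandom();
		public static LRandom Instance { get { return instance; } }

		private Random rng = new Random();

		private LRandom() { }

		// Reseed on demand (handy for debugging, per the post)
		public void Reseed(int seed) { rng = new Random(seed); }

		public float GetNextUniformFloat()
		{
			return (float)rng.NextDouble();
		}

		public float NextGaussianFloat()
		{
			// Box-Muller transform: two uniform samples -> one standard normal
			double u1 = 1.0 - rng.NextDouble();
			double u2 = rng.NextDouble();
			return (float)(Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Sin(2.0 * Math.PI * u2));
		}
	}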

So, currently, LRandom() and LLogger() are two singletons that I’m using that have proven very beneficial.  If you missed my last post, or want to hear more about Singletons (the good, the bad, and the ugly, so to speak), check out this great podcast from The Debug Log – Episode 73: Design Patterns: Singleton.

How To See Your Player… Making Walls Transparent

Over the past several months working on the Dungeon theme for Labyrintheer, I’ve changed my camera angle several times.  I keep moving it higher to prevent walls and such from occluding the player, but I’m never happy with such an oblique view.  So, over the past few days I’ve been looking at options to make walls transparent when they are between the player and the camera.

Some solutions simply disable the geometry.  This isn’t acceptable for my game, and I suspect for many.  You could accidentally walk backwards out of the playable area, or an errant AI could take a bad turn during its pathing and fall off the world.  Plus, disabling geometry just doesn’t seem like an elegant solution.  My primary goal (and I’m still working on it) is to use a shader for this directly, though that seems like it has some major pitfalls (how do you tell a shader about an object other than the one that it’s drawing?).

So, for now I’m cheating with a very small amount of code and an extra material for objects that I want to hide.

Basically, I’ve duplicated the four wall materials I have, and the duplicate materials use transparency with an alpha value of 100.

My player controller script now calculates its distance from the camera every frame (though I think this could be done once in Awake(), since the distance should be fairly static), like this:

     void Update ()
     {
         GetInput();
         ProcessInput();
 
         distanceSquared = (transform.position - Camera.main.transform.position).sqrMagnitude;
     }
 
     public float Dist()
     {
         return distanceSquared;
     }

Then created a script to go on the walls (or any object that needs to be transparent to prevent occlusion), as such:

TransMaterialSwap.cs

 using UnityEngine;
 
 public class TransMaterialSwap : MonoBehaviour {
 
     public Material _original;
     public Material _transparent;
     private GameObject player;
     private playerController pC;
     private Renderer rend;
 
     void Start()
     {
         player = GameObject.FindWithTag("Player");
         pC = player.GetComponent<playerController>();
         rend = this.GetComponent<Renderer>();
     }
 
     void Update()
     {
         if ((transform.position - Camera.main.transform.position).sqrMagnitude < pC.Dist())
         {
             rend.material = _transparent;
         }
         else
         {
             rend.material = _original;
         }
     }
 }

In the inspector I set both the original material and the transparent material.  If the object is between the camera and the player, it switches the object’s material to the transparent material.  It looks like this:

There are a few issues here.  First, I still need to profile this to see if the solution gives my runtime performance a hit.  I don’t suspect it’ll be TOO bad, but it doesn’t hurt to check, especially with larger maps.  I may look into options to only run the check if the object is visible to the camera rather than always checking on Update(), every frame for every wall.  The other issue is that by making it transparent, light comes through.  I’m not sure how big an issue this will be – it’ll require some play testing.  But it may be an issue in some situations.
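
For the visibility idea, Unity’s OnBecameVisible/OnBecameInvisible callbacks fire when a Renderer enters or leaves any camera’s view, so the distance check could be gated on that.  A hedged variant of the script above (the class name is just for illustration):

	using UnityEngine;

	public class TransMaterialSwapVisible : MonoBehaviour
	{
		public Material _original;
		public Material _transparent;
		private playerController pC;
		private Renderer rend;
		private bool isVisible;

		void Start()
		{
			pC = GameObject.FindWithTag("Player").GetComponent<playerController>();
			rend = GetComponent<Renderer>();
		}

		void OnBecameVisible() { isVisible = true; }
		void OnBecameInvisible() { isVisible = false; }

		void Update()
		{
			if (!isVisible)
				return; // off-screen walls skip the distance test entirely

			bool occluding = (transform.position - Camera.main.transform.position).sqrMagnitude < pC.Dist();
			rend.material = occluding ? _transparent : _original;
		}
	}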

Lastly, as I said, I really do want to attempt this in a shader.  I figure it’s a good method to learn shader programming, even if exactly what I want isn’t possible.

Logger Class, Singletons, and the “Big Debate”™

So I’ve spent a bit of time working on getting a logger class set up, which is rudimentarily complete.  This class currently just writes out to a file and also to the debug console in Unity.  It takes in a message, a module, and a severity.  I wanted to easily have access to this class from anywhere, but since it writes to a file, I also needed to ensure that there was only ever one instance trying to access the file at a time.  This, of course, led to the singleton pattern.
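
As a hedged sketch of that shape – message, module, and severity in; file plus Unity console out.  The names and the simple lock are illustrative, not the actual class:

	using System;
	using System.IO;
	using UnityEngine;

	public sealed class LLogger
	{
		public enum Severity { Info, Warning, Error }

		private static readonly LLogger instance = new LLogger();
		public static LLogger Instance { get { return instance; } }

		private readonly string path = Path.Combine(Application.persistentDataPath, "game.log");
		private readonly object fileLock = new object(); // one writer at a time

		private LLogger() { }

		public void Log(string message, string module, Severity severity)
		{
			string line = string.Format("{0:u} [{1}] {2}: {3}", DateTime.UtcNow, severity, module, message);
			lock (fileLock) { File.AppendAllText(path, line + Environment.NewLine); }
			Debug.Log(line); // mirror to the Unity console
		}
	}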

Now, even a cursory glance through newbie programming blogs and books can cause one to question why this is even a pattern to begin with – it seems that everyone says to stay away from them, and I kind of understand why you don’t want to overuse them.  The Debug Log has a pretty good episode about them here.

So, singleton was the answer, and it was set up nicely enough and it works, though I still need to make it threadsafe.  But… logging:

LOG!

So, now it’s back to work on the game itself.  I have most of the systems in place to get a sample level up and running soon.  Yay!

Dungeon Layout Metrics

So, for the current branch of changes, I’m working on level layout, design metrics, and baseline functionality.  What does all of that mean?

Layout Metrics and Design

In the above photo, I have a nearly perfect layout.  The spanning tree options have kept the start (blue dot – bottom right) and end (red dot – center) pretty far apart.  The blue dot is the actual portal where the player would enter the level.  The red dot is just a red colored portal, but that portal is a marker for where the boss room will eventually be.

Of course, since everything is randomly generated, no two levels will be completely alike.  Unfortunately, that means as I change things up or even hit a seed I don’t like, things can go south.

Too Much Spanning Tree

The above shot is with a bit more weight on the spanning tree node.  There’s still a nice long path between start and end, but there are also long hallways that loop too far around (for my taste, at least).  Actually, the above example isn’t awful, but these images are the same seed – the first with 0.05 spanning weight and the second with 0.15.  While the spanning value may end up static across all levels, there may be some small fuzziness around it as well.  Finding good boundaries is more of a balancing act than I expected.

Other than basic parameters, I’m trying to prevent barrels from spawning inside of walls:

Barrel Wall

Or in large groups that get in the way:

Too Many Barrels

I’ll be working on better “torch flicker” – right now it’s a bit stuttery looking, and I’d like it to be smoother and look more like an actual torch.  Then a few more mobs, the boss spawn, and some more decoration for the level.
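
On the torch flicker: one common approach to a smoother result – sketched here as an assumption, not the project’s code – is to drive the light’s intensity with Mathf.PerlinNoise, which is continuous over time, rather than picking a new random value each frame:

	using UnityEngine;

	[RequireComponent(typeof(Light))]
	public class TorchFlicker : MonoBehaviour
	{
		public float baseIntensity = 1.2f; // illustrative values; tune to taste
		public float flickerAmount = 0.4f;
		public float flickerSpeed = 2f;

		private Light torch;

		void Start() { torch = GetComponent<Light>(); }

		void Update()
		{
			// PerlinNoise returns ~(0, 1); center it so the flicker dips and rises
			float n = Mathf.PerlinNoise(Time.time * flickerSpeed, 0f) - 0.5f;
			torch.intensity = baseIntensity + (n * 2f * flickerAmount);
		}
	}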