Unity Custom Attributes and Custom Editors

Custom editor/inspector view for Shapes.cs

While working on Programmatic Meshes, it became clear that I needed some custom gizmos to help me visualize things as I moved along. It really became a necessity as I was working on sphere generation, because certain sizes and segment counts were creating bad geometry. This was almost certainly due to the ordering of the int[] array of triangles associated with the mesh. Since everything is generated by code, that means that once all vertices are created, the code also needs to sort/order those vertices the same way every time to ensure that the facet tris are created correctly.

Sphere Vertices Indices

This was my first venture into gizmos aside from some very basic line drawings. I wanted the indices of each Vector3 in the mesh so that I could ensure they were getting properly ordered each time and would meet the requirements for tri calculation. So the above was born – and damn did it ever help me see where things were occasionally not working (sadly, I don’t have a screenshot of the bad sphere, but let’s just say that it was… not an ideal geometric shape).

After getting through that, I wanted to also change the inspector so that I could enable/disable vertex visualization. As shapes become more complex, the numbering is great, but I wanted a better visualization of the vertices in the scene view. As the screenshot above illustrates, unless you rotate the view around a bit, it’s easy to get lost with where vertices actually are in relation to one another.

Sphere Vertices Visualization

In the above GIF, you can see how movement helps determine which indices you’re viewing, but the ROYGBIV and SIZE options for visualization also help. In ROYGBIV mode, the closest vertices are red and the furthest are violet, with everything in between following the ROYGBIV order. With the size option, the closest vertices are the largest and the furthest are the smallest, scaled to suit in between. In either case, they update in real time. I’m not yet sure how this performs on very high-density meshes, and I’m sure some optimization will be necessary, but it’s good enough for my needs for now.
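The gizmo code itself isn’t in this post, but the SIZE mode reduces to something like the following. This is a minimal sketch under my own assumptions (the mesh field, the distance range, and the method layout are mine, not the actual Shapes code):

void OnDrawGizmos()
{
	Camera cam = Camera.current;	// the scene view camera while gizmos are drawn
	if (cam == null || mesh == null)
		return;

	foreach (Vector3 v in mesh.vertices)
	{
		Vector3 world = transform.TransformPoint(v);
		float dist = Vector3.Distance(cam.transform.position, world);

		// Nearest vertices are drawn largest, furthest smallest; the 1-10 unit
		// range here is arbitrary.
		float radius = Mathf.Lerp(0.05f, 0.01f, Mathf.InverseLerp(1f, 10f, dist));
		Gizmos.DrawSphere(world, radius);
	}
}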

I wanted the collapsible Viewables area in the inspector for this, as well as for mesh data (Vertices[], Normals[], and Triangles[]). I also wanted to be able to select the Shape (each of which is a class inheriting from Primitive), and which Generate() method should be used (each shape has different generate methods).
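I won’t walk through the whole editor here, but the foldout and dropdown portions of a custom inspector boil down to something like this. It’s a rough sketch with made-up field names, not the actual ShapeEditor code:

using UnityEditor;

[CustomEditor(typeof(Shapes))]
public class ShapeEditor : Editor
{
	bool showViewables;
	bool drawVertices;
	int selectedShape;
	string[] shapeNames = { "Circle", "Quad", "Hexagon" };	// illustrative only

	public override void OnInspectorGUI()
	{
		// Collapsible "Viewables" area with a vertex visualization toggle.
		showViewables = EditorGUILayout.Foldout(showViewables, "Viewables");
		if (showViewables)
			drawVertices = EditorGUILayout.Toggle("Draw Vertices", drawVertices);

		// Shape selection as a dropdown; the chosen index drives the rest of the UI.
		selectedShape = EditorGUILayout.Popup("Shape", selectedShape, shapeNames);
	}
}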

For this to work, I created a few custom classes, which I added to my external DLL:

SelectableList<T> is a custom List<> type collection that has an interactive indexer property called .Selected that references the index of the selected item in the list. This has turned out to be handy for selecting items in lists from dropdowns in the inspector.

MethodCaller<T> is a custom collection that contains a reference to a class (in this case, each shape class gets a method caller), and a SelectableList<MethodInfo> of the generation methods in that class.

And lastly, MethodDictionary<T> is a custom collection class that collects MethodCaller<T> objects. Its constructor takes a filter for classes (to remove the base class and secondary base classes where inheritance takes place on multiple levels), and a filter for methods based on the name of the methods to acquire.

The creation of the MethodDictionary also builds out the dictionary based on the filters provided, so there isn’t a lot of work needed to implement it. This is definitely a plus.
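To give a sense of the shape of these classes, here’s a minimal sketch of SelectableList<T> under my own assumptions. I’m assuming .Selected hands back the item at an internally tracked index, which matches how it gets chained in the editor snippet below:

using System.Collections.Generic;

public class SelectableList<T> : List<T>
{
	// Index of the currently selected item; set from dropdowns in the inspector.
	public int SelectedIndex { get; set; }

	// The selected item itself, allowing chained access like Callers.Selected.Methods.
	public T Selected => this[SelectedIndex];
}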

In the Unity code, I also created three custom attributes:

[Segmented] is applied to generate methods that use the segment count value. This allows that slider to be enabled/disabled on constructor selection.

[DefaultGenerator] is applied to the generate method that the default Generate() method passes through to.

[GeneratorInfo] is applied to all generate methods and provides inline help text in the inspector: typically what segment count actually controls (if it’s used), and what the measure/size value indicates (e.g.: circle/diameter, quad/size, hexagon/apothem*2).

Using reflection, I do something like this in the ShapeEditor.cs script:

if (shape.shapeMethods.Callers.Selected.Methods.Selected.GetCustomAttributes().SingleOrDefault(s => s.GetType() == typeof(SegmentedAttribute)) != null)

It’s not terribly pretty, but it’s fairly quick – quick enough for inspector draws – and allows the inspector panel to change on the fly as selections are made.
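If you’re on the .NET 4.x scripting runtime, the generic reflection extensions can tidy it up a bit. This is the same existence check as above, just easier on the eyes (a sketch, not the actual editor code):

using System.Reflection;
using UnityEditor;

MethodInfo method = shape.shapeMethods.Callers.Selected.Methods.Selected;

// Same existence check via the generic extension method.
if (method.GetCustomAttribute<SegmentedAttribute>() != null)
{
	// enable the segments slider
}

// And pulling the inline help text when present.
GeneratorInfoAttribute info = method.GetCustomAttribute<GeneratorInfoAttribute>();
if (info != null)
	EditorGUILayout.HelpBox(info.info, MessageType.Info);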

It’s worth noting, if you haven’t worked with custom attributes before, that an attribute doesn’t need to have fields, despite all the examples I came across online having them. Without fields, it’s basically a check to see whether it exists or not: a boolean of sorts, applied to a reflected method, to change how the inspector is drawn. Some examples:

using System;

[AttributeUsage(AttributeTargets.Method)]
public class SegmentedAttribute : Attribute
{
}

[AttributeUsage(AttributeTargets.Method)]
public class DefaultGeneratorAttribute : Attribute
{
}

[AttributeUsage(AttributeTargets.Method)]
public class GeneratorInfoAttribute : Attribute
{
    public readonly string info;

    public GeneratorInfoAttribute(string info)
    {
        this.info = info;
    }
}

And usage on a class method:

[Segmented]
[DefaultGenerator]
[GeneratorInfo("Generates a circle based on the 'starburst' pattern.\n\nSize is the diameter of the circle.\n\nSegments is the number of segments _per quadrant_.")]
public Mesh GenerateStarburst() { /* code */ }

I will probably write some additional posts about this, maybe with more code, as this project continues. And I’m sure I’ll have a Part 2 of Programmatic Meshes in the next week or two.

HyperDisk Kickstarter

For anyone who has backed campaigns on Kickstarter, you probably know that they’re sometimes a crapshoot. Back in 2019, I backed two portable SSD projects: HyperDisk and WarpDrive. Both were expected to deliver in early 2020, but a combination of a suddenly volatile SSD market and the COVID pandemic caused them both to sort of evaporate – vaporware, if you will. Many started hammering both as scams, demanding refunds – a fairly reasonable response given that many campaigns go that route.

WarpDrive may very well have been a scam. The creators haven’t posted an update since January 2020. HyperDisk has had updates and, much to my surprise, I actually received mine today. So, of course, I set to benchmark it against their purported 1000MB/s claims.

HyperDisk CrystalMark

It does actually meet those speeds using the supplied USB-C to USB-C cable into the Thunderbolt 3 port on my laptop. Internally, the enclosure is a USB 3.1 Gen 2 interface with an M.2 NVMe SSD. The cost during the campaign was US$139; at February 2021 prices, a comparable SSD (what appears to be a 2242 form-factor stick) plus an enclosure comes out to pretty much a wash, though at the time of the campaign it was a pretty decent deal. I haven’t opened it up to look, but I’m presuming a 2242 based on the size. I’m also not sure of the branding on the stick itself.

HyperDisk CrystalInfo

The part number, HYPCNV001T, doesn’t bring up anything on Google.

We’ll see how it holds up over time, but for now I’m not terribly displeased. I’ve come to expect Kickstarter campaigns to deliver long after their estimates. I’ve backed 128 projects to date, though many have been just a $1 backing as a show of support (if a lot of folk gave even a dollar to many “good” projects, more would end up with the necessary funding; worth thinking on). Of the ones with a deliverable product that I’ve backed at a tier to get said product, ~80% have gone beyond their timetables, and of those, half or so by quite a ways… often a year or more.

Biome beginnings…

I’ve been wanting to make a system for a 2D game that would offer varying biomes. The two pieces I started on were creating noise-based shaders for each type of block/square, and working on deforming the individual quads so that they weren’t perfectly square, but did always match up along seams.

The beginning of this video briefly shows the near-infinite world space variance for the four shaders that I’ve got working so far. Clockwise from the top left, the materials are: iron, copper, coal, and gold. The movement is actually the whole tileset being moved around the world space (the camera auto-follows the center of the tileset). The second part of the video shows the deformed quads – which actually brings me to the “right” and “wrong” way to devise triangles for a mesh.

Pedants would say this is very much the “wrong” way to do it: near-square shapes should be built from pairs of triangles wherever possible. And for mapping textures to quads, that is certainly true; however, the use case here is different.

First, there are no textures being utilized. With it being strictly shader-based, and with the shader specifying values based on world positions, the triangle design doesn’t matter at all for the visual effects. Additionally, the methods I’m using allow for easy programmatic deformation, with a virtually unlimited number of points along each side of the quad available for the deformation. Currently that’s controlled by specifying the number of line segments each side should be broken down into.

Before and after deformation with two segments per side

This image is just breaking the quad’s sides into two segments each.

Before and after deformation with five segments per side

Here’s five segments per side.

Before and after deformation with ten segments per side

And lastly ten segments per side.

You can see that with five and ten segments, the actual deformation in the upper left is not manifold in two dimensions. But because the visual is driven by shaders, it doesn’t actually cause any issues, and the tiles to the left and above this one still line up properly, meaning there’s also no z-fighting. While I still need to clean up the noise used for the deformations a bit, there’s a benefit to knowing that even errant geometry isn’t going to cause issues (because in a randomly generated world with tens of thousands of tiles, there’s always the chance for float-based math to be off).

The additional benefit to using “non-standard” triangles radiating out from the center is that it makes programmatic variation much easier to accomplish, as no tile needs to know anything about any of its neighbors to deform and still fit properly. This actually leads into another useful factor noted below. But here, the math and calculations are just much easier. In the list of vertices, vertices[0] is also point (0, 0) of the quad: the center. All other vertices radiate outward, starting from the upper-lefthand corner and moving clockwise around the quad. This also means that setting the mesh triangle[] array is easier, because every triplet starts with 0 and then stutter-steps upward, e.g. (0, 1, 2, 0, 2, 3, 0, 3, 4, 0, 4, 5, 0, 5, 6, …)

With such a simple set, a basic for-loop allows this to be done without prior knowledge of how many segments each quad has.

As mentioned above, there’s another useful bit here. If you look more closely at one of the before and after images, you’ll notice that the quad is centered on the world grid, which is not necessarily the default in Unity. Typically, a quad at coordinates (0, 0) would have its upper-lefthand corner at (0, 0) and its opposite corner at either (1, 1) or (1, -1), depending on how you have things set up. Here, when building the quad programmatically, rather than using the typical range of (0, 1) I use the range (-0.5, 0.5).

There are two primary reasons for this. From an object-control perspective, this means that the tile at (5, -6) is centered at (5, -6), so destruction of that tile is the destruction of a 1×1 area centered at that position; there’s no need to worry about which direction the quad expands from its origin, because the origin is the center. The second reason is from a programmatic geometry view. Because all tiles are centered, deforming the geometry along each x- and y-value between tiles is consistent between negative and positive worldspace.

Let’s take a little tutorial approach here to see what the code looks like. Here’s the creation of the quad itself. Yes, there’s some housekeeping to do with this yet, but it’s functional and fast.

void CreateQuad()
{
		Mesh mesh = new Mesh();
		mesh.name = "ScriptedMesh";

		Vector3[] vertices = new Vector3[1 + (4 * (stepsXY.Length - 1))];

		//Center
		vertices[0] = new Vector3(0f, 0f, 0f);

		for (int i = 0; i < stepsXY.Length - 1; i++)
		{
			//Top
			vertices[(0 * (stepsXY.Length - 1)) + i + 1] = new Vector3(stepsXY[i], stepsXY[stepsXY.Length - 1], 0f);
			//Right
			vertices[(1 * (stepsXY.Length - 1)) + i + 1] = new Vector3(stepsXY[stepsXY.Length - 1], stepsXY[stepsXY.Length - 1 - i], 0f);
			//Bottom
			vertices[(2 * (stepsXY.Length - 1)) + i + 1] = new Vector3(stepsXY[stepsXY.Length - 1 - i], stepsXY[0], 0f);
			//Left
			vertices[(3 * (stepsXY.Length - 1)) + i + 1] = new Vector3(stepsXY[0], stepsXY[i], 0f);
		}

		Vector3[] normals = new Vector3[vertices.Length];

		for (int i = 0; i < normals.Length; i++)
			normals[i] = Vector3.forward;

		int[] triangles = new int[4 * (stepsXY.Length - 1) * 3];

		int innerIndex = 0;
		for (int i = 0; i < 4 * (stepsXY.Length - 1); i++)
		{
			triangles[innerIndex++] = 0;
			triangles[innerIndex++] = i + 1;
			triangles[innerIndex++] = i + 2;
		}

		triangles[innerIndex - 1] = 1;

		mesh.vertices = vertices;
		mesh.normals = normals;
		mesh.triangles = triangles;

		mesh.RecalculateBounds();

		GameObject quad = new GameObject("Block");
		quad.transform.position = position;
		quad.transform.parent = this.parent.transform;

		MeshFilter meshFilter = (MeshFilter)quad.AddComponent(typeof(MeshFilter));
		meshFilter.mesh = mesh;

		MeshRenderer meshRenderer = (MeshRenderer)quad.AddComponent(typeof(MeshRenderer));
		meshRenderer.material = this.bMat.Material;

		this.self = quad;
	}

We create the mesh, and then determine the number of vertices. Again, because we aren’t turning the quad into a bunch of smaller squares, this is an easy calculation, and the center vertex is the only vertex that needs to be added beyond those on the edges.

Vector3[] vertices = new Vector3[1 + (4 * (stepsXY.Length - 1))];

Here, the number of vertices is 1 for the center, plus 4 * (stepsXY.Length - 1), where stepsXY holds the step values along each side (so stepsXY.Length is the number of vertices per side). We’re subtracting 1 from each side because the second corner vertex of a given side will be the first vertex of the next side. In other words, if we’re breaking each side into two segments, you’d have {(-0.5, 0.5), (0, 0.5), (0.5, 0.5)} for the top, and {(0.5, 0.5), (0.5, 0), (0.5, -0.5)} for the right side. We don’t want or need (0.5, 0.5) listed twice in the array of vertices, so the -1 prevents that from happening.
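As an aside, stepsXY itself never appears in the post; it’s presumably just segments + 1 evenly spaced values across (-0.5, 0.5), built with something like this hypothetical helper:

// Two segments per side would give {-0.5, 0, 0.5}.
float[] BuildSteps(int segments)
{
	float[] steps = new float[segments + 1];

	for (int i = 0; i <= segments; i++)
		steps[i] = -0.5f + (i / (float)segments);

	return steps;
}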

We then add the center vertex to the array and run through a for-loop that also only needs to execute stepsXY.Length - 1 times, as each pass hits the same point on all four sides. Yes, I used 0 * … in the first (top) calculation; this is just for clarity. You’ll also notice that in the right and bottom calculations, we’re subtracting i rather than adding it. This is so that the triangles calculation later stays easy and all vertices in the array exist in clockwise order starting at vertices[1].

All normals are forward-facing, so it’s easy to just fill the normals array with Vector3.forward, one entry for each vertex.

Now the triangles array is initialized (remember to multiply it by 3 since each triangle has three vertices). Using an index/iteration value that is external to the for-loop allows for quick calculation; remember from above that the whole array is sets of 0, x, y where x and y stutter-step upwards. Finally, we set the very last triangle point back to 1 – the first value in the vertex array that is on the outer edge (this completes the circuit around the quad).

The rest is just building the mesh out. You may have noticed that I don’t build an array of UVs, nor plug UVs into the mesh building. Again, because I’m not using textures, there’s no mapping from a texture to the mesh, and therefore UVs are not needed. The shader doesn’t care about UVs. Of course, you could build shaders that DO care about UV calculations, in which case UVs would also need to be added (which may be a bit more complicated given the triangle geometry here).

Now we’ll look at the deformation code.

public void DeformQuad()
{
		MeshFilter mf = this.self.GetComponent<MeshFilter>();
		Mesh m = mf.mesh;
		Vector3[] vertices = m.vertices;

		for (int i = 0; i < vertices.Length; i++)
		{
			if ((vertices[i].x == stepsXY[0] || vertices[i].x == stepsXY[stepsXY.Length - 1]) && (vertices[i].y == stepsXY[0] || vertices[i].y == stepsXY[stepsXY.Length - 1]))
				continue;

			// Top
			if (vertices[i].y == stepsXY[stepsXY.Length - 1])
			{
				float noiseValue = Map(Mathf.PerlinNoise(this.position.x + vertices[i].x, this.position.y + stepsXY[stepsXY.Length - 1]), 0f, 1f, -0.3f, 0.3f);
				vertices[i] = new Vector3(vertices[i].x + noiseValue, vertices[i].y + noiseValue, 0f);
			}

			// Bottom
			if (vertices[i].y == stepsXY[0])
			{
				float noiseValue = Map(Mathf.PerlinNoise(this.position.x + vertices[i].x, this.position.y + stepsXY[0]), 0f, 1f, -0.3f, 0.3f);
				vertices[i] = new Vector3(vertices[i].x + noiseValue, vertices[i].y + noiseValue, 0f);
			}

			// Right
			if (vertices[i].x == stepsXY[stepsXY.Length - 1])
			{
				float noiseValue = Map(Mathf.PerlinNoise(this.position.x + stepsXY[stepsXY.Length - 1], this.position.y + vertices[i].y), 0f, 1f, -0.3f, 0.3f);
				vertices[i] = new Vector3(vertices[i].x + noiseValue, vertices[i].y + noiseValue, 0f);
			}

			// Left
			if (vertices[i].x == stepsXY[0])
			{
				float noiseValue = Map(Mathf.PerlinNoise(this.position.x + stepsXY[0], this.position.y + vertices[i].y), 0f, 1f, -0.3f, 0.3f);
				vertices[i] = new Vector3(vertices[i].x + noiseValue, vertices[i].y + noiseValue, 0f);
			}
		}

		m.vertices = vertices;
		m.RecalculateBounds();
	}

Currently, I’m just using Unity’s built-in Perlin noise method in the Mathf library. This leaves much to be desired, but before I dove into creating a noise function, I wanted to ensure this all worked as I expected. Basically, this just extracts the vertices from the mesh, performs the calculations on them, and rebuilds the mesh. The first if-statement is intended to keep the corners of each quad from falling out of alignment. It probably won’t be kept, but it was something I was trying out.

You can also see that I map the noise function’s return values from (0, 1) to (-0.3, 0.3), as I don’t want any deformed vertex landing at or close to the center of another tile. I still need to play around with this value, but it will depend on the noise function I end up with later on.
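Map() isn’t shown above either; it’s just a standard linear remap along these lines, matching how it’s called in DeformQuad():

// Rescales value from the range (inMin, inMax) into (outMin, outMax).
static float Map(float value, float inMin, float inMax, float outMin, float outMax)
{
	return outMin + (value - inMin) * (outMax - outMin) / (inMax - inMin);
}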

From a performance stance, it might be better to deform the vertices as the mesh is being created rather than creating a perfectly square mesh and then deforming it. But that’s the type of optimization that would almost certainly hinder the legibility of the code, and keeping the two functions separate makes it easier to change either one. And really, the amount of time taken is pretty small. Even with large sets of tiles, it takes no more than ~40μs to generate and ~27μs to deform each quad, or about 17s for over 250,000 tiles. The obvious plan would eventually be to chunk them (à la Minecraft), and there’s almost definitely some room for fine-tuning the process. Plus, this is just executing in the editor, so it would almost certainly perform better in a release executable.

Generative Glyphs

I came across this post on the Reddit sub r/Generative the other day and thought that u/ivanfleon had done something both relatively simple and very cool. I had some ideas for generative glyphs and started by mimicking his sample there; thus was born the RectGlyph:

Two different RectGlyph settings

The interface came shortly after RectGlyph was done as I was trying to troubleshoot work on the PolarGlyph. It made it easier to see what sort of variations could be had, but also allowed debugging to be more visual (which really helps me a lot).

I’ve always been fascinated with languages, both real and imagined. As I was working toward my PolarGlyph idea, I stumbled upon a few happy accidents, such as the RunicGlyph.

Two RunicGlyph settings

And also the AngularGlyph:

Two AngularGlyph settings

And eventually worked out the kinks for the PolarGlyph:

Two PolarGlyph settings

I have a few others being worked on, as well as some ideas for an editor so you can take your randomly generated glyphs and add line segments to, or remove them from, any of the glyphs in the set.

My pie-in-the-sky idea is to also be able to save them as a TrueType font so that they can be used in Unity (or anywhere), and possibly to save them as an SVG or vector sheet for use in various vector-based software.

It’s been a fun side project so far.

Micro-optimization #1: Setting a `done` flag using bitwise operators

I’m planning a series of very brief micro-optimization notes, both for my own records and to help anyone else who may be looking at some optimizations. I plan to provide minimal code, results, and brief explanations.

In this case, I came across a bitwise |= for setting a done flag in one of Penny de Byl’s Udemy courses. I was curious and decided to see whether it was actually an optimization. It felt like it would be, but I didn’t expect it to be by much. Sure enough, it is, but not by much. Still, if you have a lot of while-loops in your code checking against a boolean flag, it could be handy.

The code:

static void Main(string[] args)
{
	for (int i = 0; i < 10; i++)
	{
		DoneTest1();
		DoneTest2();
	}
	Console.Read();

	Environment.Exit(0);
}

static void DoneTest1()
{
	bool done = false;
	int x = 0;
	int xSize = 100_000_000;

	while (!done)
	{
		x++;
		done |= (x < 0 || x >= xSize);
	}
}

static void DoneTest2()
{
	bool done = false;
	int x = 0;
	int xSize = 100_000_000;

	while (!done)
	{
		x++;
		if (x < 0 || x >= xSize)
			done = true;
	}
}

The results:

Using done |=  : 112ms  (1122354 ticks).
Using if  : 151ms  (1518356 ticks).
Using done |=  : 107ms  (1073112 ticks).
Using if  : 129ms  (1298421 ticks).
Using done |=  : 127ms  (1275415 ticks).
Using if  : 141ms  (1414998 ticks).
Using done |=  : 111ms  (1112100 ticks).
Using if  : 127ms  (1273705 ticks).
Using done |=  : 108ms  (1086612 ticks).
Using if  : 140ms  (1400030 ticks).
Using done |=  : 127ms  (1271739 ticks).
Using if  : 128ms  (1282120 ticks).
Using done |=  : 108ms  (1089749 ticks).
Using if  : 111ms  (1118823 ticks).
Using done |=  : 108ms  (1086191 ticks).
Using if  : 110ms  (1100477 ticks).
Using done |=  : 100ms  (1002949 ticks).
Using if  : 113ms  (1131274 ticks).
Using done |=  : 104ms  (1040928 ticks).
Using if  : 110ms  (1101986 ticks).

Each iteration shows better performance using the bitwise |= compare-and-set rather than the if-statement. Across the ten iterations, the bitwise version averaged 111.2ms while the if-statement averaged 126.0ms, which amounts to a ~11.75% improvement. The bulk of the time in each run is, of course, the computer counting to 100,000,000, but given that the only difference between the two methods is the check for setting the flag, the variance is accounted for by that difference.

The Reason:

The |= version is branchless: every pass unconditionally computes the comparison and ORs the result into the flag, while the if-statement adds a conditional branch inside the loop that the processor has to predict. Bitwise operations are cheap on virtually every architecture, and when they can replace a branch, they usually result in more performant, if less readable, code.

Methodology:

For those of you not familiar with benchmarking in C#, I typically use the .NET Stopwatch class (System.Diagnostics.Stopwatch). I’ve removed the Stopwatch code from the example above for brevity. So long as you Start, Stop, read, and Reset your stopwatch in the appropriate locations, you don’t need to worry about setup and other functionality, as the only portion timed is what’s wrapped between Start and Stop.
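For completeness, the harness looks roughly like this (a sketch of what was stripped out, not the verbatim code):

using System;
using System.Diagnostics;

static void TimeIt(string label, Action test)
{
	Stopwatch sw = Stopwatch.StartNew();	// times only the test body
	test();
	sw.Stop();
	Console.WriteLine($"Using {label}  : {sw.ElapsedMilliseconds}ms  ({sw.ElapsedTicks} ticks).");
}

// Inside the loop in Main:
//   TimeIt("done |=", DoneTest1);
//   TimeIt("if", DoneTest2);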

I also run the program (usually a .NET Core console application) as a release executable rather than a debug executable to ensure performance isn’t being bogged down by the debugger for any reason.

Lastly, I try to run ten (or more) iterations of each thing that I’m testing. As you can see in the results, timing can vary for all manner of reasons. Sometimes the first execution of a code block is slower than subsequent executions. I also try to interleave the methods being tested (e.g.: 1, 2, 1, 2, 1, 2, 1, 2 rather than 1, 1, 1, 1, 2, 2, 2, 2) to help ensure a code block isn’t being cached and repeated. Running only a single iteration is often misleading. In this case, all ten iterations of the bitwise comparison were faster, but it’s often the case that the slower of two methods has a small percentage of faster executions, and a single iteration may give you incorrect information about which is typically faster.

Playing with ray-tracing, Pt. 1

Back to working on Labyrintheer. But it’s Unity 2020 now, and I’ve been interested in playing with ray-tracing (RTX), so I started a new project, brought in some old assets, and started toying with it.

The first entry is the trusty Gel Cube (from InfinityPBR) with RTX materials applied. This one is lit only by the internal point light, which fades out when the cube dies. This shows both of the attack animations and the death animation without scene lighting:

The next is with scene lighting. This is where RTX really shines with colored shadows cast through the transparent material of the GelCube:

The video quality isn’t as good as I’d hoped – need to work on that a bit.

Really, working with RTX in Unity’s HDRP isn’t terribly difficult, but there are a variety of gotchas that make it a bit of a headache, and materials are set up significantly differently (as are lights and scene volumes and…) That said, I plan to work on a few creatures first, just to get a feel for it all, then move on to bringing in the dungeons under RTX. Should be fun!

Advent of Code 2018

For those new to coding, or folk who want to exercise their brain cells a bit, check out this year’s Advent of Code 2018. Each day offers two puzzles to flex your coding skills, build new skills, or just generally put your code-fu into practice. They are language agnostic, and often quite interesting.

If you want to join a private leaderboard, my code is: 427712-b8e50570 .  Feel free to join.

I also have been posting my own code to GitHub here.  It’s not pretty, and definitely not always the best approach, but I’ve been tackling each day with an idea for how to solve it rather than trying to find the quickest solution – usually with some robustness in mind that would make the code more flexible and allow for it to be used in other circumstances.  Not sure why… just how I roll.  I also haven’t been cleaning up after myself.  If I go down a path that leads to failure, it’s usually still there and either commented out or just not executed.  Since this is sort of a sandbox event for me, take any code I have in that repo with a grain of salt.  All days are currently in C# and for Visual Studio for Windows, though I may slowly put together some Mac console apps and Unity stuff in C# as well.

Making Materials and Models! Mmmmmmmm (M)Normal Maps?

Not being an artist can seem like one of the most daunting parts of going it alone (or mostly alone) as an indie dev.  I’m a fair photographer, but that’s as far as my artistic capabilities have previously taken me.  Most of my stick figures look, well… disfigured, and things like straight lines are as simple for me as writing in Martian.  But I’ve been really working to increase my skill set here, and be able to create things in Labyrintheer that actually LOOK pretty decent.

In light of that, I started by picking up some great models from InfinityPBR.  Andrew, the awesome dude behind iPBR (as I reference it for myself) includes some great PBR materials as SBSAR files (and recently SBS files) that really helped me delve into how materials work and what was meant by all of the components: normal, roughness, metallic, height (or specular and glossiness depending on your preferred workflow); metallic/roughness versus specular/glossiness; and a bit about what all of those components can do.

But this wasn’t enough customization for me.  As I’ve mentioned before I started using Archimatix to design some architectural models (which is still something I’m getting a handle on, simply due to the sheer variability of AX and its nodes).  As I worked through some (very simple) wall models and such, I realized that I also wanted more control over the materials themselves.  iPBR offers an INCREDIBLE amount of customization, but I’m just a pain in my own arse that way.  So the next step was…  Substance Designer.

For those new to the art game, Substance Designer is an amazing software package that lets you node-author your own materials and export SBSAR files that can be used to procedurally create materials in Unity (and I believe Unreal Engine).

The beauty of the node-driven design is that, while artist-type folk seem to settle into it easily enough, we logic-driven code monkey types can also create some really stunning work since everything can be reduced to parameters and inputs and outputs.  But before you can do any of this, you need to grasp some of the basics.  I’m not high-speed enough yet to really offer a tutorial, but I can share the wealth of tutorial love that I’ve been using.  So, let’s start with normal maps.

Normal Maps

I ran across this tutorial literally this morning, which is what brought me to create this post and share this info. More blog post than actual tutorial, the information about normal maps here is presented concisely, tersely, and with excellent clarity. Even someone who has never heard of a material in game parlance could probably understand what’s being explained.

The gist of normal maps is to provide interaction between the materials and the light in your scenes. This is not the same as the metallic/roughness aspects; it’s more to “pretend” that there’s dirt, smudges, small deformities, or other similar features on your object. When making a material, you often preview it on a perfectly flat surface. But you still want to see the details: details that offer a 3D appearance on a completely flat 2D plane. This is where normal maps come in.

Let’s look at the image below, for instance:

Demon Head Emboss – Coin Face

This is meant to be the head of a coin I’m working on as sort of a self-tutorial.  The eye can easily see this as an embossed image, but due to the normal map, moving the light around changes how and where shadows happen.  Here the light is off to the left, so left of the “ridges” (it’s still just a flat plane) looks brighter, and right of them produces shadows.  If I were to move the light source to the other side, the opposite would be true.  This is how normal maps help reinforce the 3D appearance of an object that doesn’t have detailed modeling done.  This is a HUGE benefit to game performance – it’s much easier to draw a coin that is perfectly flat on both sides, and apply this material to make it appear 3D than it is to produce a 3D model with this level of detail.  Easier both in actual creation of the object as well as on your GPU for rendering it.

This video shows the coin in Unity. The scuffs and scratches are both part of the base color of the item, but the deeper scratches are also mostly in the normal map, allowing light to fall into them or be occluded from them based on the light angle. Note that in the above video, the edges of the coin are NOT flat; those are actually angled on the model itself. Attempting to fake that geometry with normal maps would not work well (at least not in any way I would be able to do it).

That’s what I have for normal maps, for now.  But I plan to continue this as a growing series of posts on PBR materials to help demystify them for those of us new to this whole thing.

Substances, Dungeon Floors, New Workflow

I’ve picked up Substance Designer and have started working on some better assets for the Dungeon biome. Right now what I have is a bit busy, but I thought I’d share some of what I’m doing. I started with a great tutorial on YouTube, where the presenter was kind enough to offer up his SBS file. I made a variety of modifications and exposed some parameters, kicked out the SBSAR, and loaded it into my staging project in Unity.

I created three separate materials and applied them to their own GameObjects that Dungeon Architect uses to generate the floor.  Right off the bat, it looked like this:

Attempt 1

Like I said… busy.  But at least it’s more interesting than what I had before.  I decided to up the game by adding a random rotation – each floor is randomly rotated by 0, 90, 180, or 270 degrees.  That looked like this:

Attempt 2

Still busy, but at least a little more random.  Then I felt that the stones shouldn’t always be the same size, so I set each to have slightly different numbers of tiled stones per object:

Attempt 3

Lastly, it still seemed too busy, so I lowered the overall count of each. One was 4×4, one 5×5, and the last was 4×5, making some stones not square. That looks like this:

Attempt 4

Now it’s less busy, more random feeling, and looks decent.  I think I’ll probably go back to the Substance a bit and see what I can do about breaking them up a bit more, but for my first modification of a substance I’m pretty pleased.

Archimatix: A New Tool and Some Thoughts on Labyrintheer

So… how’s things?  Yeah, that’s cool.  I’m over here playing with Archimatix (AX) and it’s pretty much the best thing ever.  I’ve been watching this great tool develop over the past several months (though the dev has been working on it for much, much longer than that), and I have to say that I am incredibly impressed.  I’ve been fooling around with tutorials and random stuff for the past day to see how things can fit into Labyrintheer.

If you recall my earlier posts on the topic, I am not an artist.  I am especially not a 3D modeler.  Much of what I’ve been able to accomplish thus far has been thanks to the incredibly awesome 3D assets provided by InfinityPBR.  I’ve seriously considered how I’d ever get those sweet, custom models for some of the things I’ve considered in Labyrintheer over the past few years, and AX is the answer (I hope).

I’ve been messing with some of the node types just to get a feel for it, and one of the things I wanted to be able to do was create some oddly modern, organic shapes as structures and “art” statues in some areas (mostly towns and wilderness, not so much dungeons and caves). This was my first attempt at using the FreeCurve node to knock a shape out of a block and end up with something very open and extremely simple.

Ignore the robot; I don’t think that Labyrintheer is going to have robots. But Robot Kyle is the AX spokesbot, and he’s here just for scale.

While I don’t think this will make it into the game, I plan to use the concept to create monoliths that showcase the twelve elements in the game. This was actually an attempt at using something like a freehand-drawn Fibonacci spiral as a cutout. It didn’t do quite what I expected, but that’s part of the true fun with AX… the things that don’t work the way you expect but give you interesting new ideas.

At any rate, I’m sure I’ll be posting about AX now and then and maybe even showing off some assets that could end up in the game.  But for now I cannot recommend Archimatix highly enough for any Unity developer or artist.  It’s an utterly fantastic tool.