Unity Custom Attributes and Custom Editors

Shapes Editor 01
Custom editor/inspector view for Shapes.cs

While working on Programmatic Meshes, it became clear that I needed some custom gizmos to help me visualize things as I moved along. It became a real necessity during sphere generation, where certain sizes and segment counts were producing bad geometry. This was almost certainly due to the ordering of the int[] triangle array associated with the mesh. Since everything is generated by code, once all the vertices are created, the code also needs to sort them the same way every time to ensure that the facet tris are created correctly.

Sphere Vertices Indices

This was my first venture into gizmos aside from some very basic line drawings. I wanted the indices of each Vector3 in the mesh so that I could ensure they were getting properly ordered each time and would meet the requirements for tri calculation. So the above was born – and damn did it ever help me see where things were occasionally not working (sadly, I don’t have a screenshot of the bad sphere, but let’s just say that it was… not an ideal geometric shape).

After getting through that, I wanted to also change the inspector so that I could enable/disable vertex visualization. As shapes become more complex, the numbering is great, but I wanted a better visualization of the vertices in the scene view. As the screenshot above illustrates, unless you rotate the view around a bit, it’s easy to get lost with where vertices actually are in relation to one another.

Sphere Vertices Visualization

In the above GIF, you can see how movement helps determine which indices you’re viewing, but the ROYGBIV and SIZE options for visualization also help. In ROYGBIV mode, the closest vertices are red and the furthest are violet, with everything in between following the ROYGBIV order. With the size option, the closest vertices are the largest and the furthest are the smallest, scaled to suit in between. In either case, they update in real time. I’m not yet sure how this performs on very high-density meshes, and I’m sure some optimization will be necessary, but it’s good enough for my needs for now.

I wanted the collapsible Viewables area in the inspector for this, as well as for mesh data (Vertices[], Normals[], and Triangles[]). I also wanted to be able to select the Shape (each of which is a class inheriting from Primitive) and which Generate() method should be used (each shape has different generate methods).

For this to work, I created a few custom classes, which I added to my external DLL:

SelectableList<T> is a custom List<> type collection that has an interactive indexer property called .Selected that references the index of the selected item in the list. This has turned out to be handy for selecting items in lists from dropdowns in the inspector.
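The real class lives in that external DLL, so this is only a hypothetical sketch of what SelectableList<T> might look like; the member names and clamping behavior here are my assumptions:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of SelectableList<T>. The real class lives in the
// external DLL, so the member names and details here are assumptions.
public class SelectableList<T> : List<T>
{
    private int selectedIndex;

    // Index of the currently selected item, clamped to the list bounds.
    public int SelectedIndex
    {
        get => selectedIndex;
        set => selectedIndex = Math.Max(0, Math.Min(value, Count - 1));
    }

    // The selected item itself, or default(T) when the list is empty.
    public T Selected => Count > 0 ? this[SelectedIndex] : default;
}
```

Binding a dropdown to SelectedIndex and reading Selected afterwards is what makes this convenient in the inspector.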

MethodCaller<T> is a custom collection that contains a reference to a class (in this case, each shape class gets a method caller), and a SelectableList<MethodInfo> of the generation methods in that class.

And lastly, MethodDictionary<T> is a custom collection class that collects MethodCaller<T> objects. Its constructor takes a filter for classes (to remove the base class and secondary base classes where inheritance takes place on multiple levels), and a filter for methods based on the name of the methods to acquire.

The creation of the MethodDictionary also builds out the dictionary based on the filters provided, so there isn’t a lot of work needed to implement it. This is definitely a plus.
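Since those two classes also live in the DLL, here is a hedged sketch of how they might fit together; the demo Primitive/Circle types, the filter signatures, and the reflection wiring are all assumptions, not the original code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// Demo stand-ins for the shape hierarchy (names assumed).
public abstract class Primitive { }

public class Circle : Primitive
{
    public string GenerateStarburst() => "starburst";
    public string GenerateRings() => "rings";
    public string Helper() => "not a generator";
}

// Hypothetical sketch of MethodCaller<T>: one shape instance plus its
// generation methods (a SelectableList<MethodInfo> in the original).
public class MethodCaller<T>
{
    public T Instance;
    public List<MethodInfo> Methods;

    public MethodCaller(T instance, Func<MethodInfo, bool> methodFilter)
    {
        Instance = instance;
        Methods = instance.GetType()
            .GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
            .Where(methodFilter)
            .ToList();
    }
}

// Hypothetical sketch of MethodDictionary<T>: the constructor walks the
// assembly and builds one caller per concrete shape class that passes
// the class filter, as described above.
public class MethodDictionary<T>
{
    public List<MethodCaller<T>> Callers = new List<MethodCaller<T>>();

    public MethodDictionary(Func<Type, bool> classFilter, Func<MethodInfo, bool> methodFilter)
    {
        var shapeTypes = typeof(T).Assembly.GetTypes()
            .Where(t => typeof(T).IsAssignableFrom(t) && !t.IsAbstract && classFilter(t));

        foreach (var type in shapeTypes)
            Callers.Add(new MethodCaller<T>((T)Activator.CreateInstance(type), methodFilter));
    }
}
```

With filters like `t => true` and `m => m.Name.StartsWith("Generate")`, the constructor alone builds the whole dictionary, which is the low-effort implementation mentioned above.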

In the Unity code, I also created three custom attributes:

[Segmented] is applied to generate methods that use the segment count value. This allows the segment slider to be enabled/disabled when a generator is selected.

[DefaultGenerator] is applied to generate methods that the default Generate() method delegates to.

[GeneratorInfo] is applied to all generate methods and provides inline help text in the inspector – typically what the segment count actually controls (if it’s used) and what the measure/size value indicates (e.g.: circle/diameter, quad/size, hexagon/apothem*2).

Using reflection, I do something like this in the ShapeEditor.cs script:

if (shape.shapeMethods.Callers.Selected.Methods.Selected.GetCustomAttributes().SingleOrDefault(s => s.GetType() == typeof(SegmentedAttribute)) != null)

It’s not terribly pretty, but it’s fairly quick – quick enough for inspector draws – and allows the inspector panel to change on the fly as selections are made.
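That long chain could also be tucked behind a small extension method. This helper is hypothetical (it isn’t in the project), and the attribute class is redefined here only so the sketch compiles on its own – it matches the SegmentedAttribute shown further down:

```csharp
using System;
using System.Linq;
using System.Reflection;

// Matches the SegmentedAttribute defined later in the post; redefined
// here only so this sketch is self-contained.
[AttributeUsage(AttributeTargets.Method)]
public class SegmentedAttribute : Attribute { }

public static class MethodInfoExtensions
{
    // True when the method carries attribute T.
    public static bool HasAttribute<T>(this MethodInfo method) where T : Attribute
        => method.GetCustomAttributes(typeof(T), inherit: false).Any();
}

// Demo class (assumed) with one tagged and one untagged method.
public class Demo
{
    [Segmented] public void Tagged() { }
    public void Untagged() { }
}
```

The inspector check then shrinks to something like `if (shape.shapeMethods.Callers.Selected.Methods.Selected.HasAttribute<SegmentedAttribute>())`.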

It’s worth noting, if you haven’t worked with custom attributes before, that an attribute doesn’t need to have fields, despite every example I came across online containing them. Without fields, it’s basically an existence check – a boolean of sorts applied to a reflected method to change how the inspector is drawn. Some examples:

[AttributeUsage(AttributeTargets.Method)]
public class SegmentedAttribute : Attribute
{
}

[AttributeUsage(AttributeTargets.Method)]
public class DefaultGeneratorAttribute : Attribute
{
}

[AttributeUsage(AttributeTargets.Method)]
public class GeneratorInfoAttribute : Attribute
{
    public readonly string info;

    public GeneratorInfoAttribute(string info)
    {
        this.info = info;
    }
}

And usage on a class method:

[Segmented]
[DefaultGenerator]
[GeneratorInfo("Generates a circle based on the 'starburst' pattern.\n\nSize is the diameter of the circle.\n\nSegments is the number of segments _per quadrant_.")]
public Mesh GenerateStarburst() { /* code */ }
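On the editor side, the info string can be pulled back off the selected method via reflection and shown as a help box. A hedged sketch of the lookup in plain C# (the attribute and demo class are repeated here only for self-containment; the InfoFor helper is an assumption, not the actual ShapeEditor code):

```csharp
using System;
using System.Reflection;

// Same shape as the GeneratorInfoAttribute shown above.
[AttributeUsage(AttributeTargets.Method)]
public class GeneratorInfoAttribute : Attribute
{
    public readonly string info;
    public GeneratorInfoAttribute(string info) { this.info = info; }
}

public class Circle
{
    [GeneratorInfo("Size is the diameter of the circle.")]
    public void GenerateStarburst() { }
}

public static class GeneratorHelp
{
    // Pulls the help text off a generate method, or null if untagged.
    // In the custom editor, this string would be handed to something
    // like EditorGUILayout.HelpBox(info, MessageType.Info).
    public static string InfoFor(MethodInfo method)
        => method.GetCustomAttribute<GeneratorInfoAttribute>()?.info;
}
```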

I will probably write some additional posts about this, maybe with more code, as this project continues. And I’m sure I’ll have a Part 2 of Programmatic Meshes in the next week or two.

Micro-optimization #1: Setting a `done` flag using bitwise operators

I’m planning a series of very brief micro-optimization notes, both for my own records and to help anyone else who may be looking at some optimizations. I plan to provide minimal code, results, and brief explanations.

In this case, I came across a bitwise |= for setting a done flag in one of Penny de Byl’s Udemy courses. I was curious and decided to see whether it was actually an optimization. It felt like it would be, though I didn’t expect much of a difference – and sure enough, it’s faster, but not by much. Still, if your code has a lot of while-loops checking against a boolean flag, it could be handy.

The code:

static void Main(string[] args)
{
	for (int i = 0; i < 10; i++)
	{
		DoneTest1();
		DoneTest2();
	}
	Console.Read();

	Environment.Exit(0);
}

static void DoneTest1()
{
	bool done = false;
	int x = 0;
	int xSize = 100_000_000;

	while (!done)
	{
		x++;
		done |= (x < 0 || x >= xSize);
	}
}

static void DoneTest2()
{
	bool done = false;
	int x = 0;
	int xSize = 100_000_000;

	while (!done)
	{
		x++;
		if (x < 0 || x >= xSize)
			done = true;
	}
}

The results:

Using done |=  : 112ms  (1122354 ticks).
Using if  : 151ms  (1518356 ticks).
Using done |=  : 107ms  (1073112 ticks).
Using if  : 129ms  (1298421 ticks).
Using done |=  : 127ms  (1275415 ticks).
Using if  : 141ms  (1414998 ticks).
Using done |=  : 111ms  (1112100 ticks).
Using if  : 127ms  (1273705 ticks).
Using done |=  : 108ms  (1086612 ticks).
Using if  : 140ms  (1400030 ticks).
Using done |=  : 127ms  (1271739 ticks).
Using if  : 128ms  (1282120 ticks).
Using done |=  : 108ms  (1089749 ticks).
Using if  : 111ms  (1118823 ticks).
Using done |=  : 108ms  (1086191 ticks).
Using if  : 110ms  (1100477 ticks).
Using done |=  : 100ms  (1002949 ticks).
Using if  : 113ms  (1131274 ticks).
Using done |=  : 104ms  (1040928 ticks).
Using if  : 110ms  (1101986 ticks).

Each iteration shows better performance using the bitwise |= compare-and-set rather than the if-statement. Across the ten iterations, the bitwise version averaged 111.2ms while the if-statement averaged 126.0ms, a ~11.75% reduction in run time. The bulk of the time in each run is, of course, the computer counting to 100,000,000, but since the only difference between the two methods is how the flag is set, the gap is accounted for by that difference.

The Reason:

Bitwise operations are cheap on virtually every architecture, while branching statements are typically heavier – a conditional write can cost a pipeline stall when the branch predictor guesses wrong, whereas the |= form sets the flag with an unconditional OR. When bitwise operations are an option, they usually result in more performant, if less readable, code.
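For bool operands, |= is just a non-short-circuiting OR-assign, so `done |= cond` and `if (cond) done = true;` always produce the same flag value. A quick sanity check of that equivalence (helper names are mine, not from the benchmark code):

```csharp
using System;

public static class FlagDemo
{
    // Branchless form: unconditionally ORs the condition into the flag.
    public static bool SetWithOr(bool done, bool cond) => done | cond;

    // Branching form: only writes the flag when the condition holds.
    public static bool SetWithIf(bool done, bool cond)
    {
        if (cond) done = true;
        return done;
    }
}
```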

Methodology:

For those of you not familiar with benchmarking in C#, I typically use the .NET Stopwatch class (System.Diagnostics.Stopwatch). I’ve removed the Stopwatch code from the example above for brevity. So long as you Start, Stop, read, and Reset your stopwatch in the appropriate places, you don’t need to worry about setup and other functionality, as the only portion timed is what is wrapped between Start and Stop.
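A minimal reconstruction of that harness – the Stopwatch wiring here is my assumption, since the original listing stripped it out:

```csharp
using System;
using System.Diagnostics;

public static class Bench
{
    // Times one run of the given action and prints milliseconds and
    // ticks in the same shape as the result lines above.
    public static long Time(string label, Action action)
    {
        var sw = Stopwatch.StartNew();   // Start
        action();
        sw.Stop();                       // Stop
        Console.WriteLine($"Using {label} : {sw.ElapsedMilliseconds}ms  ({sw.ElapsedTicks} ticks).");
        return sw.ElapsedMilliseconds;   // read; StartNew stands in for Reset on reuse
    }
}
```

Inside the loop in Main, the interleaved calls would then look like `Bench.Time("done |= ", DoneTest1); Bench.Time("if ", DoneTest2);`.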

I also run the program (usually a .NET Core console application) as a release executable rather than a debug executable to ensure performance isn’t being bogged down by the debugger for any reason.

Lastly, I try to run ten (or more) iterations of each thing that I’m testing. As you can see in the results, timing can vary for all manner of reasons. Sometimes the first execution of a code block is slower than subsequent executions. I also try to interleave the methods being tested (e.g.: 1, 2, 1, 2, 1, 2, 1, 2 rather than 1, 1, 1, 1, 2, 2, 2, 2) to help ensure a code block isn’t being cached and repeated. Running only a single iteration is often misleading: in this case, all ten iterations of the bitwise comparison were faster, but it’s often the case that the slower of two methods has a small percentage of faster executions, and a single iteration may give incorrect information about which is typically faster.