Materials Study – Crystals/Gemstones – Strawberry Quartz

As I mentioned on my Patreon, as part of my materials studies in Substance Designer and Blender, I really want to work on crystals and gemstones. I’ve been collecting quite a few reference images on this Pinterest board over the last few months, and have decided that the first one I’m going after is the strawberry quartz.

I have a few ideas for how to best do this. I’m initially working in Substance3D to create the surface material, but I’m also planning to use a volumetric shader combined with the surface shader in Blender to create an effect that looks more like the quartz. Those red veins are not just on the surface!

So, for my first iterations of both surface materials and volumetric shaders, I have the following first six WIP attempts:

#1 was just the first run at the surface shader. It’s a little too busy, and both the quartz and the veins are running a little too much toward pink.

#2 added a multidirectional blur to calm down the veining a bit.

#3 added the roughness and metallic channels to the texture and drove the veins more toward red.

#4 touched up all of the channels; the UVs on the model were also not right on two sides (visible on the left side in the image). It also added the initial volumetric pass inside the crystal.

#5 made minor touchups.

#6 fixed the UVs and adjusted the coloring and the density of the volumetric pass.

So, during these steps, widescale changes were made in Substance to the material textures themselves. In Blender, after getting Cycles set up the way I wanted, I made some minor detail changes to both the volumetric and surface shaders. There’s still a lot of work to be done, but not bad for a little bit of time in the morning.

FWIW, the blue cube in the background of the renders is there simply to verify the level of transparency. I’ll probably change the setup some so that there’s a white/50%/black background with some frosted lighting coming in from underneath and slightly to the front. The surface itself looks pretty decent, though I’m sure there will be some fine-tuning. The major changes I think still need to be made are in the volumetric shader aspect.

Earth, we have Patreon!

Just wanted to announce my new Patreon page. There’s also a link to it in the social media section at the top right of the page. The $1 tier (for now) will unlock all content, but I plan to post at least as much free content as paid.

Paid content will typically include assets (.blend, .fbx, .sbs, .sbsar files), while free content will typically be mini-tutorials, code snippets, smaller assets like textures, and discussions about tutorials and assets available elsewhere online.

Feel free to check it out, and please follow for the free content even if you aren’t interested in subscribing. ^_^

Blender: Alchemy Lab, pt. I

Something I’ve been wanting to work on for ages, and one of the key reasons I’ve started playing with Blender, is a great, fully-modeled alchemy lab or wizard’s lab. I wanted to model and skin every component from scratch, create appropriate lighting, and eventually animate a small scene.

That effort has begun…

Some of the materials are currently placeholders – right now just the walls and the ground. And everything is iterative, in that I will likely go back to each piece over time to tweak models and materials as the scene (and my skillset) grows.

The very first thing I started modeling was a small double-bubble glass container for liquids. It turned out pretty nicely so far – standing at 0.299m tall (~11.75″), not including the cork. Doesn’t every alchemy bottle need a cork?

Double Bubble Container

The red coming through is just a red cube I was using to get the alpha, transmission, and IOR dialed in. In this case the container has water in it. I also wanted to play around with some odd glowing liquids, so I just threw together green and red options as more of a quick placeholder.

I figured the next step was that I needed to make a table to start putting stuff on. Since reference images are an amazing tool, this is the reference I was using for the table.

Alchemy Lab Simple Table Reference

Pretty simple, but it looks sturdy and timeless. I still plan to add the braces along the bottom and top of the legs, but for now I’ve got this:

Alchemy Lab Simple Table

I had a lot of fun with this one. While it adds complexity (and render time), I wanted the planks of the table top to be slightly different. Each one got a light level of deformation – small chunks removed, larger gouges, and other things that were better modeled than added via normal maps. Then using a single wood grain texture, I unwrapped the UVs for each plank and each leg and applied those unwrappings to slightly shifted areas of the texture. So one texture, but nine different results for how they appear. This adds to the feeling of a real table – all made from the same type of wood, but not with identical grain, which would be an odd thing to see.

Alchemy Lab Simple Table

Not a particularly great render, but it definitely gives the idea of what I was going for.

Next on my list was a coil candle, sometimes called an hour candle since you can control how long a segment burns before going out. My mom had one or two of these in her antique collection when I was a kid, and I remember thinking that it was just a really cool way to have a light source and a very rough timer at hand. Reference image:

Coil Candle

This is what I’ve worked on over the last couple of days. I ended up using Blender Rookie‘s tutorial on making a coil as the jumping-off point.

Coil Candle with Brass Dish

I was able to get a rough shape for the candle and added a brass dish as a start. I still have that little tail at the bottom to fix up, and a variety of details to add, but progress was made. I added the brass spindle and the mechanism used to “stop” the candle, bent the top of the candle upwards, and added a wick and a small flame… a flame that also still desperately needs some work.

Coil Candle

There’s still some work to be done on this. Probably adding small bits of wax to the dish, definitely adding feet to the dish, and possibly a small handle. I’m also considering adding a glass baffle.

So far, there isn’t much. Given that this is all a learn-as-I-go process, it’s slow. I’m sure it’ll pick up quite a bit as I move forward. At least I really hope so lol. Right now, the scene as far as objects go looks like this:

Coil Candle and Double Bubble on Simple Table

Some things I plan to add:

  • A variety of jars and containers: glass, leaded glass, clay, and ceramic.
  • A “lab” setup – small distiller and other chemistry/alchemy equipment.
  • Additional light sources – I’d like the entire scene to be lit via ray-tracing with Cycles, all from actual light sources: candles, sconces, torches, weird glowing objects, et cetera.
  • Various furnishings: tables, wall-mounted items, chests, shelving.
  • A hearth/fireplace of some sort, possibly with some basic cooking items.
  • A banner or two, as I definitely want to also work with cloth.
  • Animations – currently the candle flame is animated, but it sucks. I’d also like to have the light source for flames flicker and move with the flames themselves. I had done this for wall sconces in Labyrintheer programmatically, but am still trying to find the best way to go about this in Blender.
  • Packaging: If this actually works out well, and a good collection comes into being, I’d like to possibly sell this as a package, ready to go for Unity and Unreal as a sort of kitbash type of deal. Who knows how long it will take, but a production-ready kit is definitely a goal.

Blender: The omnipresent donut tutorial

I haven’t forgotten about my many unfinished development projects – Labyrintheer, Deuta, Evocuration, Atomancer – or my more personal projects – Sudoku Solver, Image Tools, XephLibs. But lo and behold, I’ve dived into Blender recently, and it’s been a lot of fun. What better way to kick things off than the ever-present Donut Tutorial from Andrew Price (aka Blender Guru). A quick search of #DonutTutorial on Twitter shows how often this tutorial is done, and with millions of views on YouTube (across Blender versions, as new versions of the tutorial are created for various major releases), the #Donutverse has become quite expansive.

Above is a small gallery of my progress through the tutorials. They’re fairly comprehensive tutorials for beginners, covering modeling shapes by hand, using and painting materials, using particles, setting up lights and cameras, and basic scene layout.

Given that a lot of my hobbyist development time over the past few months has gone to programmatic meshes, it was a nice change of pace to use modeling software to create meshes and push vertices around.

If you’re looking to learn Blender, definitely check out Andrew’s YouTube channel (linked above) and particularly his tutorial playlists.

Quick Tip: Notes in Code

Going along with my last post, just a quick tip: sometimes it makes sense to use comments to show how a loop will progress. For me, at least, this is another way to get a grasp on how values are changing and to ensure that calculations account for the cases you need to handle.

In this case, following the logic flow, I realized that the initial value for v2 was where things were breaking down. I was also able to capture the final segment without using an additional for-loop, which makes things a bit cleaner. I’m actually going to evaluate some of my other triangle logic to see if I can do the same thing using the vx -= vertices.Length method to take care of the last parent loop.
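To illustrate the idea, here’s a minimal sketch of the comment-tracing habit applied to a hypothetical triangle-fan loop (the names and indexing scheme here are illustrative, not the actual Deuta code): each comment line records what one iteration produces, which makes the wrap-around case easy to spot.

```csharp
public static class FanTriangulator
{
    // Hypothetical fan triangulation: center vertex at index 0, rim vertices 1..rimCount.
    // The comments trace the loop the way the post describes.
    public static int[] Triangulate(int rimCount)
    {
        int[] triangles = new int[rimCount * 3];
        int t = 0;
        for (int vx = 1; vx <= rimCount; vx++)
        {
            // vx = 1:        tri (0, 1, 2)
            // vx = 2:        tri (0, 2, 3)
            // ...
            // vx = rimCount: tri (0, rimCount, 1)  <- v2 wraps; no extra loop needed
            int v2 = vx + 1;
            if (v2 > rimCount) v2 -= rimCount;  // the same wrap trick as vx -= vertices.Length
            triangles[t++] = 0;
            triangles[t++] = vx;
            triangles[t++] = v2;
        }
        return triangles;
    }
}
```

Writing out the first few and last iterations as comments is exactly what exposed the bad initial v2 value for me.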

Troubleshooting via UI in Unity – Deuta

As a visual learner, I’ve found that sometimes seeing something working is a better debugging tool for me than seeing an array of int values. With the array, I can still usually see where a pattern breaks down or unexpected values might be (at least when the use merits it), but it’s more difficult for me to wrap my brain around what needs to be changed.

I’ve continued work on Deuta, my project for parametric primitives in Unity. One of the things I’ve spent some time on is custom editor/inspector UIs for shape generation. I highlighted the vertex-index (vertidx) in a previous post, but I have a great example of how it helps me.

In the above video, I’m working on a parametric torus option. I have the vertices down how I want them, but as I was writing code for the tris, I clearly missed a step. Looking at the int[] of triangles I got, I could see where there was an issue, but being able to visualize it helped me understand precisely what I needed to change and how. I know how the triangles need to be ordered, but this also allowed me to see at what index it breaks down, whether there’s a pattern to it or it’s just a single issue, and how the numeric patterns played out.

I’m sure I’ll be posting more Deuta stuff over the next several weeks. I’ll likely show off some more of my troubleshooting/visual debugging steps as I work on some more complex shapes.

Unity Custom Attributes and Custom Editors

Shapes Editor 01
Custom editor/inspector view for Shapes.cs

While working on Programmatic Meshes, it became clear that I needed some custom gizmos to help me visualize things as I moved along. It really became a necessity as I was working on sphere generation, because certain sizes and segment counts were creating bad geometry. This was almost certainly due to the ordering of the int[] array of triangles associated with the mesh. Since everything is generated by code, that means that once all vertices are created, the code needs to sort/order those vertices the same way every time to ensure that the facet tris are created correctly.

Sphere Vertices Indices
Sphere Vertices Indices

This was my first venture into gizmos aside from some very basic line drawings. I wanted the indices of each Vector3 in the mesh so that I could ensure they were getting properly ordered each time and would meet the requirements for tri calculation. So the above was born – and damn did it ever help me see where things were occasionally not working (sadly, I don’t have a screenshot of the bad sphere, but let’s just say that it was… not an ideal geometric shape).
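For anyone wanting to try the same thing, a minimal version of index labeling can be done with editor handles. This is a sketch under my own naming, not the project’s actual gizmo code; it assumes a MeshFilter on the same GameObject:

```csharp
#if UNITY_EDITOR
using UnityEditor;
#endif
using UnityEngine;

// Draws the index of each mesh vertex next to it in the scene view.
public class VertexIndexGizmo : MonoBehaviour
{
#if UNITY_EDITOR
    private void OnDrawGizmos()
    {
        var mf = GetComponent<MeshFilter>();
        if (mf == null || mf.sharedMesh == null) return;

        Vector3[] verts = mf.sharedMesh.vertices;
        for (int i = 0; i < verts.Length; i++)
        {
            // Transform from mesh-local to world space before labeling.
            Vector3 world = transform.TransformPoint(verts[i]);
            Handles.Label(world, i.ToString());
        }
    }
#endif
}
```

Note that Handles lives in the UnityEditor namespace, hence the #if UNITY_EDITOR guards so builds still compile.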

After getting through that, I wanted to also change the inspector so that I could enable/disable vertex visualization. As shapes become more complex, the numbering is great, but I wanted a better visualization of the vertices in the scene view. As the screenshot above illustrates, unless you rotate the view around a bit, it’s easy to get lost with where vertices actually are in relation to one another.

Sphere Vertices Visualization
Sphere Vertices Visualization

In the above GIF, you can see how movement helps determine which indices you’re viewing, but the ROYGBIV and SIZE options for visualization also help. In ROYGBIV mode, the closest vertices are red and the furthest are violet, with everything in between following the ROYGBIV order. With the SIZE option, the closest vertices are the largest and the furthest are the smallest, scaled to suit in between. In either case, they update in real time. I’m not yet sure how this performs on very high-density meshes, and I’m sure some optimization will be necessary, but it’s good enough for my needs for now.
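The mapping behind both modes can be sketched in a few lines: normalize each vertex’s distance from the camera, then feed it into either a hue ramp or a radius ramp. The names and ranges below are my illustrative assumptions, not the actual Deuta implementation:

```csharp
using UnityEngine;

public static class VertexVizHelpers
{
    // ROYGBIV mode: hue 0 (red) for the closest vertex through ~0.75 (violet)
    // for the furthest, walking the spectrum in ROYGBIV order.
    public static Color DistanceToRoygbiv(float distance, float min, float max)
    {
        float t = Mathf.InverseLerp(min, max, distance);
        return Color.HSVToRGB(Mathf.Lerp(0f, 0.75f, t), 1f, 1f);
    }

    // SIZE mode: closest vertices largest, furthest smallest.
    public static float DistanceToRadius(float distance, float min, float max,
                                         float maxRadius = 0.05f, float minRadius = 0.01f)
    {
        float t = Mathf.InverseLerp(min, max, distance);
        return Mathf.Lerp(maxRadius, minRadius, t);
    }
}
```

Called per vertex inside OnDrawGizmos (e.g. with Gizmos.DrawSphere), this gives the real-time update for free, since gizmos redraw every scene-view repaint.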

I wanted the collapsible Viewables area in the inspector for this, as well as for mesh data (Vertices[], Normals[], and Triangles[]). I also wanted to be able to select the Shape (each of which is a class inheriting from Primitive()), and which Generate() method should be used (each shape has different generate methods).

For this to work, I created a few custom classes, which I added to my external DLL:

SelectableList<T> is a custom List<> type collection that has an interactive indexer property called .Selected that references the index of the selected item in the list. This has turned out to be handy for selecting items in lists from dropdowns in the inspector.

MethodCaller<T> is a custom collection that contains a reference to a class (in this case, each shape class gets a method caller), and a SelectableList<MethodInfo> of the generation methods in that class.

And lastly, MethodDictionary<T> is a custom collection class that collects MethodCaller<T> objects. Its constructor takes a filter for classes (to remove the base class and secondary base classes where inheritance takes place on multiple levels), and a filter for methods based on the name of the methods to acquire.

The creation of the MethodDictionary also builds out the dictionary based on the filters provided, so there isn’t a lot of work needed to implement it. This is definitely a plus.
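To make the SelectableList<T> idea concrete, here’s a minimal sketch of what such a class could look like. The real class in my DLL may differ in details; this is just the shape of the idea:

```csharp
using System.Collections.Generic;

// A List<T> that also remembers which item is currently selected,
// which maps naturally onto an inspector dropdown.
public class SelectableList<T> : List<T>
{
    private int selectedIndex;

    // Index of the currently selected item, clamped to the list's bounds.
    public int SelectedIndex
    {
        get => selectedIndex;
        set => selectedIndex = (Count == 0) ? 0 : System.Math.Clamp(value, 0, Count - 1);
    }

    // The selected item itself (default(T) when the list is empty).
    public T Selected => Count == 0 ? default : this[SelectedIndex];
}
```

A dropdown in the custom inspector then just writes the popup’s return value into SelectedIndex, and everything downstream reads .Selected.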

In the Unity code, I also created three custom attributes:

[Segmented] is tagged to generate methods that use the segment count value. This allows that slider to be enabled/disabled on constructor selection.

[DefaultGenerator] is tagged to generate methods that the default Generate() method passes to.

[GeneratorInfo] is tagged to all generate methods and provides inline help text in the inspector, typically what segment count actually accounts for if it’s used, and what the measure/size value indicates (e.g.: circle/diameter, quad/size, hexagon/apothem*2).

Using reflection, I do something like this in the ShapeEditor.cs script:

if (shape.shapeMethods.Callers.Selected.Methods.Selected.GetCustomAttributes().SingleOrDefault(s => s.GetType() == typeof(SegmentedAttribute)) != null)

It’s not terribly pretty, but it’s fairly quick – quick enough for inspector draws – and allows the inspector panel to change on the fly as selections are made.
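A slightly tidier equivalent, if the chain of .Selected calls is the same, is MethodInfo.IsDefined, which checks for the attribute without materializing the attribute list (this is a suggested alternative, not what the project currently uses):

```csharp
using System.Reflection;

// Assumes the same Callers/Methods chain as above.
MethodInfo m = shape.shapeMethods.Callers.Selected.Methods.Selected;
bool segmented = m.IsDefined(typeof(SegmentedAttribute), inherit: false);
```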

It’s worth noting, if you haven’t worked with custom attributes before, that an attribute doesn’t need to have fields, despite all of the examples I came across online containing them. Without fields, it’s basically a check to see if it exists or not – a boolean of sorts applied to a reflected method to change how the inspector is drawn. Some examples:

[AttributeUsage(AttributeTargets.Method)]
public class SegmentedAttribute : Attribute
{
}
[AttributeUsage(AttributeTargets.Method)]
public class DefaultGeneratorAttribute : Attribute
{
}
[AttributeUsage(AttributeTargets.Method)]
public class GeneratorInfoAttribute : Attribute
{
    public readonly string info;

    public GeneratorInfoAttribute(string info)
    {
        this.info = info;
    }
}

And usage on a class method:

[Segmented]
[DefaultGenerator]
[GeneratorInfo("Generates a circle based on the 'starburst' pattern.\n\nSize is the diameter of the circle.\n\nSegments is the number of segments _per quadrant_.")]
public Mesh GenerateStarburst() { /* code */ }

I will probably write some additional posts about this, maybe with more code, as this project continues. And I’m sure I’ll have a Part 2 of Programmatic Meshes in the next week or two.

Programmatic Meshes, pt. I

I’m not sure how many parts this will end up being, but I’ve been spending a good amount of time lately on programmatically generating both 2D and 3D meshes. Currently, I’m working on a library to expand the Unity primitives system to create meshes based on parameters. So far, I’ve built Quads (standard/two triangle, starburst array from center), Circles (starburst), Hexagons (starburst, compact), and Triangles (equilateral, right). I’m now working on basic spheres from sets of circles.

Single Circle
Single Circle
Sphere from circles
Sphere from circles

One thing that these shapes do, so long as the shape object is kept, is track vertices across changes. I’d like to add some basic deformation options down the road, but for now this works well: for the sphere, I create just a single circle, copy its vertex[], rotate it, and copy it over and over until the sphere is complete.

The above image, “Sphere from circles” is actually just multiple circle objects rotated properly. My current speedbump is sorting the vertices for the sphere itself. The vertex at (0, 0, 0) is removed, and the vertices where x=0 are only parsed from the first circle rotation. All vertices are then ordered descending by y, so from the top down. Now I need to also sort the x/z values in clockwise order so that the vertex[] basically stores the vertices as horizontal slices from top to bottom, then clockwise for each slice (or counter-clockwise). This is necessary so that the triangles can also be generated programmatically without intervention.

My current attempt was something like this:

this.vertices = vertList.OrderByDescending(o => o.y).ThenBy(o => Mathf.Atan2(o.x, o.z)).ToArray();

But that doesn’t properly sort the x/z components. I’m sure there’s an easy formula that I’m missing, so… I’ll have to keep working through it.
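One plausible culprit (an assumption on my part, not a confirmed diagnosis): the rotated copies produce y values that are only almost equal, so OrderByDescending never yields clean "same slice" groups for ThenBy to order within. Snapping y to a tolerance before sorting fixes that failure mode:

```csharp
using System.Linq;
using UnityEngine;

public static class SphereVertexSort
{
    // Sort vertices into horizontal slices (top down), then by angle within
    // each slice. The tolerance snaps nearly-equal y values into one slice.
    public static Vector3[] SortSlices(Vector3[] vertList, float tolerance = 1e-4f)
    {
        return vertList
            .OrderByDescending(v => Mathf.Round(v.y / tolerance)) // top slice first
            .ThenBy(v => Mathf.Atan2(v.x, v.z))                   // angular order within a slice
            .ToArray();
    }
}
```

The Atan2 key is unchanged from my attempt above; only the slice grouping differs. If the angular order itself turns out to be the problem, the argument order (x, z vs. z, x) controls the winding direction.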

Creating Sprites Programmatically in Unity

So, I’ve been working on a game idea (yeah, I know, I haven’t actually completed any games thus far… my bad!), and have created some place-holder graphics for testing a few game mechanics. In other words, the visuals are not a permanent sort of thing. However, one of the mechanics will require some programmatically generated sprites to come into being as directed by a UI window the player will be presented with.

The basic setup is like this: The main game visuals are using Unity’s Tileset feature. The grid has (currently) three overlaid tilemaps: ground, ground shadows, ground clutter; the clutter being grass and flowers and other non-interactable bits for visual effect. Each tilemap moves up one in the z-sort order and all but the ground layer are using alpha transparency.

The programmatic sprites will be one z-sort layer above those (and below the player/NPCs) and will be displayed at a target transform called CircleTarget. The initial code looked like this:

if (tex == null)
{
    tex = new Texture2D(256, 256);
    tex.alphaIsTransparency = true;

    Color c = Color.red;

    for (int x = 120; x <= 130; x++)
        for (int y = 120; y <= 130; y++)
            tex.SetPixel(x, y, c);

    s = Sprite.Create(tex, new Rect(0, 0, 256, 256), new Vector2(0.5f, 0.5f), 32f);

    CircleTarget.GetComponent<SpriteRenderer>().sprite = s;
}

This was intended just to put a small red square down where the CircleTarget lives, but instead I was just presented with this:

Ah, you need to apply changes – but also need to set things up for transparency. So, let’s try this:

if (tex == null)
{
    tex = new Texture2D(256, 256);
    tex.alphaIsTransparency = true;

    Color c = Color.red;
    Color a = new Color(1f, 1f, 1f, 0f);

    for (int x = 0; x < tex.width; x++)
        for (int y = 0; y < tex.height; y++)
            tex.SetPixel(x, y, a);

    tex.Apply();

    for (int x = 120; x <= 130; x++)
        for (int y = 120; y <= 130; y++)
            tex.SetPixel(x, y, c);

    tex.Apply();

    s = Sprite.Create(tex, new Rect(0, 0, 256, 256), new Vector2(0.5f, 0.5f), 32f);

    CircleTarget.GetComponent<SpriteRenderer>().sprite = s;
}

Ah, much better – however, the red pixels are surrounded by a buffer of whitish, semi-transparent pixels. If you have existing sprites that use alpha and you want the pixels to look crisp when imported into Unity, you need to change how they’re filtered. For programmatic sprites, you need to do the same thing.

tex = new Texture2D(256, 256);
tex.alphaIsTransparency = true;
tex.filterMode = FilterMode.Point;  // Add this line

...

And now we have a properly transparent background red square placed at our target location.
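For completeness, the three fixes above can be rolled into one reusable helper – point filtering, a fully transparent fill, the colored square, and a single Apply() once all pixels are set. This is a hypothetical consolidation of the snippets above (SpriteFactory and CreateSquare are my names, not part of any Unity API):

```csharp
using UnityEngine;

public static class SpriteFactory
{
    // Builds a sprite containing a solid-colored square on a transparent background.
    public static Sprite CreateSquare(int texSize, RectInt square, Color color,
                                      float pixelsPerUnit = 32f)
    {
        var tex = new Texture2D(texSize, texSize);
        tex.alphaIsTransparency = true;
        tex.filterMode = FilterMode.Point;  // crisp pixels, no filtering halo

        // Clear everything to transparent first.
        var clear = new Color(1f, 1f, 1f, 0f);
        for (int x = 0; x < tex.width; x++)
            for (int y = 0; y < tex.height; y++)
                tex.SetPixel(x, y, clear);

        // Then fill the requested square.
        for (int x = square.xMin; x < square.xMax; x++)
            for (int y = square.yMin; y < square.yMax; y++)
                tex.SetPixel(x, y, color);

        tex.Apply();  // one GPU upload is enough once all pixels are written

        return Sprite.Create(tex, new Rect(0, 0, texSize, texSize),
                             new Vector2(0.5f, 0.5f), pixelsPerUnit);
    }
}
```

Usage would then collapse to a single line, e.g. CircleTarget.GetComponent&lt;SpriteRenderer&gt;().sprite = SpriteFactory.CreateSquare(256, new RectInt(120, 120, 11, 11), Color.red);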

I’m a big fan of programmatic generation of meshes, sprites, and really everything. It makes the overall footprint of the game smaller and often consumes no more memory or processing power than you’d use anyway – not always, but often. In this case, the programmatic option is actually significantly better than having a bunch of predesigned objects: sizes can be calculated on the fly, and the variations I plan for won’t require any palette-swapping – or even any palettes at all, since it will all be stored in code. Generation of pixel art on the fly really opens up how this game mechanic can be used. I’m sure there will be more posts on it down the road – keep an eye out.