Material Study – Crystals/Gemstone – Strawberry Quartz, Pt. 2 

Strawberry Quartz with inclusions

It’s labeled as “WIP 011”, but for now this is the final version of the strawberry quartz material study. I have the cut gemstone model in place that I wanted for it, and the surface and volume shaders both look decent. I’m sure I’ll circle back around to it at some point for improvement, but there are so many materials to study.

If you’re interested, check out my (free) post on Patreon about it here: https://www.patreon.com/posts/67185571

I also plan to work on some tutorials over the next couple of weeks. I’d love to hear feedback on anything you might want to see. My current plan is a four-part tutorial: modeling the cut gemstone, creating the surface shader, creating the volume shader, and rendering it all to video.

Materials Study – Crystals/Gemstones – Strawberry Quartz

As I mentioned on my Patreon, as part of my materials studies in Substance Designer and Blender, I really want to work on crystals and gemstones. I’ve been collecting quite a few reference images on this Pinterest board over the last few months, and have decided that the first one I’m going after is the strawberry quartz.

I have a few ideas for how to best do this. I'm initially working in Substance 3D Designer to create the surface material, but I'm also planning to combine that surface shader with a volumetric shader in Blender to create an effect that looks more like real quartz. Those red veins are not just on the surface!

So, for my initial iterations of both the surface material and the volumetric shader, I have the following six WIP attempts:

#1 was just the first run at the surface shader. It’s a little too busy, and both the quartz and the veins are running a little too much toward pink.

#2 added a multidirectional blur to calm down the veining a bit.

#3 added the roughness and metallic channels to the texture and drove the veins more toward red.

#4 was a touchup to all of the channels, but the UVs on the model were also not right on two sides (visible on the left side in the image). It also added the initial volumetric pass inside the crystal.

#5 was just minor touchups.

#6 fixed the UVs and cleaned up some of the coloring and the density of the volumetric pass.

So, during these steps, the large-scale changes were made in Substance, to the material textures themselves. In Blender, after getting Cycles set up the way I wanted, I made some minor detail changes to both the volumetric and surface shaders. There's still a lot of work to be done, but not bad for a little bit of time in the morning.

FWIW, the blue cube in the background of the renders is there simply to verify the level of transparency. I'll probably change the setup some so that there's a white/50%/black background with some frosted lighting coming in from underneath and slightly to the front. The surface itself looks pretty decent, though I'm sure there will be some fine-tuning. The major changes, I think, still need to happen in the volumetric shader.

Blender: Alchemy Lab, pt. I

Something I’ve been wanting to work on for ages, and one of the key reasons I’ve started playing with Blender, is a great, fully-modeled alchemy lab or wizard’s lab. I wanted to model and skin every component from scratch, create appropriate lighting, and eventually animate a small scene.

That effort has begun…

In some cases there are placeholder materials – currently just the walls and the ground. And everything is iterative: I will likely go back to each piece over time to tweak models and materials as the scene (and my skillset) grows.

The very first thing I started modeling was a small double-bubble glass container for liquids. It turned out pretty nicely so far – standing at 0.299m tall (~11.75″), not including the cork. Doesn’t every alchemy bottle need a cork?

Double Bubble Container

The red coming through is just a red cube I was using to get the alpha, transmission, and IOR dialed in. In this case the container has water in it. I also wanted to play around with some odd glowing liquids, so I just threw together green and red options as more of a quick placeholder.

I figured the next step was to make a table to start putting stuff on. Reference images are an amazing tool; this is the one I used for the table.

Alchemy Lab Simple Table Reference

Pretty simple, but it looks sturdy and timeless. I still plan to add the braces along the bottom and top of the legs, but for now I’ve got this:

Alchemy Lab Simple Table

I had a lot of fun with this one. While it adds complexity (and render time), I wanted the planks of the table top to be slightly different. Each one got a light level of deformation – small chunks removed, larger gouges, and other things that were better modeled than added via normal maps. Then using a single wood grain texture, I unwrapped the UVs for each plank and each leg and applied those unwrappings to slightly shifted areas of the texture. So one texture, but nine different results for how they appear. This adds to the feeling of a real table – all made from the same type of wood, but not with identical grain, which would be an odd thing to see.

Alchemy Lab Simple Table

Not a particularly great render, but it definitely gives the idea of what I was going for.

Next on my list was a coil candle, sometimes called an hour candle since you can control how long a segment burns before going out. My mom had one or two of these in her antique collection when I was a kid, and I remember thinking that it was just a really cool way to have a light source and a very rough timer at hand. Reference image:

Coil Candle

This is what I've been working on for the last couple of days. I ended up using Blender Rookie's tutorial on making a coil as the jumping-off point.

Coil Candle with Brass Dish

I was able to get a rough shape for the candle and added a brass dish as a start. I still have that little tail at the bottom to fix up, and a variety of details to add, but progress was made. I added the brass spindle and the mechanism used to “stop” the candle, as well as bending the top of the candle upwards and adding a wick and a small flame… a flame that still desperately needs some work.

Coil Candle

There’s still some work to be done on this. Probably adding small bits of wax to the dish, definitely adding feet to the dish, and possibly a small handle. I’m also considering adding a glass baffle.

So far, there isn’t much. Given that this is all a learn-as-I-go process, it’s slow. I’m sure it’ll pick up quite a bit as I move forward. At least I really hope so, lol. Right now, the scene (as far as objects go) looks like this:

Coil Candle and Double Bubble on Simple Table

Some things I plan to add:

  • A variety of jars and containers: glass, leaded glass, clay, and ceramic.
  • A “lab” setup – small distiller and other chemistry/alchemy equipment.
  • Additional light sources – I’d like the entire scene to be lit via ray-tracing with Cycles, all from actual light sources: candles, sconces, torches, weird glowing objects, et cetera.
  • Various furnishings: tables, wall-mounted items, chests, shelving.
  • A hearth/fireplace of some sort, possibly with some basic cooking items.
  • A banner or two, as I definitely want to also work with cloth.
  • Animations – currently the candle flame is animated, but it sucks. I’d also like the light source for each flame to flicker and move with the flame itself. I had done this programmatically for the wall sconces in Labyrintheer, but am still trying to find the best way to go about it in Blender (a rough sketch of the Unity approach follows this list).
  • Packaging: If this actually works out well, and a good collection comes into being, I’d like to possibly sell this as a package, ready to go for Unity and Unreal as a sort of kitbash type of deal. Who knows how long it will take, but a production-ready kit is definitely a goal.
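For reference, the Labyrintheer version boiled down to Perlin noise driving a point light's intensity and position. Here's a rough C# sketch of that approach – names and values are hypothetical placeholders, and this is the Unity side, not the Blender solution I'm still hunting for:

using UnityEngine;

// Rough sketch: Perlin-noise-driven flicker for a flame's point light.
// Names and values are hypothetical placeholders.
public class FlickerLight : MonoBehaviour
{
    public Light flameLight;            // point light parented to the flame
    public float baseIntensity = 1.2f;
    public float flickerAmount = 0.35f; // max intensity swing
    public float flickerSpeed = 7f;
    public float wanderRadius = 0.05f;  // how far the light drifts

    private Vector3 basePosition;

    void Start()
    {
        basePosition = flameLight.transform.localPosition;
    }

    void Update()
    {
        // Perlin noise gives smooth, non-repeating variation over time.
        float t = Time.time * flickerSpeed;
        float n = Mathf.PerlinNoise(t, 0f);
        flameLight.intensity = baseIntensity + (n - 0.5f) * 2f * flickerAmount;

        // Drift the light slightly so shadows move with the flame.
        float dx = Mathf.PerlinNoise(t, 10f) - 0.5f;
        float dy = Mathf.PerlinNoise(t, 20f) - 0.5f;
        flameLight.transform.localPosition = basePosition + new Vector3(dx, dy, 0f) * wanderRadius;
    }
}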

Creating Sprites Programmatically in Unity

So, I’ve been working on a game idea (yeah, I know, I haven’t actually completed any games thus far… my bad!), and have created some placeholder graphics for testing a few game mechanics. In other words, the visuals are not permanent. However, one of the mechanics will require some programmatically generated sprites to come into being as directed by a UI window the player will be presented with.

The basic setup is like this: the main game visuals use Unity’s Tilemap feature. The grid currently has three overlaid tilemaps: ground, ground shadows, and ground clutter – the clutter being grass, flowers, and other non-interactable bits for visual effect. Each tilemap moves up one in the z-sort order, and all but the ground layer use alpha transparency.
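In code terms, that layering just means each tilemap's renderer gets a higher sorting order than the one below it – something along these lines (component and field names here are hypothetical):

using UnityEngine;
using UnityEngine.Tilemaps;

// Minimal sketch of the z-sort setup described above (names hypothetical).
public class TilemapLayerSetup : MonoBehaviour
{
    public TilemapRenderer ground;
    public TilemapRenderer groundShadows;
    public TilemapRenderer groundClutter;

    void Awake()
    {
        ground.sortingOrder = 0;        // opaque base layer
        groundShadows.sortingOrder = 1; // alpha-blended shadows
        groundClutter.sortingOrder = 2; // alpha-blended grass/flowers
    }
}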

The programmatic sprites will be one z-sort layer above those (and below the player/NPCs) and are displayed at a target transform called CircleTarget. The initial code looked like this:

if (tex == null)
{
    tex = new Texture2D(256, 256);
    tex.alphaIsTransparency = true;

    Color c = Color.red;

    for (int x = 120; x <= 130; x++)
        for (int y = 120; y <= 130; y++)
            tex.SetPixel(x, y, c);

    s = Sprite.Create(tex, new Rect(0, 0, 256, 256), new Vector2(0.5f, 0.5f), 32f);

    CircleTarget.GetComponent<SpriteRenderer>().sprite = s;
}

This was intended just to put a small red square down where the CircleTarget lives, but instead I was presented with this:

Ah, you need to apply changes – but also need to set things up for transparency. So, let’s try this:

if (tex == null)
{
    tex = new Texture2D(256, 256);
    tex.alphaIsTransparency = true;

    Color c = Color.red;
    Color a = new Color(1f, 1f, 1f, 0f);

    for (int x = 0; x < tex.width; x++)
        for (int y = 0; y < tex.height; y++)
            tex.SetPixel(x, y, a);

    tex.Apply();

    for (int x = 120; x <= 130; x++)
        for (int y = 120; y <= 130; y++)
            tex.SetPixel(x, y, c);

    tex.Apply();

    s = Sprite.Create(tex, new Rect(0, 0, 256, 256), new Vector2(0.5f, 0.5f), 32f);

    CircleTarget.GetComponent<SpriteRenderer>().sprite = s;
}

Ah, much better. However, the red pixels are surrounded by a buffer of whitish, semi-transparent pixels. If you import existing sprites that use alpha into Unity and want the pixels to look precise, you need to change how they’re filtered. For programmatic sprites, you need to do the same thing.

tex = new Texture2D(256, 256);
tex.alphaIsTransparency = true;
tex.filterMode = FilterMode.Point;  // Add this line

...

And now we have a properly transparent background red square placed at our target location.

I’m a big fan of programmatic generation of meshes, sprites, and really everything. It makes the overall footprint of the game smaller and often consumes no more memory or processing power than you’d use anyway – not always, but often. In this case, the programmatic option is significantly better than a bunch of predesigned objects: sizes can be calculated on the fly, and the variations I have planned won’t require palette-swapping – or any palettes at all – since everything is stored in code. Generating pixel art on the fly really opens up how this game mechanic can be used. I’m sure there will be more posts on it down the road – keep an eye out.

Palettes

I’ve started working on some color palettes for a few projects. Because we aren’t limited to 8-bit visuals these days, I’ve taken an atypical tack. Rather than having a palette for a specific biome or environment, I’m creating a variety of 8-color palettes that can be combined as needed in any given environment. The benefit is that the total number of colors in a scene is still limited, and the palettes can be used to ensure a visual “feel”, but more total colors can be utilized in a pixel art format.

Sulfur Pools
Seafoam
Hot & Cold
Greyscale
Flames
Cool Blue

I have ideas for about a dozen more and plan to utilize each of them in various ways. I also have CSS stylesheets made for them. I’ll upload those at some point and add links for anyone who might be interested.

Biome beginnings…

I’ve been wanting to make a system for a 2D game that would offer varying biomes. The two pieces I started on were creating noise-based shaders for each type of block/square, and deforming the individual quads so that they weren’t perfectly square but always matched up along seams.

The beginning of this video briefly shows the near-infinite world space variance for the four shaders that I’ve got working so far. Clockwise from the top left, the materials are: iron, copper, coal, and gold. The movement is actually the whole tileset being moved around the world space (the camera auto-follows the center of the tileset). The second part of the video shows the deformed quads – which actually brings me to the “right” and “wrong” way to devise triangles for a mesh.

Pedants would say this is very much the “wrong” way to do so – that near-square shapes should be built from pairs of triangles wherever possible. For mapping textures to quads, that is certainly true; however, the use case here is different.

First, there are no textures being utilized. With everything strictly shader-based, and with the shader specifying values based on world positions, the triangle layout doesn’t matter at all for the visual effects. Additionally, the method I’m using allows for easy programmatic deformation, with a virtually unlimited number of points along each side of the quad available for the deformation. Currently that’s controlled by specifying the number of line segments each side should be broken into.

Before and after deformation with two segments per side

This image is just breaking the quad’s sides into two segments each.

Before and after deformation with five segments per side

Here’s five segments per side.

Before and after deformation with ten segments per side

And lastly ten segments per side.

You can see that with five and ten segments, the actual deformation in the upper left is not manifold in two dimensions. But because the visual is driven by shaders, it doesn’t actually cause any issues, and the tiles to the left and above this one still line up properly, meaning there’s also no z-fighting. While I still need to clean up the noise used for the deformations a bit, there’s a benefit to knowing that even errant geometry isn’t going to cause issues (because in a randomly generated world with tens of thousands of tiles, there’s always the chance for float-based math to be off).

The additional benefit to using “non-standard” triangles radiating out from the center is that it makes programmatic variation much easier to accomplish, as no tile needs to know anything about any of its neighbors to deform and still fit properly. This actually leads into another useful factor noted below. But here, the math and calculations are just much easier. In the list of vertices, vertices[0] is also point (0, 0) of the quad – the center. All other vertices radiate outward, starting from the upper-lefthand corner and moving clockwise around the quad. This also means that setting the mesh triangle[] array is easier, because every triplet starts with 0 and then stutter-counts upward, e.g. (0, 1, 2, 0, 2, 3, 0, 3, 4, 0, 4, 5, 0, 5, 6, …)

With such a simple set, a basic for-loop allows this to be done without prior knowledge of how many segments each quad has.

As mentioned above, there’s another useful bit here. If you look more closely at one of the before-and-after images, you’ll notice that the quad is centered on the world grid, which is not the default in Unity. Typically, a quad at coordinates (0, 0) would have its upper-lefthand corner at (0, 0) and its opposite corner at either (1, 1) or (1, -1), depending on how you have things set up. Here, when building the quad programmatically, rather than using the typical range of (0, 1) I use the range (-0.5, 0.5).

There are two primary reasons for this. From an object control perspective, this means that the tile at (5, -6) is centered at (5, -6), so destruction of that tile is the destruction of a 1×1 area centered at that position; there’s no need to worry about which direction the quad expands from its origin, because the origin is the center. The second reason is from a programmatic geometry view. Because all tiles are centered, deforming the geometry along each x- and y-value between tiles is consistent between negative and positive worldspace.
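As an aside, the stepsXY array you'll see referenced in the code below is just those evenly spaced values from -0.5 to 0.5 along each side. It isn't shown in the post, but here's a sketch of how it could be built (my actual setup may differ):

// Sketch: evenly spaced step values from -0.5 to 0.5 for a centered quad.
// For segmentsPerSide = 2 this yields { -0.5f, 0f, 0.5f }.
float[] BuildSteps(int segmentsPerSide)
{
    float[] steps = new float[segmentsPerSide + 1];
    for (int i = 0; i <= segmentsPerSide; i++)
        steps[i] = -0.5f + (float)i / segmentsPerSide;
    return steps;
}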

Let’s take a little tutorial approach here to see what the code looks like. Here’s the creation of the quad itself. Yes, there’s some housekeeping to do with this yet, but it’s functional and fast.

	void CreateQuad()
	{
		Mesh mesh = new Mesh();
		mesh.name = "ScriptedMesh";

		Vector3[] vertices = new Vector3[1 + (4 * (stepsXY.Length - 1))];

		//Center
		vertices[0] = new Vector3(0f, 0f, 0f);

		for (int i = 0; i < stepsXY.Length - 1; i++)
		{
			//Top
			vertices[(0 * (stepsXY.Length - 1)) + i + 1] = new Vector3(stepsXY[i], stepsXY[stepsXY.Length - 1], 0f);
			//Right
			vertices[(1 * (stepsXY.Length - 1)) + i + 1] = new Vector3(stepsXY[stepsXY.Length - 1], stepsXY[stepsXY.Length - 1 - i], 0f);
			//Bottom
			vertices[(2 * (stepsXY.Length - 1)) + i + 1] = new Vector3(stepsXY[stepsXY.Length - 1 - i], stepsXY[0], 0f);
			//Left
			vertices[(3 * (stepsXY.Length - 1)) + i + 1] = new Vector3(stepsXY[0], stepsXY[i], 0f);
		}

		Vector3[] normals = new Vector3[vertices.Length];

		for (int i = 0; i < normals.Length; i++)
			normals[i] = Vector3.forward;

		int[] triangles = new int[4 * (stepsXY.Length - 1) * 3];

		int innerIndex = 0;
		for (int i = 0; i < 4 * (stepsXY.Length - 1); i++)
		{
			triangles[innerIndex++] = 0;
			triangles[innerIndex++] = i + 1;
			triangles[innerIndex++] = i + 2;
		}

		triangles[innerIndex - 1] = 1;

		mesh.vertices = vertices;
		mesh.normals = normals;
		mesh.triangles = triangles;

		mesh.RecalculateBounds();

		GameObject quad = new GameObject("Block");
		quad.transform.position = position;
		quad.transform.parent = this.parent.transform;

		MeshFilter meshFilter = (MeshFilter)quad.AddComponent(typeof(MeshFilter));
		meshFilter.mesh = mesh;

		MeshRenderer meshRenderer = (MeshRenderer)quad.AddComponent(typeof(MeshRenderer));
		meshRenderer.material = this.bMat.Material;

		this.self = quad;
	}

We create the mesh, and then determine the number of vertices. Again, because we aren’t turning the quad into a bunch of squares, this is an easy calculation, and aside from the perimeter, only the center vertex needs to be added.

Vector3[] vertices = new Vector3[1 + (4 * (stepsXY.Length - 1))];

Here, the number of vertices is 1 for the center, plus 4 * (stepsXY.Length - 1), where stepsXY holds the step positions along each side. We subtract 1 per side because the second corner vertex of a given side is also the first vertex of the next side. In other words, if we’re breaking each side into two segments, you’d have {(-0.5, 0.5), (0, 0.5), (0.5, 0.5)} for the top, and {(0.5, 0.5), (0.5, 0), (0.5, -0.5)} for the right side. We don’t want or need (0.5, 0.5) listed twice in the array of vertices, so the -1 prevents that from happening.

We then add the center vertex to the array, and run through a for-loop that also only needs to execute stepsXY.Length - 1 times, as each pass of the loop handles the corresponding point on all four sides. Yes, I used 0 * … in the first (top) calculation – this is just for clarity. You’ll also notice that in the right and bottom calculations, we’re subtracting i rather than adding it. This keeps the triangle calculation later easy, and ensures all vertices in the array exist in clockwise order starting at vertices[1].

All normals are forward-facing, so it’s easy to just fill the normals array with Vector3.forward, one entry for each vertex.

Now the triangles array is initialized (remember to multiply it by 3 since each triangle has three vertices). Using an index/iteration value that is external to the for-loop allows for quick calculation; remember from above that the whole array is sets of 0, x, y where x and y stutter-step upwards. Finally, we set the very last triangle point back to 1 – the first value in the vertex array that is on the outer edge (this completes the circuit around the quad).

The rest is just building the mesh out. You may have noticed that I don’t build an array of UVs, nor plug UVs into the mesh building. Again, because I’m not using textures, there’s no mapping from a texture to the mesh, and therefore UVs are not needed. The shader doesn’t care about UVs. Of course, you could build shaders that DO care about UV calculations, in which case UVs would also need to be added (which may be a bit more complicated given the triangle geometry here).

Now we’ll look at the deformation code.

	public void DeformQuad()
	{
		MeshFilter mf = this.self.GetComponent<MeshFilter>();
		Mesh m = mf.mesh;
		Vector3[] vertices = m.vertices;

		for (int i = 0; i < vertices.Length; i++)
		{
			if ((vertices[i].x == stepsXY[0] || vertices[i].x == stepsXY[stepsXY.Length - 1]) && (vertices[i].y == stepsXY[0] || vertices[i].y == stepsXY[stepsXY.Length - 1]))
				continue;

			// Top
			if (vertices[i].y == stepsXY[stepsXY.Length - 1])
			{
				float noiseValue = Map(Mathf.PerlinNoise(this.position.x + vertices[i].x, this.position.y + stepsXY[stepsXY.Length - 1]), 0f, 1f, -0.3f, 0.3f);
				vertices[i] = new Vector3(vertices[i].x + noiseValue, vertices[i].y + noiseValue, 0f);
			}

			// Bottom
			if (vertices[i].y == stepsXY[0])
			{
				float noiseValue = Map(Mathf.PerlinNoise(this.position.x + vertices[i].x, this.position.y + stepsXY[0]), 0f, 1f, -0.3f, 0.3f);
				vertices[i] = new Vector3(vertices[i].x + noiseValue, vertices[i].y + noiseValue, 0f);
			}

			// Right
			if (vertices[i].x == stepsXY[stepsXY.Length - 1])
			{
				float noiseValue = Map(Mathf.PerlinNoise(this.position.x + stepsXY[stepsXY.Length - 1], this.position.y + vertices[i].y), 0f, 1f, -0.3f, 0.3f);
				vertices[i] = new Vector3(vertices[i].x + noiseValue, vertices[i].y + noiseValue, 0f);
			}

			// Left
			if (vertices[i].x == stepsXY[0])
			{
				float noiseValue = Map(Mathf.PerlinNoise(this.position.x + stepsXY[0], this.position.y + vertices[i].y), 0f, 1f, -0.3f, 0.3f);
				vertices[i] = new Vector3(vertices[i].x + noiseValue, vertices[i].y + noiseValue, 0f);
			}
		}

		m.vertices = vertices;
		m.RecalculateBounds();
	}
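One note: Map isn't a Unity built-in – it's a small helper that linearly remaps a value from one range to another. A sketch of it (mine may differ slightly):

// Linearly remaps value from (fromMin, fromMax) to (toMin, toMax).
float Map(float value, float fromMin, float fromMax, float toMin, float toMax)
{
    return toMin + (value - fromMin) / (fromMax - fromMin) * (toMax - toMin);
}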

Currently, I’m just using Unity’s built-in Perlin noise method from the Mathf library. It leaves much to be desired, but before I dove into creating a noise function, I wanted to ensure this all worked as I expected. Basically, this just extracts the vertices from the mesh, performs the calculations on them, and rebuilds the mesh. The first if-statement is intended to keep the corners of each quad from falling out of alignment. It probably won’t be kept, but it was something I was trying out.

You can also see that I map the noise function’s return values from (0, 1) to (-0.3, 0.3) as I don’t want any deformed vertex landing at or close to the center of another tile. I need to play around with this value some still, but it will depend on the noise function I end up with later on.

From a performance standpoint, it might be better to deform the vertices as the mesh is being created rather than creating a perfectly square mesh and then deforming it. But this is the type of optimization that would almost certainly hurt the legibility of the code, and keeping the two functions separate makes it easier to change either one. And really, the amount of time taken is pretty small: it takes no more than ~40μs to generate and ~27μs to deform each quad, or about 17s for over 250,000 tiles. The obvious plan would eventually be to chunk them (à la Minecraft), and there’s almost certainly some room for fine-tuning the process. Plus, this is all executing in the editor, so it would almost certainly perform better in a release build.

Working with NASA images to create Unity terrain

I’ve been wanting to create a scene in Unity based on real Martian terrain, and recently chose Victoria Crater as my target. I’ve taken two NASA images – a false-color image and a black-and-white image – as my starting point:

Victoria Crater, Mars
Source: https://mars.nasa.gov/resources/5633/victoria-crater-at-meridiani-planum/

The B+W image is an ideal starting point for a greyscale heightmap, but it has some critical flaws. Since a heightmap uses the greyscale level to determine height, the shadows in the upper left make that section of the crater read as significantly deeper, the lighter areas on the bottom right as roughly the same height as the surrounding plain, and the bright spots along the edge as significantly higher than anything else. That makes for a very poor topographical map.

So, cutting sections of the image into various layers allowed for some gross manipulation of the overall color scaling using histograms. The first heightmap looks like this:

Victoria Crater Heightmap WIP

This is a far more accurate representation, though still not as good as it could be. The feathered texture in the lower-right quadrant of the crater doesn’t appear in the rest of the crater, despite definitely being there in the source images. There’s also a bit of noise around the rim that really should be resolved, though it was worth importing into Unity as a trial. The result is:

Screenshot: Victoria Crater, Unity terrain from Heightmap

I’m pretty happy with it for an initial attempt. It’ll need some fleshing out on the heightmap side. GIMP is a great tool, but it’s no Photoshop, and some of the finer features in PS would definitely make this easier. That said, it’s almost certainly a workable option. Maybe a future project will be training an ML model to take astronomical images and create topographic heightmaps from them. I’d need better sources to start with, though. For now, I’ll need another few rounds of handmade maps.
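As an aside, a heightmap can also be pushed into a Unity terrain from code rather than through the editor import. A rough sketch, assuming a grayscale Texture2D that's Read/Write enabled and at least as large as the terrain's heightmap resolution (names hypothetical):

using UnityEngine;

// Rough sketch: fill a Unity terrain from a grayscale heightmap texture.
public static class HeightmapImporter
{
    public static void Apply(Texture2D heightmap, TerrainData terrainData)
    {
        int res = terrainData.heightmapResolution;
        float[,] heights = new float[res, res];

        for (int y = 0; y < res; y++)
            for (int x = 0; x < res; x++)
                // SetHeights expects normalized 0..1 values; grayscale already is.
                heights[y, x] = heightmap.GetPixel(x, y).grayscale;

        terrainData.SetHeights(0, 0, heights);
    }
}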

Generative Glyphs

I came across this post on the Reddit sub r/Generative the other day and thought that u/ivanfleon had done something both relatively simple and very cool. I had some ideas for generative glyphs and started by mimicking his sample there, and thus was born the RectGlyph:

RectGlyph_01RectGlyph_02
Two different RectGlyph settings
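I won't pretend the sketch below is my actual generator – it's just an illustration of the general flavor: random short strokes snapped to a small grid already start to read as glyphs. Everything here is hypothetical:

using UnityEngine;

// Loose illustration of the idea: random axis-aligned strokes on a small
// grid (gridSize >= 2). Not the actual RectGlyph generator.
public static class RectGlyphSketch
{
    // Returns segmentCount strokes, each a pair of grid points.
    public static Vector2Int[][] Generate(int gridSize, int segmentCount, System.Random rng)
    {
        var segments = new Vector2Int[segmentCount][];
        for (int i = 0; i < segmentCount; i++)
        {
            // Pick a random grid point (away from the far edges), then step
            // to a horizontal or vertical neighbor to form one stroke.
            var a = new Vector2Int(rng.Next(gridSize - 1), rng.Next(gridSize - 1));
            var b = rng.Next(2) == 0
                ? new Vector2Int(a.x + 1, a.y)  // horizontal stroke
                : new Vector2Int(a.x, a.y + 1); // vertical stroke
            segments[i] = new[] { a, b };
        }
        return segments;
    }
}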

The interface came shortly after RectGlyph was done as I was trying to troubleshoot work on the PolarGlyph. It made it easier to see what sort of variations could be had, but also allowed debugging to be more visual (which really helps me a lot).

I’ve always been fascinated with languages, both real and imagined. As I was working toward my PolarGlyph idea, I stumbled upon a few happy accidents, such as the RunicGlyph.

RunicGlyph_01RunicGlyph_02
Two RunicGlyph settings

And also the AngularGlyph:

AngularGlyph_01AngularGlyph_02
Two AngularGlyph settings

And eventually worked out the kinks for the PolarGlyph:

PolarGlyph_02PolarGlyph_01
Two PolarGlyph settings

I have a few others being worked on, as well as some ideas for an editor so you can take your randomly generated glyphs and add line segments to – or remove them from – any of the glyphs in the set.

My pie-in-the-sky idea is to also be able to save them as a TrueType font so that they can be used in Unity (or anywhere), and possibly to save them as an SVG or vector sheet for use in various vector-based software.

It’s been a fun side project so far.

What to do, what to do…?

Still playing with some new Unity 2020 features, still dabbling on Labyrintheer as well as a few other projects. Learning a bit of Machine Learning just for fun, figuring out the ins and outs of HDRP and RT, and generally using this lovely COVID pandemic as time to reset a bit and figure out what I actually want to develop sooner rather than later.

For whatever it may be worth to you, if you are interested in Machine Learning, either specific to Unity, or more generally, I cannot recommend Penny deByl’s courses on Udemy highly enough. All of her courses are great, and the ML and AI courses are no different.

https://www.udemy.com/course/machine-learning-with-unity/

While the course is ostensibly about Unity and the development takes place within Unity, it isn’t specific to the Unity ML-Agents (though there are sections for that). The bulk of the course is geared toward developing your own agents and brains in C#, which is fantastic whether you want to use Unity’s ML-Agents or not.

In the immediate now, I’ve been working on some Unity code to create a system of CCTVs and monitors to show them. It’s a core component of a potential game idea I’m futzing with at the moment. I expect to have some pictures or videos in the next week or two to show off. Until then, Happy Thanksgiving 2020.