Working with NASA images to create Unity terrain

I’ve been wanting to create a scene in Unity based on real Martian terrain and recently chose Victoria Crater as my target. I’ve taken two NASA images, a false-color image and a black-and-white image, as my starting point. These two images are below:

Victoria Crater, Mars
Source: https://mars.nasa.gov/resources/5633/victoria-crater-at-meridiani-planum/
Victoria Crater
Victoria Crater, Mars

The B+W image is an ideal starting point for a greyscale heightmap, but it has some critical flaws. Since a heightmap uses the greyscale level to determine height, the shadows in the upper left make that section of the crater read as significantly deeper, the lighter areas on the bottom right read as roughly the same height as the surrounding plain, and the bright highlights along the edge read as significantly higher than anything else. That makes for a very poor topographical map.

So I cut sections into various layers, which allowed for some gross manipulation of the overall scaling of colors using histograms. The first heightmap looks like this:

Victoria Crater Heightmap WIP

This is a far more accurate representation, though still not as good as it could be. The feathered texture on the lower-right quadrant of the crater doesn’t appear in the rest of the crater, despite very definitely being there in the source images. There’s also a bit of noise around the rim that really should be resolved, but it was worth importing into Unity as a trial. The result is:

Screenshot: Victoria Crater, Unity terrain from Heightmap

I’m pretty happy with it for an initial attempt. It’ll need some fleshing out on the heightmap side. GIMP is a great tool, but it’s no Photoshop, and some of the finer features in PS would definitely make this easier. That said, it’s almost certainly a workable option. Maybe a future project will be training an ML model to take astronomical images and create topographic heightmaps from them. I’d need better sources to start with, though. For now, I’ll need another few rounds of handmade maps.
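For the Unity trial step, the general approach of turning a greyscale image into terrain can be sketched like this. This is a minimal illustration, not the project's actual code; the component and field names are mine:

```csharp
using UnityEngine;

// Sketch: apply a readable grayscale Texture2D as a Unity terrain heightmap.
public class HeightmapLoader : MonoBehaviour
{
    public Texture2D heightmap;   // needs Read/Write enabled in import settings
    public Terrain terrain;

    void Start()
    {
        TerrainData data = terrain.terrainData;
        int res = data.heightmapResolution;
        float[,] heights = new float[res, res];

        for (int y = 0; y < res; y++)
        {
            for (int x = 0; x < res; x++)
            {
                // Bilinear sampling means the texture size doesn't have to
                // match the terrain's heightmap resolution.
                float u = (float)x / (res - 1);
                float v = (float)y / (res - 1);
                // Grayscale value (0..1) becomes normalized height.
                heights[y, x] = heightmap.GetPixelBilinear(u, v).grayscale;
            }
        }

        data.SetHeights(0, 0, heights);
    }
}
```

This is also why the shadow problem above matters so much: whatever the greyscale says, `SetHeights` takes literally.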

Generative Glyphs

I came across this post on the Reddit sub r/Generative the other day and thought that u/ivanfleon had done something both relatively simple and very cool. I had some ideas for generative glyphs of my own and started by mimicking his sample there; thus was born the RectGlyph:

RectGlyph_01RectGlyph_02
Two different RectGlyph settings
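The post doesn't show the RectGlyph internals, but the general idea of a rectangular glyph generator can be sketched as random axis-aligned strokes on a small grid. Everything here is my own illustration, not the project's code:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of a RectGlyph-style generator: a glyph is a handful of random
// horizontal or vertical strokes between points on a small grid.
public static class RectGlyphSketch
{
    // Each stroke is a pair of grid points; a glyph is a list of strokes.
    public static List<(Vector2Int a, Vector2Int b)> Generate(
        int cols, int rows, int strokes, System.Random rng)
    {
        var glyph = new List<(Vector2Int, Vector2Int)>();
        for (int i = 0; i < strokes; i++)
        {
            var start = new Vector2Int(rng.Next(cols), rng.Next(rows));
            // Axis-aligned: extend either horizontally or vertically,
            // which is what gives the glyphs their blocky, rune-like look.
            bool horizontal = rng.Next(2) == 0;
            var end = horizontal
                ? new Vector2Int(rng.Next(cols), start.y)
                : new Vector2Int(start.x, rng.Next(rows));
            glyph.Add((start, end));
        }
        return glyph;
    }
}
```

Varying the grid size and stroke count is enough to produce very different "settings" like the two shown above.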

The interface came shortly after the RectGlyph was done, as I was trying to troubleshoot work on the PolarGlyph. It made it easier to see what sort of variations could be had, but it also made debugging more visual (which really helps me a lot).

I’ve always been fascinated with languages, both real and imagined. As I was working toward my PolarGlyph idea, I stumbled upon a few happy accidents, such as the RunicGlyph.

RunicGlyph_01RunicGlyph_02
Two RunicGlyph settings

And also the AngularGlyph:

AngularGlyph_01AngularGlyph_02
Two AngularGlyph settings

And eventually worked out the kinks for the PolarGlyph:

PolarGlyph_02PolarGlyph_01
Two PolarGlyph settings
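A PolarGlyph presumably works the same way but in polar coordinates. Again, this is only a guess at the approach, with made-up names:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of a PolarGlyph-style generator: stroke endpoints are chosen as
// (ring, spoke) pairs and converted to Cartesian points for drawing.
public static class PolarGlyphSketch
{
    public static List<Vector2> Generate(
        int points, int rings, int spokes, float maxRadius, System.Random rng)
    {
        var path = new List<Vector2>();
        for (int i = 0; i < points; i++)
        {
            // Snapping radius to a ring and angle to a spoke keeps strokes
            // aligned, so glyphs in a set share a consistent structure.
            float r = maxRadius * (rng.Next(1, rings + 1) / (float)rings);
            float a = (rng.Next(spokes) / (float)spokes) * 2f * Mathf.PI;
            path.Add(new Vector2(r * Mathf.Cos(a), r * Mathf.Sin(a)));
        }
        return path;
    }
}
```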

I have a few others being worked on, as well as some ideas for an editor, so you can take your randomly generated glyphs and add line segments to, or remove them from, any of the glyphs in the set.

My pie-in-the-sky idea is to also be able to save them as a TrueType font so that they can be used in Unity (or anywhere), and possibly to save them as an SVG or vector sheet for use in various vector-based software.

It’s been a fun side project so far.

What to do, what to do…?

Still playing with some new Unity 2020 features, still dabbling on Labyrintheer as well as a few other projects. Learning a bit of Machine Learning just for fun, figuring out the ins and outs of HDRP and RT, and generally using this lovely COVID pandemic as time to reset a bit and figure out what I actually want to develop sooner rather than later.

For whatever it may be worth to you, if you are interested in Machine Learning, either specific to Unity, or more generally, I cannot recommend Penny deByl’s courses on Udemy highly enough. All of her courses are great, and the ML and AI courses are no different.

https://www.udemy.com/course/machine-learning-with-unity/

While the course is ostensibly about Unity and the development takes place within Unity, it isn’t specific to the Unity ML-Agents (though there are sections for that). The bulk of the course is geared toward developing your own agents and brains in C#, which is fantastic whether you want to use Unity’s ML-Agents or not.

In the immediate now, I’ve been working on some Unity code to create a system of CCTVs and monitors to show them. It’s a core component of a potential game idea I’m futzing with at the moment. I expect to have some pictures or videos in the next week or two to show off. Until then, Happy Thanksgiving 2020.
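One common way to wire a CCTV camera to an in-game monitor is a RenderTexture: the camera renders into the texture, and the monitor's screen material displays it. I don't know that this is the exact approach used here, and the component names below are mine, but a minimal sketch looks like:

```csharp
using UnityEngine;

// Sketch: route a camera's output onto a monitor mesh via a RenderTexture.
public class CctvFeed : MonoBehaviour
{
    public Camera cctvCamera;        // the camera acting as the CCTV
    public Renderer monitorScreen;   // the monitor's screen mesh

    void Start()
    {
        var feed = new RenderTexture(512, 512, 16);
        cctvCamera.targetTexture = feed;            // camera draws into the texture
        monitorScreen.material.mainTexture = feed;  // monitor displays the feed
    }
}
```

Swapping which RenderTexture a monitor shows is then just a material assignment, which makes cycling feeds cheap.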

Playing with ray-tracing, Pt. 1

Back to working on Labyrintheer. But it’s Unity 2020, and I’ve been interested in playing with ray-tracing (RTX), so I started a new project, brought in some old assets and started toying with it.

The first entry is the trusty Gel Cube (from InfinityPBR) with RTX materials applied. This one is lit only by its internal point light, which fades out when it dies. These are both of the attack animations and the death animation without scene lighting:

The next is with scene lighting. This is where RTX really shines with colored shadows cast through the transparent material of the GelCube:

The video quality isn’t as good as I’d hoped – need to work on that a bit.

Really, working with RTX in Unity’s HDRP isn’t terribly difficult, but there are a variety of gotchas that make it a bit of a headache, and materials are set up significantly differently (as are lights and scene volumes and…) That said, I plan to work on a few creatures first, just to get a feel for it all, then move on to bringing in the dungeons under RTX. Should be fun!

WIP: Wednesday Thursday

I’ve been working on a few items, the most recent of which has been lootable chests that have open and close animations and audio.  Animations and audio are not something I have a lot of experience with, but the chest is thankfully fairly simple.

Right now I’ve defined a very specific Chest Controller, though I suspect I will transform this into a Usable Object Controller that takes an enum of its usability type (for environmental objects) like open/close, on/off, pick or gather (for reagents), and the like.  The video above is super simple and still a WIP.
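The generalized controller could look something like this. This is a sketch of the idea rather than the actual script; the enum values mirror the types mentioned above, but all names are illustrative:

```csharp
using UnityEngine;

// Interaction types for environmental objects, as described above.
public enum UsableType { OpenClose, OnOff, Pick, Gather }

// Sketch of a generalized Usable Object Controller: the enum drives
// which behaviour runs when the player interacts.
public class UsableObjectController : MonoBehaviour
{
    public UsableType usableType;
    private bool state;   // open/on = true, closed/off = false

    public void Use()
    {
        switch (usableType)
        {
            case UsableType.OpenClose:
            case UsableType.OnOff:
                state = !state;   // toggle, then play the matching animation/audio
                break;
            case UsableType.Pick:
            case UsableType.Gather:
                // hand the item or reagent to the player's inventory here
                break;
        }
    }
}
```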

Currently, the player will interact simultaneously with every interactable object within a radius.  The next step is to raycast or use a hidden collider to find the object most centered in front of the player and prevent interactions with objects behind the player.
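The "most centered in front of the player" test can be done without a raycast at all: compare each candidate's direction against the player's forward vector with a dot product, and ignore anything beside or behind. A sketch, assuming interactables carry an "Interactable" tag (my invention for the example):

```csharp
using UnityEngine;

// Sketch: among interactables in range, pick the one best aligned with
// the player's facing direction; objects behind the player are excluded.
public class InteractionSelector : MonoBehaviour
{
    public float radius = 2f;

    public Collider FindTarget()
    {
        Collider best = null;
        float bestDot = 0f;   // dot <= 0 means beside or behind the player
        foreach (var col in Physics.OverlapSphere(transform.position, radius))
        {
            if (!col.CompareTag("Interactable")) continue;
            Vector3 dir = (col.transform.position - transform.position).normalized;
            float dot = Vector3.Dot(transform.forward, dir);
            if (dot > bestDot) { bestDot = dot; best = col; }
        }
        return best;
    }
}
```

A raycast or hidden collider would add line-of-sight filtering on top of this, so the two approaches combine naturally.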

It’s a start.

Work in Progress – 2017/10/17

A lot of core work on the project has been done over the past couple of weeks – and by core I mean behind the scenes.  Yeah, I say that a lot and rarely have something new to show, and this is one of those times.

However, as an independent developer, that’s how a lot of this process goes. I fixed the helium filled GelCube issue, and they seem to be functioning fairly well on their NavMesh.  Part of this was competing systems causing race conditions.  I toyed with the idea of using physics to drive them, disabling some of the Unity AI stuff, but that led me into a rabbit hole that I wasn’t in the mood to contend with.  Instead, I resolved the conflicts and they are now pathing along as they should.

I also added a physics material for them.  They’re made out of ice, and should slide around a bit more than other GelCubes and mobs in the game.  I’m also testing a fun bit where they may have items from their biome shoved inside them…  I mean, they’re likely to pick things up as they move about and grow, right?  I’m hoping this is a small detail that is cool in the long run.  We shall see.

I’ve started setting realistic masses for Rigidbodies as well.  The player has a mass of 81kg, and a full-sized GelCube has a mass of 5444kg (who knew they were so heavy, but it does make sense).

Otherwise it’s been more boring: some maintenance to remove old assets that are no longer in play, some scripting for torches, creating static materials from SBSARs now that I have the walls and floors in a good place.  Oh, and modifying the Dungeon Architect DungeonRuntimeNavigation.cs file to support multiple meshes for multiple agents.  If you use DA and are interested, you can grab that here.

DungeonRuntimeNavigation Inspector

Basically you use an int[] to specify the NavAgentIDs that all your mobs will use and it builds all of their meshes and navigation data at the same time.  I know I’m not the only one who wondered for a while why I couldn’t use other NavMeshAgents in my Unity project.
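The gist of that change, sketched with Unity's low-level NavMesh API rather than Dungeon Architect's actual code (which I can't reproduce here): loop over the agent type IDs and bake one NavMeshData per agent from the same build sources.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.AI;

// Sketch: build and register a separate navmesh for each agent type ID,
// mirroring the int[] NavAgentIDs approach described above.
public class MultiAgentNavBuilder : MonoBehaviour
{
    public int[] navAgentIDs;   // agent type IDs for all your mobs

    public void BuildAll(List<NavMeshBuildSource> sources, Bounds bounds)
    {
        foreach (int id in navAgentIDs)
        {
            // Each agent type (radius, height, step height...) gets its own mesh.
            NavMeshBuildSettings settings = NavMesh.GetSettingsByID(id);
            NavMeshData data = NavMeshBuilder.BuildNavMeshData(
                settings, sources, bounds, Vector3.zero, Quaternion.identity);
            NavMesh.AddNavMeshData(data);
        }
    }
}
```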

How To See Your Player… Making Walls Transparent

Over the past several months working on the Dungeon theme for Labyrintheer, I’ve changed my camera angle several times.  I keep moving it higher to prevent walls and such from occluding the player, but I’m never happy with such an oblique view.  So, over the past few days I’ve been looking at options to make walls transparent when they are between the player and the camera.

Some solutions simply disable the geometry.  This isn’t acceptable for my game, and I suspect for many.  You could accidentally walk backwards out of the playable area, or an errant AI could take a bad turn during its pathing and fall off the world.  Plus, disabling geometry just doesn’t seem like an elegant solution.  My primary goal (and I’m still working on it) is to use a shader for this directly, though that seems like it has some major pitfalls (how do you tell a shader about an object other than the one that it’s drawing?).

So, for now I’m cheating with a very small amount of code and an extra material for objects that I want to hide.

Basically, I’ve duplicated the four wall materials I have, and the duplicate materials use transparency with an alpha value of 100.

My player controller script now calculates its distance from the camera every frame (though this could probably be done once in Awake(), since the distance should be fairly static), like this:

     void Update ()
     {
         GetInput();
         ProcessInput();
 
         // Squared distance avoids a square root; comparisons still work.
         distanceSquared = (transform.position - Camera.main.transform.position).sqrMagnitude;
     }
 
     public float Dist()
     {
         return distanceSquared;
     }

Then created a script to go on the walls (or any object that needs to be transparent to prevent occlusion), as such:

TransMaterialSwap.cs

 using UnityEngine;
 
 public class TransMaterialSwap : MonoBehaviour {
 
     public Material _original;      // the wall's normal material
     public Material _transparent;   // the see-through duplicate
     private GameObject player;
     private playerController pC;
     private Renderer rend;
 
     void Start()
     {
         player = GameObject.FindWithTag("Player");
         pC = player.GetComponent<playerController>();
         rend = GetComponent<Renderer>();
     }
 
     void Update()
     {
         // If this object is closer to the camera than the player is,
         // it's potentially occluding the player, so swap to transparent.
         if ((transform.position - Camera.main.transform.position).sqrMagnitude < pC.Dist())
         {
             rend.material = _transparent;
         }
         else
         {
             rend.material = _original;
         }
     }
 }

In the inspector I set both the original material and the transparent material.  If the object is between the camera and the player, it switches the object’s material to the transparent material.  It looks like this:

There are a few issues here.  First, I still need to profile this to see if the solution hurts runtime performance.  I don’t suspect it’ll be TOO bad, but it doesn’t hurt to check, especially with larger maps.  I may look into options to only run the check when the object is visible to the camera, rather than always checking in Update(), every frame, for every wall.  The other issue is that by making the wall transparent, light comes through.  I’m not sure how big an issue this will be – it’ll require some play testing.  But it may be a problem in some situations.
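One way to skip the per-frame check for walls the camera can't see: Unity calls OnBecameVisible/OnBecameInvisible on a component whenever its renderer enters or leaves any camera's view, so the distance test can be gated on a visibility flag. A sketch only, which would be merged into TransMaterialSwap rather than kept as its own component:

```csharp
using UnityEngine;

// Sketch: gate per-frame occlusion checks on renderer visibility.
public class VisibilityGate : MonoBehaviour
{
    private bool isVisible;

    void OnBecameVisible()   { isVisible = true; }
    void OnBecameInvisible() { isVisible = false; }

    void Update()
    {
        if (!isVisible) return;   // off-screen walls skip the check entirely
        // ... run the camera/player distance comparison here ...
    }
}
```

One caveat: in the editor these callbacks also fire for the Scene view camera, so profiling should be done in a build or with the Scene view closed.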

Lastly, as I said, I really do want to attempt this in a shader.  I figure it’s a good method to learn shader programming, even if exactly what I want isn’t possible.