After getting the debacle with my git repo fixed up, I decided to work on some shader stuff. I’ve never made a shader before, so I started with Makin’ Stuff Look Good in Unity, a great series of tutorials for Unity devs.
I started out with the tutorial on Winston’s barrier from Overwatch and this is what I had:
It looked pretty cool, but it wasn’t much different from the tutorial’s version, and the hex pattern looks far more sci-fi than fantasy RPG. I made some modifications using temporary PNG assets whose RGB channels I reworked, and ended up with this:
Much better – arcane symbols work much more nicely than the hexes. Of course, this will be developed over time to get a better effect, then slightly modified for each of the different elements (there are 12 of them in the game).
Anyone following this at all might be asking: “why the heck are you working on shaders and effects when the game systems aren’t done yet?” I think it’s a valid question. Most blogs and books on game development seem to suggest it’s better to get functionality in first, then make things look good. There’s probably wisdom in that, and I’m sure it works for a lot of people – maybe most people. But I thrive in chaos. I also have a bit of the ADD. And being a (semi-)solo developer, I have to really work on all of the things, so… sometimes I jump around so I don’t get bored, or when I get stuck on something and want to revisit it later. There’s nothing wrong with this. Always find the workflow that works for you, rather than trying to fit yourself into the workflow you read or learned about.
I’ve been working on the core combat mechanics for the past week or two, and have finally got things feeling decent – at least to start with. I wanted to do something modular so that over time it would not only be easy to tweak, but also easy to add additional features. So, here’s a sample of what I have so far:
The base element here is the DamagePackage, which contains the raw damage amounts (the DamageArray – one value for each of the 12 possible damage elements), the cause of the damage (the DamageVehicle – e.g., weapon, creature, trap, environment, effect), any damage-over-time (DoT) effects, and any special effects from the attack (a chance to be set ablaze, frozen, put to sleep, silenced, etc.).
There is a slight error in the diagram above, I notice as I type this all out – the CharacterStats are actually brought in at the Talker and Listener levels.
At any rate, the DamagePackage moves to the DamageTalker, which takes in the attacker’s stats for any modifications, then sends the package along to the DamageListener on the attacked actor. The Listener works in reverse: it unpacks the information from the package, applies resistances and such from the attacked actor’s stats, then applies the modified damage to the actor itself.
The beauty of this system is its flexibility. The downside is that different components reside at different levels of the actors and their equipment. For instance, on the player, the stats and the listener are on the root, but the talker and all of its constituent parts live on each weapon (or spell). For a non-humanoid creature, almost everything lives at the root level. Except listeners – the listener must always reside on the same layer the collider lives on. Well, it doesn’t have to, but it makes things worlds easier. So… this is something I’m still trying to figure out. Honestly, I could probably package this up for the Unity Asset Store once I get it all cleanly situated. I’m happy with where it’s at now, but it has a LOT of work still ahead of it to be a nice, simple-to-use package.
Of course, the system needs to be set dirty whenever an actor’s stats change, or when weapons or armor that might impact stats are swapped out.
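To make the Talker/Listener flow concrete, here’s a minimal sketch of the idea. The actual game is C#/Unity and none of this is the project’s real code – the class and field names (DamagePackage, talk, listen, the stats dictionaries) are my own stand-ins – but it shows the pipeline: raw per-element damage goes in, the talker applies attacker bonuses, and the listener applies defender resistances before the final number hits the actor.

```python
from dataclasses import dataclass, field

NUM_ELEMENTS = 12  # the game's 12 damage elements


@dataclass
class DamagePackage:
    amounts: list                                 # the DamageArray: raw damage per element
    vehicle: str = "weapon"                       # the DamageVehicle: weapon, creature, trap, ...
    effects: list = field(default_factory=list)   # DoT / status-effect chances


def talk(package, attacker_stats):
    """DamageTalker: apply the attacker's per-element damage bonuses."""
    bonuses = attacker_stats.get("bonus", [1.0] * NUM_ELEMENTS)
    boosted = [amt * bonuses[i] for i, amt in enumerate(package.amounts)]
    return DamagePackage(boosted, package.vehicle, package.effects)


def listen(package, defender_stats):
    """DamageListener: unpack the package and apply the defender's resistances."""
    resists = defender_stats.get("resist", [0.0] * NUM_ELEMENTS)
    resisted = [amt * (1.0 - resists[i]) for i, amt in enumerate(package.amounts)]
    return sum(resisted)  # final damage applied to the actor


# Example: 10 raw fire damage, attacker has a 2x fire bonus,
# defender has 50% fire resistance -> 10 damage lands.
pkg = DamagePackage([10.0] + [0.0] * 11)
final = listen(talk(pkg, {"bonus": [2.0] + [1.0] * 11}),
               {"resist": [0.5] + [0.0] * 11})
```

Keeping the package a dumb data container like this is what makes the system easy to bolt new features onto – the talker and listener can each grow new modification steps without the other caring.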
The other day I did a little post with some images showing the difference between the game’s early incarnation and where I’m at with it today. Here’s another small dose of that in the form of YouTube videos showing the torch lighting in the game.
The old setup:
The new setup:
I was actually able to reuse some code and particle effect info from the original torches. Even though the old game view was from a top-down 2D camera, the torches themselves were 3D assets with particle effects, just viewed from the top. But yeah, this is a pretty big change, and I can’t complain at all about how it looks now compared to before.
I posted these images on Facebook a couple of weeks ago, and they somehow never made it here. After struggling with a few things related to graphics and map/level generation, I feel like I finally broke through the wall that was holding me back. I started working on character creation and mobs, and am finally starting on some game mechanics after months of scrapping attempt after attempt.
I figured one of the core ideas behind Labyrintheer was, well… labyrinths: mazes, dungeons, et cetera. If I couldn’t get those done right, or at least make some progress toward them, then the rest of it was all for nothing.
This image shows progress on level generation. There were actually a few iterations that don’t show up here – I’m not even sure I have screenshots of them. Maybe at some point I’ll look back and try to find a few. It really started out with an idea that I wanted to create levels using cellular automata. And I did. And the results were simply too organic. I made a small, playable game that would randomly generate a new map every time the spacebar was pressed, spawn the player into one of eight edge/corner areas, and create a goal that was as far away from the player as possible.
Map generation was very quick, even with large maps, but trying to skin everything immediately made me realize it just didn’t feel right. At this point, though, I wanted a true top-down 2D game (think Legend of Zelda on the NES).
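For anyone curious what the cellular-automata approach looks like, here’s a sketch of the classic version of the technique – this isn’t my generator’s code, just the standard recipe: randomly fill a grid with walls, then run a few smoothing passes where each cell becomes a wall if enough of its neighbours are walls. It’s exactly this smoothing that produces the very organic, cave-like shapes I mentioned.

```python
import random


def generate_cave(width, height, fill_prob=0.45, steps=4, seed=None):
    """Classic cellular-automata cave: random fill, then smoothing passes.

    Cells are 1 (wall) or 0 (floor); the border is always wall.
    """
    rng = random.Random(seed)
    grid = [[1 if x in (0, width - 1) or y in (0, height - 1)
             or rng.random() < fill_prob else 0
             for x in range(width)] for y in range(height)]

    for _ in range(steps):
        new = [row[:] for row in grid]
        for y in range(1, height - 1):
            for x in range(1, width - 1):
                # Count walls among the 8 surrounding cells
                walls = sum(grid[y + dy][x + dx]
                            for dy in (-1, 0, 1)
                            for dx in (-1, 0, 1)) - grid[y][x]
                new[y][x] = 1 if walls >= 5 else 0
        grid = new
    return grid


cave = generate_cave(40, 25, seed=42)
```

Tuning `fill_prob` and the number of smoothing steps changes how open or cramped the caves feel, which is part of why the technique is so much fun to play with.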
After that experiment, I started looking at graph grammars and how they could apply to attaching branching groups of pre-made dungeon sections together. I started down this road using Tiled and Tiled2Unity to import simple square rooms that had varying exits and would connect via those exits (top-left). The upside to this approach was that maps generated just as fast as the CA maps, were much less organic, and had order (top-right). Green rooms came first, then yellow, then red. After that, a purple room was created (boss room) and then all dead ends were capped off (blue rooms). This approach actually introduced a few problems that I’ll now have to resolve entirely.
The downside to this approach was that it felt TOO inorganic. Even using a wider variety of rooms and hallways to build a dungeon via graph grammars left it feeling too symmetrical. Still, I was able to use it to test random placement of torches in a dungeon, with those torches as the sole source of light (bottom-left).
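The generation-by-generation growth (green rooms, then yellow, then red) can be sketched very roughly like this. This is a deliberately toy version under my own assumptions – rooms as grid cells, four exits each, names like `grow_dungeon` invented for the example – not a real graph grammar or the code I used, but it shows why the output has such visible order: every room knows exactly which “wave” placed it.

```python
import random

DIRS = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0)}


def grow_dungeon(generations=3, exits_per_room=2, seed=None):
    """Grow a dungeon wave by wave, attaching new rooms at open exits.

    Returns a dict mapping room position -> the generation that placed it
    (0 = start room, 1 = 'green', 2 = 'yellow', 3 = 'red', ...).
    """
    rng = random.Random(seed)
    rooms = {(0, 0): 0}
    frontier = [(0, 0)]
    for gen in range(1, generations + 1):
        next_frontier = []
        for pos in frontier:
            # Pick a few exit directions and attach rooms into empty space
            for d in rng.sample(list(DIRS), exits_per_room):
                dx, dy = DIRS[d]
                new_pos = (pos[0] + dx, pos[1] + dy)
                if new_pos not in rooms:
                    rooms[new_pos] = gen
                    next_frontier.append(new_pos)
        frontier = next_frontier
    return rooms


layout = grow_dungeon(seed=7)
```

A real pass would then pick a boss room from the last wave and cap every remaining open exit, which is the purple/blue step described above.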
Still, something new needed to be done. I decided that an angled view of a 3D world would fix a lot of issues, many of them related to physics, line of sight (LoS) and environments. I ended up trying, and am currently using, Dungeon Architect with Unity to create the maps (bottom-right). This has allowed me MUCH greater freedom to customize pseudo-random levels. So far I’m quite pleased.
The next part of my struggle was related to assets. I presumed there were two options – pay through the nose for an artist to make custom assets, which is what I preferred, or use free/cheap assets and risk the game looking very similar to other games out there. Neither of those was a great option.
The top-left (yeah, very similar to the first top-left, right?) shows some very 8-bit stylings, which would’ve been fine if they weren’t so utterly generic. I ended up using Photoshop to create the top-right tileset for Tiled back when I thought I’d use shape grammars. It actually wasn’t too bad for someone who isn’t an artist. Multi-directional blending of regular, light, and dark dirt floors gave me pretty good control to paint tiles as I saw fit within those parameters. It still just didn’t feel right.
The bottom-left shows some early work (two months ago or so) with Dungeon Architect, some great assets from InfinityPBR that I somehow screwed up royally, and a REALLY bad camera design. The bottom-right is more or less the current look of the dungeon biome for the game. I have tons of biomes still to design, and tons and tons of stuff to still bring into the dungeon biome, but I have a good enough playable space that feels and looks “right” that I’ve gone, as I said, into the mechanics of things for a while so that I can do greater play-testing as I move along.
As a note to anyone reading this because they are embarking on their own game development adventure: just because I stopped using some tools or processes does not mean they aren’t great tools. They just weren’t right for this project. Tiled is a fantastic tool for designing tile-based maps for many, many types of games, and can deal with top-down, isometric, and even hexagonal tile systems. Tiled2Unity is a lifesaver if you’re using Tiled for a project in Unity, and Sean is incredibly helpful and quick to respond to things. Cellular automata is… well, it’s just fun. There’s tons you can do with it, and making EXTREMELY organic-feeling maps is one of those things. If you’re developing a game set in caves or other natural systems, give it a shot. Graph/shape grammars are also really cool, and you can do a lot of different things with them. In fact, I still might use them for custom weapon creation, and even if I don’t, I have another game idea for someday where shape grammars would be a perfect fit.
In the end, don’t be afraid to throw away a few months’ worth of work because it isn’t working. Don’t let it get you down, either. Sometimes that’s just the way of things. And I had heard it before it happened to me – it’s different when it is actually happening to you. Just know that many, many people have been there, too, and great things can come from it.
As soon as I started understanding BlendShape options for some of these models, I really wanted to dig into programmatically using them for some interesting effects. First up is a gelatinous cube that starts out melted and waits for a player to get into proximity. Currently, it only moves to its idle animation, but there will be FIERCE combat eventually. And not all gel-creatures will start in this state. Some will likely roam around, hungry for adventurers. Some might fall from the ceiling with a plop, or squish out of a hole. And this is just one type of critter that will be around to sway our Labyrintheer from his or her mission.
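In Unity the real work happens in C# via SkinnedMeshRenderer.SetBlendShapeWeight, but the per-frame logic for the melted cube is simple enough to sketch on its own. This is just an illustration with made-up names (`update_melt_weight`, the radius and speed values are arbitrary): assume a “melted” blend shape where weight 100 is the fully melted puddle and 0 is fully risen, and ease the weight toward 0 once the player wanders into range.

```python
import math


def update_melt_weight(weight, player_pos, cube_pos,
                       trigger_radius=5.0, rise_speed=50.0, dt=0.02):
    """Ease the 'melted' blend-shape weight (100 = puddle, 0 = risen)
    toward 0 once the player is within the trigger radius."""
    dist = math.dist(player_pos, cube_pos)
    if dist <= trigger_radius:
        weight = max(0.0, weight - rise_speed * dt)  # rises over ~2 seconds
    return weight


# Simulate the player standing in range: the cube fully 'un-melts'.
w = 100.0
for _ in range(200):  # 200 frames at dt=0.02 -> 4 simulated seconds
    w = update_melt_weight(w, (1.0, 0.0), (0.0, 0.0))
```

In the game itself this kind of update would run in a MonoBehaviour’s Update loop, feeding the eased value straight into the blend-shape weight each frame.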
So, after a considerable amount of back and forth, I’ve decided to go with a fully 3D world and a fixed isometric camera. The 2D method was intriguing, but ran up against a few issues:
1. Physics in 2D isn’t quite as… real? I’m sure 2D physics can be made to feel real to some degree, but our worldly physics exists in a 3D world, and I just wasn’t a fan of the flat feeling.
2. Art in 2D is harder to make and get made. This one surprised me the most. It seems far easier to create and find 3D models and assets than 2D. It’s even easier to find 3D modelers than it is to find sprite artists.
3. Partly related to #1, lighting, LoS, and other similar bits are much more difficult in 2D (for me, at least). These things sort of exist naturally in 3D worldspace; they seem very shoe-horned into 2D.
So, I’ve found some tools and assets that are helping me move this along. After spending months (literally) trying different methods to create the levels, I’ve settled on a tool crafted by Code Respawn called Dungeon Architect. It’s a fairly extensible dungeon/level generation tool that is giving me playable spaces out of the box. Over time I will need to modify the underlying code a bit, but for now it is a HUGE time saver. If you’re interested in checking it out, they also make a version for the Unreal Engine here.
One of my other struggles was art. I am not an artist. I can modify art like crazy, but aside from my past experience as a semi-professional photographer, creating new art is difficult for me. That led to a big concern: I could pay thousands of dollars that I don’t currently have for artists to create fresh new content, or I could find cheap, existing assets and have a game that looks like other games that use those assets. Neither is a great option. Lo and behold, a third option presented itself via the absolutely amazing and customizable assets from InfinityPBR: tons of 3D models, materials, and substances that are extremely malleable right inside the Unity engine. The brick wall in the image below is created from their Brick Generator. The options are incredible enough that I spent an entire evening – literally hours – making, remaking, and fine-tuning a wall just the way I wanted it. The image below isn’t great (it’s not runtime; it looks better in the game), but it’s a fine example of some of the stuff from InfinityPBR. The torch is also theirs, with a few particle effects and a point light added for effect.
If you are trying to break into game development and need assets, I cannot recommend InfinityPBR highly enough. The assets range from US$45-60 each, but each asset isn’t a static thing – it’s a small library of things. They also offer a $25/package option to pick up every new package (seems like a couple each month) at a HUGE discount the day they come out, before they are available on the Unity store. Even if you get some assets you might not use now, it never hurts to have a great 3D library, and at that price you really cannot go wrong.