Messing With Shaders – Realtime Procedural Foliage

 

ivy_close.png
The programmable rendering pipeline is perhaps one of the largest advances in the history of realtime computer graphics. Before its introduction, graphics libraries like OpenGL and DirectX were limited to the “fixed function pipeline”: a programmer would shove in geometric data, and the library would draw it however it saw fit. Developers had little to no control over the output of their application beyond a few “render mode” settings. This was fine for rendering relatively simple scenes, solid objects, and simplistic lighting, but as visual fidelity increased and hardware became more powerful, it quickly became necessary to allow for more customizable rendering.

The process of rendering a 3D object in the modern programmable pipeline is typically broken down into a number of steps. Data is copied into fast-access graphics memory, then transformed through a series of stages before the graphics hardware eventually rasterizes that data to the display. In its most basic form, there are two of these stages the developer can customize. The “Vertex Program” manipulates data on a per-vertex level, such as positions and texture coordinates, before handing the results on to the “Fragment Program”, which is responsible for determining the properties of a given fragment (think of a pixel that carries more than just color information). The addition of just these two stages opened the floodgates for interesting visual effects: approximated reflections on metallic objects, cel-shading for cartoon characters, and more! Since then, even more optional stages have been inserted into the pipeline for an even greater variety of effects.
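
For anyone who hasn't run into them before, a minimal vertex and fragment program pair looks roughly like the sketch below, written in the Unity-style Cg/HLSL used later in this post. The names here are purely illustrative.

// A bare-bones vertex/fragment pair. UnityObjectToClipPos comes from
// Unity's UnityCG.cginc; everything else is defined right here.
sampler2D _MainTex;

struct appdata { float4 vertex : POSITION; float2 uv : TEXCOORD0; };
struct v2f     { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

// Vertex program: runs once per input vertex, moving it into clip space.
v2f vert (appdata v) {
   v2f o;
   o.pos = UnityObjectToClipPos(v.vertex);
   o.uv  = v.uv;
   return o;
}

// Fragment program: runs once per fragment, deciding its final color.
float4 frag (v2f i) : SV_Target {
   return tex2D(_MainTex, i.uv);
}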

I’ve spent a considerable amount of time experimenting with vertex and fragment programs in the past, but this week I decided to spend a few hours working with the other, less common stages, namely “Geometry Programs”. Geometry programs are a more recent innovation, and have only begun to see extensive use in the last decade or so. They essentially allow developers to not only modify vertex data as it’s received, but to construct entirely new vertices based on the input primitives (triangles, quads, etc.). As you can easily imagine, this presents incredible potential for new effects, and is something I personally would like to become more experienced with.

In four or five hours, I managed to write a relatively complex effect, and the rest of this post will detail, at a high level, what I did to achieve it.

ivy_distant.png

Procedurally generated geometry for ivy growing on a simple building.

This is my procedural Ivy shader. It is a relatively simple two-pass effect which will apply artist-configurable ivy to any surface. What sets this effect apart from those I’ve written in the past is that it actually constructs new geometry to add 3D leaves to the surface extremely efficiently.

One of the major technical issues when it comes to rendering things like foliage is that the level of geometric detail required to accurately represent leaves is quite high. While a digital environment artist could use a 3D modeling program to add in hundreds of individual leaves, this is not necessarily a good use of their time. Furthermore, it quickly becomes unmaintainable if anyone decides that the position, density, or style of foliage should change in the future. I don’t know about you, but I don’t want to be the one to have to tell a team of environment artists that all of the ivy in an entire game needs to be slightly different. In this situation, the key is to work smarter, not harder. While procedural art is often controversial in the game industry, I think most developers would agree that artist-directed procedural techniques are an invaluable tool.

Ivy_Shader_Steps.png
My foliage effect is composed of two separate rendering passes. In the first, a triplanar-mapped base texture is blended onto the object based on the desired density of the ivy. This helps to make the foliage feel much more dense, and hides the seams where the leaves meet the base geometry.
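
That first pass boils down to something like the sketch below. The texture and property names here are placeholders rather than the exact ones in my shader, and the density value ultimately comes from painted vertex colors, which I'll get to later.

// Triplanar blend: project the base texture along each world axis and
// weight the three projections by the surface normal.
float3 blend = abs(normalize(worldNormal));
blend /= (blend.x + blend.y + blend.z);

float4 xProj = tex2D(_IvyBaseTex, worldPos.zy);
float4 yProj = tex2D(_IvyBaseTex, worldPos.xz);
float4 zProj = tex2D(_IvyBaseTex, worldPos.xy);
float4 ivyBase = xProj * blend.x + yProj * blend.y + zProj * blend.z;

// fade the ivy base onto the underlying surface by the desired density
float4 col = lerp(surfaceColor, ivyBase, saturate(ivyDensity));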

Next, in the second rendering pass, the geometry program transforms every input triangle into a set of quads lying on that triangle with a uniform, pseudo-random distribution. First, it is necessary to determine the number of leaf quads to generate. In order to maintain a consistent density of leaf geometry, the surface area of the triangle is calculated using the “half cross-product formula”, and is then multiplied by the desired number of leaves per square meter. Then, for each of these leaves, a random sample point on the triangle is picked and a triangle strip is emitted. The sample point comes from a noise function seeded with the world-space centroid of the triangle and the index of the leaf quad being generated. These noise values are used to generate barycentric coordinates, which in turn are used to interpolate the position and normal of the triangle at that point, essentially returning a random world-space position and its corresponding normal vector.
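
In code, that sampling step looks something like this. Here leavesPerSquareMeter and rand2 are stand-ins for my material property and noise function, and v0–v2 and n0–n2 are the world-space positions and normals of the input triangle.

// Leaf budget for this triangle: surface area via the half cross-product
// formula, scaled by the desired leaf density.
float area = 0.5 * length(cross(v1 - v0, v2 - v0));
int leafCount = (int)ceil(area * leavesPerSquareMeter);

float3 centroidWS = (v0 + v1 + v2) / 3.0;

for (int leaf = 0; leaf < leafCount; leaf++) {
   // two pseudo-random values seeded by the triangle's centroid and the leaf index
   float2 r = rand2(centroidWS, leaf);

   // fold the unit square onto the triangle so the distribution stays uniform
   if (r.x + r.y > 1.0) r = 1.0 - r;
   float3 bary = float3(1.0 - r.x - r.y, r.x, r.y);

   // interpolate a random position and normal on the triangle
   float3 wPos    = bary.x * v0 + bary.y * v1 + bary.z * v2;
   float3 wNormal = normalize(bary.x * n0 + bary.y * n1 + bary.z * n2);

   // ... orientation and quad emission continue below
}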

Now, all that’s needed is to determine the orientation of the leaf, and output the correct triangle-strip primitive. Even this is relatively simple. By using the world-space surface normal and world “up” vector, a simple “change of vector basis” matrix is constructed. Combining this with a slightly randomized scale factor, and a small offset to orientation (to add greater variety to patches of leaves), we can transform normalized quad vertices into the exact world-space positions we want for our leaves!

...

// Defines a unit-size square quad with its base at the origin. doing
// this allows for very easy scaling and positioning in the next steps.
static const float3 quadVertices[4] = {
   float3(-0.5, 0.0, 0.0),
   float3( 0.5, 0.0, 0.0),
   float3(-0.5, 0.0, 1.0),
   float3( 0.5, 0.0, 1.0)
};

...

// IN THE GEOMETRY SHADER
// Change-of-basis matrix whose columns are the leaf's basis vectors.
// It transforms the unit quad from leaf-space into world (XYZ) space.
float3x3 leafBasis = float3x3(
   leafX.x, leafY.x, leafZ.x,
   leafX.y, leafY.y, leafZ.y,
   leafX.z, leafY.z, leafZ.z
);

// constructs a random rotation matrix from Euler angles in the range 
// (-10,10) using wPos as a seed value.
float3x3 leafJitter = randomRotationMatrix(wPos, 10);

// Multiply the basis matrix by the random rotation matrix to get the
// complete leaf transformation. Note, we could use a 4x4 matrix here
// and incorporate the translation as well, but it's easier to just add
// the world position as an offset in the final step.
float3x3 leafMatrix = mul(leafBasis, leafJitter);

// lastly, we can just output four vertices in a triangle strip
// to form a simple quad, and we'll be on our merry way.
for ( int i = 0; i < 4; i ++ ) {
   FS_INPUT v;
   v.vertex = UnityWorldToClipPos( 
      float4( mul(leafMatrix, quadVertices[i] * scale), 1) + wPos 
   );
   triStream.Append(v);
}

At this point, the meat of the work is done! We’ve got a geometry shader outputting quads on our surface. The last thing needed is to texture them, and it works!

Configuration!

I briefly touched on artist-configurable effects in the introduction, and I’d like to quickly address that too. I opted to go with the simplest solution I could think of, and it ended up being incredibly effective.

venus_vertex_weights.png

Configuring procedural geometry using painted vertex weights.

The density and location of ivy is controlled through painted vertex colors. This allows artists to simply paint the sections of their model they would like to be covered in foliage, and the shader will use this to weight the density and distribution of the procedural geometry. This way, an environment artist can use the tools they’re already familiar with to quickly sketch out which parts of a model they would like to be affected by the shader. It will take an experienced artist less than a minute to get a rough draft working in-engine, and changes to the foliage can be made just as quickly!

At the moment, only the density of the foliage is mapped this way (All other parameters are uniform material properties), but I intend to expand the variety of properties which can be expressed this way, allowing for greater control over the final look of the model.

TODOs!

This ended up being an extremely informative project, but there are many things still left to do! For one, the procedural foliage does not take lighting into account. I built this effect in the Unity game engine, and opted out of using the standard “Surface Shader” code-generation system, which, while very useful in 99% of cases, is extremely limiting in situations such as this. I would also like to improve the resolution of the leaf geometry, applying adaptive runtime tessellation to the generated primitives in order to give them a slight curve, rather than rendering flat billboards. Other things, such as color variation on the leaves, could go a long way toward improving the effect, but for now I’m quite satisfied with how it turned out!

Whelp, on to the next one!

 


Packed Geometry Maps

Geometry Map.png

Here, I present a novel implementation of well-established techniques I am calling “Packed Geometry Maps”. By swizzling color channels and exploiting current compression techniques, packed geometry maps represent normal, height, and occlusion information in a single texture asset. This texture is fully backwards-compatible with standard normal maps in the Unity game engine, and can be generated automatically from traditional input texture maps through an extension to the Unity editor, reducing texture memory requirements by 2/3 and requiring no changes to developer workflow.

Normal Maps are great, but they could be better.

Normal maps are not a new invention. The initial concept was first published sometime around 1998 as a technique for storing high-frequency surface data to be re-mapped onto low-resolution geometry. Since then, normal maps in some form or another have become ubiquitous in realtime computer graphics, where the geometric resolution of models may be limited.

The basic concept is to store surface normals (the direction a surface is “facing”), in an image. When you need to know this information, such as when calculating shading on an object, you simply look up the surface normal from the texture (and depending on your implementation, re-project it into a different basis). The end result is low resolution models being shaded as though they were higher detail, capturing smaller bumps and cracks in their surface which aren’t actually represented by geometry.

Normal Map Example.png

A comparison of a simple sphere without and with normal mapping. The underlying geometry is the same for both.

Normal maps are fantastic, but as computer hardware advances, other techniques are increasingly used alongside normal mapping. Runtime tessellation, for example, produces additional geometric detail to better represent rough surfaces and produce more accurate silhouettes, at the expense of additional texture data. Ambient occlusion maps are used to represent light absorption and scattering in complex geometry, and all of these new effects require new texture data as input. The normal map should not be discounted or ignored, but there’s definitely room for improvement here!

What’s In A Normal Map?

Surface normals are typically represented as colors, using the standard RGB format. The red channel is used to represent the “x” component of the normal vector (usually the coordinate along the surface tangent), green for “y” (bitangent), and blue for “z” (normal). Together, these components form a unit-length vector, used in lighting calculations and the like, which is stored in the final normal map texture.

You might have already noticed an optimization here. If the normal vector is known to be unit-length, then why are we storing three components? In most situations, the Z component of the surface normal in tangent-space is positive, and close to one. Using these assumptions, we can remove any ambiguity in solving for the Z component, and it can be reconstructed using only the X and Y components, which contain more critical signed data.

This allows us to completely remove one color channel from our normal map texture, while still preserving all the information required to properly shade an object!
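
In shader terms, the reconstruction is just a couple of lines. Since the components satisfy x*x + y*y + z*z = 1 and we assume z is positive, z = sqrt(1 - x*x - y*y). A minimal Cg/HLSL sketch (packedNormal here is simply the sampled texel):

// Rebuild the Z component of a tangent-space normal from X and Y.
// The stored channels are in the [0,1] range, so remap them to [-1,1] first.
float2 xy = packedNormal.xy * 2.0 - 1.0;
float  z  = sqrt(saturate(1.0 - dot(xy, xy)));
float3 n  = float3(xy, z);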

In fact, the Unity game engine already does this! Unity uses a texture compression format called DXT5nm, a specific use-case of standard DXT5 compression. This confers significant memory savings at the cost of some precision and image quality. DXT5 has a fixed compression ratio, so the overall quality of the output image depends on the content of the image itself. I won’t get into details here, but images containing shades of a single color compress with higher quality than images containing multiple distinct colors. Unity disposes of the “Z” component of normal maps, and swizzles the X component into the alpha channel to take advantage of the inherent strengths of the format and reduce the number of visible artifacts after compression.

What About Those Two Extra Channels?

So, the Unity game engine simply writes zeroes into two of the color channels used by normal maps. Granted, this increases the quality of the texture due to the subtleties of DXT5 compression, but we could easily store two more low-frequency channels without a significant loss in quality. For this, I’ve opted to store ambient occlusion in red, and displacement in blue. By placing ambient occlusion (which tends to have the lowest contrast of all the input maps) in red, we can generally reduce the number of artifacts visible in the green channel of the surface normals, where they are most noticeable. Imprecision and compression artifacts in displacement and occlusion maps are also much less visible, due to the low-frequency nature of the types of data typically stored here.

Geometry Vs Traditional.png

The traditional normal, ambient occlusion, and displacement maps (left), compared to packed geometry maps (right). While geometry maps exhibit some artifacts, they’re not particularly noticeable on matte surfaces.

mirror-artifacts

Worst-case example of Packed Geometry Maps applied to a mirror sphere.

This technique tends to work quite well. The majority of the texture artifacts produced are relatively subtle on matte surfaces; however, it still looks considerably worse on highly metallic and reflective surfaces, and is therefore mostly recommended for environment textures. On mostly non-metal surfaces such as dirt, grass, or wood, the differences between a packed geometry map and a traditional multi-texture setup are largely imperceptible, and given the constant compression ratio of DXT5, a packed geometry map will require 1/3 the texture memory of the equivalent traditional input maps.

Won’t This Change Our Workflow?

As a matter of fact, no. Since packed geometry maps are essentially a combination of the color channels of various input textures, they can be produced automatically as part of the import pipeline of your game engine.

I’ve written a seamless extension to the Unity engine which will automatically build geometry maps for input textures as they are imported, updating them as source assets change, and saving them as unique standalone assets that can be incorporated in the final build.

By adding this extension to your project, geometry maps will be generated automatically as you import and update source texture maps. The utility is configurable and allows manual generation, as well as adding asset labels to generated geometry maps for easy searching. Certain directories can also be excluded if they contain textures your team does not want generating geometry maps, and the file suffixes used to identify different texture types can be configured to best suit your naming scheme. It supports live reloading when source assets are changed, and is designed to be as unobtrusive as possible.

There are a few minor issues left with the editor extension, so I’m not ready to release it just yet, but I’ll post it to GitHub in the near future!

Using these new Geometry Maps is just as simple! I’ve written a CG-Include file which defines a simple function for unpacking geometry maps, which should serve as a mostly drop-in replacement for Unity’s “UnpackNormal” function. Unlike “UnpackNormal” however, the “UnpackGeometryMap” function only requires a single texture sample for all three input maps, and returns a struct type for convenient access.

#include "GeometryMap.cginc"

// defines three values
//     geo.normal
//     geo.displacement
//     geo.occlusion
GeometryMapSample geo = UnpackGeometryMap(tex2D(_GeoMap, IN.uv));
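
For the curious, the unpacking itself boils down to something like this. This is a sketch of the idea rather than the exact contents of GeometryMap.cginc, following the channel layout described above (normal X in alpha, Y in green, occlusion in red, displacement in blue).

struct GeometryMapSample {
   float3 normal;
   float  displacement;
   float  occlusion;
};

GeometryMapSample UnpackGeometryMap(float4 packedTexel) {
   GeometryMapSample geo;

   // DXT5nm-style swizzle: normal X lives in alpha, Y in green
   float2 xy = packedTexel.ag * 2.0 - 1.0;
   geo.normal = float3(xy, sqrt(saturate(1.0 - dot(xy, xy))));

   // the two "spare" channels hold occlusion (red) and displacement (blue)
   geo.occlusion    = packedTexel.r;
   geo.displacement = packedTexel.b;
   return geo;
}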

Including Geometry Map support in your new shaders is a snap and, should you choose, geometry maps are fully backwards-compatible with Unity’s built-in normal maps, and can simply be dropped in the “Normal Map” field of any standard shader.

In Summary

By utilizing unused texture color channels, it is possible to sacrifice some image quality for considerable savings on texture memory. A single texture map can be generated to produce backwards compatible geometry maps, representing normal, displacement, and occlusion information without considerable changes to artists’ workflow. This technique is particularly applicable to realtime applications designed to run on low-end hardware where texture memory is of significant concern, and is suitable for representing most non-metal surfaces.

In the future, I’ll look into other compression techniques that don’t degrade with additional color channels. Even if the compressed size of a single map is larger, there could still be potential for savings when compared to several DXT5 textures.

Screenspace Volumetric Shadowing

Let’s get this out of the way first, you can download a demo of the effect for Windows and Mac here!

So recently I decided to learn a little more about the Unity engine’s rendering pipeline. I had a pretty good high-level idea, but the actual stages of the process were a mystery to me. “…then it does lighting…” is not necessarily a useful level of granularity.  After a week with the documentation, and the fabulous Frame Debugger added in Unity 5, I’m fairly confident in my understanding of some of the nitty-gritty of shading.

“What do we gain from knowing this? I drop a few lights in my scene and it all works just fine!” – well, with a solid understanding of the rendering and shading pipeline, it becomes much easier to extend the process of “just dropping lights in a scene” for much greater effect. This week, I wrote a screen-space volumetric shadowing effect using CommandBuffers and a few fancy shader programs.

SSVS_Comparison.png

The effect is fairly subtle, but atmospheric scattering (light scattering off of the tiny particles suspended in the air) is fairly important. It helps to contribute a sense of space to a scene, helps clue the brain in on where light sources are, and can be used to hint at the mood of a scene. Is it early morning in a dense fog, or a clear summer evening?

Now, the more eagle-eyed among you will notice that I didn’t implement atmospheric scattering. This effect is actually an attempt at the opposite: estimating the thickness of shadows in the scene, and darkening those areas to negate the scattering approximation of a uniform fog. Traditional “distance fog” is meant to simulate light scattering in a scene, blending objects in the distance into a uniform medium color. That effect is extremely cheap to compute, and fairly well established. I intended my shadowing system to be applied on top of existing art, so a complete lighting overhaul was off the table; it would require designers and artists to step through a scene and update every light they’ve placed. The effect makes very little sense in a scene like the one above, where there isn’t any fog, but when thrown into a scene with fog and lighting, it can look quite nice.

SSVS.png

SSVS with traditional “distance fog”

My approximation clearly isn’t “correct” with respect to real-world lighting. Shadows are the absence of light behind an occluder, not a painted-over darkness multiplied onto the background, but as a first draft, it looks quite nice.

So let’s look at how this was done, and what I intend to do in the future.

Technical Explanation

Unity 5.3 introduced something called Graphics CommandBuffers. These are essentially a hook into the rendering pipeline in the form of queues of rendering instructions you can inject at various points. When my scene is loaded, I would initialize and attach a CommandBuffer to “CameraEvent.BeforeLighting” for example, and whatever commands are in that buffer would be executed every time the camera prepares to render lighting. When you’re finished, and don’t want your code to be called again, you remove it from the pipeline, and it stops being executed.

My shadowing effect attaches a minimum of four such buffers, listed here in order of execution during a frame.

  1. CameraEvent.BeforeLighting
  2. LightEvent.AfterShadowMap (for each scene light)
  3. LightEvent.BeforeScreenspaceMask (for each directional light)
  4. CameraEvent.BeforeImageEffects

1) CameraEvent.BeforeLighting

CommandBuffers attached to this event are executed every frame before any lighting calculations take place.

The camera rendering the scene creates a new render target, called the “shadowBuffer” when it’s initialized, which will contain the added effects of all the lights in our scene. Every frame before any lighting takes place, this buffer is cleared to a white color. That’s the only thing done in the “beforeLighting” stage, but it’s critical to the effect working. If the buffer weren’t cleared, then the effects of the previous frame would be blended into the next, and you’d quickly get a muddy mess…

2) LightEvent.AfterShadowMap

ssvs_shadowmap

A cascaded shadowmap resulting from the shadowmap pass on a Directional Light. Color represents distance from the light source.

CommandBuffers attached to this event are executed every time a shadowmap is rendered for a particular light. Excuse my egregious use of bold, but this was one of the most critical parts of the effect, and seems to cause an extreme amount of confusion online (The working code has been reported as a bug in Unity at least 3 times now).

This means that whenever a shadowmap is rendered for a light for any reason (including the Scene View Camera in the Unity Editor!), your CommandBuffer will be called.

In this stage of the rendering pipeline, I bind the currently active render target to a global shader property. Immediately before this stage in the pipeline, the shadowmaps for the light were being rendered, so they should still be the active render target. By binding them to a shader property, I allow future stages in the pipeline to access them!

For most lights, this is all that happens and we could just render our effect here, but “directional lights” are a special case. They render screen-space masks which are sampled by the shading stage of the deferred or forward pipeline. This allows for more complex shadow filtering, and eliminates the texture lookup that might have been performed on occluded fragments. The proper transformations performed by the engine to convert world-space positions to light-space positions aren’t yet initialized for Directional Lights at this stage in the pipeline, which brings us to…

3) LightEvent.BeforeScreenspaceMask

ssvs_raymarch

The results of the raymarch shadow accumulation pass. Darker areas counted more shadowed samples.

Commands attached to this event will be executed every time a directional light is preparing to build a screenspace shadow mask as described above. It’s at this stage that Unity populates all of the transformation matrices to convert back and forth between world-space and light-space, which we conveniently happen to need. This is where the actual meat of the effect takes place.

When this happens, the attached CommandBuffer instructs the light to render a pass into the camera’s shadowBuffer we allocated and cleared earlier. During this pass, a simple raymarch is performed in screen-space, essentially stepping through the world and checking whether each sample point in 3D space is in shadow. The raymarch samples the shadowmap bound in step 2, and uses the transformation matrices bound in step 3, before counting up the number of sample points that were in shadow and rendering that as a color (black for all shadows, white for none) into the camera’s shadowBuffer. This final color is actually multiplied with the color already in the buffer, so if two shadows overlap, they will both darken the pixels in the buffer rather than overwriting each other. (A more technically correct solution would be to take the minimum of the two samples rather than multiplying them, but then I couldn’t take advantage of hardware blend modes, and would need a separate pass.)
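
Stripped of its boilerplate, the core of that pass looks something like the sketch below. The shadowmap sampler, world-to-shadow matrix, dither and world-position helpers, and the sample count are all stand-ins for whatever your implementation binds or defines, and the depth comparison direction depends on your platform.

// Raymarch from the camera to the surface visible at this pixel, counting
// how many samples fall in shadow. _ShadowMapTexture, _WorldToShadow,
// ReconstructWorldPos(), Dither(), and SAMPLE_COUNT are illustrative
// stand-ins, not the exact names used in my implementation.
float3 rayStart = _WorldSpaceCameraPos;
float3 rayEnd   = ReconstructWorldPos(i.uv);   // from the camera depth texture
float3 rayStep  = (rayEnd - rayStart) / SAMPLE_COUNT;

float shadowed = 0.0;
for (int s = 0; s < SAMPLE_COUNT; s++) {
   // a dithered offset lets a low sample count capture finer detail
   float3 samplePos = rayStart + rayStep * (s + Dither(i.uv));

   // project the sample into the light's shadowmap and compare depths
   float4 shadowCoord = mul(_WorldToShadow, float4(samplePos, 1.0));
   shadowCoord.xyz /= shadowCoord.w;
   float occluderDepth = tex2D(_ShadowMapTexture, shadowCoord.xy).r;

   // (the comparison direction depends on the platform's depth convention)
   if (occluderDepth < shadowCoord.z)
      shadowed += 1.0;
}

// white = fully lit, black = fully shadowed; this gets multiplied into the
// shadowBuffer by the blend mode described above
float lit = 1.0 - (shadowed / SAMPLE_COUNT);
return float4(lit, lit, lit, 1.0);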

4) CameraEvent.BeforeImageEffects

Now for the last stage in the effect. Raymarching is quite expensive, so most of the time the previous pass is performed at a lower resolution of 1/2, 1/4, or even 1/8. Before the user sees it, we need to blow it back up to fullscreen! Notice that there’s also quite a bit of noise in the raw raymarch data. This is because I used a dithered sampling scheme to better capture fine detail with a low sample count. We don’t want any of that in the final composite, so first we perform a two-step Gaussian blur on the low-resolution raymarch data. This softens the image and smooths out the noise. Normally this isn’t a great idea because it removes high-frequency data, but because shadows are relatively “soft” anyway, it works quite well in this case. The blur also takes into account the depth of the scene, and won’t blur together two pixels if their depths are extremely different. This is useful for preserving hard edges where two surfaces meet, or between a background object and a foreground object.
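
One direction of that depth-aware blur looks roughly like this. The tap count, Gaussian weights, and depth threshold are entirely up to taste; _CameraDepthTexture is Unity's built-in depth texture, and the other names are illustrative.

// One direction of the separable, depth-aware Gaussian blur.
float centerDepth = LinearEyeDepth(tex2D(_CameraDepthTexture, uv).r);

float sum = 0.0;
float totalWeight = 0.0;

for (int t = -TAP_COUNT; t <= TAP_COUNT; t++) {
   float2 tapUV = uv + blurDirection * t * texelSize;

   // reject taps whose depth differs too much from the center pixel, so
   // hard edges between foreground and background don't bleed together
   float tapDepth = LinearEyeDepth(tex2D(_CameraDepthTexture, tapUV).r);
   float w = GaussianWeight(t) * step(abs(tapDepth - centerDepth), _DepthThreshold);

   sum         += tex2D(_RaymarchTex, tapUV).r * w;
   totalWeight += w;
}

return sum / totalWeight;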

Lastly, we perform a bilateral upsample back to full resolution. This is pretty much a textbook technique now, but it’s still quite effective!

5?) Considerations

There are a few considerations that are very important here. First, CommandBuffers aren’t executed inline with the rest of your scripts, they’re executed separately in the rendering pipeline. Second, when you bind a commandBuffer to an event, it will be executed every time that event occurs. This can cause issues with the Scene View Camera in the editor. It is only used for setting up your scene, but it actually triggers Light Events too!

I worked around this by adding an “OnPreRender” callback to my cameras, which rebuilds the commands in the light CommandBuffers before every frame, and another in “OnPostRender” which tears them all down. This is absolutely critical, because otherwise the scene camera, and any other cameras you may not want rendering your effects, will trigger them, wasting precious resources and sometimes putting data where you don’t expect it to go. (For example, the scene camera triggered the same CommandBuffer as the game camera, causing the scene view’s shadows to be rendered into the shadowBuffer, which caused all sorts of problems!)

As long as you think critically about what you’re actually instructing the engine to do, this shouldn’t be too bad, but I lost too many hours to this sort of issue :\

Wrapping Up

And that’s about it! I hope this gave at least some high-level insight into how you could use CommandBuffers for new effects!

In the future, I’d like to extend this to a complete volumetric lighting system, rather than just a simple shadowing demo, but for now I’m quite happy with the result!

If you want to check it out for yourself, you can download a demo here!

Air, Air Everywhere.

atmosphere_graph

Atmosphere Propagation Graph from Project: Commander

 

I have a personal game project I’ve been contributing to now and again, and it seems to be slowly devolving into a case study of over-engineering. Today I’d like to talk about an extremely robust, and extremely awesome system I got working in the past few days.

The game takes place aboard a spaceship engaged in combat with another ship. The player is responsible for issuing orders to the crew, selecting targets, distributing power to subsystems, and performing combat maneuvers, all from a first-person perspective aboard a windowless ship (after all, windows are structural weaknesses, and pretty much useless for targets more than 10 km away anyway).

Being a game that takes place in space, oxygen saturation and atmospheric pressure are obviously constant concerns, and present several dangers to the player. I needed a way to model this throughout the ship in a convincing and efficient way.

 

What and Why?

We need a solution that handles a degree of granularity (ideally controllable by a designer), is very fast to update, and can handle the ambiguity of characters who may be transitioning between two areas. How can this be done?

Enter “Environment Probes”. A fairly common technique in computer graphics is the use of environment probes to capture and sample shading information in an area surrounding an object. Usually, these are used for reflections and lighting, allowing objects to blend between multiple static pre-baked reflections quickly rather than re-rendering a reflection at runtime. This same concept could be made to work with arbitrary volumetric data, rather than just lighting, and would cover many of the requirements of the atmosphere system!

So, let’s say that a designer can place “atmosphere probes” in the game world. Huzzah, all is well, but how can that data actually be used practically? Not only do we need to propagate values between probes, but characters need to be able to sample their environment for the current atmosphere values at their position, where there may or may not be a probe! Choosing just the nearest probe will introduce noticeable “seams” between areas, and still doesn’t easily give us the adjacency data we need to propagate values from one probe to the next!

lightprobestestscene-sourceselected

“Light Probes” in the Unity game engine. An artist can place probes around the environment (shown as yellow spheres), and have the engine pre-calculate lighting information at each sample.

Let’s look at the Unity game engine for inspiration. One of their newer rendering features is “Light Probe Groups”, which is used for lighting objects as described above. Their mechanism is actually quite clever. They build a Delaunay tetrahedralization of hand-placed probes, resulting in a mesh defining a series of tetrahedral volumes. These volumes can then be used to sample the probes at each of the four vertices, and interpolate the lighting data for the volume between them! In theory, this doesn’t have to just be for light. By simply generalizing the concept, we could theoretically place probes for any volumetric data!

 

Let’s Get Graphic!

I spent the majority of the time building a triangulation framework based on Bowyer-Watson point insertion. Essentially, we iteratively add vertices one at a time, and check whether the mesh is still a valid Delaunay triangulation after each insertion. Any triangle whose circumcircle contains the newly inserted vertex is removed from the mesh, and the resulting cavity is re-triangulated. This algorithm is quite simple conceptually, and runs relatively quickly, making it a great choice for this system. Once this was working, it was quite simple to extend it into the third dimension.

Atmosphere Probes - Minimal Case.png

A simple Delaunay tetrahedralization of a series of “Atmosphere Probes”.

So now what? So far we have a volumetric mesh defined across a series of probe objects. What can we do with this?

Each probe has an attached “Atmosphere Probe” component which allows it to store properties about the air at that location: pressure, oxygen saturation, temperature, you name it. This is nice in itself, but the mesh also gives us a huge amount of local information. For starters, it gives us a clear idea of which atmosphere probes are connected, and the distance between them. A few times every second, the atmosphere system will look at every edge in the graph and calculate the pressure difference between the two vertices it connects. Using the pressure difference, it will propagate atmosphere properties along that edge. We essentially treat each probe as a cell connected to its neighbors by edges, giving us a coarse fluid-dynamics simulation at a variable resolution. This means that the air at eye level can be simulated accurately and used for all sorts of cool visual effects, while the simulation around the player’s ankles can be kept extremely coarse to avoid wasting precious iterations. By iterating through edges, we partially avoid the combinatorial explosion that would result from comparing every unique pair of graph vertices, and we can ensure that no cells will be “skipped over” when calculating flow.

 

Interpolation – Pretending To Know What We Don’t.

Now, how do we actually sample this data?! The probes are nice, but what if the player is standing near them, rather than on them? We want to smoothly interpolate data between these probes, so that we can sample the mesh volume at arbitrary locations. Here, we can dust off our old 2D friend, barycentric coordinates. Normally, we humans like to think in cartesian coordinates. We define a set of orthogonal directions as “Up”, “Forward”, and “Right”, and then express everything relative to those directions. “In front, and a little to the right of me…” but coordinate systems don’t always need to be this way! In theory, we could describe a location using any basis.

bary2

An example of a barycentric coordinate system. Each triplet shows the coordinates of that point within the triangle.

Barycentric coordinate systems define points relative to the positions of the vertices of an arbitrary simplex. So for a triangle, one could say “30% of vertex 1, 20% of vertex 2, and 50% of vertex 3”. Conveniently, these coordinates are normalized (the components sum to one), meaning that a point exactly at vertex 1 will be expressed as (1,0,0). We can therefore interpolate between the vertices by performing a weighted sum of the values at all the vertices of the simplex, using the corresponding components of the sample point’s coordinate vector!

So, the value of the point at the center of the diagram would be equal to

x = 0.33M + 0.33L + 0.33K

or, the average of the values of each vertex!

By calculating the barycentric coordinates of the sample point within each tetrahedron, we can determine how to average the values of each corner to find the value of that point! For our application, by knowing which tetrahedron the player is in, we can simply find the coordinates of the player in barycentric space, and do a fancy average to determine the exact atmospheric properties at his or her position! By clamping and re-normalizing coordinates, this system will also handle extrapolation, meaning that, even if the player exits the volume of the graph, the sampled properties will still be fairly accurate!
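
The barycentric coordinates themselves fall out of ratios of signed tetrahedron volumes. The real system lives in regular engine-side code, but since the math is just vector arithmetic, here it is sketched in the same Cg-style vector syntax used elsewhere on this blog, with illustrative names.

// Signed volume of a tetrahedron (a, b, c, d).
float signedVolume(float3 a, float3 b, float3 c, float3 d) {
   return dot(b - a, cross(c - a, d - a)) / 6.0;
}

// Barycentric coordinates of point p within tetrahedron (a, b, c, d):
// each weight is the volume of the sub-tetrahedron formed by swapping
// one vertex for p, divided by the total volume.
float4 barycentric(float3 p, float3 a, float3 b, float3 c, float3 d) {
   float v = signedVolume(a, b, c, d);
   return float4(
      signedVolume(p, b, c, d),   // weight of a
      signedVolume(a, p, c, d),   // weight of b
      signedVolume(a, b, p, d),   // weight of c
      signedVolume(a, b, c, p)    // weight of d
   ) / v;
}

// Sampling is then just a weighted sum of the probe values. A negative
// component means the point lies beyond the face opposite that vertex,
// which is exactly what drives the neighbor-walk described next.
float4 w = barycentric(samplePos, a, b, c, d);
float pressure = dot(w, float4(pressureA, pressureB, pressureC, pressureD));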

Wait… you just said “by knowing which tetrahedron the player is in…” How do we do that? Well, we can use our mesh from before to calculate even more useful information! We can determine adjacency between tetrahedra by checking if they share any faces. If two tetrahedra share three vertices, we know they are adjacent along the face formed by those three vertices… wait, it gets better… remember we had barycentric coordinates for our sample point anyway. Barycentric coordinates are normalized, and “facing inward”, so if any of our coordinates are negative, we know that the sample point must be contained within the adjacent tetrahedron opposite the vertex for which the coordinate is negative.

We essentially get to know if our sample point is in another tetrahedron for “free”, and by doing some preprocessing, we can tell exactly WHICH tetrahedron that point is within for “free”.

In the final solution, the player maintains a “current tetrahedron” reference. Whenever the player’s coordinates within that tetrahedron go negative, we update that reference to be the tetrahedron opposite the vertex with the negative coordinates. As long as the player moves smoothly and doesn’t teleport (which isn’t possible in the game I was writing this for), this reference will always be correct, and the sampler will always be aware of the tetrahedron containing the player. If the player does teleport, it will only take a few frames for the current tetrahedron reference to “walk” its way through the graph and correct itself! I also implemented some graph bounding volume checks,  so I can even create multiple separate atmosphere graphs, and have the player seamlessly walk between them!

 

Concavity!

The last step was ensuring that I could actually design levels the way I wanted. I quickly found that I was unable to properly design concave rooms! The tetrahedralization would build edges through walls, allowing airflow between separate rooms that should be blocked off from one another. I didn’t want to do any geometric collision detection, because that would quickly become more of a hassle, and fine-tuning doorways and staircases to allow air to flow through them is not something I wanted to bother with. Instead, I implemented “Subtraction Volumes”: essentially a way for a level designer to hint to the graph system that a given space is impassable. Once the atmosphere graph is constructed, a post-pass runs through the tetrahedron data and removes all tetrahedra which intersect a subtraction volume. By placing these around the level, the designer can cut out chunks of the graph wherever they see fit.

subtraction-volumes

Notice in the first image there are edges spanning vertices on either side of what should be a wall. After sphere and box subtraction volumes are added, these edges are removed.

 

Looking Forward!

And that’s about it! Throwing that together, along with a simple custom editor in the Unity engine, I now have a great tool for representing volumetric data! In the future, I can generalize the system to represent other things, such as temperature or light-levels, and by saving the data used to calculate sample propagation, I can also determine the velocity of the air at any point for drawing cool particle effects or wind sound effects! For now, the system is finished, but who knows, maybe I’ll add more to it in the future 🙂

Ludum Dare 36

Last weekend, I spent some time working on an entry for Ludum Dare 36, an online game-jam where participants try to conceive, design, and build a game in 48 (or sometimes 72) hours! It was my first real game jam, and it ended up going pretty smoothly!

I wanted to try out some of the new features in the alpha of the upcoming version (5.5) of the Unity game engine, and this seemed like the perfect opportunity to give it a shot! I also wanted to toy around with some of the ideas and mechanics we (myself, and a group of other, very talented students) used in our school project HyperMass a few years back to see if I could work out any of the issues we were having at the time.

Without further ado, I present “Civil Service”!

CivilServicePNG
One of the things I was most excited about experimenting with was the 2D Tilemap system in Unity 5.5. Unity has always been, at its core, a 3D engine; dedicated 2D features were only introduced in version 4.3, eight years after the engine’s initial release. Conspicuously missing from the suite of 2D features was a “tile map” editor: a way to easily build a level from small repeated “chunks”. Version 5.5 adds this much-desired feature, along with others I hadn’t even conceived of, such as “Smart Sprites”, “9-Slice Sprites”, and more!

The tile editor was exceptionally easy to use, although I encountered a number of issues with corrupted projects and flaky features (though to be fair, I was using the alpha version of most of the tools). I found (an hour before the deadline) that release builds of tilemaps don’t have proper collision geometry defined, and characters and objects will fall right through… Luckily, my map was small and static, and I was able to throw a few large invisible boxes into the world so the player couldn’t fall through the ground!

One of the other areas I wanted to experiment in was 2D physics. Keeping the physics simulation stable was a significant issue in HyperMass, and I wanted to see if removing the third dimension could improve on it. Unity uses Box2D under the hood for 2D physics, as opposed to PhysX for normal 3D, and simplifying the gameplay would theoretically simplify the problem. It actually had a considerable impact, although some of the issues we were facing with HyperMass (particularly the simulation of a rope) were still present. I now have my own theories on how to fix it, but that’s for another post 😉

The actual Ludum Dare jam was quite a lot of fun! I was initially hesitant to join, but really enjoyed the exercise. The deadline ended up being quite tight, and I struggled to get things submitted on time, but I’m quite comfortable with how things turned out!

tsGL – Improvements

I worked a bit more on tsGL over the last few days, and managed to clean up a few things that were really bothering me! So far progress has been relatively smooth and it’s actually turning out quite well! So, what changed?

Lighting!
tsGL’s lighting code is much cleaner now, and seems to work pretty well! Lighting is broken down into a multi-pass system which allows an arbitrary number of lights to be applied to each object. Take this scene, for example…

Composite.png

tsGL – a scene featuring three point lights. Red, blue, and white.

This scene features three realtime lights, a red light in the back left corner, a blue light behind the camera, and a white light above the scene. All of these lights are drawn as “point-lights”, meaning that they act as omnidirectional sources.

First, the scene is drawn with no lights applied. This is important to capture shader-specific details on each object, such as emissive textures, reflections, and unlit details.

Next, lights are sorted based on their “importance”. This is calculated from the distance to the object being rendered and the intensity of the light. If there’s a large, bright light shining on an object, it will be drawn first, followed by the remaining lights until we’ve reached the maximum allowed number.

Then, light parameters are packed into a 4×4 matrix. This may seem odd, but it also means that all attributes of a light can be passed as a single input to the GLSL shader. This allows for a large amount of flexibility in designing shader programs, as well as the convenience of not requiring several uniform variables to be defined in each one.

The Vertex Shader calculates a set of values useful for lighting, mainly the per-vertex direction of incident light, and the attenuation of brightness over distance. These are calculated per-vertex because it reduces the number of necessary calculations significantly, and small imperfections due to interpolation over the triangle are largely imperceptible!

attenuation_withsource.png

Attenuation of one of the lights in the scene, visualized in false color. Ranges from red (high intensity) to green (low intensity).

By scaling the intensity of the light with the inverse square of the distance from the source, lights appropriately grow dimmer as they move farther from an object. The diffuse component of the light is also calculated per-fragment using the typical Lambertian reflectance model, which ensures that only the “light-facing side” of an object is shaded. In the above image, the intensity of a red light throughout the scene is visualized in false color, and the final diffuse light calculation is shown on the bottom right.
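
tsGL’s shaders are written in GLSL, but the math is the same in any shading language; here it is sketched in the Cg-style syntax used elsewhere on this blog (GLSL would swap float3 for vec3), with illustrative variable names.

// Per-vertex: direction to the light and inverse-square attenuation.
// lightPosition and lightIntensity come from the packed light matrix.
float3 toLight     = lightPosition - worldPosition;
float  distSq      = dot(toLight, toLight);
float3 lightDir    = normalize(toLight);
float  attenuation = lightIntensity / (1.0 + distSq);

// Per-fragment: Lambertian diffuse term, so only the light-facing
// side of the surface is lit.
float diffuse = max(dot(normalize(worldNormal), lightDir), 0.0) * attenuation;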

Awesome, but at this point our scene is just an unlit void! How do we combine the output of each light pass into a final image?

By exploiting OpenGL blend-modes, we can produce exactly the effect we want! OpenGL allows the programmer to specify an active “blend-mode”, essentially determining how new data is written to the display buffer! This is primarily used for rendering transparent objects. A window pane for instance, would need to be rendered over top of the rest of a scene, and would mix the background color with the color of the glass itself to produce a final color! This is no different!

For these lights, the OpenGL Blend Mode is set to “additive”. This will literally add together the colors of every object drawn, which in the case of lights is just what we need. Illumination is a purely additive process, and it is impossible for a light to make things darker. Because of this, simply adding the effects of several lights together will output the illuminated scene as a whole! The best part is that it works without having to pass an array of lighting information to the shader, or arbitrarily limiting the number of available lights based on hardware! While the overhead of rendering an additional pass is non-trivial, it’s a small price to pay for the flexibility allowed by this approach.

Here, we can see the influence of each of the three lights.

lights.png

The additive passes of each of the three lights featured in the scene above. By summing together these three images, we obtain the fully illuminated scene.

At the end of the day, we end up with a process that looks like this.

  1. Clear your drawing buffers. (erases the previous frame so we have a clean slate.)
  2. Draw the darkened scene.
  3. Sort the lights based on their “importance”.
  4. Set the blend mode to “additive”.
  5. For each light in the scene (in order of importance)
    1. Draw the scene again, illuminated by the light.
  6. We’re done! Display the buffer!

This solution isn’t perfect, and more powerful techniques have been described in recent years, but given the restrictions of WebGL, I find this technique works quite well. One feature I would like to add is for each additive pass to draw only the objects affected by that light, rather than the entire scene over again. This would let us skip any calculations that would not affect the final image, and may increase performance, though without further testing it’s difficult to say for sure.

Cubemaps!
This is always a fun feature to add, because it can have incredibly apparent results. Cubemaps are essentially texture-maps that exist on all sides of a cube. Rather than sampling a single point for a color, you would sample a direction, returning the color at that “angle” within the cube. By providing an image for each face, a cubemap can be built to represent lighting information, the surrounding environment, or whatever else would require a 360 texture lookup!

cubemaps_skybox

Example of a cubemap, taken from “LearnOpenGL.com”

tsGL now supports cubemaps as a specific instance of a “texture”, and they can be mapped to materials and used identically in the engine! One of the clever uses of a cubemap is called “environment mapping”, which essentially boils down to emulating reflection by looking up the color of the surrounding area in a precomputed texture. This is far more efficient than actually computing reflections dynamically, and plays much more nicely within the paradigms of traditional computer graphics! Here’s a quick example of an environment-mapped torus running in tsGL!

yakf0.gif

An environment-mapped torus, showcasing efficient reflection.
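
Under the hood, the environment-mapped lookup is just a reflected view direction fed into the cubemap. Again sketched in Cg-style syntax with illustrative names; in tsGL’s GLSL this is reflect() plus textureCube().

// Environment mapping: sample the cubemap along the reflected view ray.
// viewDir points from the camera toward the surface point.
float3 r = reflect(viewDir, normalize(worldNormal));
float4 envColor = texCUBE(_EnvironmentMap, r);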

Now that cubemaps are supported, it’s also possible to make reflective and refractive materials efficiently, so shader programs can be made much more interesting within the confines of the engine!

Render Textures
Another nifty feature is the addition of render textures! By essentially binding a “camera” object to a texture, it is possible to render the scene into that texture, instead of onto the screen! This texture can then be used like any other anywhere in the drawing process, which means it’s possible to do things like draw a realtime security camera monitor in the scene, or have a mirror with realtime reflections! This can get quite costly, so it is best used sparingly, but the addition of this feature opens the door to a wide variety of other cool effects!

With the addition of both cubemaps and render textures, I hope to get shadow-mapping working in the near future, which would allow objects to appropriately cast shadows when illuminated in the scene, which was previously infeasible!

And now, the boring stuff – HTML
The custom HTML tag system has been improved immensely, and now makes much more sense. Entity tags may now be nested to define object hierarchies, and arbitrary parameters can be provided as child-tags, rather than attributes. This generally makes the scene documents far more legible, and makes adding new features in the future much easier.

Here’s a “camera” object, for example.

<tsgl-entity id="main_camera">
   <tsgl-component type="camera">
      <tsgl-property type="number" name="fov" value="80"></tsgl-property>
      <tsgl-property type="number" name="aspect" value="1.6"></tsgl-property>
   </tsgl-component>
   <tsgl-component type="transform">
      <tsgl-property type="vector" name="position" value="0 2 0"></tsgl-property>
   </tsgl-component>
</tsgl-entity>

Previously, the camera parameters would have been crammed into a single tag’s attributes, making it much more difficult to read, and much more verbose. With the addition of tsgl-property tags, attributes of each scene entity can now be specified within the entity’s definition, so all of those nice editor features like code-folding can now be exploited!

This part isn’t exactly fun compared to the rendering tests earlier, but it certainly helps when attempting to define a scene, and add new features!

That’s all for now! In the meantime, you can check out the very messy and very unstable, tsGL on GitHub if you want to try it for yourself, or experiment with new features!

tsGL – An Experiment in WebGL

dragon_turntable

For quite some time now, I’ve been extremely interested in WebGL. Realtime, hardware-accelerated rendering embedded directly in a web-page? Sounds like a fantastic proposition. I’ve dabbled quite a bit in desktop OpenGL and have grown to like it, despite its numerous… quirks… so it seemed only natural to jump head-first into WebGL and have a look around!

So, WebGL!
I was quite surprised by WebGL’s ease of use! Apart from browser compatibility (which is growing better by the week!) WebGL was relatively simple to set up. Initialize an HTML canvas context, and go to town! The rendering pipeline is nearly identical to OpenGL ES, and supports the majority of the same features as well! If you have any knowledge of desktop OpenGL, the Zero-To-Triangle time of WebGL should only be an hour or so!

Unfortunately, WebGL must be extremely compatible if it is to be deployed to popular web-browsers. This means that it has to run on pretty much anything with a graphics processor, and can’t rely on the user having access to the latest and greatest technologies! For instance, when writing the first draft of my lighting code, I attempted to implement a deferred rendering pipeline, but quickly discovered that multi-target rendering isn’t supported in many WebGL instances, and so I had to fall back to the more traditional forward rendering pipeline, but it works nonetheless!

dabrovic_pan

A textured scene featuring two point lights, and an ambient light.

Enter TypeScript!
I’ve never been particularly fond of Javascript. It’s actually quite a powerful language, but I’ve always been uncomfortable with the syntax it employs. This, combined with the lack of static typing, can make it quite difficult to debug, and quite early on I encountered some issues with trying to multiply a matrix by undefined. By the time I had gotten most of my 3D math working, I had spent a few hours trying to find the source of some ugly, silent failures, and decided that something needed to be done.

I was recommended TypeScript early in the life of the project, and was immediately drawn to it. TypeScript is a superset of Javascript which adds compile-time type checking and a more familiar syntax. Best of all, it compiles down to standard Javascript, meaning it is perfectly compatible with all existing browsers! With minimal setup, I was able to quickly convert my existing code to TypeScript and be on my way! Now, when I attempt to take the cross product of a vector and “Hello World”, I get a nice error in the Javascript console instead of silent refusal.

Rather than defining an object prototype traditionally,

var MyClass = ( function() {
   function MyClass( value ) {
      this.property = value;
   }
   MyClass.prototype.method = function () {
      alert( this.property );
   };
   return MyClass;
})();

One can just specify a class.

class MyClass {
   property : string;

   constructor( value : string ) {
      this.property = value;
   }

   method () {
      alert( this.property );
   }
}

This seems like a small difference, and honestly it doesn’t matter very much which method you use, but notice that property is specified to be of type string. If I were to construct an instance of MyClass and attempt to pass an integer constant, a compiler error would be thrown, indicating that MyClass instead requires a string. This does not affect the final Javascript, but it significantly reduces the chance of making a mistake while writing code, and makes it much easier to keep your thoughts straight when coming back to a project after a few days.

Web-Components!
When trying to decide on how to represent asset metadata, I eventually drafted a variant on XML, which would allow for simple asset and object hierarchies to be defined in “scenes” that could be loaded into the engine as needed. It only took a few seconds before I realized what I had just described was essentially HTML. From here I looked into the concept of “Web-Components” a set of prototype systems that would allow for more interesting UI and DOM interactions in modern web browsers. One of the shiny new features proposed is custom HTMLElements, which allow developers to define their own HTML tags, and associated Javascript handlers. With Google Chrome supporting these fun new features, I quickly took advantage.

Now, tsGL scenes can be defined directly in the HTML document. Asset tags can be inserted to tell the engine where to find certain textures, models, and shaders. Here, we initialize a shader called “my-shader”, load an OBJ file as a mesh, and construct a material referencing a texture, and a uniform “shininess” property.

<tsgl-shader id="my-shader" vert-src="/shaders/test.vert" frag-src="/shaders/test.frag"></tsgl-shader>

<tsgl-mesh id="my-mesh" src="/models/dragon.obj"></tsgl-mesh>

<tsgl-material id="my-material" shader="my-shader">
   <tsgl-texture name="uMainTex" src="/textures/marble.png"></tsgl-texture>
   <tsgl-property name="uShininess" value="48"></tsgl-property>
</tsgl-material>

We can also specify our objects in the scene this way! Here, we construct a scene with an instance of the “renderer” system, a camera, and a renderable entity!

<tsgl-scene id="scene1">
   <tsgl-system type="renderer"></tsgl-system>

   <tsgl-entity>
      <tsgl-component type="camera"></tsgl-component>
      <tsgl-component type="transform" x="0" y="0" z="1"></tsgl-component>
   </tsgl-entity>

   <tsgl-entity id="dragon">
      <tsgl-component type="transform" x="0" y="0" z="0"></tsgl-component>
      <tsgl-component type="renderable" mesh="mesh_dragon" material="mat_dragon"></tsgl-component>
   </tsgl-entity>
</tsgl-scene>

Entities can also be fetched directly from the document, and manipulated via Javascript! Using document.getElementById(), it is possible to obtain a reference to an entity defined this way, and update its components! While my code is still far from production-ready, I quite like this method! New scenes can be loaded asynchronously via Ajax, generated from web-servers on the fly, or just inserted into an HTML document as-is!

Future Goals
I wanted tsGL to be a platform on which to experiment with new web technologies and rendering concepts, so I built it to be as flexible as possible. The engine is broken into a number of discrete parts which operate independently, allowing for the addition of cool new features like rigidbody physics, a scripting interface, or whatever else I want in the future! At the moment, the project is quite trivial, but I’m hoping to expand it, test it, and optimize it in the near future.

At the moment, things are a little rough around the edges. Assets are loaded asynchronously, and the main context just sits and complains until they appear. Rendering and updating operate on different intervals, so the display buffer tears like tissue paper. OpenGL is forced to make FAR more context switches than necessary, and my file parsers don’t cover the full format spec, but all in all, I’m quite proud of what I’ve managed to crank out in only ten or so hours of work!

If you’d like to check out tsGL for yourself, you can download it from my GitHub page!