Project Micro – Part 1

I’ve had a few personal projects in the works for quite some time now, and they haven’t been working out. The cycle generally begins with enthusiasm and a grand vision, followed by inevitable scope-creep, culminating in the realization that the project has grown far beyond my capabilities alone. The obvious answer would be collaboration, but given my existing obligations and maladroit project management, I’d prefer not to rope anyone else into the project without a stronger roadmap.

The last project to be put on hold was an iOS shooter that I quickly realized required quite a bit more artwork than my non-artist self could provide. Turns out, a game where you fight giant robots with multiple destructible parts and reconfigurable weapons requires a talented character artist. After a while, I found it disheartening to work on. Fighting a cube in a gray-box level isn’t particularly exciting, and I found myself missing the forest for the trees. I enjoy systems-oriented work and would get bogged down, spending weeks building plumbing for a house not yet connected to the water main. The boss can’t die yet, but it runs on a custom hierarchical behavior-tree system with a full visual editor, so that’s neat, I guess.

I’ve decided it’s time for a change of pace. I need a project with tighter control on scope. A project with a more focused end goal, and one which doesn’t necessarily require an entire team of artists to complete. Thinking through these requirements, I came up with an idea.

Project Micro

Project Micro is a physics-based, rogue-ish survival game where the player must construct and evolve a virtual creature, battling for supremacy in the primordial ooze. The player initially designs a simple creature using cells connected by links. There are many types of cells and links, each of which serves a different purpose as a component of a player’s creation. Players must navigate the (very limited) environment and consume as many rival creatures as possible. At fixed time intervals, the player is invited to edit their creature, making changes as they see fit. The severity of these changes depends on their success (eat more opponents, change more rapidly).

Opponents function identically to the player, but are controlled by a procedurally evolved neural network. Initially, opponents will be generated using a few seed creatures, and will flounder through the water aimlessly. Each time an opponent is consumed, the current most successful AI is cloned, mutated, and spawned elsewhere on the map. The vision for the project is a continually evolving ecosystem, where opponents become increasingly effective and aggressive, until they inevitably kill the player.

The goal is to construct a micro-scale arms race, where players must constantly update their strategy in order to survive opponents which are forever adapting to the game state. While this idea isn’t necessarily new (I admit, it’s heavily inspired by the early stages of Spore), I think building a much more dynamic game could really be interesting, both from a gameplay and design perspective. It also exists as an almost entirely systems-driven experience, interacting nicely with my skill set and keeping the scope relatively small.

Here We Go…

So now I’ve got an idea, and I’m off to the races! I’m also jumping on the old “gamedev blog” bandwagon and will “try to post regular status updates” as things come together. Now, if you’ll excuse me, I’m off to build a physics engine.

Interior Mapping – Part 3

In part 2, we discussed a tangent-space implementation of the “interior mapping” technique, as well as the use of texture atlases for room interiors. In this post, we’ll briefly cover a quick and easy shadow approximation to add realism and depth to our rooms.

Hard Shadows

We have a cool shader which renders “rooms” inside of a building, but something is clearly missing. Our rooms aren’t affected by exterior light! While the current implementation looks great for night scenes, where the lighting within the room can be baked into the unlit textures, it really leaves something to be desired when rendering a building in direct sunlight. In an ideal world, the windows into our rooms would cast soft shadows, which move across the floor as the angle of the sun changes.

Luckily, this effect is actually quite easy to achieve! Recall how we implemented the ray-box intersection in part 2. Each room is represented by a unit cube in tangent-space. The view ray is intersected with the cube, and the point of intersection is used to determine a coordinate in our room interior texture. As a byproduct of this calculation, the point of intersection in room-space is already known! We also currently represent windows using the alpha channel of the exterior texture. We can simply reuse this alpha channel as a “shadow mask”. Areas where the exterior is opaque are considered fully in shadow, since no light would enter the room through the solid wall. Areas where the exterior is transparent would be fully affected by light entering the room. If we can determine a sample coordinate, we can simply sample the exterior alpha channel to determine whether an interior fragment should be lit, or in shadow!

So, the task at hand: How do we determine the sample coordinate for our shadow mask? It’s actually trivially simple. If we cast the light ray backwards from the point of intersection between the view ray and the room volume, we can determine the point of intersection on the exterior wall, and use that position to sample our shadow texture!

Our existing effect is computed in tangent space. Because of this, all calculations are identical everywhere on the surface of the building. If we transform the incoming light direction into tangent space, any light shining into the room will always be more or less along the Z+ axis. Additionally, the room is axis-aligned, so the ray-plane intersection of the light ray and exterior wall can be simplified dramatically.
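
For reference, here is a minimal sketch of how the tangent-space light vector used below (IN.tLightVec) might be produced in the vertex stage, assuming the same objectToTangent basis constructed in part 2 and a directional light supplied through a hypothetical _LightDir shader parameter:

// Vertex stage (sketch): bring the world-space light direction into
// object space, then into tangent space, so the fragment stage can
// cast the shadow ray in the same basis as the room volume.
float3 oLightDir = mul((float3x3)unity_WorldToObject, _LightDir.xyz);
o.tLightVec = mul(objectToTangent, oLightDir);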

// This whole problem can be easily solved in 2D
// Determine the origin of the shadow ray. Since
// everything is axis-aligned, This is just the
// XY coordinate of the earlier ray-box intersection.
float2 sOri = roomPos.xy;

// Determine a 2D ray direction. This is the
// "XY per unit Z" of the light ray
float2 sDir = (-IN.tLightVec.xy / IN.tLightVec.z) * _RoomSize.z;

// Lastly, determine our shadow sample position. Since
// our sDir is unit-length along the Z axis, we can
// simply multiply by the depth of the fragment to
// determine the 2D offset of the final shadow coord!
float2 sPos = sOri + sDir * roomPos.z;

 

That’s about it! We can now scale the shadow coordinate to match the exterior wall texture, and boom! We have shadows.


Soft Shadows

We have hard shadows up and running, and everything is looking great. What we’d really like is to have soft shadows. Typically, these are rendered with some sort of filtering, a blur, or a fancy technique like penumbral wedges. That’s not going to work here. We’re trying to reduce the expense of rendering interior details. We’re not using real geometry, so we can’t rely on any traditional techniques either. What we need is a way to blur our shadows without actually performing a multi-sampled blur.

Like all good optimizations, we’ll start with an assumption. Our windows are a binary mask. They’re either fully transmissive, or fully opaque. In most cases this is how the effect will be used anyway, so the extra control isn’t a big loss. Now, with that out of the way, we can use the alpha channel of our exterior texture as something else!

Signed Distance Fields

Signed Distance Fields have been around for a very long time, and are often used to render crisp edges for low-resolution decals, as suggested in “Improved Alpha-Tested Magnification for Vector Textures and Special Effects”. Rather than storing the shadow mask itself in the alpha channel, we can store a new map where the alpha value represents the distance from the shadow mask’s borders.

SDF Shadowmask.png

Now, a single sample returns not just whether a point is in shadow, but the distance to the edge of a shadow! If we want our shadows to have soft edges, we can switch from a binary threshold to a range of “shadow intensity”, still using only a single sample!

The smoothstep function is a perfect fit for our shadow sampling, remapping a range to 0-1, with some nice easing. We can also take the depth of the fragment within the room into account to emulate the softer shadows you see at a distance from a light source. Simply specify a shadow range based on the Z coordinate of the room point, and we’re finished!

Putting it all Together!

All together, our final shadow code looks like this.

#if defined(INTERIOR_USE_SHADOWS)
	// Cast a ray backwards, from the point in the room opposite
	// the direction of the light. Here, we're doing it in 2D,
	// since the room is in unit-space.
	float2 sOri = roomPos.xy;
	float2 sDir = (-IN.tLightVec.xy / IN.tLightVec.z) * _RoomSize.z;
	float2 sPos = sOri + sDir * roomPos.z;

	// Now, calculate shadow UVs. This is remapping from the
	// light ray's point of intersection on the near wall to the
	// exterior map.
	float2 shadowUV = saturate(sPos) * _RoomSize.xy;
	shadowUV = shadowUV * _Workaround_MainTex_ST.xy + _Workaround_MainTex_ST.zw;
				
	// Finally, sample the shadow SDF, and simulate soft shadows
	// with a smooth threshold.
	fixed shadowDist = tex2D(_ShadowTex, shadowUV).a;
	fixed shadowThreshold = saturate(0.5 + _ShadowSoftness * (-roomPos.z * _RoomSize.z));
	float shadow = smoothstep(0.5, shadowThreshold, shadowDist);

	// Make sure we don't illuminate rooms facing opposite the light.
	shadow = lerp(shadow, 1, step(0, IN.tLightVec.z));

	// Finally, modify the output albedo with the shadow constant.
	iAlbedo.rgb = iAlbedo.rgb * lerp(1, _ShadowWeight, shadow);
#endif

And that’s all there is to it! Surprisingly simple, and wonderfully cheap to compute!

There’s still room for improvement. At the moment the shadow approximation supports only a single directional light source. This is fine for many applications, but may not work for games where the player is in control of a moving light source. Additionally, this directional light source is configured as a shader parameter, and isn’t pulled from the Unity rendering pipeline, so additional scripts will be necessary to ensure it stays in sync.

For deferred pipelines, it may be possible to use a multi-pass approach, and write the interior geometry directly into the G-buffers, allowing for fully accurate lighting, but shadows will still suffer the same concessions.

Still, I’m quite happy with the effect. Using relatively little math, it is definitely possible to achieve a great interior effect for cheap!

Elastic UITableView Headers (Done Right)

Elastic or “stretchy” table headers are all over the place. They add that extra juice to a polished app, and provide a nice solution for the otherwise revealing native scroll view bounce in iOS. I’m talking about one of these…

Don’t forget that translucency thing Apple seems so fond of these days! A good header implementation should properly underlap the translucent navigation bar, and auto-adjust view margins to push content within the safe area! It’s subtle, but notice how the background image extends under the navigation bar, matching the behavior of other native views!

So graceful! So fluid! Just look at that Auto Layout spring! Clearly, this is a must-have. Ask any iOS developer, and they’ll happily explain their own favorite way of implementing an elastic header. Unfortunately, many of these techniques are “icky”.

The most common implementation is to have the UITableViewController handle the layout and positioning of the header, usually in the viewDidLayoutSubviews() method. This should be setting off all kinds of alarms. While it’s a quick and easy solution, it relies on a ViewController to dictate the layout and display of data… that’s kind of the point of a View, isn’t it? The ViewController is ideally responsible for the formatting of data, so that it can later be presented in a view. By handling the table header layout in the ViewController, we’re tightly coupling the implementation of the controller with the UI design of the app. The ViewController shouldn’t care whether the header is elastic or static. All it should care about is assigning the data for the table to display.


So what’s the plan?

We need to build a nice re-usable table view which does what we want, but maintains a consistent interface with a “normal” table view. This way, it can be a drop-in replacement for all the UITableView instances in your app. ViewControllers shouldn’t care about layout.

Therefore, we need a UITableView subclass which…

  • features an elastic header.
  • has an identical interface to UITableView
  • plays nice with UIKit (nav-bars, safe area, etc.)
  • can be loaded from a Storyboard

Let’s start off nice and simple. A new TableView subclass.

import Foundation
import UIKit

class ElasticHeaderTableView : UITableView {
    override func layoutSubviews() {
        // Obviously, something goes here!
    }
}

We’ve immediately hit a problem! Table views already have a tableHeaderView property which has all sorts of fancy logic attached to it! Any UIView you assign to this field will mess with the content insets, tweak the scroll rect, and generally wreak havoc on our layout! Since we’re implementing custom layout logic, we can’t just update the rect of this view! We also can’t ignore it, since our target was to provide an identical interface to a standard UITableView!

This is a great opportunity to do some overriding! How about we write a new get/set, and forward the value to a new field? This effectively allows us to bypass the superclass’s implementation, since (as far as the Obj-C class definition is concerned), the stored value in the synthesized tableHeaderView field will always be nil.

override var tableHeaderView: UIView? {
    set {
        _elasticHeaderView = newValue
    }
    get {
        return _elasticHeaderView
    }
}

private var _elasticHeaderView: UIView? {
    willSet {
        _elasticHeaderView?.removeFromSuperview()
    }
    didSet {
        if let headerView = _elasticHeaderView {
            addSubview(headerView)
            bringSubviewToFront(headerView)
            
            updateHeaderMargins()
            let headerSize = headerView.systemLayoutSizeFitting(UIView.layoutFittingCompressedSize)
            contentInset.top = headerSize.height - safeAreaInsets.top
            contentOffset.y = -headerSize.height
        }
    }
}

private func updateHeaderMargins () {
    _elasticHeaderView?.directionalLayoutMargins = NSDirectionalEdgeInsets(
        top: safeAreaInsets.top,
        leading: safeAreaInsets.left,  
        bottom: 0,
        trailing: safeAreaInsets.right)
}

You’ll notice some observer blocks attached to _elasticHeaderView. The first is a willSet, which removes the existing header from the view hierarchy so we don’t leave it around if the header view is changed! The second is a didSet block, which adds the new header subview, calls a method named updateHeaderMargins(), and adjusts the content inset of the tableView to account for the new header height, so our table view rows don’t overlap the header. updateHeaderMargins() simply copies the safe area of the tableView into the header’s directional layout margins. This allows constraints to the view margin to factor in the portion of the tableView obscured by nav bars, or the iPhone X “notch”. While not strictly necessary for many applications, this makes life a lot easier as you start using the class.

Alright, now we need to actually put things where they need to go! Every time the tableView is adjusted using Auto Layout or the scroll view scrolls, the elastic header must be adjusted to account for the new space. Let’s define a new function for that.

override var contentOffset: CGPoint {
    didSet {
        layoutHeaderView()
    }
}

override func layoutSubviews() {
    super.layoutSubviews()
    layoutHeaderView()
}

private func layoutHeaderView () {
    updateHeaderMargins()
    
    if let headerView = _elasticHeaderView {
        let headerSize = headerView.systemLayoutSizeFitting(UIView.layoutFittingCompressedSize)
        let stretch = -min(0, contentOffset.y + headerSize.height)
        let headerRect = CGRect(
            x: 0,
            y: -headerSize.height - stretch,
            width: bounds.width,
            height: headerSize.height + stretch)
        
        headerView.frame = headerRect
        
        contentInset.top = headerSize.height - safeAreaInsets.top
    }
}

Here, we override layoutSubviews and attach an observer to the contentOffset property. This allows us to listen for changes to the scroll position and the view layout without tying up the view’s delegate outlet, or requiring a ViewController’s viewDidLayoutSubviews() method. Then, the layoutHeaderView() method does the majority of the work.

  1. First, we update the header margins. It’s possible that the safe area has changed since the last update, and it’s important to keep the margins current.
  2. Next, calculate the ideal size of the header using systemLayoutSizeFitting. This allows headers to be defined with Auto Layout constraints (which is super nice).
  3. Next, calculate the “stretch”. This is the distance the header must expand beyond its ideal height to account for the scroll view’s scroll position.
  4. Calculate a rect using all of these properties, pushing the header upward beyond the scroll view’s content. This allows the header to extend above the top of the scroll view. Assign this new rect to the header view’s frame.
  5. Lastly, update the content inset of the scroll view to account for the possibility that a safe area has changed.

Now, we have a UITableView subclass which will automatically position a stretchy Auto Layout header whenever the view changes or is scrolled! The last thing to account for is storyboard compatibility! This is a nice easy fix, since we’ve already got a convenient method for it!

override func awakeFromNib() {
    super.awakeFromNib()
    
    let header = super.tableHeaderView
    super.tableHeaderView = nil
    _elasticHeaderView = header
}

When the view loads from a Nib or Storyboard, fetch the tableHeaderView from the superclass, set it to nil, and push the view into our internal header view field. This last piece of the puzzle lets our custom UITableView subclass play nicely with Storyboards. Dragging and dropping a new UIView to the top of an elastic table view will automatically assign the header, just like any old view!


And there you have it!

With just under 100 lines of Swift, you can create an entirely self-contained UITableView with an elastic header!

It’s a drop-in replacement for any table view in your project, plays nice with Auto Layout, can be configured and loaded from a Nib or Storyboard, and requires no special treatment on the part of your ViewController!

Interior Mapping – Part 2

In part 1, we discussed the requirements and rationale behind Interior Mapping. In this second part, we’ll discuss the technical implementation of what I’m calling (for lack of a better title) “Tangent-Space Interior Mapping”.

Coordinates and Spaces

In the original implementation, room volumes were defined in object-space or world-space. This is by far the easiest coordinate system to work in, but it quickly presents a problem! What about buildings with angled or curved walls? At the moment, the rooms are bounded by building geometry, which can lead to extremely small rooms in odd corners and uneven or truncated walls!

In reality, outer rooms are almost always aligned with the exterior of the building. Hallways rarely run diagonally and are seldom narrower at one end than the other! We would rather have all our rooms aligned with the mesh surface, and then extruded inward towards the “core” of the building.

Cylindrical Building

Curved rooms, just by changing the coordinate basis.

In order to do this, we can just look for an alternative coordinate system for our calculations which lines up with our surface (linear algebra is cool like that). Welcome to Tangent Space! Tangent space is already used elsewhere in shaders. Ever wonder why normal-maps are that weird blue color? They actually represent a series of directions in tangent-space, relative to the orientation of the surface itself. Rather than “Forward”, the Z+ component of a normal map points “Outward”. We can simply perform the raycast in a different coordinate basis, and suddenly the entire problem becomes surface-relative in world-space, while still being axis-aligned in tangent space! A neat side-effect of this is that our room volumes now follow the curvature of the building, meaning that curved facades will render curved hallways running their length, and always have a full wall parallel to the building exterior.

While we’re at it, what if we used a non-normalized ray? Most of the time, a ray should have a normalized direction. “Forward” should have the same magnitude as “Right”. If we pre-scale our ray direction to match room dimensions, then we can simplify it out of the problem. So now, we’re performing a single raycast against a unit-sized axis-aligned cube!

Room Textures

The original publication called for separate textures for walls, floors, and ceilings. This works wonderfully, but I find it difficult to work with. Keeping these three textures in sync can be tricky, and atlasing multiple room textures together quickly becomes a pain. Alternative methods, such as the one proposed by Zoe J Wood in “Interior Mapping Meets Escher”, utilize cubemaps; however, this makes atlasing downright impossible, and introduces new constraints on the artists building interior assets.

interior_atlas

Andrew Willmott briefly touched on an alternative in “From AAA to Indie: Graphics R&D”, which used a pre-projected interior texture for the interior maps in SimCity. This was the format I decided to use for my implementation, as it is highly author-able, easy to work with, and provides results only slightly worse than full cubemaps. A massive atlas of room interiors can be constructed on a per-building basis, and then randomly selected. Buildings can therefore easily maintain a cohesive interior style with random variation using only a single texture resource.

Finally, The Code

I’ve excluded some of the standard Unity engine scaffolding, so as to not distract from the relevant code. You won’t be able to copy-paste this, but it should be easier to see what’s happening as a result.

v2f vert (appdata v) {
   v2f o;
   
   // First, let's determine a tangent basis matrix.
   // We will want to perform the interior raycast in tangent-space,
   // so it correctly follows building curvature, and we won't have to
   // worry about aligning rooms with edges.
   half tanSign = v.tangent.w * unity_WorldTransformParams.w;
   half3x3 objectToTangent = half3x3(
      v.tangent.xyz,
      cross(v.normal, v.tangent) * tanSign,
      v.normal);

   // Next, determine the tangent-space eye vector. This will be
   // cast into an implied room volume to calculate a hit position.
   float3 oEyeVec = v.vertex - WorldToObject(_WorldSpaceCameraPos);
   o.tEyeVec = mul(objectToTangent, oEyeVec);

   // The vertex position in tangent-space is just the unscaled
   // texture coordinate.
   o.tPos = v.uv;

   // Lastly, output the normal vertex data.
   o.vertex = UnityObjectToClipPos(v.vertex);
   o.uv = TRANSFORM_TEX(v.uv, _ExteriorTex);

   return o;
}

fixed4 frag (v2f i) : SV_Target {
   // First, construct a ray from the camera, onto our UV plane.
   // Notice the ray is being pre-scaled by the room dimensions.
   // By distorting the ray in this way, the volume can be treated
   // as a unit cube in the intersection code.
   float3 rOri = frac(float3(i.tPos,0) / _RoomSize);
   float3 rDir = normalize(i.tEyeVec) / _RoomSize;

   // Now, define the volume of our room. With the pre-scale, this
   // is just a unit-sized box.
   float3 bMin = floor(float3(i.tPos,-1));
   float3 bMax = bMin + 1;
   float3 bMid = bMin + 0.5;

   // Since the bounding box is axis-aligned, we can just find
   // the ray-plane intersections for each plane. we only 
   // actually need to solve for the 3 "back" planes, since the 
   // near walls of the virtual cube are "open".
   // just find the corner opposite the camera using the sign of
   // the ray's direction.
   float3 planes = lerp(bMin, bMax, step(0, rDir));
   float3 tPlane = (planes - rOri) / rDir;

   // Now, we know the distance to the intersection is simply
   // equal to the closest ray-plane intersection point.
   float tDist = min(min(tPlane.x, tPlane.y), tPlane.z);

   // Lastly, given the point of intersection, we can calculate
   // a sample vector just like a cubemap.
   float3 roomVec = (rOri + rDir * tDist) - bMid;
   float2 interiorUV = roomVec.xy * lerp(INTERIOR_BACK_PLANE_SCALE, 1, roomVec.z + 0.5) + 0.5;

#if defined(INTERIOR_USE_ATLAS)
   // If the room texture is an atlas of multiple variants, transform
   // the texture coordinates using a random index based on the room index.
   float2 roomIdx = floor(i.tPos / _RoomSize);
   float2 texPos = floor(rand(roomIdx) * _InteriorTexCount) / _InteriorTexCount;

   interiorUV /= _InteriorTexCount;
   interiorUV += texPos;
#endif

   // lastly, sample the interior texture, and blend it with an exterior!
   fixed4 interior = tex2D(_InteriorTex, interiorUV);
   fixed4 exterior = tex2D(_ExteriorTex, i.uv);

   return lerp(interior, exterior, exterior.a);
}

And that’s pretty much all there is to it! The code itself is actually quite simple and, while there are small visual artifacts, it provides a fairly convincing representation of interior rooms!

 

Interior + Exterior Blend

 

There’s definitely more room for improvement in the future. The original paper supported animated “cards” to represent people and furniture, and a more realistic illumination model may be desirable. Still, for an initial implementation, I think things came out quite well!

Interior Mapping – Part 1

Rendering convincing environments in realtime has always been difficult, especially for games which take place at a “human” scale. Games consist of a series of layered illusions and approximations, all working (hopefully) together to achieve a unified goal: to represent the world in which the game takes place. In the context of a simplified or fantastical world, this isn’t too bad. It’s a matter of creating a unified style and theme that feels grounded in the reality of the particular game. The fantastic narrative platformer “Thomas was Alone”, for example, arguably conveys a believable world using just shape and color. As soon as a game takes place in an approximation of our real world, however, the cracks start to appear. There are a tremendous number of “details” in the real world. Subtle differences on seemingly identical surfaces that the eye can perceive, even if not consciously.

uncanny valley

This CG incarnation of Dwayne Johnson as the titular “Scorpion King” is a prime example of “The Uncanny Valley”

We as humans are exceptionally good at identifying visual phenomena, and more importantly, its absence. You may have heard this referred to as “The Uncanny Valley”: when something is too realistic to be considered cute or cartoony, but too unrealistic to look… right… It’s extremely important to include some representation of those “missing” pieces, even if they’re not 100% accurate, in order to preserve the illusion.

While not nearly as noticeable at first glance, missing details in an environment are equally important to preserving the illusion of a living, breathing, virtual world.

Take, for example, this furniture store from GTA IV.

GTA Furniture Store.png

A nice looking furniture store, though something’s missing…

This is a very nice piece of environment art. It’s visually interesting, it fits the theme and location, and it seems cohesive within the world… though something is amiss. The view through the windows is clearly just a picture of a store, slapped directly onto the window pane, like a sticker on the glass! There’s no perspective difference between the individual windows on different parts of the facade. The view of the interior is always head-on, even if the camera is at an angle to the interior walls. This missing effect greatly weakens the illusion.

From this, the question arises…

How do we convey volume through a window, without creating tons of work for artists, or dramatically altering the production pipeline?

Shader Tricks!

The answer (as you may have guessed from the header) lies in shader trickery! To put it simply, Shaders are tiny programs which take geometric information as input, mush it around a bunch, and output a color. Our only concern is that the final output color looks correct in the scene. What happens in the middle frankly doesn’t matter much. If we offset the output colors, we can make it look like the input geometry is offset too! If outputs are offset non-uniformly, it can be made to appear as though the rendered image is skewed, twisted, or distorted in some way.


If you’ve ever seen 3D sidewalk art, you’ve seen a real-world implementation of parallax mapping.

The school of techniques collectively known as “Parallax Mapping” does just this. Input texture coordinates are offset based on the observer angle and a per-texel “depth” value. By determining the point where our camera ray intersects the surface height-field, we can create what amounts to a 3D projection of an otherwise 2D image. “Learn OpenGL” provides an excellent technical explanation of parallax mapping if you’re curious.
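
As a rough illustration of the idea (not the interior shader itself), a basic parallax offset in a fragment program might look something like the sketch below, where _HeightMap, _ParallaxStrength, and the tangent-space view direction i.tViewDir are assumed inputs:

// Basic parallax mapping (sketch): shift the sample coordinate along
// the view direction, scaled by the sampled height.
float height = tex2D(_HeightMap, i.uv).r;
float2 offset = (i.tViewDir.xy / i.tViewDir.z) * height * _ParallaxStrength;
fixed4 color = tex2D(_MainTex, i.uv - offset);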

While the theory is perfect for our needs, the methodology is lacking. Parallax mapping is not without its issues! Designed to be a general-purpose solution, it suffers from a number of visible artifacts when used in our specific case. It works best on smoother height-fields, for instance. Large differences in height between texels can create weird visual distortions! There are a number of alternatives to get around this issue (such as “Steep Parallax Mapping”), but many are iterative, and result in odd “step” artifacts as the ratio of depth to iteration count increases. In order to achieve a convincing volume for our buildings using an unmodified parallax shader, we’d need to use so many iterations that it would quickly become a performance nightmare.

Interior Mapping

Parallax mapping met nearly all of our criteria, but still wasn’t suitable for our application. Whenever a general solution fails, it’s usually a good idea to sit down and consider the simplest possible specific solution that will suffice.

Raymarch

For each point on the true geometry (blue), select a color at the point of intersection between the camera ray, and an imaginary room volume (red).

In our case, we want rectangular rooms inset into our surface. The keyword here is “rectangular”. The generality of parallax mapping means that an iterative numeric approach must be used, since there is no analytical way to determine where our camera ray intersects a general height-field. If we limit the problem to only boxes, then an exact solution is not only possible, but trivial! Furthermore, if these boxes are guaranteed to be axis-aligned, the computation becomes extremely simple! Then, it’s just a matter of mapping the point of intersection within our room volume to a texture, and outputting the correct color!

interior mapping example

Example of “Interior Mapping” from the original publication by Joost van Dongen.

Originally published in 2008, the now well known “Interior Mapping”, by Joost van Dongen seems like a prime candidate! In this approach, the facade of a building mesh is divided into “rooms”, and a raycast is performed for each texel. Then, the coordinate at the point of intersection between our camera ray and the room volume can be used to sample a set of “Room Textures”, and voila! This, similar to parallax mapping, offsets input texture coordinates to provide a projection of a wall, ceiling, and floor texture within each implicit “room volume”, resulting in a geometrically perfect representation of an interior without the added complexity of additional geometry and material work!

In part 2, we’ll discuss modifications to the original implementation for performance and quality-of-life improvements!

Abusing Blend Modes for Fun and Profit!

Today I decided to do a quick experiment.

Hardware “blend modes” have existed since the dawn of hardware-accelerated graphics. Primarily used for effects like transparency, they allow a developer to specify the way new colors are drawn into the buffer through a simple expression.

color = source * SrcFactor + destination * DstFactor

The final output color is the sum of a “source factor” term multiplied by the value output by the fragment shader, and a “destination factor” term multiplied by the color already in the buffer.

For example, if I wanted to simply add the new color into the scene, I could use blend factors of One One; both coefficients would simply be 1, and we would end up with

color = source + destination

If I wanted a linear alpha blend between the source color and destination color, I could select the terms SrcAlpha, OneMinusSrcAlpha, which would perform a linear interpolation between the two colors.
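
In Unity’s ShaderLab syntax (which the snippets later in this post use), those two common cases are each declared with a single line:

Blend One One                   // additive: source + destination
Blend SrcAlpha OneMinusSrcAlpha // alpha blend: lerp(destination, source, source.a)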

But what happens when we have non-standard colors? Looking back at the blend expression, logic would dictate that we can evaluate any two-term expression, as long as the terms are independent and the coefficients are among the supported “blend factors”! By pre-loading our destination buffer with a value, the second term can be anything we need, and the alpha channel of our source can be packed with a coefficient to use as the destination factor if need be.

This realization got me thinking. “Subtract” blend modes aren’t explicitly supported in OpenGL, however a subtraction is simply the addition of a negative term. If our source term were negative, surely blend factors of One One would simply subtract the source from the destination color! That isn’t to say that this is guaranteed to work without issues! If the render target is a traditional 24 or 32-bit color buffer, then negative values may have undefined behavior! A subtraction by addition of a negative would only work assuming the sum is calculated independently somewhere in hardware, before it’s packed into the unsigned output buffer.

Under these assumptions, I set out to try my hand at a neat little trick. Rendering global object “thickness” in a single pass.

Why though?

Thickness is useful for a number of visual effects. Translucent objects, for example, could use the calculated thickness to approximate the degree to which light is absorbed along the path of the ray. Refraction could be more accurately approximated utilizing both incident, and emergent light calculations. Or, you could define the shape of a “fog volume” as an arbitrary mesh. It’s actually quite a useful thing to have!

Single pass global thickness maps

So here’s the theory. Every pixel in your output image is analogous to a ray cast into your scene. It can be thought of as a sweep backwards along the path of light heading towards the camera. What we really want to determine is the point where that ray enters and exits our object. Knowing these two points, we also essentially know the distance travelled through the volume along that ray.

It just so happens that we know both of these things! The projective-space position of a fragment must be calculated before a color can be written into a buffer, so we actually know the location of every fragment, or continuing the above analogy, ray intersection on the surface. This is also true of the emergent points, which all lie on the back-faces of our geometry! If we can find the distance the ray has traveled before entering our volume, and the distance the ray has traveled before exiting it, the thickness of the volume is just the difference of the two!

So how is this possible in a single pass? Well, normally when we render objects, we explicitly disable the “backfaces”; triangles pointing away from our camera. This typically speeds things up quite a bit, because backfaces almost certainly lie behind the visible portion of our model, and shading them is simply a waste of time. If we render them however, our fragment program will be executed both on the front and back faces! By writing the distance from the camera, or “depth” value as the color of our fragment, and negating it for front-faces, we can essentially output the “back minus front” thickness value we need!

DirectX provides a convenient semantic for fragment programs: VFACE. This value will be set to 1 when the fragment is part of a front-face, and -1 when the fragment is part of a back-face. Just output the depth, multiplied by the negated VFACE value, and we’ve got ourselves a subtraction!

Cull Off // disable front/back-face culling
Blend One One // perform additive (subtractive) blending
ZTest Off // disable z-testing, so backfaces aren’t occluded.

fixed4 frag (v2f i, fixed facing : VFACE) : SV_Target {
	return -facing * i.depth;
}
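
For completeness, the depth value consumed above could be produced in the vertex stage along these lines. This is a sketch which assumes the usual appdata/v2f structs (with a depth field added) and uses world-space distance from the camera as the metric; the exact measure isn’t critical, as long as front and back faces use the same one:

v2f vert (appdata v) {
	v2f o;
	o.vertex = UnityObjectToClipPos(v.vertex);

	// Distance from the camera to the vertex. Front faces subtract it,
	// back faces add it, leaving "back minus front" in the buffer.
	float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
	o.depth = distance(worldPos, _WorldSpaceCameraPos);

	return o;
}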

Unity Implementation

From here, I just whipped up a quick “Camera Replacement Shader” to render all opaque objects in the scene using our thickness shader, and drew the scene to an off-screen “thickness buffer”. Then, in a post-effect, just sample the buffer, map it to a neat color ramp, and dump it to the screen! In just a few minutes, you can make a cool “thermal vision” effect!
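
The post-effect itself can be tiny. Here’s a sketch, assuming Unity’s standard image-effect scaffolding, with the off-screen thickness buffer bound as _ThicknessTex and a 1D gradient texture _RampTex mapping thickness to a “thermal” color (both names, and _ThicknessScale, are placeholders):

fixed4 frag (v2f_img i) : SV_Target {
	// Read the accumulated "back minus front" thickness and remap to 0-1.
	float thickness = tex2D(_ThicknessTex, i.uv).r;
	float t = saturate(thickness * _ThicknessScale);

	// Look up a color from the gradient ramp.
	return tex2D(_RampTex, float2(t, 0.5));
}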

Issues

The subtraction blend isn’t necessarily supported on all hardware. It relies on a lot of assumptions, and as such is probably not appropriate for real applications. Furthermore, this technique really only works on watertight meshes. Meshes with holes or missing back-faces will have a thickness of negative infinity, which is definitely going to cause some problems. There are also a number of “negative poisoning” artifacts, where a front-face doesn’t necessarily overlap a corresponding back-face, causing brief pixel flickering. I think this occasional noise looks cool in the context of a thermal vision effect, but there’s a difference between a configurable “glitch” effect and actual non-deterministic code!

Either way, I encourage everyone to play around with blend-modes! A lot of neat effects can be created with just the documented terms, but once you get into “probably unsafe” territory, things start to get really interesting!

GPU Isosurface Polygonalization

Isosurfaces are extremely useful when it comes to data visualization. From medical imaging to fluid flow analysis, they are an excellent tool for understanding complex volumetric data. Games have also adopted some of these techniques for their own purposes. From the more rigid implementation in the ubiquitous game Minecraft to the Gels in Portal 2, these techniques serve the same basic purpose.

I wanted to try my hand at a general-purpose implementation, but before we dive into things, we must first answer a few basic questions.

What is an isosurface?

An isosurface can be thought of as the set of points in 3D where a continuous function takes on a constant value. If you’re visualizing an electromagnetic field, for example, you might generate an isosurface for a given potential, so you can easily determine its overall shape. This technique can be applied to other arbitrary values as well. Given CT scan data, a radiologist could construct an isosurface at the density of a specific type of tissue, extracting a 3D representation of bones or organs to view them separately, rather than having to manipulate a less intuitive stack of images.
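
Stated a bit more formally: for a scalar field f and a chosen iso-value c, the isosurface is the level set

S = \{\, \mathbf{x} \in \mathbb{R}^3 : f(\mathbf{x}) = c \,\}

and everything that follows is really about approximating that set with triangles.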

What will we use this system for?

I don’t work in the medical field, nor would I trust the accuracy of my implementation when it comes to making a diagnosis. I work in entertainment and computer graphics, and as you would imagine, the requirements are quite different. Digital artists can already craft far better visuals than any procedure yet known; the real challenge is dynamic data. Physically simulated fluids, player-modifiable terrain, mechanics such as these present a significant challenge for traditional artists! What we really need is a generalized system for extracting and rendering isosurfaces in real time to fill in the gaps.

What are the requirements for such a system?

Given our previous use case, we can derive a few basic requirements. In no particular order…

  1. The system must be intuitive. Designers have other things to do besides tweaking simulation volumes and fiddling with configurations.
  2. The system must be flexible. If someone suggests a new mechanic which relies heavily on procedural geometry, it should be easy to get up and running.
  3. The system must be compatible. The latest experimental extensions are fun, but if you want to release something that anyone can enjoy, it needs to run on 5 year old hardware.
  4. The system must be fast. At 60 fps, you only have 16ms to render everything in your game. We can’t spend 10ms of that drawing a special effect.

Getting Started!

Let’s look at requirement no. 4 first. Like many problems in computing, surface polygonalization can be broken down into repeated instances of much smaller problems. At the end of the day, the desired output is a series of interconnected polygons which appear to make up a complex surface. If each of these component polygons is accounted for separately, we can dramatically reduce the scope of the problem. Instead of generating a polygonal surface, we are now generating a single polygon, which is a much less daunting task. As with any discretization process, it is necessary to define a regular sample interval at which our continuous function will be evaluated. In the simplest 3D case, this will take the form of a regular grid of cells, and each of these cells will form a single polygon. Suddenly, this polygonalization process becomes massively parallel. With this new outlook, the problem becomes a perfect fit for standard graphics hardware!

For compatibility, I chose to implement this functionality in the Geometry Shader stage of the rendering pipeline, as it allows for the creation of arbitrary geometry given some basic input data. A Compute Shader would almost definitely be a better option in terms of performance and maintainability, but my primary development system is OSX, which presents a number of challenges when it comes to the use of Compute Shaders. I intend to update this project in the future, once Compute Shaders become more common.

If the field is evaluated at a number of regular points and a grid is drawn between them, we can construct a set of hypothetical cubes, each with a single sample at each of its 8 vertices. By comparing the values at each vertex, it is trivial to determine the plane of intersection between the theoretical isosurface and the cubic sample volume. If properly evaluated, the local solutions for each sample volume will form integral parts of the global surface implicitly, without requiring any global information.
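
In shader terms, the per-cell “case” is usually encoded as an 8-bit mask, one bit per corner sample. A sketch of that step, where sampleField, _IsoLevel, and cellMin are placeholder names for the field function, the target iso-value, and the cell’s minimum corner:

// The 8 corner offsets of a unit sample cell.
static const float3 CORNER_OFFSETS[8] = {
	float3(0,0,0), float3(1,0,0), float3(0,1,0), float3(1,1,0),
	float3(0,0,1), float3(1,0,1), float3(0,1,1), float3(1,1,1)
};

// Set bit i if corner i lies "inside" the surface, giving one of
// 2^8 = 256 possible cases per cell.
int caseIndex = 0;
for (int i = 0; i < 8; i++) {
	if (sampleField(cellMin + CORNER_OFFSETS[i]) > _IsoLevel)
		caseIndex |= (1 << i);
}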

This is the basic theory behind the ubiquitous Marching Cubes algorithm, first published in 1987 and still commonly used today. While it is certainly battle-tested, there are a number of artifacts in the output geometry that can make surfaces appear rough. The generated triangles are also often non-uniform and narrow, leading to additional artifacts later in the rendering process. Perhaps a more pressing issue is the sheer number of cases to be evaluated. For every sample cell, there are 256 possible configurations of planar intersections. The fantastic implementation by Paul Bourke wisely recommends the use of a look-up table, pre-computing these cases. While this may work well in traditional implementations, it crumbles under the parallel architecture of modern GPUs. Graphics hardware tends to excel at executing large batches of identical instructions, but begins to falter as soon as complex conditional branching is involved and operations have to be evaluated individually. In my tests, I found that the look-up tables performed no better, if not worse, than explicit evaluation, as the compiler could not easily expand and unroll the program flow, and evaluation could not be easily batched. Bearing this in mind, we ideally need to implement a method with as few logical branches as possible.

Marching Tetrahedra is a variant of the Marching Cubes algorithm which divides each cube into 5 (or 6, for a slightly different topology) tetrahedra. By limiting our integral sample volume to four vertices instead of 8, the number of possible cases drops to 16. In my tests, I got a 16x performance improvement using this technique (though realized savings are heavily dependent on the hardware used), confirming our suspicions. Unfortunately, marching tetrahedra can produce some strange surface features and has a number of artifacts of its own, especially with dynamic sampling grids.

Because of this, I ended up settling on naive surface nets, a simple dual method which generates geometry spanning multiple voxel sample volumes. An excellent discussion of the differences between these three meshing algorithms can be found here. Perhaps my favorite advantage of this method is the relative uniformity of the output geometry. Surface nets tend to produce smooth surfaces composed of quads of relatively equal size. In addition, I find it easier to comprehend and follow than other meshing algorithms, as its use of look-up tables and special cases is fairly limited.

Implementation Details

Isosurface_Sample.png

The sample grid is actually defined as a mesh, with a single disjoint vertex placed at each integral sample coordinate. These vertices aren’t actually drawn, but instead are used as input data to a series of shaders. The shader can therefore be considered to execute “per-voxel”, with its only input being the coordinate of the minimum bounding corner. One disadvantage commonly seen in similar systems is a fundamental restriction on surface resolution due to a uniform sample grid. In order to skirt around this limitation, meshing is actually performed in projected space rather than world space, so each voxel is a truncated frustum similar to that of the camera, rather than a cube. This not only eliminates a few extra transformations in the shader code, but provides LoD implicitly by ensuring each output triangle is of a fixed pixel size, regardless of its distance to the camera.

Once the sample mesh was created, I used a simple density function for the potential field used by this system. It provides a good amount of flexibility, while still being simple to comprehend and implement. Each new source of “charge” added to the field would contribute additively to the overall potential.  However, this quickly raises a concern! Each contributing charge must be evaluated at all sample locations, meaning our shader must, in some way, iterate through all visible charges! As stated earlier, branching and loops which cannot be unrolled can cause serious performance hiccups on most GPUs!
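
For illustration, the additive “charge” field described above might look something like the sketch below, where _Charges and _ChargeCount are assumed shader parameters (xyz position and w strength per charge), MAX_CHARGES is a hypothetical compile-time cap, and the inverse-square falloff is just an example:

float4 _Charges[MAX_CHARGES];
int _ChargeCount;

// Evaluate the potential at a point by summing every charge's
// contribution. Note the loop over charges: this is exactly the
// branching concern raised above.
float samplePotential (float3 p) {
	float potential = 0;
	for (int i = 0; i < _ChargeCount; i++) {
		float3 d = p - _Charges[i].xyz;
		potential += _Charges[i].w / max(dot(d, d), 0.0001);
	}
	return potential;
}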

While I was at it, I also implemented a Vertex Pre-pass. Due to the nature of GPU parallelism, each voxel is evaluated in complete isolation. This has the unfortunate side-effect of solving for each voxel vertex position up to 6 times (once for each neighboring voxel). The surface net algorithm utilizes an interpolated surface vertex position, determined from the intersections of the surface and the sample volume. This interpolation can get expensive if repeated 6 times more than necessary! To remedy this, I instead do a pre-pass calculating the interpolated vertex position, and storing it as a normalized coordinate within the voxel in the pixel color of another texture. When the geometry stage builds triangles, it can simply look up the normalized vertex positions from this table, and spit them out as an offset from the voxel min coordinate!
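
The core of that pre-pass boils down to: for each voxel, estimate where the surface crosses each of its edges, then average those crossings into a single vertex expressed in the voxel’s normalized coordinates. A rough sketch, reusing the placeholder sampleField and _IsoLevel from earlier, with voxelMin as the voxel’s minimum corner and EDGE_START/EDGE_END as hypothetical tables of the 12 edge endpoints:

// Average the estimated surface crossings along the voxel's edges.
float3 vertexSum = 0;
int crossings = 0;
for (int e = 0; e < 12; e++) {
	float3 a = EDGE_START[e];
	float3 b = EDGE_END[e];
	float va = sampleField(voxelMin + a) - _IsoLevel;
	float vb = sampleField(voxelMin + b) - _IsoLevel;
	if (va * vb < 0) {
		// The surface crosses this edge; interpolate the crossing point.
		float t = va / (va - vb);
		vertexSum += lerp(a, b, t);
		crossings++;
	}
}

// Normalized position within the voxel, written to the pre-pass texture.
float3 surfaceVertex = (crossings > 0) ? vertexSum / crossings : float3(0.5, 0.5, 0.5);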

The geometry shader stage is then fairly straightforward, reading in vertex positions, checking the case of the input voxel, looking up the vertex positions of its neighbors, and bridging the gap with a triangle.

Was it worth it?

Short answer, no.

I am extremely proud of the work I’ve done, and the end result is quite cool, but it’s not a solution I would recommend in a production setting. The additional complexity far outweighs any potential performance benefit, and maintainability, while not terrible, takes a hit as well. In addition, the geometry shader approach doesn’t work nearly as well as I had hoped. Geometry shaders are notoriously cache-unfriendly, and my implementation is no exception. Combine this with the rather unintuitive nature of working with on-GPU procedural geometry in a full-scale project, and you’ve got yourself a recipe for very unhappy engineers.

I think the concept of on-GPU surface meshing is fascinating, and I’m eager to look into Compute Shader implementations, but as it stands, the geometry stage is not the way to go.

I’ve made the source available on my GitHub if you’d like to check it out!