Interior Mapping – Part 2

In part 1, we discussed the requirements and rationale behind Interior Mapping. In this second part, we’ll discuss the technical implementation of what I’m calling (for lack of a better title) “Tangent-Space Interior Mapping”.

Coordinates and Spaces

In the original implementation, room volumes were defined in object-space or world-space. This is by far the easiest coordinate system to work in, but it quickly presents a problem! What about buildings with angled or curved walls? At the moment, the rooms are bounded by building geometry, which can lead to extremely small rooms in odd corners and uneven or truncated walls!

In reality, outer rooms are almost always aligned with the exterior of the building. Hallways rarely run diagonally and are seldom narrower at one end than the other! We would rather have all our rooms aligned with the mesh surface, and then extruded inward towards the “core” of the building.

Cylindrical Building

Curved rooms, just by changing the coordinate basis.

In order to do this, we can just look for an alternative coordinate system for our calculations, one which lines up with our surface (linear algebra is cool like that). Welcome to Tangent Space! Tangent space is already used elsewhere in shaders. Ever wonder why normal-maps are that weird blue color? They actually represent directions in tangent-space, relative to the orientation of the surface itself. Rather than “Forward”, the Z+ component of a normal map points “Outward”. We can simply perform the raycast in this alternative coordinate basis, and suddenly the entire problem becomes surface-relative in world-space, while still being axis-aligned in tangent space! A neat side-effect is that our room volumes now follow the curvature of the building, meaning that curved facades will render curved hallways running their length, and always have a full wall parallel to the building exterior.

While we’re at it, what if we used a non-normalized ray? Most of the time, a ray should have a normalized direction. “Forward” should have the same magnitude as “Right”. If we pre-scale our ray direction to match room dimensions, then we can simplify it out of the problem. So now, we’re performing a single raycast against a unit-sized axis-aligned cube!
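
To see why that’s valid (a quick sanity check with hypothetical names, not part of the final shader), consider a single axis of the problem:

// Illustrative only: the intersection distance is identical whether we
// intersect the original ray against walls spaced roomSize apart, or the
// pre-scaled ray against walls sitting on integer coordinates.
float IntersectWall1D(float wallIndex, float origin, float dir, float roomSize) {
   // Original formulation: the wall sits at (wallIndex * roomSize).
   float tWorld = (wallIndex * roomSize - origin) / dir;

   // Pre-scaled formulation: origin and direction are divided by roomSize,
   // so the wall now sits at the integer coordinate wallIndex.
   float tScaled = (wallIndex - origin / roomSize) / (dir / roomSize);

   // tWorld and tScaled are equal, so the pre-scale lets the intersection
   // code treat every room as a unit-sized cube.
   return tScaled;
}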

Room Textures

The original publication called for separate textures for walls, floors, and ceilings. This works wonderfully, but I find it cumbersome. Keeping these three textures in sync can get difficult, and atlasing multiple room textures together quickly becomes a pain. Alternative methods, such as the one proposed by Zoe J Wood in “Interior Mapping Meets Escher”, utilize cubemaps instead; however, this makes atlasing downright impossible and introduces new constraints on the artists building interior assets.
interior_atlas
Andrew Willmott briefly touched on an alternative in “From AAA to Indie: Graphics R&D”, which used a pre-projected interior texture for the interior maps in SimCity. This was the format I decided to use for my implementation, as it is highly authorable, easy to work with, and provides results only slightly worse than full cubemaps. A massive atlas of room interiors can be constructed on a per-building basis, and rooms can then be selected from it at random. Buildings can therefore easily maintain a cohesive interior style with random variation, using only a single texture resource.

Finally, The Code

I’ve excluded some of the standard Unity engine scaffolding, so as to not distract from the relevant code. You won’t be able to copy-paste this, but it should be easier to see what’s happening as a result.
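
For reference, the snippets below roughly assume scaffolding along these lines. The names are taken from how they’re used in the code, but the declarations themselves are my best guess rather than the actual source:

// Rough sketch of the omitted declarations, inferred from their usage below.
sampler2D _InteriorTex;      // pre-projected interior atlas
sampler2D _ExteriorTex;      // facade texture (alpha of 1 is opaque wall, 0 reveals the interior)
float4 _ExteriorTex_ST;      // required by TRANSFORM_TEX
float3 _RoomSize;            // room dimensions in UV units (width, height, depth)
float2 _InteriorTexCount;    // number of atlas tiles along each axis

// Tuning constant for the pre-projected interior texture; the value here
// is a placeholder, not necessarily what the final shader used.
#define INTERIOR_BACK_PLANE_SCALE 0.5

struct appdata {
   float4 vertex  : POSITION;
   float3 normal  : NORMAL;
   float4 tangent : TANGENT;
   float2 uv      : TEXCOORD0;
};

struct v2f {
   float4 vertex  : SV_POSITION;
   float2 uv      : TEXCOORD0;
   float2 tPos    : TEXCOORD1;   // tangent-space position (the unscaled UV)
   float3 tEyeVec : TEXCOORD2;   // tangent-space eye vector
};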

v2f vert (appdata v) {
   v2f o;
   
   // First, let's determine a tangent basis matrix.
   // We will want to perform the interior raycast in tangent-space,
   // so it correctly follows building curvature, and we won't have to
   // worry about aligning rooms with edges.
   half tanSign = v.tangent.w * unity_WorldTransformParams.w;
   half3x3 objectToTangent = half3x3(
      v.tangent.xyz,
      cross(v.normal, v.tangent.xyz) * tanSign,
      v.normal);

   // Next, determine the tangent-space eye vector. This will be
   // cast into an implied room volume to calculate a hit position.
   float3 oEyeVec = v.vertex.xyz - mul(unity_WorldToObject, float4(_WorldSpaceCameraPos, 1)).xyz;
   o.tEyeVec = mul(objectToTangent, oEyeVec);

   // The vertex position in tangent-space is just the unscaled
   // texture coordinate.
   o.tPos = v.uv;

   // Lastly, output the normal vertex data.
   o.vertex = UnityObjectToClipPos(v.vertex);
   o.uv = TRANSFORM_TEX(v.uv, _ExteriorTex);

   return o;
}

fixed4 frag (v2f i) : SV_Target {
   // First, construct a ray from the camera, onto our UV plane.
   // Notice the ray is being pre-scaled by the room dimensions.
   // By distorting the ray in this way, the volume can be treated
   // as a unit cube in the intersection code.
   float3 rOri = frac(float3(i.tPos,0) / _RoomSize);
   float3 rDir = normalize(i.tEyeVec) / _RoomSize;

   // Now, define the volume of our room. Since the ray origin has
   // already been wrapped into the first room by frac(), and the ray
   // is pre-scaled, this is just a unit-sized box extending inward
   // from the surface.
   float3 bMin = float3(0, 0, -1);
   float3 bMax = bMin + 1;
   float3 bMid = bMin + 0.5;

   // Since the bounding box is axis-aligned, we can just find
   // the ray-plane intersections for each plane. We only
   // actually need to solve for the 3 "back" planes, since the
   // near walls of the virtual cube are "open". Just find the
   // corner opposite the camera using the sign of the ray's
   // direction.
   float3 planes = lerp(bMin, bMax, step(0, rDir));
   float3 tPlane = (planes - rOri) / rDir;

   // Now, we know the distance to the intersection is simply
   // equal to the closest ray-plane intersection point.
   float tDist = min(min(tPlane.x, tPlane.y), tPlane.z);

   // Lastly, given the point of intersection, we can calculate
   // a sample vector just like a cubemap.
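   // For the pre-projected interior texture, the back wall samples the
   // scaled-down center region of the image (INTERIOR_BACK_PLANE_SCALE),
   // while points nearer the window plane sample out toward its edges.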
   float3 roomVec = (rOri + rDir * tDist) - bMid;
   float2 interiorUV = roomVec.xy * lerp(INTERIOR_BACK_PLANE_SCALE, 1, roomVec.z + 0.5) + 0.5;

#if defined(INTERIOR_USE_ATLAS)
   // If the room texture is an atlas of multiple variants, transform
   // the texture coordinates using a random index based on the room index.
   float2 roomIdx = floor(i.tPos / _RoomSize.xy);
   float2 texPos = floor(rand(roomIdx) * _InteriorTexCount) / _InteriorTexCount;

   interiorUV /= _InteriorTexCount;
   interiorUV += texPos;
#endif

   // Finally, sample the interior texture and blend it with the exterior!
   fixed4 interior = tex2D(_InteriorTex, interiorUV);
   fixed4 exterior = tex2D(_ExteriorTex, i.uv);

   return lerp(interior, exterior, exterior.a);
}
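
One piece of that omitted scaffolding is the rand() call used for atlas selection. Any deterministic hash of the room index will do; something along these lines (my stand-in, not necessarily what the original used, declared above the fragment shader) works fine:

// Cheap deterministic 2D hash used to pick an atlas tile per room.
// Returning a float2 lets the x and y tile indices vary independently;
// a scalar hash also works, but only ever picks tiles along the diagonal.
float2 rand(float2 co) {
   return frac(sin(float2(
      dot(co, float2(12.9898, 78.233)),
      dot(co, float2(39.3468, 11.135)))) * 43758.5453);
}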

And that’s pretty much all there is to it! The code itself is actually quite simple and, while there are small visual artifacts, it provides a fairly convincing representation of interior rooms!

Interior + Exterior Blend

There’s definitely more room for improvement in the future. The original paper supported animated “cards” to represent people and furniture, and a more realistic illumination model may be desirable. Still, for an initial implementation, I think things came out quite well!

Interior Mapping – Part 1

Rendering convincing environments in realtime has always been difficult, especially for games which take place at a “human” scale. Games consist of a series of layered illusions and approximations, all working (hopefully) together to achieve a unified goal: to represent the world in which the game takes place. In the context of a simplified or fantastical world, this isn’t too bad. It’s a matter of creating a unified style and theme that feels grounded in the reality of that particular game. The fantastic narrative platformer “Thomas Was Alone”, for example, arguably conveys a believable world using just shape and color. As soon as a game takes place in an approximation of our real world, however, the cracks start to appear. There are a tremendous number of “details” in the real world: subtle differences on seemingly identical surfaces that the eye can perceive, even if not consciously.

uncanny valley

This CG incarnation of Dwayne Johnson as the titular “Scorpion King” is a prime example of “The Uncanny Valley”

We as humans are exceptionally good at identifying visual phenomena and, more importantly, their absence. You may have heard this referred to as “The Uncanny Valley”: when something is too realistic to be considered cute or cartoony, but too unrealistic to look… right. It’s extremely important to include some representation of those “missing” pieces, even if they’re not 100% accurate, in order to preserve the illusion.

While not nearly as noticeable at first glance, missing details in an environment are equally important to preserving the illusion of a living, breathing, virtual world.

Take, for example, this furniture store from GTA IV.

GTA Furniture Store.png

A nice looking furniture store, though something’s missing…

This is a very nice piece of environment art. It’s visually interesting, it fits the theme and location, and it seems cohesive within the world… though something is amiss. The view through the windows is clearly just a picture of a store, slapped directly onto the window pane, like a sticker on the glass! There’s no perspective difference between the individual windows on different parts of the facade. The view of the interior is always head-on, even if the camera is at an angle to the interior walls. This missing effect greatly weakens the illusion.

From this, the question arises…

How do we convey volume through a window, without creating tons of work for artists, or dramatically altering the production pipeline?

Shader Tricks!

The answer (as you may have guessed from the header) lies in shader trickery! To put it simply, shaders are tiny programs which take geometric information as input, mush it around a bunch, and output a color. Our only concern is that the final output color looks correct in the scene; what happens in the middle frankly doesn’t matter much. If we offset the output colors, we can make it look like the input geometry is offset too! If outputs are offset non-uniformly, it can be made to appear as though the rendered image is skewed, twisted, or distorted in some way.


If you’ve ever looked at 3D sidewalk art, you’ve seen a real-world implementation of parallax mapping.

The family of techniques collectively known as “Parallax Mapping” does just this. Input texture coordinates are offset based on the observer angle and a per-texel “depth” value. By determining the point where our camera ray intersects the surface height-field, we can create what amounts to a 3D projection of an otherwise 2D image. “Learn OpenGL” provides an excellent technical explanation of parallax mapping if you’re curious.
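
The core of basic parallax mapping is only a few lines. Here’s a minimal single-sample sketch, assuming a tangent-space view direction and a simple height map (illustrative only, not production code):

// Minimal single-sample parallax offset (illustrative sketch).
// viewDirTS is the tangent-space view direction, pointing from the
// surface toward the camera; the height map is assumed to be in [0,1].
float2 ParallaxUV(float2 uv, float3 viewDirTS, sampler2D heightMap, float heightScale) {
   float height = tex2D(heightMap, uv).r;
   // Slide the texture coordinate along the view direction, proportional
   // to the sampled height; "deeper" texels appear to shift further.
   return uv - (viewDirTS.xy / viewDirTS.z) * height * heightScale;
}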

While the theory is perfect for our needs, the methodology is lacking. Parallax mapping is not without its issues! Designed to be a general-purpose solution, it suffers from a number of visible artifacts when used in our specific case. It works best on smoother height-fields, for instance. Large differences in height between texels can create weird visual distortions! There are a number of alternatives to get around this issue (such as “Steep Parallax Mapping”), but many are iterative, and result in odd “step” artifacts as the ratio of depth to iteration count increases. In order to achieve a convincing volume for our buildings using an unmodified parallax shader, we’d need to use so many iterations that it would quickly become a performance nightmare.
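
For comparison, here’s a rough sketch of the steep parallax approach: march the view ray through a fixed number of depth layers until it drops below the height-field. The per-pixel sample count is exactly where both the cost and the “step” artifacts come from.

// Rough sketch of steep parallax mapping (illustrative only).
float2 SteepParallaxUV(float2 uv, float3 viewDirTS, sampler2D heightMap,
                       float heightScale, int numLayers) {
   float  layerDepth = 1.0 / numLayers;
   float2 deltaUV = (viewDirTS.xy / viewDirTS.z) * heightScale / numLayers;

   float  rayDepth = 0.0;
   float2 currentUV = uv;
   // tex2Dlod avoids derivative problems inside the dynamic loop.
   float  surfaceDepth = 1.0 - tex2Dlod(heightMap, float4(currentUV, 0, 0)).r;

   // March until the ray sinks below the height-field. More layers means
   // fewer "step" artifacts, but more texture samples per pixel.
   for (int i = 0; i < numLayers && rayDepth < surfaceDepth; i++) {
      currentUV -= deltaUV;
      rayDepth += layerDepth;
      surfaceDepth = 1.0 - tex2Dlod(heightMap, float4(currentUV, 0, 0)).r;
   }
   return currentUV;
}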

Interior Mapping

Parallax mapping met nearly all of our criteria, but still wasn’t suitable for our application. Whenever a general solution fails, it’s usually a good idea to sit down and consider the simplest possible specific solution that will suffice.

Raymarch

For each point on the true geometry (blue), select a color at the point of intersection between the camera ray, and an imaginary room volume (red).

In our case, we want rectangular rooms inset into our surface. The keyword here is “rectangular”. The generality of parallax mapping means that an iterative numeric approach must be used, since there is no analytical way to determine where our camera ray intersects a general height-field. If we limit the problem to only boxes, then an exact solution is not only possible, but trivial! Furthermore, if these boxes are guaranteed to be axis-aligned, the computation becomes extremely simple! Then, it’s just a matter of mapping the point of intersection within our room volume to a texture, and outputting the correct color!

interior mapping example

Example of “Interior Mapping” from the original publication by Joost van Dongen.

Originally published in 2008, the now well-known “Interior Mapping” by Joost van Dongen seems like a prime candidate! In this approach, the facade of a building mesh is divided into “rooms”, and a raycast is performed for each texel. The coordinate at the point of intersection between our camera ray and the room volume can then be used to sample a set of “Room Textures”, and voila! Similar to parallax mapping, this offsets input texture coordinates to provide a projection of a wall, ceiling, and floor texture within each implicit “room volume”, resulting in a geometrically perfect representation of an interior without the added complexity of extra geometry and material work!

In part 2, we’ll discuss modifications to the original implementation for performance and quality-of-life improvements!