Messing With Shaders – Realtime Procedural Foliage

 

ivy_close.png
The programmable rendering pipeline is perhaps one of the largest advances in the history of realtime computer graphics. Before its introduction, graphics libraries like OpenGL and DirectX were limited to the “fixed function pipeline”: a programmer would shove in geometric data, and the library would draw it however it saw fit. Developers had little to no control over the output of their application beyond a few “render mode” settings. This was fine for rendering relatively simple scenes, solid objects, and simplistic lighting, but as visual fidelity increased and hardware became more powerful, it quickly became necessary to allow for more customizable rendering.

The process of rendering a 3D object in the modern programmable pipeline is typically broken down into a number of steps. Data is copied into fast-access graphics memory, then transformed through a series of stages before the graphics hardware eventually rasterizes it to the display. In its most basic form, there are two of these stages the developer can customize. The “Vertex Program” manipulates data on a per-vertex level, such as positions and texture coordinates, before handing the results on to the “Fragment Program”, which is responsible for determining the properties of a given fragment (think of a pixel, but carrying more than just color information). The addition of just these two stages opened the floodgates for interesting visual effects: approximated reflections for metallic objects, cel-shading for cartoon characters, and more! Since then, even more optional stages have been inserted into the pipeline, allowing for an even greater variety of effects.

I’ve spent a considerable amount of time experimenting with vertex and fragment programs in the past, but this week I decided to spend a few hours working with one of the other, less common stages, namely “Geometry Programs”. Geometry programs are a more recent innovation, and have only begun to see extensive use in the last decade or so. They allow developers not only to modify vertex data as it’s received, but to construct entirely new vertices based on the input primitives (triangles, quads, etc.). As you can imagine, this presents incredible potential for new effects, and is something I personally would like to become more experienced with.

In four or five hours, I managed to write a relatively complex effect, and the rest of this post will detail, at a high level, what I did to achieve it.

ivy_distant.png

Procedurally generated geometry for ivy growing on a simple building.

This is my procedural Ivy shader. It is a relatively simple two-pass effect which will apply artist-configurable ivy to any surface. What sets this effect apart from those I’ve written in the past is that it actually constructs new geometry to add 3D leaves to the surface extremely efficiently.

One of the major technical issues when it comes to rendering things like foliage is that the level of geometric detail required to accurately represent leaves is quite high. While a digital environment artist could use a 3D modeling program to add in hundreds of individual leaves, this is not necessarily a good use of their time. Furthermore, it quickly becomes unmaintainable if anyone decides that the position, density, or style of foliage should change in the future. I don’t know about you, but I don’t want to be the one to have to tell a team of environment artists that all of the ivy in an entire game needs to be slightly different. In this situation, the key is to work smarter, not harder. While procedural art is often controversial in the game industry, I think most developers would agree that artist-directed procedural techniques are an invaluable tool.

Ivy_Shader_Steps.png
My foliage effect is composed of two separate rendering passes. In the first, a triplanar-mapped base texture is blended onto the object based on the desired density of the ivy. This helps make the foliage feel much more dense, and hides the seams where the leaves meet the base geometry.
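
Triplanar mapping itself is a standard trick: the texture is sampled once per world axis, and the three results are blended by the surface normal. As a rough, conceptual sketch in TypeScript (not the actual shader code from this effect), the blend weights come out like this:

// Conceptual sketch of triplanar blending (illustration only; the real effect
// does this in the fragment shader). A texture is sampled three times, once
// per world axis, and the results are blended by the absolute surface normal.
type Vec3 = { x: number; y: number; z: number };

function triplanarWeights(normal: Vec3): Vec3 {
  const ax = Math.abs(normal.x), ay = Math.abs(normal.y), az = Math.abs(normal.z);
  const sum = ax + ay + az; // normalize so the three weights sum to 1
  return { x: ax / sum, y: ay / sum, z: az / sum };
}

// color = sample(pos.yz) * w.x + sample(pos.xz) * w.y + sample(pos.xy) * w.z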

Next, in the second rendering pass, the geometry program transforms every input triangle into a set of quads lying on that triangle with a uniform, pseudo-random distribution. First, it is necessary to determine the number of leaf quads to generate. In order to maintain a consistent density of leaf geometry, the surface area of the triangle is calculated using the “half cross-product formula”, and is then multiplied by the desired number of leaves per square meter of surface area. Then, for each of these leaves, a random sample point on the triangle is picked by sampling a noise function seeded with the world-space centroid of the triangle and the index of the leaf quad being generated. These noise values are used to generate barycentric coordinates, which in turn are used to interpolate the position and normal of the triangle at that point, yielding a random world-space position and its corresponding surface normal. A triangle strip is then emitted at each of these points.
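
In rough TypeScript terms (the real version lives in the HLSL geometry program, so treat this purely as a sketch of the math, with the seeded noise sampling left out), the per-triangle placement looks something like this:

// Sketch of the per-triangle leaf placement math. The two inputs u and v are
// assumed to come from the shader's seeded noise function, in the range [0, 1).
type Vec3 = [number, number, number];

const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const cross = (a: Vec3, b: Vec3): Vec3 => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];
const len = (a: Vec3) => Math.hypot(a[0], a[1], a[2]);

// "Half cross-product formula": area of triangle ABC.
function triangleArea(a: Vec3, b: Vec3, c: Vec3): number {
  return 0.5 * len(cross(sub(b, a), sub(c, a)));
}

// Turn two noise values into a uniformly distributed barycentric coordinate.
function randomBarycentric(u: number, v: number): [number, number, number] {
  if (u + v > 1) { u = 1 - u; v = 1 - v; } // reflect back inside the triangle
  return [1 - u - v, u, v];
}

// Interpolate any per-vertex attribute (position, normal, ...) at that point.
function interpolate(a: Vec3, b: Vec3, c: Vec3, bary: [number, number, number]): Vec3 {
  return [
    a[0] * bary[0] + b[0] * bary[1] + c[0] * bary[2],
    a[1] * bary[0] + b[1] * bary[1] + c[1] * bary[2],
    a[2] * bary[0] + b[2] * bary[1] + c[2] * bary[2],
  ];
}

// leafCount = triangleArea(...) * leavesPerSquareMeter, one quad per leaf.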

Now, all that’s needed is to determine the orientation of the leaf, and output the correct triangle-strip primitive. Even this is relatively simple. By using the world-space surface normal and world “up” vector, a simple “change of vector basis” matrix is constructed. Combining this with a slightly randomized scale factor, and a small offset to orientation (to add greater variety to patches of leaves), we can transform normalized quad vertices into the exact world-space positions we want for our leaves!

...

// Defines a unit-size square quad with its base at the origin. Doing
// this allows for very easy scaling and positioning in the next steps.
static const float3 quadVertices[4] = {
   float3(-0.5, 0.0, 0.0),
   float3( 0.5, 0.0, 0.0),
   float3(-0.5, 0.0, 1.0),
   float3( 0.5, 0.0, 1.0)
};

...

// IN THE GEOMETRY SHADER
// Change of basis matrix converts from XYZ space to leaf-space
float3x3 leafBasis = float3x3(
   leafX.x, leafY.x, leafZ.x,
   leafX.y, leafY.y, leafZ.y,
   leafX.z, leafY.z, leafZ.z
);

// constructs a random rotation matrix from Euler angles in the range 
// (-10,10) using wPos as a seed value.
float3x3 leafJitter = randomRotationMatrix(wPos, 10);

// Multiply the basis matrix by the random rotation matrix to get the
// complete leaf transformation. Note, we could use a 4x4 matrix here
// and incorporate the translation as well, but it's easier to just add
// the world position as an offset in the final step.
float3x3 leafMatrix = mul(leafBasis, leafJitter);

// lastly, we can just output four vertices in a triangle strip
// to form a simple quad, and we'll be on our merry way.
for ( int i = 0; i < 4; i ++ ) {
   FS_INPUT v;
   v.vertex = UnityWorldToClipPos( 
      float4( mul(leafMatrix, quadVertices[i] * scale), 1) + wPos 
   );
   triStream.Append(v);
}

At this point, the meat of the work is done! We’ve got a geometry shader outputting quads across our surface. The last thing needed is to texture them, and the effect is complete!

Configuration!

I briefly touched on artist-configurable effects in the introduction, and I’d like to quickly address that too. I opted to go with the simplest solution I could think of, and it ended up being incredibly effective.

venus_vertex_weights.png

Configuring procedural geometry using painted vertex weights.

The density and location of ivy is controlled through painted vertex colors. This allows artists to simply paint the sections of their model they would like to be covered in foliage, and the shader will use this to weight the density and distribution of the procedural geometry. This way, an environment artist can use the tools they’re familiar with to quickly sketch out which parts of a model they would like to be affected by the shader. It will take an experienced artist less than a minute to get a rough draft working in-engine, and changes to the foliage can be made just as quickly!
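
As a sketch of how that weighting might factor in (the exact mapping in my shader may differ), the painted density simply scales the per-triangle leaf count:

// Hypothetical sketch: scale the leaf count by the average painted density
// of the triangle's three vertices (vertex-color values in the 0..1 range).
function leafCount(area: number, leavesPerSquareMeter: number,
                   density0: number, density1: number, density2: number): number {
  const density = (density0 + density1 + density2) / 3;
  return Math.floor(area * leavesPerSquareMeter * density);
}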

At the moment, only the density of the foliage is mapped this way (all other parameters are uniform material properties), but I intend to expand the variety of properties which can be expressed this way, allowing for greater control over the final look of the model.

TODOs!

This ended up being an extremely informative project, but there are many things still left to do! For one, the procedural foliage does not take lighting into account. I built this effect in the Unity game engine, and opted out of using the standard “Surface Shader” code-generation system, which, while very useful in 99% of cases, is extremely limiting in situations such as this. I would also like to improve the resolution of the leaf geometry, applying adaptive runtime tessellation to the generated primitives in order to give them a slight curve, rather than displaying flat billboards. Other things, such as color variation on leaves, could go a long way toward improving the effect, but for now I’m quite satisfied with how it went!

Whelp, on to the next one!

 


tsGL – Improvements

I worked a bit more on tsGL over the last few days, and managed to clean up a few things that were really bothering me! So far progress has been relatively smooth and it’s actually turning out quite well! So, what changed?

Lighting!
tsGL’s lighting code is much cleaner now, and seems to work pretty well! Lighting is broken down into a multi-pass system which allows an arbitrary number of lights to be applied to each object. Take this scene, for example…

Composite.png

tsGL – a scene featuring three point lights. Red, blue, and white.

This scene features three realtime lights, a red light in the back left corner, a blue light behind the camera, and a white light above the scene. All of these lights are drawn as “point-lights”, meaning that they act as omnidirectional sources.

First, the scene is drawn with no lights applied. This is important to capture shader-specific details on each object, such as emissive textures, reflections, and unlit details.

Next, lights are sorted based on their “importance”. This is calculated from the distance to the object being rendered and the intensity of the light. If there’s a large, bright light shining on an object, it will be drawn first, followed by the remaining lights in order until the maximum allowed number has been reached.

Then, light parameters are packed into a 4×4 matrix. This may seem odd, but it means that all attributes of a light can be passed as a single input to the GLSL shader. This allows for a large amount of flexibility in designing shader programs, as well as the convenience of not requiring several uniform variables to be defined in each one.
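
For illustration, packing might look something like the following TypeScript sketch. The particular layout here is my assumption for the example, not necessarily the layout tsGL uses; the only requirement is that the shader unpacks it the same way.

// Pack a point light into a 4x4 matrix so all of its attributes can be sent
// to the shader as a single uniform. This layout is an assumption for
// illustration purposes only.
function packLight(position: [number, number, number],
                   color: [number, number, number],
                   intensity: number,
                   range: number): Float32Array {
  return new Float32Array([
    position[0], position[1], position[2], 1, // position (w flags a point light)
    color[0],    color[1],    color[2],    0, // color
    intensity,   range,       0,           0, // scalar parameters
    0,           0,           0,           0, // unused / reserved
  ]);
}

// Uploaded with a single call, e.g.
// gl.uniformMatrix4fv(lightLocation, false, packLight(pos, col, 2.0, 10.0));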

The vertex shader calculates a set of values useful for lighting, namely the per-vertex direction of incident light, and the attenuation of brightness over distance. These are calculated per-vertex because it significantly reduces the number of necessary calculations, and the small imperfections caused by interpolation across the triangle are largely imperceptible!

attenuation_withsource.png

Attenuation of one of the lights in the scene, visualized in false color. Ranges from red (high intensity) to green (low intensity).

By dividing the intensity of the light by the square of the distance from the source, lights appropriately grow dimmer as they are moved farther from an object. The diffuse component of the light is also calculated per-fragment using the typical Lambertian reflectance model, which ensures that only the “light-facing side” of an object is shaded. In the above image, the intensity of a red light throughout the scene is visualized in false color, and the final diffuse light calculation is shown on the bottom right.
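
Numerically, the two terms boil down to something like this (a sketch of the math rather than the exact shader code):

type Vec3 = [number, number, number];
const dot = (a: Vec3, b: Vec3) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

// Inverse-square falloff: intensity fades with the square of the distance.
function attenuation(intensity: number, distance: number): number {
  return intensity / (distance * distance);
}

// Lambertian diffuse term: only the light-facing side receives light.
// Both vectors are assumed to be normalized.
function lambert(normal: Vec3, toLight: Vec3): number {
  return Math.max(0, dot(normal, toLight));
}

// diffuse contribution = lightColor * lambert(n, l) * attenuation(i, d)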

Awesome, but at this point our scene is just an unlit void! How do we combine the output of each light pass into a final image?

By exploiting OpenGL blend modes, we can produce exactly the effect we want! OpenGL allows the programmer to specify an active “blend mode”, essentially determining how new data is written to the display buffer. This is primarily used for rendering transparent objects: a window pane, for instance, would need to be rendered over top of the rest of the scene, mixing the background color with the color of the glass itself to produce the final color. This is no different!

For these lights, the OpenGL Blend Mode is set to “additive”. This will literally add together the colors of every object drawn, which in the case of lights is just what we need. Illumination is a purely additive process, and it is impossible for a light to make things darker. Because of this, simply adding the effects of several lights together will output the illuminated scene as a whole! The best part is that it works without having to pass an array of lighting information to the shader, or arbitrarily limiting the number of available lights based on hardware! While the overhead of rendering an additional pass is non-trivial, it’s a small price to pay for the flexibility allowed by this approach.
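
In WebGL terms, the state for the additive passes looks roughly like the sketch below; the depth-state details are my assumption of a typical setup rather than a quote of tsGL’s source.

// Switch the context into the additive-blend state used for the light passes.
function setAdditiveBlendState(gl: WebGLRenderingContext): void {
  gl.enable(gl.BLEND);
  gl.blendFunc(gl.ONE, gl.ONE); // src + dst: each pass adds its light on top

  // Companion depth state (an assumption of a typical setup): test against
  // the depth laid down by the base pass, but don't write depth again.
  gl.depthFunc(gl.LEQUAL);
  gl.depthMask(false);
}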

Here, we can see the influence of each of the three lights.

lights.png

The additive passes of each of the three lights featured in the scene above. By summing together these three images, we obtain the fully illuminated scene.

At the end of the day, we end up with a process that looks like this (a rough code sketch follows the list).

  1. Clear your drawing buffers. (erases the previous frame so we have a clean slate.)
  2. Draw the darkened scene.
  3. Sort the lights based on their “importance”.
  4. Set the blend mode to “additive”.
  5. For each light in the scene (in order of importance)
    1. Draw the scene again, illuminated by the light.
  6. We’re done! Display the buffer!
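
Here’s a minimal TypeScript sketch of that loop. drawScene and sortByImportance are hypothetical helpers standing in for the engine’s real rendering and sorting code.

// Hypothetical helpers for illustration; the real engine's API differs.
interface Light { position: [number, number, number]; intensity: number; }
declare function drawScene(light?: Light): void;
declare function sortByImportance(lights: Light[]): Light[];

function renderFrame(gl: WebGLRenderingContext, lights: Light[], maxLights: number): void {
  // 1. Clear the buffers from the previous frame.
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

  // 2. Base pass: the darkened scene (emissive/unlit details only).
  gl.disable(gl.BLEND);
  drawScene();

  // 3-5. Sort the lights, then draw one additive pass per light.
  gl.enable(gl.BLEND);
  gl.blendFunc(gl.ONE, gl.ONE);
  for (const light of sortByImportance(lights).slice(0, maxLights)) {
    drawScene(light);
  }
}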

This solution isn’t perfect, and more powerful techniques have been described in recent years, but given the restrictions of WebGL, I find this technique works quite well. One feature I would like to add is for the additive passes to only draw the objects affected by each light, rather than the entire scene over again. This would allow us to skip any calculations that don’t contribute to the final image, and may increase performance, though without further testing it’s difficult to say for sure.

Cubemaps!
This is always a fun feature to add, because it can have incredibly apparent results. Cubemaps are essentially texture maps that exist on all sides of a cube. Rather than sampling a single point for a color, you sample a direction, returning the color at that “angle” within the cube. By providing an image for each face, a cubemap can be built to represent lighting information, the surrounding environment, or whatever else would require a 360° texture lookup!

cubemaps_skybox

Example of a cubemap, taken from “LearnOpenGL.com”
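
Creating a cubemap in WebGL boils down to six texture uploads onto the faces of a single cube-map texture; a rough sketch (the filtering choices here are just an example):

// Build a cube-map texture from six face images, given in the order
// +X, -X, +Y, -Y, +Z, -Z. The face targets are consecutive GL enums.
function createCubemap(gl: WebGLRenderingContext, faces: HTMLImageElement[]): WebGLTexture {
  const texture = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_CUBE_MAP, texture);
  faces.forEach((image, i) => {
    gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + i, 0,
                  gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
  });
  gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  return texture;
}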

tsGL now supports cubemaps as a specific instance of a “texture”, and they can be mapped to materials and used identically in the engine! One of the clever uses of a cubemap is called “environment mapping”, which essentially boils down to emulating reflection by looking up the color of the surrounding area in a precomputed texture. This is far more efficient than actually computing reflections dynamically, and plays much more nicely within the paradigms of traditional computer graphics! Here’s a quick example of an environment-mapped torus running in tsGL!

yakf0.gif

An environment-mapped torus, showcasing efficient reflection.

Now that cubemaps are supported, it’s also possible to make reflective and refractive materials efficiently, so shader programs can be made much more interesting within the confines of the engine!

Render Textures
Another nifty feature is the addition of render textures! By essentially binding a “camera” object to a texture, it is possible to render the scene into that texture, instead of onto the screen! This texture can then be used like any other anywhere in the drawing process, which means it’s possible to do things like draw a realtime security camera monitor in the scene, or have a mirror with realtime reflections! This can get quite costly, so it is best used sparingly, but the addition of this feature opens the door to a wide variety of other cool effects!
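
Under the hood, a render texture is a WebGL framebuffer object with a texture attached as its color target. A minimal version might look like this sketch (tsGL’s actual implementation may differ in the details):

// Create a texture-backed framebuffer that a "camera" can render into.
function createRenderTexture(gl: WebGLRenderingContext, width: number, height: number) {
  const texture = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

  // Depth renderbuffer so the off-screen pass is depth-tested too.
  const depth = gl.createRenderbuffer()!;
  gl.bindRenderbuffer(gl.RENDERBUFFER, depth);
  gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, width, height);

  const framebuffer = gl.createFramebuffer()!;
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);
  gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, depth);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null); // back to the default screen buffer

  return { framebuffer, texture };
}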

With the addition of both cubemaps and render textures, I hope to get shadow mapping working in the near future, which would allow objects to cast shadows appropriately when illuminated in the scene, something that was previously infeasible!

And now, the boring stuff – HTML
The custom HTML tag system has been improved immensely, and now makes much more sense. Entity tags may now be nested to define object hierarchies, and arbitrary parameters can be provided as child-tags, rather than attributes. This generally makes the scene documents far more legible, and makes adding new features in the future much easier.

Here’s a “camera” object, for example.

<tsgl-entity id="main_camera">
  <tsgl-component type="camera">
    <tsgl-property type="number" name="fov" value="80"></tsgl-property>
    <tsgl-property type="number" name="aspect" value="1.6"></tsgl-property>
  </tsgl-component>
  <tsgl-component type="transform">
    <tsgl-property type="vector" name="position" value="0 2 0"></tsgl-property>
  </tsgl-component>
</tsgl-entity>

Previously, the camera parameters would have been crammed into a single tag’s attributes, making it much more difficult to read, and much more verbose. With the addition of tsgl-property tags, attributes of each scene entity can now be specified within the entity’s definition, so all of those nice editor features like code-folding can now be exploited!

This part isn’t exactly fun compared to the rendering tests earlier, but it certainly helps when attempting to define a scene, and add new features!

That’s all for now! In the meantime, you can check out the very messy and very unstable tsGL on GitHub if you want to try it for yourself, or experiment with new features!

tsGL – An Experiment in WebGL

dragon_turntable

For quite some time now, I’ve been extremely interested in WebGL. Realtime, hardware-accelerated rendering embedded directly in a web-page? Sounds like a fantastic proposition. I’ve dabbled quite a bit in desktop OpenGL and have grown to like it, despite its numerous… quirks… so it seemed only natural to jump head-first into WebGL and have a look around!

So, WebGL!
I was quite surprised by WebGL’s ease of use! Apart from browser compatibility (which is growing better by the week!), WebGL was relatively simple to set up: initialize an HTML canvas context, and go to town! The rendering pipeline is nearly identical to OpenGL ES, and supports the majority of the same features as well! If you have any knowledge of desktop OpenGL, the zero-to-triangle time of WebGL should only be an hour or so!
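
Getting a context really is about this much code (the canvas id here is just a placeholder):

// Grab a WebGL context from a canvas element; everything else builds on this.
const canvas = document.getElementById("viewport") as HTMLCanvasElement;
const gl = canvas.getContext("webgl");
if (!gl) {
  throw new Error("WebGL is not supported in this browser.");
}
gl.clearColor(0.1, 0.1, 0.1, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT);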

Unfortunately, WebGL must be extremely compatible if it is to be deployed to popular web browsers. This means that it has to run on pretty much anything with a graphics processor, and can’t rely on the user having access to the latest and greatest technologies! For instance, when writing the first draft of my lighting code, I attempted to implement a deferred rendering pipeline, but quickly discovered that multi-target rendering isn’t supported in many WebGL implementations, so I had to fall back to the more traditional forward rendering pipeline. It works nonetheless!

dabrovic_pan

A textured scene featuring two point lights, and an ambient light.

Enter TypeScript!
I’ve never been particularly fond of Javascript. It’s actually quite a powerful language, but I’ve always been uncomfortable with the syntax it employs. This, combined with the lack of concrete typing, can make it quite difficult to debug, and quite early on I encountered some issues with trying to multiply a matrix by undefined. By the time I had gotten most of my 3D math working, I had spent a few hours trying to find the source of some ugly, silent failures, and decided that something needed to be done.

I was recommended TypeScript early in the life of the project, and was immediately drawn to it. TypeScript is a superset of Javascript which adds compile-time type checking and a more familiar syntax. Best of all, it compiles down to standard Javascript, meaning it is perfectly compatible with all existing browsers! With minimal setup, I was able to quickly convert my existing code to TypeScript and be on my way! Now, when I attempt to take the cross product of a vector and “Hello World”, I get a nice error in the Javascript console, instead of silent refusal.

Rather than defining an object prototype traditionally,

var MyClass = ( function() {
  function MyClass( value ) {
    this.property = value;
  }
  MyClass.prototype.method = function () {
    alert( this.property );
  };
  return MyClass;
})();

One can just specify a class.

class MyClass {
  property : string;

  constructor( value : string ) {
    this.property = value;
  }

  method () {
    alert( this.property );
  }
}

This seems like a small difference, and honestly it doesn’t matter very much which method you use, but notice that property is specified to be of type string. If I were to construct an instance of MyClass and attempt to pass an integer constant, a compiler error would be thrown, indicating to me that MyClass requires a string instead. This does not affect the final Javascript, but significantly reduces the chance of making a mistake while writing code, and makes it much easier to keep your thoughts straight when coming back to a project after a few days.

Web-Components!
When trying to decide on how to represent asset metadata, I eventually drafted a variant on XML which would allow simple asset and object hierarchies to be defined in “scenes” that could be loaded into the engine as needed. It only took a few seconds before I realized what I had just described was essentially HTML. From here I looked into the concept of “Web Components”, a set of prototype systems that allow for more interesting UI and DOM interactions in modern web browsers. One of the shiny new features proposed is custom HTML elements, which allow developers to define their own HTML tags and associated Javascript handlers. With Google Chrome supporting these fun new features, I quickly took advantage.
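
Registering a custom tag boils down to a single define call. The sketch below uses today’s customElements API and a made-up element class for illustration; the actual tsGL registration code may differ.

// Minimal sketch of a custom element like <tsgl-entity>; the real tsGL
// handlers parse child tags and build engine objects from them.
class TsglEntityElement extends HTMLElement {
  connectedCallback(): void {
    // Called when the tag is attached to the document.
    console.log(`tsgl-entity added: ${this.id}`);
  }
}

window.customElements.define("tsgl-entity", TsglEntityElement);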

Now, tsGL scenes can be defined directly in the HTML document. Asset tags can be inserted to tell the engine where to find certain textures, models, and shaders. Here, we initialize a shader called “my-shader”, load an OBJ file as a mesh, and construct a material referencing a texture, and a uniform “shininess” property.

<tsgl-shader id="my-shader" vert-src="/shaders/test.vert" frag-src="/shaders/test.frag"></tsgl-shader>

<tsgl-mesh id="my-mesh" src="/models/dragon.obj"></tsgl-mesh>

<tsgl-material id="my-material" shader="my-shader">
  <tsgl-texture name="uMainTex" src="/textures/marble.png"></tsgl-texture>
  <tsgl-property name="uShininess" value="48"></tsgl-property>
</tsgl-material>

We can also specify our objects in the scene this way! Here, we construct a scene with an instance of the “renderer” system, a camera, and a renderable entity!

<tsgl-scene id="scene1">
  <tsgl-system type="renderer"></tsgl-system>

  <tsgl-entity>
    <tsgl-component type="camera"></tsgl-component>
    <tsgl-component type="transform" x="0" y="0" z="1"></tsgl-component>
  </tsgl-entity>

  <tsgl-entity id="dragon">
    <tsgl-component type="transform" x="0" y="0" z="0"></tsgl-component>
    <tsgl-component type="renderable" mesh="mesh_dragon" material="mat_dragon"></tsgl-component>
  </tsgl-entity>
</tsgl-scene>

Entities can also be fetched directly from the document, and manipulated via Javascript! Using document.getElementById(), it is possible to obtain a reference to an entity defined this way, and update its components! While my code is still far from production-ready, I quite like this method! New scenes can be loaded asynchronously via Ajax, generated from web-servers on the fly, or just inserted into an HTML document as-is!

Future Goals
I wanted tsGL to be a platform on which to experiment with new web technologies and rendering concepts, so I built it to be as flexible as possible. The engine is broken into a number of discrete parts which operate independently, allowing for the addition of cool new features like rigidbody physics, a scripting interface, or whatever else I want in the future! At the moment, the project is quite trivial, but I’m hoping to expand it, test it, and optimize it in the near future.

At the moment, things are a little rough around the edges. Assets are loaded asynchronously, and the main context just sits and complains until they appear. Rendering and updating operate on different intervals, so the display buffer tears like tissue paper. OpenGL is forced to make FAR more context switches than necessary, and my file parsers don’t cover the full format spec, but all in all, I’m quite proud of what I’ve managed to crank out in only ten or so hours of work!

If you’d like to check out tsGL for yourself, you can download it from my GitHub page!

Heatwave

A few years back I worked on a Unity engine game for a school project, called “Distortion”. In this game, the player has a pair of sci-fi magic gloves that allow him or her to bend space. I ended up writing some really neat visual effects for the game, but it never really went anywhere. This afternoon I found a question from a fellow Unity developer asking how to make “heat ripple” effects for a jet engine, and I decided to clean up the visual effects and package them into a neat project so that others could use them too!

And so, Heatwave was born.

heatwave_flame_demo_gif

Heatwave is a post-processing effect for the Unity game engine that takes advantage of multi-camera rendering to make cool distortion effects easy! It does this by rendering a full-screen normal map of the scene using a specialized shader. This shader will render particle effects, UI elements, or overlay graphics together into a single map that is then used to calculate refractions!

heatwave_normalBuffer

The main render target is blitted to a fullscreen quad and distorted by offsetting the UV coordinates during the copy, based on the refraction vector calculated from the normal map, resulting in a nice realtime pseudo-refraction!
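
Conceptually, the per-pixel work is just an offset of the sampling coordinate by the decoded normal. Here’s a sketch of the math in TypeScript (not the actual Unity shader):

// Sketch of the screen-space distortion lookup. The normal is stored in a
// texture as 0..1 values, so it's unpacked to -1..1 before use.
function distortedUV(uv: [number, number],
                     packedNormal: [number, number],
                     strength: number): [number, number] {
  const nx = packedNormal[0] * 2 - 1;
  const ny = packedNormal[1] * 2 - 1;
  return [uv[0] + nx * strength, uv[1] + ny * strength];
}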

There are a few issues with this method, mainly that it doesn’t calculate “true” refractions. The effect is meant to look nice rather than to make accurate calculations, so cool effects like refracting light around corners and computing caustics aren’t possible. The advantage, however, is that the effect operates in screen space: the time required to render distortion is constant, and the cost of adding additional distortion sources is near zero, making it perfect for games and other situations where a large number of sources will be messing with light!

I’ve made a small asset-package so other Unity developers can download the sources and use them!

You can find the project on Github here!

More OpenGL Work!

I was playing more with OpenGL, and decided that I could re-write my existing 3D back-end to be far more flexible! After looking over asset libraries for quite some time, I decided it would be a good idea to incorporate a flexible, reliable framework for reading asset data. I eventually settled on Assimp, a cross-platform library for loading dozens of different 3D asset, animation, and skeleton formats, along with some texture formats! It was admittedly a bit of a pain to work through, but once compiled, it was easily incorporated. Pretty soon, I had my system loading MD5 models (the skeletal model format used by id Tech 4, the Doom 3 engine).

After I managed to get models loading, my next goal was animation. Skeletal animation was something I had never tried before, and frankly, I was a little afraid of it. It seemed like a daunting task, especially considering the seemingly infinite number of possible skeletons and the numerous model formats that could be loaded. Some formats store their joint orientations as quaternions, so I ended up having to write a quaternion-to-matrix conversion on the back end in order to support them. Eventually, I got simple .md5anim files loading and playing (sort of). After hours of struggle and frustration, I finally figured out it was a simple problem with the multiplication order of my matrices. In the end, I got these models playing arbitrary animations successfully!
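
The quaternion-to-matrix conversion itself is standard; for a unit quaternion it looks like this (written here in TypeScript for clarity):

// Convert a unit quaternion (x, y, z, w) to a 3x3 rotation matrix,
// returned in row-major order.
function quaternionToMatrix(x: number, y: number, z: number, w: number): number[] {
  return [
    1 - 2 * (y * y + z * z), 2 * (x * y - z * w),     2 * (x * z + y * w),
    2 * (x * y + z * w),     1 - 2 * (x * x + z * z), 2 * (y * z - x * w),
    2 * (x * z - y * w),     2 * (y * z + x * w),     1 - 2 * (x * x + y * y),
  ];
}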

This system supports almost any effect, shader, or camera setup imaginable, and is flexible enough to implement a number of simple effects incredibly easily. So far, it does not support more advanced lighting features, such as shadows, but they can be added in quite simply.

Procedural Terrain in Unity

I was disappointed when I found out that the Unity terrain engine did not work properly on mobile devices, so I experimented with a number of different custom solutions. Over time, this project evolved into a fractal terrain generator, using simple noise algorithms to produce a randomly generated landscape. Next, I wrote custom shaders to blend between a stony texture and a grassy texture depending on the slope of the terrain. This worked well, but lacked the depth I felt it needed. The final step was implementing a custom shell model to represent grass: the terrain is rendered multiple times with varying vertex offsets in a multi-pass shader, with time- and location-based fluctuations to emulate wind, and a grass height based on slope and some user-specified parameters.
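
The slope test is just a comparison against the world “up” direction; something along these lines (a sketch of the idea, not the shader itself):

// Blend factor between stone (steep slopes) and grass (flat ground), based on
// how closely the surface normal aligns with world-up. `normalY` is the Y
// component of the normalized surface normal; the thresholds are examples.
function grassWeight(normalY: number, minSlope = 0.6, maxSlope = 0.9): number {
  // 0 on steep cliffs, 1 on flat ground, smooth in between.
  const t = (normalY - minSlope) / (maxSlope - minSlope);
  return Math.min(1, Math.max(0, t));
}

// finalColor = lerp(stoneColor, grassColor, grassWeight(normal.y))
// Grass shell height can be scaled by the same weight so steep faces stay bare.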

The system works fairly well, and allows for a simple procedural terrain that can be dropped into any scene.

You can experiment with this script by running the Unity Player here

Untitled 3D Library

In 2009 I began experimenting with OpenGL. I found it cumbersome and unwieldy, but incredibly versatile and powerful as a rendering engine. As a result, I wrote a simple wrapper library in order to streamline the OpenGL programming process, creating a series of wrapper classes and internal formats to dramatically simplify project workflow and make the entire OpenGL rendering framework much easier to use.

This unnamed library supports a number of distinct features, such as near-instantaneous mesh file loading, custom texture import functions for a number of formats, support for GLSL shaders, per-pixel lighting with a large number of dynamic lights, and multi-buffered post-processing effects.

A brief video demonstrating the library.