Building a Mobile Environment for Unity

Environments are incredibly important in games. Whether it’s a photo-realistic depiction of your favorite city or an abstract void of shapes and color, environments help set the mood of a game. Lighting, color, and ambient sounds are all instrumental in the creation of an immersive and convincing world. They can be used to subtly guide the players to their objective (ever noticed how the unlocked door is almost always illuminated, while locked doors are in shadow?), or even provide contextual clues about the history of a world. I’ve always been fascinated with environments and, while I wouldn’t exactly consider myself an environment artist, I’ve spent some time working on quite a few.

Mobile games are a challenge. Smartphones keep getting faster, but as processor speeds rise, so too do the expectations of the player. Mobile environments now need to look fleshed out and detailed, while still playing at a decent frame-rate. You need to fake the things you can’t render, and you need to design around the things you can’t fake.

So! With that said, let’s take a look at a Unity environment.

This runs pretty well on mobile! It's capped at 60 frames per second on my iPhone 5s (the video frame-rate is lower due to the capture), and runs even faster on newer models. So, what does our scene look like? How can we take advantage of Unity 5's built-in optimizations? Let's start with the basics.

Geometry:


This level consists of a few meshes, instanced dozens of times. I initially planned the environment to work as a "kit": a single set containing a number of smaller meshes to be used in different ways. All of the meshes in an individual kit share the same visual theme so that they can be used interchangeably. This scene, for example, is built from a single kit called env_kit_factory. One of the advantages of this approach is total modularity. Building new environments can be done entirely in the editor, and incredibly quickly. This is not only faster than having your artists sculpt huge pieces of geometry, but it also lets you exploit the benefits of Unity's Prefab system: changing a material on the prefab automatically updates every instance in the scene, with no need to manually replace pieces throughout a large level.

[Image: kit_demo]

Every asset used in the “factory” environment.

The modularity of this geometry is useful for level construction as well. By building assets that fit together, maintain a consistent visual aesthetic, and don’t contain recognizable writing or symbols, assets can be combined in unique ways and often in configurations the artist never intended. Here’s an example. This piece of floor trim can be used as a supporting column, a windowsill, a loading dock, or whatever else you can come up with, and it looks reasonably decent.

[Image: asset reuse.gif]

This wouldn't work quite so well if the trim were modeled as part of the wall. For one thing, the wall wouldn't tile vertically; far more wall varieties would be needed to break up the repetition; and the pieces would generally be harder to work with. Let small detail objects and decals provide the unique markers that level designers can place anywhere, instead of trying to build a hundred different versions of the same brick wall.

I used decal meshes extensively. These are essentially “stickers” that can be placed in your scene to break up the monotony of a tiled surface, and provide much more specific details than you may want to build into a modular set-piece.

[Image: leaky pipes.png]

Leaky pipe decals used to disguise the seams between the pipes and the wall.

Here, we can see a set of copper pipes, and apparent water damage down the wall where they meet. This is achieved entirely with decals, and required little to no extra work, but makes the environment feel a bit more cohesive. These decals also introduce visually unique patterns that help draw the eye away from the otherwise repetitious brick pattern.

[Image: crushed boxes.png]

Decals can be used to hint at purpose and story as well as provide interesting visuals. Here we can see more water damage, and this time not just from leaking pipes. These cardboard boxes look haphazard and temporary, but the damp paper and thick dust allude to years of neglect. Subtleties like this can really make an environment feel more complete, and with careful thought can be implemented without too much additional work on the part of the artist.

Textures:


I took great care when designing the factory kit to reuse textures as much as I could. For one thing, it's an interesting challenge, but more importantly, it allows us to take advantage of Unity's built-in optimizations. All textures are 1024 x 1024, and function as large atlases, each covering a significant portion of the environment.

[Image: textures.png]

All textures used in the scene (not including secondary maps)

Here we can see that all walls, floors, bits of concrete and metals share the same texture and material setup. The advantage is twofold. First, it helps maintain a consistent aesthetic. Every brick wall in the entire scene will be colored the same, and minor changes to that texture will be carried over to the rest of the kit. This is much easier than trying to keep a dozen different brick textures in sync, and if done well can still look great.
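One practical note when touching these materials from script: use Renderer.sharedMaterial rather than Renderer.material if you want a change to apply kit-wide. Here's a minimal sketch (the class name and color value are just for illustration):

using UnityEngine;

// Hypothetical example: tint every piece of the factory kit at once by
// editing the material asset they all share.
public class TintKitMaterial : MonoBehaviour
{
    void Start()
    {
        Renderer rend = GetComponent<Renderer>();

        // rend.material would silently clone the material for this renderer,
        // giving it a unique copy (and breaking batching with the rest of the kit).
        // rend.sharedMaterial edits the shared asset, so every object that
        // references it picks up the change.
        rend.sharedMaterial.color = new Color(0.9f, 0.85f, 0.8f);
    }
}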

The second advantage lies in rendering performance. Modern graphics hardware is incredibly good at drawing things. Massive parallelism is designed directly into GPUs, and it works extremely well for vertex and fragment processing. What graphics cards aren't good at is preparing to draw things. Let's look at how a typical model might be rendered.

  1. LOAD – ModelView matrix to local memory
  2. LOAD – Projection matrix to local memory
  3. LOAD – Albedo texture to local memory
  4. LOAD – Vertex attributes to local memory
  5. USE – Albedo texture
  6. USE – Vertex attributes
  7. RUN – Vertex shader
  8. RUN – Fragment shader
  9. RASTERIZE

Wow, even for a high-level overview that's surprisingly complex! Every time you send data to the graphics card, that data has to be copied over from main memory to video memory. This is an extremely slow operation compared to actually processing fragments and writing them to the framebuffer! Luckily, graphics APIs like OpenGL and DirectX do not reset the state of the hardware whenever drawing is finished. If I tell the API to "use texture 5", it will (or should, at least) keep using texture 5 until I tell it to stop. What this means is that we can organize our drawing operations in such a way that we minimize the number of times we need to copy data back and forth. If we're drawing 100 objects that all use texture 5, we can set texture 5 once and draw all 100 objects, instead of naively and redundantly setting it 99 additional times.

[Image: scene_rendering.gif]

A breakdown of the batches used when rendering the scene.

The Unity game engine is actually pretty good at this! Objects that share textures and materials are grouped together into batches to minimize state changes, which in some cases can dramatically improve performance. There are several hundred objects in this scene, but only around 30 GPU state changes. Stepping through the Unity Frame Debugger, we can see that enormous portions of the scene are rendered as one large chunk, which keeps the state switching to a minimum and allows the graphics processor to do its thing with minimal interruption!
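The biggest win here is static batching: geometry that never moves and shares a material can be combined ahead of time. Marking objects as Static in the inspector handles this at build time; if kit pieces are spawned at runtime, the same thing can be done from script. A minimal sketch, assuming the pieces are parented under one root and share materials:

using UnityEngine;

// Hypothetical helper: combine all of the non-moving kit geometry under this
// object into static batches when the level loads. Objects must share
// materials to end up in the same batch, and can no longer move afterwards.
public class BatchKitGeometry : MonoBehaviour
{
    void Start()
    {
        // Everything parented under this root is grouped into as few
        // draw batches as possible.
        StaticBatchingUtility.Combine(gameObject);
    }
}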

Lighting:


Lighting is extremely important to convey the look and feel you’re trying to get across in your scene. Unfortunately, it is also one of the most computationally expensive aspects of rendering a scene.

To dust off one of my favorite idioms about optimization, “The fastest code is code that never runs at all”. We’d all love a thousand dynamic lights fluttering around our scene, but we’re restricted in what we can do, especially on a mobile device. To circumvent the performance issues associated with real-time lighting, all lighting in the scene is either “baked or faked”.

A common technique, as old as realtime rendering itself, is the lightmap. If the lighting in a scene never changes, then there's no reason to be performing extremely expensive lighting calculations dozens of times a second! That rock is sitting under a lamp, and it isn't going to get any less bright as long as it stays there and the lamp remains on. This is the basic idea behind lightmapping: we can calculate the lighting on every surface in our scene once, before the program is even running, and then just reuse the results of those calculations. We "bake" the lighting into a texture map, and pass it along with all of the others when we render our scene.
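In Unity, that mostly means marking the environment geometry as static and baking from the Lighting window. If you'd rather trigger the bake from an editor script (handy when rebuilding many scenes), a rough, editor-only sketch might look like this (the menu path is made up):

using UnityEditor;
using UnityEngine;

// Editor-only utility (place in an Editor folder): bake the lightmaps for
// the currently open scene, just like pressing "Build" in the Lighting window.
public static class BakeSceneLighting
{
    [MenuItem("Tools/Bake Scene Lighting")]
    static void Bake()
    {
        // Starts an asynchronous lightmap bake.
        Lightmapping.BakeAsync();
    }
}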

[Image: lightmapping.png]

The lightmaps used in the factory scene.

Lightmapping has its disadvantages. The resolution of the baked lighting often isn't as good as a realtime solution, since it operates per-texel rather than per-pixel, and more complex effects like specular highlights and reflections are tricky if not impossible. Still, that's a small price to pay for the performance gain of baked lighting.

I wanted the scene to feel stuffy and old, so I decided that the air in the room should appear to be filled with dust. Unfortunately, this means that light needs to scatter convincingly, and subtly. True volumetric lighting has only recently become feasible in a realtime context, but it is still far from simple on a mobile device! To get around this, I faked the symptoms with some of the oldest tricks in the book.

[Image: fog.gif]

Comparison of the scene with fog enabled and disabled.

First, I applied a "fog" effect. This technique, commonly seen in old N64 and PS1 games, is often used to disguise the far clip plane of the camera and to add a greater sense of depth by fading objects to a solid color as they approach a threshold distance. I liked the look of this effect when applied to the scene, as it makes the air feel thicker than normal and gives it a hazy feel.
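Unity's built-in fog can be enabled in the Lighting settings, or from script through RenderSettings. A minimal sketch with arbitrary values:

using UnityEngine;

// Hypothetical setup script: enable simple linear distance fog to give the
// air a thick, hazy feel. The color and distances here are made up.
public class EnableHazyFog : MonoBehaviour
{
    void Start()
    {
        RenderSettings.fog = true;
        RenderSettings.fogMode = FogMode.Linear;          // cheap and mobile-friendly
        RenderSettings.fogColor = new Color(0.6f, 0.55f, 0.5f);
        RenderSettings.fogStartDistance = 5f;             // fully clear up close
        RenderSettings.fogEndDistance = 40f;              // fully fogged far away
    }
}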

[Image: light shafts.png]

Next, I built fake "light shafts", a common technique used to depict light diffusing through dust or smoke. These may look fancy, but each one is really just a mesh with a custom shader applied.

[Image: lightshaft_mesh.png]

By scrolling the color of a texture slowly on one axis, while keeping its alpha channel fixed, it’s possible to make the shafts of light appear to waver slightly and gently shift between multiple shapes, exactly as if something in the air were slowly moving past the light source! This effect essentially boils down to a glorified particle effect, but it is quite convincing when used in conjunction with fog!
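The scrolling itself can be driven by a tiny script that pans the material's texture offset. This is only a rough sketch, and it assumes the custom shader keeps the alpha mask fixed while the color texture pans:

using UnityEngine;

// Hypothetical light shaft animator: slowly pans the color texture on one
// axis so the shaft appears to waver and shift between shapes.
public class ScrollLightshaft : MonoBehaviour
{
    public float scrollSpeed = 0.02f;   // UV units per second

    private Material material;

    void Start()
    {
        // Instance the material so each shaft can scroll independently.
        material = GetComponent<Renderer>().material;
    }

    void Update()
    {
        float offset = Time.time * scrollSpeed;
        material.SetTextureOffset("_MainTex", new Vector2(offset, 0f));
    }
}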

Wrapping Up:


Game environments are extremely difficult, and I’ve still got a lot to learn, but I hope some of these tips can help others when designing scenes! Remember…
  • Reuse and repurpose assets. A little bit of thought can go a long way, so plan out your assets before getting started, and you’ll find them much easier to work with later on.
  • Build environments modularly. By assembling assets into kits, not only will your level designers thank you, but building new environments becomes trivially easy.
  • Atlas textures. On mobile devices, texture memory is limited, and GPU state changes can be extremely expensive. Try to consolidate your textures as much as possible to reduce overhead.
  • Bake lightmaps. The performance gain is enormous, and with the right additions, you can make something very convincing!

Thanks for sticking with me for the duration of this article, and to everyone out there building fantastic Unity games, keep up the good work!

-Andrew


Heatwave

A few years back I worked on a Unity engine game for a school project, called “Distortion”. In this game, the player has a pair of sci-fi magic gloves that allow him or her to bend space. I ended up writing some really neat visual effects for the game, but it never really went anywhere. This afternoon I found a question from a fellow Unity developer asking how to make “heat ripple” effects for a jet engine, and I decided to clean up the visual effects and package them into a neat project so that others could use them too!

And so, Heatwave was born.

[Image: heatwave_flame_demo_gif]

Heatwave is a post-processing effect for the Unity game engine that takes advantage of multi-camera rendering to make cool distortion effects easy! It does this by rendering a full-screen normal map of the scene’s distortion sources using a specialized shader. This shader renders particle effects, UI elements, or overlay graphics together into a single map that is then used to calculate refractions!

[Image: heatwave_normalBuffer]

The main render target is blitted to a fullscreen quad and distorted by offsetting the UV coordinates during the copy, based on the refraction vector calculated from the normal map, resulting in a nice realtime pseudo-refraction!
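The copy itself boils down to a standard image effect. Here is a stripped-down sketch of the idea (not the actual Heatwave source; the shader property name is an assumption):

using UnityEngine;

// Simplified image-effect sketch: copy the camera's output to the screen
// through a material whose shader offsets UVs using the normal buffer.
[RequireComponent(typeof(Camera))]
public class DistortionBlit : MonoBehaviour
{
    public Material distortionMaterial;   // shader that applies the UV offsets
    public RenderTexture normalBuffer;    // rendered by the secondary camera

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Make the normal map available to the distortion shader, then blit
        // the main render target to the screen through it.
        distortionMaterial.SetTexture("_NormalBuffer", normalBuffer);
        Graphics.Blit(source, destination, distortionMaterial);
    }
}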

There are a few issues with this method, mainly that it doesn’t calculate “true” refractions. The effect is meant to look nice rather than to be physically accurate, so things like refracting light around corners and computing caustics aren’t possible. The advantage, however, is that the effect operates in screen-space: the time required to render distortion is constant, and the cost of adding additional distortion sources is near zero, making it perfect for games and other situations where a large number of sources will be bending light!

I’ve made a small asset-package so other Unity developers can download the sources and use them!

You can find the project on Github here!

Object-Oriented Programming and Unity – Part 3

Recently, I’ve read complaints from a lot of Unity developers about how the engine doesn’t provide adequate support for object-oriented programming and design, with even very experienced developers running into this issue! In my opinion, this problem doesn’t really exist, and I feel as though most of the issues developers are facing stem from some fundamental misconceptions about how the engine is designed, so I’d like to take a moment to try and shed some light on the subject.

This is part 3 of a multi-part post. If you haven’t read part 2, I recommend you read it here.


Wrapping Up

– Unity is an incredibly flexible game engine.

Its scripting engine supports nearly all of the .NET framework, and can be made to do just about anything. Inheritance hierarchies, generic classes and functions, interfaces, reflection, they all work just fine. That being said, there are a large number of restrictions placed on us as developers. Many of the engine’s core components may not be modified, and nearly all of the classes and functions exposed in the Unity API are completely sealed. While this may seem annoying at first, it’s important to take a step back and think of what the engine is actually doing when you use those classes, and the damage that can be caused by overriding a method.

– It takes some getting used to.

Many of the restrictions of the scripting API require developers to organize code in ways that may not seem intuitive. We often get frustrated when our first set of ideas doesn’t work, and for many people, the immediately apparent solution that turns out not to be possible feels like it “should be possible”. Keep in mind that you’re not working in a vacuum. The engine itself has a huge job to do, and considerations must be made to work both in and around it. When working within the Unity engine, some design patterns behave very well, and others don’t. The problem isn’t that the engine is broken or incomplete, but that the team behind Unity decided on an architecture which may not mesh well with your code, depending on your design. Take a step back, and think of what you want to do as well as what the engine wants to do.

Quick Recap

  1. Don’t build hierarchies, build composite objects.
  2. Don’t extend types, encapsulate them.
  3. Don’t program in a vacuum, consider what the engine needs to do.
  4. If you find the need to subclass a GameObject or Component, consider alternative designs.

Thank you for sticking with me and reading this article! I hope I managed to shed some light on the inner-workings of the Unity engine’s basic architecture, and make things a bit more clear. If you have any questions, feel free to contact me!

– Andrew Gotow

Object-Oriented Programming and Unity – Part 2

Recently, I’ve read complaints from a lot of Unity developers about how the engine doesn’t provide adequate support for object-oriented programming and design, with even very experienced developers running into this issue! In my opinion, this problem doesn’t really exist, and I feel as though most of the issues developers are facing stem from some fundamental misconceptions about how the engine is designed, so I’d like to take a moment to try and shed some light on the subject.

This is part 2 of a multi-part post. If you haven’t read part 1, I recommend you read it here.


Inheritance ≠ Object Oriented Programming.

OOP is a programming paradigm designed to make the design and use of a system more modular and more intuitive to a developer. By grouping related data and behavior into objects, they can be treated as unified collections rather than sets of scattered elements, and can be added to and removed from an architecture in a generic, nonintrusive way.

One of the key concepts behind this is inheritance, allowing us to define “subclasses” of a class in order to extend its functionality. You can think of subclasses as a specific implementation of a more generic “parent” class, like how dogs and cats were both specific forms of animals in the previous inheritance example.

Inheritance is a large portion of traditional object-oriented programming, but the two are NOT synonymous. Object-oriented programming is merely a concept. The principles behind the Object-Oriented paradigm are equally valid with or without formal class inheritance, and can even be expressed in traditionally “non object-oriented” languages, such as C!

So why is Unity often criticized as being non-OO?

The Unity game engine maintains very tight control over its inheritance hierarchies. Developers are not allowed to create their own subclasses of many of the core components, and for good reason! Take “Colliders” for example. Colliders define the shape of an object for the physics system so that it can quickly and efficiently simulate physical interactions in the game world. Simulating physics is incredibly expensive, and as a result many shortcuts have been taken to ensure that your game runs as smoothly as possible. In order to minimize the workload, the physics system (in Unity’s case, PhysX by NVIDIA) has been optimized to only process collisions on a set number of primitive shapes. If the developer were to add a new, non-standard shape, PhysX would have no idea how to handle it. In order to prevent this, the kind folks at Unity have made Collider a sealed class, which can’t be extended.

Wait, then what can we modify?

Let’s look at the component hierarchy in Unity.

[Image: Unity component hierarchy]

Yep, that’s it. The only portion of the Unity component hierarchy you are allowed to modify is “MonoBehaviour”.

GameObjects contain a set of attached “Behaviours”, commonly referred to as Components (while this is confusing within the context of the class hierarchy, it makes more sense when considering the exposed portions of the ECS architecture). Each of these defines a set of data and functions required by the constructed entity, and they are operated on by Systems which are hidden from the developer. Each System is responsible for manipulating a small subset of behaviours; for instance, the physics System operates on Rigidbody and Collider components. With this in mind, how can developers create their own scripts and behaviors?

The Unity team had to come up with a solution that allowed all built-in components to be pre-compiled and manipulated without exposing any of the underlying architecture. Developers also couldn’t be allowed to create their own Systems, as they would need to make significant changes to the engine itself in order to incorporate their code into their application. Clearly, a generic System needed to be designed to allow runtime execution of unknown code. This is exactly what a MonoBehaviour does. MonoBehaviours are behaviours containing tiny Mono executables compiled while the editor is running. Much like the physics System, a MonoBehaviour System is managed by the engine, and is responsible for updating every MonoBehaviour in the game as it runs, periodically calling functions accessible to the scripting interface, such as “Start” and “Update”. When a developer writes a script, it’s compiled to a MonoBehaviour, and is then operated on just like any other component! By adding a new System and exposing a scripting interface, developers are now able to create nearly anything they want, without requiring any changes to the engine code, and still running with the efficiency of a compiled language. Brilliant! (Keep in mind that the actual implementation is most likely more complex than this; however, I feel that this cursory explanation is enough to effectively utilize the engine.)
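In practice, all of that machinery is invisible; a script is just a class inheriting from MonoBehaviour that implements whichever of those callbacks it needs. A minimal example:

using UnityEngine;

public class Spinner : MonoBehaviour
{
    public float degreesPerSecond = 90f;

    // Called once by the engine when the behaviour starts.
    void Start()
    {
        Debug.Log("Spinner initialized on " + gameObject.name);
    }

    // Called by the engine every frame while the behaviour is enabled.
    void Update()
    {
        transform.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f);
    }
}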

Well, that’s all well and good… but what if some of my behaviours need to inherit from others?

Inheritance hierarchies work just fine within the context of MonoBehaviours! If we really needed to, we could make our own components and have them inherit from one another, as long as the root class inherits from MonoBehaviour. This can be useful in some situations. For instance, if we had a set of scripts that depend on one another, we could provide all of the necessary functionality in a base class, and then override it for more specific purposes in a subclass. In this example, our MovementScript may depend on a control script in order to query input. We can subclass a generic control script to create more specialized inputs, or even simple AI, without changing our MovementScript.

[Image: MonoBehaviour inheritance diagram]
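A sketch of that arrangement might look like this (the class names are illustrative, and each MonoBehaviour would normally live in its own file):

using UnityEngine;

// Generic control script: exposes a movement direction that other components can query.
public class ControlScript : MonoBehaviour
{
    public virtual Vector3 GetMoveDirection()
    {
        return Vector3.zero;
    }
}

// Player-specific subclass: reads input from the keyboard or controller.
public class PlayerControlScript : ControlScript
{
    public override Vector3 GetMoveDirection()
    {
        return new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
    }
}

// Simple AI subclass: always moves toward a target transform.
public class AIControlScript : ControlScript
{
    public Transform target;

    public override Vector3 GetMoveDirection()
    {
        return (target.position - transform.position).normalized;
    }
}

// MovementScript only depends on the base class, so it works with either subclass.
public class MovementScript : MonoBehaviour
{
    public float speed = 3f;
    private ControlScript control;

    void Start()
    {
        control = GetComponent<ControlScript>();
    }

    void Update()
    {
        transform.position += control.GetMoveDirection() * speed * Time.deltaTime;
    }
}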

The more experienced among you may recognize that, for this problem, perhaps implementing an interface would provide a more elegant solution than subclassing our control script. Well, we can do that too!

public interface MyInterface {}
public class MyScript : MonoBehaviour, MyInterface {}

There’s nothing special about MonoBehaviours. They’re just a very clever implementation of existing programming techniques!

MonoBehaviours sound really cool, but I have data I don’t want attached to a GameObject!

Well, then don’t use a MonoBehaviour! MonoBehaviours exist to allow developers to attach their scripts to GameObjects as components, but not all of our code needs to inherit from it! If we need a class to represent some data, we can just define a class in a source file, like you would in any traditional development environment.

using UnityEngine;

public class MyData {
    const int kConstant = 10;

    private int foo = 5;
    public int bar = 10;

    public Vector3 fooBar = new Vector3( 1, 2, 3 );
}

Now that this class is defined, we can use it anywhere we want, including in other classes, and our MonoBehaviours!

using UnityEngine;

public class MyMonoBehaviour : MonoBehaviour {

    private MyData data;

    void Start () {
        data = new MyData();

        data.fooBar = new Vector3( -1, -2, -3 );
    }
}

Also keep in mind that the entirety of the .NET framework (version 2.0 at the time of this article) is accessible at any time. You can serialize your objects to JSON files, send them to a web-server, and forward the response through a socket if you need to. Just because Unity doesn’t implement some feature doesn’t mean it can’t be done within the engine.

 


This post demonstrates a few examples of how data can be handled outside of the MonoBehaviour system. This post is continued in Part 3, where we will recap a few points, and conclude this article.

PART 3 =>

Object-Oriented Programming and Unity – Part 1

Recently, I’ve read complaints from a lot of Unity developers about how the engine doesn’t provide adequate support for object-oriented programming and design, with even very experienced developers running into this issue! In my opinion, this problem doesn’t really exist, and I feel as though most of the issues developers are facing stem from some fundamental misconceptions about how the engine is designed, so I’d like to take a moment to try and shed some light on the subject.


Unity Engine Architecture, and Composition vs. Inheritance.

The Unity game engine represents the “physical” world of your program using an Entity Component System (ECS) architecture. Each “object”, be it a character, a weapon, or a piece of the environment, is represented by an entity (referred to as a GameObject in the Unity engine). These entities do nothing on their own, but act as containers into which many components are placed. Each component represents a functional unit and dictates a specific subset of the behavior of our object. Lastly, Systems act as higher-level controllers which operate on entities to update the state of their components. ECS has become increasingly popular in the game development world, as it provides some key advantages over more traditional architectures.

  1. Components are completely reusable, even across objects with dramatically different behaviors.
  2. Components provide near infinite extensibility, allowing new behaviors to be added to the game without touching any of the existing code.
  3. Components can be added and removed at run-time, allowing for the behavior of objects to change while the application is running, without any significant effort.

The entity component architecture embodies the principle of composition over inheritance, a design approach which favors assembling objects from reusable parts over deep hierarchies of inheritance. This is especially helpful when building a large application like a game, which requires many things to share large amounts of very similar code while still differing in sometimes hundreds of ways.

Let’s look at an example.

Imagine we have a client who wants us to write a game about animals! Let’s represent a dog, and a cat in our code. The immediately intuitive solution would be to make a superclass called “Animal”, to contain the commonalities of both cats and dogs.

[Image: Animal_inheritance_corrected]

That’s fantastic! Look, we just saved having to duplicate all of the code required to give our animal ears, legs, and a tail by using inheritance! This works really well until your client asks you to add a squid. Ok! Let’s add it to our animal class! Unfortunately, squids don’t have ears, or really much of a tail. They’ve also got 10 limbs, so our class hierarchy will have to change a bit. Let’s add another superclass, this time separating cats and dogs into their own group.

[Image: Animal_inheritance_2_corrected]

Ok, there we go. Now the things shared across all animals can be separated out, and our dog and cat can still share legs, a tail, and ears! Our client liked the squid so much, he wants us to put in other ocean animals too! He asked us to put in a fish! Well, both fish and squid swim and have fins, so it would make sense to put them together… but fish also have a tail, and right now, a tail is defined as a part of mammals!

[Image: Animal_inheritance_3_corrected]

Oh no! Suddenly, our hierarchy doesn’t look too good! While it makes the most sense to put animals into groups based on their defining characteristics, sometimes characteristics will be shared across multiple different subtrees, and we can’t inherit them from a parent!

Composition to the rescue!

Rather than defining our animals as a hierarchy of increasingly specific features, we can define them as individuals, composed of independent component parts.

[Image: Animal_composition]

Notice that we still don’t have to duplicate any of our code for shared features! Attributes like legs are still shared between dogs, cats, and squids, but we don’t have to worry about where fish fit into the picture! This also means that we can add any animal component we want, without even touching unrelated animals! If we wanted to add a “teeth” component, we could attach it to dogs, cats, and (some) fish to provide new functionality, and wouldn’t even need to open the file for squids!

This model also allows us to add components at run-time to change functionality. Let’s say we have a robot. This robot normally attacks the player on sight, and does generally evil things. It’s pretty complicated too! The robot can move around the game, use a weapon, open doors, and more! What happens when our player hacks this robot character to be a good-guy? We could make a second type of robot, which can do everything the evil robot can, OR we can remove the “robot_AI_evil” component, and replace it with “robot_AI_good”. With the AI component replaced, our robot can help us and still do everything the evil robot could. If it’s well designed, it could even display more complex behavior, and use abilities it typically would use against the player to help defend against other robots!
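In Unity terms, that swap is just a couple of calls at run-time. Here's a hedged sketch (the component names are hypothetical, and the stubs only exist so the example compiles):

using UnityEngine;

// Hypothetical example of swapping behaviour at run-time: when the player
// hacks the robot, remove its evil AI component and attach a good one.
public class RobotHacking : MonoBehaviour
{
    public void OnHacked()
    {
        // Destroy the old AI component; everything else on the robot
        // (movement, weapons, door interaction) is left untouched.
        Destroy(GetComponent<RobotAIEvil>());

        // Attach the friendly AI, which starts running on the same object.
        gameObject.AddComponent<RobotAIGood>();
    }
}

// Stub components so the sketch compiles; the real behaviours would contain
// the actual AI logic.
public class RobotAIEvil : MonoBehaviour {}
public class RobotAIGood : MonoBehaviour {}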

Know that inheritance isn’t a bad thing; in fact, we use it to define our body part components. But understand that in some situations, other paradigms may be more useful!


This post provides a cursory look at the Unity engine’s general world representation, as well as some of the potential benefits of the “composition over inheritance” principle.
This post is continued in Part 2, where we will look at MonoBehaviours, and their role in the Unity engine.

PART 2 =>

Unfinished Projects

Whenever I have an idea, I like to run with it. Most of them never really get off the ground, but I’ve learned a lot in the process. I decided to make a quick video of some of these projects and what they’ve taught me.

 

There are no bad ideas, and if seen through, they can teach you things you’d never expect! Often the best way to learn is through experience!

Pulse Racer

I’ve released my iOS App, Pulse Racer on the app store!

Pulse Racer is a rhythm-based score-attack game that requires the player to travel along a generated course and collect “notes” synchronized to the music. Players are rated based on their ability to string together notes, and on their final percentage at the end of each course.

[Image: pulse_screenshot01]

[Image: pulse_screenshot04]

I’m really quite happy with how the project turned out and, for me, having a polished app on the store is a huge accomplishment.

Technically this app was a big endeavor as well. I’d had the idea floating around for a year or so, but was never able to properly execute it until now. Building courses based on music is more difficult than it seems at first glance. Perhaps the most challenging part was determining the positions of notes on the course. I used a spectral-flux based onset detection algorithm, which ran a Fourier transform over audio samples, converting them to the frequency domain. Next, I calculated the net difference in each spectral band between samples, and compared it against a rolling average to determine the change in acoustic energy for each sample window. From there, it was simply a matter of finding local maxima (locations where the energy peaked), and I had a reasonably reliable system. Other aspects of the course are generated from the intermediate steps. For instance, the large-scale contour of the course is based on the acoustic energy graph, the radius of the cylinder is based on the spectral flux over time, and so forth. Using these techniques together produced a fun and challenging course for almost any song input into the game. It also allowed tracks to be pre-processed, so that no complex calculations were done during the game, allowing for the absolute maximum frame rate.
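As a rough illustration of the onset-detection step (not the shipping Pulse Racer code), here's a simplified sketch that assumes the FFT magnitude spectra have already been computed for each analysis window:

using System;
using System.Collections.Generic;

// Simplified spectral-flux onset detection. 'spectra' holds one FFT magnitude
// spectrum per analysis window; the return value is the list of window
// indices where onsets (note positions) were detected.
public static class OnsetDetector
{
    public static List<int> FindOnsets(float[][] spectra, int averageWindow = 10, float sensitivity = 1.5f)
    {
        int n = spectra.Length;
        var flux = new float[n];

        // Spectral flux: sum of positive differences between consecutive spectra.
        for (int i = 1; i < n; i++)
        {
            float sum = 0f;
            for (int band = 0; band < spectra[i].Length; band++)
                sum += Math.Max(0f, spectra[i][band] - spectra[i - 1][band]);
            flux[i] = sum;
        }

        var onsets = new List<int>();

        // Compare each window's flux against a rolling local average, and keep
        // local maxima that rise sufficiently far above it.
        for (int i = 1; i < n - 1; i++)
        {
            int start = Math.Max(0, i - averageWindow);
            int end = Math.Min(n - 1, i + averageWindow);

            float mean = 0f;
            for (int j = start; j <= end; j++)
                mean += flux[j];
            mean /= (end - start + 1);

            bool isPeak = flux[i] > flux[i - 1] && flux[i] >= flux[i + 1];
            if (isPeak && flux[i] > mean * sensitivity)
                onsets.Add(i);
        }

        return onsets;
    }
}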

https://vimeo.com/78839283

The music I used was made by an awesome artist I found online, F-777. He was kind enough to let me use a few of his songs as included sample tracks in the game, so that users did not need to generate their own to play. My experience working with him was a blast, and I’d highly recommend him to other developers looking for some good electro music.

Pulse Racer was a blast to work on, and was a wonderful learning experience for me. It is currently available on the App Store for $1.99 should you wish to play.

Pulse Racer Website

App Store Link