GPU Isosurface Polygonalization

Isosurfaces are extremely useful when it comes to data visualization. From medical imaging to fluid flow analysis, they are an excellent tool for understanding complex volumetric data. Games have also adopted some of these techniques for their own purposes. From the more rigid implementation in the ubiquitous Minecraft to the gels in Portal 2, these techniques serve the same basic purpose.

I wanted to try my hand at a general-purpose implementation, but before we dive into things, we must first answer a few basic questions.

What is an isosurface?

An isosurface can be thought of as the set of points in 3D space where a continuous function produces a particular constant output. If you’re visualizing an electromagnetic field, for example, you might generate an isosurface for a given potential, so you can easily determine its overall shape. This technique can be applied to other arbitrary values as well. Given CT scan data, a radiologist could construct an isosurface at the density of a specific type of tissue, extracting a 3D representation of bones or organs to view them separately, rather than having to manipulate a less intuitive stack of images.
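
To make that a little more concrete, here’s a tiny, purely illustrative C# sketch (not code from this project): the field f(x, y, z) = x² + y² + z² produces spherical isosurfaces, since every point at a fixed distance from the origin yields the same value.

using System;

// A minimal, illustrative scalar field: f(x, y, z) = x^2 + y^2 + z^2.
// The isosurface f = r^2 is a sphere of radius r centered at the origin.
public static class IsosurfaceExample {

    public static double Field(double x, double y, double z) {
        return x * x + y * y + z * z;
    }

    // A point lies (approximately) on the isosurface when the field value
    // matches the chosen iso-value within some small tolerance.
    public static bool OnIsosurface(double x, double y, double z, double isoValue) {
        return Math.Abs(Field(x, y, z) - isoValue) < 1e-3;
    }
}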

What will we use this system for?

I don’t work in the medical field, nor would I trust the accuracy of my implementation when it comes to making a diagnosis. I work in entertainment and computer graphics, and as you would imagine, the requirements are quite different. Digital artists can already craft far better visuals than any procedural technique yet known; the real challenge is dynamic data. Physically simulated fluids, player-modifiable terrain, and mechanics like these present a significant challenge for traditional artists! What we really need is a generalized system for extracting and rendering isosurfaces in real time to fill in the gaps.

What are the requirements for such a system?

Given our previous use case, we can derive a few basic requirements. In no particular order…

  1. The system must be intuitive. Designers have other things to do besides tweaking simulation volumes and fiddling with configurations.
  2. The system must be flexible. If someone suggests a new mechanic which relies heavily on procedural geometry, it should be easy to get up and running.
  3. The system must be compatible. The latest experimental extensions are fun, but if you want to release something that anyone can enjoy, it needs to run on 5-year-old hardware.
  4. The system must be fast. At 60 fps, you only have about 16 ms to render everything in your game. We can’t spend 10 ms of that drawing a single special effect.

Getting Started!

Let’s look at requirement no. 4 first. Like many problems in computing, surface polygonalization can be broken down into repeated instances of much smaller problems. At the end of the day, the desired output is a series of interconnected polygons which appear to make up a complex surface. If each of these component polygons is accounted for separately, we can dramatically reduce the scope of the problem. Instead of generating a polygonal surface, we are now generating a single polygon, which is a much less daunting task. As with any discretization process, it is necessary to define a regular sample interval at which our continuous function will be evaluated. In the simplest 3D case, this will take the form of a regular grid of cells, and each of these cells will form a single polygon. Suddenly, this polygonalization process becomes massively parallel. With this new outlook, the problem becomes a perfect fit for standard graphics hardware!

For compatibility, I chose to implement this functionality in the Geometry Shader stage of the rendering pipeline, as it allows for the creation of arbitrary geometry given some basic input data. A Compute Shader would almost certainly be a better option in terms of performance and maintainability, but my primary development system runs OS X, which presents a number of challenges when it comes to the use of Compute Shaders. I intend to update this project in the future, once Compute Shaders become more widely supported.

If the field is evaluated at a number of regular points and a grid is drawn between them, we can construct a set of hypothetical cubes, each with a single sample at each of its 8 vertices. By comparing the values at each vertex, it is trivial to determine the plane of intersection between the theoretical isosurface and the cubic sample volume. If properly evaluated, the local solutions for each sample volume will implicitly form integral parts of the global surface, without requiring any global information.

This is the basic theory behind the ubiquitous Marching Cubes algorithm, first published in 1987 and still commonly used today. While it is certainly battle-tested, there are a number of artifacts in the output geometry that can make surfaces appear rough. The generated triangles are also often non-uniform and narrow, leading to additional artifacts later in the rendering process. Perhaps a more pressing issue is the sheer number of cases to be evaluated. For every sample cell, there are 256 possible intersection configurations. The fantastic implementation by Paul Bourke wisely recommends pre-computing these cases in a look-up table. While this works well in traditional implementations, it crumbles under the parallel architecture of modern GPUs. Graphics hardware tends to excel at executing large batches of identical instructions, but begins to falter as soon as complex conditional branching is involved and operations have to be evaluated individually. In my tests, I found that the look-up tables performed no better, and often worse, than explicit evaluation, as the compiler could not easily expand and unroll the program flow, and evaluation could not be easily batched. Bearing this in mind, we ideally need a method with as few logical branches as possible.
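
For reference, the per-cell classification itself is tiny; the 256 cases come from what you do with the result. Here’s a rough CPU-side sketch of that step (purely illustrative; the real thing happens in the shader):

// Builds an 8-bit case index from the field values at a cell's 8 corners.
// Each bit records whether the corresponding corner lies "inside" the
// surface, giving 2^8 = 256 possible configurations to triangulate.
public static class MarchingCubesClassifier {

    public static int ComputeCaseIndex(float[] cornerValues, float isoValue) {
        int caseIndex = 0;
        for (int i = 0; i < 8; i++) {
            if (cornerValues[i] < isoValue)
                caseIndex |= 1 << i;   // set bit i when corner i is inside
        }
        return caseIndex;              // 0..255
    }
}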

Marching Tetrahedra is a variant of the Marching Cubes algorithm which divides each cube into 5 (or 6, for a slightly different topology) tetrahedra. By limiting our sample volume to 4 vertices instead of 8, the number of possible cases drops to 16. In my tests, I got a 16x performance improvement using this technique (though realized savings are heavily dependent on the hardware used), confirming our suspicions. Unfortunately, Marching Tetrahedra can produce some strange surface features and has a number of artifacts of its own, especially with dynamic sampling grids.

Because of this, I ended up settling on naive surface nets, a simple dual method which generates geometry spanning multiple voxel sample volumes. An excellent discussion of the differences between these three meshing algorithms can be found here. Perhaps my favorite advantage of this method is the relative uniformity of the output geometry. Surface nets tend to produce smooth surfaces composed of quads of relatively equal size. In addition, I find it easier to comprehend and follow than other meshing algorithms, as its use of look-up tables is limited and it has far fewer cases to consider.

Implementation Details

Isosurface_Sample.png

The sample grid is actually defined as a mesh, with a single disjoint vertex placed at each integral sample coordinate. These vertices aren’t actually drawn, but instead are used as input data to a series of shaders. Therefore, the shader can be considered to be executed “per-voxel”, with its only input being the coordinate of the voxel’s minimum bounding corner. One disadvantage commonly seen in similar systems is a fundamental restriction on surface resolution due to a uniform sample grid. In order to skirt around this limitation, meshing is actually performed in projected space, rather than world space, so each voxel is a truncated frustum similar to that of the camera, rather than a cube. This not only eliminates a few extra transformations in the shader code, but also provides LoD implicitly by ensuring each output triangle is of a fixed pixel size, regardless of its distance from the camera.
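
Building a point-cloud mesh like this is straightforward. Below is a Unity-flavored sketch of the idea (the API choice is illustrative, not necessarily what this project uses), laying the points out in a plain world-space grid rather than the projected-space grid described above:

using UnityEngine;

// Illustrative sketch: one disjoint vertex per voxel, drawn as points so the
// geometry shader receives a single "minimum corner" coordinate per voxel.
public static class SampleGridBuilder {

    public static Mesh Build(int resolution) {
        var vertices = new Vector3[resolution * resolution * resolution];
        var indices = new int[vertices.Length];

        int i = 0;
        for (int z = 0; z < resolution; z++)
        for (int y = 0; y < resolution; y++)
        for (int x = 0; x < resolution; x++) {
            vertices[i] = new Vector3(x, y, z); // minimum corner of voxel (x, y, z)
            indices[i] = i;
            i++;
        }

        var mesh = new Mesh();
        mesh.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32; // allow > 65k points
        mesh.vertices = vertices;
        mesh.SetIndices(indices, MeshTopology.Points, 0);            // disjoint points, not triangles
        return mesh;
    }
}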

Once the sample mesh was created, I needed a potential field to evaluate, so I used a simple additive density function. It provides a good amount of flexibility, while still being simple to comprehend and implement. Each new source of “charge” added to the field contributes additively to the overall potential. However, this quickly raises a concern! Each contributing charge must be evaluated at every sample location, meaning our shader must, in some way, iterate through all visible charges! As stated earlier, branching and loops which cannot be unrolled can cause serious performance hiccups on most GPUs!
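
To give a rough idea of what such a density function looks like, here’s a CPU-side sketch using a classic metaball-style falloff (the exact falloff and data layout are illustrative):

using System.Collections.Generic;
using UnityEngine;

// Illustrative additive potential field: every charge contributes to every
// sample point, which is exactly why the shader has to loop over all charges.
public struct Charge {
    public Vector3 position;
    public float strength;
}

public static class PotentialField {

    public static float Evaluate(Vector3 samplePoint, List<Charge> charges) {
        float potential = 0f;
        foreach (var charge in charges) {
            float sqrDist = (samplePoint - charge.position).sqrMagnitude;
            potential += charge.strength / (sqrDist + 1f); // soft falloff, avoids division by zero
        }
        return potential;
    }
}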

While I was at it, I also implemented a Vertex Pre-pass. Due to the nature of GPU parallelism, each voxel is evaluated in complete isolation. This has the unfortunate side effect of solving for each voxel’s vertex position up to 6 times (once for each neighboring voxel). The surface net algorithm places an interpolated vertex within each voxel, determined from the intersections of the surface and the sample volume. This interpolation can get expensive if repeated 6 times more often than necessary! To remedy this, I instead run a pre-pass that calculates the interpolated vertex position and stores it, as a normalized coordinate within the voxel, in the pixel color of another texture. When the geometry stage builds triangles, it can simply look up the normalized vertex positions from this table and emit them as offsets from the voxel’s minimum corner!
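
Conceptually, the vertex placement boils down to averaging the iso-crossings along the voxel’s edges. Here’s a CPU-side sketch of that calculation (illustrative only; the real version runs in a shader and writes its result into a texture, as described above):

using UnityEngine;

// Illustrative surface-net vertex placement for a single voxel.
// cornerValues: field samples at the 8 corners.
// cornerOffsets: each corner's normalized position within the voxel (0 or 1 per axis).
// edges: the 12 cube edges as pairs of corner indices.
public static class SurfaceNetVertex {

    public static Vector3 Compute(float[] cornerValues, Vector3[] cornerOffsets,
                                  int[,] edges, float isoValue) {
        Vector3 sum = Vector3.zero;
        int crossings = 0;

        for (int e = 0; e < edges.GetLength(0); e++) {
            float a = cornerValues[edges[e, 0]];
            float b = cornerValues[edges[e, 1]];
            if ((a < isoValue) == (b < isoValue))
                continue;                       // no surface crossing on this edge

            float t = (isoValue - a) / (b - a); // linear interpolation along the edge
            sum += Vector3.Lerp(cornerOffsets[edges[e, 0]], cornerOffsets[edges[e, 1]], t);
            crossings++;
        }

        // A voxel with no crossings produces no geometry; fall back to its center.
        return crossings > 0 ? sum / crossings : new Vector3(0.5f, 0.5f, 0.5f);
    }
}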

The geometry shader stage is then fairly straightforward, reading in vertex positions, checking the case of the input voxel, looking up the vertex positions of its neighbors, and bridging the gap with a triangle.

Was it worth it?

Short answer, no.

I am extremely proud of the work I’ve done, and the end result is quite cool, but it’s not a solution I would recommend in a production setting. The additional complexity far outweighs any potential performance benefit, and maintainability, while not terrible, takes a hit as well. In addition, the geometry shader approach doesn’t work nearly as well as I had hoped. Geometry shaders are notoriously cache-unfriendly, and my implementation is no exception. Combine this with the rather unintuitive nature of working with on-GPU procedural geometry in a full-scale project, and you’ve got yourself a recipe for very unhappy engineers.

I think the concept of on-GPU surface meshing is fascinating, and I’m eager to look into Compute Shader implementations, but as it stands, the geometry stage is not the way to go.

I’ve made the source available on my GitHub if you’d like to check it out!

Messing With Shaders – Realtime Procedural Foliage

 

ivy_close.png
The programmable rendering pipeline is perhaps one of the largest advances in the history of realtime computer graphics. Before its introduction, graphics libraries like OpenGL and DirectX were limited to the “fixed function pipeline”: a programmer would shove in geometric data, and the library would draw it however it saw fit. Developers had little to no control over the output of their application beyond a few “render mode” settings. This was fine for rendering relatively simple scenes, solid objects, and simplistic lighting, but as visual fidelity increased and hardware became more powerful, it quickly became necessary to allow for a more customizable rendering process.

The process of rendering a 3D object in the modern programmable pipeline is typically broken down into a number of steps. Data is copied into fast-access graphics memory, then transformed through a series of stages before the graphics hardware eventually rasterizes that data to the display. In its most basic form, there are two of these stages the developer can customize. The “Vertex Program” manipulates data on a per-vertex level, such as positions and texture coordinates, before handing the results on to the “Fragment Program”, which is responsible for determining the properties of a given fragment (like a pixel, but containing more than just color information). The addition of just these two stages opened the floodgates for interesting visual effects: approximating reflections for metallic objects, cel-shading effects for cartoon characters, and more! Since then, even more optional stages have been inserted into the pipeline for an even greater variety of effects.

I’ve spent a considerable amount of time experimenting with vertex and fragment programs in the past, but this week I decided to spend a few hours working with the other, less common stages, namely “Geometry Programs”. Geometry programs are a more recent innovation, and have only begun to see extensive use in the last decade or so. They essentially allow developers not only to modify vertex data as it’s received, but to construct entirely new vertices based on the input primitives (triangles, quads, etc.). As you can easily imagine, this presents incredible potential for new effects, and is something I personally would like to become more experienced with.

In four or five hours, I managed to write a relatively complex effect, and the rest of this post will detail, at a high level, what I did to achieve it.

ivy_distant.png

Procedurally generated geometry for ivy growing on a simple building.

This is my procedural Ivy shader. It is a relatively simple two-pass effect which will apply artist-configurable ivy to any surface. What sets this effect apart from those I’ve written in the past is that it actually constructs new geometry to add 3D leaves to the surface extremely efficiently.

One of the major technical issues when it comes to rendering things like foliage is that the level of geometric detail required to accurately represent leaves is quite high. While a digital environment artist could use a 3D modeling program to add in hundreds of individual leaves, this is not necessarily a good use of their time. Furthermore, it quickly becomes unmaintainable if anyone decides that the position, density, or style of foliage should change in the future. I don’t know about you, but I don’t want to be the one to have to tell a team of environment artists that all of the ivy in an entire game needs to be slightly different. In this situation, the key is to work smarter, not harder. While procedural art is often controversial in the game industry, I think most developers would agree that artist-directed procedural techniques are an invaluable tool.

Ivy_Shader_Steps.png
My foliage effect is composed of two separate rendering passes. First, a triplanar-mapped base texture is blended onto the object based on the desired density of the ivy. This helps to make the foliage feel much more dense, and helps to hide the seams where the leaves meet the base geometry.

Next, in the second rendering pass, the geometry program transforms every input triangle into a set of quads lying on that triangle with a uniform, pseudo-random distribution. First, it is necessary to determine the number of leaf quads to generate. In order to maintain a consistent density of leaf geometry, the surface area of the triangle is calculated quickly using the “half cross-product formula”, and is then multiplied by the desired number of leaves per square meter of surface area. Then, for each of these leaves, a random sample point on the triangle is picked and a triangle strip is emitted there. The sample point is found by sampling a noise function seeded with the world-space centroid of the triangle and the index of the leaf quad being generated. These noise values are then used to generate barycentric coordinates, which in turn are used to interpolate the position and normal of the triangle at that point, essentially returning a random world-space position and its corresponding normal vector.
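
Here’s a CPU-side sketch of that sampling logic (names are illustrative; the real work happens in the geometry program, using its noise function rather than explicit random inputs):

using UnityEngine;

public static class LeafScatter {

    // "Half cross-product formula": a triangle's area is half the length of
    // the cross product of two of its edge vectors.
    public static float TriangleArea(Vector3 p0, Vector3 p1, Vector3 p2) {
        return 0.5f * Vector3.Cross(p1 - p0, p2 - p0).magnitude;
    }

    public static int LeafCount(Vector3 p0, Vector3 p1, Vector3 p2, float leavesPerSquareMeter) {
        return Mathf.RoundToInt(TriangleArea(p0, p1, p2) * leavesPerSquareMeter);
    }

    // Turns two values in [0,1) into uniform barycentric coordinates, then
    // interpolates the triangle at that point (the same weights work for normals).
    public static Vector3 RandomPointOnTriangle(Vector3 p0, Vector3 p1, Vector3 p2, float u, float v) {
        if (u + v > 1f) { u = 1f - u; v = 1f - v; } // fold samples back into the triangle
        float w = 1f - u - v;
        return w * p0 + u * p1 + v * p2;
    }
}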

Now, all that’s needed is to determine the orientation of the leaf and output the correct triangle-strip primitive. Even this is relatively simple. Using the world-space surface normal and the world “up” vector, a simple “change of basis” matrix is constructed. Combining this with a slightly randomized scale factor and a small random offset to the orientation (to add greater variety to patches of leaves), we can transform normalized quad vertices into the exact world-space positions we want for our leaves!

...

// Defines a unit-size square quad with its base at the origin. doing
// this allows for very easy scaling and positioning in the next steps.
static const float3 quadVertices[4] = {
   float3(-0.5, 0.0, 0.0),
   float3( 0.5, 0.0, 0.0),
   float3(-0.5, 0.0, 1.0),
   float3( 0.5, 0.0, 1.0)
};

...

// IN THE GEOMETRY SHADER
// Change of basis matrix converts from XYZ space to leaf-space
float3x3 leafBasis = float3x3(
   leafX.x, leafY.x, leafZ.x,
   leafX.y, leafY.y, leafZ.y,
   leafX.z, leafY.z, leafZ.z
);

// constructs a random rotation matrix from Euler angles in the range 
// (-10,10) using wPos as a seed value.
float3x3 leafJitter = randomRotationMatrix(wPos, 10);

// Combine the basis matrix with the random rotation matrix to get the
// complete leaf transformation. Note, we could use a 4x4 matrix here
// and incorporate the translation as well, but it's easier to just add
// the world position as an offset in the final step.
float3x3 leafMatrix = mul(leafBasis, leafJitter);

// lastly, we can just output four vertices in a triangle strip
// to form a simple quad, and we'll be on our merry way.
for ( int i = 0; i < 4; i ++ ) {
   FS_INPUT v;
   v.vertex = UnityWorldToClipPos( 
      float4( mul(leafMatrix, quadVertices[i] * scale), 1) + wPos 
   );
   triStream.Append(v);
}

At this point, the meat of the work is done! We’ve got a geometry shader outputting quads on our surface. The last thing needed is to texture them, and we’re finished!

Configuration!

I briefly touched on artist-configurable effects in the introduction, and I’d like to quickly address that too. I opted to go with the simplest solution I could think of, and it ended up being incredibly effective.

venus_vertex_weights.png

Configuring procedural geometry using painted vertex weights.

The density and location of ivy are controlled through painted vertex colors. This allows artists to simply paint the sections of their model they would like to be covered in foliage, and the shader will use this to weight the density and distribution of the procedural geometry. This way, an environment artist can use the tools they’re familiar with to quickly sketch out which parts of a model they would like to be affected by the shader. It will take an experienced artist less than a minute to get a rough draft working in-engine, and changes to the foliage can be made just as quickly!
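
As a side note, vertex colors don’t have to come from a painting tool; they can also be written from script. Here’s a small sketch (the class name and channel choice are illustrative):

using UnityEngine;

// Writes a uniform ivy density weight into the red channel of a mesh's
// vertex colors. A painting tool would vary this weight per-vertex instead.
[RequireComponent(typeof(MeshFilter))]
public class IvyDensityPainter : MonoBehaviour {

    [Range(0f, 1f)] public float density = 0.5f;

    void Start() {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        var colors = new Color[mesh.vertexCount];
        for (int i = 0; i < colors.Length; i++)
            colors[i] = new Color(density, 0f, 0f, 1f);
        mesh.colors = colors;
    }
}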

At the moment, only the density of the foliage is mapped this way (All other parameters are uniform material properties), but I intend to expand the variety of properties which can be expressed this way, allowing for greater control over the final look of the model.

TODOs!

This ended up being an extremely informative project, but there are many things still left to do! For one, the procedural foliage does not take lighting into account. I built this effect in the Unity game engine, and opted out of using the standard “Surface Shader” code-generation system, which, while very useful in 99% of cases, is extremely limiting in situations such as this. I would also like to improve the resolution of the leaf geometry, applying adaptive runtime tessellation to the generated primitives in order to give them a slight curve, rather than displaying flat billboards. Other things, such as color variation on leaves, could go a long way toward improving the effect, but for now I’m quite satisfied with how it went!

Whelp, on to the next one!

 

Object-Oriented Programming and Unity – Part 2

Recently, I’ve read complaints from a lot of Unity developers about how the engine doesn’t provide adequate support for object-oriented programming and design, with even very experienced developers running into this issue! In my opinion, this problem doesn’t really exist, and I feel as though most of the issues developers are facing stem from some fundamental misconceptions about how the engine is designed, so I’d like to take a moment to try and shed some light on the subject.

This is part 2 of a multi-part post. If you haven’t read part 1, I recommend you read it here.


Inheritance ≠ Object Oriented Programming.

OOP is a programming paradigm designed to make the design and use of a system more modular and more intuitive to a developer. By grouping related data into objects, that data can be treated as a unified collection, rather than a set of scattered elements, and can be added to and removed from an architecture in a generic, nonintrusive way.

One of the key concepts behind this is inheritance, allowing us to define “subclasses” of a class in order to extend its functionality. You can think of subclasses as specific implementations of a more generic “parent” class, like how dogs and cats were both specific forms of animals in the inheritance example from Part 1.

Inheritance is a large portion of traditional object-oriented programming, but the two are NOT synonymous. Object-oriented programming is merely a concept. The principles behind the Object-Oriented paradigm are equally valid with or without formal class inheritance, and can even be expressed in traditionally “non object-oriented” languages, such as C!

So why is Unity often criticized as being non-OO?

The Unity game engine maintains very tight control over its inheritance hierarchies. Developers are not allowed to create their own subclasses of many of the core components, and for good reason! Take “Colliders” for example. Colliders define the shape of an object for the physics system so that it can quickly and efficiently simulate physical interactions in the game world. Simulating physics is incredibly expensive, and as a result many shortcuts have been taken to ensure that your game runs as smoothly as possible. In order to minimize the workload, the physics system (in Unity’s case, PhysX by NVIDIA) has been optimized to only process collisions on a set number of primitive shapes. If the developer were to add a new, non-standard shape, PhysX would have no idea how to handle it. In order to prevent this, the kind folks at Unity have made Collider a sealed class, which can’t be extended.

Wait, then what can we modify?

Let’s look at the component hierarchy in Unity.

unity component hierarchy fixed

Yep, that’s it. The only portion of the Unity component hierarchy you are allowed to modify is “MonoBehaviour”.

GameObjects contain a set of attached “Behaviours”, commonly referred to as Components (the naming is confusing within the context of the class hierarchy, but it makes more sense when considering the exposed portions of the ECS architecture). Each of these defines a set of data and functions required by the constructed entity, and they are operated on by Systems which are hidden from the developer. Each System is responsible for manipulating a small subset of behaviours; for instance, the physics System operates on Rigidbody and Collider components. With this in mind, how can developers create their own scripts and behaviours?

The Unity team had to come up with a solution that allowed all built-in components to be pre-compiled and manipulated without exposing any of the underlying architecture. Developers also couldn’t be allowed to create their own Systems, as they would need to make significant changes to the engine itself in order to incorporate their code into their application. Clearly, a generic System needed to be designed to allow runtime execution of unknown code. This is exactly what a MonoBehaviour does. MonoBehaviours are behaviours containing small Mono executables compiled while the editor is running. Much like the physics System, a MonoBehaviour System is managed by the engine, and is responsible for updating every MonoBehaviour in the game as it runs, periodically calling functions exposed to the scripting interface, such as “Start” and “Update”. When a developer writes a script, it’s compiled to a MonoBehaviour, and is then operated on just like any other component! By adding a new System and exposing a scripting interface, developers are able to create nearly anything they want without requiring any changes to the engine code, all while still running with the efficiency of a compiled language. Brilliant! (Keep in mind that the actual implementation is most likely more complex than this, but I feel that this cursory explanation is enough to effectively utilize the engine.)

Well, that’s all well and good… but what if some of my behaviours need to inherit from others?

Inheritance hierarchies work just fine within the context of MonoBehaviours! If we really needed to, we could make our own components and have them inherit from one another, as long as the root class inherits from MonoBehaviour. This can be useful in some situations; for instance, if we had a set of scripts which depended on shared functionality, we could provide all of the necessary functionality in a base class, and then override it for more specific purposes in a subclass. In this example, our MovementScript may depend on a control script in order to query input. We can subclass a generic control script in order to create more specialized inputs, or even simple AI, without changing our MovementScript.

Unity Monobehaviour Inheritance
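
A minimal sketch of what that hierarchy might look like (class names are hypothetical, and each class would live in its own file in a real project):

using UnityEngine;

// Generic control script: exposes input through a virtual method.
public class ControlScript : MonoBehaviour {
    public virtual Vector3 GetMoveDirection() {
        return Vector3.zero;
    }
}

// Specialized input: reads the player's input axes.
public class PlayerControlScript : ControlScript {
    public override Vector3 GetMoveDirection() {
        return new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
    }
}

// Simple AI: walks toward a target instead of reading player input.
public class SimpleAIControlScript : ControlScript {
    public Transform target;

    public override Vector3 GetMoveDirection() {
        return target != null ? (target.position - transform.position).normalized : Vector3.zero;
    }
}

// MovementScript only depends on the base class, so it works with either one.
[RequireComponent(typeof(ControlScript))]
public class MovementScript : MonoBehaviour {
    public float speed = 5f;
    private ControlScript control;

    void Start()  { control = GetComponent<ControlScript>(); }
    void Update() { transform.position += control.GetMoveDirection() * speed * Time.deltaTime; }
}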

The more experienced among you may recognize that, for this problem, perhaps implementing an interface would provide a more elegant solution than subclassing our control script. Well, we can do that too!

public interface MyInterface {}
public class MyScript : MonoBehaviour, MyInterface {}

There’s nothing special about MonoBehaviours. They’re just a very clever implementation of existing programming techniques!

MonoBehaviours sound really cool, but I have data I don’t want attached to a GameObject!

Well, then don’t use a MonoBehaviour! MonoBehaviours exist to allow developers to attach their scripts to GameObjects as components, but not all of our code needs to inherit from MonoBehaviour! If we need a class to represent some data, we can just define a class in a source file, like you would in any traditional development environment.

using UnityEngine;

public class MyData {
    const int kConstant = 10;

    private int foo = 5;
    public int bar = 10;

    public Vector3 fooBar = new Vector3( 1, 2, 3 );
}

Now that this class is defined, we can use it anywhere we want, including in other classes, and our MonoBehaviours!

using UnityEngine;

public class MyMonoBehaviour : MonoBehaviour {

    private MyData data;

    void Start () {
        data = new MyData();

        data.fooBar = new Vector3( -1, -2, -3 );
    }
}

Also keep in mind that the entirety of the .NET framework (version 2.0 at the time of this article) is accessible at any time. You can serialize your objects to JSON files, send them to a web server, and forward the response through a socket if you need to. Just because Unity doesn’t implement some feature doesn’t mean it can’t be done within the engine.
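
For example, here’s a small sketch of writing a plain data class to a JSON file using Unity’s JsonUtility (available in more recent Unity versions) together with the standard .NET file APIs; the class and fields are made up for illustration:

using System.IO;
using UnityEngine;

// JsonUtility requires the class to be marked [Serializable],
// and only serializes public (or [SerializeField]) fields.
[System.Serializable]
public class SaveData {
    public int score = 42;
    public Vector3 position = new Vector3(1, 2, 3);
}

public static class SaveSystem {

    public static void Save(SaveData data, string path) {
        File.WriteAllText(path, JsonUtility.ToJson(data));
    }

    public static SaveData Load(string path) {
        return JsonUtility.FromJson<SaveData>(File.ReadAllText(path));
    }
}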

 


This post demonstrates a few examples of how data can be handled outside of the MonoBehaviour system. It is continued in Part 3, where we will recap a few points and conclude the article.

PART 3 =>

Object-Oriented Programming and Unity – Part 1

Recently, I’ve read complaints from a lot of Unity developers about how the engine doesn’t provide adequate support for object-oriented programming and design, with even very experienced developers running into this issue! In my opinion, this problem doesn’t really exist, and I feel as though most of the issues developers are facing stem from some fundamental misconceptions about how the engine is designed, so I’d like to take a moment to try and shed some light on the subject.


Unity Engine Architecture, and Composition vs. Inheritance.

The Unity game engine represents the “physical” world of your program using an Entity Component System (ECS) architecture. Each “object”, be it a character, a weapon, or a piece of the environment, is represented by an entity (referred to as a GameObject in the Unity engine). These entities do nothing on their own, but act as containers into which many components are placed. Each component represents a functional unit and dictates a specific subset of the behavior of our object. Lastly, Systems act as higher-level controllers which act on entities to update the state of their components. ECS has become increasingly popular in the game development world, as it provides some key advantages over more traditional architectures.

  1. Components are completely reusable, even across objects with dramatically different behaviors.
  2. Components provide near infinite extensibility, allowing new behaviors to be added to the game without touching any of the existing code.
  3. Components can be added and removed at run-time, allowing for the behavior of objects to change while the application is running, without any significant effort.

The entity component architecture embodies the principle of composition over inheritance, a relatively new programming paradigm which favors the structure and composition of an object over a hierarchy of inheritance. This is especially helpful when building a large application like a game, which requires many things to share large amounts of very similar code, while still being different in sometimes hundreds of ways.

Let’s look at an example.

Imagine we have a client who wants us to write a game about animals! Let’s represent a dog and a cat in our code. The immediately intuitive solution would be to make a superclass called “Animal” to contain the commonalities of both cats and dogs.

Animal_inheritance_corrected

That’s fantastic! Look, we just saved having to duplicate all of the code required to give our animal ears, legs, and a tail by using inheritance! This works really well until your client asks you to add a squid. Ok! Let’s add it to our animal class! Unfortunately, squids don’t have ears, or really much of a tail. They’ve also got 10 limbs, so our class hierarchy will have to change a bit. Let’s add another superclass, this time separating cats and dogs into their own group.

Animal_inheritance_2_corrected

Ok, there we go. Now the things shared across all animals can be separated out, and our dog and cat can still share legs, a tail, and ears! Our client liked the squid so much, he wants us to put in other ocean animals too! He asked us to put in a fish! Well, both fish and squid swim and have fins, so it would make sense to put them together… but fish also have a tail, and right now, a tail is defined as a part of mammals!

Animal_inheritance_3_corrected

Oh no! Suddenly, our hierarchy doesn’t look too good! While it makes the most sense to put animals into groups based on their defining characteristics, sometimes characteristics will be shared across multiple different subtrees, and we can’t inherit them from a parent!

Composition to the rescue!

Rather than defining our animals as a hierarchy of increasingly specific features, we can define them as individuals, composed of independent component parts.

Animal_composition

Notice that we still don’t have to duplicate any of our code for shared features! Attributes like legs are still shared between dogs, cats, and squids, but we don’t have to worry about where fish fit into the picture! This also means that we can add any animal component we want, without even touching unrelated animals! If we wanted to add a “teeth” component, we could attach it to dogs, cats, and (some) fish to provide new functionality, and wouldn’t even need to open the file for squids!

This model also allows us to add components at run-time to change functionality. Let’s say we have a robot. This robot normally attacks the player on sight, and does generally evil things. It’s pretty complicated too! The robot can move around the game, use a weapon, open doors, and more! What happens when our player hacks this robot to become a good guy? We could make a second type of robot which can do everything the evil robot can, OR we could simply remove the “robot_AI_evil” component and replace it with “robot_AI_good”. With the AI component replaced, our robot can help us and still do everything the evil robot could. If it’s well designed, it could even display more complex behavior, using the abilities it would typically use against the player to help defend against other robots!
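
In Unity terms, that swap is only a couple of calls. Here’s a small sketch reusing the component names from above (the empty classes stand in for real AI implementations):

using UnityEngine;

public class robot_AI_evil : MonoBehaviour { /* attacks the player on sight */ }
public class robot_AI_good : MonoBehaviour { /* helps the player instead */ }

public static class RobotHacking {

    // Swap the AI component at run-time; every other component on the robot
    // (movement, weapons, door-opening) is untouched and keeps working.
    public static void Hack(GameObject robot) {
        var evil = robot.GetComponent<robot_AI_evil>();
        if (evil != null)
            Object.Destroy(evil);

        robot.AddComponent<robot_AI_good>();
    }
}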

Know that inheritance isn’t a bad thing; in fact, we use it to define our body-part components. Just understand that in some situations, other paradigms may be more useful!


This post provides a cursory look at the Unity engine’s general world representation, as well as some of the potential benefits of the “composition over inheritance” principle.
This post is continued in Part 2, where we will look at MonoBehaviours and their role in the Unity engine.

PART 2 =>