
UDK : UE4 Arrives

This week, with GDC underway, Epic Games announced the release of UE4 to everyone, not just their beta developers. Under a new subscription plan, the software is $19 a month plus 5% of gross revenue. It's a deal that puts a big-studio engine at the fingertips of many indie game developers.


After using UE4 for just a couple of days, I would say it's possible they've taken some pointers from Unity, or maybe they've just moved permanently away from the graphics interfaces of old. Widgets can now be customized and arranged to a user's liking. In essence it has more of Unity's simplicity, and even a similar overall layout. I'm not sure if Unity is influencing Epic, or if there is simply a growing global expectation for 3D programs: Maya, Unity, and now UDK are starting to converge in philosophy and layout. This may seem random to some, but consider that until we showed Autodesk the way we used channel editors at Rhythm and Hues, they didn't even have channels available. Now the same channel editors have become par for the course in software across the board.

While some of the interface elements are a bit larger than I'd like (they take up more screen real estate than they should on a laptop), overall they feel much better and more up to date than the old ones.

Another oddity of UDK in the past was that game packages were saved in the Engine location on disk, which always struck me as a dangerous prospect. Now, when you create a project, it is set up in the User/Documents area of your drive.

Blueprints vs Kismet : Blueprints is the name of the graphical programming editor that replaces Kismet. Kismet was in need of an update, and having just gotten started I can't say too much, except that the idea of making Kismet-style "prefabs" seems to be more the plan now.

Static vs Dynamic : Another change: in a static mesh's properties, you can now set its physics behavior without having to convert the mesh to a dynamic mesh.

Game Types : There are some out-of-the-box game types you can use to set up your project: side-scroller, third person, etc. Again, it seems to borrow from the simplicity of Unity, but with the working guts of Unreal that have been such a draw to developers.

There seems to be a simplicity of philosophy in the new software, a real move to encourage artists to get in and work with the tools. Although at the moment I’m still getting used to a house where someone has moved all the furniture around, it’s a move in the right direction and very promising.

UDK Static Shadows on terrain

One of my pet peeves with UDK is the surprisingly poor default quality of shadows on terrain. Below is a sequence of images that walks through the process of shadows in UDK. I'm sure this is no mystery to the folks at Epic Games; after all, the demo games that come with UDK are typically built with BSP brushes and meshes rather than relying on the overhead that comes with complex terrains. But it's something to take note of.

Some starter info : I’m using a moonlit night, but I have turned down some defaults in UDK.

Under World Settings : I like to turn my Environment Color way down; there is simply too much light bouncing around for me. So I set the color dark, and then I turn down the Environment Intensity as well. This makes my worlds much more shadowy.


For this test environment I've also turned off the default ExponentialHeightFog to keep the scene clear. The only things I am altering in these renders are the lighting attributes on the terrain.

NOTE : Although I will crank up the resolution of the shadows here, it is not recommended for your game. This is A SIMPLE scene, meant merely to DEMONSTRATE the change in shadow quality. Real-world tests of your game will show that high resolutions are not only costly in time when baking shadows, but also create overhead in the maps loaded into memory.

FIRST RENDER : I set up a quick environment to show what the shadows of my trees look like on the default UDK ground. It's passable; we have shadows out of the gate.


SECOND RENDER : I added a 16×16 terrain to my environment and applied a quick grass texture. There is no height change in this render, so it should be pretty clean. To a new user of UDK this will look pretty promising, but this render from the Editor window is deceptive. In editor mode our shadows are dynamic, because they haven't yet been baked. Using dynamic shadows throughout a game would be very heavy in UE3 (we're promised real-time shadows in UE4).


Many people who set up a scene in UDK and see great shadows are surprised when they go into game mode and watch those shadows evaporate. The reason can be found in the terrain's settings: the Static Lighting Resolution is super low. That line and the entry below it are what we want to focus on.


THIRD RENDER : Static Lighting : 2
This is what the shadows look like after we bake out our lighting. There is simply not enough resolution somewhere, so we lose any fidelity in our shadow renders. If we double-click on the terrain, its attributes come up, and we can see the default Static Lighting Resolution is set to 2, with Override turned off. (Actually that's a mistake; the default is a more discouraging 1, and that number cannot exceed 4 without enabling Overriding Light Resolution. If overriding that number doesn't make you nervous, it should. You have reached a dilemma, and will face quality-versus-speed trade-offs as that number goes up.)


Now we can start doubling that number to 4 and then 8, but progress toward better shadows will be slow. So we jump to Static Lighting Resolution : 16. Before you type that in, make sure Overriding Light Resolution is ON; otherwise it will bounce back to 4.
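To build intuition for why this setting is so costly: lightmap memory and bake time grow roughly with the square of the resolution value. The sketch below is back-of-envelope arithmetic only, not UDK's actual lightmap allocator, and the quad count is just the 16×16 terrain from this example.

```python
# Back-of-envelope: how static lighting resolution scales the number
# of lightmap texels (and so, roughly, bake time and memory) on a
# terrain. This is the square-law intuition only, not UDK's real
# lightmap allocator.

TERRAIN_QUADS = 16 * 16  # the 16x16 terrain from the example above

for res in [1, 2, 4, 8, 16, 32, 64]:
    texels = TERRAIN_QUADS * res * res
    print(f"resolution {res:>2}: ~{texels:,} lightmap texels")
```

Doubling the resolution quadruples the texel count, which is why jumping from 2 to 64 is such a dramatic (and expensive) change.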


FOURTH RENDER : Static Lighting : 16
In this render we can see a great improvement in our shadows. Remember at this point to Play your game and make sure the shadows are the same as in the editor window.


FIFTH RENDER : Static Lighting : 32
For this demonstration I'm doubling my numbers and keeping them powers of two, but this is about demonstration, not our end game. We are seeing much higher quality now; I could live with this render and get back to other tasks. Still, I want to push it one more notch and see what I get.


LAST RENDER : Static Lighting : 64
One thing to note: I also turned on specular highlights in the terrain's attributes. Here we see higher quality still; it looks good. The trade-off is how long I'm willing to wait for shadows to bake to get better results.


This tutorial was not meant to be the end-all on the subject of shadows, but I hope it gives guidance on what happens to terrain shadows, so that someone making a UDK game can consider some of UDK's hidden quirks before proceeding too far in design. Shadows are a complicated area, and the details vary for different types of objects: terrain, static meshes, and dynamic meshes all have different requirements. There is also a zone around the player that has dynamic shadows, which you may see at times as it follows the player through the game. This exists because there is a near area you interact with in terms of lighting, whether casting your own shadow or using a light that is carried in game.

One thing is clear: for now, in UE3, if you set all your shadows to dynamic, your game will likely not be able to handle it for long. Tempting as dynamic shadows are, you need to think smart about how to bake shadows, set up lighting UVs on objects, and break your game up so the lighting can still bake. And be careful how much you crank up that static shadow resolution while you're testing.

For the record : I am currently using a Static Lighting value of 10 in my game, but the overhead will vary depending on the size of your terrain and the number of materials you use to paint it. These numbers make my lighting too heavy to bake out daily while developing. So I test small areas of my game by keeping them as streaming levels; that lets me test parts of the whole and think about how to make my work and my game more efficient, while still trying to make the game look good.

UDK can be very intuitive software to use, but shadows are a sticking point in UE3 that will challenge your patience as you wait for scene lighting to bake out. To me the end result is one of the most rewarding parts, but the lack of computing power available to an indie developer makes it one of the most challenging to deal with.

UDK Strategies in Rendering Shadows

I want to start off this blog today by saying I'm hoping to work with Render Rocket, with the goal of opening up its render farm to game companies big and small using the UDK Swarm renderer. I'm wondering if others out there think that would be a good idea: a farm to render those big, ugly game scenes across hundreds of processors.


Here's why this is on my mind today: anyone developing video games with UDK is probably used to seeing the error stamped in red in the upper-left corner of the screen during gameplay: "Lighting needs to be rebuilt." It's a painful reminder that your ever-changing scene has impacted the shadows, and as the scene grows, that render time keeps escalating.

The reason this comes up is that, in order to keep your game from bogging down, you don't want to simply flick the switch on for all your objects to use dynamic real-time shadows (although it sometimes is cool). Instead, we bake out shadow information based on a second set of UVs for each game object. It's complex, and World of Level Design has the best tutorial I've seen on setting up those secondary UVs.

As I get into refining my game and trying to lock down lighting, I'm at the point now where I need to clean up errors like "Lighting needs to be rebuilt." My goal is to see if I can circumvent my long render times by baking out lighting in packets of information, rather than trying to ram the entire game level down my quad-core's throat at once. I've done the latter, but once my render times shot past five hours, I didn't want to leave my little laptop heating up that long without breaks.

This lighting error will pop up (irritatingly) whenever you move a mesh in your game, move an existing light, or add a new one. It demands that your lighting be perfect if you want to do a final bake of your game. For many people this may not be an issue. The common tactic is to create contained game levels that are typically interiors. On paper the level design often looks like a map taken right out of a Dungeons and Dragons playbook: a series of rooms connected illogically in terms of architecture, but constructed to minimize engine overhead in gameplay, and just because, hey, it's cool.


As games get more sophisticated this becomes a problem. By sophisticated I mean that games actually have both exteriors and interiors. Take a game like Dear Esther: if it were baked out on the UDK engine, the whole island would need baking, and all the foliage on it would be a potential source of light-rebuild errors. For us indie developers this can be a problem.

The games I've always gravitated toward creating mix exteriors and interiors. To make this more complicated, the interiors are often integrated with the exteriors: part of the gameplay is that you can see outside (that's part of the suspense). If you want to avoid super-long render times, you could simply avoid doing this altogether. Your game would then be closer to something like Silent Hill: Homecoming, where there is a pause as new game levels are loaded and unused ones are unloaded. In that scenario, seeing outside is often masked either by fake scenery out a window or by the old foggy-window trick.

Increasingly, games are not doing this; a game level will be a mix of interiors and exteriors all loaded up together. A good example is Resident Evil 5, where the exteriors help give the flavor of the environment, and the interiors (sometimes a small row of shanty houses) are perforated to the exterior and too small to make a separate game level by themselves.


To get around loading the whole game at once, I like to stream levels in game based on distance (and visibility), so a building will not be held by the engine when you are beyond a certain distance. The strategy I came up with is to build most of my lighting and Kismet programming in my "landscape" level, which in UDK is known as the persistent level and is the automatic level you get when you start building. This also solves the problem of holding game information between levels: if you need to know you picked up the "gold key," the persistent level holds that information, and the persistent level is always loaded.
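The distance-based streaming idea can be sketched as follows. Each streaming level gets an anchor point and a radius, and the level stays loaded only while the player is within that radius. The level names, anchors, and radii here are made-up illustrations; in UDK this is actually done with level-streaming volumes or Kismet, not Python.

```python
# Minimal sketch of distance-based level streaming, as described
# above. All names and numbers are hypothetical examples.
import math

STREAM_LEVELS = {
    "Stream_Village_A": {"anchor": (1200.0, 400.0, 0.0), "radius": 5000.0},
    "Stream_Mysterious_Bldg_A": {"anchor": (9000.0, -2500.0, 0.0), "radius": 4000.0},
}

def levels_to_load(player_pos):
    """Return the set of streaming levels that should be resident."""
    loaded = set()
    for name, info in STREAM_LEVELS.items():
        if math.dist(player_pos, info["anchor"]) <= info["radius"]:
            loaded.add(name)
    return loaded

print(levels_to_load((1000.0, 0.0, 0.0)))  # near the village only
```

The engine does the real work, of course; the point is only that each streamed chunk needs a sensible anchor and radius so buildings drop out of memory when you are far away.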

For lighting I then break everything out. For instance, I have an area of my game called "the Village," and to keep things organized on my computer I name things like this: Stream_Village_A. I always use a letter at the end in case I add more parts to the village. Numbers are confusing because they imply iterations; a letter identifies this as part of the village, just a separate level to stream in.


My strategy for rendering shadows at the moment is this: I unload all my streaming levels in the game. I turn off dynamic objects that I don't want casting shadows (those live in streaming levels prefixed DYN_, for instance DYN_Village_A). I keep a separate breakout of game parts for doors, which I want turned off when baking out my navigation paths for AI; those are prefixed DOOR_, as in DOOR_Village_A.
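The payoff of that naming convention is that level names become machine-sortable. As a toy illustration (the grouping logic is mine, not a UDK feature; the names are the examples from this post), a bake checklist could bucket levels by prefix to decide what to unload first:

```python
# Hedged sketch: bucket level names by their naming-convention
# prefix, so DYN_ and DOOR_ levels can be identified and turned off
# before a lighting bake. Purely illustrative; not a UDK API.
levels = [
    "Stream_Village_A", "DYN_Village_A", "DOOR_Village_A",
    "Stream_Mysterious_Bldg_A",
]

def by_prefix(names):
    """Group level names by the prefix before the first underscore."""
    groups = {}
    for name in names:
        prefix = name.split("_", 1)[0].upper()
        groups.setdefault(prefix, []).append(name)
    return groups

groups = by_prefix(levels)
print(groups["DYN"] + groups["DOOR"])  # levels to turn off before a bake
```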

Once everything is off or unloaded, I start loading in the things I need one at a time. I do this so I can identify errors, and so I can render parts of my game as quickly as possible. This strategy works really well for baking the architectural elements in my game. I turn on the village along with the persistent level, and five to ten minutes later my lighting is up to date. Then I bring in the mysterious-building level and turn off the village (I can unload it from the game if I've saved, and reload it later if I like). I basically bake only what I need, to keep things moving.

Later on, if I update the STREAM_VILLAGE level and want to rebake the lighting, I need to reload it into my main game and render with the persistent level on. The baseline thing to know: the persistent level is always on when I render a streaming level, and whatever lights are in the game are present.


When it comes to trees, my game is ambitiously filled with hundreds of them. To keep the rendering load lower, I have broken them up into areas, trying to max out at 50 trees per area. When streaming by distance, I locate the tree central to that mass and use it as my fixed point. Then, with the persistent level and the trees on, I bake lighting again.
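Picking the "tree central to that mass" can be sketched as choosing the tree nearest the group's centroid, so the streaming anchor sits on a real object in the level. The coordinates below are invented, and this is just one reasonable way to read that step, not the author's exact procedure.

```python
# Sketch: choose a streaming anchor for a group of trees by finding
# the tree closest to the group's centroid. Positions are made up.
def streaming_anchor(tree_positions):
    n = len(tree_positions)
    cx = sum(p[0] for p in tree_positions) / n
    cy = sum(p[1] for p in tree_positions) / n
    # Return the actual tree closest to the centroid, so the anchor
    # is a real object rather than an empty point in space.
    return min(tree_positions, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)

trees = [(0, 0), (10, 0), (10, 10), (0, 10), (5, 6)]
print(streaming_anchor(trees))  # the most central tree: (5, 6)
```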

While this strategy becomes more difficult where trees are involved, it is still doable. By rendering in chunks like this, I keep my processor from overheating and rendering for nineteen straight hours.


Admittedly, foliage is still a problem for me. I am using the newer Landscape feature in UDK, and so far I have not been able to separate my foliage into its own levels. This is problematic, because I see the foliage as providing much of the character of my game. To date my persistent level holds 86,000 grass meshes that I cannot bake out separately, and I'm contemplating how to solve this. Note: I don't even need my foliage to cast shadows, but the goal is to have the game engine accept that strategy without stamping me with the dreaded "Lighting needs to be rebuilt" error. Even if I can suppress that error (I'm not sure I can), having it there makes me suspicious: if something I don't care about (grass) is misbehaving, how do I know something I do want rendered correctly isn't wrong too?

Note though : for this to work, any lights that are in other levels of your game need to stay in the game, because rebaking in their absence changes the lighting. This is a little more complicated than it sounds. If my STREAM_MYSTERIOUS_BLDG level has several lights, then once I bake it, I need to keep that lighting in the game. My tactic right now is to NOT put separate lights in my streaming areas, but to keep them in the persistent level only.


Back to render farms. UDK has something called Swarm rendering (not smarmy rendering, although there is something smarmy about it). Swarm will send your game out to multiple machines on your network for baking. My network of computers is two machines; remember, indie game developer. It's hardly worth the time to set up, and not likely to cut much off my render times. Instead, when rendering I typically switch machines and do something else I need to do on my other computer.

So I'm stuck with the limitations of the game engine and of my computing power.

While writing this post this morning, I heard back from Render Rocket: they are interested and would like to work with me to see if we can get UDK working on their render farm. No promises, but we'll see where this leads.

Hallow's Way : UDK workflow for an Indie Game.

Making games, for me, is like writing short stories, as I've said before: I can be spinning several ideas at once. Recently I changed directions and focused on a short-story game, working title Hallow's Way. I had been hoping to finish it by Halloween, but flood waters and rebuilding ground progress to a halt at first. I've been steadily getting more time to finish the game.

The main inspiration for the game began with a familiar urban legend, the "Green Light Cemetery" in New Jersey. It's not the first time I have revisited allegedly haunted places from my childhood, and it won't be the last, but the real inspiration here is the fascination and fear associated with seeing the green light in the woods above the cemetery (which I do believe my younger brother Ron and I tried to find with our intrepid childhood friend Michael Weimer). The question is, can I capture some of that fear and fascination in gameplay? When I'm testing a game, I try to make actions surprising enough that I too can be surprised. If I don't have a moment of sudden fear as something unexpected happens, then I don't feel I'm developing the game correctly. I want to be immersed in the game myself when "writing" it.


Creepy doll from Hallow's Way, whose details I'm keeping purposefully vague, except to say the original concept was inspired by a Dave McKean illustration. Although the model has changed drastically from that original inspiration, I have considered putting Dave McKean in my game in some way as an incidental character, because he's awesome.

One reason I want to work on a very short story game, which is how I think of Hallow's Way, is that I'm still getting used to the UDK engine. I can put together a game level fairly quickly, but completing a game from A to Z is a different matter. Aside from testing a game, there are menus, artificial intelligence to write, and of course all the assets to create.

Since my time in the studio and on my games is limited, one of the things I try to do is be quick and nimble, accomplishing things swiftly so I can move on. The creepy doll seen above is only a couple of hours' worth of work, plus another couple of hours of testing the AI and getting him to work properly in game. Thankfully, tools like ZRemesher in ZBrush let me cut out the time I previously spent making low-res models for games. I'm showing this one creature, of course, but he is a small part of the game. The reason I'm vague about it is not that I'm trying to hold back on what I share, but that I want to hold back if I intend to scare.

Aside from creating a list of creatures (there are fourteen others for my game), my unfamiliarity with UDK means I'm still building up a resource library of various types of AI and many other game elements. One thing people like to brag about with UDK is how quickly you can create a game using its stock resources, like the bots, soldiers, and environments. People also like to use BSP brushes in UDK to build game levels quickly. With a BSP brush you can come in, quickly create a box, and essentially "model" within the game package. It's a good way to test things out; the problem is that these are not real models. You are generating procedural geometry within the game engine, and to alter it you have to add or subtract volumes. You can't make a complex building like the Sistine Chapel and then decide to move it ten units to the left. Most people use BSP brushes to make fighting or tournament games: fast-paced (and rather pointless) games about kill or be killed. I'm not saying people don't do amazing things with those environments, or that there isn't a lot to learn there, but it's a direction I prefer not to go in, and a type of game I'm not interested in creating. There are no guns in my games.

My method is to work with modular pieces to create environments, and to do that I have had to build up my own modular library of buildings, trees, creatures, and effects like smoke, rain, and fog.

Test area for game.


When I'm working on my game, testing is often one of the more labor-intensive parts. There is a large game level to test, and I explore it, poke at it, and try to engage the AI. The image above is a snapshot, not of my game proper, but of a test area I use when developing tricky things. The reason I work in a test area is speed: I don't want to spend twenty minutes getting to the AI every time; I want to write code and then jump into a small test area to see if it works as expected. The area above shows a few things that might look odd to someone unfamiliar with engines, or with UDK. There is an overlay, for instance, for the NavMesh (the navigation mesh the AI uses). I was testing this level to see how to get AI to fluidly go up stairs and slopes to places higher than the ground plane. In the Unity engine, I can generate a NavMesh and it will cover the area I have in mind in three dimensions. In UDK, the mesh is limited in how high it goes in Z to roughly the height of the player, which is odd to me. So I test in this small space to figure out solutions quickly and to see how my AI navigates the problems I put in its way.

Among the things you can see are what look like large alphabet blocks. These are models I've turned into rigid-body objects, or KActors in UDK. KActors obey physics. I can program my AI to interact with the rigid bodies, and I can also scatter them in the air and have them go live at the start of gameplay, randomly scattering their locations. It's one of the things I do to randomize a game, which I love to do. One thing I question in games is the locked-down world you enter. Many games present an environment where you solve a puzzle, and the puzzle is always the same; you try and retry it until you solve it. People often replay games, and the puzzles are presented with exactly the same pacing and locations; nothing changes. One of the things I try to create in my games is the sense that the world has shifted when you start a new game. Another concept I'm exploring is the possibility of the game shifting during gameplay, which I'm going to stay vague about, again, so I can maintain the element of surprise.

One note about rigid bodies: I put them in a separate game level that is on when gameplay starts but off when I generate my NavMesh, because the idea is that they will be movable. If the NavMesh generates a wall around them, the assumption is that they will stay static. When I talk about different levels here, I mean chunks of the game I can turn on and off, not "leveling up" (although that could apply too). I can have a game level where different chunks go on and off depending on your location. This saves on how much data the engine has to crunch while you're playing, and it lets me separate things out for other reasons, like keeping my NavMesh clean, as mentioned. A solid example: I once created a NavMesh and forgot to turn off my rigid-body level. When my AI moved through the game, they stopped on the stairs, because the random objects in the stairwell had generated a navigation mesh that treated them as permanent obstacles.
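The gotcha above boils down to a filtering rule: only static geometry should be present when the navigation mesh is generated. A toy illustration of that rule (everything here is hypothetical; UDK's NavMesh builder is not exposed like this):

```python
# Toy illustration of the NavMesh gotcha: movable (rigid-body)
# objects present at bake time get treated as permanent walls, so
# filter them out before "baking". Names and flags are made up.
obstacles = [
    {"name": "wall", "movable": False},
    {"name": "alphabet_block", "movable": True},   # a KActor
    {"name": "stairs", "movable": False},
]

def bake_navmesh_obstacles(objs):
    """Only static geometry should block the baked NavMesh."""
    return [o["name"] for o in objs if not o["movable"]]

print(bake_navmesh_obstacles(obstacles))  # ['wall', 'stairs']
```

In practice this is what putting the KActors in their own level accomplishes: turning that level off before generating the NavMesh is the filter.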

Yes, I jump back and forth between talking about generating art and story and the technical side of games. I try to understand the technical side, but my goal is to get the technical things out of my way so that they become second nature and quick to execute. I believe that as I create more "small" games, the library of pieces I build should let me write more short-story games without the huge delay of creating a whole new system. That's not to say each game won't have new elements you want to add, different AI, or a more involved story as you get better; but you keep building a library and refining your approach rather than re-creating the entire engine you made.

I think it was Stephen King who talks about how, in writing stories, there is a perfect storm that happens with an idea. You have an idea and put it aside, and over time you collect those ideas. You don't necessarily force them out; you can't. At some point, though, the idea comes out by itself: something small happens, creates that perfect storm, and makes your idea clear. The problem with short-story game ideas is the long delay in developing games. The solution, for me, is to keep building a library, so that when lightning strikes and the game idea comes, I'm ready to create more fluidly.

Actual shot from my vague creepy indie game.


When the technical and artistic prep work is done, the short story comes in creating the gameplay. Elements like my fog and rain effects are important to me in my game stories for creating mood and movement. The same is true for sounds and music, which of course is where my games suffer the most since I am not a musician who composes music.

All of these things, though, can come together as parts of a composition. There are a million things you can do when creating any story, and the question is always: what do you need in order to tell your tale, and what can you leave out?


Other locations from my childhood appear as incidental characters in game.

UDK : Randomly placing pickups

This is not a real post; this is me trying to muddle my way through a gaming issue. Putting pickups in UDK is a pretty easy task, but I want them randomly generated. My games depend on placement being random and non-repeating, so you can play through as many times as you like and the game is not predictable.

At first glance my method works: the pickups appear, and I can intersperse them throughout an environment wherever I place pathnodes. However, for some odd reason they shrink to nothing in about five to ten seconds, and I am left aghast. I'm hoping that by sharing this, multiple minds can help solve the problem, and then benefit from whatever it is I'm doing.
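The placement side of the idea, at least, is simple to state: each time a new game starts, sample a subset of pathnode locations without repeats and spawn pickups there. The sketch below shows just that selection step. It is not the author's UnrealScript/Kismet setup (and says nothing about the shrinking-pickup bug); the node list is invented.

```python
# Sketch of random, non-repeating pickup placement: sample distinct
# pathnode locations each playthrough. Illustrative only; in UDK
# this logic would live in UnrealScript or Kismet.
import random

pathnodes = [(100, 0), (250, 40), (400, -30), (520, 80), (700, 10)]

def choose_pickup_spots(nodes, count, seed=None):
    """Pick `count` distinct pathnode locations for pickups."""
    rng = random.Random(seed)
    return rng.sample(nodes, count)  # no repeats within one game

spots = choose_pickup_spots(pathnodes, 3)
print(spots)  # different placement each playthrough
```

Sampling without replacement is what keeps two pickups from landing on the same node, and omitting the seed is what makes each playthrough different.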



Making Mistakes in Artwork.

I came up with a quick idea for a game this week. I wanted to mock it up quickly, and I gave myself a mandate: finish it by the end of the week, or kill it, one way or another. I have one day left, and although the game is close overall, I always feel this need to perfect it, improve my assets, and so on. I start to spiral out quickly.

I know I talk about this a lot. I have this idea that creating a game shouldn't always be like making a tent-pole film production that involves hundreds of people and a budget of 200 million dollars. I have no problem with companies that do that; I think it's awesome for the artists and the gamers, and it pushes the state of the art.

I'm more interested in the smaller, personal projects, though. When I sit down to write a short story, I'm not thinking, "This will take me about a year to create!" I'm hoping I can finish a first draft in one sitting, because when I'm in a flow, that's often how it comes out for me. When I have to force things, the story or artwork shrivels up and dies.

I saw this all the time when teaching, both in myself and in my students. If there was a great expectation that a ZBrush sculpt or a drawing was going to be "it," then a stiffening happened, and often procrastination. The work drags out and goes nowhere. I saw in my students a timid dabbling instead of broad reworking of their drawings and sculpts. The reason is often risk: push too far and you may destroy the artwork, but be too timid and it can fail to come to life. Another reason is that some ideas become precious to us, and we can't part with the initial concept to see what else "it" wants to be.

To me, part of the creative process involves being loose when I sit down to work. This is difficult business in creating games, because games are in fact highly technical, with lots of roadblocks in the way.

My philosophy for an indie developer, though, is to build a backlog of ideas and assets and be ready for lightning to strike. You have to be sketching in the game world with the assets you already have, not always "experimenting" with some killer gameplay idea that would actually require six months to a year for one person to execute well.

When teaching, I had my students pitch games in class, with the goal of choosing one as their "pre-thesis." Each semester there was at least one person who pitched this idea:

"This is an MMO (Massively Multiplayer Online) game, and you can customize your character from one of several races..." The idea would go on, and often involve dragons, amazing powers and weaponry, and scores of animated cut scenes. I would try to steer them back to the ground. It's not that those things are impossible at the student level, but even starting simpler (single player, for instance) is a lot to accomplish when you're talking about actually producing that product as your thesis.

In this vein I still try to counsel ex-students of mine who are staying on the indie path. "Don't get too complex," I caution. "No, don't think of your game as a triple-A title, please," I beg. "Be nimble. Be fast. If you build an asset, test it in game the same day."

The reason is simply this: we need to make mistakes, whether in a sketch pad, in short stories we later realize suck big time, or in a game that is oddly reminiscent of Kong. Doing those things, even if they aren't masterpieces, means building your chops and a repertoire of skills so that there is improvement. It means you are doing what you want to do, even if it isn't triple-A caliber (yet).

It also means having some closure on something. The big things, the big games, the 300-page novels, the massive oil paintings: those things are sweet.

I’m just advocating for the sketches in between too.

Can Game Engines be more Accessible?

A few nights ago I made a post about game engines getting in the way of individual artists because of their heavy technical side, and I wanted to expand on some of those remarks, because I could hear a collective groan from the more technical community.

First let me say that game engines have come a long way in the last ten years. When I worked on my first video game project at Rhythm & Hues Studios, we wrote our own game engine. We had a small staff of programmers (small by today's standards, where even Limbo had a team of twelve), many artists creating content, and a game-testing team continually debugging our game.

Now, fifteen years later, individual artists are creating games and getting greenlit on Steam, or selling on indie sites like Desura.

Very popular and sometimes experimental games like Dear Esther are produced by tiny teams. One of my favorite series comes from Frictional Games: three indie artists began the company working remotely from one another, and their games are now considered among the scariest on the market, with a loyal following.

The technology has become increasingly accessible to artists and small programming teams, and what one person or a small team is capable of is downright amazing.

Although these changes have happened, a sizable chunk of the market is still dominated by programmers, because there is a ceiling built into most off-the-shelf software. I'm not saying programmers aren't creative and can't make games, but there is another creative group, less technical, trying to break into the industry, and they often hit that technical ceiling. Likewise, there are creative programmers (the team at Frictional being a good example) who might do more if easier tools for animating characters were available, so they could focus on the gameplay rather than getting bogged down in something they may not enjoy, like animating.

The technical ceiling I'm talking about can be seen in two game engines that are very popular right now with indie developers and even triple-A titles. The engines I have used most are Unity and Unreal, also known as UDK.

Before I talk about some of the pros and cons of the two main engines I have used over the years, I want to point people to the link above, a TED talk with Will Wright, the creative mind behind The Sims and Spore.

In Spore in particular we see some remarkable things that are easy to miss unless you realize how hard it is to do what he has players doing, even in high-end software like Maya, Unreal, and Unity. If I want to design and create a character and then animate it, it is a long process of design, sculpting, simplifying the model, rigging, animating, exporting, and programming. And that is with powerhouse software behind me.

Yet Will Wright demonstrates in Spore that a character can be designed (within constraints) by the game player, auto-rigged, animated, tested, and put back into gameplay.

This kind of interactivity is part of what is missing from current game engines when it comes to the more difficult things, like inserting your own custom characters and giving them animations. I've talked about this before and there is always a little scoffing, but then Larry Weinberg, a former Rhythm & Hues artist, is the person responsible for a similar type of software, namely Poser. The brilliance of Poser is that Larry took the complex pipeline a visual effects artist might use and simplified it. Personally, I'd like to see Poser and its philosophy incorporated into Unity and Unreal. In short, it's brilliant in its simple approach to the complex. Now imagine a combination of Will's character creation program and Poser, where a character is added into Unity or Unreal, a walk cycle is applied, and then, using Poser-like controls, the speed, the rhythm, and other parameters are tweaked in real time with a very user-friendly interface, not in Maya or another animation package.

To me this is all about pipelines in and out of software. Right now there are things in major game engines that are not quite smoothed out, not really ready for prime time, and often badly documented at best. A good pipeline cuts down the time you spend on redundant tasks that don't really make or break a game, like a walk cycle.

The point is that a walk cycle is not what makes or breaks a game. Spending long hours rigging and animating each character can and should be simplified and essentially automated. I know that sounds like a tall order, but I'm pretty sure it will happen eventually.
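To make the "automated walk cycle" idea concrete, here is a toy sketch, in Python and purely illustrative (the engines discussed here don't script this way), of the kind of procedural motion an auto-animation tool might generate: the legs swing as opposing sine waves, and two human-friendly dials, speed and swing, shape the whole cycle.

```python
import math

def walk_cycle(t, steps_per_second=1.5, swing_degrees=30.0):
    """Return (left, right) hip swing angles in degrees at time t seconds.
    The legs are half a cycle out of phase, which is the core trick
    behind many procedural walk generators."""
    phase = 2.0 * math.pi * steps_per_second * t
    left = swing_degrees * math.sin(phase)
    return left, -left

# Tweaking "speed" and "rhythm" is just changing two numbers,
# with no re-rigging or re-keyframing required:
slow_shuffle = [walk_cycle(f / 24.0, steps_per_second=0.8, swing_degrees=15.0)
                for f in range(24)]  # one second of poses at 24 fps
```

A real tool would drive actual joint transforms, add knee and ankle offsets, and blend between cycles, but the principle of exposing a few simple dials over generated motion is exactly what Poser and Spore demonstrate.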

Okay, that said let me get back to the technical ceiling and some pros and cons of the two main game engines, Unity and Unreal.


Unity has steadily gained momentum in recent years, making its presence well known in video game creation. Part of what makes it so engaging is the intuitive, Mac-like nature of the user interface. When it comes to packaging a game up and making titles and menu buttons, it is pretty simple; it's the stuff I can teach in one class. You can compress the game into something playable and get it out to friends in three button clicks or fewer: Apple-like ease of use.

On the other hand, if you don't know loads about programming you are likely to hit the ceiling with Unity fairly quickly, becoming mired in trying to make a particular type of player work within the game you want to make, and finding conflicts between different types of gameplay and the scripts you are now using. When you find a script you like, say a third-person controller, and bring that character controller into your game, its scripts will conflict with the ones already there. Essentially, different teams work on different types of gameplay, and those pieces don't have to work together; they only have to work together once you bring them into your project. This is a different philosophy from Unreal, where something brought into your game will not break it should you change direction.

In Unity, you may come up with an idea for a game and realize you have no idea how to make it happen. Then it takes lots of research and trial and error.

A great thing about Unity is the online documentation and the community, both easy to navigate when looking for information. Unique, too, is the in-editor store for downloading content made by others. What does that mean? Imagine being in Maya with a button connected to TurboSquid, so you could quickly search for a model you need and download it right into your project. This has created another growing community of entrepreneurs who spot a need, like a buoyancy script so a player can swim through water. Someone who needs it suddenly has it, often for a small fee.
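To give a sense of how small these sellable building blocks can be, here is a minimal buoyancy sketch (Python, illustrative only; an actual Unity asset would be a C# script applying forces through the physics engine). At its core it is just Archimedes' principle: the upward force equals the weight of the displaced fluid.

```python
# Archimedes' principle: buoyant force = fluid density * g * displaced volume.
WATER_DENSITY = 1000.0  # kg/m^3
GRAVITY = 9.81          # m/s^2

def buoyant_force(submerged_volume_m3, fluid_density=WATER_DENSITY):
    """Upward force in newtons on a body displacing the given volume."""
    return fluid_density * GRAVITY * submerged_volume_m3

def will_float(mass_kg, total_volume_m3, fluid_density=WATER_DENSITY):
    """A body floats when its average density is below the fluid's."""
    return mass_kg / total_volume_m3 < fluid_density

# A 60 kg crate of 0.1 m^3 (average density 600 kg/m^3) floats;
# a 200 kg crate of the same size (2000 kg/m^3) sinks.
```

A real swim/buoyancy asset layers drag, a water surface height, and per-frame force application on top of this one formula, which is exactly why even tiny utilities like this find buyers.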

Additionally, when building environments, Unity does more for the artist than some other game engines. Getting your work into Unity happens in real time: if you make a new asset, you put it in the proper folder and it appears in the engine, and likewise for any scripting you are doing. This means you don't have to keep exiting the software to tweak your program, which saves a lot of time and lets you spend more of it play-testing your product.

The scripting languages in Unity give the user a lot of flexibility, which means that someone starting out with scripting has some choices depending on their comfort level.

Now, this is a rough evaluation of Unity, but overall I would say it is fun to create games in, and should you sit down for a day and create something, you may find at the end of it that you are knee-deep in your own game creation.


Perhaps the most popular game engine right now is Unreal, or UDK. A huge number of games use this engine, and its popularity is well deserved. The software interface isn't as slick to look at as Unity's, though. For me, learning Unreal was slow going. While Unity's mouse and keyboard controls match popular software like Maya, Unreal has mapped its controls its own way, which is somewhat odd and unnatural to me. This made diving into the software quickly a problem, because I was forced to watch videos on just navigating it before I could explore.

While Unreal releases updates to its software every few months, it has yet to release the newest version to the indie community, and this is a big downside to committing to the software. Unity, on the other hand, will project when the next version will be available and tries to stick to that as closely as possible. In the Unreal universe there has been impatience from the indie community, who are waiting for the tiniest crumb of news about when Unreal 4 will become available; to confound them further, there is no indication of when it will be released.

This means there is some frustration waiting for tools in Unreal to make a significant leap. Some of the leaps forward, like the newer terrain tool, are buggy and not quite ready for full use.

Additionally, the software is fragmented. If you want to make trees, you need to use SpeedTree. If you want to make menus, you have to use Flash (via Scaleform) and essentially jump through some very vague hoops. It is not straightforward. Publishing the game is likewise more complicated than Unity's three-buttons-or-fewer philosophy.

On the other hand, while getting started with Unreal may seem more difficult, and indeed more frustrating in some areas, the ceiling for what you can do without heavy programming is higher. If I want to add ladders, or a zero-gravity zone, I can simply put these things in. (Please note I haven't used Unity since 3.5, so my knowledge may be a little dated.) Additionally, there is far more under the hood in Unreal than in Unity when it comes to creating working AI. Creating volumes for different purposes, like a swimmable volume, is easy, and I will say that Kismet is far friendlier to use than Unity's version of the same. This means a user can try many things quickly in Unreal's Kismet and create many different types of games.

One of the most impressive things to me about Unreal, though, is the renderer. The visual quality of Unreal is just much better than Unity's. Unreal has a softness to it, less pixelation, and the ease of using atmospherics makes creating environments an enjoyable experience. The renderer reminds me of the rendering quality I admired in Rhythm & Hues' proprietary software, which made Maya's renders look amateur, and the same is true of Unreal's renders versus Unity's. Unity games have a crisper look, while Unreal has a soft quality, with automatic light rays and ambient occlusion.

Now you're wondering why I think this all can be better? Well, for starters, Unreal is sitting on the next version of its software, and has been for some time now. The new version is supposed to revolutionize how users interact with the software and free artists up to make whatever they want in a game. Yet no one knows when it is forthcoming. Perhaps Unity will beat them to the punch, improving some of the things that make programming unique games difficult, or improving its render quality significantly in the next version.

I appreciate the ease of use of this software, and that I don't need a programming team if I simply want to get started creating an indie game on my own. I believe that as these packages move forward we will see more artists working individually, and more small teams like Frictional popping up. The more this happens, the more breakout artist-programmers we will have creating amazing, rich worlds and stories that are NOT triple-A titles, but are potentially so much more enjoyable to dive into for a few hours.

The tech has definitely come a long way since I began doing this in the '80s, no doubt about it. Yet I'm still waiting to sit down at my computer and "compose" a game the way a musician might, before it goes stale in my head and falters.