Exercise 5 – Designing a Game

The Brief:

For this project, we were tasked with designing our own game, complete with a planned narrative, concept art, and ideas of how the actual game would play, i.e. its mechanics and how it would control.

I was placed in a group with Oliver Addison, Lewis Rhodes and Josh van Wyk, and together we came up with the title ‘Euphoria’: a single-player, story-driven adventure game that takes place in a dystopian future, in a city that strives to be a ‘Utopia’. We call this city Euphoria.

The game falls under the first-person shooter genre, but it also incorporates stealth elements along with puzzle-solving gameplay. To help make the game a more engaging experience for the player, given the plot, it is also open-world.

The Story:

The character you play is born into Euphoria as part of a high-ranking family, but at a certain age you are ‘enlightened’ with the ability of free thought. It is then that you realise the rest of the world has not been granted this gift, and you see what sort of world you really live in.
Assuming the role of a PeaceKeeper, expected to quash rebellions, you can now begin to make your own choices. Do you remain a loyalist? Join the rebellion? Or remain neutral and attempt to juggle both of these ideologies as you play through?

The Gameplay:

As mentioned above, this is an open-world first-person shooter with stealth elements, but what brings our whole narrative together is how much your actions can affect the story.

From the moment you assume the role of your character, you are faced with a dramatic choice, and from here the game will begin deciding on your alignment through a percentage-based system, much like the ‘inFAMOUS’ or ‘Fallout’ games.
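The percentage-based alignment idea can be sketched in a few lines of Python. Everything here (the 50% starting point, the faction thresholds, the example choice values) is hypothetical illustration, not taken from our actual design document:

```python
class Alignment:
    """Track the player's loyalist/rebel split as a single percentage."""

    def __init__(self):
        self.loyalist = 50.0  # start perfectly neutral: 50% loyalist

    def record_choice(self, shift):
        """Shift alignment: positive = loyalist act, negative = rebel act."""
        self.loyalist = max(0.0, min(100.0, self.loyalist + shift))

    def faction(self):
        """Which content/reactions the world shows the player."""
        if self.loyalist >= 70:
            return "loyalist"
        if self.loyalist <= 30:
            return "rebel"
        return "neutral"

player = Alignment()
player.record_choice(-15)   # e.g. sparing a rebel during a raid
player.record_choice(-10)   # e.g. smuggling supplies to the wastelands
print(player.loyalist, player.faction())  # 25.0 rebel
```

Clamping to 0–100 and using wide thresholds means one choice never flips your faction instantly, which suits the "juggle both ideologies" playstyle described above.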

You can explore two worlds: the wastelands, home to many outposts claimed by the rebellion and to slums with a high concentration of branded clothing due to the high output; and Euphoria itself, where you can remain close to your family and have access to free healthcare (in the form of health packs that you are not able to use outside Euphoria). In each of these places you can take part in main quests and side quests.

The main conflict comes in the form of rebellion fighters and Helper bots that can transform into bigger mechs and vehicles, such as bipedal tanks.

The Characters / My Slides:

I was tasked with creating the character designs and moodboards and writing the backstories and summaries of each one. Below you will see my slides:


Target Markets:

We are aiming at individuals over the age of 18, as we would like to show grittier content that leaves an impact on those who play our game and makes them think.

We plan for our game to release on Xbox One, PlayStation 4 and PC, giving us a wide spread of possible fans to play the game.

Those who enjoy games such as ‘Call of Duty’, ‘Metal Gear Solid’, ‘Fallout’ and ‘Wolfenstein’ will most likely be drawn to this style of game, as it incorporates a strong narrative with first-person shooter and stealth gameplay.

People who play games to get lost in the world and explore, and fans of very plot-driven games, will get the most enjoyment out of this game, along with anyone who enjoys collecting achievements (whether on Steam or Xbox, or PlayStation’s Trophies).

The Presentation:

Here is the presentation that we all contributed to while bringing our ideas together; it is what we presented to our class and teacher for this unit:

Here – Saved on Google Docs


EX7: Keying

What is Keying?:

Keying is the process of separating objects in your shot from the green screen, which we are able to do with video editing software like After Effects. We use green for the editing process because green shows up a lot less than blue or red in our skin, making it easier for the software to identify what to remove and what to keep.
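The core of a green-screen key can be sketched as a "screen difference" matte: how much greener is each pixel than its red and blue channels? This is only a toy illustration of the idea (Keylight's actual algorithm is considerably more sophisticated):

```python
import numpy as np

def green_screen_matte(rgb):
    """Crude screen-difference matte: high values = green screen,
    low values = subject. rgb: float array in [0, 1], shape (h, w, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.clip(g - np.maximum(r, b), 0.0, 1.0)

# A pure green-screen pixel next to a skin-tone pixel:
frame = np.array([[[0.1, 0.9, 0.1],     # green screen
                   [0.8, 0.6, 0.5]]])   # skin tone
matte = green_screen_matte(frame)
```

Because skin has more red than green, its matte value clips to zero, which is exactly why a green screen separates people so cleanly.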

Why do we separate the footage into separate keys?:

This is because different areas of the subject that you are keying can react differently to the green screen.

Why are alpha layers so important?:

The alpha channel controls the transparency of the image: if it is black, the image is completely transparent, whereas if it is white it is solid, and we can find a good mix in between these values to make the subject semi-transparent. This is useful for removing the white outlines left by the lighting on the subject we are keying.
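The black-is-transparent, white-is-solid behaviour is the standard "over" compositing operator, which can be shown in a few lines (a minimal sketch with single-pixel images):

```python
import numpy as np

def alpha_over(fg, alpha, bg):
    """Standard 'over' composite: alpha 1.0 (white) keeps the foreground,
    0.0 (black) shows only the background, in-between is semi-transparent.
    fg, bg: (h, w, 3) float images; alpha: (h, w) matte in [0, 1]."""
    a = alpha[..., np.newaxis]        # broadcast the matte over RGB channels
    return fg * a + bg * (1.0 - a)

fg = np.full((1, 1, 3), 1.0)          # white subject pixel
bg = np.zeros((1, 1, 3))              # black background pixel
half = alpha_over(fg, np.array([[0.5]]), bg)   # semi-transparent -> mid grey
```
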

Why do we need to despill?

Despill is the process of removing the green that has spilled onto the subject; by doing this we can make the footage look more realistic. (I can’t think of any human with a green glow.)
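One common rule-of-thumb despill (real despill tools offer several algorithms; this is just one simple variant) is to clamp the green channel so it never exceeds the average of red and blue:

```python
import numpy as np

def despill(rgb):
    """Suppress green spill: green may never exceed (red + blue) / 2.
    rgb: float array in [0, 1], shape (h, w, 3)."""
    out = rgb.copy()
    limit = (rgb[..., 0] + rgb[..., 2]) / 2.0
    out[..., 1] = np.minimum(rgb[..., 1], limit)
    return out

# A skin pixel with green spill: green gets pulled down to (0.8 + 0.6) / 2
spilled = np.array([[[0.8, 0.9, 0.6]]])
clean = despill(spilled)
```

Pixels whose green is already below the limit pass through untouched, so only the spilled areas change.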


What are light wraps used for?

A light wrap makes the subject blend into the background by making the lighting of the background appear to hit the subject’s edges, thereby making the composite more realistic.
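A minimal sketch of the idea: "screen" the background onto the foreground, weighted towards the matte's soft edges. Note the edge weight here is simply `(1 - alpha) * strength` for brevity; a proper light wrap would blur both the matte and the background first so only the edges catch the light:

```python
import numpy as np

def light_wrap(fg, bg, alpha, strength=0.5):
    """Blend background light onto the foreground near matte edges.
    fg, bg: (h, w, 3) float images; alpha: (h, w) matte in [0, 1]."""
    w = ((1.0 - alpha) * strength)[..., np.newaxis]
    screened = 1.0 - (1.0 - fg) * (1.0 - bg)   # screen blend mode
    return fg * (1.0 - w) + screened * w

fg = np.full((1, 1, 3), 0.2)    # dark subject pixel
bg = np.full((1, 1, 3), 0.5)    # brighter background
edge = light_wrap(fg, bg, alpha=np.array([[0.5]]))  # soft matte edge
core = light_wrap(fg, bg, alpha=np.array([[1.0]]))  # solid interior
```

The solid interior of the subject is untouched; only the semi-transparent edge picks up the background's light, which is what sells the blend.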

My attempt:


First I removed the grain from the image to make the entire process easier on myself. After that, I selected the pen tool and started using it around the area of her legs to separate parts of the subject, animating the mask every now and then so it would stay in place on the moving subject.


I then applied the footage to the timeline.


I repeated this process except now for her torso and head.

Then, after masking everything that needed to be masked, I used Keylight to separate and remove the green background.

After that, I composited the background into the footage and used curves to colour correct the subject so that the two would match, blending the subject in with the background lighting and making it appear more realistic.


And here is the final result:

EX9b: 3D Projection

What is Projection Mapping?:

This is the process of generating a 3D space through the use of 2D images. To do this, you lay out the layers you are using at different distances from each other, with a camera enabled to navigate through them, which creates the illusion that the layers are now a 3D scene.

What is 3D Layering?:

These are essentially 2D layers in a 3D space, which After Effects then gives you the ability to move on the Z axis (as well as the X and Y), along with tools to change many of their properties, such as rotating without any warping.

These can be used with in-software cameras to create what is called ‘parallax’.


What is Parallax and why do we use it in Matte Paintings?:

What we mean by ‘parallax’ is the effect of objects closer to you appearing to move faster than objects that are further away from you.
Applying this knowledge to matte painting allows us to perceive depth, creating an almost-3D environment.
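The reason spacing the layers out creates parallax can be shown with a pinhole-camera approximation: for a small sideways camera move, a layer's apparent on-screen shift is inversely proportional to its depth. The layer names and depths below are made up for illustration:

```python
def apparent_shift(camera_move, depth, focal_length=1.0):
    """Pinhole-camera approximation: screen shift = f * move / depth."""
    return focal_length * camera_move / depth

# Layers spaced out like in the exercise (depths are illustrative):
for name, depth in [("bushes", 2.0), ("building", 5.0), ("mountains", 20.0)]:
    print(name, apparent_shift(camera_move=1.0, depth=depth))
```

The bushes end up shifting ten times further on screen than the mountains for the same camera move, which is exactly the depth cue that After Effects' 3D layers plus a camera reproduce.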

Render Passes:

A render pass is an image layer of your scene (matte painting) that can then be composited with other layers to create the complete picture.

My attempt:


I first got my matte painting from the last exercise, broke it down into render passes, exported them as PNGs, and dragged them into After Effects. I then made a new composition starting with the sky, and dragged and dropped the remaining layers in after it, keeping them in the intended order.


I then ticked the checkbox on each layer to make it a 3D layer, created a camera from the Layer drop-down menu, and selected a top view to observe the scene.


I then started selecting each layer and dragging them out, leaving a small distance between each one:


This created the illusion of depth in the ‘Active Camera’ view; however, it looked a lot messier, as the images weren’t sized for this purpose.


After fixing up the proportions, I added a keyframe, began adjusting the camera’s position on the X, Y and Z axes, and previewed it, which left me with this final result.

EX9a: Matte Painting

What is a Matte Painting?:

Sometimes referred to as a ‘DMP’ (digital matte painting), this is a painting of an environment or location that wouldn’t be possible to film, such as a space station. A good example of exactly this can be found in the original Star Wars trilogy, like with this background here:

Original Trilogy - Matte Paintings 04.jpg
This would have been painted on a sheet of glass and placed in front of the camera while filming. This shot in particular is especially impressive, as it tricked viewers into thinking there were really that many actors on screen at once.

This helped create the illusion that the characters were actually where they appear to be, and it would require a lot of skill to make the composition seem natural.

It is one of the earliest forms of VFX and is still used just as much today, as projects become more ambitious with the scope of their worlds due to modern expectations of VFX.

How were early matte paintings created?:

Originally, as there was no software for it, matte paintings were created by highly skilled artists using oil paints on glass.


Compositing Rules:

Like most things to do with VFX, there are many rules to consider for this method.

– Lighting: We want the effects to blend in with the scene; poor lighting will make the matte painting stand out and appear obviously fake to the viewer, so it is important that everything is lit properly.

– Colour Matching: We also want to make sure that the matte painting is colour corrected properly (this can also be done vice versa, grading the footage to match the painting), so that nothing stands out and becomes glaring for the viewer to spot. Matching black points is vital for creating a good shot.

– Focus Matching: This means matching the focus of the camera from when the footage was taken; typically, you would write down the camera settings you used when you took the shot, to help recreate the desired effect.

– Perspective Matching: This means making sure the matte painting is painted from the same perspective the footage was shot at.
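The black-point rule from the colour-matching point above can be sketched as a one-slider grade: offset the painting so its darkest value lines up with the footage's darkest value. (Real grading would use per-channel curves; the example values here are made up.)

```python
import numpy as np

def match_black_point(painting, footage):
    """Shift the painting so its minimum matches the footage's minimum."""
    return painting + (footage.min() - painting.min())

painting = np.array([0.10, 0.30, 0.55])   # milky blacks: nothing below 0.10
footage = np.array([0.02, 0.40, 0.80])    # footage blacks sit at 0.02
matched = match_black_point(painting, footage)
```

After the shift, the painting's shadows sit at the same level as the footage's, so the darkest areas of the composite no longer give the join away.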

Some more examples of Matte Paintings:

From Indiana Jones
From Lord of the Rings
Original Trilogy - Matte Paintings 16
From Star Wars


My attempt:

We were tasked with creating our own digital matte paintings in Photoshop, and given a large folder of resources to use, including Aztec/Mayan buildings and mountains.

Messing with alpha channels and the lasso tool, I was able to extract the parts needed for my matte painting, such as the bushes.

I started with the Aztec building: I removed the tourists from the original picture with the clone stamp tool, cropped the building out, and then moved it into a new Photoshop file.



I then added more grass to fill the bottom of the canvas, used the clone stamp tool to blend each picture in with the others, and then added bushes to the far left of the painting.


After this, I added two mountains to go behind the bushes to add more to the world, and colour corrected them to help them blend in with the overall composition so far.


After this, I added a cloudy sky as the background, and also added a small mountain range to the far right (and colour corrected it).


I then added ivy to the building to help it look a little bit different and then clone stamped some areas to make them look more natural.


Then finally, I added some branches, which didn’t require cropping as they were already transparent PNG files; these help bring the composition together through framing. Below is the final, rendered result:

ooo eee ooo aaa aaa.jpg

EX8: Matchmoving & Stabilisation


What is Matchmoving?:

Matchmoving is the process that allows you to composite computer graphics with your footage, syncing their movement through what is called ‘tracking’ so it appears that the graphic is moving with the scene.

What tools can you use?:

Tracking Points: These let you place markers that track position, rotation and scale.
Warp Stabiliser: This helps stabilise the footage, cropping corners and warping to help create the illusion of stillness.

Which areas are easier to track?:

Any area that has more detail is easier for the software to track, as it is easier to pick up the same features in different positions, whether they be wrinkles, spots, or small patterns that recur on the subject.
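A toy version of what a tracker does makes it clear why detail matters: take a small patch around the point in one frame and search nearby offsets in the next frame for the best match. Detailed, high-contrast patches give one sharp minimum; flat areas give many equally good matches. (This sum-of-squared-differences search is a simplified stand-in for After Effects' real tracker.)

```python
import numpy as np

def track_point(prev, cur, pt, patch=3, search=5):
    """Find where the patch around pt in `prev` moved to in `cur`,
    by brute-force SSD search within +/- `search` pixels."""
    y, x = pt
    ref = prev[y - patch:y + patch + 1, x - patch:x + patch + 1]
    best, best_off = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = cur[y + dy - patch:y + dy + patch + 1,
                       x + dx - patch:x + dx + patch + 1]
            ssd = np.sum((cand - ref) ** 2)
            if ssd < best:
                best, best_off = ssd, (dy, dx)
    return (y + best_off[0], x + best_off[1])

# A bright 'blue dot' on a dark frame, shifted 2 px right in the next frame:
prev = np.zeros((30, 30)); prev[15, 15] = 1.0
cur = np.zeros((30, 30));  cur[15, 17] = 1.0
```

The dots on the hand serve exactly this purpose: they guarantee a distinctive patch for the tracker to lock onto.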

My attempt:

First I converted the footage of the hand with blue dots into a TIFF image sequence. Then I took the first frame of the sequence into Photoshop and began to hide the dots using the clone stamp tool, which duplicates the patterns from one part of the subject over another to hide the dots while keeping its appearance natural.

The layer for hiding the blue spots to later be applied to the footage.

I then tracked the hand using the blue dots, with two tracking points and ‘track rotation, scale and position’ enabled; the software then processed the frames and created a path for the dots to move with the hand.


I then added the hand corrections to the path and rendered the video out, leaving me with this result:


As an extra task, I then decided to add a tattoo to the hand, and decided to use this PNG image.


And feathered it slightly, so that it would look as though it was part of the skin, and this was the result:



Why Stabilise?:

Stabilisation takes footage that has unwanted movement (i.e. shaking) and gets rid of that movement. This makes the footage easier on the eyes, and also easier to track things onto.
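The translation-only core of stabilisation can be sketched as: track one feature's position in every frame, then shift each frame by the opposite of that feature's drift from frame 0, pinning it in place. (Warp Stabiliser does far more, handling rotation, scale and subspace warps, but this is the basic idea; `np.roll` here stands in for a proper crop/auto-scale of the edges.)

```python
import numpy as np

def stabilise(frames, positions):
    """Shift each frame so the tracked feature stays where it was in frame 0.
    frames: list of 2D arrays; positions: list of (y, x) tracked points."""
    ref_y, ref_x = positions[0]
    out = []
    for frame, (y, x) in zip(frames, positions):
        dy, dx = y - ref_y, x - ref_x
        out.append(np.roll(frame, shift=(-dy, -dx), axis=(0, 1)))
    return out

# A dot that jitters by one pixel per frame ends up pinned at (5, 5):
frames = []
for x in (5, 6, 4):
    f = np.zeros((10, 10)); f[5, x] = 1.0
    frames.append(f)
stable = stabilise(frames, [(5, 5), (5, 6), (5, 4)])
```

This also shows why stabilising first makes later tracking easier: once the shake is cancelled, any remaining motion belongs to the subject itself.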

My attempt:

We were supplied with footage of a keyboard that shakes as it is being played, and tasked with using After Effects to get rid of the shaking. Below you will find the exact footage we were given, and as you can see, it moves quite a lot.

First I imported the footage into After Effects and dragged the clip into the preview space, but before I could work on it, I had to change the video from 30fps to 25fps, as we are in the U.K. region.

I then used the Warp Stabiliser (as mentioned above) to analyse the footage, which it processes automatically. After giving it time to do this, I changed the motion setting from ‘Smooth Motion’ to ‘No Motion’.

I then selected the option to stabilise along with cropping and auto-scaling, and after this I used the Subspace Warp method too.

I then previewed the video to make sure it was working correctly; the grain still being visible made it clear that the footage was actually playing. I then rendered it out, and the final result can be found here:


Exercise 2 – Current Software Technologies

In the GAVI industry, software is always being developed further so that we can get the best out of it and expand our skills; whether it be through art, programming, animating or editing, there’s something for everyone to try.
Here I will be discussing current software used in the games industry that I have personal experience with.

3D Modelling:

There are many software packages you can download for 3D modelling, such as Autodesk’s Mudbox or Pixologic’s ZBrush, but today I will be focusing on Autodesk Maya.


Autodesk Maya is 3D modelling software that allows you to create 3D models and assets for use in your project; it was first released in February 1998 (19 years ago as of writing!).

As someone who uses Maya regularly on their course, I can say that Maya is an excellent piece of software for modelling: it has all the tools you could need to create fully 3D, industry-standard character models, assets and even environments for your project. But Maya is far more than just modelling; it doubles as animation software, as you can rig your model, and with its simple-to-understand UI and in-depth animation processes (it can even calculate movements for you), you can see it is a very functional piece of kit.

Not only can you model and animate, it also comes with its own built-in lighting tools, so you can light your scene appropriately before you render it, along with compatibility with graphics editing software like Photoshop, so you can texture your creations after UV unwrapping them.

Maya is excellent for modelling assets that can be imported into game engines, and also for animating; with its in-depth tools, it would be great for creating cutscenes for your game.

Graphics Editors:

Like modelling software, there are many graphics editors on the market for people in the games industry to use, like GIMP or FireAlpaca, but I will be talking about the most popular example: Adobe Photoshop.


Although it is not currently capable of full 3D editing (developments we can expect in the future), Adobe Photoshop is 2D application software that allows you to edit photos, create images, animate, texture and do graphic design.

Many people in the games industry will be using this or similar software a lot in their workspace, depending on what role they have. It works great with the software mentioned above, like Maya, as you can use Photoshop to create textures (whether they be regular colour, specular or bump maps) for your models by exporting the UV net you made and then painting over it in a new layer until you’re ready to export it and apply it to your model.
Photoshop also has many other uses in the industry, as it can be used for art and design: artists can develop concept art and create matte paintings to get a feel for the game world. You can also use it to design logos and many other things of a similar nature.

Game Engines:

Game engines are vital to the games industry, for quite obvious reasons. As a result, you can find many game engines with both 2D and 3D capabilities, all having their own unique features developers can take advantage of, but all with their own negatives too.
These engines include Unreal Engine, Havok, GameMaker Studio and many, many more, but I will mostly be talking about Unity in this post.


As briefly spoken about above, Unity is capable of developing games of both the 2D and 3D variety. Two of its biggest features are the play area and the coding: we can use the play area to build the world of our game, moving and stretching things into place to fit the level plan. We can port the models we created into Unity and apply code to them, enabling them to become the player character and to walk, run, attack, jump and interact with things in response to control inputs. We can also apply physics and weight to the character and world to develop gameplay that feels satisfying.
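The input-to-movement loop described above can be sketched in Python (Unity itself uses C#, and the names here are illustrative, not Unity's API): build a direction from the pressed controls, then move at a speed scaled by the frame's delta time so movement is frame-rate independent.

```python
def update_player(position, inputs, speed, dt):
    """One frame of movement: pressed controls -> direction -> new position.
    position: (x, z) tuple; inputs: set of pressed control names;
    speed: units per second; dt: seconds since the last frame."""
    dx = (1 if "right" in inputs else 0) - (1 if "left" in inputs else 0)
    dz = (1 if "up" in inputs else 0) - (1 if "down" in inputs else 0)
    x, z = position
    return (x + dx * speed * dt, z + dz * speed * dt)

pos = (0.0, 0.0)
for _ in range(60):                      # one second at 60 fps
    pos = update_player(pos, {"right"}, speed=2.0, dt=1 / 60)
# pos is now ~(2.0, 0.0): two units to the right after one second
```

Scaling by `dt` is what engines like Unity do internally so that a character moves the same distance per second whether the game runs at 30fps or 60fps.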



EX6: Clean-Up

What is Clean-Up?:

Clean-up is the process of removing unwanted areas from your footage; a good example of this would be a street sign or a person.

My attempt:

My task was to take footage and remove the signs from the scene.

To do this, I began by rendering the footage given to us as a TIFF image sequence, which allowed me to get a frame to work with. I then opened the frame in Photoshop and used the clone stamp tool to begin removing the signs, replacing them with other elements of the background, like the grass.


I also moved the bush that was beside the sign, as it proved to cause difficulties; this is what the end result looked like.


And this is the final result:

Sign removal final