
Voodoo: Unity 3D
R&D Project

Specialmoves Labs

The Voodoo project came about through a desire to build a 3D application for the iPad, learning as much as possible along the way about the tools and workflow involved. The result is an interactive doll where the user can grab and toss him around a 3D environment, pull his limbs with multitouch gestures, and optionally map a Facebook profile picture to his face.

This article outlines our experience of developing for iOS with Unity3D and some of the technical hurdles we encountered along the way.

Development environment

Publishing from Unity to iOS requires the iOS SDK, which is only available for Mac. Setting up iOS developer accounts and installing the SDK is not a straightforward process, so thankfully the Unity website provides a step-by-step guide. Once the SDK is set up, installing Unity3D is a simple affair.

Hello World

Our “hello world” came in the shape of Robin Hood – the 3D character asset left over from the online game Robin Hood Showdown. He was dropped into an empty scene in a new Unity project and prepared for the physics engine using the built-in ragdoll wizard. Testing within the Unity IDE (by pressing the play button), we saw Robin quickly fall off the bottom of the screen – the gravity worked! The next step was building for the iPad. In the File > Build Settings popup, we set up an iPad build. On clicking Build And Run for the first time, an Xcode project is generated and immediately compiled for the iPad. Again, the Unity website has some good info on publishing builds.

Importing models

Soon afterwards we had a draft doll model ready to import. The ragdoll wizard takes the character mesh and skeleton from a 3D modelling package (in our case 3ds Max) and re-interprets the limbs as solid objects for the physics simulation, each with a mass related to the size of the limb for added realism of motion. The wizard also creates “character joints”, which connect the limbs together while restricting their range of movement as if they were real human joints. This saves the hassle of configuring individual limbs and joints; however, it does assume that the ragdoll is to behave like a human. We later discovered that some changes to this assumed behaviour were required.
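To give a feel for what the wizard automates, the sketch below shows roughly the components it wires up for each limb: a rigidbody with a size-based mass, a collider, and a character joint linking it to its parent. This is a hand-written approximation under our own assumptions, not the wizard's actual code, and the values and names are illustrative.

// Approximation of the per-limb setup the ragdoll wizard automates.
// Component choices and values here are illustrative assumptions.
using UnityEngine;

public static class RagdollSetup
{
    public static void AttachLimb(GameObject limb, Rigidbody parentLimb, float mass)
    {
        // Solid object for the physics simulation, heavier for larger limbs
        Rigidbody body = limb.AddComponent<Rigidbody>();
        body.mass = mass;

        limb.AddComponent<CapsuleCollider>();

        // Connect to the parent limb with human-like movement limits
        CharacterJoint joint = limb.AddComponent<CharacterJoint>();
        joint.connectedBody = parentLimb;
    }
}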

Performance considerations

After a few initial iPad prototypes it became clear that we needed to consider performance optimisation from the start. With just a fairly low-poly character model and a few flat planes for walls in the scene, the frame rate had already dropped to unacceptable levels. Fortunately Unity3D iOS includes a performance profiler which outputs the time taken (in milliseconds) to complete each process in the rendering pipeline, averaged over the previous 30 frames. This can be used to root out any potential performance bottlenecks.

cpu-player>     min:  9.8   max: 24.0   avg: 16.3
cpu-ogles-drv>  min:  1.8   max:  8.2   avg:  4.3
cpu-waits-gpu>  min:  0.8   max:  1.2   avg:  0.9
cpu-present>    min:  1.2   max:  3.9   avg:  1.6
frametime>      min: 31.9   max: 37.8   avg: 34.1
draw-call #>    min:   4    max:   9    avg:   6     | batched:    10
tris #>         min:  3590  max:  4561  avg:  3871   | batched:  3572
verts #>        min:  1940  max:  2487  avg:  2104   | batched:  1900
player-detail   physx:  1.2 animation:  1.2 culling:  0.5 skinning:  0.0 batching:  0.2  render: 12.0 fixed-update-count: 1 .. 2
mono-scripts    update:  0.5   fixedUpdate:  0.0 coroutines:  0.0
mono-memory     used heap: 233472 allocated heap: 548864  max number of collections: 1 collection total duration:  5.7

Example iOS profiler output.

It turned out that, in our case, the bottleneck was the high number of draw calls being made per frame. A bit of research showed this was because the default shaders Unity uses (to apply effects to textures) were more advanced than we needed. We had no requirement for special lighting effects such as specular highlights, which take several draw calls to create, so we changed all our materials to use the far simpler Vertex-Lit shader. The shader choice alone had a big impact on performance.
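As a rough illustration of the fix, a script along these lines can sweep a scene and move every material onto the simpler shader. This is a minimal sketch: "VertexLit" matches Unity's built-in shader set, but the scene-wide sweep is our own convenience here, not code from the project.

// Minimal sketch: switch every renderer's materials to the built-in
// VertexLit shader to cut per-frame draw calls.
using UnityEngine;

public class UseVertexLitShaders : MonoBehaviour
{
    void Start()
    {
        Shader vertexLit = Shader.Find("VertexLit");
        foreach (Renderer r in FindObjectsOfType<Renderer>())
        {
            foreach (Material m in r.materials)
            {
                m.shader = vertexLit; // replaces costlier specular shaders
            }
        }
    }
}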

Touch

Engineering the way the user interacts with the 3D character was a key challenge of the project. We produced prototypes of moving the doll around by shaking and tilting the iPad, and of different ways one might connect touch-drag gestures to movements of the doll in 3D. The challenge was compounded by the fact that Unity3D has minimal touch gesture recognition, forcing us to write custom code to recognise any swipe/drag interactions. After some prototyping and playing, we decided that the core interaction would be touch-dragging the limbs of the doll. This required writing a touch manager to track multiple touches on screen, so users could pull the doll in all directions. The connection of the user’s finger to the doll’s limb was created in several steps (sketched in code after the list):

  1. at the point where the screen is touched, a line is fired forwards in 3D space to check where it connects with a limb
  2. an invisible character joint is attached to the limb at the point it intersects the line
  3. the position of this joint is restricted so that it is always the same distance from the screen when moved
  4. as the finger moves across the screen, the invisible joint is moved to the same position in 3D, pulling the limb and the rest of the doll with it
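A condensed sketch of those four steps, handling a single touch, might look like the following. The SpringJoint, the kinematic anchor object, and the names are our illustrative choices; the project's actual touch manager tracked multiple simultaneous touches.

// Sketch of the finger-to-limb connection, single touch only for brevity.
using UnityEngine;

public class LimbDragger : MonoBehaviour
{
    private SpringJoint dragJoint; // invisible joint between finger and limb
    private float dragDepth;       // distance from the camera, kept constant

    void Update()
    {
        foreach (Touch touch in Input.touches)
        {
            if (touch.phase == TouchPhase.Began && dragJoint == null)
            {
                // 1. fire a ray forwards in 3D space from the touch point
                Ray ray = Camera.main.ScreenPointToRay(touch.position);
                RaycastHit hit;
                if (Physics.Raycast(ray, out hit) && hit.rigidbody != null)
                {
                    // 2. attach an invisible joint to the limb at the hit point
                    GameObject anchor = new GameObject("DragJoint");
                    anchor.transform.position = hit.point;
                    Rigidbody rb = anchor.AddComponent<Rigidbody>();
                    rb.isKinematic = true; // moved by us, not by physics
                    dragJoint = anchor.AddComponent<SpringJoint>();
                    dragJoint.connectedBody = hit.rigidbody;
                    // 3. remember the depth so the joint stays a fixed
                    //    distance from the screen
                    dragDepth = hit.distance;
                }
            }
            else if (touch.phase == TouchPhase.Moved && dragJoint != null)
            {
                // 4. move the joint with the finger, pulling the limb
                //    and the rest of the doll with it
                Ray ray = Camera.main.ScreenPointToRay(touch.position);
                dragJoint.transform.position = ray.GetPoint(dragDepth);
            }
            else if (touch.phase == TouchPhase.Ended && dragJoint != null)
            {
                Destroy(dragJoint.gameObject);
                dragJoint = null;
            }
        }
    }
}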

Creating the finger-limb connection using a joint is much simpler than manipulating the limb in code, as it saves having to calculate the limb’s new position and rotation based on where it is grabbed. Also, directly setting the positions of rigidbodies under the control of the physics engine is not recommended, as it can destabilise the simulation.

Behaviour

We held many discussions about how to make the doll character animate in an interesting manner. It was not enough for him to simply fall limp to the floor as soon as the app starts – a behaviour the ragdoll physics engine gives for free. We wanted him to stand unaided so it would be more satisfying to push him over. We considered several options:

  • disable the ragdoll physics while the doll is standing
  • attach an invisible joint to the top of the doll’s head to suspend him
  • put the doll on an invisible stand attached to his pelvis
  • make the doll joints more rigid and design the doll in such a way that he could balance on his feet with the physics forces still applied

The simplest option was to configure the doll so that he could balance on his feet, because we wouldn’t have to program new behaviours or enable/disable certain restrictions as the user interacted. However we didn’t want to be restricted in the design of the doll (i.e. make it bottom-heavy with large feet to balance), so we modified the solution. His joints were made more rigid (which also supported the notion that he is a doll and not a limp human), and we added invisible skis to his feet. When his stance was set with legs apart he would not fall to either side, and the invisible skis prevented him from falling forwards or back.
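Stiffening the joints can be done in one sweep over the ragdoll. The sketch below tightens the swing limits on every character joint; the limit values are illustrative assumptions rather than the project's settings, and the invisible skis could be modelled as unrendered box colliders parented to the feet.

// Sketch: tighten every CharacterJoint so the doll holds a pose instead
// of flopping. Limit values are illustrative, not the project's settings.
using UnityEngine;

public class StiffenRagdoll : MonoBehaviour
{
    void Start()
    {
        foreach (CharacterJoint joint in GetComponentsInChildren<CharacterJoint>())
        {
            SoftJointLimit limit = joint.swing1Limit;
            limit.limit = 10f; // degrees of allowed swing; smaller = stiffer
            joint.swing1Limit = limit;

            limit = joint.swing2Limit;
            limit.limit = 10f;
            joint.swing2Limit = limit;
        }
    }
}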

Communicating outside of Unity

An important goal of ours was accessing photos from Facebook, so we investigated our options for accessing Facebook’s APIs. Facebook do provide an SDK for iOS, but as Unity is a layer of abstraction away from the iOS code, it is not possible to call its functions directly. Our solution was based on Unity’s plugins feature, which lets us explicitly call functions in the native iOS code and so reach the Facebook API. Setting up the Facebook SDK for iOS required deeper iOS development knowledge, but luckily Alex was on hand to help. We built code that would pause the Unity screen and pop up a working Facebook login screen, but as time was against us that’s as far as we took it. We eventually opted for a simpler solution for loading Facebook images.
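For reference, the plugin mechanism boils down to declaring an extern function in C# that is implemented on the native side of the generated Xcode project. The DllImport("__Internal") pattern is Unity's standard iOS bridge; the function name below is hypothetical.

// Sketch of the Unity-to-native bridge. ShowFacebookLogin is a
// hypothetical function implemented in Objective-C inside the Xcode
// project, e.g. extern "C" void ShowFacebookLogin();
using System.Runtime.InteropServices;
using UnityEngine;

public class FacebookBridge : MonoBehaviour
{
    [DllImport("__Internal")]
    private static extern void ShowFacebookLogin();

    public void Login()
    {
        // Only the device build has the native side compiled in
        if (Application.platform == RuntimePlatform.IPhonePlayer)
        {
            ShowFacebookLogin();
        }
    }
}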

Facebook profile pictures

As a shortcut to getting images into Unity, we used the Unity WWW class. This enables us to load an image directly from a URL into a texture, as long as the URL is publicly accessible. That rules out accessing a Facebook user’s photo gallery, but if they haven’t disabled it in their privacy settings it is possible to load their profile picture. All we need is their Facebook ID – the username or number in the URL of their profile page – and using the Facebook Graph API we can get their profile image from the URL: http://graph.facebook.com/<facebookID>/picture?type=large.
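Loading it takes only a few lines. This is a minimal sketch, assuming a renderer on the target object; the facebookId value is a placeholder.

// Sketch: download a profile picture with the WWW class and apply it
// as a texture. The facebookId value is a placeholder.
using System.Collections;
using UnityEngine;

public class ProfilePictureLoader : MonoBehaviour
{
    public string facebookId = "exampleUser"; // username or numeric ID

    IEnumerator Start()
    {
        string url = "http://graph.facebook.com/" + facebookId + "/picture?type=large";
        WWW www = new WWW(url);
        yield return www; // coroutine resumes once the download completes

        GetComponent<Renderer>().material.mainTexture = www.texture;
    }
}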

Image manipulation

The final challenge was mapping the custom face photo to the mask of the doll. Our initial implementation used a Projector component, which works much like a real-life projector: you point it at a surface and it projects an image onto it. In 3D games this invisible component is frequently used to apply textures on top of other textures, such as blob shadows under characters or bullet holes on walls. After some tests we found the projector was not flexible enough to give the result we wanted, so we set about editing the mask texture directly. Because Unity has no built-in image manipulation functions, this meant writing our own tools for per-pixel masking, scaling and translation. Unfortunately, performing these manipulations without hardware acceleration on the iPad proved very sluggish indeed, which made it tricky to position and scale the face within the mask. For a production release another implementation would be needed, such as previewing the position/scale using 3D planes and only computing the mask once editing is complete. Thankfully, once the mask editing interface was closed, performance returned to normal.
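Per-pixel work in Unity typically goes through Texture2D's GetPixels/SetPixels, which run on the CPU and would explain the sluggishness. A simplified sketch of the masking step, assuming two readable, same-sized textures and using the mask's red channel as coverage:

// Sketch: multiply a face texture by a greyscale mask, pixel by pixel.
// Both textures must be readable and the same size in this simplified form.
using UnityEngine;

public static class TextureMasking
{
    public static Texture2D ApplyMask(Texture2D face, Texture2D mask)
    {
        Color[] facePixels = face.GetPixels();
        Color[] maskPixels = mask.GetPixels();

        for (int i = 0; i < facePixels.Length; i++)
        {
            facePixels[i] *= maskPixels[i].r; // mask red channel as coverage
        }

        Texture2D result = new Texture2D(face.width, face.height);
        result.SetPixels(facePixels);
        result.Apply(); // upload the modified pixels to the GPU
        return result;
    }
}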

Learnings

Coming from a Flash development background, we found the project an excellent learning opportunity and a real eye-opener to the power of dedicated 3D game engines such as Unity3D. Having worked previously with 3D in Flash, I found the built-in physics and the many functions for 3D calculation and manipulation a welcome addition. It was also a great introduction to iOS development and deployment using Xcode, and I recommend it to anyone looking to get into this area. We also learned a lot about our R&D process. As we encountered problems along the way, it was vital that we knew exactly what our priorities were and which features we really couldn’t do without. It’s inevitable in any learning project that some tasks will take longer than expected, and it’s important to react to this to squeeze the most out of the time available.