Comping scene

Using Nuke, I comped the footage together with the effect from Houdini.

I used a tracker to remove the jitter from the video, and colour corrected the fire to try and make it look more like it belongs in the scene.

The add mix helps the fire to blend into the scene.
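The difference between the two merges can be sketched as per-pixel maths (a rough Python sketch with made-up pixel values, not Nuke's actual implementation):

```python
# Sketch of the per-pixel maths behind two merge operations,
# using premultiplied RGBA tuples. Pixel values are illustrative.

def merge_over(a, b):
    """'over': A + B * (1 - A.alpha) -- B shows through where A is transparent."""
    return tuple(ac + bc * (1.0 - a[3]) for ac, bc in zip(a, b))

def merge_plus(a, b):
    """'plus' (add mix): A + B -- light is summed, which suits fire and glows."""
    return tuple(ac + bc for ac, bc in zip(a, b))

fire = (0.8, 0.3, 0.05, 0.4)   # semi-transparent fire pixel
plate = (0.2, 0.2, 0.2, 1.0)   # background footage pixel

over = merge_over(fire, plate)
plus = merge_plus(fire, plate)
```

Because the add mix sums the fire's light on top of the plate rather than covering it, the fire reads as emissive instead of like a sticker laid over the footage.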

Standard over merge:

Add mix node

Rendered result:

I’m disappointed that the render of the effect was so poor.

I think that with a better render of the pyro, the shot would look more effective.

I am, however, happy with the way the fire emits from the actor’s hands and, thanks to the motion capture, follows his movements well.

A follow-on step from here would be to create parts of the scene in Houdini, such as the ground, to allow for reflections, as well as showing specular reflections on the actors.

Renders and Adventures with Arnold for Houdini

Mantra render of effect

This was the best render I was able to achieve with the built-in Mantra renderer.

If I decreased the voxel size or upped the render quality, it seemed to go beyond the capabilities of the machines available at the university, causing them to crash a lot.
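The reason shrinking the voxel size hits memory so hard is that voxel count grows with the inverse cube of the voxel size. A rough back-of-envelope sketch (the bounding-box size and voxel sizes here are made up, not my actual sim settings):

```python
# Rough estimate of why halving the voxel size blows up memory:
# voxel count scales with (1 / voxel_size) ** 3.

def voxel_count(bbox_size, voxel_size):
    """Voxels along one axis of a cubic bounding box, cubed."""
    per_axis = round(bbox_size / voxel_size)
    return per_axis ** 3

coarse = voxel_count(2.0, 0.02)  # 100 voxels per axis
fine = voxel_count(2.0, 0.01)    # 200 voxels per axis -> 8x the voxels
```

So halving the voxel size multiplies the voxel count (and roughly the RAM) by eight, which is how a sim that just fits can suddenly crash a machine.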

The next option was to try Arnold for Houdini for the renderer.

This did not go very well on the machines at the university. Even after a great deal of effort from Anna, the technician, we were unable to get Arnold rendering on the university machines.

In the end I was able to get Arnold working on my home machine. This is what Arnold is able to produce when rendering pyro:

As you can see, it looks fantastic. Once Arnold for Houdini is working on the university machines, I will look into producing a better quality render.

Motion capture from footage

I now needed to obtain more accurate motion capture data from the footage.

First I rendered out two shots from the footage that only showed each of the two “wizards”.

I found an online service which can convert footage into motion capture data.

This was the result:

As you can see, this does a pretty great job at capturing the motion from the footage.

It’s not perfect, however, and needs some cleaning up.

I wanted to clean up the animation using Autodesk’s MotionBuilder.

I also wanted to use a different model to the one imported from the motion capture software.

This is one of the default mannequins from Mixamo.

I pulled the mocap data into MotionBuilder.

This binds the animation of the mocap to the new model.

Adding foot contacts and a floor should help make the feet more realistic, as it stops them from clipping through the floor.

Houdini Continued – Pyro!

Using a pyro node to simulate fire, based on the emission popnet.

Adding burn to the node as an extra attribute to add realism, and adding fire properties to the popnet.

I had to use an expression to get the emitter to stop emitting after frame 83.
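In Houdini this kind of gate would typically be an expression on the emitter's activation parameter (something like the Hscript comparison `$F < 83`, which evaluates to 1 before frame 83 and 0 afterwards — the exact parameter name here is an assumption, not a record of my setup). The same logic as a plain Python sketch:

```python
# Python sketch of the emission gate: emit only until frame 83.
STOP_FRAME = 83

def emission_active(frame):
    """Return 1 while the emitter should fire, 0 once it should stop."""
    return 1 if frame < STOP_FRAME else 0

# Frames 1-82 emit; from frame 83 onwards the emitter is off.
active_frames = [f for f in range(1, 101) if emission_active(f)]
```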

Rasterizing the popnet and these attributes creates a volume from the flow of particles. This is important for a fire sim, as a pyro sim uses a volume to display fire.
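Conceptually, rasterizing just bins the particle positions into a 3D grid of densities that the pyro sim can advect and shade. A minimal sketch (real rasterization splats a smooth kernel per particle; this just counts points per voxel, and the particle positions are made up):

```python
# Minimal sketch of "rasterizing" particles: bin positions into a
# sparse 3D voxel grid of densities.
from collections import defaultdict

def rasterize(points, voxel_size):
    density = defaultdict(float)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        density[key] += 1.0  # real tools add a smooth, weighted kernel
    return density

particles = [(0.1, 0.1, 0.1), (0.15, 0.12, 0.1), (0.9, 0.1, 0.1)]
grid = rasterize(particles, 0.5)  # two particles share a voxel, one is alone
```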

Pyro node

Using a pyro node based on the rasterized popnet.

The pyro solver simulates the fire “particles”.

The bake volume node shades the colours and smoke of the fire.

Disturbing the fire creates more realistic effects.

Result

Houdini Continued – Particle emitter source

Now that I have the character in the scene, I can use the model to define the source of emission for the fire “spell”.

After unpacking the geo, the model is displayed in wireframe, showing the points on the mesh.

I then blast out two points on the mesh and merge them together to create a line that runs between the two palms of the model.

I do the same thing with two points at the ends of the palms. I initially used the fingertips, but I later discovered that the fingertips do not really determine the direction of the “spell” — it is the palms. Now I have two lines.

Using the midpoints of the two lines, I can create a third line defining the direction of the spell.
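The construction boils down to simple vector maths: take the midpoint of each palm line, and the vector between the two midpoints is the spell direction. A small sketch with made-up palm coordinates (not taken from my scene):

```python
# Sketch of the spell-direction construction from two palm lines.

def midpoint(p, q):
    return tuple((a + b) / 2.0 for a, b in zip(p, q))

def direction(src, dst):
    d = tuple(b - a for a, b in zip(src, dst))
    length = sum(c * c for c in d) ** 0.5
    return tuple(c / length for c in d)  # normalised direction vector

line_between_palms = ((-0.2, 1.4, 0.0), (0.2, 1.4, 0.0))  # across the palms
line_at_palm_ends = ((-0.2, 1.4, 0.5), (0.2, 1.4, 0.5))   # at the palm ends

m1 = midpoint(*line_between_palms)
m2 = midpoint(*line_at_palm_ends)
spell_dir = direction(m1, m2)  # points straight out of the palms
```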

This is what the node network looks like.

Merging the geo network into another geo node allows me to create an emitter. Blasting out the surface I want to use will maintain the position and direction of the hands.

A scatter node creates random points on the surface.

An add noise node, scaled with time, can be used to randomize the source.

Changing the emission type to all points means that all of the points are now emitters.

The points now have a life expectancy, which means they will exist for a time and then die off.
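The idea of life expectancy can be sketched as: each particle is born with a lifespan (a base life plus random variance) and is killed once its age exceeds it. The numbers here are illustrative, not my actual settings:

```python
# Sketch of particle life expectancy with a random variance.
import random

def spawn(life, variance, rng):
    """Create a particle with age 0 and a randomised lifespan."""
    return {"age": 0.0, "life": life + rng.uniform(-variance, variance)}

def alive(p):
    return p["age"] <= p["life"]

rng = random.Random(7)
p = spawn(2.0, 0.5, rng)  # lifespan somewhere in [1.5, 2.5] seconds
p["age"] = 1.0            # still within its lifespan
```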

A popnet, or POP network, can be used to simulate the emitter. This means that the scattered, noised emission points are used to generate particles (POPs) at intervals.

Adding a pop force for gravity means that they will fall to the ground once created.

Having a negative birth-time jitter means each particle’s birth is offset backwards in time within the frame, so the emission looks continuous rather than pulsing once per frame.
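A sketch of what negative birth jitter does, assuming 24 fps (the frame rate and values here are illustrative, not Houdini's internals):

```python
# Sketch of negative birth-time jitter: instead of every particle
# appearing exactly on the frame, each birth is pushed back by a
# random fraction of a frame so the stream has no visible pulses.
import random

FPS = 24.0

def birth_times(frame, count, rng):
    """Jitter each birth backwards within the previous frame interval."""
    t = frame / FPS
    return [t - rng.random() * (1.0 / FPS) for _ in range(count)]

rng = random.Random(1)
times = birth_times(10, 5, rng)  # five births spread across one frame
```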

The points are then told to inherit the velocity so that they have momentum as they are moved.
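Together, the gravity force and inherited velocity amount to a simple integration loop per particle, which can be sketched like this (plain Euler integration with made-up values; the actual solver is more sophisticated):

```python
# Sketch of a pop force (gravity) acting on a particle that inherits
# the emitter's velocity, integrated with simple Euler steps.

GRAVITY = (0.0, -9.81, 0.0)

def step(pos, vel, dt):
    vel = tuple(v + g * dt for v, g in zip(vel, GRAVITY))  # apply force
    pos = tuple(p + v * dt for p, v in zip(pos, vel))      # move particle
    return pos, vel

pos, vel = (0.0, 1.4, 0.0), (0.0, 0.0, 2.0)  # velocity inherited from emitter
for _ in range(24):                          # one second at dt = 1/24
    pos, vel = step(pos, vel, 1.0 / 24)
```

The inherited forward velocity carries the particle away from the hands while gravity pulls it into an arc, which is what gives the stream its momentum.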

Houdini Continued – Particle collisions

I needed the fire to collide with the second wizard in the scene. For this to occur, the emitter particles need to collide with the geometry of the second wizard.

Improved pyro

To do this, a collision calculation node is used on the geo of the second wizard.

This creates a kind of shell around the model of the second wizard, which means that if any particles collide with it, they bounce off in the correct direction.
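The “correct direction” comes from the standard reflection formula: the particle’s velocity is reflected about the surface normal, r = v − 2(v·n)n. A minimal sketch (in practice some energy loss would also be applied):

```python
# Sketch of the bounce: reflect a velocity about a surface normal,
# r = v - 2 * (v . n) * n, where n is a unit normal.

def reflect(v, n):
    dot = sum(a * b for a, b in zip(v, n))
    return tuple(a - 2.0 * dot * b for a, b in zip(v, n))

# A particle flying along +z hits a surface whose normal faces -z:
bounced = reflect((0.0, 0.0, 3.0), (0.0, 0.0, -1.0))
```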

Superior collision detection.

Now within the popnet, the emitted particles collide with the geo and change direction accordingly.

This did have the desired effect for the stream of particles representing the spell. However, as you can see, when I apply the pyro to the volume, it treats the collision shell on the second wizard as a fire source, meaning he was burning from frame 1!

It took me a couple of days to sort out this issue, during which I made use of a Houdini forum.

The solution came from a very kind person on the forum, who told me I needed a “dopio” (DOP I/O) node to filter out parts of the DOP network.

Houdini Continued – Importing Mocap and models

To import the mocap data from mixamo.com into Houdini, I needed to use an import node.

I first used an agent node, used predominantly for importing character models.

It allows you to import the baked-in motion capture data alongside the model by defining Mixamo as the source.

The model and the skeleton with animation are then imported into the scene.

Due to the issues I was having I discovered a better node to use for the import:

The FBX character import node. This can then be used alongside a bone deform node to define the bone transformations applied to animate the model.

Now I have imported the model into the scene, alongside the animation.

Learning Houdini and Mocap

So I want to try to add some effects to this shot.

https://vimeo.com/728464899

To do this, I will need to understand how to use Houdini to make a “flamethrower” type effect.

I had to get my head around Houdini as it is software that I haven’t used before.

Like similar applications, the software gives you a stage where you can view geometry in 3D space, with parameters and settings you can change on the left.

The software uses a node-based system, similar to Nuke and Maya’s Hypershade.

To start adding geometry to the scene, a “geo” node is required.

Within this node, you are able to define objects and apply other nodes to affect their behaviour within the scene.

Mocap

To comp the effect I want to create in Houdini effectively into the scene, I will need to use motion data from the scene to affect how the effect renders. This is useful for positioning, and for replicating the realistic, subtle movements that humans exhibit.

The rendered effect will need extra information for effective composition.

This will include specular light affecting the surrounding environment and also the subjects.

As you can see in this image, the fire casts a strong orange specular light onto the subject. Also, noted for later comping, there is a really cool natural lens flare in this shot.

As you can see in the base footage:

This information will need to be comp’d into the scene in order for it to look accurate.

There is a possibility of retroactively adding this to the base footage, but I think it would look far more realistic to generate the specular data from the render of the effect, to comp on top of the scene.

For this, I will need geometry of the actors within Houdini to use.

To make things easier to start with I have decided to use some motion capture data.

Using data from mixamo.com, I have found this motion capture showing a two-handed spell cast.

This is the model I will use to interact with Houdini.

Here is the motion data I will use to simulate receiving the “hit”.