This week we had a workshop on camera tracking using 3DEqualizer. This software has been my favourite to date for camera tracking, as the elements within it are easy to follow and enjoyable to work with.
We were given footage shot in Camden, and we had to use camera tracking points to create the most accurately tracked environment and camera movement possible.
We had to create tracking points and adjust their boxes to determine which pattern would be tracked. We were also able to adjust the gamma, contrast and saturation to make the patterns easier to track.


Every time a point is tracked it gives a bar going from green to red. If the bar enters the red area the tracking isn’t accurate, and the closer to green the bar stays, the more accurate the tracking is. I found that restarting tracking points that entered the red or even orange area made it much easier at the end to make sure all points were tracked accurately.
We then carried on tracking down the path using the same technique.

We were then able to see the camera movement in 3D, how stable it was, and whether there were any breaks in the line.


I noticed there were a few breaks in my camera line, so I worked out where I needed to add points, and this fixed them successfully.
We then added points to the bridge and the back walls to build a fuller environment from the footage.


When we click ‘Use Result’ on the 3D page, it creates a graph of all the tracking points called the ‘Deviation Browser’ and tells us the average deviation, a measure of accuracy. The higher the line goes, the less accurate the tracking is. This was my graph for my first attempt:

We were told 0.5 was ‘perfect’, so I aimed to improve my accuracy and get my average much lower than it was. This process consisted of getting rid of trackers with really high spikes that pulled the average up, and going back and redoing some tracking points. This got my average deviation down to roughly 0.59.
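To make sense of what that number means, the deviation is essentially an average reprojection error: how far each solved 3D point lands from where the 2D tracker actually saw it in the image, averaged across the points. Below is a minimal sketch of that idea in Python; the projection function, names and numbers are made-up stand-ins for illustration, not 3DEqualizer’s actual API.

```python
# Minimal sketch of the idea behind the deviation value: for each tracking
# point, compare where the solved 3D point reprojects into the image against
# where the 2D tracker saw it, then average those pixel distances.
# All names and numbers here are made up for illustration.
import math

def project_point(point_3d, camera):
    # Simple pinhole projection: camera-space (x, y, z) -> pixel (u, v).
    x, y, z = point_3d
    u = camera["focal_px"] * x / z + camera["cx"]
    v = camera["focal_px"] * y / z + camera["cy"]
    return u, v

def average_deviation(tracked_2d, solved_3d, camera):
    # Average pixel distance between each 2D track and its reprojection.
    errors = []
    for name, (tu, tv) in tracked_2d.items():
        pu, pv = project_point(solved_3d[name], camera)
        errors.append(math.hypot(tu - pu, tv - pv))
    return sum(errors) / len(errors)

camera = {"focal_px": 1500.0, "cx": 960.0, "cy": 540.0}
tracked_2d = {"wall_01": (1010.2, 610.4), "bridge_03": (400.3, 300.1)}
solved_3d = {"wall_01": (0.05, 0.07, 1.5), "bridge_03": (-0.56, -0.24, 1.5)}

print(round(average_deviation(tracked_2d, solved_3d, camera), 2))  # ~0.38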
We were then told to add the data of the camera used and enter it under ‘Lens Distortion’, which would make the camera tracking match more accurately.

This, along with some more of the tweaking I had done previously, got my deviation down to 0.3.
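The reason the camera and lens data helps is that real lenses bend the image slightly, especially towards the edges of the frame, and the solver has to account for that bending when matching 2D tracks to 3D points. As a rough illustration only (a generic radial model, not necessarily the exact distortion model 3DEqualizer applies), distorted image coordinates can be written as:

$$x_d = x_u\,(1 + k_1 r^2 + k_2 r^4), \qquad y_d = y_u\,(1 + k_1 r^2 + k_2 r^4), \qquad r^2 = x_u^2 + y_u^2$$

where (x_u, y_u) are the ideal undistorted coordinates and k_1, k_2 are coefficients that depend on the lens, which is why giving the software the real camera and lens information tightens the solve.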

We then calculated the final environment and began to play around with these points and the camera movement to see how we could implement 3D models into the footage.



We then used ‘Mesh points’ in the 3D model tool to make a mesh out of the tracking points in each area, creating the environment that can be imported into Nuke and Maya.


We then exported the lens data for Nuke, and imported the file containing the footage, tracked camera and 3D models into Maya.

From this, we could see the 3D environment areas and the tracked camera, so we would be able to include 3D models that stay in place within the environment as the camera moves.
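As a rough illustration of why the models stay put: once the tracked camera and the point locators are in Maya, a model only has to be placed at (or constrained to) one of those locators in world space, and the solved camera moving around it reproduces the original camera move, so the model appears locked to the plate. A minimal sketch using Maya’s Python commands, with made-up node names:

```python
# Minimal sketch (Maya Python) of placing a model on a tracked point so it
# sticks to the footage. The node names "track_point_012" and "phone_box_geo"
# are hypothetical, just for illustration.
import maya.cmds as cmds

# World-space position of one of the imported tracking-point locators.
pos = cmds.xform("track_point_012", query=True, worldSpace=True, translation=True)

# Move the model to that position; viewed through the solved camera it now
# appears anchored to that spot in the plate as the camera moves.
cmds.xform("phone_box_geo", worldSpace=True, translation=pos)
```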

This workshop was immensely useful and I learnt how camera tracking can be used alongside 3D models to create the scenes we see in films. It was very engaging, and most of the class found camera tracking satisfying, which opened my eyes to this being something I could potentially be interested in doing in the future.