Week 6 – Level up! More Motion Tracking

This week we learnt about ‘Matchmoving’, the technique that allows computer-generated graphics to be inserted into live-action footage with the correct position, scale, orientation, and motion. Also known as motion tracking, it’s what allows movie monsters to run down Main Street and robots to run through crowds and look real!

It can track the 2D position (the X and Y) of a small cluster of pixels and ‘solve’ the 3D position. It can reverse-engineer the camera’s movement, angle and lens angle, and attach this data to other layers or objects. However, like the more basic motion tracking we looked at in Week 5, it cannot track pixels that are obscured by another object or by motion blur.
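As a rough sketch of what ‘solving’ means here, the snippet below uses OpenCV’s solvePnP to recover a camera pose from 2D tracks. It assumes the 3D positions of the tracked features are already known, whereas real matchmove software estimates those and the camera together (usually via bundle adjustment), and every number in it is made up.

```python
# A minimal sketch of the "solve" step in matchmoving, using OpenCV.
import numpy as np
import cv2

# Hypothetical 3D positions of tracked features, in scene units.
object_points = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.5, 0.5, 1.0],
    [0.2, 0.8, 0.5],
], dtype=np.float64)

# Their tracked 2D pixel positions in one frame (made-up values).
image_points = np.array([
    [320.0, 240.0],
    [480.0, 245.0],
    [485.0, 360.0],
    [318.0, 355.0],
    [400.0, 180.0],
    [350.0, 300.0],
], dtype=np.float64)

# A guessed pinhole camera: focal length in pixels, principal point at centre.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)     # rotation matrix: camera orientation
    camera_position = -R.T @ tvec  # camera position in scene space
    print("camera position:", camera_position.ravel())
```

Once you have the camera per frame, a CG object placed in the scene renders with the same perspective as the footage, which is exactly what makes the composite stick.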

When experimenting with Matchmoving, we first used it to attach some text to a box.

We then experimented a little more, creating a VFX hole in a road… complete with terribly placed fire and smoke! If I had more time to work on this I would have cleaned it up a lot more; however, it still gets across the potential of Matchmoving.

Week 5 – Visually Effective Visual Effects… Motion Tracking!

In this week’s workshop we looked at Motion Tracking.

An important part of Motion Tracking is understanding what it can and cannot do. Motion Tracking can track the 2D position – the X and Y – of a small cluster of pixels, and that data can then be attached to another layer or object. However, it cannot track pixels that are obscured by another object or by motion blur, and it cannot track the 3D position – the additional Z axis – of a cluster of pixels.
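To make that concrete, here is a minimal Python/OpenCV sketch of 2D point tracking as template matching, which is roughly how a point tracker follows a small cluster of pixels from frame to frame. The clip name, feature region and confidence threshold are all hypothetical.

```python
# A minimal sketch of 2D point tracking as template matching.
import cv2

cap = cv2.VideoCapture("box_footage.mp4")   # hypothetical clip
ok, frame = cap.read()

# Initial feature region: a small cluster of pixels around (x, y).
x, y, size = 300, 200, 20
template = frame[y:y + size, x:x + size]

track = []  # the per-frame (x, y) data we could attach to another layer
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Find where the pixel cluster best matches in the new frame.
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, (bx, by) = cv2.minMaxLoc(result)
    if score < 0.6:
        # Occlusion or motion blur: the cluster no longer matches well,
        # so the track is lost - just as we found in the workshop.
        break
    track.append((bx, by))
    # Refresh the template so gradual appearance changes don't break it.
    template = frame[by:by + size, bx:bx + size]
```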

Our first experiment involved the Motion Tracking of points on a box.

We learnt that the Motion Tracking tags struggle to track pixel clusters filmed at a low shutter speed, because of the motion blur, whereas footage shot at a higher shutter speed fared much better. Also, if the points were obscured or went off the edge of the footage, the tracking was lost. We also experimented with 4 tracking points using the Corner Pin Perspective to achieve different results… we discovered that by using the 4 tracking points we were able to skew the image to make it appear 3D when it isn’t.
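As a rough illustration of the Corner Pin idea, here is a minimal Python/OpenCV sketch of pinning a flat graphic to four tracked corners. The file names and corner coordinates are hypothetical stand-ins for real track data, and a real plug-in would do this for every frame.

```python
# A minimal sketch of a Corner Pin: four points define a perspective warp,
# so a flat graphic can be skewed to look 3D.
import numpy as np
import cv2

graphic = cv2.imread("label.png")   # hypothetical image to pin
h, w = graphic.shape[:2]

# Four corners of the graphic, and the four tracked corners in the frame.
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
dst = np.float32([[120, 80], [420, 110], [400, 330], [100, 300]])

# One tracked point only gives position; four give a full perspective skew.
M = cv2.getPerspectiveTransform(src, dst)

frame = cv2.imread("frame.png")     # hypothetical footage frame
warped = cv2.warpPerspective(graphic, M, (frame.shape[1], frame.shape[0]))

# Composite: paste the warped graphic wherever its mask has pixels.
mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), M,
                           (frame.shape[1], frame.shape[0]))
frame[mask > 0] = warped[mask > 0]
cv2.imwrite("pinned.png", frame)
```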

Our second experiment involved using Motion Tracking to create a 2D sky replacement. We attached the tracking points to a section of building and the tip of a tree that were not obscured by anything, attached that data to a null layer, and attached our sky replacement to that. Then, using a mask and feathering it out, we created a very basic sky replacement. We also explored ways of making the sky replacement more realistic and believable by adding in different lighting effects and light sources.
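Here is a minimal sketch of that sky-replacement idea, assuming we already have the tracked null’s offset for the frame. The file names, threshold and feather amount are all made up, and where we drew a mask by hand the sketch swaps in a simple luminance key to build one automatically.

```python
# A minimal sketch of a 2D sky replacement with a feathered mask.
import numpy as np
import cv2

frame = cv2.imread("street.png")    # hypothetical footage frame
sky = cv2.imread("new_sky.png")     # replacement sky, same size

# Rough sky mask via a luminance key: very bright pixels count as sky.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

# Feather the mask so the edge blends instead of cutting hard.
feathered = cv2.GaussianBlur(mask, (31, 31), 0).astype(np.float32) / 255.0
feathered = feathered[..., None]    # broadcast over the colour channels

# Shift the sky by the tracked null's offset so it moves with the camera.
dx, dy = 12, -4                     # hypothetical track offset this frame
M = np.float32([[1, 0, dx], [0, 1, dy]])
sky = cv2.warpAffine(sky, M, (frame.shape[1], frame.shape[0]))

# Blend: feathered mask picks sky, the rest keeps the original frame.
out = (frame * (1 - feathered) + sky * feathered).astype(np.uint8)
cv2.imwrite("sky_replaced.png", out)
```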

Week 4 – Augmented Reality

This week we had a look at Augmented Reality: a technology that superimposes a computer-generated image on a user’s view of the real world, thus providing a composite view. Mostly this involved looking at what was possible with the Augment app (http://www.augmentedev.com).

 
With Augment you are able to simulate your 3D models in the real world, at real size and in real time. This is all done using trackers. You can use the universal tracker, which the app tracks automatically; set your own images as trackers; or go trackerless, which automatically places your 3D object on the floor.

The app is great fun in your spare time, but it is also potentially really useful for sales in businesses. I had a play around with the app myself, and here are some of the things that I produced.
This is Clive in a lecture theatre… with left shark.

Week 3 – I project a riot

Projection Mapping

Projection Mapping uses everyday video projectors, but instead of projecting on a flat screen (e.g. to display a PowerPoint), light is mapped onto any surface, turning common objects of any 3D shape into interactive displays. More formally, projection mapping is “the display of an image on a non-flat or non-white surface”.

One of the first commercial examples of Projection Mapping was the Haunted Mansion at Disneyland, California, in 1969. The dark ride featured a number of interesting optical illusions, including a disembodied head, Madame Leota, and 5 singing busts, the ‘Grim Grinning Ghosts’, singing the theme song of the ride. These were accomplished by filming head-shots of the singers (on 16 mm film) and then projecting this film onto busts of their faces.

In 1998 Projection Mapping really started to gain traction when it was pursued in academia. “Spatial Augmented Reality” was born out of work at UNC Chapel Hill by Ramesh Raskar, Greg Welch, Henry Fuchs, Deepak Bandyopadhyay et al. It all got started with the paper The Office of the Future [2], which envisioned a world where projectors could cover any surface. Instead of staring at a small computer monitor, we would be able to experience augmented reality right from our desks: Skyping with life-size versions of our office mates and viewing life-size virtual 3D models. This work even featured an early real-time, imperceptible 3D scanner (like the Kinect).

Today Projection Mapping as an art form can be used for many things, including advertising, live concerts, theatre, gaming, computing, decoration and anything else you can think of. Specialized software, or just some elbow grease, can be used to align the virtual content and the physical objects. One piece of software that allows you to manipulate this virtual content is HeavyM.

HeavyM is a ready-to-use projection mapping application. It allows you to create visual animations and project them onto real objects with a video projector. You can also use your own video content and adapt it to your structures. As well as being a tool, HeavyM is a community where people can share their ideas and projects, help each other, or express themselves.
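Under the hood, the alignment step behind tools like this usually comes down to a perspective pre-warp: the content is warped with a homography so that it lands squarely on the physical surface. The sketch below is not HeavyM’s actual internals, just the general idea in Python/OpenCV, and every coordinate is made up (in practice you would drag the corners into place by eye).

```python
# A minimal sketch of projection mapping alignment: pre-warp the content
# so it appears undistorted on a tilted physical surface.
import numpy as np
import cv2

content = cv2.imread("animation_frame.png")  # hypothetical visual to project
h, w = content.shape[:2]

# Corners of the content, and where the physical surface sits within the
# projector's output frame (normally lined up by hand while projecting).
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
surface = np.float32([[210, 95], [830, 140], [800, 560], [180, 520]])

# Homography from content space into projector space, then the pre-warp.
H = cv2.getPerspectiveTransform(src, surface)
projector_frame = cv2.warpPerspective(content, H, (1024, 768))

# Sending projector_frame to the projector makes the content sit squarely
# on the object, even though the projector hits it at an angle.
cv2.imwrite("projector_output.png", projector_frame)
```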

Another piece of Projection Mapping software is an app called Dynamapper.