From Ed Zajac’s wireframe animation (1963) – often cited as the first computer-animated film – to James Cameron’s Avatar (2009), CGI is less than 50 years old. But look how far we’ve come! Of course, under all the rendered surfaces of Avatar we’ll find the familiar wireframe and polygon models.
Overview: CGI and film
It’s important to understand some of the fundamentals of CGI – how 3d models are made, how they are animated, and how they are made life-like or credible. So here’s my essential guide to CGI:
In MediArtInnovation I’ll be examining all the actual and likely components of the emerging total cinema, beginning with the tools that characterise what John Gaeta (visual effects supervisor on The Matrix series) calls Virtual Cinematography. These will include motion-capture, blue-screen/green-screen live action, motion-tracking, expression-capture, crowd-simulation, character-simulation, augmented reality, projection mapping and previz software.
The very beginnings of CGI on film date to the 1950s, when cameras were placed in front of cathode-ray-tube (CRT) screens and the on-screen images were either filmed in real time (for example, at 24 frames/sec) or shot frame-by-frame with a customised camera or film-recorder. This is the technique John Whitney – one of the pioneers of computer animation – used with his analog computers in his short films, and in 1958 for Saul Bass’s titles sequence for Hitchcock’s Vertigo.
Saul Bass and John Whitney: Vertigo film titles 1958
3d computer graphics uses the three-dimensional coordinates X, Y and Z – usually with X as the horizontal, Y as the vertical and Z as the depth dimension. By including a unit scale, the position and size of an object like this blue cube can be identified by its X,Y,Z coordinates.
By changing these coordinate numbers, objects can be repositioned, stretched, moved or distorted in any way numerically possible. Computer graphics is about coordinates and calculations.
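As a minimal sketch of that idea – hypothetical code, not from any production package – a cube can be held as a list of (x, y, z) vertices, and repositioning or stretching it is just arithmetic on those numbers:

```python
# A cube is just eight (x, y, z) vertices; moving and stretching it
# is arithmetic on those coordinates.

def translate(vertices, dx, dy, dz):
    """Reposition an object by adding an offset to every vertex."""
    return [(x + dx, y + dy, z + dz) for x, y, z in vertices]

def scale(vertices, sx, sy, sz):
    """Stretch an object by multiplying each axis by a factor."""
    return [(x * sx, y * sy, z * sz) for x, y, z in vertices]

# A unit cube with one corner at the origin (0, 0, 0).
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

moved = translate(cube, 10, 0, 0)   # shift 10 units along X
stretched = scale(cube, 1, 2, 1)    # double its height along Y
```

Real packages express these operations as 4×4 transformation matrices, but the principle – numbers in, numbers out – is the same.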
In CGI, a ‘virtual’ camera (VC) can be defined anywhere in XYZ space. Unlike a real camera, a virtual camera can have an infinitely adjustable (elastic) lens – from extreme telephoto to ultra wide-angle to macro – just by adjusting values in software. The virtual camera can be moved anywhere in XYZ space at whatever speed the animator desires. It can shoot at whatever frame rates are desired – from ultra slo-mo to stop-frame – and it can shoot in stereo 3d too. It can be set to any aperture, any depth-of-field, any filter, any colour-space. A VC is not limited by gravity or mechanics – it is infinitely flexible.
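The ‘elastic lens’ can be illustrated with a few lines of code (illustrative only, assuming a 36 mm full-frame sensor): the field of view follows from the standard lens formula fov = 2·atan(sensor / 2f), so sweeping from ultra wide-angle to extreme telephoto is just a matter of changing one number.

```python
import math

def field_of_view(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal field of view (degrees) for a given focal length."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# The 'elastic lens': no physical lens swap, just a changed value.
wide = field_of_view(14)        # ultra wide-angle, ~104 degrees
normal = field_of_view(50)      # 'normal' lens, ~40 degrees
telephoto = field_of_view(600)  # extreme telephoto, ~3.4 degrees
```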
Early computer graphics, though only simple wireframes, pushed the processing power of the time to its limits. Early computers would take several seconds to compute a 3d wireframe, several minutes to compute a complex 3d model, and often hours to compute a rendered image (with surface textures, point-source lighting etc). Each generation of computing power allowed further refinements to CGI – such as surface rendering, smooth-shading, texture-mapping, transparency, smoke and flame effects, cloud effects, skin-rendering, hair-rendering, fur-rendering, trees and forests, fractal landscapes, realistic humanoid and animal movement (motion-capture), and so on.
Indicative of the experimental computer-imaging techniques of the mid-1960s is Ken Knowlton’s discovery that computers could be programmed to reproduce the grey-scale of tonal images.
Ken Knowlton developed the Beflix (Bell Flicks) animation system in 1963, which was used to produce dozens of artistic films by Stan VanDerBeek, Knowlton and Lillian Schwartz. Instead of raw programming, Beflix worked using simple “graphic primitives”, such as draw a line, copy a region, fill an area, zoom an area, and the like.
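The flavour of those “graphic primitives” can be sketched like this – a toy reconstruction, not Knowlton’s actual Beflix code: the animator calls operations such as fill-an-area on a grid of cells, and each grid state becomes one frame of film.

```python
# Toy 'graphic primitives' operating on a grid of character cells.

def blank_frame(width, height):
    """Start with an empty grid of cells."""
    return [["." for _ in range(width)] for _ in range(height)]

def fill_area(frame, x0, y0, x1, y1, ch):
    """Primitive: fill a rectangular region with one value."""
    for y in range(y0, y1):
        for x in range(x0, x1):
            frame[y][x] = ch

def draw_hline(frame, y, x0, x1, ch):
    """Primitive: a horizontal line is a one-row fill."""
    fill_area(frame, x0, y, x1, y + 1, ch)

frame = blank_frame(8, 4)
fill_area(frame, 0, 0, 4, 2, "#")   # a block in the top-left corner
draw_hline(frame, 3, 0, 8, "-")     # a line along the bottom row
```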
Stan VanDerBeek + Ken Knowlton: Poemfield 1964–1967
VanDerBeek was very helpful to me when, as a student, he sent me a bundle of material for my 1967 graduation thesis on the Gesamtkunstwerk – the composite art work. He studied with the composer John Cage, the dancer Merce Cunningham and the architect-philosopher Buckminster Fuller at Black Mountain College, was a very influential film-maker, and developed his famous Movie-Drome studio in Stony Point, New York.
Bell Labs – a leading digital research centre at that time – made this short (16 min) documentary, A Computer Technique for the Production of Animated Movies (c1964), recording some of Knowlton’s research.
Big steps in CGI came in the late 1960s and early 1970s. Wireframes could be clad with polygon surfaces, and animation became more sophisticated. The earliest feature film to carry 3d CGI imagery was Richard Heffron’s Futureworld (1976), in which we glimpse some of these then state-of-the-art advances.
Futureworld – while not a Michael Crichton project like its predecessor Westworld – follows a fairly predictable story trope. Westworld was a futuristic theme-park ‘world’ where robotic cowboys populate a small western town. Adventurous theme-park visitors dress in character and can live among the cowboy robots – they can pick gun-fights with them, even shoot them, since the robots only use blanks in their guns. But something goes wrong with one of the robots, and he starts shooting back at the visitors, with real bullets.
(You recognise this plot in Jurassic Park and many of Crichton’s other hugely successful movies.)
Same thing with Futureworld, only this time it wasn’t by Crichton.
The next big step came with Disney’s Tron (Steven Lisberger, 1982). Here the plot entails our hero, played by Jeff Bridges, actually entering the cyberspace world of a computer and battling evil virus-code villains. This is the earliest movie to feature colour-keyed live-action film (heavily tinted here to give a CRT-blue colour) optically combined with a computer-generated, hand-coloured wireframe set.
Despite its obvious shortcomings compared to CGI now, Tron was a box-office success and prepared the way for more computer animation – leading, a dozen years later, to John Lasseter’s Toy Story (1995), the first completely computer-generated feature.
To marry CGI with live action, the position of objects and people in the real world (on studio movie-sets or on location) has to be synchronised with the ‘people’ and objects in CGI space. (And of course the camera lensing, lighting and aperture have to be synchronised too.) This is where two technologies come in: motion-capture (MOCAP) and motion-control.
MOCAP is the capture of an actor’s motion and movement, usually done in a green-screen or blue-screen studio. Motion-control feeds data on camera position, aperture, filters etc from the real-world camera to the virtual camera, so that images from both worlds can be aligned in perfect synch.
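A hypothetical sketch of the motion-control idea (the field names and class below are illustrative, not any real capture format): per-frame data recorded from the real camera is replayed onto the virtual camera, so the two ‘worlds’ stay in synch.

```python
# Per-frame samples recorded from the real-world camera rig.
real_camera_track = [
    {"frame": 1, "pos": (0.0, 1.5, -3.0), "focal_mm": 35, "f_stop": 2.8},
    {"frame": 2, "pos": (0.1, 1.5, -3.0), "focal_mm": 35, "f_stop": 2.8},
]

class VirtualCamera:
    """A virtual camera is just a bundle of adjustable numbers."""

    def __init__(self):
        self.pos = (0.0, 0.0, 0.0)
        self.focal_mm = 50
        self.f_stop = 4.0

    def apply(self, sample):
        """Copy one frame's real-camera settings onto the VC."""
        self.pos = sample["pos"]
        self.focal_mm = sample["focal_mm"]
        self.f_stop = sample["f_stop"]

vc = VirtualCamera()
for sample in real_camera_track:
    vc.apply(sample)  # the CGI frame would be rendered here
```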
Before Digital: Optical and Mechanical FX
Of course, the world of virtual cinematography has grown out of, and ultimately was based upon, the pre-digital world of optical special effects (opticals or mechanicals), where production designers, directors and cinematographers built real 3d sets and used either in-camera or laboratory-processed special effects. These early effects techniques – like glass-matte painting – were in widespread use by the 1930s (glass mattes were used in Gone With the Wind, Selznick 1939), and were still being used in Steven Spielberg’s Raiders of the Lost Ark in 1981.
Optical FX was the name given to in-camera effects and laboratory (processing) effects. In-camera effects include all the ways you could manipulate the filmic image inside the camera – such as double or multiple exposures, iris-fades, rewinding film, and matte-shots, where part of the lens is blocked off by a cut-out mask or glass matte: the film is shot with the matte in place, then rewound and a different matte used to expose the previously masked area. The glass-matte technique involves interposing a glass painting between the camera and the live action to add landscape or other ‘location’ cues, skyscapes, background scenery and so on. With careful control of depth-of-field, this becomes an in-camera effect.
Slit-scan is an analog or mechanical technique in which a physical optical slit is used to progressively expose a frame of film. It is used for special fx, especially for elongating or smearing objects. It was used by the optical fx wizard Doug Trumbull for the stargate sequence in Kubrick’s 2001: A Space Odyssey (1968), and by Bernard Lodge for the ‘time tunnel’ Doctor Who titles sequence (1973).
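The principle can be simulated in a few lines – a toy abstraction, not a model of the actual optics: each column of the output frame is exposed at a different moment while the source artwork moves past the slit, which is what produces the characteristic smearing and elongation.

```python
def slit_scan(source_column_at, width, height):
    """Build one output frame column by column.

    source_column_at(t) returns the source-image column visible
    through the slit at time t, as a list of `height` pixel values.
    Column x of the result is exposed at time t = x.
    """
    frame = [[0] * width for _ in range(height)]
    for x in range(width):            # each column: a different instant
        column = source_column_at(x)
        for y in range(height):
            frame[y][x] = column[y]
    return frame

# A gradient sliding past the slit stands in for the moving artwork.
frame = slit_scan(lambda t: [t + y for y in range(4)], width=8, height=4)
```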
to be continued…