Virtual Cinematography and Total Cinema


Ed Zajac: Simulation of a Two-Gyro Gravity Gradient Attitude Control System (1963) – the earliest 3d CGI film that I have found. It illustrates the hypothesis that only two gyro-servos would be needed to orientate a satellite in orbit.


James Cameron: Avatar (2009). Movies that rely on about 80% CGI have become commonplace in the last decade or so. Cameron’s Avatar, like Cuaron’s Gravity (2013), Jackson’s Lord of the Rings (2001-3) and Verbinski’s Pirates of the Caribbean (2003), depends upon considerable CGI, and demonstrates just how far we have come from Ed Zajac’s 1963 film.

From Ed Zajac’s wireframe (1963) – the first 3d animated model – to James Cameron’s Avatar (2009), CGI is less than 50 years old. But look how far we’ve come! Of course, under all the rendered surfaces of Avatar, we’ll find the familiar wireframe and polygon models.

Overview: CGI and film

It’s important to understand some of the fundamentals of CGI – how 3d models are made, how they are animated, and how they are made life-like or credible. So here’s my essential guide to CGI:

In MediArtInnovation I’ll be examining all the actual and likely components of the emerging total cinema, beginning with the tools that characterise what John Gaeta (vfx designer on The Matrix series) calls Virtual Cinematography. These will include motion-capture, blue-screen/green-screen live action, motion-tracking, expression-capture, crowd-simulation, character-simulation, augmented reality, projection mapping and previz software.


The very beginnings of CGI on film are in the 1950s, when cameras were put in front of cathode-ray-tube (CRT) screens and the images on the screen were either filmed in runtime (for example, at 24 frames/sec) or shot frame-by-frame with a customised camera or film-recorder. This is the technique John Whitney – one of the pioneers of computer animation – used for his analog computer-animated short films and, in 1958, for his contribution to Saul Bass’s titles sequence for Hitchcock’s Vertigo.



Saul Bass and John Whitney: Vertigo film titles 1958

3d computer graphics uses the three-dimensional coordinates X, Y and Z – usually with X as the horizontal, Y as the vertical and Z as the depth dimension. By including a unit scale, the position and size of objects like this blue cube can be identified by their X,Y,Z coordinates.


By shuffling these coordinates (changing the numbers), objects can be repositioned, stretched, moved, distorted in any way numerically possible. Computer graphics is about coordinates and calculations.
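The arithmetic behind this shuffling is simple enough to sketch. Here is a minimal illustration in Python with NumPy – the cube and the particular transforms are my own example, not taken from any graphics package:

```python
import numpy as np

# Eight corners of a unit cube, one (x, y, z) row per vertex.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)

# Reposition: add an offset to every vertex (translation).
moved = cube + np.array([5.0, 0.0, 0.0])      # shift 5 units along X

# Stretch: multiply one axis (non-uniform scale).
stretched = cube * np.array([1.0, 2.0, 1.0])  # double the height (Y)

# Distort: any function of the coordinates will do (a simple shear).
sheared = cube.copy()
sheared[:, 0] += 0.5 * sheared[:, 1]          # X leans over with Y
```

Every operation – move, stretch, distort – is just arithmetic on the same table of numbers, which is why the computer is so much at home here.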


In CGI, a ‘virtual’ camera (VC) can be defined anywhere in XYZ space. Unlike a real camera, a virtual camera can have an infinitely adjustable (elastic) lens – from extreme telephoto to ultra wide-angle to macro – just by adjusting these values in software. The virtual camera can be moved anywhere in XYZ space at whatever speed the animator desires. It can shoot at whatever frame rates are desired – from ultra slo-mo to stop-frame – and it can shoot in stereo 3d too. It can be set to any aperture, any depth-of-field, any filter, any colour-space. A VC is not limited by gravity or mechanics – it is infinitely flexible.
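In software terms, the ‘lens’ of a virtual camera is just a number. A minimal pinhole-camera sketch (Python/NumPy; the function and its parameters are my own illustration, not any production renderer’s API) shows how changing one field-of-view value re-lenses the shot, with no glass and no gravity involved:

```python
import numpy as np

def project(point, cam_pos, fov_degrees):
    """Project a 3d point onto a virtual camera's image plane.

    The camera sits at cam_pos looking down +Z; fov_degrees is the
    horizontal field of view -- small values = telephoto, large = wide-angle.
    """
    # Focal length (in image-plane units) derived from the field of view.
    f = 1.0 / np.tan(np.radians(fov_degrees) / 2.0)
    x, y, z = np.asarray(point, dtype=float) - np.asarray(cam_pos, dtype=float)
    # Perspective divide: more distant objects land nearer the image centre.
    return (f * x / z, f * y / z)

p = (1.0, 1.0, 10.0)                                   # a point in XYZ space
wide = project(p, cam_pos=(0, 0, 0), fov_degrees=90)   # wide-angle
tele = project(p, cam_pos=(0, 0, 0), fov_degrees=10)   # extreme telephoto
```

The same point lands much further from the centre of the telephoto frame than the wide-angle one – the whole telephoto/wide-angle range is one parameter.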



William Fetter: Boeing Man wireframe, 1963. These 3d animations were used to provide ergonomic design data for the planning of airplane cockpit instrument layout. To my knowledge they weren’t committed to film.

Early computer graphics, though only simple wireframes, pushed the processing power of the time to its limits. Early computers would take several seconds to compute a 3d wireframe, several minutes to compute a complex 3d model, and often hours to compute a rendered image (with surface textures, point-source lighting etc). Each generation of computing power allowed further refinements to CGI – surface rendering, smooth-shading, texture-mapping, transparency, smoke and flame effects, cloud effects, skin-rendering, hair-rendering, fur-rendering, trees and forests, fractal landscapes, realistic humanoid and animal movement (motion-capture), and so on.


This sequence of 2d wireframes is by Charles Csuri, one of the great pioneers of CGI. He was experimenting here with hand-drawn images that have been digitised. Csuri demonstrates what can be done by algorithmically distorting the images (stretching, squeezing, copying, pasting etc). Yes – in the early days of computer graphics, all these basic operations had to be discovered and illustrated!


In this plate (like the others above, from Gene Youngblood: Expanded Cinema, 1970), we see samples from John Whitney’s mesmeric computer-generated films, produced using his adapted analog computer, tinted with filters, and presented as triple-image projections in 1967.

Indicative of the kind of experimental computer-imaging techniques of the mid 1960s is Ken Knowlton’s discovery that computers can be programmed to calculate the grey-scale of tonal images:


Ken Knowlton: Studies in Perception No 1, 1966. The realisation that the computer was a medium dawned very slowly, really by increments, as computer-imaging, then computer animation, computer-poetry, computer-music (etc) emerged in the 1960s. The far-sighted Jasia Reichardt celebrated the computer as an art medium in her 1968 exhibition at the London ICA – Cybernetic Serendipity. Knowlton’s was one of the computer-images featured. This is also reproduced from Gene Youngblood’s brilliant 1970 overview, Expanded Cinema.

Ken Knowlton developed the Beflix (Bell Flicks) animation system in 1963, which was used to produce dozens of artistic films by Stan VanDerBeek, Knowlton and Lillian Schwartz. Instead of raw programming, Beflix worked using simple “graphic primitives” – draw a line, copy a region, fill an area, zoom an area, and the like.


Stan VanDerBeek: Poemfield, 1964-1967. The radically innovative VanDerBeek partnered with Ken Knowlton – a great pioneer of computer graphics – to create computer-generated typographic poetry. “Poem Field is the name of a series of 8 computer-generated animations by Stan VanDerBeek and Ken Knowlton in 1964-1967. The animations were programmed in a language called Beflix (short for “Bell Flicks”), which was developed by Knowlton.” (wikipedia)

Stan Vanderbeek + Ken Knowlton: Poemfield 1964 – 1967

VanDerBeek was very helpful to me when, as a student, I was writing my 1967 graduation thesis on the Gesamtkunstwerk – the composite art work – and he sent me a bundle of material to help. He studied with the composer John Cage, the dancer Merce Cunningham and the architect-philosopher Buckminster Fuller (at Black Mountain College), was a very influential film-maker, and developed his famous Movie-Drome studio in Stony Point, New York.

Bell Labs – a leading digital research centre at that time – made this short (16 min) documentary, A Computer Technique for the Production of Animated Movies (c1964), recording some of Knowlton’s research.


Big steps in CGI came in the late 1960s and early 1970s. Wireframes could be clad with polygon surfaces, and animation became more sophisticated. The earliest feature film to carry CGI images was Richard Heffron’s Futureworld (1976), in which we glimpse some of these then state-of-the-art advances.


Richard Heffron: Futureworld, 1976. Heffron revisits the familiar territory of Michael Crichton’s Westworld (1973), where Yul Brynner plays a robotic replica of a cowboy-gunslinger. Brynner makes a cameo appearance in Futureworld too – alongside Peter Fonda. But Futureworld was the first major feature to incorporate 3d CGI – and the clip below illustrates the state of the art in the mid-1970s.

Futureworld – while not a Michael Crichton project like its predecessor Westworld – follows a fairly predictable story trope. Westworld was a futuristic theme-park ‘world’ where robotic cowboys populate a small western town. Adventurous theme-park visitors dress in character and can live among the cowboy robots – they can pick gun-fights with them, even shoot them. The cowboy robots only use blanks in their guns. But something goes wrong with one of the robots, and he starts shooting back at the visitors, with real bullets.

(You will recognise this plot in Jurassic Park and many of Crichton’s other hugely successful movies.)

Same thing with Futureworld, only this time it wasn’t by Crichton.


Stephen Lisberger: Tron, 1982. Early chromakey/blue-screen matting of live-action filmed characters into a CGI wireframe set.

The next big step came with Disney’s Tron (Stephen Lisberger, 1982). Here the plot entails our hero, played by Jeff Bridges, actually entering the cyberspace world of a computer and battling with evil virus-code villains. This is the earliest movie to feature colour-keyed live action film (heavily tinted here to give a CRT-blue colour), optically combined with a computer-generated, hand-coloured wireframe set.


Lisberger’s 1982 storyboard/visualisation of a frame for Tron. Xerox photocopy machines had a considerable impact on graphics at this time, but the marker-pen was the usual storyboard tool.

Despite its obvious shortcomings compared to CGI now, Tron was a box-office success and prepared the way for more computer animation – leading, a dozen years later, to John Lasseter’s Toy Story (1995), the first completely computer-generated feature.

To marry CGI with live action, the position of objects and people in the real world (on studio movie-sets or on location) has to be synchronised with the ‘people’ and objects in CGI space. (And of course the camera lensing, lighting and aperture have to be synchronised too.) This is where two technologies come in: motion-capture (MOCAP) and motion-control.

MOCAP is the capture of an actor’s motion and movement, usually done in a green-screen or blue-screen studio. Motion control is the feeding of data on camera position, aperture, filters etc from the real-world camera to the virtual camera, so that images from both worlds can be aligned in perfect synch.
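In data terms, motion control is simply the streaming of one camera’s state into the other, frame by frame. Here is a hypothetical sketch of the kind of per-frame record involved – the field names and classes are my own invention for illustration, not any studio’s actual protocol:

```python
from dataclasses import dataclass

@dataclass
class CameraState:
    """One frame's worth of real-camera telemetry."""
    frame: int
    position: tuple      # (x, y, z) in world units
    rotation: tuple      # (pan, tilt, roll) in degrees
    focal_length: float  # lens focal length in mm
    aperture: float      # f-stop

class VirtualCamera:
    """Mirrors the physical camera so CGI renders line up with the live plate."""
    def __init__(self):
        self.state = None

    def sync(self, telemetry: CameraState):
        # Each frame: copy the real camera's lens and position into CGI space.
        self.state = telemetry

vc = VirtualCamera()
vc.sync(CameraState(frame=1, position=(0, 1.8, -5), rotation=(0, 0, 0),
                    focal_length=35.0, aperture=2.8))
```

Once both cameras share one set of numbers, a rendered element dropped into the filmed frame keeps the same perspective, parallax and depth-of-field as the live action.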

Before Digital: Optical and Mechanical FX

Of course, the world of virtual cinematography has grown out of, and ultimately was based upon, the pre-digital world of special effects (‘opticals’ or ‘mechanicals’), where production designers, directors and cinematographers built real 3d sets and used either in-camera or laboratory-processed special effects. These early effects techniques – like this example of glass-matte painting – were in widespread use by the 1930s (glass mattes were used in Gone With the Wind, Selznick 1939), and were still being used in Steven Spielberg’s Raiders of the Lost Ark in 1981.

Optical FX was the name given to in-camera effects and laboratory (processing) effects. In-camera effects include all the ways you could manipulate the filmic image inside the camera – such as double or multiple exposures, iris-fades, rewinding film, and matte-shots (where part of the lens is blocked off by a cut-out mask or glass matte – the film is shot with the matte, then re-wound, and a different matte is used to expose the previously blocked part of the frame).


Diagram showing the set-up for shooting with glass matte paintings – the real location and the glass background painting are married together in the camera.


Another glass matte rig. The compositing together was an in-camera effect – the foreground scene is shot through a matte, so that painting and live-action are composited together on the film negative.


Norman Dawn: Glass matte technique used in film, 1907. According to the blogger Yang, at the excellent thespecialeffectsblog.blogspot: “Norman Dawn made several improvements on the matte shot to apply it to motion pictures, and was the first director to use rear projection in cinema. Dawn combined his experience with the glass shot with the techniques of the matte shot. Up until this time, the matte shot was essentially a double-exposure: a section of the camera’s field would be blocked with a piece of cardboard to block the exposure, the film would be rewound, and the blocked part would also be shot in live action. Dawn instead used pieces of glass with sections painted black (which was more effective at absorbing light than cardboard), and transferred the film to a second, stationary camera rather than merely rewinding the film. The matte painting was then drawn to exactly match the proportion and perspective of the live action shot. The low cost and high quality of Dawn’s matte shot made it the mainstay of special effects cinema throughout the century. Traditionally, matte paintings were made by artists using paints or pastels on large sheets of glass for integrating with the live-action footage. The first known matte painting shot was made in 1907 by Norman Dawn (ASC), who improvised the crumbling California Missions by painting them on glass for the movie Missions of California. Notable traditional matte-painting shots include Dorothy’s approach to the Emerald City in The Wizard of Oz, Charles Foster Kane’s Xanadu in Citizen Kane, and the seemingly bottomless tractor-beam set of Star Wars Episode IV: A New Hope.”

Glass mattes – this technique was in widespread use in the 1930s (notably Gone With the Wind – Selznick, 1939) and involves interposing a glass painting between the camera and the live-action to add landscape or other ‘location’ cues (skyscapes, background scenery etc). With careful control of depth-of-field, this becomes an in-camera effect.
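The digital descendant of the matte shot is a one-line equation: each pixel comes from the foreground plate where the matte is 1 and from the background where it is 0. A sketch with NumPy – the tiny ‘frames’ here are invented for illustration:

```python
import numpy as np

# Two 4x4 greyscale 'frames': live-action foreground, painted background.
foreground = np.full((4, 4), 0.9)   # bright studio plate
background = np.full((4, 4), 0.2)   # dark matte painting

# The matte: 1 where the live action should show, 0 where the painting does.
matte = np.zeros((4, 4))
matte[2:, :] = 1.0                  # bottom half = live action

# The two 'exposures' combined -- digitally, in one step, rather than
# two passes of the same negative through the camera:
composite = foreground * matte + background * (1.0 - matte)
```

Where the in-camera glass matte relied on the unexposed negative to hold the second image, the digital version simply weights the two sources per pixel.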


Doug Trumbull: Slit-scan rig for Kubrick’s 2001 A Space Odyssey, 1968

Slit-scan is an analog, manual technique in which a physical optical slit is used to progressively expose a frame of film. It is used for special fx, especially for elongating objects. The optical fx wizard Doug Trumbull used it for the stargate sequence in Kubrick’s 2001: A Space Odyssey (1968), and Bernard Lodge later used it for his 1973 Dr Who titles sequence (Lodge’s original 1963 Dr Who titles used the ‘howlaround’ video-feedback technique).


Doug Trumbull/Stanley Kubrick: Star-gate sequence from 2001: A Space Odyssey (1968) – a 9.5 minute sequence engineered using slit-scan (and some old footage from Dr Strangelove). This is the most remarkable example – outside movies that are about the effects of drugs – of the influence of counter-culture experimental film-making upon major Hollywood productions.

to be continued…




The Cinema of Totality

(This is the first in a series of posts about the emergence of indicators of a Cinema of Totality)


Oskar Schlemmer: Triadic Ballet, Bauhaus 1922. Ideas of the Gesamtkunstwerk – the total art work – drove both Walter Gropius (founder and director of the Bauhaus) and Laszlo Moholy-Nagy, who in 1923 took over as head of the Preliminary (Foundation) Course from Johannes Itten. Oskar Schlemmer had his ideas too, inspired by von Kleist’s essay on the Marionette Theatre (1810), and concurring with Moholy-Nagy’s vision of a theatre of machines.

Back in 1924, in his book Theatre of the Bauhaus, the artist-designer-film-maker Laszlo Moholy-Nagy, then in charge of the Foundation year at the Bauhaus, included a piece headed Theatre, Circus, Variety, in which he described THE COMING THEATER – A THEATER OF TOTALITY:

“Every form process or Gestaltung has its general as well as its particular premises, from which it must proceed in making use of its specific media. We might, therefore, clarify theatre production if we investigated the nature of its highly controversial media: the human word and the human action, and, at the same time, considered the endlessly possible creator—man.”

“It must be said, however, that those ideas, in contradistinction to a literary-illustrative viewpoint, have unquestionably advanced creative theater precisely because they were diametrically opposed. They canceled out the predominance of the exclusively logical-intellectual values. But once the predominance has been broken, the associative processes and the language of man, and consequently man himself in his totality as a formative medium for the stage, may not be barred from it. To be sure, he is no longer to be pivotal—as he is in traditional theater—but is to be employed ON AN EQUAL FOOTING WITH THE OTHER FORMATIVE MEDIA.”


Examples of indicators of the emerging Cinema of Totality. The Wachowskis’ The Matrix (1999) in many ways mapped out the new ‘virtual cinematography’ – the intricate compositing of live action, motion-capture and CGI, freeing the camera both in space and time.

I believe that we are witnessing the early stages of a Cinema of Totality. Very early stages, I suggest, as the true art of the 21st century is still only being hinted at in the great variety of emergent art-forms and media innovation that has marked the last twenty years. These nascent forms include interactive installation-art, augmented reality, virtual reality, e-books, web-sites, virtual worlds, projection-mapping, webcasting, geographical information systems, social media, fractals, CGI, robotics, simulation software, chatbots, machine-generated literature, games, MMORPGs, hypertext and hypermedia, multi-touch, gesture recognition, speech-recognition, machine-learning, video-art, artificial life, genetic algorithms, QR (Quick Response) codes, hologram projection, tamagotchi, exoskeletons, transmedia narrative, synthespians, software agents, Internet of Everything, wikipedia, internet archive, Skype, animatronics, interactive documentary, digital journalism, realtime motion capture, PreViz software – and of course a swath of acronymic technologies, including: html, vrml, RSS, PHP, Java, CCTV, HD, QTVR, VVVV, RFID, GPS, GIS, etc.

Some key books have hinted at the possibilities here: Brenda Laurel: Computers as Theatre (1991), Howard Rheingold: Virtual Reality (1991), Clifford Pickover: Computers and the Imagination (1991), Steven Levy: Artificial Life (1992), Neal Stephenson: Snow Crash (1992), Noam Chomsky: Manufacturing Consent (1992), Scott Bukatman: Terminal Identity (1993), Howard Rheingold: The Virtual Community (1993), April Greiman: Hybrid Imagery (1993), Kevin Kelly: Out of Control (1994), Bob Cotton and Richard Oliver: The Cyberspace Lexicon (1994), Robert Pepperell: The Post-Human Condition (1995), Sherry Turkle: Life on the Screen (1996), Mitchell Resnick: Turtles, Termites and Traffic Jams (1997), Janet Murray: Hamlet on the Holodeck (1997), Bob Cotton and Richard Oliver: Understanding Hypermedia 2.000 (1998), Hans Moravec: Robot (1999), Scott McCloud: Reinventing Comics (2000), Bob Cotton and Malcolm Garrett: You Aint Seen Nothing Yet (2000), Jay David Bolter: Remediation (2000), Brenda Laurel: Utopian Entrepreneur (2001), Lev Manovich: The Language of New Media (2001), Martin Dodge and Rob Kitchin: Atlas of Cyberspace (2001), Lucien King: Game On (2002), Bob Cotton: Futurecasting Digital Media (2002), Howard Rheingold: Smart Mobs (2002), Jeffrey Shaw: Future Cinema (2002), Bill Moggridge: Designing Interactions (2006), Charlie Gere: Art, Time and Technology (2006), Stephen Keane: Cinetech (2007), Mike Faulkner: Audio-Visual Art and VJ Culture (2008), Jesse Schell: The Art of Game Design (2008), Erik Brynjolfsson and Andrew McAfee: The Second Machine Age (2014), Lev Manovich: Software Takes Command (2014).


While most of these example films are blockbusters, and several are series (to amortise the original costs over several films), around 30% of the top 100 highest-grossing movies would fit as illustrations of the totality of directorial freedom in the last decade or so. If you can imagine it, we can build it.


These were all hints as to the essential building blocks of a 21st century iteration of Wagner’s Gesamtkunstwerk (composite art work) – hints as to the ingredients of a new total art form. But I want to talk about a recent trend apparent in big-budget (blockbuster) movie-making – a trend that shows several signs of a new Cinema of Totality.


The successful integration with film of (first of all) models and optical effects, and later computer-generated imagery (CGI), goes all the way back – in movie-industry terms – to Fritz Lang’s Metropolis (1927), Stanley Kubrick’s 2001: A Space Odyssey (1968), George Lucas’ Star Wars (1977) and Stephen Lisberger’s Tron (1982). But it appears more particularly, and perhaps more spectacularly, in James Cameron’s Terminator films (from 1984), and then in a string of mostly hugely successful films that synthesise CGI, motion-capture, scenography, prosthetics, anaglyptic 3d and virtual cinematography, as well as multi-channel sound, costume design and other traditional filmic arts. These include (in chronological order):


Luc Besson: The Fifth Element (1997)

Wachowskis: The Matrix series (from 1999)

Chris Columbus: Harry Potter and the Philosopher’s Stone (2001)

Peter Jackson: Lord of the Rings series (from 2001)

Gore Verbinski: Pirates of the Caribbean (2003)

Frank Miller, Robert Rodriguez: Sin City (2005)

Andrew Adamson: The Chronicles of Narnia (2005)

James McTeigue: V for Vendetta (2006)

Christopher Nolan: The Dark Knight (2008)

James Cameron: Avatar (2009)

Alfonso Cuaron: Gravity (2013)

(An indicative list only – you can think of many more films like this, in which the director seems to have total creative freedom.)

In the last decade or so, it is computer graphics modelling and animation (CGI for short) that has catalysed this creative freedom – allowing directors, production designers and cinematographers the freedom to think and to create entirely outside the constraints of gravity and space/time. From the very first high-level CGI I saw in the early 1980s (a much-treasured showreel from John Whitney Jr’s Triple-I studio), it was obvious that this was not only part of the future of animation, but an essential component in the future of movies.

Of course, CGI wasn’t a sudden phenomenon – on the contrary, the art and science of computer graphics has a long history, with its foundations in the work of the MIT Lincoln Lab (light pen, 1958) and the Boeing Aircraft Company (3d wireframes) in the early 1960s. Its breakthrough moment came with Ivan Sutherland’s 1963 doctoral thesis – the Sketchpad software (interactive, screen-based modelling) – and the work of the flight-simulation company he set up with David Evans (Evans and Sutherland) in 1968.


This is an abbreviated chronology of CGI:


Jay Forrester: Industrial (System) Dynamics 1958

William Fetter (then at Boeing) coins ‘computer graphics’ 1960

Steve Russell: Spacewar! computer game 1962

William Fetter: wireframe 3d modelling, including Boeing Man – ergonomic 3d wireframe of the human body 1964

Arthur Appel: Hidden-line removal 1967

Arthur Appel: Ray-casting 1968

Henri Gouraud: Gouraud Shading – 1971

Michael Noll: raster-scan (bitmaps) 1971

Bui Tuong Phong: Phong Shading 1973

Ed Catmull: Texture mapping 1974

Martin Newell: Utah Teapot model 1975

Turner Whitted: Ray Tracing 1979

Benoit Mandelbrot: Fractal landscapes (used in Star Trek II: The Wrath of Khan, 1982)

Rebecca Allen: Motion Capture for Twyla Tharp’s The Catherine Wheel (capturing and modelling realistic human motion) 1983

William Reeves: Particle Systems (rendering fire, explosions, clouds, smoke etc) 1983

Cornell University: Radiosity 1984

James Kajiya: Path-Tracing 1986

Will Wright: Sim City complex system simulation (based on Forrester’s System Dynamics) 1989

John Carmack and John Romero: Wolfenstein 3D realtime 3d first-person shooter 1992

John Carmack: Doom realtime 3d first-person shooter 1993

Kristin Dana: skin and cloth textures 1999

cel-shading 2000

Voxel Volume Ray-Casting

Stephen Regelous: Massive/Prime crowd-rendering 2001


This is hardly a definitive list, but it gives some impression of the phenomenal development of computer graphics over the 54-year period from 1960 to the present. The components of CGI – the hardware and software – are of course still developing. The software is a fascinating example of the success of scientific, collaborative knowledge-sharing – aka the open-source ideal. The software packages used in the industry now (Maya, for example) are the result of over 50 years of innovation by computer scientists, artists, CG specialists and graphics programmers. The gradual accretion of algorithms for modelling and rendering is a phenomenal example of international collaboration, and has resulted in a set of immensely powerful tools for film-makers (as well as for artists, architects, designers, scientists, medical researchers, etc).


A scene from The Matrix – the universe of options echoes the range of potential techno-aesthetic components of the cinema of totality.


In this brief mapping of the trendings towards a Cinema of Totality, a few zeitgeist concerns resonate through the last 11 decades or so. One is immersion – the desire to surround an audience with light/sound/visuals/scenographics/information/signals/signs/meaning. Another is interactivity – the trending towards active artist-audience-artist dialogue, machine-dialogue, interpersonal dialogue, intermedia and transmedia dialogue. Another is non-linearity – a zeit-form iterating through 11 decades in the atonal composers, in James Joyce (especially in Finnegans Wake, 1939), in William Burroughs (early 1960s), and in Dziga Vertov (Man With A Movie Camera, 1929); and in Maturana, in Cybernetics (Norbert Wiener, 1948), in psycho-geography (Guy Debord, 1954), and in T.S. Eliot (The Waste Land, 1922). The other main recursive strand is multi-media, intermedia, transmedia (a mixing of media – film, ballet, literature, music, dance, poetry, theatre etc), and finally a kind of synaesthesia – a mixing of the senses.


These are the gradually evolving forms that will composite, catalyse and gel together into a radical 21st century art form, orchestrated by a student genius who gets it – who gets the immediacy, the interlocution, the smartness, the genius, the richness of a form that marries sensory, kinaesthetic, physical, intellectual, algorithmic, lyrical and expressive modes; a form that is still being born, still nascent, still, like an unchecked lottery ticket, resonant with hope. We love it.



(best short history of computer graphics is at