Virtual Cinematography and Total Cinema


Ed Zajac: Simulation of a Two-Gyro Gravity Gradient Attitude Control System (1963) – the earliest 3d CGI film that I have found. It illustrates the hypothesis that only two gyro-servos would be needed to orient a satellite in orbit.


James Cameron: Avatar (2009). Movies that rely on about 80% CGI have become commonplace in the last decade or so. Cameron’s Avatar, like Cuaron’s Gravity (2013), Jackson’s Lord of the Rings (2001-3) and Verbinski’s Pirates of the Caribbean (2003), depends upon considerable CGI, and demonstrates just how far we have come from Ed Zajac’s 1963 film.

From Ed Zajac’s wireframe – the first 3d animated model – (1963) to James Cameron’s Avatar (2009), CGI is less than 50 years old. But look how far we’ve come! Of course, under all the rendered surfaces of Avatar, we’ll find the familiar wireframe and polygon models.

Overview: CGI and film

It’s important to understand some of the fundamentals of CGI – how 3d models are made, how they are animated, and how they are made life-like or credible. So here’s my essential guide to CGI:

In MediArtInnovation I’ll be examining all the actual and likely components of the emerging total cinema, beginning with the tools that characterise what John Gaeta (vfx designer on The Matrix series) calls Virtual Cinematography. These will include motion-capture, blue-screen/green-screen live action, motion-tracking, expression-capture, crowd-simulation, character-simulation, augmented reality, projection mapping and previz software.


The very beginnings of CGI on film are in the 1950s, when cameras were put in front of cathode-ray-tube (CRT) screens and the images on the screen were either filmed in runtime (for example, at 24 frames/sec) or shot frame-by-frame onto a customised camera or film-recorder. This is the technique John Whitney – one of the pioneers of computer animation – used in his analog-computer short films, and in 1958 for his contribution, with Saul Bass, to the titles sequence of Hitchcock’s Vertigo.



Saul Bass and John Whitney: Vertigo film titles 1958

3d computer graphics uses the three-dimensional coordinates X, Y, and Z – usually with X as the horizontal, Y as the vertical and Z as the depth dimension. By including a unit scale, the position and size of objects like this blue cube can be identified by their X,Y,Z coordinates.


By shuffling these coordinates (changing the numbers), objects can be repositioned, stretched, moved, distorted in any way numerically possible. Computer graphics is about coordinates and calculations.
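The “coordinates and calculations” point can be made concrete with a minimal sketch (my own illustration, not from the text): a unit cube’s eight corners held as (x, y, z) triples, repositioned and stretched with nothing more than arithmetic.

```python
# Sketch: 'shuffling coordinates' to move and stretch an object.
from itertools import product

# The 8 corners of a unit cube as (x, y, z) triples.
cube = [(x, y, z) for x, y, z in product([0, 1], repeat=3)]

def translate(points, dx, dy, dz):
    """Add the same offset to every corner -- repositioning."""
    return [(x + dx, y + dy, z + dz) for x, y, z in points]

def scale(points, sx, sy, sz):
    """Multiply each axis independently -- stretching or squeezing."""
    return [(x * sx, y * sy, z * sz) for x, y, z in points]

moved = translate(cube, 5, 0, 0)     # slide the cube 5 units along X
stretched = scale(cube, 1, 2, 1)     # double its height along Y
```

Every more elaborate distortion in a CG package ultimately reduces to operations of this kind applied to lists of vertices.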


In CGI, a ‘virtual’ camera (VC) can be defined anywhere in XYZ space. Unlike a real camera, a virtual camera can have an infinitely adjustable (elastic) lens – from extreme telephoto to ultra wide-angle to macro – just by adjusting these values in software. The virtual camera can be moved anywhere in XYZ space at whatever speed the animator desires. The virtual camera can shoot at whatever frame rates are desired – from ultra slo-mo to stop-frame – and it can shoot in stereo 3d too. The virtual camera can be set to any aperture, any depth-of-field, any filter, any colour-space. A VC is not limited by gravity or mechanics – it is infinitely flexible.
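The ‘elastic lens’ is just a number in software. As a hedged sketch (my illustration, not any particular package’s API), here is a pinhole projection in which swapping wide-angle for telephoto means changing one focal-length value:

```python
# Sketch: a 'virtual camera' as a pinhole projection with an elastic lens.
def project(point, focal_length):
    """Project a 3d point (x, y, z) onto a 2d image plane at distance focal_length."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    return (focal_length * x / z, focal_length * y / z)

p = (2.0, 1.0, 10.0)       # a point 10 units in front of the camera
wide = project(p, 18)      # short focal length: wide angle, small on screen
tele = project(p, 200)     # long focal length: telephoto, large on screen
```

A real renderer adds rotation, aperture, depth-of-field and so on, but they are all further parameters of the same calculation, which is why the VC has none of the physical limits of a camera rig.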



William Fetter: Boeing Man wireframe 1963. These 3d animations were used to provide ergonomic design data for the planning of airplane cockpit instrument layout. To my knowledge they weren’t committed to film.

Early computer graphics, though only simple wireframes, pushed the processing power of the time to its limits. Early computers would take several seconds to compute a 3d wireframe, several minutes to compute a complex 3d model, and often hours to compute a rendered image (with surface textures, point-source lighting etc). Each generation of computing power allowed further refinements to CGI – such as surface rendering, smooth-shading, texture-mapping, transparency, smoke and flame effects, cloud effects, skin-rendering, hair-rendering, fur-rendering, trees and forests, fractal landscapes, realistic humanoid and animal movement (motion-capture), and so on.


This sequence of 2d wireframes is by Charles Csuri, one of the great pioneers of CGI. He was experimenting here with hand-drawn images that have been digitised. Csuri demonstrates what can be done by algorithmically distorting the images (stretching, squeezing, copying, pasting etc). Yes – in the early days of computer graphics, all these basic operations had to be discovered and illustrated!


In this plate (like the others above, from Gene Youngblood: Expanded Cinema, 1970), we see samples from John Whitney’s mesmeric computer-generated films, produced using his adapted analog computer, tinted with filters, and presented as triple-image projections in 1967.

Indicative of the kind of experimental computer-imaging techniques of the mid-1960s is Ken Knowlton’s discovery that computers could be programmed to calculate the grey-scale of tonal images:


Ken Knowlton: Studies in Perception No1 1966. The realisation that the computer was a medium dawned very slowly, really by increments, as computer-imaging, then computer animation, computer-poetry, computer-music (etc) emerged in the 1960s. The far-sighted Jasia Reichardt celebrated the computer as an art medium in her 1968 exhibition at the London ICA – Cybernetic Serendipity. Knowlton’s was one of the computer-images featured. This is also reproduced from Gene Youngblood’s brilliant 1970 overview, Expanded Cinema.
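The principle behind these tonal images can be sketched in a few lines: map each grey value in a picture to a printable symbol of roughly matching density. This is a loose modern illustration of the idea, not Knowlton’s actual method, and the tiny ‘image’ here is invented:

```python
# Sketch: grey values (0 = black, 255 = white) mapped to symbols by density.
RAMP = " .:-=+*#%@"   # ten symbols, light to dark

def to_mosaic(image):
    """image: rows of grey values in 0..255; returns rows of characters."""
    rows = []
    for row in image:
        rows.append("".join(
            RAMP[min(g * len(RAMP) // 256, len(RAMP) - 1)] for g in row))
    return rows

picture = [
    [  0,  64, 128, 192],
    [ 64, 128, 192, 255],
]
for line in to_mosaic(picture):
    print(line)
```

Scale the grid up to thousands of cells and a photographic image emerges from pure symbol-placement – essentially what Studies in Perception demonstrated.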

Ken Knowlton developed the Beflix (Bell Flicks) animation system in 1963, which was used to produce dozens of artistic films by the artists Stan VanDerBeek, Knowlton and Lillian Schwartz. Instead of raw programming, Beflix worked using simple ‘graphic primitives’ – draw a line, copy a region, fill an area, zoom an area, and the like.
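The flavour of such graphic primitives can be suggested with a toy sketch operating on a coarse grid of cells. The command names here are my own invention for illustration, not Knowlton’s actual Beflix instruction set:

```python
# Sketch: Beflix-style primitives operating on a grid of character cells.
W, H = 16, 8
grid = [[" "] * W for _ in range(H)]

def draw_hline(y, x0, x1, ch="#"):
    """Draw a horizontal line of cells."""
    for x in range(x0, x1 + 1):
        grid[y][x] = ch

def fill_area(x0, y0, x1, y1, ch="*"):
    """Fill a rectangular area of cells."""
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            grid[y][x] = ch

def copy_region(src, dst):
    """Copy a w-by-h region at (sx, sy) to (dx, dy)."""
    (sx, sy, w, h), (dx, dy) = src, dst
    for j in range(h):
        for i in range(w):
            grid[dy + j][dx + i] = grid[sy + j][sx + i]

draw_hline(0, 0, 7)
fill_area(0, 2, 3, 4)
copy_region((0, 2, 4, 3), (8, 2))   # duplicate the filled block to the right
```

A film then becomes a script of such commands executed frame by frame – far closer to an artist’s vocabulary than raw machine code.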


Stan Vanderbeek: Poemfield 1964-1967. The radically innovative Vanderbeek partners with Ken Knowlton – a great pioneer of computer graphics – to create computer-generated typographic poetry. “Poem Field is the name of a series of 8 computer-generated animations by Stan VanDerBeek and Ken Knowlton in 1964-1967. The animations were programmed in a language called Beflix (short for “Bell Flicks”), which was developed by Knowlton.” (wikipedia)

Stan Vanderbeek + Ken Knowlton: Poemfield 1964 – 1967

Vanderbeek was very helpful to me when, as a student, he sent me a bundle of material to help me with my 1967 graduation thesis on the Gesamtkunstwerk – the composite art work. He studied with the composer John Cage, the dancer Merce Cunningham and the architect-philosopher Buckminster Fuller (at Black Mountain College), was a very influential film-maker, and developed his famous movie-dome studio in Stony Point, New York.

Bell Labs – a leading digital research centre at that time – made this short (16 min) documentary: A Computer Technique for the Production of Animated Movies (c1964), recording some of Knowlton’s research.


Big steps in CGI came in the late 1960s and early 1970s. Wireframes could be clad with polygon surfaces, and animation became more sophisticated. The earliest feature film to carry 3d CGI images was Richard Heffron’s Futureworld (1976), in which we glimpse some of these then state-of-the-art advances.


Richard Heffron: Futureworld, 1976. Heffron revisits the familiar territory of Michael Crichton’s Westworld (1973), where Yul Brynner plays a robotic replica of a cowboy-gunslinger. Brynner makes a cameo appearance in Futureworld too – alongside Peter Fonda. But Futureworld was the first major feature to incorporate 3d CGI – and the clip below illustrates the state of the art in the mid-1970s.

Futureworld – while not a Michael Crichton project like Westworld – follows a fairly predictable story trope. Westworld was a futuristic theme-park ‘world’ where robotic cowboys populate a small western town. Adventurous theme-park visitors dress in character, and can live among the cowboy robots – they can pick gun-fights with them, even shoot them. The cowboy robots only use blanks in their guns. But something goes wrong with one of the robots, and he starts shooting back at the visitors, with real bullets.

(You’ll recognise this plot in Jurassic Park and many of Crichton’s other hugely successful movies.)

Same thing with Futureworld, only this time it wasn’t by Crichton.


Stephen Lisberger: Tron 1982. Early chromakey/blue-screen matting of live-action filmed characters into a CGI wireframe set.

The next big step came with Disney’s Tron (Stephen Lisberger, 1982). Here the plot entails our hero, played by Jeff Bridges, actually entering the cyberspace world of a computer and battling with evil virus-code villains. This is the earliest movie to feature colour-keyed live action film (heavily tinted here to give a CRT-blue colour), optically combined with a computer-generated, hand-coloured wireframe set.


Lisberger’s 1982 storyboard/visualisation of a frame for Tron. Xerox photocopy machines had a considerable impact on graphics at this time, but the marker-pen was the usual storyboard tool.

Despite its obvious shortcomings compared to CGI now, Tron was a box-office success and prepared the way for more computer animation – leading a dozen years later to John Lasseter’s Toy Story (1995), the first completely computer-generated feature.

To marry CGI with live action, the position of objects and people in the real world (in studio movie-sets or on location) has to be synchronised with the ‘people’ and objects in CGI space. (And of course the camera lensing, lighting and aperture have to be synchronised too.) This is where two technologies come in: motion-capture (MOCAP) and motion-control.

MOCAP is the capture of an actor’s motion and movement, usually done in a green-screen or blue-screen studio. Motion control is the feeding of data on camera position, aperture, filters etc from the real-world camera to the virtual camera, so that images from both worlds can be aligned in perfect synch.
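The motion-control idea is, at bottom, a per-frame data feed. A hedged sketch of that data flow (field names are my invention, not any rig’s actual protocol): the real camera reports its state each frame, and the virtual camera is set to exactly the same values so the two images register.

```python
# Sketch: motion control as a per-frame stream of camera state.
from dataclasses import dataclass

@dataclass
class CameraState:
    frame: int
    position: tuple      # (x, y, z), e.g. metres in studio space
    focal_length: float  # mm
    aperture: float      # f-stop

# A (made-up) two-frame take reported by the real rig.
real_take = [
    CameraState(1, (0.0, 1.5, 0.0), 35.0, 2.8),
    CameraState(2, (0.1, 1.5, 0.0), 35.0, 2.8),
]

# The CG camera mirrors the live rig frame by frame.
virtual_take = [CameraState(s.frame, s.position, s.focal_length, s.aperture)
                for s in real_take]
```

Because the virtual camera is pure data, perfect synchronisation is simply a matter of copying (or replaying) this stream.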

Before Digital: Optical and Mechanical FX

Of course, the world of virtual cinematography has grown out of, and ultimately was based upon, the pre-digital world of optical and mechanical special effects (‘opticals’ or ‘mechanicals’), where production designers, directors and cinematographers built real 3d sets and used either in-camera or laboratory-processed special effects. These early effects techniques – like this example of glass-matte painting – were in widespread use by the 1930s (glass mattes were used in Gone With the Wind, Selznick 1939), and were still being used in Steven Spielberg’s Raiders of the Lost Ark in 1981.

Optical FX was the name given to in-camera effects and laboratory (processing) effects. In-camera effects include all the ways you could manipulate the filmic image inside the camera – such as double or multiple exposures, iris-fades, rewinding film, and matte-shots (where part of the lens is blocked off by a cut-out mask or glass matte – the film is shot with the matte, then re-wound, and a different, complementary matte is used to expose the rest of the film).


Diagram showing the set-up for shooting with glass matte paintings – the real location and the glass background painting are married together in the camera.


Another glass matte rig. The compositing was an in-camera effect – the foreground scene is shot through a matte, so that painting and live-action are composited together on the film negative.


Norman Dawn: Glass matte technique used in film, 1907. According to the blogger Yang, at the excellent thespecialeffectsblog.blogspot: “Norman Dawn made several improvements on the matte shot to apply it to motion picture, and was the first director to use rear projection in cinema. Dawn combined his experience with the glass shot with the techniques of the matte shot. Up until this time, the matte shot was essentially a double-exposure: a section of the camera’s field would be blocked with a piece of cardboard to block the exposure, the film would be rewound, and the blocked part would also be shot in live action. Dawn instead used pieces of glass with sections painted black (which was more effective at absorbing light than cardboard), and transferred the film to a second, stationary camera rather than merely rewinding the film. The matte painting was then drawn to exactly match the proportion and perspective to the live action shot. The low cost and high quality of Dawn’s matte shot made it the mainstay in special effects cinema throughout the century. Traditionally, matte paintings were made by artists using paints or pastels on large sheets of glass for integrating with the live-action footage. The first known matte painting shot was made in 1907 by Norman Dawn (ASC), who improvised the crumbling California Missions by painting them on glass for the movie Missions of California. Notable traditional matte-painting shots include Dorothy’s approach to the Emerald City in The Wizard of Oz, Charles Foster Kane’s Xanadu in Citizen Kane, and the seemingly bottomless tractor-beam set of Star Wars Episode IV: A New Hope.”

Glass-mattes – This technique was in widespread use in the 1930s (notably Gone With the Wind – Selznick, 1939) and involves interposing a glass painting between the camera and the live-action to add landscape or other ‘location’ cues (skyscapes, background scenery etc). With careful control of depth-of-field, this becomes an in-camera effect.


Doug Trumbull: Slit-scan rig for Kubrick’s 2001 A Space Odyssey, 1968

Slit-scan is an analog, manual technique in which a physical optical slit is used to progressively expose a frame of film. It is used for special fx, especially for elongating objects. It was used in the early 1960s by Bernard Lodge to create the Dr Who titles sequence (1963), and later by the optical-fx wizard Doug Trumbull for the stargate sequence in Kubrick’s 2001 A Space Odyssey (1968).
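The progressive-exposure idea behind slit-scan can be sketched digitally: each column of the final image is sampled (through the ‘slit’) from a different moment of a moving artwork, so a single frame accumulates many instants. The frames below are invented for illustration:

```python
# Sketch: slit-scan -- output column i comes from column slit_x of frame i.
def slit_scan(frames, slit_x):
    """frames: list of images (rows of pixel values); returns one streaked image."""
    height = len(frames[0])
    return [[frames[i][y][slit_x] for i in range(len(frames))]
            for y in range(height)]

# Three 2x4 'frames', each uniformly filled with its own index in time.
frames = [[[i] * 4 for _ in range(2)] for i in range(3)]
streak = slit_scan(frames, slit_x=1)   # 2 rows x 3 columns, one per moment
```

This is why slit-scanned objects stretch and smear: horizontally adjacent pixels in the result belong to different points in time.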


Doug Trumbull/Stanley Kubrick: Star-gate sequence from 2001 A Space Odyssey (1968) – a 9.5 minute sequence engineered using slit-scan (and some old footage from Dr Strangelove). This is the most remarkable example – outside movies that are about the effect of drugs – of the influence of counter culture experimental film-making upon major Hollywood productions.

to be continued…




The Cinema of Totality

(This is the first in a series of posts about the emergence of indicators of a Cinema of Totality)


Oskar Schlemmer: Triadic Ballet, Bauhaus 1922. Ideas of the Gesamtkunstwerk – the total art work – drove both Walter Gropius (founder and director of the Bauhaus) and Lazlo Moholy Nagy, who in 1923 took over as head of the Preliminary (Foundation) Course from Johannes Itten. Oskar Schlemmer had his own ideas too, inspired by von Kleist’s essay On the Marionette Theatre (1810), and concurring with Moholy Nagy’s vision of a theatre of machines.

Back in 1924, in his book Theatre of the Bauhaus, the artist-designer-film-maker Lazlo Moholy Nagy, then in charge of the Foundation year at the Bauhaus, included a piece headed Theatre, Circus, Variety in which he described THE COMING THEATER – A THEATER OF TOTALITY:

“Every form process or Gestaltung has its general as well as its particular premises, from which it must proceed in making use of its specific media. We might, therefore, clarify theatre production if we investigated the nature of its highly controversial media: the human word and the human action, and, at the same time, considered the endlessly possible creator—man.”

“It must be said, however, that those ideas, in contradistinction to a literary-illustrative viewpoint, have unquestionably advanced creative theater precisely because they were diametrically opposed. They canceled out the predominance of the exclusively logical-intellectual values. But once the predominance has been broken, the associative processes and the language of man, and consequently man himself in his totality as a formative medium for the stage, may not be barred from it. To be sure, he is no longer to be pivotal—as he is in traditional theater—but is to be employed ON AN EQUAL FOOTING WITH THE OTHER FORMATIVE MEDIA.”


Examples of indicators of the emerging Cinema of Totality. The Wachowskis’ The Matrix (1999) in many ways mapped out the new ‘virtual cinematography’ – the intricate compositing of live action, motion capture and CGI, freeing the camera both in space and time.

I believe that we are witnessing the early stages of a Cinema of Totality. Very early stages, I suggest, as the true art of the 21st century is still only being hinted at in the great variety of emergent art-forms and media innovation that has marked the last twenty years. These nascent forms include interactive installation-art, augmented reality, virtual reality, e-books, web-sites, virtual worlds, projection-mapping, webcasting, geographical information systems, social media, fractals, CGI, robotics, simulation software, chatbots, machine-generated literature, games, MMORPGs, hypertext and hypermedia, multi-touch, gesture recognition, speech-recognition, machine-learning, video-art, artificial life, genetic algorithms, QR Quick-Response Codes, hologram projection, tamagotchi, exoskeletons, transmedia narrative, synthespians, software agents, Internet of Everything, wikipedia, internet archive,  Skype, animatronics, interactive documentary, digital journalism, realtime motion capture, PreViz software, – and of course a swath of acronymic technologies, including: html, vrml, RSS, PHP, Java, CCTV, HD, QTVR, VVVV, RFID, GPS, GIS, etc.

Some key books have hinted at the possibilities here: Brenda Laurel: Computers as Theatre (1991), Howard Rheingold: Virtual Reality (1991), Clifford Pickover: Computers and the Imagination (1991), Steven Levy: Artificial Life (1992), Neal Stephenson: Snow Crash (1992), Noam Chomsky: Manufacturing Consent (1992), Scott Bukatman: Terminal Identity (1993), Howard Rheingold: The Virtual Community (1993), April Greiman: Hybrid Imagery (1993), Kevin Kelly: Out of Control (1994), Bob Cotton and Richard Oliver: The Cyberspace Lexicon (1994), Robert Pepperell: The Post-Human Condition (1995), Sherry Turkle: Life on the Screen (1996), Mitchell Resnick: Turtles, Termites and Traffic Jams (1997), Janet Murray: Hamlet on the Holodeck (1997), Bob Cotton and Richard Oliver: Understanding Hypermedia 2000 (1998), Hans Moravec: Robot (1999), Scott McCloud: Reinventing Comics (2000), Bob Cotton and Malcolm Garrett: You Ain’t Seen Nothing Yet (2000), Jay David Bolter: Remediation (2000), Brenda Laurel: Utopian Entrepreneur (2001), Lev Manovich: The Language of New Media (2001), Martin Dodge and Rob Kitchin: Atlas of Cyberspace (2001), Lucien King: Game On (2002), Bob Cotton: Futurecasting Digital Media (2002), Howard Rheingold: Smart Mobs (2002), Jeffrey Shaw: Future Cinema (2002), Bill Moggridge: Designing Interactions (2006), Charlie Gere: Art, Time and Technology (2006), Stephen Keane: Cinetech (2007), Mike Faulkner: Audio-Visual Art and VJ Culture (2008), Jesse Schell: The Art of Game Design (2008), Erik Brynjolfsson and Andrew McAfee: The Second Machine Age (2014), Lev Manovich: Software Takes Command (2014).


While most of these example films are blockbusters, and several are series (to amortise the original costs over several films), around 30% of the top 100 greatest grossing movies would fit in as illustrations of the totality of directorial freedom in the last decade or so. If you can imagine it, we can build it.


These were all hints as to the essential building blocks of a 21st century iteration of Wagner’s Gesamtkunstwerk (composite art work) – hints as to the ingredients of a new total art form. But I want to talk about a recent trend apparent in big-budget (blockbuster) movie-making – a trend that shows several signs of a new Cinema of Totality.


The successful integration with film of (first of all) models and optical effects, then later computer-generated imagery (CGI) – going all the way back, in movie-industry terms, to Fritz Lang’s Metropolis (1927), Stanley Kubrick’s 2001 A Space Odyssey (1968), George Lucas’ Star Wars (1977) and Stephen Lisberger’s Tron (1982), but more particularly and perhaps more spectacularly to James Cameron’s Terminator films (from 1984) – has led to a string of mostly hugely successful films that synthesise CGI, motion-capture, scenography, prosthetics, anaglyph 3d and virtual cinematography, as well as multi-channel sound, costume design and other traditional filmic arts. These include (in chronological order):


Luc Besson: The Fifth Element (1997)

Wachowskis: The Matrix series (from 1999)

Chris Columbus: Harry Potter and the Philosopher’s Stone (2001)

Peter Jackson: Lord of the Rings series (from 2001)

Gore Verbinski: Pirates of the Caribbean (2003)

Frank Miller, Robert Rodriguez: Sin City (2005)

Andrew Adamson: Chronicles of Narnia (2005)

James McTeigue: V for Vendetta (2006)

Christopher Nolan: The Dark Knight (2008)

James Cameron: Avatar (2010)

Alfonso Cuaron: Gravity (2013)

(an indicative list only – you can think of many more films like this, in which the director seems to have total creative freedom)

In the last decade or so, it is computer graphics modelling and animation (CGI for short) that has catalysed this creative freedom – allowing directors, production designers and cinematographers the freedom to think and to create entirely outside the constraints of gravity and space/time. From the very first high-end CGI I saw in the early 1980s (a much-treasured showreel from John Whitney Jr’s Triple-I studio), it was obvious that this was not only part of the future of animation, but an essential component in the future of movies.

Of course, CGI wasn’t a sudden phenomenon – on the contrary, the art and science of computer graphics has a long history, with its foundations in the work of the MIT Lincoln Lab (light pen, 1958) and the Boeing Aircraft Company (3d wireframes) in the early 1960s. Its breakthrough moment came with the work of Ivan Sutherland – his 1963 doctoral thesis, the Sketchpad software (interactive, screen-based modelling) – and the flight-simulation company he set up with David Evans (Evans and Sutherland) in 1968.


This is an abbreviated chronology of CGI:


Jay Forrester: Industrial (System) Dynamics 1958

William Fetter (then at Boeing) coins the term ‘computer graphics’ in 1960

Steve Russell: Spacewar! computer game 1962

William Fetter: wireframe 3d modelling, including Boeing Man – ergonomic 3d wireframe of the human body – 1964

Arthur Appel: Hidden-line removal 1967

Arthur Appel: Ray-casting 1968

Henri Gouraud: Gouraud Shading – 1971

Michael Noll: raster-scan (bitmaps) 1971

Bui Tuong Phong: Phong Shading 1973

Ed Catmull: Texture mapping 1974

Martin Newell: Utah Teapot model 1975

Turner Whitted: Ray Tracing 1979

Benoit Mandelbrot: Fractal landscapes (used in Star Trek Wrath of Khan, 1982)

Rebecca Allen: Motion Capture for Twyla Tharp: Catherine Wheel (capturing and modelling realistic human motion) 1983

William Reeves: Particle Systems (rendering fire, explosions, clouds, smoke etc) 1983

Cornell University: Radiosity 1984

James Kajiya: Path-Tracing 1986

Will Wright: Sim City complex system simulation (based on Forrester’s System Dynamics) 1989

John Carmack and John Romero: Wolfenstein 3D realtime 3d first-person shooter 1992

John Carmack: Doom realtime 3d first-person shooter 1993

Kristin Dana: skin and cloth textures 1999

cel-shading 2000

Voxel Volume Ray-Casting

Stephen Regelous: Massive/Prime crowd-rendering 2001
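To give one entry in this chronology a concrete shape: fractal landscapes are commonly built by midpoint displacement – repeatedly insert midpoints between existing heights, nudged by a random offset that shrinks at each level. A minimal 1d sketch of that recursion (my own illustration, not any production renderer):

```python
# Sketch: a 1d fractal terrain profile by midpoint displacement.
import random

def midpoint_displace(heights, roughness, depth, rng):
    """Recursively subdivide a height profile, jittering each new midpoint."""
    if depth == 0:
        return heights
    out = []
    for a, b in zip(heights, heights[1:]):
        out.append(a)
        out.append((a + b) / 2 + rng.uniform(-roughness, roughness))
    out.append(heights[-1])
    # Halving the roughness each level gives the self-similar 'fractal' look.
    return midpoint_displace(out, roughness / 2, depth - 1, rng)

profile = midpoint_displace([0.0, 0.0], roughness=1.0, depth=4,
                            rng=random.Random(1))
```

The 2d version (the diamond-square algorithm) applies the same idea to a grid of heights, which is how convincing CG mountain ranges were first generated.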


This is hardly a definitive list, but it gives some impression of the phenomenal development of computer graphics over the 54-year period from 1960 to the present. The components of CGI – the hardware and software – are of course still developing. The software is a fascinating example of the success of the scientific collaborative sharing of knowledge, aka the open-source ideal. The software packages used in the industry now (Maya for example) are the result of over 50 years of innovation by computer scientists, artists, CG specialists, and graphics programmers. The gradual accretion of algorithms for modelling and rendering is a phenomenal example of international collaboration, and has resulted in a set of immensely powerful tools for film-makers (as well as for artists, architects, designers, scientists, medical researchers, etc).


A scene from The Matrix – the universe of options echoes the range of potential techno-aesthetic components of the cinema of totality.


In this brief mapping of the trends towards a Cinema of Totality, a few zeitgeist concerns resonate through the last 11 decades or so. One is immersion – the desire to surround an audience with light/sound/visuals/scenographics/information/signals/signs/meaning. Another is interactivity – the trend towards active artist-audience-artist dialogue, machine-dialogue, interpersonal dialogue, intermedia and transmedia dialogue. Another is non-linearity – a zeit-form iterating through 11 decades in the atonal composers, in James Joyce (especially in Finnegans Wake, 1939), in William Burroughs (early 1960s), and in Dziga Vertov (Man With A Movie Camera, 1929); and in Maturana, in Cybernetics (Norbert Wiener, 1948), in psycho-geography (Guy Debord, 1954), and in T.S. Eliot (The Waste Land, 1922). The other main recursive strand is multi-media, intermedia, transmedia (a mixing of media – film, ballet, literature, music, dance, poetry, theatre etc), and finally a kind of synaesthesia – a mixing of the senses.


These are the gradually evolving forms that will composite, catalyse and gel together into a radical 21st century art form, orchestrated by a student genius who gets it – who gets the immediacy, the interlocution, the smartness, the genius, the richness of a form that marries sensory, kinaesthetic, physical, intellectual, algorithmic, lyrical and expressive modes; a form that is still being born, still nascent, still, like an unchecked lottery ticket, resonant with hope. We love it.



(best short history of computer graphics is at




Alexandr Medvedkin: AgitProp Train 1923


Alexandr Medvedkin: Agit-prop Train scans from a German history of Russian Cinema, published c1935

“Medvedkin was head of the Russian propaganda train that travelled around the Soviet Union making Social realist documentaries, as the narrator ironically comments; “films made at the top to instruct the grass roots, footage gathered at the grass roots, to be edited at the top.” (from:

The Agit Trains were highly decorated railway carriages/goods wagons, each carriage a different form of agitprop (agitation+propaganda) – a theatre wagon, a cinema wagon, film-studio and film-processing, etc. There’s a wonderful recreation of an Agit-Train in Warren Beatty’s feature Reds (1981), which maps the life and work of the American communist and journalist John Reed.


Warren Beatty: Reds 1981. This brilliant feature is a biopic of John Reed, the American journalist and communist who covered the Russian revolution of 1917 and its aftermath.

Medvedkin and Dziga Vertov were two of the film-makers actively involved in the Bolshevik AGITPROP trains during the Russian Civil War. These trains toured the Russian countryside distributing leaflets, staging plays, showing films, and illustrating the benefits of the Revolution for an illiterate peasantry. The AGITPROP trains are wonderful examples of how multimedia – including film and electronic media – were creatively developed as ‘education, agitation and propaganda’ media.


Medvedkin’s Agit-train logotype from c1933


“…one of the primary aims of the agitprop trains was health education. Vertov and Medvedkin filmed people during the day, edited all night, then played the film back to their audience/participators next day. When they played another village the film, they found that these agitprop media had the power to unify…” (Henny 1983)


Alexandr Medvedkin: Happiness 1935. Medvedkin’s most successful film, a slapstick social comedy set in Russia before the revolution, and in the USSR before Stalin’s Collectivisation (mass-redistribution and state-ownership of land) in the early 1930s. “The original print featured an experimental color sequence illustrating a poor peasant’s dreams of becoming a king. It was the first color production of the Mosfilm studio. According to Medvedkin, “At that time the peasant could not dream of anything multicoloured or beautiful. His dream was limited, and in that the technology helped”. The sequence was discarded on account of its poor technical quality and is apparently lost.” (wikipedia)

Graphic Design in the 1920s

In the 1990s, as the new digital media and networks were emerging, it became obvious that graphic design was the core discipline through which the components of the new media could be integrated, designed and presented.

Graphic design is a twentieth-century discipline that drew upon the 500 year-old traditions of printing and publishing, the 19th century developments in advertising, poster design, and photography, and the early 20th century innovations of a swathe of influential designers.


El Lissitzky: Electro-Library! 1923. This fascinating poster, designed by El Lissitzky in 1923, is indicative of the revolution in print design in the 1920s. El Lissitzky is one of the great prototypical designers of the early 20th century. He epitomises the designer who is familiar with the latest reprographic technologies, and in his stage and exhibition design he was already mapping the territory that would be fully explored in the latter half of the century; in this, and in his own philosophy of art/science/technology, firmly based on mathematics, he has become an exemplar for current multimedia designers. El Lissitzky sits alongside the greatest designers of the 20th century. Along with Rodchenko, Moholy Nagy, Tschichold, Rand, and half a dozen others, he set the agenda for the blend of information-design, graphics, photo-collage and combinations of typography and photography that now dominates both design education and current practice in the digital arts. Astonishing – and, as Hans Magnus Enzensberger pointed out in 1970, at the time virtually incomprehensible – is his suggestion of the future of book design as an Electro-Library. From the poster: “the design of the bookspace through the material of the process block, which creates the new optics. the supernaturalistic reality of the perfected eye. the continuous page-sequence – the bioscopic book. the new book demands new writers. Inkwells & goose-quills are dead. the printed page transcends time and space. the printed page, the infinity of the book, must be transcended. THE ELECTRO-LIBRARY!”


Alexander Rodchenko: poster for Battleship Potemkin 1925. Rodchenko became an early master of the integration of typography and photography. It’s hard for a young 21st century designer, used to working entirely in the digital domain, to understand the technical difficulties of producing this kind of work in the 1920s. There were two main ways of making these graphic works. The first was to make a photomontage with printed type and photographs, then have a half-tone plate made of the entire graphic, overprinting with extra colours and with a separate overprinting of letterpress display fonts. Or secondly, it would be feasible to have halftone plates made for the contone (photographic, continuous-tone) components, and physically cut the zinc plates before mounting them on type-high plywood for printing. To retain the sharp letterforms, an overprinting of the type would be the final printing.


Rodchenko: Books! with his photograph of Lilya Brik. Rodchenko was a master of type-design using the bold cyrillic display faces of the 1920s, and this poster has become an icon of the period – perhaps even more so than Dziga Vertov’s film The Man with a Movie Camera (1929), or Sergei Eisenstein’s Battleship Potemkin (1925).

These designers – from Peter Behrens and his ground-breaking identity for AEG (1906-1914), to the DADA artist-typographer Raoul Hausmann, the De Stijl artists (1917-1920), László Moholy-Nagy at the Bauhaus, Alexander Rodchenko in civil-war Russia, Fortunato Depero in Italy, and Karel Teige, Piet Zwart and Jan Tschichold in other parts of Europe – together with the influence of DADA, De Stijl, Surrealism, Russian Agitprop (etc), all contributed to the emergence of the new profession.


Raoul Hausmann: DADA issue 1 1918 The DADAists were the first to really break with 19th-century conventions of type design and typography. Here Hausmann uses display fonts, body-copy fonts and printer’s furniture (the rules, blocks, logos and symbols) to make an asymmetric design that was a radical redefinition of page design.

Other important ingredients in this emerging discipline (not really to be called ‘graphic design’ until the 1950s) included late-19th-century designers like William Morris, Frank Lloyd Wright, Charles Rennie Mackintosh, Aubrey Beardsley, the Beggarstaffs, Alphonse Mucha, Henri de Toulouse-Lautrec, Arthur Mackmurdo, (etc).

And of course, the emerging information sciences of the 1920s and 1930s played a core role in informing the graphic design of statistical information (Otto and Marie Neurath: Isotype, 1930s), geographic information (Harry Beck: London Underground map, 1933; Phyllis Pearsall: London A-Z, 1936) and information taxonomies (the work of Paul Otlet in the 1930s).

All these various ingredients, together with innovations in letterpress, hot-metal, photo-gravure, halftone, trichromatic colour printing, (etc) informed the new industrial-design of graphic artefacts.

Recently, in its digital iteration, graphic design has embraced the moving image, animated typography, even the use of sound and music. It has become the design discipline that, above all others, integrates all these multimedia components and information-design, as well as the art underpinning the graphical user-interface and screen-based content design generally.

I’m planning more detailed posts on individual designers, as well as on the development of all the component media of the 21st century media-ecology.




Maya Deren: Grid of Stills from Maya Deren films 1944

This post-board or mood-board of stills from Deren’s work, assembled by her in 1944, includes stills from Meshes of the Afternoon, Choreography for Camera, At Land, and Ritual in Transfigured Time. A prolific artist, dancer, choreographer and film-maker, Deren embraced the skill-set that came to represent avant-garde American film-making in the post-war period, forming a kind of archetype for the counter-culture.

John Brunner: The ‘Club of Rome’ Quartet 1975

John Brunner was one of the new generation of sci-fi writers who emerged in the late 1960s, and whose work was showcased by Michael Moorcock in New Worlds Science Fiction. This new wave was characterised by innovations in both form and content. Brunner had spent his early career writing space-opera-style sci-fi, but from 1967-68 onwards he focused on the parlous state the world had found itself in. By this time the issues of over-population, pollution, the nuclear arms race and the power of the military-industrial complex had become major concerns of the world’s youth. Brunner tackled these issues, incorporating them into his partially non-linear writing structure, in books later described as his ‘Club of Rome’ Quartet.


Brunner’s Club of Rome Quartet: Stand on Zanzibar (1968), The Jagged Orbit (1969), The Sheep Look Up (1972) and The Shockwave Rider (1975)

The Club of Rome was a group of academics providing research and investigation into global issues, and attempting to catalyse a change in our thinking. By 1972 it had commissioned a detailed computer simulation of the world – its resources and its population – and run simulated projections to determine the impact of diminishing resources and increasing population. The resulting report was called Limits to Growth. The simulator was based on Jay Forrester’s World Dynamics model, evolved in the late 1960s from his earlier development of System Dynamics.
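The flavour of Forrester’s stock-and-flow approach can be sketched in a few lines of Python. To be clear: every stock, rate and coefficient below is invented for the sketch and bears no relation to the actual World3/World Dynamics equations; it simply illustrates the overshoot-and-collapse dynamic that the Limits to Growth projections made famous – growth throttled, and then reversed, by the depletion of a finite resource stock.

```python
# Toy system-dynamics model: two stocks (population, resources) updated by
# simple Euler integration. All numbers are illustrative, not from World3.

def simulate(years=200, dt=1.0):
    population = 1.0   # normalised stock
    resources = 1.0    # normalised, non-renewable stock
    history = []
    for _ in range(int(years / dt)):
        # Flows: birth rate falls as the resource stock is drawn down,
        # while consumption scales with population.
        birth_rate = 0.03 * resources
        death_rate = 0.01
        consumption = 0.004 * population
        population += (birth_rate - death_rate) * population * dt
        resources = max(resources - consumption * dt, 0.0)
        history.append((population, resources))
    return history

history = simulate()
peak_population = max(pop for pop, _ in history)
```

Running the loop shows the characteristic shape: population overshoots while resources last, then declines once the (hypothetical) birth rate drops below the death rate – the qualitative behaviour, if nothing like the detail, of Forrester’s model.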

The Club of Rome stated its objectives as early as 1968, describing its mission as “to act as a global catalyst for change through the identification and analysis of the crucial problems facing humanity and the communication of such problems to the most important public and private decision makers as well as to the general public.”

The Limits to Growth report, coming a decade after Rachel Carson’s Silent Spring (1962) – the book that alerted us all to concerns about what we were doing to our environment – in a sense catalysed ten years of mind-change. By the early 1970s Richard Buckminster Fuller had already begun his World Resources Inventory and described his World Game idea (1964), and Stewart Brand had published his Whole Earth Catalog (1968). All these were attempts to raise public awareness of these pressing environmental issues – as was the creation of Friends of the Earth and Greenpeace over this period (1969-72). Brunner incorporated these concerns into his zeitgeist books. The final book in the Quartet, The Shockwave Rider, dealt more with the emerging trends in world communications networks: phone phreaking, computer viruses (Brunner called them worms), hacking and online identity-shifting. The Shockwave Rider (inspired by Alvin Toffler’s Future Shock, 1970) became the definitive British contribution to the cyberpunk genre of science fiction, a genre characterised by Vernor Vinge’s True Names (1981), William Gibson’s Neuromancer (1984), and Bruce Sterling’s Islands in the Net (1988).

William Gibson describes Brunner’s work: “No one except possibly the late John Brunner, in his brilliant novel The Sheep Look Up, has ever described anything in science fiction that is remotely like the reality of 2007 as we know it.”[1]

The Shockwave Rider had a major impact on me and my generation.

Cynthia Breazeal: Kismet Affective Robot 1999

Cynthia Breazeal is associate professor of Media Arts and Sciences at MIT. She is a pioneer of social robotics – the investigation of how to make a robot capable of responding to social interaction with humans with some of the signature facial expressions that mirror the ‘human qualities’ of social exchange, understanding and empathy. Kismet was an early illustration of Breazeal’s doctoral research on affective robotics at the MIT AI Lab, dating from 1999.

“Kismet is an expressive robotic creature with perceptual and motor modalities tailored to natural human communication channels. To facilitate a natural infant-caretaker interaction, the robot is equipped with visual, auditory, and proprioceptive sensory inputs. The motor outputs include vocalizations, facial expressions, and motor capabilities to adjust the gaze direction of the eyes and the orientation of the head. Note that these motor systems serve to steer the visual and auditory sensors to the source of the stimulus and can also be used to display communicative cues.”




David Hanson: Diego-san Robotic Baby Boy 2011

Considerable effort in the research community has gone into the development of human-like robots: the smart (AI-based) software that drives them, and the prosthetic ‘front-end’ – the eyes, face, mouth, voice and so on – that forms the effective, and hopefully affective, user-interface of the robot: what you see, how you gauge understanding, how you apply the Turing test. David Hanson (of Hanson Robotics) has produced some remarkable prototypes illustrating progress in this area since Kismet.

What exactly is the relevance of Kismet to the world of film?

We are heading rapidly towards the scenario in which sophisticated CGI and AI software – a development of what is happening right now in pre-viz and games software – enables us to create feature-length movie-experiences in which several or even most of the protagonists are not just soft machines (like us humans) but really software machines: CGI-developed, ultra-realistic humanoid robots or cyborgs equipped with a sophisticated software brain, with a built-in expert-system-style memory-retrieval mechanism, chatbot-style conversational capabilities, and other AI allowing them to stay in character and play their part in relevant plot development.

Importantly, these synthespians, virtual actors (vactors) or digital dramaturges will be equipped with the kind of algorithms developed by Cynthia Breazeal and David Hanson – they will be able to assume human characteristics, facial expressions and affective responses relevant to their fictional characters.

Eventually, of course, real actors will sell the software rights to their individual portfolios of personal motion-capture data, expression-capture data and likeness data, allowing movie-makers to cast (say) Richard Burton (in his prime) with Gloria Swanson (in her prime), Peter Lorre (at his best) and Brad Pitt (at 20 years old) together in the same movie – the ultimate casting machine. Bit-parts and crowd scenes will be supplied by future generations of Massive Prime’s crowd-simulator, the whole locked together into movies using the catalog of virtual-cinematography effects that we have already begun to develop.