Chapman’s Oscar-winning entry for Expo67 was commissioned by the province of Ontario. It uses a ‘multi-dynamic image’ technique – a phrase Chapman coined to describe his use of ‘dynamic frames’: filmed sequences that varied dynamically in size as they were projected, with multiples of these ‘screens’ or ‘panes’ appearing within the vast screen he was using at Expo67 – a screen measuring 66 feet by 30 feet (IMAX size). Remember that in 1967 computers weren’t ready to process this kind of media-making, so Chapman had to use auditors’ printed spreadsheets to work out how his multi-dynamic film should be shot, how it should be storyboarded, and finally how the multi-image effect he wanted should be described accurately enough for the Todd-AO optical-printing specialists in Hollywood to assemble all of Chapman’s clips (180,000 feet of film) – as he wanted – into an 18-minute multi-dynamic film.
Chapman’s A Place to Stand was his first widely promoted attempt to realise his multi-dynamic image approach. This is a fragment of the 70mm film, with sample images optically (photographically) printed as dynamic frames within the span of the 70mm frame.
A Place to Stand is a multi-image treatment – a motion-montage – about the province of Ontario. Chapman’s film content follows the ‘city symphony’ ideas of the 1920s (Ruttmann: Berlin – Symphony of a Great City, Vertov: Man With a Movie Camera, etc), and following the avant-garde experimental approach of those early attempts to capture a physical space, Chapman’s film invents new techniques – a new form, in fact – presaging the digital compositing software that has come into wide use over the last two decades.
BTW, users of contemporary 21st-century compositing software like Adobe After Effects, Nuke, Maya Composite, Apple Motion etc will find it hard to understand the level of difficulty facing Chapman in his quest for the multi-dynamic image form. It is relatively easy today to assemble and view test composites in realtime, or after only a few minutes of rendering time, and to see the results on large flatscreen display monitors as you work. Try to imagine this kind of compositing being planned on a standard Moviola. This is how Chapman describes part of the process:
|“180,000 feet of film were shot. Some additional footage of material I had not time to shoot myself was shot by David Mackay, using TDF cameramen. After completely familiarising myself with the footage, I worked out a storyboard of the entire film. Although it was theoretical, it did give me an impression of how the subject matter could be structured. I then had to devise my own charts as did Barry Gordon who translated my charts into his own lab charts in a language that the lab could comprehend. The lab was most impressed with the clarity of Barry Gordon’s technical instructions.
|To edit the film I had a 2 picture head moviola which was the closest one could get to visualising the results. One could only use it to compare actions of any 2 shots at one time and designate the length of shots. In normal film editing, one works with the actual footage and soon discovers that frame or two on any shot can make a difference in rhythm. With the Ontario film I could never “see” the film develop. The charts indicated the movement of the shots. Because of the shortage in time their could be no changes in structure in any of the sequences once they returned from the lab. It was a tremendous discipline for me, for once I had made a creative decision, I could not change my mind. The entire concept of development therefore, was on paper in chart form.” (from http://www.in70mm.com/news/2011/canadian_short/place/index.htm)
There’s a short clip of Chapman’s film here:
So, like graphic designers of the time, Chapman had to provide the optical-printing lab at Todd-AO with a set of written instructions and multi-image storyboards, then wait for days or weeks while the lab constructed his multi-dynamic film. Nowadays we can visualise this more or less in realtime. What lucky bastards we are!
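What Chapman’s charts specified for the optical printer – each clip rendered as a pane of a given size and position within the master frame – is exactly what a few lines of code can do today. Here is a minimal sketch (my own illustration, not Chapman’s workflow or any particular package’s API), treating frames as simple 2D grids of grey values:

```python
# A minimal modern sketch of multi-dynamic compositing: paste a scaled
# 'pane' (a source clip frame) into a master frame at a chart-specified
# position and size. Frames are 2D lists of grey values (0-255);
# scaling is nearest-neighbour.

def scale(frame, w, h):
    """Resample a frame to w x h by nearest-neighbour sampling."""
    src_h, src_w = len(frame), len(frame[0])
    return [[frame[y * src_h // h][x * src_w // w] for x in range(w)]
            for y in range(h)]

def composite(master, pane, x, y, w, h):
    """Paste a scaled pane into the master frame at (x, y)."""
    small = scale(pane, w, h)
    for row in range(h):
        for col in range(w):
            master[y + row][x + col] = small[row][col]
    return master

# A black 8x8 master frame and a white 4x4 clip frame:
master = [[0] * 8 for _ in range(8)]
clip = [[255] * 4 for _ in range(4)]
composite(master, clip, x=2, y=2, w=3, h=3)   # a 3x3 pane placed at (2, 2)
```

Each entry in Chapman’s charts amounts to one `composite` call per frame; the dynamic effect comes from varying `x, y, w, h` over time.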
Vordemberge-Gildewart was a late member of the De Stijl group that had been formed by Theo van Doesburg and Piet Mondrian in 1917. One of the first painters to focus on abstraction from the very beginning of his career, he produced these balanced, beautiful and harmonious works throughout his life (he died in 1962). In Art and Photography (1968), the art historian Aaron Scharf describes the kind of harmonious aesthetic-technical innovation process evangelised by De Stijl:
“The idea of art as play, discussed by Kant and then by Schiller late in the 18th century, and elaborated upon by Konrad Lange at the turn of the nineteenth, became an important consideration in the aesthetics of twentieth century theoreticians. The spiritual pleasure inherent in the freedom of experimentation was believed by the De Stijl artist, Theo van Doesburg, to be an essential pre-requisite of the truly creative process – the gestalting or forming process as he called it. ‘Play’, he wrote, ‘is the first step of creation.’ In Film as Pure Form in 1929, characteristically structuring its evolution, he noted that, like other media, photography, having first gone through a phase of imitation, then a second stage of experimentation and manipulation in the mastering of its technical means, must now (as with film), give way to purely creative expression.”
Ducos du Hauron: colour pictogram 1870
Du Hauron’s experimental photography in the second half of the 19th century produced several world-class innovations. Du Hauron was amongst the first to explore colour photography, producing a carbon-colour print using cyan, magenta and yellow filters (the equivalent of the 20th-century Cibachrome print) as early as 1868. He wrote a book on colour photography the following year (Les Couleurs en Photographie, 1869), produced what is widely thought of as the first colour photograph – a view of Agen in the Angoulême region – in 1872, and made this still life of flowers around the same time:
Ducos du Hauron: Colour photograph using his carbon-colour 3-colour process (around 1869-1872).
Du Hauron went on to invent the anaglyphic (red-green) stereo process, and also the lovely colour pictograms (direct prints from objects) pictured above.
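The principle behind du Hauron’s three-colour carbon process is subtractive: each dye layer (cyan, magenta, yellow) subtracts its complementary primary from white light, and what passes through all three layers is the colour we see. A toy illustration of that principle (my own simplification, nothing to do with the actual 19th-century chemistry):

```python
# Subtractive colour in one line: each dye density (0..1) removes that
# much of its complementary primary from white light. Full magenta and
# yellow dyes together transmit only red.

def subtractive_rgb(c, m, y):
    """Map cyan/magenta/yellow dye densities (0..1) to the RGB light
    transmitted through the three stacked layers."""
    return (1.0 - c, 1.0 - m, 1.0 - y)

print(subtractive_rgb(0.0, 1.0, 1.0))  # magenta + yellow together pass only red
```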
What I love about this kind of innovation is that it emerges from a synthesis of art and science – by an artist who has the modernist or neo-scientific approach that characterises much of 20th century art and design. The creative exploration of phenomena, based on a neo-scientific but predominantly aesthetic investigation, is a strand of innovation that has given us Photoshop, PostScript and much of the core media-processing and media-authoring software that underpins digital media.
Following the fabulous Montreal Expo67, with its rich feast of multi-image films and a/v shows by the likes of Roman Kroitor (the inventor of the IMAX format) and Christopher Chapman, whose film A Place to Stand introduced his multi-dynamic image technique, it seemed only natural that Norman Jewison and cinematographer Haskell Wexler should use these multi-screen (multi-image) techniques in this stylish feature, showcasing the charisma of Steve McQueen and the sophisticated beauty of Faye Dunaway. Expo67 had been an inspiring world expo for those of us interested in a/v shows, ‘light-shows’ and the potential of non-linear pictorial narratives. No small number of artists and photographers were engaged in these speculations (Mark Boyle, Joan Hills, Malcolm Lewis and others in the UK; Andy Warhol’s Exploding Plastic Inevitable and multi-screen films like Chelsea Girls, 1966). The originator in the world of the movies was of course Abel Gance, with his phenomenal 3-screen panoramic Napoléon in 1927.
Haskell Wexler went on to make his masterpiece Medium Cool a year or so later. It is rumoured that Roman Kroitor, whose multi-screen exhibit Labyrinth was one of the high spots of Expo67, was called in as an advisor on The Thomas Crown Affair for the multi-image/multi-screen sequences, apparently used in the film to compress several long sequences of footage into one shorter multi-screen sequence.
Chris Chapman: A Place to Stand 1967
I was a post-grad at Clive Latimer’s Light/Sound Workshop at Hornsey College of Art the following year, and we produced a big show in conjunction with the Archigram group at Oxford MOMA, showcasing a number of multi-screen, immersive a/v works by the likes of John Bowstead, Ron Herron, Dennis Crompton, Peter Cook, Gary Crossley, Tony Rickaby and myself.
In 1970, Gene Youngblood’s Expanded Cinema traced the range of ways in which movie-making was stretching out beyond the confines of the single screen.
The use of multiple parallel strands of images and film-sequences strangely disappeared from the movies until the last decade or so, when Mike Figgis reintroduced the idea in his Time Code (2000). The use of multi-screen then cascaded through TV shows like 24 (2001) and Spooks (2002). But it is in new-media multi-window works like Chris Milk’s The Wilderness Downtown (2010) that the potential of this kind of pictorial narrative really becomes apparent.
Leon Theremin was a Russian scientist and inventor who created what was probably the first electronic instrument. He was inspired by the idea that free-form gestures could play a musical instrument (when the hands moved in the electrical field of his device), and he also invented installation technologies that responded to dancers. A prolific inventor (he developed the first passive listening device, or bug), he was nevertheless imprisoned by Stalin in 1938. He was reinstated as a USSR citizen in 1956.
One of the first reactions to music that many of us experience is mimicking the act of conducting or of playing an instrument. Theremin capitalised on this in his ultimate air guitar – a subtle and powerful instrument that responds to gesture. Along with his inventions of covert listening devices and motion detectors as well as the Thereminvox, this makes Theremin the godfather of much intermedia and new-media experimentation from the 1960s onwards. Versions of the theremin were used to create the characteristic Star Trek theme, and Bob Whitsell’s Electro-Theremin featured in the Beach Boys’ Good Vibrations and Wild Honey.
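The trick that turns a gesture into a note is heterodyning: the player’s hand adds capacitance to the pitch antenna, detuning a variable oscillator against a fixed one, and the audible note is the beat (difference) between the two. A rough sketch of that principle – the numbers and the detuning curve here are purely illustrative, not Theremin’s circuit values:

```python
# Simplified heterodyne model of the theremin. The closer the hand,
# the more the variable oscillator is pulled below the fixed reference;
# the audible pitch is the difference between the two frequencies.

FIXED_HZ = 170_000.0  # fixed reference oscillator (illustrative value)

def variable_hz(hand_distance_cm):
    """Hypothetical detuning curve: nearer hand -> larger detune."""
    detune = 2000.0 / max(hand_distance_cm, 1.0)
    return FIXED_HZ - detune * 10

def audible_hz(hand_distance_cm):
    """The beat frequency the player actually hears."""
    return abs(FIXED_HZ - variable_hz(hand_distance_cm))

# Moving the hand closer raises the pitch:
assert audible_hz(2) > audible_hz(10)
```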
The gestural interface (to computers) has been an area of promising research since the late 1970s, when Nicholas Negroponte’s Architecture Machine Group (ArchMac) demonstrated the Put That There! spatial data-management system – a gestural and voice interface. Theremin’s work was recognised by later electronic-music innovators like Robert Moog. The gestural interface was illustrated brilliantly by Steven Spielberg in his version of Philip K. Dick’s Minority Report (2002).
A brilliant creative team at the forefront of video projection-mapping and other realtime production special effects, Marshmallow Laser Feast had this to say about their work for Sony in 2012:
“We directed and produced 3 videos for the launch of the Sony PlayStation Video Store. Our job was to bring a living room alive with hints at various hollywood blockbuster franchises – so we decided to push projection mapping to a new level. We projection mapped a living room space with camera (or head) tracking and dynamic perspective. All content realtime 3D, camera (or head) is tracked to match and update the 3D perspective in realtime to the viewers point of view. Add to this real props, live puppetry, interaction between the virtual and physical worlds, a mixture of hi-tech and lo-tech live special effects, a little bit of pyrotechnics and a lot of late nights.”
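The ‘dynamic perspective’ Marshmallow Laser Feast describe – updating the projected image to the tracked viewer’s point of view – boils down to projecting each virtual 3D point through the viewer’s eye onto the wall plane. A minimal sketch of that geometry (my own illustration of the general technique, not MLF’s production code):

```python
# Head-tracked projection mapping in one function: a virtual point
# 'behind' the wall (z < 0) is drawn where the ray from the tracked
# eye (z > 0) to the point crosses the wall plane z = 0. Re-running
# this every frame as the eye moves keeps the point apparently fixed
# in 3D space.

def draw_position(eye, point):
    """Project a virtual point through the eye onto the wall plane z = 0."""
    ex, ey, ez = eye
    px, py, pz = point
    t = ez / (ez - pz)  # parameter where the eye->point ray hits z = 0
    return (ex + t * (px - ex), ey + t * (py - ey))

# Viewer dead centre: the point draws straight ahead.
print(draw_position((0.0, 0.0, 2.0), (0.0, 0.0, -2.0)))  # (0.0, 0.0)
# Viewer steps right: the drawn point shifts, giving correct parallax.
print(draw_position((1.0, 0.0, 2.0), (0.0, 0.0, -2.0)))  # (0.5, 0.0)
```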
Taking up photography at age 18, and being amongst the first unofficial war photographers until he was wounded in 1915, the Hungarian Kertész moved to Paris in the 1920s and began photographing the artists and dancers there, exhibiting his work alongside the likes of Berenice Abbott, Man Ray, Lisette Model and Philippe Halsman. In 1933, he was commissioned by the Parisian humour magazine Le Sourire to take some nude photographs, and he employed a fun-house (fun-fair) distorting mirror to do this, returning to a theme of distortions that he had explored back in Hungary, shooting swimmers underwater. These delightful ‘surrealistic’ photographs are very much in the spirit of the time – Dalí had painted ‘The Persistence of Memory’, with its melting distortions, in 1931.
Gustav Klucis is among the four artists (along with Hannah Höch, Raoul Hausmann and El Lissitzky) who claimed to have invented the political photomontage around 1918. His graphically sophisticated montages have more in common with modern graphic design than with art-agitprop, and there is no doubt that Klucis brought a considerable professional talent to his work for the Revolution. He became a professor of colour theory at the art school where he had studied (VKhUTEMAS in Moscow), and developed multimedia designs for the Agitprop programme. Despite his loyalty to the Communist cause, he was executed as a Latvian by Stalin in 1938. His wife and partner Valentina Kulagina only found out his fate in 1989.
Klucis is one of the great early 20th-century multimedia artists, he and his wife Valentina Kulagina designing some of the most technically sophisticated photo-graphics of the period (ranking with László Moholy-Nagy’s photoplastics), and sketching and designing all kinds of rostrums and PA-system stands for the Agitprop education and propaganda programme.
In an era used to digital photomontage and object-oriented graphic-design software, it’s hard to understand just how difficult it was to produce this kind of integration of photography and graphics for the dominant letterpress printing technology of the time. The creation of halftone zinc plates from the original photographs, the cutting-out and trimming of these metal plates, and the mounting of halftone with line-block graphics on a type-high chunk of plywood, had none of the ease and fluidity of 21st-century processes. The fact that they broke new ground integrating contemporary zeitgeist-design with photomontage (although there are precedents by the Reutlinger Studio in the first decade of the 20th century) helps us ignore the technical crudity of the printed image, and recognise the brilliant innovations of Klucis, Rodchenko, the Stenberg brothers and others during this period of radical innovation.
Broadcast initially as a BBC promotional video, illustrating the diversity of music and the quality of their music coverage, this video was so successful that it was co-opted for the BBC’s Children in Need charity appeal and used to promote the charity single of ‘Perfect Day’, raising nearly £2.25 million. I was at AMX Digital – a multimedia design studio in central London – and, on behalf of Apple Computer, we had a copy of the broadcast-quality taped video and were tasked with testing the quality of a QuickTime Sorenson compression algorithm on the video, so that it could be played online at the extremely limited bandwidth available to most people at this time. Supervised by AMX’s Alastair Scott, the QuickTime version was great, and illustrated for me the mutability of the digital media rapidly becoming mainstream.
The first in a wave of Java-based visualisation software that revealed the power of live computation to arrange text-data in a meaningful visualisation of their relationships, Thinkmap’s Visual Thesaurus was a revelation to all of us working at the cutting edge of design and technology. In 1997, I was at AMX Studios, working with the designer Malcolm Garrett on a book called Understanding Hypermedia 2.000, when the AMX lead designer Maxine Gregson showed me the Visual Thesaurus – a superb synthesis of design and software: functional, fun and funky at the same time.
Peter Mark Roget completed his Thesaurus of English Words and Phrases in 1852, having begun work on compiling a list of words linked by the same idea as early as 1805.
“The Visual Thesaurus is a 3D interactive reference tool, powered by Thinkmap, that gets students of all ages excited about words. Using our visualization technology, the Visual Thesaurus takes a unique, and remarkably beautiful, approach to presenting the results of a word lookup.
The Visual Thesaurus creates an animated display of words and meanings — a visual representation of the English language. The Thinkmap visualization places your word in the center of the display, connected to related words and meanings. You can then click on these words or meanings to explore further.” (Thinkmap)
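Underneath the animation, the idea Thinkmap describes is a graph: the looked-up word at the centre, its related words as neighbours, and a click simply re-centring the graph on a neighbour. A minimal sketch of that idea (my own toy dataset and functions, not Thinkmap’s implementation):

```python
# The thesaurus as a graph: each word maps to its related words. A
# lookup returns the centre word plus the neighbours to arrange around
# it; 'clicking' a neighbour is just another lookup. Tiny invented data.

THESAURUS = {
    "bright": ["brilliant", "luminous", "clever"],
    "clever": ["bright", "smart", "ingenious"],
    "smart":  ["clever", "sharp"],
}

def lookup(word):
    """Return the centre word and the related words to display around it."""
    return word, THESAURUS.get(word, [])

centre, neighbours = lookup("bright")
centre, neighbours = lookup(neighbours[2])  # 'click' on "clever" to re-centre
print(centre, neighbours)
```

Roget’s printed cross-references do exactly this on paper; live computation just lets the layout re-form around each new centre.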
Frédéric Vavrille’s delightful early (2005) interface for his LivePlasma recommendation engine (top), obviously based on the same software principles as the Visual Thesaurus, brought colour and multimedia – pictures, audio tracks, links to video, etc – to this kind of relational map.