Chapman’s Oscar-winning entry for Expo67 was commissioned by the province of Ontario. It uses a ‘multi-dynamic image’ technique – a phrase Chapman invented to describe his use of ‘dynamic frames’: filmed sequences that varied dynamically in size as they were projected, with multiple such ‘screens’ or ‘panes’ arranged within the vast screen he was using at Expo67, which measured 66 feet by 30 feet (IMAX size). Remember that in 1967 computers weren’t ready to process this kind of media-making, so Chapman had to use auditor’s printed spreadsheets to work out how his multi-dynamic film should be shot, how it should be storyboarded, and finally how the multi-image effect he wanted should be described accurately enough for the Todd-AO optical-printing specialists in Hollywood to assemble all his clips (180,000 feet of film) together – as he wanted – into an 18-minute multi-dynamic film.
Chapman’s A Place to Stand was his first widely-promoted attempt to realise his multi-dynamic image approach. This is a fragment of the 70mm film, with sample images optically (photographically) printed as dynamic frames within the span of the 70mm frame.
A Place to Stand is a multi-image treatment – a motion-montage – about the province of Ontario. Chapman’s film follows the ‘city symphony’ ideas of the 1920s (Ruttmann: Berlin – Symphony of a Great City, Vertov: Man with a Movie Camera, etc.), and, following the avant-garde experimental approach of those early attempts to capture a physical space, invents new techniques – a new form, in fact – presaging the digital compositing software that came into wide use over the last two decades.
Incidentally, users of contemporary 21st-century compositing software like Adobe After Effects, Nuke, Autodesk (Maya) Composite, Apple Motion etc. will find it hard to grasp the level of difficulty facing Chapman in his quest for the multi-dynamic image form. It is so easy today to assemble and view test composites in realtime, or after only a few minutes of rendering, and to see the results on large flatscreen display monitors as you work. Try to imagine this kind of compositing being planned using a standard Moviola. This is how Chapman describes part of the process:
|“180,000 feet of film were shot. Some additional footage of material I had not time to shoot myself was shot by David Mackay, using TDF cameramen. After completely familiarising myself with the footage, I worked out a storyboard of the entire film. Although it was theoretical, it did give me an impression of how the subject matter could be structured. I then had to devise my own charts as did Barry Gordon who translated my charts into his own lab charts in a language that the lab could comprehend. The lab was most impressed with the clarity of Barry Gordon’s technical instructions.
|To edit the film I had a 2-picture-head Moviola, which was the closest one could get to visualising the results. One could only use it to compare actions of any 2 shots at one time and designate the length of shots. In normal film editing, one works with the actual footage and soon discovers that a frame or two on any shot can make a difference in rhythm. With the Ontario film I could never “see” the film develop. The charts indicated the movement of the shots. Because of the shortage of time there could be no changes in structure in any of the sequences once they returned from the lab. It was a tremendous discipline for me, for once I had made a creative decision, I could not change my mind. The entire concept of development, therefore, was on paper in chart form.”
(from http://www.in70mm.com/news/2011/canadian_short/place/index.htm)
So, like graphic designers of the time, Chapman had to provide the optical-printing lab at Todd-AO with a set of written instructions and multi-image storyboards, then wait for several days or weeks while the lab constructed his multi-dynamic film. Nowadays we can visualise this more or less in realtime. What lucky bastards we are!
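To make the contrast concrete, the kind of layout Chapman had to specify on paper charts can be expressed in a few lines of modern code. This is a minimal sketch in pure Python (no real video I/O; all names are illustrative, and nearest-neighbour scaling stands in for the optical printer’s resizing): each ‘pane’ is a clip placed inside the master frame at a position and size that can change from frame to frame.

```python
# A toy model of multi-dynamic compositing: panes (sub-frames) of
# varying size are pasted into a master frame, one frame at a time.
# Pixel values are plain integers standing in for image data.

def scale_nearest(src, w, h):
    """Nearest-neighbour resize of a 2D list of pixel values."""
    sh, sw = len(src), len(src[0])
    return [[src[y * sh // h][x * sw // w] for x in range(w)]
            for y in range(h)]

def composite(master_w, master_h, panes):
    """panes: list of (clip, x, y, w, h). Later panes overdraw earlier ones."""
    frame = [[0] * master_w for _ in range(master_h)]
    for clip, x, y, w, h in panes:
        scaled = scale_nearest(clip, w, h)
        for row in range(h):
            for col in range(w):
                frame[y + row][x + col] = scaled[row][col]
    return frame

# Two tiny 2x2 "clips"; one pane grows across the master frame over time,
# which is exactly the kind of 'dynamic frame' Chapman had to chart by hand.
clip_a = [[1, 1], [1, 1]]
clip_b = [[2, 2], [2, 2]]
for t in range(3):               # three successive frames
    size = 2 + 2 * t             # the second pane grows each frame
    frame = composite(12, 8, [(clip_a, 0, 0, 4, 4),
                              (clip_b, 4, 2, size, size)])
```

The point of the sketch is the feedback loop: here every intermediate `frame` can be inspected immediately, whereas Chapman had to commit each such layout to a chart and wait for the lab’s optical print to see the result.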
Following the fabulous Montreal Expo67, with its rich feast of multi-image films and a/v shows by the likes of Roman Kroitor (co-inventor of the IMAX format) and Christopher Chapman, whose film A Place to Stand introduced his multi-dynamic image technique, it seemed only natural that Norman Jewison and cinematographer Haskell Wexler should use these multi-screen (multi-image) techniques in The Thomas Crown Affair (1968), a stylish feature showcasing the charisma of Steve McQueen and the sophisticated beauty of Faye Dunaway. Expo67 had been an inspiring world expo for those of us interested in a/v shows, ‘light-shows’ and the potential of non-linear pictorial narratives. No small number of artists and photographers were engaged in these speculations (Mark Boyle, Joan Hills, Malcolm Lewis and others in the UK; Andy Warhol’s Exploding Plastic Inevitable and multi-screen films like Chelsea Girls, 1966). The originator in the world of the movies was of course Abel Gance, with his phenomenal 3-screen panoramic Napoléon in 1927.
Haskell Wexler went on to make his masterpiece Medium Cool a year or so later. It is rumoured that Roman Kroitor, whose multi-screen exhibit Labyrinth was one of the high spots of Expo67, was called in as an advisor on The Thomas Crown Affair for its multi-image/multi-screen sequences, apparently used in the film to compress several long passages of footage into one shorter multi-screen sequence.
Chris Chapman: A Place to Stand 1967
I was a post-grad at Clive Latimer’s Light/Sound Workshop at Hornsey College of Art the following year, and we produced a big show in conjunction with the Archigram group at Oxford MOMA, showcasing a number of multi-screen, immersive a/v works by the likes of John Bowstead, Ron Herron, Dennis Crompton, Peter Cook, Gary Crossley, Tony Rickaby and myself.
In 1970, Gene Youngblood’s Expanded Cinema traced the range of ways in which movie-making was stretching out beyond the confines of the single screen.
The use of multiple parallel strands of images and film-sequences strangely disappeared from the movies until the last decade or so, when Mike Figgis reintroduced the idea in his Timecode (2000). The multi-screen device then cascaded through TV shows like Spooks (2002) and 24 (2001). But it is in new-media multi-window works like Chris Milk’s The Wilderness Downtown (2010) that the potential of this kind of pictorial narrative really becomes apparent.
Josef Svoboda was a brilliant scenographic artist and stage designer. He founded the Laterna Magika group; his Polyekran (multi-screen array) was invented for the Brussels World’s Fair of 1958, and his Diapolyekran had a major impact at Expo67 in Montreal.
“Polyekran offers the possibility of free composition, a free shaping and creation on several screens. Images of real objects and people are projected, but the relationships among them are not realistic, but rather supra-realistic, perhaps surrealistic. Essentially, it’s the principle of abstract and pure collage, which is an old and basic technique of theatre. “Op art” is perhaps simply a more recent name for it. In any case, the contrast of varied things on stage is basic to theatre; the objects thereby acquire new relationships and significance, a new and different reality.”
“In comparison with Polyekran, which is totally a film spectacle and technically a concern of film, Laterna Magika is theatre with living actors, singers, dancers, musicians. . . . On the one hand we used familiar scenographic techniques such as slides and film projection. New expressive possibilities were added by panoramic film and projection with multi-exposure on several screens at once. A second feature is the use of mobile screens that are joined to the performance of a live actor.”
(Svoboda, quoted in “O svetelnem divadle,” Informacni Zpravy Scenograficke Laboratore, Sept. 1958, p. 5.)