April 2003
The concept of using digitally created scenery and sets first occurred to me back in 1993, after a viewing of the film The Lawnmower Man. I thought that combining VR and animation with traditional live theater held great possibilities. In 1995 I was awarded a grant from Henry Ford Community College’s Technology Improvement Fund to explore the use of computer-assisted teaching aids. While most of the grant was for classroom items such as presentation software and hardware, I reserved a small portion for investigatory research in the area of computer-assisted theatrical production. From 1995 to 2000 we worked our way through various programs and hardware devices. Our task was additionally complicated by our desire to use 3D stereoscopic projection, so we tested various imaging programs and stereo display options. In early 2001 I was awarded a grant by HFCC’s Technology Improvement Fund Committee to purchase four Barco projectors and a polarization-preserving screen so that our images could be projected in 3D stereo.

In October of 2001 it was decided that we would use Lightwave as our stereo animation and rendering tool. It was at that time that I met Bob Biestek, who was to become our animator. Bob and I were immediately faced with a new set of problems. There was absolutely no information or literature about rendering in 3D stereo in Lightwave. It seemed possible in theory, and indeed several people on internet chat boards claimed to have rendered Lightwave scenes in 3D stereo. A period of experimentation ensued in which we tried to “interlace” left- and right-eye stereo renderings for 3D stereo display on LCD video projectors. Eventually we arrived at a suitable method.
The next consideration was whether to make this a purely “real time VR” play or to incorporate pre-recorded elements such as animations. While the purist concept sounds good in theory, it contradicts several already-established principles of live performance. Several “pre-recorded” factors already control the structure and pace of live theater; one is the script itself. The playwright’s use of language often creates a rhythm and pace of its own. The entire discipline of dance is predicated upon the axiom that pre-recorded (canned) music is interpreted by the artist. There are also effects that animation programs can achieve that “real time” programs cannot. It was decided to incorporate a variety of live and recorded elements into the show to give us a wide creative palette from which to choose.
The next consideration was the choice of play. The most obvious choices for shows using new media technology are ones that can utilize styles such as surrealism, expressionism, impressionism, and so on. Our department is no stranger to stylistic experiments. In 1991 I staged a version of Macbeth based on the principles of Antonin Artaud, and in 1993 I staged a version of A Macbeth; these two productions remain two of the most controversial shows ever presented in the Metro Detroit area. Additionally, I published a paper concerning the use of digital techniques in Artaudian theater production in 1996. The issue in this production, however, was different. The first requirement was to demonstrate the possibilities of the new technology to as broad an audience as possible. Rather than produce a “freak ’em out in your face” style production, we wanted as many traditional theater audience members as possible to view our show. The point was to demonstrate how digital techniques could be applied to traditional theater, and not just to the obvious choices such as performance art and other stylistic experiments.
The Tempest was chosen because it was easy to design the show with a science-fiction look, which seemed an appropriate choice for a show using such futuristic technology. Once the script and production concept were chosen, work began on the scenery and animations. We decided that we wanted to give the robot Ariel a little more to do than just flying in and out, so we located a program called Magpie Pro, a lip-synchronization program that allows an actor’s recorded voice to drive accurate lip movements on an animated model. The task was time consuming because a different model had to be created for each vowel sound. Finally, we decided that we wanted to add an element of real-time object manipulation. We located a program called World Up, which allowed an object created in Lightwave to be imported (via an intermediary program called Polytrans) and controlled with a joystick.
The Nature Of Stereoscopic 3D Used In The Tempest
Unfortunately, most people get their exposure to stereoscopic 3D from IMAX, which is essentially intended as a novelty rather than a serious application of art to technology. A major problem is IMAX’s practice of giving the entire film negative parallax, so that the whole image appears to project out into the theater. In this case the right eye sees less of the image at the left edge of the screen than the left eye does, which makes the image look odd and is very disturbing if one looks away from the center of the screen. Thus most of The Tempest’s images created an illusion of depth without jarring the audience out of the world of the play by pushing objects into their laps. Our goal was for the images to appear natural and to cause as little eye strain as possible, so that the audience could enjoy Shakespeare’s words as well.
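For readers curious about the geometry involved, the following is a minimal sketch (in Python) of how on-screen parallax relates to object distance when the cameras are converged on a chosen screen plane. The formula and the numbers are illustrative assumptions on my part, not values from our production:

    # Illustrative stereo-parallax calculation (hypothetical numbers).
    # For cameras converged on a "screen plane" at distance d, an object at
    # distance z shows an on-screen parallax of roughly p = t * (1 - d / z),
    # where t is the camera (eye) separation. A negative p means negative
    # parallax: the object appears to float in front of the screen.

    def screen_parallax(t, d, z):
        """Approximate horizontal parallax (same units as t) for an object at z."""
        return t * (1.0 - d / z)

    interaxial = 0.065   # about 6.5 cm, roughly the human interocular distance
    screen_dist = 5.0    # screen-plane (zero-parallax) distance in meters

    for z in (2.0, 5.0, 20.0):
        p = screen_parallax(interaxial, screen_dist, z)
        place = "in front of" if p < 0 else ("at" if p == 0 else "behind")
        print(f"object at {z:4.1f} m -> parallax {p*100:+.2f} cm ({place} the screen)")

Objects nearer than the screen plane come out with negative parallax; that is the effect we tried to use sparingly.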
The Design Of The Show
It was intended that the show would resemble 1950s science fiction films such as Forbidden Planet (1956) and The Day The Earth Stood Still (1951). (Forbidden Planet was a retelling of The Tempest: Dr. Morbius was Prospero, Robby The Robot represented Ariel, and The Monster From The Id represented Caliban.) Thus the spaceship, stellar images, effects, and planetscapes evoked the ambiance of the beginnings of filmed science fiction. Additionally, our production was intended as an homage to the early days of filmed 3D, which had its first wave of popularity in the 1950s.
All images were original to this production and were created by Virtual Theatricality Lab staff and students. All actors and technicians were students enrolled in a class entitled “Virtual Reality In Theatrical Production.” The images are in stereoscopic 3D: a left-eye and a right-eye view are created in imaging programs and output through polarized projectors. The audience wears glasses polarized at different angles for the left and right eyes. The glasses force each eye to see only its own view, creating the illusion of depth in three dimensions.
There are several different technological aspects to the production. The major areas are:
3D Stereoscopic Animation
The images are drawn, painted, and animated in a program called Lightwave 3D by Newtek. The animations are then processed using a technique called “interlacing” that places the left- and right-eye views on the odd and even video fields of each video frame. The final animation is synchronized to the soundtrack in a digital video editor and burned to DVD for playback on an ordinary DVD player during the production. The introduction (Space Storm) was rendered in this manner.
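To give a concrete sense of the interlacing step, here is a minimal sketch of the idea in Python. The file names and the use of the NumPy and Pillow libraries are assumptions made for this example; our actual pipeline ran inside our rendering and video tools:

    # Field interlacing sketch: even scanlines carry the left-eye view and odd
    # scanlines the right-eye view, so the display system can separate them
    # again into two fields. File names are hypothetical.
    import numpy as np
    from PIL import Image

    left = np.asarray(Image.open("frame_0001_left.png"))
    right = np.asarray(Image.open("frame_0001_right.png"))
    assert left.shape == right.shape, "both eye renders must be the same size"

    interlaced = left.copy()
    interlaced[1::2] = right[1::2]   # odd rows come from the right-eye render

    Image.fromarray(interlaced).save("frame_0001_interlaced.png")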
3D Stereoscopic Animation with Magpie for Lip Synching
The role of Shakespeare’s Ariel was portrayed by an animated character in the form of a robot. The robot’s voice was provided by HFCC actress Joanna Graham, and her words were synchronized to Ariel’s lip movements using the appropriate vowel and consonant images. HFCC student Nick Riley created Ariel’s lip-synch movements; Bob Biestek designed Ariel.
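As a rough illustration of what the lip-synch data amounts to, the sketch below expands a timed phoneme track into a per-frame list of mouth models (visemes). The phoneme names, timings, and mapping are invented for this example and do not reflect Magpie Pro’s actual file format:

    # Simplified exposure-sheet sketch: each entry says which mouth model
    # (viseme) is displayed on which frames. All names and timings are hypothetical.
    PHONEME_TO_VISEME = {
        "AA": "mouth_open", "IY": "mouth_wide", "UW": "mouth_round",
        "M": "mouth_closed", "B": "mouth_closed", "F": "mouth_teeth_on_lip",
        "rest": "mouth_neutral",
    }

    # (start_frame, end_frame, phoneme) for a short phrase at 30 frames per second
    track = [(0, 4, "rest"), (5, 9, "AA"), (10, 13, "M"), (14, 20, "IY")]

    def exposure_sheet(track):
        """Expand the phoneme track into one viseme model name per frame."""
        frames = []
        for start, end, phoneme in track:
            viseme = PHONEME_TO_VISEME.get(phoneme, "mouth_neutral")
            frames.extend([viseme] * (end - start + 1))
        return frames

    for frame, viseme in enumerate(exposure_sheet(track)):
        print(f"frame {frame:3d}: {viseme}")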
3D Stereoscopic Digital Video
The VTL has a stereo video camera that feeds images into a digital video editor, which gives us the capability of creating 3D stereo images of actors. That is how the large image of Prospero was created: it was shot in our green screen room and combined with a starfield background. Our chief engineer and systems designer is Alan Contino.
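For readers unfamiliar with the green-screen (chroma-key) idea, the sketch below shows the principle: pixels that are “green enough” are replaced by the background plate. Our production did this with a hardware video mixer, once per eye; the Python code, file names, and threshold values here are purely illustrative:

    # Crude chroma-key sketch: replace strongly green pixels with the starfield.
    # File names and threshold numbers are hypothetical.
    import numpy as np
    from PIL import Image

    actor = np.asarray(Image.open("prospero_greenscreen.png").convert("RGB")).astype(np.int16)
    stars = np.asarray(Image.open("starfield.png").convert("RGB")).astype(np.int16)
    assert actor.shape == stars.shape, "foreground and background must match in size"

    r, g, b = actor[..., 0], actor[..., 1], actor[..., 2]
    is_green = (g > 100) & (g > r + 40) & (g > b + 40)

    composite = np.where(is_green[..., None], stars, actor)
    Image.fromarray(composite.astype(np.uint8)).save("prospero_composite.png")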
Interactive 3D Stereoscopic Images
One of the most exciting of our production experiments is the ability to create objects that student operators can manipulate in real time. The bundle of wood, pignuts, jay’s nest, and berries are such objects. The Lightwave objects designed by Bob Biestek were imported into a program called World Up, a game developer’s package that allows real-time manipulation of objects. The objects were output in stereoscopic 3D and controlled with a joystick. Our World Up technician was Nick Riley. Margaret Green wrote the World Up script that enables advanced functions such as geometry substitution, which allows objects to transform into one another. Our World Up controller was John Wilson.
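The World Up script itself is specific to that package, so rather than quote it, here is a generic sketch in Python of the two ideas it implemented: reading a joystick each frame to move an object, and swapping geometry so one object appears to transform into another. The pygame library, the Prop class, and all names here are assumptions made for illustration and are not the code used in the show:

    # Generic joystick-control sketch with geometry substitution (illustrative
    # only; not the actual World Up script used in the production).
    import pygame

    class Prop:
        """A stage object whose visible geometry can be swapped at run time."""
        def __init__(self, geometries):
            self.geometries = geometries          # e.g. ["wood_bundle", "jays_nest"]
            self.current = 0                      # index of the geometry shown now
            self.x = self.y = 0.0

        def substitute_next(self):
            # geometry substitution: transform into the next object in the list
            self.current = (self.current + 1) % len(self.geometries)

        def update(self, dx, dy, speed=0.05):
            self.x += dx * speed
            self.y += dy * speed

    pygame.init()
    pygame.display.set_mode((320, 240))           # small window so events are delivered
    stick = pygame.joystick.Joystick(0)
    stick.init()

    prop = Prop(["wood_bundle", "pignuts", "jays_nest", "berries"])
    clock = pygame.time.Clock()

    while True:
        for event in pygame.event.get():
            if event.type == pygame.JOYBUTTONDOWN:
                prop.substitute_next()
            elif event.type == pygame.QUIT:
                raise SystemExit
        prop.update(stick.get_axis(0), stick.get_axis(1))
        # a real program would draw prop.geometries[prop.current] at (prop.x, prop.y)
        clock.tick(60)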
Rehearsal Process
The rehearsal process was challenging and exciting, and we went through many evolutions in our technical setup. At first we positioned our “control central” maze of computers, wires, video mixers, and sound equipment behind the screens, but soon found it necessary for all technicians to have a clear view of the action. Additionally, we started with all of the show’s technical elements in place at our first rehearsal in January, as opposed to the traditional theater method of bringing in technical elements a week or so before the show opens. This allowed experimentation with timing and methods. About halfway through the rehearsal process, our video mixer operator (Alan Contino) experimented with combining scenes and objects; the result was a much more dynamic stereo image.

We discovered many axioms regarding the use of digital scenery. One was that real props do not mix well with digital scenery and props. We originally had a complicated device that Ferdinand assembles during the second scene between himself and Miranda. It took three actors to place and remove the device; compared with the speed-of-light scenery changes our video mixer is capable of, the change seemed clunky and archaic, so we radically altered the use of those props for the scene.

Part of the creative process was very much like making a film. Animator Bob Biestek and I discussed concepts and visual themes and created storyboards for the opening sequence much as one would for a motion picture. Sometimes the simple things are the most complicated. For example, you will probably take for granted that the nut, jay’s nest, berries, and wood created by Caliban simply appear and disappear. This effect was impossible to accomplish with the video mixer alone, and a script had to be written by computer programmer Margaret Green. The research and resulting work took about 20 hours; the onscreen effect lasts less than half a second.
We consider tonight’s performance an experiment in possibilities. We have discovered some interesting advantages to using digital VR scenery and props. Obviously, one advantage is the ability to change a physical scene in less than a second, as opposed to the mechanical methods of traditional scenery. Another luxury of design afforded by the digital approach to theatrical production is the ability to change what doesn’t work in an efficient manner: if a painted drop does not work, it must be taken down, whited out, and completely redone, whereas digital drops require nothing but reworking and retouching of already extant files. Digital scenic stage techniques can present a play in a visually stimulating manner, incorporating 3D stereo effects, animations, transformations, and other effects that would cost many thousands of dollars per show if attempted along traditional lines. With computer-generated scenery, once the initial cost of imaging and production equipment is absorbed, costs for future productions become minimal.
I believe many of the things my cast, crew and I consider disadvantages have to do with the work and research of actually setting up the physical part of the show. If theaters were equipped with stereo imaging systems and computers featuring animation and VR programs as part of their standard equipment, the creative process would be free of much clutter, confusion, and hassle. I have no doubt that this will come to pass. In the theater of the future, I believe that the scenery, props, and maybe even some cast members will exist on a disc that will be placed in an imaging system capable of creating astounding digital scenery, characters, and props.
I would like to thank my student Nick Riley, who has been with me since the beginning of this project. His constant experimentation and dedication are responsible for a large part of tonight’s unique performance. He never gave up. Never. I appreciate that deeply. I would also like to thank our video engineer and videographer, Alan Contino, who came aboard at the beginning of The Tempest rehearsals; his professional help has added another dimension to our show. Geoff Collins in HFCC’s Voice and Data department was instrumental in getting some of the VR software and hardware to function properly. Margaret Green in HFCC’s Computer Information Systems department helped us immensely with writing code for the World Up program. I would also like to thank my Dean, Dr. Ed Chielens, and my Associate Dean, Mr. Rick Goward. Both of these gentlemen were instrumental in making our way much easier. They believed where others with less vision would not, and they both contributed a great deal to this project. I would especially like to thank HFCC’s Technology Improvement Committee, whose generous grants allowed us to develop our vision. I am fortunate to have such supportive people on my side.
The Virtual Theatricality Lab is dedicated to forging the live performance technologies and styles of the next millennium. The VTL exists to nurture the daring creative visions of performers, designers and technicians who will embrace the multidimensional technological performance arena of the 21st century and beyond.