The Edlund POV

Richard Edlund, ASC. Photo by Owen Roizman, ASC.


Every year, I try to check in with the legendary Richard Edlund, ASC, to gauge his view of industry advancements in the worlds of cinematography and visual effects, and this time around, as always, the conversation was educational. My decision to tap Richard’s formidable brain was timely—he received the Visual Effects Society’s Lifetime Achievement Award in early February in honor of his legendary career. In the unlikely event you didn’t already know, Edlund is among the most decorated visual effects supervisors in history, with a slew of Academy Award nominations, wins, special awards, commendations, and Sci-Tech honors for his various inventions and collaborations, dating from his days on the front lines helping George Lucas and John Dykstra, ASC, craft the original Star Wars at an embryonic ILM through his breakthrough years running Boss Film Studios, among other high points. He’s an Academy Governor, has been a driving force in the Academy’s Scientific and Technical Awards program for years, and is a 2008 ASC President’s Award winner, and that doesn’t even count all the BAFTAs, Emmys, and other honors that have come his way. (Not to mention Richard’s rich previous life photographing rock album covers, inventing the Pignose portable guitar amplifier, and shooting inserts and low-budget visual effects for many of the TV shows you and I grew up with back in the day.)


Photos of Richard Edlund, ASC and various crews from his Boss Film and Star Wars periods courtesy of Richard Edlund.

But Richard is also that increasingly rare breed of visual effects supervisor whose background is in photography and cinematography first, not computer science—he’s also an important member of the ASC, in fact. These days, he is as busy as ever, working with partner Helena Packer to line up projects for their duMonde VFX company in New Orleans. But Richard still found time recently to offer his thoughts on the higher frame rate (HFR), 3D, and 4k movements sweeping the industry, and lots more.

Edlund calls himself “a longtime cheerleader” for the higher frame rate work that Douglas Trumbull and Gary Demos advocated back in the analog days (and still), but he suggests the real revolution has been digital projection, because that development has made both 3D and HFR viable creative tools and feasible for big-screen viewing on a wide scale.

“It’s interesting to note that just a few years before the time that Peter Jackson decided to shoot The Hobbit at 48fps, digital projection started to become common in theaters,” Edlund notes. “As a result of digital projection becoming common, the strobing artifact [from mechanical film projectors] essentially goes away.”


That’s because the digital projector, as Edlund explains, is capable of reducing “dark time” between frames to almost nothing. “It’s just the time it takes to load the next frame from the server—a [tiny] percent of the full time [a frame is projected before switching to the next image],” he continues. “So there is no double-bladed shutter [as on a mechanical projector] involved anymore. For each frame projected mechanically, the pulldown takes a quarter of the time, and during this time, the first blade of the shutter closes; then the shutter opens for a quarter of the time. Then the second blade of the shutter closes for a quarter of the time, and then the shutter opens on that same frame for a quarter of the time. The reason for this is that, unless projected images are flashed on 48 times per second, the images will appear to flicker [to the human eye]. What happens is that, when the camera moves too rapidly, the brain anticipates movement from frame to frame and interpolates the action, and so the viewer will see vertical lines, stars, and other sharply contrasted objects doubly. This is an unwelcome 24 frame per second artifact, which we call ‘strobing.’ To the visual scientist, it is known as the ‘phi effect.’ Most people have no idea that, when they go to see a movie, they are only seeing it half the time!

“The point is that digital projection has brought us several advantages that are not much ballyhooed about, and one of them is the fact that you do not have to deal with that strobing, and let’s not forget about weave, jitter, dirt, scratches, and fading. The other big thing is that, when the cinematographer composes the image, the projector will project it exactly as he composed it. The projectionist can’t be slightly out of frame or cut something off anymore.”
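Edlund’s quarter-by-quarter breakdown can be sketched as simple arithmetic. This is an illustrative back-of-the-envelope calculation, not a spec for any particular projector: it just shows how a double-bladed shutter turns 24 frames per second into 48 flashes per second, with the lamp blocked half the time.

```python
# Timing of a mechanical 24fps projector with a double-bladed shutter,
# following Edlund's description (illustrative only; real shutter
# geometries vary by projector).
FPS = 24
frame_period = 1.0 / FPS            # seconds each frame sits in the gate

# The frame period is split into four equal quarters:
#   1) pulldown, while the first shutter blade blocks the lamp
#   2) shutter open (first flash of this frame)
#   3) second blade blocks the lamp
#   4) shutter open again (second flash of the same frame)
quarter = frame_period / 4

flashes_per_frame = 2                  # one per open quarter
flash_rate = FPS * flashes_per_frame   # 48 flashes per second, as Edlund notes

# Two of the four quarters are dark, so the screen is dark half the time.
dark_fraction = 2 * quarter / frame_period

print(f"flash rate: {flash_rate} Hz")
print(f"fraction of time the screen is dark: {dark_fraction:.0%}")
```

Flashing each frame twice keeps the image above the eye’s flicker-fusion threshold; a digital projector sidesteps the whole scheme by holding each frame until the next one is loaded.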

Freedom With Limits

Thus, for all these reasons, Edlund happily rejoices in the rise of digital projection and the looming end of film projection. “It’s time we left the technology of the Industrial Revolution behind,” he declares with a chuckle.

However, as HFR enters the discussion, Edlund points out that it was largely “engineering decisions” that led to previous standards, such as 24fps, which rose out of the need to run film at a high enough rate to provide adequate sound fidelity. In order to find room on prints for the sound track, he points out, the Academy Aperture eventually evolved. Even today, he stresses, there will likely continue to be similar parameters, or technological limitations, resulting from other types of engineering or economic pressures, even in the era of digital projection.

For example, “when you pan and move the camera real quick, you still don’t want to pan left and right too fast, because then it will judder,” he explains. “That is a term that became useful to people shooting IMAX, where the image changes greatly when the camera pans. The image can change by three or four feet on the screen if you pan too fast, causing a ‘judder’ effect when [digitally] projected on a huge screen.”

Therefore, while one artifact may die quietly, new technology gives birth to others.

“So it’s a matter of taste, isn’t it? I mean, some people are noticing the differences in higher frame rates and others aren’t,” he points out. “I’ve done lots of 60 frame per second tests. I worked on Showscan movies. What increases is the apparent resolution, which is why people say that [The Hobbit] looks maybe too clear in some frames. Someone I know told me they thought Gandalf’s staff looked like it was made of resin, for instance. Keep in mind: shooting at 24fps, each frame is exposed at about a 48th of a second. Therefore, when you pan the camera, the resolution at that speed can go to hell—you lose horizontal resolution. But, at 72fps, that period is just a 96th of a second, and therefore each frame is much sharper—you don’t have so much blur—but that might make some things in the frame look sharper than you want them to. So, in the end, you trade one artifact for another whenever you switch from one technology to another. You are losing things you are accustomed to, and therefore [seeing different or new things, which you might perceive as artifacts]. When David Fincher shot [2007’s] Zodiac using the [Thomson Viper FilmStream camera]—a system with a tiny 2/3-in. chip—he got enormous depth of field. And he decided to leave it—he shot it with infinite depth of field. Since then, most cameras have one Super 35mm-size CMOS chip, which has the same depth of field and soft backgrounds that we have all become used to [in the film world]. But still, these are all artistic choices that filmmakers are able to decide on.”
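Edlund’s exposure arithmetic can be sketched numerically. This is an illustrative calculation only, assuming a conventional 180-degree shutter, under which each frame is exposed for half the frame period—about a 48th of a second at 24fps, as he notes. (His 96th-of-a-second figure at 72fps implies a wider shutter opening; the sketch simply holds 180 degrees throughout.) The pan speed below is an arbitrary made-up number, chosen only to show the trend.

```python
def exposure_time(fps, shutter_angle=180.0):
    """Per-frame exposure in seconds for a given shutter angle."""
    return (shutter_angle / 360.0) / fps

def pan_blur(fps, pan_speed_px_per_s, shutter_angle=180.0):
    """Horizontal smear (in pixels) left on one frame during a pan."""
    return pan_speed_px_per_s * exposure_time(fps, shutter_angle)

PAN = 2000.0  # hypothetical pan speed: 2000 pixels/second across the frame

for fps in (24, 48, 72):
    t = exposure_time(fps)
    print(f"{fps:>3} fps: exposure 1/{round(1 / t)} s, "
          f"blur {pan_blur(fps, PAN):.1f} px")
```

Doubling the frame rate halves the smear, which is why HFR frames look sharper during movement—and why, as Edlund says, some objects can end up looking sharper than the filmmaker wants.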


And that is Edlund’s ultimate point: what is really remarkable about today’s advances is not any one particular advance, in and of itself. It is the fact that, under certain conditions, creative people have been freed to make creative decisions about whether to use those abilities at all, and if so, how, and to what extent, in order to express themselves artistically.

In fact, Edlund predicts that as 4k takes hold and continues its current advance into the consumer space, with home viewing and theatrical viewing becoming more similar experiences, filmmakers “could choose frame rates on a shot-by-shot basis. You could shoot something like a quiet scene in a dark room with two people talking at 12 frames a second. And then, if there is an action scene with a semi-truck hurtling at you, you could shoot that at 120 frames a second. And that is what is really exciting—what artists will be able to do with these kinds of advances.”

Edlund points to Ang Lee’s Life of Pi, shot by Claudio Miranda, ASC, as an example, in that case, of not only the artistic use of 3D, but also the artistic use of high-resolution imagery generally, even in the 2D version—a feat that duly impressed him.

“I didn’t see that movie in 3D, but I did see it at a very high resolution,” he says. “I found it mind-boggling, what the guys at Rhythm & Hues [under the direction of supervisor Bill Westenhofer] did. A believable CG lion that Ang Lee could direct, covered with hair moving every single frame at a high resolution? And each of those frames taking probably a week of number-crunching time to process it, using the fastest technology available today? One of the most amazing scenes to me was the animation where the hyena was attacking the zebra and then the lion gets into it, and the hyena freaks out and bounces off the side of the boat—hair-raising! That was really believable—great work.”

Will It Be Necessary?

But, mixed into Edlund’s awe is the need for compromise once again. As he mentioned, it took “outrageous number crunching, centuries of number crunching on Life of Pi” to achieve such a feat. And that is the rub for Edlund’s discipline—4k, 3D, and other innovations mean that “terrifically more data” needs to churn through visual effects pipelines, “more number crunching than we have ever seen before, which can slow things down like [rendering used to do].”

In that case, he adds, it won’t in the long term be a big deal, because manufacturers will no doubt continue the processing-speed march that has gone on unabated for years now. However, Edlund wonders how and when we will know whether the demand is real and necessary. 3D, for instance, he suggests may continue its theatrical advance, but he doubts that home 3D viewing will ever take off the way he expects 4k home viewing of 2D imagery will, for the simple reason that home 3D viewing in most circumstances simply can’t replicate the experience of a large cinema.

And, besides, if HFR can make images seem hyper real, doesn’t that lead to a paradox by which real-world images will logically always look better at higher frame rates than will manufactured, fantastical, supernatural, or sci-fi imagery of the type that Peter Jackson and James Cameron are using to promote HFR? Perhaps HFR’s niche should be small, thoughtful dramas rather than the Hobbits and Avatars of the world?

Perhaps, Edlund says, but perhaps not—the innovation probably isn’t necessary in most dramatic settings, he suggests.

“If a director has a particular reason to do it, he’ll be able to do it, but the question is, will it be necessary?” he asks. “Will it add anything? As [Marshall McLuhan] said, the medium is the message. If the technology works, artists will grab onto it and use it, and patrons will flock to theaters to see it. But if it is expensive, or there are flaws in the technology or presentation, then they won’t use it or people won’t want to watch it. So it kind of self regulates. Just because we will have [HFR] doesn’t mean many people will need to be using it—same with 3D and all the rest. It’s all a tradeoff. [Cameron and Jackson] can afford it and like it. Meanwhile, Christopher Nolan is still shooting Batman movies on film.”


All that said, when it comes to the visual effects profession, Edlund both raves about the digital revolution and laments that some of his younger peers are losing the kind of photographic foundation that launched Edlund himself to such dizzying heights. Like many people, he is glad that “we don’t have to shoot our plates in 65mm anymore and then lock the camera off, and we [usually] don’t have to worry about motion control anymore. I love that I can now operate cameras with an iPad or use a little Nikon or Canon high definition camera for shots. I have a tiny HD camera in my office sitting next to an old monster [Bell & Howell 2709 35mm camera], and we are talking less than 100 years elapsing between those two technologies. I can shoot 40 minutes of HD on that little camera and record sound with it also. It’s beyond belief how far we have come.”

However, at the same time, Edlund hasn’t changed his traditional desire to “find guys who ran camera or made a movie” to work on his visual effects crews. “At Boss Film, we used to have a list on the bulletin board of ‘must watch movies.’ We had folks coming in who had never seen Gone With the Wind or Citizen Kane. These days, I meet artists out of [major schools] and they don’t know any movies that pre-date Jurassic Park except, maybe, Star Wars. They can still learn a lot by running a camera, making mini movies, watching movies, and studying the world around them.”

Schwartzman’s Lament

John Schwartzman, ASC, works with Red Epic cameras and 3Ality stereo rigs during production on “The Amazing Spider-Man.” Photo by Jaimie Trueblood, SMPSP, courtesy of John Schwartzman, ASC.

These are exciting times for John Schwartzman, ASC. Heading out of a state-of-the-art digital 3D tentpole extravaganza, The Amazing Spider-Man, Schwartzman recently segued right into the modestly budgeted Disney feature Saving Mr. Banks, currently in production. That film is, literally, a “film”—it’s being shot on motion picture negative (Kodak Vision3 5213 and 5219 stock), directly in the wake of Schwartzman’s experience on Spider-Man participating in as complex a file-based digital workflow as the industry has yet seen.

In a recent conversation, a reflective Schwartzman was in a melancholy mood about the industry’s shift, despite his success straddling a traditional film world that seems to be phasing out of the mainstream in his industry and the new world of all-digital movie-making. In fact, he wondered how long cinematographers like him would even get opportunities to shoot film on major projects.

“I’m pretty much convinced that Saving Mr. Banks will be the last time that I ever use motion picture negative on a studio film,” Schwartzman told me. “I doubt it will happen after this one.”

Schwartzman and, I suspect, many of his colleagues are grappling with the nature of the transition they find themselves in the middle of. He certainly has mastered the new paradigm, has lots of good things to say about digital workflows and their potential, accepts that the industry move away from film is inevitable, and recognizes digital tools and workflows will only improve over time. At the same time, he laments that the primary reason behind these changes has nothing to do with filmmaking, what looks best on the big screen, or creative preferences—rather, in his view, it’s the child of technological innovation bumping directly into today’s cold, hard economic realities.

Photo by Jaimie Trueblood, SMPSP, courtesy of John Schwartzman, ASC.

The Projection Dichotomy

His recent immersion inside the world of big-studio 3D illustrates this dichotomy.

As I documented in my article in the August issue of American Cinematographer, Schwartzman’s team captured that movie in native stereo using Red Epic cameras and 3Ality Technica’s TS-5 computerized rigs, among other digital technologies. One of the points of the article was to illustrate how pleased Schwartzman was with these new acquisition tools for shooting stereo in the field. Indeed, he emphasizes that the production proved “it is now possible to acquire 3D in a way that does not push the production schedule anymore. You can now do a 3D movie in the same time you do a 2D movie, and you can acquire it at a very high degree of resolution.”

That was the good news. When I chatted with Schwartzman as he was heading off to shoot Saving Mr. Banks, however, he pointed out that acquiring in native stereo and projecting it pristinely on a big screen are two very different things. Such movies, he says, can currently be captured at “a degree of resolution four times higher than it will be projected to the average viewer. That’s because 4k is four times the resolution of 2k—not double the resolution. It’s not arithmetic, it’s logarithmic.”

“Therefore, we now have the ability with Epic and [Sony’s] F65 to record in 5k or 8k, when, in reality, the projectors that will show those movies are only projecting in 2k,” he adds. “Even the best system we have today for projection in 3D, besides IMAX—the RealD system—is 2k interlaced and closer to 1k.”
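The pixel arithmetic behind Schwartzman’s “four times” figure is easy to verify: doubling the linear resolution quadruples the pixel count (the growth is quadratic in linear resolution). The DCI container dimensions below are an assumption for illustration; other 2K and 4K variants exist.

```python
# Pixel-count arithmetic behind the 2K-vs-4K comparison. The dimensions
# are the common DCI containers (an assumption; camera formats differ).
DCI_2K = (2048, 1080)
DCI_4K = (4096, 2160)

pixels_2k = DCI_2K[0] * DCI_2K[1]
pixels_4k = DCI_4K[0] * DCI_4K[1]

print(f"2K: {pixels_2k:,} pixels")
print(f"4K: {pixels_4k:,} pixels")
print(f"ratio: {pixels_4k / pixels_2k}")  # 4.0 -- four times, not double
```

The same quadrupling applies at each doubling step, which is why an 8K frame carries sixteen times the pixels of a 2K frame.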

Thus, Schwartzman laments, few people will get to see the current generation of native 3D films, like The Amazing Spider-Man, with the brightness, dynamic range, and black levels they can currently experience with 2D movies. In other words, in Schwartzman’s opinion, “the viewing experience for 3D is different, because you gain depth and stereo. But to get that at the Cineplex, you have to give up brightness and dynamic range and resolution 100 percent of the time. If you are comparing apples to apples [in terms of color and brightness], to use an analogy, 2D would be high-def, while 3D is still standard def.”

“True IMAX is better, of course,” he elaborates. “By IMAX, I mean a movie shot on 70mm, 15-perf film. It is hard to have a movie look better than the way that Christopher Nolan and Wally Pfister [ASC] made The Dark Knight Rises look in IMAX. There, I saw a 70mm, 15-perf piece of film. But IMAX is the only venue whose entire business model is based on making and exhibiting beautiful images. There are about 600 of those theaters in the U.S., and that is a much different experience from seeing 3D in a typical multiplex.”

Schwartzman hastens to add that the 3D exhibition experience is not lagging, in his view, because the technology does not exist to improve it. He points to brightness jumps and other breakthroughs in laser systems currently being developed by manufacturers like Sony, Barco, Christie, Laser Light Engines, Red Digital, and others. Similarly, he points out that 4k projection systems already exist for an improved theatrical viewing experience, and are slowly easing into multiplexes. But few studios are routinely conforming motion pictures in 4k, he points out, so few audiences are getting to watch true 4k movies right now even if they are in a theater using a 4k projector.

“Saving Mr. Banks is being shot on film in an anamorphic format, full Academy aperture, but with little or no visual effects,” he points out. “And yet, Disney isn’t going to finish the film in 4k. Ironically, it’s a perfect film to finish photochemically—Scope has more resolution than anything [digital] that exists today, and yet theatergoers will not see half of what the negative captures. Ten years ago, we didn’t even think this was an issue to worry about.

“Sony, God bless them, is trying to finish in 4k to be future-proofed, but nobody is [routinely] doing 4k right now [for current theatrical viewing],” he says. “We will get there—there will be 4k flatscreen TVs coming to homes pretty soon. But right now, the studios are not [routinely] finishing in 4k. I heard there are supposed to be something like 17,000 4k theaters in the United States by 2015, but how many of them will actually be projecting in 4k any time soon?”

Photo by Jaimie Trueblood, SMPSP, courtesy of John Schwartzman, ASC.

It’s a Business

Schwartzman understands and accepts why this is so, of course. “Is the theater owner going to say to himself, ‘Am I going to spend $175,000 to buy a Sony or Barco or Christie 4k projector and the servers needed to run it?’ Right now, that’s a tough business model, and craftspeople like me understand that.”

Similarly, Schwartzman talks eloquently about the rapid proliferation of the digital intermediate being a business development, rather than a ubiquitous creative necessity across the board. And, he suggests film is being phased out of the larger studio theatrical movie paradigm for the same reason.

“It’s a complete business issue,” he says. “It’s all about distribution. It’s a lot cheaper to send a hard drive or beam from a satellite 3,000 versions of a DCP than it is to ship 180 lbs. of film. But 180 lbs. of film looks better—I don’t care what anyone says, I will argue that point. However, this is the world we are in, and so, we have to embrace it. The genie is not going back into the bottle.”

Thus, the truth, according to Schwartzman, is that “economic forces have basically put a governor on the engine of what we can show creatively [to the masses]. They have always done that, but the digital world changes much faster than the photochemical world ever did.”

In other words, the loving craftsmanship that cinematographers and their colleagues put into high-resolution imagery and stereo imagery will not be fully seen, absorbed, or appreciated by audiences for some time to come. And that is the heart of Schwartzman’s lament.

So what is a cinematographer to do? How should he or she respond in practical production environments, knowing that their artistry will eventually enter a financially based distribution maze, where numbers on a spreadsheet will dictate, as much as their hard work, exactly how their images are delivered to audiences? The answer, Schwartzman suggests, is that, for the cinematographer, nothing should really change creatively or philosophically on set.

“You still try to get the best looking release you can,” he says. “I never consider it in terms of how I shoot a movie. But in this case, it is the best-looking digital release. The truth is that everything will eventually be digitally projected, and therefore, it makes business sense to acquire digitally—for the studios, it just streamlines the pipeline. That is the world we are in now. For the craftsperson, that is difficult to swallow, but then again, [digital projection technology] will improve, and someday, people will be seeing better results in 3D and 4k. But, right now, it’s tough.”

Thus, Schwartzman’s only real complaint is that no one knows when, or how, economic realities will shift enough to permit the best exhibition technically possible of high-resolution and stereo imagery on a large scale. He is simply suggesting that only filmmakers “like James Cameron, with money and clout, have a hope of ramrodding changes” onto the business landscape that would allow laser or other new exhibition technologies to gain a foothold in the near future. More likely, such a shift, like the original digital-cinema shift before it, will take many years to find a business model capable of supporting it across the exhibition industry.

“I get to see [a pristine version] in 4k [while making a 3D movie], but I’m seeing it in 2D, not 3D,” he sums up. “When you see it on a big screen at 4k in 2D, or IMAX in 3D, it can look great. But 3D in 2k, compared to 2D in 4k—as a cinematographer, when I see it, I can’t help but be a little disappointed. There is nothing worse than seeing something shot by an award-winning cinematographer who knows what he is doing, but the projection is inferior—so inferior you would rather watch it in 2D. In my opinion, for instance, as great as Hugo ended up looking in 3D, it still looks better in 2D in terms of color and dynamic range and darkness. I hope they find a way to step up and raise their game in terms of projection of these beautiful images we can now capture at more than 4k. I can’t wait for the day when the only difference between 3D and 2D on a big screen will be the depth, and not dynamic range or brightness.”

Final Reflections from Bruce Surtees

Bruce Surtees during the Eastwood years.

Among the joys of researching and writing my upcoming book, Clint Eastwood: Master Filmmaker at Work (coming this fall from Abrams—find it here in the shameless plug department), was getting the opportunity to do an extended interview last summer with Clint’s original cinematographer, Bruce Surtees. Bruce passed away this past February at the age of 74 due to complications from diabetes, making my interview with him the final one of his illustrious career, as near as I can tell. Ironically, when we spoke, Bruce was upbeat about his health and said he was still receiving offers to shoot movies, but he had no interest, because he had keen memories of watching his father, Oscar-winning cinematographer Robert Surtees, ASC, labor on movie sets until shortly before his death in 1985. Bruce felt he needed to spend some time relaxing in his twilight years.

“I could go back to work, because I feel like my health issues are in check,” Bruce told me last July. “I lost 50 pounds—it’s not that hard once you get away from craft services and aren’t eating all the time. But I don’t want to work from 6 a.m. to 10 p.m. anymore. I did that all my life, and now, I need to do other things.”

And so, Surtees was spending much of his time fly fishing and enjoying digital still photography and art at his Northern California home, near the Carmel region where he first settled in the 1970s after shooting Eastwood’s directorial debut, Play Misty for Me, there on location.

Despite his desire to avoid the limelight, Surtees agreed to discuss his Eastwoodian memories with me last year, probably because my book was about his friend’s art and aesthetic. I count myself fortunate I had the opportunity—he gave me great insight into Clint’s methodology, his view of cinematography, and how specific early Eastwood films and sequences were shot.

Fond Friends

In my book, Eastwood calls Surtees “the guy that Jack Green (ASC), Tom Stern (ASC), and all of us learned our boldness from. Bruce was absolutely fearless with light levels.”

Indeed, Eastwood, Green, and Stern—the two men who followed him for lengthy stints as Eastwood’s cinematographers—and many others, such as Eastwood’s longtime key grip, Charlie Saldana, all reminisce fondly about the late cinematographer, his creative and lighting philosophies, and the great influence he had on the Eastwood aesthetic to this day. Surtees, after all, worked first as a cameraman and then as a cinematographer for Eastwood’s mentor, director Don Siegel, and eventually hired Green as an operator (at Eastwood’s request), Stern as his gaffer, and Saldana as his key grip. So the fact that there was and is great visual synergy in Eastwood’s imagery and style from Play Misty for Me all the way to last year’s J. Edgar, shot by Stern (about which I wrote a cover story in the December issue of American Cinematographer) should be no surprise.

On location in the Eastwood years, Bruce Surtees (rear left) with colleagues and friends including then-gaffer Tom Stern (front left) and key grip Charles Saldana (front right).

Green recalls being schooled by Surtees to “stay off the moth side” in terms of aiming his camera lens away from an actor’s brightly lit side, toward the darker, shadier part of the image—an aesthetic that is easily recognizable today as a fundamental Eastwood look. He also recalls becoming close friends with Surtees after a rocky start between them on Firefox, because Eastwood had asked Surtees to work with Green without giving Surtees any input into the choice of his operator. Green says he remains profoundly grateful to this day for Surtees’ decision to go to Eastwood and lobby for Green to be moved up to cinematographer following Pale Rider, Surtees’ last film for Eastwood, in 1985. In my book, Surtees states he simply felt Green was ready and deserved the break.

Stern, meanwhile, today calls Surtees a “brave artist who influenced me a great deal,” and he will happily go to great lengths to explain the pride he feels in having inherited the job and the aesthetic Surtees pioneered on Eastwood’s behalf. It’s a responsibility that, Stern says, makes him think of Surtees often when he is working on an Eastwood picture.

Eastwood was in Atlanta busy filming Trouble With the Curve for his producing partner, and first-time director, Rob Lorenz, when Surtees passed away in February, but told the Los Angeles Times Surtees was “very creative, he had good thoughts and ideas.” In the same article, Stern reiterated what he had told me—that Surtees “was a very important mentor to me.” When I spoke with Saldana recently, he became choked up speaking about his old friend, with whom he stayed in touch long after Surtees stopped working on Eastwood films, recalling fondly golf trips and boat rides in Northern California with Surtees. The longtimers on the Eastwood team, in fact, are full of anecdotes and good memories of working with Surtees on many classic Eastwood films.

Looking Back

And Surtees had great memories himself near the end of his life. When we spoke, he waxed poetic about getting to live for over 40 years on the Monterey Peninsula, commenting on what a wonderful filming location that region is. “I need to be visual, I need to have exteriors right outside (his window),” he said. “I don’t like modern cities, particularly L.A.—they look like boxes. I believe that is why you have so many great Italian cameramen—you get up in the morning, in a place like Rome, walk down streets where you have lived since you were a child, and you get very visual. Italians are visual because art is all around them, and that influences their lives.”

He compared Eastwood’s camera movement to good ballet, suggesting that is a good way for young cinematographers to study “good composition in movement. Staging shots, a sense of timing and movement, when to dolly—we would shoot a lot like ballet in that sense,” he said. “Many movies are shot with a proscenium effect—back and forth, closeups over shoulders, very little real composition and movement to speak of. We would shoot with movement going on for a long time.”

Throughout our conversation, Surtees emphasized the notion of simplicity—a philosophy that made him a good fit with Eastwood: use few lights, move them around, and you are good to go if you have good composition and subtle lighting. He also declared himself a man out of the black-and-white era, and lamented that today’s cinema has lost the lighting skill that classic black-and-white movies demanded back in the day.

And speaking of low light, Surtees remarked that although Dirty Harry was directed by Siegel 40 years ago last year—in 1971—Eastwood was, as was common in the relationship between Eastwood and Siegel, deeply involved in how the movie was shot and put together. “A lot of what you saw in that movie were Clint’s ideas—dramatic points, story points, camera moves,” Surtees told me. “Of course, Don Siegel listened to him—he had a lot of great ideas. Why wouldn’t he listen to him?”

In particular, Surtees called Dirty Harry “a great location film” due to its strategic use of the city of San Francisco. In fact, he enjoyed reminiscing about how that film’s visuals came together. Among his memories:


  • “One of the great shots is the helicopter shot (at the end of the Kezar Stadium sequence),” he explained. “We pull out and the (stadium) kind of fades away (into the fog). That was because the helicopter couldn’t come back down and land at Kezar Stadium. It was all fogged in and the helicopter couldn’t come back—it was too dangerous. So they flew over to Oakland to land, and it was fogged in there, so they ended up having to go to San Jose. But it worked for us—that is why the camera pulls away the way it does there.”


  • “The difference between Clint and Don and a lot of people was they didn’t like to shoot up front in a city,” he said. “They were more interested in shooting the back of San Francisco. Everyone shoots the bridge, Market Street, Downtown, Chinatown. But we knew the best way to shoot Chinatown was to get in the alley behind all the Chinese restaurants, so that is what we did.”


  • “And the scene where Clint’s character jumps on top of the (hijacked school bus)—that was not what you might expect. I decided to climb up on the bus and shoot the part where Clint jumps from the trestle bridge onto the back of the bus. I had an excellent operator, but I felt Clint was doing the stunt, and if he fell, he could have been badly hurt, so we probably wouldn’t want to try it more than once. So Clint did it and we got it, but his head almost fell into the matte box.”


  • “Also, the scene where they find the dead girl—that was shot at dawn. In the script, it could have been any time. But we all felt it should be when the sun comes up, so you can see the outline of San Francisco and the (Golden Gate Bridge). Clint was always discussing things like that with Don Siegel and I.”


  • “For the alley shot, where they find another body in the film—I shot that just with the lights off the police motorcycle and the car. No motion picture lights whatsoever. They would have been too bright for that kind of scene, creating the effect of a hard light. I went into the alley and said, why should we light it? It’s already lit.”


But perhaps the favorite of the shots Surtees captured for Eastwood came near the end of Honkytonk Man, when Eastwood’s character succumbs to his illness on a bed in a cheap hotel room.

“That was one of the best scenes we ever did,” he recalled. “There was all this natural light coming in through the window before I tried to light the scene in any way. We discussed how beautiful that was, and I told Clint, wouldn’t it be nice to shoot the (dying scene) all in one take in the natural light? He said he thought he could do it, and that’s what we did.”

Bruce Surtees (L) and his friend, Charles Saldana.

All photos courtesy of Charles Saldana.

Euro Styled, and Nicely Done

As awards season entered hyper-drive, I thought about two cinematographers in particular who I felt were likely to receive serious award consideration this year. With the ASC, Academy Award, and BAFTA nominations now out, it’s nice to see I was right about how their industry peers felt about their work. The cinematographers I was thinking of are the legendary Janusz Kaminski and Hoyte van Hoytema, FSF, NSC. Both are being recognized this year for their work on period films—Janusz for Steven Spielberg’s World War I piece, War Horse, and Hoyte for filming the Cold War-era spy film Tinker Tailor Soldier Spy, directed by Tomas Alfredson.

Illustrating how subjective award nominations are, both men received BAFTA nominations this year, Hoyte received an ASC nomination, and Janusz received an Academy Award nomination. Just my opinion, but I felt they were both deserving of nods from all three; of course, there were reams of fine work this year, and each body has different demographics and criteria for deciding these things. Nonetheless, I was impressed with Kaminski’s work because of the broad and painterly way he was able to romanticize the skies and landscapes of Europe while simultaneously visualizing the brutal, dark horror of World War I. van Hoytema also tackled Europe in a period piece, albeit one set many decades later, and in that case, I was impressed by the care and craftsmanship that went into painting the gray, drab and, as Hoyte says, “scruffy” nature of dimly lit Eastern Europe in the 1970s, where spies anonymously plied their trade at the height of the Cold War. I had previously been impressed with Hoyte’s work on a much different period piece last year, The Fighter (AC Jan. ’11), for which I wrote a feature article for American Cinematographer, and once again this year, I think he has proven himself to be among the world’s rising cinematographers. Janusz’s track record, of course, speaks for itself.

I talked to Hoyte, Janusz, and other cinematographers recently for an article in Variety about the art of shooting period work, but since Variety’s format and space permit only a cursory treatment of the topic, I thought I’d take the opportunity here to elaborate on the work the two men did this year. A detailed technical examination of how Tinker Tailor Soldier Spy was shot, written by Jean Oppenheimer, can be found in the December 2011 issue of AC, and Patricia Thomson’s examination of War Horse appears in the current January issue. Following on those articles, I got each man to share some additional creative thoughts about period work on the two projects.

Hoyte van Hoytema, FSF, NSC

As it relates to Tinker Tailor Soldier Spy, van Hoytema points out that he has only rarely shot contemporary stories in recent years—he previously shot a 1980s-era period piece with Alfredson, the Swedish film Let the Right One In (2008), and later did extensive period work on The Fighter, among other pictures. Generally, he says, he prefers and gravitates to such period work.

“I think it is always easier to be creative if you have had a little bit of distance to (the subject matter) somehow,” he suggests. “I think time gives things perspective, and so, you can be a little bit more analytical about certain things. You have many references in art and documentary imagery that helps you see how the time is defined in a visual sense. It’s nice, as a cinematographer, to be able to re-invent yourself a little bit by (visualizing) aesthetics that people know very well. There is a little nostalgia there also, and I think nostalgia is a good provocateur for creating visuals.”

van Hoytema talks at length about studying reference material on Eastern Europe in the 1970s to come up with the grayish/noir aesthetic seen in Tinker Tailor Soldier Spy. But in the end, he says, that aesthetic emanated from the particular story originally crafted as a novel by John le Carré. Thus, though the stories don’t take place all that many years apart, he took his visual approach on the film in a totally different direction from the work he did for David O. Russell on The Fighter.

“The cinematography on The Fighter was so adjusted for the actor’s performances that I would sometimes sacrifice more precise light and angles and lenses to make sure I didn’t limit the actors,” van Hoytema explains. “But on Tinker Tailor, I had more precise visual control—it was more about painting the universe in a different way. We wanted to make something that was really meticulous in storytelling in a cinematic kind of way that feels a little less random. It’s more thorough in its angles and cuts than The Fighter. This movie is more about mood, atmosphere, smells, in addition to the performances.”

On the collaborative front, the cinematographer also suggests that in period work, he feels an obligation, when feasible, to honor the labors of his production design and costume colleagues through his camera work. The palette and nature of many visuals, after all, are designed by those colleagues working hand-in-hand with van Hoytema and the director.

“In a visual sense, I worked very close with the production designer [Maria Djurkovic] on this project,” he says. “We spent a lot of time together, and she came up with a lot of inspirational material for me. So, yes, I adjusted my cinematography because of her work and the costumes, out of respect for them. We drew up some sort of palette feeling that we liked together, and also colors we wanted to avoid. So, when the actors came in, in a way, this whole world was sketched for them. We could give them lots of [visual] references as they started working on their performances, so in a way, it was a very directional process that we were all involved in together.”

Janusz Kaminski

Meanwhile, Patricia Thomson’s article in the January issue of AC details Janusz Kaminski’s work on War Horse. But I was also curious to hear his thoughts about visualizing World War I in War Horse compared to his award-winning effort to bring the stinging horror of World War II to life for Spielberg in Saving Private Ryan (AC Aug. ’98). Kaminski says camera movement and lighting involved totally different approaches on the two pictures. The frenetic, handheld approach to following action in Private Ryan was replaced by what Kaminski calls “a more straightforward approach” to depicting the First World War’s trench warfare, and a wider, “more objective” lensing strategy for the madness in the trenches.

“For trench war, we had to be substantially different than Private Ryan,” he says. “There is rain and smoke and explosions and all that stuff, so you had to be less personally involved [than with the intimate camera used on Private Ryan].”

However, the movie also features a beautiful, classical visualization of a cavalry charge as British troops, early in the war, descend on German lines. Initially, the charge of men atop majestic animals is painterly and stylized—an homage to the end of horseback cavalry as modern warfare came onto the scene. Eventually, of course, the attack goes wrong, the horses enter a dark forest where German troops lie in wait, and the darkness of trench-style warfare slowly takes over the movie.

I use the word “painterly” because, indeed, Spielberg and Kaminski were trying to evoke classic paintings of heroic cavalry charges on historic battlefields. Then, later, when things go dark and gloomy, once again, they were hoping to evoke classic imagery of the war’s blown-out, haunted battlefields.

“We wanted to cut through those shots and find what the emotional approach to the [cavalry charge] was,” Kaminski says. “It was meant to look glorious, like in so many paintings. There are First World War paintings from the battlefield that are beautiful like this. I wanted to convey that beauty—the horses charging, the glory of the soldiers pointing [their swords], almost like gladiators. So the idea was to start with the glorification of battle from those paintings, and then later, we see the reality of the war, death, and all the destruction.

“That’s when we change, and reference the famous black-and-white [photos] from the war, where you see the battlefield of Verdun [in France]. It was almost total destruction—land with one barren tree sticking out of the ground, destroyed tanks. We based a shot on one of those specific [photos]. But in terms of color and [camera movement], that was our total artistic creation.”

Kaminski is particularly proud of his work on War Horse for many reasons, not the least of which is the fact that the story gave him a chance to simultaneously create, in certain scenes, what he calls “storybook paintings” such as the richly golden sky at the film’s climax, but also brutally harsh imagery that realistically illustrates the war in all its madness. He suggests there is a perfectly understandable desire, or tendency, to romanticize imagery in period films like War Horse. But the ability to know when to give in to that compulsion, and when to resist it, is a skill Kaminski is well acquainted with.

“The easiest way to do a period movie is to have images perfect for that specific story,” he says. “So you might have to get away from [the tendency to do] beautification of the images in period films. Amistad (AC Jan. ’97) was a good example for us. That film dealt with slavery. Because that was the subject matter, I tried to stay away from the conventions of warm light. Consequently, sequences involving the slaves had bluish tones, not warm tones. I certainly wanted to use warm light, but the story did not allow for that because warm light romanticizes the story, and there was nothing romantic about that particular movie. I do like to go against the expected—that is interesting to do—but only if you have a good reason. [War Horse] was pretty conventional in terms of capturing the beauty of the particular period, except, of course, for the war moments. When we showed horses laboring to pull artillery up a hill, I did not want to have romantic images there—I wanted it bluer and greyer and grittier to have the audience feel the hardship of the particular scene. But in other places, I was able to have that romantic, storybook feeling, so that was one great thing about this movie.”

Tinker Tailor Soldier Spy photos by Jack English, courtesy of Focus Features.

War Horse photos by Andrew Cooper, SMPSP and David Appleby, courtesy of DreamWorks.

Outside Reality: “Bunraku” Blooms

The virtual “Bunraku” skies were hyper-stylized, with multi-directional lighting in order to evoke a feeling as far away from the experience of viewing a real sky as possible.

When cinematographer Juan Ruiz-Anchia, ASC, put together his Filmmakers Forum column for the upcoming November issue of American Cinematographer about his experience shooting the stylized live-action-CG hybrid film, Bunraku, he mentioned what an ambitious and experimental movie it was.

When I got to talking with him, Juan also described Bunraku as a sterling example of filmmaking collaboration. He told me, as he describes in his column, how director Guy Moshe put together a team—Juan himself, co-producer Alex McDowell, production designer Chris Farmer, and visual effects supervisor Oliver Hotz—that labored for a couple of years to make the movie look as if it were taking place in an Origami-style, bending-paper universe. The effort was meant to evoke the feeling of watching a theatrical presentation in the style of the Japanese art form of the same name—Bunraku—to which the movie pays homage.

Moshe agrees on the importance of the group effort he enjoyed with his colleagues to achieving the unique vision he had for the film.

“It was a great collaboration—I had my vision for it coming in, but it was a bit abstract at first, which was intended to allow (his team) a certain freedom to explore within the boundaries of the universe I imagined,” Moshe explains. “The key department heads would come in with suggestions based on ideas we discussed, and I would weigh these possibilities and be able to determine the ‘rules’ of this universe. And then, those guys helped me flesh it all out and bring it to life. Our collective effort took the seed that was the original vision and made it really bloom.”

As Juan describes in his column, Bunraku is a colorful, exotic, action-fantasy built around the concept of ancient Japanese puppet theater. It’s filled with delicately choreographed martial arts battles that borrow from tales of the Samurai and the Old West, and a wild, flying camera without boundaries that carries the action from one scene to another in a non-traditional way. The virtual background is based on theatrical stylings and classic art, features unique, constantly changing skies, and was incredibly complicated and technically challenging to create.

In his column, Juan explains how he shot the live action on film on stages in Romania—this was over three years ago, and he felt no available digital cameras were up to the task at the time. Since he covers that part of the story in detail, I decided to reach out to two of his key collaborators—director Guy Moshe, the mastermind of the whole thing, and visual effects supervisor Oliver Hotz, who oversaw the approximately 1,000 digital effects shots completed by his own company, Origami Digital of Los Angeles, as well as about 200 shots from other vendors. They explained the creation and evolution of the virtual Bunraku universe.

Totally Unreal

Filmmakers say the design of the Bunraku universe took years to plan as Moshe and co-producer McDowell (also a production designer by trade) created the “folding universe” idea. Eventually, as his column will tell you, Juan shot the live action in such a way as to establish a unique, stylized color palette of evolving/changing hues through extensive use of lighting gels.

Moshe says the idea was to make the skies look as far from realistic as possible, and let “all the visuals be perceived as fake in the sense of clearly being an alternate reality, or rather, an artificial reality. I wanted it to look colorful, theatrical, holistic, based on early European paintings like German Expressionism, and so on. When I went back and studied that kind of art, I discovered something very interesting—those paintings were broken up into graphic lines. Like them, I wanted a graphic interpretation of reality. That’s where we came up with the theme of Origami art, which relies on geometric lines and also ties in with our story’s Japanese themes. That became the building block for this universe we created.”

Moshe elaborates that he pursued this kind of look and style because of his original goal: to achieve cinematically what the Bunraku form of Japanese theater achieves on a stage—revealing the puppeteer manipulating the unfolding action. Only, in this case, the puppeteer is metaphorical—in essence, he is the camera and the camera movement. So a universe stylized in this fashion, beyond anything realistic, was the proper environment for making the illusion work cinematically.

“I wanted every frame to reflect this concept—that as an audience member, you are constantly aware you are watching a show,” the director explains. “I wanted the cinematic puppeteer to constantly be felt.”

Creatively Challenging

While Ruiz-Anchia shot the movie, Hotz and his team at Origami Digital worked to create the environment and match colors and themes to the live-action imagery. A key part of this mission involved an ongoing, evolving pre-visualization process on location, even after production was underway. Ron Frankel’s company, Proof Inc. of Los Angeles, under the direction of Proof’s pre-visualization supervisor, Jotham Herzon, created original pre-viz during the film’s lengthy development phase, which began the process of creating a template for the Bunraku world. But that visualization process was ongoing and continued unabated once production began.

Indeed, for certain complicated sequences, the filmmakers created a pre-viz or post-viz piece after almost every take on set to make sure the final effect would work as they envisioned.

Virtually all of “Bunraku” was pre-visualized down to minute details. Here, an animated version of the scene (above) and actors emulating the animation on set.

“We had a couple artists with me in Romania to do pre-viz on location, even on set as we figured things out,” Hotz elaborates. “There is one specific sequence that takes place with a character descending multiple staircases, fighting his way through a prison that is meant to appear as if it was shot in one long continuous take. For practical reasons (the staircase set was built for two stories but the story required the character to descend four stories), we determined that the sequence had to be shot in multiple passes and stitched together in post. To ensure that we were getting the plates needed to complete the shot, takes that Guy liked were composited together on set before each setup was struck. That gave us an opportunity to see what the final shot would look like while we were filming.”

The skies, in particular, were influenced by the famous paintings of Lyonel Feininger, and Moshe gives Hotz and his team great credit for adapting the particularly hyper-stylized concept and making it come to life with computer animation.

“Those skies alone took months, especially because we were creating them to work in a 360-degree universe,” says Moshe. “A lot of material you perceive inside a green-screen environment is created by light reflections. There is no real matter in a CG environment—you can’t type in ‘wood’ and get something that looks like wood, or iron or plastic. Instead, the only way to achieve it is by making a surface reflect light in a similar way, because everything we perceive in reality is just light reflection. Therefore, to make something look real, you need to manipulate things based on photographic reference and real-life situations, and then you mimic the shades and materials and how they behave in real light to make it seem real so the eye cannot tell the difference. However, in our movie, nothing is supposed to be real or made out of typical materials. We have a forest of paper trees, for example. No one has ever seen a forest made of paper trees, obviously. So there was already a challenge there in trying to make something look like real matter, but then stay true to our concept in which everything needs to be fake as though it were made for a theatrical stage. Just thinking of it can make your mind bend.

“Then, in addition, we had it all intentionally lit from multiple directions in order to create a deliberate, but beautiful, artificiality. So our sky emits some kind of avant-garde theatrical or circus type light, with two or three sources of light from every direction. That’s the work Juan did on set and it was gorgeous, but then, Oliver had to also do it with visual effects and spread it on an entire universe. Since there really are no points of reference, we had to determine it on a per shot basis.

“And then, Oliver also had to deal with technical issues about tracking every shot and removing greenscreens, which were more complicated because we shot on film and there was a lot of grain. Plus, he also had to deal with so many colors—we had lighting so intricate and specific that he had to recreate the same type of light on the horizon in a 2½-D kind of way. That’s very complicated to do with 3D tools, so I really have to commend Oliver.”

Quality Control

Moshe, as director, and Ruiz-Anchia, as cinematographer, were therefore far more involved with Hotz throughout the visual effects process than is typical, simply by virtue of often having to put their heads together and experiment with different CG looks until they pushed into territory that felt right to Moshe. That came after Hotz had been heavily involved on set during the shooting phase, literally documenting each of Ruiz-Anchia’s lighting setups for reference, among other things.

The other important innovation that Hotz’s team arranged for the production involved wiring multiple stages at MediaPro Studios in Romania with a live video conferencing tool. Since two or more units were working simultaneously throughout production, the system gave Moshe a way to communicate with Ruiz-Anchia and others, and see what they were shooting, even when they were on separate stages.

Director Guy Moshe shoots still photos of actor Woody Harrelson in the stylized light created onset during production of “Bunraku.”

“I guess you would call it a form of video assist, so that Guy could always monitor shots and give advice to and chat with the cinematographer,” says Hotz. “Before flying to Romania, we had developed a system for cataloging set data notes that linked the information with a recording from each camera feed. It’s based on a technology platform we developed at Origami called LOCO—in this case, we called it LOCO DVR. When we realized there would constantly be two units shooting with multiple cameras, we leveraged the scalability of our system and were able to provide Guy a way of essentially being in two places at once.

“We had a little visual effects desk at each stage, and Guy could walk to it and talk on a laptop to whomever he needed to talk to on the other stage and see exactly what they were shooting on that stage. We’ve had incarnations of it for prior work, but here, we geared it solely to provide interconnectivity on multiple stages. We could stream one or multiple cameras to any place the director was, either way. The editor (Glenn Garland), who was in Romania, also had access to those feeds, and each take was automatically archived on the server and linked to the appropriate set notes for him to review.”

Hotz suggests that this method of collaboration is quite unusual on a relatively low-budget movie in a stylized, hybrid fantasy genre, but it turned out to be a meaningful and educational experience.

“I wasn’t on shows like Sin City or 300, but from what I know about such work, usually they have dedicated visual effects supervisors who supervise teams that each do particular parts of the job,” he says. “Here, we wanted all of us together—not just myself. We wanted our artists at Origami sitting with the director, everybody interacting and contributing ideas. That let us get to a result quicker while, at the same time, giving our creative team the freedom of exploration.”

George Ratliff and Tim Orr: Seeking Cinematic Salvation

Cinematographer Tim Orr agreed to shoot director George Ratliff’s Salvation Boulevard because he loved the quirky script, was friends with Ratliff, and had always wanted to work with him. The dark comedy, starring Pierce Brosnan, Greg Kinnear, Jennifer Connelly, Ed Harris, and Marisa Tomei, satirizes religious obsession and hypocrisy in the world of mega-churches and wraps those concepts around an offbeat attempted-murder mystery. Creatively, Ratliff wanted to use light thematically throughout the story and felt he needed a lighting wizard to assist him in this mission. When Benoit Debie, who shot Ratliff’s previous feature film, Joshua, was unavailable, Ratliff quickly turned to Orr, whom he considers “a master of light.”

Pierce Brosnan is Pastor Dan, whose inner turmoil is at the center of “Salvation Boulevard.” Photo by Mark Preston. An IFC Films Release.

“I’ve known Tim for years and am a big fan of his early work with [director] David Gordon Green,” says Ratliff. “What always made me scratch my head in wonder was how he captured the kind of naturalism you see in his films, especially outdoor light. I always felt there was a Terrence Malick sort of feeling to those kinds of shots. He’s a master with light and a good guy to work with. Plus, on a movie like Salvation Boulevard, which is a small movie, we had to work very fast, and Tim has the discipline for that sort of work. On this movie, we did it in a 26-day shoot, working very fast, and yet, Tim helped me get a big movie feel to it, and succeeded in attaining an amazing, crisp look, which is very much in keeping with the subject matter.”

Orr’s collaboration with Ratliff on Salvation Boulevard, an independent film from Mandalay Vision shot in late 2010 in Michigan, offers an example of when things go particularly right in the collaboration between director and cinematographer. Ratliff emphasizes that he and Orr were simpatico on most key creative issues going into the project, and then, Orr would routinely help him figure out ways to make the plan better in the trenches.

Cinematographer Tim Orr.

Director George Ratliff.

“Early in the process, we sat down and did storyboards and, luckily, we were feeling the same way almost shot for shot,” adds Ratliff. “But then, we’d get on set, and inevitably, more often than not, Tim would find a better idea than I storyboarded, and he’d give me that particular gift each shooting day.”

And Orr, for his part, raves about how smoothly his partnership with Ratliff went, using terms like “easy conversations in the preparatory phase,” “clear ideas,” “it was easy for George to articulate what he wanted the movie to look like,” “no micro-managing,” and “very collaborative” to describe how they worked together.

“I get his vision in those first conversations, and then I go off and develop further ideas about how to light scenes, what they should look like, what colors to use in terms of selecting gels, and so on,” adds Orr. “Then, I bring in my gaffer [Jonathan Bradley] and develop those ideas further. That’s when we get down to figuring out the best instruments to use. When we get to color, we look through our gel selections and try to find the right colors that are evocative of the decisions we need to make for particular scenes.”

Orr adds that the two men relied heavily “for inspiration and reference” on the work of the Coen Brothers, particularly Fargo and No Country for Old Men.

“We also looked at deep, dark, brown wood tones like [the Coen Bros.] used in Miller’s Crossing,” Orr elaborates. “But in terms of the modern contemporary look of it all, No Country for Old Men was a strong visual influence. Like that movie, this movie has comedy but I still viewed it as more of a dramatic thriller. So we wanted to shoot it with a predominantly naturalistic look, but also a rich look, using color in a vibrant, evocative way. No Country for Old Men was shot so beautifully by Roger Deakins [ASC], and yet, at the same time, the photography gave you the feeling that, at times, you were certainly not safe, which is something we wanted to bring into the world of Greg Kinnear’s character [Carl], who gets dragged into a murder mystery. We wanted a darker, richer drama. So, in that sense, we lit it dark, but tried to use contrast in color for a lot of different locations, whether to underscore what is happening with a character or a particular idea or moment in the script.”

Greg Kinnear plays Carl in “Salvation Boulevard,” shot by Tim Orr. Photo by Mark Preston. An IFC Films Release.

Filmmakers shot the movie on film (Kodak Vision2 5260 500T stock for day exteriors and Vision2 5217 200T), with a set of Panavision Primo lenses and, most of the time, a single Panavision Platinum camera.

Orr’s expertise in manipulating light and color was crucial, as the story moves visually to the places where the characters are mentally at particular times. When they feel despair or a fall from grace, as opposed to feeling close to God or closer to attaining salvation, the lighting schemes change accordingly. This methodology commences with the story’s unintentional attempted murder—a point at which “a tonal shift occurs in the movie, and along with it, a visual shift,” says Ratliff.

“We start with a fairly naturalistic look early on, but when the [apparent murder] takes place in the office of the professor [played by Ed Harris], it’s a key moment in the story because it kicks off sucking the Carl character into the lie that the Pastor Dan character [Brosnan] cooks up to cover up the shooting,” Orr elaborates. “So, at that point, we try to steer it toward a darker tone, a darker atmosphere, and use a fair amount of decently heavy contrast, not only in terms of light values but in terms of colors. I tried to use a warm, lamp-lit key-light motivation, and then the exterior light was a reasonably deep cyan that just filters into rooms from one side.”

Soon, Pastor Dan comes close to losing his faith, and a key sequence shows him in his study and, eventually, watching TV, where the color palette of a late-night movie about the devil starts to take over his world.

“In the study, I tried to exaggerate whatever natural color was in there, but in a subtle way, so that it doesn’t take you out of the movie,” Orr adds. “I would use a sodium vapor look and uncorrected cool white so that they kind of collide with one another. Although there are natural colors in this world, they appear unnatural at times, underscoring the turmoil inside the character.

“Then, when he finds the devil staring at him from the TV across the room, the initial motivation was firelight, but I tried to turn it slightly redder as the personification of the devil right in front of him. As he begins to sense that feeling, he’s overtaken by it, and at that point, I try to overtake the room with a fairly red light. Those are examples of pushing color in hopes of underscoring the emotional tones of a scene.”

Another example comes near the end of the film after the pastor is stabbed, and lies bleeding in the rear of an SUV, presuming himself to be dying and hoping he’s about to enter paradise. Again, the scene starts stylized, dark, and red as he lies in the back of the vehicle.

“But then, the Carl character stumbles on the vehicle and opens the trunk, and when the trunk opens, there is a blast of white light seen from the eyes of Pastor Dan as his salvation,” Orr adds. “To him, it’s the white light of heaven. Then, we filter through that white light and gradually come back to a more natural exposure as we see it is the face of Carl he is looking at, not the face of an angel.”

Throughout production, this kind of work was often particularly challenging because the movie was shot entirely on locations. Thus, executing thematic lighting was, occasionally, tricky.

“There is one scene where Pastor Dan is in the shower and he is praying and an almost God-like light shines down on him in the shower,” says Orr. “That was a practical location and, of course, bathrooms are not known for their size, so that was a bit difficult. We rigged a mirror up into the ceiling and used a 4k HMI with a dimmer shutter on it, and bounced that light into the mirror at the correct angle to produce the shaft of light. That’s an example of how to deal with things when you do not have the ability to put a lighting asset exactly where you want. So the movie wasn’t so much about getting too crazy in terms of instruments that we used. It was more about being creative and collaborative about how we used them.”

This kind of approach is seen throughout the movie. The mega-church sequences, for instance, were shot in the Ford Center of Performing Arts in Dearborn, Michigan, with the filmmakers constantly utilizing old-school camera and compositing tricks to make the auditorium mimic a modern church, fill the place up, light it, display appropriate imagery on giant video screens, and film seemingly fervent religious rallies there.

“We had one day in the auditorium,” says Ratliff. “It was crazy, but you just work it out when you are all on the same page. We only had about 150 extras to film and composite together to fill the place up, we had to arrange fake video for the video screens, and we had to film in a fake AV room on a different day in a different location. To work on such a short schedule, Tim and I and his gaffer and crew had to be in alignment. Everyone just pitched in together to get it done—that was the collaborative nature of this movie.”

Orr suggests the plan worked so well because he, Ratliff, and Bradley “share similar tastes in terms of lighting. And yet, we all bring new ideas to the table. Filmmaking is such a collaborative art, and on a film like this, the more you make it a collaboration, the experience of making the film, and the film itself, are richer for it. That was certainly the case on this movie. I think we were all pretty much of a like-minded spirit.”

That spirit helped make happy creative partners of Orr and Ratliff, who are likely to work together again, both men suggest. Ratliff says “we were already friends, and on this movie, we found our working relationship.” In fact, the director chuckles, the only concern he has about Orr is when he approaches him with the words “I’m going to try and sell you on something.”

“He does that a lot,” the director says. “But that’s OK, because I’ve come to have such implicit trust in his eye. He’s a really fine DP.”

Attack of the Stereographers!

A 3D camera rig configured on the end of a crane while shooting “Pirates,” something filmmakers did routinely on location. (Photo by Rob Engle, copyright 2011)

When I wrote the cover story for the June issue of American Cinematographer magazine on the challenges and methodologies that the camera team led by Dariusz Wolski, ASC, faced in shooting Pirates of the Caribbean: On Stranger Tides in native stereo in faraway locations under rigorous conditions, I commented on how rapidly the stereoscopic paradigm is evolving, advancing, and shifting things on the larger industry landscape. Since that movie was shot—indeed, even since my article about the movie was written—that landscape has continued to shift. Tools, techniques, theories, and business considerations relative to major decisions like whether to capture in stereo or convert a tentpole movie later in post have all advanced dramatically.

In fact, even while I was writing the article, things were shifting on the Pirates project alone. Near the end of the shoot, the project tried out Element Technica Atom stereo rigs configured with early-generation, lighter-weight Red Epic cameras with 5k Mysterium-X sensors, after shooting the bulk of the movie with Pace rigs and Red One [4k Mysterium chip] cameras. That was done, in part, to familiarize Wolski’s crew with those rigs and the Epics, since Wolski was preparing to transition to Ridley Scott’s Prometheus using them—a project Wolski and his guys were working on at press time.

Also, Rob Engle served as the movie’s 3D Supervisor, performing the post-production stereo processes, including convergence, plate corrections, 3D visual effects reviews, and 2D-to-3D conversion for a small number of shots. I regret I didn’t realize Engle’s official credit was 3D Supervisor at the time I wrote my article (I called him a consultant), nor did I give him proper credit for the role he played early on in helping the project make crucial 3D decisions. He is a 3D veteran, of course, having supervised the stereo work on several projects, including Jerry Bruckheimer Films’ first 3D effort, G-Force (2009), an early benchmark demonstrating that quality conversion of major feature films was possible.

On Pirates, also produced by Bruckheimer Films, Engle was brought in early and played a crucial role in certain key decisions about how to capture the film. Chief among them were the decision to shoot with native stereo camera rigs rather than using a conversion process, and the recommendation to use the 3ality Digital SIP system to monitor rig alignment (see my June article for more on all this). He also suggested shooting with a large image pad, allowing parallel photography and convergence in post—a suggestion supported by the visual effects team. Due to commitments to other projects, Engle was unable to serve as on-set stereographer, so he recommended industry colleague Dave Drzewiecki to handle that role. (Drzewiecki left late in production to transition to another show, and Wolski’s second assistant, James Goldman, finished the on-set part of the stereography job.)
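For readers curious about the geometry behind the large-image-pad approach, the toy sketch below illustrates why shooting parallel leaves convergence free to be set later in post. Every number, function name, and the sign convention here are my own illustrative assumptions, not figures or tools from the production:

```python
# Toy model (my own assumptions, not production code or "Pirates" figures).
# With parallel cameras, a point at distance z lands on the two sensors with
# a horizontal offset ("disparity") of d = f * t / z, where f is the focal
# length and t the interaxial separation. "Converging" in post is simply a
# horizontal image translation: shifting the two frames toward each other
# subtracts a constant from every disparity, and the distance where the
# result hits zero becomes the apparent screen plane.

def disparity(f_mm: float, t_mm: float, z_mm: float) -> float:
    """On-sensor disparity (mm) of a point at distance z, parallel rig."""
    return f_mm * t_mm / z_mm

def converge_in_post(d_mm: float, shift_mm: float) -> float:
    """Parallax remaining after a horizontal image translation of shift_mm."""
    return d_mm - shift_mm

f, t = 35.0, 60.0                # focal length and interaxial (assumed values)
z_conv = 5000.0                  # distance we want to sit on the screen plane
shift = disparity(f, t, z_conv)  # translation that zeroes parallax there

for z in (2000.0, 5000.0, 20000.0):
    p = converge_in_post(disparity(f, t, z), shift)
    # Sign convention used here: positive = in front of screen, negative = behind.
    place = "screen plane" if abs(p) < 1e-9 else ("in front" if p > 0 else "behind")
    print(f"z = {z / 1000:4.0f} m -> residual parallax {p:+.3f} mm ({place})")

# The "pad": the translation consumes `shift` mm of image width on each frame,
# so the capture must be wider than the delivery frame by at least that margin.
```

The point of the sketch is that post-convergence is nothing more than a horizontal translation of each eye's frame, which is exactly why parallel photography needs extra captured width at the frame edges.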

Stereographer Dave Drzewiecki on location during production of “Pirates of the Caribbean: On Stranger Tides” using a 3ality Digital Stereoscopic Image Processor (SIP) system and a Pace equipment rack. (Photo by Rob Engle, copyright 2011)

Since then, several huge feature films (Prometheus, The Amazing Spider-Man, The Hobbit, and Jack the Giant Killer, to name just a few) have taken up on-location stereo capture, and a large swath of other major feature films, originally captured in 2D, are now either being converted to produce a 3D version or, at least, putting that option on the table for serious consideration. The recently released Thor exemplifies the conversion approach, as does Green Lantern, a film whose production story you can read in my piece in American Cinematographer’s July issue, and there are many others.

Since the tools, people, timelines, financial arrangements, and creative use of 3D vary wildly from project to project, I was curious to figure out what part of the equation is, in fact, consistent, and who is indispensable. The answer to both questions is, loosely, “a 3D expert or experts” to one degree or another, although further clarification is needed to explain whether we are referring to the on-set stereographer, a post stereographer, a combination of both, or particular combinations of “3D consultants” or “3D supervisors” in various permutations. But no matter what you call them, no matter how many of them there are, and no matter how they divide the labor, what is clear is that the good ones all have both extensive experience on stereoscopic projects and a great deal of education about the science and art of composing stereo imagery. Thus, every 3D production for mass distribution needs their help at some level, and no one needs it more than cinematographers.

Dave Drzewiecki, left, and Rob Engle on location with "Pirates" while working in a light rain on the island of Kauai. (Photo courtesy of Rob Engle, copyright 2011)

To clarify these and other points, I went back to chat with both Drzewiecki and Engle, who normally works as an effects and 3D supervisor at Sony Pictures Imageworks, but who worked independently on Pirates on behalf of Bruckheimer Films. Both men have worked on several major 3D films in various capacities in recent years, are involved with education programs and events designed to teach industry professionals about the nuances of making movies in stereo, and both have strong views about where the 3D trend is going right now, and where it needs to go next.

Information is Out There

Both emphasize that the current 3D feature film trend has some things in common with its synched-projector predecessors in the 1950s and, briefly, the single-strip processes of the 1970s and ’80s—namely that capturing in native stereo remains bulky, risky, and complicated, although perhaps less so than when “they used to have to put two full-size Mitchell [film] cameras together in a blimp configuration as monster rigs,” as Drzewiecki recalls. What has changed is the maturation of digital cinema, which allows theaters to exhibit 3D movies en masse, removing many of the barriers that made it difficult to seamlessly, consistently, and accurately shoot, manipulate, and project 3D in a film format in earlier eras. Drzewiecki, who is something of a history buff regarding the ebbs and flows of stereoscopic feature filmmaking, adds that this is why 3D is finally viable enough to do commercially on a continual basis.

Inventor Floyd Ramsdell's embryonic 3D film camera rig, designed on beam-splitting principles, from the late 1940s.


Diagrams from Floyd Ramsdell's original patent application for his beam splitter technology, originally filed in 1949. Stereographer Dave Drzewiecki suggests Ramsdell's work illustrates that the scientific and geometric understanding of cinematic 3D image capture is nothing new and has been available for decades.

To examine Ramsdell’s entire patent application, check it out online here.

But it remains less than easy to do well, whether the stereo images are captured in a controlled environment, on location, or built later in post. The biggest problem, both men agree, is that modern filmmakers, including many cinematographers, are stuck in 2D conventions when it comes to composition, framing, depth of field, and other things. Therefore, they suggest, the industry needs to get educated, and in the meantime, needs to open normally close-knit creative teams to input from 3D experts.

“It took Avatar to make [3D] believable for the industry in a business sense,” says Drzewiecki. “Before that, 3D had ups and downs but was never consistent, and as a result, there were not a lot of 3D filmmakers or cinematographers with 3D experience around to pass their knowledge down the line. For the information to be carried out to the next generation, the next person, the next production, you need a consistency of execution—an educational process has to happen, and that will take time.

“But, that said, the information is out there. What is new is the equipment—the cameras and the rigs. But that is just hardware. The principles of stereo, how you use it effectively to block a shot or frame a shot—that information has been around for years. They had specialists all along, going back to the days of House of Wax (1953)—even earlier. They weren’t always understood, but they were there, and they knew the mathematics, the geometry of the relationships between the left-eye and right-eye images. I’ve read articles from the 1950s, even the ’30s, whose authors fully understood the geometrical relationships of the camera environment versus the projection or viewing environment. So, in that sense, with that information available, and better technology, there is now no real excuse for not doing 3D well—except brute-force ignorance. It’s been documented and studied, but only a small group of people has that information. Will they be listened to or not?”

Central to this discussion is a simple technical challenge—how best to align imagery so that audiences get a clearly visible image they can absorb easily, without taxing their eyes and brains. Rob Engle suggests that it’s essentially impossible to photograph a 100% perfect 3D image to put onto a big screen. But, he adds, with the right knowledge, pipeline, tools, time, and money, stereo can now entertain the masses as a legitimate cinematic device. That means forgetting the notion that, even with the most sophisticated technology, one can just point-and-shoot one’s way to stereoscopic success.

“My definition of a perfect 3D image is where you switch between left and right eye images, and the only thing that changes is the horizontal position of the objects in the picture,” Engle explains. “The main idea when you are shooting 3D in the field is to try and capture in-camera as much of the actual depth as you find in the real world. The challenge there is that, even with these amazing rigs, you have two different cameras with two different optical paths. It is nearly impossible to align them perfectly, but you want the alignment to be as good as possible. You can then do more work on the images in post, because after all, you are showing people a two-hour-plus movie, and any misalignment would cause eye fatigue and take them away from enjoying your film.”

Therefore, Engle emphasizes, a significant part of the equation involves giving the movie a stereo massage in post, even for movies shot in stereo. That is a big part of what he does on shows like Pirates, which he was just finishing up when we spoke at press time. Working with a modern 3D-capable DI system, he routinely pores over edited versions of such films.

“I’m looking with filmmakers at the edit for creative things mostly,” Engle says. “Where to play things in depth, correcting technical errors like color, focus, and alignment, and working with visual effects vendors to give them feedback on what works or doesn’t in terms of their work. With the extra image pad we had on Pirates, it wasn’t uncommon to make slight framing adjustments in 3D to give a better stereoscopic composition, as well. Generally, I’m overseeing all aspects of the 3D to make sure the film is satisfying creatively in terms of what the filmmakers want, and technically, to make sure it does not fatigue people to watch it. So I feel, regardless of the size of the budget, that it is important to have someone in post who knows something about 3D to oversee those details.”

The Current Embryonic Stage

All of which means that people like Drzewiecki and Engle are rapidly rising on the ladder of importance in terms of creative collaborators who intimately partner with directors and cinematographers on 3D features. They, after all, are the ones with, as Engle calls it, “a 3D mindset.” Engle suggests the stereographer’s job on set, for instance, is to monitor the production and “feed that 3D information back to the DP.”

“The stereographer is part of the camera team,” Engle says. “The DP can’t possibly do it all at the same time—he can’t shoot the film creatively and think about every aspect of 3D. So he needs someone to come back and say if you change the shot this way, or move the camera that way, it will work more effectively in 3D. At this stage in the evolution of 3D, most DPs are not used to thinking in [stereoscopic depth], and appreciate advice from someone who has that experience. I’m working on The Amazing Spider-Man now, and have been having a great time working with [cinematographer] John Schwartzman [ASC]. We started our relationship on Spider-Man, but he and I also got to trade feedback on Green Hornet, which he shot and for which I supervised some of the conversion work. That collaboration helped build our relationship, and is now paying dividends on Spider-Man.”

Drzewiecki emphasizes that regardless of the technical challenges or limitations, the real roadblock the industry faces in terms of stereo production lies in educating filmmakers how best to use the medium creatively. He warns that, “often 3D is still a gimmick forced upon [films] by studios, and the filmmakers do nothing creative with the format. Sometimes, 3D is not even a creative consideration during shooting. The truth is, few [filmmakers] really know today how to use 3D creatively and effectively. The subject often gets too caught up in conversations about equipment, not technique.”

Therefore, when experts like Drzewiecki and Engle show up, the way they can most efficiently interact and collaborate with cinematographers, directors, and others remains an evolving process. Those relationships are new, and the technology is new—sometimes brand new. Indeed, it’s worth remembering that despite all the movie industry’s past attempts to make 3D into a lasting, substantive business, the latest attempts, and the technology and techniques that go along with them, are still embryonic. The Polar Express’ arrival as the first full-length IMAX animated 3D feature (its 3D was also supervised by Engle) was only seven years ago, and Avatar’s creative, technical, and financial breakthroughs took place only a couple of years ago. Until just recently, then, 3D tentpole features were rare and risky events.

That’s changing, but not seamlessly, easily, or cost-effectively in many cases. Thus, for the time being, it appears that for all the “wow” power of 3D, stereo will make filmmaking harder, not easier, and will reduce, rather than expand, the options available to filmmakers in the field. Both Engle and Drzewiecki feel this will remain the case for some time to come. Therefore, despite their own vested interest in seeing 3D grow and thrive, they caution that it is not always appropriate when the resources, time, or visions of particular filmmakers do not coincide with the requirements of high-end stereo production.

As far as stereo acquisition goes, Drzewiecki compares the situation to the introduction of sound to motion pictures. In the long run, that development fundamentally changed the industry forever. In the short run, when it first happened, he suggests, filmmakers got bogged down with all sorts of technical limitations while trying to figure out how to make the darn thing work.

“Having to record sound threw the filmmaking process overall back many years in terms of creativity when it first happened,” Drzewiecki suggests. “There were all sorts of limits placed on sets and actors and equipment to permit them to record sound adequately when the whole thing began. At this point, 3D rigs are still bulky, unwieldy, and in fact, plagued with problems from time to time. So more time will have to pass before you get a larger group of technicians that are comfortable and confident, and equipment providers that have the bugs worked out of their systems. That will all take time.”

But the stereo conversion process has its own limitations, as well. One need look no further than 2010’s Clash of the Titans for an illustration. The 3D problems on that picture, however, were largely a result of business getting ahead of the process—“the production simply did not allow enough time for an adequate conversion to be done on that show,” Engle suggests.

“Going into [Pirates], we knew we had a very short post-production schedule which would not lend itself well to a conversion,” Engle adds. “Part of the decision to shoot native on ‘Pirates’ was due to that inflexible post-production window. What the industry learned in 2010 is that it is not possible to convert a movie to 3D with an eight-week conversion schedule and do a decent job. That is what got conversion a lot of negative feedback. So, now, there is an evolution across the industry that 3D can be used as a creative tool, but only if you start early enough, plan it well, and take enough time. Some studios have really understood that—[Dreamworks Animation] has built 3D into its pipeline and finishes all [CG] movies in 3D as part of their official process, for instance. And now, across the industry, despite time and cost pressures, people are building more time into their post schedules for 3D conversions. At the same time, there are now several companies that can do a great job with that if you give them enough time to do it properly.”

Seeking Education

These and other issues return us to the topic of education. Since early 2010, the Sony 3D Technology Center has been offering training programs and educational opportunities for industry types interested in learning stereo, and Drzewiecki has been teaching classes on the subject there and elsewhere in partnership with Local 600. Engle has spoken on these topics at numerous industry educational events for the ASC, AMPAS, SIGGRAPH, the DGA, the VES, the I3DS, and others. And the ASC Technology Committee, of course, is closely studying emerging technologies and standards for stereoscopic filmmaking.

But even with those important programs and projects, and with more on the way, Drzewiecki points out that the industry still lacks enough qualified instructors who have actual in-the-field experience making stereoscopic films, rather than mere theoretical knowledge. And, besides, he adds, “the 3D experience is different for everyone anyway. We all have different levels of depth perception—it’s a lot like color in that regard. Some people feel that ‘less is more’ when it comes to 3D—just adding a minimal amount of depth can make the images fascinating. But others feel just the opposite. So it’s definitely a matter of taste.”

But as the industry travels further down this path, as technology continues to improve things, as education proliferates and professionals get more stereoscopic experience, there is every reason to presume the cost and complexities involved with making stereo movies will eventually come down. Therefore, it is reasonable to assume that the business of 3D movie-making is, this time, here to stay.

But on the other hand, as Engle emphasizes, that depends on that whole series of assumptions coming to fruition. At the end of the day, he says, it will all come down to whether or not risks associated with this kind of filmmaking go up, or come down, as far as filmmakers, studios, and financiers are concerned.

“It will become easier to shoot native stereo faster and more cost effectively, and it will also become cheaper and easier to convert movies,” Engle says. “If that continues, we’ll see a lot more stereo, and it will go far past being a fad as it was in the past. With that in mind, there is likely to be a seesaw effect as to whether to shoot in 3D or convert, as each technique becomes better and more cost-effective. Additionally, each production will have unique requirements, which may push the decision one way or another. Currently, it seems that conversion is popular with many films if for no other reason than it allows the studio to defer the risk of deciding to make the film in 3D until later in the process.”

For another, detailed take on where the 3D trend may be taking us, from the point of view of a veteran cinematographer, check out this fascinating blog posting John Bailey, ASC, put up on this website last year.


Hardwicke’s and Walker’s Fairytale Fun

Director Catherine Hardwicke guesses that her new film, Red Riding Hood, is possibly the first major studio feature film with a female director, cinematographer (Mandy Walker, ACS), costume designer (Cindy Evans), editors (Nancy Richardson and Julia Wong), and digital colorist (Maxine Gervais). She joyfully refers to these colleagues as “badasses” and “rock stars” whom she was “honored” to work with on reimagining a classic fairy tale for the big screen. Recently, I wrote in the April issue of American Cinematographer about Hardwicke’s close collaboration with Walker and Gervais to paint the film’s visuals during the digital intermediate phase at Warner Bros. Motion Picture Imaging (MPI), but along the way, I learned more about the efforts Hardwicke and Walker made to distinguish the story’s visuals during production in the Vancouver area last year.

One of the things I learned about was the work she and her team did to make sure the colors and style of the movie, as well as the dark woods that play such a prominent role in the story, did not in any way resemble those of the hugely successful Twilight vampire movie she directed in 2008, which was shot in a similar region of the Pacific Northwest (Oregon, with the sequels also shot in the Vancouver area).

Cinematographer Mandy Walker on set during production of “Red Riding Hood.” Walker was working with director Catherine Hardwicke for the first time, chosen after Hardwicke saw her work on Baz Luhrmann's “Australia,” and the two immediately became close visual collaborators.

“Oh no,” Walker chuckles when asked if there are any visual similarities between Red Riding Hood and Twilight. “This is a completely different creature than her vampire film. Catherine made it very clear she wanted to create a new world that people had not ever seen before.”

Indeed, Hardwicke emphasizes that she turned down the chance to direct Twilight sequels because she “doesn’t like sequels and I didn’t want to do the same old thing.” Thus, she concedes that when it was decided that Red Riding Hood should be shot in Vancouver, “I was terrified that all the locations were done to death by the Twilight franchise. There were certain trees we established in the first [Twilight] movie, and I didn’t want the woods in this film to look like those.”

Hardwicke found Walker early in the film’s development when the cinematographer she partnered with on her previous movies, Elliot Davis, proved to be unavailable due to a scheduling conflict. She was taken with Walker’s efforts on Baz Luhrmann’s Australia, and began researching her work.

Director Catherine Hardwicke with actor Gary Oldman. Meticulous design and color planning, such as the specific purple hue of Oldman's clerical robes in the story, were a major part of Hardwicke's agenda on the project.

“This particular movie is set in snow, we build a village, and we create a fairy tale world almost from scratch, mostly on stages except for a few days on location,” Hardwicke says. “I wanted someone who had worked a lot on stages. There were many good cinematographers that I looked at, but when I researched what she had done for Baz on Australia, I called her up. I was in a hurry—we had to design sets and get going, and I needed a DP to talk to about what would work and what wouldn’t. Mandy came over and we brainstormed. We realized the stages were smaller than what we ideally wanted, and I also realized I had to fit the entire movie into the number Warner Bros. gave me—$42 million (and just 42 days of principal photography). I found Mandy to be very collaborative in helping me in those early conversations, and it became a great relationship.”

In the end, most of the film was shot on stages where filmmakers could control their environment thanks to opulent sets from production designer Tom Sanders. Hardwicke did, however, take three crucial exterior scenes to the only place in Vancouver that prominently features exotic trees that are not indigenous to the Pacific Northwest—the 55-acre VanDusen Botanical Gardens, a landscaped development located in the center of Vancouver.

“The greens team dressed [VanDusen Botanical Gardens] to create our own unique forest,” Hardwicke adds. “And then, the big woods outside of grandmother’s house were created entirely on a soundstage—gigantic, hollow trees we constructed, up to six feet in diameter, that lean in and have spikes and thorns on them. One of the main locations is a village made out of heavy logs. The studio was worried that it would be too monochromatic, so we added several whimsical ‘golden’ aspen trees for a pop of color.”

Walker explains that although those were the only exterior locations in the project, they were crucial because “they gave us scope and distance, which we could only achieve occasionally with visual effects to extend our sets on stage. The art department dressed the forest with colorful plants and flowers, and we always had atmosphere and mist on both stages and location as a visual motif to give a magical feel to the environment. On location, I replicated the studio lighting by using the sun for shafts of light, and when we had no sun, I had 18k HMI lights in Condors to create the effect.

“For locations where we decided the sun would not be diffused enough, such as the open areas of the haystacks and the river scene at the end of the movie, we rehearsed all our angles, waited until the sun was behind a [distant] mountain, and then worked as fast as we could to get all the coverage in a few hours before sunset,” Walker adds. “It was crazy, but we had no choice on quite a fast schedule.”

But for creation of the even more exotic woods located near grandmother’s house, Hardwicke decided to create the effect entirely on stages. Walker points out those stages were not as large as would have been ideal, and therefore, lighting was particularly cramped—a challenge her team handled with strategic fervor.

“I worked on a plan with Dave Tickell, my gaffer, and Mike Kirilenko, my grip, to have as much flexibility in our lighting as possible, and to be able to change the look in a short amount of time,” Walker explains. “We had a series of 30-ft. light boxes in the ceiling, filled with a mix of daylight and tungsten Kino Flos, with a quarter grid underneath. They were our ambience for night and day. Surrounding the sets were light boxes (25-ft. by 16-ft.) on an I-beam so they could travel the length of the stage. They all tilted and could go up and down, and the corner ones could pan also. They were filled with Mole Pars and covered in the quarter grid. We would dim them to create a warm, directional sun that was diffused by clouds, as it was winter in the story and most of the time it was snowing or had recently snowed.

“For night scenes, we would gel them blue for soft, directional moonlight. And for nights when the predominant source of light was fire, we gelled them with full 85 and put them in flicker mode. For some scenes, the look was gaps in the clouds with sun poking through in shafts, to which the visual effects team added the skies in the wide shots. For this effect, we had 20k MoleBeam projectors, sometimes gelled with an 85 gel for a warm effect, scattered around the stage. On the ground, we made big and small light boxes that we could easily bring in to get light at a lower angle.

“Catherine likes to move the camera a lot to follow the actors and their performances around the sets, with a lot of 360-degree shots. The interiors were the biggest challenges [for that] as the sets were very small, cramped, and had low ceilings that were in the shot a lot of the time. We made up small light boxes—covered wagons—and used Chinese lanterns a lot, as they could squash up against the ceiling. Sometimes, we had them on boom poles, and as the camera moved, we would travel with it.”

All this attention to detail was necessary because Hardwicke labored so hard to give the movie a unique medieval fairy-tale look. Much of the reference for that overall look, from a design perspective, she brought to Walker in the form of old-master paintings—imagery created by artists like Hieronymus Bosch and Pieter Bruegel.

“They gave us the color palette, and also references to costume, and some to lifestyle, since the film is set in medieval times,” says Walker. “The other more cinematic references were storybook images and photographers such as Bill Henson and Rocky Schenck—photographers whose images are ‘other worldly’ and have a sense of fantasy in a modern sense. We also watched many other fairy-tale films, but we always wanted to create our own world and style that was original. Production designer Tom Sanders, (visual effects supervisor) Jeff Okun, and (costume designer) Cindy Evans were all part of creating that style. We spent a lot of time in pre-production working out how all the departments could contribute to this.”

More About Deakins’ Doings

Roger Deakins (left) with John Wells on the set of "The Company Men."

Joining the Lifetime Achievement Award party for Roger Deakins, ASC, BSC at the ASC Awards banquet in February will be Deakins’ latest collaborator—writer/director/producer John Wells. Wells, of course, is best known for creating television’s ER for NBC and TNT’s Southland. But he also recently launched his feature film-directing career with Deakins’ help—the Boston-based recession drama The Company Men. It is the second Deakins-shot film to come out in recent months, on the heels of his ASC and Academy Award-nominated work on the Coen Brothers’ True Grit. (That work, of course, represents the ninth Oscar nomination in Deakins’ illustrious career and the third straight year he’s been nominated or co-nominated for both ASC and Oscar honors on at least one film.)

You can read all about Deakins’ career and his work on True Grit in the January 2011 issue of American Cinematographer. But Wells sums that career up as well as anyone—“Roger is very good at what he does.”

“Any time you can collaborate with someone who is a master, you are lucky and get a chance to learn something,” Wells adds. “I was intimately familiar with his work and had admired him for a long time. What he does with light and the camera, the simplicity of it, and the lack of artifice in his work—I love that. As a film director, getting him to shoot my film meant starting at the top. Believe me, it’s a tremendous luxury to have someone like Roger Deakins by your side when directing your first film.”

Ben Affleck as Bobby Walker and Tommy Lee Jones as Gene McClary in John Wells’s film The Company Men. Folger/The Weinstein Company.

Wells lured Deakins the old-fashioned way—he sent his script to Deakins’ agent, presuming Deakins wouldn’t even consider the modestly budgeted film at a particularly busy point in his career. “When they told me Roger wanted to come in and meet me, I didn’t believe them,” Wells recalls. “But he came, we had a great meeting, and that was that.”

Deakins actually shot The Company Men almost two years ago, while simultaneously consulting on an animated film for DreamWorks Animation, and then transitioning into True Grit. Since then, he has grown increasingly involved in the emerging disciplines of virtual cameras and virtual lighting through further consulting work for DreamWorks, even while shooting (at press time) his first digital feature with Arri’s Alexa camera on Andrew Niccol’s upcoming film, Now.

“I didn’t shoot (The Company Men and True Grit) back to back exactly, but I worked on animation in-between,” Deakins says. “It’s not hard to transition in my mind (between projects). The way you work is different from director to director anyway. That’s even true in animation. On one film, I’m working on using motion capture and a camera room to create shots in a virtual world. On another film (Gore Verbinski’s Rango), the director prefers to storyboard everything, and that’s how his shots are done. So, on that project, I have been consulting on how they are lighting it, rather than setting up shots. I enjoy working on animated films, actually. It’s not as personal as live action because so many more people are involved. But, it is similar in the sense that you are using a camera of a kind for framing and motion, and lighting to help tell a story.”

Ben Affleck as Bobby Walker and Kevin Costner as Jack Dolan in John Wells’s film The Company Men. Folger/The Weinstein Company.

Still, live action remains Deakins’ true cinematic love. Company Men was certainly different from his work with the Coen Brothers, though, in the sense that it was a much simpler, straightforward character piece, on a modest budget, with few visual effects or what he calls “camera trickery.” The movie details the toll of recession-era unemployment on three colleagues (played by Ben Affleck, Tommy Lee Jones, and Chris Cooper) in Boston’s shipbuilding industry. The entire movie was shot in 40 days in and around Boston. The schedule, the budget, and his director’s feature-film experience were all modest, but those realities only made the project more attractive to Deakins.

“To me, the attraction is always the script,” he says. “If you only have a limited amount of money to work with on a film worth doing, then so be it—just do your best. Whatever the budget, your ambitions are larger than what you can achieve anyway. But I will say it was an advantage working with John Wells, given his television experience. He’s particularly adaptable. I’ve actually worked with a lot of first-time directors, not that I really consider him a first-time director given the fact that he has done some of the most sophisticated TV work out there. I find it refreshing. If someone has a passion for their project and a vision for what they want, then it doesn’t matter to me if they have done 100 films or never done a film.”

Deakins’ interaction with Wells revolved almost exclusively around creative questions, while Wells left virtually all technical decisions to Deakins.

“Any time I work with someone for the first time, I like to have quite a bit of prep,” Deakins says. “So we talked the film over a lot and spent a great deal of time looking at locations. In the end, it’s a character piece with nothing about how it is photographed out of the ordinary at all. The photography is very quiet and matter of fact, and I felt my role was to give the actors space to do their jobs, while being as economical and in the background as possible. That’s as important to the job as anything clever you might do with cinematography or lighting.”

Still, Wells suggests that any film featuring Deakins as cinematographer has a major advantage because Deakins the camera operator comes with the package.

“That’s an extraordinary advantage,” Wells says. “(As director), you have this enormous sense of confidence that the person whose eye is actually in the lens is the same person you have had all those conversations with throughout the entire process. Normally, you don’t have your operator with you to discuss every shot during prep. With Roger, you get both, and that’s a great luxury.”

The experience with Deakins was so positive for Wells that he says he “really wants to work with him again,” adding that “if we do, it will probably be with a digital camera from what he’s telling me about (the Alexa camera).”

Indeed, at press time, Deakins was still in production on Now, becoming intimately familiar with Alexa. Like many around the industry, he’s duly impressed.

“To me, Alexa is the first [digital] camera that succeeds in getting an image that is not exactly film, but does something that film cannot do,” he says. “It has better color space than film, more latitude, and basically, it’s faster and incredible in low light. This film [Now] has lots of night exterior work, low light levels, a low budget, and we are working fast and furious on it, and [Alexa] is holding up fantastically well.”

And Deakins continues diving further into the digital world with his ongoing work as a “visual consultant” on animated films—a piece of his career that launched with WALL-E in 2008. He has now signed up for three more upcoming DreamWorks films.

Combined, these recent developments contribute more chapters to the ongoing saga of a cinematographer on the cusp of a Lifetime Achievement Award from his peers while at the height of his powers. That’s an honor Deakins deeply appreciates, but at the same time, finds “weird” when you consider he doesn’t believe he’s anywhere close to the end of the road.

“I’m very busy, but I enjoy it,” he says. “When I’m done with Now, I’ll probably spend some time just working on animation, and then see what happens later this year.”

Dean and Dave Embrace TV’s Digital Paradigm Shift

“Leverage” starring Timothy Hutton (photo courtesy of TNT).

While industry buzz about digital acquisition often centers around high-profile feature films, such as David Fincher’s The Social Network, television is, of course, where the real digital action happens. That’s because episodic television’s conversion to digital acquisition is almost total at this point. If we call the Finchers, the Lucases, the Michael Manns, and others the leaders of the movie world’s digital acquisition movement, then we have to give some credit to Dean Devlin and his team on the TNT drama Leverage for elbowing to the front of television’s digital production line.

Devlin is, of course, a longtime movie and television producer, and for the last three years, he has been producing Leverage using a completely tapeless, Red-based workflow that has been drawing increasing interest around the industry. Leverage, in fact, was the first episodic drama to convert fully to the Red workflow, in 2008. Since then, the show has evolved to the point where it’s shot entirely in Portland, with data constantly flowing electronically to Los Angeles, where everything is put together under one roof at Devlin’s Electric Entertainment.

In the last couple of years, I’ve been chatting on-and-off with Devlin about his uncompromising evangelism of the digital way, including an interesting conversation that became part of the final cover story ever written for the late, great Millimeter magazine in 2009. More recently, I caught up with him to find out how Leverage’s workflow has progressed as the show pushed through season three and prepared to launch into season four this coming February. Devlin was effusive as ever about the tapeless, soup-to-nuts approach, even going so far as to insist “there is no longer any question about this workflow’s viability—the real question is when is everybody converting. Digital is where it’s all going.”

But Devlin also pointed out that the whole paradigm shift wouldn’t work without an adventurous director of photography along for the ride. That cinematographer is Australian Dave Connell, ACS. Connell hopped on Devlin’s digital roller coaster early on, starting with the Triangle miniseries for Sci-Fi Channel in 2005, which he says was his first digital job after years shooting film. (Triangle was shot using Sony’s HDW-F900.) Next, Devlin hired him to shoot the Leverage pilot—Devlin’s directorial debut, shot in 2008 using Panavision’s Genesis system. Then, while waiting to see if TNT would pick up the show, Connell shot the third installment of The Librarian series of television movies for Devlin. The first two installments were shot using Genesis, but for the third movie, they tried an early version of the Red system, and following that experience, they decided to commit to the Red for Leverage, recording exclusively to solid-state drives. Season three was shot using Red One cameras configured with 4k Mysterium chips, and now, heading into season four, they expect to convert to the new Red Epic system before the season is over.

Dean Devlin (right) with cinematographer Dave Connell recently on the set of Devlin's new pilot, “Braintrust,” which is being shot using Red One cameras, using the “Leverage” workflow. (photos courtesy of Electric Entertainment).

When he spoke with me, Connell was prepping to shoot yet another pilot for Devlin ahead of season four of Leverage—a pilot called Braintrust, built entirely on the Leverage workflow and infrastructure. Connell marveled at how far the system, the workflow, and his trust in that workflow have come since Leverage commenced. He also marvels at his friend Devlin’s dogged pursuit of an all-digital present and future for broadcast production—for Leverage specifically, for his growing business at Electric, and for the larger industry.

“I have a good relationship with Dean—being his DP on [Leverage] and on Braintrust is easier because he has the same team, hires the same people, and we all know how everything works,” says Connell. “[For both], we are doing all shooting in Portland and all post in Los Angeles. I can move the camera a lot using Red. I can virtually color-grade shots in camera, I can view dailies online each day, we can zoom into the image without quality loss with the 4k chip far better than we could a year ago, and I’m [collaborating] with a guy [Devlin] who believes in this workflow, and worked hard to make it happen.”

And the two men are, indeed, simpatico on their preferred way of making television. Devlin eagerly promotes the notion that the tapeless Red format is revolutionary and important for television in terms of cost and flexibility because, as he says, “it is software based, so in a sense, every few weeks, we get a brand new camera for free over the Internet. And these are significant improvements over where we were (when the show started). Now, with the Mysterium chip, it’s a whole new ballgame, and it will keep evolving.”

Connell wholeheartedly agrees. The cinematographer suggests that once the workflow launches, if you stick with it, and know what you are doing, the rapid improvements you can make along the way are startling.

“Back when we started, it was build 10 or 12 of the camera and you needed so much light to use them,” Connell says. “They were rated higher than they said, about 300 ASA, and the viewfinder was way too dark. But, they have come so far—it will continue getting better. The [Mysterium] version we used [for season 3] has an ASA of about 800 to 2,000, and that lets me shoot on locations I couldn’t shoot on just a year earlier.”

Red One cameras as configured for shooting during the show's recently completed third season.

The questions of quality, nuance, and the loss of a filmic sensibility are being hotly debated around the industry right now, of course. But Devlin argues the imagery is “beautiful.” He suggests the resolution-and-quality debate, in terms of how the imagery shows up on a broadcast screen, is largely akin to the debate over the differences between any new entertainment-related technology and what came before—black-and-white to color, vinyl to CD, and so on.

“People often show me tests that are apples to apples, but they are not both apples,” Devlin suggests. “They each have limitations, but if you take the limitations of one and the limitations of the other, digital is a hands-down winner for [broadcast television production].”

Connell agrees, and emphasizes that, regardless of one’s aesthetic preferences, television has, in fact, moved inexorably toward digital acquisition. Therefore, he suggests, broadcast directors of photography keen on keeping up with the changes need to work with bosses like Devlin.

“I think he’s revolutionized the TV industry a little bit,” Connell says. “What he’s done to make Electric a one-stop shop is pretty impressive. It’s quite a factory now, and it’s a model other people are looking at. On my end, as the cinematographer, it’s a hard grind, to be sure, but it is also easier in many other ways.”