3D Workshop: Edges, Screen, IA

The November issue of American Cinematographer features an article I wrote about low-budget 3D, based on a workshop I attended in Gothenburg, Sweden, led by cinematographer Geoff Boyle, also known as the father of CML, the Cinematography Mailing List. Geoff was assisted by post specialist Thomas Harbers.

I wanted to offer some 3D images and notes from the workshop to complement the article, and raise some 3D topics. I must emphasize that I am still a student of 3D. I do not pretend to be a 3D expert; rather, I propose to share my notes and questions on the subject as I “deepen” ;-) my knowledge. Please do not hesitate to give me your corrections and explanations.


Geoff Boyle showing us 2 Alexas on a P+S Technik Freestyle Rig


anaglyph glasses

I now believe that anyone interested in cinematography should own a pair of Red/Cyan anaglyph 3D glasses. I got mine for free at a trade show, but one can also order some on the internet (check the links below). The Greek etymology is ana + glyph, to carve upon. The word originally referred to low relief sculptures with a slight offset between background and foreground. Anaglyph 3D also has a slight offset, a horizontal shift between the red and the cyan images.
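To make that red/cyan offset concrete, here is a minimal sketch (my own illustration, not from the workshop) of how an anaglyph is assembled from a stereo pair: the left eye supplies the red channel, the right eye the green and blue channels (which together read as cyan).

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine a left/right stereo pair into a red/cyan anaglyph.

    left, right: H x W x 3 uint8 RGB arrays (hypothetical inputs).
    The left eye's image supplies the red channel; the right eye's
    image supplies green and blue (the cyan half).
    """
    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]    # red from the left eye
    anaglyph[..., 1] = right[..., 1]   # green from the right eye
    anaglyph[..., 2] = right[..., 2]   # blue from the right eye
    return anaglyph
```

Any horizontal offset between the two source images then shows up directly as the colored fringes discussed below.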

It is in this spirit that I will offer some anaglyph images from the workshop to complement my article.

If you don’t have glasses, I would like to convince you to get some soon. In the meantime, you can analyze anaglyph images sans glasses, like many stereographers do on set. They do so because viewing anaglyph with glasses quickly tires your eyes; without glasses, you can still get a quick visual indication of the amount of depth from the thickness of the red or cyan offset.


colored edges

During the workshop, I shot the image below with a P+S Technik Freestyle rig on my shoulder — with two tiny SI-2K cameras. Let’s start looking at the anaglyph image without glasses. (You might want to click on the image to see a bigger size in a separate browser window). Look for the colored edges. If there are no colored edges, the image is the same for each eye and the object position is on the screen, just like all the objects in a 2D movie.

Objects with red and cyan edges are either behind or in front of the screen: the larger the colored edge, the greater the distance from the screen. This edge is sometimes measured as a percentage of the total screen. During the workshop, Geoff Boyle once positioned a 1 cm piece of tape on our 1 meter screen, saying “here is a 1% reference”. We also used the built-in grids of our Transvideo monitors to display percentage offsets.
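Geoff’s 1% reference is simple arithmetic. A quick sketch (my own, with hypothetical pixel values) of converting an edge offset into a screen percentage, and back into a physical distance on a given screen:

```python
def parallax_percent(offset_px, image_width_px):
    """Parallax (the red/cyan edge offset) as a percentage of image width."""
    return 100.0 * offset_px / image_width_px

def parallax_on_screen(percent, screen_width_cm):
    """Physical size of that offset on a given screen, in cm."""
    return screen_width_cm * percent / 100.0

# Geoff's reference: 1 cm of tape on a 1 m (100 cm) screen is 1%.
pct = parallax_percent(19.2, 1920)   # about 1% of a 1920-px-wide frame
cm = parallax_on_screen(pct, 100)    # about 1 cm on a 1 m screen
```

The same percentage, of course, becomes a much larger physical offset on a cinema screen, which is why stereographers think in percentages rather than pixels.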


click on any image for closer view

edge violation

Let’s put on the glasses. When I look at a 3D image with my glasses on, I sometimes get confused about where the screen plane is. One way to find it is to hold your hand out and point your finger sideways at where you think the screen is. You can also move your mouse cursor around. In both cases you’ll probably end up finding the screen position halfway up the ramp.

During the workshop, Geoff discussed what is sometimes called “edge violation”. This happens when an object that is in front of the screen intersects the edge of the frame. This creates what a friend of mine used to call “cognitive dissonance”.

In this image, the bottom of the ramp seems to be in front of the screen until you look at the edge, and then it pops behind. This is because we suddenly see the ramp blocked by the screen edge, and our brain tells us the ramp is therefore behind the frame line. You will get a similar disquieting impression of the ramp popping in front of and behind the screen when you slide your mouse cursor near the bottom of the ramp.


screen distance

With a computer screen, you can easily notice how the perceived depth changes with distance from the screen. Move your head very close to the screen and the image appears to get squashed; move your head far away and the image seems to stretch out.

For the geometrically-inclined this depth change is well explained by David Romeuf here. Romeuf argues that there is an “ideal observer distance” for 3D. (This is also addressed by 3D supervisor François Garnier in my article on Pina).


permanent IA

The mantra for the workshop was: “you can’t change IA in post”. IA stands for the interaxial distance, the distance between the 2 camera lenses. IA defines the amount of depth you will have. An IA of zero, with 2 identical images, is equivalent to 2D: there is no depth because all the objects are on the screen. The bigger the IA the bigger the distance between the closest and farthest object in the scene. Geoff illustrated this with some computer simulated slides:


These slides simulate shooting parallel (as opposed to converged), and then using a HIT (horizontal image transform) to align the images from the 2 cameras on the woman in front, which requires shifting more pixels for bigger IAs. The 2 things I noted here are:
1. Varying the IA significantly changes the over-all depth
2. Although you can’t change IA in post, you can, and often do, change the screen position
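As an illustration of the HIT described above (my own sketch, not the workshop’s software), a horizontal image transform is just a sideways slide of one eye’s frame; the pixel positions in the comment are hypothetical:

```python
import numpy as np

def hit(image, shift_px):
    """Horizontal Image Transform: slide the frame left/right.

    Positive shift moves the image right; vacated columns are
    filled with zeros (in practice both eyes are cropped instead).
    """
    out = np.zeros_like(image)
    if shift_px > 0:
        out[:, shift_px:] = image[:, :-shift_px]
    elif shift_px < 0:
        out[:, :shift_px] = image[:, -shift_px:]
    else:
        out = image.copy()
    return out

# Parallel-shot pair: suppose the woman in front sits at x=900 in the
# left eye and x=860 in the right eye (40 px of parallax, hypothetical).
# hit(right_eye, 40) aligns her between the eyes, placing her on the
# screen plane; everything behind her then has behind-screen parallax.
```

This is why the screen position can be moved in post even though the captured IA, and hence the total depth, cannot.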

Geoff discussed 2 possible IA issues:
1. Medical IA. Too much depth can sometimes hurt your eyes by forcing them to diverge. According to Geoff you don’t want to go too much more than 3% total HIT.
2. Human IA. The average accepted value for the distance between our eyes is 65 millimeters (2.5 inches). If you go very wide, you may see the world through the eyes of a giant, and things can appear unnaturally small. Conversely, a very small IA may give you a mouse’s view of the world. That’s the theory, but we didn’t often notice this phenomenon during the workshop.
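The “medical IA” concern can be sanity-checked with simple arithmetic. A sketch (the screen widths are my hypothetical examples; the 6.5 cm eye separation is the human IA figure quoted in the text):

```python
EYE_SEPARATION_CM = 6.5   # average human interocular distance (65 mm)

def diverges(positive_parallax_pct, screen_width_cm):
    """True if behind-screen parallax exceeds the eye separation,
    forcing the viewer's eyes to diverge (the 'medical IA' problem)."""
    offset_cm = screen_width_cm * positive_parallax_pct / 100.0
    return offset_cm > EYE_SEPARATION_CM

# 3% positive parallax on a 1 m monitor is a 3 cm offset: no divergence.
print(diverges(3.0, 100))    # False
# On a 10 m cinema screen the same 3% becomes a 30 cm offset:
print(diverges(3.0, 1000))   # True
```

This is one reason a depth budget that looks comfortable on a monitor has to be re-judged for the target screen size.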


That’s it for now. I’ll do a second post with a few more images soon.
Thanks to Geoff and Thomas for supplying the images.

Your 3D comments & corrections are most welcome!



Anaglyph glasses
The cheapest supplier I found for cardboard glasses offers 10 pairs for 10 dollars, as opposed to 50 pairs for 20 dollars which is the more common offering.

Gothenburg Film Studios Workshops

My next post about this topic is
3D Workshop: Faraway Flatness, Faraway Softness


6 Responses to “3D Workshop: Edges, Screen, IA”

  • simply superb……………

  • a neat and gentle explanation………… thanks………………

  • Thanks to Geoff and Thomas………………..

  • Sir,

    Super! How can I join one of your workshops in the future?

    Raaj B

  • Dear Raaj

    I was just a student in this course,
    which was given by
    Geoff Boyle at the Gothenburg Studios
    They have not announced a 3D workshop so far this year

    There are also some 3D courses given by the Santa Fe HD Workshop
    although most are in Los Angeles

    Here is a list of 3D courses I found on the net:

    Hope this helps



    PS Please keep us posted of any 3D workshops in India

  • Sir,
    1) There are currently no two filters [red, green, blue-green] that, when combined, reproduce true color – one problem with anaglyph glasses.
    2) A more severe problem with anaglyph glasses is what is termed “ghosting”, or more currently in 3D TV lingo “transference”. This occurs because many/most anaglyph glasses do not entirely separate light into the left and right eye – some light is passed by BOTH filters – thus some light is seen in the same perspective in both eyes – not natural.
    3) There is what is termed a stereo window – it is not physical but optical. In still photography it is approximately at a distance 30 times the separation between the lenses recording the image. Viewing stereo/3D images is truly like looking through a “window”. Thinking about the image in this fashion, one can understand that recording objects in a scene closer than this “stereo window” distance, unless it is done carefully, will look unrealistic (e.g., like a tree branch seemingly “inside” the window but cut off by the sides of the window). One can violate the rule of no objects closer than the window distance by arranging the object such that it does not intersect the edges of the scene closer than the window – it will appear to come THROUGH the window. One violation of this rule that I’ve seen is a scene in AVATAR, when the viewer is walking through a vegetation tunnel on 3-4 sides of the image and the vegetation is passing by – yes, “broken” by the scene edges, but the visual effect is entirely believable to the viewer and enhances the visual experience.
    4) The human interocular distance [distance between the eyes] varies greatly, probably like a bell curve, with the median/mean 60-65 mm. When a viewer is displayed the far point, or infinity point, in a scene at a separation between the two overlapping images of more than the viewer’s eye separation distance, the viewer must diverge his/her eyes, which is not natural, and will be uncomfortable, possibly to the point of pain. On the other end of the scale, the separation between near points and far points of an overlapped image pair [called a stereo pair] produces the 3D effect. And this also is a limitation – a human’s field of vision is very wide, but the portion on which we focus is only about 10 degrees – in real life the brain separates out the other objects in the entire field and we only “focus” on the objects in the 10-degree portion. When the viewer is presented an entire 3D scene, he/she is forced to focus on the entire scene. If the separation between the near and far points of an overlapping pair is too great, the viewer is forced to only look at portions of the scene – not what the producer wants. The separation distance which a viewer can tolerate comfortably, like the acceptable far-point separation distance, varies.
    5) The separation between far/infinity points can be corrected after image recording; the separation between near and far points cannot easily be corrected [distortion would work].

    In conclusion, given the physical diversity in humans, it amazes me that 3D movies can be produced which are perceived with “good” 3D effects by a majority of the population. To accommodate more and more of the population, the 3D effects must be “dumbed down”.

    Anyone can create and view a “still” 3D image using only one camera – even a cell phone camera – by separating the distance between the two exposures and combining them on a computer, or looking through a 3D slide viewer or 3D print viewer. So long as the object distances are more than ~30 times the separation distance between the shots, and the amount of stereo depth is kept at a “reasonable amount” [therein is the rub, as I've discussed], stunning results can be achieved. Results such as a 3D image that might mimic Godzilla’s view of downtown Seattle from the Space Needle, Mount St. Helens viewed through the eyes of some alien creature [two shots from a moving plane with probably one mile of separation], or small objects as viewed by a lizard/mouse/bird [shot separation very small].

