Stereoscopic Viewing
Quite a lot of starting information is here.
- Most of the pictures are early in the notes.
- We see three dimensional images partially because
- Our eyes are offset by 50 to 80 mm and thus produce two different images, which are sent to our brain.
- This is called interpupillary distance.
- The brain uses the difference in these two images (known as parallax) to extract depth information in the scene.
- Apparently we have a good trig processor built into our brains.
- We can mimic stereopsis (or binocular parallax) in a graphics setting
- Generating a view from two different positions in space.
- This is called binocular disparity.
- By the way, this is just one of a long list of Depth Cues
- Motion Parallax (observer moving)
- When the observer moves, objects appear to move relative to a fixed background.
- Apparently birds and squirrels use this all of the time
- Depth from motion (objects moving)
- Perspective
- Relative Size
- Familiar Size
- Occlusion
- Lighting and Shading
- Defocus Blur
- Others
- Notice that many of these techniques are done in the graphics system.
- Binocular disparity is relatively easy in OpenGL
- Really all we need to do is generate two images.
- Slightly offset the camera along the right vector (say +/- BD/2, where BD is the eye separation)
- Render the left view, shift, then render the right view.
- Presentation is always the issue
- We need to be able to present a different image to each eye.
- Here is a novel approach.
- Passive vs Active.
- Polarized glasses
- Linear polarization (crosstalk appears if the viewer tilts their head)
- Circular polarization (tolerates head tilt; used in RealD cinemas)
- Anaglyph
- Encode the image in color.
- Typically red and cyan
- Glasses have color filters.
- Yuck, you lose color.
- Two images, physical separation
- Oculus Rift.
- A 7" screen.
- 24 bits per pixel.
- 1280x800, split into two 640x800 images (one per eye).