History
The stereoscope was improved by Louis Jules Duboscq, and a famous picture of Queen Victoria was displayed at The Great Exhibition in 1851. In 1855 the Kinematoscope, a stereo animation camera, was invented. In the late 1890s, the British film pioneer William Friese-Greene filed a patent for a 3D movie process in which two offset images, viewed stereoscopically, are combined by the brain to produce the perception of depth. The first anaglyph movie, using the red-and-blue glasses invented by Louis Ducos du Hauron, was produced in 1915. On June 10, 1915, Edwin S. Porter and William E. Waddell presented tests to an audience at the Astor Theater in New York City: three reels of red-green anaglyph footage that included rural scenes, test shots of Marie Doro, a segment of John Mason playing a number of passages from Jim the Penman (a film released by Famous Players-Lasky that year, but not in 3D), Oriental dancers, and a reel of footage of Niagara Falls. However, according to Adolph Zukor in his 1953 autobiography The Public Is Never Wrong: My 50 Years in the Motion Picture Industry, nothing was produced in this process after these tests. The first public 3D movie was shown in 1922. Stereoscopic 3D television was demonstrated for the first time on August 10, 1928, by John Logie Baird at his company's premises at 133 Long Acre, London. Baird pioneered a variety of 3D television systems using electro-mechanical and cathode-ray tube techniques. The first 3D color movie was produced in 1935, and by the Second World War stereoscopic 3D still cameras for personal use had become fairly common.
In the 1950s, when television became popular in the United States, many 3D movies were produced. The first was Bwana Devil from United Artists, which was shown across the US in 1952. It was followed in 1953 by House of Wax, which also featured stereophonic sound. Alfred Hitchcock filmed Dial M for Murder in 3D, but to maximize profits the movie was released in 2D, because not all cinemas were able to show 3D films. The Soviet Union also developed 3D films, releasing Robinzon Kruzo, its first full-length 3D movie, in 1946.
Television stations subsequently began airing 3D series in 2009, based on the same technologies used for 3D movies.
Technologies
There are several techniques to produce and display 3D moving pictures. The basic requirement is to display offset images that are filtered separately to the left and right eye. Two strategies have been used to accomplish this: have the viewer wear eyeglasses that filter the separate offset images to each eye, or have the light source split the images directionally into the viewer's eyes (no glasses required). Common 3D display technologies for presenting stereoscopic image pairs to the viewer include:
- With lenses:
- Anaglyphic 3D (with passive red-cyan lenses)
- Polarization 3D (with passive polarized lenses)
- Alternate-frame sequencing (with active shutter lenses)
- Head-mounted display (with a separate display positioned in front of each eye, and lenses used primarily to relax eye focus)
- Without lenses: Autostereoscopic displays, sometimes referred to commercially as Auto 3D.
Various other display techniques have been described, such as holography, volumetric display and the Pulfrich effect, which was used by Doctor Who for Dimensions in Time in 1993, by 3rd Rock From The Sun in 1997, and by the Discovery Channel's Shark Week in 2000, among others.
Stereoscopy is the most widely accepted method for capturing and delivering 3D video. It involves capturing stereo pairs in a two-view setup, with cameras mounted side by side and separated by the same distance as a person's pupils. If we imagine projecting an object point in a scene along the line of sight (for each eye in turn) to a flat background screen, its location on the screen can be described with simple algebra. In rectangular coordinates with the screen lying in the Y-Z plane (the Z axis upward and the Y axis to the right) and the viewer centered along the X axis, the screen coordinates are the sum of two terms, one accounting for perspective and the other for binocular shift. Perspective scales the Z and Y coordinates of the object point by a factor of D/(D-x), while the binocular shift contributes an additional term (to the Y coordinate only) of s*x/(2*(D-x)), where D is the distance from the chosen origin to the viewer (midway between the eyes), s is the eye separation (about 7 centimeters), and x is the true x coordinate of the object point. The binocular shift is positive for the left-eye view and negative for the right-eye view. For very distant object points the two eyes look along essentially the same line of sight, while for very near objects they may become excessively "cross-eyed". For scenes occupying the greater portion of the field of view, however, a realistic image is readily achieved by superposition of the left and right images (using the polarization or synchronized-shutter method), provided the viewer is not too near the screen and the left and right images are correctly positioned on it. Digital technology has largely eliminated the inaccurate superposition that was a common problem in the era of traditional stereoscopic films.
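The projection described above is easy to state in code. The following is a minimal sketch of that geometry only; the function name, the default eye separation of 7 cm, and the example values are illustrative assumptions, not part of any particular system.

```python
def screen_coords(x, y, z, D, s=0.07):
    """Project an object point (x, y, z) onto the screen plane (X = 0)
    for the left and right eyes of a viewer at distance D on the X axis.

    Returns ((yL, zL), (yR, zR)): the screen Y-Z coordinates seen by the
    left and right eyes. Units are arbitrary but must be consistent
    (here s, the eye separation, defaults to 0.07, i.e. about 7 cm).
    """
    perspective = D / (D - x)          # common perspective factor D/(D-x)
    shift = s * x / (2 * (D - x))      # binocular shift, applied to Y only
    y_left = y * perspective + shift   # positive shift for the left-eye view
    y_right = y * perspective - shift  # negative shift for the right-eye view
    z_screen = z * perspective         # Z is unaffected by the binocular shift
    return (y_left, z_screen), (y_right, z_screen)

# Example (hypothetical values): a point 1 m in front of the screen (x = 1),
# viewer 3 m from the origin (D = 3).
left, right = screen_coords(x=1.0, y=0.5, z=0.2, D=3.0)
```

For distant points (x far behind the screen) the shift term vanishes and both eyes see the same position, matching the observation above that the two lines of sight become essentially identical.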
Multi-view capture uses arrays of many cameras to record a 3D scene through multiple independent video streams. Plenoptic cameras, which capture the light field of a scene, can also record multiple views with a single main lens. Depending on the camera setup, the resulting views can either be displayed directly on a multi-view display or passed on for further image processing.
After capture, stereo or multi-view image data can be processed to extract 2D plus depth information for each view, effectively creating a device-independent representation of the original 3D scene. This data can be used to aid inter-view image compression or to generate stereoscopic pairs for multiple different view angles and screen sizes.
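As a rough illustration of how a 2D plus depth representation can yield a stereoscopic pair, the sketch below shifts each pixel horizontally in proportion to its depth value. This is only a simplified view-synthesis idea under stated assumptions: the array shapes, the disparity scaling, and the handling of disocclusion holes are illustrative choices, not a description of any specific product or standard.

```python
import numpy as np

def synthesize_stereo_pair(image, depth, max_disparity=16):
    """Create left/right views from a single image plus per-pixel depth.

    image: (H, W, 3) uint8 array; depth: (H, W) array normalized to [0, 1],
    where 1 means nearest to the viewer. Near pixels receive the largest
    horizontal shift. Disocclusion holes are simply left as copies of the
    source pixels (real systems use in-painting or similar techniques).
    """
    h, w = depth.shape
    disparity = (depth * max_disparity).astype(int)   # per-pixel shift in pixels
    left = image.copy()
    right = image.copy()
    cols = np.arange(w)
    for row in range(h):
        shifted_l = np.clip(cols + disparity[row] // 2, 0, w - 1)
        shifted_r = np.clip(cols - disparity[row] // 2, 0, w - 1)
        left[row, shifted_l] = image[row, cols]    # warp pixels right for the left view
        right[row, shifted_r] = image[row, cols]   # warp pixels left for the right view
    return left, right
```

Because the depth map is independent of any particular screen, the same data can be rewarped with a different disparity scale to suit different view angles and screen sizes, which is the device-independence mentioned above.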
2D plus depth processing can be used to recreate 3D scenes even from a single view and convert legacy film and video material to a 3D look, though a convincing effect is harder to achieve and the resulting image will likely look like a cardboard miniature.