Saturday, 7 May 2011

3D TELEVISION

History

In the late 1890s, the British film pioneer William Friese-Greene filed a patent for a 3-D movie process in which two offset images, viewed stereoscopically, are combined by the brain to produce 3-D depth perception. On June 10, 1915, Edwin S. Porter and William E. Waddell presented tests to an audience at the Astor Theater in New York City. In red-green anaglyph, the audience was shown three reels of tests, which included rural scenes, test shots of Marie Doro, a segment of John Mason playing a number of passages from Jim the Penman (a film released by Famous Players-Lasky that year, but not in 3-D), Oriental dancers, and a reel of footage of Niagara Falls. However, according to Adolph Zukor in his 1953 autobiography The Public Is Never Wrong: My 50 Years in the Motion Picture Industry, nothing was produced in this process after these tests.
The stereoscope was improved by Louis Jules Duboscq, and a famous picture of Queen Victoria was displayed at The Great Exhibition in 1851. In 1855 the Kinematoscope, a stereo animation camera, was invented. The first anaglyph movie (viewed through red-and-blue glasses, a technique invented by Louis Ducos du Hauron) was produced in 1915, and in 1922 the first public 3D movie was shown. Stereoscopic 3D television was demonstrated for the first time on August 10, 1928, by John Logie Baird at his company's premises at 133 Long Acre, London. Baird pioneered a variety of 3D television systems using electro-mechanical and cathode-ray tube techniques. In 1935 the first 3D color movie was produced. By the Second World War, stereoscopic 3D still cameras for personal use were already fairly common.
In the 1950s, when TV became popular in the United States, many 3D movies were produced. The first such movie was Bwana Devil from United Artists, which could be seen across the US in 1952. One year later, in 1953, came the 3D movie House of Wax, which also featured stereophonic sound. Alfred Hitchcock shot his film Dial M for Murder in 3D, but to maximize profits the movie was released in 2D because not all cinemas were able to display 3D films. The Soviet Union also developed 3D films, with Robinzon Kruzo (1946) as its first full-length 3D movie.
Television stations subsequently began airing 3D series in 2009, based on the same technology as 3D movies.

Technologies


There are several techniques for producing and displaying 3D moving pictures. The basic requirement is to display offset images that are filtered separately to the left and right eye. Two strategies have been used to accomplish this: have the viewer wear eyeglasses that filter the separate offset images to each eye, or have the light source split the images directionally into the viewer's eyes (no glasses required). Common 3D display technologies for presenting stereoscopic image pairs to the viewer include the following:
Single-view displays project only one stereo pair at a time. Multi-view displays either use head tracking to change the view depending on the viewing angle, or simultaneously project multiple independent views of a scene for multiple viewers (automultiscopic); such multiple views can be created on the fly using the 2D-plus-depth format.
Various other display techniques have been described, such as holography, volumetric display and the Pulfrich effect, which was used by Doctor Who for Dimensions in Time in 1993, by 3rd Rock From The Sun in 1997, and by the Discovery Channel's Shark Week in 2000, among others. Real-time 3D TV is essentially a form of autostereoscopic display.
Stereoscopy is the most widely accepted method for capturing and delivering 3D video. It involves capturing stereo pairs in a two-view setup, with cameras mounted side by side and separated by the same distance as a person's pupils. If we imagine projecting an object point in a scene along the line of sight (for each eye in turn) to a flat background screen, we can describe the location of this point mathematically using simple algebra. In rectangular coordinates with the screen lying in the Y-Z plane (the Z axis upward and the Y axis to the right) and the viewer centered along the X axis, the screen coordinates are simply the sum of two terms, one accounting for perspective and the other for binocular shift. Perspective modifies the Z and Y coordinates of the object point by a factor of D/(D-x), while binocular shift contributes an additional term (to the Y coordinate only) of s*x/(2*(D-x)), where D is the distance from the selected system origin to the viewer (right between the eyes), s is the eye separation (about 7 centimeters), and x is the true X coordinate of the object point. The binocular shift is positive for the left-eye view and negative for the right-eye view. For very distant object points, the eyes look along essentially the same line of sight; for very near objects, they may become excessively "cross-eyed". However, for scenes occupying the greater portion of the field of view, a realistic image is readily achieved by superposition of the left and right images (using the polarization method or the synchronized shutter-lens method), provided the viewer is not too near the screen and the left and right images are correctly positioned on the screen. Digital technology has largely eliminated the inaccurate superposition that was a common problem in the era of traditional stereoscopic films.
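The projection formula above can be sketched in a few lines of Python. The viewer distance D and eye separation s used as defaults here are only illustrative figures consistent with the text (centimeters, s ≈ 7 cm):

```python
def project_stereo(x, y, z, D=100.0, s=7.0):
    """Project a scene point (x, y, z) onto the screen plane (x = 0).

    D is the distance from the origin to the viewer (cm), s the eye
    separation (cm). Perspective scales Y and Z by D / (D - x); the
    binocular shift adds +/- s*x / (2*(D - x)) to Y only (positive
    for the left-eye view, negative for the right-eye view).
    Returns ((y_left, z_left), (y_right, z_right)).
    """
    persp = D / (D - x)            # perspective scaling factor
    shift = s * x / (2 * (D - x))  # binocular shift, Y coordinate only
    left = (y * persp + shift, z * persp)
    right = (y * persp - shift, z * persp)
    return left, right

# A point lying on the screen plane (x = 0) projects identically for
# both eyes: no binocular disparity.
```

Note how a point with x = 0 (on the screen itself) yields persp = 1 and shift = 0, so the two views coincide, matching the observation that disparity grows as objects approach the viewer.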
Multi-view capture uses arrays of many cameras to capture a 3D scene through multiple independent video streams. Plenoptic cameras, which capture the light field of a scene, can also be used to capture multiple views with a single main lens. Depending on the camera setup, the resulting views can either be displayed on multi-view displays, or passed for further image processing.
After capture, stereo or multi-view image data can be processed to extract 2D plus depth information for each view, effectively creating a device-independent representation of the original 3D scene. This data can be used to aid inter-view image compression or to generate stereoscopic pairs for multiple different view angles and screen sizes.
2D plus depth processing can be used to recreate 3D scenes even from a single view and convert legacy film and video material to a 3D look, though a convincing effect is harder to achieve and the resulting image will likely look like a cardboard miniature.
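As a rough illustration of how a second view can be synthesized from 2D-plus-depth data, the toy sketch below shifts each pixel of one image row horizontally by a disparity proportional to its depth, then fills the resulting holes by repeating the last known pixel. This is a deliberately simplified form of depth-image-based rendering, not any specific product's algorithm, and the disparity scale is an arbitrary assumption:

```python
def synthesize_view(row, depth_row, max_disparity=4):
    """Toy depth-image-based rendering for one image row.

    depth_row holds normalized depths (0.0 = far, 1.0 = near); each
    pixel is shifted left by depth * max_disparity pixels to form the
    other eye's view. Disoccluded holes are filled by repeating the
    previous pixel -- crude hole-filling of this kind is one reason
    converted 2D material can look flat or unconvincing.
    """
    out = [None] * len(row)
    for i, (pix, d) in enumerate(zip(row, depth_row)):
        j = i - int(round(d * max_disparity))
        if 0 <= j < len(out):
            out[j] = pix
    last = row[0]
    for i, pix in enumerate(out):
        if pix is None:   # disoccluded hole: repeat last visible pixel
            out[i] = last
        else:
            last = pix
    return out
```

A row at uniform zero depth passes through unchanged, while nearer pixels slide sideways, which is exactly the per-pixel disparity a stereo display needs.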

Smart TV

Smart TV, also sometimes referred to as "Connected TV" (not to be confused with Internet TV, Web TV, or LG Electronics's upcoming "SMART TV"-branded NetCast Entertainment Access devices), is the phrase used to describe the current trend of integrating the internet into modern television sets and set-top boxes, and the technological convergence between computers and these devices. Compared with previous generations of television sets and set-top boxes, these new devices place a much greater focus on online interactive media, Internet TV, over-the-top content and on-demand streaming media, and less focus on traditional broadcast media. The name echoes the way the internet, web widgets and software applications are integrated into modern smartphones ("Smart TV" versus "smartphone").
The technology that enables Smart TVs is incorporated into devices such as television sets, set-top boxes, Blu-ray players, game consoles, and companion devices. These devices allow viewers to search for and find videos, movies, photos and other content on the web, on a local cable TV channel, on a satellite TV channel, or stored on a local hard drive.

Definition

A Smart TV device is either a television set with integrated internet capabilities or a set-top box that offers more advanced computing ability and connectivity than a contemporary basic television set. A Smart TV may be thought of as an information appliance, or as the computer system of a handheld computer integrated within a television set; as such, a Smart TV often allows the user to install and run more advanced applications or plugins/addons based on a specific platform. Smart TVs run a complete operating system or mobile operating system, providing a platform for application developers.

Technology

The concept of Smart TV is still in its early stages. Up-and-coming software frameworks such as the proprietary Google TV and the open-source XBMC platform are getting a lot of attention in the news media, and companies such as Logitech, Sony, LG, Boxee, Samsung and Intel have announced products that will give television users search capabilities, the ability to run apps (sometimes available via an "app store" digital distribution platform), interactive on-demand media, personalized communications, and social networking features.

Operating system

A wide array of mobile operating systems is currently available, and while most target smartphones, nettops or tablet computers, some also run on Smart TVs or were even designed specifically for Smart TV use. Most often the operating systems of Smart TVs are based on Linux, Unix, Android, or other open-source software platforms.

Social networking

A number of Smart TV platforms come prepackaged with social networking capabilities, or can optionally be extended with them, letting users both receive updates from, and post their own updates to, existing social networking services (e.g. Boxee's "social networking layer", libboxee, which interfaces with Facebook and Twitter, among other services), including posts related to the content currently being played. Since social network and social news posts are already a growing means of web audience measurement, adding social networking synchronization to Smart TV and HTPC platforms may afford greater interaction with both on-screen content and other viewers than most televisions currently offer, while simultaneously providing a much more cinematic experience of the content than most computers do.

Playstation Move by Sony

PlayStation Move is a motion-sensing game controller platform for the PlayStation 3 (PS3) video game console by Sony Computer Entertainment (SCE). Based on a handheld motion controller wand, PlayStation Move uses the PlayStation Eye camera to track the wand's position, and inertial sensors in the wand to detect its motion. First revealed on June 2, 2009, PlayStation Move launched in mainland Europe and most Asian markets on 15 September 2010, in Australasia on 16 September 2010, in North America and the UK on 17 September 2010, and in Japan on 21 October 2010. Hardware available at launch included the main PlayStation Move motion controller, a supplementary PlayStation Move navigation controller, and an optional PlayStation Move charging station. It competes with the Wii Remote/Wii MotionPlus and Kinect motion control systems for the Wii and Xbox 360 home consoles, respectively.
Although PlayStation Move is implemented on the existing PlayStation 3 console, Sony stated that it treated Move's debut as its own major "platform launch", planning an aggressive marketing campaign to support it. The tagline for PlayStation Move from E3 2010 was "This Changes Everything"; promotion included partnerships with Coca-Cola as part of the "It Only Does Everything" marketing campaign, which debuted with the redesigned "Slim" PlayStation 3.

Hardware

As with the PlayStation Wireless Controllers (Sixaxis, DualShock 3), both the main PlayStation Move motion controller and the PlayStation Move navigation controller use Bluetooth 2.0 wireless radio communication and have an internal lithium-ion battery that is charged via a USB Mini-B port on the controller. Up to four Move controllers can be used at once (four Move motion controllers, or two Move motion controllers and two Move navigation controllers).

Motion controller

The primary component of PlayStation Move, the PlayStation Move motion controller is a wand controller which allows the user to interact with the PlayStation 3 through motion and position in front of the PlayStation Eye camera.



Technology

The PlayStation Move motion controller features an orb at the head which can glow in any of a full range of colors using RGB light-emitting diodes (LEDs). Based on the colors in the user environment captured by the PlayStation Eye camera, the system dynamically selects an orb color that can be distinguished from the rest of the scene. The colored light serves as an active marker, the position of which can be tracked along the image plane by the PlayStation Eye. The uniform spherical shape and known size of the light also allows the system to simply determine the controller's distance from the PlayStation Eye through the light's image size, thus enabling the controller's position to be tracked in three dimensions with high precision and accuracy. The sphere-based distance calculation allows the controller to operate with minimal processing latency, as opposed to other camera-based control techniques on the PlayStation 3.
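The size-to-distance relationship described above follows from the pinhole camera model: a sphere of known physical size appears smaller in the image in inverse proportion to its distance. A minimal sketch of the idea, where both the orb diameter and the focal length are illustrative assumptions rather than measured PlayStation Move/Eye values:

```python
def sphere_distance(image_diameter_px, sphere_diameter_cm=4.6,
                    focal_length_px=800.0):
    """Estimate the distance to a sphere of known physical size from
    its apparent diameter in the image, using the pinhole model:

        distance = focal_length * real_size / image_size

    The sphere diameter (~4.6 cm) and focal length (800 px) are
    made-up illustrative values, not official specifications.
    """
    return focal_length_px * sphere_diameter_cm / image_diameter_px
```

Because the relationship is a simple division, an orb that appears half as large in the image is twice as far from the camera, which is why a uniform sphere of known size makes depth estimation so cheap.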

A pair of inertial sensors inside the controller, a three-axis linear accelerometer and a three-axis angular rate sensor, are used to track rotation as well as overall motion. An internal magnetometer is also used for calibrating the controller's orientation against the Earth's magnetic field to help correct cumulative error (drift) in the inertial sensors. The inertial sensors can be used for dead reckoning in cases in which the camera tracking is insufficient, such as when the controller is obscured behind the player's back.
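One common way to use an absolute reference such as a magnetometer heading to bound gyroscope drift is a complementary filter. The sketch below is a generic illustration of that idea only, not Sony's actual sensor-fusion algorithm:

```python
def complementary_filter(gyro_heading, mag_heading, alpha=0.98):
    """Blend a gyro-integrated heading (smooth but drifting) with a
    magnetometer heading (noisy but drift-free), both in degrees.

    With alpha near 1 the gyro dominates short-term response, while
    the small magnetometer term continually pulls the estimate back
    toward the absolute reference, bounding cumulative drift.
    """
    return alpha * gyro_heading + (1 - alpha) * mag_heading
```

Applied every update cycle, the 2% correction term is invisible over a single frame but prevents the integrated gyro error from growing without bound over minutes of play.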
The controller face features a large ovoid primary button (Move), small action buttons (Triangle, Circle, Cross, Square), and a regular-sized PS button, arranged in a configuration similar to that of the Blu-ray Disc Remote Control. On the left and right sides of the controller are a Select and a Start button, respectively. On the underside is an analog trigger (T). On the tail end of the controller are the wrist strap, USB port, and extension port.
The motion controller features vibration-based haptic technology. In addition to providing a tracking reference, the controller's orb light can be used to provide visual feedback, simulating aesthetic effects such as the muzzle flash of a gun, or the paint on a brush.
Using different orb colors for each controller, up to four motion controllers can be tracked at once with the PlayStation Eye. Demonstrations for the controller have featured activities using a single motion controller, as well as those in which the user wields two motion controllers, with one in each hand. To minimize the cost of entry, Sony has stated that all launch titles for PlayStation Move will be playable with one motion controller, with enhanced options available for multiple motion controllers.
All image processing for PlayStation Move is performed in the PlayStation 3's Cell microprocessor. According to Sony, use of the motion-tracking library entails some Synergistic Processing Unit (SPU) overhead as well as an impact on memory, though the company states that the effects will be minimized. According to Move motion controller co-designer Anton Mikhailov, the library uses 1-2 megabytes of system memory.

Navigation controller

The PlayStation Move navigation controller (originally referred to as the PlayStation Move sub-controller and also known as the navi-controller) is a one-handed supplementary controller designed for use in conjunction with the PlayStation Move motion controller for certain types of gameplay. Replicating the major functionality of the left side of a standard PlayStation Wireless Controller, the PlayStation Move navigation controller features a left analog stick (with L3 button function), a D-pad, and L1 and L2 analog triggers. The navigation controller also features Cross and Circle action buttons, as well as a PS button. Since all controls correspond to those of a standard Wireless Controller, a Sixaxis or DualShock 3 controller can be used in place of the navigation controller in PlayStation Move applications.

 

Accessories

Announced at E3 2010, the PlayStation Move charging station is a charging base unit designed to charge two PlayStation Move controllers (e.g. motion controllers, navigation controllers).
The PlayStation Move shooting attachment is an accessory for the PlayStation Move motion controller that adapts the motion controller into a handgun form. The motion controller is fitted into the gun barrel so that the motion controller's T trigger is interlocked with the trigger on the gun attachment, while leaving all the topmost buttons accessible through a hole in the top, similar to the Wii Zapper.
The PlayStation Move Sharp Shooter attachment is an accessory that adapts both the motion controller and the navigation controller into a submachine-gun form with an adjustable shoulder support. The motion controller is fitted into the gun barrel so that its T trigger is interlocked with the attachment's trigger, and the navigation controller is clipped into a holder below the barrel. The accessory goes further by adding several extra buttons and controls via the EXT connector on the base of the Move motion controller: Triangle and Square buttons (on both sides, located near the T and M buttons), an RL button (located under the gun's clip) and a pump-action mechanism (located under the barrel), both of which can be used to reload (or may serve another function depending on future game design), a three-setting firing-rate control, an M-button lock, and a secondary M button (located below the trigger) for easy access. The peripheral is officially supported by the "hardcore" shooter games Killzone 3 and SOCOM 4, and following strong community demand and positive feedback from Sharp Shooter users, it has been announced that Resistance 3 will also support it.[ source : wikipedia ]

Microsoft Kinect ( A New Era In Gaming )

Kinect for Xbox 360, or simply Kinect (originally known by the code name Project Natal), is a "controller-free gaming and entertainment experience" by Microsoft for the Xbox 360 video game platform. Based around a webcam-style add-on peripheral for the Xbox 360 console, it enables users to control and interact with the Xbox 360 without the need to touch a game controller, through a natural user interface using gestures and spoken commands. The project is aimed at broadening the Xbox 360's audience beyond its typical gamer base. Kinect competes with the Wii Remote Plus for the Wii and with the PlayStation Move and PlayStation Eye motion control system for the PlayStation 3.
Kinect was launched in North America on November 4, 2010, in Europe on November 10, 2010, in Australia, New Zealand and Singapore on November 18, 2010 and in Japan on November 20, 2010. Purchase options for the sensor peripheral include a bundle with the game Kinect Adventures and console bundles with either a 4 GB or 250 GB Xbox 360 console and Kinect Adventures.
Kinect holds the Guinness World Record as the "fastest selling consumer electronics device", selling an average of 133,333 units per day for a total of 8 million units in its first 60 days. Ten million Kinect sensors had been shipped as of March 9, 2011.
Microsoft has announced the release of a non-commercial Kinect software development kit for Windows in spring 2011, with a commercial version following at a later date.





Kinect is based on software technology developed internally by Rare, a subsidiary of Microsoft Game Studios owned by Microsoft, and on range camera technology by Israeli developer PrimeSense, which interprets 3D scene information from continuously projected infrared structured light. This 3D scanner system, called Light Coding, employs a variant of image-based 3D reconstruction.
The Kinect sensor is a horizontal bar connected to a small base with a motorized pivot and is designed to be positioned lengthwise above or below the video display. The device features an "RGB camera, depth sensor and multi-array microphone running proprietary software", which provide full-body 3D motion capture, facial recognition and voice recognition capabilities. At launch, voice recognition was only made available in Japan, the United Kingdom, Canada and the United States. Mainland Europe will receive the feature in spring 2011. The Kinect sensor's microphone array enables the Xbox 360 to conduct acoustic source localization and ambient noise suppression, allowing for things such as headset-free party chat over Xbox Live.
The depth sensor consists of an infrared laser projector combined with a monochrome CMOS sensor, which captures video data in 3D under any ambient light conditions. The sensing range of the depth sensor is adjustable, and the Kinect software is capable of automatically calibrating the sensor based on gameplay and the player's physical environment, accommodating for the presence of furniture or other obstacles.
Described by Microsoft personnel as the primary innovation of Kinect, the software technology enables advanced gesture recognition, facial recognition and voice recognition. According to information supplied to retailers, Kinect is capable of simultaneously tracking up to six people, including two active players for motion analysis with a feature extraction of 20 joints per player. However, PrimeSense has stated that the number of people the device can "see" (but not process as players) is only limited by how many will fit in the field-of-view of the camera.
 
Through reverse engineering efforts, it has been determined that the Kinect sensor outputs video at a frame rate of 30 Hz. The RGB video stream uses 8-bit VGA resolution (640 × 480 pixels) with a Bayer color filter, while the monochrome depth-sensing video stream is in VGA resolution (640 × 480 pixels) with 11-bit depth, which provides 2,048 levels of sensitivity. The Kinect sensor has a practical ranging limit of 1.2–3.5 m (3.9–11 ft) when used with the Xbox software. The area required to play Kinect is roughly 6 m², although the sensor can maintain tracking through an extended range of approximately 0.7–6 m (2.3–20 ft). The sensor has an angular field of view of 57° horizontally and 43° vertically, while the motorized pivot is capable of tilting the sensor up to 27° either up or down. The horizontal field of the Kinect sensor at the minimum viewing distance of ~0.8 m (2.6 ft) is therefore ~87 cm (34 in), and the vertical field is ~63 cm (25 in), resulting in a resolution of just over 1.3 mm (0.051 in) per pixel. The microphone array features four microphone capsules, with each channel processing 16-bit audio at a sampling rate of 16 kHz.
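The field-size figures quoted above follow directly from the stated viewing angles: the linear extent covered at distance d by an angular field θ is 2·d·tan(θ/2). Reproducing the arithmetic:

```python
import math

def fov_extent(distance_m, fov_deg):
    """Linear extent (in meters) covered by an angular field of view
    at a given distance: extent = 2 * d * tan(fov / 2)."""
    return 2 * distance_m * math.tan(math.radians(fov_deg) / 2)

# Figures quoted in the text: 57 deg horizontal, 43 deg vertical,
# evaluated at the ~0.8 m minimum viewing distance.
h = fov_extent(0.8, 57)      # horizontal field: ~0.87 m (~87 cm)
v = fov_extent(0.8, 43)      # vertical field:   ~0.63 m (~63 cm)
mm_per_px = h * 1000 / 640   # ~1.36 mm per pixel across 640 pixels
```

The result agrees with the "just over 1.3 mm per pixel" figure in the text, and shows why per-pixel resolution degrades linearly as the player steps farther from the sensor.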
Because the Kinect sensor's motorized tilt mechanism requires more power than can be supplied via the Xbox 360's USB ports, the device makes use of a proprietary connector combining USB communication with additional power. Redesigned Xbox 360 S models include a special AUX port for accommodating the connector, while older models require a special power supply cable (included with the sensor) that splits the connection into separate USB and power connections; power is supplied from the mains by way of an AC adapter.[ source : wikipedia ]

Android (operating system)

Android is a software stack for mobile devices that includes an operating system, middleware and key applications. Google Inc. purchased the initial developer of the software, Android Inc., in 2005. Android's mobile operating system is based on the Linux kernel. Google and other members of the Open Handset Alliance collaborated on Android's development and release. The Android Open Source Project (AOSP) is tasked with the maintenance and further development of Android. The Android operating system is the world's best-selling Smartphone platform.
Android has a large community of developers writing applications ("apps") that extend the functionality of the devices. There are currently over 250,000 apps available for Android. Android Market is the online app store run by Google, though apps can also be downloaded from third-party sites. Developers write primarily in the Java language, controlling the device via Google-developed Java libraries.
The unveiling of the Android distribution on 5 November 2007 was announced with the founding of the Open Handset Alliance, a consortium of 80 hardware, software, and telecom companies devoted to advancing open standards for mobile devices. Google released most of the Android code under the Apache License, a free software and open source license.

The Android open-source software stack consists of Java applications running on a Java-based, object-oriented application framework on top of Java core libraries running on a Dalvik virtual machine featuring JIT compilation. Libraries written in C include the surface manager, OpenCore media framework, SQLite relational database management system, OpenGL ES 2.0 3D graphics API, WebKit layout engine, SGL graphics engine, SSL, and Bionic libc. The Android operating system, including the Linux kernel, consists of roughly 12 million lines of code including 3 million lines of XML, 2.8 million lines of C, 2.1 million lines of Java, and 1.75 million lines of C++.[source : wikipedia]

Friday, 6 May 2011

TABLETS


A tablet computer, or simply tablet, is a complete mobile computer, larger than a mobile phone or personal digital assistant, integrated into a flat touch screen and primarily operated by touching the screen. It often uses an onscreen virtual keyboard or a digital pen rather than a physical keyboard.
The term may also apply to a "convertible" notebook computer whose keyboard is attached to the touchscreen by a swivel joint or slide joint so that the screen may lie with its back upon the keyboard, covering it and exposing only the screen for touch operation.

As of 2010, two distinctly different types of tablet computing devices exist, whose operating systems are of different origin. Older tablet personal computers are mainly x86-based and are fully functional personal computers employing a slightly modified personal computer OS (such as Windows or Linux) adapted to their touch screen in place of the traditional display, mouse and keyboard. A typical tablet personal computer needs to be stylus-driven, because operating a typical desktop-based OS requires high precision to select GUI widgets, such as the close-window button.

Since mid-2010, new tablet computers with mobile operating systems have forgone the Wintel paradigm, adopted a different interface, and created a new type of computing device. These mobile-OS tablets are normally finger-driven and use multi-touch capacitive touch screens instead of the simple resistive touchscreens of typical stylus-driven systems (a standard external USB keyboard can also be used). The first of these was the iPad, with the Samsung Galaxy Tab and others following. In forgoing the x86 precondition (a requisite of Windows compatibility), this new class of tablet computers uses a version of the ARM architecture processor, heretofore used in portable equipment such as MP3 players and cell phones, that is now powerful enough (especially with the introduction of the ARM Cortex family) for tasks such as internet browsing, light production work and gaming.[source : wikipedia ]

SMARTPHONES

The Future of Smartphones

Smartphones are getting thinner and cheaper, and as a result are entering the consumer market. For the past few years smartphones have been aimed at prosumers, or “professional consumers” (prosumers can also refer to “production consumers”, or consumers who drive the design, production and alteration of a product). Prosumers are generally early adopters of products. They have disposable income and great enthusiasm for particular products or technologies. Smartphone developers find prosumers very useful when designing applications and hardware. As prosumers pick and choose the phones that offer the applications they want, developers can tweak designs and move towards mass production. Analysts predict that one billion smartphone handsets will be sold by 2011 [Source: eCommerce Times].
While input methods will vary, the research firm ARCchart forecasts that 38 percent of all mobile phones will use touchscreens or touchpanels by 2012 [Source: LinuxDevices.com]. The iPhone uses an advanced touchscreen, for example, and can even detect multiple points of contact simultaneously.

Security
Perhaps the most challenging consideration for the future is security. Smartphones and PDAs are already popular among many corporate executives, who often use their phones to transmit confidential information. Smartphones may be vulnerable to security breaches such as an "evil twin" attack, in which a hacker sets a server's service set identifier (SSID) to that of a legitimate hotspot or network while simultaneously blocking traffic to the real server. When a user connects to the hacker's server, information can be intercepted and security is compromised.


One downside to the openness and configurability of smartphones is that these qualities also make them susceptible to viruses. Hackers have written viruses that attack Symbian OS phones and can, for example, turn off anti-virus software, lock the phone completely or delete all applications stored on it.
On the other hand, some critics argue that anti-virus software manufacturers greatly exaggerate the risks, harms and scope of phone viruses in order to help sell their software.





Photo courtesy of © 2006 SMobile Systems
Symbian Skulls virus: Skulls continuously displays a flashing skull animation in the background, regardless of which application the user is running.

The incredible diversity in smartphone hardware, software and network protocols inhibits practical, broad security measures. Most security considerations either focus on particular operating systems or have more to do with user behavior than network security.

With data transmission rates reaching blistering speeds and the incorporation of WiFi technology, the sky is the limit on what smartphones can do. Possibly the most exciting thing about smartphone technology is that the field is still wide open. It's an idea that probably hasn't found its perfect, real-world implementation yet. Every crop of phones brings new designs and new interface ideas. No one developer or manufacturer has come up with the perfect shape, size or input method yet. The next "killer app" smartphone could look like a flip phone, a tablet PC, a candy bar or something no one has conceived of yet.[source:communication.howstuffworks.com]