First turned down by Nintendo more than two years ago and recently picked up by Microsoft, the technology behind the Xbox 360's new Project Natal is complex and interesting. It was developed by PrimeSense, a tech company based in Israel. How it works does not lend itself to easy answers, and unless you are a mathematician, engineer, or scientist, it may not make a whole lot of sense either. But here is a basic rundown of the inner workings of Project Natal, which will be released for the 360 later this year, supposedly in Q4, and is widely expected to be a huge hit.
“PrimeSense technology for acquiring the depth image is based on Light Coding. Light Coding works by coding the scene volume with near-IR light. The IR Light Coding is invisible to the human eye. The solution then utilizes a standard off-the-shelf CMOS image sensor to read the coded light back from the scene. PrimeSense’s SoC chip is connected to the CMOS image sensor, and executes a sophisticated parallel computational algorithm to decipher the received light coding and produce a depth image of the scene. The solution is immune to ambient light.”
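PrimeSense's actual decoding algorithm runs on its SoC and is proprietary, but structured-light systems like this one generally recover depth by triangulation: the projector and the CMOS sensor are horizontally offset, so each projected IR feature shifts sideways in the captured image by an amount (the disparity) that depends on how far away the surface is. A minimal sketch of that relation, using made-up baseline and focal-length values rather than the real hardware's:

```python
# Toy structured-light depth estimation. The baseline and focal length below
# are illustrative assumptions, not PrimeSense's actual parameters, and the
# real PS1080 decoding algorithm is proprietary.
# A known IR pattern is projected onto the scene; the horizontal shift
# (disparity) of each feature, as seen by the offset CMOS sensor, encodes
# depth via triangulation: depth = (baseline * focal_length) / disparity.

BASELINE_M = 0.075       # assumed projector-to-sensor distance (meters)
FOCAL_LENGTH_PX = 580.0  # assumed sensor focal length (pixels)

def depth_from_disparity(disparity_px: float) -> float:
    """Return depth in meters for an observed pattern shift in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return (BASELINE_M * FOCAL_LENGTH_PX) / disparity_px

# A feature shifted 21.75 pixels sits at 2 m; a larger shift means a nearer point.
print(depth_from_disparity(21.75))  # 2.0
print(depth_from_disparity(43.5))   # 1.0
```

Because the pattern is projected in near-IR and decoded relative to a known reference, ambient visible light does not disturb the measurement, which is what the quote means by "immune to ambient light."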
“The PrimeSensor is built around PrimeSense’s PS1080 SoC. The PS1080 controls the IR light source in order to project the scene with an IR Light Coding image. The IR projector is a Class 1 safe light source, and is compliant with the IEC 60825-1 standard. A standard CMOS image sensor receives the projected IR light and transfers the IR Light Coding image to the PS1080. The PS1080 processes the IR image and produces an accurate per-frame depth image of the scene.”
“The PrimeSensor includes two optional sensory input capabilities: color (RGB) image and audio (the PrimeSensor has two microphones and an interface to four external digital audio sources).”
“To produce more accurate sensory information, the PrimeSensor performs a process called Registration. The Registration process’s resulting images are pixel-aligned, which means that every pixel in the color image is aligned to a pixel in the depth image.”
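Registration is needed because the depth and color sensors sit a few centimeters apart, so the same physical point lands on different pixels in each image. A standard way to align them, sketched below with made-up calibration values (PrimeSense's actual calibration and on-chip method are not public): back-project each depth pixel to a 3D point, shift it by the physical offset between the sensors, and re-project it into the color image.

```python
# A minimal sketch of depth-to-color registration. All intrinsics and the
# sensor offset are invented illustrative values, not PrimeSense's.

FX, FY = 580.0, 580.0    # assumed focal lengths (pixels)
CX, CY = 320.0, 240.0    # assumed principal point (image center)
OFFSET_X_M = 0.025       # assumed depth-to-color sensor baseline (meters)

def register_depth_pixel(u, v, depth_m):
    """Map a depth-image pixel (u, v) at depth_m meters to color-image coords."""
    # Back-project the depth pixel to a 3D point in the depth camera frame.
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    # Translate into the color camera frame (pure translation assumed here;
    # a real calibration would also apply a rotation and lens distortion).
    x += OFFSET_X_M
    # Project into the color image.
    u_color = FX * x / depth_m + CX
    v_color = FY * y / depth_m + CY
    return u_color, v_color

# The same physical point lands on a shifted column in the color image,
# and the shift shrinks as the point moves farther away.
print(register_depth_pixel(320, 240, 1.0))
print(register_depth_pixel(320, 240, 2.0))
```

Doing this for every depth pixel yields the pixel-aligned image pair the quote describes.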
“All sensory information (depth image, color image and audio) is transferred to the host via a USB2.0 interface, with complete timing alignment.”
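How the PS1080 achieves that timing alignment internally is not described, but the idea on the receiving end can be illustrated with a simple sketch: if every depth, color, and audio packet carries a timestamp, the host can pair each depth frame with the color frame closest to it in time. The frame rates and timestamps below are invented for illustration.

```python
# Hypothetical sketch of timestamp-based stream alignment; not the actual
# PrimeSense/Natal implementation, whose details are not public.

def align_streams(depth_frames, color_frames):
    """Pair each depth frame with the nearest-in-time color frame.

    Both inputs are lists of (timestamp_ms, frame_id) tuples.
    """
    pairs = []
    for d_ts, d_id in depth_frames:
        nearest = min(color_frames, key=lambda c: abs(c[0] - d_ts))
        pairs.append((d_id, nearest[1]))
    return pairs

depth = [(0, "d0"), (33, "d1"), (66, "d2")]  # ~30 fps depth stream
color = [(1, "c0"), (34, "c1"), (68, "c2")]  # ~30 fps color stream
print(align_streams(depth, color))  # [('d0', 'c0'), ('d1', 'c1'), ('d2', 'c2')]
```

With depth, color, and audio delivered in lockstep over a single USB 2.0 connection, the console can treat them as one synchronized sensory stream.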