3Dconnexion offers an SDK for developers who want to experiment with gaming control. You can find it at www.3dconnexion.com/news/press/032007_SDK.php.
The Saitek Cyborg Command Unit, a more targeted gaming peripheral, shows how an ancillary controller can also deepen immersion. With it, you can devote one hand entirely to movement and the other to shooting with the mouse. The device has 144 programmable buttons, making it easy to find a 'natural' way to launch grenades or pick up objects in a game. In Crysis multiplayer matches, you'll notice considerably smoother movement and higher kill counts thanks to more precise control.
The Command Unit highlights a key point about computing control. Our brains are wired to split functions between our two hands, which is why future input devices should focus on one aspect of control – such as movement or clicking – rather than trying to handle everything with a single general-purpose device. It also explains why the most complex keyboards and controllers, such as those that let you map keys to a particular game, often fail in the marketplace.
In the future, controllers should become more focused on the task at hand. Imagine a photo-editing controller that lets you apply a Gaussian blur to an image with a single click. Music and video producers have already figured out this specialisation, and controllers now exist simply for fast-forwarding through video scenes or audio clips.
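The principle behind such a task-focused controller is that each physical button maps directly to one operation, with no modes or menus in between. A minimal sketch in Python of that idea – the button names and editing operations here are hypothetical, purely for illustration:

```python
# Sketch of a task-focused controller: one button press, one operation.
# Button labels and the editing operations are hypothetical examples.

def apply_blur(image):
    """Stand-in for a one-click Gaussian blur operation."""
    return f"blurred({image})"

def apply_crop(image):
    """Stand-in for a one-click crop operation."""
    return f"cropped({image})"

# The whole 'controller' is just a direct button-to-action table.
BUTTON_MAP = {
    "B1": apply_blur,
    "B2": apply_crop,
}

def on_button_press(button, image):
    """Dispatch a button press straight to its editing operation."""
    return BUTTON_MAP[button](image)

print(on_button_press("B1", "photo.jpg"))  # blurred(photo.jpg)
```

Because the mapping is fixed and flat, there is nothing to learn beyond which button does what – the same quality that makes dedicated scrubbing controllers for audio and video feel natural.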
Controlling a PC is one thing; becoming fully immersed in an environment is more challenging. Once again, Nintendo is the clear leader in offering peripherals that enhance gaming. The Wii Fit makes you think more about weight loss, posture and balance than the fact that you are standing on a small white platform only a few centimetres high.
Successful immersion depends on two factors: the hardware interface must become a natural extension of your interaction, and the software must work fluidly with the hardware peripheral. 3D displays, 3D surround sound and head-mounted displays all point to how computing will change over the next few years.
No hardware add-on has as much potential – or is as disappointing – as the 3D display. These monitors could pull gamers, video producers and even software developers into a more realistic realm. In this environment, a video producer could interact with video – creating special effects, pulling files out of easy-to-organise bins and assembling footage – as though the film rolls were sprawled out on a table before him.
Limitations of 3D displays
Interestingly, the main limitation of 3D technology has more to do with human-computer interaction than with the display technology itself. 3D displays have in fact matured steadily and solved several hard problems. Glasses, for example, are now much cheaper to manufacture, and 3D goggles look less like the instruments an optician uses to test your eyesight and more like sunglasses. Meanwhile, first-person shooters already build their worlds from 3D polygons, so their geometry maps easily onto three dimensions in space.
The remaining hurdle is to do with physics. Your eyes have an amazing ability to focus: when you look at an object in the distance, your eyes turn in their sockets so that both converge on it. We are all literally 'drawn to the light', focusing on the brightest object. The most common method used to create a 3D screen is called 3D Stereo, and it involves two planes of glass.
The iZ3D 3D Monitor 22in uses this method: the back panel controls colour intensity, while the front panel controls polarisation – offsetting the image so it looks 3D. The Samsung HL56A650 DLP 56in display also offers stereo 3D. The double-pane glass approach requires that you wear goggles and sit directly in front of the monitor.
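The offsetting at work here can be made concrete. For eyes separated by a distance e, sitting a distance d from the screen, simple similar-triangle geometry says a point meant to appear at depth z gets a horizontal screen parallax of p = e(z − d)/z: zero at the screen plane, approaching the full eye separation for very distant objects, and negative (crossed) for objects that should pop out in front of the screen. A small illustrative sketch – the variable names and numbers are my own, not from any particular display's specification:

```python
def screen_parallax(eye_sep, screen_dist, obj_dist):
    """Horizontal offset between the left- and right-eye images of a point,
    by similar triangles. Positive = uncrossed (behind the screen),
    zero at the screen plane, negative = crossed (in front of it).
    All distances in the same unit (metres here)."""
    return eye_sep * (obj_dist - screen_dist) / obj_dist

e, d = 0.065, 0.7  # typical adult eye separation and desktop viewing distance

print(screen_parallax(e, d, d))     # 0.0 -> object sits exactly on the screen
print(screen_parallax(e, d, 1e9))   # ~0.065 -> distant object, near-full offset
print(screen_parallax(e, d, 0.5))   # negative -> object appears in front
```

This also hints at why viewing position matters so much with the double-pane approach: the geometry assumes a viewer sitting squarely in front of the screen at roughly the intended distance.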