Today we’re wrapping up our coverage of last month’s NVIDIA GPU Technology Conference, including the show’s exhibit hall. We came to GTC to get a better grasp on just where things stand for NVIDIA's still-fledgling GPU compute efforts, along with the wider industry as a whole, and we didn’t leave disappointed. Besides seeing some interesting demos – including the closest thing you’ll see to a holodeck in 2010 – we had a chance to talk to Adobe, Microsoft, CyberLink, and others about where they see GPU computing going in the next couple of years. The GPU-centric future as NVIDIA envisioned it may be taking a bit longer than we hoped, but it looks like we may finally be turning the corner on GPU computing breaking into more than just the High Performance Computing space.
Scalable Display Technologies’ Multi-Projector Calibration Software: Many Projectors, 1 Display
Back in 2009, when we were first introduced to AMD’s Eyefinity technology by Carrell Killebrew, he threw out the idea of the holodeck. Using single large surface technologies like Eyefinity along with video cards powerful enough to render the graphics for such an experience, a holodeck would become a possibility in the next seven years, once rendering and display technologies can work together to create and show a 100 million pixel environment. The GPUs necessary for this are still years off, but it turns out the display technologies are much closer.
One of the first sessions we saw at GTC was from Scalable Display Technologies, an MIT spin-off based in Cambridge, MA. In a session titled Ultra High Resolution Displays and Interactive Eyepoint Using CUDA, Scalable discussed their software-based approach to merging a number of displays into a single surface. In a nutshell, currently the easiest way to create a single large display is to use multiple projectors, each projecting a portion of an image onto a screen. The problem with this approach is that calibrating the projectors is a time-consuming process: not only do they need to be image-aligned, but care must also be taken to achieve the same color output from each projector so that minute differences between projectors do not become apparent.
Scalable, however, has an interesting solution that does this in software, relying on nothing more on the hardware side than a camera to give their software vision. With a camera in place, their software can see a multi-projector setup and immediately begin to calibrate it by adjusting the image sent to each projector, rather than trying to adjust each projector itself. Specifically, the company takes the final output of a GPU and texture-maps it onto a mesh, which they then deform to compensate for the imperfections the camera sees, while also adjusting the brightness of sections of the image so that the overlapping regions blend together. This rendered mesh is used as the final projected image, and thanks to the intentional deformation it cancels out the imperfections in the projector setup. A perfect single surface, corrected in the span of 6 seconds versus the minutes or hours it takes to adjust the projectors themselves.
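To make the warp-and-blend idea a bit more concrete, here is a minimal sketch of the resampling step. To be clear, this is our own illustration rather than Scalable's code: the warp_map and blend_map inputs and the nearest-neighbor sampling are assumptions for clarity, whereas the real system texture-maps the frame on the GPU with a deformed mesh and proper filtering.

```python
import numpy as np

def warp_and_blend(frame, warp_map, blend_map):
    """Produce one projector's output image from the rendered frame.

    frame:     (H, W, 3) full-resolution image rendered by the GPU
    warp_map:  (h, w, 2) per-output-pixel source coordinates in [0, 1],
               assumed to come from the camera-based calibration step
    blend_map: (h, w) per-pixel brightness weights that feather the
               overlap between neighboring projectors
    """
    H, W = frame.shape[:2]
    # Convert normalized warp coordinates to source pixel indices.
    src_x = np.clip((warp_map[..., 0] * (W - 1)).round().astype(int), 0, W - 1)
    src_y = np.clip((warp_map[..., 1] * (H - 1)).round().astype(int), 0, H - 1)
    # Resample the frame through the deformation mesh (nearest-neighbor here;
    # a GPU implementation would use filtered texture sampling instead).
    warped = frame[src_y, src_x]
    # Attenuate the overlap regions so neighboring projectors sum to
    # uniform brightness on the screen.
    return warped * blend_map[..., None]

# Toy usage: a 1080p frame resampled for a hypothetical 1280x720 projector.
frame = np.random.rand(1080, 1920, 3)
yy, xx = np.mgrid[0:720, 0:1280]
warp_map = np.dstack((xx / 1279.0, yy / 719.0))   # identity warp for the example
blend_map = np.ones((720, 1280))                  # no blending in this toy case
projector_image = warp_and_blend(frame, warp_map, blend_map)
```

The important point is that only the image is ever changed; the projectors themselves are never touched, which is what lets the calibration run in seconds.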
Along with the discussion of their technology, at GTC Scalable was showing off a custom demonstration unit using 3 720P projectors to project a single image onto a curved screen. Why curved? Because their software can correct for both curved and flat screens, generating an image that is perspective-correct even for a curved screen. The company also discussed some of the other implementations of their technology; as it turns out, their software has already been used to build Carrell’s holodeck for a military customer: a 50 HD projector setup (103.6MPixels) used in a simulator and kept in calibration with Scalable’s software. Ultimately Scalable is looking to not only enable large projection displays, but to do so cheaply: with software calibration it’s no longer necessary to use expensive enterprise-grade projectors, allowing customers to use cheaper consumer-grade projectors that lack the kind of hardware calibration features this kind of display would normally require. Case in point, their demo unit uses very cheap $600 projectors. Or for that matter, it doesn’t even have to be a projector – their software works with any display type, although for the time being only projectors can deliver a seamless image.
Wrapping things up, we asked the company whether we’d see their software used in the consumer space, as at the moment it’s principally found in custom one-off setups for specific customers. The long and the short of it is that as they’re merely a software company, they don’t have a lot of control over that. It’s their licensees that build the final displays, so one of them would need to decide to bring this to market. Given the space requirements for projectors it’s not likely to replace the multi-LCD setup any time soon, but it’s a good candidate for the man cave, where there would be plenty of space for a triple-projector setup. We’ve already seen NVIDIA demonstrate this concept this year with 3D Vision Surround, so there may very well be a market for it in the consumer space.
Micoy & Omni-3D
The other company on hand showing a potential holodeck-like technology was Micoy, who like Scalable is a software firm. Their focus is on writing the software necessary to properly build and display a 3D environment on an all-encompassing (omnidirectional) view such as a dome or CAVE, as opposed to 3D originating from a 2D surface such as a monitor or projector screen. The benefit of this method is that it can encompass the entire view of the user, eliminating the edge-clipping issues that come with placing a 3D object above-depth (in front of the screen plane); in other words, this makes it practical to render objects right in front of the viewer.
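For a sense of what rendering to a dome involves, here is a generic sketch of mapping a dome-master image's pixels to view directions on a hemisphere. This is a textbook-style equidistant (fisheye) mapping of our own choosing, not Micoy's actual method, and the 180° field of view is simply an assumption to match the lens used in their demo described below.

```python
import numpy as np

def dome_ray_directions(size, fov_deg=180.0):
    """Map each pixel of a square dome-master image to a unit view direction.

    Pixels inside the inscribed circle cover the hemisphere; pixels outside
    it are unused (marked by the returned mask).
    """
    # Normalized coordinates in [-1, 1] with (0, 0) at the image center.
    v, u = np.mgrid[0:size, 0:size]
    x = (u + 0.5) / size * 2.0 - 1.0
    y = (v + 0.5) / size * 2.0 - 1.0
    r = np.sqrt(x * x + y * y)              # radial distance from the center
    valid = r <= 1.0                        # inside the dome's footprint
    theta = r * np.radians(fov_deg) / 2.0   # equidistant (fisheye) projection
    phi = np.arctan2(y, x)                  # azimuth around the dome
    # Unit direction per pixel; z points up through the dome's apex.
    dirs = np.dstack((np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)))
    return dirs, valid

# Example: ray directions for a 1024x1024 dome-master frame.
directions, mask = dome_ray_directions(1024)
```

A renderer traces or rasterizes along these directions (offset per eye for stereo) rather than through a flat image plane, which is why content can surround the viewer instead of being clipped at a screen's edges.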
At GTC Micoy had an inflatable tent set up, housing a projector with a 180° lens and a suitable screen, which in turn was being used to display a rolling demo loop. In practice it was a half-dome with 3D material projected onto it. The tent may have caught a lot of eyes, but it was the content of the demo that really attracted attention, and here it’s a shame that pictures simply can’t convey the experience, so words will have to do.
I personally have never been extremely impressed with stereoscopic 3D before – it’s a nice effect in movies and games when done right, but since designers can’t seriously render things above-depth due to edge-clipping issues, it’s never been an immersive experience for me. Instead it has merely been a deeper experience. This, on the other hand, was the most impressive 3D presentation I’ve ever seen. I’ve seen CAVEs, OMNIMAX domes, 3D games, and more; this does not compare. Micoy had the honest-to-goodness holodeck, or at least the display portion of it. It was all-encompassing, blocking out any sense that I was anywhere else, and with items rendered above-depth I could reach out and sort of touch them, and other people could walk past them (at least until they interrupted the projection). To be quite clear, it still needs much more resolution and something to remedy the color/brightness issues of shutter glasses, but still, it was the prototype holodeck. When Carrell Killebrew talks about building the future holodeck, this is no doubt what he has in mind.
I suppose the only real downside is that Micoy’s current technology is a tease. Besides the issues we listed earlier, their technology currently doesn’t work in real-time, which is why they were playing a rolling demo. It’s suitable for movie-like uses, but there isn’t enough processing power right now to do the required computation in real-time. Real-time is where they want to go in the future, along with a camera system to allow users to interact with the system, but they aren’t there yet.
Ultimately I wouldn’t expect this technology to be easily accessible for home-use due to the costs and complexities of a dome, but in the professional world it’s another matter. This may very well be the future in another decade.
Comments
dtdw - Sunday, October 10, 2010 - link
"we had a chance to Adobe, Microsoft, Cyberlink, and others about where they see GPU computing going in the next couple of years."

shouldn't you add 'the' before adobe?
and adding 'is' after computing?
tipoo - Sunday, October 10, 2010 - link
" we had a chance to Adobe, Microsoft, Cyberlink, and others about where they see GPU computing going "Great article, but I think you accidentally the whole sentence :-P
Deanjo - Sunday, October 10, 2010 - link
"While NVIDIA has VDPAU and also has parties like S3 use it, AMD and Intel are backing the rival Video Acceleration API (VA API)."Ummm wrong, AMD is using XvBA for it's video acceleration API. VAAPI provides a wrapper library to XvBA much like there is VAAPI wrapper for VDPAU. Also VDPAU is not proprietary, it is part of Freedesktop and the open source library package contains a wrapper library and a debugging library allowing other manufacturers to implement VDPAU support into their device drivers. In short every device manufacturer out there is free to include VDPAU support and it is up to the driver developer to add that support to a free and truly open API.
Ryan Smith - Sunday, October 10, 2010 - link
AMD is using XvBA, but it's mostly an issue of semantics. They already had the XvBA backend written, so they merely wrote a shim for VA API to get it in there. In practice XvBA appears to be dead, and developers should use VA API and let AMD and the others work on the backend. So in that sense, AMD are backing VA API.

As for NVIDIA, proprietary or not doesn't really come into play. NVIDIA is not going to give up VDPAU (or write a VA API shim) and AMD/Intel don't want to settle on using VDPAU. That's the stalemate that's been going on for a couple of years now, and it doesn't look like there's any incentive on either side to come together.
It's software developers that lose out; they're the ones that have to write in support for both APIs in their products.
electroju - Monday, October 11, 2010 - link
Deanjo, that is incorrect. VA API is not a wrapper. It is the main API from freedesktop.org. It was created by Intel, unfortunately, but they helped extend the stalled XvMC project into a more flexible API. VDPAU and XvBA came later to provide their own way of doing about the same thing. They also include backward compatibility with VA API. VDPAU is not open source; it just provides structs to be able to use VDPAU, so this means VDPAU cannot be changed by the open source community to implement new features.

AmdInside - Sunday, October 10, 2010 - link
Good coverage. Always good to read new info. Often looking at graphics card reviews can get boring, as I tend to sometimes just glance at the graphs and that is it. I sure wish Adobe would use the GPU more for photography software. Lightroom is one piece of software that works alright on desktops but is too slow for my taste on laptops.

AnnonymousCoward - Monday, October 11, 2010 - link
Holodeck? C'mon. It's a 3D display. You can't create a couch and then lay on it.

Guspaz - Tuesday, October 12, 2010 - link
I'm sort of disappointed with RemoteFX. It sounds like it won't be usable remotely by consumers or small businesses who are on broadband-class connections; with these types of connections, you can probably count on half a megabit of throughput, and that's probably not enough to be streaming full-screen MJPEG (or whatever they end up using) over the net.

So, sure, it works great over a LAN, but as soon as you try to, say, telecommute to your office PC via a VPN, that's not going to fly.
Even if you're working for a company with a fat pipe, many consumers (around here, at least) are on DSL lines that will get them 3 or 4 megabits per second; that might be enough for lossy motion-compensated compression like h.264, but is that enough for whatever Microsoft is planning? You lose a lot of efficiency by throwing away iframes and mocomp.
ABR - Tuesday, October 19, 2010 - link
Yeah, it also makes no sense from an economic perspective. Now you have to buy a farm of GPUs to go with your servers? And the video capability now and soon being built into every Intel CPU just goes for nothing? More great ideas from Microsoft.