Nvidia has hinted that it has a project in the works that will enable Nvidia's CUDA technology on AMD GPUs.
At a recent roundtable event, Nvidia's chief scientist Bill Dally addressed the question of why a developer would write an app using C with CUDA extensions if it wouldn't run on AMD GPUs.
"In the future you'll be able to run C with CUDA extensions on a broader range of platforms," stated Dally, "so I don't think that will be a fundamental limitation."
Although Dally didn't name AMD outright, he added: "I'm familiar with some projects that are underway to enable CUDA on other platforms."
Differing GPGPU technologies
While both Nvidia and AMD have announced support for open GPGPU standards such as OpenCL and Microsoft's DirectX Compute, both companies also have their own GPGPU technologies.
Nvidia has CUDA, and AMD has Stream. Unsurprisingly, however, Dally reckons that CUDA is the strongest.
"What I've seen them [AMD] offering so far is using the Brook programming language, which we actually developed at Stanford for programming GPUs," added Dally.
Dally worked at Stanford University before joining Nvidia, and he claims that "we moved on from Brook because it had a large number of limitations."
"I think CUDA is a far more productive programming environment," he insisted.
Stream plus CAL versus CUDA
An enhanced version of Brook called Brook+ is used in AMD's Stream technology, along with AMD's Compute Abstraction Layer (CAL). Even so, Dally still says that it can't match CUDA.
"I think it [Stream] is missing some of the abstractions that are in CUDA," said Dally. "I think people are far more comfortable with CUDA.
"For evidence of this go to CUDA Zone and look at the hundreds of applications that people have ported over to CUDA, and then look at their [AMD's] corresponding website for GPU computing.
"There are just a handful of things, and they're generally things that are already available in CUDA. CUDA has just got a lot more traction among the people who are programming parallel applications. It's an easier language to use."
Dally was also unconcerned about people buying AMD GPUs instead of Nvidia GPUs if they supported CUDA.
"We don't care whether they [GPGPU apps] are restricted to running on our GPUs or on a broader range of platforms," said Dally. "We produce the best GPUs that there are, so given a fair competitive environment, people will choose our GPUs."
AMD 'no comment'
AMD wouldn't confirm or deny the rumour. Spokesperson Sasa Marinkovic said that "AMD does not comment on rumours or speculation", adding: "AMD does support OpenCL, which is an open, cross-platform standard.
"The GPGPU community has been eagerly awaiting an open, industry-wide programming standard and OpenCL provides that standard."
However, Dally says that there are still some benefits to using C with CUDA extensions instead of OpenCL.
"I think most people find that C with CUDA extensions is the most convenient way to write applications," said Dally.
"OpenCL is really a driver interface. It's an API and a set of calls. With a kernel, you basically make an API call with the code for that kernel as a string, and the compilation actually happens in the driver on the fly.
"Being able to write in C for CUDA and running NVCC and pre-compiling your kernel seems to be a more efficient way of operating."
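Dally's distinction can be sketched in code. In OpenCL the kernel travels as a source string and the driver compiles it at runtime, whereas a C for CUDA kernel is an ordinary function that nvcc pre-compiles alongside the host program. The sketch below is illustrative only (error checking omitted; names such as `add`, `src`, `ctx` and `dev` are placeholder identifiers, not from the article):

```cuda
// OpenCL model: the kernel ships as a source string and is compiled
// by the driver on the fly (host-side fragment shown as comments):
//
//   const char *src =
//       "__kernel void add(__global float *a) { "
//       "    a[get_global_id(0)] += 1.0f; }";
//   cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
//   clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);  // runtime compile

// C for CUDA model: the kernel is a normal function, pre-compiled
// to GPU code by nvcc when the program is built.
__global__ void add(float *a)
{
    a[threadIdx.x + blockIdx.x * blockDim.x] += 1.0f;
}

int main()
{
    float *a;
    cudaMalloc(&a, 256 * sizeof(float));
    add<<<1, 256>>>(a);          // launch syntax understood by nvcc
    cudaDeviceSynchronize();
    cudaFree(a);
    return 0;
}
```

The practical difference Dally points to is that the CUDA kernel is type-checked and compiled at build time, while the OpenCL kernel string is only compiled when `clBuildProgram` runs on the user's machine.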
Of course, Nvidia would say that about its own technology, but if AMD also supported C with CUDA extensions on its GPUs, then there would be little reason for developers not to use it.
Plus, if AMD did support CUDA, then CUDA technologies such as GPU-accelerated PhysX would also be available to owners of AMD GPUs.