Intel and NVIDIA spent the latter part of last year pushing their HPC supplemental compute capabilities. For Intel, this meant unveiling a new set of Xeon Phi coprocessors. For NVIDIA, it meant pushing OpenACC support into the GNU Compiler Collection (GCC). For HPC developers, both developments meant 2014 would be a year of tough questions about which platform to choose.

Duncan Poole, president of OpenACC and employee at NVIDIA, said that developers had been asking when an open-source implementation of OpenACC would be made available. Thus, the decision was made to push OpenACC API support into GCC.

Nathan Sidwell, a developer at Mentor Graphics, said that the tool-chain team at his company is now applying its knowledge of GCC development to bring OpenACC to the compiler collection. The work will take some time, he said.

“The development goal is to implement OpenACC 2.0,” said Sidwell. “Obviously, starting from no implementation at all, that’s quite a large project. We’ll be implementing this in stages: there will be cases where we need to implement the most useful functionality first, so people can use it for many OpenACC programs, with the newer features coming later.”
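
For readers who haven’t worked with it, OpenACC expresses offloading through directives attached to ordinary loops rather than through separate device code. The sketch below is only illustrative (the SAXPY routine and its names are invented here, not taken from the GCC work itself), but it is representative of the programs a staged implementation would need to handle; built with any OpenACC-capable compiler it offloads the loop, and where the directives are ignored it simply runs serially.

    #include <stdlib.h>

    /* Illustrative OpenACC-annotated SAXPY (y = a*x + y). The copyin/copy
       clauses describe the data movement; the compiler decides how to map
       the loop iterations onto the accelerator. */
    void saxpy(int n, float a, const float *restrict x, float *restrict y)
    {
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        int n = 1 << 20;
        float *x = malloc(n * sizeof *x);
        float *y = malloc(n * sizeof *y);
        for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy(n, 3.0f, x, y);   /* every y[i] is now 5.0 */

        free(x);
        free(y);
        return 0;
    }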

But not everyone is excited about the potential of compiling OpenACC applications with GCC. James Reinders, director of software products and multicore evangelist at Intel, has long advocated the inclusion of OpenACC-like accelerator support in the longer-established OpenMP HPC API set.

“OpenMP was formed in 1996 and came out with a spec in 1997,” he said. “It started with a pretty simple vision: to bring together a standardization of directives that would allow users and compiler writers to adhere to one method spanning multiple architectures. OpenMP was successful with that.

“Back when we formed OpenMP, it was a collection of a lot of companies that worked well together. I was manager of the Intel Fortran compiler. My bookshelf was stuffed with versions of manuals for other compilers: VAX, Sun, IBM, Cray. The reason I had those is I had to know how each compiler spelled a directive that did something similar to what another compiler’s directive did. Every engineer can argue about the perfect way to do it, but our compiler had switches that would support all those compilers: a switch for Sun, for Cray…

“OpenMP did a fabulous job of bringing that all together,” he continued. “Unfortunately, it’s not a vision I see shared by the pseudo-standard called OpenACC. It’s not a standard in the same sense as the others, which are driven by a collection of companies to suit multiple needs. OpenACC was put out to solve the needs of one company, and it solves that need. It’s a proprietary standard for NVIDIA, supported by people who want to see some things offloaded to a GPU.”

Reinders added that OpenACC was, ostensibly, created as a faster route to solving NVIDIA’s problems. The original idea was to merge those changes back into OpenMP, something that was discussed in November 2011, he said.

That merger hasn’t happened. Michael Wolfe, secretary of OpenACC and an employee of the Portland Group, said that it is still in the works, but that there has been some pushback from the OpenMP side over proposed changes.

“It is true that all the vendor members of OpenACC are also members of OpenMP,” explained Wolfe. “Conceptually, when we started OpenACC, we wanted to eventually reduce this down to a single standard. [The OpenMP] execution model limits the choices they can make, and there was pushback not to change that execution model. We’re hoping OpenACC directives are not as prescriptive as the OpenMP directives are. [That means] more flexibility in mapping the parallelism of the program onto the parallelism of the device.”
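
A rough way to see the distinction Wolfe is drawing, sketched here with an invented vector-add loop rather than anything from either group’s documents: OpenACC’s kernels construct leaves it to the compiler to decide how the loop maps onto the device, while the equivalent OpenMP 4.0 offload directive names each level of that mapping explicitly.

    /* Descriptive style: OpenACC's kernels construct asks the compiler to
       find the parallelism in the region and map it onto gangs, workers
       and vector lanes as it sees fit. */
    void add_acc(int n, const float *restrict a, const float *restrict b,
                 float *restrict c)
    {
        #pragma acc kernels copyin(a[0:n], b[0:n]) copyout(c[0:n])
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }

    /* Prescriptive style: the OpenMP 4.0 directive spells out the offload
       region, the league of teams, how iterations are distributed across
       teams, and the thread-level worksharing inside each team. */
    void add_omp(int n, const float *restrict a, const float *restrict b,
                 float *restrict c)
    {
        #pragma omp target teams distribute parallel for \
                map(to: a[0:n], b[0:n]) map(from: c[0:n])
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }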

Reinders took issue with the idea that OpenMP is refusing to change, however. “All the companies have worked on OpenMP 4.0. All the companies, Cray, CAPS Enterprise, NVIDIA, and different labs, have worked on OpenMP and came together to create what they call OpenMP Target, which is supposed to span a lot of architectures,” he said.

“Those have done quite well, but nothing is ever perfect. OpenMP 4.0 is out there now, and I think it’s very unfortunate to see some of the commentary that’s questioning OpenMP’s commitment to standardization. It’s not perfect; no standard is. I know the committee on accelerators for OpenMP is continuing to look at and evaluate things.”
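
For readers who haven’t seen what that accelerator support looks like, here is a minimal sketch assuming a compiler with OpenMP 4.0 offload support (the arrays, sizes and helper function are invented for illustration): a target data region keeps data resident on the device across two offloaded loops, and declare target makes a helper callable there. If no accelerator is available, the same directives execute on the host.

    #include <stdio.h>

    #define N 4096

    /* Make the helper compilable for the device as well as the host. */
    #pragma omp declare target
    static float scale(float v) { return 2.0f * v; }
    #pragma omp end declare target

    int main(void)
    {
        static float a[N], b[N];
        for (int i = 0; i < N; i++)
            a[i] = (float)i;

        /* Keep a and b resident on the device across both offloaded loops:
           a is copied in once, b is copied back once at the closing brace. */
        #pragma omp target data map(to: a) map(from: b)
        {
            #pragma omp target teams distribute parallel for
            for (int i = 0; i < N; i++)
                b[i] = scale(a[i]);

            #pragma omp target teams distribute parallel for
            for (int i = 0; i < N; i++)
                b[i] += 1.0f;
        }

        printf("b[10] = %.1f\n", b[10]);   /* prints 21.0 */
        return 0;
    }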

No matter who’s right, it’s a sure bet that 2014 will see interest in HPC and supplemental compute continue to grow, whichever standard developers end up choosing.