OpenMP 4.0 has made its way into the Intel compiler set. The company today announced the release of the Intel Parallel Studio XE 2015 family, which includes three different bundles of tools for developers working in C, C++ and Fortran. With the introduction of support for OpenMP 4.0 comes the ability to apply explicit vector programming to projects compiled with Intel’s tool chain.

James Reinders, chief evangelist for Intel, said that the new vectorization support gives developers access to the OpenMP SIMD (single instruction, multiple data) directives. “OpenMP 4.0 gives us some standard ways to give the compiler the extra hint that’s needed to vectorize loops,” he said.
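By way of illustration (the loop and function names below are ours, not Intel’s), the hint Reinders describes is a single directive placed above the loop. In C it looks like this:

```c
#include <stddef.h>

/* Illustrative sketch: the OpenMP 4.0 simd directive asks the
   compiler to vectorize this loop. A compiler without OpenMP 4.0
   support treats the pragma as unknown and compiles the loop
   serially, so the source still builds everywhere. */
void saxpy(float a, const float *x, float *y, size_t n)
{
    #pragma omp simd
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```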

(Related: Here comes OpenMP)

Thus, with Intel’s 2015 update of its compilers, developers can add two lines of code around the portions of their application that they need to vectorize. The new compiler will then vectorize that code, while other compilers simply ignore the directives, leaving the code compatible with other tool chains.
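The article does not spell out which two lines are meant; one plausible reading, sketched below with our own function names, pairs a declare simd directive on a function with a simd directive on the loop that calls it:

```c
/* Hypothetical sketch of the "two lines": "declare simd" asks the
   compiler to generate a vector variant of the function, and "simd"
   vectorizes the calling loop. Other compilers ignore both pragmas
   and build the same code as an ordinary scalar loop. */
#pragma omp declare simd
float scale(float x, float factor)
{
    return x * factor;
}

void scale_all(float *data, int n, float factor)
{
    #pragma omp simd
    for (int i = 0; i < n; i++)
        data[i] = scale(data[i], factor);
}
```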

The OpenMP 4.0 standard also introduced a way to offload processing tasks to attached hardware, most notably GPUs and Intel’s own Xeon Phi coprocessor. Intel has added this offloading support to Intel Parallel Studio XE 2015, but offloading to GPUs is not yet fully implemented, so for now the feature is limited to Intel’s own hardware.
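A minimal sketch of what such an offload looks like in OpenMP 4.0 terms (the function and array names are ours; with this release the device would be a Xeon Phi coprocessor rather than a GPU):

```c
/* Illustrative sketch: the target directive moves execution of the
   enclosed region to an attached device, and the map clauses say
   which arrays are copied to the device and which results come back. */
void offload_add(const float *a, const float *b, float *c, int n)
{
    #pragma omp target map(to: a[0:n], b[0:n]) map(from: c[0:n])
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}
```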

Intel’s performance data analysis platform, VTune, was updated to support Mac OS X in this release. This allows users to view remotely collected performance data on their Mac desktops, a frequently requested feature, said Reinders.

But perhaps the most significant change to Intel’s suite of tools is one that has been secretly in the works for a number of years. Reinders said that Intel Parallel Studio XE 2015 has revamped the way compiler reports are handled, yielding a much more comprehensible output. “We’ve consolidated our reports under a common framework. If you’re trying to understand what the compiler has done to your code, the new reports are worth a look,” he said.

“We have a new optimization report. We’re really proud of this, and we think it will make it more approachable. I think optimization reports are amazingly more tractable. I’ve never seen anything as clear and comprehensive as what we have now; it was really done based on a whole lot of user feedback. People are squeezing for performance and are willing to read what the compiler puts out, but they wanted help reading that, as it was indecipherable.”
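For readers who want to try the new reports, the sketch below shows the sort of compile line involved; the flag spellings reflect our understanding of the 15.0 compiler’s interface and are not taken from Intel’s announcement:

```c
/* Assumed usage: compiling with something along the lines of
 *     icc -qopt-report=2 -qopt-report-phase=vec -c scale.c
 * asks the 15.0 compiler to write a per-file optimization report
 * (scale.optrpt) describing, loop by loop, whether vectorization
 * happened and why or why not. */
void scale_in_place(float *data, int n, float factor)
{
    /* A simple candidate loop the report would describe. */
    for (int i = 0; i < n; i++)
        data[i] *= factor;
}
```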

Reinders and the Intel team are already working on next year’s releases, and one thing they’re starting to consider is the next version of C++. The company’s view on the subject, he said, is that the language needs to add more parallelization features in order to meet the OpenMP standard, and that is where things get gray.

“If you look at OpenMP, it’s very directive-oriented,” said Reinders. “C++ is about keywords and other ways of looking at it. There’s an argument that directives are a little confusing in that sort of language. I’d like to see some of these things have a really native C++ flair to them. That’s something we’re discussing. There are things in our compiler to do the SIMD with a more programmatic API instead of the original syntax. We have extensions there to help customers explore vectorization and see if that’s of interest to them. This is all in our discussions with the C++ committee.”