
The GNU C++ compiler produces efficient code for multiple platforms. The Intel compiler is specialized for Intel processors and produces even more efficient, but Intel-specific, code. Yet, as some of us know, switching from the GNU C++ compiler to the Intel compiler rarely buys more than a 10-20% improvement.
(The Intel compiler is free for non-commercial use.)

There is at least one exception: programs that rely heavily on a math library. It is not surprising that users of the Intel math library often enjoy an almost two-fold speedup over the GNU C++ library when they explicitly call its highly vectorized linear-algebra and statistical functions. What is really amazing is that ordinary mathematical functions such as exp, cos, and sin can be computed 5-10 times faster by the Intel math library, apparently without sacrificing accuracy.

Intel claims a one-to-two order-of-magnitude speedup for all standard math functions. To verify this, I wrote simple benchmarks. They run on a modern Core i7 (3.4 GHz) processor in a single thread. The code (available from GitHub) generates random numbers that are used as arguments for various math functions; the intent is to feed the functions plausible argument values. Intel also reports performance results for “working” argument intervals and admits that it can be quite expensive to compute functions (e.g., trigonometric ones) accurately for large argument values.
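
The full benchmark is in the GitHub repository; the following is only a minimal sketch of the approach, with an illustrative function (the single-precision exponential), buffer size, and argument range rather than the original settings: pre-generate random arguments and measure how many calls per second a tight loop sustains.

  #include <chrono>
  #include <cmath>
  #include <cstdio>
  #include <random>
  #include <vector>

  int main() {
    std::mt19937 gen(20130517);
    std::uniform_real_distribution<float> dist(-10.0f, 10.0f);  // "plausible" working range
    std::vector<float> args(1u << 20);
    for (float& a : args) a = dist(gen);

    volatile float sink = 0;  // prevents the compiler from removing the loop
    auto start = std::chrono::steady_clock::now();
    for (int rep = 0; rep < 100; ++rep)
      for (float a : args) sink = sink + std::exp(a);  // float overload of exp
    double sec = std::chrono::duration<double>(
                     std::chrono::steady_clock::now() - start).count();

    std::printf("%.1f million exp calls per second\n",
                100.0 * args.size() / sec / 1e6);
    return 0;
  }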

For single-precision numbers (i.e., floats), the GNU library is capable of computing only 30-100 million mathematical functions per second, while the Intel math library completes 400-1500 million operations per second. For instance, it can do 600 million exponentiations or 400 million computations of the sine function per second (with single-precision arguments). This is slower than Intel claims, but still an order of magnitude faster than the GNU library.

Are there any accuracy tradeoffs? The Intel library can work in a high-accuracy mode, in which, as Intel claims, the functions have an error of at most 1 ULP (unit in the last place). Roughly speaking, the computed value may diverge from the exact one only in the last digit of the mantissa. For the GNU math library, the errors are known to be as high as 2 ULP (e.g., for the cosine function with a double-precision argument). In the lowest-accuracy mode, an additional order-of-magnitude speedup is possible. It appears that the Intel math library should be superior to the GNU library in all respects. Note, however, that I did not verify Intel's accuracy claims myself, and I would appreciate any links on this topic. In my experiments, to ensure that the library works in the high-accuracy mode, I make a special call:

  vmlSetMode(VML_HA);
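
For context, here is a minimal sketch of how this call fits into a program that uses the vectorized VML interface; it assumes an Intel MKL installation (the exact header and link flags depend on the setup), and the array size and input values are arbitrary:

  #include <mkl.h>      // MKL umbrella header; brings in the VML declarations
  #include <vector>

  int main() {
    vmlSetMode(VML_HA);  // request the high-accuracy mode before any VML calls

    std::vector<float> x(1000000, 0.5f), y(x.size());
    // vsExp is VML's vectorized single-precision exponential: y[i] = exp(x[i])
    vsExp(static_cast<MKL_INT>(x.size()), x.data(), y.data());
    return 0;
  }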

I came across the problem of math-library efficiency because I needed to perform many exponentiations to integer powers. There is a well-known approach called exponentiation by squaring, and I hoped that the GNU math library implemented it efficiently. For example, to raise x to the power of 5, you first compute the square of x using one multiplication, then use another multiplication to compute x⁴ (by squaring x²). Finally, you multiply x⁴ by x and return the result. The total number of multiplications is three, whereas the naive algorithm would need four.

The function pow is overloaded, which means that there are several versions serving arguments of different types. I wanted to ensure that the correct, i.e., efficient, version was called. Therefore, I told the compiler that the second argument was an unsigned (i.e., non-negative) integer:

  y = pow(x, (unsigned)5);

Big mistake! For a reason that I cannot fully understand, this makes both compilers (Intel and GNU) choose an inefficient algorithm. As my benchmark shows (module testpow), it takes a modern Core i7 (3.4 GHz) processor almost 200 CPU cycles to compute x⁵. This is ridiculously slow if you take into account that one multiplication can be done in one cycle (or less, if we use SSE).

So, the following handcrafted implementation outperforms the standard pow by an order of magnitude (if the second argument is explicitly cast to unsigned):

  // Exponentiation by squaring: process the exponent bit by bit,
  // squaring the base at each step.
  float PowOptimPosExp0(float Base, unsigned Exp) {
    if (!Exp) return 1;

    float res = Base;
    --Exp;

    while (Exp) {
      if (Exp & 1) res *= Base;  // the current bit is set: multiply it in
      Base *= Base;              // square the base for the next bit
      Exp >>= 1;
    }

    return res;
  }
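
To get a sense of the gap, one can time both variants on the same random arguments. The sketch below is not the testpow module from the repository; the helper name MillionCallsPerSec, the buffer size, and the argument range are my own illustrative choices, and the code assumes PowOptimPosExp0 from above sits in the same file:

  #include <chrono>
  #include <cmath>
  #include <cstdio>
  #include <random>
  #include <vector>

  template <class F>
  double MillionCallsPerSec(const std::vector<float>& args, F f) {
    volatile float sink = 0;  // keep the computation alive
    auto start = std::chrono::steady_clock::now();
    for (float a : args) sink = sink + f(a);
    double sec = std::chrono::duration<double>(
                     std::chrono::steady_clock::now() - start).count();
    return args.size() / sec / 1e6;
  }

  int main() {
    std::mt19937 gen(1);
    std::uniform_real_distribution<float> dist(1.0f, 2.0f);
    std::vector<float> args(1u << 22);
    for (float& a : args) a = dist(gen);

    std::printf("pow(x, (unsigned)5):    %.0f M calls/sec\n",
                MillionCallsPerSec(args, [](float x) { return (float)pow(x, (unsigned)5); }));
    std::printf("PowOptimPosExp0(x, 5):  %.0f M calls/sec\n",
                MillionCallsPerSec(args, [](float x) { return PowOptimPosExp0(x, 5); }));
    return 0;
  }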

If we remove the explicit cast to unsigned, the code is rather fast even with the GNU math library:

  int IntegerDegree = 5;
  pow(x, IntegerDegree);

Yet, my handcrafted function is still 20-50% faster than the GNU pow.

It turns out that it is also faster than Intel's version. Can we make it even faster? One obvious source of inefficiency is branching. A modern CPU is a gigantic conveyor belt that splits a single operation into a sequence of dozens (if not hundreds) of micro-operations. A mispredicted branch may force the CPU to restart this conveyor, which is costly. In our case, it is beneficial to use only forward branches. Each forward branch corresponds to a single value of the integer exponent and contains the complete code to compute the function value. This code "knows" the exponent value and, thus, no additional branches are needed:

  // One straight-line multiplication sequence per small exponent value:
  // once the exponent is known, no further branches are needed.
  float PowOptimPosExp1(float Base, unsigned Exp) {
    if (Exp == 0) return 1;
    if (Exp == 1) return Base;
    if (Exp == 2) return Base * Base;
    if (Exp == 3) return Base * Base * Base;
    if (Exp == 4) {
      Base *= Base;             // x^2
      return Base * Base;       // x^4
    }

    float res = Base;

    if (Exp == 5) {
      Base *= Base;             // x^2
      return res * Base * Base; // x * x^2 * x^2 = x^5
    }

    if (Exp == 6) {
      Base *= Base;             // x^2
      res = Base;
      Base *= Base;             // x^4
      return res * Base;        // x^2 * x^4 = x^6
    }

    if (Exp == 7) {
      Base *= Base;             // x^2
      res *= Base;              // x^3
      Base *= Base;             // x^4
      return res * Base;        // x^3 * x^4 = x^7
    }

    if (Exp == 8) {
      Base *= Base;             // x^2
      Base *= Base;             // x^4
      Base *= Base;             // x^8
      return Base;
    }

    // Fall back to the loop-based version for larger exponents.
    return PowOptimPosExp0(Base, Exp);
  }
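
As a quick sanity check (my own, not part of the original code), both handcrafted versions can be compared against the standard pow for small exponents:

  #include <cassert>
  #include <cmath>
  #include <cstdio>

  // Assumes PowOptimPosExp0 and PowOptimPosExp1 from above are in the same file.
  int main() {
    const float x = 1.7f;
    for (unsigned e = 0; e <= 16; ++e) {
      float ref = static_cast<float>(std::pow(static_cast<double>(x), static_cast<int>(e)));
      // allow a small relative error due to single-precision rounding
      assert(std::fabs(PowOptimPosExp0(x, e) - ref) <= 1e-5f * std::fabs(ref));
      assert(std::fabs(PowOptimPosExp1(x, e) - ref) <= 1e-5f * std::fabs(ref));
    }
    std::printf("PowOptimPosExp0/1 agree with pow for exponents 0..16\n");
    return 0;
  }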

As a result, for the degrees 2-16, the Intel library performs 150-250 million exponentiations per second, while the customized version is capable of 600-1200 million exponentiations per second.

Acknowledgements: I thank Nathan Kurz and Daniel Lemire for the discussion and valuable links; Anna Belova for editing the entry.