Friday, July 14, 2017

g++ performance regression between c++98 and c++11?

So I've got a little test program (the cfloat.h is here) that I'm using to evaluate different ways of doing complex arithmetic in C++.
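The four timings below are, as the labels suggest, C99 _Complex, std::complex, my own "mycfloat" class, and the cfloat.h type. To give a flavor of what's being timed without making you read the whole source, here's a boiled-down sketch of the sort of class and kernel involved; the real mycfloat and benchcore are in the linked files and differ in the details:

#include <cstddef>

// Illustrative only -- the real mycfloat lives in cfloat.h. It's a
// thin wrapper around two floats with inline operators; with
// -fcx-limited-range the compiler is allowed the same "naive"
// multiply formula for _Complex, so all four variants ought to
// boil down to very similar code.
struct mycfloat {
    float re, im;
    mycfloat(float r = 0.0f, float i = 0.0f) : re(r), im(i) {}
    mycfloat operator*(const mycfloat &o) const {
        return mycfloat(re * o.re - im * o.im,
                        re * o.im + im * o.re);
    }
    mycfloat operator+(const mycfloat &o) const {
        return mycfloat(re + o.re, im + o.im);
    }
};

// A benchcore-style kernel: a complex multiply-add swept across the
// arrays. Again, the real loop body may differ in detail.
static void benchcore(const mycfloat *a, const mycfloat *b,
                      const mycfloat *c, mycfloat *d, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        d[i] = a[i] * b[i] + c[i];
}

When I compile thusly: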

g++ -I./ test.cc -O3 -fcx-limited-range -ggdb3 -o test

I get the following output:

a[0]: (0.911647379398345947, 0.197551369667053223)
d[0]: (0.614946126937866211, 1.948743700981140137)

a[0]: (0.911647379398345947, 0.197551369667053223)
d[0]: (0.614946126937866211, 1.948743700981140137)

a[0]: (0.911647379398345947, 0.197551369667053223)
d[0]: (0.614946126937866211, 1.948743700981140137)

a[0]: (0.911647379398345947, 0.197551369667053223)
d[0]: (0.614946126937866211, 1.948743700981140137)

c99time:  0.028698
stltime:  0.029364
mytime:   0.028931
cftime:   0.029315

When I move to C++11, however:

g++ -std=c++11 -I./ test.cc -O3 -fcx-limited-range -ggdb3 -o test

I get:

a[0]: (0.911647379398345947, 0.197551369667053223)
d[0]: (0.614946126937866211, 1.948743700981140137)

a[0]: (0.911647379398345947, 0.197551369667053223)
d[0]: (0.614946126937866211, 1.948743700981140137)

a[0]: (0.911647379398345947, 0.197551369667053223)
d[0]: (0.614946126937866211, 1.948743700981140137)

a[0]: (0.911647379398345947, 0.197551369667053223)
d[0]: (0.614946126937866211, 1.948743700981140137)

c99time:  0.029283
stltime:  0.030001
mytime:   0.039238
cftime:   0.028929

There's a marked performance regression on the simple "mycfloat" class I wrote. What's more, if you look at the generated assembly for the "benchcore" function (c++98 here and c++11 here), the two versions are identical. Can anyone help me figure out where this performance change is coming from? I'm seeing it with both g++ 4.8.5 (RHEL 7) and g++ 5.4.0 (Ubuntu 16.04).
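One thing I still need to rule out is code placement: byte-identical instructions can still time differently if the hot loop lands at a different address in the two builds and straddles a 16-byte or cache-line boundary, which some x86 cores handle measurably slower. A quick (and admittedly crude) way to eyeball that, sketched here with a stand-in function rather than the real benchcore:

#include <cstdio>
#include <cstddef>

static void benchcore_stub() {}  // stand-in for the real benchcore

int main()
{
    // Print where the function landed; if the two builds place it at
    // different alignments, identical instructions can still run at
    // different speeds.
    std::size_t addr = reinterpret_cast<std::size_t>(&benchcore_stub);
    std::printf("benchcore @ 0x%lx (mod 64 = %lu)\n",
                (unsigned long)addr, (unsigned long)(addr % 64));
    return 0;
}

(objdump -d on the two binaries would show the same thing, along with any differences in the surrounding code and in how benchcore gets inlined at its call site.)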
