
Wrong results for small DTRMV on x86-64 #1332

Closed
grisuthedragon opened this issue Oct 20, 2017 · 68 comments

@grisuthedragon
Contributor

Usually I use the POWER8 platform but today I found a bad error in the x86-64 code for the DTRMV routine.

I want to compute x <- A*x, where

  • A is nonunit upper triangular,
  • N = 1...64,
  • LDA = 64,
  • and INCX=1

and get wrong results on a Haswell-based system (16 cores, Xeon CPU E5-2640 v3) using gcc (Ubuntu 5.4.0-6ubuntu1~16.04.5) if OpenBLAS is compiled via

make USE_OPENMP=1 

I compared the results with the Netlib implementation and obtained:

Correct RESULT: DTRMV(U,N,N,    1, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    2, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    3, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    4, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    5, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    6, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    7, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    8, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    9, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   10, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   11, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   12, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   13, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   14, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   15, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   16, A,   64, X,     1)  MAXERR = .000000000000000D+00
Wrong RESULT  : DTRMV(U,N,N,   17, A,   64, X,     1)  MAXERR = .568434188608080D-13
Wrong RESULT  : DTRMV(U,N,N,   18, A,   64, X,     1)  MAXERR = .113686837721616D-12
Wrong RESULT  : DTRMV(U,N,N,   19, A,   64, X,     1)  MAXERR = .113686837721616D-12
Wrong RESULT  : DTRMV(U,N,N,   20, A,   64, X,     1)  MAXERR = .227373675443232D-12
Wrong RESULT  : DTRMV(U,N,N,   21, A,   64, X,     1)  MAXERR = .227373675443232D-12
Wrong RESULT  : DTRMV(U,N,N,   22, A,   64, X,     1)  MAXERR = .454747350886464D-12
Wrong RESULT  : DTRMV(U,N,N,   23, A,   64, X,     1)  MAXERR = .909494701772928D-12
Wrong RESULT  : DTRMV(U,N,N,   24, A,   64, X,     1)  MAXERR = .909494701772928D-12
Wrong RESULT  : DTRMV(U,N,N,   25, A,   64, X,     1)  MAXERR = .181898940354586D-11
Wrong RESULT  : DTRMV(U,N,N,   26, A,   64, X,     1)  MAXERR = .363797880709171D-11
Wrong RESULT  : DTRMV(U,N,N,   27, A,   64, X,     1)  MAXERR = .727595761418343D-11
Wrong RESULT  : DTRMV(U,N,N,   28, A,   64, X,     1)  MAXERR = .363797880709171D-11
Wrong RESULT  : DTRMV(U,N,N,   29, A,   64, X,     1)  MAXERR = .291038304567337D-10
Wrong RESULT  : DTRMV(U,N,N,   30, A,   64, X,     1)  MAXERR = .582076609134674D-10
Wrong RESULT  : DTRMV(U,N,N,   31, A,   64, X,     1)  MAXERR = .145519152283669D-10
Wrong RESULT  : DTRMV(U,N,N,   32, A,   64, X,     1)  MAXERR = .582076609134674D-10
Wrong RESULT  : DTRMV(U,N,N,   33, A,   64, X,     1)  MAXERR = .596598620663450D+06
Wrong RESULT  : DTRMV(U,N,N,   34, A,   64, X,     1)  MAXERR = .605368209769108D+06
Wrong RESULT  : DTRMV(U,N,N,   35, A,   64, X,     1)  MAXERR = .114251427897636D+07
Wrong RESULT  : DTRMV(U,N,N,   36, A,   64, X,     1)  MAXERR = .176011411015678D+07
Wrong RESULT  : DTRMV(U,N,N,   37, A,   64, X,     1)  MAXERR = .273520848158376D+07
Wrong RESULT  : DTRMV(U,N,N,   38, A,   64, X,     1)  MAXERR = .400535392194670D+07
Wrong RESULT  : DTRMV(U,N,N,   39, A,   64, X,     1)  MAXERR = .583437738276019D+07
Wrong RESULT  : DTRMV(U,N,N,   40, A,   64, X,     1)  MAXERR = .850833665131698D+07
Wrong RESULT  : DTRMV(U,N,N,   41, A,   64, X,     1)  MAXERR = .123493561359394D+08
Wrong RESULT  : DTRMV(U,N,N,   42, A,   64, X,     1)  MAXERR = .173391397036582D+08
Wrong RESULT  : DTRMV(U,N,N,   43, A,   64, X,     1)  MAXERR = .260484477387416D+08
Wrong RESULT  : DTRMV(U,N,N,   44, A,   64, X,     1)  MAXERR = .376086864422978D+08
Wrong RESULT  : DTRMV(U,N,N,   45, A,   64, X,     1)  MAXERR = .562860046618746D+08
Wrong RESULT  : DTRMV(U,N,N,   46, A,   64, X,     1)  MAXERR = .722429815166366D+08
Wrong RESULT  : DTRMV(U,N,N,   47, A,   64, X,     1)  MAXERR = .129301313570969D+09
Wrong RESULT  : DTRMV(U,N,N,   48, A,   64, X,     1)  MAXERR = .193137989170661D+09
Wrong RESULT  : DTRMV(U,N,N,   49, A,   64, X,     1)  MAXERR = .279222204411076D+09
Wrong RESULT  : DTRMV(U,N,N,   50, A,   64, X,     1)  MAXERR = .678662007793433D+09
Wrong RESULT  : DTRMV(U,N,N,   51, A,   64, X,     1)  MAXERR = .121269930403095D+10
Wrong RESULT  : DTRMV(U,N,N,   52, A,   64, X,     1)  MAXERR = .106155723183043D+10
Wrong RESULT  : DTRMV(U,N,N,   53, A,   64, X,     1)  MAXERR = .306408230139373D+10
Wrong RESULT  : DTRMV(U,N,N,   54, A,   64, X,     1)  MAXERR = .470622143023516D+10
Wrong RESULT  : DTRMV(U,N,N,   55, A,   64, X,     1)  MAXERR = .716298126556327D+10
Wrong RESULT  : DTRMV(U,N,N,   56, A,   64, X,     1)  MAXERR = .103315607770989D+11
Wrong RESULT  : DTRMV(U,N,N,   57, A,   64, X,     1)  MAXERR = .837779499609675D+10
Wrong RESULT  : DTRMV(U,N,N,   58, A,   64, X,     1)  MAXERR = .255092826195343D+11
Wrong RESULT  : DTRMV(U,N,N,   59, A,   64, X,     1)  MAXERR = .388799116737581D+11
Wrong RESULT  : DTRMV(U,N,N,   60, A,   64, X,     1)  MAXERR = .612787305842372D+11
Wrong RESULT  : DTRMV(U,N,N,   61, A,   64, X,     1)  MAXERR = .421813369205623D+11
Wrong RESULT  : DTRMV(U,N,N,   62, A,   64, X,     1)  MAXERR = .531064994597937D+11
Wrong RESULT  : DTRMV(U,N,N,   63, A,   64, X,     1)  MAXERR = .218143907859646D+12
Wrong RESULT  : DTRMV(U,N,N,   64, A,   64, X,     1)  MAXERR = .160600130355114D+12

The demonstration code is here: https://gist.github.com/grisuthedragon/32d8ad11e1d722414f921509b97d5507

@martin-frbg
Collaborator

Is this with 0.2.20 or current git "develop"?

@grisuthedragon
Contributor Author

grisuthedragon commented Oct 20, 2017

With the current develop (b7cee00).

I checked the same on

  • a Sandy Bridge Xeon (16 cores, E5-2690)
  • a Westmere Xeon
  • and my usual OpenPOWER 8 system

and the result is the same: for N = 1..16 the difference is O(eps), for N = 17..32 the difference increases slightly, and for N >= 32 it breaks down completely, as shown above. The behaviour on OpenPOWER is interesting because the DTRMV routine is involved in some QR decomposition routines, so this may be related to #1191.

Changing the leading dimension does not influence the results as long as N <= LDA <= 32.

If the number of threads is restricted to 1 everything is correct.

@martin-frbg
Collaborator

Bisecting shows that this is not a recent failure; the behaviour appears to have been present in the last libGoto already.

@martin-frbg
Collaborator

This code is indeed unchanged from (at least) GotoBLAS2-1.08 released in 2009.
The range calculations in trmv_thread.c and related level2 threading helpers already looked a bit suspect in the context of #1089 (where it appeared that the computed values could exceed the actual data size), but I do not really understand the workings of this code. (To be clear, undoing my PR #1262 does not help either.)

@grisuthedragon
Contributor Author

Given that trmv is rarely used and, judging by its computational density, not an ideal candidate for a threaded version, I would turn off threading for it by setting nthreads=1 in interface/trmv.c. I tried this for the example above and for the code where I originally got this error, and it gives correct results without a noteworthy performance loss.

Furthermore, it seems to fix #1191, because the example I used there calls the dtpqrt function, which has dtrmv in its call graph. After setting the number of threads to one for the trmv routine, the results stay correct over 100 independent runs.
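The single-thread workaround amounts to a one-line change in the TRMV interface. A hypothetical sketch of what is meant (the exact contents of interface/trmv.c vary between versions; the SMP guard and num_cpu_avail call are assumptions about the surrounding code):

```c
#ifdef SMP
  /* nthreads = num_cpu_avail(2); */  /* threaded path gives wrong results,
                                         see issue #1332 */
  nthreads = 1;                       /* force the single-threaded kernel */
#endif
```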

@brada4
Contributor

brada4 commented Oct 24, 2017

Is it still fast at multi-gigabyte scale, too?

@grisuthedragon
Contributor Author

The first priority must be to deliver correct results, and only then performance. It helps nobody to have incredibly fast results that are completely wrong.

@martin-frbg
Collaborator

I would still prefer to understand just what is going wrong with your specific case, given that this bug appears to have gone unnoticed for years (and is not triggered by the compile-time tests).

@grisuthedragon
Contributor Author

The fact that the bug does not trigger the compile-time tests (as inherited from NETLIB BLAS) is easy to explain: there are (together with the cblas interface) 735 calls to DTRMV, the largest N is 9, and the largest leading dimension is 10 - well within the range where my check code also gives correct results. As for the fact that it never got noticed: phew... from my experience I would say that it is rarely used and in most cases replaced by TRMM.

@martin-frbg
Collaborator

Curiouser and curiouser... on my Kaby Lake laptop, repeated runs of your test code give varying patterns of "Correct" and "Wrong" results for all N beyond 16 - even with nthreads clamped to 1 in interface/trmv.c, and having confirmed that only level2/trmv_U.c gets called. So there may be more than one bug here. (The Nehalem target behaves well with the nthreads=1 hack, but fails without it.)

@martin-frbg
Collaborator

Seems daxpy_microk_haswell-2.c is (subtly ?) broken on top of whatever is wrong with level2/trmv_thread.c - using either the generic C implementation or any of the older microkernels in its place brings the Haswell behaviour in line with Nehalem/Sandybridge/Atom

@martin-frbg
Collaborator

Btw, increasing N beyond 64 in your test code shows the results for even larger N still creeping away into madness - or is that simply a limitation of your code that I am too dense to see right now?

@grisuthedragon
Contributor Author

I only ran the example up to N=64 because that was the maximum size I need in my application, but in principle my code should work for larger N as well.

@martin-frbg
Collaborator

martin-frbg commented Oct 25, 2017

Bad news then... Things apparently go wrong in trmv_U.c when (or shortly after) the problem size exceeds DTB_ENTRIES (which is 64; the last "correct" calculation has N=67).
Update: a CORE2 build gets the right result (with nthreads forced to 1) up to much higher values of N, probably by virtue of its DTB_ENTRIES count of 256.

@martin-frbg
Collaborator

Unfortunately I am getting nowhere with this at the moment.

@grisuthedragon
Contributor Author

I would suggest deactivating the threading for this routine. It seems that nearly nobody uses it for large-scale work where threading becomes important - otherwise the error would have been detected earlier and would not have stayed inside Goto/OpenBLAS for 10 years.

I checked how often it is used in LAPACK: there are only 12 routines calling it, and in most cases N <= 64, so we would only gain a small benefit from a multithreaded version.

@martin-frbg
Collaborator

My thoughts as well - deactivate multithreading with an appropriate comment in the code for now, and
disable the GEMV unrolling in trmv_U.c and similarly affected functions (trsv, apparently) that is the additional source of errors for N > DTB_ENTRIES. It just would have been nicer to understand and correct the underlying assumptions, and this still leaves doubts about the validity of the Haswell daxpy microkernel.

@martin-frbg
Collaborator

Workaround merged now, hopefully the actual background issue (and possible implications for similar coding in at least trsv) will become clear eventually.

@martin-frbg
Collaborator

(Un)fortunately the bug discussed in #1388 does not appear to have any bearing on the failure of trmv_thread.c, so it seems the workaround will have to stay in place for now. I believe we should also replace the daxpy_microk_haswell-2.c with its sandybridge or nehalem counterpart for the next release, even though the problem exposed by the test case here does not show up in the compile-time tests.

@MigMuc

MigMuc commented Dec 11, 2017

I tested the Fortran program from above on a Piledriver core (AMD FX(tm)-4300 Quad-Core Processor)
and get wrong results, too.

Correct RESULT: DTRMV(U,N,N,    1, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    2, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    3, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    4, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    5, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    6, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    7, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    8, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    9, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   10, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   11, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   12, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   13, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   14, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   15, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   16, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   17, A,   64, X,     1)  MAXERR = .111022302462516D-15
Correct RESULT: DTRMV(U,N,N,   18, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   19, A,   64, X,     1)  MAXERR = .222044604925031D-15
Correct RESULT: DTRMV(U,N,N,   20, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   21, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   22, A,   64, X,     1)  MAXERR = .355271367880050D-14
Correct RESULT: DTRMV(U,N,N,   23, A,   64, X,     1)  MAXERR = .177635683940025D-14
Wrong RESULT  : DTRMV(U,N,N,   24, A,   64, X,     1)  MAXERR = .142108547152020D-13
Correct RESULT: DTRMV(U,N,N,   25, A,   64, X,     1)  MAXERR = .000000000000000D+00
Wrong RESULT  : DTRMV(U,N,N,   26, A,   64, X,     1)  MAXERR = .142108547152020D-13
Correct RESULT: DTRMV(U,N,N,   27, A,   64, X,     1)  MAXERR = .355271367880050D-14
Wrong RESULT  : DTRMV(U,N,N,   28, A,   64, X,     1)  MAXERR = .113686837721616D-12
Wrong RESULT  : DTRMV(U,N,N,   29, A,   64, X,     1)  MAXERR = .142108547152020D-13
Correct RESULT: DTRMV(U,N,N,   30, A,   64, X,     1)  MAXERR = .000000000000000D+00
Wrong RESULT  : DTRMV(U,N,N,   31, A,   64, X,     1)  MAXERR = .284217094304040D-13
Correct RESULT: DTRMV(U,N,N,   32, A,   64, X,     1)  MAXERR = .000000000000000D+00
Wrong RESULT  : DTRMV(U,N,N,   33, A,   64, X,     1)  MAXERR = .284217094304040D-13
Correct RESULT: DTRMV(U,N,N,   34, A,   64, X,     1)  MAXERR = .000000000000000D+00
Wrong RESULT  : DTRMV(U,N,N,   35, A,   64, X,     1)  MAXERR = .568434188608080D-13
Correct RESULT: DTRMV(U,N,N,   36, A,   64, X,     1)  MAXERR = .177635683940025D-14
Correct RESULT: DTRMV(U,N,N,   37, A,   64, X,     1)  MAXERR = .000000000000000D+00
Wrong RESULT  : DTRMV(U,N,N,   38, A,   64, X,     1)  MAXERR = .909494701772928D-12
Correct RESULT: DTRMV(U,N,N,   39, A,   64, X,     1)  MAXERR = .177635683940025D-14
Wrong RESULT  : DTRMV(U,N,N,   40, A,   64, X,     1)  MAXERR = .363797880709171D-11
Wrong RESULT  : DTRMV(U,N,N,   41, A,   64, X,     1)  MAXERR = .363797880709171D-11
Correct RESULT: DTRMV(U,N,N,   42, A,   64, X,     1)  MAXERR = .177635683940025D-14
Wrong RESULT  : DTRMV(U,N,N,   43, A,   64, X,     1)  MAXERR = .145519152283669D-10
Wrong RESULT  : DTRMV(U,N,N,   44, A,   64, X,     1)  MAXERR = .363797880709171D-11
Wrong RESULT  : DTRMV(U,N,N,   45, A,   64, X,     1)  MAXERR = .582076609134674D-10
Wrong RESULT  : DTRMV(U,N,N,   46, A,   64, X,     1)  MAXERR = .465661287307739D-09
Wrong RESULT  : DTRMV(U,N,N,   47, A,   64, X,     1)  MAXERR = .909494701772928D-12
Wrong RESULT  : DTRMV(U,N,N,   48, A,   64, X,     1)  MAXERR = .116415321826935D-09
Wrong RESULT  : DTRMV(U,N,N,   49, A,   64, X,     1)  MAXERR = .465661287307739D-09
Correct RESULT: DTRMV(U,N,N,   50, A,   64, X,     1)  MAXERR = .222044604925031D-15
Wrong RESULT  : DTRMV(U,N,N,   51, A,   64, X,     1)  MAXERR = .465661287307739D-09
Correct RESULT: DTRMV(U,N,N,   52, A,   64, X,     1)  MAXERR = .222044604925031D-15
Correct RESULT: DTRMV(U,N,N,   53, A,   64, X,     1)  MAXERR = .000000000000000D+00
Wrong RESULT  : DTRMV(U,N,N,   54, A,   64, X,     1)  MAXERR = .149011611938477D-07
Correct RESULT: DTRMV(U,N,N,   55, A,   64, X,     1)  MAXERR = .710542735760100D-14
Wrong RESULT  : DTRMV(U,N,N,   56, A,   64, X,     1)  MAXERR = .181898940354586D-11
Wrong RESULT  : DTRMV(U,N,N,   57, A,   64, X,     1)  MAXERR = .149011611938477D-07
Wrong RESULT  : DTRMV(U,N,N,   58, A,   64, X,     1)  MAXERR = .931322574615479D-09
Wrong RESULT  : DTRMV(U,N,N,   59, A,   64, X,     1)  MAXERR = .745058059692383D-08
Wrong RESULT  : DTRMV(U,N,N,   60, A,   64, X,     1)  MAXERR = .298023223876953D-07
Wrong RESULT  : DTRMV(U,N,N,   61, A,   64, X,     1)  MAXERR = .149011611938477D-07
Wrong RESULT  : DTRMV(U,N,N,   62, A,   64, X,     1)  MAXERR = .298023223876953D-07
Wrong RESULT  : DTRMV(U,N,N,   63, A,   64, X,     1)  MAXERR = .745058059692383D-08
Correct RESULT: DTRMV(U,N,N,   64, A,   64, X,     1)  MAXERR = .000000000000000D+00

Commenting out the daxpy microkernel for Piledriver, like

#include "daxpy_microk_nehalem-2.c"
#elif defined(BULLDOZER)
#include "daxpy_microk_bulldozer-2.c"
#elif defined(STEAMROLLER) || defined(EXCAVATOR)
#include "daxpy_microk_steamroller-2.c"
//#elif defined(PILEDRIVER)
//#include "daxpy_microk_piledriver-2.c"
#elif defined(HASWELL) || defined(ZEN)
/*
this appears to be broken, see issue 1332
#include "daxpy_microk_haswell-2.c"
*/
#include "daxpy_microk_sandy-2.c"
#elif defined(SANDYBRIDGE)
#include "daxpy_microk_sandy-2.c"
#endif

the results are fine.

Correct RESULT: DTRMV(U,N,N,    1, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    2, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    3, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    4, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    5, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    6, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    7, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    8, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,    9, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   10, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   11, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   12, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   13, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   14, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   15, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   16, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   17, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   18, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   19, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   20, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   21, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   22, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   23, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   24, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   25, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   26, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   27, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   28, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   29, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   30, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   31, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   32, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   33, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   34, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   35, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   36, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   37, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   38, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   39, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   40, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   41, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   42, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   43, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   44, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   45, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   46, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   47, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   48, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   49, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   50, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   51, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   52, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   53, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   54, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   55, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   56, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   57, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   58, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   59, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   60, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   61, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   62, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   63, A,   64, X,     1)  MAXERR = .000000000000000D+00
Correct RESULT: DTRMV(U,N,N,   64, A,   64, X,     1)  MAXERR = .000000000000000D+00

@martin-frbg
Collaborator

Interesting - and in addition, the piledriver and steamroller microkernels appear to be almost identical, so the latter will probably also be affected. Could you try including daxpy_microk_bulldozer-2.c there instead of falling back to the plain C routine? (I do not have any AMD machines for testing.) Alternatively, changing the "if n < 640" near the top of daxpy_microk_piledriver-2.c to something like
"if n < 0" to make sure that block does not get executed may also tell us something - though the difference between the two functions appears to be limited to a few prefetch instructions.
This really needs looking at by someone more experienced with assembler coding than me - the commonality appears to be the use of the vfmadd231pd instruction, but I have no idea if it is used wrongly here or has any unpleasant side effects in the context of this specific testcase.

@MigMuc

MigMuc commented Dec 12, 2017

I tried setting "if n < 0" in the Piledriver microkernel and reverted the preprocessor directives to the initial state, but that didn't work either. Then I tried with daxpy_microk_sandy-2.c and that worked for me. It is essentially AVX assembly, which is supported on my CPU - that's why I think it works for me, too.

@MigMuc

MigMuc commented Dec 12, 2017

I think we should test the other precisions, too.

@martin-frbg
Collaborator

martin-frbg commented Dec 20, 2017

I find nothing wrong with vfmadd231pd according to all the documentation and code samples available on the net.
And there now appears to be a compiler component to this problem as well: with gcc 7.2, the fused multiply-add works just as well as the separate vmulpd/vaddpd used by the older "working" kernels. However, neither is entirely correct (whereas the gcc 4.8.5-built vmulpd/vaddpd is totally "correct" in the testcase) - and worse, the "N" values for which wrong results are computed vary with every invocation of the test. Not sure what to make of this; I probably need to check with some intermediate release of gcc and/or upgrade the assembler as well.

@brada4
Contributor

brada4 commented Dec 21, 2017

Dirty workspace?

@grisuthedragon
Contributor Author

So I took a short look at the results shown by @quickwritereader, and for this example

(gdb) print x3(j)
$15 = 2.7604278427335776e+21
(gdb) print x4(j)
$16 = 2.7604278427335781e+21

they seem to be correct: double precision only gives about 16 significant decimal digits, and the change in the last digit is caused by the round-off resulting from the differing operation order. Therefore, one should use a MAXERR definition that judges the relative error, not the absolute one. (Blame me - I used the absolute one in the example, but for the application where I detected the error it was enough to show the problem.)

@MigMuc
The DCOPY statement is there because I used the code as a drop-in replacement for DTRMV when I was searching for the error.

@martin-frbg
Collaborator

The DCOPY looks harmless enough, but somehow it manages to skew the calculations although they were using random numbers from the start. Perhaps the repeated use of netlib DTRMV leads to an accumulation of denormal values in X ?

@grisuthedragon
Contributor Author

grisuthedragon commented Jan 9, 2018

Perhaps the repeated use of netlib DTRMV leads to an accumulation of denormal values in X ?

That might be a problem in the above example, but it does not explain the reproducible jump from O(1e-10) to O(1e+6) at N=32.

@martin-frbg
Collaborator

One would probably need to dump X at N=32 in your above example to see what kind of ill-conditioning occurs at that point that happens to affect the fused multiply-add more than the alternative implementation.

@MigMuc

MigMuc commented Feb 11, 2018

I did some tests using modified versions of daxpy_microk_piledriver-2.c: I replaced all the FMA instructions with separate mul and add instructions, to be consistent with the Sandy Bridge reference implementation - which proved to be very slow on the Piledriver core I use. As noted by Agner Fog in http://agner.org/optimize/optimizing_assembly.pdf, section 12.10, decoding of ymm registers is less efficient on Bulldozer and Piledriver cores. The following benchmark gives some insight:
daxpy_test

The new Piledriver daxpy kernel is here (replace .txt with .c):
daxpy_microk_piledriver-2.txt

@martin-frbg
Collaborator

Sorry, it is not quite clear what this is telling us - I take it you get comparable, though still somewhat lower, performance from a modified DAXPY kernel for Piledriver that does not show the problem assumed to exist in the present one? Did you change your mind about the validity of the final DCOPY in the original test case?

@brada4
Contributor

brada4 commented Feb 12, 2018

@MigMuc can you explain the methodology used for your graph and provide the input data?

@MigMuc

MigMuc commented Feb 12, 2018

There are at least two issues in this thread. The first, as far as I can tell, was the error introduced by the threaded implementation of dtrmv; this was mitigated by forcing it to be single-threaded. The second seemed to be an incorrect implementation of the daxpy microkernel at the assembly level. As already stated by @grisuthedragon, his Fortran program checks absolute rather than relative errors:

Because double precision only gives 16 digits in decimal representation and the change in the last digit is caused by the round-off errors from the differing operation order. Therefore, one should use a MAXERROR definition which judges the relative error and not the absolute one.

With this in mind, the currently commented-out FMA versions of the microkernel (Haswell, Piledriver, etc.), which apparently give slightly different results than the mul+add versions due to different rounding, could be reactivated. In case we would rather keep the mul+add approach, I wanted to offer an optimized, though slightly slower, version of the microkernel for Piledriver cores. We cannot reach the throughput achievable with FMA instructions, but I think this solution is better than using the Sandy Bridge version, at least on Piledriver cores.

@brada4 The tests above were done with the daxpy.goto program in the benchmark folder. The Piledriver FMA versions are the ones currently commented out; the new versions are the ones given in the file above.

@MigMuc

MigMuc commented Feb 12, 2018

@martin-frbg regarding the validity of the DCOPY: I think you were right when saying that it works as a feedback loop. I still don't get the idea, even though @grisuthedragon answered this question.

IMHO it would be a good idea to reactivate the optimized microkernels for Haswell, Zen, Piledriver, Steamroller and Excavator cores.

@martin-frbg
Collaborator

Yes, unless any new evidence of a problem comes up I intend to revert the changes to the daxpy microkernels soon, and probably the ones for trmv threading as well.

@brada4
Contributor

brada4 commented Feb 13, 2018

@MigMuc just explain the reason behind using a base-1000 scale in the picture.

@martin-frbg
Collaborator

@brada4 I do not get your point - what do you think is wrong with the labeling of the graph?

@brada4
Contributor

brada4 commented Feb 13, 2018

It does not start at zero on either axis, and it does not look at small sample sizes (for example, mixing two wave streams in this particular case). That is not really the primary use of BLAS, but it is a use case nonetheless.

@MigMuc

MigMuc commented Feb 13, 2018

The benchmark was started with N=128 and a step size of 64. I could have used a step size of 16 to account for the requirement that the array size be a multiple of 16 for the microkernel invocation.
The plot was generated with gnuplot without explicitly setting the axis limits.

@brada4
Contributor

brada4 commented Feb 13, 2018

Actually, it would be better to use start=127 and step=29, something that exercises the odd cases heavily.

@martin-frbg
Collaborator

I have now restored the AVX microkernels for DAXPY, but the multithreading problem with TRMV appears to be real (and extends to ZTRMV, according to xianyi's ATLAS-derived BLAS-Tester).

@martin-frbg
Collaborator

Closing as the original issue in DTRMV (and same in ZTRMV) has been fixed (or rather worked around) in #1382 and #1539, and the temporary doubts about AXPY turned out to be an extreme case of accumulation of rounding differences in fused multiply-add compared to discrete operations.

TiborGY added a commit to TiborGY/OpenBLAS that referenced this issue Jul 7, 2019
* With the Intel compiler on Linux, prefer ifort for the final link step 

icc has known problems with mixed-language builds that ifort can handle just fine. Fixes OpenMathLib#1956

* Rename operands to put lda on the input/output constraint list

* Fix wrong constraints in inline assembly

for OpenMathLib#2009

* Fix inline assembly constraints

rework indices to allow marking argument lda4 as input and output. For OpenMathLib#2009

* Fix inline assembly constraints

rework indices to allow marking argument lda as input and output.

* Fix inline assembly constraints

* Fix inline assembly constraints

* Fix inline assembly constraints in Bulldozer TRSM kernels

rework indices to allow marking i, as and bs as both input and output (marked operand n1 as well for simplicity). For OpenMathLib#2009

* Correct range_n limiting

same bug as seen in OpenMathLib#1388, somehow missed in corresponding PR OpenMathLib#1389

* Allow multithreading TRMV again

revert workaround introduced for issue OpenMathLib#1332 as the actual cause appears to be my incorrect fix from OpenMathLib#1262 (see OpenMathLib#1388)

* Fix error introduced during cleanup

* Reduce list of kernels in the dynamic arch build

to make compilation complete reliably within the 1h limit again

* init

* move fix to right place

* Fix missing -c option in AVX512 test

* Fix AVX512 test always returning false due to missing compiler option

* Make x86_32 imply NO_AVX2, NO_AVX512 in addition to NO_AVX

fixes OpenMathLib#2033

* Keep xcode8.3 for osx BINARY=32 build

as xcode10 deprecated i386

* Make sure that AVX512 is disabled in 32bit builds

for OpenMathLib#2033

* Improve handling of NO_STATIC and NO_SHARED

to avoid surprises from defining either as zero. Fixes OpenMathLib#2035 by addressing some concerns from OpenMathLib#1422

* init

* address warning introed with OpenMathLib#1814 et al

* Restore locking optimizations for OpenMP case

restore another accidentally dropped part of OpenMathLib#1468 that was missed in OpenMathLib#2004 to address performance regression reported in OpenMathLib#1461

* HiSilicon tsv110 CPUs optimization branch

add HiSilicon tsv110 CPUs  optimization branch

* add TARGET support for  HiSilicon tsv110 CPUs

* add TARGET support for HiSilicon tsv110 CPUs

* add TARGET support for HiSilicon tsv110 CPUs

* Fix module definition conflicts between LAPACK and ReLAPACK

for OpenMathLib#2043

* Do not compile in AVX512 check if AVX support is disabled

the xgetbv function depends on NO_AVX being undefined; we could change that too, but that combo is unlikely to work anyway

* ctest.c : add __POWERPC__ for PowerMac

* Fix crash in sgemm SSE/nano kernel on x86_64

Fix bug OpenMathLib#2047.

Signed-off-by: Celelibi <celelibi@gmail.com>

* param.h : enable defines for PPC970 on DarwinOS

fixes:
gemm.c: In function 'sgemm_':
../common_param.h:981:18: error: 'SGEMM_DEFAULT_P' undeclared (first use in this function)
 #define SGEMM_P  SGEMM_DEFAULT_P
                  ^

* common_power.h: force DCBT_ARG 0 on PPC970 Darwin

without this, we see
../kernel/power/gemv_n.S:427:Parameter syntax error
and many more similar entries

that relates to this assembly command
dcbt 8, r24, r18

this change makes the DCBT_ARG = 0
and openblas builds through to completion on PowerMac 970
Tests pass

* Make TARGET=GENERIC compatible with DYNAMIC_ARCH=1

for issue OpenMathLib#2048

* make DYNAMIC_ARCH=1 package work on TSV110.

* make DYNAMIC_ARCH=1 package work on TSV110

* Add Intel Denverton

for OpenMathLib#2048

* Add Intel Denverton

* Change 64-bit detection as explained in OpenMathLib#2056

* Trivial typo fix

as suggested in OpenMathLib#2022

* Disable the AVX512 DGEMM kernel (again)

Due to as yet unresolved errors seen in OpenMathLib#1955 and OpenMathLib#2029

* Use POSIX getenv on Cygwin

The Windows-native GetEnvironmentVariable cannot be relied on, as
Cygwin does not always copy environment variables set through Cygwin
to the Windows environment block, particularly after fork().

* Fix for OpenMathLib#2063: The DllMain used in Cygwin did not run the thread memory
pool cleanup upon THREAD_DETACH which is needed when compiled with
USE_TLS=1.

* Also call CloseHandle on each thread, as well as on the event so as to not leak thread handles.

* AIX asm syntax changes needed for shared object creation

* power9 makefile. dgemm based on power8 kernel with following changes : 32x unrolled 16x4 kernel and 8x4 kernel using (lxv stxv butterfly rank1 update). improvement from 17 to 22-23gflops. dtrmm cases were added into dgemm itself

* Expose CBLAS interfaces for I?MIN and I?MAX

* Build CBLAS interfaces for I?MIN and I?MAX

* Add declarations for ?sum and cblas_?sum

* Add interface for ?sum (derived from ?asum)

* Add ?sum

* Add implementations of ssum/dsum and csum/zsum

as trivial copies of asum/zsasum with the fabs calls replaced by fmov to preserve code structure

* Add ARM implementations of ?sum

(trivial copies of the respective ?asum with the fabs calls removed)

* Add ARM64 implementations of ?sum

as trivial copies of the respective ?asum kernels with the fabs calls removed

* Add ia64 implementation of ?sum

as trivial copy of asum with the fabs calls removed

* Add MIPS implementation of ?sum

as trivial copy of ?asum with the fabs calls removed

* Add MIPS64 implementation of ?sum

as trivial copy of ?asum with the fabs replaced by mov to preserve code structure

* Add POWER implementation of ?sum

as trivial copy of ?asum with the fabs replaced by fmr to preserve code structure

* Add SPARC implementation of ?sum

as trivial copy of ?asum with the fabs replaced by fmov to preserve code structure

* Add x86 implementation of ?sum

as trivial copy of ?asum with the fabs calls removed

* Add x86_64 implementation of ?sum

as trivial copy of ?asum with the fabs calls removed

* Add ZARCH implementation of ?sum

as trivial copies of the respective ?asum kernels with the ABS and vflpsb calls removed

* Detect 32bit environment on 64bit ARM hardware

for OpenMathLib#2056, using same approach as OpenMathLib#2058

* Add cmake defaults for ?sum kernels

* Add ?sum

* Add ?sum definitions for generic kernel

* Add declarations for ?sum

* Add -lm and disable EXPRECISION support on *BSD

fixes OpenMathLib#2075

* Add in runtime CPU detection for POWER.

* snprintf define consolidated to common.h

* Support INTERFACE64=1

* Add support for INTERFACE64 and fix XERBLA calls

1. Replaced all instances of "int" with "blasint"
2. Added string length as "hidden" third parameter in calls to fortran XERBLA

* Correct length of name string in xerbla call

* Avoid out-of-bounds accesses in LAPACK EIG tests

see Reference-LAPACK/lapack#333

* Correct INFO=4 condition

* Disable reallocation of work array in xSYTRF

as it appears to cause memory management problems (seen in the LAPACK tests)

* Disable repeated recursion on Ab_BR in ReLAPACK xGBTRF

due to crashes in LAPACK tests

* sgemm/strmm

* Update Changelog with changes from 0.3.6

* Increment version to 0.3.7.dev

* Increment version to 0.3.7.dev

* Misc. typo fixes

Found via `codespell -q 3 -w -L ith,als,dum,nd,amin,nto,wis,ba -S ./relapack,./kernel,./lapack-netlib`

* Correct argument of CPU_ISSET for glibc <2.5

fixes OpenMathLib#2104

* conflict resolve

* Revert reference/ fixes

* Revert Changelog.txt typos

* Disable the SkyLakeX DGEMMITCOPY kernel as well

as a stopgap measure for numpy/numpy#13401 as mentioned in OpenMathLib#1955

* Disable DGEMMINCOPY as well for now

OpenMathLib#1955

* init

* Fix errors in cpu enumeration with glibc 2.6

for OpenMathLib#2114

* Change two http links to https

Closes OpenMathLib#2109

* remove redundant code OpenMathLib#2113

* Set up CI with Azure Pipelines

[skip ci]

* TST: add native POWER8 to CI

* add native POWER8 testing to
Travis CI matrix with ppc64le
os entry

* Update link to IBM MASS library, update cpu support status

* first try migrating one of the arm builds from travis

* fix tabbing in azure commands

* Update azure-pipelines.yml

take out offending lines (although stolen from the https://github.com/conda-forge/opencv-feedstock azure-pipelines file)

* Update azure-pipelines.yml

* Update azure-pipelines.yml

* Update azure-pipelines.yml

* Update azure-pipelines.yml

* DOC: Add Azure CI status badge

* Add ARMV6 build to azure CI setup (OpenMathLib#2122)

using aytekinar's Alpine image and docker script from the Travis setup

[skip ci]

* TST: Azure manylinux1 & clean-up

* remove some of the steps & comments
from the original Azure yml template

* modify the trigger section to use
develop since OpenBLAS primarily uses
this branch; use the same batching
behavior as downstream projects NumPy/
SciPy

* remove Travis emulated ARMv6 gcc build
because this now happens in Azure

* use documented Ubuntu vmImage name for Azure
and add in a manylinux1 test run to the matrix

[skip appveyor]

* Add NO_AFFINITY to available options on Linux, and set it to ON

to match the gmake default. Fixes second part of OpenMathLib#2114

* Replace ISMIN and ISAMIN kernels on all x86_64 platforms (OpenMathLib#2125)

* Mark iamax_sse.S as unsuitable for MIN due to issue OpenMathLib#2116
* Use iamax.S rather than iamax_sse.S for ISMIN/ISAMIN on all x86_64 as workaround for OpenMathLib#2116

* Move ARMv8 gcc build from Travis to Azure

* Move ARMv8 gcc build from Travis to Azure

* Update .travis.yml

* Test drone CI

* install make

* remove sudo

* Install gcc

* Install perl

* Install gfortran and add a clang job

* gfortran->gcc-gfortran

* Switch to ubuntu and parallel jobs

* apt update

* Fix typo

* update yes

* no need of gcc in clang build

* Add a cmake build as well

* Add cmake builds and print options

* build without lapack on cmake

* parallel build

* See if ubuntu 19.04 fixes the ICE

* Remove qemu armv8 builds

* arm32 build

* Fix typo

* TST: add SkylakeX AVX512 CI test

* adapt the C-level reproducer code for some
recent SkylakeX AVX512 kernel issues, provided
by Isuru Fernando and modified by Martin Kroeker,
for usage in the utest suite

* add an Intel SDE SkylakeX emulation utest run to
the Azure CI matrix; a custom Docker build was required
because Ubuntu image provided by Azure does not support
AVX512VL instructions

* Add option USE_LOCKING for single-threaded build with locking support

for calling from concurrent threads

* Add option USE_LOCKING for single-threaded build with locking support

* Add option USE_LOCKING for SMP-like locking in USE_THREAD=0 builds

* Add option USE_LOCKING but keep default settings intact

* Remove unrelated change

* Do not try ancient PGI hacks with recent versions of that compiler

should fix OpenMathLib#2139

* Build and run utests in any case, they do their own checks for fortran availability

* Add softfp support in min/max kernels

fix for OpenMathLib#1912

* Revert "Add softfp support in min/max kernels"

* Separate implementations of AMAX and IAMAX on arm

As noted in OpenMathLib#1912 and comment on OpenMathLib#1942, the combined implementation happens to "do the right thing" on hardfp, but cannot return both value and index on softfp where they would have to share the return register

* Ensure correct output for DAMAX with softfp

* Use generic kernels for complex (I)AMAX to support softfp

* improved zgemm power9 based on power8

* upload thread safety test folder

* hook up c++ thread safety test (main Makefile)

*  add c++ thread test option to Makefile.rule

* Document NO_AVX512 

for OpenMathLib#2151

* sgemm pipeline improved, zgemm rewritten without inner packs, ABI lxvx v20 fixed with vs52

* Fix detection of AVX512 capable compilers in getarch

21eda8b introduced a check in getarch.c to test if the compiler is capable of
AVX512. This check currently fails, since the used __AVX2__ macro is only
defined if getarch itself was compiled with AVX2/AVX512 support. Make sure this
is the case by building getarch with -march=native on x86_64. It is only
supposed to run on the build host anyway.

* c_check: Unlink correct file

* power9 zgemm ztrmm optimized

* conflict resolve

* Add gfortran workaround for ABI violations in LAPACKE

for OpenMathLib#2154 (see gcc bug 90329)

* Add gfortran workaround for ABI violations

for OpenMathLib#2154 (see gcc bug 90329)

* Add gfortran workaround for potential ABI violation 

for OpenMathLib#2154

* Update fc.cmake

* Remove any inadvertent use of -march=native from DYNAMIC_ARCH builds

from OpenMathLib#2143, -march=native precludes use of more specific options like -march=skylake-avx512 in individual kernels, and defeats the purpose of dynamic arch anyway.

* Avoid unintentional activation of TLS code via USE_TLS=0

fixes OpenMathLib#2149

* Do not force gcc options on non-gcc compilers

fixes compile failure with pgi 18.10 as reported on OpenBLAS-users

* Update Makefile.x86_64

* Zero ecx with a mov instruction

PGI assembler does not like the initialization in the constraints.

* Fix mov syntax

* new sgemm 8x16

* Update dtrmm_kernel_16x4_power8.S

* PGI compiler does not like -march=native

* Fix build on FreeBSD/powerpc64.

Signed-off-by: Piotr Kubaj <pkubaj@anongoth.pl>

* Fix build for PPC970 on FreeBSD pt. 1

FreeBSD needs DCBT_ARG=0 as well.

* Fix build for PPC970 on FreeBSD pt.2

FreeBSD needs those macros too.

* cgemm/ctrmm power9

* Utest needs CBLAS but not necessarily FORTRAN

* Add mingw builds to Appveyor config

* Add getarch flags to disable AVX on x86

(and other small fixes to match Makefile behaviour)

* Make disabling DYNAMIC_ARCH on unsupported systems work

needs to be unset in the cache for the change to have any effect

* Mingw32 needs leading underscore on object names

(also copy BUNDERSCORE settings for FORTRAN from the corresponding Makefile)