Wrong results for small DTRMV on x86-64 #1332
Is this with 0.2.20 or current git "develop"?
With the current develop (b7cee00). I checked the same on
and the result is the same: for N = 1..16 the difference is in O(eps), for N = 17..32 the difference increases slightly, and for N >= 32 it breaks down completely, as shown above. The behaviour on OpenPower is interesting because the DTRMV routine is involved in some QR decomposition routines, so this may be related to #1191. Changing the leading dimension does not influence the results as long as N <= LDA <= 32. If the number of threads is restricted to 1, everything is correct.
Bisecting shows that this is not a recent failure; the behaviour appears to have been present in the last libGoto already.
This code is indeed unchanged from (at least) GotoBLAS2-1.08, released in 2009.
Because trmv is rarely used and, given its computational density, not the perfect candidate for a threaded version anyway, I would turn off threading for it by setting the thread count to 1. Furthermore, it seems to fix #1191, because in the example I used there, the
Is it fast at many-gigabytes scale too?
The first thing must be to deliver correct results, and only then performance. It helps nobody to have incredibly fast results that are completely wrong.
I would still prefer to understand just what is going wrong with your specific case, given that this bug appears to have gone unnoticed for years (and is not triggered by the compile-time tests).
The fact that the bug is not triggered by the compile-time tests (which come from NETLIB BLAS) is easy to explain: there are (together with the cblas interface) 735 calls to DTRMV, the largest parameter N is 9, and the largest leading dimension is 10, which is the range where my check code also gives correct results. As for the fact that it never got noticed - puuh... from my experience I would say the routine is rarely used and in most cases replaced by TRMM.
Curiouser and curiouser... on my Kaby Lake laptop, repeated runs of your test code give varying patterns of "Correct" and "Wrong" results for all N beyond 16 - even with nthreads clamped to 1 in interface/trmv.c, and after confirming that only level2/trmv_U.c gets called. So there may be more than just one bug here. (The Nehalem target behaves well with the nthreads=1 hack, but fails without it.)
Seems daxpy_microk_haswell-2.c is (subtly?) broken on top of whatever is wrong with level2/trmv_thread.c - using either the generic C implementation or any of the older microkernels in its place brings the Haswell behaviour in line with Nehalem/Sandybridge/Atom.
Btw, increasing N beyond 64 in your test code shows the results for even larger N still creeping away into madness - or is that simply a limitation of your code that I am too dense to see right now?
I only did the example up to N=64 because that was the maximum size I need in my application. But in principle my code should work for larger N as well.
Bad news then... Things apparently go wrong in trmv_U.c when (or shortly after) the problem size exceeds DTB_ENTRIES (which is 64; the last "correct" calculation has N=67).
Unfortunately I am getting nowhere with this at the moment.
I would suggest deactivating the threading for this routine. It seems that nearly nobody uses it for large-scale work where threading becomes important - otherwise the error would have been detected earlier and would not have stayed 10 years inside Goto/OpenBLAS. I checked how often it is used in LAPACK: there are only 12 routines calling it, and in most cases N <= 64, so we gain only a small benefit from a multithreaded version.
My thoughts as well - deactivate multithreading with an appropriate comment in the code for now, and |
Workaround merged now, hopefully the actual background issue (and possible implications for similar coding in at least trsv) will become clear eventually. |
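For illustration only: the merged workaround simply forces single-threaded execution, but the size-based reasoning above (threading cannot pay off for the small N that LAPACK callers use) could be expressed as a guard like the following sketch. The function name and threshold are invented here, not OpenBLAS code.

```c
/* Hypothetical sketch of a size-based threading guard for a Level-2
 * routine. The name and threshold are illustrative only; the actual
 * merged workaround clamps trmv to one thread unconditionally. */
#define TRMV_WORK_THRESHOLD 10000  /* invented cutoff on n*n work */

static int trmv_choose_threads(int n, int max_threads)
{
    /* LAPACK callers mostly use n <= 64, where thread startup cost
     * dwarfs the O(n^2) work, so clamp small problems to one thread. */
    if ((long long)n * n < TRMV_WORK_THRESHOLD)
        return 1;
    return max_threads;
}
```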
(Un)fortunately the bug discussed in #1388 does not appear to have any bearing on the failure of trmv_thread.c, so it seems the workaround will have to stay in place for now. I believe we should also replace daxpy_microk_haswell-2.c with its sandybridge or nehalem counterpart for the next release, even though the problem exposed by the test case here does not show up in the compile-time tests.
I tested the Fortran program from above on a Piledriver core (AMD FX(tm)-4300 Quad-Core Processor). Uncommenting the daxpy microkernel for Piledriver, like
the results are fine.
Interesting - and in addition, the piledriver and steamroller microkernels appear to be almost identical, so the latter will probably also be affected. Could you try including daxpy_microk_bulldozer-2.c there instead of falling back to the plain C routine? I do not have any AMD machines for testing. (Alternatively, try changing the "if n < 640" near the top of daxpy_microk_piledriver-2.c to something like
I tried setting "if n < 0" in the Piledriver microkernel and reverted the preprocessor directives to the initial state, but that didn't work either. Then I tried with daxpy_microk_sandy-2.c, and that worked for me. It is essentially AVX assembly, which my CPU supports - that's why I think it works for me, too.
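For context, the x86-64 daxpy wrapper selects a microkernel per target with a preprocessor include and falls back to a plain C loop when no microkernel defines HAVE_KERNEL_8. The experiments above amount to swapping that include line. This is a simplified sketch of the pattern, not the literal OpenBLAS source:

```c
/* Simplified sketch of the microkernel selection pattern in the
 * x86-64 daxpy wrapper (not the literal OpenBLAS source). Each
 * target includes an assembly microkernel that defines HAVE_KERNEL_8;
 * otherwise the generic C loop below is compiled as the fallback. */
#if defined(PILEDRIVER)
/* #include "daxpy_microk_piledriver-2.c" */  /* the suspect FMA kernel */
#include "daxpy_microk_sandy-2.c"             /* AVX kernel that tested clean */
#endif

#ifndef HAVE_KERNEL_8
static void daxpy_kernel_8(long n, double *x, double *y, double alpha)
{
    for (long i = 0; i < n; i++)
        y[i] += alpha * x[i];   /* generic y <- y + alpha*x */
}
#endif

/* Tiny self-check: n=4, alpha=2, x={1,2,3,4}, y={1,1,1,1}
 * gives y={3,5,7,9}; the sum of the entries is returned. */
static double daxpy_sketch_check(void)
{
    double x[4] = {1.0, 2.0, 3.0, 4.0};
    double y[4] = {1.0, 1.0, 1.0, 1.0};
    daxpy_kernel_8(4, x, y, 2.0);
    return y[0] + y[1] + y[2] + y[3];
}
```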
I think we should test the other precisions, too.
I find nothing wrong with the vfmadd231pd according to all documentation and code samples available on the net.
Dirty workspace?
So I took a short look at the results shown by @quickwritereader, and for this example they seem to be correct: double precision only gives about 16 decimal digits, and the change in the last digit is caused by round-off errors from the differing operation order. Therefore, one should use a relative criterion. @MigMuc
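The point about last-digit differences can be made concrete with a relative-error check. This is a generic sketch with invented names, not code from the test program in this thread:

```c
#include <math.h>

/* Sketch: compare a result against a reference with a relative
 * criterion, so a change in the 16th significant digit (ordinary
 * round-off from a different operation order) is not flagged as a
 * wrong result, while a genuinely wrong magnitude still is. */
static double rel_err(double ref, double val)
{
    double d = fabs(ref - val);
    return fabs(ref) > 0.0 ? d / fabs(ref) : d;
}

static int close_enough(double ref, double val, double tol)
{
    return rel_err(ref, val) <= tol;
}
```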
The DCOPY looks harmless enough, but somehow it manages to skew the calculations, although they were using random numbers from the start. Perhaps the repeated use of netlib DTRMV leads to an accumulation of denormal values in X?
That might be a problem in the above example, but it does not explain the reproducible jump from 10E-10 to 10E6 at N=32.
One would probably need to dump X at N=32 in your above example to see what kind of ill-conditioning occurs at that point that happens to affect the fused multiply-add more than the alternative implementation.
I did some tests using modified versions of the kernels. The new Piledriver daxpy kernel is here (replace .txt with .c):
Sorry, it is not quite clear what this is telling us - I take it you get comparable, though still somewhat lower, performance from a modified DAXPY kernel for piledriver that does not show the problem assumed to exist with the present one? Did you change your mind about the validity of the final DCOPY in the original test case?
@MigMuc can you explain the methodology used for your graph and provide the input data?
There are at least two issues in this thread. The first, as far as I can tell, was the error introduced by the threaded implementation of dtrmv; this was mitigated by forcing it to be single-threaded. The second seemed to be an incorrect implementation of the daxpy microkernel at the assembly level. As already stated by @grisuthedragon, his Fortran program checks absolute rather than relative values.
With this in mind, the commented-out FMA versions of the microkernels (Haswell, Piledriver, etc.), which apparently give slightly different results than the mul+add versions due to different rounding, could be reactivated again. In the case we would like to use the mul+add version,
@brada4 The tests above were done with the daxpy.goto program in the benchmark folder. The Piledriver FMA versions are the ones which are currently commented out. The new versions are the ones given in the file above.
@martin-frbg regarding the validity of the DAXPY: I think you were right when saying that it works as a feedback loop. I still don't fully get the idea, even though @grisuthedragon answered this question. IMHO it would be a good idea to reactivate the optimized microkernels for Haswell, Zen, Piledriver, Steamroller and Excavator cores.
Yes, unless any new evidence of a problem comes up I intend to revert the changes to the daxpy microkernels soon, and probably the ones for trmv threading as well.
@MigMuc, just explain the reason behind using a base-1000 counting system in the picture.
@brada4 I do not get your point - what is it you think is wrong with the labeling of the graph?
It does not start at zero on either axis, and it does not look at small samples - for example, mixing two wave streams in this particular case. That is not really a primary use of BLAS, but a use case anyway.
The benchmark was started with N=128 and stepsize=64. I could have used a stepsize of 16 to account for the condition that the array size must be a multiple of 16 for the microkernel invocation.
Actually, better would be start=127, step=29 - something that exercises the odd cases heavily.
I have now restored the AVX microkernels for DAXPY, but the multithreading problem with TRMV appears to be real (and extends to ZTRMV according to xianyi's ATLAS-derived BLAS-Tester).
Usually I use the POWER8 platform, but today I found a bad error in the x86-64 code for the DTRMV routine.
I want to compute x <- A*x, where
and I get wrong results on a Haswell-based system (16 cores, Xeon CPU E5-2640 v3) using gcc (Ubuntu 5.4.0-6ubuntu1~16.04.5) if OpenBLAS is compiled via
I compared the results with the Netlib implementation and obtained:
The demonstration code is here: https://gist.github.com/grisuthedragon/32d8ad11e1d722414f921509b97d5507
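The full demonstration code is in the gist above; the reference it checks against amounts to a plain column-major upper-triangular multiply. A minimal C sketch of that reference computation (my own illustration, not code from the gist):

```c
/* Naive reference for x <- A*x with A upper triangular, non-unit
 * diagonal, column-major with leading dimension lda - what
 * DTRMV('U','N','N',...) computes. Forward order is safe in place:
 * row i only reads x[j] for j >= i, which are not yet overwritten. */
static void ref_dtrmv_upper(int n, const double *A, int lda, double *x)
{
    for (int i = 0; i < n; i++) {
        double s = 0.0;
        for (int j = i; j < n; j++)
            s += A[i + j * lda] * x[j];
        x[i] = s;
    }
}

/* Tiny self-check on a 2x2 example: A = [1 2; 0 3], x = [1 1]
 * gives x = [3 3]; the result is encoded as x[0] + 10*x[1]. */
static double ref_dtrmv_upper_check(void)
{
    double A[4] = {1.0, 0.0, 2.0, 3.0};  /* column-major storage */
    double x[2] = {1.0, 1.0};
    ref_dtrmv_upper(2, A, 2, x);
    return x[0] + 10.0 * x[1];
}
```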