
Remove BLAS.vendor() that was deprecated in 1.7 #44379

Merged
merged 4 commits into master from vs/blas-vendor
Mar 3, 2022

Conversation

ViralBShah
Member

@ViralBShah ViralBShah commented Feb 28, 2022

BLAS.vendor() had a deprecation warning in 1.7 and 1.8. I think it should be ok to remove in 1.9.

@ViralBShah ViralBShah added the linear algebra Linear algebra label Feb 28, 2022
@ViralBShah
Member Author

Pinging @chriselrod @ChrisRackauckas @YingboMa @DilumAluthge @dmbates @kshyatt Since it appears that your packages may be using BLAS.vendor() and may need replacing with BLAS.get_config().

@fredrikekre
Member

It seems like it doesn't cost anything to keep the function and the warning around, does it?

@DilumAluthge
Member

@ViralBShah Do you have a list of the packages that currently use this function?

@giordano
Contributor

https://juliahub.com/ui/Search?q=BLAS.vendor%28%29&type=code

@ViralBShah
Member Author

ViralBShah commented Feb 28, 2022

The function is hard-coded to provide the answer as openblas, which is wrong when we load MKL.jl or Apple Accelerate. It is thus best to avoid these wrong answers.

This is the list of affected packages, as @staticfloat reported:
https://juliahub.com/ui/Search?q=BLAS.vendor%28%29&type=code

Note that in all these cases, even if you load another BLAS, it will report openblas as the vendor. This was meant to be for the use case when we built other BLAS libraries into the system image, and we no longer do that.
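For packages doing the migration described here, a minimal sketch of the replacement (assuming Julia ≥ 1.7, where `BLAS.get_config()` exists) might look like:

```julia
using LinearAlgebra

# Sketch: list the BLAS libraries actually loaded through LBT, instead of
# trusting the hard-coded answer from the removed BLAS.vendor().
for lib in BLAS.get_config().loaded_libs
    println(basename(lib.libname), " (", lib.interface, ")")
end
```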

@DilumAluthge
Member

FWIW the ping list seems incomplete, as @tkf should have been pinged for BenchmarkCI.jl.

@ViralBShah
Member Author

I feel that most of the package ecosystem that needs this will be able to quickly update to BLAS.get_config(), and if necessary, when we go towards feature freeze, we can add back the deprecation warning and the empty function. Does that sound like a good plan?

@ViralBShah
Member Author

Apologies - the ping list was built by eyeballing package names whose authors I knew off the top of my head.

@DilumAluthge
Member

Just from a quick eyeball of the list, many uses seem to be just printing the output of BLAS.vendor() for logging purposes. Those should be easy to migrate.

Here's one interesting use that will be a little more work: https://github.com/tkf/BenchmarkCI.jl/blob/5191667898565b80baf36e72503ded0f178413a2/src/runtimeinfo.jl#L50

"""
    blas_num_threads() :: Union{Int, Nothing}
Get the number of threads BLAS is using.
Taken from:
https://github.com/JuliaLang/julia/blob/v1.3.0/stdlib/Distributed/test/distributed_exec.jl#L999-L1019
See also: https://stackoverflow.com/a/37516335
"""
function blas_num_threads()
    blas = LinearAlgebra.BLAS.vendor()
    # Wrap in a try to catch unsupported blas versions
    try
        if blas == :openblas
            return ccall((:openblas_get_num_threads, Base.libblas_name), Cint, ())
        elseif blas == :openblas64
            return ccall((:openblas_get_num_threads64_, Base.libblas_name), Cint, ())
        elseif blas == :mkl
            return ccall((:MKL_Get_Max_Num_Threads, Base.libblas_name), Cint, ())
        end

        # OSX BLAS looks at an environment variable
        if Sys.isapple()
            return tryparse(Cint, get(ENV, "VECLIB_MAXIMUM_THREADS", "1"))
        end
    catch
    end

    return nothing
end

Can this be achieved by just doing LinearAlgebra.BLAS.get_num_threads()? Or is it more complicated than that?
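If `BLAS.get_num_threads()` (available since Julia 1.6) covers the loaded backend, the per-vendor `ccall` dance above could plausibly collapse to a one-liner; this is a sketch, not a tested replacement:

```julia
using LinearAlgebra

# Sketch: let LinearAlgebra dispatch to the loaded backend rather than
# branching on the (removed) vendor symbol.
blas_num_threads() = LinearAlgebra.BLAS.get_num_threads()
```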

@tkf

@ChrisRackauckas
Member

SciML usage is already version specific:

https://github.com/SciML/LinearSolve.jl/blob/main/src/init.jl#L4-L9

    @static if VERSION < v"1.7beta"
      blas = BLAS.vendor()
      IS_OPENBLAS[] = blas == :openblas64 || blas == :openblas
    else
      IS_OPENBLAS[] = occursin("openblas", BLAS.get_config().loaded_libs[1].libname)
    end

Other packages can copy this workaround and we can move on without a major issue. It's mostly the SciML squad anyways. There's probably a few spots that haven't done this yet, but oh well rip the bandaid off and we'll find it.

@fredrikekre
Member

The function is hard-coded to provide the answer as openblas, which is wrong when we load MKL.jl or Apple Accelerate. It is thus best to avoid these wrong answers.

Yes, and this is why there is a deprecation warning. Isn't a result better than an error? If the result of the function causes problems downstream, that will be noticed anyway. Even if new code can be updated, it causes unnecessary churn and everyone needs to do new releases, etc.

@palday
Contributor

palday commented Feb 28, 2022

We can update MixedModels to do something like the suggestion above, but it feels a little weird to drop a function that was deprecated after LTS before the next LTS is blessed.

@palday
Contributor

palday commented Feb 28, 2022

And looking again, we're already doing that:

https://github.com/JuliaStats/MixedModels.jl/blob/12a60bafb2dab37def329c65e9544c676ce6f41c/test/runtests.jl#L7-L14

@static if VERSION ≥ v"1.7.0-DEV.620"
    @show getproperty.(BLAS.get_config().loaded_libs, :libname)
else
    @show BLAS.vendor()
    if startswith(string(BLAS.vendor()), "openblas")
        println(BLAS.openblas_get_config())
    end
end

@YingboMa
Contributor

The SciML workaround exists because checking the vendor at runtime is very expensive. The drawback here is that users must change IS_OPENBLAS[] manually after switching the BLAS library if LinearSolve.jl is already loaded.

@ChrisRackauckas
Member

Yeah, it was like 3 orders of magnitude slower than what we had before, so we had to resort to fixing it at using time.

@staticfloat
Member

I should note that, in the event that you actually deeply care about the library that you're using, it's not enough to just scan the list of loaded libraries, you need to either ensure that there is only one library loaded, or inspect things symbol by symbol. using MKL, by default, clears out OpenBLAS, which is fine, but if someone does something more complicated, you need to use BLAS.lbt_find_backing_library() to ensure that the symbols you care about are actually going where you expect.

I kind of don't expect most people to care about this that much, so I would just ensure that there's only one ILP64 library loaded at once and throw an error otherwise.
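A sketch of the "ensure there's only one ILP64 library loaded" check suggested here, using the fields exposed by `BLAS.get_config()` on Julia ≥ 1.7 (the function name is made up):

```julia
using LinearAlgebra

# Error unless exactly one ILP64 BLAS is loaded; return its info.
function only_ilp64_library()
    libs = filter(lib -> lib.interface == :ilp64,
                  BLAS.get_config().loaded_libs)
    length(libs) == 1 ||
        error("expected exactly one ILP64 BLAS, found $(length(libs))")
    return only(libs)
end
```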

For the curious, you can get the nitty-gritty details via something like the following:

julia> using LinearAlgebra
       BLAS.lbt_find_backing_library("dgemm_", :ilp64)
LinearAlgebra.BLAS.LBTLibraryInfo
├ Library: libopenblas64_.so
├ Interface: ilp64
├ Complex return style: normal
├ F2C: plain
└ CBLAS: conformant

@YingboMa
Contributor

We need to check if something is using OpenBLAS because it is slow enough that we want to use a Julia implementation for more matrix sizes. If the user is not using OpenBLAS, then we assume it's something better.

@palday
Contributor

palday commented Feb 28, 2022

We just log it for debugging purposes at the start of runtests.jl -- we've occasionally run into things where MKL and OpenBLAS converge to slightly different optima. (And one really bad case where a numerical instability was discovered by swapping BLAS....)

@staticfloat
Member

The SciML workaround exists because checking the vendor at runtime is very expensive.

Checking takes ~121μs on my machine:

julia> using LinearAlgebra, BenchmarkTools
       @btime BLAS.lbt_get_config().loaded_libs
  121.468 μs (4960 allocations: 275.59 KiB)
1-element Vector{LinearAlgebra.BLAS.LBTLibraryInfo}:
 LBTLibraryInfo(libopenblas64_.so, ilp64)

I agree, 5k allocations is a bit much though. This happens because there's a lot of extra information that gets pulled out every time. I've got a PR (#44383) to cache things until someone calls one of the lbt_set_* functions that would alter the config, which speeds things up nicely:

julia> using LinearAlgebra, BenchmarkTools
       @btime BLAS.lbt_get_config().loaded_libs
  2.855 ns (0 allocations: 0 bytes)
1-element Vector{LinearAlgebra.BLAS.LBTLibraryInfo}:
 LBTLibraryInfo(libopenblas64_.so, ilp64)

@ViralBShah
Member Author

ViralBShah commented Mar 1, 2022

Note that this function is not exported and not documented. The actual number of packages impacted is low enough that I feel we can safely execute this.

@ViralBShah
Member Author

ViralBShah commented Mar 1, 2022

Yes, and this is why there is a deprecation warning. Isn't a result better than an error? If the result of the function causes problems downstream that will be noticed anyway. Even if new code can be updated, it causes unecessary churn and everyone needs to do new releases etc etc.

The result will be incorrect when using MKL.jl and non-default BLAS, and is strictly worse than an error. If we choose to keep it, my preference would be to generate an exception.
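If the function were kept as an erroring stub, as preferred here, it might look something like this (a hypothetical sketch, not the merged code):

```julia
# Hypothetical erroring stub for the deprecated function:
function vendor()
    error("`BLAS.vendor()` no longer reports a meaningful vendor; " *
          "use `BLAS.get_config()` to inspect the loaded libraries")
end
```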

@ChrisRackauckas
Member

121μs is very large in comparison to a small lu factorization. For example:

julia> using LinearAlgebra, BenchmarkTools

julia> A = rand(10,10);

julia> @btime lu(A);
  5.300 μs (3 allocations: 1.05 KiB)

so checking if it's OpenBLAS would give a 24x performance regression! We want to check if it's OpenBLAS because even that is slow, so we want to move to RecursiveFactorization.jl in that case. But MKL is good enough that we want to just use that if the user is using MKL. And 5 μs to determine the right algorithm would be an insane cost, hence we cache it. 3 ns is fine though 😅.

@ViralBShah
Member Author

The other possibility is that we know the different kinds of BLAS supported in LBT. So we just respond with whichever ILP64 BLAS is loaded by looking at the output of get_config().
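This idea of deriving a vendor symbol from `get_config()` could be sketched as follows; the mapping from library filenames to vendor symbols is an illustrative guess, not an agreed-upon API:

```julia
using LinearAlgebra

# Sketch: map the loaded ILP64 library's filename to a vendor symbol.
function vendor_from_config()
    for lib in BLAS.get_config().loaded_libs
        lib.interface == :ilp64 || continue
        name = lowercase(basename(lib.libname))
        occursin("openblas", name) && return :openblas
        occursin("mkl", name)      && return :mkl
        occursin("blis", name)     && return :blis
    end
    return :unknown
end
```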

@chriselrod
Contributor

chriselrod commented Mar 1, 2022

FWIW:

julia> A = rand(100,100);

julia> using LinearAlgebra

julia> @benchmark lu($A)
BenchmarkTools.Trial: 3913 samples with 1 evaluation.
 Range (min … max):  526.514 μs … 83.364 ms  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     837.870 μs              ┊ GC (median):    0.00%
 Time  (mean ± σ):     1.272 ms ±  5.036 ms  ┊ GC (mean ± σ):  0.09% ± 1.89%

  ▁ ▄█
  █▃██▃▃▂▂▂▃▃▂▂▂▂▂▂▂▁▂▂▁▂▂▂▂▁▁▁▁▂▁▂▂▁▁▁▂▂▁▂▁▁▂▁▁▂▁▁▁▁▁▂▁▁▂▁▁▁▂ ▂
  527 μs          Histogram: frequency by time         6.69 ms <

 Memory estimate: 79.05 KiB, allocs estimate: 3.

julia> using MKL

julia> @benchmark lu($A)
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (min … max):  44.580 μs …  1.654 ms  ┊ GC (min … max): 0.00% … 93.10%
 Time  (median):     45.746 μs              ┊ GC (median):    0.00%
 Time  (mean ± σ):   51.352 μs ± 44.320 μs  ┊ GC (mean ± σ):  2.87% ±  3.56%

  ▄█▇▃▁  ▁▂▂▁                                  ▁▂▂▃▂▂▂  ▁▁▁▁  ▂
  ██████▇████▇▇▆▃▄▄▁▄▃▁▁▁▁▃▁▁▁▁▁▁▃▅▄▃▁▃▁▁▃▁▁▁▅███████████████ █
  44.6 μs      Histogram: log(frequency) by time      78.2 μs <

 Memory estimate: 79.05 KiB, allocs estimate: 3.

julia> using RecursiveFactorization

julia> @benchmark RecursiveFactorization.lu($A)
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (min … max):   41.652 μs … 37.723 ms  ┊ GC (min … max): 0.00% … 1.94%
 Time  (median):      44.728 μs              ┊ GC (median):    0.00%
 Time  (mean ± σ):   110.351 μs ±  1.551 ms  ┊ GC (mean ± σ):  1.13% ± 0.08%

   ▃▅▇██▇▆▄▃▃▂▂                                    ▁▁▁▁▁       ▂
  ██████████████▇▅▁▁▁▄▃▃▃▁▁▄▅▃▁▁▃▃▁▁▁▃▁▁▁▁▁▁▁▁▁▃▆▆▇███████████ █
  41.7 μs       Histogram: log(frequency) by time      76.2 μs <

 Memory estimate: 79.12 KiB, allocs estimate: 5.

LinearSolve.jl will use RecursiveFactorization at this size regardless of BLAS vendor. Over the size range 101:500 it chooses RecursiveFactorization over OpenBLAS, but not over other libraries. Below that range it's always RF, and above it's always the other library.

If the check were expensive, it'd be best to just always use RF.
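The dispatch heuristic described above might be sketched like this (thresholds taken from the comment; the function name is made up):

```julia
# Pick an LU backend from the matrix size and whether OpenBLAS is loaded.
function lu_backend(n::Integer, is_openblas::Bool)
    n ≤ 100 && return :RecursiveFactorization   # always RF below the range
    n ≥ 501 && return :BLAS                     # always the library above it
    # 101:500 — RF beats OpenBLAS here, but not MKL and friends
    return is_openblas ? :RecursiveFactorization : :BLAS
end
```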

@ViralBShah ViralBShah marked this pull request as draft March 1, 2022 01:25
@tkf
Member

tkf commented Mar 1, 2022

If the check were expensive, it'd be best to just always use RF.

Can't we just cache some important information in some atomic fields somewhere when switching the backend so that lookup is extremely cheap? (Edit: never mind, I just saw #44383 mentioned above)
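The caching idea (implemented for real in #44383) could be sketched in package code like this, with a manual invalidation hook standing in for hooking the `lbt_set_*` functions; names here are illustrative:

```julia
using LinearAlgebra

# Cache the LBT config; callers must invalidate after switching BLAS.
const _CFG = Ref{Union{Nothing,LinearAlgebra.BLAS.LBTConfig}}(nothing)

function cached_config()
    cfg = _CFG[]
    cfg === nothing || return cfg
    return _CFG[] = BLAS.get_config()
end

invalidate_config!() = (_CFG[] = nothing; nothing)
```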

Note that this function is not exported and not documented. The actual number of packages impacted is low enough that I feel we can safely execute this.

I generally support this idea and this can be removed even without a ping to the package authors. I don't mind if this breaks my package if it's my "fault."

(That said, #40882 indicates that we want to "be nice" with people using this function. I think we can also just swap depwarn with error if so. There are private codebases that we can't detect with JuliaHub.)

@ViralBShah ViralBShah added the needs news A NEWS entry is required for this change label Mar 1, 2022
@ViralBShah
Member Author

ViralBShah commented Mar 1, 2022

I've made vendor() throw an error now. I plan to merge this tomorrow.

@ViralBShah
Member Author

ViralBShah commented Mar 2, 2022

As recommended by @fredrikekre in JuliaLinearAlgebra/GenericLinearAlgebra.jl#85 (review), I am updating this PR to now instead do:

  • vendor() returns :lbt
  • libblas_name and liblapack_name are set to "libblastrampoline"

This gives correct answers while maintaining backward compatibility. @staticfloat and I have plans to provide more structured ways of querying for the vendor inside the get_config() API.
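Downstream compatibility code that has to handle the old vendor symbols as well as the new `:lbt` answer could plausibly look like this (a sketch; the function name is made up):

```julia
using LinearAlgebra

# Sketch: treat :lbt as "ask LBT which libraries are loaded"; older
# Julia versions still report :openblas / :openblas64 directly.
function is_openblas(blas::Symbol)
    blas === :lbt &&
        return any(lib -> occursin("openblas", lowercase(lib.libname)),
                   BLAS.get_config().loaded_libs)
    return blas === :openblas || blas === :openblas64
end
```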

@ViralBShah ViralBShah marked this pull request as ready for review March 2, 2022 21:04
@ViralBShah ViralBShah removed the needs news A NEWS entry is required for this change label Mar 3, 2022
@ViralBShah ViralBShah merged commit bf6d9de into master Mar 3, 2022
@ViralBShah ViralBShah deleted the vs/blas-vendor branch March 3, 2022 04:10
@ViralBShah ViralBShah added the backport 1.8 Change should be backported to release-1.8 label Mar 3, 2022
@KristofferC KristofferC mentioned this pull request Mar 3, 2022
@KristofferC KristofferC removed the backport 1.8 Change should be backported to release-1.8 label Mar 4, 2022
@KristofferC
Member

Removing backport label since this causes the following to error on 1.8 (https://build.julialang.org/#/builders/41/builds/3672):

signal (11): Segmentation fault
in expression starting at none:0
unknown function (ip: (nil))

@ViralBShah
Member Author

Note that this and #44360 are a pair (so either both backport or neither do).

It should be possible to figure out this segfault, since these PRs are fairly straightforward. I'll try this out on 1.8 and see if I can resolve it asap.

@ViralBShah
Member Author

Actually, this is happening because #44360 was not backported to 1.8 before this one.
