Consider the following example:
```julia
using SimpleChains
using LinearAlgebra

X = randn(Float32, 8, 8, 1, 10_000);
Y = reduce(hcat, [opnorm(X[:, :, 1, i]) for i in 1:10_000]);

testnet = SimpleChain(
  (static(8), static(8), static(1)),
  SimpleChains.Conv(SimpleChains.relu, (3, 3), 64),
  SimpleChains.Conv(SimpleChains.relu, (3, 3), 64),
  SimpleChains.Conv(SimpleChains.relu, (3, 3), 32),
  Flatten(3),
  TurboDense(SimpleChains.relu, 64),
  TurboDense(SimpleChains.relu, 16),
  TurboDense(identity, 1),
);
testnetloss = SimpleChains.add_loss(testnet, SquaredLoss(Y));
p = SimpleChains.init_params(testnet);
G = SimpleChains.alloc_threaded_grad(testnet);
```
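For what it's worth, a single unthreaded evaluation of the loss may help narrow down whether the problem is specific to the training path. This is a minimal sketch assuming the setup above has already been run; it calls the loss-wrapped chain directly on the data and parameters:

```julia
# Evaluate the loss once with the freshly initialized parameters.
# This runs the same convolutional forward pass, but without the
# threaded gradient/training machinery used by train_unbatched!.
testnetloss(X, p)
```

If this call succeeds, the failure is presumably confined to the gradient or threading code rather than the forward convolution itself.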
When trying to train this network on an Apple M1, I run into
```julia
julia> SimpleChains.train_unbatched!(G, p, testnetloss, X, SimpleChains.ADAM(), 100)
ERROR: MethodError: no method matching __vstore!(::typeof(VectorizationBase.vsum), ::Ptr{SIMDTypes.Bit}, ::Bool, ::Int64, ::Static.False, ::Static.False, ::Static.False, ::Static.StaticInt{16})

Closest candidates are:
  __vstore!(::F, ::Ptr{T}, ::Union{Bool, Float16, Float32, Float64, Int16, Int32, Int64, Int8, UInt16, UInt32, UInt64, UInt8, SIMDTypes.Bit}, ::Union{Int16, Int32, Int64, Int8, UInt16, UInt32, UInt64, UInt8, Static.StaticInt, VectorizationBase.LazyMulAdd{<:Any, <:Any, <:Union{Int16, Int32, Int64, Int8, UInt16, UInt32, UInt64, UInt8, Static.StaticInt}}}, ::A, ::S, ::NT, ::Static.StaticInt{RS}) where {T<:Union{Bool, Float16, Float32, Float64, Int16, Int32, Int64, Int8, UInt16, UInt32, UInt64, UInt8}, F<:Function, A<:Static.StaticBool, S<:Static.StaticBool, NT<:Static.StaticBool, RS}
   @ VectorizationBase ~/.julia/packages/VectorizationBase/e4FnQ/src/llvm_intrin/memory_addr.jl:1770
  __vstore!(::F, ::Ptr{SIMDTypes.Bit}, ::VectorizationBase.AbstractSIMDVector{W, B}, ::Union{VectorizationBase.LazyMulAdd{<:Any, <:Any, <:Union{VectorizationBase.MM{W}, VectorizationBase.Unroll{<:Any, <:Any, <:Any, <:Any, W}, VectorizationBase.Vec{W}}}, VectorizationBase.MM{W}, VectorizationBase.Unroll{<:Any, <:Any, <:Any, <:Any, W}, VectorizationBase.Vec{W}}, ::A, ::S, ::NT, ::Static.StaticInt{RS}) where {W, B<:Union{Bool, SIMDTypes.Bit}, F<:Function, A<:Static.StaticBool, S<:Static.StaticBool, NT<:Static.StaticBool, RS}
   @ VectorizationBase ~/.julia/packages/VectorizationBase/e4FnQ/src/llvm_intrin/memory_addr.jl:1622
  __vstore!(::F, ::Ptr{T}, ::VectorizationBase.AbstractSIMDVector{W}, ::Union{Int16, Int32, Int64, Int8, UInt16, UInt32, UInt64, UInt8, Static.StaticInt, VectorizationBase.LazyMulAdd{<:Any, <:Any, <:Union{Int16, Int32, Int64, Int8, UInt16, UInt32, UInt64, UInt8, Static.StaticInt}}}, ::A, ::S, ::NT, ::Static.StaticInt{RS}) where {T<:Union{Bool, Float16, Float32, Float64, Int16, Int32, Int64, Int8, UInt16, UInt32, UInt64, UInt8}, F<:Function, A<:Static.StaticBool, S<:Static.StaticBool, NT<:Static.StaticBool, RS, W}
   @ VectorizationBase ~/.julia/packages/VectorizationBase/e4FnQ/src/llvm_intrin/memory_addr.jl:1681
  ...

Stacktrace:
  [1] _vstore!
    @ ~/.julia/packages/VectorizationBase/e4FnQ/src/strided_pointers/stridedpointers.jl:198 [inlined]
  [2] macro expansion
    @ ~/.julia/packages/LoopVectorization/DDH6Z/src/reconstruct_loopset.jl:965 [inlined]
  [3] _turbo_!
    @ ~/.julia/packages/LoopVectorization/DDH6Z/src/reconstruct_loopset.jl:965 [inlined]
  [4] convlayer!(∂f::SimpleChains.ForwardDiffElementwise{typeof(relu)}, _∂C::StrideArraysCore.BitPtrArray{Tuple{Static.StaticInt{6}, Static.StaticInt{6}, Static.StaticInt{64}}, (true, false, false), 3, 1, 0, (1, 2, 3), Tuple{Static.StaticInt{1}, Static.StaticInt{8}, Static.StaticInt{48}}, Tuple{Int64, Int64, Int64}}, _C::StrideArraysCore.PtrArray{Tuple{Static.StaticInt{6}, Static.StaticInt{6}, Static.StaticInt{64}}, (true, true, true), Float32, 3, 1, 0, (1, 2, 3), Tuple{Static.StaticInt{4}, Static.StaticInt{24}, Static.StaticInt{144}}, Tuple{Static.StaticInt{0}, Static.StaticInt{0}, Static.StaticInt{0}}}, _A::StrideArraysCore.PtrArray{Tuple{Static.StaticInt{8}, Static.StaticInt{8}, Static.StaticInt{1}}, (true, true, true), Float32, 3, 1, 0, (1, 2, 3), Tuple{Static.StaticInt{4}, Static.StaticInt{32}, Static.StaticInt{256}}, Tuple{Static.StaticInt{0}, Static.StaticInt{0}, Static.StaticInt{0}}}, _K::StrideArraysCore.PtrArray{Tuple{Static.StaticInt{3}, Static.StaticInt{3}, Static.StaticInt{1}, Static.StaticInt{64}}, (true, true, true, true), Float32, 4, 1, 0, (1, 2, 3, 4), Tuple{Static.StaticInt{4}, Static.StaticInt{12}, Static.StaticInt{36}, Static.StaticInt{36}}, NTuple{4, Static.StaticInt{0}}}, _b::StrideArraysCore.PtrArray{Tuple{Static.StaticInt{64}}, (true,), Float32, 1, 1, 0, (1,), Tuple{Static.StaticInt{4}}, Tuple{Static.StaticInt{0}}})
    @ SimpleChains ~/.julia/packages/SimpleChains/fifFm/src/conv.jl:241
[...]
```
Interestingly, I get a different MethodError on x64.