It is really interesting to see that Numba gets faster than Julia. However, while reproducing the results, I found that some benchmarks in this repo do not actually support the conclusions.
TL;DR: after fixing the benchmark, Julia is >10x faster than Numba for the evaluate_functions case.
Consider the case evaluate_functions: the Numba version uses a parallel for loop. It appears to produce faster programs than the other languages, but the comparison is misleading.
Programs using prange give different results from those using range, and range is what matches the behaviour of the corresponding Julia program.
In [17]: @njit(parallel=True)
    ...: def evaluate_functions(n):
    ...:     """
    ...:     Evaluate the trigonometric functions for n values evenly
    ...:     spaced over the interval [-1500.00, 1500.00]
    ...:     """
    ...:     vector1 = np.linspace(-1500.00, 1500.0, n)
    ...:     iterations = 10000
    ...:     for i in range(iterations):
    ...:         vector2 = np.sin(vector1)
    ...:         vector1 = np.arcsin(vector2)
    ...:         vector2 = np.cos(vector1)
    ...:         vector1 = np.arccos(vector2)
    ...:         vector2 = np.tan(vector1)
    ...:         vector1 = np.arctan(vector2)
    ...:     return vector1
    ...:
In [18]: evaluate_functions(10)
Out[18]:
array([1.46030424, 1.13579218, 0.81128013, 0.48676808, 0.16225603,
0.16225603, 0.48676808, 0.81128013, 1.13579218, 1.46030424])
In [19]: @njit(parallel=True)
    ...: def evaluate_functions(n):
    ...:     """
    ...:     Evaluate the trigonometric functions for n values evenly
    ...:     spaced over the interval [-1500.00, 1500.00]
    ...:     """
    ...:     vector1 = np.linspace(-1500.00, 1500.0, n)
    ...:     iterations = 10000
    ...:     for i in prange(iterations):
    ...:         vector2 = np.sin(vector1)
    ...:         vector1 = np.arcsin(vector2)
    ...:         vector2 = np.cos(vector1)
    ...:         vector1 = np.arccos(vector2)
    ...:         vector2 = np.tan(vector1)
    ...:         vector1 = np.arctan(vector2)
    ...:     return vector1
    ...:
In [20]: evaluate_functions(10)
Out[20]:
array([-1500. , -1166.66666667, -833.33333333, -500. ,
-166.66666667, 166.66666667, 500. , 833.33333333,
1166.66666667, 1500. ])
The Julia behaviour:
function evaluatefunctions(N)
    # x = linspace(-1500.0, 1500.0, N)
    x = collect(range(-1500.0, stop=1500.0, length=N))
    M = 10000
    for i in 1:M
        y = sin.(x)
        x = asin.(y)
        y = cos.(x)
        x = acos.(y)
        y = tan.(x)
        x = atan.(y)
    end
    return x
end
julia> evaluatefunctions(10)
10-element Vector{Float64}:
 1.4603042376686257
 1.135792184853451
 0.8112801320381631
 0.48676807922287524
 0.16225602640761583
 0.16225602640761583
 0.48676807922287524
 0.8112801320381631
 1.135792184853451
 1.4603042376686257
Fixing the incorrect use of prange gives a fair result: