Loosen nanosecond resolution restriction (round 2) #23

Merged · 3 commits · Oct 6, 2016

3 changes: 2 additions & 1 deletion REQUIRE
@@ -1,2 +1,3 @@
julia 0.4
Compat 0.7.20
Compat 0.8.0
JLD 0.6.4
10 changes: 5 additions & 5 deletions benchmark/benchmarks.jl
@@ -5,15 +5,15 @@ using JLD
const suite = BenchmarkGroup()

# Add some child groups to our benchmark suite.
suite["utf8"] = BenchmarkGroup(["string", "unicode"])
suite["string"] = BenchmarkGroup(["unicode"])
suite["trigonometry"] = BenchmarkGroup(["math", "triangles"])

# This string will be the same every time because we're seeding the RNG
teststr = UTF8String(join(rand(MersenneTwister(1), 'a':'d', 10^4)))
teststr = join(rand(MersenneTwister(1), 'a':'d', 10^4))

# Add some benchmarks to the "utf8" group
suite["utf8"]["replace"] = @benchmarkable replace($teststr, "a", "b")
suite["utf8"]["join"] = @benchmarkable join($teststr, $teststr)
suite["string"]["replace"] = @benchmarkable replace($teststr, "a", "b")
suite["string"]["join"] = @benchmarkable join($teststr, $teststr)

# Add some benchmarks to the "trigonometry" group
for f in (sin, cos, tan)
@@ -25,5 +25,5 @@ end
# Load the suite's cached parameters as part of including the file. This is much
# faster and more reliable than re-tuning `suite` every time the file is included
paramspath = joinpath(Pkg.dir("BenchmarkTools"), "benchmark", "params.jld")
#tune!(suite); JLD.save(paramspath, "suite", params(suite));
# tune!(suite); JLD.save(paramspath, "suite", params(suite));
loadparams!(suite, JLD.load(paramspath, "suite"), :evals, :samples);
Binary file modified benchmark/params.jld
Binary file not shown.
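For context: the commented-out `tune!`/`JLD.save` call above is the occasional retuning step that refreshes `benchmark/params.jld`, while `loadparams!` reuses the cached evaluation counts on every include. A minimal sketch of that workflow, assuming the same BenchmarkTools/JLD API the file already uses (the group names and path below are illustrative, not the package's):

```julia
using BenchmarkTools, JLD

suite = BenchmarkGroup()
suite["string"] = BenchmarkGroup(["unicode"])
suite["string"]["join"] = @benchmarkable join("abc", "def")

paramspath = joinpath(dirname(@__FILE__), "params.jld")

# Occasional retuning step (normally left commented out, as in the file above):
tune!(suite)
JLD.save(paramspath, "suite", params(suite))

# Every other run just reloads the cached evals/samples instead of retuning:
loadparams!(suite, JLD.load(paramspath, "suite"), :evals, :samples)
results = run(suite)
```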
2 changes: 1 addition & 1 deletion doc/manual.md
@@ -841,7 +841,7 @@ Caching parameters in this manner leads to a far shorter turnaround time, and mo…

# Miscellaneous tips and info

- Times reported by BenchmarkTools are limited to nanosecond resolution, though derived estimates might report fractions of nanoseconds.
- BenchmarkTools restricts the minimum measurable benchmark execution time to one picosecond.
- If you use `rand` or something similar to generate the values that are used in your benchmarks, you should seed the RNG (or provide a seeded RNG) so that the values are consistent between trials/samples/evaluations.
- BenchmarkTools attempts to be robust against machine noise occurring between *samples*, but BenchmarkTools can't do very much about machine noise occurring between *trials*. To cut down on the latter kind of noise, it is advised that you dedicate CPUs and memory to the benchmarking Julia process by using a shielding tool such as [cset](http://manpages.ubuntu.com/manpages/precise/man1/cset.1.html).
- On some machines, for some versions of BLAS and Julia, the number of BLAS worker threads can exceed the number of available cores. This can occasionally result in scheduling issues and inconsistent performance for BLAS-heavy benchmarks. To fix this issue, you can use `BLAS.set_num_threads(i::Int)` in the Julia REPL to ensure that the number of BLAS threads is equal to or less than the number of available cores.
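The RNG-seeding tip in the list above amounts to generating benchmark inputs from a seeded RNG and interpolating them, so every trial/sample/evaluation sees identical values. A minimal sketch (the seed and input size are arbitrary):

```julia
using BenchmarkTools

x = rand(MersenneTwister(1234), 10^3)  # seeded, so the data is identical across trials
b = @benchmarkable sum($x)             # interpolate the pre-generated values
tune!(b)
run(b)
```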
15 changes: 15 additions & 0 deletions src/BenchmarkTools.jl
@@ -1,12 +1,15 @@
module BenchmarkTools

using Compat
import JLD

# `show` compatibility for pre-JuliaLang/julia#16354 builds
if VERSION < v"0.5.0-dev+4305"
Base.get(io::IO, setting::Symbol, default::Bool) = default
end

const BENCHMARKTOOLS_VERSION = v"0.0.6"

##############
# Parameters #
##############
@@ -65,4 +68,16 @@ export tune!,

loadplotting() = include(joinpath(dirname(@__FILE__), "plotting.jl"))

#################
# Serialization #
#################

# Adds a compatibility fix for deserializing JLD files written with older versions of
# BenchmarkTools. Unfortunately, this use of JLD.translate encounters a weird scoping bug
# (see JuliaCI/BenchmarkTools.jl#23). Even though it's currently unused, I've decided to
# leave this code in the source tree for the time being, with the hope that a fix for
# the scoping bug is pushed sometime soon.

# include("serialization.jl")

end # module BenchmarkTools
8 changes: 4 additions & 4 deletions src/execution.jl
@@ -46,7 +46,7 @@ end

function _lineartrial(b::Benchmark, p::Parameters = b.params; maxevals = RESOLUTION, kwargs...)
params = Parameters(p; kwargs...)
estimates = zeros(Int, maxevals)
estimates = zeros(maxevals)
completed = 0
params.gctrial && gcscrub()
start_time = time()
@@ -117,7 +117,7 @@ end
function tune!(b::Benchmark, p::Parameters = b.params;
verbose::Bool = false, pad = "", kwargs...)
warmup(b, false)
estimate = minimum(lineartrial(b, p; kwargs...))
estimate = ceil(Int, minimum(lineartrial(b, p; kwargs...)))
b.params.evals = guessevals(estimate)
return b
end
@@ -256,8 +256,8 @@ function generate_benchmark_definition(eval_module, out_vars, setup_vars,
__sample_time = time_ns() - __start_time
__gcdiff = Base.GC_Diff(Base.gc_num(), __gc_start)
$(teardown)
__time = max(Int(cld(__sample_time, __evals)) - __params.overhead, 1)
__gctime = max(Int(cld(__gcdiff.total_time, __evals)) - __params.overhead, 0)
__time = max((__sample_time / __evals) - __params.overhead, 0.001)
__gctime = max((__gcdiff.total_time / __evals) - __params.overhead, 0.0)
__memory = Int(fld(__gcdiff.allocd, __evals))
__allocs = Int(fld(__gcdiff.malloc + __gcdiff.realloc +
__gcdiff.poolalloc + __gcdiff.bigalloc,
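The key change in `generate_benchmark_definition` is that per-evaluation time is now computed with Float64 division and floored at 0.001 ns (one picosecond) rather than with integer ceiling division floored at 1 ns. A worked illustration with hypothetical numbers:

```julia
sample_time = 1_234.0  # total ns measured for one sample via time_ns() (hypothetical)
evals       = 1000     # evaluations executed inside that sample
overhead    = 0.9      # estimated per-evaluation overhead in ns, now a Float64

# New behavior: sub-nanosecond results are representable.
time_per_eval = max((sample_time / evals) - overhead, 0.001)  # ≈ 0.334 ns

# Under the old Int-based scheme, cld(1234, 1000) == 2 and the result was clamped to
# whole nanoseconds with a floor of 1, so a timing like 0.334 ns was unrepresentable.
```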
11 changes: 4 additions & 7 deletions src/parameters.jl
Expand Up @@ -9,7 +9,7 @@ type Parameters
seconds::Float64
samples::Int
evals::Int
overhead::Int
overhead::Float64
gctrial::Bool
gcsample::Bool
time_tolerance::Float64
@@ -80,16 +80,13 @@ end
nullfunc()
end
sample_time = time_ns() - start_time
return Int(cld(sample_time, evals))
return (sample_time / evals)
end

function estimate_overhead()
x = typemax(Int)
x = typemax(Float64)
for _ in 1:10000
y = overhead_sample(RESOLUTION)
if y < x
x = y
end
x = min(x, overhead_sample(RESOLUTION))
end
return x
end
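`estimate_overhead` now keeps a running `Float64` minimum over 10,000 overhead samples instead of an `Int`. One way the estimate might be applied, assuming the package's `DEFAULT_PARAMETERS` constant (the example value in the comment is hypothetical):

```julia
using BenchmarkTools

# estimate_overhead is internal, so qualify it; the result is now a Float64 (e.g. 15.7 ns).
overhead_ns = BenchmarkTools.estimate_overhead()

# Assumption: applying the estimate via the package's DEFAULT_PARAMETERS constant.
BenchmarkTools.DEFAULT_PARAMETERS.overhead = overhead_ns

b = @benchmarkable sin(1.0)
run(b)  # reported times have the per-evaluation overhead subtracted, as in execution.jl above
```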
53 changes: 53 additions & 0 deletions src/serialization.jl
@@ -0,0 +1,53 @@
const VERSION_KEY = "__versions__"

const VERSIONS = Dict("Julia" => string(VERSION), "BenchmarkTools" => string(BENCHMARKTOOLS_VERSION))

type ParametersPreV006
seconds::Float64
samples::Int
evals::Int
overhead::Int
gctrial::Bool
gcsample::Bool
time_tolerance::Float64
memory_tolerance::Float64
end

type TrialPreV006
params::Parameters
times::Vector{Int}
gctimes::Vector{Int}
memory::Int
allocs::Int
end

function JLD.readas(p::ParametersPreV006)
return Parameters(p.seconds, p.samples, p.evals, Float64(p.overhead), p.gctrial,
p.gcsample, p.time_tolerance, p.memory_tolerance)
end

function JLD.readas(t::TrialPreV006)
new_times = convert(Vector{Float64}, t.times)
new_gctimes = convert(Vector{Float64}, t.gctimes)
return Trial(t.params, new_times, new_gctimes, t.memory, t.allocs)
end

function save(filename, args...)
JLD.save(filename, VERSION_KEY, VERSIONS, args...)
return nothing
end

@inline function load(filename, args...)
# no version-based rules are needed for now, we just need
# to check that version information exists in the file.
if JLD.jldopen(file -> JLD.exists(file, VERSION_KEY), filename, "r")
result = JLD.load(filename, args...)
else
JLD.translate("BenchmarkTools.Parameters", "BenchmarkTools.ParametersPreV006")
JLD.translate("BenchmarkTools.Trial", "BenchmarkTools.TrialPreV006")
result = JLD.load(filename, args...)
JLD.translate("BenchmarkTools.Parameters", "BenchmarkTools.Parameters")
JLD.translate("BenchmarkTools.Trial", "BenchmarkTools.Trial")
end
return result
end
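These wrappers tag saved files with Julia and BenchmarkTools version info under `__versions__`, and `load` falls back to `JLD.translate` so pre-v0.0.6 files (with `Int` fields) deserialize into the new `Float64`-based types. A usage sketch, assuming `include("serialization.jl")` were re-enabled (it is commented out above because of the scoping bug); the file name is illustrative:

```julia
using BenchmarkTools

results = run(@benchmarkable sum(rand(100)))
BenchmarkTools.save("results.jld", "results", results)   # also writes the "__versions__" entry
loaded  = BenchmarkTools.load("results.jld", "results")   # translates pre-v0.0.6 types if needed
loaded == results  # should hold, mirroring test/SerializationTests.jl below
```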
8 changes: 4 additions & 4 deletions src/trials.jl
@@ -4,13 +4,13 @@

type Trial
params::Parameters
times::Vector{Int}
gctimes::Vector{Int}
times::Vector{Float64}
gctimes::Vector{Float64}
memory::Int
allocs::Int
end

Trial(params::Parameters) = Trial(params, Int[], Int[], typemax(Int), typemax(Int))
Trial(params::Parameters) = Trial(params, Float64[], Float64[], typemax(Int), typemax(Int))

@compat function Base.:(==)(a::Trial, b::Trial)
return a.params == b.params &&
@@ -246,7 +246,7 @@ function prettytime(t)
else
value, units = t / 1e9, "s"
end
return string(@sprintf("%.2f", value), " ", units)
return string(@sprintf("%.3f", value), " ", units)
end

function prettymemory(b)
14 changes: 14 additions & 0 deletions test/SerializationTests.jl
@@ -0,0 +1,14 @@
module SerializationTests

using Base.Test
using BenchmarkTools

old_data = BenchmarkTools.load(joinpath(dirname(@__FILE__), "data_pre_v006.jld"), "results")
BenchmarkTools.save(joinpath(dirname(@__FILE__), "tmp.jld"), "results", old_data)
new_data = BenchmarkTools.load(joinpath(dirname(@__FILE__), "tmp.jld"), "results")

@test old_data == new_data

rm(joinpath(dirname(@__FILE__), "tmp.jld"))

end # module
12 changes: 6 additions & 6 deletions test/TrialsTests.jl
@@ -154,12 +154,12 @@ tj_r_2 = judge(tr; time_tolerance = 2.0, memory_tolerance = 2.0)
@test BenchmarkTools.prettydiff(1.0) == "+0.00%"
@test BenchmarkTools.prettydiff(2.0) == "+100.00%"

@test BenchmarkTools.prettytime(999) == "999.00 ns"
@test BenchmarkTools.prettytime(1000) == "1.00 μs"
@test BenchmarkTools.prettytime(999_999) == "1000.00 μs"
@test BenchmarkTools.prettytime(1_000_000) == "1.00 ms"
@test BenchmarkTools.prettytime(999_999_999) == "1000.00 ms"
@test BenchmarkTools.prettytime(1_000_000_000) == "1.00 s"
@test BenchmarkTools.prettytime(999) == "999.000 ns"
@test BenchmarkTools.prettytime(1000) == "1.000 μs"
@test BenchmarkTools.prettytime(999_999) == "999.999 μs"
@test BenchmarkTools.prettytime(1_000_000) == "1.000 ms"
@test BenchmarkTools.prettytime(999_999_999) == "1000.000 ms"
@test BenchmarkTools.prettytime(1_000_000_000) == "1.000 s"

@test BenchmarkTools.prettymemory(1023) == "1023.00 bytes"
@test BenchmarkTools.prettymemory(1024) == "1.00 kb"
Binary file added test/data_pre_v006.jld
Binary file not shown.
6 changes: 6 additions & 0 deletions test/runtests.jl
@@ -13,3 +13,9 @@ println("done (took ", toq(), " seconds)")
print("Testing execution..."); tic()
include("ExecutionTests.jl")
println("done (took ", toq(), " seconds)")

# This test fails due to a weird JLD scoping error. See JuliaCI/BenchmarkTools.jl#23.
#
# print("Testing serialization..."); tic()
# include("SerializationTests.jl")
# println("done (took ", toq(), " seconds)")