
Add initial benchmarking setup. #892

Open · wants to merge 1 commit into master
Conversation

@jerzywilczek (Collaborator) commented Dec 10, 2024

Here's an initial benchmarking setup. For context, the end goal is to be able to run a set of these benchmarks on any computer and figure out what kind of smelter setups it can handle.

You can run it with:

cargo run --bin benchmark -- --help

to get information on how to run a single test.

The full command is quite verbose and looks like this:

cargo run --release --bin benchmark -- \
    --framerate 24 \
    --decoder-count maximize \
    --file-path examples/assets/BigBuckBunny.mp4 \
    --output-width 1280 \
    --output-height 720 \
    --encoder-preset ultrafast \
    --warm-up-time 10 \
    --measured-time 10 \
    --video-decoder vulkan_video_h264 \
    --framerate-tolerance 1.05

The help message will tell you all of the options for each of the arguments.

It can currently run a single test configuration. A single configuration consists of a few constant parameters (the resolution, the input video, the encoder preset, etc.) and, currently, two variable parameters: the framerate and the decoder count. Each variable parameter can be set to:

  • a constant
  • exponential iteration, meaning the test runs a series of iterations, doubling the parameter in every consecutive run, until a run fails
  • maximizing, meaning the benchmark binary-searches for the largest value at which the configuration still works

Importantly, you can set multiple parameters to iterate and a single parameter to maximize. The setup then runs every iterative combination and, for each of them, finds the maximum value of the maximized parameter.

For example, if we set the framerate to iterate and the decoder count to maximize, the setup will run with the framerate set to 1, 2, 4, 8, 16, etc., and for each framerate find the maximum number of inputs. The iteration terminates when the framerate gets so high that the system cannot run even with a single decoder.

What I plan to add to this later:

  • more variable parameters: encoder count, encoder preset, resolution, etc.
  • the ability to run with some elements of the pipeline disabled, e.g. just the decoders and the renderer without the encoder, or just the encoders without the decoders
  • a predefined set of benchmark runs that can be run with a single command, once we decide on what we want to measure

In addition to asking whether you like this system or think it should be changed, I would like your opinion on my handling of the arguments that are set to iterate or maximize (the run_args function and the other functions it calls). I feel like using Zig for the last couple of weeks got into my head, and that there is a better, more idiomatic way of doing what I'm doing here, but I just can't figure it out.

I also fixed an unrelated typo in vk-video, because it was annoying me.

@jerzywilczek jerzywilczek force-pushed the @jerzywilczek/initial-benchmark branch 5 times, most recently from 7c10e80 to 4e52a6c Compare December 11, 2024 10:19
@jerzywilczek jerzywilczek force-pushed the @jerzywilczek/initial-benchmark branch from 4e52a6c to 131428a Compare December 11, 2024 10:49
Pipeline::start(&pipeline);

let start_time = Instant::now();
while Instant::now() - start_time < bench_config.warm_up_time {
Member commented:
Suggested change
while Instant::now() - start_time < bench_config.warm_up_time {
while start_time.elapsed() < bench_config.warm_up_time {

Member commented:
An alternative approach would be to just sleep the thread and read from the receiver on a separate channel.

},
},
)
.unwrap();
Collaborator commented:

I came across an error

InputError(InputId("input_121"), Mp4(IoError(Os { code: 24, kind: Uncategorized, message: "Too many open files" })))

It should be handled somehow, imo. Maybe just log it and assume the last count was the max? It would be pretty annoying to hit such an error after running the test for a long time.

Maybe make run_single_test return Result<bool, some-kind-of-error> and handle those at a higher level?

@noituri (Member) left a comment:

🤓

Comment on lines +89 to +115
fn run_args_iterate(
    ctx: GraphicsContext,
    args: &Args,
    arguments: Box<[Argument]>,
    reports: &mut Vec<SingleBenchConfig>,
) -> bool {
    for (i, argument) in arguments.iter().enumerate() {
        if matches!(argument, Argument::IterateExp) {
            let mut any_succeeded = false;
            let mut count = 1;
            loop {
                let mut arguments = arguments.clone();
                arguments[i] = Argument::Constant(count);

                if run_args_iterate(ctx.clone(), args, arguments, reports) {
                    any_succeeded = true;
                    count *= 2;
                    continue;
                } else {
                    return any_succeeded;
                }
            }
        }
    }

    run_args_maximize(ctx, args, arguments, reports)
}
Member commented:

I'm not sure about this code. At first sight I thought there might be infinite recursion. While I now understand what's happening here, the flow is hard to follow. Also, I tried to run --decoder-count with iterate_exp and it didn't do anything. I'm not sure if I used it correctly, though.
