
New feature: generic jar that can be used to benchmark any executable jar #7

Closed
asarkar opened this issue Apr 2, 2018 · 9 comments


asarkar commented Apr 2, 2018

I was referred to this repo from spring-projects/spring-boot#12686. I'm trying to understand the purpose of each submodule, and whether I can reuse any of them for benchmarking my code. The README looks promising, and the approach methodical, but the plethora of submodules makes it very hard to understand what each one is for, how to run the benchmarks here, and whether any of it could be used for other projects.

It'd be very useful to have a textual description of each submodule in the README, along with a column depicting reusability for projects outside this repo.


dsyer commented Apr 2, 2018

There are instructions in the README(s) showing how to run the benchmarks. Hopefully they work, but you haven't said whether you tried them.

I'm also not sure what "column depicting reusability" means. Maybe you could expand on that a bit. An example?


asarkar commented Apr 2, 2018

OK, let me try to document the request for better documentation 😄

| Module | Purpose | Reusable |
| --- | --- | --- |
| benchmarks | Blah | Yes |
| minimal | Whatever | No |

Here, Reusable means the jar can be used for benchmarking an external project by, say, running `java -cp my-project-to-benchmark.jar -jar benchmarks.jar`.

Edit:
Regarding running the app, I wasn't referring to running the pet clinic or the minimal apps here, but any Spring Boot app in the wild. Looking at ProcessLauncherState, it appears that your goal was to time the apps with various versions of Boot. Mine isn't the same; my goal is to find out exactly what is taking longer at startup with a single version of Boot. I already have an uber jar for my app that's deployed in a Docker container. There are exactly zero references to benchmarking Boot startup using JMH other than this repo; some people have tried other things with Boot and JMH, but they don't look very convincing to me.


dsyer commented Apr 3, 2018

I think what you are asking for is not so much documentation as a new feature: a general-purpose benchmarks.jar that can be used to time any Boot app on startup. It's an interesting idea, but I'm not sure it is generally implementable. There would probably be restrictions on the way the apps were implemented or configured.


asarkar commented Apr 3, 2018

> a general purpose benchmarks.jar

I'm making one that can launch either the main class or the jar. Since all the JMH options can also be specified from the command line, if I launch the host JVM with whatever settings I want, and have the Boot app inherit them, it should at least be usable for my use case. I'll post a link here for your review once I get it working (hopefully in a day or two).
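The launch-and-time core of such a tool can be sketched with the plain JDK, independent of JMH: spawn the child JVM, read its merged output, and stop the clock when the startup log marker appears. This is a minimal sketch, not the implementation from either repo; the jar name `my-app.jar` and the `"Started"` marker (Boot's default startup log line) are assumptions you would substitute for your own app.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class StartupTimer {

    // Launches `command` and returns the elapsed milliseconds until a line
    // containing `marker` appears on its (merged) output.
    static long millisUntil(String marker, String... command) throws Exception {
        long start = System.nanoTime();
        Process process = new ProcessBuilder(command)
                .redirectErrorStream(true) // merge stderr into stdout
                .start();
        try (BufferedReader out = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = out.readLine()) != null) {
                if (line.contains(marker)) {
                    return (System.nanoTime() - start) / 1_000_000;
                }
            }
        } finally {
            // Kill the child JVM after every launch, whatever happened.
            process.destroyForcibly();
        }
        throw new IllegalStateException("marker never appeared: " + marker);
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical jar name; point this at your own executable jar.
        long millis = millisUntil("Started", "java", "-jar", "my-app.jar");
        System.out.println("startup: " + millis + "ms");
    }
}
```

A JMH `@Benchmark` method would then just call `millisUntil(...)` so that JMH handles warmup, iterations, and statistics.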


asarkar commented Apr 3, 2018

@dsyer https://github.com/asarkar/spring/tree/master/boot-benchmark.

Comparing my implementation with yours, I'm not sure how yours works with @TearDown(Level.Iteration), because that means only the last of the many spawned JVMs is stopped at the end. It appears that @TearDown(Level.Invocation) should be used instead.


dsyer commented Apr 26, 2018

I don't really understand the difference between Level.Invocation and Level.Iteration. They seem to mean the same thing for these tests anyway.

dsyer changed the title from "Document the repository structure and submodules usages" to "New feature: generic jar that can be used to benchmark any executable jar" Apr 26, 2018

asarkar commented Apr 26, 2018

The difference between the Invocation and Iteration levels is discussed here: https://stackoverflow.com/a/49645606/839733. In short, they work like JUnit @Before and @BeforeClass, respectively.

If a new JVM is spawned by the test, which is the case when benchmarking startup, using Iteration means the JVMs are not shut down after each test method execution, which is apparent from the code I posted in the SO answer above.
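The accounting can be mimicked in plain Java without JMH. JMH nests several benchmark invocations inside each iteration; an Iteration-level teardown runs once per iteration, so all but the last child JVM of that iteration would leak, while an Invocation-level teardown cleans up after every launch. A sketch of that loop structure (illustrative counts only, not actual JMH):

```java
import java.util.ArrayList;
import java.util.List;

public class LevelDemo {

    // Mimics JMH's nesting: each iteration runs `invocations` benchmark calls.
    // Returns how many simulated child JVMs are still alive at the end.
    static int leaked(int iterations, int invocations, boolean teardownPerInvocation) {
        List<String> running = new ArrayList<>();
        for (int it = 0; it < iterations; it++) {
            for (int inv = 0; inv < invocations; inv++) {
                running.add("jvm"); // @Benchmark: launch a child JVM
                if (teardownPerInvocation) {
                    // @TearDown(Level.Invocation): stop it right after the call
                    running.remove(running.size() - 1);
                }
            }
            if (!teardownPerInvocation && !running.isEmpty()) {
                // @TearDown(Level.Iteration): stops only the last launch
                running.remove(running.size() - 1);
            }
        }
        return running.size();
    }

    public static void main(String[] args) {
        System.out.println(leaked(2, 5, false)); // Iteration level, 5 per iteration: 8 leak
        System.out.println(leaked(2, 5, true));  // Invocation level: 0 leak
        System.out.println(leaked(2, 1, false)); // batch size 1: nothing leaks either way
    }
}
```

The last case shows why both levels behave identically when there is only one invocation per iteration, which is the default measurement batch size.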


dsyer commented Apr 26, 2018

I think because the measurement batch size defaults to 1, only one JVM is launched per iteration in the code in this project. YMMV of course.

dsyer added a commit that referenced this issue Apr 27, 2018
In a "normal" JMH benchmark there are many invocations per iteration
but by default there is only one. We do need to kill the JVM after
every launch though, so, while it doesn't make any difference for the
default configuration, it's safer to use Level.Invocation.

See gh-7

dsyer commented Apr 27, 2018

See the launcher module in 624165f.

dsyer closed this as completed Apr 27, 2018