New execution mode "profiling" #8556
I strongly agree; it's just not always easy, due to the many different flavours of profilers per technology. But if each group of languages votes for a specific tool, I think this would be an awesome initiative, and I can commit to helping on the Java side (being myself an active contributor/user of async-profiler).
@otrosien Since you mention
I propose doing both performance and profiling runs. It may not be feasible every time, but perhaps every nth round; the exact cadence is still to be figured out.
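As a sketch of the "every nth round" idea, the cadence check could look like the following. The function name and the default interval of 10 are hypothetical; no cadence has actually been agreed on.

```python
def is_profiling_round(round_number: int, cadence: int = 10) -> bool:
    """Return True when this benchmark round should also capture profiles.

    `cadence` is a placeholder default; the real interval is still to be
    decided, as noted in the discussion above.
    """
    if cadence <= 0:
        raise ValueError("cadence must be positive")
    return round_number % cadence == 0
```

With this shape, the toolset could keep ordinary rounds unchanged and only enable the profiler hooks when `is_profiling_round(n)` is true.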
I love this idea. I'm not sure we'd have any bandwidth to work on this before the holidays, but I'll ping @msmith-techempower (currently on vacation) and see if he has any room for something like this when he gets back.
This does sound like a cool idea. We implemented some really high-level metric captures a few years ago.

From a really high level, this is actually fairly straightforward to do manually. When I was rewriting the way requests are routed in Gemini, I would simply start the application container as if it were being run by the toolset, connect to it with YourKit to capture data, then run the Docker container that drives the benchmark load.

Building tooling to do this generally might be complicated, since each language has several flavours of profilers, but we could make it configurable at the framework level. I'm not sure how the automated runs would do it, but, like you said, maybe every 10th run (or something) could be a profiling run. That run might produce results like normal, but also additional artifacts from the profilers, which we could host on tfb-status.
In light of this, it would also be really helpful to get some stats on the DB server, even if just CPU samples. That would help us understand whether bottlenecks are on the app side or the DB side.
It would be extremely helpful for framework submitters to understand the performance of their submissions, especially where the bottlenecks are when running in the target environment. For this purpose, I propose creating a dedicated execution mode, "profiling", which writes out profiling information, for example flamegraphs, either generically using perf_events, or using dedicated per-platform profiling support such as https://github.com/async-profiler/async-profiler for the JVM.
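As an illustration of the two approaches mentioned (perf_events generically, async-profiler for the JVM), the profiler invocations could be assembled like this. The sampling frequency, durations, and output path are arbitrary examples, and `profiler.sh` is async-profiler's launcher script; nothing here is an agreed-upon toolset interface.

```python
from typing import List


def perf_record_cmd(pid: int, duration_s: int = 30, freq: int = 99) -> List[str]:
    # Generic sampling via perf_events: record call stacks (-g) for the
    # target process; perf.data can then be turned into a flamegraph with
    # the usual stackcollapse/flamegraph scripts.
    return ["perf", "record", "-F", str(freq), "-g",
            "-p", str(pid), "--", "sleep", str(duration_s)]


def async_profiler_cmd(pid: int, duration_s: int = 30,
                       out: str = "flamegraph.html") -> List[str]:
    # JVM-specific profiling via async-profiler's profiler.sh wrapper,
    # which can emit a flamegraph directly.
    return ["./profiler.sh", "-d", str(duration_s), "-f", out, str(pid)]
```

A profiling execution mode would mostly need to pick the right builder per platform and collect the resulting artifacts alongside the normal results.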
Ideally, this should be applicable holistically, without requiring involvement from the individual framework contributors. Imagine the benefit of supplying flamegraphs for all applications as part of the reports on tfb-status.techempower.com: the bar for actually finding and fixing performance bottlenecks would drop drastically.
Another option could be providing Docker base images per platform (Python, JVM, etc.) with all the profiling tools preinstalled. That would also establish a first baseline of alignment on runtime versions, such as the JDKs used (see my comment in #3442).
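A per-platform base image along these lines might look like the following sketch for the JVM. The base image choice, package names, and layout are illustrative only, not an agreed-upon standard:

```dockerfile
# Hypothetical JVM base image with profiling tools preinstalled
# (image tag and package names are examples, not a proposal of record).
FROM eclipse-temurin:21-jdk

# perf for generic perf_events sampling (Ubuntu packages perf under
# linux-tools; the exact package varies by kernel/distribution).
RUN apt-get update && apt-get install -y --no-install-recommends \
        linux-tools-generic \
    && rm -rf /var/lib/apt/lists/*

# async-profiler would be unpacked under /opt here as well, so JVM
# frameworks inherit flamegraph support without per-framework changes.
```

Framework Dockerfiles would then switch their `FROM` line to such an image, which is also where the alignment on runtime versions would come from.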