Create superpmi-asmdiffs pipeline #61194
Conversation
Tagging subscribers to this area: @JulieLeeMSFT
`asmdiffs` can download coredistools, for example, so don't output download progress if the user wants to suppress it. Also, move the `download` log file to the "uploads" area so it gets uploaded to the AzDO files. Temporarily, only do diffs with benchmarks, to speed things up.
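For illustration, a minimal sketch of what optional progress suppression during a download might look like (the helper name and structure are assumptions, not the actual superpmi.py code):

```python
import os
import urllib.request

def download_file(url, target_path, display_progress=True):
    """Download url to target_path, optionally printing progress.

    Hypothetical helper for illustration only; the real download code may differ.
    """
    def report(block_num, block_size, total_size):
        # Only report progress when requested and the server sent a content length.
        if display_progress and total_size > 0:
            percent = min(100, block_num * block_size * 100 // total_size)
            print("\rDownloading {}: {}%".format(os.path.basename(target_path), percent),
                  end="", flush=True)

    urllib.request.urlretrieve(url, target_path, reporthook=report)
    if display_progress:
        print()
```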
Add downloading of portable git for use on Helix for jit-analyze. It is added to the correlation payload that is copied to Helix machines (currently, we only use Windows Helix machines).
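A rough sketch of the kind of step this describes, downloading portable git and dropping it into the directory used as the Helix correlation payload (the URL, paths, and function name are assumptions):

```python
import os
import urllib.request
import zipfile

def add_portable_git_to_payload(git_zip_url, correlation_payload_dir):
    """Download a portable git .zip and extract it into the correlation payload
    directory that gets copied to the (Windows) Helix machines.

    Illustrative only: the URL, layout, and directory names are assumptions.
    """
    os.makedirs(correlation_payload_dir, exist_ok=True)
    zip_path = os.path.join(correlation_payload_dir, "portable_git.zip")
    urllib.request.urlretrieve(git_zip_url, zip_path)
    with zipfile.ZipFile(zip_path) as zip_file:
        zip_file.extractall(os.path.join(correlation_payload_dir, "git"))
    os.remove(zip_path)
```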
Overall summary.md file should be uploaded as well.
Sample run AzDO results with forced diffs (loop cloning disabled for the diff compiler), with the summary shown in the "Extensions" tab; the published artifacts contain the "overall summary" Markdown files.
Add a few more comments
@dotnet/jit-contrib This is ready for review. PTAL.
Some ideas for improvements:
The extensions thing is pretty cool; I had no idea that existed. I find the current format requires a lot of scrolling to fully comprehend -- there is a lot of data and the interesting bits can be buried. It would be nice to have everything in one table, perhaps sorted by largest impact or some such. That doesn't work as well for detail/expand-collapse, but perhaps we can get to this via internal links or something? Also (assuming interesting/unexpected diffs appear), is it obvious how to repro exactly what the CI did? Maybe echo out the repro commands somewhere (including the exact base jit version used / exact collections used, if we ever start versioning beyond the jit guid).
The example I gave (here) has a lot of diffs, because I disabled loop cloning to force diffs. However, we are generating diffs for 6 platforms, and we propose to also generate PerfScore diff results. Do you have any suggestion on how to summarize this better? Kunal suggested something like the "Impact" table here. Printing out repro commands makes sense. We can probably specify the precise baseline git hash used, but assume the diff is the current tree, and then just specify the correct command. Note, however, I don't want to gate this change on creating the "perfect" output.
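To make the "Impact" table idea concrete, here is a hypothetical sketch of condensing per-platform results into one Markdown table sorted by largest impact (the data shape and function name are invented for illustration):

```python
def build_impact_table(platform_results):
    """Render a single Markdown table summarizing per-platform diff impact,
    sorted by largest code-size increase first.

    platform_results: list of (platform, contexts_with_diffs, code_size_delta_bytes)
    tuples -- an invented shape used only for illustration.
    """
    rows = sorted(platform_results, key=lambda r: r[2], reverse=True)
    lines = ["| Platform | Contexts with diffs | Code size delta (bytes) |",
             "|---|---|---|"]
    for platform, contexts, delta in rows:
        lines.append("| {} | {} | {:+} |".format(platform, contexts, delta))
    return "\n".join(lines)
```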
Yes, something very much like that would be great.
Agreed -- no need to hold this up based on my feedback; it is quite useful even as-is (e.g., trigger on community PRs that we think should be no-diff). As we get mileage on it we will figure out enhancements to make it more useful.
Currently it's set to trigger on every PR that touches the JIT directory. I think we should keep that level of checking.
One more optimization we spoke about is to skip creating
This affects `superpmi.py download` output
@kunalspathak, others: I updated this change based on the feedback.
Here's the Extensions page result we now generate if there are no diffs. This run completed successfully, except the aspnet collection replay has MISSING data, and that is reported as an error code.
…script. There will be more changes required if we ever run on non-Windows platforms, so don't keep these partial measures.
LGTM. Thanks for doing this, can't wait to trigger it on one of my PRs.
I don't see the logic to upload other metrics.
Agree -- thanks for working on this. 👍
True, I haven't done that. I could pass additional metrics through, but generating the PerfScore metrics separately would require iterating over all the respective MCH directories, invoking jit-analyze specifically for each base/diff pair, and then collecting a summary_PerfScore.md file, which would be separately summarized and uploaded. This is more work than I want to do, so I think for now I'm not going to add additional metrics to the summary.
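Roughly, a sketch of the extra work described here (the paths, the pairing of directories, and the helper name are assumptions; the jit-analyze options are the ones already shown in this thread):

```python
import os
import subprocess

def collect_perfscore_summary(jit_analyze_path, asm_dir_pairs, output_dir):
    """For each (base_asm_dir, diff_asm_dir) pair, run jit-analyze for the PerfScore
    metric and append the per-pair output to a single summary_PerfScore.md.

    Sketch only: how the base/diff .dasm directories are paired per MCH file is
    an assumption.
    """
    summary_file = os.path.join(output_dir, "summary_PerfScore.md")
    for base_asm_dir, diff_asm_dir in asm_dir_pairs:
        per_pair_md = os.path.join(diff_asm_dir, "perfscore.md")
        command = [jit_analyze_path, "--md", per_pair_md, "--metrics", "PerfScore",
                   "--base", base_asm_dir, "--diff", diff_asm_dir]
        subprocess.run(command, check=False)
        if os.path.exists(per_pair_md):
            with open(summary_file, "a") as summary, open(per_pair_md) as per_pair:
                summary.write(per_pair.read())
                summary.write("\n")
```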
Also, reduce timeout values
What do you mean? It will print the
The implementation of the accurate CodeSize doesn't kick in when any metrics are specifically requested: see runtime/src/coreclr/scripts/superpmi.py, lines 2030 to 2033 at 1d352fc.
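A rough paraphrase of the behavior being described (this is not the actual superpmi.py source; the function name is hypothetical and the variable names are taken from the snippet below):

```python
# Paraphrase for illustration (not the actual superpmi.py code): the accurate
# total-size override is only passed to jit-analyze when no metrics were
# explicitly requested on the command line.
def maybe_add_overrides(command, requested_metrics, base_bytes, diff_bytes):
    if requested_metrics is None and base_bytes is not None and diff_bytes is not None:
        command += ["--override-total-base-metric", str(base_bytes),
                    "--override-total-diff-metric", str(diff_bytes)]
    return command
```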
There is no "accurate" total PerfScore number currently, just CodeSize.
Can we do something like this?

```python
command = [ jit_analyze_path, "--md", md_summary_file, "-r", "--base", base_asm_location, "--diff", diff_asm_location ]
if base_bytes is not None and diff_bytes is not None:
    m_command = command + [ "--override-total-base-metric", str(base_bytes), "--override-total-diff-metric", str(diff_bytes) ]
    run_and_log(m_command, logging.INFO)

# run again for non-code size metrics.
non_code_size_metrics = [m for m in self.coreclr_args.metrics if m != 'CodeSize']
if len(non_code_size_metrics) > 0:
    m_command = command + [ "--metrics", ",".join(non_code_size_metrics) ]
    run_and_log(m_command, logging.INFO)
```
By the way, I didn't realize until now that we can pass
Something like that would work, but we'd then have two .md files, or we'd have to force the second jit-analyze to append to the first one. I think a better solution is to pass "override" data on a per-metric basis, that is, teach jit-analyze to understand per-metric override values. A not-mutually-exclusive alternative is to teach superpmi.py to (optionally?) create one summary .md file for each metric, e.g., summary_PerfScore.md in addition to the existing summary.md.
I agree. We can just ask jit-analyze to handle that. But then we still need to change the logic around passing the metrics.
Yes, or generalize it as I suggest. |
Ok, whatever is easy and clean is fine then. |
Create a new `runtime-coreclr superpmi-asmdiffs` pipeline that runs SuperPMI asmdiffs for every change in the JIT directory.

The diffs are run on two platforms: Windows x64 and Windows x86. Linux, Arm64, and Arm32 asm diffs are done using cross-compilers.
The resulting summary .md files are uploaded into the pipeline artifacts, one .md file per platform (so, one for the Windows x64 runs and one for the Windows x86 runs). The results are also displayed in the "Extensions" page of the AzDO pipeline.
It looks like the runs take about 50 minutes to complete (assuming not much waiting for machines).
The asm diffs pipeline is similar to the "superpmi-replay" pipeline, except:
- It determines the baseline JIT to use based on the `main` branch (see the sketch after this list). Given this, it downloads the matching baseline JITs from the JIT rolling build artifacts in Azure Storage.
- It clones the `jitutils` repo and builds the `jit-analyze` tool, needed to generate the summary .md file.
- It downloads a portable `git` installation and adds it to the correlation payload, as `git diff` is used by `jit-analyze` for analyzing the generated .dasm files of the diff.
- It publishes the results to the "Extensions" page.
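For the baseline step, a hedged sketch of one way the baseline commit could be determined, assuming it is the merge point of the PR branch with `main` (the actual script logic may differ):

```python
import subprocess

def find_baseline_git_hash():
    """Find the merge point of the current branch with main, on the assumption
    that this is the commit whose JIT rolling build artifacts are used as the
    asmdiffs baseline. Illustrative only; the real logic may differ.
    """
    result = subprocess.run(["git", "merge-base", "HEAD", "origin/main"],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()
```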
As part of this implementation:

a. `azdo_pipelines_util.py` was renamed to `jitutil.py`, and a lot of utility functions from superpmi.py were moved over to it. This was mostly to share the code for downloading and uncompressing .zip files (a sketch of such a helper follows below). There might be some slight changes to the output from the superpmi.py download commands that I'll have to look into. However, I also moved a bunch of simple, more general helpers, for possible future sharing.

b. `jitrollingbuild.py download` can now take no arguments and download a baseline JIT (from the JIT rolling build Azure Storage location), for the current enlistment, to the default location. Previously, it required a specific git_hash and target directory. There is similar logic in superpmi.py, but not quite the same.

c. The `superpmi.py --no_progress` option was made global, and applied in a few more places.

Fixes #59445
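For item (a), a minimal sketch of the kind of shared download-and-uncompress helper that could live in jitutil.py (the function name and behavior are assumptions, not the actual code):

```python
import os
import tempfile
import urllib.request
import zipfile

def download_and_unzip(url, target_dir):
    """Download a .zip from url and extract it into target_dir.

    Illustrative sketch of a helper shared via jitutil.py; the real function
    names and behavior may differ.
    """
    os.makedirs(target_dir, exist_ok=True)
    with tempfile.TemporaryDirectory() as temp_dir:
        zip_path = os.path.join(temp_dir, os.path.basename(url))
        urllib.request.urlretrieve(url, zip_path)
        with zipfile.ZipFile(zip_path) as zip_file:
            zip_file.extractall(target_dir)
```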