[Perf] Refactor tests.yml into one file per package per language #5083
Conversation
The following pipelines have been queued for testing:
@heaths, @jsquire, @christothes: This set of PRs is a huge improvement over the status quo, where the metadata for all tests across all languages lives in a single file in the tools repo. These PRs move this metadata to the language+service directories, which allows updating the perf tests in a single PR to the language repo, rather than also requiring a PR to the tools repo. It also allows languages to add tests or arguments beyond the standard set we keep consistent across all languages. The perf v-team has been waiting months for this refactoring, so I would like to land these PRs with the current design, and we can discuss further improvements after these are merged.
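As a rough illustration of the layout this describes, a per-package metadata file living next to the code in a language repo might look something like the sketch below. The path, keys, and values here are assumptions for the sake of the example, not the exact schema these PRs introduce:

```yaml
# Hypothetical sdk/storage/perf-tests.yml in a language repo; the file name
# and schema are illustrative assumptions, not the exact format from these PRs.
Service: storage-blob
Project: sdk/storage/Azure.Storage.Blobs/perf/Azure.Storage.Blobs.Perf

Tests:
- Test: download
  Class: DownloadBlob
  Arguments:
  - --size 10240 --parallel 64
  - --size 10485760 --parallel 32
- Test: upload
  Class: UploadBlob
  Arguments:
  - --size 10240 --parallel 64
```

Because the file sits beside the tests it describes, adding a test and registering its argument matrix would happen in a single PR to the language repo, which is the core benefit the comment above calls out.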
It may be possible to move some or most of this information into the perf test source code itself. For example, in .NET, the list of arguments could be expressed as a set of custom attributes on the perf test class, and I can see some advantages to this.

However, one of the principles we have been using for our performance infrastructure is to keep the code specific to each language as simple as possible, and move the complexity into the language-agnostic PerfAutomation app. Anything language-specific needs to be implemented 5 times (.NET, Java, JS, Python, C++) and kept consistent across the languages. So we have tried to keep the language-specific perf frameworks as simple and lightweight as possible, and encoding this metadata in

If .NET would like to prototype an end-to-end solution that moves this metadata from

My personal opinion is the current
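To make the custom-attribute alternative mentioned above concrete, here is a minimal sketch of how argument sets might be declared directly on a .NET perf test class. The `PerfArgumentsAttribute` type and everything else in this block are hypothetical, invented for illustration; they are not part of the actual perf framework:

```csharp
using System;

// Hypothetical attribute for declaring the argument sets a perf test should
// run with; the type, its shape, and its usage are illustrative assumptions.
[AttributeUsage(AttributeTargets.Class, AllowMultiple = true)]
public sealed class PerfArgumentsAttribute : Attribute
{
    public string Arguments { get; }
    public PerfArgumentsAttribute(string arguments) => Arguments = arguments;
}

// A perf test class could then carry its own run matrix, removing the need
// for a separate metadata file:
[PerfArguments("--size 10240 --parallel 64")]
[PerfArguments("--size 10485760 --parallel 32")]
public class DownloadBlobTest
{
    // Test body omitted; in the real framework this would derive from the
    // language's perf test base class.
}
```

A runner would then discover these attributes via reflection, which is exactly the kind of per-language mechanism the comment above argues against duplicating and keeping consistent across five languages.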
I think the main concern here for me is the number of things that a performance test author has to remember to update, and the disconnect in understanding why the new test that you wrote wasn't executed. I'd prefer to see us remove the need to have metadata present to run the test (run them all by default) and, instead, let the dashboard show results that don't align if it's not set. That way, it's clear that a test is running and that "something" needs to happen for it to be grouped with its peers from other languages, which can help drive the necessary follow-up conversations to discover what steps are needed.
This reverts commit dc6775e.
The following pipelines have been queued for testing:
The feedback provided by the .NET folks above around the design is helpful and can be factored into the next iteration of the perf infra.
The following pipelines have been queued for testing:
I've set up a meeting with @jsquire and @pallavit to discuss next steps, if any. I'm thinking cross-language and appreciate @mikeharder's design ideas we discussed offline. I think there are relatively low-cost ways to achieve the same thing without complicating the onboarding process for new contributors by making them maintain yet another document, since they already have to write code to add tests.

The value of this PR over the existing process is not in question: clearly this is a vast improvement. But the fewer maintenance hassles in the long run, the better. Ideally, I think the runners can effectively multiplex the different permutations into separate processes (this is how .NET test runners generally work, and even the multi-targeting builds that we already use for testing). That may be harder in some languages than others, granted. But I also empathize with Mike that this is something we do relatively infrequently. Still, if we make it a docs issue... well, I've already seen more than enough cases where we have docs - somewhere - and people still go outside those norms, like we've seen recently with tests/samples/snippets for .NET. More automation and less manual effort tends to help.

To clarify, though, I see no reason to hold up this current effort. It's clearly better than what exists now with a disconnected, centralized repo.
Sync eng/common directory with azure-sdk-tools for PR Azure/azure-sdk-tools#5083 See [eng/common workflow](https://github.com/Azure/azure-sdk-tools/blob/main/eng/common/README.md#workflow) --------- Co-authored-by: Mike Harder <mharder@microsoft.com>
Hello @azure-sdk! Because this pull request has the

p.s. You can customize the way I help with merging this pull request, such as holding this pull request until a specific person approves. Simply @mention me (