Tracking Issue for Metrics Initiative #128914

Open · 1 of 5 tasks
jieyouxu opened this issue Aug 10, 2024 · 3 comments
Labels: -Zmetrics-dir (Unstable option: metrics directory) · A-diagnostics (Area: Messages for errors, warnings, and lints) · A-metrics (Area: Metrics) · C-tracking-issue (Category: An issue tracking the progress of sth. like the implementation of an RFC) · D-diagnostic-infra (Diagnostics: Issues that affect all diagnostics, or relate to the diagnostic machinery itself) · T-compiler (Relevant to the compiler team, which will review and decide on the PR/issue)

Comments

jieyouxu (Member) commented Aug 10, 2024

The Metrics Initiative

This is a tracking issue for the Metrics Initiative. Tracking issues are used to record the overall progress of implementation. They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions. A tracking issue is, however, not meant for large-scale discussion, questions, or bug reports about a feature.

Instead, please file dedicated issues for specific concerns that you wish to register: discussions and concerns become very hard to track on a GitHub issue once they reach more than a couple of comments, and GitHub's UI will then collapse them.

Context

Motivation

Excerpt from the Council Zulip thread, serving as a summary (edited by me):

We're envisioning three use cases for the Metrics Initiative:

  1. Supporting feature development, e.g. answering specific questions such as when the old and new trait solvers diverge, or helping identify and resolve bugs.
  2. Guiding improvements to User Experience, e.g. knowing which compiler errors are causing the most confusion or are hit the most frequently, focusing on improving those first, and verifying that the improvements help.
  3. Improving perf feedback loops and insight, e.g. helping identify pathological edge cases, similar to work @nnethercote has done manually in the past.

We're focusing initially on the first use case since we see that as the most likely to have a significant impact.
We want to get to the point where other contributors can leverage the metrics to answer their own questions while we continue to build up the supporting infrastructure.

To do that, we'd like to gather specific use cases where people would like to leverage metrics, and build the supporting infrastructure around those real-world needs.

Guiding Aims

  • Trust: do not violate the trust of our users.
    • NO TELEMETRY, NO NETWORK CONNECTIONS.
    • Emit metrics locally (see the sketch after this list).
    • User information should never leave the user's machine in an automated manner; sharing metrics should always be opt-in, clear, and manual.
    • All of this information would only be stored on disk, with some minimal retention policy to avoid wasteful use of users' hard drives.
  • Feedback: improve feedback loops to assist with iterative improvement within the project.
    • Answer questions from real production environments in a privacy-preserving way.
    • Improve the legibility of rare or intermittent issues.
    • Give earlier warnings for ICEs and other major issues on nightly, improving the likelihood that we catch them before they hit stable.
  • Performance impact:
    • Leave no trace (minimize the performance impact, particularly for default-enabled metrics).
  • Extensibility:
    • It should be easy to add new metrics as needed.
    • Only add metrics to answer a specific question, with an explicitly documented rationale.
  • User experience:
    • Improve the user experience of reporting issues to the project.
    • Improve the user experience of using the compiler, and measure the impact of changes to that experience.
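
To make the Trust aim concrete, the following is a minimal sketch (not rustc's actual implementation) of what "emit metrics locally" could mean: records are appended to a file in a user-chosen directory on disk, and nothing is ever sent over the network. The names MetricsRecord and emit_metrics are hypothetical, and the hard-coded directory only stands in for whatever location the user opts into (e.g. via the unstable -Zmetrics-dir flag).

    // Hypothetical sketch only: none of these names come from rustc.
    use std::fs;
    use std::io::Write;
    use std::path::Path;
    use std::time::{SystemTime, UNIX_EPOCH};

    struct MetricsRecord {
        event: String,
        detail: String,
    }

    // Append one record as a line of JSON to `<metrics_dir>/metrics-<pid>.jsonl`.
    // If no directory was opted into, nothing is written at all.
    fn emit_metrics(metrics_dir: Option<&Path>, record: &MetricsRecord) -> std::io::Result<()> {
        let Some(dir) = metrics_dir else {
            return Ok(()); // metrics disabled: leave no trace
        };
        fs::create_dir_all(dir)?;
        let path = dir.join(format!("metrics-{}.jsonl", std::process::id()));
        let timestamp = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .map(|d| d.as_secs())
            .unwrap_or(0);
        let mut file = fs::OpenOptions::new().create(true).append(true).open(path)?;
        // Hand-rolled JSON keeps the sketch dependency-free; a real implementation
        // would escape the strings properly.
        writeln!(
            file,
            r#"{{"time":{},"event":"{}","detail":"{}"}}"#,
            timestamp, record.event, record.detail
        )
    }

    fn main() -> std::io::Result<()> {
        // Example usage with a hard-coded directory; a real invocation would take
        // the directory from an explicit, opt-in flag.
        let record = MetricsRecord {
            event: "example_event".to_string(),
            detail: "illustration only".to_string(),
        };
        emit_metrics(Some(Path::new("./metrics")), &record)
    }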

Suggested Use Cases

TODO

  • Inform project members of the initiative and gather real-world needs to build initial metrics infrastructure around
  • Complete the metrics loop (e.g., design and implement tools to send metrics back to the project for analysis, tools to analyze metrics locally and notify users when issues we want insight into have been encountered, analyzing metrics in crater runs)
  • Automated cleanup of metrics to prevent unbounded disk usage (a sketch of one possible approach follows this list)
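
As a hypothetical sketch of what the automated-cleanup task could look like (this is not an agreed design), a retention policy could simply delete metrics files older than a fixed window so that the metrics directory's disk usage stays bounded:

    // Hypothetical sketch only: remove files in the metrics directory that are
    // older than `max_age`, bounding disk usage per the Trust aim above.
    use std::fs;
    use std::path::Path;
    use std::time::{Duration, SystemTime};

    fn clean_up_old_metrics(metrics_dir: &Path, max_age: Duration) -> std::io::Result<()> {
        let now = SystemTime::now();
        for entry in fs::read_dir(metrics_dir)? {
            let entry = entry?;
            let metadata = entry.metadata()?;
            if !metadata.is_file() {
                continue;
            }
            // If the platform cannot report a modification time, treat the file
            // as new so it is never deleted on uncertain information.
            let modified = metadata.modified().unwrap_or(now);
            let age = now.duration_since(modified).unwrap_or_default();
            if age > max_age {
                fs::remove_file(entry.path())?;
            }
        }
        Ok(())
    }

    fn main() -> std::io::Result<()> {
        // Example retention window of 30 days; the actual policy is an open question.
        clean_up_old_metrics(Path::new("./metrics"), Duration::from_secs(30 * 24 * 60 * 60))
    }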

Concerns and Related Issues

Related Labels

  • A-metrics (Area: Metrics), for issues and PRs that contain discussions, bugs, or implementation work, or that are otherwise related to metrics.
  • -Zmetrics-dir (Unstable option: metrics directory), for the unstable metrics output directory flag.

Implementation History

jieyouxu added the A-diagnostics, T-compiler, C-tracking-issue, D-diagnostic-infra, and -Zmetrics-dir labels on Aug 10, 2024
jieyouxu added the A-metrics (Area: Metrics) label on Aug 13, 2024
rust-lang locked this conversation as too heated and limited it to collaborators on Aug 13, 2024
rust-lang unlocked this conversation on Aug 15, 2024