Store aggregate read/execute count statistics #8286
Conversation
.cloned()
.expect("No arguments for trait call");
let stats = Arc::clone(stats);
turbo_tasks.run_once(Box::pin(async move {
this could use a `tokio::task::spawn` instead. That's much more lightweight
I can't do this because `resolve_trait_method` needs to be run within the context of turbo_tasks:

thread 'test_dyn_trait_methods' panicked at crates/turbo-tasks/src/manager.rs:1301:17:
cannot access a task-local storage value without setting it first
And it looks like passing through the task-local context isn't safe either: https://github.com/vercel/turbo/blob/main/crates/turbo-tasks/src/manager.rs#L1323-L1329
Tobias Koppers - fix typo (vercel/turborepo#8619)
Benjamin Woodruff - Store aggregate read/execute count statistics (vercel/turborepo#8286)
Tobias Koppers - box InProgress task state (vercel/turborepo#8644)
Tobias Koppers - Task Edges Set/List (vercel/turborepo#8624)
Benjamin Woodruff - Memory: Use `triomphe::Arc` for `SharedReference` (vercel/turborepo#8622)
Will Binns-Smith - chore: release npm packages (vercel/turborepo#8614)
Will Binns-Smith - devlow-bench: add git branch and sha to datapoints (vercel/turborepo#8602)

---

Fixes a `triomphe` package version conflict between turbopack and swc by bumping it from 0.1.11 to 0.1.13.
…67164) Writes out the statistics gathered via vercel/turborepo#8286 to a file. Stats can be formatted and sorted with jq:

```
jq 'to_entries | sort_by(.value.cache_miss) | reverse | from_entries' <input.json >output.json
```

Output looks something like this (once formatted and sorted): https://gist.github.com/bgw/4e2df35b9e410bf71fe51ecaffc3c7bf

Without #67165, this requires that SIGINT be sent directly to the `next-server` process to allow a clean exit:

```
pkill -INT next-server
```

But with #67165, a simple <kbd>ctrl</kbd>+<kbd>c</kbd> works.
…epo#8547)

## Why?

I need a better system for handling process exit to handle flushing of cache hit statistics (vercel/turborepo#8286).

## What?

- Supports letting another thing listen for <kbd>ctrl</kbd> + <kbd>c</kbd>. In the case of next-server, we need to let node.js own and configure these signal handlers.
- Supports setting up a <kbd>ctrl</kbd> + <kbd>c</kbd> handler for you, but now explicitly panics if you try to set up multiple global <kbd>ctrl</kbd> + <kbd>c</kbd> listeners.
- Allows async work to happen during cleanup. This wasn't possible using `Drop` because `Drop` isn't async.
- Allows cleanup work to be scheduled after application initialization, potentially making the API a bit more flexible.

## Testing Instructions

```
cargo test -p turbopack-trace-utils
```

Add some `println!()` logging to the end of the `on_exit` future in turbopack-cli.

```
TURBOPACK_TRACING=overview cargo run -p turbopack-cli -- dev
```

Hit `ctrl+c` and see that my `println!()` runs, so I know the tracing flush finishes.
Why?
I want to determine percent "cache hit" rates for tasks. Tasks with very low cache hit rates should likely have their annotations removed.
Eventually, we might be able to use this information in a more automated way, by leaving the annotation in, but skipping the caching for low-cache-hit tasks.
What?
This implementation only logs persistent tasks, which should comprise all or the majority of the tasks we care about for memory usage.
The implementation should bail out quickly if caching is disabled, so it should be okay to leave in release builds, which is important for making it easy to gather statistics from willing users.
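As a rough, std-only sketch of the approach described above (hypothetical names, not the actual turbo-tasks implementation): a global enabled flag is checked first so the disabled path costs only a single relaxed atomic load, and per-task read/execute counts are aggregated under a lock only when enabled. The cache hit rate is then reads that did not require an execution, divided by total reads.

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Mutex;

// Hypothetical global flag: statistics are off by default, so the common
// (disabled) path is just one relaxed atomic load before bailing out.
static STATS_ENABLED: AtomicBool = AtomicBool::new(false);

#[derive(Default, Debug, Clone, Copy)]
struct TaskStats {
    reads: u64,
    executions: u64, // cache misses: times the task body actually ran
}

impl TaskStats {
    // A read that did not trigger an execution was served from cache.
    fn cache_hit_rate(&self) -> f64 {
        if self.reads == 0 {
            return 0.0;
        }
        let hits = self.reads - self.executions.min(self.reads);
        hits as f64 / self.reads as f64
    }
}

#[derive(Default)]
struct AggregateStats {
    by_task: Mutex<HashMap<&'static str, TaskStats>>,
}

impl AggregateStats {
    fn record_read(&self, task: &'static str) {
        if !STATS_ENABLED.load(Ordering::Relaxed) {
            return; // bail out quickly when stats are disabled
        }
        self.by_task.lock().unwrap().entry(task).or_default().reads += 1;
    }

    fn record_execution(&self, task: &'static str) {
        if !STATS_ENABLED.load(Ordering::Relaxed) {
            return;
        }
        self.by_task.lock().unwrap().entry(task).or_default().executions += 1;
    }
}

fn main() {
    STATS_ENABLED.store(true, Ordering::Relaxed);
    let stats = AggregateStats::default();
    // Simulate 10 reads of a task whose body only ran twice:
    for _ in 0..10 {
        stats.record_read("my_task");
    }
    for _ in 0..2 {
        stats.record_execution("my_task");
    }
    let s = stats.by_task.lock().unwrap()["my_task"];
    println!("hit rate: {:.0}%", s.cache_hit_rate() * 100.0); // prints "hit rate: 80%"
}
```

A per-task map like this could then be dumped to JSON at exit and sorted with the jq pipeline mentioned in the linked PR.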
Testing
Run included unit tests!
This is used as part of vercel/next.js@canary...bgw/cache-hit-stats