This is a very large feature request. To be clear, I don't expect the cargo-vet team to take on this work. I hope this can be the start of a conversation about this kind of tooling.
We do `unsafe` code reviews for crates on Fuchsia. When we do, it's usually 1-3 people auditing every usage of `unsafe` in a crate. Our current process for this is:

1. Create a git commit that adds or bumps the version of the crate.
2. Leave a bunch of review comments in Gerrit. These are a mix of `unsafe` justifications (this is sound, and here's why) and soundness issues (this is unsound, and here's why).
3. Once we've reviewed every usage of `unsafe`, we create a separate commit summarizing our audit in our `audits.toml` and check that in (see the example entry after this list).
4. Finally, we check in the change introducing the new crate (or the updated version of an existing crate).
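For reference, a minimal sketch of what one of those summary entries looks like; the crate name, reviewer, and notes here are made up, but the field names follow the format cargo-vet records in `audits.toml`:

```toml
[[audits.example-crate]]
who = "Reviewer Name <reviewer@example.com>"
criteria = "safe-to-deploy"
delta = "1.0.0 -> 1.1.0"
notes = "Audited every unsafe block in the delta; two soundness issues reported upstream."
```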
This has a few pitfalls:
- We have to manually search for `unsafe` uses, and there have been times when we've missed a few.
- Because we have to go out of our way to certify, we sometimes forget to actually check in an audit. We could feasibly fix this with more process / by actually using cargo vet right (we are very far from perfect right now), but I still think this would be a process improvement.
- We regularly update several crates in a single commit, which means that comments from multiple crates get mixed together.
- Because we version our crates in paths, the diffs are not always very clean. Files that should be marked as renamed often get marked as 100% deleted and then 100% added at another path, so it's not easy to navigate.
- There's no comment filtering by anything other than resolved/unresolved. I want to send our audit notes upstream in a format that is easy for maintainers to incorporate.
Nothing would make me happier than if someone could click a button and add our review notes as safety comments.
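To make that concrete, here's a sketch of the form such a note would take upstream; the function is invented for illustration, but the `// SAFETY:` comment convention is the standard Rust one:

```rust
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        return None;
    }
    // SAFETY: we just checked that `bytes` is non-empty, so index 0 is in
    // bounds and `get_unchecked(0)` cannot read outside the allocation.
    Some(unsafe { *bytes.get_unchecked(0) })
}
```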
My ideal tool would be something like this:
- Web interface
- Plug in the crate and version (or delta) to start a review
- `unsafe` gets detected automatically so we don't miss any during review (see the sketch after this list)
- Share a link with multiple people to add comments
- Tag comments so upstream maintainers can address them easily (e.g. "safety comment", "soundness issue", "logic issue")
- Generates an artifact we can either link people to or ship elsewhere (e.g. structured notes, markdown, or a permalink to the review)
- Open source, so others can use it to audit crates (auditing difficulty is slowing adoption)
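On the detection point above, a minimal sketch of how a tool could find every `unsafe` site mechanically, using the `syn` crate (illustrative only, not part of cargo-vet; assumes syn 2.x with the `full` and `visit` features):

```rust
use syn::visit::{self, Visit};

// Counts unsafe blocks and unsafe fns in a file. A real review tool would
// record spans so it could anchor a comment thread to each site.
struct UnsafeFinder {
    hits: usize,
}

impl<'ast> Visit<'ast> for UnsafeFinder {
    fn visit_expr_unsafe(&mut self, node: &'ast syn::ExprUnsafe) {
        self.hits += 1;
        visit::visit_expr_unsafe(self, node); // keep walking nested code
    }

    fn visit_item_fn(&mut self, node: &'ast syn::ItemFn) {
        if node.sig.unsafety.is_some() {
            self.hits += 1;
        }
        visit::visit_item_fn(self, node);
    }
}

fn main() {
    let src = std::fs::read_to_string("src/lib.rs").expect("read source file");
    let ast = syn::parse_file(&src).expect("parse source file");
    let mut finder = UnsafeFinder { hits: 0 };
    finder.visit_file(&ast);
    println!("{} unsafe sites to review", finder.hits);
}
```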
I suspect that others would benefit from such a tool. We might have some Fuchsians with spare time who could contribute to developing it, and I would be willing to help out as well.
We got partway through a web UI for the certify flow last year (see #293 and #330) but never saw it through to completion. We were intentionally leaving the review experience out of scope (because building a good code-review tool is hard, and we're primarily leaning on Sourcegraph for that), but I'm certainly open to something more ambitious if folks are excited to work on it!
I want to add here that Sourcegraph has recently been a pain in the ass: a lot of diffs just show as "empty repository". While the command-line alternative exists, auditing multiple thousands of lines via the local diff is a pain in the ass too.

Maybe a configuration option for local UI tools would be sufficient.