OSSF Scorecard(#1541) #2615
Conversation
Codecov Report
@@            Coverage Diff             @@
##             main    #2615      +/-   ##
==========================================
+ Coverage   77.56%   77.65%   +0.09%
==========================================
  Files         600      606       +6
  Lines        9872     9905      +33
  Branches     1353     1355       +2
==========================================
+ Hits         7657     7692      +35
  Misses       1905     1905
+ Partials      310      308       -2
Apart from the linting errors, is this the only thing I had to do?
Looks like you'll need to add "securityscorecards" to the allowed word list https://github.com/intel/cve-bin-tool/blob/main/.github/actions/spelling/allow.txt
Also... that's a really weird cron schedule. Does it really need to run every hour or whatever it's doing?
I'll add it to the allow file. From what I read, I got the impression that these scorecards are quite important, so that's why I set it up this way. By default it would run just once a week; I'll change it back to once a week.
Hah, of course they'd tell you that they're very important. The short answer is "weekly scans would be sufficient for my needs," but since you spent the time to read up on it, I'll give you a longer perspective below:

The OpenSSF Scorecard is based on a bunch of recommendations of best practices from security experts involved with the OpenSSF. It's not a bad system for seeing how well you adhere to best practices as they're known, and maybe for identifying places where you could do better. BUT there's actually no evidence right now that adhering to any of this actually makes software more secure. As far as I know, it hasn't been around long enough, and it's still evolving, so no one has actually done a study showing how it compares with actual found vulnerabilities or anything.

Back when I was in grad school and doing postdoc work, I can tell you that a lot of the security and code quality indicators that got tested did not actually help much. A lazy search turned up this old paper about complexity metrics vs. security, for example: https://collaboration.csc.ncsu.edu/laurie/Papers/p47-shin.pdf (spoiler: complexity doesn't correlate strongly with security). One of my favourite papers was on automated code repair, where people often said "but the auto-generated code is going to be not as good as human-generated code", and then it turned out that humans could easily be convinced that auto-generated security fixes were fine if you also auto-generated comments: https://squareslab.github.io/papers-repo/pdfs/FryISSTA12_PREPRINT.pdf. Which is all to say that there are often still surprises in what we think will work as best practice in computer science, even now. So...
I'm interested in the OpenSSF Scorecard and similar efforts because it's potentially helpful and it can help reassure users that we're trying to do our best, but we have to be aware that it could potentially turn out to be a "cargo cult" effort where we're blindly doing some number of things that may not actually affect our security outcomes.

Which is a long way of saying I absolutely wouldn't expect to spend more than maybe half an hour per month on this effort. Weekly scans should be sufficient (maybe even more than sufficient) so we can see how we're doing and decide if it's worth making changes.
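For reference, a weekly schedule in a Scorecard workflow would look something like the sketch below. This is not the exact contents of this PR; the file path, job name, and cron time are assumptions, though `ossf/scorecard-action` and its `results_file`/`results_format`/`publish_results` inputs are the documented interface:

```yaml
# .github/workflows/scorecards.yml (hypothetical path)
name: OSSF Scorecard
on:
  schedule:
    - cron: '30 2 * * 1'   # weekly: Mondays at 02:30 UTC, not hourly
  push:
    branches: [main]

permissions: read-all

jobs:
  analysis:
    runs-on: ubuntu-latest
    permissions:
      security-events: write   # upload SARIF results to code scanning
      id-token: write          # required when publish_results is true
    steps:
      - uses: actions/checkout@v3
      - uses: ossf/scorecard-action@v2
        with:
          results_file: results.sarif
          results_format: sarif
          publish_results: true
```

The cron field order is minute, hour, day-of-month, month, day-of-week, so `30 2 * * 1` fires once per week rather than once per hour.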
@terriko I updated the scorecard. Also thank you for these papers; I'll read them for sure. |
You definitely don't have to read the whole papers if you're busy, but flipping through the conclusions is often pretty educational. :)
Anyhow, looks like this is ready to merge. I'll fix the gitlint error during merge, but next time you can avoid it by using https://www.conventionalcommits.org/ style commit messages/PR titles.
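A Conventional Commits title has the form `type(scope): description`, where the scope is optional. For example (hypothetical messages for a change like this one):

```
feat: add OSSF Scorecard workflow
fix(ci): run scorecard scan weekly instead of hourly
```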
Thanks! I read one of them today, and I'm reading the second one now, since I thought it might help here.
Fixes #1541