Create scorecard.yml (OpenSSF) #1708
Conversation
@baentsch many thanks for the comments!
PR updated - only generates a .sarif file, and runs on both PRs and merges to main.
An example SARIF file can be found as an attachment on https://github.com/open-quantum-safe/liboqs/actions/runs/8066088271?pr=1708
In the scan above, we have two categories of issues reported:
Fixing these carries some risk of trial and error, but ultimately should not change results; it is just being more explicit about what we expect an action to do.
This requires making the versions of the GitHub Actions we use explicit, so that a release can always be regenerated. The downside is the maintenance needed to update them, even within minor versions. I see the reasoning for it, and it may work better alongside an automated tool like Dependabot (I know it works on major and minor versions; not sure about commit hashes). I didn't notice other issues in the SARIF file (viewed with the SARIF viewer in VS Code, from MS Dev Labs).
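The pinning discussed above replaces a mutable tag with an immutable commit SHA. A minimal sketch of how such a pin can be derived, using a throwaway local repository as a stand-in for a real action repo (the repo contents and tag here are only illustrative; against a real action you would run `git ls-remote` on its GitHub URL):

```shell
# Sketch: resolving an action's tag to the full commit SHA used for pinning.
# A throwaway local repo stands in for e.g. github.com/actions/checkout.
set -e
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "init"
git -C "$repo" tag v4
# ls-remote prints "<sha>\trefs/tags/v4"; the SHA is what gets pinned.
sha=$(git ls-remote "$repo" refs/tags/v4 | cut -f1)
# The workflow then references the immutable commit, with the tag as a comment:
echo "uses: actions/checkout@${sha} # v4"
```

Dependabot can then bump the comment and SHA together when a new release appears.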
Rebased to review and correct the current findings.
Force-pushed from e7c98d1 to c0aa784
Update on this PR - mostly good news.
There are, however, still a few errors relating to the freezing of pip dependencies.
The pip requirements.txt install was failing in the build: once SHAs are used, ALL dependencies need to be listed (recursively), and a few were missing; fixed by adding the missing entries.
Also tested loading the new requirements.txt into a clean pip venv, which works fine.
@baentsch the checks are nearly clean, with the only negative OSSF points (score 8/10) caused by:
We have the same on Windows. It's not detected, but we should fix it in the same way. Aside: the OSSF scorecard doesn't know about or check brew packages, which of course can also change the results!
To address the OSSF report, we would need to specify hashes. I did this for our import script (see above), which uses requirements.txt. I think this is the cleanest way to do it here too, and in future it may be easier to maintain if we use Dependabot.
I could create the required requirements.txt somewhere and refactor the above. The big question would be where in the source tree: it could go in the .github/workflows directory, but is this clutter? Somewhere else? What do you think?
The Mac builds are using Python 3.12.x whilst the Windows build is using 3.9.x; this is why the dependencies resolve differently. These versions are the standard, supplied Python versions for the respective runners. I initially used a common file, then a slight variation from a test machine running Windows but with Python 3.12.
Additional consideration: for consistent results, having a managed, specific version of Python may be advisable.
Changed the approach for building requirements.txt.
This is done in a clean venv, and has been tested on macOS (Python 3.12) and Windows (Python 3.9).
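The value of the clean venv is that it contains only pip's bootstrap packages, so a successful install from the hashed requirements.txt proves nothing is being satisfied by leftover packages. A small sketch of that starting state:

```shell
# Sketch: a freshly created venv contains only pip's bootstrap packages,
# so installing the hashed requirements.txt into it verifies that every
# transitive dependency is actually listed in the file.
cd "$(mktemp -d)"
python3 -m venv clean
./clean/bin/python -m pip list --format=freeze
# ...then, inside the venv, one would run:
#   pip install --require-hashes -r requirements.txt
```

If that install succeeds, the file is complete; any missing transitive dependency fails loudly instead of being picked up from the host environment.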
This is now ready for review:
Follow-on activity (after merge)
This is really cool! Thanks for doing the heavy lifting.
Very good, thanks. Also good to know that no software changes are/were required to get to this "green state".
Before doing that, I'd like to
Such documentation seems prudent to have in this PR already (also see my single question), to avoid a situation in which things fail for users and they don't know where to look for documentation to "dig themselves" out of the proverbial hole this may have plunged them into. My suggestion would be to place this in a file "PROCEDURES.md" in the "docs" directory. It should already contain three sub-items:
Good proposal, but this feature must be understandable (by reading documentation) by anyone not attending dev calls too.
Suggest to wait with this until
Please add "self-help" documentation as per discussion item(s).
Thanks Nigel for getting this started! And thanks Michael for a thorough analysis. We will definitely need some hand-holding as this lands and the first few times it pops up during a PR.
To answer the remaining comments:
Both of the above seem reasonable to me, given that the intent of the scorecard-recommended approach is to avoid unknown changes to what is being produced: for example, not just a new but a modified/hacked/illicit version of a library.
PROCEDURES.md - I can add changes there, but would suggest that within this PR it focuses on scorecard, and we raise additional issues to handle DCO and SBOM (which I can also help with). Agree with the point about making sure the change is understandable from the contents of the repo/docs (I could also add a pointer/info in the requirements file itself for extra clarity).
Rolling out to repos - it makes sense to allow a little time (a few weeks?) after the change is merged into liboqs (suggest a checkpoint at the TSC?) before working on the other repos, and of course they should each go through their own PR approval. As to which repos: ultimately, any that has any supported code within it.
So in summary my plan (if reviewers agree) is to:
Will return to TSC/rollout once this PR is merged.
Hopefully I've addressed the review issues. No findings are reported. I propose
Trying to maintain a PR with build changes (albeit minor) is a little tricky in merging, as there is other activity in this area. I hope the changes are sufficient now to close this one out. In addition, proactive scans are already being done by the OpenSSF team, and getting our mitigations in place will improve our already-public score.
@planetf1 could you do a squash commit or similar? |
docs/PROCEDURES.md
Outdated
Currently this is used within `.github/workflows`, but the same principle applies elsewhere.
To make this easier, a version of the `requirements.txt` without hashes has been saved as `requirements.in`. This is to make maintenance easier, but it is not used at script execution time.
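To illustrate the relationship between the two files, a hypothetical pairing (the package, version, and hash placeholders here are invented, not taken from the actual files):

```text
# requirements.in -- edited by hand, no hashes
pyyaml==6.0.1

# requirements.txt -- generated with `pip-compile --generate-hashes`; not hand-edited
pyyaml==6.0.1 \
    --hash=sha256:<hash of the wheel> \
    --hash=sha256:<hash of the sdist>
```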
So when updating a dependency version, one first has to update "requirements.in", then run the tool, right?
Yes.
OK -- then why not make this explicit in the documentation? Along the lines of: "When updating dependencies, a contributor should first update 'requirements.in', then run pip-tools as per the below, then check for correct operation, and only then commit."
Reworded in a more prescriptive way. Hopefully this is clearer. (In the next push.)
Thanks for the documentation regarding version pinning.
Good to know there are different tools that update these hashes. That, to my mind, begs the question of what manual step is required (by committers as well as reviewers) to ascertain that the tool did the right thing and obtained a "proper" hash. Without this manual step, one could obviously simply run the tool "blindly" on the (original) "requirements.in" or the (version comments in the) CI job .yml files, much like an automated preprocessing step that could be integrated into CI... Or is the recommendation to just check whether the hashes change in a PR?
The reason for suggesting it be manual is that the author is then likely to be closely checking the action logs and output, and to be aware if any failures occur. The exact hash could be searched for on GitHub to match the version used. I've added a note about this in PROCEDURES.md.
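One concrete manual check a reviewer can make: recompute the sha256 digest pip records for an artifact and compare it against the requirements.txt entry. A sketch using a stand-in file (in practice you would hash the actual downloaded wheel):

```shell
# Sketch: reproducing the sha256 digest pip records for an artifact, so a
# reviewer can compare it against the entry in requirements.txt.
# The .whl here is a stand-in file, not a real wheel.
cd "$(mktemp -d)"
printf 'stand-in wheel contents' > pkg-1.0-py3-none-any.whl
# prints "--hash=sha256:<digest>", the same form used in requirements.txt
python3 -m pip hash pkg-1.0-py3-none-any.whl
```

The digest `pip hash` prints is the plain sha256 of the file, so `sha256sum` on the downloaded artifact should agree with it.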
Updated docs, rebased - @baentsch please check the updates are OK for you.
@planetf1 Our etiquette is that the person who created a PR also merges it, squashing all commits into one with a reasonably small set of commit messages to let posterity know what this is really about (e.g., removing all commit messages related only to PR discussions). |
Thanks, that's helpful and what I expected. I'll sort the squash out soon (juggling tasks!) and merge.
… action permissions open-quantum-safe#1706 Signed-off-by: Nigel Jones <jonesn@uk.ibm.com>
Squashed to one commit following the successful review. @baentsch @dstebila I do not have permissions to merge. Once tests complete, would you be able to do this?
@planetf1 do you want me to merge it? |
Merged (shortened title a bit, moving text to contents). Thanks for the contribution @planetf1 ! |
Adds a GitHub Action to run the OpenSSF Scorecard tool, and posts results to Security -> Code scanning.
Fixes #1706
This is a new CI action. Its only effect is:
No branch protection rules are being changed to require successful completion of this action
No additional testing is required
Example output is shown in the referenced issue
As per https://github.com/ossf/scorecard, it's possible some additional permissions may be needed (depending on branch protection setup). These can be dealt with in a second iteration.
This PR also does not attempt to address any findings from the scorecard scan. It's purely activating the tool so that we can review and act on those findings subsequently.
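For orientation, a minimal sketch of what such a workflow typically looks like. The action names and inputs follow my reading of the ossf/scorecard-action README and should be treated as an assumption, not the exact contents of this PR; the version tags shown would be replaced with full commit SHAs, in line with the scorecard's own pinning recommendation:

```yaml
# Sketch of a minimal scorecard workflow (not the exact scorecard.yml in
# this PR; pin each action to a full commit SHA in practice).
name: Scorecard analysis
on:
  push:
    branches: [ main ]
  pull_request:

permissions: read-all

jobs:
  analysis:
    runs-on: ubuntu-latest
    permissions:
      security-events: write   # upload the SARIF file to code scanning
      id-token: write          # needed when publishing results to OpenSSF
    steps:
      - uses: actions/checkout@v4              # pin to a commit SHA in practice
        with:
          persist-credentials: false
      - uses: ossf/scorecard-action@v2         # pin to a commit SHA in practice
        with:
          results_file: results.sarif
          results_format: sarif
      - uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: results.sarif
```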
Draft for