
docs(readme): CII Best Practices badge 100% completion #357

Closed
4 tasks done
petermetz opened this issue Nov 4, 2020 · 4 comments
Labels: bug · documentation · good-first-issue · Hacktoberfest · P1 (Priority 1: Highest) · Security


petermetz commented Nov 4, 2020

Checklist to resolve

  • Refactor the maintainer group into two separate groups: one for maintainers as-is, and another for security admins (Peter volunteered for this)
  • Dependabot alerts fixed (Peter volunteered for this)
  • ci: add fuzzer tests for our APIs (#495): fuzzer test cases for the API server and all the plugins (Dave is helping out by looking at options)
  • Figure out why the LGTM.com tool is not firing for us despite being configured to do so (Ry volunteered to help)
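
The fuzzer checklist item above can be illustrated with a minimal random-input loop. This is a sketch only, with hypothetical names throughout: `validateConsortiumRequest` is a stand-in for a real API server handler, not an actual Cactus function. The core idea is to throw randomized, often malformed inputs at a handler and assert it never crashes, only returns a well-formed accept/reject:

```javascript
// Hypothetical validator standing in for a real API server request handler.
// A robust handler should return a boolean for ANY input, never throw.
function validateConsortiumRequest(input) {
  if (typeof input !== "object" || input === null) return false;
  return typeof input.consortiumId === "string" && input.consortiumId.length > 0;
}

// Generate a random, possibly malformed input value.
function randomValue(depth = 0) {
  const choices = [
    () => null,
    () => undefined,
    () => Math.random() * 1e9,
    () => Math.random().toString(36).slice(2),
    () => (depth < 2 ? { consortiumId: randomValue(depth + 1) } : {}),
    () => [],
  ];
  return choices[Math.floor(Math.random() * choices.length)]();
}

// Fuzz loop: count any throw or non-boolean return as a finding.
let findings = 0;
for (let i = 0; i < 1000; i++) {
  try {
    const result = validateConsortiumRequest(randomValue());
    if (typeof result !== "boolean") findings++;
  } catch (e) {
    findings++; // a crash on malformed input is exactly what a fuzzer should surface
  }
}
console.log(`fuzz iterations: 1000, findings: ${findings}`);
```

A real setup for the API server would generate inputs from the OpenAPI request schemas and run against the live HTTP endpoints, but the invariant being checked is the same.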

Describe the bug

Depends on merging PR #338, which already takes care of the CII checklist item "Release Notes" (screenshot below).
(Screenshot: cactus-cii-badge-release-notes-2020-11-04 14-22-41)

We are not yet passing the CII Best Practices check (based on the TSC mailing list, it looks like we need 100%):
https://bestpractices.coreinfrastructure.org/en/projects/4089

To Reproduce

Visit the link above, or the project README where the badge is displayed, and observe less than 100% compliance.

Expected behavior

We should have 100% compliance on the badge.

Screenshots

(Screenshot: cactus-cii-badge-app-2020-11-04 14-09-46)

Hyperledger Cactus release version or commit (git rev-parse --short HEAD):

v0.2.0

cc: @sfuji822 @takeutak @jonathan-m-hamilton @dhuseby @dcmiddle

@petermetz added the bug, good-first-issue, Security, and Hacktoberfest labels on Nov 4, 2020
@petermetz added this to the v0.3.0 milestone on Nov 4, 2020
@petermetz added the documentation label on Nov 4, 2020
dcmiddle commented Nov 4, 2020

Thanks for the ping @petermetz
If you have any reactions to the badge process, feel free to drop them here. For example: question XX was not relevant, or, if not for question XX, we wouldn't have realized this was a best practice. Was the process overall helpful, bureaucratic, or something else?

@petermetz (Contributor, Author) commented:

> Thanks for the ping @petermetz
> If you have any reactions to the badge process feel free to drop them here. For example, question xx was not relevant or if not for xx we wouldn't have realized this was a best practice. Overall the process was helpful or bureaucratic or something else.

@dcmiddle Overall the process was useful, IMO. It wasn't very fun, of course, to grind through the questions, but I don't see how that could be improved.
There were questions that I marked as not relevant, but I would actually advocate keeping all of those as well (as long as there is an option to say that they are not relevant). Here's why: it forces people to think each question through at least once (at least, that's what it did for me).
A great example is the memory safety questions, which someone working on a NodeJS project could just hand-wave away; but if the project uses native C/C++ addons, then suddenly they are relevant and need attention from the maintainers.


Future improvement idea that's not really tied to any specific question:

A sort-of meta-metric that shows the number of peer reviews (and the reviewers' names, if they choose to publish them) from others, similar to how people can peer review and sign off on a research paper.

Why? There's a big difference between me clicking through a questionnaire and saying "yeah, sure" to everything (the urge is always high to call one's own baby beautiful) and 5 other people also being in agreement with all my self-reported answers.
The free-form explanation input fields we have for all the answers are a good start, IMO, but they could be taken to the next level with the peer review mechanism described above, where reviewers could say: "I clicked on the link you said would work for reporting vulnerabilities, but it went to an HTTP 404, so thumbs down on this."


Another meta-metric suggestion would be to have the form show whether the answers have been updated regularly (with a recommended period of every six months or a year, maybe).

Why? Maybe this month I have no code with manual memory management in the repo, but next week someone adds some inline assembly (or whatever else) as a performance optimization, and suddenly it's relevant and we need to take care of it according to the CII badge questions.

dcmiddle commented Nov 9, 2020

Very helpful feedback. Thanks @petermetz!

@petermetz petermetz self-assigned this Nov 19, 2020
@petermetz petermetz modified the milestones: v0.3.0, v0.4.0 Jan 8, 2021
@petermetz petermetz modified the milestones: v0.4.0, v0.7.0 Feb 9, 2021
@petermetz petermetz removed their assignment Feb 9, 2021
@petermetz petermetz modified the milestones: v0.7.0, v0.8.0 Aug 10, 2021
@petermetz petermetz modified the milestones: v0.8.0, v0.9.0 Aug 17, 2021
@petermetz petermetz modified the milestones: v0.9.0, v0.10.0 Sep 2, 2021
@petermetz petermetz modified the milestones: v0.10.0, v2.0.0 Aug 20, 2023
@petermetz petermetz self-assigned this Aug 20, 2023
@petermetz petermetz added the P1 Priority 1: Highest label Aug 20, 2023
@petermetz petermetz moved this from Todo to In Progress in Cacti_Scrum_Project_v2_Release Sep 17, 2023
@petermetz (Contributor, Author) commented:

From https://github.blog/2022-08-15-the-next-step-for-lgtm-com-github-code-scanning/

> Three years ago, the team that built LGTM.com joined GitHub. From that moment on, we have worked tirelessly to natively integrate its underlying CodeQL analysis technology into GitHub. In 2020, GitHub code scanning was launched in public beta, and later that year it became generally available for everyone. GitHub code scanning is powered by the very same analysis engine: CodeQL.
>
> We’ve since continued to invest in CodeQL and GitHub code scanning. Today, GitHub code scanning has all of LGTM.com’s key features—and more! The time has therefore come to announce the plan for the gradual deprecation of LGTM.com.
>
> End of August 2022: no more user sign-ups and new repositories

TL;DR: LGTM.com is gone; its replacement is CodeQL-based GitHub code scanning, which we've had for years, so we can consider this task finished as such.
