[EPIC] [MVP] Improvements to Thoth advises output #434
Comments
@mayaCostantini: This issue is currently awaiting triage. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/sig user-experience
One of the requirements for computing software stack quality scores based on OSSF Scorecards would be to have Scorecards data linked to each project's latest release instead of the repository's head commit SHA. This feature request has already been proposed on the Scorecards project side. What about helping them implement this feature and improving the Scorecards cronjob directly, instead of computing this data on our side? cc @goern
This sounds reasonable. Nevertheless, would we consume the data via BigQuery?
Yes, but the information will already be computed in the dataset, and we will not need to associate the head commit SHA with the release ourselves.
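To illustrate the comment above, here is a minimal sketch of resolving a project's latest release to a commit SHA. It assumes the payload shapes of the public GitHub REST API (a release object with `tag_name`, and the tags listing from `GET /repos/{owner}/{repo}/tags`); the helper name is hypothetical and not part of any Thoth component:

```python
from typing import Optional


def latest_release_sha(release: dict, tags: list) -> Optional[str]:
    """Resolve the tag of a GitHub release payload to its commit SHA
    using a tags listing payload (GET /repos/{owner}/{repo}/tags)."""
    tag_name = release.get("tag_name")
    for tag in tags:
        if tag.get("name") == tag_name:
            return tag["commit"]["sha"]
    return None  # tag not present in the listing


# Example payloads shaped like the GitHub API responses:
release = {"tag_name": "v1.2.0"}
tags = [
    {"name": "v1.2.0", "commit": {"sha": "abc123"}},
    {"name": "v1.1.0", "commit": {"sha": "def456"}},
]
print(latest_release_sha(release, tags))  # abc123
```

If the OSSF BigQuery dataset ships release-linked data directly, this resolution step would no longer be needed on our side.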
Problem statement
As a Python Developer,
I would like to have concise information about the quality of my software stack and all its transitive dependencies,
so that I get some absolute metrics such as:
These metrics would be aggregated and compared against metrics for packages present in Thoth's database to provide a global quality metric for a given software stack, possibly per criterion (maintenance, code quality, ...), expressed as a percentage or a letter grade (A, B, C, ...).
We consider metrics derived from direct and transitive dependencies to be equally important, so no extra weight is given to either type of dependency.
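The equal-weighting statement above can be sketched as a plain unweighted mean over all resolved packages; the function name and the 0-100 score scale are illustrative assumptions, not part of the issue:

```python
from statistics import mean


def stack_score(package_scores: dict) -> float:
    """Aggregate per-package quality scores (0-100) into one stack score.
    Direct and transitive dependencies carry the same weight, so this is
    a plain unweighted mean over every package in the resolved stack."""
    return mean(package_scores.values())


# Direct (flask) and transitive (werkzeug, click) deps weigh the same:
scores = {"flask": 80.0, "werkzeug": 60.0, "click": 70.0}
print(stack_score(scores))  # 70.0
```

A weighted variant (e.g. favoring direct dependencies) would only require replacing the mean with a weighted average, but that is explicitly out of scope here.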
Proposal description
- Compute the software stack quality score when the --scoring flag is passed on thamos advise (thamos#1149).
- Taking the example of OSSF Scorecards, we already aggregate this information in prescriptions, which are used directly by the adviser. However, the aggregation logic in prescriptions-refresh-job only updates prescriptions for packages already present in the repository. We could either aggregate Scorecards data for more packages using the OSSF BigQuery dataset, or have our own tool that computes Scorecards metrics on each new package release, which could be integrated directly into package-update-job, for instance. This would most likely consist of a simple script querying the GitHub API and computing the metrics on the project's last release commit.
- Run this aggregation in package-update-job or on a regular schedule.
- For example, if a software stack is in the 95th percentile of packages with the best development practices (CI/CD, testing, ...), score it as "A" for this category. Compute a global score from the different category scores.
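The percentile-to-grade example above can be sketched as follows; the grade thresholds below A (and the averaging of category percentiles into a global grade) are illustrative assumptions, since the issue only fixes the 95th percentile for "A":

```python
def letter_grade(percentile: float) -> str:
    """Map a percentile rank (0-100) to a letter grade.
    Only the 95th-percentile "A" cutoff comes from the issue;
    the remaining thresholds are illustrative."""
    if percentile >= 95:
        return "A"
    if percentile >= 75:
        return "B"
    if percentile >= 50:
        return "C"
    return "D"


def global_grade(category_percentiles: dict) -> str:
    """Compute a global grade from per-category percentiles
    (e.g. CI/CD, testing, maintenance) by grading their average."""
    avg = sum(category_percentiles.values()) / len(category_percentiles)
    return letter_grade(avg)


categories = {"CI/CD": 96.0, "testing": 90.0, "maintenance": 85.0}
print(letter_grade(96.0))        # A
print(global_grade(categories))  # B (average is ~90.3)
```

Whether the global score should be an average of category grades or something more nuanced (e.g. the minimum category grade) is left open by the issue.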
Additional context
If implemented, these improvements would most likely give maintainers of a project a way to show their users that they use a trusted software stack. AFAICS, this would not provide any actionable feedback to developers about their dependencies.
Actionable items
Acceptance Criteria
To define.