Replies: 5 comments 11 replies
-
Some example idea to showcase threads. New threads can be created by scrolling all the way to the bottom and starting a comment. New replies to existing threads can be added by writing a reply.
-
I suggest: use the actual version that applies, and do not “normalize” (modify) it in any way.
-
If you are going to do these kinds of assessments, and advertise their results, you really need to provide more information on the methodology used. I mean down to the actual CLI commands you ran, and the repositories you ran them against. You should also publish the raw results so people can verify your results. Many of these tools have a wide range of flags and options that can impact results in one way or another. Trade-offs with those options are usually documented. "We ran tool X" is not sufficient information.
So the tools didn't identify the licenses, but how did you? Did you use automation? Was it a manual review? Were those licenses well-known ones from the SPDX catalogue, or custom ones?
How was that assessment done across the 9970 repositories? Did you do it manually? How did you verify that something is accurate?
-
I do not understand what this discussion is about.
This post feels to me both confused and confusing. Me wants data!
-
I am yawning. cdxgen v9 was used instead of v10. Also, there are a number of parameters and prerequisites that need to be followed correctly. Let me know if you're interested in a proper evaluation, which requires a payment, since we don't offer consulting services for free.
-
Assessment results on discrepancies in the SBOM ecosystem, and some suggestions
Background
As SBOMs can be widely used in software supply chain management, the capabilities of and issues within the SBOM ecosystem influence how users deploy them, so accurate assessment of the current state of SBOMs is important. To this end, we have conducted a series of assessments on key characteristics of SBOM applications to reveal the discrepancies that potentially hinder usage.
Questions
We asked 3 questions:
1. Compliance: Do SBOM tools generate outputs that adhere to user requirements and standards?
2. Consistency: Do SBOM tools maintain consistency in transforming the produced SBOM?
3. Accuracy: How accurate are the SBOMs produced by tools in reflecting the actual software?
We assessed these questions on 9,970 SBOM documents generated by 6 SBOM tools (sbom-tool, ort, syft, gh-sbom, cdxgen and scancode) in both SPDX and CycloneDX formats across 1,162 GitHub repositories. To evaluate accuracy, 100 repositories were annotated as a benchmark, comprising 660 components and 4,000 data fields.
Results
This table shows average results across all 6 tools; all results are at package level. Note that results for information about the software itself are quite poor: for instance, 89.59% of the repositories contain licenses, yet only a minority are identified.
The findings indicate that while the SBOM tools fully (100%) support mandatory standard requirements (Doc.: specVersion, License, Namespace, Creator; Comp.: Name, Identifier, downloadLocation, verificationCode), their support for user use cases is only 49.37%, and consistency within those supported use cases averages 17.63% (as the table shows). Accuracy assessments reveal significant discrepancies, with accuracy rates of 8.62%, 25.81%, and 12.3% for software metadata, identified dependent components, and detailed component information respectively, underscoring substantial areas for improvement within the SBOM ecosystem.
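As a concrete reading of the compliance question, here is a minimal sketch of a mandatory-field presence check. The flat-dictionary layout and the lowercase field spellings are assumptions for illustration only; this is not a real SPDX or CycloneDX parser.

```python
# Mandatory fields per the list above (spellings adapted for illustration;
# real SPDX/CycloneDX documents use their own schemas).
DOC_MANDATORY = ["specVersion", "license", "namespace", "creator"]
COMP_MANDATORY = ["name", "identifier", "downloadLocation", "verificationCode"]

def missing_fields(doc):
    """Return mandatory fields that are absent or empty in an SBOM-like dict."""
    missing = [f for f in DOC_MANDATORY if not doc.get(f)]
    for comp in doc.get("components", []):
        missing += [f for f in COMP_MANDATORY if not comp.get(f)]
    return missing

# Hypothetical document with one component lacking a verification code.
sbom = {
    "specVersion": "SPDX-2.3",
    "license": "CC0-1.0",
    "namespace": "https://example.com/sbom/1",
    "creator": "Tool: example-tool",
    "components": [{"name": "pkg-a",
                    "identifier": "pkg:pypi/pkg-a@1.0.0",
                    "downloadLocation": "NOASSERTION"}],
}
print(missing_fields(sbom))  # the one missing component field
```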
Suggestions
1. Some tools record `name` together with its information source, like pip, maven, npm, etc., while others do not. In `version`, tools vary in their recording conventions, such as whether to add a 'v' before the version string. This leads to problems when utilizing SBOMs from different tools. We suggest requiring tools to specify their recording pattern wherever the standard lacks an explicit specification.
2. `NOASSERTION`, `NONE` and `None` can be confusing in specific data fields. For instance, `version` can naturally be empty when developers did not record it in the software; tools render such empty values as an empty string or as one of the three forms, which can lead to inconsistency in further exchange. We suggest providing a dedicated mark for these naturally empty data fields.
3. `hashes` does not even specify the object the hash is computed over. We suggest requiring tools that create checksums to explicitly describe their process, e.g. any salt value or other preprocessing.

We hope our findings can help improve the SBOM ecosystem; any questions or discussions are welcome.
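To illustrate the version-prefix and empty-marker issues above, here is a minimal sketch (the conventions shown are assumed for illustration, not taken from our dataset) of how differing recordings cause spurious mismatches between tools, and how a normalizer could reconcile them:

```python
# Forms we have observed tools use for "no value"; the empty string is
# included because some tools emit it instead of a marker.
EMPTY_MARKERS = {"NOASSERTION", "NONE", "None", ""}

def normalize_version(raw):
    """Normalize a version string for cross-tool comparison.

    Collapses the empty-value markers into a single canonical None and
    strips a leading 'v'/'V' prefix, which some tools add and others omit.
    """
    if raw is None or raw.strip() in EMPTY_MARKERS:
        return None
    v = raw.strip()
    # Only strip the prefix when a digit follows, so names like
    # "vendor-build" are left untouched.
    if v[:1] in ("v", "V") and v[1:2].isdigit():
        v = v[1:]
    return v

# Naive string comparison mismatches; the normalized forms agree.
assert "v1.2.3" != "1.2.3"
assert normalize_version("v1.2.3") == normalize_version("1.2.3")
assert normalize_version("NOASSERTION") is None
```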
Fast check on code
We provide fast-check code here, based on part of our dataset.
Examples:
For checksums, here are examples where the file and hash algorithm both match, yet the resulting checksums still differ:
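As a hypothetical illustration of how such a mismatch can arise (the file content and the preprocessing step below are assumptions, not drawn from our dataset): the same logical file hashed with the same algorithm yields different digests when one tool hashes the raw bytes while another first normalizes line endings without documenting that step.

```python
import hashlib

# Same logical file, CRLF line endings as checked out on e.g. Windows.
content_crlf = b"package: example\r\nversion: 1.0.0\r\n"

# Tool A hashes the raw bytes; tool B normalizes CRLF to LF first.
digest_raw = hashlib.sha256(content_crlf).hexdigest()
digest_normalized = hashlib.sha256(content_crlf.replace(b"\r\n", b"\n")).hexdigest()

# Same file, same algorithm (SHA-256), different recorded checksum.
assert digest_raw != digest_normalized
```

This is exactly why we suggest that tools document any preprocessing applied before hashing.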