Here are the rough steps today with the v1-v5 schemas to get a match from the DB, starting within a matcher: matchers use the search package Criteria to access the given vulnerability.Provider, where the provider is DB-schema agnostic and is passed into the matcher.

Some observations out of this are:

- The search functions are static and require continually passing a provider (sketched below).
  - In v6 we improve on this by requiring the matcher to instantiate a client at construction, with specific configuration and access to the store; the client then provides the raw search functionality.
- The provider is DB agnostic, which requires that all data is fully deserialized from the DB (even if it's not needed).
  - In v6 we improve on this by searching against indexed tables and only deserializing the related blobs if they are needed beyond a specific step.
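For context, the current call pattern looks roughly like the following (a minimal sketch with simplified, hypothetical function names and signatures -- not the exact v5 API): every static search helper takes the provider again, and every result comes back fully deserialized.

```go
// Illustrative v1-v5 style only: function names and signatures are hypothetical.
// The provider must be threaded through every static search call, and each call
// returns fully deserialized vulnerability records.
func findMatches(provider vulnerability.Provider, p pkg.Package, matcherType match.MatcherType) ([]match.Match, error) {
	cpeMatches, err := search.ByPackageCPE(provider, p, matcherType)
	if err != nil {
		return nil, err
	}

	langMatches, err := search.ByPackageLanguage(provider, p, matcherType)
	if err != nil {
		return nil, err
	}

	return append(cpeMatches, langMatches...), nil
}
```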
Changes

(feel free to browse the prototype)

- The search package should be where DB model deserialization occurs, to leverage as many optimizations as possible while searching. This would remove some unnecessary abstractions (the vulnerability.Provider).
- Matchers search by criteria against a client, where the client is driven by search criteria and is passed into the matcher at matcher construction.
Motivating example (not finalized)
```go
// from within the search package

type Resources struct {
	Store             v6.StoreReader
	AttributedMatcher match.MatcherType
}

type Criteria func(Resources) ([]match.Match, error)

type Interface interface {
	GetMetadata(id, namespace string) (*vulnerability.Metadata, error)
	ByCriteria(criteria ...Criteria) ([]match.Match, error)
}

type Client struct {
	resources Resources
}

func NewClient(store v6.StoreReader, matcherType match.MatcherType) *Client {
	return &Client{
		resources: Resources{
			Store:             store,
			AttributedMatcher: matcherType,
		},
	}
}

func (c Client) ByCriteria(criteria ...Criteria) ([]match.Match, error) {
	var matches []match.Match
	for _, criterion := range criteria {
		m, err := criterion(c.resources)
		if err != nil {
			return nil, err
		}
		// TODO: add matcher type to all matches...
		matches = append(matches, m...)
	}
	return matches, nil
}
```
Example search criteria function
```go
// from within the search package

func ByCPE(p pkg.Package) Criteria {
	return func(r Resources) ([]match.Match, error) {
		// use db v6 specific indexes to raise matches -- r.Store.Get*()
		// use common functions like onlyVulnerableMatches(), etc.,
		// to account for platform CPE, version filtering, etc.
		return nil, nil // placeholder
	}
}
```
This allows a matcher to implement its own custom criteria while also using the common criteria.
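For instance, a matcher could be wired up roughly like this (a sketch only -- the matcher type, the Match signature, and the custom byKnownAffectedVersions criteria are made up for illustration):

```go
// Hypothetical matcher wiring: the client is created once at matcher construction,
// then each match call mixes common and matcher-specific criteria.
type Matcher struct {
	client *search.Client
}

func NewMatcher(store v6.StoreReader) *Matcher {
	return &Matcher{
		client: search.NewClient(store, match.RpmMatcher),
	}
}

func (m Matcher) Match(p pkg.Package) ([]match.Match, error) {
	return m.client.ByCriteria(
		search.ByCPE(p),            // common criteria from the search package
		byKnownAffectedVersions(p), // custom criteria owned by this matcher
	)
}

// a matcher-specific criteria function (illustrative stub)
func byKnownAffectedVersions(p pkg.Package) search.Criteria {
	return func(r search.Resources) ([]match.Match, error) {
		// use r.Store indexes plus ecosystem-specific version comparison here
		return nil, nil
	}
}
```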
Note from the above example that we're able to get raw DB models, but we're still getting them from a store object that is tailored to know how to access those objects efficiently.

A motivating example (not final): each shard of the store accumulates into a full store object (reader and writer too).
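A rough sketch of that shape (a sketch only; the shard, model, and method names here are hypothetical, not the prototype's actual code):

```go
// placeholder models (hypothetical)
type AffectedPackageHandle struct{ /* indexed columns + blob reference */ }
type VulnerabilityHandle struct{ /* indexed columns + blob reference */ }

// Hypothetical: each database object gets a small reader/writer "shard"...
type AffectedPackageStoreReader interface {
	GetAffectedPackagesByName(name string) ([]AffectedPackageHandle, error)
}

type AffectedPackageStoreWriter interface {
	AddAffectedPackages(packages ...*AffectedPackageHandle) error
}

type VulnerabilityStoreReader interface {
	GetVulnerabilitiesByID(id string) ([]VulnerabilityHandle, error)
}

type VulnerabilityStoreWriter interface {
	AddVulnerabilities(vulns ...*VulnerabilityHandle) error
}

// ...and the shards accumulate into the full store object (reader and writer too).
type StoreReader interface {
	AffectedPackageStoreReader
	VulnerabilityStoreReader
	// + DBMetadataStoreReader, ProviderStoreReader, BlobStoreReader, ...
}

type StoreWriter interface {
	AffectedPackageStoreWriter
	VulnerabilityStoreWriter
	// + DBMetadataStoreWriter, ProviderStoreWriter, BlobStoreWriter, ...
}

type Store interface {
	StoreReader
	StoreWriter
}
```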
A store implementation is provided for all available database objects, embedded into the final Store interface.
The DB search client queries and refines the final set of vulnerability candidates (using the DB-specific store reader injected into the client). The search methods access the DB through the raw sqlite models, including the ability to optionally fetch associated blob values or not -- this deferral is critical to the performance gains.
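As a concrete illustration of that deferral, a shard's read method could make blob hydration opt-in. This is a minimal sketch assuming a gorm-style ORM over the raw sqlite models; the type, field, and option names are hypothetical:

```go
// Hypothetical models: the handle row holds the indexed columns, and the larger
// JSON blob lives in a separate table that is only read on request.
type AffectedPackageHandle struct {
	ID          int64
	PackageName string
	BlobID      int64
	Blob        *Blob // deserialized payload; only populated when preloaded
}

type Blob struct {
	ID    int64
	Value string
}

type GetAffectedPackageOptions struct {
	PreloadBlob bool // when false, only the indexed columns are returned
}

type affectedPackageStore struct {
	db *gorm.DB
}

func (s *affectedPackageStore) GetAffectedPackagesByName(name string, opts GetAffectedPackageOptions) ([]AffectedPackageHandle, error) {
	query := s.db.Where("package_name = ?", name)

	// defer fetching/deserializing the blob unless the caller actually needs it --
	// this deferral is where the performance gain comes from
	if opts.PreloadBlob {
		query = query.Preload("Blob")
	}

	var handles []AffectedPackageHandle
	if err := query.Find(&handles).Error; err != nil {
		return nil, err
	}
	return handles, nil
}
```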
This implies the following incremental additions, each with ways to read and write entries to and from the DB:

- DBMetadataStore (Add v6 DB metadata store #2146)
- ProviderStore (Add v6 provider store #2232)
- BlobStore (Add v6 vulnerability & blob stores #2243)
- AffectedPackageStore (Add AffectedPackage store #2245)
- VulnerabilityStore (Add v6 vulnerability & blob stores #2243)

This implies that a new search client needs to be implemented with existing (common) criteria:

- Add ByCPECriteria
- Add ByLanguageCriteria
- Add ByDistroCriteria

Ideally all of these changes are done incrementally and do not affect the existing v5 implementation. We should only remove the v5 implementation when we are ready to cut over to v6. This also implies that we should consider making the search client a shared concern, with the criteria implemented within each DB schema -- this is still open to design/options.