Coalesce common plain filters in a bucket into a single trie #528
Is there some special reason ABP needs such double filters?
Related issue:
- uBlockOrigin/uBlock-issues#528 (comment)

Following the STrie-related work in the above issue, I noticed that a large number of filters in EasyList were filters which only had to match against the document origin. For instance, among just the top 10 most populous buckets, there were four such buckets with over a hundred entries each:

- bits: 72, token: "http", 146 entries
- bits: 72, token: "https", 139 entries
- bits: 88, token: "http", 122 entries
- bits: 88, token: "https", 118 entries

The filters in these buckets have to be matched against all network requests. In order to leverage HNTrie for these filters[1], they are now handled in a special way so as to ensure they all end up in a single HNTrie (per bucket), which means that instead of scanning hundreds of entries per URL, there is now a single scan per bucket per URL for these apply-everywhere filters.

Now, any filter which fulfills ALL of the following conditions will be processed in a special manner internally:

- Is of the form `|https://` or `|http://` or `*`; and
- Does have a `domain=` option; and
- Does not have a negated domain in its `domain=` option; and
- Does not have a `csp=` option; and
- Does not have a `redirect=` option

If a filter does not fulfill ALL the conditions above, there is no change in behavior. A filter which matches ALL of the above will be processed in a special manner:

- The `domain=` option will be decomposed so as to create as many distinct filters as there are distinct values in the `domain=` option.
- This also applies to the `badfilter` version of the filter, which means it now becomes possible to `badfilter` only one of the distinct filters without having to `badfilter` all of them.
- The logger will always report these special filters with only a single hostname in the `domain=` option.

***

[1] HNTrie is currently WASM-ed on Firefox.
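To make the mechanism concrete, here is a minimal sketch of the decomposition and per-bucket hostname-trie idea. It is an illustration only: the names and the Set-based trie stand-in are hypothetical, not uBO's actual HNTrie code.

```js
// Minimal sketch (hypothetical names, not uBO's actual code): decompose an
// apply-everywhere filter into one entry per hostname in its `domain=` option,
// so all such entries for a bucket can live in a single hostname trie.
const decomposeApplyEverywhere = (pattern, domainOption) => {
    // "example.com|example.org" => [ "example.com", "example.org" ]
    return domainOption.split('|').map(hostname => ({ pattern, hostname }));
};

// Each bucket keeps one trie of origin hostnames instead of many filters.
class OriginTrieBucket {
    constructor() { this.hostnames = new Set(); } // stand-in for an HNTrie
    add(entry) { this.hostnames.add(entry.hostname); }
    matches(originHostname) {
        // An HNTrie walks hostname labels right-to-left; a Set lookup plus a
        // parent-domain walk approximates "origin matches a stored hostname".
        for ( let hn = originHostname; hn !== ''; ) {
            if ( this.hostnames.has(hn) ) { return true; }
            const pos = hn.indexOf('.');
            hn = pos === -1 ? '' : hn.slice(pos + 1);
        }
        return false;
    }
}

// Usage: `|https://$domain=example.com|example.org` becomes two entries,
// both coalesced into the same per-bucket trie.
const bucket = new OriginTrieBucket();
for ( const entry of decomposeApplyEverywhere('|https://', 'example.com|example.org') ) {
    bucket.add(entry);
}
console.log(bucket.matches('sub.example.com')); // true
console.log(bucket.matches('unrelated.net'));   // false
```

The point of the decomposition is that each resulting entry carries exactly one hostname, so the whole bucket reduces to a single hostname lookup per request instead of one test per filter.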
@gorhill A filter like Edit:
Edit2: Well, that is not happening, the logger reports
Yes my bad, it's a regression, I overlooked that case. It's an easy fix, I will post an update asap.
This was a regression introduced in 3f3a154.

Reported in issue:
- uBlockOrigin/uBlock-issues#528 (comment)
The bidirectional trie allows storing the right and left parts of a string into a trie given a pivot position.

Related issue:
- uBlockOrigin/uBlock-issues#528

Additionally, the mandatory token-at-index-0 rule for FilterPlainHnAnchored has been lifted, thus allowing the engine to pick a potentially better token at any position in the filter string.

***

TODO: Eventually rename `strie.js` to `biditrie.js`.
TODO: Fix dump() method, it currently only shows the right-hand side of a filter string.
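For readers unfamiliar with the idea, here is a tiny sketch of what matching left and right of a pivot means. It is an approximation for illustration only; the function name and example pattern are made up, and the real biditrie shares the prefixes/suffixes of many patterns around the pivot rather than matching one pattern at a time.

```js
// Sketch (not uBO's actual biditrie code): given a pattern and the position of
// its token within it, match the URL rightward from the token position, then
// leftward from it -- the two directions a "bidi" trie scans.
const matchesAroundPivot = (url, urlTokenPos, pattern, patternTokenPos) => {
    const left = pattern.slice(0, patternTokenPos);   // part before the token
    const right = pattern.slice(patternTokenPos);     // token and what follows
    return url.startsWith(right, urlTokenPos) &&      // scan to the right
           url.endsWith(left, urlTokenPos);           // scan to the left
};

// Usage: the token "ads" pivots the hypothetical pattern "/banner/ads/".
const url = 'https://example.com/banner/ads/img.gif';
console.log(matchesAroundPivot(url, url.indexOf('ads'), '/banner/ads/', '/banner/'.length)); // true
```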
With the above commit, the number of discrete filters in the array went from 57 to 19 for the "ad" bucket, and 77 to 7 in the "ads" bucket (the two largest buckets).
@mjethani @monzta @ryanbr @kzar Could you please reply?
@mapx In the end, it does not really matter anymore, I added code in 1.19.0[1] to deal with these efficiently, so they are no longer a concern to me.

[1] Incidentally, sleazy uBlock has a commit with a resembling commit message.
I don't believe there is any specific reason that Adblock Plus should require these, but if filter list authors are trying to distinguish between HTTP(S) and other protocols, I can understand. Now we could address this with a regular expression filter (i.e.

Historically it has been said that "regular expression filters are inefficient" because Adblock Plus developers say so. This is now rubbish.

Please let us know what works for you, we will optimize Adblock Plus for it (as long as I am around).
Related issues:
- uBlockOrigin/uBlock-issues#761
- uBlockOrigin/uBlock-issues#528

The previous bidi-trie code could only hold filters which are plain patterns, i.e. with no wildcard characters, no origin option (`domain=`), no right and/or left anchor, and no `csp=` option.

Examples of filters that could be moved into a bidi-trie data structure:

    &ad_box_
    /w/d/capu.php?z=$script,third-party
    ||liveonlinetv247.com/images/muvixx-150x50-watch-now-in-hd-play-btn.gif

Examples of filters that could NOT be moved to a bidi-trie:

    -adap.$domain=~l-adap.org
    /tsc.php?*&ses=
    ||ibsrv.net/*forumsponsor$domain=[...]
    @@||imgspice.com/jquery.cookie.js|$script
    ||view.atdmt.com^*/iview/$third-party
    ||postimg.cc/image/$csp=[...]

Ideally the filters above should be able to be moved to a bidi-trie since they are basically plain patterns, or at least partially moved to a bidi-trie when there is only a single wildcard (i.e. made of two plain patterns).

Also, there were two distinct bidi-tries into which plain-pattern filters could be moved: one for patterns without hostname anchoring and another one for patterns with hostname anchoring. This was required because the hostname-anchored patterns have an extra condition which is outside the bidi-trie's knowledge.

This commit expands the number of filters which can be stored in the bidi-trie, and also removes the need to use two distinct bidi-tries.

- Added the ability to associate a pattern with an integer in the bidi-trie [1].
- The bidi-trie match code passes this externally provided integer when calling an externally provided method used for testing extra conditions that may be present for a plain pattern found to be matching in the bidi-trie.
- Decomposed existing filters into smaller logical units:
  - FilterPlainLeftAnchored => FilterPatternPlain + FilterAnchorLeft
  - FilterPlainRightAnchored => FilterPatternPlain + FilterAnchorRight
  - FilterExactMatch => FilterPatternPlain + FilterAnchorLeft + FilterAnchorRight
  - FilterPlainHnAnchored => FilterPatternPlain + FilterAnchorHn
  - FilterWildcard1 => FilterPatternPlain + [ FilterPatternLeft or FilterPatternRight ]
  - FilterWildcard1HnAnchored => FilterPatternPlain + [ FilterPatternLeft or FilterPatternRight ] + FilterAnchorHn
  - FilterGenericHnAnchored => FilterPatternGeneric + FilterAnchorHn
  - FilterGenericHnAndRightAnchored => FilterPatternGeneric + FilterAnchorRight + FilterAnchorHn
  - FilterOriginMixedSet => FilterOriginMissSet + FilterOriginHitSet
- Instances of FilterOrigin[...] and FilterDataHolder can also be added to a composite filter to represent `domain=` and `csp=` options.
- Added a new filter class, FilterComposite, for filters which are a combination of two or more logical units. A FilterComposite instance is a match when *all* filters composing it are a match.

Since filters are now encoded into combinations of smaller units, it becomes possible to extract the FilterPatternPlain component and store it in the bidi-trie, and to use the integer as a handle for the remaining extra conditions, if any. Since a single pattern in the bidi-trie may be a component of different filters, the associated integer points to a sequence of extra conditions, and a match occurs as soon as one of the extra conditions (which may itself be a sequence of conditions) is fulfilled.

Decomposing filters which were previously single instances into sequences of smaller logical filters means increasing the storage and CPU overhead when evaluating such filters. The CPU overhead is compensated by the fact that more filters can now be moved into the bidi-trie, where the first match is efficiently evaluated; the extra conditions have to be evaluated if and only if there is a match in the bidi-trie. The storage overhead is compensated by the bidi-trie's intrinsic nature of merging similar patterns.

Furthermore, the storage overhead is reduced by no longer using a JavaScript array to store a collection of filters (which is what FilterComposite is): the same technique used in [2] is imported to store sequences of filters. A sequence of filters is a sequence of integer pairs where the first integer is an index to an actual filter instance stored in a global array of filters (`filterUnits`), while the second integer is an index to the next pair in the sequence -- which means all sequences of filters are encoded in one single array of integers (`filterSequences` => Uint32Array). As a result, a sequence of filters can be represented by one single integer -- an index to the first pair -- regardless of the number of filters in the sequence.

This representation is further leveraged to replace the use of a JavaScript array in FilterBucket [3], which used a JavaScript array to store a collection of filters. Doing so means there is no more need for FilterPair [4], whose purpose was to be a lightweight representation when there were only two filters in a collection.

As a result of the above changes, the map of `token` (integer) => filter instance (object) used to associate tokens with filters or collections of filters is replaced with a more efficient map of `token` (integer) => filter unit index (integer), used to look up a filter object from the global `filterUnits` array.

Another consequence of using one single global array to store all filter instances is that we can reuse existing instances when a logical filter instance is parameter-less, which is the case for FilterAnchorLeft, FilterAnchorRight and FilterAnchorHn: the index to these single instances is reused where needed.

`urlTokenizer` now stores the character codes of the scanned URL into a bidi-trie buffer, for reuse when string-matching methods are called.

New method: `tokenHistogram()`, used to generate histograms of occurrences of tokens extracted from URLs in the built-in benchmark. The top results of the "miss" histogram are used as "bad tokens", i.e. tokens to avoid if possible when compiling filter lists.

All plain pattern strings are now stored in the bidi-trie memory buffer, regardless of whether they will be used in the trie proper or not. Three methods have been added to the bidi-trie to test a stored string against the URL, which is also stored in the bidi-trie.

FilterParser is now instantiated on demand and released when no longer used.

***

[1] https://github.com/gorhill/uBlock/blob/135a45a878f5b93bc538f822981e3a42b1e9073f/src/js/strie.js#L120
[2] e94024d
[3] https://github.com/gorhill/uBlock/blob/135a45a878f5b93bc538f822981e3a42b1e9073f/src/js/static-net-filtering.js#L1630
[4] https://github.com/gorhill/uBlock/blob/135a45a878f5b93bc538f822981e3a42b1e9073f/src/js/static-net-filtering.js#L1566
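The `filterUnits` / `filterSequences` encoding described above can be sketched in a few lines. This is a simplified approximation for illustration (the toy filter units are hypothetical), not the actual static-net-filtering code.

```js
// Simplified sketch of the `filterUnits` / `filterSequences` idea: sequences
// of filters are linked lists of (unit index, next pair index) pairs packed
// into one Uint32Array, so a whole sequence is identified by a single integer.
const filterUnits = [];                         // all filter instances, by index
const filterSequences = new Uint32Array(1024);  // pairs: [ unit index, next pair index ]
let sequenceWritePtr = 2;                       // slot 0 reserved as "end of sequence"

// Prepend a filter unit to a sequence; returns the index of the new head pair.
// `nextSeq` is the index of the pair that follows, or 0 for end-of-sequence.
const sequenceAdd = (unitIndex, nextSeq = 0) => {
    const i = sequenceWritePtr;
    filterSequences[i+0] = unitIndex;
    filterSequences[i+1] = nextSeq;
    sequenceWritePtr += 2;
    return i;
};

// Composite semantics: the sequence matches when *all* units in it match.
const sequenceMatchAll = (seq, url) => {
    for ( let i = seq; i !== 0; i = filterSequences[i+1] ) {
        if ( filterUnits[filterSequences[i+0]].match(url) === false ) {
            return false;
        }
    }
    return true;
};

// Usage with two toy filter units (for illustration only):
filterUnits.push({ match: url => url.includes('/ads/') });       // unit 0
filterUnits.push({ match: url => url.startsWith('https://') });  // unit 1
const seq = sequenceAdd(0, sequenceAdd(1));  // sequence: unit 0 -> unit 1
console.log(sequenceMatchAll(seq, 'https://example.com/ads/a.js')); // true
console.log(sequenceMatchAll(seq, 'http://example.com/ads/a.js'));  // false
```

Because a sequence is just an index into the shared integer array, composing, storing and walking collections of filters avoids allocating a JavaScript array per composite filter, which is where the storage saving comes from.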
This issue is opened somewhat a posteriori, as work for this has already been committed in gorhill/uBlock@c229003 (the new `STrieContainer` code was just a matter of reusing the existing `HNTrieContainer` code and modifying it to deal with plain strings rather than hostnames, i.e. simplifying really). However I want to document and further investigate pushing the trie usage farther, hence this issue.
Done
In the static network filtering engine, filter buckets are used to hold collections of filters which share the same token. Tokens are used to store/lookup filters or collections of filters so as to minimize the number of filters visited each time a network request needs to be matched against the content of the filter lists.
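A rough sketch of the token/bucket idea, with hypothetical names and a plain-substring stand-in for real filter matching -- not uBO's actual data structures:

```js
// Rough sketch of token-based bucketing (hypothetical, not uBO's actual code):
// filters are grouped by a token they contain, so matching a URL only visits
// the buckets whose token actually appears in that URL.
const buckets = new Map(); // token (string) => array of filters sharing that token

const addFilter = (token, filter) => {
    let bucket = buckets.get(token);
    if ( bucket === undefined ) {
        bucket = [];
        buckets.set(token, bucket);
    }
    bucket.push(filter);
};

const matchURL = url => {
    // Tokenize the URL into alphanumeric runs, e.g. "ad", "ads", "example", ...
    for ( const token of url.toLowerCase().match(/[0-9a-z]+/g) || [] ) {
        const bucket = buckets.get(token);
        if ( bucket === undefined ) { continue; }
        for ( const filter of bucket ) {
            if ( url.includes(filter) ) { return filter; } // simplified "match"
        }
    }
    return null;
};

// Usage (toy filters as plain substrings):
addFilter('ads', '/ads/banner');
addFilter('track', '/track.js');
console.log(matchURL('https://example.com/ads/banner.gif')); // "/ads/banner"
```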
With gorhill/uBlock@a594b3f, I introduced a method to help investigate the content of all filter buckets. To do so, call the method from uBO's dev console as follows:
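Presumably the call looks like this, inferred from the identifiers `bucketHistogram()` and `µBlock.staticNetFilteringEngine` cited elsewhere in this issue:

```js
// Presumed invocation, based on the method and object names used in this issue:
µBlock.staticNetFilteringEngine.bucketHistogram();
```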
This will output a list of all filter buckets in the static network filtering engine, from largest to smallest. Before the work on trie usage was committed, `bucketHistogram()` would show the content of the largest buckets as follows (top ten):

As can be seen, the `ad` and `ads` buckets are quite large, and anytime uBO encounters a URL with the token `ad` or `ads`, it must potentially visit all or a large number of the entries in these buckets to find out whether there is a matching filter. Oftentimes, a majority of filters in these buckets are plain filters which can be coalesced into a single trie entity, and a single scan of the trie will tell whether any of the filters has been matched.
After the work to coalesce trieable filters from the same bucket into a single trie was committed, the buckets looked like this:
Sometimes, buckets end up with 0 entries and 1 trie as all the filters in the bucket can be coalesced into a single trie, for example:
The result as per the `µBlock.staticNetFilteringEngine.benchmark()` method was a measurable performance and memory usage gain. I do not expect the performance gain to be noticeable from a user's point of view, although the memory efficiency can be noticed, especially when uBO loads from a selfie (typically well under 40 MB at launch with default settings/lists).

To do
Investigate pushing the coalescing-into-tries idea farther in the context of filter buckets. There are still many entries left in some filter buckets which could theoretically be coalesced into a trie.

However this would require a "bidi" trie, i.e. a trie which can scan left and right of the token position in the URL -- the current implementation scans just to the right.

The ability to scan in both directions would open the door to also coalescing FilterPlain-based filters into the same trie currently used by the FilterPlainPrefix1-based filters (a specialized version of FilterPlain).
For example, this would further reduce the number of entries to scan to 19 entries in the `ad` bucket (from 57 entries with the current trie, or 926 entries with no trie), and to just 8 entries in the `ads` bucket (from 78 entries with the current trie, or 636 entries with no trie).