Declarative Net Request proposal: disable individual static rules #162
Have two questions:
From my point of view, the disabled rule should not count towards the global static rule count. As part of working on the implementation, I am checking whether and how this can be done. A rule should stay disabled until the next extension update, just like the static rulesets. This would prevent a rule from accidentally becoming enabled when it shouldn't be. The fact that disabled static rules would persist across sessions is a good reason for having a method
I have updated the Extension API Proposal document to use the appropriate template, and have also expanded and clarified its content: https://docs.google.com/document/d/1NTOTr6iwm0dJbewWjnABmPo6h1QD1mKTpX60s6Klj-8/edit?usp=sharing
An initial implementation of this proposal in Chromium is available here: https://chromium-review.googlesource.com/c/chromium/src/+/3494984

This code works, although there is still work to do in terms of testing and benchmarking. Since the new API is still being discussed, at the moment my intention is to use this implementation as a way to check the feasibility of the proposal and explore its possible implications.
This is a collection of real-world examples where a problem discovered in a static list of rules would not have been easily solved with the functionality provided by Declarative Net Request at the moment. These examples are mainly instances of general rules, those that are not tied to a particular domain. Because these rules are applied to lots of different websites, it is not possible to predict with confidence the effects of overriding them with a dynamic rule. The typical pattern is that when a general rule is faulty because it blocks valid content, its hypothetical solution with the current Declarative Net Request API (implemented as a dynamic rule) would have the opposite effect and potentially allow unwanted or harmful resources.

Example 1

This rule was intended as a comment but the leading
(Block content with

This led to having a filter in EasyList that blocked all legitimate (non-ad and non-tracking related) requests containing “media”, e.g. all images from Wikipedia which come from

It was quickly discovered, fixed in EasyList, and deployed to users within hours via automated filter list updates in the extensions. As static filters currently can’t be disabled, a dynamic rule would be required in order to allow everything with the word “media”. But this would not only unblock requests affected by the accidental “media” filter: it would allow ALL requests, including ad- and tracking-related ones, which would have a major impact on ad-blocking users.

Example 2

This line was accidentally added to EasyList (commit):
(Block content with

The title of the related issue is self-explanatory: "EasyPrivacy breaking nearly every website". The bug was fixed by removing the faulty line (commit). Since this filter blocks far too many desired resources, trying to override it with a dynamic rule would have the opposite consequence of allowing too many unwanted resources.

Example 3

This filter broke different sites (added, adjusted):
(Block content with

The solution was to remove it (commit). While this filter didn’t target

A dynamic rule that simply tried to override it would be too broad, as it would probably allow unwanted content. At the same time, adjusting that dynamic rule to affect only specific domains would not be a sustainable strategy, as it would mean updating the rule every time a new website is found to have been affected by the original filter (which was shown to be able to break more than one site).

Example 4

This filter removed legitimate content on this website (commit):
(Block content at

The solution was to remove the filter (commit). This domain happens to be a CDN, so it is not possible to guess what else is or will be hosted on it. Therefore, a dynamic rule allowing everything with that URL pattern would not be advisable.

Example 5

A developer at HubSpot complained about this filter because it was blocking internal backend calls to
(Block content with

The solution was to remove the faulty filter (commit). However, a dynamic rule that tried to override the faulty filter by allowing everything with

Example 6

This rule broke comments on YouTube (discussion):
(Block content with

The solution was to remove it (commit). A dynamic rule that overrode it by allowing everything with

Example 7

This rule was too generic and broke valid locations:
(Remove content with

The solution was to remove it (discussion, commit). A dynamic rule that simply allowed any URLs with the form

Example 8

These two rules broke images in
(Remove content with

The solution was to remove them (issue, commit). However, a dynamic rule that allowed all content with those two patterns would allow things that would otherwise have been blocked by other rules.

Example 9

Trying to fix Rotten Tomatoes, this rule broke the internal processes of NBC, a content provider (discussion).
(Block XmlHttpRequest calls containing

The solution was to remove this rule and add a more specific one to

A dynamic rule to achieve a similar result would need to start by allowing requests containing

Example 10

This rule broke valid content in
(Block content that contains

The solution was to remove it and replace it with more specific rules (commit). A dynamic rule to achieve the same would need to allow all content with
Frequency of updates

I have been talking with some ad-blocker developers in order to get a better understanding of their use case. Currently, they update their main filter list every hour. Other filter lists are updated every 24 hours. If we decided to set a rate limit for this API, a time interval between 1 and 24 hours might be a reasonable starting point. A rate of once per hour would allow extensions to preserve their current user protection while not imposing an excessive load on the system. (Of course, this does not mean that the API would be called every hour, but rather that the extension would not need to wait long before disabling static rules when needed.)
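As a sketch, an extension could batch its rule-disabling work so that the rate-limited API is invoked at most once per interval. The `DisableBatcher` class, its callback, and the injected clock below are illustrative assumptions, not part of the proposal:

```javascript
// Sketch: queue individual rule-disable requests and flush them at most
// once per interval (hypothetically one hour, matching the discussion).
class DisableBatcher {
  constructor(applyDisables, intervalMs = 60 * 60 * 1000, now = Date.now) {
    this.applyDisables = applyDisables; // would wrap updateEnabledRulesets
    this.intervalMs = intervalMs;
    this.now = now; // injectable clock, useful for testing
    this.lastFlush = -Infinity;
    this.pending = new Map(); // rulesetId -> Set of rule IDs
  }

  // Queue rule IDs; they are sent immediately if the interval has elapsed.
  disable(rulesetId, ruleIds) {
    const set = this.pending.get(rulesetId) ?? new Set();
    for (const id of ruleIds) set.add(id);
    this.pending.set(rulesetId, set);
    this.maybeFlush();
  }

  maybeFlush() {
    if (this.now() - this.lastFlush < this.intervalMs) return false;
    const batch = [...this.pending].map(([rulesetId, ids]) =>
      ({ rulesetId, disableRulesIds: [...ids] }));
    this.pending.clear();
    this.lastFlush = this.now();
    this.applyDisables(batch);
    return true;
  }
}
```

With this shape, the extension can react to a broken filter immediately from its point of view, while the actual API call is deferred until the rate limit allows it.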
Benchmarking

This entry contains some benchmarks to understand the potential impact on performance. See also the benchmarks in bug #1209218.

Materials

An initial implementation of this API in Chromium is available at https://chromium-review.googlesource.com/c/chromium/src/+/3494984

The extension used for benchmarking includes ten filter lists (based on EasyList), each one with 33,000 rules. When all ten are active, the number of enabled static rules is 330,000, which is the maximum allowed by the Declarative Net Request API. I have used my development machine for the tests (2 x Xeon 4210R, 64 GB memory). It would be a good idea to carry out additional benchmarks on more typical hardware and conditions.

Metrics

Two different time metrics are used:
let t0 = Date.now();
chrome.declarativeNetRequest.updateEnabledRulesets({...}, () => {
let t1 = Date.now();
  // measured JS time: t1 - t0
});
void OnIndexCompleted(RulesetInfo* ruleset,
base::OnceClosure done_closure,
IndexAndPersistJSONRulesetResult result) {
  // the measured internal time is in result.index_and_persist_time
}

Benchmark 1: enable all rules

Enable 10 rulesets with 33,000 rules each. Total: 330,000 enabled static rules (the maximum allowed by the API). Javascript time: 3,300 msecs. Internal time per ruleset: 140 msecs on average.

Benchmark 2: disable 1% of the rules

Enable 10 rulesets with 33,000 rules each, but disable 1% of the rules in each ruleset. Total: 326,700 enabled static rules. Javascript time: 3,341 msecs. Internal time per ruleset: 142 msecs on average.

Benchmark 3: disable 10% of the rules

Enable 10 rulesets with 33,000 rules each, but disable 10% of the rules in each ruleset. Total: 297,000 enabled static rules. Javascript time: 3,354 msecs. Internal time per ruleset: 139 msecs on average.

Conclusions

The number of disabled rules does not seem to have a noticeable impact on performance. Note that compiled static rulesets are stored on disk (regardless of whether some of their rules have been disabled) and will not need to be recompiled unless their disabled rules change. Therefore, the main factor affecting performance will probably be the number of times that a ruleset needs to be recompiled to disable some of its rules, rather than the number of disabled rules themselves.
Adding the |
The test extension used to create the benchmarks above is available at felipeerias/DNR-disable-rules-extension.
Thanks for adding the "Chrome: follow-up" label. I'm planning to sync with folks this week. Apologies for the delay on providing you with feedback. |
chrome.declarativeNetRequest.updateEnabledRulesets({
enableRulesetIds: [ "myRules" ]
}, … );

Likewise, the proposed API can be used to set the disabled rules in a ruleset without affecting the others. For example, this call would enable

chrome.declarativeNetRequest.updateEnabledRulesets({
enableRulesetIds: [ "myRules" ],
disableRules: [
{ rulesetId: "myRules", disableRulesIds: [ 2, 3, 4 ] }
]
}, … );

This is already included in the ongoing Chrome implementation. To prevent errors, a ruleset must be explicitly listed in

Question: is this method discoverable enough?

Question: would it be necessary to add methods that work on only one ruleset at a time? For example, we could have a simpler way to disable rules in a single set with:

setDisabledRules("myRules", [5, 6, 7]);
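As a sketch, the suggested single-ruleset convenience could be implemented on top of the proposed updateEnabledRulesets call. The inline chrome stub below only exists to make the example self-contained; in an extension, the real API object would be used:

```javascript
// Minimal stub of the proposed API surface, for illustration only.
const chrome = {
  declarativeNetRequest: {
    updateEnabledRulesets(options, callback) {
      chrome.declarativeNetRequest.lastOptions = options; // record for demo
      if (callback) callback();
    },
  },
};

// Hypothetical helper: disable a set of rules in one ruleset, leaving
// the disabled rules of other rulesets untouched.
function setDisabledRules(rulesetId, ruleIds, callback) {
  chrome.declarativeNetRequest.updateEnabledRulesets({
    // The ruleset must be explicitly listed for the change to take effect.
    enableRulesetIds: [rulesetId],
    disableRules: [{ rulesetId, disableRulesIds: ruleIds }],
  }, callback);
}

setDisabledRules("myRules", [5, 6, 7]);
```

Layering the convenience method this way would keep updateEnabledRulesets as the single atomic entry point while still offering a simpler call for the common one-ruleset case.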
Noting for visibility that Devlin from Chrome's extensions team has provided some feedback on the design doc, so I'm removing the Chrome feedback label for the time being. |
Following the feedback over the past weeks, we have updated the API proposal document with new content. This is a quick summary:
We can probably move to the implemented label, but I'll wait for an update (I notice we haven't documented it yet, so I want to make sure the overall design is settled). cc @byung-woo
@oliverdunk should we file a different issue for this? |
I'm happy to try to get an update on that feedback. I think continuing discussion in this issue makes sense (regardless of what labels it has) but open to thoughts. |
@oliverdunk hi Oliver, any update on this? |
Hey! No update yet I'm afraid, thanks for the bump. I do still want to get some thoughts on your question and generally figure out if we are ready to document the new APIs 😄 |
@oliverdunk got it, thank you! If you need any more input from us, please let me know. |
Small update (unfortunately nothing specific on increasing the limits yet): I've confirmed that we're shipping this API in stable and are ready to document it, so adding the implemented: chrome label.
I did confirm that the current limits were chosen based on everything @felipeerias mentioned here. In particular, there is a limit to how large we want the data stored in preferences files to be. I understand that doesn't help with solving the differential updates use case though so definitely still something to figure out there. |
Safari/WebKit bug: https://bugs.webkit.org/show_bug.cgi?id=261039 |
Hi @oliverdunk. We unfortunately ran into the 5k disabled rule limit during the beta testing of moving Adblock and Adblock Plus to MV3. Given the recent change to allow up to 30k safe dynamic rules, I think it might be worth reconsidering the number of disabled rules, depending on what the performance looks like. The reason is that our main use case for this feature is to update our rulesets to match the latest filters on our servers. If a rule has changed since we built the extension, we disable the existing static rule and then add a new dynamic rule with the change. So if we can add 30k dynamic rules but only disable 5k static rules, the effective number of filters we can change before deploying a new build of our extensions is actually only 5k rules.

It could also help us if this limit were scoped to specific rulesets, so that we could disable 5k rules per ruleset rather than the current 5k rules per extension.

I'd also like to request two quality-of-life improvements that don't affect the actual limit:
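The differential-update flow described above can be sketched as a small planner. Only the 5,000 disabled-rule limit comes from the discussion; the `planDifferentialUpdate` helper and the shape of its change objects are illustrative assumptions:

```javascript
// Sketch of the differential-update workflow: each changed filter disables
// its existing static rule and adds a dynamic replacement. The disabled-rule
// limit caps how many changes can be applied before a new build must ship.
const MAX_DISABLED_STATIC_RULES = 5000; // Chrome's current per-extension limit

function planDifferentialUpdate(changes, alreadyDisabled = 0) {
  const budget = MAX_DISABLED_STATIC_RULES - alreadyDisabled;
  const applied = changes.slice(0, Math.max(budget, 0));
  return {
    disableRules: applied.map(c =>
      ({ rulesetId: c.rulesetId, disableRulesIds: [c.staticRuleId] })),
    addDynamicRules: applied.map(c => c.replacementRule),
    deferred: changes.length - applied.length, // must wait for a new build
  };
}
```

This makes the mismatch concrete: even with 30k dynamic rules available, the number of changed filters that can actually be deployed is bounded by the disabled-rule budget.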
@JWorthe thanks for confirming this issue. As we said at the beginning, the 5k limit is not enough to implement differential updates.
This issue was discussed during the WECG in-person meeting. Chrome's stance on this is the following: let's see how it goes with CWS fast-track, and whether you would still need the constant increased given that you're going to publish daily updates. A couple of important points that I forgot to mention during the meeting:
I think that given how quickly updates are actually delivered to users, it is important to increase the limit regardless of how well fast-track works. The current limits make the feature useless for differential updates (which was its original purpose). Initially we requested an increase from 5k to 20k, but as far as I understand this would require a considerable change (moving disabled rules from the prefs file to a separate file, parsing, etc.). Can you consider at least doubling the limit (5k -> 10k)? If fast-track works correctly, this should be enough to keep every user up to date for the two-week period that a full rollout takes.
Hi @JWorthe / @ameshkov - appreciate the feedback. On increasing the limit for disabled static rules: I spoke to the team and we are already towards the upper limit of what we think is reasonable with the current implementation. In fact, storing 5,000 disabled rule IDs in a preferences file feels like a lot, and we only went with this knowing it is likely to be used by a small number of extensions. I spoke to @ameshkov individually to understand the update rollout times, and it seems most users update quickly, with a long tail that would benefit from a higher limit. Notably, however, this long tail represents only a small percentage of users, and the update times for these users are very long (multiple months, perhaps). With that in mind, we'd like to see how things go at first, noting that

On exposing a constant and updating the documentation: this all seems worthwhile, I'll speak to the team.
Got it, thanks for the update @oliverdunk! Not that I'm too happy with the decision, but we'll explore the
At Firefox we are currently implementing the API to disable individual static rules, at https://bugzilla.mozilla.org/show_bug.cgi?id=1810762

Disabled individual static rules are still counted to enforce static rule limits

I see a notable difference between an initial design goal ("From my point of view, the disabled rule should not count towards the global static rule count") and the actual implementation (where global limits are independent of disabled individual rules: source 1, source 2). These aspects were not mentioned in the external design doc nor during code review. I evaluated the two different designs below, and my conclusion is that disabled individual rules should continue to count towards the global static rule count, i.e. they should not free up quota when an individual rule is disabled (which matches Chrome's current behavior). As mentioned before, Chrome does currently not exclude individual disabled rules from the quota of available static rules, nor from the quota of regexFilter rules.

A clear "disadvantage" is that this prevents extensions from pushing the limits. However, I would not attribute much weight to this disadvantage, because extensions should not rely on disabling individual rules in order for rulesets to be enabled, especially for the use case that motivated this API design, which is to disable a few rules that were unexpectedly broken.

The advantage of including disabled individual rules in the quota is that any quota-dependent logic remains stable. When the quotas are independent of the individual disabled rules, changes to individual rules only affect that ruleset. If disabled individual rules were to affect the quotas, then it is possible that other rulesets become enabled/disabled after changes to an unrelated ruleset. Here is an example:
At install:
Now imagine the scenario where we disable 1 regexFilter rule from rulesetA. If the quotas were to account for this (i.e. free up quota when a rule is disabled), then toggling an individual rule can have a significant cascading effect:
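The cascading effect described above can be illustrated with a small runnable sketch. The rule counts, the 100-rule global limit, and the admit-rulesets-in-order policy are hypothetical simplifications for illustration, not actual browser behavior:

```javascript
// Simulation: rulesets are admitted in order while the running total stays
// within the global limit; an over-quota ruleset is disabled as a whole.
// If disabled individual rules were subtracted from a ruleset's count,
// toggling one rule in rulesetA could change whether an unrelated
// rulesetB is admitted.
function enabledRulesets(rulesets, globalLimit, subtractDisabled) {
  const enabled = [];
  let total = 0;
  for (const rs of rulesets) {
    const count = subtractDisabled ? rs.rules - rs.disabled : rs.rules;
    if (total + count <= globalLimit) {
      enabled.push(rs.id);
      total += count;
    }
  }
  return enabled;
}

// Hypothetical numbers: a 100-rule global limit and two rulesets, with
// one individual rule disabled in rulesetA.
const rulesets = [
  { id: "rulesetA", rules: 60, disabled: 1 },
  { id: "rulesetB", rules: 41, disabled: 0 },
];
```

With the quota-freeing design, disabling a single rule in rulesetA flips rulesetB from disabled to enabled, which is exactly the instability the comment argues against.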
In the above example I only mentioned regexFilter, but the whole ruleset is affected: an over-quota part of a ruleset causes the whole static ruleset to be disabled. This cascading effect is undesirable, hence it makes sense for disabled individual static rules not to be subtracted from the quota.

Invalid rules

A static ruleset can have invalid rules (e.g. invalid action/condition values). These are ignored, and are not counted towards the limit of the global static rules. Ignored invalid rules are not considered disabled rules either. Invalid rules are worth a special mention (see also below), because what is considered an invalid rule in one browser version can become a valid rule in an updated version of the browser.

Error handling in
@oliverdunk The 5k limit is currently not documented in Chrome's extension API docs, nor exposed through any API. Questions:
@xeenon Will Safari limit the number of disabled individual rules? If so, what limit are you considering? |
@Rob--W Thanks for the investigation! Yeah, I'll make a note that we should add this to the documentation. There isn't an API constant as far as I'm aware but I could see that it might make sense to add one. On everything else - we're chatting about this internally, I'll follow up. |
@Rob--W We haven't looked at implementing this yet, so I don't know what limit Safari might impose. |
Yes, we should definitely document this. I'll take a look at that today.
Not currently. Devlin and I are both fairly neutral here. I would prefer not to add it since we already have many constants, but I can see an argument that it is useful when building a feature that will disable many rules. We are happy to implement this if there is consensus among other browsers.
I agree with this and confirmed with Devlin that he feels the same.
These should probably count towards the quota, since it would be odd for an extension with static rules to behave differently with regards to quota across various Chrome versions. We wouldn't want to change it without checking for usage though, since this could be a breaking change (although I suspect it is unlikely to be in practice).
Neither Devlin nor I have strong feelings here. An argument not to do this is that if an extension is configured to disable several rules at once and a single rule is invalid, we don't necessarily want to completely discard that call because of the single missing ID.
Firefox 128 supports disabling individual static rules through the

We enforce a quota of 5000 disabled rules, counted per ruleset rather than across all rulesets. This limit is exposed in the API as the
This issue is a proposal to extend the declarativeNetRequest API to add a way to disable an individual malfunctioning rule that is part of one of the static rulesets bundled with an extension.
This would allow extension developers to react in a timely manner to static rules with undesirable behavior, without hampering the functionality of the extension or risking unwanted side effects.
Motivation
Manifest v3 extensions may include static rulesets that will only be updated when the extension itself is updated. From time to time, these static lists might include rules that are found to have undesirable behaviour (either because of malice or human error).
While session and dynamic rules can be added and removed one by one, static rules can only be enabled and disabled along with every other rule in their ruleset. Therefore, the only solutions provided by the current API are to either:
A possible workaround might be to add a dynamic rule which overrides the malfunctioning static rule, but doing that risks causing unwanted side effects as the new rule could conflict with other (correct) static rules.
Proposal
Since extensions are already able to review the contents of their static rulesets, it would be enough to add a way in the declarativeNetRequest API to disable a static rule (or a collection of them).
My proposal is that a collection of disabled rules is added as an optional field to UpdateRulesetOptions:

UpdateRulesetOptions.disableRules of type DisableRulesInfo[]

where DisableRulesInfo represents the rules that must be disabled for a given ruleset, with the following properties:

rulesetId: string representing the id of an enabled static ruleset;
disableRulesIds: long[] containing the IDs of the rules in that ruleset to be disabled.

A main reason for choosing this approach over other alternatives (e.g. a separate method) is that it allows the result of calling updateEnabledRulesets to be a predictable atomic change of the state of the API, without depending on its previous state.

While not strictly necessary to fix this issue, it would be a good idea to also add a method getDisabledRules() that returns the collection of rules that have been disabled in the currently active rulesets. Together with getEnabledRulesets() and other methods, this would provide a way to know the detailed state of the API at any given time.

References