Drafting a new specification of radar rules #692
Comments
Please allow me to shine some light on the issue, especially in relation to RSSBud.
Just to clarify, RSSBud gained the ability to evaluate `target` functions: RSSBud uses JavaScriptCore to load and evaluate rules dynamically. JavaScriptCore is extremely limited (especially on iOS, where safety is a top priority): merely the bare bones of JavaScript are accessible, so I had to "polyfill" the missing functionality myself. Based on the above reasoning, RSSBud will continue supporting `target` functions. In addition, I'm planning to expand the specification to allow targeting of non-RSSHub feeds. Feel free to check it out.
@Cay-Zhang Sorry for my unawareness that RSSBud/RSSAid has gained the ability to evaluate `target` functions. I've checked RSSBudRules. Extracting official feeds is a nice idea; personally, I am glad to see such a feature, but we need more approval to get it into the new specification. BTW, I've updated the draft specification (especially adding the raw URL as an input choice) to make it possible to be compatible with your current rules.
Just wondering, what if we embed the rules into each release? Could we save the hassle of doing these workarounds? For the new route v2 we could definitely generate and pack the functions.
If this were a solution, shouldn't RSSHub-Radar have already been actively releasing minor versions along with the latest rules since v1.6.0? I feel sorry to mention that until #667, the default rule list was not even imported properly and was unusable, and no new version has been released to include this PR so far.

What's more, what about debugging new rules? Forcing contributors to install Chromium before they are able to debug, or forcing them to apply some patches and build their own version? I think neither is a good idea.

Off-topic: I, again, feel sorry to mention that whether remote rules along with locally customized rules are enabled is totally up to the UA; however, whether the update button is displayed is up to a synced variable. That is, if someone builds their own add-on without patches and installs it, that variable will be set (and synced to their other installations).

Updating a public add-on needs reviews from the add-on store, which consumes some time. Now that RSSHub-Radar has 4 distributions on different add-on stores, of which 3 (Chrome, Edge, Firefox) are owned by @DIYgod and 1 (Safari) is owned by @Edd-Gao, the problem is: would they be willing to manually update their distributions often, or to set up some CI to automatically push updates to the add-on stores (does each add-on store even have such an API?)? I fully understand they are busy, so I drafted this new specification to completely solve these problems (it hurts me that you call my work a workaround; shouldn't the usage of `eval()` be considered the real workaround?).
Gotcha, do you have a pipeline of this kind so that I may take a look and work with @DIYgod on auto-publishing? We may have a variant for remote updates, but I would like to focus on having a working pipeline first.
Still, I must argue that auto-publishing would not change the fact that Firefox and Safari users are unable to debug their new rules. Honestly speaking, if not for debugging new rules, there would be no Chromium on my PC. I may help set up a pipeline to automatically publish to AMO. I do not have, and am not willing to register, Chrome Web Store / Edge Add-ons accounts; thus, I cannot help with the distributions on those two stores. What's more, it seems that there is no such API for publishing Safari extensions; thus, the distribution for Safari is still unable to get the latest rules. Let us figure out what an embarrassing situation we are facing. Basically, depending on the goal and the browser, here's what we need:
For official plugin updates I don't think it would be a great idea to do any remote fetch. A monthly release should be good enough. We could also have an emergency version-disable API: if any malicious script were published, we could stop any execution until users have the extension updated. For debugging purposes, could we set up scripts like Tampermonkey does? We pack our rules in the form of JS instead of JSON, and users could override this file for testing purposes. I believe this debug version could also be published via GitHub release instead of the official channels, as only developers or advanced users could make use of it.
I consider there to be nothing different between fetching remote rules and releasing monthly, as long as the usage of `eval()` is deprecated.
I consider that it relies on users opening issues to report malicious scripts. Still, there is nothing different between fetching remote rules and monthly releases.
The answer is no. User scripts are registered and executed on webpages in situ. That's much more dangerous than loading rules using `eval()`.
If what you exactly mean is "override file", that's impossible unless users rebuild/repack their own add-on each time they modify it. If it is "override rules", `eval()` is still unavoidable.
Chromium users can keep their sideloaded extensions after a browser relaunch, but that's not true for Firefox and Safari users. The good news is that a self-hosted (a.k.a. not-listed-on-AMO) Firefox add-on will help, while the bad news is that Safari does not provide such a solution.
So if I understand this correctly: extensions could load remote scripts (JS) without any limitation? If that's true, we could simply publish the rules online and load them anyway. If it's not true, how do you update without publishing officially or packing it? If we embed the rules into the extension, why do we need eval? They are just part of the extension and we don't need eval to execute them; just run them as part of the functions already there.

I don't consider there to be any auditing anyway, but it is the process. Remote fetch opens more attack surface and I would rather avoid that in an official release. As for the developer extension: please, take it away with all your ideas.

Also, even though security issues might be reported by third parties, we are already choosing the way of doing it "officially", so we need to have some thought about these bad situations and how we could stop them. The issue with a monthly release is that users may not be able to get the latest update, and keeping a malicious script running for a month is not responsible behavior imo. At least we have to do something to stop such behavior at the moment we learn of it.
Nope. As I have already described, such remote scripts need to be registered and executed on certain web pages (properly speaking, URL patterns) in situ. So the limitations are:
Publishing rules isn't enough since they cannot be executed directly on web pages. The script must first modify something on the web page.
It is the user script manager add-on that re-registers the user script.
When debugging new rules or fetching remote rules, a "debug version published via GitHub release" cannot get rid of `eval()`. I am not so sure why you asked me such questions, since my previous reply has already described the precondition (that `eval()` must be avoided).
I agree that that's a significant point. However, that's true only if the rules are able to execute arbitrary code.
So, do you mean that there will be several developers auditing rules day by day? If not, no additional audit would happen between rule approval and add-on release. Thus, any security issue would not "might be", but would only be, reported by third parties (users) before any official action could be taken. Even if there were an official API to remotely disable the extension to avoid further damage, it would still rely on users to discover and report security issues, wouldn't it? So there is nothing different between fetching remote rules and monthly releasing as long as the usage of `eval()` is deprecated.
In the case of monthly releases, if a security issue is reported and such an API effectively disables users' extensions, they will be unable to use the extension until a newer version is released. In the case of loading remote rules, a forced-rule-updating API alone is enough to protect users and avoids such long downtime. Even if there were no such API, RSSHub-Radar already forces a rule update every 5 hours. To be honest, I am not really opposed to `eval()`.
At least we are depriving them of something they already have. The reason why I drafted this issue is that it is the only solution that makes all Chromium, Safari, and Firefox users happy while, meanwhile, being secure and not requiring the add-on publisher to publish new versions frequently.
I believe the current proposal 1) cannot replace the `target` function and 2) breaks other parts of the ecosystem. I see the point that most of the rules won't make use of the power and that we are giving too much, but it is also important to realize that, currently, we don't have enough hands to drive the adoption, especially without support from other systems. That's why I am trying to find an easy way to get around it:
And those power users who aren't satisfied with monthly updates and have enough security knowledge still have the option of using the developer version, where we could simply expose the rules and use eval. I don't want to give normal users too much power when they have no idea what they are doing (like pasting stuff into the developer console; raising the bar by asking for a manual install from GitHub is, imo, a good start). Like I said, unless we have a safe way to cover the cases where website content is necessary for RSS link generation, we would rather stay with what we have for now. Still, this proposal is a good start for anyone who wants to do the sandboxing. I believe it is the first time someone has worked on this issue with some specs, and I really appreciate it.
As I have already mentioned, there seems to be no add-on publishing API for Safari, so manual action is needed. If the release interval depends on the publisher's manual actions instead of automatic pipelines, it has to be considered unreliable.
Just turn to the doc. Self-hosted add-ons are able to update automatically as long as the `update_url` field is set in the manifest.
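For reference, a minimal sketch of that setup; the `update_url` key and the update-manifest layout follow Mozilla's documented self-distribution flow, while the ID, version, and URLs below are made up. In `manifest.json` of the self-hosted build:

```json
{
  "browser_specific_settings": {
    "gecko": {
      "id": "radar-dev@example.com",
      "update_url": "https://example.com/updates.json"
    }
  }
}
```

And the `updates.json` served at that URL:

```json
{
  "addons": {
    "radar-dev@example.com": {
      "updates": [
        {
          "version": "1.7.0",
          "update_link": "https://example.com/radar-dev-1.7.0.xpi"
        }
      ]
    }
  }
}
```

Firefox periodically checks `update_url` and installs the new `.xpi` automatically, which is why a self-hosted developer build would not go stale.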
That's up to how the add-on fetches remote warnings, not how it is distributed.
So do self-hosted ones as I have already explained.
I am not arguing about auditing. What I said were just examples to show that there would be nothing different between remote fetching and monthly releasing as long as the usage of `eval()` is deprecated.
They have the option, but Firefox and Safari effectively reduce this option to nothing because sideloaded add-ons are uninstalled after a browser relaunch. So at least for Firefox, if we still need the official add-on to be listed on AMO, the developer version should be distributed as a self-hosted add-on. As for Safari, I could just endorse the opinion of GNU (just joking, lol). It's pretty OK not to adopt my draft. Your willingness to comment on my work is already a kind of treasure. I am always here to see if there is something I can help with.
Okay, so self-distributed extensions on Firefox can auto-update, good to know. I get the idea that since we are doing something new, we should consider everything as thoroughly as possible. As I said, the proposal currently does not cover some important cases we rely on right now and also breaks the ecosystem without an alternative; so instead of limiting things to a preset of rules, could we just make …? From the project's perspective this is the next step I am looking for (while I am not sure if this is what we already have), and as a stricter set of rules, I am more than happy to see if the community can come up with better ideas to cover the cases here. So the next step:
If that sounds good to you, I could go ahead and move on to #690.
Sounds good. I consider there to be another simple solution to partially enable remote rules and local debugging: in those two use cases only, deny `target` functions. If so, the official Firefox add-on would not need to be self-distributed, only the developer version would, and it could greatly improve the user experience of the Safari add-on. What's your opinion on that?
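A minimal sketch of that idea, assuming remote and debug rule lists are accepted only as JSON (which structurally cannot carry functions, so no `eval()` is ever reached); the function name and flow here are hypothetical:

```js
// Hypothetical loader: remote/debug rules must be JSON. A rule list that
// still contains `target` functions is not valid JSON and is rejected,
// while JSON-safe rules load without eval()/new Function().
async function loadUntrustedRules(url) {
  const response = await fetch(url);
  const rules = JSON.parse(await response.text()); // throws on JS rule lists
  return rules;
}
```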
My two cents: please produce a JSON file so that I can use it to resolve a URL to an RSSHub route. Example:
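A hypothetical shape for such a file; it mirrors the existing radar-rules layout with `target` replaced by a plain `route` string, and the concrete site and route are only for illustration:

```json
{
  "github.com": {
    "www": [
      {
        "title": "Repo Issues",
        "source": "/:user/:repo/issues",
        "route": "/github/issue/:user/:repo"
      }
    ]
  }
}
```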
I need this because I am making an app which resolves URLs to feed URLs. My app is not written in JS and it does not run in a browser. I need JSON.
@Rongronggg9 I have your changes merged; sorry, I don't have access to this repo and DIYgod still seems busy right now. I will nudge again later. cc @dvikan https://github.com/DIYgod/RSSHub/blob/gh-pages/build/radar-rules.json
Context
#635
Motivation
Currently, the `target` field can be either 1. a route string; or 2. a function. The second use case could potentially lead to some security worries. What's more, it effectively makes radar rules unable to be converted to JSON; the rule list has to remain JavaScript. That is, loading remote or locally modified rules cannot avoid the usage of `eval()`, which is strictly prohibited by default in some modern browsers (e.g. Firefox and Safari, with a total market share of ~18%). We have already seen two distributions (AMO, Mac App Store) that have to disable remote rules and ignore local modifications. And no one can promise that the Chrome Web Store would not ban the usage of `eval()` in the future too.

Another reason is that most radar rules use a `target` function that never needs to access the page content, but just matches URL query strings / does some filtering/remapping/regex replacement. These rules should have been compatible with RSSBud/RSSAid, but with the current specification, they can only use a `target` function, which is incompatible with RSSBud/RSSAid. The new specification should change these embarrassing situations.

Last but not least, some users do demand the ability to use their own online rule list instead of the official one. Though the user is the only one responsible for their actions, we should still worry about their data security, since using `eval()` to load remote rules is typical remote code execution (RCE). As a result, such a feature cannot be added before we completely deprecate the usage of `eval()`, and the prerequisite is this issue.

Core Idea
A typical `target` function can be described by these three workflows or their combination:

- `param` -> post-process (string operations) -> filter -> remap
- `URL` -> extract qs -> ...
- `document` -> DOM operations -> ...

Thus, I first constructed a 4-stage workflow. But after deep diving, I think there's no need to distinguish these stages; we just need to define some actions and let rule authors chain them (see the sketch after the action list below):
Actions
Universal parameters
- `action`: action name
- `stopIfEmpty`: whether to stop the chain immediately if the output of the current action is an empty string (optional, default: `true`)

Input (must be placed at the beginning of the chain)

- `{ action: 'path' }`, `{ action: 'path', key: '...' }` (formerly `param[paramName]`)
- `{ action: 'qs' }`, `{ action: 'qs', key: '...' }`
- `{ action: 'URL' }`
- `{ action: 'raw' }`
- `querySelector`: `{ action: 'DOM', ...(To be determined) }` (maybe we could implement some workarounds to make it chainable?)

Modify

- `{ action: 'regex', match: '...' }`, `{ action: 'regex', match: '...', matchGroup: 1 }`
- `{ action: 'regex', match: '...', replace: '...' }`
- string operations (regex matching/replacement should be enough)

Filter
Hmm, regex replacement should be enough, but it's OK to provide a convenient action.
Remap
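For illustration, here is a hypothetical rule chaining these actions. The action objects follow the list above; the surrounding fields mirror today's rule shape, and the assumption that the chain lives in the `route` field (plus the exact match/replace semantics) is mine:

```js
// Hypothetical: build /example/user/:uid from a page like
// https://example.com/space/123456?from=search
{
  title: 'User feed',
  source: '/space/:uid',
  route: [
    { action: 'path', key: 'uid' },       // input: '123456' (formerly param.uid)
    { action: 'regex', match: '^\\d+$' },  // keep only numeric IDs
    { action: 'regex', match: '(.+)', replace: '/example/user/$1' },
  ],
}
```

With the default `stopIfEmpty: true`, a non-numeric ID produces an empty output at the second action, the chain stops, and no feed is suggested.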
A Quick Look at the New Specification
Due to the support for optional parameters, #693 is a prerequisite of the new specification.
Under the new specification, the radar rule list should be exported both in `.js` and `.json`. The former, which contains all fields, is preserved for backward compatibility; the latter, in which `target` functions are filtered out and old `target` route strings are remapped to `route`, will be used by new versions of RSSHub Radar.

A use case of remap and `overrideTitle`
Old rule
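A hypothetical old-style rule of this kind (the site, parameters, and route are invented for illustration): its `target` function merely remaps a query-string value.

```js
{
  title: 'Ranking',
  source: '/ranking.php',
  target: (params, url) => {
    // Remap the site's `mode` query value to the route's own naming.
    const mode = new URL(url).searchParams.get('mode');
    const remap = { daily: 'day', weekly: 'week', monthly: 'month' };
    return remap[mode] ? `/example/ranking/${remap[mode]}` : '';
  },
}
```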
New rule
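And a sketch of how the same rule might look under the draft; the `remap` action's `map` parameter and the top-level `overrideTitle` placement are my guesses where the draft is still open:

```js
{
  title: 'Ranking',
  source: '/ranking.php',
  route: [
    { action: 'qs', key: 'mode' },  // 'daily' | 'weekly' | 'monthly'
    { action: 'remap', map: { daily: 'day', weekly: 'week', monthly: 'month' } },
    { action: 'regex', match: '(.+)', replace: '/example/ranking/$1' },
  ],
  overrideTitle: 'Ranking (day/week/month)',
}
```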
More Information
Here is a filtered `radar-rules.js` containing only those rules using a `target` function (based on https://github.com/DIYgod/RSSHub/blob/780031a5e3e6a43691bb877e759a152cc4c8779f/build/radar-rules.js): radar-rules-filtered.tar.gz (the `.js` ext name is banned by GitHub, so I compressed it).

You can filter it by yourself; here's some vim magic:
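If you'd rather script it, here is a rough Node.js equivalent (hypothetical: it assumes the built `radar-rules.js` can be loaded with `require()` and follows the domain → subdomain → rule-array layout, with metadata keys like `_name` mixed in):

```js
const rules = require('./radar-rules.js');

// Collect only the rules whose `target` is a function.
const filtered = {};
for (const [domain, subdomains] of Object.entries(rules)) {
  for (const [subdomain, ruleList] of Object.entries(subdomains)) {
    if (!Array.isArray(ruleList)) continue; // skip metadata such as _name
    const withTargetFn = ruleList.filter((r) => typeof r.target === 'function');
    if (withTargetFn.length > 0) {
      filtered[domain] = filtered[domain] || {};
      filtered[domain][subdomain] = withTargetFn;
    }
  }
}
// Functions don't survive JSON.stringify, so just report the affected domains.
console.log(Object.keys(filtered).join('\n'));
```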
cc @NeverBehave @Cay-Zhang @LeetaoGoooo @TonyRL