Define acceptance process for listing plugins on if registry #617
Comments
hey @jmcook1186 @jawache @narekhovhannisyan @MariamKhalatova @manushak

In summary, I can think of 3 approaches, not mutually exclusive, to balance the need for some oversight with our limited resources:

1. Random code reviews / QA testing by the core team
2. Registry interview / questionnaire
3. For the community, by the community
Thanks for looking into this @pazbardanl

Personally I like the idea of an entry questionnaire for the registry that covers some basic checklist. Ideally we can verify the responses very rapidly (maybe StackBlitz can help with this), or we trust that the responses are true and let further investigation happen in a decentralized way post-acceptance. Maybe some mechanism for people to upvote/downvote plugins might be nice.

One thing that jumps out is the need for a delisting process - probably just a template for people to raise a complaint about a specific plugin, so that we can consider removing it from the site if the questionnaire responses turn out to be dishonest or the plugin breaks.
hi @jmcook1186. Personally I think the questionnaire approach in combination with random auditing / reviewing of registered projects is a good combination: they are not mutually exclusive, but they ARE complementary in a way.

All in all, I think the realistic approach here is to treat the registry as a "quality stamp" - not bulletproof (we just don't have the resources), but one that DOES represent a plugin meeting our standards.
@jmcook1186 @jawache I've spoken to 2 connections I have who are pretty experienced in the open-source domain. Presented with our registry idea and challenges, they both said similar things: it's a classic open-source problem - the contention between having (or aspiring to have) a vast and open community where everybody can contribute, and needing some level of quality standards everyone must adhere to.

Having said that, I realize the word risk here is crucial: it's a risk management problem. We vouch for a plugin while not being able to validate it 100% for having met our standards. Therefore we run the risk of vouching for a bad plugin. As such, this risk needs to be:

- Evaluated
- Mitigated
- Contained

Thanks @pazbardanl, this is great. I laughed when you wrote "Bottom line is that I got no practical advice on this" :) So it sounds like we just need to decide, as you say, what the risk / reward is.

I don't think we can audit - we're not qualified to judge the universe of environmental impact models :/ We can perhaps do some basic auditing of security features, have a linter, check for the existence of tests etc., but it should be mostly automated. I like the idea of an interview which asks them to give some evidence, with a basic checklist of items they have to tick before we list. The rest can be done via the community: star ratings, a "report this plugin" link.

Another approach is to mostly keep the bar low but then tier the plugins, so you get a gold badge if you meet much stronger criteria (evidence of test cases, rich support, 10 reviews, lots of docs, citations, things like that). It's kinda like our process for deciding if a project in the GSF is incubation or graduated: you need to show evidence of a higher bar to be graduated, and there is a process to get graduated, but it doesn't stop projects from launching and experimenting. I kinda like the idea of this being a melting pot of everything (being very inclusive) but surfacing the good ones to the top (having high standards); it matches how we function internally as well.

I think there is a very far future where certain plugins will get approved by 3rd-party auditors, which signals that those plugins are OK to be used for calculating, say, regulatory reporting numbers. We can eventually badge them differently.
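The "mostly automated" auditing mentioned above could start as a simple repo hygiene check. Below is a minimal sketch; the check names, file patterns, and the idea of gating on them are all illustrative assumptions, not an agreed IF process:

```python
import json
from pathlib import Path

# Hypothetical pre-listing hygiene checks for a candidate plugin repo.
# Each check only verifies that an artifact exists; it says nothing
# about quality, which is left to the community feedback loop.
CHECKS = {
    "has_readme": lambda repo: any(repo.glob("README*")),
    "has_license": lambda repo: any(repo.glob("LICENSE*")),
    # IF plugins are TypeScript, so look for *.test.ts files or a test dir
    "has_tests": lambda repo: any(repo.rglob("*.test.ts")) or (repo / "test").is_dir(),
    # a standalone demo manifest (yaml) somewhere in the repo root
    "has_manifest": lambda repo: any(repo.glob("*.yml")) or any(repo.glob("*.yaml")),
}

def audit(repo_path: str) -> dict:
    """Run every hygiene check against a local checkout and report pass/fail."""
    repo = Path(repo_path)
    return {name: bool(check(repo)) for name, check in CHECKS.items()}

if __name__ == "__main__":
    import sys
    print(json.dumps(audit(sys.argv[1] if len(sys.argv) > 1 else "."), indent=2))
```

A CI job on the registry repo could run this against the submitted plugin checkout and block listing until every check passes.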
hey @jawache. Understood.

The melting pot model is a good one. It creates a healthy, meritocratic workflow. I think special attention is needed here to make sure the merits we give (gold badge etc.) are indeed perceived as something worth working towards. Crowdsourcing part of the feedback loop will be most efficient, no doubt.

I understand there will be a website for the registry, but the backend will be GitHub, right? If so, then we're set on the technical aspect; we just need a process around it: who tracks the stars and reviews, what we do when detecting a good (or bad) outlier, etc. I consider coming up with this process in scope of this issue (i.e. I'll come up with one).

Decision making on delisting - if not IF, then who? In my mind I see a GSF-appointed committee, but I honestly have zero clue if this is even realistic. I'm OK with big fat disclaimers and CLI warnings. Who wouldn't be? :)
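On the "who tracks the stars and reviews, what do we do when detecting an outlier" question, one possible sketch is to flag plugins whose community rating sits far from the pack, so humans only review outliers rather than everything. The data shape and the z-score threshold below are illustrative assumptions, not an agreed process:

```python
from statistics import mean, stdev

def flag_outliers(ratings: dict[str, float], z_threshold: float = 2.0) -> list[str]:
    """Return plugin names whose rating is more than z_threshold
    standard deviations from the mean rating across the registry.
    `ratings` maps plugin name -> aggregate community rating (assumed shape)."""
    if len(ratings) < 3:
        return []  # too little data to call anything an outlier
    mu = mean(ratings.values())
    sigma = stdev(ratings.values())
    if sigma == 0:
        return []  # every plugin rated identically; nothing stands out
    return [name for name, r in ratings.items() if abs(r - mu) / sigma > z_threshold]
```

Flagged plugins (good or bad) would then go to whoever owns the delisting / promotion decision for a manual look.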
@jawache @jmcook1186 I've added a section to the HackMD doc: https://hackmd.io/@pazbarda/BkfkezEgA and invited you both with write permissions. The section describes a draft of a "badges model" with criteria for each badge, based on the recent comments above. I've also detailed how the "bronze" and "silver" badges are going to be handed out.
Registry Form outline - please check the relevant boxes below:

- Define, in your own words, what the requirement(s) of your plugin are.
- Please provide citations, links and references that support the validity of the implementation of your plugin.
- Please describe the test cases covered by your plugin's dedicated unit tests.
- Please provide a demo manifest.
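For the "demo manifest" item, a submission might include something along these lines. This is a sketch only: the plugin name, path, and inputs are hypothetical, and the manifest schema details vary between IF versions:

```yaml
name: plugin-demo
description: Minimal manifest exercising the hypothetical my-plugin
initialize:
  plugins:
    my-plugin:
      method: MyPlugin                            # exported class name (assumed)
      path: https://github.com/example/my-plugin  # community plugin location (hypothetical)
tree:
  children:
    server-1:
      pipeline:
        - my-plugin
      inputs:
        - timestamp: '2024-01-01T00:00:00Z'
          duration: 3600
          cpu/utilization: 50
```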
Thanks @pazbardanl, this seems like a good start. We can probably drop the question about whether the README contains sample manifest code and instead insist that a working manifest is submitted as a standalone yaml file. Maybe insisting on 100% test coverage is a bit strict?

I think we can just implement this as an issue-form on the registry GitHub repository. Then it can be a prerequisite for a PR, and we can link to the submitted form from the plugin's card on the registry. I think the right next step is to raise a PR to that repository to add the issue-form, and we can tweak the content together directly on the PR.
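The issue-form suggested above could look roughly like the YAML below, using GitHub's issue-forms syntax. The file path, field ids, and labels are placeholders to be tweaked on the PR:

```yaml
# .github/ISSUE_TEMPLATE/plugin-submission.yml (hypothetical path)
name: Plugin registry submission
description: Request to list a community plugin on the IF registry
title: "[Registry] <plugin name>"
labels: ["registry-submission"]
body:
  - type: textarea
    id: requirements
    attributes:
      label: Plugin requirements
      description: Define, in your own words, what the requirement(s) of your plugin are.
    validations:
      required: true
  - type: textarea
    id: citations
    attributes:
      label: Supporting citations
      description: Links and references that support the validity of the implementation.
    validations:
      required: true
  - type: textarea
    id: tests
    attributes:
      label: Unit test coverage
      description: Describe the test cases covered by your plugin's dedicated unit tests.
    validations:
      required: true
  - type: textarea
    id: manifest
    attributes:
      label: Demo manifest
      description: Paste a working demo manifest, or link to a standalone yaml file.
    validations:
      required: true
```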
Sub of: #633
User story
As a plugin builder, I want my plugin to be discoverable by other IF users in some central location, while keeping control over the plugin in my own repository.
Rationale
The IF has an ethos of decentralising the plugin ecosystem, but we also want people to be able to browse plugins that are not in our direct control. For this, we can build a central registry where we list and link out to community plugins (creating the GitHub repository itself, and setting its permissions and config, is covered separately). The IF team needs to find the right balance between responsibility for the plugins listed and the permissionlessness of the ecosystem (i.e. we don't want to list junk or unsafe plugins, but we also don't want to gatekeep). This ticket is for defining the right processes for deciding what to list and how to organize the registry.
Implementation details
Priority
5/5
Size
L
What does "done" look like?
Process is implemented and documented
Deadline
tbc