p2p peer review #139

Open
lukeburns opened this issue Jul 8, 2017 · 14 comments

@lukeburns

lukeburns commented Jul 8, 2017

An issue to discuss implementation details of p2p peer review. I've documented some of my thoughts on what a p2p review process might look like (see https://github.com/lukeburns/peer-review) fwiw.

My initial thought on an implementation is to create modules in the hyper* ecosystem with mechanisms for publishing feeds under a "publishing feed": a feed that includes identity metadata and a collection of feeds of publications and forwarded publications that would benefit from review (would multifeed or dat-pki be helpful here?), plus linking between feeds so that one can find the reviews of a given feed (hyperdb?).

Is this at all like what you've been thinking?
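
As a rough illustration of what such a "publishing feed" might carry, here is a minimal TypeScript model. Purely illustrative: the type names are made up, and hex strings stand in for hypercore/hyperdrive keys.

```ts
// Hypothetical model of a "publishing feed": identity metadata plus a
// collection of published/forwarded feeds, plus links between feeds.

interface Identity {
  name: string;
  affiliation?: string;
  publicKey: string; // hex-encoded key of the publishing feed itself
}

// One entry in the collection: something the peer published, or something
// they forwarded because it would benefit from review.
interface FeedEntry {
  key: string;                    // key of the publication feed/hyperdrive
  kind: 'publication' | 'forward';
  addedAt: string;                // ISO timestamp
}

// A link between feeds, e.g. "this review feed reviews that publication".
interface FeedLink {
  from: string;                   // key of the reviewing feed
  to: string;                     // key of the reviewed feed
  relation: 'reviews' | 'revises' | 'cites';
}

interface PublishingFeed {
  identity: Identity;
  entries: FeedEntry[];
  links: FeedLink[];
}

// Finding the reviews of a given feed is then a lookup over links
// (the kind of indexed lookup hyperdb might provide).
function reviewsOf(feeds: PublishingFeed[], publicationKey: string): string[] {
  return feeds
    .flatMap(f => f.links)
    .filter(l => l.relation === 'reviews' && l.to === publicationKey)
    .map(l => l.from);
}
```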

@blahah
Member

blahah commented Jul 8, 2017

@lukeburns thanks for opening this! Yes that sounds very close to what I've been thinking!

dat-pki in general is the plan for authenticated distributed group membership for feed creation and subscription, and an iterative peer review system is a particular way of structuring linked feeds with permissions and group membership.

Late here but I'll read your repo tomorrow and write up my thoughts.

@LGro

LGro commented Jul 10, 2017

@lukeburns what do you think about extending your model beyond pre-publication?
While one common entry point would be authors requesting peer review before publishing, another could be someone deciding to review already-published work, or work that is available from preprint sources.

@lukeburns
Author

lukeburns commented Jul 10, 2017

I see two approaches: linking to external sources or, better, replicating the external publication on the network as a static hyperdrive. The latter option is nice because it doesn't require the author to publish on the network for the paper to be reviewed and it helps ensure the availability of content by distributing it across peers.
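
A tiny sketch of those two options, purely illustrative (the type names are hypothetical; a hex key stands in for the key of a static hyperdrive that mirrors the external files):

```ts
// Two ways to bring an external publication into the review network.
type ReviewTarget =
  | { kind: 'external-link'; url: string }                        // just point at the source
  | { kind: 'mirrored-archive'; key: string; sourceUrl: string }; // replicate as a static hyperdrive

// A review can point at either kind of target; mirroring is preferable
// because reviewing peers replicate the content themselves, keeping it
// available even if the original source disappears.
interface Review {
  target: ReviewTarget;
  reviewFeedKey: string; // key of the feed/hyperdrive holding the review
}
```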

@LGro

LGro commented Jul 15, 2017

@lukeburns, do I understand correctly that you propose the following workflow for a review (roughly sketched in code below the list)?

  1. Reviewer or author copies publication to a new review hyperdrive
  2. Creator of the review hyperdrive shares it with the target audience (e.g. author, reviewer group, publisher)
  3. Reviewer adds comments to the hyperdrive alongside the publication
  4. Author comments on reviews / updates publication
  5. Optionally: The review hyperdrive creator makes it public and links the hyperdrive to the updated/released publication so people can see the review process
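
A rough model of those five stages as states a single review hyperdrive moves through. Illustrative only: the names are made up and nothing here is part of an existing hyper* module.

```ts
// States of one review hyperdrive over the workflow above.
type ReviewState =
  | 'created'       // 1. publication copied into a new review hyperdrive
  | 'shared'        // 2. creator shares it with the target audience
  | 'under-review'  // 3. reviewers add comments alongside the publication
  | 'in-revision'   // 4. author comments on reviews / updates the publication
  | 'published';    // 5. optionally made public and linked to the released version

// Allowed transitions; review and revision can iterate before publication.
const transitions: Record<ReviewState, ReviewState[]> = {
  'created': ['shared'],
  'shared': ['under-review'],
  'under-review': ['in-revision', 'published'],
  'in-revision': ['under-review', 'published'],
  'published': [],
};

function canTransition(from: ReviewState, to: ReviewState): boolean {
  return transitions[from].includes(to);
}
```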

@aschrijver

@lukeburns the P2P process is very interesting compared to the traditional process!

@igRo as I understand @lukeburns' proposal, it is more or less like you say, but it is a more open and transparent process.

In step 2 the author forwards to selected peers he/she knows and who should be involved.
Then there would be a step 2a where these peers in turn find other peers who are valuable reviewers (and maybe those forward it again).

In this way scientific work would become available to a larger group sooner, which in turn might lead to quicker validation and better feedback.

@aschrijver

One other thing to consider:

I don't have much experience with scientific review processes in particular, but I've worked a lot with CMSes in a SaaS environment where you have, e.g., content review processes.

Here we never had just one review type. We had many:

  • each tenant had their own review process
  • some tenants used multiple review processes (selected manually or automatically based on content attributes)
  • processes contained parallel as well as sequential flows
  • processes that were very simple, e.g. parallel content review + (unanimous or consensus) approval
  • or much more complex, e.g. content approval + brand compliance approval + regulatory approval + corporate approval
  • we had review processes depending on other review processes (content depending on other draft content)
  • we had altogether different processes, other than review, acting on content
  • etcetera

We used state machines (for the simple cases) and workflow engines (for the complex ones) to implement this.

Now in no way am I recommending that you include a workflow engine. I'm just saying you should think carefully about which processes you are going to support, now and in the future, and design accordingly, so that adaptation and extension do not introduce too many breaking changes.
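
As a hedged illustration of that design advice, one could describe a review process as data rather than hard-coding it, so that new process types can be added without breaking existing ones. All names here are hypothetical.

```ts
// A review process described as data: sequential stages, each containing
// steps that run in parallel, each step with its own approval rule.
type Approval = 'unanimous' | 'majority' | 'any';

interface ReviewStep {
  name: string;        // e.g. "content review", "brand compliance"
  reviewers: string[]; // feed keys of the required reviewers
  approval: Approval;
}

interface ReviewProcess {
  name: string;
  stages: ReviewStep[][]; // outer array: sequential; inner array: parallel
}

// Example: parallel content + compliance review, then a final approval.
const contentAndCompliance: ReviewProcess = {
  name: 'content-and-compliance',
  stages: [
    [
      { name: 'content review', reviewers: ['key-a', 'key-b'], approval: 'majority' },
      { name: 'brand compliance', reviewers: ['key-c'], approval: 'unanimous' },
    ],
    [{ name: 'corporate approval', reviewers: ['key-d'], approval: 'unanimous' }],
  ],
};
```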

@LGro

LGro commented Jul 20, 2017

@aschrijver that 2a step you are mentioning, would you envision it as a restart of the whole process, starting again at 1 with a replication of the publication? I am sceptical about whether and how it would be feasible to let non-owners extend the visibility of an encrypted hyperdrive based on dat-pki.

(Thanks all for demonstrating that a lower-case L is a really bad choice for my username. Also sorry @igRo for the confusion.)

@aschrijver

aschrijver commented Jul 20, 2017

First of all, this was my interpretation of how things work; you'll have to ask @lukeburns to be sure.
But ya, that's basically what it boils down to, I guess.

But I'm not sure, because allowing the review to spread out over an organically growing network of peers (peer-to-peer-to-peer, etcetera, which you don't control as the original author) raises the question of how you know that all the vital reviewers are done reviewing (those that are required to participate, not the nice-to-haves).

@aschrijver

aschrijver commented Jul 20, 2017

Some more analysis based on my previous observation:

A review could be addressed to:

  • a boundless number of peers (public review)
  • boundless but with a bounded set of required reviewers (public + internal review)
  • restricted to a set of required reviewers (internal review)

A review process could stop:

  • never; it collects validation indefinitely (not a good idea)
  • when cancelled / stopped by the author (if the paper was inaccurate or bad, or the author is happy)
  • when cancelled by some other authority (the institution?)
  • when the required set of reviewers has finished
  • at a specific moment in time, or after a fixed timespan
  • when a threshold is reached (consensus, number of reviews, etc.)

This raises the follow-up question: how do you stop a review process and avoid people wasting time?

All these choices have (potentially significant) design impact and lead to further questions.
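
A hedged sketch of those stop conditions as data plus a single check, illustrative only (the names are made up):

```ts
// Possible stop conditions for a review process.
type StopCondition =
  | { kind: 'never' }
  | { kind: 'cancelled-by'; who: 'author' | 'authority' }
  | { kind: 'required-reviewers-done'; required: string[] }
  | { kind: 'deadline'; endsAt: Date }
  | { kind: 'threshold'; minReviews: number };

// Current state of a running review.
interface ReviewStatus {
  finishedReviewers: string[];
  reviewCount: number;
  cancelledBy?: 'author' | 'authority';
  now: Date;
}

function shouldStop(cond: StopCondition, status: ReviewStatus): boolean {
  switch (cond.kind) {
    case 'never':
      return false;
    case 'cancelled-by':
      return status.cancelledBy === cond.who;
    case 'required-reviewers-done':
      return cond.required.every(r => status.finishedReviewers.includes(r));
    case 'deadline':
      return status.now.getTime() >= cond.endsAt.getTime();
    case 'threshold':
      return status.reviewCount >= cond.minReviews;
  }
}
```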

@lukeburns
Author

@LGro a minimal and fairly generic implementation might be a hypercore-archiver + a feed of the keys in the archive (roughly sketched in code after the list):

  • each peer in a review network has a hypercore "replication" feed consisting of hyperdrive keys that they think would benefit from review by their peers and replicates these feeds.
  • each peer follows the replication feeds of chosen peers.
  • upon learning of a new hyperdrive from a peer's feed, each peer has a choice of (1) ignoring it, (2) appending it to their replication feed, or (3) publishing a review feed/hyperdrive and appending it to their replication feed, which is then filtered / reviewed like a normal publication.
  • at any time, a peer can publish a hyperdrive (say an original article, as a live hyperdrive that undergoes revisions, or an external article that they think needs review, as a static hyperdrive) and append it to their replication feed.
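
a rough sketch of those mechanics, purely illustrative (the types and names are made up, not an existing hyper* API; hex strings stand in for hyperdrive keys):

```ts
// A peer's replication feed modelled as an append-only list of hyperdrive keys.
interface ReplicationEntry {
  key: string;                         // hyperdrive key (publication or review)
  kind: 'publication' | 'review' | 'forward';
  reviews?: string;                    // for reviews: key of the reviewed hyperdrive
}

interface Peer {
  id: string;
  feed: ReplicationEntry[];            // this peer's replication feed
  following: Peer[];                   // peers whose replication feeds they read
}

// The three choices a peer has on learning of a new hyperdrive from a peer's feed.
type Reaction = 'ignore' | 'forward' | 'review';

// Apply a reaction by appending to the peer's own replication feed.
function react(peer: Peer, entry: ReplicationEntry, reaction: Reaction, reviewKey?: string): void {
  if (reaction === 'forward') {
    peer.feed.push({ key: entry.key, kind: 'forward' });
  } else if (reaction === 'review' && reviewKey) {
    // the review is itself a new hyperdrive, which is then filtered / forwarded
    // by other peers like a normal publication
    peer.feed.push({ key: reviewKey, kind: 'review', reviews: entry.key });
  }
  // 'ignore' appends nothing
}
```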

this says nothing about the structure of reviews (maybe comments on a hypercore feed or a hyperdrive with latex files). it also says nothing about how peers find each other, so it could work for open or closed review networks.

there are a couple issues with this proposal.

the propagation of reviews through the network might be too slow if it has to go through the same filtering process that publications do (through steps 1, 2, 3). one way around this might be to have all peers auto-replicate reviews. they could even auto-replicate publications if filtering is not necessary for the size of the network.

additionally, while it's important that peers be able to have filtration control, peers with no followers are unheard. one could implement a process by which to "push" messages to new peers (e.g. jayrbolton/dat-wot#7), so that a reviewer can push a review onto the network, whether or not they are followed by other peers, or to send "follow requests." otherwise, a peer needs to find a "champion" who is already connected to other peers somehow and convince them to replicate their publication / review, which might be all one needs in a small review network (e.g. an undergrad student researcher on a network consisting of collaborators on a research program has a review replicated by their advisor).
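
a sketch of what such a "push" might carry, hypothetical shapes only (not dat-wot's actual protocol):

```ts
// A small message a follower-less peer could send to an already-connected peer.
type PushMessage =
  | { type: 'replicate-request'; from: string; key: string }    // "please replicate this hyperdrive"
  | { type: 'follow-request'; from: string; feedKey: string };  // "please follow my replication feed"

// The receiving peer (the "champion") decides whether to act on it;
// here the policy is simply to accept everything.
function handlePush(msg: PushMessage, accept: (key: string) => void): void {
  if (msg.type === 'replicate-request') accept(msg.key);
  else accept(msg.feedKey);
}
```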

a beefier implementation might use dat-pki and allow for selective sharing with groups or individual peers. i think this is a good first step that works with minimal dependencies.

@lukeburns
Author

lukeburns commented Jul 20, 2017

@LGro so to actually answer your question, the key differences between what you said and what i'm imagining are how publications are shared and filtered and the structure of reviews (which i'm not settled on yet):

  1. peer (author or reviewer) publishes a new hyperdrive, appends it to their replication feed (i.e. "submits a review request" to their peers), and optionally shares it directly with peers (whether on- or off-network)
  2. their peers either (a) ignore it, (b) forward it to their peers, or (c) go to step 1 to publish a response
  3. author makes revisions -- return to step 2
  4. (optional) publication or review hyperdrives are cited off-network

@step21

step21 commented Jul 31, 2017

@lukeburns I think it depends on who you see as your audience.
If you would like eventual adoption by academics, I think it would be best/easiest to start with 'traditional' peer review, where articles are submitted/reviewed and then eventually accepted by a 'journal'. Then, in addition to that, there can be 'free for all' review afterwards, or beforehand people could ask for input on work in progress, like for example researchgate also offers. The reason for this is that, for the foreseeable future, people need to show that they have certain acceptable publications, such as when applying for jobs or just generally to establish credentials. This is easiest if a 'journal' curated by a group of specific individuals can build reputation, and then I can show I published there. OTOH, if I had to say 'well, 10 peers, whoever they are, reviewed my work' without knowing who they are or their credentials, it would be worthless for this purpose.
I realise this doesn't address everything mentioned above, or not in as much detail, but it is maybe something to keep in mind.

@lukeburns
Author

@step21 the above proposal isn't free-for-all review, although you could do that. it works for arbitrary networks of peers, so you could easily put together a closed group of reviewers / editors for a journal using this. you could even do more interesting things, e.g. build a trusted network of reviewers underlying a consortium of overlay journals, so that the overlay journals have a consistent and reliable source of reviewers to tap into.

@step21

step21 commented Aug 5, 2017

Sure. @lukeburns thanks for the clarification. That sounds really great. In general I just think that a lot of what I hear from #openscience or the 'against publishers etc.' movement is very much 'free everything' without a replacement, or with a very disorganized one. If projects like sciencefair actually want to include academics and give them a platform, I think it is important to consider these things, like how best to include them. Also because, if I were just a random Tech Guy with an opinion or something, I would just use a blog ;)
