Criteria used to assess openness of projects/tools/platforms? #1
Comments
From @Daniel-Mietchen on May 9, 2018 13:58 @bmkramer I'm certainly looking forward to interaction along these lines!
From @HeidiSeibold on May 10, 2018 9:10 I think this is a very important question which we should probably discuss very early on. For me, important aspects are:
From @Daniel-Mietchen on May 10, 2018 15:50 @HeidiSeibold The question of defining what is meant by "open" also popped up at drdvo/OWLTEH#8, and there are related questions on Ask Open Science (pinging our issue #6):
From @drdvo on May 10, 2018 16:18 You might also find useful the 5R permissions, in relation to the notion of 'Open Educational Resources' (OER) but relevant more generally to definitions of open content.
From @Daniel-Mietchen on May 10, 2018 17:14 @bmkramer In a chat around our Mozsprint, @felliott just brought up your graph at https://user-images.githubusercontent.com/16495178/39816678-e8c85458-5369-11e8-9e09-b09ad6b1fae8.png . Do you have an editable version of that, so we could go and try to filter that by whatever openness criteria we come up with?
From @bmkramer on May 10, 2018 19:0 Yes, I have, it was made in PowerPoint (ahem) - I'll look it up on my hard drive and figure out how to upload/share in a moment. Some background on what is currently on that figure (and what isn't), see also here: The logos are the tools and platforms (not necessarily all open!) that we asked about in our 2015-2016 survey on tool/platform usage, ordered per the different phases of the research workflow that we also used in the survey. The grey lines are tools/platforms that people had indicated using together, either within or between 'adjacent' workflow phases. This co-use represented raw data; later on we did some more thorough analysis to identify tools/platforms preferentially used together - those results are shown here in an interactive table. Finally, the green highlight depicts a hypothetical workflow formed by tools and platforms across the workflow that had been identified as mostly compliant with the (then) version of the principles of the scholarly commons, during a workshop held in San Diego in Sept 2016. (@Daniel-Mietchen - you may remember :-) Enough talk, off to search my hard drive! [edited to add: this is taking a bit longer, apologies. Hope to have it for you by tomorrow, to use on the second sprint day. The first day is well and truly over in my timezone....]
From @bmkramer on May 11, 2018 11:28 OK, here's the editable PowerPoint. Contrary to what you might hope/expect, the circles surrounding the tools, together with the grey lines connecting them, are a static image, because it was generated as a network image in Gephi (based on our survey results). 101innovations_workflow_puzzle.pptx An additional suggestion: if you will be scoring tools along openness criteria, it could be useful to add that info to our database of 400+ tools/platforms, e.g. in column L in the 'Data' tab. (We are behind in updating this list, so feel free to add tool suggestions.) Hope this is helpful!
Just a note to tie this issue together with two others that are focusing on visualizing the open science tool ecosystem (#11) and locating data about it in wikidata (#7).
From @Daniel-Mietchen on May 11, 2018 15:52 I've just browsed the spreadsheet for a while and think we could do a number of things with it:
On that basis, we could then filter for those that meet any of the openness criteria, convert the table into a format compatible with QuickStatements, and edit/create the corresponding Wikidata entries.
From @felliott on May 11, 2018 15:57 I'll start looking at converting it to QuickStatements.
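As a rough illustration of the conversion mentioned above, here is a minimal sketch (in Python) of what turning spreadsheet rows into QuickStatements V1 commands could look like. The file name, the column names, and the property choices (P31 "instance of" → Q7397 "software", P856 "official website") are assumptions for illustration, not the actual layout of the 101innovations database:

```python
import csv

# Minimal sketch, assuming the tools spreadsheet has been exported as "tools.csv"
# with "Tool" and "URL" columns -- both names are assumptions, not the real layout.
def row_to_quickstatements(row):
    """Emit QuickStatements V1 lines that create an item for one tool,
    with an English label, a description, instance-of, and a website."""
    lines = [
        "CREATE",
        'LAST\tLen\t"{}"'.format(row["Tool"]),
        'LAST\tDen\t"tool or platform used in the research workflow"',
        "LAST\tP31\tQ7397",  # instance of (P31): software (Q7397); adjust per tool type
    ]
    if row.get("URL"):
        lines.append('LAST\tP856\t"{}"'.format(row["URL"]))  # official website (P856)
    return "\n".join(lines)

with open("tools.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        print(row_to_quickstatements(row))
        print()  # blank line between items
```

The resulting text could then be pasted into the QuickStatements interface and reviewed before anything is actually written to Wikidata.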
From @Daniel-Mietchen on May 11, 2018 16:15 In cases where there is uncertainty about matches between terms and Wikidata concepts, we could use the Mix'n'match tool, which allows users to do such matching. For an example, see https://tools.wmflabs.org/mix-n-match/#/list/662/unmatched (Medical Subject Headings for Disciplines and Occupations). For the manual, see https://meta.wikimedia.org/wiki/Mix%27n%27match/Manual .
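As a programmatic complement to that manual matching, here is a small sketch that queries the public Wikidata search API (wbsearchentities) for candidate items for a free-text term. The example terms are invented, and the candidates would still need to be confirmed by hand, e.g. in Mix'n'match:

```python
import requests

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def candidate_matches(term, limit=5):
    """Ask the Wikidata wbsearchentities API for items that might match a free-text term."""
    params = {
        "action": "wbsearchentities",
        "search": term,
        "language": "en",
        "type": "item",
        "limit": limit,
        "format": "json",
    }
    response = requests.get(WIKIDATA_API, params=params, timeout=10)
    response.raise_for_status()
    return [(hit["id"], hit.get("label", ""), hit.get("description", ""))
            for hit in response.json().get("search", [])]

# Example terms (invented); each call returns (Q-id, label, description) candidates to review.
for term in ["Zenodo", "Hypothes.is"]:
    print(term, candidate_matches(term))
```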
From @wesm on July 8, 2018 16:55 I don't see "governance" mentioned in this thread so far. Projects may be open source with permissive licenses, but if they do not have open governance (in the style of Apache Software Foundation projects, for example, with structures in place to thwart "corporate takeovers"), it can be problematic in the longer term.
From @dwhly on July 10, 2018 14:19 Good thread.
The only decision that was made initially was that projects be science focused, open source and non-profit. These points (and whether to add others) are certainly open for discussion. We might consider two categories: hard requirements and best practices. The ecosystem is still relatively small, with most projects being either the only representative for a certain function or perhaps one of two. We don't want to exclude folks beyond what we agree are the absolutes.
Perhaps these might be the best practices? I can see that it might be a useful way of calibrating projects or scoring them, maybe as much for their own benefit as anyone else's: what are the best practices, what should you strive for, and what are the trade-offs?
From @bmkramer on July 10, 2018 20:55 Hi Dan, thanks for the reply! Thinking ahead towards the August workshop, it might be a good opportunity to do some sort of assessment or scoring exercise, both for the discussion it might generate, as you indicate, and as a test case for how to operationalize these kinds of principles and practices. (We started something like that in the scholarly commons workshop in San Diego two years ago.) One approach I really like is the one taken by Zenodo in how they outline to which extent they comply with the FAIR principles (http://about.zenodo.org/principles/). Such a 'self-assessment' might be a way to make this less about judging and more about being transparent and accountable (and, as you say, maybe as much for the benefit of providers themselves as for anyone else). Anyway, just thinking out loud...
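To make the self-assessment idea slightly more concrete, here is a rough sketch of how such an assessment could be recorded in machine-readable form, in the spirit of Zenodo's FAIR-principles page. The criteria names, the three-level scale, and the example project are all assumptions rather than an agreed rubric; scores recorded this way could also be copied into the tools database mentioned earlier:

```python
# Hypothetical openness self-assessment for one (invented) project.
# Criteria and the yes/partial/no scale are illustrative only.
SELF_ASSESSMENT = {
    "tool": "ExampleTool",
    "criteria": {
        "open source licence": "yes",
        "open governance": "partial",
        "open APIs / data export": "yes",
        "non-profit / community owned": "no",
    },
    "notes": "Governance is documented, but the board is appointed by a single company.",
}

def summarise(assessment):
    """Count how many criteria are fully met, partially met, or unmet."""
    counts = {"yes": 0, "partial": 0, "no": 0}
    for level in assessment["criteria"].values():
        counts[level] += 1
    return counts

print(summarise(SELF_ASSESSMENT))  # {'yes': 2, 'partial': 1, 'no': 1}
```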
From @bmkramer on May 9, 2018 8:17
Has it already been decided what criteria will be used to assess the openness of projects/tools/platforms to be included in the roadmap (and perhaps also to identify areas where improvement is recommended)?
For instance, the [Principles of Open Scholarly Infrastructure](http://dx.doi.org/10.6084/m9.figshare.1314859) might be useful for assessing the infrastructure of tools and platforms, perhaps in combination with aspects of the FORCE11 principles of the scholarly commons (see also preprint), for both the openness of resulting research objects (Open, FAIR and citable) and openness to participation.
Full disclosure: this ties in closely to some of the work we are ourselves planning to do over the next months*, and it would be great to see if we can collaborate!
*See the proposals for FORCE2018 that @JeroenBosman and I put in:
Copied from original issue: OpenScienceRoadmap/mozilla-sprint-2018#13