[PRE REVIEW]: wiserank: a platform for running pairwise comparison experiments #7168
Hello human, I'm @editorialbot, a robot that can help you with some common editorial tasks. For a list of things I can do to help you, just type:
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type: @editorialbot generate pdf
Software report:
Commit count by author:
Paper file info:
📄 Wordcount for paper.md
✅ The paper includes a `Statement of need` section
License info: ✅ License found
Hi @ianvanbuskirk, thanks for submitting to JOSS.
@editorialbot query scope
Submission flagged for editorial review.
Hi @samhforbes, thanks for your prompt response. I've made some edits based on the editorialbot's suggestions. With respect to scope, my collaborators and I discussed JOSS's note on web-based software before deciding to submit. Ultimately, we came to think that the wiserank platform could be a valuable tool for researchers as a web-based application, but that, above all else, it contributes a core set of functionalities for running pairwise comparison experiments. These functionalities could be used to run a local, lab-based experiment, or to simulate comparisons and study how different experimental designs impact the results. Overall, we hope wiserank can be a rigorous starting point for many kinds of projects that involve pairwise comparisons. Please let me know if there are other scope-related topics I should touch on!
@editorialbot generate pdf |
Five most similar historical JOSS papers:
- Autorank: A Python package for automated ranking of classifiers
- Multi-attribute task builder
- psychTestR: An R package for designing and conducting behavioural psychological experiments
- Efficiently Learning Relative Similarity Embeddings with Crowdsourcing
- PyExperimenter: Easily distribute experiments and track results
Hi @ianvanbuskirk |
@editorialbot reject |
Paper rejected. |
Submitting author: @ianvanbuskirk (Ian Van Buskirk)
Repository: https://github.com/LarremoreLab/wiserank/
Branch with paper.md (empty if default branch):
Version: v0.1.0
Editor: Pending
Reviewers: Pending
Managing EiC: Samuel Forbes
Status
Status badge code:
Author instructions
Thanks for submitting your paper to JOSS @ianvanbuskirk. Currently, there isn't a JOSS editor assigned to your paper.
@ianvanbuskirk, if you have any suggestions for potential reviewers, then please mention them here in this thread (without tagging them with an @). You can search the list of people who have already agreed to review and who may be suitable for this submission.
Editor instructions
The JOSS submission bot @editorialbot is here to help you find and assign reviewers and start the main review. To find out what @editorialbot can do for you type: