
[PRE REVIEW]: SeqMetrics: a unified library for performance metrics calculation in Python #6361

Closed
editorialbot opened this issue Feb 16, 2024 · 41 comments
Assignees
Labels
pre-review, Python, Track: 7 (CSISM) Computer science, Information Science, and Mathematics

Comments

@editorialbot

editorialbot commented Feb 16, 2024

Submitting author: @AtrCheema (Ather Abbas)
Repository: https://github.com/AtrCheema/SeqMetrics
Branch with paper.md (empty if default branch): master
Version: v2.0.0
Editor: @mstimberg
Reviewers: @FATelarico, @y1my1, @SkafteNicki
Managing EiC: Daniel S. Katz

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/60529204a3b812ef772e3aa8bceccd16"><img src="https://joss.theoj.org/papers/60529204a3b812ef772e3aa8bceccd16/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/60529204a3b812ef772e3aa8bceccd16/status.svg)](https://joss.theoj.org/papers/60529204a3b812ef772e3aa8bceccd16)

Author instructions

Thanks for submitting your paper to JOSS @AtrCheema. Currently, there isn't a JOSS editor assigned to your paper.

@AtrCheema if you have any suggestions for potential reviewers then please mention them here in this thread (without tagging them with an @). You can search the list of people that have already agreed to review and may be suitable for this submission.

Editor instructions

The JOSS submission bot @editorialbot is here to help you find and assign reviewers and start the main review. To find out what @editorialbot can do for you type:

@editorialbot commands
editorialbot added the pre-review and Track: 7 (CSISM) Computer science, Information Science, and Mathematics labels on Feb 16, 2024
@editorialbot

Hello human, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- None

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot

Software report:

github.com/AlDanial/cloc v 1.88  T=0.08 s (345.4 files/s, 119377.6 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          12           1415           3276           3628
Markdown                         3            162              0            734
YAML                             4             14             13             86
reStructuredText                 7             52            187             55
DOS Batch                        1              8              1             26
make                             1              4              7              9
-------------------------------------------------------------------------------
SUM:                            28           1655           3484           4538
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot

Wordcount for paper.md is 910

@editorialbot

Failed to discover a Statement of need section in paper

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@editorialbot

Five most similar historical JOSS papers:

FitBenchmarking: an open source Python package comparing data fitting software
Submitting author: @tyronerees
Handling editor: @dhhagan (Active)
Reviewers: @johnsamuelwrites, @djmitche
Similarity score: 0.8089

TorchMetrics - Measuring Reproducibility in PyTorch
Submitting author: @justusschock
Handling editor: @taless474 (Retired)
Reviewers: @inpefess, @richrobe, @reneraab
Similarity score: 0.8073

Phylogemetric: A Python library for calculating phylogenetic network metrics
Submitting author: @SimonGreenhill
Handling editor: @arfon (Active)
Reviewers: @krother
Similarity score: 0.8063

TextDescriptives: A Python package for calculating a large variety of metrics from text
Submitting author: @HLasse
Handling editor: @fabian-s (Active)
Reviewers: @RichardLitt, @linuxscout
Similarity score: 0.8038

fseval: A Benchmarking Framework for Feature Selection and Feature Ranking Algorithms
Submitting author: @dunnkers
Handling editor: @diehlpk (Active)
Reviewers: @mcasl, @estefaniatalavera
Similarity score: 0.7996

⚠️ Note to editors: If these papers look like they might be a good match, click through to the review issue for that paper and invite one or more of the authors before considering asking the reviewers of these papers to review again for JOSS.

@danielskatz
Copy link

👋 @AtrCheema - thanks for your submission.

Your paper is a bit unusual, however. It doesn't seem to have sections or references. See the example paper. Please feel free to make changes to your .md file (and perhaps add a .bib file), then use the command @editorialbot check references to check the references, and the command @editorialbot generate pdf after making changes to the .md file or when the references are right to make a new PDF. editorialbot commands need to be the first entry in a new comment.

As some other minor points, your title is grammatically incorrect: it should be "SeqMetrics: a unified library for performance metrics calculation in Python" (no space before a colon; Python with a P, not a p). Please be sure to change these.

I'm going to mark this as paused until you tell me you finished making changes.

@danielskatz danielskatz changed the title [PRE REVIEW]: SeqMetrics : a unified library for performance metrics calculation in python [PRE REVIEW]: SeqMetrics: a unified library for performance metrics calculation in Python Feb 16, 2024
@Sara-Iftikhar

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@Sara-Iftikhar

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@Sara-Iftikhar

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@AtrCheema

@danielskatz Thanks for your feedback. I have finished making the changes you recommended, and the sections are visible now. Could you resume processing?

@danielskatz

@editorialbot check references

👋 @AtrCheema - you still need to fix the DOI in your one reference - see the PDF, where there's clearly an extra prefix, which this command should show. Also, a paper with only one reference seems odd to me, but we can see what reviewers suggest. I'll start the process.

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- None

MISSING DOIs

- None

INVALID DOIs

- https://doi.org/10.1029/2007JD008972 is INVALID because of 'https://doi.org/' prefix
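
For reference, the fix is to store the bare DOI in the .bib entry, without the resolver prefix. A sketch of the relevant field (the surrounding entry key and other fields are not shown here):

```bibtex
% Invalid: doi = {https://doi.org/10.1029/2007JD008972}
doi = {10.1029/2007JD008972}
```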

@danielskatz

👋 @mstimberg - would you be able to edit this submission?

@danielskatz

@editorialbot invite @mstimberg as editor

@editorialbot

Invitation to edit this submission sent!

@mstimberg

> 👋 @mstimberg - would you be able to edit this submission?

Yes, I'd be happy to edit this.

@mstimberg

@editorialbot assign me as editor

@editorialbot

Assigned! @mstimberg is now the editor

@mstimberg

👋 @AtrCheema I will edit this submission and facilitate the review process. Unfortunately, I am currently down with a cold and will not be able to start the search for reviewers before this Friday (hopefully). In case you have any suggestions, please do not hesitate to mention their names here (without tagging their GitHub usernames with an @). Thanks 🙏

@mstimberg

Hi again, @AtrCheema. I've contacted a few potential reviewers over email and am waiting for their responses. From a cursory look at your paper/project, I think that reviewers will ask about comparisons with existing packages – i.e., what does this package offer that e.g. scikit-learn's metrics module, Keras's metrics module, or the torchmetrics package do not?

@AtrCheema

@editorialbot generate pdf

Hi @mstimberg, thank you for your feedback and for highlighting the lack of comparison in the submitted paper. One significant distinction of SeqMetrics from other libraries lies in its extensive coverage of performance metrics for 1-dimensional numerical data. Existing Python libraries offer only a limited number of such metrics in comparison: the metrics sub-module of Keras contains only 24 metrics, while scikit-learn's metrics module covers 45. The torchmetrics library, although it contains 100+ metrics, provides only 48 intended for 1-dimensional data. In contrast, SeqMetrics covers a comprehensive list of 126 metrics spanning diverse fields.

Another noteworthy feature of SeqMetrics is a user-friendly, intuitive web-based GUI hosted on Streamlit, which enables users with no programming background to calculate any of these 126 metrics. We have modified the paper.md file accordingly to reflect these key distinctions.
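
To make the 1-D focus concrete, here is a minimal NumPy sketch of two such metrics (RMSE and Nash-Sutcliffe efficiency) on made-up data; the function names and free-function style below are assumptions for illustration, and SeqMetrics' actual class-based API is documented in its repository.

```python
import numpy as np

# Made-up 1-D observed and predicted series (illustrative only).
true = np.array([3.0, -0.5, 2.0, 7.0])
predicted = np.array([2.5, 0.0, 2.0, 8.0])

def rmse(t, p):
    """Root mean squared error for 1-D sequences."""
    return float(np.sqrt(np.mean((t - p) ** 2)))

def nse(t, p):
    """Nash-Sutcliffe efficiency, widely used for hydrological time series."""
    return float(1.0 - np.sum((t - p) ** 2) / np.sum((t - np.mean(t)) ** 2))

print(rmse(true, predicted))  # ~0.612
print(nse(true, predicted))   # ~0.949
```

A library like SeqMetrics bundles many such functions behind one interface, which is the convenience being claimed over computing each metric by hand as above.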

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@mstimberg

@AtrCheema Many thanks for your update and the changes to the document, I think this will be helpful for the reviewers (and future readers, obviously). I have first positive replies from reviewers, but I will still wait a short time for outstanding replies before officially starting the review.

@mstimberg

@editorialbot add @FATelarico as reviewer

Many thanks again for agreeing to review 🙏

@editorialbot

@FATelarico added to the reviewers list!

@mstimberg

@editorialbot add @y1my1as reviewer

Thanks a lot for agreeing to review 😊

@editorialbot

I'm sorry human, I don't understand that. You can see what commands I support by typing:

@editorialbot commands

@mstimberg

@editorialbot add @y1my1 as reviewer

Not quite sure what went wrong above, trying again…

@editorialbot

@y1my1 added to the reviewers list!

@mstimberg

@editorialbot add @SkafteNicki as reviewer

Thanks again for helping with this review!

@editorialbot

@SkafteNicki added to the reviewers list!

@mstimberg

@editorialbot start review

@editorialbot

OK, I've started the review over in #6450.
