[PRE REVIEW]: SeqMetrics: a unified library for performance metrics calculation in Python #6361
Hello human, I'm @editorialbot, a robot that can help you with some common editorial tasks. For a list of things I can do to help you, just type:
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
Five most similar historical JOSS papers:
- FitBenchmarking: an open source Python package comparing data fitting software
- TorchMetrics - Measuring Reproducibility in PyTorch
- Phylogemetric: A Python library for calculating phylogenetic network metrics
- TextDescriptives: A Python package for calculating a large variety of metrics from text
- fseval: A Benchmarking Framework for Feature Selection and Feature Ranking Algorithms
👋 @AtrCheema - thanks for your submission. Your paper is a bit unusual, however: it doesn't seem to have sections or references. See the example paper. Please feel free to make changes to your .md file (and perhaps add a .bib file), then use the command.

As some other minor points, your title is grammatically incorrect: it should be "SeqMetrics: a unified library for performance metrics calculation in Python" (no space before the colon; Python with a capital P, not a lowercase p). Please be sure to change these.

I'm going to mark this as paused until you tell me you have finished making changes.
@editorialbot generate pdf |
Five most similar historical JOSS papers:
- TorchMetrics - Measuring Reproducibility in PyTorch
- FitBenchmarking: an open source Python package comparing data fitting software
- Phylogemetric: A Python library for calculating phylogenetic network metrics
- TextDescriptives: A Python package for calculating a large variety of metrics from text
- fseval: A Benchmarking Framework for Feature Selection and Feature Ranking Algorithms
@danielskatz Thanks for your feedback. I have finished making the changes you recommended, and the sections are visible now. Could you resume processing?
@editorialbot check references

👋 @AtrCheema - you still need to fix the DOI in your one reference - see the PDF, where there's clearly an extra prefix, which this command should show. Also, a paper with only one reference seems odd to me, but we can see what reviewers suggest. I'll start the process.
👋 @mstimberg - would you be able to edit this submission? |
@editorialbot invite @mstimberg as editor |
Invitation to edit this submission sent! |
Yes, I'd be happy to edit this. |
@editorialbot assign me as editor |
Assigned! @mstimberg is now the editor |
👋 @AtrCheema I will edit this submission and facilitate the review process. Unfortunately, I am currently down with a cold and will not be able to start the search for reviewers before this Friday (hopefully). In case you have any suggestions, please do not hesitate to mention their names here (without tagging their GitHub usernames with an @).
Hi again, @AtrCheema. I've contacted a few potential reviewers over email and am waiting for their responses. From a cursory look at your paper/project, I think that reviewers will ask about comparisons with existing packages – i.e., what does this package offer that, e.g., scikit-learn's metrics module, Keras's metrics module, or the torchmetrics package do not?
@editorialbot generate pdf

Hi @mstimberg, thank you for your feedback and for highlighting the lack of comparison in the submitted paper. One significant distinction of SeqMetrics from other libraries is its extensive coverage of performance metrics for 1-dimensional numerical data. Existing Python libraries offer only a limited number of such metrics in comparison: the metrics sub-module of Keras contains only 24 metrics, while scikit-learn's metrics module covers 45. The torchmetrics library, although it contains 100+ metrics, provides only 48 that are intended for 1-dimensional data. In contrast, SeqMetrics covers a comprehensive list of 126 metrics spanning diverse fields. Another noteworthy feature of SeqMetrics is a user-friendly and intuitive web-based GUI hosted on Streamlit, which allows users with no programming background to seamlessly calculate any of these 126 metrics. We have modified the paper.md file accordingly to reflect these key distinctions.
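For readers unfamiliar with this class of libraries, the kind of 1-dimensional metric being counted above can be sketched in a few lines of plain Python. This is a generic illustration of two common regression metrics (RMSE and Nash–Sutcliffe efficiency), not SeqMetrics' actual API; the function names and sample arrays here are my own:

```python
import math

def rmse(true, predicted):
    """Root mean square error between two equal-length 1-D sequences."""
    n = len(true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(true, predicted)) / n)

def nse(true, predicted):
    """Nash-Sutcliffe efficiency: 1 - residual variance / total variance.

    1.0 is a perfect fit; values <= 0 mean the prediction is no better
    than simply using the mean of the observations.
    """
    mean_true = sum(true) / len(true)
    residual = sum((t - p) ** 2 for t, p in zip(true, predicted))
    total = sum((t - mean_true) ** 2 for t in true)
    return 1.0 - residual / total

observed = [1.0, 2.0, 3.0, 4.0]
simulated = [1.1, 1.9, 3.2, 3.8]
print(round(rmse(observed, simulated), 4))  # 0.1581
print(round(nse(observed, simulated), 4))   # 0.98
```

A library like SeqMetrics wraps many such formulas behind a uniform interface, so the value of the package lies in breadth and consistency rather than in any single calculation.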
@AtrCheema Many thanks for your update and the changes to the document, I think this will be helpful for the reviewers (and future readers, obviously). I have first positive replies from reviewers, but I will still wait a short time for outstanding replies before officially starting the review. |
@editorialbot add @FATelarico as reviewer

Many thanks again for agreeing to review 🙏
@FATelarico added to the reviewers list! |
@editorialbot add @y1my1as reviewer

Thanks a lot for agreeing to review 😊
I'm sorry human, I don't understand that. You can see what commands I support by typing:
@editorialbot add @y1my1 as reviewer

Not quite sure what went wrong above, trying again…
@y1my1 added to the reviewers list! |
@editorialbot add @SkafteNicki as reviewer

Thanks again for helping with this review!
@SkafteNicki added to the reviewers list! |
@editorialbot start review |
OK, I've started the review over in #6450. |
Submitting author: @AtrCheema (Ather Abbas)
Repository: https://github.com/AtrCheema/SeqMetrics
Branch with paper.md (empty if default branch): master
Version: v2.0.0
Editor: @mstimberg
Reviewers: @FATelarico, @y1my1, @SkafteNicki
Managing EiC: Daniel S. Katz
Author instructions
Thanks for submitting your paper to JOSS @AtrCheema. Currently, there isn't a JOSS editor assigned to your paper.
@AtrCheema if you have any suggestions for potential reviewers then please mention them here in this thread (without tagging them with an @). You can search the list of people that have already agreed to review and may be suitable for this submission.
Editor instructions
The JOSS submission bot @editorialbot is here to help you find and assign reviewers and start the main review. To find out what @editorialbot can do for you type: