
[REVIEW]: Diart: A Python Library for Real-Time Speaker Diarization #5266

Closed
editorialbot opened this issue Mar 16, 2023 · 83 comments
Labels: accepted, published (Papers published in JOSS), Python, recommend-accept (Papers recommended for acceptance in JOSS), review, TeX, Track: 5 (DSAIS) Data Science, Artificial Intelligence, and Machine Learning

Comments


editorialbot commented Mar 16, 2023

Submitting author: @juanmc2005 (Juan Manuel Coria)
Repository: https://github.com/juanmc2005/StreamingSpeakerDiarization
Branch with paper.md (empty if default branch): joss
Version: v0.9.1
Editor: @Fei-Tao
Reviewers: @sneakers-the-rat, @mensisa
Archive: 10.5281/zenodo.12510278

Status

[status badge]

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/cc9807c6de75ea4c29025c7bd0d31996"><img src="https://joss.theoj.org/papers/cc9807c6de75ea4c29025c7bd0d31996/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/cc9807c6de75ea4c29025c7bd0d31996/status.svg)](https://joss.theoj.org/papers/cc9807c6de75ea4c29025c7bd0d31996)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@sneakers-the-rat & @mensisa, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @Fei-Tao know.

Please start on your review when you are able, and be sure to complete it within the next six weeks at the very latest.

Checklists

📝 Checklist for @mensisa

📝 Checklist for @sneakers-the-rat

@editorialbot added the Python, review, TeX, and Track: 5 (DSAIS) Data Science, Artificial Intelligence, and Machine Learning labels on Mar 16, 2023
@editorialbot

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot

Software report:

github.com/AlDanial/cloc v 1.88  T=0.05 s (552.7 files/s, 70207.4 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          23            459            689           1882
Markdown                         2            102              0            366
TeX                              1              3              0             27
YAML                             1              1              4             18
TOML                             1              0              0              6
-------------------------------------------------------------------------------
SUM:                            28            565            693           2299
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot

Wordcount for paper.md is 459

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1109/ASRU51503.2021.9688044 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈


mensisa commented Apr 10, 2023

Review checklist for @mensisa

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/juanmc2005/StreamingSpeakerDiarization?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@juanmc2005) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data or research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@sneakers-the-rat

(pinging to say sorry i am so delayed, working on a time-sensitive writing project and this is on my list for when i finish that this week)


Fei-Tao commented Jun 2, 2023

Hi @sneakers-the-rat, any updates on the review? Please feel free to let us know if you need any help. Thanks again for your time.

@sneakers-the-rat

Sorry, have been having a rough few months. Just getting back up to speed with my work responsibilities, and I think i'll be able to get to this next week. Sorry for the extreme delay.

@sneakers-the-rat

Again extremely sorry, working on this now.


sneakers-the-rat commented Jun 23, 2023

Review checklist for @sneakers-the-rat

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/juanmc2005/StreamingSpeakerDiarization?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
    • yes, MIT
  • Contribution and authorship: Has the submitting author (@juanmc2005) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
    • yes, @juanmc2005 has made the overwhelming majority of commits and LoC, followed by who i presume is the second author, then several of the remaining authors have made some contributions (i honestly don't care who is on the author list as long as the people who did the work are)
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
    • yes, although full disclosure I don't believe in novelty as a factor for review so i probably would have said yes to anything here.
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
    • There isn't much in the paper, but there is a claim that this package can reproduce some results and that it outperforms a competitor package, which might be in the paper in the repository but isn't in the paper in this review? Can the authors provide some clarification on what exactly is being submitted to JOSS? I'll open an issue on this after i finish an overview read of the package
    • edit: I think i'll just review for claims made in the JOSS paper without reviewing the paper that's in the repository, though it does provide good background information that would be very useful to include in the documentation.
    • Ground truth for the benchmarks are included in the repository, but the source data that would be necessary to evaluate them is not present.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
    • JOSS paper doesn't include the same level of detail as the paper in the repo, but does claim to outperform existing tools by a substantial margin, so I want to run the benchmarks myself
  • Human and animal research: If the paper contains original data or research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

General comments: One minor nitpicky thing I would say here is that it would be good to get the images and paper pdf out of the root of the repo as it makes the package feel less tidy than it actually is! other than that looks good!!!

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
    • Do realtime diarization - i tested this with a friend and it worked just fine on my macbook. I didn't evaluate for perf, but audio files were analyzed in less time than their duration (see the sketch after this list).
    • Optimize hyperparams - code ran, but i'm not really sure how to evaluate what happened.
    • Did not test:
      • Distributed optimization - don't have a cluster i would want to deploy this to rn
      • Network audio streams - doesn't seem core to the package and i'll take your word for it! very cool!
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)
    • Performance benchmark claim of realtime evaluation met :)

General comments: Code runs! joy!
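
For reference, the real-time run mentioned in the list above boils down to a few lines of Python. This is a minimal sketch based on the README examples around the archived version (v0.9.x); class names differed in earlier releases, so treat it as illustrative rather than the definitive API:

# Minimal sketch: stream diarization from the default microphone with a
# live plot, roughly what was tested above (assumes diart v0.9.x).
from diart import SpeakerDiarization
from diart.inference import StreamingInference
from diart.sources import MicrophoneAudioSource

pipeline = SpeakerDiarization()  # default segmentation + embedding models
mic = MicrophoneAudioSource()    # default system microphone
inference = StreamingInference(pipeline, mic, do_plot=True)
prediction = inference()         # blocks until the stream ends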

Documentation

Collected docs issue here - juanmc2005/diart#176

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
    • Not currently - actually there's very little explanation about what the package actually does
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
    • Yes, python requirements in setup.cfg and system requirements described in the repo. I've PR'd a conda environment.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
    • Yes, plenty! cli usage, using the inference classes, breaking apart the model components, tuning hyperparameters, etc.
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
    • Not really - the README includes some examples of using the different classes, but the function/class-level documentation is incomplete. I wouldn't expect documentation for every function and method to be there, but I would expect it for what I'm seeing as the "core" classes, e.g. SpeakerSegmentation and OverlapAwareSpeakerEmbedding, which have cursory docstrings but don't explain how they work. Again, not asking for all the content of the (repo) paper to be dumped into the docstrings, but the API-level docs are quite sparse.
    • Not everything necessarily needs to be on readthedocs, but since there are some docstrings present, and there is plenty of benefit from documenting the way this package works (there are some nicely designed modules!) I am not sure why this package only has a README!
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
    • No tests :(
    • There is a benchmark function, but I would need the source audio to run the evaluation. I ran it just now against part of the VoxConverse dataset and got results pretty damn close to the paper, so I totally buy those numbers, but i can't necessarily evaluate the claims about the other datasets if they are substantially different (see the sketch after this checklist).
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support
    • No contribution guidelines :(
    • GitHub issues are clear enough to me re: seeking support/reporting issues.
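
For the record, the VoxConverse check above used the package's benchmarking utility. A hedged sketch, assuming the Benchmark class as shown in the README at review time, with a directory of WAV files and matching reference RTTM files; exact argument names may differ between versions:

# Hedged sketch: evaluate the pipeline against reference annotations.
# Paths are placeholders: /audio holds WAV files, /rttms the matching
# ground-truth RTTM files (e.g. a VoxConverse subset).
from diart import SpeakerDiarization
from diart.inference import Benchmark

benchmark = Benchmark("/audio", "/rttms")
report = benchmark(SpeakerDiarization)  # runs every file and computes DER
print(report)                           # per-file and aggregate metrics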

Additional thoughts on docs:

  • Recommendation: Authors should move some of the discussion in the in-repo paper into the README - I see the abstract, but some rationale at the top about what this package does differently than other packages would be nice (or, ideally, move it into a docs site).

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
    • I'll pass this one, and i generally understand why speaker diarization is useful, but a bit more explanation would be nice - in particular I'm not sure how knowing timestamps would help demixing simultaneous speech for e.g. transcriptions
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
    • Again I'll pass this one because "faster and more accurate is better" is a reasonable enough statement of need, but the JOSS paper would definitely benefit from some description about why this problem is hard, what the barriers are to progress, and what this package does differently.
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
    • Just that this package is faster and more accurate!
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Summary

Thanks for the very lovely package! Code structure is great, clearly written and commented, and all the cli entrypoints run out of the box with very little fuss (aside from what's described in the README for installation, but that's totally fine).

I get the feeling that this is an extension of a prior package, so I'm not quite sure what the expectations for documentation and testing should be - the code itself is readable, the examples seem to cover major functionality, and there are some docstrings, but there isn't anything you would call "API documentation" here.

The most glaring thing missing here is a test suite. The package seems to run fine from what I could tell, but I am unable to evaluate the accuracy of the results because I don't know that the intermediate functions/methods are well tested. I can sympathize with the author in not wanting to write them (again, particularly if this is an extension package to a well-tested prior package), but requiring tests is about more than just making sure the code stays working; it's about keeping the package maintainable and possible to contribute to, for the overall health of the scientific programming ecosystem - if the author gets busy with life and has to drop maintenance, would anyone else be able to pick it up? will anyone be able to run this in 5 years? tests make that question a lot easier to answer. I'm not sure how the other reviewer passed this checkmark because they're not there!
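
To make the ask concrete, even a couple of smoke tests would go a long way. A hypothetical example, not taken from the repository; the attribute it checks is an assumption about the public API rather than a confirmed contract:

# Hypothetical smoke test: the kind of minimal check asked for above,
# runnable with pytest.
import pytest

def test_core_pipeline_importable():
    diart = pytest.importorskip("diart")
    # Constructing the pipeline may download models, so keep this to an
    # existence check and exercise construction in a slower test.
    assert hasattr(diart, "SpeakerDiarization")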

It doesn't necessarily bear on the review process at JOSS, but I do feel like I need to say that I don't love needing to create an account on huggingface.co to test the package, and I am skeptical about making packages dependent on some cloud service existing for free indefinitely (i am aware GitHub is also a platform, but hopefully git makes redundant enough copies of things, and archiveteam is on that), so i would love to see the authors host the model weights elsewhere as another option - even someplace like zenodo would be great. The models are quite small and would even fit in this repository; not sure why they aren't there. Again, this doesn't bear on this review process at all.

That said, there is a lot to love here. (One minor ux thing: it's rare to see progress bars for multiprocessing tasks done well!) The model provisioning worked smoothly once I installed the huggingface-cli, which was lovely (i usually hate that part), and I also loved the live plot of the results when I was using my mic. Overall this is a well executed tool that does exactly what it describes. I wish I had more time to spend with the code (i know this will read as ironic given how long it took me to actually get to the review), as I usually like to hunt around for code improvements that the author can make to provide whatever training or tips i can, but it's also relatively clear they don't need it as this package is very well written. I don't believe in reviewing-as-gatekeeping, so every review is a "pass" from me. My comments here, and boxes left unchecked above, are to indicate to the authors where things could be improved, and to the editor if they need to uphold the checklist.

Issues Raised

Pull requests

@sneakers-the-rat

stalled on juanmc2005/diart#158 - can't run program


arfon commented Oct 1, 2023

@juanmc2005 @Fei-Tao – it looks like @sneakers-the-rat is blocked by juanmc2005/diart#158 – could you agree a plan to move this forward please?

@sneakers-the-rat

I am very sorry for the even more extreme delay than last time. Still on my TODO, but it keeps getting shunted down after prepping for a conference. I'll be releasing a package early next week, after which i'll make time bc this has gone on too long already!


Fei-Tao commented Oct 2, 2023

Hi @arfon, thanks for your follow-up on this submission. Since sneakers-the-rat plans to finish reviewing this submission, can we wait for his response?

@sneakers-the-rat No worries, that's understandable. Thanks for your time in reviewing this submission. I'm looking forward to hearing your comments and suggestions. Please feel free to let us know if you need any help.


arfon commented Oct 2, 2023

> Hi @arfon, thanks for your follow-up on this submission. Since sneakers-the-rat plans to finish reviewing this submission, can we wait for his response?

Of course! Just wanted to prompt to see what is happening here.


Fei-Tao commented Oct 2, 2023

@arfon Great. Thanks for your prompt response!

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈


Fei-Tao commented Jun 29, 2024

@editorialbot set 10.5281/zenodo.12510278 as archive

@editorialbot

Done! archive is now 10.5281/zenodo.12510278


Fei-Tao commented Jun 29, 2024

@editorialbot check references

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1109/ICASSP40776.2020.9052974 is OK
- 10.1145/3292500.3330701 is OK
- 10.1109/ASRU51503.2021.9688044 is OK
- 10.1109/CVPR52688.2022.01842 is OK

MISSING DOIs

- None

INVALID DOIs

- None


Fei-Tao commented Jun 29, 2024

@editorialbot set v0.9.1 as version

@editorialbot

Done! version is now v0.9.1


Fei-Tao commented Jun 29, 2024

@editorialbot recommend-accept

@editorialbot

Attempting dry run of processing paper acceptance...

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1109/ICASSP40776.2020.9052974 is OK
- 10.1145/3292500.3330701 is OK
- 10.1109/ASRU51503.2021.9688044 is OK
- 10.1109/CVPR52688.2022.01842 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot

👋 @openjournals/dsais-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#5554, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot added the recommend-accept (Papers recommended for acceptance in JOSS) label on Jun 29, 2024

Fei-Tao commented Jun 29, 2024

Hi @crvernon, this submission looks good to me now. Can you take it from here? Thanks for your time.


crvernon commented Jul 1, 2024

🔍 checking out the following:

  • reviewer checklists are completed or addressed
  • version set
  • archive set
  • archive names (including order) and title in archive match those specified in the paper
  • archive uses the same license as the repo and is OSI approved as open source
  • archive DOI and version match or redirect to those set by editor in review thread
  • paper is error free - grammar and typos
  • paper is error free - test links in the paper and bib
  • paper is error free - refs preserve capitalization where necessary
  • paper is error free - no invalid refs without justification


crvernon commented Jul 1, 2024

👋 @juanmc2005 - I just have the following few things I need you to address before I move to accept this publication:

  • The version specified in your Zenodo archive (v1) does not match the version declared in the review thread (v0.9.1); see https://zenodo.org/records/12510278. Please edit this to match.
  • The end of the "Summary" section in the paper is abrupt and never discusses the introduction of your proposed solution. I would state something like the following as a last sentence in the "Summary" section: "We introduce Diart as a Python package to efficiently conduct real-time speaker diarization."

Thanks!


crvernon commented Jul 4, 2024

@juanmc2005 just pinging you again on the above.

@juanmc2005

@crvernon I just addressed the changes you requested


crvernon commented Jul 7, 2024

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈


crvernon commented Jul 7, 2024

@editorialbot accept

@editorialbot

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot

Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.

If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.

You can copy the contents for your CITATION.cff file here:

CITATION.cff

cff-version: "1.2.0"
authors:
- family-names: Coria
  given-names: Juan Manuel
  orcid: "https://orcid.org/0000-0002-5035-147X"
- family-names: Bredin
  given-names: Hervé
  orcid: "https://orcid.org/0000-0002-3739-925X"
- family-names: Ghannay
  given-names: Sahar
  orcid: "https://orcid.org/0000-0002-7531-2522"
- family-names: Rosset
  given-names: Sophie
  orcid: "https://orcid.org/0000-0002-6865-4989"
- family-names: Zaouk
  given-names: Khaled
- family-names: Fruend
  given-names: Ingo
  orcid: "https://orcid.org/0000-0003-4594-1763"
- family-names: Higy
  given-names: Bertrand
  orcid: "https://orcid.org/0000-0002-8198-8676"
- family-names: Kesari
  given-names: Amit
- family-names: Thakkar
  given-names: Yagna
contact:
- family-names: Coria
  given-names: Juan Manuel
  orcid: "https://orcid.org/0000-0002-5035-147X"
doi: 10.5281/zenodo.12510278
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Coria
    given-names: Juan Manuel
    orcid: "https://orcid.org/0000-0002-5035-147X"
  - family-names: Bredin
    given-names: Hervé
    orcid: "https://orcid.org/0000-0002-3739-925X"
  - family-names: Ghannay
    given-names: Sahar
    orcid: "https://orcid.org/0000-0002-7531-2522"
  - family-names: Rosset
    given-names: Sophie
    orcid: "https://orcid.org/0000-0002-6865-4989"
  - family-names: Zaouk
    given-names: Khaled
  - family-names: Fruend
    given-names: Ingo
    orcid: "https://orcid.org/0000-0003-4594-1763"
  - family-names: Higy
    given-names: Bertrand
    orcid: "https://orcid.org/0000-0002-8198-8676"
  - family-names: Kesari
    given-names: Amit
  - family-names: Thakkar
    given-names: Yagna
  date-published: 2024-07-07
  doi: 10.21105/joss.05266
  issn: 2475-9066
  issue: 99
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 5266
  title: "Diart: A Python Library for Real-Time Speaker Diarization"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.05266"
  volume: 9
title: "Diart: A Python Library for Real-Time Speaker Diarization"

If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.

Find more information on .cff files here and here.

@editorialbot

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.05266 joss-papers#5582
  2. Wait five minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.05266
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@editorialbot added the accepted and published (Papers published in JOSS) labels on Jul 7, 2024

crvernon commented Jul 7, 2024

🥳 Congratulations on your new publication @juanmc2005! Many thanks to @Fei-Tao for editing and @sneakers-the-rat and @mensisa for your time, hard work, and expertise!! JOSS wouldn't be able to function nor succeed without your efforts.

Please consider becoming a reviewer for JOSS if you are not already: https://reviewers.joss.theoj.org/join

@crvernon closed this as completed on Jul 7, 2024
@editorialbot

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.05266/status.svg)](https://doi.org/10.21105/joss.05266)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.05266">
  <img src="https://joss.theoj.org/papers/10.21105/joss.05266/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.05266/status.svg
   :target: https://doi.org/10.21105/joss.05266

This is how it will look in your documentation:

[DOI badge]

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:

@juanmc2005

Thank you all!
