
[REVIEW]: PyDGN: a Python Library for Flexible and Reproducible Research on Deep Learning for Graphs #5713

Closed
editorialbot opened this issue Aug 1, 2023 · 61 comments
Assignees
Labels
accepted · Makefile · published (Papers published in JOSS) · recommend-accept (Papers recommended for acceptance in JOSS) · review · Shell · TeX · Track: 5 (DSAIS) Data Science, Artificial Intelligence, and Machine Learning

Comments


editorialbot commented Aug 1, 2023

Submitting author: @diningphil (Federico Errica)
Repository: https://github.com/diningphil/PyDGN/
Branch with paper.md (empty if default branch): paper
Version: v1.5.0
Editor: @arfon
Reviewers: @idoby, @sepandhaghighi
Archive: 10.5281/zenodo.8396373

Status

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/912b1b0ae8bf2ae2d44aaafc50126b74"><img src="https://joss.theoj.org/papers/912b1b0ae8bf2ae2d44aaafc50126b74/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/912b1b0ae8bf2ae2d44aaafc50126b74/status.svg)](https://joss.theoj.org/papers/912b1b0ae8bf2ae2d44aaafc50126b74)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@idoby & @sepandhaghighi, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @arfon know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Checklists

📝 Checklist for @idoby

📝 Checklist for @sepandhaghighi

editorialbot added the Makefile, review, Shell, TeX, and Track: 5 (DSAIS) Data Science, Artificial Intelligence, and Machine Learning labels Aug 1, 2023
@editorialbot

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot

Software report:

github.com/AlDanial/cloc v 1.88  T=0.13 s (788.0 files/s, 126413.9 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          47           2020           3800           7060
YAML                            28            336            273           1139
Markdown                         5            282              0            438
reStructuredText                15            242            449            240
TeX                              1             21              0            151
SVG                              3              2              0             89
Bourne Shell                     2             20             15             47
DOS Batch                        1              8              1             26
make                             1              4              7              9
TOML                             1              0              0              6
-------------------------------------------------------------------------------
SUM:                           104           2935           4545           9205
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot

Wordcount for paper.md is 1122

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- None

MISSING DOIs

- 10.1109/72.572108 may be a valid DOI for title: Supervised neural networks for the classification of structures
- 10.1109/tnn.2008.2010350 may be a valid DOI for title: Neural network for graphs: A contextual constructive approach
- 10.1109/msp.2017.2693418 may be a valid DOI for title: Geometric deep learning: going beyond Euclidean data

INVALID DOIs

- None


idoby commented Aug 1, 2023

Review checklist for @idoby

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/diningphil/PyDGN/?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@diningphil) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈


arfon commented Aug 1, 2023

@idoby, @sepandhaghighi – This is the review thread for the paper. All of our communications will happen here from now on.

Please read the "Reviewer instructions & questions" in the first comment above. Please create your checklist typing:

@editorialbot generate my checklist

As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention https://github.com/openjournals/joss-reviews/issues/5713 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for the review process to be completed within about 4-6 weeks but please make a start well ahead of this as JOSS reviews are by their nature iterative and any early feedback you may be able to provide to the author will be very helpful in meeting this schedule.


arfon commented Aug 1, 2023

@sepandhaghighi – looks like there's a whitespace character at the start of that command to @editorialbot which is stopping it from responding (a bug we should fix).

Could you retry in a new comment, ensuring the sentence starts with @editorialbot.


sepandhaghighi commented Aug 1, 2023

Review checklist for @sepandhaghighi

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/diningphil/PyDGN/?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@diningphil) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?


arfon commented Aug 21, 2023

👋 @idoby & @sepandhaghighi – just checking in to see how you're getting on with your reviews? It looks like you've both made a start here, do you think you might be able to wrap up your initial reviews in the next week or so so that @diningphil can start responding?

@sepandhaghighi

> 👋 @idoby & @sepandhaghighi – just checking in to see how you're getting on with your reviews? It looks like you've both made a start here, do you think you might be able to wrap up your initial reviews in the next week or so so that @diningphil can start responding?

@arfon 👋
I will complete my review in the next few days 💯


idoby commented Aug 21, 2023

> 👋 @idoby & @sepandhaghighi – just checking in to see how you're getting on with your reviews? It looks like you've both made a start here, do you think you might be able to wrap up your initial reviews in the next week or so so that @diningphil can start responding?

Thanks for the reminder, had forgotten about this. Will comment soon


idoby commented Aug 29, 2023

@diningphil, thanks for submitting this package, it seems like a lot of thought and effort went into it!

A few comments:

Paper:

  1. Please fix the missing DOIs as well as the broken reference to the PyTorch Geometric paper.
  2. The opening sentence of the paper ("The disregard...") is too harsh in my opinion, and could be phrased in a more positive way. I suggest something along the lines of "Evaluation practices in the machine learning (ML) and specifically graph neural network (GNN) communities can often be improved upon considerably [cite here]..."
  3. The paper discusses various ways to use and benefit from PyDGN, but the descriptions are rather abstract and lack motivation. I recommend the authors motivate each feature in the appropriate section of the text and add a small usage snippet to illustrate how each feature is used and integrated into the user's code.
  4. Perhaps dedicate a paragraph or two to the design considerations undertaken by the authors of the software, if any interesting design decisions or tradeoffs were made. The author of another submission ([REVIEW]: Netgraph: Publication-quality Network Visualisations in Python, #5372) took me up on this, and I feel it has made their paper more interesting. If you don't feel there are any software or system design considerations worth including in the text, that is fine too.
  5. Minor: PyTorch should be stylized as such; in the paper, names such as PyTorch are written inconsistently.
  6. Consider citing more relevant papers such as GraphGym (https://arxiv.org/abs/2011.08843) and spelling out how PyDGN is different from, or complementary to, tools such as GraphGym.

Software

  1. In my opinion, the setup shell script makes too many assumptions about the user's environment, mainly the lack of existing virtual environment already created by venv, conda, etc. The requirement to source a shell script is contrary to standard practice in Python. Please formalize your dependencies in setup.py and pyproject.toml (preferably the latter, since the former is considered a legacy install system) and let the user's chosen package manager (e.g., pip, conda, poetry, etc) handle venv creation, dependency resolution and installation.
  2. Furthermore, please do not force the users of your package to install build, wheel, pytest and black, etc, which are required for PyDGN's development workflow but not required to use PyDGN.
  3. The same goes for Jupyter, which should be an optional dependency or detected at runtime, since not every user wants to bring in the entire Jupyter ecosystem to use PyDGN.
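The runtime-detection approach suggested in points 2 and 3 can be sketched as follows. This is a generic pattern, not PyDGN's actual code; the `require_optional` helper and its error message are hypothetical:

```python
import importlib
import importlib.util


def require_optional(package: str, feature: str):
    """Import an optional dependency, or fail with an actionable message.

    Hypothetical helper illustrating runtime detection of optional
    dependencies; it is not part of PyDGN's API.
    """
    if importlib.util.find_spec(package) is None:
        raise ImportError(
            f"The optional dependency '{package}' is required for {feature}. "
            f"Install it with: pip install {package}"
        )
    return importlib.import_module(package)
```

With this pattern, a call such as `require_optional("wandb", "experiment tracking")` only fails when the user actually enables the feature, so the base install stays lean.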

Overall, I think this is very good work! I will be digging deeper into the software itself soon.

@diningphil

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈


diningphil commented Sep 8, 2023

Dear @idoby,

First of all, thank you for the constructive and positive feedback. Thanks to your comments, we were able to simplify the installation process considerably. Below you can find our response so that we can continue the discussion.

Paper:

  1. We added DOIs for all journal papers and the broken reference
  2. You are right, we changed the sentence to make it more positive
  3. We did our best to include code snippets that give the reader an idea of how to carry out the most important steps, namely data splitting (which corresponds to the choice of an evaluation procedure such as 10-fold cross-validation), the definition of a new model, and the definition of a new metric. Because all pieces of the puzzle come together in a way that is transparent to the user, we found it difficult to show other snippets, but we believe that these, combined with the tutorial in our documentation, should be enough to understand the overall mechanism. Any further suggestions are welcome. We also added motivation to the different paragraphs where it was missing.
  4. We added a new paragraph on key design considerations. We identified two of them, namely the use of configuration files to enable fast prototyping and the use of the publish-subscribe pattern to flexibly adapt components of the training loop without touching the main logic. Thank you very much for the suggestion!
  5. We fixed this, thank you for noticing, although we cannot see any other inconsistency like that in the text.
  6. We added the relevant citation and discussed how it differs from PyDGN. Please note that we had already discussed to some extent how the other libraries mentioned differ from ours: they are more focused on the definition of building blocks for graph machine learning models, whereas we try to automate the experiments while keeping the framework flexible enough for everyday research. These libraries can be used within our framework, in particular PyTorch Geometric.
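The publish-subscribe idea mentioned in point 4 can be sketched roughly as follows. Class and event names here are illustrative only and do not reflect PyDGN's actual API:

```python
from collections import defaultdict


class EventDispatcher:
    """Minimal publish-subscribe dispatcher (illustrative sketch)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event: str, callback):
        # Register a callback to run whenever `event` is published.
        self._subscribers[event].append(callback)

    def publish(self, event: str, **payload):
        # Notify all subscribers of `event`, passing the payload along.
        for callback in self._subscribers[event]:
            callback(**payload)


# The training loop publishes events; plug-in components (loggers,
# early stopping, checkpointing) react without the loop knowing them.
dispatcher = EventDispatcher()
losses = []
dispatcher.subscribe("epoch_end", lambda epoch, loss: losses.append((epoch, loss)))

for epoch in range(3):
    loss = 1.0 / (epoch + 1)  # stand-in for a real training step
    dispatcher.publish("epoch_end", epoch=epoch, loss=loss)
```

The appeal of this design is that new behavior is added by subscribing a component, leaving the main training logic untouched.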

Software (modified on main branch):

  1. We completely agree, and we have simplified the installation process considerably: it now requires only a pip install pydgn command (please refer to README.md in the main branch). We also specified the dependencies in the pyproject.toml file and removed the legacy setup files as suggested. Only the strictly required dependencies are now listed. Thank you very much for the help; the code looks much cleaner now.
  2. Handled in the previous point
  3. Handled in the previous point

As a last note, please note that example usage is shown in the README file, and in the aforementioned tutorial the user can find an explanation of the configuration files and how to use them to set up a proper experiment. Examples of configuration files can also be found in the examples folder.
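As an aside, configuration entries that name a component as a string are commonly turned into objects via dotted-path resolution. The sketch below shows that general technique; the `locate` helper is hypothetical and may differ from PyDGN's actual resolution mechanism:

```python
import importlib


def locate(dotted_path: str):
    """Resolve a dotted path like 'collections.OrderedDict' to the object it names.

    Hypothetical helper showing how a config string can select a class;
    PyDGN's real config handling may work differently.
    """
    module_name, _, attr = dotted_path.rpartition(".")
    return getattr(importlib.import_module(module_name), attr)


# A config entry such as `model: collections.OrderedDict` becomes a class:
config = {"model": "collections.OrderedDict"}
model_class = locate(config["model"])
```

This keeps experiment definitions declarative: swapping models or metrics is a one-line change in the configuration file rather than an edit to the framework code.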

Thank you again for the help. We are happy to discuss any more suggestions, if needed, related to the new version of the paper.
I'd also like to tag @arfon to show that the discussion is ongoing =).

Best regards, the authors!

@diningphil

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈


idoby commented Sep 9, 2023

@diningphil I think the intro reads much better and is better motivated now, and the differences between the various existing packages and yours are clearer. The practical use and motivation for the features and design decisions reads well.

Note that something seems to have not rendered well in line 83 (seems like a header didn't render) and that PyDGN is styled inconsistently on line 95.

Regarding the installation procedure: PyDGN should install much more easily now. Please consider not forcing the user to install gpustat, since not all installs have CUDA. The same applies to the wandb dependency: please consider detecting wandb at runtime, not forcing your users to install wandb if they don't use it.

Besides that, I think we're good to go. Please consider updating the paper branch with the latest changes so that when the branch is archived upon acceptance, it would include the changes to the installation procedure and any other enhancement you made to the software.

Good job! 👍


idoby commented Sep 9, 2023

@editorialbot check references

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1109/72.572108 is OK
- 10.1109/TNN.2008.2005605 is OK
- 10.1109/TNN.2008.2010350 is OK
- 10.1109/MSP.2017.2693418 is OK
- 10.1186/s40649-019-0069-y is OK
- 10.1016/j.neunet.2020.06.006 is OK
- 10.1109/TNNLS.2020.2978386 is OK
- 10.1109/TKDE.2020.2981333 is OK
- 10.1109/MCI.2020.3039072 is OK

MISSING DOIs

- None

INVALID DOIs

- None


idoby commented Sep 9, 2023

Hmm, this list doesn't seem to include all of the references in the paper...


diningphil commented Sep 9, 2023

Dear @idoby,

We are happy the changes are satisfactory. In order:

  1. Thanks for spotting the (new) typo and the inconsistency! We fixed them.
  2. We have removed wandb, requests, and gpustat from the dependencies as suggested. If CUDA is used, an exception is now raised for gpustat, as was already the case for wandb, and the user is asked to install the library.
  3. References: most machine learning conference papers do not have DOIs, and among journals the Journal of Machine Learning Research does not issue DOIs. From the reference checker it seems the DOIs provided match the journal papers that we referenced, and no missing DOIs have been found among the others. This looks like correct behavior, but it is my first time with JOSS, so I could well be wrong.
  4. We updated the paper branch for potential future archival.

Thank you again for your careful review. We remain on stand-by for additional exchanges if required =).


arfon commented Oct 1, 2023

@editorialbot set v1.5.0 as version

@editorialbot

Done! version is now v1.5.0


arfon commented Oct 1, 2023

@editorialbot set 10.5281/zenodo.8396373 as archive

@editorialbot

Done! archive is now 10.5281/zenodo.8396373


arfon commented Oct 1, 2023

@editorialbot recommend-accept

@editorialbot

Attempting dry run of processing paper acceptance...

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1109/72.572108 is OK
- 10.1109/TNN.2008.2005605 is OK
- 10.1109/TNN.2008.2010350 is OK
- 10.1109/MSP.2017.2693418 is OK
- 10.1186/s40649-019-0069-y is OK
- 10.1016/j.neunet.2020.06.006 is OK
- 10.1109/TNNLS.2020.2978386 is OK
- 10.1109/TKDE.2020.2981333 is OK
- 10.1109/MCI.2020.3039072 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot

👋 @openjournals/dsais-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#4635, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

editorialbot added the recommend-accept label (Papers recommended for acceptance in JOSS) Oct 1, 2023

arfon commented Oct 1, 2023

@editorialbot accept

@editorialbot

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot

Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.

If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.

You can copy the contents for your CITATION.cff file here:

CITATION.cff

cff-version: "1.2.0"
authors:
- family-names: Errica
  given-names: Federico
  orcid: "https://orcid.org/0000-0001-5181-2904"
- family-names: Bacciu
  given-names: Davide
  orcid: "https://orcid.org/0000-0001-5213-2468"
- family-names: Micheli
  given-names: Alessio
  orcid: "https://orcid.org/0000-0001-5764-5238"
contact:
- family-names: Errica
  given-names: Federico
  orcid: "https://orcid.org/0000-0001-5181-2904"
doi: 10.5281/zenodo.8396373
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Errica
    given-names: Federico
    orcid: "https://orcid.org/0000-0001-5181-2904"
  - family-names: Bacciu
    given-names: Davide
    orcid: "https://orcid.org/0000-0001-5213-2468"
  - family-names: Micheli
    given-names: Alessio
    orcid: "https://orcid.org/0000-0001-5764-5238"
  date-published: 2023-10-01
  doi: 10.21105/joss.05713
  issn: 2475-9066
  issue: 90
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 5713
  title: "PyDGN: a Python Library for Flexible and Reproducible Research
    on Deep Learning for Graphs"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.05713"
  volume: 8
title: "PyDGN: a Python Library for Flexible and Reproducible Research
  on Deep Learning for Graphs"

If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.

Find more information on .cff files here and here.

@editorialbot

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.05713 joss-papers#4636
  2. Wait a couple of minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.05713
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

editorialbot added the accepted and published (Papers published in JOSS) labels Oct 1, 2023

arfon commented Oct 1, 2023

@idoby, @sepandhaghighi – many thanks for your reviews here! JOSS relies upon the volunteer effort of people like you and we simply wouldn't be able to do this without you ✨

@diningphil – your paper is now accepted and published in JOSS ⚡🚀💥

arfon closed this as completed Oct 1, 2023
@editorialbot

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.05713/status.svg)](https://doi.org/10.21105/joss.05713)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.05713">
  <img src="https://joss.theoj.org/papers/10.21105/joss.05713/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.05713/status.svg
   :target: https://doi.org/10.21105/joss.05713

This is how it will look in your documentation: [DOI badge]

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:

@diningphil

Thank you all for the swift review process! It has been a pleasure, and the feedback has been incredibly useful!

Best regards,
the authors


idoby commented Oct 1, 2023

Congrats @diningphil
