[REVIEW]: AutoEIS: An automated tool for analysis of electrochemical impedance spectroscopy using evolutionary algorithms and Bayesian inference #6256
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks. For a list of things I can do to help you, just type:
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@dap-biospec, @DevT-0 - a reminder prompt to start reviews; your first task is to generate a checklist (see instructions at top of this thread). Any questions, please ask - Lucy |
I will submit my review by this Friday at the latest. Thanks for your patience. |
Review of AutoEIS routine for JOSS
General Remarks on the Scientific Applicability of AutoEIS
Moreover, an essential aspect of impedance spectroscopic measurement involves meticulous setup. Unique to electrochemical measurements, the cell constitutes an integral component of the measurement system, with each element and the accompanying chemical reactions and physical phenomena within the cell (e.g., adsorption, charge transfer, mass transfer or diffusion, field-line geometries, or even simple issues like poor contacts and cabling) having a substantial impact on the data. Consequently, awareness and consideration of such potential influences are crucial for accurate analysis. Regrettably, impedance measurements yield scant chemical information to identify the origins of artefacts. Thus, assuring the scientific validity of the measurements is challenging. In the context of Kramers-Kronig transform (KKT) validation, ideally, data spanning a broad frequency range would facilitate the application of direct-integration KKT validation checks. However, it is important to note that KKT validation may provide a sufficient, but not necessary, condition for data validation. The implementation of linear KKT in this code serves merely to eliminate non-conforming points rather than to verify data validity.
Comments on the Code's Programming
0 - README.md
Weaknesses:
Suggestions:
1 - __init__.py
Weaknesses:
Improvements:
2 - main.py
Weaknesses:
Improvements:
3 - cli.py:
Weaknesses:
Improvements:
4 - cli_pyjulia.py:
Weaknesses:
Improvements:
5 - core.py
Data Preprocessing (preprocess_impedance_data)
Strengths:
Weaknesses:
Improvements:
ECM Generation (generate_equivalent_circuits)
Strengths:
Weaknesses:
Improvements:
Bayesian Inference (perform_bayesian_inference)
Strengths:
Weaknesses:
Improvements:
Plausibility Filtering (filter_implausible_circuits)
Strengths:
Weaknesses:
Improvements:
6 - io.py
Analysis:
Weaknesses:
Improvements:
Critical Observations:
7 - julia_helpers.py
Weaknesses:
Improvements:
8 - cli_pyjulia.py
Weaknesses:
Improvements:
10 - Metrics Module: metrics.py
Strengths:
Weaknesses:
Improvements:
_generate_ecm_parallel_julia
Strengths:
Weaknesses:
Improvements:
(Following on from pt. 5) Application to _generate_ecm_parallel_julia:
11 - models.py
Weaknesses:
Improvements:
12 - Parser Module
Weaknesses:
Improvements:
13 - Utility Functions
Weaknesses:
Improvements:
14 - Versioning
Weaknesses:
Improvements:
15 - Testing and Validation
Weaknesses:
Improvements:
|
Hi @DevT-0 - Wow, this certainly gets the award for longest GitHub issue comment I have seen :) You have really gone through each part with a fine-tooth comb - there is a lot of useful-looking feedback. We need to be clear on what your (if any) acceptance blockers are; we need to understand if this submission meets the minimum JOSS standard. @DevT-0 - could you generate the checklist (see instructions at top of thread) and tick off which criteria are met and which you feel are not met? Where not met, please indicate what needs to be done to improve. For more information on each of the checklist criteria you can see the JOSS documentation (https://joss.readthedocs.io/en/latest/reviewer_guidelines.html). It may also be useful to look through another JOSS review as our process is quite different from other journals (for example here is one I am currently editing which is near completion - #6264 (comment)). |
Hi @lucydot, just checking on the status of this review. Thanks for all your efforts! |
Hi @ma-sadeghi I've just started a direct conversation with @DevT-0 outside of this thread and I will update here once I have a reply. @dap-biospec - what is the status of your review? Are you able to start in the next week? |
Review checklist for @DevT-0
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
|
In the Statement of need, I disagree with the authors stating that (or at least per my understanding of this statement): |
I think this will be all from me. I have tried to evaluate this submission as a scientific communication. For the rest, I believe I have given detailed point-wise comments above on which to base a decision. |
Review checklist for @dap-biospec
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
|
@ma-sadeghi @lucydot So far I am unable to verify functionality due to an OSError posted. |
Hi @ma-sadeghi:
@DevT-0 I can see there are a number of tick boxes which are unticked under Documentation and Software Paper. What needs to be done (at a minimum) so that these can be ticked? There is a "Statement of Need" and "State of the field" (existing related software projects) listed in the paper, so I think these can be ticked off? You also indicate that there is good coverage of testing (it does not need to be complete), so I infer from this that the "automated tests" criterion is also met? |
Hi @lucydot, thanks for the follow up. I'm a bit bandwidth-strapped this week, I'll start addressing the comments next Tuesday. |
Hi Lucy, please accept my apologies for any inconvenience or delay my review may have caused. I thought I had followed a thorough review process, during which I compiled 13 pages of detailed and, for the most part, point-wise comments that sufficiently justified ticking off most of the checklist boxes as well as leaving some unticked. But I realise now the stringent requirements an acceptable JOSS review must meet. At the same time, given the considerable effort and time I have already spent on this review, I am running out of capacity to contribute further. As such I would understand if my review cannot be used. I appreciate the opportunity to participate in this unique one-time experience. Thanks and best regards, Dev |
Hi @DevT-0 - JOSS is a different approach to reviewing than traditional journals, with papers being accepted once the reviewers/editors are happy that each item in the checklist has been met. Your contributions are gratefully received - I should be able to take what you have written and infer (alongside my own checks) whether most of the remaining checkboxes can be ticked off. The key aspect we still need to review (and is beyond my role as editor) is functionality, which I believe @dap-biospec has been working on given the issue raised around installation. Thanks @ma-sadeghi for your update re: availability, and thanks @DevT-0 for your review contributions - we would like to list you as a reviewer in the published paper as you have already committed much time and expertise to the process, unless you have any objection to this. However I won't ping you with any more review-related requests! Lucy |
Hello all, a quick message to say I will be out of office from tomorrow, and will check back here on the 18th. |
Hi @ma-sadeghi, @dap-biospec - any updates on response to review / review? |
@dap-biospec - are you able to continue with the review? |
@ma-sadeghi It appears that you attempted to resolve issues by eliminating the introductory section from the repository altogether. This is not a solution to the bugs in the code. Please include a basic step-by-step tutorial that can be clearly found and followed from the landing page of the repository. It may be prudent to include a verifiable demo section for each claim that you are making in the manuscript. Please include all the required steps in markup page(s), leaving linked Jupyter notebooks only for illustrations that cannot be included in the markup files. As @lucydot rightfully pointed out, please ask an uninitiated colleague to follow the written tutorial without any additional support and verify that they obtain all the results that you claim and expect to see. |
@dap-biospec Thanks for the feedback. Here’s how we’ve addressed your points:
The perform_full_analysis function was originally meant as a convenience helper, but it turned out to add unnecessary maintenance complexity and made debugging harder by masking individual steps. We decided to remove it, but this doesn’t affect AutoEIS’s core functionality.
The "basic usage" section was removed from the README to keep the landing page clean and focused. We’ve linked to the full documentation, which covers everything in detail:
On Jupyter notebooks vs. Markdown: Good point. Notebooks can be slow to render. To handle that, we convert them to HTML in the documentation, so they’re easy to view. Users can still grab the actual notebooks from the repo if they want to run them. We avoid Markdown for examples because it’s harder to automate testing for it, and the Jupyter notebooks are tested on every push.
Per your and @lucydot's suggestion, we had a colleague independently install and use AutoEIS, and they were able to go through the examples. Hope this addresses your concerns. |
@ma-sadeghi - thank you for addressing points raised, it makes it clear for our reviewers what has changed. @dap-biospec - it appears that @ma-sadeghi and team have addressed a number of your concerns. Are you able to confirm functionality? I think this is the priority for the review to progress. Once we are happy with core functionality, I can support in reviewing other aspects (e.g. documentation, joss paper, general checks). |
@ma-sadeghi I attempted to reproduce the first three examples and encountered a variety of errors and inconsistencies in each of "Circuit models 101", "circuit generation", and "detailed workflow". This starts with the lack of arguments in the ae.visualization.set_plot_style() example and continues with runtime errors and unexpected keywords. It does not appear that the authors performed a literal execution of their tutorial on a clean system. Errors may be due to package dependencies or environment configuration. It also appears that the authors exported the Jupyter notebooks as HTML without correcting action steps or providing adequate instructions on the expected behavior. @lucydot at this time functionality is not verifiable. |
@ma-sadeghi please also note that your "visit examples" link bypasses installation steps. |
@dap-biospec Thank you for taking the time to review.
All of our example notebooks are automatically tested on GitHub virtual machines. These tests are designed to catch any cell that fails to run. The tests are two-fold: One is executed on every pull request, while another runs on every push to the repository to update the documentation. All recent tests have passed. However, to double-check, I cloned a fresh repository, created a fresh virtual environment, and successfully ran the "detailed_workflow.ipynb" notebook (which covers core AutoEIS functionalities). I’ve recorded the process, and you can view it here. (Regarding
As explained above, the examples are tested and run on GitHub virtual machines as part of our CI workflow.
The snippet below shows the Action responsible for publishing the documentation. This workflow is executed on every push to the repository:

```yaml
- name: Build the documentation
  run: |
    uv run jupyter nbconvert \
      --ExecutePreprocessor.timeout=3600 \
      --to notebook --execute --inplace examples/*.ipynb
    cd doc
    uv run make html
```

Based on this snippet, prior to generating the HTML files, all example notebooks are re-run and their results updated. If any cell fails, the published HTML reflects the errors (I double-checked our online documentation, and there was no error). Additionally, as noted, there’s a separate Action (notebooks.yml) that runs for each pull request. This setup ensures that any changes causing failures are caught and cannot be merged.
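For readers unfamiliar with this pattern, a minimal notebook-testing job of the kind described could look roughly like the following. This is a sketch only: the job name, paths, and action versions are illustrative, not the project's actual notebooks.yml.

```yaml
# Illustrative sketch of a notebook-execution CI job (not the actual notebooks.yml)
name: Test notebooks
on: [pull_request]
jobs:
  run-notebooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      # Install the package plus the notebook toolchain
      - run: pip install . jupyter nbconvert
      # Execute every example notebook; any failing cell fails the job
      - run: jupyter nbconvert --to notebook --execute --inplace examples/*.ipynb
```

Because the job fails on any raised exception, a pull request that breaks an example cannot be merged, which is the guarantee being described.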
The "Installation" section in README is right before the "Usage" section. The assumption was that prior to diving into examples, users have already gone through the installation step. If you're experiencing specific issues with your local setup, I’d be glad to help you troubleshoot. Feel free to share details about your environment, dependencies, or any error messages, and I can assist in pinpointing the source of the problem. |
@ma-sadeghi Thank you for confirming that the HTML content was literally translated from Jupyter notebooks. This likely explains why the HTML instructions do not work, for the same reason that printing dynamic HTML content to PDF does not yield working pages. Please review the previous request to verify that literal execution of your published HTML instructions yields the expected demonstrations. "Literal" means that a reader who has absolutely no prior knowledge of your package can follow your instructions from this page https://github.com/AUTODIAL/AutoEIS/ verbatim (i.e. copy-paste) and arrive at the same results. Please also note that code validation by script, or auto-generation of HTML from notebooks, is not sufficient to make the "as published" tutorial adequate. Hence the earlier request to have the instructions tested by an uninitiated human. |
@dap-biospec thank you for continued efforts testing the code, and @ma-sadeghi for the clear account of your testing and verification setup. Testing the code via GitHub Actions is a widely used approach, as it should reflect what occurs when the software is freshly installed. It is peculiar that the GH Action runs without error, yet a human following the same procedure does get errors. Do you think this could be a difference in Python and/or Julia versions @ma-sadeghi? I can see the GH workflows. @dap-biospec - could you let us know which Python version you are using? Are you running in a clean Python environment (for example, with a virtual environment or conda)? This might help us understand if there are version conflicts. |
@ma-sadeghi is making a small but consequential omission in the online tutorial section. Code that runs in a Jupyter notebook may not run, or may not produce the same output, outside of it. Hence, a simple HTML export does not work as online documentation. Furthermore, execution of the code by copying and pasting into a plain Python prompt does not yield the described results, as outlined above. As a publication, this work should be self-sufficient and accessible to any reader, including non-programming chemists. |
@dap-biospec Just to clarify, are you asking if copying the code cells from the example Jupyter notebooks (.ipynb) and pasting them into a standalone Python script (.py) will produce the same results as running the code directly within the Jupyter notebook? The reason I ask is that if the user simply downloads the example notebooks (in .ipynb format) and runs them cell by cell, they will get the same results. This is what I demonstrated in the screencast I uploaded to YouTube. |
@ma-sadeghi I am reviewing the paper/repository at face value. I am following your instructions literally (i.e. the HTML pages exported from Jupyter notebooks) - copying code snippets and executing them in a Python shell. I am not doing anything that the instructions do not call for. The fact that line-by-line execution of your examples in plain Python yields exceptions and missing-argument prompts is a separate issue, which indicates that there are unreported or unrecognized differences in the environment, dependencies, etc. |
@dap-biospec I think I got it. Just to recap so you can correct me if I missed anything: you’re asking us to verify if a user can reproduce the results shown in the example notebooks by simply running the code cells in a Python REPL. I’ll work on this over the weekend, and if I run into any issues, I’ll sort them out. If all goes well, I’ll make a screencast and share it next week. |
@ma-sadeghi - this sounds like a good plan. What you summarise is also what I think @dap-biospec is suggesting. |
Hi @ma-sadeghi - have you got a timescale for completing the above? Thanks - Lucy. |
@ma-sadeghi Here are screenshots of REPL executions. As the screencast is not a part of the manuscript, I am going only by the actual posted steps. First, "Circuits 101" leads to a prompt:
While you mention lcapy and LaTeX in "Circuits 101", these should be clearly spelled out in the installation section, similarly to Julia, if you are relying on them to a similar extent. Alternatively, you can have separate and clearly marked pages with and without visualization. Please let me know when this is addressed in the code or in the tutorial and I will repeat. |
Hi @dap-biospec and @ma-sadeghi - I've had a run through installation and tutorials myself. In general I try to avoid doing this as editor, but given the considerable time that has passed, and effort already invested, I think it is sensible to take a more direct approach so that we can wrap this up.
Installation
I created an empty conda environment, installed pip, and then followed the instructions given:
I agree with this suggestion from @dap-biospec. To me it makes sense to keep all installation steps on the installation page @ma-sadeghi. It could be included as an optional sub-section for those who want to visualise the circuit (making clear that for most autoeis functionality these additional steps, which are a little more involved than a pip install, are not needed).
Jupyter vs REPL
I think there is an assumption that the tutorials are being run through a Jupyter Notebook - for example, @ma-sadeghi if you are working on the assumption that tutorials are completed within a Jupyter Notebook, you could state this at the top of each tutorial page. To support those who want to use the REPL, you could point to the nice screencast you have developed and demonstrate how to save a figure to file.
Functionality
@dap-biospec I could not reproduce the errors you had in "Circuit generation" and "Detailed workflow". This difference seems unusual to me; perhaps this is a bug which has been fixed since starting the review - I suggest re-installing autoeis (ideally within a virtual environment / conda), if you have not already done so, and re-running through these tutorials. |
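On the point about running the tutorials outside a notebook: in a plain script or REPL, figures are not displayed inline, so they need to be saved explicitly. A generic matplotlib pattern for this (illustrative only - the plot data and filename are placeholders, not AutoEIS's actual plotting API) might look like:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: no display required in a script/REPL
import matplotlib.pyplot as plt

# Placeholder data standing in for an impedance plot
fig, ax = plt.subplots()
ax.plot([1, 2, 3], [1, 4, 9], "o-")
ax.set_xlabel("Re(Z)")
ax.set_ylabel("-Im(Z)")

# In a notebook the figure renders inline; in a script, save it to file instead
fig.savefig("impedance_plot.png", dpi=150)
```

Any figure object returned by a plotting call can be saved the same way, which sidesteps the inline-display assumption entirely.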
@ma-sadeghi I updated autoeis to 0.0.34. |
@lucydot For a repository which is used solely by the authors and close colleagues, any assumptions are warranted. A peer-reviewed publication imposes additional requirements on spelling out assumptions, abbreviations, prerequisites and several other established standards, without which a repository does not rise to the level of a publication. While I can easily guess that ae.visualization.plot_impedance_combo(freq, Z) and similar calls may not have the same output in the REPL as in Jupyter, a) we cannot exclude electrochemists without extensive programming background from the audience and b) the example code must proceed through all steps without errors. Furthermore, I have substantial concerns about rolling code changes. The release says 0.0.34, the list says 0.0.36, and the paper submitted for review still says 0.0.17, if I am not mistaken. In my opinion, the submitted paper must be locked to the release under review, with optional information on obtaining the latest version. Otherwise, the accepted paper can point to materially different code, however well intended the changes were. |
I agree, hence the suggestion to state the assumption of Jupyter Notebook use at the top of the tutorial. I think a case might be made for asking authors to give instructions for REPL and Jupyter. @ma-sadeghi - what are your thoughts - are you able to update tutorials so that they run as written for REPL and Jupyter?
It is very common for code to be updated and new versions released during the JOSS review process. In fact this is encouraged - it means the review process is working (as the code is being improved through review). When a code passes review, we ask authors to make a final release, and the version reported in the paper is updated accordingly.
Excellent 👍 |
FYI @dap-biospec @ma-sadeghi I am going to take two weeks leave, and will be back on the 6th of Jan. |
@ma-sadeghi - what are your thoughts on your comments above - are you able to update tutorials so that they run as written for REPL and Jupyter? @dap-biospec - has your testing raised any further issues? If the Jupyter vs REPL issue is resolved are you happy that stated functionality is demonstrated? |
Hi @lucydot, I apologize for the delay. I agree, here's what we've done to address @dap-biospec's comments:
Hope this addresses the issue.
README
Examples' landing page |
@dap-biospec - has your testing raised any further issues? Please see message above from @ma-sadeghi; they have made clear that examples are expected to run in a Jupyter Notebook (with the assumption being that basic adaptations to the example as written would enable integration into Python REPL or script). |
@ma-sadeghi While the previous errors were cleared, additional errors have been encountered and examples are still not reproducible.
Further testing was halted as irrelevant. |
@lucydot After over a year of efforts to bring the manuscript in line with expectations for a peer-reviewed research article, I arrive at the conclusion that the authors are either unable or unwilling to make a niche GitHub repository accessible to a broad research community beyond its current users. Furthermore, I do not believe that the content of this manuscript makes a substantial new contribution beyond what was already published in this article and, thus, does not warrant a separate publication. Based on the lack of novel contribution, lack of reproducibility, and poor documentation I recommend rejection of this manuscript. |
Thank you @dap-biospec for your time reviewing this submission for JOSS. |
A note here to say that I'm in discussion with authors and editors re: the best way to proceed. |
Here's our response to the reviewer's comments:
We've explained it a few times before and it's not clear to us why "on-the-fly executed notebooks, which are then exported to HTML" is not good enough, and how that's misleading.
We've stripped the cell outputs from the git-committed notebooks to keep the repository size minimal. All examples are rendered as HTML in our documentation website.
We've added clear instructions as callouts in both README, and the examples page in our documentation. That should be sufficient.
This is expected. Equivalent circuits are generated using an evolutionary algorithm, which is a stochastic process. So, you'd expect a different pool of circuits every time. (Of course, you'd expect the algorithm to converge to a few good candidates if you generate a large pool of circuits via
Our automated unit/integration/notebook tests are all passing as of today. I've also triple checked on a brand new Windows machine today. @lucydot We'd appreciate it if you could verify. |
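As background to the stochasticity point above: an evolutionary search draws candidates from a random number generator, so two runs only produce identical pools if the RNG seed is fixed. A toy sketch of this behaviour (a generic stand-in, not AutoEIS's actual API, which delegates circuit generation to Julia):

```python
import random

def generate_circuit_pool(n, seed=None):
    """Toy stand-in for stochastic circuit generation: each call draws a
    pool of candidate circuit strings from a random number generator."""
    rng = random.Random(seed)
    elements = ["R", "C", "L", "P"]
    pool = []
    for _ in range(n):
        # Random series chain of 2-4 elements, e.g. "R-C-P"
        k = rng.randint(2, 4)
        pool.append("-".join(rng.choice(elements) for _ in range(k)))
    return pool

# Unseeded calls give a different pool each run; a fixed seed makes runs repeatable
assert generate_circuit_pool(5, seed=42) == generate_circuit_pool(5, seed=42)
```

This is why reviewers re-running the examples see a different set of candidate circuits each time, without that indicating a bug.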
Submitting author: @ma-sadeghi (Mohammad Amin Sadeghi)
Repository: https://github.com/AUTODIAL/AutoEIS/
Branch with paper.md (empty if default branch): joss
Version: v0.0.17
Editor: @lucydot
Reviewers: @dap-biospec, @DevT-0
Archive: Pending
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@dap-biospec & @DevT-0, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @lucydot know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @DevT-0
📝 Checklist for @dap-biospec