[REVIEW]: AutoEIS: An automated tool for analysis of electrochemical impedance spectroscopy using evolutionary algorithms and Bayesian inference #6256
Comments
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks. For a list of things I can do to help you, just type:
For example, to regenerate the paper PDF after making changes to the paper's .md or .bib files, type:
@dap-biospec, @DevT-0 - a reminder prompt to start reviews; your first task is to generate a checklist (see instructions at the top of this thread). Any questions, please ask - Lucy
I will submit my review by this Friday at the latest. Thanks for your patience.
Review of the AutoEIS routine for JOSS

General Remarks on the Scientific Applicability of AutoEIS

Moreover, an essential aspect of impedance spectroscopic measurement is meticulous setup. Unique to electrochemical measurements, the cell constitutes an integral component of the measurement system: each element, and the accompanying chemical reactions and physical phenomena within the cell (e.g., adsorption, charge transfer, mass transfer or diffusion, field-line geometries, or even simple issues like poor contacts and cabling), has a substantial impact on the data. Consequently, awareness and consideration of such potential influences are crucial for accurate analysis. Regrettably, impedance measurements yield scant chemical information with which to identify the origins of artefacts, so assuring the scientific validity of the measurements is challenging.

In the context of Kramers-Kronig transform (KKT) validation, data spanning a broad frequency range would ideally facilitate the application of direct-integration KKT validation checks. However, it is important to note that KKT validation may provide a sufficient, but not necessary, condition for data validation. The implementation of the linear KKT in this code serves merely to eliminate non-conforming points rather than to verify data validity.

Comments on the Code's Programming

0 - README.md
Weaknesses:
Suggestions:
1 - `__init__.py`
Weaknesses:
Improvements:
2 - main.py
Weaknesses:
Improvements:
3 - .cli.py:
Weaknesses:
Improvements:
4 - cli_pyjulia.py:
Weaknesses:
Improvements:
5 - core.py
Data Preprocessing (preprocess_impedance_data):
Strengths:
Weaknesses:
Improvements:
ECM Generation (generate_equivalent_circuits):
Strengths:
Weaknesses:
Improvements:
Bayesian Inference (perform_bayesian_inference):
Strengths:
Weaknesses:
Improvements:
Plausibility Filtering (filter_implausible_circuits):
Strengths:
Weaknesses:
Improvements:
6 - io.py
Analysis:
Weaknesses:
Improvements:
Critical Observations:
7 - julia_helpers.py
Weaknesses:
Improvements:
8 - cli_pyjulia.py
Weaknesses:
Improvements:
10 - Metrics Module: metrics.py
Strengths:
Weaknesses:
Improvements:
_generate_ecm_parallel_julia
Strengths:
Weaknesses:
Improvements:
(Following on from point 5 above) Application to _generate_ecm_parallel_julia:
11 - models.py
Weaknesses:
Improvements:
12 - Parser Module
Weaknesses:
Improvements:
13 - Utility Functions
Weaknesses:
Improvements:
14 - Versioning
Weaknesses:
Improvements:
15 - Testing and Validation
Weaknesses:
Improvements:
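On the linear KKT point raised in the general remarks above: a minimal sketch of a linear Kramers-Kronig consistency check, in the spirit of Boukamp's linear KK test, could look like the following. This is an illustration, not AutoEIS's actual implementation; the function and variable names are made up for the example.

```python
import numpy as np

def lin_kk_residuals(freq, Z, n_rc=20):
    """Fit a KK-compliant basis of RC elements (time constants fixed,
    log-spaced over the measured frequency window) and return relative
    complex residuals. Small residuals suggest, but do not prove,
    KK-consistent data."""
    omega = 2 * np.pi * np.asarray(freq, dtype=float)
    # Time constants fixed across the measured window (Boukamp-style)
    tau = 1.0 / np.logspace(np.log10(omega.min()), np.log10(omega.max()), n_rc)
    # Design matrix: series-resistance column plus one column per RC element
    A = 1.0 / (1.0 + 1j * omega[:, None] * tau[None, :])
    A = np.hstack([np.ones((len(omega), 1)), A])
    # Linear least squares on the stacked real/imaginary parts
    A_ri = np.vstack([A.real, A.imag])
    Z_ri = np.concatenate([Z.real, Z.imag])
    coef, *_ = np.linalg.lstsq(A_ri, Z_ri, rcond=None)
    return (Z - A @ coef) / np.abs(Z)  # complex relative residuals
```

Because the basis is KK-compliant by construction and the fit is linear in the resistances, structured or large residuals flag non-conforming points; as noted in the remarks above, passing such a check is not by itself proof of data validity.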
Hi @DevT-0 - Wow, this certainly gets the award for the longest GitHub issue comment I have seen :) You have really gone through each part with a fine-tooth comb - there is a lot of useful-looking feedback. We need to be clear on what your acceptance blockers (if any) are; we need to understand if this submission meets the minimum JOSS standard. @DevT-0 - could you generate the checklist (see instructions at top of thread) and tick off which criteria are met and which you feel are not met? Where not met, please indicate what needs to be done to improve. For more information on each of the checklist criteria you can see the JOSS documentation (https://joss.readthedocs.io/en/latest/reviewer_guidelines.html). It may also be useful to look through another JOSS review, as our process is quite different from that of other journals (for example, here is one I am currently editing which is near completion - #6264 (comment)).
Hi @lucydot, just checking on the status of this review. Thanks for all your efforts!
Hi @ma-sadeghi I've just started a direct conversation with @DevT-0 outside of this thread and I will update here once I have a reply. @dap-biospec - what is the status of your review? Are you able to start in the next week?
Review checklist for @DevT-0
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
In the Statement of need, I disagree with the authors stating that (or at least with my understanding of this statement):
I think this will be all from me. I have tried to evaluate this submission as a scientific communication. For the rest, I believe I have given detailed point-wise comments above on which to base a decision.
Review checklist for @dap-biospec
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
@ma-sadeghi @lucydot So far I am unable to verify functionality due to the OSError posted.
Hi @ma-sadeghi:
@DevT-0 I can see there are a number of tick boxes which are unticked under Documentation and Software Paper. What needs to be done (at a minimum) so that these can be ticked? There is a "Statement of Need" and "State of the field" (existing related software projects) listed in the paper, so I think these can be ticked off? You also indicate that there is good coverage of testing (it does not need to be complete), so I infer from this that the "automated tests" criterion is also met?
Hi @lucydot, thanks for the follow-up. I'm a bit bandwidth-strapped this week; I'll start addressing the comments next Tuesday.
Hi Lucy, please accept my apologies for any inconvenience and delay my review may have caused. I thought I had followed a thorough review process, during which I compiled 13 pages of detailed and for the most part point-wise comments that sufficiently justified ticking off most of the checklist boxes as well as leaving some unticked. But I realise now the stringent compliance an acceptable review must meet for JOSS. At the same time, given the considerable effort and time I have already spent on this review, I am running out of capacity to contribute further. As such, I would understand if my review cannot be used. I appreciate the opportunity to participate in this unique one-time experience. Thanks and best regards, Dev
Hi @DevT-0 - JOSS takes a different approach to reviewing than traditional journals, with papers being accepted once the reviewers/editors are happy that each item in the checklist has been met. Your contributions are gratefully received - I should be able to take what you have written and infer (alongside my own checks) whether most of the remaining checkboxes can be ticked off. The key aspect we still need to review (and that is beyond my role as editor) is functionality, which I believe @dap-biospec has been working on given the issue raised around installation. Thanks @ma-sadeghi for your update re: availability, and thanks @DevT-0 for your review contributions - we would like to list you as a reviewer in the published paper, as you have already committed much time and expertise to the process, unless you have any objection. However, I won't ping you with any more review-related requests! Lucy
Hello all, a quick message to say I will be out of office from tomorrow, and will check back here on the 18th.
Hi @ma-sadeghi, @dap-biospec - any updates on the response to review / the review?
Hi @ma-sadeghi - how are your updates going?
Sorry for the long wait. We've added lots of minor improvements/bug fixes plus more example notebooks. All CI tests pass, so hopefully no more reproducibility issues. Please feel free to proceed with the review. Thanks for your patience :)
Dear @dap-biospec, please see the update above from the authors ☝️ Are you available to complete the review now that the initial issues have been addressed? I understand it has been some time now since the review started; your continued help with this would be much appreciated.
A note here that from this Monday 29th I will be on leave (⛺) for just over two weeks, returning on the 14th of August, when I'll check in on progress ASAP. If you have any immediate queries, it may be best to ask before I go.
@lucydot Hope you enjoyed your vacation! Just following up to see if there have been any updates. Thanks!
@ma-sadeghi I've not heard anything from the reviewers. I will contact @dap-biospec through email.
@ma-sadeghi I heard back from @dap-biospec and hopefully they will have time to review this shortly. A quick note that I am on leave for the following week and will be back at my desk on 2nd Sept.
@dap-biospec a reminder to revisit this review if possible.
@ma-sadeghi @lucydot The code is still broken and cannot be evaluated.
Sample output on CentOS 9:
The AutoEIS home page has a prominent warning that the function [perform_full_analysis] is disabled. Therefore, it is not surprising that the usage tutorial is broken.
@dap-biospec Thanks for getting back to us. As mentioned in the Exception message, that function is deprecated and is not to be used. Please visit the examples page under our documentation. I'll make sure to update the Basic Usage page to remove it altogether so as not to create further confusion for users. (Basically, that function was never meant to be part of the package; sorry for the confusion.) Apart from this, you should be able to continue with the review.
@dap-biospec I just updated both the documentation and the README file to make it clear to users not to use the envelope function.
Hello @ma-sadeghi
To avoid any confusion, please link to this page. @ma-sadeghi I suggest you ask someone on your team (who is not one of the code developers) to run through the installation and your examples. This review has been running for many months now and I do not want to take advantage of the goodwill of our reviewers. They will be in a good place to catch issues such as this and accelerate the review, so the reviewers can focus on the substance of the code (rather than "getting it to work").
@dap-biospec - are you able to continue with the review?
@ma-sadeghi It appears that you attempted to resolve issues by eliminating the introductory section from the repository altogether. This is not a solution to the bugs in the code. Please include a basic step-by-step tutorial that can be clearly found and followed from the landing page of the repository. It may be prudent to include a verifiable demo section for each claim that you are making in the manuscript. Please include all the required steps in Markdown page(s), leaving linked Jupyter notebooks only for illustrations that cannot be included in the Markdown files. As @lucydot rightfully pointed out, please ask an uninitiated colleague to follow the written tutorial without any additional support and verify that they obtain all the results that you claim and expect to see.
@dap-biospec Thanks for the feedback. Here's how we've addressed your points: The perform_full_analysis function was originally meant as a convenience helper, but it turned out to add unnecessary maintenance complexity and made debugging harder by masking individual steps. We decided to remove it, but this doesn't affect AutoEIS's core functionality. The "basic usage" section was removed from the README to keep the landing page clean and focused. We've linked to the full documentation, which covers everything in detail. On Jupyter notebooks vs. Markdown: good point. Notebooks can be slow to render; to handle that, we convert them to HTML in the documentation, so they're easy to view. Users can still grab the actual notebooks from the repo if they want to run them. We avoid Markdown for examples because it's harder to automate testing for it, whereas the Jupyter notebooks are tested on every push. Per your and @lucydot's suggestion, we had a colleague independently install and use AutoEIS, and they were able to go through the examples. Hope this addresses your concerns.
@ma-sadeghi - thank you for addressing the points raised; it makes it clear for our reviewers what has changed. @dap-biospec - it appears that @ma-sadeghi and team have addressed a number of your concerns. Are you able to confirm functionality? I think this is the priority for the review to progress. Once we are happy with core functionality, I can support in reviewing other aspects (e.g. documentation, JOSS paper, general checks).
@ma-sadeghi I attempted to reproduce the first three examples and encountered a variety of errors and inconsistencies in each of "Circuit models 101", "circuit generation", and "detailed workflow". This starts with the lack of arguments for the ae.visualization.set_plot_style() example and continues with runtime errors and unexpected keywords. It does not appear that the authors performed a literal execution of their tutorial on a clean system. The errors may be due to package dependencies or environment configuration. It also appears that the authors exported the Jupyter notebooks as HTML without correcting action steps or providing adequate instructions on the expected behavior. @lucydot at this time functionality is not verifiable.
@ma-sadeghi please also note that your "visit examples" link bypasses the installation steps.
@dap-biospec Thank you for taking the time to review.
All of our example notebooks are automatically tested on GitHub virtual machines. These tests are designed to catch any cell that fails to run. The tests are two-fold: One is executed on every pull request, while another runs on every push to the repository to update the documentation. All recent tests have passed. However, to double-check, I cloned a fresh repository, created a fresh virtual environment, and successfully ran the "detailed_workflow.ipynb" notebook (which covers core AutoEIS functionalities). I’ve recorded the process, and you can view it here. (Regarding
As explained above, the examples are tested and run on GitHub virtual machines as part of our CI workflow.
The snippet below shows the Action responsible for publishing the documentation. This workflow is executed on every push to the repository:

```yaml
- name: Build the documentation
  run: |
    uv run jupyter nbconvert \
      --ExecutePreprocessor.timeout=3600 \
      --to notebook --execute --inplace examples/*.ipynb
    cd doc
    uv run make html
```

Based on this snippet, prior to generating the HTML files, all example notebooks are re-run and the results are updated. If any cell fails, the published HTML reflects the errors (I double-checked our online documentation, and there were no errors). Additionally, as noted, there's a separate Action (notebooks.yml) that runs for each pull request. This setup ensures that any changes causing failures are caught and cannot be merged.
The "Installation" section in the README is right before the "Usage" section. The assumption was that, prior to diving into the examples, users have already gone through the installation step. If you're experiencing specific issues with your local setup, I'd be glad to help you troubleshoot. Feel free to share details about your environment, dependencies, or any error messages, and I can assist in pinpointing the source of the problem.
@ma-sadeghi Thank you for confirming that the HTML content was literally translated from Jupyter notebooks. This likely explains why the HTML instructions do not work, by the same account as printing dynamic HTML content to PDF does not yield working pages. Please review the previous request to verify that literal execution of your published HTML instructions yields the expected demonstrations. "Literal" means that a reader who has absolutely no prior knowledge of your package can follow your instructions from this page https://github.com/AUTODIAL/AutoEIS/ verbatim (i.e. copy-paste) and arrive at the same results. Please also note that code validation by script, or auto-generation of HTML from notebooks, is not sufficient to make the "as published" tutorial adequate. Hence the earlier request to have the instructions tested by an uninitiated human.
@dap-biospec thank you for your continued efforts testing the code, and @ma-sadeghi for the clear account of the testing and verification setup. Testing the code via GitHub Actions is a widely used approach, as it should reflect what occurs when the software is freshly installed. It is peculiar that the GH Action runs without error, yet a human following the same procedure does get errors. Do you think this could be a difference in Python and/or Julia versions, @ma-sadeghi? I can see the GH workflows. @dap-biospec - could you let us know which Python version you are using? Are you running in a clean Python environment (for example, with a virtual environment or conda)? This might help us understand if there are version conflicts.
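For reference, a clean-environment check along the lines suggested above might look like this. This is a generic sketch using Python's standard venv module; the environment name is arbitrary and not taken from the AutoEIS documentation:

```shell
# Create a throwaway virtual environment and report its Python and pip versions
python3 -m venv fresh-env
./fresh-env/bin/python --version
./fresh-env/bin/python -m pip --version
# Any packages under test would then be installed into this isolated
# environment only, so system-wide packages cannot mask version conflicts.
```

Comparing the reported versions against those used in the GH Actions workflow would show whether the discrepancy is environmental.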
@ma-sadeghi is making a small but consequential omission in the online tutorial section. Code that runs in a Jupyter notebook may not run, or may not produce the same output, outside of it. Hence, a simple HTML export does not work as online documentation. Furthermore, executing the code by copying and pasting into a plain Python prompt does not yield the described results, as outlined above. As a publication, this work should be self-sufficient and accessible to any reader, including non-programming chemists.
@dap-biospec Just to clarify: are you asking whether copying the code cells from the example Jupyter notebooks (.ipynb) and pasting them into a standalone Python script (.py) will produce the same results as running the code directly within the Jupyter notebook? The reason I ask is that if the user simply downloads the example notebooks (in .ipynb format) and runs them cell by cell, they will get the same results. This is what I demonstrated in the screencast I uploaded to YouTube.
@ma-sadeghi I am reviewing the paper/repository at face value. I am following your instructions literally (i.e. the HTML pages exported from Jupyter notebooks) - copying code snippets and executing them in a Python shell. I am not doing anything that the instructions do not call for. The fact that line-by-line execution of your examples in plain Python yields exceptions and missing-argument prompts is a separate issue, which indicates that there are unreported or unrecognized differences in the environment, dependencies, etc.
@dap-biospec I think I got it. Just to recap so you can correct me if I missed anything: you're asking us to verify whether a user can reproduce the results shown in the example notebooks by simply running the code cells in a Python REPL. I'll work on this over the weekend, and if I run into any issues, I'll sort them out. If all goes well, I'll make a screencast and share it next week.
@ma-sadeghi - this sounds like a good plan. What you summarise is also what I think @dap-biospec is suggesting.
Hi @ma-sadeghi - have you got a timescale for completing the above? Thanks - Lucy.
@ma-sadeghi Here are screenshots of the REPL executions. As the screencast is not part of the manuscript, I am going only by the actual posted steps. First, "Circuits 101" leads to a prompt: While you mention lcapy and LaTeX in "Circuits 101", these should be clearly spelled out in the installation section, similarly to Julia, if you are relying on them to a similar extent. Alternatively, you could have separate and clearly marked pages with and without visualization. Please let me know when this is addressed in the code or in the tutorial and I will repeat.
Submitting author: @ma-sadeghi (Mohammad Amin Sadeghi)
Repository: https://github.com/AUTODIAL/AutoEIS/
Branch with paper.md (empty if default branch): joss
Version: v0.0.17
Editor: @lucydot
Reviewers: @dap-biospec, @DevT-0
Archive: Pending
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@dap-biospec & @DevT-0, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @lucydot know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @DevT-0
📝 Checklist for @dap-biospec