Comparing quantum machine learning with classical neural networks for predicting molecular energies and forces. Spoiler: quantum's pretty good with symmetry, but classical models hold their own.
We're comparing 4 different ML models on two molecules:
- LiH (2 atoms, simple)
- NH₃ (4 atoms, slightly messier)
The 4 models:
- Quantum + rotational symmetry (quantum's best shot)
- Quantum baseline (no symmetry tricks)
- Quantum + graph structure (for when you have more atoms)
- Classical NN + symmetry (the practical baseline)
All of them predict molecular energies and forces pretty well. Using symmetry helps a lot. That's the main takeaway.
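For context on what "using symmetry" means here: a rotationally equivariant model must predict the same energy for a rotated molecule, with forces that rotate along with it. A minimal sketch of how you could test that property on any model (`predict` is a hypothetical stand-in, not this repo's API):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def check_equivariance(predict, coords, atol=1e-5):
    """predict(coords) -> (energy, forces), coords/forces of shape (n_atoms, 3)."""
    R = Rotation.random().as_matrix()   # random 3D rotation matrix
    e0, f0 = predict(coords)
    e1, f1 = predict(coords @ R.T)      # rotate every atom position by R
    # Energy should be invariant; forces should co-rotate with the molecule.
    return np.isclose(e0, e1, atol=atol) and np.allclose(f0 @ R.T, f1, atol=atol)
```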
To run the comparison yourself:

```bash
git clone https://github.com/sbisw002/MoleQ-M-L.git
cd MoleQ-M-L
python3.12 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

Then launch the training notebook:

```bash
jupyter notebook "Run LiH comp4.ipynb"
```

Just run all cells. It'll:
- Load the data (already in the repo)
- Train all 4 models
- Dump results in `lih_results/`
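If the first cells fail on import, check the environment before anything else (plain imports, nothing repo-specific):

```python
import jax
import pennylane as qml

print("PennyLane:", qml.version())
print("JAX devices:", jax.devices())  # CPU-only is fine, just slower
```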
Once training finishes, open the reader notebook:

```bash
jupyter notebook "Reader LiH.ipynb"
```

Run all cells, get comparison charts. Done.
- `Run LiH comp4.ipynb` → `Reader LiH.ipynb`
- `Run NH3 comp4.ipynb` → `Reader NH3.ipynb`
- Output: charts showing which model's best
- `DFT_psi4_LiH.ipynb` (and `DFT_psi4_NH3.ipynb`) → new molecular data
- Uses DFT calculations to generate training data (minimal sketch below)
- Needs: Psi4 installed
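If you do install Psi4, generating a single training sample follows the standard Psi4 Python pattern. A minimal sketch, assuming nothing about the notebooks' actual geometry, method, or basis (the values below are placeholders):

```python
import psi4

psi4.set_memory("2 GB")

# Placeholder geometry/method/basis; the repo's notebooks may use different ones.
mol = psi4.geometry("""
Li 0.0 0.0 0.0
H  0.0 0.0 1.6
""")
psi4.set_options({"basis": "sto-3g"})

energy = psi4.energy("scf")    # total energy in Hartree
grad = psi4.gradient("scf")    # nuclear gradient; forces are its negative
print(energy)
print(grad.to_array())
```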
- `Run k fold comparison LiH.ipynb` → `Read k fold results LiH.ipynb`
- Tests whether the differences between models are actually significant (paired-test sketch below)
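For context, "actually significant" here means a paired test across folds, since every model is evaluated on the same splits. A sketch of the idea with made-up per-fold numbers (the notebook's actual statistics may differ):

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-fold test MAEs for two models over the same 5 folds
mae_equivariant = np.array([0.012, 0.011, 0.013, 0.010, 0.012])
mae_baseline    = np.array([0.015, 0.014, 0.016, 0.013, 0.015])

# Paired test, because both models were scored on identical splits
t, p = ttest_rel(mae_equivariant, mae_baseline)
print(f"t = {t:.2f}, p = {p:.4f}")  # small p => the gap is unlikely to be noise
```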
```
MoleQ-M-L/
├── README.md                          ← you are here
├── TECHNICAL_DETAILS.md
├── requirements.txt
│
├── Run LiH comp4.ipynb                ← train & compare LiH
├── Run NH3 comp4.ipynb                ← train & compare NH₃
├── Reader LiH.ipynb                   ← visualize LiH results
├── Reader NH3.ipynb                   ← visualize NH₃ results
├── DFT_psi4_LiH.ipynb
├── DFT_psi4_NH3.ipynb
├── Run k fold comparison LiH.ipynb
├── Read k fold results LiH.ipynb
│
├── eqnn_force_field_data_LiH/         ← pre-generated LiH data
├── eqnn_force_field_data_nh3_new/     ← pre-generated NH₃ data
│
├── lih_results/                       ← created when you run experiments
├── nh3_results/
├── kfold_results_lih/
├── kfold_results/
└── figures/
```
- Open `Run LiH comp4.ipynb` and tweak the parameters at the top (sketch below):
  - `n_runs=3` - how many times to run training
  - `n_epochs=200` - training epochs per run
- Results go to `lih_results/`
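For orientation, that top cell boils down to something like this (the variable names come from this README; everything else in the real cell is the notebook's own):

```python
# First cell of "Run LiH comp4.ipynb" - the two knobs this README mentions.
n_runs = 3      # independent training repetitions (averages out init noise)
n_epochs = 200  # optimization epochs per run; fewer = faster but noisier
```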
See troubleshooting below.
JAX can't find my GPU?
```python
import jax
print(jax.devices())  # should show GPU
```

If it's empty:

```bash
pip install "jax[cuda11_cudnn82]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
```

Out of memory? Shrink the model in the notebook:

```python
RotationallyEquivariantQML(n_qubits=4, depth=4)  # smaller than the default 6, 6
```

Psi4 is a pain to install? Skip it. The datasets are already in the repo; you only need Psi4 to generate fresh data.
Loss goes NaN during training? Reduce the learning rate in the notebook (divide by 10).
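If lowering the learning rate alone doesn't cure it, a generic guard like this can skip the occasional exploding step (plain JAX, not taken from the repo's training loop):

```python
import jax
import jax.numpy as jnp

def safe_sgd_step(params, grads, lr):
    # Plain SGD step, but keep the old params if the update went non-finite.
    new = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    finite = all(bool(jnp.all(jnp.isfinite(x)))
                 for x in jax.tree_util.tree_leaves(new))
    return new if finite else params
```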
- pennylane — quantum ML framework
- jax — autodiff magic
- numpy, scipy — math stuff
- matplotlib — plotting
- scikit-learn — preprocessing
- psi4 — DFT calculations (optional, only for new data)
Full list: requirements.txt
```bibtex
@article{biswas2025perm,
  title={PERM EQ x GRAPH EQ: Equivariant Neural Networks for Quantum Molecular Learning},
  author={Biswas, Saumya and Oswal, Jiten},
  journal={arXiv preprint arXiv:2512.05475},
  year={2025}
}
```

Open an issue on GitHub or create a PR (from your fork) if you want to contribute to this project.
Last Updated: December 2025
Version: 0.8.4