
Releases: mir-group/flare

Pre-release 1.3.3 - Dot product kernel, per-data relative noise, wandb

19 Dec 21:00
960757e

What's Changed

  • Fixed variables in flare/dft_interface/cp2k_util.py to be consistent by @niklundgren in #316
  • Compilers and libraries from conda instead of module load by @YuuuXie in #318
  • change calculate_efs and calculate_energy to instance variables by @aaronchen0316 in #319
  • Cjo - documentation by @cjowen1 in #317
  • add checks for ase calculators with no stress tensor property impleme… by @juliayang in #324
  • Single atom energy dict for issue #326 by @YuuuXie in #327
  • 1.3.1 Bug fixings - stress, pair_style cutsq, offline preprocess, sanity check by @YuuuXie in #328
  • 1.3.2 Non normalized dot product kernel by @YuuuXie in #322
  • 1.3.3 Relative noise for each training data point by @YuuuXie in #323
  • Development by @YuuuXie in #303
  • Add broadcast of kernel string length by @anjohan in #334
    • add MPI for lammps in the unit test


Full Changelog: 1.3.0...1.3.3

Pre-release 1.0.0 - Merge Flare++ & Use yaml for training & On-the-fly with Lammps

28 Nov 20:38

What's Changed

This version includes major changes to the code structure and interfaces (see pull request #303 for details)

  • Rearrange modules
  • Clean up unused code
  • Merge flare++ into flare
  • Use a yaml file for training: the same command runs on-the-fly training, offline training, restarts, and combinations of multiple datasets/models, each with its own config file (a minimal sketch follows this list)
    flare-otf config.yaml
    (see flare/examples for the format and parameters of the yaml file)
  • Add on-the-fly training driven by LAMMPS, and offline training (fake MD/DFT)

Backward compatibility notices

  • freeze_hyps is replaced by train_hyps, which takes an interval (a, b); the hyperparameters are trained only while the number of training data points in the GP model falls within this interval (see the example below)
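For example, training the hyperparameters only while the model holds between 1 and 10 training frames might look like this (values are illustrative; the same migration appears in the 1.1.2 notes below):

otf:
    train_hyps: [1, 10]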


v1.3.0-zenodo

25 Mar 15:48

Release archived on Zenodo for preservation and citation of the Au reconstruction paper in Nature Communications.

Pre-release 1.2.0 - Kokkos acceleration & Lammps syntax change

10 Aug 02:34
ef0eca9

What's Changed

  1. Kokkos multispecies acceleration with type-sorting and matrix-matrix products by @anjohan in #310
  2. Change the syntax of the flare pair style code to fit the 2022 release of LAMMPS

Backward compatibility

  1. Due to the new syntax, the flare LAMMPS plugins must be built against the stable release of 23 June 2022; they are no longer compatible with the 29 Sep 2021 version.
  2. The 2022 version of LAMMPS slightly changes the thermostat dump format in the log.lammps file, so the LAMMPS parser in the latest ASE release (3.22.1) fails. If you need ASE's LAMMPS calculator, or our PyLAMMPS for on-the-fly training, install the master branch of ASE (see below).
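One common way to install the ASE master branch, assuming a pip-based environment (ASE is developed on GitLab):

    pip install git+https://gitlab.com/ase/ase.git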

Full Changelog: 1.1.2...1.2.0

Pre-release 1.1.2

10 Aug 01:58

What's Changed

Full Changelog: 1.0.0...1.1.2

Offline training

  1. Compute and print the mean absolute error at each step (frame), not only at steps where a frame is added to the SGP.
  2. Add data-distribution statistics at the end of training, summarizing how many frames are picked from each dataset.

Bug fixing

  1. Fix a bug in the LAMMPS calculator for prism cells.
  2. Fix the on-the-fly MAE calculation and logging when any of energy/forces/stress is excluded from training.
  3. Allow the DFT calculator to be saved as json in the checkpoint, since some calculators are not picklable.
  4. Add atom_indices to the dict of the dumped SGP.

Other features and tutorials

  1. Add a timer for each part of the OTF training log.
  2. Add mapped uncertainty to the build_map method of SGP_Calculator.
  3. Add a python interface to support customized descriptors. Check the tutorial here.
  4. Add a tutorial for computing thermal conductivity from flare + Phoebe. Check the tutorial here.

Backward compatibility notices

  1. In the yaml file for offline training, the parameters of FakeDFT can be removed, since they are redundant:
dft_calc:
     name: FakeDFT
     kwargs: 
         filename: fake_dft.xyz
         format: extxyz
         index: ":"
         io_kwargs: {}

change to

dft_calc:
     name: FakeDFT
     kwargs: {} 
     params: {}
  2. In the yaml file for offline training, filename changes to filenames, allowing the user to list multiple files as datasets:
 otf: 
     md_engine: Fake
     md_kwargs: 
         filename: fake_dft.xyz

change to

 otf: 
     md_engine: Fake
     md_kwargs: 
         filenames: [fake_dft_1.xyz, fake_dft_2.xyz]

Then, at the end of training, the log file reports how many frames were selected and added to the SGP from fake_dft_1.xyz and fake_dft_2.xyz, respectively.

  3. In the yaml file for online or offline training, set a lower bound on the number of DFT calls before training the hyperparameters:
otf:
    freeze_hyps: 10

change to

otf:
    train_hyps: [1, 10]          # [a, b]: hyps are trained while the number of DFT frames the SGP has collected is between a and b

Compatible LAMMPS version

Stable release 29 September 2021

Stable 0.2.4 - Flare active learning & 2+3 body kernel

21 Apr 18:33
4de1cc2

This is the stable version of flare, compatible with flare_pp v0.1.3.
