This repository includes source code for training and evaluating meta-learning models for system identification and control via neural state-space models, proposed in the IFAC World Congress paper:
```bibtex
@article{chakrabarty2023meta,
  title={Meta-learning of neural state-space models using data from similar systems},
  author={Chakrabarty, Ankush and Wichern, Gordon and Laughman, Christopher R},
  journal={IFAC-PapersOnLine},
  volume={56},
  number={2},
  pages={1490--1495},
  year={2023},
  publisher={Elsevier}
}
```
- Installation
- How to run
- Included datasets
- Pre-trained model weights
- Contributing
- Copyright and license
## Installation

Create an anaconda environment. Then, in that new environment, clone the repository and install the dependencies:

```sh
git clone https://github.com/merlresearch/meta-learning-state-space
cd meta-learning-state-space && pip install -r requirements.txt
```
## How to run

- Activate the correct environment.
- Run `v1_train_metaLearn_fomaml.py` with `first_order=True` for first-order MAML (FO-MAML), and `first_order=False` for classical MAML.
- (Optional) Run `v1_train_supervised.py` for supervised learning with a limited dataset.
- (Optional) Run `v1_train_universal.py` for supervised learning with the full multi-task dataset (the same data as MAML).
- Run `v1_test_metaLearn_fomaml.py` to compute MAML/FO-MAML performance metrics on arbitrary test data, and `v1_test_benchmark_fomaml.py` to compute MAML/FO-MAML performance on the nonlinear Bouc-Wen benchmark.
- The base-learner network architecture can be changed inside the `networks` directory. The default base-learner is `KoopmanNetMixed`.
- All relevant data is in the `data` directory. Pre-trained weights are in the `saved_weights` directory. More information on these directories is provided below.
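To clarify what the `first_order` flag switches between, the toy example below contrasts the classical MAML meta-gradient (which differentiates *through* the inner adaptation step) with the FO-MAML meta-gradient (which drops the second-order term). This is not the repository's implementation; it is a scalar sketch with quadratic task losses $L_i(\theta) = \tfrac{1}{2}(\theta - c_i)^2$ chosen so that all derivatives are exact and easy to check by hand.

```python
# Toy illustration of classical MAML vs. first-order MAML (FO-MAML).
# Task losses are quadratics L_i(theta) = 0.5 * (theta - c_i)**2, so
# L_i'(theta) = theta - c_i and L_i''(theta) = 1 exactly.

def inner_adapt(theta, c_support, lr_inner):
    """One inner-loop SGD step on the support loss: theta' = theta - lr * L'(theta)."""
    return theta - lr_inner * (theta - c_support)

def meta_grad_maml(theta, c_support, c_query, lr_inner):
    """Classical MAML: differentiate the query loss through the inner step,
    d/dtheta L_query(theta') = (1 - lr_inner * L_support''(theta)) * L_query'(theta'),
    where L_support'' = 1 for these quadratics."""
    theta_adapted = inner_adapt(theta, c_support, lr_inner)
    return (1.0 - lr_inner) * (theta_adapted - c_query)

def meta_grad_fomaml(theta, c_support, c_query, lr_inner):
    """FO-MAML: drop the second-order term and take the query gradient at theta'."""
    theta_adapted = inner_adapt(theta, c_support, lr_inner)
    return theta_adapted - c_query

theta, lr = 0.0, 0.1
g_maml = meta_grad_maml(theta, c_support=1.0, c_query=2.0, lr_inner=lr)
g_fo = meta_grad_fomaml(theta, c_support=1.0, c_query=2.0, lr_inner=lr)
# For quadratic losses, the two gradients differ by exactly (1 - lr_inner).
```

FO-MAML is cheaper because it never forms the inner-loop Hessian term; in this scalar case, that term is simply the factor `(1 - lr_inner)`.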
## Included datasets

The `data` directory contains the data used for training and testing the meta-learning models. The specific files include:

- `train_data/BoucWen.MetaLearn.TrainTestDataset.Simple.mat`: The dataset used for meta-training. The dataset $\mathfrak{D}_{\sf train}$ contains data from $N_{\sf train}=100$ Bouc-Wen systems with 7 parameters ($m_L$, $c_L$, $k_L$, $\alpha$, $\beta$, $\gamma$, $\delta$), where for each system the parameters are selected via uniform sampling within $\pm 50\%$ of the nominal parameter values. Each system is simulated for $T=12$ s at a $750$ Hz sampling rate, with excitation $u = 120\sin(2\pi t)$ for all systems. The script `v1_test_metaLearn_fomaml.py` is a sanity check that ascertains the quality of the meta-inference on a system within $\mathfrak{D}_{\sf train}$. This dataset was generated with MATLAB R2024a.
- `test_data/BoucWen.BenchmarkDataset.mat`: The benchmark dataset proposed at https://www.nonlinearbenchmark.org/benchmarks/bouc-wen. We have ensured that no system in the meta-training set has these benchmark parameters. The first 20% of the 40960-sample noisy benchmark dataset is assumed available as $\mathfrak{D}_{\sf test}$; the final 80% of the output trajectory is to be predicted after adaptation, which is implemented in `v1_test_benchmark_fomaml.py`. This data is available at https://data.4tu.nl/articles/_/12967592.
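The meta-training data generation described above can be sketched in pure Python. The Bouc-Wen model form (a hysteretic oscillator with state $(y, \dot y, z)$ and $\nu = 1$) and the nominal parameter values in the sketch are assumptions taken from the public benchmark description, not values read from this repository's `.mat` files; the $\pm 50\%$ uniform sampling, the $120\sin(2\pi t)$ excitation, and the 12 s / 750 Hz simulation follow the text above, and a benchmark-style first-20% split is shown at the end.

```python
import math
import random

# Nominal Bouc-Wen parameters -- ASSUMED from the benchmark description at
# nonlinearbenchmark.org, for illustration only (not read from the repo's data).
NOMINAL = dict(mL=2.0, cL=10.0, kL=5.0e4, alpha=5.0e4, beta=1.0e3, gamma=0.8, delta=-1.1)

def sample_task(rng, spread=0.5):
    """Uniformly sample each parameter within +/-50% of its nominal value."""
    return {k: v * rng.uniform(1.0 - spread, 1.0 + spread) for k, v in NOMINAL.items()}

def simulate(p, T=12.0, fs=750.0):
    """Simulate one Bouc-Wen system with u(t) = 120*sin(2*pi*t) using fixed-step RK4.
    State x = (y, v, z): displacement, velocity, hysteresis variable (nu = 1):
        mL*y'' + cL*y' + kL*y + z = u(t)
        z' = alpha*y' - beta*(gamma*|y'|*z + delta*y'*|z|)
    Returns the sampled output displacement y."""
    def f(t, x):
        y, v, z = x
        u = 120.0 * math.sin(2.0 * math.pi * t)
        dv = (u - p["cL"] * v - p["kL"] * y - z) / p["mL"]
        dz = p["alpha"] * v - p["beta"] * (p["gamma"] * abs(v) * z + p["delta"] * v * abs(z))
        return (v, dv, dz)
    dt, n = 1.0 / fs, int(T * fs)
    x, out = (0.0, 0.0, 0.0), []
    for i in range(n):
        t = i * dt
        k1 = f(t, x)
        k2 = f(t + dt / 2, tuple(xi + dt / 2 * ki for xi, ki in zip(x, k1)))
        k3 = f(t + dt / 2, tuple(xi + dt / 2 * ki for xi, ki in zip(x, k2)))
        k4 = f(t + dt, tuple(xi + dt * ki for xi, ki in zip(x, k3)))
        x = tuple(xi + dt / 6 * (a + 2 * b + 2 * c + d)
                  for xi, a, b, c, d in zip(x, k1, k2, k3, k4))
        out.append(x[0])  # record the output displacement y
    return out

rng = random.Random(0)
y = simulate(sample_task(rng))   # one randomly sampled meta-training trajectory
n_support = int(0.2 * len(y))    # benchmark-style split: first 20% is observed,
support, query = y[:n_support], y[n_support:]  # the remaining 80% is to be predicted
```

The repository's actual datasets were generated in MATLAB; this sketch only illustrates the sampling and splitting procedure the section describes.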
## Pre-trained model weights

The `saved_weights` directory contains pre-trained weights for the meta-learning models. The specific files include:

- `benchmark/maml_v1_benchmark.pt`: Weights obtained after the meta-inference (adaptation) phase of the MAML meta-model on the Bouc-Wen benchmark data.
- `benchmark/fomaml_v1_benchmark.pt`: Weights obtained after the meta-inference (adaptation) phase of the FO-MAML meta-model on the Bouc-Wen benchmark data.
- `benchmark/reptile_v1_benchmark.pt`: Weights obtained after the meta-inference (adaptation) phase of the Reptile meta-model on the Bouc-Wen benchmark data.
- `metatest/maml_v1_test.pt`: Weights obtained after the meta-inference (adaptation) phase of the MAML model on the meta-testing systems (randomly varying parameters, not the benchmark).
- `metatest/fomaml_v1_test.pt`: Weights obtained after the meta-inference (adaptation) phase of the FO-MAML model on the meta-testing systems (randomly varying parameters, not the benchmark).
- `maml_v1_final.pt`: Weights obtained during meta-training of the MAML model on the meta-training systems.
- `fomaml_v1_final.pt`: Weights obtained during meta-training of the FO-MAML model on the meta-training systems.
- `competitors/reptile_metatrain.pt`: Weights for the Reptile meta-model trained on the meta-training systems. This model is trained by the `v1_train_metaLearn_reptile.py` script.
- `competitors/universal_learned_model.pt`: Weights for the universal model trained on all meta-training systems. This model is trained by the `v1_train_universal.py` script.
- `competitors/supervised_learned_model.??percent.pt`: Weights for the supervised model trained greedily on the benchmark system. This model is trained by the `v1_train_supervised.py` script; the percentage of benchmark-system data seen by the model is given in the filename, e.g., 20 or 80.
## Contributing

See CONTRIBUTING.md for our policy on contributions.
## Copyright and license

Released under the `AGPL-3.0-or-later` license, as found in the LICENSE.md file.
All files, except as noted below:
Copyright (C) 2024 Mitsubishi Electric Research Laboratories (MERL)
SPDX-License-Identifier: AGPL-3.0-or-later
The following files:

- `maml/base_learner.py`
- `maml/maml.py`
- `maml/utils.py`

were taken without modification from https://github.com/learnables/learn2learn/ (license included in LICENSES/MIT.md):

Copyright (c) 2019 Debajyoti Datta, Ian Bunner, Praateek Mahajan, Sebastien Arnold
SPDX-License-Identifier: MIT
The dataset:

- `data/test_data/BoucWen.BenchmarkDataset.mat`

was taken without modification from https://data.4tu.nl/articles/_/12967592 (license included in LICENSES/CC-BY-SA-4.0.md):

Copyright (c) 2020 by Jean-Philippe Noël, Maarten Schoukens
SPDX-License-Identifier: CC-BY-SA-4.0