This repository implements the benchmarking platform called LIPS and provides the necessary utilities to reproduce the generated datasets used in research.
This README is organized as follows:
- 1 Introduction
- 2 Usage example
- 3 Installation
- 4 Access the Codabench page
- 5 Getting Started
- 6 Documentation
- 7 Contribution
- 8 License information
Nowadays, simulators are used in every domain to emulate real-world situations or events, or to reproduce critical situations that require further investigation. Simulators are generally based on physics equations and are costly in terms of computation time.
The learning industrial physical simulation benchmark suite allows evaluating the performance of augmented simulators (aka surrogate models) specialized in a physical domain with respect to various evaluation criteria. The implementation is flexible enough to be adapted to various domains such as power grids, transport, aeronautics, etc. To this end, as depicted in the scheme provided in the figure below, the platform is designed to be modular and includes the following modules:
- The Data module of the platform may be used to import the required datasets or to generate synthetic data (for power grids, for now).
- A Simulator may access the provided data for training or performance evaluation. The platform also gives its users the flexibility to design and implement their own simulators and compare their performance with baselines. Several baseline simulators are already implemented and can be used, e.g., the Direct Current (DC) approximation and two neural-network-based simulators: a Fully Connected (FC) model and the Latent Encoding of Atypical Perturbations network (LEAP net).
- The Evaluation module allows selecting the appropriate criteria among the available implemented metrics. Four categories of metrics are provided:
- ML-related metrics
- Physics compliance
- Industrial readiness
- Generalization metrics
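Concretely, a selection of criteria can be pictured as a mapping from category to metric names. The sketch below is illustrative only; the category and metric names are hypothetical and do not reflect the actual LIPS configuration schema:

```python
# Hypothetical selection of evaluation criteria, one entry per category.
# Keys and metric names are illustrative, not the actual LIPS schema.
eval_criteria = {
    "ML": ["MSE", "MAE"],                      # ML-related metrics
    "Physics": ["current_positivity"],         # physics compliance checks
    "IndustrialReadiness": ["inference_time"], # industrial readiness
    "Generalization": ["MSE_ood"],             # out-of-distribution behavior
}

for category, metrics in eval_criteria.items():
    print(f"{category}: {', '.join(metrics)}")
```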
The paths should correctly point to the generated data (DATA_PATH) and the benchmark's associated config file (CONFIG_PATH). The log path (LOG_PATH) can be set by the user.
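For illustration, these variables might be set as follows (the paths below are purely hypothetical; adapt them to your local layout):

```python
# Hypothetical paths -- adapt them to where your data, config, and logs live
DATA_PATH = "reference_data/powergrid/benchmark1"            # generated datasets
CONFIG_PATH = "configs/powergrid/benchmarks/benchmark1.ini"  # benchmark config file
LOG_PATH = "logs.log"                                        # where LIPS writes its logs
```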
```python
from lips.benchmark import PowerGridBenchmark

benchmark1 = PowerGridBenchmark(benchmark_name="Benchmark1",
                                benchmark_path=DATA_PATH,
                                load_data_set=True,
                                log_path=LOG_PATH,
                                config_path=CONFIG_PATH)
```
A simulator (based on TensorFlow) can then easily be instantiated and, if required, trained as follows:
```python
from lips.augmented_simulators.tensorflow_models import TfFullyConnected
from lips.dataset.scaler import StandardScaler

tf_fc = TfFullyConnected(name="tf_fc",
                         bench_config_name="Benchmark1",
                         scaler=StandardScaler,
                         log_path=LOG_PATH)

tf_fc.train(train_dataset=benchmark1.train_dataset,
            val_dataset=benchmark1.val_dataset,
            epochs=100)
```
For each architecture, a config file is attached; these are available here for the power grid use case.
The following script shows how to use the evaluation capacity of the platform to reproduce the results on all the datasets. A config file (see here for the power grid use case) is associated with this benchmark, and all the required evaluation criteria can be set in this configuration file.
```python
tf_fc_metrics = benchmark1.evaluate_simulator(augmented_simulator=tf_fc,
                                              eval_batch_size=128,
                                              dataset="all",
                                              shuffle=False)
```
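The call returns the computed metrics. Assuming the result behaves like a nested dictionary keyed first by dataset and then by criterion (an assumption about the exact structure, with purely illustrative values), it can be inspected like this:

```python
def summarize(metrics):
    """Flatten a {dataset: {criterion: value}} mapping into (dataset, criterion, value) rows."""
    return [(d, c, v) for d, crit in metrics.items() for c, v in crit.items()]

# Illustrative values only -- real results come from evaluate_simulator
tf_fc_metrics = {
    "test": {"MSE": 0.012, "MAE": 0.080},
    "test_ood_topo": {"MSE": 0.034, "MAE": 0.150},
}

for dataset, criterion, value in summarize(tf_fc_metrics):
    print(f"{dataset:>14} | {criterion} = {value}")
```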
To run the experiments in this repository, users should install the latest lips package from its GitHub repository. The following steps show how to install the package and its dependencies from source.
- Python >= 3.6
```shell
cd my-project-folder
pip3 install -U virtualenv
python3 -m virtualenv venv_lips
source venv_lips/bin/activate
git clone https://github.com/Mleyliabadi/LIPS
cd LIPS
pip3 install -U .
cd ..
```
Alternatively, to install the package in editable (development) mode with the recommended optional dependencies, run the following from inside the LIPS directory:
```shell
pip3 install -e .[recommended]
```
To see the leaderboard for the benchmarking tasks, refer to the Codabench page of the framework, accessible from this link.
Some Jupyter notebooks are provided as tutorials for the LIPS package. They are located in the getting_started directories.
The documentation is accessible from here.
To generate the documentation locally:
```shell
pip install sphinx
pip install sphinx-rtd-theme
cd docs
make clean
make html
```
- Supplementary features can be requested via GitHub issues.
- Other contributions are welcome and can be integrated through pull requests.
To use the torch library with a GPU, you should consider multiple factors:
- If you have a compatible GPU, you can install the latest CUDA driver (11.6) and install torch using the following command:
```shell
pip install torch --pre --extra-index-url https://download.pytorch.org/whl/nightly/cu116
```
To take advantage of the GPU when training models, you should indicate it via the device parameter as follows:
```python
from lips.augmented_simulators.torch_models.fully_connected import TorchFullyConnected
from lips.augmented_simulators.torch_simulator import TorchSimulator
from lips.dataset.scaler import StandardScaler

torch_sim = TorchSimulator(name="torch_fc",
                           model=TorchFullyConnected,
                           scaler=StandardScaler,
                           device="cuda:0")
```
- Otherwise, if you want to use only the CPU for training the augmented simulators, you can simply use the version installed following the requirements and set the device parameter to cpu when training, as follows:
```python
torch_sim = TorchSimulator(name="torch_fc",
                           model=TorchFullyConnected,
                           scaler=StandardScaler,
                           device="cpu")
```
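The choice between the two settings can be automated. The helper below is not part of LIPS, just a minimal sketch; in practice the boolean flag would come from torch.cuda.is_available():

```python
def pick_device(cuda_available: bool, index: int = 0) -> str:
    """Return a torch-style device string: "cuda:<index>" if a GPU is usable, else "cpu"."""
    return f"cuda:{index}" if cuda_available else "cpu"

# With no compatible GPU, training falls back to the CPU
device = pick_device(cuda_available=False)
print(device)  # -> cpu
```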
To use TensorFlow with a GPU, you should install a CUDA version compatible with your TensorFlow package. From TensorFlow 2.4, CUDA version >= 11.0 is required. Once you have downloaded and installed the CUDA driver (we recommend version 11.5) from here, you should also get the corresponding cuDNN package from here and copy its contents into the corresponding folders of the CUDA installation directory. Finally, you should set some environment variables, which are discussed in this link for both Linux and Windows operating systems. On Windows, you can do the following at the command line:
```shell
SET PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.5\bin;%PATH%
SET PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.5\extras\CUPTI\lib64;%PATH%
SET PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.5\include;%PATH%
SET PATH=C:\tools\cuda\bin;%PATH%
SET LD_LIBRARY_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.5\lib\x64
```
However, if after setting these variables you encounter some *.dll not found errors when importing the TensorFlow library, you can indicate the path to the CUDA installation in your code, before importing the TensorFlow package, as follows:
```python
import os
os.add_dll_directory("C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.5/bin")
```
You can then test your installation by running:
```python
import tensorflow as tf
tf.config.list_physical_devices()
```
The GPU device should appear in the output, as follows:
```
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'),
 PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
```
Copyright 2022-2023 IRT SystemX & RTE
IRT SystemX: https://www.irt-systemx.fr/
RTE: https://www.rte-france.com/
This Source Code is subject to the terms of the Mozilla Public License (MPL) v2.0, also available here.