This project contains an intuitive library of compressive and predictive methods for modelling fluid dynamics simulations efficiently. The code was built for an MSc thesis at Imperial College London.
Read the documentation for further info!
Below is a short description of the folder structure, two layers deep. Please look inside the folders to see the Python files and notebooks, each of which contains a docstring at the top:
.
├── ddganAE
│ ├── architectures # Library of architectures
│ ├── models # Logic of implemented models
│ ├── preprocessing # General preprocessing functions
│ └── wandb # Package wandb hyperparam-opt interaction
├── docs
│ └── source # Documentation source code for Sphinx
├── examples # Example notebooks (also Colab and Binder links below)
│ └── models # Stored models for reproducibility. Contains readme
├── hpc # Various ICL HPC bash scripts
│ └── colab # Colab notebooks to interact with wandb for hyperparam-opt
├── images # Various package-related images
├── preprocessing # Contains readme
│ ├── src # Dataset-specific preprocessing functions
│ └── tests # Preprocessing tests
├── submodules
│ └── DD-GAN # Jón Atli Tómasson's DD-GAN package
└── tests # Package tests
└── data # Test datasets
Note that testing for this project was mostly done in the form of global runs through the entire software in Jupyter notebooks, together with component testing of smaller partitions against benchmark test cases such as the flow past cylinder (FPC) dataset. The notebooks which do this can be found in the Colab links provided below. Wherever relevant (preprocessing, utility functions, etc.), unit tests were written and automatically executed through GitHub workflows; these are included in the Codecov report displayed above.
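These unit tests can also be run locally. A minimal sketch using the standard library's test discovery (assuming the repository root as the working directory):

import unittest

# Discover and run the unit tests under tests/; the same tests are
# executed automatically by the GitHub workflows.
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner(verbosity=2).run(suite)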
- Python 3.8
- TensorFlow and the other packages in requirements.txt
- (Recommended) GPU with CUDA 11
Developers can follow these steps to install:
git clone https://github.com/acse-zrw20/DD-GAN-AE
cd ./DD-GAN-AE
pip install -r requirements.txt
pip install -e .
End users can install the newest release with:
git clone https://github.com/acse-zrw20/DD-GAN-AE.git --branch v1.2.0
cd ./DD-GAN-AE
pip install -r requirements.txt
pip install -e .
The release does not include any saved models or datasets.
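Since a CUDA-capable GPU is recommended, it is worth checking after installation that TensorFlow can actually see it:

import tensorflow as tf

# Lists the GPUs visible to TensorFlow; an empty list means training will
# silently fall back to the CPU.
print(tf.config.list_physical_devices("GPU"))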
In a Python file, import the following to use all of the functionality:
import ddganAE
Training a model for reconstruction:
from ddganAE.models import CAE
from ddganAE.architectures.cae.D2 import *
import numpy as np
import tensorflow as tf
input_shape = (55, 42, 2)
dataset = np.load(...) # dataset with shape (<nsamples>, 55, 42, 2)
optimizer = tf.keras.optimizers.Adam() # Define an optimizer
initializer = tf.keras.initializers.RandomNormal() # Define a weights initializer
# Define any encoder and decoder, see docs for more premade architectures
encoder, decoder = build_wider_omata_encoder_decoder(input_shape, 10, initializer)
cae = CAE(encoder, decoder, optimizer) # define the model
cae.compile(input_shape) # compile the model
cae.train(dataset, 200, batch_size=32) # train the model for 200 epochs; batch_size must be smaller than nsamples
recon_dataset = cae.predict(dataset) # pass the dataset through the model and generate outputs
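As a quick sanity check (plain NumPy, not part of the library), the reconstruction error can be computed by comparing the model output with the input:

# Mean squared reconstruction error over the whole dataset; values close
# to zero indicate the autoencoder reconstructs this data well.
mse = np.mean((dataset - recon_dataset) ** 2)
print(f"Reconstruction MSE: {mse:.4e}")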
Training a model for prediction:
from ddganAE.models import Predictive_adversarial
from ddganAE.architectures.svdae import *
from ddganAE.architectures.discriminators import *
import tensorflow as tf
import numpy as np
latent_vars = 100 # Define the number of variables the predictive model will use in the discriminator layer
n_predicted_vars = 10 # Define the number of predicted variables
dataset = np.load(...) # dataset with shape (<ndomains>, 10, <ntimesteps>)
optimizer = tf.keras.optimizers.Adam() # Define an optimizer
initializer = tf.keras.initializers.RandomNormal() # Define a weights initializer
# Define any encoder and decoder, see docs for more premade architectures. Note that for
# predictive models we don't necessarily need to use encoders and decoders
encoder = build_slimmer_dense_encoder(latent_vars, initializer)
decoder = build_slimmer_dense_decoder(n_predicted_vars, latent_vars, initializer)
discriminator = build_custom_discriminator(latent_vars, initializer)
pred_adv = Predictive_adversarial(encoder, decoder, discriminator, optimizer)
pred_adv.compile(n_predicted_vars, increment=False)
pred_adv.train(dataset, 200, val_size=0.1) # train for 200 epochs with 10% of the data held out for validation
# Select the boundaries with all timesteps
boundaries = np.zeros((2, 10, dataset.shape[2])) # array of shape (nboundaries, nvars, ntimesteps)
boundaries[0], boundaries[1] = dataset[2], dataset[9] # third and tenth subdomains used as boundaries
# Select the initial values at the first timestep
init_values = dataset[3:9, :, 0]
predicted_latent = pred_adv.predict(boundaries, init_values, 10, # Predict 10 steps forward
iters=4, sor=1, pre_interval=False)
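The returned array holds the predicted values for the interior subdomains. As a rough sanity check, and assuming the output matches the layout of the input (an assumption, not documented behaviour), it could be compared against the ground truth:

# Assumption: predicted_latent has shape (6, 10, 10), matching the six
# interior subdomains (4th to 9th), their 10 variables and 10 timesteps.
true_interior = dataset[3:9, :, :10]
mse = np.mean((predicted_latent - true_interior) ** 2)
print(f"Prediction MSE over the first 10 timesteps: {mse:.4e}")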
For beginners, the following Binder link will open an example notebook with the above getting-started examples that can be executed right away:
For anyone curious to go a bit further and see how the report results were produced, see:
- Compression usage examples on flow past cylinder dataset
- Compression usage examples on slug flow dataset
- Prediction usage examples on slug flow dataset
- (under development) Extended usage examples of prediction on slug flow dataset
These notebooks can also be found under examples in this repository.
Note that the above notebooks come with their original outputs, which were also included in the results of the accompanying report. However, because the datasets were too large to be shared over GitHub (over 150 GB for the SF dataset), the results cannot be reproduced without the original datasets. Small test datasets that show the workings of the models are provided in tests/data and can be used in the notebooks. Furthermore, the models that were used to produce the final results are stored in examples/models.
Please contact me for the original datasets; once they are loaded into the above notebooks, the results can be reproduced.
Hyperparameter optimization was done with the help of wandb. The links to the wandb reports are as follows (a minimal sketch of the sweep workflow is given after this list):
- Ordinary predictive network: https://wandb.ai/zeff020/pred-ae
- Predictive adversarial network: https://wandb.ai/zeff020/pred-aae
- SVD autoencoder on slug flow: https://wandb.ai/zeff020/svdae-sf
- Adversarial autoencoder on slug flow: https://wandb.ai/zeff020/aae-sf
- Convolutional autoencoder on slug flow: https://wandb.ai/zeff020/cae-sf
- Adversarial autoencoder on fpc: https://wandb.ai/zeff020/aae-fpc
- Convolutional autoencoder on fpc: https://wandb.ai/zeff020/cae-fpc
- SVD autoencoder on fpc: https://wandb.ai/zeff020/svdae-fpc
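For illustration, a minimal sketch of how such a wandb sweep can be launched; the parameter names and project below are placeholders, and the actual sweep configurations live in the ddganAE/wandb package and the hpc/colab notebooks:

import wandb

def train():
    # Hypothetical objective: in the real sweeps each run builds and trains
    # one model; here we only log a placeholder validation loss.
    with wandb.init() as run:
        wandb.log({"val_loss": run.config.learning_rate * run.config.batch_size})

sweep_config = {
    "method": "random",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "batch_size": {"values": [16, 32, 64]},
        "learning_rate": {"min": 1e-5, "max": 1e-2},
    },
}

sweep_id = wandb.sweep(sweep_config, project="cae-sf")  # one of the projects listed above
wandb.agent(sweep_id, function=train, count=5)  # run five trials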
Note that all of the models in the examples/models folder have names corresponding to the names found on wandb, such that these models can be traced back entirely to the runs that generated them.
Distributed under the MIT License. See LICENSE for more information.
- Wolffs, Zef (zefwolffs@gmail.com)
- Dr. Claire Heaney
- Prof. Christopher Pain
- Royal School of Mines, Imperial College London