
DD-GAN-AE

Domain Decomposition, Autoencoders, and Adversarial Networks for Modelling Fluid Flow

[Badges: codecov | GitHub license | Documentation Status | example workflow | Code style: black | Binder]




Explore the docs»

Report Bug

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. License
  5. Contact
  6. Acknowledgements

About The Project

This project contains an intuitive library of compressive and predictive methods for modelling fluid dynamics simulations efficiently. The code was built for an MSc thesis at Imperial College London.

Read the documentation for further info!

Below is a short description of the folders, two layers deep. Please look inside the folders to see the built Python files and notebooks, each of which contains a docstring at the top:

.
├── ddganAE
│   ├── architectures    # Library of architectures
│   ├── models           # Logic of implemented models
│   ├── preprocessing    # General preprocessing functions
│   └── wandb            # Interaction with wandb for hyperparameter optimization
├── docs
│   └── source           # Documentation source code for Sphinx
├── examples             # Example notebooks (also Colab and Binder links below)
│   └── models           # Stored models for reproducibility. Contains readme
├── hpc                  # Various ICL HPC bash scripts
│   └── colab            # Colab notebooks to interact with wandb for hyperparameter optimization
├── images               # Various package-related images
├── preprocessing        # Contains readme
│   ├── src              # Dataset-specific preprocessing functions
│   └── tests            # Preprocessing tests
├── submodules        
│   └── DD-GAN           # Jón Atli Tómasson's DD-GAN package
└── tests                # Package tests
    └── data             # Test datasets

Note that testing for this project was mostly done in the form of global runs through the entire software in Jupyter notebooks, together with component testing of smaller partitions on benchmark cases such as the flow past a cylinder (FPC) dataset. The notebooks which do this can be found in the Colab links provided below. Wherever relevant (preprocessing, utility functions, etc.), unit tests were written and automatically executed through GitHub workflows; these are included in the Codecov report displayed above.
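The unit tests can also be run locally. A minimal invocation, assuming pytest is installed (an assumption, as it is not listed as a prerequisite; the tests can equally be discovered with python -m unittest):

python -m pytest tests/ preprocessing/tests/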

Prerequisites

  • Python 3.8
  • TensorFlow and the other packages in requirements.txt
  • (Recommended) GPU with CUDA 11

Installation

Developers can follow these steps to install:

  1. git clone https://github.com/acse-zrw20/DD-GAN-AE
  2. cd ./DD-GAN-AE
  3. pip install -r requirements.txt
  4. pip install -e .

End users can install the newest release with:

  1. git clone https://github.com/acse-zrw20/DD-GAN-AE.git --branch v1.2.0
  2. cd ./DD-GAN-AE
  3. pip install -r requirements.txt
  4. pip install -e .

The release does not include any saved models or datasets.
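After installation, a quick import check (illustrative, not part of the documented workflow) confirms the package is available:

python -c "import ddganAE; print('ok')"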

Getting Started

In a Python file, import the following to use all of the functionality:

import ddganAE

Training a model for reconstruction:

from ddganAE.models import CAE
from ddganAE.architectures.cae.D2 import *
import numpy as np
import tensorflow as tf

input_shape = (55, 42, 2)
dataset = np.load(...) # dataset with shape (<nsamples>, 55, 42, 2)

optimizer = tf.keras.optimizers.Adam() # Define an optimizer
initializer = tf.keras.initializers.RandomNormal() # Define a weights initializer

# Define any encoder and decoder, see docs for more premade architectures
encoder, decoder = build_wider_omata_encoder_decoder(input_shape, 10, initializer)

cae = CAE(encoder, decoder, optimizer) # define the model
cae.compile(input_shape) # compile the model

cae.train(dataset, 200, batch_size=32) # train the model for 200 epochs; batch_size must be smaller than nsamples

recon_dataset = cae.predict(dataset) # pass the dataset through the model and generate outputs
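As a quick sanity check (plain NumPy, not part of the library API), the reconstruction error can be estimated by comparing the model output with its input:

mse = np.mean((dataset - recon_dataset) ** 2) # mean squared error over all samples and grid points
print("Reconstruction MSE:", mse)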

Training a model for prediction:

from ddganAE.models import Predictive_adversarial
from ddganAE.architectures.svdae import *
from ddganAE.architectures.discriminators import *
import tensorflow as tf
import numpy as np

latent_vars = 100  # Define the number of latent variables used in the discriminator layer
n_predicted_vars = 10 # Define the number of predicted variables

dataset = np.load(...) # dataset with shape (<ndomains>, 10, <ntimesteps>)

optimizer = tf.keras.optimizers.Adam() # Define an optimizer
initializer = tf.keras.initializers.RandomNormal() # Define a weights initializer

# Define any encoder and decoder, see docs for more premade architectures. Note for predictive
# models we don't necessarily need to use encoders or decoders
encoder = build_slimmer_dense_encoder(latent_vars, initializer)
decoder = build_slimmer_dense_decoder(n_predicted_vars, latent_vars, initializer)
discriminator = build_custom_discriminator(latent_vars, initializer)

pred_adv = Predictive_adversarial(encoder, decoder, discriminator, optimizer)
pred_adv.compile(n_predicted_vars, increment=False)
pred_adv.train(dataset, 200, val_size=0.1)

# Select the boundaries with all timesteps
ntimesteps = dataset.shape[2]
boundaries = np.zeros((2, 10, ntimesteps))
boundaries[0], boundaries[1] = dataset[2], dataset[9] # third and tenth subdomains used as boundaries

# Select the initial values at the first timestep
init_values = dataset[3:9, :, 0]

predicted_latent = pred_adv.predict(boundaries, init_values, 10, # Predict 10 steps forward 
                                    iters=4, sor=1, pre_interval=False)
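The exact layout of the returned array depends on the model (see the docs); as an illustrative first check, its shape can be inspected before further use:

# Expected to cover the six interior subdomains (dataset[3:9]) and the 10 predicted steps
print(predicted_latent.shape)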

Examples

For beginners, the following Binder link will open an example notebook with the above getting-started examples, which can be executed right away:

  • Getting started examples on flow past cylinder dataset Binder

For anyone curious to go a bit further and see how the report results were produced, see:

  • Compression usage examples on flow past cylinder dataset Open In Colab
  • Compression usage examples on slug flow dataset Open In Colab
  • Prediction usage examples on slug flow dataset Open In Colab
  • (under development) Extended usage examples of prediction on slug flow dataset Open In Colab

These notebooks can also be found under examples in this repository.

Reproduction of reported results

Note that the above notebooks come with their original outputs, which were also included in the results of the accompanying report. However, because the datasets were too large to be shared over GitHub (over 150 GB for the SF dataset), the results cannot be reproduced without access to the original datasets. Small test datasets are provided in tests/data to show the workings of the models, and these can be used in the notebooks. Furthermore, the models that were used to produce the final results are stored in examples/models.

Please contact me for the original datasets; once they are loaded into the above notebooks, the results can be reproduced.

Hyperparameter optimization (wandb)

Hyperparameter optimization was done with the help of wandb. The links to the wandb reports are as follows:

Note that all of the models in the examples/models folder have names corresponding to the names found on wandb, so that each model can be traced back to the run that generated it.
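For context, below is a minimal sketch of how a wandb sweep is typically launched. The project name, metric, and parameter ranges are illustrative assumptions, not the exact configuration behind these reports; the real sweep setups live in the hpc/colab notebooks.

import wandb

# Illustrative sweep configuration; not this project's actual search space
sweep_config = {
    "method": "random",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"values": [1e-3, 1e-4]},
        "batch_size": {"values": [32, 64]},
    },
}

def train():
    run = wandb.init()
    cfg = run.config  # hyperparameters sampled by the sweep agent
    # ... build and train a ddganAE model with cfg.learning_rate, cfg.batch_size ...
    run.log({"val_loss": 0.0})  # placeholder; log the real validation loss here

sweep_id = wandb.sweep(sweep_config, project="DD-GAN-AE")  # hypothetical project name
wandb.agent(sweep_id, function=train, count=10)  # run 10 trials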

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Acknowledgements

  • Dr. Claire Heaney
  • Prof. Christopher Pain
  • Royal School of Mines, Imperial College London
