The aim of this project is to provide an environment for implementing Reinforcement Learning algorithms that topologically modify a 2D mesh. More specifically, we implement the work of A. Narayanan, Y. Pan, and P.-O. Persson, described in "Learning topological operations on meshes with application to block decomposition of polygons" (see the arXiv article and presentation).
See the documentation website for more details: https://lihpc-computational-geometry.github.io/tune/
The project can be cloned from GitHub.
The project can be used to train a reinforcement learning agent on triangular meshes or quadrangular meshes.
For training on triangular meshes, you can use an agent with all three actions: flip, split, and collapse. Two training models are available:
- Custom PPO model (`tune/model_RL/PPO_model`)
- PPO from Stable Baselines 3 (SB3)

With the custom PPO model:

- Configure the model and environment parameters in `tune/training/train.py`
- Then run `python main.py` from the `tune/` directory.

With SB3 PPO:

- Configure the model and environment parameters in:
  - `tune/environment/environment_config.json`
  - `tune/model_RL/parameters/PPO_config.json`
- Then run the training script `tune/training/train_trimesh_SB3.py` in PyCharm.
To train an agent using only the flip action with SB3 PPO, run the training script `tune/training/train_trimesh_flip_SB3.py` in PyCharm.
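For orientation, here is a minimal sketch of what such an SB3 PPO training script generally looks like. The environment class `TriMeshEnv`, its import path, and the output path are illustrative assumptions; the actual scripts in `tune/training/` define the real setup.

```python
import json

from stable_baselines3 import PPO

# Hypothetical Gymnasium-compatible environment exposing the triangular mesh
# actions (flip, split, collapse); the import path and class name are
# assumptions, not the project's actual API.
from environment.trimesh_env import TriMeshEnv

# Load the environment settings (path taken from the steps above,
# relative to the tune/ directory).
with open("environment/environment_config.json", "r") as f:
    env_config = json.load(f)

env = TriMeshEnv(env_config)  # assumed constructor signature

# Standard SB3 PPO workflow: build the model, train it, save it as a .zip.
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)
model.save("training/policy_saved/trimesh_flip_ppo")
```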
For training on quadrangular meshes, you can use an agent with all four actions: flip clockwise, flip counterclockwise, split, and collapse. Two training models are available:
- Custom PPO model (`tune/model_RL/PPO_model_pers`)
- PPO from Stable Baselines 3 (SB3)
Configure the model and environment parameters in:

- `tune/environment/environment_config.json`
- `tune/model_RL/parameters/PPO_config.json`

With the custom PPO model, run `python -m training.train_quadmesh` from the `tune/` directory.

With SB3 PPO, run `python -m training.train_quadmesh_SB3` from the `tune/` directory.
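As a rough sketch of how the two configuration files above might be consumed, assuming `PPO_config.json` holds keyword arguments accepted by the SB3 `PPO` constructor and using a hypothetical `QuadMeshEnv` class (the actual JSON schema and environment classes are defined by the repository):

```python
import json

from stable_baselines3 import PPO

# Hypothetical quad-mesh environment class; the import path and class name
# are assumptions for illustration only.
from environment.quadmesh_env import QuadMeshEnv

# Load both configuration files referenced above (paths relative to tune/).
with open("environment/environment_config.json", "r") as f:
    env_config = json.load(f)
with open("model_RL/parameters/PPO_config.json", "r") as f:
    ppo_config = json.load(f)

env = QuadMeshEnv(env_config)  # assumed constructor signature

# Assumes the PPO JSON maps directly to SB3 PPO constructor keyword
# arguments (learning_rate, batch_size, ...).
model = PPO("MlpPolicy", env, **ppo_config)
model.learn(total_timesteps=100_000)
model.save("training/policy_saved/quadmesh_ppo")
```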
After training, the model is saved as a `.zip` file in the `tune/training/policy_saved/` directory. To evaluate the policy, follow these steps in `tune/training/exploit_SB3_policy.py`:
You can either:

- Load a specific mesh file and duplicate it:

  ```python
  mesh = read_gmsh("../mesh_files/t1_quad.msh")
  dataset = [mesh for _ in range(9)]
  ```

- Generate a set of random quad meshes:

  ```python
  dataset = [QM.random_mesh() for _ in range(9)]
  ```
Make sure to update and load the environment settings before testing, then plot the dataset:

```python
with open("../environment/environment_config.json", "r") as f:
    env_config = json.load(f)

plot_dataset(dataset)
```
Use the `PPO.load()` function and evaluate the policy on your dataset:

```python
model = PPO.load("policy_saved/name.zip")
```
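For context, a minimal evaluation loop might look like the sketch below, assuming the environment follows the Gymnasium API and that a quad-mesh environment (here called `QuadMeshEnv`, a hypothetical name) can be built from `env_config` and the dataset; `exploit_SB3_policy.py` contains the actual evaluation logic.

```python
from stable_baselines3 import PPO

from environment.quadmesh_env import QuadMeshEnv  # assumed import path

model = PPO.load("policy_saved/name.zip")

# Hypothetical environment construction from the settings and meshes
# prepared above; the constructor signature is an assumption.
env = QuadMeshEnv(env_config, dataset)

for _ in range(len(dataset)):
    obs, info = env.reset()
    done = False
    while not done:
        # Standard SB3 inference; deterministic=True disables exploration noise.
        action, _states = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, info = env.step(action)
        done = terminated or truncated
```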
Run the script directly in PyCharm (or another IDE that supports graphical output) instead of the terminal.
❗ If executed in a terminal without GUI support, the plots will not be displayed.
🚧 Section in progress...