
Topologic UntaNgling 2D mEsher

License: MIT

This project provides an environment for implementing Reinforcement Learning algorithms that topologically modify a 2D mesh. More specifically, we implement the work of A. Narayanan, Y. Pan, and P.-O. Persson described in "Learning topological operations on meshes with application to block decomposition of polygons" (see the arXiv article and presentation).

See the documentation website for more details: https://lihpc-computational-geometry.github.io/tune/

Installation

The project can be cloned from GitHub:
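
For example, assuming the standard GitHub URL for this repository:

    git clone https://github.com/LIHPC-Computational-Geometry/tune.git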

Usage

The project can be used to train a reinforcement learning agent on triangular meshes or quadrangular meshes.


Triangular Meshes

For training on triangular meshes, you can use an agent with all three actions: flip, split, and collapse. Two training models are available:

  1. Custom PPO Model (tune/model_RL/PPO_model)
  2. PPO from Stable Baselines 3 (SB3)

🚀 Starting Training

1. Using tune/model_RL/PPO_model
  • Configure the model and environment parameters in:
    tune/training/train.py

  • Then run the following command from the tune/ directory:

    python main.py
2. Using PPO from Stable Baselines 3 (SB3)
  • Configure the model and environment parameters in:

    • tune/environment/environment_config.json
    • tune/model_RL/parameters/PPO_config.json
  • Then run the training script tune/training/train_trimesh_SB3.py (for example from PyCharm), or from the command line as shown below.
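
By analogy with the quad-mesh workflow described below, and assuming the script defines a main entry point, it can be launched as a module from the tune/ directory:

    python -m training.train_trimesh_SB3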

Flip-Only Training (SB3 PPO)

To train an agent using only the flip action with SB3 PPO, run the training script tune/training/train_trimesh_flip_SB3.py (for example from PyCharm), or from the command line as shown below.
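
As above, assuming the script defines a main entry point, it can also be launched as a module from the tune/ directory:

    python -m training.train_trimesh_flip_SB3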


Quadrangular Meshes

For training on quadrangular meshes, you can use an agent with all four actions: flip clockwise, flip counterclockwise, split, and collapse. Two training models are available:

  1. Custom PPO Model (tune/model_RL/PPO_model_pers)
  2. PPO from Stable Baselines 3 (SB3)

🚀 Starting Training

1. Configure the model and environment parameters in:
  • tune/environment/environment_config.json
  • tune/model_RL/parameters/PPO_config.json
2. Using tune/model_RL/PPO_model_pers

Run the following command from the tune/ directory:

python -m training.train_quadmesh
3. Using PPO from Stable Baselines 3 (SB3)

Run the following command from the tune/ directory:

python -m training.train_quadmesh_SB3

🧪 Testing a Saved SB3 Policy

After training, the model is saved as a .zip file in the tune/training/policy_saved/ directory. To evaluate the policy, follow these steps in tune/training/exploit_SB3_policy.py:

1. Create a Test Dataset

You can either:

  • Load a specific mesh file and duplicate it:

    mesh = read_gmsh("../mesh_files/t1_quad.msh")
    dataset = [mesh for _ in range(9)]
  • Generate a set of random quad meshes:

    dataset = [QM.random_mesh() for _ in range(9)]
2. Load the Environment Configuration

Make sure to adjust and load the environment settings before testing:

# Load the environment configuration
with open("../environment/environment_config.json", "r") as f:
    env_config = json.load(f)

# Visualize the test dataset
plot_dataset(dataset)
3. Load the Model

Use the PPO.load() function and evaluate the policy on your dataset; a sketch of the evaluation loop is given below:

model = PPO.load("policy_saved/name.zip")
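
The snippet below is only a minimal evaluation sketch, not the project's exact script: it reuses dataset, env_config, and model from the previous steps, assumes a Gymnasium-style reset/step API, and uses a placeholder environment name (QuadMeshEnv) that must be replaced by the project's actual quad-mesh environment class. Only model.predict() is standard SB3 API.

    for mesh in dataset:
        # Placeholder constructor: replace with the project's environment class
        env = QuadMeshEnv(env_config, mesh)
        obs, _ = env.reset()
        done = False
        while not done:
            # Standard SB3 call: query the trained policy for an action
            action, _ = model.predict(obs, deterministic=True)
            obs, reward, terminated, truncated, info = env.step(action)
            done = terminated or truncated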
4. Run the script

Run the script directly in PyCharm (or another IDE that supports graphical output) instead of the terminal.

❗ If executed in a terminal without GUI support, the plots will not be displayed.

🧪 Testing a Saved PPO_perso Policy

🚧 Section in progress...