【hydra No.15】deeponet (#589)
* Add deeponet hydra

* Fix

* Fix

* Fix

* Fix

* Fix
co63oc authored Oct 26, 2023
1 parent c4bcd67 commit 2464cf0
Showing 3 changed files with 238 additions and 70 deletions.
60 changes: 42 additions & 18 deletions docs/zh/examples/deeponet.md
@@ -2,6 +2,30 @@

<a href="https://aistudio.baidu.com/aistudio/projectdetail/6566389?sUid=438690&shared=1&ts=1690775701017" class="md-button md-button--primary" style>Quick start on AI Studio</a>

=== "Model training commands"

``` sh
# linux
wget https://paddle-org.bj.bcebos.com/paddlescience/datasets/DeepONet/antiderivative_unaligned_train.npz
wget https://paddle-org.bj.bcebos.com/paddlescience/datasets/DeepONet/antiderivative_unaligned_test.npz
# windows
# curl https://paddle-org.bj.bcebos.com/paddlescience/datasets/deeponet/antiderivative_unaligned_train.npz --output antiderivative_unaligned_train.npz
# curl https://paddle-org.bj.bcebos.com/paddlescience/datasets/deeponet/antiderivative_unaligned_test.npz --output antiderivative_unaligned_test.npz
python deeponet.py
```

=== "Model evaluation commands"

``` sh
# linux
wget https://paddle-org.bj.bcebos.com/paddlescience/datasets/DeepONet/antiderivative_unaligned_train.npz
wget https://paddle-org.bj.bcebos.com/paddlescience/datasets/DeepONet/antiderivative_unaligned_test.npz
# windows
# curl https://paddle-org.bj.bcebos.com/paddlescience/datasets/deeponet/antiderivative_unaligned_train.npz --output antiderivative_unaligned_train.npz
# curl https://paddle-org.bj.bcebos.com/paddlescience/datasets/deeponet/antiderivative_unaligned_test.npz --output antiderivative_unaligned_test.npz
python deeponet.py mode=eval EVAL.pretrained_model_path=https://paddle-org.bj.bcebos.com/paddlescience/models/deeponet/deeponet_pretrained.pdparams
```

## 1. Background

According to the universal approximation theorem in machine learning, a neural network can fit not only the functional mapping from input data to output data, but can also be extended to fit mappings between functions themselves, which is called "operator" learning.
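
The core DeepONet idea can be illustrated with a minimal sketch in plain NumPy (not the PaddleScience API): a branch network encodes the input function $u$ sampled at sensor locations, a trunk network encodes the query coordinate $y$, and their feature vectors are combined by an inner product. All weights and dimensions below are illustrative assumptions.

``` py
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, mirroring the config used later in this example:
# u is sampled at 100 sensor locations, both sub-networks emit 40 features.
num_loc, num_features = 100, 40

def mlp_forward(x, w1, w2):
    # Single hidden-layer MLP with ReLU, standing in for branch/trunk nets.
    return np.maximum(x @ w1, 0.0) @ w2

# Random weights for the branch net (encodes u) and trunk net (encodes y).
w_branch1, w_branch2 = rng.normal(size=(num_loc, 64)), rng.normal(size=(64, num_features))
w_trunk1, w_trunk2 = rng.normal(size=(1, 64)), rng.normal(size=(64, num_features))

u = rng.normal(size=(8, num_loc))   # batch of 8 input functions at sensors
y = rng.normal(size=(8, 1))         # one query coordinate per sample

# DeepONet output: inner product of branch and trunk feature vectors.
G = np.sum(mlp_forward(u, w_branch1, w_branch2) * mlp_forward(y, w_trunk1, w_trunk2),
           axis=1, keepdims=True)
print(G.shape)  # (8, 1)
```
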
@@ -77,7 +101,7 @@ $$

``` py linenums="27"
--8<--
-examples/operator_learning/deeponet.py:27:43
+examples/operator_learning/deeponet.py:27:27
--8<--
```

@@ -89,19 +113,19 @@ examples/operator_learning/deeponet.py:27:43

Before defining the constraint, we need to specify the data-reading configuration for the supervised constraint, including the file path, the input data field names, the label data field names, and the alias dictionary mapping field names before and after data transformation.

-``` py linenums="45"
+``` py linenums="30"
--8<--
-examples/operator_learning/deeponet.py:45:55
+examples/operator_learning/deeponet.py:30:38
--8<--
```
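
To make the file layout concrete, here is a hedged stand-in for a file such as `antiderivative_unaligned_train.npz`: the real dataset's internal keys are not shown in this commit, so the `u`/`y`/`G` names below are assumptions matching the field names used in this example, and only illustrate the "named arrays in one `.npz` file" layout that NPZ-based dataset loaders consume.

``` py
import numpy as np

# Hypothetical miniature training file; the real file's keys may differ.
np.savez("toy_train.npz",
         u=np.random.rand(10, 100),   # input functions at 100 sensor points
         y=np.random.rand(10, 1),     # query coordinates
         G=np.random.rand(10, 1))     # operator values G(u)(y)

data = np.load("toy_train.npz")
print(sorted(data.files))  # ['G', 'u', 'y']
```
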

#### 3.3.1 Supervised constraint

Since we train in a supervised manner, we use the supervised constraint `SupervisedConstraint` here:

-``` py linenums="57"
+``` py linenums="40"
--8<--
-examples/operator_learning/deeponet.py:57:61
+examples/operator_learning/deeponet.py:40:44
--8<--
```

@@ -113,39 +137,39 @@ examples/operator_learning/deeponet.py:57:61

After the supervised constraint is built, we wrap it into a dictionary keyed by the name we just assigned, for convenient access later.

-``` py linenums="62"
+``` py linenums="45"
--8<--
-examples/operator_learning/deeponet.py:62:63
+examples/operator_learning/deeponet.py:45:46
--8<--
```

### 3.4 Hyperparameter settings

-Next, we specify the number of training epochs and the learning rate. Based on experimental experience, we train for one hundred thousand epochs and evaluate model accuracy every 500 epochs.
+Next, we specify the number of training epochs and the learning rate. Based on experimental experience, we train for ten thousand epochs and evaluate model accuracy every 500 epochs.

-``` py linenums="65"
+``` yaml linenums="49"
--8<--
-examples/operator_learning/deeponet.py:65:66
+examples/operator_learning/conf/deeponet.yaml:49:55
--8<--
```

### 3.5 Optimizer construction

The training process calls an optimizer to update the model parameters. Here we choose the widely used `Adam` optimizer, with the learning rate set to `0.001`:

-``` py linenums="68"
+``` py linenums="48"
--8<--
-examples/operator_learning/deeponet.py:68:69
+examples/operator_learning/deeponet.py:48:49
--8<--
```
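
As a hedged aside on what `Adam` actually does, here is a minimal NumPy implementation of one Adam update (not PaddlePaddle's optimizer), using the same default hyperparameters as this example, applied to the toy objective $f(w) = w^2$:

``` py
import numpy as np

def adam_step(param, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # One Adam update: exponential moving averages of the gradient (m) and
    # squared gradient (v), with bias correction for the warm-up phase.
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad**2
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return param - lr * m_hat / (np.sqrt(v_hat) + eps)

w = np.array([1.0])
st = {"t": 0, "m": np.zeros_like(w), "v": np.zeros_like(w)}
for _ in range(200):                  # minimize f(w) = w^2, gradient is 2w
    w = adam_step(w, 2 * w, st, lr=1e-3)
print(w)                              # noticeably closer to 0 than the start
```
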

### 3.6 Evaluator construction

During training, the validation (test) set is typically used at fixed epoch intervals to evaluate the current state of the model, so we build an evaluator with `ppsci.validate.SupervisedValidator`:

-``` py linenums="71"
+``` py linenums="51"
--8<--
-examples/operator_learning/deeponet.py:71:88
+examples/operator_learning/deeponet.py:51:60
--8<--
```
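
A common validation metric for operator learning is the L2 relative error; a minimal NumPy sketch (PaddleScience exposes a similar L2-relative metric, but the function below is an illustrative reimplementation, not its API):

``` py
import numpy as np

def l2_relative_error(pred, label):
    # ||pred - label||_2 / ||label||_2: scale-invariant, so errors on
    # functions of very different magnitudes are comparable.
    return np.linalg.norm(pred - label) / np.linalg.norm(label)

label = np.linspace(0.0, 1.0, 101)
pred = label + 0.01                    # uniformly shifted prediction
err = l2_relative_error(pred, label)
print(round(err, 4))                   # ≈ 0.0173
```
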

@@ -157,19 +181,19 @@ examples/operator_learning/deeponet.py:71:88

After completing the above setup, simply pass the instantiated objects in order to `ppsci.solver.Solver`, then start training and evaluation.

-``` py linenums="90"
+``` py linenums="71"
--8<--
-examples/operator_learning/deeponet.py:90:123
+examples/operator_learning/deeponet.py:71:90
--8<--
```

### 3.8 Result visualization

After the model is trained, we can manually construct $u$ and $y$, discretize them over a suitable range to obtain the input data, predict $G(u)(y)$, and plot it together with the reference solution of $G(u)$ for comparison (here we construct 9 pairs of $u$-$G(u)$ functions for testing).

-``` py linenums="125"
+``` py linenums="92"
--8<--
-examples/operator_learning/deeponet.py:125:
+examples/operator_learning/deeponet.py:92:151
--8<--
```
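
For intuition, one such $u$-$G(u)$ pair for the antiderivative operator $G(u)(y) = \int_0^y u(t)\,dt$ can be built by hand; the sketch below uses a fixed $\cos$ input instead of the random field samples the real dataset uses, and checks the discretized antiderivative against the analytic one:

``` py
import numpy as np

# One u-G(u) pair for the antiderivative operator on a uniform grid.
x = np.linspace(0.0, 1.0, 101)
u = np.cos(2 * np.pi * x)                       # a sample input function

# Trapezoidal cumulative integral as the reference antiderivative.
dx = x[1] - x[0]
G_u = np.concatenate([[0.0], np.cumsum((u[1:] + u[:-1]) * dx / 2)])

# Analytic antiderivative of cos(2*pi*x) is sin(2*pi*x) / (2*pi).
reference = np.sin(2 * np.pi * x) / (2 * np.pi)
print(np.abs(G_u - reference).max() < 1e-3)     # True
```
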

62 changes: 62 additions & 0 deletions examples/operator_learning/conf/deeponet.yaml
@@ -0,0 +1,62 @@
hydra:
run:
# dynamic output directory according to running time and override name
dir: outputs_deeponet/${now:%Y-%m-%d}/${now:%H-%M-%S}/${hydra.job.override_dirname}
job:
name: ${mode} # name of logfile
    chdir: false # keep current working directory unchanged
config:
override_dirname:
exclude_keys:
- TRAIN.checkpoint_path
- TRAIN.pretrained_model_path
- EVAL.pretrained_model_path
- mode
- output_dir
- log_freq
sweep:
# output directory for multirun
dir: ${hydra.run.dir}
subdir: ./

# general settings
mode: train # running mode: train/eval
seed: 2023
output_dir: ${hydra:run.dir}
log_freq: 20
TRAIN_FILE_PATH: ./antiderivative_unaligned_train.npz
VALID_FILE_PATH: ./antiderivative_unaligned_test.npz

# set working condition
NUM_Y: 1000 # number of y points at which G(u) is visualized

# model settings
MODEL:
u_key: "u"
y_key: "y"
G_key: "G"
num_loc: 100
num_features: 40
branch_num_layers: 1
trunk_num_layers: 1
branch_hidden_size: 40
trunk_hidden_size: 40
branch_activation: relu
trunk_activation: relu
use_bias: true

# training settings
TRAIN:
epochs: 10000
iters_per_epoch: 1
learning_rate: 1.0e-3
save_freq: 500
eval_freq: 500
eval_during_train: true
pretrained_model_path: null
checkpoint_path: null

# evaluation settings
EVAL:
pretrained_model_path: null
eval_with_no_grad: true
