Update documentation
actions-user committed Jul 24, 2023
1 parent e20c2ca commit a30c892
Showing 4 changed files with 93 additions and 83 deletions.
88 changes: 46 additions & 42 deletions _sources/user_guide/run_eval.rst.txt
@@ -2,60 +2,64 @@
Evaluate a Model
================================

-Step 1: Setup the data and model config files
+Step 1: Setup the config file
===============================================

-Same as in the training pipeline, firstly we need to initialize two config files: ``data_config.yaml`` and ``model_config.yaml``.
+As in the training pipeline, we first need to set up the task configuration in the config file.

-``data_config.yaml`` is the same as in `Training Pipeline <./run_train_pipeline.html>`_ while we need to update the ``model_config.yaml``
-to let the model run the evaluation.


-model_config
+Similar to the setup in `Training Pipeline <./run_train_pipeline.html>`_, we set the ``stage`` to ``eval`` and pass the ``pretrained_model_dir`` to the ``model_config``.
+Note that the *pretrained_model_dir* can be found in the log of the training process.

.. code-block:: yaml

-    RMTPP_gen:
-      base_config:
-        stage: gen
-        backend: torch
-        dataset_id: retweet
-        runner_id: std_tpp
-        base_dir: './checkpoints/'
-        model_id: RMTPP
-      model_config:
-        hidden_size: 32
-        time_emb_size: 16
-        mc_num_sample_per_step: 20
-        sharing_param_layer: False
-        loss_integral_num_sample_per_step: 20
-        dropout: 0.0
-        use_ln: False
-        seed: 2019
-        gpu: 0
-        pretrained_model_dir: ./checkpoints/2555_4348724608_230603-155841/models/saved_model
-        thinning:
-          num_seq: 10
-          num_sample: 1
-          num_exp: 500 # number of i.i.d. Exp(intensity_bound) draws at one time in thinning algorithm
-          look_ahead_time: 10
-          patience_counter: 5 # the maximum iteration used in adaptive thinning
-          over_sample_rate: 5
-          num_samples_boundary: 5
-          dtime_max: 5
-          num_step_gen: 1

-A complete example of these files can be seen at `examples/example_config`.

+    RMTPP_eval:
+      stage: eval
+      backend: torch
+      dataset_id: conttime
+      runner_id: std_tpp
+      base_config:
+        base_dir: './checkpoints/'
+        batch_size: 256
+        max_epoch: 10
+        shuffle: False
+        valid_freq: 1
+        use_tfb: False
+        metrics: [ 'acc', 'rmse' ]
+      model_config:
+        model_id: RMTPP # model name
+        hidden_size: 32
+        time_emb_size: 16
+        num_layers: 2
+        num_heads: 2
+        mc_num_sample_per_step: 20
+        sharing_param_layer: False
+        loss_integral_num_sample_per_step: 20
+        dropout: 0.0
+        use_ln: False
+        seed: 2019
+        gpu: 0
+        pretrained_model_dir: ./checkpoints/59618_4339156352_221128-142905/models/saved_model
+        thinning:
+          num_seq: 10
+          num_sample: 1
+          num_exp: 500 # number of i.i.d. Exp(intensity_bound) draws at one time in thinning algorithm
+          look_ahead_time: 10
+          patience_counter: 5 # the maximum iteration used in adaptive thinning
+          over_sample_rate: 5
+          num_samples_boundary: 5
+          dtime_max: 5

+A complete example of this config file can be seen at `examples/example_config.yaml <https://github.com/ant-research/EasyTemporalPointProcess/blob/main/examples/configs/experiment_config.yaml>`_.
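
For orientation, the following is a minimal sketch of how an experiment block such as ``RMTPP_eval`` is typically selected from this YAML file. The import path ``easy_tpp.config_factory``, the ``build_from_yaml_file`` call, and the file location follow the conventions of the EasyTPP example scripts and should be read as assumptions, not a verbatim excerpt of the repository.

.. code-block:: python

    from easy_tpp.config_factory import Config

    # Select the RMTPP_eval experiment from the YAML file shown above.
    # Both the config path and the experiment id are illustrative; point
    # them at your own config file and experiment name.
    config = Config.build_from_yaml_file(
        'examples/configs/experiment_config.yaml',
        experiment_id='RMTPP_eval',
    )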


Step 2: Run the evaluation script
=================================

As in the training pipeline, we need to initialize a ``ModelRunner`` object to run the evaluation.

-The following code is an example, which is a copy from *examples/eval_nhp.py*.
+The following code is an example, copied from `examples/train_nhp.py <https://github.com/ant-research/EasyTemporalPointProcess/blob/main/examples/train_nhp.py>`_.


.. code-block:: python
@@ -81,7 +85,7 @@
model_runner = Runner.build_from_config(config)
-model_runner.evaluate()
+model_runner.run()
if __name__ == '__main__':
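
For completeness, here is a self-contained sketch of the evaluation script, fleshed out from the fragment shown above. The import paths (``easy_tpp.config_factory``, ``easy_tpp.runner``) and the argument names (``--config_dir``, ``--experiment_id``) follow the conventions of the EasyTPP example scripts and are assumptions, not a literal copy of ``examples/train_nhp.py``.

.. code-block:: python

    import argparse

    from easy_tpp.config_factory import Config
    from easy_tpp.runner import Runner


    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument('--config_dir', type=str,
                            default='examples/configs/experiment_config.yaml',
                            help='Path to the YAML config file.')
        parser.add_argument('--experiment_id', type=str, default='RMTPP_eval',
                            help='Experiment id defined in the config file.')
        args = parser.parse_args()

        # Build the experiment config, construct the runner, and execute the
        # stage declared in the config (here: stage: eval).
        config = Config.build_from_yaml_file(args.config_dir,
                                              experiment_id=args.experiment_id)
        model_runner = Runner.build_from_config(config)
        model_runner.run()


    if __name__ == '__main__':
        main()

Saved as a standalone script, it can be invoked with ``--config_dir`` pointing at the YAML file from Step 1 and ``--experiment_id RMTPP_eval``.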
2 changes: 1 addition & 1 deletion index.html
@@ -142,7 +142,7 @@
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="user_guide/run_eval.html">Model Prediction</a><ul>
-<li class="toctree-l2"><a class="reference internal" href="user_guide/run_eval.html#step-1-setup-the-data-and-model-config-files">Step 1: Setup the data and model config files</a></li>
+<li class="toctree-l2"><a class="reference internal" href="user_guide/run_eval.html#step-1-setup-the-config-file">Step 1: Setup the config file</a></li>
<li class="toctree-l2"><a class="reference internal" href="user_guide/run_eval.html#step-2-run-the-evaluation-script">Step 2: Run the evaluation script</a></li>
<li class="toctree-l2"><a class="reference internal" href="user_guide/run_eval.html#checkout-the-output">Checkout the output</a></li>
</ul>
2 changes: 1 addition & 1 deletion searchindex.js

Large diffs are not rendered by default.

