This repository has been archived by the owner on Nov 3, 2023. It is now read-only.

[docs] Add FAQ question about evaling fixed responses #4120

Merged (1 commit) on Oct 28, 2021
13 changes: 13 additions & 0 deletions docs/source/faq.md
@@ -13,6 +13,19 @@ correctly set. When loading a pretrained checkpoint, all of the parameters for
the model itself will be loaded from the model's `.opt` file, but all
task-specific parameters will need to be re-specified.

If results differ only in the last few decimal places, this can often be
attributed to differences in the hardware or software environment.

## I want to generate a lot of responses to fixed utterances

The easiest way to do this is to [create a
teacher](tutorial_task) in ParlAI Dialog Format. Then, use
`eval_model` with world logging to store all the responses:

```bash
parlai eval_model -t fromfile:parlaiformat --fromfile-datapath yourtextfile.txt \
-mf some_model_file --world-logs outputfile
```
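As a minimal sketch of the two ends of this workflow: the snippet below writes a data file of fixed utterances in ParlAI Dialog Format, and then parses the model's replies out of the `.jsonl` file that `--world-logs` produces. The file names, the dummy labels, and the exact field layout of the log records are illustrative assumptions; inspect one line of your own log file to confirm the field names.

```python
import json

# Fixed utterances in ParlAI Dialog Format: one tab-separated example
# per line; "episode_done:True" ends each single-turn episode.
# (file name and dummy labels are illustrative)
examples = [
    "text:Hello, how are you?\tlabels:dummy\tepisode_done:True",
    "text:Recommend me a book.\tlabels:dummy\tepisode_done:True",
]
with open("yourtextfile.txt", "w") as f:
    f.write("\n".join(examples) + "\n")

# --world-logs writes one JSON object per logged episode; the model's
# replies sit inside the "dialog" turns. The layout assumed here is a
# list of (teacher act, model act) pairs keyed by agent "id".
def model_replies(jsonl_path, model_id="some_model"):
    replies = []
    with open(jsonl_path) as f:
        for line in f:
            record = json.loads(line)
            # Flatten the list of turn pairs into a single turn list.
            for turn in sum(record.get("dialog", []), []):
                if turn.get("id") == model_id:
                    replies.append(turn.get("text", ""))
    return replies
```

If the structure of your log differs, adjust the `"dialog"` key and the `id` filter accordingly; the overall pattern of one JSON record per line still applies.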

## Why is my generative model's perplexity so high (>1000) when evaluating?
