GH-2632: Updating the param selection docs for the v0.10 syntax
tadejmagajna committed Feb 14, 2022
1 parent c9591b9 commit 32ebc4c
Showing 2 changed files with 18 additions and 38 deletions.
26 changes: 8 additions & 18 deletions resources/docs/KOR_docs/TUTORIAL_8_MODEL_OPTIMIZATION.md
@@ -20,6 +20,7 @@ corpus = TREC_6()

```python
from hyperopt import hp
from flair.embeddings import FlairEmbeddings, WordEmbeddings
from flair.hyperparameter.param_selection import SearchSpace, Parameter
# define your search space
search_space = SearchSpace()
@@ -54,8 +55,12 @@ you can define the maximum number of evaluation runs that hyperopt should perform
```python
from flair.hyperparameter.param_selection import TextClassifierParamSelector, OptimizationValue
# create the parameter selector

label_type = 'question_class'

param_selector = TextClassifierParamSelector(
corpus,
corpus,
label_type,
False,
'resources/results',
'lstm',
@@ -90,6 +95,7 @@ from flair.datasets import WNUT_17
from flair.embeddings import TokenEmbeddings, WordEmbeddings, StackedEmbeddings
from flair.trainers import ModelTrainer
from typing import List
from torch.optim.adam import Adam
# 1. get the corpus
corpus = WNUT_17().downsample(0.1)
print(corpus)
@@ -113,29 +119,13 @@ tagger: SequenceTagger = SequenceTagger(hidden_size=256,
# 6. initialize the trainer
trainer: ModelTrainer = ModelTrainer(tagger, corpus)
# 7. find the learning rate
learning_rate_tsv = trainer.find_learning_rate('resources/taggers/example-ner',
'learning_rate.tsv')
learning_rate_tsv = trainer.find_learning_rate('resources/taggers/example-ner', Adam)
# 8. plot the learning rate finder curve
from flair.visual.training_curves import Plotter
plotter = Plotter()
plotter.plot_learning_rate(learning_rate_tsv)
```
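
As a follow-up sketch (not part of the original tutorial): once you have read an optimal learning rate off the plot, you can pass it to `trainer.train()`. The value `0.1` below is a hypothetical placeholder; use whatever your own `learning_rate.png` suggests.

```python
# minimal sketch, continuing from the trainer initialized above;
# 0.1 is a hypothetical placeholder read off the learning-rate plot
trainer.train('resources/taggers/example-ner',
              learning_rate=0.1,
              mini_batch_size=32,
              max_epochs=10)
```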

## Custom Optimizers

You can now use any of PyTorch's optimizers for training when initializing a `ModelTrainer`. To give the optimizer extra options, simply specify them as shown in the `weight_decay` example:

```python
from torch.optim.adam import Adam
trainer = ModelTrainer(tagger, corpus,
optimizer=Adam)

trainer.train(
"resources/taggers/example",
weight_decay=1e-4
)
```

## Next

In the next tutorial, we will look at [training your own embeddings](/resources/docs/KOR_docs/TUTORIAL_9_TRAINING_LM_EMBEDDINGS.md),
30 changes: 10 additions & 20 deletions resources/docs/TUTORIAL_8_MODEL_OPTIMIZATION.md
@@ -24,6 +24,7 @@ Therefore, you can use all

```python
from hyperopt import hp
from flair.embeddings import FlairEmbeddings, WordEmbeddings
from flair.hyperparameter.param_selection import SearchSpace, Parameter

# define your search space
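# The full search-space listing is collapsed in this diff view. The lines below are an
# illustrative sketch of how a search space is typically populated; the concrete
# embedding and value choices are assumptions, not the exact content of the file.
search_space = SearchSpace()
search_space.add(Parameter.EMBEDDINGS, hp.choice, options=[
    [WordEmbeddings('en')],
    [FlairEmbeddings('news-forward'), FlairEmbeddings('news-backward')],
])
search_space.add(Parameter.HIDDEN_SIZE, hp.choice, options=[32, 64, 128])
search_space.add(Parameter.DROPOUT, hp.uniform, low=0.0, high=0.5)
search_space.add(Parameter.LEARNING_RATE, hp.choice, options=[0.05, 0.1, 0.15, 0.2])
search_space.add(Parameter.MINI_BATCH_SIZE, hp.choice, options=[8, 16, 32])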
@@ -62,9 +63,13 @@ The final evaluation score will be the average over all those runs.
```python
from flair.hyperparameter.param_selection import TextClassifierParamSelector, OptimizationValue

# what label do we want to predict?
label_type = 'question_class'

# create the parameter selector
param_selector = TextClassifierParamSelector(
corpus,
corpus,
label_type,
False,
'resources/results',
'lstm',
@@ -94,7 +99,7 @@ explodes as the learning rate becomes too big. With such a plot, the optimal learning rate can be found by
picking the highest one from the optimal phase.

In order to run such an experiment, start with your initialized `ModelTrainer` and call `find_learning_rate()` with the
`base_path` and the file name in which to record the learning rates and losses. Then plot the generated results via the
`base_path` and the optimizer (in our case `torch.optim.adam.Adam`). Then plot the generated results via the
`Plotter`'s `plot_learning_rate()` function and have a look at the `learning_rate.png` image to select the optimal
learning rate:

@@ -103,6 +108,7 @@ from flair.datasets import WNUT_17
from flair.embeddings import TokenEmbeddings, WordEmbeddings, StackedEmbeddings
from flair.trainers import ModelTrainer
from typing import List
from torch.optim.adam import Adam

# 1. get the corpus
corpus = WNUT_17().downsample(0.1)
@@ -135,31 +141,15 @@ tagger: SequenceTagger = SequenceTagger(hidden_size=256,
trainer: ModelTrainer = ModelTrainer(tagger, corpus)

# 7. find learning rate
learning_rate_tsv = trainer.find_learning_rate('resources/taggers/example-ner',
'learning_rate.tsv')
learning_rate_tsv = trainer.find_learning_rate('resources/taggers/example-ner', Adam)

# 8. plot the learning rate finder curve
from flair.visual.training_curves import Plotter
plotter = Plotter()
plotter.plot_learning_rate(learning_rate_tsv)
```

## Custom Optimizers

You can now use any of PyTorch's optimizers for training when initializing a `ModelTrainer`. To give the optimizer any
extra options just specify it as shown with the `weight_decay` example:

```python
from torch.optim.adam import Adam

trainer = ModelTrainer(tagger, corpus,
optimizer=Adam)

trainer.train(
"resources/taggers/example",
weight_decay=1e-4
)
```
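
The same pattern should extend to other PyTorch optimizers. A minimal sketch with SGD, assuming that extra keyword arguments passed to `train()` are forwarded to the optimizer in the same way as `weight_decay` above (the `momentum` value is purely illustrative):

```python
from torch.optim.sgd import SGD

# sketch: plug in a different PyTorch optimizer
trainer = ModelTrainer(tagger, corpus,
                       optimizer=SGD)

trainer.train(
    "resources/taggers/example",
    momentum=0.9  # illustrative; forwarded to SGD like weight_decay above
)
```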
The learning rates and losses will be written to `learning_rate.tsv`.
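
If you want to inspect that file programmatically, here is a small sketch; it assumes only that the file is tab-separated and lives under the `base_path` used above, and it makes no assumptions about column names.

```python
import csv

# print the first few rows of the learning-rate log
with open('resources/taggers/example-ner/learning_rate.tsv') as f:
    for i, row in enumerate(csv.reader(f, delimiter='\t')):
        print(row)
        if i >= 5:
            break
```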

## Next

