
ERROR: Config V2 validation failed: ValueError('_AlgorithmConfig: Unrecognized fields builtinassessorname') #4066

Closed
dean1314 opened this issue Aug 12, 2021 · 6 comments
dean1314 commented Aug 12, 2021

Hi, when I add an assessor to config.yml, it always fails.
My config content is as follows:

authorName: default
experimentName: example_mnist_pytorch
trialConcurrency: 1
maxExecDuration: 1h
maxTrialNum: 50
#choice: local, remote
trainingServicePlatform: local
searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
tuner:
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
  #SMAC (SMAC should be installed through nnictl)
  builtinTunerName: TPE
  classArgs:
    #choice: maximize, minimize
    optimize_mode: maximize
assessor:
  #choice: Medianstop, Curvefitting
  builtinAssessorName: Curvefitting
  classArgs:
    epoch_num: 20
    threshold: 0.9
trial:
  command: python3 mnist.py
  codeDir: .
  gpuNum: 1

Environment:

  • NNI version: 2.4
  • Training service (local|remote|pai|aml|etc): local
  • Client OS: Ubuntu 18.04
  • Python version: 3.8
  • PyTorch/TensorFlow version: 1.8
  • Is conda/virtualenv/venv used?: conda
  • Is running in Docker?: no
cruiseliu (Contributor) commented Aug 16, 2021

I can't reproduce this issue with NNI v2.4.
This is the config file I tried:

authorName: default
experimentName: example_mnist_pytorch
trialConcurrency: 1
maxExecDuration: 1h
maxTrialNum: 50
trainingServicePlatform: local
searchSpacePath: search_space.json
useAnnotation: false
tuner:
  builtinTunerName: TPE
  classArgs:
    optimize_mode: maximize
assessor:
  builtinAssessorName: Curvefitting
  classArgs:
    epoch_num: 20
    threshold: 0.9
trial:
  command: python3 mnist.py
  codeDir: .
  gpuNum: 0

If it does not work for you, please paste the command you ran and its full output.
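
For reference, the usual way to launch such an experiment with NNI 2.x is a command like the one below (config.yml is just an assumed name for the file pasted above; --port is optional and selects the web UI port):

    nnictl create --config config.yml --port 8080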

Scallions commented:

I also have this issue with this config file:

ERROR: Config V2 validation failed: ValueError('ExperimentConfig: Unrecognized fields trial')
ERROR: 'NoneType' object has no attribute 'get'

The trial and builtinAssessorName fields can't be recognized.

Environment:

  • NNI version: 2.4
  • Training service (local|remote|pai|aml|etc): local
  • Client OS: Ubuntu 18.04
  • Python version: 3.9
  • PyTorch/TensorFlow version: other
  • Is conda/virtualenv/venv used?: conda
  • Is running in Docker?: no

Scallions commented:

I think the experiment config (v2) does not support those fields. How can I use the legacy (v1) config in NNI v2?

Scallions commented:

I changed builtinAssessorName to name, and it works now:

assessor:
  #choice: Medianstop, Curvefitting
  # builtinAssessorName: Curvefitting
  name: Curvefitting
  classArgs:
    # (required) The total number of epochs.
    # We need to know the number of epochs to determine which point we need to predict.
    epoch_num: 20
    # (optional) To save computing resources, we start to predict only after receiving start_step intermediate results.
    # The default value of start_step is 6.
    start_step: 6
    # (optional) The threshold used to decide when to early-stop a poorly performing curve.
    # For example: if threshold = 0.95 and the best performance in history is 0.9, then we will stop any trial whose predicted value is lower than 0.95 * 0.9 = 0.855.
    # The default value of threshold is 0.95.
    threshold: 0.95
    # (optional) The gap interval between Assessor judgements.
    # For example: if gap = 2 and start_step = 6, then we will assess the results at the 6th, 8th, 10th, 12th... intermediate result.
    # The default value of gap is 1.
    gap: 1
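
For anyone who would rather move fully to the v2 config format instead of patching the v1 file, a rough sketch of the equivalent experiment config is below. Field names are taken from the NNI v2 experiment config schema; treat this as a sketch and double-check the names against the docs for your exact release (authorName has no v2 counterpart and is dropped here):

# v1 "trial" section becomes the top-level trialCommand / trialCodeDirectory / trialGpuNumber fields,
# and builtinTunerName / builtinAssessorName both become "name".
experimentName: example_mnist_pytorch
searchSpaceFile: search_space.json
trialCommand: python3 mnist.py
trialCodeDirectory: .
trialGpuNumber: 1
trialConcurrency: 1
maxTrialNumber: 50
maxExperimentDuration: 1h
tuner:
  name: TPE
  classArgs:
    optimize_mode: maximize
assessor:
  name: Curvefitting
  classArgs:
    epoch_num: 20
    threshold: 0.9
trainingService:
  platform: local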

dean1314 (Author) commented:

Thanks! I changed builtinAssessorName to name and it works.

scarlett2018 (Member) commented:

Closed as the issue has been resolved.
