
A standard way to report more failure status of a trial #704

Closed
haifeng-jin opened this issue Jul 6, 2022 · 2 comments · Fixed by #760 or #769
Labels: enhancement (New feature or request)

Comments

@haifeng-jin (Collaborator)

Is your feature request related to a problem? Please describe.

If a model fails during training, the whole program crashes.
A better strategy is to report the status of this trial and skip it.

Describe the solution you'd like

One solution would be to add a new function to the Oracle solely to report the failure status of a trial.
However, it requires modifying the proto files used by gRPC.
Further investigation into the roadmap of the distributed features of keras_tuner is required.
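
For illustration only, here is a purely hypothetical sketch of what such an Oracle method could look like. The method name, status string, and fields touched below are invented for this sketch and are not the actual KerasTuner API:

```python
# Purely hypothetical sketch of the proposed Oracle extension; the method
# name, status value, and fields below are illustrative, not the real API.
class Oracle:
    def __init__(self):
        self.trials = {}  # trial_id -> trial object, managed by the tuner

    def report_failure(self, trial_id, message=""):
        # Mark the trial as failed so the search loop can skip it and move
        # on to the next trial instead of crashing the whole run.
        trial = self.trials[trial_id]
        trial.status = "FAILED"
        trial.message = message
        # In distributed mode this state change would also have to be
        # carried over gRPC, hence the proto file changes mentioned above.
```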

Describe alternatives you've considered

Additional context

@carlthome (Contributor)

This is sorely missing.

I'm tuning both the feature representation and the model topology simultaneously in a keras-tuner experiment, and when the number of pooling layers is too large for the size of the input features, the whole tuning run simply crashes, with no obvious way for me to catch the exception and skip to the next trial.

@haifeng-jin (Collaborator, Author)

This can now be done with raise keras_tuner.FailedTrialError("error message.") in HyperModel.build(), HyperModel.fit(), or your model build function.
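
For example, a minimal sketch of a model-building function that uses this to skip infeasible architectures (the scenario from the previous comment). The input size, hyperparameter name, and architecture are illustrative, and depending on the KerasTuner version the exception may need to be imported as keras_tuner.errors.FailedTrialError:

```python
from tensorflow import keras
import keras_tuner

def build_model(hp):
    input_size = 16  # illustrative input resolution
    num_pool_layers = hp.Int("num_pool_layers", 1, 6)
    # Skip configurations where pooling would shrink the feature map below
    # 1x1, i.e. the infeasible trials described in the comment above.
    if input_size // (2 ** num_pool_layers) < 1:
        # In some versions: keras_tuner.errors.FailedTrialError
        raise keras_tuner.FailedTrialError(
            f"{num_pool_layers} pooling layers is too many for a "
            f"{input_size}x{input_size} input."
        )
    inputs = keras.Input(shape=(input_size, input_size, 3))
    x = inputs
    for _ in range(num_pool_layers):
        x = keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
        x = keras.layers.MaxPooling2D()(x)
    x = keras.layers.GlobalAveragePooling2D()(x)
    outputs = keras.layers.Dense(10, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

# Failed trials are recorded and skipped instead of crashing the search.
tuner = keras_tuner.RandomSearch(build_model, objective="val_loss", max_trials=10)
```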
