Trying to Understand the Learning Progression #21
Hi Dave, this seemingly really simple example actually exhibits some properties that are hard for the default model: non-stationarity and heteroscedasticity. Non-stationarity is when the rate of change of the function changes with the inputs. Heteroscedasticity is when the amount of noise actually changes with the inputs. The rest of your questions are answered inline.
Jasper
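For intuition, a minimal toy sketch of both effects at once (the function and constants here are illustrative, not from the thread):

    import numpy as np

    def noisy_objective(x, rng=None):
        # Toy 1-D objective that is hard for a stationary GP model.
        rng = rng if rng is not None else np.random.default_rng()
        # Non-stationary: sin(x^2) oscillates slowly near 0 and faster as x
        # grows, so the function's rate of change depends on the input.
        signal = np.sin(x ** 2)
        # Heteroscedastic: the noise scale grows with |x|, so the amount of
        # observation noise also depends on the input.
        noise = rng.normal(scale=0.01 + 0.2 * abs(x))
        return signal + noise

A stationary GP kernel assumes a single length-scale and a single noise level over the whole input space, which is why objectives like this are awkward for the default model.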
Thanks for the highly detailed response, this was really helpful. I changed to using the log of the output and preventing the discontinuities, and it's making much more sense now.
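In code terms that change is a one-line transform in the wrapper. A minimal sketch, assuming spearmint's main(job_id, params) job-script convention and a hypothetical evaluate_cost helper standing in for the real training run:

    import numpy as np

    def main(job_id, params):
        # evaluate_cost is a hypothetical stand-in for the real training run;
        # it must return a strictly positive cost for the log to be defined.
        cost = evaluate_cost(params)
        # Returning log(cost) compresses large values, so the observation
        # noise looks more nearly uniform to the GP.
        return float(np.log(cost))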
Just to clarify: when you say "... project your inputs to the log-domain before passing them to the optimizer", do you mean project the output from our cost function, or the inputs we provide in the config file to the "wrapper" (cost function)?
Change the input bounds of your problem to be the log of the original bounds, and map them back inside your cost function.

Jasper
For example, if you want to initialize weights between 0.01 and 1 but want spearmint to search that range in log space:

In config.pb:

    variable {
      name: "init_weight"
      type: FLOAT
      size: 1
      min:  -2.0
      max:  0.0
    }

Inside your python/whatever function:

    import numpy as np

    for x in np.nditer(params['init_weight'], op_flags=['readwrite']):
        x[...] = 10 ** x

So spearmint is working with a variable on the range [-2.0, 0.0], but your function sees values on [0.01, 1.0].
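Putting the pieces together, a minimal sketch of the whole wrapper (main(job_id, params) follows spearmint's job-script convention; train_network and the variable name are hypothetical stand-ins):

    import numpy as np

    def train_network(init_weight):
        # Hypothetical: build and train the model, return a validation cost.
        raise NotImplementedError

    def main(job_id, params):
        # Spearmint hands back a numpy array on the log10 range [-2.0, 0.0];
        # exponentiate to recover the actual weight scale [0.01, 1.0].
        init_weight = float(10 ** params['init_weight'][0])
        return train_network(init_weight)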
This is great. Thank you JasperSnoek and Quanticles for the prompt and concise reply :)
In your paper "Input Warping for Bayesian Optimization of Non-stationary Functions" you mention warping the number of hidden units. How did you go about projecting a sequence of INTEGERS, say from 0-9 units, to log space? Hope I am not making this thread too long...
Hey Julien, no problem. In that paper we treated integers as continuous values.

Jasper
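One common way to realize that (an assumption on my part; the thread does not spell out the rounding step) is to let spearmint search a continuous variable and round only when evaluating. A sketch with illustrative bounds:

    def to_hidden_units(x, lo=1, hi=512):
        # x is a continuous spearmint variable on [0.0, 1.0]. Map it
        # geometrically (i.e. uniformly in log space) onto [lo, hi],
        # then round to the nearest integer number of units.
        return int(round(lo * (hi / lo) ** x))

    # to_hidden_units(0.0) -> 1, to_hidden_units(0.5) -> 23, to_hidden_units(1.0) -> 512

Note that a geometric mapping needs a strictly positive lower bound, which is why lo starts at 1 rather than 0.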
Hi,
I'm trying to understand what the learning process is doing - it doesn't seem to be working for me. I'm tuning two parameters with GPEIOptChooser on a neural network: a global learning slowdown factor and the number of epochs to run.
I thought this should be an easy test where spearmint would dial in the best parameters quickly, but it seems to be struggling.
Questions:
Thanks,
Dave