Hello,

I am finding a recurrent issue with the predictions of the optimizer for the last point that was added to the dataset.

I am using the following chunk of code to initialize a BayesianOptimization instance:

import numpy as np
from bayes_opt import BayesianOptimization, acquisition

# Bounded region of parameter space
pbounds = {'x': (-20, 20)}

# Bayesian optimization with upper confidence bound acquisition function
ucb = acquisition.UpperConfidenceBound(kappa=2.5)

bo = BayesianOptimization(
    f=None,
    acquisition_function=ucb,
    pbounds=pbounds,
    verbose=2,
    random_state=2803,
)
Then, I declare the following functions: one that plays the role of an "experiment" and one that adds new runs to the dataset:

# Example of "true" function
def experiment(x):
    y = (1/2) * np.sin(x/4) - np.cos(x/6) + 2 * np.sin(x/2)
    return y

# Add new runs to the optimizer
def next_experiment(bo, ne=1, verbose=1):
    for i in range(ne):
        next_run = bo.suggest()
        xi = next_run['x']
        yi = experiment(xi)
        bo.register(params=next_run, target=yi)
        if verbose > 0:
            print(f'x = {xi:6.2f} : y = {yi:6.2f}')
    return bo
For example, I can add three runs and I get the following output:
bo = next_experiment(bo, ne=3)
x = -4.85 : y = -2.47
x = -17.10 : y = -0.12
x = -17.08 : y = -0.14
However, when I use the predict method of the _gp model, I am not getting back the correct value for the last entry:
xi = bo.space.params.flatten()
yi = bo._gp.predict(xi.reshape(-1, 1))
for x, y in zip(xi, yi):
    print(f'x = {x:6.2f} : yp = {y:6.2f}')
x = -4.85 : yp = -2.47
x = -17.10 : yp = -0.12
x = -17.08 : yp = -0.87
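If the discrepancy is simply that the surrogate has not been refit since the last register() call, one way to check (a sketch only; it assumes bo._gp is the scikit-learn GaussianProcessRegressor that bayes_opt refits inside suggest(), and that bo.space exposes params and target) would be to refit the internal GP on all registered data before predicting:

# Sketch of a manual check: refit the internal GP on every registered point
# before querying it (assumes bo._gp is a scikit-learn GaussianProcessRegressor
# and bo.space.params / bo.space.target hold the registered data).
bo._gp.fit(bo.space.params, bo.space.target)

xi = bo.space.params.flatten()
yi = bo._gp.predict(xi.reshape(-1, 1))
for x, y in zip(xi, yi):
    print(f'x = {x:6.2f} : yp = {y:6.2f}')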
If I add two new runs and repeat the calculation of the predicted values, I now get the correct value for the third entry (x = -17.08, yp = -0.14), but the last entry (x = 20.00) is again wrong: -0.06 instead of -0.58.
bo = next_experiment(bo, ne=2)
x = -19.92 : y = 2.49
x = 20.00 : y = -0.58
xi = bo.space.params.flatten()
yi = bo._gp.predict(xi.reshape(-1, 1))
for x, y in zip(xi, yi):
    print(f'x = {x:6.2f} : yp = {y:6.2f}')
x = -4.85 : yp = -2.47
x = -17.10 : yp = -0.12
x = -17.08 : yp = -0.14
x = -19.92 : yp = 2.49
x = 20.00 : yp = -0.06
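The pattern suggests the surrogate is always trained on everything except the most recently registered point. Assuming the internal model is a fitted scikit-learn GaussianProcessRegressor (so it exposes X_train_) and that bo.space supports len(), a small diagnostic along these lines could confirm that:

# Diagnostic sketch: compare the number of registered points with the number of
# points the GP was actually fit on (assumes the scikit-learn X_train_
# attribute set after fitting, and that len(bo.space) counts registered runs).
n_registered = len(bo.space)
n_fit = bo._gp.X_train_.shape[0]
print(f'registered points: {n_registered}, points used to fit the GP: {n_fit}')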
Am I missing something? Is this the correct behaviour?
Thanks!!
I'm realizing that while I added a note to this effect to the .maximize function docs (link), I never did the same for .probe. Would that have helped clear up the confusion in your case?