Hello all,
I have a project with multiple input dimensions, each with a known monotonicity. That is, we know that increasing the first input should decrease the output, and increasing the second input should increase the output.
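For concreteness, here is a minimal sketch (my own helper, not part of GPy) of how a per-dimension sign specification could be turned into virtual derivative observations: for each input dimension with a known sign, attach a constant class label at every virtual point, to be fed to a probit/Binomial likelihood over the partial derivative. The function name and encoding are my own assumptions.

```python
import numpy as np

def virtual_derivative_targets(X_virtual, signs):
    """Encode monotonicity signs as classification targets.

    X_virtual: (N, D) virtual input locations.
    signs: length-D sequence of +1 (increasing) / -1 (decreasing).
    Returns a list of (N, 1) target arrays, one per input dimension,
    with Binomial-style labels: 1 for increasing, 0 for decreasing.
    """
    n = X_virtual.shape[0]
    return [np.full((n, 1), 1.0 if s > 0 else 0.0) for s in signs]

# two virtual points in 2-D; first input decreasing, second increasing
X_virtual = np.array([[0.2, 0.3],
                      [0.5, 0.9]])
targets = virtual_derivative_targets(X_virtual, signs=[-1, +1])
```

This only fixes the *sign* of each partial derivative at the virtual points, never its value, which matches the specification below.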
I have modified this example
import numpy as np
import GPy

def fd(x):
    # dummy derivative "observations": we only know the sign (-1), not the value
    return -np.ones((x.shape[0], 1))

def test_multioutput_model_with_ep():
    f = lambda x: np.sin(x) + 0.1*(x - 2.)**2 - 0.005*x**3
    N = 10
    sigma = 0.05
    # x = np.random.random((3, 2))
    x = np.array([np.linspace(1, 10, N)]).T
    y = f(x)
    xd = x
    yd = fd(x)
    # squared exponential kernel:
    se = GPy.kern.RBF(input_dim=1, lengthscale=1.5, variance=0.2)
    # a separate kernel is needed for the derivative observations,
    # wrapping the base kernel and the dimension to differentiate:
    se_der = GPy.kern.DiffKern(se, 0)
    # likelihoods: Gaussian for the function values,
    # scaled probit for the derivative-sign observations
    gauss = GPy.likelihoods.Gaussian(variance=sigma**2)
    probit = GPy.likelihoods.Binomial(
        gp_link=GPy.likelihoods.link_functions.ScaledProbit(nu=100))
    inference = GPy.inference.latent_function_inference.expectation_propagation.EP(ep_mode='nested')
    # inference = GPy.inference.latent_function_inference.Laplace()
    # create the model; everything is passed in lists
    m = GPy.models.MultioutputGP(X_list=[x, xd], Y_list=[y, yd],
                                 kernel_list=[se, se_der],
                                 likelihood_list=[gauss, probit],
                                 inference_method=inference)
    m.optimize(messages=0, ipython_notebook=False)
    return m
to take in a joint classification problem (a dummy one).
However, when I try to increase the dimensionality of the problem, i.e. x = np.random.random((3, 2)), EP throws an error saying it is a 1D-only method. Laplace, on the other hand, gives me the following error:
TypeError Traceback (most recent call last)
<ipython-input-1-032a3ffad658> in <module>
3
4 from regression_2 import *
----> 5 model = test_multioutput_model_with_ep()
6 xpred = np.array([np.linspace(0,11,4)]).T
7
~/Desktop/phd/manuel_BO/GPY_MONOTONICITY/examples/regression_2.py in test_multioutput_model_with_ep()
106 # inference = GPy.inference.latent_function_inference.FITC()
107 # inference = GPy.inference.latent_function_inference.PEP(alpha = 0.5)
--> 108 m = GPy.models.MultioutputGP(X_list=[x, xd], Y_list=[y, yd], kernel_list=[se, se_der], likelihood_list = [gauss, probit], inference_method=inference)
109 m.optimize(messages=0, ipython_notebook=False)
110 return m
~/anaconda3/envs/ml_torch/lib/python3.7/site-packages/paramz/parameterized.py in __call__(self, *args, **kw)
56 self._model_initialized_ = False
57 if initialize:
---> 58 self.initialize_parameter()
59 else:
60 import warnings
~/anaconda3/envs/ml_torch/lib/python3.7/site-packages/paramz/core/parameter_core.py in initialize_parameter(self)
335 self._highest_parent_._connect_parameters() #logger.debug("calling parameters changed")
...
--> 124 N = Y_metadata['trials']
125 np.testing.assert_array_equal(N.shape, y.shape)
126 Ny = N-y
TypeError: 'NoneType' object is not subscriptable
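My reading of the failing line is that the Binomial likelihood looks up Y_metadata['trials'], and since I never passed any Y_metadata it is None, so subscripting it fails. Presumably it wants one trial count per derivative observation; a minimal sketch of what such metadata could look like (the shapes mirror my yd above, the fix itself is an assumption):

```python
import numpy as np

# dummy derivative-sign observations, same shape as yd in the example above
yd = -np.ones((10, 1))

# one Bernoulli trial per virtual derivative observation;
# this is what Y_metadata['trials'] would be indexed from
Y_metadata = {'trials': np.ones_like(yd)}
```

I have not confirmed where MultioutputGP expects this metadata to be attached, so corrections are welcome.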
Any help is greatly appreciated. PS: I am not fixated on any of the above code/methods. I welcome any suggestion on how to include monotonicity of inputs with respect to the output, so long as 1) I can make predictions about the output, 2) it is multidimensional, and 3) I don't need the exact gradient values, only their signs as a specification.