RuntimeError: InternalServerError when calling predict_proba #192

@1taehyeok

Description

Hello, I encountered an InternalServerError (HTTP 500) when calling predict_proba using the TabPFN client, so I’m reporting it here.

y_pred_proba = tabpfn_classifier.predict_proba(X_test)

I get the following error:
RuntimeError: InternalServerError: An unexpected error occurred on our server, sorry about that! Please report this error on GitHub (https://github.com/automl/tabpfn-client) or Discord (https://discord.com/invite/VJRuU3bSxt). Please include the following error message: HTTPError: 500 Server Error: Internal Server Error for url: https://a56a1e89df0acc73.europe-west4-668127571160.prediction.vertexai.goog/v1/projects/668127571160/locations/europe-west4/endpoints/a56a1e89df0acc73:rawPredict

Full traceback:

Processing: 2%|█▌ | [00:01<01:58]

RuntimeError Traceback (most recent call last)
Cell In[19], line 1
----> 1 y_pred_proba = tabpfn_classifier.predict_proba(X_test)
3 # Calculate the ROC AUC score
4 roc_auc = roc_auc_score(y_test, y_pred_proba[:, 1])

File ~\anaconda3\Lib\site-packages\tabpfn_client\estimator.py:241, in TabPFNClassifier.predict_proba(self, X)
232 def predict_proba(self, X):
233 """Predict class probabilities for X.
234
235 Args:
(...)
239 The class probabilities of the input samples.
240 """
--> 241 return self._predict(X, output_type="probas")

File ~\anaconda3\Lib\site-packages\tabpfn_client\estimator.py:254, in TabPFNClassifier._predict(self, X, output_type)
249 estimator_param = self.get_params()
250 estimator_param["model_path"] = TabPFNClassifier._model_name_to_path(
251 "classification", self.model_path
252 )
--> 254 result: PredictionResult = InferenceClient.predict(
255 X,
256 task="classification",
257 train_set_uid=self.last_train_set_uid,
258 config=estimator_param,
259 predict_params={"output_type": output_type},
260 X_train=self.last_train_X,
261 y_train=self.last_train_y,
262 )
264 # Unpack and store metadata
265 self.last_meta = result.metadata

File ~\anaconda3\Lib\site-packages\tabpfn_client\service_wrapper.py:261, in InferenceClient.predict(cls, X, task, train_set_uid, config, predict_params, X_train, y_train)
250 @classmethod
251 def predict(
252 cls,
(...)
259 y_train=None,
260 ):
--> 261 return ServiceClient.predict(
262 train_set_uid=train_set_uid,
263 x_test=X,
264 tabpfn_config=config,
265 predict_params=predict_params,
266 task=task,
267 X_train=X_train,
268 y_train=y_train,
269 )

File ~\anaconda3\Lib\site-packages\backoff\_sync.py:105, in retry_exception.<locals>.retry(*args, **kwargs)
96 details = {
97 "target": target,
98 "args": args,
(...)
101 "elapsed": elapsed,
102 }
104 try:
--> 105 ret = target(*args, **kwargs)
106 except exception as e:
107 max_tries_exceeded = (tries == max_tries_value)

File ~\anaconda3\Lib\site-packages\tabpfn_client\client.py:435, in ServiceClient.predict(cls, train_set_uid, x_test, task, predict_params, tabpfn_config, X_train, y_train)
433 raise ValueError(data["detail"])
434 else:
--> 435 raise RuntimeError(
436 data["error_class"] + ": " + data["detail"]
437 )
438 break
439 except RuntimeError as e:

To help with debugging, I am attaching X_test.csv (the data I passed to predict_proba).
If needed, I can also provide X_train or the preprocessing code.
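In case it helps with triage, this is the kind of local sanity check I can run on the payload before uploading (a sketch; the DataFrame below is synthetic and stands in for the attached X_test.csv, since 500s are sometimes triggered by NaN/inf values or unexpected dtypes in the request body):

```python
import numpy as np
import pandas as pd

def sanity_check(X: pd.DataFrame) -> list:
    """Collect payload properties that sometimes trip up remote inference."""
    problems = []
    num = X.select_dtypes(include=[np.number])
    if num.isna().any().any():
        problems.append("NaN values in numeric columns")
    if np.isinf(num.to_numpy(dtype=float, na_value=0.0)).any():
        problems.append("infinite values in numeric columns")
    non_num = X.columns.difference(num.columns)
    if len(non_num) > 0:
        problems.append("non-numeric columns: %s" % list(non_num))
    return problems

# Synthetic stand-in for X_test.csv
X_demo = pd.DataFrame({"a": [1.0, np.nan], "b": [np.inf, 2.0], "c": ["x", "y"]})
print(sanity_check(X_demo))
```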

Environment: tabpfn-client 0.2.8

Could you please check whether this issue originates from the model server, the Vertex AI endpoint, or the client request?
The error appears to come from the backend rather than from the client code.
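If the 500 turns out to be transient, a client-side retry along these lines might work as a stopgap (a sketch only; `flaky_predict` is a hypothetical stand-in for `tabpfn_classifier.predict_proba`, and the backoff parameters are arbitrary):

```python
import time

def retry_on_server_error(fn, *args, tries=3, base_delay=1.0, **kwargs):
    """Retry a callable on RuntimeError with exponential backoff."""
    for attempt in range(tries):
        try:
            return fn(*args, **kwargs)
        except RuntimeError:
            if attempt == tries - 1:
                raise  # out of retries, re-raise the last error
            time.sleep(base_delay * 2 ** attempt)

# Hypothetical stand-in that fails twice, then succeeds
calls = {"n": 0}
def flaky_predict(X):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("InternalServerError: 500")
    return [[0.3, 0.7]]

print(retry_on_server_error(flaky_predict, "X_test", tries=3, base_delay=0.01))
```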

Thank you!

X_test.csv
