Description
Hello TabPFN developers,
I have been using the TabPFN2 Prior Labs web app (https://ux.priorlabs.ai/predict) for model fitting and testing on a data set with 18 features and one target column, to obtain target predictions (regression task). The model-fitting (training) and model-testing (testing) files were created with an 80-20 train-test split. The workflow is:

1. Generate the model-building data containing features 1-18 and the target column (file: BuildingDataSet1), upload it to TabPFN, select the target column, and set the task to regression.
2. Estimate model performance with TabPFN2, which reports an R² of 8.56% (file: PriorWebAppWebpage).
3. Use the model-fitting data (file: FittingDataSet1) in the Test Set Prediction section of the web app to obtain predictions for the target column, then evaluate these predictions against the actual target values (file: FittingTargetPredictionsSet1).

The fitted model performs poorly (effectively fails) and shows a systematic bias. I have attached a scatter plot of actual vs. predicted target values from the model fitting (file: FittingTargetPredictionsSet1) that reflects this bias. A minimal sketch of the equivalent local workflow is included below for reference.
Why does the model show this bias and fail to fit the data? Any suggestions would be appreciated.