INPS is a Python package for statistical inference from non-probability samples.
INPS is available at the Python Package Index (PyPI).
pip install inps
To run the code included in this guide, the following imports are required.
import inps
import pandas as pd
import numpy as np
from numpy.random import default_rng
from xgboost import XGBRegressor, XGBClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.neural_network import MLPRegressor
Also, the following code creates the simulated data used in the examples.
rng = default_rng(0)
pop_size = 10000  # total population size
n = 1000          # non-probability sample size
N = 2000          # probability sample size
np_sample = rng.standard_normal(n * 5).reshape(-1, 5)
p_sample = rng.standard_normal(N * 3).reshape(-1, 3)
population = rng.standard_normal(pop_size * 3).reshape(-1, 3)
# Design weights for the probability sample; they sum to pop_size
weights = [pop_size / N * 0.8] * (N // 2) + [pop_size / N * 1.2] * (N // 2)
# Turn a numeric column into a two-level categorical one
def to_category(num_series):
    return pd.Series(np.where(num_series > 0, "Yes", "No"), dtype = "category", copy = False)
np_sample = pd.DataFrame(np_sample, columns = ["A", "B", "cat", "target", "target_cat"], copy = False)
p_sample = pd.DataFrame(p_sample, columns = ["A", "B", "cat"], copy = False)
population = pd.DataFrame(population, columns = ["A", "B", "cat"], copy = False)
np_sample["target_cat"] = to_category(np_sample["target_cat"])
np_sample["cat"] = to_category(np_sample["cat"])
p_sample["cat"] = to_category(p_sample["cat"])
population["cat"] = to_category(population["cat"])
p_sample["weights"] = weights
In general, np_sample and p_sample must be Pandas DataFrames. In a real application, the user has to verify the covariates: they must have the same names in both DataFrames, the same data types and, if categorical, the same categories.
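For instance, a quick consistency check could look like this (a minimal sketch in plain pandas, not an INPS helper):
for column in ["A", "B", "cat"]:
    assert np_sample[column].dtype == p_sample[column].dtype
    if isinstance(np_sample[column].dtype, pd.CategoricalDtype):
        assert list(np_sample[column].cat.categories) == list(p_sample[column].cat.categories)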
All of this code can be found in the test script; see also the guide script for code applying INPS to real data.
Calibration requires a sample and its known population totals. population_totals must be a Pandas Series.
population_totals = pd.Series({"A": 10, "B": 5})
Additionally, the user must pass either the total population size...
calibration_weights = inps.calibration_weights(np_sample, population_totals, population_size = pop_size)
...or the initial weights column name.
calibration_weights2 = inps.calibration_weights(p_sample, population_totals, weights_column = "weights")
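Calibration makes the weighted sample totals match the known population totals, so a quick sanity check is possible (this describes calibration in general and is not an INPS-specific feature):
print(np.sum(calibration_weights * np_sample["A"]))  # should be close to 10
print(np.sum(calibration_weights))                   # should be close to pop_size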
Helper methods are provided for obtaining estimates and 95% confidence intervals.
mean_estimation = inps.estimation(np_sample["target"], calibration_weights)
mean_interval = inps.confidence_interval(np_sample["target"], calibration_weights)
proportion_estimation = inps.estimation(np_sample["target_cat"] == "Yes", calibration_weights)
proportion_interval = inps.confidence_interval(np_sample["target_cat"] == "Yes", calibration_weights)
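The point estimate is presumably a weighted (Hájek-type) mean; the following sketch reproduces that computation by hand and is not INPS's actual code:
weighted_mean = np.sum(calibration_weights * np_sample["target"]) / np.sum(calibration_weights)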
PSA requires np_sample, p_sample and population_size.
psa_weights = inps.psa_weights(np_sample, p_sample, pop_size)
The user may also pass a weights column for the p_sample.
psa_weights = inps.psa_weights(np_sample, p_sample, pop_size, weights_column = "weights")
By default, columns with the same name in both samples are selected as covariates. This may be dangerous: it is preferable to select the covariates manually after having verified them.
psa_weights = inps.psa_weights(np_sample, p_sample, pop_size, weights_column = "weights", covariates = ["A", "B", "cat"])
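The automatic selection presumably boils down to the shared column names, which can be inspected directly (plain pandas, not an INPS call):
shared_covariates = np_sample.columns.intersection(p_sample.columns)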
By default, regularized logistic regression is applied. However, the user may choose any model supporting sample weights and compatible with the scikit-learn API.
psa_weights2 = inps.psa_weights(np_sample, p_sample, pop_size, weights_column = "weights", model = XGBClassifier(enable_categorical = True, tree_method = "hist"))
For models requiring only numerical data without missing values, make_preprocess_estimator adds some default preprocessing.
psa_weights3 = inps.psa_weights(np_sample, p_sample, pop_size, weights_column = "weights", model = inps.make_preprocess_estimator(BernoulliNB()))
The result is a dictionary with the np_sample and p_sample PSA weights as numpy arrays. The weights for the np_sample may be used for estimation as usual.
mean_estimation = inps.estimation(np_sample["target"], psa_weights["np"])
mean_interval = inps.confidence_interval(np_sample["target"], psa_weights["np"])
proportion_estimation = inps.estimation(np_sample["target_cat"] == "Yes", psa_weights["np"])
proportion_interval = inps.confidence_interval(np_sample["target_cat"] == "Yes", psa_weights["np"])
Matching requires np_sample, p_sample and target_column (from np_sample).
matching_values = inps.matching_values(np_sample, p_sample, "target")
If the target variable is categorical, a target category is required and probabilities are returned.
cat_matching_values = inps.matching_values(np_sample, p_sample, "target_cat", "Yes")
By default, columns with the same name in both samples are selected as covariates. This may be dangerous: it is preferable to select the covariates manually after having verified them.
matching_values = inps.matching_values(np_sample, p_sample, "target", covariates = ["A", "B", "cat"])
By default, ridge regression (or regularized logistic regression for categorical values) is applied. However, the user may choose any model compatible with the scikit-learn API.
matching_values2 = inps.matching_values(np_sample, p_sample, "target", model = XGBRegressor(enable_categorical = True, tree_method = "hist"))
For models requiring only numerical data without missing values, make_preprocess_estimator adds some default preprocessing.
matching_values3 = inps.matching_values(np_sample, p_sample, "target", model = inps.make_preprocess_estimator(MLPRegressor()))
The result is a dictionary with the p_sample and np_sample imputed values (or probabilities if categorical) as numpy arrays. The values for the p_sample may be used for estimation as usual.
mean_estimation = inps.estimation(matching_values["p"], p_sample["weights"])
mean_interval = inps.confidence_interval(matching_values["p"], p_sample["weights"])
proportion_estimation = inps.estimation(cat_matching_values["p"], p_sample["weights"])
proportion_interval = inps.confidence_interval(cat_matching_values["p"], p_sample["weights"])
The parameters are analogous to matching.
doubly_robust_estimation = inps.doubly_robust_estimation(np_sample, p_sample, "target", covariates = ["A", "B", "cat"])
cat_doubly_robust_estimation = inps.doubly_robust_estimation(np_sample, p_sample, "target_cat", "Yes", covariates = ["A", "B", "cat"])
Default models are applied. As usual, custom ones may be specified.
doubly_robust_estimation2 = inps.doubly_robust_estimation(np_sample, p_sample, "target", psa_model = XGBClassifier(enable_categorical = True, tree_method = "hist"), matching_model = XGBRegressor(enable_categorical = True, tree_method = "hist"))
The estimated mean/proportion is directly returned by the method.
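Both results are plain numbers, so they can be used directly. For example:
print(doubly_robust_estimation)      # estimated mean of "target"
print(cat_doubly_robust_estimation)  # estimated proportion of "Yes"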
Training is the recommended method. The parameters and return values are analogous to matching, except that there are now two models the user may optionally specify.
training_values = inps.training_values(np_sample, p_sample, "target", psa_model = XGBClassifier(enable_categorical = True, tree_method = "hist"), matching_model = XGBRegressor(enable_categorical = True, tree_method = "hist"))
The imputed values for the p_sample may be used for estimation as usual.
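For example, following the matching pattern (assuming the same dictionary keys as matching_values):
mean_estimation = inps.estimation(training_values["p"], p_sample["weights"])
mean_interval = inps.confidence_interval(training_values["p"], p_sample["weights"])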
Kernel weighting parameters are analogous to PSA.
kw_weights = inps.kw_weights(np_sample, p_sample, pop_size, weights_column = "weights", covariates = ["A", "B", "cat"])
A numpy array with the estimated weights for the np_sample is returned.
proportion_estimation = inps.estimation(np_sample["target_cat"] == "Yes", kw_weights)
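Means and confidence intervals work the same way as with PSA weights:
mean_estimation = inps.estimation(np_sample["target"], kw_weights)
mean_interval = inps.confidence_interval(np_sample["target"], kw_weights)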
The exact same methods can be used when the "probability sample" is actually the whole population.
imputed_values = inps.training_values(np_sample, population, "target")
cat_imputed_values = inps.training_values(np_sample, population, "target_cat", "Yes")
mean_estimation = inps.estimation(imputed_values["p"])
mean_interval = inps.confidence_interval(imputed_values["p"])
proportion_estimation = inps.estimation(cat_imputed_values["p"])
proportion_interval = inps.confidence_interval(cat_imputed_values["p"])
inps.boosting_classifier() and inps.boosting_regressor() return advanced Gradient Boosting estimators, ready to use for good results out of the box.
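For example, they can be plugged into the methods above through the usual model parameters (the variable names here are just for illustration):
psa_weights4 = inps.psa_weights(np_sample, p_sample, pop_size, weights_column = "weights", model = inps.boosting_classifier())
matching_values4 = inps.matching_values(np_sample, p_sample, "target", model = inps.boosting_regressor())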