Add a test case to test_autoai_output_consumption.py covering the following scenario: apply DisparateImpactRemover on the preprocessing prefix and perform refinement with a choice of classifiers.

Here is some code for using the pipeline generated for the German credit dataset:
```python
fairness_info = {
    "protected_attributes": [
        {"feature": "Sex", "reference_group": ['male'], "monitored_group": ['female']},
        {"feature": "Age", "reference_group": [[20, 40], [60, 90]], "monitored_group": [[41, 59]]},
    ],
    "favorable_labels": ["No Risk"],
    "unfavorable_labels": ["Risk"],
}

# keep the AutoAI pipeline's preprocessing steps, dropping its final estimator
prefix = best_pipeline.remove_last().freeze_trainable()

from sklearn.linear_model import LogisticRegression as LR
from sklearn.ensemble import RandomForestClassifier as RF
from lale.operator_wrapper import wrap_imported_operators
from lale.lib.aif360 import DisparateImpactRemover

wrap_imported_operators()

# mitigate disparate impact on the preprocessed features,
# then refine with a choice of classifiers
di_remover = DisparateImpactRemover(**fairness_info, preparation=prefix, redact=True)
planned_fairer = di_remover >> (LR | RF)

from lale.lib.aif360 import accuracy_and_disparate_impact
from lale.lib.aif360 import FairStratifiedKFold

combined_scorer = accuracy_and_disparate_impact(**fairness_info)
fair_cv = FairStratifiedKFold(**fairness_info, n_splits=3)

from lale.lib.lale import Hyperopt
import pandas as pd

df = pd.read_csv("german_credit_data_biased_training.csv")
y = df.iloc[:, -1]
X = df.drop(columns=['Risk'])

trained_fairer = planned_fairer.auto_configure(
    X, y, optimizer=Hyperopt, cv=fair_cv, verbose=True,
    max_evals=1, scoring=combined_scorer, best_score=1.0)
```
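A possible shape for the requested test is sketched below, wrapping the code above in a unittest method. It assumes the file's existing pattern of `unittest.TestCase` classes and that `best_pipeline` is produced by the module's AutoAI setup; the class name, method name, and final sanity check are placeholders, not the actual contents of test_autoai_output_consumption.py.

```python
import unittest

import pandas as pd
from sklearn.ensemble import RandomForestClassifier as RF
from sklearn.linear_model import LogisticRegression as LR

from lale.lib.aif360 import (
    DisparateImpactRemover,
    FairStratifiedKFold,
    accuracy_and_disparate_impact,
)
from lale.lib.lale import Hyperopt
from lale.operator_wrapper import wrap_imported_operators


class TestFairnessOnAutoAIOutput(unittest.TestCase):  # hypothetical class name
    def test_disparate_impact_remover_on_prefix(self):
        # fairness metadata for the German credit dataset, as in the issue
        fairness_info = {
            "protected_attributes": [
                {"feature": "Sex", "reference_group": ["male"], "monitored_group": ["female"]},
                {"feature": "Age", "reference_group": [[20, 40], [60, 90]], "monitored_group": [[41, 59]]},
            ],
            "favorable_labels": ["No Risk"],
            "unfavorable_labels": ["Risk"],
        }
        # `best_pipeline` is assumed to come from the module's AutoAI setup
        prefix = best_pipeline.remove_last().freeze_trainable()
        wrap_imported_operators()
        di_remover = DisparateImpactRemover(
            **fairness_info, preparation=prefix, redact=True)
        planned_fairer = di_remover >> (LR | RF)
        combined_scorer = accuracy_and_disparate_impact(**fairness_info)
        fair_cv = FairStratifiedKFold(**fairness_info, n_splits=3)
        df = pd.read_csv("german_credit_data_biased_training.csv")
        y = df.iloc[:, -1]
        X = df.drop(columns=["Risk"])
        trained_fairer = planned_fairer.auto_configure(
            X, y, optimizer=Hyperopt, cv=fair_cv, verbose=True,
            max_evals=1, scoring=combined_scorer, best_score=1.0)
        # loose sanity check: the trained pipeline can score the training data
        predictions = trained_fairer.predict(X)
        self.assertEqual(len(predictions), len(y))
```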