Metric tooltips, readme and documentation fixes
ZanMervic committed Dec 6, 2023
1 parent 1ca751e commit a7fc5c8
Showing 8 changed files with 89 additions and 10 deletions.
7 changes: 6 additions & 1 deletion .readthedocs.yaml
@@ -1,11 +1,16 @@
# Required
version: 2

build:
os: ubuntu-20.04
tools:
python: "3.10"

sphinx:
configuration: doc/conf.py

python:
version: "3.8"
version: "3.10"
install:
- method: pip
path: .
4 changes: 3 additions & 1 deletion README.md
@@ -10,7 +10,9 @@

![Example Workflow](doc/readme-screenshot.png)

Fairness add-on for the [Orange](http://orangedatamining.com/).
Orange3 Fairness is an add-on for the [Orange3](http://orangedatamining.com/) data mining suite.
It provides extensions for fairness-aware AI, which includes algorithms for detecting and mitigating
different types of biases in the data and the predictions of machine learning models.


# Easy installation
15 changes: 13 additions & 2 deletions README.pypi
@@ -1,6 +1,17 @@
Orange3 Fairness
================

Orange3 Fairness is an add-on for the [Orange3](http://orangedatamining.com/) data mining suite. It provides extensions for fairness-aware AI.
Orange3 Fairness is an add-on for the [Orange3](http://orangedatamining.com/) data mining suite.
It provides extensions for fairness-aware AI, which includes algorithms for detecting and mitigating
different types of biases in the data and the predictions of machine learning models.
See [documentation](https://orange3-fairness.readthedocs.io/).

Features
--------
#### Bias detection
* detect bias present in the data
* detect bias present in model predictions

#### Bias mitigation
* pre-processing, in-processing and post-processing methods for bias mitigation

![Screenshot of Orange3 Fairness add-on](https://raw.githubusercontent.com/biolab/orange3-fairness/main/doc/readme-screenshot.png)
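
As a rough illustration of the kind of bias check these widgets perform, here is a minimal sketch using aif360 directly, the library the add-on converts Orange data to under the hood. The data, column names and group encodings below are made up for illustration; in Orange itself you would use the Dataset Bias widget instead of writing code.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'label' is the class value (1 = favorable outcome).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "age":   [34, 45, 29, 51, 38, 42, 27, 33],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)

print("Disparate impact (ideal = 1):", metric.disparate_impact())
print("Statistical parity difference (ideal = 0):", metric.statistical_parity_difference())
```

In this toy table the privileged group receives the favorable outcome 75% of the time and the unprivileged group 25% of the time, so disparate impact comes out to about 0.33 and statistical parity difference to -0.5.
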
7 changes: 7 additions & 0 deletions doc/widgets/adversarial-debiasing.md
@@ -18,6 +18,13 @@ In short, **Adversarial Debiasing** is a classification algorithm with or withou

![](images/adversarial-debiasing.png)

Warning
-------

The **Adversarial Debiasing** widget requires TensorFlow to work. Because TensorFlow is a large library, we made it an optional dependency. To use the widget, install TensorFlow by clicking the `Install TensorFlow` button in the widget.

![](images/adversarial-debiasing-no-tensorflow.png)
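
Installing TensorFlow into the same Python environment by other means (for example with `pip install tensorflow`) should presumably have the same effect, but the `Install TensorFlow` button is the only route shown in this documentation.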

Example
-------

3 changes: 2 additions & 1 deletion doc/widgets/dataset-bias.md
@@ -7,7 +7,8 @@ Computes the bias of a dataset.
- Data: dataset to be evaluated


**Dataset Bias** computes and displays the bias of a dataset. More specifically, it computes the disparate impact and statistical parity difference metrics for the dataset.
**Dataset Bias** computes and displays the bias of a dataset. More specifically, it computes the disparate impact and statistical parity difference metrics for the dataset. \
The ideal value is 1.0 for disparate impact and 0.0 for statistical parity difference. Values below the ideal value indicate bias in favor of the privileged group; values above it indicate bias in favor of the unprivileged group.

![](images/dataset-bias.png)
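
As a hypothetical illustration of these values: if 60% of the privileged group and 45% of the unprivileged group receive the favorable class value, disparate impact is 0.45 / 0.60 = 0.75 and statistical parity difference is 0.45 - 0.60 = -0.15, both indicating bias in favor of the privileged group.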

1 changed file could not be displayed.
43 changes: 39 additions & 4 deletions orangecontrib/fairness/evaluation/scoring.py
@@ -62,7 +62,14 @@ class StatisticalParityDifference(FairnessScorer):
"""Class for Statistical Parity Difference fairness scoring."""

name = "SPD"
long_name = "Statistical Parity Difference (ideal = 0)"
long_name = str(
"<p>Statistical Parity Difference (SPD): Measures the difference in ratios of "
"favorable outcomes. An ideal value is 0.0.</p>"
"<ul>"
"<li>SPD &lt; 0: The privileged group has a higher rate of favorable outcomes.</li>"
"<li>SPD &gt; 0: The privileged group has a lower rate of favorable outcomes.</li>"
"</ul>"
)

def metric(self, classification_metric):
return classification_metric.statistical_parity_difference()
@@ -72,7 +79,15 @@ class EqualOpportunityDifference(FairnessScorer):
"""Class for Equal Opportunity Difference fairness scoring."""

name = "EOD"
long_name = "Equal Opportunity Difference (ideal = 0)"
long_name = str(
"<p>Equal Opportunity Difference (EOD): It measures the difference in "
"true positive rates. An ideal value is 0.0, indicating the difference "
"in true positive rates is the same for both groups.</p>"
"<ul>"
"<li>EOD &lt; 0: The privileged group has a higher true positive rate.</li>"
"<li>EOD &gt; 0: The privileged group has a lower true positive rate.</li>"
"</ul>"
)

def metric(self, classification_metric):
return classification_metric.equal_opportunity_difference()
@@ -82,7 +97,17 @@ class AverageOddsDifference(FairnessScorer):
"""Class for Average Odds Difference fairness scoring."""

name = "AOD"
long_name = "Average Odds Difference (ideal = 0)"
long_name = str(
"<p>Average Odds Difference (AOD): This metric calculates the average difference "
"between the true positive rates (correctly predicting a positive outcome) and false "
"positive rates (incorrectly predicting a positive outcome) for both the privileged "
"and unprivileged groups. A value of 0.0 indicates equal rates for both groups, "
"signifying fairness.</p>"
"<ul>"
"<li>AOD &lt; 0: Indicates bias in favor of the privileged group.</li>"
"<li>AOD &gt; 0: Indicates bias against the privileged group.</li>"
"</ul>"
)

def metric(self, classification_metric):
return classification_metric.average_odds_difference()
@@ -92,7 +117,17 @@ class DisparateImpact(FairnessScorer):
"""Class for Disparate Impact fairness scoring."""

name = "DI"
long_name = "Disparate Impact (ideal = 1)"
long_name = str(
"<p>Disparate Impact (DI): The ratio of ratios of favorable outcomes for an unprivileged "
"group to that of the privileged group. An ideal value of 1.0 means the ratio is "
"the same for both groups.</p>"
"<ul>"
"<li>DI &lt; 1.0: The privileged group receives favorable outcomes at a higher rate "
"than the unprivileged group.</li>"
"<li>DI &gt; 1.0: The privileged group receives favorable outcomes at a lower rate "
"than the unprivileged group.</li>"
"</ul>"
)

# TODO: When using randomize, models sometimes predict the same class for all instances
# This can lead to division by zero in the Disparate Impact score (and untrue results for the other scores)
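
The four scorers above all reduce to simple arithmetic on group-wise rates, following the conventions of the aif360 `ClassificationMetric` calls they delegate to. Below is a minimal sketch with made-up rates, purely to illustrate the signs described in the tooltips; it is not part of the committed code.

```python
# Hypothetical group-wise rates, chosen only to illustrate the tooltip wording above.
rate_unpriv, rate_priv = 0.45, 0.60   # favorable-outcome rates per group
tpr_unpriv, tpr_priv = 0.70, 0.80     # true positive rates per group
fpr_unpriv, fpr_priv = 0.15, 0.25     # false positive rates per group

spd = rate_unpriv - rate_priv                                  # -0.15 -> SPD < 0
di = rate_unpriv / rate_priv                                   #  0.75 -> DI < 1
eod = tpr_unpriv - tpr_priv                                    # -0.10 -> EOD < 0
aod = ((fpr_unpriv - fpr_priv) + (tpr_unpriv - tpr_priv)) / 2  # -0.10 -> AOD < 0

# All four values point the same way here: the privileged group is favored.
print(f"SPD={spd:+.2f}  DI={di:.2f}  EOD={eod:+.2f}  AOD={aod:+.2f}")
```
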
20 changes: 19 additions & 1 deletion orangecontrib/fairness/widgets/owdatasetbias.py
@@ -42,7 +42,9 @@ def set_data(self, data: Optional[Table]) -> None:
not data
):
self.disparate_impact_label.setText("No data detected.")
self.disparate_impact_label.setToolTip("")
self.statistical_parity_difference_label.setText("")
self.statistical_parity_difference_label.setToolTip("")
return

# Convert Orange data to aif360 StandardDataset
@@ -53,5 +55,21 @@
disparate_impact = dataset_metric.disparate_impact()
statistical_parity_difference = dataset_metric.statistical_parity_difference()
self.disparate_impact_label.setText(f"Disparate Impact (ideal = 1): {round(disparate_impact, 3)}")
self.disparate_impact_label.setToolTip(
"<p>Disparate Impact (DI): Measures the ratio of the ratios of favorable class values for an "
"unprivileged group to that of the privileged group. An ideal value of 1.0 means the ratio of "
"favorable class values is the same for both groups.</p>"
"<ul>"
"<li>DI &lt; 1.0: The privileged group has a higher percentage of favorable class values.</li>"
"<li>DI &gt; 1.0: The privileged group has a lower percentage of favorable class values.</li>"
"</ul>"
)
self.statistical_parity_difference_label.setText(f"Statistical Parity Difference (ideal = 0): {round(statistical_parity_difference, 3)}")

self.statistical_parity_difference_label.setToolTip(
"<p>Statistical Parity Difference (SPD): Measures the difference in ratios of favorable class values "
"between the unprivileged and the privileged groups. An ideal value for this metric is 0.</p>"
"<ul>"
"<li>SPD &lt; 0: The privileged group has a higher percentage of favorable class values.</li>"
"<li>SPD &gt; 0: The privileged group has a lower percentage of favorable class values.</li>"
"</ul>"
)
