Bias and non-discrimination in relation to the lifecycle #57
Problem analysis
Examples of biases and traps: societal bias: a prejudice for or against a person or a group (Ref. 2). Societal or social biases are often stereotypes. Common examples of societal or social biases are based on concepts like race, ethnicity, gender, sexual orientation, socioeconomic status, education, and more. (Ref. 1)
Traps (Ref. 2; Selbst et al., 2019): the framing trap, the portability trap, the formalism trap, the ripple-effect trap, and the solutionism trap.
Design
In many cases, fairness-related harms can be traced back to the way a real-world problem is translated into a machine learning task. Which target variable do we intend to predict? Which features will be included? Which (fairness) constraints do we consider? Many of these decisions boil down to what social scientists refer to as measurement: the way we measure (abstract) phenomena. (Fairlearn, https://fairlearn.org/v0.7.0/user_guide/fairness_in_machine_learning.html#fairness-of-ai-systems)
Examples of biases: construct validity bias: a statistical bias that occurs when a feature or target variable does not accurately measure the construct it was designed to measure. (Ref. 2) See also the biases under Problem analysis.
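As a rough illustration of making such a design decision explicit, the sketch below trains a model under a fairness constraint using Fairlearn's reductions API (documented in the user guide cited above). The dataset, column names, and the choice of DemographicParity are illustrative assumptions, not taken from this discussion:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Hypothetical development sample: X holds the chosen features, y the chosen
# target variable, and "gender" the sensitive feature used in the constraint.
df = pd.DataFrame({
    "income": [30, 45, 28, 60, 52, 33, 41, 38],
    "age": [25, 40, 23, 50, 44, 30, 36, 29],
    "gender": ["f", "m", "f", "m", "m", "f", "m", "f"],
    "repaid": [0, 1, 0, 1, 1, 0, 1, 0],
})
X = df[["income", "age"]]
y = df["repaid"]

# Optimize accuracy subject to a demographic-parity constraint, rather than
# optimizing accuracy alone: an explicit (fairness) design choice.
mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(),
    constraints=DemographicParity(),
)
mitigator.fit(X, y, sensitive_features=df["gender"])
predictions = mitigator.predict(X)
```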
Data exploration and data preparation
Ref. 3: this phase tends to produce the most bias, due to the prejudices and […].
Examples of biases: historical bias: a social bias that is encoded in the data through biased human decision-making or structural biases embedded in society. (Ref. 2)
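One simple way to surface historical bias during data exploration is to compare base rates of the (historical) target across groups before any modelling; large gaps may reflect biased past decisions rather than ground truth. A minimal sketch, with assumed column names and toy data:

```python
import pandas as pd

# Toy historical dataset; "hired" records past human decisions, which may
# themselves be biased.
df = pd.DataFrame({
    "group": ["a", "a", "a", "a", "b", "b", "b", "b"],
    "hired": [1, 1, 1, 0, 0, 0, 1, 0],
})

# Positive-outcome rate per group in the historical data.
base_rates = df.groupby("group")["hired"].mean()
print(base_rates)
print("largest gap between groups:", base_rates.max() - base_rates.min())
```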
Development
Ref. 3: in the modelling phase, the model is built and trained. Here, fairness issues can arise when an unfit model is selected or when the modelling choices result in the prioritization of an objective that leads to more errors for underrepresented groups.
Examples of biases: aggregation bias: a bias that occurs when a single machine learning model is used for groups that have distinct data distributions, resulting in inaccurate predictions for (some) groups. (Ref. 2)
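The synthetic sketch below illustrates aggregation bias: a single model fitted across two groups whose feature-target relationships differ fits both groups poorly, while per-group models fit well. All data and numbers are purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=(200, 1))
group = np.repeat([0, 1], 100)
# The relationship between x and y differs per group: rising vs. falling.
y = np.where(group == 0, 2 * x[:, 0], -2 * x[:, 0]) + rng.normal(0, 0.1, 200)

single_model = LinearRegression().fit(x, y)  # one model for everyone
for g in (0, 1):
    mask = group == g
    per_group_model = LinearRegression().fit(x[mask], y[mask])
    print(
        f"group {g}: single-model R2 = {single_model.score(x[mask], y[mask]):.2f}, "
        f"per-group R2 = {per_group_model.score(x[mask], y[mask]):.2f}"
    )
```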
Validation
Ref. 3: during the evaluation stage of the model development cycle, the performance of the model on the test set is evaluated.
Examples of biases: evaluation bias: a bias that occurs through the use of performance metrics and procedures that are not appropriate for the context in which the model will be used. (Ref. 2)
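One common safeguard during validation is disaggregated evaluation: reporting the metric per group rather than as a single aggregate, so that errors concentrated in one group are not hidden. A minimal sketch using Fairlearn's MetricFrame, with toy placeholder arrays:

```python
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]

mf = MetricFrame(
    metrics=accuracy_score,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sensitive,
)
print("overall accuracy:", mf.overall)
print(mf.by_group)                  # accuracy per group
print("largest gap:", mf.difference())
```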
Implementation
Ref. 3: the model is deployed in a real-world setting, where its predictions are part of a system that affects individuals and groups of people. Ideally, the population that the model sees in the real world resembles that of the development sample, but this is not always the case.
Ref. 2: once the system is deployed, it may be used, interpreted, or interacted with inappropriately, resulting in unfair outcomes. The underlying cause of these outcomes is a mismatch between the system's design and the context in which it will be applied. Indeed, biases in deployment can often be attributed to abstraction traps.
Examples of biases: automation bias: caused when people prefer the results generated by algorithms over results generated by humans (Ref. 3); deployment bias: occurs when decision-makers and other end users behave unexpectedly with the AI system, thereby resulting in unfair outcomes and interventions (Ref. 3). See the traps under Problem analysis.
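A rough way to detect the population mismatch described above is to compare the distribution of each feature at development time with what the deployed system actually receives, for example with a two-sample Kolmogorov-Smirnov test. The data, feature, and significance threshold below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
dev_sample = rng.normal(loc=0.0, scale=1.0, size=1000)   # feature at development time
live_sample = rng.normal(loc=0.5, scale=1.0, size=1000)  # same feature in production

result = ks_2samp(dev_sample, live_sample)
if result.pvalue < 0.01:
    print(f"distribution shift detected "
          f"(KS statistic={result.statistic:.3f}, p={result.pvalue:.4f})")
```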
Monitoring
Examples of biases: reinforcing feedback loop: feedback mechanisms that amplify an effect. In the context of algorithmic fairness, it refers to the amplification of existing biases when new data is collected based on the output of a biased model. See also selection bias. (Ref. 2)
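A deliberately simplified simulation of such a loop, in the style of the runaway feedback loops described for predictive policing: new incidents are only recorded where resources are sent, and resources follow the existing records, so a small initial imbalance grows even though the true rates are identical. All numbers are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = (0.3, 0.3)             # both districts have the same true incident rate
recorded = np.array([12.0, 8.0])   # slightly imbalanced historical records

for day in range(1000):
    # Send the (single) patrol to the district with the most recorded incidents.
    district = int(np.argmax(recorded))
    # Incidents are only recorded where the patrol actually is.
    if rng.random() < true_rate[district]:
        recorded[district] += 1

# District 0 now holds nearly all records, although the true rates are equal.
print("record share per district:", recorded / recorded.sum())
```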
Archiving
No specific examples of bias yet.
Questions for this discussion:
The following three references already contain a fairly broad overview of different kinds of biases, with an indication of the lifecycle phases in which they are relevant. I have not yet translated these into Dutch.
Ref. 1: NIST, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (https://www.nist.gov/publications/towards-standard-identifying-and-managing-bias-artificial-intelligence)
Ref. 2: H. Weerts, An Introduction to Algorithmic Fairness, 2021 (https://arxiv.org/pdf/2105.05595v1.pdf)
Ref. 3: The Fairness Handbook (https://openresearch.amsterdam/nl/page/87589/the-fairness-handbook)
Definition of bias: a systematic and disproportionate tendency towards something (Ref. 2)
Types of bias (Ref. 1):
Human biases reflect systematic errors in human thought, based on a limited number of heuristic principles that reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations. […]
Systemic biases result from procedures and practices of particular institutions that operate in ways which result in certain social groups being advantaged or favored and others being disadvantaged or devalued. […]
Statistical and computational biases stem from errors that result when the sample is not representative of the population. These biases arise from systematic as opposed to random error and can occur in the absence of prejudice, partiality, or discriminatory intent. […]
There are also some important nuances to keep in mind here:
In the comments of this discussion, I have listed a number of examples of bias for each phase of the lifecycle. This list is certainly not exhaustive, also in view of the nuances mentioned. You are invited to extend and sharpen it.