From 73a6ff2a77e11c5cf7c332419b8ded47f5196188 Mon Sep 17 00:00:00 2001 From: jasonjabbour Date: Fri, 15 Nov 2024 18:36:37 -0500 Subject: [PATCH 1/6] better definitions --- contents/core/responsible_ai/responsible_ai.qmd | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/contents/core/responsible_ai/responsible_ai.qmd b/contents/core/responsible_ai/responsible_ai.qmd index dc6643dd4..e7bafe522 100644 --- a/contents/core/responsible_ai/responsible_ai.qmd +++ b/contents/core/responsible_ai/responsible_ai.qmd @@ -38,7 +38,7 @@ Implementing responsible ML presents both technical and ethical challenges. Deve This chapter will equip you to critically evaluate AI systems and contribute to developing beneficial and ethical machine learning applications by covering the foundations, methods, and real-world implications of responsible ML. The responsible ML principles discussed are crucial knowledge as algorithms mediate more aspects of human society. -## Definition +## Terminology Responsible AI is about developing AI that positively impacts society under human ethics and values. There is no universally agreed-upon definition of "responsible AI," but here is a summary of how it is commonly described. Responsible AI refers to designing, developing, and deploying artificial intelligence systems in an ethical, socially beneficial way. The core goal is to create trustworthy, unbiased, fair, transparent, accountable, and safe AI. While there is no canonical definition, responsible AI is generally considered to encompass principles such as: @@ -62,7 +62,9 @@ Putting these principles into practice involves technical techniques, corporate Machine learning models are often criticized as mysterious "black boxes" - opaque systems where it's unclear how they arrived at particular predictions or decisions. For example, an AI system called [COMPAS](https://doc.wi.gov/Pages/AboutDOC/COMPAS.aspx) used to assess criminal recidivism risk in the US was found to be racially biased against black defendants. Still, the opacity of the algorithm made it difficult to understand and fix the problem. This lack of transparency can obscure biases, errors, and deficiencies. -Explaining model behaviors helps engender trust from the public and domain experts and enables identifying issues to address. Interpretability techniques like [LIME](https://homes.cs.washington.edu/~marcotcr/blog/lime/), Shapley values, and saliency maps empower humans to understand and validate model logic. Laws like the EU's GDPR also mandate transparency, which requires explainability for certain automated decisions. Overall, transparency and explainability are critical pillars of responsible AI. +Explaining model behaviors helps engender trust from the public and domain experts and enables identifying issues to address. Interpretability techniques play a key role in this process. For instance, [LIME](https://homes.cs.washington.edu/~marcotcr/blog/lime/) (Local Interpretable Model-Agnostic Explanations) highlights how individual input features contribute to a specific prediction, while **Shapley values** quantify each feature’s contribution to a model’s output based on cooperative game theory. **Saliency maps**, commonly used in image-based models, visually highlight areas of an image that most influenced the model’s decision. These tools empower users to understand model logic. + +Beyond practical benefits, transparency is increasingly required by law. 
Regulations like the General Data Protection Regulation ([GDPR](https://gdpr.eu/tag/gdpr/)) mandate that organizations provide explanations for certain automated decisions, especially when they significantly impact individuals. This makes explainability not just a best practice but a legal necessity in some contexts. Together, transparency and explainability form critical pillars of building responsible and trustworthy AI systems. ### Fairness, Bias, and Discrimination From ae2d91cb892d4ddaf11dbb758334b6a39479854c Mon Sep 17 00:00:00 2001 From: jasonjabbour Date: Sat, 16 Nov 2024 03:14:37 -0500 Subject: [PATCH 2/6] better explained table --- .../core/responsible_ai/responsible_ai.qmd | 70 +++++++++++-------- 1 file changed, 40 insertions(+), 30 deletions(-) diff --git a/contents/core/responsible_ai/responsible_ai.qmd b/contents/core/responsible_ai/responsible_ai.qmd index e7bafe522..84e539655 100644 --- a/contents/core/responsible_ai/responsible_ai.qmd +++ b/contents/core/responsible_ai/responsible_ai.qmd @@ -62,9 +62,9 @@ Putting these principles into practice involves technical techniques, corporate Machine learning models are often criticized as mysterious "black boxes" - opaque systems where it's unclear how they arrived at particular predictions or decisions. For example, an AI system called [COMPAS](https://doc.wi.gov/Pages/AboutDOC/COMPAS.aspx) used to assess criminal recidivism risk in the US was found to be racially biased against black defendants. Still, the opacity of the algorithm made it difficult to understand and fix the problem. This lack of transparency can obscure biases, errors, and deficiencies. -Explaining model behaviors helps engender trust from the public and domain experts and enables identifying issues to address. Interpretability techniques play a key role in this process. For instance, [LIME](https://homes.cs.washington.edu/~marcotcr/blog/lime/) (Local Interpretable Model-Agnostic Explanations) highlights how individual input features contribute to a specific prediction, while **Shapley values** quantify each feature’s contribution to a model’s output based on cooperative game theory. **Saliency maps**, commonly used in image-based models, visually highlight areas of an image that most influenced the model’s decision. These tools empower users to understand model logic. +Explaining model behaviors helps engender trust from the public and domain experts and enables identifying issues to address. Interpretability techniques play a key role in this process. For instance, [LIME](https://homes.cs.washington.edu/~marcotcr/blog/lime/) (Local Interpretable Model-Agnostic Explanations) highlights how individual input features contribute to a specific prediction, while Shapley values quantify each feature’s contribution to a model’s output based on cooperative game theory. Saliency maps, commonly used in image-based models, visually highlight areas of an image that most influenced the model’s decision. These tools empower users to understand model logic. -Beyond practical benefits, transparency is increasingly required by law. Regulations like the General Data Protection Regulation ([GDPR](https://gdpr.eu/tag/gdpr/)) mandate that organizations provide explanations for certain automated decisions, especially when they significantly impact individuals. This makes explainability not just a best practice but a legal necessity in some contexts. Together, transparency and explainability form critical pillars of building responsible and trustworthy AI systems. 
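To make these attribution ideas concrete, the following minimal sketch estimates Shapley-style attributions for a single prediction by averaging marginal contributions over random feature orderings. It is an illustration rather than code from the chapter or any particular library: the linear `model`, its weights, and the convention of substituting the dataset mean for "absent" features are assumptions made purely for the example.

```python
# Illustrative sketch (assumed setup): Monte Carlo Shapley estimation for one
# prediction. "Removing" a feature means replacing it with the dataset mean.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))              # background data with 4 features
weights = np.array([2.0, -1.0, 0.5, 0.0])  # stand-in for a trained model

def model(x):
    # Stand-in for model.predict on a single example.
    return float(x @ weights)

def shapley_estimate(x, n_samples=2000):
    baseline = X.mean(axis=0)              # feature values used when "absent"
    phi = np.zeros(len(x))
    for _ in range(n_samples):
        current = baseline.copy()
        prev = model(current)
        for j in rng.permutation(len(x)):
            current[j] = x[j]              # add feature j to the coalition
            new = model(current)
            phi[j] += new - prev           # marginal contribution of feature j
            prev = new
    return phi / n_samples

x_explain = X[0]
phi = shapley_estimate(x_explain)
print("Attributions:", np.round(phi, 3))
# The attributions sum to the gap between this prediction and the baseline one.
print(round(float(phi.sum()), 3), round(model(x_explain) - model(X.mean(axis=0)), 3))
```

Production libraries such as SHAP use far more efficient estimators, but the permutation-averaging logic they approximate is the same.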
+Beyond practical benefits, transparency is increasingly required by law. Regulations like the European Union's General Data Protection Regulation ([GDPR](https://gdpr.eu/tag/gdpr/)) mandate that organizations provide explanations for certain automated decisions, especially when they significantly impact individuals. This makes explainability not just a best practice but a legal necessity in some contexts. Together, transparency and explainability form critical pillars of building responsible and trustworthy AI systems. ### Fairness, Bias, and Discrimination @@ -108,28 +108,6 @@ Without clear accountability, even harms caused unintentionally could go unresol While these principles broadly apply across AI systems, certain responsible AI considerations are unique or pronounced when dealing with machine learning on embedded devices versus traditional server-based modeling. Therefore, we present a high-level taxonomy comparing responsible AI considerations across cloud, edge, and TinyML systems. -### Summary - -@tbl-ml-principles-comparison summarizes how responsible AI principles manifest differently across cloud, edge, and TinyML architectures and how core considerations tie into their unique capabilities and limitations. Each environment's constraints and tradeoffs shape how we approach transparency, accountability, governance, and other pillars of responsible AI. - -+------------------------+------------------------------+-------------------------------+------------------------------+ -| Principle | Cloud ML | Edge ML | TinyML | -+:=======================+:=============================+:==============================+:=============================+ -| Explainability | Complex models supported | Lightweight required | Severe limits | -+------------------------+------------------------------+-------------------------------+------------------------------+ -| Fairness | Broad data available | On-device biases | Limited data labels | -+------------------------+------------------------------+-------------------------------+------------------------------+ -| Privacy | Cloud data vulnerabilities | More sensitive data | Data dispersed | -+------------------------+------------------------------+-------------------------------+------------------------------+ -| Safety | Hacking threats | Real-world interaction | Autonomous devices | -+------------------------+------------------------------+-------------------------------+------------------------------+ -| Accountability | Corporate policies | Supply chain issues | Component tracing | -+------------------------+------------------------------+-------------------------------+------------------------------+ -| Governance | External oversight feasible | Self-governance needed | Protocol constraints | -+------------------------+------------------------------+-------------------------------+------------------------------+ - -: Comparison of key principles in Cloud ML, Edge ML, and TinyML. {#tbl-ml-principles-comparison .striped .hover} - ### Explainability For cloud-based machine learning, explainability techniques can leverage significant compute resources, enabling complex methods like SHAP values or sampling-based approaches to interpret model behaviors. For example, [Microsoft's InterpretML](https://www.microsoft.com/en-us/research/uploads/prod/2020/05/InterpretML-Whitepaper.pdf) toolkit provides explainability techniques tailored for cloud environments. 
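For image models, the saliency maps mentioned earlier can be sketched in just a few lines. The code below is a hedged illustration, not InterpretML or any production tooling: it assumes a recent PyTorch/torchvision (the `weights=None` argument) and uses an untrained ResNet-18 on a random tensor as a stand-in for a deployed classifier.

```python
# Illustrative sketch: gradient-based saliency. The absolute gradient of the
# winning class score with respect to each pixel indicates how strongly that
# pixel influenced the decision.
import torch
from torchvision import models

model = models.resnet18(weights=None)   # stand-in classifier (untrained)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in input

scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()         # backprop the top class score

# Collapse color channels so there is one saliency value per pixel.
saliency = image.grad.abs().max(dim=1).values
print(saliency.shape)                   # torch.Size([1, 224, 224])
```

Overlaying such a map on the input image yields the kind of visual explanation described above.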
@@ -146,13 +124,24 @@ Edge ML relies on limited on-device data, making analyzing biases across diverse TinyML poses unique challenges for fairness with highly dispersed specialized hardware and minimal training data. Bias testing is difficult across diverse devices. Collecting representative data from many devices to mitigate bias has scale and privacy hurdles. [DARPA's Assured Neuro Symbolic Learning and Reasoning (ANSR)](https://www.darpa.mil/news-events/2022-06-03) efforts are geared toward developing fairness techniques given extreme hardware constraints. + +### Privacy + +For cloud ML, vast amounts of user data are concentrated in the cloud, creating risks of exposure through breaches. Differential privacy techniques add noise to cloud data to preserve privacy. Strict access controls and encryption protect cloud data at rest and in transit. + +Edge ML moves data processing onto user devices, reducing aggregated data collection but increasing potential sensitivity as personal data resides on the device. Apple uses on-device ML and differential privacy to train models while minimizing data sharing. Data anonymization and secure enclaves protect on-device data. + +TinyML distributes data across many resource-constrained devices, making centralized breaches unlikely and making scale anonymization challenging. Data minimization and using edge devices as intermediaries help TinyML privacy. + +So, while cloud ML must protect expansive centralized data, edge ML secures sensitive on-device data, and TinyML aims for minimal distributed data sharing due to constraints. While privacy is vital throughout, techniques must match the environment. Understanding nuances allows for selecting appropriate privacy preservation approaches. + ### Safety Key safety risks for cloud ML include model hacking, data poisoning, and malware disrupting cloud services. Robustness techniques like adversarial training, anomaly detection, and diversified models aim to harden cloud ML against attacks. Redundancy can help prevent single points of failure. Edge ML and TinyML interact with the physical world, so reliability and safety validation are critical. Rigorous testing platforms like [Foretellix](https://www.foretellix.com/) synthetically generate edge scenarios to validate safety. TinyML safety is magnified by autonomous devices with limited supervision. TinyML safety often relies on collective coordination - swarms of drones maintain safety through redundancy. Physical control barriers also constrain unsafe TinyML device behaviors. -In summary, safety is crucial but manifests differently in each domain. Cloud ML guards against hacking, edge ML interacts physically, so reliability is key, and TinyML leverages distributed coordination for safety. Understanding the nuances guides appropriate safety techniques. +Safety considerations vary significantly across domains, reflecting their unique challenges. Cloud ML focuses on guarding against hacking and data breaches, edge ML emphasizes reliability due to its physical interactions with the environment, and TinyML often relies on distributed coordination to maintain safety in autonomous systems. Recognizing these nuances is essential for applying the appropriate safety techniques to each domain. ### Accountability @@ -170,15 +159,36 @@ Edge ML is more decentralized, requiring responsible self-governance by develope Extreme decentralization and complexity make external governance infeasible with TinyML. 
TinyML relies on protocols and standards for self-governance baked into model design and hardware. Cryptography enables the provable trustworthiness of TinyML devices. -### Privacy +### Summary -For cloud ML, vast amounts of user data are concentrated in the cloud, creating risks of exposure through breaches. Differential privacy techniques add noise to cloud data to preserve privacy. Strict access controls and encryption protect cloud data at rest and in transit. +@tbl-ml-principles-comparison summarizes how responsible AI principles manifest differently across cloud, edge, and TinyML architectures and how core considerations tie into their unique capabilities and limitations. Each environment's constraints and tradeoffs shape how we approach transparency, accountability, governance, and other pillars of responsible AI. -Edge ML moves data processing onto user devices, reducing aggregated data collection but increasing potential sensitivity as personal data resides on the device. Apple uses on-device ML and differential privacy to train models while minimizing data sharing. Data anonymization and secure enclaves protect on-device data. ++------------------------+--------------------------------------+------------------------------------+--------------------------------+ +| Principle | Cloud ML | Edge ML | TinyML | ++:=======================+:=====================================+:===================================+:===============================+ +| Explainability | Supports complex models and methods | Needs lightweight, low-latency | Severely limited due to | +| | like SHAP and sampling approaches | methods like saliency maps | constrained hardware | ++------------------------+--------------------------------------+------------------------------------+--------------------------------+ +| Fairness | Large datasets enable bias detection | Localized biases harder to detect | Minimal data limits bias | +| | and mitigation | but allows on-device adjustments | analysis and mitigation | ++------------------------+--------------------------------------+------------------------------------+--------------------------------+ +| Privacy | Centralized data at risk of breaches | Sensitive personal data on-device | Distributed data reduces | +| | but can leverage strong encryption | requires on-device protections | centralized risks but poses | +| | and differential privacy | | challenges for anonymization | ++------------------------+--------------------------------------+------------------------------------+--------------------------------+ +| Safety | Vulnerable to hacking and | Real-world interactions make | Needs distributed safety | +| | large-scale attacks | reliability critical | mechanisms due to autonomy | ++------------------------+--------------------------------------+------------------------------------+--------------------------------+ +| Accountability | Corporate policies and audits ensure | Fragmented supply chains complicate| Traceability required across | +| | responsibility | accountability | long, complex hardware chains | ++------------------------+--------------------------------------+------------------------------------+--------------------------------+ +| Governance | External oversight and regulations | Requires self-governance by | Relies on built-in protocols | +| | like GDPR or CCPA are feasible | developers and stakeholders | and cryptographic assurances | 
++------------------------+--------------------------------------+------------------------------------+--------------------------------+ -TinyML distributes data across many resource-constrained devices, making centralized breaches unlikely and making scale anonymization challenging. Data minimization and using edge devices as intermediaries help TinyML privacy. -So, while cloud ML must protect expansive centralized data, edge ML secures sensitive on-device data, and TinyML aims for minimal distributed data sharing due to constraints. While privacy is vital throughout, techniques must match the environment. Understanding nuances allows for selecting appropriate privacy preservation approaches. +: Comparison of key principles in Cloud ML, Edge ML, and TinyML. {#tbl-ml-principles-comparison .striped .hover} + ## Technical Aspects From 6c095a21edc45aba7478980fdb094a27076461c5 Mon Sep 17 00:00:00 2001 From: jasonjabbour Date: Sat, 16 Nov 2024 20:44:27 -0500 Subject: [PATCH 3/6] fixing figure and clarifying figure explanation --- .../images/png/fairness_cartoon.png | Bin 6565 -> 19038 bytes .../core/responsible_ai/responsible_ai.qmd | 23 +++++++++--------- 2 files changed, 11 insertions(+), 12 deletions(-) diff --git a/contents/core/responsible_ai/images/png/fairness_cartoon.png b/contents/core/responsible_ai/images/png/fairness_cartoon.png index 1bd91fd6b703c76b663b622e7c9ae7c332eafcae..c08953d4796a12d9a964583d33489a25d78c5750 100644 GIT binary patch literal 19038
zm~q75_;KZmbV>0ag;B0!S^fwRu4y3jkJ~ip(d-k`b6#5Bj}7=mPg1hS7Bi=z%s#5Y zHND{O7eX^j-p(bvz#&#_e&YA+kxS!bNLm{A#Up))2)In=KIdzGK3l}P#K&Vb7LY&9Hd{k!5jAKVt1O%GaH|{n!8D3 zIFUvq^{$wy^fc@B=*wbslWc!QR=T0f=egCo!c4`(JEEDJ*(!2yKO71>NiAit;n;{Qw zR2xpRhgi8Xe?Iq+mIjmki%kjB7>`q*(}2>w1jRlSDO+!bBV(xOTqt3GM2*?dDQhUR z1die~Y7E>u2`(}ubnVqlcI?0^AlD5Q?s+YCOD2iL)-OxJu}KgNT$=*(vC;S9xT8&SpwfV#b)U=JD^!D0OU8at3JNdwb zj!BUw!G#n2J>~e#45extKuPa|Eg?FbMawC$78Xw&$xk%cy7_%(olunxgLQ*s=~d)i zzX0*{Q--t@=RqE?%8fbNfV&&ADD0Xq<-^%KzUo*7h*c8B3zT0T3@cIvl7VDglO_|#}uLuKF@nLhP>BQ0cU|CDv8#!fzAP07!INmgnl?P4hqC0K@1@ks+*^iM*{7^=Tk{X5xRBS*Y{;ItM^t kD>lj_3I2)yFLK}z36?(tr@p$&*}u@Likb>=Ig8-`0Uzs|7ytkO diff --git a/contents/core/responsible_ai/responsible_ai.qmd b/contents/core/responsible_ai/responsible_ai.qmd index 84e539655..7d28cd00b 100644 --- a/contents/core/responsible_ai/responsible_ai.qmd +++ b/contents/core/responsible_ai/responsible_ai.qmd @@ -153,7 +153,7 @@ With TinyML, accountability mechanisms must be traced across long, complex suppl ### Governance -Organizations institute internal governance for cloud ML, such as ethics boards, audits, and model risk management. But external governance also oversees cloud ML, like regulations on bias and transparency such as the [AI Bill of Rights](https://www.whitehouse.gov/ostp/ai-bill-of-rights/), [General Data Protection Regulation (GDPR)](https://gdpr-info.eu/), and [California Consumer Protection Act (CCPA)](https://oag.ca.gov/privacy/ccpa). Third-party auditing supports cloud ML governance. +Organizations institute internal governance for cloud ML, such as ethics boards, audits, and model risk management. External governance also plays a significant role in ensuring accountability and fairness. We have already introduced the [General Data Protection Regulation (GDPR)](https://gdpr-info.eu/), which sets stringent requirements for data protection and transparency. However, it is not the only framework guiding responsible AI practices. The [AI Bill of Rights](https://www.whitehouse.gov/ostp/ai-bill-of-rights/) establishes principles for ethical AI use in the United States, and the [California Consumer Protection Act (CCPA)](https://oag.ca.gov/privacy/ccpa) focuses on safeguarding consumer data privacy within California. Third-party audits further bolster cloud ML governance by providing external oversight. Edge ML is more decentralized, requiring responsible self-governance by developers and companies deploying models locally. Industry associations coordinate governance across edge ML vendors, and open software helps align incentives for ethical edge ML. @@ -186,7 +186,6 @@ Extreme decentralization and complexity make external governance infeasible with | | like GDPR or CCPA are feasible | developers and stakeholders | and cryptographic assurances | +------------------------+--------------------------------------+------------------------------------+--------------------------------+ - : Comparison of key principles in Cloud ML, Edge ML, and TinyML. {#tbl-ml-principles-comparison .striped .hover} @@ -194,25 +193,25 @@ Extreme decentralization and complexity make external governance infeasible with ### Detecting and Mitigating Bias -A large body of work has demonstrated that machine learning models can exhibit bias, from underperforming people of a certain identity to making decisions that limit groups' access to important resources [@buolamwini2018genderShades]. +Machine learning models, like any complex system, can sometimes exhibit biases in their predictions. 
These biases may manifest in underperformance for specific groups or in decisions that inadvertently restrict access to certain opportunities or resources [@buolamwini2018genderShades]. Understanding and addressing these biases is critical, especially as machine learning systems are increasingly used in sensitive domains like lending, healthcare, and criminal justice. -Ensuring fair and equitable treatment for all groups affected by machine learning systems is crucial as these models increasingly impact people's lives in areas like lending, healthcare, and criminal justice. We typically evaluate model fairness by considering "subgroup attributes" unrelated to the prediction task that capture identities like race, gender, or religion. For example, in a loan default prediction model, subgroups could include race, gender, or religion. When models are trained naively to maximize accuracy, they often ignore subgroup performance. However, this can negatively impact marginalized communities. +To evaluate and address these issues, fairness in machine learning is typically assessed by analyzing "subgroup attributes," which are characteristics unrelated to the prediction task, such as geographic location, age group, income level, race, gender, or religion. For example, in a loan default prediction model, subgroups could include race, gender, or religion. When models are trained with the sole objective of maximizing accuracy, they may overlook performance differences across these subgroups, potentially resulting in biased or inconsistent outcomes. -To illustrate, imagine a model predicting loan repayment where the plusses (+'s) represent repayment and the circles (O's) represent default, as shown in @fig-fairness-example. The optimal accuracy would be correctly classifying all of Group A while misclassifying some of Group B's creditworthy applicants as defaults. If positive classifications allow access loans, Group A would receive many more loans---which would naturally result in a biased outcome. +This concept is illustrated in @fig-fairness-example, which visualizes the performance of a machine learning model predicting loan repayment for two subgroups, Subgroup A (blue) and Subgroup B (red). Each individual in the dataset is represented by a symbol: plusses (+) indicate individuals who will repay their loans (true positives), while circles (O) indicate individuals who will default on their loans (true negatives). The model’s objective is to correctly classify these individuals into repayers and defaulters. -![Fairness and accuracy.](images/png/fairness_cartoon.png){#fig-fairness-example} +To evaluate performance, two dotted lines are shown, representing the thresholds at which the model achieves acceptable accuracy for each subgroup. For Subgroup A, the threshold needs to be set at 81.25% accuracy (the second dotted line) to correctly classify all repayers (plusses). However, using this same threshold for Subgroup B would result in misclassifications, as some repayers in Subgroup B would incorrectly fall below this threshold and be classified as defaulters. For Subgroup B, a lower threshold of 75% accuracy (the first dotted line) is necessary to correctly classify its repayers. However, applying this lower threshold to Subgroup A would result in misclassifications for that group. This illustrates how the model performs unequally across the two subgroups, with each requiring a different threshold to maximize their true positive rates. 
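The subgroup effect described above can be reproduced with a small synthetic experiment. The sketch below is illustrative only: the score distributions, group sizes, and thresholds are invented and do not correspond to the data behind @fig-fairness-example. A single shared threshold yields very different true positive rates for the two subgroups, while matching those rates requires subgroup-specific thresholds.

```python
# Illustrative sketch: one shared decision threshold treats two subgroups
# unequally when their score distributions differ.
import numpy as np

rng = np.random.default_rng(42)

def make_group(n, repayer_mean):
    # Label 1 = will repay, label 0 = will default; scores are model outputs.
    y = rng.integers(0, 2, size=n)
    scores = np.where(y == 1,
                      rng.normal(repayer_mean, 0.1, size=n),   # repayers
                      rng.normal(0.35, 0.1, size=n))           # defaulters
    return scores, y

scores_a, y_a = make_group(1000, repayer_mean=0.75)  # Subgroup A scores higher
scores_b, y_b = make_group(1000, repayer_mean=0.60)  # Subgroup B scores lower

def tpr(scores, y, threshold):
    approved = scores >= threshold
    return (approved & (y == 1)).sum() / (y == 1).sum()

shared = 0.55
print("Shared threshold TPR  A:", round(tpr(scores_a, y_a, shared), 2),
      " B:", round(tpr(scores_b, y_b, shared), 2))

def threshold_for_tpr(scores, y, target=0.95):
    # Largest threshold at which this subgroup still reaches the target TPR.
    for t in np.sort(np.unique(scores))[::-1]:
        if tpr(scores, y, t) >= target:
            return float(t)
    return float(scores.min())

print("Per-group thresholds  A:", round(threshold_for_tpr(scores_a, y_a), 2),
      " B:", round(threshold_for_tpr(scores_b, y_b), 2))
```

Equalizing these per-group true positive rates is what the equality of opportunity criterion introduced later in this section formalizes.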
-Alternatively, correcting the biases against Group B would likely increase "false positives" and reduce accuracy for Group A. Or, we could train separate models focused on maximizing true positives for each group. However, this would require explicitly using sensitive attributes like race in the decision process. +![Illustrates the trade-off in setting classification thresholds for two subgroups (A and B) in a loan repayment model. Plusses (+) represent true positives (repayers), and circles (O) represent true negatives (defaulters). Different thresholds (75% for B and 81.25% for A) maximize subgroup accuracy but reveal fairness challenges.](images/png/fairness_cartoon.png){#fig-fairness-example} -As we see, there are inherent tensions around priorities like accuracy versus subgroup fairness and whether to explicitly account for protected classes. Reasonable people can disagree on the appropriate tradeoffs. Constraints around costs and implementation options further complicate matters. Overall, ensuring the fair and ethical use of machine learning involves navigating these complex challenges. +The disparity in required thresholds highlights the challenge of achieving fairness in model predictions. If positive classifications lead to loan approvals, individuals in Subgroup B would be disadvantaged unless the threshold is adjusted specifically for their subgroup. However, adjusting thresholds introduces trade-offs between group-level accuracy and fairness, demonstrating the inherent tension in optimizing for these objectives in machine learning systems. -Thus, the fairness literature has proposed three main _fairness metrics_ for quantifying how fair a model performs over a dataset [@hardt2016equality]. Given a model h and a dataset D consisting of (x,y,s) samples, where x is the data features, y is the label, and s is the subgroup attribute, and we assume there are simply two subgroups a and b, we can define the following. +Thus, the fairness literature has proposed three main _fairness metrics_ for quantifying how fair a model performs over a dataset [@hardt2016equality]. Given a model $h$ and a dataset $D$ consisting of $(x, y, s)$ samples, where $x$ is the data features, $y$ is the label, and $s$ is the subgroup attribute, and we assume there are simply two subgroups $a$ and $b$, we can define the following: -1. **Demographic Parity** asks how accurate a model is for each subgroup. In other words, P(h(X) = Y S = a) = P(h(X) = Y S = b) +1. **Demographic Parity** asks how accurate a model is for each subgroup. In other words, $P(h(X) = Y \mid S = a) = P(h(X) = Y \mid S = b)$. -2. **Equalized Odds** asks how precise a model is on positive and negative samples for each subgroup. P(h(X) = y S = a, Y = y) = P(h(X) = y S = b, Y = y) +2. **Equalized Odds** asks how precise a model is on positive and negative samples for each subgroup. $P(h(X) = y \mid S = a, Y = y) = P(h(X) = y \mid S = b, Y = y)$. -3. **Equality of Opportunity** is a special case of equalized odds that only asks how precise a model is on positive samples. This is relevant in cases such as resource allocation, where we care about how positive (i.e., resource-allocated) labels are distributed across groups. For example, we care that an equal proportion of loans are given to both men and women. P(h(X) = 1 S = a, Y = 1) = P(h(X) = 1 S = b, Y = 1) +3. **Equality of Opportunity** is a special case of equalized odds that only asks how precise a model is on positive samples. 
This is relevant in cases such as resource allocation, where we care about how positive (i.e., resource-allocated) labels are distributed across groups. For example, we care that an equal proportion of loans are given to both men and women. $P(h(X) = 1 \mid S = a, Y = 1) = P(h(X) = 1 \mid S = b, Y = 1)$. Note: These definitions often take a narrow view when considering binary comparisons between two subgroups. Another thread of fair machine learning research focusing on _multicalibration_ and _multiaccuracy_ considers the interactions between an arbitrary number of identities, acknowledging the inherent intersectionality of individual identities in the real world [@hebert2018multicalibration]. From cc8c22e28fd573ad6bfa6400884050f72fba26ba Mon Sep 17 00:00:00 2001 From: jasonjabbour Date: Sat, 16 Nov 2024 21:10:40 -0500 Subject: [PATCH 4/6] summarizing policies listed in chapter --- .../core/responsible_ai/responsible_ai.qmd | 23 +++++++++++-------- 1 file changed, 13 insertions(+), 10 deletions(-) diff --git a/contents/core/responsible_ai/responsible_ai.qmd b/contents/core/responsible_ai/responsible_ai.qmd index 7d28cd00b..2f991998e 100644 --- a/contents/core/responsible_ai/responsible_ai.qmd +++ b/contents/core/responsible_ai/responsible_ai.qmd @@ -263,9 +263,9 @@ With ML devices personalized to individual users and then deployed to remote edg Initial unlearning approaches faced limitations in this context. Given the resource constraints, retrieving models from scratch on the device to forget data points proves inefficient or even impossible. Fully retraining also requires retaining all the original training data on the device, which brings its own security and privacy risks. Common machine unlearning techniques [@bourtoule2021machine] for remote embedded ML systems fail to enable responsive, secure data removal. -However, newer methods show promise in modifying models to approximately forget data [?] without full retraining. While the accuracy loss from avoiding full rebuilds is modest, guaranteeing data privacy should still be the priority when handling sensitive user information ethically. Even slight exposure to private data can violate user trust. As ML systems become deeply personalized, efficiency and privacy must be enabled from the start---not afterthoughts. +However, newer methods show promise in modifying models to approximately forget data without full retraining. While the accuracy loss from avoiding full rebuilds is modest, guaranteeing data privacy should still be the priority when handling sensitive user information ethically. Even slight exposure to private data can violate user trust. As ML systems become deeply personalized, efficiency and privacy must be enabled from the start---not afterthoughts. -Recent policy discussions which include the [European Union's General Data](https://gdpr-info.eu), [Protection Regulation (GDPR)](https://gdpr-info.eu), the [California Consumer Privacy Act (CCPA)](https://oag.ca.gov/privacy/ccpa), the [Act on the Protection of Personal Information (APPI)](https://www.dataguidance.com/notes/japan-data-protection-overview), and Canada's proposed [Consumer Privacy Protection Act (CPPA)](https://blog.didomi.io/en-us/canada-data-privacy-law), require the deletion of private information. These policies, coupled with AI incidents like Stable Diffusion memorizing artist data, have underscored the ethical need for users to delete their data from models after training. 
+Global privacy regulations, such as the well-established [GDPR](https://gdpr-info.eu) in the European Union, the [CCPA](https://oag.ca.gov/privacy/ccpa) in California, and newer proposals like Canada’s [CPPA](https://blog.didomi.io/en-us/canada-data-privacy-law) and Japan’s [APPI](https://www.dataguidance.com/notes/japan-data-protection-overview), emphasize the right to delete personal data. These policies, alongside high-profile AI incidents such as Stable Diffusion memorizing artist data, have highlighted the ethical imperative for models to allow users to delete their data even after training. The right to remove data arises from privacy concerns around corporations or adversaries misusing sensitive user information. Machine unlearning refers to removing the influence of specific points from an already-trained model. Naively, this involves full retraining without the deleted data. However, connectivity constraints often make retraining infeasible for ML systems personalized and deployed to remote edges. If a smart speaker learns from private home conversations, retaining access to delete that data is important. @@ -349,19 +349,22 @@ To ensure that models keep up to date with such changes in the real world, devel ### Organizational and Cultural Structures -While innovation and regulation are often seen as having competing interests, many countries have found it necessary to provide oversight as AI systems expand into more sectors. As shown in in @fig-human-centered-ai, this oversight has become crucial as these systems continue permeating various industries and impacting people's lives (see [Human-Centered AI, Chapter 8 "Government Interventions and Regulations"](https://academic-oup-com.ezp-prod1.hul.harvard.edu/book/41126/chapter/350465542). +While innovation and regulation are often seen as having competing interests, many countries have found it necessary to provide oversight as AI systems expand into more sectors. As shown in in @fig-human-centered-ai, this oversight has become crucial as these systems continue permeating various industries and impacting people's lives. Further discussion of this topic can be found in [Human-Centered AI, Chapter 22 "Government Interventions and Regulations"](https://academic-oup-com.ezp-prod1.hul.harvard.edu/book/41126/chapter/350465542). ![How various groups impact human-centered AI. Source: @schneiderman2020.](images/png/human_centered_ai.png){#fig-human-centered-ai} -Among these are: +Throughout this chapter, we have touched on several key policies aimed at guiding responsible AI development and deployment. Below is a summary of these policies, alongside additional noteworthy frameworks that reflect a global push for transparency in AI systems: -* Canada's [Responsible Use of Artificial Intelligence](https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai.html) +* The European Union's [General Data Protection Regulation (GDPR)](https://gdpr-info.eu/) mandates transparency and data protection measures for AI systems handling personal data. +* The [AI Bill of Rights](https://www.whitehouse.gov/ostp/ai-bill-of-rights/) outlines principles for ethical AI use in the United States, emphasizing fairness, privacy, and accountability. +* The [California Consumer Privacy Act (CCPA)](https://oag.ca.gov/privacy/ccpa) protects consumer data and holds organizations accountable for data misuse. 
+* Canada’s [Responsible Use of Artificial Intelligence](https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai.html) outlines best practices for ethical AI deployment. +* Japan’s [Act on the Protection of Personal Information (APPI)](https://www.dataguidance.com/notes/japan-data-protection-overview) establishes guidelines for handling personal data in AI systems. +* Canada’s proposed [Consumer Privacy Protection Act (CPPA)](https://blog.didomi.io/en-us/canada-data-privacy-law) aims to strengthen privacy protections in digital ecosystems. +* The European Commission’s [White Paper on Artificial Intelligence: A European Approach to Excellence and Trust](https://commission.europa.eu/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en) emphasizes ethical AI development alongside innovation. +* The UK’s Information Commissioner’s Office and Alan Turing Institute’s [Guidance on Explaining AI Decisions](https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/explaining-decisions-made-with-artificial-intelligence) provides recommendations for increasing AI transparency. -* The European Union's [General Data Protection Regulation (GDPR)](https://gdpr-info.eu/) - -* The European Commission's [White Paper on Artificial Intelligence: a European approach to excellence and trust](https://commission.europa.eu/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en) - -* The UK's Information Commissioner's Office and Alan Turing Institute's [Consultation on Explaining AI Decisions Guidance](https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/explaining-decisions-made-with-artificial-intelligence) co-badged guidance by the individuals affected by them. +These policies highlight an ongoing global effort to balance innovation with accountability and ensure that AI systems are developed and deployed responsibly. ### Obtaining Quality and Representative Data From 863156eff989fe8b5db805fd967a3de3c16109fc Mon Sep 17 00:00:00 2001 From: Vijay Janapa Reddi Date: Sat, 16 Nov 2024 21:39:17 -0500 Subject: [PATCH 5/6] Formatting fixes --- .../core/responsible_ai/responsible_ai.qmd | 18 ++++++++---------- 1 file changed, 8 insertions(+), 10 deletions(-) diff --git a/contents/core/responsible_ai/responsible_ai.qmd b/contents/core/responsible_ai/responsible_ai.qmd index 2f991998e..3f40fbf67 100644 --- a/contents/core/responsible_ai/responsible_ai.qmd +++ b/contents/core/responsible_ai/responsible_ai.qmd @@ -62,7 +62,7 @@ Putting these principles into practice involves technical techniques, corporate Machine learning models are often criticized as mysterious "black boxes" - opaque systems where it's unclear how they arrived at particular predictions or decisions. For example, an AI system called [COMPAS](https://doc.wi.gov/Pages/AboutDOC/COMPAS.aspx) used to assess criminal recidivism risk in the US was found to be racially biased against black defendants. Still, the opacity of the algorithm made it difficult to understand and fix the problem. This lack of transparency can obscure biases, errors, and deficiencies. -Explaining model behaviors helps engender trust from the public and domain experts and enables identifying issues to address. Interpretability techniques play a key role in this process. 
For instance, [LIME](https://homes.cs.washington.edu/~marcotcr/blog/lime/) (Local Interpretable Model-Agnostic Explanations) highlights how individual input features contribute to a specific prediction, while Shapley values quantify each feature’s contribution to a model’s output based on cooperative game theory. Saliency maps, commonly used in image-based models, visually highlight areas of an image that most influenced the model’s decision. These tools empower users to understand model logic. +Explaining model behaviors helps engender trust from the public and domain experts and enables identifying issues to address. Interpretability techniques play a key role in this process. For instance, [LIME](https://homes.cs.washington.edu/~marcotcr/blog/lime/) (Local Interpretable Model-Agnostic Explanations) highlights how individual input features contribute to a specific prediction, while Shapley values quantify each feature's contribution to a model's output based on cooperative game theory. Saliency maps, commonly used in image-based models, visually highlight areas of an image that most influenced the model's decision. These tools empower users to understand model logic. Beyond practical benefits, transparency is increasingly required by law. Regulations like the European Union's General Data Protection Regulation ([GDPR](https://gdpr.eu/tag/gdpr/)) mandate that organizations provide explanations for certain automated decisions, especially when they significantly impact individuals. This makes explainability not just a best practice but a legal necessity in some contexts. Together, transparency and explainability form critical pillars of building responsible and trustworthy AI systems. @@ -124,7 +124,6 @@ Edge ML relies on limited on-device data, making analyzing biases across diverse TinyML poses unique challenges for fairness with highly dispersed specialized hardware and minimal training data. Bias testing is difficult across diverse devices. Collecting representative data from many devices to mitigate bias has scale and privacy hurdles. [DARPA's Assured Neuro Symbolic Learning and Reasoning (ANSR)](https://www.darpa.mil/news-events/2022-06-03) efforts are geared toward developing fairness techniques given extreme hardware constraints. - ### Privacy For cloud ML, vast amounts of user data are concentrated in the cloud, creating risks of exposure through breaches. Differential privacy techniques add noise to cloud data to preserve privacy. Strict access controls and encryption protect cloud data at rest and in transit. @@ -188,7 +187,6 @@ Extreme decentralization and complexity make external governance infeasible with : Comparison of key principles in Cloud ML, Edge ML, and TinyML. {#tbl-ml-principles-comparison .striped .hover} - ## Technical Aspects ### Detecting and Mitigating Bias @@ -197,7 +195,7 @@ Machine learning models, like any complex system, can sometimes exhibit biases i To evaluate and address these issues, fairness in machine learning is typically assessed by analyzing "subgroup attributes," which are characteristics unrelated to the prediction task, such as geographic location, age group, income level, race, gender, or religion. For example, in a loan default prediction model, subgroups could include race, gender, or religion. When models are trained with the sole objective of maximizing accuracy, they may overlook performance differences across these subgroups, potentially resulting in biased or inconsistent outcomes. 
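
A minimal sketch of this kind of subgroup check follows. It is illustrative only: the data are synthetic NumPy draws in which Subgroup B's scores are assumed to run systematically lower than Subgroup A's, the sample sizes and score distributions are arbitrary choices, the 81.25% and 75% figures from the surrounding discussion are reused purely as example score cutoffs, and the helper functions `make_subgroup` and `true_positive_rate` are written for this sketch rather than taken from any library or from the chapter.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 1_000  # individuals per subgroup (arbitrary)

def make_subgroup(repay_rate, repay_score_mean, default_score_mean, score_std=0.08):
    """Generate repayment labels (True = repays) and model scores for one subgroup."""
    repays = rng.random(n) < repay_rate
    scores = np.where(
        repays,
        rng.normal(repay_score_mean, score_std, n),
        rng.normal(default_score_mean, score_std, n),
    )
    return repays, scores

# Assumed miscalibration: Subgroup B's scores run lower than Subgroup A's
# even though both groups repay at the same underlying rate.
repays_a, scores_a = make_subgroup(0.8, repay_score_mean=0.85, default_score_mean=0.60)
repays_b, scores_b = make_subgroup(0.8, repay_score_mean=0.78, default_score_mean=0.53)

def true_positive_rate(repays, scores, threshold):
    """Fraction of actual repayers approved (scored at or above the threshold)."""
    approved = scores >= threshold
    return (approved & repays).sum() / repays.sum()

# Evaluate the two example cutoffs discussed in the text.
for threshold in (0.8125, 0.75):
    tpr_a = true_positive_rate(repays_a, scores_a, threshold)
    tpr_b = true_positive_rate(repays_b, scores_b, threshold)
    print(f"threshold={threshold:.4f}  TPR A={tpr_a:.2f}  TPR B={tpr_b:.2f}")
```

Under these assumptions, any single cutoff approves a noticeably smaller share of Subgroup B's actual repayers than Subgroup A's; that gap in per-group true positive rates is the kind of disparity that subgroup-aware evaluation and the fairness metrics discussed later in this section are meant to surface.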
-This concept is illustrated in @fig-fairness-example, which visualizes the performance of a machine learning model predicting loan repayment for two subgroups, Subgroup A (blue) and Subgroup B (red). Each individual in the dataset is represented by a symbol: plusses (+) indicate individuals who will repay their loans (true positives), while circles (O) indicate individuals who will default on their loans (true negatives). The model’s objective is to correctly classify these individuals into repayers and defaulters. +This concept is illustrated in @fig-fairness-example, which visualizes the performance of a machine learning model predicting loan repayment for two subgroups, Subgroup A (blue) and Subgroup B (red). Each individual in the dataset is represented by a symbol: plusses (+) indicate individuals who will repay their loans (true positives), while circles (O) indicate individuals who will default on their loans (true negatives). The model's objective is to correctly classify these individuals into repayers and defaulters. To evaluate performance, two dotted lines are shown, representing the thresholds at which the model achieves acceptable accuracy for each subgroup. For Subgroup A, the threshold needs to be set at 81.25% accuracy (the second dotted line) to correctly classify all repayers (plusses). However, using this same threshold for Subgroup B would result in misclassifications, as some repayers in Subgroup B would incorrectly fall below this threshold and be classified as defaulters. For Subgroup B, a lower threshold of 75% accuracy (the first dotted line) is necessary to correctly classify its repayers. However, applying this lower threshold to Subgroup A would result in misclassifications for that group. This illustrates how the model performs unequally across the two subgroups, with each requiring a different threshold to maximize their true positive rates. @@ -265,7 +263,7 @@ Initial unlearning approaches faced limitations in this context. Given the resou However, newer methods show promise in modifying models to approximately forget data without full retraining. While the accuracy loss from avoiding full rebuilds is modest, guaranteeing data privacy should still be the priority when handling sensitive user information ethically. Even slight exposure to private data can violate user trust. As ML systems become deeply personalized, efficiency and privacy must be enabled from the start---not afterthoughts. -Global privacy regulations, such as the well-established [GDPR](https://gdpr-info.eu) in the European Union, the [CCPA](https://oag.ca.gov/privacy/ccpa) in California, and newer proposals like Canada’s [CPPA](https://blog.didomi.io/en-us/canada-data-privacy-law) and Japan’s [APPI](https://www.dataguidance.com/notes/japan-data-protection-overview), emphasize the right to delete personal data. These policies, alongside high-profile AI incidents such as Stable Diffusion memorizing artist data, have highlighted the ethical imperative for models to allow users to delete their data even after training. +Global privacy regulations, such as the well-established [GDPR](https://gdpr-info.eu) in the European Union, the [CCPA](https://oag.ca.gov/privacy/ccpa) in California, and newer proposals like Canada's [CPPA](https://blog.didomi.io/en-us/canada-data-privacy-law) and Japan's [APPI](https://www.dataguidance.com/notes/japan-data-protection-overview), emphasize the right to delete personal data. 
These policies, alongside high-profile AI incidents such as Stable Diffusion memorizing artist data, have highlighted the ethical imperative for models to allow users to delete their data even after training. The right to remove data arises from privacy concerns around corporations or adversaries misusing sensitive user information. Machine unlearning refers to removing the influence of specific points from an already-trained model. Naively, this involves full retraining without the deleted data. However, connectivity constraints often make retraining infeasible for ML systems personalized and deployed to remote edges. If a smart speaker learns from private home conversations, retaining access to delete that data is important. @@ -358,11 +356,11 @@ Throughout this chapter, we have touched on several key policies aimed at guidin * The European Union's [General Data Protection Regulation (GDPR)](https://gdpr-info.eu/) mandates transparency and data protection measures for AI systems handling personal data. * The [AI Bill of Rights](https://www.whitehouse.gov/ostp/ai-bill-of-rights/) outlines principles for ethical AI use in the United States, emphasizing fairness, privacy, and accountability. * The [California Consumer Privacy Act (CCPA)](https://oag.ca.gov/privacy/ccpa) protects consumer data and holds organizations accountable for data misuse. -* Canada’s [Responsible Use of Artificial Intelligence](https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai.html) outlines best practices for ethical AI deployment. -* Japan’s [Act on the Protection of Personal Information (APPI)](https://www.dataguidance.com/notes/japan-data-protection-overview) establishes guidelines for handling personal data in AI systems. -* Canada’s proposed [Consumer Privacy Protection Act (CPPA)](https://blog.didomi.io/en-us/canada-data-privacy-law) aims to strengthen privacy protections in digital ecosystems. -* The European Commission’s [White Paper on Artificial Intelligence: A European Approach to Excellence and Trust](https://commission.europa.eu/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en) emphasizes ethical AI development alongside innovation. -* The UK’s Information Commissioner’s Office and Alan Turing Institute’s [Guidance on Explaining AI Decisions](https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/explaining-decisions-made-with-artificial-intelligence) provides recommendations for increasing AI transparency. +* Canada's [Responsible Use of Artificial Intelligence](https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai.html) outlines best practices for ethical AI deployment. +* Japan's [Act on the Protection of Personal Information (APPI)](https://www.dataguidance.com/notes/japan-data-protection-overview) establishes guidelines for handling personal data in AI systems. +* Canada's proposed [Consumer Privacy Protection Act (CPPA)](https://blog.didomi.io/en-us/canada-data-privacy-law) aims to strengthen privacy protections in digital ecosystems. +* The European Commission's [White Paper on Artificial Intelligence: A European Approach to Excellence and Trust](https://commission.europa.eu/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en) emphasizes ethical AI development alongside innovation. 
+* The UK's Information Commissioner's Office and Alan Turing Institute's [Guidance on Explaining AI Decisions](https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/explaining-decisions-made-with-artificial-intelligence) provides recommendations for increasing AI transparency. These policies highlight an ongoing global effort to balance innovation with accountability and ensure that AI systems are developed and deployed responsibly. From af5e625c264f532ad3cd64a51f63816b1673c41c Mon Sep 17 00:00:00 2001 From: Vijay Janapa Reddi Date: Sat, 16 Nov 2024 21:42:42 -0500 Subject: [PATCH 6/6] Figure placement --- contents/core/responsible_ai/responsible_ai.qmd | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/contents/core/responsible_ai/responsible_ai.qmd b/contents/core/responsible_ai/responsible_ai.qmd index 3f40fbf67..b85e5ee62 100644 --- a/contents/core/responsible_ai/responsible_ai.qmd +++ b/contents/core/responsible_ai/responsible_ai.qmd @@ -197,10 +197,10 @@ To evaluate and address these issues, fairness in machine learning is typically This concept is illustrated in @fig-fairness-example, which visualizes the performance of a machine learning model predicting loan repayment for two subgroups, Subgroup A (blue) and Subgroup B (red). Each individual in the dataset is represented by a symbol: plusses (+) indicate individuals who will repay their loans (true positives), while circles (O) indicate individuals who will default on their loans (true negatives). The model's objective is to correctly classify these individuals into repayers and defaulters. -To evaluate performance, two dotted lines are shown, representing the thresholds at which the model achieves acceptable accuracy for each subgroup. For Subgroup A, the threshold needs to be set at 81.25% accuracy (the second dotted line) to correctly classify all repayers (plusses). However, using this same threshold for Subgroup B would result in misclassifications, as some repayers in Subgroup B would incorrectly fall below this threshold and be classified as defaulters. For Subgroup B, a lower threshold of 75% accuracy (the first dotted line) is necessary to correctly classify its repayers. However, applying this lower threshold to Subgroup A would result in misclassifications for that group. This illustrates how the model performs unequally across the two subgroups, with each requiring a different threshold to maximize their true positive rates. - ![Illustrates the trade-off in setting classification thresholds for two subgroups (A and B) in a loan repayment model. Plusses (+) represent true positives (repayers), and circles (O) represent true negatives (defaulters). Different thresholds (75% for B and 81.25% for A) maximize subgroup accuracy but reveal fairness challenges.](images/png/fairness_cartoon.png){#fig-fairness-example} +To evaluate performance, two dotted lines are shown, representing the thresholds at which the model achieves acceptable accuracy for each subgroup. For Subgroup A, the threshold needs to be set at 81.25% accuracy (the second dotted line) to correctly classify all repayers (plusses). However, using this same threshold for Subgroup B would result in misclassifications, as some repayers in Subgroup B would incorrectly fall below this threshold and be classified as defaulters. For Subgroup B, a lower threshold of 75% accuracy (the first dotted line) is necessary to correctly classify its repayers. 
However, applying this lower threshold to Subgroup A would result in misclassifications for that group. This illustrates how the model performs unequally across the two subgroups, with each requiring a different threshold to maximize their true positive rates. + The disparity in required thresholds highlights the challenge of achieving fairness in model predictions. If positive classifications lead to loan approvals, individuals in Subgroup B would be disadvantaged unless the threshold is adjusted specifically for their subgroup. However, adjusting thresholds introduces trade-offs between group-level accuracy and fairness, demonstrating the inherent tension in optimizing for these objectives in machine learning systems. Thus, the fairness literature has proposed three main _fairness metrics_ for quantifying how fair a model performs over a dataset [@hardt2016equality]. Given a model $h$ and a dataset $D$ consisting of $(x, y, s)$ samples, where $x$ is the data features, $y$ is the label, and $s$ is the subgroup attribute, and we assume there are simply two subgroups $a$ and $b$, we can define the following: