Commit

Merge 8b9bf6e into 4fc2f4f
monicagerber authored Dec 22, 2023
2 parents 4fc2f4f + 8b9bf6e commit aa55008
Showing 2 changed files with 90 additions and 11 deletions.
24 changes: 13 additions & 11 deletions 04b-AI_Policy-ai_regs_and_laws.Rmd
Some individual industries have already begun adopting policies about generative AI.
```{r, out.width = "100%", echo = FALSE}
# Embed the lecture slide for this section from the course's Google Slides deck
ottrpal::include_slide("https://docs.google.com/presentation/d/1PSprPB9RrNJh_01BXAcj9jn6NK2XzWx74vD_YmQyliM/edit#slide=id.g2a84ae71e54_0_67")
```

# Case Studies

AI regulations and policies are continuing to evolve as people adapt to the use of AI. Let's look at some real-life examples.

## Education

For students and educators, generative AI's capacity for writing, problem solving, and research has upended the goals and evaluations of our education system. For instance, with minimal prompting, ChatGPT 4 has generated college-level essays that earned passing grades at Harvard across various subjects (@slowboring). Many educational institutions reacted with a range of policies and adaptations: first to protect the existing educational environment, and then to consider how to adapt to generative AI's capabilities.

In the first few months after ChatGPT was released, many schools and universities restricted the use of AI in education. The two largest public school systems in the United States, New York City Public Schools and Los Angeles Public Schools, banned the use of ChatGPT in any school work, declaring that any use of ChatGPT counted as plagiarism (@nytimes-technology1). Many universities followed with similar policies. However, educators soon realized that most students embraced generative AI for most assignments despite the bans (@chronicle-higher-ed, @washingtonpost-opinions). Furthermore, efforts to bar students from AI, such as using AI detection software or blocking AI on school networks, created disparities among students. Teachers noticed that AI detection software was biased against the writing of non-native English speakers (@washingtonpost-opinions), and children from wealthy families could still access AI through personal smartphones or computers (@nytimes-technology1).

With these lessons, some educational systems have started to embrace the role of AI in students' lives and are developing various less-restrictive policies. New York City Public Schools and Los Angeles Public Schools quietly rolled back their bans, as did many universities (@nytimes-technology1). Groups of educators have come together to offer guidelines and resources on how to teach with AI, such as the [Mississippi AI Institute](https://mississippi.ai/institute/), [MIT's Day of AI curriculum](https://raise.mit.edu/daily/), and [Gettysburg College's Center for Creative Teaching and Learning](https://genai.sites.gettysburg.edu/).

Each educational institution and classroom is adapting to AI differently. The Mississippi AI Institute suggests some common questions to consider (@mississippi-ai-blog):

- _How are we inviting students to demonstrate their knowledge, and is writing the only (or the best) way to do that?_ For instance, some universities have encouraged the use of in-class assignments, handwritten papers, group work and oral exams (@nytimes-technology2).

- _What are our (new) assignment goals? And (how) might generative AI help or hinder students in reaching those goals?_ Some educators want to use AI to help students get over early brainstorming hurdles, and want students to focus on deeper critical thinking problems (@washingtonpost-opinions). Many educators have started to develop AI literacy and "critical computing" curricula to teach students how to use AI effectively and critically (@nytimes-technology3).

- _If we’re asking students to do something that AI can do with equal facility, is it still worth asking students to do? And if so, why?_ Educators will need to think about which aspects of their lesson goals will be automated in the future, and which critical and creative skills students need to hone.

- _If we think students will use AI to circumvent learning, why would they want to do that? How can we create conditions that motivate students to learn for themselves?_
Educators have started to teach young students the limits of AI creativity and the kinds of bias embedded in AI models, which has led students to think more critically about their use of AI (@nytimes-technology3).

- _What structural conditions would need to change in order for AI to empower, rather than threaten, teachers and learners? How can we create those conditions?_ Some teachers have started to actively learn how their students use AI, and are using AI to assist with writing their teaching curriculum (@nytimes-technology1).


## Healthcare

The health care industry is one where the speed of technology development has led to gaps in regulation, and the US recently released an Executive Order on creating [healthcare-specific AI policies](https://www.whitehouse.gov/briefing-room/blog/2023/12/14/delivering-on-the-promise-of-ai-to-improve-health-outcomes/).

The U.S. Food and Drug Administration (FDA) regulates AI-enabled medical devices and software used in disease prevention, diagnosis, and treatment. However, there are serious concerns about the adequacy of current regulation, and many other AI-enabled technologies that may have clinical applications fall outside the scope of FDA regulation (@habib2023; @ama2023). Other federal agencies, such as the Health and Human Services Office for Civil Rights, have important roles in the oversight of some aspects of AI use in health care, but their authority is limited. Additionally, existing federal and state laws and regulations, such as the Health Insurance Portability and Accountability Act (HIPAA), impact the use and development of AI. This patchwork landscape of federal and state authority and existing laws has led the American Medical Association (AMA) to advocate for a “whole government” approach to implement a comprehensive set of policies to ensure that “the benefits of AI in health care are maximized while potential harms are minimized” (@healthcareradio2023).

The AMA and health care leaders have highlighted the importance of specialized expertise in the oversight and adoption of AI products in health care delivery and operations. For example, Dr. Nigam Shah and colleagues call for the medical community to take the lead in defining how LLMs are trained and developed:

> By not asking how the intended medical use can shape the training of LLMs and the chatbots or other applications they power, technology companies are deciding what is right for medicine (@nigam2023).
The medical community should actively shape the development of AI-enabled technologies by advocating for clinically informed standards for the training of AI, and for the evaluation of the value of AI in real-world health care settings. At an institutional level, specialized clinical expertise is required to create policies that align AI adoption with standards for health care delivery. In-depth knowledge of the U.S. health insurance system is also required to understand how complexity and lack of standardization in this landscape may impact AI adoption in clinical operations (@schulman2023). In summary, health care leaders and the medical community need to play an active role in the development of new AI regulations and policy.

# VIDEO AI acts, orders, and policies
77 changes: 77 additions & 0 deletions book.bib


@article{habib2023,
author = {Habib, Anand R. and Gross, Cary P.},
title = "{FDA Regulations of AI-Driven Clinical Decision Support Devices Fall Short}",
journal = {JAMA Internal Medicine},
volume = {183},
number = {12},
pages = {1401-1402},
year = {2023},
month = {12},
abstract = "{We are entering a new era of computerized clinical decision support (CDS) tools. Companies are increasingly using artificial intelligence and/or machine learning (AI/ML) to develop new CDS devices, which are defined by the US Food and Drug Administration (FDA) as software used in disease prevention, diagnosis, or treatment. Recognizing the potential implications for clinical practice, the 21st Century Cures Act enjoined the FDA to regulate these new devices.In their case series reported in this issue of JAMA Internal Medicine, Lee and colleagues analyzed the evidence supporting FDA approval of 10 AI/ML CDS devices intended for use in critical care. Their findings are worrisome. Only 2 device authorizations cited peer-reviewed publications, and only 1 outlined a detailed safety risk assessment. No company provided software code to enable independent validation, evaluated clinical efficacy, or assessed whether the use of algorithms exacerbates health disparities.}",
issn = {2168-6106},
doi = {10.1001/jamainternmed.2023.5006},
url = {https://doi.org/10.1001/jamainternmed.2023.5006},
eprint = {https://jamanetwork.com/journals/jamainternalmedicine/articlepdf/2810620/jamainternal\_habib\_2023\_er\_230003\_1701463607.95585.pdf},
}

@misc{healthcareradio2023,
  title = {AMA issues new principles for AI development, deployment \& use},
  url = {https://www.healthcarenowradio.com/ama-issues-new-principles-for-ai-development-deployment-use/},
  journal = {HealthcareNOWradio.com},
  author = {{Industry News}},
  year = {2023},
  month = dec,
}
@misc{ama2023,
title={AMA issues new principles for AI development, deployment \& use},
url={https://www.ama-assn.org/press-center/press-releases/ama-issues-new-principles-ai-development-deployment-use},
journal={American Medical Association},
author={American Medical Association},
year={2023},
month=nov,
language={en}
}
@article{nigam2023,
author = {Shah, Nigam H. and Entwistle, David and Pfeffer, Michael A.},
title = "{Creation and Adoption of Large Language Models in Medicine}",
journal = {JAMA},
volume = {330},
number = {9},
pages = {866-869},
year = {2023},
month = {09},
abstract = "{There is increased interest in and potential benefits from using large language models (LLMs) in medicine. However, by simply wondering how the LLMs and the applications powered by them will reshape medicine instead of getting actively involved, the agency in shaping how these tools can be used in medicine is lost.Applications powered by LLMs are increasingly used to perform medical tasks without the underlying language model being trained on medical records and without verifying their purported benefit in performing those tasks.The creation and use of LLMs in medicine need to be actively shaped by provisioning relevant training data, specifying the desired benefits, and evaluating the benefits via testing in real-world deployments.}",
issn = {0098-7484},
doi = {10.1001/jama.2023.14217},
url = {https://doi.org/10.1001/jama.2023.14217},
eprint = {https://jamanetwork.com/journals/jama/articlepdf/2808296/jama\_shah\_2023\_sc\_230004\_1693922864.71803.pdf},
}

@article{schulman2023,
author = {Schulman, Kevin A. and Nielsen, Jr., Perry Kent and Patel, Kavita},
title = "{AI Alone Will Not Reduce the Administrative Burden of Health Care}",
journal = {JAMA},
volume = {330},
number = {22},
pages = {2159-2160},
year = {2023},
month = {12},
abstract = "{Large language models (LLMs) are some of the most exciting innovations to come from artificial intelligence research. The capacity of this technology is astonishing, and there are multiple different use cases being proposed where LLMs can solve pain points for physicians—everything from assistance with patient portal messages to clinical decision support for chronic care management to compiling clinical summaries. Another often discussed opportunity is to reduce administrative costs such as billing and insurance-related costs in health care. However, before jumping into technology as a solution, considering why the billing process is so challenging in the first place may be a better approach. After all, the prerequisite for a successful LLM application is the presence of “useful patterns” in the data.}",
issn = {0098-7484},
doi = {10.1001/jama.2023.23809},
url = {https://doi.org/10.1001/jama.2023.23809},
eprint = {https://jamanetwork.com/journals/jama/articlepdf/2812255/jama\_schulman\_2023\_vp\_230150\_1701364722.65094.pdf},
}
