Update program.md
gsileno authored Dec 13, 2023
1 parent 6d59bbf commit 7f40323
Showing 1 changed file with 4 additions and 4 deletions.
program.md (8 changes: 4 additions & 4 deletions)
@@ -22,8 +22,8 @@ Download the <a href="https://jurix23.maastrichtlawtech.eu/assets/JURIX2023-work

17:15 - 18:15. **First Keynote Speech** by **[Jaap Hage](https://jurix23.maastrichtlawtech.eu/speakers/#monday-18th-december-2023)**, professor in Legal Theory (Maastricht University).

- <p style='padding-left:10%'><strong>Explainable Legal AI</strong></p>
- <p style='padding-left:10%; font-size:90%'>Abstract: With the increasing popularity of AI based on machine learning, the ideal that AI programs can explain their outputs becomes more difficult to realise. There is no reason why this would be different for legal AI. However natural the demand for explicability may seem, it is not at all obvious what precisely is asked for. There seem to be two kinds of explanation, which can ideally be combined but which in practice do not always go together. The one kind describes the process through which the explanandum came about, physically or – in the law – logically. The other kind is a tool to create understanding in the audience. Psychological research has shown that people are often not capable of explaining their own behaviour in the first way, and that when they explain it in the second way, the explanation may very well be false. This has also been shown to hold for legal decisions. If naturally intelligent lawyers are not always capable of explaining their own decisions – but may be under the illusion that they are – should we then demand from AI legal decision makers that they do what human legal decision makers often cannot do? What can we, under these circumstances, expect from the explanations that AI systems give of their legal decisions? For some, the answer may come as a surprise.</p>
+ <p style='padding-left:10%'><strong><em>Explainable Legal AI</em></strong></p>
+ <p style='padding-left:10%; font-size:90%'>With the increasing popularity of AI based on machine learning, the ideal that AI programs can explain their outputs becomes more difficult to realise. There is no reason why this would be different for legal AI. However natural the demand for explicability may seem, it is not at all obvious what precisely is asked for. There seem to be two kinds of explanation, which can ideally be combined but which in practice do not always go together. The one kind describes the process through which the explanandum came about, physically or – in the law – logically. The other kind is a tool to create understanding in the audience. Psychological research has shown that people are often not capable of explaining their own behaviour in the first way, and that when they explain it in the second way, the explanation may very well be false. This has also been shown to hold for legal decisions. If naturally intelligent lawyers are not always capable of explaining their own decisions – but may be under the illusion that they are – should we then demand from AI legal decision makers that they do what human legal decision makers often cannot do? What can we, under these circumstances, expect from the explanations that AI systems give of their legal decisions? For some, the answer may come as a surprise.</p>

18:15 - 20:30. *Reception*

@@ -80,7 +80,7 @@ Download the <a href="https://jurix23.maastrichtlawtech.eu/assets/JURIX2023-work

13:30 - 14:30. **Second Keynote Speech** by **[Piek Vossen](https://jurix23.maastrichtlawtech.eu/speakers/#tuesday-19th-december-2023)**, professor in Computational Lexicology (VU University Amsterdam), head of the Computational Linguistics & Text Mining Lab.

- <p style='padding-left:10%'><strong>ChatGPT: what it is, what it can do, cannot do and should not do</strong></p>
+ <p style='padding-left:10%'><strong><em>ChatGPT: what it is, what it can do, cannot do and should not do</em></strong></p>
<p style='padding-left:10%; font-size:90%'>OpenAI has set a new standard by making complex AI tools and systems available to the general public through a natural language interface. No need to program complex systems: just ask your question or send your request to ChatGPT. In this presentation, I dive deeper into the workings of ChatGPT to explain what it can do and what it cannot do. Finally, I discuss its potential future as a technology solution: as Artificial General Intelligence or as a natural language interface to technology.</p>

14:30 - 15:45. **Session 3: Mixed session**
@@ -158,7 +158,7 @@ Teaser videos are available [online](http://jurix23.maastrichtlawtech.eu/demos).

13:30 - 14:30. **Third Keynote Speech** by **[Iris van Rooij](https://jurix23.maastrichtlawtech.eu/speakers/#wednesday-20th-december-2023)**, professor in Computational Cognitive Science (Radboud University) and PI at the Donders Institute for Brain, Cognition and Behaviour.

- <p style='padding-left:10%'><strong>There is no AGI on the horizon, and AI cannot replace people’s (legal) thinking and judging</strong></p>
+ <p style='padding-left:10%'><strong><em>There is no AGI on the horizon, and AI cannot replace people’s (legal) thinking and judging</em></strong></p>
<p style='padding-left:10%; font-size:90%'>The contemporary field of AI has taken the theoretical possibility of explaining human cognition as a form of computation to imply the practical feasibility of realising human(-like or -level) cognition in factual computational systems. Yet, creating systems with human(-like or -level) cognition is intrinsically computationally intractable. This means that any factual AI systems created in the short run are at best decoys. When we think these systems capture something deep about ourselves and our thinking, we induce distorted and impoverished images of ourselves and our cognition. This puts us at risk of thinking that our thinking can be replaced by AI and of deskilling our professions. The situation could be remedied by releasing the grip of the currently dominant view on AI and by returning to the idea of AI as a theoretical tool for cognitive science.</p>
<p style='padding-left:10%; font-size:90%'>(based on recent interdisciplinary work published as <a href="https://osf.io/preprints/psyarxiv/4cbuv/">Reclaiming AI as a theoretical tool for cognitive science</a> together with Olivia Guest, Federico Adolfi, Ronald de Haan, Antonina Kolokolova, & Patricia Rich)</p>

