Just type www.artificialethics.it to reach this page,
and write to info@artificialethics.it with suggestions!
The term "Ethics of Artificial Intelligence" refers to the morality of how humans design, construct, use and treat Artificial Intelligence(s) and artificially intelligent beings. It considers both how artificially intelligent beings may be used to harm humans and how they may be used to benefit humans.
In this document you will find a curated list of useful (sometimes awesome) resources for learning the basics of the Ethics of Artificial Intelligence through courses, books, papers, blogs (and blog posts), research centres, journals, conferences and much, much more...
The collection is curated by:
- Marilù Miotto (@marikyu7): Bachelor's in International Studies from Trento University and Master's student in International Security Studies at Charles University in Prague
- Matteo G.P. Flora (@lastknight): Adjunct Professor of Corporate Reputation & Storytelling at the University of Pavia, Lecturer in Big Data and Analytics at Bicocca University in Milan, CEO of The Fool. Hacker.
Contributions are most welcome on the GitHub page or by email at info@artificialethics.it.
- Artificial Intelligence - Philosophy, Ethics, and Impact - The goal of the course is to equip students with the intellectual tools, ethical foundation, and psychological framework to successfully navigate the coming age of intelligent machines.
- Data Science Ethics - MIT - This course, focused specifically on ethics in data science, provides a framework for analysing the ethical concerns that data science raises.
- Responsible Innovation in the Age of AI - IEEE - This course focuses on how traditional philosophical models like utilitarianism, virtue ethics and other key modalities of applied ethics have evolved to embrace specifics of algorithmic tracking and intelligence augmentation of AI.
- Ethics and Law in Data and Analytics - Microsoft - In this course, you'll learn to apply ethical and legal frameworks to initiatives in the data profession. You'll explore practical approaches to data and analytics problems posed by work in Big Data, Data Science, and AI. You'll also investigate applied data methods for ethical and legal work in Analytics and AI.
- The Ethics and Governance of Artificial Intelligence - MIT - This course will pursue a cross-disciplinary investigation of the development and deployment of the opaque complex adaptive systems that are increasingly in public and private use.
- The Economic Advantage of Ethical Design For Business - IEEE - AI/S technology cannot be built in isolation; to get a full sense of its ethical and policy impacts, it is essential to understand the larger context of its implementation, so as to increase societal wellbeing while building business.
- Superintelligence - Nick Bostrom - The book asks the questions: what happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.
- Artificial Intelligence Safety and Security - This book addresses the fundamental problem of AI safety and security. It comprises chapters from leading AI Safety researchers addressing different aspects of the AI control problem as it relates to the development of safe and secure artificial intelligence.
- Life 3.0: Being Human in the Age of Artificial Intelligence - What sort of future do you want? This book empowers you to join what may be the most important conversation of our time. It doesn’t shy away from the full range of viewpoints or from the most controversial issues—from superintelligence to meaning, consciousness and the ultimate physical limits on life in the cosmos.
- Towards a Code of Ethics for Artificial Intelligence - Paula Boddington - The book explores in detail the ethical codes of practice in science, and should be read by everyone with an interest in the future of AI.
- Principled Artificial Intelligence - Is there enough commonality among AI ethics efforts to suggest the emergence of sectoral norms? Where are the most significant points of divergence? This report presents thirty-two sets of principles side by side, enabling comparison between efforts from governments, companies, advocacy groups, and multi-stakeholder initiatives.
- The Ethics of Artificial Intelligence - The possibility of creating thinking machines raises a host of ethical issues. These questions relate both to ensuring that such machines do not harm humans and other morally relevant beings, and to the moral status of the machines themselves.
- Ancient dreams of intelligent machines: 3,000 years of robots. - Past and Present AI Narratives.
- The Value Learning Problem - The paper covers the hypothesised basic drives of AI agents at, or reaching beyond, human-level intelligence.
- Safety Engineering for Artificial General Intelligence - This paper examines the alignment issues pertaining to open-ended, cognitively developing intelligence. More precisely, it discusses the problem of acquiring values aligned with human intentions, interests, and needs.
- AGI Safety Literature Review - A review of the current state of Artificial General Intelligence development.
- What is a Singleton? - This note introduces the concept of a “singleton” and suggests that this concept is useful for formulating and analyzing possible scenarios for the future of humanity.
- A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis - This paper presents a graphical model of major pathways to ASI catastrophe, focusing on ASI created via recursive self-improvement.
- Superintelligence Skepticism as a Political Tool - This paper explores the potential for skepticism about artificial superintelligence to be used as a tool for political ends.
- Countering Superintelligence Misinformation - This paper surveys strategies to counter superintelligence misinformation.
- Will there be superintelligence and would it hate us? - The paper suggests that AI may well, at some point in the future, produce undesirable social effects, but that there is as yet no reason to think these could be on the massive, end-of-civilization scale that Bostrom so confidently predicts.
- The errors, insights and lessons of famous AI predictions – and what they mean for the future - This paper starts by proposing a decomposition schema for classifying AI predictions, then constructs a variety of theoretical tools for analysing, judging and improving them.
- Offensive Realism and the Insecure Structure of the International System - Nick Bostrom has recommended AI development under what he calls the common good principle: “[s]uperintelligence should be developed only for the benefit of all humanity and the service of widely shared ethical ideals.” Proposals for precisely which ideals or guidelines should regulate AI production are often linked to their potential use as weaponry.
- Military AI as a Convergent Goal of Self-Improving AI. - Authors show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. This militarization trend increases global catastrophic risk or even existential risk during AI takeoff.
- Artificial agents and the expanding ethical circle - The paper discusses the realizability and the ethical ramifications of Machine Ethics, from a number of different perspectives: the anthropocentric, infocentric, biocentric and ecocentric ones.
- Superintelligence and the Future of Governance: On Prioritizing the Control Problem at the End of History - The author suggests that the only plausible escape from the security/surveillance conundrum is the creation of a “supersingleton” run by a friendly superintelligence, founded upon a “post-singularity social contract.”
- ChinAI - Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics.
- Nick Bostrom's Home Page - Home page for Nick Bostrom, Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test.
- LessWrong - a community dedicated to improving reasoning and decision-making. It seeks to hold true beliefs and to be effective at accomplishing its goals. More generally, it works to develop and practice the art of human rationality.
- Intro to AI Ethics - Ethical considerations when building and interacting with Artificially Intelligent systems.
- The Hitchhiker’s Guide to AI Ethics - “The Hitchhiker’s Guide to AI Ethics is a must read for anyone interested in the ethics of AI.” also part 2 and part 3.
- An Overview of National AI Strategies - The article summarizes the key policies and goals of states regarding AI. Such policies represent a first application of AI ethics by public entities.
- What is facial recognition - and how sinister is it? - An article by The Guardian that summarizes the ethical implications of facial recognition.
- TED Nick Bostrom on What happens when our computers get smarter than we are? - Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make."
- TEDx The ethical dilemma we face on AI and autonomous tech by Christine Fox - The inspiration for Kelly McGillis' character in Top Gun, Christine Fox is the Assistant Director for Policy and Analysis of the Johns Hopkins University Applied Physics Laboratory. Prior to joining APL, she served as Acting Deputy Secretary of Defense from December 2013 to May 2014, making her the highest-ranking female official in history to serve in the Department of Defense. Ms. Fox is a three-time recipient of the Department of Defense Distinguished Service Medal. She has also been awarded the Department of the Army’s Decoration for Distinguished Civilian Service.
- The Future of Artificial Intelligence and Ethics on the Road to Superintelligence - The progress of technology over time, the human brain vs. the future, and the future of artificial intelligence.
- "Stop Killer Robots" YouTube Channel - The "Stop Killer Robots" campaign YouTube channel features videos on introductory material and several UN and US cometees seats
- Moral Code: The Ethics of AI - Atlantic Re:think introductory video
- Future of Humanity Institute (@FHIOxford) - Based at Oxford University, FHI is a multidisciplinary research institute that investigates what we can do now to ensure a long, flourishing future.
- Leverhulme Centre for the Future of Intelligence (@LeverhulmeCFI) - The CFI research is mostly structured in a series of projects and research exercises. Research is done by an interdisciplinary community of researchers, with strong links to technologists and the policy world, and a clear practical goal. Topics range from algorithmic transparency to exploring the implications of AI for democracy.
- Centre for the Study of Existential Risk (@CSERCambridge) - An interdisciplinary research centre within CRASSH at the University of Cambridge dedicated to the study and mitigation of existential risks.
- Global Catastrophic Risk Institute (@GCRInstitute) - GCRI studies the human process of developing and governing AI, using risk analysis, social science, and the extensive knowledge we have gained from the study of other risks.
- OpenAI (@OpenAI) - a team of a hundred people based in San Francisco, California whose mission is to ensure that artificial general intelligence benefits all of humanity.
- MIRI - Machine Intelligence Research Institute (@MIRIBerkeley) - a research nonprofit that studies the mathematical underpinnings of intelligent behavior. Their mission is to develop formal tools for the clean design and analysis of general-purpose AI systems, with the intent of making such systems safer and more reliable when they are developed.
- The Institute for Ethical AI & Machine Learning (@EthicalML) - a UK-based research centre that carries out highly technical research into responsible machine learning systems.
- AI Ethics Lab (@AIEthicsLab) - Through collaboration between computer scientists, practicing lawyers and legal scholars, and philosophers, the Lab offers a comprehensive approach to ethical design of AI-related technology.
- Data & Society - an independent nonprofit research institute that advances public understanding of the social implications of data-centric technologies and automation.
- Berkman Klein Center - The centre's mission is to explore and understand cyberspace; to study its development, dynamics, norms, and standards; and to assess the need or lack thereof for laws and sanctions.
- Princeton Dialogues on AI and Ethics - A research collaboration between Princeton’s University Center for Human Values (UCHV) and the Center for Information Technology Policy (CITP) that seeks to explore these questions, as well as many more. Research focuses on the emerging field of artificial intelligence (broadly defined) and its interaction with ethics and political theory.
- AI-Ethics - The mission of this group is threefold: Foster dialogue between the conflicting camps in the current AI ethics debate; Help articulate basic regulatory principles for government and industry groups; and Inspire and educate everyone on the importance of artificial intelligence.
- AI & Society
- Journal of Experimental & Theoretical Artificial Intelligence
- AI Communications
- Evolutionary Intelligence
- Information
- Minds and Machines
- Progress in Artificial Intelligence
- Artificial Intelligence
- AI Magazine
- International Joint Conferences on Artificial Intelligence (@IJCAIconf)
- AAAI Conferences
- AIES Conference
- GAITC
- AITopics - Large aggregation of AI resources
- AIResources - Directory of open source software and open access data for the AI research community
- Artificial Intelligence Ethics Subreddit
To the extent possible under law, Marilù Miotto and Matteo G.P. Flora have licensed this content under a Creative Commons Attribution-ShareAlike 4.0 International License.