Leads: Malé Luján Escalante
Luke Robert Moffat
In collaboration with UAL:LCC Design School, Trilateral, University of Applied Sciences Bonn-Rhein-Sieg, and King's College London Gallery of Science
In collaboration with the Design School at London College of Communication, UAL, isITethical hosted a Design Brief Award and a companion programme of public lectures and creative workshops exploring the role of Arts and Design in AI ethics and Responsible Research and Innovation in AI.
AI Ethics Through Design
Dr Malé Luján Escalante – MA Service Design, UAL: London College of Communication.
A Route Out Of Permacrisis? The Informational Right To The City
Prof. Monika Buscher, Sociology Department, Lancaster University.
The Westernizing Dream: Semiotics of AI and Technological Colonialism
Dr Luke Moffat, Sociology Department, Trustworthy Autonomous Systems – Security Hub, Lancaster University.
Consumer Protection in IT and the Role of Dark Patterns in Emerging Technology
Prof. Alexander Boden and Veronika Krauß, University of Applied Sciences Bonn-Rhein-Sieg.
Worst, Most, Uncertainty: Exploring creative methods to uncover the unspoken in AI ethics challenges
Dr Katrina Petersen, Research Manager, Trilateral Research.
One challenge in designing AI comes from the work needed to align the wants of users, the needs of society, the abilities of the tools, and the content of the data itself. These disconnects are often unspoken, yet regularly lead to unintended ethical impacts. For example, when designing AI intended to help people the most, what happens if the AI looks for how to help the most people, but the user wants to know who needs help the most? This workshop starts from these disconnects, using uncertainty as a conceptual framing tool to explore how creative, hands-on, and participatory methods can help us see how ethical challenges such as social injustice, uneven benefits, or unexpected responsibilities relate to the design and use of AI.
Narrative Futuring: Co-creating utopian visions for our futures with AI
Vivienne Kuh, Responsible Research Innovation at Bristol University and Bec Gee, Artist & Celebrant
Vivienne Kuh and Bec Gee developed the Narrative Futuring method to help scientists and engineers "feel the futures" their research is helping to create. Using utopian envisioning as method, Narrative Futuring helps us imagine the people, places and emotions that may exist one day as a result of the pioneering technologies being developed right now. In this workshop, participants will learn about the Narrative Futuring method and use it to create some utopian visions for our plural futures with some of the emerging AI systems in the present day. Narrative Futuring is quick, dirty and iterative, enabling practitioners to swiftly generate multiple possible futures within which we can all play and explore, anticipating the joys and perils in the imaginaries of our shared techno-moral futures.
Context
AI technologies and visions of AI promise great societal and even environmental solutions, from data management and predictive analysis to medicine and means of production, from entertainment to modes of living in this world and beyond it. AI visions populate an image of a future in which human-made agencies solve the wicked, human-made, highly complex crises of today.
However, current AI innovations across domains involve intrusions on privacy, surveillance of people and assets, and indiscriminate exploitation of human and natural resources and environments, while fostering a sense of distributed, even diluted, responsibility. Ethical issues range from gender, political and racial bias, to discrimination and profiling, from hidden exploitative labour to hidden environmental destruction.
There is a significant “ethical turn” in tech innovation. The media follows cases related to social networks, autonomous systems, facial recognition, bio cams and sensors, health apps, track and trace, and algorithmic political manipulation. Responsible Research and Innovation and ethical frameworks for AI keep many disciplines busy, from computer science to the social sciences; international digital lawyers and human rights activists, philosophers and anthropologists, policy makers and tech CEOs are all struggling to address AI's ethical tensions proactively.
Yet ethics is hard to understand: ethical conversations are complex and slow to engage with, and ethical frameworks are often perceived as obstacles to innovation, as tick-box administrative paperwork, a challenge to bypass.
The collaborative unit brief invites thinking about how designerly and creative methods can be applied to an ethics that is accessible, context-aware, participatory and creative. Arts & Design has had a huge role in imagining, designing, and developing AI technologies. This brief is not about technical solutions; it is about the role of Arts & Design in supporting human and more-than-human centered, ethical and responsible innovation of AI.
Student Work
Hello, AI Robot
“Hello, AI Robot” is a collaborative reading tool that invites children and parents to learn and explore together the basic principles of artificial intelligence and machine learning through interactive activities and puzzles.
Beyond the technical aspects of AI and robotics, “Hello, AI Robot” also addresses important ethical issues. The book prompts discussions around the transparency of AI principles, trust in AI robots, and autonomy in using AI and robotics, encouraging children to think critically about the impact of technology on their daily lives.
The Emotion Matrix
The Emotion Matrix project is an immersive exhibition showcasing a possible future in 2050 in which AI brain-chip implant technology has been widely adopted. The exhibition is centered around the Emo+ Chip, a product that could be implanted into people’s brains to adjust their emotions. We explore the possibilities this technology could bring to the future and the ethical issues that could arise around its development.
Monday Morning
Monday Morning is a board game that invites everyone to experience the ethical dilemmas around the introduction of AI to the workplace. The players explore a future office and are confronted with ethical dilemmas triggered by current benign, malevolent or misinformed applications of digital technology. In its second phase, the game introduces scenarios inspired by speculations about future technologies.
The mechanics of the game are inspired by the award-winning horror board game “Betrayal at House on the Hill”, while the player choices and future scenarios are inspired by the Deceptive Design framework, the EU Responsible AI framework, and sci-fi films, books and video games.
FACEIT
AI diagnosis will create new ethical challenges that must be mitigated, since AI has a tremendous capability to threaten patient data safety, privacy and inclusivity through healthcare data leaks.
FACEIT focuses on the ethics of AI diagnosis of facial skin diseases. It is designed for patients engaged in peer support groups held by the British Association of Dermatologists (BAD), to open up conversations about ethical concerns in AI diagnosis in a creative and engaging way, as well as to provide emotional support. It acts as scaffolding to help patients, and also their family, friends and carers, to form and articulate their unique understanding, concerns and expectations about data processing in AI diagnosis.