Hollie Lubbock
Jivan Virdee

Live Innovation: Exploring the ethics of AI with WIRED Live

We are on the cusp of AI becoming commonplace – computing power continues to improve, and we are generating and gaining access to an abundance of data, allowing us to push this technology to its full potential. That said, as designers and data scientists, ‘how to use AI’ and ‘when to use AI’ are challenges we need to address.

AI is what we make it
There are disparate views on the repercussions of artificial intelligence on our societies. Facebook’s Mark Zuckerberg wholly believes AI will make our lives better, while Stephen Hawking, during his talk at Web Summit this week, commented that the rise of artificial intelligence could be “the worst or the best thing that has happened for humanity.”

There’s no doubt that artificial intelligence will change how we work, entertain ourselves and interact with others as well as with machines. It has the potential to transform society as we know it and help eradicate poverty and disease, but only if we design it to do so. It is our responsibility – as designers, brands, companies, and ultimately users – to ensure we create AI systems that will bring a better future for society.

Live Innovation – how do we design ethical AI?
This question has been on our minds for a while, and it was exciting to be able to explore the ethics of artificial intelligence with the delegates at WIRED Live.

Throughout the two-day festival, which brings to life fantastic and compelling stories on ideas, business, technology and design, we ran a series of short interactive sessions that delved into ethical considerations of artificial intelligence.

Our aim: to create an AI ethics manifesto (more on that below).

Serious subject, serious fun
Focusing on four significant areas – bias, automation, trust and responsibility – attendees were invited to share their thoughts on AI, as well as their hopes and fears, with us and Accenture, the headline partner of WIRED Live. Our Live Innovation area was a space for discussion and learning through play, opening up a technical, seemingly inaccessible subject to a wider audience. As AI will impact all of our lives, creating ways in which people can become informed on the topic and participate in the discussion is vital to building a future that benefits us as a society, rather than leaving the knowledge, decisions and power to rest with the few.

We wanted to inspire people to open up and discuss these complex moral dilemmas, which will soon affect us all, and a number of activities captured people’s imagination. Among the most popular was Bot or Not – a Turing-test-style game in which you have to decide whether a piece of poetry, art or music was produced by a computer (bot) or a human (not).

Watching people put their eyes and ears to the test – the surprise of being unexpectedly wrong, the satisfaction of answering correctly – was one of the most enjoyable things for the design team to see.

Another popular activity – though one with a slightly darker twist – was Moral Machine. Based on MIT’s experiment of the same name, we asked people to imagine the following scenario: the brakes of a self-driving car suddenly fail. Where does it go next? And who has to die?

This hard, and to many almost unbearable, decision certainly brought the potential consequences of automation, and the moral dilemmas it might create, to the forefront of people’s minds and provoked some lively discussions. If you haven’t already given it a go, do. It gives you insight into yourself, as well as prompting critical thinking about how we will determine the behaviours of our autonomous future.

Other activities that proved popular were A(I) day in the life of Joe, where you had to choose how our imaginary friend Joe’s story would pan out, exploring moral decisions in the face of AI bias along the way; and Fair Judge, where people had to decide how much they trust humans vs. AI when it comes to law, and who they see as fairer and less biased in their approach to decision-making.

Designing AI for a human experience

We finished off our Live Innovation with a panel debate on AI and ethics with Tabitha Goldstaub, co-founder of Cognition X; Jivan Virdee, a data designer at Fjord; and Thomas Cowell, digital product creation lead at Accenture Interactive. They discussed the role of humanity as AI takes over more repetitive activities in culture and society, areas where AI simply must not be used, and whose vision of the future we should follow as we design AI systems. Watch the panel debate in full here.

The AI ethics manifesto

The manifesto for ethical design below was co-created with the WIRED Live attendees. It’s a starting point for what we hope will continue to be a rich discussion, so please do let us know what you think.

More thoughts on AI from Fjord on Design Voices

If you’re interested in hearing more about what our experts have to say about artificial intelligence, do follow us on Design Voices and check out these recent posts:

Automation in the workplace. Embracing intelligent automation at work by James Deakin, Technology Design Director, Fjord London.

Who’s the fairest of them all? Not AI by Jivan Virdee, Data + Design, Fjord London.

Trustworthy AI? Yes, by design by Daniela Ivanova, Service and Interaction Design, Fjord London.

