Designed Intelligence: Empowering people within systems
This article was originally published on Design Voices, our design blog. By James O’Neill
How combining AI with design empowers people to tackle complexity.
“Everything should be made as simple as possible, but no simpler.” — attributed to Albert Einstein.
Designed Intelligence is Fjord and Accenture’s approach to unlocking the full potential of human collaboration with AI. In our previous articles, we discussed how AI technologies can help augment strategic decision-making and build better experiences. Empowerment is the third pillar of Designed Intelligence and focuses on how design can make intelligent systems more transparent, more adaptable and, ultimately, more resilient.
We live in a world of increasing complexity. Physical objects are becoming ‘smart’ and ‘connected’, with cars, toys, medical devices and even toothbrushes getting an AI flair. These products come with a myriad of digital services that plug into larger networks of people, organisations and infrastructure. These services, in turn, are just one part of a broader digital economy. The end result is unprecedented levels of customer choice: Amazon currently lists 3 billion products, Airbnb has over 5 million property listings, and YouTube is expanding with three hundred hours of new content every minute. These are inhuman scales of information, so we increasingly depend on algorithms to present the relevant information to us. Businesses, too, find themselves increasingly interconnected in terms of supply chains and digital infrastructure. An MIT Sloan article states that it “has become too complex and is moving too rapidly for boards and CEOs to make good decisions without intelligent systems”. As complexity increases, AI-powered recommendation systems and algorithmic decision support are not just helpful; they are becoming essential.
At the same time, there is wariness about the role that AI will play in the future of work. Much of the commercial narrative to date has focused on smart automation that improves quality and reduces labour costs. Critics of this approach point to examples where over-dependency on AI can result in ethical issues, fragile systems and even legal challenges. In reality, AI-enabled products and services exist as parts of larger technical, social and economic systems, and can have both positive and negative effects. If we really want to get the best of AI (and avoid the worst), we will need to understand the role it plays in these systems, and how to design for them.
To do this in our projects, we’ve had to shift our design focus from the level of user-centricity to the level of systems. At the systems level, design is about balancing the automation of tasks with maintaining the value of human problem-solving. It is also about mapping the complex relationships and interdependencies involved in modern businesses and digital platforms so that they can be built, maintained and improved. Here are a couple of the things we’ve learned.
Adaptive resilience (Two heads are better than one)
The world changes fast these days. 2020 alone seems to have cycled through a bewildering series of world-changing events. So how do you design with AI when the models you build today might not be able to cope with the reality of tomorrow? Even in times of stability, AI models that interact with the real world degrade in accuracy over time (a phenomenon often called model drift), which can lead to undesired or even dangerous outcomes.
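One simple way to catch this kind of degradation is to monitor a model’s rolling accuracy in production and flag when it drops below an acceptable level. Here is a minimal sketch in Python; the window size, threshold and function names are illustrative, not drawn from any particular toolkit:

```python
from collections import deque

def make_drift_monitor(window=100, threshold=0.8):
    """Track a model's rolling accuracy in production and flag possible drift."""
    outcomes = deque(maxlen=window)  # 1 if the prediction matched reality, else 0

    def record(prediction, actual):
        outcomes.append(1 if prediction == actual else 0)
        accuracy = sum(outcomes) / len(outcomes)
        # Only raise the flag once we have a full window of evidence
        drifting = len(outcomes) == window and accuracy < threshold
        return accuracy, drifting

    return record

record = make_drift_monitor(window=50, threshold=0.8)
accuracy, drifting = record(prediction="on_time", actual="on_time")
if drifting:
    print("Model accuracy has degraded; consider retraining or human review.")
```

The important design decision is what happens when the flag is raised: rather than failing silently, the system can hand control back to people.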
One way of coping with this is to think about AI solutions as parts of wider systems. These involve people, AI and other technologies working together to create a more resilient system, one that is able to adapt to change rather than break under stress. We took this approach when designing the Accenture Logistics Platform, an AI-driven demand prediction, scheduling and route planning tool that empowers postal services to support same-day delivery. By paying careful attention to how the people in the system operate, we were able to design an AI solution that advises postal workers but also allows them to override its suggestions. This way, the workers can react to on-the-ground situations and unexpected events without being slaves to the algorithm. This level of user control maintained the workers’ autonomy and was critical to driving adoption of the solution. Just as importantly, through their deviations and workarounds, users become trainers of the AI, providing it with feedback and new information that allows it to learn and improve. If you design your AI solution to interact with its surrounding ecosystem, you can ensure it supports meaningful actions even in the most changeable environments.
Interface from the Accenture Logistics Platform.
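The advise-override-learn pattern can be sketched in a few lines of Python. This is an illustrative simplification, not the platform’s actual code, and all names are hypothetical:

```python
def choose_route(suggested_route, worker_override=None, feedback_log=None):
    """Follow the AI's suggested route unless the worker overrides it.

    Every override is logged as a labelled example: the AI's suggestion plus
    the route a human expert actually chose, ready for the next training cycle.
    """
    if worker_override is None:
        return suggested_route
    if feedback_log is not None:
        feedback_log.append({"suggested": suggested_route,
                             "chosen": worker_override})
    return worker_override

log = []
route = choose_route(["depot", "A", "B"],
                     worker_override=["depot", "B", "A"],
                     feedback_log=log)
# The worker's deviation is now training data for the model.
```

The key property is that the human decision always wins in the moment, while the system quietly accumulates the evidence it needs to improve.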
Trust, transparency and fairness
Building resilience is about feedback and communication. It’s about collaboration between people and AI, and this requires trust. However, there is evidence that the majority of people (at least in the USA) do not trust algorithms to make decisions that will affect their lives. Building trust in AI is a multifaceted problem, but transparency and fairness are two of the biggest challenges we face.
When we talk about transparency, we mean the ability for an AI’s logic to be understood by a person. In principle this seems a sensible thing to do, so much so that it has been enshrined in European legislation. In practice, however, some of the most effective forms of AI involve levels of mathematical complexity that defy simple explanation. This raises technical challenges in terms of the types of algorithms we use, and communication challenges in how we represent models and their outputs to people.
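One design response is the global surrogate: approximate the opaque model with a simple, human-readable rule, and measure how faithfully that rule reproduces the model’s decisions. A toy sketch, in which the credit model, weights and feature names are all invented for illustration:

```python
def rule_fidelity(black_box, samples, feature_index, threshold):
    """How often a single-feature rule agrees with the black-box model."""
    agreements = 0
    for x in samples:
        rule_says = x[feature_index] >= threshold
        agreements += (rule_says == black_box(x))
    return agreements / len(samples)

# Hypothetical opaque credit model: approves when a weighted score clears a bar.
def credit_model(x):
    income_score, history_score = x
    return 0.7 * income_score + 0.3 * history_score > 0.515

samples = [(i / 10, j / 10) for i in range(11) for j in range(11)]
fidelity = rule_fidelity(credit_model, samples, feature_index=0, threshold=0.5)
# "Approve when income score >= 0.5" explains most of the model's decisions,
# and the fidelity figure tells us exactly how much it leaves out.
```

The surrogate never replaces the model; it gives stakeholders an honest, quantified approximation they can actually discuss.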
When we talk about fairness, we mean the ability of an AI to generate results that align with our societal values and laws around discrimination and bias. This has proven to be a major challenge, with examples of gender and racial bias in AI applications across finance, law and healthcare. AI models are only as good as the data they are trained on. The problem is that this data may contain latent historical biases or be unrepresentative of the wider population. In high-stakes applications, such as facial recognition in law enforcement, some companies are choosing to back away from AI altogether.
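Detecting this kind of bias often starts with simple measurements. One widely used heuristic is the “four-fifths rule”: compare selection rates between groups and flag any ratio below 0.8 for review. A toy sketch with invented data:

```python
def selection_rate(decisions, group):
    """Share of applicants in `group` with a positive outcome."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact(decisions, group, reference):
    """Ratio of selection rates; below 0.8 is a common audit red flag."""
    return selection_rate(decisions, group) / selection_rate(decisions, reference)

# Toy credit decisions as (group, approved) pairs
decisions = ([("a", 1)] * 6 + [("a", 0)] * 4 +
             [("b", 1)] * 3 + [("b", 0)] * 7)
ratio = disparate_impact(decisions, "b", "a")  # 0.3 / 0.6 = 0.5: flag for review
```

A metric like this doesn’t settle whether a system is fair; it surfaces the question so that data scientists and domain experts can investigate together.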
Explainable AI and algorithmic fairness are not just technological challenges; they are systems problems. The Algorithmic Fairness project at the Dock, Accenture’s Global Innovation Hub, examined fairness and transparency in the context of banking systems and credit risk. Mapping the AI model lifecycle from problem identification through development, launch and review revealed the points where human input was essential. At these points, bias detection and mitigation strategies could be developed and assessed by data scientists and business domain experts together. To facilitate communication between these very different stakeholders, we designed a set of visualisations and simulations that increased the transparency of the data and its effect on the models. This approach allowed them to understand, discuss and develop plans for tackling the complex problem of bias in machine learning.
Mapping the AI model lifecycle.
Approaching complexity conscientiously
AI has become a critical technology for coping with complexity in the modern world. It’s helping to tackle important problems like drug discovery and climate change, but it can also have unintended consequences for society and business. Taking a systems design approach shifts our focus from the particulars of AI algorithms to the people, organisations and technologies that surround them. We’re expanding our design toolkit to meet this challenge. A systems view helps us define how an AI interacts with its wider ecosystem, and helps us articulate the consequences of these interactions so that we can make more conscientious design decisions.
No man is an island, and no technology is either. If we think of AI as part of a system, we can get the best out of it, and avoid the worst.