Is your robot smarter than you? Google’s DeepMind researchers gave an IQ test to various AI models to measure their capacity for abstract reasoning.[1] The results were mixed, but the fact that such research was undertaken at all points to a broader conversation about what’s going on inside the machines we’re trying to imbue with intelligence.
A recent PwC report, Explainable AI: Driving business value through greater understanding, examines the need to understand the algorithms being used in business. As AI grows in sophistication, complexity and autonomy, the report notes, there are opportunities – to the tune of global GDP gains of $15.7 trillion by 2030 – for business and society.
But this same sophistication is also driving murkiness, as the gap widens between what AI is doing and human understanding of how it reaches its conclusions. This is a potential problem: the lack of understanding is likely to erode consumer and stakeholder trust, and the gains that AI promises may fail to eventuate.
It’s understandable, from a commercial point of view, why AI applications that use machine learning (the focus of the paper) operate in black boxes – closed systems which give no insight into how they reach their outcomes. Keeping secrets and IP close to the vest makes sense for competitive edge, but the problem is that even those in the business are often unaware of how their systems work.
For relatively benign uses of AI, this may not be a problem – for business or consumers. In general, people are willing to accept AI that helps them day to day as long as it’s accurate in its application. Recommendation engines, for example, suggest television shows a user might like based on their viewing history.
However, as the complexity and impact increase, that implicit trust quickly diminishes. When an AI algorithm is trusted to move money or diagnose medical issues, humans are less willing to accept its conclusions without question. The whitepaper notes that acceptance of these technological advances may increase over time as people use them and build evidence of their results (in much the same way trust is built in any realm), but until that happens, AI needs to be explainable.
Unfortunately, designing explainable AI (also referred to as XAI), meaning AI that is interpretable enough for humans to understand how it works at a basic if not detailed level, has drawbacks.
Firstly, not all AI can be made explainable, and when it can, doing so can result in inefficient systems and forced design choices. Systems often need to make a trade-off between explainability and superior accuracy or performance. Businesses also already have AI models in use, and engineering them to be explainable retrospectively can be difficult. And, as mentioned, the matter is commercially sensitive, with businesses naturally unwilling to give away valuable intellectual property.
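To make that trade-off concrete, the short sketch below (an illustration of ours, not taken from the report) compares a shallow decision tree, whose full decision logic can be printed and read, with a boosted ensemble that is typically more accurate but offers no single readable rule set. It uses Python with scikit-learn and synthetic data, purely as an assumption-laden example.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic data standing in for a real business dataset.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow decision tree: often less accurate, but its rules can be printed and read.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("Decision tree accuracy:", tree.score(X_test, y_test))
print(export_text(tree))  # the model's entire decision logic, in plain text

# A boosted ensemble: typically more accurate, but with no single readable rule set.
ensemble = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("Boosted ensemble accuracy:", ensemble.score(X_test, y_test))
```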
It’s therefore necessary to give thought to why and when explainable AI is useful. It won’t always be necessary to ‘open the black box’ and reveal the decision-making process behind specific decisions or actions taken by AI. But organisations are facing growing pressure from customers, and will likely increasingly face regulatory requirements, to make sure that the technology in use aligns with ethical norms and operates within publicly acceptable boundaries.
Therefore, businesses need to assess the criticality of the use case. This includes looking at the type of AI being used (and whether it can be explained, as well as how), the type of interpretability needed (for instance, transparency over how it works, or explainability as to why it does what it does) and, finally, the type of use. This final consideration, use type, covers what, where and how the AI is being used.*
Once the use case has been assessed, the business needs to choose the appropriate machine learning algorithm, explanation technique and method for presenting the explanation to a human.** Explainable AI should be considered at the earliest possible stage and incorporated into the design process, but this won’t necessarily make it easy: it will involve a series of trade-offs between functionality and explainability. The more complex the AI, the less transparent it will likely be – and the harder the box to open.
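As one illustration of a post-hoc explanation technique, the hedged sketch below applies permutation importance, which measures how much a model’s performance drops when each input is shuffled, and presents the result in a form a person can read. The report does not prescribe this particular technique; the scikit-learn model and synthetic data here are assumptions used only to make the idea concrete.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data and an opaque model standing in for a deployed black-box system.
X, y = make_regression(n_samples=1000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one input at a time and measure the performance drop.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Present the explanation in a form a person can act on: which inputs matter most.
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")
```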
Given all this, it would be understandable for business leaders to be asking why they should bother. But as Explainable AI notes, “the objective of XAI isn’t to stifle or slow down innovation, but rather to accelerate it by giving your business the assurance and platform for execution you need to capitalise on the full potential of AI.”
There are numerous benefits to building explainability into AI processes. For one, it allows trust to be built and maintained between stakeholders and, ultimately, customers. With greater visibility over previously unknown vulnerabilities and flaws, business leaders will be able to assure others that the system is operating as intended, and to intervene where they can see the AI falling short.
This greater understanding of why and how the model works also allows for better insights into the answers it comes up with. For instance, with predictive modelling, knowing that “sales will be up in November” helps with ordering stock, but knowing why, say because opening hours are longer and the weather is better***, helps more. In turn, this can help optimise the model if, for example, the factors aren’t quite right, and it means that decisions based on AI recommendations can be made with confidence.
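To make that sales example concrete, the toy sketch below fits a linear model whose coefficients show why a forecast is high, not just that it is. The feature names (opening hours, temperature, foot traffic) and the data are hypothetical, chosen only to mirror the example above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical drivers of daily sales; names and numbers are made up for illustration.
rng = np.random.default_rng(0)
n = 500
opening_hours = rng.uniform(8, 14, n)    # longer trading hours
temperature = rng.uniform(10, 35, n)     # warmer weather
foot_traffic = rng.uniform(100, 1000, n)

# Simulated sales, driven mainly by opening hours and weather.
sales = 50 * opening_hours + 20 * temperature + 0.1 * foot_traffic + rng.normal(0, 30, n)

X = np.column_stack([opening_hours, temperature, foot_traffic])
model = LinearRegression().fit(X, sales)

# The coefficients say not just what the forecast is, but which factors drive it.
for name, coef in zip(["opening_hours", "temperature", "foot_traffic"], model.coef_):
    print(f"{name}: each extra unit adds roughly {coef:.1f} to predicted sales")
```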
From a compliance and control point of view, understanding how the AI application works allows for traditional lines of accountability. Instead of the developer being the only one who knows what the business is being built on – and the ethical and financial implications of that – staff at all relevant levels of the organisation will have oversight. This is a critical distinction, given that the use of AI is set to take its place alongside cyber security among a company’s risk considerations.
At the end of the day, artificial intelligence must be driven by the business, not the other way around. For that to be possible, the AI in question must be understandable at every level, from developers to leaders, from design to business strategy. This has the necessary byproduct of making sure that executives are accountable for the AI being brought into the business, and for its risks.
Building explainability into AI from the start is ideal, with the right measures, governance and values built in, but existing applications also need to be examined for changes that may be required. This is particularly true while AI is still at a relatively early point in its evolution, less complex than it will undoubtedly become.
Done right, explainable AI can be a differentiator. “The greater the confidence in the AI, the faster and more widely it can be deployed. Your business will also be in a stronger position to foster innovation and move ahead of your competitors in developing and adopting next generation capabilities.”
Ultimately, what it boils down to is this: “Any cognitive system allowed to take actions on the back of its predictions had better be able to explain itself, if even in a grossly simplified way.”
The risk of it not being so is simply too great.
* See the report for more detailed use case criticality evaluation criteria.
** For businesses facing such a scenario, the whitepaper goes into further detail on the types of design considerations and explanation techniques that should be considered.
*** At least in the Southern Hemisphere.
For further detail on how and why to incorporate explainable AI into your business, visit Explainable AI: Driving business value through greater understanding.
Dr Anand Rao
Anand Rao is a partner and the global PwC Artificial Intelligence lead, PwC United States
References