Ethical AI: Tensions and trade-offs

Key takeaways

  • Most people want ethical artificial intelligence and realise the damage that could be done by unsupervised applications.
  • Context is key: only by knowing why, how and where an AI-powered product works can risk be avoided.
  • Tensions and trade-offs will be required so that ethics are upheld while the company fulfils its aspirations.

Throw a rock in the air and it’s likely to hit someone talking about artificial intelligence ethics or clutching a list of proposed principles.

In analysing a significant number of proposed ethical artificial intelligence principles, PwC has synthesised them into a list of nine that encapsulates the ethical spectrum: interpretability, robustness and security, accountability, data privacy, human agency, fairness, safety, lawfulness and compliance. There’s broad consensus around what such principles should cover at a theoretical level. Yet a challenge remains: how can companies embed and operationalise ethical principles in the context of their use of AI?

As business ambitions for AI grow, it’s important that appropriate measures and controls are built in from the outset of projects to avoid potential issues down the road. One example is negative bias, which could unfairly prevent someone from being considered for a job, or trigger inaccurate red flags in the forecasting of crime or reoffending rates.[1] Issues like these invite trouble — in the form of reputational damage and legal problems, as well as in the cost of fixing systems retrospectively.

Artificial intelligence ethics

Pre-empting these kinds of issues is a big part of the challenge, alongside considering how any mitigation measures might conflict with an organisation’s ambitions for AI — such as improved efficiency, higher profitability or a better customer experience. To strike the right balance, stakeholders will need a reliable and effective way to identify and work through any tensions between ethical considerations and business objectives, and to agree trade-offs that offer an acceptable compromise.

Encouragingly, in a recent survey PwC conducted of 1,000 US companies already investigating or implementing AI, 55% acknowledged the need to create AI systems that were ‘ethical, understandable and legal’.

But how?

Contextualisation

One of the reasons operationalising ethical artificial intelligence principles is hard is that what matters most can vary significantly between use cases, target markets and stakeholder groups.

The first step towards the practical application of ethical artificial intelligence principles, then, must be to contextualise them according to relevant local drivers. These might include formal jurisdictional or territorial legal requirements, industry standards and regulations, as well as cultural values, social norms and behaviours of users.

The many variables can make it difficult to set down a definitive set of rules and requirements that will apply to every situation, dictating how an organisation should manage ethical tensions and trade-offs. Instead, companies need to work from a consistent approach that makes it possible to reach the best solutions in different contexts.

A case in point: The risk of bias in AI-driven financial lending

Imagine a fictional financial institution, ExampleBank.

The bank launches a mobile-first lending solution offering small short-term loans to its business customers. Within minutes, they can apply for a loan — with minimal documentation — and are pre-approved by a machine learning algorithm. The app proves a big hit until a high percentage of successful loan applicants fail to make repayments. Rising complaints about irresponsible lending result in negative press. A lot of it.

Investigations reveal that the AI system had developed a bias. The algorithm learnt that customers in financial hardship generated greater long-term profits for the bank — the project’s underlying business objective — because they accumulated more debt and paid more interest.

As a result, the bank’s reputation suffered and it had to recall and rectify the design flaw in its AI tool as well as appropriately respond to the financial distress caused to its customers.

Too often, organisations make improvements to a system or process only after disaster has struck. In the ExampleBank scenario, customers’ financial hardship reflected badly on the bank’s ethics and had a profound impact on the affected businesses.

In safety-critical scenarios, such as those involving self-driving vehicles, autonomous weapons systems, crime control or health-related AI, managing such failures is even more important.

If ExampleBank had embedded the aforementioned ethical artificial intelligence principles as part of its app development process, before rolling out its loan-approval tool, it would have picked up on the potential for contextual bias within the algorithm at a much earlier stage. Through contextualisation, ExampleBank would have realised that the lack of regulation around human oversight and agency meant it needed to develop its own processes.

Although it wanted to provide a low-touch tool to swiftly convert demand for loans into approved applications, these contextual factors would have flagged a need to consider some important quality/anti-bias controls to keep the lending within the desired ethical parameters.

Tensions and trade-offs

While context provides a robust lens through which to identify and mitigate ethics-related risks and make technical decisions, there are still internal and external tensions and trade-offs that must be managed.

A variety of tools can be used to map the tensions in AI use cases to help prioritise those that need compromise. One option is a matrix (see below for an example) that plots the multiple dimensions and perspectives among the principles, as well as the stakeholders. Assigning a weighting to different factors will then illuminate the relative importance of each criterion.

An illustrative example of how to build an AI ethical matrix
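
As a minimal sketch of how such a matrix might be put to work, the snippet below scores a handful of principles against stakeholder groups and applies stakeholder weightings to rank them. The principles, stakeholder groups, scores and weights shown are illustrative assumptions, not a prescribed methodology.

    # Illustrative AI ethical matrix: principles scored per stakeholder group,
    # with stakeholder weights used to rank where trade-offs are most likely needed.
    # All principles, stakeholders, scores and weights below are hypothetical.

    # How strongly each principle matters to each stakeholder group (0 = not at all, 5 = critical)
    scores = {
        "fairness":         {"customers": 5, "executives": 3, "regulators": 5},
        "human agency":     {"customers": 4, "executives": 2, "regulators": 4},
        "interpretability": {"customers": 3, "executives": 2, "regulators": 5},
        "data privacy":     {"customers": 5, "executives": 3, "regulators": 5},
    }

    # Relative weight given to each stakeholder group in this context (sums to 1.0)
    stakeholder_weights = {"customers": 0.4, "executives": 0.3, "regulators": 0.3}

    def weighted_importance(principle_scores):
        """Weighted average of a principle's scores across stakeholder groups."""
        return sum(score * stakeholder_weights[group]
                   for group, score in principle_scores.items())

    def tension(principle_scores):
        """Spread between stakeholder views: a large gap signals a likely trade-off."""
        return max(principle_scores.values()) - min(principle_scores.values())

    for principle, s in sorted(scores.items(),
                               key=lambda kv: weighted_importance(kv[1]),
                               reverse=True):
        print(f"{principle:16s}  importance={weighted_importance(s):.2f}  tension={tension(s)}")

Principles that combine a high weighted importance with a large spread between stakeholder views, such as human agency in the ExampleBank case, are the ones most likely to need explicit negotiation.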

If ExampleBank had applied this process, it would have identified a tension between its executives and its customers. The leadership team wanted a fully automated AI decision capability, enabling lower overheads and greater efficiencies. Yet contextually, customers mistrusted a fully autonomous application without the reassurance of human checks. Without the tools to map the tension, the potential for problems went unseen and trade-offs were never attempted.

Ground rules and proof points: framing the discussion

Even with all of the different ethical and strategic considerations represented in black and white, debates around tensions and trade-offs can become heated. It is therefore important to provide boundaries and independent proof points to frame and defuse negotiations — and guide stakeholders towards amicable outcomes.

At its simplest level, this process might involve bringing all parties back to core, agreed ethical beliefs, such as ‘always acting in the best interests of customers’.

Using sample data to demonstrate how potential bias in the system could lead to unethical decisions and outcomes without appropriate intervention would help people understand the problem — after all, seeing is believing.
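
One way to make this concrete is a simple check over sample decisions, sketched below. The records, the ‘hardship’ grouping and the four-fifths threshold are hypothetical illustrations, not ExampleBank data or a recommended standard.

    # Minimal sketch: using sample data to surface potential bias before deployment.
    # The records, grouping attribute and 80% (four-fifths) threshold are illustrative
    # assumptions only.

    from collections import defaultdict

    # Hypothetical sample of decisions produced by the candidate model
    sample_decisions = [
        {"group": "in_hardship",     "approved": True},
        {"group": "in_hardship",     "approved": True},
        {"group": "in_hardship",     "approved": True},
        {"group": "in_hardship",     "approved": False},
        {"group": "not_in_hardship", "approved": True},
        {"group": "not_in_hardship", "approved": False},
        {"group": "not_in_hardship", "approved": False},
        {"group": "not_in_hardship", "approved": False},
    ]

    def approval_rates(decisions):
        """Approval rate per group."""
        totals, approved = defaultdict(int), defaultdict(int)
        for d in decisions:
            totals[d["group"]] += 1
            approved[d["group"]] += d["approved"]
        return {g: approved[g] / totals[g] for g in totals}

    rates = approval_rates(sample_decisions)
    ratio = min(rates.values()) / max(rates.values())
    print(rates)
    if ratio < 0.8:  # four-fifths rule of thumb for disparate impact
        print(f"Potential disparity: approval-rate ratio {ratio:.2f} is below 0.8")

A disparity in either direction is not proof of unfairness, but it is a clear prompt for human investigation before the system goes live.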

At ExampleBank, a good compromise resulting from such negotiations might have been to incorporate a human review as a step in the automated loan-application process.
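
A rough sketch of what such a human-in-the-loop step could look like is shown below; the model score, referral thresholds and hardship indicator are hypothetical assumptions used purely for illustration.

    # Minimal sketch of a human-review gate in an automated loan workflow.
    # The score ranges, thresholds and hardship indicator are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Application:
        applicant_id: str
        model_score: float        # model's approval score in [0, 1]
        hardship_indicator: bool  # any sign the applicant may already be struggling

    def decide(app: Application) -> str:
        """Auto-approve only clear-cut cases; route sensitive cases to a person."""
        if app.hardship_indicator or 0.4 <= app.model_score <= 0.7:
            return "refer_to_human_review"   # borderline or potentially vulnerable
        return "auto_approve" if app.model_score > 0.7 else "auto_decline"

    print(decide(Application("A-001", model_score=0.82, hardship_indicator=False)))  # auto_approve
    print(decide(Application("A-002", model_score=0.65, hardship_indicator=False)))  # refer_to_human_review
    print(decide(Application("A-003", model_score=0.90, hardship_indicator=True)))   # refer_to_human_review

The point of the design is not the specific thresholds but the principle that borderline or potentially vulnerable cases are always routed to a person.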

Conclusion

In time, regulation will establish a framework to manage the ethical application of AI technology. But for now, companies’ desire to press on with ambitious AI-based plans places the onus on them to proactively identify and manage any potential ethical risks.

As AI becomes more commonplace in routine business and social interactions, building trust is paramount.

Organisations planning to deploy AI offerings must formulate and finesse their approach to ethical principles appropriately and in context, as part of any planned transformation of their operations.

This starts with having the right conversations and using the right guiding frameworks and tools, allowing organisations to build optimal controls into AI-powered designs and ensure the best, fairest outcomes for themselves and their customers.

This article is part of PwC’s Responsible AI initiative. Visit the Responsible AI website for further articles and information on PwC’s comprehensive suite of frameworks and toolkits, and to take the free PwC Responsible AI Diagnostic Survey.



References

  1. Discriminating algorithms: 5 times AI showed prejudice, New Scientist. https://www.newscientist.com/article/2166207-discriminating-algorithms-5-times-ai-showed-prejudice/