Every new technology creates new questions. Will it jeopardise personal information? Will it invade individual privacy? Is it ethical? Can it be trusted?
AI is no exception. Businesses are starting to pay attention to the ethical conflicts that artificial intelligence applications can unexpectedly raise, but it’s slow going.
A review of existing research shows that more than 75 organisations worldwide are already developing codes of ethics for artificial intelligence technologies, and many others agree that it’s essential to incorporate ethical AI principles into their core business processes. But, let’s face it: businesses must juggle a variety of priorities from different stakeholders (customers, employees, revenue, costs, growth, reputation, etc.), each with different value sets. The reality is, ethical considerations don’t always come out on top.
So, how can companies manage the trade-offs between ethics and profits with AI, while taking full advantage of the promised benefits?
First off, CEOs and other business leaders need to champion ethics within their organisations, both in terms of AI and more broadly. They should make clear to all internal and external stakeholders — customers, employees, partners and the community at large — that the company is building its AI and other tech initiatives on an ethical foundation.
Because AI is becoming increasingly important in today’s world, it gives corporate leaders a great opportunity to shine a spotlight on ethics in relation to bias, privacy, jobs, and unfair policies and practices throughout the company. The goal is to create a corporate culture that keeps ethics top of mind every day for all employees. This focus on ethics can also help uncover existing unfair practices — both internal and external.
Of course, corporate leaders don’t have to do this on their own. Successful ethics champions rally managers, employees, and other stakeholders under a responsible AI flag, involving them in all discussions and decisions about AI and ethics.
The first step in this engagement is to define consumer profiles and determine how each group would use AI applications, as well as how they would be affected by them. Consider, for example, customers using a telecommunications service chatbot, or employees using an AI application to screen and process invoices. This requires consumer engagement, which could include various types of outreach, workshops or training, tailored to whether the consumer is an internal or external customer.
The second step involves engaging internal and external customers in meaningful dialogue about AI and empowering them to provide their input on the ethics and values that should be embedded in AI applications. How do they honestly view the AI system’s value, risks and biases? What do they like? What worries them? Are they concerned about biased systems? What could be done better?
In short, championing ethical AI requires organisations to design technology in close collaboration with customers, employees, and the community — and to treat them as respected partners throughout the AI application lifecycle.
Sadly, that’s often not the case. PwC’s recent Responsible AI Diagnostic Survey, which polled more than 750 senior business executives through September 2019, found that the understanding and application of responsible, ethical AI practices was immature in most organisations. In fact, only 25 percent of these executives said they prioritise the ethical implications of an AI solution before investing in it, and only 34 percent think their use of AI is in line with their organisation’s values.
In addition, just 26 percent of the executives who responded to the survey strongly encourage ethical practices, yet even they lack formal AI policies and procedures. A mere 6 percent have an ethical framework for AI development and use embedded in their policies and procedures.
There is work to be done in understanding the importance of trust and its implications for customers, employees, business partners, and the community. For an AI initiative to be trustworthy, it should align with the company’s stated values and human rights, while also being safe, secure, explainable and fair. To establish trust, AI users should feel confident that the technology does, in fact, meet these stringent criteria.
Of course, it’s not easy to establish trusting relationships. In PwC’s recent Global CEO Survey, 84 percent of respondents agreed that AI-based decisions need to be explainable in order to be trusted. There is a clear need, therefore, for business leaders to review their firm’s AI practices, ask key questions about each initiative, and take whatever steps are necessary to address the potential risks and ethical concerns that can erode trust.
Building trust takes time, effort, and resources, as well as C-suite support. But putting in the work is the only way to ensure that consumers, workers, partners, and community members have confidence in the trustworthiness of your AI technologies — and, by extension, your business.