The ethics of artificial intelligence in law

The legal profession is quickly absorbing the opportunities presented by artificial intelligence. PwC invited a panel of some of Australia’s leading legal and AI experts to discuss the ethical issues presented by automation and AI. Host Cameron Whittfield shares the discussion.

In 2013, James Barrat wrote a book in which he described artificial intelligence as “our final invention”. It could mark, the subtitle warned, “the end of the human era”. Beyond the hyperbole of fear that surrounds AI, however, there are myriad practical applications – and nascent forms of this technology have been in use for some time.

What is artificial intelligence?

Broadly defined, artificial intelligence (AI) is the ability of computers to display behaviour we would describe as ‘intelligent’.

This raises fundamental questions around what it means to be intelligent, or indeed what it means to be human. If we’re to understand the ethical implications of artificial intelligence, grappling with those definitions is a good place to start.

The goalposts of what we consider intelligent, however, are constantly moving, argued PwC’s former Chief Data Scientist Matt Kuperholz at our breakfast panel for law practitioners. Once, optical character recognition was considered hard for machines to do. Then it was handwriting recognition. Now, Facebook can recognise faces, Shazam can identify music, and Siri can understand language. In March 2016, Google DeepMind’s AlphaGo beat world champion Lee Sedol at the game of Go – a game with more possible board configurations than there are atoms in the known universe.

Alan Turing, considered the father of computer science, devised the Turing test: a machine is considered ‘intelligent’ if it can hold a text-only conversation in which the other party can’t tell whether they’re speaking to a human or a computer. That milestone was reportedly reached in 2014¹.

Despite the moving target, what we can agree on is that artificial intelligence builds on a foundation of settled rules and reasoning driven by logic. This is also the sweet spot for law, which means an open door for disruption by this rapidly evolving technology.

Under the hood of AI: who is responsible?

Many commentators, said Dr Jeannie Marie Paterson, Associate Professor at Melbourne Law School, like to talk about technology in legal practice from an efficiency perspective. Here, technology helps perform routine tasks faster: populating a standard form contract, comparing or contrasting documents, or carrying out document review in discovery and due diligence. In fact, technology probably carries out those tasks better than humans do.
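To make the routine-task end of that spectrum concrete, here is a minimal sketch (ours, not the panel’s) that compares two versions of a contract clause using Python’s standard difflib module; the clause wording is invented for illustration.

```python
import difflib

# Two hypothetical versions of the same contract clause.
clause_v1 = (
    "The Supplier must deliver the Goods within 30 days "
    "of receiving a Purchase Order."
)
clause_v2 = (
    "The Supplier must deliver the Goods within 14 days "
    "of receiving a written Purchase Order."
)

# unified_diff accepts any sequences of strings, so splitting on
# whitespace highlights changes at word granularity.
diff = difflib.unified_diff(
    clause_v1.split(), clause_v2.split(),
    fromfile="clause_v1", tofile="clause_v2", lineterm="",
)
print("\n".join(diff))
```

Commercial comparison tools add legal-specific layers such as clause detection and defined-term tracking, but the underlying mechanics are of this kind.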

AI also offers the promise of contributing to the very core of legal practice, for example using big data to predict litigation outcomes, draft bespoke contracts and assess the relative attractions of different business structures.
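The panel didn’t describe a specific model, but litigation-outcome prediction of this kind typically means fitting a classifier to features of past cases. The toy scikit-learn sketch below is a hypothetical illustration; the features, data and numbers are entirely invented.

```python
from sklearn.linear_model import LogisticRegression

# Invented features for past cases:
# [claim value ($m), years since the cause of action arose,
#  count of favourable precedents identified]
X = [
    [1.2, 0.5, 3],
    [0.3, 2.0, 0],
    [5.0, 1.0, 4],
    [0.8, 3.5, 1],
]
y = [1, 0, 1, 0]  # 1 = claimant succeeded, 0 = claimant failed

model = LogisticRegression().fit(X, y)

# Estimated probability that the claimant wins a new, hypothetical matter.
new_case = [[2.0, 1.5, 2]]
print(f"P(success) = {model.predict_proba(new_case)[0][1]:.2f}")
```

A production system would draw on thousands of cases and far richer features, but the lawyer’s ethical questions – what the model was trained on, and where it fails – apply equally to the toy and the real thing.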

For legal practitioners to perform their ethical duties competently, diligently and as promptly as reasonably possible, they need to understand how a machine does that job – what its capabilities are and where the gaps in its performance lie.

Technology is also now able to do the job for us – for example, the “AI chatbot lawyer” that appeals against parking tickets². How does a lawyer exercise reasonable supervision over technology or, in this case, the AI chatbot? The challenge for lawyers is to know how the artificial intelligence is working, when it’s suitable in context, the risks that are associated with it and when to step in.
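The article doesn’t prescribe how that supervision should work, but one common pattern for knowing “when to step in” is a confidence threshold that routes uncertain outputs to a human reviewer. The sketch below is a hypothetical illustration of that pattern; the threshold, the model interface and the toy model are all assumptions.

```python
CONFIDENCE_THRESHOLD = 0.85  # below this, a lawyer reviews the answer

def answer_with_supervision(question, model):
    """Return the model's answer only when it is confident enough;
    otherwise escalate the matter to a human lawyer."""
    answer, confidence = model(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    return escalate_to_lawyer(question, answer, confidence)

def escalate_to_lawyer(question, draft_answer, confidence):
    # Placeholder: in practice this would queue the question and the
    # draft answer for professional review, not answer automatically.
    return f"Referred for human review (confidence {confidence:.2f})"

# Toy 'chatbot': always returns a canned answer with low confidence.
toy_model = lambda q: ("Your appeal is likely to succeed.", 0.60)
print(answer_with_supervision("Can I appeal this parking fine?", toy_model))
```

Where the threshold sits – and who audits the cases the system answered on its own – is exactly the kind of judgment the panel argued lawyers must retain.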

Bob Williamson, Chief Scientist at Data61 and Professor in the Research School of Computer Science at ANU, offered an analogy from his profession. “These days, there are very few machines designed directly by engineers.” Instead, they utilise a stack of software to make sense of “billions” of transistors. Engineers can’t possibly know what each component does, but they are responsible for the outcome. And they manage this by testing the final product.
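Williamson’s analogy – trust the tested outcome, not inspection of every component – maps naturally onto acceptance testing of an AI system. Here is a minimal sketch, with an invented accuracy requirement and a toy ‘black box’ standing in for the system under test.

```python
def evaluate(model, test_cases, required_accuracy=0.95):
    """Accept or reject a system on its end-to-end behaviour,
    without inspecting its internals."""
    correct = sum(1 for x, expected in test_cases if model(x) == expected)
    accuracy = correct / len(test_cases)
    return accuracy >= required_accuracy, accuracy

# Toy black box and test set, for illustration only.
black_box = lambda text: text.upper()
cases = [("deed", "DEED"), ("lease", "LEASE"), ("will", "WILL")]
accepted, accuracy = evaluate(black_box, cases)
print(f"accepted={accepted}, accuracy={accuracy:.0%}")
```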

Similarly, lawyers have to assess the risk of using artificial intelligence and articulate that risk to clients. In that sense, the fact law firms are using such technology to provide services for their clients doesn’t change the game – provided it doesn’t undermine their duties, including their paramount duty to the court and the administration of justice.

Battling bias

One of the major ethical concerns when it comes to the use of artificial intelligence in law is the issue of bias and whether it compromises integrity and professional independence.

One only needs to look at Microsoft’s AI chatbot ‘Tay’, which started sharing racist ‘views’ within a day of being launched in early 2016, to understand how the bedrock of data on which a system is trained can skew what it learns.
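As a stylised demonstration of that skew, the toy model below ‘learns’ an association purely from the frequencies in an unbalanced, invented training sample – it has no view of the world beyond the data it was fed.

```python
from collections import Counter

# Invented corpus: 90% of the mentions of "group_x" that the model
# happens to see are hostile, so hostility is what it learns.
training = [("group_x is terrible", "neg")] * 9 + [("group_x is great", "pos")]

word_label_counts = Counter()
for text, label in training:
    for word in text.split():
        word_label_counts[(word, label)] += 1

def learned_sentiment(word):
    """The model's 'belief' about a word is just the skew of its sample."""
    neg = word_label_counts[(word, "neg")]
    pos = word_label_counts[(word, "pos")]
    return "neg" if neg > pos else "pos"

print(learned_sentiment("group_x"))  # 'neg' - inherited from the sample, not reality
```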

The challenge, argued PwC Partner and former White House deputy CTO, Chris Vein, is that a lot of today’s AI technology is being built in Silicon Valley, often by males with a certain viewpoint. As a result, gender, socio-economic or even political biases are embedded in the technology they’re creating. “Those biases may not reflect cultural norms in Australia, or China, or anywhere else,” he said. “The difference is they are now embedded into the foundation of the AI and may have an exponential impact.”

“We all bring bias into our daily lives and it’s silly to expect that bias doesn’t exist or that the technology we use will be free from bias.” The requirement, he explained, is to be diligent and conscious of its presence.

On the flip side, added Kuperholz, artificial learning processes can absorb many points of view but, unlike a human, a robot won’t have a ‘bad day’. As such, it could be said that the danger of cognitive bias is tempered by the removal of personal bias.

Will the robots take our law jobs?

Many discussions on the disruption of the legal profession centre on the belief that roles face redundancy through computerisation. PwC modelling in 2015, for example, showed that 44% of all jobs in Australia are likely to be automated by technology over the next 20 years.

Such discussions rest on the belief that AI is out to replicate humans. It is not. Artificial intelligence and robotic processes solve particular problems; the resulting automation merely changes the shape of the workforce and shifts the balance of roles.

“There will be jobs that don’t exist in future,” explained Kuperholz, “but we’re not out of work.” Humans must always stay in the loop. The same PwC report that puts accountants at greatest risk of automation gives the legal profession only a low (6.5%) probability. Rather than being lost, roles will evolve – towards applying the technology, ensuring quality, and providing clients with assurance every step of the way.

Vein raised the question of how to respond, as a society, to the shifting work requirements. “How do we prepare for a future we don’t understand?”, he asked.

Speed of change

How well the law is equipped to keep up with the pace of technology presents interesting opportunities for the legal profession. Kuperholz argued that technology is forcing what’s traditionally considered a slow-moving field to become more current and timely.

“National privacy laws and the laws protecting confidentiality are out of date for what technology now affords,” he said. “Human inventiveness is moving faster than the pre-tech regulatory environment around it.”

Vein added that it’s not just AI that poses ethical problems. Gene technology, for example, continues to develop ahead of the law’s ability to keep up. However, the lag in politicians’ ability to anticipate and legislate for new technology means that “you, as arbiters or interpreters of this change, will almost have to develop your own code of ethics to support your professional responsibilities, and training for that code of ethics will radically change in every profession.”

Dr Paterson presented an alternative view: that more regulation is perhaps unnecessary. “The beauty of the common law is that it is reflective, and it takes its time to deal with complex problems. There is an element where, if you want innovation, or if you want agility or diversity, we don’t want to stop that too quickly. But I’m confident that the common law can evolve to deal with new questions of criminality and negligence and the like.”

An avenue for good

Sometimes legal protocols mean it can be hard for clients to find a remedy for their problems. Here, there is a tremendous opportunity for AI to intervene and provide a public service.

Williamson talked about Data61’s desire to “take every bit of statute law, every act of Parliament, and turn it into computer code” – what they’re calling ‘regulation as a platform’.

Data61 is “on track” to achieve this, he explained, citing the example of the pilot Free Trade Agreement Portal. Hosted on the DFAT website, it’s a service that guides farmers through the 900 pages of legal regulation around exporting goods from Australia. “You can type it in plain English and the system will give you an answer. It’s not the same service you’d get from a $500-an-hour lawyer, but it’s a starting point. It tells you enough.”
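‘Regulation as a platform’ amounts to expressing statutory rules as executable logic that software can query. The fragment below is a toy, hypothetical encoding of an invented rule-of-origin clause – it is not the portal’s actual code, and the product, partner country and 50% threshold are made up.

```python
from dataclasses import dataclass

@dataclass
class Export:
    product: str
    destination: str
    australian_content_pct: float  # share of Australian content by value

def preferential_tariff_applies(e: Export) -> bool:
    """Toy encoding of an invented rule-of-origin clause: beef exported
    to a hypothetical FTA partner qualifies for the preferential tariff
    if at least 50% of its content is Australian."""
    return (
        e.product == "beef"
        and e.destination == "partner_country"
        and e.australian_content_pct >= 50.0
    )

print(preferential_tariff_applies(Export("beef", "partner_country", 80.0)))  # True
```

Once a rule is in this form, a plain-English front end only has to map a user’s question onto the encoded conditions – which is the kind of service the portal provides.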

Dr Paterson backed up this hope for law to make a wider contribution. “The role of lawyers is to engage. Lawyers are really good at thinking about the implications of new things on society. We need to engage with the challenge of AI and embrace the opportunities it presents.”

Cameron Whittfield is Head of Digital and Technology Law at PwC Australia.

Watch bestselling robotics author Martin Ford and Managing Partner of PwC’s People Business, Jon Williams, discuss AI’s impact on the workforce here. For more information on artificial intelligence, read the PwC global report, Leveraging the upcoming disruptions from AI and IoT.



References

  1. https://www.theguardian.com/technology/2014/jun/08/super-computer-simulates-13-year-old-boy-passes-turing-test
  2. http://venturebeat.com/2016/06/27/donotpay-traffic-lawyer-bot/