It sure sounded promising: a startup that claimed it had artificial intelligence (AI) technology that could automate the development of mobile apps.
The problem?
It may not be true. According to The Wall Street Journal, the company (which recently raised nearly $30 million from AI-focused VC funds) may have hardly any AI capabilities or expertise.1
It’s a common scenario. According to a study of 2,830 European startups by London-based MMC Ventures, 40 percent of those that claimed to be “AI startups” had barely any AI at all.2
This phenomenon is called ‘AI-washing’: companies and people misleading others about their artificial intelligence capabilities. In my work at the intersection of marketing and emerging technologies, I’m seeing it more and more often.
AI-washing probably got its name as a variant on ‘green-washing’ — presenting corporate activities as sustainable when they’re not — itself a variation on the historical meaning of whitewashing (to conceal or cover something with a uniform coat of paint). For corporate leaders, the risks of AI-washing may be even greater than those of false sustainability claims.
One risk is for dealmakers. With skyrocketing valuations for companies that claim AI capabilities, you’d best do your AI due diligence to make sure you’re getting what you pay for.
Beyond deals, there’s another risk for companies that are aggressively rolling out AI and letting their business partners and clients know about it. It could be devastating for your business and brand if anyone within your ranks were misleading others about your AI maturity.
Misleading promises can happen more easily than you think. AI is hard. Unless you’ve got a doctorate in computer science, how can you be sure that your AI tools are really doing everything that your tech and marketing teams claim they are?
Maybe there’s not even any conscious intent to mislead. Maybe someone on the tech team is just being too optimistic. Maybe someone in marketing is pushing the envelope a little too far in the hope of a catchy campaign, or simply misunderstood the technology.
Whether making AI deals or building AI organically, the imperative is the same: Make sure that you’re always dealing with responsible AI.
Responsible AI means that all your stakeholders — customers, employees or communities — can be confident that your AI really is doing what it’s supposed to, in a way that benefits them, because you’ve got these five pillars right:
If you have these pillars internally, you can be sure that your AI is a source of trust, not risk. And when looking at potential targets, dealmakers must examine these five pillars in relation to their own needs and plans.
For example, you may find that a small company’s AI is real and robust enough for its own limited needs. But it might not work or be secure if you try to scale it up for your own global operations.
If a startup lacks good governance or says it’s impossible to explain how its AI works, those could be red flags, telling you to walk away.
Yet, if you poke a little deeper and find that the tech is genuine, these flaws could be opportunities for you to acquire it and take it to another level.
So if the bad news here is that a lot of people and companies are talking a better AI game than they can really play, there’s good news too. If you know what to look for, you can avoid the dangers and be an AI leader, not an AI washout.
This is a modified version of an article previously published on Forbes.
References
© 2017 - 2024 PwC. All rights reserved. PwC refers to the PwC network and/or one or more of its member firms, each of which is a separate legal entity. Please see www.pwc.com/structure for further details. Liability limited by a scheme approved under Professional Standards Legislation.