The world of generative artificial intelligence (GenAI) is evolving rapidly, with frequent advancements and breakthroughs. This pace is heightening concerns around the privacy and security of stored information, access to data, and misuse of the technology. Before adopting GenAI, organisations must thoroughly understand the risks and implement protective measures so that adoption is secure.
Data privacy and confidentiality are critical considerations. GenAI models often require large datasets for training, which may include sensitive or personal information; if mishandled, that data can be exposed or misused. One recent GenAI solution, Copilot for Microsoft 365, is an AI-powered tool designed to boost user productivity by providing suggestions and automation as users work across Microsoft 365 applications (e.g. Outlook, Teams, the Office apps, OneNote, Loop and Whiteboard).
Copilot draws on a company’s data and content to provide users with relevant recommendations inside these applications. GenAI solutions offer numerous benefits, but they also carry data security risks, particularly around data exposure, access control and visibility of use.
GenAI tools like Copilot bring many benefits to organisations. With their increased adoption, it is imperative that robust data security, reliable identity and access management, and appropriate logging and monitoring are introduced to help minimise these risks. The main areas of focus for businesses are outlined below.
Establishing a data trust framework around Copilot can reduce the risks associated with this GenAI solution. The framework should consider the whole data lifecycle and cover the main domains of data governance, data discovery, data protection and data minimisation.
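As one illustration of the data discovery and minimisation domains, the sketch below uses the Microsoft Graph API to flag files in a SharePoint document library that are shared via anonymous links, a common source of oversharing that Copilot can inadvertently surface. This is a minimal sketch, not a definitive implementation: the drive ID and access token are hypothetical placeholders, and the Graph permissions required will vary by tenant.

```python
"""Data discovery sketch: flag files shared via anonymous links ahead of a
Copilot rollout. Drive ID and token are hypothetical placeholders; assumes
an app registration with Files.Read.All / Sites.Read.All permissions."""
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"            # e.g. acquired via an MSAL client-credentials flow
DRIVE_ID = "<sharepoint-drive-id>"  # hypothetical document library to review
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def list_items(drive_id):
    """Yield items in the root of the drive, paging through @odata.nextLink."""
    url = f"{GRAPH}/drives/{drive_id}/root/children"
    while url:
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        yield from data.get("value", [])
        url = data.get("@odata.nextLink")

def anonymous_links(drive_id, item_id):
    """Return sharing links on an item whose scope is 'anonymous'."""
    url = f"{GRAPH}/drives/{drive_id}/items/{item_id}/permissions"
    resp = requests.get(url, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return [
        p for p in resp.json().get("value", [])
        if p.get("link", {}).get("scope") == "anonymous"
    ]

for item in list_items(DRIVE_ID):
    links = anonymous_links(DRIVE_ID, item["id"])
    if links:
        print(f"Over-shared: {item['name']} ({len(links)} anonymous link(s))")
```

Output from a pass like this can feed remediation (removing or rescoping links) before Copilot is enabled, so the tool only surfaces content users are genuinely entitled to see.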
For identity and access management, areas to consider include centrally governing AI-related identities and roles through Microsoft Entra ID, controlling how AI tools interact with identity information, rolling out AI tools to a defined group of users first, and preventing access to critical data from unmanaged devices.
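As a hedged example of the last control, the sketch below creates a Microsoft Entra conditional access policy via Microsoft Graph that blocks a pilot group’s access to a sensitive application from devices that are neither compliant nor hybrid-joined. The group and application IDs are placeholder assumptions, and the policy is created in report-only mode so its impact can be assessed before enforcement.

```python
"""Sketch: block access to a sensitive app from unmanaged devices using a
Microsoft Entra conditional access policy. Group and app IDs are
hypothetical; the policy starts in report-only mode."""
import requests

TOKEN = "<access-token>"  # needs Policy.ReadWrite.ConditionalAccess

policy = {
    "displayName": "Block Copilot pilot data access from unmanaged devices",
    "state": "enabledForReportingButNotEnforced",  # report-only first
    "conditions": {
        "users": {"includeGroups": ["<copilot-pilot-group-id>"]},
        "applications": {"includeApplications": ["<sensitive-app-id>"]},
        "clientAppTypes": ["all"],
        "devices": {
            # Match devices that are neither compliant nor hybrid-joined
            "deviceFilter": {
                "mode": "include",
                "rule": 'device.isCompliant -ne True -and device.trustType -ne "ServerAD"',
            }
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```

Starting in report-only mode is a deliberate design choice: it lets security teams review sign-in logs for would-be blocks before switching the policy to enforced.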
For logging and monitoring, risks can be minimised by monitoring all user interactions with organisation-owned and non-organisation-owned AI tools, defining and ingesting AI-related log sources and alerting rules, and maintaining incident response playbooks for the target AI tools. Microsoft security stack services such as Microsoft Purview AI Hub, Microsoft Sentinel and the Microsoft Defender services provide further visibility, detect suspicious activity, and control the use of Copilot and other AI platforms or applications in the business.
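As a starting point for that monitoring, the sketch below queries a Microsoft Sentinel (Log Analytics) workspace for Copilot usage per user over the past week. It assumes Microsoft 365 audit data is ingested into the OfficeActivity table and that Copilot events carry the Operation value "CopilotInteraction"; the workspace ID is a placeholder.

```python
"""Monitoring sketch: count Copilot interactions per user per day from a
Microsoft Sentinel (Log Analytics) workspace. Assumes M365 audit data lands
in OfficeActivity with Operation == "CopilotInteraction"."""
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

WORKSPACE_ID = "<log-analytics-workspace-id>"  # hypothetical placeholder

KQL = """
OfficeActivity
| where Operation == "CopilotInteraction"
| summarize Interactions = count() by UserId, bin(TimeGenerated, 1d)
| order by Interactions desc
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(WORKSPACE_ID, KQL, timespan=timedelta(days=7))

if result.status == LogsQueryStatus.SUCCESS:
    for table in result.tables:
        for row in table.rows:
            print(list(row))
else:
    # Partial results: surface the error for triage
    print("Query did not complete:", result.partial_error)
```

A query along these lines can also be wrapped in a Sentinel analytics rule to alert on anomalous spikes in Copilot activity for a given account.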
If you would benefit from a rapid assessment of your current state and desired future state for GenAI adoption from a security, data risk and privacy perspective, contact Pouya.Koushandehfar@au.pwc.com, Robert.Di.Pietro@au.pwc.com or Jon.Benson@au.pwc.com.
Pouya Koushandehfar
Senior Manager, PwC Australia
Robert Di Pietro
Jon Benson
Partner, Advisory, Cybersecurity and Privacy, Melbourne, PwC Australia