Generative artificial intelligence (GenAI): Risks and secure adoption

  • Understand the risks prior to adopting GenAI across your organisation
  • Reduce the risks by developing, designing and implementing the right controls
  • GenAI-powered tools offer numerous benefits

The world of generative artificial intelligence (GenAI) is evolving rapidly, with frequent advancements and breakthroughs. This is creating growing concerns around the privacy and security of stored information, access to data and misuse of the technology. It is essential for organisations to thoroughly understand the risks and implement protective measures so that this technology can be adopted securely.

Data privacy and confidentiality are critical aspects. GenAI models often require large datasets for training, which may include sensitive or personal information. If not handled properly, this data can be exposed or misused. One recent GenAI solution, Copilot for Microsoft 365, is an AI-powered tool designed to boost user productivity by providing suggestions and automation as users work across Microsoft 365 applications (e.g. Outlook, Teams, the Office apps, OneNote, Loop and Whiteboard).

Copilot draws insights from a company’s data and content to provide its users with relevant recommendations within those applications. GenAI solutions offer numerous benefits, but they also come with certain data security risks. Here are some key risks to consider:

  1. Data privacy concerns
    GenAI systems often require access to large amounts of data to function effectively. This can include sensitive or personal information. If not properly managed, this data could be exposed to unauthorised parties. 
  2. Data leakage
    Models can inadvertently leak sensitive information that they were trained on. For example, if the model is trained on proprietary or confidential data, there is a risk that it could generate outputs that reveal this information (a minimal redaction sketch illustrating one mitigation follows this list). 
  3. Model inversion attacks
    In a model inversion attack, an adversary can use the outputs of an AI model to infer the data that was used to train it. This can be particularly concerning if the training data includes sensitive information. 
  4. Adversarial attacks
    Models can be susceptible to adversarial attacks, where malicious inputs are designed to deceive the model into producing incorrect or harmful outputs. This can compromise the integrity and reliability of the AI system. 
  5. Data governance and compliance
    Making sure that the use of GenAI complies with data protection regulations is crucial. Non-compliance can result in legal penalties and reputational damage. 
  6. Access control
    Improper access control mechanisms can lead to unauthorised access to the AI system and the data it processes. Implementing robust authentication and authorisation protocols is essential. 
  7. Bias and fairness
    GenAI models can perpetuate or even amplify biases present in the training data. This can lead to unfair or discriminatory outcomes that are misaligned with an organisation’s strategy, values and diversity and inclusion policies, and that may have legal implications. 
  8. Data integrity
    Ensuring the integrity of the data used by GenAI systems is critical. Corrupted or tampered data can lead to incorrect or harmful outputs.
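
One way to mitigate the data privacy and data leakage risks above is to minimise what sensitive information reaches a GenAI tool in the first place. The sketch below is purely illustrative: the regular expressions are simple examples rather than a complete PII detector, and `send_to_genai` is a hypothetical stand-in for whichever GenAI endpoint an organisation uses.

```python
import re

# Illustrative patterns only; a production deployment would rely on a
# dedicated classification service (e.g. Microsoft Purview) instead.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE_AU": re.compile(r"\+?61[\s-]?\d{1,3}(?:[\s-]?\d{2,4}){2,3}"),
}

def redact(text: str) -> str:
    """Replace likely PII with placeholder tokens before text leaves the organisation."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def send_to_genai(prompt: str) -> str:
    # Hypothetical stand-in for a call to a GenAI service.
    raise NotImplementedError

if __name__ == "__main__":
    prompt = "Draft a reply to jane.doe@example.com or call +61 412 345 678."
    print(redact(prompt))
    # Draft a reply to [EMAIL REDACTED] or call [PHONE_AU REDACTED].
```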

There are many benefits that GenAI tools like Copilot bring to organisations. It is imperative that, with their increased adoption, robust data security, reliable identity and access management, and appropriate logging and monitoring are introduced to help minimise the risks outlined above. The main areas of focus for businesses to minimise risk include:

Data governance and security

Establishing a data trust framework in the context of Copilot can help to reduce the risks around this GenAI solution. The framework should consider the whole data lifecycle and cover the main domains, including data governance, data discovery, data protection and data minimisation. 
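
As a concrete illustration of the data discovery step, the short sketch below walks a file share and flags documents containing sensitive markers so they can be labelled or excluded before Copilot indexes them. It is a minimal example under assumed patterns and paths, not a substitute for a dedicated tool such as Microsoft Purview.

```python
import re
from pathlib import Path

# Assumed markers; a real scan would use Purview's built-in sensitive
# information types rather than ad-hoc keywords.
SENSITIVE = re.compile(r"(?i)\b(confidential|tax file number|salary)\b")

def discover(root: str) -> list[Path]:
    """Return text files under `root` that contain sensitive markers."""
    flagged = []
    for path in Path(root).rglob("*.txt"):
        try:
            if SENSITIVE.search(path.read_text(errors="ignore")):
                flagged.append(path)
        except OSError:
            continue  # skip unreadable files rather than silently trusting them
    return flagged

if __name__ == "__main__":
    for path in discover("/shared/finance"):  # hypothetical share
        print(f"Review before Copilot rollout: {path}")
```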

Identity and access management

Areas to consider include centrally governing AI-related identities and roles through Microsoft Entra ID, controlling how AI tools interact with identity information, rolling out AI tools to a defined group of users first, and preventing access to critical data from unmanaged devices.
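
For example, the unmanaged-devices control can be expressed as a Microsoft Entra conditional access policy. The sketch below creates such a policy through the Microsoft Graph REST API; the group and application IDs are placeholders, the policy is created in report-only mode, and the payload shape should be verified against the current Graph documentation.

```python
import requests
from azure.identity import DefaultAzureCredential

# Acquire a Graph token; the signed-in principal needs the
# Policy.ReadWrite.ConditionalAccess permission.
token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default").token

# Placeholder IDs: substitute your pilot group and the target app's ID.
policy = {
    "displayName": "Copilot pilot - require compliant device",
    "state": "enabledForReportingButNotEnforced",  # report-only while piloting
    "conditions": {
        "users": {"includeGroups": ["<copilot-pilot-group-id>"]},
        "applications": {"includeApplications": ["<target-app-id>"]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": ["compliantDevice"],  # block unmanaged devices
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {token}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json().get("id"))
```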

Logging and monitoring for AI/Copilot

Monitoring all user interactions with organisation-owned and non-organisation-owned AI tools, defining and ingesting AI-related log sources, creating alerting rules, and maintaining proper incident response playbooks for the target AI tools can minimise risks. Microsoft security stack services such as Microsoft Purview AI Hub, Microsoft Sentinel and the Microsoft Defender services provide further visibility, detect suspicious activities and control the use of Copilot and other AI platforms or applications in the business.
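
As an illustration of the monitoring piece, Copilot interactions captured in the audit log can be queried from a Microsoft Sentinel (Log Analytics) workspace with the azure-monitor-query client, as sketched below. The table and operation names here are assumptions and should be verified against your own workspace schema before building alert rules on them.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

client = LogsQueryClient(DefaultAzureCredential())

# Assumed table and operation names: confirm these in your workspace.
KQL = """
OfficeActivity
| where Operation == "CopilotInteraction"
| summarize interactions = count() by UserId
| top 20 by interactions
"""

response = client.query_workspace(
    workspace_id="<sentinel-workspace-id>",  # placeholder
    query=KQL,
    timespan=timedelta(days=7),
)

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            print(dict(zip(table.columns, row)))
```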

If you would benefit from a rapid assessment on your current state and desired future state for GenAI adoption from a security, data risk and privacy perspective, contact Pouya.Koushandehfar@au.pwc.com, Robert.Di.Pietro@au.pwc.com or Jon.Benson@au.pwc.com


Contact the authors

Pouya Koushandehfar

Senior Manager, PwC Australia


Robert Di Pietro

Partner, Lead of Cyber Security, Melbourne, PwC Australia

+61 418 533 346


Jon Benson

Partner, Advisory, Cybersecurity and Privacy, Melbourne, PwC Australia
