What is Shadow AI, and why is it a problem?

Shadow AI refers to the unauthorised use of generative Artificial Intelligence tools, such as Large Language Models (LLMs), by employees within an organisation. These tools often appear in workplaces faster than security teams can react, and their chat-style interfaces can make them seem harmless, so employees fold them into daily work without considering the corporate data they may be sharing.

In Australia, this widespread adoption is bypassing IT oversight and security controls, creating significant blind spots, data security risks, compliance violations, and erosion of customer trust. Research indicates that over half of all AI application adoption falls under Shadow AI, with generative AI platforms the fastest-growing segment.

Estimated Shadow AI Use in Australia

Recent studies show that 33% of Australian professionals regularly upload sensitive company data to unauthorised AI platforms, whilst 70% of organisations lack visibility into actual AI tool usage. This creates unprecedented compliance risks under the Privacy Act 1988.

Types of Data at Risk in Australian Organisations

Australian employees commonly input various types of sensitive company data into unauthorised AI tools, creating substantial privacy and compliance risks:

Personal and Business Information

Confidential emails; financial spreadsheets or reports, including unreleased revenue figures and supplier terms; and proprietary code snippets or entire repositories containing trade secrets are among the most commonly shared data types.

Regulated Client Information

Customer names, addresses, job titles, and purchase notes, along with client contract sections containing legally protected information, create significant regulatory exposure under Australian privacy laws.

Special Categories of Sensitive Data

Patient records, medical history, diagnoses, or treatment plans fall under heightened protection requirements. Behavioural and academic data, or health-related data inferred from user patterns, can also be at risk when employees use AI tools inappropriately.

Australian Privacy Act Implications

Under the Privacy Act 1988, many of these data types constitute "personal information" or "sensitive information" requiring special protections. Unauthorised sharing with AI platforms likely violates multiple Australian Privacy Principles and may trigger notification obligations.

Australian Government Regulatory Response

The Australian government has established a comprehensive framework addressing Shadow AI risks through multiple interconnected policies and guidance documents.

Office of the Australian Information Commissioner (OAIC) Guidance

In October 2024, the OAIC issued definitive guidance stating that the Privacy Act 1988 and Australian Privacy Principles apply to ALL AI uses involving personal information. Most critically, the OAIC explicitly recommended that organisations should NOT enter personal information, particularly sensitive information, into publicly available AI chatbots and other publicly available generative AI tools, due to significant and complex privacy risks.

Voluntary AI Safety Standard

Released in August 2024, the Voluntary AI Safety Standard sets out 10 guardrails for organisations, aligning with international standards whilst preparing businesses for anticipated mandatory requirements. The standard emphasises risk-based regulation and accountability frameworks.

Proposed Mandatory Guardrails

Following public consultation through October 2024, the government is developing mandatory requirements for high-risk AI applications. The government has confirmed the current regulatory system is "not fit for purpose" for AI risks and is considering three regulatory approaches: domain-specific integration, new framework legislation, or pre-market risk assessment requirements.

Why Organisations Struggle with AI Governance

The rapid emergence of AI tools has created a challenge familiar from the early adoption of email and web technologies in the 1990s. Just as companies initially struggled to develop comprehensive internet usage policies whilst employees were already using these tools for work, today's organisations find themselves playing catch-up on AI governance whilst their workforce has already integrated these technologies into daily workflows.

The pace of AI development compounds this challenge. Unlike email or web browsing, which evolved gradually over years, generative AI capabilities have advanced dramatically within months. New AI models, features, and platforms emerge weekly, making it difficult for governance frameworks to keep pace. Organisations that spend months developing comprehensive AI policies often find them outdated before implementation, as new capabilities and risks emerge faster than traditional policy development cycles can accommodate.

Common Employee Thoughts

"My employer has no AI policy - what am I supposed to do?"

"Everyone else is using ChatGPT for work - who really cares about this stuff?"

"It's just helping me write emails faster - how could that be risky?"

"The university hasn't said anything about AI, so it must be fine to use."

Existing Policies Already Apply

What many employees don't realise is that their organisation's existing information security and privacy policies already govern AI usage, even when AI isn't explicitly mentioned. Universities like Charles Darwin University have comprehensive frameworks through their Privacy and Confidentiality Policy and Information Security and Access Policy that directly apply to Shadow AI scenarios.

Understanding Shadow IT and Information Security Standards

Shadow AI is a subset of "Shadow IT" - the practice of using technology solutions without explicit organisational approval or IT department knowledge. Shadow IT creates security vulnerabilities because it bypasses established security controls, monitoring systems, and governance frameworks that organisations use to protect sensitive information.
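
To make the visibility problem concrete, the minimal sketch below shows one way an organisation might estimate generative AI traffic from its own web-proxy logs. The log format, column names, and the list of generative AI domains are illustrative assumptions only, not a prescribed detection method or a complete list of services.

```python
# Minimal sketch (illustrative only): estimating Shadow AI traffic from a web-proxy log.
# Assumes a CSV log with 'timestamp', 'user' and 'domain' columns; adjust to your
# organisation's actual logging format and approved monitoring processes.

import csv
from collections import Counter

# Hypothetical watch-list of public generative AI endpoints (not exhaustive).
GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def summarise_genai_usage(log_path: str) -> Counter:
    """Count requests per user to known generative AI domains."""
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"].lower() in GENAI_DOMAINS:
                usage[row["user"]] += 1
    return usage

if __name__ == "__main__":
    for user, hits in summarise_genai_usage("proxy_log.csv").most_common(10):
        print(f"{user}: {hits} requests to generative AI services")
```

Even a rough count like this can reveal far more generative AI use than IT teams expect, which is precisely the blind spot Shadow AI creates.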

Many employees aren't aware that Australian organisations, particularly those handling personal information, must comply with stringent standards. The ISO 27000 series provides internationally recognised frameworks for information security management, whilst APP entities (organisations covered by the Privacy Act 1988's Australian Privacy Principles) have legal obligations to protect personal information regardless of the technology used to process it.

Australian Privacy Principles and Shadow AI Violations

Australian businesses face extensive compliance requirements under the Privacy Act 1988 when employees use unauthorised LLMs with company data. Multiple Australian Privacy Principles are routinely violated through typical Shadow AI usage, creating immediate legal exposure for organisations.

APP 6: Use and Disclosure Limitations

This principle prohibits using or disclosing personal information for purposes beyond the primary purpose of collection unless an exception applies, most commonly that individuals would "reasonably expect" such use. Under existing organisational policies such as CDU's Privacy and Confidentiality Policy, "The University may use and disclose personal information only" for specific permitted instances, including when "the use or disclosure is related or directly related to the purpose for collecting it and the individual would reasonably expect the University to use or disclose it for that purpose."

Shadow AI Violation Example

A university staff member copies student assessment data into ChatGPT to help write feedback comments. The student data was originally collected for educational assessment, not for training commercial AI models or processing by third-party platforms. Students would not reasonably expect their personal academic information to be shared with OpenAI for model improvement.

APP 11: Security Safeguards

Organisations must take reasonable steps to protect personal information from unauthorised access, modification, or disclosure. CDU's policies state: "The University will take all reasonable steps to protect all personal information it holds from misuse, loss, unauthorised access, modification or disclosure." Placing personal information on public AI platforms undermines these safeguards, because the organisation loses control over how the data is stored, accessed, and reused.

Shadow AI Violation Example

An HR administrator uses Claude to draft a performance review by inputting employee personal details, salary information, and performance data. This places sensitive employment information on external servers beyond university control, violating the requirement to maintain reasonable security safeguards. The information may be retained, accessed by AI platform staff, or used for model training.

APP 1: Open and Transparent Management

Organisations must maintain clear, up-to-date privacy policies explaining how personal information is handled. CDU's policy requires that when personal information is collected, individuals must be "aware of the purpose for which the information is collected" and "aware of the persons or bodies, or classes of persons or bodies, to which the University usually discloses personal information."

Shadow AI Violation Example

A researcher inputs interview transcripts containing participant personal information into an AI tool for analysis. The participants consented to university research, not to having their data processed by commercial AI platforms. The university's privacy notice doesn't mention AI processing, violating transparency requirements and undermining the consent participants gave.

Trans-border Data Flow Restrictions

CDU's policy states: "The University will not transfer personal information about an individual to a person (other than the individual) outside the Northern Territory unless" specific conditions are met, including that the University "reasonably believes that the person receiving the information is subject to a law... that requires the person to comply with principles for handling the information that are substantially similar to the Information Privacy Principles and Australian Privacy Principles."

Shadow AI Violation Example

A finance officer uses ChatGPT to analyse vendor payment data containing personal information of sole traders and small business owners. This transfers personal information to OpenAI's international servers without meeting the required safeguards for trans-border data flows, particularly as AI platforms often do not provide protections substantially similar to the Australian Privacy Principles.

Data Quality and Accuracy Requirements

The Privacy Act requires organisations to ensure personal information is "accurate, complete and up to date." CDU's policy states: "The University will take all reasonable steps to ensure that the personal information it collects, uses or discloses is accurate, complete and up to date."

Shadow AI Violation Example

A student services coordinator uses AI to generate student welfare reports based on incomplete prompts. When the AI hallucinates or generates inaccurate information about student circumstances, this creates false records that violate data quality requirements. The university becomes responsible for inaccurate personal information generated by AI systems.

Notifiable Data Breach Implications

Employee inputs of personal information into public AI tools constitute potential unauthorised disclosures under the Notifiable Data Breaches scheme. CDU's policy recognises this risk: "All suspected data breaches must be referred to the University's Privacy Officer for actioning and reporting as deemed appropriate." Shadow AI usage creates ongoing breach risks, as data may be retained indefinitely by AI platforms.

Enhanced Penalties and Enforcement

Recent amendments to the Privacy Act allow penalties of up to $50 million, or more depending on the benefit obtained or company turnover, for serious or repeated breaches, and the Privacy and Other Legislation Amendment Act 2024 expands the OAIC's enforcement powers further. The OAIC has explicitly warned about AI-related privacy risks and stated it "reserves the right to take action" where organisations do not take a cautious approach to AI, signalling potential enforcement against those with inadequate AI governance.