Your Team Is Already Using AI. Do You Have a Policy?

What Australian SMEs Need to Understand About AI in the Workplace Right Now.

Fair Work Act, Privacy Act 1988 & Emerging WHS Obligations

The Adoption Reality

AI use in Australian workplaces has moved past the experimental phase. According to BizCover’s Australian Small Business AI Report 2025, which surveyed 965 small business owners, 66% already use AI some or all of the time. A further 14% plan to adopt within the next six months to two years, bringing total current use or intent to use to 80%.

The National AI Centre’s monthly adoption tracker – which surveys 400 Australian SMEs – reports that approximately 37% of SMEs are actively using AI tools. Australia ranks third globally for consumer use of AI technology, adjusted for population.

What those numbers do not capture is what is happening with the people inside these businesses – employees using AI tools daily for their own work, often without any policy, disclosure requirement, or data protection guardrail in place.

Between 56% and 57% of employees globally hide their AI usage or present AI output as their own, and 57% admit to not checking AI-generated output for accuracy. Your team is using these tools. The question is whether you know how, and what the exposure is.

The People Risks SMEs Are Not Managing

The legal and operational risks of unmanaged AI use in an SME are concentrated in three areas:

Privacy exposure from employee data entering public AI tools. When a manager drafts a performance review using ChatGPT, they are likely feeding employee names, roles, performance details, and potentially health or personal information into a third-party system. Under the Privacy Act 1988 (Cth), employers have obligations regarding how employee personal information is handled. The employee records exemption under the Privacy Act is narrower than many employers assume – and the Privacy and Other Legislation Amendment Act 2024 (Cth), which took effect on 10 June 2025, has further tightened the boundaries of what constitutes a serious invasion of privacy, introducing a new statutory tort and increased exposure.

Fair Work exposure from AI-assisted hiring and performance decisions. The House of Representatives Standing Committee on Employment, Education and Training’s Future of Work Report, published 11 February 2025, recommended that all AI systems used for employment purposes – including recruitment, performance assessment, promotion, and termination – be classified as high-risk systems. While mandatory regulation has not yet been enacted, the existing general protections provisions of the Fair Work Act capture circumstances where an employee or candidate has been disadvantaged by a discriminatory decision, regardless of whether a human or an algorithm made it. An employer who relies exclusively on AI-assisted screening or performance tools, without human review and documented reasoning, faces a harder evidentiary position if a general protections claim is brought.

Emerging WHS obligations. In February 2026, the NSW Parliament passed the Work Health and Safety Amendment (Digital Work Systems) Bill 2026. This legislation imposes a positive duty on employers to ensure that health and safety risks are not created by the use of digital work systems – which the legislation defines to include algorithms, artificial intelligence, automation, and online platforms. This is the first jurisdiction-specific legislation in Australia to create WHS duties around AI use. It is being closely watched as a potential model for other jurisdictions. For NSW employers in particular, the use of AI to allocate work, set performance metrics, or manage attendance without assessing the health and safety impact on workers is now a compliance risk, not just an operational question.

The Productivity Story – and What It Leaves Out

The most frequently cited figure in coverage of AI and business is the Tech Council of Australia’s projection that AI could add $142 billion annually to Australian GDP by 2030. The same report states that SMEs are projected to achieve 22% faster productivity growth than large firms over the 2025 to 2030 period as a result of AI adoption.

A more grounded data point comes from a Morgan Stanley survey of 935 corporate executives across five sectors identified as most exposed to AI. Companies reported an average 11.5% increase in net productivity over 12 months. They also reported a 4% net decline in headcount. AI adoption eliminated 11% of jobs and left an additional 12% of positions unfilled, partially offset by 18% new hires.

For SME owners, this is not an argument against AI. It is an argument for managing the people impact of AI adoption deliberately. When a business introduces AI tools that change how work is done, employees need to be consulted and supported through that change. Modern awards and enterprise agreements contain consultation clauses: introducing technology that is likely to have a significant effect on employees triggers an obligation to consult them. Most SMEs are not aware of this obligation, and most are not meeting it.

What a Functional AI Policy for an SME Looks Like

An AI use policy is a people system. It does not need to be complex. It needs to cover:

  • Which AI tools are approved for use and which require prior approval
  • What categories of information cannot be entered into public AI tools – including employee personal information, client confidential information, and commercially sensitive business data
  • Disclosure requirements – employees must not represent AI-generated output as their own original work in contexts where authenticity matters, and must review AI output for accuracy before acting on it or sharing it
  • Human review requirements for any employment decision where AI has been used as an input – hiring, performance assessment, disciplinary processes
  • How the business will consult with employees before introducing new AI tools that affect how their roles are structured or managed

The National AI Centre released a suite of guidance and editable AI policy templates in October 2025 specifically designed for SMEs. These are freely available at industry.gov.au. The gap is not in the availability of resources – it is in whether business owners are treating AI use as a people systems question that requires a documented answer.

The Talent Angle

Candidates and employees at all levels are now forming views about employers partly on the basis of how those employers handle AI. Businesses that use AI tools in recruitment without transparency, that monitor employees through AI-based surveillance without clear policy, or that have no position on how their team uses AI in their own work are sending signals about how they operate.

Businesses that get ahead of this – with clear policies, transparent practices, and a genuine approach to managing the human impact of technology – will have a differentiation point in talent attraction that most of their competitors do not have. The businesses that ignore it will find themselves managing the consequences of uncontrolled AI use after a claim, a privacy breach, or a Fair Work complaint – rather than before.

Sources

BizCover: “The Australian Small Business AI Report 2025,” August 2025.

National AI Centre / Department of Industry: AI Adoption Tracker Q1 2025 (Fifth Quadrant, 400 SMEs monthly).

Department of Industry: “National AI Plan,” December 2025.

House of Representatives Standing Committee: “Future of Work Report,” 11 February 2025.

Privacy and Other Legislation Amendment Act 2024 (Cth), effective 10 June 2025.

Work Health and Safety Amendment (Digital Work Systems) Bill 2026 (NSW), passed 13 February 2026.

Morgan Stanley Research: “AI Adoption Accelerates,” survey of 935 executives, 2025.

Scale Suite: “AI Adoption in Australian SMEs 2026,” February 2026.

National AI Centre: “Guidance for AI Adoption,” October 2025 (editable SME policy templates at industry.gov.au).