Every firm needs an AI strategy with robust governance
As you consider how your firm might use AI in 2026, or how AI can assist clients on their journey, take the time to develop an AI strategy and governance model appropriate for your organisation and seek advice from a specialist, trusted AI adviser, write Kian Ghahramani and Gerard Sayers.
With the rise of generative AI, we’ve seen firms rushing to incorporate it into their day-to-day business to uncover potential productivity gains.
Yet without a comprehensive strategy and proper governance, many firms are exposing themselves to additional risks without realising they exist.
The current state of play in Australia
In October, the National AI Centre (NAIC) released its Guidance for AI Adoption, providing six essential practices to embed safety, transparency and ethics into AI development and deployment, including deciding who is accountable, managing risks, and maintaining meaningful human control.
This guidance includes a range of practical resources to help make AI adoption widely accessible, including templates for AI policy and registration, an AI screening tool, as well as common terms and definitions.
The Australian government also launched a National AI Plan, a statement of national intent: to innovate with purpose, grow with inclusion, build national capability that strengthens the economy and serves the public good, and ensure safety and trust in AI adoption.
Professional services firms are already making use of AI through automating a range of tasks, including data entry, invoice processing, summarising financial reports and market announcements, freeing up accountants to focus on providing insight, analysis and innovation for clients.
At RSM, we have developed policies for how employees use AI, limiting the use of confidential and sensitive information with unauthorised platforms.
We have invested in training users to use Microsoft Copilot effectively and identified high-value use cases in progress within our divisions. Importantly, we seek to balance strategic intent with governance structures that support responsible AI and embed trust in our AI solutions and use cases.
However, data released by the NAIC shows that while there has been a sustained increase in the adoption of AI in Australia, 19 per cent of SMEs using AI have not implemented any of the recommended responsible AI practices, and only 37 per cent follow guidelines for safe and responsible use of AI (NAIC, AI Adoption Tracker - Responsible AI use by adoption stage, Q4 2025).
Without policies in place, businesses expose themselves to risks including data loss and privacy breaches, reputational risk arising from AI ‘hallucinations’ and bias disproportionately impacting certain employees or customers.
So, what should you be doing?
While firms are encouraged to take the AI plunge, careful consideration should be given to how employees will use AI and to the potential impacts on customers and the firm’s reputation.
Mapping out an AI strategy balanced with an AI governance framework won’t just protect your organisation from legal and ethical risks; it can also shield your firm from reputational damage.
There are six key actions a professional services firm should undertake as part of its journey.
1. Develop and adopt policies for how sensitive information will be shared and protected
Review your contract terms and conditions and specifically your firm’s Privacy Policy in relation to the use and disclosure of personal information and ensure it remains relevant in the age of generative AI.
Without relevant policies in place, such as an employee use of AI policy (i.e. a code of conduct), you risk more than exposing sensitive client data: breaches are also likely to erode client trust and confidence in your organisation.
2. Undertake ongoing risk assessment
This process should entail a thorough evaluation of potential harms arising from algorithmic biases, security vulnerabilities, and potential impacts on privacy and human rights. The full range of potential harms needs to be assessed on an ongoing basis as data and models change over time.
When the risks are understood, business leaders become empowered to make informed decisions in evaluating their strategic AI priorities and roadmap.
3. Prioritise guardrails
Guardrails are important for addressing identified risks, including guardrails on the inputs to prevent personal information from being disclosed to third-party model providers and on the outputs to prevent sensitive disclosures to end users.
Firms should familiarise themselves with the Voluntary AI Safety Standard, which outlines 10 voluntary guardrails for developers and deployers of AI. These are practical steps organisations can take to improve accountability, risk management, transparency and testing approaches.
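In practice, an input guardrail redacts personal information before a prompt leaves the firm for a third-party model provider, and an output guardrail screens responses before they reach end users. The sketch below is a minimal illustration of that pattern only; the regular expressions, function names and blocked terms are hypothetical examples, not a substitute for a purpose-built guardrail tool or legal advice.

```python
import re

# Hypothetical patterns for illustration only: real guardrails need far
# broader coverage (names, addresses, account numbers, and so on).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "tfn": re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),  # Tax File Number shape
}

def redact_input(prompt: str) -> str:
    """Input guardrail: mask personal information before the prompt
    is sent to a third-party model provider."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

def screen_output(response: str, blocked_terms: list[str]) -> str:
    """Output guardrail: withhold responses that would disclose
    sensitive internal terms to end users."""
    for term in blocked_terms:
        if term.lower() in response.lower():
            return "[RESPONSE WITHHELD: sensitive content detected]"
    return response

print(redact_input("Client jane@example.com, TFN 123 456 789, asked about GST."))
# → Client [EMAIL REDACTED], TFN [TFN REDACTED], asked about GST.
```

Even a simple filter like this makes the guardrail concept concrete: inputs are cleaned on the way out, outputs are checked on the way back, and both checks sit under human oversight rather than replacing it.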
4. Promote data governance
A key success factor for deploying AI is ensuring source data is reliable and well managed. Traditional data governance programs, where they exist, have focused on structured data in enterprise systems and data warehouses; GenAI challenges these norms, however, as it relies heavily on unstructured data such as documents.
Without a well-governed data source, it’s impossible to trust AI decisions, leaving organisations exposed and leading to diminishing trust with clients.
5. Provide ongoing, regular training
Employees at every level need to understand the risks and implications of using AI and they should be provided with training to help them work with this evolving technology.
It’s important that employees understand that AI should always be used with human oversight, but this isn’t effective if your team don’t have the necessary skills or knowledge. Regular training and awareness sessions should be a staple activity that encourages accountability around AI use and ensures employees can effectively govern the data, identify biases and verify source data.
6. Review and update your AI strategy and governance frameworks
AI tools and technology are constantly evolving, and firms need to similarly adapt their strategies to remain relevant. Regular audits and monitoring support ongoing compliance with legal and ethical standards and provide an opportunity to identify emerging risks or opportunities for improvement, all of which are important to develop trust in AI systems and protect against reputational damage.
Establishing and reviewing governance frameworks for overseeing AI development and deployment is key.
This may include defining roles and responsibilities, setting up ethical review boards, and developing policies and procedures for AI use and management as appropriate for the firm’s level of maturity.
The takeaway is that AI use is not an isolated concern: nearly all organisations, whether accounting practices or their clients, face these challenges and temptations. Clients often look to their trusted advisers for guidance on how best to proceed, and more often than not, that means their accountants.
Kian Ghahramani is a partner at RSM Australia and the national professional services leader. Gerard Sayers is a senior who specialises in AI and data at RSM Australia.