Overview
Artificial intelligence (AI) offers opportunities for the public sector to achieve better social, economic, and environmental outcomes. It can provide operational efficiencies, support decision-making, and enhance the government’s ability to respond to complex challenges.
To realise these benefits, entities need to effectively manage ethical risks and ensure the way they use AI aligns with public sector values and community expectations.
We focused on policies the Department of Customer Services, Open Data and Small and Family Business has issued that guide entities in managing ethical risks with AI. We also assessed how the Department of Transport and Main Roads, in collaboration with the Queensland Revenue Office, managed ethical risks and controls of 2 AI systems it uses.
Tabled 24 September 2025.

Report summary
This report examines whether the Queensland public sector has policies and guidelines to effectively manage ethical risks associated with artificial intelligence (AI) systems.
In this audit, we focused on policies the Department of Customer Services, Open Data and Small and Family Business (CDSB) has issued to guide the management of ethical risks with AI across the public sector. We also assessed how the Department of Transport and Main Roads (TMR), in collaboration with the Queensland Revenue Office (QRO) within Queensland Treasury, manages ethical risks and relevant mitigating controls of 2 AI systems it uses.
What is important to know about this audit?
- AI offers considerable opportunities for government to transform how it delivers services and how efficiently it operates.
- Realising the benefits of AI requires public sector entities to effectively manage ethical risks and ensure AI use aligns with public sector values and community expectations.
- While ethical risks are not new, AI’s advanced capabilities have increased them, making it important for entities to understand how AI systems work and apply suitable controls and oversight.
- In September 2024, the Queensland Government introduced its AI governance policy to ensure entities establish governance arrangements and assess ethical risks for each AI system they use.
- Individual entities are responsible for identifying and managing ethical risks with AI systems they operate.
What did we find?
The Queensland Government’s AI governance framework is effectively designed to support entities with managing the ethical risks of AI systems, with some opportunities for improvement.
- The AI governance policy and supporting materials align with national and international frameworks and provide a range of resources to assist entities in managing the ethical risks of AI.
- CDSB, which is responsible for the policy, could strengthen its guidance on applying ethical risk assessments to support more consistent and effective use of the framework.
CDSB needs to monitor whole-of-government AI usage and risks.
- CDSB has limited visibility across the Queensland Government on AI use and emerging ethical risks. This affects its ability to assess how well entities manage these risks.
- As entities continue to increase their use of AI, it will be important for CDSB to ensure risks are monitored and understood at the whole-of-government level. Monitoring can also inform decisions on whether a more coordinated response or additional support for entities is needed.
TMR has not yet established department-wide policies or governance arrangements to consistently oversee the ethical risks of AI systems.
- TMR has not yet established department-wide AI governance or incorporated AI ethical risk management into its policies or existing information and communication technology governance. It has not assessed whether existing arrangements align with the AI governance policy.
- System-level governance has been established for the Mobile Phone and Seatbelt Technology (MPST) program, but not for the QChat system.
TMR’s identification and management of ethical risks across its AI systems varies in effectiveness.
- TMR has not yet undertaken dedicated ethical risk assessments for the MPST program or the QChat AI systems. While aspects of ethical risks have been identified through the existing risk assessment process, the department needs to apply a dedicated ethical risk assessment framework to ensure it identifies and manages all AI system risks.
- The MPST program uses image recognition AI to detect driving offences. TMR has implemented controls, including human review, to support accuracy and reliability, privacy, and fairness, and to monitor its external vendor that manages the system.
- TMR does not have adequate safeguards to manage ethical risks for QChat. It needs to establish suitable governance arrangements to manage risks, implement controls to monitor use, and develop a structured plan to educate its staff on using AI systems responsibly.
What do entities need to do?
- We make 2 recommendations to TMR to strengthen governance arrangements and risk assessment processes, enhance oversight of AI systems, and improve staff capability to use AI systems responsibly.
- We make 4 recommendations to CDSB, focused on continuously improving the AI governance policy, monitoring whole-of-government risks and use of AI, enhancing the tools entities use to assess ethical risks, and supporting entities to better monitor QChat.
- We make one recommendation to all entities to implement ethical risk assessment processes to better identify and manage ethical risks.

1. Audit conclusions
The Department of Customer Services, Open Data and Small and Family Business (CDSB) has designed an effective policy to guide the public sector’s ethical use of AI. The policy requires entities to develop entity-specific governance and is supported by evidence-based materials for entities to assess ethical risks.
CDSB could improve the effectiveness of the policy by enhancing guidance to entities on the application of ethical risk assessments for AI systems. It also needs to determine how it will evaluate its AI governance policy and supporting tools to support their effective implementation and continuous improvement.
CDSB needs a better understanding of how AI is being used across the Queensland public sector. This will help it identify and respond to risks at a whole-of-government level and ensure AI use is safe, secure, and reliable. While its approach and support to date have been appropriate, further support may be needed as the use of AI in the public sector continues to grow. Monitoring AI use will enable CDSB to provide targeted guidance and coordinate responses across government more effectively.
The Department of Transport and Main Roads (TMR) is not effectively identifying and managing aspects of ethical risks associated with its Mobile Phone and Seatbelt Technology (MPST) image-recognition AI and the QChat generative AI system.
TMR has considered some ethical risks for both systems, which were implemented before CDSB issued its AI governance policy. That policy has now been in place for 12 months, and TMR needs to perform full ethical risk assessments to determine whether its governance arrangements and mitigation strategies for these systems address risks effectively.
The MPST program has implemented governance arrangements and risk mitigation strategies, including human review of potential offences, to support reliability, accuracy, and fairness. It needs to assess the completeness and effectiveness of these arrangements and mitigation strategies.
TMR needs to perform an ethical risk assessment for QChat and establish monitoring controls. A more structured approach to training would enhance staff capability in the responsible use of AI systems.
At a whole-of-department level, TMR needs to do more to ensure it assesses and manages ethical risks in a structured and consistent manner. It has taken initial steps, but lacks full visibility over AI systems in use. It has not yet established comprehensive department-wide governance arrangements to effectively oversee the ethical risks of AI systems. Strengthening its governance frameworks and implementing assurance mechanisms will support consistent and responsible management across the department.

2. Recommendations
We have developed the following recommendations for the Department of Customer Services, Open Data and Small and Family Business (CDSB) and the Department of Transport and Main Roads (TMR). We have also developed a recommendation for the benefit of all public sector entities.
Chapter 4: Supporting the ethical use of artificial intelligence

We recommend that the Department of Customer Services, Open Data and Small and Family Business:

| Recommendation | Entity response |
|---|---|
| 1. Enhance the Foundational artificial intelligence risk assessment (FAIRA) and supporting material | Agree |
| 2. Support continuous improvement by assessing the effectiveness of the AI governance policy and supporting tools | Agree |
| 3. Improve its understanding of AI system use and risks across the public sector and develop risk-based advice to support entities in managing higher-risk AI systems | Agree |

Chapter 5: Managing ethical risks in 2 artificial intelligence systems

We recommend that the Department of Transport and Main Roads:

| Recommendation | Entity response |
|---|---|
| 4. Enhance its governance arrangements to support responsible use of AI | Agree |
| 5. Improve QChat's controls to manage ethical risks more effectively | Agree |

We recommend that the Department of Customer Services, Open Data and Small and Family Business:

| Recommendation | Entity response |
|---|---|
| 6. Support entities to better manage the risks of using generative AI systems, such as QChat, by providing access to content safety information | Agree |

We recommend all public sector entities:

| Recommendation | Entity response |
|---|---|
| 7. Implement ethical risk assessment processes for AI systems in use or under development to more comprehensively identify and manage ethical risks | CDSB: Agree; TMR: Agree; Queensland Treasury: Agree |
Reference to comments
In accordance with s. 64 of the Auditor-General Act 2009, we provided a copy of this report to relevant entities. In reaching our conclusions, we considered their views and represented them to the extent we deemed relevant and warranted. Any formal responses from the entities are at Appendix A.

3. Understanding artificial intelligence
Artificial intelligence (AI) offers opportunities for the Queensland Government to achieve better social, economic, and environmental outcomes. It can help deliver these outcomes by enabling more targeted service delivery, improving operational efficiency, supporting evidence-based decision-making, and enhancing the government’s ability to respond to complex challenges.
Realising the benefits of AI requires public sector entities to effectively manage ethical risks and ensure the way they use AI aligns with public sector values and community expectations.
This chapter outlines what AI is, and the ethical risks involved. It also details how some Queensland Government entities are using AI and summarises the focus of our audit.
What is AI?
Artificial intelligence refers to computer systems that use inputs to produce outputs like predictions, content, recommendations, or decisions. These systems can simulate aspects of human intelligence by analysing large volumes of data, recognising patterns, and adapting their responses based on new information. This enables them to solve problems and perform functions that have traditionally relied on human involvement or judgement.
The different types of AI
AI is not a single technology, but a broad set of systems with different capabilities and uses. Understanding the different types of AI is important, because each is suited to different tasks and built using different techniques. AI systems can be narrow, designed to perform specific tasks, or general, able to be applied to a wide range of tasks.
Some of the more common types of AI systems include:
- Generative AI – creates new content such as text, images, audio, or video by learning patterns from existing data. QChat and ChatGPT are examples of generative AI systems that also use natural language processing.
- Natural language processing – allows machines to understand, interpret, and generate human language in both written and spoken form.
- Computer vision – enables machines to interpret and respond to visual information such as images and video, mimicking aspects of human sight.
- Machine learning – enables systems to automatically learn from data and improve their performance over time without being explicitly programmed. Mobile Phone and Seatbelt Technology is an example of an AI system that uses machine learning and computer vision technology.
To use AI responsibly, entities need to understand the different types of AI systems and how they function. This includes knowing what data the AI system uses, who can access the data, how it makes decisions, and how its outputs might impact people or services.
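As an illustration only, the following sketch shows how an entity might record these facts for each AI system it uses. The structure and field names are our own assumptions, not part of any Queensland Government framework.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Illustrative record of what an entity should know about an AI system."""
    name: str
    ai_type: str                # e.g. 'generative', 'computer vision'
    purpose: str
    data_inputs: list[str]      # what data the system uses
    data_access: list[str]      # who can access that data
    decision_process: str       # how outputs are produced and checked
    affected_groups: list[str]  # whose services or outcomes the outputs might impact

# Hypothetical entry for a generative AI assistant:
profile = AISystemProfile(
    name="QChat",
    ai_type="generative (natural language processing)",
    purpose="Virtual assistant for Queensland Government employees",
    data_inputs=["user prompts", "entity-supplied context"],
    data_access=["entity staff", "system administrators"],
    decision_process="Large language model generates text; user reviews outputs",
    affected_groups=["staff", "members of the public affected by resulting decisions"],
)
print(f"{profile.name}: {profile.ai_type}")
```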
Examples of AI in the Queensland Government
Queensland Government entities are at different stages of trialling and using AI systems. Figure 3A provides examples of AI projects across 4 entities.
| Entity | Project | Description |
|---|---|---|
| Queensland Police Service | QFACE | Trialling a computer vision system with facial recognition to cross-check images of possible offenders |
| Department of Education | Corella | Trialling a generative AI tool to create learning experiences for students |
| Queensland Academy of Sport | YouFor2032 | Using computer vision to analyse photos and videos for athletic talent for the Brisbane 2032 Olympic and Paralympic Games |
| Queensland Health | Medical Scribing | Trialling a natural language processing system with speech recognition to scribe interactions between clinicians and patients |
Compiled by the Queensland Audit Office.
If AI is not used appropriately, it can create ethical risks such as unfair outcomes, decisions that cannot be explained, privacy breaches, and a lack of clear accountability when problems occur. If not managed, these risks can reduce public trust and harm people or communities.
What are ethical risks of AI?
While ethical risks themselves are not new, the use of AI and its capabilities has increased the ethical risks that entities need to manage. These risks include privacy and data security concerns; limited transparency; gaps in accountability; and outcomes that may be harmful, unfair, or unintended.
Figure 3B provides examples of ethical risks across different types of AI. It is not an exhaustive list but aims to raise awareness and guide entities’ risk management when using different types of AI systems.

Compiled by the Queensland Audit Office using information from the Digital NSW website; reports and journal articles on AI; and UNESCO publications.
When understanding and managing the ethical risks of AI systems, public sector entities should consider the type of AI they are using, the context in which they apply it, and the possible effects on individuals or communities. This helps ensure they use the technology in a way that is fair, transparent, and accountable.
Australia’s AI Ethics Principles
The Australian Government has developed 8 AI Ethics Principles to support safe and ethical use of AI. These voluntary principles aim to promote the incorporation of ethical standards into the design, development, and implementation of AI.
The Queensland Public Sector Ethics Act 1994 sets standards of integrity and accountability that guide how public servants make decisions. These values align with the Australian AI Ethics Principles, which focus on fairness, transparency, and accountability in the use of AI across government. Acting ethically is not a new obligation, but AI creates new contexts where these responsibilities must be applied.
Figure 3C outlines Australia’s 8 AI Ethics Principles.
| Principle | Description |
|---|---|
| Human, societal and environmental wellbeing | AI systems should benefit individuals, society, and the environment. |
| Human-centred values | AI systems should respect human rights, diversity, and the autonomy of individuals. |
| Fairness | AI systems should be inclusive and accessible, and should not result in unfair discrimination against individuals, communities, or groups. |
| Privacy protection and security | AI systems should respect and uphold privacy rights of individuals and ensure the protection of data. |
| Reliability and safety | Throughout their life cycle, AI systems should reliably operate in accordance with their intended purpose. |
| Transparency and explainability | Entities should disclose when users are interacting with AI systems and ensure that the outcomes are explainable. |
| Contestability | People should be able to challenge the outcome or use of AI systems when they significantly impact a person, community, group, or an environment. |
| Accountability | People responsible for AI systems should be identifiable and accountable for outcomes, with appropriate human oversight in place. |
Australian Government, Department of Industry, Science and Resources.
These principles provide a practical framework for identifying and managing the ethical risks with AI systems and can be adopted by both government entities and private businesses. When effectively applied, they help ensure AI is developed and used in a safe, transparent, reliable, and ethical way.
Checklist for managing ethical risks in AI systems
Appendix C provides a checklist of key questions for those charged with governance to consider when managing ethical risks related to AI.
It is a practical tool to support entities in aligning their AI use with ethical standards and governance expectations. The checklist is adapted from the Queensland Government’s AI governance framework and draws on national AI assurance frameworks and guidelines.
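As a simple illustration of how such a checklist can be made operational, the sketch below pairs each of Australia's 8 AI Ethics Principles with a screening question and reports any gaps. The questions paraphrase the principles in Figure 3C; the structure and scoring are hypothetical, not drawn from Appendix C.

```python
# Hypothetical governance checklist: one screening question per principle.
CHECKLIST = {
    "Human, societal and environmental wellbeing":
        "Have we assessed the system's benefits and harms to people and the environment?",
    "Human-centred values":
        "Does the system respect human rights, diversity, and individual autonomy?",
    "Fairness":
        "Could the system result in unfair discrimination against any group?",
    "Privacy protection and security":
        "Are privacy rights upheld and is the data protected?",
    "Reliability and safety":
        "Is the system tested to operate reliably for its intended purpose?",
    "Transparency and explainability":
        "Do we disclose AI use and can we explain its outcomes?",
    "Contestability":
        "Can people challenge outcomes that significantly affect them?",
    "Accountability":
        "Is a named person accountable, with human oversight in place?",
}

def report_gaps(answers: dict) -> list:
    """Return the principles whose screening question was answered 'no'."""
    return [principle for principle, ok in answers.items() if not ok]

# Example: an assessment that has not yet addressed contestability.
answers = {principle: True for principle in CHECKLIST}
answers["Contestability"] = False
print(report_gaps(answers))  # ['Contestability']
```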
What did we audit?
In this audit, we focused on the policies and support the Department of Customer Services, Open Data and Small and Family Business provides to guide entities in managing the ethical risks of AI. The department has a central role in providing guidance, coordination, and advice on the risk management of AI across government.
We also assessed how the Department of Transport and Main Roads, in collaboration with the Queensland Revenue Office within Queensland Treasury, evaluates and manages ethical risks of 2 AI systems it uses.
These 2 AI systems are:
- QChat – a generative AI virtual assistant the Queensland Government created for its employees. It is designed to assist with a variety of tasks, including summarising documents, brainstorming solutions, developing communications, and performing policy analysis.
- Mobile Phone and Seatbelt Technology (MPST) – an image-recognition AI system used to detect possible mobile phone and seatbelt offences.
We did not examine broader frameworks or controls in information and communication technology, procurement, project management practices, or risk management, other than the areas that relate to the ethical risk management of AI systems. However, the use of AI still needs to be integrated with these broader frameworks and controls to support consistent governance and risk management.

4. Supporting the ethical use of artificial intelligence
Well-designed policies and governance frameworks support the state and its entities to identify and manage ethical risks while still realising the potential benefits of AI. The uptake and use of AI is still new and evolving quickly. Governments, industries, and entities are still working out how best to manage the ethical risks of AI as the technology develops.
This chapter examines the design of the AI governance policy and the support the Department of Customer Services, Open Data and Small and Family Business (CDSB) provides to help entities implement it. The AI governance policy is intended to ensure a consistent approach to managing ethical risks associated with AI.
Queensland Government roles and responsibilities for AI systems
CDSB sets policies on information and communication technology and data management for entities to follow.
To be effective in its role, CDSB must ensure policies for the ethical use of AI are evidence-based, clear, and user-friendly. It should have a comprehensive understanding of how AI is used across the public sector and provide appropriate guidance and support to help entities identify and manage ethical risks.
In September 2024, CDSB released the AI governance policy. We refer to this policy and other supporting materials and guidelines collectively as the ‘AI governance framework’.
Appendix E includes a recent timeline of CDSB initiatives, and developments in AI policy and governance across Australia.
The AI governance policy applies to:
- Queensland Government departments, as defined by the Public Sector Act 2022
- statutory bodies under the Financial and Performance Management Standard 2019
- accountable officers in departments with delegated responsibility for other statutory bodies.
While CDSB established and maintains the AI governance framework, those entities to which it applies are responsible for implementing and managing the risks specific to their AI systems. These entities must also ensure their AI systems and use align with other government policies and laws. Each entity remains accountable for the business, regulatory, or administrative decisions assisted by AI and must monitor the performance of its AI systems.
In addition to these requirements, entities must also ensure their use of AI aligns with broader policies and frameworks for information and communication technology systems, as outlined in the Queensland Government Enterprise Architecture (QGEA). This covers areas such as privacy, security, data, and information management.
How effective is the design of Queensland’s AI governance framework to manage ethical risks?
The AI governance framework is designed to support entities in establishing governance arrangements and assessing ethical risks for AI systems. Queensland is the first jurisdiction nationally to mandate public sector entity compliance with international standard ISO 38507 Information technology – Governance of IT – Governance implications of the use of artificial intelligence by organizations, which is considered international best practice.
Figure 4A provides an overview of the AI governance framework, which requires consistent and evidence-based methods for assessing risk and key ethical issues like transparency, accountability, and fairness throughout the life of an AI system.

Compiled by the Queensland Audit Office using information from the Queensland Government Enterprise Architecture (QGEA) Artificial intelligence directions.
In designing the AI governance framework, CDSB considered different international AI frameworks and consulted with the Queensland Government AI Assurance Working Group and Queensland Government departments. This helped it to understand the guidance and support entities need and the risks that need to be managed.
Improvements could make the framework more effective
CDSB developed the Foundational artificial intelligence risk assessment (FAIRA) and its accompanying guideline to provide a structured and evidence-based approach for entities to evaluate ethical risks.
The FAIRA is appropriately designed to apply consistent steps to understand what the AI does, the information it uses, and its outputs.
Effective elements include:
- Alignment with Australia’s 8 AI Ethics Principles – the FAIRA evaluates each ethical principle and suggests controls to support entities in responding to risks effectively.
- Clear communication of ethical risks to stakeholders – the FAIRA encourages the involvement of diverse stakeholders and technical experts in the development and oversight of AI systems.
- Attention to potential impacts on vulnerable groups – the FAIRA encourages entities to assess the broader impacts of AI systems, including on the public and vulnerable groups, to support appropriate safeguards.
CDSB needs to strengthen guidance on ethical risk assessments
CDSB recommends that entities initiate the FAIRA at the earliest opportunity and use it throughout the life cycle of the AI system. However, CDSB does not provide guidance on when and how often to apply the FAIRA, or whether it should be used retrospectively with AI systems implemented before the policy was released. Without timing guidance, entities may apply the FAIRA inconsistently or less frequently than CDSB intended.
The National framework for the assurance of artificial intelligence in government, released in June 2024, recommends entities review risks when transitioning between key phases. These phases include design, model building, testing, deployment, and ongoing use or when major changes occur. Aligning use of the FAIRA with these phases and providing guidance on when to apply it within these phases would support more consistent implementation and strengthen risk management across the AI life cycle.
Some entity feedback indicated the FAIRA may be too complex for low-risk AI systems. These entities suggested a more flexible risk assessment could improve efficiency while still being effective. Other jurisdictions in Australia use threshold tests that help entities scale their risk response. This approach may support entities in Queensland to manage ethical risks in a more efficient way.
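To illustrate how a threshold test can scale the risk response, a minimal sketch follows. The screening questions, tiers, and cut-offs are invented for illustration; they are not taken from the FAIRA or any other jurisdiction's framework.

```python
def assessment_tier(affects_public: bool, automates_decisions: bool,
                    uses_personal_data: bool) -> str:
    """Map simple screening questions to a proportionate assessment depth."""
    score = sum([affects_public, automates_decisions, uses_personal_data])
    if score == 0:
        return "self-assessment"           # low risk: lightweight review
    if score == 1:
        return "standard assessment"       # moderate risk
    return "full ethical risk assessment"  # higher risk: full FAIRA-style review

# An internal tool that handles personal data but makes no automated decisions:
print(assessment_tier(affects_public=False, automates_decisions=False,
                      uses_personal_data=True))  # standard assessment
```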
The AI governance policy lets entities choose to use their own ethical framework instead of using the FAIRA but does not suggest what that framework should include. Other jurisdictions require entities to adopt mandated ethical principles or consider the national principles when developing their own frameworks. As the technology continues to emerge, additional guidance on ethical frameworks will better support entities in developing their AI policies and managing ethical risks effectively.
Recommendation 1: We recommend that the Department of Customer Services, Open Data and Small and Family Business enhances its Foundational artificial intelligence risk assessment (FAIRA) and supporting material.
Developing a policy evaluation plan
Since introducing the AI governance policy in September 2024, CDSB has not developed a formal evaluation plan for it. While CDSB receives informal feedback and provides support to entities, it does not have enough information to fully assess whether the framework meets its objectives or supports ethical risk management.
The Queensland Government’s performance management framework requires regular evaluation of policies to ensure their relevance and effectiveness. To meet this, CDSB should develop a plan to review the policy regularly, using updates to national frameworks and jurisdictional approaches to guide refinements. This approach will support CDSB to manage ethical risks, adapt to challenges, and align with evolving standards.
Recommendation 2: We recommend that the Department of Customer Services, Open Data and Small and Family Business supports continuous improvement by assessing the effectiveness of the AI governance policy and supporting tools.
How effective is CDSB’s support for managing AI risk?
In addition to the AI governance policy and guidance material, CDSB provides a range of support to help entities manage the ethical risks of AI and implement the policy effectively. These initiatives aim to build capability, promote consistency, and support the ethical use of AI systems across the public sector. They include:
- Coordinating a community of practice – a collaborative forum supports the safe and responsible use of AI by bringing together entities and experts to share insights, address governance challenges, and promote consistent policy application.
- Providing education, advice, and implementation support – CDSB delivers training workshops and provides advice to help entities understand and manage the ethical risks of AI systems.
- Conducting some central assessments – CDSB centrally assesses some widely used tools, such as Microsoft Copilot and DeepSeek, to improve the efficiency of entities’ risk assessments.
Monitoring and responding to AI risks across government
CDSB is responsible for developing information technology policy and advising government on related risks, which includes oversight of emerging technologies such as AI. However, it has limited visibility across the Queensland Government on how AI is being used and the types of risks that may be emerging.
It collects information on departmental digital systems through a standard reporting process that informs whole-of-government planning. This process currently does not capture AI systems or related risks, as reporting requirements under the AI governance policy have not yet been finalised.
At present there is no whole-of-government monitoring or oversight to understand how well risks are managed, identify gaps or good practices, inform AI policy development, and share learnings. This information would enable CDSB to focus its support, build public sector capability to better manage AI ethical risks, and improve outcomes across government.
There are examples within the Queensland Government where whole-of-government oversight, risk monitoring, and coordination already exist. For instance, the Cyber Security Unit within CDSB undertakes a whole-of-government monitoring and reporting role for cyber security. This enables a coordinated approach for managing cyber risks across the public sector.
Across Australia, jurisdictions use varied approaches to AI governance frameworks, whole-of-government strategies, and central oversight bodies that monitor higher-risk AI systems. Appendix D provides a summary of AI governance frameworks the different Australian states and territories have adopted to manage AI risk.
In June 2025, CDSB released its Strategic Plan 2025–2029. One of its strategic objectives is to transform government services through cross-agency leadership. This includes leading a whole-of-government approach to AI and providing tools to support entities in using AI to improve productivity, service delivery, decision-making, and policy design.
As use of AI grows, CDSB needs to consider how it can more systematically monitor risk and support agencies to use AI safely and consistently. This would align with CDSB’s strategic objective to lead the whole-of-government approach to AI. This could include engaging with entities more frequently for risks that require additional attention or support, which would help improve risk management and support better outcomes across government.
Recommendation 3: We recommend that the Department of Customer Services, Open Data and Small and Family Business improves its understanding of AI system use and risks across the public sector and develops risk-based advice to support entities in managing higher-risk AI systems.

5. Managing ethical risks in 2 artificial intelligence systems
Managing ethical risks is an important part of using artificial intelligence (AI) responsibly in government services. Without strong governance, risk assessment processes, and effective mitigation strategies, AI systems may cause harm, reduce transparency, or erode public trust. This is particularly important where AI is used to support service delivery or decisions that affect the public.
This chapter examines how effectively the Department of Transport and Main Roads (TMR) assesses and manages ethical risks associated with AI systems.
It focuses on 3 key areas:
- Governance structures – the systems and arrangements in place to oversee how ethical risks are managed. This includes the organisational structure, policies, and processes that set the rules, roles, and responsibilities to manage the use of artificial intelligence.
- Ethical risk assessments – the processes used to identify and evaluate potential ethical risks in the design or use of AI systems. These assessments support agencies to understand where harm might occur and whether the system aligns with public values and legal obligations.
- Mitigation strategies – the actions taken to reduce or manage identified risks. These can include system controls, human intervention, monitoring activities, or staff training. Effective strategies can prevent issues from arising or limit their impact if they do.
We focused on 2 of TMR’s AI systems: QChat and the Mobile Phone and Seatbelt Technology (MPST) program. Figure 5A details key information and statistics on these AI systems.
Figure 5A – Key information, statistics, and ethical risk considerations for QChat (generative AI)* and the Mobile Phone and Seatbelt Technology program** (figure not reproduced).
Notes:
CDSB – Department of Customer Services, Open Data and Small and Family Business.
QRO – Queensland Revenue Office.
MPST – Mobile phone and seatbelt technology.
* The statistics for QChat are from its commencement on 23 February 2024 to 30 June 2025.
** Assessments by the AI for potential mobile phone and seatbelt offences can be done using the same photo and vehicle. The number of assessments for the MPST program is from 1 January 2024 to 31 December 2024.
Compiled by the Queensland Audit Office.
Has TMR established effective governance arrangements to oversee ethical risks for AI systems?
TMR has not yet established department-wide AI policies or new governance arrangements to consistently oversee the ethical risks of AI systems. It also has not assessed whether its existing information and communication technology governance and risk management processes align with the requirements of the Queensland Government’s AI governance policy.
Under the Queensland Government’s AI governance policy, TMR is required to align its AI governance arrangements with ISO 38507 – Information technology – Governance of IT – Governance implications of the use of artificial intelligence by organizations. TMR has also not assessed whether its system-level governance for the MPST program and QChat aligns with these requirements – we discuss system-level governance in further detail below. The specific requirements of this policy for entities are discussed further in Chapter 4.
TMR is aware that its existing information and communication technology policies and governance arrangements may need to be updated. Until an assessment is made, there is an increased potential that ethical risks are not effectively managed across the department. This may limit its ability to demonstrate responsible system oversight.
As part of updating these governance arrangements, TMR will need to consider what types of assurance mechanisms are suitable for different AI systems. These mechanisms can assess whether TMR’s processes and controls are working effectively to manage ethical risks within its risk appetite.
TMR is developing an AI Strategic Roadmap 2025–28 that includes initiatives aimed at identifying and addressing these gaps. Strengthening governance arrangements and implementing assurance mechanisms in the roadmap to manage ethical risks is necessary to ensure consistent and effective oversight of AI systems.
TMR needs to improve its visibility of AI systems in use and under development
TMR does not have full visibility over the AI systems in use or under development across the department. It has not established a central record or AI inventory to track where AI is used, the purpose of each system, or the risks linked to them.
Not knowing the extent to which, and how, AI is used across the department inhibits its ability to:
- manage ethical risks
- know whether controls are consistently applied
- maximise any benefits from the AI use.
Improving its visibility of AI systems in use would support consistent oversight and allow it to better manage related risks.
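As an illustration of what a central record makes possible, the sketch below shows a minimal AI inventory and two of the queries it supports. The fields and entries are hypothetical, not drawn from TMR's systems.

```python
# Hypothetical central AI inventory with illustrative fields and entries.
inventory = [
    {"system": "MPST", "purpose": "Detect mobile phone and seatbelt offences",
     "owner": "MPST program board", "ethical_assessment_done": False},
    {"system": "QChat", "purpose": "Generative AI assistant for staff",
     "owner": None, "ethical_assessment_done": False},
]

# Queries a central record makes possible: systems lacking a named owner,
# and systems still awaiting a dedicated ethical risk assessment.
no_owner = [s["system"] for s in inventory if s["owner"] is None]
unassessed = [s["system"] for s in inventory if not s["ethical_assessment_done"]]
print("No accountable owner:", no_owner)
print("Awaiting ethical risk assessment:", unassessed)
```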
Recommendation 4: We recommend that the Department of Transport and Main Roads enhances its governance arrangements to support responsible use of AI.
Is TMR effectively identifying and managing the ethical risks associated with the MPST program?
This section examines whether TMR has the necessary governance structures, ethical risk assessments, and mitigation strategies in place to manage ethical risks associated with the use of AI in the MPST program.
TMR is identifying and managing certain aspects of ethical risks associated with the MPST program. However, it has not completed a full ethical risk assessment, which is important to ensure all ethical risks are fully understood and addressed. It has also not reviewed whether its governance structures for the MPST program align with the Queensland Government’s AI governance policy.
TMR conducted the risk assessment for the MPST program before the AI governance policy was introduced in September 2024, which explains why some steps were not taken earlier. It remains important that TMR now applies an ethical framework to ensure it identifies and manages all ethical risks.
Governance arrangements for the MPST program are well positioned to oversee ethical risks, but require further assessment to confirm effectiveness
TMR has established governance arrangements to oversee the MPST program. These governance arrangements support consistent and coordinated oversight and risk management of the MPST program.
The following formal governance arrangements support the MPST program:
Formal oversight – Governance arrangements include an oversight board, executive committees, and working groups that meet regularly and keep records of key decisions and actions. | |
Clear roles and responsibilities – Policies and procedures define roles and responsibilities to support consistent management of ethical risks. | |
Structured risk management processes – Risk owners are assigned, and risk mitigation plans are in place to support monitoring and response activities. |
These arrangements are well positioned to oversee the management of ethical risks. However, TMR has not assessed whether these governance arrangements fully respond to ethical risk considerations or if they are aligned to the Queensland Government’s AI governance policy. This creates a risk that some ethical issues may not be fully identified, managed, or overseen through the governance arrangements.
Completing these steps would support TMR to confirm whether the governance arrangements remain appropriate to oversee ethical risks.
TMR has implemented strategies to mitigate some ethical risks for the MPST program, but has not yet undertaken a full ethical risk assessment
The MPST program uses image-recognition AI to detect driving offences, which introduces a range of ethical risks. TMR has implemented controls to support system reliability and accuracy, protect privacy, enable fair outcomes in the fine adjudication process, and manage its contractual arrangements, aligning with several of Australia’s AI Ethics Principles.
Figure 5B shows key measures TMR and the Queensland Revenue Office (QRO) have implemented to mitigate risks for the MPST program.

Compiled by the Queensland Audit Office.
TMR monitors the accuracy of AI in the MPST program
TMR’s contract with the external vendor includes several service key performance indicators (KPIs) to ensure the AI system’s accuracy in determining which photos progress for human review.
Figure 5C summarises data from January to December 2024 to show the volume of photos captured, human reviews conducted, and fines issued, providing insight into the MPST program’s accuracy.

Notes:
- Numbers have been rounded to the nearest thousand.
- Assessments by the AI for potential mobile phone and seatbelt offences can be done using the same photo and vehicle.
- The fines issued include fines withdrawn.
Compiled by the Queensland Audit Office.
The AI is designed to support an efficient human review process by filtering out photos the MPST cameras capture that are unlikely to be offences. In 2024, the AI system reduced the volume of assessments requiring human review at the external vendor by 98.7 per cent, to 2.7 million. The high number of reviews by the external vendor allows more AI assessments to be checked while the AI’s accuracy is improved. This supports managing ethical risks by confirming the accuracy of potential offences before any decision is made.
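For context, the rounded figures above imply the scale of the filtering task, as the back-of-envelope calculation below shows. Because it is derived from rounded numbers, the implied total is an approximation, not an audited figure.

```python
# Reported for 2024: the AI passed 2.7 million assessments to human review,
# a 98.7 per cent reduction in the volume requiring review.
reviewed = 2_700_000   # assessments sent to human review
reduction = 0.987      # fraction filtered out by the AI

total = reviewed / (1 - reduction)   # implied total AI assessments
filtered = total - reviewed
print(f"Implied total assessments: {total:,.0f}")        # ~207.7 million
print(f"Filtered before human review: {filtered:,.0f}")  # ~205.0 million
```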
An ethical framework is not yet integrated into the MPST program risk assessment
The MPST program has been subject to general risk assessment processes that include ethical elements. However, TMR has not yet undertaken a full ethical risk assessment as required by the Queensland Government’s AI governance policy. This means it does not know whether all ethical risks for the MPST program are identified and managed. It can address this gap by integrating an ethical framework, such as the FAIRA, into its risk assessment process for AI systems. We discuss the FAIRA in Chapter 4 of this report.
The MPST program was implemented before the Department of Customer Services, Open Data and Small and Family Business (CDSB) introduced its AI governance policy in September 2024. That policy requires entities to use an ethical framework to assess risks.
TMR assessed risks for the MPST program before the requirement to apply an ethical framework was established. It primarily assessed risk through privacy impact assessments, which considered certain aspects of ethical risks. These assessments considered aspects of privacy, information storage and security, discrimination, and accountability.
TMR plans to adopt the FAIRA as part of its department-wide AI policy. It also needs to apply an ethical framework for existing AI systems, including the MPST program, and to any systems planned for future use.
All public sector entities using AI should assess risks using an ethical framework to help identify risks that may not be covered by other technical or operational reviews. This supports consistency and accountable decision-making. For this reason, public sector entities should apply an ethical framework when assessing risk for AI systems in use and under development.
Recommendation 7: We recommend all public sector entities implement ethical risk assessment processes for AI systems in use or under development to more comprehensively identify and manage ethical risks.
The MPST program has processes for continuous improvement
TMR has implemented structured and collaborative processes to support the continuous improvement of the AI used in the MPST program. These include weekly audits and quarterly performance reviews attended by QRO representatives. QRO also provides feedback on the AI’s performance to TMR, which is then shared with the external vendor. Together, these processes provide a formal mechanism to review performance data, identify system issues, and deliver feedback to the external vendor, supporting refinements to the AI system and helping maintain accuracy and fairness in its operation.
Is TMR effectively identifying and managing ethical risks associated with QChat?
This section assesses whether TMR has appropriate governance structures, risk assessment processes, and mitigation strategies in place to effectively manage the ethical risks associated with QChat.
TMR has made progress in assessing some ethical risks for QChat but has yet to complete a full ethical risk assessment of the system. It has not yet established governance arrangements or monitoring controls to manage QChat’s ethical risks, and it should improve the uptake of staff training to support responsible use.
Governance is not yet in place to oversee ethical risks of QChat
As a user of the QChat system, TMR is responsible for ensuring appropriate use by its staff. However, it has not yet established governance arrangements to oversee QChat’s use or to manage its ethical risks. Key elements to support the system’s use, such as policies, risk management processes, and oversight mechanisms, are not in place.
Governance for generative AI systems like QChat is important, as system responses can influence decision-making and service delivery. Without governance, there is a risk that ethical use, including information management, may not be managed effectively.
System-level safeguards are established, but monitoring and training need improvement
QChat uses multiple mechanisms to help ensure its responses to user prompts reflect Queensland Government values and relevant operational context. CDSB has built some of these mechanisms into the system. Others sit outside the AI system and entities need to implement them, such as monitoring use and training users on appropriate use.
Prompts are inputs or instructions provided to an AI system to guide its response or behaviour. They can be questions, statements, or commands designed to elicit a specific type of output from the AI system.
The first safeguard for QChat comes from the model’s design, with Microsoft-embedded safety measures included to reduce harmful or biased outputs.
CDSB manages the system-level safeguards within QChat, which include:
- CDSB safety prompts – which help detect and block harmful, inappropriate, or risky content from being generated or discussed for all entities
- contextual responses – which help ensure responses are relevant, safe, and aligned with the Queensland Government’s tone and style for all entities
- entity prompts – which allow entities to adjust QChat’s responses to align with their users’ needs and preferences.
Entities need to configure entity prompts to fit their specific needs. Using entity prompts can help strengthen the reliability and accuracy of QChat’s responses by providing it with information the underlying model may not already know. For example, entities can include up-to-date departmental arrangements to prevent QChat from responding with incorrect or outdated information.
TMR has not yet configured any entity prompts. Using this safeguard could improve the accuracy of responses from QChat and reduce the risk of users receiving misleading guidance.
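Conceptually, the layering of these safeguards can be sketched as follows. The prompt text, message format, and function are illustrative assumptions; QChat's actual implementation is managed by CDSB and was not examined at this level of detail.

```python
from __future__ import annotations

# Illustrative stand-ins for the CDSB-managed, whole-of-government layers.
SAFETY_PROMPT = "Refuse harmful, inappropriate, or risky requests."
CONTEXT_PROMPT = "Respond in the Queensland Government's tone and style."

def build_prompt(entity_prompt: str | None, user_prompt: str) -> list[dict]:
    """Assemble the layered instructions passed to the underlying model."""
    messages = [
        {"role": "system", "content": SAFETY_PROMPT},
        {"role": "system", "content": CONTEXT_PROMPT},
    ]
    if entity_prompt:  # entity-specific context, e.g. current departmental arrangements
        messages.append({"role": "system", "content": entity_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages

# Without an entity prompt (TMR's current position), the model answers from its
# training data alone and may give outdated departmental information.
print(build_prompt(None, "Who approves travel requests?"))
```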
Figure 5D shows how each user prompt passes through system-level safeguards before QChat generates a response, and outlines measures entities can take to ensure appropriate use and oversight.

Compiled by the Queensland Audit Office.
QChat’s safeguards help it to refuse requests for harmful, illegal, or inappropriate information such as creating weapons, spreading misinformation, or engaging in harassment.
It will also include messages in its responses to remind users of their obligations to use the system responsibly and ethically. For example, ‘Please ensure your use of this service aligns with ethical and legal standards’. This approach reinforces accountability, ensuring users understand the importance of appropriate and lawful interactions.
Generative AI systems can behave unpredictably and produce different outputs each time they are used. Some experts have demonstrated that the safeguards can be bypassed using specific prompt engineering, meaning these may not always prevent inappropriate use. This highlights the importance of effective monitoring and ensuring staff are trained to recognise and manage potential risks.
Monitoring controls for QChat need to be established
CDSB provides dashboards to entities that use QChat, offering an overview of user activity and usage, including summaries of conversation types. TMR has access to its dashboard but is not using it to monitor how staff interact with QChat or to identify emerging risks.
CDSB also has access to additional dashboards on content safety data, which it logs and monitors separately. CDSB has not yet shared this data with entities through the dashboards, limiting entities’ ability to detect potential misuse and respond appropriately.
TMR primarily relies on system-level safeguards developed by CDSB for QChat to mitigate ethical risks. While these measures help reduce the risk of inappropriate use, they are not sufficient on their own, as users may still interact with QChat in unintended or inappropriate ways that breach ethical and legislative requirements.
TMR has not implemented all controls identified in its August 2024 security impact assessment, which recommended actions such as establishing an AI policy and strategy, staff training, and processes for managing privacy and security breaches. Without these measures, TMR may have limited ability to ensure the ethical use of QChat and protect sensitive information.
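As an illustration of the kind of monitoring the dashboards could support, the sketch below flags conversations that triggered content-safety blocks. The log schema and threshold are hypothetical; the fields actually available are determined by CDSB.

```python
# Hypothetical extract of dashboard data on QChat conversations.
conversations = [
    {"user": "u1", "topic": "policy drafting", "safety_blocks": 0},
    {"user": "u2", "topic": "personal data query", "safety_blocks": 2},
    {"user": "u1", "topic": "document summarising", "safety_blocks": 0},
]

def flag_for_review(log: list, threshold: int = 1) -> list:
    """Surface conversations that triggered content-safety blocks."""
    return [c for c in log if c["safety_blocks"] >= threshold]

for conv in flag_for_review(conversations):
    print(f"Review: user={conv['user']}, topic={conv['topic']}, "
          f"blocks={conv['safety_blocks']}")
```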
Uptake of training and education in using generative AI could be improved
In 2024, TMR introduced a generative AI training course for its staff, but uptake has been low. It is now considering incorporating the content into mandatory training to improve participation and ensure consistent understanding across the department. TMR also ran an AI awareness campaign between 2024 and 2025 to reinforce staff understanding of the benefits and risks of generative AI tools.
Generative AI systems, such as QChat, can be applied across a wide range of tasks. The ethical risks associated with their use often depend on user behaviour, the type of data entered, and the context in which the system is used. For example, a user could accidentally enter protected information into a QChat prompt, which is not allowed under the terms of service. These risks cannot be fully addressed through system-level safeguards alone.
Ongoing education and training are needed to ensure staff understand how to use AI systems responsibly and in line with governance expectations. Targeted training helps staff recognise which data should not be entered into AI prompts, understand the limitations of generative responses, and apply appropriate judgement to system outputs.
To strengthen staff capability and reduce the risk of inappropriate use, TMR should develop a structured training and education plan. This would help build foundational AI knowledge, clarify staff responsibilities under the department’s governance arrangements, and support more informed, responsible use of AI.
Recommendation 5: We recommend that the Department of Transport and Main Roads improves QChat’s controls to manage ethical risks more effectively.
Recommendation 6: We recommend that the Department of Customer Services, Open Data and Small and Family Business supports entities to better manage the risks associated with using generative AI systems, such as QChat, by providing entities with access to content safety information.
TMR has not yet undertaken a full ethical risk assessment for QChat
TMR has not completed a full ethical risk assessment for QChat. It assessed information security risks in August 2024, before the Queensland Government’s AI governance policy introduced the requirement to apply an ethical framework in September 2024.
While the security risk assessment covers some ethical elements, this does not substitute for a full ethical risk assessment. TMR should update this assessment using an ethical framework based on how it intends to use QChat. This assessment should consider what information can be shared with QChat, as well as the potential impacts of its use on staff, service delivery, and decision-making processes.
We have made a recommendation to TMR to apply an ethical framework to all AI systems in use and planned, including QChat. Refer to recommendation 7 for further details.
CDSB has processes to continually improve QChat
QChat’s continuous improvement is managed by CDSB, not TMR. CDSB regularly tests the system, implements updates, and conducts privacy impact assessments and security testing to identify vulnerabilities. It also reviews user feedback to improve response accuracy and overall functionality.
This ongoing process helps maintain QChat’s security, reliability, and alignment with operational needs. This is essential because generative AI can be unpredictable and requires continuous oversight to manage risks and ensure safe, ethical operation.