Mandate for stronger AI regulation

Australians are strongly in favour of government regulation of AI, with research showing that just over a third of the public trusts the technology.

The Melbourne Business School study – which surveyed 48,340 people across 47 countries between November 2024 and January 2025 – found that while 50 per cent of Australians use AI regularly, only 36 per cent have confidence in it.

“There is a clear public mandate in Australia – as there is in all 47 countries we surveyed – for stronger regulation and governance of AI,” study lead Professor Nicole Gillespie told GN.

A combination of factors contributes to the public’s low trust in AI. “Australians are concerned about the safety and security of using AI systems and negative outcomes from AI use at scale on people and society,” Gillespie said.

The concerns are many and varied: cybersecurity risks, loss of privacy and IP; misinformation, disinformation and manipulation from AI-generated content and bots; deskilling and loss of jobs; inaccurate outcomes and bias in decision making; and loss of human interaction from AI use.

Gillespie told GN these concerns are not merely hypothetical. “Thirty-seven per cent of Australians say they are personally experiencing or observing these negative outcomes.”

The global study – the most comprehensive of its kind – reveals a tension: people are experiencing the benefits of AI but also its potential negative impacts. “This is fuelling a growing need for reassurance that AI systems are being used in a safe, secure and responsible way,” Gillespie said.

No surprise, then, that the research found 77 per cent of Australians believe government oversight is necessary. However, with only 30 per cent expressing faith that current safeguards are adequate to control the technology, the findings suggest the government faces an uphill climb in gaining public trust. “To consider and remedy this gap, policymakers need to not only design, implement and enforce appropriate AI regulation, but also to educate and raise public awareness of these laws,” Gillespie said.

The research offers key considerations for governments and regulators:

  • analyse where gaps exist in current regulation and laws
  • accelerate the development and implementation of AI regulation at the national and international level
  • support international coordination and cooperation to ensure consistent global standards, interoperability, and mitigation of AI risks
  • invest in methods to combat mis- and disinformation
  • invest in public AI training and education to support AI literacy and responsible use.

Gillespie told GN that building public trust also depends on clarifying and raising awareness of how existing laws apply to AI, the rights and responsibilities of each individual, and the responsibilities of organisations and governments to manage and enforce those laws.

“When people believe regulatory safeguards are adequate, they are considerably more likely to trust and accept the use of AI, underscoring the importance of having an effective regulatory framework in place and ensuring it is communicated widely to those that are governed by it,” Gillespie said.

Indeed, the research shows that 81 per cent of Australians say they would be more willing to trust AI if they were assured the government had responsible AI governance practices in place and the country adhered to international standards. As Gillespie told GN: “The public’s trust of AI technologies and their safe and secure use is central to acceptance and adoption.”
