As the nation gears up for a federal election, many involved in politics are becoming increasingly concerned about the role that will be played by artificial intelligence, writes Gareth Cox.
AI technology’s ability to generate misleading and deceptive content makes it a powerful tool for manipulation, and those seeking to shift public sentiment and influence election results are embracing it with gusto.
Of particular concern is AI’s ability to generate seemingly authentic photographic, video, and audio content. Photos and clips that appear to show politicians saying or doing things they never said or did can spread rapidly on social media.
Such capabilities underscore a pressing challenge: how do Australian political leaders protect the integrity of elections when people can no longer trust what they see and hear?
Election campaigns are fertile ground for misinformation
Elections are the perfect storm for AI-driven cyberthreats due to their high stakes, emotionally charged atmosphere, and the sheer volume of information voters must navigate. Misinformation and disinformation campaigns thrive in such environments, where deepfake content can be weaponised to mislead voters, discredit political figures, or even fabricate endorsements.
AI can also be used to execute cyberattacks against political campaigns and electoral infrastructure. For example, Iranian hackers allegedly attempted to infiltrate Donald Trump’s 2024 campaign using AI-generated phishing emails designed to steal credentials from campaign staff. The ability to launch such sophisticated attacks at scale makes AI a formidable tool for bad actors.
An ongoing challenge
Unfortunately, Australia’s highly digital and connected population is no stranger to AI-driven disinformation on social media platforms. Fortunately, proactive measures are already in place.
The Australian Electoral Commission recently relaunched its Stop and Consider campaign, which is designed to alert voters to the risks posed by AI-generated disinformation. The campaign explains what the technology can do, the types of outputs it can generate, and the steps people can take to avoid being misled.
While such initiatives provide a strong foundation, enforcement remains a challenge in today’s digital age. AI-generated disinformation tends to spread faster than regulations can react, making proactive monitoring and public vigilance equally important.
Improving and extending legislative protections
While robust legislation is critical, legal frameworks alone cannot stem the tide of AI-generated disinformation and cyberattacks. Governments must complement regulatory measures with advanced technological defences to safeguard election integrity.
One strategy is deploying AI-powered cybersecurity tools that align with the widely respected MITRE ATT&CK framework, which catalogues adversary tactics and techniques and provides a structured approach to threat detection and response.
However, real-world use cases reveal that relying on the framework alone is insufficient. Attackers constantly evolve their methods, bypass known techniques, and exploit blind spots the framework does not cover. To stay ahead, security teams must combine MITRE ATT&CK with real-time threat intelligence, behavioural analytics, and proactive threat hunting.
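To make that combination concrete, here is a minimal sketch of how an alert pipeline might fold ATT&CK context into behavioural triage. The rule names, alert structure, and thresholds are illustrative assumptions rather than any vendor’s actual product; only the technique IDs (T1566, T1078, T1110) come from the real ATT&CK catalogue.

```python
# Minimal sketch: map local detection rules to MITRE ATT&CK techniques and
# combine that context with a behavioural score, since the framework alone
# cannot flag novel techniques. All rule names and thresholds are illustrative.
from dataclasses import dataclass

# Hypothetical rule names on the left; real ATT&CK technique IDs on the right.
RULE_TO_TECHNIQUE = {
    "suspicious_login_location": "T1078",  # Valid Accounts
    "credential_phishing_email": "T1566",  # Phishing
    "rapid_failed_logins": "T1110",        # Brute Force
}

@dataclass
class Alert:
    rule: str
    user: str
    behaviour_score: float  # 0.0 (normal) to 1.0 (highly anomalous)

def triage(alert: Alert, escalation_threshold: float = 0.7) -> str:
    """Blend ATT&CK mapping with behavioural analytics to decide a response."""
    technique = RULE_TO_TECHNIQUE.get(alert.rule)
    if technique and alert.behaviour_score >= escalation_threshold:
        return f"ESCALATE: {alert.user} matched {technique} with anomalous behaviour"
    if technique:
        return f"REVIEW: {alert.user} matched {technique}"
    # No known technique: fall back on behaviour alone (proactive hunting).
    if alert.behaviour_score >= escalation_threshold:
        return f"HUNT: {alert.user} is anomalous but unmapped; investigate manually"
    return "OK"

print(triage(Alert("credential_phishing_email", "staffer@campaign.example", 0.9)))
```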
Governments can also adopt AI-driven cybersecurity solutions, such as automated fact-checking systems for content verification and real-time network monitoring platforms to detect threats like phishing attempts or unauthorised access.
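As a rough illustration of what such automated screening of inbound campaign email could look like, the sketch below scores messages against a handful of phishing indicators. The patterns and weights are invented for illustration; a production system would rely on trained models and far richer signals.

```python
# Toy phishing screen: sum the weights of matched indicators and flag high
# scores for human review. Indicators and weights are illustrative assumptions.
import re

SUSPICIOUS_PATTERNS = {
    r"verify your (account|credentials)": 0.4,
    r"urgent(ly)? (action|response) required": 0.3,
    r"https?://\S*\.(zip|ru|tk)\b": 0.5,  # unusual link targets (illustrative)
    r"password|login details": 0.2,
}

def phishing_score(email_body: str) -> float:
    """Return a 0-1 suspicion score; anything matched adds its weight."""
    score = sum(
        weight
        for pattern, weight in SUSPICIOUS_PATTERNS.items()
        if re.search(pattern, email_body, re.IGNORECASE)
    )
    return min(score, 1.0)

sample = "URGENT action required: verify your credentials at http://login.example.tk"
print(f"phishing score: {phishing_score(sample):.2f}")  # high score -> review
```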
Voters need their own protection
Election integrity is not solely the responsibility of governments; voters also play a crucial role. They should be discerning about the content they consume and learn to identify when content has been digitally manipulated. Common red flags include unnatural facial expressions, mismatched audio, and inconsistencies in speech patterns.
When in doubt, voters should use fact-checking tools or deepfake detection software to verify content authenticity. Additionally, they should report suspicious material and share information responsibly to prevent disinformation from spreading further.
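For illustration only, the toy helper below turns those red flags into a simple review score a viewer might keep in mind. It does not analyse the media itself, as real deepfake detection software does; the flags and weights are assumptions made up for this sketch.

```python
# Toy checklist: weigh the red flags a viewer noticed into a suspicion score.
# Flags and weights are illustrative, not a real detection method.
RED_FLAGS = {
    "unnatural_facial_expressions": 0.35,
    "mismatched_audio": 0.35,
    "inconsistent_speech_patterns": 0.2,
    "no_coverage_from_reputable_outlets": 0.1,
}

def review_score(observed_flags: set[str]) -> float:
    """Return a 0-1 suspicion score from the flags a viewer ticked off."""
    return sum(RED_FLAGS.get(flag, 0.0) for flag in observed_flags)

flags = {"mismatched_audio", "no_coverage_from_reputable_outlets"}
score = review_score(flags)
print("verify with a fact-checker" if score >= 0.3 else "low concern")
```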
A simple rule of thumb: if content appears overly inflammatory, perfectly aligned with one’s biases, or too shocking to be true, it probably isn’t true. AI-driven disinformation thrives in the gap between what’s real and what’s emotional. The more we shrink that gap with awareness and critical thinking, the safer we’ll be.
AI will continue to evolve, making disinformation and cyberattacks increasingly difficult to detect. Soon, even the most discerning individuals may struggle to differentiate reality from manipulation.
By embracing cutting-edge cybersecurity measures, strengthening public awareness, and implementing decisive policies, Australians can ensure that AI enhances rather than endangers electoral integrity.
Gareth Cox, vice president sales – Asia Pacific and Japan, Exabeam