AI in the US Election: Trump’s Use of AI
Artificial intelligence (AI) is rapidly becoming a fixture of modern technology, but as Dr Catherine Ball, Honorary Associate Professor at the ANU School of Cybernetics, discussed in a recent interview on Channel 9’s Today Weekend, its role in the US election raises significant concerns. The conversation explored how AI is not just shaping the future but is also being used in ways that pose real dangers to democracy, including in Australia.
Dr Ball highlighted the sophistication of modern AI, which has reached a point where the distinction between reality and AI-generated content is increasingly difficult to discern. She referred to the “uncanny valley”, the zone in which AI-produced images and videos are almost, but not quite, indistinguishable from real ones. A striking example was a video of Donald Trump and Elon Musk dancing which, despite being entirely AI-generated, gained massive popularity online. “You could tell that wasn’t real,” Dr Ball noted, but that instinctive recognition is becoming harder to rely on as AI improves.
The implications of such advancements extend far beyond entertainment. Dr Ball emphasised the “real, clear and present danger” that AI-generated misinformation and disinformation pose to democracy worldwide. As these technologies advance, the public’s ability to differentiate between what is real and what is not will become increasingly compromised. She referenced the global AI Safety Summit at Bletchley Park, where watermarking and labelling of AI-generated content were discussed. However, Dr Ball pointed out a significant concern: “That has not happened,” leaving a critical gap in the regulation of AI.
Impact of AI on Democracy and Misinformation
Dr Ball also delved into the neurobiological impact of AI-generated content, explaining that once an image is seen, it cannot be unseen, regardless of its authenticity. This has profound implications for public perception, as even known falsehoods can leave a lasting impression. Dr Ball recalled an incident involving an AI-generated image of Pope Francis wearing a fake Balenciaga jacket, which, despite obvious flaws, fooled many. She warned, “This was the worst these images are ever going to be,” a sign that AI’s ability to deceive will only improve.
The discussion also covered the intersection of AI and celebrity culture. Dr Ball noted how AI-generated content involving public figures, such as a fake endorsement attributed to Taylor Swift, could lead to significant legal challenges. “Large celebrities with buckets of money and good legal teams behind them” are likely to push back against the unauthorised use of their likeness, which may test the boundaries of existing legislative frameworks. The recent Hollywood strikes, which included demands to protect performers’ voices, images and likenesses from AI manipulation, underscore the urgency of addressing these issues.
Legal and Ethical Implications of AI-Generated Content
As AI technology evolves, so do the challenges of regulating it. Dr Ball expressed concerns about the current regulatory landscape, particularly given that many leading tech companies are based in the United States. “Until the US government or the US populace say we want the tech companies to regulate this, they are not going to do it,” she said. This lack of global regulation presents a significant risk, with AI-generated content having the potential to influence elections and public opinion worldwide, including in Australia.
Despite these challenges, Dr Ball remains optimistic about the potential for regulation. Regulation, she asserted, is “a really important part” of the broader armoury that could be deployed here. She also emphasised the crucial role of education in combating the spread of misinformation. Drawing on cybersecurity practice, Dr Ball advocated a “zero trust” approach: question every piece of content you encounter, whether it’s an image, an email, or a phone call. “Deep fake audios exist now. Don’t trust it,” she cautioned, highlighting the need for heightened vigilance in an era where reality can be easily manipulated.
Education and Personal Safety Measures
Dr Ball’s insights underscore the urgent need to proactively address the challenges posed by AI. As Australia navigates the complexities of the digital age, her call for stronger regulation, education, and public awareness provides a valuable framework for safeguarding the integrity of our democratic systems.
In a lighter moment during the interview, Dr Ball offered practical advice on personal safety measures. She recommended that families use code words to verify the authenticity of urgent messages, such as those asking for money. This simple yet effective strategy serves as a reminder of the importance of critical thinking in today’s digital world.
Dr Catherine Ball’s discussion highlights the power and potential dangers of AI. As this technology continues to advance, the need for robust education, regulation, and vigilance will become increasingly vital to ensure that AI serves as a tool for progress rather than a threat to democracy.
View the full interview here