AI-Enabled Crime Fears Grip Australia: Universities Lead Research Amid Rising Threats

Widespread Worries Over Deepfakes and Scams

  • research-publication-news
  • monash-university
  • australian-universities-ai
  • unsw
  • griffith-university
Photo by Datingscout on Unsplash

Rising Concerns Over AI-Enabled Crimes in Australia

Australian adults are increasingly voicing fears about artificial intelligence (AI)-enabled crimes, with recent surveys painting a picture of widespread apprehension. According to the Australian Institute of Criminology's (AIC) Statistical Bulletin 51, half of respondents expressed worry about AI causing them harm, while nearly one in five anticipated becoming a victim within the next year. [80][82] This concern stems from AI's rapid integration into everyday life, where tools once seen as productivity boosters are now perceived as potential weapons for scams, impersonations, and harassment.

The Australian Cybercrime Survey, which informed the AIC report, gathered insights from over 6,000 respondents in 2024. It revealed that 52% believe AI location tracking is the most common misuse, followed closely by fears of deepfake videos and impersonations for catfishing or financial trickery. [77] These fears are not unfounded, as cybercrime reports in Australia surge, with a new incident every six minutes according to government data.

Andrew Childs, a criminology lecturer at Griffith University, notes that AI is 'rapidly becoming normalised' in daily activities, from work tools to personal planning, yet offenders are distributing 'dark AI tools' without safeguards on illicit platforms. [82] This normalisation amplifies vulnerabilities, particularly as corporate investment lowers barriers to AI access.

Understanding AI-Enabled Crime: Definitions and Mechanisms

AI-enabled crime refers to criminal activities augmented by artificial intelligence technologies, such as generative AI models that create realistic deepfakes (synthetic media mimicking real people using AI algorithms), voice cloning (replicating someone's speech from short audio samples), and automated phishing. Unlike traditional cybercrimes, these leverage machine learning to scale attacks, personalize deceptions, and evade detection.

The process typically begins with data collection from social media—photos, videos, voices—fed into open-source AI like Stable Diffusion or ElevenLabs. Criminals then generate convincing fakes for investment scams, romance frauds, or harassment. For instance, deepfake videos of celebrities endorsing bogus schemes have cost Australians over $382 million in investment scams alone during 2023-2024. [78]

Monash University's Abhinav Dhall highlights the global rise in dark web services offering these capabilities, noting Australia's experience mirrors worldwide trends where low-cost tools democratize crime. [82]

Deepfake Scams: A Growing Threat Down Under

Deepfake scams represent one of the most visceral fears, with over 30% of Australians dreading victimisation through AI-generated impersonations. [82] Commonwealth Bank's September 2025 study of 1,988 Australians found 27% had witnessed deepfakes, most often in investment scams (59%), business email compromise (40%) and romance scams (38%). Yet detection accuracy hovers at 42%, worse than chance, despite 89% self-reported confidence. [76]

Recent cases include AI videos of executives requesting urgent transfers, fooling employees into multimillion-dollar losses. Around Valentine's Day, romance scams spike, using deepfakes to build false intimacy rapidly, as warned by UNSW's Dr Lesley Land. [79] These exploits prey on emotional vulnerabilities, with agentic AI automating grooming over weeks.

[Illustration: a deepfake scam using AI-generated video impersonation]

AIC data shows 43% perceive AI impersonations for financial gain as common, underscoring the need for vigilance. Read the full AIC report.

Voice Cloning and Impersonation Frauds

Voice cloning scams, where AI replicates a loved one's voice from just seconds of audio, are a particular source of alarm. Parents especially fear AI grooming via fake child profiles, with 30% viewing it as frequent. [82] Over 16% report unauthorised tracking, often via AI-enhanced apps. Other misuses widely perceived as common include:

  • AI cracking passwords or accessing accounts (41% perceive this as common).
  • Smart devices being manipulated to gaslight users with fabricated commands.
  • Non-consensual 'revenge porn' deepfakes, which 41% perceive as common.

Griffith's Childs warns that offenders are shifting their targets toward younger demographics via fake websites, expanding beyond the stereotype of elderly victims.

Statistics and Real-World Impacts

Australia lost $2.03 billion to scams in 2024, with AI amplifying the scale of attacks. Cyber.gov.au's 2024-25 report notes that AI enables larger, more automated campaigns. In the AIC survey, 2.9% of respondents had experienced AI-facilitated stalking, 7% non-consensual imagery and less than 1% direct deepfake victimisation, although underreporting likely skews these figures. [82]

Demographically, older Australians worry more despite perceiving these crimes as less common, while parents are most concerned about risks to their children. Some 74% of respondents use AI apps daily, averaging 3-4 hours online. See the Cyber.gov.au Annual Report. The most widely feared misuses are summarised below.

AI crime fear              % worried
Location tracking          52%
Deepfakes / catfishing     50%
Financial impersonation    43%

University Research Pioneering Solutions

Australian universities are leading research into AI-enabled crime and the fears surrounding it. Monash University partners with the AFP on 'Silverer', a data-poisoning tool that alters image pixels to corrupt deepfake training data, rendering outputs blurry. [78] PhD candidate Elizabeth Perry's prototype targets child abuse material and scams.
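The Monash/AFP team has not published Silverer's internals, but the general idea of image data poisoning can be sketched briefly: before sharing a photo, add a faint, structured perturbation to its pixels so that models trained on scraped copies learn the injected artefact rather than a clean likeness. The snippet below is a hypothetical, minimal illustration in Python; the pattern, strength value and function name are assumptions for demonstration, not the actual Silverer method.

```python
# Minimal, hypothetical sketch of image "data poisoning": add a faint,
# structured perturbation to pixel values before posting a photo online.
# This is NOT the Monash/AFP Silverer tool, only an illustration of the idea.
import numpy as np
from PIL import Image


def poison_image(path_in: str, path_out: str, strength: float = 4.0) -> None:
    """Overlay a low-amplitude sinusoidal pattern on every pixel.

    The change is barely visible to a person, but models trained on many
    such images tend to learn the injected artefact, degrading their output.
    """
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)
    height, width, _ = img.shape

    # Deterministic, image-wide interference pattern with values in [-1, 1].
    yy, xx = np.mgrid[0:height, 0:width]
    pattern = np.sin(xx / 3.0) * np.cos(yy / 3.0)

    # Scale the pattern, broadcast it across the RGB channels, and clamp.
    poisoned = np.clip(img + strength * pattern[..., np.newaxis], 0, 255)
    Image.fromarray(poisoned.astype(np.uint8)).save(path_out)


# Example usage (hypothetical file names):
# poison_image("profile.jpg", "profile_poisoned.jpg")
```

Production tools of this kind reportedly tune the perturbation against specific model feature extractors, keeping it imperceptible to people while more reliably degrading generated output.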

UNSW researches how romance scams are evolving, emphasising limits on sharing biometric data. UTS hosts seminars on deepfakes and cryptocrime, while Griffith and Monash experts provide policy insights. Craft your academic CV for cybersecurity roles.

Monash's December 2025 report on sexualised deepfake abuse offers perpetrator/victim perspectives, informing laws.

[Image: Monash University and AFP's Silverer data-poisoning tool for countering deepfakes]

For those in higher ed jobs in data science or criminology, these projects open doors.

Government and Law Enforcement Responses

Australia's technology-neutral laws cover AI-enabled crimes under existing cybercrime legislation. The eSafety Commissioner targets deepfake pornography, and new laws ban sexually explicit deepfakes, while the AFP's AiLECS Lab advances technical defences.

RSIS praises Australia's distributed governance approach, which combines criminal law, regulation and institutional oversight. Dr Rick Brown of the AIC urges public education on scams. See RSIS on Australia's AI Crime Strategy.

Challenges in Detection and Prevention

High confidence belies low accuracy: CommBank found only a 42% deepfake detection rate. Dark AI tools lack safeguards, and attacks scaled through automation overwhelm responders. Other persistent challenges include:

  • Underreporting driven by victim embarrassment.
  • Global dark-web supply chains for illicit AI tools.
  • Balancing beneficial uses of AI against criminal risks.

Actionable Insights: Protecting Against AI Threats

Experts recommend:

  • Verify unexpected requests using pre-agreed safe words.
  • Limit publicly shared biometric data (photos, voice clips) online.
  • Use antivirus software with AI-powered detection.
  • Report incidents to Scamwatch.
  • Poison personal images with tools like Silverer.

Discuss scams with family members and draw on university education resources. Explore higher ed career advice in cybersecurity.

Future Outlook: Innovation and Policy

Experts predict AI scams will reach industrial scale by 2026. Universities like Monash continue to innovate defences while policymakers weigh new safeguards. On the positive side, AI is also being applied to fraud detection. The AIC calls for regulation of tracking technologies and identity-verification standards.

Higher education plays a key role in training experts amid growing demand for AI ethics researchers. Check university jobs in criminology.

The Role of Higher Education in Shaping Tomorrow's Defenses

Australian universities bridge public fears and practical solutions. Programs in AI ethics and cybersecurity are proliferating, and Griffith, Monash and UNSW are producing leaders in combating these threats. Aspiring academics can rate professors via Rate My Professor.

Collaborations like AiLECS exemplify academia-police synergy. Looking ahead, expect more PhD projects in AI forensics and new grants for deepfake research.


Photo by Jay lee on Unsplash

Conclusion: Navigating AI-Enabled Crime Fears

AI-enabled crime fears reflect technology's double edge. With half of Australians worried, proactive research from Australian universities offers hope. Stay informed, adopt defences, pursue higher ed jobs in this field, seek career advice, explore university jobs, and rate experiences at Rate My Professor.

Frequently Asked Questions

🤖 What are AI-enabled crimes?

AI-enabled crimes use artificial intelligence for scams, deepfakes, voice cloning and harassment. Examples include fake celebrity videos that trick people into bogus investments.

😱 How common are deepfake fears in Australia?

Half of Australians worry about AI harm per the AIC survey, and around 30% fear deepfakes; 27% have witnessed deepfake scams per CommBank. [80]

🎓 Which universities research AI crime?

Monash (Silverer tool), UNSW (romance scams), Griffith (criminology), UTS (seminars). Explore higher ed jobs.

💸 What scams use deepfakes?

Investment (59%), email compromise (40%), romance (38%). Losses: $382m in 2023-24.

🛡️ How does Silverer work?

The Monash-AFP tool poisons images, corrupting the output of AI deepfake models trained on them. See the AFP for details.

Can Australians spot deepfakes?

Only 42% accuracy despite 89% confidence (CommBank).

⚖️ Government actions on AI crime?

Tech-neutral laws, eSafety deepfake bans, AFP AiLECS.

Tips to avoid AI scams?

Use safe words, limit biometrics, verify calls, poison images.

🔮 Future of AI crime in Australia?

Industrial-scale scams predicted; unis innovate defenses.

💼 Higher ed careers in AI security?

Demand for criminology, data science experts. Check Rate My Professor and career advice.

📱 Stats on AI usage in Australia?

74% use AI apps daily, averaging 3-4 hours online.