Rising Concerns Over AI-Enabled Crimes in Australia
Australian adults are increasingly voicing fears about crimes enabled by artificial intelligence (AI), with recent surveys painting a picture of widespread apprehension. According to the Australian Institute of Criminology's (AIC) Statistical Bulletin 51, half of respondents expressed worry about AI causing them harm, while nearly one in five anticipated becoming a victim within the next year.
The Australian Cybercrime Survey, which informed the AIC report, gathered insights from over 6,000 respondents in 2024. It revealed that 52% of respondents regard AI-enabled location tracking as a common misuse, the highest of any category, followed closely by fears of deepfake videos and impersonations used for catfishing or financial deception.
Andrew Childs, a criminology lecturer at Griffith University, notes that AI is 'rapidly becoming normalised' in daily activities, from work tools to personal planning, yet offenders are distributing 'dark AI tools' without safeguards on illicit platforms.
Understanding AI-Enabled Crime: Definitions and Mechanisms
AI-enabled crime refers to criminal activities augmented by artificial intelligence technologies, such as generative AI models that create realistic deepfakes (synthetic media mimicking real people using AI algorithms), voice cloning (replicating someone's speech from short audio samples), and automated phishing. Unlike traditional cybercrimes, these leverage machine learning to scale attacks, personalize deceptions, and evade detection.
The process typically begins with data collection from social media—photos, videos, voices—fed into open-source AI like Stable Diffusion or ElevenLabs. Criminals then generate convincing fakes for investment scams, romance frauds, or harassment. For instance, deepfake videos of celebrities endorsing bogus schemes have cost Australians over $382 million in investment scams alone during 2023-2024.
Monash University's Abhinav Dhall highlights the global rise in dark web services offering these capabilities, noting Australia's experience mirrors worldwide trends where low-cost tools democratize crime.
Deepfake Scams: A Growing Threat Down Under
Deepfake scams represent one of the most visceral fears, with over 30% of Australians dreading victimization through AI-generated impersonations.
Recent cases include AI-generated videos of executives requesting urgent fund transfers, which have fooled employees into multimillion-dollar losses. Around Valentine's Day, romance scams spike, with deepfakes used to build false intimacy rapidly, as warned by UNSW's Dr. Lesley Land.

AIC data shows 43% perceive AI impersonations for financial gain as common, underscoring the need for vigilance. Read the full AIC report for the complete findings.
Voice Cloning and Impersonation Frauds
Voice cloning scams, in which AI replicates a loved one's voice from just seconds of audio, evoke particular terror. Parents especially fear AI-driven grooming via fake child profiles, with 30% viewing it as frequent. Other misuses respondents perceive as common include:
- AI-assisted password cracking or account takeover (41% perceive as common).
- Smart devices being manipulated to issue false commands and 'gaslight' users.
- Non-consensual sexualised deepfakes ('revenge porn'), which 41% perceive as common.
Griffith's Childs warns of shifting targets to younger demographics via fake sites, expanding beyond elderly victims.
Statistics and Real-World Impacts
Australia lost $2.03 billion to scams in 2024, with AI amplifying the scale of attacks. Cyber.gov.au's 2024-25 report notes that AI enables larger campaigns. In the AIC survey, 2.9% of respondents reported AI-enabled stalking, 7% non-consensual images, and under 1% direct deepfake victimisation, though underreporting likely skews these figures downward.
Demographics matter: older Australians worry more despite perceiving these crimes as less common, while parents fret over risks to children. Meanwhile, 74% of respondents use AI apps daily, averaging 3-4 hours online. See the Cyber.gov.au Annual Report for further detail.
| AI Misuse | % Perceiving as Common |
|---|---|
| Location Tracking | 52% |
| Deepfakes/Catfishing | 50% |
| Financial Impersonation | 43% |
University Research Pioneering Solutions
Australian universities are leading the response to fears of AI-enabled crime. Monash University has partnered with the AFP on 'Silverer', a data-poisoning tool that subtly alters image pixels to corrupt deepfake training data, rendering generated outputs blurry.
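Silverer's exact algorithm has not been published in detail; as a loose illustration of the general idea behind data poisoning, a tool of this kind adds small perturbations to pixel values so that scraped copies degrade model training while the image looks unchanged to a human viewer. A minimal toy sketch (the function name and `epsilon` bound are illustrative assumptions, not Silverer's actual API):

```python
import numpy as np

def poison_image(pixels: np.ndarray, epsilon: int = 8, seed: int = 0) -> np.ndarray:
    """Toy data-poisoning sketch: add small bounded random noise to each pixel.

    Real tools use carefully optimised (adversarial) perturbations rather than
    plain random noise; this only illustrates the bounded-perturbation idea.
    """
    rng = np.random.default_rng(seed)
    # Noise drawn from [-epsilon, epsilon]; small enough to be imperceptible
    noise = rng.integers(-epsilon, epsilon + 1, size=pixels.shape)
    # Widen dtype before adding, then clip back to the valid 8-bit range
    return np.clip(pixels.astype(np.int16) + noise, 0, 255).astype(np.uint8)

# Example: perturb a small uniform grayscale image
img = np.full((4, 4), 128, dtype=np.uint8)
poisoned = poison_image(img)
```

In practice, effective poisoning perturbations are optimised against the target model family rather than sampled at random, which is why purpose-built tools exist at all.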
UNSW is researching the evolution of romance scams, emphasizing limits on sharing biometric data. UTS hosts seminars on deepfakes and cryptocrime, while Griffith and Monash experts provide policy insights.
Monash's December 2025 report on sexualised deepfake abuse offers perpetrator/victim perspectives, informing laws.
For those in higher ed jobs in data science or criminology, these projects open doors.
Government and Law Enforcement Responses
Australia's technology-neutral laws cover AI crimes through existing cybercrime acts. The eSafety Commissioner targets deepfake pornography, and new laws ban sexually explicit deepfakes. The AFP's AiLECS Lab advances technical defenses.
RSIS praises Australia's distributed governance approach, combining criminal law, regulation, and institutions. Dr. Rick Brown (AIC) urges public education on scams; see RSIS's analysis of Australia's AI crime strategy.
Challenges in Detection and Prevention
People are confident they can spot AI fakes, but accuracy is low: CommBank research found a detection rate of just 42%. 'Dark AI' tools lack safeguards, and automation lets attacks scale faster than responders can keep up. Key obstacles include:
- Underreporting due to embarrassment.
- Global dark web supply chains.
- Balancing beneficial AI with risks.
Actionable Insights: Protecting Against AI Threats
Experts recommend:
- Verify unexpected requests with safe words.
- Limit public biometrics online.
- Use antivirus with AI detection.
- Report to Scamwatch.
- Poison personal images with tools like Silverer.
Discuss scams with family; educate via university resources. Explore higher ed career advice in cybersecurity.
Future Outlook: Innovation and Policy
Analysts predict AI scams will reach industrial scale by 2026. Universities like Monash continue to innovate defenses, while policymakers eye new safeguards. On the positive side, AI itself can power fraud detection. The AIC calls for regulations on location tracking and for verification standards.
Higher education plays a key role, training experts amid growing demand for AI ethics researchers. Check university jobs in criminology.
The Role of Higher Education in Shaping Tomorrow's Defenses
Australian universities bridge the gap between fears and solutions. Programs in AI ethics and cybersecurity are proliferating, and Griffith, Monash, and UNSW are producing leaders in combating these threats. Aspiring academics can rate professors via Rate My Professor.
Collaborations like AiLECS exemplify academia-police synergy. Expect more PhDs in AI forensics and more grants for deepfake research in the years ahead.
Conclusion: Navigating AI-Enabled Crime Fears
Fears of AI-enabled crime reflect technology's double edge. With half of Australians worried, proactive research from Australian universities offers hope. Stay informed, adopt defenses, and explore this growing field through higher ed jobs, career advice, university job listings, and Rate My Professor.