Advances in Threat Actor Usage of AI Tools
Posted 2 months ago
Source: cloud.google.com (Tech story)
Key topics
AI Security
Threat Intelligence
Cybersecurity
Google Threat Intelligence Group reports a shift in threat actors' use of AI tools from productivity gains to deploying novel AI-enabled malware, sparking discussion on the implications of this new operational phase of AI abuse.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion: 1 comment in total. The story was posted Nov 8, 2025 at 8:20 PM EST (about 2 months ago); the first and only comment arrived immediately after posting, with no activity since.
HN story ID: 45861991 · Last synced: Nov 17, 2025, 5:57 AM
Want the full context? Read the primary article or dive into the live Hacker News thread.
This report serves as an update to our January 2025 analysis, "Adversarial Misuse of Generative AI," and details how government-backed threat actors and cyber criminals are integrating and experimenting with AI throughout the entire attack lifecycle. Our findings are drawn from the broader threat landscape.
Key Findings:
- First Use of "Just-in-Time" AI in Malware: For the first time, GTIG has identified malware families, such as PROMPTFLUX and PROMPTSTEAL, that use Large Language Models (LLMs) during execution. These tools dynamically generate malicious scripts, obfuscate their own code to evade detection, and leverage AI models to create malicious functions on demand rather than hard-coding them into the malware. While still nascent, this represents a significant step toward more autonomous and adaptive malware (a defensive triage sketch follows this list).
- "Social Engineering" to Bypass Safeguards: Threat actors are adopting social engineering-like pretexts in their prompts to bypass AI safety guardrails. We observed actors posing as students in a "capture-the-flag" competition or as cybersecurity researchers to persuade Gemini to provide information that would otherwise be blocked, enabling tool development.
- Maturing Cyber Crime Marketplace for AI Tooling: The underground marketplace for illicit AI tools matured in 2025. We have identified multiple offerings of multifunctional tools designed to support phishing, malware development, and vulnerability research, lowering the barrier to entry for less sophisticated actors.
- Continued Augmentation of the Full Attack Lifecycle: State-sponsored actors, including from North Korea, Iran, and the People's Republic of China (PRC), continue to misuse Gemini to enhance all stages of their operations, from reconnaissance and phishing lure creation to command-and-control (C2) development and data exfiltration.
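To make the "just-in-time" finding concrete from a defender's perspective, here is a minimal static-triage sketch: it flags files that embed hosted-LLM API endpoints alongside long hard-coded tokens, the kind of artifact implied by malware that calls an LLM API at runtime. The endpoint list, the token heuristic, and the `triage` helper are illustrative assumptions for this sketch, not GTIG detection logic or a vetted ruleset; benign software (IDE plugins, chat clients) will also match, so treat hits as leads for analysis, not verdicts.

```python
"""Triage sketch: flag binaries/scripts that embed generative-AI API endpoints.

Hypothetical heuristic inspired by the "just-in-time" AI malware finding
(malware querying an LLM during execution). All indicators below are
illustrative assumptions, not a production detection ruleset.
"""
import re
import sys
from pathlib import Path

# Assumed indicator list: public hostnames of hosted LLM APIs. Legitimate
# AI-enabled software also embeds these, so a hit is only a triage lead.
LLM_ENDPOINTS = [
    b"generativelanguage.googleapis.com",  # Gemini API
    b"api.openai.com",
    b"api.anthropic.com",
]

# Loose shape for long tokens that could be hard-coded API credentials.
KEY_PATTERN = re.compile(rb"[A-Za-z0-9_\-]{30,}")


def triage(path: Path) -> None:
    """Print a lead if the file embeds an LLM endpoint, with a token count."""
    data = path.read_bytes()
    hits = [ep.decode() for ep in LLM_ENDPOINTS if ep in data]
    if not hits:
        return
    token_count = len(KEY_PATTERN.findall(data))
    print(f"{path}: embeds LLM endpoint(s) {hits}; "
          f"{token_count} long token(s) that may be hard-coded credentials")


if __name__ == "__main__":
    # Usage: python triage.py sample1.bin sample2.vbs ...
    for arg in sys.argv[1:]:
        triage(Path(arg))
```

A runtime complement to this static pass would be monitoring for unexpected processes opening connections to the same endpoints, since on-demand script generation requires the implant to reach the model while executing.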