#8: Offensive Augmentation
Five ways to leverage AI offensively
Welcome to CYBER_AI, a new newsletter from the Packt team focusing on—well, exactly what it says on the tin: cybersecurity in the age of AI.
Here we go on another step into the future, into a world where the world of cybersecurity brims with the confidence that AI can bring to our practice. Of course, this goal—like all goals—requires us to set up the foundations properly and figure out how we stand on them. That means, for all those struggling to make these ambitious bounds forward, establishing the “101” topics and making sure they are widely understood. For a look into the future, here’s our plan:
1. What “Cybersecurity AI” Actually Means
2. Machine Learning 101 for Security Professionals
3. Threat Detection with AI: From Rules to Models
4. Adversarial Machine Learning Basics
5. LLMs in Cybersecurity: Capabilities and Limitations
6. Securing AI Models and Pipelines
7. AI-Enhanced Offensive Techniques
8. Privacy and Data Protection in AI Systems
9. AI Governance, Ethics, and Risk Management
10. Building a Security-Aware AI Workflow
Sound good? Head over to Substack and sign up there!
Join us on Substack to find our bonus articles!
In this newsletter, we’ll explore how AI is transforming cybersecurity—what’s new, what’s next, and what you can do to stay secure in the age of intelligent threats.
Welcome aboard! The future of cyber defence starts here.
Cheers!
Austin Miller
Editor-in-Chief
News Wipe
Microsoft Warns of AI Shadow Agents & Prompt Injection Risks in the Workplace: Microsoft’s latest Cyber Pulse security report identifies Shadow AI — unsanctioned AI agents created by employees without IT oversight — as a growing risk vector. The analysis explains how prompt injection attacks can manipulate AI agents into executing unauthorized actions, and how insufficient governance and visibility around AI use can amplify corporate security gaps. The article urges adoption of “zero-trust” principles for AI agents, treating them as distinct enterprise identities to mitigate misuse and compliance risks.
“You can no longer trust what you see and hear“—Experts on AI’s Role in Geopolitical Cyberattacks: In the context of rising geopolitical tensions, cybersecurity leaders warn that AI technologies — especially deepfakes and AI-crafted phishing — are fuelling a new wave of sophisticated attacks on critical infrastructure. The report explains how AI’s ability to generate highly realistic synthetic content and highly personalized attack vectors is undermining traditional trust models and forcing organizations to rethink identity verification and multi-factor authentication strategies in their cyber defenses.
AI and Deepfakes Supercharge Sophisticated Cyber-Attacks, Says Cloudflare: A newly released threat intelligence report from Cloudflare highlights how widely available LLMs and other AI tools are lowering the technical bar for cybercriminals. According to the analysis, attackers are using AI to automate reconnaissance, tailor malware, and craft highly effective phishing campaigns at scale — effectively democratizing sophisticated attack capabilities that were once the domain of expert threat actors.
Cybersecurity is now the price of admission for industrial AI: Cisco’s 2026 State of Industrial AI Report finds that cybersecurity concerns have overtaken other barriers to AI adoption across industrial sectors (manufacturing, utilities, transport). The piece argues that as AI connects more assets and systems, traditional security architectures struggle to keep pace — making robust cybersecurity an unavoidable prerequisite for AI-powered infrastructure.
AI Risk Moves Into the Security Budget Spotlight: Based on the 2026 Thales Data Threat Report, this coverage examines how enterprises are now explicitly budgeting for AI security alongside broader cybersecurity programs. It outlines that deepfake exploitation and AI-generated misinformation are now factored into organizational threat models, and that dedicated AI security funding is becoming more common as risk awareness grows.
Culture, You, and AI
Malicious AI - An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream python library. This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats. Part 2 of the story. And a Wall Street Journal article.
AI and Deepfakes Supercharge Sophisticated Cyber-Attacks: This article summarizes a Cloudflare Threat Report highlighting how AI and deepfake tools are lowering the skill barrier for advanced attacks. The analysis explains how attackers with minimal technical expertise can now generate convincing deepfake content and automated exploitation workflows at scale, greatly increasing the volume and sophistication of social engineering, identity fraud, and SaaS abuse. It critiques traditional perimeter-centric defenses and emphasizes the need for adaptive identity and authentication controls that can keep pace with AI-assisted threats.
Fraudsters create 200+ AI slop websites in one operation: Synopsis: A technical investigative report detailing how attackers used generative AI to launch over 200 fraudulent “AI slop” websites in a single automated campaign. Researchers discovered the AI prompt-generation logic embedded in the sites’ source code, offering rare visibility into how threat actors leverage LLMs to rapidly scale low-effort scam operations. The article analyzes the economic model attackers use (very low per-page cost) and the limitations of current detection strategies — underscoring how automation has reshaped attacker economics and the practical challenges defenders face in attributing and mitigating these attacks.
’This is an AI arms race’ — CrowdStrike says attackers now move through networks in under 30 minutes, TechRadar: This article critically analyzes CrowdStrike’s 2026 Global Threat Report, which reveals a dramatic shift in adversary behavior driven by generative AI. It reports that AI-assisted attackers are completing lateral movement within compromised environments in as little as 29 minutes, up significantly from previous years. The coverage dissects how AI accelerates reconnaissance, credential theft, evasion, and fake-service impersonation, and concludes with expert commentary on defensive imperatives — including the need for machine-speed detection, adaptive incident response, and tighter guardrails around development-platform access.



