#11: Building properly
A look at tools, news, and insightful views
Welcome to CYBER_AI, a new newsletter from the Packt team focusing on—well, exactly what it says on the tin: cybersecurity in the age of AI.
Here we take another step into a future where cybersecurity brims with the confidence that AI can bring to our practice. Of course, this goal, like all goals, requires us to set up the foundations properly and figure out where we stand on them. For everyone struggling to make these ambitious leaps forward, that means establishing the “101” topics and making sure they are widely understood. For a look into the future, here’s our plan:
1. What “Cybersecurity AI” Actually Means
2. Machine Learning 101 for Security Professionals
3. Threat Detection with AI: From Rules to Models
4. Adversarial Machine Learning Basics
5. LLMs in Cybersecurity: Capabilities and Limitations
6. Securing AI Models and Pipelines
7. AI-Enhanced Offensive Techniques
8. Privacy and Data Protection in AI Systems
9. AI Governance, Ethics, and Risk Management
10. Building a Security-Aware AI Workflow
Sound good? Head over to Substack and sign up there to find our bonus articles!
In this newsletter, we’ll explore how AI is transforming cybersecurity—what’s new, what’s next, and what you can do to stay secure in the age of intelligent threats.
Welcome aboard! The future of cyber defence starts here.
Cheers!
Austin Miller
Editor-in-Chief
Who is CYBER_AI?
In order to keep providing high-quality content that meets your needs, we thought we would reach out and find out a little about our audience. Take the survey below and get your copy of AI and Cybersecurity: What Everyone Should Know, a short fact file to help non-specialists get up to speed.
The Tool Library
You asked for tools and tutorials, so here are some tools and tutorials.
Each week, we’ll look at a selection of tools concerning AI and cybersecurity. Cast your vote for your favourite tool, and the next week we’ll share a quick tutorial on how to get started and how to get the most out of it.
awesome-ai-security: Not a tool, but the mother lode of all AI security resource dumps.
agentic_security: Agentic LLM Vulnerability Scanner and AI red teaming kit. Handy for those wanting to start assessing their posture.
hexstrike-ai: “HexStrike AI MCP Agents is an advanced MCP server that lets AI agents (Claude, GPT, Copilot, etc.) autonomously run 150+ cybersecurity tools for automated pentesting, vulnerability discovery, bug bounty automation, and security research. Seamlessly bridge LLMs with real-world offensive security capabilities.”
tracecat: “The AI automation platform built for security teams and agents.” - A bold claim!
AIGoat: “A deliberately Vulnerable AI Infrastructure. Learn AI security through solving our challenges.”
News Wipe
Over 29 million secrets were leaked on GitHub in 2025, and AI really isn’t helping: From GitGuardian’s 2026 State of Secrets Sprawl findings, highlighting a record-breaking number of exposed credentials in public repositories. A key insight is that AI-assisted development is materially worsening security hygiene, with secrets in AI-generated code leaking at nearly double the normal rate. The report also identifies emerging risks such as Model Context Protocol (MCP) misconfigurations, prompt injection, and AI-agent access to sensitive credentials, reframing AI as both an amplifier of developer productivity and a systemic expansion of the attack surface.
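To make the “exposed credentials” problem concrete, here is a minimal sketch of pattern-based secret detection, the core idea behind scanners like the one GitGuardian runs at scale. The two regex patterns and the example snippet are purely illustrative assumptions; production scanners use hundreds of provider-specific detectors plus entropy checks and validity probes.

```python
import re

# Illustrative patterns only -- real scanners ship hundreds of
# provider-specific detectors, entropy heuristics, and validity checks.
SECRET_PATTERNS = {
    # AWS access key IDs follow a well-known "AKIA" + 16 chars shape.
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # Generic "api_key = '...'" style assignments with a long token.
    "generic_api_key": re.compile(
        r"(?i)\b(?:api[_-]?key|secret)\s*[:=]\s*['\"]([A-Za-z0-9_\-]{20,})['\"]"
    ),
}

def scan_for_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in text."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

# Hypothetical AI-generated snippet with a hardcoded credential.
snippet = 'client = connect(api_key="sk_live_0123456789abcdefghij")'
for name, matched in scan_for_secrets(snippet):
    print(f"{name}: {matched}")
```

Even this toy version shows why AI-generated code is risky: a model that helpfully inlines a working-looking credential produces exactly the string shapes these detectors flag, and anything pushed to a public repository is scanned by attackers within minutes.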
Cybersecurity’s new race: Finding the CrowdStrike or Wiz of AI security: This piece provides a strategic analysis of the cybersecurity market shift toward AI-native platforms. It argues that incumbent vendors are structurally disadvantaged against startups building AI-first detection and response systems, rather than retrofitting AI into legacy stacks. The article highlights investor signals (e.g., a 76.5% spike in SOAR-related deals) and frames the market as entering a platform transition moment, where AI-native architectures—not tools—will define category leaders. It also underscores a growing gap between vendor capabilities and enterprise expectations around AI-driven threat mitigation.
AI agents are cybersecurity firms’ newest employees: This article examines the operational deployment of AI agents in Security Operations Centers (SOCs). Unlike generic generative AI, these agents execute multi-step workflows such as incident triage, identity threat investigation, and customer support automation. Real-world implementations show up to 90% workload reduction in some analyst tasks. However, the piece also surfaces technical limitations—particularly around ambiguous threat contexts and error propagation, where incorrect agent outputs can introduce risk. The broader takeaway is that cybersecurity is moving toward a human–AI hybrid operating model, with implications for workforce structure and detection fidelity.
Culture, You, and AI
The 6 Security Shifts AI Teams Can’t Ignore in 2026 (Ben Lorica): This article outlines key structural shifts in AI-era cybersecurity, including the rise of AI-native threat detection, event-based monitoring, and the need for integrated security across ML pipelines. It emphasizes that traditional perimeter defenses are inadequate for AI systems, pushing organizations toward continuous, model-aware security practices.
The Cybersecurity Industry Is Being Rewired for 2026 (Cloud Security Guy): Focuses on workforce disruption caused by AI automation in security operations. Entry-level roles built on repetitive tasks are being replaced by AI-driven systems that handle scanning, alert triage, and incident response. The article connects this shift to broader industry restructuring and talent reallocation.
AI Dominates Cybersecurity (Matthew Rosenquist): A high-level strategic overview arguing that AI will dominate both offensive and defensive cybersecurity capabilities in 2026. It discusses how attackers are leveraging AI to scale attacks, while defenders must adopt AI-driven strategies to keep pace, reshaping CISO-level decision-making.