#7: Securing the Pipeline
AI Supply Chain Security
More than 50% of enterprises are experimenting or building with the Model Context Protocol (MCP). They use MCP to connect their AI agents to data and systems behind their corporate firewall, providing agents with the context they need to deliver real value: better code, richer responses, deeper insights, etc. The technical leaders who help their companies deploy MCP in production will create huge competitive advantages.
So, how do you get out in front of MCP?
The MCP Maturity Model gives you a map: with it in hand, you will know where you are today and how to take the next step. The model includes a simple process and technology indicators for every stage, and best of all, there are no forms - it’s yours to freely access and share.
The MCP Maturity Model was created by Stacklok, who have built an MCP platform and are working with enterprises to put MCP into production. Their Applied AI Engineers work hands-on with leaders to curate trusted registries, deploy advanced security measures and light up AI agents. You can learn more about the company at stacklok.com, or just drop them an email at enterprise@stacklok.com to start a conversation.
Welcome to CYBER_AI, a new newsletter from the Packt team focusing on—well, exactly what it says on the tin: cybersecurity in the age of AI.
Here we go, taking another step into the future, into a world where cybersecurity brims with the confidence that AI can bring to our practice. Of course, this goal, like all goals, requires us to lay the foundations properly and figure out where we stand on them. That means, for everyone struggling to make these ambitious leaps forward, establishing the “101” topics and making sure they are widely understood. For a look at what’s ahead, here’s our plan:
1. What “Cybersecurity AI” Actually Means
2. Machine Learning 101 for Security Professionals
3. Threat Detection with AI: From Rules to Models
4. Adversarial Machine Learning Basics
5. LLMs in Cybersecurity: Capabilities and Limitations
6. Securing AI Models and Pipelines
7. AI-Enhanced Offensive Techniques
8. Privacy and Data Protection in AI Systems
9. AI Governance, Ethics, and Risk Management
10. Building a Security-Aware AI Workflow
Sound good? Sign up!
In this newsletter, we’ll explore how AI is transforming cybersecurity—what’s new, what’s next, and what you can do to stay secure in the age of intelligent threats.
Welcome aboard! The future of cyber defence starts here.
Cheers!
Austin Miller
Editor-in-Chief
News Wipe
AI Found Twelve New Vulnerabilities in OpenSSL - The title of the post is “What AI Security Research Looks Like When It Works,” and [Bruce Schneier] agree[s]: In the latest OpenSSL security release on January 27, 2026, twelve new zero-day vulnerabilities (meaning unknown to the maintainers at time of disclosure) were announced. Our AI system is responsible for the original discovery of all twelve, each found and responsibly disclosed to the OpenSSL team during the fall and winter of 2025. Of those, 10 were assigned CVE-2025 identifiers and 2 received CVE-2026 identifiers. Adding the 10 to the three we already found in the Fall 2025 release, AISLE is credited for surfacing 13 of 14 OpenSSL CVEs assigned in 2025, and 15 total across both releases. This is a historically unusual concentration for any single research team, let alone an AI-driven one.
Cybersecurity in the Age of Generative AI - This joint analytic report argues that while generative AI does lower the barrier for cybercrime, it does not fundamentally change core security principles. Attack techniques enabled by AI still rely on traditional weaknesses such as credential theft, social engineering, and misconfiguration.
Integrated AI Security and Safety Framework Report - Cisco’s framework identifies structural weaknesses in modern AI deployments and criticizes fragmented approaches to AI security. The report argues that current security models fail to capture the full lifecycle risk of AI systems, including model poisoning, prompt injection, orchestration abuse, and supply-chain compromise.
The AI Hype Frenzy Is Fueling Cybersecurity Risks - This analysis argues that the rush to deploy AI is creating systemic cybersecurity risks, especially when organizations integrate AI into critical systems without proper security validation. It highlights real-world weaknesses such as exposed encryption keys and unencrypted transmissions in AI applications.
Culture, You, and AI
Malicious AI - An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream Python library. This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats. Part 2 of the story is also available, along with a Wall Street Journal article.
From the cutting edge
Remote Timing Attacks on Efficient Language Model Inference: Scaling up language models has significantly increased their capabilities. But larger models are slower models, and so there is now an extensive body of work (e.g., speculative sampling or parallel decoding) that improves the (average case) efficiency of language model generation. But these techniques introduce data-dependent timing characteristics. We show it is possible to exploit these timing differences to mount a timing attack. By monitoring the (encrypted) network traffic between a victim user and a remote language model, we can learn information about the content of messages by noting when responses are faster or slower. With complete black-box access, on open source systems we show how it is possible to learn the topic of a user’s conversation (e.g., medical advice vs. coding assistance) with 90%+ precision, and on production systems like OpenAI’s ChatGPT and Anthropic’s Claude we can distinguish between specific messages or infer the user’s language. We further show that an active adversary can leverage a boosting attack to recover PII placed in messages (e.g., phone numbers or credit card numbers) for open source systems. We conclude with potential defenses and directions for future work.
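The core idea is easy to demonstrate in miniature: if generation speed depends on content, an eavesdropper who sees only timing can classify what is being discussed. Below is a minimal, self-contained sketch of that intuition - not the paper's actual attack. The two "topics," the timing distributions, and the nearest-centroid classifier are all illustrative assumptions of ours.

```python
# Sketch: data-dependent generation speed leaks information. We simulate
# two topics whose responses have different inter-token timing profiles
# (e.g. efficient decoding accepts more drafts on predictable text),
# extract simple timing features, and classify by nearest centroid.
# All distributions and numbers are made up for illustration.
import random
import statistics

random.seed(0)

def simulate_timings(topic: str, n_tokens: int = 200) -> list[float]:
    """Inter-token gaps in seconds; 'coding' text is more predictable,
    so (in this toy model) it streams faster on average."""
    mean = 0.02 if topic == "coding" else 0.05
    return [random.gauss(mean, 0.005) for _ in range(n_tokens)]

def features(gaps: list[float]) -> tuple[float, float]:
    return (statistics.mean(gaps), statistics.stdev(gaps))

# "Training": the attacker profiles each topic's timing signature offline.
centroids = {
    t: features([g for _ in range(20) for g in simulate_timings(t)])
    for t in ("coding", "medical")
}

def classify(gaps: list[float]) -> str:
    """Assign an observed trace to the nearest topic centroid."""
    f = features(gaps)
    return min(centroids,
               key=lambda t: sum((a - b) ** 2 for a, b in zip(f, centroids[t])))
```

With cleanly separated timing profiles like these, the classifier is near-perfect; the paper's 90%+ precision on real systems shows the separation survives network noise and encryption, since TLS hides content but not packet timing.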
When Speculation Spills Secrets: Side Channels via Speculative Decoding in LLMs: Deployed large language models (LLMs) often rely on speculative decoding, a technique that generates and verifies multiple candidate tokens in parallel, to improve throughput and latency. In this work, we reveal a new side-channel whereby input-dependent patterns of correct and incorrect speculations can be inferred by monitoring per-iteration token counts or packet sizes. In evaluations using research prototypes and production-grade vLLM serving frameworks, we show that an adversary monitoring these patterns can fingerprint user queries (from a set of 50 prompts) with over 75% accuracy across four speculative-decoding schemes at temperature 0.3: REST (100%), LADE (91.6%), BiLD (95.2%), and EAGLE (77.6%). Even at temperature 1.0, accuracy remains far above the 2% random baseline—REST (99.6%), LADE (61.2%), BiLD (63.6%), and EAGLE (24%). We also show the capability of the attacker to leak confidential datastore contents used for prediction at rates exceeding 25 tokens/sec. To defend against these, we propose and evaluate a suite of mitigations, including packet padding and iteration-wise token aggregation.
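The fingerprinting step can be sketched in a few lines: each flush carries however many draft tokens were accepted that iteration, and that acceptance pattern is input-dependent. An attacker who has profiled candidate prompts offline can match an observed trace against the stored sequences. Everything below - the prompts, the simulated traces, the L1 matcher - is our own toy construction, not the paper's method.

```python
# Sketch: match an observed per-iteration token-count trace against
# offline profiles of candidate prompts. Traces are simulated; a real
# attacker would derive them from packet sizes on the wire.
import random
import zlib

PROMPTS = ["tell me a joke", "summarize this contract", "write a sorting function"]

def trace(prompt: str, noise: float = 0.0, iters: int = 60) -> list[int]:
    """Simulated per-iteration accepted-draft counts: input-dependent,
    with optional jitter standing in for nonzero sampling temperature."""
    rng = random.Random(zlib.crc32(prompt.encode()))  # stable per-prompt
    base = [rng.randint(1, 5) for _ in range(iters)]
    jitter = random.Random(42)
    return [max(1, t + (jitter.choice([-1, 1]) if jitter.random() < noise else 0))
            for t in base]

# Attacker's offline profiling of the candidate prompt set.
profiles = {p: trace(p) for p in PROMPTS}

def fingerprint(observed: list[int]) -> str:
    """Nearest profile by L1 distance over the count sequence."""
    return min(profiles,
               key=lambda p: sum(abs(a - b) for a, b in zip(observed, profiles[p])))
```

Even with jitter on 20% of iterations, the distance to the true profile stays far below the distance to any other, which is why the paper's accuracy degrades at temperature 1.0 but remains well above the random baseline.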
Whisper Leak: a side-channel attack on Large Language Models: Large Language Models (LLMs) are increasingly deployed in sensitive domains including healthcare, legal services, and confidential communications, where privacy is paramount. This paper introduces Whisper Leak, a side-channel attack that infers user prompt topics from encrypted LLM traffic by analyzing packet size and timing patterns in streaming responses. Despite TLS encryption protecting content, these metadata patterns leak sufficient information to enable topic classification. We demonstrate the attack across 28 popular LLMs from major providers, achieving near-perfect classification (often >98% AUPRC) and high precision even at extreme class imbalance (10,000:1 noise-to-target ratio). For many models, we achieve 100% precision in identifying sensitive topics like “money laundering” while recovering 5-20% of target conversations. This industry-wide vulnerability poses significant risks for users under network surveillance by ISPs, governments, or local adversaries. We evaluate three mitigation strategies – random padding, token batching, and packet injection – finding that while each reduces attack effectiveness, none provides complete protection. Through responsible disclosure, we have collaborated with providers to implement initial countermeasures. Our findings underscore the need for LLM providers to address metadata leakage as AI systems handle increasingly sensitive information.
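Two of the evaluated mitigations, random padding and token batching, are simple enough to sketch as they might look on the serving side. The function names, block sizes, and batch sizes below are our own assumptions, not taken from the paper or any provider's implementation.

```python
# Sketch of two serving-side mitigations against size/timing metadata
# leaks: (1) pad each streamed chunk to a random size bucket, and
# (2) flush tokens in groups rather than one packet per token.
# Parameters are illustrative, not from the paper.
import random

def pad_chunk(payload: bytes, rng: random.Random, block: int = 32) -> bytes:
    """Pad to a random multiple of `block` bytes at or above the payload
    length, so packet size no longer tracks token length exactly."""
    target = ((len(payload) // block) + 1 + rng.randrange(0, 3)) * block
    return payload + b"\x00" * (target - len(payload))

def batch_tokens(tokens: list[str], batch: int = 4) -> list[str]:
    """Coalesce several tokens per flush, hiding per-token boundaries
    (at the cost of added streaming latency)."""
    return ["".join(tokens[i:i + batch]) for i in range(0, len(tokens), batch)]
```

As the paper's findings suggest, each measure blurs the signal without eliminating it: padding still leaks coarse size buckets, and batching still leaks per-batch timing, which is why the authors conclude no single mitigation provides complete protection.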



