About Obfuscated
The attack surface for AI is expanding faster than most security teams can map it. Prompt injection, model poisoning, agent hijacking, jailbreaks, compliance gaps — every new model release and every new agent framework introduces vulnerabilities that didn't exist six months ago.
Obfuscated exists to make that surface smaller. We publish research on adversarial techniques against AI systems, build open-source security tooling, and break down the compliance frameworks that organizations need to navigate — from the EU AI Act to NIST AI RMF to ISO 42001.
What We Cover
- AI red teaming methodologies and findings
- Prompt injection and jailbreak techniques
- LLM and agent security architecture
- AI compliance and governance frameworks (EU AI Act, NIST AI RMF, ISO 42001)
- Open-source security tools for AI applications
- Defensive strategies for production AI systems
Who's Behind This
Obfuscated is run by Michael Harbison, an information security and systems engineering practitioner with over a decade of experience. He currently focuses on the intersection of adversarial AI and enterprise security, building red teaming tools and writing about what he's learning along the way.
Find Michael on LinkedIn.