Kai Aizen — GenAI Security Research
Same attack. Different substrate.
I research AI security vulnerabilities — from LLM jailbreaking to prompt injection — mapping the adversarial psychology that exploits both humans and machines.
Recent Work
Framework
AATMF — Adversarial AI Threat Modeling Framework
Systematic framework for AI red teaming with quantitative risk scoring.
20 tactics • 240+ techniques
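Quantitative risk scoring in frameworks like this is often a likelihood × impact product per technique. A minimal sketch, assuming that model; the technique names, weights, and 0–10 scale below are hypothetical illustrations, not AATMF's actual taxonomy or scoring method:

```python
# Hypothetical likelihood x impact scoring sketch. Technique names,
# weights, and the 0-10 scale are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Technique:
    name: str
    likelihood: float  # 0.0-1.0: how reliably the technique lands
    impact: float      # 0.0-1.0: severity when it succeeds

def risk_score(t: Technique) -> float:
    """Simple likelihood x impact score, scaled to 0-10."""
    return round(t.likelihood * t.impact * 10, 1)

techniques = [
    Technique("role-play jailbreak", likelihood=0.7, impact=0.6),
    Technique("indirect prompt injection", likelihood=0.5, impact=0.9),
]

# Rank techniques by descending risk for triage.
for t in sorted(techniques, key=risk_score, reverse=True):
    print(f"{t.name}: {risk_score(t)}/10")
```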
Research
LLM Jailbreaking & Prompt Injection
Original research on bypassing AI safety controls through psychological vectors.
Novel techniques • Defense strategies
Book
Adversarial Minds
The psychology of social engineering and human manipulation.
Published 2024 • Social Engineering
Vulnerability Discoveries
View all CVEs →
SnailSploit Live Threat Feed
Latest Research
Featured Articles
View all →
Prompt Injection • Memory Injection Through Nested Skills • Mar 2026
Security Research • Linux Kernel io_uring/zcrx: Race Condition • Mar 2026
AI Security • Self-Replicating Memory Worm • Mar 2026
AI Security • Weaponized AI Supply Chain • Mar 2026
AI Security • MCP vs A2A Attack Surface • Mar 2026
AI Security • The 30% Blind Spot: LLM Safety Judges • Feb 2026

Let's Work Together
Available for security research, red team engagements, and AI safety consulting.
Or email directly: kai@snailsploit.com