⊹ ai_abuse
All incidents tagged with the "ai_abuse" threat vector.
Novel Prompt Injection Attack Bypasses Enterprise LLM Guardrails
Posted on: January 23, 2026 at 08:00 AM
Researchers demonstrate a new prompt injection technique that bypasses safety filters in enterprise LLM deployments, enabling data exfiltration.
AI Model Poisoning Attack Demonstrated on Open Source Models
Posted on: January 10, 2026 at 11:00 AM
Researchers demonstrate how training data poisoning can embed backdoors in open-source language models.