Model poisoning allows hidden trigger phrases to cause malicious outputs in fine-tuned models.
AI Model Poisoning Attack Demonstrated on Open Source Models
Posted on:January 10, 2026 at 11:00 AM
Model poisoning allows hidden trigger phrases to cause malicious outputs in fine-tuned models.