Just like phishing for gullible humans, prompt injecting AIs is here to stay
Researchers warn that prompt injection attacks on AI systems are becoming a persistent threat, similar to phishing attacks targeting humans. These attacks manipulate AI models through carefully crafted inputs to produce unintended outputs or reveal sensitive information. The vulnerability is inherent to how large language models process instructions and is expected to remain a security challenge.