Show HN: LLMSecure – prompt injection detection, no signup
LLMSecure is a tool for detecting prompt injection attacks in large language models. The service requires no signup and is available for immediate use. It helps identify malicious prompts that could compromise AI system security.