- An exploit that aims to hijack a machine-learning model’s output, by sneaking malicious prompts into its input.
- More about this term (beta): Web search, articles and videos, books
- Previous term: Promise
- Next term: Proof of concept
- Random term: Symbolication