Adversarial attacks, for example when attackers mislead the LLM by adding adversarial content to prompts. Hallucination, when ...