Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study (arXiv:2305.13860, published May 23, 2023)
Prompt Injection attack against LLM-integrated Applications (arXiv:2306.05499, published Jun 8, 2023)
Efficient Detection of Toxic Prompts in Large Language Models (arXiv:2408.11727, published Aug 21, 2024)