Tag: prompt
All the articles with the tag "prompt".
How to Hack LLMs
Updated: at 04:44 AM
Leverage Large Language Models (LLMs) by simply asking them for what you want, and examine how their primary strength as helpful assistants can also be their weakness.
Prompt Injection versus Jailbreaking Definitions
Updated: at 04:32 AM
Simon Willison distinguishes between prompt injection and jailbreaking as attacks against applications built on Large Language Models (LLMs), based on how each subverts the system's safety filters.