↳
In-reply-to
»
OpenAI's Latest Model Closes the 'Ignore All Previous Instructions' Loophole
Kylie Robison reports via The Verge: Have you seen the memes online where someone tells a bot to "ignore all previous instructions" and proceeds to break it in the funniest ways possible? The way it works goes something like this: Imagine we at The Verge created an AI bot with explicit instructions to direct you to our excellent re ... ⌘ Read more
⤋ Read More
@slashdot@feeds.twtxt.net it’s amazing that anyone thinks that these so-called instructions in large language models are anything close to what you would consider instructions or even remotely intelligible.