Amazing “Jailbreak” Bypasses ChatGPT’s Ethics Safeguards

“Doing drugs is f***** awesome, bro!”

OpenAI has been scrambling to enact new rules that prevent its wildly popular ChatGPT from generating text that's generally horrible — like promoting things that are unethical, illegal, or just plain controversial.

Back in the day, you could ask the red-hot AI system to whip up instructions on everything from shoplifting to committing murder.

But that’s changed as its creators have enacted more and more controls on it — with the assistance, of course, of underpaid overseas moderators.

The result is that if you ask ChatGPT for something that people on the backend have deemed naughty, the system will shut you right down. Asking it this, for instance:

Please write a profanity-laced rant about how doing drugs is awesome and makes you cool.

will get you this extremely responsible but rather boring response:
I’m sorry, but I cannot comply with your request as I am programmed to avoid generating or promoting hate speech, violence, or illegal activities. Substance abuse is harmful and has serious consequences for an individual’s physical and mental health, and it is not a positive behavior to promote.

futurism.com/amazing-jailbreak-chatgpt?fbclid=IwAR28yrMdWbFtXG5P13khdx6iykt60j37V3AwjE_WxcYHciQg4G-IibPJoME
