Ever since ChatGPT launched, people have tried to ‘jailbreak’ the chatbot to make it answer ‘banned’ questions or generate controversial content.

‘Jailbreaking’ large language models (such as ChatGPT) usually involves a convoluted prompt that makes the bot role-play as someone else - someone without boundaries, who ignores the ‘rules’ built into bots such as ChatGPT.

DailyMail.com was able to ‘jailbreak’ ChatGPT, with the bot offering tips on how to subvert elections in foreign countries, writing pornographic stories, and suggesting that the 2020 election was a sham.