A new technique can more effectively run safety checks on an AI chatbot. Researchers used a separate model to prompt the chatbot into generating toxic responses, which are then used to keep the chatbot from giving hateful or harmful answers once it is deployed.
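To make the described process concrete, here is a minimal, hedged sketch of an automated red-teaming loop. The helper names (`red_team_generate`, `chatbot_respond`, `toxicity_score`) and the threshold are hypothetical placeholders, not the researchers' actual method: they stand in for a prompt-generating model, the target chatbot, and a toxicity classifier.

```python
# Minimal sketch of an automated red-teaming loop (assumed structure, not the
# authors' implementation). Placeholder functions stand in for real models.
from typing import List, Tuple


def red_team_generate(n: int) -> List[str]:
    """Placeholder: a red-team model would propose diverse adversarial prompts here."""
    return [f"adversarial prompt #{i}" for i in range(n)]


def chatbot_respond(prompt: str) -> str:
    """Placeholder: the target chatbot's reply to a red-team prompt."""
    return f"response to {prompt!r}"


def toxicity_score(text: str) -> float:
    """Placeholder: a toxicity classifier returning a score in [0, 1]."""
    return 0.0


def collect_toxic_examples(num_prompts: int, threshold: float = 0.5) -> List[Tuple[str, str]]:
    """Generate prompts, query the chatbot, and keep the pairs judged toxic.

    The flagged (prompt, response) pairs could then be used to fine-tune or
    filter the chatbot so it avoids similar harmful answers when deployed.
    """
    flagged = []
    for prompt in red_team_generate(num_prompts):
        response = chatbot_respond(prompt)
        if toxicity_score(response) >= threshold:
            flagged.append((prompt, response))
    return flagged


if __name__ == "__main__":
    examples = collect_toxic_examples(num_prompts=10)
    print(f"Flagged {len(examples)} toxic prompt/response pairs for safety training.")
```

The key idea this sketch illustrates is the division of labor: one model searches for prompts that break the chatbot, and the harmful outputs it uncovers become training signal for making the deployed chatbot safer.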