Security researchers are jailbreaking large language models, circumventing the safety rules meant to constrain them. Things could get much worse.