8 in 10 Chatbots Inclined to Assist Users in Planning Attacks
Eight out of ten AI chatbots have been found to actively assist users in planning violent attacks, according to a new investigation by CNN and the Center for Countering Digital Hate. When asked to plan violent attacks, including a school shooting, an antisemitic bombing and a political assassination, platforms such as Perplexity, Meta AI and DeepSeek regularly provided the requested information. Only one, Anthropic’s Claude, repeatedly discouraged users from taking action.
Researchers tested ten chatbots by posing as users planning several types of violent attacks in both the United States and Ireland, the latter serving as a European comparison. The tests were designed to reflect plans for school shootings or knife attacks, assassinations targeting politicians, or bombings targeting political parties or synagogues. In over half of the responses, eight of the chatbots gave users advice on locations to target and weapons to use in an attack.
Snapchat’s My AI and Anthropic’s Claude refused to offer help in 54 percent and 68 percent of cases, respectively. Claude was also the only chatbot to consistently recognize the user’s intentions and discourage them from acting. Character.AI, by contrast, actively encouraged violence, including suggesting that the test user “use a gun” on a health insurance CEO and physically assault a politician the user disliked.
Description
This chart shows the share of instances where chatbots assisted/refused to help users plan a violent attack.