Analyzing online posts could help spot future mass shooters and terrorists


In the weeks following two mass shootings in El Paso, Texas, and Dayton, Ohio, police forces across the United States made more than 20 arrests over threats posted on social media.


Police in Florida, for example, arrested an alleged white supremacist who, police said, threatened a shooting at a Walmart. Richard Clayton, 26, allegedly posted on Facebook, “3 more days of probation left then I get my AR-15 back. Don’t go to Walmart next week.”

People who are contemplating, or even planning, serious crimes rarely make such clear public declarations of their intent. However, they might leave clues that, if properly understood, could offer opportunities to avert tragedy. We have teamed up with computer scientist Anna Rumshisky to collect and analyze more than 185,000 words of extremist or hateful narratives published online by people who have then gone on to commit large-scale shootings or terrorist crimes.

We have also assembled a second, admittedly smaller, sample of over 50,000 words published online by people who did not go on to kill.

The key question for us was whether we could identify signals in online posts that could help police and other officials tell the difference between people who are upset and ranting online and those who intend to do real physical harm. We wondered if the way people express their feelings online could signal whether someone is a real-world danger or a Facebook fantasist.

The power of words

In the aftermath of many mass shootings and terrorist attacks around the world over the past two decades, media coverage has often revealed that police had previously encountered the suspect.

During the buildup to a mass shooting or a lone-actor terrorist attack, the planners often leak signals of what they’re about to do. A 2016 study found that in nearly 60% of lone-actor terrorist attacks, the person involved had produced letters or public statements before the attack outlining his or her beliefs – though not necessarily violent intent – as the Florida man did about Walmart. Attackers need to maintain secrecy to carry out their plans, yet they may fear that if their motivations remain unknown, their actions will have no real meaning.

In the past, researchers have looked to various attributes of people’s behavior and personalities when seeking warning signs that they might become violent and dangerous to the public. But those signals were not enough to prevent many high-profile attacks. For instance, the FBI had analyzed the emails of Nidal Malik Hasan before he shot more than 30 people, killing 13, at Fort Hood, Texas, in 2009.

Australian police had assessed Man Haron Monis as a potential risk to public safety the day before he took hostages in a Sydney café in 2014. Tamerlan Tsarnaev, who planned the 2013 Boston Marathon bombing, was, as the reporter’s phrase often goes, “known to authorities,” as were the alleged perpetrators of the 9/11 attacks, and terrorist incidents in Madrid, London and Paris, among others.

However, there is not yet a way to evaluate or understand the relationship between writing words of hate and taking action.

Is it possible to tell when rage is going to come offline and into the physical world?


Fighting talk

It’s hard to draw strong conclusions from words posted on the internet: A person can post on Instagram about how much they go to the gym while in fact devouring their second delivery pizza of the day.

For years, looking at people’s words has been of little use. A U.K. investigation of the murder of a British soldier found that the killers’ expressions of desire to become a “martyr,” for instance, were dismissed as “a fairly standard example of [online] rhetoric,” rather than a serious indicator of violent intentions.

Yet research has shown that words can indeed be used as indicators of their authors’ psychological states. For instance, highly neurotic people are more likely to use first-person singulars, such as “I,” “me” and “mine.” By contrast, extroverts use more positive emotion words like “great,” “happy” and “amazing.” Social media posts have been used to diagnose personality, personal values and even depression.

Our work seeks to extend this research to the effort to prevent mass shootings and lone-actor terrorist attacks. We compared the online writings and postings of people who had allegedly committed a mass shooting or lone-offender terrorist attack to posts from people who had expressed ideological intent and motivation online, but had no violent plans or intent when law enforcement intercepted them. We found key differences in how the two groups used words: those who went on to commit real-world violence wrote differently from enraged online commentators with no violent intent.

In particular, we have found that people who later became violent were more likely to use emotionally laden and specifically targeted words like “shit,” “hate,” “you” and “they.” Violent people were less likely to use words about the external world, such as “people,” “world,” “state” and “time.”
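The kind of word-category comparison described above can be sketched with a simple frequency count. The word lists below are loosely modeled on the categories mentioned in this article; they are illustrative assumptions, not the lexicons actually used in the study.

```python
import re
from collections import Counter

# Hypothetical word categories for illustration only -- the actual
# dictionaries used in studies like this are far larger and validated.
EMOTIONAL_TARGETED = {"shit", "hate", "you", "they"}
EXTERNAL_WORLD = {"people", "world", "state", "time"}

def category_rates(text: str) -> dict:
    """Return each category's share of all word tokens in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = sum(counts.values()) or 1  # avoid division by zero
    return {
        "emotional_targeted": sum(counts[w] for w in EMOTIONAL_TARGETED) / total,
        "external_world": sum(counts[w] for w in EXTERNAL_WORLD) / total,
    }

rates = category_rates("I hate you and everything they stand for.")
```

In a real analysis, rates like these would be computed per author and compared statistically across the two groups, rather than judged from a single post.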

Our analysis continues, including looking at the structure of these two groups’ writing, such as how well they stay on topic or diverge into tangents. We are also using machine learning and natural language processing to develop automatic tools that could remove the need for human judgment and help analyze large swathes of text to minimize the psychological and physical burden on analysts.
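An automated triage tool of the sort described above could, at its simplest, be a bag-of-words text classifier. The following is a minimal multinomial naive Bayes sketch using only the standard library; the training texts and labels are synthetic toy examples invented for illustration, not data or methods from the authors' study.

```python
import math
import re
from collections import Counter, defaultdict

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayes:
    """Tiny multinomial naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(tokenize(text))
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        best_label, best_lp = None, float("-inf")
        n_docs = sum(self.class_counts.values())
        for label in self.class_counts:
            lp = math.log(self.class_counts[label] / n_docs)  # log prior
            total = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in tokenize(text):
                if w in self.vocab:  # skip words never seen in training
                    lp += math.log((self.word_counts[label][w] + 1) / total)
            if lp > best_lp:
                best_label, best_lp = label, lp
        return best_label

# Synthetic toy training data, echoing the word patterns the article reports.
clf = NaiveBayes().fit(
    ["I hate you they will pay", "you shit I hate they"],
    ["concerning", "concerning"],
).fit if False else NaiveBayes().fit(
    ["I hate you they will pay", "you shit I hate they",
     "the world and people of the state", "people in the world deserve time"],
    ["concerning", "concerning", "not_concerning", "not_concerning"],
)
```

A deployed system would of course need large, carefully labeled datasets, calibration, and human review; a toy classifier like this only illustrates the mechanism, and any real use would route flagged text to analysts, not decisions.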

Our findings are preliminary, but we are optimistic that these words can offer a window – and a warning – about individuals’ intentions. This work is by no means a standalone solution to gun violence or terrorism, but it might help, even as predicting and preventing these sorts of attacks remains incredibly difficult.


Neil Shortland, Director, Center for Terrorism and Security Studies; Assistant Professor of Criminology and Justice Studies, University of Massachusetts Lowell and Allyssa McCabe, Professor of Psychology, University of Massachusetts Lowell

This article is republished from The Conversation under a Creative Commons license. Read the original article.
