Monitoring Kids' Social Media Accounts Won't Prevent the Next School Shooting

Invading students' privacy isn't the solution.

The Parkland, Fla., school shooting has reignited the national conversation on what can be done to prevent such tragedies, which seem to occur with frightening regularity. One option, which is already used by many schools and will probably be adopted by more, is to employ companies that monitor students’ social media feeds to flag threats of violence, as well as behavior such as bullying and self-harm.

Miami-Dade County’s school system has asked for $30 million in upgrades that include "advanced monitoring of social media," while schools in California, Ohio, Tennessee and Virginia have indicated that social media monitoring, including by third-party companies, is a key security feature.

But schools should think long and hard before they go down this path. There is little evidence that such monitoring works, and these practices raise plenty of questions about privacy and discrimination.

Nikolas Cruz, the suspected perpetrator of the Parkland shooting, hardly presents a case for schools to proactively check social media. If anything, his case shows that people already alert law enforcement when they see genuinely threatening material online. Cruz was reported to the FBI and local police at least three times for disturbing posts; one call to the FBI warned that he might become a school shooter, while a separate call flagged a YouTube post saying that the user wanted to become a “professional school shooter” (although the poster wasn’t identified as Cruz until after the shooting).

And Cruz’s explicit declaration of intent is the exception, not the rule, which means monitoring the Internet wouldn’t usually turn up such warnings. Our informal survey of major school shootings since the 2012 Sandy Hook killings in Newtown, Conn., shows that only one other perpetrator’s social media accounts indicated an interest in school violence: Adam Lanza, the Newtown shooter, posted in discussion forums about the Columbine high school shooting and operated Tumblr accounts named after school shooters. These postings were not a secret, and while viewers at the time may not have known whether to take the threats seriously, it is hard to imagine in the current climate that his posts would not be reported to the authorities — as they should be.

Generally, school shooters’ online profiles — which wind up being extensively analyzed in the wake of attacks — reveal little that sets them apart from other teenagers. The Facebook page for the perpetrator of a 2014 shooting in Troutdale, Ore., is typical. It showed that he liked first-person shooter and military-themed games like “Call of Duty,” in addition to various knife and gun pages. Meanwhile, the official “Call of Duty WWII” Facebook page boasts nearly 24 million followers, while over 1.3 million people have “liked” the Remington Arms Facebook page.

An algorithm trawling the Web for people who like violent video games or firearms would be swamped with far more hits than any law enforcement agency or school administrator could conceivably review. The same would be true of any program that looked for words like “gun,” “bomb” or “shoot,” as the Jacksonville, Fla., police department discovered the hard way when its social media monitoring tool — while producing zero evidence of criminal activity — flagged comments describing crab burgers, pizza and beer as “bomb,” slang for excellent. (It also caught two uses of the phrase “photo bomb.”)

Social media monitoring tools can also result in discrimination against minority students. While there is little publicly available information on what such tools look for, it is likely that — much like the equivalent tools used by law enforcement agencies — they will incorporate biases. A recent ACLU report showed that the Boston Police Department’s social media monitoring efforts contributed nothing to public safety while searching for terms like “Ferguson” and “#blacklivesmatter,” as well as terms likely to be used by Muslim users, like “#muslimlivesmatter” and “ummah,” the Arabic word for community.

There is also substantial evidence to suggest that children of color, especially those who are Muslim, would be treated as dangerous and perhaps subject to extra monitoring, despite the fact that the majority of school shooters have been white. Take the case of Ahmed Mohamed, the Muslim teenager who brought a homemade clock to his Dallas-area high school and was promptly arrested on the suspicion that it concealed a bomb.

Children of color appear likely to be treated more harshly in general, in light of research showing that black children experience more punitive school discipline from preschool through high school — even when their white peers break the same rules. This appears to play out online as well: When an Alabama school hired an ex-FBI agent to scour students’ social media accounts, 86 percent of the students expelled as a result were black, in a school district that was only 40 percent African American.

As many Americans cheer the Parkland shooting survivors for their political activism, it is important to recognize the chilling effect of ongoing surveillance. While students’ privacy and free speech rights may be diminished when using school WiFi networks and school-issued devices, social media monitoring extends into their out-of-school social and recreational lives. Given that 92 percent of American teens go online daily and 24 percent are online almost constantly, monitoring programs can operate like listening devices that record every utterance and pass it on to school administrators. Yes, this scrutiny may on occasion reveal risky behavior that requires intervention. But far more often, it will squelch young people’s ability to express themselves — and probably drive conversations to communications channels that cannot be easily monitored.

This is not to say that schools should never look at students’ Facebook posts. But they should generally do so only when there is a reason — for example, when a student or parent has flagged concerning behavior or when the school is investigating online harassment or bullying. Every school must have in place policies available to parents, teachers and students specifying when it will look at social media postings. Such policies should be narrowly tailored to avoid impinging on the privacy and free speech rights of students, and they should limit the sharing of data with third parties and include procedures for deleting information when a child graduates or leaves the school, as well as safeguards to ensure that children of color are not unfairly targeted.

In the wake of yet another school shooting, Americans are understandably looking for ways to keep students safe. We should focus our attention on measures that have been proved to work, such as sensible gun controls and ensuring that parents and peers know whom to contact to report threats and to receive help, rather than expensive tools that are unlikely to make us secure but carry substantial costs for the very children we are trying to protect.

Rachel Levinson-Waldman serves as Senior Counsel to the Brennan Center for Justice’s Liberty and National Security Program, which seeks to advance effective national security policies that respect constitutional values and the rule of law.

Faiza Patel serves as Co-Director of the Liberty and National Security Program at the Brennan Center for Justice at NYU School of Law. She is also a member of the UN Human Rights Council’s Working Group on the Use of Mercenaries. Follow her on Twitter: @FaizaPatelBCJ