The Future Of School Safety Includes Round-The-Clock Surveillance Of Students

Tim Cushing | TechDirt

To go to school is to be surveilled, on campus and off. The average school hosts a number of cameras, and the average school administration is always looking for more ways to keep tabs on students, even after they’ve gone home.

The move towards pervasive surveillance of off-campus activities is generally justified with the meaningless assertion that if it stops one person from shooting up a school (or just shooting themselves), it’s all worth it. Two articles based on public records requests — both written by Benjamin Herold of Education Week — show there’s a surveillance state being built one school district at a time. (h/t Amelia Vance)

Documents obtained by Education Week via open-records requests show that over the past year, state agencies have discussed the possibility of sharing a breathtaking amount of data. That included more than 2.5 million records related to Floridians who received involuntary psychiatric examinations, records for over 9 million people placed in foster care, diagnosis and treatment records for substance abusers, unverified criminal reports of suspicious activity, reports on students who were bullied and harassed because of their race or sexual orientation, and more.

This database is currently on hold as education officials try to figure out how much of this incredibly sensitive info they can share with other agencies, much less collect in the first place. But once these details are ironed out, the rollout will continue, urged along by Governor Ron DeSantis, who has publicly expressed his frustration that schools aren’t placing students under round-the-clock surveillance quickly enough.

It’s not just medical, psychiatric, and criminal records. This program also seeks to hoover up as much data as it can from students’ social media accounts and internet activities. The data-sharing concerns are being mitigated by a loophole in federal privacy laws. The Family Educational Rights and Privacy Act (FERPA) prohibits sharing much of this info, but provides an exception for “school officials.” Schools want to hand this information to law enforcement, and law enforcement officers stationed in schools (usually referred to as “school resource officers”) are the loophole districts are using to circumvent FERPA’s protections.

The collected documents listed above are concerning enough in their implications — namely, that schools consider at-risk students to be “threats.” Adding social media to the mix increases the number of students viewed as threats, because algorithms and haystacks tend to ignore important things like context or frame of mind. The software being sold to schools may expedite the collection of posts containing flagged terms, but it’s completely useless when it comes to doing more human things, like recognizing humor or sarcasm.

Relying on flagged terms means school security contractors are sorting through tons of garbage data. One company’s software flagged all of the following as potentially threatening:

Tweets about the movie “Shooter,” the “shooting clinic” put on by the Stephen F. Austin State University women’s basketball team, and someone apparently pleased their credit score was “shooting up.”

A common Facebook quiz, posted by the manager of a local vape shop.

A tweet from the executive director of a libertarian think tank, who wrote that a Democratic U.S. senator “endorses murder” because of her support for abortion rights.

And a post by one of the Brazosport district’s own elementary schools, alerting parents that it would be conducting a lockdown drill that morning.

“Please note that it is only a drill,” the school’s post read. “Thank you for your understanding. We will post in the comment section when the drill is over.”

The software also flagged this tweet, posted by a 31-year-old comic book artist who happened to be tweeting from inside Social Sentinel’s geofence:

The tweet reads:

Cat #1: pwease feed me thank you

Me: ha, cute.

Cat #2: IF YOU DON’T FEED ME RIGHT NOW I’M CALLING THE POLICE I CAN’T BELIEVE YOU’D TREAT ME LIKE THIS I AM DISAPPOINTED I DIDN’T RAISE YOU TO BE LIKE THIS

Me: I would die for you.

It’s a tweet about the artist’s relationship with her cats, and any human being would immediately recognize it as having nothing to do with self-harm. But there’s no room for nuance in automated keyword searches.
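The failure mode described above is easy to see in miniature. The sketch below is a hypothetical illustration — not Social Sentinel’s actual code, and the watchlist terms are assumed — of why bare keyword matching flags the benign posts quoted in this article just as readily as it would flag a genuine threat:

```python
# Hypothetical watchlist — assumed for illustration, not a vendor's real term list.
FLAGGED_TERMS = {"shoot", "shooting", "police", "lockdown"}

def flag_post(text: str) -> list[str]:
    """Return any watchlist terms found in the post.

    Matching is purely lexical: no context, tone, or intent is considered.
    """
    words = {w.strip(".,!?'\"").lower() for w in text.split()}
    return sorted(words & FLAGGED_TERMS)

# Every one of these benign posts from the article gets flagged:
posts = [
    "My credit score is shooting up!",
    "Please note that it is only a lockdown drill.",
    "IF YOU DON'T FEED ME RIGHT NOW I'M CALLING THE POLICE",
]
for p in posts:
    print(flag_post(p))  # non-empty list = flagged for human review
```

A real monitoring product layers more machinery on top, but as the examples above show, anything built on flagged terms inherits this blindness: the cat tweet and the credit-score tweet look identical to a threat at the keyword level.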

The head of Social Sentinel says people shouldn’t worry about the collateral damage. Instead, they should be thankful all this surveillance could theoretically prevent a student from harming themselves or others.

“If you’re responsible for the safety and security of a school, you have to pay attention to the places where harm is being foreshadowed,” said Gary Margolis, the CEO of Social Sentinel, which claims “thousands” of K-12 schools in 30 states are using its service.

Margolis said it’s unfair to focus on the false positives that may slip through a company’s monitoring system. Any harms pale in comparison to the benefits of what is caught.

That might mean something if meaningful things were actually being caught. But the filters searching for harmful material, threatening posts, and other concerning content are being used as a dragnet to sweep up students who do nothing more than use certain flagged words in benign ways. Monitoring company Gaggle flagged 3,000 incidents in Michigan schools over a six-month period; roughly 2,500 of those were minor violations like profanity.

It’s not just a Michigan problem. It’s a Gaggle problem.

The experiences of other K-12 Gaggle clients help illuminate such concerns.

Evergreen Public Schools in Washington state, for example, started using the company’s service this school year. Between September and mid-March, the system flagged more than 9,000 incidents in the 26,000-student district.

The overwhelming majority—84 percent—were for minor violations, such as profanity.

These programs are deployed under a number of faulty assumptions, not the least of which is that students have zero expectation of privacy. Students still have rights, even if those rights are somewhat diminished while they’re on campus. Schools and vendors also assume this pervasive surveillance will result in safer schools, even though there’s no evidence pointing to that conclusion. They may further be assuming the courts will back these efforts up when lawsuits arrive, but it seems unlikely many judges will agree that striving for school safety justifies widespread, untargeted surveillance of minors.
