Much has been made of employers seeking access to the Facebook profiles of potential employees, but a new piece of information that came to light last week shifts the debate about Facebook and online privacy rights to focus on what Facebook as a company is doing, not how others are choosing to use it.
On July 12, Reuters published an article by Joseph Menn in which Facebook’s Chief Security Officer Joe Sullivan discussed the software Facebook has in place to scan posts and chats for certain behavioral patterns and language connected to potential criminal activity.
In a very over-simplified nutshell, such programs analyze users’ relationships and monitor conversations for questionable language, including words used by convicted pedophiles in online chats that later led to crimes such as sexual assault. Flagged messages are scored by algorithms and, if enough indicators suggest someone may commit a crime, the data is marked for review by a Facebook employee and placed in a ticket system based on its priority level. The employee who reads the ticket can then pass the information to police, as was done in the chilling instance described by Menn in which a 30-year-old man wrote about his intention to have sex with a 13-year-old girl the following day. The man was arrested as a result of the post, and the two never met.
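To make the pipeline described above concrete, here is a minimal sketch of how a keyword-weighted triage system might work. To be clear: Facebook has not published its actual signals, weights, or thresholds, so every phrase, weight, and function name below is a hypothetical illustration of the general technique (score the message, and if it crosses a threshold, queue it for human review by priority), not the company’s real system.

```python
import heapq

# Hypothetical indicator phrases and weights. Facebook's real signals are
# not public; these values exist only to illustrate the scoring idea.
INDICATORS = {
    "meet tomorrow": 3,
    "don't tell your parents": 5,
    "how old are you": 2,
}
REVIEW_THRESHOLD = 5  # assumed cutoff for escalating to a human reviewer

def score_message(text):
    """Sum the weights of all indicator phrases found in the message."""
    lowered = text.lower()
    return sum(w for phrase, w in INDICATORS.items() if phrase in lowered)

def triage(messages):
    """Queue messages that exceed the threshold, highest score first.

    `messages` is a list of (message_id, text) pairs; the return value is
    a list of (negated_score, message_id, text) tickets in priority order.
    """
    queue = []
    for msg_id, text in messages:
        score = score_message(text)
        if score >= REVIEW_THRESHOLD:
            # heapq is a min-heap, so negate the score to pop highest first
            heapq.heappush(queue, (-score, msg_id, text))
    return [heapq.heappop(queue) for _ in range(len(queue))]
```

In practice a real system would rely on statistical models and relationship signals rather than a static phrase list, but the shape is the same: most messages score zero and are never seen by a person, and only the rare high scorers become tickets for an employee to read.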
Facebook’s software, according to Sullivan, has “a very low false-positive rate” of sending innocent communications forward to be reviewed by a human. He says the company’s intent was never to “set up an environment where we have employees looking at private communications,” which should be comforting to those concerned with invasion of privacy.
However, this low false-positive rate raises the question of whether the parameters are so stringent that legitimate dangers slip by, and a missed threat could have truly catastrophic effects, like the three sexual assaults of underage Skout users who had been contacted by adults masquerading as teens on the mobile app. (Skout temporarily shut down its teen section after news of the assaults went viral but has since reopened the service to minors.)
In order to best protect teens, Facebook places restrictions on how users younger than 18 can use the site and limits others’ access to minors: They do not appear in public searches, can only receive messages from friends of friends, and can only chat with friends. One huge caveat is that underage individuals can simply lie about their age to gain access to the unrestricted version of the site. But how much can Facebook reasonably be expected to do to guard the safety of its users without completely compromising privacy rights?
This question is food for thought for what Facebook is doing in general, not just what it is doing for minors. Do you think it’s altogether too creepy that there is a program — and possibly a person — monitoring your online activity, or are you glad that the software exists if it means crimes can be prevented? Sound off in the comments section below!