Columns, Opinion

People with Projects: Is a preemptive strike effective? Facebook thinks so.

With so many accusations of sexual harassment flooding the news, it can be hard to retain a positive attitude about humanity. It seems that most people’s morals nowadays are either twisted or missing altogether. But in an interesting way, new technology at Facebook could help those who feel powerless. And they plan to do it by… sending nudes?

Facebook recently announced an initiative to combat what it refers to as revenge porn by allowing users to preemptively send in intimate pictures of themselves. A user might do this if they felt a certain picture could be used against them by, say, an ex-lover or a blackmailer. A database would then use artificial intelligence to store information derived from the image (not the image itself), using tags and matching algorithms to stop anyone from posting the picture before it ever reaches the site. For now, the program is being tested solely in Australia.

When I first heard about this technology, I thought it was outlandish to say the least. It felt counterintuitive — having to submit photos that you’d never want to see the light of day in order to keep them from surfacing. What’s more, why should anyone trust Facebook? Who’s to say Facebook employees aren’t printing out those private images and bringing them home? Just a couple of weeks ago, a Twitter employee deleted the POTUS’ account before leaving the company. Although that event was met with comical praise on the Internet, a similar rogue action at Facebook would not be comical in the slightest.

However, after giving it more thought, I realized that Facebook’s attempt to thwart revenge porn comes from a place of benevolence. The company’s anticipatory approach is the first of its kind, and if successful, it will eliminate the daunting period between when an unwanted photo is shared, when the affected user sees and reports it, and when Facebook finally takes it down. By the time the photo is removed in that scenario, irreversible damage may already have been done. Now that I understand Facebook’s reasoning, I praise the company for trying to rid the site of opportunities for someone to end up a victim.

But aside from stopping vindictive ex-lovers and blackmailers, could Facebook use similar technology to cut down on Internet bullying of younger users? None of the articles I read mentioned the idea.

I went to high school once, so I know that even pictures with zero traces of nudity or graphic content can still harm someone’s self-confidence when they’re posted without permission. Situational pictures like these can feel like the end of the world to kids whose entire lives seemingly depend on how they’re perceived at school — but if they knew they could take preventative steps to keep something offline, maybe some of that confidence could be preserved. If the algorithms work to identify unwanted nude photos, why not tweak the program to identify unwanted photos, period?

Thinking about Facebook’s new feature in this regard, I would’ve loved to have it around when I was in high school (back when I had a Facebook, anyway). I probably would’ve wasted my time submitting photos of me with a conspicuous zit, or of me looking freakishly weird — but these wouldn’t qualify as reasons for Facebook to take a photo down. One of the safeguards in the new program is that users must fill out a form explaining why they don’t want a certain picture posted, which draws a clear line between a merely unflattering photo and an actually harmful one.

It will be exciting to see where Facebook takes its preventative photo feature, should it test well in Australia and be expanded around the world. For now, I believe they are working towards a respectable cause, striving to provide justice where they can.
