A Massachusetts judge issued Facebook a subpoena on Jan. 17 ordering the company to release data related to information that may have been improperly shared with Cambridge Analytica, a now-defunct British political consulting firm.
Massachusetts Attorney General Maura Healey aims to investigate the extent to which Facebook used and sold users’ personal information for profit. According to WGBH News, she believes it remains unclear how many app developers had access to that information and how they deployed it.
Facebook is fighting the subpoena by arguing that the information Healey wants is protected by the work product doctrine, which shields materials prepared in anticipation of litigation from being disclosed to opposing counsel. Healey’s office has declared this appeal meritless, and rightfully so.
The illegal collection and sale of user data obligates Facebook to comply fully with every step of this trial. And because CEO Mark Zuckerberg has already admitted some blame, the platform is also morally obligated to cooperate publicly. Its resistance so far calls Facebook’s priorities into question: does security actually precede profit, as the company claims?
Most likely not, as advertisements are the company’s primary revenue stream. So what does this business setup imply about social media’s responsibility in maintaining the borders of our privacy?
To a certain extent, we do have control over the type of information that goes into cyberspace and out of our hands. When Facebook users agreed to participate in a survey from the Cambridge Analytica app, “This is Your Digital Life,” they agreed to release certain answers for the purpose of academic research. What they did not consent to was giving a politically motivated firm all of their personal information as well as access to all of their social network’s personal information.
What they did not consent to was becoming vulnerable to a political directive that was informed by unlawfully gathered data.
The app didn’t even bother to quietly reword its terms and conditions. Even for the 300,000 people who took the survey, its invasiveness went beyond what users thought they knew and had allowed. Another 86,700,000 people were targeted and violated without their knowledge after Cambridge Analytica pulled the personal information of the survey-takers and of their connections on the site.
The principles this country was founded upon mean the onus shouldn’t fall on the consumer to protect their own information. What’s even more egregious about what has transpired, though, is the initial lack of transparency and Facebook’s unwillingness to take full responsibility.
The platform’s new preventative initiatives, such as restricting app developers’ access to data to prevent further abuse, do little for users’ sense of safety now that an already threadbare trust has been breached.
More broadly, this means the American public may grow suspicious of any process outside of social media that requires the release of personal information. Imagine what this means for crucial ventures that help our institutions understand us better: census data may become significantly less reliable, and academic researchers may find respondents far less forthcoming.
And all of this started with a Facebook loophole.