The Terrifying Lesson We Need To Learn From Facebook's Recent Connectivity Outage

Yeshaya Gedzelman


On October 4th, Facebook and its subsidiaries (Instagram and WhatsApp) were hit with an almost six-hour-long outage that saw the company lose around $65 million in ad revenue and 4.9% of its stock value by the time it was over. The incident came at a sensitive time for Facebook: the congressional testimony of Frances Haugen, a former executive in its misinformation branch, was set to take place the following day, October 5th. Haugen accused Facebook of not doing enough to combat misinformation and prevent the spread of harmful content, painting a picture of a company indifferent or oblivious to the negative ramifications of Instagram's impact on its users and society as a whole, with particular focus on how Instagram has affected the mental health of users under age 18. The social network's recent debacles have further amplified public focus on the extent to which billions of people rely on Facebook and its companies, and on the power that it and other information technologies command, not only because of their financial success but also because of their control over vast amounts of user data.

Facebook's abnormally long, approximately six-hour online hiatus was tremendously costly for Mark Zuckerberg and his company as a whole, and its primary losses may not even be easily measured in money. The outage will likely have a powerful but immeasurable effect on its consumer relations and shake public confidence in its ability to secure and protect its network. Following its offline fiasco, Facebook released a statement on the matter, laced with a bit of coding jargon, explaining that the incident stemmed from changes on the "backbone routers that coordinate network traffic between our data centers" that "caused issues that interrupted this communication", and assuring the public that it has "no evidence that user data was compromised as a result of this downtime". The error allegedly caused by the backbone router changes not only took Facebook and all of its associated companies offline, but actually locked employees out of their offices (due to an inability to use their electronic IDs), further complicating efforts to restore connectivity and adding to the chaos at headquarters.
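For the technically curious, here is a minimal sketch (my own illustration, not drawn from Facebook's statement) of how an outside observer could check whether a site like facebook.com is reachable during an outage of this kind: first see whether the domain still resolves in DNS, then try opening a connection to it. The domain list, port, and timeout below are just example choices.

```python
import socket

def probe(domain: str, port: int = 443, timeout: float = 5.0) -> str:
    """Report whether a domain resolves and accepts a TCP connection."""
    try:
        ip = socket.gethostbyname(domain)  # DNS resolution step
    except socket.gaierror:
        return f"{domain}: DNS resolution failed"
    try:
        # Attempt a plain TCP connection to the resolved address.
        with socket.create_connection((ip, port), timeout=timeout):
            return f"{domain}: reachable at {ip}"
    except OSError:
        return f"{domain}: resolved to {ip}, but connection failed"

if __name__ == "__main__":
    for site in ("facebook.com", "instagram.com", "whatsapp.com"):
        print(probe(site))
```

On an ordinary day each line prints "reachable"; during the October 4th outage, checks like this against Facebook's domains would simply have failed.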

The issue of Facebook's data and its protection (or lack thereof) of the privacy of its users has been a public concern for some time. In 2015, British consulting firm Cambridge Analytica used an app called This Is Your Digital Life to collect data and build profiles on an estimated 87 million Facebook users, helping the political campaigns of Ted Cruz and Donald Trump enhance their marketing strategies by identifying relevant audiences for their advertising efforts. Although Facebook later apologized, removed the app in 2015, and took action to have the data destroyed, the Cambridge Analytica affair highlighted the explosive nature of social media data and its potential misuse, an issue that will only gain more relevance in the foreseeable future.

Even if we accept Facebook's assertion that there wasn't a breach of its data during the recent outage, the catastrophic danger of Facebook's data (as well as that of other social media sites) falling into the wrong hands remains. Certainly, it is in Facebook's interest to safeguard the data of its billion-plus users, and it very likely has extensive measures in place. However, if at some point in the future that changes, the societal implications would almost certainly be massive. It's important to consider that the data collected and later sold by Cambridge Analytica was relatively impersonal (it reportedly built psychological profiles of Facebook's users to determine and target specific political preferences) compared to what it could have been. Imagine, for example, that every person who has or has had a Facebook, Instagram, or WhatsApp account (and uses those accounts for messaging) had every single one of their messages suddenly visible online. Unless those people were squeaky clean and had never sent an angry, hypocritical, or otherwise inappropriate and embarrassing message, there could be tremendous societal upheaval.

Privacy helps humanity to function because it allows us room to be imperfect (often very imperfect) and to grow by learning from our mistakes. Any rational person would admit, if they were being honest, that no one is perfect, that everyone has made mistakes, and that almost everyone has said at least a word or two that was later regretted and learned from (or perhaps not). The idea of maintaining separate sets of behavior, thoughts, and words depending on whether a person is in front of a crowd or a single friend is unsettling because it insinuates a level of hypocrisy, but humans are not robots or saints. Everyone has said things to a friend that they wouldn't say in front of a crowd, and how would humanity function if every phone call, word, or message exchanged were held up to public scrutiny? George Orwell's 1984 gives a chilling vision of a dystopian future, describing a society controlled by technology and beset by paranoia and fear as a result of a complete lack of privacy (for example, the constant monitoring by the Thought Police), even in people's homes, leaving no room for error.

Social media is defined by people sharing what they WANT others to see, creating the illusion of a life filled with moments disproportionately happier and more exciting than their own (with an unquestionable impact on mental health). Beyond exciting trips or memorable meetings with friends, it can often paint a picture of a person as more moral than he or she really is, because no sane person enjoys sharing actions or beliefs that they themselves consider negative (even if others might see them very differently).

The real threat doesn't stem only from known companies such as Cambridge Analytica; it may come from inside Facebook, from a company official who could be incentivized to sell a user's (or users') data to an interested party with malicious intent. Alternatively, a malicious actor might simply hack Facebook's data directly. Regardless, the day is coming (if it hasn't already) when Facebook's data will be used to ruin someone's reputation or life for political or economic gain (although I pray I'm wrong). Building support to address this concern over data privacy can be challenging, because the issue of data and likely human hypocrisy touches a vulnerable, sensitive, and very imperfect human place. Unquestionably, there are levels to a person's hypocrisy, to the difference between an individual's public and private faces, and to what we can and should tolerate. However, before passing judgment on this person (or persons), whoever they may be, we should ask ourselves two questions: have we ever been hypocritical in this or a comparable way? Do we have friends whom we accept and care about who have been? If the answer is yes and we're considering joining the chorus condemning this individual anyway, it may be wise to heed the warning of Everlast (in his song "What It's Like") not to judge others; otherwise, "then you really might know what it's like, to have to lose".
