At the start of 2018, Facebook CEO Mark Zuckerberg pledged in an online statement that his personal challenge for the year would be to “fix” Facebook. The statement came after years of watching the social media supergiant face public scrutiny over a long list of issues, including the platform’s role as an undercover mass surveillance tool, as grounds for political and foreign interference, and as a space where user behaviour could be harvested and sold to companies for targeted advertising.
“The first four words of Facebook’s mission have always been ‘give people the power’,” Zuckerberg’s statement read. “But today,” he continued, “many people have lost faith in that promise. With the rise of a small number of big tech companies — and governments using technology to watch their citizens — many people now believe technology only centralizes power rather than decentralizes it.”
While honourable, Zuckerberg’s statement unfortunately pays little mind to the business model that now underpins most of the Internet: one built almost entirely around data harvesting and surveillance. Worldwide digital ad spending was projected to reach an estimated US$333 billion in 2019, with the majority of that revenue going to Facebook and Google.
So, what exactly was Zuckerberg vowing to fix? Can we expect to one day see Facebook become a more decentralized, ethically-run social media platform? Or was Zuckerberg’s message nothing more than a litany of empty words, carefully crafted to placate his users?
Before we dig deeper into the mechanisms (and flaws) of Facebook’s business model, let’s review what likely prompted Zuckerberg’s pledge — and recap some of the company’s most prominent scandals.
An experiment on emotional contagion
Facebook first came under fire in 2014, when it was revealed that the platform had performed a massive-scale psychological experiment on nearly 700,000 unwitting users. The objective of this secret study was to determine whether users were influenced by the content shown in their newsfeeds, and whether that content triggered emotional responses (a phenomenon referred to as “emotional contagion”).
In short, the content that users saw on their timelines was manipulated behind the scenes. The study concluded that when more positive sentiments expressed by a subject’s Facebook friends were reduced, “people produced fewer positive posts and more negative posts” — and conversely, when the negative sentiments were reduced, the opposite reaction was generated and more positive expressions were shared.
News of this experiment was met with outrage from Facebook users and researchers alike, with the general consensus accusing the social media firm of violating users’ privacy and manipulating their newsfeeds without their knowledge or informed consent.
Unbeknownst to many, the manipulation of user behaviour was about to venture into a deeper, darker rabbit hole — and on a much more global scale.
The problem with self-service and surveillance capitalism
In the wake of Facebook ads being used as tools for political interference, the company has also seen its self-service advertising platform become the subject of public scrutiny.
Facebook provides a self-service ad platform designed for ease of access. Essentially, anyone can advertise just about anything on Facebook, so long as they have a valid method of payment. As a result, most Facebook ads are purchased without any direct correspondence between Facebook and the advertiser.
Zuckerberg’s argument for this setup? “We don’t check what people say before they say it, and frankly, I don’t think our society should want us to.”
However, as its covert social experiments demonstrate, Facebook clearly has the resources and technology to monitor the content published on its platform. Unfortunately, it is highly unlikely that we will see this happen for the sake of ethics, as any reduction in the number of advertisers paying to use its services is a direct threat to the company’s economic model.
The business model behind most of today’s online platforms is built to maximize “dwell time”: targeting online users and capturing their attention for as long as possible. This goes hand in hand with surveillance capitalism, a system in which our personal data is surveilled, collected by big data companies and then sold to third parties.
It should come as no surprise that when it comes to what Facebook allows users to share on its medium, ethics pretty much fall by the wayside. Violent content, such as videos depicting beheadings, is even allowed on the platform (prefaced only with a warning sign). Controversy loves company — and at the end of the day, it’s all about what keeps users glued to their screens.
On top of that, Zuckerberg’s economic model extends far beyond Facebook itself.
Zuckerberg’s singular solution
Today, Zuckerberg presides over the four most downloaded mobile apps of the last decade: Facebook, Facebook Messenger, WhatsApp and Instagram.
A year after his pledge to make Facebook better, Zuckerberg announced plans to integrate the direct messaging services of each leading app into one unified interface. The stated motivation behind this initiative is to provide end-to-end encryption across a single messaging system, in response to the company’s alleged disregard for its users’ privacy.
However, other sources have suggested that this plan may actually be a veiled attempt to head off an antitrust lawsuit that could force Facebook to sell Instagram and WhatsApp. As it turns out, the new terms and conditions effectively lift accountability from Facebook for any data that travels across its messaging tools. And in the face of potential antitrust litigation, this agenda works very well in Zuckerberg’s favour.
How can Facebook adopt a more ethical approach? Or is this even possible?
Big tech companies are amongst the most profitable and most highly capitalized businesses in the modern world, with both the technology industry and the wider economy in their clutches. The rise of surveillance capitalism has enabled these large firms to wield an incredible level of social and cultural sway over the way we communicate, share knowledge and think.
At the end of the day, it is these experiences, refined into data, that fuel their engines — all at the expense of our personal freedom and rights to democracy.
So, will it ever be possible for Mark Zuckerberg to really “fix” Facebook? Or should we simply accept that mass social, ethical and political manipulation may continue to be part and parcel of having an online presence?
Author Evan Osnos just might frame it perfectly: “The era when Facebook could learn by doing, and fix the mistakes later, is over. The costs are too high.”
On the other hand, others suggest that a serious threat to Facebook’s reputation could hit its stock price, finally pushing Zuckerberg to adopt more ethical oversight of the platform in order to improve the company’s public image.
As we navigate an era of increased social media use, changes to how tech companies are governed and growing awareness of what happens to our online data, perhaps only time will tell.