At the start of 2018, Facebook CEO Mark Zuckerberg pledged in an online statement that his personal challenge for the year would be to “fix” Facebook. The statement came after years of seeing the social media supergiant face public scrutiny over a long list of issues — including the platform’s role as an undercover mass surveillance tool, a vector for political and foreign interference, and a space where data on user behaviour could be harvested and sold to companies for targeted advertising.

“The first four words of Facebook’s mission have always been ‘give people the power’,” Zuckerberg’s statement read. “But today,” he continued, “many people have lost faith in that promise. With the rise of a small number of big tech companies — and governments using technology to watch their citizens — many people now believe technology only centralizes power rather than decentralizes it.”

While honourable, Zuckerberg’s statement unfortunately pays little mind to the business model that now dominates most of the Internet — one built almost entirely around data harvesting and surveillance. Worldwide digital ad spending was projected to reach an estimated US $333 billion in 2019, with the majority of that revenue going to Facebook and Google.

So, what exactly was Zuckerberg vowing to fix? Can we expect to one day see Facebook become a more decentralized, ethically-run social media platform? Or was Zuckerberg’s message nothing more than a litany of empty words, carefully crafted to placate his users?

Before we dig deeper into the mechanisms (and flaws) of Facebook’s business model, let’s review what likely prompted Zuckerberg’s pledge — and recap some of the company’s most prominent scandals.

An experiment on emotional contagion

Facebook first came under fire in 2014, when it was revealed that the platform had performed a massive-scale psychological experiment on nearly 700,000 unwitting users. The objective of this secret study was to determine whether users were influenced by the content shown in their newsfeeds, and whether that content triggered emotional responses (a phenomenon referred to as “emotional contagion”).

In short, the content that users saw on their timelines was manipulated behind the scenes. The study concluded that when positive sentiments expressed by a subject’s Facebook friends were reduced, “people produced fewer positive posts and more negative posts” — and conversely, when negative sentiments were reduced, more positive expressions were shared.

News of this experiment was met with outrage from Facebook users and researchers alike, with the general consensus accusing the social media firm of violating users’ privacy and tampering with their personal feeds without their knowledge or informed consent.

Unbeknownst to many, the manipulation of user behaviour was about to venture into a deeper, darker rabbit hole — and on a much more global scale.

The Cambridge Analytica scandal

In 2016, Facebook found itself at the centre of a now well-known political scandal involving the Brexit vote and Donald Trump’s US presidential election campaign. This would later prove to be arguably Facebook’s most notorious scandal on record, as well as the largest misuse of data in the platform’s history.

In March 2018, a whistleblower revealed that Cambridge Analytica, a political data firm, had harvested data on tens of millions of Facebook users and sold its use to influence large-scale political campaigns.

According to this leak, Cambridge Analytica was hired by Trump’s 2016 election campaign to collect private data on more than 50 million Facebook users, with the goal of using the firm’s research tools to target and influence American voters and help elect Trump as president. The initiative was funded by Robert Mercer, an affluent Republican donor, and backed by Steve Bannon, a prominent right-wing pundit and one of Trump’s former advisors.

Leaked emails also indicate that Cambridge Analytica assisted the “Leave” side of the 2016 Brexit referendum, helping to target and influence British users into voting for the United Kingdom to leave the European Union.

Facebook’s policies state that user data may not be sold or transferred “to any ad network, data broker or other advertising or monetisation-related service”. In the wake of the revelations, Facebook asserted that this is exactly what Cambridge Analytica had done by selling user profiles to political campaigns, and those associated with the firm were subsequently banned from the platform.

However, the episode has prompted deeper questions about which companies Facebook allows to operate on its platform. What made Facebook the ideal grounds for a firm like Cambridge Analytica to harvest data on such a massive scale?

The problem with self-service and surveillance capitalism

In the wake of Facebook ads being used as tools for political interference, the company has also seen its self-service advertising platform become the subject of public scrutiny.

Facebook provides a self-service ad platform designed for ease of access. Essentially, anyone can advertise just about anything on Facebook — so long as they have a valid method of payment. As a result, most Facebook ads are purchased with zero correspondence between Facebook and the advertiser.

Zuckerberg’s argument for this setup? “We don’t check what people say before they say it, and frankly, I don’t think our society should want us to.”

However, as demonstrated by its earlier covert experiments on users, Facebook clearly has the resources and technology to monitor the content published on its platform. Unfortunately, it is highly unlikely we will see this happen for the sake of ethics — any reduction in the number of paying advertisers is a direct threat to the company’s economic model.

The business model behind most of today’s online platforms is built around maximising “dwell time” — capturing users’ attention and holding it for as long as possible. This goes hand in hand with surveillance capitalism: a system whereby our personal data is surveilled, collected by big data companies and sold to third parties.

It should come as no surprise that when it comes to what Facebook allows users to share on its platform, ethics largely fall by the wayside. Violent content, such as videos depicting beheadings, is even permitted (merely prefaced with a warning). Controversy loves company — and at the end of the day, it’s all about what keeps users glued to their screens.

On top of that, Zuckerberg’s economic model extends far beyond Facebook itself.

The marriage of Facebook and Instagram

In the last decade, Instagram has joined Facebook as one of the most widely used social media applications. Offering a cleaner, less complex alternative to Facebook’s interface, Instagram revolutionised photo sharing, on-the-go photography and modern advertising.

Founded by programmers Kevin Systrom and Mike Krieger, Instagram was acquired by Facebook in 2012 for US $1 billion (this was soon followed by their acquisition of WhatsApp in 2014, for US $19 billion).

However, recent reports have shed light on Zuckerberg’s growing fear of seeing users ditch Facebook for an alternative platform. Worried that Instagram’s success might cannibalise Facebook’s, Zuckerberg began restricting resources tied to Instagram. As a result, Systrom and Krieger’s independence and oversight of the app were eventually rescinded, prompting their decision to leave the company.

Today, Instagram comes equipped with ever more Facebook-like features — notifications, recommendations for users to follow and, you guessed it, personalised ads. It’s the perfect place for Facebook to expand upon the economic model it has set up for success.

Zuckerberg’s singular solution

Today, Zuckerberg oversees the four most downloaded mobile apps of the last decade: Facebook, Facebook Messenger, WhatsApp and Instagram.

A year after his pledge to make Facebook better, Zuckerberg announced plans to integrate the direct messaging services of each leading app into one unified interface. The key motivation behind this initiative, he said, was to provide end-to-end encryption across a single messaging system — a response to accusations that the company disregards its users’ privacy.

However, other sources have suggested that this plan may actually be a veiled attempt to head off an anti-trust lawsuit that could force Facebook to sell Instagram and WhatsApp. As it turns out, the new terms and conditions effectively absolve Facebook of accountability for data travelling across its messaging tools — an arrangement that, in the face of potential anti-trust litigation, works very well in Zuckerberg’s favour.

How can Facebook adopt a more ethical approach? Or is this even possible?

Today, big tech companies are among the most profitable and best-capitalised businesses in the world, holding both the technology industry and the global economy in their clutches. The rise of surveillance capitalism has enabled these firms to wield an incredible level of social and cultural sway over the way we communicate, share knowledge and think.

At the end of the day, it is these experiences, refined into data, that fuel their engines — all at the expense of our personal freedom and democratic rights.

So, will it ever be possible for Mark Zuckerberg to really “fix” Facebook? Or should we simply accept that mass social, ethical and political manipulation may continue to be part and parcel of having an online presence?

Author Evan Osnos just might frame it perfectly: “The era when Facebook could learn by doing, and fix the mistakes later, is over. The costs are too high.”

On the other hand, some suggest that a serious enough threat to Facebook’s reputation could hurt its stock price, finally pushing Zuckerberg to adopt more ethical oversight of the platform in order to improve the company’s public image.

As we navigate an era of increased social media use, changing tech governance and growing awareness of what happens to our online data, perhaps only time will tell.
