Governing the digital frontier
Despite the many complications and apprehensions regarding potential infringements on free speech, the regulatory framework for social media platforms must be reconsidered
As this article goes to print, lawmakers around the world are engaged in efforts to understand and regulate the workings of social media platforms. Many have called social media one of the greatest inventions of the 21st century. In its reach and function, social media has transformed human society, actualising the idea of a digital reality: an online world where people can 'live' their virtual lives. Over time, social media has evolved to integrate many other aspects of our lives, making the passage from offline reality to online reality ever more seamless.
Naturally, such large-scale changes to society over a relatively short period are bound to have consequences, both good and bad. Efforts to fully understand the effects of social media on the individual and on society as a whole are still in their infancy, but concerns have been raised. A frequently asked question is whether social media is actually connecting people or driving them apart. Some argue that while social media has connected us globally in ways no other technology to date has managed, it is also creating online echo chambers: spaces where like-minded individuals gather and affirm their shared beliefs, no matter how niche. This feeds into the natural human tendency labelled 'confirmation bias', which makes us seek, interpret and remember new information in line with our existing beliefs, creating a constructed narrative of reality.
Confirmation bias is not new to human society, which is why the core systems driving our civilisation, such as scientific thought and the legal system, are designed, at least in theory, to prevent this bias from manifesting on a larger scale. Social media, however, feeds this bias like never before. Some have gone so far as to say that confirmation bias is a large part of what draws people to social media in the first place. Platforms use algorithms to identify users' specific interests and inclinations and feed them a personalised online reality. As innovations to streamline access continue, the divide between online and offline reality blurs: more and more time is spent online in 'comfortable spaces', and offline reality is increasingly perceived through the lens of the online one.
It is not difficult to see the inherent dangers here. The notion of objective reality is weakened, as traditional standards of factual accuracy do not, in most cases, apply to social media. In recent years, lawmakers have taken note of the significance of a largely unregulated online world and its effects on how people think and behave in their daily lives. Social media companies have, over time, developed systems to self-regulate their platforms. But as many examples across the world show, there is no good or inherently correct answer when it comes to 'governing' this digital frontier.
Damned if you do, damned if you don't
As it stands, there is no clear and responsible winning option when it comes to regulating social media, either for lawmakers or for the companies themselves. It is a matter of degree and of balance, a balance that is yet to be achieved.
The 2016 US Presidential Election was a major turning point in the relationship between government and social media. Social media, with all its flaws and influence, played a far more significant role in a democratic election than ever before. It would be easy here to turn the conversation towards Donald Trump, for he plays a significant role in the unfolding tale of social media and governance, but he was not the only example. Campaigns on both the Republican and Democratic sides clashed in digital, sometimes petty, arguments to create a personalised context for potential voters to buy into. This came at a time when a significant number of people were using social media to share and consume 'news'. The 2016 election also saw tech companies, including the social media giants, emerge as major political donors on a scale surpassed only by oil and gas companies. Suddenly, big tech was interested in politics, and so was a far greater section of people across the world, who found it easier than ever to ease into political discussions around the election. In many ways, one of the Trump campaign's greatest strengths was, as some commentators put it, that his social media account represented his own voice and opinions, unfiltered and raw. Unlike the other candidates, who used the platforms in the traditional format of a campaign rather than a person, Trump created a pathway for ordinary, normally uninterested participants to become involved in politics through him.
The 2016 election was also allegedly the target of an active interference campaign, carried out in part on social media and supplemented by the hacking of DNC servers and a WikiLeaks release of problematic Hillary Clinton emails. Targeted social media ads and fake news websites were used to influence US voters. Then came the Cambridge Analytica scandal, which brought to public attention the fact that platforms like Facebook enable the collection and sale of user data for political ends. Cambridge Analytica used data procured through a quiz app to offer targeted solutions to those looking to swing undecided voters. It later emerged that such use of data was not unique to the US, with Cambridge Analytica playing a limited role in the UK's EU referendum as well. Amid reports of bot accounts and email hacks, there were one too many irregularities for social media companies to avert the inevitable fallout of public outrage and establishment concern. All of a sudden, the hidden aspects of our unbalanced relationship with social media were brought to the public forum, and Facebook became the poster child of the perceived encroachment on freedom by big tech. Facebook, like many other platforms, pledged to revise its way of functioning to be more oriented towards user privacy in an effort to win back public trust. Yet regardless of whether social media really had a turnaround moment on user privacy, the inherent issues that make it problematic to govern remained, and the social media giants were castigated in the public forum for not doing enough to regulate what was being posted on their platforms.
Thus, in the 2020 election, Silicon Valley attempted to do better and ventured into more active moderation not just of content published on its platforms but, more importantly, of political content. In the lead-up to the election, Twitter and Facebook made efforts to weed out fake news websites and extremist forums that violated their guidelines, reportedly thwarting influence attempts by the same Russian group responsible for the 2016 effort. Both Facebook and Twitter also expanded fact-checking capabilities to target misleading posts, though they differed in the degree to which these were applied. Facebook has largely opined that it is not the business of social media platforms to fact-check what politicians post, as political discourse is the most sensitive part of the idea of free speech. Twitter, on the other hand, took the significant step of fact-checking tweets by political leaders, most prominently Donald Trump, labelling them as disputed rather than removing them outright. Overall, commentators noted that the platforms' performance was a mixed bag this time, with Twitter doing the best and Google's YouTube reportedly doing the worst. The stakes were high for social media companies, with politicians on both sides of the partisan line looking on closely. Indeed, Mark Zuckerberg is known to have expressed his concerns about the 2020 election on an earnings conference call, where he warned of the possibility of active civil unrest over the result and the considerable test this might represent for Facebook.
The regulation attempts had their critics. Predictably, Donald Trump, his allies and the Republican Party as a whole were not fans, making enraged accusations of bias against Trump and the party. Democratic lawmakers, on the other hand, once again castigated the social media giants for still not going far enough to contain the situation.
Just how far should the establishment go in regulating the social media environment, particularly given democracies' concerns over encroachments on the freedom of speech? In non-democratic setups like China and Russia, it is much easier to see what too much regulation of social media looks like: government control of social media and the many narratives shared on it means such platforms become an ideal tool for state control of narrative and propaganda.
Given that such examples of regulation remain largely limited to a handful of authoritarian nations, it may be more instructive to examine the current direction of this debate in democracies, specifically large-scale, multicultural, multi-religious democracies such as the US and India, and how arguments over 'free speech' complicate the matter.
A tale of two democracies
Back in 2019, during a Congressional Committee examination of Facebook's new cryptocurrency Libra, New York Congresswoman Alexandria Ocasio-Cortez had an iconic confrontation with Facebook's Mark Zuckerberg over the latter's stance of not fact-checking statements or ads run by politicians or political campaigns. During the exchange, AOC questioned Zuckerberg about a hypothetical misinformation campaign she could run the following year, trying to determine just how far Facebook's policy on political speech would allow her to go if she were so inclined. As it turned out, quite far, with Zuckerberg agreeing that she could 'probably' run an ad campaign making the false claim that Republicans had voted for the Green New Deal, a liberal environmental agenda. Zuckerberg defended this by asserting that in democracies people should have the right to judge for themselves the words and conduct of the leaders they may or may not vote for, and that social media platforms like Facebook did not have the right to be arbiters of truth.
The whole exchange highlighted the two main defences social media companies often hide behind in regard to regulation: first, the belief that regulation, whether on their part or the government's, would have negative consequences for free speech; and second, the continued assertion that social media giants have a limited role in shaping politics. This denial is not unique to Facebook. In a recent hearing, Republican Ted Cruz asked Twitter's Jack Dorsey whether he thought Twitter had the power to influence US elections. When Dorsey answered in the negative, Cruz rebutted by asking why Twitter needed to block certain political content if it believed social media had so little influence. Dorsey claimed that Twitter's self-regulation was an attempt to create a safe environment where everyone was included. The other social media giants have mirrored such claims, generally insisting that they are neutral platforms with no political agenda. But the very act of censoring, or not censoring, a particular piece of 'free speech' is a political decision in and of itself, one with a significant impact on the electoral process.
In India, the potential ill-effects of social media are even more severe, with misinformation campaigns resulting in violence and division. Here too, social media is used by opinion-influencers to construct a particular narrative and target those who oppose it. As in the US, the authorities are making efforts to hold social media platforms directly liable for the content that exists on them. India has also had recent events that served to catalyse the conversation on regulation. An article in the Wall Street Journal accused Ankhi Das, the former head of public policy at Facebook India, South and Central Asia, of "alleged deliberate and intentional inaction to contain certain hateful content in India". It was alleged that Das advised Facebook that taking action against the content would harm the company's business interests in India. This was the second time that year Facebook had been in the spotlight, with Facebook India head Ajit Mohan facing the Parliamentary Committee on Information Technology over the alleged misuse of social media platforms.
More recently, Twitter sparked a major row in India when it surfaced that its location tag for Leh showed the city as part of 'Jammu and Kashmir, People's Republic of China'. Members of the Joint Parliamentary Committee on the Personal Data Protection Bill sought a response from the company while also questioning what processes and laws it follows when deciding to mute or amplify certain political speech.
Taking a stand
From the perspective that considers social media the last true bastion of 'free speech', any regulatory attempt is an intrusion with negative consequences. Free speech is an important and nuanced part of any democracy, and as a right it must be balanced with its own responsibilities. The question of misinformation or hate speech on social media platforms is ultimately one of who takes responsibility for it and who decides whether it is right or wrong. So far, the platforms themselves have taken only a limited stand in assuming such responsibility.
In the US, both the outgoing and incoming administrations have expressed concerns regarding the regulations that govern social media platforms. This specifically relates to repealing or revising Section 230 of the Communications Decency Act, the law that protects content platforms from liability for third-party content. At the time it was conceived, Section 230 was meant to promote the emergence of such platforms, and the moderation of problematic content was left in 'good faith' to the companies themselves. There has even been discussion of breaking up the largest tech companies to prevent monopolies. However, there is no clear plan of action on either front, as the US is embroiled in several crises that take precedence.
In India, a recently issued regulation for OTT platforms and online content holds platforms responsible for third-party news and information posted on them. While the Government sees the move as a necessary step in curbing the inherent chaos of social media, there are fears that such actions will diminish free speech. Others contend that the move levels the playing field, as other content providers are already regulated. How this will apply in practice to the vast range of content posted worldwide on a platform the size of Facebook or Twitter is yet to be clarified.
As a way of connecting the world, social media is undoubtedly a revolution. But with this connection comes the occasional amplification of fringe tendencies that are harmful to human life and society as a whole. This is part of the risk. It is when such tendencies are allowed to fester in the name of protecting the fragile concept of free speech that social media often becomes a toxic mirror to human society.