The power of social media platforms: who gets to have their say online?

Former President Trump has been banned from his favorite social media platform, Twitter. Having used Twitter during his presidency to make more noise than anyone probably wanted to hear, he claimed that the suspension of his @realDonaldTrump account meant he was silenced and robbed of his First Amendment right to free speech. The ban came days after Trump supporters breached the US Capitol. According to Twitter, Trump's messages glorified violence, violating the platform's terms of service.

While many will have breathed a sigh of relief watching the big media platforms stumble over each other to deplatform the outgoing President, the ban rightfully reignited a debate about who gets to govern speech online. Not because, as Trump argued, it is a First Amendment issue, but precisely because it isn't. This was about a private platform, Twitter, exercising its right to ban users who do not follow its rules. Even if the user in question happens to be the President.

The First Amendment is directed at the government, requiring it to stay out of regulating freedom of speech, the press, and freedom of religion and assembly. The decision to suspend Trump's account, however, was made by a private company, based on a rationale and decision-making process we'll likely never see the details of. Setting aside the global Trump fatigue we were all suffering from, the decision raises more questions than answers: Why didn't they ban him sooner? What does it mean for other users? What was the final straw? Who's next? Is this harming freedom of speech, or is there simply no such thing when you're operating on a privately owned platform?

The power of the… private company
The fact that private companies have this power can be very problematic, especially for average people who do not also have an official press corps, spokespersons, and an eager media apparatus to cover whatever views or opinions they want to express. And, as we have seen over the past years, that power is wielded very differently when it comes to safeguarding the rights of marginalised groups, racialised groups, women, and those who exist at their intersection, than when it comes to the rights of those who resemble the Donald Trumps of this world. Of course, platforms could use their power for good, such as getting rid of troll armies, racists, and spam, but when we try to think of one that actually does, it gets difficult.

Online speech regulation 
As we live more and more of our lives online, the internet plays a central role in how we exchange information and ideas. Online platforms like Twitter, Facebook and YouTube facilitate these exchanges, but we shouldn't forget that these are all private companies working to make a profit. This means that the "rules" for anyone expressing themselves with a Tweet or an Instagram post are also set by these companies; and over the years, their terms of service, community guidelines and numerous policies have ostensibly sought to make sure these platforms remain open to the public at large by banning users engaging in online abuse, threats, and the like.

How these rules are enforced, however, is anything but clear. For instance, on Twitter there have been numerous occasions where calls to 'please block this hateful yet powerful account' were dismissed or ignored. The Dutch hate platform Vizier op Links, which specialises in doxxing left-wingers, activists, and anyone who has a problem with Nazis, racists and fascists, comes to mind. Decisions to take down or "moderate" online content (or not!) are made behind closed doors, often by decision-makers who are too far removed from a specific context to properly understand what they are doing. Or, to help deal with the sheer volume of posts and tweets, these decisions have been automated, which often leads to disparities in how expression from marginalised groups is dealt with.

The fiction of the online "marketplace of ideas"
The cool thing to say is that freedom of speech is sacred: everyone should be able to say what they think, and interference should be limited. Instead, we should let the "marketplace of ideas" do its work: by letting as many views as possible circulate, eventually the best ideas will surface, goes the theory. Of course, as is the case with most libertarian marketplace theories, this one does not account for the power structures or historic inequalities on the foundation of which this "free and open marketplace" was built. Can you really rely on the best ideas eventually surfacing in a world with so much inequality? Especially in a day and age where that inequality keeps growing louder? This dynamic also plays out online: the currency of members of marginalised groups is not worth as much as that of those who have traditionally held positions of power in our societies. So how is this marketplace supposed to be a fair place? Why do some people pay fifty cents for their bread, or their right to be heard, while others have to pay five bucks extra?

As social media companies have been reluctant to share data with researchers, this is not easily quantifiable, but the lived experience of, for example, Black Facebook users paints a clear picture of marginalised groups routinely being silenced while offensive speech against them is protected.

Why so late?
Sure, there are good arguments for allowing more latitude for the speech of a head of state. Political speech enjoys heightened protection under international human rights law and, as some have argued, gives the public at large a unique opportunity to better understand the former President than would have been possible through official press releases and White House communiqués.

That being said, the decision to deactivate his Twitter account at the eleventh hour smells like hypocrisy and raises the question of why platforms decided to pull the plug so late. His online speech had been provocative (to say the least) from the very beginning. Arguably, waiting until the Capitol had been stormed was in fact too late to deplatform him: the harm had been done. And why, for example, was his incitement to violence against Black Lives Matter protesters last year not sufficient to take action?

Who are the real trolls?
Contrast this with the ease with which accounts of marginalised groups are removed from social media on apparently baseless grounds, and with the failure to properly address online harassment and bullying, and it becomes clear that platforms are failing to create an environment in which the concept of the "free flow of information" has any chance of materialising.

Given that much of our public debate nowadays takes place online, this is very bad news for democracy. One of the things that the right to free speech and the ability to share and impart information and ideas facilitates is an informed debate about matters that concern us all, in which everyone can participate. If specific groups are pushed out of that debate, be it through trolling or through unfair content removal and moderation practices, we essentially have a democratic deficit.

Where next?
The million-dollar question is: what should we do about this? First and foremost, human rights need to be the starting point for any initiative to regulate speech online: our international framework, which foresees a balancing of equal, and sometimes competing, rights, needs to underpin all regulation and policy. Ensuring that we create an environment in which everyone can access and share opinions and ideas will inevitably mean restricting some users of these platforms, to make sure there is space for a plurality of voices and not just the voice of the loudest, most aggressive bullies.

However, these decisions need to be made in accordance with human rights-based standards, in a clear and transparent process that allows those whose speech has been curtailed to seek redress through clear procedures. Here, the way financial markets are regulated offers an analogy: banks, like social media platforms, are private companies, but they are held to strict obligations that ensure public accountability. We can apply something similar to the Twitters and Facebooks of this world: they will need to make transparent how their algorithms work, what content gets promoted and what doesn't, what gets taken down, and why. In addition, anyone whose posts have been removed or whose account has been suspended should have a chance to be heard, not just the big and powerful.

It also means increasing companies' investment in meaningful content moderation: the disconnect between moderators and context, and the shortcomings of automation, won't be solved otherwise. These two areas of intervention are not inherently contradictory. If decisions on which content gets taken down and who will be denied access to platforms are made more carefully and transparently, and there are clear and accessible processes in place to challenge such decisions, both could work very well in tandem.

Illustration: Andrea de Santis


Nani Jansen Reventlow