Comment Writer Erin Osgood discusses the dangers of fake news, examining the recent example of Twitter having to suspend fake accounts pretending to be black Trump supporters, and the damage caused by ‘digital blackface’
It is common knowledge that President Trump struggles to attract diverse supporters. In 2016, black and Hispanic voters overwhelmingly backed Hillary Clinton for the presidency, and a targeted disinformation campaign is now attempting to change that picture in time for election day. Twitter has suspended many fake accounts professing to be black Trump supporters, several of which had tens of thousands of followers.
Fake news, its seemingly unstoppable spread, and the at times Sisyphean fight to stop it have been cornerstones of the Trump administration. Social media sites, particularly Facebook and Twitter, are more than ever vehicles for the dissemination of disinformation, and this revelation again calls into question how much we can trust what we read online. With the instant gratification provided by social media – and the lack of fact-checking inherent in how these sites are used – what do such tactics of deception mean for the election?
According to a report from the US Senate Select Committee on Intelligence, this is familiar territory. It found that in 2016 Russian operatives intended for American users to ‘deepen their engagement’ with fake accounts by sharing personal information, largely through signing petitions and attending events, presumably all in an effort to undermine Secretary Clinton’s chances in the presidential election. This raises an obvious question – why has Twitter not done more in the years since to protect against fake news?
One of the more alarming aspects of fake news to come to light in the years since President Trump’s election is just how sophisticated an operation it can be. This is not simply the work of a small number of people acting independently, or sensationalist headlines being taken out of context. A study by the Knight Foundation discovered that, in the run-up to the 2016 election, large clusters of Twitter accounts produced coordinated (or even automated) content disseminated from a number of conspiracy websites. Many of the tweets were identical, as if copied from a script – just like those claiming to be black Trump supporters today. Twitter, then, did nothing.
Because the accounts were active at such regimented intervals, regardless of the time of day or night, the study concluded they were behaving too inorganically to be deemed human Twitter users. Accounts from very different clusters – sets of accounts that follow similar users – would share the same conspiracy theories under hashtags such as #WikiLeaks. The study says: ‘this level of message discipline, across very different clusters, is nearly impossible for spontaneous human tweet activity…Many still believe that fake news is spread by thousands of small, independent sites. The study revealed this is not accurate.’
If fake news spreaders like these act in such obviously inorganic ways, they should be much easier to spot – and therefore to suspend. In an increasingly online world, the validity of the news and information we consume matters more than ever, meaning that Twitter must hold itself accountable for the spread of disinformation that could influence such contentious politics. Activist Munroe Bergdorf has commented on the situation on Instagram, stating: ‘this action is way overdue and should have been taken care of a long time ago.’
Bergdorf also stated that ‘white supremacist social media users who pretend to be black people to spread conservative propaganda and racist content, has been an issue that black activists will be all too familiar with… Only until recently, it was more likely for the profile of an actual black person to [be] banned or restricted for speaking up against white supremacy, than it would be for blackfishing accounts to be reprimanded or deleted.’
This issue extends far beyond one election. Not only will the spread of conspiracies create an electorate less trusting of traditional news media, but such a tactic of deception is damaging to the African American community. In co-opting the identities of black people for Trump’s political gain, these accounts are aligning African Americans – largely without their knowledge or consent – with a known racist. Researchers have coined the phrase ‘digital blackface’ to describe the effects; this is far more than mistaken identity.
This action from Twitter is more than welcome to many, but is it a case of too little, too late? The Knight Foundation’s report was published before the 2018 mid-term congressional elections, so why is the social media giant only now recognising the issue? Nor is this isolated to Twitter alone; Facebook, famed for its influence during the 2016 election, has also removed hundreds of accounts posing as African American Trump supporters. The reach of social media is hard to quantify, and the extent of the epidemic that is fake news even more so.
The ongoing spread of disinformation is a targeted, extensive, and aggressive operation, using covert tactics to trick users into sharing fake news. It is encouraging to see Twitter finally confront the problem, but with the presidential election less than two weeks away, and early voting well underway in many areas, as Bergdorf says, ‘the damage in many cases has already been done.’