How advertising is making hatred and bigotry worse

superplane39
5 min read · Dec 22, 2020
A Tweet accusing Jews of being the “Synagogue of Satan”

$70,000,000,000. Seventy billion US dollars.

That’s what Facebook made from advertising alone in 2019[1], across 2.45 billion active users[2]. That comes out to about $28.50 per active user for the year.
Twitter made about $3,000,000,000 in 2019[3], across 330 million users[4]. That’s about $9 per user.
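If you want to check the arithmetic yourself, here’s a quick back-of-the-envelope sketch in Python using the figures cited above (the published numbers vary a little depending on the source):

```python
# Rough ad revenue per active user, using the 2019 figures cited above.
facebook_ad_revenue = 70_000_000_000   # ~$70B in ad revenue [1]
facebook_users = 2_450_000_000         # ~2.45B active users [2]

twitter_revenue = 3_000_000_000        # ~$3B in revenue [3]
twitter_users = 330_000_000            # ~330M active users [4]

print(f"Facebook: ${facebook_ad_revenue / facebook_users:.2f} per user per year")  # ~$28.57
print(f"Twitter:  ${twitter_revenue / twitter_users:.2f} per user per year")       # ~$9.09
```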

It’s no secret that advertising money is what keeps the Internet running; ads have been a part of Internet life for so long that many of us have simply started tuning them out without thinking about it, either not noticing them when they’re there or using an adblocker so that they don’t show up at all. But advertising remains the #1 source of revenue for a very large share of Internet-based companies, social media companies included.

The nature of advertising means that you want to put the ads in front of as many people as possible, to maximize your reach into the small percentage of people who will actually buy something that’s being advertised online. Naturally, this means that companies that want to make money from ads will want to build, and keep, as large a userbase as possible.

Now, let’s take a detour for a minute and discuss hate speech online.

Social media is to bigots as a pool of stagnant water is to mosquitoes

I’m personally more familiar with the Twitter cesspool of white supremacists and neo-Nazis, so that’s what I’m focusing on here for the most part.

Twitter is a breeding ground for white supremacists. Using code words and dogwhistles, they form a “community” of sorts, united by their hatred and fear of the other. The most obvious are the ones who revere Adolf Hitler and proudly display swastikas, but there are several different strains of white supremacist who all share the same spaces. In a bid to avoid being found by the people they hate, they use shorthand such as “J’s” for Jews (I’ve also seen “J€.ws”), “jogger” as a replacement for the n-word, and double lightning bolt emojis as a dogwhistle for the “SS” (the Nazi Schutzstaffel), among others.

They’re a suspicious bunch. They’re constantly infighting over whether or not women should be allowed in their spaces, accusing each other of being “feds” and reporting each other, and blocking half their followers in attempts to weed out the people they think are reporting them.

I have a file on my computer called “nazitwitteraccounts.md”. I started it as a way of keeping track of the reports I was making, since Twitter stopped giving me updates on my reports. The file now lists over 300 accounts.

Twitter does suspend white supremacist accounts… eventually.
Recently, they suspended a major account that had accrued thousands of followers, which I’m not going to name here. But this only happened after literal months of reporting this account, by dozens of people, on a nearly daily basis. (Yes, there’s a small network of people working to report these Twitter Nazis; I’m not too deeply involved, but I catch glimpses at times.)
During the months it took Twitter to suspend this account, this user was able to send hundreds of vile Tweets, connect white supremacists to each other so that they could coordinate, and try to convince more people to join them.

Hundreds of other accounts that spread hate and encourage violence continue to post freely with no restrictions, with Twitter only taking action once enough reports have piled up (and often not even then), or simply requiring that the users delete one or two specific Tweets while refusing to ban them outright. It’s extremely difficult to get Twitter to permanently suspend a user even for the most blatant bigotry.

What does this have to do with advertising, though?

So… what does Twitter’s (and Facebook’s) reluctance to remove hate-spreading accounts have to do with advertising revenue?

At the beginning, I wrote that Facebook makes roughly $28.50 per user per year, while Twitter makes roughly $9.
Every user these platforms suspend is one fewer user who will see their ads, and so a tangible loss in revenue. When your business model is based on getting as many people as possible onto your site, blocking certain users from it is, from a purely business perspective, a bad idea.
And so Twitter drags its feet for as long as possible, and Facebook looks the other way, because it’s not profitable for them to ban bigots.
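To put that incentive in concrete terms, here’s a small illustrative sketch of the same arithmetic from the platform’s side; the suspension count is a made-up number, not a real statistic, and the per-user figures are the rough 2019 averages from earlier:

```python
# Illustrative only: the suspension count is invented, and the per-user figures
# are the rough 2019 averages calculated earlier in the article.
revenue_per_user = {"Facebook": 28.57, "Twitter": 9.09}  # USD per active user per year
suspended_accounts = 10_000  # hypothetical number of banned accounts

for platform, arpu in revenue_per_user.items():
    lost = suspended_accounts * arpu
    print(f"{platform}: banning {suspended_accounts:,} accounts forgoes roughly ${lost:,.0f} a year in ads")
```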

So long as social media companies continue to prioritize ad revenue over proper moderation, social media will remain a breeding ground for hatred, allowing people to become radicalized and convinced that violence is the only answer, eventually leading to real-life atrocities. (Yes, there are examples.)
Now… admittedly, this is more of an issue for 4chan and similar sites that are even more loosely moderated than Twitter and Facebook. But oftentimes, regular old social media is where it all starts: people are exposed to the hateful ideology and gradually come to believe in it. (The same goes for the equally dangerous anti-vax conspiracies.) Only then do they move on to the echo chambers dedicated to that ideology.

Social media companies need to realize that prioritizing profit over moderation just isn’t working anymore.

Facebook has allowed anti-vaccine and anti-public-health conspiracies to grow almost unchecked. People believe COVID-19 is a hoax and that vaccines are poison due to Facebook’s long-standing refusal to remove posts that make those claims, allowing dangerous misinformation to proliferate and cause an untold number of deaths.
Twitter’s reluctance to take an active stance against the neo-Nazis and white supremacists using its platform to recruit and coordinate has enabled the radicalization of hundreds of people, encouraged violence against minorities, and emboldened the worst of humanity by refusing to remove the hatred spewing from their accounts.

Conspiracy theories on a mention of the COVID-19 vaccine

Until social media companies stop trying to maximize advertising revenue and stop being so reluctant to remove users who promote dangerous or harmful ideologies, things are only going to get worse.
