TikTok and Our Shadowbanned Future

Will A.I. ever be able to discern ‘hate speech’ from ‘hated speech’?
White knight captures black queen. Checkmate.   
A human being can immediately recognize this as a move from a chess game. But to a smart-yet-dumb social media algorithm, it can seem rather sinister: a call to violence, the use of racialized adjectives, and an invocation of what must surely be the White Knights of the Ku Klux Klan!  
This is not merely hypothetical. In early 2021, YouTube’s algorithm flagged a number of videos on popular chess channels for hate speech, including some featuring world grandmasters. A subsequent experiment by an artificial intelligence researcher found that more than 80 percent of chess videos or comments that had been flagged for hate speech were false positives. The algorithm had attempted to filter out racist speech but had accidentally swept up four times more innocent speech in its net.   
The case might sound esoteric, but the episode is a warning of what happens when platforms ramp up algorithmic content moderation. A similar false-positive problem is coming for more substantive domains, including politics and culture, where it could chill political dissent and squelch social movement formation.
As a former moderator myself, I think the answer at least partially lies in empowering human moderators to do the heavy lifting. Keep in mind – moderators can be, and sometimes are, paid employees, as I was. Sometimes they are classified under different names, like “community engagement specialist,” but they are moderators nonetheless. With the right training and culture, they can handle the more complex nuances while the algorithm flags potentially troublesome content for their approval. A team of professional moderators could also maximize “crowdsourced” intelligence by discerning who has credibility in the community – and who might have bad motives.

But, as you said, moderating at scale is difficult. In order for something like this to work on a site like TikTok or Twitter, you would need hundreds, if not thousands, of moderators – which comes with its own headaches. And oftentimes these moderators are under-supported by their employers – after all, there’s a reason we prefer algorithms and AI: they can browse through all the horror of the internet without suffering the depression and PTSD that come from seeing the worst humanity has to offer (or at least until the AI becomes self-aware and goes all Skynet on us).
Thank you for writing this piece.
A few months back, I was engaged in a college football discussion on Facebook.

I caught a three-day ban for saying “I hate the Irish.” Apparently, neither the AI nor the appeals process understood the college football context – the Notre Dame Fighting Irish – and labeled me as someone who hates people from Ireland.
Kudos to the author for providing a proposal that can be considered, discussed, and potentially implemented. I do wish the proposal were presented up front instead of two-thirds of the way through the piece, but perhaps the proposal wasn’t the point. If it were, more space might have been spent developing it instead of rehashing a problem that could have been introduced in a sentence or a paragraph at most.

The crowdsourcing idea is interesting, at least as an academic exercise, and worth considering. At the moment, I don’t think crowdsourcing can replace automated content moderation (I could be convinced otherwise), but I can see it working in parallel.

The challenge, of course, is time. Crowdsourced moderation takes time and people. For non-controversial topics, Wikipedia (the author’s crowdsourcing example) relies on a core of dedicated volunteers. For controversial topics, the arguments seem never-ending. As the author suggests, some share of moderated content (the exact share isn’t clear) consists of false positives that a human could dispose of easily.
According to the article, about 1% of Twitter posts contain hate speech, but it doesn’t suggest how many of those are false positives. A quick search reveals that there are about 500 million posts a day; 1% of that is about 5 million posts for review *every day.* That’s still an awful lot of content to moderate.
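For a rough sense of scale, here’s a back-of-the-envelope sketch in Python. The 500 million posts/day and 1% flag rate come from the discussion above; the 80% false-positive rate is borrowed from the chess-video experiment, and the reviews-per-moderator figure is purely an assumption for illustration:

```python
# Back-of-the-envelope moderation workload estimate.
# All inputs are assumptions drawn from the discussion above,
# not measured platform data.

POSTS_PER_DAY = 500_000_000    # rough daily Twitter volume cited above
FLAG_RATE = 0.01               # ~1% of posts flagged as potential hate speech
FALSE_POSITIVE_RATE = 0.80     # ~80% figure from the chess-video experiment
REVIEWS_PER_MOD_PER_DAY = 200  # assumed throughput for one human reviewer

flagged = POSTS_PER_DAY * FLAG_RATE              # posts needing human review
false_positives = flagged * FALSE_POSITIVE_RATE  # harmless posts swept up
moderators_needed = flagged / REVIEWS_PER_MOD_PER_DAY

print(f"Flagged per day:        {flagged:,.0f}")            # 5,000,000
print(f"Likely false positives: {false_positives:,.0f}")    # 4,000,000
print(f"Moderators needed:      {moderators_needed:,.0f}")  # 25,000
```

Even at an optimistic 200 reviews per moderator per day, clearing that queue would take on the order of 25,000 full-time reviewers, which puts “hundreds, if not thousands” in perspective.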

In many ways, this is a modern problem. I don’t see how analogizing to history, particularly 18th-century history, provides a workable approach.

Let’s also not forget that these platforms are not governments but private businesses. I understand that the test for when a private actor is treated like a government is pretty narrow. Can the government mandate zero moderation of what is, in truth, a private forum? Is this an area of modern life where the marketplace shouldn’t be allowed to decide? On the other hand, can the government mandate a specific mode of content moderation? That, to me, seems worse than mandating zero moderation.

As the article suggests, the option to unplug is always available. Another commenter made a similar point, saying that users will simply vote with their feet. Should the (mainly blameless) users be the ones to suffer? Moreover, does anyone legitimately think that a completely unmoderated forum is a viable business option? The existing examples suggest not, at least beyond the short term. I suppose a revolving door of social media enterprises, created by users migrating from one to the next to avoid the hatemongers, isn’t substantially different from the existing rise and fall of social media over the last few years.

Ideally, people wouldn’t be the monsters that they can be and we could maintain our norms and principles without having them taken advantage of by bad actors. Right now, it feels like we’re held hostage by the worst examples of humanity. The system was premised on a certain minimal level of civility.

It seems like the civility floor has been shattered in the modern era. I suspect that’s an illusion, however. I’m sure that, with effort, examples as horrible as today’s can be found in the past. What has changed is the amplification factor. In the past, amplification was expensive; now it’s cheap. After a while, the increasing number and volume blend together into background noise, or else we all go deaf. I’m not sure which is worse.
Good article – ultimately the moderation debate is going to fall to each social media platform. Users who object have a world of choice these days.
Couple of things here:

1. I deeply resent the fact that social media plays an increasingly important role in our politics. No substantive idea was ever done justice in 280 characters or a 12-second video, yet we are training an entire generation to think of political discourse in those terms.

2. I’m leery of the incentives that crowdsourced content moderation would provide to the already existing legions of digital Carrie Nations, ready to swing their axes into some barrels of “hate speech.” Imagine a botnet organized on the ideological principles of the SPLC, with the ability to conduct Distributed Denial of Speech attacks…

3. Maybe we should consider draining some of the venom out of the online world by forcing it to become more like the physical world? Online anonymity is a double-edged sword; vital in some cases but often indulgent of our worst impulses. There are definitely some Internet trolls who would be eager to put their money where their mouths are, but certainly fewer than the current roster of keyboard warriors, and every little bit helps.
Excellent column, Paul!
Content moderation will always be doomed to failure. Whoever makes the rules or writes the algorithm will have the power and no one can be trusted with it. I don’t really give a fig about private companies “owning” platforms. They don’t own the speech and they have no right to moderate it.

Part of the problem is that we try to hold these entities accountable for speech we don’t like. They aren’t accountable, and can’t be. And of course, these companies play both sides, demanding the right to moderate speech while disavowing responsibility for it in court.

Justice Brandeis’s argument is half of it. Counter bad speech with better speech. The other side is that if you silence the idiots, you won’t know who or where they are. Silencing fools doesn’t erase them. See Trump, Donald.

We are headed toward (or already living in) Ray Bradbury’s world of book burning, where people are kept confined to thought-free bullshit. No unnecessary, time-wasting thought on TikTok:

“Politics? One column, two sentences, a headline! Then, in midair, all vanishes! Whirl a man’s mind around about so fast under the pumping hands of publishers, exploiters, broadcasters that the centrifuge flings off all unnecessary, time-wasting thought!”
You kinda talked around it until the end. Hate speech is speech, and you shouldn’t just ban speech. While there is some low-hanging fruit, a lot of hate speech is in the eye of the beholder, and consequently very political. You mention Wikipedia’s crowdsourced moderation, but the end result of that is a lot of bias. Any approach to controlling hate speech is going to fail to do it well. The only viable approach is to let more people speak.