Twitter's "trust and safety team", and all big tech wrong-think enforcement, relies on the mob to "self-regulate".
They use the report and dislike functions to trigger moderation.
They claim this is done because "we can't possibly monitor every corner of our website 24/7".
They then refuse to acknowledge that relying on mob rule to police speech lets a small but vocal, highly active cult of radicals control the narrative and steer it in their chosen direction.
Doing it this way lets them claim impartiality and blame the algorithms and the unintended consequences of unmanageable growth.
But it's done on purpose, giving them an excuse to stifle non-approved, anti-globohomo information and conversations.
It's not a flaw in the design... it's a feature.
"your group isn't inclusive we should be allowed in and our opinions should be respected"
"we should have a voice in moderation"
"we think you said mean things, we don't want you in this group anymore, you have to leave"
rinse... repeat...