Good morning! This Friday, the tech industry is trying some new things for content moderation, it looks like Amazon's union vote is going Amazon's way, the race is on to be the next WeChat, and there's been a shakeup at Box.
(Was this email forwarded to you? Sign up here to get Source Code every day. And you can text with us, too, by signing up here or texting 415.475.1729.)
The Big Story
Fresh ideas about moderation
The tech industry is finally getting past thinking about content moderation as a "leave it up or take it down" proposition. Companies are increasingly thinking more holistically, building new tools that give users more control and generally letting go of the idea that AI will solve all problems.
Twitch is expanding its policies to include some off-platform conduct, working with a third-party investigator to "take action against users for hateful conduct or harassment that occurs off Twitch services … when directed at members of the Twitch community."
Pinterest also has some new guidelines, called the "Creator Code," meant to set the tone for how people operate on the platform. It's also giving creators more tools to remove content and promote good stuff.
Facebook is all-in on context. It's testing a system that adds labels like "satire page" or "public official" to posts in the News Feed, in an effort to give people more information about what they're seeing and why.
The award for most out-there idea goes to Intel, which built a tool called Bleep that lets users decide how much bad stuff they want to encounter. It's designed for gamers in particular, and literally offers a slider that lets you decide how much misogyny, racism or xenophobia you're willing to hear in audio streams from other gamers: none, some, most, or all. Intel's AI will tune out any offending audio based on what you've chosen.
This sounds crazy (and kind of is), but it's a version of something a few folks have told me the industry needs more of: user controls. Rather than decide for users what they should encounter, platforms might instead try to get very good at classifying content and then let users pick their own filters. (Though I don't know who in their right mind is turning "name-calling" up to "all.")
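To make the user-controls idea concrete, here's a minimal sketch of how such a filter could work: a classifier scores each snippet of content per category, and each user picks a tolerance level per category. All names, thresholds, and levels here are illustrative assumptions, not Intel's actual Bleep implementation.

```python
# Hypothetical sketch of per-category user filters. A classifier is assumed
# to return a score in [0, 1] per category for each content snippet; the
# user maps each category to a tolerance level. Names are illustrative.

LEVELS = {"none": 0.0, "some": 0.35, "most": 0.7, "all": 1.0}

def should_block(scores: dict, prefs: dict) -> bool:
    """Block a snippet if any category's score exceeds the user's tolerance."""
    for category, score in scores.items():
        tolerance = LEVELS[prefs.get(category, "none")]  # default: strictest
        if score > tolerance:
            return True
    return False

# Example: a user who tolerates "some" name-calling but no misogyny.
prefs = {"name_calling": "some", "misogyny": "none"}
print(should_block({"name_calling": 0.2, "misogyny": 0.0}, prefs))  # False
print(should_block({"name_calling": 0.2, "misogyny": 0.4}, prefs))  # True
```

The hard part, of course, isn't this filtering logic; it's building a classifier whose per-category scores are reliable enough that the slider actually means something.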
Some of these systems will work; most probably won't move the needle. But it's clear that companies are thinking seriously — and sometimes for the first time — about what their policies say and how they're enforced. The answers are rarely as easy as "leave it up" or "take it down."