“… looking at Bluesky in its earliest days, and then looking at Threads as Meta started to develop it, what we saw was that a lot of the services that were leaning the hardest into community-based control gave their communities the least technical tools to be able to administer their policies,” Roth said.
He also saw a “pretty big backslide” on the open social web when it came to the transparency and decision-making legitimacy that Twitter once had. While many at the time disagreed with Twitter’s decision to ban Trump, the company at least explained its rationale for doing so. Now, social media providers are so concerned about preventing bad actors from gaming their systems that they rarely explain themselves.
Meanwhile, on many open social platforms, users wouldn’t even receive a notice that a post had been removed; it would simply vanish, with no indication to others that it had ever existed.
“I don’t blame startups for being startups, or new pieces of software for lacking all the bells and whistles, but if the whole point of the project was increasing democratic legitimacy of governance, and what we’ve done is take a step back on governance, then, has this actually worked at all?” Roth wonders.
The economics of moderation
Roth also raised the economics of moderation, arguing that the federated approach has yet to prove sustainable on this front.
For instance, an organization called IFTAS (Independent Federated Trust & Safety) had been building moderation tools for the fediverse, including access to tools to combat CSAM, but it ran out of money and had to shut down many of its projects earlier in 2025.
“We saw it coming two years ago. IFTAS saw it coming. Everybody who’s been working in this space is largely volunteering their time and efforts, and that only goes so far, because at some point, people have families and need to pay bills, and compute costs stack up if you need to run ML models to detect certain types of bad content,” he explained. “It just all gets expensive, and the economics of this federated approach to trust and safety never quite added up. And in my opinion, still don’t.”
Bluesky, meanwhile, has chosen to employ moderators and hire for trust and safety, but it limits itself to moderating its own app. It also provides tools that let people customize their own moderation preferences.
“They’re doing this work at scale. There’s obviously room for improvement. I’d love to see them be a bit more transparent. But, fundamentally, they’re doing the right stuff,” Roth said. However, as the service further decentralizes, Bluesky will face questions about when its responsibility to protect the individual outweighs the needs of the community, he notes.
For example, with doxxing, it’s possible that someone wouldn’t see that their personal information was being spread online because of how they configured their moderation tools. But it should still be someone’s responsibility to enforce those protections, even if the user isn’t on the main Bluesky app.
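To make that failure mode concrete, here is a minimal sketch of how client-side, label-based moderation can hide a warning from the very person it concerns. The data model, label names, and preference values are hypothetical illustrations, not Bluesky’s or the AT Protocol’s actual API.

```python
# Hypothetical sketch of per-user, label-based moderation. All names and
# labels are invented for illustration; this is not Bluesky's actual API.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    labels: set[str] = field(default_factory=set)  # applied by moderation services

@dataclass
class UserPrefs:
    # Per-label choice: "show", "warn", or "hide"
    label_actions: dict[str, str] = field(default_factory=dict)

def render_feed(posts: list[Post], prefs: UserPrefs) -> list[str]:
    """Apply each user's own label preferences when building their feed."""
    feed = []
    for post in posts:
        # Take the most restrictive action among the post's labels.
        action = max(
            (prefs.label_actions.get(label, "show") for label in post.labels),
            key=["show", "warn", "hide"].index,
            default="show",
        )
        if action == "hide":
            continue  # the post silently disappears from this user's view
        prefix = "[warning] " if action == "warn" else ""
        feed.append(prefix + post.text)
    return feed

# The blind spot: a user who hides "doxxing"-labeled posts is never alerted
# that their own personal information is circulating.
victim_prefs = UserPrefs(label_actions={"doxxing": "hide"})
posts = [Post("troll", "victim's home address is ...", labels={"doxxing"})]
print(render_feed(posts, victim_prefs))  # prints [] (the victim sees nothing)
```

Because filtering happens per user, no single party is forced to act on the doxxing itself, which is exactly the responsibility gap Roth describes.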
Where to draw the line on privacy
Another issue facing the fediverse is that the decision to favor privacy can thwart moderation attempts. While Twitter tried not to store personal data it didn’t need, it still collected things like users’ IP addresses, when they accessed the service, device identifiers, and more. These helped the company when it needed to do forensic analysis of something like a Russian troll farm.
Fediverse admins, meanwhile, may not be collecting the necessary logs at all, or may decline to review them if they consider that a violation of user privacy.
But the reality is that without data, it’s harder to determine who’s really a bot.
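As a rough illustration of what that kind of metadata enables, a forensic pass over access logs can be as simple as grouping accounts that repeatedly appear behind the same network identifiers. The log fields, account names, and threshold below are invented for the example.

```python
# Hypothetical sketch of log-based forensic correlation: flag clusters of
# "independent" accounts sharing one IP address and device fingerprint.
from collections import defaultdict

# Each entry: (account, ip_address, device_id), the minimal metadata
# Roth describes Twitter retaining for investigations.
access_log = [
    ("@acct_one", "203.0.113.7", "device-A"),
    ("@acct_two", "203.0.113.7", "device-A"),
    ("@acct_three", "203.0.113.7", "device-A"),
    ("@unrelated", "198.51.100.2", "device-B"),
]

clusters: dict[tuple[str, str], set[str]] = defaultdict(set)
for account, ip, device in access_log:
    clusters[(ip, device)].add(account)

for identifiers, accounts in clusters.items():
    if len(accounts) >= 3:  # arbitrary threshold for the sketch
        print(f"possible coordinated cluster behind {identifiers}: {sorted(accounts)}")
```

Without those logs, as on many fediverse servers, this kind of correlation simply isn’t possible.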
Roth offered a few examples of this from his Twitter days, noting how it became a trend for users to reply “bot” to anyone they disagreed with. He says he initially set up an alert and manually reviewed hundreds of these “bot” accusations; nobody was ever right. Even Twitter co-founder and former CEO Jack Dorsey fell victim, retweeting posts from a Russian actor who claimed to be Crystal Johnson, a Black woman from New York.
“The CEO of the company liked this content, amplified it, and had no way of knowing as a user that Crystal Johnson was actually a Russian troll,” Roth said.
The role of AI
One timely topic of discussion was how AI was changing the landscape. Roth referenced recent research from Stanford that found that, in a political context, large language models (LLMs) could even be more convincing than humans when properly tuned.
That means a solution that relies on content analysis alone isn’t enough.
Instead, companies need to track other behavioral signals: whether an entity is creating multiple accounts, using automation to post, or posting at odd times of day that correspond to different time zones, he suggested.
“These are behavioral signals that are latent even in really convincing content. And I think that’s where you have to start this,” Roth said. “If you’re starting with the content, you’re in an arms race against leading AI models and you’ve already lost.”
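As a loose sketch of what behavior-first detection might look like, the scorer below never reads a post’s text; it looks only at posting cadence, active hours, and how many accounts share a device. Every signal, weight, and threshold here is hypothetical.

```python
# Minimal sketch of content-agnostic bot scoring from behavioral signals.
# All weights and thresholds are invented for illustration.
from datetime import datetime, timedelta
from statistics import median

def bot_score(timestamps: list[datetime], accounts_on_device: int) -> float:
    """Crude 0..1 score built only from behavior, never from content."""
    ts = sorted(timestamps)
    gaps = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
    score = 0.0
    # Signal 1: machine-like regularity; near-uniform gaps suggest automation.
    if gaps and max(gaps) - min(gaps) < 0.1 * median(gaps):
        score += 0.4
    # Signal 2: round-the-clock activity; humans usually sleep.
    if len({t.hour for t in timestamps}) >= 20:
        score += 0.3
    # Signal 3: many accounts behind one device fingerprint.
    if accounts_on_device >= 5:
        score += 0.3
    return min(score, 1.0)

# A feed posting every 30 minutes for 36 hours from a device with 6 accounts
# trips all three signals, no matter how human its text sounds.
start = datetime(2025, 1, 1)
robotic = [start + timedelta(minutes=30 * i) for i in range(72)]
print(bot_score(robotic, accounts_on_device=6))  # 1.0
```

The design point mirrors Roth’s argument: these signals stay visible even when the content itself is indistinguishable from human writing.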