Elon Musk’s Twitter is not ready for the next natural disaster

    Robert Mardini, the director general of the International Committee of the Red Cross (ICRC), says the organization has its own trend analysis unit that uses software to monitor Twitter and other online resources in places where the organization operates. This can help, for example, to keep employees safe in conflict areas.

    Of course, you can’t believe everything you read on Twitter. During a crisis, responders using social media must figure out which messages are false or unreliable and when dangerous rumors are spreading. This is where Twitter’s own moderation capabilities could be crucial, experts say, and a source of concern as the downsized company changes. In conflict zones, military campaigns sometimes include online operations that attempt to use the platform to spread weaponized falsehoods.

    “Misinformation and disinformation can harm humanitarian organizations,” says Mardini. “If the ICRC or our Red Cross-Red Crescent partners come across false rumors about our work or conduct, it could jeopardize the safety of our personnel.”

    In May, Twitter introduced a special moderation policy for Ukraine to curb misinformation about the conflict with Russia. Nathaniel Raymond, co-leader of the Humanitarian Research Lab at Yale’s School of Public Health, says that while Twitter hasn’t made any recent announcements about that policy, he and his team have seen evidence that it has been enforced less consistently since Musk took over as CEO and laid off many staff working on moderation. “We’re definitely seeing more bots,” he says. “This is anecdotal, but it seems that the information space has degraded.” Musk’s acquisition has also cast doubt on Twitter’s ability to preserve evidence of possible war crimes posted to the platform. “Before, we knew who to talk to to preserve that evidence,” says Raymond. “Now we don’t know what’s going to happen.”

    Other aid workers are concerned about the effects of Twitter’s new verification scheme, which was suspended after some users who paid for a verification check mark used their new status to impersonate big brands, including Coca-Cola and the pharmaceutical company Eli Lilly. Emergency responders and people on the front lines of a disaster both need to be able to quickly determine whether an account is an organization’s legitimate Twitter presence, says R. Clayton Wukich, a Cleveland State University professor who studies how local governments use social media. “They are literally making life-and-death decisions,” he says.

    WIRED asked Twitter whether the company’s special moderation policy for Ukraine remains in effect but received no response; the company recently laid off its communications team. A company blog post published Wednesday says that “none of our policies have changed,” but also that the platform will rely more on automation to mitigate abuse. Yet automated moderation systems are far from perfect and require constant maintenance from human workers to keep up with changes in problematic content over time.

    Don’t expect emergency managers to leave Twitter immediately. They are conservative by nature and are unlikely to tear up their best practices overnight. FEMA public affairs director Jaclyn Rothenberg did not respond to questions about whether the agency is considering changing its approach to Twitter, saying only that “social media plays a critical role in emergency management for rapid communication during disasters and will continue to do so for our agency.” And on a practical level, people are primed to expect emergency updates on Twitter, so it could be dangerous for agencies to abandon the platform.

    For people working in emergency management, the turmoil at Twitter has raised bigger questions about what role the internet should play in crisis response. If Twitter becomes unreliable, can another service fill the same role: a source of distraction and entertainment that also carries reliable information about an ongoing disaster?

    “Without a public square like this, it’s not clear where public communication will go,” says Leysia Palen, a professor at the University of Colorado at Boulder who has studied crisis response. Twitter was never perfect, and her research suggests that the platform’s community has become less adept at organically amplifying high-quality information. “But it was better than having nothing at all, and I don’t know if we can say that anymore,” she says.

    Some disaster managers are making contingency plans. If Twitter becomes too toxic or spammy, they can turn their accounts into one-way communication tools, just a way to push out guidance instead of gathering information and directly calming the fears of worried people. Eventually, they could leave the platform entirely. “This is emergency management,” says Joseph Riser, a public information officer with the Los Angeles Department of Emergency Management. “We always have a plan B.”