In June 2021, Twitter told the world “you don’t need an edit button, you just have to forgive yourself.” Twitter founder Jack Dorsey rebuffed even Kim Kardashian’s pleas when she cornered him at Kanye West’s 2018 birthday party. For years, the platform held out against tweet editing. Until now: an edit button is imminent, but tricky questions remain about how to implement it without causing chaos.
“Everyone thinks it’s really easy to just add an edit button,” said Christina Wodtke, a computer science professor at Stanford University. Wodtke, who has worked on product design projects at LinkedIn, MySpace, Zynga and Yahoo, argues that such a seemingly simple change requires a lot of thought. She offers a hypothetical: Donald Trump, whose return to Twitter has become more likely since Elon Musk joined Twitter’s board, tweets something shocking or offensive. He then edits the message to smooth out its rough edges. But people have already replied to the content of the first tweet, making their responses nonsensical.
The obvious solution is a Slack- or Facebook-like change log, where people can view the history of edits to a post. Facebook has let people edit posts since June 2012, but the feature has been regularly abused by scammers since its rollout. Alex Stamos, former Chief Security Officer at Facebook and now an adjunct professor at Stanford, noted that Facebook’s post-editing tools have helped cryptocurrency scam pages appear legitimate to users. Editing pages is a core function of Wikipedia, but it leads to “edit wars” in which individuals argue over the wording of an entry, including an 11-year battle over the origins of the Caesar salad. Third-party tools already do similar tracking of Twitter bios: Spoonbill, for example, shows how a person’s profile has changed over time.
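To make the change-log idea concrete, here is a minimal sketch in Python of an append-only revision history. The names (Revision, Tweet, history) are invented for illustration; Twitter has not published how its feature is built. The key property is that an edit appends a new timestamped version instead of overwriting the old one, so the earlier wording stays available to anyone reading the replies.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Revision:
    """One version of a tweet's text, kept once written."""
    text: str
    edited_at: datetime


@dataclass
class Tweet:
    """A tweet whose edits are stored as an append-only history."""
    author: str
    revisions: list = field(default_factory=list)

    def post(self, text: str) -> None:
        # The original text is simply revision 0.
        self.revisions.append(Revision(text, datetime.now(timezone.utc)))

    def edit(self, new_text: str) -> None:
        # Edits never overwrite; they append, so the old wording stays visible.
        self.revisions.append(Revision(new_text, datetime.now(timezone.utc)))

    @property
    def current_text(self) -> str:
        return self.revisions[-1].text

    def history(self) -> list:
        """What a 'view edit history' screen would list."""
        return [f"{r.edited_at.isoformat()}  {r.text}" for r in self.revisions]
```

Storing every revision, rather than mutating the post in place, is what lets an interface show what changed after the fact; it is the same property that Facebook’s change log and Wikipedia’s page history rely on.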
But such tracking comes with its own problems, Wodtke says. For one, the user who edited the post probably doesn’t want the original text to be accessible. “There is so much complexity going on about how all the players in the system will react to this change,” she says. “You have to think about all these norms that you’re breaking and changing right now.” Simply put, any new feature of this kind should be designed with the worst-case scenario in mind. Even if the majority of people use an edit button to remove typos, it can cause chaos if a small minority uses it for nefarious purposes. “A prominent fear is that it will only lead to more confusion and exhaustion on Twitter,” she says.
In an effort to work through these problems, Twitter will begin testing an editing feature among users of Twitter Blue, its paid subscription service, in the coming months. Editing tweets has been Twitter’s most requested feature “for many years,” said Jay Sullivan, the platform’s head of consumer product. Twitter has also said that development of the feature has been underway since 2021, debunking claims that a poll by Musk, who asked users whether they wanted an edit button, was behind the decision.
The announcement of the edit button was welcomed by many, but it also raised concerns. Sullivan admits that ensuring the editing feature is used fairly requires “time limits, controls, and transparency about what was edited.” So how do you code for honesty? Simply put, the way Twitter designs, tests, and implements its editing feature will determine its success, and could make or break the platform. “Are there any risks?” asks Christopher Bouzy, founder of Bot Sentinel, a service that tracks inauthentic behavior on Twitter. “Absolutely. It could change the context of a tweet.” Disinformation and misinformation (the first deliberately spreads false information, the second does so accidentally) aren’t exactly scarce on Twitter, and the platform’s viral dynamics mean some posters aren’t inclined to correct false information. An academic paper from 2018 found that fake news travels six times faster than the truth on Twitter, largely because falsehoods are 70 percent more likely to be retweeted than fact-based posts.
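Sullivan’s “time limits, controls, and transparency” can be sketched in the same spirit. The snippet below builds on the hypothetical Tweet class above and assumes an invented 30-minute window, not any announced Twitter policy: edits are rejected once the window closes, and an “edited” marker is shown whenever a history exists.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical window for illustration; Twitter has not announced a specific limit.
EDIT_WINDOW = timedelta(minutes=30)


def try_edit(tweet, new_text: str) -> bool:
    """Accept an edit only while the tweet is still inside the edit window."""
    posted_at = tweet.revisions[0].edited_at
    if datetime.now(timezone.utc) - posted_at > EDIT_WINDOW:
        return False  # too late: the tweet stays frozen as posted
    tweet.edit(new_text)
    return True


def render(tweet) -> str:
    """Show the current text, flagging edits so readers know a history exists."""
    marker = " (edited)" if len(tweet.revisions) > 1 else ""
    return tweet.current_text + marker
```

The same structure could also carry other controls, such as a cap on the number of edits or an earlier lock for widely shared tweets, which is the kind of safeguard Sullivan alludes to.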