Last year, the National Center for Missing and Exploited Children (NCMEC) released data showing that it received far more reports of child sexual abuse material (CSAM) from Facebook than from any other web service it tracks. While other popular social platforms such as Twitter and TikTok accounted for tens of thousands of reports, Facebook accounted for 22 million.
Today, Facebook announced new efforts to limit the spread of some of that CSAM on its platforms. In partnership with NCMEC, Facebook is building a “global platform” to prevent “sextortion” by helping “stop the proliferation of teen intimate images online.”
“We are partnering with the National Center for Missing and Exploited Children (NCMEC) to build a global platform for teens who are concerned that intimate images they captured could be shared on public online platforms without their consent,” Antigone Davis, Facebook’s VP, global head of security, said in a blog post Monday.
This global platform for teens will operate similarly to the one Meta created to help adults fight “revenge porn,” Davis said. Facebook last year described that platform, which lets users generate a hash of an intimate image to proactively prevent it from being distributed on Facebook and Instagram, as “the first global initiative of its kind.”
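Meta hasn’t detailed how the teen platform will work under the hood, but the adult version it’s modeled on matches images by perceptual hash: the user’s own device computes a compact fingerprint of the photo, and only that fingerprint, never the image itself, is shared with participating platforms, which compare it against new uploads. As a rough sketch of the general technique only: the snippet below uses the open source imagehash library rather than Meta’s system (Meta has separately open sourced a perceptual hashing algorithm called PDQ), and the filenames and match threshold are invented for illustration.

```python
# Illustrative only: perceptual hashing with the open source "imagehash"
# library (pip install pillow imagehash). Meta's production pipeline and
# hash format are not public at this level of detail.
from PIL import Image
import imagehash

# The user hashes their own image locally; the photo itself is never uploaded.
user_hash = imagehash.phash(Image.open("my_private_photo.jpg"))  # hypothetical file

# A participating platform hashes each incoming upload the same way.
upload_hash = imagehash.phash(Image.open("incoming_upload.jpg"))  # hypothetical file

# Perceptual hashes of visually similar images are close in Hamming distance,
# so matching uses a distance threshold rather than strict equality. This lets
# a re-encoded or resized copy still match.
if user_hash - upload_hash <= 5:  # threshold invented for illustration
    print("Likely match: route the upload to the blocking/review pipeline")
```

The design’s appeal is that platforms can check uploads against a registered fingerprint without ever receiving the original image.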
According to Davis, Meta found that more than 75 percent of the child exploitative content spreading on its platforms is posted by people “with no apparent intent to harm.” Instead, the CSAM is shared to express outrage, disgust, or “bad humor,” Davis said.
“Sharing this content is against our policies regardless of intent,” Davis said. “We plan to launch a new PSA campaign that encourages people to stop and think before sharing those images online again and to report them to us instead.”
She also said there would be more news about the new teen platform in the coming weeks.
NCMEC did not immediately respond to Ars’ request for comment.
Meta investigates labeling of “suspicious” adults
In her blog post, Davis outlined several other updates Meta has made to better protect teens.
For new users under the age of 16 (or 18 in some countries), Meta will use default privacy settings to prevent strangers from seeing their friends list, the pages they follow, or the posts they’re tagged in. Teens also have default settings limiting who can comment on their posts and asking them to review posts they’re tagged in before those posts appear on their pages. For any teens already on the platforms, Meta said it would send notifications to recommend updating their privacy settings.
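Meta describes these defaults only at a product level. Purely to make the shape of the change concrete, here is a hypothetical sketch; every field name, value, and the exact cutoff logic is invented for illustration:

```python
# Hypothetical model of the age-based defaults Davis describes; nothing here
# reflects Meta's actual code or setting names.
TEEN_AGE_CUTOFF = 16  # 18 in some countries, per the announcement

def default_privacy_settings(age: int) -> dict:
    """Return the default settings applied to a newly created account."""
    if age < TEEN_AGE_CUTOFF:
        return {
            "friends_list_visible_to_strangers": False,
            "followed_pages_visible_to_strangers": False,
            "tagged_posts_visible_to_strangers": False,
            "who_can_comment": "friends_only",   # limits who can comment
            "review_tags_before_posting": True,  # teen approves tags first
        }
    # Adult accounts keep the standard defaults (not modeled here).
    return {}
```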
Perhaps the biggest precaution Meta is now testing is flagging as “suspicious” any adult account believed to be harassing teen users.
“A ‘suspicious’ account is an account that belongs to an adult and has, for example, recently been blocked or reported by a youth,” Davis wrote.
To find out who’s “suspicious,” Meta plans to rely on reports from teen users. When a teen blocks an account, they will also get a notification prompting them to report the account to Meta “if anything makes them feel uncomfortable while using our apps.” And to identify more suspicious users, Meta said it would review all blocks by teens, whether or not the teen files a report.
Any account marked as suspicious will not show up in People You May Know recommendations for teen users. Davis said Meta is also considering whether to remove the message button entirely when a “suspicious” account views a teen’s profile.
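Davis’ description amounts to a simple rule plus a filter on recommendations: an adult account recently blocked or reported by a young person gets flagged, and flagged accounts are excluded from teens’ suggestions. The sketch below is a hypothetical reconstruction of that logic, not Meta’s code; the class, signal names, and functions are all invented:

```python
from dataclasses import dataclass

@dataclass
class Account:
    account_id: str
    is_adult: bool
    recent_teen_blocks: int = 0   # hypothetical signal: recent blocks by teens
    recent_teen_reports: int = 0  # hypothetical signal: recent reports by teens

def is_suspicious(acct: Account) -> bool:
    """Hypothetical version of the rule Davis describes: an adult account
    recently blocked or reported by a young person is treated as suspicious."""
    return acct.is_adult and (
        acct.recent_teen_blocks > 0 or acct.recent_teen_reports > 0
    )

def people_you_may_know_for_teen(candidates: list[Account]) -> list[Account]:
    # Suspicious accounts are filtered out of a teen's recommendations.
    return [a for a in candidates if not is_suspicious(a)]
```

Note that both signals in is_suspicious depend on a teen having already blocked or reported the account.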
Even this, of course, is an imperfect system. The biggest shortcoming of Meta’s approach is that an account only gets marked “suspicious” after teens have already blocked or reported it, meaning some harassment will have happened first.