Google changes appeals procedure for images of suspected child abuse

    When Google informed a mom in Colorado that her account was disabled, it felt like her house had burned down, she said. In an instant, she lost access to her wedding photos, videos of her son growing up, her emails going back a decade, her tax documents, and everything else she’d kept in what she thought would be the safest place. She had no idea why.

    Google declined to reconsider the decision in August, saying her YouTube account contained harmful content that might be illegal. It took her weeks to discover what had happened: her 9-year-old eventually confessed to using an old smartphone of hers to upload a YouTube video of himself dancing naked.

    Google has a comprehensive system, combining algorithmic monitoring with human review, to prevent the sharing and storage of exploitative images of children on its platforms. If a photo or video uploaded to the company’s servers is deemed sexually explicit content featuring a minor, Google disables the user’s account across all Google services and reports the content to a nonprofit organization that works with law enforcement. Users have the option to challenge Google’s decision, but in the past they had no real chance to provide context for a nude photo or video of a child.

    Now, following reporting by The New York Times, Google has changed its appeals process, giving users accused of the heinous crime of child sexual exploitation the opportunity to prove their innocence. Content deemed exploitative will still be removed and reported, but users can now explain why it was in their account, for instance by clarifying that it was an ill-conceived joke by a child.

    Susan Jasper, Google’s head of trust and safety operations, said in a blog post that the company would provide “more detailed reasons for account suspensions.” She added: “And we will also be updating our appeals process to allow users to submit even more context about their account, including sharing more information and documentation from relevant independent professionals or law enforcement agencies to help us understand the content detected in the account.”

    In recent months, The Times, which has reported on the power tech companies wield over the most intimate parts of their users’ lives, brought cases to Google’s attention in which its previous review process seemed to have gone awry.

    In two separate cases, fathers took pictures of their naked toddlers to assist with medical treatment. An algorithm automatically flagged the images, and human moderators judged them to violate Google’s rules. The police determined the fathers had committed no crime, but the company deleted their accounts anyway.

    The fathers, one in California and the other in Texas, were hampered by Google’s previous appeals process: at no point were they able to provide medical records, communications with their doctors, or police documents clearing them of wrongdoing. The San Francisco father eventually got six months of his Google data back, but on a USB stick from the police, who had obtained it from the company with a search warrant.

    “If we find child sexual abuse material on our platforms, we will remove it and suspend the related account,” a Google spokesperson, Matt Bryant, said in a statement. “We take the implications of an account suspension seriously and our teams are constantly working to minimize the risk of an improper suspension.”

    Tech companies that offer free services to consumers are known for being bad at customer support. Google has billions of users. Last year, it shut down more than 270,000 accounts for violating rules against child sexual abuse material. It has shut down more in the first half of this year than in all of 2021.

    “We don’t know what percentage of those are false positives,” said Kate Klonick, an associate professor at St. John’s University School of Law who studies internet governance. Even a false-positive rate of just 1 percent would mean hundreds of wrongly suspended accounts a month, she said. She predicted that Google would need to expand its trust and safety team to handle the disputes.

    “Google seems to be taking the right steps,” Ms. Klonick said, “to adjudicate and resolve false positives. But it’s an expensive proposition.”

    Evelyn Douek, an assistant professor at Stanford Law School, said she would like Google to provide more details about how the new appeals process would work.

    “Just setting up a process doesn’t solve everything. The devil is in the details,” she said. “Does the new review make sense? What’s the timeline?”

    A Colorado mom eventually received a warning on YouTube that her content violated community guidelines. Credit: YouTube

    It took four months for the Colorado mom, who asked that her name not be used to protect her son’s privacy, to get her account back. Google reinstated it after The Times brought the matter to the company’s attention.

    “We understand how distressing it would be to lose access to your Google account, and the data stored in it, due to a wrongful circumstance,” Mr. Bryant said in a statement. “These cases are extremely rare, but we are working on ways to improve the appeals process when people come to us with questions about their account or think we made the wrong decision.”

    Google did not tell the woman that the account was active again. Ten days after her account was reinstated, she learned of the decision from a Times reporter.

    When she logged in, she found that everything had been restored except the video her son had uploaded. A notice appeared on YouTube, with an image of a referee blowing a whistle, saying her content had violated community guidelines. “Because it’s the first time, this is just a warning,” the post read.

    “I wish they had just started here in the first place,” she said. “It would have saved me months of tears.”

    Jason Scott, a digital archivist who wrote a memorably profane blog post in 2009 warning people not to trust the cloud, said companies should be required by law to give users their data even if an account was closed for breaking the rules.

    “Data storage should be the same as tenancy law,” Mr. Scott said. “You shouldn’t be able to hold someone’s data and not give it back.”

    The mother also received an email from “The Google Team,” sent on Dec. 9.

    “We understand that you have attempted to appeal this multiple times and apologize for any inconvenience this has caused,” the statement read. “We hope you understand that we have strict policies in place to prevent our services from being used to share harmful or illegal content, especially abusive content such as child sexual abuse material.”

    Google is far from the only company monitoring its platforms to prevent the rampant sharing of child sexual abuse images. Last year, more than 100 companies sent 29 million reports of suspected child exploitation to the National Center for Missing and Exploited Children, the nonprofit that acts as the clearinghouse for such material and passes reports on to law enforcement for investigation. The nonprofit does not track how many of those reports represent genuine abuse.

    Meta sends the largest number of reports to the national center – more than 25 million in 2021 from Facebook and Instagram. Last year, data scientists at the company analyzed some of the flagged material and found examples that qualified as illegal under federal law but were “non-malicious.” In a sample of 150 flagged accounts, more than 75 percent showed “no malicious intent,” the researchers said, citing examples of a “meme of a child’s genitals being bitten by an animal” shared humorously and teens sexting each other.