
YouTube Algorithm Pushes Election Fraud Claims To Trump Supporters, Report Says


    For years, researchers have suggested that recommendation algorithms are not the cause of online echo chambers; rather, echo chambers arise because users actively seek out content that aligns with their beliefs. This week, researchers at New York University’s Center for Social Media and Politics released results of a YouTube experiment that happened to run while voter fraud claims were spreading in the fall of 2020. They say their results add an important caveat to that earlier research, showing that in 2020 YouTube’s algorithm was responsible for “disproportionately” recommending vote-fraud content to users who were more “skeptical about the legitimacy of the election to begin with.”

    A co-author of the study, political scientist James Bisbee of Vanderbilt University, told The Verge that although participants were recommended only a small number of election denial videos (a maximum of 12 out of the hundreds of videos they clicked), the algorithm served roughly three times as many of them to people inclined to believe the conspiracy as to people who were not. “The more susceptible you are to these types of election stories… the more content about that story is recommended to you,” Bisbee said.

    YouTube spokesperson Elena Hernandez told Ars that Bisbee’s team’s report “doesn’t accurately reflect how our systems work.” Hernandez said, “YouTube does not allow or endorse videos that make false claims that widespread fraud, errors, or malfunctions occurred in the 2020 US presidential election,” and YouTube’s “most viewed and recommended election-related videos and channels are from authoritative sources, such as news channels.”

    Bisbee’s team states directly in their report that they have not attempted to unravel the conundrum of how YouTube’s recommendation system works:

    “Without access to YouTube’s trade-secret algorithm, we cannot claim with certainty that the recommendation system infers a user’s appetite for vote-fraud content based on their viewing history, their demographics, or a combination of the two. For the purposes of our contribution, we treat the algorithm as the black box it is, and instead simply ask whether it will disproportionately recommend election fraud content to users who are more skeptical about the legitimacy of the election.”

    To conduct their experiment, Bisbee’s team recruited hundreds of YouTube users and recreated the recommendation experience by having each participant complete the study while logged into their own YouTube account. As participants clicked through a series of recommendations, researchers recorded all recommended content that was flagged as supporting, refuting, or neutrally reporting Trump’s election fraud claims. When they finished watching videos, the participants filled out a lengthy survey in which they shared their beliefs about the 2020 election.

    Bisbee told Ars that “the purpose of our research was not to measure, describe, or reverse engineer the inner workings of the YouTube algorithm, but rather to describe a systematic difference in the content it recommended to users who were more or less concerned about election fraud.” The sole purpose of the study was to analyze the content served to users to test whether online recommendation systems contribute to the “polarized information environment.”

    “We can show this pattern without reverse engineering the black box algorithm they use,” Bisbee told Ars. “We only looked at what real people were seeing.”

    Testing YouTube’s Recommendation System

    Bisbee’s team noted that YouTube’s algorithm relies on watch histories and subscriptions, and in most cases the result is a positive experience: recommended content aligns with users’ interests. But given the extreme circumstances following the 2020 election, the researchers hypothesized that the recommendation system would naturally serve more voter fraud content to users who were already skeptical of Joe Biden’s victory.

    To test the hypothesis, researchers “closely monitored the behavior of real YouTube users while on the platform.” Participants logged into their accounts and installed a browser extension that recorded data about the recommended videos. They then navigated through 20 recommendations, following a path specified by the researchers, such as always clicking the second recommended video from the top. Each participant started by watching a randomly assigned “seed” video (political or non-political) to ensure that the first video they watched reflected the random assignment rather than prior preferences the algorithm would otherwise pick up on.
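    For readers who want a concrete picture of that procedure, the sketch below illustrates one way such a fixed-rank walk could be logged. It is a hypothetical Python illustration, not the researchers’ actual browser extension: run_walk, fetch_recommendations, classify_stance, and the rank_to_click parameter are all assumed names and placeholders.

    # Hypothetical sketch of the study's fixed-rank recommendation walk.
    # fetch_recommendations() and classify_stance() are placeholders, not real APIs;
    # in the actual study, a browser extension recorded what logged-in users saw.

    def fetch_recommendations(video_id):
        """Placeholder: return the recommended video IDs shown alongside video_id."""
        raise NotImplementedError

    def classify_stance(video_id):
        """Placeholder: label a video as 'supporting', 'refuting', or 'neutral'
        toward claims of election fraud."""
        raise NotImplementedError

    def run_walk(seed_video, rank_to_click=2, steps=20):
        """Start from a randomly assigned seed video, then always click the
        recommendation at a fixed rank (e.g., second from the top) for 20 steps,
        logging every recommendation shown along the way."""
        current = seed_video
        log = []
        for step in range(steps):
            recs = fetch_recommendations(current)
            log.append({
                "step": step,
                "watched": current,
                "shown": [(vid, classify_stance(vid)) for vid in recs],
            })
            current = recs[rank_to_click - 1]  # follow the pre-specified path
        return log

    In this framing, the study’s comparison amounts to tallying the “supporting” entries in these logs across participants with different levels of election skepticism.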

    There were many limitations to this study, which researchers described in detail. Perhaps most importantly, the participants were not representative of typical YouTube users. The majority of participants were young, highly educated Democrats who watched YouTube on Windows-based devices. Researchers suggest that the recommended content would have been different if more participants were conservative or Republican, thus presumably more likely to believe in voter fraud.

    There was also a complication: YouTube removed election fraud videos from the platform in December 2020, meaning that a small number of videos recommended to participants (a number the researchers described as insignificant) could no longer be reviewed.

    Bisbee’s team characterized the report’s main conclusion as preliminary evidence of a behavioral pattern in the YouTube algorithm, not a true measure of how misinformation spread on YouTube in 2020.