YouTube approved dozens of ads promoting voter suppression and incitement to violence ahead of the upcoming election in India, according to a new investigation by the rights groups Global Witness and Access Now, shared exclusively with TIME.
India, a country often described as the world’s largest democracy, will hold its election in seven phases between April 19 and June 1 of this year. Voters are set to decide whether to return Prime Minister Narendra Modi to rule the country for a third term, or to deal his Hindu nationalist political project an unlikely defeat. In a calendar year where more than half the world’s population will vote in at least 65 different national elections, India’s is by far the biggest. It will also be a crucial test—ahead of the U.S. election in November—of social media platforms’ ability to tackle election disinformation, following a spate of job cuts across the industry.
Read More: A Make-or-Break Year for Democracy Worldwide
To test YouTube’s ability to prevent disinformation on its platform, Global Witness and Access Now submitted 48 advertisements containing election-related content prohibited by YouTube rules. The ads were written in three different languages widely spoken in India: Hindi, Telugu, and English. After a 24-hour review period, YouTube approved 100% of the ads. Global Witness and Access Now then withdrew the ads before they were published, meaning no voters were exposed to their contents.
The experiment indicates that Google-owned YouTube may be failing to prevent the spread of paid-for disinformation in one of the most significant global elections of the year. “Frankly, we weren’t expecting such a despicable result,” says Namrata Maheshwari, senior policy counsel at Access Now. “We thought they would do better at least catching the English ads, but they didn’t, which means the problem is not the language—it’s also a problem of which countries they’re choosing to focus on.” The findings, she says, point to a “lack of regard for the global majority at large” inside YouTube.
A Google spokesperson said the company applies its policies “globally and consistently,” and disputed the methodology of the report. “Not one of these ads ever ran on our systems and this report does not show a lack of protections against election misinformation in India,” the spokesperson said in a statement. “Our enforcement process has multiple layers to ensure ads comply with our policies, and just because an ad passes an initial technical check does not mean it won’t be blocked or removed by our enforcement systems if it violates our policies. But the advertiser here deleted the ads in question before any of our routine enforcement reviews could take place.”
The spokesperson added: “While an ad may initially be eligible to serve via our automated systems, this is just the first stage of our review and enforcement process. After this step, ads are still subject to several layers of reviews, to ensure the content complies with our policies. These protections can kick into place both before an ad runs or quickly after it starts to gain impressions.”
YouTube has more than 450 million users in India, making it the most popular tech platform in the country after the Meta-owned messaging app WhatsApp. But, unlike WhatsApp, YouTube offers businesses and political parties a sophisticated set of targeted advertising tools. On YouTube, advertisers can pay to target ads at users based on specific characteristics like age, gender, location, interests, and usage patterns. The system is highly lucrative: YouTube recorded $9.2 billion in ad revenue in the final three months of 2023 alone, according to Google’s most recent public financial statements.
Some of the ads approved by YouTube contained false information aimed at suppressing votes, such as saying ID is not required to vote, or that women could vote by text message. (In fact, voting in India requires an identification document, and must be done at a polling station unless a voter is eligible for a postal ballot.) One ad falsely claimed that the voting age had been increased to 21, and that people under that age attempting to vote would be breaking the law. (The actual voting age in India is 18.) “Due to an increase in infectious diseases, all voters will be automatically sent a mail-in ballot for the 2024 elections,” one false ad said. “Stay home and stay safe.” (There are no such disease-related restrictions.)
Read More: AI Isn’t Our Election Safety Problem, Disinformation Is
Other voter-suppression ads also contained content that could be considered incitement to violence. One accused a religious community of disbursing bribes in return for votes, and urged people to “show up in force at their voting stations and show them who’s in charge.” A second ad alleged that certain locations were hotspots for electoral fraud, and urged voters to “block their roads to stop these extremists from voting.” A third ad contained a conspiracy theory about forced religious conversions supposedly carried out by a specific religious community in India, used a derisive term for that community, and urged readers of the ad to prevent them from voting. A fourth ad targeted a specific caste group, saying that “we will show them who’s the boss at the polling booths,” adding that only certain castes “have the right to choose who runs the country.” TIME viewed each of the ads, but agreed not to disclose which specific groups were targeted in each.
The report’s authors say the findings point to a growing divide between countries in the global south, where platforms often fail to prevent the spread of election disinformation, and countries in the global north where platforms have invested more resources. When Global Witness tested election disinformation in English and Spanish ahead of the U.S. midterms in 2022, YouTube rejected 100% of the ads and suspended the channel that attempted to host them, the group said. But when Global Witness submitted similar ads in Portuguese ahead of the Brazilian election the same year, YouTube approved all of them. “It just shows that they’re not consistently enforcing what aren’t terrible policies,” says Henry Peck, a campaigner on digital threats at Global Witness. “It’s inconsistent practice depending on where in the world the study is.”
The report’s authors dispute Google’s rebuttal of their methodology. “We kept the ads submitted for long enough for YouTube to review and approve them for publication,” Access Now said in a statement. “In a fast election cycle where advertisers can publish an ad within hours, the damage is done once the ads go live, particularly on a platform that reaches over 462 million people in India. YouTube has chosen a model with little friction around the publication of ads, instead suggesting violating content may later be removed, rather than adequately reviewing the content beforehand. This is dangerous and irresponsible in an election period, and there is no justification for why ads containing election disinformation are rejected in the U.S. but accepted in India.”
Peck suspects that recent layoffs in YouTube’s trust and safety division have made matters worse. Alphabet, the parent company of Google and YouTube, does not release statistics about which subsidiaries or teams have seen job cuts, but as a whole the company has approximately 8,000 fewer workers today than it did this time last year, according to its most recent financial statement. Google did not answer specific questions from TIME about the number of ad reviewers it has in place who speak English, Hindi, and Telugu, nor about whether those teams had been affected by recent layoffs. However, a spokesperson said YouTube has made “significant investments” into enforcing its policies against election disinformation, and added that it has “a dedicated team of local experts across all major Indian languages” working 24/7.
Access Now and Global Witness have called on YouTube to step up its counter-disinformation efforts in the final weeks before voting in the Indian election gets underway. “In previous studies YouTube has shown it can catch and limit prohibited content when it wants to—but this ability should be resourced and applied equally across all countries and languages,” the report says. It calls on YouTube to take 11 actions, including conducting a “thorough evaluation” of its ad approval process; ensuring that its trust and safety teams have enough resources to moderate content, including in local languages; and commissioning and publishing an independent human rights impact assessment of YouTube’s role in the Indian election.
“Once these ads go live, the damage is done,” Peck says. “They have the infrastructure in place that allows advertisers to reach thousands of users very quickly.”