
Watchdog group’s purchased ads never made it to publication.

Meta, the parent company of Facebook, is once again under fire for failing to effectively combat hate speech and violent content in its ads. A recent report from the watchdog organization Ekō found that Meta approved eight ads that blatantly violated the company’s policies on hate speech and violence, all of them targeting European audiences.

The report from Ekō aims to shed light on the social network’s “sub-standard moderation practices” ahead of the Digital Services Act (DSA) taking effect in Europe later this week. In early August, the organization ran an experiment in which it attempted to purchase 13 Facebook ads, all of which used AI-generated images and included text that clearly violated Meta’s rules.

Fortunately, Ekō pulled the ads before they could be seen by users. While the exact wording of the ads has been withheld, Ekō provided descriptions of some of the most egregious examples. These included an ad in France that called for the execution of a prominent MEP due to their stance on immigration, as well as an ad targeting German users that advocated for the burning of synagogues to protect “White Germans.” Ads approved in Spain claimed that the most recent election was stolen and incited violent protests to reverse the outcome.

A spokesperson for Meta responded to the report, stating, “This report was based on a very small sample of ads and is not representative of the number of ads we review daily across the world. Our ads review process has several layers of analysis and detection, both before and after an ad goes live. We’re taking extensive steps in response to the DSA and continue to invest significant resources to protect elections and guard against hate speech, violence, and incitement.”

Although Meta’s checks did stop some of the ads, Ekō argues they were blocked not because of their violent, hate-filled content, but because they were flagged as political. Political advertisers must complete an additional vetting process before they are eligible to place ads.

Ekō is using the report to push for additional safeguards under the DSA. The sweeping law requires tech platforms to limit certain types of targeted advertising and to let users opt out of recommendation algorithms; Facebook, Instagram, and TikTok have all recently made changes to comply. The DSA also requires platforms to identify and mitigate “systemic risks,” including those associated with illegal and violent content.

Vicky Wyatt, Ekō’s campaign director, emphasized the ease with which bad actors can spread hate speech and disinformation, stating, “With a few clicks, we were able to prove just how easy it is for bad actors to spread hate speech and disinformation. With EU elections around the corner, European leaders must enforce the DSA to its fullest extent and finally rein in these toxic companies.”
