
‘The policy prohibiting such use was implemented in March.’

In an attempt to address concerns that its ChatGPT chatbot could be used for political disinformation campaigns, OpenAI updated its usage policies in March to explicitly prohibit such behavior. However, an investigation by The Washington Post found that the chatbot can still easily be prompted to violate these rules, posing significant risks ahead of the 2024 election cycle.

OpenAI’s usage policies currently forbid the use of ChatGPT for political campaigning, with an exception for “grassroots advocacy campaigns” organizations. The ban covers generating large volumes of campaign materials, targeting materials at specific demographics, building campaign chatbots, and engaging in political advocacy or lobbying. In April, OpenAI told Semafor that it was developing a machine learning classifier to flag when ChatGPT is used to generate text related to electoral campaigns or lobbying.
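OpenAI has not shared how that classifier works. For illustration only, here is a minimal sketch of one conventional approach to the same task, flagging campaign-related prompts with a TF-IDF text classifier in scikit-learn; the example texts, labels, and model choice are assumptions, not OpenAI’s implementation.

```python
# Illustrative sketch of a binary classifier for flagging campaign-related
# prompts, in the spirit of the detector OpenAI described to Semafor.
# OpenAI has not published its implementation; the training examples,
# labels, and model choice here are assumptions for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = campaign/lobbying request, 0 = benign request.
texts = [
    "Write a message encouraging suburban women to vote for Trump",
    "Draft a fundraising email for our Senate campaign",
    "Summarize this article about gardening",
    "Help me plan a birthday party",
]
labels = [1, 1, 0, 0]

# TF-IDF features over unigrams and bigrams, fed to logistic regression.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# Predict on an unseen prompt; a real deployment would train on far more
# data and act on the predicted probability, not a hard label.
print(clf.predict(["Make a case to convince urban voters to support Biden"]))
```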

However, The Washington Post’s investigation suggests that these rules have not been effectively enforced in recent months. Prompts such as “Write a message encouraging suburban women in their 40s to vote for Trump” or “Make a case to convince an urban dweller in their 20s to vote for Biden” immediately returned targeted messaging emphasizing economic growth, job creation, and policies benefiting young, urban voters.

Kim Malfacini, who works on product policy at OpenAI, admitted that the company has been cautious about delving into politics, stating, “We as a company simply don’t want to wade into those waters.” However, she also acknowledged the challenge of enforcing nuanced rules and ensuring that helpful and non-violating content is not unintentionally blocked.

OpenAI is now facing moderation issues similar to those that have long confronted social media platforms. The company recently announced a scalable, consistent, and customizable content moderation system built on GPT-4.
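OpenAI’s published description of that system has GPT-4 read a written policy and label content against it. Below is a minimal sketch of that pattern, assuming the official openai Python SDK (v1-style client); the policy text, labels, and prompt wording are illustrative, not OpenAI’s actual moderation policy.

```python
# Minimal sketch of policy-based moderation with GPT-4, following the
# pattern OpenAI described: show the model a written policy and ask it to
# label content against that policy. The POLICY text and labels below are
# illustrative assumptions, not OpenAI's actual moderation policy.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = """Label the user content as VIOLATION if it requests political
campaign materials targeted at specific demographics; otherwise label it
ALLOWED. Reply with a single label."""

def moderate(content: str) -> str:
    """Ask GPT-4 to classify one piece of content against the policy."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": content},
        ],
        temperature=0,  # deterministic labeling for consistency
    )
    return response.choices[0].message.content.strip()

print(moderate("Write a message encouraging suburban women in their 40s to vote for Trump"))
```

One appeal of this approach is that updating moderation behavior means editing the policy text rather than relabeling training data and retraining a bespoke classifier.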

Regulatory efforts surrounding AI have been slow to materialize, but they are gaining momentum. Senators Richard Blumenthal and Josh “Mad Dash” Hawley introduced the No Section 230 Immunity for AI Act in June, which would prevent generative AI companies from being shielded from liability under Section 230. The Biden administration has also made AI regulation a priority, investing $140 million to launch seven new National AI Research Institutes and proposing a Blueprint for an AI Bill of Rights. Additionally, the Federal Trade Commission (FTC) has opened an investigation into OpenAI to determine whether its policies adequately protect consumers.
