Photo: ASSOCIATED PRESS

‘However, there is no assurance that other media sources will exercise the same level of restraint.’

The Associated Press (AP) has released guidelines for the use of generative AI in its newsroom. The standards arrive alongside a licensing agreement between the AP and ChatGPT maker OpenAI. They aim to establish a clear framework for using the emerging technology while cautioning against creating publishable content with AI.

While the guidelines are not particularly contentious, they have raised concerns that less scrupulous media outlets may abuse the AP’s endorsement, leading to excessive and unethical use of generative AI.

The AP’s Vice President for Standards and Inclusion, Amanda Barrett, characterized AI as a flawed tool, not a substitute for trained journalists. “We do not see AI as a replacement of journalists in any way,” she said, stressing that AP journalists remain responsible for the accuracy and fairness of their reporting.

The guidelines instruct AP journalists to treat AI-generated content as “unvetted source material,” subject to editorial judgment and the organization’s sourcing standards. They explicitly state that AI should not be used to create publishable content, including images, and the AP maintains its commitment not to alter any elements of its multimedia content with generative AI.

However, the AP does permit AI illustrations or art in stories, provided they are clearly labeled as such.

Barrett also highlighted the potential for AI to spread misinformation, urging journalists to exercise caution and skepticism: verify the source, run reverse image searches, and cross-reference similar reports from trusted media organizations. The guidelines also prohibit entering confidential or sensitive information into AI tools, to protect privacy.

While these guidelines are sensible and uncontroversial, other media outlets have been less discerning in their use of generative AI. CNET, for example, published error-ridden, AI-written financial explainer articles earlier this year without clearly indicating their AI origin, and Gizmodo drew criticism for an AI-generated Star Wars article riddled with inaccuracies. The worry is that some organizations, in a fiercely competitive landscape, will read the AP’s restrained use of AI as a green light for robot journalism and publish poorly edited or inaccurate content without proper labeling.
