OpenAI won’t let politicians use its tech for campaigning, for now

The company laid out its elections policies Monday as politicians and activists raised concerns about AI-generated misinformation

January 15, 2024 at 3:00 p.m. EST
“We work to anticipate and prevent relevant abuse — such as misleading ‘deepfakes,’ scaled influence operations, or chatbots impersonating candidates,” OpenAI said in the blog post. (Jabin Botsford/The Washington Post)
Artificial intelligence company OpenAI laid out its plans and policies to try to stop people from using its technology to spread disinformation and lies about elections, as billions of people in some of the world’s biggest democracies head to the polls this year.

The company, which makes the popular ChatGPT chatbot and the DALL-E image generator and provides AI technology to many companies, including Microsoft, said in a Monday blog post that it wouldn’t allow people to use its tech to build applications for political campaigns and lobbying, to discourage people from voting, or to spread misinformation about the voting process. OpenAI said it would also begin putting embedded watermarks — a tool to detect AI-created photographs — into images made with DALL-E “early this year.”