OpenAI, the company behind the popular text generator ChatGPT, has announced plans to introduce tools aimed at combating disinformation in the lead-up to the numerous elections scheduled to take place this year. With elections set to occur in countries including the United States, India, and Britain, OpenAI has made it clear that its technology, including ChatGPT and the image generator DALL-E 3, may not be used for political campaigns.
Amid concerns that AI-driven text and image generators could flood the internet with false information and manipulate voters, OpenAI is taking proactive measures to ensure that its technology is not misused. In a blog post, the company stated its commitment to safeguarding the democratic process, emphasizing that it is working to understand the potential effectiveness of its tools for personalized persuasion. Until more is known, OpenAI has implemented restrictions on the use of its technology for political campaigning and lobbying.
The risks associated with AI-driven disinformation and misinformation were recently highlighted by the World Economic Forum in a report that identified them as significant global threats. These risks pose a particular danger to newly elected governments in major economies. Although concerns over election disinformation have been present for some time, the widespread availability of powerful AI text and image generators has heightened the magnitude of the issue, especially as it becomes increasingly difficult to discern genuine content from fake or manipulated material.
OpenAI has taken note of these concerns and is actively developing tools to address them. The company is working on solutions that will add reliable attribution to text generated by ChatGPT, enabling users to determine the source of the information. OpenAI is also developing capabilities that would allow users to detect whether an image was created with DALL-E 3. To achieve this, the company plans to implement the digital credentials proposed by the Coalition for Content Provenance and Authenticity (C2PA), an initiative supported by major industry players such as Microsoft, Sony, Adobe, Nikon, and Canon. Cryptographic signatures would encode details about a piece of content's origin, improving methods for identifying and tracing digital content.
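The core idea behind such provenance credentials can be sketched in a few lines: a manifest describing the content's origin is cryptographically bound to the content itself, so any tampering with either the pixels or the metadata is detectable. The sketch below is a deliberately simplified illustration, not the actual C2PA scheme (real C2PA manifests use X.509 certificate chains and CBOR-encoded claims); the key, manifest fields, and function names here are hypothetical.

```python
import hashlib
import hmac
import json

# Stand-in for a real private signing key (C2PA uses certificate-based
# signatures; a shared-secret HMAC is used here only for illustration).
SIGNING_KEY = b"demo-signing-key"

def sign_content(content: bytes, origin: str, tool: str) -> dict:
    """Bind a signature to content bytes plus origin metadata."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "origin": origin,
        "generator": tool,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload,
                                     hashlib.sha256).hexdigest()
    return manifest

def verify_content(content: bytes, manifest: dict) -> bool:
    """Return True only if content and its metadata are both unaltered."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    if claims["content_sha256"] != hashlib.sha256(content).hexdigest():
        return False  # the content itself was modified
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

image = b"\x89PNG...fake image bytes"
m = sign_content(image, origin="example-generator", tool="DALL-E 3")
print(verify_content(image, m))        # True: content and metadata intact
print(verify_content(b"tampered", m))  # False: content no longer matches
```

The design choice this illustrates is the one that matters for election content: verification fails if the image bytes change *or* if someone edits the origin metadata, which is what makes generated media traceable rather than merely labeled.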
OpenAI has also outlined its approach to addressing election-related queries through ChatGPT. When users ask procedural questions about US elections, such as where to vote, ChatGPT will direct them to authoritative websites, ensuring the dissemination of accurate information. OpenAI has taken precautions with DALL-E 3 as well, implementing controls to prevent the generation of images depicting real individuals, including political candidates.
The announcement by OpenAI follows steps taken by tech giants Google and Meta (formerly Facebook) last year to limit election interference, particularly through the use of AI. The spread of disinformation, including deepfake videos and manipulated audio, has raised concerns about trust in political institutions. While detecting and debunking such content can be challenging, efforts to address disinformation are crucial in maintaining the integrity of the electoral process.
As OpenAI continues to develop its tools for combating disinformation, the lessons learned from the upcoming elections will inform the company’s approach in other countries and regions. By prioritizing transparency, accountability, and the fight against disinformation, OpenAI aims to contribute to the preservation of democratic processes worldwide.