Study of AI Chatbots Reveals Startling False Election Claims

A recent investigation by the AI Democracy Projects, in collaboration with the nonprofit Proof News, found that AI-powered platforms produce false election information more than half the time. The finding is alarming amid the U.S. presidential primaries, as Americans increasingly turn to chatbots such as Google’s Gemini and OpenAI’s GPT-4 for election news.

The rise of advanced AI has been hailed as the start of a new era of information, with systems able to deliver facts and analysis faster than any human. The study, however, shows that these AI models often provide harmful or incomplete answers: they suggested polling places that do not exist and gave illogical responses built on outdated information.

For instance, Meta’s Llama 2, one of the AI models evaluated, falsely claimed that voters in California could cast their ballots by text message, a method not legally permitted anywhere in the U.S. Moreover, all of the models reviewed failed to correctly identify that Texas law bans wearing attire featuring campaign logos, such as MAGA hats, at polling stations. The AI systems scrutinized in the assessment were Meta’s Llama 2, OpenAI’s GPT-4, Anthropic’s Claude, Google’s Gemini, and Mistral’s Mixtral.

While AI has the potential to improve elections, for instance by enhancing vote tabulation and detecting anomalies in voting, there is growing concern that these tools can be misused to manipulate voters and undermine democratic processes. Just last month, AI-generated robocalls using a fake version of President Joe Biden’s voice urged people not to vote in the New Hampshire presidential primary.

Individuals using AI tools have run into problems beyond elections as well. Google, for example, temporarily halted its Gemini AI image generator after it produced historically inaccurate and problematic images. In one notable case, the tool returned racially diverse images in response to a request to depict a German soldier from World War II.

The findings of this study raise questions about how thoroughly AI models are tested for safety and ethics before they are released to the public. Users keep uncovering inaccuracies like these, indicating that the models may need further refinement before they are made widely available.

Companies like Meta and Anthropic have responded to the study’s findings. Meta clarified that Llama 2 is a model aimed at developers, not a tool consumers would use, and said that most responses from its consumer-facing AI tool direct users to authoritative information from state election authorities. Anthropic plans to release a new version of its AI tool that provides accurate voting information.

The public’s faith in AI tools during elections depends on the accuracy and reliability of the information they provide. As fear of false and misleading information spreads, it is becoming clear that regulations governing AI in politics are needed. Until then, responsibility for ensuring the integrity of these chatbot platforms falls on the tech companies themselves.