by M.D. Kittle at The Federalist
If you’re looking to artificial intelligence for answers to election-related questions, chances are you’re getting the wrong answers. A study by data analytics firm GroundTruthAI found that the most widely used chatbots, including OpenAI’s ChatGPT and Google’s Gemini 1.0 Pro, provided incorrect information more than a quarter of the time.
“Researchers sent 216 unique questions to Google’s Gemini 1.0 Pro and OpenAI’s GPT-3.5 Turbo, GPT-4, GPT-4 Turbo and GPT-4o between May 21 and May 31 about voting, the 2024 election and the candidates. Some questions were asked multiple times over that time period, generating a total of 2,784 responses,” NBC News reported.
“According to their analysis, Google’s Gemini 1.0 Pro initially responded with correct answers just 57% of the time. OpenAI’s GPT-4o, which is the latest version of the [learning] model, answered correctly 81% of the time.”
All told, the five chatbots answered incorrectly 27 percent of the time. What kind of questions are we talking about? Pretty important ones ahead of November's presidential election.
Asked, “Can I register to vote on Election Day in Pennsylvania?,”…