Wide majority of Americans worry AI will be used to influence 2024 election outcome
More than seven in ten Americans are concerned about how artificial intelligence (AI) will be used to influence the 2024 election via social media, according to a new poll.
A survey conducted by North Carolina’s Elon University and led by the school’s Imagining the Digital Future center found that 78 per cent of Americans are either somewhat or very concerned that AI will be used to manipulate or distort voters’ perceptions of the candidates ahead of voting in November.
When given specific examples of such behaviour, 70 per cent of respondents said they considered it likely that AI would be used to generate fake accounts on social media platforms and fake video, audio or other materials depicting the candidates. Sixty-two per cent said they were concerned about AI-driven content aimed at convincing specific demographic groups not to participate in the election. And 73 per cent said they believed AI would be used to artificially drive conversations on social media through fake or “bot” accounts.
In general, Americans take a grim view of the future of AI and its potential use by nefarious actors in the political sphere. Just 5 per cent of poll respondents said they think the technology will have a positive effect on political discourse, compared with 37 per cent who expect an overall negative outcome.
“Misinformation in elections has been around since before the invention of computers, but many worry about the sophistication of AI technology in 2024 giving bad actors an accessible tool to spread misinformation at an unprecedented scale,” said Jason Husser, professor of political science and director of the Elon University Poll. “We know that most voters are aware of AI risks to the 2024 election. However, the behavioral implications of that awareness will remain unclear until we see the aftermath of AI-generated misinformation.”
President Joe Biden has made clear that he shares the views of a majority of Americans when it comes to the potential abuses of AI and the sudden explosion of the technology in everyday life. Last year, he signed an executive order directing companies to share AI safety test results with the federal government and ordering the National Institute of Standards and Technology to develop standards that AI tools must meet before their public release.
The Federal Trade Commission also finalised a rule in February aimed at cracking down on AI scams centred on the impersonation of government officials, and is considering extending that rule to ban AI tools that allow the impersonation of individuals.
“We can’t move at a normal government pace,” White House Chief of Staff Jeff Zients quoted the president as telling his staff last year, according to the AP. “We have to move as fast, if not faster than the technology itself.”
But Americans still clearly worry that the technology remains far ahead of government regulation.
“An optimistic hope is that risk-aware voters may approach information in the 2024 cycle with heightened caution, leading them to become more sophisticated consumers of political information,” Husser said. “A pessimistic outlook is that worries about AI-misinformation might translate into diminished feelings of self-efficacy, institutional trust and civic engagement.”
The Elon University poll was conducted between 19 and 21 April 2024 with responses from 1,020 adults across the US. Its margin of error was 3.2 percentage points.