
Action needed to protect election from AI disinformation, study says

Ofcom and the Electoral Commission have been urged to address the use of AI to mislead the public (Tim Goode/PA)

Artificial intelligence-generated deepfakes could be used to create fake political endorsements ahead of the General Election, or to sow broader confusion among voters, a study has warned.

Research by The Alan Turing Institute’s Centre for Emerging Technology and Security (Cetas) urged Ofcom and the Electoral Commission to address the use of AI to mislead the public, warning it was eroding trust in the integrity of elections.

The study said that while there was, so far, limited evidence that AI would directly affect election results, there were early signs of damage to the broader democratic system, particularly through deepfakes causing confusion, or AI being used to incite hate or spread disinformation online.

It said the Electoral Commission and Ofcom should create guidelines and seek voluntary agreements with political parties setting out how they should use AI in campaigning, and should require AI-generated election material to be clearly labelled as such.

The research team warned that there was currently “no clear guidance” on preventing AI from being used to create misleading content around elections.

Some social media platforms have already begun labelling AI-generated material in response to concerns about deepfakes and misinformation, and in the wake of a number of incidents of AI being used to create or alter images, audio or video of senior politicians.

In its study, Cetas said it had created a timeline of how AI could be used in the run-up to an election, suggesting it could be used to undermine the reputation of candidates, falsely claim that they had withdrawn, or spread disinformation to shape voter attitudes on a particular issue.

The study also said misinformation around how, when or where to vote could be used to undermine the electoral process.

Sam Stockwell, research associate at the Alan Turing Institute and the study’s lead author, said: “With a general election just weeks away, political parties are already in the midst of a busy campaigning period.

“Right now, there are no clear guidelines or expectations for preventing AI being used to create false or misleading electoral information.

“That’s why it’s so important for regulators to act quickly before it’s too late.”

Dr Alexander Babuta, director of Cetas, said: “While we shouldn’t overplay the idea that our elections are no longer secure, particularly as worldwide evidence demonstrates no clear evidence of a result being changed by AI, we nevertheless must use this moment to act and make our elections resilient to the threats we face.

“Regulators can do more to help the public distinguish fact from fiction and ensure voters don’t lose faith in the democratic process.”