According to recent research, more than 100 deepfake video advertisements impersonating Rishi Sunak were promoted on Facebook in the past month, sparking concerns about the risk AI poses to the upcoming general election.
The advertisements may have reached as many as 400,000 people, despite appearing to breach several of Facebook's policies. It is also believed to be the first time the prime minister's likeness has been doctored in a systematic, widespread manner.
A total of £12,929 was spent on 143 advertisements, which originated from 23 countries, including the United States, Turkey, Malaysia and the Philippines.
One of the videos features a fabricated clip of the BBC newsreader Sarah Campbell appearing to break news of a scandal in which Sunak had earned vast sums from a project intended for ordinary citizens.
The fabricated broadcast falsely claims that Elon Musk has launched an app that can “gather” stock market transactions, and includes a doctored clip of Sunak saying the government had decided to test the app rather than risk ordinary people's money.
The clips then direct viewers to a fake BBC News page promoting a fraudulent investment scheme.
The research was carried out by Fenimore Harper, a communications company founded by Marcus Beard, a former Downing Street official who led the government's response to conspiracy theories during the Covid pandemic.
He warned that the advertisements mark a step change in the quality of fake material, and show that this year's elections are exposed to interference from a flood of high-quality, AI-generated falsehoods.
As voice and face cloning becomes cheaper and easier to access, it takes very little skill or knowledge to manipulate someone's likeness for malicious ends.
Unfortunately, the problem is compounded by lax moderation of paid promotions. These advertisements breach several of Facebook's advertising policies, yet the researchers found that only a small number had been taken down.
Meta, Facebook's parent company, has been approached for comment.
A UK government spokesperson said that teams were working across government, including the defending democracy taskforce and dedicated government units, to ensure they were ready to respond rapidly to any threats to democratic processes.
They added that the Online Safety Act places new duties on social media platforms to swiftly remove false or misleading information, including AI-generated content, as soon as they become aware of it.
A BBC spokesperson said that with the rise of fake news, it is more important than ever that people can get information from a source they trust. In response, the broadcaster launched BBC Verify in 2023: a dedicated team with expertise in areas such as forensics and open source intelligence (OSINT), whose role is to investigate, fact-check, authenticate video, counter disinformation, analyse data and explain complex stories.
“We build trust with our audience by showing how BBC journalists know the information they are reporting, and we publish guides on how to spot fake and deepfake content. Where we find fake BBC content, we take swift action.”
Officials are concerned that there is little time to push through major reforms to modernize Britain's electoral system and keep pace with advances in artificial intelligence before the general election, which is expected in November.
The government has been in talks with regulators, including the Electoral Commission. The Commission believes that new rules introduced in 2022 will help guarantee transparency for voters by requiring digital campaign material to carry an “imprint” disclosing who paid for the advertisement or is seeking to influence their vote.
Source: theguardian.com