
Will “deepfakes” threaten the upcoming US elections?

Many analysts warn that deepfake technology could be used as a weapon of political competition in the upcoming US elections. Many of them see a real possibility that videos and audio clips of candidates could be falsified with artificial intelligence to show them saying or doing something they never did, threatening their political future and reducing their chances of winning.

Indeed, the technology has already been used to create fake pornographic videos of a number of celebrities, and at other times to fabricate false news, which has angered American lawmakers.

The US House of Representatives Intelligence Committee held a hearing on the technology, which a report published by New York University listed among the top eight disinformation threats to the 2020 election campaign.

Deepfake technology

It is a technology for creating fake videos using computer software and artificial intelligence: it combines photos and video clips of a person to produce a new video that may look real at first glance but is in fact fabricated.
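To make the idea concrete for technically minded readers: many face-swap tools of this kind are built around an autoencoder with one shared encoder and a separate decoder for each person. The sketch below is a minimal illustration of that architecture in PyTorch; the layer sizes, 64x64 input, and training setup are assumptions for illustration only, not a description of any specific tool mentioned in this article.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the shared-encoder / per-identity-decoder idea behind
# many face-swap deepfakes. Layer sizes are arbitrary; real tools are far larger.

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),               # latent "pose/expression" code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """One decoder is trained per identity; it learns to paint that person's face."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()                        # shared between both identities
decoder_a, decoder_b = Decoder(), Decoder()

# Training (not shown) reconstructs person A's frames through decoder_a and
# person B's frames through decoder_b, so the shared latent code tends to
# capture pose and expression rather than identity.

frame_of_a = torch.rand(1, 3, 64, 64)       # a cropped, aligned face of person A
swapped = decoder_b(encoder(frame_of_a))    # rendered with person B's appearance
print(swapped.shape)                        # torch.Size([1, 3, 64, 64])
```

The design choice that makes the swap plausible is the shared encoder: because each person's frames are reconstructed only through that person's own decoder, decoding person A's latent code with person B's decoder renders A's expression on B's face.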

The first traces of the technology appeared in 1997, when the “Video Rewrite” program could turn a video of a person speaking on one topic into a video of the same person speaking on another topic, uttering new words that were never said in the original footage.

The technology is usually employed in academic research on computer vision, a field built on artificial intelligence, but its use is by no means confined to academia.

Deepfakes were used to discredit some well-known politicians (Pixabay)

The term “deepfakes” comes from the username of a Reddit user who, in late 2017, shared fake pornographic videos of celebrities that he had created; the videos drew a large number of views.

In February 2018, Reddit banned the user, and other sites followed by banning anyone promoting the technology. Nevertheless, other platforms on the Internet continue to share videos made with this technology, knowingly or not.

In December of the same year, actress Scarlett Johansson spoke publicly about deepfakes in an interview with The Washington Post, expressing her concern about the “phenomenon” and describing the online world as “a vast wormhole of darkness that eats itself.”

Political exploitation

Deepfake technology has been used to tarnish the image of well-known politicians. For example, but not limited to, the face of Argentine President Mauricio Macri was replaced with that of Adolf Hitler, and Angela Merkel’s face was replaced with Donald Trump’s.

In April 2018, a video was published showing former US President Barack Obama speaking about this technology and explaining its risks, something the former president had never actually done.

Also in the same year, the Belgian Socialist Party released a fake video of President Trump insulting Belgium; as expected, the video drew enough reaction to demonstrate the potential risks posed by higher-quality deepfakes.

And in January 2019, a Fox television station broadcast a deepfake video of President Trump during his Oval Office address.

Deepfake technology may pose a real threat to the future of US election candidates from both parties (Pixabay)

The battle of artificial intelligence

Researchers describe a scenario in which an investigative journalist receives a video clip from an anonymous source showing a US presidential candidate admitting to illegal activity. Everyone wonders: is the video real? If so, it would be explosive news that could completely reverse the outcome of the upcoming elections.

Hany Farid of the University of California, Berkeley, points to something even more troubling: the technology can also be used to cast doubt on genuine videos, which makes the ability to detect deepfakes and label them clearly all the more valuable.

Deepfake detection emerged as a research area a little over three years ago. Early work focused on spotting visual artifacts in videos, but over time the fakes have become better at mimicking real footage and harder to detect.

Researchers are therefore trying to develop a specialized tool that tests videos for deepfakes using artificial intelligence and deep learning, and journalists around the world may be using such a tool within a few years.
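As a rough sketch of how such a tool might work, the example below scores sampled video frames with a binary real-versus-fake image classifier and averages the scores. This is a hypothetical illustration in PyTorch/torchvision: the ResNet backbone, the 224x224 preprocessing, the 0.5 threshold, and the frame-averaging scheme are all assumptions, not details of any tool the researchers describe.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models, transforms

# Hypothetical frame-level deepfake scorer: a standard image classifier with a
# single "probability of being fake" output, applied to sampled frames and
# averaged. Real research systems add face cropping, temporal models, and
# much larger training sets.

class FrameFakeScorer(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)   # would be trained on real/fake face crops
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)
        self.backbone = backbone

    def forward(self, frames):                     # frames: (N, 3, 224, 224)
        return torch.sigmoid(self.backbone(frames)).squeeze(1)

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def score_video(frames_rgb, scorer, threshold=0.5):
    """frames_rgb: list of HxWx3 uint8 arrays sampled from the video."""
    batch = torch.stack([preprocess(f) for f in frames_rgb])
    with torch.no_grad():
        fake_prob = scorer(batch).mean().item()    # average the per-frame scores
    return fake_prob, fake_prob > threshold

# Usage with random "frames" just to show the plumbing:
dummy_frames = [np.random.randint(0, 255, (360, 640, 3), dtype=np.uint8) for _ in range(8)]
prob, looks_fake = score_video(dummy_frames, FrameFakeScorer())
print(f"estimated fake probability: {prob:.2f}, flagged: {looks_fake}")
```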

However, such a tool will not solve every problem the technology creates; it will be just one weapon in the arsenal of artificial intelligence's battle against disinformation.

Other researchers argue that deepfakes are here to stay, and that protecting the public from disinformation will be more difficult than ever as the power of artificial intelligence grows.

Some researchers who study the effects of these applications argue that senior American politicians are not the ones most threatened by them; rather, the technology is more likely to become a weapon for expanding the scope of online harassment, bullying and blackmail.



