The world we live in is changing every second. With new technologies come new threats and dangers to the very world we are destined to inhabit. More than five decades have passed since the invention of the internet, and its development opened up a new world of undiscovered domains. The 1960s first opened the doors to cyberspace; with time, technicians and internet users began finding new ways the internet could be used. Developers of online platforms used it to establish means of communication and entertainment for people and to profit from them. Yet users also discovered the other side of the internet and the wrong ways to use social platforms.
Development comes naturally to humans; we continuously seek ways to improve our lives on this planet. Yet in our quest to learn and discover new things, we tend to forget how our developments harm other creatures in this world. Artificial intelligence has brought forward the idea of a digital agent that can aid humans in their work and, in many cases, take over their duties entirely. Such systems only require us to specify how different tasks should be performed, and they take over from there. Because the developer of such a program cannot constrain all of its functionality, owing to its inherent complexity, a loophole remains for people looking to misuse the program for their own gain.
The “Pentagon Bombing”
The world now sees numerous examples daily of AI being used for both positive and negative ends. On May 22, 2023, one such image shocked the whole world: someone uploaded an AI-generated picture of an explosion at the Pentagon to social media. The news sent shockwaves through Twitter (now X); even verified account holders reposted the image, spreading terror, horror, and fear among viewers and the general public.
The initial report, which claimed that a large explosion had taken place near the Pentagon complex, was accompanied by an image depicting the blast, implying that the US was once again under attack. People who had not forgotten the implications of the 9/11 attacks were already predicting what might happen this time. Moreover, much of the general population, including novice users and the digitally illiterate, lacks the skills to verify news and tends to believe whatever it sees. This is why anything shared on the internet can so easily cause upheaval: users reshare posts that may well be fake.
Reposting by verified users and citation of the post by news channels such as Republic TV lent it an air of truth. All of this happened within minutes, and the image reached users in different parts of the world. Although officials soon confirmed that no explosion had occurred, they could not undo the informational damage already done. A mere AI-generated deepfake had managed to fool not only the public but many news outlets as well. One might think that, at worst, the image caused a brief scare and that once it was confirmed fake, everything returned to normal, yet that was not the case.
Within only a few hours, the US stock market suffered a visible drop because of the incident, which recovered only after the image had been verified as fake. Major indices, including the Dow Jones Industrial Average and the broader S&P 500, dipped on the news of the supposed Pentagon bombing, exposing the risks AI poses to investors as well. One can only imagine the effect such an incident, even one quickly identified as fake, could have on a nation's politics.
In the mere minutes the image circulated, it caused a noticeable dip in the US stock market, harming the nation's economy at the same time. The arrival of deepfakes and cheap fakes in international affairs has also led many politicians to deploy the same tactics against their opponents.
AI and Propaganda
Propagandists have used AI to generate fake videos of politicians, presenting them as saying things they never said in reality and building narratives against them. Artificial intelligence now holds a stronger position in the hands of the public than it is usually given credit for. Robert Chesney and Danielle Citron, in their article "Deepfakes and the New Disinformation War," highlighted how the growth of social media and the capacity of deepfakes to generate believable misinformation would bring chaos in the near future. True to that warning, what was merely a prediction in 2018 had begun causing serious disruption by 2020.
During the COVID-19 era, China was readily targeted by the West for human rights violations when news broke that Chinese officials had begun strictly confining Wuhan's residents to their homes. Western audiences, many of whom did not yet believe in the existence of the virus rapidly spreading around the world, took the moment as an opportunity to shape more negative narratives about China.
These included images purporting to show poor, unhygienic conditions for quarantined patients, along with reports that the movement restrictions were causing deaths because people lacked access to basic necessities and first aid for the sick. Although the West later went down the same path of lockdowns, the fake news helped it paint the Chinese government as brutal and harsh for the world to see.
The Role of AI in Politics
As it stands, nothing is inherently good or bad; everything depends on how it is used. Artificial intelligence can help strategists outline their ideas, plans, and content and analyze how implementing a specific strategy would affect society. With proper instructions and dataset management, an AI-driven system can perform numerous tasks. It can identify social patterns, support political analysis and news reporting, and help manage cyberspace, and it can also aid a state in crisis management, risk prevention, and security deterrence.
The problem, however, is that AI is being used more for wrong ends than for positive ones. It is bringing us toward the time Stephen Hawking warned us about, when the development of full artificial intelligence could spell the end of the human race. At the national level, AI is now being used to suppress certain topics while amplifying others. It has provided a platform for inflaming gender issues, political crises, and extremism. Correspondingly, states are using AI-generated products to influence election campaigns, target certain cultures, spread disinformation, and prop up false democracy.
Earlier this year, the Republican National Committee (RNC) released an AI-generated video depicting what the US might look like under President Biden in the years ahead. Though only a 30-second clip produced in response to Biden's re-election announcement, the video successfully portrayed a US beset by high crime, borders open to migrants and terrorists, escalating conflict with China, and an overall collapse of the US economy. This fed criticism of Biden and eroded his public standing.
In the future, AI could affect election campaigns in the same way: it can spread false information about politicians or distort a campaign's agenda, generating negativity for one side and, ultimately, favor for the other. It can also target undecided citizens with messaging designed to steer them toward one candidate over another. Correspondingly, artificial intelligence can help politicians spread their propaganda more effectively and, through defamation by deepfake, win public support.
In a world now dependent on the internet for much of its work, where users spend more time online than offline, AI can dominate thousands of such domains with little to no effort. As social media users and responsible citizens, it is therefore our duty to learn cyber awareness and avoid spreading information that may be false. Before accepting any news and passing it on, we must recognize that nothing on the internet can be assumed to be entirely true or false. We need to keep ground realities in mind and seek confirmation through different verification tools.
Analyzing pictures and videos requires concentration: with closer attention, AI-generated products can often be detected. Checking who uploaded a piece of content and what their posting history reveals can help identify further misinformation and disinformation. This takes some effort, but if we are willing to make it, we can escape the cycle of accepting false information. It may be amusing to watch Biden sing "Baby Shark" on national television, but we must understand that this, too, is a method of pushing the narrative that the US president is incompetent for the post he holds.
The views and opinions expressed in this article/paper are the author’s own and do not necessarily reflect the editorial position of Paradigm Shift.