AI and disinformation: opportunities and risks amid war
The war has lent new urgency to discussions of the risks and opportunities that artificial intelligence brings to information warfare.
How artificial intelligence helps in working with information
AI has great potential for creating and processing content. The Center for Strategic Communication and Information Security employs AI capabilities to monitor the media space and analyze large arrays of online publications, relying on automated tools such as the SemanticForce and Attack Index platforms.
SemanticForce helps users identify information trends, track how social media users respond to news and events, detect hate speech, and more. Its neural networks are also applied to detailed image analysis, which allows for the rapid detection of inappropriate or malign content.
Attack Index uses machine learning (assessing message tonality, ranking sources, forecasting the development of information dynamics), cluster analysis (automatically grouping text messages, detecting plots, forming story chains), computational linguistics (identifying established phrases and narratives), the formation, clustering, and visualization of semantic networks (determining connections and nodes, developing cognitive maps), and correlation and wavelet analysis (detecting ongoing psyops).
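To make one of these techniques concrete, here is a minimal sketch of automatically grouping text messages into story clusters, written in Python with scikit-learn. It assumes generic open-source tooling and invented sample messages; it illustrates the general approach, not Attack Index’s actual implementation.

    # Sketch: grouping monitored messages into "story" clusters.
    # Generic TF-IDF + k-means illustration, not the Attack Index pipeline.
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Hypothetical monitored messages (real pipelines ingest thousands).
    messages = [
        "Missile strike hits residential building in Dnipro",
        "Residential block in Dnipro destroyed by missile strike",
        "New sanctions package announced against Russia",
        "EU agrees on fresh sanctions targeting Russian banks",
    ]

    # Represent each message as a TF-IDF vector, then cluster.
    vectors = TfidfVectorizer(stop_words="english").fit_transform(messages)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

    # Messages sharing a label form one candidate story chain.
    for label, text in sorted(zip(labels, messages)):
        print(label, text)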
The available tools make it possible to distinguish organic from coordinated content distribution, detect automated spam systems, assess the audience impact of particular social media accounts, tell bots from real users, and much more – all using AI.
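As a toy example of telling bots from real users, consider one common heuristic: automated accounts tend to post at unnaturally regular intervals. The Python sketch below is an assumption-based illustration with invented account data, not the Center’s actual detection method, which combines many more signals.

    # Sketch: flagging bot-like accounts by posting-time regularity.
    # A generic heuristic for illustration only.
    from statistics import pstdev

    def gap_deviation(timestamps):
        """Standard deviation of gaps between posts, in seconds.
        Very low values suggest scheduled, automated posting."""
        gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
        return pstdev(gaps) if len(gaps) > 1 else float("inf")

    # Hypothetical accounts: one posts every 600 s on the dot,
    # the other at irregular, human-looking intervals.
    accounts = {
        "suspected_bot": [0, 600, 1200, 1800, 2400],
        "likely_human": [0, 480, 1900, 2100, 5200],
    }

    THRESHOLD = 60.0  # assumed cut-off; real systems weigh many signals
    for name, ts in accounts.items():
        score = gap_deviation(ts)
        verdict = "bot-like" if score < THRESHOLD else "organic"
        print(f"{name}: gap deviation {score:.1f}s -> {verdict}")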
These tools can be used to detect disinformation, analyze misinformation campaigns, and develop countermeasures.
AI’s potential to create and spread disinformation
Almost every day, neural networks demonstrate improved capabilities in creating graphic, textual, and audiovisual content, and machine learning will keep raising its quality. For now, popular neural networks are used by Internet users more as a toy than as a tool for creating fakes.
However, there are already examples of neural-network-generated images that not only went viral but were also perceived by users as real – in particular, the image of “a boy who survived a missile strike in Dnipro” and that of “Putin greeting Xi Jinping on his knees.”
These examples clearly demonstrate that images designed with the help of neural networks already compete with real ones in emotional charge, and this will certainly be exploited for disinformation.
A study by the NewsGuard analytical center, conducted in January 2023, found that ChatGPT can generate texts that develop already existing conspiracy theories and weave real events into their context. The tool lends itself to the automated distribution (through bot farms) of large numbers of messages whose topic and tone are set by a human operator while the text itself is generated by AI. Already today, by formulating appropriate requests, this bot can be made to produce disinformation messages, including ones based on the narratives of Kremlin propaganda. Countering the spread of artificially generated fake content is a challenge we must already be prepared to respond to.
Wartime use of AI: what to expect from the Russians
Russia’s special services, which already have extensive experience in using photo and video editing to create fakes and run psychological operations, are now actively mastering AI. Deepfake technology, which is based on AI, was used, in particular, to create the fake video address by President Zelensky about Ukraine’s “surrender” that appeared in the media space in March 2022.
Given the poor quality of this “product” and the prompt reaction of state communications bodies, of the president, who personally refuted the fake, and of journalists, the video got little coverage and achieved its goal neither in Ukraine nor abroad. But the Russians are obviously not going to stop at that.
Today, the Kremlin uses a huge number of tools to circulate disinformation: TV, radio, websites, and propaganda blogs on Telegram, YouTube, and social networks.
AI has the potential to be used primarily for creating photo, audio, and video fakes, as well as for running bot farms. It can replace a significant share of the human personnel at Russian “troll factories” – the Internet warriors who provoke conflicts on social media and create the illusion of mass support for Kremlin narratives online.
Instead of “trolls” penning comments according to set guidebooks, AI can do this using the keywords and vocabulary it is fed. At the same time, it is actual influencers (politicians, propagandists, bloggers, conspiracy theorists, etc.), rather than nameless bots and Internet trolls, who have a decisive impact on loyal audiences. With the help of AI, however, the weight of the latter can be increased through sheer quantity and the “fine-tuning” of messages for different target audiences.
In 2020, the Ukrainian government approved the “Concept for the Development of Artificial Intelligence.” This framework document defines AI as a computer program; accordingly, the use of AI is legally regulated in the same way as any other software product. So it is too early to speak of any dedicated legal regulation of AI in Ukraine.
The development of AI outpaces the creation of safeguards against its malicious use and the formulation of policies to regulate it.
Therefore, the cooperation of Ukrainian government agencies with Big Tech companies in countering the spread of disinformation and in identifying and eliminating bot farms should only deepen. Both the Ukrainian government and the world’s technological giants have a stake in this.
Center for Strategic Communication and Information Security
Photo: armyinform.com.ua / Beata Kurkul
Source: www.unian.info