AI-powered 'narrative attacks' pose a growing threat: Three defense methods for corporate leaders
From automating routine tasks and optimizing supply chains to supporting complex financial models and personalizing customer experiences at scale, artificial intelligence is transforming businesses, enabling them to operate with efficiency and insight never experienced before.
AI is also, inadvertently, aiding cybercriminals, whose modus operandi has always been to exploit technical vulnerabilities in systems and networks. Generative AI is evolving fast and accelerating the cyber threat landscape. The most sophisticated attackers are already integrating AI-generated disinformation into their attack vectors, leaving enterprises exposed to a wide range of attacks. In January, the World Economic Forum's annual Global Risks Report rated AI-generated disinformation and misinformation as the top global risk, citing its destabilizing potential across industries and institutions. Disinformation campaigns can destabilize organizations, manipulate markets, and undermine trust in public authorities.
A deepfake video of an executive, for example, could easily mislead employees, drive down a stock price, or trick staff into disclosing confidential data. Another example is an AI-driven, coordinated campaign of social media posts spreading false information about a company, leading to financial, reputational, and other damage. With the aid of large language models, deepfakes, and bot networks, even a run-of-the-mill attacker can now compose plausible falsehoods that exploit biases and erode trust, whether to lay the groundwork for a real cyberattack or to amplify one that has already occurred.
The marriage of AI-generated disinformation and traditional hacking techniques has created a new form of cyber threat: the narrative attack, in which fabricated stories are used to manipulate perceptions and change behavior. This dramatically expands the impact of technical intrusions, leaving organizations and individuals vulnerable in ways they have not faced before.
I spoke with defense attorney and former CIA case officer Jack Rice about the dangers of disinformation and defensive strategies for corporate and organizational leaders. "The reason why disinformation and misinformation are so effective in altering people's beliefs and behaviors is because people are innately attracted to information that aligns with their own beliefs," said Rice. "The objective of the creators and distributors of false information is to create chaos within society in order to gain advantages in influence and power."
There have been several notable instances of cyberattacks amplified by disinformation. In these cases, the dual-threat strategy aggravated the crisis by adding fear and urgency, putting further pressure on companies to comply with attackers' demands. The speed at which misinformation and disinformation spread showed how the effects of a cyberattack on stakeholders and the public at large can be magnified, leading to panic.