An Example: Cybercriminal Identifies and Exploits Vulnerabilities Using AI Tools
“The Malicious Use of Artificial Intelligence,” a report drafted by 26 authors from 14 institutions spanning academia, civil society, and industry, concluded that AI tools can automate tasks that would ordinarily require intensive human labor, intelligence, and expertise, such as identifying vulnerabilities and developing exploit code. This enables less technically skilled individuals to launch sophisticated attacks.
For instance, AI-powered tools can scan vast networks to identify potential vulnerabilities in a fraction of the time it would take a human. Once these vulnerabilities are identified, AI can also automate the process of crafting and deploying exploits. This means that attacks can be launched more quickly, and on a larger scale, than ever before.
A recent report from Check Point, an Israeli security firm, describes an anonymous user on a popular underground hacking forum reporting positive results using OpenAI’s ChatGPT “to recreate malware strains and techniques described in research publications.”
To demonstrate the capabilities of AI-based malware, HYAS Labs built a proof of concept using an LLM and found that this kind of malware can be “virtually undetectable by today’s predictive security solutions.” Not only can such malware be created from scratch, as HYAS demonstrated; recent studies have also shown how a threat actor can use ChatGPT to mutate existing malware into highly evasive polymorphic variants.
Attend a webinar: Quantifying AI Cyber Risk in Financial Terms, hosted by RiskLens, Tuesday, June 20, 2023, at 2 PM EDT.
To frame this type of incident in a way that allows us to quantify the potential loss exposure, we use FAIR scoping principles to define a risk scenario, which names the asset at risk, the threat actor, and the effect. For example: the risk of an external malicious actor using AI-assisted tools to identify and exploit a vulnerability in an internet-facing application, resulting in a breach of sensitive data.
To estimate Threat Event Frequency, we might consider how often external threat actors come into contact with the asset and how likely they are to act against it (in FAIR terms, contact frequency and probability of action), and whether AI tooling increases either factor.
Next, we estimate Vulnerability (or Susceptibility) by evaluating the likelihood that a threat event becomes a loss event: in FAIR terms, the probability that the threat actor’s capability exceeds the strength of our resistive controls, such as patching cadence, detection, and response.
[Figure: RiskLens in-platform guidance]
Loss magnitude can be estimated by considering costs such as primary losses (incident response, system replacement, lost productivity) and secondary losses (regulatory fines and judgments, reputation damage, and customer attrition).
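The FAIR factors discussed above can be combined in a simple Monte Carlo simulation to produce a loss-exposure distribution. The sketch below is purely illustrative: the ranges are hypothetical placeholders, the triangular draw stands in for the calibrated PERT distributions typically used in FAIR analyses, and none of it represents RiskLens platform output.

```python
# Illustrative sketch only: a minimal Monte Carlo loss-exposure estimate in the
# spirit of FAIR. All ranges below are hypothetical placeholders, not
# calibrated estimates or RiskLens outputs.
import random

random.seed(42)
N = 10_000  # number of simulation trials

def pert_like(low, mode, high):
    # Simple triangular draw as a stand-in for a calibrated PERT distribution.
    return random.triangular(low, high, mode)

losses = []
for _ in range(N):
    tef = pert_like(0.5, 2, 6)         # threat events per year (hypothetical)
    vuln = pert_like(0.05, 0.15, 0.4)  # P(threat event becomes a loss event)
    lef = tef * vuln                   # loss event frequency per year
    loss_per_event = pert_like(50_000, 250_000, 2_000_000)  # USD (hypothetical)
    losses.append(lef * loss_per_event)

losses.sort()
print(f"Median annualized loss exposure: ${losses[N // 2]:,.0f}")
print(f"90th percentile:                 ${losses[int(N * 0.9)]:,.0f}")
```

Reporting percentiles of the simulated distribution, rather than a single point estimate, is what allows risk to be communicated in financial terms with explicit uncertainty.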
The advent of AI has lowered the barrier to entry for malicious external actors to identify and exploit vulnerabilities, adding a new layer of complexity to the cybersecurity landscape. These risks necessitate a comprehensive, risk-informed approach to AI governance. Robust risk quantification models like FAIR, powered by tools like RiskLens, can guide us in navigating this rapidly changing threat landscape. Stay tuned for more posts like this as we strive to stay on top of emerging cyber risks and discuss approaches to quantifying their impacts.
Read our blog post series on FAIR risk analysis for AI and the new threat landscape