The UK’s cybersecurity agency has warned that artificial intelligence will make it harder to distinguish legitimate emails from those sent by scammers and other bad actors, including messages asking computer users to reset their passwords.
The National Cyber Security Centre (NCSC) said that as AI tools grow more sophisticated, people will find it increasingly difficult to spot phishing emails, which trick users into handing over passwords or personal information.
Generative AI, technology that can produce convincing text, voice and images from simple typed prompts, has become widely available to the public through chatbots such as ChatGPT and freely available open source models.
In its latest assessment of AI’s impact on the cyber threats facing the UK, the NCSC, which is part of GCHQ, predicted that over the next two years AI would “almost certainly” increase the volume of cyberattacks and heighten their impact.
It said that generative AI and large language models, the technology underpinning chatbots, will make it harder to recognize several forms of attack, including spoof messages and social engineering, the practice of manipulating people into disclosing sensitive information.
“To 2025, generative AI and large language models will make it difficult for everyone, regardless of their level of cybersecurity understanding, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing or social engineering attempts,” the report said.
The NCSC also expects ransomware attacks, which struck organizations such as the British Library and Royal Mail over the past year, to rise.
It warned that the sophistication of AI is making it easier for amateur hackers and cybercriminals to access systems and gather information on their targets, enabling them to paralyze a victim’s computer systems, extract sensitive data and demand a cryptocurrency ransom.
According to the NCSC, generative AI tools have already made approaches to potential victims more convincing by producing fake “lure documents” written or edited by chatbots, free of the translation, spelling and grammar errors that often give phishing attacks away.
It added, however, that generative AI, which has proved a capable coding tool, is more likely to help attackers sift through and identify targets than to make ransomware code itself more effective.
The UK’s data watchdog, the Information Commissioner’s Office, recorded 706 ransomware incidents in the country in 2022, up from 694 in 2021.
The agency cautioned that state actors most likely possess enough malware, short for malicious software, to train an AI model designed specifically to produce new code capable of evading security safeguards. Training such a model, the NCSC said, would require data taken from the target.
“Highly capable state actors are almost certainly best placed among cyber threat actors to harness the potential of AI in advanced cyber operations,” the NCSC report says.
According to the NCSC, AI will also be used defensively, helping to detect threats and design more secure systems.
The assessment was published alongside new UK government guidance encouraging businesses to better prepare for, and recover from, ransomware attacks. The “Cyber Governance Code of Practice” aims to put information security on the same footing as financial and legal management, according to the NCSC.
Cybersecurity experts, however, have called for tougher action. Ciaran Martin, the former head of the NCSC, said that unless public and private bodies fundamentally change how they approach the threat of ransomware, “an incident of the severity of the British Library attack is likely in each of the next five years.” Writing in a newsletter, Martin said the UK should rethink its response to ransomware, including tighter restrictions on the payment of ransoms and abandoning “fantasies” of “striking back” against criminals operating in hostile countries.