X

Microsoft AI Has the Potential to Become an Automated Phishing Scheme

Microsoft sped up to integrate generative AI into its core systems. The company’s Copilot AI technology can retrieve responses from your emails, Teams chats, and files when you ask questions regarding an upcoming meeting, which might be quite helpful in terms of efficiency. However, hackers may also take advantage of these very procedures.

Researcher Michael Bargury is showcasing five proof-of-concept ways that Copilot, which runs on its Microsoft 365 apps, like Word, can be manipulated by malicious attackers today at the Black Hat security conference in Las Vegas. These ways include using it to provide false references to files, exfiltrate some private data, and evade Microsoft’s security measures.

Arguably, one of the most concerning demonstrations is Bargury’s capacity to transform the AI into an autonomous spear-phishing apparatus. Known as LOLCopilot, the red-teaming code that Bargury developed can, crucially, be used by hackers to see who you regularly email, draft a message that mimics your writing style (including the use of emojis), and send a customized blast that may contain malware or a malicious link once they have access to a target’s work email.

Cofounder and CTO of security firm Zenity Bargury says, “I can do this with everyone you have ever spoken to, and I can send hundreds of emails on your behalf.” Bargury released his research along with videos demonstrating how Copilot may be misused. “A hacker would spend days crafting the right email to get you to click on it, but they can generate hundreds of these emails in a few minutes.”

This example, like other assaults developed by Bargury, primarily operates by utilizing the large language model (LLM) as intended: inputting written queries to obtain information that the AI can acquire. Nevertheless, if it contains extra information or commands to carry out certain tasks, it may have harmful effects. A few of the difficulties in integrating AI systems with corporate data are brought to light by the research, along with the potential consequences of incorporating “untrusted” external data, especially when the AI produces results that appear legitimate.

Among the other assaults that Bargury designed is an example of how a hacker might obtain sensitive data, like people’s salaries, without inadvertently triggering Microsoft’s defenses for sensitive files. This hacker, of course, must first have gained control of an email account. Bargury’s prompt requests that the system not give references to the files from which the data is extracted. Bullying occasionally does assist, according to Bargury.

In other cases, he demonstrates how an attacker can modify responses regarding banking information to reveal their own bank details. This attacker doesn’t have access to email accounts, but instead taints the AI’s database by sending it a malicious email. According to Bargury, “Every time you give AI access to data, that is a way for an attacker to get in,” 

Another example demonstrates how an outside hacker could obtain certain restricted knowledge regarding the potential success or failure of an impending corporate earnings call. The last example, according to Bargury, transforms Copilot into a “malicious insider” by sending users to phishing websites.

Microsoft’s head of AI incident detection and response, Phillip Misner, said the company has been collaborating with Bargury to evaluate the findings and is grateful that he discovered the issue. According to Misner,  “The risks of post-compromise abuse of AI are similar to other post-compromise techniques,” “Security prevention and monitoring across environments and identities help mitigate or stop such behaviors.”

In the last two years, generative AI systems have advanced to the point where, like Google’s Gemini, Microsoft’s Copilot, and OpenAI’s ChatGPT, they may ultimately perform human-like jobs like making reservations for events or making online purchases. But as security experts have shown time and time again, letting outside data into AI systems—for example, by email or by reading content from websites—raises the possibility of indirect trigger injection and poisoning attacks.

As a security researcher and red team director who has widely shown security flaws in AI systems, Johann Rehberger adds, “I think it’s not that well understood how much more effective an attacker can actually become now.” “What we have to be worried [about] now is actually what is the LLM producing and sending out to the user.”

Rehberger cautions in general that a number of data problems may be traced back to the long-standing issue of businesses permitting an excessive number of employees to view files and failing to properly arrange access permissions throughout their enterprises. Rehberger continues, “Now imagine you put Copilot on top of that problem.” He claims to have employed AI systems to look up popular passwords like Password123, and that the algorithms have produced findings from within businesses.

Rehberger and Bargury agree that monitoring the output that an AI generates and transmits to a user needs to be given greater attention. According to Bargury, “The risk is about how AI interacts with your environment, how it interacts with your data, how it performs operations on your behalf,” “You need to figure out what the AI agent does on a user’s behalf. And does that make sense with what the user actually asked for.”

Categories: Technology
Archana Suryawanshi:
X

Headline

You can control the ways in which we improve and personalize your experience. Please choose whether you wish to allow the following:

Privacy Settings

All rights received