AI
Microsoft’s Copilot AI: A Boon for Productivity or a Hacker’s Spear-Phishing Arsenal?
To return to this article, go to My Profile and then select Saved Stories.
Microsoft's AI Transforms into a Phishing Tool Automatically
In a rush to integrate generative AI into its core operations, Microsoft introduced a feature where its Copilot AI can sift through your emails, Teams conversations, and documents to answer queries about future meetings, promising a significant increase in efficiency. However, this same functionality opens doors for cybercriminals to exploit.
At the Black Hat security event in Las Vegas today, Michael Bargury, a security expert, unveiled five experimental methods showing how Copilot, integrated within Microsoft 365 applications like Word, can be exploited by cyber attackers. These include generating incorrect file references, stealing confidential information, and circumventing Microsoft's security measures.
Arguably, one of the most concerning aspects is Bargury's skill in transforming AI into a self-operating spear-phishing tool. Known as LOLCopilot, the offensive security program he developed is capable, importantly, of utilizing Copilot to identify frequent email contacts once a cybercriminal gains entry to an individual's work email. It can then craft an email that emulates the person's usual communication patterns, right down to the use of emojis, and dispatch a tailored email that may carry a harmful link or embedded malware.
"Bargury, the cofounder and Chief Technology Officer of the cybersecurity firm Zenity, has demonstrated through his research and accompanying video evidence that Copilot can be exploited to create and send a vast number of deceptive emails. He stated, 'This capability allows me to interact with anyone you've ever communicated with by generating and dispatching hundreds of emails on your behalf.' He further explained that while a cybercriminal might traditionally invest days in composing a single convincing email to trick someone into clicking on a malicious link, this tool enables the production of hundreds of similar emails in just minutes."
The exhibition, like other incidents orchestrated by Bargury, primarily operates by employing the large language model (LLM) in the manner it was intended: inputting text queries to extract information the AI is capable of accessing. Yet, it can yield harmful outcomes by incorporating extra data or commands to execute specific tasks. The study underscores the difficulties associated with integrating AI technologies with business data and the potential risks when "untrusted" external data is introduced—especially when the AI's responses appear to be credible.
In another exploit devised by Bargury, he showcases how a cybercriminal, who must first compromise an email account, can obtain confidential data like individual salaries. This is achieved without setting off Microsoft's security measures designed to safeguard sensitive documents. Bargury instructs the system to refrain from indicating the sources of the data it retrieves. "A little intimidation can be effective," Bargury notes.
In some cases, Bargury illustrates that an individual, despite not having direct access to email accounts, can still corrupt the AI's data repository by dispatching a harmful email. This act enables them to alter responses related to banking data, substituting it with their personal banking information. Bargury emphasizes, "Providing AI with data access opens a door for potential attackers to exploit."
A different demonstration reveals the way a hacker from outside the organization might gain a bit of insight into the potential outcome of a forthcoming corporate earnings announcement, being either positive or negative. In the last example, Bargury explains, Copilot is transformed into a "nefarious insider" through offering users access to fraudulent websites.
Phillip Misner, who leads AI incident detection and response at Microsoft, expressed gratitude towards Bargury for pointing out the security flaw and mentioned that they have collaborated with him to evaluate the situation. Misner remarks that the dangers associated with the misuse of AI following a breach are akin to those found with other compromise strategies. He emphasizes that implementing security measures and surveillance across different platforms and user identities can either reduce or entirely prevent these types of incidents.
Over the last two years, the evolution of generative AI technologies, including OpenAI’s ChatGPT, Microsoft’s Copilot, and Google’s Gemini, indicates a future where these systems could handle tasks on behalf of individuals, such as scheduling appointments or making online purchases. Nonetheless, security experts have persistently pointed out the dangers associated with integrating external data into these AI models, like from emails or website content, due to the potential for security vulnerabilities via indirect prompt injections and poisoning attacks.
"Johann Rehberger, a security expert and leader of a red team, who has significantly highlighted vulnerabilities in AI technologies, expressed concerns about the underestimated potential of attackers in the current landscape. He emphasized, "The real concern at this moment is the output of the LLM and what it's delivering to the end-user."
Bargury highlights that despite Microsoft's extensive measures to secure its Copilot system against prompt injection attacks, he has managed to identify vulnerabilities by dissecting the system's structure. He mentions that he was able to uncover the system's internal prompt and figure out how it interacts with enterprise resources, including the methods it employs. "Interacting with Copilot feels restricted due to the numerous safeguards Microsoft has implemented," he notes. "However, with the right set of keywords, the system becomes much more accommodating, allowing for broader manipulation."
Rehberger cautions that a recurring issue involves corporations granting excessive file access to numerous employees without adequately managing permissions throughout their entities. He adds a layer of concern by introducing Copilot into this scenario. According to Rehberger, he has employed artificial intelligence technologies to hunt for widely used passwords like Password123, and surprisingly, these efforts have unearthed results from within various companies.
Rehberger and Bargury emphasize the importance of closely observing the outputs and actions of AI in relation to the user. Bargury points out, "The concern revolves around the AI's interaction with one's surroundings, its handling of personal data, and its execution of tasks for the user." He further explains, "It's crucial to understand the actions taken by the AI on behalf of a user and whether these actions align with the user's original requests."
Check Out Similar Content…
Explore Political Insights: Subscribe to our newsletter and tune into our podcast.
Exploring the outcomes of providing individuals with unconditional cash
Not all individuals experience weight loss with Ozempic.
The Pentagon is seeking to allocate $141 billion towards a catastrophic device.
Invitation: Be part of the Energy Tech Summit happening on October 10th in Berlin.
Additional Content from WIRED
Critiques and Manuals
Copyright © 2024 Condé Nast. All rights reserved. When you buy products via our website, WIRED might receive a share of the revenue through our affiliate agreements with retail partners. The content on this website is protected and cannot be copied, shared, broadcast, stored, or used in any way without explicit consent from Condé Nast. Advertisement Choices
Choose a global website
Discover more from Automobilnews News - The first AI News Portal world wide
Subscribe to get the latest posts sent to your email.