
Microsoft AI Has the Potential to Become an Automated Phishing Scheme

Microsoft has raced to integrate generative AI into its core systems. The company’s Copilot AI can pull answers from your emails, Teams chats, and files when you ask about an upcoming meeting, which can be a real boost to efficiency. But those same mechanisms can also be exploited by hackers.

At the Black Hat security conference in Las Vegas today, researcher Michael Bargury is demonstrating five proof-of-concept ways that Copilot, which runs on Microsoft 365 apps such as Word, can be manipulated by malicious attackers, including using it to provide false references to files, exfiltrate private data, and dodge Microsoft’s security protections.

Arguably the most alarming demonstration is Bargury’s ability to turn the AI into an automated spear-phishing machine. Once a hacker has access to someone’s work email, the red-teaming code Bargury created, dubbed LOLCopilot, can use Copilot to see who that person emails regularly, draft a message that mimics their writing style (including emoji use), and send a personalized blast that can include malware or a malicious link.

“I can do this with everyone you have ever spoken to, and I can send hundreds of emails on your behalf,” says Bargury, cofounder and CTO of security firm Zenity, who published his findings alongside videos showing how Copilot could be abused. “A hacker would spend days crafting the right email to get you to click on it, but they can generate hundreds of these emails in a few minutes.”

This demonstration, like the other attacks Bargury developed, broadly works by using the large language model (LLM) as designed: typing written questions to retrieve data the AI can access. But it can produce malicious results when extra data or instructions to perform certain actions are slipped in. The research highlights some of the challenges of connecting AI systems to corporate data and what can happen when “untrusted” outside data is brought into the mix, particularly when the AI produces results that look legitimate.

Among the other attacks Bargury created is a demonstration of how a hacker, who again must already have hijacked an email account, can access sensitive information, such as people’s salaries, without triggering Microsoft’s protections for sensitive files. His prompt instructs the system not to provide references to the files the data is taken from. A bit of bullying occasionally helps, according to Bargury.

In other instances, he shows how an attacker who doesn’t have access to email accounts but poisons the AI’s database by sending it a malicious email can manipulate answers about banking information so they surface the attacker’s own bank details. “Every time you give AI access to data, that is a way for an attacker to get in,” Bargury says.

Another example shows how an external hacker could get some limited information about whether an upcoming company earnings call will be good or bad, while the final example, Bargury says, turns Copilot into a “malicious insider” by sending users links to phishing websites.

Phillip Misner, Microsoft’s head of AI incident detection and response, says the company appreciates Bargury identifying the vulnerability and has been working with him to assess the findings. “The risks of post-compromise abuse of AI are similar to other post-compromise techniques,” Misner says. “Security prevention and monitoring across environments and identities help mitigate or stop such behaviors.”

Over the past two years, generative AI systems such as Microsoft’s Copilot, Google’s Gemini, and OpenAI’s ChatGPT have advanced to the point where they may eventually complete human-like tasks, such as booking events or shopping online. But as security researchers have shown time and again, letting external data into AI systems, whether through email or by reading content from websites, opens the door to indirect prompt injection and poisoning attacks.

“I think it’s not that well understood how much more effective an attacker can actually become now,” says Johann Rehberger, a security researcher and red team director who has extensively demonstrated security weaknesses in AI systems. “What we have to be worried [about] now is actually what is the LLM producing and sending out to the user.”

Rehberger broadly cautions that a number of data problems trace back to the long-standing issue of companies allowing too many employees access to files and failing to set access permissions properly across their organizations. “Now imagine you put Copilot on top of that problem,” Rehberger says. He says he has used AI systems to search for common passwords, such as Password123, and gotten back results from within companies.

Both Rehberger and Bargury say more attention needs to be paid to monitoring what an AI produces and sends out to a user. “The risk is about how AI interacts with your environment, how it interacts with your data, how it performs operations on your behalf,” Bargury says. “You need to figure out what the AI agent does on a user’s behalf. And does that make sense with what the user actually asked for.”

Microsoft Expands Copilot Voice and Think Deeper

Microsoft is taking a major step forward by offering unlimited access to Copilot Voice and Think Deeper, marking two years since the AI-powered Copilot was first integrated into Bing search. This update comes shortly after the tech giant revamped its Copilot Pro subscription and bundled advanced AI features into Microsoft 365.

What’s Changing?

Microsoft remains committed to its $20 per month Copilot Pro plan, ensuring that subscribers continue to enjoy premium benefits. According to the company, Copilot Pro users will receive:

  • Preferred access to the latest AI models during peak hours.
  • Early access to experimental AI features, with more updates expected soon.
  • Extended use of Copilot within popular Microsoft 365 apps like Word, Excel, and PowerPoint.

The Impact on Users

This move signals Microsoft’s commitment to enhancing AI-driven productivity tools. With broader access to Copilot’s most capable features, users can expect improved efficiency, smarter assistance, and seamless integration across Microsoft’s ecosystem.

As AI technology continues to evolve, Microsoft is positioning itself at the forefront of innovation, ensuring both casual users and professionals can leverage the best AI tools available.

Stay tuned for further updates as Microsoft rolls out more enhancements to its AI offerings.

Google Launches Free AI Coding Tool for Individual Developers

Google has introduced a free version of Gemini Code Assist, its AI-powered coding assistant, for solo developers worldwide. The tool, previously available only to enterprise users, is now in public preview, making advanced AI-assisted coding accessible to students, freelancers, hobbyists, and startups.

More Features, Fewer Limits

Unlike competing tools such as GitHub Copilot, which limits free users to 2,000 code completions per month, Google is offering up to 180,000 code completions per month, a significantly higher cap designed to accommodate even the most active developers.

“Now anyone can easily learn, generate code snippets, debug, and modify applications without switching between multiple windows,” said Ryan J. Salva, Google’s senior director of product management.

AI-Powered Coding Assistance

Gemini Code Assist for individuals is powered by Google’s Gemini 2.0 AI model and offers:

  • Auto-completion of code while typing
  • Generation of entire code blocks based on prompts
  • Debugging assistance via an interactive chatbot

The tool integrates with popular developer environments such as Visual Studio Code and JetBrains IDEs, as well as GitHub, and supports a wide range of programming languages. Developers can use natural language prompts such as: “Create an HTML form with fields for name, email, and message, plus a submit button.”

With support for 38 programming languages and a 128,000-token context window for processing complex prompts, Gemini Code Assist provides a robust AI-driven coding experience.

Enterprise Features Still Require a Subscription

While the free tier is generous, advanced features like productivity analytics, Google Cloud integrations, and custom AI tuning remain exclusive to paid Standard and Enterprise plans.

With this move, Google aims to compete more aggressively in the AI coding assistant market, offering developers a powerful and largely unrestricted alternative to existing tools.

Elon Musk Unveils Grok-3: A Game-Changing AI Chatbot to Rival ChatGPT

Elon Musk’s artificial intelligence company xAI has unveiled its latest chatbot, Grok-3, which aims to compete with leading AI models such as OpenAI’s ChatGPT and China’s DeepSeek. Grok-3 is now available to Premium+ subscribers on Musk’s social media platform X (formerly Twitter), as well as through xAI’s mobile app and the new SuperGrok subscription tier on Grok.com.

Advanced Capabilities and Performance

xAI says Grok-3 was trained with ten times the computing power of its predecessor, Grok-2. Initial tests shared by the company show Grok-3 outperforming models from OpenAI, Google, and DeepSeek, particularly in math, science, and coding. The chatbot offers advanced reasoning capabilities that can break complex questions down into manageable tasks. Users can interact with Grok-3 in two modes: “Think,” which performs step-by-step reasoning, and “Big Brain,” which is designed for more difficult tasks.

Strategic Investments and Infrastructure

To support the development of Grok-3, xAI has made major investments in its supercomputer cluster, Colossus, which is currently the largest globally. This infrastructure underscores the company’s commitment to advancing AI technology and maintaining a competitive edge in the industry.

New Offerings and Future Plans

Alongside Grok-3, xAI has also introduced DeepSearch, a reasoning-driven research tool designed to enhance research, brainstorming, and data analysis tasks. The tool aims to provide users with more insightful and relevant information. Looking ahead, xAI plans to release Grok-2 as an open-source model, encouraging community participation and further development. Upcoming improvements for Grok-3 also include a synthesized voice feature aimed at improving user interaction and accessibility.

Market Position and Competition

The launch of Grok-3 positions xAI as a major competitor in the AI chatbot market, directly challenging established models from OpenAI and emerging rivals such as DeepSeek. While Grok-3’s performance claims have yet to be independently verified, early indications suggest it could have a significant impact on the AI landscape. xAI is reportedly seeking $10 billion in investment from major backers, reflecting confidence in its technological advances and market potential.
