Technology

Intel has acquired Cnvrg.io, a platform to manage, build and automate machine learning

Intel continues to snap up startups to build out its machine learning and AI operations. In the latest move, TechCrunch has learned that the chip giant has acquired Cnvrg.io, an Israeli company that has built and operates a platform for data scientists to build and run machine learning models, which can be used to train and track multiple models, run comparisons on them, build recommendations and more.

Intel confirmed the acquisition to us with a short note. “We can confirm that we have acquired Cnvrg,” a spokesperson said. “Cnvrg will be an independent Intel company and will continue to serve its existing and future customers.” Those customers include Lightricks, ST Unitas and Playtika.

Intel is not disclosing any financial terms of the deal, nor who from the startup will join Intel. Cnvrg, co-founded by Yochay Ettun (CEO) and Leah Forkosh Kolben, had raised $8 million from investors that include Hanaco Venture Capital and Jerusalem Venture Partners, and PitchBook estimates that it was valued at around $17 million in its last round.

It was only a week ago that Intel made another acquisition to boost its AI business, also in the area of machine learning modeling: it picked up SigOpt, which had developed an optimization platform for running machine learning modeling and simulations.

While SigOpt is based out of the Bay Area, Cnvrg is in Israel, and joins an extensive footprint that Intel has built in the country, specifically in the area of artificial intelligence research and development, anchored around its Mobileye autonomous vehicle business (which it acquired for more than $15 billion in 2017) and its purchase of AI chipmaker Habana (which it picked up for $2 billion at the end of 2019).

Cnvrg.io’s platform works across on-premises, cloud and hybrid environments, and it comes in paid and free tiers (we covered the launch of the free service, branded Core, last year). It competes with the likes of Databricks, SageMaker and Dataiku, as well as smaller operations like H2O.ai that are built on open-source frameworks. Cnvrg’s premise is that it provides a user-friendly platform for data scientists so they can concentrate on devising algorithms and measuring how they perform, not on building or maintaining the platform they run on.

While Intel isn’t saying much about the deal, it seems that some of the same logic behind last week’s SigOpt acquisition applies here as well: Intel has been refocusing its business around next-generation chips to better compete with the likes of Nvidia and smaller players like Graphcore. So it makes sense to also provide and invest in AI tools for customers, specifically services to help manage the compute loads they will be running on those chips.

It’s notable that in our article about the Core free tier last year, Frederic pointed out that those using the platform in the cloud can do so with Nvidia-optimized containers that run on a Kubernetes cluster. It’s not clear whether that will continue to be the case, whether containers will be optimized instead for Intel architecture, or both. Cnvrg’s other partners include Red Hat and NetApp.

Intel’s focus on the next generation of computing aims to offset declines in its legacy operations. In the last quarter, Intel reported a 3% decline in revenues, driven by a drop in its data center business. It said that it expects the AI silicon market to be bigger than $25 billion by 2024, with AI silicon in the data center accounting for more than $10 billion of that.

In 2019, Intel reported some $3.8 billion in AI-driven revenue, and it hopes that tools like SigOpt’s will help drive more activity in that business, dovetailing with the push for more AI applications across a wider range of businesses.

Microsoft Expands Copilot Voice and Think Deeper

Microsoft is taking a major step forward by offering unlimited access to Copilot Voice and Think Deeper, marking two years since the AI-powered Copilot was first integrated into Bing search. This update comes shortly after the tech giant revamped its Copilot Pro subscription and bundled advanced AI features into Microsoft 365.

What’s Changing?

Microsoft remains committed to its $20 per month Copilot Pro plan, ensuring that subscribers continue to enjoy premium benefits. According to the company, Copilot Pro users will receive:

  • Preferred access to the latest AI models during peak hours.
  • Early access to experimental AI features, with more updates expected soon.
  • Extended use of Copilot within popular Microsoft 365 apps like Word, Excel, and PowerPoint.

The Impact on Users

This move signals Microsoft’s dedication to enhancing AI-driven productivity tools. By expanding access to Copilot’s powerful features, users can expect improved efficiency, smarter assistance, and seamless integration across Microsoft’s ecosystem.

As AI technology continues to evolve, Microsoft is positioning itself at the forefront of innovation, ensuring both casual users and professionals can leverage the best AI tools available.

Stay tuned for further updates as Microsoft rolls out more enhancements to its AI offerings.

Google Launches Free AI Coding Tool for Individual Developers

Google has introduced a free version of Gemini Code Assist, its AI-powered coding assistant, for solo developers worldwide. The tool, previously available only to enterprise users, is now in public preview, making advanced AI-assisted coding accessible to students, freelancers, hobbyists, and startups.

More Features, Fewer Limits

Unlike competing tools such as GitHub Copilot, which limits free users to 2,000 code completions per month, Google is offering up to 180,000 code completions per month, a significantly higher cap designed to accommodate even the most active developers.
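Taken at face value, the gap between the two free tiers is easy to quantify (using only the figures reported above):

```python
# Free-tier monthly completion caps, as reported in the article.
copilot_free_cap = 2_000      # GitHub Copilot free tier
gemini_free_cap = 180_000     # Gemini Code Assist free tier

multiple = gemini_free_cap // copilot_free_cap
print(f"Gemini's free cap is {multiple}x Copilot's")  # 90x
```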

“Now anyone can easily learn, generate code snippets, debug, and modify applications without switching between multiple windows,” said Ryan J. Salva, Google’s senior director of product management.

AI-Powered Coding Assistance

Gemini Code Assist for individuals is powered by Google’s Gemini 2.0 AI model and offers:
  • Auto-completion of code while typing
  • Generation of entire code blocks based on prompts
  • Debugging assistance via an interactive chatbot

The tool integrates with popular developer environments like Visual Studio Code, GitHub, and JetBrains, supporting a wide range of programming languages. Developers can use natural language prompts, such as:
“Create an HTML form with fields for name, email, and message, plus a submit button.”
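For concreteness, the form that prompt describes might look like the sketch below. This is a hand-written illustration assembled in Python, not actual Gemini output; the field names and markup are assumptions:

```python
# Illustrative only: the kind of HTML the prompt above describes.
# This is a hand-written sketch, not output produced by Gemini Code Assist.

fields = [("name", "text"), ("email", "email"), ("message", "textarea")]

def build_form(fields):
    """Assemble a minimal HTML contact form from (name, type) pairs."""
    parts = ['<form method="post">']
    for name, ftype in fields:
        parts.append(f'  <label for="{name}">{name.capitalize()}</label>')
        if ftype == "textarea":
            parts.append(f'  <textarea id="{name}" name="{name}"></textarea>')
        else:
            parts.append(f'  <input type="{ftype}" id="{name}" name="{name}">')
    parts.append('  <button type="submit">Send</button>')
    parts.append("</form>")
    return "\n".join(parts)

print(build_form(fields))
```

In practice the assistant returns the markup directly in the editor; the point here is simply what a completed response to that prompt could contain.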

With support for 38 programming languages and a 128,000-token memory for processing complex prompts, Gemini Code Assist provides a robust AI-driven coding experience.

Enterprise Features Still Require a Subscription

While the free tier is generous, advanced features like productivity analytics, Google Cloud integrations, and custom AI tuning remain exclusive to paid Standard and Enterprise plans.

With this move, Google aims to compete more aggressively in the AI coding assistant market, offering developers a powerful and unrestricted alternative to existing tools.

Elon Musk Unveils Grok-3: A Game-Changing AI Chatbot to Rival ChatGPT

Elon Musk’s artificial intelligence company xAI has unveiled its latest chatbot, Grok-3, which aims to compete with leading AI models such as OpenAI’s ChatGPT and China’s DeepSeek. Grok-3 is now available to Premium+ subscribers on Musk’s social media platform X (formerly Twitter), as well as through xAI’s mobile app and the new SuperGrok subscription tier on Grok.com.

Advanced Capabilities and Performance

Grok-3 has ten times the computing power of its predecessor, Grok-2. Initial tests show that Grok-3 outperforms models from OpenAI, Google, and DeepSeek, particularly in areas such as math, science, and coding. The chatbot offers advanced reasoning capabilities that can decompose complex questions into manageable tasks. Users can interact with Grok-3 in two modes: “Think,” which performs step-by-step reasoning, and “Big Brain,” which is designed for more difficult tasks.

Strategic Investments and Infrastructure

To support the development of Grok-3, xAI has made major investments in its supercomputer cluster, Colossus, which is currently the largest globally. This infrastructure underscores the company’s commitment to advancing AI technology and maintaining a competitive edge in the industry.

New Offerings and Future Plans

Along with Grok-3, xAI has also introduced a logic-based chatbot called DeepSearch, designed to enhance research, brainstorming, and data analysis tasks. This tool aims to provide users with more insightful and relevant information. Looking to the future, xAI plans to release Grok-2 as an open-source model, encouraging community participation and further development. Additionally, upcoming improvements for Grok-3 include a synthesized voice feature, which aims to improve user interaction and accessibility.

Market Position and Competition

The launch of Grok-3 positions xAI as a major competitor in the AI chatbot market, directly challenging established models from OpenAI and emerging competitors such as DeepSeek. While Grok-3’s performance claims have yet to be independently verified, early indications suggest it could have a significant impact on the AI landscape. xAI is actively seeking $10 billion in investment from major companies, reflecting its strong belief in its technological advancements and market potential.
