Technology

Apple Launches Eight Small AI Language Models for On-Device Use

Within the field of artificial intelligence, “small language models” have gained significant traction lately because they can run locally on a device instead of requiring data center-grade computers in the cloud. On Wednesday, Apple unveiled OpenELM, a collection of tiny AI language models that are available as open source and small enough to run on a smartphone. For now, they’re primarily proof-of-concept research models, but they might serve as the foundation for Apple’s on-device AI products in the future.

Apple’s new AI models, collectively named OpenELM for “Open-source Efficient Language Models,” are currently available on Hugging Face under an Apple Sample Code License. Since the license carries some restrictions, it may not fit the commonly accepted definition of “open source,” but the source code for OpenELM is available.
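For readers who want to try the checkpoints themselves, here is a minimal sketch of loading the smallest instruction-tuned variant with the Hugging Face transformers library. The model ID and the Llama 2 tokenizer pairing are assumptions drawn from Apple’s published repositories; check the model card for the exact requirements and license terms before relying on it.

```python
# Minimal sketch: loading an OpenELM checkpoint from Hugging Face.
# The model/tokenizer IDs below are assumptions based on Apple's published
# repos; consult the model card for the exact tokenizer pairing and license.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-270M-Instruct"    # smallest instruction-tuned variant
tokenizer_id = "meta-llama/Llama-2-7b-hf"   # tokenizer the model card points to (gated repo)

tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Explain what a small language model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```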

OpenELM pursues a similar goal to Microsoft’s Phi-3 models, which we discussed on Tuesday: small, locally executable AI models that can comprehend and process language to a reasonable degree. Apple’s OpenELM models range in size from 270 million to 3 billion parameters across eight variants, while Phi-3-mini has 3.8 billion parameters.

By contrast, OpenAI’s GPT-3 from 2020 shipped with 175 billion parameters, and the largest model in Meta’s Llama 3 family to date has 70 billion parameters (a 400 billion-parameter version is on the way). Parameter count is a useful indicator of an AI model’s complexity and capability, but recent work has concentrated on making smaller AI language models just as capable as larger ones were a few years ago.

The eight OpenELM models come in two flavors: four that are “pretrained,” essentially a raw, next-token-prediction version of the model, and four that are “instruction-tuned,” optimized for following instructions and therefore better suited to building chatbots and AI assistants.

The maximum context window in OpenELM is 2048 tokens. The models were trained on publicly available datasets, including RefinedWeb, a subset of RedPajama, a deduplicated version of PILE, and a subset of Dolma v1.6, which together total roughly 1.8 trillion tokens of data, according to Apple. AI language models process data as tokens, the small chunks into which text is broken before processing.
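To make the token arithmetic concrete, the sketch below counts tokens in a block of text and trims it to a 2,048-token window. It uses GPT-2’s freely available tokenizer purely as a stand-in, since OpenELM pairs with its own tokenizer and real counts will differ.

```python
# Illustrative only: counting tokens and trimming to a 2,048-token context window.
# GPT-2's tokenizer is a stand-in here; OpenELM uses a different tokenizer,
# so actual token counts will not match.
from transformers import AutoTokenizer

CONTEXT_WINDOW = 2048  # OpenELM's maximum context length, per Apple

tokenizer = AutoTokenizer.from_pretrained("gpt2")
text = "Small language models run locally on phones. " * 500

token_ids = tokenizer.encode(text)
print(f"{len(token_ids)} tokens before trimming")

# Keep only the most recent tokens that fit in the model's context window.
trimmed = token_ids[-CONTEXT_WINDOW:]
print(f"{len(trimmed)} tokens after trimming to the context window")
```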

According to Apple, part of its OpenELM approach is a “layer-wise scaling strategy” that distributes parameters among layers more effectively, supposedly saving computational resources and improving the model’s performance even with fewer training tokens. According to Apple’s published white paper, this approach has allowed OpenELM to achieve a 2.36 percent accuracy gain over Allen AI’s OLMo 1B (another small language model) while requiring half as many pre-training tokens.
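Apple’s white paper describes the actual recipe; the toy sketch below only illustrates the general idea behind layer-wise scaling, interpolating attention-head counts and feed-forward multipliers from narrower early layers to wider later ones. The specific numbers are invented for illustration and are not Apple’s hyperparameters.

```python
# Rough illustration of layer-wise scaling: rather than giving every transformer
# layer the same width, allocate fewer parameters to early layers and more to
# later ones. The interpolation and values below are a toy example, not Apple's recipe.

def layerwise_widths(num_layers: int, min_heads: int, max_heads: int,
                     min_ffn_mult: float, max_ffn_mult: float):
    """Linearly interpolate per-layer attention heads and FFN multipliers."""
    widths = []
    for i in range(num_layers):
        t = i / max(num_layers - 1, 1)  # 0.0 at the first layer, 1.0 at the last
        heads = round(min_heads + t * (max_heads - min_heads))
        ffn_mult = min_ffn_mult + t * (max_ffn_mult - min_ffn_mult)
        widths.append({"layer": i, "heads": heads, "ffn_mult": round(ffn_mult, 2)})
    return widths

for cfg in layerwise_widths(num_layers=16, min_heads=4, max_heads=16,
                            min_ffn_mult=1.0, max_ffn_mult=4.0):
    print(cfg)
```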

In addition, Apple made the code for CoreNet, the library it used to train OpenELM, publicly available. Notably, this code includes reproducible training recipes that make it possible to duplicate the weights, or neural network files, something major tech companies have rarely done until now. Transparency, Apple says, is a key objective for the company: “The reproducibility and transparency of large language models are crucial for advancing open research, ensuring the trustworthiness of results, and enabling investigations into data and model biases, as well as potential risks.”

By releasing the source code, model weights, and training materials, Apple says it aims to “empower and enrich the open research community.” However, it also cautions that since the models were trained on publicly sourced datasets, “there exists the possibility of these models producing outputs that are biased, or objectionable in response to user prompts.”

Apple has not yet brought this new wave of AI language model capabilities to its consumer devices, though the company may enlist Google or OpenAI to handle more complex, off-device AI processing and give Siri a much-needed boost. The upcoming iOS 18 update, expected to be revealed at WWDC in June, is anticipated to include new AI features that rely on on-device processing to protect user privacy.

Technology

Apple has revealed a revamped Mac Mini with an M4 chip

Apple unveiled a smaller but no less powerful Mac Mini as part of its week of Mac-focused announcements. It now runs Apple’s latest M4 silicon, supports ray tracing for the first time, and ships with 16GB of RAM as standard, which appears to be the new baseline in the age of Apple Intelligence. The machine still starts at $599 with the standard M4 chip, while the more potent M4 Pro model starts at $1,399. The Mac Mini is available for preorder right now and will be in stores on November 8th, just like the updated iMac revealed yesterday.

The new design is the first thing you’ll notice. The Mini was already a comparatively small desktop computer, but it has been reduced in size significantly: it now measures just five inches in both length and width. Apple credits the M4’s efficiency and “an innovative thermal architecture, which guides air to different levels of the system, while all venting is done through the foot” for keeping things cool.

Nevertheless, Apple has packed this device with plenty of input/output: a 3.5mm audio jack and two USB-C ports on the front, plus three USB-C/Thunderbolt ports, Ethernet, and HDMI around the back. Dropping USB-A may sting, but it’s worth remembering that the base M2 Mini only offered two USB-A connectors and two Thunderbolt 4 ports. The M4 model gives you five USB-C ports in total, trading native USB-A for an additional Thunderbolt port.

Those Thunderbolt ports run at different speeds depending on which M4 chip you select: the standard M4 comes with Thunderbolt 4, while the M4 Pro offers the newer Thunderbolt 5 throughput.

The M4 Pro Mac Mini also offers better overall performance, with 14 CPU cores and 20 GPU cores. The standard M4 can be configured with up to 32GB of RAM, the M4 Pro with up to 64GB, and storage tops out at an astounding 8TB. So even though the Mini is physically small, you can make it genuinely powerful if you have the money. For those who want it, 10-gigabit Ethernet remains an optional upgrade.

Apple is in the middle of a big week. On Monday, the company introduced the M4 iMac and released its first Apple Intelligence software features for iOS, iPadOS, and macOS. (More AI functionality, such as ChatGPT integration and image generation, arrives in December.) The updated MacBook Pros may make their appearance tomorrow as Apple rounds out its new hardware. The company will undoubtedly highlight its newest fleet of Macs when it reports quarterly earnings on Thursday.

Technology

Apple Intelligence may face competition from a new Qualcomm processor

The new chip from Qualcomm (QCOM) may increase competition between Apple’s (AAPL) iOS and Android.

During its Snapdragon Summit on Monday, the firm unveiled the Snapdragon 8 Elite Mobile Platform, which includes a new, second-generation Oryon CPU that it claims is the “fastest mobile CPU in the world.” Qualcomm says the upcoming Snapdragon platform can support multimodal generative AI features.

Qualcomm, which primarily creates chips for mobile devices running Android, claims that the new Oryon CPU is 44% more power efficient and 45% faster. As the iPhone maker rolls out its Apple Intelligence capabilities, the new Snapdragon 8 platform may allow smartphone firms to compete with Apple on the AI frontier. Apple also has an agreement with OpenAI, the company behind ChatGPT, to incorporate ChatGPT, powered by GPT-4o, into the upcoming iOS 18, iPadOS 18, and macOS Sequoia.

According to a September Wall Street Journal (NWSA) story, Qualcomm is reportedly interested in purchasing Intel (INTC) in a deal that could be valued at up to $90 billion. Bloomberg also reported that Apollo Global Management (APO), an alternative asset manager, had proposed an equity-like investment in Intel worth up to $5 billion.

Reports citing anonymous sources familiar with the situation say Qualcomm may postpone its decision on acquiring Intel until after the U.S. presidential election next month. According to the people who spoke with Bloomberg, Qualcomm is holding off because of how the election’s outcome could affect antitrust scrutiny and tensions with China.

According to a report from analysts at Bank of America Global Research (BAC), acquiring Intel could let Qualcomm expand, take the lead in the market for central processing units, or CPUs, across servers, PCs, and mobile devices, and gain access to Intel’s extensive chip fabrication facilities. They went on to say that Qualcomm would become the world’s largest semiconductor company if its $33 billion in chip revenue were combined with Intel’s $52 billion.

The analysts cautioned, however, that those advantages would be outweighed by the financial and regulatory obstacles such a transaction would pose. They are skeptical of a takeover and believe Intel’s competitors may benefit from the uncertainty surrounding the deal.

Technology

iPhone 16 Pro Users Report Screen Responsiveness Issues, Hope for Software Fix

Many iPhone 16 Pro and iPhone 16 Pro Max users are experiencing significant touchscreen responsiveness problems. Complaints about lagging screens and unresponsive taps and swipes are particularly frustrating for customers who have invested $999 and up in these devices.

The good news is that initial assessments suggest the issue may be software-related rather than a hardware defect. This means that Apple likely won’t need to issue recalls or replacement units; instead, a simple software update could resolve the problem.

The root of the issue might lie in iOS’s touch rejection algorithm, which is designed to prevent accidental touches from registering. If that filtering is overly aggressive, it could dismiss intentional inputs; users report this happening especially when their fingers are near the new Camera Control on the right side of the display.
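Apple’s actual touch-rejection logic is not public, but a purely hypothetical sketch shows how an overly wide edge-rejection margin could swallow intentional taps near that side of the screen. The screen width and margin values below are invented for illustration only.

```python
# Hypothetical sketch of an edge "touch rejection" rule, written only to show how
# an over-aggressive threshold could discard intentional taps near the Camera
# Control edge. This is not Apple's actual algorithm; all values are illustrative.

SCREEN_WIDTH_PT = 402      # illustrative logical width for a 16 Pro-class display
REJECTION_MARGIN_PT = 30   # touches this close to the right edge get rejected

def is_rejected(x: float, margin: float = REJECTION_MARGIN_PT) -> bool:
    """Reject any touch landing within `margin` points of the right edge."""
    return x >= SCREEN_WIDTH_PT - margin

# An intentional tap on a control that happens to sit near the edge:
tap_x = 380
print("rejected" if is_rejected(tap_x) else "accepted")             # rejected: too close to the edge
print("rejected" if is_rejected(tap_x, margin=10) else "accepted")  # accepted with a tighter margin
```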

Additionally, the new, thinner bezels on the iPhone 16 Pro compared to the iPhone 15 Pro could contribute to the problem. With less protection against accidental touches, the device may misinterpret valid taps as mistakes, leading to ignored inputs.

This isn’t the first time Apple has faced challenges with new iPhone models. For instance, the iPhone 4 experienced “Antennagate,” where signal loss occurred depending on how the device was held, prompting Steve Jobs to famously suggest users hold their phones differently. Apple eventually provided free rubber bumpers to mitigate the issue.

To alleviate the touchscreen problem, using a case might help by covering parts of the display and reducing the chances of accidental touches triggering the rejection algorithm. The issue appears on devices running iOS 18 and the iOS 18.1 beta and does not occur when the phone is locked. Users may notice difficulties when swiping through home screens and apps.

Many are hopeful that an upcoming iOS 18 update will address these issues, restoring responsiveness to the iPhone 16 Pro and iPhone 16 Pro Max displays.
