The Three Biggest Advancements in AI for 2023

In many ways, 2023 was the year the public began to grasp artificial intelligence (AI) and its potential. It was the year governments started to take AI risk seriously, and the year chatbots went viral for the first time. These advancements weren’t so much new inventions as concepts and technologies coming of age after a protracted gestation period.

However, there were also a lot of fresh inventions. These are the top three from the previous year:

Multimodality

Although the term “multimodality” may sound technical, it simply refers to an AI system’s capacity to handle many different types of data: text, images, audio, and video.

This year marked the first time that robust multimodal AI models were made available to the general public. The first of these, OpenAI’s GPT-4, let users upload images alongside text inputs. With its ability to “see” images, GPT-4 opens up a plethora of possibilities; for instance, you could ask it to decide what to have for dinner based on a photograph of what’s in your refrigerator. In September, OpenAI also released the ability for users to talk to ChatGPT by voice as well as text.
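
To make the idea concrete, here is a minimal sketch of what a text-plus-image request to a vision-capable GPT-4 model looks like through OpenAI’s Python SDK. The model name and image URL below are placeholders, not a recommendation of any specific product tier.

```python
# A minimal sketch (not a full application) of a multimodal request:
# one user message that combines text and an image URL.
# "gpt-4-vision-preview" and the image URL are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # placeholder name for a vision-capable model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What could I cook with what's in this fridge?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/fridge.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```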

Announced in December, Google DeepMind’s most recent model, Gemini, can also process images and audio. In a Google launch video, the model identified a duck from a line drawing on a Post-it note. In the same video, after being shown a picture of pink and blue yarn and asked what it could make, Gemini came up with an image of a pink and blue plush octopus. (The promotional film gave the impression that Gemini was watching moving images and responding to voice commands in real time. However, Google stated in a blog post on its website that the video had been trimmed for brevity and that the model had been prompted with still images and text rather than live video and audio, even though it does have those capabilities.)

“I think the next landmark that people will think back to, and remember, is [AI systems] going much more fully multimodal,” Google DeepMind co-founder Shane Legg said on a podcast in October. “It’s early days in this transition, and when you start really digesting a lot of video and other things like that, these systems will start having a much more grounded understanding of the world.” In an interview with TIME in November, OpenAI CEO Sam Altman said multimodality in the company’s new models would be one of the key things to watch out for next year.

Multimodality offers benefits beyond making models more practical. The models can also be trained on a wealth of new data sets, including audio, video, and images, which together contain more information about the world than text alone. Many of the world’s leading AI companies believe this new training data will make their models more powerful or capable. Many AI scientists hope it is a step toward “artificial general intelligence,” the kind of system that could equal human intellect, produce economically valuable labor, and lead to new scientific discoveries.

Constitutional AI

How to align AI with human values is one of the most important unsolved problems in the field. If AI systems come to surpass humans in intelligence and power, they could inflict immense damage on our species, with some even predicting extinction, unless they are somehow constrained by rules that prioritize human well-being.

The method OpenAI employed to align ChatGPT (and steer it clear of the racist and sexist tendencies of earlier models) was successful, but it required a significant amount of human labor. In this approach, called “reinforcement learning from human feedback,” or RLHF, human raters evaluated the AI’s responses and awarded the computational equivalent of a dog treat whenever a response was helpful, safe, and in keeping with OpenAI’s content guidelines. By rewarding the AI for good behavior and penalizing it for bad behavior, OpenAI created a reasonably safe and efficient chatbot.
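
A toy sketch of that loop, with every name invented for illustration and no claim to match OpenAI’s actual pipeline: human ratings train a stand-in reward model, and the reward model’s scores become the signal a reinforcement-learning step would use to update the chatbot.

```python
# Toy illustration of the RLHF idea described above, not OpenAI's actual pipeline.
# All class and function names here are invented for the sketch.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class RatedResponse:
    prompt: str
    response: str
    human_score: float  # e.g. 1.0 = helpful, safe, within guidelines; 0.0 = violates them

def fit_reward_model(ratings: List[RatedResponse]) -> Callable[[str, str], float]:
    """Stand-in for training a reward model on human labels.
    Here it just memorizes the rated pairs and returns a neutral score otherwise."""
    lookup = {(r.prompt, r.response): r.human_score for r in ratings}
    return lambda prompt, response: lookup.get((prompt, response), 0.5)

def rlhf_rewards(sample_response: Callable[[str], str],
                 reward_model: Callable[[str, str], float],
                 prompts: List[str]) -> List[float]:
    """One conceptual step: sample a response per prompt and score it.
    A real system would feed these rewards into an RL update (e.g. PPO) on the policy."""
    return [reward_model(p, sample_response(p)) for p in prompts]
```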

However, RLHF’s heavy reliance on human labor raises serious questions about its scalability. It is expensive. It is susceptible to the biases or mistakes of individual raters. The longer the list of rules, the greater the likelihood of failure. And it seems unlikely to work for AI systems that become so capable that their actions are incomprehensible to humans.

Constitutional AI, first introduced in a December 2022 paper by researchers at the prominent AI lab Anthropic, aims to solve these problems by taking advantage of the fact that AI systems can now understand natural language. The concept is straightforward. You start by writing a “constitution” that lays out the principles you want your AI to uphold. The AI is then trained to grade responses according to how closely they adhere to the constitution, and the model is rewarded for producing responses that score higher. Reinforcement learning from human feedback is replaced with reinforcement learning from AI feedback. “These methods make it possible to control AI behavior more precisely and with far fewer human labels,” the Anthropic researchers wrote. Constitutional AI was used to align Claude, Anthropic’s 2023 answer to ChatGPT. (Among the investors in Anthropic is Salesforce, whose CEO, Marc Benioff, is co-chair of TIME.)
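
Here is a minimal, hypothetical sketch of that substitution: an AI grader (represented by a placeholder grade_with_model callable) scores candidate answers against a written constitution, producing the preference pairs that would otherwise come from human raters.

```python
# Hypothetical sketch of AI-generated preference pairs (RLAIF), not Anthropic's code.
# `grade_with_model` is a placeholder for a call to a grader model.
from typing import Callable, List, Tuple

CONSTITUTION = [
    "Choose the response that is most helpful and honest.",
    "Choose the response least likely to encourage harmful or discriminatory behavior.",
]

def constitutional_preference(
    grade_with_model: Callable[[str, str, List[str]], float],  # assumed grader: higher = more constitutional
    prompt: str,
    candidates: List[str],
) -> Tuple[str, str]:
    """Ask the grader which candidate best follows the constitution and return
    (preferred, rejected). These AI-labeled pairs replace human labels when
    training the reward model."""
    ranked = sorted(candidates, key=lambda c: grade_with_model(prompt, c, CONSTITUTION))
    return ranked[-1], ranked[0]
```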

“With constitutional AI, you’re explicitly writing down the normative premises with which your model should approach the world,” Jack Clark, Anthropic’s head of policy, told TIME in August. “Then the model is training on that.” There are still problems, like the difficulty of making sure the AI has understood both the letter and the spirit of the rules (“you’re stacking your chips on a big, opaque AI model,” Clark says), but the technique is a promising addition to a field where new alignment strategies are few and far between.

Naturally, constitutional AI does not settle the question of whose values AI ought to be aligned with. However, Anthropic is attempting to open that decision up to more people. In October, the lab ran an experiment in which it asked a representative sample of one thousand Americans to help select rules for a chatbot. The results showed that, despite some polarization, it was still possible to draft a workable constitution from statements the group agreed on. Experiments like this could pave the way for a future in which the general public has far more influence over how AI is governed than it does today, when the rules are set by a small group of Silicon Valley executives.

Text to Video

The rapidly increasing popularity of text-to-video tools is one obvious result of the billions of dollars that have been invested in AI this year. Text-to-image technologies had just begun to take shape a year ago; today, a number of businesses are able to convert sentences into moving pictures with ever-increasing precision.

One of those businesses is Runway, an AI video startup with offices in Brooklyn that aims to enable anyone to make movies. With its most recent model, Gen-2, users can perform video-to-video editing—that is, altering an already-existing video’s style in response to a text prompt, such as transforming a picture of cereal boxes on a tabletop into a nighttime cityscape.

“Our mission is to build tools for human creativity,” Runway’s CEO Cristobal Valenzuela told TIME in May. He acknowledges that this will have an impact on jobs in the creative industries, where AI tools are quickly making some forms of technical expertise obsolete, but he believes the world on the other side is worth the upheaval. “Our vision is a world where human creativity gets amplified and enhanced, and it’s less about the craft, and the budget, and the technical specifications and knowledge that you have, and more about your ideas.” (Investors in Runway include Salesforce, where TIME co-chair and owner Marc Benioff is CEO.)

Pika AI, another startup in the text-to-video space, claims to be producing millions of new videos every week. The startup, headed by two Stanford dropouts, debuted in April but has already raised funding at a valuation reported by Forbes to be between $200 million and $300 million. Free tools like Pika, aimed more at the average user than at professional filmmakers, are attempting to change the face of user-generated content. Text-to-video tools are computationally expensive to run, though, so don’t be shocked if they start charging for access once the venture capital runs out. That could happen as soon as 2024.

Apple has revealed a revamped Mac Mini with an M4 chip

Apple recently unveiled a smaller but no less powerful Mac Mini as part of the company’s week of Mac-focused announcements. It now features Apple’s latest M4 silicon, supports ray tracing for the first time, and ships with 16GB of RAM, which seems to be the new standard in the age of Apple Intelligence. The machine still starts at $599 with the standard M4 chip, while the more powerful M4 Pro model starts at $1,399. The Mac Mini is available for preorder right now and will be in stores on November 8th, just like the updated iMac that was revealed yesterday.

The new design is the first thing you’ll notice. The Mini, which was already a comparatively small desktop computer, has been significantly reduced in size; it now measures just five inches in both length and width. Apple credits the M4’s efficiency and “an innovative thermal architecture, which guides air to different levels of the system, while all venting is done through the foot” for keeping things cool.

Nevertheless, Apple has packed this device with plenty of input/output, including a 3.5mm audio jack and two USB-C ports on the front. Three USB-C/Thunderbolt ports, Ethernet, and HDMI are located around the back. USB-A is gone, but it’s worth remembering that the base M2 Mini featured only two USB-A connectors and two Thunderbolt 4 ports; the M4 model gives you five USB-C ports in total, so you gain an additional Thunderbolt port but lose native USB-A.

Depending on the M4 processor you select, those Thunderbolt connectors will have varying speeds. While the M4 Pro offers the most recent Thunderbolt 5 throughput, the standard M4 processor comes with Thunderbolt 4.

With its 14 CPU cores and 20 GPU cores, the M4 Pro Mac Mini also offers better overall performance. The standard M4 can be configured with up to 32GB of RAM, while the M4 Pro goes up to 64GB, and storage tops out at an astounding 8TB. So even though the Mini is rather small, you can make it genuinely powerful if you have the money. For those who want it, 10 gigabit Ethernet remains an optional upgrade.

It’s a big week for Apple. On Monday, the company released the M4 iMac and its first Apple Intelligence software features for iOS, iPadOS, and macOS. (More AI functionality, such as ChatGPT integration and image generation, will arrive in December.) Updated MacBook Pros might make their appearance tomorrow as Apple rounds out its new hardware, and the company will undoubtedly highlight its newest fleet of Macs when it reports quarterly earnings on Thursday.

Apple Intelligence may face competition from a new Qualcomm processor

The new chip from Qualcomm (QCOM) may increase competition between Apple’s (AAPL) iOS and Android.

During its Snapdragon Summit on Monday, the company unveiled the Snapdragon 8 Elite Mobile Platform, which includes a new, second-generation Oryon CPU that it claims is the “fastest mobile CPU in the world.” Qualcomm says the upcoming Snapdragon platform can support multimodal generative artificial intelligence features.

Qualcomm, which primarily designs chips for mobile devices running Android, claims that the new Oryon CPU is 44% more power efficient and 45% faster. As the iPhone maker rolls out its Apple Intelligence capabilities, the new Snapdragon 8 platform may allow smartphone makers to compete with Apple on the AI front. Apple also has an agreement with OpenAI, the company behind ChatGPT, to incorporate ChatGPT, powered by GPT-4o, into the upcoming iOS 18, iPadOS 18, and macOS Sequoia.

According to a September Wall Street Journal (NWSA) story, Qualcomm is reportedly interested in purchasing Intel (INTC) in a deal that could be valued at up to $90 billion. According to Bloomberg, Apollo Global Management (APO), an alternative asset manager, had also proposed an equity-like investment in Intel worth up to $5 billion.

According to reports citing anonymous sources familiar with the matter, Qualcomm may wait until after next month’s U.S. presidential election to decide whether to pursue an acquisition of Intel. The people who spoke with Bloomberg said Qualcomm is holding off because the election’s outcome could affect antitrust scrutiny and tensions with China.

According to analysts at Bank of America Global Research (BAC), acquiring Intel could allow Qualcomm to expand, take the lead in the market for central processing units, or CPUs, for servers, PCs, and mobile devices, and gain access to Intel’s extensive chip fabrication facilities. They added that combining Qualcomm’s $33 billion in chip revenue with Intel’s $52 billion would make Qualcomm the world’s largest semiconductor company.

The analysts argued, however, that those advantages would be outweighed by the financial and regulatory obstacles a deal would face. They are skeptical of a prospective takeover and believe Intel’s competitors may benefit from the uncertainty surrounding the agreement.

iPhone 16 Pro Users Report Screen Responsiveness Issues, Hope for Software Fix

Many iPhone 16 Pro and iPhone 16 Pro Max users are experiencing significant touchscreen responsiveness problems. Complaints about lagging screens and unresponsive taps and swipes are particularly frustrating for customers who have invested $999 and up in these devices.

The good news is that initial assessments suggest the issue may be software-related rather than a hardware defect. This means that Apple likely won’t need to issue recalls or replacement units; instead, a simple software update could resolve the problem.

The root of the issue might lie in the iOS touch rejection algorithm, which is designed to filter out accidental touches. If this filtering is too aggressive, it can discard intentional inputs, especially when users’ fingers are near the new Camera Control on the right side of the device. Some users have reported that their intended touches are being dismissed, particularly when their fingers are close to this area.

Additionally, the new, thinner bezels on the iPhone 16 Pro compared to the iPhone 15 Pro could contribute to the problem. With less protection against accidental touches, the device may misinterpret valid taps as mistakes, leading to ignored inputs.
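
As a purely illustrative sketch (Apple has not published its touch-rejection logic, and every threshold below is an invented assumption), here is how an overly wide edge “dead zone” in a rejection filter could end up discarding deliberate taps near a thin bezel or the Camera Control region:

```python
# Purely illustrative: Apple has not published its touch-rejection logic,
# and every threshold below is an invented assumption.
from dataclasses import dataclass

@dataclass
class Touch:
    x_mm: float       # distance from the left edge of the display, in millimetres
    duration_ms: int  # how long the finger stayed down

SCREEN_WIDTH_MM = 70.0   # rough, hypothetical display width
EDGE_DEAD_ZONE_MM = 4.0  # rejection band along each side edge
MIN_INTENTIONAL_MS = 40  # very brief grazes are treated as accidental

def is_rejected(touch: Touch) -> bool:
    """Reject touches that land inside the edge band or are too brief."""
    near_edge = (touch.x_mm < EDGE_DEAD_ZONE_MM
                 or touch.x_mm > SCREEN_WIDTH_MM - EDGE_DEAD_ZONE_MM)
    return near_edge or touch.duration_ms < MIN_INTENTIONAL_MS

# A deliberate 120 ms tap near the right edge (close to the Camera Control)
# gets thrown away when the dead zone is tuned too wide:
print(is_rejected(Touch(x_mm=68.0, duration_ms=120)))  # True -> the input is ignored
```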

This isn’t the first time Apple has faced challenges with new iPhone models. For instance, the iPhone 4 experienced “Antennagate,” where signal loss occurred depending on how the device was held, prompting Steve Jobs to famously suggest users hold their phones differently. Apple eventually provided free rubber bumpers to mitigate the issue.

To alleviate the touchscreen problem, using a case might help, since it keeps fingers away from the very edges of the display and reduces the chance of accidental touches triggering the rejection algorithm. The issue appears on devices running iOS 18 and the iOS 18.1 beta and does not occur when the phone is locked. Users may notice difficulties when swiping through home screens and apps.

Many are hopeful that an upcoming iOS 18 update will address these issues, restoring responsiveness to the iPhone 16 Pro and iPhone 16 Pro Max displays.
