Technology

What The Strict AI Rule in The EU Means for ChatGPT and Research

The European Union is about to enact the first comprehensive set of regulations governing artificial intelligence (AI). The EU AI Act places its strictest rules on the riskiest AI models, with the aim of ensuring that AI systems are safe, respect fundamental rights, and comply with EU values.

The act “is enormously consequential, in terms of shaping how we think about AI regulation and setting a precedent,” says Rishi Bommasani, who studies the societal impact of artificial intelligence at Stanford University in California.

The law arrives as AI advances rapidly. New versions of generative AI models, such as GPT, which powers ChatGPT and was developed by OpenAI in San Francisco, California, are expected to be released this year. Meanwhile, existing systems are already being exploited for fraud and the spread of disinformation. China already governs commercial uses of AI through a patchwork of rules, and US regulation is in the works: last October, President Joe Biden signed the country's first AI executive order, requiring federal agencies to take steps to manage the risks of AI.

The legislation, which the governments of the member states approved on February 2, must now be formally endorsed by the European Parliament, one of the EU's three legislative branches; that vote is expected in April. If the text remains unchanged, as policy watchers anticipate, the law will take effect in 2026.

Some scientists applaud the act for its potential to promote open science, while others worry that it could stifle innovation. Nature examines how the law will affect research.

How Is the EU Going About This?

The European Union (EU) has opted to regulate AI models according to their potential risk: stricter rules apply to riskier applications, and separate regulations cover general-purpose AI models, such as GPT, that have a broad range of unforeseen uses.

The act bans AI systems that pose “unacceptable risk,” such as those that infer sensitive traits from biometric data. High-risk applications, such as using AI in hiring and law enforcement, must meet certain obligations; for example, developers must show that their models are safe, transparent, and explainable to users, and that they respect privacy regulations and do not discriminate. Developers of lower-risk AI tools will still need to tell users when they are interacting with AI-generated content. The law applies to models operating in the EU, and any firm that violates the rules risks a fine of up to 7% of its annual global turnover.

“I think it’s a good approach,” says Dirk Hovy, a computer scientist at Bocconi University in Milan, Italy. AI has quickly become powerful and ubiquitous, he says. “Putting a framework up to guide its use and development makes absolute sense.”

Some believe the laws don't go far enough, leaving “gaping” exemptions for national-security and military purposes, as well as openings for the use of AI in law enforcement and immigration, says Kilian Vieth-Ditlmann, a political scientist at AlgorithmWatch, a Berlin-based non-profit organization that monitors how automation affects society.

To What Extent Will Researchers Be Impacted?

Very little, in theory. Last year, the European Parliament amended the draft legislation to add a clause exempting AI models developed purely for research, development, or prototyping. The EU has worked hard to make sure the act does not harm research, says Joanna Bryson, who studies AI and regulation at the Hertie School in Berlin. “They truly don't want to stop innovation, so I'd be surprised if there are any issues.”

The act is still likely to have an effect, says Hovy, because it will push researchers to think about transparency, how they report on their models, and potential biases. “It will filter down and foster good practice,” he says.

Robert Kaczmarczyk, a physician at the Technical University of Munich, Germany, and co-founder of LAION (Large-scale Artificial Intelligence Open Network), a non-profit organization dedicated to democratizing machine learning, worries that the law could hinder the small companies that drive research, which might need to set up internal processes to comply with the regulations. “It is very difficult for a small business to adapt,” he says.

What Does It Mean for Powerful Models Such as GPT?

After a contentious debate, legislators chose to place powerful general-purpose models, including generative models that produce code, images, and video, in a two-tier category of their own and to regulate them.

The first tier covers all general-purpose models except those used only for research or released under an open-source license. These will have to meet transparency requirements, including disclosing their training processes and energy use, and will have to show that they respect copyright law.

The second, much stricter tier will cover general-purpose models deemed to have “high-impact capabilities” that pose a higher “systemic risk.” These models will face “some pretty significant obligations,” Bommasani says, including rigorous safety testing and cybersecurity checks. Developers will be required to disclose details of their architecture and data sources.

For the EU, “big” effectively means “dangerous”: a model counts as high impact if it uses more than 10²⁵ FLOPs (floating-point operations) in training. Training a model with that much computing power costs between US$50 million and $100 million, so it is a high bar, says Bommasani. The threshold should capture models such as GPT-4, OpenAI's current model, and could include future versions of LLaMA, Meta's open-source rival. Models used only for research are exempt from the regulation, whereas open-source models in this tier are not.
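As a rough illustration of the threshold arithmetic, the sketch below uses the common FLOPs ≈ 6 × parameters × training tokens approximation from the scaling-law literature. This is an assumption for illustration only: the act specifies the 10²⁵ FLOP cut-off, not how to estimate a model's compute, and the model sizes below are made up.

```python
# Rough sketch: does a hypothetical training run cross the EU AI Act's
# 10^25 FLOP threshold for "systemic risk" models?
# Uses the approximation FLOPs ~= 6 * n_params * n_tokens, an estimate
# from the scaling-law literature, not part of the act itself.

EU_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer model."""
    return 6.0 * n_params * n_tokens

def exceeds_eu_threshold(n_params: float, n_tokens: float) -> bool:
    """True if the estimated compute crosses the 10^25 FLOP cut-off."""
    return estimated_training_flops(n_params, n_tokens) > EU_THRESHOLD_FLOPS

# Hypothetical examples (made-up sizes, not official figures):
print(exceeds_eu_threshold(70e9, 2e12))    # 70B params, 2T tokens  -> ~8.4e23 FLOPs: False
print(exceeds_eu_threshold(1.8e12, 13e12)) # 1.8T params, 13T tokens -> ~1.4e26 FLOPs: True
```

Under this approximation, a training run only crosses the bar when the product of parameters and tokens exceeds roughly 1.7 × 10²⁴, which helps explain the $50 million–$100 million cost estimate quoted above.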

Some scientists would rather focus on how AI models are used than on regulating the models themselves. “Smarter and more capable does not mean more harm,” says Jenia Jitsev, an AI researcher at the Jülich Supercomputing Center in Germany and another co-founder of LAION. Basing regulation on any measure of capability has no scientific grounding, Jitsev argues; by analogy, it would be like declaring dangerous any chemistry that uses more than a certain number of person-hours. “This is how unproductive it is.”

Will This Support AI That Is Open Source?

Open-source advocates and EU politicians hope so. The act promotes making AI replicable, transparent, and available, says Hovy, which is like “reading off the manifesto of the open-source movement.” Some models are more open than others, and it remains unclear how the act's language will be interpreted, says Bommasani. But he thinks legislators intend general-purpose models, such as LLaMA-2 and those from the Paris-based start-up Mistral AI, to be exempt.

According to Bommasani, the EU’s plan for promoting open-source AI differs significantly from the US approach. “The EU argues that in order for the EU to compete with the US and China, open source will be essential.”

How Will The Act Be Put Into Effect?

The European Commission plans to create an AI Office, advised by independent experts, to oversee general-purpose models. The office will develop ways to evaluate the capabilities of these models and to monitor related risks. But even if companies such as OpenAI comply with the rules and submit, for example, their enormous data sets, Jitsev questions whether a public body will have the resources to scrutinize submissions adequately. “The demand to be transparent is very important,” they say. But little thought was given to how these procedures must be carried out, they add.


Threads uses a more sophisticated search to compete with Bluesky

Meta said on Monday that Instagram Threads, its rival to Elon Musk's X, is getting an enhanced search experience. The app, which is built on Instagram's social graph, is rolling out a new feature that lets users search for specific posts by date range and user profile.

This is less thorough than X's advanced search, which lets users refine queries by language, keywords, exact phrases, excluded terms, hashtags, and more. But it does make it easier for Threads users to find specific posts. It also brings Threads' search closer to Bluesky's, which likewise lets users use advanced queries to restrict searches by user profile, date range, and other criteria, although not all of those filtering options are yet exposed in the Bluesky app's user interface.
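The kind of filtering described here can be sketched in a few lines. The toy below is purely illustrative, using a made-up data model; it is not Threads' actual implementation or API.

```python
# Toy illustration of search filtering by author and date range
# (hypothetical data model; not Threads' real implementation or API).
from dataclasses import dataclass
from datetime import date

@dataclass
class Post:
    author: str
    posted_on: date
    text: str

def search(posts, author=None, start=None, end=None):
    """Return posts matching an optional author and inclusive date range."""
    hits = []
    for post in posts:
        if author is not None and post.author != author:
            continue
        if start is not None and post.posted_on < start:
            continue
        if end is not None and post.posted_on > end:
            continue
        hits.append(post)
    return hits

posts = [
    Post("alice", date(2024, 11, 1), "first post"),
    Post("bob",   date(2024, 12, 5), "hello"),
    Post("alice", date(2024, 12, 10), "back again"),
]
# Alice's posts from December onward:
results = search(posts, author="alice", start=date(2024, 12, 1))
print([p.text for p in results])  # ['back again']
```

Each filter is optional, so the same function covers plain keyword-free lookups, author-only queries, and fully bounded date-range searches.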

Meta has been launching new features in quick succession in recent days to counter the threat posed by Bluesky, a social networking startup that has quickly gained traction as another X competitor. Bluesky had just over 9 million users in September, but in the weeks after the U.S. elections, users left X over Elon Musk's political views and other policy changes, including plans to alter how blocks work and to let AI companies train on X user data. Bluesky now reports around 24 million users.

To counter Bluesky's momentum, Meta's Threads has introduced features such as an improved algorithm, a design change that makes switching between feeds easier, and the option for users to choose their own default feed. It has also been spotted developing Starter Packs, its own version of Bluesky's user-curated recommendation lists.

Apple’s own 5G modem-equipped iPhone SE 4 is “confirmed” to launch in March

Barclays analyst Tom O'Malley and his colleagues recently traveled to Asia to meet electronics suppliers and manufacturers. In a research note published this week outlining the trip's main findings, the analysts said they had “confirmed” that a fourth-generation iPhone SE with an Apple-designed 5G modem is slated to launch near the end of the first quarter of next year. That timeline suggests the next iPhone SE will be unveiled in March, in keeping with earlier rumors and similar to the timing of the current model's 2022 debut.

The rumored features of the fourth-generation iPhone SE include a 6.1-inch OLED display, Face ID, a newer A-series chip, a USB-C port, a single 48-megapixel rear camera, 8GB of RAM to enable Apple Intelligence support, and the previously mentioned Apple-designed 5G modem. The SE is anticipated to have a similar design to the base iPhone 14.

Apple is said to have been developing its own 5G modem for iPhones since 2018, a move that would let it reduce and eventually eliminate its reliance on Qualcomm. With Qualcomm's 5G modem supply agreement for iPhone launches extended through 2026 earlier this year, Apple still has plenty of time to complete the transition to its own modem. Beyond the fourth-generation iPhone SE, Apple analyst Ming-Chi Kuo has said that the so-called “iPhone 17 Air” will also ship with an Apple-designed 5G modem.

Whether Apple's first 5G modem will offer consumers any advantages over Qualcomm's modems, such as faster speeds, remains uncertain.

Apple sued Qualcomm in 2017 over anticompetitive behavior and $1 billion in unpaid royalties. After the two companies settled the dispute in 2019, Apple purchased the majority of Intel's smartphone modem business, acquiring a portfolio of cellular-technology patents to support its development effort. It appears the fruits of that effort will finally arrive in about four more months.

Apple announced the third-generation iPhone SE online on March 8, 2022. With dated features such as a Touch ID button, a Lightning port, and large bezels around the screen, the handset resembles the iPhone 8. The iPhone SE currently retails for $429 in the United States, though the new model may cost at least slightly more.

Google is said to be discontinuing the Pixel Tablet 2 and may be leaving the market once more

According to a report yesterday from Android Headlines, Google has terminated development of the Pixel Tablet 3, even before a second-generation model was announced; in fact, the report says the second-generation Pixel Tablet has been canceled as well. That means last year's device will likely be a one-off, with Google abandoning the tablet market for the second time in just over five years.

If accurate, the report suggests Google has decided that the Pixel Tablet's dismal sales don't justify investing more money in a follow-up. Rumors of a keyboard accessory and additional functionality for the now-defunct project had surfaced as recently as last week.

It's worth keeping in mind that Google's Nest division may still pursue large-screen products through devices like the Nest Hub and Hub Max, rather than standalone tablets.

Google has always struggled to make a significant impact in the tablet market or to build a competitor that can match Apple's iPad in sales and overall performance, a problem not helped by its inconsistent approach. Despite a promising start with the Nexus 7 years ago, it never regained momentum, even when the hardware was good. Another hindrance is that Android trails iPadOS significantly in the number of tablet-optimized third-party apps.

The company first declared it was done making tablets in 2019, after the Pixel Slate received overwhelmingly negative reviews. Two tablets still in development at the time were scrapped.

By 2022, however, Google had changed its mind and announced that its Pixel hardware team was developing a tablet. The device ultimately launched as the $499 Pixel Tablet, which came with a speaker dock that the tablet attaches to magnetically. (Google later sold the tablet alone for $399.)
