
Technology

Concerns about how AI will affect the 2024 election are growing

As the 2024 primary elections approach, worries are growing that rapidly advancing artificial intelligence (AI) could affect the results of the upcoming election.

AI, a cutting-edge technology that can produce convincing text, images, audio, and even deepfake videos, has the potential to spread misinformation in an already divisive political landscape and further erode public trust in the nation’s electoral system.

“2024 will be an AI election, much the way that 2016 or 2020 was a social media election,” said Ethan Bueno de Mesquita, interim dean at the University of Chicago Harris School of Public Policy. “We will all be learning as a society about the ways in which this is changing our politics.”

Experts have raised concerns that AI chatbots may give voters false information if they use them to look up ballot details, election calendars, or polling locations. More sinister is the prospect that AI could be used to fabricate and spread false information about specific politicians or causes.

“I think it could get pretty dark,” said Lisa Bryant, chair of the Department of Political Science at California State University, Fresno and an expert with MIT’s Election Lab.

Polls suggest concern about AI now extends well beyond academics: Americans broadly worry that the technology will add complication and confusion to the already divisive 2024 cycle.

Bipartisan majorities of American adults are concerned that artificial intelligence (AI) will “increase the spread of false information” in the 2024 election, according to a UChicago Harris/AP-NORC poll published in November.

According to a Morning Consult-Axios survey, the percentage of American adults who believe AI will have a negative effect on voters’ trust in candidate commercials and in election results in general has increased recently.

Almost 60% of respondents stated they believed AI-spread misinformation would influence the winner of the 2024 presidential contest.

“They are a very powerful tool for doing things like making fake videos, fake pictures, et cetera, that look extremely convincing and are extremely difficult to distinguish from reality — and that is going to be likely to be a tool in political campaigns, and already has been,” said Bueno de Mesquita, who worked on the UChicago poll.

“It’s very likely that that’s going to increase in the ‘24 election — that we’ll have fake content created by AI that’s at least by political campaigns, or at least by political action committees or other actors — that that will affect the voters’ information environment make it hard to know what’s true and false,” he said.

An AI-generated rendition of former President Trump’s voice was allegedly used in a television advertisement over the summer by the DeSantis-aligned super PAC Never Back Down.

Just before the third Republican presidential debate, the former president’s campaign released a video clip in which his rivals appeared to introduce themselves using Trump’s favorite nicknames, seemingly mimicking the voices of those fellow Republicans.

Additionally, the Trump campaign published a modified version of a report that Garrett Haake of NBC News provided prior to the third GOP debate earlier this month. Haake’s report opens the clip unaltered, but then a voiceover criticizes the former president’s Republican opponents.

“The danger is there, and I think it’s almost unimaginable that we won’t have deepfake videos or whatever as part of our politics going forward,” Bueno de Mesquita said.

Politicians’ use of AI in particular has pushed tech companies and policymakers to think about regulating the technology.

Google announced earlier this year that verified election advertisers would have to “prominently disclose” when their advertisements were digitally altered or generated.

When a political advertisement employs a “photorealistic image or video, or realistic-sounding audio” that was created or modified to, among other things, portray a real person saying or doing something they did not do, Meta also intends to mandate disclosure.

In October, President Biden signed an executive order on artificial intelligence that included plans for the Commerce Department to develop guidelines for content authentication and watermarking, as well as new safety standards.

“President Biden believes that we have an obligation to harness the power of AI for good, while protecting people from its potentially profound risks,” a senior administration official said at the time.

Legislators, however, have largely been left scrambling to regulate the sector as it continues to produce new innovations.

As part of her campaign, Shamaine Daniels, a Democratic candidate for Congress from Pennsylvania, is using an AI-powered voice tool developed by startup Civox for phone banking.

“I share everyone’s grave concerns about the possible nefarious uses of AI in politics and elsewhere. But we need to also understand and embrace the opportunities this technology represents,” Daniels said when she announced her campaign would roll out the tech.

According to experts, AI also has beneficial applications in election cycles, such as helping election officials clean voter lists of duplicate registrations and helping voters learn which candidates align with them on the issues they care about.

However, they also caution that the technology may make issues that were discovered in the cycles of 2016 and 2020 worse.

According to Bryant, AI could enable misinformation to “micro-target” people even more precisely than social media already does. No one is immune, she said, citing the way advertisements on platforms like Instagram already have the power to shape behavior.

“It really has helped to take this misinformation and really pinpoint what kinds of messages, based on past online behavior, really resonate and work with individuals,” she said.

Bueno de Mesquita said he is less concerned about micro-targeted voter manipulation campaigns, since evidence suggests social media targeting has not been effective at swaying elections. Resources, he said, would be better directed toward educating the public about the “information environment” and steering people toward reliable sources of information.

Nicole Schneidman, a technology policy advocate at the nonprofit watchdog group Protect Democracy, said that rather than expecting AI to bring “novel threats” to the 2024 election, the group anticipates the technology accelerating trends that are already undermining democracy and election integrity.

She warned against overstating AI’s potential to power a misinformation campaign large enough to influence the outcome of the election.

“Certainly, the technology could be used in creative and novel ways, but what underlies those applications are all threats like disinformation campaigns or cyberattacks that we’ve seen before,” Schneidman said. “We should be focusing on mitigation strategies that we know that are responsive to those threats that are amplified, as opposed to spending too much time trying to anticipate every use case of the technology.”

Getting people familiar with the rapidly evolving technology may be a crucial first step toward getting ahead of it.

“The best way to become AI literate myself is to spend half an hour, an hour, playing with the chatbot,” said Bueno de Mesquita.

Respondents to the UChicago Harris/AP-NORC survey who said they were more familiar with AI tools were also more likely to say the technology could increase the spread of false information, suggesting that familiarity with the technology’s capabilities also raises awareness of its drawbacks.

“I think the good news is that we have strategies both old and new to really bring to the fore here,” Schneidman said.

Despite investments in detection tools, she said, the technology may struggle to keep pace with AI’s increasing sophistication. As an alternative, she said “pre-bunking” by election officials can help educate the public before people ever encounter AI-generated content.

Schneidman said she hopes election officials will also make greater use of digital signatures to show the public and the media which information comes from a reliable source and which is phony. To prepare for deepfakes, she said, candidates could likewise attach these signatures to the images and videos they post.

“Digital signatures are the proactive version of getting ahead of some of the challenges that synthetic content could pose to the caliber of the election information ecosystem,” she said.
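The signing-and-verifying workflow Schneidman describes can be sketched in a few lines. This is a hypothetical illustration, not any official election-office system: a real deployment would use an asymmetric scheme (such as Ed25519) so that verifiers never hold the signing key, but a keyed HMAC stands in here to keep the sketch dependency-free.

```python
import hashlib
import hmac

# Placeholder key for illustration only; a real system would protect an
# asymmetric private key and publish the matching public key.
SIGNING_KEY = b"election-office-demo-key"

def sign(content: bytes, key: bytes = SIGNING_KEY) -> str:
    """Return a hex signature binding the key holder to this exact content."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str, key: bytes = SIGNING_KEY) -> bool:
    """Check the signature in constant time; any edit to the content fails."""
    return hmac.compare_digest(sign(content, key), signature)

# An office signs a notice; outlets can later confirm it was not altered.
notice = b"Polls close at 8 p.m. on Tuesday."
sig = sign(notice)

assert verify(notice, sig)                      # authentic content passes
assert not verify(notice + b" (updated)", sig)  # tampered content fails
```

The point of the sketch is the asymmetry of effort: producing a convincing fake is easy, but producing one that verifies against the office’s published key is not.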

Getting accurate information about when and how to vote in front of people early, she said, helps election officials, political leaders, and journalists head off voter suppression and confusion. And because narratives about election meddling tend to recur, she added, those combating AI-generated misinformation have precedent to draw on.

“The benefits of pre-bunking include the ability to create powerful counter-messaging that foresees recurrent misinformation narratives and, ideally, get that in front of voters’ eyes well in advance of the election, ensuring that message is consistently landing with voters so that they are getting the authoritative information that they need,” stated Schneidman.



Microsoft Expands Copilot Voice and Think Deeper

Microsoft is taking a major step forward by offering unlimited access to Copilot Voice and Think Deeper, marking two years since the AI-powered Copilot was first integrated into Bing search. This update comes shortly after the tech giant revamped its Copilot Pro subscription and bundled advanced AI features into Microsoft 365.

What’s Changing?

Microsoft remains committed to its $20 per month Copilot Pro plan, ensuring that subscribers continue to enjoy premium benefits. According to the company, Copilot Pro users will receive:

  • Preferred access to the latest AI models during peak hours.
  • Early access to experimental AI features, with more updates expected soon.
  • Extended use of Copilot within popular Microsoft 365 apps like Word, Excel, and PowerPoint.

The Impact on Users

This move signals Microsoft’s dedication to enhancing AI-driven productivity tools. By expanding access to Copilot’s powerful features, users can expect improved efficiency, smarter assistance, and seamless integration across Microsoft’s ecosystem.

As AI technology continues to evolve, Microsoft is positioning itself at the forefront of innovation, ensuring both casual users and professionals can leverage the best AI tools available.

Stay tuned for further updates as Microsoft rolls out more enhancements to its AI offerings.



Google Launches Free AI Coding Tool for Individual Developers

Google has introduced a free version of Gemini Code Assistant, its AI-powered coding assistant, for solo developers worldwide. The tool, previously available only to enterprise users, is now in public preview, making advanced AI-assisted coding accessible to students, freelancers, hobbyists, and startups.

More Features, Fewer Limits

Unlike competing tools such as GitHub Copilot, which limits free users to 2,000 code completions per month, Google is offering up to 180,000 code completions per month—a significantly higher cap designed to accommodate even the most active developers.

“Now anyone can easily learn, generate code snippets, debug, and modify applications without switching between multiple windows,” said Ryan J. Salva, Google’s senior director of product management.

AI-Powered Coding Assistance

Gemini Code Assist for individuals is powered by Google’s Gemini 2.0 AI model and offers:

  • Auto-completion of code while typing
  • Generation of entire code blocks based on prompts
  • Debugging assistance via an interactive chatbot

The tool integrates with popular developer environments like Visual Studio Code, GitHub, and JetBrains, supporting a wide range of programming languages. Developers can use natural language prompts, such as: “Create an HTML form with fields for name, email, and message, plus a submit button.”

With support for 38 programming languages and a 128,000-token memory for processing complex prompts, Gemini Code Assist provides a robust AI-driven coding experience.

Enterprise Features Still Require a Subscription

While the free tier is generous, advanced features like productivity analytics, Google Cloud integrations, and custom AI tuning remain exclusive to paid Standard and Enterprise plans.

With this move, Google aims to compete more aggressively in the AI coding assistant market, offering developers a powerful and unrestricted alternative to existing tools.



Elon Musk Unveils Grok-3: A Game-Changing AI Chatbot to Rival ChatGPT

Elon Musk’s artificial intelligence company xAI has unveiled its latest chatbot, Grok-3, which aims to compete with leading AI models such as OpenAI’s ChatGPT and China’s DeepSeek. Grok-3 is now available to Premium+ subscribers on Musk’s social media platform X (formerly Twitter), as well as through xAI’s mobile app and the new SuperGrok subscription tier on Grok.com.

Advanced Capabilities and Performance

Grok-3 has ten times the computing power of its predecessor, Grok-2. Initial tests show Grok-3 outperforming models from OpenAI, Google, and DeepSeek, particularly in math, science, and coding. The chatbot offers advanced reasoning capabilities that can decompose complex questions into manageable tasks. Users can interact with Grok-3 in two modes: “Think,” which performs step-by-step reasoning, and “Big Brain,” which is designed for more difficult tasks.

Strategic Investments and Infrastructure

To support the development of Grok-3, xAI has made major investments in its supercomputer cluster, Colossus, which is currently the largest globally. This infrastructure underscores the company’s commitment to advancing AI technology and maintaining a competitive edge in the industry.

New Offerings and Future Plans

Along with Grok-3, xAI has also introduced a logic-based chatbot called DeepSearch, designed to enhance research, brainstorming, and data analysis tasks. This tool aims to provide users with more insightful and relevant information. Looking to the future, xAI plans to release Grok-2 as an open-source model, encouraging community participation and further development. Additionally, upcoming improvements for Grok-3 include a synthesized voice feature, which aims to improve user interaction and accessibility.

Market Position and Competition

The launch of Grok-3 positions xAI as a major competitor in the AI chatbot market, directly challenging established models from OpenAI and emerging rivals such as DeepSeek. While Grok-3’s performance claims have yet to be independently verified, early indications suggest it could have a significant impact on the AI landscape. xAI is actively seeking $10 billion in investment from major companies, demonstrating strong belief in its technological advancements and market potential.

Continue Reading
