Artists under siege by AI that studies their work and then imitates their styles have teamed up with academic researchers to thwart such copycat activity.
American illustrator Paloma McClain went into defense mode after learning that several AI models had been “trained” using her art, with no credit or compensation sent her way.
“It bothered me,” McClain told Agence France-Presse. “I believe truly meaningful technological advancement is done ethically and elevates all people instead of functioning at the expense of others.”
The artist turned to free software called Glaze, created by researchers at the University of Chicago.
Glaze essentially outthinks AI models when it comes to how they train, adjusting pixels in ways that are indiscernible to human viewers but that make a digital artwork appear dramatically different to the AI.
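That description matches a family of techniques known as adversarial perturbations. As a rough illustration only, and not Glaze’s actual algorithm or loss, here is a minimal Python sketch of the general idea: nudging an image’s pixels within a small imperceptibility budget so that a pretrained feature extractor, standing in for an AI model’s “eye,” reads the image as closer to a different target style. The `cloak()` function, the `epsilon` budget, and the choice of ResNet features are all assumptions made for the example.

```python
# A minimal, illustrative sketch of style cloaking: nudge an image's pixels,
# within a small imperceptibility budget, so a feature extractor "sees" it as
# closer to a different target style. NOT Glaze's real code; the extractor,
# budget, and optimizer are assumptions made for this example.
import torch
import torchvision.models as models

def cloak(image, target_style_image, epsilon=0.03, steps=50, lr=0.005):
    """Return a perturbed copy of `image` (a 3xHxW float tensor in [0, 1])
    whose deep features move toward `target_style_image`, with per-pixel
    changes capped at `epsilon`."""
    extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    extractor.fc = torch.nn.Identity()  # penultimate features as a crude style proxy
    extractor.eval()

    with torch.no_grad():
        target_feat = extractor(target_style_image.unsqueeze(0))

    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        perturbed = (image + delta).clamp(0, 1)
        loss = torch.nn.functional.mse_loss(
            extractor(perturbed.unsqueeze(0)), target_feat
        )
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)  # keep the change imperceptibly small
    return (image + delta).clamp(0, 1).detach()
```

To a person, the cloaked output looks essentially unchanged; to a model training on it, the style signal has been shifted.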
“We’re basically providing technical tools to help protect human creators against invasive and abusive AI models,” said Ben Zhao, a professor of computer science on the Glaze team.
Created in just four months, Glaze spun off from technology the team had built to disrupt facial recognition systems.
“We were working at super fast speed because we knew the problem was serious,” Zhao said of the urgency to defend artists from software imitators. “A lot of people were in pain.”
Generative AI giants have agreements to use data for training in some cases, but the majority of the digital images, text, and audio used to shape the way such software thinks has been scraped from the internet without permission.
According to Zhao, Glaze has been downloaded over 1.6 million times since its launch in March.
Zhao’s group is developing “Nightshade,” a Glaze enhancement that strengthens defenses by confusing AI, for example by getting it to interpret an image of a dog as a cat.
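To make that dog-as-cat confusion concrete, here is a tiny illustrative sketch of concept poisoning, reusing the hypothetical `cloak()` helper from the Glaze sketch above. This is not Nightshade’s real method; it only shows the shape of the idea: poisoned training pairs keep a caption a human would accept while the pixels are pushed toward the wrong concept’s features.

```python
# Illustrative only: a concept-poisoning loop in the spirit of Nightshade,
# reusing the hypothetical cloak() helper sketched above. Each poisoned pair
# keeps a caption a person would accept ("a photo of a dog") while the pixels
# are shifted toward cat features; a model trained on enough such pairs starts
# linking the word "dog" to cat-like imagery.
def build_poisoned_pairs(dog_images, cat_anchor, n_poison=100):
    poisoned = []
    for img in dog_images[:n_poison]:
        tainted = cloak(img, cat_anchor, epsilon=0.03)  # looks like a dog, "reads" like a cat
        poisoned.append((tainted, "a photo of a dog"))
    return poisoned
```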
Several companies have approached the team about using Nightshade, according to Zhao.
“The goal is for people to be able to protect their content, whether it’s individual artists or companies with a lot of intellectual property,” Zhao said.
Startup Spawning has developed Kudurru, software that detects attempts to harvest large numbers of images from an online platform.
An artist can then block access or send back images that don’t match the request, Spawning co-founder Jordan Meyer said.
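Spawning has not published Kudurru’s internals; the toy Python sketch below only illustrates the detect-and-respond pattern the company describes: count image requests per client within a time window and, past a threshold, either deny access or serve a decoy image that doesn’t match the request. The threshold, window, and decoy path are invented for the example.

```python
# A toy sketch of the detect-and-deflect idea behind Kudurru: rate-count image
# requests per client, and once a client looks like a bulk scraper, either
# refuse access or serve a decoy that doesn't match the request. Thresholds,
# storage, and the decoy strategy are illustrative assumptions, not Spawning's code.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_IMAGE_REQUESTS = 120  # beyond this rate, treat the client as a scraper

request_log = defaultdict(deque)  # client_ip -> recent request timestamps

def handle_image_request(client_ip, requested_path, mode="block"):
    now = time.time()
    log = request_log[client_ip]
    log.append(now)
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()  # drop timestamps outside the sliding window

    if len(log) <= MAX_IMAGE_REQUESTS:
        return ("serve", requested_path)       # normal visitor
    if mode == "block":
        return ("deny", None)                  # refuse access outright
    return ("serve", "/decoys/unrelated.jpg")  # taint the scraper's dataset
```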
The Kudurru network has already grown to include more than a thousand websites.
“The best solution would be a world in which all data used for AI is subject to consent and payment,” Meyer said. “We hope to push developers in this direction.”