
AI painting infringement is confirmed! Diffusion models may remember your photos and all existing privacy protection methods will be ineffective

WBOY | 2023-04-12 22:16

This article is reprinted with the authorization of the AI new-media outlet Qubit (public account ID: QbitAI). Please contact the source for permission to reprint.

AI painting infringement is confirmed!

The latest research shows that diffusion models firmly memorize samples from their training set and reproduce them when generating.


In other words, behind every stroke of an AI painting generated by Stable Diffusion, there may be an act of infringement.

Not only that: a comparison shows that diffusion models "plagiarize" training samples at twice the rate of GANs, and the better a diffusion model's generations, the more strongly it memorizes its training samples.

The research comes from a team of researchers at Google, DeepMind, and UC Berkeley.


The paper brings another piece of bad news: in the face of this phenomenon, all existing privacy protection methods fail.

As soon as the news came out, netizens erupted; retweets of the paper author's Twitter thread were approaching a thousand.


Some lamented: so the claim that these models steal other people's copyrighted work makes sense after all!

Support the lawsuit! Sue them!


Others spoke up in defense of the diffusion models.


Some netizens extended the paper's findings to the hottest topic of the moment, ChatGPT.


Existing privacy protection methods are all ineffective

A diffusion model works by starting from noise and progressively denoising it into an image, so what the researchers actually needed to study was: do these models memorize the images they were trained on, and then "plagiarize" them at generation time?
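To make the generation mechanism concrete, here is a minimal, purely illustrative sketch of the denoising loop; the stand-in function below takes the place of the trained neural network that real models such as Stable Diffusion and Imagen use.

```python
import numpy as np

def toy_denoiser(x: np.ndarray, t: int) -> np.ndarray:
    # Stand-in for a trained network that predicts the noise present in x
    # at diffusion step t; real models learn this from huge image datasets.
    return 0.1 * x

def sample(shape=(64, 64, 3), steps=50, seed=0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)     # start from pure Gaussian noise
    for t in reversed(range(steps)):
        x = x - toy_denoiser(x, t)     # crude stand-in for the DDPM update
    return x

image = sample()  # with a real model, enough steps yield a clean image
```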

The images in the training set are often scraped from the Internet. Some are copyrighted or trademarked, and some are private, such as personal medical X-rays.

To figure out whether diffusion models can memorize and regenerate individual training samples, the researchers first proposed a new definition of "memorization".

Generally speaking, definitions of memorization focus on text language models: if a model can be prompted to recover a verbatim sequence from the training set, that sequence has been extracted and memorized.

In contrast, the research team defined "memorization" based on image similarity.

The team frankly admits, however, that this definition of "memorization" is conservative.
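As a rough illustration of such a similarity-based test, here is a sketch; it is not the paper's exact metric (which compares images at the patch level), and the threshold is an arbitrary stand-in.

```python
import numpy as np

def l2_distance(a: np.ndarray, b: np.ndarray) -> float:
    # Root-mean-square pixel distance between two equally sized images.
    return float(np.sqrt(np.mean((a - b) ** 2)))

def is_memorized(generated: np.ndarray, training_images: list,
                 threshold: float = 0.1) -> bool:
    # Count a generation as memorized if it is near-identical to any
    # training image under the distance above.
    return any(l2_distance(generated, x) <= threshold
               for x in training_images)
```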

As an example of that conservatism: one figure in the paper shows a "photo of Obama" generated by Stable Diffusion that does not closely match any specific training image, so under this definition it does not count as generated from memory.


But that does not mean Stable Diffusion's ability to generate new, identifiable images poses no copyright or privacy risk.

Next, they selected more than 1,000 training samples, including personal photos and corporate logos, and designed a two-stage data extraction attack.

The concrete procedure: generate large numbers of images with standard sampling, then flag those generations whose membership-inference score exceeds a threshold.
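A sketch of what such a two-stage attack might look like, assuming a generate(prompt) wrapper around a text-to-image model (replaced here by a random stand-in); the membership-inference heuristic follows the intuition that a memorized image gets regurgitated near-identically across many independent generations. All parameter values are illustrative.

```python
import numpy as np

def generate(prompt: str, rng) -> np.ndarray:
    # Stand-in for sampling one image from a text-to-image diffusion model.
    return rng.random((64, 64, 3))

def extract(prompt: str, n_samples: int = 100,
            dist_cutoff: float = 0.05, min_neighbors: int = 10) -> list:
    rng = np.random.default_rng(0)
    # Stage 1: generate many candidate images with standard sampling.
    candidates = [generate(prompt, rng) for _ in range(n_samples)]
    # Stage 2: flag candidates that many other generations nearly duplicate,
    # a sign the model is reproducing a memorized training image.
    flagged = []
    for i, a in enumerate(candidates):
        neighbors = sum(
            np.sqrt(np.mean((a - b) ** 2)) <= dist_cutoff
            for j, b in enumerate(candidates) if j != i
        )
        if neighbors >= min_neighbors:
            flagged.append(a)
    return flagged

suspects = extract("a photo of a person")  # illustrative prompt
```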

Applying this method to Stable Diffusion and Imagen, the team extracted more than 100 near-identical or identical copies of training images.

Among them were identifiable personal photos and trademarked logos; on inspection, most proved to be copyrighted.


Then, to better understand how this "memorization" occurs, the researchers trained hundreds of diffusion models on CIFAR-10 and drew a million samples from them.

The goal was to analyze how model accuracy, hyperparameters, data augmentation, and deduplication each affect privacy.
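An illustrative skeleton of that kind of factor sweep, with stand-in helpers; nothing here comes from the paper's codebase, and the factor values are arbitrary.

```python
import itertools

def train_model(width: int, augment: bool, dedup: bool):
    # Stand-in for training a small diffusion model on CIFAR-10.
    return {"width": width, "augment": augment, "dedup": dedup}

def count_extracted(model, n_samples: int = 1_000_000) -> int:
    # Stand-in for sampling n_samples images from the model and counting
    # how many are near-copies of training images (see sketch above).
    return 0

for width, augment, dedup in itertools.product([64, 128], [False, True],
                                               [False, True]):
    model = train_model(width, augment, dedup)
    print(f"width={width} augment={augment} dedup={dedup} "
          f"-> extracted={count_extracted(model)}")
```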


They came to the following conclusions:

First, diffusion models memorize more than GANs. They are also the least private of the image models evaluated, leaking more than twice as much training data as GANs.


Second, larger models tend to memorize more data.

Building on this, the researchers also studied Imagen, a 2-billion-parameter text-to-image diffusion model. They attempted to extract the 500 training images with the highest out-of-distribution scores and found that all of them were memorized.

In contrast, the same method applied to Stable Diffusion failed to identify any such memorization.

Imagen is therefore less private than Stable Diffusion on both duplicated and non-duplicated images. The researchers attribute this to Imagen's larger model capacity: it can simply remember more images.

Third, better generative models (those with lower FID scores) store more training data.

In other words, as generative models improve over time, they will leak more private data and commit more copyright infringement.

(GAN models sorted by FID; the lower the FID value, the better the quality)

Through these training runs, the team found that increasing utility reduces privacy, and that simple defenses such as deduplication are not enough to fully stop memorization attacks.

In short, existing privacy-enhancing technologies do not provide an acceptable privacy-utility trade-off.

In the end, the team made four suggestions for those who train diffusion models:

  • Deduplicate the training dataset and minimize overtraining (a minimal deduplication sketch follows this list);
  • Use data extraction attacks or other auditing techniques to assess the privacy risks of the trained model;
  • Adopt more practical privacy-preserving techniques as they become available;
  • Avoid letting AI-generated images hand privacy-sensitive parts of the training data to users for free.
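For the first recommendation, here is a minimal deduplication sketch using perceptual hashes. It assumes the third-party Pillow and ImageHash packages, and the directory name and file pattern are illustrative.

```python
from pathlib import Path

import imagehash       # third-party: pip install ImageHash
from PIL import Image  # third-party: pip install Pillow

def dedup(image_dir: str = "training_images") -> list:
    seen = set()
    kept = []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        # Perceptual hash is robust to resizing and re-encoding, so it
        # catches near-duplicates, not just byte-identical files.
        h = str(imagehash.phash(Image.open(path)))
        if h in seen:
            continue  # near-duplicate of an image we already kept
        seen.add(h)
        kept.append(path)
    return kept
```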

Copyright owners have never stopped defending their rights

Now that this research is out, it may well influence the litigation already under way.

At the end of January, the stock-image giant Getty Images sued Stability AI for copyright infringement in the High Court of London.


Getty Images alleges that Stability AI "unlawfully copied and processed millions of copyright-protected images" to train Stable Diffusion.

Part of Stable Diffusion's training data is open source. Analysis of the watermarks in it revealed that many photo agencies, Getty among them, had unknowingly contributed a large share of the material in Stable Diffusion's training set.

Yet at no point did Stability AI ever deal with these photo agencies.

Many AI companies argue that this practice is protected by laws such as the U.S. fair use doctrine, but most copyright owners disagree and consider it an infringement of their rights.

Although Stability AI has said that in the next version copyright owners will be able to remove their works from the training gallery, for now some remain dissatisfied.

By mid-January, three artists had already filed a lawsuit against Stability AI and Midjourney.

Legal experts have yet to reach a consensus, but they agree that the courts will need to rule on these questions of copyright protection.

Getty Images CEO Craig Peters said the company has sent Stability AI a notice telling it to expect to be sued in the UK soon.

The company also said:

We are not after damages for the infringement, nor do we intend to stop the development of AI art tools.

Taking Stability AI to court is not about Getty's own narrow interests.

The decision to sue serves a deeper, long-term purpose: we hope the courts will establish new law to govern the status quo.

