A New Tool Could Protect Image Creators Against AI

In this post:

  • A new tool called Nightshade could protect artists’ works from being used to train AI models.
  • While some people benefit from AI-generated images, artists themselves are worried that their works are being exploited.
  • Nightshade is still under development by a team led by University of Chicago professor Ben Zhao.

A team of developers led by University of Chicago professor Ben Zhao has introduced a new tool called “Nightshade” to help digital artists protect their works from being harvested and used to train AI algorithms.

Nightshade Might Be the Solution to Artists’ AI Nightmare

Over recent months, AI art generators have become increasingly sophisticated and can now create images that are difficult to differentiate from human-made art. 

While some people benefit from AI-generated images, artists themselves are concerned that their works are being exploited and devalued by AI tools, among other copyright and intellectual property violation issues. 

In August, three digital artists filed a lawsuit against AI image companies Stability AI, Midjourney, and DeviantArt, seeking damages and a court order barring the companies from exploiting artists’ works without consent.

The artists complained that the companies violated their rights and those of millions of other artists whose works were scraped and fed into AI algorithms to produce derivative works that now compete against the originals. 

Nightshade, though still under development, might be the solution to this nightmare. 

How Does Nightshade Work?

According to reports, the software renders images unusable for AI model training by making tiny changes to their pixels. The changes are small enough to be invisible to the human eye, yet effective enough to cause AI algorithms to completely misidentify the subject matter. 

For instance, an image of a cat modified with Nightshade may incorrectly be interpreted by an AI algorithm as that of a dog. Having such modified images on AI image libraries could significantly ruin the output or performance of AI image generators.
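Nightshade’s actual algorithm has not been published, but the idea described above resembles a bounded pixel perturbation: each pixel is nudged by at most a few intensity levels, so the image looks unchanged to humans while the cumulative pattern misleads a model. The sketch below is purely illustrative; the function name, the perturbation values, and the `epsilon` bound are assumptions, not Nightshade’s real implementation.

```python
# Conceptual sketch only: Nightshade's real method is not public.
# This shows a perturbation clamped so no pixel changes by more
# than `epsilon` intensity levels (imperceptible to the human eye).

def perturb_image(pixels, perturbation, epsilon=2):
    """Apply a perturbation to 8-bit pixel values, limiting each
    change to +/-epsilon and keeping results in the 0-255 range."""
    poisoned = []
    for value, delta in zip(pixels, perturbation):
        # Bound the per-pixel change (an L-infinity constraint).
        delta = max(-epsilon, min(epsilon, delta))
        # Clamp to the valid 8-bit pixel range.
        poisoned.append(max(0, min(255, value + delta)))
    return poisoned

original = [120, 64, 200, 33]
perturbation = [5, -7, 1, -1]  # hypothetical output of an attack optimizer
poisoned = perturb_image(original, perturbation)
print(poisoned)  # [122, 62, 201, 32]
print(max(abs(a - b) for a, b in zip(original, poisoned)))  # 2
```

In a real poisoning attack, the perturbation would be computed by an optimizer so that the subtly altered image is consistently mislabeled during training, for example a cat read as a dog.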

The objective is to prevent AI companies from feeding artistic images to AI algorithms without authorization, and also to deter AI image generators from being used to create fake images and pirated art.

On that front, some companies are beginning to implement measures to help people distinguish between fake and human-made art. 

Earlier this month, Cryptopolitan reported that OpenAI, the development firm behind ChatGPT, is building a new tool that can distinguish AI-generated images from real ones with 99% accuracy. The tool is still being tested internally ahead of a planned public release.
