AI-generated images can trick viewers into accepting fake content as real. That is why OpenAI, the developer of ChatGPT, has created a tool that can determine whether an image was produced by DALL-E 3, its own image-generation model.
On Tuesday, OpenAI gave a first group of testers the chance to test-drive the image detection tool. The objective is to enlist independent researchers to examine the tool's effectiveness, its real-world usefulness, the ways it could be applied, and the factors that characterize AI-generated content.
Tool’s success rate and testing
OpenAI has tested the tool internally, with results that are encouraging in some respects and disappointing in others. When evaluating images produced by DALL-E 3, the tool correctly identified about 98% of them. And when analyzing images not made by DALL-E 3, it erroneously attributed them to DALL-E 3 only about 0.5% of the time.
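Those two figures correspond to the standard true-positive and false-positive rates of a binary classifier. A quick illustration in Python (the counts below are made up, since OpenAI has not published its test-set sizes):

```python
# Illustrative only: hypothetical counts showing how the reported 98%
# and 0.5% figures map onto standard binary-classifier metrics.
dalle3_images = 10_000         # images actually generated by DALL-E 3
flagged_dalle3 = 9_800         # of those, correctly flagged by the tool

other_images = 10_000          # images NOT generated by DALL-E 3
wrongly_flagged = 50           # of those, wrongly attributed to DALL-E 3

true_positive_rate = flagged_dalle3 / dalle3_images    # 0.98  -> "98% correct"
false_positive_rate = wrongly_flagged / other_images   # 0.005 -> "0.5% of the time"

print(f"True-positive rate:  {true_positive_rate:.1%}")
print(f"False-positive rate: {false_positive_rate:.1%}")
```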
Minor modifications to an image made little difference to the tool's performance. The internal team compressed, cropped, and changed the saturation of images created by DALL-E 3 and observed that the tool still achieved a high success rate.
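OpenAI has not released the classifier itself, but the kind of robustness testing described above is easy to picture in code. The sketch below, using the Pillow imaging library, applies the three perturbations mentioned; the `detect_dalle3` function is a hypothetical stand-in, since the real classifier is available only to approved testers.

```python
# A minimal sketch of the perturbation testing described above, using
# Pillow. detect_dalle3 is a hypothetical placeholder -- OpenAI's
# actual classifier is not a public API.
from io import BytesIO
from PIL import Image, ImageEnhance

def detect_dalle3(image: Image.Image) -> float:
    """Placeholder: a real detector would return P(image came from DALL-E 3)."""
    return 0.0  # dummy score so the sketch runs end to end

def perturbed_variants(image: Image.Image) -> dict:
    """Apply the three perturbations OpenAI describes: compression,
    cropping, and a saturation change."""
    variants = {}

    # 1. Round-trip through low-quality JPEG compression.
    buffer = BytesIO()
    image.save(buffer, format="JPEG", quality=30)
    buffer.seek(0)
    variants["compressed"] = Image.open(buffer).convert("RGB")

    # 2. Center-crop to 75% of each dimension.
    w, h = image.size
    dx, dy = w // 8, h // 8
    variants["cropped"] = image.crop((dx, dy, w - dx, h - dy))

    # 3. Increase color saturation by 50%.
    variants["saturated"] = ImageEnhance.Color(image).enhance(1.5)
    return variants

if __name__ == "__main__":
    # A synthetic test image, so the sketch runs without external files.
    original = Image.new("RGB", (512, 512), (120, 80, 200))
    for name, variant in perturbed_variants(original).items():
        print(f"{name}: score = {detect_dalle3(variant):.3f}")
```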
Challenges and limitations
Unfortunately, the tool was far less effective with images that had been modified extensively. OpenAI's announcement is not specific about how heavy the modifications were in the cases it describes, saying only that "other modifications can reduce performance."
In an interview with The Wall Street Journal, OpenAI researcher Sandhini Agarwal said that the tool was less effective in situations such as a change in image hue. To deal with these types of problems, Agarwal said, OpenAI will be bringing external testers into the system.
Internal testing also raised questions about the tool's ability to analyze images made with AI models from other companies. In those situations, OpenAI's tool could identify only 5% to 10% of the images as AI-generated. Modifications to such images, like hue changes, likewise caused performance to drop significantly, Agarwal told the Journal.
AI-generated images pose particular problems in this election year. Malicious actors, both domestic and foreign, can easily use such photos to smear a political candidate or a cause. And as AI image generators continue to improve, the line between what is real and what is fake is harder to discern than ever.
Industry adoption
Meanwhile, OpenAI is working to add watermarks to AI image metadata as more companies join the Coalition for Content Provenance and Authenticity (C2PA). The C2PA is a tech industry initiative that develops technical standards for establishing the source and authenticity of content, a process known as watermarking. Facebook parent Meta said earlier this month that AI-generated content on its platforms will be labeled under the C2PA standard starting in May.
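For the curious: a C2PA manifest is embedded directly in the image file (in JPEGs, typically as a JUMBF box labeled "c2pa"), so its mere presence can be spotted with a naive byte scan, as in the Python sketch below. The filenames are hypothetical, and this is only a heuristic; actually verifying a credential, including its signatures and provenance chain, requires the official C2PA tooling, such as the open-source c2patool.

```python
# Naive heuristic for spotting an embedded C2PA manifest: the manifest
# store typically appears as a JUMBF box labeled "c2pa", so the marker
# shows up in the raw bytes of a signed file. This only detects
# presence -- it does NOT verify signatures or provenance.
from pathlib import Path

def has_c2pa_marker(path: str) -> bool:
    data = Path(path).read_bytes()
    return b"c2pa" in data

if __name__ == "__main__":
    # Hypothetical example files.
    for name in ("labeled_ai_image.jpg", "ordinary_photo.jpg"):
        if Path(name).exists():
            status = "C2PA marker found" if has_c2pa_marker(name) else "no C2PA marker"
            print(f"{name}: {status}")
        else:
            print(f"{name}: file not found (hypothetical example)")
```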