Last month, the tech giant Apple Inc. removed three apps from its App Store. The apps exploited generative AI tools to construct nonconsensual nude images of people. The decision followed reports that the apps were being promoted through misleading advertising on platforms such as Instagram and on porn websites.
Misleading Advertising
Users who clicked on these ads were sent to the Apple App Store, where the same apps were described as “art generators” with no mention of their ability to generate nudes. This is a long-standing loophole in mobile app stores, first identified in 2022.
The apps in question claimed to use AI algorithms to “undress” individuals in photos, producing fake nude images without the subjects’ permission or knowledge. Such acts not only breach privacy but also open the door to exploitation, harassment, and blackmail.
Several variations of these advertisements, featuring different AI-generated women, appeared on Facebook, Instagram, Facebook Messenger, and Meta’s in-app ad network between April 10 and April 20.
After being tipped off about these apps, Apple removed them from the App Store. Although the apps presented themselves as harmless, their true purpose violated Apple’s community standards and prompted their removal. The move reflects Apple’s stated commitment to maintaining a safe and respectful digital space for everyone.
Ongoing Challenges and Accountability
While Apple’s countermeasures deserve credit, the incident raises questions about the effectiveness of its app review and ad-monitoring processes. That these apps went unnoticed in the App Store for so long underlines the need for greater vigilance and proactive measures to detect and eliminate fraudulent apps.
In addition, the developers used covert promotion tactics, advertising the apps’ illicit features on adult sites to evade detection by Apple and Google. These cases illustrate the difficulty tech companies face in policing their platforms and enforcing their community guidelines.
With Apple expected to unveil AI-related projects at the Worldwide Developers Conference (WWDC) 2024, it will be intriguing to see how the company follows up on this episode. As Apple embeds AI more deeply in its products, maintaining a reputation for ethical behavior and sound technology governance becomes all the more critical to its responsible-technology efforts.
The removal of these apps highlights the ethical issues surrounding AI development and use. User privacy is a central concern: subjects must give consent, and their safety must be assured. By blocking such applications, Apple reinforces ethical standards within its ecosystem.