Meta has recently been accused of damaging teenagers’ mental health: more than 40 US states are suing the company, claiming it has ‘profited from children’s pain,’ and US politicians have accused it of harming young people’s psychological well-being. On Thursday, Meta announced that it is developing AI-based tools to protect children from sextortion scams on Instagram.
Sextortion scams run by gangs on Instagram trick teenagers into sharing explicit images of themselves, then demand money under threat of releasing the images publicly on the internet.
Financial sextortion and safety measures
Meta announced that it has been testing AI-based tools to counter financial sextortion. The tools provide nudity protection by blurring images that the AI detects as containing nudity when they are sent to minors in Instagram messages. Capucine Tuffier, in charge of child protection at Meta France, told AFP that this will help shield children from exposure to unwanted intimate content while giving them the choice of whether to see the image: when someone receives such a message, a warning screen appears over the blurred image.
Meta said it will also offer safety tips and advice to anyone receiving or sending such messages, providing timely help. An on-screen message will encourage the user not to feel pressured to respond to the potential scam, and will offer options to block the sender and report the incident.
According to statistics from authorities, nearly 3,000 young people fell victim to sextortion scams in 2022 in the United States alone. Back in October, more than 40 states sued Instagram’s parent company, Meta, alleging that the company had profited from children’s pain. The company was accused of designing a business model that exploits young users by maximizing the time they spend on its platform, despite the damage to their mental health and well-being.
Meta working on on-device machine learning solutions
At the start of the new year, Meta disclosed that it would roll out measures to safeguard minors under 18 by tightening content policies and expanding parental controls on its platform. Yesterday, Meta said that its latest tools build on its long-standing work to protect underage people from potentially negative or harmful interactions.
Meta said in a post,
“We’re testing new features to help protect young people from sextortion and intimate image abuse and to make it more difficult for potential scammers and criminals to find and interact with teens.”
Source: Meta
Meta also added,
“Nudity protection uses on-device machine learning to analyze whether an image sent in a DM on Instagram contains nudity. Because the images are analyzed on the device itself, nudity protection will work in end-to-end encrypted chats, where Meta won’t have access to these images – unless someone chooses to report them to us.”
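The flow Meta describes can be illustrated with a conceptual sketch. This is not Meta’s code: the classifier below is a hypothetical stub standing in for an on-device ML model, and the function and field names are invented for illustration. The point it shows is architectural: because the classification and blur decision both happen locally, no plaintext image needs to leave the device, which is why the feature can work inside end-to-end encrypted chats.

```python
# Conceptual sketch (not Meta's implementation) of an on-device
# nudity filter gating images in an end-to-end encrypted chat.

def classify_nudity(image_bytes: bytes) -> float:
    """Hypothetical stand-in for an on-device ML model.

    Returns a score in [0, 1]; higher means more likely to contain
    nudity. Faked here with a marker prefix purely for illustration.
    """
    return 0.9 if image_bytes.startswith(b"NSFW") else 0.1

def handle_incoming_image(image_bytes: bytes, threshold: float = 0.5) -> dict:
    """Decide locally whether to blur an image before showing it.

    The decision runs entirely on the recipient's device, so the
    plaintext image is never sent to a server for scanning.
    """
    score = classify_nudity(image_bytes)
    blurred = score >= threshold
    return {
        "blurred": blurred,                # warning screen over a blurred preview
        "viewer_choice_required": blurred, # recipient may still choose to view it
        "uploaded_to_server": False,       # only happens if the user reports the image
    }

print(handle_incoming_image(b"NSFW-example"))  # blurred, viewer must opt in
print(handle_incoming_image(b"cat-photo"))     # shown normally
```

The key design choice this mirrors is that reporting, not scanning, is the only path by which an image reaches Meta’s servers, per the quote above.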
Source: Meta
Meta also said that it screens accounts suspected of sending objectionable content and applies strict restrictions when they attempt to contact underage users. While Meta has been accused of violating its users’ data privacy on many occasions, these measures seem to be a step in the right direction. The company also pointed out that it will not have access to the inappropriate images unless users report them.
This story is sourced from a Meta blog post.