TikTok and Meta have blocked keywords amid growing ethical concerns.
A concerning rise in the popularity of apps and websites that use artificial intelligence to undress women in photos has caught the attention of researchers and privacy advocates, according to a report from Bloomberg. Graphika, a social network analysis company, found that 24 million people visited these undressing websites in September alone, highlighting a troubling surge in non-consensual pornography driven by advances in artificial intelligence.
The so-called “nudify” services have been using popular social networks for marketing, with the number of links advertising undressing apps skyrocketing by more than 2,400 percent on platforms such as X and Reddit since the beginning of the year. These services use AI to digitally undress individuals, predominantly women. Their proliferation presents serious legal and ethical challenges, as the source images are often taken from social media without the subject’s consent or knowledge.
The worrying trend extends to potential harassment, as some advertisements suggest customers could create nude images and send them to the person depicted. In response, Google has reiterated its policy against sexually explicit content in ads and says it is actively removing material that violates it. X and Reddit, however, have not yet responded to requests for comment.
Privacy experts are sounding the alarm about the increasing accessibility of deepfake pornography, facilitated by advances in AI technology. Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation, notes a shift toward ordinary people using these technologies against everyday targets, including high school and college students. Many victims may never learn that these manipulated images exist, and those who do face significant hurdles in getting law enforcement to intervene or in pursuing legal action.
Despite growing concerns, there is currently no federal law in the United States explicitly prohibiting the creation of deepfake pornography. A recent case in North Carolina, in which a child psychiatrist was sentenced to 40 years in prison for using undressing apps on photos of patients, marked the first prosecution under a law banning deepfake generation of child sexual abuse material.
In response to the alarming trend, TikTok and Meta Platforms Inc. have taken steps to block keywords associated with these undressing apps. TikTok warns users that the term “undress” may be associated with content that violates its guidelines, while Meta declined to comment further on its actions.
As technology evolves, the ethical and legal challenges posed by deepfake pornography underscore the urgent need for comprehensive regulations to protect individuals from the non-consensual and harmful use of AI-generated content.