Facebook’s new AI sticker tool generates ‘completely unhinged’ images


Facebook users have shared images of cartoon characters wielding weapons, naked celebrities, and child soldiers – all created using the app’s new AI-generated sticker feature.

Parent company Meta unveiled the new feature last week, allowing Facebook, Instagram, Messenger and WhatsApp users to generate stickers with artificial intelligence by writing prompts.

“I don’t think anyone involved has thought anything through,” 3D artist and illustrator Pier-Olivier Desbiens wrote on X, formerly known as Twitter, after using the tool to create stickers with the prompts ‘Waluigi rifle’, ‘child soldier’, ‘Karl Marx large breasts’ and ‘Trudeau buttocks’.

“We really do live in the stupidest future imaginable,” he wrote.

Another user shared an AI-generated sticker of conspiracy theorist Alex Jones kissing a dog. “It’s completely unhinged,” they wrote.

The Independent has reached out to Meta for comment.

The new AI stickers are currently only available to a limited number of English-language users, and Meta has yet to confirm whether a wider rollout is planned.

“Using technology from Llama 2 and our foundational model for image generation called Emu, our AI tool turns your text prompts into multiple unique, high-quality stickers in seconds,” Meta announced in a blog post last week.

“This new feature… provides infinitely more options to convey how you’re feeling at any moment.”

Meta, which first introduced stickers in 2013, claims that billions of them are sent by Facebook, Instagram, Messenger and WhatsApp users each month.

Other AI image tools, such as OpenAI’s DALL-E 3, have limits in place to prevent misuse, blocking users from generating images featuring violent content or real people. Other generative AI platforms, however, place no such limitations on what can be created.

In its blog post announcing the new feature, Meta acknowledged that AI tools could be misused, which it said is why it is introducing them on a “step by step” basis.

“In keeping with our commitment to responsible AI, we also stress test our products to improve safety performance and regularly collaborate with policymakers, experts in academia and civil society, and others in our industry to advance the responsible use of this technology,” the post stated.

“We’ll continue to iterate on and improve these features as the technologies evolve and we see how people use them in their daily lives.”
