AI This Week: AI’s Threat to Creative Freedom


Though the hype-men behind the generative AI industry are loath to admit it, their products are not particularly generative, nor particularly intelligent. Instead, the automated content that platforms like ChatGPT and DALL-E churn out with such vigor could more accurately be characterized as derivative slop — the regurgitation of an algorithmic puree of thousands of real creative works made by human artists and authors. In short: AI “art” isn’t art — it’s just a dull commercial product produced by software and designed for easy corporate integration. A Federal Trade Commission hearing, held virtually via live webcast, made that fact abundantly clear.

This week’s hearing, “Creative Economy and Generative AI,” was designed to allow representatives from various creative vocations the opportunity to express their concerns about the recent technological disruption sweeping their industries. From all quarters, the resounding call was for impactful regulation to protect workers.

This desire for action was probably best exemplified by Douglas Preston, one of dozens of authors who are currently listed as plaintiffs in a class action lawsuit against OpenAI over the company’s use of their material to train its algorithms. During his remarks, Preston noted that “ChatGPT would be lame and useless without our books” and added: “Just imagine what it would be like if it was only trained on text scraped from web blogs, opinions, screeds, cat stories, pornography and the like.” He concluded: “This is our life’s work, we pour our hearts and our souls into our books.”

The problem for artists seems pretty clear: how are they going to survive in a market where large corporations can use AI to replace them — or, more accurately, whittle down their opportunities and bargaining power by automating large parts of creative work?

The problem for the AI companies, meanwhile, is that there are unsettled legal questions when it comes to the untold bytes of proprietary work that companies like OpenAI have used to train their artist/author/musician-replacing algorithms. ChatGPT would not be able to generate poems and short stories at the click of a button, nor would DALL-E have the capacity to unfurl its bizarre imagery, had the company behind them not gobbled up tens of thousands of pages from published authors and visual artists. The future of the AI industry, then — and really the future of human creativity — is going to be decided by an ongoing argument currently unfolding within the U.S. court system.

This week we had the pleasure of speaking with Allie Funk, Freedom House’s Research Director for Technology and Democracy. Freedom House, which tracks issues connected to civil liberties and human rights all over the globe, recently published its annual report on the state of internet freedom. This year’s report focused on the ways in which newly developed AI tools are supercharging autocratic governments’ approaches to censorship, disinformation, and the overall suppression of digital freedoms. As you might expect, things aren’t going particularly well in that department. This interview has been lightly edited for clarity and brevity.

One of the key points you talk about in the report is how AI is aiding government censorship. Can you unpack those findings a little bit?

What we found is that artificial intelligence is really allowing governments to evolve their approach to censorship. The Chinese government, in particular, has tried to regulate chatbots to reinforce its control over information. They’re doing this through two different methods. The first is that they’re trying to make sure that Chinese citizens don’t have access to chatbots that were created by companies based in the U.S. They’re forcing tech companies in China to not integrate ChatGPT into their products…they’re also working to create chatbots of their own so that they can embed censorship controls within the training data of their own bots. Government regulations require that the training data for Ernie, Baidu’s chatbot, align with what the CCP (Chinese Communist Party) wants and with core elements of socialist propaganda. If you play around with it, you can see this. It refuses to answer prompts about the Tiananmen Square massacre.

Disinformation is another area you talk about. Explain a little bit about what AI is doing to that space.

We’ve been doing these reports for years and what is clear is that government disinformation campaigns are just a regular feature of the information space these days. In this year’s report, we found that, of the 70 countries surveyed, at least 47 governments deployed commentators who used deceitful or covert tactics to try to manipulate online discussion. These [disinformation] networks have been around for a long time. In many countries, they’re quite sophisticated. An entire market of for-hire services has popped up to support these kinds of campaigns. So you can just hire a social media influencer or some other similar agent to work for you, and there are so many shady PR firms that do this kind of work for governments.

I think it’s important to acknowledge that artificial intelligence has been a part of this whole disinformation process for a long time. You’ve got platform algorithms that have long been used to push out incendiary or unreliable information. You’ve got bots that are used across social media to facilitate the spread of these campaigns. So the use of AI in disinformation is not new. But what we expect generative AI to do is lower the barrier of entry to the disinformation market, because it’s so affordable, easy to use, and accessible. When we talk about this space, we’re not just talking about chatbots, we’re also talking about tools that can generate images, video, and audio.

What kind of regulatory solutions do you think need to be looked at to cut down on the harms that AI can do online?

We think there are a lot of lessons from the last decade of debates around internet policy that can be applied to AI. A lot of the recommendations that we’ve already made around internet freedom could be helpful when it comes to tackling AI. So, for instance, governments forcing the private sector to be more transparent about how their products are designed and what their human rights impact is could be quite helpful. Handing over platform data to independent researchers, meanwhile, is another critical recommendation that we’ve made; independent researchers can study what impact the platforms have on populations and on human rights. The other thing that I would really recommend is strengthening privacy regulation and reforming problematic surveillance rules. One thing we’ve looked at previously is regulations to make sure that governments can’t misuse AI surveillance tools.
