Guest editorial: The Hollywood writers negotiated a sensible approach to AI. Other humans should pay attention.


Know any web developers? How about tax preparers, budget analysts or clerks who do data entry?

Those are just some of the occupations at risk of being displaced by artificial intelligence (AI), the fast-growing science of making machines that think like humans. And unlikely as it sounds, the recently concluded Hollywood writers’ strike could come to the rescue of those jobs by establishing new and exportable guidelines that ensure people have a say in how AI is used.

The Writers Guild of America, one of the two big unions that went on strike against the film and television industry this year, ended its labor action Wednesday, after 148 days of closing down most productions everywhere from Hollywood to Chicago.

In its agreement, which still needs to be formally ratified, the writers aim to thread a tricky needle, giving major studios the ability to use AI, as they have demanded from the start, but not to replace human writers with it. The deal adopts a common sense approach that doesn’t try to put this genie back in the bottle — which won’t work — while limiting its potential to unfairly upend people’s lives.

Much of the Writers Guild’s summary of the agreement makes for heavy reading, laden with union work rules that might well prove too rigid for people who just want to ensure the survival of this industry or be fairly compensated for their labor. When it comes to the creative process of writing and who owns what at the end of the day, however, the contract is surprisingly thoughtful.

According to the Guild summary, AI will not be permitted to write or rewrite literary material, and material generated by AI will not be used to take credit away from a writer. At the same time, a writer can choose to use AI to assist in writing if a company consents, while companies can’t require the writer to use AI. And when companies give writers AI-generated material to work with, a prospect unsettling enough to give any self-respecting writer the shivers, they will have to disclose it.

Disclosure. Consent. That’s a fair-minded approach. The Guild also is reserving the right to fight back if a writer’s material is used to “train” AI. That “training” is a timely and valid concern, given the vast amounts of information, otherwise known as the work of human writers, being fed into ChatGPT and other AI programs to teach them how to mimic the thinking of people.

AI-enhanced tools can process far more data than any human and then use it to answer questions and create everything from term papers to editorials (though not this one, and none in the future, at least while we specific humans are in charge here). These “smart” machines can learn to recognize patterns, make seemingly rational decisions and otherwise deploy the human-generated ideas they’re being fed.

Training of AI has been happening behind the scenes, with tech developers using some sneaky tactics.

The Atlantic magazine has spotlighted Books3, a data set of at least 191,000 books being used by companies such as Meta and Bloomberg to train machines — without the knowledge or consent of authors ranging from Stephen King to part-time novelists hardly anyone has heard of. A searchable database now lets writers see whether their work has been pirated. And, yes, that’s the right word.

Besides using books without permission, AI developers have scraped news websites for information that similarly helps their machines work better. News outlets such as CNN, Reuters and, yes, the Chicago Tribune have moved to block the company behind ChatGPT from just helping itself to their content, while The Associated Press recently struck a deal licensing its archive of news stories.

To the extent AI developers take what isn’t theirs, in our view, they need to stop. At the same time, it isn’t realistic to think that AI can entirely be halted in its tracks. It’s too attractive a proposition for too many.

This is where the Writers Guild approach is instructive. The business of AI would benefit from more transparency and open disclosures, to build the trust and credibility that does not exist today. AI should be deployed with as much mutual consent among those involved as possible. And paying attention to the livelihoods of the people most directly affected also would go a long way to relieving concerns about what’s happening in the bowels of tech companies developing AI.

On New Year’s Day in 2017, this page addressed the specter of thinking machines taking over the world, advising readers to be awed by the power of AI, but not afraid of it. That advice extends to people in occupations vulnerable to AI.

Technically, computers may be able to outthink us, and they certainly can process data more efficiently. But humans will always have the edge because we are more creative. After all, we built the machines.

And one last thing: some editorial boards of late have been having AI write an “editorial” or two as a kind of instructive case study or as a lighthearted exercise. As far as we know, there has been clear disclosure. But that’s still playing with fire. And it won’t happen here, at least until somebody, or something, runs us out of town.

— Chicago Tribune
