Meet the AI heretic battling the hype with a warning for Rishi Sunak


“It’s good to be back home in Vancouver”, says Gary Marcus, after a typical 10 days criss-crossing the United States.

As an AI startup entrepreneur, Emeritus Professor of Psychology and Neural Science at New York University and a best-selling author, Marcus’s work drags him across the continent and beyond. But one destination that is not on his itinerary is Bletchley Park.

In a fortnight, the UK’s AI Safety Summit will assemble the great and the good of artificial intelligence, in the hope of creating an international “Bretton Woods”-style agreement to regulate it. Although he was one of three experts invited to give testimony to the United States Congress on AI regulation, alongside OpenAI founder Sam Altman, Marcus hasn’t been invited to Buckinghamshire. He isn’t surprised that his views aren’t welcome.

“Generative AI can’t live up to the current expectations,” he says. “It’s simply not smart enough to do many of the things we think it will be able to do. The systems are not transparent, they’re not reliable, they don’t really understand the world. These are very serious problems that are not being faced.”

Such talk makes him a heretic, and pointing out some very inconvenient truths is not universally welcome. Marcus explains these flaws very elegantly: for years he was The New Yorker magazine’s go-to guy to explain developments in neuroscience and data. Guitar Zero, his book explaining how the brain learns, based on his own initially hopeless quest to master a musical instrument, became a bestseller.

But when so many hopes are pinned on the transformative power of AI, and with venture capitalists in the goldrush phase, a witty sceptic is not what people want to hear. Rather than being burned at the stake, he takes a roasting on social media and snubs such as Bletchley.

“I’ve been treated badly, but most of it just rubs off. Some of it is genuinely irritating, though,” he says. But he’s noticing a pattern. Ridicule can be followed by bullying, but then some time later, his critics concede a vital point.

A long-running spat with the distinguished AI pioneer and Meta executive Yann LeCun, a Turing Award winner, saw LeCun loftily quote-tweet Marcus to dismiss his points. Then one day, LeCun simply adopted Marcus's position.

“None of these people ever credit me, but I am often there first,” he says. “Yann LeCun misrepresented my credentials in order to undermine my arguments. He then adopted my arguments and didn’t acknowledge me. These kinds of things just aren’t cool.”

The Baltimore native began computer programming when he was eight, and his interest in AI was piqued almost at once. Today his career has produced over a hundred academic papers, cited over 5,000 times, while his first AI startup, Geometric Intelligence, was acquired by Uber in 2016.

He briefly led Uber’s self-driving car initiative, and has since founded another startup with the leading robot pioneer Rodney Brooks, founder of iRobot, the company that made the Roomba cleaner. His popular Humans vs Machines podcast explores our fascination with AI, unsparingly highlighting how it fails.

To understand his critique of today’s AI, it helps to know a little about the historic schism in the field and his place in it. The first two waves of interest in AI, in the Sixties and again in the Eighties, attempted to reduce reasoning and understanding to symbols that a computer could then crunch, like algebra. Today that’s called Classical AI, or symbolic AI.

It failed, for several reasons. Today’s AI doesn’t try to be clever at all: it uses brute-force statistical prediction, finding relations that emerge when huge amounts of data are fed into big machines. Those connections are the basis of today’s AI, giving it the name “connectionist AI”.

Today’s AI advocates paint Marcus as a throwback, but he has consistently argued that we need to combine both approaches, symbolic and connectionist. He compares the dogmatism on each side to the old arguments over nature and nurture.

“The argument of whether something is learned or innate can be found in neuroscience, in developmental science, in linguistics, in policy, and nowadays in AI. Much more so than it was in AI some years ago,” he explains.

“People always want it to be one or the other. In many fields it’s obvious that nature and nurture work together”.

Awkwardly for Marcus’s critics, while today’s AI has produced some extraordinary results, it also fails spectacularly; it’s evident that it has no understanding of the world. It’s an “AI” without any real intelligence.

For example, ask an AI image generator to draw a clock and the chances are it will only show one of two times: ten past ten, or ten to two. That’s because clock manufacturers almost exclusively show only these two very similar-looking times in their marketing material, as they’re the most visually appealing representation of the hands.

The bigger problem that won’t go away is its habit of making things up – failures known as hallucinations.

“I wrote about these in my 2001 book The Algebraic Mind,” Marcus explains. “They’ll generalise and you get a bleed-through, or spillover. An AI bot will tell you that Elon Musk died in a car crash – because it doesn’t understand the difference between owning a Tesla and owning Tesla!”

Recent news reporting has unearthed how in both Washington DC and Westminster, the artificial intelligence regulation agenda has been influenced by devotees of the cult-like intellectual movement called Effective Altruism.

This is a motley collection of interests ranging from veganism to animal rights for insects, but apocalyptic and far-distant scenarios of a Terminator AI destroying humanity are a favourite preoccupation.

Marcus frets that today the more immediate danger from deploying AI comes from connecting unreliable systems to power grids, or cars. Urgent issues such as the exploitation of intellectual property by AI companies and the threat of a flood of images of child abuse also seem to be overlooked at Bletchley in favour of paranoid fantasies.

Marcus declines to comment on the Effective Altruist flavour of the summit, but believes it lacks diversity of thought.

“I haven’t seen the final list of invitees, but the impression I get is that it has shifted from AI in general, and trying to figure out the best policy for humanity, to something that is taking a certain premise, and not allowing a lot of people who disagree with the premise to have a voice,” he says.

Marcus says he personally welcomes AI governance and regulation, and thinks it’s good for AI companies to have certainty rather than fragmented national laws. But he questions the Government’s goal of an international council for AI regulation, when so many multilateral institutions already exist.

“Anyone who understands how the real world works understands that technologies often have risks, often have dual uses, can get us places but can also explode – many technologies need some kind of constraints to mitigate risk,” he explains. And he worries that when things go wrong, today’s God Kings of AI won’t be found.

“The idea in the technology industry is that no matter what the externalities are, the public should bear them.”

Far from decrying AI, Marcus says, he’s trying to improve it in its infancy, so it gains the public’s trust. For now even the business models are in question.

“There are many problems with the economics of AI, and some of it is very bubble-like. In the long-term AI will do most of what people want, but it won’t be able to do it in the next decade”.
