An AI Call to Arms

Arguably the most interesting thing about this essay by Sam Altman is the ‘why now?’ of it. Is OpenAI on the verge of some new breakthrough? Is it a reflection on the ‘o1’ model launch? Is it to help close out the new funding round? Is it a post ahead of the announcement of that funding round? Trying to get ahead of the inevitable backlash from a startup raising $6B+? Did he just have some time on his hands after weeks/months of said fundraising and shipping o1? An attempt to “thought lead” by coining a term? Some combination of the above?

Most likely, it is timed with the upcoming UN General Assembly “general debate” set to open later today. And while it’s wrapped in some provocative, if vague, proclamations, it really seems to be a call-to-arms for various world governments to step up with the resources needed to make “superintelligence” – the artist formerly known as “AGI” – a reality. Notably, resources beyond just money.

It also feels like Altman is using the opportunity to attempt to “own the narrative” of this cycle by coining a term/phrase: “The Intelligence Age.” In recent years, Marc Andreessen has probably been the most effective at this, though Altman’s mentor, Paul Graham, reminded everyone of his essayist capabilities by getting the world – at least the startup world – to talk about “Founder Mode” recently.

A few passages to call out in Altman’s post:

Here is one narrow way to look at human history: after thousands of years of compounding scientific discovery and technological progress, we have figured out how to melt sand, add some impurities, arrange it with astonishing precision at extraordinarily tiny scale into computer chips, run energy through it, and end up with systems capable of creating increasingly capable artificial intelligence.

A nice way to put the current chaos swirling around AI into perspective.

This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.

This is what will garner all the attention because Altman is seemingly putting a date on “superintelligence”. In reality, there’s quite a bit of couching and vagueness here still. Beyond the use of “possible”, “a few thousand days” is a pretty wide range. I went ahead and asked ChatGPT what the date will be in a few thousand days and it gave me a very specific one: December 11, 2032.

That’s because it’s taking “a few thousand days” to literally mean 3,000 days. Which is the technical bare minimum for “a few thousand days”, I suppose. Though I also suppose you could bucket 4,000 or 5,000 or 6,000 or even more in there. Anything short of 10,000 days, really. That means anything before February 10, 2052 seems like fair game. Of course, the “it may take longer” couching basically negates any firm date. Anyway, it doesn’t really matter, as it’s clearly just meant to put a rough timetable out there, not to draw a line in the sand. Sometime in the next century, Altman is “confident” we’ll have “superintelligence” – however you wish to define that.
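For what it’s worth, the date math is easy to check yourself. Here’s a quick sketch – the starting point is my assumption (roughly when the essay appeared, around the opening of the 2024 UN General Assembly general debate), not anything Altman specified:

```python
from datetime import date, timedelta

# Assumed starting point: roughly when the essay was published,
# around the opening of the 2024 UN General Assembly general debate.
start = date(2024, 9, 24)

# The literal floor of "a few thousand days" (3,000) and the point at
# which it stops being "a few thousand" altogether (10,000).
for days in (3_000, 10_000):
    print(f"{days:,} days out: {start + timedelta(days=days)}")

# 3,000 days out: 2032-12-11  (ChatGPT's very specific date)
# 10,000 days out: 2052-02-10 (the outer edge of "fair game")
```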

How did we get to the doorstep of the next leap in prosperity?

In three words: deep learning worked.

In 15 words: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it.

That’s really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying “rules” that produce any distribution of data). To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is.

Again, “doorstep” would seem to be doing a lot of work here. But actually, in the full arc of human history, “a few thousand days” could be considered a “doorstep”.

Regardless, this is also a nice, easy way to frame what is actually going on behind artificial intelligence as we know it right now, as ushered in by OpenAI (thanks to the work of researchers within Google, which didn’t leverage the breakthrough internally, for reasons that don’t seem obvious at first but actually are). By throwing enough compute at algorithms tailored to discovering the links between words (and other data), we’ve done something profound.

Just how “profound” is also still up for much debate. Is this a magic trick or is this a path to recreating how our own brains work? Most everyone agrees this is not using the same mechanisms as the human brain right now, but can it lead us down that path? Or will it forge a new path? An intelligence that is not human intelligence, but a different kind, perhaps more powerful (one day) because it’s not limited to the confines of a squishy neuron sponge in our heads? Or are we about to hit a wall with said algorithms and computing power? Will we run out of the energy required before we get there? Will the next breakthrough require augmenting this “intelligence” with other kinds, such as those gleaned from the “real world”? Can something truly learn without doing? How vital are physical senses? You quickly fall down philosophical rabbit holes…

Technology brought us from the Stone Age to the Agricultural Age and then to the Industrial Age. From here, the path to the Intelligence Age is paved with compute, energy, and human will.

If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.

It’s buried, but it’s there. A call-to-arms for governments to help with the infrastructure required. OpenAI can raise all the money in the world – nearly literally, it seems, from the largest companies in the world – but if governments aren’t willing to help with power production and other resources, the ultimate goal of giving such technology to everyone cannot be achieved. Obviously, this is a self-serving call ahead of said UN General Assembly meeting. But it beats the alternative: what the EU is doing. It also beats, I believe, the other calls in this general direction.

I believe the future is going to be so bright that no one can do it justice by trying to write about it now; a defining characteristic of the Intelligence Age will be massive prosperity.

Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace. With nearly-limitless intelligence and abundant energy – the ability to generate great ideas, and the ability to make them happen – we can do quite a lot.

Just, you know, some modest goals and expectations. That said, I think it’s fine to run the risk of over-promising and under-delivering here because the stakes are high enough (and again, the vague timelines for all this dampen any under-delivery risk).

I honestly don’t love Altman’s closing because it seemingly gets too much into the political weeds – but it’s clearly meant to head off the major criticism that will counter the call: that AI is going to take all of the jobs from the people the politicians serve. The theme of this year’s UN General Assembly debate is: “Leaving no one behind: acting together for the advancement of peace, sustainable development and human dignity for present and future generations”. And so here you go:

As we have seen with other technologies, there will also be downsides, and we need to start working now to maximize AI’s benefits while minimizing its harms. As one example, we expect that this technology can cause a significant change in labor markets (good and bad) in the coming years, but most jobs will change more slowly than most people think, and I have no fear that we’ll run out of things to do (even if they don’t look like “real jobs” to us today). People have an innate desire to create and to be useful to each other, and AI will allow us to amplify our own abilities like never before. As a society, we will be back in an expanding world, and we can again focus on playing positive-sum games.

Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter. If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable.

While I don’t love it as a closing here, I also firmly believe in the above. We can debate whether AI is going to mean the end of the world in various science fiction scenarios. What it will not do is take all of our jobs. As Altman notes, it will undoubtedly be disruptive to some labor markets, but that’s always been true of every technological breakthrough dating back centuries or longer. Humans adapt and figure out how to use such tools to augment their lives and work and ultimately make them better. There’s no reason to believe this revolution will be any different. That doesn’t mean it won’t be bumpy and complicated and hard in various places. But it’s also inevitable, and it would be dumb not to embrace and get ahead of such questions and issues now.

“Superintelligence” seemingly has less baggage than “AGI”, though still some new baggage thanks to a certain group that broke away from OpenAI!

Though nothing has topped 2011’s “Software is Eating the World” — wild that this was already 13 years ago.

Which was derived, of course, from a talk Brian Chesky gave — another person close to Altman. Altman has clearly long modeled his own essays on those of Graham, and his essays stepped up when he took over YC from Graham. But now Altman has a reach far wider than Graham ever did.
