In his “pop-up newsletter” Winter Garden, Robin Sloan argues that artificial general intelligence has been with us since the development of GPT-3:
The trick is to read plainly.
The key word in Artificial General Intelligence is General. That’s the word that makes this AI unlike every other AI: because every other AI was trained for a particular purpose and, even if it achieved it in spectacular fashion, did not do anything else. Consider landmark models across the decades: the Mark I Perceptron, LeNet, AlexNet, AlphaGo, AlphaFold … these systems were all different, but all alike in this way.
Language models were trained for a purpose, too … but, surprise: the mechanism & scale of that training did something new: opened a wormhole, through which a vast field of action & response could be reached. Towering libraries of human writing, drawn together across time & space, all the dumb reasons for it … that’s rich fuel, if you can hold it all in your head.
It’s important to emphasize that the open-ended capability of these big models was a genuine surprise, even to their custodians. Once understood, the opportunity was quickly grasped … but the magnitude of that initial whoa?! is still ringing the bell of this century.
I’m extreme in this regard: I think 2020’s Language Models are Few-Shot Learners marks the AGI moment. In that paper, OpenAI researchers demonstrated that GPT-3 — at that time, the biggest model of its kind ever trained — performed better on a wide range of linguistic tasks than models trained for those tasks specifically. A more direct title might have been: This Thing Can Do It All?!
“AGI” is such a misused, ill-defined term that I honestly don’t find it very useful… but it’s hard to argue with Sloan’s point here! Certainly if you showed current LLMs to someone from 20 years ago, or even 10, they’d seem like wild science fiction.
It also reminds me of a quote from Asimov on the definition of “artificial intelligence” and how the goalposts move as new achievements are retrospectively deemed “not AI”:
[artificial intelligence is] a phrase that we use for any device that does things which, in the past, we have associated only with human intelligence
(via Nicholas Carlini)
So. Do we have AGI? Do we even meaningfully have AI? What would we have to see for the general consensus to agree they had been achieved?
Anyway, they are mostly marketing terms at this point. But it can still be interesting to think about them.
Thoughts from a dog walk listening to the Sloan article using ElevenReader.
