If you watch the news, it’s easy to miss the forest for the trees. The evidence is all scattered so let’s gather it in one place:

Does this mean generative AI is failing? Perhaps, but not necessarily.

Let’s make a fair case:

How can these two contrasting pictures happen at the same time?

Tyler Cowen reconciles the disparity with a prediction grounded in history: “AI hype has subsided, but the revolution continues.” The first picture is the dying hype, a natural development of any new technology that grabs more attention than it can feed and more investment than it can return. The second is the silent, hidden “lull”, as Cowen puts it, that relentlessly moves forward, indifferent to the number of eyes it may or may not attract from the outside world, focused on outputs, not outcomes.

Cowen doesn’t claim the AI revolution will continue out of baseless hope, though. He has reasons to assume it’ll follow the same steps—hype, calm (we’re entering this phase), and revolution—as past technologies. Whether his optimism resonates is a different question, but he’s not shy about drawing an analogy with precisely the three innovations most commonly compared to generative AI: the printing press, electricity, and the internet. Here’s what he says:

Every revolutionary technology has a period when it feels not so exciting after all. The dot-com crash came in 2000, but even before that online commerce was decidedly meh. Only two years earlier Paul Krugman had observed that maybe the internet was overrated—and the thing is, in 1998, he wasn’t completely wrong.

Reaching even further back, consider the introduction of electricity into factories in the 19th century, which had many fits and starts over a period of decades. The printing press had a much larger impact on Europe in the 17th century, due to cheaper technology and paper, than it did immediately after Gutenberg’s invention in the mid-15th century.

Generative AI could unfold similarly: early turmoil mixed with enthusiasm, followed by indifference, and finally, eventually, a resurgence. Does it need the popular approval it’s been deservedly losing for a while? I don’t think so; it’s not hype or consensus that builds the new world but the work of people quietly watching over progress while everyone else has moved on (even if they only manage to reap the fruits two hundred years later, thanks to an “unrelated” innovation they couldn’t foresee, like cheaper paper).

Cowen provides evidence favoring this prospect and even claims that “generative AI and LLMs continue to advance by leaps and bounds.” He mentions OpenAI’s enterprise services, Google’s GPT-4 competitor, and, most significantly, he says, that “open-source AI models are advancing at a rapid pace, even if most casual users don’t realize it.”

Well, although we can remain hopeful looking back at our distant history, Cowen’s selected examples may not be the most favorable evidence: Enterprises aren’t sure about AI and, among those that are, many are struggling to adopt the tech, as I argued in the introduction. Google Gemini Advanced was a fiasco, not as good as it should have been. And Cowen himself already explained why open-source efforts don’t really matter for the world at large: due to technical entry barriers, users care about open source far less than they care about the best ready-to-use products like GPT-4 and Claude 3, and even that interest is ~zero compared to the outdated GPT-3.5-based ChatGPT.

Isn’t the repetitive deflation of hopes an undeniable sign that it’s not just hype that has subsided but also the revolution itself?

Again, not necessarily.

Cowen failed to provide evidence that keeps our hopes up, but that’s because his article is eight months old and the field, in both its successes and failures, simply moves too fast. I provided my own “hopeful” evidence in the second big paragraph above; will we falsify that eight months from now as well? Yes, very possibly. But then we will be able to add a few more brushstrokes of updated hope, repeating the cycle. If this keeps happening, then Cowen’s thesis is partially right: We’re at a low-hype, “non-excitement” phase. Developments will fly under most people’s radar, but the field will still advance—slower than during the overhype stage, but advance nonetheless.

We may simply be unable to either prove or falsify the revolutionary nature of generative AI from the limited view of the present moment. Only the neutral, rational picture provided by distant hindsight will reveal the truth. We can confidently claim that two hundred years separated the materialized usefulness of printing books at scale from the invention of the technology; Gutenberg couldn’t.

The optimistic conclusion we can take away is that a calmer, non-hype stage will allow the much-needed adjacent work (technical, social, ethical) to be done—just as happened with the printing press, electricity, and the internet.

A few relevant questions can help us reframe and understand the current and coming discussions about the state of generative AI in this new phase.

How much harm has the overhype phase—not yet left behind or completely overcome—done to the industry and the underlying engineering and scientific efforts? Can the world ever receive generative AI with open arms when those enabling it have been so indifferent to the externalities? Will the world walk away from the promises of exponential growth with a positive vibe of “You need to attract first and deliver later—I get it” or with the bad taste of “you lied to me and hurt me and I won’t trust you again”?

High expectations can help a lot in the short term but they entail a dangerous trade-off with trust; there’s a tipping point of saturation beyond which hype hinders—and may even taint forever—future efforts.

There’s another question that should serve as a cautionary tale (although I’m afraid it won’t). It’s something I’ve been asking myself since GPT-3 (especially due to the over-the-top claims company executives have been sharing with the press, behavior that peaked post-ChatGPT):

Wasn’t generative AI incredible enough on its own (it is)? Why did they have to exaggerate its capabilities with wild promises and hyped narratives so much as to ensure the field could never deliver on them?

If generative AI is so amazing and great, was it really necessary for Sam Altman to say OpenAI’s end goal is “magic intelligence in the sky” and the beginning of a post-scarcity world? Or for Sundar Pichai to say AI is “more important than fire or electricity”? Or for Satya Nadella to say that generative AI feels as big as the internet felt in 1995? Or for Elon Musk to say AI could become the “most disruptive force in history”? Or for Jensen Huang to say it won’t be sensible to learn programming?

If they need that discourse, doesn’t it mean the underlying reality isn’t as convincing as they claim so they’re forced to do the convincing themselves?

Today, I still don’t have a satisfactory answer.

Finally, setting aside the hype and other emotional reactions, the broader question we’ll eventually face is this: Can those building AI in the silence of their own conviction prove that it’s worth being called a revolution, like the printing press, electricity, or the internet? Or will it join the pile of forgotten technologies that, once the hype subsided, never surged again?

Only time will tell.

Let’s just hope we won’t have to wait two hundred years, like those poor medieval peasants, to find out. Back then, they didn’t have any information to react to the slowness, but we’re not that forgiving in the 21st century, and even less so are the people paying for these would-be revolutions. Thankfully, if the world advances as fast as AI evangelists claim, it shouldn’t take long.




