About 10 months ago, I wrote a post on Twitter that ended up being one of my most liked, commented-on, and viewed posts. A lot of what I said there still holds up. The format is Twitter-friendly, but it should read well here too. Do let me know if that isn't the case.

I worked as a transistor pathfinding researcher in the 2000s and 2010s.

The economics of transistor and computer-architecture R&D and engineering up to the present day holds lessons for how LLM-based AI and its dominant players (OpenAI, etc.) might evolve.

Let's explore this further.

There are many similarities between today’s Intel CPUs and those of the ’90s:

Both are based on the Von Neumann architecture, use the x86 ISA, use a largely silicon-based manufacturing process, have a similar pipeline architecture, and use registers to hold data while processing.

It’s worth asking:

Weren't better, more performant CPU architectures than x86 developed in all these years?

Why didn't they find their way into servers and laptops over the years?

Take another, even more fundamental example: transistor technology.

CPU transistor technology today is still largely Si-based, still uses photolithography (with EUV), and still largely uses the same types of manufacturing processes that were developed in the ’90s and early 2000s.

Again, it’s worth asking:

Weren't there better-performing transistor materials than silicon?

And if so, why don’t we see them fully replacing Si in most (or at least some) commercially available, mass-produced CPUs?

In both cases, the answer is YES! But only in principle.

Prestigious journals are filled with research attempting to replace x86 (and other Von Neumann-based architectures) in CPUs, and to replace silicon as the base transistor material.

So why hasn’t it happened yet?

Here’s one possible answer: It’s not about the technology, it's about the economics.

Once a mature technology is adopted to the point where it begins to have a measurable, positive impact on economic activity, those same economic forces end up favoring (and sometimes ensuring) its continued scaling, adoption, and dominance.

The economic flywheel effect is difficult to overcome.

It becomes cheaper to invest in improving a technology with proven benefits and known economic trade-offs than it is to spend time reinventing the wheel.

Over time, more investment flows into keeping the flywheel going.

As a PhD student, and later as an industry researcher, I found it extremely frustrating to see new, exciting (and sometimes promising) research being deprioritized in the interest of keeping the economic machine chugging along.

But when you've invested billions of dollars in a new fab, you can’t recover that investment unless you fully capitalize on the things you already make pretty well.

Economics always ends up winning in the long run.

Even before the ChatGPT API price slash, it was pretty clear that LLMs were a big deal.

With a 10x price drop (and likely more to come), it’s clear that there will be a tremendous short-term, measurable economic impact once these LLMs are integrated into everyday products and services.

So, in an economic context, the ChatGPT announcement is a bit of an earthquake, really.
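To make that impact concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it (the per-1K-token prices, the traffic, the tokens per request) is a placeholder assumed purely for illustration, not actual OpenAI pricing or real usage data.

```python
# Hypothetical back-of-envelope: what a 10x API price cut does to a product's annual LLM bill.
# All figures below are illustrative assumptions, not real pricing or traffic numbers.

PRICE_PER_1K_TOKENS_OLD = 0.020   # $ per 1K tokens before the cut (assumed)
PRICE_PER_1K_TOKENS_NEW = 0.002   # $ per 1K tokens after a ~10x cut (assumed)

REQUESTS_PER_DAY = 1_000_000      # assumed traffic for a mid-sized product
TOKENS_PER_REQUEST = 750          # assumed prompt + completion length


def annual_llm_bill(price_per_1k_tokens: float) -> float:
    """Yearly API spend at a given per-1K-token price."""
    tokens_per_year = REQUESTS_PER_DAY * TOKENS_PER_REQUEST * 365
    return tokens_per_year / 1_000 * price_per_1k_tokens


print(f"Before the cut: ${annual_llm_bill(PRICE_PER_1K_TOKENS_OLD):,.0f} / year")  # ~$5.5M
print(f"After the cut:  ${annual_llm_bill(PRICE_PER_1K_TOKENS_NEW):,.0f} / year")  # ~$0.5M
```

The exact numbers don't matter; the point is that an order-of-magnitude price drop moves whole categories of features from "too expensive to ship" into "obviously worth shipping", which is exactly how the flywheel starts spinning.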

I suspect that unless competitors really up their game, OpenAI will quickly begin to dominate.

Think Intel and CPUs, or at the very least Amazon and AWS.

Why?

Just like for Intel, x86, and silicon, we’re now seeing immense investment in two main verticals:

1. Improving the existing tech (e.g. data annotation, ML tooling, fine-tuning) to keep current scaling trends going.

2. Using the tech to improve people’s lives (LLM-based products).

Just like with Intel, x86, and silicon, there is no shortage of investment and researchers working on the next big thing that will replace transformer-based LLM foundation models.

In the early days, when the economic impact is unclear, there tend to be lots of contenders.

With the passage of time, as ChatGPT and its transformer-based cousins entrench themselves in everyday products, the economic forces driving this adoption will make them more and more difficult to replace.

Sure, we’ll have the occasional report of a model that’s 50% better than ChatGPT/Transformers under certain conditions.

But is it economical to run the model and use it in everyday products?

Is it 2-10x cheaper at iso-performance?

Does it have the tooling and dev support to make products work the way they do with ChatGPT?
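As a hedged sketch of why that question matters, here is a toy break-even calculation (again, every number is an assumption invented for illustration): even a challenger that is genuinely cheaper at iso-performance has to pay off the cost of leaving the incumbent's ecosystem.

```python
# Hypothetical sketch: does a challenger that's cheaper at iso-performance actually win
# once switching and ecosystem costs are counted? All numbers are invented assumptions.

INCUMBENT_ANNUAL_INFERENCE_COST = 2_000_000  # $/year spent on the entrenched model (assumed)
CHALLENGER_PRICE_RATIO = 1 / 3               # challenger assumed 3x cheaper at iso-performance

ONE_OFF_SWITCHING_COST = 1_500_000  # assumed: re-integration, evals, new tooling, retraining staff
ANNUAL_ECOSYSTEM_GAP = 300_000      # assumed ongoing cost of the challenger's weaker tooling/support


def years_to_break_even() -> float:
    """Years until cumulative savings outweigh the cost of leaving the incumbent."""
    challenger_inference_cost = INCUMBENT_ANNUAL_INFERENCE_COST * CHALLENGER_PRICE_RATIO
    annual_savings = (INCUMBENT_ANNUAL_INFERENCE_COST
                      - challenger_inference_cost
                      - ANNUAL_ECOSYSTEM_GAP)
    if annual_savings <= 0:
        return float("inf")  # the switch never pays for itself
    return ONE_OFF_SWITCHING_COST / annual_savings


print(f"Break-even after ~{years_to_break_even():.1f} years")  # ~1.5 years under these assumptions
```

Under these made-up numbers the switch still pencils out eventually; the flywheel argument is that the incumbent's tooling, price cuts, and entrenchment keep pushing that break-even point further away.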

Now, I understand that ‘this time it’s different’.

I get that there’s open source in a way there couldn’t be for transistors and architectures.

I get that it’s far easier to spin up a bunch of GPUs and test a new architecture.

You could also have a product released by Stability that outperforms ChatGPT while allowing inference on a simple MacBook.

Some version of this product will exist, just as microcontrollers and other smaller, cheaper chips exist and do really cool stuff.

This is all true.

But as you read this, OpenAI is busy planning the release of GPT-4 with a gazillion more parameters.

Exactly like Intel used to turn out x86 chips that magically got better and did so much more every two years.

The economic incentives for scaling end up winning.

So yeah, maybe it’s different this time.

The rate at which new developments will happen is certainly faster.

But remember, once you get this stuff into real-time products and create an industry around it, the clock begins to tick on alternatives. Economics wins.

Tick tock.