We can’t make transistors any smaller, is this the end of Moore’s law?

What does the end of Moore’s law imply for our technological progress?

John Loeffler
An abstract 3D render of a light blue circuit board | Source: matejmo/iStock

For at least a decade now, there has been a lot of talk about the end of Moore’s law and what it will mean for modern society.

Since the invention of the transistor in 1947, the number of transistors packed onto the silicon chips that power the modern world has steadily grown in density, driving the exponential growth of computing power over the last 70 years.

A transistor is a physical object, however, and like every physical object it is governed by the laws of physics. That means there is a hard limit to how small a transistor can be.

Back when Gordon Moore made his famous prediction about the pace of growth in computing power, no one was really thinking about transistors at nanometer scales. He simply observed the rapid advancement and miniaturization of computer chips over time, calculating that the number of transistors in an integrated circuit would double about every year (a pace he later revised to roughly every two years).
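Moore’s extrapolation is simple compound doubling, which can be sketched in a few lines of Python. The 1971 start year and the Intel 4004’s roughly 2,300-transistor count used below are illustrative reference points, not figures from this article:

```python
# A back-of-the-envelope sketch of Moore's law as compound doubling.
# The baseline (Intel's 4004, ~2,300 transistors in 1971) is only an
# illustrative reference point, not a figure from the article.

def projected_transistors(start_count, start_year, target_year, doubling_period=2):
    """Project a transistor count forward, doubling once per period."""
    doublings = (target_year - start_year) / doubling_period
    return start_count * 2 ** doublings

# Fifty years of doubling every two years is 25 doublings,
# a roughly 33-million-fold increase.
print(projected_transistors(2300, 1971, 2021))  # → 77175193600.0
```

The striking part is not the starting count but the exponent: whether the doubling period is one year or two only changes how quickly the curve becomes astronomical.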

But as we enter the third decade of the 21st century, our reliance on packing more transistors into the same amount of silicon is brushing up against the boundaries of what is physically possible, leading many to worry that the pace of innovation we’ve become accustomed to might come to a screeching halt in the very near future.

History of the transistor

A replica of the first transistor on display at the White House in 2000 | Source: White House Archives

The transistor is a semiconductor device that usually has at least three terminals connecting it to an electrical circuit. Generally, one of the terminals controls the flow of current through the other two, which allows for rapid switching in a digital circuit.

Prior to the transistor, this kind of rapid circuit switching was done using a thermionic valve, better known as the vacuum tube.

These vacuum tube triodes were significantly larger than transistors and required considerably more power to operate. Unlike transistors, they aren’t “solid-state” components: they depend on a heated filament emitting electrons across the vacuum inside the tube, and that filament can burn out during normal operation.

This meant that vacuum tube-based electronics were large, hot, and expensive to operate, requiring regular maintenance to replace failed tubes, any one of which could bring the entire machine to a halt.

The transistor was invented at AT&T’s Bell Labs by John Bardeen and Walter Houser Brattain, working under the supervision of William Shockley. Although the transistor had existed in concept for about 20 years before that, a working model was not built until the Bell Labs work of 1947. Shockley improved on that design with the bipolar junction transistor in 1948, and it was this implementation that first went into mass production in the 1950s.

The next major leap came with silicon surface passivation, which allowed silicon to replace germanium as the semiconducting material for transistors and, later, integrated circuits.

In November 1959, Mohamed Atalla and Dawon Kahng at Bell Labs invented the metal–oxide–semiconductor field-effect transistor (MOSFET), which used significantly less energy and was much more scalable than Shockley’s bipolar junction transistor.

MOSFETs are still the dominant transistor in use today and, as a single unit, are the most manufactured device in human history. Because MOSFETs could be made increasingly smaller, more and more transistors could be fabricated into an integrated circuit, enabling increasingly complex logical operations.

By 1973, William C. Hittinger, the Executive Vice President of Research and Engineering for RCA, was boasting of putting “more than 10,000 electronic components on a silicon ‘chip’ only a few millimeters across.” Today’s transistor densities far exceed these early advances by orders of magnitude.

How Gordon Moore inadvertently invented Moore’s law

Gordon Moore in his cubicle at the Robert Noyce Building in Santa Clara, Calif. in 2013 | Source: Wikimedia Commons

Gordon Moore isn’t a household name, but his handiwork is in nearly every home and office in the industrialized world. Though he would go on to become president of Intel Corporation and eventually its Chairman Emeritus, Moore wasn’t nearly that esteemed when he described what we now call Moore’s law in 1965.

An electrical engineer, Moore worked in the Shockley Semiconductor Laboratory division of Beckman Instruments, then headed by Shockley himself. When several of Shockley’s employees, including some of his protégés, became disaffected with his leadership, they struck out on their own in 1957 to form Fairchild Semiconductor, one of the most influential companies in history.

As Fairchild Semiconductor’s director of R&D, Moore was the natural person to ask about the current state of the industry, and so in 1965 Electronics magazine asked Moore to predict where the semiconductor industry would be in ten years’ time. Looking at the rate of innovation at Fairchild, Moore simply extrapolated forward in time.

In the years since Fairchild had begun fabricating semiconductors, the cost of producing components had declined, and the size of the components themselves had been roughly halved every year. This allowed Fairchild to produce the same number of integrated circuits each year, but with twice as many transistors on each one as the year before.

“I did not expect much precision in this estimate,” Moore wrote in 1995. “I was just trying to get across the idea [that] this was a technology that had a future and that it could be expected to contribute quite a bit in the long run.”

“I think that this is truly a spectacular accomplishment for the industry. Staying on an exponential like this for 35 years while the density has increased by several thousand is really something that was hard to predict with any confidence,” Moore added.

Moore’s prediction held more or less steady for about a decade, after which Moore revised his estimates to doubling the transistor density once every two years. “I have never been able to see beyond the next couple of generations [of semiconductors] in any detail. Amazingly, though, the generations keep coming one after the other keeping us about on the same slope,” Moore wrote. “The current prediction is that this is not going to stop soon either.” 

This might have been true in 1995, but before long Moore’s law would start pushing against the bounds of physics and facing an existential challenge.

Why is Moore’s law in trouble?

Source: Intel

Currently, the problem with Moore’s law is that transistors are now so small that there just isn’t much more we can do to shrink them. The transistor gate, the part of the transistor that switches the flow of current on and off, is approaching a length of just 2 nanometers, according to the Taiwan Semiconductor Manufacturing Company’s production roadmap for 2024.

A silicon atom is about 0.2 nanometers wide, which puts a 2-nanometer gate at roughly 10 silicon atoms across. At these scales, controlling the flow of electrons becomes increasingly difficult as all kinds of quantum effects play out within the transistor itself.

With larger transistors, an atomic-scale deformation of the crystal doesn’t affect the overall flow of current, but when you only have about 10 atoms to work with, any change in the underlying atomic structure will affect the current through the transistor. Ultimately, the transistor is approaching the point where it is as small as we can make it and still have it function. The way we’ve been building and improving silicon chips is coming to its final iteration.
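The atom-counting arithmetic is easy to check. This small sketch simply restates it, using the article’s approximate 0.2 nm figure for a silicon atom:

```python
# Rough arithmetic from the article: how many silicon atoms span a
# 2 nm transistor gate. The 0.2 nm atomic diameter is the article's
# approximation, not a precise crystallographic value.
SILICON_ATOM_NM = 0.2
GATE_LENGTH_NM = 2.0

atoms_across = GATE_LENGTH_NM / SILICON_ATOM_NM
print(round(atoms_across))  # → 10: only about ten atoms of headroom
```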

There is also another potential pitfall for Moore’s law, and that is simple economics. The cost of shrinking transistors isn’t decreasing the way it was in the 1960s. At best, it’s decreasing slightly generation over generation, while diseconomies of scale are starting to weigh fabrication down. When the demand for semiconductor chips was first taking off, the engineering capacity to produce the chips was expensive, but it was at least available. With demand skyrocketing for everything from smartphones to satellites to the Internet of Things, there just isn’t enough capacity to meet it, which raises prices at every step of the supply chain.

The server room at a Facebook data center | Source: Facebook/Meta

What’s more, when the number of transistors doubles, so does the amount of heat they can generate. The cost of cooling large server rooms is becoming increasingly untenable for many of the businesses that are the biggest purchasers of the most advanced processing chips. As businesses extend the life and performance of their current equipment to save money, the chipmakers responsible for fulfilling Moore’s law bring in less revenue to devote to R&D—which itself is becoming more expensive.

Without that extra revenue, it becomes much harder to overcome the physical impediments to shrinking transistors even further. So even if the physical challenges don’t bring an end to Moore’s law, the economics almost certainly will.

Ok, so what are we doing about it?

Well, that’s the trillion-dollar question at this point. We’ve spent the past 70 years experiencing unprecedented technological advance, to the point that rapid technological progress is taken as a given by just about every industrialized society.

How do you suddenly bring that to a halt? What would that even feel like? What would it mean to have the same iPhone for 30 years? Obviously, we could simply deal with that as a society. There is nothing in our DNA that mandates we have a new iPhone every two to three years and an entirely new computer every five. We’ve simply become accustomed to that pace of progress, and if that pace changes, we would acclimate ourselves to that as well. 

After all, humanity has had computers for less than a century, or roughly 1/3,000th of the time we’ve been on this planet as a species. We will certainly find a way to endure such a calamitous hardship.

Alternatively, we can look at the end of Moore’s law with excitement and anticipation. Adversity is the mother of invention, after all. We’ve spent the past 70 years trying to figure out how to shrink the transistor more and more and now that path of innovation is reaching its end. 

Shrinking the transistor was never the only way forward, however, and if we are no longer putting all our efforts into it, we can redirect that energy into other areas and discover breakthroughs that might make the invention of the transistor look banal in comparison. We won’t know until we explore these new avenues, and the end of Moore’s law might be the signal we need that it’s time to start looking for a new engine of progress.

Moore’s law is dead, long live Moore’s law!


In the end, Moore’s law was never a “law” to begin with but more of a self-fulfilling aspiration. We expected transistor density to double every year, then every two years, and so we looked for how we could accomplish that task.

Whatever the next thing is, whether it’s quantum computing, machine learning and artificial intelligence, or even something we don’t even have a name for yet, we’ll find a new aspiration to drive that innovation forward.

Ultimately, our fascination with Moore’s law was never really about the density of the transistors. Most people who’ve heard of Moore’s law couldn’t begin to explain what transistor density even means, much less how interlocking transistors form logical circuits or how the smartphone in their pocket works (or a 1970s pocket calculator, for that matter). For most of us, Moore’s law was always about our expectations of progress, and that is something that is largely up to us.