David A. Patterson, now professor emeritus at the University of California, Berkeley, has led a distinguished academic career since joining the faculty in 1977. Patterson developed RISC I, a microprocessor architecture that simplified and streamlined the instructions needed for computing functions. This pioneering work earned Patterson, along with longtime collaborator John L. Hennessy, the Turing Award in 2018.

Patterson’s research has been closely tied to Moore’s Law, the observation that computing power doubles roughly every two years as the number of transistors on a chip grows exponentially.

Today, computer programmers are facing the end of Moore’s Law, with some predicting that the trend will stall by 2025 or sooner. In a recent conversation with Hippo Reads, Patterson shared his thoughts on what’s next.

On the state of Moore’s Law

Hippo Reads: Let’s start by talking about Moore’s Law. How do you view its impact?

Patterson: For forty or fifty years, Moore’s Law was there to help us build better, faster computers. In the 1990s and early 2000s, it let us design computers that doubled in performance every 18 months. People would throw away perfectly good working computers because their friend’s computer was two or three times faster and they were jealous.

Now, with the end of Moore’s Law, personal computers are barely improving at all. Last year, personal computers only improved three percent—that only doubles their speed every 20 years. You’d never throw your laptop away now unless it broke, right? You’d never be jealous of a friend with a faster laptop; they’re hardly any faster year over year.
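As a rough back-of-the-envelope check on that arithmetic (an illustrative aside, not Patterson’s calculation): compound growth at 3 percent per year doubles performance in roughly 23 years, in the same ballpark as the 20-year figure, while doubling every 18 months corresponds to roughly 60 percent improvement per year.

```python
import math

def doubling_time_years(annual_rate):
    """Years for performance to double at a given fractional annual improvement rate."""
    return math.log(2) / math.log(1 + annual_rate)

def annual_rate_for_doubling(years):
    """Annual improvement rate needed to double performance in the given number of years."""
    return 2 ** (1 / years) - 1

print(f"3% per year doubles performance in about {doubling_time_years(0.03):.0f} years")
print(f"Doubling every 18 months requires about {annual_rate_for_doubling(1.5):.0%} per year")
```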

That’s a big impact for everybody, particularly programmers who got used to the good old days when their programs would just get faster. They could add a lot more features because computers would double in performance every 18 months or so.

Hippo Reads: What do you see as the most promising ways to continue improving computing power, even without the assistance of Moore’s Law?

Patterson: It’s conventional wisdom among computer architects that the only thing we haven’t tried is domain-specific architectures. The idea is relatively simple: you design a computer that does one domain really well, and you don’t worry about the other domains. If we do that, we can get giant improvements.

A very popular area right now is machine learning. Take startup companies, for example: they’re all building domain-specific accelerators for machine learning, and they’re seeing improvements of a factor of ten or more by narrowing what they do.

Hippo Reads: Will we inevitably see developments in technology slow down?

Patterson: Yes. There’s another, lesser-known effect called Dennard scaling. Robert H. Dennard observed that even though you put more transistors on the chip, with each technology generation they use less energy. So, even though the chip could have twice as many transistors, it would still draw the same power. That scaling has ended, so we’re also limited by power. The transistors aren’t getting any better, and the power budget isn’t getting any better, so that’s a pretty severe limitation on how you design computers.
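For readers who want the mechanics behind that point, dynamic power is roughly proportional to transistor count times capacitance times voltage squared times frequency. The toy numbers below are an illustrative sketch (not Patterson’s figures), using the textbook scaling factor of about 0.7x per generation, to show why the end of voltage scaling makes power the binding constraint.

```python
# Toy model of chip dynamic power: P ~ transistors * C_per_transistor * V^2 * f.
# Scaling factors are the classic ~0.7x-per-generation values, chosen for illustration only.

def relative_power(transistors, capacitance, voltage, frequency):
    """Relative dynamic power of a chip versus a baseline of 1.0 for every factor."""
    return transistors * capacitance * voltage ** 2 * frequency

baseline = relative_power(1.0, 1.0, 1.0, 1.0)

# Dennard era: 2x transistors, each with ~0.7x capacitance and ~0.7x voltage,
# running ~1.4x faster, so total chip power stays roughly flat.
dennard_era = relative_power(2.0, 0.7, 0.7, 1.4)

# Post-Dennard: 2x transistors and ~0.7x capacitance, but voltage no longer drops,
# so power roughly doubles unless frequency or active silicon is held back.
post_dennard = relative_power(2.0, 0.7, 1.0, 1.4)

print(f"Dennard-era generation:  {dennard_era / baseline:.2f}x the power")
print(f"Post-Dennard generation: {post_dennard / baseline:.2f}x the power")
```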

If you’re trying to make computers cheaper, you can probably make cheaper computers. Slow, cheap computers: you can probably continue to improve there. But if you want computer hardware to get faster and more energy efficient, the only trick left, we think, is domain-specific architectures.

On the importance of open source development

Hippo Reads: Will developers have to become more creative at making their programs more efficient?

Patterson: It’s always an option to re-architect your software to make it run more efficiently on current hardware. Programmers got used to having computers that would get faster, so their software would run faster and support more features. They could afford to add more features and slow things down, but the net effect would be the same.

But that’s hard work. So, this is not going to be music to programmers’ ears. The good news: Going forward, we can still deliver increased performance. You’re just going to have to do all the work.

That has already happened. The so-called RISC microprocessors dominate all this, and what’s exciting about open architectures is that they let many more people participate in the design of computers. Because they’re open, we would expect more innovation, because there are more people to tackle some of the hard problems facing us: not just improving performance, but also really sticky issues like security.

There are some really big challenges facing us, and what’s nice about an open architecture is that everybody can work on it. There are no restrictions on who can innovate.

Hippo Reads: What are some security challenges you’re excited about open source hardware fixing?

Patterson: Security is a terrible problem in the computing industry—it’s embarrassing how bad it is. Thus far, we’ve largely tried to rely on software, thinking that if we just fix the bugs in software it will get more secure. That clearly hasn’t worked.

Patterson: Recently, there have been direct attacks on computer designs. The Spectre and Meltdown attacks work on the hardware itself, not really on the software, where most of the attacks have been. These are very sophisticated attacks that are not easily defended against, and it’s going to take a lot of cleverness to figure out how to defend against them. In particular, the Spectre attack opened up a new class of attacks on computer designs.

With open architectures, there are two things we can do. We can get many more people involved, rather than with proprietary architectures, where it’s primarily the people who work for those companies. Secondly, there are also open implementations. Using what are called field-programmable gate arrays, or FPGAs, you can have a hardware design that’s pliable, one you can make changes to. It doesn’t run as fast as real hardware, but it runs pretty fast, so you can run real software on it.

Somebody anywhere in the world could come up with an idea, modify an open RISC-V core, put their idea in it, put it on the internet, and see if it can survive attacks. If it can, great; if not, they can still learn from that attack and do the next iteration. But given that the software’s open and the architecture’s open, they can iterate every week, so they can rapidly make progress, hopefully, on this really important problem facing our field.

One of the other fringe benefits of both open source software and open architectures is that they let students see industrial-strength designs. They don’t have to work with just toy designs the professor comes up with; they can see what’s shipped in the real world. For example, “this is what the Linux operating system, or the LLVM compiler, looks like” and “this is what the open instruction set architectures look like.” You can dig into the nitty-gritty, and this is the real stuff.

That’s very exciting for students: these aren’t toy examples, they’re real designs they can play with.

On what’s new and exciting

Hippo Reads: Are there any open source hardware projects besides RISC-V that are particularly exciting for you?

Patterson: Another example comes from NVIDIA, the GPU company. They have something called NVDLA, which is one of these domain-specific architectures for machine learning. DLA stands for deep learning accelerator, and they’ve got a commercial design with a software stack, hardware, and documentation. They’ve made all of that publicly available so anybody can download and use it. So, there are two examples of open architectures that are these industrial-strength things. The NVIDIA one is not a full computer like RISC-V, it’s just an accelerator, but that’s very exciting for a lot of people.

Hippo Reads: Do you think we’re going to see more open architecture in the future?

Patterson: I think so. As a person who’s been in the field for a long time, I’m proud that we do open source software, where we collaborate on the software and then compete using that software. It’s a communal approach to technology, and it’s exciting to see that it actually worked in the real world for software. We’re trying to see if that’s going to work for hardware as well.

But if it’s going to work, we should see many examples, besides just RISC-V and NVIDIA, of other pieces of interesting computing equipment that people design themselves and make available to the community, for others to either use or to enhance and contribute back.

If domain-specific architectures or domain-specific accelerators are the path forward, we’re going to need designs that allow people to add accelerators to them. That’s very easy to do in RISC-V because it was designed for that; it’s an open architecture.

It’s less clear how that’s going to work with older designs and older business models, where they really don’t want you to play with the instruction set; they don’t want you to change it, because that might screw up the software stack. If domain-specific architectures are the one path forward, this is going to be a challenge for the traditional instruction set architectures from Intel and Arm.

One model would be to just get all the accelerators you need from Arm and from Intel, but will you be satisfied with that? Or do you want to design your own? Or will they be able to supply enough of them to meet all the demand? It seems like open architectures might be a better match for this opportunity, and they might have an advantage if this is where the technology is heading.

____________________________________

This conversation has been lightly edited for clarity.

Image by Michael Schwarzenberger from Pixabay