The von Neumann style and its curse on computer performance
Let us dive deeper into the backdrop of mechanicalism and functionalism.
In the previous instalment of this series on theoretical computer science, we looked at John Backus’ research work and Turing Award lecture about “liberating programming from the von Neumann style”. The main object there was to elucidate how the von Neumann and Harvard architectures relate to my own concepts of functionalism and mechanicalism, respectively. How well that went is left as an exercise for the reader, but I want to discuss more about what these things are and why they matter.
Presently, all computers of note adhere to the principles that define a von Neumann computer. They adhere both concretely, in their hardware, and abstractly, in all of the programming languages they use. Even functional languages of lambda calculus fame are still sadly bound to the von Neumann style because of their value-level semantics. This is true despite the apparent protests of one angry Wikipedian who says that “they are in actual practise value-level languages, although they are not thus restricted by design.” (Emphasis theirs.) Oh, to be a Lisp-slinging functional idealist.
Of course, modern hardware often boldly claims to be of a “Harvard architecture”, but this implementation detail is merely a transparent optimisation that must be worked around at times to allow for such things as self-modifying code. Because duh, why not have multitudes of data buses so you can send more data at a time? In short, it’s not a Harvard architecture machine in any serious sense of the phrase. Real Harvard machines would be far more powerful if they existed.
Backus explains that von Neumann programming languages are all abstract isomorphic copies of the von Neumann architecture. The isomorphism maps like so:
program variables ↔ computer storage cells
control statements ↔ computer test-and-jump instructions
assignment statements ↔ fetching, storing instructions
expressions ↔ memory reference and arithmetic instructions
If you’re feeling confused, like you think you understand but aren’t sure, don’t worry. You do. This is essentially just a generalised description of every computer you’ve probably ever seen.
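To make the isomorphism concrete, here is a sketch of my own in Python (Backus wasn’t talking about Python specifically, but any von Neumann language will do). Each construct in this perfectly ordinary program lines up with a machine-level counterpart:

```python
# An ordinary von Neumann-style program, annotated with the
# machine-level counterpart of each language construct.

def sum_array(xs):
    total = 0                  # program variable   <-> computer storage cell
    i = 0                      # program variable   <-> computer storage cell
    while i < len(xs):         # control statement  <-> test-and-jump instruction
        total = total + xs[i]  # assignment         <-> fetch/store; the RHS
                               # expression          <-> memory reference + arithmetic
        i = i + 1              # assignment         <-> fetch, increment, store
    return total

print(sum_array([1, 2, 3, 4]))  # prints 10
```

The program spends its life shuttling single words between “memory” (variables) and the “processor” (expressions), one assignment at a time, which is exactly the word-at-a-time bottleneck Backus complained about.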
Backus also explains that the big drawback to the von Neumann architecture, and by extension to von Neumann languages, is that they constitute a disorderly field with few algebraically exploitable properties.
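What does an “algebraically exploitable property” even look like? Here is one illustration of my own (not Backus’ notation): the map-fusion law, which function-level programs expose and statement-by-statement von Neumann code hides. A compiler or a human can apply it mechanically to turn two traversals into one:

```python
# Map fusion: mapping g over a list and then mapping f over the result
# is equal to a single map of the composition f . g.
# An algebraic law like this can be applied mechanically as an optimisation.

def compose(f, g):
    return lambda x: f(g(x))

def fmap(f):
    # A "combining form": lifts a function on values to a function on lists.
    return lambda xs: [f(x) for x in xs]

double = lambda x: x * 2
inc = lambda x: x + 1

xs = [1, 2, 3]
two_passes = fmap(double)(fmap(inc)(xs))   # two traversals of the list
one_pass = fmap(compose(double, inc))(xs)  # fused: one traversal

assert two_passes == one_pass == [4, 6, 8]
```

The equivalent imperative loop, with its mutable counter and in-place updates, offers no such law you can lean on without first proving things about the program’s state.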
Why does this matter? Performance. You know how big data uses farms of accelerators from Nvidia to run their racket? They depend on some level of divorce from the von Neumann architecture to be cost-effective and therefore competitive in the marketplace.
Here’s the funny thing, though: they’re really bad at it. Seriously. Here’s the problem: executives have only a high-level understanding of what makes their business tick, and they depend on software engineers, software architects, and computer scientists once it’s time to get into the weeds about what that really is. And when the buck is passed to them, well, they’re all stuck in the vicious cycle of von Neumann style that Backus was talking about. Even in the supposed reprieve of functional programming, the function-level style he invented sees virtually no real-world usage. If theory is any guide, that must mean there’s still change left on the table.
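For reference, the function-level style in question is the one from Backus’ lecture, where whole programs are built by combining functions rather than by manipulating values. His famous example is the inner product, roughly IP = (insert +) ∘ (apply-to-all ×) ∘ transpose. A rough Python rendering of mine, with the combining forms spelled out as higher-order functions:

```python
# A sketch of Backus' function-level inner product from his Turing lecture:
# IP = (insert +) . (apply-to-all *) . transpose
# No variable ever names an intermediate value; the program is pure plumbing.
from functools import reduce

def transpose(pair):
    # <<1,2,3>, <6,5,4>> -> <<1,6>, <2,5>, <3,4>>
    return list(zip(pair[0], pair[1]))

def apply_to_all(f):
    return lambda xs: [f(x) for x in xs]

def insert(f):
    return lambda xs: reduce(f, xs)

def compose(*fs):
    def run(x):
        for f in reversed(fs):  # apply right-to-left, as in Backus' notation
            x = f(x)
        return x
    return run

mul = lambda p: p[0] * p[1]
add = lambda a, b: a + b

inner_product = compose(insert(add), apply_to_all(mul), transpose)
print(inner_product(([1, 2, 3], [6, 5, 4])))  # 1*6 + 2*5 + 3*4 = 28
```

Note there is not a single named intermediate variable in the definition of `inner_product` itself; that is precisely what makes such programs amenable to algebraic manipulation.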
And oh boy do we find heaps and heaps of this when we dig into the weeds. Proprietary shader languages. Virtual memory. PCIe buses. Ancient ISAs patched a billion times over to the present day. New ISAs that constitute little more than rent-seeking committees and fan-favourite laundry lists of features that don’t qualify as innovation at all. Everything is a big disgusting hodgepodge of ancient code, pointless legacy, and death by committee. Although leading developers have made immense strides at cutting out the support that users of old computers depend on, they are more boxed into their computers’ narrow, preconceived paradigms than ever before.
Have you ever read a principal software engineer’s blog at random? I have. It’s always filled with the same kind of tut-tutting about how C is bad, written by somebody who got paid mid-six-figures at some faceless billion-dollar corporation to port their hypervisor technology to another ISA. The theory at work here hasn’t progressed an inch since before Steve Jobs got fired from Apple.
This is a big problem: executive-adjacent leading engineers cannot practically live without all of their comfortable givens like large pointer sizes, virtual memory regimes, out-of-order execution, and automagically managed processor caches that programmers have no direct control over. It’s no secret how energy-intensive these things are, and the bottom line is, they’re very wasteful things for the entire industry to be taking for granted. They’re also big things that keep all programmers chained by code dependency to the von Neumann style.
There’s a lot of performance that properly architected computers are capable of regardless of the state of semiconductor processes, and virtually nobody is seizing it because their thinking is stuck in the 1990s, where the best one can hope for is a plug-and-play accelerator card, or a datacentre filled with plug-and-play accelerator cards. Unfortunately, all of your acceleration stops being the ideal it was sold as the moment you plug it into a backplane or a motherboard, because that’s where it has to transit the mortal world of PCIe and Ethernet, among other comparatively pedestrian protocols. I know, there’s probably some tech out there angrily cursing me for being so dismissive of fancy five-figure-sum RAID controllers and the engineering marvel that is PCI Express, not to mention QSFP. You need to remember that this is small potatoes compared to the inside of a modern processor.
That kind of cognitive dissonance is really what I’m trying to get at here. It’s barely a technical problem that we can’t graduate beyond the von Neumann style despite how badly it hampers our abilities. It’s really more a social problem, and a collective one at that. People are very siloed into their own comfortable little lanes where they do the work that they’re told to and nothing else. Businesses don’t have the capacity to foster the kind of cross-domain knowledge that this entails. They think they do, and then promptly prove otherwise when the C-suite wants to “plug in” some existing “IP” that they think they “own” and the whole model gets utterly ruined.
In the next instalment I’m going to talk about my ideas on what to do to solve this problem. There is a generalised way to compartmentalise von Neumann machines and collate them into what could be considered a proper Harvard style computer. This should be out soon.