Does anyone really understand how computers work?

There was a thread on the DelMar Slack recently about GitHub announcing its AI programming assistant. Yesterday I happened on this post by Eric Sink. Eric Sink and Joel Spolsky were the two great bloggers (before it was called blogging) who wrote about software as a business. I used to read everything they wrote, and often used their essays as reading assignments for my upper-level courses at Purdue. In this latest post, Eric echoes my feelings about the importance of knowing how computers work.

Eric wrote:

“ … it feels to me like yet another step toward shallow understanding in our field.

In my nearly 4 decades of writing code, I have consistently found that the most valuable thing is to know how things work. Nothing in software development is more effective than the ability to see deeper. To borrow Joel Spolsky's terminology, I claim that almost all abstractions are leakier than you think, so it is valuable to see through them.

… what I'm saying is that after I understand how things work, seeing how to do something is usually trivial.”


I also have spent the past four decades in the software development profession, and I agree completely with Eric. 
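To make that leaky-abstraction point concrete, here is a minimal sketch of my own, in the spirit of the examples in Joel's essay, written in TypeScript. A two-dimensional grid backed by a flat array promises that every cell access costs the same; the CPU cache underneath disagrees.

```typescript
// A 2D "grid" abstraction backed by one flat, row-major typed array.
// The abstraction says any cell access costs the same; the hardware disagrees.
const N = 2000;
const grid = new Float64Array(N * N);

function sumRowMajor(): number {
  let sum = 0;
  for (let row = 0; row < N; row++) {
    for (let col = 0; col < N; col++) {
      sum += grid[row * N + col]; // walks memory sequentially, cache-friendly
    }
  }
  return sum;
}

function sumColumnMajor(): number {
  let sum = 0;
  for (let col = 0; col < N; col++) {
    for (let row = 0; row < N; row++) {
      sum += grid[row * N + col]; // jumps N * 8 bytes per step, cache-hostile
    }
  }
  return sum;
}

console.time("row-major");
sumRowMajor();
console.timeEnd("row-major");

console.time("column-major");
sumColumnMajor();
console.timeEnd("column-major");
```

Both loops compute the same sum, but on most machines the column-major version runs several times slower. Nothing in the array abstraction warns you about that; only knowing how memory and caches actually work does.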

Whenever I talk to potential clients about DelMar, I always tell them we can write software to do just about anything. We aren’t limited to a single UI toolkit, OS, programming language, opinionated back-end framework, or a single way to persist data.

I'm confident making this claim because DelMar has a talented team with varied experience, we've backed the claim up over and over again, and, most importantly, we understand how computers work.

My first job in 1982 was as a Cobol mainframe programmer.  Then I wrote software for DOS PCs, then Windows PCs, then Palm Pilots, Blackberries, Pocket PCs, Windows phones, iPhones, Android phones, and the web.  Each platform used a different programming language and other development tools.  But because I knew the first principles of computer programming, the learning curve for each of these platforms never seemed all that steep to me. 

I (and I assume Mr. Sink and Mr. Spolsky) had the advantage of learning computer programming back when there wasn’t all that much to learn.  One simple programming language (Cobol) and compiler, one text editor (CANDE), one OS (Burroughs MCP) … that was about it. Oh, and I had to know how to read a hexadecimal memory dump, because that was the only debugging tool you had when trying to figure out why your program crashed. After one year at my first job, I felt I had learned just about everything there was to know about writing software for that company.

Compare an information system from 1982 with a modern system from 2022, and you’ll find that the number of computing layers and developer tools it takes to move data stored on a server to pixels on a screen is staggeringly huge. Today’s developers have to learn dozens, if not hundreds, of technologies for even the simplest systems.
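To give a feel for just the topmost sliver of that stack, here is a small sketch (the endpoint and field names are hypothetical, not anything we ship at DelMar) of what "move data from a server to pixels" looks like at the application layer today:

```typescript
// Hypothetical example: fetch one record over HTTPS and render it into the page.
// These few lines quietly sit on top of many layers they never mention.
interface Customer {
  id: number;
  name: string;
}

async function showCustomer(id: number): Promise<void> {
  // Network half: HTTP request, response status check, JSON decoding.
  const response = await fetch(`https://api.example.com/customers/${id}`);
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  const customer = (await response.json()) as Customer;

  // Display half: build a DOM node and let the browser turn it into pixels.
  const element = document.createElement("p");
  element.textContent = `Customer: ${customer.name}`;
  document.body.appendChild(element);
}

showCustomer(42).catch(console.error);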

At DelMar, we still believe it’s important for all developers to know how computers work. That’s why, when we hold job interviews, I ask questions about the first principles of computer programming rather than spending the time reviewing what they have on their GitHub or talking about fad technologies.  If we love a candidate but feel they are missing some fundamentals, we’ll hire them anyway and spend the first few months training them on the essential concepts.


Give Eric's post a quick read: https://ericsink.com/entries/depth.html

And read this one too if you haven't already: https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions/