Design principles for programming languages

Alex Feinman
5 min read · Mar 17, 2017


“Wait, I know this! This is Linux!” — dinosaur bait, age 9

The primary challenge when programming is managing human fallibility. Humans have a limited capacity for understanding and remembering, one which is radically outstripped by any but the simplest of programs.

You, after reading two pages of badly-written code.

Programming languages are designed to help complex programs fit into tiny human brains. This article will talk about specific principles that can help shape language design to best achieve this goal.

Designing For Humans

The design of a programming language has a large impact on how much brain fullness it creates. Its design elements include:

  • Syntax: the actual glyphs used for expressing concepts, plus the production rules for combining them. These include alphabetic keywords (if, end, etc.) and symbols (!@#$%, etc.). Numbers are generally reserved for numeric values. Some languages go overboard with symbols; others eliminate them entirely. Either way, it's all syntax.
APL wins on use of one-character glyphs. And with the widespread adoption of Unicode, it's finally possible to write APL code without a keyboard from 1962!
  • Vocabulary: the names of common functions, properties, methods, and so forth; plus the conventional naming schemes used within the language. I’m also including standard or widely-used libraries; they are a crucial part of the vocabulary of a language.
  • Conventions: the practice of using a language. While not technically part of the language itself, a language community often evolves a set of strong conventions ("don't use == when comparing strings!") that work around the language's gotchas; a sketch of that particular gotcha follows this list. The combination of (language+practice) is stronger, and more interesting, than the language alone.
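
To make that gotcha concrete, here is a minimal Java sketch; the class name StringComparison is invented for illustration. In Java, == compares object identity, while .equals() compares contents.

public class StringComparison {
    public static void main(String[] args) {
        String a = "hello";
        String b = new String("hello");   // same contents, different object

        System.out.println(a == b);       // false: identity comparison
        System.out.println(a.equals(b));  // true: content comparison
    }
}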

As a well-known example, Java had[1] a strongly enforced set of naming conventions, e.g.:

  • Package names are in lowercase; packages can contain packages in a tree structure, and each leaf is generally a short name describing its contents ('util', etc.).
  • Class names are in TitleCase, whereas variables are usually in camelCase. This simplifies identification, though in practice IDEs often color-code them for you.
  • Classes fulfilling the same pattern often share a naming scheme: FooImpl and BarImpl are reference implementations of the interfaces (or abstract classes) Foo and Bar, while FooWriter is almost certainly a class that knows how to save a Foo to an output stream. These patterns are sketched below.
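
Here is a small, hypothetical sketch of those conventions in action; the names (com.example.store, Order, OrderImpl, OrderWriter) are invented for illustration, not taken from any real library.

package com.example.store;             // package names: lowercase, tree-structured

import java.io.IOException;
import java.io.Writer;

// Class and interface names in TitleCase; the interface is named for the concept...
interface Order {
    int itemCount();
}

// ...and "Impl" marks a concrete reference implementation of it.
class OrderImpl implements Order {
    private int itemCount;              // variables in camelCase

    public int itemCount() {
        return itemCount;
    }
}

// A FooWriter conventionally knows how to write a Foo to an output stream.
class OrderWriter {
    public void write(Order order, Writer out) throws IOException {
        out.write("items=" + order.itemCount());
    }
}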

In addition to helping with recognition (“what is that?”), these standards help with name production (“what should I name this?”). Given that naming things[2] takes up important brain space that could be better used for understanding other problems, it’s helpful to have conventions to fall back on. Finally, naming conventions often force you to think about the pattern underlying what you’re making, potentially helping you realize you should implement it a different way.

And then there are the conventions followed across most modern languages, e.g.:

  • Variables on the left side of the equals sign are the ones changed by the assignment[3].
  • Lines are executed one after another, in order, or at least in some sort of understandable order[4]. (A trivial sketch of both follows this list.)

…and so forth.
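
A trivial sketch of both conventions; Java is used here only for concreteness, and the class name InOrder is invented for the example.

public class InOrder {
    public static void main(String[] args) {
        int i;
        i = 3;                   // the variable on the left of = is the one that changes
        i = i + 1;               // runs after the line above, so i is now 4
        System.out.println(i);   // prints 4
    }
}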

These all matter; but why?

Seven Principles

Over time I’ve boiled down my thinking on language design to seven design principles:

  1. Readability: a programmer can easily understand what the code does by reading it.
  2. Expressability: a programmer can easily figure out how to write what they have in their mind.
  3. Predictability: a programmer can generalize new code from examples.
  4. Regularity: the code helps a programmer concentrate on the unusual; the ordinary stuff fades into the background.
  5. Concision: it's short.
  6. Summarizability: a programmer can think about the code at whatever level of abstraction they currently care about.
  7. Separability: a programmer can edit and test the code in parts.

These principles are often in tension; concision, for example, is often at odds with readability. APL's 1↓A notation for removing the first element of an array is more concise than MATLAB's A(2:end), but may be harder for a reader to understand. JavaScript provides the single-purpose verb A.shift() for this; the name is somewhat arbitrary (why is shift the counterpart of pop?), but once learned, it's easy enough to recall.
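
For comparison, a sketch of the same operation in Java, which leans the other way on this trade-off: verbose but fairly explicit. It uses Arrays.copyOfRange from java.util; the class name DropFirst is invented for the example.

import java.util.Arrays;

public class DropFirst {
    public static void main(String[] args) {
        int[] a = {10, 20, 30};
        int[] rest = Arrays.copyOfRange(a, 1, a.length);  // everything from index 1 onward
        System.out.println(Arrays.toString(rest));        // prints [20, 30]
    }
}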

The art of the designer is in balancing these principles and coming up with something that forms a cohesive whole.

Two Astounding Reasons!

UXers and folks with a background in psychology may notice these principles help us achieve two related goals:

Recognition is easier than recall: it’s easier to recognize something once we’ve seen it than it is to summon it up out of thin air. If you’re forced to recall, then cued recall (“think of an animal, say, one that roars”) is easier than free recall (with no hints at all).

In a graphical interface, the interface provides the cue: it prompts the user what to do next. “Buy!” “Enter your address!” “Press to jump!”

But in a programmatic context, the user can type almost anything next, which is a tremendous burden on the brain. So languages, and language conventions, are designed and agreed upon to help reduce that burden.

A trivial example, in C:

for (int i = 0; i < n; ___)
    printf("hello world");

What goes in the ___? In 99% of cases, i++. But why? We've decided on a pattern to help you understand what comes next. It's a form of serial recall.

It is also a form of chunking. The entire for-loop is a single chunk; parts of it recede into the background (did you even bother to check what data type i had?). To help with chunking, other languages provide other constructs, as in this Ruby example:

n.times { puts "hello world" }

That's both more readable and more concise. It even leaves out certain implementation details (notably the temporary variable i) that are unneeded; they're inessential complexity for this particular program.

Following these principles helps us create and maintain code by compensating for the aforementioned human fallibility. They reduce the load on memory, reduce the effort to figure out what to write, and reduce the effort to understand what you’ve read.

Astute readers may note these are the same mechanisms that underlie other forms of interaction design, and that’s on purpose.

Part Two: Details

So, that’s the quick lesson. But you’re still here. You want more. You want…details.

Stay tuned.

Footnotes!

[1] I refer to Java in the past tense here because I haven't used it in some years, and it has changed significantly since the days when I worked for JavaSoft. Concerns about the runtime (notably its insecurity, licensing issues, and mediocre performance in both speed and memory) make people grumpy about the language. But the language itself had a lot of good features.

[2] “There are only three hard things in computer science: cache invalidation, naming things, and attributing quotes correctly.” — Abraham Lincoln or somebody.

[3] This is so innate to modern programmers that we speak of LHS and RHS as if it were the only design; but languages like R (3->I) and COBOL (MOVE 3 TO I) go the other direction instead.

[4] Yes, I'm aware of things like Befunge; stay tuned for part two if UR a 1337 hax0r

[5] See also this discussion by the Nielsen Norman Group.
