Abstractions at the boundaries
Last week we looked at modularity and what that really meant. As part of looking at what a module is, we’ve already kinda identified what an abstraction must be: whatever we get out of a module, right? But this definition isn’t especially helpful in deepening our understanding.
Let’s start by talking about math
Abstraction has a much clearer definition in mathematics than in computer science, so let’s start there because it actually still applies. In mathematics we can take one of two very broad approaches to thinking about the thing we’re going to do mathematical reasoning with.
We can pick a concrete representation, like thinking about the set-theoretic definition of the integers. This has two effects on how we can reason: first, anything that can be done with the integers, we can do; we don’t suffer any artificial limitations. Second, anything we prove about the integers will only apply to the integers. We get no guarantees about generality.
Or, we can pick an abstract representation, like thinking about a group. For those who’ve managed to avoid learning what a group is, it’s pretty much just a “thing” with a binary operation and an identity element, and some properties like associativity and the presence of inverses. So the integers are a group: addition and zero have those properties.
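To pin that down, here are the usual axioms spelled out, for a set G with operation · and identity e:

$$(a \cdot b) \cdot c = a \cdot (b \cdot c) \qquad e \cdot a = a \cdot e = a \qquad a \cdot a^{-1} = a^{-1} \cdot a = e$$

For the integers: G is the set of all integers, · is +, e is 0, and the inverse of a is −a.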
But now, the situation has changed: first, we’re limited to only using what’s true of groups in general, not just anything true of the integers specifically. If we wanted that binary operation to be commutative, we’re out of luck. Second, whatever we prove will be quite general, applying to any group, not just the integers.
Mathematics can also teach us a little about the importance of abstraction. One of the interesting stories in the history of mathematics is the attempts to solve the quintic. (That is, to find a closed-form solution giving the roots of degree-5 polynomials, just like we have for the degree-2 quadratic equations you’ve probably learned about.) It turns out there is no such solution, but it took a long time for someone (Galois) to figure this out.
But the proof wasn’t direct: we didn’t sit around thinking about polynomials for a long time until, bam, the right idea emerged. It was indirect. It went through groups.
Through the use of the right abstractions, we humans were able to narrow our attention to some part of the problem, something small enough that the human brain is capable of actually thinking deeply about it. Something we could study, and eventually emerge with a deep understanding of. Then we could go back, armed with our new understanding, to re-interpret the larger problem. Eventually, with the right luck and/or genius, the real problem gets made small enough that *click*.
But what about programming?
There’s an awkward thing that happens when we start thinking about programming in light of how things work for mathematicians. Nothing seems to obviously correspond, at first.
After all, let’s go back to thinking about the simplest thing we do in programming: writing a function. Is that function an abstraction? How does that fit in with that more mathematical idea of an “abstract notion” (like groups) having many different potential “concrete instantiations” (like integers)? What variety of concrete options are we ranging over with a function, the way “groups” range over options like the integers?
And why did we write a function in the first place? Was it because someone told us “Don’t Repeat Yourself”? What does DRY have to do with “abstraction”?
The most commonly cited definition of abstraction is also (in my opinion) heavily misleading: Wikipedia, for example, says abstraction is “hiding details of implementation.” Okay, we can sort of see how a “group” hides details about the integers, but is that all? We’re just thinking about hiding details? That’s it?
I think this definition is a bit like defining “color” as being a frequency of light. Sure, there’s truth to that, but that definition is useless to an artist, and not just because it ignores human-specific foibles (brown is not a single frequency of light, so already you’re missing colors!). It’s also that it’s just not the useful way of thinking about color, for artists.
Hiding details isn’t really the useful way of thinking about abstractions, for programmers.
Let’s make a correspondence
For mathematical abstraction, we saw two different notions:
- Concretions. These aren’t directly used often because set theory isn’t a very good programming language, but they’re supposed to be there in principle because otherwise you have to worry about foundations.
- Abstractions. These are almost always used instead, and with an infinite hierarchy from “low-level” to “high-level”.
For programming, I think we get three different notions:
- Concrete abstractions. Our programming languages are much better at these, so we happily use them much more often. Functions and data types are good examples.
- Indirect abstractions. We still get a sort of infinite hierarchy of high- to low-level abstractions. These are things like interfaces and type classes that obviously range over a multitude of possible concrete instances.
- Abstraction mechanisms. The notion of a “function” in mathematics is just another abstraction, but in a programming language, it’s something built into the language. Things built into the language are on a different level from things we can build within it. As programmers, we are handed notions like functions, data types, interfaces, classes, or type classes by our programming language, but don’t get to operate on this level ourselves.
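To make the distinction concrete, here’s a minimal Haskell sketch (the names Monoid', unit, and combine are invented for illustration):

```haskell
-- A concrete abstraction: one term with one fixed meaning.
area :: Double -> Double
area r = pi * r * r

-- An indirect abstraction: a type class ranges over many possible
-- concrete instances, much like "group" ranges over the integers.
class Monoid' a where
  unit    :: a
  combine :: a -> a -> a

-- One concrete instantiation of that indirect abstraction.
instance Monoid' [x] where
  unit    = []
  combine = (++)

-- The abstraction mechanisms are the parts we *can't* define ourselves:
-- the meanings of "class", "instance", and function definition itself
-- are fixed by the language.
```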
The first thing I’d note is that “concrete abstractions” like functions are still abstractions. While we initially highlighted mathematical abstraction as being about ranging over a variety of concrete possibilities, that wasn’t actually the only thing abstractions did for us.
I was trying to be slightly sneaky; did you spot what the other thing was?
Besides filling that particular role, abstractions were also a tool for human thought. Remember how I mentioned Galois was able to reason about quintic equations? Sure, groups ranged over tons of potential instantiations, but the actual value was breaking off a useful smaller problem that can actually fit in our heads.
The value of groups is that we can spend time deeply understanding them, and that understanding will pay off big time when we go on to tackle bigger problems where they’re applicable.
And that’s what a good abstraction is: anything we can create where the time we have to invest in learning it pales in comparison to the value we gain from that understanding and being able to apply it elsewhere.
Of course, this loops back around to last week’s modularity discussion. Modularity is all about breaking systems down into smaller parts. Abstraction and modularity are deeply related topics.
So wait, what is a programming abstraction?
Every abstraction we create is either a term or a type. Define global variables or constants, or a function? Those are terms. Define new data types, classes, interfaces, type classes, etc? Types!
Most new terms and types are just concrete abstractions. But not all terms: polymorphic functions do kinda vary over a lot of concrete instantiations! And for some kinds of new types (interfaces, type classes, protocols), they’re always an indirect form of abstraction.
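For instance, in Haskell terms (a tiny sketch with invented names):

```haskell
-- A concrete term: pinned to one type, nothing varies.
doubleInt :: Int -> Int
doubleInt x = x + x

-- A polymorphic term: one definition quietly ranging over every
-- possible element type a; indirect abstraction at the term level.
firstOf :: [a] -> Maybe a
firstOf []      = Nothing
firstOf (x : _) = Just x
```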
How richly these abstractions can work is dictated by the abstraction mechanisms of our programming language. Don’t have closures and lambdas? That’s not a very flexible function abstraction. Don’t have polymorphic interfaces? That’s not a very flexible interface abstraction. (Type classes seem like something different than just an ordinary type? Just flexibility of the language again.)
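And a glimpse of what a flexible function abstraction buys you: with closures, functions can manufacture new functions at runtime (again, names invented for illustration):

```haskell
-- adder builds a brand-new function on the fly, closing over n.
adder :: Int -> (Int -> Int)
adder n = \x -> x + n

addTen :: Int -> Int
addTen = adder 10
-- addTen 5 == 15, and map (adder 100) [1,2,3] == [101,102,103]
```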
So when it comes to creating interesting abstractions beyond just functions, we’re pretty much always looking at types. And we circle back around to my earlier writing on how the expression problem informs type design: our usual options are data, objects, or ADTs.
Going beyond
That’s pretty much it for looking at abstraction. But I’m a programming languages person, so I’ve got some additional commentary. That whole “third kind of abstraction” thing we have going on is a huge pain in the butt, isn’t it? It sure looks like programming languages are limiting us. We might have better concrete abstractions than set theory, but we’re the dirt-covered peasants resentfully staring across the river at the kinds of towering and beautiful abstractions mathematicians can construct.
So here’s some speculation about the future.
I think there are three lines of active research that may someday liberate programming from these limitations. The end result would be a deeply mathematically rooted language that nevertheless is great for writing high-performance string-mangling CRUD apps.
Research line 1: making type theory good enough for math
There’s an active area of research today called homotopy type theory, the secret goal of which is to create an actually usable foundation for mathematics.
That is, replace set theory with something better, something mathematicians doing “mathematician stuff” might actually use on a day to day basis, instead of ignore (like set theory).
And it’s all based on dependent type theory, which programming languages researchers have been poking at for years as a way to create much richer abstractions in programming languages. If we want to completely eliminate that “third tier” of abstraction, it seems like we can with dependent types. They collapse things back down to two tiers, just like mathematics, and we can do the full range of mathematical abstraction with them.
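To gesture at what those richer abstractions look like, here’s the classic length-indexed vector, approximated in Haskell with GADTs and DataKinds. This is just a sketch of the flavor; full dependent types go considerably further:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level natural numbers: data promoted up to the type level.
data Nat = Z | S Nat

-- A list whose length is part of its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- The type system itself rules out calling this on an empty vector,
-- an invariant we'd otherwise relegate to docs or runtime checks.
safeHead :: Vec ('S n) a -> a
safeHead (VCons x _) = x
```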
Research line 2: effects
One of the obvious objections some might raise to the above line of research is that it seems like the opposite of the “real world.” In the real world, we’re not working with abstract nonsense; we have machines, with performance to worry about, and concrete realities like registers, pointers, and cache. (And all our shitty data are strings!)
But all of this is perfectly handleable by such a language; we just haven’t really built one that demonstrates it yet. Part of the reason we haven’t is that we’re still struggling to figure out how best to represent effects. This is where Haskell has monads, other languages are trying out different ideas like algebraic effects, and there are plenty more ideas out there, too.
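As a sketch of what “representing effects” can look like, here’s the idea in type-class style (Logger and logMsg are invented for illustration, not any particular library’s API):

```haskell
-- An effect interface: code declares *that* it logs,
-- while each instance decides *how* the logging happens.
class Monad m => Logger m where
  logMsg :: String -> m ()

-- One interpretation: log for real, in IO.
instance Logger IO where
  logMsg = putStrLn

-- Effectful code stays abstract over its interpretation,
-- so the same program could log to stdout, a file, or nowhere.
greet :: Logger m => String -> m ()
greet name = logMsg ("hello, " ++ name)
```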
But I think a lot of people don’t catch on to this fact because of “The C Language Aesthetic.”
In C, you build into the language the machine types, and pretty much the machine types only.
In C, you build into the language the machine operators, and pretty much the machine operators only.
Anything more abstract, and you make it look more abstract and more heavyweight.
It should be obvious that `+` is fast and efficient, and that `do_a_thing(x, y)` is who knows how expensive.
One thing I think I can predict: this will eventually get turned on its head. “Abstract nonsense” will be the normal notion of functions, while manually memory-managed, systems-y machine code will be a specially constructed function type fit for that purpose. You’ll be able to do all that stuff, no problem! It’s just going to be something built on top of the language’s more mathematical base, instead of that lower-level stuff being the base.
Research line 3: linguistic abstraction
There’s one sorta-kinda abstraction I haven’t talked about. It’s another one mathematicians get, and we dirty programmers do without: notation. Programming language notation is fixed, whereas in mathematics you can make it up as you see fit.
This isn’t exactly another form of abstraction, as fans of Lisp-like languages are fond of pointing out (while justifying why S-expressions are all you need). But it can be incredibly useful to humans in many situations, as fans of Lisp-like languages will also point out (while extolling the virtues of macros).
Someday I should write something up on this specifically, but I think part of the reason we don’t have popular languages with good linguistic abstraction mechanisms (besides the fact that it’s a really hard problem) is that macros are a dead end we’ve spent the last 50 years stuck in. Long-time fans of the blog know I wrote a Ph.D. thesis about extending programming languages with new linguistic abstractions, and we went looking for approaches better than macros. But I digress.
That’s my guess at programming 100 years from now. Some future mathematically-rooted language with an effects system that makes it useful for “real-world” programming and all the domain-specific syntax we could ask for.
Epilogue: how much abstraction do we actually do?
One observation I’d like to conclude with is this: we actually rarely reach for indirect abstractions. Most of programming is really about concrete abstraction. (At a wild-ass guess: most interfaces in Java are probably single-method workarounds for the lack of function types. Next most common is likely replacements for a lack of algebraic data types. The next most common likely have only one concrete implementation. I suspect it’s only the tiny leftover minority that are actually about meaningfully and openly varying over different possible implementations.)
I think there’s a good reason for that: it’s actually pretty difficult to understand a mathematical abstraction. Usually, to understand a newly introduced abstraction, a student has to work through a lot of different concrete examples of it. It’s not possible for the human brain to understand something abstractly just… directly like that. We have to work to test and forge our understanding.
Concrete abstractions are just so much simpler to understand and easier to work with. We only ever start reaching for indirection if we need it.
After all, if it were easy to learn new abstractions… it doesn’t get much simpler than a Monad!
End notes
- (2018-8-25) An interesting idea that I didn’t mention is the relationship between abstractions and metaphors. In some ways, abstractions are just rigorously defined metaphors. I think people conflate different things when calling abstractions “leaky:” sometimes you’re just taking the metaphor too seriously, sometimes not seriously enough…