The design of toolkits and the web front-end
For the last couple of weeks, we looked at designs for UI-related code, starting with Stateful MVC and then last week the Stateless MVC of the web. But there are a few things in all this we haven't examined in much detail. Web application back-ends are stateless, sure, but what about front-end designs? The interesting development on the front-end is that we're not coming up with the same designs as we had for native desktop UIs… and probably for very good reason.
The old days of OO UI toolkits
For a long time, UI toolkits were described as the ideal case for object-oriented programming, an obvious success story. Even as some flaws with OO ideology started to become more apparent, there were usually allowances for its obvious successes, and UI toolkits were one of the things to point to.
I was probably even one of these people at one point. I have no idea what I was thinking. It doesn’t even remotely make sense anymore.
Here's an interesting observation: GtkColorSelection (from Gtk2, so it's an old design) inherits from GtkVBox. That's inherits, you read that correctly. And by the way, the inheritance hierarchy for that class is 8 classes deep.
Just for the benefit of anyone who doesn’t see the obvious problem here: There’s really very little reason for any widget to inherit from any other widget. Design like this is, at a minimum, an enormous confusion between “is-a” and “has-a” relationships, even within the usual object-oriented design dogma. And deep inheritance hierarchies, contrary to being useful code re-use, are actually dangerous opportunities to cause accidental breaking changes or otherwise be stuck unable to fix serious problems.
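To make the is-a/has-a confusion concrete, here's a minimal TypeScript sketch (hypothetical Widget, VBox, and color-selection classes, not real Gtk API) contrasting a color picker that is a vertical box with one that merely has one:

```typescript
// Hypothetical toolkit types, for illustration only (not real Gtk API).
class Widget {
  draw(): void {}
}

class VBox extends Widget {
  private children: Widget[] = [];
  add(child: Widget): void { this.children.push(child); }
  draw(): void { this.children.forEach(c => c.draw()); }
}

// Gtk2-style design: the color picker *is a* vertical box, so every
// VBox method (add, etc.) leaks into its public API.
class ColorSelectionInherited extends VBox {}

// The has-a alternative: the vertical box is a private layout detail,
// and the color picker exposes only the API that makes sense for it.
class ColorSelectionComposed extends Widget {
  private layout = new VBox();
  draw(): void { this.layout.draw(); }
}
```

In the inherited version, every layout method becomes part of the color picker's public interface forever, which is exactly the kind of accidental commitment that turns later fixes into breaking changes.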
So why would we end up with designs like this? It’s not like they were stupid. The trouble is that OO came onto the scene with a promise of code re-use, and all these programmers looked at these situations and said “hey, if I inherit, I get all these methods ‘for free!’ What marvelous re-use!”
This isn't to pick on Gtk. (Gtk version 3 fixes the obvious flaw above, and reduces the class inheritance count to 6!) The original Win32, MFC, and Qt all made similar mistakes.
We can find more awful design mistakes in these libraries.
The “widget” base class tends to become a kind of “god object.”
Just take a look at the list of properties, functions, oh and multiple inheritance in Qt's QWidget or MFC's CWnd.
And how do we go about creating a CCheckBoxList?
"To create your own checklist box, you must derive your own class from CCheckListBox. …"
For many of these toolkits, this is a standard approach. You don’t create instances of the bare widget: you’re expected to subclass a widget, and specialize it. For MFC especially this is often necessary because event handling is done by methods. You can’t override methods except by subclassing. But even for other frameworks that allow hooking in event handlers, it’s still a common thing to do.
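The pattern looks roughly like this sketch (hypothetical TypeScript stand-ins, not actual MFC code): when event handling is a method, subclassing is the only way to change behavior; when handlers can be registered, no new class is needed.

```typescript
// Hypothetical toolkit where event handling is baked into methods.
class CheckListBox {
  // The toolkit calls this when an item is clicked.
  protected onItemClicked(index: number): void {
    console.log(`toggled item ${index}`);
  }
}

// Changing the behavior requires deriving a new class and overriding.
class LoggingCheckListBox extends CheckListBox {
  protected onItemClicked(index: number): void {
    super.onItemClicked(index);
    console.log(`custom handling for item ${index}`);
  }
}

// Contrast: a handler-registration design needs no subclass at all.
class CheckListBox2 {
  private handlers: Array<(index: number) => void> = [];
  addClickHandler(handler: (index: number) => void): void {
    this.handlers.push(handler);
  }
  // The toolkit would invoke the registered handlers on a click.
  protected notifyClicked(index: number): void {
    this.handlers.forEach(h => h(index));
  }
}
```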
Enter HTML and the web
HTML did something remarkably under-appreciated. It gets some praise for being declarative, for being a semantic markup language, and for separating style out with CSS. But the really remarkable thing HTML did was just to create a composable, re-usable set of UI elements.
You don’t create HTML forms by first subclassing a “text input element” widget, and then using it in your form. You use the standard text input HTML element, with properties specified in the HTML, and styling specified in the CSS, and even with event hooks that you can register with it to alter its behavior.
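Here's that idea expressed with the standard DOM API (the calls are real; the particular input and class name are just for illustration): a stock text input, configured, styled, and given behavior without subclassing anything.

```typescript
// Use the stock <input> element directly: configure it with properties,
// style it via a CSS class, and attach behavior with an event listener.
const input = document.createElement("input");
input.type = "text";
input.placeholder = "Your name";
input.classList.add("fancy-input"); // the actual styling lives in CSS
input.addEventListener("input", () => {
  console.log(`current value: ${input.value}`);
});
document.body.appendChild(input);
```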
This is an absolutely massive difference in design from the UI toolkits that came before it. It’s such a big change that I think many people might even believe it’s a category error to compare the two. After all, isn’t HTML a markup language? And UI toolkits are OO libraries. Are these apples and oranges?
Not especially. The thing here is that nothing was stopping us from creating a library with re-usable components like HTML did, except that it took a radical change in thinking to come up with it. You had to think in terms of a language for describing UI elements. Object-oriented design doesn’t even like this kind of thought. OO design encourages thinking in terms of an interface or base class from which widgets derive. HTML was a way of describing a UI, not with objects, but with data.
And the neat stuff didn’t stop there.
The use-case for HTML called for laying out a document, and the data-oriented approach to its representation let us handle this easily by traversing the tree data structure to compute layout information for each node.
OO UI toolkits at the time all involved creating the widget with a fixed position relative to its parent window, and with fixed width and height.
It was like position: absolute was all there was. Any positioning logic was something you had to implement yourself, from scratch, for each window subclass.
(Some code re-use, huh?)
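To see why describing the UI as data helps here, consider this toy sketch (a hypothetical node type, nothing like a real browser's layout engine) where vertical layout falls out of one recursive walk over the tree instead of per-widget hard-coded coordinates:

```typescript
// A UI described as plain data: a tree of nodes with intrinsic heights.
interface LayoutNode {
  height: number;         // intrinsic height of this node's own content
  children: LayoutNode[];
  top?: number;           // computed position, filled in by layout()
}

// One recursive walk stacks children top-to-bottom, the way a simple
// block layout would; returns the total height this subtree occupies.
function layout(node: LayoutNode, top = 0): number {
  node.top = top;
  let y = top + node.height;
  for (const child of node.children) {
    y += layout(child, y);
  }
  return y - top;
}

// Example: a root with two children, the second of which has a child.
const root: LayoutNode = {
  height: 10,
  children: [
    { height: 20, children: [] },
    { height: 20, children: [{ height: 5, children: [] }] },
  ],
};
console.log(layout(root)); // 55
```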
HTML isn’t without its flaws, of course.
It occupies a kind of hybrid niche between semantic markup language and UI description data type.
This can cause some conflicts.
For a long time, "don't use table for layout" was a big principle on the markup language side of the divide, but tables were also one of the most effective layout tools available.
If you were just trying to create a UI, what’s the harm?
I've been out of the loop for a while, but I believe most of these concerns have been resolved for modern browsers with better CSS, with the possible exception that the semantics people complain that div is overused.
I doubt this tension will ever be fully resolved, and in some ways maybe that’s a good thing.
Maybe the semantics people have a point, and we're just identifying a design weakness in HTML and CSS?
Modern front-end designs
But all this is history. OO UI toolkits have started accumulating some obviously missing features, like "container widgets" that lay out their children. HTML is hardly the hot new thing anymore. So what's going on with front-end design?
One of the first problems with the HTML approach is that we do suffer from a rather total lack of abstraction, at least within the description language itself. If you want to replicate a complicated UI, you include all of its declaration, each place you want it. There's no other option. The web mostly got by with this state of affairs because you could use abstractions on the server-side (even just functions or templates) to generate the HTML that was being sent to browsers.
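In its simplest form, that server-side abstraction is nothing more than a function that returns markup. (This sketch is hypothetical and skips the HTML escaping that real code would need.)

```typescript
// Server-side abstraction at its simplest: a function that returns the
// markup for a labeled text input, reused wherever the form needs one.
// (Real code would also escape the interpolated strings.)
function labeledInput(name: string, label: string): string {
  return `<label>${label} <input type="text" name="${name}"></label>`;
}

const signupForm = `
  <form action="/signup" method="post">
    ${labeledInput("first", "First name")}
    ${labeledInput("last", "Last name")}
    <button type="submit">Sign up</button>
  </form>`;
```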
But let’s say we want to start including some abstractive capabilities “within the language.” Let’s call these “components” and allow them to be used as new tags within otherwise normal HTML. The next question is: how will these components work?
The obvious solution is to have components “render” themselves in terms of the plain old HTML the component consists of. Or not just plain: perhaps there are more components within. But ultimately, these have to render to an otherwise fully basic HTML description.
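One way to picture it (a hand-rolled sketch, not any particular framework's API): a component is a function from its inputs to a tree of UI nodes, and rendering expands components until only basic HTML elements remain.

```typescript
// A fully basic HTML description: just tags containing tags.
interface HtmlElement {
  tag: string;
  children: HtmlElement[];
}

// A UI node is either a basic element or a component that knows how to
// render itself in terms of other UI nodes (possibly more components).
type UINode =
  | { kind: "element"; tag: string; children: UINode[] }
  | { kind: "component"; render: () => UINode };

// Rendering expands components recursively until only basic HTML remains.
function render(node: UINode): HtmlElement {
  if (node.kind === "component") return render(node.render());
  return { tag: node.tag, children: node.children.map(render) };
}

// A "labeled-input" component, used as though it were a new tag.
const labeledInputComponent: UINode = {
  kind: "component",
  render: () => ({
    kind: "element",
    tag: "label",
    children: [{ kind: "element", tag: "input", children: [] }],
  }),
};
console.log(JSON.stringify(render(labeledInputComponent)));
// {"tag":"label","children":[{"tag":"input","children":[]}]}
```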
Next, we have the problem of dealing with state. OO UI toolkits often have internal widget state, but this is in conflict with Stateful MVC, which wants state to be externally represented in a model. Perhaps we can resolve this conflict?
And we can: let’s actually make component state immutable data, i.e. a model. And let’s invent some mechanism to request changes to that state, instead of the UI modifying it directly.
Sprinkle in some particular choices about how state changes get made, some optimizations for browsers and the web, do it all in JavaScript, and you get React.js. Of all the popular fads in the front-end development space, React is certainly among the ones that actually deserve their success. The design is excellent.
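Here's roughly what that shape looks like in React (useReducer is real React API; the counter component itself is just a made-up example): the state is an immutable value, and the view requests changes by dispatching actions instead of mutating anything directly.

```tsx
import { useReducer } from "react";

// The model: immutable state, plus a reducer that produces new state
// in response to requested changes (actions).
type State = { count: number };
type Action = { type: "increment" } | { type: "reset" };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "increment": return { count: state.count + 1 };
    case "reset":     return { count: 0 };
  }
}

// The view renders from the state and *requests* changes via dispatch;
// it never modifies the state itself.
export function Counter() {
  const [state, dispatch] = useReducer(reducer, { count: 0 });
  return (
    <div>
      <p>Clicked {state.count} times</p>
      <button onClick={() => dispatch({ type: "increment" })}>+1</button>
      <button onClick={() => dispatch({ type: "reset" })}>Reset</button>
    </div>
  );
}
```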
The takeaway
The original OO UI toolkit design was inherently object-oriented. The widget base class was designed to support anything. Existing widgets were not really special; they were just a library you could make use of. (Unfortunately, often by inheritance.)
The HTML approach provided a fixed set of composable UI elements.
Even with the React Component design, all we have is an abstraction mechanism; we’re not able to introduce truly new widgets.
The base set is special, and our abstractions ultimately have to be equivalent to some composition of those base elements.
(And this limitation can be very little limitation at all: canvas elements let you draw arbitrarily, for instance. But use of these is the exception, not the rule.)
This point is all I want to communicate today. Something we used to believe was an inherent success story for object-oriented design has been improved upon—by approaching it from a less object-oriented perspective.
End notes
The “UI description language” approach of HTML is so good, I suspect that part of the reason many modern applications are reaching for tools like Electron is not just that it’s cross-platform and familiar, but also because it’s just plain a better way. The traditional UI approaches that native toolkits use are just antiquated. Qt appears to be in the lead in trying to modernize these older designs somewhat. The only real new attempt at creating a cross-platform UI toolkit I know of is React Native, which… does come with the constraint that you’re writing JavaScript.
I’ve hardly attempted to be comprehensive about UI toolkits here. I’m just making a general observation. Don’t hate on me for not mentioning Mozilla XUL. Thanks! :)