Why an interface with only one implementation?
Programmers can be a bit faddish sometimes. Part of why I want to write a book about design is to help us be a little more explicit about why things should be a certain way. I see too much out there that’s accepted because… well, because. Reasons. It seems cool, everybody’s doing it, “they” said to do things this way, and they’ve got a name for their philosophy and a popular consultancy and everything!
I get why this happens. It’s not like the design of software is empirically justified or anything. So how should a new design philosophy get justified? What should we find convincing?
There are plenty of good ways to go about trying to understand something we can’t measure (I mean… all the humanities, for example). But the biggest failings are usually the simplest things: really, we just need to pay more attention to why.
Among the recent fads sweeping around out there are test-driven development, SOLID, and dependency injection. These mix together to produce some pretty incredible results sometimes, both great and horrifying.
One common failure mode of this mixture is to extract an interface from a concrete implementation, supply that implementation via a dependency injection framework, and then justify this abstraction by substituting in mock implementations during testing. Now, this can absolutely be a reasonable design decision… for the right kind of interface. When the interface is actually quite well-defined, sits at a reasonable level of abstraction, and the win on testing really is greater than the costs of all this abstract indirection, that’s great.
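For concreteness, here’s roughly what that recipe looks like, sketched in Kotlin with made-up names: an interface extracted from its single concrete implementation, and a hand-rolled fake standing in for it during testing. (In production, a DI container would typically be what wires the real implementation in.)

```kotlin
// Hypothetical example: the interface extracted from its one real implementation.
interface UserStore {
    fun findName(id: Int): String?
}

// The single "real" implementation; in production this would talk to a database,
// and a DI framework would supply it wherever a UserStore is needed.
class DatabaseUserStore : UserStore {
    override fun findName(id: Int): String? {
        // a real implementation would query the database here
        return null
    }
}

// The code under test depends only on the interface.
class Greeter(private val store: UserStore) {
    fun greet(id: Int): String = "Hello, ${store.findName(id) ?: "stranger"}!"
}

// In a test, a trivial fake substitutes for the database-backed implementation.
fun testGreeter() {
    val fake = object : UserStore {
        override fun findName(id: Int) = "Alice"
    }
    check(Greeter(fake).greet(1) == "Hello, Alice!")
}
```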
The problem arises because some people get in the frame of mind “testing… good. DI… good. SOLID… GOOD!” and start applying this approach every chance they get, oblivious to the idea there could be drawbacks. And given a general propensity for over-mocking already out there…
This common recipe has already spawned a backlash. And so now there’s a rule of thumb out there that if an interface has only one concrete (non-testing) implementation, then it’s probably a crap interface that should be gotten rid of. This is actually a pretty reasonable rule, given the over-zealous application of the previous fad. But it’s also a faddish rule, and it too doesn’t really encourage thinking about why.
Dependencies, dependencies, dependencies
The perceived advantage of this form of abstraction is shown in the figure above. By eliminating a direct dependency on a concrete implementation, new implementations can be written and substituted in. This is obviously necessary when there are multiple actual implementations of the interface.
The situation in question is when there’s really still only one (non-testing) implementation. At that point, we’re introducing a lot of boilerplate and abstraction for (seemingly?) little benefit. Surely there’s a better way to do the required testing that doesn’t reduce the quality of the code?
Thus, the new reactionary rule of thumb: just get rid of these extraneous interfaces. Hindsight tells us they got overused.
But the above picture is incomplete. Let’s fill it out, thinking a bit more about the surrounding dependencies:
Now we can start to see a situation where it does make sense to keep an interface around, even if there is only one implementation. When we consider a module’s overall transitive dependencies, they stop at the interface. That next dependency arrow is the other way around—the interface doesn’t depend on the concrete implementation. In this situation, keeping around that interface can be a major win for de-coupling, separating a user from the dependencies the concrete implementation actually uses.
So an application might have only one implementation of an interface, but if that interface is in a separate module, there may still be very good reason for it to exist. Isolating modules from things they don’t need to know about is pretty much the foundation of how we’re able to implement large, complex programs. If we always had to think about everything, we’d never get anywhere.
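To make that picture concrete, here’s a rough sketch of how the arrows point when the interface lives in its own small module. The module and type names here are hypothetical, purely for illustration:

```kotlin
// module: reports-api  (a small module with no dependencies of its own)
interface ReportSource {
    fun latestReportText(): String
}

// module: reports-impl  (depends on reports-api *and* on heavier things,
// e.g. a database driver; the arrow points from the implementation to the
// interface, not the other way around)
class DatabaseReportSource : ReportSource {
    override fun latestReportText(): String = "..." // would really query the database
}

// module: dashboard  (depends only on reports-api; it never sees the database
// driver or anything else reports-impl drags in transitively)
class Dashboard(private val source: ReportSource) {
    fun render(): String = "Latest report: ${source.latestReportText()}"
}
```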
But does this work?
One common difficulty with this approach, however, is that for the dependency chains to be broken, we have to be able to write that interface without referencing those other dependencies.
This can be quite difficult sometimes: we frequently need to pass interesting objects as parameters to the methods of our interface, and those types may come from the modules we are trying to exclude.
To be effective, the interface module (and the modules that use it) in the diagram above need to avoid adding more arrows to the concrete dependencies on the right.
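Here’s that failure mode in miniature, again with made-up names: the interface’s own method signature mentions a type that lives alongside the concrete implementation, so the dependency chain isn’t actually broken.

```kotlin
// module: orders-impl -- the concrete side we were trying to hide
class DatabaseOrderRow(val id: Long, val totalCents: Long /* plus ORM baggage */)

// module: orders-api -- but this interface now drags orders-impl back in,
// because its method signature passes DatabaseOrderRow around
interface OrderLookup {
    fun find(id: Long): DatabaseOrderRow?
}
```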
There are really only a couple of simple ways to try to solve this problem in an object-oriented way:
- We can move those dependencies around. Maybe some of them can get lifted to be part of the interface’s module, instead of part of some downstream module. This all depends on how coupled they are to other things. And whether we’re able to control all the involved modules—perhaps they’re third-party dependencies.
- We can try to refactor those types to also separate out an interface from their concrete implementations. This can get a little “infection-like”, with one interface de-coupling demanding another interface de-coupling demanding another, and so on.
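Continuing the made-up example from above, the second strategy might look something like this:

```kotlin
// module: orders-api -- an Order interface now stands in for the concrete row type
interface Order {
    val id: Long
    val totalCents: Long
}

interface OrderLookup {
    fun find(id: Long): Order?
}

// module: orders-impl -- the row type implements the extracted interface;
// if it exposed further concrete types, they'd need the same treatment (the "infection")
class DatabaseOrderRow(override val id: Long, override val totalCents: Long) : Order
```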
One general design strategy that object-oriented languages effectively deny us is to use complex data, rather than objects, as the arguments in the design of our interfaces. When a language offers a simple way to describe data types and transformations over them, we can much more reasonably create data types specifically for the interface. This is somewhat like pursuing both strategies above, but without the same downsides. The only real downside is that the appropriate arguments have to actually be data. If your interface really does need to pass specific objects around… well, this isn’t a good candidate to try to decouple through an interface, then.
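Sketched in Kotlin, whose data classes make this lighter than it is in many object-oriented languages, the boundary becomes a small data type owned by the interface’s module rather than yet another interface (same hypothetical names as before):

```kotlin
// module: orders-api -- the boundary is now plain data, so nothing downstream
// leaks through the signature
data class OrderSummary(val id: Long, val totalCents: Long)

interface OrderLookup {
    fun find(id: Long): OrderSummary?
}

// module: orders-impl -- the implementation translates whatever it uses
// internally into the boundary data type
class DatabaseOrderLookup : OrderLookup {
    override fun find(id: Long): OrderSummary? {
        // a real implementation would map a database row into OrderSummary;
        // a stand-in value keeps the sketch self-contained
        return OrderSummary(id, totalCents = 0)
    }
}
```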
So not only can interfaces separate modules from depending on each other, but data types can too. This shouldn’t be a novel idea. This is the general design strategy found when we did a case study on compilers. There, a data type called an “intermediate language” served the same kind of role: breaking up dependencies between modules. This ensures each module has a small role, and can remain ignorant of the rest of the compiler at large.
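To give a flavor of that idea in miniature (a toy, not the case study’s actual code): the “intermediate language” is just a data type that the front end produces and the back end consumes, so neither needs to know the other exists.

```kotlin
// A toy intermediate language as a plain data type.
sealed interface Instr
data class Push(val value: Int) : Instr
data class BinOp(val op: Char) : Instr

// front end: source text -> List<Instr>
// back end:  List<Instr>  -> machine code
```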
But without good language support, it can be hard to actually pursue this strategy, as the overhead and boilerplate involved can become too much.
A common occurrence
Besides MVC, there is MVVM, or “Model–View–ViewModel.” One of the general ideas involved in this alternative approach is to actually separate the view from the model. Traditional MVC often exposes the database representation directly to views. In fact, in many cases the only thing stopping a view from calling the wrong methods and mutating the database is that hopefully your coworkers will have words with you when they discover what you’ve done.
The idea here is for the view module to actually be fully separate from the model, unable to touch the database at all. So the “view model” acts as this interface-like intermediary. The view model’s job is to represent as data the information needed by views to render, instead of the model, which may represent the database state as objects. So the model knows about the view-model and the view knows about the view-model, but now they are unaware of each other.
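A rough sketch of that shape, with hypothetical names: the view model is plain data, produced by the model side and consumed by the view side, so the view never touches the database.

```kotlin
// the view model itself: plain data, shared between the two sides
data class ProfileViewModel(val displayName: String, val postCount: Int)

// model side: knows about the database (elided here) and about the view model
class ProfileModel {
    fun profileFor(userId: Long): ProfileViewModel {
        // a real implementation would load the user from the database
        return ProfileViewModel(displayName = "Alice", postCount = 3)
    }
}

// view side: knows only about the view model; it cannot touch the database
fun renderProfile(vm: ProfileViewModel): String =
    "<h1>${vm.displayName}</h1><p>${vm.postCount} posts</p>"
```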
Of course, this can look ridiculous at times, because object-oriented languages don’t support data very well. The prospect of writing down a simple data type to describe the information a view wants as input looks like a lot of work in an OO language. It shouldn’t be, but that’s the current state of programming.
Meanwhile, many basic CRUD apps would use representations in the view that are very close to the database’s representations, at least at first. That makes it look even more like ridiculous boilerplate: not only would a separate data type be a bothersome amount of work, but it also looks like you’d just be duplicating the existing type.
So we soldier on with views coupled to models.
End notes
All that said, if you find an interface and its sole implementation sitting in the same module… we’re back to taking that new rule of thumb into consideration. Can the testing be done better?