|
I'm very interested in feedback on this. Yes, it's somewhat related to my latest article[^], but I'm going through what I explain below right now.
What if you were going to design some new service or app and you were told:
Development Manager: "All high-level classes must depend only on Interfaces." What if I told you that and I was entirely serious?
Would you balk? Or think, "Yes, that is the way it is and should be"?
After that your manager says,
Development Manager: "Something else will decide how to build the implementation which will fulfill the Interfaces." Would that sound normal to you, or completely crazy? Or somewhere in between?
The Implications
Do you honestly understand the implications?
No Implementation Code
One of the implications is that the Service or App you are creating basically has no implementation code in it. (Or very little.)
Why?
Because your high-level app only depends on easily-replaceable Interfaces.
That means if you want to see the implementation, you'll need to go to the Library (probably a separate project) which contains the implementation that is used to fulfill the Interface.
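As a rough sketch (hypothetical names), the high-level code ends up looking something like this, with the implementation living in a separate library project:

// In the Contracts project: nothing but the abstraction.
public interface IOrderStore
{
    void Save(string orderId);
}

// In the high-level App/Service project: code depends only on the interface.
public class OrderService
{
    private readonly IOrderStore _store;
    public OrderService(IOrderStore store) { _store = store; }
    public void PlaceOrder(string orderId) { _store.Save(orderId); }
}

// The implementation (e.g. class SqlOrderStore : IOrderStore) lives in a
// separate library project; something else decides which one fulfills the interface.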
How do you feel about that?
Do you know how crazy it is to look at a project that has been designed this way?
Have you ever experienced a project that is carried out like this?
Why Am I Thinking About This Even More?
I have just completed 50 pages (of a total of 241) of the very old book (2013) Dependency Injection with Unity (free PDF or EPUB at link)[^].
|
|
|
|
|
If I understand you correctly, you're talking about having a third DLL that carries the interface definitions; your App relies on that, and yet another DLL implements them. Am I right?
Because if so, this seems logical for an enterprise app, and entirely overkill for small or even "mid-size"** applications.
** I gauge application size not by source code count, but by the size of the team that developed it. That's where the complexity really comes in, in terms of making the project fly. That's when separation of concerns and such become valuable: when your team grows too large to sit next to each other in the office. (Just as an example. My point is that when it goes from "hey, how's this work?" to "let's have a meeting about that when I get a chance", that's when you need to go to interfaces, at least IMO.)
For those apps where your team members aren't at the ready, separating concerns is useful.
From a maintenance perspective, it's a lot of extra work, so if it doesn't add value I'd skip it.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
I appreciate the feedback. Your summary is a good one and relates to one of the key points made by the MS Unity Application Block PDF that I read right after my original post.
Quote: When You Shouldn’t Use Dependency Injection
Dependency injection is not a silver bullet. There are reasons for not using it in
your application, some of which are summarized in this section.
• Dependency injection can be overkill in a small application, introducing
additional complexity and requirements that are not appropriate or useful.
• In a large application, it can make it harder to understand the code and
what is going on because things happen in other places that you can’t
immediately see, and yet they can fundamentally affect the bit of code you
are trying to read. There are also the practical difficulties of browsing code
like trying to find out what a typical implementation of the ITenantStore
interface actually does. This is particularly relevant to junior developers and
developers who are new to the code base or new to dependency injection.
It's interesting because "theoretically" I absolutely love the idea of DI, IoC and writing everything to an Interface.
But, if you like to look at code, it is quite terrible.
There's a small(ish) project where this has been carried out; it has about 8 dependencies (all are Interfaces and the implementations are in separate DLL projects).
To read the code or debug-step through it, you need to create a Visual Studio solution with the 8 projects included; only then can you step into the code of one or another.
It's a lot of overhead.
And, yes, I agree with what you said about team size too. If you have nine different people working on the items (one on the main service and one on each of the 8 dependencies), then breaking it up is good.
|
|
|
|
|
I spent altogether too much time as a software architect operating in Microsoft's native habitat.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
If you don't DI on a large project, how are you doing unit/int tests? Sure, small project, whatever, but...
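(To make that concrete, a minimal sketch with hypothetical names: the constructor-injected dependency is exactly what the unit test replaces.)

public interface IPriceFeed { decimal GetPrice(string symbol); }

public class Portfolio
{
    private readonly IPriceFeed _feed;
    public Portfolio(IPriceFeed feed) { _feed = feed; }
    public decimal Value(string symbol, int qty) { return _feed.GetPrice(symbol) * qty; }
}

// In a unit test, no real market-data service is needed:
class FakePriceFeed : IPriceFeed { public decimal GetPrice(string symbol) { return 10m; } }

// Assert.AreEqual(50m, new Portfolio(new FakePriceFeed()).Value("ABC", 5));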
|
|
|
|
|
That's maybe the most important reason for DI.
And your question is the perfect one to get to the bottom of the mystery of whether or not people are really using DI.
|
|
|
|
|
I've had a big hand in building about 30-40 microservices which all tend to more or less follow this convention.
I do think "favor composition over inheritance" is a very strong win.
I do not appreciate 3 levels of abstraction to do anything. But you get into CQRS and suddenly that's your world.
We only have one that went real heavy on that, and it's the one I probably hate the most. It was NOT appreciated when I likened it to the spaghetti code of old, just on a newer plate.
It doesn't help one bit that the typical way this is done obviates any simpler way of doing things, by design. Scope keywords are wielded as cudgels to keep you in line. The "simple" ctors are all internal/private, so do the abstractions or do nothing.
I think if I wanted all of that, I would use XML comments to highlight that *maybe* something should be done the more complex way, but otherwise not design things in a way that makes it very difficult to do them simply (ditch all this scoping).
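Roughly the pattern being described, as a sketch with hypothetical names:

public class Order { }

public interface IOrderHandler { void Handle(Order order); }

public class OrderHandler : IOrderHandler
{
    // The "simple" ctor is internal, so code outside the assembly can't just
    // do new OrderHandler(); you go through the container/factory and the
    // IOrderHandler abstraction, or you do nothing.
    internal OrderHandler() { }

    public void Handle(Order order) { /* ... */ }
}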
|
|
|
|
|
DI is only really useful for mocking. Not everything needs to be mocked. Just use the real classes and test the ensemble, and only mock things that are heavy (e.g. the database).
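A rough sketch of that approach (hypothetical names), wiring up the real classes and faking only the heavy dependency:

public interface IDatabase { int CountUsers(); }

// Real classes, used as-is in the test:
public class UserStats
{
    private readonly IDatabase _db;
    public UserStats(IDatabase db) { _db = db; }
    public string Summary() { return _db.CountUsers() + " users"; }
}

public class ReportBuilder
{
    private readonly UserStats _stats;   // concrete class, not mocked
    public ReportBuilder(UserStats stats) { _stats = stats; }
    public string Build() { return "Report: " + _stats.Summary(); }
}

// Only the heavy dependency is faked; the rest of the ensemble is real:
class InMemoryDatabase : IDatabase { public int CountUsers() { return 3; } }
// var report = new ReportBuilder(new UserStats(new InMemoryDatabase())).Build();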
|
|
|
|
|
The OSI network stack was entirely based on this principle.
The Service definition of a layer tells what the layer will do for the layer above. Not how. It can do it any way it likes, using any protocol it likes, as long as it fulfills the Service definition. OSI Transport Services could realize its offering using IP, X.25, Frame Relay, whatever - or even a combination (most likely: if one breaks down, switch to another one). Same for all the other layer interfaces. Of course the Service definition also included how received data are delivered to the Service user.
The separation between what and how was manifest in Service and Protocol being defined in different standards. Network Services were X.213, Transport Services X.214, Session Services X.215. A Network Protocol was defined in X.223, but more common was to use the X.25 protocol, which is not according to X.223. Yet, X.213 services were provided. For the Transport layer, there was a Transport Protocol defined in X.224, but if you used the RFC method of implementing X.214 over TCP, then TCP was the transport protocol.
If it wasn't for the terrible mess that the Internet protocol stack makes of 'layering', I might have been more tolerant of grayboxes or boxes with tinted glass. But even after 40 years with TCP/IP protocols, it can still give me a stomach ache when I encounter yet another protocol hack, a fix made with steel wire and gaffa tape in the protocols.
I still believe that OSI protocols could have been realized, strictly adhering to OSI principles. But after 40 years among software developers and internet guys, I realize that "must depend only on Interfaces" cannot be realized, especially not in a 'local' (not networking) context, with no communication peer depending on rules being followed.
So "must depend only on Interfaces" is a rosy dream that doesn't stand a snowballs change in today's software world.
(Edit: Numbers of protocol definition standards)
Religious freedom is the freedom to say that two plus two make five.
modified 16-Feb-24 19:45pm.
|
|
|
|
|
Fantastic post
Great points and a perfect analogy / example.
There are challenges in this type of dev that are so painful that it only makes sense in some instances.
Also, software is rewritten so often that putting this much time into it can be questionable too, since the final dream is rarely realized.
|
|
|
|
|
Define "high-level class".
|
|
|
|
|
That's actually a very good question.
It is a bit difficult to even explain, but think of any class that calls another to do work.
The PDF that I mentioned uses the example of a WebController (MVC Controller) as the high-level class that calls other classes to do work.
And "high-level class" is basically saying "break apart all functionality (single responsibility principle)".
The Controller here is just calling other things to do work.
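Roughly what that PDF example boils down to, as a simplified sketch (the GetTenantNames method name is assumed for illustration, not taken from the book):

// The controller is the high-level class: it only coordinates work and
// depends on the ITenantStore interface, never on a concrete store.
public class TenantController : Controller
{
    private readonly ITenantStore _tenantStore;

    public TenantController(ITenantStore tenantStore)
    {
        _tenantStore = tenantStore;
    }

    public ActionResult Index()
    {
        var tenants = _tenantStore.GetTenantNames();   // assumed method name
        return View(tenants);
    }
}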
|
|
|
|
|
I agree that that seems to be the only reasonable way to interpret it. As in a layered architecture.
Yet you can also look at it the other way -- a "high-level" class has derived classes descending from it.
Possibly, a better term would be "top-level" -- "high" doesn't necessarily indicate "top".
I think the use of the term simply indicates that the author is addressing only a very narrow scope of software architecture.
I might invert the definition though and say that a "high-level" class is one which depends only on interfaces.
|
|
|
|
|
We try to follow this principle. Even if you do something like:
IProvider thing = new RealProvider();
It makes you plan in terms of the consumer. If you end up with multiple providers later, switching to IoC is really easy. Or substituting a mock for testing, or writing tests based on IProvider, etc.
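For example (a sketch; MockProvider and the container calls are hypothetical here), only the construction line changes:

// Production wiring:
IProvider thing = new RealProvider();

// In a test, only the construction changes; the consuming code is identical:
// IProvider thing = new MockProvider();

// And moving to a container later (Unity-style registration, for example):
// container.RegisterType<IProvider, RealProvider>();
// IProvider thing = container.Resolve<IProvider>();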
And of course, all the high level code that is calling the interfaces is “real” code.
If your IDE can’t show you all of the implementors of the interface in milliseconds, then try a better IDE.
|
|
|
|
|
englebart: If your IDE can’t show you all of the implementors of the interface in milliseconds, then try a better IDE.
Truth! Part of our problem is that we are using VS 2013
|
|
|
|
|
|
englebart wrote: If you end up with multiple providers later,
That, however, is the problem.
First, of course, it assumes that that is realistically even possible.
Second, it assumes that the generalized interface will actually be abstract, so that a different provider can be put in.
Databases are an excellent example of this. They do in fact change on occasion. But excluding very simple usages I have never seen this go smoothly or quickly.
I saw one company that specifically claimed (marketing) that their product was database agnostic, and yet I saw the following:
1. The product could not meet performance goals by probably at least an order of magnitude.
2. Their 'schema' was obviously generated in such a way that it would make any DBA tasked with supporting it not happy at all (as the DBA I talked to reported).
3. They spent months on site trying to get it to work correctly and fast enough to be even possible for the company to use it. Still working on it when I left the company.
|
|
|
|
|
I stopped reading at “(marketing)”. 😊
Marketing still received their bonus?
Database compatibility layers are a whole different ball of yarn. Rarely do I ever have a second implementation, but I still like designing to the interface. (and keeping all dependency graphs one way)
|
|
|
|
|
Sadly I was confronted with legacy code where the developer thought it was a good idea to always use interfaces, which led to hundreds of extra files for very simple classes. Needless to say I was not amused.
|
|
|
|
|
I'd balk at the word "All".
While I do use lots of interfaces, even for classes that rarely get a second implementation, I don't use an interface for everything.
It's simply impossible now to say all your future "high-level classes" (whatever that means) need an interface or need to be injected.
And if you use an interface, do it right.
I've seen software that used interfaces like this:
public interface ISomething { ... }
public class Something : ISomething { ... }
ISomething something = new Something();
public Whatever DoSomeCalculations(Something something) { ... }
Their idea was that you could now easily write a Something2 and replace all new Something() with new Something2() if that was ever necessary.
If that's how you're going to "use" interfaces, you might as well not.
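For contrast, consuming the interface would look more like this (same hypothetical names as above):

// The consumer depends on the abstraction, not the concrete class:
public Whatever DoSomeCalculations(ISomething something) { ... }
// Now Something2 (or a test double) can be passed in without touching the consumer.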
And then there's the occasional idiot saying everything needs an interface.
So with interfaces it's like with many things in life: it depends.
|
|
|
|
|
Fantastic post! Great points and right along the lines of what I was thinking.
Thanks for the interesting discussion on this.
|
|
|
|
|
raddevus wrote: All high-level classes must depend only on Interfaces
That's one of the tenets of dependency injection because it makes for a testable and extensible design. We follow that guideline at my shop.
(Apologies if I misunderstood your comment.)
/ravi
|
|
|
|
|
Ravi Bhavnani wrote: That's one of the tenets of dependency injection because it makes for a testable and extensible design
Yep, the devs who know about this know it really is like that. It's a foundational idea of DI.
Thanks for commenting. I'm curious about:
1. how many developers really know that concept.
2. how many developers / shops actually use it.
3. how devs who work at shops where it is used like or dislike it.
The comments so far have been very interesting.
Have you, by chance, read that MS Unity PDF that I referenced in my original post?
It has some great info on DI, but it's so old, and further along the examples just jump into extreme details of using the Unity container. Oy! They should've made a smaller set of examples.
That's actually what I was trying to do with my latest article.
Thanks again for the conversation.
|
|
|
|
|
Our latest “fresh” developers think OOP is microservices… They cannot even refactor simple code:
Original code did X 5 times, why does the new code do X 4 times?
Even some of the more experienced group is too reliant on full solutions on StackOverflow, etc. They cannot take two topics and synthesize a unique solution.
|
|
|
|
|
raddevus wrote: how many developers really know that concept
I would have assumed devs with some experience would be aware of this. In our shop it's a given because you can't write a unit test with a mocked dependency without using this paradigm. It's also one of our pre-interview phone screen questions.
There's another subtle aspect to this, though: when using MEF, you can encounter a run-time failure (error constructing a service class) when any dependency in the chain fails to construct because of a missing [Export] attribute on a class in the dependency hierarchy. I didn't want our devs to have to manually check for this so I wrote a tool that reflects the codebase and identifies these broken classes.
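A stripped-down sketch of that failure mode (hypothetical types; the attributes are the standard ones from System.ComponentModel.Composition):

using System.ComponentModel.Composition;

public interface ILogger { void Log(string message); }
public interface IOrderService { }

// Missing [Export(typeof(ILogger))] here...
public class FileLogger : ILogger { public void Log(string message) { } }

[Export(typeof(IOrderService))]
public class OrderService : IOrderService
{
    [ImportingConstructor]
    public OrderService(ILogger logger) { }
    // ...so composing IOrderService blows up at run time (not compile time)
    // because MEF can't satisfy the ILogger import anywhere in the chain.
}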
/ravi
|
|
|
|