|
I'd balk at the word "All".
While I do use lots of interfaces, even for classes that rarely get a second implementation, I don't use an interface for everything.
You simply can't say today that all your future "high-level classes" (whatever that means) will need an interface or will need to be injected.
And if you use an interface, do it right.
I've seen software that used interfaces like this:
public interface ISomething { ... }
public class Something : ISomething { ... }
ISomething something = new Something();
public Whatever DoSomeCalculations(Something something) { ... }
Their idea was that you could now easily write a Something2 and replace all new Something() with new Something2() if that was ever necessary.
If that's how you're going to "use" interfaces, you might as well not: DoSomeCalculations still takes the concrete Something, so swapping in Something2 would break every such signature anyway.
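Done right, the consumer depends on the interface rather than the concrete class. A minimal sketch (same hypothetical types):
ISomething something = new Something();
public Whatever DoSomeCalculations(ISomething something) { ... }
Now a Something2 : ISomething really can be swapped in without touching any consuming code.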
And then there's the occasional idiot saying everything needs an interface.
So with interfaces it's like with many things in life: it depends.
|
Fantastic post! Great points, and right along the lines of what I was thinking.
Thanks for the interesting discussion on this
|
raddevus wrote: All high-level classes must depend only on Interfaces
That's one of the tenets of dependency injection because it makes for a testable and extensible design. We follow that guideline at my shop.
(Apologies if I misunderstood your comment.)
/ravi
|
Ravi Bhavnani wrote: That's one of the tenets of dependency injection because it makes for a testable and extensible design
Yep, the devs who know about this know that's exactly how it works. It's a foundational idea of DI.
Thanks for commenting. I'm curious about:
1. how many developers really know that concept
2. how many developers / shops actually use it.
3. how devs who work at shops where it is used, like or dislike it.
The comments so far have been very interesting.
Have you, by chance, read that MS Unity PDF that I referenced in my original post?
It has some great info on DI, but it's so old, and further along the examples just jump into extreme detail about using the Unity container. Oy! They should've made a smaller set of examples.
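For anyone who hasn't seen Unity, the core of it really is just a few lines. A minimal sketch using the modern Unity package's API and made-up types:
using Unity;

public interface ISomething { }
public class Something : ISomething { }

public static class Demo
{
    public static void Main()
    {
        var container = new UnityContainer();

        // Map the interface to a concrete implementation...
        container.RegisterType<ISomething, Something>();

        // ...then let the container construct the object graph.
        ISomething something = container.Resolve<ISomething>();
    }
}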
That's actually what I was trying to do with my latest article.
Thanks again for the conversation.
|
Our latest “fresh” developers think OOP is microservices… They cannot even refactor simple code:
Original code did X 5 times, why does the new code do X 4 times?
Even some of the more experienced developers are too reliant on full solutions from StackOverflow, etc. They cannot take two topics and synthesize a unique solution.
|
raddevus wrote: how many developers really know that concept
I would have assumed devs with some experience would be aware of this. In our shop it's a given because you can't write a unit test with a mocked dependency without using this paradigm. It's also one of our pre-interview phone screen questions.
There's another subtle aspect to this, though: when using MEF, you can encounter a run-time failure (an error constructing a service class) when any dependency in the chain fails to construct because of a missing [Export] attribute on a class in the dependency hierarchy. I didn't want our devs to have to check for this manually, so I wrote a tool that reflects over the codebase and identifies these broken classes.
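As a rough sketch of the general idea (illustrative only, not the actual tool; real MEF matching also involves contract names, not just types): collect every contract type that something [Export]s, then flag [ImportingConstructor] parameters that nothing exports.
using System;
using System.ComponentModel.Composition;
using System.Linq;
using System.Reflection;

public static class MefExportAudit
{
    public static void Audit(Assembly assembly)
    {
        var types = assembly.GetTypes();

        // Every contract type that at least one class exports.
        var exported = types
            .SelectMany(t => t.GetCustomAttributes<ExportAttribute>()
                              .Select(a => a.ContractType ?? t))
            .ToHashSet();

        foreach (var type in types)
        {
            var ctor = type.GetConstructors().FirstOrDefault(c =>
                c.GetCustomAttributes<ImportingConstructorAttribute>().Any());
            if (ctor == null) continue;   // not composed via constructor injection

            foreach (var parameter in ctor.GetParameters())
                if (!exported.Contains(parameter.ParameterType))
                    Console.WriteLine($"{type.Name}: no [Export] found for {parameter.ParameterType.Name}");
        }
    }
}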
/ravi
|
Article? Or owned by the employer?
|
Sorry, the code is owned by my company so can't be shared.
/ravi
|
"because you can't write a unit test with a mocked dependency without using this paradigm."
An alternative is to use generic programming aka "static polymorphism", and inject dependencies via template parameters. No need for interfaces. Not saying this is a good choice, but it is certainly a choice.
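To make that concrete, a minimal sketch. In C++ the dependency would be a bare template parameter with no interface anywhere; C# generics still need a constraint to call members, so the closest C# analogue keeps a small interface but, with struct implementations, the JIT specializes the generic and devirtualizes the calls (made-up types):
using System;

public interface IClock { DateTime Now(); }   // exists only as a generic constraint

public struct SystemClock : IClock { public DateTime Now() => DateTime.UtcNow; }
public struct FixedClock  : IClock { public DateTime Now() => new DateTime(2024, 1, 1); }

// The dependency is baked in via the type parameter, not passed through a constructor.
public class Reporter<TClock> where TClock : struct, IClock
{
    private TClock _clock;   // default(TClock); fine for these stateless clocks
    public string Stamp(string message) => $"{_clock.Now():O} {message}";
}

// Production: new Reporter<SystemClock>()
// Unit test:  new Reporter<FixedClock>() -- deterministic, no mocking framework needed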
|
That can lead to run-time errors, because you have to ensure you call the correct overload with the correctly mocked dependency for every method you want to test. It's safer to inject the required mocks (once) into a non-overloaded constructor of the system being tested, because those mocks are then guaranteed to be used for all the methods being tested.
/ravi
|
The compiler takes care of calling the correct overload. I really don't understand the problem.
Pros of the generic solution:
- no virtual function overhead
Con:
- the interface contract is more implicit
Other than that, both approaches are about equally complex and difficult to debug. Better if mocking is not used unless necessary.
|
hpcoder2 wrote: Better if mocking is not used unless necessary.
How would you unit test a service without mocking its dependencies?
/ravi
|
Quite easily. Options include:
1. Black box testing - test the assembled class with its dependencies, based on whatever attributes are publicly visible. 90% of the time this is all that is needed.
2. White box testing - test the assembled class with its dependencies, but also declare internal state as protected, and have the test fixture inherit from the class being tested (see the sketch below).
3. White box testing - instead of declaring the internal attributes protected, declare an internal class Test and make it a friend of the class being tested. The actual implementation of the test class can be deferred to the unit test code.
I have used all of the above in a unit testing environment, and they are way simpler to understand, debug and otherwise maintain than dependency-injected/mocked code.
The only time mocking is really needed is when it is impractical to instantiate the dependency in the CI environment. Examples might include a full database, or something that depends on network resources.
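A minimal C# sketch of option 2 (made-up types; option 3's friend class is a C++ feature, with InternalsVisibleTo as the rough C# equivalent):
using System;

public class Accumulator
{
    protected int _total;                    // internal state, visible to the test subclass
    public void Add(int value) { _total += value; }
}

// The test fixture inherits from the class under test so it can inspect internals.
public class AccumulatorWhiteBoxTest : Accumulator
{
    public void Add_UpdatesInternalTotal()
    {
        Add(2);
        Add(3);
        if (_total != 5) throw new Exception("expected internal total of 5");
    }
}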
|
IMHO, member friendship violates Liskov.
hpcoder2 wrote: The only time mocking is really needed is when it is impractical to instantiate the dependency in the CI environment. Examples might include a full database, or something that depends on network resources.
And that's often the case when testing enterprise systems that include several cooperating independent subsystems. That's the case at my shop.
/ravi
|
I have no problem with the independent subsystems being mocked. There are relatively few of these.
In examples I've seen, every single class implements an interface, and every interacting class is mocked, leading to triple the number of classes and code that is a nightmare to read and/or debug. Way too much!
Re friendship violating Liskov: then so much the worse for Liskov. Friendship has its place and its uses, but it shouldn't be overused, just like global variables, mutable members and dependency injection.
|
hpcoder2 wrote: I have no problem with the independent subsystems being mocked. There are relatively few of these.
Right. In our codebase, services tend to have at most about 3-4 dependencies (independent services).
hpcoder2 wrote: In examples I've seen, every single class implements an interface
Ouch. I agree that's overkill.
/ravi
|
Ravi Bhavnani wrote: That's one of the tenets of dependency injection because it makes for a testable and extensible design.
Those are buzzwords, however. It is like saying that the code should be 'readable'.
Has anyone measured, with objective measurements, how successful that is?
How do you create a design that is 'extensible' when you do not know what business will be like in 5 years? Or 20?
What are you testing exactly? How do you measure it? Are bugs in production compared to those in QA and those in development? Does your testing cover not only simple unit testing but complex scenarios? What about failover testing? What about production (not QA) testing?
Do you have actual injection scenarios that test different cases? This is possible in certain situations, such as performance testing specific code. But it must be planned for and then actually used in an ongoing way.
|
jschell wrote: Those are buzzwords, however. It is like saying that the code should be 'readable'.
Requiring dependencies be defined as interfaces simply means their implementations can be changed at any time, as long as they adhere to the contract of the interface. That makes it possible to inject mocks (for testing) and improve/extend the functionality of a dependency without having to rewrite the consumer.
It's basic software engineering, not rocket science or a buzz word.
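As a concrete example, a minimal sketch (made-up types): the consumer names only the interface, so a test can hand it a deterministic fake.
using System;

public interface IPriceFeed { decimal GetPrice(string symbol); }

// The consumer never names a concrete feed; one is injected via the constructor.
public class PortfolioValuer
{
    private readonly IPriceFeed _feed;
    public PortfolioValuer(IPriceFeed feed) { _feed = feed; }
    public decimal Value(string symbol, int quantity) => _feed.GetPrice(symbol) * quantity;
}

// Test double: no network, no market data, fully deterministic.
public class FakePriceFeed : IPriceFeed
{
    public decimal GetPrice(string symbol) => 10m;
}

// In a unit test: new PortfolioValuer(new FakePriceFeed()).Value("ABC", 3) should be 30m.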
/ravi
|
Ravi Bhavnani wrote: It's basic software engineering, not rocket science or a buzz word.
Sigh... yes, I understand how it is supposed to work.
I also understand in detail how Agile, Waterfall, project planning, work life balance and even designs are supposed to work.
The question is not about what should happen but whether it is actually happening.
Does it provide actual value that offsets the complexity?
|
jschell wrote: Does it provide actual value that offsets the complexity?
Yes, I believe it provides several benefits (at the cost of slightly increasing the size of the project and lines of code):
- Contractual obligation: Interfaces define a contract that classes must adhere to. By requiring a class to implement an interface, you ensure that it provides specific functionality or behaviors as defined by that interface. This promotes consistency and predictability in your codebase.
- Polymorphism: Interfaces enable polymorphic behavior in C#. When a class implements an interface, instances of that class can be treated as instances of the interface. This allows for greater flexibility in designing systems where different objects can be used interchangeably based on their common interface.
- Code reusability: By implementing interfaces, classes can share common functionality without being directly related in terms of inheritance. This promotes code reuse and modular design, as multiple classes can implement the same interface to provide similar behavior.
- Decoupling and DI: Interfaces facilitate loose coupling between components. Code that depends on interfaces is not tied to specific implementations, making it easier to change or extend functionality without affecting other parts of the codebase. This also enables dependency injection, where objects are passed into a class via interfaces, allowing for easier testing and maintenance.
- Design patterns: Interfaces are integral to many design patterns such as Strategy, Observer and Factory. Requiring classes to implement interfaces enables the use of these patterns, leading to more maintainable and scalable code (see the sketch after this list).
- Documentation and readability: Interfaces serve as documentation for the expected behavior of classes. When a class implements an interface, it's clear what functionality it provides without needing to inspect the implementation details ("what vs. how"). This improves code readability and makes it easier for devs to understand and work with the codebase.
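As an illustration of the Strategy point above, a minimal sketch (made-up types): two interchangeable algorithms behind one interface, swappable without touching the consumer.
public interface IShippingStrategy { decimal Cost(decimal weightKg); }

public class GroundShipping : IShippingStrategy
{
    public decimal Cost(decimal weightKg) => 5m + 0.5m * weightKg;   // flat fee + per-kg rate
}

public class AirShipping : IShippingStrategy
{
    public decimal Cost(decimal weightKg) => 12m + 1.5m * weightKg;
}

public class Quoter
{
    private readonly IShippingStrategy _strategy;   // swappable without modifying Quoter
    public Quoter(IShippingStrategy strategy) { _strategy = strategy; }
    public decimal Quote(decimal weightKg) => _strategy.Cost(weightKg);
}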
/ravi
|
As far as I can tell, you are still telling me about what it is supposed to do.
|
jschell wrote: As far as I can tell, you are still telling me about what it is supposed to do.
Sorry, I don't understand.
Software engineering best practices don't magically do anything by themselves. Developers have to use them correctly in order to benefit from them. Your statement is a bit like saying "Object oriented design has no benefits because it doesn't do what it's supposed to do." If you don't use object oriented programming principles correctly, you're not going to enjoy any of its benefits. It's the same with agile development practices (which IMHO very few organizations follow correctly).
/ravi
|
Which again is just stating how it is supposed to work.
As I asked you in the very first post that I made ...
Has anyone measured, with objective measurements, how successful that is?
You are claiming that it is successful. Not that it could be but rather that it is. So how did you measure that?
|
jschell wrote: You are claiming that it is successful. Not that it could be but rather that it is. So how did you measure that?
By measuring our sprint velocity and bug counts.
/ravi
|
Ravi Bhavnani wrote: and bug counts.
You originally responded (quoted) to the following:
"All high-level classes must depend only on Interfaces"
Are you claiming that interfaces and nothing else reduced bug counts?
Versus, and not the same as, a large number of other code and process (not just coding) methods? And wasn't it the sum total of those that reduced bug counts?
And what was your time period and measurement? For example: you started with no processes in place in Jan of 2021, having measured your production bug rate for the preceding year (back to Jan of 2020). Then you implemented the new processes, and now your production bug rate is 50% less? Or 90% less?
Specifically, what are those numbers?
(I might note that I spent 15 years doing significant/principal work in process control procedures, so I am in fact rather knowledgeable about the theory, the practice and the reality of doing this.)
|