IMHO, a member friendship violates Liskov.
hpcoder2 wrote: The only time mocking is really needed is when it is impractical to instantiate the dependency in the CI environment. Examples might include a full database, or something that depends on network resources. And that's often the case when testing enterprise systems that include several cooperating independent subsystems. That's the case at my shop.
/ravi
I have no problem with the independent subsystems being mocked. There are relatively few of these.
In examples I've seen, every single class implements an interface, and every interacting class is mocked, leading to triple the number of classes, and a nightmare to read and/or debug the code. Way too much!
Re friendship violating Liskov: so much the worse for Liskov. Friendship has its place and uses, but it shouldn't be overused - just like global variables, mutable members and dependency injection.
hpcoder2 wrote: I have no problem with the independent subsystems being mocked. There are relatively few of these. Right. In our codebase, services tend to have at most about 3-4 dependencies (independent services).
hpcoder2 wrote: In examples I've seen, every single class implements an interface Ouch. I agree that's overkill.
/ravi
Ravi Bhavnani wrote: That's one of the tenets of dependency injection because it makes for a testable and extensible design.
Those are buzz words however. It is like saying that the code should be 'readable'.
Has anyone measured, with objective measurements, how successful that is?
How do you create a design that is 'extensible' when you do not know what business will be like in 5 years? Or 20?
What are you testing exactly? How do you measure it? Are bugs in production compared to those in QA and those in development? Does your testing cover not only simple unit testing but complex scenarios? What about failover testing? What about production (not QA) testing?
Do you have actual injection scenarios that exercise different cases? This is possible in certain situations, such as performance-testing specific code. But it must be planned for and then actually used in an ongoing way.
jschell wrote: Those are buzz words however. It is like saying that the code should be 'readable'. Requiring dependencies be defined as interfaces simply means their implementations can be changed at any time, as long as they adhere to the contract of the interface. That makes it possible to inject mocks (for testing) and improve/extend the functionality of a dependency without having to rewrite the consumer.
It's basic software engineering, not rocket science or a buzz word.
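For instance, here's a minimal sketch (all names hypothetical) of a consumer that depends only on an interface, so the implementation can be swapped or mocked without touching it:

using System.Collections.Generic;
using System.Linq;

public interface IPriceService
{
    decimal GetPrice(string sku);
}

// Production implementation; in real life this would hit a catalog or database.
public sealed class CatalogPriceService : IPriceService
{
    public decimal GetPrice(string sku) => 0m; // real lookup elided
}

// The consumer knows only the interface, so a mock can be injected in tests
// and the production implementation can be replaced without rewriting this class.
public sealed class OrderCalculator
{
    private readonly IPriceService _prices;

    public OrderCalculator(IPriceService prices) => _prices = prices;

    public decimal Total(IEnumerable<(string Sku, int Qty)> lines) =>
        lines.Sum(l => _prices.GetPrice(l.Sku) * l.Qty);
}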
/ravi
Ravi Bhavnani wrote: It's basic software engineering, not rocket science or a buzz word.
Sigh...yes I understand how it is supposed to work.
I also understand in detail how Agile, Waterfall, project planning, work life balance and even designs are supposed to work.
The question is not about what should happen but whether it is actually happening.
Does it provide actual value that offsets the complexity?
jschell wrote: Does it provide actual value that offsets the complexity? Yes, I believe it provides several benefits (at the cost of slightly increasing the size of the project and lines of code):
- Contractual obligation: Interfaces define a contract that classes must adhere to. By requiring a class to implement an interface, you ensure that it provides specific functionalities or behaviors as defined by that interface. This promotes consistency and predictability in your codebase.
- Polymorphism: Interfaces enable polymorphic behavior in C#. When a class implements an interface, instances of that class can be treated as instances of the interface. This allows for greater flexibility in designing systems where different objects can be used interchangeably based on their common interface (see the sketch after this list).
- Code reusability: By implementing interfaces, classes can share common functionality without being directly related in terms of inheritance. This promotes code reuse and modular design, as multiple classes can implement the same interface to provide similar behavior.
- Decoupling and DI: Interfaces facilitate loose coupling between components. Code that depends on interfaces is not tied to specific implementations, making it easier to change or extend functionality without affecting other parts of the codebase. This also enables dependency injection, where objects are passed into a class via interfaces, allowing for easier testing and maintenance.
- Design patterns: Interfaces are integral to many design patterns such as Strategy, Observer and Factory. Requiring classes to implement interfaces enables the use of these patterns, leading to more maintainable and scalable code.
- Documentation and readability: Interfaces serve as documentation for the expected behavior of classes. When a class implements an interface, it's clear what functionality it provides without needing to inspect the implementation details ("what vs. how"). This improves code readability and makes it easier for devs to understand and work with the codebase.
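To make the polymorphism and Strategy points concrete, a tiny sketch (hypothetical names):

using System;

public interface IShippingCost
{
    decimal CostFor(decimal weightKg);
}

public sealed class FlatRateShipping : IShippingCost
{
    public decimal CostFor(decimal weightKg) => 9.99m;
}

public sealed class PerKgShipping : IShippingCost
{
    public decimal CostFor(decimal weightKg) => 2.50m * weightKg;
}

public static class CheckoutDemo
{
    public static void Main()
    {
        // The caller sees only the interface; either strategy can be swapped in.
        IShippingCost strategy = new PerKgShipping();
        Console.WriteLine(strategy.CostFor(3.2m)); // prints 8.000
    }
}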
/ravi
As far as I can tell, you are still telling me what it is supposed to do.
jschell wrote: As far as I can tell, you are still telling me what it is supposed to do. Sorry, I don't understand.
Software engineering best practices don't magically do anything by themselves. Developers have to use them correctly in order to benefit from them. Your statement is a bit like saying "Object oriented design has no benefits because it doesn't do what it's supposed to do." If you don't use object oriented programming principles correctly, you're not going to enjoy any of its benefits. It's the same with agile development practices (which IMHO very few organizations follow correctly).
/ravi
Which again is just stating how it is supposed to work.
As I asked you in the very first post that I made ...
Has anyone measured, with objective measurements, how successful that is?
You are claiming that it is successful. Not that it could be but rather that it is. So how did you measure that?
jschell wrote: You are claiming that it is successful. Not that it could be but rather that it is. So how did you measure that? By measuring our sprint velocity and bug counts.
/ravi
Ravi Bhavnani wrote: and bug counts.
You originally responded (quoted) to the following
"All high-level classes must depend only on Interfaces"
Are you claiming that interfaces and nothing else reduced bug counts?
As opposed to (and not the same as) a large number of other code and process (not just coding) methods, whose sum total reduced bug counts?
And what was your time period and measurement? So, for example, you started with no processes in place in Jan of 2021, and you measured your production bug rate for the prior year (back to Jan of 2020). Then you implemented the new processes, and now your production bug rate is 50% less? Or 90% less?
Specifically, what are those numbers?
(I might note that I spent 15 years doing significant/principal work in Process Control procedures, so I am in fact rather knowledgeable in the theory, the practice and the reality of doing this.)
jschell wrote: Are you claiming that interfaces and nothing else reduced bug counts? Obviously not.
jschell wrote: I am in fact rather knowledgeable in the theory, the practice and the reality of doing this. Yes, we're all very impressed by your intellect.
/ravi
Ravi Bhavnani wrote: Yes, we're all very impressed by your intellect.
And yet I do know what I am talking about.
While you keep responding, you fail to answer the original questions:
- How did you specifically measure the improvement?
- What was your specific improvement?
I know that the answers to both those questions are available if in fact you are following stringent Process Control processes.
And those processes would demonstrate that your original claim is in fact correct.
Versus, as I said, applying it with the expectation of an improvement without any backing for that. That supposition would not be unique to you, of course. I have seen many people make the claim, but they were unaware of the vast body of work that does in fact show that improvements (objectively measured) are possible.
But only if one does the actual work, including the Process Control processes.
jschell wrote: While you keep responding, you fail to answer the original questions:
- How did you specifically measure the improvement?
- What was your specific improvement? Actually, I did answer your questions. Perhaps you missed reading my reply of 27-Feb-2024 10.46. Here it is again:
Ravi Bhavnani wrote: By measuring our sprint velocity and bug counts. The increase in sprint velocity showed we were able to release new and modified functionality consistently faster (without growing our team), and the reduction in issues per new feature/enhancement spoke to better code quality brought about by the increase in the number of unit tests. Defining our injected dependencies as interfaces (vs. concrete classes) makes it easier to write unit tests because it's trivial to mock them. This is especially true when many of our service dependencies are implemented by other teams.
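As a concrete (and hypothetical) illustration of why that matters, here's the kind of test the interface makes trivial - the dependency owned by another team is replaced by a hand-rolled fake:

using Xunit;

public interface IExchangeRates
{
    decimal Rate(string from, string to);
}

// Hand-rolled fake; no real service (or even a mocking library) needed.
internal sealed class FixedRates : IExchangeRates
{
    public decimal Rate(string from, string to) => 1.25m;
}

public sealed class InvoiceConverter
{
    private readonly IExchangeRates _rates;

    public InvoiceConverter(IExchangeRates rates) => _rates = rates;

    public decimal Convert(decimal amount, string from, string to) =>
        amount * _rates.Rate(from, to);
}

public class InvoiceConverterTests
{
    [Fact]
    public void Convert_multiplies_by_the_injected_rate()
    {
        var converter = new InvoiceConverter(new FixedRates());
        Assert.Equal(125m, converter.Convert(100m, "USD", "CAD"));
    }
}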
/ravi
First of course those are not numbers.
Second I responded to you and said the following...
"Are you claiming that interfaces and nothing else reduced bug counts?"
Ravi Bhavnani wrote: The increase in sprint velocity showed we were able to release new and modified functionality
Yes, one can modify coding practices and development processes to achieve various measurable benefits.
But you cannot do that solely by using interfaces.
Please read the Original Post.
It says nothing about a comprehensive rework of coding practices and development processes.
It says the only thing being required is the use of interfaces.
jschell wrote: First of course those are not numbers. Do you mean sprint velocity, code coverage and bug counts aren't numbers?
jschell wrote: But you cannot do that solely by using interfaces. Correct. And I didn't imply you could. Read my comment of 17-Feb-24 13:11 again.
/ravi
Ravi Bhavnani wrote: Do you mean sprint velocity, code coverage and bug counts aren't numbers?
No. Those are measurable quantities which over time can reflect a change.
A number is 3. You could state that you improved 30%. Or that your average bug count went from 30 to 3.
Those are numbers.
Ravi Bhavnani wrote: Correct. And I didn't imply you could. Read my comment of 17-Feb-24 13:11 again.
Stating it again....
The OP said that the ONLY thing requested was adding interfaces. Nothing else.
You responded directly to that, you quoted that line from the OP.
And stated you "follow that guideline at my shop."
You did not qualify that by suggesting that it was part of a larger process that would lead to better code.
Perhaps you meant to encompass the entire development process but that is NOT what you said in your original post.
It only makes sense to me when it can be more than one "is a". Other than that, it's another "ritual". And all you keep doing is going back and forth between (one) "implementation" and interface; until it's obvious one needs (or can benefit from) an interface.
Then you also have to deal with the school that says "no inheritance"; which in essence means no "base methods"; virtual or otherwise. Another pointless ritual that only becomes "real" because someone "ordered" it; or can't decide when it is appropriate. See: "abstract" TextBoxBase.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
I do like the rule of no concrete super/base classes. One concrete type extending another concrete type always causes grief down the road when someone adds a third concrete type into the mix.
englebart wrote: One concrete type extending another concrete type
The problem there, however, is overuse of inheritance.
The solution is to use composition instead.
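Roughly what I mean (hypothetical names) - wrap the behavior instead of subclassing a concrete type:

using System;

public interface IRepository
{
    void Save(string record);
}

public sealed class SqlRepository : IRepository
{
    public void Save(string record) { /* write to the database */ }
}

// Composition: AuditedRepository wraps any IRepository rather than
// inheriting from the concrete SqlRepository.
public sealed class AuditedRepository : IRepository
{
    private readonly IRepository _inner;

    public AuditedRepository(IRepository inner) => _inner = inner;

    public void Save(string record)
    {
        Console.WriteLine($"AUDIT: saving {record}");
        _inner.Save(record);
    }
}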
Whoever gave that directive is a man after my own heart.
It's extreme, to be sure. Realistic to literally follow 100% of the time? Probably not. But as an aspiration, a philosophy - absolutely.
If you do this, you will be able to grow and scale your products effortlessly for decades - basically for as long as the programming language you use is supported - without a rewrite.
Nuget packages, even entire application frameworks will come and go, yet your core code will be snug as a bug in a rug, wrapped in layers of abstraction that shield it from the chaos.
When your favorite library is deprecated, revealed to have a critical vulnerability, or the vendor jacks up the price on you, you scoff at how simple it is to assign someone to find a replacement and write the wrapper layer - completely independently of everyone else. Your customer tells you the application you designed for Azure now needs to run on AWS? "No problem", you say, "give me a week." Microsoft decides to make 100 new breaking changes to ASP.NET Core? Bah! The upgrade takes an hour.
You will never be stuck relying on proprietary technology outside of your control ever again. The term "technical debt" won't even be part of your vocabulary.
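A rough sketch of the kind of seam I mean (hypothetical names; the vendor SDK calls are deliberately elided):

using System;
using System.Threading.Tasks;

// Core code sees only this interface, never the vendor SDK,
// so moving from Azure to AWS means writing one new class.
public interface IBlobStore
{
    Task SaveAsync(string key, byte[] data);
    Task<byte[]> LoadAsync(string key);
}

public sealed class AzureBlobStore : IBlobStore
{
    public Task SaveAsync(string key, byte[] data)
    {
        // call the Azure SDK here
        return Task.CompletedTask;
    }

    public Task<byte[]> LoadAsync(string key)
    {
        // call the Azure SDK here
        return Task.FromResult(Array.Empty<byte>());
    }
}

// An AwsBlobStore would implement the same interface; nothing else in the codebase changes.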
So yes. Those who know, do this.
Yep, I think you nailed the truth of it. It is definitely a core practice to follow.
Very few people think of software dev in these terms.
I'm guessing you've built or worked on extremely large projects?
|
Peter Moore - Chicago wrote: If you do this, you will be able to grow and scale your products effortlessly for decades - basically for as long as the programming language you use is supported
And have you actually done that?
I have worked on multiple legacy products and never seen anything like that.
At a minimum I can't see it happening in any moderate to large business unless the following was true
- Dedicated high level architect (at least director level) whose job is technical not marketing. The person enforces the design.
- Same architect for a very long time. With perhaps a couple other architects trained solely by that individual.
- Very strict controls on bringing in new idioms/frameworks.
- Very likely extensive business requirements, from the beginning, to support multiple different configurations. That would ensure the initial design actually supports them.
What I have seen is that, even in a company that started with a known requirement to support multiple different implementations in rapid order (about a year), new hires decided to implement their own generalized interface on top of the original design without accounting for all the known (not hypothetical) variants, making the addition of the newer variants a kludge of code layered on top of what the new hires did.
jschell wrote: At a minimum I can't see it happening in any moderate to large business unless the following was true
- Dedicated high level architect (at least director level) whose job is technical not marketing. The person enforces the design. You make a good point. It takes an experienced technical team to lay down guidelines like these.
Over the past 20 years I've worked mostly at early stage companies with very experienced small teams, each of which was tasked with implementing portions of a larger complex product. Because requirements are almost always less known early in a product's evolution, using the technique of enforcing interface definitions allows the code to naturally evolve as the requirements change and become more solidified. Coupled with a strict regimen of writing automated unit and integration tests, defensive programming designs like these increase the chances of developing a complex app with fewer bugs.
/ravi