|
I'd say that many C# developers -- if given a choice -- would rather have multiple inheritance.
I doubt many C# developers are of the opinion that Interfaces are just the bestest thing ever.
I'd prefer to have both.
What most developers don't understand is that Interfaces enforce the "like a duck" requirement for Duck Typing.
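The point above can be sketched in a minimal C# example (the names `IQuacker`, `Duck`, etc. are mine, purely illustrative): the interface spells out the "like a duck" contract, so the compiler, rather than a runtime check, verifies that a type really quacks.

```csharp
using System;

// Hypothetical "duck" contract: any implementer must prove it can quack.
interface IQuacker
{
    string Quack();
}

class Duck : IQuacker
{
    public string Quack() => "Quack!";
}

class RobotDuck : IQuacker
{
    public string Quack() => "Beep-quack!";
}

class Program
{
    // Accepts anything that has proven, at compile time, that it quacks.
    static string MakeItQuack(IQuacker q) => q.Quack();

    static void Main()
    {
        Console.WriteLine(MakeItQuack(new Duck()));      // Quack!
        Console.WriteLine(MakeItQuack(new RobotDuck())); // Beep-quack!
    }
}
```

In a dynamically typed language you'd find out at runtime whether something quacks; here, a type that forgets `Quack()` simply doesn't compile.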
Super Lloyd wrote: only implemented once
I do that.
I'm with you on the rest.
I also write partial Interfaces. Muhahaha!
modified 7-Apr-20 20:35pm.
|
|
|
|
|
Partial interfaces... the new thing, with default method implementations for interfaces?
Looks good, better than extension methods!
Unfortunately I can't quite use them with .NET 4.7.2, I think (mm... I think there is a project setting to use them with .NET 4.7.2, but I have cold feet about that ^_^ )
|
|
|
|
|
Super Lloyd wrote: I think there is a project setting to use them with .NET 4.7.2
There isn't. You can enable C# 8 features in a .NET Framework project, so long as you're using a recent version of VS2019. But that doesn't mean everything will work.
Default interface members required changes to the runtime, which were not back-ported to .NET Framework. They will only work with .NET Core (including .NET 5 when it arrives).
C# 8.0 and .NET Standard 2.0 - Doing Unsupported Things[^]
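For reference, here is a minimal sketch of what a default interface member looks like (C# 8, runs only on .NET Core 3.0+ / .NET 5, per the caveat above; the `ILogger` names here are my own invention, not from the thread):

```csharp
using System;

interface ILogger
{
    void Log(string message);

    // Default implementation: implementers get this for free,
    // but may provide their own version.
    string FormatError(string message) => "ERROR: " + message;
}

class ConsoleLogger : ILogger
{
    // Only Log() must be implemented; FormatError() is inherited.
    public void Log(string message) => Console.WriteLine(message);
}

class Program
{
    static void Main()
    {
        // The default member is only reachable through the interface type,
        // not through the ConsoleLogger class itself.
        ILogger logger = new ConsoleLogger();
        logger.Log(logger.FormatError("disk full")); // ERROR: disk full
    }
}
```

This compiles under C# 8 even in a Framework project with the language version forced, but the Framework runtime cannot execute the default member, which is exactly why it wasn't back-ported.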
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
But hey, Windows Forms still hasn't died yet, so you can rest easy that staying on the last forward-moving version of the full Framework will carry you until close to retirement.
|
|
|
|
|
I agree that interfaces shouldn't categorically be seen as a best practice. Doing something because it makes mocks easier to implement is putting the cart before the horse. And there's already enough boilerplate that obfuscates the code.
Unit testing is great for libraries: a collection of disparate things. But if you're building a system whose components all cooperate, integration testing should be paramount.
However, (pure) virtual functions are vital in object models where polymorphism and/or inheritance are important. But that's a design abstraction similar to code reuse: if it happens only once, the abstraction isn't needed! If it happens a second time, you start thinking about it. And if it happens a third time, abstraction is called for, just like finding a way to have one instance of the code that would otherwise be copy-pasted into multiple locations.
|
|
|
|
|
I totally agree but I’m more of a believer in the YAGNI principle.
Rarely do abstractions spring to mind fully formed and ready for battle. It's better to wait until the natural flow of the project forces you to create those abstractions. On the other hand, if you wait too long you end up with many almost-repeating pieces of code. Knowing when to do it is the difference between a good designer and a mediocre one.
“The sculpture is already complete within the marble block, before I start my work. It is already there, I just have to chisel away the superfluous material.” - Michelangelo
|
|
|
|
|
I had to look up YAGNI (Martin Fowler: You Ain't Gonna Need It). I haven't read what he says about it, so I'll just say that sometimes abstractions can precede applications. There's a spectrum for this:
- When applications in the current release are being designed and it's clear that some abstractions are in order.
- When you've read specifications that will be implemented in the next release and can foresee the abstractions.
- When you can anticipate where the product will go. This is getting a bit dubious, so I usually stick to the first two.
The abstractions can then be made available before the applications are implemented. In the absence of this, refactoring will be needed later, which is great if the culture supports it. But managers usually favor the "If it ain't broke, don't fix it" rule and would prefer everyone to be beavering away on new features. You're lucky if you've got management that even believes in building a framework in the first place.
|
|
|
|
|
Isn't YAGNI also known as "The constant need for refactoring principle"?
|
|
|
|
|
Member 7989122 wrote: Isn't YAGNI also known as "The constant need for refactoring principle"?
My experience is that lack of refactoring is a cancer to a project. Refactoring should probably only stop when the project dies.
|
|
|
|
|
Interfaces, like all sorts of "contracts", defeat the agile philosophy. Maybe not if you ask a philosopher, but certainly if you ask an agile code developer.
Defining an interface / contract ties your hands and feet. You do not have the freedom to change that API whenever you feel like it, to whatever you think it should be today. Contracts are like the waterfall model: an attempt to foresee what the solution will look like before you start coding.
Setting up contracts / interfaces requires planning. It requires problem analysis and defining a solution architecture before you start coding. Such elements are devastating to the very idea of 'agile'.
On the other hand: I am not personally an agile evangelist. So I think setting up contracts, including interfaces, is an important part of the solution architecture work, done before you start coding.
In the agile congregations of today, you will rarely get acceptance for any such thought. 'Solution architecture' is what your code looks like when you have completed it. 'Interface' is the API you finally ended up with. For this version, that is. Hey, it is just a function declaration! You can't let that restrict what we do in the next version!
|
|
|
|
|
<InsertObligatoryToolsCanBeMisusedObservation />
Software Zen: delete this;
|
|
|
|
|
I love interfaces, but the problem isn't interfaces; it's that any good thing can be used wrongly/badly/uglily. So just because it can be done badly doesn't mean it's bad.
I've had colleagues who would never write a class in C# without a corresponding interface, most of which were never actually used for any proper purpose. That's just extra maintenance with zero benefit.
And one often sees the wrong things put in interfaces, e.g. implementation-specific info which makes no sense for other implementations of the interface.
And often interfaces contain too many things that should be separated out into multiple interfaces.
What I do find cool is that I can implement multiple interfaces in a single class.
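Both points, splitting fat interfaces into small ones and implementing several in one class, fit in one short sketch (names like `IReader`/`MemoryStore` are hypothetical, chosen just for illustration):

```csharp
using System;

// Two small, focused interfaces instead of one fat IStore.
interface IReader { string Read(); }
interface IWriter { void Write(string value); }

// One class can still implement both...
class MemoryStore : IReader, IWriter
{
    private string _value = "";
    public string Read() => _value;
    public void Write(string value) => _value = value;
}

class Program
{
    // ...but a consumer that only reads depends only on IReader,
    // and never sees Write() at all.
    static string Dump(IReader reader) => reader.Read();

    static void Main()
    {
        var store = new MemoryStore();
        store.Write("hello");
        Console.WriteLine(Dump(store)); // hello
    }
}
```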
|
|
|
|
|
IMHO, most software is way over-architected and is more about showcasing some gee-whiz-bang new and totally unnecessary library than actually solving the problem. These things are really nice maintenance nightmares for those who follow behind - I love having to flip through a dozen code files to get to the actual implementation, though.
|
|
|
|
|
I'm with you on this; it's very rare that I find a good use for interfaces. More often than not they become must-inherit classes with some base functionality, because I absolutely hate redundant code.
I think there has been a huge push to use interfaces for dependency injection crap. The downside is that it's harder to debug and takes longer to develop.
I've been designing software since the mid-90s and I've seen trends come and go (and come back), but in the long run, the K.I.S.S. methodology is the thing I go back to: will making this an interface make it easier or harder for me to figure out in 3 years when I have to revisit this code?
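For what it's worth, the case where an interface does pay its way is when a second real implementation already exists, e.g. a fake for tests. A minimal constructor-injection sketch (all names here, `IClock`, `Greeter`, etc., are hypothetical):

```csharp
using System;

// The interface exists because there are genuinely two implementations.
interface IClock
{
    DateTime Now { get; }
}

class SystemClock : IClock
{
    public DateTime Now => DateTime.UtcNow;
}

// The second implementation that justifies the abstraction:
// a fixed clock for deterministic tests.
class FixedClock : IClock
{
    private readonly DateTime _value;
    public FixedClock(DateTime value) => _value = value;
    public DateTime Now => _value;
}

class Greeter
{
    private readonly IClock _clock;
    public Greeter(IClock clock) => _clock = clock; // injected dependency

    public string Greet() => _clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
}

class Program
{
    static void Main()
    {
        var greeter = new Greeter(new FixedClock(new DateTime(2020, 4, 7, 9, 0, 0)));
        Console.WriteLine(greeter.Greet()); // Good morning
    }
}
```

If `SystemClock` were the only implementation that would ever exist, the interface would just be the "extra maintenance with zero benefit" complained about above.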
|
|
|
|
|
Matt McGuire wrote: I think there has been a huge push to use interfaces for dependency injection crap. The downside is it's harder to debug and takes longer to develop.
So true. I mean, DI can do some amazing things, but man, when you go to debug things it just slows you down so much.
|
|
|
|
|
I'm not a fan of interfaces either. I recently needed to develop plugins for a wide variety of applications, and interfaces, by their very nature, are contracts with the originating app. It's like having shackles on a plugin that you want to distribute freely. The app shouldn't know anything about a plugin; it's the plugin that needs to know about the app's eco-system.
Danno
|
|
|
|
|
Would it be an alternative to switch to DDD, Documentation Driven Development? It is a sibling of TDD, Test Driven Development, but even before you write the tests, which under TDD you write before the code, you write the documentation!
Once you have documented all the externally visible method signatures (as well as the implementation architecture, data structures, etc.) and then written tests based on these, the need for declaring a compilable interface definition is significantly reduced. It won't take you any more resources to start out with the documentation instead of delaying it until the coding is complete. Quite the contrary: good documentation may help speed up coding, when you know where everything fits into The Big Picture. (I take it for granted that you do write proper documentation of the system you create.)
(And then I will return to doing the last checks of the documentation of my new hobby project, so that I may start coding it tomorrow.)
|
|
|
|
|
Documentation is good to have before implementing the feature. Having everything spelled out beforehand is useful for the product owner, tester, and developer. It gives a contract of what to expect. I did this once at a company I worked for. The product owner gave us a set of requirements for a feature. We, the developers would flesh it out into a specification, including the UI, how the UI worked (buttons, sliders, inputs, screen layout, tabs, etc.), underlying algorithms, workflows (user input, processing, output, formatting), file storage formats, etc. This would be kept and later, parts would become user documents, functional documents, and technical documents.
The downside to this is that in future releases, when we needed to change the features, these documents would need updating. Usually, that was more work than when we first created them.
|
|
|
|
|
In the late 1960s, a Norwegian research institute built a prototype for a 16-bit mini - "mini" in those days meaning a full-height 19" rack. Around 1971, one grand old company was going to commercialize this design. As always, they had all the documentation printed (500 copies) before the real production was started.
Then it was discovered that the guys doing the documentation had made a couple of errors: for the shift instructions, they had mixed up the bit selecting rotational shift with the don't-care bit, and the description of address calculation where the base register (i.e. the stack frame pointer) was involved was incorrect. But the documentation was already made, and the machines were not. So rather than having to revise 500 copies of the documentation, they decided to build the machine to behave the way the documentation described it.
This is even more crazy considering that two companies were invited to commercialize the design. The other company (a small startup company where they hardly knew the meaning of the word 'documentation') copied the prototype design. So there you had two Norwegian-built machines, with identical instruction set except for rotational shift and identical addressing mechanisms except for stack relative addressing.
Both machine series survived for quite a few years, but I never saw a single piece of software that was made to run on both machines. No common compiler with a switch to select either the "KV" or "ND" CPU variant. They ended up in completely non-overlapping market segments, rather than the intention of establishing a solid Norwegian computer industry with two manufacturers sharing architecture and design resources. Thanks to a tech writer who didn't understand the things he was documenting.
|
|
|
|
|
The big problem is that the interface seems to NEVER live in the correct place. The interface should be either in the project of the consumer (for true Inversion of Control) or in a shared project, such that it could be consumed by multiple front ends.
All too often a class is written and then an interface is abstracted from it, which is to say the horse is pushing the cart.
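The "interface lives with the consumer" idea can be sketched like this (namespaces and type names are hypothetical, just to show the direction of the dependency):

```csharp
using System;

// --- Consumer project: owns the abstraction it needs ---
namespace App.Core
{
    public interface IOrderStore
    {
        void Save(string order);
    }

    public class OrderService
    {
        private readonly IOrderStore _store;
        public OrderService(IOrderStore store) => _store = store;
        public void Place(string order) => _store.Save(order);
    }
}

// --- Infrastructure project: references App.Core, never vice versa ---
namespace App.Infrastructure
{
    public class InMemoryOrderStore : App.Core.IOrderStore
    {
        public int Count { get; private set; }
        public void Save(string order) => Count++;
    }
}

class Program
{
    static void Main()
    {
        var store = new App.Infrastructure.InMemoryOrderStore();
        new App.Core.OrderService(store).Place("widget");
        Console.WriteLine(store.Count); // 1
    }
}
```

Here the implementation conforms to the consumer's contract, the "horse pulls the cart"; abstracting an interface from an already-written class reverses that dependency direction.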
|
|
|
|
|
|
OriginalGriff wrote: they took their time doing it though
They'll claim "incubation period", I'm sure.
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
|
|
|
|
David Icke?
That explains a lot.
I met that pr1ck when he was trying to build a career as a TV "personality". A little self-obsession is normal, in the entertainments field, but this guy genuinely believed that the world revolved around him and the Sun shone out of his @rse (so even then he had zero grasp on science).
He quit his TV endeavours because he was too good for TV (he arrived at that conclusion about a year after everyone stopped hiring him because of his combination of being both useless at the job and unbearable to work with), and started on the conspiracy theory trail, which is the best way for incompetent bullsh1tters to pick up mentally deranged worshippers.
He's the kind of self-absorbed, "truth is less important than I am" w@nker who makes the world worse no matter what he does, because everything he does is to get attention for himself, and he doesn't care who gets hurt in the process.
I do hope that the police are looking into conspiracy to commit criminal damage charges.
I wanna be a eunuchs developer! Pass me a bread knife!
|
|
|
|
|
Mark_Wallace wrote: He's the kind of self-absorbed, "truth is less important than I am" w@nker who makes the world worse no matter what he does, because everything he does is to get attention for himself, and he doesn't care who gets hurt in the process.
There are many more people of this kind than we actually need, and some of them can really do / are doing a lot of damage.
Mark_Wallace wrote: I do hope that the police are looking into conspiracy to commit criminal damage charges.
It would be a good start. But I still doubt that the ones who should be sitting in the dock will ever see the inside of a courtroom.
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
Not a fan then!
|
|
|
|