|
The perfect is the enemy of the good.
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
|
|
|
|
|
I've found that for most shops, the right approach is: do it sensibly fast first, then shore it up.
Managers want to see deliverables as soon as they can, and are usually less concerned about it working perfectly the first go-round, at least in my experience.
Most of the time, what PMs need (aside from sensible scheduling and devs who deliver on time) is a way to see progress. That matters more the higher up the ladder you go.
So progress it is. Make something fast, sensibly. Then iteratively develop it.
When I'm on my own I don't necessarily do it this way.
Just my $0.02
Real programmers use butterflies
|
|
|
|
|
OK, this makes sense. But how do you define "sensibly" when even eliciting baseline requirements ("...what are you looking for...") is like pulling teeth from a newborn child?
When Quality Counts...
|
|
|
|
|
Now you know why I don't work in software anymore.
Real programmers use butterflies
|
|
|
|
|
"Art Code is never finished, only abandoned." - Leonardo da Vinci
|
|
|
|
|
Over the years, in several software shops, I have seen reams of crappy code done quickly. Sooner or later, problems crop up, and I (and others) have to fix them or add new functionality.
No comments, no good exception handling, no null checks - a total lack of engineering by some coder who thought they were hot stuff because they could churn out crappy code that only works when everything is perfect.
A good engineering practice is to capture coding patterns as snippets that can be copied and pasted, patterns that add value over the life of the app. Things like:
- KISS (Keep It Simple, Self) - because you are not Stupid.
- Apply this concept to all you do: "The more they overhaul the plumbing, the easier it is to stop up the drain."
- Intelligent logging that does not affect performance and can be dynamically turned on and off for various types of things to log.
- Comments that explain the "what" and "why" of the code, not the "how" (the code already shows the how)
- Variable and method/property names match what they are or do, respectively.
- Genericize and put in a shared library when possible.
- Write unit tests AFTER you write the code, make them test real-world conditions, and do NOT use mock data.
- Don't put everything on one line just because the "dots" let you.
- Avoid "using" when you can use try-catch-finally instead. A try/finally is what the compiler generates from your "using" anyway, so why not write it yourself and get the catch for free? (See the sketch after this list.)
- If some person or article or company says "code this function this way", and you do not understand the why, how, and what, don't use it. Including my list.
- Use your brain, not Entity Framework.
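To illustrate the try-catch-finally point above: in C#, a "using" statement is lowered by the compiler to a try/finally that calls Dispose. A minimal sketch of writing that expansion out by hand (the file name and the handler are hypothetical, purely for illustration):

```csharp
using System;
using System.IO;

class UsingVsTryFinally
{
    static void Main()
    {
        const string path = "example.txt"; // hypothetical file name, for illustration

        // using (var reader = new StreamReader(path)) { ... } compiles down
        // to roughly the try/finally below. Writing it out by hand lets you
        // put the catch in the same construct instead of nesting the using
        // inside a separate try/catch.
        StreamReader reader = null;
        try
        {
            reader = new StreamReader(path);
            Console.WriteLine(reader.ReadLine());
        }
        catch (IOException ex)
        {
            Console.Error.WriteLine($"Could not read {path}: {ex.Message}");
        }
        finally
        {
            reader?.Dispose(); // what "using" would have done for you
        }
    }
}
```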
|
|
|
|
|
Wow! Sounds like a manifesto for starting a development war.
I agree with a lot of it, especially about writing unit tests after rather than before, principally because it will annoy all those agile types. I really have no idea what "The more they overhaul the plumbing, the easier it is to stop up the drain." means, but I'm all for it!
Regards,
Rob Philpott.
|
|
|
|
|
Thanks, and I agree (though I had no intent to start a developer war). I do apply agile as it was intended (see https://www.linkedin.com/pulse/agile-principles-from-traditional-american-view-jeff-jones/). No doubt that article will annoy the agile types who only operate from the logical fallacy of "appeal to authority".
The quote is from an old Star Trek movie (Star Trek III - The Search for Spock, if I remember correctly), where Scotty says it to Captain Kirk and Bones. Science fiction and trekkie lore aside, it is a great statement about not making something so complicated (the more they overhaul the plumbing) that it is more brittle and prone to failure (the easier it is to stop up the drain).
|
|
|
|
|
MSBassSinger wrote: Genericize and put in a shared library when possible.
This 100%. Even if creating a library is overkill, separation of concerns is a solid tool for making code easier to use, extend, and understand.
|
|
|
|
|
Mostly agree with this. Any clues about logging though? I'll log caught exceptions, and if I've had to add a debug print statement, I'll change the print statement to logging at debug level, and that's about it.
When working in groups, I've rarely made use of logs, as there is too much crap in them to work out what's relevant. On my own code, I tend not to even have a logging framework.
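For what it's worth, the "intelligent logging" idea from the earlier list can be sketched without any framework at all. The point is that the level check costs a single comparison when logging is off, and message construction is deferred until we know the entry will be kept. All names below are illustrative, not from any particular library:

```csharp
using System;

// All names here are illustrative, not from any particular framework.
enum LogLevel { Debug, Info, Warning, Error }

static class Log
{
    // Can be flipped at runtime, e.g. by a config watcher or admin command.
    public static LogLevel Threshold = LogLevel.Warning;

    public static void Write(LogLevel level, Func<string> message)
    {
        if (level < Threshold)
            return; // logging is off for this level: no string built, no I/O
        Console.Error.WriteLine($"[{level}] {message()}");
    }
}

class Demo
{
    static void Main()
    {
        // Skipped entirely while Threshold is Warning - the lambda never runs:
        Log.Write(LogLevel.Debug, () => $"state dump: {Environment.TickCount}");

        Log.Threshold = LogLevel.Debug; // dynamically turned on
        Log.Write(LogLevel.Debug, () => "now this one is emitted");
    }
}
```

Mature frameworks (log4net, NLog, Microsoft.Extensions.Logging) offer the same pattern via their level checks plus runtime reconfiguration, which addresses the "too much crap in the logs" problem by letting you keep most categories off until you need them.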
|
|
|
|
|
"It's always a balance."
If you develop to perfection and simply stop when the time or money runs out, the result is incomplete.
If you develop to what you can complete within the constraints of time and money, the result is inadequate.
In neither case have you satisfied the user or customer. This survey doesn't apply to projects built for fun, where the resources are limited only by you.
You must balance the customer's needs with what you can provide given the constraints of time and money. Balance is achieved through dialogue and negotiation with them. Sometimes you find out a feature is a 'want' rather than a 'need', and can be postponed to a future effort. Other times you discover that the schedule is flexible, and they can give you that extra six weeks to finish an option, or they've got some discretionary money so that you can add more staff to the project.
This is one of those "soft skills" (like writing) that most tech folks regard with contempt, but would do well to learn. I'll freely admit I'm not nearly as good at it as I'd like.
Software Zen: delete this;
|
|
|
|
|
OR you could take the Microsoft approach: get it done as fast as possible and let the users debug it for you.
the reason you can't make things foolproof is that fools are so ingenious...
|
|
|
|
|
I have a feeling all of us would be greatly surprised at how thoroughly Microsoft tests. When you have a user base measured in the billions, a few thousand of them experiencing a problem is enough to get you pilloried in the media. That must wear on people after a while.
Software Zen: delete this;
|
|
|
|
|
|
Given the scope of the testing problem, 'crowdsourcing' one part of their testing process makes sense. It's the only way to get exposure to a statistically meaningful portion of their customer base.
I don't find the author's argument convincing. One of the constant complaints back in the "Good Old Days" was: "Microsoft must not be doing any testing at all! I tried running the most recent update on my Pentium II from Joe's Computer Shop and Shoe Store, and now it crashes all the time." Well, of course it does. As the story goes, at one point Microsoft had an entire building containing thousands of computers of various makes and models running tests, along with the employees to wrangle those machines. Much of the testing and related handling was automated, but eventually they realized they were chasing a target that kept accelerating and receding into the distance. The testing methodology was losing effectiveness through sheer numbers.
It simply isn't practical to resume that kind of testing. I have no idea what kind of QA or vetting process is used before a change makes it into an Insider build, but I'm sure there is one. I wouldn't be surprised to find there's a pile of servers, running an even bigger pile of virtual machines, running an immense set of regression and smoke tests on Insider builds before they're released.
Software Zen: delete this;
|
|
|
|
|
|
Gary Wheeler wrote: If you develop to what you can complete within the constraints of time and money, the result is inadequate.
Unless you've got a very patient client amenable to spiraling costs, that's exactly what you have to do. The result should only be inadequate if the costings/planning were inadequate.
Regards,
Rob Philpott.
|
|
|
|
|
Rob Philpott wrote: The result should only be inadequate if the costings/planning were inadequate.
True. My experience has been that too many software types concentrate on the technical problems to be solved, and don't like to get involved in the planning. They seem to look at the process as inherently adversarial rather than participatory.
I've been lucky the last few years to work for a boss that has weaned me away from that attitude. He lets us know the constraints and wants to know what's necessary. My job is to figure out options and costs (usually time) for each.
Software Zen: delete this;
|
|
|
|
|
I just remembered a training course on assuring the quality of software architecture. I told the group about my "perfectionist" colleague and the way he used to design software architecture.
I was told by 2 trainers and 24 other participants to **immediately stop** calling him a perfectionist and to call him an over-engineer instead. Their way of formulating it was: "don't design your software for the one and only case where Mars and the Moon are at the right angle..."
I will never forget that. First cover what the software really needs to do, and create it "as easy as possible" - so that you still understand it some years on, when some user really has met Mars and the Moon at that very angle. THEN expand your easy-to-understand software.
Just to recall how my story started: it was a training on **software architecture quality**!!
|
|
|
|
|
... if he wanted to do it so well that nobody would find fault with what he has done.
So goes an old saying.
|
|
|
|
|
The thing is, if you are experienced, you can hopefully see where you can skip a bit now and still have no problem going back and updating those parts later, because they are well divided or layered. You can concentrate on the big picture, knowing that you don't yet have the ultimate implementations of the bits you are slotting into that architecture, but that (to the degree possible) the improvements can be made internal to the slotted-in bits. Then you can attack the internals of those once it's all fitted together and you can see how it works in the real world, before you start getting precious about the details.
The same sorts of strategies that allow for encapsulation and layering in the long term can also be put to use here. In some cases you can even have the parts just simulate what they will ultimately be doing, and make sure the overall interface between them is going to be sufficient.
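A minimal sketch of that idea, with hypothetical names chosen purely for illustration: the simulated implementation exercises the interface so the overall shape can be validated before the real internals exist.

```csharp
using System;
using System.Collections.Generic;

// The interface the rest of the system codes against.
interface IPriceSource
{
    decimal GetPrice(string symbol);
}

// A stand-in that just simulates the eventual behaviour. It is good enough
// to wire the rest of the system together and prove the interface is
// sufficient; the real implementation can replace it later without
// touching any caller.
class SimulatedPriceSource : IPriceSource
{
    private readonly Dictionary<string, decimal> _canned =
        new Dictionary<string, decimal> { ["ABC"] = 10.00m, ["XYZ"] = 42.50m };

    public decimal GetPrice(string symbol) =>
        _canned.TryGetValue(symbol, out var price) ? price : 0m;
}

class PortfolioReport
{
    private readonly IPriceSource _prices;
    public PortfolioReport(IPriceSource prices) => _prices = prices;

    public void Print(string symbol) =>
        Console.WriteLine($"{symbol}: {_prices.GetPrice(symbol)}");
}

class Program
{
    static void Main()
    {
        // Swap in the real price source here once its internals exist.
        var report = new PortfolioReport(new SimulatedPriceSource());
        report.Print("ABC");
    }
}
```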
Explorans limites defectum
|
|
|
|
|
It's quicker in the long term: you get less bugs to track down, you have much easier mods to do when the spec changes (and it will, it will ...), and you get much, much cheaper maintenance when it does go wrong.
After a while, it works out pretty much as quick as the quick and dirty "who cares about quality?" method.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
...you get fewer bugs...
There are degrees of 'rightness'. One should have, at a minimum, a decently-tiered setup with separation of app/persistence/logic layers. A controller should not be opening a db context to persist data, for example (unlike the herp-a-derp idiots I work with); it should collect that information and pass it to a layer which specializes in persistence.
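A hypothetical sketch of that separation (the types and names are invented for illustration, not from any specific framework): the controller collects the input and delegates, and only the repository knows how data is stored.

```csharp
using System;

// Names and types here are invented for illustration, not from any framework.
class Order
{
    public int Id;
    public decimal Total;
}

// The persistence specialist: only this layer knows how data is stored.
interface IOrderRepository
{
    void Save(Order order);
}

class InMemoryOrderRepository : IOrderRepository
{
    public void Save(Order order) =>
        Console.WriteLine($"persisted order {order.Id} (total {order.Total})");
}

// The "controller" collects the information and delegates; no db context here.
class OrderController
{
    private readonly IOrderRepository _orders;
    public OrderController(IOrderRepository orders) => _orders = orders;

    public void Submit(int id, decimal total) =>
        _orders.Save(new Order { Id = id, Total = total });
}

class App
{
    static void Main() =>
        new OrderController(new InMemoryOrderRepository()).Submit(1, 99.95m);
}
```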
Most of us are also writing software for a business, so it's not purely an academic exercise. At some point, we need to deliver something.
Sometimes we should present the options to the business people. For example: "If we reference an enumerated value to make this decision, and you want another value in the future, that's a code change. However, we could add an attribute to that data and build an administrative function to make it totally flexible. That would take about X days."
The business can choose to go the quick-and-dirty route, or might foresee other uses for the new functionality, and want to turn it on and off. I'm pretty agnostic to their choice. As long as I recognize that, present them with options, and document their decision, I'm pretty o.k. with whatever the outcome is.
So I guess my synopsis is, I generally agree with Griff, if we understand that there is a business element.
|
|
|
|
|
This quote applies to many things in life, not just programming.
|
|
|
|
|
To quote Krusty the Clown... "It's not just good, it's good enough."
|
|
|
|
|