|
But when calculating WTF/minute you must not forget to account for the possibility of extended recreational breaks after each WTF
(points at himself)
|
|
|
|
|
If the code performs its intended function with efficient code that is easy to understand and maintain, it's of high quality.
#SupportHeForShe If your actions inspire others to dream more, learn more, do more and become more, you are a leader.-John Q. Adams
You must accept 1 of 2 basic premises: Either we are alone in the universe or we are not alone. Either way, the implications are staggering!-Wernher von Braun
Only 2 things are infinite, the universe and human stupidity, and I'm not sure about the former.-Albert Einstein
|
|
|
|
|
Duncan Edwards Jones wrote: but is there any reference for what the range is typical of / acceptable for
real-world applications? Most companies I worked for would oppose sharing such information, assuming anyone had it in the first place.
I find it useless for judging entire applications; but if you look at sections of your code, you might find places where it spikes where you might not expect it.
For an entire application, the number of possible paths can grow enormously without much real impact; think of adding another add-in that saves the current document in "just another format".
You might indeed want to include the bug count, LOC, average lines per method, number of types, number of namespaces, number of FxCop violations, number of compiler warnings, and profile things such as speed and memory usage.
That also makes those numbers rather project- and team-related.
Speaking of the subject, it would be nice to have some of those calculated for the articles here
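The point about complexity spiking in particular sections can be made concrete. Here's a minimal sketch (Python, purely illustrative — not one of the .NET tools discussed in this thread, and the `cyclomatic_complexity` helper is hypothetical) that approximates McCabe's cyclomatic complexity by counting decision points in a snippet's AST:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity: 1 + one point per decision node.

    A simplified approximation of McCabe's metric, counting branch
    points (if/for/while/except/boolean ops/ternaries) in a snippet.
    """
    tree = ast.parse(source)
    decision_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                      ast.BoolOp, ast.IfExp)
    return 1 + sum(isinstance(node, decision_nodes) for node in ast.walk(tree))

flat = "def f(x):\n    return x + 1\n"
branchy = (
    "def g(x):\n"
    "    if x > 0:\n"
    "        return 1\n"
    "    elif x < 0:\n"
    "        return -1\n"
    "    return 0\n"
)
print(cyclomatic_complexity(flat))     # 1: a single straight-line path
print(cyclomatic_complexity(branchy))  # 3: if + elif add two decision points
```

Run per method rather than per application and the spikes stand out, which is exactly where the "take a look at sections" advice pays off.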
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
If your big ball of mud is anywhere close to the size of my big ball of mud, I would recommend rebuilding the app from scratch
|
|
|
|
|
Every developer anywhere always wants to build it again from scratch
Actually, I just did that for a customer's project... I plead guilty
|
|
|
|
|
If you're saying it could be just my subjective perception that my application is a super-sized ball of mud that can't be rescued but only replaced by a whole new solution - then you're lucky, because I won't show you the source; I don't want to be held liable for your mental state
|
|
|
|
|
|
I know there's nothing left to speak of, but I fear your family could smell a chance and put the blame on me
|
|
|
|
|
Let me just leave this opinion[^] from Joel Spolsky here, shall I?
|
|
|
|
|
Thank you for the link, Jörgen - an interesting read! And it probably applies to a lot of "those cases". If you're not concerned about your peace of mind, I'll show you the source of my old program and you will acknowledge that there are exceptions - or at least one.
/Sascha
|
|
|
|
|
As others have said, quality is subjective.
The most objective ways to measure are using the existing tools that have been created:
Memory Leak Testing:
- Valgrind
Performance Tuning:
- Cachegrind
- Callgrind
- The profiling tools in Visual Studio
There are a number of static analysis tools:
Klocwork: You configure the tool with a coding standard, such as JSF++, MISRA, or your own custom rules, and it analyzes for potential issues.
Lattix: Evaluates the coupling of the different modules and reports how modularly your code is organized.
Lines of code is useful if you combine that information with other statistics that you maintain, such as the number of defects, the volatility of the code for particular modules and the amount of time a developer spends modifying code in those modules.
Tools can help you identify issues, and sometimes even point towards possible solutions.
However, I have mostly witnessed people expecting to run the tools and, like a magic wand, have everything fixed.
Tools cannot fix a social problem.
Ultimately, the value you get from the tools is related to how much time you want to invest in learning them and effectively using them.
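The suggestion to combine LOC with defect counts and code volatility could look something like this toy sketch (the module names, numbers, and thresholds are all made up for illustration, not from any real tool):

```python
# Combining LOC with defect counts and churn: defect density alone
# says little, but paired with volatility it highlights hotspots.
modules = {
    # name: (lines_of_code, defects_found, commits_last_quarter)
    "parser":   (4_000, 24, 61),
    "exporter": (1_200,  2,  3),
    "ui":       (9_500, 18, 12),
}

for name, (loc, defects, commits) in modules.items():
    density = defects / (loc / 1000)        # defects per KLOC
    hotspot = density > 4 and commits > 20  # arbitrary example thresholds
    print(f"{name:8s} {density:5.1f} defects/KLOC"
          + ("  <- review candidate" if hotspot else ""))
```

Here only `parser` trips both thresholds: high defect density *and* heavy recent churn. That kind of cross-referencing is where the raw numbers start earning their keep.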
|
|
|
|
|
I would actually be interested in an experiment (maybe a hackathon) where everyone was asked to write the same code, people would then look at the code and judge it by quality, and then apply these metric tools to see if they correlate.
Or, I could take some C# code that some other idiot wrote and benchmark it against my cleaned-up version and see what the difference is. That would also be interesting, to see how the numbers vary. Might even apply it to some code I have where I'm the idiot.
Marc
|
|
|
|
|
I'd personally have no problem at all running the standard .NET code metrics against all my code on this site.... could be an interesting exercise.
|
|
|
|
|
Duncan Edwards Jones wrote: I'd personally have no problem at all running the standard .NET code metrics against all my code on this site.... could be an interesting exercise.
Exactly, but what I want is a more objective understanding of how the metric changes when I "improve" the code. Maybe even taking a small piece of code and applying, say, some basic design patterns to it, would be interesting.
What I would find a lot more useful in these metrics is if the code analyzer could actually say "here the design pattern foo is being applied which is good" and "here, design pattern fizbin ought to be applied." Now that might be interesting!
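One concrete way a metric moves when you "improve" code: replacing an if/elif dispatch with a table-driven (Strategy-style) lookup removes one decision point per case. A hypothetical sketch (not analyzer output; the export helpers are invented):

```python
def export_branchy(doc: str, fmt: str) -> str:
    # Cyclomatic complexity grows by one with every new format added
    if fmt == "pdf":
        return f"pdf:{doc}"
    elif fmt == "html":
        return f"html:{doc}"
    elif fmt == "txt":
        return f"txt:{doc}"
    raise ValueError(fmt)

# Strategy-style table dispatch: adding a format no longer adds a branch
EXPORTERS = {
    "pdf":  lambda doc: f"pdf:{doc}",
    "html": lambda doc: f"html:{doc}",
    "txt":  lambda doc: f"txt:{doc}",
}

def export_table(doc: str, fmt: str) -> str:
    try:
        return EXPORTERS[fmt](doc)
    except KeyError:
        raise ValueError(fmt)

print(export_branchy("report", "html"))  # html:report
print(export_table("report", "html"))    # html:report
```

Same behaviour, but the branchy version's complexity climbs with each format while the table version stays flat - exactly the kind of before/after comparison the metric tools could be run against.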
Marc
|
|
|
|
|
Marc Clifton wrote: "here, design pattern fizbin dustbin ought to be applied."
FTFY
|
|
|
|
|
Duncan Edwards Jones wrote: I'd personally have no problem at all running the standard .NET code metrics against all my code on this site....
Maybe the hamsters would have a problem. It requires a lot of exercise for the code that is currently available.
Some might even die in the attempt. I'm not sure whether that is morally justifiable.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Marc Clifton wrote: I would actually be interested in an experiment (maybe a hackathon)
Up to this point I hoped you would propose a contest for the ugliest piece of code - I would already have won it.
/Sascha
|
|
|
|
|
Interesting!
|
|
|
|
|
I'm a great fan of KLOCs. That's a measure of the purity of code per thousand litres of coffee.
|
|
|
|
|
I don't care about objective measures of code that make me want to SUBJECTIVELY PUKE when I work on it.
Red/Green/Refactor your way to happiness...
Make this commitment: every file you touch to fix a bug, you will clean up. You will make it at least 10% better. And use the tests along the way; try to fix as many bugs in one file as you can.
Eventually you will make your way through all of the files.
One must eat an elephant in bite-sized pieces.
The sheer pride of knowing you are chipping away at the Technical Debt should carry you forward.
I promise you, in a year's time, you will look back and laugh!
|
|
|
|
|
Sadly, if the coupling level is too high to allow for the creation of unit tests then "red/green/refactor" is as scary as having open heart surgery on a roller coaster.
|
|
|
|
|
It would help if there was a definition of code quality, but there isn't even that. Code might be defect-free (in the sense of working as designed), but still not fit for use. Code could be fit-for-use, but so complex as to be unmaintainable. There are numerous dimensions of that slippery concept called "quality".
Cyclomatic complexity doesn't measure code quality, it measures code complexity. The hypothesis is that there is a positive correlation between complexity and defect density. Most people think of measures of defects when they think about quality, but defects are not the only or even necessarily the best measure of quality. Defects have the advantage that we can detect them, count them, and graph them on charts. Dimensions like maintainability are pretty squishy. And dimensions like fitness for use are measurable in principle, but so expensive to measure that most teams don't bother (and more's the pity).
With cyclomatic complexity, what you do is look for the parts of your code with high complexity and test the snot out of those parts, because bugs lurk there. You can also use cyclomatic complexity as a measure of where to focus refactoring and abstracting efforts. A team might go so far as to require special review of any code checked in whose cyclomatic complexity exceeded a particular value. But there's no number that is "too big". Maybe the problem is just that hard.
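The "special review above a threshold" policy described above is easy to sketch (the module names and the threshold of 10 are assumptions for illustration, not a recommendation):

```python
# Flag checked-in functions whose measured complexity exceeds a
# team-agreed threshold, so they get extra review and extra tests.
THRESHOLD = 10

# Hypothetical per-function complexity numbers, as a real analyzer
# might report them.
complexities = {
    "billing.compute_tax": 17,
    "billing.format_invoice": 4,
    "auth.check_password": 12,
}

needs_review = sorted(
    name for name, cc in complexities.items() if cc > THRESHOLD
)
print(needs_review)  # ['auth.check_password', 'billing.compute_tax']
```

The gate doesn't say the flagged code is wrong - maybe the problem really is that hard - it just routes scarce testing and review effort toward the places where bugs are statistically more likely to lurk.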
|
|
|
|
|
- Prepare some chocolate chip cookies the night before
- Place cookie or cookies on the metal base directly under the cooling vent on the underside of the monitor
- Build and load test your latest creation. Test it hard.
- Enjoy the now warm cookie complete with melted chocolate chips.
cheers
Chris Maunder
|
|
|
|
|
I guess you really can have your cake cookie and eat it, too
Tom
|
|
|
|
|
Chris Maunder wrote: Test it hard. Enjoy the now warm cookie complete with melted chocolate chips. Yummm. Debugging never smelled so good.
There are only 10 types of people in the world, those who understand binary and those who don't.
|
|
|
|