|
I am updating a NuGet package that, as an ancillary function, generates SQL and C# code. I tend to agree with your post, but do have a couple of deviations.
Quote: it's almost always stupid to try to support generated code that a user has modified
"Almost" being the key word here. By generating stored procedures, C# POCO classes, and C# manager/factory classes that tie the SPs and POCO instance creation together, I save the developer a lot of time on the initial creation of the classes and SQL code. If the developer wishes to change them to suit his or her particular approach, it still takes less time than writing the code from scratch, especially with "find and replace" in the IDE. The only downside is that if the developer has made so many changes that the SPs, POCOs, or factory classes have to be re-generated, any manual edits have to be redone. Good planning can avoid most, if not all, of that.
Quote: Commenting the generated code is pointless
I find it useful to add comments to my generated C# code so that
1) IntelliSense can show brief explanations of methods and parameters, and
2) The user can understand why I coded something the way I did.
I find it useful to add comments to my generated T-SQL code so the developer understands why I coded that way, and what value there is to it.
That said, yes, it does take more of my time, but I want to give any users of my NuGet packages some additional info. I even do sample WinForms apps and a readme Word document to assist. Senior developers might not need them, but less experienced developers might benefit.
|
Putting doc comments in the generated code is a good idea, but we part ways when it comes to where to document the "whys" - I prefer to document them in the generator, or the generator's rules, whichever is more appropriate.
But also it sounds like you and I tend to use generated code differently. You're doing it more "JUICE" style, where you create a package and the user includes it to handle all the boilerplate stuff based on some settings (or DB schemas or whatever); then they take that project and use it as a starting point.
I tend to generate code from spec files, and the code is purely generated from those spec files, including being regenerated when a spec file changes.
Most of my comment has to do with that latter method. I agree with you about a lot of your post, especially with the way you are using code generation. It's just that I don't typically use it that way. =)
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
|
So, how do you go about setting breakpoints and debugging?
I wrote a code generator, but made sure that the output is clear and understandable with comments, because it is intended to be modified, debugged, and never generated again. If anyone needs to add a field or function, it will have to be done manually.
I'm glad I don't have to work with you.
|
I don't do generation that way, and if I did, it had better round-trip rather than break my business rules engine or otherwise go against my master specs.
You don't want to have to work with me? I don't want to have to work with code that's nothing like the functional specs for it anymore.
|
I do the opposite when I generate code. When emitting code, I automatically indent (really, it's fairly trivial) and even emit occasional comments.
The reason is very simple -- I do not do so so that users can maintain or modify the code; I do it so that I can maintain and modify the code generator. Like any other code you write, code that you generate automatically may need to be corrected for errors, large and small. Just because you do not directly maintain the generated code does not mean that the code generator does not need to be maintained. Clean, formatted generated code helps substantially with that.
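To make that concrete, here is a minimal sketch in C (purely illustrative; `Emitter`, `emit`, and `generate_getter` are invented names, not from any real generator) of a generator that tracks an indent level and writes a comment above each emitted function, so that a bug in the generator shows up as obviously malformed output:

```c
#include <stdio.h>
#include <string.h>

/* Minimal emitter: tracks an indent level and appends each line
   of generated code at the right depth. */
typedef struct {
    char buf[4096];
    int  indent;
} Emitter;

static void emit(Emitter *e, const char *line) {
    for (int i = 0; i < e->indent; i++)
        strcat(e->buf, "    ");
    strcat(e->buf, line);
    strcat(e->buf, "\n");
}

/* Generate a trivial accessor. The output is indented and commented,
   so a generator bug is easy to spot just by reading its output. */
static void generate_getter(Emitter *e, const char *field) {
    char line[256];
    snprintf(line, sizeof line, "/* Auto-generated: returns the %s field. */", field);
    emit(e, line);
    snprintf(line, sizeof line, "int get_%s(const Record *r) {", field);
    emit(e, line);
    e->indent++;
    snprintf(line, sizeof line, "return r->%s;", field);
    emit(e, line);
    e->indent--;
    emit(e, "}");
}
```

Calling `generate_getter` with field `"age"` produces a commented, properly indented `get_age` function as text in `buf`; the point is that the formatting logic lives once, in the generator.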
And if, in one or eleven years, your code generator is no longer in use (or possibly not even available) but the generated code is still running, then it DOES need to be maintained directly.
Bottom line, treat generated code exactly like any other code that you write.
BTW, GOTO statements are fine regardless -- just use them to replace missing control structures. Every language has gaps, including C and C++. For example, break and continue only apply to the innermost loop. I quite frequently need to break out of two or (more rarely) three levels. Using a GOTO is the only choice when the language is insufficient.
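For what it's worth, a minimal C illustration of that multi-level break (the grid search and the name `find_in_grid` are made up for the example): `break` would only leave the inner loop, so a single forward `goto` stands in for the missing construct.

```c
#include <stdio.h>

/* Search a 3x3 grid for a target value. On a hit we need to leave
   BOTH loops at once; break would only exit the inner one, so a
   forward goto acts as a two-level break. */
int find_in_grid(const int grid[3][3], int target, int *row, int *col) {
    int found = 0;
    for (int r = 0; r < 3; r++) {
        for (int c = 0; c < 3; c++) {
            if (grid[r][c] == target) {
                *row = r;
                *col = c;
                found = 1;
                goto done;   /* exit both loops in one step */
            }
        }
    }
done:
    return found;
}
```

Languages with labeled breaks (Java, Go, Rust) make this a non-issue; in C and C++ the forward `goto` is the conventional answer.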
|
I hear you, and to a degree I agree. I do format my code, and I don't go out of my way to make the generated code cryptic, but I do concentrate on maintaining the generator. If it generates code that needs to be tweaked, then it's the generator that needs to be tweaked, IMO.
I'm running into that right now with a lexer generator I made that I'm using as a pre-build step in another project.
I fixed the lexer generator. =)
|
I'm of the camp that you don't reach for new tools or one-off implementations that are like nothing you've done in the past.
I'm of the camp that you don't worry about performance until you actually have evidence that you have a performance issue or you really know in advance that performance is the driving factor of a design.
I'm of the camp that you start off simple, write a clean implementation, and if you need to bring in other technologies like AWS lambda, distributed computing, whatever, you can do so once there's evidence that you *need* to do so. Again, unless that need is quantifiable up front and becomes the driving factor for the design.
I *am* biased, and if someone is going to suggest a totally different tool than what's currently in the chest, I want it vetted and I want that person to be responsible for documenting it.
So in writing this, I realized something. In this impromptu design meeting, which was pretty much about implementation ideas for a solution, it became clear that the requirements are incomplete. And with an incomplete understanding of the requirements (by which I don't mean that one needs a *complete* set of requirements, just something sufficient for the major talking points), you can have as many ideas as you want, but they are all pretty much worthless.
|
I agree somewhat.
Sometimes performance requirements are obvious before you even start coding, e.g. a high-volume website that needs to be sensible with resources. However, I'd be preaching that the perf should be baked in at the architecture level, not the line-by-line optimisation level; i.e. settle on a server framework that's been proven to be fast, bake in caching, and make sensible database design decisions. I wouldn't be optimising your sorting routines just yet, though.
I 100% agree with not bringing in any technology until it's needed. We've had many, many projects fail because they were over-engineered / overly complex. Instead of prototyping in Python, or building a monolithic prototype in .NET Core, systems were using literally dozens of frameworks and technologies spread over dozens of web services, and 90% of the effort ended up in DevOps, not Dev. Instead of focusing on the goal, the focus was on the projects.
The one point I will differ with you is that incomplete requirements don't faze me that much. We had a recent project that failed because a year was spent analysing requirements. If we'd just thrown something out there, played around and then worked out what the actual real world requirements were, we'd be far ahead. Instead we tried to predict the future. There's a point where you hold your breath and jump.
cheers
Chris Maunder
|
Chris Maunder wrote: incomplete requirements don't faze me that much.
Oh, no disagreement there! It was actually one of my points -- let's start with a simple and clean approach, get some datapoints on how well it's working, and then decide if a more involved solution is required.
In my particular case, because performance kept coming up, it was revealing (and probably worth a short investigation) that the handwaving and opinions from everyone could be reduced with a couple of well-measured facts.
It's also somewhat annoying that there is no clear decision maker in this process. (Of course I want that person to be me!) But the way the game is being played is, let's get everyone's opinion and watch them argue / flail / wave unsubstantiated opinions like banners at a jousting competition. I actually very much dislike that approach, but I also recognize I could be very much in the wrong or minority with my dislike.
Sigh. There's the balance between "we use these technologies; does this problem warrant introducing a new one?" And there's the balance between how much time we spend investigating concrete datapoints that can guide an implementation vs. just trying it and seeing what happens. In this case, it's really a ridiculously simple process that can run autonomously; there are just a couple of specific things it needs to do.
It's funny, I actually have a conservative approach with regards to introducing newfangled technology and a rather aggressive approach to "let's leap and see what happens." I think though, that I'm comfortable leaping because I've learned how to write the actual code in a way that, if a course correction is required, it's not usually a big deal. The problem is, I don't trust other people to have that skill!
|
Marc Clifton wrote: it seemed that the amount of handwaving and opinions by everyone could be reduced with a couple well-measured facts
There's your problem right there. You're letting facts get in the way of a good story.
cheers
Chris Maunder
|
Chris Maunder wrote: incomplete requirements
The reason many developers fear incomplete requirements is that they know they're the ones who lose when the blame game starts.
|
Marc Clifton wrote: I'm of the camp that you don't reach new tools or one-off implementations that are like nothing you've done in the past.
Then how do you innovate or advance?
Marc Clifton wrote: I'm of the camp that you don't worry about performance until you actually have evidence that you have a performance issue or you really know in advance that performance is the driving factor of a design.
Performance is always a factor. Do you also not bother about security until that becomes a factor? I know what you're getting at is that you shouldn't "gold plate" things, but some things are too hard to retrofit, so you should think about them even if they are not explicit requirements.
|
F-ES Sitecore wrote: Then how do you innovate or advance?
By investing in R&D. That shouldn't be part of normal production.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
Ain't nobody got time fo' that
|
Money you mean, since time is for sale.
|
F-ES Sitecore wrote: Then how do you innovate or advance?
By making an informed decision (ok, can-of-worms) rather than just someone saying "oh, here's something I've played with that will work perfectly." Riiight. Show me.
I've worked with too many developers who leaped onto Ruby, for example, and it turned out that the only innovation that occurred was how much faster you could screw up a project.
|
I agree with almost all of your points, with the possible exception of having a complete set of requirements before starting the design.
Some of the requirements may not be apparent until you have already done part of the design. For example, the original design might have envisioned reading certain personal information from a public database, and immediately discarding it after use. Due to responsiveness and/or reliability constraints, it turned out to be necessary to keep a local cache of this personal information in a local database. This requires following the legal requirements for databases that store personal information, which is a whole new kettle of fish.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
Daniel Pfeffer wrote: with the possible exception of having a complete set of requirements before starting the design.
Yes, I agree. I'll edit the post.
|
I'm of the camp that if you start off simple and write a clean implementation, performance will not be a problem.
|
One approach we've used in the past is the Pugh Matrix, which makes sure that all parties are heard, and that all criteria are taken into consideration.
The Wikipedia article on the Pugh Matrix is somewhat incomplete; there are better references online.
|
Hear! Hear!
And keep me from punching those who respond to new projects with, "ooh, this would be a good opportunity to try out New Third-Party Product X which I read about in a blog this morning!"
|
Performance goals should be in the requirements.
We're not talking about optimization here, but high level performance goals.
Something like, for example:
When the application is launched, it should be responsive in less than X seconds (which could mean deciding what initialization can be postponed until later rather than done at startup),
or
3D rendering should run at Y frames/second (which could mean asking how we can reduce the complexity of the model to achieve that goal).
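As a sketch of how such a goal can be checked mechanically (the names `initialize_app`, `measure_startup_ms`, and the 2-second budget are all invented for illustration), one might wrap the startup path in a simple timing harness:

```c
#include <time.h>

/* Hypothetical smoke test for a high-level performance goal:
   "after launch, the app must be responsive within BUDGET_MS". */
#define BUDGET_MS 2000.0

/* Stand-in for whatever startup work cannot be postponed past
   the first user interaction. */
static void initialize_app(void) {
    /* placeholder for the startup work under test */
}

/* Return the CPU time consumed by startup, in milliseconds. */
double measure_startup_ms(void) {
    clock_t start = clock();
    initialize_app();
    clock_t end = clock();
    return (double)(end - start) * 1000.0 / CLOCKS_PER_SEC;
}
```

A requirement phrased this way becomes a test (`measure_startup_ms() < BUDGET_MS`) rather than a line-by-line optimisation exercise.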
I'd rather be phishing!
|
I have probably never had a single complete set of requirements...
That's why I add my own - and working mostly on the web, performance is one I always add...
I also do nothing until it is written down in email - even if incomplete...
"The only place where Success comes before Work is in the dictionary." Vidal Sassoon, 1928 - 2012
|
Marc Clifton wrote: and if you need to bring in other technologies like AWS lambda, distributed computing, whatever, you can do so once there's evidence that you *need* to do so
In the case of AWS Lambda (or Azure Functions in my case), I prefer to use them unless I can't.
They're simple, easy, cheap and they scale automatically out of the box.
Many programmers are like "I use the right tool for the job", but when new tools come around they're like "it's just a fad so I'm sticking to what I know."
Even when these tools solve actual problems and gain popularity and maturity, a lot of these "right tool for the job" people refuse to work with new technologies.
I (think I) know of at least a few people here who just outright refuse to work with cloud or containers or even anything that isn't vanilla JS (or jQuery) and HTML.
A tool you know may be right for the job simply because you know it, but if something is gaining traction, like in this case serverless solutions, I'll sure as hell try them out if I think I've got something for it.
If I started "simple" with "what I know" on each new project, I would still be writing giant WinForms monoliths in VB.NET.
I'm just saying there comes a time when you've got to try something new even when it *now* looks like the old would suffice.
That said, always wait for version 2
|