|
I render my state machines to images. As seen in the article images here: How to Build a Regex Engine in C#[^]
That's also how I learned how these FA state machines work. These visual aids are invaluable.
God bless GraphViz. =) I use it under the hood for my state machine rendering code. It helps immensely when I'm debugging them.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
|
|
|
|
|
(These figures don't display in Firefox ... Why am I still using that browser? IE11 can handle them, though!)
Sure, blobs and arrows are nice for getting an overview. But only up to a certain complexity. I don't think I would want to learn X.225 from a complete blobs and arrows diagram for the full protocol. Maybe for smaller subsets of the state transitions.
In such diagrams, you usually do not get an overview of error handling; they are definitely best suited for a more or less linear state flow and events that are "well behaved" and come as expected. Obviously you have alternate linear sequences from an initial to a final state, and loopbacks, but in the sample figures there is not one case of crossing lines (transitions). In a large model where you have defined various sorts of exception/error handling, you couldn't hope to avoid crossing lines. So you simplify the logic, providing a general introduction to the FSM through the figures. But to see the full works, you must go to the state tables.
Actually, I was working on a GraphViz plugin for Visual Studio to make such figures based on the state tables, handled by another plugin, for editing table squares indexed by state/event, specifying entry conditions (predicates), actions, and next states, as well as tables of predicate definitions and action routines, covering all the requirements for the X.225 state machine. I had the design ready for a selection mechanism for graphing a selected subset of the state transitions, which labels to add to the graph, etc.; your project could contain multiple graph descriptions for different purposes. This was primarily for teaching FSM programming at a Tech University. But then I switched jobs, so the project was never completed. Maybe I should pick it up as a hobby project. Would be fun!
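A state/event-indexed table like the one described might look something like this rough C sketch. This is a toy turnstile, not X.225, and every name in it is invented for illustration; each cell holds an entry predicate, an action, and a next state, just as the post lays out:

```c
#include <stddef.h>

/* Hypothetical sketch of a state/event-indexed transition table:
 * each cell holds a guard predicate, an action, and the next state.
 * The turnstile domain and all names here are invented examples. */

enum state { LOCKED, UNLOCKED, NUM_STATES };
enum event { COIN, PUSH, NUM_EVENTS };

typedef struct {
    int  (*predicate)(void); /* entry condition; NULL means "always" */
    void (*action)(void);    /* side effect to run; NULL means none  */
    enum state next;         /* state to transition to               */
} transition;

static int always(void) { return 1; }

/* Coins unlock the turnstile, pushes relock it. */
static const transition table[NUM_STATES][NUM_EVENTS] = {
    [LOCKED]   = { [COIN] = { always, NULL, UNLOCKED },
                   [PUSH] = { always, NULL, LOCKED   } },
    [UNLOCKED] = { [COIN] = { always, NULL, UNLOCKED },
                   [PUSH] = { always, NULL, LOCKED   } },
};

enum state step(enum state s, enum event e) {
    const transition *t = &table[s][e];
    if (t->predicate && !t->predicate()) return s; /* guard failed: stay */
    if (t->action) t->action();
    return t->next;
}
```

Graphing a subset of the transitions then amounts to walking selected cells of this table and emitting GraphViz edges for them.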
|
|
|
|
|
I ran into that problem too (in Chrome) - turn off your ad blocker for this site (ads are minimal and unobtrusive here)
That FSM thing would be fun. I've been kind of doing a subset of it for character-based machines.
Although I want to extend it to handle the state machines in LR parser tables. Those are PDAs so they're a bit more complicated - they have a stack.
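The extra complication a stack brings can be seen in a minimal C sketch of a pushdown automaton. This is an invented balanced-parentheses recognizer, not anything from a real LR table, but it shows the push/pop behavior a plain FSM can't express:

```c
#include <string.h>

/* Minimal pushdown-automaton sketch (invented example, not an LR table):
 * unlike a plain FSM, it keeps a stack, here used to match nested parens. */
int balanced(const char *s) {
    char stack[64];
    int top = 0;                          /* stack pointer */
    for (; *s; s++) {
        if (*s == '(') {
            if (top == (int)sizeof stack)
                return 0;                 /* stack overflow: reject */
            stack[top++] = '(';           /* push on open paren */
        } else if (*s == ')') {
            if (top == 0)
                return 0;                 /* pop from empty stack: reject */
            top--;                        /* pop on close paren */
        }
    }
    return top == 0;                      /* accept iff stack is empty */
}
```

An LR table generalizes this: the stack holds parser states, and each table cell says whether to shift (push) or reduce (pop several and push one).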
|
|
|
|
|
Hmm. I only have one case in recent memory of generated code where I wrote the generator.
I agree that comments are not terribly useful, as long as you're always using the generator to update the generated code. In the past, I've had the generator include the original text as a comment in the generated stuff as a debugging aid for the generator itself.
Constants depend upon the purpose of the generator and the generated code. In some cases it's simpler to have the compiler compute constant values at compile time than it is to have the generator compute them.
I've not had [what I consider] a reasonable use for a goto label in the last 20 years or so. That said, I do use break or continue fairly regularly.
honey the codewitch wrote: How many of you hate me for this, and who agrees with me here? Don't worry. We haven't burned a heretic at the stake in days and days [stomps a smoldering ember at his feet].
Software Zen: delete this;
|
|
|
|
|
Gary Wheeler wrote: We haven't burned a heretic at the stake in days and days
I use gotos for two reasons
1) State machine code, although these days I typically do table-driven so it's moot. But while loops and such aren't really feasible. I could bore you with some pictures as to why. However, for all my state machines I can generate pretty pictures that graphically and precisely reflect the generated code, to the point where you can directly see how they line up. The visual aid really helps.
2) To get around a CodeDOM limitation - it doesn't have break. So I've had to refactor my reference implementations of the code I intend to generate to remove breaks. I've done crazy things like set my i var inside a for loop to int.MaxValue-1 (or whatever that works out to be) in my generated code to break the loop. If I can't use a for loop, and I must break, I'll use a goto.
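Sketched in C for brevity (the original context is C# CodeDOM, and the function names here are invented), the two workarounds look roughly like this:

```c
#include <limits.h>

/* Variant 1: the sentinel hack - force the loop counter past the bound
 * so the loop condition fails on the next test, with no break statement. */
int first_negative_sentinel(const int *a, int n) {
    int found = -1;
    for (int i = 0; i < n; i++) {
        if (a[i] < 0) {
            found = i;
            i = INT_MAX - 1;  /* i++ makes it INT_MAX, so i < n goes false */
        }
    }
    return found;
}

/* Variant 2: jump past the loop with goto when break is unavailable. */
int first_negative_goto(const int *a, int n) {
    int found = -1;
    for (int i = 0; i < n; i++) {
        if (a[i] < 0) {
            found = i;
            goto done;        /* exit the loop directly */
        }
    }
done:
    return found;
}
```

The goto version at least says what it means; the sentinel version only works because INT_MAX - 1 plus one doesn't overflow past the condition check.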
|
|
|
|
|
I am updating a NuGet package that, as an ancillary function, generates SQL and C# code. I tend to agree with your post, but do have a couple of deviations.
Quote: it's almost always stupid to try to support generated code that a user has modified
"Almost" being the key word here. By generating stored procedures, C# POCO classes, and C# manager/factory classes that tie the SPs and POCO instance creation together, I save the developer a lot of time on the initial creation of the classes and SQL code. If the developer wishes to change them to suit his or her particular approach, it still takes less time than writing the code themselves from scratch, especially with "find and replace" in the IDE. The only downside is if the developer has so many changes as to have to re-generate the SPs, POCOs, or factory classes, then any manual edits have to be redone. Good planning can avoid most, if not all, of that.
Quote: Commenting the generated code is pointless
I find it useful to add comments in my generated C# code so
1) Intellisense can show brief explanations of methods and parameters,
2) The user can understand why I coded something the way I did.
I find it useful to add comments to my generated T-SQL code so the developer understands why I coded that way, and what value there is to it.
That said, yes, it does take more of my time, but I want to give any users of my NuGet packages some additional info. I even do sample WinForms apps and a readme Word document to assist. Senior developers might not need them, but less experienced developers might benefit.
|
|
|
|
|
Putting doc comments in the generated code is a good idea, but we part ways when it comes to where to document the "whys" - I prefer to document them in the generator, or the generator's rules, whichever is more appropriate.
But also it sounds like you and I tend to use generated code differently. You're doing it more "JUICE" style where you create a package and then the user includes that to handle all the boilerplate stuff based on some settings (or DB schemas or whatever) - then they take that project and use it as a starting point.
I tend to generate code from spec files, and the code is purely generated from those spec files, including being regenerated when a spec file changes.
Most of my comment has to do with that latter method. I agree with you about a lot of your post, especially with the way you are using code generation. It's just that I don't typically use it that way. =)
|
|
|
|
|
So, how do you go about setting breakpoints and debugging?
I wrote a code generator, but made sure that the output is clear and understandable with comments, because it is intended to be modified, debugged, and never generated again. If anyone needs to add a field or function, it will have to be done manually.
I'm glad I don't have to work with you.
|
|
|
|
|
I don't do generation that way, and if I did, it had better round-trip rather than break my business rules engine or otherwise go against my master specs.
You don't want to have to work with me? I don't want to have to work with code that's nothing like the functional specs for it anymore.
|
|
|
|
|
I do the opposite when I generate code. When emitting code, I automatically indent (really, fairly trivial) and even emit occasional comments.
The reason is very simple -- I don't do it so that users can maintain or modify the code; I do it so that I can maintain and modify the code generator. Just like any other code you write, code that you write automatically may need to be corrected for errors, large and small. Just because you do not directly maintain the generated code does not mean that the code generator does not need to be maintained. Clean, formatted generated code helps substantially with that.
And if, in one or eleven years, your code generator is no longer in use (or possibly not even available) but the generated code is still running, then it DOES need to be maintained directly.
Bottom line, treat generated code exactly like any other code that you write.
BTW, GOTO statements are fine regardless -- just use them to replace missing control structures. Every language is missing some, including C and C++. For example, break and continue only apply to the innermost level. I quite frequently need to break from two or (more rarely) three levels. Using a GOTO is the only choice when the language is insufficient.
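The multi-level break is the classic case. In C, break would only leave the inner loop, so a goto is the direct way out of both at once; here's a sketch with a hypothetical matrix search (names invented):

```c
#include <stddef.h>

/* Returns 1 if `target` occurs anywhere in the n-by-m matrix `a`
 * (stored row-major), 0 otherwise. */
int contains(const int *a, int n, int m, int target) {
    int found = 0;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < m; j++) {
            if (a[i * m + j] == target) {
                found = 1;
                goto out;   /* break would only leave the inner loop */
            }
        }
    }
out:
    return found;
}
```

The alternatives - a `done` flag tested in both loop conditions, or hoisting the loops into a function just to `return` out of them - add machinery that the single goto avoids.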
|
|
|
|
|
I hear you, and to a degree I agree. I do format my code, and I don't go out of my way to make the generated code cryptic, but I do concentrate on maintaining the generator. If it generates code that needs to be tweaked, then it's the generator that needs to be tweaked, IMO.
I'm running into that right now with a lexer generator I made that I'm using as a pre-build step in another project.
I fixed the lexer generator. =)
|
|
|
|
|
I'm of the camp that you don't reach for new tools or one-off implementations that are like nothing you've done in the past.
I'm of the camp that you don't worry about performance until you actually have evidence that you have a performance issue or you really know in advance that performance is the driving factor of a design.
I'm of the camp that you start off simple, write a clean implementation, and if you need to bring in other technologies like AWS lambda, distributed computing, whatever, you can do so once there's evidence that you *need* to do so. Again, unless that need is quantifiable up front and becomes the driving factor for the design.
I *am* biased, and if someone is going to suggest a totally different tool than what's currently in the chest, I want it vetted and I want that person to be responsible for documenting it.
So in writing this, I realized something. In this impromptu design meeting that was pretty much about implementation ideas for a solution, I realized that the requirements are incomplete. And with an incomplete (and by this I don't mean that one needs a *complete* set of requirements, just something sufficient for the major talking points) understanding of the requirements, you can have as many ideas as you want, but they are all pretty much worthless.
modified 26-Nov-19 10:03am.
|
|
|
|
|
I agree somewhat.
Sometimes performance requirements are obvious before you even start coding, e.g. a high-volume website that needs to be sensible with resources. However, I'd be preaching that the perf should be baked in at the architecture level, not the line-by-line optimisation level, i.e. settle on a server framework that's been proven to be fast; bake in caching; make sensible database design decisions. I wouldn't be optimising your sorting routines just yet, though.
I 100% agree with not bringing in any technology until it's needed. We've had many, many projects fail because they were over engineered / overly complex. Instead of prototyping in Python, or building a monolithic prototype in .NET Core, systems were using literally dozens of frameworks and technologies spread over dozens of web services and 90% of the efforts ended up in DevOps, not Dev. Instead of focusing on the goal, the focus was on the projects.
The one point I will differ with you is that incomplete requirements don't faze me that much. We had a recent project that failed because a year was spent analysing requirements. If we'd just thrown something out there, played around and then worked out what the actual real world requirements were, we'd be far ahead. Instead we tried to predict the future. There's a point where you hold your breath and jump.
cheers
Chris Maunder
|
|
|
|
|
Chris Maunder wrote: incomplete requirements don't faze me that much.
Oh, no disagreement there! It was actually one of my points -- let's start with a simple and clean approach, get some datapoints on how well it's working, and then decide if a more involved solution is required.
In my particular case, because performance kept coming up, it seemed worthwhile to do a short investigation: the amount of handwaving and opinions from everyone could be reduced with a couple of well-measured facts.
It's also somewhat annoying that there is no clear decision maker in this process. (Of course I want that person to be me!) But the way the game is being played is, let's get everyone's opinion and watch them argue / flail / wave unsubstantiated opinions like banners at a jousting competition. I actually very much dislike that approach, but I also recognize I could be very much in the wrong or minority with my dislike.
Sigh. There's the balance between "we use these technologies; does this problem warrant introducing a new one?", and the balance between how much time we spend investigating concrete datapoints that can guide an implementation vs. trying it and seeing what happens. In this case, it's really a ridiculously simple process that can run autonomously; there are just a couple of specific things it needs to do.
It's funny, I actually have a conservative approach with regards to introducing newfangled technology and a rather aggressive approach to "let's leap and see what happens." I think though, that I'm comfortable leaping because I've learned how to write the actual code in a way that, if a course correction is required, it's not usually a big deal. The problem is, I don't trust other people to have that skill!
|
|
|
|
|
Marc Clifton wrote: it seemed that the amount of handwaving and opinions by everyone could be reduced with a couple well-measured facts
There's your problem right there. You're letting facts get in the way of a good story.
cheers
Chris Maunder
|
|
|
|
|
Chris Maunder wrote: incomplete requirements
The reason many developers fear incomplete requirements, is that they know they're the ones that lose when the blame game starts.
|
|
|
|
|
|
Marc Clifton wrote: I'm of the camp that you don't reach for new tools or one-off implementations that are like nothing you've done in the past.
Then how do you innovate or advance?
Marc Clifton wrote: I'm of the camp that you don't worry about performance until you actually have evidence that you have a performance issue or you really know in advance that performance is the driving factor of a design.
Performance is always a factor. Do you also not bother about security until that becomes a factor? I know what you're getting at is that you shouldn't "gold plate" things, but some things are often too hard to retro-fit so you should think about them even if they are not explicit requirements.
|
|
|
|
|
F-ES Sitecore wrote: Then how do you innovate or advance?
By investing in R&D. That shouldn't be part of normal production.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
Aint nobody got time fo' that
|
|
|
|
|
Money you mean, since time is for sale.
|
|
|
|
|
F-ES Sitecore wrote: Then how do you innovate or advance?
By making an informed decision (ok, can-of-worms) rather than just someone saying "oh, here's something I've played with that will work perfectly." Riiight. Show me.
I've worked with too many developers who leaped onto Ruby, for example, and it turned out that the only innovation that occurred was how much faster you could screw up a project.
|
|
|
|
|
I agree with almost all of your points, with the possible exception of having a complete set of requirements before starting the design.
Some of the requirements may not be apparent until you have already done part of the design. For example, the original design might have envisioned reading certain personal information from a public database, and immediately discarding it after use. Due to responsiveness and/or reliability constraints, it turned out to be necessary to keep a local cache of this personal information in a local database. This requires following the legal requirements for databases that store personal information, which is a whole new kettle of fish.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Daniel Pfeffer wrote: with the possible exception of having a complete set of requirements before starting the design.
Yes, I agree. I'll edit the post.
|
|
|
|
|
I'm of the camp that if you start off simple and write a clean implementation, performance will not be a problem.
|
|
|
|
|