Okay, that's fine, it just means the Eval method is more like a BuildDice method
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
using (var x = ....) is your friend!
try {} finally {} too
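Something like this (just a sketch; SomeDisposable is a made-up type for illustration):

using System;

class SomeDisposable : IDisposable
{
    public void DoWork() { /* ... */ }
    public void Dispose() { /* release whatever this wraps */ }
}

class Demo
{
    static void Main()
    {
        // 'using' guarantees Dispose runs even if DoWork throws
        using (var x = new SomeDisposable())
        {
            x.DoWork();
        } // x.Dispose() is called here automatically

        // the equivalent, spelled out by hand with try/finally
        var y = new SomeDisposable();
        try
        {
            y.DoWork();
        }
        finally
        {
            y.Dispose();
        }
    }
}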
can i get an amen over here?
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
yes!
What are you talking about? The "call Dispose for sure" pattern recommends writing a destructor, so that if your object isn't disposed it will at least free its resources in the finalizer. If Dispose is called, the code in the finalizer is redundant and finalization is suppressed. You do this for base classes only; for derived classes you use the simple Dispose pattern. I work a lot with hardware and unmanaged resources, and my finalizers are called all the time: no one exits the application just to free memory and system resources, but coders forget to dispose (mostly implicitly, by not using a using block)... And I'm talking about both backend and frontend here. And: many resources will be held by the OS until you reboot...
So I can understand that in your experience it "doesn't matter", but why? I can only guess you write a very specific type of software. If memory is not your problem, I'm fine with that, but don't recommend that ignorance of memory management in .NET to others...
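For reference, this is roughly the pattern I mean (a minimal sketch; the names are illustrative):

using System;

// Base class owning an unmanaged resource: Dispose plus a finalizer as a safety net.
class UnmanagedResourceHolder : IDisposable
{
    private IntPtr _handle;   // placeholder for some unmanaged handle
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);  // Dispose was called, so the finalizer is redundant
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // release managed resources here (only safe on the Dispose path)
        }
        // release the unmanaged handle here; this runs on both paths
        _handle = IntPtr.Zero;
        _disposed = true;
    }

    ~UnmanagedResourceHolder()
    {
        // safety net: runs only if nobody called Dispose
        Dispose(false);
    }
}

// Derived classes just override Dispose(bool); no finalizer needed.
class DerivedHolder : UnmanagedResourceHolder
{
    protected override void Dispose(bool disposing)
    {
        // clean up anything this class added, then defer to the base class
        base.Dispose(disposing);
    }
}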
It looks like the behavior in newer .NET is different from back when I tested this (.NET 2).
So I stand corrected, as I told Super Lloyd, his tests do indeed show the finalizer being called. So my mind has already been changed on the matter.
I don't use unmanaged resources directly in .NET and haven't since about 2008 or so, so it hasn't been a problem for me, and I hadn't really updated my information on the matter.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
Adding to that: I hope you never count on those finalizers being called.
Designing your code so that it depends on them in any way is terrible design.
If I want to truly manage the disposal of objects, I'll keep a list of them and track them myself.
That's the proper way to do it.
Always
Call
Dispose.
On anything disposable. If you don't, assume you are leaking.
If you want to use that additional set-aside GC list for finalization, be my guest, but your users are WRONG if they ever write code that needs it.
And I'd rather ASSERT that sort of wrong than try to manage it, because of the other problems I mentioned.
Consider this - and I've done this on a web server in the real world and learned the hard way, which is one of the reasons I don't use finalizers anymore:
Create GDI icon handles using shell calls.
Forget to call Dispose on a few of them, but implement finalizers.
I'll bet you my next check you run out of GDI handles and your system just stops producing more.
Until you restart the web server.
This is WITH your finalizers.
At least if you ASSERT you'll eventually get a debug on what happened.
I learned the hard way.
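The assert idea looks something like this (just a sketch, with a hypothetical IconHandle wrapper around an HICON obtained from a shell call):

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

sealed class IconHandle : IDisposable
{
    [DllImport("user32.dll")]
    private static extern bool DestroyIcon(IntPtr hIcon);

    private IntPtr _handle;
    private bool _disposed;

    public IconHandle(IntPtr hIcon) { _handle = hIcon; }

    public void Dispose()
    {
        if (_disposed) return;
        _disposed = true;
        if (_handle != IntPtr.Zero)
            DestroyIcon(_handle);   // free the GDI handle deterministically
        GC.SuppressFinalize(this);
    }

#if DEBUG
    ~IconHandle()
    {
        // Reaching the finalizer means someone forgot to call Dispose.
        // Fail loudly in debug builds instead of quietly leaning on finalization.
        Debug.Fail("IconHandle leaked: Dispose was never called.");
    }
#endif
}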
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
LOL! I couldn't agree more. In my mind I think they should have gone all the way with 100% automated reference counting during reference assignment (a la Visual Basic), with a lightweight garbage collector relegated to detecting and breaking circular references (something Visual Basic couldn't do) and, of course, compacting memory.
This would have offered automatic, truly deterministic destruction, while preventing fragmentation. But most importantly, it would have avoided the dreaded Dispose pattern (along with the related Finalizer travesty) entirely.
Somewhere I remember reading they avoided this route mainly for performance reasons; what a costly decision in retrospect.
Totally agree. Machines aren't what they were. Take the hit. The code is already managed.
Besides, if I needed raw performance I'd rather have something slower and regular than something faster that spikes here and there. Consistency in streaming data is usually a bit more important than raw throughput, but YMMV depending on the scenario and all, of course; simply my opinion. I think it applies to running code as well.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
Use the right tool for the right job.
That's why I use C++ for this.
But it would be nice to have other options.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
codewitch honey crisis wrote: universal garbage collection in .NET
Now caught on camera[^]
Somehow that was the first thing that came to mind
seems accurate.
it even kinda looks like HAL
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
You might want to look at what Unity is doing with what they're calling High Performance C#. AIUI a big part of it is limiting themselves to stack allocations and language features that don't do heap allocations in the background, to get roughly C++-equivalent performance and consistency by never triggering GCs. You probably don't need the verifiably-vectorized capabilities they've also built into their toolchain, but borrowing an off-the-shelf solution might be easier than building the parts you need on your own.
C++, C# and Unity
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, weighing all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
--Zachris Topelius
Training a telescope on one’s own belly button will only reveal lint. You like that? You go right on staring at it. I prefer looking at galaxies.
-- Sarah Hoyt
what what?
ooooh. I'll definitely check it out. it sounds interesting.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
Let us know what you find. From what I've read I think they're just limiting what language features they use to control memory allocations, but I've just read a few blogs and never tried anything.
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, weighing all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
--Zachris Topelius
Training a telescope on one’s own belly button will only reveal lint. You like that? You go right on staring at it. I prefer looking at galaxies.
-- Sarah Hoyt
They appear to have added some new syntax, "#include" for one. Not sure if it's limited to a preprocessor or not, but they're talking about compiling to machine code, not IL.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
Being thoroughly experienced in C++, C#, and near-real-time programming, I've always been bemused by the complaints of performance issues related to garbage collection in .NET. I've never had a performance concern in my C# code that I could attribute to the GC taking over the machine. It's always been thread contention between resources, poorly thought-out locks, misuse of .NET facilities, or something similar.
My opinion is that if your performance constraints are that tight, you shouldn't be using a garbage-collected language anyway. You need C or C++ and native threading constructs so that your control is as close to the bare metal as possible.
Software Zen: delete this;
Oh, if you've ever tried to write realtime audio apps you'll find out real quick.
You can't even play MIDI on your own using midiout/send at anything 96ppm or above without the GC forcing you to drop frames or lag periodically.
Suspend the GC, like .NET 4+ allows you to, and the problem disappears, but you have to reserve reams of heap to do it.
I think your not running into the problem may be more due to not having tried to do anything crazy like that with C#?
I wouldn't have even tried, *precisely because* of my experience with C++ and RTOS apps, except I was looking into the GC issue and at how realistic the .NET 4+ suspend-GC/critical-region feature was to use in practice, so I developed some realtime code to test it. Something timing-sensitive. Ears are more time sensitive than eyes, so I wrote a MIDI player.
I've tested the results. I *could* post the project here just to settle the point, but it seems a lot of work just to do this.
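If anyone wants to try it, the suspend-the-GC approach I'm talking about looks roughly like this (a sketch assuming .NET 4.6+; the 64 MB budget is an arbitrary example value, not a recommendation):

using System;
using System.Runtime;

class Program
{
    static void Main()
    {
        const long budget = 64L * 1024 * 1024; // heap reserved up front for the no-GC region

        if (GC.TryStartNoGCRegion(budget))
        {
            try
            {
                // timing-sensitive work goes here; allocations must stay under the budget
                PlaySomething();
            }
            finally
            {
                // only end the region if we're still in it
                if (GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
                    GC.EndNoGCRegion();
            }
        }
        else
        {
            // fall back to a low-latency mode if the region couldn't be started
            GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;
            PlaySomething();
        }
    }

    static void PlaySomething() { /* placeholder for the timing-sensitive loop */ }
}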
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
The folks at Unity have been working a lot on a performance-oriented subset of C#. I don't know exactly how tailored their solution is to real-time applications, but since a game engine has to draw 60 frames a second without skipping too many of them, you may find something interesting there.
codewitch honey crisis wrote: I've always been kind of bummed about the universal garbage collection in .NET because you can't do realtime coding with a GC running in the background.
Windows is not a real-time OS.
codewitch honey crisis wrote: *gasp* Delphi, which is costlier/more time consuming to write solid code with.
I find .NET more verbose than Delphi.
Want realtime, check out QNX.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
I should have qualified that. Everyone is getting on me about that.
I don't care about RTOS stuff.
I care about being able to play live music. Realtime enough for that.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
You will never have true realtime. There are always unexpected delays, from cache misses to virtual memory paging operations. The systems getting closest to RT are the old ones: the first machine I laid my hands on was a 1976-design 16-bit mini (i.e. the class of the PDP-11). It had 16 hardware interrupt levels - that is, 16 complete sets of hardware registers - so when an interrupt occurred at a higher level than the one currently executing, the register bank was switched in and the first instruction executed 900 ns after the interrupt signal was received. That was certainly impressive in 1976, but the CPU was hard logic throughout, with no nasty pipelines to be emptied, no microcode loops that had to terminate, and usually no paging or virtual memory. That got you close to RT. You won't see anything that compares today, even without GC.
In those days, people got shivers from the thought of writing any system software in a high level language. Lots of people shook their heads in disbelief over Unix being written in K&R C: it could never work, could never give good enough performance. I have saved a printout of a long discussion from around 1995, when NetNews was The Social Media: this one guy insisted, at great length, that high level languages were nothing but a fad that would soon go away; they would never give high enough performance. (In 1995, he didn't get much support from the community, but he never gave in to the pressure.)
Using a high level language takes you one step away from RT. Using a machine with virtual memory is another step (even if your program is fixed in memory - the OS may be busy managing memory for other processes, with interrupts disabled). Dependency on pipelines and cache hit rates is yet another step. If you require really hard RT, you must stay away from such facilities - probably from any modern CISC CPU.
You probably do not have hard RT requirements; you can live with a cache miss, or the OS updating page tables for other processes. You should design your code to handle as large random delays as possible. Networking guys know the techniques, like window mechanisms and elastic buffers. If you do things the right way, I am not willing to believe that a modern multi-GHz six-core CPU's ability to run a VST plugin sufficiently close to RT is ruined by the CLR GC!
I grew up with high level languages, but knowing that RT "had to" be done in assembly. Soon, though, I learned how smart optimizing compilers can be, and gave up my belief in assembly. For much longer I thought that I would have to manage the heap myself if performance was essential. Then I switched to C#, where that isn't an option, and came across a fairly good description of the CLR GC (in "CLR via C#"). Again and again I said to myself: Hey, that's a neat trick! or: I never thought of that in my own heap management! ... Twenty years earlier I had concluded that a compiler is far smarter than I am at generating code. Now I realized that the CLR is far smarter than I am at managing memory. And I have never had any performance problems in C#.
You sure can manage memory yourself, even in C#. As a graduate student I was a TA helping freshmen through their first programming course. The "hard" engineering disciplines still used Fortran, the rest had switched to Pascal. Exercises were common to both languages, except for one: the Pascal version was a linked list problem. One EE student was offended: why can't we learn about this pointer stuff, like all the others? I gave her an introduction to pointers ... and the next week she returned with a solution to the linked list problem, written in Fortran, with the heap as a big 2D array and pointers being integer indexes of the next object (i.e. array row) in the list.
You can do something similar: at program startup, allocate a large byte or integer array, cast any slice of the array into any class when referencing an object, and cast from an arbitrary object to an array slice when allocating or modifying an object. As if you were writing your own memory manager in assembly language (except then you wouldn't have to do all that casting). I am sure you could find a lot of smart strategies in the description of the CLR memory management.
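A minimal C# sketch of the simpler, index-as-pointer variant from the Fortran story above: a preallocated array serves as the "heap" for a linked list, and integer indexes play the role of pointers (the class and member names are just for illustration):

using System.Collections.Generic;

class IntList
{
    private readonly int[] _values;
    private readonly int[] _next;   // index of the next node, or -1 for end of list
    private int _free;              // index of the next unused slot
    private int _head = -1;

    public IntList(int capacity)
    {
        _values = new int[capacity];
        _next = new int[capacity];
    }

    public void Push(int value)
    {
        int node = _free++;          // "allocate" a node by taking the next free slot
        _values[node] = value;
        _next[node] = _head;         // the integer index acts as the pointer
        _head = node;
    }

    public IEnumerable<int> Items()
    {
        for (int i = _head; i != -1; i = _next[i])
            yield return _values[i];
    }
}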
To be a little more serious: I would very much like to see a C++ solution and a C# solution side by side, everything being identical with the exception of the heap management, and then a demonstration that the C# solution has intolerable hiccups every time the GC makes a round of cleanup. I doubt that I will ever have that experience.
What would probably happen is that in the rewriting from C# to C++, experience from the C# work allows you to write better, more efficient code in C++ - improvements that might have been carried back to the C# version, but aren't. So there are far more differences between the two alternatives than just the heap management.
One example showing this effect: when IBM developed its first RISC chip, the 801, they presented a conference paper on the speed increase made possible by the 801's large register file, which required new compiler optimization techniques. Then, more or less as a side remark, the presenter mentioned that they had carried those optimization methods back to the compiler for the 360/370 architecture and gained a 30% speed increase ... without any extended register file. But that was only an informal experiment, so for the rest of the presentation all performance gains on the RISC were ascribed to the RISC architecture, including the 30% which could be obtained on a quite different CISC architecture.
So if a C++ implementation with "handwritten" heap management runs more smoothly than a C# one with automatic heap management, I would suspect that it has more to do with "new optimization techniques" until I have compared the two, line by line, as well as the compiler options.
See my other reply where I said that in this case I don't care about it being technically an RTOS.
I care about being able to play live music with it.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.