|
Show me where finalized objects get collected before process exit in a real-world scenario.
Except server apps, but if you're using unmanaged resources directly from a web server, I hate you.
In application code, the GC calls finalizers before process exit. Show me where it doesn't.
Contrive a scenario even.
It won't slow down dramatically until reboot. The kernel keeps a per-process list of kernel handles around, and Win32 does indeed clean them up when the process exits. Your HBITMAP will be around until process exit, not until reboot.
And *it would anyway* - at least in my tests, because Finalize doesn't get called until process exit anyway.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
|
|
|
|
|
This is going nowhere, so I will stop trying to convince you to write proper utility classes.
Although I will just add two (last) things:
- For the record, memory management, and even Dispose, is indeed totally useless if all you have is short-running processes. The issue is all about a single long-running process running as long as possible, which you seem to dismiss so casually for some mysterious reason.
- You were asking for a sample that shows whether finalizers are even ever called. Well, run the WinForms app below as a console app (to see the Console.WriteLine() output), press the button a few times, tab out and in, and do it again. You will see that finalizers are indeed called sometimes.
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

class Program
{
    static void Main(string[] args)
    {
        Application.EnableVisualStyles();
        Application.SetCompatibleTextRenderingDefault(false);
        Application.Run(new MyForm());
    }
}

public class MyForm : Form
{
    public MyForm()
    {
        var b = new Button
        {
            Text = "Click",
            Dock = DockStyle.Fill,
        };
        // each click allocates a Foo that is immediately unreachable
        b.Click += (o, e) => { new Foo(); };
        this.Controls.Add(b);
    }
}

public class Foo
{
    const int N = 100_000;
    IntPtr unmanaged;

    public Foo()
    {
        unmanaged = Marshal.AllocHGlobal(N);
        GC.AddMemoryPressure(N); // tell the GC about the 100 KB of native memory
    }

    ~Foo()
    {
        Console.WriteLine("in finalizer");
        Marshal.FreeHGlobal(unmanaged); // actually release the native memory
        GC.RemoveMemoryPressure(N);     // and withdraw the matching pressure hint
    }
}
modified 15-May-19 0:23am.
|
|
|
|
|
I'll try running that. But yeah, for now let's agree to disagree. I'll kick around what you wrote. You may have changed my mind if that sample pans out.
The last time I actually tested this was pre-.NET 3.
|
|
|
|
|
I should add, the only time I'll write a "Utility" class, it's static. I use the convention a lot, but never for anything instantiated. That's just me. =) So when you said utility class, my first thought was: where would I keep the state?
At any rate, it impacts nothing I've written since the .NET 2 days, but if I do write some unmanaged wrapper I'll keep this exchange in mind.
|
|
|
|
|
I don't think it matters when I analyse your opinion, but...
I want to point out that while finalizers are unpredictable, that doesn't mean they are unreliable.
Finalizers are very reliable. But they only happen once in a while.
|
|
|
|
|
What I mean is you can't rely on them to close something before you run into trouble.
Again, this may have changed in newer .NET renditions, since my last test was a decade ago at least. It looks like your sample does indeed finalize on collect.
Still, I question whether it would collect often enough to keep up with the leakage from not calling Dispose. It never did in the past for me.
|
|
|
|
|
I have trouble understanding your statements... So here are some facts...
With the memory pressure hint you can be sure the runtime is nudged into enough GC that you don't run out of memory, become slow, or fragment the heap. This is, however, only marginally useful; I didn't notice any obvious improvement after using that hint. I guess I never allocated huge unmanaged memory pages.
If your code is using other precious system resources, like a window handle, a brush handle, or a socket... you might be out of luck. You might run those out without the runtime realizing a GC is needed.
I can't even start to guess what you mean when you wrote "finalizers keep up with the leakage from not calling Dispose".
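For what it's worth, the pressure hint only does its job if every AddMemoryPressure is mirrored by a RemoveMemoryPressure when the native memory is freed. A minimal sketch of that pairing (the class name UnmanagedBuffer is mine, not from this thread):

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical wrapper: a tiny managed object that owns N bytes of native
// memory and tells the GC about it, so collections happen sooner than the
// managed heap size alone would suggest.
public sealed class UnmanagedBuffer : IDisposable
{
    private readonly int size;
    private IntPtr ptr;

    public UnmanagedBuffer(int size)
    {
        this.size = size;
        ptr = Marshal.AllocHGlobal(size);
        GC.AddMemoryPressure(size); // hint: this object is heavier than it looks
    }

    public void Dispose()
    {
        Free();
        GC.SuppressFinalize(this); // disposed deterministically; skip the finalizer
    }

    ~UnmanagedBuffer() => Free();  // safety net if Dispose is forgotten

    private void Free()
    {
        if (ptr != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(ptr);
            GC.RemoveMemoryPressure(size); // must mirror AddMemoryPressure exactly
            ptr = IntPtr.Zero;
        }
    }
}
```

The IntPtr.Zero check makes Free() idempotent, so calling Dispose twice (or Dispose followed by the finalizer) is harmless.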
|
|
|
|
|
Super Lloyd wrote: If your code is using other precious system resource, like a Window handle, or a brush handle or a socket... you might be out of luck. You might run out those without the system realizing a system GC is needed.
As it happens, this is just what I meant. If you use things like this, the GC won't necessarily keep up with your creation of objects. You can run out of GDI handles (though maybe not window handles, for other reasons) pretty quickly, and Finalize won't help you; or at least that has happened to me.
One of the reasons I won't use GDI handles and such in serving web pages, at least not directly, is their unpredictability. What if the connection gets broken and ASP.NET or whatever halts your thread? Sure, your finalizers will still run, but when? Will you have enough handles left to serve the next request?
(At least if you call Dispose faithfully your odds are better, but still.)
|
|
|
|
|
OIC... you went from "finalizers are not good enough" to "it's flat out a waste of time to implement them"...
Sure enough, finalizers won't help with those sparse handles on a web server. And hey, it's really easy to dispose of things in a web service application, usually.
On the other hand, you will run out of handles much more slowly in a user desktop application. Also, some objects can be very hard to track in a desktop application, making finalizers really useful. And finalizers will run in a timely fashion there.
And utility classes are not always static. For example String, Cursor, Bitmap, Regex, etc. (many of the 21,000+ classes in the .NET Framework BCL) are instantiable utility classes!
And I also happen to love writing my own.
In fact, I shared a DiceSet class with you, as a free custom example!
|
|
|
|
|
I just meant I think we define "utility" differently. Mine is narrow when it comes to C#, and .NET has but a few. It's just a convention I use in my own personal style; I've been using it so long that it impacts how I understand the word, if that makes sense. I'm not saying you're wrong.
I responded a bit to your dice thread. I think we can get you from theory to code if you just explain the "meaning" of the dice syntax. I don't do tabletop gaming. I have friends that are into that stuff, but I never was.
|
|
|
|
|
Nice!
I was planning to look at it tonight... I don't have the code here, it's personal stuff!
The meaning is, you can often read rolls like "3d6+2", and I try to create an object that can roll that, i.e. a dice collection with 3 x D6 (a Dice class that rolls between 1 and 6) that sums them all up and adds 2.
Or maybe "D10+D4+1", which would be roll Dice(10) (between 1 and 10) + roll Dice(4) (between 1 and 4), plus 1.
|
|
|
|
|
That's an expression evaluator!
Just look at the sample in my stuff. Oh, there's a bug in the parser runtimes; it both is and isn't serious, but it's an 8-character fix and it still works atm =). I can reupload and wait for re-approval, but I'll do that tomorrow.
The question is: can you just roll while you parse? Or do you NEED an object model?
Because if you need an object model, parsing is a two-step process. (Like, do you need Dice and DiceSets, or can you just pass an expression to an Eval function and get your answer out? Because if that's good enough, your code just got cut by more than half.)
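To make the one-pass idea concrete, here is a sketch of rolling while you parse: a hypothetical Eval that takes "3d6+2"-style notation and returns only the total (this is my illustration, not Super Lloyd's actual parser; it only handles '+', no subtraction or parentheses):

```csharp
using System;

// One-pass evaluator: split on '+', roll each dice term as it is parsed,
// and return a single number. No Dice/DiceSet object model needed.
public static class DiceEval
{
    private static readonly Random Rng = new Random();

    public static int Eval(string expr)
    {
        int total = 0;
        foreach (var term in expr.ToLowerInvariant().Split('+'))
        {
            var t = term.Trim();
            int d = t.IndexOf('d');
            if (d < 0)
            {
                total += int.Parse(t); // flat modifier, e.g. the "+2" in "3d6+2"
            }
            else
            {
                // "d10" means "1d10"; "3d6" means roll three six-sided dice
                int count = d == 0 ? 1 : int.Parse(t.Substring(0, d));
                int sides = int.Parse(t.Substring(d + 1));
                for (int i = 0; i < count; i++)
                    total += Rng.Next(1, sides + 1);
            }
        }
        return total;
    }
}
```

So DiceEval.Eval("3d6+2") always lands between 5 and 20, and "D10+D4+1" works too since the input is lowercased first.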
|
|
|
|
|
The parsing doesn't return a number, it returns a DiceSet object, which is a collection of Dice structures.
Both DiceSet and Dice have a Roll() method and a nice ToString() implementation.
It might be an evaluator. But it evaluates to an object, not a simple number.
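A skeleton of what that two-step shape might look like; the names Dice and DiceSet come from the thread, but the bodies below are my guess at the design, not the actual code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Guessed object model: parsing builds a DiceSet once, rolling happens
// as many times as you like later, and ToString() round-trips the notation.
public readonly struct Dice
{
    public int Count { get; }
    public int Sides { get; }
    public Dice(int count, int sides) { Count = count; Sides = sides; }

    public int Roll(Random rng)
    {
        int sum = 0;
        for (int i = 0; i < Count; i++) sum += rng.Next(1, Sides + 1);
        return sum;
    }

    public override string ToString() => $"{Count}d{Sides}";
}

public class DiceSet
{
    private readonly List<Dice> dice = new List<Dice>();
    private readonly int modifier;
    private readonly Random rng = new Random();

    public DiceSet(IEnumerable<Dice> dice, int modifier)
    {
        this.dice.AddRange(dice);
        this.modifier = modifier;
    }

    public int Roll() => dice.Sum(d => d.Roll(rng)) + modifier;

    public override string ToString() =>
        string.Join("+", dice) + (modifier > 0 ? $"+{modifier}" : "");
}
```

With this shape, new DiceSet(new[] { new Dice(3, 6) }, 2) prints as "3d6+2" and each Roll() gives a fresh result, which is the part a plain Eval-to-number can't do.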
|
|
|
|
|
Okay, that's fine, it just means the Eval method is more like a BuildDice method
|
|
|
|
|
using (var x = ....) is your friend!
try {} finally {} too
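And the two really are the same thing; a using block is just compiler-generated try/finally, so Dispose() runs even when an exception is thrown. A small side-by-side sketch (the helper names are mine):

```csharp
using System.IO;

public static class SafeRead
{
    // The using form: the compiler emits the try/finally for you.
    public static string ReadWithUsing(string path)
    {
        using (var reader = new StreamReader(path))
        {
            return reader.ReadToEnd();
        }
    }

    // The hand-written equivalent of what the using block expands to.
    public static string ReadWithTryFinally(string path)
    {
        StreamReader reader = new StreamReader(path);
        try
        {
            return reader.ReadToEnd();
        }
        finally
        {
            if (reader != null) reader.Dispose(); // runs on return AND on exception
        }
    }
}
```

Both return the same content and both close the file no matter how the method exits.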
|
|
|
|
|
can i get an amen over here?
|
|
|
|
|
yes!
|
|
|
|
|
What are you talking about? The "call Dispose for sure" pattern recommends writing a destructor (finalizer), so if your object doesn't get disposed it will free its resources at least in the finalizer. If Dispose is called, the code in the finalizer is obsolete and suppressed. You do this for base classes only, and for derived classes you use the simple Dispose pattern. I work a lot with hardware and unmanaged resources; my finalizers are called all the time. No one exits the application to free memory and system resources, but coders forget to dispose (mostly implicitly, by not using a using-block)... And I'm talking here about backend and frontend. And: many resources will be held by the OS until you reboot...
So I can understand that in your experience it "doesn't matter". Why? I can only guess you write a very specific type of software. If memory is not your problem, I'm fine with that, but don't recommend that ignorance of memory management in .NET to others...
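The pattern being described, roughly, looks like this (the canonical Dispose-pattern shape, not any specific class from this thread):

```csharp
using System;

// Base class: Dispose() frees deterministically and suppresses finalization;
// the finalizer is only a safety net for callers who forget the using-block.
public class ResourceHolder : IDisposable
{
    private bool disposed;

    public void Dispose()
    {
        Dispose(disposing: true);
        GC.SuppressFinalize(this); // disposed explicitly: the finalizer is now redundant
    }

    ~ResourceHolder()
    {
        // GC path: managed members may already be finalized, so touch only
        // unmanaged state here.
        Dispose(disposing: false);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return; // safe against double dispose
        if (disposing)
        {
            // free managed resources here (other IDisposables)
        }
        // free unmanaged resources here (handles, native memory)
        disposed = true;
    }
}
```

A derived class only overrides Dispose(bool); it does not need its own finalizer, which is the "simple Dispose pattern for derived classes" mentioned above.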
|
|
|
|
|
It looks like the behavior in newer .NET is different than it was back when I tested this (.NET 2).
So I stand corrected; as I told Super Lloyd, his tests do indeed show the finalizer being called. So my mind has already been changed on the matter.
I don't use unmanaged resources directly in .NET and haven't since about 2008 or so, so it hasn't been a problem for me, and I hadn't really updated my information on the matter.
|
|
|
|
|
Adding: I hope you never count on those finalizers being called.
Designing your code such that it is dependent on them in any way is terrible design.
If I want to truly manage the disposal of objects, I'll keep a list of them and track them myself.
That's the proper way to do it.
Always
Call
Dispose.
On any disposable interface. If you don't do it, assume you are leaking.
If you want to use that additional set-aside GC list for finalization, be my guest, but your users are WRONG if they ever write code that needs it.
And I'd rather ASSERT that sort of wrong than try to manage it, because of the other problems I mentioned.
Consider this (I've done this on a web server in the real world and learned the hard way, which is one of the reasons I don't use Finalize anymore):
Create GDI icon handles using shell calls.
Forget to call Dispose on a few of them, but implement finalizers.
I'll bet you my next check you run out of GDI handles and your system just stops producing more.
Until you restart the web server.
This is WITH your finalizers.
At least if you ASSERT, you'll eventually get a debug on what happened.
I learned the hard way.
|
|
|
|
|
LOL! I couldn't agree more. In my mind, I think they should have gone all the way with 100% automated reference counting during reference assignment (à la Visual Basic), with a lightweight garbage collector relegated to detecting and breaking circular references (something Visual Basic couldn't do) and, of course, compacting memory.
This would have offered automatic, truly deterministic destruction while preventing fragmentation. But most importantly, it would have avoided the dreaded Dispose pattern (along with the related finalizer travesty) entirely.
Somewhere I remember reading they avoided this route mainly for performance reasons. What a costly decision in retrospect.
|
|
|
|
|
Totally agree. Machines aren't what they were. Take the hit. The code is already managed.
Besides, I'd rather have something slower and regular than something faster that spikes here and there, even if I needed raw performance. Consistency in streaming data is usually a bit more important than raw throughput, but YMMV depending on the scenario and all, of course; simply my opinion. I think it applies to running code as well.
|
|
|
|
|
Use the right tool for the right job.
|
|
|
|
|
That's why I use C++ for this.
But it would be nice to have other options.
|
|
|
|
|