I'm trying to find the cause of a bug in my program, and it has been almost impossible to track down. The Immediate Memory Corruption Detection project looks really useful, but I don't know enough about how to make it run inside my program, I suppose because my program is a Win32 app and is 64-bit.
I found something online to make the MFC ASSERT and VERIFY macros work within my Win32 application, and I think I also switched my program to 32-bit to see if that would help, but I get a number of errors that prevent me from running it. I was wondering if there is an easy way to get it to work... I'm not very experienced with C++.
The point is that this specific method doesn't allow you to guard both the preceding and the following memory locations (unless your data size is an exact multiple of the page size).
However, bugs like accessing memory beyond the allocation boundaries (or after it has been freed) are usually bugs that can be reproduced more or less consistently. Hence you can just run your program in both modes to identify them.
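The two modes can be sketched as simple placement arithmetic (the names below are mine, not necessarily the article's): with top alignment the block is pushed against the end of its last committed page, so overruns hit the guard page; with bottom alignment it starts at the beginning of the first page, so underruns hit the guard page instead.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch of the two alignment modes. kPageSize is an assumed
// typical x86 page size; real code would query the OS for it.
static const size_t kPageSize = 4096;

// Offset of the user block inside the committed pages for each mode.
size_t BlockOffset(size_t nSize, bool bAlignTop)
{
    size_t nPages = (nSize + kPageSize - 1) / kPageSize; // pages to commit
    if (bAlignTop)
        return nPages * kPageSize - nSize; // flush against the top guard page
    return 0;                              // flush against the bottom guard page
}
```

Note that unless the size is an exact page multiple, only one side of the block can border a guard page at a time, which is why running in both modes covers both kinds of overrun.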
First of all, thank you for all the time and effort you put into this work. It's really helpful and significant.
I got a question here. You said " it deallocates all the memory pages, but it does not immediately free them. Instead, it leaves them in the reserved state. In this state, they're inaccessible and they don't consume physical memory, and they're guaranteed not to become allocated eventually, hence they definitely will remain inaccessible. When the virtual size occupied by those reserved pages exceeds some number (which is 50MB by default), the heap starts to release the most recent pages."
I'm just wondering: is it possible to change the maximum reserved size, so that I can guarantee it isn't too big?
Sure! This limit is just a hard-coded number. You may change it at will, or perhaps make it configurable at runtime.
But take into account that we're talking about reserved space, not allocated space. It does consume a significant amount of your process's virtual address space, but it doesn't consume the physical resources of your computer. So that 50MB is usually not a problem.
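Making the hard-coded number runtime-configurable could look roughly like this; the class, member, and method names are my own stand-ins, not the article's actual API:

```cpp
#include <cassert>
#include <cstddef>

// Minimal sketch of a configurable reserve limit for the debug heap.
// DbgHeapLimit is a made-up name; a real integration would hold this
// value inside the heap object itself.
class DbgHeapLimit
{
    size_t m_nMaxReserved;
public:
    DbgHeapLimit() : m_nMaxReserved(50 * 1024 * 1024) {} // 50MB default
    void SetMaxReserved(size_t nBytes) { m_nMaxReserved = nBytes; }

    // Returns true when the heap should start releasing the oldest
    // reserved regions back to the OS.
    bool ShouldRelease(size_t nCurrentlyReserved) const
    {
        return nCurrentlyReserved > m_nMaxReserved;
    }
};
```

Lowering the limit trades away some use-after-free detection (pages get recycled sooner) for a smaller virtual address space footprint.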
Good explanation of the idea, if a bit over-eager on reinventing the wheel. It lacks references to other articles and technologies that also do this, and to any of the real-world issues. It's also missing a decent self-test or example.
Edit: for some reason my previous link to additional discussion was cut from my post:
At the time this article was written, I was not aware of similar "technologies". Yes, now I know that DUMA does something like this.
About "real-worlds issues": the aggressive virtual memory acquisition was mentioned. The hassles related to overriding the default new/delete are very environment/framework/library-dependent.
For instance, in C++ one may override the default new/delete operators easily, at least as long as the compiler doesn't inline the allocation calls. Of course this doesn't cover all possible heap allocations (such as the CRT's internal allocations), but it may be sufficient for some.
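For reference, replacing the global operators is a one-translation-unit affair in standard C++; a minimal sketch (the counters are mine, added only to show the replacements are actually invoked; a real debug heap would route these calls into its own page-based allocator):

```cpp
#include <cstdlib>
#include <cstddef>
#include <cassert>
#include <new>

static size_t g_nAllocs = 0; // how many times our operator new ran
static size_t g_nFrees  = 0; // how many times our operator delete ran

void* operator new(size_t nSize)
{
    ++g_nAllocs;
    void* p = std::malloc(nSize ? nSize : 1);
    if (!p)
        throw std::bad_alloc(); // the standard contract for operator new
    return p;
}

void operator delete(void* p) noexcept
{
    if (p)
    {
        ++g_nFrees;
        std::free(p);
    }
}
```

This works cleanly with most toolchains; the MS CRT complications discussed elsewhere in this thread are about the CRT itself already providing replacements, which can turn the above into a linker conflict.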
Anyway, if you think 3 is a fair vote - I don't mind.
Not trying to knock you! I'm trying to give an honest set of feedback on the ups and downs of the article, purely as I saw them.
It's a great idea! I wish I could have gotten it to work on MSVC 2010. Maybe MS will address some of the limitations of their CRT to allow this in the future (I can dream!).
I didn't know any of this - your article was the first I hit on that looked promising when I began researching this idea. I was very glad to see you'd already implemented it. Obviously you, I, and many others have hit on the same idea.
Maybe you deserved a 4/5. I probably should be more generous - you spent the time to put out a usable implementation (not your fault that MS's CRT is a PIA).
This is a cool idea, and it has been thought up several times, as I found once I started looking around for it (I found your article because I came up with this idea and figured someone must have done it already).
Ultimately, I was unable to make any implementation work for Visual Studio C++ with the MS CRT. MS makes it nigh-impossible to replace operator new and delete (the CRT already replaces the language operators, which turns anything you write into a linker conflict). There are various articles on the subject, but nothing short of runtime patching of the CRT seems to be a viable option (unless one is happy to use this technique only for a few classes via class-specific operator new/delete).
When you make a debug build using Visual Studio, doesn't it already use a different DLL (msvcr90d.dll instead of msvcr90.dll)? And dbgdel.cpp, for example, does check the memory area before and after the block being freed. So what are you offering that's new?
It's difficult to tell how to use this. I included the provided code just above my _tmain() and added #include "DbgHeap.h" to my stdafx.h, and I got the following crash:
test.exe!DbgHeap::Allocate(unsigned int nSize=52, bool bAlignTop=true) Line 59 + 0x8 bytes C++
test.exe!operator new(unsigned int nSize=52) Line 18 + 0x10 bytes C++ => this is from your "operator new"
void* operator new (size_t nSize)
{
    // the last parameter - allocation alignment method;
    // you may run your program with both methods to ensure everything's ok
    void* pPtr = g_Heap.Allocate(nSize, true); // ==> crash is here
    if (!pPtr)
    {
        // do whatever you want if no memory. Either return NULL pointer
        // or throw an exception. This is flexible.
    }
    return pPtr;
}
Here is some information to help you help me debug this crash:
m_dwPageSize 0 unsigned long
m_nSizeReserved 0 unsigned int
I think what is happening is that the call is failing because m_dwPageSize has a "0" value, which causes a "divide by zero" exception at:
DWORD dwExtra = (DWORD) nSize % m_dwPageSize;
Additionally, I would like to add that if your source files redefine "operator new" using the following lines, you will get compile errors, so these lines have to be commented out:
static char THIS_FILE[] = __FILE__;
#define new DEBUG_NEW
First of all - thanks for trying it.
You're right regarding commenting out the MFC stuff:
#define new DEBUG_NEW
It's worth saying that this is needed for MFC projects; other project types may require other changes.
Now, regarding the crash. You say m_dwPageSize is 0. IMO this is not because GetSystemInfo returned a sysInfo.dwPageSize of 0.
I believe this happens because you have global C++ objects in your program, and some of them use the heap (via the new/delete operators) in their constructors/destructors. In this scenario the new operator may be called before the global g_Heap object is constructed; that's why its m_dwPageSize is still 0.
There are several ways to resolve this:
1. Ensure initialization in DbgHeap::Allocate.
I mean, at the top of Allocate, check whether m_dwPageSize is still 0 and, if so, initialize it there:
m_dwPageSize = sysInfo.dwPageSize;
Most probably this method will work.
2. Another method: allocate the g_Heap object dynamically, on demand.
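The second option can be done with a function-local static, which C++ constructs on first use and which sidesteps the global-initialization-order problem entirely. A portable sketch; DbgHeapStub, GetHeap, and the hard-coded page size are my own stand-ins for the article's DbgHeap and its GetSystemInfo call:

```cpp
#include <cassert>
#include <cstddef>

class DbgHeapStub
{
public:
    size_t m_nPageSize;
    DbgHeapStub() : m_nPageSize(4096) {} // real code would query the OS here
};

DbgHeapStub& GetHeap()
{
    // Constructed the first time any caller needs it - even if that caller
    // is another global object's constructor running before main().
    static DbgHeapStub s_heap;
    return s_heap;
}
```

Callers then use GetHeap().Allocate(...) instead of a global g_Heap, and the page size can never be observed as 0.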
First, thanks valdok. I forgot to do that in my first post. I will try your suggestions and get back to you on how it went. I hope it works for me; it would solve a problem my team has been having for many months now.
As for whether we have global objects being initialized - I have to check on that. I am primarily using it to check an over-release in a CppUnit test exe, and I'm unaware of any global objects that I have declared, but I will confirm.