|
Hi!
I am new to programming. I wish to know if there is a way in C++ to clear all the heap/memory of a program. I have a huge program with many variables. The program runs in an infinite loop (one iteration per video frame to process). Every time a frame is processed, I want to clear everything on the heap/memory and then move on to the next frame. Thanks!
Alex
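There is no standard C++ call that wipes the whole heap at once, but the per-frame pattern described above can be approximated with a small arena that is reset after every frame. A minimal sketch of the idea (FrameArena is an illustrative name, not a library type; it assumes power-of-two alignments and trivially destructible scratch data):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A toy bump allocator: per-frame allocations come out of one buffer,
// and reset() reclaims everything at once at the end of the frame.
class FrameArena {
public:
    explicit FrameArena(std::size_t bytes) : buffer_(bytes), offset_(0) {}

    void* allocate(std::size_t size, std::size_t align = alignof(std::max_align_t)) {
        std::size_t p = (offset_ + align - 1) & ~(align - 1);  // round up
        if (p + size > buffer_.size()) return nullptr;         // arena exhausted
        offset_ = p + size;
        return buffer_.data() + p;
    }

    void reset() { offset_ = 0; }  // "clear the heap" before the next frame

private:
    std::vector<unsigned char> buffer_;
    std::size_t offset_;
};
```

Each loop iteration would allocate its scratch data from the arena and call reset() before the next frame; note this only works for data that does not need destructors to run.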
|
|
|
|
|
Maybe it's a thought to optionally ZeroMemory when new memory is allocated!
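A minimal sketch of that suggestion: a class-level operator new that zero-fills each freshly allocated block (the class name Frame is purely illustrative, and std::memset stands in for the Windows-specific ZeroMemory):

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>
#include <new>

struct Frame {
    int width = 0, height = 0;

    // Zero-fill every freshly allocated Frame before construction runs.
    static void* operator new(std::size_t size) {
        void* p = std::malloc(size);
        if (!p) throw std::bad_alloc();
        std::memset(p, 0, size);  // portable stand-in for ZeroMemory
        return p;
    }
    static void operator delete(void* p) noexcept { std::free(p); }
};
```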
|
|
|
|
|
how is this different from a specialized heap for a certain allocation size?
we are here to help each other get through this thing, whatever it is Vonnegut jr.
sighist || Agile Programming | doxygen
|
|
|
|
|
That is exactly what it is.
The problem is that even if you create a separate heap for a certain allocation size, you still have no way to hint to it that you're going to ask for blocks of ONLY that size. So by creating a different heap you solve the fragmentation problem, but not the performance problem.
|
|
|
|
|
The first reason this kind of class does not already exist in C++ is because
we want to know what we are doing... We are C++ programmers!! Not dummy Java
programmers!! A memory dump factory encourages you not to care about memory allocation and deallocation, and to become even more of a dummy than you are!!!
The second reason is that it is slower than allocating with "new" and "delete" on the fly!
When you allocate a memory block with "new", the system allocates a bigger block of memory for your
process in anticipation that you will ask again!! So it is really useless to do that!!
The system already does the job for you... And let me tell you... It does a much faster job than you with your wrapper!!
The only advantage your method potentially has is when allocating really huge memory blocks...
But even then, I'm sceptical... Really...
C++ Matter
S.D.
|
|
|
|
|
I disagree with you. And I think you're confusing something.
1. The standard C++ new and delete operators allocate and free memory using the Windows heap functions (HeapAlloc, HeapFree). Now, what do those functions do?
In fact the heap is implemented using more basic Windows functionality, so-called virtual memory (VirtualAlloc, VirtualFree), which has a granularity of 4K (on Intel processors). Those functions really allocate memory pages for the process, and the heap just encapsulates them to allow you to allocate blocks of an arbitrary size. In fact it allocates pages for your process (on demand) and manages them for you.
2. What I meant to optimize is the heap implementation only. Still, at some point there will be use of the virtual memory functions, and this is unavoidable.
3. Now, why do I claim that my implementation is faster than the standard heap? The reason is that my heap is specially designed to allocate blocks of a constant size. Hence, there is no memory fragmentation, and I also never perform any searches (to find you a block of appropriate size), whereas the standard heap has to do so.
I suggest you read my previous article http://www.codeproject.com/cpp/advanced_heap.asp
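For readers who don't follow the link, the core idea of a constant-size-block heap can be sketched roughly like this (a simplified free-list pool, not the article's actual code; FixedPool is an illustrative name, and block sizes are assumed to be at least sizeof(void*)):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified fixed-size pool: every block has the same size, so allocation
// and freeing are O(1) pops/pushes on a free list -- no searching, and no
// fragmentation within the pool.
class FixedPool {
public:
    FixedPool(std::size_t blockSize, std::size_t blockCount)
        : storage_(blockSize * blockCount), blockSize_(blockSize) {
        // Thread every block onto the free list (block 0 ends up at the head).
        for (std::size_t i = blockCount; i > 0; --i)
            push(storage_.data() + (i - 1) * blockSize_);
    }

    void* allocate() {                      // O(1): pop the head of the free list
        if (!head_) return nullptr;         // pool exhausted
        void* p = head_;
        head_ = *static_cast<void**>(head_);
        return p;
    }

    void deallocate(void* p) { push(p); }   // O(1): push back onto the list

private:
    void push(void* p) {
        *static_cast<void**>(p) = head_;    // next pointer lives inside the free block
        head_ = p;
    }
    std::vector<unsigned char> storage_;
    std::size_t blockSize_;
    void* head_ = nullptr;
};
```

Because freed blocks are reused LIFO, a just-freed block comes straight back on the next allocation, which is also friendly to the CPU cache.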
|
|
|
|
|
New and delete do not use the VirtualXXX functions at all. They use the HeapXXX functions on a single Windows heap that the C runtime obtains from Windows at initialisation. Undoubtedly, the NT (and later) heap managers must of course use the VirtualXXX functions, but Windows hides all of this from the programmer. It uses various techniques to increase performance, including look-aside buffers etc.
The page size is indeed used by the VirtualXXX functions, but as a C++ programmer you are completely isolated from this when using the standard new and delete functions.
I agree that using a fixed-sized chunk allocator should, if written properly, enhance performance. However, there are numerous problems when writing code that does not 'free' this memory. In effect, you are cheating. Generally, you can allocate memory quickly, or free it quickly, but not both. Whether or not this is suitable for your application depends entirely on the application.
As an example, if your application copies a lot of small objects, using temporaries along the way (perhaps using a map, set etc etc), then you will likely find that your heap management routines use far more memory than they need to. Also, many real-world applications, especially data-driven applications, create a lot of small memory blocks. On a server with lots of memory, it might not be important to free those blocks. However, on a client machine with limited memory resources, freeing them is essential.
If you are looking for a fast heap manager, look at WinHeap (http://www.winheap.com) - there's a free download. It manages small blocks of memory (less than 256 bytes) using special memory pools, and on a single-processor machine is about 30% faster than the Microsoft heap manager. On a multiprocessor machine, it can be an order of magnitude faster.
|
|
|
|
|
1. About C++ new and delete: I totally agree with you that those operators don't use the VirtualXXX functions directly. And that's exactly what I wrote in the previous comment. When working with virtual memory you're tied to the memory page granularity (which is usually 4K), and the heap's purpose is exactly to free you from that. I just meant that the process actually gets physical storage (either physical memory or the page file) by calling the VirtualXXX functions, either directly or indirectly - when, for example, you ask a heap for an allocation and it realizes that it has to commit another memory page to satisfy your request.
2. Please read my previous article: http://www.codeproject.com/cpp/advanced_heap.asp. It describes how you CAN allocate and free memory really quickly, based on an assumption that you need blocks of constant size only. Well, strictly speaking it doesn't free anything at all, but both new and delete operations are fast.
Also note: such a heap is extremely effective for small allocations. It has a small overhead (sizeof(index) per allocation), no fragmentation, etc.
|
|
|
|
|
Most of your 'free' operations will result in your adding to your linked list. Relatively speaking, your free operation is considerably slower than your allocation operation, as you have to perform pointer arithmetic before being able to manipulate the list.
Like I said, the problem is not new, and your solution is not new. However, there are issues beyond the speed of allocation that must be dealt with by real world heap managers. Your heap management class is not threadsafe, for example. Users should also be aware that you are doing no runtime heap checking; the lesson here is to use the standard Visual C++ heap manager while you figure out any memory leaks, double deletes etc. This 'missing' functionality may not be a problem, depending upon your application. It's just important to know what these trade-offs are.
I'd be curious to know how a typical singly-linked list implementation like this one performs against the new low fragmentation heap in Windows XP. Have you tried benchmarking it? For example, you could have an operator new and operator delete that simply call the HeapAlloc and HeapFree routines. Can't quite remember the API call to set the low fragmentation heap, but it'll be on MSDN somewhere.
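(For reference, the XP call in question should be HeapSetInformation with the HeapCompatibilityInformation class.) The harness itself can be sketched portably; on Windows the loop body could be swapped for HeapAlloc/HeapFree on a heap with the low fragmentation heap enabled. This is only a rough sketch, not a rigorous benchmark:

```cpp
#include <cassert>
#include <chrono>

// Rough timing harness: measures many small allocate/free cycles through
// plain new/delete and returns the elapsed time in microseconds. On Windows,
// the same loop could call HeapAlloc/HeapFree on a low-fragmentation heap
// so the two allocators can be compared under identical load.
long long benchmarkNewDelete(int iters) {
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) {
        int* p = new int(i);
        delete p;
    }
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(stop - start)
        .count();
}
```

A realistic benchmark would also mix allocation sizes and interleave lifetimes, since uniform back-to-back new/delete pairs flatter almost any allocator.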
|
|
|
|
|
Um, no.
There are many instances where a lookaside list increases the performance of a system. (Lookaside lists for memory allocations have been around for a very long time.) Lookaside lists such as this are one of the few cases where you can actually increase performance by using a special allocator.
You can find more information about this by reading up on the Doug Lea allocator and the University of Texas at Austin paper that compares different allocation systems.
BTW: The new/delete allocators provided by Microsoft have been historically very poor. I once improved an application's performance by 10 times by using a replacement allocator. Starting with VC6, they improved things greatly.
Tim Smith
I'm going to patent thought. I have yet to see any prior art.
|
|
|
|
|
What you are saying is utter nonsense and only shows that you have no clue what you are talking about.
For example, you will find that Microsoft implemented a so-called SmallBlockHeap (starting with VC6) that uses a preallocated memory block to handle small (<= 480 bytes by default) allocs more effectively than the OS.
If you could be bothered to do some actual profiling, you would find that a GlobalAlloc easily eats up tens of thousands of clock cycles, while you get away with a couple of hundred or fewer when you manage your own memory pool.
|
|
|
|
|
You're confusing things too, and I don't like your tone. It is not a shame not to know something, but let's still respect each other.
Ok, I confess I was wrong when I said that the standard C++ new/delete (and possibly malloc/free) directly use the Windows HeapAlloc and HeapFree functions. Maybe they have been optimized. So what? We are talking about totally different things!
Let's make it clear. What is memory allocation on Win32 subsystems?
1. Each process is assigned a virtual address space. It is divided into so-called memory pages. Each such page has a size of 4K on Intel processors. While a process has a lot of such pages (2 or 4 GB worth), in fact usually only a small part of them is accessible (committed). An attempt to access an uncommitted page leads to an access violation exception. You can allocate/free memory pages by using the VirtualAlloc and VirtualFree functions. Upon allocation of a memory page the OS in fact grants you another 4K of storage (either memory or page file).
NOTE: The address space is called virtual because an address of any page is valid only in the context of this process. In other words, any page within the virtual address space of the process is mapped onto physical storage using a kind of per-process translation table. If you ask the OS to allocate you 50 contiguous memory pages (200K), it doesn't mean that they will be contiguous in the machine's memory. However, they are guaranteed to be contiguous within your address space.
2. All this is very good, but most programs require allocations of arbitrary size, without the 4K granularity. For this reason there is the so-called heap. It is a structure that manages memory pages automatically and provides you with blocks of the needed size. It is accessed via the HeapAlloc and HeapFree functions.
The main penalties of any heap implementation are fragmentation and performance, since a heap usually has to search its internal structures to find you a contiguous block of the needed size.
3. You can't compare the performance of virtual memory management functions and heap functions, since they do totally different things. Usually small heap allocations are much faster, because they don't require the OS to allocate a physical storage for your process every time.
4. The ONLY thing that I meant to optimize is the heap implementation. It has nothing to do with managing memory pages. I believe that I created a heap with superior performance and minimal overhead. I strongly recommend you read my previous article. And what I introduced in this article is in fact an efficient way to cache allocations (no matter by which heap they have been allocated).
Now, what makes me so sure that I've invented something so superior that it beats any standard heap implementation? The point is that my heap is designed for allocations of a constant size, whereas all other heaps work with arbitrary sizes. I showed that using my technique, allocation/freeing of memory blocks (not memory pages, but blocks that have been allocated via another heap!) can be done using a few iterations and arithmetic operations.
5. I understand that Microsoft spotted at some point that small-sized allocations can be optimized by using a so-called SmallBlockHeap. Perhaps. So f@@@en what?! It is still a heap that allocates blocks of arbitrary size.
Now, you've mentioned the GlobalAlloc function. I have no idea what this function actually does. In general, all those LocalXXX and GlobalXXX functions are inherited from Windows 3.0, where they probably made sense. Nowadays they're obsolete, I think. In particular they have flags such as MEM_DISCARDABLE, MEM_MOVEABLE and MEM_SHARE, which have no meaning on Win32 subsystems.
|
|
|
|
|
Dude, sorry to say it, but it's you who got it wrong.
My post was a direct reply to "C++ Matter"'s post, where he is bashing Java, apparently without knowing anything about memory allocation in general.
People like him drive me nuts.
He says "let the system handle it". I thought my post made it very clear that letting the OS handle _all_ your memory allocations is a bad idea; that's why I used the GlobalAlloc() example, but I'll come to that one later.
In fact, I was saying that handling memory allocations on our own is a good thing, and everybody who has ever got his hands dirty (used a profiler) knows that. So there is no reason to get jumpy; I wasn't saying anything bad about your allocator, and I never would, because I also wrote my own memory pool to accommodate my requirements (alignment on 16-byte boundaries, cache-line and page awareness, ...).
I also replaced the usual HeapAlloc() & HeapFree() with GlobalAlloc() & GlobalFree(). Even though MSDN claims that GlobalAlloc() is slower than HeapAlloc(), I found the opposite to be true (after a lot of profiling and fine-tuning of my memory pool). GlobalAlloc() beats HeapAlloc() by around 20% (averaged) in all cases, so that's the reason I switched over to the "deprecated" GlobalAlloc(). You could also use LocalAlloc(), but since it's only a stub for GlobalAlloc() in Win32 it doesn't matter.
I know that XP introduced the so-called "low fragmentation heap" that's supposed to be faster and works much like your allocator. It handles 128 buckets of fixed-size memory chunks ranging from 8 bytes up to 16 KB, but since I don't have XP I can't tell if it's any good.
Anyhow, I for one will stick to my memory pool, but you did a good job nevertheless, so thank you for sharing.
As for you not liking my tone, I did not like yours either, and since you were the one talking about respect, you may want to mind your own words in the future.
|
|
|
|
|
Ok, now I understand you.
Sorry if I offended you somehow.
|
|
|
|
|
No, not at all, and honestly I should be the one making an apology, since my first post came across as kind of harsh.
I like this site and your articles so I finally decided to register.
Regards,
LG
|
|
|
|
|
OK, I just can't let this pass: you cannot possibly claim to have "invented" a new way of allocating memory using a linked list. That is, frankly, ridiculous.
There have been numerous attempts at small block allocators over the years. For a fairly recent one, check Andrei Alexandrescu, Modern C++ Design - Generic Programming And Design Patterns Applied (http://www.amazon.com/exec/obidos/tg/detail/-/0201704315/qid=1089407510/sr=8-1/ref=pd_ka_1/102-3203850-4922506?v=glance&s=books&n=507846). Chapter 4 is entitled "Small Object Allocation". His explanation is far more detailed than anything here; and bottom line, you'll find a nice linked list in his implementation.
I commend your giving this subject some attention. However, it seems to me that you are a C++ programmer growing in experience who has just realised the impact that heap management can have on many C++ applications, not just poorly coded ones.
If you really want to test yourself, examine the issues of heap management in a multithreaded, multiprocessor environment. There's plenty of information on the web, you can start with the WinHeap (www.winheap.com), hoard (www.hoard.org) and smartheap (www.microquill.com) pages.
Read and enjoy.
|
|
|
|
|
|
In general, I would not use a specialized allocator unless it was required. The only way you can really know if it is required is to do profiling.
The University of Texas at Austin did some research and compared Doug Lea's malloc replacement with some specialized allocation systems. They found that Doug's allocator was only slower in one instance.
If memory allocation is a problem, try Doug's general malloc replacement first. If that still isn't good enough, then try a specialized allocator.
http://gee.cs.oswego.edu/dl/
Tim Smith
I'm going to patent thought. I have yet to see any prior art.
|
|
|
|
|
new and delete can be overridden as operators. In your own implementation of new and delete, you can ask your list for existing objects instead of allocating them on the heap, and delete can put them back into the list. It's an elegant way of handling the interface to your list.
-----------------------------------------------------
Bush To Iraqi Militants: 'Please Stop Bringing It On' - The Onion
"Moore's prominent presence in the news brings to light some serious questions, such as 'Can't he at least try to look presentable?'" - The Onion
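The recycling-via-overridden-operators idea might look roughly like this (the class name Particle and its members are made up for illustration; this sketch is not thread-safe):

```cpp
#include <cassert>
#include <cstdlib>
#include <new>
#include <vector>

// Particle objects are recycled through a static free list instead of going
// back to the heap: operator delete parks the block, and operator new hands
// it out again on the next allocation.
class Particle {
public:
    float x = 0, y = 0;

    static void* operator new(std::size_t size) {
        if (!freeList_.empty()) {           // reuse a recycled block
            void* p = freeList_.back();
            freeList_.pop_back();
            return p;
        }
        void* p = std::malloc(size);        // list empty: fall back to the heap
        if (!p) throw std::bad_alloc();
        return p;
    }

    static void operator delete(void* p) noexcept {
        freeList_.push_back(p);             // recycle instead of freeing
    }

private:
    static std::vector<void*> freeList_;
};

std::vector<void*> Particle::freeList_;
```

Calling code keeps writing `new Particle` and `delete p` as before; the pooling is invisible at the call site, which is the point being made above.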
|
|
|
|
|
Just so I understand what you're saying, "In the beginning, after you've created your list, through the use of overriding the 'new' operator you can obtain your required space from the list, and then when you're finished with the space, through the use of overriding the 'delete' operator, you can return the space to the list."
Is that what you're saying?
If it is, why bother with a linked list in the first place? Why not use a vector? Some things are automatically done for you with vectors. And why the overriding of 'new' and 'delete'? Simply write a 'New' and 'Delete' set of functions that will do the interfacing for you (using 'push' and 'pop').
William
Fortes in fide et opere!
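The plain-functions alternative suggested here might look like this (Widget, NewWidget and DeleteWidget are made-up names; a std::vector serves as the pool, with push_back/pop_back playing the role of push and pop):

```cpp
#include <cassert>
#include <vector>

struct Widget { int id = 0; };

// The pool is just a vector of spare objects.
static std::vector<Widget*> g_widgetPool;

Widget* NewWidget() {
    if (!g_widgetPool.empty()) {
        Widget* w = g_widgetPool.back();  // reuse a pooled object
        g_widgetPool.pop_back();
        return w;
    }
    return new Widget;                    // pool empty: allocate for real
}

void DeleteWidget(Widget* w) {
    g_widgetPool.push_back(w);            // return the object to the pool
}
```

The trade-off versus overriding new/delete is that every call site must remember to use NewWidget/DeleteWidget instead of the built-in operators.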
|
|
|
|
|
Is that what you're saying?
Yes.
If it is, why bother with a linked list in the first place? Why not use a vector? Some things are automatically done for you with vectors.
I didn't mean linked list, just "list" in the most generic sense - i.e. a container. (And many linked lists are rather inefficient anyway. I believe CList does an allocation whenever you add an item to it.)
And why the overriding of 'new' and 'delete'?
I generally prefer to hide the details for a couple reasons:
It looks better. By overriding new and delete, your code looks exactly the same as it did before (which can be especially useful when working with other developers, since they might not know about this other container used to recycle objects).
You don't have to remember which objects have containers built for them ("do I allocate, or ask some container for an object?")
You can put some of the more complicated logic inside them (e.g. if the container is empty, you need to allocate a new object on the heap; if the container is too full, don't allow more objects to be placed into it - just delete them). This code should be written once and placed somewhere: it can go inside the new/delete methods, or in one of your ObjectContainer methods. This means you need to create an ObjectContainer class (most likely one ObjectContainer class for each class).
It makes your code very easy to convert (i.e. if you were allocating and deleting objects before, you can write new 'new' and 'delete' methods rather than tracking down every instance of new and delete to replace it with a container-access function).
Simply write a 'New' and 'Delete' set of functions that will do the interfacing for you (using 'push' and 'pop').
That's very similar to what I'm saying. Now, take your container class, make it a static object inside the class you want to recycle, and access it via overridden new and delete methods.
-----------------------------------------------------
Empires Of Steel
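The empty/too-full logic described a few posts up could be sketched like this (ObjectContainer echoes the name used above, but the template shape and the pool cap are illustrative choices):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Per-class recycling container with a capacity cap: empty -> allocate from
// the heap; over capacity -> actually delete instead of pooling.
template <typename T, std::size_t MaxPooled = 64>
class ObjectContainer {
public:
    T* acquire() {
        if (pool_.empty()) return new T();  // container empty: allocate
        T* obj = pool_.back();
        pool_.pop_back();
        return obj;
    }

    void release(T* obj) {
        if (pool_.size() >= MaxPooled) {    // container too full: really delete
            delete obj;
            return;
        }
        pool_.push_back(obj);               // otherwise recycle
    }

    ~ObjectContainer() {                    // free whatever is still pooled
        for (T* obj : pool_) delete obj;
    }

private:
    std::vector<T*> pool_;
};
```

One static ObjectContainer per recycled class, accessed from that class's overridden new and delete, gives the arrangement described in the post.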
|
|
|
|
|
Even if the idea could be improved upon, you've provided both a starting point and sample code to challenge the investigating mind.
Good work! (and a '5').
William
Fortes in fide et opere!
|
|
|
|
|