
Fixed Memory Pool Allocation with Custom Class

30 Nov 2018 · CPOL · 3 min read
A fixed memory pool class written in C++ for generic usage

Introduction

Memory pools allow memory allocation with constant execution time. Releasing the memory for thousands of objects held in a pool is a single operation (the pool frees whole chunks), rather than freeing each object one by one as happens when malloc is used to allocate each object.

Memory pools can be grouped in hierarchical tree structures (like this one), which suits special programming structures like loops and recursion.

Fixed-size block memory pools do not need to store metadata for each allocation, such as the size of the allocated block. Particularly for small allocations, this provides substantial space savings.

Finally, pools allow deterministic behavior in real-time systems, avoiding out-of-memory errors.
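
To illustrate why fixed-size pools can be O(1) for both operations, here is a minimal free-list pool. This is a generic sketch, not the class presented in this article:

C++
#include <cstddef>
#include <vector>

// Minimal fixed-size block pool: allocation and release are both O(1),
// because free blocks are kept in an intrusive singly linked list.
// Assumes blockSize >= sizeof(void*); a production version would also
// take care of alignment.
class FixedBlockPool
{
public:
    FixedBlockPool(std::size_t blockSize, std::size_t blockCount)
        : m_storage(blockSize * blockCount), m_freeHead(nullptr)
    {
        // Thread every block onto the free list once, up front.
        for (std::size_t i = 0; i < blockCount; ++i)
        {
            void* block = m_storage.data() + i * blockSize;
            *static_cast<void**>(block) = m_freeHead;
            m_freeHead = block;
        }
    }

    void* Allocate()                 // O(1): pop the free-list head
    {
        if (!m_freeHead) return nullptr;   // pool exhausted
        void* block = m_freeHead;
        m_freeHead = *static_cast<void**>(block);
        return block;
    }

    void Free(void* block)           // O(1): push back onto the free list
    {
        *static_cast<void**>(block) = m_freeHead;
        m_freeHead = block;
    }

private:
    std::vector<char> m_storage;     // one big preallocated buffer
    void*             m_freeHead;    // head of the intrusive free list
};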

Where This Can Fit

Say you have a large buffer of preallocated memory that we will call the pool.
Say you want to put things in this pool, but you want to keep control of where everything is located, and you don't know how to do that or don't have time to do it.
Say you need something that tells you where there is room to put each small buffer inside the pool, given its size.

You don't care about how and where; you just need something nice that tells you where.
Then this class may fit your needs.

For example, in an OpenGL environment you allocate a buffer large enough (say, for a VBO) and initialize the pool with that buffer's size. You can then use this class to manage your big buffer as smaller blocks (which is in fact one of my current uses for it): carving a large preallocated buffer (the pool) into smaller memory areas, or windows. Nothing in memory is changed by this class; it is up to you whether you move things around.

In this kind of environment you can access GPU memory, but the GPU does not keep track of the memory areas, so this class will fit your needs.
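
As a sketch of that scenario (assuming an active OpenGL context, the InitPool/Allocate API described below, and hypothetical vertexData/vertexDataSize values for the upload):

C++
// Sketch only: the pool manages offsets; the GPU buffer itself is never
// touched by the pool. Error handling omitted.
const uint64_t poolSize = 4 * 1024 * 1024;   // one big 4 MB VBO

GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, poolSize, nullptr, GL_DYNAMIC_DRAW);

MemoryPool pool;
pool.InitPool(poolSize, 0);                  // base 0: Allocate returns byte offsets

// Reserve a window inside the VBO for one mesh and upload into it.
uint64_t offset = pool.Allocate(vertexDataSize);
glBufferSubData(GL_ARRAY_BUFFER, (GLintptr)offset, (GLsizeiptr)vertexDataSize, vertexData);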

Another possible use is to manage a large mapped memory area, shared or not between processes, such as a memory-mapped file.
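
For the memory-mapped case, passing the mapping's address as base makes Allocate hand back real pointers. A minimal POSIX sketch (fd is a hypothetical, already-open file descriptor; error handling omitted):

C++
#include <sys/mman.h>
#include <cstdint>

const uint64_t poolSize = 16 * 1024 * 1024;  // 16 MB mapping

// Map the file (fd assumed already open and sized appropriately).
void* base = mmap(nullptr, poolSize, PROT_READ | PROT_WRITE,
                  MAP_SHARED, fd, 0);

MemoryPool pool;
pool.InitPool(poolSize, (uint64_t)base);     // non-zero base: Allocate returns real addresses

// The returned value is now a usable address inside the mapping.
char* record = (char*)pool.Allocate(4096);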

Using the Code

Using this class is pretty simple: you first declare a pool object and then initialize it by calling the InitPool method.

So first, declare a pool object like this:

C++
MemoryPool m_Pool;

Then, call the InitPool method with your required parameters:

C++
// Pool initializer; this method must be called after the pool object is declared.
// size is the size of the pool you want to manage.
// base is the "address" at which you want the pool to start: usually the address
// of your large buffer if you want Allocate to return real addresses, or 0,
// in which case the values returned by Allocate are byte offsets relative
// to the real address of your big buffer.

virtual bool InitPool(uint64_t size, uint64_t base = 0, uint64_t minSplitSize = 1024);

For example:

C++
m_Pool.InitPool(1024*1000, 0);

This creates a pool of 1,024,000 bytes (roughly 1 MB) starting at base address 0. Then, to request space from the pool, say 2048 bytes:

C++
uint64_t offset = m_Pool.Allocate(2048);
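
Because the pool was initialized with base 0, the value returned is a byte offset, not a pointer. To get a usable address, add it to your own buffer (here bigBuffer is a hypothetical pointer to the preallocated memory the pool describes):

C++
char* p = bigBuffer + offset;   // bigBuffer is your preallocated buffer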

Then to release this memory:

C++
m_Pool.Free(offset);

And that's all!

Points of Interest

Included with the sources of the class is a test-suite application where you can get a grasp of how this works. A visual map of the pool is presented to the user; this is done with a custom control, which could of course be a nice thing to have for other uses.

This test suite reports the memory pool status and allows you to test all the features of the class.

A std::map is used to keep track of allocated chunks. The pool tries to re-use freed chunks when possible, and also splits them if required; this way, memory fragmentation is reduced.
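
As a simplified illustration of that idea (a sketch only, not the actual implementation shipped with the article, and deliberately not O(1)):

C++
#include <cstdint>
#include <map>

struct Chunk
{
    uint64_t size;
    bool     free;
};

// Chunks are keyed by their starting offset. First-fit search: re-use a
// freed chunk when one is big enough, splitting off the tail if the
// remainder is at least minSplitSize.
uint64_t AllocateChunk(std::map<uint64_t, Chunk>& chunks,
                       uint64_t size, uint64_t minSplitSize)
{
    for (auto& [offset, chunk] : chunks)
    {
        if (!chunk.free || chunk.size < size)
            continue;
        uint64_t remainder = chunk.size - size;
        if (remainder >= minSplitSize)
        {
            chunk.size = size;                                 // shrink this chunk...
            chunks[offset + size] = Chunk{ remainder, true };  // ...and split off a free tail
        }
        chunk.free = false;
        return offset;
    }
    return UINT64_MAX;   // no fit (failure convention assumed for this sketch)
}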

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)



Comments and Discussions

 
Question: What is base address for? (Stefan_Lang, 3-Dec-18 21:09)
Answer: Re: What is base address for? (yafan, 4-Dec-18 11:25)
General: Re: What is base address for? (Stefan_Lang, 4-Dec-18 21:59)
General: Re: What is base address for? (mrsilver, 10-Dec-18 8:32)
Question: No benchmark? (Shao Voon Wong, 30-Nov-18 23:05)
Answer: Re: No benchmark? (mrsilver, 3-Dec-18 1:07)
General: Re: No benchmark? (Stefan_Lang, 3-Dec-18 5:18)
Yes you need a benchmark. The standard memory manager functionality is more efficient than you give it credit for. I did my own O(1) memory pool implementation some 8-10 years ago, and only managed to beat standard new/delete by a factor of ~5. (malloc/free were only marginally faster than new/delete) My implementation is still sitting on Google Drive and it contains a sheet with minimal benchmark results (times in nanoseconds per allocation/deallocation pair). Note that the time does not increase beyond 800ns even for standard new/delete!

Achieving O(1) for both allocating and freeing is not trivial at all. Remember: O(1) means that 100 million allocations + deallocations should take only about 100 times as long as 1 million allocations + deallocations. A benchmark could help prove that claim.

I noticed that MemoryPool::Allocate() calls GetFreeChunk(), which iterates over all chunks to see whether one of them is free. You're also using std::map to manage the chunks, which isn't O(1) for all of its operations either. I haven't looked any further, but it seems your implementation is straightforward, with no visible effort to keep allocations and deallocations at O(1). The only assumption that would make this, technically, O(1) is that you never need more than a certain number of chunks.

That said, the functionality of splitting chunks means that you can eventually have a rather large list of chunks if you just keep allocating small blocks. I have considerable doubt this can be faster than standard memory allocation routines.
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)

General: Re: No benchmark? (mrsilver, 10-Dec-18 8:42)
General: Re: No benchmark? (Stefan_Lang, 10-Dec-18 21:34)
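
As a footnote to the benchmark discussion above: a minimal harness for measuring nanoseconds per allocate/free pair, along the lines Stefan_Lang describes, could look like this (the pool API is taken from the article; the block size and N are arbitrary choices for the sketch):

C++
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <vector>

// Assumes the MemoryPool class from the article is available; with base 0
// no real memory is touched, so only the bookkeeping cost is measured.
int main()
{
    const uint64_t N = 1000000;                   // alloc/free pairs per run

    MemoryPool pool;
    pool.InitPool(N * 1024, 0);                   // room for N 1 KB blocks

    std::vector<uint64_t> offsets;
    offsets.reserve(N);

    auto start = std::chrono::steady_clock::now();
    for (uint64_t i = 0; i < N; ++i)
        offsets.push_back(pool.Allocate(1024));
    for (uint64_t i = 0; i < N; ++i)
        pool.Free(offsets[i]);
    auto stop = std::chrono::steady_clock::now();

    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count();
    std::printf("%.1f ns per allocate/free pair\n", (double)ns / N);

    // Re-run with N ten or a hundred times larger: if the per-pair figure
    // stays flat, the pool behaves as O(1) at that scale.
    return 0;
}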
