Yep, that is how memory (heap) corruption works. You broke the heap at that point; then, somewhere later, something went to use the heap, and by then the data was bad.
Additionally, the resulting behavior can be almost anything. It depends on what the bad data in the heap actually was, what the code was attempting to do with it at that point, and even what was calling it.
Go back to the manufacturers and ask them: they are the only people with the source code, so they are the only ones who can help you fix it.
That is usually a subjective thing. In general it should not be required.
But what happens, over time, is that as complexity grows, so do the definitions.
And there are only two choices
1. Include the actual source of the definition
2. Include a short cut.
The first solution generally requires that the compiler reread (tokenize, parse, etc.) the full included source files every time, and then likely the includes for those as well, and so on for a while. That takes time. Compilers now tend to optimize this with various caching strategies (precompiled headers, for example), but, at least in the past, some of those were less than reliable.
So people may want to avoid that cost by using a shortcut, such as a forward declaration. Then the compiler has far less work to do. And that is what you are looking at.
Ideally, of course, the design should minimize all of that, even at times to the point of perhaps reducing clarity. One trick I used to use was to create two includes: one only for public consumption, the other for internal consumption (for example, internal structs). The publicly visible items used something like a 'void *', and I used casts for the internals. That did allow the external include to be smaller in size, but it was a solution that only worked in some cases.
Even with the comment "things grow in complexity"...
Have you worked with legacy systems? Systems that have been in continued use for 20 years with ongoing changes throughout that time period?
And for a product that started with a small customer base but has since grown to many customers?
Google for example...
"span a whopping 2 billion lines of code that stretch across 1 billion files and require 86 terabytes of storage,"
Don't you suppose there is some less-than-ideal code in that?
Or, if you prefer, GCC is roughly 15 million lines of code.
Member 14968771 wrote:
and that should be all ( objective) what is needed
That is of course specifically subjective.
And you will not find any non-trivial C++ application code base where the vast majority of files follow that model. Most will have many include files. Sometimes there are even include files that are nothing but containers for other include files (and that can recurse).
I resolved all the initial heap errors but am still receiving some. I think I know the cause: the Assembler listing I am streaming is 153 pages, with 54 lines per page and about 130 characters across, so it's a little over a megabyte. Can I use LimitText to give myself the space?
I am working at blitting to back frame buffers a changing variety of bitmaps to create animations. When completed, that would be sent to the screen. I want to do this with minimum processing and minimum RAM.
A little bit of background: I am basically a mainframe assembler-language programmer. I have worked for a software vendor and for IBM on their OS. After 2000 I started getting laid off all over the place, and it was suggested that I learn something new. I started with C, then Windows, then MFC. Along the way I got a job with the US federal government, the IRS; it is my day job. I'm on the east coast, in NY. When it came time to renew my IBM mainframe yearly license and I inquired why the cost was higher than others', they told me I have the right to sell the software I develop. I then started to focus my attention on that: z/OS as the server and Windows as the client.
Back to why I didn't make this code part of the .exe: the .DLL would be invoked by multiple users, and every user of the .exe would need to perform this functionality.
Again, thank you for the help. I could never have gotten this far without the kindness of people like yourself.
I don't know the details of your application, but in the PC/Windows world it's rare to have more than one user running on the same machine. Even if you do have more than one user, each process (.exe program) has its own separate memory space. Using a DLL instead of a single program to save memory makes little sense these days. I know some people might disagree with me, but this is a broad-brush picture. Anyway, it doesn't pay off in terms of the complications it adds to your project, doubly so if you are a newcomer to this field.
In short, my recommendation would be to give up on having a DLL and put everything in a single program. If later on, when the program is working well, you discover that using a DLL makes sense, you can change it. In the meantime you will have gained more familiarity with the OS, tools, and programming language, and it will be easier for you to make the transition.
Just my $0.02