Hi All

I'm learning C++ using Visual C++ 2003 and was wondering why, when you create a form and double-click a component in the designer, it puts the function definition in Form1.h rather than in Form1.cpp.

I come from a Visual Basic background but have always found C++ intriguing. No sooner am I starting to understand the difference between header files (.h) and source files (.cpp), namely that class definitions belong in header files and function definitions in source files, than Windows Forms comes along and appears to declare all functions inline by default, so that there is only one function definition in the .cpp file. Is there something simple I'm missing here?

Any answers would be much appreciated :) just so I know whether it's simply an exception to the rule or whether I've misunderstood something completely.
Comments
Nish Nishant 18-Feb-11 9:16am    
Added C++/CLI tag.
majawo 18-Feb-11 9:26am    
Wow. I didn't expect a reply that fast - thank you! Just one other question... does it affect performance at all? I'm not too familiar with the benefits of inline functions; all I know is that it creates a new copy of the function at each call site or something like that, but there must be a reason why most functions are usually not declared inline?
Nish Nishant 18-Feb-11 9:28am    
In managed code it does not matter (unlike in native code). All methods are generated as MSIL (there is no inlining during compilation). Any inlining is done at runtime by the JIT compiler.
majawo 18-Feb-11 9:41am    
Thank you again. I think that's all I needed to know... it means there's nothing wrong with my understanding of C++ itself - I just have to learn a bit more about Visual Studio. But luckily I do know what MSIL and JIT are, so I think I'm coming along fine. Cheers again.
Nish Nishant 18-Feb-11 9:43am    
Word of warning though - the Managed C++ syntax in VC++ 2003 is now obsolete. Starting with VC++ 2005 and in VC++ 2008/2010, the new syntax in use is called C++/CLI. So unless this is purely an academic interest, I suggest that you move to a later version of VC++ before spending more time learning the details.

They decided to use that style for Managed C++ and C++/CLI (only the automatic designer does that, though). I reckon they wanted it to be similar to C#, where there is just one file rather than separate declaration and definition files.

That said, you can choose to keep them in separate files if that suits your needs better.

[Edit]
~~~~~~~~

In response to your comment about inlining, I will add this here in case anyone else has the same question:

In managed code it does not matter (unlike in native code). All methods are generated as MSIL (there is no inlining during compilation). Any inlining is done at runtime by the JIT compiler.
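For contrast with the native side: in standard C++, a member function defined inside the class body is implicitly inline, which is why header-only definitions like the ones the designer generates remain legal even when the header is included into several cpp files. A minimal native sketch (the Counter class is made up for illustration):

#include <iostream>

class Counter
{
public:
    Counter() : count_(0) {}
    void increment() { ++count_; }        // defined in-class: implicitly inline
    int value() const { return count_; }  // so multiple inclusion does not violate the ODR
private:
    int count_;
};

int main()
{
    Counter c;
    c.increment();
    std::cout << c.value() << std::endl;  // prints 1
    return 0;
}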

[Edit]
~~~~~~~~~

SA commented that it's a bad idea to keep declaration and implementation in separate .h/.cpp files in C++/CLI (managed code). That advice does not hold for C++/CLI, and I will give my reasons below.

In summary, when you are doing C++/CLI it is still good practice to keep your declarations and implementations in separate header and cpp files (a minimal sketch of the split follows the list).

  1. The C++ compiler does not compile header files directly, so even if you put your entire class in a header file, you still need to #include it from a cpp file for it to compile.
  2. If everything lives in a header file, then every time you change the code in any method or property, every cpp file that #include-s that header must be recompiled. For large projects this can be a nightmare.
  3. The core C++ idea of keeping declaration and definition separate has been ingrained in C++ coding culture for decades. It's a good idea to continue that practice, especially since the vast majority of C++/CLI projects are mixed mode, and you don't want two different styles in the same project.
  4. When method bodies are inline in a header, the compiler must re-parse and re-compile them in every translation unit that #include-s that header. If you keep the bodies in a cpp file, they are compiled exactly once.
  5. C++ compiler front-ends are optimized to handle separate cpp/h files, so that's yet another reason to use them.
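
By way of illustration, here is a minimal sketch of the split for a C++/CLI ref class (the Person class and the file names are made up for the example):

// Person.h - declaration only
#pragma once

public ref class Person
{
public:
    Person(System::String^ name);
    System::String^ Greet();   // body lives in Person.cpp
private:
    System::String^ name_;
};

// Person.cpp - implementation; only this file recompiles when a body changes
#include "Person.h"

Person::Person(System::String^ name) : name_(name) {}

System::String^ Person::Greet()
{
    return System::String::Format("Hello, {0}!", name_);
}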
 
Comments
Sergey Alexandrovich Kryukov 18-Feb-11 9:37am    
This is correct, my 5.
Nish Nishant 18-Feb-11 9:38am    
Thank you, SA.
Sergey Alexandrovich Kryukov 18-Feb-11 9:54am    
Actually, I added my own answer, where I explain the rationale behind having and not having include files (more on not having, very little on having). I found that your recommendation on includes is not quite sound.
--SA
Nish Nishant 18-Feb-11 10:03am    
Wow SA, your answer is so wrong, I don't even know what to say. I will not vote it a 1, but I think you should delete it because it gives wrong information to the OP.

I will update my answer to explain why it is a "good" thing to keep separate h and cpp files in C++/CLI.
Sergey Alexandrovich Kryukov 18-Feb-11 14:01pm    
I guess it would be nice of you to explain what exactly is wrong. I fear this could be about my "cultural" assessments rather than matters of fact; however... you insist that separating h/cpp is good? Is it only about compilation speed? If so, this is not a good argument, because one can simply make smaller files. You see, many of your arguments are fragile. I still think this is less about rationality and more a preoccupation with certain imprinted ideas.
1) but one can use cpp files only, so this problem never appears;
2) with C++/CLI there is no need for multiple compilation of declarations;
3) cultures change; this one is archaic, and the argument is not rational;
4) same as (1): no h files, no such problem;
5) I don't see any signs of such optimization; even C++ optimizers don't deal with files, it's just a stream of code with some declarations repeated.
Anyway, even if you disagree, you can still see that my opinions have certain grounds. You're certainly a real expert in C++/*, unlike myself, so your opinion is important to me. Please tell me if I'm making a mistake; I'm trying to keep to the facts.
Thank you.
--SA
The use of *.h and *.cpp files is related to a very archaic method of supporting modularity and separate compilation that is still used in standard C++. The method is based on includes and on linking object and library files by symbolic names.

I'm risking catching flames from C++ devotees (please, if you want to flame, ask yourself how many other languages you know well, and how many of them have no "include"), but I cannot stop wondering how such an archaic method of programming can still survive in the 21st century (at the same time, I understand why). It has nothing to do with low-level programming; there are systems many years old that use metadata and allow no includes, yet support exactly the same possibilities for low-level programming.

In .NET programming, separate compilation and modularity are based on modules and assemblies. Modules are not supported by Visual Studio (more exactly, the supported model is an assembly composed of exactly one module, but the compilers and the API support multiple module files per assembly). All the information needed to bind several assemblies is put into the executable files (by which I mean not just .EXE files but .DLLs and anything else in PE format; there is no real difference between them anymore) in the form of metadata. Assemblies reference each other or load other assemblies dynamically at run time; in both cases, all the required information about class/structure/enum members, events, properties, methods and their signatures is stored in metadata, which is accessible to a referencing assembly both at compile time and at run time.
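
As a minimal sketch of what that metadata-based binding looks like from C++/CLI (MyLib.dll and its Widget type are hypothetical names):

// Consumer.cpp - compile with /clr. #using pulls type information
// straight from the referenced assembly's metadata; no header file
// for the referenced types is involved.
#using "MyLib.dll"   // hypothetical assembly

int main()
{
    MyLib::Widget^ w = gcnew MyLib::Widget();  // hypothetical ref type
    w->Run();
    return 0;
}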

Despite all this, inside a C++/CLI project include files still have to be used, even for managed code and ref types, as this is the way to bind together declarations from different files: they are not automatically made visible to each other just because they are in the same project, as in C#. Not very surprisingly, such a neat feature as partial type declarations is also missing. To me, this is extremely disappointing and is a result of the heavy C++ legacy.

—SA
 
Comments
Espen Harlinn 19-Feb-11 12:14pm    
If there is one thing I'm missing in C# (apart from templates, which support compile-time polymorphism), it's the C preprocessor. You can also use partial classes in C# to achieve a separation of class declaration and implementation similar to C++; this is sometimes useful when several people are working on the same class.
Sergey Alexandrovich Kryukov 19-Feb-11 17:20pm    
I've been arguing with Nishant (see above) and want to review my assessment of C++/CLI and then probably update the answer. I probably made some mistakes based on my assumed but unconfirmed idea of the language (I probably thought it was wiser than it is).

From the very beginning I expected a very negative reaction, which is unavoidable if I disclose some of my views on programming, since they are impossible to "prove". So I'm trying to avoid some topics. (I just wanted to ask, did you vote "1"? I mean, if it's just your opinion, that's perfectly fine... OK, no matter...) Basically, my general assessment of computing is very negative; I view the current apparent "progress" in programming as 80% big, spectacular failure, because narrow group interests and ambitions have defeated both sanity and an objective, scientific approach. C++ and its domination is, in my opinion, one of the biggest failures. (In my article on enumerations I made weak hints at how it should be versus how it is.) As you were familiar with metadata and the no-include approach long before .NET (from Borland, or even before), you can probably feel the fallacy of symbolic-name-based linking and includes. So, this is to give you an idea of my grounds.

So, let's look at this preprocessor thing. Even though I think just the opposite (the preprocessor is an absolute evil, I think), I believe we agree from an in-depth standpoint. One of the 2-3 major fallacies of C# and .NET is the lack of any ability to share some declarations even between files. In fact, this is a disaster! Only I think the right solution would be the ability to declare user primitive types (typedef, not preprocessor). I think you miss "typedef" the most, right? But my idea goes deeper. The right approach is derived types. All types should be able to serve as a base type (unless intentionally sealed), including all primitive and enumeration types: "enum Base { a, b, }" => "enum Derived : Base { c, d, }" => "Derived a; Base b = a;", right? This approach is well known.

Yes, I agree with partial classes, to me this is very important feature.

Thank you.
--SA
Sergey Alexandrovich Kryukov 19-Feb-11 23:51pm    
The false statements I made in this answer have been removed now. Thank you very much for the discussion and the help.
--SA
Espen Harlinn 19-Feb-11 18:20pm    
You just got a 5 - as you can see, it weighs in somewhat more heavily than the 1 somebody else gave you.

Around 1990 the preprocessor was heavily used to add "features" similar to what was later achieved with templates - libraries like the NIH C++ class library relied heavily on the preprocessor (http://www.softwarepreservation.org/projects/c_plus_plus/library/nihcl).

This practice was often taken too far, resulting in code that was quite difficult to maintain, but in the case of the NIH C++ class library it was appropriate, as it allowed the developers to create type-safe containers. The technique became popular and found its way into libraries like MFC, where it was used more as a primary way of doing many things that could easily have been solved using regular C++ language features.
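
To make the idea concrete, here is a minimal sketch of how such pre-template type-safe containers were generated; DECLARE_TYPED_STACK is a name made up for this illustration:

#include <cassert>
#include <cstddef>

// One macro expansion stamps out a distinct, type-safe container class
// per element type - the pre-template way of avoiding void* containers.
#define DECLARE_TYPED_STACK(T)                                     \
    class T##Stack                                                 \
    {                                                              \
    public:                                                        \
        T##Stack() : top_(0) {}                                    \
        void push(T v) { assert(top_ < 16); data_[top_++] = v; }   \
        T pop() { assert(top_ > 0); return data_[--top_]; }        \
    private:                                                       \
        T data_[16];                                               \
        std::size_t top_;                                          \
    };

DECLARE_TYPED_STACK(int)     // expands to class intStack
DECLARE_TYPED_STACK(double)  // expands to class doubleStack

int main()
{
    intStack s;
    s.push(42);
    return s.pop() == 42 ? 0 : 1;   // exit code 0 on success
}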

Swearing is often considered bad form in polite society; macros like:
#define begin {
#define end }
#define and &&
#define or ||
are, in my opinion, on the same level. But that doesn't mean the preprocessor doesn't have legitimate uses, like http://www.boostpro.com/tmpbook/preprocessor.html. It's a tool: if it can be used to create more maintainable code, or just to reduce the effort required to implement a tediously repeated pattern of operations, I'll use it :)
Sergey Alexandrovich Kryukov 19-Feb-11 23:19pm    
Thank you for the advance vote; I did not really mean to ask for it, I just wanted to know...
(As to the voters' weights: are you familiar with the quantum-mechanical, physical and philosophical problem of measurement? It is not possible to measure a state without modifying it, in principle; one cannot even make this effect weaker... :-)

I'm familiar with the history of the subject. The problem of the applicability of the preprocessor is in fact really complex. The common modern view of the preprocessor as evil is not a valid answer, like any other statement based on trends, authority and a "voting", "democratic" approach (I say so even though I share most of the negative judgment of the preprocessor). One bright manifestation of the complexity of the problem, and not even about the preprocessor: in C++ you can write, for example, a floating-point algorithm that abstracts over the concrete type. This is impossible in .NET, essentially because these types do not have a common interface with arithmetic operations, so a "where" clause is not allowed with these and many other types. (I saw a clever work-around, but it was still too awkward.) One of the most important features is missing.
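
To make the C++ half of that point concrete, a minimal sketch (lerp is a made-up example function):

#include <iostream>

// A template algorithm abstracting over the concrete floating-point
// type; it only requires that Float support +, - and *.
template <typename Float>
Float lerp(Float a, Float b, Float t)
{
    return a + (b - a) * t;
}

int main()
{
    std::cout << lerp(0.0f, 10.0f, 0.25f) << "\n";  // instantiated for float
    std::cout << lerp(0.0, 10.0, 0.25) << "\n";     // instantiated for double
    return 0;
}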

Now, note my remark about the alternative: allowing sub-classing for all types, including primitive ones and enumerations. I am sure this alternative would solve the problem with generics I mentioned above. Such solutions are actually well known (take Ada, for example).

--SA

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


