|
C++ wins only on performance; in everything else it kisses C#'s feet.
No Manual Garbage Collection
Huge .NET-Framework library
Autoboxing - every type can be treated as if it inherits from object
Supports constructor-chaining (one constructor can call another constructor from the same class)
When a virtual method is called in a constructor, the method in the most derived class is used
Static constructors (run before the first instance of the class is created)
Advanced runtime type information and reflection
Built-in support for threads
No need for header files and #includes
Structs and classes are actually different (structs are value types, have no parameterless constructor, and in general cannot be derived from)
Supports properties
Readonly members behave like const, but can still be assigned in the constructor
Finally block for exceptions
Arrays are objects
Supports the base keyword for calling the base class implementation of an overridden method
and lots more.
|
|
|
|
|
Xmen wrote: C++ wins only on performance; in everything else it kisses C#'s feet. [feature list snipped]
I agree
|
|
|
|
|
I like C# too, and the .NET Framework is great. Unfortunately it's just too darn slow, especially on Windows Mobile devices. I seem to spend all my time trying to optimize code, and I never get close to the performance of C++.
A simple function to search a .csv file would take minutes to run in C#, but just milliseconds in C++. I eventually gave up on .NET CF and rewrote my project using a combination of native eVC++ and Free Pascal for the GUI.
Some benchmarks on the internet seem to suggest that C++ is only 3 to 10 times faster than C#, but my tests show the speed difference to be much greater on mobile devices (at least 100x).
rjklindsay
|
|
|
|
|
I did a lot of mobile development; some comments:
- Debug builds run like a dog.
- Most of the framework code is not even implemented in C#; it just calls native code via P/Invoke, so you are running C++ anyway. This includes nearly all WinForms code.
- Do not use reflection. A lot of people try to use reflection and fancy things like that, but the Compact Framework doesn't do the same caching; the same goes for serialization of types.
If you avoid these things, I found performance was better than eVC, since it was quicker to optimize. Just KISS.
Ben
|
|
|
|
|
Nice article. What do you think of the D language, anyway? I have heard it compiles to native code, yet it has garbage collection, so you won't end up with memory leaks after hours of running your application.
|
|
|
|
|
If performance is important, a C/C++ application will be compiled twice -- the first time to generate profiling instrumentation, and the second time to optimize the code based on the profiling data generated from running the application. The performance improvement can be quite dramatic, and it is something not available to C# programs.
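To make the two-pass build concrete, here is a minimal sketch using GCC's profile-guided optimization flags (Visual C++ has an equivalent profile-guided optimization mode); the file names and the sample function are hypothetical:

// Two-pass PGO build (file names are made up):
//   g++ -O2 -fprofile-generate search.cpp -o search   <- pass 1: instrumented build
//   ./search                                          <- run a representative workload
//   g++ -O2 -fprofile-use search.cpp -o search        <- pass 2: optimize from the profile
#include <cstdio>
#include <cstring>

// A branch-heavy function the profiler can learn from: the profile tells the
// compiler which branch is hot, so it can lay out the code accordingly.
static int count_long_rows(const char* const* rows, int n, int min_len) {
    int count = 0;
    for (int i = 0; i < n; ++i)
        if ((int)std::strlen(rows[i]) >= min_len)
            ++count;
    return count;
}

int main() {
    const char* rows[] = { "a,b,c", "a,much,longer,row,with,many,fields", "x" };
    std::printf("%d\n", count_long_rows(rows, 3, 10));
    return 0;
}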
A JIT compiler is restrained by time. It can only take the time to make a broad pass at optimization; otherwise, it runs the risk of spending more time figuring out how to optimize the code than the time saved by the actual optimizations. A C/C++ compiler, on the other hand, can take however much time it needs to analyze the code and provide as much optimization as it can manage.
It is true that a JIT compiler can theoretically use processor instructions not available to a previously compiled application. However, this will only be of benefit if those special instructions are designed to speed up an application, and there isn't much motivation for a chip designer to add new instructions for applications -- by the time legacy applications have been updated to take advantage of them, that chip will be several generations old. The primary advantage of new instructions is for the operating systems that run on them. Unlike most applications, an operating system typically runs a large number of concurrent tasks, so any new instructions to speed that up are a large advantage -- but not one that applications will benefit from.
|
|
|
|
|
A JIT compiler is restrained by time, but heavy optimizations are possible; the catch is that complex heuristics are needed to decide when to spend time on them.
About new processor instructions: what you say is theoretical reasoning not backed by any facts, and indeed false (you're welcome to post specific evidence). Most new instructions are meant to be used by applications; the only OS-specific ones I can remember are sysenter/syscall for OS-level system calls, instead of software interrupts like "int 0x80" on Linux or "int 21h" on DOS. Most SIMD operations can't possibly be used in an OS (say, SIMD floating-point instructions), and performance-intensive applications like media manipulation (codecs, editors, ...) or video games will use them quite soon. In the other cases, I'd say that you need to introduce them really early: introduce an op today to make programs faster tomorrow (that's the case with cmov, added in the P6/Pentium Pro; lots of binaries in the wild are still built to run on a 486/Pentium).
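For illustration, here is a minimal sketch of the kind of application-level SIMD use described above, with SSE1 intrinsics (assuming a compiler that ships <xmmintrin.h> and, for brevity, an array length that is a multiple of 4):

#include <xmmintrin.h>  // SSE intrinsics

// Adds two float arrays four lanes at a time; n is assumed to be a multiple of 4.
void add_arrays_sse(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);              // load 4 unaligned floats
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));   // out[i..i+3] = a[i..i+3] + b[i..i+3]
    }
}

This is exactly the sort of instruction an OS kernel rarely touches but a codec or game leans on heavily.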
Profile-guided optimization is something most commonly used by JITs. It's true that the speedups can be dramatic, but using data from a different usage scenario can invalidate some of the profiling results, and that problem does not apply to a JIT, which profiles the actual run.
|
|
|
|
|
... C++/CLI
The best of both worlds... or close enough
I've been investigating this technology for quite some time now. I'm writing a game engine whose main purpose is to be easy to use and extend. C++/CLI not only lets me use my assemblies from whatever language is out there for the .NET platform, but also lets me optimize the code for as much performance as possible.
Best,
Hernan
|
|
|
|
|
Both C++ and C# will run at the "SAME" speed in .NET, as both are converted into IL. Unless you use unsafe {} blocks, you will not find any specific differences in C++/CLI. If you really want performance gains, write your C++ algorithms in a classical native C++ DLL and call them through DllImport interop, as illustrated in my Point 3 and sketched below.
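A minimal sketch of the native side of that approach (the names "algos.dll" and SumArray are hypothetical); the C# side would then declare [DllImport("algos.dll")] static extern int SumArray(int[] data, int count);

// Native side, compiled with MSVC into e.g. algos.dll.
// extern "C" prevents name mangling so DllImport can find the export.
extern "C" __declspec(dllexport) int SumArray(const int* data, int count) {
    int sum = 0;
    for (int i = 0; i < count; ++i)
        sum += data[i];   // the hot loop runs as fully native code, no JIT involved
    return sum;
}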
http://www.3dbuzz.com/vbforum/showthread.php?t=135493[^]
Alternatively, you can search for game development in C# vs C++:
http://www.google.com/search?aq=f&hl=en&q=c%23+vs+c%2B%2B&btnG=Search[^]
Thanks for making a lively discussion.
Mugunth
Thanks & Regards,
M.Mugunth Kumar
M +65 82448625
W http://mugunth.kumar.googlepages.com
B http://tech-mugunthkumar.blogspot.com (Technology Blog) *NEW
Nanyang Technological University,
Wee Kim Wee School of Communication and Information,
31 Nanyang Link, Singapore - 637718.
|
|
|
|
|
C++/CLI provides custom data marshalling, which may result in huge performance gains. That is not possible in C#.
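As a minimal sketch of what that looks like (using the msclr marshalling library that ships with Visual C++ 2008; compile with /clr):

#include <msclr/marshal_cppstd.h>   // marshal_as for std::string and friends
#include <string>

// Converts a managed System::String^ to a native std::string in one call,
// letting you choose exactly where the managed/native boundary is crossed.
std::string ToNative(System::String^ managed) {
    return msclr::interop::marshal_as<std::string>(managed);
}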
Eduardo León
|
|
|
|
|
Computers now are faster. Who has ever dreamed about a computer with more than 4 GB of memory? So use whatever language helps you finish your task.
|
|
|
|
|
True, they are faster, but not fast enough for EVERY task, so you still have to optimize, cache data, pre-calculate, etc., AND design your code well.
|
|
|
|
|
Well, C# will be better: it does not need a lot of thinking (the garbage collector does a lot of memory management for you) and it helps you avoid a lot of mistakes, so I vote for C#.
That said, C++ is also powerful.
|
|
|
|
|
I vote for knowledge: programming without knowledge is just like driving in the wrong lane on the highway... it can work, but...
|
|
|
|
|
yassir.2 wrote: Who has ever dreamed about a computer with more than 4 GB of memory?
And yet, on a 4-processor machine with 6 GB of memory, I spent several months making my research application work efficiently within that 6 GB.
yassir.2 wrote: So use whatever language helps you finish your task.
I would make sure the language is suitable for the application you are writing. .NET is certainly not suitable for the work I do, but it will be fine for 90% of business applications.
John
|
|
|
|
|
Java and C# are entering the HPC picture as well. But if you are tight on memory, stick with C++. Garbage collectors can be faster than manual memory management, but they use a lot more memory, as any paper on the topic will tell you.
|
|
|
|
|
Hello,
Where should I start?
First of all, I like articles that deal with such complex topics, but I think the current version needs some improvements.
I think it isn't possible to find a general answer, because it depends on the point of view.
You write: The reason why C# compiled applications could be faster is that, during the second compilation, the compiler knows the actual run-time environment and processor type and could generate instructions that targets a specific processor.
It will also not be able to take advantages of the Core 2 duo or Core 2 Quad's "true multi-threaded" instruction set as the compiler generated native code does not even know about these instruction sets.
Never heard of it. The most common way to control or optimize threading is the Win32 API (e.g., controlling the affinity mask to specify the CPU, as sketched below). The API hides such details and generalizes the usage via the HAL. Then you already have a "truly multithreaded" application. If your application is well designed, it doesn't matter which CPU environment is present.
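A minimal Win32 sketch of controlling the affinity mask from native code:

#include <windows.h>

// Pins the calling thread to the first logical processor (bit 0 of the mask).
// The API call looks the same regardless of the underlying CPU model.
void PinToFirstCpu() {
    DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(), 1);
    if (previous == 0) {
        // The call failed; the thread keeps its old affinity.
    }
}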
You write: In the earlier days, not much changes were introduced to the instruction set with every processor release. The advancement in the processor was only in the speed and very few additional instruction sets with every release. Intel or AMD normally expects game developers to use these additional instruction sets. But with the advent of PIV and then on, with every release, PIV, PIV HT, Core, Core 2, Core 2 Quad, Extreme, and the latest Penryn, there are additional instruction sets that could be utilized if your application needs performance. There are C++ compilers that generate code that targets specific processors. But the disadvantage is the application has to be tagged as "This application's minimum system requirements are atleast a Core 2 Quad processor" which means a lot of customers will start to run away.
I must disagree. Special instruction sets like SSE do exist, but there are several ways to use them dynamically; one is sketched below.
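One such way, sketched minimally here, is to detect the instruction set at run time with the Win32 IsProcessorFeaturePresent call and dispatch to the matching code path (the kernels are toy examples; n is assumed to be a multiple of 4 for brevity):

#include <windows.h>
#include <xmmintrin.h>

// Portable fallback.
static void scale_plain(float* v, int n, float s) {
    for (int i = 0; i < n; ++i) v[i] *= s;
}

// SSE path: multiplies four floats per iteration.
static void scale_sse(float* v, int n, float s) {
    __m128 vs = _mm_set1_ps(s);
    for (int i = 0; i < n; i += 4)
        _mm_storeu_ps(v + i, _mm_mul_ps(_mm_loadu_ps(v + i), vs));
}

// Detects SSE once at run time and dispatches, so a single binary runs on
// old CPUs yet still uses the new instructions where they exist -- no
// "minimum system requirements: Core 2" tag needed.
void scale_array(float* v, int n, float s) {
    if (IsProcessorFeaturePresent(PF_XMMI_INSTRUCTIONS_AVAILABLE))
        scale_sse(v, n, s);
    else
        scale_plain(v, n, s);
}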
You write: How many of us can manage memory efficiently in a C++ application that's so huge say a million lines of code? It's extremely difficult to "well-design" a C++ program especially when the program grows larger. The problem with "not-freeing" the memory at the right time is that the working set of the application increases which increases the number of "page faults". Everyone knows that page fault is one of the most time-consuming operation as it requires a hard disk access. One page fault and you are dead. Any optimization that you did spending your hours of time is wasted in this page fault because you did not "free" memory that you no longer needed.
I must also disagree: in C++ there are several ways to manage memory very effectively (more effectively than in C#); one such technique is sketched below.
Of course page faults are a critical situation, but a page fault is only raised if a requested memory page isn't present in RAM (i.e., it has been moved to the swap file).
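A minimal sketch of one classic technique: a fixed-size block pool that allocates from one big up-front buffer, so per-object allocations never hit the general-purpose heap and cannot fragment it (alignment is ignored here for brevity):

#include <cstddef>
#include <vector>

// One big allocation up front, then O(1) allocate/deallocate with no heap
// calls and no fragmentation.
class FixedPool {
public:
    FixedPool(std::size_t block_size, std::size_t block_count)
        : storage_(block_size * block_count) {
        for (std::size_t i = 0; i < block_count; ++i)
            free_list_.push_back(&storage_[i * block_size]);
    }
    void* allocate() {
        if (free_list_.empty()) return 0;   // pool exhausted
        void* p = free_list_.back();
        free_list_.pop_back();
        return p;
    }
    void deallocate(void* p) {
        free_list_.push_back(static_cast<char*>(p));
    }
private:
    std::vector<char>  storage_;    // the single up-front buffer
    std::vector<char*> free_list_;  // blocks currently available
};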
That's all. I hope this is constructive feedback.
Ciao...
|
|
|
|
|
Quick article, and the information about the two-time compilation is good.
I am new to the C#/.NET world, but I think the second compilation, done when we run the application, will make the application start more slowly.
IMHO, we can use C++ smart pointers to manage memory without problems (a sketch follows below), and we don't need to ship a big runtime with our application.
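For instance, a minimal sketch with std::unique_ptr (C++11; in the C++ of this thread's era, boost::scoped_ptr or std::auto_ptr played the same role):

#include <memory>
#include <string>

struct Record {
    std::string name;
};

void process() {
    std::unique_ptr<Record> rec(new Record());  // heap object, owned by rec
    rec->name = "example";
    // ... use rec; no delete needed ...
}   // rec leaves scope here and the Record is destroyed deterministically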
|
|
|
|
|
For improving application load time, installer programs can be set to "NGen" the code.
NGen is a native image generator that precompiles a .NET assembly's IL to native code; this step is usually done by the installer. Secondly, the .NET Framework is present on all machines after Windows XP SP2.
This means you don't have to ship the framework separately for at least 90% of the machines.
Regards,
Mugunth
Thanks & Regards,
M.Mugunth Kumar
M +65 82448625
W http://mugunth.kumar.googlepages.com
B http://tech-mugunthkumar.blogspot.com (Technology Blog) *NEW
Nanyang Technological University,
Wee Kim Wee School of Communication and Information,
31 Nanyang Link, Singapore - 637718.
|
|
|
|
|
90% simply isn't good enough
Unless the app you are writing is going to run ONLY on PCs that you have control over, you MUST include the .NET Framework in your install (or at least have your installer dynamically install it).
Would you seriously ever do business with a company that guaranteed 90% uptime?
|
|
|
|
|
A couple of things to note about this comment.
The JIT (Just In Time) compilation doesn't happen entirely at start time. It literally happens "just in time". In addition, this can be done at installation time instead. Finally, on this point, the IL is specifically designed to be fast to compile.
Yes, you can use smart pointers to manage memory in C++. However, by itself, a smart pointer doesn't do much to improve performance. Specialized allocators would be required to really optimize performance there. And there's an issue with memory management that is very difficult to address in C++, that's automatically handled in .NET. Memory allocations in C++ generally can lead to fragmentation and excessive page faults. The GC in .NET, on the other hand, is a "compacting garbage collector", which eliminates fragmentation and can reduce page faults.
There's a very interesting series of articles on this topic by some of the top Microsoft developers, both experts at optimizing code. http://blogs.msdn.com/ricom/archive/2005/05/10/performance-quiz-6-chinese-english-dictionary-reader.aspx[^].
William E. Kempf
|
|
|
|
|
FYI, I am new to the .NET world.
William E. Kempf wrote: The JIT (Just In Time) compilation doesn't happen entirely at start time. It literally happens "just in time".
Compiling "just in time" makes the application feel slower to the user.
William E. Kempf wrote: In addition, this can be done at installation time instead.
This seems to be a good option, but the executable still runs under the runtime, which is where the slowness comes in.
William E. Kempf wrote: Finally, on this point, the IL is specifically designed to be fast to compile.
IL is designed to be fast to compile, but the JIT compiler itself is not compiled for the specific processor, so the JIT gives us a faster executable for the specific processor while the JIT compiler itself is slower because of the generic code in it.
William E. Kempf wrote: Yes, you can use smart pointers to manage memory in C++. However, by itself, a smart pointer doesn't do much to improve performance.
Smart pointers are for managing memory, not for increasing performance.
And in C++, most of the time we don't need to use "new" to allocate heap memory. We can avoid it as much as possible and use smart pointers whenever we need them. But .NET allocates objects on the heap and waits for the GC to collect them.
Thanks, and correct me if I am wrong.
|
|
|
|
|
The .NET Framework setup itself installs not IL code but an NGen'd version of the IL code. Similarly, using our installer, we can "second compile" the application during installation using NGen.exe. This is one way to improve performance: it avoids the second compilation that would otherwise happen at startup.
Thanks & Regards,
M.Mugunth Kumar
M +65 82448625
W http://mugunth.kumar.googlepages.com
B http://tech-mugunthkumar.blogspot.com (Technology Blog) *NEW
Nanyang Technological University,
Wee Kim Wee School of Communication and Information,
31 Nanyang Link, Singapore - 637718.
|
|
|
|
|
Thanks Mugunth, I understand the NGen and the performance benefits.
|
|
|
|
|
codeprojecter_ wrote: Compiling "just in time" makes the application feel slower to the user.
Maybe, maybe not. Since small amounts are being compiled, and they compile fast, in many circumstances the user can't even perceive the compilation overhead. There are too many variables to consider here to make a blanket claim one way or the other.
codeprojecter_ wrote: This seems to be a good option, but the executable still runs under the runtime, which is where the slowness comes in.
I don't understand this comment. The "runtime" in this case is just an API, no different from running under the Win32 "runtime". The way you said this, it seems like you think it's still somehow running in an interpreted mode. It's not; it's running as native code. The only overhead of the .NET runtime is that of the security features, and that's not overly significant and is likely worth the cost for most applications.
codeprojecter_ wrote: IL is designed to be fast to compile, but the JIT compiler itself is not compiled for the specific processor, so the JIT gives us a faster executable for the specific processor while the JIT compiler itself is slower because of the generic code in it.
You're making assumptions there. I won't claim to know how the JIT is performed, as I've never had a reason to research it. But there's no reason to assume the code that performs the JIT wasn't itself NGen'd and thus running optimally for the platform.
codeprojecter_ wrote: And in C++, most of the time we don't need to use "new" to allocate heap memory. We can avoid it as much as possible and use smart pointers whenever we need them. But .NET allocates objects on the heap and waits for the GC to collect them.
Actually, you do have to use new to allocate heap memory most of the time. The other options are malloc and its cousins and custom allocators, but for custom classes these options have to use at least placement new to construct the object (see the sketch below). Maybe you didn't mean to say "heap" memory and meant that you could instead rely on stack allocations, but I'd contend that most applications rely on at least as much heap, if not significantly more, than stack.
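A minimal sketch of that placement-new point: raw memory from malloc (or a custom allocator), placement new to construct the object in it, and an explicit destructor call to tear it down:

#include <cstdlib>
#include <new>      // placement new
#include <string>

struct Widget {
    std::string label;
};

int main() {
    void* raw = std::malloc(sizeof(Widget));  // raw bytes; no constructor has run
    if (raw == 0) return 1;
    Widget* w = new (raw) Widget();           // placement new: construct in place
    w->label = "hello";
    w->~Widget();                             // explicit destructor call
    std::free(raw);                           // release the raw memory
    return 0;
}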
And again, in case the point wasn't understood, smart pointers don't do anything for performance or for dealing with allocations; they only address issues with deallocation. The GC addresses much more than that and produces significant performance benefits. Yes, there are drawbacks, such as the non-deterministic nature of collection and the fact that it addresses only memory and not other types of resource acquisition, so I'm not trying to oversell the GC. But strictly on performance, which is the point of this article, the GC is the better general-purpose solution.
BTW, the CLR uses both stack and heap allocations. It's just not as easy to determine where allocations occur.[^]
William E. Kempf
|
|
|
|
|