|
Hi,
if you are in charge of all the code, both managed and unmanaged, you can choose the interface between both worlds and get excellent performance, no matter which managed language you use.
IMO the rumors of C++/CLI being much better at interop than C# are highly overrated.
When done properly, bulk data does not need to be copied at all; a key factor is making your managed side do the allocation of all buffers that need to be passed from one side to the other.
Simple situations can be implemented with the "fixed" keyword; more complex situations, where the buffer/array pointers must remain in native use after the call that passed them returns, require the GCHandle class.
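To make that concrete, here is a minimal sketch of both approaches; "native.dll" and NativeSum are placeholders for your own exported function, not real libraries:

```csharp
using System;
using System.Runtime.InteropServices;

class InteropSketch
{
    // hypothetical native export; replace with your own DLL and function
    [DllImport("native.dll")]
    static extern unsafe double NativeSum(double* data, int count);

    static unsafe double SumFixed(double[] buffer)
    {
        // "fixed" pins the managed array only for the duration of the block
        fixed (double* p = buffer)
        {
            return NativeSum(p, buffer.Length);
        }
    }

    static IntPtr PinLongTerm(double[] buffer, out GCHandle handle)
    {
        // GCHandle keeps the array pinned until handle.Free() is called,
        // so the native side may hold on to the pointer across calls
        handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        return handle.AddrOfPinnedObject();
    }
}
```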
Luc Pattyn [Forum Guidelines] [My Articles]
The quality and detail of your question reflects on the effectiveness of the help you are likely to get.
Show formatted code inside PRE tags, and give clear symptoms when describing a problem.
|
|
|
|
|
Dear Luc
thank you very much for your answer.
> IMO the rumors of C++/CLI being much better at interop than C# are highly overrated.
That's good news for me, since I would like to avoid C++ wherever possible. Although C++/CLI interop is more convenient and allows for the creation of native objects, I don't think I really need it, because I plan to reduce the managed/unmanaged communication to exactly two events: 1. the managed object passing parameters and data to the native object (via P/Invoke and a native wrapper function), and 2. the native object returning results (again via the native wrapper).
> a key factor is making your managed side do the allocation of all buffers that need to be passed from one side to the other.
O really? I'm surprised! I'd rather let the native code itself allocate all the memory it needs, since I thought allocation and management of natively allocated memory was faster?!?
I wouldn't want garbage collection anyway. So do you really mean allocating memory in the managed code using "new"? Of what benefit would that be? Or do you mean creating vars, arrays etc. in the managed code and then passing pinned or even interior pointers to the native code? If so, I don't think I would do that, since I would still have managed memory in any case, which produces overhead I must avoid.
Thanks & best regs
U.K.
|
|
|
|
|
Unbe Kant wrote: I'd rather let the native code itself allocate all the memory it needs
by all means, if the managed world is involved neither in the buffer nor its content, let the native world take care of it. However, as soon as the managed side is involved, put it in charge of allocation. It is easier and more efficient that way.
Unbe Kant wrote: I wouldn't want garbage collection anyway.
Why is that, do you have good reasons to avoid it? allocating and later collecting one array will be cheaper than copying its contents once.
Unbe Kant wrote: allocating memory by the managed code using "new"?
by any means suitable to your purpose. Could be new Something() or Image.FromFile() or anything that allocates (and initializes) a buffer or a chunk of memory.
Unbe Kant wrote: I would still have managed memory ... with produces overhead I must avoid.
I do image processing with C# and native C or assembly: I let C# do the overall stuff, the input/output, the GUI and the buffer allocations, and C/assembly the bit-level stuff.
There are some app categories for which .NET isn't suited, but for most of those Windows isn't suited either. Those are seldom discussed on this site though.
Care to tell more about your app? What is it about, how big are your matrices, how many are there, and how long would a native function take typically?
Luc Pattyn [Forum Guidelines] [My Articles]
|
|
|
|
|
Hi Luc
Thank you again for your input. I will do as you say, since I believe your experience is much greater than mine. Actually, I am more familiar with Linux coding in C/C++.
Here is some more info about our app:
- It is a scientific program solving integral equations numerically
- One single matrix: 128^3 double-precision (64-bit) floats (Not that big, I know. It's not one single calculation, but the number of iterations, that consumes time. Furthermore, a future goal is to increase the precision AND dimensionality, i.e. 128*x^(3*y) ... as soon as RAM becomes as cheap as a grain of sand )
- Need some 10 matrices simultaneously in RAM
- The current command-line based implementation is pure C++; however, even here the matrices are stored in plain arrays rather than STL containers, because the matrix operations are carried out by the C-style Intel Math Kernel Library
- Intel C/C++ Compiler
- Time consumption per iteration on a single 3 GHz Intel CPU: 0.5 s
- Still: we need a couple of thousand iterations until the calculation converges, which makes a single calculation last approx. half an hour for this task alone
- Some post-interpolation stuff
Future plans:
- Extending matrix dimensionality to six while, of course, lowering the resolution (e.g. 32^6 or even more if we get the RAM )
- GUI (Here's where .NET becomes interesting)
- Database connection (ditto)
Now maybe things become clearer. So what do I want to do?
- Let C# load the parameters (e.g. matrix size)
- Let native C++ allocate the memory for the 3D/6D arrays and pass the pointer to C# (using a pinned pointer, right?). However, these MUST be plain arrays, since the Intel Math Kernel Library accepts nothing else
- Let C# fill the matrix
- Let C++ perform all calculations. I hope to be able to somehow handle the messaging between C++ and C# during the calculation, e.g. for updating a progress bar
- When finished, let C# present and save the resulting matrix
As you can see, the managed world is hardly involved in the buffer. I could, of course, also let the managed code allocate the memory for the initial matrix and then copy it. But should I?
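For reference, the native interface I have in mind would be roughly this sketch ("solver.dll" and the function names are just placeholders I made up, not real Intel/MKL entry points):

```csharp
using System;
using System.Runtime.InteropServices;

class SolverWrapper
{
    // hypothetical exports of a native wrapper DLL around the MKL code
    [DllImport("solver.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern IntPtr AllocateMatrix(int n);          // native allocation of n*n*n doubles

    [DllImport("solver.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern void Solve(IntPtr matrix, int n, int maxIterations);

    [DllImport("solver.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern void FreeMatrix(IntPtr matrix);
}
```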
You asked:
> Why is that, do you have good reasons to avoid it?
> allocating and later collecting one array will be
> cheaper than copying its contents once.
Is this true in my case? Wouldn't the performance decline dramatically if the GC were involved in the matrix calculations? It's just that I've heard (but never tested; maybe I should give it a try) that the performance of such calculations with matrices stored in managed memory is always lower.
> There are some app categories for which .NET isn't suited but
> for most of those Windows isn't suited either.
O dear - would that be one of those categories?
Regs
|
|
|
|
|
Unbe Kant wrote: would that be one of those categories?
absolutely not. as I understand it, you have a lot of data (filling almost all the memory)
and a heavy computation, nothing extremely dynamic or real-time oriented.
The one thing you must avoid is having your data copied, since 1) that slows things down, and 2) that requires extra memory (worst case: twice as much).
I still believe allocating on the managed side is the right way to go. I can't imagine Intel insists on allocating the data arrays itself. The library must accept pre-existing arrays, so allocate them in the managed world, use GCHandle to pin them down and get their pointer, then pass that to the native code, where Intel should accept them as if they had been allocated natively outside the Intel library. (If the Intel library needs some working space that remains entirely hidden from the outside, so be it.)
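In code, the idea is roughly this sketch ("wrapper.dll" and MklOperation stand for whatever routine you call through your native wrapper):

```csharp
using System;
using System.Runtime.InteropServices;

class PinnedPass
{
    // hypothetical wrapper around the native/MKL routine
    [DllImport("wrapper.dll")]
    static extern void MklOperation(IntPtr data, int n);

    static void Run(int n)
    {
        double[] matrix = new double[n * n * n];   // managed allocation, one contiguous block
        GCHandle h = GCHandle.Alloc(matrix, GCHandleType.Pinned);
        try
        {
            // the native code sees an ordinary double* to contiguous memory
            MklOperation(h.AddrOfPinnedObject(), n);
        }
        finally
        {
            h.Free();   // unpin so the GC can move/collect the array again
        }
    }
}
```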
There is one caveat: managed memory blocks larger than some 80-85 KB get allocated on the "large object heap", which never gets compacted (moving the data would be too expensive). If lots of such blocks, all with different sizes, were allocated and deallocated in random order, memory could become fragmented, preventing a future allocation even though sufficient free memory is available.
The relevance of this fragmentation risk depends on the variability of your array sizes and the frequency of allocating and freeing such arrays. The subject is carefully avoided in all but the deepest documentation and discussions.
If, for some reason, you have to allocate all memory on the native side, you will have to:
- either provide managed but pointer-based code to fill the array yourself;
- or rely on one of the Marshal.Copy() overloads to fill the arrays in parts (doing it in parts prevents you from needing twice the memory), maybe row by row.
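The row-by-row variant would look roughly like this (nativeBase and the row-major layout are assumptions, not something prescribed by any library):

```csharp
using System;
using System.Runtime.InteropServices;

class RowCopy
{
    // Copy a managed matrix into a natively allocated buffer one row at a time,
    // so at no point is a second full-size copy of the data needed.
    static void FillNative(IntPtr nativeBase, double[] managed, int rows, int cols)
    {
        for (int r = 0; r < rows; r++)
        {
            // address of row r in the native buffer (row-major layout assumed)
            IntPtr rowPtr = new IntPtr(nativeBase.ToInt64() + (long)r * cols * sizeof(double));
            Marshal.Copy(managed, r * cols, rowPtr, cols);
        }
    }
}
```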
BTW: I trust both Intel's library and yourself are sufficiently performance-aware to avoid multi-dimensional arrays; in my experience a little effort to linearize them immediately pays off.
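Linearizing just means computing the index yourself; a sketch of the idea:

```csharp
// One flat array standing in for a conceptual n*n*n matrix;
// this is the layout a C-style library expects anyway.
class Linearized
{
    readonly int n;
    readonly double[] data;

    public Linearized(int n)
    {
        this.n = n;
        data = new double[n * n * n];   // one contiguous block
    }

    // element (i, j, k) of the conceptual 3D matrix
    public double this[int i, int j, int k]
    {
        get { return data[(i * n + j) * n + k]; }
        set { data[(i * n + j) * n + k] = value; }
    }
}
```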
Luc Pattyn [Forum Guidelines] [My Articles]
|
|
|
|
|
>as I understand it, you have a lot of data (filling almost all the memory)
>and a heavy computation, nothing extremely dynamic or real-time oriented.
Precisely.
> I still believe allocating on the managed side is the right way to go.
Ok, convinced, I believe you and will make it so.
> I can't imagine Intel insists on allocating the data arrays themselves
Of course it doesn't. Please forgive me if I didn't make this point clear enough: the application must allocate the memory and pass the pointer along with the number of elements (e.g. m and n for an m*n matrix).
> BTW: I trust both Intel's library and yourself are sufficiently performance-aware
> to avoid multi-dimensional arrays; in my experience a little effort to linearize them immediately
> pays off.
Sure, I wrote about multi-dimensional matrices, not arrays.
Again, thank you very much for taking the time to help me with your input. I shall now start experimenting.
Regs
UK
|
|
|
|
|
Luc Pattyn wrote: IMO the rumors of C++/CLI being much better at interop than C# are highly overrated.
Luc, I hope you won't mind if I defer to these fellows' facts as opposed to your opinion?
While there is no question that the Microsoft® .NET Framework improves developer productivity, many people have some concern regarding the performance of managed code. The new version of Visual C++® will allow you to set aside these fears. For Visual Studio® 2005, the C++ syntax itself has been greatly improved to make it faster to write. In addition, a flexible language framework is provided for interacting with the common language runtime (CLR) to write high-performance programs.
Many programmers think that C++ gets good performance because it generates native code, but even if your code is completely managed you'll still get superior performance. With its flexible programming model, C++ doesn't tie you to procedural programming, object-oriented programming, generative programming, or meta-programming.
Another common misconception is that the same kind of superior performance on the .NET Framework can be attained regardless of the language you use—that the generated Microsoft intermediate language (MSIL) from various compilers is inherently equal. Even in Visual Studio .NET 2003 this was not true, but in Visual Studio 2005, the C++ compiler team went to great lengths to make sure that all of the expertise gained from years of optimizing native code was applied to managed code optimization. C++ gives you the flexibility to do fine tuning such as high-performance marshaling that is not possible with other languages. Moreover, the Visual C++ compiler generates the best optimized MSIL of any of the .NET languages. The result is that the best optimized code in .NET comes from the Visual C++ compiler.
http://msdn.microsoft.com/en-us/magazine/cc163855.aspx[^]
Wait Luc, there's more
Summary: Explore the design and rationale for the new C++/CLI language introduced with Visual C++ 2005. Use this knowledge to write powerful .NET applications with the most powerful programming language for .NET programming.
C++: The Most Powerful Language for .NET Framework Programming[^]
MORE!!!
Some people think that Microsoft forgot C++ while playing with C#, but C++ is still a widely used language. Could you comment on that?
Microsoft has absolutely *not* forgotten C++. Most shrink-wrap applications are C++ because it's the smallest, tightest way to build apps and requires the least support from the OS itself. There is tons of existing C++ code in the world and Microsoft has implemented pure magic in the managed extensions for C++. MC++ allows you to flip a compiler switch, recompile your C++ code and begin producing and consuming .NET types immediately, all without changing your unmanaged types. In addition, the MC++ IJW (It Just Works) technology provides the lowest overhead interop layer between managed and unmanaged code, making it the best way to bridge the gap between the two technologies. Also, VS.NET 2003 provides a C++ compiler that compiles unmanaged C++ code down to smaller and faster code than ever before, while increasing ANSI C++ compliance to the highest level of any C++ compiler that I know of. I can't say enough good things about Microsoft's continued support for C++. I don't know of anyone doing more.
Interview with Chris Sells[^]
|
|
|
|
|
Hi Mike,
there is a lot in what you wrote/copied/linked to that I agree with, even when some of it may sound a little biased (MS has to promote .NET, has to make clear what its position is with respect to C++, it is only normal that compilers and code generators get improved, etc.).
However, it is beside the point I was making, or trying to make: once it has been decided that a mixture of managed code and native code is needed (maybe because of existing native code), making the two worlds cooperate can often be achieved with C# and P/Invoke as efficiently and with the same performance as with C++/CLI. I am not saying C# is better or worse than C++/CLI; I am only referring to its ability to interface well with the native world.
I have been doing a lot of interop over the years, dealing with huge amounts of numeric data going back and forth, and the one thing that really matters performance-wise is that the data does not get copied. Well, that can be arranged by allocating managed buffers (e.g. int arrays), then passing pointers (I tend to use GCHandle for that, even when sometimes a simple fixed would suffice).
Example: I created an image processing library using run-time generated assembly code (including MMX/SSE for Intel x86 and Altivec for PowerPC) and called from both .NET (C#, using P/Invoke) and Java (using some kind of JNI); the code was not PC specific, it had to run on non-Windows embedded systems too. In the end the SIMD code was almost as fast as a copy operation, so adding one or two copy operations to it would be unacceptable.
Some people claim the best way to call native code from C# is by providing a C++/CLI wrapper, and maybe this is correct for situations with complex data structures (not sure), but I object to those anyway. KISS is one of my mottos, and with a KISS interface, C# + P/Invoke works great.
Hence my statement: "IMO the rumors of C++/CLI being much better at interop than C# are highly overrated."
Luc Pattyn [Forum Guidelines] [My Articles]
|
|
|
|
|
+5 for the links
|
|
|
|
|
Hi,
can somebody please tell me how to correctly resize controls?
I made a Windows Forms Control Library project for a simple "status line" control, with some text
for picture information (labels).
The control has to be dynamically resizable (e.g. when the user stretches the window of the parent form).
But I cannot figure out how to manage that.
I thought I could do it with the Scale function, but somehow only the panels get scaled (the text labels are on two
different panels -> upper and lower); the text size of the labels remains at its original size, so the text can no longer be shown correctly.
Why is that? I mean, when I scale a whole control, the Scale function should scale everything, shouldn't it?
I also tried the Resize event, so that I shrink/grow the labels manually, but that didn't work either.
And is there a way to find out from the Resize event how MUCH the window grew/shrank? I think you would need a
factor to pass the growth on somehow.
Maybe my mistake is just wrong property settings, but how would you pros manage that?
Thanks in advance,
cherry
|
|
|
|
|
|
I know about anchoring, and I used it.
But as I said, the control is anchored nicely, yet the labels (text) remain as big as they are.
There are so many attributes concerning "sizing" that maybe I always set one property incorrectly.
How do anchoring, docking, autosizing and scaling work together? I found no good descriptions of that
on the web.
I found that http://msdn.microsoft.com/en-us/library/ms171729.aspx[^]
You see, they write
"Note The Label control is the exception to this rule. When you set the value of a docked Label control's AutoSize property to true, the Label control will not stretch."
But how do I get the stretching then?
|
|
|
|
|
cherrymotion wrote: But how do I get the stretching then?
Why do you need a label control to stretch more than what is required to display the text?
|
|
|
|
|
Ok, I have it now. Something went wrong with my AutoSize properties.
But can anyone tell me how to find out from the Control.Resize event by which
factor the Form has been resized?
I have to calculate the space for my pictures and let them dynamically grow and shrink,
and for that I'm sure there is a way to get that factor, isn't there?
Otherwise maybe I should save a rectangle with the current form size and then set it
in relation to the "grown" parent window. But a "resize factor" would be much nicer I think...
Thanks for your answers; sometimes it is not easy
|
|
|
|
|
Hi,
The Form class already has four events regarding resizing: Resize, ResizeBegin, ResizeEnd, SizeChanged. That should cover it, I'd say.
However, you have to keep the current Size somewhere yourself in order to calculate the growth/shrink factor.
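Something along these lines (just a sketch; how you apply the factors to your labels and pictures is up to you):

```csharp
using System;
using System.Drawing;
using System.Windows.Forms;

class ScalingForm : Form
{
    Size previousSize;   // remembered size from before the resize

    public ScalingForm()
    {
        previousSize = Size;
        Resize += OnResized;
    }

    void OnResized(object sender, EventArgs e)
    {
        // compute the growth/shrink factors from the stored size
        float fx = (float)Width / previousSize.Width;
        float fy = (float)Height / previousSize.Height;
        previousSize = Size;
        // ... apply fx/fy to label bounds, font sizes, picture areas ...
    }
}
```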
Luc Pattyn [Forum Guidelines] [My Articles]
modified on Friday, May 22, 2009 3:48 PM
|
|
|
|
|
Hello everyone,
Does anybody know of a way to send events from one process to another in managed C++?
I found an article on the same topic in C++ but couldn't find one for managed C++.
Thanks,
Parth
|
|
|
|
|
Member 3273983 wrote: I found an article for the same in C++
Provide a link please. Also explain why that article does not work for you.
|
|
|
|
|
|
Member 3273983 wrote: I want to do it in managed C++ which uses the form class instead of CWin* classes.
Yes. C++/CLI allows you to use managed and native code in the same project. So you can in fact use the approach from that article in your managed code. If you don't know how to create a C++/CLI project that supports managed and native code then you need to go read the Introductory CLI articles here on CodeProject.
|
|
|
|
|
Thanks for the info. I will look up the tutorials.
-Parth
|
|
|
|
|
I had one more question: is the same thing possible with Java windows as the target?
|
|
|
|
|
Hook up!
><O>
><O>
><O>
Mark Salsbery
Microsoft MVP - Visual C++
|
|
|
|
|
|
I have a situation where I need to send a username and password to a website and get a confirmation back from it. The website takes the request in the form of a query string. The problem is that when I send the request I do not get the exact response; instead, sometimes I get the HTML code of the website along with the response, and sometimes I get the URL along with the response. Below is the code; can someone please help me with this?
mId is the username and mPwd is the password
try
{
    String^ lcUrl = "http://login.somewebsite.com/Authorization/";
    lcUrl = lcUrl + mId + "/" + mPwd;
    HttpWebRequest^ loHttp = (HttpWebRequest^) WebRequest::Create(lcUrl);
    String^ lcPostData = mId + "/" + mPwd;
    loHttp->Method = "POST";
    loHttp->Credentials = CredentialCache::DefaultCredentials;
    loHttp->ContentType = "application/x-www-form-urlencoded";
    // GetBytes allocates the array itself; no separate gcnew needed
    array<Byte>^ lbPostBuffer = System::Text::Encoding::UTF8->GetBytes(lcPostData);
    // ContentLength must be the byte count, not the character count
    loHttp->ContentLength = lbPostBuffer->Length;
    Stream^ loPostData = loHttp->GetRequestStream();
    loPostData->Write(lbPostBuffer, 0, lbPostBuffer->Length);
    loPostData->Close();
    HttpWebResponse^ loWebResponse = dynamic_cast<HttpWebResponse^>(loHttp->GetResponse());
    Encoding^ enc = System::Text::Encoding::GetEncoding(1252);
    StreamReader^ loResponseStream = gcnew StreamReader(loWebResponse->GetResponseStream(), enc);
    lcHtml = loResponseStream->ReadToEnd();
    loResponseStream->Close();
    loWebResponse->Close();
    MessageBox::Show(lcHtml, "Confirmation", MessageBoxButtons::OK);
}
catch (IOException^ exio)
{
    MessageBox::Show(exio->Message, "Error", MessageBoxButtons::OK);
}
catch (WebException^ webEx)
{
    MessageBox::Show(webEx->Message, "Error", MessageBoxButtons::OK);
}
catch (Exception^ ex)
{
    MessageBox::Show(ex->Message, "Error", MessageBoxButtons::OK);
}
return lcHtml;
Naveen
|
|
|
|
|
Hi All,
I am Savitri. I am new to this forum and also new to VC.NET.
I am confused about delegates and events in VC.NET. If anybody has documents and examples on this topic, please share them; I will read from those. I am not finding any examples in VC.NET, so I am asking you all. Please do the needful.
Thanks in advance.
Regards,
Savitri p
|
|
|
|
|