|
Thanks for the link, Richard; sometimes my Google-fu seems off...
Uhm, long story short: there is a bunch of old men who define a standard I have to align to, so I use their libraries and software and put my stuff somewhere in between. This application is running on 4.0.3, and that is why I am targeting that. The app loads my code, then I load the base stuff as well, and all of that is compiled as .NET Standard 2.0.
Hence my idea of extracting this TLS / WebSocket bit into a "service / server" my old-fart code can communicate with. And I already found out that pipes work like a charm there.
Rules for the FOSW[^]
MessageBox.Show(!string.IsNullOrWhiteSpace(_signature)
? $"This is my signature:{Environment.NewLine}{_signature}": "404-Signature not found");
|
|
|
|
|
HobbyProggy wrote: This application is running on 4.0.3 ... compiled as 2.0 Standard
If you mean .NET Standard 2.0[^], that requires at least .NET Framework 4.6.1, and preferably 4.7.2 or higher.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Exactly... well, now I am really confused: my testing app is 4.7.2, but the version info of the official application is .NET 4.0.3. So in addition to not knowing 100% how all of that stuff works, I don't even know how this specific case works out!
*edit*
My test app also looks like .NET 4.0.3; or am I just dumb today?
Console output: DeviceModel: .NET Version 4.0.30319.42000
'TestApp.exe' (CLR v4.0.30319:TestApp.exe): Loaded
'C:\ProgramData\...\Device.dll'.
Rules for the FOSW[^]
MessageBox.Show(!string.IsNullOrWhiteSpace(_signature)
? $"This is my signature:{Environment.NewLine}{_signature}": "404-Signature not found");
modified 8-May-24 7:27am.
|
|
|
|
|
|
Interesting, but keep in mind that the code as written does not 'use' TLS 1.3. Rather, it attempts to specify it and then backs down if the system says it is not available.
I suspect one would also want to verify that something else is not doing a further fallback down the line. (Keep in mind my first post, where I mention that encryption algorithms might also be needed, which is determined only by the Windows OS.)
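To make the fallback visible rather than assumed, the negotiated protocol can be read off the stream after the handshake. A minimal sketch (the endpoint is a placeholder; `SslProtocols.Tls13` requires .NET Framework 4.8 or later plus OS support):

```csharp
using System;
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Authentication;

class TlsCheck
{
    static void Main()
    {
        // Placeholder endpoint; substitute the real server.
        using (var tcp = new TcpClient("example.com", 443))
        using (var ssl = new SslStream(tcp.GetStream()))
        {
            // Ask for TLS 1.3 but allow the stack to back down to 1.2.
            ssl.AuthenticateAsClient("example.com", null,
                SslProtocols.Tls13 | SslProtocols.Tls12,
                checkCertificateRevocation: true);

            // Verify what was actually negotiated instead of assuming TLS 1.3.
            Console.WriteLine($"Negotiated: {ssl.SslProtocol}");
        }
    }
}
```

Logging `ssl.SslProtocol` (or rejecting the connection when it is below your floor) is the simplest way to catch a silent downgrade further down the line.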
|
|
|
|
|
I downloaded and installed a package from NuGet.
When I compile the source code, the exe file cannot find the DLL file,
but after package installation the DLL appeared on disk.
How do I fix this without using Visual Studio or copying DLLs?
modified 5-May-24 2:29am.
|
|
|
|
|
Check your project references, and if they all look OK, check the folders: the compiled DLL needs to be in the "bin" folder under the relevant project. If you have the references set correctly, the required non-system DLL files will be built and then copied to the bin folders.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
I added a reference to the DLL at compile time and the executable was created, but the executable cannot find the DLL to load.
This program has two files: one .cs and one DLL.
I compiled it from the command line with the csc shipped with Windows
(Windows gives us .NET Framework 4.8 and C# up to 5.0).
(It is a simple program with one .cs file and one DLL file, so I don't want to create a project for it.)
Yes, copying the DLL to the program folder is a kind of solution, but it wastes disk space.
Such copying is unnecessary in my opinion; a better way would be, for example, choosing the path to the DLL for the executable.
I would like not to have to copy the DLL into the program folder of every program which uses it.
Right now I am trying to extract data from HTML, and I found a NuGet package for this.
Some time ago I found a rational-number class; I downloaded and installed it via NuGet, and the executable couldn't locate the DLL.
But when I compiled the DLL from sources, there was no problem with the executable finding and loading the DLL.
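For what it's worth, by default the CLR only probes the application base directory (and the GAC) for referenced assemblies, which is why the exe can't see a DLL sitting elsewhere. One way to load from a shared folder without copying is an `AppDomain.AssemblyResolve` handler; a sketch, where the shared folder path is made up:

```csharp
using System;
using System.IO;
using System.Reflection;

static class Program
{
    static void Main()
    {
        // Hypothetical shared folder holding DLLs used by several programs.
        const string sharedDir = @"C:\SharedLibs";

        // Called only when the CLR fails to find an assembly in the
        // usual probing locations (application base, GAC).
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            string file = new AssemblyName(args.Name).Name + ".dll";
            string candidate = Path.Combine(sharedDir, file);
            return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
        };

        Run(); // keep uses of the shared DLL out of Main, so the handler
               // is registered before the JIT triggers the load
    }

    static void Run()
    {
        // ... code that uses types from the shared DLL ...
    }
}
```

The other standard options are installing a strong-named DLL into the GAC, or an app.config `<probing privatePath="...">` element, though the latter only accepts subdirectories of the application base.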
|
|
|
|
|
Check the build parameters: if your exe is 64 bit and the DLL is 32 then it can't load it, and vice versa.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
Hello, I divided my solution for a third-party DLL in VS 2022 into
a) 1st.dll in project 1
b) 2nd.dll in project 2
Project 1 references project 2 and creates instances of objects based on classes in project 2.
Both DLLs are copied to the same folder, but only 1st.dll is loaded by an application to which I have no access except via an API used by my 1st.dll.
My understanding was that 2nd.dll is kind of linked into 1st.dll, so that it is not necessary in the same folder. I also want to ship only 1st.dll. But when I delete 2nd.dll from the path, my approach does not work.
Question:
What do I have to do so that it is sufficient to have only 1st.dll available on the customer side?
|
|
|
|
|
No, the DLLs are not "linked" in the way you seem to be thinking. 2nd.dll is NOT "linked into" 1st.dll. They remain separate DLLs when you compile them. You said it yourself: project 1 REFERENCES project 2, so the two DLLs must be shipped together.
You have a choice. You can rewrite and get rid of the second project: copy the code from project 2 into project 1, update the references and namespace using statements, rebuild, and you'll get a single 1st.dll file you can ship.
OR
You can try to use ILMerge[^] to combine both DLLs into a single file. You may or may not get away with doing this.
|
|
|
|
|
Hello & thank you for your reply. Is there a reason why in C#/.NET we do not have statically linkable libraries (.lib files) like in C/C++?
|
|
|
|
|
No, static linking is not directly supported. If you want to know why, ask Microsoft.
The closest approximation is to use ILMerge or similar. Not every library is compatible, though; WPF assemblies or code that uses Reflection, for example.
|
|
|
|
|
Imagine you have a switch with anything between 3 and a silly amount of comparisons, at which size does a lookup dictionary with delegates get faster?
I'm fully aware there are no exact answers to this, I just want some elaboration on what's affecting the performance.
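For reference, the dictionary-of-delegates shape being asked about might look like the sketch below (keys and handlers are invented). A lookup costs one hash plus one final equality check regardless of entry count, whereas a naive switch is a chain of comparisons; note that the C# compiler may itself lower large string switches into hash-based dispatch, so only profiling settles the crossover point.

```csharp
using System;
using System.Collections.Generic;

class Dispatcher
{
    // Ordinal comparer avoids culture-aware comparison overhead.
    static readonly Dictionary<string, Func<string, int>> Handlers =
        new Dictionary<string, Func<string, int>>(StringComparer.Ordinal)
        {
            ["alpha"] = s => s.Length,   // stand-in handlers
            ["beta"]  = s => -s.Length,
        };

    public static int Dispatch(string key, string payload) =>
        Handlers.TryGetValue(key, out var handler) ? handler(payload) : 0;

    static void Main()
    {
        Console.WriteLine(Dispatch("alpha", "hello")); // 5
        Console.WriteLine(Dispatch("gamma", "hello")); // 0 (no handler registered)
    }
}
```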
|
|
|
|
|
Let me add to that question - not using a lookup dictionary, but a 2D array:
I regularly see people claim that they program in a 'state machine' fashion (breaking numerous rules for state machine programming, but that's not the question here), essentially as a switch or sequence of if-else on the event, each switch/else alternative being another switch/if-else on the current state. I find that coding style terrible, impossible to maintain.
My coding style for state machines is creating a 2D array of Action and Output delegate references and a NextState value, possibly headed by a predicate delegate reference (so the array becomes a 2.5D one). In the worst case, three delegates must be called, plus one per failing predicate. In the simplest case, a single delegate is called (no predicate, no output).
Obviously, there is also the initial indexing of the state table on event and state, and if the entry has a chain of predicated alternatives, the code to iterate over them. This is part of the basic transition mechanism, unrelated to the specific table/transition.
This way of coding state machines has so many advantages that I will be very reluctant to change it. Yet I wonder: Is this indexing and delegate calling a CPU costly way of doing it, compared to nesting of switch / if-else in 2-3 levels? Are there performance pitfalls I should be aware of when indexing / calling delegates?
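For concreteness, a stripped-down sketch of the table-driven style described above (states, events, and actions are invented; a real table would also carry the predicate and output delegates mentioned):

```csharp
using System;

enum State { Idle, Running }
enum Event { Start, Stop }

class StateMachine
{
    // One transition: an optional action plus the next state.
    struct Transition
    {
        public Action Action;
        public State Next;
    }

    State _state = State.Idle;

    // Indexed [event, state]: the transition mechanism is one table
    // lookup plus at most one delegate call, independent of table size.
    readonly Transition[,] _table;

    public StateMachine()
    {
        _table = new Transition[2, 2];
        _table[(int)Event.Start, (int)State.Idle]    = new Transition { Action = () => Console.WriteLine("starting"), Next = State.Running };
        _table[(int)Event.Stop,  (int)State.Running] = new Transition { Action = () => Console.WriteLine("stopping"), Next = State.Idle };
        // Self-loops for the remaining cells; a real table fills every
        // cell explicitly rather than relying on defaults.
        _table[(int)Event.Stop,  (int)State.Idle]    = new Transition { Next = State.Idle };
        _table[(int)Event.Start, (int)State.Running] = new Transition { Next = State.Running };
    }

    public State Fire(Event e)
    {
        var t = _table[(int)e, (int)_state];
        t.Action?.Invoke();
        _state = t.Next;
        return _state;
    }
}
```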
Religious freedom is the freedom to say that two plus two make five.
|
|
|
|
|
Jörgen Andersson wrote: a switch with anything between 3 and a silly amount of comparisons
Presumably you mean cases.
Jörgen Andersson wrote: at which size does a lookup dictionary with delegates get faster?
At the point where I have profiled the application (not the code) with realistic production data and determined that the specific code is in fact a performance bottleneck.
At that point then I would look at the design not the code to determine if there was some completely different way to do it.
|
|
|
|
|
Interesting question, to which I wouldn't like to guess the answer, but here's what I'd consider.
Firstly, what is it that is being switched? If it's something primitive like an integer, the switch code would, I expect, boil down to a collection of CMP and BEQ instructions (compare and branch). These would be stupidly fast, and because they are consecutive in memory they are likely to benefit from CPU caching, so in that instance an awful lot of switch cases could be compared in the time of a single dictionary lookup.
If you are switching on strings, though, things get more complicated. To do the dictionary lookup, first the string needs to be hashed to give a bucket index, then an equality check is needed to make sure it matches. The switch statement doesn't need to do the hash, but has multiple equality checks to do, so I suspect the answer here boils down to the ratio of time taken to hash vs. time taken to do an equality check. So then you get into the realms of how similar the strings are. To check for equality, if the first character is different you can bail out and fail the test quickly, but if it's the last you have to go through every character before you can pass or fail the test.
It'd be interesting to profile this, but somehow the idea of creating a switch statement with hundreds or thousands of cases sounds unpleasant; I have no idea whether a compiler would accept it, and it would be completely impossible to work with.
Regards,
Rob Philpott.
|
|
|
|
|
Rob Philpott wrote: Firstly, what is it that is being switched?
It's strings.
Rob Philpott wrote: So then you get into the realms of how similar are the strings?
They are quite similar, I'm afraid, as the words I'm parsing start with the category.
So one of the larger switches will have 60+ words where the first difference is at the 19th position.
And as the files that will be parsed are between gigabytes and terabytes in size, it will probably be worth some optimization.
|
|
|
|
|
Jörgen Andersson wrote: So one of the larger switches will be 60+ words where the first difference is at the 19th position.
Perhaps you could break that down a bit? Validate that the input contains at least 19 characters, then switch on the 19th character to decide which path to take.
if (input.Length >= 19)
{
return input[18] switch
{
'A' => ProcessA(input.AsSpan(19)),
'B' => ProcessB(input.AsSpan(19)),
...
};
}
You could even do that with a list pattern[^], although the repeated discards would look quite messy.
return input switch
{
[_, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, 'A', ..] => ProcessA(input.AsSpan(19)),
...
};
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
modified 30-Apr-24 6:24am.
|
|
|
|
|
C# has gone a bit bonkers hasn't it? I still have to look up how to do this 'new' stuff. Although I do like being able to create empty arrays with [] etc.
Oh, and I think you're out-by-one: return input[18] switch
Regards,
Rob Philpott.
|
|
|
|
|
List pattern looks interesting
|
|
|
|
|
Ah, OK, so is it that you've got these large files to process and you're trying to optimise the switching (state changes) for speed? Which approach are you using at the moment (switch vs. array lookup, not dictionary; sorry, I just read your update)?
I suppose another difference is that switch statements are compile-time things, turned into code, whereas dictionaries are created and used at runtime. Does this mean the switch/state-change logic is fixed in advance?
I think the only way to tell really is to try both methods and see which is quicker (if noticeably so). What I would say is that I'd expect them both to be fast, so is this really the bottleneck to performance gains, or could something else be optimised? Multithreading/pipelining etc. TPL Dataflow (if you're in .NET) is good for this.
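As a sketch of the TPL Dataflow idea, assuming the System.Threading.Tasks.Dataflow NuGet package and with a stand-in parse step (real code would do the actual state-machine work inside the transform):

```csharp
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class Pipeline
{
    static async Task Main()
    {
        // Parse lines in parallel; TransformBlock preserves input order
        // by default even with multiple worker threads.
        var parse = new TransformBlock<string, int>(
            line => line.Length, // stand-in for the real parsing step
            new ExecutionDataflowBlockOptions
            {
                MaxDegreeOfParallelism = Environment.ProcessorCount
            });

        var sink = new ActionBlock<int>(n => Console.WriteLine(n));

        parse.LinkTo(sink, new DataflowLinkOptions { PropagateCompletion = true });

        foreach (var line in new[] { "alpha", "beta" })
            await parse.SendAsync(line);

        parse.Complete();
        await sink.Completion; // drain the pipeline before exiting
    }
}
```

The appeal here is that reading, parsing, and writing become separate stages that overlap, which often helps more on GB-to-TB files than micro-optimising the dispatch itself.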
Regards,
Rob Philpott.
|
|
|
|
|
Rob Philpott wrote: so is it that you've got these large files to process and you're trying to optimise the switching (state changes) for speed
Indeed.
Rob Philpott wrote: Which approach are you using at the moment
I've set up the parsing using switches just to make sure it works, but it's painfully slow, so I'm looking at refactoring it at the moment.
Rob Philpott wrote: Does this mean the switch/state change logic is fixed in advance?
This is where it gets funny.
In theory, yes. But changes might happen every now and then.
These files are supplied by a government entity. And while we're allowed to get the data (which is actually only a subset), we're not allowed to see the documentation (no, really).
And they can't be bothered to make separate documentation for our subset (without us paying an extortionate fee, that is).
So I've added logic that tells me when they've added or removed attributes.
Oddly enough, I'm having fun tinkering with these files, mostly.
Multithreading is the next logical step, but I want to get as far as possible without using brute force before that.
|
|
|
|
|
I believe I might have a generic answer to my question. From the documentation:
"This is a simple implementation of IDictionary using a singly linked list. It is smaller and faster than a Hashtable if the number of elements is 10 or less. This should not be used if performance is important for large numbers of elements."
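That description matches System.Collections.Specialized.ListDictionary, which is used like any other IDictionary; a quick sketch:

```csharp
using System;
using System.Collections.Specialized;

class Demo
{
    static void Main()
    {
        // ListDictionary: IDictionary backed by a singly linked list,
        // intended for roughly 10 or fewer entries. It predates generics,
        // so keys and values are stored as object (value types get boxed).
        var d = new ListDictionary();
        d["one"] = 1;
        d["two"] = 2;
        Console.WriteLine(d["one"]); // 1
        Console.WriteLine(d.Count);  // 2
    }
}
```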
|
|
|
|
|
Maybe. That thing is old, predating generics, so there might be some boxing overhead depending on what you stick in it, unless they've done a generic version of it.
It's hard to comment from this distance, but if the state machine might change, surely it's better to model it at runtime so you just need to adjust some static data rather than go back to source...
Profiling is always a good option, to see where the bottlenecks lie. Anyway, best of luck!
Regards,
Rob Philpott.
|
|
|
|
|