Comments by Kosta Cherry (Top 12 by date)
Kosta Cherry
18-May-16 12:40pm
Ok, there is no config file in the app, so no help there.
"when A.DLL don't require other assemblies, does .\Bin\Ext\A.DLL works" - yes, it does.
"Now, let's assume that .\Bin\B\B.dll and .\Bin\B.dll both work. What's wrong with it?" - nothing, except that what I'm doing is a proof of concept; there will be many more dependencies and interdependencies, so the client does not want to pollute .\Bin, and does not want to create an additional folder for each referenced assembly (it is estimated there will be over 100 of them).
When I say "works", it means that B.DLL gets loaded by A.DLL when needed.
Anyway, I just found a solution and will post it right now.
Kosta Cherry
18-May-16 11:54am
No, the application explicitly requires user extensions to be in AppName\Bin\Ext, not AppName\Bin. Basically, I cannot place A.DLL into AppName\Bin - it will not let me "register" it. Yeah, this is 3rd party, you know...
For the 3 items "works-works-doesn't work" I mentioned, A.DLL always resides in AppName\Bin\Ext. It just cannot reside anywhere else.
Of course, I could add a folder AppName\Bin\B, put B.DLL there and be done with it, or place B.DLL into AppName\Bin, but the end client does not approve of this. They want A.dll and B.dll to reside side by side in App\Bin\Ext, and I have no idea why the search is not done there.
I just tried SetDllDirectory, as mentioned above - it did not help at all. Pulling my hair out now :(
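For reference, this is not the solution the author eventually posted; it is just a minimal sketch of one common workaround for the "dependency next to the plugin is not found" situation, assuming A.DLL and B.DLL are native Win32 DLLs (the SetDllDirectory attempt suggests so) and that A.DLL either delay-loads B.dll (e.g. via /DELAYLOAD:B.dll) or binds to it dynamically, so that an init routine gets a chance to run before B.dll is first needed. The exported function name is hypothetical.

// Minimal sketch (assumptions: native DLLs; A.DLL delay-loads or
// dynamically binds to B.dll, so this can run before B.dll is needed).
// A.DLL loads B.DLL explicitly from its own folder (AppName\Bin\Ext),
// so the loader's default search order no longer matters.
#include <windows.h>
#include <cwchar>

extern "C" __declspec(dllexport) BOOL PreloadDependencies()
{
    // Find the full path of A.DLL itself (the module containing this code).
    HMODULE self = nullptr;
    if (!GetModuleHandleExW(GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS,
                            reinterpret_cast<LPCWSTR>(&PreloadDependencies),
                            &self))
        return FALSE;

    wchar_t path[MAX_PATH];
    if (!GetModuleFileNameW(self, path, MAX_PATH))
        return FALSE;

    // Replace "A.dll" with "B.dll", keeping the ...\Bin\Ext\ prefix.
    wchar_t* slash = wcsrchr(path, L'\\');
    if (!slash)
        return FALSE;
    wcscpy_s(slash + 1, MAX_PATH - (slash + 1 - path), L"B.dll");

    // Once B.dll is mapped into the process, later (delay-loaded or
    // GetProcAddress-based) references resolve to this copy.
    return LoadLibraryW(path) != nullptr;
}

On Windows 8+ (or Windows 7 with KB2533623), AddDllDirectory plus LoadLibraryEx with LOAD_LIBRARY_SEARCH_DEFAULT_DIRS is another option, but it only helps if it runs before the dependent import is resolved.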
Kosta Cherry
28-Mar-15 0:48am
I doubt that they will ever pay attention to the request of a stranger...
Kosta Cherry
12-Jan-15 23:42pm
Reason for my vote of 2: Test code is irrelevant to the target of the article.
Kosta Cherry
7-Nov-13 13:18pm
Yes, really. From your own answer: "The assembler generated was just about identical". And this is exactly the case I pointed out (see my original reply): "as in such case they are implemented as bit shifts or bit maskings". Division is optimised in one, and only one, case: "if the second operand is a constant power of two". If you make a habit of mindlessly using division, you are bound to hit a case where the second operand is NOT a power of two, and performance drops. It's not premature optimisation, it's a habit of writing code that is optimised from the beginning, instead of a brute-force approach and hoping that the compiler writers foresaw your case. The best solution here (performance-wise) was a union trick. Premature optimisation? Nope, just good code to begin with, and an experienced programmer who's "been there, done that".
Kosta Cherry
7-Nov-13 10:49am
The reason people use masks and shifts is that they are much, much faster than a division. You may hope that the optimiser will convert your div to a shift or a mask, but why not do the right thing to begin with, rather than hope that someone will correct you?
From here (http://en.wikibooks.org/wiki/Optimizing_C%2B%2B/Code_optimization/Faster_operations):
"The multiplication, division, and modulo operations between integer numbers are much faster if the second operand is a constant power of two, as in such case they are implemented as bit shifts or bit maskings."
Kosta Cherry
26-Sep-13 22:27pm
No difference in this case, and, as I mentioned, it's a fast, rough first cut. I also forgot "const" on the first argument, the first "for" loop can be improved, etc., etc. By no means is this the final variant; it's posted here just to illustrate the idea of the algo.
Kosta Cherry
26-Sep-13 20:13pm
It's not about math, it's about understanding the problem first, sorta "requirements gathering" from myself :). You know the old joke (as you are Russian too) about the professor who explained a subject to students so many times that he finally understood it himself? That's what happened to me here today :) Once I started to talk with CPallini, I was able to explain to myself what exactly I need, then came the algo idea, and the rest was very simple.
Kosta Cherry
26-Sep-13 20:02pm
Yeah, I did it differently, but you gave me the right direction to think in.
I'll post the resulting function as a solution in case anyone is interested.
Kosta Cherry
26-Sep-13 17:11pm
Yeah, but that's exactly the question I'm struggling with - what weight function w(x) to use, and how to define the {x0,x1} intervals so that they are logarithmically distributed. But thank you anyway - now I know where to look.
Kosta Cherry
26-Sep-13 16:25pm
Nope, it's not school algebra, and not an assignment either :)
It's a real-world problem. I have a data set - a very big one. It can be presented as an array, where the index is a second and the value is the data value for that second. I need to compress the data for future analysis, but in a special way - older data is less "influential", and newer data is more "important". So I decided to do logarithmic compression: the older the data is, the more it gets compressed. For example, for the most recent minute I want to keep all 60 values, but for a minute that was an hour ago I need just one value. For data from a day ago I need one value for the whole hour, and so forth.
The thing is, this is not exactly logarithmic, and averaging is not exactly the way to do it, because within one compression interval the values are also not equally important - the closer data is to the present, the more "weight" it should have. I mean, I can go with the idea I currently have, but what bugs me is this: it should be a pretty standard scaling algorithm, it's just that I'm already 20 years past university :(, never needed this before, and just don't know how to properly formulate the problem to google it. If someone can just point me to the right resource (preferably with a readily available algo, the language is not important), that would be very helpful.
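The following is only a rough sketch of the kind of scheme described above, not the algorithm the author eventually implemented: keep the most recent minute raw, then collapse ever-wider bins of older data into single weighted averages, where newer samples inside a bin count more. The exponential weight stands in for the w(x) mentioned earlier, and the growth factor is an arbitrary choice for illustration.

// Rough sketch (hypothetical parameters): bin widths grow exponentially
// with age; each older bin is reduced to one weighted average in which
// samples closer to "now" carry more weight.
#include <cmath>
#include <cstddef>
#include <vector>

// data[i] is the value for second i; data.back() is "now".
// 'growth' > 1 controls how fast bins widen going back in time.
std::vector<double> CompressLogarithmically(const std::vector<double>& data,
                                            double growth = 2.0)
{
    std::vector<double> compressed;
    std::size_t end = data.size();          // exclusive index; walk newest -> oldest

    // Keep the most recent minute uncompressed (one value per second).
    std::size_t rawCount = (end < 60) ? end : 60;
    for (std::size_t i = end - rawCount; i < end; ++i)
        compressed.push_back(data[i]);
    end -= rawCount;

    // Older data: each bin collapses to one weighted average.
    double width = 60.0;
    while (end > 0)
    {
        std::size_t binSize = static_cast<std::size_t>(width);
        std::size_t begin = (end > binSize) ? end - binSize : 0;

        double sum = 0.0, weightSum = 0.0;
        for (std::size_t i = begin; i < end; ++i)
        {
            double age = static_cast<double>(end - 1 - i);  // 0 = newest sample in bin
            double w = std::exp(-age / width);              // hypothetical weight w(x)
            sum += w * data[i];
            weightSum += w;
        }
        compressed.push_back(sum / weightSum);

        end = begin;
        width *= growth;                    // older bins cover ever longer spans
    }

    // Layout: the recent minute raw (in chronological order), then one
    // averaged value per ever-wider bin going back in time.
    return compressed;
}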
Kosta Cherry
26-Sep-13 14:16pm
Because in (any) left part the data values of y could be very different, and that whole part results in just one y value. Basically, imagine an ln graph, but rotated by 90 degrees.