|
Hmm, unless my math is wrong, per Double-precision floating-point format - Wikipedia[^]:
Quote: Integers from −2^53 to 2^53 (−9,007,199,254,740,992 to 9,007,199,254,740,992) can be exactly represented.
US national debt is around $33.17T = 33,170,000,000,000. Seems you can accurately represent the US national debt down to a cent using just a regular double-precision number.
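If you want to see the cutoff for yourself, here's a quick sketch (plain C++, the figures are mine):
```
#include <cstdio>

int main()
{
    // 2^53 is the last point below which every integer is exact in a double.
    const double limit = 9007199254740992.0;           // 2^53
    printf("%.0f + 1 -> %.0f\n", limit, limit + 1.0);  // still prints 9007199254740992

    // The debt expressed in whole cents fits comfortably below that limit.
    const double debt_cents = 3317000000000000.0;      // $33.17T in cents
    printf("debt in cents is exact: %s\n", debt_cents < limit ? "yes" : "no");
    return 0;
}
```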
Mircea
|
|
|
|
|
Depends on your precision. I mentioned earlier I want to store up to the sixth decimal place (one thousandth of a mill) and because of that my available numbers are smaller.
Jeremy Falcon
|
|
|
|
|
Even if I had less precision, it's only a factor of 300 times difference anyway, which while it would work... isn't really something I'd consider forward thinking to ensure some weird spike doesn't screw up the system.
I take my numbers seriously.
Jeremy Falcon
|
|
|
|
|
Now, if you take Carlo's idea of using 64 bit integers, your range becomes −2^63 to 2^63 or ±9.2E18. That gives you 5 decimal places for the US national debt. If you can live with unsigned numbers your range becomes 0 to 1.8E19, which gets you close to, but not quite, 6 decimal places for numbers the size of the US national debt.
System design is finding the least bad compromise, so only you will know if the complication of using some fancy math library is justified or not in your case.
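For what it's worth, a back-of-the-envelope sketch (mine, not part of the discussion above) of how much headroom a signed 64-bit integer leaves at each fixed-point scale:
```
#include <cstdint>
#include <cstdio>

int main()
{
    // Largest dollar amount that still fits when amounts are stored as a
    // signed 64-bit count of a fixed fraction of a dollar.
    const long long max_signed = INT64_MAX;             // ~9.22E18
    printf("5 decimals (1/100,000 dollar):   up to $%lld\n",
           max_signed / 100000LL);                      // ~$92 trillion
    printf("6 decimals (1/1,000,000 dollar): up to $%lld\n",
           max_signed / 1000000LL);                     // ~$9.2 trillion
    return 0;
}
```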
Mircea
|
|
|
|
|
I think peeps assume I'm a total n00b just because I'm asking for folks' opinions. Using an integer was the first thing I mentioned. I promise you I know of 64-bit ints.
The question was has anybody used 128-bit ints and noticed a serious performance hit. The only reason I mentioned all three original ways is because I knew someone would come along and tell me something unrelated.
Also, that's grossly oversimplifying system design. I've architected plenty of enterprise apps in my day. Future proofing is also a consideration.
So, to repeat man… the question isn’t what’s an int. It’s how fast is a 128-bit int for those who actually used it in a project. If someone has a better way to store currency than the four ways already mentioned (including fixed point) - great.
Jeremy Falcon
|
|
|
|
|
Addendum
I have settled on the following, still convoluted, code (the example is for converting the MSB).
I would like to find out, and discuss, the usage of "toLongLong".
QString binaryNumber = QString::number(hexadecimalNumber.toLongLong(&ok, 16), 2).rightJustified(4,'0').leftJustified(8,'0');
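To show the toLongLong part on its own, here is a minimal sketch (the variable names are mine, and it pads the whole byte rather than each nibble):
```
#include <QString>
#include <QDebug>

int main()
{
    bool ok = false;
    const QString hexPair = "F9";                    // one byte as two hex digits

    // toLongLong(&ok, 16) parses the string as base 16; ok reports success.
    const qlonglong value = hexPair.toLongLong(&ok, 16);

    // QString::number(value, 2) gives the binary digits without leading zeroes;
    // rightJustified(8, '0') pads on the left to a full 8-bit field.
    const QString binary = QString::number(value, 2).rightJustified(8, '0');

    qDebug() << hexPair << "->" << binary << "ok =" << ok;   // "F9" -> "11111001"
    return 0;
}
```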
Up front
I am very sorry to reopen this post.
For information, I am leaving the original post (code) below.
I have an additional issue I need help with.
This code snippet correctly converts the string "42" to binary code "01000000".
I do not need more help with that conversion, BUT I need help converting when the string contains a hexadecimal value, such as "F9".
Changing the "toInt(), 2)" options to "toInt(), 16)" does not do the job.
I realize that the Qt code is a little convoluted, and for this reason may I suggest that only coders with experience of Qt take a look at this? It is not an instruction on how to reply, just a suggestion.
pFT857_library->CAT_Data_String[CAT_Data_Index].mid(0,1).number(pFT857_library->CAT_Data_String[CAT_Data_Index].mid(0,1).toInt(),2).leftJustified(7,'0');
Solution :
pFT857_library->CAT_Data_String[CAT_Data_Index].mid(0,1).number(n,2).rightJustified(4,'0');
Output :
" convert LSB to binary with leading zeroes "
"0001"
I have a Qt-style string QString frequency = "1426000", split into another QString as
QString frequency_array = "01 42 60 00".
I need to convert each pair to 8-bit binary, with the MSD as the upper 4 bits of the 8-bit word AND the LSD as the lower 4 bits of the 8-bit word.
For debugging purposes I like to print each step of the conversion. As an example, I would like to see "42" as "01000010".
I prefer Qt QString in C++ code.
Here is my code so far:
```
for (int index = 0; index < 6; index++)
{
    text = pFT857_library->CAT_Data_String[index];   // current two-character pair
    m_ui->lineEdit_14->setText(text);
    qDebug() << text;

    text = text.mid(0, 1).toLocal8Bit();             // first character (MSD)
    qDebug() << text;

    text = pFT857_library->CAT_Data_String[index];
    text = text.mid(1, 1).toLocal8Bit();             // second character (LSD)
    qDebug() << text;
}
```
The above WAS my initial post / code, and I have dropped that post.
My current code is a little over-documented, so I am hesitant to post it.
However,
I have a (simple) question.
Using the following snippet:
int n = pFT857_library->CAT_Data_String[CAT_Data_Index].toInt();
text = pFT857_library->CAT_Data_String[CAT_Data_Index].number(n,2);
qDebug() << text;
I can visualize the binary representation of the string - that is partially my goal.
My question is - how do I visualize the FULL 4 bits of the desired info?
"number" with option "2" "prints" all valid bits, BUT I need the full length of 4 bits - including "leading zeroes".
Example: "number" prints "100" for decimal 4; I need "0100" - the full 4 bits.
modified 6-Sep-24 15:33pm.
|
|
|
|
|
What is pFT857_library, as that appears to be the code you are using? But if you want a simple method then you could easily write a short loop that prints the value of each of the 4 or 8 bits one by one.
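Something along these lines (plain C++, untested against your Qt types) is what I mean:
```
#include <cstdio>

// Print the low 'width' bits of 'value', most significant bit first.
void print_bits(unsigned value, int width)
{
    for (int bit = width - 1; bit >= 0; --bit)
        putchar((value >> bit) & 1 ? '1' : '0');
    putchar('\n');
}

int main()
{
    print_bits(0x4, 4);    // prints 0100
    print_bits(0x42, 8);   // prints 01000010
    return 0;
}
```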
|
|
|
|
|
This is a Qt issue as far as using the toInt function. And as the documentation (QString Class | Qt Core 5.15.17[^]) clearly shows, it handles numbers in any base from 2 to 36.
modified 4-Sep-24 6:16am.
|
|
|
|
|
I'm maintaining old code.
I can't use enum class.
Do you wrap your enums in a namespace to kinda simulate enum class?
Or is there a pattern I don't know about to make regular enums safer?
Thanks.
CI/CD = Continuous Impediment/Continuous Despair
|
|
|
|
|
You don't need to go as far as a namespace, just a struct will do.
struct Color {
    enum value { Red, Yellow, Blue };
};

int main()
{
    Color::value box = Color::value::Red;
}
If you want to be able to print Color::Red as a string, it's a bit more involved
#include <iostream>
#include <string>

struct Color {
    enum hue { Red, Yellow, Blue } value;

    std::string as_string() const {
        std::string color;
        switch (value) {
            case Red    : color = "Red";    break;
            case Yellow : color = "Yellow"; break;
            case Blue   : color = "Blue";   break;
        }
        return color;
    }

    Color(Color::hue val) : value(val) {}

    bool operator==(const Color& other) const {
        return value == other.value;
    }

    friend std::ostream& operator<<(std::ostream& os, const Color& color);
};

std::ostream& operator<<(std::ostream& os, const Color& color)
{
    os << color.value;
    return os;
}

int main()
{
    Color x = Color::Red;
    Color y = Color::Blue;
    std::cout << x << '\n';
    std::cout << x.as_string() << '\n';
    if (x == y)
        return 1;
    else
        return 0;
}
"A little song, a little dance, a little seltzer down your pants"
Chuckles the clown
modified 29-Aug-24 11:50am.
|
|
|
|
|
ahhhh yes, I've seen that before.
thanks.
CI/CD = Continuous Impediment/Continuous Despair
|
|
|
|
|
Last time I did hardcore C was a while back. Before the 128-bit days. Ok, cool. But, I got a silly question when it comes to printing a 128-bit integer. You see online examples saying just do a long long cast and call it a day bro, but then they use small numbers, which obviously works for them because it's a small number.
But, I figure hey, I'll try it for poops and giggles. As you might expect, the number is never correct.
#include <stdio.h>
int main()
{
__uint128_t u128 = 34028236692093846346337460743176821145LL;
printf("%llu\n", (unsigned long long)u128);
return 0;
}
The above don't do it. Now, I could bit shift to get around any limits, but I ultimately need the output formatted with comma separators, so doing bit logic would cause issues when putting humpty dumpty back together again.
So, um... anyone know how to print a 128-bit number in C? Preferably portable C.
Jeremy Falcon
modified 28-Aug-24 17:19pm.
|
|
|
|
|
As far as I know, the C standard does not specify anything about 128 bit integers. __uint128_t is a GCC extension, also supported by clang. Maybe MSVC too? As far as I know, there's no printf format code for 128 bit integers either. See here for a possible solution: https://stackoverflow.com/questions/11656241/how-to-print-uint128-t-number-using-gcc
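Along the lines of the answers on that page, here's a sketch of the usual workaround (it still relies on the GCC/clang extension, so it's not portable C, and the helper is mine):
```
#include <cstdio>
#include <string>

// Convert an unsigned 128-bit value to decimal by repeatedly dividing by 10.
std::string to_decimal(__uint128_t value)
{
    if (value == 0)
        return "0";
    std::string digits;
    while (value != 0) {
        digits.insert(digits.begin(), static_cast<char>('0' + static_cast<int>(value % 10)));
        value /= 10;
    }
    return digits;
}

int main()
{
    // Literals that large can't be written directly, so build the value from 64-bit halves.
    const __uint128_t u128 = (static_cast<__uint128_t>(0x1234567890ABCDEFULL) << 64)
                             | 0xFEDCBA0987654321ULL;
    printf("%s\n", to_decimal(u128).c_str());
    return 0;
}
```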
"A little song, a little dance, a little seltzer down your pants"
Chuckles the clown
|
|
|
|
|
Yeah that's what I was afraid of. If it wasn't for the formatting, I'd just slice it up as two 64s to be portable for printf. I may just have to roll my own formatting for something I'm working on.
I mean, I could just forget about 128-bit support, but no cool points for that.
Thanks btw.
Jeremy Falcon
|
|
|
|
|
|
Never even heard of it. Does it handle localization? If so, you're totally my hero.
Jeremy Falcon
|
|
|
|
|
It should. Per above reference:
Quote: The decimal point character (or string) is taken from the current locale settings on systems which provide localeconv (see Locales and Internationalization in The GNU C Library Reference Manual). The C library will normally do the same for standard float output.
Mircea
|
|
|
|
|
Schweet. I'll have to check it out. Thanks man.
Jeremy Falcon
|
|
|
|
|
One is glad to be of service.
Mircea
|
|
|
|
|
Hi,
I want to create my own border in the client area of a dialog and be able to freely move it on the parent.
Can you plz suggest any reference?
|
|
|
|
|
Your question is not very clear. You can easily add a border to a dialog with a static control. And what do you mean by "and could freely move it on the parent"?
|
|
|
|
|
You'll have to draw a frame manually (CDC::Rectangle) and handle all the mouse click events to be able to move it.
What exactly are you trying to do?
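If it helps, here's a rough sketch of that approach (assuming an MFC dialog project; the class name and control ID are made up): a child window that paints its own rectangle and drags itself around the parent's client area.
```
#include <afxwin.h>

// Hypothetical frame drawn and dragged inside a dialog's client area.
class CMovableFrame : public CWnd
{
public:
    BOOL Create(CWnd* pParent, const CRect& rect)
    {
        return CWnd::Create(AfxRegisterWndClass(0), _T(""),
                            WS_CHILD | WS_VISIBLE, rect, pParent, 1234 /* made-up ID */);
    }

protected:
    CPoint m_dragStart;          // cursor position where the drag began
    bool   m_dragging = false;

    afx_msg void OnPaint()
    {
        CPaintDC dc(this);
        CRect rc;
        GetClientRect(&rc);
        dc.SelectStockObject(NULL_BRUSH);    // hollow: only the border gets drawn
        dc.Rectangle(&rc);
    }

    afx_msg void OnLButtonDown(UINT, CPoint point)
    {
        m_dragging = true;
        m_dragStart = point;
        SetCapture();                        // keep receiving mouse moves while dragging
    }

    afx_msg void OnMouseMove(UINT, CPoint point)
    {
        if (!m_dragging)
            return;
        CRect rc;
        GetWindowRect(&rc);
        GetParent()->ScreenToClient(&rc);    // frame rect in the parent's client coordinates
        rc.OffsetRect(point.x - m_dragStart.x, point.y - m_dragStart.y);
        MoveWindow(&rc);                     // move the frame within the parent
    }

    afx_msg void OnLButtonUp(UINT, CPoint)
    {
        m_dragging = false;
        ReleaseCapture();
    }

    DECLARE_MESSAGE_MAP()
};

BEGIN_MESSAGE_MAP(CMovableFrame, CWnd)
    ON_WM_PAINT()
    ON_WM_LBUTTONDOWN()
    ON_WM_MOUSEMOVE()
    ON_WM_LBUTTONUP()
END_MESSAGE_MAP()
```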
CI/CD = Continuous Impediment/Continuous Despair
|
|
|
|
|
A binary file is a file that contains bits. Every 32 bits make a number (several digits/symbols with no space between them) or a word. A number can then be transformed and become an integer (several digits/symbols) or a char (just one symbol). Character set (Unicode for example) has to do with how a word of bits becomes a char. Is that how it works?
[edit] A text file is an inefficient way to store numbers because each digit is one char
modified 26-Aug-24 9:00am.
|
|
|
|
|
Calin Negru wrote: Is that how it works?
yes, more or less.
Every file is just a series of bits.
It's up to the user (programmer) to interpret how the series of bits is converted to something practical (text, numbers, ... )
CI/CD = Continuous Impediment/Continuous Despair
|
|
|
|
|
Generally speaking, all files contain bytes. What interpretation you put on those bytes is up to you. A file containing the hex bytes 61 62 63 64 (without spaces) might be interpreted as a 32 bit integer of value 1684234849 (assuming little endian byte ordering) or the 4 characters abcd. Interpretation is everything.
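A tiny sketch of that same point (mine), reading the same four bytes back both ways:
```
#include <cstdint>
#include <cstring>
#include <cstdio>

int main()
{
    const unsigned char bytes[4] = { 0x61, 0x62, 0x63, 0x64 };   // what the file holds

    // Interpretation 1: a 32-bit integer.
    std::uint32_t as_int = 0;
    std::memcpy(&as_int, bytes, sizeof as_int);
    printf("as integer: %u\n", as_int);   // 1684234849 on a little-endian machine

    // Interpretation 2: four characters of text.
    printf("as text: %.4s\n", reinterpret_cast<const char*>(bytes));   // abcd
    return 0;
}
```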
Text files may be slower than binary files to read/write, but they do have the advantage of being processor agnostic. For example in 32 bit mode, structs have different padding on ARM and x86, so given
struct S { /* ... */ };
If you have a data file containing an array of struct S, you can't just copy the data file from an x86-32bit system to an ARM-32bit system and assume that the offsets for the members are going to match. That's also true for x86-32 to x86-64. Even if the struct members don't have different sizes (e.g. a long may have 32 bits or 64 bits), they may have different padding requirements between 32 and 64 bit systems.
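For example (the members here are invented, just to show the effect):
```
#include <cstdio>

struct S {
    char   c;
    double d;   // alignment of double differs between ABIs, so the padding after 'c' differs
};

int main()
{
    // Typically 12 on 32-bit x86 (double aligned to 4) but 16 on 32-bit ARM and on
    // x86-64 (double aligned to 8), so the on-disk layouts don't line up.
    printf("sizeof(S) = %zu\n", sizeof(S));
    return 0;
}
```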
Then there's the whole little endian vs big endian situation.
But a text file can be read by any system, without any conversion routines.
"A little song, a little dance, a little seltzer down your pants"
Chuckles the clown
|
|
|
|
|