|
It seems to me that there could be a type (implemented in your favorite language that has classes, including operator overloading) that would store a number as a rational number. This type would be a linked list of prime factors and their exponents (e.g., in C++, the prime factor would be of type "unsigned long", with the exponent being "long"). This would work because any decimal (or hexadecimal) input is itself a rational number, so any sequence of operations would be exact; e.g., the result of (213.56 - 45.41) would be 168.15 and not 168.14999999, etc.
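As a sanity check on the exactness claim, here is a small sketch using Python's standard-library `fractions.Fraction` (which stores a numerator/denominator pair rather than a prime-factor list, but shows the same exact behavior):

```python
from fractions import Fraction

# Any finite decimal input is an exact rational: "213.56" is 21356/100.
a = Fraction("213.56")
b = Fraction("45.41")

diff = a - b
print(diff)         # 3363/20, i.e. exactly 168.15
print(float(diff))  # 168.15, not 168.14999999...
```

One design note on the prime-factorization layout specifically: it makes multiplication and division cheap (just add or subtract exponents), but addition and subtraction expensive, because the sum has to be re-factored into primes.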
|
|
|
|
|
And storage of Pi would be?
Sent from my Amstrad PC 1640
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
Again he beats me to it!
... such stuff as dreams are made on
|
|
|
|
|
My stomach. I like pie.
This space for rent
|
|
|
|
|
So do I! I have the last of my homemade Pork Pie for supper tonight with a salad (with ice cream and a fruit coulis to follow).
|
|
|
|
|
Three significant digits should be enough for anybody.
|
|
|
|
|
Quote: 640K ought to be enough for anybody
O...Kay!
|
|
|
|
|
As a rational number? I believe 22/7 comes close, but if you want, you can find even closer rational numbers.
pi isn't a rational number.
With irrational numbers, there would be roundoff problems, although small ones. Most of our everyday problems end up as rational numbers. It isn't difficult to implement a rational number type with modern OO languages.
|
|
|
|
|
I suppose that there could be some setting in this RationalNumber class that would allow arbitrarily extended computation of numbers like pi & e, if the client so desired.
Pi could use the Chudnovsky algorithm. e would be pretty straightforward too.
And here's a quickie:
1833616417 / 583658233
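Approximations like these are what continued-fraction convergents produce; as a quick illustration, Python's stdlib `Fraction.limit_denominator` finds the best rational below a given denominator bound:

```python
from fractions import Fraction
import math

# Best rational approximations of (the double closest to) pi,
# for increasing denominator bounds.
for bound in (10, 1000, 10**6):
    approx = Fraction(math.pi).limit_denominator(bound)
    print(approx, float(approx))
# Bounds 10 and 1000 give the classics 22/7 and 355/113.
```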
|
|
|
|
|
That wouldn't make pi a rational number, though. It would be misleading to provide pi in a class named something like rational_number - unless you called it not pi, but pi_approximated_as_a_rational_number.
A (true) story about pi, I believe it is from around 1980 - I was told the story around 1983 by a guy who had participated in the hunt for The True pi:
At Bergen University, Norway, one professor teaching numerical methods and error propagation had his students estimate the error expected in some transcendental functions in difficult number ranges, and verify it on the University's new shining IBM 3080 mainframe. The students came back and reported significantly larger errors than their estimates suggested. The surprised professor set out to find the cause of this.
It turned out that the IBM 3080 Fortran libraries were carbon copies of the 370 libraries. Which were carbon copies of the 360 libraries. The 360 got its libraries (in assembler format, of course, with floating point constants in hexadecimal format) as an adaptation of the old 7090 libraries - machines with a different instruction set and 36 bit word length (rather than the 360's 32 bits). Calculating a binary representation of pi anew would have had to be set up as a separate job. They didn't do that; they just chopped 4 bits off the 7090 binary floating point mantissa for the pi value, ignoring rounding. So the least significant bit, which should have been rounded up to a 1, remained a 0. The professor's theoretical error estimates were based on a properly rounded, not a truncated, pi value.
This truncated 7090 binary pi value from the end of the 1950s was inherited all the way up to the 3080 series, more than 20 years later. When it was discovered, and the least significant bit rounded up to a 1, the theoretical error estimates matched the observed errors more or less perfectly.
|
|
|
|
|
Don't be irrational, π would be automatically promoted to 'floating point error'.
|
|
|
|
|
Still wouldn't help electrical engineers whose work is based on e and i. That said, there are a lot of good reasons to support some sort of base 10 fractional numbers to avoid the rounding errors that result from converting base 2 fractionals to base 10 fractionals.
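The base-10 point is easy to demonstrate with Python's stdlib `decimal` module, which does decimal floating point and so avoids the base-conversion errors (though not, of course, irrationals):

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 or 0.2 exactly, so their sum
# picks up base-conversion error; decimal floating point does not.
print(0.1 + 0.2)                        # 0.30000000000000004
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```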
|
|
|
|
|
Most languages do have libraries for such types. But they would not solve irrationals such as pi or sqrt(2).
|
|
|
|
|
1/3 = ?
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
A perfectly rational number.
|
|
|
|
|
You may also consider a continued fraction representation, which can also be calculated with, and if you allow them to be "generators" they can even represent some otherwise annoying numbers such as e, phi and irrational square roots.
But this also has various problems. For example, when subtracting "accidentally equal" numbers (i.e., two generators that are not the same instance and not binary equivalent, but generate the same list), it takes infinitely many steps to find out that there is never a difference, and this delays the generation of the "head" of the result generator forever (i.e., there is no answer).
Working with finite CFs necessarily means that some numbers cannot be represented any more, since they get truncated at some point.
By the way, floating point arithmetic is more exact than many people give it credit for. It doesn't always have to round: for example, subtracting two numbers of equal sign that are within a factor of 2 of each other yields the exact difference between them (the Sterbenz lemma). A large part of the opposite perception is the great Decimal Bias of the modern world (at other times in history it would have been a Sexagesimal Bias), but there is nothing inherently special about a base ten system.
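The Sterbenz lemma is easy to check empirically; this quick sketch uses exact rational arithmetic as the referee (an illustration, not a proof):

```python
import random
from fractions import Fraction

# Sterbenz lemma: for floats a, b of the same sign with b/2 <= a <= 2*b
# (and no underflow), the computed a - b is the exact difference.
random.seed(0)
for _ in range(10_000):
    b = random.uniform(1.0, 2.0)
    a = random.uniform(b / 2, 2 * b)
    # Convert both to exact rationals and compare with float subtraction.
    assert Fraction(a) - Fraction(b) == Fraction(a - b)
print("no rounding observed in 10,000 trials")
```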
|
|
|
|
|
|
It just gave me a hiss in return. Made no sense, except for making me scared.
|
|
|
|
|
Well... you will quickly get overflow after some simple operations ( + - * / )...
Not to mention that some numbers can't be represented as rational numbers. An uncountable infinity of them!
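The overflow concern is real: with exact rationals, numerators and denominators can grow geometrically. As an illustration, Newton's iteration for sqrt(2), run on Python's arbitrary-precision `Fraction`, roughly doubles the denominator's digit count every step, so fixed-width integers (like C's unsigned long) would overflow within a handful of operations:

```python
from fractions import Fraction

# x_{n+1} = (x_n + 2/x_n) / 2, starting from 1: converges to sqrt(2),
# but the exact denominators are 2, 12, 408, 470832, 627013566048, ...
x = Fraction(1)
for i in range(5):
    x = (x + 2 / x) / 2
    print(i, x.denominator, "-", x.denominator.bit_length(), "bits")
```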
|
|
|
|
|
|
Already done: check Rationals @ http://mathpaqs.sourceforge.net/.
It's even generic, so you can use whatever type you want for a, b in the fractions a/b: integers, big integers, polynomials, ...
|
|
|
|
|
When we were CompSci students around 1980, ready-made packages were not readily available, so a group of us created an arbitrary precision arithmetic Fortran package for a friend in theoretical physics:
This fellow was working on an analytical model to describe, at the micro level, what happens when two wave fronts collide head on. When applying this model in a simulation, even the Univac 1100 double precision 72 bit floating point caused discontinuities - more or less perfect square wave forms, which none of us really believed existed in the physical world, certainly not in liquids... So we gave him a library where we set up buffers for 200 decimal digits. (For character I/O, BCD was much better suited than a 600+ bit pure binary value - speed of calculation was not of any importance.)
Even with 600+ bits of precision, he experienced too large discontinuities. Then we realized that he was adding up the elements of a series expansion from the head of the series, from the big towards the smaller elements. This led us into a huge argument with him: as I said, he was in theoretical physics, and simply refused to accept that the order of adding together series elements would make a difference. We were several guys, over a period of many days, trying to explain, each in our own way, why "in theory, theory and practice are identical, but in practice they are not". Finally, he gave in and turned the addition around the other way. The waves came out so smooth that we silently suspected the Univac 72 bit format would have been sufficient, if he had just, from the very beginning, added the smallest elements first.
There was an intense discussion in academia around that time about whether Computer Science is a science requiring its own scientists to do computer related jobs, or whether every engineer, mathematician or physicist should learn to handle a computer himself. I have several times used this story to argue that scientists getting close to a computer must learn enough about them to avoid silly problems caused by a poor understanding of how computers operate.
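The physicist's mistake can be reproduced in miniature in any binary floating-point system. This toy example (one huge term followed by many tiny ones, not his actual series) shows why the summation order matters:

```python
# One term of 1e16 followed by a hundred terms of 1.0.
terms = [1e16] + [1.0] * 100

big_first = 0.0
for t in terms:
    big_first += t      # each 1.0 is lost to rounding (ulp of 1e16 is 2)

small_first = 0.0
for t in reversed(terms):
    small_first += t    # the 1.0s accumulate to 100.0 before meeting 1e16

print(big_first)        # 1e+16
print(small_first)      # 1.00000000000001e+16  (the exact sum)
```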
|
|
|
|
|
It says this project has no files.
|
|
|
|
|
Well, it depends...
First, rational numbers would slow things down.
But if you want accuracy in math, I think the answer is to use integers with IMPLIED Decimals.
Some currency types do this with 4 digits of implied decimal accuracy.
Personally, I have never seen the need for more than this, unless I was using MATLAB or something where maintaining 1/3 exactly is important to backing out of the problem properly.
Finally, you could always implement your own classes to behave properly. Good luck with that, and the resulting speed.
Kirk Out!
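The implied-decimal idea above can be sketched in a few lines; the `to_units` helper and the four-place scale are illustrative assumptions, and for brevity it only handles non-negative amounts:

```python
SCALE = 10_000  # four implied decimal places, as in some currency types

def to_units(s: str) -> int:
    """Hypothetical helper: parse a non-negative decimal string like
    '213.56' into an integer count of ten-thousandths (2135600)."""
    whole, _, frac = s.partition(".")
    return int(whole) * SCALE + int(frac.ljust(4, "0") if frac else "0")

a = to_units("213.56")   # 2135600
b = to_units("45.41")    # 454100
diff = a - b             # 1681500: pure integer arithmetic, no rounding
print(f"{diff // SCALE}.{diff % SCALE:04d}")  # 168.1500
```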
|
|
|
|
|
Kirk 10389821 wrote: "First, rational numbers would slow things down."
First, in quite a few applications, that doesn't matter. Whether the calculations are done in 10 or 20 milliseconds doesn't matter if you prepare a value that is displayed in an interactive dialog.
Second, if arithmetic calculation is only a small part of the handling, formats other than traditional floating point may be faster in the other operations. Take an archetypical Cobol application: it reads in, moves around and displays values in counts and dollars and cents, every now and then doing a multiplication, but mostly just addition and subtraction. The time saved in I/O with a BCD format compared to a float format greatly offsets the slower BCD addition/subtraction.
Third: if you use rationals to avoid rounding errors, your float option might be an arbitrary precision float package. Adding 1/3 + 1/6 as rationals is likely to be much faster than calling a library function that maintains a list of mantissa elements to add 0.333333... and 0.166666... to, say, 512 bits.
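The third point in concrete form, using Python's stdlib `fractions` as the rational package:

```python
from fractions import Fraction

# One gcd and a few small-integer operations, versus manipulating
# hundreds of mantissa bits in an arbitrary-precision float package.
print(Fraction(1, 3) + Fraction(1, 6))  # 1/2, exact
```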
|
|
|
|
|