> Do you think that unlimited integer arithmetic is (from a practical point of view) usable? Aren't 64-bit integers good enough for every purpose?
Coincidentally, I just did a calculation a few weeks ago whose answer was 10^3200.
Further coincidence: a friend yesterday bragged to me about how great Mathematica is. Here is his note; sorry, the context will be senseless to you, but observe the numbers:
__________ included e-mail from my friend ______________
Date: Wed, 21 Aug 2002 16:36:40 -0700
From: Raphael Rom <actual e-mail address deleted>
Subject: mathematica is wonderful
To: Randy Smith <Randall.Smith@Sun.COM>
I just spent 45 minutes with Mathematica. I managed to compute the actual and approximated sums for N=1000 to N=20000 (20 digits of accuracy -- these numbers go to 10^-8000!). The ratio real/approximation is 1 + 10^-7, meaning we have a very simple and good approximation.
Raphi
___________________________________________________
...so sometimes numbers just get really really big. Annoying little devils.
--Randy
Randy Smith randall.smith@sun.com writes:
> Do you think that unlimited integer arithmetic is (from a practical point of view) usable? Aren't 64-bit integers good enough for every purpose?
64-bit integers suck rocks.
Here are some things for which they suck:
* Times, at *real* resolution (like, oh, say, the resolution our computers actually give us: for some processors, *nanoseconds*).
* Universal addressing, for which you essentially need: number of network addresses in the world * number of bytes on a machine. The width of an IPv6 address is six bytes. 64-bit integers would leave only two bytes to address my data. Since I could easily have well over a terabyte of storage at a reasonable cost, this is ludicrous. Instead, we have six bytes of network address *plus* at least another six bytes to index local storage.
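A back-of-the-envelope check of that arithmetic, as a small Python sketch (illustrative figures only, using the byte counts above):

    # Flat universal address = network part + local byte offset.
    network_bytes = 6      # the IPv6 width assumed above (corrected later in the thread)
    local_bytes = 6        # enough to index well over a terabyte of local storage

    address_bits = 8 * (network_bytes + local_bytes)
    print(address_bits)                      # 96 -- already past what a 64-bit integer holds
    print(2 ** 40 < 2 ** (8 * local_bytes))  # True: a terabyte fits comfortably in 6 local bytes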
I disagree with this.
> Universal addressing, for which you essentially need: number of network addresses in the world * number of bytes on a machine. The width of an IPv6 address is six bytes. 64-bit integers would leave

IPv6 is 8 bytes (it's 128 bits of address).

> only two bytes to address my data. Since I could easily have well over a terabyte of storage at a reasonable cost, this is ludicrous. Instead, we have six bytes of network address *plus* at least another six bytes to index local storage.

There is nothing preventing you from addressing larger address spaces if you use a higher protocol layer. You wouldn't want to send an IP packet to a bit/nibble/byte of data.

Speaking of IPv6, I think it has TOO many bytes. This is way overkill for the IP address problem, and it doesn't address the routing issues.
Anyways, back to the interesting talk on large numbers in Self.
--------------
The whole idea that you could do something like deal with large numbers, or change all reals to integers with fractions, in a running system is quite interesting.
Dru Nelson dru@redwoodsoft.com writes:
> I disagree with this.

>> Universal addressing, for which you essentially need: number of network addresses in the world * number of bytes on a machine. The width of an IPv6 address is six bytes. 64-bit integers would leave

> IPv6 is 8 bytes (it's 128 bits of address).
Sorry, eight bytes then. (128 bits is 16 bytes, btw).
>> only two bytes to address my data. Since I could easily have well over a terabyte of storage at a reasonable cost, this is ludicrous. Instead, we have six bytes of network address *plus* at least another six bytes to index local storage.

> There is nothing preventing you from addressing larger address spaces if you use a higher protocol layer. You wouldn't want to send an IP packet to a bit/nibble/byte of data.
So this is the point. The claim was "nobody could want integers bigger than 64 bits".
Universal addressing is a very nifty idea, and the whole *point* of it is to not have extra layers of addressing. (Unix having *two* layers of time addressing, for example, totally sucks. Why should I have "seconds" and "microseconds"? And then, now, for some interfaces, you have seconds and nanoseconds. Blech blech blech.)
Having another layer simply means that when you need to address something bigger than 64 bits, you split it up into two integers. Why not skip that step and allow large integers directly?
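To make that concrete, a minimal Python sketch (made-up values; Python integers happen to be unlimited, which is exactly the convenience being argued for):

    network_part = 0x20010DB8000000000000000000000001   # a 128-bit IPv6 address
    local_offset = 3 * 2 ** 40                           # byte offset into a few TB of local storage

    # The "extra layer": carry the address as a pair of pieces, and every user
    # of the address has to know about the split.
    pair = (network_part, local_offset)

    # Large integers directly: one number, and ordinary arithmetic just works.
    universal = (network_part << 64) | local_offset
    assert universal >> 64 == network_part
    assert universal & (2 ** 64 - 1) == local_offset
    print(universal.bit_length())                        # 190 -- no 64-bit ceiling in sight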
>> IPv6 is 8 bytes (it's 128 bits of address).

> Sorry, eight bytes then. (128 bits is 16 bytes, btw).

Yeah, that was wishful thinking on my part. It is 128 bits. Now, 6 bytes will be taken by the MAC address of the computer that is using that address.

> So this is the point. The claim was "nobody could want integers bigger than 64 bits".

Well, for the most part, most people don't need more than the 32-bit integers and 80-bit floats that we have. I don't mind a definition of a larger integer standard, but computer architecture doesn't really prohibit you from using larger integers that you build yourself.

> Universal addressing is a very nifty idea, and the whole *point* of it

I think that large address spaces, or what used to be referred to as distributed shared memory, are OK; I'm not against that.

> is to not have extra layers of addressing. (Unix having *two* layers of time addressing, for example, totally sucks. Why should I have "seconds" and "microseconds"? And then, now, for some interfaces, you have seconds and nanoseconds. Blech blech blech.)

When Unix was written, they weren't that concerned about things like that. So they extended it and tried to maintain compatibility.

> Having another layer simply means that when you need to address something bigger than 64 bits, you split it up into two integers. Why not skip that step and allow large integers directly?

Sounds fine; you should lobby the committee that defines the C standard to make a 'really long long'...
Self, Smalltalk, etc... they don't have this problem.
Dru Nelson dru@redwoodsoft.com writes:
> Well, for the most part, most people don't need more than the 32-bit integers and 80-bit floats that we have. I don't mind a definition of a larger integer standard, but computer architecture doesn't really prohibit you from using larger integers that you build yourself.

Yes, it does. The ideal is a numeric tower that seamlessly integrates all the different sorts of numbers. A good place to look to see how to design this is Scheme.
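For what it's worth, the contrast is easy to show; a tiny Python illustration (the wraparound is simulated with a mask, since that is all a 64-bit register does):

    # What the raw architecture gives you: arithmetic modulo 2**64.
    MASK64 = 2 ** 64 - 1
    word = 2 ** 63
    word = (word * 2) & MASK64      # what a 64-bit unsigned multiply actually produces
    print(word)                     # 0 -- the result silently wrapped around

    # What a language-level numeric tower gives you: the integer just grows.
    n = 2 ** 63
    n *= 2
    print(n)                        # 18446744073709551616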
>> There is nothing preventing you from addressing larger address spaces if you use a higher protocol layer. You wouldn't want to send an IP packet to a bit/nibble/byte of data.

> So this is the point. The claim was "nobody could want integers bigger than 64 bits".

No, that was not my idea. I just think that it is not necessary to hardwire unlimited arithmetic into the language and VM, because 64 bits is enough for nearly all practical work. When a user wants unlimited precision, they will use some library.

> Having another layer simply means that when you need to address something bigger than 64 bits, you split it up into two integers. Why not skip that step and allow large integers directly?

Because I would have to implement it in the VM if it were a standard, and I am too lazy. That's the true reason. 64-bit integers are much easier for me.
Viktor
"Viktor" vi.ki@worldonline.cz writes:
> No, that was not my idea. I just think that it is not necessary to hardwire unlimited arithmetic into the language and VM, because 64 bits is enough for nearly all practical work. When a user wants unlimited precision, they will use some library.
What you've missed is that the *exact* thing necessary is that the user interface is neutral about integer size.
> Because I would have to implement it in the VM if it were a standard, and I am too lazy. That's the true reason. 64-bit integers are much easier for me.
You seem quite ignorant of how this is usually done in Scheme or Lisp; Smalltalk has a different but equally workable strategy.
>> No, that was not my idea. I just think that it is not necessary to hardwire unlimited arithmetic into the language and VM, because 64 bits is enough for nearly all practical work. When a user wants unlimited precision, they will use some library.

> What you've missed is that the *exact* thing necessary is that the user interface is neutral about integer size.
Yes, you are right.
>> Because I would have to implement it in the VM if it were a standard, and I am too lazy. That's the true reason. 64-bit integers are much easier for me.

> You seem quite ignorant of how this is usually done in Scheme or Lisp; Smalltalk has a different but equally workable strategy.
I don't know exactly what you mean by "strategy", but:
* there are "small integers", usually 32-bit values with one (LSB) bit "sacred"
* when some operation on "small integers" overflows, it is double-dispatched to a method which converts the operands to "big integers", and the result is a "big integer" too
* operations on "big integers" are performed by some library downloaded from the internet (LEDA, for example)
That's my "strategy". Did you mean something different?
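A toy Python sketch of the "strategy" above (illustration only; the names and widths are made up, and a real VM would use tagged machine words rather than classes):

    # "Small integers" keep one tag bit, so 31 bits of payload; an overflowing
    # operation falls back to a "big integer" representation.
    SMALL_MIN, SMALL_MAX = -(2 ** 30), 2 ** 30 - 1

    class SmallInt:
        def __init__(self, value):
            assert SMALL_MIN <= value <= SMALL_MAX
            self.value = value
        def add(self, other):
            result = self.value + other.value
            if SMALL_MIN <= result <= SMALL_MAX:
                return SmallInt(result)    # fast path: still fits with the tag bit
            return BigInt(result)          # overflow: hand off to the big-integer code

    class BigInt:                          # stand-in for a real bignum library (e.g. LEDA)
        def __init__(self, value):
            self.value = value

    a, b = SmallInt(2 ** 30 - 1), SmallInt(1)
    print(type(a.add(b)).__name__)         # BigInt -- promotion happens automatically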
Viktor
self-interest@lists.selflanguage.org