types

🔗Christopher Bailey <chris@...>

8/30/2004 12:11:46 PM

> competition for "real" languages in what they do best. It is not
> fancy, a feature I like. It isn't oo. It does not fuss very much about
> data types. It has a syntax which looks about as much like standard

Yes. . .this is what I like about LISP also. You can make lists of
instrument-name, pitch-class, ratio, etc., all co-existing in the same
list.

BTW, Ratio is a data type. Whoopeeee!

🔗Gene Ward Smith <gwsmith@...>

8/30/2004 5:04:02 PM

--- In metatuning@yahoogroups.com, Christopher Bailey <chris@m...> wrote:
>
>
> > competition for "real" languages in what they do best. It is not
> > fancy, a feature I like. It isn't oo. It does not fuss very much about
> > data types. It has a syntax which looks about as much like standard
>
> Yes. . .this is what I like about LISP also.

With all due respect to the Lots of Irritating Single Parenthesis, I
don't think LISP looks all that much like anything but LISP. It is
related to the lambda calculus, but that's hardly the same as standard
math notation.

> You can make lists of
> instrument-name, pitch-class, ratio, etc., all co-existing in the
> same
> list.

LISP can eat that stuff alive, I am sure.

> BTW, Ratio is a data type. Whoopeeee!

🔗Robert Walker <robertwalker@...>

8/31/2004 10:38:17 PM

Hi there,

Well, while we are talking about computer languages
may I just put in a word for C :-).

I know all its failings - it isn't type safe, has no
array bounds checking, permits wild pointers and so forth.
And this is about C, not about C++. I'd better not
say anything about C++ as I don't use it, though I
gather it isn't particularly reputed to be the best
of the object orientated languages available.

However, C itself was originally written as a low to
intermediate level language, for such tasks as writing
compilers. So it's no surprise that it was hard to
extend it into a good object orientated language, and it
shows how good it must be that anyone would even attempt
that rather than start again from scratch.

It does what it was originally designed for well
- it lets you work at a low level quite close to
assembly, and at the same time build high level
constructs and macros.

It is a demanding language - you have to think clearly
to write in C, and if you don't you end up writing
spaghetti which no-one can read, not even yourself.
You have to comment your code extensively
and learn the art of commenting - what needs to be
said and what can be expected to be inferred by the
reader later on.

But when you get into the swing of it then it is
a beautiful language - it rewards clear thinking,
and requires it too - that's the thing.

What's nice about it is that it lets you design
your own architecture to such a large extent.
That then gives you the freedom to construct
something aesthetic, beautiful and easy to
add to later - or something ugly and impossible
for anyone else to work with. You can write
a C program of only ten lines that is almost
completely unreadable and so spaghettified
that it would be hard to make even the
smallest change to it without breaking it completely
- see the obfuscated C contests for
examples. Or you can write a C program of hundreds of
thousands of lines of code that is a pleasure to work with
when you add more features to it later on.

Most C programmers are somewhere between those
two extremes - well, the obfuscated C programmers
deliberately make spaghetti code with just
a dozen lines or so - it must be one of the
best obfuscating languages there is :-).
Then there is Windows C too - C plus the architecture
and routines of the Windows libraries - also a very
beautiful language in my opinion. I came to it
from X Windows - the Unix version of windowing -
and was very much struck by the clarity of the
architecture of Windows C - the low level
C programming again there, not the C++ constructs,
which I don't know enough about to comment on.
It is a pleasure to write in and you feel it
is worth all the effort to learn
to write well in Windows C.

I'm not trying to convert anyone else to C.
Very much an individual thing, what language
one likes to write in. Maple sounds good.
I remember that when I was at the Maths
dept they had Maple there.

I'd get Mathematica or Maple myself certainly,
if they didn't cost so very much - a thousand
pounds for Mathematica I believe, and I
don't know how much for Maple, but quite a lot.

Surely Maple would have unlimited precision
arithmetic does it not? I'd be surprised if not.
E.g. if you enter 191! or something - it has
to write out all the digits or it would be
rather strange, for a mathematical package.

That's not so rare these days and there are various
open source libraries for it, and I think
it wouldn't take so much work to develop
such a library. I wish C had that.

Also native support for ratios would
be great. I just added 64 bit ratios
(i.e. numerator and denominator 64 bit)
to the calculator in FTS and having sorted
out the arithmetic for adding, dividing
multiplying and subtracting, and exponentiating
by an integer, it would be routine
(but would take a while) to convert all
the scales in FTS to be able to handle
64 bit ratios in the same way, which would be
enough for almost any tuning application, apart
from some rare ones. That's the sort of
thing that might be easier in a higher level
object orientated language I gather
- but then I find the idea of operator
overloading very confusing to think about
and maybe it isn't for me. I prefer
to know exactly what an operator does.
Very much an individual thing I'm sure.
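
For what it's worth, the core of it comes down to
something like this sketch in plain C (illustrative
names only, not the actual FTS code) - the main trick
is reducing by the gcd as early as possible to put
off overflow:

#include <stdint.h>

/* a ratio as a pair of 64 bit integers */
typedef struct { int64_t num, den; } ratio;

static int64_t gcd64(int64_t a, int64_t b)
{
    if (a < 0) a = -a;
    if (b < 0) b = -b;
    while (b) { int64_t t = a % b; a = b; b = t; }
    return a;
}

/* put in lowest terms, with the sign on the numerator */
static ratio ratio_reduce(ratio r)
{
    int64_t g = gcd64(r.num, r.den);
    if (g) { r.num /= g; r.den /= g; }
    if (r.den < 0) { r.num = -r.num; r.den = -r.den; }
    return r;
}

/* a/b + c/d over the least common denominator */
static ratio ratio_add(ratio x, ratio y)
{
    int64_t g = gcd64(x.den, y.den);
    ratio r = { x.num * (y.den / g) + y.num * (x.den / g),
                (x.den / g) * y.den };
    return ratio_reduce(r);
}

static ratio ratio_mul(ratio x, ratio y)
{
    ratio r = { x.num * y.num, x.den * y.den };
    return ratio_reduce(r);
}

Subtraction, division and integer powers all follow
the same pattern.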

Robert

🔗Graham Breed <graham@...>

9/1/2004 1:47:05 AM

Robert Walker wrote:

> I'd get Mathematica or Maple myself certainly,
> if they didn't cost so very much - a thousand
> pounds for Mathematica I believe, and I
> don't know how much for Maple, but quite a lot.

I found Maple V for Windows in a bookshop for under 100 pounds. If you aren't a student, and you can't find a bookshop selling old copies, it is very expensive.

Say, would you like my copy? I hardly use it.

> Surely Maple would have unlimited precision
> arithmetic does it not? I'd be surprised if not.
> E.g. if you enter 191! or something - it has
> to write out all the digits or it would be
> rather strange, for a mathematical package.

It supports "arbitrary precision" rather than the unrealistic "unlimited precision". It also seems to support do arbitrary precision floating point, which is more difficult, and you don't get with Python out of the box.

> That's not so rare these days and there are various
> open source libraries for it, and I think
> it wouldn't take so much work to develop
> such a library. I wish C had that.

Well, here's a library:

http://www.swox.com/gmp/

Graham

🔗Gene Ward Smith <gwsmith@...>

9/1/2004 3:20:36 AM

--- In metatuning@yahoogroups.com, "Robert Walker" <robertwalker@n...>
wrote:

> Surely Maple would have unlimited precision
> arithmetic does it not? I'd be surprised if not.
> E.g. if you enter 191! or something - it has
> to write out all the digits or it would be
> rather strange, for a mathematical package.

It's limited by the physical limits of your computer, of course.
People can have funny ideas about what those are. A characteristically
brilliant Berkeley math professor asked me how I'd go about evaluating
n! modulo prime powers using Macsyma; it's easy enough mod a prime,
but how about prime powers? I said the bonehead method of simply
plugging it in would work fine unless the numbers involved were large,
and he replied in a worried voice that it could get up to 1000! or so,
so that wouldn't work!

🔗Carl Lumma <clumma@...>

9/1/2004 11:09:39 AM

> > Surely Maple would have unlimited precision
> > arithmetic does it not? I'd be surprised if not.
> > E.g. if you enter 191! or something - it has
> > to write out all the digits or it would be
> > rather strange, for a mathematical package.
>
> It's limited by the physical limits of your computer, of
> course. People can have funny ideas about what those are. A
> characteristically brilliant Berkeley math professor asked
> me how I'd go about evaluating n! modulo prime powers using
> Macsyma; it's easy enough mod a prime, but how about prime
> powers? I said the bonehead method of simply plugging it in
> would work fine unless the numbers involved were large, and
> he replied in a worried voice that it could get up to 1000!
> or so, so that wouldn't work!

Hmm, I'm guessing the joke is that he was brilliant but
outdated, and 1000! is tiny. That right?

-Carl

🔗Gene Ward Smith <gwsmith@...>

9/1/2004 11:24:28 AM

--- In metatuning@yahoogroups.com, "Carl Lumma" <clumma@y...> wrote:

> Hmm, I'm guessing the joke is that he was brilliant but
> outdated, and 1000! is tiny. That right?

He was not a computer person; most mathematicians aren't. Yes, even
back in the 80s 1000!, a number of 2568 digits, hardly qualified as "big".

Maple, by the way, uses "1000!" to denote 1000!, a typical Maple move.
It also works for complex numbers--stick in (-0.5)!, square it, and
you get pi. Stick in ((-1/2)!)^2 and you have to simplify it, but if
you do you get pi exactly. This sort of thing is an example of why
Maple isn't really like a typical computer language and is called a
computer algebra package.

🔗Robert Walker <robertwalker@...>

9/1/2004 11:40:58 PM

Hi Gene,

> > Surely Maple would have unlimited precision
> > arithmetic does it not? I'd be surprised if not.
> > E.g. if you enter 191! or something - it has
> > to write out all the digits or it would be
> > rather strange, for a mathematical package.
>
> It's limited by the physical limits of your computer, of course.
> People can have funny ideas about what those are. A characteristically
> brilliant Berkeley math professor asked me how I'd go about evaluating
> n! modulo prime powers using Macsyma; it's easy enough mod a prime,
> but how about prime powers? I said the bonehead method of simply
> plugging it in would work fine unless the numbers involved were large,
> and he replied in a worried voice that it could get up to 1000! or so,
> so that wouldn't work!
>

Yes, it would be limited by the disk space required to store
all the digits :-). Well - sort of, usually it would be.

Actually even if your entire disk were filled with a large integer
as digits type representation of some very large number,
you only see part of it at any time. Similarly even
if it is a large book, showing thousands of digits
of pi or something, then you can only see one line
or page at a time, and your eye may not even see the
complete line in one go.

Do you remember the algorithm someone found that
lets you calculate digits of pi at any particular
point to some particular base without calculating the
previous digits leading up to that one?

I can't remember how it worked now - but I remember downloading a program
that did the calculation, and trying it out, may even still have it
somewhere on my hard disk.

To get philosophical about it - if you can see e.g. a slice of
10 consecutive digits anywhere at any point in the expansion,
up to some very large number, too large to store all the digits on one's
hard drive, I wonder if that counts equally well as a representation
of that very large number. (I don't know if this particular
one really gets quite that far so easily but you never know).

Well - I could easily do a program to display e.g.
10^(10^(10^10)) in that way. Ask for any position before
the (10^(10^10))th digit, and it shows you a string of 0s.
Anything after that and it doesn't show anything at all.
If the user happens to hit exactly on the (10^(10^10))th
digit then show a 1 in the appropriate place
on the screen followed by 0s.
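
A toy version of the idea in C, for a number like 10^N
where N at least fits in a machine word (for
10^(10^(10^10)) itself the position would have to be
dialled in as a big number too, but the principle is
the same):

#include <stdint.h>

/* The decimal expansion of 10^N is a 1 followed by N zeros,
   so any digit can be generated on request without storing
   the number. Position 0 is the leading digit; past position
   N there is nothing left to show. */
char digit_of_pow10(uint64_t N, uint64_t pos)
{
    if (pos == 0) return '1';
    return (pos <= N) ? '0' : '\0';
}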

So you could say that that is a complete large integer
type digits representation of 10^(10^(10^10))
which could be shown on an ordinary PC.
The digits happen to be created dynamically
rather than all in one go and then stored on
hard disk and recovered again later for display.

After all what difference does it make if the
digits are actually stored on the hard disk
or generated dynamically on request by
the user?

After all it is only the human
reading it on a computer display that makes it
count as a digital representation of a number
- otherwise it is just a pattern of
magnetic domains on a piece of material.

You could even get the computer to
flash the digits up scrolling them as if reading
a huge number, dial in any point in the sequence
and get it to read it from there,
even though it doesn't have the whole thing
stored anywhere. Kind of true virtual memory...

So in some special cases you might be able to go
somewhat beyond the physical limitations of one's
computer.

way, uses "1000!" to denote 1000!, a typical Maple move.
> It also works for complex numbers--stick in (-0.5)!, square it, and
> you get pi. Stick in ((-1/2)!)^2 and you have to simplify it, but if
> you do you get pi exactly. This sort of thing is an example of why
> Maple isn't really like a typical computer language and is called a
> computer algebra package.

Yes (-0.5)! is a bit strange looking when unaccustomed to it.
:-).

It would be using the Gamma function generalisation of
factorials to reals I assume. I'm not very familiar
with that, and never actually needed to use it for
anything, but I believe it involves pi.
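
(The connection, in case it is of interest: n! generalises
to Gamma(n+1), and Gamma(1/2) = sqrt(pi), so
(-1/2)! = Gamma(1/2) = sqrt(pi), and squaring that gives
pi exactly - which is just what Maple was doing above.)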

Robert

🔗Robert Walker <robertwalker@...>

9/1/2004 11:38:34 PM

Hi Graham,

> > I'd get Mathematica or Maple myself certainly,
> > if they didn't cost so very much - a thousand
> > pounds for Mathematica I believe, and I
> > don't know how much for Maple, but quite a lot.
>
> I found Maple V for Windows in a bookshop for under 100 pounds. If you
> aren't a student, and you can't find a bookshop selling old copies, it
> is very expensive.
>
> Say, would you like my copy? I hardly use it.

That would be great, thanks!

>
> > Surely Maple would have unlimited precision
> > arithmetic does it not? I'd be surprised if not.
> > E.g. if you enter 191! or something - it has
> > to write out all the digits or it would be
> > rather strange, for a mathematical package.
>
> It supports "arbitrary precision" rather than the unrealistic "unlimited
> precision". It also seems to support do arbitrary precision floating
> point, which is more difficult, and you don't get with Python out of the
> box.

Rightio that makes sense - I suppose you set the precision in advance.

> > That's not so rare these days and there are various
> > open source libraries for it, and I think
> > it wouldn't take so much work to develop
> > such a library. I wish C had that.
>
> Well, here's a library:
>
> http://www.swox.com/gmp/

Thanks, great. I see that it is LGPL'd rather than GPL'd,
so I can include it with commercial programs so long
as I keep it unchanged or make available any modifications
I make to it (probably unlikely to need that).

Also looks as though it should be very fast indeed
if that is ever a consideration.

I found a site for pre-compiled windows dlls for it here:
ftp://deltatrinity.dynip.com/gmp-4.1.3_DLL_SharedLibs/
so will give them a go some time. I'm sure it will be useful, as I
fairly often wish I had a big numbers library for something or other.
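
For anyone else trying it, the basic integer type looks
easy to use - something like this short example should
print 191! exactly (an untested sketch, linked with -lgmp):

#include <gmp.h>

int main(void)
{
    mpz_t n;
    mpz_init(n);
    mpz_fac_ui(n, 191);             /* n = 191!, computed exactly */
    gmp_printf("191! = %Zd\n", n);  /* print all the digits */
    mpz_clear(n);
    return 0;
}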

Robert

🔗Carl Lumma <clumma@...>

9/2/2004 2:57:51 AM

> Do you remember the algorithm someone found that
> lets you calculate digits of pi at any particular
> point to some particular base without calculating the
> previous digits leading up to that one?

There may be several "spigot" algoritms for pi, but
the most famous is due to Ramanujan (sp?).

This is actually very interesting, because pi is
conjectured to be normal in base 2 and maybe all bases. It is
also essentially random as far as Shannon entropy
is concerned. Yet there's a simple spigot algorithm
for it. This is the most extreme case of this
kind of thing I'm aware of, but I assume there are
many.

But not too many, because there aren't too many
short spigot algorithms out there, since they too
are strings.

> Yes (-0.5)! is a bit strange looking when unaccustomed to it.
> :-).
>
> It would be using the Gamma function generalisation of
> factorials to reals I assume. I'm not very familiar
> with that, and never actually needed to use it for
> anything, but I believe it involves pi.

I too was shocked and aghast. But gamma functions are
mentioned at the bottom of mathworld's page on Factorial.
I haven't had a chance to look at it though, and probably
won't until I get back from Burning Man.

-Carl

🔗Graham Breed <graham@...>

9/2/2004 6:19:14 AM

Robert Walker wrote:

> Actually even if your entire disk were filled with a large integer
> as digits type representation of some very large number,
> you only see part of it at any time. Similarly even
> if it is a large book, showing thousands of digits
> of pi or something, then you can only see one line
> or page at a time, and your eye may not even see the
> complete line in one go.

And if you needed to multiply two such numbers, the number of times you had to read each one from the disk would be proportional to the number of bits in each. It'd be really, incredibly slow.

> To get philosophical about it - if you can see e.g. a slice of
> 10 consecutive digits anywhere at any point in the expansion,
> up to some very large number, too large to store all the digits on one's
> hard drive, I wonder if that counts equally well as a representation
> of that very large number. (I don't know if this particular
> one really gets quite that far so easily but you never know).

Remember that if the number's stored in binary, extracting a decimal digit requires you to read the whole number. Some libraries do let you store the number in decimal.

> Well - I could easily do a program to display e.g.
> 10^(10^(10^10)) in that way. Ask for any position before
> the (10^(10^10))th digit, and it shows you a string of 0s.
> Anything after that and it doesn't show anything at all.
> If the user happens to hit exactly on the (10^(10^10))th
> digit then show a 1 in the appropriate place
> on the screen followed by 0s.

Yes, there are lots of cases where writing the program a better way saves you having to worry about a super-duper library.

> After all it is only the human
> reading it on a computer display that makes it
> count as a digital representation of a number
> - otherwise it is just a pattern of
> magnetic domains on a piece of material.

Printing very large (that is, beyond RAM limitations) numbers in decimal is pretty useless anyway. Some calculations may happen to require very large numbers as intermediate results, but nobody has to see them. I wondered if Maple simplified them automatically, but it seems not. Examples like (10^1000000)-(10^100000 - 1) obviously take a non-trivial amount of time.

There's a standard algorithm for raising one number to the power of another, modulo a third. It gets used in cryptography, and makes the calculation a lot faster than the naive approach, as well as saving memory. So math libraries tend to supply it.
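
The idea is repeated squaring, reducing modulo m at every step so nothing ever grows beyond m squared - a sketch in C for word-sized numbers (using gcc's 128 bit type for the intermediate products; the big-number libraries do the same thing with arbitrary precision integers):

#include <stdint.h>

/* base^exp mod m with O(log exp) multiplications */
uint64_t powmod(uint64_t base, uint64_t exp, uint64_t m)
{
    uint64_t result = 1 % m;
    base %= m;
    while (exp) {
        if (exp & 1)                 /* this bit of exp is set */
            result = (unsigned __int128)result * base % m;
        base = (unsigned __int128)base * base % m;
        exp >>= 1;
    }
    return result;
}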

> So in some special cases you might be able to go
> somewhat beyond the physical limitations of one's
> computer.

That's roughly what we do with prime factor notation. As long as the numbers start off relatively simple, they can be converted to arrays and then multiplied and divided to your heart's content. You can work out their sizes as intervals in cents without worrying about what the ratio looks like in decimal.
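
In C that might look something like this (a sketch, with names invented for the example) - multiplying ratios is just adding exponent vectors, and the size in cents never needs the decimal form at all:

#include <math.h>
#include <stdio.h>

#define NPRIMES 5
static const int PRIMES[NPRIMES] = { 2, 3, 5, 7, 11 };

/* a ratio as an array of prime exponents, e.g. 3/2 = {-1,1,0,0,0} */

/* size in cents: 1200*log2(ratio) = sum of e[i]*1200*log2(p_i) */
static double exponents_to_cents(const int e[NPRIMES])
{
    double cents = 0.0;
    for (int i = 0; i < NPRIMES; i++)
        cents += e[i] * 1200.0 * log2((double)PRIMES[i]);
    return cents;
}

/* multiplying two ratios = adding their exponent arrays */
static void mul_exponents(int out[NPRIMES],
                          const int a[NPRIMES], const int b[NPRIMES])
{
    for (int i = 0; i < NPRIMES; i++)
        out[i] = a[i] + b[i];
}

int main(void)
{
    int fifth[NPRIMES] = { -1, 1, 0, 0, 0 };       /* 3/2 */
    printf("3/2 = %.3f cents\n", exponents_to_cents(fifth));
    return 0;                                      /* ~701.955 */
}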

Graham

🔗Graham Breed <graham@...>

9/2/2004 6:36:27 AM

Robert Walker wrote:

> That would be great, thanks!

I'll drop it off if I'm ever around Oxford. May happen!

> Rightio that makes sense - I suppose you set the precision in advance.

For floating point, yes, I think so. At least that's how Maple and BC work. For integers, you can adapt to make them as large as they need be. Rationals too, by extension.

Arbitrary precision integers are mainstream now, because they're needed for encryption protocols. All kinds of languages decide they need to make SSL connections and the like. Another niche is decimal arithmetic for financial applications. This can be critically important, because a rounding error on a tax return can be a criminal offence :-O. Here's the Python proposal if anybody's interested:

http://www.python.org/peps/pep-0327.html

BTW, the educational language ABC used exact rational arithmetic by default. So if you did 1/2 it would hold it as a ratio. The idea was that newbie programmers didn't have to worry about the loss of precision inherent in floating point. It turns out that engineers ran into trouble because the numbers got so large their programs became inefficient. Usually, floating point was all they needed, or expected. It was this experience that led Guido to drop the feature in Python. For the average programmer, floating point is less confusing. Those who really need rationals can always find a way, and there aren't actually very many of us.

Graham

🔗Robert Walker <robertwalker@...>

9/2/2004 9:33:39 AM

Hi Carl,

Thanks for the mention of pi spigot programs. They were
new to me actually.

Actually what I had in mind was a digit extraction algorithm.

Here is the URL:

http://mathworld.wolfram.com/BBPFormula.html

and more about it at the bottom of this page:

http://mathworld.wolfram.com/PiFormulas.html

Looking up the spigot programs,
I find that they are streaming algorithms
that let you discard earlier digits after you
calculate them - I just found this minimalist C program for
spigot pi:

http://www1.physik.tu-muenchen.de/~gammel/matpack/html/Mathematics/Pi.html

But the digit extraction algorithm lets
you calculate e.g. the trillionth digit of pi to base 16
without ever calculating any of the earlier digits at all.
Just plug in 10^12 (or 10^18 if it is a British trillion,
though we all use American billions and trillions nowadays)
into the formula and it will give you the trillionth digit
of pi to base 16, with far less calculation than finding
all the earlier digits first.

Quite remarkable!

If you can do it to base 16 then that means that
you have the binary expansion of pi too.
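
For the curious, the whole formula fits in a page of C -
a sketch of the BBP digit extraction (plain doubles limit
how far out it stays reliable, but it shows the principle):

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* 16^e mod m by repeated squaring */
static uint64_t pow16mod(uint64_t e, uint64_t m)
{
    uint64_t r = 1 % m, b = 16 % m;
    while (e) {
        if (e & 1) r = r * b % m;
        b = b * b % m;
        e >>= 1;
    }
    return r;
}

/* fractional part of sum over k of 16^(n-k)/(8k+j) */
static double series(int j, long n)
{
    double s = 0.0;
    for (long k = 0; k <= n; k++) {
        long d = 8 * k + j;
        s += (double)pow16mod((uint64_t)(n - k), (uint64_t)d) / d;
        s -= floor(s);          /* only the fractional part matters */
    }
    for (long k = n + 1; k <= n + 8; k++)   /* a few tiny tail terms */
        s += pow(16.0, (double)(n - k)) / (8 * k + j);
    return s - floor(s);
}

/* hex digit of pi at position n+1 after the point */
static int pi_hex_digit(long n)
{
    double x = 4 * series(1, n) - 2 * series(4, n)
                 - series(5, n) - series(6, n);
    x -= floor(x);
    return (int)(16.0 * x);
}

int main(void)
{
    /* pi = 3.243F6A88... so this prints 2, 4, 3, F, 6, A, 8, 8 */
    for (long n = 0; n < 8; n++)
        printf("%X", pi_hex_digit(n));
    putchar('\n');
    return 0;
}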

Robert

🔗Carl Lumma <clumma@...>

9/2/2004 9:40:27 AM

> BTW, the educational language ABC used exact rational
> arithmetic by default. So if you did 1/2 it would hold
> it as a ratio. The idea was that newbie programmers
> didn't have to worry about the loss of precision
> inherent in floating point.

Scheme does this, and it isn't only useful for newbies.
It also has arbitrary precision decimals. The data
types are called "exact" and "inexact" numbers, resp.

> It turns out that engineers ran into trouble because
> the numbers got so large their programs became inefficient.

Now this sounds like a problem of newbies.

> Usually, floating point was all they needed, or expected.

My position is the weaker the typing the better.

> For the average programmer, floating point is less confusing.

If somebody's used to something, it's bound to be the
less confusing way.

-Carl

🔗Robert Walker <robertwalker@...>

9/2/2004 10:52:22 PM

Hi Graham,

> > That would be great, thanks!
>
> I'll drop it off if I'm ever around Oxford. May happen!

Fine!

> > Rightio that makes sense - I suppose you set the precision in advance.
>
> For floating point, yes, I think so. At least that's how Maple and BC
> work. For integers, you can adapt to make them as large as they need
> be. Rationals too, by extension.
>
> Arbitrary precision integers are mainstream now, because they're needed
> for encryption protocols. All kinds of languages decide they need to
> make SSL connections and the like. Another niche is decimal arithmetic
> for financial applications. This can be critically important, because a
> rounding error on a tax return can be a criminal offence :-O. Here's
> the Python proposal if anybody's interested:
>
> http://www.python.org/peps/pep-0327.html
>
> BTW, the educational language ABC used exact rational arithmetic by
> default. So if you did 1/2 it would hold it as a ratio. The idea was
> that newbie programmers didn't have to worry about the loss of precision
> inherent in floating point. It turns out that engineers ran into
> trouble because the numbers got so large their programs became
> inefficient. Usually, floating point was all they needed, or expected.
> It was this experience that led Guido to drop the feature in Python.
> For the average programmer, floating point is less confusing. Those who
> really need rationals can always find a way, and there aren't actually
> very many of us.

Thanks yes - an interesting thing on that page, that even 0.2
for instance has rounding errors in it as a floating point type,
since the binary expansion of 1/5 is infinite recurring, like 1/7 in
decimal.
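
You can see it directly by asking a double for more digits
than it really has - a two line test:

#include <stdio.h>

int main(void)
{
    /* 1/5 is 0.00110011... recurring in binary, so the stored
       double is only the nearest representable value to 0.2 */
    printf("%.20f\n", 0.2);   /* prints 0.20000000000000001110 */
    return 0;
}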

Robert

🔗Manuel Op de Coul <coul@...>

9/3/2004 3:22:04 AM

Who needs types when we have Turing-complete programming
languages such as COW: http://www.bigzaphod.org/cow/
and BF: http://www.muppetlabs.com/~breadbox/bf/
:-)

Manuel

🔗Graham Breed <graham@...>

9/3/2004 4:34:00 AM

Manuel Op de Coul wrote:
> Who needs types when we have Turing-complete programming
> languages such as COW: http://www.bigzaphod.org/cow/
> and BF: http://www.muppetlabs.com/~breadbox/bf/
> :-)

Sure, there's a list here:

http://en.wikipedia.org/wiki/List_of_esoteric_programming_languages

My favourites are Whitespace (which Yahoo's formatting will unfortunately destroy) and Chef, for which I've copied the hello world:

Ingredients.
72 g haricot beans
101 eggs
108 g lard
111 cups oil
32 zucchinis
119 ml water
114 g red salmon
100 g dijon mustard
33 potatoes

Method.
Put potatoes into the mixing bowl.
Put dijon mustard into the mixing bowl.
Put lard into the mixing bowl.
Put red salmon into the mixing bowl.
Put oil into the mixing bowl.
Put water into the mixing bowl.
Put zucchinis into the mixing bowl.
Put oil into the mixing bowl.
Put lard into the mixing bowl.
Put lard into the mixing bowl.
Put eggs into the mixing bowl.
Put haricot beans into the mixing bowl.
Liquefy contents of the mixing bowl.
Pour contents of the mixing bowl into the baking dish.

Serves 1.

🔗Carl Lumma <clumma@...>

9/6/2004 10:43:32 AM

Hi Robert,

> Thanks for the mention of pi spigot programs. They were
> new to me actually.
>
> Actually what I had in mind was a digit extraction algorithm.

It isn't clear that there's any difference between "spigot"
and "digit extraction" from their mathworld definitions,
though they are not cross-referenced.

> and more about it at the bottom of this page:
>
> http://mathworld.wolfram.com/PiFormulas.html

This page does make it sound like there's a difference,
though,

"A spigot algorithm for pi is given by Rabinowitz
and Wagon (1995; Borwein and Bailey 2003, pp. 141-142).
More amazingly still, a closed form expression giving a
digit-extraction algorithm which produces digits of pi
(or pi^2) in base-16 was discovered by Bailey et al."

Here it isn't clear to me whether the digit extraction
algorithm is more amazing because digit extraction is
different than spigot, or because it is a "closed form
expression".

In any case, looks like I was wrong about Ramanujan.
He calculated pi, but apparently didn't give a spigot/
digit extraction algorithm for it.

> Looking up the spigot programs,
> I find that they are streaming algorithms
> that let you discard earlier digits after you
> calculate them

This doesn't seem to agree with the mathworld
definition.

> But the digit extraction algorithm lets
> you calculate e.g. the trillionth digit of pi to base 16
> without ever calculating any of the earlier digits at all.
> Just plug in 10^12 (or 10^18 if it is an British trillion
> though we all use American billions and trillions nowadays)
> into the formula and it will give you the trillionth digit
> of pi to base 16, no need to wait at all, with no more
> calculation than that.
>
> Quite remarkable!

This is what I thought a spigot algorithm was.

-Carl

🔗Robert Walker <robertwalker@...>

9/6/2004 7:25:15 PM

Hi Carl,

Yes I wondered too about the details of the distinction between
a spigot algorithm and a digit extraction algorithm.

However, searching again, the start of this paper makes it all clear:

http://web.comlab.ox.ac.uk/oucl/work/jeremy.gibbons/publications/spigot.pdf

A spigot algorithm makes the digits of pi one at a time so it would
produce
3, 1, 4, 1, 5, etc

but has to start from the beginning.

It starts by putting the entire sequence, in a mixed
radix base, into memory - then it works through that sequence
converting it to decimal. Actually I'm not sure I quite
understand how it does that from the paper but
the outline of the procedure is clear - it involves
multiplying all the digits by 10, then you
truncate it to get the first digit, output that,
subtract that from the sequence, and proceed in that way
to convert the mixed radix sequence into decimal digits
one at a time.

However, though you need to keep the terms of the
original sequence all in memory you don't need
to keep the digits as they are output.
You can discard them after they have been found.
The sequence itself I think gets smaller as you go through
it truncating it and removing terms,
though I don't quite understand how it works
yet.

The streaming algorithms introduced by the author
of that paper are a separate thing - they are like spigots
except that you work along the series
from left to right instead of repeatedly
working from right to left and truncating,
so don't need to allocate memory for all
the digits in advance. Instead you just
need a small amount of memory to store
a matrix, and a few numbers, and that's all,
if I understand correctly.

Digit extraction however is special because it
gives you the ten billionth digit
and gets it far faster than any method known for generating
all the first ten billion digits, and does it without
the need to find any of the earlier digits
(even to discard them). They just never
get calculated at all.

Normal pi calculations would work using arbitrary
precision arithmetic, and in order to find
the thousandth digit of pi, you will need
to add and multiply and so on terms
that involve a thousand digits.
The spigot algorithm doesn't require
that - you can use ordinary limited precision
arithmetic. So that's how you can have
short simple C programs that
are able to output 1000 digits of
pi using only normal 32 bit precision
arithmetic. Eventually you would hit a
limit there as the numbers in
2 + 1/3*(2 + 2/5*(2 + 3/7*(...)))
get too large for 32 bit precision, but
you will be working with much smaller numbers
so can go far further with ordinary limited
precision arithmetic.
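
To make that concrete, here is roughly what those
minimalist programs look like once unobfuscated - my
sketch of the widely circulated Rabinowitz-Wagon spigot,
which prints pi four decimal digits at a time using
nothing but int arithmetic:

#include <stdio.h>

#define TERMS 2800                 /* enough for 800 digits */

int f[TERMS + 1];                  /* the mixed-radix "digits" */

int main(void)
{
    int a = 10000;                 /* base 10^4: 4 digits per pass */
    int b, c, d, e = 0, g;

    for (b = 0; b < TERMS; b++)
        f[b] = a / 5;              /* every term starts as 2, scaled */

    for (c = TERMS; c > 0; c -= 14) {
        d = 0;
        g = 2 * c;
        b = c;
        for (;;) {
            d += f[b] * a;         /* multiply the tail through by 10^4 */
            f[b] = d % --g;        /* remainder stays in this radix slot */
            d /= g--;              /* carry the quotient leftwards */
            if (--b == 0)
                break;
            d *= b;                /* weight of the next mixed-radix place */
        }
        printf("%.4d", e + d / a); /* release four digits... */
        e = d % a;                 /* ...holding one group back as a carry */
    }
    putchar('\n');
    return 0;
}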

At least, that's the best I can make of
it so far. But I'm a little confused,
didn't understand in that paper what
the author meant when they say that
{0;2,4,6,8,...} is the mixed radix
representation of 2 - seemed to me
that it would be infinity instead.
I must be missing something. It is
easy to miss some essential point when reading
a paper in a field you aren't especially
familiar with.

Spigot algorithms are reasonably old,
I gather from this search,
though I think not quite old enough
for Ramanujan to have made one,
maybe fifty years old or something,
but the digit extraction algorithm is
much more recent, well within the last decade.

Robert

🔗Gene Ward Smith <gwsmith@...>

9/6/2004 8:54:38 PM

--- In metatuning@yahoogroups.com, "Carl Lumma" <clumma@y...> wrote:

> This is what I thought a spigot algorithm was.

Stan Rabinowitz told me his spigot spits out the digits of pi, in base
10, one at a time. It's short but even so has been used for writing
obfuscated code--I think an obfuscated version of it won the C contest
one year.