
Scalar Complexity

🔗Graham Breed <gbreed@gmail.com>

2/4/2007 1:51:28 AM

After I posted the latest version of my Prime Errors and Complexities
paper, which'll be in the files section of this list, I noted that one
equation implied a new complexity measure. After much pondering I've
decided it's the best candidate for a standard weighted complexity
measure. I'm moving my scripts to use it as a default.

The general formula is

sqrt(abs(square(M)))/n

where sqrt is the square root, abs is a way of getting a positive
number from whatever objects you're using, square is a way of squaring
whatever objects you're using, M is a weighted representation of a
regular temperament, and n is the number of primes you're
approximating.

As it came up before, M is a matrix containing the Tenney-weighted
mapping where each column is the weighted mapping of an equal
temperament. In matrix terms, the formula becomes

sqrt(abs(det(trans(M)*M)))/n

where abs is the normal absolute value of a real number, det is the
determinant, trans is the transpose, and * is the matrix product. The
most obvious way of getting a real number to represent a rectangular
matrix is det(trans(M)*M). (You could do the multiplication the other
way round, and maybe it makes a difference, but in context it has to
be this way round.) Because the size should be an absolute value, and
the sign of the determinant is fairly arbitrary, remove it. Then,
because the original matrix was squared, take the square root to give
the result the dimensions of a mapping. Dividing by n turns the implied
sums into means.
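For concreteness, a minimal Python sketch of the matrix form (assuming
numpy; the function name and example numbers are only illustrative):

import numpy as np

def scalar_complexity(et_mappings, primes):
    # et_mappings: one unweighted ET mapping per entry, e.g.
    #   [[12, 19, 28], [19, 30, 44]] for 5-limit meantone.
    # primes: the primes being approximated, e.g. [2, 3, 5].
    weights = np.log2(primes)                  # Tenney weights
    # M has one column per equal temperament; each column is that ET's
    # mapping with the entry for each prime divided by log2(p).
    M = (np.array(et_mappings, dtype=float) / weights).T
    n = len(primes)
    return np.sqrt(abs(np.linalg.det(M.T @ M))) / n

At rank 2, dividing by n outside the square root is the same as turning
each sum of products inside trans(M)*M into a mean, which is where the
means come from.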

M can also be the weighted wedgie of the temperament, as a
multivector. In that case the formula is

sqrt(abs(comp(M)^M))/n

where comp is the complement and ^ is the wedge product. In Grassmann
algebra, comp(M)^M is the scalar product of M with itself, hence I'm
calling this scalar complexity for now. This is the most obvious way
of getting the size of a multivector in Grassmann algebra. It either
implies a Euclidean metric for the weighted wedgie, or means the Tenney
weights serve as the metric for the unweighted wedgie. The abs, sqrt
and division follow the same logic as for the matrices above.
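Under the Euclidean metric, that scalar product is just the sum of the
squared coefficients of the weighted wedgie. Those coefficients are the
RxR minors of a weighted matrix M whose columns wedge to give the
wedgie, so by Cauchy-Binet the sum of their squares is det(trans(M)*M)
and the two formulas agree. A sketch, treating the weighted wedgie as a
flat list of coefficients:

import numpy as np

def scalar_complexity_from_wedgie(weighted_wedgie, n_primes):
    # weighted_wedgie: coefficients of the Tenney-weighted wedgie in
    # any fixed basis order; n_primes: number of primes approximated.
    w = np.asarray(weighted_wedgie, dtype=float)
    return np.sqrt(np.dot(w, w)) / n_primes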

I don't know if this has been proposed before. I remember thinking
that geometric complexity, way back when, *should* be similar to this.
As I never understood it I can't be sure if it was or not. We
generally shied away from this obvious wedgie complexity because we
don't like Euclidean distances in music space. But I was never
convinced that max-abs, sum-abs, or standard deviations were valid.
Now that I've tied this approach in with other, non-wedgie complexities I'm
happy to say that it's the right way to go.

Scalar complexity is very close to the standard deviation complexity I
show in the PDF file. Scalar complexity times TOP-RMS error gives the
same simple badness as STD complexity times Tenney-weighted STD error.
That means scalar complexity is as close to STD complexity as STD
error is to the true RMS error, which means 3 significant figures for
common cases like miracle and mystery. The smaller the error in the
temperament, the closer these pairs of complexities and errors get to
each other.

The Tenney-weighted STD complexity is always smaller than half the
Kees-max complexity. Because the scalar complexity is so close to the
STD complexity, it should also be smaller than half the Kees-max
complexity. I can't prove it always is, but it should be useful as an
approximation if you want to filter for the Kees-max. Because of this
relationship, I'm going to call for the Kees-max complexity to be
divided by two as standard so that it looks more like scalar
complexity. The more primes you look at the smaller the average case
tends to be compared to the worst case.

It's possible, though not very elegant, to express scalar complexity
using the same means, variances and covariance as the TOP-RMS error,
STD complexity, and so on. That means it's very efficient to
calculate if you keep using the same equal temperaments --- order n
for n primes. It's also efficient to update if you want to keep
extending the prime limit. All you do is update each of the sums of
products of weighted mapping elements in the RxR (for a rank R
temperament) matrix.
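A sketch of that incremental update (assuming numpy; the helper and the
example numbers are only illustrative): keep the RxR matrix of sums of
products of weighted mapping elements, and add one outer product per
new prime.

import numpy as np

def extend_gram(gram, new_steps, new_prime):
    # gram: current RxR matrix of sums of products of weighted mapping
    #       elements, over the primes included so far
    # new_steps: each ET's number of steps to the new prime (length R)
    # new_prime: the prime being added to the limit
    row = np.asarray(new_steps, dtype=float) / np.log2(new_prime)
    return gram + np.outer(row, row)

# e.g. build the 5-limit matrix for 12&19 (meantone), then extend to 7:
gram = np.zeros((2, 2))
for p, steps in zip([2, 3, 5], [(12, 19), (19, 30), (28, 44)]):
    gram = extend_gram(gram, steps, p)
gram = extend_gram(gram, (34, 53), 7)              # add the prime 7
complexity = np.sqrt(abs(np.linalg.det(gram))) / 4  # n = 4 primes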

Compared to the Kees-max and STD complexities, the advantages of
scalar complexity are that it doesn't depend on the choice of
equivalence interval and can be generalized to any rank of
temperament. For an equal temperament, the scalar complexity is the
RMS of the weighted mapping, which will always be close to the number
of notes to the octave (when the mapping's in terms of octaves). For
a temperament with a single unison vector, it'll be proportional to
the RMS of the Tenney-weighted prime factorization of that unison
vector, the same way that the sum-abs of the weighted wedgie is
proportional to the Tenney-size of the unison vector.
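As a quick check of the equal temperament case (a sketch, using
12-equal in the 5-limit):

import numpy as np

mapping = np.array([12, 19, 28], dtype=float)   # 12-equal: steps to 2, 3, 5
weighted = mapping / np.log2([2, 3, 5])         # about [12.0, 11.99, 12.06]
rms = np.sqrt(np.mean(weighted ** 2))           # about 12.02, close to 12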

It does imply an RMS, rather than a minimax, outlook. I still think
this is the right way of treating weighting because the set of
intervals is unbounded, so you can throw away whichever ones you like.
Knowing the average badness is more informative than the worst
badness. But even then I don't know of a minimax equivalent that
works as well as this, so I'm hoping weighted minimax folks can still
use this scalar complexity as the default.

Graham

🔗Carl Lumma <ekin@lumma.org>

2/4/2007 12:41:55 PM

I understand weighted error. What's weighted complexity?

-Carl


🔗Graham Breed <gbreed@gmail.com>

2/4/2007 11:04:03 PM

On 05/02/07, Carl Lumma <ekin@lumma.org> wrote:
> I understand weighted error. What's weighted complexity?

Like normal complexity but weighted. The odd limit complexity tells
you how many tempered notes you need for a complete chord. Weighted
complexity is some indication of the number of tempered notes you need
to approximate arbitrarily complex ratios. For weighted error you
weight, well, the error. For weighted complexity you weight the
number of steps.

Graham

🔗Carl Lumma <ekin@lumma.org>

2/5/2007 12:23:28 AM

>> I understand weighted error. What's weighted complexity?
>
>Like normal complexity but weighted. The odd limit complexity tells
>you how many tempered notes you need for a complete chord. Weighted
>complexity is some indication of the number of tempered notes you need
>to approximate arbitrarily complex ratios. For weighted error you
>weight, well, the error. For weighted complexity you weight the
>number of steps.
> Graham

Why should it be "harder" to approximate 11 than 7? Or are
you just weighting for the number of notes in a complete chord?

-Carl

🔗Graham Breed <gbreed@gmail.com>

2/6/2007 2:33:07 AM

On 05/02/07, Carl Lumma <ekin@lumma.org> wrote:
> >> I understand weighted error. What's weighted complexity?
> >
> >Like normal complexity but weighted. The odd limit complexity tells
> >you how many tempered notes you need for a complete chord. Weighted
> >complexity is some indication of the number of tempered notes you need
> >to approximate arbitrarily complex ratios. For weighted error you
> >weight, well, the error. For weighted complexity you weight the
> >number of steps.
>
> Why should it be "harder" to approximate 11 than 7? Or are
> you just weighting for the number of notes in a complete chord?

I'm assuming you want simple ratios to approximate in a simple way.
It's always going to be harder to approximate 9 than 3 with a
prime-based regular temperament. All Tenney weighting does is treat
all intervals equally in this respect. It may not be harder to
approximate 11 -- the easier the better. There's nothing to penalize
an approximation for being too good.

It's nothing to do with complete chords. Prime limits don't have
complete chords anyway. I divide through by the number of primes so
they don't affect it. One of the nice things about this measure is
that it's roughly the same if you randomly add or remove a prime.
It's about the average complexity of an interval, although the RMS is
really a compromise between mean-abs (the most obvious kind of
average) and max-abs.

I thought of maybe "quadratic complexity" instead of "scalar
complexity" as it's always about squaring things.

Graham

🔗Carl Lumma <ekin@lumma.org>

2/6/2007 2:55:40 AM

At 02:33 AM 2/6/2007, you wrote:
>On 05/02/07, Carl Lumma <ekin@lumma.org> wrote:
>> >> I understand weighted error. What's weighted complexity?
>> >
>> >Like normal complexity but weighted. The odd limit complexity tells
>> >you how many tempered notes you need for a complete chord. Weighted
>> >complexity is some indication of the number of tempered notes you need
>> >to approximate arbitrarily complex ratios. For weighted error you
>> >weight, well, the error. For weighted complexity you weight the
>> >number of steps.
>>
>> Why should it be "harder" to approximate 11 than 7? Or are
>> you just weighting for the number of notes in a complete chord?
>
>I'm assuming you want simple ratios to approximate in a simple way.

Aha; I see.

>It's always going to be harder to approximate 9 than 3 with a
>prime-based regular temperament. All Tenney weighting does is treat
>all intervals equally in this respect. It may not be harder to
>approximate 11 -- the easier the better. There's nothing to penalize
>an approximation for being too good.

Well, you're penalizing temperaments that are good at
1:3:7 chords but bad at 1:3:5 chords -- correct? Not a bad
assumption at the end of the day. A better one, from my
point of view, than equal-weighted complexity for complete
chords.

>It's nothing to do with complete chords. Prime limits don't have
>complete chords anyway.

Is this prime limit, then? I seem to remember you thinking
considering error of primes individually was a good proxy for
the complete chord method.

-Carl

🔗Graham Breed <gbreed@gmail.com>

2/6/2007 9:25:49 PM

On 06/02/07, Carl Lumma <ekin@lumma.org> wrote:
> At 02:33 AM 2/6/2007, you wrote:
> >On 05/02/07, Carl Lumma <ekin@lumma.org> wrote:
> >> >> I understand weighted error. What's weighted complexity?
> >> >
> >> >Like normal complexity but weighted. The odd limit complexity tells
> >> >you how many tempered notes you need for a complete chord. Weighted
> >> >complexity is some indication of the number of tempered notes you need
> >> >to approximate arbitrarily complex ratios. For weighted error you
> >> >weight, well, the error. For weighted complexity you weight the
> >> >number of steps.
> >>
> >> Why should it be "harder" to approximate 11 than 7? Or are
> >> you just weighting for the number of notes in a complete chord?
> >
> >I'm assuming you want simple ratios to approximate in a simple way.
>
> Aha; I see.
>
> >It's always going to be harder to approximate 9 than 3 with a
> >prime-based regular temperament. All Tenney weighting does is treat
> >all intervals equally in this respect. It may not be harder to
> >approximate 11 -- the easier the better. There's nothing to penalize
> >an approximation for being too good.
>
> Well, you're penalizing temperaments that are good at
> 1:3:7 chords but bad at 1:3:5 chords -- correct? Not a bad
> assumption at the end of the day. A better one, from my
> point of view, than equal-weighted complexity for complete
> chords.

Relative to equal weighting, yes. Of course the weights of 5 and 7
don't differ greatly. It's more a case of 1:3:5 compared to 1:11:13.

I still think equal-weighted complexity for complete chords makes
sense if you're actually planning to use complete chords and you want
as many of them as possible. The prime-based Tenney-weighted RMS is
never likely to be what you want, but close enough that it's a good
standard. We're all going to want different things in different
contexts, and most of the time don't even know what we want, so
standards are certainly useful here.

> >It's nothing to do with complete chords. Prime limits don't have
> >complete chords anyway.
>
> Is this prime limit, then? I seem to remember you thinking
> considering error of primes individually was a good proxy for
> the complete chord method.

Yes, this is prime limit.

If you're only going to look at primes, Tenney weighting gets you
closest to odd limits. You also need to consider the intervals
between primes in some way. For the error you can handle that by
optimizing the scale stretch. But using a standard deviation (STD)
instead of an RMS does the same job if you want to keep pure octaves
(or have some other rule for setting the scale stretch). I'm still
not entirely clear why STDs work for complexity as well, but they
clearly do. It's nice that they're always less than half the max Kees
complexity, which I do understand. But the standard deviation is
supposed to be the RMS deviation from the mean and I don't see that
the mean means anything in this context. I don't see why a
determinant or wedgie scalar product should consider intervals between
primes either but they obviously do because the result is extremely
close to the standard deviation.
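Here's the scale-stretch point in miniature for a single ET (a sketch
of one way to read it, using 12-equal in the 5-limit): let the step
size float to minimize the RMS weighted error, and compare that with
the standard deviation of the weighted errors when the octave is kept
pure.

import numpy as np

w = np.array([12, 19, 28]) / np.log2([2, 3, 5])  # weighted 12-equal mapping
step = np.sum(w) / np.sum(w ** 2)                # least-squares optimal step (octaves)
stretched_rms = np.sqrt(np.mean((step * w - 1) ** 2))  # about 0.0026
pure_octave_std = np.std(w / 12 - 1)                   # also about 0.0026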

For actual comparisons with complete chords, worst weighted errors and
complexities will get you closer than RMS/STD. I prefer the RMS/STD
partly because it gets closer to the average case (but also for
pragmatic reasons). This is especially true for higher prime limits.
There are certainly people out there who want to use 19-limit
intervals in a context where they could be tempered. But 19-limit
complete chords? I think not! I can even see a point to my 31-limit
searches if you're going to throw away the intervals that are too
complex or too far from JI. One reason for stopping at 31 is that
it's the highest prime Ptolemy used IIRC.

Ultimately, the software should be able to find useful subsets of a
prime limit and give you the worst error and complexity (weighted or
otherwise) for that subset. I can also see an advantage in the
error-weighting following the complexity of the tempered interval
rather than the ratio being approximated. But these are all
complications --- for now, I'm concentrating on getting the weighted
primes worked out, implemented and documented.

Graham

🔗Carl Lumma <ekin@lumma.org>

2/7/2007 10:05:22 AM

>> Well, you're penalizing temperaments that are good at
>> 1:3:7 chords but bad at 1:3:5 chords -- correct? Not a bad
>> assumption at the end of the day. A better one, from my
>> point of view, than equal-weighted complexity for complete
>> chords.
>
>Relative to equal weighting, yes. Of course the weights of 5 and 7
>don't differ greatly. It's more a case of 1:3:5 compared to 1:11:13.
>
>I still think equal-weighted complexity for complete chords makes
>sense if you're actually planning to use complete chords and you want
>as many of them as possible.

Yes.

>The prime-based Tenney-weighted RMS is
>never likely to be what you want, but close enough that it's a good
>standard. We're all going to want different things in different
>contexts, and most of the time don't even know what we want, so
>standards are certainly useful here.

That's why I like my idea of finding the best chords to use
with each ET.

>If you're only going to look at primes, Tenney weighting gets you
>closest to odd limits. You also need to consider the intervals
>between primes in some way. For the error you can handle that by
>optimizing the scale stretch.

In the past people suggested doing it by paying attention to
the signs of the errors.

Scale stretch??

>For actual comparisons with complete chords, worst weighted errors and
>complexities will get you closer than RMS/STD. I prefer the RMS/STD
>partly because it gets closer to the average case (but also for
>pragmatic reasons). This is especially true for higher prime limits.
>There are certainly people out there who want to use 19-limit
>intervals in a context where they could be tempered. But 19-limit
>complete chords? I think not!

Why not? Denny and I used to play 21-limit marimbas and stuff.

>But these are all
>complications --- for now, I'm concentrating on getting the weighted
>primes worked out, implemented and documented.

Very good then.

-Carl

🔗Graham Breed <gbreed@gmail.com>

2/7/2007 11:16:18 PM

On 08/02/07, Carl Lumma <ekin@lumma.org> wrote:

> >If you're only going to look at primes, Tenney weighting gets you
> >closest to odd limits. You also need to consider the intervals
> >between primes in some way. For the error you can handle that by
> >optimizing the scale stretch.
>
> In the past people suggested doing it by paying attention to
> the signs of the errors.

Did they have any concrete suggestions, or was it only a case of "hey,
why not look at the signs of the errors?"

> Scale stretch??

Yes, if you treat octaves on an equal footing with the other primes
and optimize for them you get the right results from a naive way of
looking at the primes. I explained it all in the Prime Errors and
Complexities PDF so that I don't have to keep on doing so. This is
probably why the new complexity measure works as well -- it uses the
whole definition of the temperament instead of treating octaves as
special.

> >For actual comparisons with complete chords, worst weighted errors and
> >complexities will get you closer than RMS/STD. I prefer the RMS/STD
> >partly because it gets closer to the average case (but also for
> >pragmatic reasons). This is especially true for higher prime limits.
> >There are certainly people out there who want to use 19-limit
> >intervals in a context where they could be tempered. But 19-limit
> >complete chords? I think not!
>
> Why not? Denny and I used to play 21-limit marimbas and stuff.

So one of you had 5 mallets and the other 6 to hit a complete chord between you?

Graham

🔗Carl Lumma <ekin@lumma.org>

2/8/2007 10:06:58 AM

>> >If you're only going to look at primes, Tenney weighting gets you
>> >closest to odd limits. You also need to consider the intervals
>> >between primes in some way. For the error you can handle that by
>> >optimizing the scale stretch.
>>
>> In the past people suggested doing it by paying attention to
>> the signs of the errors.
>
>Did they have any concrete suggestions, or was it only a case of "hey,
>why not look at the signs of the errors?"

I don't really know what you're talking about, so there's no point
in me going into detail yet.

>> Scale stretch??
>
>Yes, if you treat octaves on an equal footing with the other primes
>and optimize for them you get the right results from a naive way of
>looking at the primes. I explained it all in the Prime Errors and
>Complexities PDF so that I don't have to keep on doing so. This is
>probably why the new complexity measure works as well -- it uses the
>whole definition of the temperament instead of treating octaves as
>special.

I'm lost. You lost me in the PDF, too. But I guess I can try
again. Do you have a url for it these days?

>> >For actual comparisons with complete chords, worst weighted errors and
>> >complexities will get you closer than RMS/STD. I prefer the RMS/STD
>> >partly because it gets closer to the average case (but also for
>> >pragmatic reasons). This is especially true for higher prime limits.
>> >There are certainly people out there who want to use 19-limit
>> >intervals in a context where they could be tempered. But 19-limit
>> >complete chords? I think not!
>>
>> Why not? Denny and I used to play 21-limit marimbas and stuff.
>
>So one of you had 5 mallets and the other 6 to hit a complete chord
>between you?

We had four mallets between the two of us, and the sustain and
resonance in the instrument was plenty to consider something
like 19-limit error. As I should think it would be in any music
that uses the harmonic series as a scale.

-Carl

🔗Graham Breed <gbreed@gmail.com>

2/9/2007 3:16:07 AM

On 09/02/07, Carl Lumma <ekin@lumma.org> wrote:

> >> Scale stretch??
> >
> >Yes, if you treat octaves on an equal footing with the other primes
> >and optimize for them you get the right results from a naive way of
> >looking at the primes. I explained it all in the Prime Errors and
> >Complexities PDF so that I don't have to keep on doing so. This is
> >probably why the new complexity measure works as well -- it uses the
> >whole definition of the temperament instead of treating octaves as
> >special.
>
> I'm lost. You lost me in the PDF, too. But I guess I can try
> again. Do you have a url for it these days?

No. There's a copy in the files section here and I think you have it.

Not that I can remember what I said there, but I don't think I can
make it any clearer. You're going to have to think it through. It's
a simple idea but it has to click with you.

> >> >For actual comparisons with complete chords, worst weighted errors and
> >> >complexities will get you closer than RMS/STD. I prefer the RMS/STD
> >> >partly because it gets closer to the average case (but also for
> >> >pragmatic reasons). This is especially true for higher prime limits.
> >> >There are certainly people out there who want to use 19-limit
> >> >intervals in a context where they could be tempered. But 19-limit
> >> >complete chords? I think not!
> >>
> >> Why not? Denny and I used to play 21-limit marimbas and stuff.
> >
> >So one of you had 5 mallets and the other 6 to hit a complete chord
> >between you?
>
> We had four mallets between the two of us, and the sustain and
> resonance in the instrument was plenty to consider something
> like 19-limit error. As I should think it would be in any music
> that uses the harmonic series as a scale.

Would there be any point in tempering it? It's something to consider anyway.
And the odd-limit search does work up to the 21-limit.

Graham

🔗Carl Lumma <ekin@lumma.org>

2/9/2007 8:13:04 AM

>> We had four mallets between the two of us, and the sustain and
>> resonance in the instrument was plenty to consider something
>> like 19-limit error. As I should think it would be in any music
>> that uses the harmonic series as a scale.
>
>Would there be any point in tempering it?

Not really, no. I guess you've got me there.

-Carl

🔗Dan Amateur <xamateur_dan@yahoo.ca>

2/11/2007 8:58:06 PM

Saw this on another list, looks interesting and
familiar but can't place it?

Anyone out there have any ideas?

To: MakeMicroMusic@yahoogroups.com
From: "dar kone" <zarkorgon@yahoo.com> Add to Address
Book Add Mobile Alert
Date: Sun, 11 Feb 2007 20:15:46 -0800 (PST)
Subject: [MMM] Weird Frequency Phenomenon

Help, can someone explain how and what is
happening in the below?

Is there some standard musical, math explanation for
this?

Frequency #1        Reciprocal of Freq 1 *2
1.851351            1.080292

Frequency #2        Reciprocal of Freq 1 *2
-1.167568           -1.712963

Frequency #1 / Reciprocal of Freq 1 *2
1.851351 / -1.712963 = -1.080789

-1.080789 = Close to Reciprocal of Freq # 1

Reciprocal of Freq 1 *2 / Frequency #2
1.080292 / -1.167568 = -1.850501

-1.850501 = Close to Freq # 1


🔗Gene Ward Smith <genewardsmith@coolgoose.com>

2/12/2007 3:03:33 PM

--- In tuning-math@yahoogroups.com, Dan Amateur <xamateur_dan@...>
wrote:
>
> Saw this on another list, looks interesting and
> familiar but can't place it?
>
> Anyone out there have any ideas?

It makes zero sense to me, sorry.