Gene's tuning-math OEIS submissions

🔗Carl Lumma <ekin@lumma.org>

2/28/2007 7:02:23 PM

Gene,

Can you give us a rundown of your OEIS submissions.
I found a few threads mentioning these, but I'm not
sure if they're complete.

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

2/28/2007 7:51:24 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> Gene,
>
> Can you give us a rundown of your OEIS submissions.
> I found a few threads mentioning these, but I'm not
> sure if they're complete.

If you type "Gene Ward Smith" in OEIS, you get a complete list of my
submissions.

🔗Carl Lumma <ekin@lumma.org>

2/28/2007 9:14:38 PM

At 07:51 PM 2/28/2007, you wrote:
>--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>>
>> Gene,
>>
>> Can you give us a rundown of your OEIS submissions.
>> I found a few threads mentioning these, but I'm not
>> sure if they're complete.
>
>If you type "Gene Ward Smith" in OEIS, you get a complete list of my
>submissions.

Yeah but it doesn't explain them. -Carl

🔗Carl Lumma <ekin@lumma.org>

2/28/2007 9:26:10 PM

>>> Can you give us a rundown of your OEIS submissions.
>>> I found a few threads mentioning these, but I'm not
>>> sure if they're complete.
>>
>>If you type "Gene Ward Smith" in OEIS, you get a complete list of my
>>submissions.
>
>Yeah but it doesn't explain them. -Carl

It's actually better than I thought at first, but it would be
nice to have a way to compare the different zeta-based sequences
from a music theory standpoint.

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

2/28/2007 11:39:06 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:

> It's actually better than I thought at first, but it would be
> nice to have a way to compare the different zeta-based sequences
> from a music theory standpoint.

I'm not sure what you mean, but the integral between zeros of Z(t)
seems like the best in terms of quantifying the goodness of an et.
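
To make the zeta connection concrete: a rough numerical proxy (an
illustration of the idea, not Gene's actual Z(t) integral) scores an
n-note equal division by a cosine sum over primes with 1/sqrt(p)
damping, which tracks how tall Z gets near the point corresponding
to n:

```python
import math

# A handful of primes; adding more terms sharpens the approximation.
PRIMES = [2, 3, 5, 7, 11, 13]

def zeta_style_score(n):
    """Cosine-sum proxy for the height of Z(t) associated with n-et:
    each prime p rewards n for approximating log2(p) by a whole
    number of steps, damped by 1/sqrt(p)."""
    return sum(math.cos(2 * math.pi * n * math.log2(p)) / math.sqrt(p)
               for p in PRIMES)

# Good divisions stand out against their neighbours:
scores = {n: round(zeta_style_score(n), 3) for n in (11, 12, 13)}
```

Under this score 12 beats both 11 and 13, mirroring the pronounced
peak of Z at the point corresponding to 12-et.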

🔗Carl Lumma <ekin@lumma.org>

3/1/2007 8:32:49 AM

>> It's actually better than I thought at first, but it would be
>> nice to have a way to compare the different zeta-based sequences
>> from a music theory standpoint.
>
>I'm not sure what you mean, but the integral between zeros of Z(t)
>seems like the best in terms of quantifying the goodness of an et.

Ok, I guess I can take your word for it. I think it would be
nice if 12 weren't put into the calculation.

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/1/2007 2:00:53 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >> It's actually better than I thought at first, but it would be
> >> nice to have a way to compare the different zeta-based sequences
> >> from a music theory standpoint.
> >
> >I'm not sure what you mean, but the integral between zeros of Z(t)
> >seems like the best in terms of quantifying the goodness of an et.
>
> Ok, I guess I can take your word for it. I think it would be
> nice if 12 weren't put into the calculation.

I only put it into one calculation. There's the increasing size of
the integral:

http://www.research.att.com/~njas/sequences/A117538

2, 5, 7, 12, 19, 31, 41, 53, 72, 130, 171, 224, 270

No 12 in that one. There's also the increasing peak value:

http://www.research.att.com/~njas/sequences/A117536

0, 1, 2, 3, 4, 5, 7, 10, 12, 19, 22, 27, 31, 41, 53, 72, 99, 118,
130, 152, 171, 217, 224, 270

No 12 there either.

There's even this:

http://www.research.att.com/~njas/sequences/A117537

2, 3, 5, 7, 12, 19, 31, 46, 53, 72, 270, 311, 954, 1178, 1308, 1395,
1578, 3395, 4190

No 12 there either.

The only 12 one is this:

http://www.research.att.com/~njas/sequences/A117539

12, 19, 31, 41, 46, 53, 58, 65, 72, 77, 87, 94, 99, 103, 111

I think this is an interesting list. What would you do instead?

🔗Carl Lumma <ekin@lumma.org>

3/1/2007 10:35:03 PM

>> >I'm not sure what you mean, but the integral between zeros of Z(t)
>> >seems like the best in terms of quantifying the goodness of an et.
>>
>> Ok, I guess I can take your word for it. I think it would be
>> nice if 12 weren't put into the calculation.
>
>I only put it into one calculation. There's the increasing size of
>the integral:
>
>http://www.research.att.com/~njas/sequences/A117538
>
>2, 5, 7, 12, 19, 31, 41, 53, 72, 130, 171, 224, 270

I missed this somehow.

>No 12 in that one. There's also the increasing peak value:
>
>http://www.research.att.com/~njas/sequences/A117536

No integral in this one.

>There's even this:
>
>http://www.research.att.com/~njas/sequences/A117537
>
>2, 3, 5, 7, 12, 19, 31, 46, 53, 72, 270, 311, 954, 1178, 1308, 1395,
>1578, 3395, 4190
>
>No 12 there either.

Ditto.

>The only 12 one is this:
>
>http://www.research.att.com/~njas/sequences/A117539
>
>12, 19, 31, 41, 46, 53, 58, 65, 72, 77, 87, 94, 99, 103, 111
>
>I think this is an interesting list. What would you do instead?

This is the one I was referring to. It's also the best of the
lot as far as ETs, except it has nothing below 12 (notably from
my point of view, it's missing 5) and perhaps too many up high
(like 77, 103).

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/2/2007 4:27:21 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:

> This is the one I was referring to. It's also the best of the
> lot as far as ETs, except it has nothing below 12 (notably from
> my point of view, it's missing 5) and perhaps too many up high
> (like 77, 103).

I guess what's really needed is a rate of growth estimate for these
things, or an omega theorem (a theorem that something exceeds a given
value infinitely often).

🔗Paul G Hjelmstad <phjelmstad@msn.com>

3/3/2007 9:34:01 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >> >I'm not sure what you mean, but the integral between zeros of Z(t)
> >> >seems like the best in terms of quantifying the goodness of an et.
> >>
> >> Ok, I guess I can take your word for it. I think it would be
> >> nice if 12 weren't put into the calculation.
> >
> >I only put it into one calculation. There's the increasing size of
> >the integral:
> >
> >http://www.research.att.com/~njas/sequences/A117538
> >
> >2, 5, 7, 12, 19, 31, 41, 53, 72, 130, 171, 224, 270
>
> I missed this somehow.
>
> >No 12 in that one. There's also the increasing peak value:
> >
> >http://www.research.att.com/~njas/sequences/A117536
>
> No integral in this one.
>
> >There's even this:
> >
> >http://www.research.att.com/~njas/sequences/A117537
> >
> >2, 3, 5, 7, 12, 19, 31, 46, 53, 72, 270, 311, 954, 1178, 1308, 1395,
> >1578, 3395, 4190
> >
> >No 12 there either.
>
> Ditto.
>
> >The only 12 one is this:
> >
> >http://www.research.att.com/~njas/sequences/A117539
> >
> >12, 19, 31, 41, 46, 53, 58, 65, 72, 77, 87, 94, 99, 103, 111
> >
> >I think this is an interesting list. What would you do instead?
>
> This is the one I was referring to. It's also the best of the
> lot as far as ETs, except it has nothing below 12 (notably from
> my point of view, it's missing 5) and perhaps too many up high
> (like 77, 103).
>
> -Carl
>
Gene, in the last list, when you say midpoint, is it the only
possible integer value between the zeros of Z(t)? (For which the
integral I is greater than or equal to that for 12.) Is this the
normalized version of the Z function as given on Wikipedia? Lastly,
how could there be counterexamples? I guess I don't see how an
integer value could not be within two zeros. Thanks

Paul H

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/3/2007 11:58:50 AM

--- In tuning-math@yahoogroups.com, "Paul G Hjelmstad"
<phjelmstad@...> wrote:

> Gene, in the last list, when you say midpoint, is it the only
> possible integer value between the zeros of z(x)?

Right. At most one integer value appears in the interval.

> (For which the integral I is greater than or equal to that for
> 12.) Is this the normalized version of the Z function as given
> on Wikipedia?

Right, that's it. I wrote that Wikipedia article just so as to have
the Z function to refer to for a web page on this stuff, and never
wrote the web page.

> Lastly, how could there be counterexamples? I guess I don't see
> how an integer value could not be within two zeros. Thanks

What I meant is that you could have an interval where the integral
was greater than the integral for 12, but no integer was in the
interval.

🔗Carl Lumma <ekin@lumma.org>

3/4/2007 12:25:32 AM

>What would you do instead?

I put a start on this. I'm doing a Graham-style thing with
the errors of primes in ETs. I can print the n best (as in
least badness) ETs under different conditions...

* Considering an arbitrary number of primes (first n).
* Using no weighting, 1/log(p) weighting, or 1/sqrt(p)
weighting on the errors. log seems best.
* Averaging the errors with arithmetic mean or RMS.
RMS seems better.
* Using the logflat complexity exponent on the
size of the ET -- pi(p)/(pi(p)-1) I think, where p
is the largest prime considered.
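
In Python the computation looks roughly like this (a sketch; my
actual code is Scheme, and details like the cents scaling and the
exact argument order are reconstructions from the output below):

```python
import math

def prime_list(n):
    """First n primes by trial division."""
    primes, k = [], 2
    while len(primes) < n:
        if all(k % p for p in primes):
            primes.append(k)
        k += 1
    return primes

def badness(et, primes, weight="log"):
    """Weighted-RMS prime error (in cents) times logflat complexity."""
    total = 0.0
    for p in primes:
        steps = round(et * math.log2(p))               # best approximation of p
        err = abs(steps / et - math.log2(p)) * 1200    # error in cents
        w = math.log2(p) if weight == "log" else math.sqrt(p)
        total += (err / w) ** 2                        # 1/log(p) or 1/sqrt(p) weighting
    rms = math.sqrt(total / len(primes))
    n = len(primes)
    return rms * et ** (n / (n - 1))                   # logflat exponent pi(p)/(pi(p)-1)

def et_series(count, max_et, nprimes, weight="log"):
    """The `count` least-bad ETs up to max_et, as (badness, et) pairs."""
    primes = prime_list(nprimes)
    scored = sorted((badness(et, primes, weight), et)
                    for et in range(2, max_et + 1))
    return scored[:count]
```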

Here are some results:

> (et-series 12 1200 40 "sqrt(p)" "rms" "logflat")
((52.19292881106997 540) (52.71489509664127 525)
(52.83228952846678 655) (53.29973988785825 472)
(53.53499993485932 894) (53.56657654466209 1178)
(53.570269194091885 711) (53.669427895800546 364)
(53.69902837746059 581) (53.818895242304684 301)
(54.060867805115585 1065) (54.09186516073035 1106))

> (et-series 12 1200 40 "log(p)" "rms" "logflat")
((63.37181427694144 894) (63.70218510093019 453)
(63.93641639551927 655) (64.37119921757083 540)
(64.40940464873894 301) (64.58928407527537 867)
(64.88705882352804 181) (65.03377597327825 193)
(65.05253784397192 1152) (65.39270381588925 94)
(65.47083273373606 364) (65.49786495126975 118))

> (et-series 12 1200 4 "log(p)" "rms" "logflat")
((115.64685457477877 171) (129.17157144597562 441)
(161.10444518958764 31) (173.52364395668954 12)
(175.43329621056932 2) (176.32669201889667 10)
(179.26495027507892 53) (179.72578002829843 5)
(187.07623916897785 3) (189.05935585753883 612)
(193.71314760522884 41) (199.03836254569222 270))

-Carl

🔗Carl Lumma <ekin@lumma.org>

3/4/2007 12:27:28 AM

Eventually I'll print instead series of ETs of decreasing
badness. But I thought this might be a better way to play
with it for now.

-C.

At 12:25 AM 3/4/2007, you wrote:
>>What would you do instead?
>
>I put a start on this. I'm doing a Graham-style thing with
>the errors of primes in ETs. I can print the n best (as in
>least badness) ETs under different conditions...
>
> * Considering an arbitrary number of primes (first n).
> * Using no weighting, 1/log(p) weighting, or 1/sqrt(p)
> weighting on the errors. log seems best.
> * Averaging the errors with arithmetic mean or RMS.
> RMS seems better.
> * Using the logflat complexity exponent on the
> size of the ET -- pi(p)/(pi(p)-1) I think, where p
> is the largest prime considered.
>
>Here are some results:
>
>> (et-series 12 1200 40 "sqrt(p)" "rms" "logflat")
>((52.19292881106997 540) (52.71489509664127 525)
> (52.83228952846678 655) (53.29973988785825 472)
> (53.53499993485932 894) (53.56657654466209 1178)
> (53.570269194091885 711) (53.669427895800546 364)
> (53.69902837746059 581) (53.818895242304684 301)
> (54.060867805115585 1065) (54.09186516073035 1106))
>
>> (et-series 12 1200 40 "log(p)" "rms" "logflat")
>((63.37181427694144 894) (63.70218510093019 453)
> (63.93641639551927 655) (64.37119921757083 540)
> (64.40940464873894 301) (64.58928407527537 867)
> (64.88705882352804 181) (65.03377597327825 193)
> (65.05253784397192 1152) (65.39270381588925 94)
> (65.47083273373606 364) (65.49786495126975 118))
>
>> (et-series 12 1200 4 "log(p)" "rms" "logflat")
>((115.64685457477877 171) (129.17157144597562 441)
> (161.10444518958764 31) (173.52364395668954 12)
> (175.43329621056932 2) (176.32669201889667 10)
> (179.26495027507892 53) (179.72578002829843 5)
> (187.07623916897785 3) (189.05935585753883 612)
> (193.71314760522884 41) (199.03836254569222 270))
>
>-Carl

🔗Carl Lumma <ekin@lumma.org>

3/4/2007 12:44:09 AM

A little on how to read these.

ETs <= 1200, first 40 primes:

>>> (et-series 12 1200 40 "sqrt(p)" "rms" "logflat")
>>((52.19292881106997 540) (52.71489509664127 525)
>> (52.83228952846678 655) (53.29973988785825 472)
>> (53.53499993485932 894) (53.56657654466209 1178)
>> (53.570269194091885 711) (53.669427895800546 364)
>> (53.69902837746059 581) (53.818895242304684 301)
>> (54.060867805115585 1065) (54.09186516073035 1106))
>>
>>> (et-series 12 1200 40 "log(p)" "rms" "logflat")
>>((63.37181427694144 894) (63.70218510093019 453)
>> (63.93641639551927 655) (64.37119921757083 540)
>> (64.40940464873894 301) (64.58928407527537 867)
>> (64.88705882352804 181) (65.03377597327825 193)
>> (65.05253784397192 1152) (65.39270381588925 94)
>> (65.47083273373606 364) (65.49786495126975 118))

7-limit:

>>> (et-series 12 1200 4 "log(p)" "rms" "logflat")
>>((115.64685457477877 171) (129.17157144597562 441)
>> (161.10444518958764 31) (173.52364395668954 12)
>> (175.43329621056932 2) (176.32669201889667 10)
>> (179.26495027507892 53) (179.72578002829843 5)
>> (187.07623916897785 3) (189.05935585753883 612)
>> (193.71314760522884 41) (199.03836254569222 270))

I think Graham learned some stuff about these different
choices. I should try to read his paper again now.

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/4/2007 12:02:50 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:

> > (et-series 12 1200 40 "sqrt(p)" "rms" "logflat")
> ((52.19292881106997 540) (52.71489509664127 525)

I don't have a clue what you are doing. Is there a prime limit 540
looks outstanding in? I went up to the 101-limit and the best I found,
which I would only call good, was the 43/45 limit.

🔗Carl Lumma <ekin@lumma.org>

3/4/2007 12:10:55 PM

>I think Graham learned some stuff about these different
>choices. I should try to read his paper again now.

I'm making a start now. First sentence of 1.2 - this only
applies to regular temperaments, right? Taking the best
approximations of primes in an ET, we won't always wind up
with a regular mapping, will we?

First paragraph of 1.4 - what does interval size have to
do with the proper weights for 7:1 and 8:1?

Bottom of page 3 - prime partials are strongest in typical
timbres?? What does "independent" mean?

Top of page 4 - composites like 5:2 will have a louder
complex of partials than a prime interval like 5:1 ... Does
this discussion actually make sense?

Equation 9 - I'm starting to lose track of the variables,
and I'm having to look them up in the text every time. Can
you put a legend somewhere and link the equations to it?

xi - coefficients of prime factorization of a ratio
hi - size of the ith prime in just intonation
ti - size of the ith prime, tempered
vi - weighted size of the ith prime in just intonation
wi - weighted size of the ith prime, tempered
di - error of the ith prime
bi - weighting factor for the ith prime
ei - weighted error of ith prime

Or better yet have mouse-overs for each variable?

It's never explained what e(x) is. It looks like the error
of a ratio x.

Comment on equations 9 & 10 - isn't there a way to write
eq. 9 such that the signs of the errors are kept and it
works, i.e. the sum of the weighted signed deviations?
Rather than throwing out the signs and taking the mean as
in eq. 10? Oh, I see you're using it to prove that TOP is
minimax. Hrm, ok.

Whoa. What does Tenney weighting have to do with the
probability a prime will occur in a composite ratio we
care about?

I don't follow the bit about tempered octaves balancing
the large and small intervals (pg. 5). Smallest intervals
with any given weight are the most likely to be musically
significant? And which theorists have given prime-based
measures a bad name by neglecting this?

...

Holy s***, this paper is a monster. Amazing!! Graham,
you really have done it. You've even got Herman's weighted
wedgie thing in there. This is definitely the closest thing
to a tuning-math paper I've seen. Unfortunately I won't be
able to master this stuff this year. I feel like this
should definitely be submitted somewhere. Unfortunately
it still contains enough inside tuning-theory references
to be shot down, probably. But my god, you're a genius.

-Carl

🔗Carl Lumma <ekin@lumma.org>

3/4/2007 12:18:44 PM

>> > (et-series 12 1200 40 "sqrt(p)" "rms" "logflat")
>> ((52.19292881106997 540) (52.71489509664127 525)
>
>I don't have a clue what you are doing. Is there a prime limit 540
>looks outstanding in? I went up to the 101-limit and the best I found,
>which I would only call good, was the 43/45 limit.

The 40th prime is 173. If you look for ETs < 1201 in the 173-limit,
540 should win on error * complexity, where complexity = 540^(40/39)
and error = the RMS of the sqrt(p)-weighted errors of the primes.
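
Spelled out as a quick check (trial division is fine at this size):

```python
def nth_prime(n):
    """n-th prime (1-indexed) by trial division."""
    primes, k = [], 2
    while len(primes) < n:
        if all(k % p for p in primes):
            primes.append(k)
        k += 1
    return primes[-1]

p = nth_prime(40)             # 173, so "40 primes" means the 173-limit
exponent = 40 / (40 - 1)      # logflat exponent pi(173)/(pi(173)-1) = 40/39
complexity = 540 ** exponent  # the size penalty applied to 540-et
```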

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/4/2007 1:26:20 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >I think Graham learned some stuff about these different
> >choices. I should try to read his paper again now.
>
> I'm making a start now.

Some further points

(1) Why bring up pre-weighting?
(2) Table 3 is kind of confusing if you are not used to seeing a
mapping/val presented thusly.
(3) It might be worthwhile to point out that the RMS-TOP is what one
obtains from the pseudoinverse.
(4) Which people tell you that TOP-Max is easier than TOP-RMS? Given
how easy the latter is, this is unlikely to be true.

🔗Carl Lumma <ekin@lumma.org>

3/5/2007 12:15:20 AM

At 12:18 PM 3/4/2007, you wrote:
>>> > (et-series 12 1200 40 "sqrt(p)" "rms" "logflat")
>>> ((52.19292881106997 540) (52.71489509664127 525)
>>
>>I don't have a clue what you are doing. Is there a prime limit 540
>>looks outstanding in? I went up to the 101-limit and the best I found,
>>which I would only call good, was the 43/45 limit.
>
>The 40th prime is 173. If you look for ETs < 1201 in the 173-limit,
>540 should win on error * complexity, where complexity = 540^(40/39)
>and error = the RMS of the sqrt(p)-weighted errors of the primes.
>
>-Carl

The point of going up to the 173-limit is to see if the results
converge on the zeta stuff.

-Carl

🔗Carl Lumma <ekin@lumma.org>

3/5/2007 1:19:52 AM

At 12:15 AM 3/5/2007, you wrote:
>At 12:18 PM 3/4/2007, you wrote:
>>>> > (et-series 12 1200 40 "sqrt(p)" "rms" "logflat")
>>>> ((52.19292881106997 540) (52.71489509664127 525)
>>>
>>>I don't have a clue what you are doing. Is there a prime limit 540
>>>looks outstanding in? I went up to the 101-limit and the best I found,
>>>which I would only call good, was the 43/45 limit.
>>
>>The 40th prime is 173. If you look for ETs < 1201 in the 173-limit,
>>540 should win on error * complexity, where complexity = 540^(40/39)
>>and error = the RMS of the sqrt(p)-weighted errors of the primes.
>>
>>-Carl
>
>The point of going up to the 173-limit is to see if the results
>converge on the zeta stuff.
>
>-Carl

I just found a thread in which Graham was testing high limits
to see if log weighting "stabilized", and found it didn't, and
Paul was saying why would you want it to, and Graham was replying
(correctly, in my opinion) 'if the 1000th prime changes your
result, your weighting can't be right'.

Gene, in the thread when you first suggested sqrt(p) weighting
to "ape" the zeta function, it isn't clear if you're talking
about error weighting (eg error/sqrt(p)) or complexity weighting
(eg complexity^sqrt(p)).

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/5/2007 5:10:33 AM

On 05/03/07, Carl Lumma <ekin@lumma.org> wrote:
> >I think Graham learned some stuff about these different
> >choices. I should try to read his paper again now.
>
> I'm making a start now. First sentence of 1.2 - this only
> applies to regular temperaments, right? Taking the best
> approximations of primes in an ET, we won't always wind up
> with a regular mapping, will we?

Yes, should be regular. But needn't be a temperament because it's
also true of JI.

> First paragraph of 1.4 - what does interval size have to
> do with the proper weights for 7:1 and 8:1?

It has to do with the weights for 2:1 and 8:1.

> Bottom of page 3 - prime partials are strongest in typical
> timbres?? What does "independent" mean?

Prime partials aren't strongest, but composite ones tend to be weaker
than their constituent primes. Independent means you can't produce
one from the others, like linear independence in vector spaces.

> Top of page 4 - composites like 5:2 will have a louder
> complex of partials than a prime interval like 5:1 ... Does
> this discussion actually make sense?

Which discussion? There's nothing about 5:2 on page 4.

> Equation 9 - I'm starting to lose track of the variables,
> and I'm having to look them up in the text every time. Can
> you put a legend somewhere and link the equations to it?

Equation 9 is described in words, which is how I tried to avoid this.

> xi - coefficients of prime factorization of a ratio
> hi - size of the ith prime in just intonation
> ti - size of the ith prime, tempered
> vi - weighted size of the ith prime in just intonation
> wi - weighted size of the ith prime, tempered
> di - error of the ith prime
> bi - weighting factor for the ith prime

buoyancy: reciprocal of weighting

> ei - weighted error of ith prime
>
> Or better yet have mouse-overs for each variable?

Dunno how to do mouse-overs. You don't need most of these most of the
time. Only wi, which depends on the temperament, and vi which it is
approximating, and the errors.

> It's never explained what e(x) is. It looks like the error
> of a ratio x.

Error of interval x.

> Comment on equations 9 & 10 - isn't there a way to write
> eq. 9 such that the signs of the errors are kept and it
> works, i.e. the sum of the weighted signed deviations?
> Rather than throwing out the signs and taking the mean as
> in eq. 10? Oh, I see you're using it to prove that TOP is
> minimax. Hrm, ok.

No, the sum of weighted prime deviations can be zero.

> Whoa. What does Tenney weighting have to do with the
> probability a prime will occur in a composite ratio we
> care about?

7 and 8 are about the same size, and 8 has three instances of 2. So 2
should have 3 times the weight of 7. Tenney weighting is this
extrapolated to infinity.

> I don't follow the bit about tempered octaves balancing
> the large and small intervals (pg. 5). Smallest intervals
> with any given weight are the most likely to be musically
> significant? And which theorists have given prime-based
> measures a bad name by neglecting this?

I know you don't follow that. I think small intervals contribute most
to dissonance, and are most likely to be used in melody. I think
you're neglecting it in your results :P (try a standard deviation).
Darreg & McLaren only used 7:4 (and one other?) along with the
5-limit. Lots of us have tried prime-limit spreadsheets, but
generally given up before publication.

> Holy s***, this paper is a monster. Amazing!! Graham,
> you really have done it. You've even got Herman's weighted
> wedgie thing in there. This is definitely the closest thing
> to a tuning-math paper I've seen. Unfortunately I won't be
> able to master this stuff this year. I feel like this
> should definitely be submitted somewhere. Unfortunately
> it still contains enough inside tuning-theory references
> to be shot down, probably. But my god, you're a genius.

It's not publishable because it doesn't answer any interesting
questions. But it's useful background.

Graham

🔗Graham Breed <gbreed@gmail.com>

3/5/2007 5:14:51 AM

On 05/03/07, Gene Ward Smith <genewardsmith@coolgoose.com> wrote:
> --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
> >
> > >I think Graham learned some stuff about these different
> > >choices. I should try to read his paper again now.
> >
> > I'm making a start now.
>
> Some further points
>
> (1) Why bring up pre-weighting?

It's a mess. At this stage it's easier to leave it in than take it
out. But I think this is what the prime-based measures are actually
some kind of average of.

> (2) Table 3 is kind of confusing if you are not used to seeing a
> mapping/val presented thusly.

Why Table 3 specifically?

> (3) It might be worthwhile to point out that the RMS-TOP is what one
> obtains from the pseudoinverse.

Why on earth??? The pseudoinverse is an obscure concept that looks
subordinate to least squares optimizations to me.

> (4) Which people tell you that TOP-Max is easier than TOP-RMS? Given
> how easy the latter is, this is unlikely to be true.

You did! In a long thread over a year ago.

Graham

🔗Carl Lumma <ekin@lumma.org>

3/5/2007 9:34:36 AM

>> >I think Graham learned some stuff about these different
>> >choices. I should try to read his paper again now.
>>
>> I'm making a start now. First sentence of 1.2 - this only
>> applies to regular temperaments, right? Taking the best
>> approximations of primes in an ET, we won't always wind up
>> with a regular mapping, will we?
>
>Yes, should be regular. But needn't be a temperament because it's
>also true of JI.

I later realized this was true.

>> First paragraph of 1.4 - what does interval size have to
>> do with the proper weights for 7:1 and 8:1?
>
>It has to do with the weights for 2:1 and 8:1.

I mean, you say it's a contradiction that 8:1 should be allowed
3 times the error of 7:1 *because* it's similar in size. So what?

>> Bottom of page 3 - prime partials are strongest in typical
>> timbres?? What does "independent" mean?
>
>Prime partials aren't strongest, but composite ones tend to be weaker
>than their constituent primes.

Oh, their *constituent* primes. Yes, this is true (because they're
lower partials). I don't feel like the text made this clear,
but maybe I should go back and reread it before pronouncing judgement.

>> Top of page 4 - composites like 5:2 will have a louder
>> complex of partials than a prime interval like 5:1 ... Does
>> this discussion actually make sense?
>
>Which discussion? There's nothing about 5:2 on page 4.

I'm saying 5:2.

>> Equation 9 - I'm starting to lose track of the variables,
>> and I'm having to look them up in the text every time. Can
>> you put a legend somewhere and link the equations to it?
>
>Equation 9 is described in words, which is how I tried to avoid this.
>
>> xi - coefficients of prime factorization of a ratio
>> hi - size of the ith prime in just intonation
>> ti - size of the ith prime, tempered
>> vi - weighted size of the ith prime in just intonation
>> wi - weighted size of the ith prime, tempered
>> di - error of the ith prime
>> bi - weighting factor for the ith prime
>
>buoyancy: reciprocal of weighting

If I say something is a weighting factor, that implies (to me)
I'm using it as an inverse. The term buoyancy is OK I guess, but
I don't picture myself using it.

>> ei - weighted error of ith prime
>>
>> Or better yet have mouse-overs for each variable?
>
>Dunno how to do mouse-overs. You don't need most of these most of the
>time. Only wi, which depends on the temperament, and vi which it is
>approximating, and the errors.

It's just an alphabet soup is all. Many programming languages
make you declare variables in a legend. And compilers are a lot
better than humans at keeping track of this kind of thing!

>> It's never explained what e(x) is. It looks like the error
>> of a ratio x.
>
>Error of interval x.

Yeah, OK. But it isn't stated in the text that I could find.

>> Comment on equations 9 & 10 - isn't there a way to write
>> eq. 9 such that the signs of the errors are kept and it
>> works, i.e. the sum of the weighted signed deviations?
>> Rather than throwing out the signs and taking the mean as
>> in eq. 10? Oh, I see you're using it to prove that TOP is
>> minimax. Hrm, ok.
>
>No, the sum of weighted prime deviations can be zero.

Hm, I guess you're right about that. Maybe it's the
max - min trick I'm thinking of.

>> Whoa. What does Tenney weighting have to do with the
>> probability a prime will occur in a composite ratio we
>> care about?
>
>7 and 8 are about the same size, and 8 has three instances of 2. So 2
>should have 3 times the weight of 7.

Again, what does size have to do with it?

>> Holy s***, this paper is a monster. Amazing!! Graham,
>> you really have done it. You've even got Herman's weighted
>> wedgie thing in there. This is definitely the closest thing
>> to a tuning-math paper I've seen. Unfortunately I won't be
>> able to master this stuff this year. I feel like this
>> should definitely be submitted somewhere. Unfortunately
>> it still contains enough inside tuning-theory references
>> to be shot down, probably. But my god, you're a genius.
>
>It's not publishable because it doesn't answer any interesting
>questions. But it's useful background.
>
> Graham

Lots of papers are method papers, and this one would be a
monster.

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/5/2007 1:14:22 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:

> Gene, in the thread when you first suggested sqrt(p) weighting
> to "ape" the zeta function, it isn't clear if you're talking
> about error weighting (eg error/sqrt(p)) or complexity weighting
> (eg complexity^sqrt(p)).

Error weighting.

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/5/2007 1:18:03 PM

--- In tuning-math@yahoogroups.com, "Graham Breed" <gbreed@...> wrote:

> > (4) Which people tell you that TOP-Max is easier than TOP-RMS? Given
> > how easy the latter is, this is unlikely to be true.
>
> You did! In a long thread over a year ago.

Did not. I said RMS was easier, but Max wasn't that hard.

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/5/2007 1:35:34 PM

--- In tuning-math@yahoogroups.com, "Graham Breed" <gbreed@...> wrote:

> > (3) It might be worthwhile to point out that the RMS-TOP is what one
> > obtains from the pseudoinverse.
>
> Why on earth??? The pseudoinverse is an obscure concept that looks
> subordinate to least squares optimizations to me.

(1) It's a known concept, and hence you are relating the tuning to
known math.

(2) If you have a system which can deal with matrices, I think it will
be the easiest way to compute the tuning.

(3) It provides for another iota of justification for the tuning as
based on a significant mathematical concept.

(4) It's dead easy to do now, because I've given a web page you can
cite.

(5) Why not do it? What's the down side?

(6) I think it provides a useful point of view when explaining why you
get a consistent tuning scheme via least-squares optimizing (it's a
projection matrix.)

(7) Does anyone else want to weigh in? Does this pseudoinverse business
seem useful to someone other than me?
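
As an illustration of points (2) and (6), here is a minimal NumPy
sketch (the Tenney-style weighting convention is an assumption)
showing that the pseudoinverse and a plain least-squares solver give
the same optimal step size for an equal temperament:

```python
import numpy as np

# 5-limit patent val for 12-et and the just sizes of primes 2, 3, 5
val = np.array([12.0, 19.0, 28.0])
just = np.log2([2.0, 3.0, 5.0])

# Tenney-style weighting: divide each prime's entry by log2(p)
w = 1.0 / just
A = (val * w)[:, None]   # weighted val, as a one-column matrix
b = just * w             # weighted just sizes (all ones under this weighting)

# Optimal step size (fraction of an octave), two equivalent routes:
g_lstsq = np.linalg.lstsq(A, b, rcond=None)[0][0]
g_pinv = (np.linalg.pinv(A) @ b)[0]

assert np.isclose(g_lstsq, g_pinv)
step_cents = 1200 * g_lstsq  # a hair under 100 cents for 12-et
```

The lstsq route is the "linear least squares" framing; the pinv route
is the pseudoinverse framing. For a full-rank problem like this they
coincide exactly.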

🔗Graham Breed <gbreed@gmail.com>

3/6/2007 1:06:01 AM

On 06/03/07, Gene Ward Smith <genewardsmith@coolgoose.com> wrote:
> --- In tuning-math@yahoogroups.com, "Graham Breed" <gbreed@...> wrote:
>
> > > (3) It might be worthwhile to point out that the RMS-TOP is what one
> > > obtains from the pseudoinverse.
> >
> > Why on earth??? The pseudoinverse is an obscure concept that looks
> > subordinate to least squares optimizations to me.
>
> (1) It's a known concept, and hence you are relating the tuning to
> known math.

Linear least squares approximation is a known concept, and I intend to
write more about the analogy. The Moore-Penrose pseudoinverse can be
related to any linear least squares problem, and looks more obscure.
My objective reference for obscurity is one Oxford Concise Dictionary
of Mathematics -- it has an entry for "least squares" but nothing
about pseudoinverses that I can find. What other properties of a
linear least squares problem should I mention? The general
(non-Tenney weighted) case is exactly the standard problem, no more no
less. (Tenney weighting makes it slightly simpler.)

Maybe the pseudoinverse will be better known to algebraicists. But if
you want an explanation for algebraicists you're better off writing it
yourself. You can junk a lot of the baggage I need for a more general
audience.

> (2) If you have a system which can deal with matrices, I think it will
> be the easiest way to compute the tuning.

My system for dealing with matrices (Numeric Python) also comes with a
linear least squares solver. It's not only simpler but more accurate than
the naive matrix approach, which implies a pseudoinverse. I'm not
sure that even a library function for the pseudoinverse would give the
same accuracy (the problem is to do with floating point precision -- I
mention it in the PDF).

The naive calculation of the pseudoinverse also entails a square
matrix (A*A from your web page) that's used to calculate the scalar
complexity. So if you want the tuning, error and complexity I don't
see that knowing about the pseudoinverse simplifies anything (although
a library function for the adjoint would be nice).

The rank 1 and 2 special cases are easy enough that you don't need a
matrix library at all. A statistics library might help.
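For concreteness, the two routes being compared can be sketched side by side (the weighted meantone numbers below are an illustrative assumption, not taken from the PDF or Gene's page):

```python
import numpy as np

# Illustrative Tenney-weighted 5-limit meantone mapping.
V = np.array([[1.0, 1 / np.log2(3), 0.0],
              [0.0, 1 / np.log2(3), 4 / np.log2(5)]])
j = np.ones(3)

# Naive route: explicit pseudoinverse via the normal equations,
# pinv(V) = V.T @ inv(V @ V.T) when V has full row rank.
g_naive = j @ V.T @ np.linalg.inv(V @ V.T)

# Library route: hand the least squares problem V.T g = j to a
# solver directly, as Graham describes doing.
g_lstsq, *_ = np.linalg.lstsq(V.T, j, rcond=None)

# On a tiny, well-conditioned problem like this the answers agree;
# the accuracy concern only bites for ill-conditioned mappings.
print(np.max(np.abs(g_naive - g_lstsq)))
```

The floating-point point stands either way: the solver avoids forming V @ V.T, which squares the condition number of the problem.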

> (3) It provides for another iota of justification for the tuning as
> based on a significant mathematical concept.

How so? This is a linear least squares problem, which is a very
common mathematical concept and an obvious one to apply in this case.
Why is it surprising that it behaves like a linear least squares
problem? Whether it's the correct approach or not is a separate
issue.

> (4) It's dead easy to do now, because I've given a web page you can
> cite.

It's always been dead easy because I could cite Mathworld. I can't
see anything in your page about why this is important.

> (5) Why not do it? What's the down side?

It makes the PDF that bit more complex, for no benefit that I can see.
Wouldn't anybody who knows about the pseudoinverse know that it ties
in with linear least squares optimizations? Wouldn't anybody who knows
the formula for the pseudoinverse recognize it in my equation 26?
What audience are you thinking of here?

> (6) I think it provides a useful point of view when explaining why you
> get a consistent tuning scheme via least-squares optimizing (it's a
> projection matrix.)

In what sense is this consistency not a constraint of the optimization?
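Gene's projection-matrix claim in point (6) can be checked numerically. In this sketch the weighted meantone mapping is an illustrative assumption; the check itself is just that pinv(V) @ V is an orthogonal projection, so applying the least-squares tuning a second time changes nothing:

```python
import numpy as np

# Illustrative Tenney-weighted 5-limit meantone mapping.
V = np.array([[1.0, 1 / np.log2(3), 0.0],
              [0.0, 1 / np.log2(3), 4 / np.log2(5)]])

# The matrix that sends weighted interval vectors to their
# least-squares tempered versions is a projection:
P = np.linalg.pinv(V) @ V
print(np.allclose(P @ P, P))   # idempotent: P^2 = P
print(np.allclose(P.T, P))     # symmetric: an orthogonal projection
```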

> (7) Does anyone else want to weigh in? Does this pseudoinverse business
> seem useful to someone other than me?

The pseudoinverse may or may not be useful, but I don't think it's
worth mentioning when all I'm trying to do is explain the simple ways
of calculating errors and complexities. And finding that difficult.

Graham

🔗Graham Breed <gbreed@gmail.com>

3/6/2007 1:24:05 AM

On 06/03/07, Carl Lumma <ekin@lumma.org> wrote:

> >> First paragraph of 1.4 - what does interval size have to
> >> do with the proper weights for 7:1 and 8:1?
> >
> >It has to do with the weights for 2:1 and 8:1.
>
> I mean, you say it's a contradiction that 8:1 should be allowed
> 3 times the error of 7:1 *because* it's similar in size. So what?

I don't say it's a contradiction, only that it's unreasonable. Do you
think it's reasonable for three octaves to be three times as out of
tune as a 7:1, for equivalent badness? If so, use equal weighting.
The point at that stage is that it's not a default choice -- it gives
a very specific behaviour. Later on, Tenney weighting is justified
because it treats equally sized harmonics equally. You can take or
leave that but it's the most obvious weighting scheme (more obvious
than equal weighting if you look at it the right way).
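The 8:1 versus 7:1 point can be put in numbers (the one-unit error budget per prime is purely illustrative):

```python
import math

w = 1.0  # one "unit" of allowed mistuning per prime (illustrative)

# Equal weighting: every prime gets the same absolute error budget.
# 8:1 = 2^3 accumulates three copies of prime 2's error, so it can
# drift three times as far as the similar-sized 7:1.
equal = {p: w for p in (2, 7)}
ratio_equal = 3 * equal[2] / equal[7]        # exactly 3

# Tenney weighting: prime p's budget is proportional to log2(p),
# so 8:1 (3 * log2(2)) and 7:1 (log2(7) ~ 2.81) come out comparable.
tenney = {p: w * math.log2(p) for p in (2, 7)}
ratio_tenney = 3 * tenney[2] / tenney[7]     # 3 / log2(7), about 1.07
```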

> >> Bottom of page 3 - prime partials are strongest in typical
> >> timbres?? What does "independent" mean?
> >
> >Prime partials aren't strongest, but composite ones tend to be weaker
> >than their constituent primes.
>
> Oh, their *constituent* primes. Yes, this is true (because they're
> lower partials). I don't feel like the text made this clear,
> but maybe I should go back and reread it before pronouncing judgement.

It probably isn't clear. But I wasn't planning to revise it this
week, so I'll explain it here as you're reading it this week.

> >> Top of page 4 - composites like 5:2 will have a louder
> >> complex of partials than a prime interval like 5:1 ... Does
> >> this discussion actually make sense?
> >
> >Which discussion? There's nothing about 5:2 on page 4.
>
> I'm saying 5:2.

You know, I think I played with the margins, so my top of page 4 might
not be your top of page 4, and I may be missing something obvious.

Hmph, I still can't upload to my website :(

> >> Equation 9 - I'm starting to lose track of the variables,
> >> and I'm having to look them up in the text every time. Can
> >> you put a legend somewhere and link the equations to it?
> >
> >Equation 9 is described in words, which is how I tried to avoid this.
> >
> >> xi - coefficients of prime factorization of a ratio
> >> hi - size of the ith prime in just intonation
> >> ti - size of the ith prime, tempered
> >> vi - weighted size of the ith prime in just intonation
> >> wi - weighted size of the ith prime, tempered
> >> di - error of the ith prime
> >> bi - weighting factor for the ith prime
> >
> >buoyancy: reciprocal of weighting
>
> If I say something is a weighting factor, that implies (to me)
> I'm using it as an inverse. The term buoyancy is OK I guess, but
> I don't picture myself using it.

Weighting has a particular meaning in a mathematical sense, and it's
the opposite to what these factors do. So it could easily be
misleading if I called it a weighting factor.

> >> ei - weighted error of ith prime
> >>
> >> Or better yet have mouse-overs for each variable?
> >
> >Dunno how to do mouse-overs. You don't need most of these most of the
> >time. Only wi, which depends on the temperament, and vi which it is
> >approximating, and the errors.
>
> It's just an alphabet soup is all. Many programming languages
> make you declare variables in a legend. And compilers are a lot
> better than humans at keeping track of this kind of thing!

Languages aimed at humans rather than compilers generally don't.
Anyway, I certainly intend to write a table of mathematical notation
and a glossary given sufficient abundance of tuits of the right shape.
I'd also like the text to be understandable without one. I'm trying
to avoid this problem as I encountered it in Rothenberg and Blackwood.
The main way of doing that is to duplicate formulae with words.
(Good style for programming in any language is to use descriptive
variable names. Mathematical notation solves a different problem.)
Next time I look at this I'll clean up the cases where I don't do so
adequately -- and I certainly spotted some instances last night.

> >> It's never explained what e(x) is. It looks like the error
> >> of a ratio x.
> >
> >Error of interval x.
>
> Yeah, OK. But it isn't stated in the text that I could find.

Maybe not. I think I worked backwards changing something else to e(x)
for consistency.

> >> Comment on equations 9 & 10 - isn't there a way to write
> >> eq. 9 such that the signs of the errors are kept and it
> >> works, i.e. the sum of the weighted signed deviations?
> >> Rather than throwing out the signs and taking the mean as
> >> in eq. 10? Oh, I see you're using it to prove that TOP is
> >> minimax. Hrm, ok.
> >
> >No, the sum of weighted prime deviations can be zero.
>
> Hm, I guess you're right about that. Maybe its the
> max - min trick I'm thinking of.
>
> >> Whoa. What does Tenney weighting have to do with the
> >> probability a prime will occur in a composite ratio we
> >> care about?
> >
> >7 and 8 are about the same size, and 8 has three instances of 2. So 2
> >should have 3 times the weight of 7.
>
> Again, what does size have to do with it?

That's what Tenney weighting does. And for prime powers you can't get
away from it.

> >> Holy s***, this paper is a monster. Amazing!! Graham,
> >> you really have done it. You've even got Herman's weighted
> >> wedgie thing in there. This is definitely the closest thing
> >> to a tuning-math paper I've seen. Unfortunately I won't be
> >> able to master this stuff this year. I feel like this
> >> should definitely be submitted somewhere. Unfortunately
> >> it still contains enough inside tuning-theory references
> >> to be shot down, probably. But my god, you're a genius.
> >
>It's not publishable because it doesn't answer any interesting
> >questions. But it's useful background.
>
> Lots of papers are method papers, and this one would be a
> monster.

It's not the method for an interesting problem. It's part of the
background for the methods for finding equal temperaments, rank 2
temperaments from paired equal temperaments, and guaranteed complete
rank 2 temperament searches. That's three interesting papers I might
get round to writing.

Graham

🔗Carl Lumma <ekin@lumma.org>

3/6/2007 9:41:41 AM

At 01:24 AM 3/6/2007, you wrote:
>On 06/03/07, Carl Lumma <ekin@lumma.org> wrote:
>
>> >> First paragraph of 1.4 - what does interval size have to
>> >> do with the proper weights for 7:1 and 8:1?
>> >
>> >It has to do with the weights for 2:1 and 8:1.
>>
>> I mean, you say it's a contradiction that 8:1 should be allowed
>> 3 times the error of 7:1 *because* it's similar in size. So what?
>
>I don't say it's a contradiction, only that it's unreasonable. Do you
>think it's reasonable for three octaves to be three times as out of
>tune as a 7:1, for equivalent badness?

Bigger intervals should generally be allowed more mistuning than
small ones, but that doesn't mean intervals near in size should be
allowed similar mistuning. Why would it? (unless we're talking
about really huge intervals)

>Later on, Tenney weighting is justified
>because it treats equally sized harmonics equally.

Oh, if you restrict yourself to :1 intervals, then you can say
that size tends to tell you amplitude, and therefore tends to
tell the strength of a mistuning interaction.
But we need a further rationale that justifies Tenney weighting
for all intervals.

>> >> Top of page 4 - composites like 5:2 will have a louder
>> >> complex of partials than a prime interval like 5:1 ... Does
>> >> this discussion actually make sense?
>> >
>> >Which discussion? There's nothing about 5:2 on page 4.
>>
>> I'm saying 5:2.
>
>You know, I think I played with the margins, so my top of page 4 might
>not be your top of page 4, and I may be missing something obvious.

I'm *saying* 5:2. Not you, me.

>> >> Equation 9 - I'm starting to lose track of the variables,
>> >> and I'm having to look them up in the text every time. Can
>> >> you put a legend somewhere and link the equations to it?
>> >
>> >Equation 9 is described in words, which is how I tried to avoid this.
>> >
>> >> xi - coefficients of prime factorization of a ratio
>> >> hi - size of the ith prime in just intonation
>> >> ti - size of the ith prime, tempered
>> >> vi - weighted size of the ith prime in just intonation
>> >> wi - weighted size of the ith prime, tempered
>> >> di - error of the ith prime
>> >> bi - weighting factor for the ith prime
>> >
>> >buoyancy: reciprocal of weighting
>>
>> If I say something is a weighting factor, that implies (to me)
>> I'm using it as an inverse. The term buoyancy is OK I guess, but
>> I don't picture myself using it.
>
>Weighting has a particular meaning in a mathematical sense, and it's
>the opposite to what these factors do. So it could easily be
>misleading if I called it a weighting factor.

Oh, OK.

>> >> Or better yet have mouse-overs for each variable?
>> >
>> >Dunno how to do mouse-overs. You don't need most of these most of the
>> >time. Only wi, which depends on the temperament, and vi which it is
>> >approximating, and the errors.
>>
>> It's just an alphabet soup is all. Many programming languages
>> make you declare variables in a legend. And compilers are a lot
>> better than humans at keeping track of this kind of thing!
>
>Languages aimed at humans rather than compilers generally don't.

They generally don't instantiate a dozen one-letter pronouns and
expect readers to keep track of them over 9+ pages.

>Anyway, I certainly intend to write a table of mathematical notation
>and a glossary given sufficient abundance of tuits of the right shape.
> I'd also like the text to be understandable without one. I'm trying
>to avoid this problem as I encountered it in Rothenberg and Blackwood.
> The main way of doing that is to duplicate formulae with words.

That's a good method, but can get hairy.

>(Good style for programming in any language is to use descriptive
>variable names. Mathematical notation solves a different problem.)

It solves the same problem, but generally compresses expressions
more than programming languages do, so the variables are shorter.

>> >> Holy s***, this paper is a monster. Amazing!! Graham,
>> >> you really have done it. You've even got Herman's weighted
>> >> wedgie thing in there. This is definitely the closest thing
>> >> to a tuning-math paper I've seen. Unfortunately I won't be
>> >> able to master this stuff this year. I feel like this
>> >> should definitely be submitted somewhere. Unfortunately
>> >> it still contains enough inside tuning-theory references
>> >> to be shot down, probably. But my god, you're a genius.
>> >
>> >It's not publishable because it doesn't answer any interesting
>> >questions. But it's useful background.
>>
>> Lots of papers are method papers, and this one would be a
>> monster.
>
>It's not the method for an interesting problem. It's part of the
>background for the methods for finding equal temperaments, rank 2
>temperaments from paired equal temperaments, and guaranteed complete
>rank 2 temperament searches. That's three interesting papers I might
>get round to writing.

Maybe it's four papers with this as the first.

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/7/2007 2:25:08 AM

On 07/03/07, Carl Lumma <ekin@lumma.org> wrote:
> At 01:24 AM 3/6/2007, you wrote:
> >On 06/03/07, Carl Lumma <ekin@lumma.org> wrote:
> >
> >> >> First paragraph of 1.4 - what does interval size have to
> >> >> do with the proper weights for 7:1 and 8:1?
> >> >
> >> >It has to do with the weights for 2:1 and 8:1.
> >>
> >> I mean, you say it's a contradiction that 8:1 should be allowed
> >> 3 times the error of 7:1 *because* it's similar in size. So what?
> >
> >I don't say it's a contradiction, only that it's unreasonable. Do you
> >think it's reasonable for three octaves to be three times as out of
> >tune as a 7:1, for equivalent badness?
>
> Bigger intervals should generally be allowed more mistuning than
> small ones, but that doesn't mean intervals near in size should be
> allowed similar mistuning. Why would it? (unless we're talking
> about really huge intervals)

The size of harmonics relative to the root (not intervals in general)
is the simplest way of doing it.

> >Later on, Tenney weighting is justified
> >because it treats equally sized harmonics equally.
>
> Oh, if you restrict yourself to :1 intervals, then you can say
> that size tends to tell you amplitude, and therefore tends to
> tell the strength of a mistuning interaction.
> But we need a further rationale that justifies Tenney weighting
> for all intervals.

Maybe you can piece that together for some other document. For now,
it's only a question of Tenney weighting compared to other weightings,
and it's fully determined by the harmonics.

> >> >> Top of page 4 - composites like 5:2 will have a louder
> >> >> complex of partials than a prime interval like 5:1 ... Does
> >> >> this discussion actually make sense?
> >> >
> >> >Which discussion? There's nothing about 5:2 on page 4.
> >>
> >> I'm saying 5:2.
> >
> >You know, I think I played with the margins, so my top of page 4 might
> >not be your top of page 4, and I may be missing something obvious.
>
> I'm *saying* 5:2. Not you, me.

Yes, and I'm saying I don't know what discussion you're referring to.

> >> >> Or better yet have mouse-overs for each variable?
> >> >
> >> >Dunno how to do mouse-overs. You don't need most of these most of the
> >> >time. Only wi, which depends on the temperament, and vi which it is
> >> >approximating, and the errors.
> >>
> >> It's just an alphabet soup is all. Many programming languages
> >> make you declare variables in a legend. And compilers are a lot
> >> better than humans at keeping track of this kind of thing!
> >
> >Languages aimed at humans rather than compilers generally don't.
>
> They generally don't instantiate a dozen one-letter pronouns and
> expect readers to keep track of them over 9+ pages.

Neither do I. You're complaining about symbols that are only used on
pages 2 to 4 with a reprise (and re-definition) on pp.16-17. Quite
often, now I check it, I can see that I do remind you what they mean.

> >Anyway, I certainly intend to write a table of mathematical notation
> >and a glossary given sufficient abundance of tuits of the right shape.
> > I'd also like the text to be understandable without one. I'm trying
> >to avoid this problem as I encountered it in Rothenberg and Blackwood.
> > The main way of doing that is to duplicate formulae with words.
>
> That's a good method, but can get hairy.

If the concepts are difficult to take in they're going to require
study however I present them.

> >(Good style for programming in any language is to use descriptive
> >variable names. Mathematical notation solves a different problem.)
>
> It solves the same problem, but generally compresses expressions
> more than programming languages do, so the variables are shorter.

No, I don't think so. Most programming languages have evolved to
explain things step by step, with self-explanatory modules.
Mathematical notation is for taking in complex expressions at a
glance. That means a lot of the meaning is contained in the notation,
and math texts always spend a lot of time on the notation.

> >> Lots of papers are method papers, and this one would be a
> >> monster.
> >
> >It's not the method for an interesting problem. It's part of the
> >background for the methods for finding equal temperaments, rank 2
> >temperaments from paired equal temperaments, and guaranteed complete
> >rank 2 temperament searches. That's three interesting papers I might
> >get round to writing.
>
> Maybe it's four papers with this as the first.

However good the set, it'd be five papers, with The Regular Mapping
Paradigm as the first.

Graham

🔗Carl Lumma <ekin@lumma.org>

3/7/2007 8:38:33 AM

>> >(Good style for programming in any language is to use descriptive
>> >variable names. Mathematical notation solves a different problem.)
>>
>> It solves the same problem, but generally compresses expressions
>> more than programming languages do, so the variables are shorter.
>
>No, I don't think so. Most programming languages have evolved to
>explain things step by step, with self-explanatory modules.
>Mathematical notation is for taking in complex expressions at a
>glance. That means a lot of the meaning is contained in the notation,
>and math texts always spend a lot of time on the notation.

It sounds like you're agreeing with me.

>> >> Lots of papers are method papers, and this one would be a
>> >> monster.
>> >
>> >It's not the method for an interesting problem. It's part of the
>> >background for the methods for finding equal temperaments, rank 2
>> >temperaments from paired equal temperaments, and guaranteed complete
>> >rank 2 temperament searches. That's three interesting papers I might
>> >get round to writing.
>>
>> Maybe it's four papers with this as the first.
>
>However good the set, it'd be five papers, with The Regular Mapping
>Paradigm as the first.

Hey, that sounds familiar, but I don't have a PDF from you here.
Is it on the web?

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/10/2007 4:25:11 AM

On 08/03/07, Carl Lumma <ekin@lumma.org> wrote:

> >However good the set, it'd be five papers, with The Regular Mapping
> >Paradigm as the first.
>
> Hey, that sounds familiar, but I don't have a PDF from you here.
> Is it on the web?

No, it's only HTML because it doesn't need any mathematical formulae.
I did a print-to-PDF because somebody wanted PDF, but the result
wasn't what they wanted.

Graham