kees complexity and kees expressibility

🔗Carl Lumma <ekin@lumma.org>

10/25/2005 5:34:15 PM

Can someone here provide definitions of these, in terms
of a comma n/d? Thanks.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

10/25/2005 8:54:06 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> Can someone here provide definitions of these, in terms
> of a comma n/d? Thanks.

Let u/v be the odd part of n/d, and then take the maximum of u and v,
and the log base two of that. That's one way to define the Kees norm
("expressibility" is a name I resist.)

If <<w1 w2 ... wk|| is a p-limit wedgie, take the first pi(p)-1
coefficients, [w1, ..., wn]. Then divide through by logs of primes,
[w1/log2(3), w2/log2(5), ... wn/log2(p)]. If we call this
[u1, u2, ... un] then take the maximum of the |u_i| and the |u_i - u_j|,
for all coefficients and pairs. This is Kees generator complexity.
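Stated in code, assuming the ratio is given as integers n and d and that the wedgie coefficients have already been computed elsewhere (a rough Python sketch; the function names are mine, and meantone's 5-limit wedgie <<1 4 4|| is used only as an example):

    from math import log2

    def odd_part(n):
        # strip every factor of 2
        while n % 2 == 0:
            n //= 2
        return n

    def expressibility(n, d):
        # Kees norm of n/d: log base 2 of the larger odd part
        u, v = odd_part(n), odd_part(d)
        return log2(max(u, v))

    def kees_generator_complexity(w, odd_primes):
        # w: the first pi(p)-1 wedgie coefficients [w1, ..., wn]
        # odd_primes: the odd primes [3, 5, ..., p] in the same order
        u = [wi / log2(q) for wi, q in zip(w, odd_primes)]
        diffs = [abs(ui - uj) for i, ui in enumerate(u) for uj in u[i + 1:]]
        return max([abs(ui) for ui in u] + diffs)

    print(expressibility(81, 80))                      # log2(81), about 6.34
    print(kees_generator_complexity([1, 4], [3, 5]))   # meantone <<1 4 4||, about 1.72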

🔗Carl Lumma <ekin@lumma.org>

10/25/2005 10:01:55 PM

>> Can someone here provide definitions of these, in terms
>> of a comma n/d? Thanks.
>
>Let u/v be the odd part of n/d, and then take the maximum of u and v,
>and the log base two of that. That's one way to define the Kees norm
>("expressibility" is a name I resist.)

Thanks Gene.

>If <<w1 w2 ... wk|| is a p-limit wedgie, take the first pi(p)-1
>coefficients, [w1, ..., wn]. Then divide through by logs of primes,
>[w1/log2(3), w2/log2(5), ... wn/log2(p)]. If we call this
>[u1, u2, ... un] then take the maximum of the |u_i| and the |u_i - u_j|,
>for all coefficients and pairs. This is Kees generator complexity.

Ah. I thought one of these was: remove all factors of 2 from
n and d and then take the log of their product. It looks like
you originally tried to do the same thing...

>> Why is this preferable to removing any factors of 2 and taking the
>> product of numerator and denominator?

...but Paul wrote...

>> It's *way* preferable. The latter is based on a false view of
>> octave-reducing the tenney lattice, at best. Do you think 5:3 and
>> 15:8 should count as equally 'distant' octave-equivalence classes
>> from 1:1? What I was asking about is supported by Partch,
>> octave-equivalent harmonic entropy, and pretty straightforward
>> explanations I posted for Maximiliano on the tuning list . .

However, in the 15-limit, I think 5:3 and 15:8 could be equally
distant, in the sense that they both allow modulations by common
dyad (in fact, 15:8 allows more such modulations than 5:3).

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

10/26/2005 4:38:43 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> Can someone here provide definitions of these, in terms
> >> of a comma n/d? Thanks.
> >
> >Let u/v be the odd part of n/d, and then take the maximum of u and v,
> >and the log base two of that. That's one way to define the Kees norm
> >("expressibility" is a name I resist.)
>
> Thanks Gene.
>
> >If <<w1 w2 ... wk|| is a p-limit wedgie, take the first pi(p)-1
> >coefficients, [w1, ..., wn]. Then divide through by logs of primes,
> >[w1/log2(3), w2/log2(5), ... wn/log2(p)]. If we call this
> >[u1, u2, ... un] then take the maximum of the |u_i| and the |u_i - u_j|,
> >for all coefficients and pairs. This is Kees generator complexity.
>
> Ah. I thought one of these was: remove all factors of 2 from
> n and d and then take the log of their product.

No, that's not something that I think even deserves a name, since
it's useless as far as I can tell.

> It looks like
> you originally tried to do the same thing...
>
> >> Why is this preferable to removing any factors of 2 and taking the
> >> product of numerator and denominator?
>
> ...but Paul wrote...
>
> >> It's *way* preferable. The latter is based on a false view of
> >> octave-reducing the tenney lattice, at best. Do you think 5:3 and
> >> 15:8 should count as equally 'distant' octave-equivalence classes
> >> from 1:1? What I was asking about is supported by Partch,
> >> octave-equivalent harmonic entropy, and pretty straightforward
> >> explanations I posted for Maximiliano on the tuning list . .
>
> However, in the 15-limit, I think 5:3 and 15:8 could be equally
> distant,

OK but the argument above was based on a prime or odd limit of 5, not
a higher limit. You'll have to show me how you propose to construct a
lattice for the 15-limit and then I can respond to this.

> in the sense that they both allow modulations by common
> dyad (in fact, 15:8 allows more such modulations than 5:3).

I don't get that. Can you attempt to show me how you arrived at that?

🔗Carl Lumma <ekin@lumma.org>

10/26/2005 5:14:12 PM

>> one of these was: remove all factors of 2 from
>> n and d and then take the log of their product.
//
>> in the 15-limit, I think 5:3 and 15:8 could be equally
>> distant,
>
>OK but the argument above was based on a prime or odd limit of 5, not
>a higher limit. You'll have to show me how you propose to construct a
>lattice for the 15-limit and then I can respond to this.

Let's say we don't know what harmonic limit composers will be
working with, or even if they'll be complete. We're given a comma.
It seems like one thing to do is assume that any factor in there
is going to be considered consonant. We can remove the distinction
between prime and odd factors with log weighting, remove the 2s
for octave-equivalence, and...

>> in the sense that they both allow modulations by common
>> dyad (in fact, 15:8 allows more such modulations than 5:3).
>
>I don't get that. Can you attempt to show me how you arrived at that?

The only way to modulate in mutually-prime-limit JI and keep
a dyad is by modulating from major to minor. But with odd
identities in the chords, you can do things like
C,E,G,D -> G,B,D,A.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

10/26/2005 5:57:03 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> one of these was: remove all factors of 2 from
> >> n and d and then take the log of their product.
> //
> >> in the 15-limit, I think 5:3 and 15:8 could be equally
> >> distant,
> >
> >OK but the argument above was based on a prime or odd limit of 5, not
> >a higher limit. You'll have to show me how you propose to construct a
> >lattice for the 15-limit and then I can respond to this.
>
> Let's say we don't know what harmonic limit composers will be
> working with, or even if they'll be complete. We're given a comma.
> It seems like one thing to do is assume that any factor in there
> is going to be considered consonant.

Prime factor?

> We can remove the distinction
> between prime and odd factors with log weighting,

That doesn't remove the distinction as far as I can see. What do you
mean?

> remove the 2s
> for octave-equivalence, and...

Not following.

> >> in the sense that they both allow modulations by common
> >> dyad (in fact, 15:8 allows more such modulations than 5:3).
> >
>I don't get that. Can you attempt to show me how you arrived at that?
>
> The only way to modulate in mutually-prime-limit JI and keep
> a dyad is by modulating from major to minor. But with odd
> identities in the chords, you can do things like
> C,E,G,D -> G,B,D,A.

I still don't see it. How is it that "15:8 allows more such
modulations than 5:3"? Show me. I assumed you were talking about 15-
limit chords, but if not, please fill me in.

🔗Carl Lumma <ekin@lumma.org>

10/26/2005 6:40:47 PM

>> >> one of these was: remove all factors of 2 from
>> >> n and d and then take the log of their product.
>> //
>> >> in the 15-limit, I think 5:3 and 15:8 could be equally
>> >> distant,
>> >
>> >OK but the argument above was based on a prime or odd limit of
>> >5, not a higher limit. You'll have to show me how you propose
>> >to construct a lattice for the 15-limit and then I can respond
>> >to this.
>>
>> Let's say we don't know what harmonic limit composers will be
>> working with, or even if they'll be complete. We're given a comma.
>> It seems like one thing to do is assume that any factor in there
>> is going to be considered consonant.
>
>Prime factor?

Any factor.

>> We can remove the distinction
>> between prime and odd factors with log weighting,
>
>That doesn't remove the distinction as far as I can see. What do you
>mean?

log(35) = log(7) + log(5) but 35 != 7+5

>> remove the 2s for octave-equivalence, and...
>
>Not following.

It seems that if (log (* n d)) is a valid taxicab distance,
(log (apply * (remove 2s (factor (* n d))))) should be also.
If you can parse that.
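A rough Python rendering of that pseudocode, with n/d assumed to be in
lowest terms (the function names are mine): log2(n*d) is the Tenney
harmonic distance, and the second function removes the 2s first.

    from math import log2

    def tenney_hd(n, d):
        # (log (* n d)): Tenney harmonic distance
        return log2(n * d)

    def octave_equivalent_hd(n, d):
        # (log (apply * (remove 2s (factor (* n d))))):
        # the same thing after stripping all factors of 2
        while n % 2 == 0:
            n //= 2
        while d % 2 == 0:
            d //= 2
        return log2(n * d)

    print(tenney_hd(5, 3))              # about 3.91
    print(octave_equivalent_hd(15, 8))  # also about 3.91
    print(octave_equivalent_hd(5, 3))   # about 3.91: 5:3 and 15:8 tie

Note that under this measure 5:3 and 15:8 come out the same, which is
exactly the point disputed below.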

>> The only way to modulate in mutually-prime-limit JI and keep
>> a dyad is by modulating from major to minor. But with odd
>> identities in the chords, you can do things like
>> C,E,G,D -> G,B,D,A.
>
>I still don't see it. How is it that "15:8 allows more such
>modulations than 5:3"? Show me. I assumed you were talking about 15-
>limit chords, but if not, please fill me in.

Howabout C,E,G,B -> G,B,D,F# ?

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

10/27/2005 1:35:42 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> one of these was: remove all factors of 2 from
> >> >> n and d and then take the log of their product.
> >> //
> >> >> in the 15-limit, I think 5:3 and 15:8 could be equally
> >> >> distant,
> >> >
> >> >OK but the argument above was based on a prime or odd limit of
> >> >5, not a higher limit. You'll have to show me how you propose
> >> >to construct a lattice for the 15-limit and then I can respond
> >> >to this.
> >>
> >> Let's say we don't know what harmonic limit composers will be
> >> working with, or even if they'll be complete. We're given a comma.
> >> It seems like one thing to do is assume that any factor in there
> >> is going to be considered consonant.
> >
> >Prime factor?
>
> Any factor.

Huh. Why is that "the thing to do"? I don't do that . . .

> >> We can remove the distinction
> >> between prime and odd factors with log weighting,
> >
> >That doesn't remove the distinction as far as I can see. What do you
> >mean?
>
> log(35) = log(7) + log(5) but 35 != 7+5

What does this have to do with the distinction between prime and odd
factors? It just seems to be a statement about logs and why they're
appropriate for a taxicab measure . . .

> >> remove the 2s for octave-equivalence, and...
> >
> >Not following.
>
> It seems that if (log (* n d)) is a valid taxicab distance,
> (log (apply * (remove 2s (factor (* n d))))) should be also.
> If you can parse that.

I don't know. Can you describe the lattice upon which this is equal
to the taxicab distance? Maybe that'll help.

> >> The only way to modulate in mutually-prime-limit JI and keep
> >> a dyad is by modulating from major to minor. But with odd
> >> identities in the chords, you can do things like
> >> C,E,G,D -> G,B,D,A.
> >
> >I still don't see it. How is it that "15:8 allows more such
> >modulations than 5:3"? Show me. I assumed you were talking about
15-
> >limit chords, but if not, please fill me in.
>
> Howabout C,E,G,B -> G,B,D,F# ?

That's a demonstration? How many does 15:8 allow, and how many does
5:3 allow? (I still don't know what kinds of chords we may use here.)

🔗Paul Erlich <perlich@aya.yale.edu>

10/27/2005 3:09:48 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:

> > >> The only way to modulate in mutually-prime-limit JI and keep
> > >> a dyad is by modulating from major to minor. But with odd
> > >> identities in the chords, you can do things like
> > >> C,E,G,D -> G,B,D,A.
> > >
> > >I still don't see it. How is it that "15:8 allows more such
> > >modulations than 5:3"? Show me. I assumed you were talking about
> 15-
> > >limit chords, but if not, please fill me in.
> >
> > Howabout C,E,G,B -> G,B,D,F# ?
>
> That's a demonstration? How many does 15:8 allow, and how many does
> 5:3 allow? (I still don't know what kinds of chords we may use here.)

Also, I note that in this particular example, there's only a single
15:8 between one note of the first chord and one note of the second
chord, while there are two 5:3s between notes of the first chord and
notes of the second chord, and another 5:3 within each chord. So if
this is meant to be an example of one of the modulations 15:8 allows,
I'm afraid I can't figure out what 'allows' means.

Or maybe the more complex the interval, the more modulations
it 'allows'? Then we certainly don't want to use "more modulations"
as a reason to make the interval *shorter* in the lattice, do we?

🔗Carl Lumma <ekin@lumma.org>

10/28/2005 1:43:27 AM

>> >> >> one of these was: remove all factors of 2 from
>> >> >> n and d and then take the log of their product.
>> >> //
>> >> >> in the 15-limit, I think 5:3 and 15:8 could be equally
>> >> >> distant,
>> >> >
>> >> >OK but the argument above was based on a prime or odd limit of
>> >> >5, not a higher limit. You'll have to show me how you propose
>> >> >to construct a lattice for the 15-limit and then I can respond
>> >> >to this.
>> >>
>> >> Let's say we don't know what harmonic limit composers will be
>> >> working with, or even if they'll be complete. We're given a
>> >> comma. It seems like one thing to do is assume that any factor
>> >> in there is going to be considered consonant.
>> >
>> >Prime factor?
>>
>> Any factor.
>
>Huh. Why is that "the thing to do"? I don't do that . . .

I didn't say "the thing to do", I said "one thing to do". The idea
is to infer the intended limit from the factors in the comma.
Expressibility can be viewed like this, except it only considers
the larger of numerator and denominator, while weighted hahn
diameter considers both numerator and denominator.

>> >> We can remove the distinction
>> >> between prime and odd factors with log weighting,
>> >
>> >That doesn't remove the distinction as far as I can see. What do
>> >you mean?
>>
>> log(35) = log(7) + log(5) but 35 != 7+5
>
>What does this have to do with the distinction between prime and odd
>factors? It just seems to be a statement about logs and why they're
>appropriate for a taxicab measure . . .

It removes the distinction in the sense that it gives the same
result whether you use an odd limit of 35 or a prime limit of 7.

>> It seems that if (log (* n d)) is a valid taxicab distance,
>> (log (apply * (remove 2s (factor (* n d))))) should be also.
>> If you can parse that.
>
>I don't know. Can you describe the lattice upon which this is equal
>to the taxicab distance? Maybe that'll help.

It's the octave-equivalent rectangular lattice with log lengths.
It's an attempt at an octave-equivalent version of the Tenney HD.

>> >> The only way to modulate in mutually-prime-limit JI and keep
>> >> a dyad is by modulating from major to minor. But with odd
>> >> identities in the chords, you can do things like
>> >> C,E,G,D -> G,B,D,A.
>> >
>> >I still don't see it. How is it that "15:8 allows more such
>> >modulations than 5:3"? Show me. I assumed you were talking about
>> >15-limit chords, but if not, please fill me in.
>>
>> Howabout C,E,G,B -> G,B,D,F# ?
>
>That's a demonstration?

Such modulations are impossible in mutually-prime-factor JI.

>How many does 15:8 allow, and how many does 5:3 allow? (I still
>don't know what kinds of chords we may use here.)

Chords whose dyads are within the harmonic limit. I shouldn't
have said there are more modulations by 15:8, rather, chords
containing 15:8 will admit to more common-dyad modulations than
chords containing only intervals like 5:3. But there are just as
many such modulations by 15:8 as by 5:3, which is one reason to
consider them equally distant.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

10/31/2005 5:39:17 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >> one of these was: remove all factors of 2 from
> >> >> >> n and d and then take the log of their product.
> >> >> //
> >> >> >> in the 15-limit, I think 5:3 and 15:8 could be equally
> >> >> >> distant,
> >> >> >
> >> >> >OK but the argument above was based on a prime or odd limit of
> >> >> >5, not a higher limit. You'll have to show me how you propose
> >> >> >to construct a lattice for the 15-limit and then I can respond
> >> >> >to this.
> >> >>
> >> >> Let's say we don't know what harmonic limit composers will be
> >> >> working with, or even if they'll be complete. We're given a
> >> >> comma. It seems like one thing to do is assume that any factor
> >> >> in there is going to be considered consonant.
> >> >
> >> >Prime factor?
> >>
> >> Any factor.
> >
> >Huh. Why is that "the thing to do"? I don't do that . . .
>
> I didn't say "the thing to do", I said "one thing to do". The idea
> is to infer the intended limit from the factors in the comma.

That's not the same as saying any factor is going to be considered
consonant, it seems to me. Right?

> Expressibility can be viewed like this, except it only considers
> the larger of numerator and denominator, while weighted hahn
> diameter considers both numerator and denominator.

I wouldn't really say it considers either the numerator or the
denominator, not directly . . . would you?

> >> >> We can remove the distinction
> >> >> between prime and odd factors with log weighting,
> >> >
> >> >That doesn't remove the distinction as far as I can see. What do
> >> >you mean?
> >>
> >> log(35) = log(7) + log(5) but 35 != 7+5
> >
> >What does this have to do with the distinction between prime and odd
> >factors? It just seems to be a statement about logs and why they're
> >appropriate for a taxicab measure . . .
>
> It removes the distinction in the sense that it gives the same
> result whether you use an odd limit of 35 or a prime limit of 7.

So you mean it removes the distinction between the odd and prime
limits? You said between odd and prime factors, which confuses me.
Also, what is "35 != 7+5" supposed to refer to?

>> >> It seems that if (log (* n d)) is a valid taxicab distance,
>> >> (log (apply * (remove 2s (factor (* n d))))) should be also.
>> >> If you can parse that.
> >
>> >I don't know. Can you describe the lattice upon which this is equal
>> >to the taxicab distance? Maybe that'll help.
>
>> It's the octave-equivalent rectangular lattice with log lengths.
>> It's an attempt at an octave-equivalent version of the Tenney HD.

Where 195:128 is as short as 15:13 . . .

> >> >> The only way to modulate in mutually-prime-limit JI and keep
> >> >> a dyad is by modulating from major to minor. But with odd
> >> >> identities in the chords, you can do things like
> >> >> C,E,G,D -> G,B,D,A.
> >> >
> >> >I still don't see it. How is it that "15:8 allows more such
> >> >modulations than 5:3"? Show me. I assumed you were talking about
> >> >15-limit chords, but if not, please fill me in.
> >>
> >> Howabout C,E,G,B -> G,B,D,F# ?
> >
> >That's a demonstration?
>
> Such modulations are impossible in mutually-prime-factor JI.

Huh? Impossible? What on earth . . . ???

> >How many does 15:8 allow, and how many does 5:3 allow? (I still
> >don't know what kinds of chords we may use here.)
>
> Chords whose dyads are within the harmonic limit.
>
> I shouldn't
> have said there are more modulations by 15:8, rather, chords
> containing 15:8 will admit to more common-dyad modulations

Always? Show it.

> than
> chords containing only intervals like 5:3. But there are just as
> many such modulations by 15:8 as by 5:3,

What does that mean?

> which is one reason to
> consider them equally distant.

By your argument, one could argue that longer and longer (more and
more complex) intervals should be considered equally distant. Chords
containing 225:128 "will" admit to still more common-dyad
modulations. Right?

🔗Carl Lumma <ekin@lumma.org>

11/1/2005 3:01:47 AM

>> I didn't say "the thing to do", I said "one thing to do". The idea
>> is to infer the intended limit from the factors in the comma.
>
>That's not the same as saying any factor is going to be considered
>consonant, it seems to me. Right?

How would you infer it for say, an unweighted measure?

>> Expressibility can be viewed like this, except it only considers
>> the larger of numerator and denominator, while weighted hahn
>> diameter considers both numerator and denominator.
>
>I wouldn't really say it considers either the numerator or the
>denominator, not directly . . . would you?

Which? I calculate both via operations on the numerator
and denominator of the given ratio...

>> >> log(35) = log(7) + log(5) but 35 != 7+5
>> >
>> >What does this have to do with the distinction between prime
>> >and odd factors? It just seems to be a statement about logs
>> >and why they're appropriate for a taxicab measure . . .
>>
>> It removes the distinction in the sense that it gives the same
>> result whether you use an odd limit of 35 or a prime limit of 7.
>
>So you mean it removes the distinction between the odd and prime
>limits? You said between odd and prime factors, which confuses me.

I'm calling a factor prime here if we have a weight for it and
compound if its weight must be determined from its factors. With
log weighting, these two methods give the same answers.

>Also, what is "35 != 7+5" supposed to refer to?

!= is pseudocode for "not equal to".

>>> >> It seems that if (log (* n d)) is a valid taxicab distance,
>>> >> (log (apply * (remove 2s (factor (* n d))))) should be also.
>>> >> If you can parse that.
>> >
>>> >I don't know. Can you describe the lattice upon which this is
>>> >equal to the taxicab distance? Maybe that'll help.
>>
>>> It's the octave-equivalent rectangular lattice with log lengths.
>>> It's an attempt at an octave-equivalent version of the Tenney HD.
>
>Where 195:128 is as short as 15:13 . . .

Yes, bad for harmonic distance. But for determining the number
of intervals involved in tempering these ratios out, where octaves
are not tempered...

>> >> >> The only way to modulate in mutually-prime-limit JI and keep
>> >> >> a dyad is by modulating from major to minor. But with odd
>> >> >> identities in the chords, you can do things like
>> >> >> C,E,G,D -> G,B,D,A.
>> >> >
>> >> >I still don't see it. How is it that "15:8 allows more such
>> >> >modulations than 5:3"? Show me. I assumed you were talking about
>> >> >15-limit chords, but if not, please fill me in.
>> >>
>> >> Howabout C,E,G,B -> G,B,D,F# ?
>> >
>> >That's a demonstration?
>>
>> Such modulations are impossible in mutually-prime-factor JI.
>
>Huh? Impossible? What on earth . . . ???

It's easy to see that with a mutually prime basis, the greatest
number of notes that two different instances of a chord in JI
can share is 1.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/1/2005 1:19:44 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> I didn't say "the thing to do", I said "one thing to do". The
idea
> >> is to infer the intended limit from the factors in the comma.
> >
> >That's not the same as saying any factor is going to be considered
> >consonant, it seems to me. Right?
>
> How would you infer it for say, an unweighted measure?

You're answering my question with a question. I neither see how your
answer answers my question (which referred to a statement of yours
which you snipped), nor what I'm supposed to say in answer to your
question, which I don't get.

>> >> Expressibility can be viewed like this, except it only considers
>> >> the larger of numerator and denominator, while weighted hahn
>> >> diameter considers both numerator and denominator.
> >
>> >I wouldn't really say it considers either the numerator or the
>> >denominator, not directly . . . would you?
>
>> Which?

The latter.

> I calculate both via operations on the numerator
> and denominator of the given ratio...

OK, but it's not as direct a function of them as expressibility is on
the "larger" (sometimes the smaller) of numerator and denominator.

> >> >> log(35) = log(7) + log(5) but 35 != 7+5
> >> >
> >> >What does this have to do with the distinction between prime
> >> >and odd factors? It just seems to be a statement about logs
> >> >and why they're appropriate for a taxicab measure . . .
> >>
> >> It removes the distinction in the sense that it gives the same
> >> result whether you use an odd limit of 35 or a prime limit of 7.
> >
> >So you mean it removes the distinction between the odd and prime
> >limits? You said between odd and prime factors, which confuses me.
>
> I'm calling a factor prime here if we have a weight for it and
> compound if its weight must be determined from its factors.

My head is going to explode with all these different definitions of
concepts. Can't "prime" just mean prime? Anyway, I don't think you
can unambiguously make this distinction in the Hahn case, because
some factors of 3 will be subsumed into "prime" 9, while any odd ones
left over will become "prime" in themselves.

> With
> log weighting, these two methods give the same answers.
>
> >Also, what is "35 != 7+5" supposed to refer to?
>
> != is pseudocode for "not equal to".

I know that but what is the calculation supposed to refer to? What
weighting/complexity formula?

> >>> >> It seems that if (log (* n d)) is a valid taxicab distance,
> >>> >> (log (apply * (remove 2s (factor (* n d))))) should be also.
> >>> >> If you can parse that.
> >> >
> >>> >I don't know. Can you describe the lattice upon which this is
> >>> >equal to the taxicab distance? Maybe that'll help.
> >>
> >>> It's the octave-equivalent rectangular lattice with log lengths.
> >>> It's an attempt at an octave-equivalent version of the Tenney HD.
> >
> >Where 195:128 is as short as 15:13 . . .
>
> Yes, bad for harmonic distance. But for determining the number
> of intervals involved in tempering these ratios out, where octaves
> are not tempered...

Just as bad.

> >> >> >> The only way to modulate in mutually-prime-limit JI and keep
> >> >> >> a dyad is by modulating from major to minor. But with odd
> >> >> >> identities in the chords, you can do things like
> >> >> >> C,E,G,D -> G,B,D,A.
> >> >> >
> >> >> >I still don't see it. How is it that "15:8 allows more such
> >> >> >modulations than 5:3"? Show me. I assumed you were talking about
> >> >> >15-limit chords, but if not, please fill me in.
> >> >>
> >> >> Howabout C,E,G,B -> G,B,D,F# ?
> >> >
> >> >That's a demonstration?
> >>
> >> Such modulations are impossible in mutually-prime-factor JI.
> >
> >Huh? Impossible? What on earth . . . ???
>
> It's easy to see that with a mutually prime basis, the greatest
> number of notes that two different instances of a chord in JI
> can share is 1.

When you said "mutually-prime-factor JI", I thought you were talking
about the lattice not having separate axes for integers with a common
factor (such as 3 and 9). When you say "a mutually prime basis", it
reinforces that interpretation, because infinite lattices have bases,
and chords generally aren't thought of as infinite lattices. But
you're actually talking about single chords and the numbers used in a
certain way of specifying them? That seems unrelated to the lattice
question. C,E,G,B -> G,B,D,F# and C,E,G,A -> G,B,D,E can both be
depicted perfectly reasonably on a lattice without a separate axis
for 9 or 15, so I don't know what your point is. The bigger the
chords you allow as chunks in the lattice, the more common tones they
can share with other instances of the same chord. By no means does this
imply that all the intervals in these bigger chords should be
depicted with the same, small length!

🔗Carl Lumma <ekin@lumma.org>

11/1/2005 8:57:19 PM

>> >> The idea is to infer the intended limit from the factors in the
>> >> comma.
>> >
>> >That's not the same as saying any factor is going to be considered
>> >consonant, it seems to me. Right?
>>
>> How would you infer it for say, an unweighted measure?
>
>You're answering my question with a question.

If it's not the same you ought to be able to give another method.

>>> >> Expressibility can be viewed like this, except it only considers
>>> >> the larger of numerator and denominator, while weighted hahn
>>> >> diameter considers both numerator and denominator.
>> >
>>> >I wouldn't really say it considers either the numerator or the
>>> >denominator, not directly . . . would you?
>>
>>> Which?
>
>The latter.

It uses information from *both* the numerator and denominator.
Expressibility throws out information from one.

>> I calculate both via operations on the numerator
>> and denominator of the given ratio...
>
>OK, but it's not as direct a function of them as expressibility is on
>the "larger" (sometimes the smaller) of numerator and denominator.

I guess I don't know what a direct function is. They both involve
factoring (express. involves taking out factors of 2 before doing
the comparison).

>> >>> >> It seems that if (log (* n d)) is a valid taxicab distance,
>> >>> >> (log (apply * (remove 2s (factor (* n d))))) should be also.
>> >>> >> If you can parse that.
>> >> >
>> >>> >I don't know. Can you describe the lattice upon which this is
>> >>> >equal to the taxicab distance? Maybe that'll help.
>> >>
>> >>> It's the octave-equivalent rectangular lattice with log lengths.
>> >>> It's an attempt at an octave-equivalent version of the Tenney
>> >>> HD.
>> >
>> >Where 195:128 is as short as 15:13 . . .
>>
>> Yes, bad for harmonic distance. But for determining the number
>> of intervals involved in tempering these ratios out, where octaves
>> are not tempered...
>
>Just as bad.

Ok, here's something in this thread that's interesting. Why is it
bad, and what's better?

>> >> >> Howabout C,E,G,B -> G,B,D,F# ?
>> >> >
>> >> >That's a demonstration?
>> >>
>> >> Such modulations are impossible in mutually-prime-factor JI.
>> >
>> >Huh? Impossible? What on earth . . . ???
>>
>> It's easy to see that with a mutually prime basis, the greatest
>> number of notes that two different instances of a chord in JI
>> can share is 1.
>
>When you said "mutually-prime-factor JI", I thought you were talking
>about the lattice not having separate axes for integers with a common
>factor (such as 3 and 9).

I was referring to harmonic limits defined this way.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/3/2005 1:07:33 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> The idea is to infer the intended limit from the factors in the
> >> >> comma.
> >> >
> >> >That's not the same as saying any factor is going to be considered
> >> >consonant, it seems to me. Right?
> >>
> >> How would you infer it for say, an unweighted measure?
> >
> >You're answering my question with a question.
>
> If it's not the same you ought to be able to give another method.

You've snipped the original statement, and keep changing the subject
in response to my questions. I really have no idea what you want a
method for.

> >>> >> Expressibility can be viewed like this, except it only considers
> >>> >> the larger of numerator and denominator, while weighted hahn
> >>> >> diameter considers both numerator and denominator.
> >> >
> >>> >I wouldn't really say it considers either the numerator or the
> >>> >denominator, not directly . . . would you?
> >>
> >>> Which?
> >
> >The latter.
>
> It uses information from *both* the numerator and denominator.
> Expressibility throws out information from one.

I don't know how to quantify what information is thrown out by Hahn
(and whatever weighted Hahn is (and where did it come from?)), but it
seems Tenney throws out less information from either numerator or
denominator.

> >> I calculate both via operations on the numerator
> >> and denominator of the given ratio...
> >
> >OK, but it's not as direct a function of them as expressibility is on
> >the "larger" (sometimes the smaller) of numerator and denominator.
>
> I guess I don't know what a direct function is. They both involve
> factoring (express. involves taking out factors of 2 before doing
> the comparison).

For expressibility, you have to take out at most one factor of 2.
Hahn involves a lot more factoring than that!

> >> >>> >> It seems that if (log (* n d)) is a valid taxicab distance,
> >> >>> >> (log (apply * (remove 2s (factor (* n d))))) should be also.
> >> >>> >> If you can parse that.
> >> >> >
> >> >>> >I don't know. Can you describe the lattice upon which this is
> >> >>> >equal to the taxicab distance? Maybe that'll help.
> >> >>
> >> >>> It's the octave-equivalent rectangular lattice with log lengths.
> >> >>> It's an attempt at an octave-equivalent version of the Tenney
> >> >>> HD.
> >> >
> >> >Where 195:128 is as short as 15:13 . . .
> >>
> >> >> Yes, bad for harmonic distance. But for determining the number
> >> >> of intervals involved in tempering these ratios out, where octaves
> >> >> are not tempered...
> >Just as bad.
>
> Ok, here's something in this thread that's interesting. Why is it
> bad, and what's better?

We need to start with an idea of how bad x amount of tempering is for
interval y. Why would tempering 195:128 a given amount be as bad as
tempering 15:13 a given amount?

> >> >> >> Howabout C,E,G,B -> G,B,D,F# ?
> >> >> >
> >> >> >That's a demonstration?
> >> >>
> >> >> >> Such modulations are impossible in mutually-prime-factor JI.
> >> >Huh? Impossible? What on earth . . . ???
> >>
> >> It's easy to see that with a mutually prime basis, the greatest
> >> number of notes that two different instances of a chord in JI
> >> can share is 1.
> >
> >When you said "mutually-prime-factor JI", I thought you were
talking
> >about the lattice not having separate axes for integers with a
common
> >factor (such as 3 and 9).
>
> I was referring to harmonic limits defined this way.

Please elaborate. I'd love to see this definition of harmonic limits
of yours.

🔗Carl Lumma <ekin@lumma.org>

11/3/2005 2:57:36 PM

>> >> >> The idea is to infer the intended limit from the factors
>> >> >> in the comma.
>> >> >
>> >> >That's not the same as saying any factor is going to be
>> >> >considered consonant, it seems to me. Right?
>> >>
>> >> How would you infer it for say, an unweighted measure?
>> >
>> >You're answering my question with a question.
>>
>> If it's not the same you ought to be able to give another method.
>
>You've snipped the original statement,

I try to snip as little as possible.

>and keep changing the subject in response to my questions. I really
>have no idea what you want a method for.

I do? I'm asking you how you would suggest inferring a
harmonic limit from a single comma alone. I suppose your
answer would be to take the log of the largest odd factor?
But I'm doing prime factors here, and the only solution
I could come up with is to assume they're all consonant.

>> >>> >> Expressibility can be viewed like this, except it only
>> >>> >> considers the larger of numerator and denominator, while
>> >>> >> weighted hahn diameter considers both numerator and
>> >>> >> denominator.
>> >> >
>> >>> >I wouldn't really say it considers either the numerator
>> >>> >or the denominator, not directly . . . would you?
>> >>
>> >>> Which?
>> >
>> >The latter.
>>
>> It uses information from *both* the numerator and denominator.
>> Expressibility throws out information from one.
>
>I don't know how to quantify what information is thrown out by Hahn
>(and whatever weighted Hahn is (and where did it come from?)),

The algorithm I use for "weighted Hahn diameter" comes from Hahn.
He didn't give it a name. He did give the unweighted version a
name: "diameter".

>but it seems Tenney throws out less information from either
>numerator or denominator.

The criticism here was of expressibility.

>> >> I calculate both via operations on the numerator
>> >> and denominator of the given ratio...
>> >
>> >OK, but it's not as direct a function of them as expressibility
>> >is on the "larger" (sometimes the smaller) of numerator and
>> >denominator.
>>
>> I guess I don't know what a direct function is. They both involve
>> factoring (express. involves taking out factors of 2 before doing
>> the comparison).
>
>For expressibility, you have to take out at most one factor of 2.
>Hahn involves a lot more factoring than that!

Yes, factoring. But not discarding.

>> >> >>> >> It seems that if (log (* n d)) is a valid taxicab
>> >> >>> >> distance, (log (apply * (remove 2s (factor (* n d)))))
>> >> >>> >> should be also. If you can parse that.
>> >> >> >
>> >> >>> >I don't know. Can you describe the lattice upon which
>> >> >>> >this is equal to the taxicab distance? Maybe that'll
>> >> >>> >help.
>> >> >>
>> >> >>> It's the octave-equivalent rectangular lattice with log
>> >> >>> lengths.
>> >> >>> It's an attempt at an octave-equivalent version of the
>> >> >>> Tenney HD.
>> >> >
>> >> >Where 195:128 is as short as 15:13 . . .
>> >>
>> >> Yes, bad for harmonic distance. But for determining the number
>> >> of intervals involved in tempering these ratios out, where
>> >> octaves are not tempered...
>> >
>> >Just as bad.
>>
>> Ok, here's something in this thread that's interesting. Why is it
>> bad, and what's better?
>
>We need to start with an idea of how bad x amount of tempering is for
>interval y. Why would tempering 195:128 a given amount be as bad as
>tempering 15:13 a given amount?

If you read my thing on pain = squared error, the amount of
'pain relief' depends only on the number of intervals being
tempered. And that's what I want to measure here.

>> >> >> >> Howabout C,E,G,B -> G,B,D,F# ?
>> >> >> >
>> >> >> >That's a demonstration?
>> >> >>
>> >> >> Such modulations are impossible in mutually-prime-factor JI.
>> >> >
>> >> >Huh? Impossible? What on earth . . . ???
>> >>
>> >> It's easy to see that with a mutually prime basis, the greatest
>> >> number of notes that two different instances of a chord in JI
>> >> can share is 1.
>> >
>> >When you said "mutually-prime-factor JI", I thought you were
>> >talking about the lattice not having separate axes for integers
>> >with a common factor (such as 3 and 9).
>>
>> I was referring to harmonic limits defined this way.
>
>Please elaborate. I'd love to see this definition of harmonic limits
>of yours.

Any interval y that can be factored using the factors [a q r t ...]
is in the [a q r t ...] limit. Great, ain't it?

-Carl
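For what it's worth, here is a minimal sketch of the membership test this
definition implies, assuming the listed factors are primes (as Carl
confirms later in the thread); the function name is mine:

    def in_limit(n, d, factors):
        # n/d is in the [a q r t ...] limit if n*d factors completely
        # over the given (prime) factors
        m = n * d
        for f in factors:
            while m % f == 0:
                m //= f
        return m == 1

    print(in_limit(81, 80, [2, 3, 5]))  # True: the syntonic comma is 5-limit
    print(in_limit(15, 8, [2, 3, 5]))   # True
    print(in_limit(7, 6, [2, 3, 5]))    # False: 7 is not among the factors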

🔗Paul Erlich <perlich@aya.yale.edu>

11/3/2005 3:31:32 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >> The idea is to infer the intended limit from the factors
> >> >> >> in the comma.
> >> >> >
> >> >> >That's not the same as saying any factor is going to be
> >> >> >considered consonant, it seems to me. Right?
> >> >>
> >> >> How would you infer it for say, an unweighted measure?
> >> >
> >> >You're answering my question with a question.
> >>
> >> If it's not the same you ought to be able to give another method.
> >
> >You've snipped the original statement,
>
> I try to snip as little as possible.
>
> >and keep changing the subject in response to my questions. I really
> >have no idea what you want a method for.
>
> I do? I'm asking you how you would suggest inferring a
> harmonic limit from a single comma alone. I suppose your
> answer would be to take the log of the largest odd factor?

I don't think so. 81 is the largest odd factor of 81, and 80 has no
larger ones, but that doesn't mean that the mention of the syntonic
comma (81:80) implies a harmonic limit of 81!

> But I'm doing prime factors here, and the only solution
> I could come up with is to assume they're all consonant.

Sure, though of course those wouldn't be the *only* consonances.

> >> >>> >> Expressibility can be viewed like this, except it only
> >> >>> >> considers the larger of numerator and denominator, while
> >> >>> >> weighted hahn diameter considers both numerator and
> >> >>> >> denominator.
> >> >> >
> >> >>> >I wouldn't really say it considers either the numerator
> >> >>> >or the denominator, not directly . . . would you?
> >> >>
> >> >>> Which?
> >> >
> >> >The latter.
> >>
> >> It uses information from *both* the numerator and denominator.
> >> Expressibility throws out information from one.
> >
> >I don't know how to quantify what information is thrown out by Hahn
> >(and whatever weighted Hahn is (and where did it come from?)),
>
> The algorithm I use for "weighted Hahn diameter" comes from Hahn.
> He didn't give it a name.

Maybe there is another name for that around here . . .

> He did give the unweighted version a
> name: "diameter".
>
> >but it seems Tenney throws out less information from either
> >numerator or denominator.
>
> The criticism here was of expressibility.

I'm criticizing the notion of "throwing out information", since it
doesn't seem well-defined. Expressibility is simply the log of what
we used to (by way of shorthand) call "odd limit". The justification
for that has been discussed elsewhere.

> >> >> I calculate both via operations on the numerator
> >> >> and denominator of the given ratio...
> >> >
> >> >OK, but it's not as direct a function of them as expressibility
> >> >is on the "larger" (sometimes the smaller) of numerator and
> >> >denominator.
> >>
> >> I guess I don't know what a direct function is. They both involve
> >> factoring (express. involves taking out factors of 2 before doing
> >> the comparison).
> >
> >For expressibility, you have to take out at most one factor of 2.
> >Hahn involves a lot more factoring than that!
>
> Yes, factoring. But not discarding.

Yes, discarding! There are plenty of different intervals with the
same Hahn distance; therefore, some of the "information" in the
intervals had to be "discarded" in order to get the same answer for
each of them. Right?

> >> >> >>> >> It seems that if (log (* n d)) is a valid taxicab
> >> >> >>> >> distance, (log (apply * (remove 2s (factor (* n d)))))
> >> >> >>> >> should be also. If you can parse that.
> >> >> >> >
> >> >> >>> >I don't know. Can you describe the lattice upon which
> >> >> >>> >this is equal to the taxicab distance? Maybe that'll
> >> >> >>> >help.
> >> >> >>
> >> >> >>> It's the octave-equivalent rectangular lattice with log
> >> >> >>> lengths.
> >> >> >>> It's an attempt at an octave-equivalent version of the
> >> >> >>> Tenney HD.
> >> >> >
> >> >> >Where 195:128 is as short as 15:13 . . .
> >> >>
> >> >> Yes, bad for harmonic distance. But for determining the number
> >> >> of intervals involved in tempering these ratios out, where
> >> >> octaves are not tempered...
> >> >
> >> >Just as bad.
> >>
> >> Ok, here's something in this thread that's interesting. Why is it
> >> bad, and what's better?
> >
> >We need to start with an idea of how bad x amount of tempering is for
> >interval y. Why would tempering 195:128 a given amount be as bad as
> >tempering 15:13 a given amount?
>
> If you read my thing on pain = squared error, the amount of
> 'pain relief' depends only on the number of intervals being
> tempered. And that's what I want to measure here.

The number of intervals being tempered? Aren't *all* the intervals
going to be tempered in most cases? And if you're saying "the number
of intervals the comma is distributed over", well what stops you from
saying the answer is always 1 -- the comma itself?

> >> >> >> >> Howabout C,E,G,B -> G,B,D,F# ?
> >> >> >> >
> >> >> >> >That's a demonstration?
> >> >> >>
> >> >> >> Such modulations are impossible in mutually-prime-factor JI.
> >> >> >
> >> >> >Huh? Impossible? What on earth . . . ???
> >> >>
> >> >> It's easy to see that with a mutually prime basis, the greatest
> >> >> number of notes that two different instances of a chord in JI
> >> >> can share is 1.
> >> >
> >> >When you said "mutually-prime-factor JI", I thought you were
> >> >talking about the lattice not having separate axes for integers
> >> >with a common factor (such as 3 and 9).
> >>
> >> I was referring to harmonic limits defined this way.
> >
> >Please elaborate. I'd love to see this definition of harmonic limits
> >of yours.
>
> Any interval y that can be factored using the factors [a q r t ...]
> is in the [a q r t ...] limit. Great, ain't it?

It would be if you could somehow convince me that this implies
C,E,G,B -> G,B,D,F# is impossible.

🔗Carl Lumma <ekin@lumma.org>

11/3/2005 4:13:20 PM

>> I do? I'm asking you how you would suggest inferring a
>> harmonic limit from a single comma alone. I suppose your
>> answer would be to take the log of the largest odd factor?
>
>I don't think so. 81 is the largest odd factor of 81, and 80 has no
>larger ones, but that doesn't mean that the mention of the syntonic
>comma (81:80) implies a harmonic limit of 81!

Yes, but what if it did!

>> But I'm doing prime factors here, and the only solution
>> I could come up with is to assume they're all consonant.
>
>Sure, though of course those wouldn't be the *only* consonances.

Why not? I'm not assuming factors < them will be consonant!

>> The algorithm I use for "weighted Hahn diameter" comes from Hahn.
>> He didn't give it a name.
>
>Maybe there is another name for that around here . . .

Almost certainly... :)

>> He did give the unweighted version a
>> name: "diameter".
>>
>> >but it seems Tenney throws out less information from either
>> >numerator or denominator.
>>
>> The criticism here was of expressibility.
>
>I'm criticizing the notion of "throwing out information", since it
>doesn't seem well-defined. Expressibility is simply the log of what
>we used to (by way of shorthand) call "odd limit". The justification
>for that has been discussed elsewhere.

It's a good octave-equivalent measure of dissonance. But why is
it good for measuring area on a lattice of notes? It seems given
your ASCII art here...

http://kees.cc/tuning/erl_perbl.html

...that it is simply less refined than a measure (such as your
isosceles one here, or Hahn's) that factors both sides of
the fraction.

>Yes, discarding! There are plenty of different intervals with the
>same Hahn distance; therefore, some of the "information" in the
>intervals had to be "discarded" in order to get the same answer for
>each of them. Right?

Yes, but see the above.

>> If you read my thing on pain = squared error, the amount of
>> 'pain relief' depends only on the number of intervals being
>> tempered. And that's what I want to measure here.
>
>The number of intervals being tempered? Aren't *all* the intervals
>going to be tempered in most cases?

Yes.

>And if you're saying "the number
>of intervals the comma is distributed over",

I am. (Consonant intervals, that is.)

>well what stops you from saying the answer is always 1 -- the
>comma itself?

Because one wouldn't expect a comma-to-be-tempered out to be
consonant.

>> Any interval y that can be factored using the factors [a q r t ...]
>> is in the [a q r t ...] limit. Great, ain't it?
>
>It would be if you could somehow convince me that this implies
>C,E,G,B -> G,B,D,F# is impossible.

Show me a single modulation like this where no two of (a q r t...)
share a common factor.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

11/3/2005 9:50:51 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> Any interval y that can be factored using the factors [a q r t ...]
> is in the [a q r t ...] limit. Great, ain't it?

What you are doing is generating a subgroup, which makes it slicker to
find a minimal set of generators.

🔗Paul Erlich <perlich@aya.yale.edu>

11/4/2005 3:18:40 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> I do? I'm asking you how you would suggest inferring a
> >> harmonic limit from a single comma alone. I suppose your
> >> answer would be to take the log of the largest odd factor?
> >
> >I don't think so. 81 is the largest odd factor of 81, and 80 has no
> >larger ones, but that doesn't mean that the mention of the syntonic
> >comma (81:80) implies a harmonic limit of 81!
>
> Yes, but what if it did!

Then we'd never have the opportunity to temper out any commas, since
a harmonic limit of 81 implies that 81:80 is a consonance in its own
right.

> >> But I'm doing prime factors here, and the only solution
> >> I could come up with is to assume they're all consonant.
> >
> >Sure, though of course those wouldn't be the *only* consonances.
>
> Why not? I'm not assuming factors < them will be consonant!

Really? What about ratios of them?

> >> He did give the unweighted version a
> >> name: "diameter".
> >>
> >> >but it seems Tenney throws out less information from either
> >> >numerator or denominator.
> >>
> >> The criticism here was of expressibility.
> >
> >I'm criticizing the notion of "throwing out information", since it
> >doesn't seem well-defined. Expressibility is simply the log of what
> >we used to (by way of shorthand) call "odd limit". The justification
> >for that has been discussed elsewhere.
>
> It's a good octave-equivalent measure of dissonance. But why is
> it good for measuring area on a lattice of notes?

Area? Area can be measured using wedge products, but I'm not sure why
you bring it up here.

> It seems given
> your ASCII art here...
>
> http://kees.cc/tuning/erl_perbl.html
>
> ...that it is simply less refined than a measure (such as your
> isosceles one here, or Hahn's) that factors both sides of
> the fraction.

Huh? Less refined?? How do you come to that conclusion, particularly
given my ASCII art???

> >Yes, discarding! There are plenty of different intervals with the
> >same Hahn distance; therefore, some of the "information" in the
> >intervals had to be "discarded" in order to get the same answer
for
> >each of them. Right?
>
> Yes, but see the above.

?

> >> If you read my thing on pain = squared error, the amount of
> >> 'pain relief' depends only on the number of intervals being
> >> tempered. And that's what I want to measure here.
> >
> >The number of intervals being tempered? Aren't *all* the intervals
> >going to be tempered in most cases?
>
> Yes.
>
> >And if you're saying "the number
> >of intervals the comma is distributed over",
>
> I am. (Consonant intervals, that is.)
>
> >well what stops you from saying the answer is always 1 -- the
> >comma itself?
>
> Because one wouldn't expect a comma-to-be-tempered out to be
> consonant.

I thought you dismissed consonance as a consideration in this context.
Could you step back for me and clarify this?

> >> Any interval y that can be factored using the factors [a q r t ...]
> >> is in the [a q r t ...] limit. Great, ain't it?
> >
> >It would be if you could somehow convince me that this implies
> >C,E,G,B -> G,B,D,F# is impossible.
>
> Show me a single modulation like this where no two of (a q r t...)
> share a common factor.

I don't get it. You said nothing about "modulation" in
your "definition" above, but I don't see how that changes anything.
Why can't we just say the set of factors is 2, 3, and 5, none of
which share a common factor? What are the letters a, q, r, and t
supposed to stand for?

🔗Paul Erlich <perlich@aya.yale.edu>

11/4/2005 3:19:47 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
>
> --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> > Any interval y that can be factored using the factors [a q r t ...]
> > is in the [a q r t ...] limit. Great, ain't it?
>
> What you are doing is generating a subgroup, which makes it slicker to
> find a minimal set of generators.

Can you explain this to me then? It makes no sense to me at the moment.

🔗Gene Ward Smith <gwsmith@svpal.org>

11/6/2005 2:24:49 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
>
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
> wrote:
> >
> > --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >
> > > Any interval y that can be factored using the factors [a q r t ...]
> > > is in the [a q r t ...] limit. Great, ain't it?
> >
> > What you are doing is generating a subgroup, which makes it slicker to
> > find a minimal set of generators.
>
> Can you explain this to me then? It makes no sense to me at the moment.

If I said something was in the [5/3, 3/2, 5/2, 7/6]-limit, I'd have
one more generator than necessary, as this generates a rank three
group. Hence reducing to three generators would be better, and even
better might be a canonical reduction to, say, a Hermite or TM basis.
Alternatively, you could give the val <1 1 1 2|, whose kernel is the
"limit" in question.

🔗Paul Erlich <perlich@aya.yale.edu>

11/8/2005 12:21:07 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
>
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
> >
> > --- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<gwsmith@s...>
> > wrote:
> > >
> > > --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...>
wrote:
> > >
> > > > Any interval y that can be factored using the factors [a q r t ...]
> > > > is in the [a q r t ...] limit. Great, ain't it?
> > >
> > > What you are doing is generating a subgroup, which makes it slicker to
> > > find a minimal set of generators.
> >
> > Can you explain this to me then? It makes no sense to me at the moment.
>
> If I said something was in the [5/3, 3/2, 5/2, 7/6]-limit, I'd have
> one more generator than necessary, as this generates a rank three
> group. Hence reducing to three generators would be better, and even
> better might be a canonical reduction to, say, a Hermite or TM basis.

So what does this have to do with the chord progression C-E-G-B ->
G-B-D-F# being "impossible"?

> Alternatively, you could give the val <1 1 1 2|, whose kernel is the
> "limit" in question.

Bizarre.

🔗Carl Lumma <ekin@lumma.org>

11/9/2005 5:00:52 PM

>> >> I do? I'm asking you how you would suggest inferring a
>> >> harmonic limit from a single comma alone. I suppose your
>> >> answer would be to take the log of the largest odd factor?
>> >
>> >I don't think so. 81 is the largest odd factor of 81, and 80 has
>> >no larger ones, but that doesn't mean that the mention of the
>> >syntonic comma (81:80) implies a harmonic limit of 81!
>>
>> Yes, but what if it did!
>
>Then we'd never have the opportunity to temper out any commas, since
>a harmonic limit of 81 implies that 81:80 is a consonance in its own
>right.

Yes, this certainly makes sense. Anyway, my solution was to assume
all the primes in the comma are considered consonant.

>> >> But I'm doing prime factors here, and the only solution
>> >> I could come up with is to assume they're all consonant.
>> >
>> >Sure, though of course those wouldn't be the *only* consonances.
>>
>> Why not? I'm not assuming factors < them will be consonant!
>
>Really?

Yes!

>What about ratios of them?

Yes for the triangular formulations, no for the rectangular ones.

>> >> He did give the unweighted version a
>> >> name: "diameter".
>> >>
>> >> >but it seems Tenney throws out less information from either
>> >> >numerator or denominator.
>> >>
>> >> The criticism here was of expressibility.
>> >
>> >I'm criticizing the notion of "throwing out information", since
>> >it doesn't seem well-defined. Expressibility is simply the log of
>> >what we used to (by way of shorthand) call "odd limit". The
>> >justification for that has been discussed elsewhere.
>>
>> It's a good octave-equivalent measure of dissonance. But why is
>> it good for measuring area on a lattice of notes?
>
>Area? Area can be measured using wedge products, but I'm not sure why
>you bring it up here.

This part of the thread, I believe, was about the part of my
badness formula that's supposed to say how many 'notes' a comma
will contribute to a generic PB in which it is a commatic uv.
My solution was to raise the unweighted rectangular lattice
distance of the comma to the power of the number of primes in
the comma. I'd be interested in other solutions! Can I wedge
a single comma?

>> It seems given
>> your ASCII art here...
>>
>> http://kees.cc/tuning/erl_perbl.html
>>
>> ...that it is simply less refined than a measure (such as your
>> isosceles one here, or Hahn's) that factors both sides of
>> the fraction.
>
>Huh? Less refined?? How do you come to that conclusion, particularly
>given my ASCII art???

I forget what I was thinking of there, but, for example, it gives
the same score to 81/80 and 81/64, which are different in the
rectangular art.

>> >Yes, discarding! There are plenty of different intervals with the
>> >same Hahn distance; therefore, some of the "information" in the
>> >intervals had to be "discarded" in order to get the same answer
>> >for each of them. Right?
>>
>> Yes, but see the above.
>
>?

In the 5-limit, Hahn-limit n gives the same score to 6n points.
81/80 is length 4, so that means 24 ratios share the same length.
It looks like a greater number of ratios have the same
expressibility as 81/80.
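A brute-force check of that count, assuming the usual triangular 5-limit
lattice in which 3, 5, and 5/3 are all unit steps, so that the Hahn
distance of 3^a * 5^b (octaves ignored) is max(|a|, |b|, |a+b|);
81/80 = |-4 4 -1> sits at (a, b) = (4, -1), at distance 4:

    def hahn_distance(a, b):
        # distance on the 5-limit triangular lattice (axes are the
        # exponents of 3 and 5), with 3, 5 and 5/3 as unit steps
        return max(abs(a), abs(b), abs(a + b))

    n = 4
    shell = [(a, b) for a in range(-n, n + 1) for b in range(-n, n + 1)
             if hahn_distance(a, b) == n]
    print(len(shell))  # 24, i.e. 6n points share 81/80's length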

>> >> If you read my thing on pain = squared error, the amount of
>> >> 'pain relief' depends only on the number of intervals being
>> >> tempered. And that's what I want to measure here.
>> >
>> >The number of intervals being tempered? Aren't *all* the intervals
>> >going to be tempered in most cases?
>>
>> Yes.
>>
>> >And if you're saying "the number
>> >of intervals the comma is distributed over",
>>
>> I am. (Consonant intervals, that is.)
>>
>> >well what stops you from saying the answer is always 1 -- the
>> >comma itself?
>>
>> Because one wouldn't expect a comma-to-be-tempered out to be
>> consonant.
>
>I thought you dismissed consonance as a consideration in this context.
>Could you step back for me and clarify this?

It has to be a consideration in this sense, as you pointed out.
But the notion that 1163 isn't consonant -- I threw that out. But
some of my results were based on weigthed measures that penalize
factors like 1163 for their size (I haven't tried Gene's sqrt(p)
weighting yet, but it looks promising).

>> >> Any interval y that can be factored using the factors [a q r t ...]
>> >> is in the [a q r t ...] limit. Great, ain't it?
>> >
>> >It would be if you could somehow convince me that this implies
>> >C,E,G,B -> G,B,D,F# is impossible.
>>
>> Show me a single modulation like this where no two of (a q r t...)
>> share a common factor.
>
>I don't get it. You said nothing about "modulation" in
>your "definition" above,

-> is modulation.

>What are the letters a, q, r, and t supposed to stand for?

Any primes.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/10/2005 2:27:48 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> I do? I'm asking you how you would suggest inferring a
> >> >> harmonic limit from a single comma alone. I suppose your
> >> >> answer would be to take the log of the largest odd factor?
> >> >
> >> >I don't think so. 81 is the largest odd factor of 81, and 80 has
> >> >no larger ones, but that doesn't mean that the mention of the
> >> >syntonic comma (81:80) implies a harmonic limit of 81!
> >>
> >> Yes, but what if it did!
> >
> >Then we'd never have the opportunity to temper out any commas, since
> >a harmonic limit of 81 implies that 81:80 is a consonance in its own
> >right.
>
> Yes, this certainly makes sense. Anyway, my solution was to assume
> all the primes in the comma are considered consonant.

Probably a fair assumption!

> >> >> But I'm doing prime factors here, and the only solution
> >> >> I could come up with is to assume they're all consonant.
> >> >
> >> >Sure, though of course those wouldn't be the *only* consonances.
> >>
> >> Why not? I'm not assuming factors < them will be consonant!
> >
> >Really?
>
> Yes!

So if you're looking at 128:125, you wouldn't assume ratios of 3 are
consonant?

> >> >> He did give the unweighted version a
> >> >> name: "diameter".
> >> >>
> >> >> >but it seems Tenney throws out less information from either
> >> >> >numerator or denominator.
> >> >>
> >> >> The criticism here was of expressibility.
> >> >
> >> >I'm criticizing the notion of "throwing out information", since
> >> >it doesn't seem well-defined. Expressibility is simply the log of
> >> >what we used to (by way of shorthand) call "odd limit". The
> >> >justification for that has been discussed elsewhere.
> >>
> >> It's a good octave-equivalent measure of dissonance. But why is
> >> it good for measuring area on a lattice of notes?
> >
> >Area? Area can be measured using wedge products, but I'm not sure why
> >you bring it up here.
>
> This part of the thread, I believe, was about the part of my
> badness formula that's supposed to say how many 'notes' a comma
> will contribute to a generic PB in which it is a commatic uv.

Or chromatic uv -- it shouldn't matter.

> My solution was to raise the unweighted rectangular lattice
> distance of the comma to the power of the number of primes in
> the comma. I'd be interested in other solutions!

My solution is (Tenney) harmonic distance, of course, as it's length
in the lattice of notes. I actually divide this by the volume of one
cell in the lattice, to get closer to "number of notes". This is
where my complexity numbers come from.

> Can I wedge
> a single comma?

The comma by itself is fine. You can arrive at it by doing a wedge
product of vals and then taking the complement, if that makes you
feel better.

> >> It seems given
> >> your ASCII art here...
> >>
> >> http://kees.cc/tuning/erl_perbl.html
> >>
> >> ...that it is simply less refined than a measure (such as your
> >> isosceles one here, or Hahn's) that factors both sides of
> >> the fraction.
> >
> >Huh? Less refined?? How do you come to that conclusion, particularly
> >given my ASCII art???
>
> I forget what I was thinking of there, but, for example, it gives
> the same score to 81/80 and 81/64, which are different in the
> rectangular art.

The rectangular depiction is clearly inferior to the two triangular
ones. Wouldn't you agree?

> >> >Yes, discarding! There are plenty of different intervals with the
> >> >same Hahn distance; therefore, some of the "information" in the
> >> >intervals had to be "discarded" in order to get the same answer
> >> >for each of them. Right?
> >>
> >> Yes, but see the above.
> >
> >?
>
> In the 5-limit, Hahn-limit n gives the same score to 6n points.
> 81/80 is length 4, so that means 24 ratios share the same length.
> It looks like a greater number of ratios have the same
> expressibility as 81/80.

Looks like fewer to me.

> >> >> If you read my thing on pain = squared error, the amount of
> >> >> 'pain relief' depends only on the number of intervals being
> >> >> tempered. And that's what I want to measure here.
> >> >
> >> >The number of intervals being tempered? Aren't *all* the intervals
> >> >going to be tempered in most cases?
> >>
> >> Yes.
> >>
> >> >And if you're saying "the number
> >> >of intervals the comma is distributed over",
> >>
> >> I am. (Consonant intervals, that is.)
> >>
> >> >well what stops you from saying the answer is always 1 -- the
> >> >comma itself?
> >>
> >> Because one wouldn't expect a comma-to-be-tempered out to be
> >> consonant.
> >
> >I thought you dismissed consonance as a consideration in this context.
> >Could you step back for me and clarify this?
>
> It has to be a consideration in this sense, as you pointed out.
> But the notion that 1163 isn't consonant -- I threw that out. But
> some of my results were based on weighted measures that penalize
> factors like 1163 for their size (I haven't tried Gene's sqrt(p)
> weighting yet, but it looks promising).
>
> >> >> Any interval y that can be factored using the factors [a q r t ...]
> >> >> is in the [a q r t ...] limit. Great, ain't it?
> >> >
> >> >It would be if you could somehow convince me that this implies
> >> >C,E,G,B -> G,B,D,F# is impossible.
> >>
> >> Show me a single modulation like this where no two of (a q r t...)
> >> share a common factor.
> >
> >I don't get it. You said nothing about "modulation" in
> >your "definition" above,
>
> -> is modulation.
>
> >What are the letters a, q, r, and t supposed to stand for?
>
> Any primes.

Dude, I have no idea what you're showing or talking about here. By
definition, no primes share a common factor. Maybe try again when you
feel better.

Regards,
Paul

🔗Carl Lumma <ekin@lumma.org>

11/10/2005 10:11:51 PM

>> Yes, this certainly makes sense. Anyway, my solution was to assume
>> all the primes in the comma are considered consonant.
>
>Probably a fair assumption!

Whew!

>> >> >> But I'm doing prime factors here, and the only solution
>> >> >> I could come up with is to assume they're all consonant.
>> >> >
>> >> >Sure, though of course those wouldn't be the *only* consonances.
>> >>
>> >> Why not? I'm not assuming factors < them will be consonant!
>> >
>> >Really?
>>
>> Yes!
>
>So if you're looking at 128:125, you wouldn't assume ratios of 3 are
>consonant?

Exactly.

>> >> >what we used to (by way of shorthand) call "odd limit". The
>> >> >justification for that has been discussed elsewhere.
>> >>
>> >> It's a good octave-equivalent measure of dissonance. But why
>> >> is it good for measuring area on a lattice of notes?
>> >
>> >Area? Area can be measured using wedge products, but I'm not sure
>> >why you bring it up here.
>>
>> This part of the thread, I believe, was about the part of my
>> badness formula that's supposed to say how many 'notes' a comma
>> will contribute to a generic PB in which it is a commatic uv.
>
>Or chromatic uv -- it shouldn't matter.

Sure.

>> My solution was to raise the unweighted rectangular lattice
>> distance of the comma to the power of the number of primes in
>> the comma. I'd be interested in other solutions!
>
>My solution is (Tenney) harmonic distance, of course, as it's length
>in the lattice of notes. I actually divide this by the volume of one
>cell in the lattice, to get closer to "number of notes". This is
>where my complexity numbers come from.

You divide a length by a volume? Can you give an example?

What I say above just assumes the PB will be a block made of
orthogonal, equally-complex commas.

We're using the same lattice, except mine isn't weighted. Actually
towards the end of my "exploring" msg., I think I was weighting.

>> Can I wedge a single comma?
>
>The comma by itself is fine. You can arrive at it by doing a wedge
>product of vals and then taking the complement, if that makes you
>feel better.

You said area could be measured with wedge products. I'd like to
know if it's possible to do that with only 1 comma.

>> >> It seems given your ASCII art here...
>> >>
>> >> http://kees.cc/tuning/erl_perbl.html
>> >>
>> >> ...that express. is simply less refined than a measure (such
>> >> as your isosceles one here, or Hahn's) that factors both sides
>> >> of the fraction.
//
>> >> for example, it gives the same score to 81/80 and 81/64, which
>> >> are different in the rectangular art.
>
>The rectangular depiction is clearly inferior to the two triangular
>ones. Wouldn't you agree?

I only see one rect. depiction there, and it seems better, since
135:128 is more consonant than 81:80, based simply on size. :)

>> In the 5-limit, Hahn-limit n gives the same score to 6n points.
>> 81/80 is length 4, so that means 24 ratios share the same length.
>> It looks like a greater number of ratios have the same
>> expressibility as 81/80.
>
>Looks like fewer to me.

All odds less than 80 (about 40) that are prime rel. to 81 (2/3 of
them?) will have the same expressibility. That looks like more
than 24 to me.

>> >> >> If you read my thing on pain = squared error, the amount of
>> >> >> 'pain relief' depends only on the number of intervals being
>> >> >> tempered. And that's what I want to measure here.
//
>> >> >if you're saying "the number of intervals the comma is
>> >> >distributed over",
>> >>
>> >>I am. (Consonant intervals, that is.)
>> >
>> >well what stops you from saying the answer is always 1 -- the
>> >comma itself?
>>
>>Because one wouldn't expect a comma-to-be-tempered out to be
>>consonant.
>
>I thought you dismissed consonance as a consideration in this
>context. Could you step back for me and clarify this?

I'm assuming all the prime factors of the comma to be consonant,
but not the comma itself.

Anyway, the idea is that pain is quadratic. Er, here's a quote
from the original msg.

"""
That brings us to error. John deLaubenfels proposed that the
amount of "pain" mistuning causes us is the square of the error
in a simultaneity, and I agree with him. My own listening tests
indicate the exponent should be > 1, and 2 is natural because
it gives a nice distribution over the target. Also, there would
scarcely be a reason to temper with an exponent of 1... if we
spread a 24-cent comma over 12 fifths, we'd experience the same
amount of pain once we heard all of them, no matter how we
tempered. But (12 * 2^2) = 48 < 24^2 = 576.

So, for error I arrived at:

((cents comma)/(comma-dist comma))^2 * (comma-dist comma)

or

(cents comma)^2 / (comma-dist comma)

I use comma-dist here because I want octave-specific (allowing
tempered octaves), unweighted (the available 'pain relief' depends
only on the number of intervals to temper over) distances.
"""

>> >> >> Any interval y that can be factored using the factors
>> >> >> [a q r t ...] is in the [a q r t ...] limit. Great, ain't
>> >> >> it?
>> >> >
>> >> >It would be if you could somehow convince me that this
>> >> >implies C,E,G,B -> G,B,D,F# is impossible.
>> >>
>> >> Show me a single modulation like this where no two of
>> >> (a q r t...) share a common factor.
>> >
>> >I don't get it. You said nothing about "modulation" in
>> >your "definition" above,
>>
>> -> is modulation.
>>
>> >What are the letters a, q, r, and t supposed to stand for?
>>
>> Any primes.
>
>Dude, I have no idea what you're showing or talking about here. By
>definition, no primes share a common factor. Maybe try again when
>you feel better.

They actually don't have to be primes (though in everything I
did, they were). They only have to have the property that no
pair of them divide without a remainder.
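
That condition is easy to state as a quick Python predicate (a sketch;
the name is illustrative):

    def pairwise_nondividing(basis):
        # True if no element of the basis divides another evenly
        return all(a % b != 0 and b % a != 0
                   for i, a in enumerate(basis)
                   for b in basis[i + 1:])

    print(pairwise_nondividing([3, 5, 7]))      # True: a prime basis
    print(pairwise_nondividing([3, 5, 7, 9]))   # False: 3 divides 9, odd-limit style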

Let's start by answering this question: In the 7-limit, what is
the maximum number of notes a pair of chords can have in common?

Answer: 2

Question next: How many kinds of modulation achieve this?

Answer: 1 (major <-> minor)

Question final: What's the only way to up these numbers?

Answer: Choose a harmonic limit like odd limits, or in my
formulation, any basis in which a pair of elements divide.

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/10/2005 10:20:59 PM

I wrote...
>>> >> It seems given your ASCII art here...
>>> >>
>>> >> http://kees.cc/tuning/erl_perbl.html
//
>>The rectangular depiction is clearly inferior to the two triangular
>>ones. Wouldn't you agree?
>
>I only see one rect. depiction there, and it seems better, since
>135:128 is more consonant than 81:80, based simply on size. :)

D'oh! I meant, I see only one tri. depiction there, and the rect.
seems better, since...

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 1:24:41 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> Yes, this certainly makes sense. Anyway, my solution was to assume
> >> all the primes in the comma are considered consonant.
> >
> >Probably a fair assumption!
>
> Whew!
>
> >> >> >> But I'm doing prime factors here, and the only solution
> >> >> >> I could come up with is to assume they're all consonant.
> >> >> >
> >> >> >Sure, though of course those wouldn't be the *only* consonances.
> >> >>
> >> >> Why not? I'm not assuming factors < them will be consonant!
> >> >
> >> >Really?
> >>
> >> Yes!
> >
> >So if you're looking at 128:125, you wouldn't assume ratios of 3 are
> >consonant?
>
> Exactly.

Ouch. That rules out Augmented and the related systems, doesn't it?

> >> >> >what we used to (by way of shorthand) call "odd limit". The
> >> >> >justification for that has been discussed elsewhere.
> >> >>
> >> >> It's a good octave-equivalent measure of dissonance. But why
> >> >> is it good for measuring area on a lattice of notes?
> >> >
> >> >Area? Area can be measured using wedge products, but I'm not sure
> >> >why you bring it up here.
> >>
> >> This part of the thread, I believe, was about the part of my
> >> badness formula that's supposed to say how many 'notes' a comma
> >> will contribute to a generic PB in which it is a commatic uv.
> >
> >Or chromatic uv -- it shouldn't matter.
>
> Sure.
>
> >> My solution was to raise the unweighted rectangular lattice
> >> distance of the comma to the power of the number of primes in
> >> the comma. I'd be interested in other solutions!
> >
> >My solution is (Tenney) harmonic distance, of course, as it's length
> >in the lattice of notes. I actually divide this by the volume of one
> >cell in the lattice, to get closer to "number of notes". This is
> >where my complexity numbers come from.
>
> You divide a length by a volume? Can you give an example?

In my paper, I give the complexity of 5-limit meantone as 3.44. This
comes from calculating the length of the comma:

4*log(2)/log(2)+4*log(3)/log(2)+1*log(5)/log(2) = 12.661778

and dividing by the volume of a unit cell:

log2(2)*log2(3)*log2(5) = 3.680169

yielding 3.440543.

The idea being that there's one note per unit cell . . . but this
number happens to coincide with what Gene calls the Tenney-L1 norm of
the *val* rather than of the comma.
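
A small Python sketch of that calculation, reading the "length" as the
Tenney-weighted L1 norm of the comma's monzo and the unit cell as the box
spanned by one step along each prime axis (the function names are
illustrative):

    from math import log2, prod

    def tenney_length(monzo, primes=(2, 3, 5)):
        return sum(abs(e) * log2(p) for e, p in zip(monzo, primes))

    def unit_cell_volume(primes=(2, 3, 5)):
        return prod(log2(p) for p in primes)

    meantone_comma = (-4, 4, -1)              # 81/80 as a monzo
    length = tenney_length(meantone_comma)    # ~12.661778
    volume = unit_cell_volume()               # ~3.680169
    print(round(length / volume, 6))          # 3.440543, the 3.44 above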

> What I say above just assumes the PB will be a block made of
> orthogonal, equally-complex commas.

That ignores both "straightness" and the different complexities of
different commas. The wedge product is, in a sense, an affine-
invariant way of taking both into account without even trying :)

> We're using the same lattice, except mine isn't weighted. Actually
> towards the end of my "exploring" msg., I think I was weighting.
>
> >> Can I wedge a single comma?
> >
> >The comma by itself is fine. You can arrive at it by doing a wedge
> >product of vals and then taking the complement, if that makes you
> >feel better.
>
> You said area could be measured with wedge products. I'd like to
> know if it's possible to do that with only 1 comma.

No, 1 comma is a single vector, so can only "measure" a length. But
the general idea is the same no matter how many commas you use. In
the 5-limit (3D) lattice, a comma will define (though not uniquely!)
a set of parallel, infinite-area slices, with the comma as "width".
I've called these "periodicity sheets" in the past.

> >> >> It seems given your ASCII art here...
> >> >>
> >> >> http://kees.cc/tuning/erl_perbl.html
> >> >>
> >> >> ...that express. is simply less refined than a measure (such
> >> >> as your isosceles one here, or Hahn's) that factors both sides
> >> >> of the fraction.
> //
> >> >> for example, it gives the same score to 81/80 and 81/64, which
> >> >> are different in the rectangular art.
> >
> >The rectangular depiction is clearly inferior to the two triangular
> >ones. Wouldn't you agree?
>
> I only see one rect. depiction there, and it seems better, since
> 135:128 is more consonant than 81:80, based simply on size. :)

Hmm . . . try comparing other intervals too, such as 15:8 with
5:3 . . .

> >> In the 5-limit, Hahn-limit n gives the same score to 6n points.
> >> 81/80 is length 4, so that means 24 ratios share the same length.
> >> It looks like a greater number of ratios have the same
> >> expressibility as 81/80.
> >
> >Looks like fewer to me.
>
> All odds less than 80 (about 40) that are prime rel. to 81 (2/3 of
> them?) will have the same expressibility. That looks like more
> than 24 to me.

No way. You said "In the 5-limit" above, so most of these odds never
enter the picture. If they did, you'd have to somehow "count" them in
the Hahn case as well, for a fair comparison.

> >> >> >> If you read my thing on pain = squared error, the amount of
> >> >> >> 'pain relief' depends only on the number of intervals being
> >> >> >> tempered. And that's what I want to measure here.
> //
> >> >> >if you're saying "the number of intervals the comma is
> >> >> >distributed over",
> >> >>
> >> >>I am. (Consonant intervals, that is.)
> >> >
> >> >well what stops you from saying the answer is always 1 -- the
> >> >comma itself?
> >>
> >>Because one wouldn't expect a comma-to-be-tempered out to be
> >>consonant.
> >
> >I thought you dismissed consonance as a consideration in this
> >context. Could you step back for me and clarify this?
>
> I'm assuming all the prime factors of the comma to be consonant,
> but not the comma itself.

OK, probably fair. But there are other consonances too, and you can
often arrive at a comma with fewer of these . . .

> Anyway, the idea is that pain is quadratic. Er, here's a quote
> from the original msg.
>
> """
> That brings us to error. John deLaubenfels proposed that the
> amount of "pain" mistuning causes us is the square of the error
> in a simultaneity, and I agree with him. My own listening tests
> indicate the exponent should be > 1, and 2 is natural because
> it gives a nice distribution over the target. Also, there would
> scarcely be a reason to temper with an exponent of 1... if we
> spread a 24-cent comma over 12 fifths, we'd experience the same
> amount of pain once we heard all of them, no matter how we
> tempered. But (12 * 2^2) = 48 < 24^2 = 576.
>
> So, for error I arrived at:
>
> ((cents comma)/(comma-dist comma))^2 * (comma-dist comma)
>
> or
>
> (cents comma)^2 / (comma-dist comma)
>
> I use comma-dist here because I want octave-specific (allowing
> tempered octaves), unweighted (the available 'pain relief' depends
> only on the number of intervals to temper over) distances.

Then you need to consider tempering over other consonances, IMHO.

> """
>
> >> >> >> Any interval y that can be factored using the factors
> >> >> >> [a q r t ...] is in the [a q r t ...] limit. Great, ain't
> >> >> >> it?
> >> >> >
> >> >> >It would be if you could somehow convince me that this
> >> >> >implies C,E,G,B -> G,B,D,F# is impossible.
> >> >>
> >> >> Show me a single modulation like this where no two of
> >> >> (a q r t...) share a common factor.
> >> >
> >> >I don't get it. You said nothing about "modulation" in
> >> >your "definition" above,
> >>
> >> -> is modulation.
> >>
> >> >What are the letters a, q, r, and t supposed to stand for?
> >>
> >> Any primes.
> >
> >Dude, I have no idea what you're showing or talking about here. By
> >definition, no primes share a common factor. Maybe try again when
> >you feel better.
>
> They actually don't have to be primes (though in everything I
> did, they were). They only have to have the property that no
> pair of them divide without a remainder.
>
> Let's start by answering this question: In the 7-limit, what is
> the maximum number of notes a pair of chords can have in common.
>
> Answer: 2
>
> Question next: How many kinds of modulation achieve this?
>
> Answer: 1 (major <-> minor)

What do you mean by "kinds of modulation"? For any tetrad (and I'm
not sure why you're restricting this to tetrads here), there are six
different tetrads (all of opposite quality) which share two notes
with it. Why isn't that six kinds of "modulation"?

> Question final: What's the only way to up these numbers?

You mean the numbers 2 and 1 above?

> Answer: Choose a harmonic limit like odd limits,

It's weird that you seem to be talking as if chords must be
constructed in a particular way given the limit. Seems totally
divorced from the kind of thinking I thought was at work in this
thread.

> or in my
> formulation, any basis in which a pair of elements divide.

The 7-limit clearly gets you "up to these numbers" 2 and 1 above,
since that's the very example you use. So in the basis for the 7-
limit, a pair of elements divide? What does that mean?

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 1:56:40 PM

>> >So if you're looking at 128:125, you wouldn't assume ratios of 3
>> >are consonant?
>>
>> Exactly.
>
>Ouch. That rules out Augmented and the related systems, doesn't it?

How so? I'm just using this to find commas.

>> >> My solution was to raise the unweighted rectangular lattice
>> >> distance of the comma to the power of the number of primes in
>> >> the comma. I'd be interested in other solutions!
>> >
>> >My solution is (Tenney) harmonic distance, of course, as it's
>> >length in the lattice of notes. I actually divide this by the
>> >volume of one cell in the lattice, to get closer to "number of
>> >notes". This is where my complexity numbers come from.
>>
>> You divide a length by a volume? Can you give an example?
>
>In my paper, I give the complexity of 5-limit meantone as 3.44. This
>comes from calculating the length of the comma:
>
>4*log(2)/log(2)+4*log(3)/log(2)+1*log(5)/log(2) = 12.661778
>
>and dividing by the volume of a unit cell:
>
>log2(2)*log2(3)*log2(5) = 3.680169
>
>yielding 3.440543.
>
>The idea being that there's one note per unit cell . . . but this
>number happens to coincide with what Gene calls the Tenney-L1 norm of
>the *val* rather than of the comma.

Wow.

Seems like if you didn't use a weighted length here, you wouldn't
need to divide by the unit cell.

>> What I say above just assumes the PB will be a block made of
>> orthogonal, equally-complex commas.
>
>That ignores both "straightness"

Yes, but it's just a shot at the average size of a block
employing the comma, so this seems ok, doesn't it?

>and the different complexities of different commas.

More complex commas will certainly enclose more notes by this
definition.

>The wedge product is, in a sense, an affine-invariant way of
>taking both into account without even trying :)

Again, can I wedge a single comma?

>No, 1 comma is a single vector, so can only "measure" a length. But
>the general idea is the same no matter how many commas you use. In
>the 5-limit (3D) lattice, a comma will define (though not uniquely!)
>a set of parallel, infinite-area slices, with the comma as "width".
>I've called these "periodicity sheets" in the past.

Right. Doesn't it make sense to raise this width by the
dimensionality of the sheet (which I suppose should be the
number of factors in the comma *minus 1*...)?

>> >> In the 5-limit, Hahn-limit n gives the same score to 6n points.
>> >> 81/80 is length 4, so that means 24 ratios share the same length.
>> >> It looks like a greater number of ratios have the same
>> >> expressibility as 81/80.
>> >
>> >Looks like fewer to me.
>>
>> All odds less than 80 (about 40) that are prime rel. to 81 (2/3 of
>> them?) will have the same expressibility. That looks like more
>> than 24 to me.
>
>No way. You said "In the 5-limit" above, so most of these odds never
>enter the picture. If they did, you'd have to somehow "count" them in
>the Hahn case as well, for a fair comparison.

Ah, you're right.

>> Anyway, the idea is that pain is quadratic. Er, here's a quote
>> from the original msg.
>>
>> """
>> That brings us to error. John deLaubenfels proposed that the
>> amount of "pain" mistuning causes us is the square of the error
>> in a simultaneity, and I agree with him. My own listening tests
>> indicate the exponent should be > 1, and 2 is natural because
>> it gives a nice distribution over the target. Also, there would
>> scarcely be a reason to temper with an exponent of 1... if we
>> spread a 24-cent comma over 12 fifths, we'd experience the same
>> amount of pain once we heard all of them, no matter how we
>> tempered. But (12 * 2^2) = 48 < 24^2 = 576.
>>
>> So, for error I arrived at:
>>
>> ((cents comma)/(comma-dist comma))^2 * (comma-dist comma)
>>
>> or
>>
>> (cents comma)^2 / (comma-dist comma)
>>
>> I use comma-dist here because I want octave-specific (allowing
>> tempered octaves), unweighted (the available 'pain relief' depends
>> only on the number of intervals to temper over) distances.
>
>Then you need to consider tempering over other consonances, IMHO.

You mean consonances that aren't factors in the comma? But in
this setup, we have no a priori harmonic limit.

>> >> >> >> Any interval y that can be factored using the factors
>> >> >> >> [a q r t ...] is in the [a q r t ...] limit. Great,
>> >> >> >> ain't it?
>> >> >> >
>> >> >> >It would be if you could somehow convince me that this
>> >> >> >implies C,E,G,B -> G,B,D,F# is impossible.
>> >> >>
>> >> >> Show me a single modulation like this where no two of
>> >> >> (a q r t...) share a common factor.
//
>> >Dude, I have no idea what you're showing or talking about here.
>> >By definition, no primes share a common factor. Maybe try again
>> >when you feel better.
>>
>> They actually don't have to be primes (though in everything I
>> did, they were). They only have to have the property that no
>> pair of them divide without a remainder.
>>
>> Let's start by answering this question: In the 7-limit, what is
>> the maximum number of notes a pair of chords can have in common.
>>
>> Answer: 2
>>
>> Question next: How many kinds of modulation achieve this?
>>
>> Answer: 1 (major <-> minor)
>
>What do you mean by "kinds of modulation"? For any tetrad (and I'm
>not sure why you're restricting this to tetrads here), there are six
>different tetrads (all of opposite quality) which share two notes
>with it. Why isn't that six kinds of "modulation"?

I mean to/from different kinds of chords.

>> Question final: What's the only way to up these numbers?
>
>You mean the numbers 2 and 1 above?

Yes.

>> Answer: Choose a harmonic limit like odd limits,
>
>It's weird that you seem to be talking as if chords must be
>constructed in a particular way given the limit. Seems totally
>divorced from the kind of thinking I thought was at work in this
>thread.

I have no idea why this is such a problem. This simple fact
has been mentioned many times on the tuning list and on this list,
by myself and Gene.

>> or in my formulation, any basis in which a pair of elements divide.
>
>The 7-limit clearly gets you "up to these numbers"

That's *to up*, as in, "to increase".

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 2:08:23 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >So if you're looking at 128:125, you wouldn't assume ratios of 3
> >> >are consonant?
> >>
> >> Exactly.
> >
> >Ouch. That rules out Augmented and the related systems, doesn't it?
>
> How so? I'm just using this to find commas.

I thought you're *starting* with commas here -- how can you use this
to *find* commas?

> >> >> My solution was to raise the unweighted rectangular lattice
> >> >> distance of the comma to the power of the number of primes in
> >> >> the comma. I'd be interested in other solutions!
> >> >
> >> >My solution is (Tenney) harmonic distance, of course, as it's
> >> >length in the lattice of notes. I actually divide this by the
> >> >volume of one cell in the lattice, to get closer to "number of
> >> >notes". This is where my complexity numbers come from.
> >>
> >> You divide a length by a volume? Can you give an example?
> >
> >In my paper, I give the complexity of 5-limit meantone as 3.44. This
> >comes from calculating the length of the comma:
> >
> >4*log(2)/log(2)+4*log(3)/log(2)+1*log(5)/log(2) = 12.661778
> >
> >and dividing by the volume of a unit cell:
> >
> >log2(2)*log2(3)*log2(5) = 3.680169
> >
> >yielding 3.440543.
> >
> >The idea being that there's one note per unit cell . . . but this
> >number happens to coincide with what Gene calls the Tenney-L1 norm of
> >the *val* rather than of the comma.
>
> Wow.
>
> Seems like if you didn't use a weighted length here, you wouldn't
> need to divide by the unit cell.

Because its volume would be 1? Anyway, I don't really need to divide
by it, but the numbers look more reasonable when you do. (Remember
the spooky complexity numbers for 7-limit 2D temperaments? They came
from this same formula, generalized to the two-comma case.)

> >> What I say above just assumes the PB will be a block made of
> >> orthogonal, equally-complex commas.
> >
> >That ignores both "straightness"
>
> Yes, but it's just a shot at the average size of a block
> employing the comma,

Not a very good one, I'd say. Some commas are pretty uniquely short!

>so this seems ok, doesn't it?

Not really, but I'm willing to go along for now . . .

> >and the different complexities of different commas.
>
> More complex commas will certainly enclose more notes by this
> definition.

I mean the different commas enclosing the PB!

> >The wedge product is, in a sense, an affine-invariant way of
> >taking both into account without even trying :)
>
> Again, can I wedge a single comma?

I guess, since you can wedge three commas, you can wedge one comma,
though of course it doesn't change as a result of the "calculation".

> >No, 1 comma is a single vector, so can only "measure" a length. But
> >the general idea is the same no matter how many commas you use. In
> >the 5-limit (3D) lattice, a comma will define (though not uniquely!)
> >a set of parallel, infinite-area slices, with the comma as "width".
> >I've called these "periodicity sheets" in the past.
>
> Right. Doesn't it make sense to raise this width by the
> dimensionality of the sheet (which I suppose should be the
> number of factors in the comma *minus 1*...)?

Not as far as I can see.

> >> >> In the 5-limit, Hahn-limit n gives the same score to 6n points.
> >> >> 81/80 is length 4, so that means 24 ratios share the same length.
> >> >> It looks like a greater number of ratios have the same
> >> >> expressibility as 81/80.
> >> >
> >> >Looks like fewer to me.
> >>
> >> All odds less than 80 (about 40) that are prime rel. to 81 (2/3 of
> >> them?) will have the same expressibility. That looks like more
> >> than 24 to me.
> >
> >No way. You said "In the 5-limit" above, so most of these odds never
> >enter the picture. If they did, you'd have to somehow "count" them in
> >the Hahn case as well, for a fair comparison.
>
> Ah, you're right.
>
> >> Anyway, the idea is that pain is quadratic. Er, here's a quote
> >> from the original msg.
> >>
> >> """
> >> That brings us to error. John deLaubenfels proposed that the
> >> amount of "pain" mistuning causes us is the square of the error
> >> in a simultaneity, and I agree with him. My own listening tests
> >> indicate the exponent should be > 1, and 2 is natural because
> >> it gives a nice distribution over the target. Also, there would
> >> scarcely be a reason to temper with an exponent of 1... if we
> >> spread a 24-cent comma over 12 fifths, we'd experience the same
> >> amount of pain once we heard all of them, no matter how we
> >> tempered. But (12 * 2^2) = 48 < 24^2 = 576.
> >>
> >> So, for error I arrived at:
> >>
> >> ((cents comma)/(comma-dist comma))^2 * (comma-dist comma)
> >>
> >> or
> >>
> >> (cents comma)^2 / (comma-dist comma)
> >>
> >> I use comma-dist here because I want octave-specific (allowing
> >> tempered octaves), unweighted (the available 'pain relief' depends
> >> only on the number of intervals to temper over) distances.
> >
> >Then you need to consider tempering over other consonances, IMHO.
>
> You mean consonances that aren't factors in the comma?

They may be "factors" in that they describe part of its taxicab route.

> But in
> this setup, we have no a priori harmonic limit.

So what stops you from saying that the number of intervals to temper
over is 1, since the comma itself is an interval?

> >> >> >> >> Any interval y that can be factored using the factors
> >> >> >> >> [a q r t ...] is in the [a q r t ...] limit. Great,
> >> >> >> >> ain't it?
> >> >> >> >
> >> >> >> >It would be if you could somehow convince me that this
> >> >> >> >implies C,E,G,B -> G,B,D,F# is impossible.
> >> >> >>
> >> >> >> Show me a single modulation like this where no two of
> >> >> >> (a q r t...) share a common factor.
> //
> >> >Dude, I have no idea what you're showing or talking about here.
> >> >By definition, no primes share a common factor. Maybe try again
> >> >when you feel better.
> >>
> >> They actually don't have to be primes (though in everything I
> >> did, they were). They only have to have the property that no
> >> pair of them divide without a remainder.
> >>
> >> Let's start by answering this question: In the 7-limit, what is
> >> the maximum number of notes a pair of chords can have in common.
> >>
> >> Answer: 2
> >>
> >> Question next: How many kinds of modulation achieve this?
> >>
> >> Answer: 1 (major <-> minor)
> >
> >What do you mean by "kinds of modulation"? For any tetrad (and I'm
> >not sure why you're restricting this to tetrads here), there are six
> >different tetrads (all of opposite quality) which share two notes
> >with it. Why isn't that six kinds of "modulation"?
>
> I mean to/from different kinds of chords.
>
> >> Question final: What's the only way to up these numbers?
> >
> >You mean the numbers 2 and 1 above?
>
> Yes.
>
> >> Answer: Choose a harmonic limit like odd limits,
> >
> >It's weird that you seem to be talking as if chords must be
> >constructed in a particular way given the limit. Seems totally
> >divorced from the kind of thinking I thought was at work in this
> >thread.
>
> I have no idea why this is such a problem. This simple fact
> has been mentioned many times on the tuning list and on this list,
> by myself and Gene.
>
> >> or in my formulation, any basis in which a pair of elements divide.
> >
> >The 7-limit clearly gets you "up to these numbers"
>
> That's *to up*, as in, "to increase".

Oh. Or you can just construct bigger chords in the lattice!

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 2:16:27 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> > Right. Doesn't it make sense to raise this width by the
> > dimensionality of the sheet (which I suppose should be the
> > number of factors in the comma *minus 1*...)?
>
> Not as far as I can see.

In particular, I don't want to assume finite PBs. But even if we do, I
want to assume, when comparing individual commas, that the *rest* of
the commas needed to complete the PB, and/or their impact on the number
of notes, are the *same* for each of the commas we're comparing and the
only thing changing is that one comma itself.

If you really want to raise the complexities to some power, I suppose
that's OK, it won't affect the rankings in any given comparison anyway.

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 2:32:14 PM

>> > Right. Doesn't it make sense to raise this width by the
>> > dimensionality of the sheet (which I suppose should be the
>> > number of factors in the comma *minus 1*...)?
>>
>> Not as far as I can see.
>
>In particular, I don't want to assume finite PBs. But even if we do, I
>want to assume, when comparing individual commas, that the *rest* of
>the commas needed to complete the PB, and/or their impact on the number
>of notes, are the *same* for each of the commas we're comparing and the
>only thing changing is that one comma itself.

Wouldn't this tend to average out?

>If you really want to raise the complexities to some power, I suppose
>that's OK, it won't affect the rankings in any given comparison anyway.

Not a fixed power, the power is the number of factors in the
comma. So a length 5 3-limit comma has 5 oct.eqv. "notes", and
a length 5 7-limit comma has 125.
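
A sketch of that count, reading "the number of factors" as the number of
distinct odd primes in the comma (since the tally is octave-equivalent);
the function name is illustrative:

    def note_estimate(length, num_odd_primes):
        # rough octave-equivalent 'notes' a comma contributes to a PB
        return length ** num_odd_primes

    print(note_estimate(5, 1))   # length-5 3-limit comma -> 5
    print(note_estimate(5, 3))   # length-5 7-limit comma -> 125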

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

11/14/2005 2:38:47 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> 4*log(2)/log(2)+4*log(3)/log(2)+1*log(5)/log(2) = 12.661778
>
> and dividing by the volume of a unit cell:
>
> log2(2)*log2(3)*log2(5) = 3.680169

I'd suggest sticking to a single notation (log2 by preference) and not
using both log(x)/log(2) *and* log2(x).

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 2:52:08 PM

>> >> >So if you're looking at 128:125, you wouldn't assume ratios
>> >> >of 3 are consonant?
>> >>
>> >> Exactly.
>> >
>> >Ouch. That rules out Augmented and the related systems, doesn't it?
>>
>> How so? I'm just using this to find commas.
>
>I thought you're *starting* with commas here -- how can you use this
>to *find* commas?

I plug in all ratios with denominator < something, and get out
the ten "best" commas.

>> >> >> My solution was to raise the unweighted rectangular lattice
>> >> >> distance of the comma to the power of the number of primes in
>> >> >> the comma. I'd be interested in other solutions!
//
>> >In my paper, I give the complexity of 5-limit meantone as 3.44.
>> >This comes from calculating the length of the comma:
>> >
>> >4*log(2)/log(2)+4*log(3)/log(2)+1*log(5)/log(2) = 12.661778
>> >
>> >and dividing by the volume of a unit cell:
>> >
>> >log2(2)*log2(3)*log2(5) = 3.680169
>> >
>> >yielding 3.440543.
>> >
>> >The idea being that there's one note per unit cell . . . but this
>> >number happens to coincide with what Gene calls the Tenney-L1 norm
>> >of the *val* rather than of the comma.
>>
>> Wow.
>>
>> Seems like if you didn't use a weighted length here, you wouldn't
>> need to divide by the unit cell.
>
>Because its volume would be 1?

Yup.

>Anyway, I don't really need to divide by it, but the numbers look
>more reasonable when you do, (Remember the spooky complexity numbers
>for 7-limit 2D temperaments? They came from this same formula,
>generalized to the two-comma case.)

I remember there being spooky numbers, but not what that was
about. Was way pre-TOP, no?

>> >> What I say above just assumes the PB will be a block made of
>> >> orthogonal, equally-complex commas.
>> >
>> >That ignores both "straightness"
>>
>> Yes, but it's just a shot at the average size of a block
>> employing the comma,
>
>Not a very good one, I'd say. Some commas are pretty uniquely short!

That's the 1663 problem I ran into.

>> >and the different complexities of different commas.
>>
>> More complex commas will certainly enclose more notes by this
>> definition.
>
>I mean the different commas enclosing the PB!

Ah. I just assumed they'd average out.

>> >> Anyway, the idea is that pain is quadratic. Er, here's a quote
>> >> from the original msg.
>> >>
>> >> """
>> >> That brings us to error. John deLaubenfels proposed that the
>> >> amount of "pain" mistuning causes us is the square of the error
>> >> in a simultaneity, and I agree with him. My own listening tests
>> >> indicate the exponent should be > 1, and 2 is natural because
>> >> it gives a nice distribution over the target. Also, there would
>> >> scarcely be a reason to temper with an exponent of 1... if we
>> >> spread a 24-cent comma over 12 fifths, we'd experience the same
>> >> amount of pain once we heard all of them, no matter how we
>> >> tempered. But (12 * 2^2) = 48 < 24^2 = 576.
>> >>
>> >> So, for error I arrived at:
>> >>
>> >> (cents comma)^2 / (comma-dist comma)
>> >>
>> >> I use comma-dist here because I want octave-specific (allowing
>> >> tempered octaves), unweighted (the available 'pain relief'
>> >> depends only on the number of intervals to temper over)
>> >> distances.
>> >
>> >Then you need to consider tempering over other consonances, IMHO.
>>
>> You mean consonances that aren't factors in the comma?
>
>They may be "factors" in that they describe part of its taxicab route.

You lost me here.

>> But in this setup, we have no a priori harmonic limit.
>
>So what stops you from saying that the number of intervals to temper
>over is 1, since the comma itself is an interval?

I thought I answered this already -- the prime factorization of
the comma is the harmonic limit. But we're not allowed to 'fill
it in' because we think the missing numbers are consonant.

>> >> >> >> >It would be if you could somehow convince me that this
>> >> >> >> >implies C,E,G,B -> G,B,D,F# is impossible.
>> >> >> >>
>> >> >> >> Show me a single modulation like this where no two of
>> >> >> >> (a q r t...) share a common factor.
//
>> >> Let's start by answering this question: In the 7-limit, what is
>> >> the maximum number of notes a pair of chords can have in common.
>> >>
>> >> Answer: 2
>> >>
>> >> Question next: How many kinds of modulation achieve this?
>> >>
>> >> Answer: 1 (major <-> minor)
//
>> >> Question final: What's the only way to increase these answers?
//
>Oh. Or you can just construct bigger chords in the lattice!

Without basis elements sharing factors, a dyad is the most you
can preserve, and the only modulations that do it are major<->minor.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 3:45:46 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> > Right. Doesn't it make sense to raise this width by the
> >> > dimensionality of the sheet (which I suppose should be the
> >> > number of factors in the comma *minus 1*...)?
> >>
> >> Not as far as I can see.
> >
> >In particular, I don't want to assume finite PBs. But even if we do, I
> >want to assume, when comparing individual commas, that the *rest* of
> >the commas needed to complete the PB, and/or their impact on the number
> >of notes, are the *same* for each of the commas we're comparing and the
> >only thing changing is that one comma itself.
>
> Wouldn't this tend to average out?

Wouldn't what tend to average out?

> >If you really want to raise the complexities to some power, I suppose
> >that's OK, it won't affect the rankings in any given comparison anyway.
>
> Not a fixed power, the power is the number of factors in the
> comma. So a length 5 3-limit comma has 5 oct.eqv. "notes", and
> a length 5 7-limit comma has 125.

Augmented would seem to fall by the wayside, since you don't consider
128:125 to have 3 as a factor -- so comparisons side-by-side with
other 5-limit commas, in which the commas are intended to serve the
same purposes, would fail.

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 3:46:55 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...> wrote:
>
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
>
> > 4*log(2)/log(2)+4*log(3)/log(2)+1*log(5)/log(2) = 12.661778
> >
> > and dividing by the volume of a unit cell:
> >
> > log2(2)*log2(3)*log2(5) = 3.680169
>
> I'd suggest sticking to a single notation (log2 by preference) and not
> using both log(x)/log(2) *and* log2(x).

Whoops! I tried to use log2 but ended up copying too much text from my
Matlab window, where I must use log(x)/log(2).

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 4:03:31 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >So if you're looking at 128:125, you wouldn't assume ratios
> >> >> >of 3 are consonant?
> >> >>
> >> >> Exactly.
> >> >
> >> >Ouch. That rules out Augmented and the related systems, doesn't it?
> >>
> >> How so? I'm just using this to find commas.
> >
> >I thought you're *starting* with commas here -- how can you use this
> >to *find* commas?
>
> I plug in all ratios with denominator < something, and get out
> the ten "best" commas.

If I understand this thread correctly, the sense of "best" is quite
abstract and perhaps not entirely consistent . . .

> >> >> >> My solution was to raise the unweighted rectangular lattice
> >> >> >> distance of the comma to the power of the number of primes in
> >> >> >> the comma. I'd be interested in other solutions!
> //
> >> >In my paper, I give the complexity of 5-limit meantone as 3.44.
> >> >This comes from calculating the length of the comma:
> >> >
> >> >4*log(2)/log(2)+4*log(3)/log(2)+1*log(5)/log(2) = 12.661778
> >> >
> >> >and dividing by the volume of a unit cell:
> >> >
> >> >log2(2)*log2(3)*log2(5) = 3.680169
> >> >
> >> >yielding 3.440543.
> >> >
> >> >The idea being that there's one note per unit cell . . . but this
> >> >number happens to coincide with what Gene calls the Tenney-L1 norm
> >> >of the *val* rather than of the comma.
> >>
> >> Wow.
> >>
> >> Seems like if you didn't use a weighted length here, you wouldn't
> >> need to divide by the unit cell.
> >
> >Because its volume would be 1?
>
> Yup.
>
> >Anyway, I don't really need to divide by it, but the numbers look
> >more reasonable when you do. (Remember the spooky complexity numbers
> >for 7-limit 2D temperaments? They came from this same formula,
> >generalized to the two-comma case.)
>
> I remember there being spooky numbers, but not what that was
> about. Was way pre-TOP, no?

No, but of course how you optimize the tuning (TOP or otherwise) has
nothing to do with the complexity calculation anyway.

> >> >> What I say above just assumes the PB will be a block made of
> >> >> orthogonal, equally-complex commas.
> >> >
> >> >That ignores both "straightness"
> >>
> >> Yes, but it's just a shot at the average size of a block
> >> employing the comma,
> >
> >Not a very good one, I'd say. Some commas are pretty uniquely short!
>
> That's the 1663 problem I ran into.

I would have thought 81:80 would be an example . . .
>
> >> >and the different complexities of different commas.
> >>
> >> More complex commas will certainly enclose more notes by this
> >> definition.
> >
> >I mean the different commas enclosing the PB!
>
> Ah. I just assumed they'd average out.

To the one you're looking at in particular? Why/how?

> >> >> Anyway, the idea is that pain is quadratic. Er, here's a quote
> >> >> from the original msg.
> >> >>
> >> >> """
> >> >> That brings us to error. John deLaubenfels proposed that the
> >> >> amount of "pain" mistuning causes us is the square of the error
> >> >> in a simultaneity, and I agree with him. My own listening tests
> >> >> indicate the exponent should be > 1, and 2 is natural because
> >> >> it gives a nice distribution over the target. Also, there would
> >> >> scarcely be a reason to temper with an exponent of 1... if we
> >> >> spread a 24-cent comma over 12 fifths, we'd experience the same
> >> >> amount of pain once we heard all of them, no matter how we
> >> >> tempered. But (12 * 2^2) = 48 < 24^2 = 576.
> >> >>
> >> >> So, for error I arrived at:
> >> >>
> >> >> (cents comma)^2 / (comma-dist comma)
> >> >>
> >> >> I use comma-dist here because I want octave-specific (allowing
> >> >> tempered octaves), unweighted (the available 'pain relief'
> >> >> depends only on the number of intervals to temper over)
> >> >> distances.
> >> >
> >> >Then you need to consider tempering over other consonances, IMHO.
> >>
> >> You mean consonances that aren't factors in the comma?
> >
> >They may be "factors" in that they describe part of its taxicab route.
>
> You lost me here.

5/3 * 4/3 * 2/3 * 2/3 = 80/81; each of these consonances can be
considered a "factor" of the comma.
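
Checking that breakdown with exact arithmetic (just a sketch; the point is
that a chain of consonant steps multiplies out to the comma):

    from fractions import Fraction as F

    steps = [F(5, 3), F(4, 3), F(2, 3), F(2, 3)]
    product = F(1)
    for s in steps:
        product *= s
    print(product)   # 80/81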

> >> But in this setup, we have no a priori harmonic limit.
> >
> >So what stops you from saying that the number of intervals to temper
> >over is 1, since the comma itself is an interval?
>
> I thought I answered this already -- the prime factorization of
> the comma is the harmonic limit.

Instead of an a priori limit. Seems backwards to me.

> But we're not allowed to 'fill
> it in' because we think the missing numbers are consonant.
>
> >> >> >> >> >It would be if you could somehow convince me that this
> >> >> >> >> >implies C,E,G,B -> G,B,D,F# is impossible.
> >> >> >> >>
> >> >> >> >> Show me a single modulation like this where no two of
> >> >> >> >> (a q r t...) share a common factor.
> //
> >> >> Let's start by answering this question: In the 7-limit, what is
> >> >> the maximum number of notes a pair of chords can have in common.
> >> >>
> >> >> Answer: 2
> >> >>
> >> >> Question next: How many kinds of modulation achieve this?
> >> >>
> >> >> Answer: 1 (major <-> minor)
> //
> >> >> Question final: What's the only way to increase these answers?
> //
> >Oh. Or you can just construct bigger chords in the lattice!
>
> Without basis elements sharing factors,

I don't think of chords that way, Carl.

> a dyad is the most you
> can preserve, and the only modulations that do it are major<->minor.

You're thinking of chords as simplices in a triangular (A_n?) lattice.

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 5:02:25 PM

>> >> > Right. Doesn't it make sense to raise this width by the
>> >> > dimensionality of the sheet (which I suppose should be the
>> >> > number of factors in the comma *minus 1*...)?
>> >>
>> >> Not as far as I can see.
>> >
>> >In particular, I don't want to assume finite PBs. But even if we
>> >do, I want to assume, when comparing individual commas, that the
>> >*rest* of the commas needed to complete the PB, and/or their
>> >impact on the number of notes, are the *same* for each of the
>> >commas we're comparing and the only thing changing is that one
>> >comma itself.
>>
>> Wouldn't this tend to average out?
>
>Wouldn't what tend to average out?

The angles and complexities of the other commas.

>> >If you really want to raise the complexities to some power, I
>> >suppose that's OK, it won't affect the rankings in any given
>> >comparison anyway.
>>
>> Not a fixed power, the power is the number of factors in the
>> comma. So a length 5 3-limit comma has 5 oct.eqv. "notes", and
>> a length 5 7-limit comma has 125.
>
>Augmented would seem to fall by the wayside, since you don't consider
>128:125 to have 3 as a factor -- so comparisons side-by-side with
>other 5-limit commas, in which the commas are intended to serve the
>same purposes, would fail.

Compared to something like 81:80, my way boosts 125:128, since the
exponent on the complexity goes down (ceteris paribus).

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 5:03:22 PM

>Whoops! I tried to use log2 but ended up copying too much text from my
>Matlab window, where I must use log(x)/log(2).

Can't you make a library function called log2 (just curious)?

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 5:34:03 PM

>> I plug in all ratios with denominator < something, and get out
>> the ten "best" commas.
>
>If I understand this thread correctly, the sense of "best" is quite
>abstract

Naturally.

>and perhaps not entirely consistent . . .

I wonder how? (The problems with not agreeing with the assessment
of "best" were pointed out in the first message... I'm working on
it. But if it's not consistent there's a bigger problem than I'm
aware of.)

>> >> Yes, but it's just a shot at the average size of a block
>> >> employing the comma,
>> >
>> >Not a very good one, I'd say. Some commas are pretty uniquely
>> >short!
>>
>> That's the 1663 problem I ran into.
>
>I would have thought 81:80 would be an example . . .

That one deserves credit for being short.

>> >> >and the different complexities of different commas.
>> >>
>> >> More complex commas will certainly enclose more notes by this
>> >> definition.
>> >
>> >I mean the different commas enclosing the PB!
>>
>> Ah. I just assumed they'd average out.
>
>To the one you're looking at in particular? Why/how?

No... sum over all the possible additional commas and get
nothing.

>> >> >> That brings us to error.
>> >> >> the idea is that pain is quadratic.
>> >> >> I use comma-dist here because I want octave-specific (allowing
>> >> >> tempered octaves), unweighted (the available 'pain relief'
>> >> >> depends only on the number of intervals to temper over)
>> >> >> distances.
>> >> >
>> >> >Then you need to consider tempering over other consonances,
>5/3 * 4/3 * 2/3 * 2/3 = 80/81; each of these consonances can be
>considered a "factor" of the comma.

Here's how I do it

(comma-dist 81/80) -> 9

The "factors" are (2 2 2 2 3 3 3 3 5).

What's the problem?
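
One reading of that comma-dist in Python, listing every prime factor of
numerator and denominator with multiplicity and counting them (a sketch;
the names stand in for the (comma-dist ...) notation above):

    from fractions import Fraction

    def prime_factors(n):
        out, p = [], 2
        while n > 1:
            while n % p == 0:
                out.append(p)
                n //= p
            p += 1
        return out

    def comma_factors(ratio):
        r = Fraction(ratio)
        return prime_factors(r.numerator) + prime_factors(r.denominator)

    def comma_dist(ratio):
        return len(comma_factors(ratio))

    print(sorted(comma_factors(Fraction(81, 80))))  # [2, 2, 2, 2, 3, 3, 3, 3, 5]
    print(comma_dist(Fraction(81, 80)))             # 9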

>> >> But in this setup, we have no a priori harmonic limit.
>> >
>> >So what stops you from saying that the number of intervals to
>> >temper over is 1, since the comma itself is an interval?
>>
>> I thought I answered this already -- the prime factorization of
>> the comma is the harmonic limit.
>
>Instead of an a priori limit. Seems backwards to me.

It's just the way I am. :)

>> >> >> >> >> Show me a single modulation like this where no two of
>> >> >> >> >> (a q r t...) share a common factor.
//
>> >> >> question: In the 7-limit, what is the maximum number
>> >> >> of notes a pair of chords can have in common?
>> >> >> Answer: 2
>> >> >>
>> >> >> How many kinds of modulation achieve this?
>> >> >> Answer: 1 (major <-> minor)
>> >> >>
>> >> >> What's the only way to increase these answers?
//
>> you can just construct bigger chords in the lattice!
>>
>> Without basis elements sharing factors,
>
>I don't think of chords that way, Carl.
>
>> a dyad is the most you
>> can preserve, and the only modulations that do it are major<->minor.
>
>You're thinking of chords as simplices in a triangular (A_n?) lattice.

I think so. How do you think of them?

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/15/2005 11:18:27 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >Whoops! I tried to use log2 but ended up copying too much text from my
> >Matlab window, where I must use log(x)/log(2).
>
> Can't you make a library function called log2 (just curious)?
>
> -Carl

Yup! I guess I'm a fan of making my life difficult and doing everything
at the command line :)

🔗Paul Erlich <perlich@aya.yale.edu>

11/15/2005 11:17:40 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> > Right. Doesn't it make sense to raise this width by the
> >> >> > dimensionality of the sheet (which I suppose should be the
> >> >> > number of factors in the comma *minus 1*...)?
> >> >>
> >> >> Not as far as I can see.
> >> >
> >> >In particular, I don't want to assume finite PBs. But even if we
> >> >do, I want to assume, when comparing individual commas, that the
> >> >*rest* of the commas needed to complete the PB, and/or their
> >> >impact on the number of notes, are the *same* for each of the
> >> >commas we're comparing and the only thing changing is that one
> >> >comma itself.
> >>
> >> Wouldn't this tend to average out?
> >
> >Wouldn't what tend to average out?
>
> The angles and complexities of the other commas.

Sure. That's why I don't want to assume that they're similar in
length to the given comma -- rather, I want to assume they're
effectively the same for each given comma.

Angles differing from 90 degrees result in smaller volumes, so that
part can't average out.
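
A quick numerical illustration of that point in the 2-D case: with the
edge lengths held fixed, the spanned area (the absolute determinant) peaks
at 90 degrees and only shrinks as the angle closes (plain Python, no
particular lattice assumed):

    import math

    def area(v1, v2):
        # |det| of the 2x2 matrix with rows v1, v2 = parallelogram area
        return abs(v1[0] * v2[1] - v1[1] * v2[0])

    unit = (1.0, 0.0)
    for deg in (90, 60, 30, 10):
        theta = math.radians(deg)
        print(deg, round(area(unit, (math.cos(theta), math.sin(theta))), 3))
    # 90 -> 1.0, 60 -> 0.866, 30 -> 0.5, 10 -> 0.174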

> >> >If you really want to raise the complexities to some power, I
> >> >suppose that's OK, it won't affect the rankings in any given
> >> >comparison anyway.
> >>
> >> Not a fixed power, the power is the number of factors in the
> >> comma. So a length 5 3-limit comma has 5 oct.eqv. "notes", and
> >> a length 5 7-limit comma has 125.
> >
> >Augmented would seem to fall by the wayside, since you don't consider
> >128:125 to have 3 as a factor -- so comparisons side-by-side with
> >other 5-limit commas, in which the commas are intended to serve the
> >same purposes, would fail.
>
> Compared to something like 81:80, my way boosts 125:128, since the
> exponent on the complexity goes down (Ceteris paribus).

It "falls by the wayside" in that it gets an unfair boost due to the
assumption that prime 3 nowhere enters the picture; but Augmented is
defined so as to include prime 3.

🔗Paul Erlich <perlich@aya.yale.edu>

11/15/2005 11:31:19 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> I plug in all ratios with denominator < something, and get out
> >> the ten "best" commas.
> >
> >If I understand this thread correctly, the sense of "best" is quite
> >abstract
>
> Naturally.
>
> >and perhaps not entirely consistent . . .
>
> I wonder how?

There seems to be a circularity in that the comma defines the prime
limit, which then defines how you assess the comma . . .

> (The problems with not agreeing with the assessment
> of "best" were pointed out in the first message... I'm working on
> it. But if it's not consistent there's a bigger problem than I'm
> aware of.)

I think I had some other inconsistency in mind when I wrote that,
actually.

> >> >> Yes, but it's just a shot at the average size of a block
> >> >> employing the comma,
> >> >
> >> >Not a very good one, I'd say. Some commas are pretty uniquely
> >> >short!
> >>
> >> That's the 1663 problem I ran into.
> >
> >I would have thought 81:80 would be an example . . .
>
> That one deserves credit for being short.

Right, but you can't assume that it's just as easy to find a set of
equally short commas to complete the PB in this case as in any other.

> >> >> >and the different complexities of different commas.
> >> >>
> >> >> More complex commas will certainly enclose more notes by this
> >> >> definition.
> >> >
> >> >I mean the different commas enclosing the PB!
> >>
> >> Ah. I just assumed they'd average out.
> >
> >To the one you're looking at in particular? Why/how?
>
> No... sum over all the possible additional commas and get
> nothing.

Huh? What does that mean?

> >> >> >> That brings us to error.
> >> >> >> the idea is that pain is quadratic.
> >> >> >> I use comma-dist here because I want octave-specific (allowing
> >> >> >> tempered octaves), unweighted (the available 'pain relief'
> >> >> >> depends only on the number of intervals to temper over)
> >> >> >> distances.
> >> >> >
> >> >> >Then you need to consider tempering over other consonances,
> >5/3 * 4/3 * 2/3 * 2/3 = 80/81; each of these consonances can be
> >considered a "factor" of the comma.
>
> Here's how I do it
>
> (comma-dist 81/80) -> 9
>
> The "factors" are (2 2 2 2 3 3 3 3 5).
>
> What's the problem?

IMO, the two breakdowns should yield the same result. At least,
they're equally meaningful.

> >> >> But in this setup, we have no a priori harmonic limit.
> >> >
> >> >So what stops you from saying that the number of intervals to
> >> >temper over is 1, since the comma itself is an interval?
> >>
> >> I thought I answered this already -- the prime factorization of
> >> the comma is the harmonic limit.
> >
> >Instead of an a priori limit. Seems backwards to me.
>
> It's just the way I am. :)
>
>
> >> >> >> >> >> Show me a single modulation like this where no two of
> >> >> >> >> >> (a q r t...) share a common factor.
> //
> >> >> >> question: In the 7-limit, what is the maximum number
> >> >> >> of notes a pair of chords can have in common?
> >> >> >> Answer: 2
> >> >> >>
> >> >> >> How many kinds of modulation achieve this?
> >> >> >> Answer: 1 (major <-> minor)
> >> >> >>
> >> >> >> What's the only way to increase these answers?
> //
> >> you can just construct bigger chords in the lattice!
> >>
> >> Without basis elements sharing factors,
> >
> >I don't think of chords that way, Carl.
> >
> >> a dyad is the most you
> >> can preserve, and the only modulations that do it are major<->minor.
> >
> >You're thinking of chords as simplices in a triangular (A_n?) lattice.
>
> I think so. How do you think of them?

Blobs in the lattice, of any shape or size.

🔗Carl Lumma <ekin@lumma.org>

11/15/2005 1:03:49 PM

>>> and perhaps not entirely consistent . . .
>>
>> I wonder how?
>
> There seems to be a circularity in that the comma defines the prime
> limit, which then defines how you assess the comma . . .

Yes, there is some, and this is a problem.

>> (The problems with not agreeing with the assessment
>> of "best" were pointed out in the first message... I'm working on
>> it. But if it's not consistent there's a bigger problem than I'm
>> aware of.)
>
> I think I had some other inconsistency in mind when I wrote that,
> actually.

I didn't mean any particular consistency.

>> >> >> Yes, but it's just a shot at the average size of a block
>> >> >> employing the comma,
>> >> >
>> >> >Not a very good one, I'd say. Some commas are pretty uniquely
>> >> >short!
>> >>
>> >> That's the 1663 problem I ran into.
>> >
>> >I would have thought 81:80 would be an example . . .
>>
>> That one deserves credit for being short.
>
>Right, but you can't assume that it's just as easy to find a set of
>equally short commas to complete the PB in this case as in any other.

If you don't like the 'unit cube' rationalization, howabout a
'how many notes did we have to search to find this comma (notes
in the ball from the origin to the comma)' one?

>> >> >> >> That brings us to error.
>> >> >> >> the idea is that pain is quadratic.
>> >> >> >> I use comma-dist here because I want octave-specific
>> >> >> >> (allowing tempered octaves), unweighted (the available
>> >> >> >> 'pain relief' depends only on the number of intervals
>> >> >> >> to temper over) distances.
>> >
>> >Then you need to consider tempering over other consonances,
>> >5/3 * 4/3 * 2/3 * 2/3 = 80/81; each of these consonances can
>> >be considered a "factor" of the comma.
>>
>> Here's how I do it
>>
>> (comma-dist 81/80) -> 9
>>
>> The "factors" are (2 2 2 2 3 3 3 3 5).
>>
>> What's the problem?
>
>IMO, the two breakdowns should yield the same result. At least,
>they're equally meaningful.

How could allowing two factors to be removed per step and allowing
one ever give the same result?

>> >> >> >> >> >> Show me a single modulation like this where no two of
>> >> >> >> >> >> (a q r t...) share a common factor.
>> //
>> >> >> >> question: In the 7-limit, what is the maximum number
>> >> >> >> of notes a pair of chords can have in common?
>> >> >> >> Answer: 2
>> >> >> >>
>> >> >> >> How many kinds of modulation achieve this?
>> >> >> >> Answer: 1 (major <-> minor)
>> >> >> >>
>> >> >> >> What's the only way to increase these answers?
>> //
>> >> you can just construct bigger chords in the lattice!
>> >>
>> >> Without basis elements sharing factors,
>> >
>> >I don't think of chords that way, Carl.
>> >
>> >> a dyad is the most you can preserve, and the only modulations
>> >> that do it are major<->minor.
>> >
>> >You're thinking of chords as simplices in a triangular (A_n?)
>> >lattice.
>>
>> I think so. How do you think of them?
>
>Blobs in the lattice, of any shape or size.

Those are chords, but not necessarily consonant chords.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/16/2005 2:07:09 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> If you don't like the 'unit cube' rationalization, howabout a
> 'how many notes did we have to search to find this comma (notes
> in the ball from the origin to the comma)' one?

That's fine within a particular prime limit, as it would just be a
fixed power of the 'length' of the comma.

> >> >> >> >> That brings us to error.
> >> >> >> >> the idea is that pain is quadratic.
> >> >> >> >> I use comma-dist here because I want octave-specific
> >> >> >> >> (allowing tempered octaves), unweighted (the available
> >> >> >> >> 'pain relief' depends only on the number of intervals
> >> >> >> >> to temper over) distances.
> >> >
> >> >Then you need to consider tempering over other consonances,
> >> >5/3 * 4/3 * 2/3 * 2/3 = 80/81; each of these consonances can
> >> >be considered a "factor" of the comma.
> >>
> >> Here's how I do it
> >>
> >> (comma-dist 81/80) -> 9
> >>
> >> The "factors" are (2 2 2 2 3 3 3 3 5).
> >>
> >> What's the problem?
> >
> >IMO, the two breakdowns should yield the same result. At least,
> >they're equally meaningful.
>
> How could allowing two factors to be removed per step and allowing
> one ever give the same result?

They do in the Tenney case, for example.

> >> >> >> >> >> >> Show me a single modulation like this where no two of
> >> >> >> >> >> >> (a q r t...) share a common factor.
> >> //
> >> >> >> >> question: In the 7-limit, what is the maximum number
> >> >> >> >> of notes a pair of chords can have in common?
> >> >> >> >> Answer: 2
> >> >> >> >>
> >> >> >> >> How many kinds of modulation achieve this?
> >> >> >> >> Answer: 1 (major <-> minor)
> >> >> >> >>
> >> >> >> >> What's the only way to increase these answers?
> >> //
> >> >> you can just construct bigger chords in the lattice!
> >> >>
> >> >> Without basis elements sharing factors,
> >> >
> >> >I don't think of chords that way, Carl.
> >> >
> >> >> a dyad is the most you can preserve, and the only modulations
> >> >> that do it are major<->minor.
> >> >
> >> >You're thinking of chords as simplices in a triangular (A_n?)
> >> >lattice.
> >>
> >> I think so. How do you think of them?
> >
> >Blobs in the lattice, of any shape or size.
>
> Those are chords, but not necessarily consonant chords.

One doesn't draw any sharp line between consonance and dissonance in
the prime-limit based paradigms like TOP and 'Kees'. It's understood
to be a continuum ("shades of gray") instead.

🔗Carl Lumma <ekin@lumma.org>

11/16/2005 3:51:10 PM

>> If you don't like the 'unit cube' rationalization, howabout a
>> 'how many notes did we have to search to find this comma (notes
>> in the ball from the origin to the comma)' one?
>
>That's fine within a particular prime limit, as it would just be a
>fixed power of the 'length' of the comma.

But not for comparing different prime limits?

>> >> Here's how I do it
>> >>
>> >> (comma-dist 81/80) -> 9
>> >>
>> >> The "factors" are (2 2 2 2 3 3 3 3 5).
>> >>
>> >> What's the problem?
>> >
>> >IMO, the two breakdowns should yield the same result. At least,
>> >they're equally meaningful.
>>
>> How could allowing two factors to be removed per step and allowing
>> one ever give the same result?
>
>They do in the Tenney case, for example.

Can you give an example?

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/16/2005 7:29:53 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> If you don't like the 'unit cube' rationalization, howabout a
> >> 'how many notes did we have to search to find this comma (notes
> >> in the ball from the origin to the comma)' one?
> >
> >That's fine within a particular prime limit, as it would just be a
> >fixed power of the 'length' of the comma.
>
> But not for comparing different prime limits?

You mean "not for commas in different prime limits"? If so, my answer
is no. I think you can understand why I say that now.

> >> >> Here's how I do it
> >> >>
> >> >> (comma-dist 81/80) -> 9
> >> >>
> >> >> The "factors" are (2 2 2 2 3 3 3 3 5).
> >> >>
> >> >> What's the problem?
> >> >
> >> >IMO, the two breakdowns should yield the same result. At least,
> >> >they're equally meaningful.
> >>
> >> How could allowing two factors to be removed per step and allowing
> >> one ever give the same result?
> >
> >They do in the Tenney case, for example.
>
> Can you give an example?

No matter how (or even if) you break down 81:80 into simpler
intervals (as long as you don't backtrack along any direction in the
lattice), the distance will still be the same:

log(81*80)
= log(2)+log(2)+log(2)+log(2)+log(3)+log(3)+log(3)+log(3)+log(5)
= log(3*5)+log(3*4)+log(3*2)+log(3*2)
= (etc.)

And if you distribute the comma the TOP way, the maximum damage to
any interval making up the comma is the same, regardless of how (or
if) you break the comma down into simpler intervals.
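
Here is a small numerical check of that claim, assuming (as usual) that the
Tenney harmonic distance of p/q in lowest terms is log2(p*q); the breakdown
into 3/2, 3/2, 3/4 and 3/5 mirrors the factorization quoted above, and the
names are made up for the sketch.

    # Two breakdowns of 81:80 -- into primes, and into the consonances
    # 3/2, 3/2, 3/4, 3/5 -- give the same total Tenney distance, since
    # neither backtracks on any prime.
    from math import log2
    from fractions import Fraction

    def tenney(r):
        r = Fraction(r)
        return log2(r.numerator * r.denominator)

    whole = tenney(Fraction(81, 80))
    primes = 4 * tenney(2) + 4 * tenney(3) + tenney(5)
    consonances = sum(tenney(f) for f in
                      (Fraction(3, 2), Fraction(3, 2), Fraction(3, 4), Fraction(3, 5)))
    print(whole, primes, consonances)   # all equal log2(6480), about 12.66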

🔗Carl Lumma <ekin@lumma.org>

11/16/2005 11:14:13 PM

>> >> If you don't like the 'unit cube' rationalization, howabout a
>> >> 'how many notes did we have to search to find this comma (notes
>> >> in the ball from the origin to the comma)' one?
>> >
>> >That's fine within a particular prime limit, as it would just be a
>> >fixed power of the 'length' of the comma.
>>
>> But not for comparing different prime limits?
>
>You mean "not for commas in different prime limits"?

Yes.

>If so, my answer is no. I think you can understand why I say that now.

Sadly not. :(

>> >> How could allowing two factors to be removed per step and
>> >> allowing one ever give the same result?
>> >
>> >They do in the Tenney case, for example.
>>
>> Can you give an example?
>
>No matter how (or even if) you break down 81:80 into simpler
>intervals (as long as you don't backtrack along any direction in the
>lattice), the distance will still be the same:
>
>log(81*80)
>= log(2)+log(2)+log(2)+log(2)+log(3)+log(3)+log(3)+log(3)+log(5)
>= log(3*5)+log(3*4)+log(3*2)+log(3*2)
>= (etc.)

3*5 is not 3/5, and log(15) != log(3)-log(5)

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/17/2005 1:13:13 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> If you don't like the 'unit cube' rationalization, howabout a
> >> >> 'how many notes did we have to search to find this comma (notes
> >> >> in the ball from the origin to the comma)' one?
> >> >
> >> >That's fine within a particular prime limit, as it would just be a
> >> >fixed power of the 'length' of the comma.
> >>
> >> But not for comparing different prime limits?
> >
> >You mean "not for commas in different prime limits"?
>
> Yes.
>
> >If so, my answer is no. I think you can understand why I say that now.
>
> Sadly not. :(

Remember the 128:125 example we were just discussing?

> >> >> How could allowing two factors to be removed per step and
> >> >> allowing one ever give the same result?
> >> >
> >> >They do in the Tenney case, for example.
> >>
> >> Can you give an example?
> >
> >No matter how (or even if) you break down 81:80 into simpler
> >intervals (as long as you don't backtrack along any direction in the
> >lattice), the distance will still be the same:
> >
> >log(81*80)
> >= log(2)+log(2)+log(2)+log(2)+log(3)+log(3)+log(3)+log(3)+log(5)
> >= log(3*5)+log(3*4)+log(3*2)+log(3*2)
> >= (etc.)
>
> 3*5 is not 3/5, and log(15) != log(3)-log(5)

So what? I don't get what the point of this statement is or what it's
supposed to show. As you know, the Tenney harmonic distance of 3/5 is
log(3*5) = log(15) = log(3)+log(5). I have no idea why or where you
think log(3)-log(5) comes in or should come in. It's clearly not the
taxicab distance of anything.

🔗Carl Lumma <ekin@lumma.org>

11/17/2005 1:46:07 PM

>> >> >> howabout a 'how many notes did we have to search to
>> >> >> find this comma' rationale?
>> >> >
>> >> >That's fine within a particular prime limit, as it
>> >> >would just be a fixed power of the 'length' of the comma.
>> >>
>> >> But not for comparing commas in different prime limits?
>>
>> > my answer is no. Remember the 128:125 example we were
>> > just discussing?

I get

5:4 = 1^2 = 1 'notes'
128:125 = 3^2 = 9 'notes'
21:20 = 2^4 = 16 'notes'
81:80 = 4^3 = 64 'notes'

where's the problem?

>
>> >> >> How could allowing two factors to be removed per step and
>> >> >> allowing one ever give the same result?
>> >> >
>> >> >They do in the Tenney case, for example.
>> >>
>> >> Can you give an example?
//
>> >log(81*80)
>> >= log(2)+log(2)+log(2)+log(2)+log(3)+log(3)+log(3)+log(3)+log(5)
>> >= log(3*5)+log(3*4)+log(3*2)+log(3*2)
>> >= (etc.)
>>
>> 3*5 is not 3/5, and log(15) != log(3)-log(5)
>
>So what? I don't get what the point of this statement is or what it's
>supposed to show. As you know, the Tenney harmonic distance of 3/5 is
>log(3*5) = log(15) = log(3)+log(5). I have no idea why or where you
>think log(3)-log(5) comes in or should come in. It's clearly not the
>taxicab distance of anything.

All I'm saying is that 3/5 is one thing in triangular, and
two things in rectangular measures (including Tenney). I'm
at a loss for why you disagree.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/17/2005 1:52:53 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >> howabout a 'how many notes did we have to search to
> >> >> >> find this comma' rationale?
> >> >> >
> >> >> >That's fine within a particular prime limit, as it
> >> >> >would just be a fixed power of the 'length' of the comma.
> >> >>
> >> >> But not for comparing commas in different prime limits?
> >>
> >> > my answer is no. Remember the 128:125 example we were
> >> > just discussing?
>
> I get
>
> 5:4 = 1^2 = 1 'notes'
> 128:125 = 3^2 = 9 'notes'
> 21:20 = 2^4 = 16 'notes'
> 81:80 = 4^3 = 64 'notes'
>
> where's the problem?

128:125 clearly leads to a temperament (or periodicity blocks) that
is about as complex, and requires about as many notes, as 81:80, in
the 5-prime-limit case. Not way fewer, as your numbers seem to
indicate. And this seems an important, if not the most important,
basis on which to compare them . . .

> >> >> >> How could allowing two factors to be removed per step and
> >> >> >> allowing one ever give the same result?
> >> >> >
> >> >> >They do in the Tenney case, for example.
> >> >>
> >> >> Can you give an example?
> //
> >> >log(81*80)
> >> >= log(2)+log(2)+log(2)+log(2)+log(3)+log(3)+log(3)+log(3)+log(5)
> >> >= log(3*5)+log(3*4)+log(3*2)+log(3*2)
> >> >= (etc.)
> >>
> >> 3*5 is not 3/5, and log(15) != log(3)-log(5)
> >
> >So what? I don't get what the point of this statement is or what it's
> >supposed to show. As you know, the Tenney harmonic distance of 3/5 is
> >log(3*5) = log(15) = log(3)+log(5). I have no idea why or where you
> >think log(3)-log(5) comes in or should come in. It's clearly not the
> >taxicab distance of anything.
>
> All I'm saying is that 3/5 is one thing in triangular, and
> two things in rectangular measures (including Tenney).

What's a "thing"?

> I'm
> at a loss for why you disagree.

Assuming I did, in what way did I disagree with this? And what does
your statement

> >> 3*5 is not 3/5, and log(15) != log(3)-log(5)

have to do with or say about it?

🔗Carl Lumma <ekin@lumma.org>

11/17/2005 2:15:03 PM

>> >> >> >> howabout a 'how many notes did we have to search to
>> >> >> >> find this comma' rationale?
>> >> >> >
>> >> >> >That's fine within a particular prime limit, as it
>> >> >> >would just be a fixed power of the 'length' of the comma.
>> >> >>
>> >> >> But not for comparing commas in different prime limits?
>> >>
>> >> > my answer is no. Remember the 128:125 example we were
>> >> > just discussing?
>>
>> I get
>>
>> 5:4 = 1^2 = 1 'notes'
>> 128:125 = 3^2 = 9 'notes'
>> 21:20 = 2^4 = 16 'notes'
>> 81:80 = 4^3 = 64 'notes'
>>
>> where's the problem?
>
>128:125 clearly leads to a temperament (or periodicity blocks) that
>is about as complex, and requires about as many notes, as 81:80, in
>the 5-prime-limit case. Not way fewer, as your numbers seem to
>indicate.

Sure, but if you throw the 5-prime-limit out as a basis for the
comparison this problem disappears. Surely 128:125 is better in
a no-3s world. The assumption that an identity can't be introduced
until all smaller identities have been introduced is a bad one, as
shown by La Monte Young and Aaron Johnson, at least. My way, the
only assumption is that in any system, the size of the smallest
interval should go down at about the same rate as notes goes up.
I would expect to find a smaller comma in radius 5 in the 11-limit
than I would in the 5-limit...

>> All I'm saying is that 3/5 is one thing in triangular, and
>> two things in rectangular measures (including Tenney).
>
>What's a "thing"?

A term in distance sum.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/21/2005 2:21:00 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >> >> howabout a 'how many notes did we have to search to
> >> >> >> >> find this comma' rationale?
> >> >> >> >
> >> >> >> >That's fine within a particular prime limit, as it
> >> >> >> >would just be a fixed power of the 'length' of the comma.
> >> >> >>
> >> >> >> But not for comparing commas in different prime limits?
> >> >>
> >> >> > my answer is no. Remember the 128:125 example we were
> >> >> > just discussing?
> >>
> >> I get
> >>
> >> 5:4 = 1^2 = 1 'notes'
> >> 128:125 = 3^2 = 9 'notes'
> >> 21:20 = 2^4 = 16 'notes'
> >> 81:80 = 4^3 = 64 'notes'
> >>
> >> where's the problem?
> >
> >128:125 clearly leads to a temperament (or periodicity blocks) that
> >is about as complex, and requires about as many notes, as 81:80, in
> >the 5-prime-limit case. Not way fewer, as your numbers seem to
> >indicate.
>
> Sure, but if you throw the 5-prime-limit out as a basis for the
> comparison this problem disappears. Surely 128:125 is better in
> a no-3s world.

How can it be better than something that doesn't even exist in that
world?

> The assumption that an identity can't be introduced
> until all smaller identities have been introduced is a bad one, as
> shown by La Monte Young and Aaron Johnson, at least.

That's not my assumption -- far from it. You're mistaken if you think
TOP or anything like it depends on such an assumption.

> My way, the
> only assumption is that in any system, the size of the smallest
> interval should go down at about the same rate as notes goes up.

Gene has more precise formulae on that.

> I would expect to find a smaller comma in radius 5 in the 11-limit
> than I would in the 5-limit...

Sometimes you would.

> >> All I'm saying is that 3/5 is one thing in triangular, and
> >> two things in rectangular measures (including Tenney).
> >
> >What's a "thing"?
>
> A term in distance sum.

Then as I just tried to show you (in the material you snipped here),
your statement above is wrong. The Tenney lattice, for example,
admits of various bases which permit a valid distance calculation to
be performed for a given interval using the basis intervals as the
terms in the sum. Your view is far too restrictive. Same goes for
Kees-triangular . . .

🔗Carl Lumma <ekin@lumma.org>

11/21/2005 2:45:52 PM

>> The assumption that an identity can't be introduced
>> until all smaller identities have been introduced is a bad one, as
>> shown by La Monte Young and Aaron Johnson, at least.
>
>That's not my assumption -- far from it. You're mistaken if you think
>TOP or anything like it depends on such an assumption.

You're still choosing your 'top 10' lists in terms of Partchian
limits, are you not?

>> My way, the
>> only assumption is that in any system, the size of the smallest
>> interval should go down at about the same rate as notes goes up.
>
>Gene has more precise formulae on that.

The "critical exponent"? Yes, I think I have those somewhere.
Logflat is interesting, but maybe not desirable.

>> >> All I'm saying is that 3/5 is one thing in triangular, and
>> >> two things in rectangular measures (including Tenney).
>> >
>> >What's a "thing"?
>>
>> A term in distance sum.
>
>Then as I just tried to show you (in the material you snipped here),
>your statement above is wrong. The Tenney lattice, for example,
>admits of various bases which permit a valid distance calculation to
>be performed for a given interval using the basis intervals as the
>terms in the sum.

How? Here's the snipped material I think you're referring to:

> >> >log(81*80)
> >> >= log(2)+log(2)+log(2)+log(2)+log(3)+log(3)+log(3)+log(3)+log(5)
> >> >= log(3*5)+log(3*4)+log(3*2)+log(3*2)
> >> >= (etc.)

Clearly this is not 'removing two things at once'.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/21/2005 2:57:04 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> The assumption that an identity can't be introduced
> >> until all smaller identities have been introduced is a bad one, as
> >> shown by La Monte Young and Aaron Johnson, at least.
> >
> >That's not my assumption -- far from it. You're mistaken if you think
> >TOP or anything like it depends on such an assumption.
>
> You're still choosing your 'top 10' lists in terms of Partchian
> limits, are you not?

No. Top 10 lists? I didn't know I was choosing any. But the {2,3,7}
case is a big one for me, and was explicitly part of the plan I put
forth here for the series of 'Middle Path' papers. Also, don't forget
the cases (Gene has brought up) where the basis is formed from
consonant intervals not all of which are primes.

> >> My way, the
> >> only assumption is that in any system, the size of the smallest
> >> interval should go down at about the same rate as notes goes up.
> >
> >Gene has more precise formulae on that.
>
> The "critical exponent"? Yes, I think I have those somewhere.
> Logflat is interesting, but maybe not desirable.

It's not desirable to replace your assumption with a more accurate
formula?

> >> >> All I'm saying is that 3/5 is one thing in triangular, and
> >> >> two things in rectangular measures (including Tenney).
> >> >
> >> >What's a "thing"?
> >>
> >> A term in distance sum.
> >
> >Then as I just tried to show you (in the material you snipped here),
> >your statement above is wrong. The Tenney lattice, for example,
> >admits of various bases which permit a valid distance calculation to
> >be performed for a given interval using the basis intervals as the
> >terms in the sum.
>
> How? Here's the snipped material I think you're referring to:
>
> > >> >log(81*80)
> > >> >= log(2)+log(2)+log(2)+log(2)+log(3)+log(3)+log(3)+log(3)+log(5)
> > >> >= log(3*5)+log(3*4)+log(3*2)+log(3*2)
> > >> >= (etc.)
>
> Clearly this is not 'removing two things at once'.

It's showing several ways the distance sum can be computed, each of
which involves a different basis (partial or full) for the Tenney
lattice, and hence a different number of terms in the sum. I don't
know what you could mean by "Clearly this is not 'removing two things
at once'." You wrote something earlier where you went from the ratio
5/3 to log(5)-log(3), but that isn't how distance works in any of the
triangular cases either, so I still don't know what you mean.

🔗Carl Lumma <ekin@lumma.org>

11/21/2005 3:38:34 PM

>> >> The assumption that an identity can't be introduced
>> >> until all smaller identities have been introduced is a bad one,
>> >> as shown by La Monte Young and Aaron Johnson, at least.
>> >
>> >That's not my assumption -- far from it. You're mistaken if you
>> >think TOP or anything like it depends on such an assumption.
>>
>> You're still choosing your 'top 10' lists in terms of Partchian
>> limits, are you not?
>
>No. Top 10 lists? I didn't know I was choosing any. But the {2,3,7}
>case is a big one for me, and was explicitly part of the plan I put
>forth here for the series of 'Middle Path' papers. Also, don't forget
>the cases (Gene has brought up) where the basis is formed from
>consonant intervals not all of which are primes.

Your paper has a top-something list, plus one bonus temperament.
{2,3,7}? When did you first mention that?

>> >> My way, the
>> >> only assumption is that in any system, the size of the smallest
>> >> interval should go down at about the same rate as notes goes up.
>> >
>> >Gene has more precise formulae on that.
>>
>> The "critical exponent"? Yes, I think I have those somewhere.
>> Logflat is interesting, but maybe not desirable.
>
>It's not desirable to replace your assumption with a more accurate
>formula?

Accurate in what way?

>> Here's the snipped material I think you're referring to:
>>
>> >>log(81*80)
>> >>= log(2)+log(2)+log(2)+log(2)+log(3)+log(3)+log(3)+log(3)+log(5)
>> >>= log(3*5)+log(3*4)+log(3*2)+log(3*2)
>> >>= (etc.)
>>
>> Clearly this is not 'removing two things at once'.
>
>It's showing several ways the distance sum can be computed, each of
>which involves a different basis (partial or full) for the Tenney
>lattice, and hence a different number of terms in the sum. I don't
>know what you could mean by "Clearly this is not 'removing two things
>at once'." You wrote something earlier where you went from the ratio
>5/3 to log(5)-log(3), but that isn't how distance works in any of the
>triangular cases either, so I still don't know what you mean.

Measures like your isosceles one ignore the 3 in 5/3.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/21/2005 4:05:47 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> The assumption that an identity can't be introduced
> >> >> until all smaller identities have been introduced is a bad one,
> >> >> as shown by La Monte Young and Aaron Johnson, at least.
> >> >
> >> >That's not my assumption -- far from it. You're mistaken if you
> >> >think TOP or anything like it depends on such an assumption.
> >>
> >> You're still choosing your 'top 10' lists in terms of Partchian
> >> limits, are you not?
> >
> >No. Top 10 lists? I didn't know I was choosing any. But the {2,3,7}
> >case is a big one for me, and was explicitly part of the plan I put
> >forth here for the series of 'Middle Path' papers. Also, don't forget
> >the cases (Gene has brought up) where the basis is formed from
> >consonant intervals not all of which are primes.
>
> Your paper has a top-something list, plus one bonus temperament.

One?

> {2,3,7}? When did you first mention that?

Ages ago -- it's been part of the plan for part 2 of the paper all
along. I've posted specific examples too.

> >> >> My way, the
> >> >> only assumption is that in any system, the size of the smallest
> >> >> interval should go down at about the same rate as notes goes up.
> >> >
> >> >Gene has more precise formulae on that.
> >>
> >> The "critical exponent"? Yes, I think I have those somewhere.
> >> Logflat is interesting, but maybe not desirable.
> >
> >It's not desirable to replace your assumption with a more accurate
> >formula?
>
> Accurate in what way?

More reflective of the actual behavior of the size of the smallest
interval.

> >> Here's the snipped material I think you're referring to:
> >>
> >> >>log(81*80)
> >> >>= log(2)+log(2)+log(2)+log(2)+log(3)+log(3)+log(3)+log(3)+log(5)
> >> >>= log(3*5)+log(3*4)+log(3*2)+log(3*2)
> >> >>= (etc.)
> >>
> >> Clearly this is not 'removing two things at once'.
> >
> >It's showing several ways the distance sum can be computed, each of
> >which involves a different basis (partial or full) for the Tenney
> >lattice, and hence a different number of terms in the sum. I don't
> >know what you could mean by "Clearly this is not 'removing two things
> >at once'." You wrote something earlier where you went from the ratio
> >5/3 to log(5)-log(3), but that isn't how distance works in any of the
> >triangular cases either, so I still don't know what you mean.
>
> Measures like your isosceles one ignore the 3 in 5/3.

And equilateral measures "ignore" *both* the 5 and the 3 in 5/3 in a
similar sense. So? I have no idea what any of this has to do with
your claim about 'things' in the Tenney case.

🔗Carl Lumma <ekin@lumma.org>

11/21/2005 4:20:52 PM

>> >> >> the only assumption is that in any system, the size of the
>> >> >> smallest interval should go down at about the same rate as
>> >> >> [the number of] notes goes up.
>> >> >
>> >> >Gene has more precise formulae on that.
//
>More reflective of the actual behavior of the size of the smallest
>interval.

Gene?

>And equilateral measures "ignore" *both* the 5 and the 3 in 5/3 in a
>similar sense. So?

So that's how I've been defining triangular is all.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/21/2005 4:28:18 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >> the only assumption is that in any system, the size of the
> >> >> >> smallest interval should go down at about the same rate as
> >> >> >> [the number of] notes goes up.
> >> >> >
> >> >> >Gene has more precise formulae on that.
> //
> >More reflective of the actual behavior of the size of the smallest
> >interval.
>
> Gene?
>
> >And equilateral measures "ignore" *both* the 5 and the 3 in 5/3 in a
> >similar sense. So?
>
> So that's how I've been defining triangular is all.

What is how you've been defining triangular? There's the isosceles
case, where only the 3 in 5/3 is "ignored", there's the equilateral
case, where both the 3 and the 5 in 5/3 are "ignored", and then there
are an infinity of other (such as scalene) triangular lattices, such as
the one I'm supposed to be making a diagram of for you. So what's your
definition?

🔗Carl Lumma <ekin@lumma.org>

11/21/2005 5:04:18 PM

>> >And equilateral measures "ignore" *both* the 5 and the 3 in 5/3 in a
>> >similar sense. So?
>>
>> So that's how I've been defining triangular is all.
>
>What is how you've been defining triangular? There's the isosceles
>case, where only the 3 in 5/3 is "ignored", there's the equilateral
>case, where both the 3 and the 5 in 5/3 are "ignored", and then there
>are an infinity of other (such as scalene) triangular lattices, such as
>the one I'm supposed to be making a diagram of for you. So what's your
>definition?

In my original post, I use the term triangular to describe
measures where the numerator and denominator of the target
are treated separately; rectangular otherwise. For example,
the Hahn diameter (a triangular measure) of a ratio n/d is
the max of the rectangular diameters of n and d.
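
To pin that down, here is a minimal sketch of one concrete reading of that
recipe, the odd-limit-agnostic one: drop all factors of 2, count the remaining
prime factors on each side, and take the larger count. It reproduces the
primes-only values in this thread (2 for 9:5, 4 for 81:80), but it does not
collapse 9 to a single step the way an explicit 9- or 11-odd-limit Hahn
distance would. Names are made up for the sketch.

    # Odd-limit-agnostic reading: drop 2s, count odd prime factors of
    # each side, take the larger count.
    from fractions import Fraction

    def odd_part(n):
        while n % 2 == 0:
            n //= 2
        return n

    def odd_prime_factor_count(n):
        count, p = 0, 3          # n is assumed odd here
        while p * p <= n:
            while n % p == 0:
                count += 1
                n //= p
            p += 2
        return count + (1 if n > 1 else 0)

    def hahn_like(ratio):
        r = Fraction(ratio)
        return max(odd_prime_factor_count(odd_part(r.numerator)),
                   odd_prime_factor_count(odd_part(r.denominator)))

    print(hahn_like(Fraction(9, 5)), hahn_like(Fraction(81, 80)))   # 2 4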

I thought this would correspond to whether the balls of the
measure were more spherical on a tri. or rect. lattice. My
definition of "spherical" wasn't very good, but yours may just
do the trick.

Since then, I've coded Gene's symmetrical Euclidean distance
(for any number of factors with no arbitrary symmetry breaking),
and a version of it which is crow's distance on a rectangular
lattice. Is the latter a "rectangular" version of the former?
Maybe so, since

5/3---5/4
| |
4/3---1/1---3/2
| |
8/5---6/5

is less spherical than

5/4
|
4/3---1/1---3/2
|
8/5

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/21/2005 5:14:15 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >And equilateral measures "ignore" *both* the 5 and the 3 in 5/3 in a
> >> >similar sense. So?
> >>
> >> So that's how I've been defining triangular is all.
> >
> >What is how you've been defining triangular? There's the isosceles
> >case, where only the 3 in 5/3 is "ignored", there's the equilateral
> >case, where both the 3 and the 5 in 5/3 are "ignored", and then there
> >are an infinity of other (such as scalene) triangular lattices, such as
> >the one I'm supposed to be making a diagram of for you. So what's your
> >definition?
>
> In my original post, I use the term triangular to describe
> measures where the numerator and denominator of the target
> are treated separately; rectangular otherwise. For example,
> the Hahn diameter (a triangular measure) of a ratio n/d is
> the max of the rectangular diameters of n and d.

Rectangular diameters of n and d? What could that mean? The Hahn
diameter is something that depends on an assumed odd limit as a
parameter, and in general, I doubt that your statement above about
the Hahn diameter can be made to be true in general. Maybe you got
Hahn confused with Kees?

BTW, I really think you should use different terms than "triangular"
or "rectangular", since these words already mean something when
applied to a lattice but here you're talking about ratio-complexity
measures rather than lattices.

> I thought this would correspond to whether the balls of the
> measure were more spherical on a tri. or rect. lattice. My
> definition of "spherical" wasn't very good, but yours may just
> do the trick.
>
> Since then, I've coded Gene's symmetrical Euclidean distance
> (for any number of factors with no arbitrary symmetry breaking),
> and a version of it which is crow's distance on a rectangular
> lattice. Is the latter a "rectangular" version of the former?
> Maybe so, since
>
> 5/3---5/4
> | |
> 4/3---1/1---3/2
> | |
> 8/5---6/5
>
> is less spherical than
>
> 5/4
> |
> 4/3---1/1---3/2
> |
> 8/5
>
> -Carl
>

🔗Carl Lumma <ekin@lumma.org>

11/21/2005 5:30:30 PM

>> In my original post, I use the term triangular to describe
>> measures where the numerator and denominator of the target
>> are treated separately; rectangular otherwise. For example,
>> the Hahn diameter (a triangular measure) of a ratio n/d is
>> the max of the rectangular diameters of n and d.
>
>Rectangular diameters of n and d? What could that mean?

The distances to n and d separately.

>The Hahn diameter is something that depends on an assumed
>odd limit as a parameter, and in general, I doubt that your
>statement above about the Hahn diameter can be made to be
>true in general.

That's right from the code, which works in general.

>Maybe you got Hahn confused with Kees?

Nope.

>BTW, I really think you should use different terms than "triangular"
>or "rectangular", since these words already mean something when
>applied to a lattice but here you're talking about ratio-complexity
>measures rather than lattices.

The point is to draw a correspondence.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

11/21/2005 6:00:41 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> Is the latter a "rectangular" version of the former?
> Maybe so, since
>
> 5/3---5/4
> | |
> 4/3---1/1---3/2
> | |
> 8/5---6/5
>
> is less spherical than
>
> 5/4
> |
> 4/3---1/1---3/2
> |
> 8/5

It only looks less spherical because you are drawing it that way. You
could draw things a different way and reach the opposite conclusion.

🔗Carl Lumma <ekin@lumma.org>

11/21/2005 6:12:15 PM

>> Is the latter a "rectangular" version of the former?
>> Maybe so, since
>>
>> 5/3---5/4
>> | |
>> 4/3---1/1---3/2
>> | |
>> 8/5---6/5
>>
>> is less spherical than
>>
>> 5/4
>> |
>> 4/3---1/1---3/2
>> |
>> 8/5
>
>It only looks less spherical because you are drawing it that way. You
>could draw things a different way and reach the opposite conclusion.

Yes, and my suggestion is to name the distance measure after the
way of drawing it that makes it look most spherical.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/23/2005 12:42:44 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> In my original post, I use the term triangular to describe
> >> measures where the numerator and denominator of the target
> >> are treated separately; rectangular otherwise. For example,
> >> the Hahn diameter (a triangular measure) of a ratio n/d is
> >> the max of the rectangular diameters of n and d.
> >
> >Rectangular diameters of n and d? What could that mean?
>
> The distances to n and d separately.

It seems odd to call these rectangular. For example, if the odd limit
is 9 or higher (for example, 11), Hahn gives the same 'distance' to 3
as to 9.

> >The Hahn diameter is something that depends on an assumed
> >odd limit as a parameter, and in general, I doubt that your
> >statement above about the Hahn diameter can be made to be
> >true in general.
>
> That's right from the code, which works in general.

Note that there was an incorrect version of Hahn's 'distance' in
SCALA for a while. With Paul Hahn's assent, Manuel has since fixed it.

🔗Carl Lumma <ekin@lumma.org>

11/23/2005 3:38:27 PM

>Note that there was an incorrect version of Hahn's 'distance' in
>SCALA for a while. With Paul Hahn's assent, Manuel has since fixed it.

I'm quite confident that my version is returning the right answers.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/25/2005 1:45:28 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >Note that there was an incorrect version of Hahn's 'distance' in
> >SCALA for a while. With Paul Hahn's assent, Manuel has since fixed it.
>
> I'm quite confident that my version is returning the right answers.
>
> -Carl

If it doesn't take an odd limit as one of the input parameters, it
can't possibly be.

🔗Paul Erlich <perlich@aya.yale.edu>

11/25/2005 2:07:04 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
>
> --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >
> > >Note that there was an incorrect version of Hahn's 'distance' in
> > >SCALA for a while. With Paul Hahn's assent, Manuel has since fixed it.
> >
> > I'm quite confident that my version is returning the right answers.
> >
> > -Carl
>
> If it doesn't take an odd limit as one of the input parameters, it
> can't possibly be.

For example, the 9-limit or 11-limit Hahn 'distance' of 9:5 is 1, while
the 7-limit or 5-limit Hahn 'distance' of 9:5 is 2.

🔗Carl Lumma <ekin@lumma.org>

11/25/2005 2:43:43 PM

>> >Note that there was an incorrect version of Hahn's 'distance' in
>> >SCALA for a while. With Paul Hahn's assent, Manuel has since fixed
>> >it.
>>
>> I'm quite confident that my version is returning the right answers.
>
>If it doesn't take an odd limit as one of the input parameters, it
>can't possibly be.

Do I need to post Paul's code again? His code infers the limit
of the 'Fokker-style vector' just like mine does. But I'm happy
to test my code against a problem set if you like.

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/25/2005 3:03:20 PM

>> > >Note that there was an incorrect version of Hahn's 'distance' in
>> > >SCALA for a while. With Paul Hahn's assent, Manuel has since fixed
>> > >it.
>> >
>> > I'm quite confident that my version is returning the right answers.
>> >
>> > -Carl
>>
>> If it doesn't take an odd limit as one of the input parameters, it
>> can't possibly be.
>
>For example, the 9-limit or 11-limit Hahn 'distance' of 9:5 is 1, while
>the 7-limit or 5-limit Hahn 'distance' of 9:5 is 2.

Or more accurately, diameter(0 0 -1 0 1) = 1 and diameter(0 2 -1) = 2.
The former applies not only to the 9- and 11-limits.

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/27/2005 9:29:31 PM

>> >> In the 5-limit, Hahn-limit n gives the same score to 6n points.
>> >> 81/80 is length 4, so that means 24 ratios share the same length.
>> >> It looks like a greater number of ratios have the same
>> >> expressibility as 81/80.
>> >
>> >Looks like fewer to me.
>>
>> All odds less than 80 (about 40) that are prime rel. to 81 (2/3 of
>> them?) will have the same expressibility. That looks like more
>> than 24 to me.
>
>No way. You said "In the 5-limit" above, so most of these odds never
>enter the picture. If they did, you'd have to somehow "count" them
>in the Hahn case as well, for a fair comparison.

Howabout this: The base-2 expressibility of 81/80 is about 6.
In the 5-limit, there are 36 points in the radius-6 diameter
hull (or whatever it's called). log2(45) and log2(91) span 6
+/- .5. The '5-limit' numbers in this range are 45, 48, 50, 54,
60, 64, 72, 75, 80, 81, and 90. That's 22 5-limit ratios with
a base-2 expressibility of 6. That's fewer than 36, but of
course this depends on the base used. But as expressibility
goes up it will swamp diameter no matter what base is used.
It's not clear to me which blows up faster as harmonic limit
is increased.
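
Here is a quick sketch of that arithmetic, assuming the expressibility of n/d
is log2 of the larger of the two odd parts; names are made up for the sketch.

    from math import log2
    from fractions import Fraction

    def odd_part(n):
        while n % 2 == 0:
            n //= 2
        return n

    def expressibility(ratio):
        r = Fraction(ratio)
        return log2(max(odd_part(r.numerator), odd_part(r.denominator)))

    print(expressibility(Fraction(81, 80)))      # about 6.34, i.e. "about 6"

    def is_5_limit(n):
        for p in (2, 3, 5):
            while n % p == 0:
                n //= p
        return n == 1

    window = [n for n in range(40, 100) if is_5_limit(n) and 5.5 < log2(n) < 6.5]
    print(window)   # 48, 50, 54, 60, 64, 72, 75, 80, 81, 90 (45 falls just below 5.5)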

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/28/2005 11:44:38 AM

>>> >> In the 5-limit, Hahn-limit n gives the same score to 6n points.
>>> >> 81/80 is length 4, so that means 24 ratios share the same length.
>>> >> It looks like a greater number of ratios have the same
>>> >> expressibility as 81/80.
>>> >
>>> >Looks like fewer to me.
>>>
>>> All odds less than 80 (about 40) that are prime rel. to 81 (2/3 of
>>> them?) will have the same expressibility. That looks like more
>>> than 24 to me.
>>
>>No way. You said "In the 5-limit" above, so most of these odds never
>>enter the picture. If they did, you'd have to somehow "count" them
>>in the Hahn case as well, for a fair comparison.
>
>Howabout this: The base-2 expressibility of 81/80 is about 6.
>In the 5-limit, there are 36 points in the radius-6 diameter
>hull (or whatever it's called). log2(45) and log2(91) span 6
>+/- .5. The '5-limit' numbers in this range are 45, 48, 50, 54,
>60, 64, 72, 75, 80, 81, and 90. That's 22 5-limit ratios with
>a base-2 expressibility of 6.

Classes of ratios, actually.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/28/2005 4:35:51 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> > >Note that there was an incorrect version of Hahn's 'distance' in
> >> > >SCALA for a while. With Paul Hahn's assent, Manuel has since fixed
> >> > >it.
> >> >
> >> > I'm quite confident that my version is returning the right answers.
> >> >
> >> > -Carl
> >>
> >> If it doesn't take an odd limit as one of the input parameters, it
> >> can't possibly be.
> >
> >For example, the 9-limit or 11-limit Hahn 'distance' of 9:5 is 1, while
> >the 7-limit or 5-limit Hahn 'distance' of 9:5 is 2.
>
> Or more accurately, diameter(0 0 -1 0 1) = 1 and diameter(0 2 -1) = 2.

Huh? What does (0 2 -1) represent, and how? And diameter applies to
equal temperaments, not to intervals.

> The former applies not only to the 9- and 11-limits.

But to all higher odd limits as well, yes.

🔗Paul Erlich <perlich@aya.yale.edu>

11/28/2005 4:58:06 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> In the 5-limit, Hahn-limit n gives the same score to 6n points.
> >> >> 81/80 is length 4, so that means 24 ratios share the same length.
> >> >> It looks like a greater number of ratios have the same
> >> >> expressibility as 81/80.
> >> >
> >> >Looks like fewer to me.
> >>
> >> All odds less than 80 (about 40) that are prime rel. to 81 (2/3 of
> >> them?) will have the same expressibility. That looks like more
> >> than 24 to me.
> >
> >No way. You said "In the 5-limit" above, so most of these odds never
> >enter the picture. If they did, you'd have to somehow "count" them
> >in the Hahn case as well, for a fair comparison.
>
> Howabout this: The base-2 expressibility of 81/80 is about 6.
> In the 5-limit, there are 36 points in the radius-6 diameter
> hull (or whatever it's called). log2(45) and log2(91) span 6
> +/- .5. The '5-limit' numbers in this range are 45, 48, 50, 54,
> 60, 64, 72, 75, 80, 81, and 90. That's 22 5-limit ratios with
> a base-2 expressibility of 6.

22 ratios? What are they? Remember that only ratios in lowest
terms "count" -- 81/72 isn't a ratio in its own right, but is already
accounted for by 9/8. Exactly an expressibility of 6 or 6 +/- .5?

> That's fewer than 36, but of
> course this depends on the base used. But as expressibility
> goes up it will swamp diameter no matter what base is used.

Swamp diameter? What do you mean?

> It's not clear to me which blows up faster as harmonic limit
> is increased.

Blows up? And is harmonic limit defined so as to allow this
comparison in an apples-to-apples way?

🔗Carl Lumma <ekin@lumma.org>

11/28/2005 10:49:02 PM

>> Howabout this: The base-2 expressibility of 81/80 is about 6.
>> In the 5-limit, there are 36 points in the radius-6 diameter
>> hull (or whatever it's called). log2(45) and log2(91) span 6
>> +/- .5. The '5-limit' numbers in this range are 45, 48, 50, 54,
>> 60, 64, 72, 75, 80, 81, and 90. That's 22 5-limit ratios with
>> a base-2 expressibility of 6.
>
>22 ratios? What are they?

22 classes of ratio with those 11 numbers in the numerator and
denominator.

>Remember that only ratios in lowest terms "count" -- 81/72 isn't
>a ratio in its own right, but is already accounted for by 9/8.

As I attempted to estimate in...
>>>All odds less than 80 (about 40) that are prime rel. to 81
>>>(2/3 of them?) will have the same expressibility.
...which was quoted at the top of the last message in this
thread.

>Exactly an expressibility of 6 or 6 +/- .5?

The latter.

>> That's fewer than 36, but of
>> course this depends on the base used. But as expressibility
>> goes up it will swamp diameter no matter what base is used.
>
>Swamp diameter? What do you mean?

More ratios will fall within +/- .5 of a given score
than ratios having that diameter. Though "swamp diameter"
does sound pretty funny. :)

>> It's not clear to me which blows up faster as harmonic limit
>> is increased.
>
>Blows up? And is harmonic limit defined so as to allow this
>comparison in an apples-to-apples way?

This example was fixed 5-limit, which seems fair.

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/28/2005 10:44:21 PM

>> >> > >Note that there was an incorrect version of Hahn's 'distance'
>> >> > >in SCALA for a while. With Paul Hahn's assent, Manuel has
>> >> > >since fixed it.
>> >> >
>> >> > I'm quite confident that my version is returning the right
>> >> > answers.
>> >> >
>> >> > -Carl
>> >>
>> >> If it doesn't take an odd limit as one of the input parameters,
>> >> it can't possibly be.
>> >
>> >For example, the 9-limit or 11-limit Hahn 'distance' of 9:5 is 1,
>> >while the 7-limit or 5-limit Hahn 'distance' of 9:5 is 2.
>>
>> Or more accurately, diameter(0 0 -1 0 1) = 1 and
>> diameter(0 2 -1) = 2.
>
>Huh? What does (0 2 -1) represent,

Any interval vector you want, in fact. Odd limit is completely
immaterial. In this example, "5-limit" 9:5.

>and how? And diameter applies to equal temperaments, not to intervals.

It applies to intervals in scales, including JI scales (see
Mills digest 1502 topics 3 & 4 and digest 1598 topic 1 for a
start). The diameter of a scale is the diameter of its longest
interval.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/29/2005 1:28:39 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> Howabout this: The base-2 expressibility of 81/80 is about 6.
> >> In the 5-limit, there are 36 points in the radius-6 diameter
> >> hull (or whatever it's called). log2(45) and log2(91) span 6
> >> +/- .5. The '5-limit' numbers in this range are 45, 48, 50, 54,
> >> 60, 64, 72, 75, 80, 81, and 90. That's 22 5-limit ratios with
> >> a base-2 expressibility of 6.

Did you really mean 6 +/- .5 here?

45 is slightly too low, but that doesn't matter as it's equivalent to
90 which is already there . . . We can remove either one from the
list.

> >22 ratios? What are they?
>
> 22 classes of ratio with those 11 numbers in the numerator and
> denominator.

I meant, "list them"!

I don't know what you're trying to accomplish by looking at ratios
with expressibility (in base 2) of 6 +/- .5. But given your list of
integers above (which is enough since it spans a factor of 2), you
can only create five lowest-terms fractions:

64/45
75/64
81/50
81/64
81/80

I say these are the *only* 5-limit interval classes with a base-2
expressibility of 6 +/- .5. (For pitch classes, include their
inversions as well, for a total of ten.) Did I miss any?
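
A brute-force enumeration supports that list. The sketch below assumes the
expressibility of a 5-limit class 3^a * 5^b (octaves ignored) is log2 of the
larger odd part of the octave-reduced fraction; the exponent bounds and the
slightly widened lower edge of the window (to catch the borderline 45 noted
above) are arbitrary choices for the sketch.

    from math import log2
    from fractions import Fraction

    def odd_part(n):
        while n % 2 == 0:
            n //= 2
        return n

    def octave_reduce(f):
        while f >= 2:
            f /= 2
        while f < 1:
            f *= 2
        return f

    found = {}
    for a in range(-6, 7):
        for b in range(-4, 5):
            if (a, b) <= (0, 0):        # one representative per inverse pair
                continue
            f = octave_reduce(Fraction(3) ** a * Fraction(5) ** b)
            e = log2(max(odd_part(f.numerator), odd_part(f.denominator)))
            if 5.45 < e < 6.5:
                found[f] = e

    for f, e in sorted(found.items()):
        print(f, round(e, 2))
    # 81/80 6.34
    # 75/64 6.23
    # 81/64 6.34
    # 45/32 5.49   (the inversion of 64/45; log2(45) sits just under 5.5)
    # 81/50 6.34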

> >Remember that only ratios in lowest terms "count" -- 81/72 isn't
> >a ratio in its own right, but is already accounted for by 9/8.
>
> As I attempted to estimate in...
> >>>All odds less than 80 (about 40) that are prime rel. to 81
> >>>(2/3 of them?) will have the same expressibility.
> ...which was quoted at the top of the last message in this
> thread.

Makes no sense to me. Can you elaborate (if it's important)?

> >Exactly an expressibility of 6 or 6 +/- .5?
>
> The latter.
>
> >> That's fewer than 36, but of
> >> course this depends on the base used. But as expressibility
> >> goes up it will swamp diameter no matter what base is used.
> >
> >Swamp diameter? What do you mean?
>
> More ratios will fall within +/- .5 of a given score
> than ratios having that diameter.

I don't understand what that means. Can you explain?

> Though "swamp diameter"
> does sound pretty funny. :)
>
> >> It's not clear to me which blows up faster as harmonic limit
> >> is increased.
> >
> >Blows up? And is harmonic limit defined so as to allow this
> >comparison in an apples-to-apples way?
>
> This example was fixed 5-limit, which seems fair.

In the 5-limit, they both (Hahn 'distance' and Kees 'expressibility')
grow at the same rate. That seems clear, since they're both
determined by hexagons whose area grows as the square of the
complexity.
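
A quick count makes the quadratic growth visible. The sketch below uses the
primes-only 5-limit readings discussed in this thread (exponents a of 3 and b
of 5, octaves ignored); the radii and box size are arbitrary choices.

    from math import log2

    W3, W5 = log2(3), log2(5)

    def hahn_len(a, b):
        pos = max(a, 0) + max(b, 0)
        neg = max(-a, 0) + max(-b, 0)
        return max(pos, neg)

    def kees(a, b):
        pos = max(a, 0) * W3 + max(b, 0) * W5
        neg = max(-a, 0) * W3 + max(-b, 0) * W5
        return max(pos, neg)

    box = range(-12, 13)
    for L in (2, 4, 6, 8):
        n = sum(1 for a in box for b in box if hahn_len(a, b) <= L)
        print("Hahn <=", L, ":", n)     # exactly 1 + 3*L*(L+1)
    for E in (4, 6, 8, 10):
        n = sum(1 for a in box for b in box if kees(a, b) <= E)
        print("Kees <=", E, ":", n)     # grows roughly like a constant * E^2
    # Both balls are hexagons whose point counts scale with the square of
    # the radius, so neither measure outruns the other in the 5-limit.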

🔗Paul Erlich <perlich@aya.yale.edu>

11/29/2005 2:14:22 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> > >Note that there was an incorrect version of Hahn's 'distance'
> >> >> > >in SCALA for a while. With Paul Hahn's assent, Manuel has
> >> >> > >since fixed it.
> >> >> >
> >> >> > I'm quite confident that my version is returning the right
> >> >> > answers.
> >> >> >
> >> >> > -Carl
> >> >>
> >> >> If it doesn't take an odd limit as one of the input parameters,
> >> >> it can't possibly be.
> >> >
> >> >For example, the 9-limit or 11-limit Hahn 'distance' of 9:5 is 1,
> >> >while the 7-limit or 5-limit Hahn 'distance' of 9:5 is 2.
> >>
> >> Or more accurately, diameter(0 0 -1 0 1) = 1 and
> >> diameter(0 2 -1) = 2.
> >
> >Huh? What does (0 2 -1) represent,
>
> Any interval vector you want, in fact.

(0 2 -1) represents any JI interval you want? Or _______? What are
you saying?

> Odd limit is completely
> immaterial.

How can it be immaterial when it changes the Hahn 'distance' of an
interval?

> In this example, "5-limit" 9:5.
>
> >and how?

You didn't answer. How does (0 2 -1) represent "5-limit" 9:5? How
would you represent "5-limit" 27:20?

> And diameter applies to equal temperaments, not to intervals.
>
> It applies to intervals in scales, including JI scales (see
> Mills digest 1502 topics 3 & 4 and digest 1598 topic 1 for a
> start). The diameter of a scale is the diameter of its longest
> interval.

If you have links to these, I'd love them. Still, I could see the
term applying perfectly well to JI scales, but not to individual
intervals. You need an autonomous concept like 'length' so that you
can define diameter as the longest 'length' possible between two
notes in a scale.

🔗Carl Lumma <ekin@lumma.org>

11/29/2005 10:38:03 PM

>> >> >For example, the 9-limit or 11-limit Hahn 'distance' of 9:5 is
>> >> >1, while the 7-limit or 5-limit Hahn 'distance' of 9:5 is 2.
>> >>
>> >> Or more accurately, diameter(0 0 -1 0 1) = 1 and
>> >> diameter(0 2 -1) = 2.
>> >
>> >Huh? What does (0 2 -1) represent,
>>
>> Any interval vector you want, in fact.
>
>(0 2 -1) represents any JI interval you want? Or _______? What are
>you saying?
>
>> Odd limit is completely
>> immaterial.
>
>How can it be immaterial when it changes the Hahn 'distance' of an
>interval?

It doesn't change it. The kind of distance being discussed here
maps interval vectors (not intervals) to the positive integers, and
has absolutely nothing to do with odd limit.

>> In this example, "5-limit" 9:5.
>>
>> >and how?
>
>You didn't answer. How does (0 2 -1) represent "5-limit" 9:5?

It's a '5-limit monzo' or whatever you'd call it. Isn't that
obvious?

>> And diameter applies to equal temperaments, not to intervals.
>>
>> It applies to intervals in scales, including JI scales (see
>> Mills digest 1502 topics 3 & 4 and digest 1598 topic 1 for a
>> start). The diameter of a scale is the diameter of its longest
>> interval.
>
>If you have links to these, I'd love them.

They're not on the web anywhere that I know of, but I could
repost them here if you'd like. Surely you remember Paul
reporting the diameter of the 7-limit "max" scales I asked
about...

>Still, I could see the term applying perfectly well to JI scales,
>but not to individual intervals. You need an autonomous concept
>like 'length' so that you can define diameter as the longest
>'length' possible between two notes in a scale.

Sure. What it really is, is the length of a geodesic. The length
of the longest geodesic is the diameter. That's for scales,
which are finite subgraphs of the lattice. Of course now I'm
mixing metaphors, since graphs are not what Gene and I just
agreed lattices are. But anyway.

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/29/2005 11:06:10 PM

>> >> Howabout this: The base-2 expressibility of 81/80 is about 6.
>> >> In the 5-limit, there are 36 points in the radius-6 diameter
>> >> hull (or whatever it's called). log2(45) and log2(91) span 6
>> >> +/- .5. The '5-limit' numbers in this range are 45, 48, 50, 54,
>> >> 60, 64, 72, 75, 80, 81, and 90. That's 22 5-limit ratios with
>> >> a base-2 expressibility of 6.
>
>Did you really mean 6 +/- .5 here?

Yes.

>45 is slightly too low,

Oop- yes.

>> >22 ratios? What are they?
>>
>> 22 classes of ratio with those 11 numbers in the numerator and
>> denominator.
>
>I meant, "list them"!
>
>I don't know what you're trying to accomplish by looking at ratios
>with expressibility (in base 2) of 6 +/- .5. But given your list of
>integers above (which is enough since it spans a factor of 2), you
>can only create five lowest-terms fractions:
>
>64/45
>75/64
>81/50
>81/64
>81/80
>
>I say these are the *only* 5-limit interval classes with a base-2
>expressibility of 6 +/- .5. (For pitch classes, include their
>inversions as well, for a total of ten.) Did I miss any?

Looks right. So 10 expressibility ratios vs. 36 graph length
ratios.
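
A minimal sketch of that tally (not code from the thread; odd-part, log2,
and expressibility are helper names chosen here), taking the base-2
expressibility of n/d as log2 of the larger of the odd parts of n and d:

(define (odd-part n)
  (if (even? n) (odd-part (quotient n 2)) n))

(define (log2 x) (/ (log x) (log 2)))

(define (expressibility r)
  (log2 (max (odd-part (numerator r))
             (odd-part (denominator r)))))

(map expressibility '(64/45 75/64 81/50 81/64 81/80))
;; => roughly (5.49 6.23 6.34 6.34 6.34)
;; Four of the five sit inside [5.5, 6.5]; 64/45 lands just under,
;; matching the "45 is slightly too low" exchange above.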

>> >> That's fewer than 36, but of
>> >> course this depends on the base used. But as expressibility
>> >> goes up it will swamp diameter no matter what base is used.
>> >
>> >Swamp diameter? What do you mean?
>>
>> More ratios will fall within +/- .5 of a given score
>> than ratios having that diameter.
>
>I don't understand what that means. Can you explain?

I meant:
6n < (x such that n-.5 < 2^x < n+.5) in the limit of n.
But this doesn't take into account the lowest-terms sieve,
which I underestimated, and it isn't clear that comparing
a fixed graph length to expressibility +/- .5 is fair.

>In the 5-limit, they both (Hahn 'distance' and Kees 'expressibility')
>grow at the same rate. That seems clear, since they're both
>determined by hexagons whose area grows as the square of the
>complexity.

That occurred to me, but I dismissed it for some reason.
Makes sense.

>> >> It's not clear to me which blows up faster as harmonic limit
>> >> is increased.

If Gene's statement about immediate generalization to higher
limits is true, perhaps this holds there.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/30/2005 3:35:16 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >For example, the 9-limit or 11-limit Hahn 'distance' of 9:5 is
> >> >> >1, while the 7-limit or 5-limit Hahn 'distance' of 9:5 is 2.
> >> >>
> >> >> Or more accurately, diameter(0 0 -1 0 1) = 1 and
> >> >> diameter(0 2 -1) = 2.
> >> >
> >> >Huh? What does (0 2 -1) represent,
> >>
> >> Any interval vector you want, in fact.
> >
> >(0 2 -1) represents any JI interval you want? Or _______? What are
> >you saying?
> >
> >> Odd limit is completely
> >> immaterial.
> >
> >How can it be immaterial when it changes the Hahn 'distance' of an
> >interval?
>
> It doesn't change it. The kind of distance being discussed here
> maps interval vectors (not intervals)

Define "interval vectors", Carl.

> to the positive integers, and
> has absolutely nothing to do with odd limit.

How are we going to fit this into your table if it doesn't relate to
intervals in the end?

> >> In this example, "5-limit" 9:5.
> >>
> >> >and how?
> >
> >You didn't answer. How does (0 2 -1) represent "5-limit" 9:5?
>
> It's a '5-limit monzo' or whatever you'd call it. Isn't that
> obvious?

So it does represent an interval.

> >> And diameter applies to equal temperaments, not to intervals.
> >>
> >> It applies to intervals in scales, including JI scales (see
> >> Mills digest 1502 topics 3 & 4 and digest 1598 topic 1 for a
> >> start). The diameter of a scale is the diameter of its longest
> >> interval.
> >
> >If you have links to these, I'd love them.
>
> They're not on the web anywhere that I know of, but I could
> repost them here if you'd like. Surely you remember Paul
> reporting the diameter of the 7-limit "max" scales I asked
> about...

Yes, diameter of *scales* makes perfect sense, as I said below.

> >Still, I could see the term applying perfectly well to JI scales,
> >but not to individual intervals. You need an autonomous concept
> >like 'length' so that you can define diameter as the longest
> >'length' possible between two notes in a scale.
>
> Sure.

I'm glad you agree.

> What it really is, is the length of a geodesic.

That's not correct in terms of any meaningful generalization of the
term 'geodesic' to this problem that I can think of. Geodesic means
basically as close to a straight line as you can get in the space in
question. Scale diameters in the Hahn world don't necessarily fall on
straight lines.

> The length
> of the longest geodesic is the diameter.

It's easy to find blobs in the lattice which enclose scales for which
this is not the case.

> That's for scales,
> which are finite subgraphs of the lattice. Of course now I'm
> mixing metaphors, since graphs are not what Gene and I just
> agreed lattices are. But anyway.

🔗Paul Erlich <perlich@aya.yale.edu>

11/30/2005 3:41:29 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> Howabout this: The base-2 expressibility of 81/80 is about 6.
> >> >> In the 5-limit, there are 36 points in the radius-6 diameter
> >> >> hull (or whatever it's called). log2(45) and log2(91) span 6
> >> >> +/- .5. The '5-limit' numbers in this range are 45, 48, 50, 54,
> >> >> 60, 64, 72, 75, 80, 81, and 90. That's 22 5-limit ratios with
> >> >> a base-2 expressibility of 6.
> >
> >Did you really mean 6 +/- .5 here?
>
> Yes.
>
> >45 is slightly too low,
>
> Oop- yes.
>
> >> >22 ratios? What are they?
> >>
> >> 22 classes of ratio with those 11 numbers in the numerator and
> >> denominator.
> >
> >I meant, "list them"!
> >
> >I don't know what you're trying to accomplish by looking at ratios
> >with expressibility (in base 2) of 6 +/- .5. But given your list of
> >integers above (which is enough since it spans a factor of 2), you
> >can only create five lowest-terms fractions:
> >
> >64/45
> >75/64
> >81/50
> >81/64
> >81/80
> >
> >I say these are the *only* 5-limit interval classes with a base-2
> >expressibility of 6 +/- .5. (For pitch classes, include their
> >inversions as well, for a total of ten.) Did I miss any?
>
> Looks right. So 10 expressibility ratios vs. 36 graph length
> ratios.

Isn't it arbitrary to use base 2 and thus wrong to treat this as an
apples-to-apples comparison?

> >> >> That's fewer than 36, but of
> >> >> course this depends on the base used. But as expressibility
> >> >> goes up it will swamp diameter no matter what base is used.
> >> >
> >> >Swamp diameter? What do you mean?
> >>
> >> More ratios will fall within +/- .5 of a given score
> >> than ratios having that diameter.
> >
> >I don't understand what that means. Can you explain?
>
> I meant:
> 6n < (x such that n-.5 < 2^x < n+.5) in the limit of n.

I can't figure out what this could mean.

> But this doesn't take into account the lowest-terms sieve,
> which I underestimated, and it isn't clear that comparing
> a fixed graph length to expressibility +/- .5 is fair.
>
> >In the 5-limit, they both (Hahn 'distance' and Kees 'expressibility')
> >grow at the same rate. That seems clear, since they're both
> >determined by hexagons whose area grows as the square of the
> >complexity.
>
> That occurred to me, but I dismissed it for some reason.
> Makes sense.
>
> >> >> It's not clear to me which blows up faster as harmonic limit
> >> >> is increased.
>
> If Gene's statement about immediate generalization to higher
> limits is true, perhaps this holds there.

The problem is once you get beyond 7-limit, you've got odd limit vs.
prime limit, which isn't an apples-to-apples comparison.

🔗Carl Lumma <ekin@lumma.org>

11/30/2005 3:42:38 PM

>> >> In this example, "5-limit" 9:5.
>> >>
>> >> >and how?
>> >
>> >You didn't answer. How does (0 2 -1) represent "5-limit" 9:5?
>>
>> It's a '5-limit monzo' or whatever you'd call it. Isn't that
>> obvious?
>
>So it does represent an interval.

In this case 9:5, but it could be 25:17 with a 2-5-17 basis, etc.

>> >> And diameter applies to equal temperaments, not to intervals.
>> >>
>> >> It applies to intervals in scales, including JI scales (see
>> >> Mills digest 1502 topics 3 & 4 and digest 1598 topic 1 for a
>> >> start). The diameter of a scale is the diameter of its longest
>> >> interval.
>> >
>> >If you have links to these, I'd love them.
>>
>> They're not on the web anywhere that I know of, but I could
>> repost them here if you'd like. Surely you remember Paul
>> reporting the diameter of the 7-limit "max" scales I asked
>> about...
>
>Yes, diameter of *scales* makes perfect sense, as I said below.
>
>> >Still, I could see the term applying perfectly well to JI scales,
>> >but not to individual intervals. You need an autonomous concept
>> >like 'length' so that you can define diameter as the longest
>> >'length' possible between two notes in a scale.
>>
>> Sure.
>
>I'm glad you agree.
>
>> What it really is, is the length of a geodesic.
>
>That's not correct in terms of any meaningful generalization of the
>term 'geodesic' to this problem that I can think of. Geodesic means
>basically as close to a straight line as you can get in the space in
>question. Scale diameters in the Hahn world don't necessarily fall on
>straight lines.
>
>> The length
>> of the longest geodesic is the diameter.
>
>It's easy to find blobs in the lattice which enclose scales for which
>this is not the case.

You're thinking of

http://mathworld.wolfram.com/Geodesic.html

I was referring to

http://mathworld.wolfram.com/GraphGeodesic.html

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/30/2005 3:46:29 PM

>> >> >> It's not clear to me which blows up faster as harmonic limit
>> >> >> is increased.
>>
>> If Gene's statement about immediate generalization to higher
>> limits is true, perhaps this holds there.
>
>The problem is once you get beyond 7-limit, you've got odd limit vs.
>prime limit, which isn't an apples-to-apples comparison.

Why couldn't expressibility be plotted on a prime-limit lattice?

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/30/2005 3:47:23 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >> It's not clear to me which blows up faster as harmonic limit
> >> >> >> is increased.
> >>
> >> If Gene's statement about immediate generalization to higher
> >> limits is true, perhaps this holds there.
> >
> >The problem is once you get beyond 7-limit, you've got odd limit vs.
> >prime limit, which isn't an apples-to-apples comparison.
>
> Why couldn't expressibility be plotted on a prime-limit lattice?

It can and is! It's the other one that can't/isn't.

🔗Carl Lumma <ekin@lumma.org>

11/30/2005 3:56:52 PM

>> >> >> >> It's not clear to me which blows up faster as harmonic limit
>> >> >> >> is increased.
>> >>
>> >> If Gene's statement about immediate generalization to higher
>> >> limits is true, perhaps this holds there.
>> >
>> >The problem is once you get beyond 7-limit, you've got odd limit vs.
>> >prime limit, which isn't an apples-to-apples comparison.
>>
>> Why couldn't expressibility be plotted on a prime-limit lattice?
>
>It can and is! It's the other one that can't/isn't.

The chart up to the 7-limit would be a fine thing to have. Beyond
that there could be separate columns for prime and odd limits (two
11-limit columns, for example). And in the 9- and 15-limit cases,
weighted graph length (Hahn distance) could be taken as a
representative of graph length.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

12/1/2005 1:13:55 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >> >> It's not clear to me which blows up faster as harmonic limit
> >> >> >> >> is increased.
> >> >>
> >> >> If Gene's statement about immediate generalization to higher
> >> >> limits is true, perhaps this holds there.
> >> >
> >> >The problem is once you get beyond 7-limit, you've got odd limit vs.
> >> >prime limit, which isn't an apples-to-apples comparison.
> >>
> >> Why couldn't expressibility be plotted on a prime-limit lattice?
> >
> >It can and is! It's the other one that can't/isn't.
>
> The chart up to the 7-limit would be a fine thing to have. Beyond
> that there could be separate columns for prime and odd limits (two
> 11-limit columns, for example).

OK.

> And in the 9- and 15-limit cases,
> weighted graph length (Hahn distance) could be taken as a
> representative of graph length.

Huh? Weighted graph length? I don't get it. What do you mean?

🔗Carl Lumma <ekin@lumma.org>

12/1/2005 1:30:45 PM

>> And in the 9- and 15-limit cases,
>> weighted graph length (Hahn distance) could be taken as a
>> representative of graph length.
>
>Huh? Weighted graph length? I don't get it. What do you mean?

Maybe you could suggest terminology for what I was calling
diameter, and the weighted version.

-Carl

🔗Carl Lumma <ekin@lumma.org>

12/1/2005 1:32:08 PM

At 01:30 PM 12/1/2005, you wrote:
>>> And in the 9- and 15-limit cases,
>>> weighted graph length (Hahn distance) could be taken as a
>>> representative of graph length.
>>
>>Huh? Weighted graph length? I don't get it. What do you mean?
>
>Maybe you could suggest terminology for what I was calling
>diameter, and the weighted version.

The weighted version has also been going by "isosceles" and
"Hahn distance" in this thread.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

12/1/2005 1:55:28 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> And in the 9- and 15-limit cases,
> >> weighted graph length (Hahn distance) could be taken as a
> >> representative of graph length.
> >
> >Huh? Weighted graph length? I don't get it. What do you mean?
>
> Maybe you could suggest terminology for what I was calling
> diameter, and the weighted version.

Hold on -- you seemed to imply that you could interpret Hahn distance
as weighted graph length. Did I grasp your meaning correctly? If so,
how do you do so?

🔗Paul Erlich <perlich@aya.yale.edu>

12/1/2005 1:58:31 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> At 01:30 PM 12/1/2005, you wrote:
> >>> And in the 9- and 15-limit cases,
> >>> weighted graph length (Hahn distance) could be taken as a
> >>> representative of graph length.
> >>
> >>Huh? Weighted graph length? I don't get it. What do you mean?
> >
> >Maybe you could suggest terminology for what I was calling
> >diameter, and the weighted version.
>
>> The weighted version has also been going by "isosceles" and
> "Hahn distance" in this thread.

These are distinct so I have no idea how the "weighted version" can
be both. Let me repeat what you yourself acknowledged: the "Hahn
distance" of 9:5 is:

1 in the 7-limit or below;
2 in the 9-limit or above.

Meanwhile 3:2 has a "Hahn distance" of 1 in all cases. Clearly this
isn't the same as what any norm on the isosceles lattice would give!

🔗Paul Erlich <perlich@aya.yale.edu>

12/1/2005 2:01:24 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
>
> --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >
> > At 01:30 PM 12/1/2005, you wrote:
> > >>> And in the 9- and 15-limit cases,
> > >>> weighted graph length (Hahn distance) could be taken as a
> > >>> representative of graph length.
> > >>
> > >>Huh? Weighted graph length? I don't get it. What do you mean?
> > >
> > >Maybe you could suggest terminology for what I was calling
> > >diameter, and the weighted version.
> >
> > The weighted version has also been going by "isosceles" and
> > "Hahn distance" in this thread.
>
> These are distinct so I have no idea how the "weighted version" can
> be both. Let me repeat what you yourself acknowledged: the "Hahn
> distance" of 9:5 is:
>
> 1 in the 7-limit or below;

I meant 2, not 1, there.

> 2 in the 9-limit or above.

I meant 1, not 2, there. Sorry!

> Meanwhile 3:2 has a "Hahn distance" of 1 in all cases. Clearly this
> isn't the same as what any norm on the isosceles lattice would give!
>

🔗Carl Lumma <ekin@lumma.org>

12/1/2005 2:42:42 PM

>> >> And in the 9- and 15-limit cases,
>> >> weighted graph length (Hahn distance) could be taken as a
>> >> representative of graph length.
>> >
>> >Huh? Weighted graph length? I don't get it. What do you mean?
>>
>> Maybe you could suggest terminology for what I was calling
>> diameter, and the weighted version.
>
>Hold on -- you seemed to imply that you could interpret Hahn distance
>as weighted graph length. Did I grasp your meaning correctly? If so,
>how do you do so?

Here's the code...

;; Octave-equivalent, triangular version of comma-dist.

(define comma-tridist
(lambda (f)
(let
((over (apply +
(all-vals
(occurrences
(remove 2
(factor
(numerator f)))))))
(under (apply +
(all-vals
(occurrences
(remove 2
(factor
(denominator f))))))))
(max over under))))

This is 'graph length' on a triangular lattice. Here's
the weighted version...

;; Weighted version of comma-tridist.
;; Algorithm due to Paul Hahn.

(define comma-isosceles
(lambda (f)
(letrec ((loop
(lambda (ls total)
(if (null? ls)
total
(loop
(cancel (cadar ls) (cdr ls))
(+ (* (log2 (caar ls)) (abs (cadar ls)))
total)))))
(cancel
(lambda (exp ls)
(cond
[(null? ls) '()]
[(zero? exp) ls]
[(eq? (sign exp) (sign (cadar ls)))
(cons (car ls) (cancel exp (cdr ls)))]
[(< (abs exp) (abs (cadar ls)))
(cons (list (caar ls) (+ exp (cadar ls)))
(cdr ls))]
[else (cancel (+ exp (cadar ls)) (cdr ls))]))))
(loop (reverse (monzo (remove-2s f))) 0))))

-Carl
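
A self-contained sketch of the unweighted measure (not Carl's library
code; graph-length is a name chosen here). It takes an exponent vector
whose first entry, the power of 2, is ignored, and whose remaining
entries sit on whatever consonant bases were declared, and returns the
larger of the summed positive and summed negative exponents, which is
the same figure comma-tridist computes from a ratio and the one behind
diameter(0 2 -1) = 2 versus diameter(0 0 -1 0 1) = 1 earlier in the
thread:

(define (graph-length v)
  ;; v = (exp-of-2 exp-of-base1 exp-of-base2 ...); the 2 entry is ignored
  (let loop ((exps (cdr v)) (up 0) (down 0))
    (cond ((null? exps) (max up down))
          ((positive? (car exps))
           (loop (cdr exps) (+ up (car exps)) down))
          (else
           (loop (cdr exps) up (- down (car exps)))))))

(graph-length '(0 2 -1))      ; 9:5 on a 2-3-5 basis        => 2
(graph-length '(0 0 -1 0 1))  ; 9:5 with 9 as its own axis  => 1

Only the exponents matter, so (0 2 -1) scores 2 whether the bases are
2-3-5 (giving 9:5) or 2-5-17 (giving 25:17). Like comma-tridist, the
shortcut assumes the pairwise ratios of the declared bases are
themselves consonant.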

🔗Carl Lumma <ekin@lumma.org>

12/1/2005 2:46:03 PM

>> >>> And in the 9- and 15-limit cases,
>> >>> weighted graph length (Hahn distance) could be taken as a
>> >>> representative of graph length.
>> >>
>> >>Huh? Weighted graph length? I don't get it. What do you mean?
>> >
>> >Maybe you could suggest terminology for what I was calling
>> >diameter, and the weighted version.
>>
>> The weighted version has also been going by "isosceles" and
>> "Hahn distance" in this thread.
>
>These are distinct so I have no idea how the "weighted version" can
>be both. Let me repeat what you yourself acknowledged: the "Hahn
>distance" of 9:5 is:
>
>1 in the 7-limit or below;
>2 in the 9-limit or above.

I thought you insisted on using "Hahn distance" only for the
weighted version. The algorithm given by Paul gives 9:5 as
3.9068905956085187 in either the 7- or 9-limit.

>Meanwhile 3:2 has a "Hahn distance" of 1 in all cases. Clearly this
>isn't the same as what any norm on the isosceles lattice would give!

We really must standardize this terminology. I have been calling
this diameter or Hahn diameter until recently, when I switched to
"graph length".

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

12/1/2005 2:51:01 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> And in the 9- and 15-limit cases,
> >> >> weighted graph length (Hahn distance) could be taken as a
> >> >> representative of graph length.
> >> >
> >> >Huh? Weighted graph length? I don't get it. What do you mean?
> >>
> >> Maybe you could suggest terminology for what I was calling
> >> diameter, and the weighted version.
> >
> >Hold on -- you seemed to imply that you could interpret Hahn distance
> >as weighted graph length. Did I grasp your meaning correctly? If so,
> >how do you do so?
>
> Here's the code...

I don't think that helps. I meant, how do you construct the graph,
and with what weights?

> ;; Octave-equivalent, triangular version of comma-dist.
>
> (define comma-tridist
> (lambda (f)
> (let
> ((over (apply +
> (all-vals
> (occurrences
> (remove 2
> (factor
> (numerator f)))))))
> (under (apply +
> (all-vals
> (occurrences
> (remove 2
> (factor
> (denominator f))))))))
> (max over under))))
>
> This is 'graph length' on a triangular lattice.

So you say. However what you're calculating here seems to be a ratio-
complexity measure (you input a ratio, not a set of graph positions),
and it's not at all clear how it corresponds (or even could
correspond) to 'graph length' on a triangular lattice. What happens in
the 9-limit and above, where ratios of 3 and ratios of 9 both have
the same length? How does the graph look?

> Here's
> the weighted version...
>
> ;; Weighted version of comma-tridist.
> ;; Algorithm due to Paul Hahn.
>
> (define comma-isosceles
> (lambda (f)
> (letrec ((loop
> (lambda (ls total)
> (if (null? ls)
> total
> (loop
> (cancel (cadar ls) (cdr ls))
> (+ (* (log2 (caar ls)) (abs (cadar ls)))
> total)))))
> (cancel
> (lambda (exp ls)
> (cond
> [(null? ls) '()]
> [(zero? exp) ls]
> [(eq? (sign exp) (sign (cadar ls)))
> (cons (car ls) (cancel exp (cdr ls)))]
> [(< (abs exp) (abs (cadar ls)))
> (cons (list (caar ls) (+ exp (cadar ls)))
> (cdr ls))]
> [else (cancel (+ exp (cadar ls)) (cdr ls))]))))
> (loop (reverse (monzo (remove-2s f))) 0))))
>
> -Carl
>

🔗Carl Lumma <ekin@lumma.org>

12/1/2005 2:54:03 PM

>> Here's the code...
>
>I don't think that helps. I meant, how do you construct the graph,
>and with what weights?

Obviously I'm still not using the magic words. What, Paul, would
you like me to call this thing we've been talking about now for
10 years?

>> ;; Octave-equivalent, triangular version of comma-dist.
>>
>> (define comma-tridist
>> (lambda (f)
>> (let
>> ((over (apply +
>> (all-vals
>> (occurrences
>> (remove 2
>> (factor
>> (numerator f)))))))
>> (under (apply +
>> (all-vals
>> (occurrences
>> (remove 2
>> (factor
>> (denominator f))))))))
>> (max over under))))
>>
>> This is 'graph length' on a triangular lattice.
>
>So you say. However what you're calculating here seems to be a ratio-
>complexity measure (you input a ratio, not a set of graph positions),
>and it's not at all clear how it corresponds (or even could
>correspond) to 'graph length' on a triangular lattice. What happens in
>the 9-limit and above, where ratios of 3 and ratios of 9 both have
>the same length? How does the graph look?

Didn't we just finish discussing this?

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

12/1/2005 2:57:30 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >>> And in the 9- and 15-limit cases,
> >> >>> weighted graph length (Hahn distance) could be taken as a
> >> >>> representative of graph length.
> >> >>
> >> >>Huh? Weighted graph length? I don't get it. What do you mean?
> >> >
> >> >Maybe you could suggest terminology for what I was calling
> >> >diameter, and the weighted version.
> >>
> >> The weighted version has also been going by "isosceles" and
> >> "Hahn distance" in this thread.
> >
> >These are distinct so I have no idea how the "weighted version" can
> >be both. Let me repeat what you yourself acknowledged: the "Hahn
> >distance" of 9:5 is:
> >
> >1 in the 7-limit or below;
> >2 in the 9-limit or above.
>
> I thought you insisted on using "Hahn distance" only for the
> weighted version.

Quite the contrary. I've been quite consistent in using it for the
unweighted version, as I remember Paul Hahn's ideas well.

> The algorithm given by Paul gives 9:5 as
> 3.9068905956085187 in either the 7- or 9-limit.

He may have also written an algorithm for the 'isosceles' version,
which this may be, but it's not the 'distance' measure *he* favored,
which was unweighted.

> >Meanwhile 3:2 has a "Hahn distance" of 1 in all cases. Clearly this
> >isn't the same as what any norm on the isosceles lattice would give!
>
> We really must standardize this terminology. I have been calling
> this diameter or Hahn diameter until recently, when I switched to
> "graph length".

Carl, you provided some links (about "geodesic") when discussing
diameter but this would have been much more direct:

http://mathworld.wolfram.com/GraphDiameter.html

This captures what you and Paul Hahn mean by 'diameter' and you don't
have to mention 'geodesic' or anything like that. But graph diameter
*is* defined in terms of

http://mathworld.wolfram.com/GraphDistance.html

So you need the latter concept ('distance') in order to define the
former ('diameter').

This is what I was trying to tell you before but perhaps the
Mathworld pages will make you believe me . . .
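
For concreteness, a small sketch of those two Mathworld definitions
(graph-distance and graph-diameter are names chosen here; the graph is
an adjacency list over vertices 0..n-1, which for a scale would be one
vertex per pitch class with an edge wherever the interval between two
pitches is a consonance of the chosen limit):

(define (add-new xs seen)
  ;; elements of xs not already in seen, without duplicates
  (cond ((null? xs) '())
        ((memv (car xs) seen) (add-new (cdr xs) seen))
        (else (cons (car xs) (add-new (cdr xs) (cons (car xs) seen))))))

(define (graph-distance adj u v)
  ;; number of edges in a shortest path from u to v; #f if unconnected
  (let loop ((frontier (list u)) (seen (list u)) (d 0))
    (cond ((memv v frontier) d)
          ((null? frontier) #f)
          (else (let ((next (add-new (apply append
                                            (map (lambda (w) (list-ref adj w))
                                                 frontier))
                                     seen)))
                  (loop next (append next seen) (+ d 1)))))))

(define (graph-diameter adj)
  ;; largest graph distance over all vertex pairs
  (let ((n (length adj)))
    (let outer ((i 0) (best 0))
      (if (= i n)
          best
          (let inner ((j (+ i 1)) (best best))
            (if (= j n)
                (outer (+ i 1) best)
                (inner (+ j 1)
                       (max best (or (graph-distance adj i j) best)))))))))

(graph-diameter '((1) (0 2) (1 3) (2)))  ; a 4-vertex path graph => 3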

🔗Paul Erlich <perlich@aya.yale.edu>

12/1/2005 3:00:02 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> Here's the code...
> >
> >I don't think that helps. I meant, how do you construct the graph,
> >and with what weights?
>
> Obviously I'm still not using the magic words.

Words?

> What, Paul, would
> you like me to call this thing we've been talking about now for
> 10 years?

I don't care! I don't know why this is your response when I ask you
how you construct the graph!

> >> This is 'graph length' on a triangular lattice.
> >
> >So you say. However what you're calculating here seems to be a ratio-
> >complexity measure (you input a ratio, not a set of graph positions),
> >and it's not at all clear how it corresponds (or even could
> >correspond) to 'graph length' on a triangular lattice. What happens in
> >the 9-limit and above, where ratios of 3 and ratios of 9 both have
> >the same length? How does the graph look?
>
> Didn't we just finish discussing this?

Not as far as I'm aware.

🔗Carl Lumma <ekin@lumma.org>

12/1/2005 3:08:32 PM

>Carl, you provided some links (about "geodesic") when discussing
>diameter but this would have been much more direct:
>
>http://mathworld.wolfram.com/GraphDiameter.html
>
>This captures what you and Paul Hahn mean by 'diameter' and you don't
>have to mention 'geodesic' or anything like that. But graph diameter
>*is* defined in terms of
>
>http://mathworld.wolfram.com/GraphDistance.html
>
>So you need the latter concept ('distance') in order to define the
>former ('diameter').
>
>This is what I was trying to tell you before but perhaps the
>Mathworld pages will make you believe me . . .

I understood and agree, but I'm simply not clear on what your
preferred terminology is. I can only guess from the above
that you want the unweighted version to be called Hahn distance
and the weighted version to be called isosceles distance?
Please help!

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

12/1/2005 3:08:05 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> > The algorithm given by Paul gives 9:5 as
> > 3.9068905956085187 in either the 7- or 9-limit.
>
> He may have also written an algorithm for the 'isosceles' version,
> which this may be, but it's not the 'distance' measure *he* favored,
> which was unweighted.

Though it doesn't agree with 'weighted Hahn' as I would have ever
understood that term: there, the 7-limit distance is just what you say,
but the 9-limit distance is ~3.17. He did revise some of his algorithms
at one time; the one you're using is probably incorrect.

I still don't understand your plain or 'unweighted' Hahn code though.
You need to input both a ratio and an odd limit. Where are these read
or input, in the code?

🔗Paul Erlich <perlich@aya.yale.edu>

12/1/2005 3:13:55 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >Carl, you provided some links (about "geodesic") when discussing
> >diameter but this would have been much more direct:
> >
> >http://mathworld.wolfram.com/GraphDiameter.html
> >
> >This captures what you and Paul Hahn mean by 'diameter' and you don't
> >have to mention 'geodesic' or anything like that. But graph diameter
> >*is* defined in terms of
> >
> >http://mathworld.wolfram.com/GraphDistance.html
> >
> >So you need the latter concept ('distance') in order to define the
> >former ('diameter').
> >
> >This is what I was trying to tell you before but perhaps the
> >Mathworld pages will make you believe me . . .
>
> I understood and agree, but I'm simply not clear on what your
> preferred terminology is. I can only guess from the above
> that you want the unweighted version to be called Hahn distance

Sure, if we can find a graph to support it! I've been putting
the 'distance' in "Hahn distance" in quotes because this isn't clear
yet.

> and the weighted version to be called isosceles distance?

"Weighted Hahn 'distance'" works for me. The isosceles construction
may cease to work for geometrizing "weighted Hahn 'distance'" in the
9-limit and beyond, so I'd prefer to hold off on that.

🔗Carl Lumma <ekin@lumma.org>

12/1/2005 3:14:34 PM

>> What, Paul, would you like me to call this thing we've been
>> talking about now for 10 years?
>
>I don't care! I don't know why this is your response when I ask you
>how you construct the graph!

But you do care. The reason people everywhere find communicating
with you so exhausting may be that every time they want to refer to
something they have to first stop and ask what term could be used
that will not cause you to object.

Isn't it bleedingly obvious that I'm talking about the triangular
lattice here?

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

12/1/2005 3:18:13 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> What, Paul, would you like me to call this thing we've been
> >> talking about now for 10 years?
> >
> >I don't care! I don't know why this is your response when I ask you
> >how you construct the graph!
>
> But you do care. The reason people everywhere find communicating
> with you so exhausting may be that every time they want to refer to
> something they have to first stop and ask what term could be used
> that will not cause you to object.

Maybe so, but not now, not here.

> Isn't it bleedingly obvious that I'm talking about the triangular
> lattice here?

No, because in most people's idea of a triangular lattice, 9 will
have twice the distance of 3 no matter which norm or metric you use.

🔗Carl Lumma <ekin@lumma.org>

12/1/2005 3:22:09 PM

>> > The algorithm given by Paul gives 9:5 as
>> > 3.9068905956085187 in either the 7- or 9-limit.
>>
>> He may have also written an algorithm for the 'isosceles' version,
>> which this may be, but it's not the 'distance' measure *he* favored,
>> which was unweighted.
>
>Though it doesn't agree with 'weighted Hahn' as I would have ever
>understood that term: there, the 7-limit distance is just what you say,
>but the 9-limit distance is ~3.17. He did revise some of his algorithms
>at one time; the one you're using is probably incorrect.

Hahn's assertion was that the odd- and prime-limit distance would
be the same. When did he take this back, and what code are you
using to get 3.17?

>I still don't understand your plain or 'unweighted' Hahn code though.
>You need to input both a ratio and an odd limit. Where are these read
>or input, in the code?

As discussed, this code factors the input ratio and assumes prime
limits in doing so. The length of particular monzos can be found
with another function. Nowhere are odd limits needed or used.

-Carl

🔗Carl Lumma <ekin@lumma.org>

12/1/2005 3:24:13 PM

>> Isn't it bleedingly obvious that I'm talking about the triangular
>> lattice here?
>
>No, because in most people's idea of a triangular lattice, 9 will
>have twice the distance of 3 no matter which norm or metric you use.

If 9 has an axis, it will have length 1. If not and 3 has an
axis, it will have length 2.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

12/1/2005 3:29:58 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> > The algorithm given by Paul gives 9:5 as
> >> > 3.9068905956085187 in either the 7- or 9-limit.
> >>
> >> He may have also written an algorithm for the 'isosceles' version,
> >> which this may be, but it's not the 'distance' measure *he* favored,
> >> which was unweighted.
> >
> >Though it doesn't agree with 'weighted Hahn' as I would have ever
> >understood that term: there, the 7-limit distance is just what you say,
> >but the 9-limit distance is ~3.17. He did revise some of his algorithms
> >at one time; the one you're using is probably incorrect.
>
> Hahn's assertion was that the odd- and prime-limit distance would
> be the same.

What exactly do you mean by this?

> When did he take this back,

As I recall, *I* made an incorrect assertion resembling this, which
you can find ("I was wrong about that") here:

http://kees.cc/tuning/erl_perbl.html

and *he* (Paul Hahn) was the one who first pointed out it was
incorrect.

> and what code are you
> using to get 3.17?

In the 7-limit, there are no single steps or rungs corresponding to
ratios of 9. So you need two steps: 3 and 3:5. The former has a
length of lg2(3), the latter has a length of lg2(5), and adding
yields ~3.17.

> >> >I still don't understand your plain or 'unweighted' Hahn code though.
> >> >You need to input both a ratio and an odd limit. Where are these read
> >or input, in the code?
>
> As discussed, this code factors the input ratio and assumes prime
> limits in doing so. The length of particular monzos can be found
> with another function. Nowhere are odd limits needed or used.

Then how can you correctly find, using the code, that the "Hahn
distance" of 9:5 is 2 in the 7-limit or below, and 1 in the 9-limit
or above? I must be missing something.

🔗Paul Erlich <perlich@aya.yale.edu>

12/1/2005 3:32:16 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> Isn't it bleedingly obvious that I'm talking about the triangular
> >> lattice here?
> >
> >No, because in most people's idea of a triangular lattice, 9 will
> >have twice the distance of 3 no matter which norm or metric you use.
>
> If 9 has an axis, it will have length 1. If not and 3 has an
> axis, it will have length 2.

On the graph, there will be something you get to when you take two
steps in the 3 direction. What is this something?

And furthermore, even if there is a separate 9-axis, how will you
arrange for 9:5 and 9:7 to also get a length of 1, as Hahn 'distance'
requires in the 9-limit and above?

🔗Paul Erlich <perlich@aya.yale.edu>

12/1/2005 3:34:50 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> > and what code are you
> > using to get 3.17?
>
> In the 7-limit, there are no single steps or rungs corresponding to
> ratios of 9. So you need two steps: 3 and 3:5. The former has a
> length of lg2(3), the latter has a length of lg2(5), and adding
> yields ~3.17.

Whoops -- adding yields 3.90689059560852, which is the number you gave.
But in the 9-limit or above, 9:5 is a consonant ratio of 9, a single
step of length lg2(9) = ~3.17.
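
A quick numeric check of those two figures (log2 defined here just for
the check):

(define (log2 x) (/ (log x) (log 2)))

(+ (log2 3) (log2 5))  ; 7-limit: a step of 3 plus a step of 5:3  => ~3.907
(log2 9)               ; 9-limit: a single step of 9:5            => ~3.170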

🔗Carl Lumma <ekin@lumma.org>

12/1/2005 3:43:24 PM

>> >> Isn't it bleedingly obvious that I'm talking about the triangular
>> >> lattice here?
>> >
>> >No, because in most people's idea of a triangular lattice, 9 will
>> >have twice the distance of 3 no matter which norm or metric you use.
>>
>> If 9 has an axis, it will have length 1. If not and 3 has an
>> axis, it will have length 2.
>
>On the graph, there will be something you get to when you take two
>steps in the 3 direction. What is this something?

9, but the distance is always the shortest one (hence, geodesic).

>And furthermore, even if there is a separate 9-axis, how will you
>arrange for 9:5 and 9:7 to also get a length of 1, as Hahn 'distance'
>requires in the 9-limit and above?

The same way that 7:5 is length 1.

-Carl

🔗Carl Lumma <ekin@lumma.org>

12/1/2005 3:44:47 PM

>> >> > The algorithm given by Paul gives 9:5 as
>> >> > 3.9068905956085187 in either the 7- or 9-limit.
>> >>
>> >> He may have also written an algorithm for the 'isosceles'
>> >> version, which this may be, but it's not the 'distance'
>> >> measure *he* favored, which was unweighted.
>> >
>> >Though it doesn't agree with 'weighted Hahn' as I would have ever
>> >understood that term: there, the 7-limit distance is just what you
>> >say, but the 9-limit distance is ~3.17. He did revise some of his
>> >algorithms at one time; the one you're using is probably incorrect.
>>
>> Hahn's assertion was that the odd- and prime-limit distance would
>> be the same.
>
>What exactly do you mean by this?

That 9:5 would have the same distance in the 7- and 9-limit.

>> When did he take this back,
>
> >As I recall, *I* made an incorrect assertion resembling this, which
>you can find ("I was wrong about that") here:
>
> http://kees.cc/tuning/erl_perbl.html
>
>and *he* (Paul Hahn) was the one who first pointed out it was
>incorrect.

This compares it to expressibility. Don't see anything about its
value at different limits.

>> and what code are you using to get 3.17?
>
>In the 7-limit, there are no single steps or rungs corresponding to
>ratios of 9. So you need two steps: 3 and 3:5. The former has a
>length of lg2(3), the latter has a length of lg2(5), and adding
>yields ~3.17.

(Saw you just posted an update on this...)

>> >I still don't understand your plain or 'unweighted' Hahn code
>> >though. You need to input both a ratio and an odd limit.
>> >Where are these read or input, in the code?
>>
>> As discussed, this code factors the input ratio and assumes prime
>> limits in doing so. The length of particular monzos can be found
>> with another function. Nowhere are odd limits needed or used.
>
>Then how can you correctly find, using the code, that the "Hahn
>distance" of 9:5 is 2 in the 7-limit or below, and 1 in the 9-limit
>or above? I must be missing something.

Yes -- the code that accepts monzos, which I already demonstrated
in a message, like, yesterday.

-Carl

🔗Carl Lumma <ekin@lumma.org>

12/1/2005 3:53:39 PM

>> > and what code are you using to get 3.17?
>>
>> In the 7-limit, there are no single steps or rungs corresponding to
>> ratios of 9. So you need two steps: 3 and 3:5. The former has a
>> length of lg2(3), the latter has a length of lg2(5), and adding
>> yields ~3.17.
>
>Whoops -- adding yields 3.90689059560852, which is the number you gave.
>But in the 9-limit or above, 9:5 is a consonant ratio of 9, a single
>step of length lg2(9) = ~3.17.

Ah, ok, 9:5 does not have the same length in the 7- and 9-limits.
That's important. So the version that accepts monzos is
necessary.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

12/2/2005 2:29:21 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> Isn't it bleedingly obvious that I'm talking about the triangular
> >> >> lattice here?
> >> >
> >> >No, because in most people's idea of a triangular lattice, 9 will
> >> >have twice the distance of 3 no matter which norm or metric you use.
> >>
> >> If 9 has an axis, it will have length 1. If not and 3 has an
> >> axis, it will have length 2.
> >
> >On the graph, there will be something you get to when you take two
> >steps in the 3 direction. What is this something?
>
> 9, but the distance is always the shortest one (hence, geodesic).
>
> >And furthermore, even if there is a separate 9-axis, how will you
> >arrange for 9:5 and 9:7 to also get a length of 1, as Hahn 'distance'
> >requires in the 9-limit and above?
>
> The same way that 7:5 is length 1.

But then 9:3 must have length 1 too, right? And 9:3 is acoustically
the same as 3, and 3 has length 1 but in a different direction.
Neither is the shortest one, so how do you know which one to use in a
given circumstance?

🔗Paul Erlich <perlich@aya.yale.edu>

12/2/2005 2:33:35 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> > and what code are you using to get 3.17?
> >>
> >> In the 7-limit, there are no single steps or rungs corresponding to
> >> ratios of 9. So you need two steps: 3 and 3:5. The former has a
> >> length of lg2(3), the latter has a length of lg2(5), and adding
> >> yields ~3.17.
> >
> >Whoops -- adding yields 3.90689059560852, which is the number you gave.
> >But in the 9-limit or above, 9:5 is a consonant ratio of 9, a single
> >step of length lg2(9) = ~3.17.
>
> Ah, ok, 9:5 does not have the same length in the 7- and 9-limits.
> That's important. So the version that accepts monzos is
> necessary.

I'd love clarification on how 'accepting monzos' also allows the
separate specification of an odd limit.

🔗Paul Erlich <perlich@aya.yale.edu>

12/2/2005 2:32:40 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> > The algorithm given by Paul gives 9:5 as
> >> >> > 3.9068905956085187 in either the 7- or 9-limit.
> >> >>
> >> >> He may have also written an algorithm for the 'isosceles'
> >> >> version, which this may be, but it's not the 'distance'
> >> >> measure *he* favored, which was unweighted.
> >> >
> >> >Though it doesn't agree with 'weighted Hahn' as I would have ever
> >> >understood that term: there, the 7-limit distance is just what you
> >> >say, but the 9-limit distance is ~3.17. He did revise some of his
> >> >algorithms at one time; the one you're using is probably incorrect.
> >>
> >> Hahn's assertion was that the odd- and prime-limit distance would
> >> be the same.
> >
> >What exactly do you mean by this?
>
> That 9:5 would have the same distance in the 7- and 9-limit.

Well, clearly it doesn't.

> >> When did he take this back,
> >
> >As I recall, *I* made an incorrect assertion resembling this, which
> >you can find ("I was wrong about that") here:
> >
> > http://kees.cc/tuning/erl_perbl.html
> >
> >and *he* (Paul Hahn) was the one who first pointed out it was
> >incorrect.
>
> This compares it to expressibility. Don't see anything about its
> value at different limits.

Expressibility is the same as weighted Hahn distance in an infinite
(odd) limit.
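
(An illustrative check, log2 defined here: for 9:5 the expressibility is
log2 of the larger odd part, which equals the weighted figure quoted
above once 9 itself counts as a consonant step.)

(define (log2 x) (/ (log x) (log 2)))

(log2 (max 9 5))  ; => ~3.17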

> >> and what code are you using to get 3.17?
> >
> >In the 7-limit, there are no single steps or rungs corresponding to
> >ratios of 9. So you need two steps: 3 and 3:5. The former has a
> >length of lg2(3), the latter has a length of lg2(5), and adding
> >yields ~3.17.
>
> (Saw you just posted an update on this...)

Yes . . .

> >> >I still don't understand your plain or 'unweighted' Hahn code
> >> >though. You need to input both a ratio and an odd limit.
> >> >Where are these read or input, in the code?
> >>
> >> As discussed, this code factors the input ratio and assumes prime
> >> limits in doing so. The length of particular monzos can be found
> >> with another function. Nowhere are odd limits needed or used.
> >
> >Then how can you correctly find, using the code, that the "Hahn
> >distance" of 9:5 is 2 in the 7-limit or below, and 1 in the 9-
limit
> >or above? I must be missing something.
>
> Yes -- the code that accepts monzos, which I already demonstrated
> in a message, like, yesterday.

I'm confused. The code accepts monzos but it also needs to know the
odd limit. How does it know?

🔗Paul Erlich <perlich@aya.yale.edu>

12/2/2005 3:08:54 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> 9, but the distance is always the shortest one (hence, geodesic).

BTW, geodesic means the shortest path between point A and point B. It
doesn't mean the shortest path between point A and any of point B, C,
and D, where B, C, and D represent the same ratio. But the latter is
the sense in which you're using it here.

Not trying to be pedantic; just want to keep a handle on how hairy this
is and, if Gene is reading, to keep some semblance of mathematical
precision around our ideas so that maybe he might understand,
contribute, and perhaps formalize and generalize . . .

🔗Carl Lumma <ekin@lumma.org>

12/3/2005 11:58:27 PM

>> 9, but the distance is always the shortest one (hence, geodesic).
>>
>> >And furthermore, even if there is a separate 9-axis, how will you
>> >arrange for 9:5 and 9:7 to also get a length of 1, as
>> >Hahn 'distance' requires in the 9-limit and above?
>>
>> The same way that 7:5 is length 1.
>
>But then 9:3 must have length 1 too, right? And 9:3 is acoustically
>the same as 3, and 3 has length 1 but in a different direction.
>Neither is the shortest one, so how do you know which one to use in a
>given circumstance?

Since the distance is the same, it wouldn't matter. Or maybe
I'm not following...

-Carl

🔗Carl Lumma <ekin@lumma.org>

12/3/2005 11:59:40 PM

>Expressibility is the same as weighted Hahn distance in an infinite
>(odd) limit.

Yes.

>> >> >I still don't understand your plain or 'unweighted' Hahn code
>> >> >though. You need to input both a ratio and an odd limit.
>> >> >Where are these read or input, in the code?
>> >>
>> >> As discussed, this code factors the input ratio and assumes prime
>> >> limits in doing so. The length of particular monzos can be found
>> >> with another function. Nowhere are odd limits needed or used.
>> >
>> >Then how can you correctly find, using the code, that the "Hahn
>> >distance" of 9:5 is 2 in the 7-limit or below, and 1 in the 9-
>limit
>> >or above? I must be missing something.
>>
>> Yes -- the code that accepts monzos, which I already demonstrated
>> in a message, like, yesterday.
>
>I'm confused. The code accepts monzos but it also needs to know the
>odd limit. How does it know?

If you have a monzo you *don't* need an odd-limit.

-Carl

🔗Carl Lumma <ekin@lumma.org>

12/4/2005 12:02:53 AM

>> 9, but the distance is always the shortest one (hence, geodesic).
>
>BTW, geodesic means the shortest path between point A and point B. It
>doesn't mean the shortest path between point A and any of point B, C,
>and D, where B, C, and D represent the same ratio. But the latter is
>the sense in which you're using it here.

True.

-Carl

🔗wallyesterpaulrus <perlich@aya.yale.edu>

12/6/2005 1:19:37 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> 9, but the distance is always the shortest one (hence, geodesic).
> >>
> >> >And furthermore, even if there is a separate 9-axis, how will you
> >> >arrange for 9:5 and 9:7 to also get a length of 1, as
> >> >Hahn 'distance' requires in the 9-limit and above?
> >>
> >> The same way that 7:5 is length 1.
> >
> >But then 9:3 must have length 1 too, right? And 9:3 is acoustically
> >the same as 3, and 3 has length 1 but in a different direction.
> >Neither is the shortest one, so how do you know which one to use in a
> >given circumstance?
>
> Since the distance is the same, it wouldn't matter. Or maybe
> I'm not following...

In order to construct balls for other complexity measures on this
lattice (to fill out your table), you have to know which one to use.
For one thing.

🔗wallyesterpaulrus <perlich@aya.yale.edu>

12/6/2005 1:20:42 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >Expressibility is the same as weighted Hahn distance in an infinite
> >(odd) limit.
>
> Yes.
>
> >> >> >I still don't understand your plain or 'unweighted' Hahn code
> >> >> >though. You need to input both a ratio and and odd limit.
> >> >> >Where are these read or input, in the code?
> >> >>
> >> >> As discussed, this code factors the input ratio and assumes prime
> >> >> limits in doing so. The length of particular monzos can be found
> >> >> with another function. Nowhere are odd limits needed or used.
> >> >
> >> >Then how can you correctly find, using the code, that the "Hahn
> >> >distance" of 9:5 is 2 in the 7-limit or below, and 1 in the 9-
> >limit
> >> >or above? I must be missing something.
> >>
> >> Yes -- the code that accepts monzos, which I already demonstrated
> >> in a message, like, yesterday.
> >
> >I'm confused. The code accepts monzos but it also needs to know the
> >odd limit. How does it know?
>
> If you have a monzo you *don't* need an odd-limit.

Sure you do.

[0 2 -1> is the "monzo" that represents 9:5.

Does it have a Hahn complexity of 1 or 2? You need to additionally
specify an odd limit in order to answer this question.

🔗Carl Lumma <ekin@lumma.org>

12/6/2005 2:30:50 PM

>> >> 9, but the distance is always the shortest one (hence, geodesic).
>> >>
>> >> >And furthermore, even if there is a separate 9-axis, how will you
>> >> >arrange for 9:5 and 9:7 to also get a length of 1, as
>> >> >Hahn 'distance' requires in the 9-limit and above?
>> >>
>> >> The same way that 7:5 is length 1.
>> >
>> >But then 9:3 must have length 1 too, right? And 9:3 is acoustically
>> >the same as 3, and 3 has length 1 but in a different direction.
>> >Neither is the shortest one, so how do you know which one to use in
>> >a given circumstance?
>>
>> Since the distance is the same, it wouldn't matter. Or maybe
>> I'm not following...
>
>In order to construct balls for other complexity measures on this
>lattice (to fill out your table), you have to know which one to use.
>For one thing.

The 1-ball is tangent to both points, etc. Maybe this makes it
non-convex, I don't know...

-Carl

🔗Carl Lumma <ekin@lumma.org>

12/6/2005 2:32:19 PM

>> If you have a monzo you *don't* need an odd-limit.
>
>Sure you do.
>
>[0 2 -1> is the "monzo" that represents 9:5.
>
>Does it have a Hahn complexity of 1 or 2? You need to additionally
>specify an odd limit in order to answer this question.

That monzo is always distance 2. No limit of any kind is
required. The numbers could be exponents on any bases, so long
as they're consonant.

-Carl

🔗wallyesterpaulrus <perlich@aya.yale.edu>

12/6/2005 2:40:48 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> 9, but the distance is always the shortest one (hence, geodesic).
> >> >>
> >> >> >And furthermore, even if there is a separate 9-axis, how will you
> >> >> >arrange for 9:5 and 9:7 to also get a length of 1, as
> >> >> >Hahn 'distance' requires in the 9-limit and above?
> >> >>
> >> >> The same way that 7:5 is length 1.
> >> >
> >> >But then 9:3 must have length 1 too, right? And 9:3 is acoustically
> >> >the same as 3, and 3 has length 1 but in a different direction.
> >> >Neither is the shortest one, so how do you know which one to use in
> >> >a given circumstance?
> >>
> >> Since the distance is the same, it wouldn't matter. Or maybe
> >> I'm not following...
> >
> >In order to construct balls for other complexity measures on this
> >lattice (to fill out your table), you have to know which one to use.
> >For one thing.
>
> The 1-ball is tangent to both points, etc.

The 1-ball? What does that refer to? Did you notice that I wrote
*other* complexity measures? Assuming you did, and assuming you're
thinking of some other complexity measure where 3 has a complexity of
1, are you saying that '3' and '9/3' belong to the ball
while '3*3*3/9', for instance, doesn't?

> Maybe this makes it
> non-convex, I don't know...

If '3*3*3/9', etc., do belong to it, which seems reasonable if the
acoustically identical 3 does, then the ball is non-finite.

🔗wallyesterpaulrus <perlich@aya.yale.edu>

12/6/2005 2:48:01 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> If you have a monzo you *don't* need an odd-limit.
> >
> >Sure you do.
> >
> >[0 2 -1> is the "monzo" that represents 9:5.
> >
> >Does it have a Hahn complexity of 1 or 2? You need to additionally
> >specify an odd limit in order to answer this question.
>
> That monzo is always distance 2. No limit of any kind is
> required. The numbers could be exponents on any bases, so long
> as they're consonant.

First of all, I don't think that's right for the Hahn deal. Only if
the pairwise *ratios* of these bases are also consonant, not just the
bases themselves, is this the case.

Anyway, this is not how Monz defines "monzos". Nor how Gene defines
them. Moreover, if both 3 and 9 are entries, the monzo for a given
ratio is not unique. But behind the whole "monzo" idea is the
fundamental theorem of arithmetic, and thus uniqueness. That's why I
haven't understood you on this point until now.

🔗Carl Lumma <ekin@lumma.org>

12/6/2005 2:52:32 PM

>> >In order to construct balls for other complexity measures on this
>> >lattice (to fill out your table), you have to know which one to
>> >use. For one thing.
>>
>> The 1-ball is tangent to both points, etc.
>
>The 1-ball? What does that refer to? Did you notice that I wrote
>*other* complexity measures? Assuming you did, and assuming you're
>thinking of some other complexity measure where 3 has a complexity of
>1, are you saying that '3' and '9/3' belong to the ball
>while '3*3*3/9', for instance, doesn't?

I wrote "etc.", meaning I intended this chart to reveal things
about the proposed measures/lattices, not the other way around.

>> Maybe this makes it non-convex, I don't know...
>
>If '3*3*3/9', etc., do belong to it, which seems reasonable if the
>acoustically identical 3 does, then the ball is non-finite.

In the case of the unweighted Hahn choice awards, this is
(0 3 0 0 -1) = 3. So despite being acoustically identical, it
doesn't have the same distance. Which isn't immediately killing,
since this distance isn't meant to be only a measure of
psychoacoustic consonance.

-Carl

🔗Carl Lumma <ekin@lumma.org>

12/6/2005 2:56:11 PM

At 02:48 PM 12/6/2005, you wrote:
>--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>>
>> >> If you have a monzo you *don't* need an odd-limit.
>> >
>> >Sure you do.
>> >
>> >[0 2 -1> is the "monzo" that represents 9:5.
>> >
>> >Does it have a Hahn complexity of 1 or 2? You need to additionally
>> >specify an odd limit in order to answer this question.
>>
>> That monzo is always distance 2. No limit of any kind is
>> required. The numbers could be exponents on any bases, so long
>> as they're consonant.
>
>First of all, I don't think that's right for the Hahn deal. Only if
>the pairwise *ratios* of these bases are also consonant, not just the
>bases themselves, is this the case.

That's an assumption of the triangular lattice, yes.

>Anyway, this is not how Monz defines "monzos". Nor how Gene defines
>them.

Ok. What would you like me to call them? You've already rejected
"interval vectors".

>Moreover, if both 3 and 9 are entries, the monzo for a given
>ratio is not unique. But behind the whole "monzo" idea is the
>fundamental theorem of arithmetic, and thus uniqueness. That's why I
>haven't understood you on this point until now.

"Fokker style interval vectors" with odd entries comes directly
from Paul Hahn. I may remind you that *you're* the one insisting
on odd limits. My code can't even handle them.

-Carl

🔗wallyesterpaulrus <perlich@aya.yale.edu>

12/6/2005 3:00:51 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >In order to construct balls for other complexity measures on this
> >> >lattice (to fill out your table), you have to know which one to
> >> >use. For one thing.
> >>
> >> The 1-ball is tangent to both points, etc.
> >
> >The 1-ball? What does that refer to? Did you notice that I wrote
> >*other* complexity measures? Assuming you did, and assuming you're
> >thinking of some other complexity measure where 3 has a complexity of
> >1, are you saying that '3' and '9/3' belong to the ball
> >while '3*3*3/9', for instance, doesn't?
>
> I wrote "etc.", meaning I intended this chart to reveal things
> about the proposed measures/lattices, not the other way around.

Huh?

> >> Maybe this makes it non-convex, I don't know...
> >
> >If '3*3*3/9', etc., do belong to it, which seems reasonable if the
> >acoustically identical 3 does, then the ball is non-finite.
>
> In the case of the unweighted Hahn choice awards, this is
> (0 3 0 0 -1) = 3. So despite being acoustically identical, it
> doesn't have the same distance. Which isn't immediately killing,
> since this distance isn't meant to be only a measure of
> psychoacoustic consonance.

It doesn't seem like you're paying attention to the word "other"
in "*other* complexity measures", Carl. I'm trying to understand how
to fill out the table you proposed!

🔗wallyesterpaulrus <perlich@aya.yale.edu>

12/6/2005 3:04:38 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> At 02:48 PM 12/6/2005, you wrote:
> >--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >>
> >> >> If you have a monzo you *don't* need an odd-limit.
> >> >
> >> >Sure you do.
> >> >
> >> >[0 2 -1> is the "monzo" that represents 9:5.
> >> >
> >> >Does it have a Hahn complexity of 1 or 2? You need to additionally
> >> >specify an odd limit in order to answer this question.
> >>
> >> That monzo is always distance 2. No limit of any kind is
> >> required. The numbers could be exponents on any bases, so long
> >> as they're consonant.
> >
> >First of all, I don't think that's right for the Hahn deal. Only if
> >the pairwise *ratios* of these bases are also consonant, not just the
> >bases themselves, is this the case.
>
> That's an assumption of the triangular lattice, yes.

I thought the lattice geometries were supposed to be independent of
this whole thing, so that we could fill out your table and show every
possible combination of complexity measure and lattice geometry.

> >Anyway, this is not how Monz defines "monzos". Nor how Gene defines
> >them.
>
> Ok. What would you like me to call them? You've already rejected
> "interval vectors".

Well, they don't refer to intervals if you don't even know what the
bases of the exponents are, right?

> >Moreover, if both 3 and 9 are entries, the monzo for a given
> >ratio is not unique. But behind the whole "monzo" idea is the
> >fundamental theorem of arithmetic, and thus uniqueness. That's why I
> >haven't understood you on this point until now.
>
> "Fokker style interval vectors" with odd entries comes directly
> from Paul Hahn. I may remind you that *you're* the one insisting
> on odd limits. My code can't even handle them.

Then I believe your code can't handle any actual intervals (as in
ratios) either.

🔗Carl Lumma <ekin@lumma.org>

12/6/2005 3:15:59 PM

>> >> >In order to construct balls for other complexity measures on
>> >> >this lattice (to fill out your table), you have to know which
>> >> >one to use. For one thing.
>> >>
>> >> The 1-ball is tangent to both points, etc.
>> >
>> >The 1-ball? What does that refer to? Did you notice that I wrote
>> >*other* complexity measures? Assuming you did, and assuming you're
>> >thinking of some other complexity measure where 3 has a complexity
>> >of 1, are you saying that '3' and '9/3' belong to the ball
>> >while '3*3*3/9', for instance, doesn't?
>>
>> I wrote "etc.", meaning I intended this chart to reveal things
>> about the proposed measures/lattices, not the other way around.
>
>Huh?
>
>> >> Maybe this makes it non-convex, I don't know...
>> >
>> >If '3*3*3/9', etc., do belong to it, which seems reasonable if the
>> >acoustically identical 3 does, then the ball is non-finite.
>>
>> In the case of the unweighted Hahn choice awards, this is
>> (0 3 0 0 -1) = 3. So despite being acoustically identical, it
>> doesn't have the same distance. Which isn't immediately killing,
>> since this distance isn't meant to be only a measure of
>> psychoacoustic consonance.
>
>It doesn't seem like you're paying attention to the word "other"
>in "*other* complexity measures", Carl. I'm trying to understand how
>to fill out the table you proposed!

I don't know, man. Or I would have filled it out myself.

-Carl

🔗Carl Lumma <ekin@lumma.org>

12/6/2005 3:19:43 PM

>> >> >> If you have a monzo you *don't* need an odd-limit.
>> >> >
>> >> >Sure you do.
>> >> >
>> >> >[0 2 -1> is the "monzo" that represents 9:5.
>> >> >
>> >> >Does it have a Hahn complexity of 1 or 2? You need to
>> >> >additionally specify an odd limit in order to answer
>> >> >this question.
>> >>
>> >> That monzo is always distance 2. No limit of any kind is
>> >> required. The numbers could be exponents on any bases, so long
>> >> as they're consonant.
>> >
>> >First of all, I don't think that's right for the Hahn deal. Only
>> >if the pairwise *ratios* of these bases are also consonant, not
>> >just the bases themselves, is this the case.
>>
>> That's an assumption of the triangular lattice, yes.
>
>I thought the lattice geometries were supposed to be independent of
>this whole thing, so that we could fill out your table and show every
>possible combination of complexity measure and lattice geometry.

On the triangular lattice, the balls of a measure making this
assumption will appear round. On the rectangular lattice, I'm
hoping they don't. And vice versa.

>> >Anyway, this is not how Monz defines "monzos". Nor how Gene
>> >defines them.
>>
>> Ok. What would you like me to call them? You've already rejected
>> "interval vectors".
>
>Well, they don't refer to intervals if you don't even know what the
>bases of the exponents are, right?

They just don't map uniquely to intervals. It's a non-issue once
the bases are declared, as they would be in the table.

>> >Moreover, if both 3 and 9 are entries, the monzo for a given
>> >ratio is not unique. But behind the whole "monzo" idea is the
>> >fundamental theorem of arithmetic, and thus uniqueness. That's why I
>> >haven't understood you on this point until now.
>>
>> "Fokker style interval vectors" with odd entries comes directly
>> from Paul Hahn. I may remind you that *you're* the one insisting
>> on odd limits. My code can't even handle them.
>
>Then I believe your code can't handle any actual intervals (as in
>ratios) either.

Give me a failing case.

-Carl
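
(A minimal sketch, mine rather than anything from the thread, of the
distance rule quoted in this exchange: reading the entries of a
Fokker-style vector as exponents over a declared basis of consonances
such as (2, 3, 5, 7, 9), with the octave ignored and every pairwise
ratio of the odd bases assumed consonant -- the triangular-lattice
assumption -- the unweighted Hahn length is the larger of the total
positive and total negative exponents. The basis and function name
are illustrative assumptions.)

    def hahn_length(vec):
        # Unweighted Hahn (triangular-lattice) length of an exponent
        # vector, under the assumptions stated above.
        odd = vec[1:]                              # drop the exponent of 2
        up = sum(e for e in odd if e > 0)          # total positive exponents
        down = -sum(e for e in odd if e < 0)       # total negative exponents
        return max(up, down)

    print(hahn_length([0, 2, -1]))        # 9:5 as [0 2 -1>  ->  2
    print(hahn_length([0, 3, 0, 0, -1]))  # 3*3*3/9          ->  3

This reproduces both figures quoted above: distance 2 for [0 2 -1>
and 3 for (0 3 0 0 -1).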

🔗wallyesterpaulrus <perlich@aya.yale.edu>

12/8/2005 4:35:11 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >In order to construct balls for other complexity measures on
> >> >> >this lattice (to fill out your table), you have to know which
> >> >> >one to use. For one thing.
> >> >>
> >> >> The 1-ball is tangent to both points, etc.
> >> >
> >> >The 1-ball? What does that refer to? Did you notice that I wrote
> >> >*other* complexity measures? Assuming you did, and assuming you're
> >> >thinking of some other complexity measure where 3 has a complexity
> >> >of 1, are you saying that '3' and '9/3' belong to the ball
> >> >while '3*3*3/9', for instance, doesn't?
> >>
> >> I wrote "etc.", meaning I intended this chart to reveal things
> >> about the proposed measures/lattices, not the other way around.
> >
> >Huh?
> >
> >> >> Maybe this makes it non-convex, I don't know...
> >> >
> >> >If '3*3*3/9', etc., do belong to it, which seems reasonable if the
> >> >acoustically identical 3 does, then the ball is non-finite.
> >>
> >> In the case of the unweighted Hahn choice awards, this is
> >> (0 3 0 0 -1) = 3. So despite being acoustically identical, it
> >> doesn't have the same distance. Which isn't immediately killing,
> >> since this distance isn't meant to be only a measure of
> >> psychoacoustic consonance.
> >
> >It doesn't seem like you're paying attention to the word "other"
> >in "*other* complexity measures", Carl. I'm trying to understand
how
> >to fill out the table you proposed!
>
> I don't know, man. Or I would have filled it out myself.

I'm well aware of that, Carl! That's why I'm *trying*.

🔗wallyesterpaulrus <perlich@aya.yale.edu>

12/8/2005 4:36:57 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >> If you have a monzo you *don't* need an odd-limit.
> >> >> >
> >> >> >Sure you do.
> >> >> >
> >> >> >[0 2 -1> is the "monzo" that represents 9:5.
> >> >> >
> >> >> >Does it have a Hahn complexity of 1 or 2? You need to
> >> >> >additionally specify an odd limit in order to answer
> >> >> >this question.
> >> >>
> >> >> That monzo is always distance 2. No limit of any kind is
> >> >> required. The numbers could be exponents on any bases, so long
> >> >> as they're consonant.
> >> >
> >> >First of all, I don't think that's right for the Hahn deal. Only
> >> >if the pairwise *ratios* of these bases are also consonant, not
> >> >just the bases themselves, is this the case.
> >>
> >> That's an assumption of the triangular lattice, yes.
> >
> >I thought the lattice geometries were supposed to be independent of
> >this whole thing, so that we could fill out your table and show every
> >possible combination of complexity measure and lattice geometry.
>
> On the triangular lattice, the balls of a measure making this
> assumption will appear round. On the rectangular lattice, I'm
> hoping they don't. And vice versa.
>
> >> >Anyway, this is not how Monz defines "monzos". Nor how Gene
> >> >defines them.
> >>
> >> Ok. What would you like me to call them? You've already rejected
> >> "interval vectors".
> >
> >Well, they don't refer to intervals if you don't even know what the
> >bases of the exponents are, right?
>
> They just don't map uniquely to intervals. It's a non-issue once
> the bases are declared, as they would be in the table.
>
> >> >Moreover, if both 3 and 9 are entries, the monzo for a given
> >> >ratio is not unique. But behind the whole "monzo" idea is the
> >> >fundamental theorem of arithmetic, and thus uniqueness. That's why I
> >> >haven't understood you on this point until now.
> >>
> >> "Fokker style interval vectors" with odd entries comes directly
> >> from Paul Hahn. I may remind you that *you're* the one insisting
> >> on odd limits. My code can't even handle them.
> >
> >Then I believe your code can't handle any actual intervals (as in
> >ratios) either.
>
> Give me a failing case.

I didn't know the bases would be declared. I played hooky through
just about every Scheme class at the Columbia Science Honors Program
(SHP). Saturdays . . .
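
(A minimal sketch, mine, of the ball-shape point Carl makes above:
enumerate which 2-D exponent vectors (a, b) -- say exponents of 3 and
5 -- fall inside a radius-1 ball under the triangular-lattice measure
versus an unweighted rectangular (taxicab) one. The radius and
function names are illustrative assumptions.)

    def hahn(a, b):                 # triangular-lattice length
        up = max(a, 0) + max(b, 0)
        down = max(-a, 0) + max(-b, 0)
        return max(up, down)

    def taxicab(a, b):              # rectangular-lattice length
        return abs(a) + abs(b)

    rng = range(-2, 3)
    hahn_ball = sorted((a, b) for a in rng for b in rng if hahn(a, b) <= 1)
    taxi_ball = sorted((a, b) for a in rng for b in rng if taxicab(a, b) <= 1)

    print(hahn_ball)   # 7 points, including (1, -1) and (-1, 1), i.e. 3/5 and 5/3
    print(taxi_ball)   # 5 points, excluding them

The triangular measure's 1-ball picks up the "diagonal" steps like
5/3, so the two measures draw differently shaped balls on the same set
of lattice points.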

🔗Carl Lumma <ekin@lumma.org>

12/19/2005 1:08:55 AM

When looking for spreadsheets to test Excel replacements with
(Abykus and Gnumeric fail miserably... haven't gotten around
to testing OpenOffice yet...), I found this wonderful thing,
from Dave Keenan, in my files...

http://lumma.org/tuning/HarmonicComplexity.xls

I haven't looked closely at this (it looks like the agreement
is a little too good, considering there are octave equivalent
and octave specific things being compared), but it would seem
to satisfy one of the main purposes of the chart I proposed. . .

-Carl

🔗Carl Lumma <ekin@lumma.org>

12/20/2005 3:02:27 PM

At 01:08 AM 12/19/2005, I wrote:
>When looking for spreadsheets to test Excel replacements with
>(Abykus and Gnumeric fail miserably... haven't gotten around
>to testing OpenOffice yet...),

OpenOffice fails too, not that I really expected an excel clone
to handle a Keenan-type spreadsheet. The problem is, it's too
good an Excel clone; warts and all. Calc alone was 200 MB.
The Excel feature I was trying to forget, AutoWhatever (the one
that reformats your input no matter what you do), was present,
along with a clippy-esque pop-up help lightbulb in the lower
right...

"Ok, I'll bite." [click]
"D'oh! You found a bug..."

The open source movement seems to say, "You don't have to pay
money for crap software. We'll give you that for free!"

Imitation is the sincerest form of flattery.

-Carl

🔗Dave Keenan <d.keenan@bigpond.net.au>

12/28/2005 7:15:30 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> OpenOffice fails too, not that I really expected an excel clone
> to handle a Keenan-type spreadsheet.

Yeah. Even stock-standard Excel may fail on many of my spreadsheets --
those that use the GCD() function (as the harmonic complexity one
does). GCD() is in the optional "Analysis" pack which you need to
explicitly install (or used to in Excel 97 at least, which I'm still
using). OpenOffice may have an equivalent package containing GCD().

-- Dave Keenan
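
(A minimal sketch, mine, prompted by the GCD() dependence Dave
describes. What the spreadsheet actually uses GCD() for isn't shown
here, but a typical use -- putting a ratio into lowest terms before
measuring it -- is easy to reproduce outside Excel if the add-in is
missing. The function name is an assumption, not the spreadsheet's
own.)

    from math import gcd

    def lowest_terms(n, d):
        # Reduce the ratio n:d to lowest terms, the kind of job
        # Excel's GCD() (from the optional Analysis pack) handles.
        g = gcd(n, d)
        return n // g, d // g

    print(lowest_terms(18, 10))   # (9, 5)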

🔗Carl Lumma <ekin@lumma.org>

12/28/2005 6:46:54 PM

>> OpenOffice fails too, not that I really expected an excel clone
>> to handle a Keenan-type spreadsheet.
>
>Yeah. Even stock-standard Excel may fail on many of my spreadsheets --
>those that use the GCD() function (as the harmonic complexity one
>does). GCD() is in the optional "Analysis" pack which you need to
>explicitly install (or used to in Excel 97 at least, which I'm still
>using). OpenOffice may have an equivalent package containing GCD().

I explicitly installed that on Excel 2003, which I'm currently
using again (sigh). The failures of the clones were more severe
than missing GCD, though.

-Carl

🔗Carl Lumma <ekin@lumma.org>

1/19/2006 11:48:49 PM

Any comment on this? I was a little surprised these different
measures agreed so well.

-Carl

At 01:08 AM 12/19/2005, I wrote:
> ... I found this wonderful thing,
>from Dave Keenan, in my files...
>
>http://lumma.org/tuning/HarmonicComplexity.xls
>
>I haven't looked closely at this (it looks like the agreement
>is a little too good, considering there are octave equivalent
>and octave specific things being compared), but it would seem
>to satisfy one of the main purposes of the chart I proposed. . .
>
>-Carl

🔗wallyesterpaulrus <perlich@aya.yale.edu>

1/20/2006 2:06:41 AM

How is

"Erlich's log of odd limit" "also" "odd triangular lattice log
shortest path"?

The two are not the same, but you have them in the same column.

The former seems to be Kees complexity, while the latter would be
weighted Hahn. As you know, there is a lattice where Kees complexity
corresponds to a kind of distance measure, but it's not the length of
a path that follows the rungs of the lattice (in fact, this lattice
is best considered to have no rungs at all) . . .

Thanks for bringing this up again, and let's make this a better
spreadsheet!

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> Any comment on this? I was a little surprised these different
> measures agreed so well.

Not particularly.

>
> -Carl
>
> At 01:08 AM 12/19/2005, I wrote:
> > ... I found this wonderful thing,
> >from Dave Keenan, in my files...
> >
> >http://lumma.org/tuning/HarmonicComplexity.xls
> >
> >I haven't looked closely at this (it looks like the agreement
> >is a little too good, considering there are octave equivalent
> >and octave specific things being compared),

On different graphs!

> > but it would seem
> >to satisfy one of the main purposes of the chart I proposed. . .

Cool.
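
(A minimal sketch, mine, of the first of the two measures Paul
distinguishes above -- "Erlich's log of odd limit", i.e. Kees
expressibility of a ratio n/d: strip the factors of 2 and take log
base 2 of the larger odd part. The weighted-Hahn "odd triangular
lattice log shortest path" is a different quantity and is not computed
here. Function names are illustrative.)

    from math import log2

    def odd_part(x):
        # Remove all factors of 2.
        while x % 2 == 0:
            x //= 2
        return x

    def kees_expressibility(n, d):
        # log2 of the larger of the two odd parts of n and d.
        return log2(max(odd_part(n), odd_part(d)))

    print(kees_expressibility(9, 5))   # log2(9) ~= 3.17
    print(kees_expressibility(3, 1))   # log2(3) ~= 1.58

A shortest-path length along the rungs of the odd triangular lattice
need not equal this number, which seems to be Paul's point about the
two not belonging in one column.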

🔗Carl Lumma <ekin@lumma.org>

1/20/2006 11:18:52 AM

>How is
>
>"Erlich's log of odd limit" "also" "odd triangular lattice log
>shortest path"?
>
>The two are not the same, but you have them in the same column.

Dave made this spreadsheet, not me!

>> > ... I found this wonderful thing,
>> >from Dave Keenan, in my files...
>> >
>> >http://lumma.org/tuning/HarmonicComplexity.xls
>> >
>> >I haven't looked closely at this (it looks like the agreement
>> >is a little too good, considering there are octave equivalent
>> >and octave specific things being compared),
>
>On different graphs!

?

-Carl

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/6/2006 11:00:27 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >How is
> >
> >"Erlich's log of odd limit" "also" "odd triangular lattice log
> >shortest path"?
> >
> >The two are not the same, but you have them in the same column.
>
> Dave made this spreadsheet, not me!
>
> >> > ... I found this wonderful thing,
> >> >from Dave Keenan, in my files...
> >> >
> >> >http://lumma.org/tuning/HarmonicComplexity.xls
> >> >
> >> >I haven't looked closely at this (it looks like the agreement
> >> >is a little too good, considering there are octave equivalent
> >> >and octave specific things being compared),
> >
> >On different graphs!
>
> ?

They're not being compared with each other because they're on
different graphs. Only on a given graph are different things being
compared, not between one graph and another.

🔗Carl Lumma <ekin@lumma.org>

2/6/2006 7:41:27 PM

>> >How is
>> >
>> >"Erlich's log of odd limit" "also" "odd triangular lattice log
>> >shortest path"?
>> >
>> >The two are not the same, but you have them in the same column.
>>
>> Dave made this spreadsheet, not me!
>>
>> >> > ... I found this wonderful thing,
>> >> >from Dave Keenan, in my files...
>> >> >
>> >> >http://lumma.org/tuning/HarmonicComplexity.xls
>> >> >
>> >> >I haven't looked closely at this (it looks like the agreement
>> >> >is a little too good, considering there are octave equivalent
>> >> >and octave specific things being compared),
>> >
>> >On different graphs!
>>
>> ?
>
>They're not being compared with each other because they're on
>different graphs.

I have no idea what you're referring to. This is not my spreadsheet,
but it clearly compares several complexity measures on a single
graph, and the compared measures agree very well, and that is all
I was referring to.

-Carl

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/10/2006 4:27:18 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >> >How is
> >> >
> >> >"Erlich's log of odd limit" "also" "odd triangular lattice log
> >> >shortest path"?
> >> >
> >> >The two are not the same, but you have them in the same column.
> >>
> >> Dave made this spreadsheet, not me!
> >>
> >> >> > ... I found this wonderful thing,
> >> >> >from Dave Keenan, in my files...
> >> >> >
> >> >> >http://lumma.org/tuning/HarmonicComplexity.xls
> >> >> >
> >> >> >I haven't looked closely at this (it looks like the agreement
> >> >> >is a little too good, considering there are octave equivalent
> >> >> >and octave specific things being compared),
> >> >
> >> >On different graphs!
> >>
> >> ?
> >
> >They're not being compared with each other because they're on
> >different graphs.
>
> I have no idea what you're referring to. This is not my spreadsheet,
> but it clearly compares several complexity measures on a single
> graph, and the compared measures agree very well, and that is all
> I was referring to.
>
> -Carl

You said "it looks like the agreement is a little too good,
considering there are octave equivalent and octave specific things
being compared." I simply noted that these two kinds of things are
*not* being compared on a single graph.

🔗Carl Lumma <ekin@lumma.org>

2/10/2006 5:39:19 PM

>> >> >> > http://lumma.org/tuning/HarmonicComplexity.xls
//
>You said "it looks like the agreement is a little too good,
>considering there are octave equivalent and octave specific things
>being compared." I simply noted that these two kinds of things are
>*not* being compared on a single graph.

Oh!

-Carl