[Keenan:]

>>Certainly the next step is to get the complexity of a ratio from its

>>numerator and denominator, but I'm unclear whether to multiply them, or

>>take the maximum value (as is normally done with odd or prime limits).

[Lumma:]

>In my experience their product can be very useful. I learned this from

>Denny Genovese, who calls it the DF. The idea is based on Partch's

>(perhaps dubious) assumption that the length of the period of the

>composite wave determines its consonance. ...

>I'd like to learn more about how this approach overlaps with, or

>contradicts, the maximum value approach.

If linear complexity measures are multiplied, logarithmic ones should add.

Max value can be done for both, but clearly it throws away some

information. Just another point on the accuracy vs. user-friendliness

tradeoff.
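The two options, and the linear/logarithmic correspondence, can be sketched quickly (the function names here are mine, not established terms):

```python
from math import log2

def product_complexity(n, d):
    """Linear complexity of a ratio n/d in lowest terms: the product
    n*d (Genovese's 'DF'-style measure)."""
    return n * d

def max_complexity(n, d):
    """Linear complexity taken as the maximum term, in the spirit of
    odd- or prime-limit measures."""
    return max(n, d)

# On a log scale, multiplication becomes addition; max stays max.
n, d = 3, 2
assert abs(log2(product_complexity(n, d)) - (log2(n) + log2(d))) < 1e-12
assert log2(max_complexity(n, d)) == max(log2(n), log2(d))
```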

[Keenan:]

>>If you multiply them you should then take the square-root (i.e. you should

>>find the geometric mean). This is to keep them commensurate with the

>>odd-limit or prime-limit where the max value is taken.

[Lumma:]

>I don't see how taking the square root hurts anything, but I don't see what

>it adds either. What do you mean by "commensurate" with the max value

>approaches?

No it doesn't hurt or add, but instead of being a linear measure it becomes

a quadratic one. "Commensurate" means literally "belonging to the same

system of measurement". Hertz are not commensurate with cents, but semitones

or schismas are. More specifically I mean that you can directly compare two

measures (metrics) with at most a constant multiplier required to bring

them into some kind of agreement.

[Monzo:]

>Since the ratio can be represented by the

>same series you use in your formula (the prime

>factors), it seems to me that you should just

>use negative exponents for the factors of the

>denominator, and multiply all the factors together.

Sure. Good idea. Just have to take absolute values of exponents when

calculating complexities.
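Monzo's suggestion, sketched in Python (the helper names are mine):

```python
from collections import Counter

def factor(m):
    """Prime factorisation of a positive integer as {prime: exponent}."""
    c, p = Counter(), 2
    while m > 1:
        while m % p == 0:
            c[p] += 1
            m //= p
        p += 1
    return c

def signed_exponents(n, d):
    """Prime-exponent vector of n/d: positive exponents for factors of
    the numerator, negative for the denominator, as Monzo suggests."""
    exps = factor(n)
    exps.subtract(factor(d))
    return {p: e for p, e in exps.items() if e}

# 15/8 = 2^-3 * 3 * 5; complexity calculations then use |exponent|.
assert signed_exponents(15, 8) == {2: -3, 3: 1, 5: 1}
assert sum(abs(e) for e in signed_exponents(15, 8).values()) == 5
```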

[Wolf:]

>In chapter 5 of his _Divisions of the Tetrachord_ (Frog Peak 1993), John

>Chalmers gives an excellent summary of methods for the analysis of

>tetrachords which might be usefully applicable to individual intervals and

>systems other than tetrachords. The presentation of Klarenz Barlow's

>indigestibility formula is, however, not so useful out of the context of

>Bawrlough's landmark _Bus Journey to Parametron_ (Feedback Papers 21-23,

>available in the states from Frog Peak).

I'm not sure what's fact and what's fiction here :-) but thanks for the

formulae.

[Wolf:]

>(5) Euler's _gradus suavitatis_ function:

>

> (a) for prime numbers: the prime number itself

>

> (b) for composite numbers:

> the sum of the prime factors minus one less than

> the number of factors.

>

> (c) for ratios:

> convert a ratio to a segment of the harmonic series, then

> compute the least common multiple of the terms.

I don't understand part (c) above. What does it mean to "convert a ratio to

a segment of the harmonic series"? I just think of the harmonic series as

the series of whole numbers (positive integers).
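For what it's worth, parts (a) and (b) amount to the usual statement of Euler's function, and the dyadic reading of part (c) below, via the least common multiple, is my guess at what Wolf means:

```python
from math import gcd

def gradus(n):
    """Euler's gradus suavitatis of a positive integer:
    1 + sum of (p - 1) over prime factors counted with multiplicity.
    For a prime p this gives p itself, matching part (a)."""
    total, p = 1, 2
    while n > 1:
        while n % p == 0:
            total += p - 1
            n //= p
        p += 1
    return total

def gradus_ratio(a, b):
    """One reading of part (c) for a dyad a:b -- take the gradus of the
    LCM of the terms; for coprime terms that is just a*b."""
    g = gcd(a, b)
    a, b = a // g, b // g
    return gradus(a * b)

assert gradus(7) == 7            # prime: the prime itself
assert gradus(60) == 9           # 60 = 2^2 * 3 * 5: 1 + 2*1 + 2 + 4
assert gradus_ratio(3, 2) == 4   # gradus(6)
```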

[Keenan:]

>> I've switched to referring

>> to them as "prime exponent weights" now.

>> I suppose they represent something about

>> how the human brain processes combinations

>> of tones. The relative importance human evolution

>> has given to the various primes. ...

[Monzo:]

>This is very interesting to me. I've already concluded

>regarding the study of meters that we break all meters

>down into simpler and simpler subdivisions, until

>ultimately everything can be expressed by combinations

>of 2s and 3s.

...

>I'm certain that our difficulty of understanding

>more complex meters as anything but 2s and 3s

>has a lot to do with "The relative importance human

>evolution has given to the various primes". We

>can comprehend 1, 2 or 3 of anything *right away*,

>and from my knowledge of how evolution works,

>speed of recognition or comprehension ranks

>near the top of the list of importance.

I wish to make it clear that I don't expect evolution to have given the

same relative weights to the primes in relation to rhythm (or cake cutting)

as it did in regard to frequency ratios.

[Monzo:]

>With your original approximated weights

>(0.3, 0.8, 0.9, 1.0, 1.0, ...), which I thought were

>OK at first, it's evident that there is a sharp

>increase in complexity after 2. Perhaps with

>a different weighting your formula is a mathematical

>validation and explanation of my idea.

Since you are free to choose the weights to fit your own judgement it can't

be a validation or explanation of that judgement, merely a description of

it. But tell us what weights you favour and we can compare and maybe home

in on a rough agreement.

Please use the generalised/parameterised Barlow-type formula I mentioned in

my previous post to this thread (i.e. multiply weights by absolute

exponents then add them up), or at least make it clear which version the

weights are for (including whether log or linear).

Actually, this could more simply be called a parameterised Wilson's

harmonic complexity (rather than "the absolute value of the reciprocal of a

parameterised Barlow's harmonicity"). Wilson's choice of weights

corresponds to the primes themselves, except for 2 whose weight is zero.
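The generalised formula, as I've described it (multiply weights by absolute exponents, then add), can be sketched like this; the weight function shown is Wilson's choice, and the helper names are mine:

```python
def factor(m):
    """Prime factorisation of a positive integer as {prime: exponent}."""
    f, p = {}, 2
    while m > 1:
        while m % p == 0:
            f[p] = f.get(p, 0) + 1
            m //= p
        p += 1
    return f

def harmonic_complexity(n, d, weight):
    """Parameterised Wilson/Barlow-type complexity of n/d:
    sum of weight(p) * |exponent of p| over the factorisation."""
    exps = factor(n)
    for p, e in factor(d).items():
        exps[p] = exps.get(p, 0) - e
    return sum(weight(p) * abs(e) for p, e in exps.items())

# Wilson's weights: the prime itself, except 2 which weighs nothing.
wilson = lambda p: 0 if p == 2 else p

assert harmonic_complexity(9, 8, wilson) == 6   # two 3s, weight 3 each
assert harmonic_complexity(3, 2, wilson) == 3
```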

>> Give me a better term. [Keenan]

>

>"Prime importance"? [Monzo]

"Prime importance" could be used instead of "prime exponent weight", but I

don't think it could be used instead of "harmonic complexity" (of a ratio).

I now favour "harmonic complexity" over "musical complexity". It reminds us

it only works for (near) harmonic timbres and "musical" was too broad since

it could apply to rhythm as well. (Thanks for mentioning rhythm).

[Monzo:]

>I gave it a weight of as low as .05 before it looked

>like something I agreed with, but the weight of 3

>was much lower also - I don't remember what now.

>

>As I said above, my inclination would be to have

>low weights for 2 and 3 and then increase sharply

>after that. What do you think?

I might well agree with such a weighting. Try me.

[Monzo:]

>How about the

>question of 15? I've always thought that 15/8

>is a pretty consonance in a chord - is it more,

>or less, consonant than 7/4?

Ah but we're not (yet) talking about "in a chord". I agree with Paul Erlich

that the 15/8 just happens to appear in a chord with other highly consonant

intervals, but anyway, on its own I think 15/8 is more dissonant than 7/4.

>[Monzo (replying to Erlich):]

>OK, Paul, you got me there. So apparently the

>effect of octave-equivalence has more to do with

>prime-base 2 having an extremely low weight

>in Keenan's "musical complexity" prime-weights

>formula than anything else. But to account

>for the fact of octave-equivalency when no

>other prime equivalency occurs anywhere near

>as strongly, the curve of the weights must be

>very low at 2 then rise sharply for 3 and above.

I agree. You might say that the second harmonic so often accompanied a

fundamental that it carried very little additional information. It has very

low importance. But why *prime* importance? Why is 4 (and 8 and 16) also so

unimportant? Maybe it was just an accident of economical "wiring" that

treats four as a power of two. Or maybe the 4th harmonic is simply treated

as the 2nd harmonic of the 2nd harmonic.

[Wolf:]

>Although I cannot invite the list to dinner in Cologne, I propose that we

>duplicate the experiment. Since most of us on the list are into higher

>primes, let's try all divisions through 19. Please send me, off list,

>your own ranking of the difficulties in slicing tortes (cakes, pies) from 2

>to 19 equal parts. I'll compile the results and report back.

As I said, I don't think one can draw any musical conclusions from this,

but it sounds like fun and will be interesting to compare.

[Morrison:]

> Am I correct in inferring that, by how much they "cost", you mean how

> much taking it out affects the high-level harmonic character?

Yes. Or equivalently how much we over or underestimate the dissonance of an

interval by ignoring *how many* factors of 2 (or 3 etc.) it has.

Regards,

-- Dave Keenan

http://dkeenan.com

Message text written by Dave Keenan:

>I'm not sure what's fact and what's fiction here :-) but thanks for the

>formulae.

In case you're wondering, the spelling of Barlow's name is highly variable.

Although he has lived in Europe for the past 30 years, he comes from the

Anglophone minority community in Calcutta, and I suspect that he intends

this as a humorous slant on both Indian English orthography and the

decaying remains of colonialism. To the best of my knowledge, everything in

the passage you cited was true.

[Erlich:]

>P.S. Dave: I don't like it if 15/8 is given the same complexity as 6/5,

>assuming factors of 2 are ignored.

No. Nor do I. But first notice that I'm specifically saying that factors of

2 should *not* be ignored, just given a low weight. But no matter what the

weighting of 2, 15/4 would have the same complexity as 12/5. I certainly

don't think they have the same dissonance (15/4 is greater). So maybe

you've just blown away all formulae where we ignore whether various factors

are on the same or different sides of the vinculum (the line between

numerator and denominator). If so, good work.

Maybe the tolerance function can take care of it. But it seems unlikely.

Can you establish that it won't?

More cases which would have the same complexity under any such scheme (same

complexity on the same line).

6/1 3/2

10/1 5/2

12/1 4/3

14/1 7/2

15/1 5/3 *

18/1 9/2

20/1 5/4

21/1 7/3 *

22/1 11/2

24/1 8/3

26/1 13/2

28/1 7/4

30/1 15/2 10/3 6/5 *

33/1 11/3 *

34/1 17/2

35/1 7/5 *

36/1 9/4

38/1 19/2

39/1 13/3

40/1 8/5

42/1 21/2 14/3 7/6 *

44/1 11/4

45/1 9/5 *

46/1 23/2

48/1 16/3

50/1 25/2

51/1 17/3

52/1 13/4

54/1 27/2

55/1 11/5 *

56/1 8/7 *

58/1 29/2

60/1 15/4 20/3 12/5 *

:

120/1 15/8 40/3 24/5 *

Those where only powers of two change sides are not too objectionable. Nor

are those that involve primes higher than 11, since the tolerance function

should take care of them. I've flagged the remaining ones with *.
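The table above can be generated mechanically: every way of distributing a number's prime powers across the vinculum yields the same absolute exponent vector, hence the same complexity under any such scheme. A sketch (function names are mine):

```python
from itertools import product

def factor(m):
    """Prime factorisation of a positive integer as {prime: exponent}."""
    f, p = {}, 2
    while m > 1:
        while m % p == 0:
            f[p] = f.get(p, 0) + 1
            m //= p
        p += 1
    return f

def same_complexity_family(N):
    """All ratios n/d >= 1 sharing N's absolute prime-exponent vector,
    i.e. every way of splitting N's prime powers between numerator
    and denominator."""
    primes = list(factor(N).items())
    out = set()
    for signs in product((1, -1), repeat=len(primes)):
        n = d = 1
        for (p, e), s in zip(primes, signs):
            if s > 0:
                n *= p ** e
            else:
                d *= p ** e
        if n >= d:
            out.add((n, d))
    return sorted(out, reverse=True)

# Reproduces the "30/1 15/2 10/3 6/5" line of the table.
assert same_complexity_family(30) == [(30, 1), (15, 2), (10, 3), (6, 5)]
```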

Surely oo/1 is as consonant as 1/1. ("oo" is meant to look like the lazy-8

infinity symbol). I'd even say n/1, where n>16, are almost as consonant as

1/1. Can we arrange for the tolerance function (e.g. Harmonic Entropy) to

give that result?

But it doesn't blow away all formulae based on separate prime

factorisations of numerator and denominator. There must be ways of

combining them that avoid this problem. So Paul (or anyone), how might we

modify odd-limit to include some consideration of 2's? e.g. so 6/5 is not

forced to have the same complexity as 5/3 (or 3/2 same as 4/3). (I think

Dan Wolf asked you the same question).

Do we agree that the superparticular series 2/1, 3/2, 4/3, 5/4, 6/5, 7/6,

8/7, 9/8, 10/9, 11/10 ... is progressively more dissonant until it gets

close enough to 1/1 for the tolerance function to take over? If someone

were to ask me "What is dissonance?". I don't think I could do better than

to say "It is that quality of the sound (independent of pitch and loudness)

which you hear increasing as you go up this series of just intervals

(assuming a near harmonic timbre)".

Here are more series of intervals which IMHO increase in dissonance

(until getting too near some low complexity interval). Since the tolerance

function can take care of the "until too near ..." bit, these can be taken

as unlimited series of increasing *complexity*. I've arbitrarily stayed

within 2 octaves.

2/1, 3/2, 4/3, 5/4, 6/5, 7/6, 8/7, 9/8, 10/9, 11/10 ... until too near 1/1

3/2, 5/3, 7/4, 9/5, 11/6, 13/7, 15/8 ... until too near 2/1

3/1, 5/2, 7/3, 9/4, 11/5, 13/6, 15/7 ... until too near 2/1

2/1, 5/2, 8/3, 11/4, 14/5, ... until too near 3/1

4/1, 7/2, 10/3, 13/4, 16/5, ... until too near 3/1

3/1, 7/2, 11/3, 15/4, ... until too near 4/1

1/1, 4/3, 7/5, 10/7, 13/9, ... until too near 3/2

2/1, 5/3, 8/5, 11/7, 14/9, ... until too near 3/2

2/1, 7/3, 12/5, ... until too near 5/2

3/1, 8/3, 13/5, ... until too near 5/2

3/2, 7/5, 11/8, ... until too near 4/3

1/1, 5/4, 9/7, ... until too near 4/3

3/2, 8/5, 13/8, ... until too near 5/3

2/1, 7/4, 12/7, ... until too near 5/3

4/3, 9/7, ... until too near 5/4

1/1, 6/5, 11/9, ... until too near 5/4

3/1, 10/3, ... until too near 7/2

4/1, 11/3, ... until too near 7/2

Anyone disagree with any of these? Next we need to decide how they interleave.
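The series above all appear to be built by repeatedly taking the mediant of the current ratio with the "until too near" target; here's a sketch under that assumption:

```python
from fractions import Fraction

def mediant_series(start, target, terms):
    """Repeatedly take the mediant (a+c)/(b+d) of the current ratio a/b
    with the target ratio c/d -- the construction the series of
    increasing complexity appear to follow."""
    a, b = start
    c, d = target
    seq = []
    for _ in range(terms):
        seq.append(Fraction(a, b))
        a, b = a + c, b + d
    return seq

# 2/1, 3/2, 4/3, 5/4, 6/5, ... closing in on 1/1
assert mediant_series((2, 1), (1, 1), 5) == [
    Fraction(2, 1), Fraction(3, 2), Fraction(4, 3),
    Fraction(5, 4), Fraction(6, 5)]

# 3/2, 5/3, 7/4, 9/5, ... closing in on 2/1
assert mediant_series((3, 2), (2, 1), 4) == [
    Fraction(3, 2), Fraction(5, 3), Fraction(7, 4), Fraction(9, 5)]
```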

[Rosati:]

> During a discussion with Paul Erlich awhile back, I mentioned that 13/8

> sounded more consonant to me than 11/8. I attributed this to 11/8 falling in

> the tritone "hump", while 13/8 sits between 5/3 and 8/5. After Paul gave us

> David Canright's new web address I was reading his "Tour up the Harmonic

> Series" (http://www.mbay.net/~anne/david/harmser/index.htm) and noticed that

> he also hears 13/8 as more consonant than 11/8. So, I was wondering how

> other distinguished ears in this forum might weigh in on this and what

> implications it might have for theories of odd-limit (or prime limit, for

> that matter) consonance indexing.

I don't think it has any implications for any *complexity* measure, since

11/8 and 13/8 are in regions where the lower complexity of notes either

side will dominate, i.e. this result will be taken care of by the tolerance

or blurring function. I agree totally with what you said. The dissonance

hump between 4/3 and 7/5 (where 11/8 is) is higher than the one between 8/5

and 5/3 (where 13/8 is).

[Keenan:]

>>The second function has been called tolerance. This is some kind of

>>blurring function where the dissonance of a complex ratio will depend more

>>on its proximity to nearby simpler ratios, thus limiting the significance

>>of higher primes.

[Erlich replied:]

>Have you noticed that this argument is far more valid if you replace

>"primes" with "odds" at the end of the sentence,

Actually, I'd prefer to just say "limiting the significance of higher

*numbers*", whether they be prime, odd or otherwise.

[Erlich:]

>showing that a strict

>lattice approach like all those we've been discussing is much more

>likely to be meaningful if we restrict ourselves to, say, the

>11-odd-limit, than if we restrict ourselves to the 11-prime-limit, the

>7-prime-limit, the 5-prime-limit, or even the 3-prime-limit?

I didn't think we needed to limit the complexity measure in this way

because the tolerance function should take care of it in a less arbitrary

way. Isn't what-you-are-talking-about a more user-friendly but less

accurate way of modelling tolerance? Certainly a useful point in that

tradeoff, and I agree odd-limit is much better than prime-limit for this

purpose of "limiting the limit".

[Erlich:]

>The 11-odd-limit seems about right given ideal conditions for pitch

>discrimination.

I agree. And before anyone jumps on us, remember we're talking about bare

intervals (not in chords).

[Erlich:]

>And again, a triangular lattice would be more appropriate.

So what does the corresponding equation (or algorithm) look like, to get

complexity from n/d, without ignoring factors of 2?

I'm convinced that n+d isn't a good enough complexity measure *in chords*,

but remind me what is wrong with it for intervals?

Regards,

-- Dave Keenan

http://dkeenan.com

Dave Keenan wrote,

>So maybe

>you've just blown away all formulae where we ignore whether various factors

>are on the same or different sides of the vinculum (the line between

>numerator and denominator). If so, good work.

Thanks. I like to blow things away.

>Surely oo/1 is as consonant as 1/1. ("oo" is meant to look like the lazy-8

>infinity symbol). I'd even say n/1, where n>16, are almost as consonant as

>1/1. Can we arrange for the tolerance function (e.g. Harmonic Entropy) to

>give that result?

Actually, Harmonic Entropy already works that way! Some time ago I

posted an old derivation of mine that the certainty with which an

interval is perceived as a just ratio is inversely proportional to the

DENOMINATOR of the ratio (when ratios are expressed with a larger

numerator than denominator). This derivation is a corrected version of

one originally done by Van Eck and is based on the exact same model I've

used for all the harmonic entropy work I've described so far (including

some graphs I just sent to Joe Monzo). Now notice that the odd limit is

an upper bound for the denominator, and you'll see why I find the odd

limit useful even in octave-specific situations!

>But it doesn't blow away all formulae based on separate prime

>factorisations of numerator and denominator. There must be ways of

>combining them that avoid this problem. So Paul (or anyone), how might we

>modify odd-limit to include some consideration of 2's? e.g. so 6/5 is not

>forced to have the same complexity as 5/3 (or 3/2 same as 4/3). (I think

>Dan Wolf asked you the same question).

Use just the denominator. That's based on the derivation I described

above. One assumption in that derivation is that the pitch of the upper

note is fixed when comparing different intervals. If instead you fix the

center of the interval, I think my derivation can be modified to show

that the certainty is inversely proportional to the sum of the numerator

and the denominator. According to you, that sum models critical-band

roughness very well up to ratios of 17, so it seems _both_ components of

dissonance can be modeled by the sum of the numerator and the

denominator!

>>showing that a strict

>>lattice approach like all those we've been discussing is much more

>>likely to be meaningful if we restrict ourselves to, say, the

>>11-odd-limit, than if we restrict ourselves to the 11-prime-limit, the

>>7-prime-limit, the 5-prime-limit, or even the 3-prime-limit?

>I didn't think we needed to limit the complexity measure in this way

>because the tolerance function should take care of it in a less arbitrary

>way. Isn't what-you-are-talking-about a more user-friendly but less

>accurate way of modelling tolerance?

Yes. I have a more explicit provision for tolerance in this context

which I'll dig up soon.

>Certainly a useful point in that

>tradeoff, and I agree odd-limit is much better than prime-limit for this

>purpose of "limiting the limit".

Yup.

>>And again, a triangular lattice would be more appropriate.

>So what does the corresponding equation (or algorithm) look like, to get

>complexity from n/d, without ignoring factors of 2?

If you mean in a lattice context, I haven't thought about

octave-specific lattices. As for the usual octave-invariant lattice, it

appears Paul Hahn found an error in his original algorithm, but I'm sure

if anyone can correct it, he can.

>I'm convinced that n+d isn't a good enough complexity measure *in chords*,

>but remind me what is wrong with it for intervals?

Nothing!

[Paul Erlich:]

>Re-reading Dave's e-mail to me, I see that the sum [of numerator and

>denominator] only models

>critical-band roughness well if the _sum_ is 17 or less. So intervals

>like 11/8 and 13/8 are already too complex for this type of formula, as

>they are too close to other ratios of similar complexity to be clearly

>distinguished from them.

Agreed.

>Note that my harmonic entropy model, to the

>accuracy that I've evaluated it so far, did not show local minima at

>11/8 or 13/8, but there was a very tiny one at 11/6.

Neato!

I'd be interested to see your harmonic entropy applied not to a Farey

series but to all ratios n/d, where n>d, n+d <= N, where N is large, say 40

or 80. Will this still predict low dissonance for n/1 where n>=16?

Regards,

-- Dave Keenan

http://dkeenan.com

On Thu, 11 Mar 1999, Paul H. Erlich wrote:

> As for the usual octave-invariant lattice, it

> appears Paul Hahn found an error in his original algorithm, but I'm sure

> if anyone can correct it, he can.

I'm touched by your faith in me. In any case, I have in fact found a

simpler algorithm, though I think Manuel has beaten me to it already,

since it sounds like he has already coded it up. Nevertheless, here it

is, for the curious:

Given a Fokker-style interval vector (I1, I2, . . . In):

1. Go to the rightmost nonzero exponent; add the product of its

absolute value with the log of its base to the total.

2. Use that exponent to cancel out as many exponents of the opposite

sign as possible, starting at its immediate left and working leftward;

discard anything remaining of that exponent.

Example: starting with, say, (4 2 -3), we would add 3 lg(7) to

our total, then cancel the -3 against the 2, then the remaining

-1 against the 4, leaving (3 0 0). OTOH, starting with

(-2 3 5), we would add 5 lg(7) to our total, then cancel 2 of

the 5 against the -2 and discard the remainder, leaving (0 3 0).

3. If any nonzero exponents remain, go back to step one, otherwise

stop.

To illustrate on the list of intervals Manuel culled from an earlier

post of mine:

225:224 (2 2 -1):

1st iteration lg(7); (2 1 0)

2nd iteration lg(7) + lg(5); (2 0 0)

3rd iteration lg(7) + lg(5) + 2 lg(3);(0 0 0) Done.

126:125 (2 -3 1):

1st iteration lg(7); (2 -2 0)

2nd iteration lg(7) + 2 lg(5); (0 0 0) Done.

128:125 (0 -3):

1st iteration 3 lg(5); (0 0) Done.

81:80 (4 -1):

1st iteration lg(5); (3 0)

2nd iteration lg(5) + 3 lg(3); (0 0) Done.

64:63 (-2 0 -1):

1st iteration lg(7); (-2 0 0)

2nd iteration lg(7) + 2 lg(3); ( 0 0 0) Done.

50:49 (0 2 -2):

1st iteration 2 lg(7); (0 0 0) Done.
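For the curious, the three steps can be sketched in Python (the function name and the use of base-2 logs are my choices; the vector is assumed to use the odd bases 3, 5, 7, ... left to right, as in the examples above):

```python
from math import log2

def hahn_complexity(vec, bases=(3, 5, 7)):
    """City-block complexity of a Fokker-style interval vector.
    Step 1: add |rightmost nonzero exponent| * lg(its base) to the total.
    Step 2: cancel that exponent against opposite-sign exponents to its
            left, nearest first; discard any remainder.
    Step 3: repeat until all exponents are zero."""
    v = list(vec)
    total = 0.0
    while any(v):
        i = max(k for k, e in enumerate(v) if e)   # rightmost nonzero
        e = v[i]
        total += abs(e) * log2(bases[i])
        v[i] = 0
        remaining = abs(e)
        for j in range(i - 1, -1, -1):             # work leftward
            if remaining == 0:
                break
            if v[j] * e < 0:                       # opposite sign only
                c = min(remaining, abs(v[j]))
                v[j] += c if v[j] < 0 else -c      # move v[j] toward zero
                remaining -= c
    return total

# 225:224 -> (2 2 -1): lg(7) + lg(5) + 2 lg(3), as in the worked example.
assert abs(hahn_complexity((2, 2, -1))
           - (log2(7) + log2(5) + 2 * log2(3))) < 1e-9
```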

--pH <manynote@lib-rary.wustl.edu> http://library.wustl.edu/~manynote

O

/\ "How about that? The guy can't run six balls,

-\-\-- o and they make him president."

NOTE: dehyphenate node to remove spamblock. <*>

Dear tuning folk,

As promised, I've put up a spreadsheet with a chart comparing 10 different

complexity measures for all ratios n/d up to n+d=21.

http://dkeenan.com/Music/HarmonicComplexity.xls 174kB

Unfortunately the complexity comparison is not complete. I need a third

opinion on how Euler's totient function and gradus suavitatis apply to

ratios. Also, for whole numbers, is the totient function equal to the prime

(or one less) in the case of primes?

I'll also take requests to include other dyadic complexity measures.

Remember, complexity alone is not dissonance. You need tolerance when

complexity is high. :-)

I've also put up my spreadsheet implementation of Sethares' dissonance

curve (from timbre) algorithm. Thanks Bill. Unfortunately the number of

points it can plot is limited to 28 so you have to zoom in on areas of

interest by changing the start point and reducing the step size.

http://dkeenan.com/Music/SetharesDissonance.xls 1.5MB

Regards,

-- Dave Keenan

http://dkeenan.com

I've thought about it and I don't agree with the algorithm Manuel and

Paul Hahn have come up with for computing the city-block metric on the

triangular lattice. The problem comes down, as usual, to an

over-reliance on primes. For instance, the algorithm makes 11:10 a

single step of length lg(11), while 9:5 is a step of length lg(3) plus a

step of lg(5), which is longer. But if you have an 11-axis in the

lattice, then that means you're considering 11-limit intervals

consonant, so you should also consider 9-limit intervals consonant, and

you should have a 9-axis too. That would make 9:5 a single step of

length lg(9).

Clearly in this lattice 9:5 occurs in two different places. But, Paul

Hahn himself wrote,

>If you're an odd-limit proponent such as myself, things get a little

>complicated at the 9-limit and above, since 9 is composite, and

>odd-factorization does not necessarily yield unique results. However,

>the minimum complexity is achieved by assigning the 9 exponent as large

>as possible, and the 3 exponent 0 or 1 as appropriate.

So it appears he agreed with me to begin with, then reneged (or

forgot). Another way to think about it is that the lattice with prime

axes is the basic construct, so every pitch appears only once, but then

you have "wormholes" that are shorter than the apparent length for intervals

like 9:5.

The correct algorithm is of course much simpler than the last one Paul

H. described. It is: remove all factors of two, then take the log of the

denominator or the numerator, whichever is larger.
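As a sketch (the function name is mine):

```python
from math import log2

def erlich_complexity(n, d):
    """Erlich's proposal: remove all factors of two from numerator and
    denominator, then take the log of whichever is larger."""
    while n % 2 == 0:
        n //= 2
    while d % 2 == 0:
        d //= 2
    return log2(max(n, d))

assert erlich_complexity(9, 5) == log2(9)
# 225:224 reduces to 225:7, giving lg(225) = 2 lg(3) + 2 lg(5).
assert abs(erlich_complexity(225, 224) - log2(225)) < 1e-12
```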

On Fri, 12 Mar 1999, Paul H. Erlich wrote:

> I've thought about it and I don't agree with the algorithm Manuel and

> Paul Hahn have come up with for computing the city-block metric on the

> triangular lattice. [snip]

> So it appears he agreed with me to begin with, then reneged (or

> forgot).

I do wish you wouldn't use such emotionally charged words, especially in

view of the fact that I have neither "reneged" nor forgotten.

Everything I've written on this subject so far works using vectors

including separate exponents for the odd composites, provided you

reduce them as I described in an earlier article.

> The correct algorithm is of course much simpler than the last one Paul

> H. described. It is: remove all factors of two, then take the log of the

> denominator or the numerator, whichever is larger.

No, it isn't. Consider 225:224, for example: your way (and with my

uncorrected algorithm for the weighted version) the complexity is

2 lg(3) + 2 lg(5), when in fact the correct version, if you work it out

on a lattice or use my corrected algorithm, is 2 lg(3) + lg(5) + lg(7).

--pH <manynote@lib-rary.wustl.edu> http://library.wustl.edu/~manynote


I wrote,

>> I've thought about it and I don't agree with the algorithm Manuel and

>> Paul Hahn have come up with for computing the city-block metric on the

>> triangular lattice. [snip]

>> So it appears he agreed with me to begin with, then reneged (or

>> forgot).

>I do wish you wouldn't use such emotionally charged words, especially in

>view of the fact that I have neither "reneged" nor forgotten.

>Everything I've written on this subject so far works using vectors

>including separate exponents for the odd composites, provided you

>reduce them as I described in an earlier article.

I didn't think there was any emotion there. But there does appear to be

a conflict between your/Manuel's algorithm and using all odd composites,

as my 9:5 vs. 11:5 example pointed out.

>> The correct algorithm is of course much simpler than the last one Paul

>> H. described. It is: remove all factors of two, then take the log of the

>> denominator or the numerator, whichever is larger.

>No, it isn't. Consider 225:224, for example: your way (and with my

>uncorrected algorithm for the weighted version) the complexity is

>2 lg(3) + 2 lg(5), when in fact the correct version, if you work it out

>on a lattice or use my corrected algorithm, is 2 lg(3) + lg(5) + lg(7).

I stand by my way and would rather you addressed my examples of 9:5 and

11:5 instead of 225:224 (for which the lattice evaluation is unlikely to

be very meaningful anyway). If it must be discussed, my value for

225:224 assumes there is a 225-axis or 225-wormholes in the lattice. It

might make more sense to pick an odd limit for the lattice like 7, 9, or

11, but then you couldn't evaluate anything with 13 in it. Perhaps a

separate complexity measure for several choices of odd limit would

satisfy us both (but complicate the matter greatly).

On Tue, 16 Mar 1999, Paul H. Erlich wrote:

>> I do wish you wouldn't use such emotionally charged words, especially in

>> view of the fact that I have neither "reneged" nor forgotten.

>> Everything I've written on this subject so far works using vectors

>> including separate exponents for the odd composites, provided you

>> reduce them as I described in an earlier article.

>

> I didn't think there was any emotion there.

You don't think that someone might show a bit of heat at being (wrongly)

accused of having reneged on something? Aside from that, I find it a

bit presumptuous that sufficient commitment was assumed for the word

"renege" to be used in the first place.

> But there does appear to be

> a conflict between your/Manuel's algorithm and using all odd composites,

> as my 9:5 vs. 11:5 example pointed out.

No, there isn't. (Sheesh! I don't remember the last time I've had to

argue so hard to convince someone I agreed with him. Though I have a

feeling that it was Paul E. that time, too. 8-)> )

Look. Let's look at what I said about using the algorithms in an

odd-limit vs. prime-limit environment, _which you (Paul E.) yourself

quoted_:

| If you're an odd-limit proponent such as myself, things get a little

| complicated at the 9-limit and above, since 9 is composite, and

| odd-factorization does not necessarily yield unique results. However,

^^^^^^^^

| the minimum complexity is achieved by assigning the 9 exponent as large

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

| as possible, and the 3 exponent 0 or 1 as appropriate.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Okay. The 11-limit prime vectors for 9:5 and 11:5 are (2 -1 0 0) and

(0 -1 0 1). Converting these to the optimal 11-limit odd-factor vectors

as described above, we get (0 -1 0 1 0) for 9:5 and (0 -1 0 0 1) for

11:5.

Now apply the algorithms:

Simple version:

In each interval, the absolute values of the sums of the

positive and negative exponents are 1. Hence, each of these are

primary/consonant intervals within the 11-limit.

Weighted version:

(0 -1 0 1 0):

1st iteration lg(9) (0 0 0 0 0) done

(0 -1 0 0 1):

1st iteration lg(11) (0 0 0 0 0) done

See? It works.

Incidentally, since you don't like the 225:224 example, let's just

consider the 9:5 within the traditional 5-limit (vector [2 -1]). With

your largest-odd-factor method, we get lg(9). However, the shortest

path through the 5-limit lattice for 9:5 is a 3:2 and a 6:5, or

lg(3) + lg(5). Eh?

>>> The correct algorithm is of course much simpler than the last one Paul

>>> H. described. It is: remove all factors of two, then take the log of the

>>> denominator or the numerator, whichever is larger.

>

>> No, it isn't. Consider 225:224, for example: [snip]

>

> I stand by my way and would rather you addressed my examples of 9:5 and

> 11:5

See above.

> instead of 225:224 (for which the lattice evaluation is unlikely to

> be very meaningful anyway).

Meaningfulness is a separate question. As we have already seen, we all

use these various metric functions for slightly different purposes and

in slightly different ways. But is the function not defined on a large

interval like 225:224? Is not the shortest path through the 7-limit

lattice one septimal interval, one 5-limit interval, and two 3-limit

intervals?

> If it must be discussed, my value for

> 225:224 assumes there is a 225-axis or 225-wormholes in the lattice.

As Carl Lumma has already pointed out, this seems quite strange--it

would seem to imply that you can only apply this metric to intervals

which one considers primary/within a given odd limit/consonant.

IOW, it _is_ (log of) odd-limit. Which would negate the whole point of

using a city-block function in the first place.

--pH <manynote@lib-rary.wustl.edu> http://library.wustl.edu/~manynote
