
lattice wrap up

Carl Lumma <clumma@xxx.xxxx>

3/24/1999 8:17:50 AM

After much thought, I have reached some conclusions regarding the lattice
metric stuff. I believe strongly in these, and hope that they can
contribute to a consensus on this thread...

1. Triangular is better than rectangular.

2. "Euclidean" distance isn't good for much.

3. Weighting by (n*d) is trivial. If you add the results of each rung on
the way to the target, you get nonsense. If you multiply, you get (n*d) of
the original fraction.

4. Because they are useful for measuring dyads, (n+d), whole-number-limit,
and odd-limit all seem like viable ways to weight the rungs. But I don't
see how anything is gained by weighting on an octave-equivalent lattice.
Simply counting the rungs in the shortest path to target at the declared
odd-limit (ala Paul Hahn's consistency and diameter) is the pinnacle of
city block metrics. Somebody please get on and say any reason this would
not be so.

5. A 2-axis is probably not desirable. If it is, some form of weighting
would likely be needed; a weight of log(axis) is suggested.
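
Point 4's rung counting (a la Hahn's diameter) can be sketched as a small breadth-first search. This is my own illustration of the idea, not Paul Hahn's code; the rung set (all octave-reduced intervals between odd identities within the declared limit) and the depth cap are my choices:

```python
from fractions import Fraction

def reduce_octave(r):
    """Octave-reduce a ratio into [1, 2)."""
    while r >= 2:
        r /= 2
    while r < 1:
        r *= 2
    return r

def rungs(odd_limit):
    """All octave-reduced intervals between odd identities <= odd_limit."""
    odds = range(1, odd_limit + 1, 2)
    return {reduce_octave(Fraction(a, b))
            for a in odds for b in odds if a != b}

def rung_count(target, odd_limit, max_depth=6):
    """Fewest rungs from 1/1 to target at the declared odd limit (BFS)."""
    target = reduce_octave(target)
    if target == 1:
        return 0
    steps = rungs(odd_limit)
    frontier, seen = {Fraction(1)}, {Fraction(1)}
    for depth in range(1, max_depth + 1):
        nxt = set()
        for p in frontier:
            for s in steps:
                q = reduce_octave(p * s)
                if q == target:
                    return depth
                if q not in seen:
                    seen.add(q)
                    nxt.add(q)
        frontier = nxt
    return None  # not reachable within max_depth
```

At the 5-limit, 15/8 takes two rungs (3/2 then 5/4); declare a 15-limit and it becomes a single rung, which is why the declared odd limit matters.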

Carl

Paul H. Erlich <PErlich@Acadian-Asset.com>

3/25/1999 3:20:28 PM

Carl Lumma wrote,

>1. Triangular is better than rectangular.

For octave-equivalent lattices, I totally agree, and perhaps Joe Monzo
is coming around on this too. But for octave-specific lattices, an (n*d)
measure of complexity would be handled by a rectangular lattice with
prime axes (or, equivalently as far as the city-block metric is
concerned, a Monzo-type lattice with an additional axis for octaves) and
lengths proportional to the log of the axes. This is equivalent to
Tenney's harmonic distance if he included a 2-axis (anyone know if he
did or didn't?). I now see where Graham Breed was coming from in
supporting rectangular lattices for octave-specific lattices.

The reason omitting the 2-axis forces one to make the lattice triangular
is that typically many more powers of two will be needed to bring a
product of prime factors into close position than to bring a ratio of
prime factors into close position. So the latter should be represented
by a shorter distance than the former. Simply ignoring distances along
the 2-axis and sticking with a rectangular (or Monzo) lattice is
throwing away information.
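
The setup described above can be sketched numerically. This is my own illustration, assuming Tenney-style weights of log2(p) per prime axis (2-axis included):

```python
from fractions import Fraction
from math import log2, isclose

def prime_exponents(n):
    """Prime factorisation of a positive integer as {prime: exponent}."""
    out, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            out[d] = out.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        out[n] = out.get(n, 0) + 1
    return out

def log_weighted_distance(r):
    """City-block distance on a rectangular prime lattice, 2-axis included,
    with each axis weighted by log2 of its prime."""
    exps = prime_exponents(r.numerator)
    for p, e in prime_exponents(r.denominator).items():
        exps[p] = exps.get(p, 0) - e
    return sum(abs(e) * log2(p) for p, e in exps.items())

# For a ratio n/d in lowest terms this is exactly log2(n*d):
assert isclose(log_weighted_distance(Fraction(15, 8)), log2(15 * 8))

# With the 2-axis included, 7:5 and 35:1 come out equally distant -- which
# is the point at issue: drop the 2-axis and you must go triangular instead.
assert isclose(log_weighted_distance(Fraction(7, 5)),
               log_weighted_distance(Fraction(35, 1)))
```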

>2. "Euclidean" distance isn't good for much.

Yeah, city-block all the way.

>3. Weighting by (n*d) is trivial. If you add the results of each rung on
>the way to the target, you get nonsense. If you multiply, you get (n*d) of
>the original fraction.

Or add the logs (like Tenney).
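
Both observations can be checked on a concrete path. The example is my own: reaching 15/8 by the rungs 3/2 and 5/4 (a direct path with no cancellation between rungs, which is what makes the claim hold):

```python
from math import log2, isclose

# Path to 15/8 through two rungs: 3/2 then 5/4.
rungs = [(3, 2), (5, 4)]

# Multiplying each rung's (n*d) reproduces (n*d) of the original fraction:
product = 1
for n, d in rungs:
    product *= n * d
assert product == 15 * 8  # 6 * 20 == 120

# Adding the logs (Tenney-style) is the same thing in log terms:
assert isclose(sum(log2(n * d) for n, d in rungs), log2(15 * 8))
```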

>4. Because they are useful for measuring dyads, (n+d), whole-number-limit,
>and odd-limit all seem like viable ways to weight the rungs. But I don't
>see how anything is gained by weighting on an octave-equivalent lattice.
>Simply counting the rungs in the shortest path to target at the declared
>odd-limit (ala Paul Hahn's consistency and diameter) is the pinnacle of
>city block metrics. Somebody please get on and say any reason this would
>not be so.

Because one wants lattice distance to be directly related to complexity!
In the Breed-Tenney case, complexity is sqrt(n*d), and distance is
log(n*d), so distance is 2*log(complexity). In my case, the complexity
is the odd limit, and distance is log(odd limit), so distance is
log(complexity).
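
Both relationships are quick to verify numerically (using 7:5 as an arbitrary example; the choice of interval is mine):

```python
from math import log2, sqrt, isclose

n, d = 7, 5  # arbitrary example interval

# Breed-Tenney: complexity = sqrt(n*d) and distance = log(n*d),
# hence distance = 2 * log(complexity):
assert isclose(log2(n * d), 2 * log2(sqrt(n * d)))

# Odd-limit case: the complexity of 7:5 is its odd limit, max(7, 5) = 7,
# and the corresponding distance is simply log2(7) -- log(complexity).
odd_limit = max(n, d)
distance = log2(odd_limit)
```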

>5. A 2-axis is probably not desirable. If it is, some form of weighting
>would likely be needed; a weight of log(axis) is suggested.

right . . . a weight of log(axis) should be applied to all axes, and if
a 2-axis is included, a rectangular lattice is OK. If a 2-axis is not
included, a triangular lattice is better.

Carl Lumma <clumma@xxx.xxxx>

3/27/1999 7:28:40 AM

>>3. Weighting by (n*d) is trivial. If you add the results of each rung
>>on the way to the target, you get nonsense. If you multiply, you get
>>(n*d) of the original fraction.
>
>Or add the logs (like Tenney).

Adding the logs of (n*d) ought to be the same as multiplying n*d, as far as
being trivial or not. Is this what you meant? You saw that I tried it
once, and I wasn't too happy with it. Now sqrt(n*d) may be different...
but as I say, I've lost interest in weighted city-block metrics.

>But for octave-specific lattices, an (n*d) measure of complexity would be
>handled by a rectangular lattice with prime axes

Actually, aren't the prime-factor rectangular lattice and the odd-factor
triangular lattice equivalent under these conditions?

>Because one wants lattice distance to be directly related to complexity!

Bleck! It implies something that isn't there. 49/32 isn't more complex
than 25/16 because it took two 7-limit rungs and 25/16 took two 5-limit ones.
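
Carl's comparison is easy to make concrete (my own tabulation, using the standard definitions of each measure):

```python
from math import log2

def odd_part(n):
    """Largest odd divisor of n."""
    while n % 2 == 0:
        n //= 2
    return n

def odd_limit(num, den):
    return max(odd_part(num), odd_part(den))

# Both intervals are two rungs from 1/1 (7/4 twice vs. 5/4 twice), so an
# unweighted rung count cannot tell them apart...
# ...but weighted measures rank 49/32 as the more complex of the two:
assert odd_limit(49, 32) == 49 and odd_limit(25, 16) == 25
assert log2(49 * 32) > log2(25 * 16)  # Tenney distance agrees
```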

>The reason omitting the 2-axis forces one to make the lattice triangular
>is that typically many more powers of two will be needed to bring a
>product of prime factors into close position than to bring a ratio of
>prime factors into close position.

What am I not getting? The triangular lattice can represent multiple
factors on a single rung, the rectangular lattice cannot. This is true no
matter what is going on with the 2's, as far as I can see. For example,
7/5 is one rung in a triangular lattice whereas it is two rungs on a
rectangular one.

carl

Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

3/29/1999 5:40:16 PM

>>>3. Weighting by (n*d) is trivial. If you add the results of each rung
>>>on the way to the target, you get nonsense. If you multiply, you get
>>>(n*d) of the original fraction.
>
>>Or add the logs (like Tenney).

>Adding the logs of (n*d) ought to be the same as multipling n*d, as far as
>being trivial or not. Is this what you meant?

Sure, but "trivial" is good in this case.

>You saw that I tried it
>once, and I wasn't too happy with it.

Was that in octave-specific mode? What was wrong with it?

>>But for octave-specific lattices, an (n*d) measure of complexity would be
>>handled by a rectangular lattice with prime axes

>Actually, aren't the prime-factor rectangular lattice and the odd-factor
>triangular lattice equivalent under these conditions?

No -- unless you use a very strange, non-Euclidean type of triangle where
the length of one side is equal to the sum of the lengths of the other two
sides!

>Bleck! It implies something that isn't there. 49/32 isn't more complex
>than 25/16 because it took two 7-limit rungs and 25/16 took two 5-limit ones.

I would say it is more complex.

>>The reason omitting the 2-axis forces one to make the lattice triangular
>>is that typically many more powers of two will be needed to bring a
>>product of prime factors into close position than to bring a ratio of
>>prime factors into close position.

>What am I not getting? The triangular lattice can represent multiple
>factors on a single rung, the rectangular lattice cannot. This is true no
>matter what is going on with the 2's, as far as I can see. For example,
>7/5 is one rung in a triangular lattice whereas it is two rungs on a
>rectangular one.

Right, but in an octave-specific rectangular (or parallelogram) lattice, 7:1
and 5:1 are each one rung and 7:5 is two rungs. In an octave-specific sense,
7:1 and 5:1 really are simpler than 7:5; the former are more consonant. 7:4
and 5:4 are each three rungs in the rectangular lattice, but they still come
out a little simpler than 7:5 since the rungs along the 2-axis are so short.
If you can buy that 35:1 is as simple as 7:5, then the octave-specific
lattice really should be rectangular, not triangular. 35:1 is really
difficult to compare with 7:5 -- it's much less rough but also much harder
to tune . . .
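
These comparisons check out numerically if one takes the log-weighted rectangular distance, i.e. log2(n*d). The arithmetic below is my own, not from the original post:

```python
from math import log2

def hd(n, d):
    """Log-weighted rectangular ('Tenney'-style) distance for n:d."""
    return log2(n * d)

# 7:1 and 5:1 (one rung each) come out simpler than 7:5 (two rungs):
assert hd(7, 1) < hd(7, 5) and hd(5, 1) < hd(7, 5)

# 7:4 and 5:4 are three rungs, but the 2-axis rungs are short, so they
# still land a little below 7:5:
assert hd(7, 4) < hd(7, 5) and hd(5, 4) < hd(7, 5)

# And 35:1 comes out exactly as simple as 7:5:
assert hd(35, 1) == hd(7, 5)
```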

Carl Lumma <clumma@xxx.xxxx>

3/31/1999 6:24:19 AM

>>If you'd been following the discussion, you'd see that I too mistook Paul
>>H's algorithm as assuming primes, in contrast to his stance on odds vs.
>>primes, but he then claimed that he had been assuming odds all along. Since
>>the example that accompanied his algorithm only had 3-component vectors
>>(3,5,7), there was in fact no possible basis for deciding whether the
>>components were to be primes or odds on the basis of that one post alone.
>
>I just went back over the whole thread, and I see your point.

This thread has been rather hard to follow, but I don't see his point. If
there's one person who's been clear from the beginning, it is Paul H. His
original example mentioned the odds 9 and 15, and a general method for odd
factorization.

>>The problem with Hahn's intended algorithm is that it needs one additional
>>parameter, the odd limit of the lattice, in order to be unambiguous.
>
>Yes, and it was his apparent failure to appreciate that this is a problem,
>that led me to think he may have been saying "odd" but still thinking
>"prime". But now I realise that it probably isn't a problem for his
>intended use of this metric (difficulty of singing). It *is* a problem for
>our intended use as the basis for a dissonance metric.

It was I who first pointed out Paul H's odd-factoring requires a declared
odd limit, although I consider this obvious, and I disagree that it is a
problem. Erlich's metric is no different -- just Hahn's with odd limit
infinity.

Log weighting does merge prime and odd limits up to the declared limit,
although I don't see why this is desirable.
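
The merging claim comes down to log identities: under log weighting, a composite odd rung costs exactly what its prime decomposition costs, so prime- and odd-based lattices agree on distances. A quick check (my own, assuming a weight of log2 of each identity):

```python
from math import log2, isclose

# One rung of 9 costs the same as two rungs of 3,
# and one rung of 15 costs a 3-rung plus a 5-rung:
assert isclose(log2(9), 2 * log2(3))
assert isclose(log2(15), log2(3) + log2(5))
```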

>The shortest path on an <snip> triangular lattice <snip> was worth looking
>at. But now that I've considered it, I have to agree that for purposes of
>predicting dyadic dissonance, prime factorisation is pointless. In fact all
>factorisations are pointless and therefore lattice distances are pointless
>whether triangular or rectangular. Not that you can't make some of them
>work, it's just that there are easier ways of doing it that give better
>results (or at least the same).

Amen!