Composite Errors and Complexities

🔗Graham Breed <gbreed@gmail.com>

4/5/2008 8:14:14 AM

I've been working on a new PDF:

http://x31eq.com/composite.pdf

It's a complement to the Prime Errors and Complexities paper I spent a long time working on. It covers RMS errors and complexities of arbitrary sets of intervals, with examples from unweighted Tenney (product) and Farey (integer) limits.

I planned to get it finished tonight, but I failed. So, you can see it in its incomplete form. I'll add more waffle and clean it up some time next week.

Graham

🔗Graham Breed <gbreed@gmail.com>

4/11/2008 4:12:20 PM

Now in a "preliminary finished" state.

http://x31eq.com/composite.pdf

Also a single column format which may be better for reading electronically but also pushes all the tables to the end:

http://x31eq.com/composite_onecol.pdf

Graham

🔗Carl Lumma <carl@lumma.org>

4/11/2008 11:35:14 PM

Theorem 3 looks closest to what I'm after. I don't follow
the matrices. Can you confirm that it states that the RMS
error over all unique intervals in prime limit p is the same
as the Tenney-weighted RMS error of the primes 2...p?
Because I don't see how that's possible.

It does say "For any Tenney limit", but if it really were
any Tenney limit I'm not sure why it would need to say that.

I was thinking in terms of lattice distance, which must be
closely related to Tenney limit in prime limit p. For
taxicab distance y in the 3-limit with pure octaves, the
RMS error is

N[y] = (N[y-1]*(y-1) + y^2 e^2) / y

N[0] = 0

where e is the error of 3.
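
A quick numerical check of the recurrence -- a minimal Python sketch,
where the value of e and the range of y are arbitrary illustrative
choices:

    def mean_sq(y, e):
        # N[y] = (N[y-1]*(y-1) + y^2 e^2) / y, with N[0] = 0
        n = 0.0
        for i in range(1, y + 1):
            n = (n * (i - 1) + (i * e) ** 2) / i
        return n

    e = 0.01  # error of 3, in whatever units you like
    for y in (1, 10, 100, 1000):
        print(y, mean_sq(y, e) ** 0.5)  # RMS grows roughly like y*e/sqrt(3)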

It looks to me like this would grow without bound as y goes
to infinity. Therefore, I don't see how it could be the
"same" as e/log(3).

-Carl

🔗Graham Breed <gbreed@gmail.com>

4/11/2008 11:54:41 PM

Carl Lumma wrote:
> Theorem 3 looks closest to what I'm after. I don't follow
> the matrices. Can you confirm that it states that the RMS
> error over all unique intervals in prime limit p is the same
> as the Tenney-weighted RMS error of the primes 2...p?
> Because I don't see how that's possible.

No. It's the same as *a* prime weighted error. There are n free parameters for n primes (one redundant).

> It does say "For any Tenney limit", but if it really were
> any Tenney limit I'm not sure why it would need to say that.
>
> I was thinking in terms of lattice distance, which must be
> closely related to Tenney limit in prime limit p. For
> taxicab distance y in the 3-limit with pure octaves, the
> RMS error is
>
> N[y] = (N[y-1]*(y-1) + y^2 e^2) / y
>
> N[0] = 0
>
> where e is the error of 3.

Why are you using RMS error for taxicab distances? Scalar complexity and badnesses are both lattice distances. And they generalize for any (symmetric, positive definite) metric G. (Strictly speaking scalar badness may not count but any other parametric scalar badness does.)

> It looks to me like this would grow without bound as y goes
> to infinity. Therefore, I don't see how it could be the
> "same" as e/log(3).

That's why I normalize it.

Graham

🔗Carl Lumma <carl@lumma.org>

4/12/2008 12:02:44 AM

At 11:54 PM 4/11/2008, you wrote:
>Carl Lumma wrote:
>> Theorem 3 looks closest to what I'm after. I don't follow
>> the matrices. Can you confirm that it states that the RMS
>> error over all unique intervals in prime limit p is the same
>> as the Tenney-weighted RMS error of the primes 2...p?
>> Because I don't see how that's possible.
>
>No. It's the same as *a* prime weighted error. There are n
>free parameters for n primes (one redundant).

Huh?

>> It does say "For any Tenney limit", but if it really were
>> any Tenney limit I'm not sure why it would need to say that.
>>
>> I was thinking in terms of lattice distance, which must be
>> closely related to Tenney limit in prime limit p. For
>> taxicab distance y in the 3-limit with pure octaves, the
>> RMS error is
>>
>>
>> N[y] = (N[y-1]*(y-1) + y^2 e^2) / y
>>
>> N[0] = 0
>>
>> where e is the error of 3.
>
>Why are you using RMS error for taxicab distances?

We both take the unweighted RMS of intervals in a prime
limit unioned with something else. You use max Tenney height.
I use max taxicab distance.

>> It looks to me like this would grow without bound as y goes
>> to infinity. Therefore, I don't see how it could be the
>> "same" as e/log(3).
>
>That's why I normalize it.

I don't get that part.

-Carl

🔗Graham Breed <gbreed@gmail.com>

4/12/2008 7:25:14 AM

Carl Lumma wrote:
> At 11:54 PM 4/11/2008, you wrote:
>> Carl Lumma wrote:
>>> Theorem 3 looks closest to what I'm after. I don't follow
>>> the matrices. Can you confirm that it states that the RMS
>>> error over all unique intervals in prime limit p is the same
>>> as the Tenney-weighted RMS error of the primes 2...p?
>>> Because I don't see how that's possible.
>> No. It's the same as *a* prime weighted error. There are n
>> free parameters for n primes (one redundant).
>
> Huh?

You can choose how you weight each prime interval. Tenney weighting is one way of doing it. An unweighted Tenney limit is always the same as a weighted prime limit. But the specific weighting's different and that's what the matrices show. You take the square root of each element on the diagonal and those are the weights.
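
For instance, a minimal sketch of reading weights off such a matrix
(the 2x2 numbers here are made up for illustration, not taken from the
paper):

    G = [[1.00, 0.30],
         [0.30, 2.25]]
    weights = [G[i][i] ** 0.5 for i in range(len(G))]
    print(weights)  # [1.0, 1.5]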

Conjecture 1 says that the matrices approach Tenney weighting as the limit approaches infinity.

>>> It does say "For any Tenney limit", but if it really were
>>> any Tenney limit I'm not sure why it would need to say that.
>>>
>>> I was thinking in terms of lattice distance, which must be
>>> closely related to Tenney limit in prime limit p. For
>>> taxicab distance y in the 3-limit with pure octaves, the
>>> RMS error is
>>>
>>>
>>> N[y] = (N[y-1]*(y-1) + y^2 e^2) / y
>>>
>>> N[0] = 0
>>>
>>> where e is the error of 3.
>> Why are you using RMS error for taxicab distances?
>
> We both take the unweighted RMS of intervals in a prime
> limit unioned with something else. You use max Tenney height.
> I use max taxicab distance.

I can't digest that now. Tenney height is a taxicab distance, isn't it?

You could use the Euclidean distance, and I expect it'd make sense, but I haven't looked at it. The scalar complexity is the RMS of this weighted, Euclidean distance.

Gene gave an integral that gives the TOP-RMS tuning as the result. That's clever because it's obvious that the tuning won't go to infinity whatever the normalization. But I don't completely understand it so I can't use it as a proof.

>>> It looks to me like this would grow without bound as y goes
>>> to infinity. Therefore, I don't see how it could be the
>>> "same" as e/log(3).
>> That's why I normalize it.
>
> I don't get that part.

Originally I didn't want to specify the number of notes in the formulas, because I was using so many letters already, so I worked out that it's also the dot product of the weighted JI primes. But that happens to work, and generalizes to other weightings.

You can think of it in terms of dimensional analysis. There should be as much W (where it's explicit) on the top and bottom of the formulas. For the error, M and H cancel out as well.

It gets harder for the cross-weighted metrics and you'll have to follow the algebra to see why. I'm not clear about the general case myself but it works reasonably well.

Graham

🔗Carl Lumma <carl@lumma.org>

4/12/2008 10:07:59 AM

Graham wrote...
>>>> Theorem 3 looks closest to what I'm after. I don't follow
>>>> the matrices. Can you confirm that it states that the RMS
>>>> error over all unique intervals in prime limit p is the same
>>>> as the Tenney-weighted RMS error of the primes 2...p?
>>>> Because I don't see how that's possible.
>>>
>>> No. It's the same as *a* prime weighted error. There are n
>>> free parameters for n primes (one redundant).
>>
>> Huh?
>
>You can choose how you weight each prime interval. Tenney
>weighting is one way of doing it. An unweighted Tenney
>limit is always the same as a weighted prime limit. But the
>specific weighting's different and that's what the matrices
>show. You take the square root of each element on the
>diagonal and those are the weights.
>
>Conjecture 1 says that the matrices approach Tenney
>weighting as the limit approaches infinity.

Do you prove that? I'd love to understand how you normalize.

>>>> It does say "For any Tenney limit", but if it really were
>>>> any Tenney limit I'm not sure why it would need to say that.
>>>>
>>>> I was thinking in terms of lattice distance, which must be
>>>> closely related to Tenney limit in prime limit p. For
>>>> taxicab distance y in the 3-limit with pure octaves, the
>>>> RMS error is
>>>>
>>>>
>>>> N[y] = (N[y-1]*(y-1) + y^2 e^2) / y
>>>> N[0] = 0
>>>>
>>>> where e is the error of 3.
>>> Why are you using RMS error for taxicab distances?
>>
>> We both take the unweighted RMS of intervals in a prime
>> limit unioned with something else. You use max Tenney height.
>> I use max taxicab distance.
>
>I can't digest that now.

For taxicab distance zero you have only 1/1 and the error
is zero. Taxicab 1 (on a 1-D lattice) and the RMS error is

e^2

y=2 and it's

(e^2 + (2e)^2) / 2
 3/2     9/8

or

(e^2 + 4e^2) / 2

and generally you get the formula I gave (quoted above).
The 5-limit gets way more complicated (when the prime errors
have opposite signs in composite intervals) and I haven't
even tried to work it out.

You must be dealing with this in your approach and I wish
I had a clue how you're doing it.

>Tenney height is a taxicab distance, isn't it?

No, Tenney height is product limit; what you're unioning
with prime limit. I'm unioning taxicab distance, which is
loosely related but not the same. The best way to compare
would be to draw the balls on the lattice.

>You could use the Euclidean distance, and I expect it'd make
>sense, but I haven't looked at it.

It's very closely related to taxicab. The balls are almost
the same. In the 5-limit, it's like approximating a disk
using n pixels, where n is the number of notes.

>Gene gave an integral that gives the TOP-RMS tuning as the
>result.

What is it?

>>>> It looks to me like this would grow without bound as y goes
>>>> to infinity. Therefore, I don't see how it could be the
>>>> "same" as e/log(3).
>>>
>>> That's why I normalize it.
>>
>> I don't get that part.
>
>Originally I didn't want to specify the number of notes in
>the formulas, because I was using so many letters already,
>so I worked out that it's also the dot product of the
>weighted JI primes. But that happens to work, and
>generalizes to other weightings.
>
>You can think of it in terms of dimensional analysis. There
>should be as much W (where it's explicit) on the top and
>bottom of the formulas. For the error, M and H cancel out
>as well.

I'm going to need hand-holding.

-Carl

🔗Graham Breed <gbreed@gmail.com>

4/12/2008 10:20:27 PM

Carl Lumma wrote:
> Graham wrote...

>> Conjecture 1 says that the matrices approach Tenney
>> weighting as the limit approaches infinity.
>
> Do you prove that? I'd love to understand how you normalize.

No. If I proved it then it'd be a theorem, not a conjecture. The normalization here is to divide the matrix by the number in the top left-hand corner. As long as the weights are normalized in whatever formula you apply them to you get the same result.
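
As a minimal sketch of that normalization (the matrix entries are made
up for illustration):

    G = [[4.0, 1.2],
         [1.2, 9.0]]
    g00 = G[0][0]  # the number in the top left-hand corner
    G_norm = [[x / g00 for x in row] for row in G]
    print(G_norm)  # the top left-hand entry is now 1.0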

>>>>> It does say "For any Tenney limit", but if it really were
>>>>> any Tenney limit I'm not sure why it would need to say that.
>>>>>
>>>>> I was thinking in terms of lattice distance, which must be
>>>>> closely related to Tenney limit in prime limit p. For
>>>>> taxicab distance y in the 3-limit with pure octaves, the
>>>>> RMS error is
>>>>>
>>>>>
>>>>> N[y] = (N[y-1]*(y-1) + y^2 e^2) / y
>>>>> N[0] = 0
>>>>>
>>>>> where e is the error of 3.
>>>> Why are you using RMS error for taxicab distances?
>>> We both take the unweighted RMS of intervals in a prime
>>> limit unioned with something else. You use max Tenney height.
>>> I use max taxicab distance.
>> I can't digest that now.
>
> For taxicab distance zero you have only 1/1 and the error
> is zero. Taxicab 1 (on a 1-D lattice) and the RMS error is
>
> e^2

RMS is sqrt(e^2)

> y=2 and it's
>
> (e^2 + (2e)^2) / 2
>  3/2     9/8

And that's a Euclidean formula, not a taxicab formula. The taxicab distance would be

(|e| + |2e|)/2

With the assumption that octaves are pure and we can ignore weights.

> or
>
> (e^2 + 4e^2) / 2

3|e|/2 for taxicab

> and generally you get the formula I gave (quoted above).
> The 5-limit gets way more complicated (when the prime errors
> have opposite signs in composite intervals) and I haven't
> even tried to work it out.

Where does the taxicab limit come in?

>
> You must be dealing with this in your approach and I wish
> I had a clue how you're doing it.

Okay, start with this

N[y] = (N[y-1]*(y-1) + y^2 e^2) / y

N[0] = 0

It's better to calculate the sum-squared than the mean-squared, so multiply your N[y] by y

N[y] = [N[y-1]/(y-1)]*(y-1) + y^2e^2

N[0] = 0

ERMS[y] = sqrt(N[y]/y)

where ERMS is now the RMS error. The terms in (y-1) cancel out

N[y] = N[y-1] + y^2e^2

and you can also write it

N[y] = sum[i=1,y, i^2 * e^2]

So the i^2 term is because the error in the ith interval is i*e. And, um, is that?

This is the same as the formula for a weighted mean.

http://en.wikipedia.org/wiki/Weighted_mean

You can call the weights i^2 and the data e^2. If you normalize it as in the standard formula, you get

ERMS[y] = sqrt(sum[i=1,y, i^2 * e^2] / sum[i=1,y, i^2])

which is the same as

ERMS[y] = |e|

So it's clearly stable as y tends to infinity.
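
A quick check that the i^2 weights really do cancel -- a Python sketch
with an arbitrary e:

    e = 0.01
    for y in (2, 10, 1000):
        num = sum((i * e) ** 2 for i in range(1, y + 1))
        den = sum(i ** 2 for i in range(1, y + 1))
        print(y, (num / den) ** 0.5)  # always |e|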

There must be something special about the intervals that means you can use i as the weighting. In general there'll be a list of numbers that applies the weight to each interval. Call that list W where W[i] = i. The weighted RMS error is then

ERMS[y] = sqrt[sum[i=1,y, W[i]^2 * e^2] / sum[i=1,y, W[i]^2]]

Make W a column vector and W* its transpose and you can get rid of the explicit sum.

ERMS = sqrt[(W* W e^2) / (W* W)]

In the 5-limit, instead of a single error e, you have a pair of errors E. So call E a column matrix and you could have either

ERMS = sqrt[(W* W E* E) / (W* W)]

ERMS = sqrt[(W E)* (W E) / (W* W)]

The first equation is boring because it doesn't depend on the weights. The second equation gives a weighted RMS. Even that's boring if W is a column vector because it becomes

ERMS = sqrt[ (E* W* W E) / (W* W)]

Probably you want W to be a rectangular matrix. The problem there is that W* W stops being a scalar that you can divide by. To make it a scalar you could try a determinant

ERMS = sqrt[det(E* W* W E) / det(W* W)]

Now you need to work out what W and E should be in the 5-limit. Maybe you'll find this formula is valid for something.
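
One way to plug concrete shapes into that formula -- everything here is
made up for illustration: the rows of W are the composite intervals 3,
9, 5, 15 and 5/3 written in terms of the primes 3 and 5 (pure octaves,
no extra weighting), and E holds invented errors for 3 and 5:

    import numpy as np

    W = np.array([[1, 0], [2, 0], [0, 1], [1, 1], [1, -1]], dtype=float)
    E = np.array([[0.002], [-0.014]])

    num = np.linalg.det(E.T @ W.T @ W @ E)  # determinant of a 1x1 matrix
    den = np.linalg.det(W.T @ W)            # Gram determinant of W
    print(np.sqrt(num / den))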

>> Tenney height is a taxicab distance, isn't it?
>
> No, Tenney height is product limit; what you're unioning
> with prime limit. I'm unioning taxicab distance, which is
> loosely related but not the same. The best way to compare
> would be to draw the balls on the lattice.

Tenney height is the log of the product. It's also the taxicab distance on a lattice with the right weighting.
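
A minimal check of that equivalence, using the 5-limit monzo of
15/8 = 2^-3 * 3 * 5 and log-of-prime weights:

    from math import log2

    def tenney_taxicab(monzo, primes=(2, 3, 5)):
        return sum(abs(a) * log2(p) for a, p in zip(monzo, primes))

    print(tenney_taxicab((-3, 1, 1)))  # 15/8
    print(log2(15 * 8))                # log of the product: same number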

>> You could use the Euclidean distance, and I expect it'd make
>> sense, but I haven't looked at it.
>
> It's very closely related to taxicab. The balls are almost
> the same. In the 5-limit, it's like approximating a disk
> using n pixels, where n is the number of notes.

What have disks got to do with it?

>> Gene gave an integral that gives the TOP-RMS tuning as the
>> result.
>
> What is it?

See tenney_integral.pdf in the files section.

>>>>> It looks to me like this would grow without bound as y goes
>>>>> to infinity. Therefore, I don't see how it could be the
>>>>> "same" as e/log(3).
>>>> That's why I normalize it.
>>> I don't get that part.
>> Originally I didn't want to specify the number of notes in
>> the formulas, because I was using so many letters already,
>> so I worked out that it's also the dot product of the
>> weighted JI primes. But that happens to work, and
>> generalizes to other weightings.
>>
>> You can think of it in terms of dimensional analysis. There
>> should be as much W (where it's explicit) on the top and
>> bottom of the formulas. For the error, M and H cancel out
>> as well.
>
> I'm going to need hand-holding.

http://en.wikipedia.org/wiki/Dimensional_analysis

Take the equation above for the weighted RMS error

ERMS = sqrt[det(E* W* W E) / det(W* W)]

Whether or not it's valid, it is at least dimensionally correct as some kind of error function. The factors of W in the numerator and denominator cancel out. The dimensions of ERMS will be the same as those of E for the simple case where W* W is a scalar so that the determinants cancel out. More generally, ERMS has dimensions of error to the power of the rank of (E* W* W E). That means ERMS has dimensions as a function of only error. Which makes some sense but, even then, isn't a true weighted RMS. The error formula can't be quite this simple.

So let's go to scalar complexity instead. It's something to do with the volume of the weighted mapping.

K = sqrt[det(M* W2 M)]

These weights form a square matrix, so it can be squared. And as W is symmetric that means W^2 or W2 = W* W.
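
A sketch of that calculation for 12-equal in the 5-limit, assuming
Tenney weighting (W has 1/log2(p) on the diagonal, so W2 = W* W):

    import numpy as np

    M = np.array([[12.0], [19.0], [28.0]])          # mapping, a column vector
    W2 = np.diag(1 / np.log2([2.0, 3.0, 5.0]) ** 2)
    K = np.sqrt(np.linalg.det(M.T @ W2 @ M))
    print(K)  # about 20.8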

The formula isn't normalized because the dimensions of W still affect it. To look like a weighted RMS you could try

KRMS = sqrt[det((M* W2 M)/sum(i=1,n, W[i,i]^2))]

For an equal temperament, the determinant becomes redundant, so that's

KRMS = sqrt[(M* W2 M)/sum(i=1,n, W[i,i]^2)]

The dimensions are the same as M, the mapping, which we can call "steps".

Generally, the complexity of an equal temperament should be "steps per octave". The "per octave" part comes from choosing to take the first element of the mapping, which relates scale steps to octaves. The formula for KRMS is suspicious because it doesn't make the "per octave" part explicit.

The scalar complexity of an equal temperament is

Kscal = sqrt[(M* W2 M)/(H* W2 H)]

where H is a column vector with the sizes of the prime intervals. If you measure intervals in octaves this gives the correct dimensions of steps per octave. It also means the scalar complexity is close to the number of steps to an octave

Kscal ~ M[0]

That's C-style indexing where the first element is zero. It works because the mapping should be close to JI. That is, it's roughly a multiple of H.

M[i]/M[0] ~ H[i]/H[0]

If H has units of octave that becomes

M[i]/M[0] ~ H[i]

The approximation still holds if you square both sides

(M[i]/M[0])^2 ~ (H[i])^2

M[i]^2/M[0]^2 ~ H[i]^2

And you can rearrange it

M[0]^2 ~ M[i]^2 / H[i]^2

What's true of all elements should also be true of the sums, which in matrix terms is

M[0]^2 ~ (M* M) / (H* H)
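
Checking that approximation for 12-equal in the 5-limit (plain Python,
with the primes measured in octaves):

    from math import log2, sqrt

    M = [12, 19, 28]
    H = [1.0, log2(3), log2(5)]
    print(sqrt(sum(m * m for m in M) / sum(h * h for h in H)))  # ~12.03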

The Kscal above is simply the weighted version of that mean. So how do you generalize it for any rank of temperament class? As H* W2 H is a scalar, it could be either inside or outside the determinant

Kscal = sqrt[det((M* W2 M)/(H* W2 H))]

or

Kscal = sqrt[det(M* W2 M)/(H* W2 H)]

Check the dimensions now. In the first one the factors of W2 in the numerator and denominator are balanced so the dimensions of W2 don't affect the dimensions of Kscal. If M is in "steps" and H is in octaves, the dimensions of Kscal are some function of steps per octave, which is correct.

Take the second formula now. The factors of W2 in the numerator and denominator do not balance out because the determinant alters the dimensions of its argument. Also the "steps" and "octaves" don't balance out for the same reason. Because of that you can say this formula is not correctly normalized.

Note that if you measure H in cents, you can still have Kscal a function of steps per octave:

Kscal = sqrt[det(H[0]^2 (M* W2 M)/(H* W2 H))]
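
A sketch comparing the two candidates for a rank 2 case -- 5-limit
meantone with Tenney weights. The mapping is the standard one; the
numbers are only there to show that the two formulas disagree:

    import numpy as np

    M = np.array([[1, 0], [1, 1], [0, 4]], dtype=float)  # meantone mapping
    H = np.array([[1.0], [np.log2(3)], [np.log2(5)]])    # primes in octaves
    W2 = np.diag(1 / np.log2([2.0, 3.0, 5.0]) ** 2)

    hWh = float(H.T @ W2 @ H)  # a scalar
    G = M.T @ W2 @ M           # 2x2
    print(np.sqrt(np.linalg.det(G / hWh)))  # W2 inside the determinant
    print(np.sqrt(np.linalg.det(G) / hWh))  # W2 outside: a different number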

If you now check my formula for scalar badness, you should see that its dimensions match those for scalar complexity. (If not, it means I made a mistake. Dimensional analysis is a great way of spotting errors.) That means the TOP-RMS error (and its generalization for arbitrary weights) is dimensionless. It doesn't depend on the units of any of M, W, or H because they're balanced. That's why you can guess it's properly normalized and so doesn't depend on the number of intervals you look at.

One other thing. When you look at errors of composite intervals, including the same interval multiple times is the same as including it once but giving it more weight. That's another clue that balancing the weights gives you a normalization that doesn't depend strongly on the number of intervals.
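
A two-line check of that point, with made-up errors, where a squared
weight of 2 stands in for "included twice":

    from math import sqrt

    e1, e2 = 0.010, 0.030
    duplicated = sqrt((e1 ** 2 + e2 ** 2 + e2 ** 2) / 3)
    reweighted = sqrt((1 * e1 ** 2 + 2 * e2 ** 2) / (1 + 2))
    print(duplicated, reweighted)  # identical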

Graham

🔗Carl Lumma <carl@lumma.org>

4/12/2008 10:42:36 PM

Graham wrote...

>>>>>> For taxicab distance y in the 3-limit with pure octaves,
>>>>>> the RMS error is
>>>>>>
>>>>>> N[y] = (N[y-1]*(y-1) + y^2 e^2) / y
>>>>>> N[0] = 0
>>>>>>
>>>>>> where e is the error of 3.
>>>>>
>>>>> Why are you using RMS error for taxicab distances?
>>>> We both take the unweighted RMS of intervals in a prime
>>>> limit unioned with something else. You use max Tenney height.
>>>>
>>>> I use max taxicab distance.
>>>
>>> I can't digest that now.
>>
>> For taxicab distance zero you have only 1/1 and the error
>> is zero. Taxicab 1 (on a 1-D lattice) and the RMS error is
>>
>> e^2
>
>RMS is sqrt(e^2)

Yes.

>> y=2 and it's
>>
>> (e^2 + (2e)^2) / 2
>>  3/2     9/8
>
>And that's a Euclidean formula, not a taxicab formula. The
>taxicab distance would be
>
>(|e| + |2e|)/2

No no no. The above formula is the MS *error* over all
unique intervals within a certain taxicab distance. y=2 gives
two intervals on this lattice: 3 and 9.

>With the assumption that octaves are pure and we can ignore
>weights.

??????????

1. There are no weights in this formula.
2. There are no octaves on this lattice.

>> or
>>
>> (e^2 + 4e^2) / 2
>
>3|e|/2 for taxicab

??????????????????????

>> and generally you get the formula I gave (quoted above).
>> The 5-limit gets way more complicated (when the prime errors
>> have opposite signs in composite intervals) and I haven't
>> even tried to work it out.
>
>Where does the taxicab limit come in?

y is the taxicab limit.

>> You must be dealing with this in your approach and I wish
>> I had a clue how you're doing it.
>
>Okay, start with this
>
>N[y] = (N[y-1]*(y-1) + y^2 e^2) / y
>N[0] = 0
>
>It's better to calculate the sum-squared than the
>mean-squared, so multiply your N[y] by y
>
>N[y] = [N[y-1]/(y-1)]*(y-1) + y^2e^2

Huh? This is what it looks like for sum-squared:

N[y] = N[y-1] + y^2e^2

>ERMS[y] = sqrt(N[y]/y)

OK.

>where ERMS is now the RMS error. The terms in (y-1) cancel out
>
>N[y] = N[y-1] + y^2e^2

Whew.

>and you can also write it
>
>N[y] = sum[i=1,y, i^2 * e^2]

Yes.

>So the i^2 term is because the error in the ith interval is
>i*e. And, um, is that?

The i^2 term is there because the error for the ith interval
is i * the error of its prime.

>This is the same as the formula for a weighted mean.
>
> http://en.wikipedia.org/wiki/Weighted_mean

You say that about every formula. There are no weights here.

>You can call the weights i^2 and the data e^2. If you
>normalize it as in the standard formula, you get
>
>ERMS[y] = sqrt(sum[i=1,y, i^2 * e^2] / sum[i=1,y, i^2])
>
>which is the same as
>
>ERMS[y] = |e|
>
>So it's clearly stable as y tends to infinity.

What allows you to take out the i^2 term? That makes no sense.

//

>>> Tenney height is a taxicab distance, isn't it?
>>
>> No, Tenney height is product limit; what you're unioning
>> with prime limit. I'm unioning taxicab distance, which is
>> loosely related but not the same. The best way to compare
>> would be to draw the balls on the lattice.
>
>Tenney height is the log of the product.

Nope. You're thinking of Tenney complexity.

>>> You could use the Euclidean distance, and I expect it'd make
>>> sense, but I haven't looked at it.
>>
>> It's very closely related to taxicab. The balls are almost
>> the same. In the 5-limit, it's like approximating a disk
>> using n pixels, where n is the number of notes.
>
>What have disks got to do with it?

Euclidean distance gives you disks. Taxicab gives you
pixelated disks.

>>>>>> It looks to me like this would grow without bound as y goes
>>>>>> to infinity. Therefore, I don't see how it could be the
>>>>>> "same" as e/log(3).
>>>>>
>>>>> That's why I normalize it.
>>>>
>>>> I don't get that part.
>>>
>>> Originally I didn't want to specify the number of notes in
>>> the formulas, because I was using so many letters already,
>>> so I worked out that it's also the dot product of the
>>> weighted JI primes. But that happens to work, and
>>> generalizes to other weightings.
>>>
>>> You can think of it in terms of dimensional analysis. There
>>> should be as much W (where it's explicit) on the top and
>>> bottom of the formulas. For the error, M and H cancel out
>>> as well.
>>
>> I'm going to need hand-holding.
>
>http://en.wikipedia.org/wiki/Dimensional_analysis
>
>Take the equation above for the weighted RMS error
>
>ERMS = sqrt[det(E* W* W E) / det(W* W)]
>
>//

[head explodes]

-Carl

🔗Graham Breed <gbreed@gmail.com>

4/13/2008 2:34:18 AM

Carl Lumma wrote:

>>> y=2 and it's
>>>
>>> (e^2 + (2e)^2) / 2
>>>  3/2     9/8
>> And that's a Euclidean formula, not a taxicab formula. The
>> taxicab distance would be
>>
>> (|e| + |2e|)/2
>
> No no no. The above formula is the MS *error* over all
> unique intervals within a certain taxicab distance. y=2 gives
> two intervals on this lattice: 3 and 9.

No it doesn't. You missed 1. And if it's 1, 3 and 9 you don't have 3/2 or 9/8.

>> With the assumption that octaves are pure and we can ignore
>> weights.
>
> ??????????
>
> 1. There are no weights in this formula.

Yes, that's how you can ignore them.

> 2. There are no octaves on this lattice.

How come? What exactly is this lattice?

I see you called it a "1-D lattice". In that case how can it make a difference if you take the taxicab or Euclidean distance?

>>> and generally you get the formula I gave (quoted above).
>>> The 5-limit gets way more complicated (when the prime errors
>>> have opposite signs in composite intervals) and I haven't
>>> even tried to work it out.
>> Where does the taxicab limit come in?
>
> y is the taxicab limit.

On what lattice?

>> It's better to calculate the sum-squared than the
>> mean-squared, so multiply your N[y] by y
>>
>> N[y] = [N[y-1]/(y-1)]*(y-1) + y^2e^2
>
> Huh? This is what it looks like for sum-squared:
>
> N[y] = N[y-1] + y^2e^2

So why didn't you start with that? It's simpler than your original formula.

>> So the i^2 term is because the error in the ith interval is
>> i*e. And, um, is that?
>
> The i^2 term is there because the error for the ith interval
> is i * the error of its prime.

If you number the intervals so that this is true.

>> This is the same as the formula for a weighted mean.
>>
>> http://en.wikipedia.org/wiki/Weighted_mean
>
> You say that about every formula. There are no weights here.

No, I only say it about formulae that look like that.

>> You can call the weights i^2 and the data e^2. If you
>> normalize it as in the standard formula, you get
>>
>> ERMS[y] = sqrt(sum[i=1,y, i^2 * e^2] / sum[i=1,y, i^2])
>>
>> which is the same as
>>
>> ERMS[y] = |e|
>>
>> So it's clearly stable as y tends to infinity.
>
> What allows you to take out the i^2 term? That makes no sense.

You can take it out because it cancels out.

>>>> Tenney height is a taxicab distance, isn't it?
>>> No, Tenney height is product limit; what you're unioning
>>> with prime limit. I'm unioning taxicab distance, which is
>>> loosely related but not the same. The best way to compare
>>> would be to draw the balls on the lattice.
>> Tenney height is the log of the product.
>
> Nope. You're thinking of Tenney complexity.

So what do these things mean all of a sudden? I found Tenney height here:

http://tonalsoft.com/enc/t/tm-basis.aspx

It says "if p / q is a positive rational number in reduced form, then the Tenney height is TH(p / q) = p � q"

So great, there's no log. It still makes Tenney height by this definition a monotonic function of Tenney harmonic distance as Tenney himself defined it, so a limit on one is the same as a limit on the other.

>>>> You could use the Euclidean distance, and I expect it'd make
>>>> sense, but I haven't looked at it.
>>> It's very closely related to taxicab. The balls are almost
>>> the same. In the 5-limit, it's like approximating a disk
>>> using n pixels, where n is the number of notes.
>> What have disks got to do with it?
>
> Euclidean distance gives you disks. Taxicab gives you
> pixelated disks.

Any set of points on a lattice will be pixelated. A taxicab metric gives you a rhombus, doesn't it?

Graham

🔗Carl Lumma <carl@lumma.org>

4/13/2008 12:27:28 PM

Graham wrote:

>>>> y=2 and it's
>>>>
>>>> (e^2 + (2e)^2) / 2
>>>>  3/2     9/8
>>> And that's a Euclidean formula, not a taxicab formula. The
>>> taxicab distance would be
>>>
>>> (|e| + |2e|)/2
>>
>> No no no. The above formula is the MS *error* over all
>> unique intervals within a certain taxicab distance. y=2 gives
>> two intervals on this lattice: 3 and 9.
>
>No it doesn't. You missed 1.

The error is over the intervals. There's no error on 1.

>And if it's 1, 3 and 9 you
>don't have 3/2 or 9/8.

I put those labels there to make it easier to follow. There
are no 2s in this system.

>>> With the assumption that octaves are pure and we can ignore
>>> weights.
>>
>> ??????????
>>
>> 1. There are no weights in this formula.
>
>Yes, that's how you can ignore them.
>
>> 2. There are no octaves on this lattice.
>
>How come? What exactly is this lattice?

The linear lattice of 3s.

>I see you called it a "1-D lattice". In that case how can
>it make a difference if you take the taxicab or Euclidean
>distance?

Euclidean distance isn't restricted to integer values, but
it won't otherwise make a difference. However there is a
small difference on other lattices, as already mentioned.

>>>> and generally you get the formula I gave (quoted above).
>>>> The 5-limit gets way more complicated (when the prime errors
>>>> have opposite signs in composite intervals) and I haven't
>>>> even tried to work it out.
>>>
>>> Where does the taxicab limit come in?
>>
>> y is the taxicab limit.
>
>On what lattice?

The linear lattice of 3s in this example.

>>> It's better to calculate the sum-squared than the
>>> mean-squared, so multiply your N[y] by y
>>>
>>> N[y] = [N[y-1]/(y-1)]*(y-1) + y^2e^2
>>
>> Huh? This is what it looks like for sum-squared:
>>
>> N[y] = N[y-1] + y^2e^2
>
>So why didn't you start with that? It's simpler than your
>original formula.

Because I want the mean square error, not the sum
squared error.

>>> So the i^2 term is because the error in the ith interval is
>>> i*e. And, um, is that?
>>
>> The i^2 term is there because the error for the ith interval
>> is i * the error of its prime.
>
>If you number the intervals so that this is true.

???

>>> You can call the weights i^2 and the data e^2. If you
>>> normalize it as in the standard formula, you get
>>>
>>> ERMS[y] = sqrt(sum[i=1,y, i^2 * e^2] / sum[i=1,y, i^2])
>>>
>>> which is the same as
>>>
>>> ERMS[y] = |e|
>>>
>>> So it's clearly stable as y tends to infinity.
>>
>> What allows you to take out the i^2 term? That makes no sense.
>
>You can take it out because it cancels out.

Unless there's a reason to take it out you can't take it
out. I'm pretty sure about that.

>>>>> Tenney height is a taxicab distance, isn't it?
>>>> No, Tenney height is product limit; what you're unioning
>>>> with prime limit. I'm unioning taxicab distance, which is
>>>> loosely related but not the same. The best way to compare
>>>> would be to draw the balls on the lattice.
>>> Tenney height is the log of the product.
>>
>> Nope. You're thinking of Tenney complexity.
>
>So what do these things mean all of a sudden?

Same meaning they've had for years.

>I found Tenney height here:
>
>http://tonalsoft.com/enc/t/tm-basis.aspx
>
>It says "if p / q is a positive rational number in reduced
>form, then the Tenney height is TH(p / q) = p · q"
>
>So great, there's no log. It still makes Tenney height by
>this definition a monotonic function of Tenney harmonic
>distance as Tenney himself defined it, so a limit on one is
>the same as a limit on the other.

OK. So?

>>>>> You could use the Euclidean distance, and I expect it'd make
>>>>> sense, but I haven't looked at it.
>>>> It's very closely related to taxicab. The balls are almost
>>>> the same. In the 5-limit, it's like approximating a disk
>>>> using n pixels, where n is the number of notes.
>>> What have disks got to do with it?
>>
>> Euclidean distance gives you disks. Taxicab gives you
>> pixelated disks.
>
>Any set of points on a lattice will be pixelated.

Yes but the Euclidean distance really does define circles in
Euclidean space (the superspace in which the lattice resides).

>A taxicab
>metric gives you a rhombus, doesn't it?

Drawing rhombi might give exact correspondence with taxicab
whereas you get only partial correspondence for circles. But
in a sense any continuous shape implies points that aren't
measurable with taxicab.

-Carl

🔗Graham Breed <gbreed@gmail.com>

4/13/2008 8:03:14 PM

Carl Lumma wrote:
> Graham wrote:

>>> No no no. The above formula is the MS *error* over all
>>> unique intervals within a certain taxicab distance. y=2 gives
>>> two intervals on this lattice: 3 and 9.
>> No it doesn't. You missed 1.
>
> The error is over the intervals. There's no error on 1.

The average is over the intervals and 1:1 is an interval.

>> And if it's 1, 3 and 9 you
>> don't have 3/2 or 9/8.
>
> I put those labels there to make it easier to follow. There
> are no 2s in this system.

Okay.

<snip>

>> How come? What exactly is this lattice?
>
> The linear lattice of 3s.

So you have a simplified system where the RMS error of a temperament is the error in the one independent interval multiplied by the RMS complexity of the intervals you average over. Normalize by that RMS complexity and you get a very simple result -- the normalized RMS error of the temperament is the error of that one interval regardless of what intervals you average over. For more realistic cases this still roughly works but not perfectly.

>> I see you called it a "1-D lattice". In that case how can
>> it make a difference if you take the taxicab or Euclidean
>> distance?
>
> Euclidean distance isn't restricted to integer values, but
> it won't otherwise make a difference. However there is a
> small difference on other lattices, as already mentioned.

Lattice distances are restricted to discrete points. There's a significant difference between Euclidean and taxicab distances on other lattices. But no difference at all on the example you gave so there was no need to specify which you were using.

>>>>> and generally you get the formula I gave (quoted above).
>>>>> The 5-limit gets way more complicated (when the prime errors
>>>>> have opposite signs in composite intervals) and I haven't
>>>>> even tried to work it out.
>>>> Where does the taxicab limit come in?
>>> y is the taxicab limit.
>> On what lattice?
>
> The linear lattice of 3s in this example.

Then taxicab/Euclidean has nothing to do with it.

>>>> You can call the weights i^2 and the data e^2. If you
>>>> normalize it as in the standard formula, you get
>>>>
>>>> ERMS[y] = sqrt(sum[i=1,y, i^2 * e^2] / sum[i=1,y, i^2])
>>>>
>>>> which is the same as
>>>>
>>>> ERMS[y] = |e|
>>>>
>>>> So it's clearly stable as y tends to infinity.
>>> What allows you to take out the i^2 term? That makes no sense.
>> You can take it out because it cancels out.
>
> Unless there's a reason to take it out you can't take it
> out. I'm pretty sure about that.

You can take out anything that isn't a function of the temperament you're looking at. The sum of squares of i is the same for any temperament you measure relative to these intervals. The ratio of errors of two different temperaments is the same regardless of the normalization. So the normalization is valid. The only difference is that you can make a better comparison between different sets of intervals for the same temperament. The normalized RMS error only tells you how well that temperament approximates those intervals instead of also telling you about the intervals themselves.
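
A sketch of that invariance on the lattice of 3s, for two hypothetical
temperaments with errors e1 and e2 in the 3:

    def rms(e, y):
        return (sum((i * e) ** 2 for i in range(1, y + 1)) / y) ** 0.5

    e1, e2 = 0.015, -0.007
    for y in (1, 5, 50, 500):
        print(y, rms(e1, y) / rms(e2, y))  # the ratio never changes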

>>>>>> Tenney height is a taxicab distance, isn't it?
<snip>
>> I found Tenney height here:
>>
>> http://tonalsoft.com/enc/t/tm-basis.aspx
>>
>> It says "if p / q is a positive rational number in reduced >> form, then the Tenney height is TH(p / q) = p � q"
>>
>> So great, there's no log. It still makes Tenney height by
>> this definition a monotonic function of Tenney harmonic
>> distance as Tenney himself defined it, so a limit on one is
>> the same as a limit on the other.
>
> OK. So?

So Tenney height is a taxicab distance.

I still say I choose my set of intervals by Tenney height. But I normalize by the RMS of the Euclidean complexity. I don't define it that way but that's what it is.

>>>>>> You could use the Euclidean distance, and I expect it'd make
>>>>>> sense, but I haven't looked at it.
>>>>> It's very closely related to taxicab. The balls are almost
>>>>> the same. In the 5-limit, it's like approximating a disk
>>>>> using n pixels, where n is the number of notes.
>>>> What have disks got to do with it?
>>> Euclidean distance gives you disks. Taxicab gives you
>>> pixelated disks.
>> Any set of points on a lattice will be pixelated.
>
> Yes but the Euclidean distance really does define circles in
> Euclidean space (the superspace in which the lattice resides).

Yes and the taxicab distance really does define rhombuses in a continuous taxicab geometry (as Wikipedia calls it):

http://en.wikipedia.org/wiki/Taxicab_metric

>> A taxicab
>> metric gives you a rhombus, doesn't it?
>
> Drawing rhombi might give exact correspondence with taxicab
> whereas you get only partial correspondence for circles. But
> in a sense any continuous shape implies points that aren't
> measurable with taxicab.

Wikipedia agrees with me. A continuous taxicab geometry gives exactly a rhombus the same way a continuous Euclidean geometry gives exactly a circle. Now, Wikipedia calls that rhombus a "circle", which shows you the wacky way mathematicians have with language.

The practical result of this is that a Euclidean limit would allow in more composite intervals if it's set for the same prime powers.

Graham

🔗Carl Lumma <carl@lumma.org>

4/13/2008 9:10:29 PM

Graham wrote...

>>>> No no no. The above formula is the MS *error* over all
>>>> unique intervals within a certain taxicab distance.
>>>> y=2 gives two intervals on this lattice: 3 and 9.
>>>
>>> No it doesn't. You missed 1.
>>
>> The error is over the intervals. There's no error on 1.
>
>The average is over the intervals and 1:1 is an interval.

It seems completely academic whether it's included or not.

>> Euclidean distance isn't restricted to integer values, but
>> it won't otherwise make a difference. However there is a
>> small difference on other lattices, as already mentioned.
>
>Lattice distances are restricted to discrete points.
>There's a significant difference between Euclidean and
>taxicab distances on other lattices.

I don't think it's significant for most applications.
Gene seemed to agree.

>But no difference at
>all on the example you gave so there was no need to specify
>which you were using.

OK, fine. It's actually what I chose for the regime.
The fact that my example is trivial doesn't have anything
to do with the price of tea.

>>>>> You can call the weights i^2 and the data e^2. If you
>>>>> normalize it as in the standard formula, you get
>>>>>
>>>>> ERMS[y] = sqrt(sum[i=1,y, i^2 * e^2] / sum[i=1,y, i^2])
>>>>>
>>>>> which is the same as
>>>>>
>>>>> ERMS[y] = |e|
>>>>>
>>>>> So it's clearly stable as y tends to infinity.
>>>> What allows you to take out the i^2 term? That makes no sense.
>>> You can take it out because it cancels out.
>>
>> Unless there's a reason to take it out you can't take it
>> out. I'm pretty sure about that.
>
>You can take out anything that isn't a function of the
>temperament you're looking at. The sum of squares of i is
>the same for any temperament you measure relative to these
>intervals.
>The ratio of errors of two different
>temperaments is the same regardless of the normalization.
>So the normalization is valid. //

The RMS error grows without bound as you consider more
intervals by lattice distance. But I gather you're saying
that for any two temperaments mapping the same JI, their
curves on the plot

y = RMS error
x = lattice distance for number of intervals considered

never cross, and therefore we can just compare them at
the smallest value of x. Is that about right?

While it seems true for the example I gave, I'm not so
sure about higher JI limits where the signs of the errors
come into play.

>>>>>>> Tenney height is a taxicab distance, isn't it?
><snip>
>>> I found Tenney height here:
>>>
>>> http://tonalsoft.com/enc/t/tm-basis.aspx
>>>
>>> It says "if p / q is a positive rational number in reduced
>>> form, then the Tenney height is TH(p / q) = p · q"
>>>
>>> So great, there's no log. It still makes Tenney height by
>>> this definition a monotonic function of Tenney harmonic
>>> distance as Tenney himself defined it so a limit on one is
>>> the same as a limit on the other.
>>
>> OK. So?
>
>So Tenney height is a taxicab distance.

With the right lattice scaling, which I didn't mention in
my setup.

>I still say I choose my set of intervals by Tenney height.
>But I normalize by the RMS of the Euclidean complexity. I
>don't define it that way but that's what it is.

OK.

>Yes and the taxicab distance really does define rhombuses in
>a continuous taxicab geometry (as Wikipedia calls it):
>
> http://en.wikipedia.org/wiki/Taxicab_metric

It mentions both discrete and continuous taxicab geometries,
of which the former is arguably the more pertinent here.

>The practical result of this is that a Euclidean limit would
>allow in more composite intervals if it's set for the same
>prime powers.

Yes.

-Carl

🔗Graham Breed <gbreed@gmail.com>

4/14/2008 4:49:12 AM

Carl Lumma wrote:
> Graham wrote...

>> Lattice distances are restricted to discrete points.
>> There's a significant difference between Euclidean and
>> taxicab distances on other lattices.
>
> I don't think it's significant for most applications.
> Gene seemed to agree.

For what applications? What did he agree about and why?

Theorem 3 does hold for a Euclidean cutoff. All you need is that the "circles" are symmetrical about the axes. So swapping positive and negative numbers doesn't affect the complexity. That means intervals will always be included in pairs as the theorem demands.
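
A minimal illustration of that symmetry on a 2-D lattice (the radius is
arbitrary; only the closure under inversion matters):

    ball = [(a, b) for a in range(-4, 5) for b in range(-4, 5)
            if (a * a + b * b) ** 0.5 <= 2.5]
    assert all((-a, -b) in ball for (a, b) in ball)
    print(len(ball), "lattice points, closed under inversion")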

>> The ratio of errors of two different
>> temperaments is the same regardless of the normalization.
>> So the normalization is valid. //
>
> The RMS error grows without bound as you consider more
> intervals by lattice distance. But I gather you're saying
> that for any two temperaments mapping the same JI, their
> curves on the plot
>
> y = RMS error
> x = lattice distance for number of intervals considered
>
> never cross, and therefore we can just compare them at
> the smallest value of x. Is that about right?

For your example, the "curves" are straight lines that pass through the origin. Or at least the errors are a points on such lines. Not crossing follows from the ratio being constant.

We can compare them at x=1. That may not be a real value. The smallest may be x=0 which is the grand unification point.

> While it seems true for the example I gave, I'm not so
> sure about higher JI limits where the signs of the errors
> come into play.

They won't be straight lines and the curves can cross. For example see Table 5 of the Composite PDF. 22-equal has a lower error than 19-equal in the 42-limit but they're reversed in the 56-limit. The order is the same whether or not they're normalized. Really, whether the curves cross has nothing to do with it.

The normalization should make each curve as near to flat as possible without losing information. And it should allow a finite error to exist for an infinitely complex set of intervals.

Graham

🔗Carl Lumma <carl@lumma.org>

4/14/2008 11:26:13 AM

Graham wrote...

>>> The ratio of errors of two different
>>> temperaments is the same regardless of the normalization.
>>> So the normalization is valid. //
>>
>> The RMS error grows without bound as you consider more
>> intervals by lattice distance. But I gather you're saying
>> that for any two temperaments mapping the same JI, their
>> curves on the plot
>>
>> y = RMS error
>> x = lattice distance for number of intervals considered
>>
>> never cross, and therefore we can just compare them at
>> the smallest value of x. Is that about right?
>
>For your example, the "curves" are straight lines that pass
>through the origin. Or at least the errors are points on
>such lines. Not crossing follows from the ratio being constant.

Yes.

>We can compare them at x=1.

OK.

>> While it seems true for the example I gave, I'm not so
>> sure about higher JI limits where the signs of the errors
>> come into play.
>
>They won't be straight lines and the curves can cross. For
>example see Table 5 of the Composite PDF. 22-equal has a
>lower error than 19-equal in the 42-limit but they're
>reversed in the 56-limit. The order is the same whether or
>not they're normalized. Really, whether the curves cross
>has nothing to do with it.
>
>The normalization should make each curve as near to flat as
>possible without losing information. And it should allow a
>finite error to exist for an infinitely complex set of
>intervals.

So you are or are not proposing a way to compare the RMS
errors of temperaments mapping > 1-D of JI?

-Carl

🔗Graham Breed <gbreed@gmail.com>

4/18/2008 11:25:36 PM

Carl Lumma wrote:
> Graham wrote...

>> The normalization should make each curve as near to flat as
>> possible without losing information. And it should allow a
>> finite error to exist for an infinitely complex set of
>> intervals.
>
> So you are or are not proposing a way to compare the RMS
> errors of temperaments mapping > 1-D of JI?

You can compare the errors if you want. For Tenney limits the result is that it doesn't matter much which limit you choose. For Farey limits I don't know if the comparison tells you anything useful, which may be because the normalization isn't the best.

Graham

🔗Carl Lumma <carl@lumma.org>

4/18/2008 11:38:14 PM

>>> The normalization should make each curve as near to flat as
>>> possible without losing information. And it should allow a
>>> finite error to exist for an infinitely complex set of
>>> intervals.
>>
>> So you are or are not proposing a way to compare the RMS
>> errors of temperaments mapping > 1-D of JI?
>
>You can compare the errors if you want.

I'm not aware of a way to do so.

-Carl

🔗Graham Breed <gbreed@gmail.com>

4/18/2008 11:47:58 PM

Carl Lumma wrote:
>>>> The normalization should make each curve as near to flat as
>>>> possible without losing information. And it should allow a
>>>> finite error to exist for an infinitely complex set of
>>>> intervals.
>>> So you are or are not proposing a way to compare the RMS
>>> errors of temperaments mapping > 1-D of JI?
>> You can compare the errors if you want.
>
> I'm not aware of a way to do so.

Well, how about looking at a pair of numbers and seeing which is larger?

Graham

🔗Carl Lumma <carl@lumma.org>

4/19/2008 12:11:14 AM

>>> You can compare the errors if you want.
>>
>> I'm not aware of a way to do so.
>
>Well, how about looking at a pair of numbers and seeing
>which is larger?

Since the RMSE of all intervals isn't bounded and the RMSE
of primes erases sign information, I don't see why I should
believe such a comparison would be meaningful in the way
the TOP/max comparison is. -C.

🔗Graham Breed <gbreed@gmail.com>

4/19/2008 12:14:36 AM

Carl Lumma wrote:
>>>> You can compare the errors if you want.
>>> I'm not aware of a way to do so.
>> Well, how about looking at a pair of numbers and seeing
>> which is larger?
>
> Since the RMSE of all intervals isn't bounded and the RMSE
> of primes erases sign information, I don't see why I should
> believe such a comparison would be meaningful in the way
> the TOP/max comparison is. -C.

Why do you care about all intervals?

In what sense is a TOP-max comparison meaningful?

Graham

🔗Carl Lumma <carl@lumma.org>

4/19/2008 11:26:21 AM

>>>>> You can compare the errors if you want.
>>>> I'm not aware of a way to do so.
>>>
>>> Well, how about looking at a pair of numbers and seeing
>>> which is larger?
>>
>> Since the RMSE of all intervals isn't bounded and the RMSE
>> of primes erases sign information, I don't see why I should
>> believe such a comparison would be meaningful in the way
>> the TOP/max comparison is. -C.
>
>Why do you care about all intervals?

It's true that I only care about all the consonant
chords I could ever form. But then I'd have to say
what those are.

>In what sense is a TOP-max comparison meaningful?

No matter what I'm interested in, it's got me covered.

-Carl

🔗Graham Breed <gbreed@gmail.com>

4/19/2008 9:42:30 PM

Carl Lumma wrote:
>>>>>> You can compare the errors if you want.
>>>>> I'm not aware of a way to do so.
>>>> Well, how about looking at a pair of numbers and seeing
>>>> which is larger?
>>> Since the RMSE of all intervals isn't bounded and the RMSE
>>> of primes erases sign information, I don't see why I should
>>> believe such a comparison would be meaningful in the way
>>> the TOP/max comparison is. -C.
>> Why do you care about all intervals?
>
> It's true that I only care about all the consonant
> chords I could ever form. But then I'd have to say
> what those are.

Then why did you ask about comparing the errors of different sets of intervals?

There's no way to avoid saying what intervals you want with an RMS. An infinite set is guaranteed to be too large. And you still need to specify how you approach infinity because it does make a difference to the result.

Alternatively, you could use the normalized RMS, and compare the errors for different sets of intervals you might be interested in. That gives you an idea of the error and the uncertainty in it. Fortunately the uncertainty is quite small for unweighted Tenney limits (and, by implication, different related limits and weightings).

>> In what sense is a TOP-max comparison meaningful?
>
> No matter what I'm interested in, it's got me covered.

I'm so happy for you! But you said it was meaningful. How is it meaningful?

Graham