
Exploring parametric badness

🔗Graham Breed <gbreed@gmail.com>

1/2/2009 8:04:37 AM

I've finally written a PDF about the badness measure I thought up the
other year. This is part of a series of articles that was supposed to
be finished by now, but still has a long way to go.

http://x31eq.com/badness.pdf

As usual, comments are welcome, and maybe you can identify some of the
nameless temperaments.

Graham

🔗Carl Lumma <carl@lumma.org>

1/2/2009 2:45:44 PM

Very good to see you carrying on. Congratulations! I love the
first-person style and professional typesetting. Have you considered
submitting one of these to a journal?

For my own part, I've decided to operate in 'bonehead simple' mode
with tuning stuff for the foreseeable future (even more boneheaded
than before!). I'll stick to things I can understand in one
sentence and 2 seconds, and spend my energy putting them together.
A hacker's approach, if you will. It suits my limited brainpower
and time resources, and everyone will be better off.

I looked up TOP-RMS in the glossary, and it pointed me to primerr.
I searched that doc for "TOP-RMS", and found it mainly in the
legends of charts. Oh, and you're missing a space in
"(See Primerrfor details.)" Also I think your citation style
isn't consistent throughout the paper.

So, here's my code for TOP-max damage of ETs

(define top-damage
  (lambda (val basis)
    (let* ((weights (map log2 basis))
           (ji (map (lambda (x) (* x 1200)) weights))
           (errors (map abs (map - ji (top-val val basis)))))
      (apply max (map / errors weights)))))

I'm sure you can follow that -- the only external function is
top-val, and it returns one of those bastard vals containing
nonintegers -- in this case, cents of the top tuning of the val.
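
In case it helps, here's the same calculation sketched in Python. Since top-val isn't shown, I've filled in a hypothetical stand-in of my own that computes the TOP-max tuning of an ET val directly, on the assumption that the optimal stretch makes the largest positive and negative weighted errors equal; that stand-in is my guess, not the original function.

```python
import math

def top_val(val, basis):
    # Hypothetical stand-in for the external top-val: the TOP-max tuning
    # of an ET val, in cents. Assumes the optimal stretch equalizes the
    # largest positive and negative weighted errors, which gives
    # step = 2/(max(r) + min(r)) octaves, where r_i = val_i / log2(p_i).
    r = [v / math.log2(p) for v, p in zip(val, basis)]
    step = 2 / (max(r) + min(r))          # octaves per step
    return [1200 * v * step for v in val]

def top_damage(val, basis):
    # Direct translation of the Scheme: max weighted error of the primes,
    # in cents per octave of interval complexity.
    weights = [math.log2(p) for p in basis]
    ji = [1200 * w for w in weights]
    tuned = top_val(val, basis)
    errors = [abs(j - t) for j, t in zip(ji, tuned)]
    return max(e / w for e, w in zip(errors, weights))
```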

What do I have to do to turn this into TOP-RMS?

If this is the answer I can't quite parse it.

(let* ((mapping (append seed-mapping (list next-guess)))
       (weighted-size (/ (* next-guess step-size) (car more-primes)))
       ; the next few lines determine the type of error
       (tot (+ tot weighted-size))
       (tot2 (+ tot2 (* weighted-size weighted-size)))
       (error (- (length mapping) (/ (* tot tot) tot2))))

-Carl


🔗Herman Miller <hmiller@IO.COM>

1/2/2009 5:24:57 PM

Graham Breed wrote:
> I've finally written a PDF about the badness measure I thought up the
> other year. This is part of a series of articles that was supposed to
> be finished by now, but still has a long way to go.
>
> http://x31eq.com/badness.pdf
>
> As usual, comments are welcome, and maybe you can identify some of the
> nameless temperaments.

http://tonalsoft.com/enc/e/equal-temperament.aspx has a list of 5-limit temperaments. 53&270 is vulture. I've used "vulture" in the 7-limit for a 53&58 temperament that has a different mapping of prime 7 -- 53&270 doesn't look that great as a 7-limit temperament.

53&58 vulture [<1, 0, -6, 4], <0, 4, 21, -3]>
TOP-max P = 1199.274449, G = 475.411671
TOP-RMS P = 1199.307143, G = 475.361486

53&270 [<1, 0, -6, 25], <0, 4, 21, -56]>
TOP-max P = 1199.923914, G = 475.518899
TOP-RMS P = 1199.904982, G = 475.513479

7-limit 152&171 is enneadecal.
[<19, 30, 44, 53], <0, 1, 1, 3]>
TOP-max P = 63.160319, G = 7.152770
TOP-RMS P = 63.159903, G = 7.143746

11-limit 72&152 is octoid. I don't recall whether "octoid" was originally proposed as a name for a 7-limit temperament (in which case you'll want to call it octoid'), or an 11-limit one.

[<8, 13, 19, 23, 28], <0, -3, -4, -5, -3]>
TOP-max P = 150.033735, G = 16.238470
TOP-RMS P = 149.993205, G = 16.037109

🔗Graham Breed <gbreed@gmail.com>

1/2/2009 7:14:43 PM

2009/1/3 Carl Lumma <carl@lumma.org>:
> Very good to see you carrying on. Congratulations! I love the
> first-person style and professional typesetting. Have you considered
> submitting one of these to a journal?

Bill Sethares suggested the complete search one might be publishable.
But I haven't done any work towards it -- it's extra hassle. The
priority is getting things documented. (Or "published" if you like.)

> For my own part, I've decided to operate in 'bonehead simple' mode
> with tuning stuff for the foreseeable future (even more boneheaded
> than before!). I'll stick to things I can understand in one
> sentence and 2 seconds, and spend my energy putting them together.
> A hacker's approach, if you will. It suits my limited brainpower
> and time resources, and everyone will be better off.
>
> I looked up TOP-RMS in the glossary, and it pointed me to primerr.
> I searched that doc for "TOP-RMS", and found it mainly in the
> legends of charts. Oh, and you're missing a space in
> "(See Primerrfor details.)" Also I think your citation style
> isn't consistent throughout the paper.

That looks like something I should have spotted a year ago. Anyway,
I've changed the glossary to point to section 2 because that's all
about TOP-RMS.

I think I use the Harvard citation standard except for the special
cases where I say I don't.

> So, here's my code for TOP-max damage of ETs
>
> (define top-damage
> (lambda (val basis)
> (let*
> ((weights (map log2 basis))
> (ji (map (lambda (x) (* x 1200)) weights))
> (errors (map abs (map - ji (top-val val basis)))))
> (apply max (map / errors weights)))))
>
> I'm sure you can follow that -- the only external function is
> top-val, and it returns one of those bastard vals containing
> nonintegers -- in this case, cents of the top tuning of the val.
>
> What do I have to do to turn this into TOP-RMS?

I'll guess replace "max" with "rms". If there's no RMS function it needs to:

- square every element in the list
- add them up
- divide by the length of the list
- square root
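
In Python, for concreteness, that recipe is (my translation, not from the original code):

```python
import math

def rms(values):
    # square every element, average them, take the square root
    return math.sqrt(sum(x * x for x in values) / len(values))
```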

> If this is the answer I can't quite parse it.
>
> (let* ((mapping (append seed-mapping (list next-guess)))
> (weighted-size (/ (* next-guess step-size) (car more-primes)))
> ; the next few lines determine the type of error
> (tot (+ tot weighted-size))
> (tot2 (+ tot2 (* weighted-size weighted-size)))
> (error (- (length mapping) (/ (* tot tot) tot2))))

No, that's part of the code for producing all equal temperaments
within a given error.

Graham

🔗Graham Breed <gbreed@gmail.com>

1/2/2009 7:17:56 PM

2009/1/3 Herman Miller <hmiller@io.com>:

> http://tonalsoft.com/enc/e/equal-temperament.aspx has a list of 5-limit
> temperaments. 53&270 is vulture. I've used "vulture" in the 7-limit for
> a 53&58 temperament that has a different mapping of prime 7 -- 53&270
> doesn't look that great as a 7-limit temperament.
>
> 53&58 vulture [<1, 0, -6, 4], <0, 4, 21, -3]>
> TOP-max P = 1199.274449, G = 475.411671
> TOP-RMS P = 1199.307143, G = 475.361486
>
> 53&270 [<1, 0, -6, 25], <0, 4, 21, -56]>
> TOP-max P = 1199.923914, G = 475.518899
> TOP-RMS P = 1199.904982, G = 475.513479

Thanks! There's a kind of vulture in the 19-limit as well.

> 7-limit 152&171 is enneadecal.
> [<19, 30, 44, 53], <0, 1, 1, 3]>
> TOP-max P = 63.160319, G = 7.152770
> TOP-RMS P = 63.159903, G = 7.143746
>
> 11-limit 72&152 is octoid. I don't recall whether "octoid" was
> originally proposed as a name for a 7-limit temperament (in which case
> you'll want to call it octoid'), or an 11-limit one.
>
> [<8, 13, 19, 23, 28], <0, -3, -4, -5, -3]>
> TOP-max P = 150.033735, G = 16.238470
> TOP-RMS P = 149.993205, G = 16.037109

I put it in the 11-limit. It also figures in the 13 and 17 limits.
(The computer sorts all that out.)

Graham

🔗Graham Breed <gbreed@gmail.com>

1/2/2009 9:27:24 PM

2009/1/3 Carl Lumma <carl@lumma.org>:

> So, here's my code for TOP-max damage of ETs
>
> (define top-damage
> (lambda (val basis)
> (let*
> ((weights (map log2 basis))
> (ji (map (lambda (x) (* x 1200)) weights))
> (errors (map abs (map - ji (top-val val basis)))))
> (apply max (map / errors weights)))))
>
> I'm sure you can follow that -- the only external function is
> top-val, and it returns one of those bastard vals containing
> nonintegers -- in this case, cents of the top tuning of the val.

I don't see the optimization there. TOP-max should be
(max(E)-min(E))/(2+max(E)+min(E)) in algebraic form.

> What do I have to do to turn this into TOP-RMS?

For the optimized calculation, it's (untested and probably unbalanced
parentheses)

(/ (std (map / errors weights))
   (rms (map / (top-val val basis) weights)))

where std is the standard deviation and rms is, well, the RMS. I
assume they're defined to take a list so you don't need "apply". For
the standard deviation you can subtract the mean from each element in
the list and then calculate the RMS. If the rest isn't working, try

(std (map / errors weights))

to get the STD error (which is almost TOP-RMS).

I think I have the right calculation for "w". It's easier not to
calculate the errors in both cases.

(let ((w (map / (top-val val basis) weights)))
  (/ (std w) (rms w)))

Then multiply by 1200 to get cents per octave.

The equivalent for TOP-max is

(let ((w (map / (top-val val basis) weights)))
  (/ (- (apply max w) (apply min w))
     (+ (apply max w) (apply min w))))
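
If I'm reading these snippets right, both ratios are scale-invariant: any overall stretch multiplies every element of w by the same constant, which cancels. So for a quick check you can take w as val_i / log2(p_i) directly, without computing the TOP tuning at all. A Python sketch on that assumption:

```python
import math

def weighted_map(val, primes):
    # w_i = val_i / log2(p_i); any overall tuning stretch scales every
    # w_i by the same constant, which cancels in the ratios below, so
    # the TOP tuning itself isn't needed here.
    return [v / math.log2(p) for v, p in zip(val, primes)]

def top_rms_error(val, primes):
    # std(w)/rms(w), times 1200 for cents per octave
    w = weighted_map(val, primes)
    mean = sum(w) / len(w)
    rms = math.sqrt(sum(x * x for x in w) / len(w))
    std = math.sqrt(rms * rms - mean * mean)
    return 1200 * std / rms

def top_max_error(val, primes):
    # (max(w) - min(w))/(max(w) + min(w)), times 1200
    w = weighted_map(val, primes)
    return 1200 * (max(w) - min(w)) / (max(w) + min(w))
```

For 12-ET in the 5-limit this gives about 3.56 cents/octave for TOP-max and about 3.11 for TOP-RMS, with TOP-RMS the smaller of the two as expected.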

I think these are the correct statistical functions:

(define (sqr x) (* x x))
(define (mean s) (/ (apply + s) (length s)))
(define (rms s) (sqrt (mean (map sqr s))))
(define (std s) (sqrt (- (sqr (rms s)) (sqr (mean s)))))

Graham

🔗Carl Lumma <carl@lumma.org>

1/2/2009 10:22:01 PM

Graham wrote:

>I think I use the Harvard citation standard except for the special
>cases where I say I don't.

In the recent doc, I saw "(Smith sevlat)" and "Smith (sevlat)" I
think, though it wasn't sevlat both times. Neither one was linked,
while most of the other citations were.

>> So, here's my code for TOP-max damage of ETs
>>
>> (define top-damage
>> (lambda (val basis)
>> (let*
>> ((weights (map log2 basis))
>> (ji (map (lambda (x) (* x 1200)) weights))
>> (errors (map abs (map - ji (top-val val basis)))))
>> (apply max (map / errors weights)))))
>>
>> I'm sure you can follow that -- the only external function is
>> top-val, and it returns one of those bastard vals containing
>> nonintegers -- in this case, cents of the top tuning of the val.
>>
>> What do I have to do to turn this into TOP-RMS?
>
>I'll guess replace "max" with "rms".

Is that all it is? The rms of the weighted errors of the
identities? I think we've been over this before. :(

If so, it occurs to me that my other question about TOP-RMS that
IIRC you said you still hadn't fully answered boils down to
this: If a/b <= c/d, where a and c are errors and b and d are
weights, then for TOP-max

a/b <= (a+c)/(b+d) <= c/d

in the worst case, when errors add. TOP-RMS is then

(a+c)/(b+d) ?? sqrt((a/b)^2 + (c/d)^2)

Let's assume ?? is <=, which is what we want. So,

            ( a^2 d^2 + c^2 b^2 )
 ... <= sqrt( ----------------- )
            (      b^2 d^2      )

 (a+c)^2      a^2 d^2 + c^2 b^2
 ------- <=  -----------------
 (b+d)^2          b^2 d^2

 a^2 + 2ac + c^2      a^2 d^2 + c^2 b^2
 --------------- <=  -----------------
 b^2 + 2bd + d^2          b^2 d^2

 (a^2 b^2 c^2) + 2(a^2 c^2 bd) <= (c^4 a^2) + (d^4 b^2) +
     2(c^3 a^2 d) + 2(d^3 b^2 c) + (b^2 c^2 d^2)

Which seems quite true. Unless I made a mistake, which I
probably did. Does this make any sense?

>> If this is the answer I can't quite parse it.
>>
>> (let* ((mapping (append seed-mapping (list next-guess)))
>> (weighted-size (/ (* next-guess step-size) (car more-primes)))
>> ; the next few lines determine the type of error
>> (tot (+ tot weighted-size))
>> (tot2 (+ tot2 (* weighted-size weighted-size)))
>> (error (- (length mapping) (/ (* tot tot) tot2))))
>
>No, that's part of the code for producing all equal temperaments
>within a given error.

Could you copy the relevant part of the code here?

-Carl

🔗Carl Lumma <carl@lumma.org>

1/2/2009 10:28:59 PM

Graham wrote:

>> So, here's my code for TOP-max damage of ETs
>>
>> (define top-damage
>> (lambda (val basis)
>> (let*
>> ((weights (map log2 basis))
>> (ji (map (lambda (x) (* x 1200)) weights))
>> (errors (map abs (map - ji (top-val val basis)))))
>> (apply max (map / errors weights)))))
>>
>> I'm sure you can follow that -- the only external function is
>> top-val, and it returns one of those bastard vals containing
>> nonintegers -- in this case, cents of the top tuning of the val.
>
>I don't see the optimization there. TOP-max should be
>(max(E)-min(E))/(2+max(E)+min(E)) in algebraic form.

I think you're talking about the usual trick for catching
errors of the interior intervals, e.g. 5:3. But I thought it
wasn't needed for TOP, because in the worst case the errors
add, and so do the weights.

Anyway, let's compare results. I've used this on several
occasions and it seemed to work.

Successive improvements in 17-limit TOP damage.

3 5 7 11 13 17
----------------------------------
2 : 33.0 77.3 77.3 77.3 77.6 77.6
3 : 30.2 30.2 61.0 61.0 61.0 61.0
4 : 33.0 33.0 33.0 40.0 41.0 41.0
5 : 5.7 19.8 21.4 30.2 30.2 32.8
6 : 30.2 30.2 30.2 30.2 35.6 35.6
7 : 5.1 9.4 20.0 20.0 20.0 20.0
9 : 11.2 14.2 14.2 14.2 14.2 14.7
10 : 5.7 11.4 11.4 12.7 12.7 12.7
12 : 0.6 3.6 6.1 7.6 12.5 12.5
15 : 5.7 5.7 7.2 7.2 7.2 8.7
19 : 2.3 2.3 3.8 6.3 6.3 6.4
22 : 2.2 3.2 3.3 3.3 5.3 5.3
26 : 3.1 3.7 3.8 4.1 4.1 4.1
31 : 1.6 1.8 1.8 1.8 3.1 3.1
41 : 0.2 1.4 1.4 1.9 2.4 2.7
46 : 0.8 1.1 1.7 1.7 1.9 1.9
58 : 0.5 1.5 1.5 1.5 1.5 1.6
72 : 0.6 0.6 0.6 0.6 1.0 1.0

>> What do I have to do to turn this into TOP-RMS?
>
>For the optimized calculation, it's (untested and probably
>unbalanced parentheses)

...I'll come back to this once some of the more basic outstanding
questions get put to bed.

-Carl

🔗Graham Breed <gbreed@gmail.com>

1/4/2009 4:11:56 AM

2009/1/3 Carl Lumma <carl@lumma.org>:
> Graham wrote:
>
>>I think I use the Harvard citation standard except for the special
>>cases where I say I don't.
>
> In the recent doc, I saw "(Smith sevlat)" and "Smith (sevlat)" I
> think, though it wasn't sevlat both times. Neither one was linked,
> while most of the other citations were.

One's for referring to the article directly, like "Smith said this".
The other is for when you add the citation to something else, like
"lattices are really cool. (Smith)"

>>> So, here's my code for TOP-max damage of ETs
>>>
>>> (define top-damage
>>> (lambda (val basis)
>>> (let*
>>> ((weights (map log2 basis))
>>> (ji (map (lambda (x) (* x 1200)) weights))
>>> (errors (map abs (map - ji (top-val val basis)))))
>>> (apply max (map / errors weights)))))
>>>
>>> I'm sure you can follow that -- the only external function is
>>> top-val, and it returns one of those bastard vals containing
>>> nonintegers -- in this case, cents of the top tuning of the val.
>>>
>>> What do I have to do to turn this into TOP-RMS?
>>
>>I'll guess replace "max" with "rms".
>
> Is that all it is? The rms of the weighted errors of the
> identities? I think we've been over this before. :(

The "O" in TOP the way I do it is "optimal". So yes, it's the RMS of
the weighted errors of the identities, but it should be the optimal
one for any tuning of the temperament (class).

> If so, it occurs to me that my other question about TOP-RMS that
> IIRC you said you still hadn't fully answered boils down to
> this: If a/b <= c/d, where a and c are errors and b and d are
> weights, then for TOP-max
>
> a/b <= (a+c)/(b+d) <= c/d
>
> in the worst case, when errors add. TOP-RMS is then
>
> (a+c)/(b+d) ?? sqrt((a/b)^2 + (c/d)^2)
>
> Let's assume ?? is <=, which is what we want. So,
>
>             ( a^2 d^2 + c^2 b^2 )
>  ... <= sqrt( ----------------- )
>             (      b^2 d^2      )
>
>  (a+c)^2      a^2 d^2 + c^2 b^2
>  ------- <=  -----------------
>  (b+d)^2          b^2 d^2
>
>  a^2 + 2ac + c^2      a^2 d^2 + c^2 b^2
>  --------------- <=  -----------------
>  b^2 + 2bd + d^2          b^2 d^2
>
>  (a^2 b^2 c^2) + 2(a^2 c^2 bd) <= (c^4 a^2) + (d^4 b^2) +
>      2(c^3 a^2 d) + 2(d^3 b^2 c) + (b^2 c^2 d^2)
>
> Which seems quite true. Unless I made a mistake, which I
> probably did. Does this make any sense?

I haven't followed it, but it should be. The logic for RMS being less
than the max value is that RMS is a kind of average. The average is
always going to be less than the worst case. So if you take the
TOP-max tuning, the weighted RMS must be less than the worst case,
which is the TOP-max error. The TOP-RMS will be for a different
tuning but that, also being optimal, can only be smaller again.

>>> If this is the answer I can't quite parse it.
>>>
>>> (let* ((mapping (append seed-mapping (list next-guess)))
>>> (weighted-size (/ (* next-guess step-size) (car more-primes)))
>>> ; the next few lines determine the type of error
>>> (tot (+ tot weighted-size))
>>> (tot2 (+ tot2 (* weighted-size weighted-size)))
>>> (error (- (length mapping) (/ (* tot tot) tot2))))
>>
>>No, that's part of the code for producing all equal temperaments
>>within a given error.
>
> Could you copy the relevant part of the code here?

That's what I put in the other message. I don't think I have Scheme
code for finding the TOP-RMS of ETs because I trust these functions to
generate ETs within the limit. Explaining those functions isn't easy,
I'm afraid. I have to work it out on paper and even then there's some
trial and error.

Graham

🔗Graham Breed <gbreed@gmail.com>

1/4/2009 4:21:54 AM

2009/1/3 Carl Lumma <carl@lumma.org>:
> Graham wrote:

>>I don't see the optimization there. TOP-max should be
>>(max(E)-min(E))/(2+max(E)+min(E)) in algebraic form.
>
> I think you're talking about the usual trick for catching
> errors of the interior intervals, e.g. 5:3. But I thought it
> wasn't needed for TOP, because in the worst case the errors
> add, and so do the weights.

It isn't needed for TOP because the optimization takes care of it.
That formula takes you directly to the optimal error without needing
the optimal scale stretch. It happens that you can simplify it to

[max(E) - min(E)]/2

which ignores the optimal scale stretch but catches the interior
intervals anyway.
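
A quick numerical check of the two forms, taking E as the proportional weighted errors at a pure-octave reference tuning (my reading of the formula; the full form is invariant to the reference stretch, so any reference works). For 12-ET in the 5-limit the two agree to within about 0.01 cents/octave:

```python
import math

def weighted_errors(val, primes):
    # proportional weighted errors E_i at a pure-octave reference:
    # E_i = val_i / (val_0 * log2(p_i)) - 1, assuming primes[0] == 2
    return [v / (val[0] * math.log2(p)) - 1 for v, p in zip(val, primes)]

E = weighted_errors([12, 19, 28], [2, 3, 5])
# full formula: includes the optimal scale stretch
full = 1200 * (max(E) - min(E)) / (2 + max(E) + min(E))
# simplified formula: ignores the stretch but still catches
# the interior intervals
simple = 1200 * (max(E) - min(E)) / 2
```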

> Anyway, let's compare results. I've used this on several
> occasions and it seemed to work.
>
> Successive improvements in 17-limit TOP damage.
>
> 3 5 7 11 13 17
> ----------------------------------
> 2 : 33.0 77.3 77.3 77.3 77.6 77.6
> 3 : 30.2 30.2 61.0 61.0 61.0 61.0
> 4 : 33.0 33.0 33.0 40.0 41.0 41.0
> 5 : 5.7 19.8 21.4 30.2 30.2 32.8
> 6 : 30.2 30.2 30.2 30.2 35.6 35.6
> 7 : 5.1 9.4 20.0 20.0 20.0 20.0
> 9 : 11.2 14.2 14.2 14.2 14.2 14.7
> 10 : 5.7 11.4 11.4 12.7 12.7 12.7
> 12 : 0.6 3.6 6.1 7.6 12.5 12.5
> 15 : 5.7 5.7 7.2 7.2 7.2 8.7
> 19 : 2.3 2.3 3.8 6.3 6.3 6.4
> 22 : 2.2 3.2 3.3 3.3 5.3 5.3
> 26 : 3.1 3.7 3.8 4.1 4.1 4.1
> 31 : 1.6 1.8 1.8 1.8 3.1 3.1
> 41 : 0.2 1.4 1.4 1.9 2.4 2.7
> 46 : 0.8 1.1 1.7 1.7 1.9 1.9
> 58 : 0.5 1.5 1.5 1.5 1.5 1.6
> 72 : 0.6 0.6 0.6 0.6 1.0 1.0

That does look right, and certainly not what you'd get if you weren't
optimizing. Here are my figures

2: 33.0 77.3 77.3 77.3 79.6 79.6
3: 30.2 30.2 39.8 39.8 46.7 46.7
4: 33.0 33.0 33.0 37.5 37.5 37.5
5: 5.7 19.8 21.4 30.2 25.5 25.5
6: 30.2 30.2 30.2 30.2 30.2 30.2
7: 5.1 9.4 20.2 20.2 20.0 20.0
9: 11.2 14.2 14.2 14.2 14.2 14.7
10: 5.7 11.4 11.4 12.7 12.7 12.7
12: 0.6 3.6 6.1 7.6 8.6 8.6
15: 5.7 5.7 7.2 7.2 7.2 8.3
19: 2.3 2.3 3.8 6.3 6.3 6.7
22: 2.2 3.2 3.3 3.3 5.3 5.3
26: 3.1 3.7 3.8 4.1 4.1 4.1
31: 1.6 1.8 1.8 1.8 3.1 3.1
41: 0.2 1.4 1.4 1.9 2.4 2.7
46: 0.8 1.1 1.7 1.7 1.9 1.9
58: 0.5 1.5 1.5 1.5 1.5 1.6
72: 0.6 0.6 0.6 0.6 1.0 1.0

Graham

🔗Carl Lumma <carl@lumma.org>

1/4/2009 8:22:27 PM

Graham wrote:

>>>> What do I have to do to turn this into TOP-RMS?
>>>
>>>I'll guess replace "max" with "rms".
>>
>> Is that all it is? The rms of the weighted errors of the
>> identities? I think we've been over this before. :(
>
>The "O" in TOP the way I do it is "optimal". So yes, it's the RMS of
>the weighted errors of the identities, but it should be the optimal
>one for any tuning of the temperament (class).

Understood.

>> If so, it occurs to me that my other question about TOP-RMS that
>> IIRC you said you still hadn't fully answered boils down to
>> this: If a/b <= c/d, where a and c are errors and b and d are
>> weights, then for TOP-max
>>
>> a/b <= (a+c)/(b+d) <= c/d
>>
>> in the worst case, when errors add. TOP-RMS is then
>>
>> (a+c)/(b+d) ?? sqrt((a/b)^2 + (c/d)^2)
>>
>> Let's assume ?? is <=, which is what we want. So,
>>
>>             ( a^2 d^2 + c^2 b^2 )
>>  ... <= sqrt( ----------------- )
>>             (      b^2 d^2      )
>>
>>  (a+c)^2      a^2 d^2 + c^2 b^2
>>  ------- <=  -----------------
>>  (b+d)^2          b^2 d^2
>>
>>  a^2 + 2ac + c^2      a^2 d^2 + c^2 b^2
>>  --------------- <=  -----------------
>>  b^2 + 2bd + d^2          b^2 d^2
>>
>>  (a^2 b^2 c^2) + 2(a^2 c^2 bd) <= (c^4 a^2) + (d^4 b^2) +
>>      2(c^3 a^2 d) + 2(d^3 b^2 c) + (b^2 c^2 d^2)
>>
>> Which seems quite true. Unless I made a mistake, which I
>> probably did. Does this make any sense?
>
>I haven't followed it, but it should be. The logic for RMS being
>less than the max value is that RMS is a kind of average.

The above is an argument that, e.g. in the 5-limit, the RMS
of the Tenney-weighted errors of 3 & 5 would be *greater than*
or equal to the weighted error of 5/3 or 15/8. That is,
RMS-primes would give an upper bound on the weighted errors of
these compound intervals. Ideally this would be extended to
all intervals, as in Max-primes. Otherwise, we might start
looking for a weighting (other than log Tenney Height) which
satisfied such an inequality for RMS-primes.

>The average is always going to be less than the worst case. So
>if you take the TOP-max tuning, the weighted RMS must be less
>than the worst case, which is the TOP-max error. The TOP-RMS
>will be for a different tuning but that, also being optimal, can
>only be smaller again.

Right, but this sounds like it is not desirable. Don't we
want an *upper* bound to be quickly computable from the primes?

-Carl

🔗Carl Lumma <carl@lumma.org>

1/4/2009 9:41:10 PM

Graham wrote:

>>>>(define top-damage
>>>> (lambda (val basis)
>>>> (let*
>>>> ((weights (map log2 basis))
>>>> (ji (map (lambda (x) (* x 1200)) weights))
>>>> (errors (map abs (map - ji (top-val val basis)))))
>>>> (apply max (map / errors weights)))))
>>>>
>>>
>>> I don't see the optimization there. TOP-max should be
>>> (max(E)-min(E))/(2+max(E)+min(E)) in algebraic form.
>>
>> I think you're talking about the usual trick for catching
>> errors of the interior intervals, e.g. 5:3. But I thought it
>> wasn't needed for TOP, because in the worst case the errors
>> add, and so do the weights.
>
>It isn't needed for TOP because the optimization takes care of it.
>That formula takes you directly to the optimal error without needing
>the optimal scale stretch. It happens that you can simplify it to
>
>[max(E) - min(E)]/2
>
>which ignores the optimal scale stretch but catches the interior
>intervals anyway.

As shown in the code, I simply take the max weighted error among
the primes to generate the below table. Not max-min/2.

>> Anyway, let's compare results. I've used this on several
>> occasions and it seemed to work.
>>
>> Successive improvements in 17-limit TOP damage.
>>
>> 3 5 7 11 13 17
>> ----------------------------------
>> 2 : 33.0 77.3 77.3 77.3 77.6 77.6
>> 3 : 30.2 30.2 61.0 61.0 61.0 61.0
>> 4 : 33.0 33.0 33.0 40.0 41.0 41.0
>> 5 : 5.7 19.8 21.4 30.2 30.2 32.8
>> 6 : 30.2 30.2 30.2 30.2 35.6 35.6
>> 7 : 5.1 9.4 20.0 20.0 20.0 20.0
>> 9 : 11.2 14.2 14.2 14.2 14.2 14.7
>> 10 : 5.7 11.4 11.4 12.7 12.7 12.7
>> 12 : 0.6 3.6 6.1 7.6 12.5 12.5
>> 15 : 5.7 5.7 7.2 7.2 7.2 8.7
>> 19 : 2.3 2.3 3.8 6.3 6.3 6.4
>> 22 : 2.2 3.2 3.3 3.3 5.3 5.3
>> 26 : 3.1 3.7 3.8 4.1 4.1 4.1
>> 31 : 1.6 1.8 1.8 1.8 3.1 3.1
>> 41 : 0.2 1.4 1.4 1.9 2.4 2.7
>> 46 : 0.8 1.1 1.7 1.7 1.9 1.9
>> 58 : 0.5 1.5 1.5 1.5 1.5 1.6
>> 72 : 0.6 0.6 0.6 0.6 1.0 1.0
>
>That does look right, and certainly not what you'd get if you weren't
>optimizing. Here are my figures
>
> 2: 33.0 77.3 77.3 77.3 79.6 79.6
> 3: 30.2 30.2 39.8 39.8 46.7 46.7
> 4: 33.0 33.0 33.0 37.5 37.5 37.5
> 5: 5.7 19.8 21.4 30.2 25.5 25.5
> 6: 30.2 30.2 30.2 30.2 30.2 30.2
> 7: 5.1 9.4 20.2 20.2 20.0 20.0
> 9: 11.2 14.2 14.2 14.2 14.2 14.7
>10: 5.7 11.4 11.4 12.7 12.7 12.7
>12: 0.6 3.6 6.1 7.6 8.6 8.6
>15: 5.7 5.7 7.2 7.2 7.2 8.3
>19: 2.3 2.3 3.8 6.3 6.3 6.7
>22: 2.2 3.2 3.3 3.3 5.3 5.3
>26: 3.1 3.7 3.8 4.1 4.1 4.1
>31: 1.6 1.8 1.8 1.8 3.1 3.1
>41: 0.2 1.4 1.4 1.9 2.4 2.7
>46: 0.8 1.1 1.7 1.7 1.9 1.9
>58: 0.5 1.5 1.5 1.5 1.5 1.6
>72: 0.6 0.6 0.6 0.6 1.0 1.0

I get 12.49595001136964 for the 17-limit TOP damage of
12-ET. You're getting 8.6. I'm using only patent vals
here, I think. <12 19 28 34 42 44 49| in this case.

Yup, that looks like the problem. You're no doubt
using <12 19 28 34 42 45 49| to get 8.599414873810039.

Let's try 19-ET's 17-limit damage. I used
<19 30 44 53 66 70 78| to get 6.441057506662558, and
this time it happens to be the best val and your figure
is higher at 6.7. There are a few like this. Know why?

-Carl

🔗Graham Breed <gbreed@gmail.com>

1/4/2009 9:46:03 PM

2009/1/5 Carl Lumma <carl@lumma.org>:

> Let's try 19-ET's 17-limit damage. I used
> <19 30 44 53 66 70 78| to get 6.441057506662558, and
> this time it happens to be the best val and your figure
> is higher at 6.7. There are a few like this. Know why?

I was finding the best mapping for TOP-RMS. Here's the table using
the best TOP-max mappings:

2: 33.0 77.3 77.3 77.3 77.6 77.6
3: 30.2 30.2 39.8 39.8 45.3 45.3
4: 33.0 33.0 33.0 37.5 37.5 37.5
5: 5.7 19.8 21.4 25.5 25.5 25.5
6: 30.2 30.2 30.2 30.2 30.2 30.2
7: 5.1 9.4 20.0 20.0 20.0 20.0
9: 11.2 14.2 14.2 14.2 14.2 14.7
10: 5.7 11.4 11.4 12.7 12.7 12.7
12: 0.6 3.6 6.1 7.6 8.6 8.6
15: 5.7 5.7 7.2 7.2 7.2 8.3
19: 2.3 2.3 3.8 6.3 6.3 6.4
22: 2.2 3.2 3.3 3.3 5.3 5.3
26: 3.1 3.7 3.8 4.1 4.1 4.1
31: 1.6 1.8 1.8 1.8 3.1 3.1
41: 0.2 1.4 1.4 1.9 2.4 2.7
46: 0.8 1.1 1.7 1.7 1.9 1.9
58: 0.5 1.5 1.5 1.5 1.5 1.6
72: 0.6 0.6 0.6 0.6 1.0 1.0

Graham

🔗Graham Breed <gbreed@gmail.com>

1/4/2009 11:18:12 PM

2009/1/5 Carl Lumma <carl@lumma.org>:
> Graham wrote:

>>> If so, it occurs to me that my other question about TOP-RMS that
>>> IIRC you said you still hadn't fully answered boils down to
>>> this: If a/b <= c/d, where a and c are errors and b and d are
>>> weights, then for TOP-max
>>>
>>> a/b <= (a+c)/(b+d) <= c/d
>>>
>>> in the worst case, when errors add. TOP-RMS is then
>>>
>>> (a+c)/(b+d) ?? sqrt((a/b)^2 + (c/d)^2)
>>>
>>> Let's assume ?? is <=, which is what we want. So,
>>>
>>>             ( a^2 d^2 + c^2 b^2 )
>>>  ... <= sqrt( ----------------- )
>>>             (      b^2 d^2      )
>>>
>>>  (a+c)^2      a^2 d^2 + c^2 b^2
>>>  ------- <=  -----------------
>>>  (b+d)^2          b^2 d^2
>>>
>>>  a^2 + 2ac + c^2      a^2 d^2 + c^2 b^2
>>>  --------------- <=  -----------------
>>>  b^2 + 2bd + d^2          b^2 d^2
>>>
>>>  (a^2 b^2 c^2) + 2(a^2 c^2 bd) <= (c^4 a^2) + (d^4 b^2) +
>>>      2(c^3 a^2 d) + 2(d^3 b^2 c) + (b^2 c^2 d^2)
>>>
>>> Which seems quite true. Unless I made a mistake, which I
>>> probably did. Does this make any sense?

I get 2 a b^2 c d^2 <= a^2 d^4 + 2 a^2 b d^3 + b^4 c^2 + 2 b^3 c^2 d.
That may still be wrong but I don't see how you can get a^2 b^2 c^2.

>>I haven't followed it, but it should be. The logic for RMS being
>>less than the max value is that RMS is a kind of average.
>
> The above is an argument that, e.g. in the 5-limit, the RMS
> of the Tenney-weighted errors of 3 & 5 would be *greater than*
> or equal to the weighted error of 5/3 or 15/8. That is,
> RMS-primes would give an upper bound on the weighted errors of
> these compound intervals. Ideally this would be extended to
> all intervals, as in Max-primes. Otherwise, we might start
> looking for a weighting (other than log Tenney Height) which
> satisfied such an inequality for RMS-primes.

Why?

>>The average is always going to be less than the worst case. So
>>if you take the TOP-max tuning, the weighted RMS must be less
>>than the worst case, which is the TOP-max error. The TOP-RMS
>>will be for a different tuning but that, also being optimal, can
>>only be smaller again.
>
> Right, but this sounds like it is not desirable. Don't we
> want an *upper* bound to be quickly computable from the primes?

I don't know what you want. You said this came from a question you
asked at some unspecified time. If you want the worst error, you can
always calculate the worst error.

Knowing the TOP-RMS is smaller than TOP-max means if you can find all
the temperament classes in some range with TOP-RMS error below a given
value, you know it also contains all classes in the same range with
TOP-max below the same value. Assuming you're interested in TOP-max
for some reason, that's useful because it's easier to search for low
TOP-RMS errors.

Graham

🔗Carl Lumma <carl@lumma.org>

1/5/2009 12:27:54 AM

Graham wrote:
>>>> a^2 + 2ac + c^2      a^2 d^2 + c^2 b^2
>>>> --------------- <=  -----------------
>>>> b^2 + 2bd + d^2          b^2 d^2
>>>>
>>>> (a^2 b^2 c^2) + 2(a^2 c^2 bd) <= (c^4 a^2) + (d^4 b^2) +
>>>>     2(c^3 a^2 d) + 2(d^3 b^2 c) + (b^2 c^2 d^2)
>>>>
>>>> Which seems quite true. Unless I made a mistake, which I
>>>> probably did. Does this make any sense?
>
>I get 2 a b^2 c d^2 <= a^2 d^4 + 2 a^2 b d^3 + b^4 c^2 + 2 b^3 c^2 d.
>That may still be wrong but I don't see how you can get a^2 b^2 c^2.

I used
http://xrjunque.nom.es/precis/polycalc.aspx

Now I'm getting
(a^2*b^2*d^2)+(b^2*c^2*d^2)+(b^2*d^2*2ac) <=
(a^4*d^2)+(c^4*b^2)+(a^3*d^2*2c)+(c^3*b^2*2a)+(a^2*b^2*c^2)+(a^2*c^2*d^2)

>>>I haven't followed it, but it should be. The logic for RMS being
>>>less than the max value is that RMS is a kind of average.
>>
>> The above is an argument that, e.g. in the 5-limit, the RMS
>> of the Tenney-weighted errors of 3 & 5 would be *greater than*
>> or equal to the weighted error of 5/3 or 15/8. That is,
>> RMS-primes would give an upper bound on the weighted errors of
>> these compound intervals. Ideally this would be extended to
>> all intervals, as in Max-primes. Otherwise, we might start
>> looking for a weighting (other than log Tenney Height) which
>> satisfied such an inequality for RMS-primes.
>
>Why?

The whole point of prime-based error is that it's a shortcut to
measuring all the intervals. Ideally for Max-primes one uses an
error weighting that makes it come out equal to Max-all.
Tenney weighting happens to do that. For RMS-primes the natural
extension would be to use a weighting that makes it come out
equal to RMS-all. But then there's the question of how to
define "all"... does the RMS converge? I think you talk about
this in one of your papers. Here I was thinking an upper bound
would be good, but maybe that doesn't make sense.

>Knowing the TOP-RMS is smaller than TOP-max means if you can find all
>the temperament classes in some range with TOP-RMS error below a given
>value, you know it also contains all classes in the same range with
>TOP-max below the same value. Assuming you're interested in TOP-max
>for some reason, that's useful because it's easier to search for low
>TOP-RMS errors.

You're probably right.

-Carl

🔗Graham Breed <gbreed@gmail.com>

1/5/2009 1:07:53 AM

2009/1/5 Carl Lumma <carl@lumma.org>:

> The whole point of prime-based error is that it's a shortcut to
> measuring all the intervals. Ideally for Max-primes one uses an
> error weighting that makes it come out equal to Max-all.
> Tenney weighting happens to do that. For RMS-primes the natural
> extension would be to use a weighting that makes it come out
> equal to RMS-all. But then there's the question of how to
> define "all"... does the RMS converge? I think you talk about
> this in one of your papers. Here I was thinking an upper bound
> would be good, but maybe that doesn't make sense.

Any Tenney limit, or the intersection of any Tenney and prime limits,
is a particular set of prime weights for an RMS calculation. So the
optimal RMS over all intervals is the same as the optimal RMS for
given weights of the prime intervals. These weights tend to the
Tenney weights as the complexity tends to infinity, if you weight all the
intervals equally.

I think you still get Tenney weighting for a big family of weightings
as the complexity approaches infinity.  It depends on intervals of
similar complexity having similar weight.

The reason it matters how you approach infinity is that Farey limits
are of a different form. They can't be done with simple prime weights
at all. You need a matrix to specify cross-weights. But such a
matrix still doesn't add much to the complexity of the calculation. I
don't know what the RMS converges to in this case.

So TOP-RMS, or weighted-prime RMS in general, is consistent with
Tenney limits. It means intervals like 30:1 are treated equally with
6:5. As long as you optimize the scale stretch that's probably good
enough. It's the argument for not bothering with STD errors unless
they're simpler in a given context. The applicability is that you
don't know exactly what you want so you guess some numbers that are
likely to work.
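The equal treatment of 30:1 and 6:5 falls straight out of the Tenney height: the complexity of n:d is log2(n*d), so both ratios score log2(30). A one-line sketch (the function name is mine):

```python
from math import log2

def tenney_complexity(n, d):
    """Tenney harmonic distance of the ratio n:d, in octaves."""
    return log2(n * d)

print(tenney_complexity(30, 1), tenney_complexity(6, 5))
# both equal log2(30), about 4.907 octaves
```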

Graham

🔗Carl Lumma <carl@lumma.org>

1/5/2009 1:41:54 AM

Graham wrote:

>> The whole point of prime-based error is that it's a shortcut to
>> measuring all the intervals. Ideally for Max-primes one uses an
>> error weighting that makes it come out equal to Max-all.
>> Tenney weighting happens to do that. For RMS-primes the natural
>> extension would be to use a weighting that makes it come out
>> equal to RMS-all. But then there's the question of how to
>> define "all"... does the RMS converge? I think you talk about
>> this in one of your papers. Here I was thinking an upper bound
>> would be good, but maybe that doesn't make sense.
>
>Any Tenney limit, or the intersection of any Tenney and prime limits,
>is a particular set of prime weights for an RMS calculation.

Yes, that's what I remember from your paper. Except the other
day when I was looking at it again, the tables seemed to show
Farey limits instead.

>So the
>optimal RMS over all intervals is the same as the optimal RMS for
>given weights of the prime intervals. These weights tend to the
>Tenney weights as the complexity tends to infinity,

You've proven this then? I don't know why you keep saying
"optimal", since that to me means a tuning. Hopefully what
you're describing here is valid for any tuning I'd like to
measure.

>if you weight all the intervals equally.
>
>I think you still get Tenney weighting for a big family of weightings
>as the complexity approaches to infinity. It depends on intervals of
>similar complexity having similar weight.

I don't know how to parse this... you seem to be talking about
two different but simultaneous sets of weightings.

>The reason it matters how you approach infinity is that Farey limits
>are of a different form. They can't be done with simple prime weights
>at all. You need a matrix to specify cross-weights. But such a
>matrix still doesn't add much to the complexity of the calculation. I
>don't know what the RMS converges to in this case.

Yes, well Tenney limits are all I care about here.

-Carl

🔗Graham Breed <gbreed@gmail.com>

1/5/2009 2:05:15 AM

2009/1/5 Carl Lumma <carl@lumma.org>:
> Graham wrote:

>>Any Tenney limit, or the intersection of any Tenney and prime limits,
>>is a particular set of prime weights for an RMS calculation.
>
> Yes, that's what I remember from your paper. Except the other
> day when I was looking at it again, the tables seemed to show
> Farey limits instead.

There are tables for both.

>>So the
>>optimal RMS over all intervals is the same as the optimal RMS for
>>given weights of the prime intervals. These weights tend to the
>>Tenney weights as the complexity tends to infinity,
>
> You've proven this then? I don't know why you keep saying
> "optimal", since that to me means a tuning. Hopefully what
> you're describing here is valid for any tuning I'd like to
> measure.

If you're looking for temperament classes you naturally compare them
with their optimal tunings. The computation is the same order of
complexity regardless of how many intervals you started with. It only
depends on the number of primes.

Going by equation (5) in composite.pdf it looks like the straight RMS
also only depends on the metric in equation (7). So optimal errors
aren't special here. They happened to be the kind I was interested
in.

I haven't proven the limit to infinity but it obviously works.

>>if you weight all the intervals equally.
>>
>>I think you still get Tenney weighting for a big family of weightings
>>as the complexity approaches infinity.  It depends on intervals of
>>similar complexity having similar weight.
>
> I don't know how to parse this... you seem to be talking about
> two different but simultaneous sets of weightings.

Start with n intervals defined on d primes. Put the weights in a
matrix called W, which is nxd or dxn whichever way you hold it up.
There's another matrix G which is dxd and defines the same problem
using only the prime intervals.
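One plausible reading of that setup (my sketch, not Graham's code): each row of W is the monzo of one of the n intervals over the d primes, and G = (W^T W)/n collapses the n-interval problem to a d-by-d one, so the size of the optimization depends only on d.

```python
def transpose(M):
    """Rows become columns."""
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    """Plain matrix product of nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Monzos of 3/2, 5/4, 6/5 and 2/1 on primes 2, 3, 5: n = 4, d = 3.
W = [[-1, 1, 0], [-2, 0, 1], [1, 1, -1], [1, 0, 0]]
n = len(W)

# The d x d matrix defining the same quadratic error on prime intervals.
G = [[x / n for x in row] for row in matmul(transpose(W), W)]
print(G)
```

G comes out symmetric, and its off-diagonal entries are the cross-weights between primes; a diagonal G is the case where plain per-prime weights suffice.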

>>The reason it matters how you approach infinity is that Farey limits
>>are of a different form. They can't be done with simple prime weights
>>at all. You need a matrix to specify cross-weights. But such a
>>matrix still doesn't add much to the complexity of the calculation. I
>>don't know what the RMS converges to in this case.
>
> Yes, well Tenney limits are all I care about here.

So then it's easy. The matrix G is already diagonalized. It gives
you a set of prime weights for a TOP-RMS-like error. And the higher
the complexity the more it looks like TOP.

Graham

🔗Carl Lumma <carl@lumma.org>

1/5/2009 2:11:47 AM

Graham wrote:

>Going by equation (5) in composite.pdf it looks like the straight RMS
>also only depends on the metric in equation (7). So optimal errors
>aren't special here. They happened to be the kind I was interested
>in.

OK.

>I haven't proven the limit to infinity but it obviously works.

OK.

>So then it's easy. The matrix G is already diagonalized. It gives
>you a set of prime weights for a TOP-RMS-like error. And the higher
>the complexity the more it looks like TOP.

What's the primary advantage, as you see it, to TOP-RMS over
TOP-Max?

-Carl

🔗Graham Breed <gbreed@gmail.com>

1/5/2009 4:50:34 AM

2009/1/5 Carl Lumma <carl@lumma.org>:

> What's the primary advantage, as you see it, to TOP-RMS over
> TOP-Max?

That it's easier to work with the optimizations algebraically. I
think the badness in the title is a very interesting function and
should be studied as pure mathematics. But I haven't found it in that
context.

Graham

🔗Carl Lumma <carl@lumma.org>

1/6/2009 3:23:46 AM

Graham wrote:

>I was finding the best mapping for TOP-RMS. Here's the
>table using the best TOP-max mappings:
>
> 2: 33.0 77.3 77.3 77.3 77.6 77.6
> 3: 30.2 30.2 39.8 39.8 45.3 45.3
> 4: 33.0 33.0 33.0 37.5 37.5 37.5
> 5: 5.7 19.8 21.4 25.5 25.5 25.5
> 6: 30.2 30.2 30.2 30.2 30.2 30.2
> 7: 5.1 9.4 20.0 20.0 20.0 20.0
> 9: 11.2 14.2 14.2 14.2 14.2 14.7
> 10: 5.7 11.4 11.4 12.7 12.7 12.7
> 12: 0.6 3.6 6.1 7.6 8.6 8.6
> 15: 5.7 5.7 7.2 7.2 7.2 8.3
> 19: 2.3 2.3 3.8 6.3 6.3 6.4
> 22: 2.2 3.2 3.3 3.3 5.3 5.3
> 26: 3.1 3.7 3.8 4.1 4.1 4.1
> 31: 1.6 1.8 1.8 1.8 3.1 3.1
> 41: 0.2 1.4 1.4 1.9 2.4 2.7
> 46: 0.8 1.1 1.7 1.7 1.9 1.9
> 58: 0.5 1.5 1.5 1.5 1.5 1.6
> 72: 0.6 0.6 0.6 0.6 1.0 1.0
>
> Graham

My numbers agree, though this no longer shows all
the successive improvements in 17-limit damage.
Here's the complete list:

(2 (33.0 77.3 77.3 77.3 77.6 77.6))
(3 (30.2 30.2 39.8 39.8 45.3 45.3))
(4 (33.0 33.0 33.0 37.5 37.5 37.5))
(5 (5.7 19.8 21.4 25.5 25.5 25.5))
(7 (5.1 9.4 20.0 20.0 20.0 20.0))
(8 (15.0 15.0 15.0 15.0 15.0 15.0))
(9 (11.2 14.2 14.2 14.2 14.2 14.7))
(10 (5.7 11.4 11.4 12.7 12.7 12.7))
(12 (0.6 3.6 6.1 7.6 8.6 8.6))
(15 (5.7 5.7 7.2 7.2 7.2 8.3))
(17 (1.2 8.0 8.0 8.0 8.0 8.0))
(19 (2.3 2.3 3.8 6.3 6.3 6.4))
(22 (2.2 3.2 3.3 3.3 5.3 5.3))
(26 (3.1 3.7 3.8 4.1 4.1 4.1))
(27 (2.9 2.9 2.9 3.8 3.8 3.8))
(29 (0.5 3.5 3.5 3.5 3.5 3.5))
(31 (1.6 1.8 1.8 1.8 3.1 3.1))
(39 (1.8 2.9 2.9 2.9 2.9 2.9))
(41 (0.2 1.4 1.4 1.9 2.4 2.7))
(46 (0.8 1.1 1.7 1.7 1.9 1.9))
(58 (0.5 1.5 1.5 1.5 1.5 1.6))
(72 (0.6 0.6 0.6 0.6 1.0 1.0))

-Carl