
Parametric scalar badness

Graham Breed <gbreed@gmail.com>

2/8/2008 1:20:32 AM

In my errors and complexities paper (which I've finally managed to upload in its "revised final" version to http://x31eq.com/primerr.pdf) I introduced a badness function that takes a free parameter, called epsilon (equations 81 and 82, page 18).

Here are the octave divisions of the best equal temperaments with under 10,000 notes to the octave, for each prime limit and value of epsilon:

epsilon:    0.01   0.003   0.001  0.0003  0.0001
 5-limit       3      12      12      53      53
 7-limit       4      12      19      72     171
11-limit       3      12      31      72      72
13-limit       4       9      31      72     270
17-limit       4      10      31      72      72
19-limit       3       9      27      72     270

In each case it looks like there's a sensible trade-off between complexity and error. I can't prove that there aren't any better ETs with a stupidly large number of notes, but it's looking that way.

For another example, the complete set of 11-limit ETs with a 0.001-badness below 0.07 is:

12, 15, 22, 27, 31, 41, 46

11-limit ETs with a 0.0001-badness below 0.04 are:

31, 72, 152, 270, 342

11-limit ETs with a 0.0003-badness below 0.05 are:

31, 41, 72
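
For anybody who wants to reproduce these numbers, here's a rough sketch of the search in Python. It is not a transcription of equations 81 and 82: it assumes the parametric badness has the form sqrt(var(w) + epsilon^2*meansq(w)), where w is the Tenney-weighted patent val of an n-note equal temperament. That at least reduces to standard error*complexity badness as epsilon goes to 0, but the exact definitions in the paper may move the smallest entries around.

import math

def primes_up_to(limit):
    """All primes up to and including `limit`, e.g. 11 -> [2, 3, 5, 7, 11]."""
    ps = []
    for n in range(2, limit + 1):
        if all(n % p for p in ps):
            ps.append(n)
    return ps

def badness(n, logs, epsilon):
    """Parametric badness of the n-note patent val (assumed form, see above)."""
    # Tenney-weighted patent val: round(n*log2(p)) / log2(p), one entry per prime.
    w = [round(n * L) / L for L in logs]
    k = len(w)
    mean = sum(w) / k
    meansq = sum(x * x for x in w) / k
    # variance(w) plays the role of (error*complexity)^2;
    # epsilon^2 * meansq(w) plays the role of (epsilon*complexity)^2.
    return math.sqrt(meansq - mean * mean + epsilon ** 2 * meansq)

logs = [math.log2(p) for p in primes_up_to(11)]

# Best 11-limit ET under 10,000 notes for each column of the table:
for eps in (0.01, 0.003, 0.001, 0.0003, 0.0001):
    print(eps, min(range(1, 10000), key=lambda n: badness(n, logs, eps)))

# All 11-limit ETs with 0.001-badness below 0.07:
print([n for n in range(1, 10000) if badness(n, logs, 0.001) < 0.07])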

I don't know how to predict the relationship between epsilon and the error/complexity trade-off. And I haven't looked at higher-rank temperaments, but I think it'll still work there. Geometry suggests that the badness of a temperament class is related to the badness of the ETs that belong to it.

I believe the function is a positive definite quadratic form (for 0 < epsilon < 1) and as such a valid lattice norm. So the problem of finding good equal temperaments maps to that of finding short vectors in a lattice. Unfortunately, this isn't the kind of norm that works with LLL reduction. But it does anchor us pretty much in real mathematics.
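
To make that concrete under the same assumed form as the sketch above: badness squared is then a quadratic form v^T A v in the integer (unweighted) val v, and the Gram matrix A can be checked for positive definiteness directly. A numpy sketch (again an assumption-laden illustration, not the paper's equations):

import numpy as np

logs = np.log2([2, 3, 5, 7, 11])   # 11-limit
k, eps = len(logs), 0.001
W = np.diag(1 / logs)              # Tenney weighting
# Under the assumed badness, badness^2 = v^T A v with A = W G W, where
# G = ((1 + eps^2) I - J/k) / k and J is the all-ones matrix.  G's
# eigenvalues are eps^2/k (along the all-ones direction) and (1+eps^2)/k,
# all positive for eps != 0, so A is positive definite.
G = ((1 + eps ** 2) * np.eye(k) - np.ones((k, k)) / k) / k
A = W @ G @ W

np.linalg.cholesky(A)              # raises LinAlgError unless positive definite

v = np.round(12 * logs)            # patent val of 12-ET: [12, 19, 28, 34, 42]
print(np.sqrt(v @ A @ v))          # agrees with badness(12, logs, 0.001) above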

Higher-rank temperaments are small areas, volumes, etc., in the lattice. Maybe it's possible to represent each as a vector in a lattice as well.
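
For rank 2 in the 5-limit that already works: the wedge product of two vals has three coordinates (the 2x2 minors, which in three dimensions amount to a cross product), so the temperament class really is a single vector. A sketch, using meantone as defined by the 12 and 19 note ETs:

import numpy as np

logs = np.log2([2, 3, 5])
v12 = np.round(12 * logs)   # [12, 19, 28]
v19 = np.round(19 * logs)   # [19, 30, 44]
# The 2x2 minors of the pair of vals, i.e. their wedge product; in three
# dimensions this is the cross product.  The result encodes meantone,
# up to sign and coordinate order (the minors are -1, -4, -4).
print(np.cross(v12, v19))   # [-4.  4. -1.]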

I don't plan to look into this more in the near future, but it shows promise. If anybody wants a project, you could maybe take this up.

Graham

Carl Lumma <carl@lumma.org>

2/8/2008 8:54:25 AM

Graham wrote...
>In each case it looks like there's a sensible trade-off
>between complexity and error. [...]
>
>I don't know how to predict the relationship between epsilon
>and the error/complexity trade-off.

Have you identified the epsilon that yields logflat badness
(in the sense of there being just barely an infinite number of
improving ETs)?

-Carl

Graham Breed <gbreed@gmail.com>

2/8/2008 9:33:52 PM

Carl Lumma wrote:

> Have you identified the epsilon that yields logflat badness
> (in the sense of there being just barely an infinite number of
> improving ETs)?

It's not possible to get logflat badness in the strict sense. With epsilon=0 (that's standard error*complexity badness) it looks like a linear-flat badness: there's about the same chance of an ET falling below a given badness cutoff whatever its size. I assume that any positive epsilon means there is a single best ET. But I can't prove any of this.
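
That flatness claim can at least be probed numerically. Here's a sketch, again under the assumed badness form from the earlier code (the 0.05 cutoff is arbitrary): if badness is linear-flat at epsilon=0, each block of sizes should contribute roughly the same number of qualifying ETs.

import math

logs = [math.log2(p) for p in (2, 3, 5, 7, 11)]   # 11-limit

def badness0(n):
    """epsilon = 0: standard deviation of the Tenney-weighted patent val."""
    w = [round(n * L) / L for L in logs]
    mean = sum(w) / len(w)
    return math.sqrt(sum((x - mean) ** 2 for x in w) / len(w))

# Count ETs under a fixed cutoff in each block of 2,000 sizes.
for lo in range(0, 10000, 2000):
    print(lo, sum(badness0(n) < 0.05 for n in range(max(lo, 1), lo + 2000)))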

Graham