Complete rank 2 temperament surveys

🔗Graham Breed <gbreed@gmail.com>

1/20/2006 1:24:26 PM

I'm making progress on proving that all pairs of a set of equal temperaments can find all R2 temperaments with certain properties. I don't know enough number theory to actually make it a proof.

The error has to be defined as the standard deviation of the weighted mapping. That is

E^2 = <w^2> - <w>^2

where w is each tempered prime interval divided by the just interval it's approximating, "<x>" means "the mean of x" and "x^2" means "x squared". In a rank 2 system,

w = ap + bg

where a is the weighted period mapping, p is the period, b is the weighted generator mapping and g is the generator. So the error formula is

E^2 = <(ap + bg)^2> - <ap + bg>^2
= <(ap)^2 + (bg)^2 + 2abpg> - <ap + bg>^2
= <(ap)^2> + <(bg)^2> + <2abpg> - <ap>^2 - <bg>^2 - 2<ap><bg>
= p^2<a^2> + g^2<b^2> + 2pg<ab> - p^2<a>^2 - g^2<b>^2 - 2pg<a><b>
= g^2(<b^2> - <b>^2) + something or other

The point of this exercise is that we know how the error varies with the generator. You can write <b^2> - <b>^2 as var(b), the variance of b. To keep things simple p is fixed. The error can also be written in the form

E^2 = k(g - Gopt)^2 + Eopt^2

Here, k is some constant, Gopt is the optimal generator and Eopt is the optimal error. Without actually solving the equation for Gopt and Eopt we know k must be var(b) from above. So

E^2 = var(b)(g - Gopt)^2 + Eopt^2
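
This parabola in g is easy to check numerically. Here's a minimal Python sketch (illustrative only, not part of any survey code) using 5-limit meantone with the period fixed at a pure octave; the test generators at the end are arbitrary.

from math import log2

primes = [2, 3, 5]
period_map = [1, 1, 0]   # periods (octaves) in the mapping of each prime
gen_map = [0, 1, 4]      # generators (fifths) in the mapping of each prime

logs = [log2(q) for q in primes]
a = [m / l for m, l in zip(period_map, logs)]   # weighted period mapping
b = [m / l for m, l in zip(gen_map, logs)]      # weighted generator mapping
p = 1.0                                         # period fixed at a pure octave

def mean(xs): return sum(xs) / len(xs)
def var(xs): return mean([x * x for x in xs]) - mean(xs) ** 2
def cov(xs, ys): return mean([x * y for x, y in zip(xs, ys)]) - mean(xs) * mean(ys)

def err2(g):   # E^2 = <w^2> - <w>^2 with w = ap + bg
    return var([ai * p + bi * g for ai, bi in zip(a, b)])

g_opt = -p * cov(a, b) / var(b)   # vertex of the parabola
e_opt2 = err2(g_opt)

for g in (0.58, 0.5833, 31 / 53):   # arbitrary test generators, in octaves
    assert abs(err2(g) - (var(b) * (g - g_opt) ** 2 + e_opt2)) < 1e-9
    print(round(g, 4), err2(g))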

That's weighted prime RMS error as a function of the generator size for any rank 2 temperament. What we want to know is how big that error gets in an equal temperament that belongs to the same class/family/whatever. In that case, g becomes a rational number, written as n/d:

E^2 = var(b)(n/d - Gopt)^2 + Eopt^2

So how close can we expect n/d to get to Gopt? For good matches it's always within 1/(pd^2), which follows from the gaps between nodes in the scale tree (Stern-Brocot/Farey tree). I think that means "convergents" rather than "semiconvergents", or that the DE/MOS scale is maximally even/strictly proper, but I don't know how likely we are to get them. But given that, we have

E^2 <= var(b)/(p^2 d^4) + Eopt^2
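
The 1/(pd^2) gap is the standard property of continued-fraction convergents when the period is a whole octave (p = 1): every convergent n/d of the generator satisfies |n/d - g| < 1/d^2. A small illustrative sketch (the generator value below is just an example, not a claim about any particular temperament):

from fractions import Fraction

def convergents(x, count=8):
    # continued-fraction convergents of x, via the usual recurrence
    a, frac = int(x), x - int(x)
    h0, k0, h1, k1 = 1, 0, a, 1
    yield Fraction(h1, k1)
    for _ in range(count - 1):
        if frac == 0:
            return
        x = 1 / frac
        a, frac = int(x), x - int(x)
        h0, k0, h1, k1 = h1, k1, a * h1 + h0, a * k1 + k0
        yield Fraction(h1, k1)

g_opt = 0.5806   # roughly a meantone fifth in octaves, purely for illustration
for c in convergents(g_opt):
    n, d = c.numerator, c.denominator
    print(n, d, abs(n / d - g_opt) < 1 / d ** 2)   # expect True every time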

Now, var(b)/p^2 is the square of a complexity measure. Remember that b is the weighted generator mapping. The Kees complexity is

Ckees = (max(b) - min(b))/p

By comparison, here we have

c = std(b)/p

where std is the standard deviation, defined as the square root of the variance. The standard deviation is the RMS deviation relative to the mean. The max - min is the worst deviation relative to some kind of average. So,

std(b) <= (max(b) - min(b))/2

and the two look like similar things. So bring back this

E^2 <= var(b)/(p^2 d^4) + Eopt^2

and rewrite it

E^2 <= c^2/d^4 + Eopt^2

where c is my new complexity. This gives a constraint on the error of an equal temperament that belongs to an R2 temperament class. We therefore know that any R2 temperament with a complexity within c and optimal error within Eopt must have equal-temperament members that obey the inequality. We can replace the actual complexity and optimal error with the maximum values we allow, Cmax and Emax respectively

E^2 <= Cmax^2/d^4 + Emax^2

Also, because we know this complexity is never more than half the Kees complexity, we can say

E^2 <= (Ckees/2)^2/d^4 + Emax^2

if you don't trust this new complexity measure. That aside, let's study this equation again:

E^2 <= Cmax^2/d^4 + Emax^2

For any equal temperament with d steps, this tells us how large its error can get before we know it isn't a good example of an R2 temperament with complexity below Cmax and error below Emax. What restrictions does it imply?

Firstly, when d is small, the bound lets the ET error get as large as the R2T complexity. That's huge, and means it isn't a practical restriction at all. This is sadly to be expected: ETs with a small number of notes may belong to a good class, but their errors are so high (they aren't practical temperaments at all) that you can't really predict class membership from them.

When d gets large, the bound on the ET error becomes essentially the error we're looking for in the R2T. This is to be expected as well, because it's easy to get a moderate error with a large number of notes, and that doesn't tell you anything about whether the ET belongs to a class of simpler R2 temperaments. The inequality doesn't give us a practical restriction on high ETs either.

In the middle, it does make sense. I like to think of it in terms of my ET badness measure, error times complexity:

badness^2 = E^2 d^2 <= Cmax^2/d^2 + Emax^2 d^2

To find its minimum point, you differentiate with respect to d and set the derivative equal to zero. That gives

Dopt = sqrt(CmaxEmax)

where sqrt is the square root. This is the best place to look for equal temperaments. All I need now is a way of saying how likely the equal temperaments we're looking for are to be around here and I can prove we can find them. The ET badness at this point is

minbad = sqrt(2CmaxEmax)

Now for an example. I was talking about lists including this 17-limit temperament before:

13/41

1202.6 cents period
381.4 cents generator

mapping by period and generator:
[(1, 0), (0, 5), (2, 1), (-1, 12), (-1, 14), (4, -1), (-1, 16)]

mapping by steps:
[(19, 22), (30, 35), (44, 51), (53, 62), (65, 76), (70, 81), (77, 90)]

complexity measure: 4.545
RMS weighted error: 2.643 cents/octave
max weighted error: 3.966 cents/octave

Is it outstanding or are there some to beat it? In dimensionless terms, its standard-deviation of weighted mapping complexity is 1.914 and its error is 0.0022. Let's try and prove there are no better R2 temperaments with complexity below 2 and error below 0.0025.
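
For what it's worth, the 1.914 figure can be recomputed straight from the generator mapping above. A short illustrative sketch (taking the period as a whole octave, so p = 1):

from math import log2, sqrt

primes = [2, 3, 5, 7, 11, 13, 17]
gen_map = [0, 5, 1, 12, 14, -1, 16]   # generator column of the mapping above

b = [m / log2(q) for m, q in zip(gen_map, primes)]   # weighted generator mapping
mean_b = sum(b) / len(b)
var_b = sum(x * x for x in b) / len(b) - mean_b ** 2
print(sqrt(var_b))   # about 1.914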

First, where to look for ETs? The minimum point of the badness curve is at sqrt(2/0.0025)=28.28 steps to the octave. So we want something with around 28 notes to the octave. At this point, the badness is sqrt(2*2*0.0025)=sqrt(0.01) = 0.1. That's the kind of value I use to make sure I catch everything. Unfortunately, this is a minimum and so most of the time the badness threshold is higher. Taking 1,000 ETs starting with 14 notes means we only get up to 70 notes.
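
Those two numbers are a trivial check in code (this uses Dopt = sqrt(Cmax/Emax), which is what the arithmetic above actually does; see the correction in a later post):

from math import sqrt

Cmax, Emax = 2.0, 0.0025
d_opt = sqrt(Cmax / Emax)        # about 28.28 steps to the octave
min_bad = sqrt(2 * Cmax * Emax)  # about 0.1
print(d_opt, min_bad)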

To try and get a good sample, I started with 10 notes and took 2,000 equal temperaments, so the biggest ones had 77 notes. Even with my slick new Pyrex library this search took a few seconds but the result is that 6 linear temperaments with a Kees complexity below 5 and dimensionless Tenney-weighted prime-RMS whatever error below 0.0025 came out. That's good because I said there were 6 around this point before. I'm more certain now that I was right.

How likely is it that I missed an ET? The most likely ones to be missed are those where the period divides the octave a lot of times. In those classes, the equal special cases are more widely spaced. The most divisions of the period within a complexity threshold come when the generator mapping is all zeros apart from a 1 for the highest prime. Then the Kees complexity is

1/p/q[-1]

where q[-1] is the size of the highest prime interval (log of prime number) in the limit, and p is the period. The weighted std complexity is

c = sqrt((1/p/q[-1])**2/len(q) - (1/p/q[-1]/len(q))**2)
= sqrt((len(q)-1)*(1/p/q[-1]/len(q))**2)
= sqrt(len(q)-1)/p/q[-1]/len(q)

Here len(q) is the number of primes and "**2" is now "squared". For this complexity to be within the threshold c, we need

sqrt(len(q)-1)/p/q[-1]/len(q) < c

so

1/p < c*len(q)*q[-1]/sqrt(len(q)-1)

and 1/p is the number of steps to the octave.

Well, we were in the 17-limit. There are 7 primes and the last one has size log2(17)=4.09 octaves. For a complexity of 2,

1/p < 2*7*4.09/sqrt(6)
1/p < 23.376...
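
The same arithmetic as a throwaway check:

from math import log2, sqrt

c, n_primes = 2, 7
print(c * n_primes * log2(17) / sqrt(n_primes - 1))   # about 23.4, the bound above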

Well, I covered 23-equal and I covered 46-equal and I even covered 69-equal so I had a good chance of getting all alternatives. It is still possible that the generator is very small and none of these are convergents, however. But then the chances are that 23 would have come up with two different mappings.

This is the point at which the argument disappears amidst much hand waving. If pathologically non-converging series are a problem, we may need to run special-case searches for highly divided periods. But we're much closer to the point where we can guarantee we didn't miss any top notch temperaments. Unless the theory improves dramatically we won't be able to say we caught all the mediocre ones. The constraints become too weak when you lower the error and complexity.

Graham

🔗Gene Ward Smith <gwsmith@svpal.org>

1/20/2006 3:37:23 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> I'm making progress on proving that all pairs of a set of equal
> temperaments can find all R2 temperaments with certain properties.

Here's another thought.

Suppose we start with an R2 wedgie, and divide through each
coefficient by the corresponding product of logs base 2,
log2(q)*log2(r) for the q-r coefficient. Then we get a weighted
wedgie W = <<w1 w2 ... wn||, and C = max(|wi|) is a complexity measure.

If v is a val belonging to the temperament, let w be the corresponding
weighted val, dividing through by log2(q) for the prime q term. Then
the wedge product W^w = zero trival. Therefore, if u = w/N, where the
val is an N-et val, we also have W^u = zero trival.

If j = <1 1 1 ... 1| is the JI val, we have that the maximum of the
absolute values of the coefficients of W^j, call it E, is an error
measurement for W. We also have that the maximum of the absolute
values of j-u, call it e, is an error measurement for u, and hence for
the original val w.

Now

E = ||W^j|| = ||W^(j-u)|| <= 3 ||W|| ||j-u|| = 3Ce

Here the "3" appears because there are three added products in the
computation of the trival.
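
To see where the three added products come from, here is a small illustrative sketch (an editorial example, not Gene's code) restricted to the 5-limit, where the trival has a single coefficient. It uses the meantone vals <12 19 28| and <19 30 44|; the weighting and the choice of N = 12 are just for illustration.

from math import log2

logs = [log2(q) for q in (2, 3, 5)]
v1, v2 = [12, 19, 28], [19, 30, 44]             # two vals supporting meantone

wv1 = [x / l for x, l in zip(v1, logs)]         # weighted vals
wv2 = [x / l for x, l in zip(v2, logs)]

def wedge2(a, b):   # wedge of two vals in 3 dimensions: the bival (wedgie)
    return [a[0] * b[1] - a[1] * b[0],          # primes 2,3
            a[0] * b[2] - a[2] * b[0],          # primes 2,5
            a[1] * b[2] - a[2] * b[1]]          # primes 3,5

def wedge3(W, u):   # bival wedge val: one trival coefficient in 3 dimensions
    return W[0] * u[2] - W[1] * u[1] + W[2] * u[0]   # three added products

W = wedge2(wv1, wv2)                  # the weighted wedgie
C = max(abs(x) for x in W)            # Gene's complexity
j = [1, 1, 1]                         # the JI val in weighted terms
E = abs(wedge3(W, j))                 # Gene's wedgie error

u = [x / 12 for x in wv1]             # weighted 12-equal val divided by N
e = max(abs(ji - ui) for ji, ui in zip(j, u))   # max-abs weighted val error

print(abs(wedge3(W, u)) < 1e-9)       # W^u vanishes for a supporting val
print(E, 3 * C * e, E <= 3 * C * e)   # the bound E <= 3Ce holds here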

🔗Graham Breed <gbreed@gmail.com>

1/21/2006 2:42:42 PM

Gene Ward Smith wrote:

> Here's another thought.
>
> Suppose we start with an R2 wedgie, and divide through each
> coefficient by the corresponding product of logs base 2,
> log2(q)*log2(r) for the q-r coefficient. Then we get a weighted
> wedgie W = <<w1 w2 ... wn||, and C = max(|wi|) is a complexity measure.
>
> If v is a val belonging to the temperament, let w be the corresponding
> weighted val, dividing through by log2(q) for the prime q term. Then
> the wedge product W^w = zero trival. Therefore, if u = w/N, where the
> val is an N-et val, we also have W^u = zero trival.

So u's the weighted tuning map. There isn't anything to stop the octave being optimized, is there?

> If j = <1 1 1 ... 1| is the JI val, we have that the maximum of the
> absolute values of the coefficients of W^j, call it E, is an error
> measurement for W. We also have that the maximum of the absolute
> values of j-u, call it e, is an error measurement for u, and hence for
> the original val w.

Yes, j-u is the weighted errors of the primes so any function of that is a weighted prime error.

> Now
>
> E = ||W^j|| = ||W^(j-u)|| <= 3 ||W|| ||j-u|| = 3Ce
>
> Here the "3" appears because there are three added products in the
> computation of the trival.

Does the modulus have to be a sum-abs or would an RMS work as well? It looks bigger than my limit. One thing you aren't doing is selecting the good equal temperaments. This mapping for 32-equal:

(32, 50, 75, 90, 109)

is valid in 11-limit miracle, but it doesn't get past my error threshold. I'm not interested in the poor ETs. Only the ones that converge on the optimal generator -- 10, 31, 41 and 72. It happens that 11 and 21 work but I don't think they have to. You're being much more liberal.
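
For reference, a quick check of that 32-equal mapping (an illustrative sketch; writing it as 5 times the 31-et val minus 3 times the 41-et val is one way to see that it supports miracle):

from math import log2

logs = [log2(q) for q in (2, 3, 5, 7, 11)]
v31 = [31, 49, 72, 87, 107]   # 11-limit miracle vals for 31 and 41
v41 = [41, 65, 95, 115, 142]
v32 = [32, 50, 75, 90, 109]   # the mapping quoted above

print(v32 == [5 * a - 3 * b for a, b in zip(v31, v41)])        # True, so it supports miracle
print(max(abs(1 - x / (32 * l)) for x, l in zip(v32, logs)))   # about 0.015 max weighted error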

It might be a better threshold for very small equal temperaments though, if I can reconcile the units. It's nice and simple!

And really, where does that 3 come from?

Graham

p.s. In my original message, there's an equation

Dopt = sqrt(CmaxEmax)

which should be

Dopt = sqrt(Cmax/Emax)

Sorry about that. The example calculation was correct.

🔗Gene Ward Smith <gwsmith@svpal.org>

1/21/2006 10:55:58 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:

> So u's the weighted tuning map. There isn't anything to stop the
> octave being optimized, is there?

I don't see how that helps in connection with finding suitable vals.
For instance, if you have a tuning map M = <1 m3 m5 ... mp| belonging
to a temperament, and then round off, you get an infinite number of
vals belonging to the temperament appearing at a fixed density. If you
did that for a TOP tuning map, it wouldn't work.

> > If j = <1 1 1 ... 1| is the JI val, we have that the maximum of the
> > absolute values of the coefficients of W^j, call it E, is an error
> > measurement for W. We also have that the maximum of the absolute
> > values of j-u, call it e, is an error measurement for u, and hence for
> > the original val w.
>
> Yes, j-u is the weighted errors of the primes so any function of
> that is a weighted prime error.

And minimums for it give TOP and NOT tunings.

> > Now
> >
> > E = ||W^j|| = ||W^(j-u)|| <= 3 ||W|| ||j-u|| = 3Ce
> >
> > Here the "3" appears because there are three added products in the
> > computation of the trival.
>
> Does the modulus have to be a sum-abs or would an RMS work as well?

I didn't use either one; I meant max-abs.

> And really, where does that 3 come from?

You take terms, one of which comes from the weighted wedgie, the other
from the weighted val, and multiply them. Sums of three such terms
make up the coefficients of the trival. The 3, in other words, results
from this being a trival.

🔗Graham Breed <gbreed@gmail.com>

1/22/2006 5:14:31 AM

Gene Ward Smith wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
>> So u's the weighted tuning map. There isn't anything to stop the
>> octave being optimized, is there?
>
> I don't see how that helps in connection with finding suitable vals.
> For instance, if you have a tuning map M = <1 m3 m5 ... mp| belonging
> to a temperament, and then round off, you get an infinite number of
> vals belonging to the temperament appearing at a fixed density. If you
> did that for a TOP tuning map, it wouldn't work.

I can't follow this at all. Round off to what? You mean like your standard vals? So what difference does it make starting with TOP? And where did we get this tuning map from in the first place?

>> Does the modulus have to be a sum-abs or would an RMS work as well?
>
> I didn't use either one; I meant max-abs.

Then does the modulus have to be a max-abs, or would an RMS work as well?

Graham

🔗Gene Ward Smith <gwsmith@svpal.org>

1/23/2006 9:33:10 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:

> > For instance, if you have a tuning map M = <1 m3 m5 ... mp| belonging
> > to a temperament, and then round off, you get an infinite number of
> > vals belonging to the temperament appearing at a fixed density. If you
> > did that for a TOP tuning map, it wouldn't work.
>
> I can't follow this at all. Round off to what? You mean like your
> standard vals?

Exactly.

> So what difference does it make starting with TOP?

You can start with TOP, actually. It would work, except when you
multiply by N, you might not get a division into N parts.

> And where did we get this tuning map from in the first place?

Anywhere, so long as the values are irrational and it maps all the
commas to zero. However, I don't know how to pull them out of the air
without already knowing a temperament.

> >>Does the modulus have to be a sum-abs or would an RMS work as well?
> >
> > I didn't use either one; I meant max-abs.
>
> Then does the modulus have to be a max-abs, or would an RMS work as
> well?

Max-abs was how I got my formula.

🔗Graham Breed <gbreed@gmail.com>

1/24/2006 1:21:19 PM

Gene Ward Smith wrote:
>> I can't follow this at all. Round off to what? You mean like your
>> standard vals?
>
> Exactly.

For either approximate-TOP or NOT, that puts a limit of

E < k/(2d) + Emax

where E is the error in the ET, d is the number of notes, Emax is the maximum error in the R2 temperaments you're looking for (using the same error measure as for ETs, not your vaguely correlated wedgie-error) and k is some constant deriving from the prime intervals that I haven't worked out.

That may well be a practical limit for small ETs, but how do you show that such ETs (that round off correctly, and still support the R2 temperament that's being rounded) will exist?

I can't see that it has anything to do with the rest of your proof, either.

>> So what difference does it make starting with TOP?
>
> You can start with TOP, actually. It would work, except when you
> multiply by N, you might not get a division into N parts.

You can always divide through by the octave stretch, get the ET approximation, and stretch the octave again. What's so special about the octave, anyway?

As for your limit, I get the max-abs of the weighted wedgie for 11-limit miracle as 1.469 where the weighting matches the interval sizes. The maximum of the weighted wedgie wedged with ones is 0.007. That gives 3Ce as 0.032. The actual TOP error for the version of 1-equal mapped <1 1 3 3 2] is 0.382 (NOT is 0.422), and it belongs to miracle temperament. So something's not working.

Graham

🔗Gene Ward Smith <gwsmith@svpal.org>

1/24/2006 5:24:19 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:

> As for your limit, I get the max-abs of the weighted wedgie for
> 11-limit
> miracle as 1.469 where the weighting matches the interval sizes. The
> maximum of the weighted wedgie wedged with ones is 0.007.

I get totally different numbers. The max-abs of the weighted wedgie I
find to be C = 7.345, and the max of "weighted ^ ones" I get to be
E = 0.0048. For such high error it doesn't make sense to use the val
you gave, and in fact the best bound is found with TOP. The weighted
error there (maximum TOP error in octave terms) is e = 0.0005258. Then
E = 0.0048 < 3Ce = 0.0116.

🔗Graham Breed <gbreed@gmail.com>

1/25/2006 1:09:27 PM

Gene Ward Smith wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
>> As for your limit, I get the max-abs of the weighted wedgie for
>> 11-limit miracle as 1.469 where the weighting matches the interval sizes. The
>> maximum of the weighted wedgie wedged with ones is 0.007.
>
> I get totally different numbers. The max-abs of the weighted wedgie I
> find to be C = 7.345, and the max of "weighted ^ ones" I get to be
> E = 0.0048. For such high error it doesn't make sense to use the val
> you gave, and in fact the best bound is found with TOP. The weighted
> error there (maximum TOP error in octave terms) is e = 0.0005258. Then
> E = 0.0048 < 3Ce = 0.0116.

Yes, 7.345 is correct, sorry.

Ah, you have the wedgie complexity on the left. So

E <= 3Ce

becomes

e >= E/(3C)

as a constraint on the TOP error of the equal temperaments. Yes, that's probably going to work, but what's the point of it? It means we have to look for equal temperaments with an error at least as high as something so low it couldn't be in the right class anyway??? Well, I suppose we have a wedgie badness measure of some value but it doesn't have anything to do with the original question.

For that wedgie badness calculation, W is

<<3.786 -3.015 -0.712 4.336 -6.793 -4.495 0.547 2.301 7.345 5.045]]

and W^j is

<<<0.0071 0.0032 -0.0033 -0.0012 -0.0056 -0.0030 0.0028 0.0048 0.0034 0.0014]]]

so there is a 0.0048 but it isn't the maximum.
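
Those numbers can be rebuilt from the 31- and 41-equal 11-limit vals. A short illustrative sketch (not the code behind the figures above); the maximum of |W^j| comes out around 0.007, confirming the 0.0048 isn't the largest coefficient:

from itertools import combinations
from math import log2

logs = [log2(q) for q in (2, 3, 5, 7, 11)]
v31 = [31, 49, 72, 87, 107]
v41 = [41, 65, 95, 115, 142]

# weighted wedgie: (v31_i*v41_j - v31_j*v41_i) / (log2(qi)*log2(qj))
W = {(i, j): (v31[i] * v41[j] - v31[j] * v41[i]) / (logs[i] * logs[j])
     for i, j in combinations(range(5), 2)}

# W^j with j = <1 1 1 1 1|: each trival coefficient is W_ij - W_ik + W_jk
Wj = {(i, j, k): W[i, j] - W[i, k] + W[j, k]
      for i, j, k in combinations(range(5), 3)}

print([round(x, 3) for x in W.values()])    # matches the weighted wedgie above
print([round(x, 4) for x in Wj.values()])   # matches W^j above, up to rounding
print(max(abs(x) for x in W.values()),      # C = 7.345
      max(abs(x) for x in Wj.values()))     # E is about 0.007, not 0.0048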

Graham

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/10/2006 12:47:18 PM

Thanks for this analysis, Graham. I don't have time to sit down and
work through it all right now; hopefully some free time will come down
the pike my way. But it doesn't look complete anyway so I don't feel
too bad about moving on . . .

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> I'm making progress on proving that all pairs of a set of equal
> temperaments can find all R2 temperaments with certain properties. I
> don't know enough number theory to actually make it a proof.

[snip]