
Alternative measures for complexity and error

🔗Herman Miller <hmiller@IO.COM>

5/16/2010 6:46:52 PM

One of the things that bugs me about the usual error measurements for regular temperaments is that they focus on optimizing the prime intervals, which are not as relevant as smaller intervals around the size of an octave or less in actual music. In the case when one of those large intervals is used in actual music, the tuning is less critical. So for a 7-limit temperament you end up with an optimized 1:7, but the actually useful intervals like 4:7 and 5:7 can be so out of tune that they're not even recognizable as 7-limit intervals.

A while back I considered optimizing a set of superparticular prime intervals (1:2, 2:3, 4:5, 6:7), which has some advantages. For one, all of these in themselves are musically useful intervals. But recently it occurred to me that there's no reason not to consider all the superparticular intervals in the error measurement, up to the desired limit (1:2, 2:3, 3:4, 4:5, 5:6, 6:7). This also allows for the possibility of optimizing non-prime limits such as 9-limit. But how to weight these intervals? You could weight the simpler intervals as more important, or the smaller intervals, and there could be good reasons for doing it either way. For simplicity, I'm using unweighted errors.

For complexity, I look at the standard generator mapping (with the octave as one of the generators, or a multiple of it, and the other generator less than half the size of the first). I take the difference between the highest and lowest numbers in the mapping of the second generator, with each one divided by the base-2 log of its prime, to give more weight to lower primes. Musically, it's more useful to have good mappings for fifths and thirds, especially fifths, so it makes sense to give more weight to the complexity of the fifths. So a temperament like hemiwürschmidt has a relatively high complexity of 10.1 (16 generators to reach 1:3), but catakleismic has a lower complexity (7.8) even though it takes 22 generators to reach 1:7. I also multiply by the number of periods in an octave, since you need that many more notes on a keyboard.
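The measure described above can be sketched in a few lines; this is my reconstruction (the function name is mine), and it reproduces the complexity figures in the tables below:

```python
from math import log2

def range_complexity(gen_map, periods_per_octave):
    """Weighted range complexity: spread between the highest and lowest
    entries of the second-generator mapping, each divided by log2 of its
    prime, multiplied by the number of periods per octave."""
    primes = [2, 3, 5, 7]
    weighted = [g / log2(p) for g, p in zip(gen_map, primes)]
    return periods_per_octave * (max(weighted) - min(weighted))

# pajara's period mapping is [2, 3, 5, 6], i.e. 2 periods per octave
print(round(range_complexity([0, 1, -2, -2], 2), 1))   # pajara: 3.0
print(round(range_complexity([0, -1, 8, 14], 1), 1))   # garibaldi: 5.6
print(round(range_complexity([0, 6, 5, 22], 1), 1))    # catakleismic: 7.8
```

Note that catakleismic's 22 steps to prime 7 contribute only 22/log2(7) ≈ 7.8 to the range, which is why it scores below hemiwürschmidt despite the longer generator chain.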

Here's how this complexity measure works out with the usual weighted-prime RMS error measure:

generator mapping                period   gener   err  com name
[<1, 1, 2, 2], <0, 2, 1, 3]>     1206.246 338.132 14.6 1.3 dicot
[<1, 2, 3, 3], <0, -2, -3, -1]>  1204.794 264.447 10.9 1.3 beep
[<1, 2, 1, 1], <0, -1, 3, 4]>    1210.494 531.757  9.6 2.1 mavila
[<3, 5, 7, 9], <0, -1, 0, -2]>    399.128 103.763  4.7 2.1 august
[<1, 2, 4, 2], <0, -1, -4, 2]>   1195.412 496.521  4.7 2.4 dominant
[<2, 3, 5, 6], <0, 1, -2, -2]>    598.859 106.844  2.6 3.0 pajara
[<1, 2, 4, 7], <0, -1, -4, -10]> 1201.242 504.026  1.4 3.6 meantone
[<1, 0, 2, -1], <0, 5, 1, 12]>   1201.082 380.695  1.1 4.3 magic
[<1, 2, -1, -3], <0, -1, 8, 14]> 1200.125 497.967  0.7 5.6 garibaldi
[<1, 1, 3, 3], <0, 6, -7, -2]>   1200.822 116.755  0.5 6.8 miracle
[<1, 0, 1, -3], <0, 6, 5, 22]>   1200.597 316.889  0.5 7.8 catakleismic

Some temperaments like diminished and orwell, which you might expect to show up on a list like this, are evaluated as more complex than others on this list (in this case, dominant beats diminished in the complexity measure, and garibaldi beats orwell).

The unweighted superparticular measure of error results in a remarkably similar list, but with periods closer to an octave, and higher overall errors. This may give a more realistic estimate of the error of temperaments like beep or the 7-limit extensions of dicot and mavila.

generator mapping                period   gener   err  com name
[<1, 1, 2, 2], <0, 2, 1, 3]>     1200.000 334.156 39.6 1.3 dicot
[<1, 2, 3, 3], <0, -2, -3, -1]>  1200.000 262.346 29.2 1.3 beep
[<1, 2, 1, 1], <0, -1, 3, 4]>    1201.665 531.648 29.1 2.1 mavila
[<3, 5, 7, 9], <0, -1, 0, -2]>    399.953 109.473 16.0 2.1 august
[<5, 8, 12, 14], <0, 0, -1, 0]>   239.392  84.360 15.1 2.2 blacksmith
[<2, 3, 5, 6], <0, 1, -2, -2]>    599.442 108.470  8.0 3.0 pajara
[<1, 2, 4, 7], <0, -1, -4, -10]> 1200.635 503.671  3.8 3.6 meantone
[<1, 0, 2, -1], <0, 5, 1, 12]>   1200.030 380.634  3.6 4.3 magic
[<1, 2, -1, -3], <0, -1, 8, 14]> 1200.041 497.926  1.9 5.6 garibaldi
[<1, 1, 3, 3], <0, 6, -7, -2]>   1200.379 116.656  1.7 6.8 miracle
[<1, 0, 1, -3], <0, 6, 5, 22]>   1200.231 316.832  1.4 7.8 catakleismic

So, what happens when you consider a 9-limit optimization?

generator mapping                period   gener   err  com name
[<1, 1, 2, 2], <0, 2, 1, 3]>     1200.811 341.362 38.8 1.3 dicot
[<1, 2, 3, 3], <0, -2, -3, -1]>  1190.057 252.005 37.6 1.3 beep
[<1, 2, 4, 4], <0, -1, -4, -3]>  1192.745 492.345 31.5 1.7 sharptone
[<3, 5, 7, 9], <0, -1, 0, -2]>    399.086 101.771 17.5 2.1 august
[<2, 3, 5, 6], <0, 1, -2, -2]>    599.349 106.506  9.0 3.0 pajara
[<1, 2, 4, 7], <0, -1, -4, -10]> 1200.613 503.664  6.1 3.6 meantone
[<1, 0, 2, -1], <0, 5, 1, 12]>   1200.675 380.619  3.9 4.3 magic
[<1, -1, -1, -2], <0, 7, 9, 13]> 1199.342 443.130  3.4 4.6 sensi
[<1, 2, -1, -3], <0, -1, 8, 14]> 1200.235 498.056  2.0 5.6 garibaldi
[<1, 1, 3, 3], <0, 6, -7, -2]>   1200.629 116.783  2.0 6.8 miracle
[<1, 0, 1, -3], <0, 6, 5, 22]>   1200.397 316.908  1.4 7.8 catakleismic

Unsurprisingly, blacksmith falls out of this list (it's not that great in the 9-limit), but the appearance of sharptone is a little alarming. On the other hand, it does have a suitably high error value; it just seems that dicot and beep are even worse in the 9-limit.

🔗Graham Breed <gbreed@gmail.com>

5/17/2010 1:38:27 AM

On 17 May 2010 05:46, Herman Miller <hmiller@io.com> wrote:
> One of the things that bugs me about the usual error measurements for
> regular temperaments is that they focus on optimizing the prime
> intervals, which are not as relevant as smaller intervals around the
> size of an octave or less in actual music. In the case when one of those
> large intervals is used in actual music, the tuning is less critical. So
> for a 7-limit temperament you end up with an optimized 1:7, but the
> actually useful intervals like 4:7 and 5:7 can be so out of tune that
> they're not even recognizable as 7-limit intervals.

This is something we've thought about before, and some prime-based
measures explicitly correct for it. For a simple optimization of
primes, the situation you suggest shouldn't happen, because it would
mean 4 and 5 were ignored.

> A while back I considered optimizing a set of superparticular prime
> intervals (1:2, 2:3, 4:5, 6:7), which has some advantages. For one, all
> of these in themselves are musically useful intervals. But recently it
> occurred to me that there's no reason not to consider all the
> superparticular intervals in the error measurement, up to the desired
> limit (1:2, 2:3, 3:4, 4:5, 5:6, 6:7). This also allows for the
> possibility of optimizing non-prime limits such as 9-limit. But how to
> weight these intervals? You could weight the simpler intervals as more
> important, or the smaller intervals, and there could be good reasons for
> doing it either way. For simplicity, I'm using unweighted errors.

The set of intervals you have there has two 3-limit, two more 5-limit,
and only one more 7-limit. So it naturally under-represents 7, which
is what will have the most impact on the overall error. What happens
if you add 7:8?

> For complexity, I look at the standard generator mapping (with the
> octave as one of the generators or a multiple of it, and the other
> generator less than half the size of the first). I look at the
> difference between the highest and lowest numbers in the mapping of the
> second generator, but each one is divided by the base 2 log of the
> prime, to give more weight to lower primes. Musically, it's more
<snip>

That's the old "range" complexity, isn't it?

> Some temperaments like diminished and orwell, which you might expect to
> show up on a list like this, are evaluated as more complex than others
> on this list (in this case, dominant beats diminished in the complexity
> measure, and garibaldi beats orwell).

There's very little difference between garibaldi and orwell, right?

> The unweighted superparticular measure of error results in a remarkably
> similar list, but with periods closer to an octave, and higher overall
> errors. This may give a more realistic estimate of the error of
> temperaments like beep or the 7-limit extensions of dicot and mavila.

The interval sets I looked at before gave fairly consistent generator
sizes, provided there were enough intervals.  Maybe you're getting
purer octaves because you're looking at a relatively small set of
intervals.

> generator mapping                period   gener   err  com name
> [<1, 1, 2, 2], <0, 2, 1, 3]>     1200.000 334.156 39.6 1.3 dicot
> [<1, 2, 3, 3], <0, -2, -3, -1]>  1200.000 262.346 29.2 1.3 beep
> [<1, 2, 1, 1], <0, -1, 3, 4]>    1201.665 531.648 29.1 2.1 mavila
> [<3, 5, 7, 9], <0, -1, 0, -2]>    399.953 109.473 16.0 2.1 august
> [<5, 8, 12, 14], <0, 0, -1, 0]>   239.392  84.360 15.1 2.2 blacksmith
> [<2, 3, 5, 6], <0, 1, -2, -2]>    599.442 108.470  8.0 3.0 pajara
> [<1, 2, 4, 7], <0, -1, -4, -10]> 1200.635 503.671  3.8 3.6 meantone
> [<1, 0, 2, -1], <0, 5, 1, 12]>   1200.030 380.634  3.6 4.3 magic
> [<1, 2, -1, -3], <0, -1, 8, 14]> 1200.041 497.926  1.9 5.6 garibaldi
> [<1, 1, 3, 3], <0, 6, -7, -2]>   1200.379 116.656  1.7 6.8 miracle
> [<1, 0, 1, -3], <0, 6, 5, 22]>   1200.231 316.832  1.4 7.8 catakleismic
>
> So, what happens when you consider a 9-limit optimization?
>
> generator mapping                period   gener   err  com name
> [<1, 1, 2, 2], <0, 2, 1, 3]>     1200.811 341.362 38.8 1.3 dicot
> [<1, 2, 3, 3], <0, -2, -3, -1]>  1190.057 252.005 37.6 1.3 beep
> [<1, 2, 4, 4], <0, -1, -4, -3]>  1192.745 492.345 31.5 1.7 sharptone
> [<3, 5, 7, 9], <0, -1, 0, -2]>    399.086 101.771 17.5 2.1 august
> [<2, 3, 5, 6], <0, 1, -2, -2]>    599.349 106.506  9.0 3.0 pajara
> [<1, 2, 4, 7], <0, -1, -4, -10]> 1200.613 503.664  6.1 3.6 meantone
> [<1, 0, 2, -1], <0, 5, 1, 12]>   1200.675 380.619  3.9 4.3 magic
> [<1, -1, -1, -2], <0, 7, 9, 13]> 1199.342 443.130  3.4 4.6 sensi
> [<1, 2, -1, -3], <0, -1, 8, 14]> 1200.235 498.056  2.0 5.6 garibaldi
> [<1, 1, 3, 3], <0, 6, -7, -2]>   1200.629 116.783  2.0 6.8 miracle
> [<1, 0, 1, -3], <0, 6, 5, 22]>   1200.397 316.908  1.4 7.8 catakleismic
>
> Unsurprisingly, blacksmith falls out of this list (it's not that great
> in the 9-limit), but the appearance of sharptone is a little alarming.
> On the other hand, it does have a suitably high error value; it just
> seems that dicot and beep are even worse in the 9-limit.

This sharptone has a low complexity. Maybe your 9-limit intervals are
shifting the balance onto 9 (which is tuned well) rather than 7.

Graham

🔗Herman Miller <hmiller@IO.COM>

5/17/2010 7:05:04 PM

Graham Breed wrote:
> On 17 May 2010 05:46, Herman Miller <hmiller@io.com> wrote:
>> One of the things that bugs me about the usual error measurements for
>> regular temperaments is that they focus on optimizing the prime
>> intervals, which are not as relevant as smaller intervals around the
>> size of an octave or less in actual music. In the case when one of those
>> large intervals is used in actual music, the tuning is less critical. So
>> for a 7-limit temperament you end up with an optimized 1:7, but the
>> actually useful intervals like 4:7 and 5:7 can be so out of tune that
>> they're not even recognizable as 7-limit intervals.
>
> This is something we've thought about before, and some prime-based
> measures explicitly correct for it. For a simple optimization of
> primes, the situation you suggest shouldn't happen, because it would
> mean 4 and 5 were ignored.

But the error in the 4:7 could be as much as twice the error of the 1:2 plus the error of the 1:7 (and the 1:7 error can be 2.8 times the error of the 1:2 in the case of TOP-max). The way I'm suggesting, it would only be as much as the error of the 2:3 plus the error of the 6:7 at most.
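The bookkeeping behind this point can be checked directly: since 7/4 = (3/2)·(7/6), its error is exactly the 2:3 error plus the 6:7 error, so bounding those two superparticulars bounds 4:7, whereas bounding only the primes allows up to 2|e2| + |e7|. A small sketch (the error values are hypothetical, for illustration only):

```python
def err(monzo, prime_errors):
    """Signed cents error of an interval, given signed prime errors."""
    return sum(m * e for m, e in zip(monzo, prime_errors))

e = [3.0, 0.0, -5.0]             # hypothetical errors of primes 2, 3, 7 (cents)
e_7_4 = err([-2, 0, 1], e)       # 7/4 = 2^-2 * 7
e_3_2 = err([-1, 1, 0], e)       # 3/2
e_7_6 = err([-1, -1, 1], e)      # 7/6
assert e_7_4 == e_3_2 + e_7_6    # errors compose: 7/4 = (3/2) * (7/6)
print(e_7_4)                     # -11.0
```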

>> A while back I considered optimizing a set of superparticular prime
>> intervals (1:2, 2:3, 4:5, 6:7), which has some advantages. For one, all
>> of these in themselves are musically useful intervals. But recently it
>> occurred to me that there's no reason not to consider all the
>> superparticular intervals in the error measurement, up to the desired
>> limit (1:2, 2:3, 3:4, 4:5, 5:6, 6:7). This also allows for the
>> possibility of optimizing non-prime limits such as 9-limit. But how to
>> weight these intervals? You could weight the simpler intervals as more
>> important, or the smaller intervals, and there could be good reasons for
>> doing it either way. For simplicity, I'm using unweighted errors.
>
> The set of intervals you have there has two 3-limit, two more 5-limit,
> and only one more 7-limit. So it naturally under-represents 7, which
> is what will have the most impact on the overall error. What happens
> if you add 7:8?

[<1, 1, 2, 2], <0, 2, 1, 3]>     1200.000 341.632 41.1 1.3 dicot
[<1, 2, 3, 3], <0, -2, -3, -1]>  1200.000 256.425 30.7 1.3 beep
[<3, 5, 7, 9], <0, -1, 0, -2]>    400.193 105.756 16.3 2.1 august
[<2, 3, 5, 6], <0, 1, -2, -2]>    598.878 107.567  8.6 3.0 pajara
[<1, 2, 4, 7], <0, -1, -4, -10]> 1201.529 503.950  5.1 3.6 meantone
[<1, 0, 2, -1], <0, 5, 1, 12]>   1199.920 380.535  3.4 4.3 magic
[<1, 2, -1, -3], <0, -1, 8, 14]> 1200.014 497.916  1.8 5.6 garibaldi
[<1, 0, 1, -3], <0, 6, 5, 22]>   1200.401 316.908  1.5 7.8 catakleismic

Mavila, dominant, and miracle drop down to the next grade (actually, mavila drops down two grades).

[<1, 2, 1, 1], <0, -1, 3, 4]>    1205.761 531.648 35.3 2.1 mavila
[<1, 2, 4, 2], <0, -1, -4, 2]>   1197.856 497.443 17.3 2.4 dominant
[<1, 1, 3, 3], <0, 6, -7, -2]>   1200.625 116.784  2.1 6.8 miracle

What I'm trying to get at with this error measure is that I don't care about 7 per se, but 7 in combination with nearby integers; mainly ones like 4:7, 5:7, 6:7, and 7:8. Especially when you get to higher limits like 11 and 13, it seems that it's more important for the character of scales and melodies to get the small steps right like 11:12 and 12:13, while wider intervals like 8:13 can be off by more.

>> For complexity, I look at the standard generator mapping (with the
>> octave as one of the generators or a multiple of it, and the other
>> generator less than half the size of the first). I look at the
>> difference between the highest and lowest numbers in the mapping of the
>> second generator, but each one is divided by the base 2 log of the
>> prime, to give more weight to lower primes. Musically, it's more
> <snip>
>
> That's the old "range" complexity, isn't it?

I used to just look at the highest and lowest generators without weighting them, if that's what you're thinking of. So hemiwürschmidt with a span from 0 to 16 would have a lower complexity than catakleismic with a span from 0 to 22. But with the weighting that I'm suggesting, catakleismic only has a complexity of 7.8 (because the complexity of the 7 is weighted less), while hemiwürschmidt has a complexity of 10.1 because the complexity of 3 is more significant.

>> Some temperaments like diminished and orwell, which you might expect to
>> show up on a list like this, are evaluated as more complex than others
>> on this list (in this case, dominant beats diminished in the complexity
>> measure, and garibaldi beats orwell).
>
> There's very little difference between garibaldi and orwell, right?

Both have error around 0.7, and similar complexity by this measure (5.6 for garibaldi vs. 5.7 for orwell).

>> The unweighted superparticular measure of error results in a remarkably
>> similar list, but with periods closer to an octave, and higher overall
>> errors. This may give a more realistic estimate of the error of
>> temperaments like beep or the 7-limit extensions of dicot and mavila.
>
> The interval sets I looked at before gave fairly consistent generator
> sizes, provide there were enough intervals. Maybe you're getting
> purer octaves because you're looking at a relatively small set of
> intervals.

It could be; also considering that 1:2 is one of the intervals I'm considering, along with 2:3 and 3:4 which add up to an octave. If octaves are less important to tune accurately, you might consider a set of intervals like 4:5, 5:6, 6:7, 7:8 (which gets you the octave indirectly, but puts more weight on the smaller intervals).

🔗Graham Breed <gbreed@gmail.com>

5/17/2010 10:51:54 PM

On 18 May 2010 06:05, Herman Miller <hmiller@io.com> wrote:

> But the error in the 4:7 could be as much as twice the error of the 1:2
> plus the error of the 1:7 (and the 1:7 error can be 2.8 times the error
> of the 1:2 in the case of TOP-max). The way I'm suggesting, it would
> only be as much as the error of the 2:3 plus the error of the 6:7 at most.

If 4 and 7 have such large errors that don't cancel out, there's no
stretch that's going to remove them. It means 7 isn't well
approximated. You expect errors to increase the more complex an
interval gets. Optimizing the octaves balances the small and large
intervals.

>> is what will have the most impact on the overall error.  What happens
>> if you add 7:8?

> Mavila, dominant, and miracle drop down to the next grade (actually,
> mavila drops down two grades).

Miracle gets worse with more emphasis on 7:8?

> What I'm trying to get at with this error measure is that I don't care
> about 7 per se, but 7 in combination with nearby integers; mainly ones
> like 4:7, 5:7, 6:7, and 7:8. Especially when you get to higher limits
> like 11 and 13, it seems that it's more important for the character of
> scales and melodies to get the small steps right like 11:12 and 12:13,
> while wider intervals like 8:13 can be off by more.

Provided the scale stretch is optimized, it doesn't matter how big the
intervals are, because they get balanced. When the scale stretch
isn't optimized, this is still a controversial goal, but I've covered
it with standard deviations and error ranges.

> I used to just look at the highest and lowest generators without
> weighting them, if that's what you're thinking of. So hemiwürschmidt
> with a span from 0 to 16 would have a lower complexity than catakleismic
> with a span from 0 to 22. But with the weighting that I'm suggesting,
> catakleismic only has a complexity of 7.8 (because the complexity of the
> 7 is weighted less), while hemiwürschmidt has a complexity of 10.1
> because the complexity of 3 is more significant.

You may have been doing that. I've been looking at weighted
generators along with weighted errors and I didn't think I was alone.
Even the odd-limit complexity is weighted in that the 7- and 9-limits
are treated differently.

> It could be; also considering that 1:2 is one of the intervals I'm
> considering, along with 2:3 and 3:4 which add up to an octave. If
> octaves are less important to tune accurately, you might consider a set
> of intervals like 4:5, 5:6, 6:7, 7:8 (which gets you the octave
> indirectly, but puts more weight on the smaller intervals).

Right. The scale stretch affects the largest intervals most strongly.
By making 1:2 the largest interval, you're giving a lot of weight to
pure octaves. The next largest intervals are 2:3 and 3:4 which
roughly balance out. But there'll still be a slight preference for
fifths over fourths.

If you're only interested in such small intervals there's no need to
worry about the scale stretch. What it does is allow large intervals
to have the same errors as small intervals do with pure octaves.

Graham

🔗Herman Miller <hmiller@IO.COM>

5/18/2010 8:18:54 PM

Graham Breed wrote:
> On 18 May 2010 06:05, Herman Miller <hmiller@io.com> wrote:
>
>> But the error in the 4:7 could be as much as twice the error of the 1:2
>> plus the error of the 1:7 (and the 1:7 error can be 2.8 times the error
>> of the 1:2 in the case of TOP-max). The way I'm suggesting, it would
>> only be as much as the error of the 2:3 plus the error of the 6:7 at most.
>
> If 4 and 7 have such large errors that don't cancel out, there's no
> stretch that's going to remove them. It means 7 isn't well
> approximated. You expect errors to increase the more complex an
> interval gets. Optimizing the octaves balances the small and large
> intervals.

Well, one of the issues with the TOP error is that it hides the large error of the 7. Two temperaments can have similar TOP error, but one of them happens to have a 7 that's off in the right direction, so the 4:7 and 5:7 are reasonable, and the other has a 7 that's off in the wrong direction. I'm saying that I don't care about the 7. I just care mainly about intervals like 4:7, 5:7, 6:7, 7:8, and 7:9. In an 11-limit temperament I want the small steps like 10:11 and 11:12 to be around the right size.

>>> is what will have the most impact on the overall error. What happens
>>> if you add 7:8?
>
>> Mavila, dominant, and miracle drop down to the next grade (actually,
>> mavila drops down two grades).
>
> Miracle gets worse with more emphasis on 7:8?

Oops, it seems that my original figures included 7:8 to begin with, and so what I actually added was 8:9!

The actual figures with optimizing 1:2 up to 6:7 are:

[<1, 1, 2, 2], <0, 2, 1, 3]>     1190.655 341.632 38.3 1.3 dicot
[<1, 2, 3, 3], <0, -2, -3, -1]>  1204.871 265.269 28.6 1.3 beep
[<1, 2, 4, 4], <0, -1, -4, -3]>  1205.700 501.414 28.3 1.7 sharptone
[<1, 2, 1, 1], <0, -1, 3, 4]>    1209.949 531.648 22.7 2.1 mavila
[<3, 5, 7, 9], <0, -1, 0, -2]>    398.709 102.427 15.6 2.1 august
[<2, 3, 5, 6], <0, 1, -2, -2]>    598.589 107.106  5.9 3.0 pajara
[<1, 2, 4, 7], <0, -1, -4, -10]> 1201.434 504.303  3.7 3.6 meantone
[<1, -1, -1, -2], <0, 7, 9, 13]> 1200.622 443.477  3.4 4.6 sensi
[<1, 2, -1, -3], <0, -1, 8, 14]> 1199.688 497.854  1.8 5.6 garibaldi
[<1, 1, 3, 3], <0, 6, -7, -2]>   1200.615 116.691  1.7 6.8 miracle
[<1, 0, 1, -3], <0, 6, 5, 22]>   1200.196 316.829  1.5 7.8 catakleismic

And the fact that 7:8 was included may explain why the octaves were closer to 1200.
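With the interval set confirmed as 1:2 through 7:8, the error figures in the tables can be reproduced with a short sketch (my reconstruction; unweighted RMS of the cents errors of those seven intervals):

```python
from math import log2, sqrt

# Superparticular intervals 1:2 .. 7:8 as (numerator, denominator, monzo);
# each monzo lists the exponents of the primes 2, 3, 5, 7.
INTERVALS = [
    (2, 1, (1, 0, 0, 0)),
    (3, 2, (-1, 1, 0, 0)),
    (4, 3, (2, -1, 0, 0)),
    (5, 4, (-2, 0, 1, 0)),
    (6, 5, (1, 1, -1, 0)),
    (7, 6, (-1, -1, 0, 1)),
    (8, 7, (3, 0, 0, -1)),
]

def rms_error(period_map, gen_map, period, gen):
    """Unweighted RMS of the cents errors of the intervals above, for a
    rank-2 temperament with the given mapping and tuning (in cents)."""
    prime_cents = [p * period + g * gen for p, g in zip(period_map, gen_map)]
    errs = []
    for n, d, monzo in INTERVALS:
        tempered = sum(m * c for m, c in zip(monzo, prime_cents))
        errs.append(tempered - 1200 * log2(n / d))
    return sqrt(sum(e * e for e in errs) / len(errs))

# Figures from the 7-limit table: meantone 3.8, blacksmith 15.1
print(round(rms_error([1, 2, 4, 7], [0, -1, -4, -10], 1200.635, 503.671), 1))
print(round(rms_error([5, 8, 12, 14], [0, 0, -1, 0], 239.392, 84.360), 1))
```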

But I'm thinking that my complexity measure still isn't appropriate, and I'm trying an RMS-based complexity measure, using the same intervals as the ones the error measure is evaluating. This happens to give a more complete list of good temperaments.

[<1, 2, 3, 3], <0, -2, -3, -1]>  1200.000 262.346 29.2  4.5 beep
[<1, 2, 4, 2], <0, -1, -4, 2]>   1197.777 497.456 18.5  6.3 dominant
[<5, 8, 12, 14], <0, 0, -1, 0]>   239.392  84.360 15.1  7.1 blacksmith
[<1, 2, 2, 3], <0, -4, 3, -2]>   1200.886 125.915 10.7  9.9 negri
[<1, 2, 4, 3], <0, -2, -8, -1]>  1200.665 251.905 10.3 10.5 semaphore
[<2, 3, 5, 6], <0, 1, -2, -2]>    599.442 108.470  8.0 10.6 pajara
[<3, 5, 7, 8], <0, -1, 0, 2]>     399.360  88.510  8.0 12.0 augene
[<1, 2, 6, 2], <0, -1, -9, 2]>   1198.556 489.296  6.4 12.6 superpyth
[<1, 2, 4, 7], <0, -1, -4, -10]> 1200.635 503.671  3.8 14.4 meantone
[<1, 0, 2, -1], <0, 5, 1, 12]>   1200.030 380.634  3.6 16.1 magic
[<1, 0, 3, 1], <0, 7, -3, 8]>    1200.148 271.512  2.3 16.5 orwell
[<1, 1, 3, 3], <0, 6, -7, -2]>   1200.379 116.656  1.7 18.9 miracle
[<1, 0, 1, -3], <0, 6, 5, 22]>   1200.231 316.832  1.4 28.9 catakleismic

Subjectively this seems like a better list, as it includes magic and orwell, augene over august, and excludes mavila. Garibaldi gets a higher complexity rating than miracle, which seems reasonable.
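The thread doesn't spell out the formula, but the complexity figures in the table above are consistent with one reading of "RMS-based": the root-sum-square (unnormalized RMS) of the generator-step counts of the same seven intervals, times the number of periods per octave. A sketch under that assumption:

```python
from math import sqrt

# Monzos (exponents of 2, 3, 5, 7) of the intervals 1:2 .. 7:8.
MONZOS = [
    (1, 0, 0, 0), (-1, 1, 0, 0), (2, -1, 0, 0), (-2, 0, 1, 0),
    (1, 1, -1, 0), (-1, -1, 0, 1), (3, 0, 0, -1),
]

def rms_complexity(gen_map, periods_per_octave):
    """Root-sum-square of the generator counts needed for each interval,
    times the number of periods per octave (my reconstruction)."""
    steps = [sum(m * g for m, g in zip(mz, gen_map)) for mz in MONZOS]
    return periods_per_octave * sqrt(sum(s * s for s in steps))

print(round(rms_complexity([0, -2, -3, -1], 1), 1))    # beep: 4.5
print(round(rms_complexity([0, 1, -2, -2], 2), 1))     # pajara: 10.6
print(round(rms_complexity([0, 6, 5, 22], 1), 1))      # catakleismic: 28.9
```

This reproduces every complexity value in the table, which is why I read "RMS-based" as a sum of squares rather than a mean of squares.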

I tried this on 11-limit temperaments, optimizing superparticular intervals up to 12/11.

septimal
[<7, 11, 16, 20, 24], <0, 0, 0, -1, 0]> 171.877 68.090 21.1 9.9

dominant
[<1, 2, 4, 2, 1], <0, -1, -4, 2, 6]> 1196.256 494.680 18.6 14.0

blacksmith
[<5, 8, 12, 14, 18], <0, 0, -1, 0, -2]> 238.843 74.965 17.4 14.1

porcupine
[<1, 2, 3, 2, 4], <0, -3, -5, 6, -4]> 1197.903 163.136 10.3 14.2

keemun
[<1, 0, 1, 2, 4], <0, 6, 5, 3, -2]> 1199.024 317.276 9.5 20.5

mohajira
[<1, 1, 0, 6, 2], <0, 2, 8, -11, 5]> 1200.880 348.638 6.5 21.2

magic
[<1, 0, 2, -1, 6], <0, 5, 1, 12, -8]> 1199.838 380.605 4.8 26.3

orwell
[<1, 0, 3, 1, 3], <0, 7, -3, 8, 2]> 1200.131 271.621 3.6 28.4

amity
[<1, 3, 6, -2, 6], <0, -5, -13, 17, -9]> 1199.747 339.361 3.2 34.6

miracle
[<1, 1, 3, 3, 2], <0, 6, -7, -2, 15]> 1200.530 116.721 1.9 37.8

catakleismic
[<1, 0, 1, -3, 9], <0, 6, 5, 22, -21]> 1200.408 316.857 1.5 49.4

wizard
[<2, 1, 5, 2, 8], <0, 6, -1, 10, -3]> 600.165 216.888 1.5 50.4

unidec
[<2, 5, 8, 5, 6], <0, -6, -11, 2, 3]> 600.125 183.186 1.2 53.2

harry
[<2, 4, 7, 7, 9], <0, -6, -17, -10, -15]> 600.089 83.173 1.2 58.4

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/18/2010 10:30:33 PM

--- In tuning-math@yahoogroups.com, Herman Miller <hmiller@...> wrote:

> The actual figures with optimizing 1:2 up to 6:7 are:

It seems to me that if you are going to concentrate on smaller intervals, you might as well make octaves pure. What's the percentage in doing otherwise? Now all you need to look at is the range up to sqrt(2), and all else follows.
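Gene's observation can be checked in one line: with pure octaves (zero error on prime 2), an interval and its octave complement always have errors of equal size and opposite sign, so only intervals up to sqrt(2) need to be examined. A sketch with hypothetical prime errors:

```python
def err(monzo, prime_errs):
    """Signed cents error of an interval from signed prime errors."""
    return sum(m * e for m, e in zip(monzo, prime_errs))

e = [0.0, 4.0, -2.0, 3.0]    # prime errors for 2, 3, 5, 7; octave pure
fifth  = (-1, 1, 0, 0)       # 3/2
fourth = (2, -1, 0, 0)       # 4/3, its octave complement
assert err(fifth, e) == -err(fourth, e)
print(err(fifth, e))         # 4.0
```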