
Top-ranking 2.5.11.13-limit temperaments

🔗Herman Miller <hmiller@IO.COM>

3/26/2008 7:34:53 PM

Here are a few of the best 2.5.11.13-limit temperaments, as found by wedging optimal ETs, limiting error to less than 10 (cents per octave) and complexity to less than 50. I've included a logflat badness measurement along with the complexity and error. Petr Pařízek's temperament really stands out with its low badness of 0.016301.
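
The badness figures here are consistent with the usual logflat recipe b = (e/1200) * c^(n/(n-r)), with n = 4 primes in the 2.5.11.13 subgroup and r = 2 for these rank-2 temperaments, so the exponent is 2. A minimal sketch of that formula, assuming e is in cents per octave (the function name is mine, not from anyone's actual code):

def logflat_badness(c, e, n=4, r=2):
    """Logflat badness: error (converted to octaves) times
    complexity raised to n/(n-r), here 4/(4-2) = 2."""
    return (e / 1200.0) * c ** (n / (n - r))

# Spot checks against the figures in the lists below:
assert abs(logflat_badness(12.555849, 0.124082) - 0.016301) < 1e-5
assert abs(logflat_badness(2.597928, 8.441030) - 0.047475) < 1e-5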

Gold medal temperaments:
[<3, 7, 10, 11], <0, 0, 1, 0]> (400.844740, 142.870544)
c = 2.597928, e = 8.441030, b = 0.047475
[<2, 5, 7, 8], <0, -1, 0, -2]> (596.366755, 178.647793)
c = 4.140159, e = 7.266490, b = 0.103795
[<1, 2, 3, 4], <0, 2, 3, -2]> (1200.000000, 188.448896)
c = 5.071741, e = 4.055217, b = 0.086925
[<1, 2, 5, 4], <0, 1, -5, -1]> (1202.756537, 374.400159)
c = 5.883697, e = 2.756537, b = 0.079521
[<1, 3, 3, 3], <0, -3, 2, 3]> (1202.232620, 275.189376)
c = 6.877559, e = 2.232620, b = 0.088004
[<1, 5, 5, 6], <0, -7, -4, -6]> (1198.226337, 458.419470)
c = 9.525147, e = 1.773663, b = 0.134101
[<1, 1, 1, 2], <0, 7, 13, 9]> (1199.384989, 226.908105)
c = 11.861587, e = 0.615011, b = 0.072109
[<1, 6, 3, 6], <0, -8, 1, -5]> (1199.875918, 551.653738)
c = 12.555849, e = 0.124082, b = 0.016301
[<1, 1, -5, 0], <0, 5, 32, 14]> (1199.976283, 317.212082)
c = 29.380455, e = 0.119306, b = 0.085822
[<1, 2, 3, 2], <0, 7, 10, 37]> (1200.011121, 55.157066)
c = 30.120391, e = 0.082695, b = 0.062520
[<1, 2, 7, 14], <0, 1, -11, -32]> (1200.006105, 386.239280)
c = 30.414513, e = 0.026798, b = 0.020658
[<3, 8, 9, 8], <0, -3, 4, 9]> (399.991700, 137.854023)
c = 36.987094, e = 0.024900, b = 0.028387

Silver medal temperaments:
[<1, 2, 4, 4], <0, 1, -2, -1]> (1206.510943, 352.701580)
c = 3.285769, e = 8.867738, b = 0.079782
[<1, 2, 3, 3], <0, 3, 4, 6]> (1192.733510, 139.239654)
c = 5.012065, e = 7.266490, b = 0.152117
[<3, 7, 10, 11], <0, 0, 2, 1]> (398.423931, 75.360862)
c = 6.040104, e = 4.728206, b = 0.143749
[<1, 1, 3, 5], <0, 3, 1, -3]> (1203.364877, 529.582749)
c = 6.576320, e = 3.364877, b = 0.121270
[<3, 7, 10, 12], <0, 0, 1, -2]> (399.020013, 161.117812)
c = 7.489194, e = 2.939961, b = 0.137414
[<2, 5, 8, 7], <0, -1, -3, 1]> (601.254285, 222.464855)
c = 7.669669, e = 2.508570, b = 0.122970
[<1, 0, 3, 0], <0, 5, 1, 8]> (1198.385254, 556.162181)
c = 8.346547, e = 2.369931, b = 0.137584
[<8, 19, 28, 30], <0, -1, -1, -1]> (150.271565, 63.801586)
c = 10.476748, e = 2.172523, b = 0.198718
[<1, 2, 7, 5], <0, 1, -11, -4]> (1198.062068, 385.628385)
c = 11.923802, e = 1.964399, b = 0.232744
[<4, 9, 13, 14], <0, 1, 3, 3]> (300.161421, 81.309312)
c = 11.924607, e = 1.529595, b = 0.181252
[<1, 2, 3, 3], <0, 7, 10, 15]> (1198.695738, 55.974274)
c = 12.302662, e = 1.304262, b = 0.164506
[<1, 2, 8, 5], <0, 1, -14, -4]> (1199.440155, 388.733325)
c = 14.521730, e = 0.559845, b = 0.098384
[<1, 3, 4, 6], <0, -5, -4, -17]> (1199.724635, 162.335560)
c = 14.780847, e = 0.509229, b = 0.092711
[<1, -3, -3, -2], <0, 14, 17, 15]> (1199.495567, 455.973511)
c = 18.955416, e = 0.504433, b = 0.151039
[<1, 8, 8, 9], <0, -15, -12, -14]> (1200.331683, 454.373579)
c = 19.689429, e = 0.331683, b = 0.107154
[<1, -4, 2, -7], <0, 13, 3, 22]> (1199.837061, 583.561922)
c = 22.568698, e = 0.276939, b = 0.117548
[<1, 8, 17, 12], <0, -13, -31, -19]> (1199.863847, 524.070246)
c = 27.348895, e = 0.136153, b = 0.084864
[<3, 6, 9, 10], <0, 7, 10, 8]> (399.956149, 55.181623)
c = 29.323081, e = 0.131554, b = 0.094264
[<1, 2, 3, 4], <0, 14, 20, -13]> (1199.906238, 27.596179)
c = 34.412286, e = 0.093762, b = 0.092527
[<1, -1, 6, 8], <0, 17, -13, -22]> (1199.946804, 234.493179)
c = 43.559675, e = 0.053196, b = 0.084113

🔗Carl Lumma <carl@lumma.org>

3/26/2008 9:23:39 PM

What does the list look like if you use logflat badness
with a complexity cutoff equal to the highest complexity
on the gold list?

-Carl

At 07:34 PM 3/26/2008, you wrote:
>Here are a few of the best 2.5.11.13-limit temperaments, as found by
>wedging optimal ETs, limiting error to less than 10 (cents per octave)
>and complexity to less than 50. I've included a logflat badness
>measurement along with the complexity and error. Petr Pařízek's
>temperament really stands out with its low badness of 0.016301.
//

🔗Herman Miller <hmiller@IO.COM>

3/27/2008 7:47:05 PM

Carl Lumma wrote:
> What does the list look like if you use logflat badness
> with a complexity cutoff equal to the highest complexity
> on the gold list?

The list doesn't depend on a badness measurement; the badness measurements are printed out after the list has been created. Or are you asking what a list of all temperaments under some logflat badness cutoff would look like?

Here's a list of temperaments with complexity under 36.987094 and logflat badness under 0.134101 (from the same list of optimal ETs). I haven't tested this very thoroughly, so it may be missing some.

11&13 <<8, -1, 5, -30, -18, 21]] [<1, 6, 3, 6], <0, -8, 1, -5]>
c = 12.555849, e = 0.124082, b = 0.016301
P = 1199.875918, G = 551.653738

28&59 <<1, -11, -32, -29, -78, -70]] [<1, 2, 7, 14], <0, 1, -11, -32]>
c = 30.414513, e = 0.026798, b = 0.020658
P = 1200.006105, G = 386.239280

9&78 <<9, -12, -27, -59, -96, -49]] [<3, 8, 9, 8], <0, -3, 4, 9]>
c = 36.987094, e = 0.024900, b = 0.028387
P = 399.991700, G = 137.854023

3&6 <<0, 3, 0, 7, 0, -11]] [<3, 7, 10, 11], <0, 0, 1, 0]>
c = 2.597928, e = 8.441030, b = 0.047475
P = 400.844740, G = 142.870544

1f&2f <<1, 1, 2, -1, 1, 3]] [<1, 2, 3, 3], <0, 1, 1, 2]>
c = 1.735445, e = 24.332027, b = 0.061069
P = 1224.332027, G = 394.146877

22&65 <<7, 10, 37, -1, 60, 91]] [<1, 2, 3, 2], <0, 7, 10, 37]>
c = 30.120391, e = 0.082695, b = 0.062520
P = 1200.011121, G = 55.157066

5e&16 <<7, 13, 9, 6, -5, -17]] [<1, 1, 1, 2], <0, 7, 13, 9]>
c = 11.861587, e = 0.615011, b = 0.072109
P = 1199.384989, G = 226.908105

3&13 <<1, -5, -1, -15, -6, 15]] [<1, 2, 5, 4], <0, 1, -5, -1]>
c = 5.883697, e = 2.756537, b = 0.079521
P = 1202.756537, G = 374.400159

3&4 <<1, -2, -1, -8, -6, 4]] [<1, 2, 4, 4], <0, 1, -2, -1]>
c = 3.285769, e = 8.867738, b = 0.079782
P = 1206.510943, G = 352.701580

16&71 <<13, 31, 19, 27, -4, -49]] [<1, 8, 17, 12], <0, -13, -31, -19]>
c = 27.348895, e = 0.136153, b = 0.084864
P = 1199.863847, G = 524.070246

34&53 <<5, 32, 14, 57, 14, -70]] [<1, 1, -5, 0], <0, 5, 32, 14]>
c = 29.380455, e = 0.119306, b = 0.085822
P = 1199.976283, G = 317.212082

6&7 <<2, 3, -2, 0, -12, -18]] [<1, 2, 3, 4], <0, 2, 3, -2]>
c = 5.071741, e = 4.055217, b = 0.086925
P = 1200.000000, G = 188.448896

4&9 <<3, -2, -3, -15, -18, -3]] [<1, 3, 3, 3], <0, -3, 2, 3]>
c = 6.877559, e = 2.232620, b = 0.088004
P = 1202.232620, G = 275.189376

43&44 <<14, 20, -13, -2, -82, -119]] [<1, 2, 3, 4], <0, 14, 20, -13]>
c = 34.412286, e = 0.093762, b = 0.092527
P = 1199.906238, G = 27.596179

15&22 <<5, 4, 17, -8, 21, 44]] [<1, 3, 4, 6], <0, -5, -4, -17]>
c = 14.780847, e = 0.509229, b = 0.092711
P = 1199.724635, G = 162.335560

21&45ef <<21, 30, 24, -3, -22, -28]] [<3, 6, 9, 10], <0, 7, 10, 8]>
c = 29.323081, e = 0.131554, b = 0.094264
P = 399.956149, G = 55.181623

3&34 <<1, -14, -4, -36, -13, 38]] [<1, 2, 8, 5], <0, 1, -14, -4]>
c = 14.521730, e = 0.559845, b = 0.098384
P = 1199.440155, G = 388.733325

1f&5e <<2, 3, 4, 0, 2, 3]] [<1, 2, 3, 3], <0, 2, 3, 4]>
c = 3.276620, e = 11.321475, b = 0.101292
P = 1188.678525, G = 208.149413

9&28 <<3, -5, -12, -22, -39, -23]] [<1, 2, 4, 5], <0, 3, -5, -12]>
c = 15.054768, e = 0.547076, b = 0.103327
P = 1199.452924, G = 129.559378

2f&6 <<2, 0, 4, -7, 2, 14]] [<2, 5, 7, 8], <0, -1, 0, -2]>
c = 4.140159, e = 7.266490, b = 0.103795
P = 596.366755, G = 178.647793

6&81 <<6, 21, -18, 28, -64, -140]] [<3, 6, 7, 14], <0, 2, 7, -6]>
c = 35.389467, e = 0.099675, b = 0.104028
P = 399.973229, G = 193.121451

8&29 <<15, 12, 14, -24, -23, 4]] [<1, 8, 8, 9], <0, -15, -12, -14]>
c = 19.689429, e = 0.331683, b = 0.107154
P = 1200.331683, G = 454.373579

3&84 <<3, -33, -9, -87, -32, 91]] [<3, 7, 10, 11], <0, -1, 11, 3]>
c = 34.927157, e = 0.109973, b = 0.111797
P = 399.972178, G = 13.746883

16&34 <<6, 18, 10, 21, 1, -32]] [<2, 5, 8, 8], <0, -3, -9, -5]>
c = 15.720078, e = 0.547438, b = 0.112736
P = 600.273719, G = 72.108665

2f&35f <<13, 3, 22, -38, 3, 65]] [<1, -4, 2, -7], <0, 13, 3, 22]>
c = 22.568698, e = 0.276939, b = 0.117548
P = 1199.837061, G = 583.561922

6&31 <<2, 9, -8, 14, -26, -61]] [<1, 2, 2, 5], <0, 2, 9, -8]>
c = 15.158863, e = 0.617283, b = 0.118205
P = 1199.382717, G = 194.490784

3&5e <<1, 4, 2, 6, 1, -8]] [<1, 2, 2, 3], <0, 1, 4, 2]>
c = 3.615689, e = 10.991706, b = 0.119747
P = 1189.008294, G = 433.819075

7&9 <<3, 1, -3, -8, -18, -14]] [<1, 1, 3, 5], <0, 3, 1, -3]>
c = 6.576320, e = 3.364877, b = 0.121270
P = 1203.364877, G = 529.582749

6&10e <<2, 6, -2, 7, -12, -29]] [<2, 5, 8, 7], <0, -1, -3, 1]>
c = 7.669669, e = 2.508570, b = 0.122970
P = 601.254285, G = 222.464855

37&48f <<21, 2, 27, -68, -15, 86]] [<1, 8, 4, 11], <0, -21, -2, -27]>
c = 33.848106, e = 0.132628, b = 0.126626
P = 1200.132628, G = 324.497112

1f&4 <<1, 2, 3, 1, 3, 3]] [<1, 2, 3, 3], <0, 1, 2, 3]>
c = 2.527519, e = 24.310940, b = 0.129422
P = 1224.310940, G = 281.243578

22&28 <<2, 6, 20, 7, 39, 47]] [<2, 5, 8, 11], <0, -1, -3, -10]>
c = 17.082452, e = 0.547438, b = 0.133123
P = 600.273719, G = 216.325995

9&41 <<6, -7, -15, -37, -57, -26]] [<1, 1, 5, 7], <0, 6, -7, -15]>
c = 21.932326, e = 0.332116, b = 0.133131
P = 1200.332116, G = 264.201742

8&13 <<7, 4, 6, -15, -12, 6]] [<1, 5, 5, 6], <0, -7, -4, -6]>
c = 9.525147, e = 1.773663, b = 0.134101
P = 1198.226337, G = 458.419470

🔗Carl Lumma <carl@lumma.org>

3/27/2008 11:22:39 PM

Herman wrote...

>Or are
>you asking what a list of all temperaments under some logflat
>badness cutoff would look like?

All the temperaments on your source list, yes. I wish you'd
stop saying "all temperaments".

>Here's a list of temperaments with complexity under 36.987094 and
>logflat badness under 0.134101 (from the same list of optimal ETs).

Thanks!

>1f&2f <<1, 1, 2, -1, 1, 3]] [<1, 2, 3, 3], <0, 1, 1, 2]>
> c = 1.735445, e = 24.332027, b = 0.061069
> P = 1224.332027, G = 394.146877

This is the first one that isn't obtained by sorting your
gold medal list by badness. How does that make you feel
about your method? Like it better, worse, or...?

-Carl

🔗Herman Miller <hmiller@IO.COM>

3/28/2008 6:55:43 PM

Carl Lumma wrote:

>> 1f&2f <<1, 1, 2, -1, 1, 3]] [<1, 2, 3, 3], <0, 1, 1, 2]>
>> c = 1.735445, e = 24.332027, b = 0.061069
>> P = 1224.332027, G = 394.146877
>
> This is the first one that isn't obtained by sorting your
> gold medal list by badness. How does that make you feel
> about your method? Like it better, worse, or...?

I expect the gold list by itself will have a few outstanding temperaments that might obscure other temperaments that are almost as good. The same goes for the silver list by itself (as I've seen with negri vs. keemun). By looking at the top 3 grades as a group, I was hoping to smooth out some of the unevenness. (I would've liked to say "ranks", but that would invite confusion with the "rank" of a temperament. So "grades" it is.)

What I tend to see, though, is that unevenness in one grade tends to propagate to the lower grades. And logflat badness ends up looking pretty good as a default standard. The main thing I'm expecting to get out of this, now that I've seen some graphs, is a sense of what the maximum badness of a "really good" temperament in any specified limit might be, plus some minor tweaks to the exponent of the badness measure on a case-by-case basis.

It seems to be fairly consistent that the gold and silver temperaments are also low in logflat badness; the higher-badness grades tend to be mixed more. Beyond the bronze grade I've got "iron" and "stone" (from Bronze Age, Iron Age, and Stone Age). It's not uncommon for the irons to be mixed up with the stones when sorted by badness. And even the neat separation between gold and silver 7-limit temperaments doesn't hold up for higher complexity. Tertiaseptal, enneadecal, and sesquiquartififths (in the silver grade) are as low in badness as some of the best gold temperaments (better than meantone and miracle); they just happen to be outranked by ennealimmal.

🔗Carl Lumma <carl@lumma.org>

3/28/2008 9:26:35 PM

My two suggestions would be:

1. Best n temperaments (lowest logflat badness) with
complexity < y.

2. Successive improvements in logflat badness as complexity
goes from 1 to y.

I'd love to see a showdown between these techniques. I
suspect that technique #2 will satisfy almost everyone.
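
A minimal sketch of the two techniques, assuming each temperament is reduced to a (complexity, badness) pair; the function names are hypothetical, not anyone's actual code:

def best_n(temperaments, n, y):
    """#1: the n lowest-badness temperaments with complexity < y."""
    eligible = [t for t in temperaments if t[0] < y]
    return sorted(eligible, key=lambda t: t[1])[:n]

def successive_improvements(temperaments, y):
    """#2: scan in order of increasing complexity up to y, keeping
    each temperament that beats the best badness seen so far."""
    best = float("inf")
    kept = []
    for c, b in sorted(t for t in temperaments if t[0] < y):
        if b < best:
            best = b
            kept.append((c, b))
    return kept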

-Carl


🔗Graham Breed <gbreed@gmail.com>

3/28/2008 10:55:15 PM

Carl Lumma wrote:
> My two suggestions would be:
>
> 1. Best n temperaments (lowest logflat badness) with
> complexity < y.
>
> 2. Successive improvements in logflat badness as complexity
> goes from 1 to y.
>
> I'd love to see a showdown between these techniques. I
> suspect that technique #2 will satisfy almost everyone.

Currently orwell's gold, but it wouldn't pass #2 would it? Not that I'd be dissatisfied with such a failure.

Graham

🔗Carl Lumma <carl@lumma.org>

3/28/2008 11:20:22 PM

At 10:55 PM 3/28/2008, you wrote:
>Carl Lumma wrote:
>> My two suggestions would be:
>>
>> 1. Best n temperaments (lowest logflat badness) with
>> complexity < y.
>>
>> 2. Successive improvements in logflat badness as complexity
>> goes from 1 to y.
>>
>> I'd love to see a showdown between these techniques. I
>> suspect that technique #2 will satisfy almost everyone.
>
>Currently orwell's gold, but it wouldn't pass #2 would it?
>Not that I'd be dissatisfied with such a failure.

Method #2 will make it harder for more complex temperaments
to get in, but the best of them should make it and in turn it
should satisfy those who like higher-error temperaments, who
were often displeased with Gene's lists (which were obtained
by method #1 I believe).

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/28/2008 11:47:22 PM

Carl Lumma wrote:
> At 10:55 PM 3/28/2008, you wrote:
>> Carl Lumma wrote:
>>> My two suggestions would be:
>>>
>>> 1. Best n temperaments (lowest logflat badness) with
>>> complexity < y.
>>>
>>> 2. Successive improvements in logflat badness as complexity
>>> goes from 1 to y.
>>>
>>> I'd love to see a showdown between these techniques. I
>>> suspect that technique #2 will satisfy almost everyone.
>> Currently orwell's gold, but it wouldn't pass #2 would it?
>> Not that I'd be dissatisfied with such a failure.

I was being dismissive of orwell there. It's very close to garibaldi (the simpler 7-limit schismatic) in both error and complexity. According to my own list, garibaldi has a lower error by every measure other than unweighted RMS of the 7-odd limit. Garibaldi also has a lower complexity according to the max-Kees (range) and max-abs wedgie complexities. That's what I remembered, and it means garibaldi is better than orwell on both counts, so orwell can never have the lower badness.

However, now that I check the figures, I see that orwell has a lower complexity by the average-based complexity measures: scalar, STD, and sum-abs (Erlich) complexity. And by simple error*complexity badness (STD error and complexity, or TOP-RMS error and scalar complexity) orwell is slightly better. As logflat badness gives complexity more weight than error*complexity does, it follows that orwell will still be better.

I have to revise my criteria for satisfaction in that case. In reality, orwell and garibaldi are equally good. Any badness cluster that lets one through but not the other is being capricious. A first past the post cutoff like you suggest will always run into problems like this. It's still fine as a first approximation though.

> Method #2 will make it harder for more complex temperaments
> to get in, but the best of them should make it and in turn it
> should satisfy those who like higher-error temperaments, who
> were often displeased with Gene's lists (which were obtained
> by method #1 I believe).

Or shift more weight onto complexity.

Graham

🔗Carl Lumma <carl@lumma.org>

3/29/2008 12:17:08 AM

Graham wrote...

>>>> My two suggestions would be:
>>>>
>>>> 1. Best n temperaments (lowest logflat badness) with
>>>> complexity < y.
>>>>
>>>> 2. Successive improvements in logflat badness as complexity
>>>> goes from 1 to y.
>>>>
>>>> I'd love to see a showdown between these techniques. I
>>>> suspect that technique #2 will satisfy almost everyone.
>>> Currently orwell's gold, but it wouldn't pass #2 would it?
>>> Not that I'd be dissatisfied with such a failure.
//
>I have to revise my criteria for satisfaction in that case.
>In reality, orwell and garibaldi are equally good. Any
>badness cluster that lets one through but not the other is
>being capricious. A first past the post cutoff like you
>suggest will always run into problems like this. It's still
>fine as a first approximation though.

That sounds like a problem of not knowing which complexity
is right rather than a problem with the successive improvements
method.

>> Method #2 will make it harder for more complex temperaments
>> to get in, but the best of them should make it and in turn it
>> should satisfy those who like higher-error temperaments, who
>> were often displeased with Gene's lists (which were obtained
>> by method #1 I believe).
>
>Or shift more weight onto complexity.

Yes. But then the question is, how to agree on that weight.
Logflat puts as much as possible while still producing infinite
lists of temperaments under any badness limit (according
to Gene). That's a special amount of weight. People will still
disagree about the additional complexity cutoff, so their lists
won't have the same number of temperaments on it, but at least
they'll have the same badness numbers for temperaments that
their lists *do* share.

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/29/2008 1:05:00 AM

Carl Lumma wrote:
> Graham wrote...

>> I have to revise my criteria for satisfaction in that case.
>> In reality, orwell and garibaldi are equally good. Any
>> badness cluster that lets one through but not the other is
>> being capricious. A first past the post cutoff like you
>> suggest will always run into problems like this. It's still
>> fine as a first approximation though.
>
> That sounds like a problem of not knowing which complexity
> is right rather than a problem with the successive improvements
> method.

But how *can* we know which complexity's right? As long as we're living with uncertainty, an all-purpose badness clustering should be robust given different, reasonable complexity and error measures.

>>> Method #2 will make it harder for more complex temperaments
>>> to get in, but the best of them should make it and in turn it
>>> should satisfy those who like higher-error temperaments, who
>>> were often displeased with Gene's lists (which were obtained
>>> by method #1 I believe).
>> Or shift more weight onto complexity.
>
> Yes. But then the question is, how to agree on that weight.
> Logflat puts as much as possible while still producing infinite
> lists of temperaments under any badness limit (according
> to Gene). That's a special amount of weight. People will still
> disagree about the additional complexity cutoff, so their lists
> won't have the same number of temperaments on it, but at least
> they'll have the same badness numbers for temperaments that
> their lists *do* share.

I don't think we'll ever agree on a single list. You always have an idea of what size of temperament you want and you should make that subjective choice explicit. You say it's a special amount of weight, which it is, but you don't like the results because in your opinion there aren't enough low-complexity classes. So now you're looking for another special list and hope it gives results that suit our subjective preconceptions.

Which comes back to my parametric badness. It should be similar to Ec + kc, where E is error, c is complexity, and k is the free parameter. For k=0 it's the same as linear-flat badness (probably, although Gene said number theorists don't use our specific complexity measures, so we can't be sure). That works fairly well but gives too much emphasis to low-error temperaments. Adding the kc term hardly affects high-error temperaments but makes the error irrelevant below a given point.

I prefer this to log-flat badness because it entails a natural upper limit to the complexity (where E=0). You can't get away from the subjective choice of k but in reality there's a subjective choice of complexity cut-off with a list of highest log-flat badness.

There have been similar parametric badnesses proposed in the past, like E+kc. But mine's different because it's a small perturbation from simple badness. I haven't seen that proposed before. It also has the great benefit that there's a geometric version that gives a clear relationship between badnesses of different rank temperaments.
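
A minimal sketch of that parametric badness, taking Ec + kc at face value (E = error, c = complexity, k = the free parameter; "cutoff" is a hypothetical name, not from Graham's actual module):

def parametric_badness(E, c, k):
    """Graham's Ec + kc: simple badness plus a complexity penalty.
    With k = 0 this reduces to simple badness E*c."""
    return E * c + k * c

def complexity_ceiling(cutoff, k):
    """Even a perfect temperament (E = 0) has badness k*c, so
    badness < cutoff forces c < cutoff/k: the natural upper
    limit on complexity mentioned above."""
    return cutoff / k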

Instead of "gold", "silver", and "bronze" you can have the best exo-, micro-, and nano-temperaments. And, of course, you can list the temperaments for which there's a value of k that makes them top of the list.

Graham

🔗Carl Lumma <carl@lumma.org>

3/29/2008 1:52:00 AM

>>> I have to revise my criteria for satisfaction in that case.
>>> In reality, orwell and garibaldi are equally good. Any
>>> badness cluster that lets one through but not the other is
>>> being capricious. A first past the post cutoff like you
>>> suggest will always run into problems like this. It's still
>>> fine as a first approximation though.
>>
>> That sounds like a problem of not knowing which complexity
>> is right rather than a problem with the successive improvements
>> method.
>
>But how *can* we know which complexity's right?

There should be a way. In the meantime, have you any suggestion
for a more robust method?

>> Yes. But then the question is, how to agree on that weight.
>> Logflat puts as much as possible while still producing infinite
>> lists of temperaments under any badness limit (according
>> to Gene). That's a special amount of weight. People will still
>> disagree about the additional complexity cutoff, so their lists
>> won't have the same number of temperaments on it, but at least
>> they'll have the same badness numbers for temperaments that
>> their lists *do* share.
>
>I don't think we'll ever agree on a single list.

Nor should we. But we might agree on a badness.

>There have been similar parametric badnesses proposed in the
>past, like E+kc. But mine's different because it's a small
>perturbation from simple badness. I haven't seen that
>proposed before. It also has the great benefit that there's
>a geometric version that gives a clear relationship between
>badnesses of different rank temperaments.

What's that geometry again?

-Carl

🔗Carl Lumma <carl@lumma.org>

3/29/2008 1:57:56 AM

>There have been similar parametric badnesses proposed in the
>past, like E+kc. But mine's different because it's a small
>perturbation from simple badness. I haven't seen that
>proposed before. It also has the great benefit that there's
>a geometric version that gives a clear relationship between
>badnesses of different rank temperaments.
//
>You can't get away from the subjective choice of k but in
>reality there's a subjective choice of complexity cut-off
>with a list of highest log-flat badness.

I can't believe any k can make the list finite for E+kc < foo.
If it doesn't, then you need a cutoff too.

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/29/2008 9:18:03 AM

Carl Lumma wrote:
>> There have been similar parametric badnesses proposed in the
>> past, like E+kc. But mine's different because it's a small
>> perturbation from simple badness. I haven't seen that
>> proposed before. It also has the great benefit that there's
>> a geometric version that gives a clear relationship between
>> badnesses of different rank temperaments.
> //
>> You can't get away from the subjective choice of k but in
>> reality there's a subjective choice of complexity cut-off
>> with a list of highest log-flat badness.
>
> I can't believe any k can make the list finite for E+kc < foo.
> If it doesn't, then you need a cutoff too.

Maybe not, but it can for Ec + kc < foo.

Let's see, taking complexity c = d for a d-note equal temperament, the condition becomes E < foo/d - k. That becomes impossible once d > foo/k (the right-hand side goes negative), so it's bounded in that direction. When d=1 you have E < foo - k. That might give a lot of 1-note equal temperaments.

Graham

🔗Graham Breed <gbreed@gmail.com>

3/29/2008 9:22:33 AM

Carl Lumma wrote:
>>>> I have to revise my criteria for satisfaction in that case.
>>>> In reality, orwell and garibaldi are equally good. Any
>>>> badness cluster that lets one through but not the other is
>>>> being capricious. A first past the post cutoff like you
>>>> suggest will always run into problems like this. It's still
>>>> fine as a first approximation though.
>>> That sounds like a problem of not knowing which complexity
>>> is right rather than a problem with the successive improvements
>>> method.
>> But how *can* we know which complexity's right?
>
> There should be a way. In the meantime, have you any suggestion
> for a more robust method?

Scalar, STD, range, and max-abs wedgie complexity agree reasonably well. A best-of list with equivalent cutoffs should mostly agree. Ideally you'd cut it off in a big gap but there might not be any big gaps.

>> There have been similar parametric badnesses proposed in the
>> past, like E+kc. But mine's different because it's a small
>> perturbation from simple badness. I haven't seen that
>> proposed before. It also has the great benefit that there's
>> a geometric version that gives a clear relationship between
>> badnesses of different rank temperaments.
>
> What's that geometry again?

Taking vals as points, the badness of an equal temperament is its distance from the origin. For rank 2 it's the area involving two points and the origin. For rank 3 a volume, and so on.
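
One way to realize that geometry is through the Gram determinant of the val vectors; whether the vals should be weighted first, or the result divided by r!, is an assumption left open in this sketch:

import math

def det(m):
    """Determinant by cofactor expansion (fine for tiny matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def simplex_volume(vals):
    """Volume of the simplex spanned by the origin and the given vals:
    one val gives its distance from the origin, two give the area of
    the triangle with the origin, three a tetrahedron, and so on."""
    r = len(vals)
    # Gram matrix of inner products between the val vectors
    gram = [[sum(a * b for a, b in zip(vi, vj)) for vj in vals]
            for vi in vals]
    return math.sqrt(det(gram)) / math.factorial(r)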

Graham

🔗Carl Lumma <carl@lumma.org>

3/29/2008 11:47:34 AM

At 09:18 AM 3/29/2008, you wrote:
>Carl Lumma wrote:
>>> There have been similar parametric badnesses proposed in the
>>> past, like E+kc. But mine's different because it's a small
>>> perturbation from simple badness. I haven't seen that
>>> proposed before. It also has the great benefit that there's
>>> a geometric version that gives a clear relationship between
>>> badnesses of different rank temperaments.
>> //
>>> You can't get away from the subjective choice of k but in
>>> reality there's a subjective choice of complexity cut-off
>>> with a list of highest log-flat badness.
>>
>> I can't believe any k can make the list finite for E+kc < foo.
>> If it doesn't, then you need a cutoff too.
>
>Maybe not, but it can for Ec + kc < foo.

Sorry, got the wrong one. Can it? I'm surprised. I
thought it'd have to be an exponent.

-C.

🔗Carl Lumma <carl@lumma.org>

3/29/2008 12:26:52 PM

Graham wrote...

>>> There have been similar parametric badnesses proposed in the
>>> past, like E+kc. But mine's different because it's a small
>>> perturbation from simple badness. I haven't seen that
>>> proposed before. It also has the great benefit that there's
>>> a geometric version that gives a clear relationship between
>>> badnesses of different rank temperaments.
>>
>> What's that geometry again?
>
>Taking vals as points, the badness of an equal temperament
>is its distance from the origin. For rank 2 it's the area
>involving two points and the origin.

OK, just wanted to make sure you were still talking about
that one.

-Carl

🔗Herman Miller <hmiller@IO.COM>

3/29/2008 2:28:47 PM

Graham Breed wrote:
> Carl Lumma wrote:
>> My two suggestions would be:
>>
>> 1. Best n temperaments (lowest logflat badness) with
>> complexity < y.
>>
>> 2. Successive improvements in logflat badness as complexity
>> goes from 1 to y.
>>
>> I'd love to see a showdown between these techniques. I
>> suspect that technique #2 will satisfy almost everyone.
>
> Currently orwell's gold, but it wouldn't pass #2 would it?
> Not that I'd be dissatisfied with such a failure.

Yes, orwell would fail, but not for the reason you might suspect. Meantone has such a low logflat badness that nothing else beats it until you get to ennealimmal. On the other hand, beep, blacksmith, and dominant make the #2 list.

🔗Herman Miller <hmiller@IO.COM>

3/29/2008 2:35:54 PM

Graham Breed wrote:

> Instead of "gold", "silver", and "bronze" you can have the
> best exo-, micro-, and nano-temperaments. And, of course,
> you can list the temperaments for which there's a value of k
> that makes them top of the list.

The point of the "gold", "silver", "bronze" lists is that they're independent of any specific badness measure; they only depend on the ability to find all temperaments with less than a specified error and complexity. (Since I haven't implemented that, I've been using the best approximation I have, but the original intent is to consider all temperaments.)

🔗Graham Breed <gbreed@gmail.com>

3/29/2008 8:50:50 PM

Herman Miller wrote:
> Graham Breed wrote:

>> Currently orwell's gold, but it wouldn't pass #2 would it?
>> Not that I'd be dissatisfied with such a failure.
>
> Yes, orwell would fail, but not for the reason you might suspect.
> Meantone has such a low logflat badness that nothing else beats it until
> you get to ennealimmal. On the other hand, beep, blacksmith, and
> dominant make the #2 list.

Sounds similar to my parametric badness races, although blacksmith doesn't do so well and some 9&4 with <2 3 1] takes over from dominant. Is that beep?

Orwell only manages bronze, say for 0.6 cent/oct. Garibaldi is the true underdog and never makes the medals.

Graham

🔗Carl Lumma <carl@lumma.org>

3/29/2008 11:30:16 PM

Herman wrote...

>>> My two suggestions would be:
>>>
>>> 1. Best n temperaments (lowest logflat badness) with
>>> complexity < y.
>>>
>>> 2. Successive improvements in logflat badness as complexity
>>> goes from 1 to y.
>>>
>>> I'd love to see a showdown between these techniques. I
>>> suspect that technique #2 will satisfy almost everyone.
>>
>> Currently orwell's gold, but it wouldn't pass #2 would it?
>> Not that I'd be dissatisfied with such a failure.
>
>Yes, orwell would fail, but not for the reason you might suspect.
>Meantone has such a low logflat badness that nothing else beats it until
>you get to ennealimmal.

Hrm, maybe logflat is too strong for #2. I've done it before
with error and also with simple badness (Ec) and it seems to have
worked well.

-Carl

🔗Herman Miller <hmiller@IO.COM>

3/30/2008 2:17:50 PM

Graham Breed wrote:
> Herman Miller wrote:
>> Graham Breed wrote:
>
>>> Currently orwell's gold, but it wouldn't pass #2 would it?
>>> Not that I'd be dissatisfied with such a failure.
>>
>> Yes, orwell would fail, but not for the reason you might suspect.
>> Meantone has such a low logflat badness that nothing else beats it until
>> you get to ennealimmal. On the other hand, beep, blacksmith, and
>> dominant make the #2 list.
>
> Sounds similar to my parametric badness races, although
> blacksmith doesn't do so well and some 9&4 with <2 3 1]
> takes over from dominant. Is that beep?

[<1, 2, 3, 3], <0, -2, -3, -1]> Yes, that looks like it. I don't know about a badness measure that ranks beep as better than dominant. But low-complexity, high-error temperaments do seem to be a problem with badness measures.

> Orwell only manages bronze, say for 0.6 cent/oct. Garibaldi
> is the true underdog and never makes the medals.

I don't know how they compare using different error and complexity measures, but they're pretty close according to what I've been using. Garibaldi is slightly more complex, and orwell has a slightly higher error.

Logflat badness isn't ideal, but it seems like a reasonable starting point. It includes a bunch of temperaments at both extremes that are unlikely to be used, such as low-complexity temperaments with high error:

[<2, 3, 5, 6], <0, 0, -1, -1]>
P = 616.524512, G = 296.308847
c = 3.266201, e = 30.152577, b = 0.268058

as well as high-complexity temperaments with low error:

[<3, -3, -9, -8], <0, 17, 35, 36]>
P = 400.003281, G = 182.467603
c = 141.954523, e = 0.009842, b = 0.165266

I also notice temperaments that I consider at least somewhat useful (although not among the best) have logflat badness scores way up above most of the familiar temperaments. What I'd like to find is a badness score that gives better scores to these kinds of temperaments in the middle range, while temperaments with error > 10 cents / octave or complexity above the first 20 or so temperaments on the gold list will receive worse scores.

[<1, 2, 1, 3], <0, -2, 6, -1]> superpelog
P = 1206.548265, G = 260.760141
c = 11.820058, e = 6.548265, b = 0.762402

[<1, 0, 2, 5], <0, 5, 1, -7]> muggles
P = 1203.148011, G = 379.393104
c = 17.329377, e = 3.148011, b = 0.787809

[<1, 1, 1, 3], <0, 3, 7, -1]> gorgo
P = 1205.820043, G = 228.199305
c = 11.973078, e = 7.279064, b = 0.869573

🔗Graham Breed <gbreed@gmail.com>

3/30/2008 9:42:04 PM

Herman Miller wrote:
> Graham Breed wrote:

>> Sounds similar to my parametric badness races, although
>> blacksmith doesn't do so well and some 9&4 with <2 3 1]
>> takes over from dominant. Is that beep?
>
> [<1, 2, 3, 3], <0, -2, -3, -1]> Yes, that looks like it. I don't know
> about a badness measure that ranks beep as better than dominant. But
> low-complexity, high-error temperaments do seem to be a problem with
> badness measures.

I still have the rank 2 temperament module wired up, so I can get the full description:

[2/9

1204.794 cents period
264.447 cents generator

mapping by period and generator:
<1, 2, 3, 3]
<0, -2, -3, -1]

mapping by steps:
<5, 8, 12, 14]
<4, 6, 9, 11]

scalar complexity: 0.561
RMS weighted error: 10.862 cents/octave
max weighted error: 14.956 cents/octave]

>> Orwell only manages bronze, say for 0.6 cent/oct. Garibaldi
>> is the true underdog and never makes the medals.
>
> I don't know how they compare using different error and complexity
> measures, but they're pretty close according to what I've been using.
> Garibaldi is slightly more complex, and orwell has a slightly higher
> error.

They're very close, which is why it's unfair for one to get a medal but not the other. By the weighted range complexity, or the max-abs weighted wedgie, orwell is even more complex.

> Logflat badness isn't ideal, but it seems like a reasonable starting
> point. It includes a bunch of temperaments at both extremes that are
> unlikely to be used, such as low-complexity temperaments with high error:
>
> [<2, 3, 5, 6], <0, 0, -1, -1]>
> P = 616.524512, G = 296.308847
> c = 3.266201, e = 30.152577, b = 0.268058

Yes, I get that at the top of the list with Ek=30 cent/oct. And it's probably correct: if you want a temperament with such a high error, this is very efficient.

> as well as high-complexity temperaments with low error:
>
> [<3, -3, -9, -8], <0, 17, 35, 36]>
> P = 400.003281, G = 182.467603
> c = 141.954523, e = 0.009842, b = 0.165266

That's second to ennealimmal at 7 millicents per octave.

It looks like log-flat badness is doing what it was designed to do: score outstanding temperaments regardless of what error/complexity trade-off they make. You have to correct for that when you read the results.

> I also notice temperaments that I consider at least somewhat useful
> (although not among the best) have logflat badness scores way up above
> most of the familiar temperaments. What I'd like to find is a badness
> score that gives better scores to these kinds of temperaments in the
> middle range, while temperaments with error > 10 cents / octave or
> complexity above the first 20 or so temperaments on the gold list will
> receive worse scores.

You could try modulating logflat badness by a normal or lognormal curve. Parametric badness only has one parameter, so you can choose the center point but not the width.
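
A sketch of that modulation, assuming a lognormal bump centered on a preferred complexity c0; both c0 and the width sigma are hypothetical knobs here, not values from the thread:

import math

def modulated_badness(b, c, c0, sigma=1.0):
    """Inflate logflat badness b away from the preferred complexity
    c0: the bump equals 1 at c = c0 and falls off on both sides, so
    dividing by it makes both extremes score worse."""
    bump = math.exp(-(math.log(c / c0) ** 2) / (2.0 * sigma ** 2))
    return b / bump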

If there's some other reason you think they're being unfairly treated, maybe we can take account of it.

> [<1, 2, 1, 3], <0, -2, 6, -1]> superpelog
> P = 1206.548265, G = 260.760141
> c = 11.820058, e = 6.548265, b = 0.762402

Number 36 at 6 cents per octave or number 366 at 1 cent per octave.

> [<1, 0, 2, 5], <0, 5, 1, -7]> muggles
> P = 1203.148011, G = 379.393104
> c = 17.329377, e = 3.148011, b = 0.787809

Number 35 for 2.5 cents per octave or number 91 at 1 cent per octave.

> [<1, 1, 1, 3], <0, 3, 7, -1]> gorgo
> P = 1205.820043, G = 228.199305
> c = 11.973078, e = 7.279064, b = 0.869573

Number 42 for 6.6 cents per octave or number 486 at 1 cent per octave.

Finding those 500 temperament classes takes a long time (over 9 seconds on my desktop) and reading through them will take a lot longer. A more practical method of getting a definitive list would be to run multiple searches for different parameters, put them together and apply your badness measure of choice.

Graham

🔗Herman Miller <hmiller@IO.COM>

3/31/2008 7:19:46 PM

Graham Breed wrote:
> Herman Miller wrote:
>> I also notice temperaments that I consider at least somewhat useful
>> (although not among the best) have logflat badness scores way up above
>> most of the familiar temperaments. What I'd like to find is a badness
>> score that gives better scores to these kinds of temperaments in the
>> middle range, while temperaments with error > 10 cents / octave or
>> complexity above the first 20 or so temperaments on the gold list will
>> receive worse scores.
>
> You could try modulating logflat badness by a normal or
> lognormal curve. Parametric badness only has one parameter,
> so you can choose the center point but not the width.
>
> If there's some other reason you think they're being
> unfairly treated, maybe we can take account of it.

I think the scores are reasonable compared with other temperaments of the same complexity. High-error temperaments aren't worth considering because the tempered intervals begin to sound like entirely different intervals, and any bonus they gain from being low in complexity can't make up for that. On the other end, once the error gets sufficiently low, any further improvement is inaudible. But in this medium-complexity range, even some temperaments with more than the optimal error are acceptable enough to be useful.

Bug (or beep) and father, both with errors around 14 cents per octave, are on the edge of what might be considered acceptable temperaments, and that's being generous. So basically I'm looking for a weighted badness score that rates bug as worse than pretty much every other temperament that has a name, including gorgo. I happen to like bug, and even father has its uses, but I realize they're pretty questionable as temperaments.

On the other end, supermajor has a way low error of 0.02 cents per octave, but it's twice as complex as ennealimmal. When a temperament is as close to just intonation as ennealimmal is, any further improvement is going to be minimal, and the cost of the added complexity makes these temperaments less attractive. So the weighted badness should increase more with complexity than it decreases with the improved accuracy, to the point where pretty much anything more complex than supermajor doesn't have a chance of showing up (supermajor already being good enough).