
7-limit JI-approx. efficiency of N-tets

🔗akjmicro <aaron@akjmusic.com>

2/8/2006 10:49:07 PM

Hey all,

I wrote a Python script to rank the octave-ET's according to 7-limit
JI-approximation efficiency. By this, I mean the following equation:

Eff = TE*W

where Eff is "efficiency" (the lower the number, the more efficient
the n-tet is at approximating n-limit JI with fewer notes), TE is the
total error in cents from JI of the intervals one is interested in
looking at (for me, I chose the 7 limit intervals plus their
inversions), and W stands for "waste", which can be said to be the
ratio of dissonant intervals to consonant intervals in the ET, which
obviously grows as the ET size grows. Thus, it is a 'penalty' for an
ET getting too large in this ranking, unless of course it really
significantly lowers the JI-approximation error.

One motivation for the ranking might be the question--What n-tet gives
me the most consonance for the buck (the buck being the size of the
n-tet).
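
For the curious, the core of the calculation looks roughly like this in
Python (a simplified sketch, not the actual script--the exact interval
list and bookkeeping in my code differ a bit):

from math import log2

# placeholder consonance list: 7-limit ratios plus their inversions
CONSONANCES = [3/2, 4/3, 5/4, 8/5, 5/3, 6/5, 7/4, 8/7, 7/6, 12/7, 7/5, 10/7]

def efficiency(n):
    # Eff = TE * W for the n-tet, as defined above
    step = 1200.0 / n
    total_error = 0.0   # TE: summed error in cents from JI
    degrees = set()     # n-tet degrees that serve as consonances
    for r in CONSONANCES:
        target = 1200.0 * log2(r)
        k = round(target / step)               # nearest n-tet degree
        total_error += abs(k * step - target)
        degrees.add(k % n)
    waste = (n - len(degrees)) / len(degrees)  # W: dissonant/consonant ratio
    return total_error * waste

ranking = sorted(range(5, 100), key=efficiency)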

Here's the ranking for all ET's, 5 to 99--31 wins! (no surprise there)
And, perhaps we get another insight into why 12 will always remain an
important system: as we all know, but don't often admit, it does a
fair job of 'expressing' a fair number of important intervals with
only a handful of tones.....we also see that the equivalent of
Republican wasteful spending, i.e. the George Bush of n-tets, is 97-equal.

The left pair of columns gives the ETs sorted by 'total error in cents',
and the right pair gives our ranking of interest, 'efficiency':

ET: total_error    ET: efficiency

99: 11.9495 31: 66.8946
72: 22.2008 12: 76.6857
94: 29.0217 99: 79.0507
68: 31.7373 41: 96.4323
87: 32.0118 72: 100.7573
84: 32.4549 7: 102.3707
95: 37.2044 19: 104.3473
53: 37.2751 15: 112.3440
77: 37.9581 53: 114.6928
89: 38.7576 14: 124.1117
90: 40.4889 6: 125.4311
93: 40.4928 27: 126.0480
80: 44.4171 22: 133.4705
41: 44.7721 68: 134.2733
82: 44.7721 10: 137.0975
91: 44.9235 18: 144.5536
62: 46.9444 37: 149.9255
98: 47.2205 21: 153.0049
88: 47.3411 25: 153.4885
96: 47.5408 46: 155.2680
81: 47.7354 23: 159.2434
58: 47.9142 11: 161.2244
31: 48.3128 13: 165.7086
76: 48.6515 58: 165.8568
78: 49.3335 26: 167.4354
83: 49.3755 35: 169.1267
97: 50.4854 5: 171.8133
92: 52.8566 62: 176.9442
85: 55.1878 84: 177.2534
86: 55.9476 17: 178.0986
65: 58.9982 94: 180.8277
75: 59.1556 36: 182.0417
63: 59.8286 87: 182.2212
46: 61.1662 8: 182.5136
74: 62.6937 77: 186.8706
79: 63.0488 16: 188.6049
64: 63.4544 50: 205.3443
73: 63.6051 43: 208.8975
66: 63.7535 9: 214.2032
57: 64.4410 45: 215.9116
60: 65.5829 49: 216.1089
70: 65.8457 57: 218.1080
71: 66.4328 34: 222.3343
67: 66.6762 47: 225.0262
69: 69.3722 89: 226.5829
56: 69.9429 80: 228.9191
50: 72.1480 63: 230.1098
59: 74.6132 56: 231.3495
54: 76.1355 38: 232.3184
61: 77.8686 30: 232.4634
49: 78.0393 29: 232.9472
37: 81.2097 95: 234.6738
52: 82.0285 42: 235.3100
47: 86.0394 76: 235.7728
45: 87.7141 65: 235.9929
55: 89.2080 60: 237.1073
43: 90.5223 82: 237.6367
51: 90.6945 40: 238.8968
48: 92.7956 90: 239.8191
35: 99.9385 54: 240.1196
27: 100.8384 39: 240.9800
36: 102.8932 24: 241.9821
42: 105.4838 33: 243.2895
40: 115.0244 32: 245.7927
44: 116.7938 52: 246.0854
39: 120.4900 78: 246.6677
38: 120.8056 64: 248.9364
34: 121.2733 93: 249.1862
22: 133.4705 81: 249.6930
33: 139.0226 48: 249.8344
29: 142.3567 20: 250.2792
19: 143.4776 28: 252.7545
26: 143.5161 66: 259.9180
32: 147.4756 59: 264.0161
30: 154.9756 51: 265.1071
28: 163.5470 83: 265.8680
25: 166.2792 91: 269.5412
24: 172.8444 88: 273.1216
23: 173.7200 67: 276.9627
21: 204.0065 44: 278.5083
15: 224.6880 75: 282.1268
18: 227.1556 61: 287.5149
12: 230.0571 55: 288.2105
16: 242.4920 70: 288.7081
20: 250.2792 73: 293.5622
17: 254.4266 74: 294.1783
14: 310.2792 71: 296.3927
10: 319.8941 69: 298.8341
13: 372.8444 96: 303.5295
9: 428.4065 85: 305.6556
11: 429.9317 98: 308.7495
8: 547.5408 86: 314.1673
7: 614.2242 79: 320.0938
6: 627.1556 92: 321.2052
5: 687.2532 97: 326.2135

-Aaron.

🔗Carl Lumma <ekin@lumma.org>

2/9/2006 12:57:48 AM

Hi Aaron!

This is what we might call a type of "badness" calculation, or
rather, the inverse of one. Actually, since you've defined
efficiency as going up when Eff goes down, it's not the inverse.
The general approach is

badness = complexity * error

Complexity can be number of notes, but this gets tricky in linear
and planar temperaments where the number of notes is not fixed.
Gene, Graham, Dave, Paul, myself, and probably someone I'm
forgetting have proposed various complexity measures. Usually
they wind up being fairly similar to one another.

Error also has variants. Weighted error is popular; the deviation
from JI is weighted depending on the harmonic limit of the interval
in question. There's weighted complexity, too, for that matter.
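
In skeleton form (just a sketch, not any of the specific proposals):

def badness(complexity, errors, weights=None):
    # badness = complexity * error; 'weights', if given, scales each
    # interval's deviation from JI (e.g. by harmonic limit)
    if weights is None:
        weights = [1.0] * len(errors)
    return complexity * sum(w * abs(e) for w, e in zip(weights, errors))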

Probably thousands of messages have been written on this topic,
with little consensus to this day. :(

-Carl

At 10:49 PM 2/8/2006, you wrote:
>Hey all,
>
>I wrote a Python script to rank the octave-ET's according to 7-limit
>JI-approximation efficiency. By this, I mean the following equation:
>
>Eff = TE*W

etc, etc...

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

2/9/2006 2:12:15 AM

--- In tuning-math@yahoogroups.com, "akjmicro" <aaron@...> wrote:

> Eff = TE*W
>
> where Eff is "efficiency" (the lower the number, the more efficient
> the n-tet is at approximating n-limit JI with fewer notes), TE is the
> total error in cents from JI of the intervals one is interested in
> looking at (for me, I chose the 7 limit intervals plus their
> inversions), and W stands for "waste", which can be said to be the
> ratio of dissonant intervals to consonant intervals in the ET, which
> obviously grows as the ET size grows.

I'm afraid these statements need to be clarified if you intend for
them to be precise definitions. What, *precisely*, is TE? Sum of
absolute values of errors of elements of the 7-limit tonality diamond,
perhaps? What is W? "Ratio of dissonant intervals to consonant
intervals" is meaningless unless you define what a consonant interval
is. Tonality diamond interval, approximated by the rounded val, or
something else?

> Here's the ranking for all ET's, 5 to 99--31 wins!

Here's the sorted ranking using another system, which you might want
to consider for comparison; it ranks 5-99 according to the unweighted
maximum error of the 7-limit tonality diamond, adjusted to be logflat,
so that there are an infinity of ets scoring less than 1, and the size
of such ets grows roughly exponentially.

31 .483982
99 .597311
5 .730389
41 .743399
72 .743933
12 .758467
9 .767221
10 .796400
15 .828367
6 .890798
22 .898361
19 .906572
68 .908372
27 .923824
53 1.023130
16 1.113068
26 1.122497
7 1.151608
46 1.181095
37 1.187562
62 1.219558
18 1.233517
58 1.270308
57 1.346596
50 1.354788
29 1.380638
84 1.397227
21 1.444911
43 1.531426
87 1.542628
35 1.549463
36 1.569623
90 1.583616
77 1.607562
49 1.608663
25 1.636896
42 1.665086
94 1.681776
63 1.706349
81 1.712305
60 1.727613
45 1.731369
56 1.736848
8 1.766829
80 1.773558
40 1.783082
34 1.818531
89 1.830823
32 1.850589
38 1.861797
82 1.873248
24 1.875628
59 1.896716
14 1.918414
17 1.922660
95 1.955489
47 1.976035
76 1.976161
11 1.986856
70 1.999464
83 2.000319
20 2.006801
78 2.065046
73 2.086531
30 2.087354
93 2.094068
13 2.095278
91 2.110875
88 2.120343
51 2.170341
33 2.181558
65 2.222431
44 2.263729
74 2.279532
28 2.347320
61 2.405343
75 2.500727
23 2.516749
52 2.529855
67 2.531584
48 2.542196
97 2.628467
55 2.656269
69 2.686210
66 2.739556
79 2.794541
92 2.976172
96 2.977757
54 3.009229
39 3.159409
85 3.248025
71 3.270674
86 3.295969
64 3.336808
98 3.584023

🔗akjmicro <aaron@akjmusic.com>

2/9/2006 6:29:29 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<genewardsmith@...> wrote:
>
> --- In tuning-math@yahoogroups.com, "akjmicro" <aaron@> wrote:
>
> > Eff = TE*W
> >
> > where Eff is "efficiency" (the lower the number, the more efficient
> > the n-tet is at approximating n-limit JI with fewer notes), TE is the
> > total error in cents from JI of the intervals one is interested in
> > looking at (for me, I chose the 7 limit intervals plus their
> > inversions), and W stands for "waste", which can be said to be the
> > ratio of dissonant intervals to consonant intervals in the ET, which
> > obviously grows as the ET size grows.
>
> I'm afraid these statements need to be clarified if you intend for
> them to be precise definitions.

You are right, of course; forgive my 1-in-the-morning lack-of-rigor...;)

>What, *precisely*, is TE? Sum of
> absolute values of errors of elements of the 7-limit tonality diamond,
> perhaps?

Exactly!

> What is W? "Ratio of dissonant intervals to consonant
> intervals" is meaningless unless you define what a consonant interval
> is. Tonality diamond interval, approximated by the rounded val, or
> something else?

A consonance would be anything in said diamond. So, in a given n-tet,
it would be that n-tet's approximation of that interval.

I should have noted that I set up an array to 'catch' a count of such
intervals in a given n-tet, so that if an n-tet has the same index for
say, 7/6 and 6/5, it won't count twice---otherwise, that would
artificially give an 'advantage' to those tunings that do a lot of
comma tempering of that sort.
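
For instance, in 12-equal both 7/6 and 6/5 round to 3 steps, so the pair
only counts once toward the consonance tally (illustrative snippet, not
the script itself):

from math import log2

def nearest_degree(ratio, n):
    # nearest n-tet degree to a just ratio
    return round(n * log2(ratio))

print(nearest_degree(7/6, 12), nearest_degree(6/5, 12))   # both print 3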

> > Here's the ranking for all ET's, 5 to 99--31 wins!
>
> Here's the sorted ranking using another system, which you might want
> to consider for comparison; it ranks 5-99 according to the unweighted
> maximum error of the 7-limit tonality diamond, adjusted to be logflat,

It sounds like we had the same idea, except I didn't make mine
'logflat'. How would I do that? Would I have to know the error of
1-tet beforehand, perhaps?

> so that there are an infinity of ets scoring less than 1, and the size
> of such ets grows roughly exponentially.

Gene, this is interesting--however, how did you define the 'penalty'
for the size of the n-tet? I'm not clear on this...can you walk me
through it?

Better yet, could you post a snippet of Python code which could
explain it as well? ;)

Best,
Aaron.

P.S. Shall I upload my code to the 'files' section?

🔗akjmicro <aaron@akjmusic.com>

2/9/2006 7:16:03 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> Hi Aaron!
>
> This is what we might call a type of "badness" calculation, or
> rather, the inverse of one. Actually, since you've defined
> efficiency as going up when Eff goes down, it's not the inverse.
> The general approach is
>
> badness = complexity * error

Yup....I hit on the same idea....I knew I was a genius! ;)

> Complexity can be number of notes, but this gets tricky in linear
> and planar temperaments where the number of notes is not fixed.

That's not an issue if you are just interested in n-tets that
*real men* would use---the EDO's. We could call my measure the "real
man" measure, or the "stud measure", take your pick.....

(No doubt, someone will chime in about why linear or planar
temperaments are more for real men)

> Gene, Graham, Dave, Paul, myself, and probably someone I'm
> forgetting have proposed various complexity measures. Usually
> they wind up being fairly similar to one another.

Mine is as good as any, I suppose. They probably all show different
facets of the same phenomenon. I just thought this: we might do a
recursive function, and take the last best "efficiency", and have the
next one usurp the best ranked ET iff it can prove that its error
decrease over the last best ET has a greater factor than its ratio of
note increase over the last. But maybe this is equivalent to some
easier measure that wouldn't involve recursion....

In fact, my first script used the size of n-tet directly in its
equation, but I found it didn't capture as precisely my instincts--it
didn't penalize the growth of the n-tet as much as I intuitively
wanted, I guess.

> Error also has variants. Weighted error is popular; the deviation
> from JI is weighted depending on the harmonic limit of the interval
> in question. There's weighted complexity, too, for that matter.

I used simple linear error in cents, for each interval in the 7 limit
diamond. I thought about this, and concluded simply that although the
higher harmonics are more sensitive to error, they also tend to be
lower in amplitude. This justifies a simple unweighted error factor if
you assume they cancel each other out, as I do. Furthermore, cents
measure is more in line with our perception of pitch distance than
say, error in Hertz.

> Probably thousands of messages have been written on this topic,
> with little consensus to this day. :(

I don't see why anyone would conclude anything other than what I said,
but that's probably only because I've thought so long about this, and
come to my own conclusions that I'm comfortable with, perhaps....

-Aaron.

🔗akjmicro <aaron@akjmusic.com>

2/9/2006 7:55:48 AM

Gene,

Below you chose unweighted maximum error. I respectfully submit that
total sum error is a better measure for my (our) purposes. (unless I'm
wrong, see below)...I reason that if a particular interval error is
bad in a given EDO, it may have superlative representations of other
intervals; however, the maximum error alone wouldn't reflect this, and
the EDO by your accounting would suffer in its ranking. Furthermore,
the maximum error may be close to the other errors, and may make a
poor EDO appear better than one that has a passable maximum error in
a given interval, but several smaller, very good ones (witness
31-equal, with its passable 3/2, but excellent 5/4 and 7/4).

The sum, by contrast, I think would give us an overall sense of the
average sound of the EDO in relation to JI.

Unless, of course, somehow, I've misunderstood you, and maximum error
means "sum of all errors". Unless you can clarify, my instincts (which
of course may be wrong) tell me that maximum wouldn't capture the
overall utility of a given EDO at JI approximation?

Best,
Aaron.

--- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<genewardsmith@...> wrote:

> Here's the sorted ranking using another system, which you might want
> to consider for comparison; it ranks 5-99 according to the unweighted
> maximum error of the 7-limit tonality diamond, adjusted to be logflat,
> so that there are an infinity of ets scoring less than 1, and the size
> of such ets grows roughly exponentially.
> 31 .483982
> 99 .597311
> 5 .730389
> 41 .743399
> 72 .743933
> 12 .758467
> 9 .767221
> 10 .796400
> 15 .828367

etc, etc...

🔗akjmicro <aaron@akjmusic.com>

2/9/2006 8:07:35 AM

Correction:

I hadn't used *all* the 14 intervals of the 7-limit diamond in my
script as of my post below. (there are 14, aren't there? 8/7,9/7,10/7,
12/7,6/5,7/5,8/5,9/5,4/3,5/3,7/6,3/2,5/4,7/4)

With the correction, and all else being equal, my script now ranks the
EDO's 5-99 thusly (31 is still king, but 99 and 15!!!--who'da thunk it
--maybe that's why we want *some* weighting?--are 'better' than 12,
and the EDO equivalent of George Bush's wasteful economic policies is
now 96-EDO):

((Incidentally, I'm sure you guys have talked about 171-equal, which
fares best when you look at 5-1024 EDO, it looks like the most
practical way to do 'virtual JI', as I'm sure Gene would agree))

ET: total_error    ET: efficiency

99: 11.2555 31: 69.4797
72: 21.3796 99: 74.4596
68: 31.5781 15: 79.9373
94: 33.7436 12: 91.6392
84: 34.4581 72: 97.0306
87: 35.4273 27: 103.4966
93: 40.7937 7: 108.7439
53: 43.1694 19: 114.5954
77: 43.8469 41: 115.3822
95: 44.7669 21: 118.7218
90: 45.3493 22: 125.7986
89: 45.4157 14: 125.8967
98: 45.7160 9: 128.6322
80: 46.9049 53: 132.8288
91: 47.2159 68: 133.5995
62: 47.2453 6: 134.4032
78: 48.0401 26: 147.9777
31: 50.1798 10: 150.0970
97: 51.5543 46: 152.8003
92: 51.8844 16: 155.9879
58: 52.3187 25: 159.4998
88: 52.4807 37: 162.8902
41: 53.5703 23: 167.1498
82: 53.5703 18: 173.1011
83: 53.5744 11: 174.1613
81: 54.7269 62: 178.0784
96: 54.9011 13: 178.4491
85: 55.3423 5: 179.3963
86: 57.5989 58: 181.1032
73: 59.5743 84: 188.1942
76: 60.0364 8: 198.2868
79: 60.0584 35: 199.5827
46: 60.1940 87: 201.6631
74: 63.9653 17: 206.7396
67: 64.7421 36: 206.7867
63: 64.8899 94: 210.2485
57: 65.2995 33: 212.8132
66: 65.6186 77: 215.8617
75: 68.6434 57: 221.0137
65: 69.2027 50: 223.8791
64: 72.3848 45: 224.9345
60: 72.9025 32: 237.2267
71: 74.2971 43: 237.2873
56: 74.5608 47: 237.3757
69: 78.1644 20: 237.5989
70: 78.5261 38: 237.6167
50: 78.6602 78: 240.2004
52: 80.8627 28: 241.3798
61: 80.9260 80: 241.7408
59: 82.1855 52: 242.5880
54: 85.8033 56: 246.6241
37: 88.2322 34: 247.9334
47: 90.7613 40: 249.5714
45: 91.3796 63: 249.5765
49: 91.7888 93: 251.0380
51: 92.7914 39: 252.4136
27: 96.1040 49: 254.1843
55: 98.3458 42: 256.8767
48: 102.4634 60: 263.5705
43: 102.8245 89: 265.5073
42: 115.1516 30: 266.2317
36: 116.8794 66: 267.5218
35: 117.9352 90: 268.6075
40: 120.1640 67: 268.9287
38: 123.5607 54: 270.6104
44: 125.2905 51: 271.2365
39: 126.2068 73: 274.9584
26: 126.8380 48: 275.8631
34: 135.2364 65: 276.8109
33: 138.3286 95: 282.3760
32: 142.3360 91: 283.2956
22: 150.9583 29: 283.6773
19: 157.5687 64: 283.9710
25: 172.7914 82: 284.3346
29: 173.3583 81: 286.2637
30: 177.4878 24: 287.4992
28: 181.0349 83: 288.4776
21: 192.9229 59: 290.8102
24: 205.3566 76: 290.9454
23: 217.2948 44: 298.7697
15: 219.8276 61: 298.8035
20: 237.5989 98: 298.9121
16: 259.9798 74: 300.1450
18: 272.0160 88: 302.7732
12: 274.9175 79: 304.9117
17: 295.3423 85: 306.5113
14: 314.7417 92: 315.2975
10: 350.2263 55: 317.7327
13: 401.5104 86: 323.4397
9: 450.2128 75: 327.3763
11: 464.4302 71: 331.4796
8: 594.8604 97: 333.1202
7: 652.4634 69: 336.7080
6: 672.0160 70: 344.3067
5: 717.5853 96: 350.5227

--- In tuning-math@yahoogroups.com, "akjmicro" <aaron@...> wrote:
>
> Hey all,
>
> I wrote a Python script to rank the octave-ET's according to 7-limit
> JI-approximation efficiency. By this, I mean the following equation:
>
> Eff = TE*W

etc, etc...

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

2/9/2006 9:31:24 AM

--- In tuning-math@yahoogroups.com, "akjmicro" <aaron@...> wrote:

> A consonance would be anything in said diamond. So, in a given n-tet,
> it would be that n-tet's approximation of that interval.

This doesn't seem to enforce consistency, and therefore would tend to
give too high a score to inconsistent systems.

> It sounds like we had the same idea, except I didn't make mine
> 'logflat'. How would I do that? Would I have to know the error of
> 1-tet beforehand, perhaps?

I used a little multiple Diophantine approximation theory. Take any
badness measure of of the type you are using, which is
error*complexity. Since we are looking at n-ets, complexity is just n,
and so we have n*error, or a relative error measure. On average, such
a badness measure applied to various n will give a fixed percentage of
the n less than the cutoff score, with actual error O(1/n). Now we
penalize the larger n a smidgen more, in order to make the percentage
passing a badness score continually drop, but not so fast we only get
a finite list passing the requirement. If we are in an m-odd limit, and
the number of odd primes less than or equal to m is d, it turns out
that multiplying relative error, or n*error, by n^(1/d), and using
this as a badness figure will give us an infinite list so long as the
cutoff is not set too low; this is by Diophantine approximation theory.
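
In Python terms, the adjustment amounts to something like this (a sketch,
not my actual code; feed it whatever error measure you are already
computing, expressed in cents):

def logflat_badness(n, max_error_cents, d=3):
    # relative error (n * error, with error in octaves) times n^(1/d),
    # where d = number of odd primes in the limit; d = 3 for the 7-limit.
    # With the max 7-limit diamond error this puts 31 near 0.48, as above.
    relative_error = n * max_error_cents / 1200.0   # error in steps
    return relative_error * n ** (1.0 / d)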

> P.S. Shall I upload my code to the 'files' section?

If you like, but first I'd address the question of inconsistency. What
you really want to measure the badness of is a val, that is, a map
which tells you, for each prime number under the limit, how many et
steps that prime is mapped to. From the val, you can compute how many
et steps any interval under the limit should get; hence you can figure
out what the error is, computed in a consistent way.
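
Concretely, the val idea in Python (a sketch, with illustrative helper
names):

from math import log2

PRIMES = (2, 3, 5, 7)

def patent_val(n):
    # map each prime to its nearest whole number of n-et steps
    return {p: round(n * log2(p)) for p in PRIMES}

def monzo(m):
    # exponents of a 7-limit integer m over 2, 3, 5, 7
    exps = {}
    for p in PRIMES:
        exps[p] = 0
        while m % p == 0:
            m //= p
            exps[p] += 1
    return exps

def consistent_error_cents(n, num, den):
    # error of num/den using the step count implied by the val,
    # rather than by rounding the interval itself
    val, top, bot = patent_val(n), monzo(num), monzo(den)
    steps = sum(val[p] * (top[p] - bot[p]) for p in PRIMES)
    return steps * 1200.0 / n - 1200.0 * log2(num / den)

For instance, consistent_error_cents(11, 6, 5) reckons 6/5 as 2 steps of
11-et rather than the directly nearest 3.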

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

2/9/2006 9:34:22 AM

--- In tuning-math@yahoogroups.com, "akjmicro" <aaron@...> wrote:
>
>
> Gene,
>
> Below you chose unweighted maximum error. I respectfully submit that
> total sum error is a better measure for my (our) purposes.

You can certainly argue the point, for the reasons you give. A
compromise is to take rms error.

🔗akjmicro <aaron@akjmusic.com>

2/9/2006 9:41:48 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<genewardsmith@...> wrote:
>
> --- In tuning-math@yahoogroups.com, "akjmicro" <aaron@> wrote:
>
> > A consonance would be anything in said diamond. So, in a given n-tet,
> > it would be that n-tet's approximation of that interval.
>
> This doesn't seem to enforce consistency, and therefore would tend to
> give too high a score to inconsistent systems.

I don't understand, for musical purposes, why we just wouldn't want to
take some measure of the audible error for each interval we care to
look at?

> > It sounds like we had the same idea, except I didn't make mine
> > 'logflat'. How would I do that? Would I have to know the error of
> > 1-tet beforehand, perhaps?
>
> I used a little multiple Diophantine approximation theory. Take any
> badness measure of the type you are using, which is
> error*complexity. Since we are looking at n-ets, complexity is just n,
> and so we have n*error, or a relative error measure. On average, such
> a badness measure applied to various n will give a fixed percentage of
> the n less than the cutoff score, with actual error O(1/n). Now we
> penalize the larger n a smidgen more, in order to make the percentage
> passing a badness score continually drop, but not so fast we only get
> a finite list passing the requirement. If we are in an m-odd limit, and
> the number of odd primes less than or equal to m is d, it turns out
> that multiplying relative error, or n*error, by n^(1/d), and using
> this as a badness figure will give us an infinite list so long as the
> cutoff is not set too low; this is by Diophantine approximation theory.

Hmm...I'll have to grok this further when I have time--I only half
understand.

> > P.S. Shall I upload my code to the 'files' section?
>
> If you like, but first I'd address the question of inconsistency. What
> you really want to measure the badness of is a val, that is, a map
> which tells you, for each prime number under the limit, how many et
> steps that prime is mapped to. From the val, you can compute how many
> et steps any interval under the limit should get; hence you can figure
> out what the error is, computed in a consistent way.

So let me understand....do we only have to measure the error of 3/1,
5/1, and 7/1? (as opposed to any octave reduced diamond interval?)

-Aaron.

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

2/9/2006 9:41:17 AM

--- In tuning-math@yahoogroups.com, "akjmicro" <aaron@...> wrote:
>
> Correction:
>
> I hadn't used *all* the 14 intervals of the 7-limit diamond in my
> script as of my post below. (there are 14, aren't there? 8/7,9/7,10/7,
> 12/7,6/5,7/5,8/5,9/5,4/3,5/3,7/6,3/2,5/4,7/4)

Actually, 9/7 and 9/5 are 9-limit intervals. Take those two out and
you have the 12 intervals of the 7-limit diamond; add 10/9, 9/8, 14/9
and 16/9 and you have the 18 elements of the 9-limit diamond. Of
course, you can simply use the half of these between 1 and sqrt(2) to
the same effect as using the whole diamond.
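
If it helps, the diamonds are easy to generate (a Python sketch, since
that's what your script is in):

from fractions import Fraction

def diamond(odd_limit):
    # octave-reduced tonality diamond: 7 gives 12 intervals, 9 gives 18
    odds = range(1, odd_limit + 1, 2)
    ivals = set()
    for a in odds:
        for b in odds:
            r = Fraction(a, b)
            while r < 1:
                r *= 2
            while r >= 2:
                r /= 2
            if r != 1:
                ivals.add(r)
    return sorted(ivals)

lower_half = [r for r in diamond(7) if r * r < 2]   # the half below sqrt(2)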

🔗akjmicro <aaron@akjmusic.com>

2/9/2006 9:45:21 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<genewardsmith@...> wrote:
>
> --- In tuning-math@yahoogroups.com, "akjmicro" <aaron@> wrote:
> >
> > Correction:
> >
> > I hadn't used *all* the 14 intervals of the 7-limit diamond in my
> > script as of my post below. (there are 14, aren't there? 8/7,9/7,10/7,
> > 12/7,6/5,7/5,8/5,9/5,4/3,5/3,7/6,3/2,5/4,7/4)
>
> Actually, 9/7 and 9/5 are 9-limit intervals. Take those two out and
> you have the 12 intervals of the 7-limit diamond; add 10/9, 9/8, 14/9
> and 16/9 and you have the 18 elements of the 9-limit diamond. Of
> course, you can simply use the half of these between 1 and sqrt(2) to
> the same effect as using the whole diamond.

Cool....of course! And that sqrt(2) trick is neat...the same kind of
shortcut used in prime number finding algorithms.

-Aaron.

🔗akjmicro <aaron@akjmusic.com>

2/9/2006 9:47:18 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<genewardsmith@...> wrote:
>
> --- In tuning-math@yahoogroups.com, "akjmicro" <aaron@> wrote:
> >
> >
> > Gene,
> >
> > Below you chose unweighted maximum error. I respectfully submit that
> > total sum error is a better measure for my (our) purposes.
>
> You can certainly argue the point, for the reasons you give. A
> compromise is to take rms error.

Yes....but it turns out that I have used rms error rankings before,
and the ranking, if I remember correctly, remains *exactly* the same
as linear error sum.

-Aaron.

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

2/9/2006 10:01:20 AM

--- In tuning-math@yahoogroups.com, "akjmicro" <aaron@...> wrote:
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith"
> <genewardsmith@> wrote:

> > This doesn't seem to enforce consistency, and therefore would tend to
> > give too high a score to inconsistent systems.
>
> I don't understand, for musical purposes, why we just wouldn't want to
> take some measure of the audible error for each interval we care to
> look at?

One reason is that we are interested in chords, not just intervals.
Consider evaluating 11-edo. If we take 4 steps for a major third and 6
steps for a fifth, then the minor third between the major third and
the fifth is 2 steps, which is 218 cents. You are scoring it as
327 cents, from 3 steps, however. Hence, you end up giving 11 too high
a score. Of course you can use other mappings, but you get the same
problem.

> So let me understand....do we only have to measure the error of 3/1,
> 5/1, and 7/1? (as opposed to any octave reduced diamond interval?)

What I'm suggesting is that you figure out what 3, 5 and 7 are in
terms of number of steps; then for instance 5/3 will be "number of 5
steps" minus "number of 3 steps", and you use *that* as the number of
steps in computing the error for 5/3.
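
In numbers, for the 11-edo case above (just a check of the figures, not
code from anyone's script):

from math import log2

n, step = 11, 1200.0 / 11
fifth = round(n * log2(3/2))         # 6 steps
maj3 = round(n * log2(5/4))          # 4 steps
min3_mapped = fifth - maj3           # 2 steps, about 218 cents
min3_direct = round(n * log2(6/5))   # 3 steps, about 327 cents
print(min3_mapped * step, min3_direct * step)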

🔗akjmicro <aaron@akjmusic.com>

2/9/2006 10:34:25 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<genewardsmith@...> wrote:
>
> --- In tuning-math@yahoogroups.com, "akjmicro" <aaron@> wrote:
> > --- In tuning-math@yahoogroups.com, "Gene Ward Smith"
> > <genewardsmith@> wrote:
>
> > > This doesn't seem to enforce consistency, and therefore would tend to
> > > give too high a score to inconsistent systems.
> >
> > I don't understand, for musical purposes, why we just wouldn't want to
> > take some measure of the audible error for each interval we care to
> > look at?
>
> One reason is that we are interested in chords, not just intervals.
> Consider evaluating 11-edo. If we take 4 steps for a major third and 6
> steps for a fifth, then the minor third between the major third and
> the fifth is 2 steps, which is 218 cents. You are scoring it as
> 327 cents, from 3 steps, however. Hence, you end up giving 11 too high
> a score. Of course you can use other mappings, but you get the same
> problem.
>
> > So let me understand....do we only have to measure the error of 3/1,
> > 5/1, and 7/1? (as opposed to any octave reduced diamond interval?)
>
> What I'm suggesting is that you figure out what 3, 5 and 7 are in
> terms of number of steps; then for instance 5/3 will be "number of 5
> steps" minus "number of 3 steps", and you use *that* as the number of
> steps in computing the error for 5/3.

Good...got it.

-Aaron.

🔗Carl Lumma <ekin@lumma.org>

2/9/2006 11:43:09 AM

>I should have noted that I set up an array to 'catch' a count of such
>intervals in a given n-tet, so that if an n-tet has the same index for
>say, 7/6 and 6/5, it won't count twice---otherwise, that would
>artificially give an 'advantage' to those tunings that do a lot of
>comma tempering of that sort.

Some people wouldn't like this... the very point of temperament is
to use an interval in more than one sense.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/9/2006 12:06:28 PM

>> Complexity can be number of notes, but this gets tricky in linear
>> and planar temperaments where the number of notes is not fixed.
>
>That's not an issue if you are just interested in n-tets that
>*real men* would use---the EDO's. We could call my measure the "real
>man" measure, or the "stud measure", take your pick.....
>
>(No doubt, someone will chime in about why linear or planar
>temperaments are more for real men)

That would be me... of course, you can always tune them in an
n-tet, but your calculation wouldn't be the relevant one. Actually,
to get 'notes' for linear temperaments, someone did suggest 'size
of the smallest MOS', but I'm not sure that's so relevant either.

Too bad Graham took off just as you showed up. He's been writing
python code left and right, and none of us can read it.

>> Gene, Graham, Dave, Paul, myself, and probably someone I'm
>> forgetting have proposed various complexity measures. Usually
>> they wind up being fairly similar to one another.
>
>Mine is as good as any, I suppose. They probably all show different
>facets of the same phenomenon. I just thought this: we might do a
>recursive function, and take the last best "efficiency", and have the
>next one usurp the best ranked ET iff it can prove that its error
>decrease over the last best ET has a greater factor than its ratio
>of note increase over the last. But maybe this is equivalent to some
>easier measure that wouldn't involve recursion....

"Consistency" is a form of badness, and Paul Hahn has charts that
show an ET only if it beats all lower ETs...

http://library.wustl.edu/~manynote/music.html

>In fact, my first script used the size of n-tet directly in its
>equation, but I found it didn't capture as precisely my instincts--it
>didn't penalize the growth of the n-tet as much as I intuitively
>wanted, I guess.

Did you try raising it to a power? With your harmonic waste, all
you're doing is subtracting a fixed number, right? So this will
become the same as notes for big ets.

>> Error also has variants. Weighted error is popular; the deviation
>> from JI is weighted depending on the harmonic limit of the interval
>> in question. There's weighted complexity, too, for that matter.
>
>I used simple linear error in cents, for each interval in the 7 limit
>diamond. I thought about this, and concluded simply that although the
>higher harmonics are more sensitive to error, they also tend to be
>lower in amplitude. This justifies a simple unweighted error factor if
>you assume they cancel each other out, as I do.

Higher harmonics might require more accuracy to distinguish because
they're closer together and/or because the brain has to work harder
to pick them out. But, mistuning them a given amount seems to be less
*painful* than mistuning lower harmonics the same amount. For example,
a 2:1 minus 5 cents is worse, to my ear, than a 3:2 minus 5 cents. And in 12-tET,
the octave, 5th, major 3rd, and dom. 7th get progressively worse, and
it seems to work. This is one way of thinking about the weighting of
Paul's TOP temperaments.

>Furthermore, cents measure is more in line with our perception of
>pitch distance than say, error in Hertz.

That may not be true, actually, but it's a necessary abstraction
to get results that don't change depending on the concert pitch.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/9/2006 12:14:53 PM

>Of course, you can simply use the half of these between 1 and sqrt(2)
>to the same effect as using the whole diamond.

Assuming an octave-equivalent tuning...

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

2/9/2006 1:00:58 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >Of course, you can simply use the half of these between 1 and sqrt(2)
> >to the same effect as using the whole diamond.
>
> Assuming an octave-equivalent tuning...

Which, given we are talking about the tonality diamond, we have
already assumed.

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

2/9/2006 2:37:48 PM

--- In tuning-math@yahoogroups.com, "akjmicro" <aaron@...> wrote:

> Below you chose unweighted maximum error. I respectfully submit that
> total sum error is a better measure for my (our) purposes.

Here are 5 to 99 (rounded vals) sorted according to an unweighted
average error logflat badness. One can vary this procedure in all
sorts of ways; one of the most significant would be to look at the
9-limit instead, where 31 will not do as well. We can also use systems
such as TOP error, where there is no difference between the 7 and 9
limits, and could look at more vals (for instance, all of what I've
called "semistandard" vals, etc.)

31: .256652
99: .311647
72: .380463
5: .386623
10: .424609
41: .430062
12: .431456
19: .454330
68: .460325
27: .464210
6: .471880
9: .475364
15: .484503
22: .510277
53: .513451
7: .606008
37: .615556
16: .619887
46: .645323
62: .646723
26: .664455
58: .700468
84: .729241
18: .731336
21: .743399
50: .770551
87: .777033
35: .780217
36: .817088
77: .833641
25: .841939
94: .851137
43: .857439
29: .862325
45: .870225
49: .872220
57: .880881
56: .886261
80: .917481
34: .938662
63: .943791
24: .956611
17: .985173
8: .990181
60: 1.006005
32: 1.012412
89: 1.023616
90: 1.024532
42: 1.029797
14: 1.035317
11: 1.037572
78: 1.051263
76: 1.057390
38: 1.058976
40: 1.059231
95: 1.065617
47: 1.066582
20: 1.069948
81: 1.076663
82: 1.083687
91: 1.103317
93: 1.110469
13: 1.120164
65: 1.126335
28: 1.180739
59: 1.181819
51: 1.193396
83: 1.203700
73: 1.206710
30: 1.220871
74: 1.263842
88: 1.269551
52: 1.275842
33: 1.281781
70: 1.283007
44: 1.285817
48: 1.318464
67: 1.328597
75: 1.340198
61: 1.410839
55: 1.437747
23: 1.450026
97: 1.488086
69: 1.488737
66: 1.536314
79: 1.548907
96: 1.561303
92: 1.626112
39: 1.685094
71: 1.734099
54: 1.758384
85: 1.827869
86: 1.873669
98: 1.884821

🔗Keenan Pepper <keenanpepper@gmail.com>

2/9/2006 3:06:34 PM

How about this:

Any EDO will have two different intervals that approximate a just
interval. If it's a good EDO one will be a lot better than the other.
Define the ratio between the errors on either side to be the
"ambiguity", and sort the temperaments by the maximum ambiguity of any
ratio in the odd limit.
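
In Python, something like this (a sketch rather than my exact script; the
interval list shown is the 5-odd-limit diamond, and the other limits just
swap in their own diamonds):

from math import log2

FIVE_LIMIT = [3/2, 4/3, 5/4, 8/5, 6/5, 5/3]

def ambiguity(n, ratios):
    # worst-case ratio of the best approximation's error to the error of
    # the next-nearest degree, both measured in steps of n-edo
    worst = 0.0
    for r in ratios:
        x = n * log2(r)
        near = abs(x - round(x))
        worst = max(worst, near / (1 - near))
    return worst

ranked = sorted(range(5, 100), key=lambda n: ambiguity(n, FIVE_LIMIT))

(For example, ambiguity(53, FIVE_LIMIT) comes out around 0.066, matching
the first entry below.)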

The least ambiguous 5-limit EDOs <100:

53 0.0663129681303
65 0.107709678321
34 0.125207429082
87 0.131230283612
19 0.132033355179
99 0.148261987296
84 0.158547263143
31 0.182126131158
12 0.185414006976
72 0.21776400859
46 0.23656416643
22 0.271040602217
41 0.274842474139
68 0.286256269759
15 0.291259767292
38 0.304236011756
3 0.324700696639
80 0.325824585915
50 0.330008376335
96 0.335653737938
7 0.339578742731

The least ambiguous 7-limit EDOs <100:

99 0.148261987296
31 0.182126131158
72 0.21776400859
41 0.274842474139
68 0.286256269759
53 0.374340275394
27 0.444964422042
62 0.44536483704
84 0.468507722095
22 0.471908195495
58 0.488472886111
46 0.49172254311
12 0.495418659238
15 0.505766302044
19 0.514562944734

The least ambiguous 9-limit EDOs <100:

99 0.215694406482
41 0.302298401211
72 0.306506579143
53 0.374340275394
31 0.404305241048
58 0.488472886111
46 0.49172254311
19 0.514562944734
22 0.524548406018
57 0.538228221377
12 0.540454537344
90 0.584015021081
94 0.58699488416
24 0.603895874325

The least ambiguous 11-limit EDOs <100:

72 0.306506579143
31 0.404305241048
46 0.49172254311
58 0.545514121318
41 0.568272040567
22 0.585151764949
94 0.58699488416
90 0.600812284154
24 0.603895874325

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

2/9/2006 3:09:37 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<genewardsmith@...> wrote:

> Here are 5 to 99 (rounded vals) sorted according to an unweighted
> average error logflat badness.

Here's something else to ponder, again using logflat average error
badness. If you look at ets which beat out previous ets in terms of
7-limit badness, you get this:

1: .400590
4: .285161
31: .256652
171: .179643
103169: .090844

This list is probably infinite, but proving it is another matter. One
can prove its rate of growth for n is hyperexponential. The 103169 et
is the Paul Erlich division, and beating it will probably not be easy.

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

2/9/2006 3:49:30 PM

--- In tuning-math@yahoogroups.com, Keenan Pepper <keenanpepper@...>
wrote:
>
> How about this:
>
> Any EDO will have two different intervals that approximate a just
> interval. If it's a good EDO one will be a lot better than the other.
> Define the ratio between the errors on either side to be the
> "ambiguity", and sort the temperaments by the maximum ambiguity of any
> ratio in the odd limit.

I like it; it's a nice variation on the consistency level business.

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

2/9/2006 3:52:33 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<genewardsmith@...> wrote:

> The 103169 et
> is the Paul Erlich division, and beating it will probably not be easy.

In case anyone wants some 7-limit nanocommas to play with, here is its
TM basis:

[<9 -28 37 -18|, <-92 -17 21 25|, <110 -71 -11 10|]

🔗Carl Lumma <ekin@lumma.org>

2/9/2006 4:03:46 PM

>> How about this:
>>
>> Any EDO will have two different intervals that approximate a just
>> interval. If it's a good EDO one will be a lot better than the other.
>> Define the ratio between the errors on either side to be the
>> "ambiguity", and sort the temperaments by the maximum ambiguity of any
>> ratio in the odd limit.
>
>I like it; it's a nice variation on the consistency level business.

The only difference is Keenan isn't discarding the decimal part of
the result.

-Carl

🔗Keenan Pepper <keenanpepper@gmail.com>

2/9/2006 8:28:24 PM

On 2/9/06, Carl Lumma <ekin@lumma.org> wrote:
> >> How about this:
> >>
> >> Any EDO will have two different intervals that approximate a just
> >> interval. If it's a good EDO one will be a lot better than the other.
> >> Define the ratio between the errors on either side to be the
> >> "ambiguity", and sort the temperaments by the maximum ambiguity of any
> >> ratio in the odd limit.
> >
> >I like it; it's a nice variation on the consistency level business.
>
> The only difference is Keenan isn't discarding the decimal part of
> the result.
>
> -Carl

Ah yes, that's true. I guess I haven't really discovered anything new then.

Keenan

🔗akjmicro <aaron@akjmusic.com>

2/9/2006 9:41:30 PM

Keenan,

This is interesting. But I fail to see how it has anything to do with
my original concept, where the larger the EDO, the more it gets
penalized....

Best,
Aaron.

-- In tuning-math@yahoogroups.com, Keenan Pepper <keenanpepper@...> wrote:
>
> How about this:
>
> Any EDO will have two different intervals that approximate a just
> interval. If it's a good EDO one will be a lot better than the other.
> Define the ratio between the errors on either side to be the
> "ambiguity", and sort the temperaments by the maximum ambiguity of any
> ratio in the odd limit.

etc, etc...

🔗Carl Lumma <ekin@lumma.org>

2/9/2006 10:51:10 PM

>Keenan,
>
>This is interesting. But I fail to see how it has anything to do with
>my original concept, where the larger the EDO, the more it gets
>penalized....
>
>Best,
>Aaron.

As I said, what Keenan's doing is consistency, and consistency is a
type of badness. It penalizes error in terms of the size of the scale
step, which goes down as notes go up, so is the same.

-Carl

🔗akjmicro <aaron@akjmusic.com>

2/10/2006 2:39:52 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >Keenan,
> >
> >This is interesting. But I fail to see how it has anything to do with
> >my original concept, where the larger the EDO, the more it gets
> >penalized....
> >
> >Best,
> >Aaron.
>
> As I said, what Keenan's doing is consistency, and consistency is a
> type of badness. It penalizes error in terms of the size of the scale
> step, which goes down as notes go up, so is the same.

Of course I know what you mean, but you are missing my point.

Suppose you have errors of .2 and .1, the ratio being 2:1.
Then you have .6 and .3, also 2:1.

Clearly, the first pair comes from a larger EDO. But the ratio wouldn't
tell us that; we'd already have to know it.

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/10/2006 5:30:16 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> Hi Aaron!
>
> This is what we might call a type of "badness" calculation, or
> rather, the inverse of one. Actually, since you've defined
> efficiency as going up when Eff goes down, it's not the inverse.
> The general approach is
>
> badness = complexity * error

No, that's not the general approach. I don't know why you say it is!
We've seen huge number of results from Gene which use logflat badness
instead (which rarely agrees with your formula above), Dave Keenan
has also proposed different badness functions, and my paper
implicitly uses

badness = a*complexity + b*error

The general approach, instead, is to define a badness function that
is an increasing function of both complexity and error. Anything more
specific is less general. :)

> Complexity can be number of notes, but this gets tricky in linear
> and planar temperaments where the number of notes is not fixed.
> Gene, Graham, Dave, Paul, myself, and probably someone I'm
> forgetting have proposed various complexity measures. Usually
> they wind up being fairly similar to one another.
>
> Error also has variants. Weighted error is popular; the deviation
> from JI is weighted depending on the harmonic limit of the interval
> in question. There's weighted complexity, too, for that matter.
>
> Probably thousands of messages have been written on this topic,
> with little consensus to this day. :(

Actually, Graham and I have made relatively huge strides toward
consensus in the last few weeks. At least, we'll have a number of
different but clearly defined and clearly motivated measures.

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/10/2006 5:39:17 PM

--- In tuning-math@yahoogroups.com, "akjmicro" <aaron@...> wrote:

> > Complexity can be number of notes, but this gets tricky in linear
> > and planar temperaments where the number of notes is not fixed.
>
> That's not an issue if you are just interested in n-tets that
> *real men* would use---the EDO's. We could call my measure the "real
> man" measure, or the "stud measure", take your pick.....
>
> (No doubt, someone will chime in about why linear or planar
> temperaments are more for real men)

I assume you're joking, because I know you've used both meantone and
Superpythagorean temperaments in the past.

> Furthermore, cents
> measure is more in line with our perception of pitch distance than
> say, error in Hertz.

Fortunately, we don't need to worry about this, since it's impossible
to specify the error of a given ratio in a given ET (or regular
temperament) in Hertz.

> > Probably thousands of messages have been written on this topic,
> > with little consensus to this day. :(
>
> I don't see why anyone would conclude anything other than what I said,
> but that's probably only because I've thought so long about this, and
> come to my own conclusions that I'm comfortable with, perhaps....
>
> -Aaron.

I see that Gene already brought up the very important issue of chords
(and thus the very important issue of consistency) that you missed. I
could go on and on about other issues, but for now I'll leave you
with this: ratios of higher numbers are *less* sensitive to mistuning
than ratios of lower numbers, since mistuning a ratio of lower
numbers by a small amount will cause a greater increase in dissonance
than mistuning a ratio of larger numbers by that same small amount.
What do you think? You seem to have said something that contradicts
this.

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/10/2006 5:49:15 PM

--- In tuning-math@yahoogroups.com, "akjmicro" <aaron@...> wrote:
>
>
> Gene,
>
> Below you chose unweighted maximum error. I respectfully submit that
> total sum error

You mean sum absolute error?

> is a better measure for my (our) purposes.

They're actually identical (proportional) for the 5-odd-limit case.
And in the 7-odd-limit case, they don't diverge nearly as much as
your illustration below suggests, because of all the other intervals
(7/5, 6/5, 7/6) you're considering.

> (unless I'm
> wrong, see below)...I reason that if a particular interval error is
> bad in a given EDO, it may have superlative representations of other
> intervals; however, the maximum error alone wouldn't reflect this, and
> the EDO by your accounting would suffer in its ranking. Furthermore,
> the maximum error may be close to the other errors, and may make a
> poor EDO appear better than one that has a passable maximum error in
> a given interval, but several smaller, very good ones (witness
> 31-equal, with its passable 3/2, but excellent 5/4 and 7/4).

There are a lot of ways of thinking about this, but typically larger
errors are considered more important than smaller ones, and in
particular if someone has a particular threshold for acceptable
error, knowing the maximum error will be of quite a bit of interest.
Nevertheless, your point of view is certainly valid too (so long as
you take consistency into account), and corresponds to what we'd call
p=1, since we're raising the absolute errors to the 1st power before
adding. Max-error would be p=infinity, and rms-error is p=2. Gene
previously defined "poptimal" as optimal for any p from 2 to
infinity. If he hasn't lowered that lower bound to 1 yet, I hope he
will after reading your message.
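
In code terms, the whole family is just (a sketch; rms differs from the
p=2 form only by a constant factor, which doesn't change rankings):

def p_error(errors, p):
    # p=1 is the plain sum of absolute errors, p=2 corresponds to rms,
    # and as p grows the result approaches the maximum error
    if p == float('inf'):
        return max(abs(e) for e in errors)
    return sum(abs(e) ** p for e in errors) ** (1.0 / p)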

>
> The sum, by contrast, I think would give us an overall sense of the
> average sound of the EDO in relation to JI.
>
> Unless, of course, somehow, I've misunderstood you, and maximum error
> means "sum of all errors". Unless you can clarify, my instincts (which
> of course may be wrong) tell me that maximum wouldn't capture the
> overall utility of a given EDO at JI approximation?
>
> Best,
> Aaron.

George Secor has used maximum error to come up with things like his
optimal miracle tuning, and I'm sure he has very good reasons for
doing so (which I know he's explained before) . . .

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/10/2006 5:56:13 PM

It looks like you're messing up your diamonds. What you have below is
more than the 7-limit diamond but less than the 9-limit diamond.
You're missing 10/9, 9/8, 14/9, and 16/9 from the 9-limit diamond, but
should omit 9/7 and 9/5 if you want the 7-limit diamond.

If you are scoring 21 as better than 22 in the 7-limit or 9-limit,
it's clear that consistency, or at least something else, has bitten
you in the @$$ :)

--- In tuning-math@yahoogroups.com, "akjmicro" <aaron@...> wrote:
>
> Correction:
>
> I hadn't used *all* the 14 intervals of the 7-limit diamond in my
> script as of my post below. (there are 14, aren't there? 8/7,9/7,10/7,
> 12/7,6/5,7/5,8/5,9/5,4/3,5/3,7/6,3/2,5/4,7/4)
>
> With the correction, and all else being equal, my script now ranks the
> EDO's 5-99 thusly (31 is still king, but 99 and 15!!!--who'da thunk it
> --maybe that's why we want *some* weighting?--are 'better' than 12,
> and the EDO equivalent of George Bush's wasteful economic policies is
> now 96-EDO):
>
> ((Incidentally, I'm sure you guys have talked about 171-equal, which
> fares best when you look at 5-1024 EDO, it looks like the most
> practical way to do 'virtual JI', as I'm sure Gene would agree))
>

etc, etc...
> >
> > The left column is 'total error in cents' and the right column is
our
> > ranking of interest, 'efficiency':
> >
> > total_error efficiency
> >
> > 99: 11.9495 31: 66.8946
> > 72: 22.2008 12: 76.6857
> > 94: 29.0217 99: 79.0507
> > 68: 31.7373 41: 96.4323
> > 87: 32.0118 72: 100.7573
> > 84: 32.4549 7: 102.3707
> > 95: 37.2044 19: 104.3473
> > 53: 37.2751 15: 112.3440
> > 77: 37.9581 53: 114.6928
> > 89: 38.7576 14: 124.1117
> > 90: 40.4889 6: 125.4311
> > 93: 40.4928 27: 126.0480
> > 80: 44.4171 22: 133.4705
> > 41: 44.7721 68: 134.2733
> > 82: 44.7721 10: 137.0975
> > 91: 44.9235 18: 144.5536
> > 62: 46.9444 37: 149.9255
> > 98: 47.2205 21: 153.0049
> > 88: 47.3411 25: 153.4885
> > 96: 47.5408 46: 155.2680
> > 81: 47.7354 23: 159.2434
> > 58: 47.9142 11: 161.2244
> > 31: 48.3128 13: 165.7086
> > 76: 48.6515 58: 165.8568
> > 78: 49.3335 26: 167.4354
> > 83: 49.3755 35: 169.1267
> > 97: 50.4854 5: 171.8133
> > 92: 52.8566 62: 176.9442
> > 85: 55.1878 84: 177.2534
> > 86: 55.9476 17: 178.0986
> > 65: 58.9982 94: 180.8277
> > 75: 59.1556 36: 182.0417
> > 63: 59.8286 87: 182.2212
> > 46: 61.1662 8: 182.5136
> > 74: 62.6937 77: 186.8706
> > 79: 63.0488 16: 188.6049
> > 64: 63.4544 50: 205.3443
> > 73: 63.6051 43: 208.8975
> > 66: 63.7535 9: 214.2032
> > 57: 64.4410 45: 215.9116
> > 60: 65.5829 49: 216.1089
> > 70: 65.8457 57: 218.1080
> > 71: 66.4328 34: 222.3343
> > 67: 66.6762 47: 225.0262
> > 69: 69.3722 89: 226.5829
> > 56: 69.9429 80: 228.9191
> > 50: 72.1480 63: 230.1098
> > 59: 74.6132 56: 231.3495
> > 54: 76.1355 38: 232.3184
> > 61: 77.8686 30: 232.4634
> > 49: 78.0393 29: 232.9472
> > 37: 81.2097 95: 234.6738
> > 52: 82.0285 42: 235.3100
> > 47: 86.0394 76: 235.7728
> > 45: 87.7141 65: 235.9929
> > 55: 89.2080 60: 237.1073
> > 43: 90.5223 82: 237.6367
> > 51: 90.6945 40: 238.8968
> > 48: 92.7956 90: 239.8191
> > 35: 99.9385 54: 240.1196
> > 27: 100.8384 39: 240.9800
> > 36: 102.8932 24: 241.9821
> > 42: 105.4838 33: 243.2895
> > 40: 115.0244 32: 245.7927
> > 44: 116.7938 52: 246.0854
> > 39: 120.4900 78: 246.6677
> > 38: 120.8056 64: 248.9364
> > 34: 121.2733 93: 249.1862
> > 22: 133.4705 81: 249.6930
> > 33: 139.0226 48: 249.8344
> > 29: 142.3567 20: 250.2792
> > 19: 143.4776 28: 252.7545
> > 26: 143.5161 66: 259.9180
> > 32: 147.4756 59: 264.0161
> > 30: 154.9756 51: 265.1071
> > 28: 163.5470 83: 265.8680
> > 25: 166.2792 91: 269.5412
> > 24: 172.8444 88: 273.1216
> > 23: 173.7200 67: 276.9627
> > 21: 204.0065 44: 278.5083
> > 15: 224.6880 75: 282.1268
> > 18: 227.1556 61: 287.5149
> > 12: 230.0571 55: 288.2105
> > 16: 242.4920 70: 288.7081
> > 20: 250.2792 73: 293.5622
> > 17: 254.4266 74: 294.1783
> > 14: 310.2792 71: 296.3927
> > 10: 319.8941 69: 298.8341
> > 13: 372.8444 96: 303.5295
> > 9: 428.4065 85: 305.6556
> > 11: 429.9317 98: 308.7495
> > 8: 547.5408 86: 314.1673
> > 7: 614.2242 79: 320.0938
> > 6: 627.1556 92: 321.2052
> > 5: 687.2532 97: 326.2135
> >
> > -Aaron.
> >
>

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/10/2006 6:17:59 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<genewardsmith@...> wrote:
>
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith"
> <genewardsmith@> wrote:
>
> The 103169 et
> is the Paul Erlich division,

It was already known to Marc Jones as "Halloween '69" before I found it.

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/10/2006 6:20:59 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >Keenan,
> >
> >This is interesting. But I fail to see how it has anything to do with
> >my original concept, where the larger the EDO, the more it gets
> >penalized....
> >
> >Best,
> >Aaron.
>
> As I said, what Keenan's doing is consistency,

Not really -- it's more closely related to "unique articulation" than
to consistency.

🔗Carl Lumma <ekin@lumma.org>

2/10/2006 6:36:10 PM

>> Hi Aaron!
>>
>> This is what we might call a type of "badness" calculation, or
>> rather, the inverse of one. Actually, since you've defined
>> efficiency as going up when Eff goes down, it's not the inverse.
>> The general approach is
>>
>> badness = complexity * error
>
>No, that's not the general approach.

Gene said the same thing.

>I don't know why you say it is! We've seen huge number of results
>from Gene which use logflat badness instead (which rarely agrees
>with your formula above),

Clearly the formula above includes logflat badness.

>Dave Keenan has also proposed different badness functions, and my
>paper implicitly uses
>
>badness = a*complexity + b*error
>
>The general approach, instead, is to define a badness function that
>is an increasing function of both complexity and error. Anything more
>specific is less general. :)

In what way does my formula above preclude such definitions? Since
I gave none, and mentioned such definitions in the accompanying text,
I can only call your response here a misguided one.

>> Complexity can be number of notes, but this gets tricky in linear
>> and planar temperaments where the number of notes is not fixed.
>> Gene, Graham, Dave, Paul, myself, and probably someone I'm
>> forgetting have proposed various complexity measures. Usually
>> they wind up being fairly similar to one another.
>>
>> Error also has variants. Weighted error is popular; the deviation
>> from JI is weighted depending on the harmonic limit of the interval
>> in question. There's weighted complexity, too, for that matter.
>>
>> Probably thousands of messages have been written on this topic,
>> with little consensus to this day. :(
>
>Actually, Graham and I have made relatively huge strides toward
>consensus in the last few weeks.

That's good.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/10/2006 6:38:18 PM

>> >Keenan,
>> >
>> >This is interesting. But I fail to see how it has anything to do with
>> >my original concept, where the larger the EDO, the more it gets
>> >penalized....
>> >
>> >Best,
>> >Aaron.
>>
>> As I said, what Keenan's doing is consistency,
>
>Not really -- it's more closely related to "unique articulation" than
>to consistency.

What's "unique articulation"? It'd be pretty hard to get any closer
to consistency (as Paul Hahn defines it) than Keenan has.

-Carl

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/10/2006 6:52:36 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >> Hi Aaron!
> >>
> >> This is what we might call a type of "badness" calculation, or
> >> rather, the inverse of one. Actually, since you've defined
> >> efficiency as going up when Eff goes down, it's not the inverse.
> >> The general approach is
> >>
> >> badness = complexity * error
> >
> >No, that's not the general approach.
>
> Gene said the same thing.
>
> >I don't know why you say it is! We've seen huge number of results
> >from Gene which use logflat badness instead (which rarely agrees
> >with your formula above),
>
> Clearly the formula above includes logflat badness.

Rarely.

> >Dave Keenan has also proposed different badness functions, and my
> >paper implicitly uses
> >
> >badness = a*complexity + b*error
> >
> >The general approach, instead, is to define a badness function
that
> >is an increasing function of both complexity and error. Anything
more
> >specific is less general. :)
>
> In what way does my formula above preclude such definitions?

Because it's not equivalent to them! You'll never get a weighted sum
of complexity and error to be equivalent to a product of them.
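
A toy illustration (made-up numbers, not real temperament data): with
candidates A = (complexity 5, error 5) and B = (complexity 12, error 1),
the product ranks B first while the equal-weight sum ranks A first.

candidates = {"A": (5, 5.0), "B": (12, 1.0)}      # (complexity, error)
for name, (c, e) in candidates.items():
    print(name, "product:", c * e, "sum:", c + e)
# A product: 25.0 sum: 10.0
# B product: 12.0 sum: 13.0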

> Since
> I gave none,

Huh? You wrote,

"
> >> The general approach is
> >>
> >> badness = complexity * error
"

That reads "badness equals complexity times error".

> and mentioned such definitions in the accompanying text,
> I can only call your response here a misguided one.

Since Aaron was very pleased to see your post (he's also multiplying
complexity by error), it seems he was also misled by what you wrote. So
I don't think my response was misguided at all! It
seemed very "guided" considering the context of this list. Now, what
is it that you *really* meant?

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/10/2006 6:54:38 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >> >Keenan,
> >> >
> >> >This is interesting. But I fail to see how it has anything to
do with
> >> >my original concept, where the larger the EDO, the more it gets
> >> >penalized....
> >> >
> >> >Best,
> >> >Aaron.
> >>
> >> As I said, what Keenan's doing is consistency,
> >
> >Not really -- it's more closely related to "unique articulation"
than
> >to consistency.
>
> What's "unique articulation"?

It means all the consonances map to different numbers of steps in the
ET.
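
In code, that condition is just a duplicate check on the rounded step
counts (a sketch with my own function name):

from math import log2

def uniquely_articulates(n, ratios):
    # True if every consonance gets its own number of steps in n-EDO
    steps = [round(n * log2(r)) for r in ratios]
    return len(steps) == len(set(steps))

# 6/5 and 7/6 both round to 3 steps of 12, so 12 fails in the 7-limit:
print(uniquely_articulates(12, [3/2, 5/4, 6/5, 7/6, 7/5, 7/4]))   # False
print(uniquely_articulates(31, [3/2, 5/4, 6/5, 7/6, 7/5, 7/4]))   # True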

> It'd be pretty hard to get any closer
> to consistency (as Paul Hahn defines it) than Keenan has.
>
> -Carl

Why? I could see inconsistent tunings getting a higher score than
consistent tunings by Keenan's measure.

🔗Carl Lumma <ekin@lumma.org>

2/10/2006 7:07:32 PM

>>> This is interesting. But I fail to see how it has anything to do with
>>> my original concept, where the larger the EDO, the more it gets
>>> penalized....
>>
>> As I said, what Keenan's doing is consistency, and consistency is a
>> type of badness. It penalizes error in terms of the size of the scale
>> step, which goes down as notes go up, so is the same.
>
>Of course I know what you mean, but you are missing my point.
>
>Suppose you have an error of .2 and .1, the ratio being 2:1
>Then you have .6 and .3, also 2:1
>
>Clearly, the first one is a larger EDO.

Why's that? Are these errors fractions of the ET step?

>But the ratio wouldn't tell us that, we'd already have to know that.

If they are fractions of the ET step and the actual errors are
the same size, then the first would be a smaller ET and it would
have an advantage in complexity as you'd hope. If the errors
aren't the same size, then...

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/10/2006 7:11:24 PM

"
>> >> The general approach is
>> >>
>> >> badness = complexity * error
>"
>
>That reads "badness equals complexity times error".

And it doesn't say what complexity and error are.

>> and mentioned such definitions in the accompanying text,
>> I can only call your response here a misguided one.
>
>Since Aaron was very pleased to see your post since he's also
>multiplying complexity by error, it seems he was also misled by what
>you wrote. So I don't think my response was misguided at all! It
>seemed very "guided" considering the context of this list.

In another post I asked him about exponents. Gene's also
been talking with him about that.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/10/2006 7:11:51 PM

>> It'd be pretty hard to get any closer
>> to consistency (as Paul Hahn defines it) than Keenan has.
>>
>> -Carl
>
>Why? I could see inconsistent tunings getting a higher score than
>consistent tunings by Keenan's measure.

Once again, Keenan's measure is just Paul's without the decimal
part lopped off.

-C.

🔗Graham Breed <gbreed@gmail.com>

2/10/2006 7:20:01 PM

wallyesterpaulrus wrote:
> --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:

Carl:
>>Complexity can be number of notes, but this gets tricky in linear
>>and planar temperaments where the number of notes is not fixed.
>>Gene, Graham, Dave, Paul, myself, and probably someone I'm
>>forgetting have proposed various complexity measures. Usually
>>they wind up being fairly similar to one another.
>>
>>Error also has variants. Weighted error is popular; the deviation
>>from JI is weighted depending on the harmonic limit of the interval
>>in question. There's weighted complexity, too, for that matter.
>>
>>Probably thousands of messages have been written on this topic,
>>with little consensus to this day. :(

Paul E:
> Actually, Graham and I have made relatively huge strides toward
> consensus in the last few weeks. At least, we'll have a number of
> different but clearly defined and clearly motivated measures.

Oh, that's good! There are a few choices you have to make:

weighted/unweighted

odd/prime limit

max/mean/rms/whatever

A lot of variations are covered by my code at:

http://x31eq.com/temper/regular.zip

except that I moved the weighted-primes code to new files because it's a lot simpler without the baggage for odd-limits.

Complexity is a bit less obvious. For rank 2, most people use max-unweighted, which they call "Graham complexity". The point is that the number of notes in a scale minus the complexity gives you the number of otonal chords you can play. That can be adapted to weighted prime-limits. But for higher rank temperaments it doesn't obviously generalize so you can use wedgies instead.
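
A bare-bones sketch of the rank-2, max-unweighted case (the function name
is mine; the numbers are the usual septimal-meantone generator counts):

def graham_complexity(generator_counts):
    # span, in generator steps, needed to reach every consonant identity
    return max(generator_counts) - min(generator_counts)

# Meantone with the fifth as generator: 1 -> 0, 3 -> 1, 5 -> 4, 7 -> 10.
meantone = [0, 1, 4, 10]
c = graham_complexity(meantone)
print(c)   # 10, so a 12-note chain holds 12 - 10 = 2 complete otonal tetrads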

Graham

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/10/2006 7:25:41 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >> It'd be pretty hard to get any closer
> >> to consistency (as Paul Hahn defines it) than Keenan has.
> >>
> >> -Carl
> >
> >Why? I could see inconsistent tunings getting a higher score than
> >consistent tunings by Keenan's measure.
>
> Once again, Keenan's measure is just Paul's without the decimal
> part lopped off.

Show me.

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

2/10/2006 8:33:46 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >> It'd be pretty hard to get any closer
> >> to consistency (as Paul Hahn defines it) than Keenan has.
> >>
> >> -Carl
> >
> >Why? I could see inconsistent tunings getting a higher score than
> >consistent tunings by Keenan's measure.
>
> Once again, Keenan's measure is just Paul's without the decimal
> part lopped off.

floor(1 / Pepper ambiguity) you mean? That doesn't strike me as much
of a consistency measure. I thought it had something to do with
whether the rounded value was chosen for each element of the
corresponding diamond.

🔗Carl Lumma <ekin@lumma.org>

2/10/2006 9:02:41 PM

>> >Why? I could see inconsistent tunings getting a higher score than
>> >consistent tunings by Keenan's measure.
>>
>> Once again, Keenan's measure is just Paul's without the decimal
>> part lopped off.
>
>Show me.

For one, you can read the integer part of Keenan's result off the
tables on Paul's website. For two, Keenan's already agreed. For
three, here is Paul's algorithm for consistency

;; Takes an equal temperament and a list of natural number
;; identities and returns the exact consistency level at which
;; the et approximates the identities.

(define consist
  (lambda (et ls)
    (let ((y (map (lambda (r) (- r (round r)))
                  (map (lambda (s) (* et (log2 s))) ls))))
      (/ 1 (* 2 (abs (- (apply max y) (apply min y))))))))

;; Returns the Hahn consistency level for an equal temperament
;; over a list of natural number identities. Requires consist.

(define consist-int
  (lambda (et ls)
    (truncate (consist et ls))))
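
A rough Python equivalent (my own transliteration), taking ls to be the
list of odd identities, e.g. [3, 5, 7]:

from math import log2

def consist(et, ls):
    # spread of the signed step errors of the identities, halved and inverted
    y = [et * log2(s) - round(et * log2(s)) for s in ls]
    return 1 / (2 * abs(max(y) - min(y)))

def consist_int(et, ls):
    return int(consist(et, ls))

print(consist_int(12, [3, 5, 7]))   # -> 1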

-C.

🔗Carl Lumma <ekin@lumma.org>

2/10/2006 9:14:25 PM

>For one, you can read the integer part of Keenan's result off the
>tables on Paul's website.

Uh, I mean the ranking.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/10/2006 11:07:06 PM

>> >> It'd be pretty hard to get any closer
>> >> to consistency (as Paul Hahn defines it) than Keenan has.
>> >>
>> >> -Carl
>> >
>> >Why? I could see inconsistent tunings getting a higher score than
>> >consistent tunings by Keenan's measure.
>>
>> Once again, Keenan's measure is just Paul's without the decimal
>> part lopped off.
>
>floor(1 / Pepper ambiguity) you mean?

That's what I meant, but it's off by a factor of 2 and still only
close. Keenan's taking the worst interval's error, E, and the step
size S and reporting

E/(S-E)

Paul H.'s consistency is the integer part of how many times the worst
interval's error goes into half a step; floor(S/2E).

So ignoring floor, the factor of 2, and the inverse, we still
have

E/S != E/(S-E)

So sorry about that, but it looks like the ranking should still be
the same.

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

2/11/2006 3:17:14 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:

> So sorry about that, but it looks like the ranking should still be
> the same.

If P is Pepper ambiguity, then the Hahn rank, according to this, will be

floor((P+1)/(2P))

This makes more sense; if this has a value greater than 1, then
P<=1/3, which insures that the rounded val/patent breed or whatever
the hell you call it always gives the rounded value for every element
of the diamond, which is a reasonable notion of consistency. This is a
pretty strong requirement, however, in the higher limits.
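
For what it's worth, with Carl's P = E/(S-E) the algebra collapses:
(P+1)/(2P) = S/(2E), so this is the same quantity Carl wrote as
floor(S/2E), and asking for a value of at least 2 is the same as asking
for P <= 1/3. A quick numeric check with toy values:

from math import floor, isclose

for S, E in [(100.0, 15.6), (100.0, 33.1), (38.7, 5.0)]:
    P = E / (S - E)
    assert isclose((P + 1) / (2 * P), S / (2 * E))
    print(floor((P + 1) / (2 * P)), floor(S / (2 * E)))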

🔗akjmicro <aaron@akjmusic.com>

2/11/2006 6:22:03 AM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus" <perlich@...>
wrote:
>
> --- In tuning-math@yahoogroups.com, "akjmicro" <aaron@> wrote:
> >
> >
> > Gene,
> >
> > Below you chose unweigted maximum error. I respectfully submit that
> > total sum error
>
> You mean sum absolute error?

Yes. What else would I mean?

> > is a better measure for my (our) purposes.
>
> They're actually identical (proportional) for the 5-odd-limit case.
> And in the 7-odd-limit case, they don't diverge nearly as much as
> your illustration below suggests, because of all the other intervals
> (7/5, 6/5, 7/6) you're considering.

but they *do* diverge, which might be of interest, agreed? ;)

> > (unless I'm
> > wrong, see below)...I reason that if a particular interval error is
> > bad in a given EDO, it may have superlative representations of other
> > intervals; however the maximum error alone wouldn't reflect this,
> and
> > the EDO by your accounting would suffer in its ranking. Furthermore,
> > the maximum error may be close to the other errors, and may make a
> > poor EDO appear better than one that has an passable maximum error
> in
> > a given interval, but several smaller very good ones (witness
> > 31-equal, with its passable 3/2, but excellent 5/4 and 7/4)
>
> There are a lot of ways of thinking about this, but typically larger
> errors are considered more important than smaller ones, and in
> particular if someone has a particular threshold for acceptable
> error, knowing the maximum error will be of quite a bit of interest.
> Nevertheless, your point of view is certainly valid too (so long as
> you take consistency into account), and corresponds to what we'd call
> p=1, since we're raising the absolute errors to the 1st power before
> adding. Max-error would be p=infinity, and rms-error is p=2. Gene
> previously defined "poptimal" as optimal for any p from 2 to
> infinity. If he hasn't lowered that lower bound to 1 yet, I hope he
> will after reading your message.

Interesting, I see your points...

> >
> > The sum, by contrast, I think would give us an overall sense of the
> > average sound of the EDO in relation to JI.
> >
> > Unless, of course, somehow, I've misunderstood you, and maximum
> error
> > means "sum of all errors". Unless you can clarify, my instincts
> (which
> > of course may be wrong) tell me that maximum wouldn't capture the
> > overall utility of a given EDO at JI approximation?
> >
> > Best,
> > Aaron.
>
> George Secor has used maximum error to come up with things like his
> optimal miracle tuning, and I'm sure he has very good reasons for
> doing so (which I know he's explained before) . . .

I'm going to try something novel....metaranking (or should it be
called 'AKJ superranking'....just kidding)....ranking an EDO in the
n-limit not by using its sum of absolute error(s), but by its sum of
rankings in a list of errors for each interval. It might give me the
same thing, or it might illumine something else entirely.
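
Something like this sketch, perhaps (my own reading of the idea, with
made-up names): rank all the EDOs on each interval separately, then score
each EDO by the sum of its rank positions.

from math import log2

def best_error_cents(n, r):
    step = 1200 / n
    just = 1200 * log2(r)
    return abs(round(just / step) * step - just)

def metarank(edos, ratios):
    scores = {n: 0 for n in edos}
    for r in ratios:
        ordered = sorted(edos, key=lambda n: best_error_cents(n, r))
        for rank, n in enumerate(ordered):
            scores[n] += rank
    return sorted(scores.items(), key=lambda item: item[1])

ratios = [3/2, 4/3, 5/4, 8/5, 5/3, 6/5, 7/4, 8/7, 7/6, 12/7, 7/5, 10/7]
print(metarank(range(5, 100), ratios)[:10])   # ten lowest (best) rank sums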

Maybe there's a way to take this result and scale it consistently from
0 to 1, like the 'logflat' idea....

(As you can see, I'm not a rigorous mathematician, but more of a
playful explorer of sorts)

-Aaron.

🔗Carl Lumma <ekin@lumma.org>

2/11/2006 1:29:41 PM

>I'm going to try something novel....metaranking (or should it be
>called 'AKJ superranking'....just kidding)....ranking an EDO in the
>n-limit not by using it's sum of absolute error(s), but by it's sum of
>rankings in a list of errors for each interval. It might give me the
>same thing, or it might illumine something else entirely.

Interesting...

-C.

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/22/2006 11:53:49 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >For one, you can read the integer part of Keenan's result off the
> >tables on Paul's website.
>
> Uh, I mean the ranking.
>
> -Carl
>
Carl, you've lost me. Can you pull this all together for me, with URLs, formulae, and no scheme code please?

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/23/2006 3:52:49 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >> >> It'd be pretty hard to get any closer
> >> >> to consistency (as Paul Hahn defines it) than Keenan has.
> >> >>
> >> >> -Carl
> >> >
> >> >Why? I could see inconsistent tunings getting a higher score than
> >> >consistent tunings by Keenan's measure.
> >>
> >> Once again, Keenan's measure is just Paul's without the decimal
> >> part lopped off.
> >
> >floor(1 / Pepper ambiguity) you mean?
>
> That's what I meant, but it's off by a factor of 2 and still only
> close. Keenan's taking the worst interval's error, E, and the step
> size S and reporting
>
> E/(S-E)
>
> Paul H.'s consistency is the integer part of how many times the worst
> interval's error

I don't think this uses the same set of intervals that Keenan is using, Carl. If you do it with the full odd limit like Keenan, the measure won't help you to distinguish consistent from inconsistent tunings in every case.

>goes into half a step; floor(S/2E).
>
> So ignoring floor, the factor of 2, and the inverse, we still
> have
>
> E/S != E/(S-E)
>
> So sorry about that, but it looks like the ranking should still be
> the same.
>
> -Carl

I'm still skeptical, but not in number-crunch mode right now . . .

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/23/2006 4:09:35 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <genewardsmith@...> wrote:
>
> --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@> wrote:
>
> > So sorry about that, but it looks like the ranking should still be
> > the same.
>
> If P is Pepper ambiguity, then the Hahn rank, according to this, will be
>
> floor((P+1)/(2P))

I really don't think you meant to say "rank" here. Plus, I don't think Carl got the differences between the Hahn and Pepper measures right in the first place.

> This makes more sense; if this has a value greater than 1, then
> P<=1/3, which insures that the rounded val/patent breed or whatever
> the hell you call it always gives the rounded value for every element
> of the diamond, which is a reasonable notion of consistency. This is a
> pretty strong requirement, however, in the higher limits.

Is this stronger than our traditional consistency requirement?

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

2/23/2006 4:15:04 PM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus" <perlich@...>
wrote:

> > This makes more sense; if this has a value greater than 1, then
> > P<=1/3, which insures that the rounded val/patent breed or whatever
> > the hell you call it always gives the rounded value for every element
> > of the diamond, which is a reasonable notion of consistency. This is a
> > pretty strong requirement, however, in the higher limits.
>
> Is this stronger than our traditional consistency requirement?

Much stronger, especially in higher limits.

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/23/2006 4:20:57 PM

>
> > > is a better measure for my (our) purposes.
> >
> > They're actually identical (proportional) for the 5-odd-limit case.
> > And in the 7-odd-limit case, they don't diverge nearly as much as
> > your illustration below suggests, because of all the other intervals
> > (7/5, 6/5, 7/6) you're considering.
>
> but they *do* diverge, which might be of interest, agreed? ;)

Yeah, and that's no joke. Using the squared errors gives a third compromise option to keep an eye on.

> > > (unless I'm
> > > wrong, see below)...I reason that if a particular interval error is
> > > bad in a given EDO, it may have superlative representations of other
> > > intervals; however the maximum error alone wouldn't reflect this,
> > and
> > > the EDO by your accounting would suffer in its ranking. Furthermore,
> > > the maximum error may be close to the other errors, and may make a
> > > poor EDO appear better than one that has an passable maximum error
> > in
> > > a given interval, but several smaller very good ones (witness
> > > 31-equal, with its passable 3/2, but excellent 5/4 and 7/4)
> >
> > There are a lot of ways of thinking about this, but typically larger
> > errors are considered more important than smaller ones, and in
> > particular if someone has a particular threshold for acceptable
> > error, knowing the maximum error will be of quite a bit of interest.
> > Nevertheless, your point of view is certainly valid too (so long as
> > you take consistency into account), and corresponds to what we'd call
> > p=1, since we're raising the absolute errors to the 1st power before
> > adding. Max-error would be p=infinity, and rms-error is p=2. Gene
> > previously defined "poptimal" as optimal for any p from 2 to
> > infinity. If he hasn't lowered that lower bound to 1 yet, I hope he
> > will after reading your message.
>
> Interesting, I see your points...
>
> > >
> > > The sum, by contrast, I think would give us an overall sense of the
> > > average sound of the EDO in relation to JI.
> > >
> > > Unless, of course, somehow, I've misunderstood you, and maximum
> > error
> > > means "sum of all errors". Unless you can clarify, my instincts
> > (which
> > > of course may be wrong) tell me that maximum wouldn't capture the
> > > overall utility of a given EDO at JI approximation?
> > >
> > > Best,
> > > Aaron.
> >
> > George Secor has used maximum error to come up with things like his
> > optimal miracle tuning, and I'm sure he has very good reasons for
> > doing so (which I know he's explained before) . . .
>
> I'm going to try something novel....metaranking (or should it be
> called 'AKJ superranking'....just kidding)....ranking an EDO in the
> n-limit not by using it's sum of absolute error(s), but by it's sum of
> rankings in a list of errors for each interval.

It seems that this would make consistency very hard to deal with.

> It might give me the
> same thing, or it might illumine something else entirely.
>
> Maybe there's a way to take this result and scale it consistently from
> 0 to 1, like the 'logflat' idea....
>
> (As you can see, I'm not a rigorous mathematician, but more of a
> playful explorer of sorts)
>
> -Aaron.

Did you read the longish e-mail I sent you a couple weeks ago?

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/23/2006 11:24:27 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <genewardsmith@...> wrote:
>
> --- In tuning-math@yahoogroups.com, "wallyesterpaulrus" <perlich@>
> wrote:
>
> > > This makes more sense; if this has a value greater than 1, then
> > > P<=1/3, which insures that the rounded val/patent breed or whatever
> > > the hell you call it always gives the rounded value for every element
> > > of the diamond, which is a reasonable notion of consistency. This is a
> > > pretty strong requirement, however, in the higher limits.
> >
> > Is this stronger than our traditional consistency requirement?
>
> Much stronger, especially in higher limits.

I don't think Carl is aware of this difference right now. It doesn't look like Keenan's measure relates in any clean way to our traditional consistency condition.

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

2/23/2006 11:44:07 PM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus" <perlich@...>
wrote:

> I don't think Carl is aware of this difference right now. It doesn't
> look like Keenan's measure relates in any clean way to our traditional
> consistency condition.

He didn't call it consistency, but ambiguity. I think it's an
interesting measure.

🔗Carl Lumma <ekin@lumma.org>

2/24/2006 12:07:22 AM

>> > So sorry about that, but it looks like the ranking should still be
>> > the same.
>>
>> If P is Pepper ambiguity, then the Hahn rank, according to this, will be
>>
>> floor((P+1)/(2P))
>
>I really don't think you meant to say "rank" here. Plus, I don't think
>Carl got the differences between the Hahn and Pepper measures right in
>the first place.

I did finally get it right.

-Carl

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/24/2006 1:54:40 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >> > So sorry about that, but it looks like the ranking should
still be
> >> > the same.
> >>
> >> If P is Pepper ambiguity, then the Hahn rank, according to this,
will be
> >>
> >> floor((P+1)/(2P))
> >
> >I really don't think you meant to say "rank" here. Plus, I don't
think
> >Carl got the differences between the Hahn and Pepper measures
right in
> >the first place.
>
> I did finally get it right.
>
> -Carl

Where?

🔗Carl Lumma <ekin@lumma.org>

2/24/2006 2:03:34 PM

>> I did finally get it right.
>>
>> -Carl
>
>Where?

/tuning-math/message/14422

-C.

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/24/2006 2:12:14 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >> I did finally get it right.
> >>
> >> -Carl
> >
> >Where?
>
> /tuning-math/message/14422
>
> -C.

That's incorrect, as I remarked in my reply. Are you caught up?

🔗Carl Lumma <ekin@lumma.org>

2/24/2006 2:25:48 PM

>> >> I did finally get it right.
>> >>
>> >> -Carl
>> >
>> >Where?
>>
>> /tuning-math/message/14422
>>
>> -C.
>
>That's incorrect, as I remarked in my reply. Are you caught up?

What's incorrect about it?

-Carl

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/24/2006 2:47:59 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >> >> I did finally get it right.
> >> >>
> >> >> -Carl
> >> >
> >> >Where?
> >>
> >> /tuning-math/message/14422
> >>
> >> -C.
> >
> >That's incorrect, as I remarked in my reply. Are you caught up?
>
> What's incorrect about it?
>
> -Carl

I guess you haven't followed the thread. Keenan and Hahn use a
different set of intervals as input into their formulae. If they used
the same set of intervals, the consistency condition Gene derived
from Keenan ambiguity would be the same as our traditional
consistency condition, but as he told us, it isn't.

🔗Carl Lumma <ekin@lumma.org>

2/24/2006 2:59:11 PM

>> >> >> I did finally get it right.
>> >> >>
>> >> >> -Carl
>> >> >
>> >> >Where?
>> >>
>> >> /tuning-math/message/14422
>> >>
>> >> -C.
>> >
>> >That's incorrect, as I remarked in my reply. Are you caught up?
>>
>> What's incorrect about it?
>>
>> -Carl
>
>I guess you haven't followed the thread. Keenan and Hahn use a
>different set of intervals as input into their formulae.

What I saw were brief, vague assertions from you that this was
the case.

-Carl

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/24/2006 3:18:08 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >> >> It'd be pretty hard to get any closer
> >> >> to consistency (as Paul Hahn defines it) than Keenan has.
> >> >>
> >> >> -Carl
> >> >
> >> >Why? I could see inconsistent tunings getting a higher score
than
> >> >consistent tunings by Keenan's measure.
> >>
> >> Once again, Keenan's measure is just Paul's without the decimal
> >> part lopped off.
> >
> >floor(1 / Pepper ambiguity) you mean?
>
> That's what I meant, but it's off by a factor of 2 and still only
> close. Keenan's taking the worst interval's error, E, and the step
> size S and reporting
>
> E/(S-E)
>
> Paul H.'s consistency is the integer part of how many times the worst
> interval's error goes into half a step; floor(S/2E).

Paul H.'s definition:

"N-TET is level-P consistent at the M-limit IFF:
For any triad whose three intervals can each be expressed as a
product of P or fewer primary (or "consonant") M-limit intervals, the
sum of the N-TET approximations of the two smaller intervals equals
the (N-TET) approximation of their sum (the larger interval)."

The calculation you're referring to doesn't give you Hahn's
consistency level if you just use all the best approximations to the
consonant intervals (which is what Keenan ambiguity uses). This is
made clear when you consider that some ETs are inconsistent, so must
have consistency level below 1, but your formula will never give
anything below 1 when you use the same inputs as into Keenan
ambiguity.

Instead, what Hahn uses in his formula are the signed errors of the
N:1 consonances. The difference of the highest signed error and the
lowest signed error is what goes into half a step X times, where X is
the Hahn consistency level.
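
In other words, something like this sketch (my own naming; composite
identities such as 9 are taken at their own nearest approximation):

from math import floor, log2

def hahn_consistency_level(edo, identities):
    # signed errors, in cents, of the best approximations of the N:1 identities
    step = 1200 / edo
    signed = [round(1200 * log2(m) / step) * step - 1200 * log2(m)
              for m in identities]
    spread = max(signed) - min(signed)
    return floor((step / 2) / spread)

print(hahn_consistency_level(12, [3, 5, 7]))          # 1: 7-limit consistent
print(hahn_consistency_level(12, [3, 5, 7, 9, 11]))   # 0: 11-limit inconsistent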

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/24/2006 3:33:43 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >> >> >> I did finally get it right.
> >> >> >>
> >> >> >> -Carl
> >> >> >
> >> >> >Where?
> >> >>
> >> >> /tuning-math/message/14422
> >> >>
> >> >> -C.
> >> >
> >> >That's incorrect, as I remarked in my reply. Are you caught up?
> >>
> >> What's incorrect about it?
> >>
> >> -Carl
> >
> >I guess you haven't followed the thread. Keenan and Hahn use a
> >different set of intervals as input into their formulae.
>
> What I saw were brief, vague assertions from you that this was
> the case.
>
> -Carl

(See /tuning-math/message/14538)

🔗Carl Lumma <ekin@lumma.org>

2/24/2006 3:57:51 PM

>> Keenan's taking the worst interval's error, E, and the step
>> size S and reporting
>>
>> E/(S-E)
>>
>> Paul H.'s consistency is the integer part of how many times the
>> worst interval's error goes into half a step; floor(S/2E).
//
>The calculation you're referring to doesn't give you Hahn's
>consistency level if you just use all the best approximations to the
>consonant intervals (which is what Keenan ambiguity uses).
>This is made clear when you consider that some ETs are inconsistent,
>so must have consistency level below 1, but your formula will never
>give anything below 1 when you use the same inputs as into Keenan
>ambiguity.

For example, 6:5 is the least-accurate 5-limit interval in 12-tET.
The best approximation is 3 steps, or 300 cents. The error is
15.6 cents, so floor(S/2E) is 3. In the 11-limit, the least-accurate
interval is, I think, 11:8, with an error of 48.7 cents.
Floor(S/2E) is now zero.

>Instead, what Hahn uses in his formula are the signed errors of the
>N:1 consonances. The difference of the highest signed error and the
>lowest signed error is what goes into half a step X times, where X is
>the Hahn consistency level.

That's true, but it's equivalent, he claims, to checking all the
intervals.

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

2/24/2006 4:17:32 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:

> That's true, but it's equivalent, he claims, to checking all the
> intervals.

Consistency is equivalent to the patent val always giving the best
tuning for every element of the appropriate tonality diamond. How do
you get that?

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/24/2006 4:22:09 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >> Keenan's taking the worst interval's error, E, and the step
> >> size S and reporting
> >>
> >> E/(S-E)
> >>
> >> Paul H.'s consistency is the integer part of how many times the
> >> worst interval's error goes into half a step; floor(S/2E).
> //
> >The calculation you're referring to doesn't give you Hahn's
> >consistency level if you just use all the best approximations to the
> >consonant intervals (which is what Keenan ambiguity uses).
> >This is made clear when you consider that some ETs are inconsistent,
> >so must have consistency level below 1, but your formula will never
> >give anything below 1 when you use the same inputs as into Keenan
> >ambiguity.
>
> For example, 6:5 is the least-accurate 5-limit interval in 12-tET.
> The best approximation is 3 steps, or 300 cents. The error is
> 15.6 cents, so floor(S/2E) is 3. In the 11-limit, the least-accurate
> interval is, I think, 11:8, with an error of 48.7 cents.
> Floor(S/2E) is now zero.

Consider the 7-limit. The worst errors, for 7:4 and 7:6, are over 30 cents, so floor (S/2E) is again zero. But 12-equal *is* consistent in the 7-limit!

> >Instead, what Hahn uses in his formula are the signed errors of the
> >N:1 consonances. The difference of the highest signed error and the
> >lowest signed error is what goes into half a step X times, where X is
> >the Hahn consistency level.
>
> That's true, but it's equivalent, he claims, to checking all the
> intervals.

It's equivalent to checking all the triads directly as per his definition. It's *not* equivalent to plugging all the intervals' errors into his clever formula which takes the largest difference between the signed errors of the N:1 intervals as input. Hopefully I showed that with the 12-equal 7-limit example above, but many other examples can be found.

🔗Carl Lumma <ekin@lumma.org>

2/24/2006 4:49:32 PM

>> That's true, but it's equivalent, he claims, to checking all the
>> intervals.
>
>Consistency is equivalent to the patent val always giving the best
>tuning for every element of the appropriate tonality diamond.

In odd limits > 7, that's only true if:

() The val maps odd composites directly.
() There is > level-1 consistency.

>How do you get that?

I forget why it works, but it does.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/24/2006 4:50:19 PM

>> >> Keenan's taking the worst interval's error, E, and the step
>> >> size S and reporting
>> >>
>> >> E/(S-E)
>> >>
>> >> Paul H.'s consistency is the integer part of how many times the
>> >> worst interval's error goes into half a step; floor(S/2E).
>> //
>> >The calculation you're referring to doesn't give you Hahn's
>> >consistency level if you just use all the best approximations to the
>> >consonant intervals (which is what Keenan ambiguity uses).
>> >This is made clear when you consider that some ETs are inconsistent,
>> >so must have consistency level below 1, but your formula will never
>> >give anything below 1 when you use the same inputs as into Keenan
>> >ambiguity.
>>
>> For example, 6:5 is the least-accurate 5-limit interval in 12-tET.
>> The best approximation is 3 steps, or 300 cents. The error is
>> 15.6 cents, so floor(S/2E) is 3. In the 11-limit, the least-accurate
>> interval is, I think, 11:8, with an error of 48.7 cents.
>> Floor(S/2E) is now zero.
>
>Consider the 7-limit. The worst errors, for 7:4 and 7:6, are over 30
>cents, so floor (S/2E) is again zero. But 12-equal *is* consistent in
>the 7-limit!
>
>> >Instead, what Hahn uses in his formula are the signed errors of the
>> >N:1 consonances. The difference of the highest signed error and the
>> >lowest signed error is what goes into half a step X times, where X is
>> >the Hahn consistency level.
>>
>> That's true, but it's equivalent, he claims, to checking all the
>> intervals.
>
>It' equivalent to checking all the triads direcly as per his
>definition. It's *not* equivalent to plugging all the intervals'
>errors into his clever formula which takes the largest difference
>between the signed errors of the N:1 intervals as input. Hopefully I
>showed that with the 12-equal 7-limit example above, but many other
>examples can be found.

Ah.

-Carl

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/24/2006 4:55:40 PM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus" <perlich@...> wrote:
>
> --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@> wrote:
> >
> > >> Keenan's taking the worst interval's error, E, and the step
> > >> size S and reporting
> > >>
> > >> E/(S-E)
> > >>
> > >> Paul H.'s consistency is the integer part of how many times the
> > >> worst interval's error goes into half a step; floor(S/2E).
> > //
> > >The calculation you're referring to doesn't give you Hahn's
> > >consistency level if you just use all the best approximations to the
> > >consonant intervals (which is what Keenan ambiguity uses).
> > >This is made clear when you consider that some ETs are inconsistent,
> > >so must have consistency level below 1, but your formula will never
> > >give anything below 1 when you use the same inputs as into Keenan
> > >ambiguity.
> >
> > For example, 6:5 is the least-accurate 5-limit interval in 12-tET.
> > The best approximation is 3 steps, or 300 cents. The error is
> > 15.6 cents, so floor(S/2E) is 3. In the 11-limit, the least-accurate
> > interval is, I think, 11:8, with an error of 48.7 cents.
> > Floor(S/2E) is now zero.

Carl, I let this one slip by and fool me! Floor(100/97.4) is not zero, it's one!

> Consider the 7-limit. The worst errors, for 7:4 and 7:6, are over 30 cents, so floor (S/2E) is again zero.

It's again *one*.

> But 12-equal *is* consistent in the 7-limit!

And it's inconsistent in the 11-limit, where your formula again gives a one. So what you're calculating is *not* telling you the consistency or consistency level.
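
For the record, a quick check of those figures (a sketch over the full
odd-limit diamond of best approximations, as Keenan uses): the worst
error E in 12-equal and floor(S/2E) come out as 15.6 cents and 3 at the
5-limit, 33.1 cents and 1 at the 7-limit, and 48.7 cents and 1 at the
11-limit.

from fractions import Fraction
from math import floor, log2

def diamond(odd_limit):
    odds = range(1, odd_limit + 1, 2)
    out = set()
    for p in odds:
        for q in odds:
            if p != q:
                r = Fraction(p, q)
                while r >= 2:
                    r /= 2
                while r < 1:
                    r *= 2
                out.add(r)
    return out

S = 100.0   # 12-equal step, in cents
for limit in (5, 7, 11):
    errors = [abs(round(12 * log2(float(r))) * S - 1200 * log2(float(r)))
              for r in diamond(limit)]
    E = max(errors)
    print(limit, round(E, 1), floor(S / (2 * E)))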

> > >Instead, what Hahn uses in his formula are the signed errors of the
> > >N:1 consonances. The difference of the highest signed error and the
> > >lowest signed error is what goes into half a step X times, where X is
> > >the Hahn consistency level.
> >
> > That's true, but it's equivalent, he claims, to checking all the
> > intervals.
>
> It' equivalent to checking all the triads direcly as per his definition. It's *not* equivalent to plugging all the intervals' errors into his clever formula which takes the largest difference between the signed errors of the N:1 intervals as input. Hopefully I showed that with the 12-equal 7-limit example above, but many other examples can be found.
>

🔗Carl Lumma <ekin@lumma.org>

2/24/2006 5:01:39 PM

>> > >> Keenan's taking the worst interval's error, E, and the step
>> > >> size S and reporting
>> > >>
>> > >> E/(S-E)
>> > >>
>> > >> Paul H.'s consistency is the integer part of how many times the
>> > >> worst interval's error goes into half a step; floor(S/2E).
>> > //
>> > >The calculation you're referring to doesn't give you Hahn's
>> > >consistency level if you just use all the best approximations to
>> > >the consonant intervals (which is what Keenan ambiguity uses).
>> > >This is made clear when you consider that some ETs are inconsistent,
>> > >so must have consistency level below 1, but your formula will never
>> > >give anything below 1 when you use the same inputs as into Keenan
>> > >ambiguity.
>> >
>> > For example, 6:5 is the least-accurate 5-limit interval in 12-tET.
>> > The best approximation is 3 steps, or 300 cents. The error is
>> > 15.6 cents, so floor(S/2E) is 3. In the 11-limit, the least-accurate
>> > interval is, I think, 11:8, with an error of 48.7 cents.
>> > Floor(S/2E) is now zero.
>
>Carl, I let this one slip by and fool me! Floor(100/97,4) is not zero,
>it's one!
>
>> Consider the 7-limit. The worst errors, for 7:4 and 7:6, are over 30
>cents, so floor (S/2E) is again zero.
>
>It's again *one*.

Heh.

>> But 12-equal *is* consistent in the 7-limit!
>
>And it's inconsistent in the 11-limit, where your formula again gives
>a one. So what you're calculating is *not* telling you the consistency
>or consistency level.

True that.

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

2/24/2006 5:08:43 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:

> >Consistency is equivalent to the patent val always giving the best
> >tuning for every element of the appropriate tonality diamond.
>
> In odd limits > 7, that's only true if:
>
> () The val maps odd composites directly.
> () There is > level-1 consistency.

I don't know what you mean. What is an example?

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/24/2006 5:09:06 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >> That's true, but it's equivalent, he claims, to checking all the
> >> intervals.
> >
> >Consistency is equivalent to the patent val always giving the best
> >tuning for every element of the appropriate tonality diamond.
>
> In odd limits > 7, that's only true if:
>
> () The val maps odd composites directly.

What does that mean?

> () There is > level-1 consistency.

No, it's true even if the consistency level is just 1.

🔗Carl Lumma <ekin@lumma.org>

2/24/2006 6:25:42 PM

>> >Consistency is equivalent to the patent val always giving the best
>> >tuning for every element of the appropriate tonality diamond.
>>
>> In odd limits > 7, that's only true if:
>>
>> () The val maps odd composites directly.
>> () There is > level-1 consistency.
>
>I don't know what you mean. What is an example?

Do vals usually map odd composites like 9? If not, level-1
9-limit consistency would actually require that the 3-limit
be level-2 consistent to make your statement true. Yes?

-Carl