
126 7-limit linears

🔗Gene Ward Smith <gwsmith@svpal.org>

2/5/2004 4:08:21 AM

I first made a candidate list by the kitchen sink method:

(1) All pairs n,m<=200 of standard vals

(2) All pairs n,m<=200 of TOP vals

(3) All pairs 100<=n,m<400 of standard vals

(4) All pairs 100<=n,m<=400 of TOP vals

(5) Generators of standard vals up to 100

(6) Generators of certain nonstandard vals up to 100

(7) Pairs of commas from Paul's list of relative error < 0.06,
epimericity < 0.5

(8) Pairs of vals with consistent badness figure < 1.5 up to 5000

This led to a list of 32201 candidate wedgies, most of which of
course were incredible garbage. I then accepted everything with a 2.8
exponent badness less than 10000, where error is TOP error and
complexity is our mysterious L1 TOP complexity. I did not do any
cutting off for either error or complexity, figuring people could
decide how to do that for themselves. The first six systems are
macrotemperaments of dubious utility, number 7 is the {15/14, 25/24}
temperament, and 8 and 9 are the beep-ennealimmal pair, and number 13
is father. After ennealimmal, we don't get back into the micros until
number 46; if we wanted to avoid going there we could cut off at 4000.
Number 46, incidentally, has TM basis {2401/2400, 65625/65536} and is
covered by 140, 171, 202 and 311; the last is interesting because of
the peculiar talents of 311.
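
Concretely, the badness figure below appears to be just error times
complexity raised to the 2.8 power; a minimal sketch of that reading
(an assumption, but it matches the rows below):

    def badness(error, complexity, exponent=2.8):
        # 2.8-exponent badness: TOP error times L1 TOP complexity^2.8
        return error * complexity**exponent

    # row 15 (meantone): error 1.698521, complexity 11.765178
    print(badness(1.698521, 11.765178))  # ~1689.46, matching the table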

1 [0, 0, 2, 0, 3, 5] 662.236987 77.285947 2.153690
2 [1, 1, 0, -1, -3, -3] 806.955502 64.326132 2.467788
3 [0, 0, 3, 0, 5, 7] 829.171704 30.152577 3.266201
4 [0, 2, 2, 3, 3, -1] 870.690617 33.049025 3.216583
5 [1, 2, 1, 1, -1, -3] 888.831828 49.490949 2.805189
6 [1, 2, 3, 1, 2, 1] 1058.235145 33.404241 3.435525
7 [2, 1, 3, -3, -1, 4] 1076.506437 16.837898 4.414720
8 [2, 3, 1, 0, -4, -6] 1099.121425 14.176105 4.729524
9 [18, 27, 18, 1, -22, -34] 1099.959348 .036377 39.828719
10 [1, -1, 0, -4, -3, 3] 1110.471803 39.807123 3.282968
11 [0, 5, 0, 8, 0, -14] 1352.620311 7.239629 6.474937
12 [1, -1, -2, -4, -6, -2] 1414.400610 20.759083 4.516198
13 [1, -1, 3, -4, 2, 10] 1429.376082 14.130876 5.200719
14 [1, 4, -2, 4, -6, -16] 1586.917865 4.771049 7.955969
15 [1, 4, 10, 4, 13, 12] 1689.455290 1.698521 11.765178
16 [2, 1, -1, -3, -7, -5] 1710.030839 16.874108 5.204166
17 [1, 4, 3, 4, 2, -4] 1749.120722 14.253642 5.572288
18 [0, 0, 4, 0, 6, 9] 1781.787825 33.049025 4.153970
19 [1, -1, 1, -4, -1, 5] 1827.319456 54.908088 3.496512
20 [4, 4, 4, -3, -5, -2] 1926.265442 5.871540 7.916963
21 [2, -4, -4, -11, -12, 2] 2188.881053 3.106578 10.402108
22 [3, 0, 6, -7, 1, 14] 2201.891023 5.870879 8.304602
23 [0, 0, 5, 0, 8, 12] 2252.838883 19.840685 5.419891
24 [4, 2, 2, -6, -8, -1] 2306.678659 7.657798 7.679190
25 [2, 1, 6, -3, 4, 11] 2392.139586 9.396316 7.231437
26 [2, -1, 1, -6, -4, 5] 2452.275337 22.453717 5.345120
27 [0, 0, 7, 0, 11, 16] 2580.688285 9.431411 7.420171
28 [1, -3, -4, -7, -9, -1] 2669.323351 9.734056 7.425960
29 [5, 1, 12, -10, 5, 25] 2766.028555 1.276744 15.536039
30 [7, 9, 13, -2, 1, 5] 2852.991531 1.610469 14.458536
31 [2, -2, 1, -8, -4, 8] 3002.749158 14.130876 6.779481
32 [3, 0, -6, -7, -18, -14] 3181.791246 2.939961 12.125211
33 [2, 8, 1, 8, -4, -20] 3182.905310 3.668842 11.204461
34 [6, -7, -2, -25, -20, 15] 3222.094343 .631014 21.101881
35 [4, -3, 2, -14, -8, 13] 3448.998676 3.187309 12.124601
36 [1, -3, -2, -7, -6, 4] 3518.666155 18.633939 6.499551
37 [1, 4, 5, 4, 5, 0] 3526.975600 19.977396 6.345287
38 [2, 6, 6, 5, 4, -3] 3589.967809 8.400361 8.700992
39 [2, 1, -4, -3, -12, -12] 3625.480387 9.146173 8.470366
40 [2, -2, -2, -8, -9, 1] 3634.089963 14.531543 7.185526
41 [3, 2, 4, -4, -2, 4] 3638.704033 20.759083 6.329002
42 [6, 5, 3, -6, -12, -7] 3680.095702 3.187309 12.408714
43 [2, 8, 8, 8, 7, -4] 3694.344150 3.582707 11.917575
44 [2, 3, 6, 0, 4, 6] 3938.578264 20.759083 6.510560
45 [0, 0, 5, 0, 8, 11] 3983.263457 38.017335 5.266481
46 [22, -5, 3, -59, -57, 21] 4009.709706 .073527 49.166221
47 [3, 5, 9, 1, 6, 7] 4092.014696 6.584324 9.946084
48 [7, -3, 8, -21, -7, 27] 4145.427852 .946061 19.979719
49 [1, -8, -14, -15, -25, -10] 4177.550548 .912904 20.291786
50 [3, 5, 1, 1, -7, -12] 4203.022260 12.066285 8.088219
51 [1, 9, -2, 12, -6, -30] 4235.792998 2.403879 14.430906
52 [6, 10, 10, 2, -1, -5] 4255.362112 3.106578 13.189661
53 [2, 5, 3, 3, -1, -7] 4264.417050 21.655518 6.597656
54 [6, 5, 22, -6, 18, 37] 4465.462582 .536356 25.127403
55 [0, 0, 12, 0, 19, 28] 4519.315488 3.557008 12.840061
56 [1, -3, 3, -7, 2, 15] 4555.017089 15.315953 7.644302
57 [1, -1, -5, -4, -11, -9] 4624.441621 14.789095 7.782398
58 [16, 2, 5, -34, -37, 6] 4705.894319 .307997 31.211875
59 [4, -32, -15, -60, -35, 55] 4750.916876 .066120 54.255591
60 [1, -8, 39, -15, 59, 113] 4919.628715 .074518 52.639423
61 [3, 0, -3, -7, -13, -7] 4967.108742 11.051598 8.859010
62 [6, 0, 0, -14, -17, 0] 5045.450988 5.526647 11.410361
63 [37, 46, 75, -13, 15, 45] 5230.896745 .021640 83.678088
64 [1, 6, 5, 7, 5, -5] 5261.484667 11.970043 8.788871
65 [3, 2, -1, -4, -10, -8] 5276.949135 17.564918 7.671954
66 [1, 4, -9, 4, -17, -32] 5338.184867 2.536420 15.376139
67 [1, -3, 5, -7, 5, 20] 5338.971970 8.959294 9.797992
68 [10, 9, 7, -9, -17, -9] 5386.217633 1.171542 20.325677
69 [19, 19, 57, -14, 37, 79] 5420.385757 .046052 64.713343
70 [5, 3, 7, -7, -3, 8] 5753.932407 7.459874 10.743721
71 [3, 5, -6, 1, -18, -28] 5846.930660 3.094040 14.795975
72 [3, 12, -1, 12, -10, -36] 5952.918469 1.698521 18.448015
73 [6, 0, 3, -14, -12, 7] 6137.760804 5.291448 12.429144
74 [4, 4, 0, -3, -11, -11] 6227.282004 12.384652 9.221275
75 [3, 0, 9, -7, 6, 21] 6250.704457 6.584324 11.570803
76 [9, 5, -3, -13, -30, -21] 6333.111158 1.049791 22.396682
77 [0, 0, 8, 0, 13, 19] 6365.852053 14.967465 8.686091
78 [4, 2, 5, -6, -3, 6] 6370.380556 16.499269 8.391154
79 [1, -8, -2, -15, -6, 18] 6507.074340 4.974313 12.974488
80 [2, -6, 1, -14, -4, 19] 6598.741284 6.548265 11.820058
81 [2, 25, 13, 35, 15, -40] 6657.512727 .299647 35.677429
82 [6, -2, -2, -17, -20, 1] 6845.573750 3.740932 14.626943
83 [1, 7, 3, 9, 2, -13] 6852.061008 12.161876 9.603642
84 [0, 5, 5, 8, 8, -2] 7042.202107 19.368923 8.212986
85 [4, 2, 9, -6, 3, 15] 7074.478038 8.170435 11.196673
86 [8, 6, 6, -9, -13, -3] 7157.960980 3.268439 15.596153
87 [5, 8, 2, 1, -11, -18] 7162.155511 5.664628 12.817743
88 [3, 17, -1, 20, -10, -50] 7280.048554 .894655 24.922952
89 [4, 2, -1, -6, -13, -8] 7307.246603 13.289190 9.520562
90 [5, 13, -17, 9, -41, -76] 7388.593186 .276106 38.128083
91 [8, 18, 11, 10, -5, -25] 7423.457669 .968741 24.394122
92 [3, -2, 1, -10, -7, 8] 7553.291925 18.095699 8.628089
93 [3, 7, -1, 4, -10, -22] 7604.170165 7.279064 11.973078
94 [6, 10, 3, 2, -12, -21] 7658.950254 3.480440 15.622931
95 [14, 59, 33, 61, 13, -89] 7727.766150 .037361 79.148236
96 [3, -5, -6, -15, -18, 0] 7760.555544 4.513934 14.304666
97 [13, 14, 35, -8, 19, 42] 7785.862490 .261934 39.585940
98 [11, 13, 17, -5, -4, 3] 7797.739891 1.485250 21.312375
99 [2, -4, -16, -11, -31, -26] 7870.803242 1.267597 22.628529
100 [2, -9, -4, -19, -12, 16] 7910.552221 2.895855 16.877046
101 [0, 0, 9, 0, 14, 21] 7917.731843 14.176105 9.573860
102 [3, 12, 11, 12, 9, -8] 7922.981072 2.624742 17.489863
103 [1, -6, 3, -12, 2, 24] 8250.683192 8.474270 11.675656
104 [55, 73, 93, -12, -7, 11] 8282.844862 .017772 105.789216
105 [4, 7, 2, 2, -8, -15] 8338.658153 10.400103 10.893408
106 [0, 5, -5, 8, -8, -26] 8426.314560 8.215515 11.894828
107 [5, 8, 14, 1, 8, 10] 8428.707855 4.143252 15.190723
108 [6, 7, 5, -3, -9, -8] 8506.845926 6.986391 12.646486
109 [8, 13, 23, 2, 14, 17] 8538.660000 1.024522 25.136807
110 [0, 0, 10, 0, 16, 23] 8630.819015 11.358665 10.686371
111 [3, -7, -8, -18, -21, 1] 8799.551719 2.900537 17.521249
112 [0, 5, 10, 8, 16, 9] 8869.402675 6.941749 12.865826
113 [4, 16, 9, 16, 3, -24] 8931.184092 1.698521 21.324102
114 [6, 5, 7, -6, -6, 2] 8948.277847 9.097987 11.718042
115 [3, -3, 1, -12, -7, 11] 9072.759561 14.130876 10.062449
116 [0, 12, 24, 19, 38, 22] 9079.668325 .617051 30.795105
117 [33, 78, 90, 47, 50, -10] 9153.275887 .016734 112.014440
118 [5, 1, -7, -10, -25, -19] 9260.372155 3.148011 17.329377
119 [1, -6, -2, -12, -6, 12] 9290.939644 13.273963 10.377495
120 [2, -2, 4, -8, 1, 15] 9367.180611 25.460673 8.247748
121 [3, 5, 16, 1, 17, 23] 9529.360455 3.220227 17.366255
122 [6, 3, 5, -9, -9, 3] 9771.701969 9.773087 11.787090
123 [15, -2, -5, -38, -50, -6] 9772.798330 .479706 34.589494
124 [2, -6, -6, -14, -15, 3] 9810.819078 6.548265 13.618691
125 [1, 9, 3, 12, 2, -18] 9825.667878 9.244393 12.047225
126 [1, -13, -2, -23, -6, 32] 9884.172505 2.432212 19.449425

🔗Gene Ward Smith <gwsmith@svpal.org>

2/5/2004 11:18:21 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
After all the complaints, no response. :(

Here are some high-error temperaments:

> 25 [2, 1, 6, -3, 4, 11] 2392.139586 9.396316 7.231437
commas: {21/20, 25/24}
mapping: [<1 1 2 3|, <0 2 1 -2|]

> 28 [1, -3, -4, -7, -9, -1] 2669.323351 9.734056 7.425960
commas: {21/20, 135/128}
mapping: [<1 2 1 1|, <0 -1 3 4|]

> 39 [2, 1, -4, -3, -12, -12] 3625.480387 9.146173 8.470366
commas: {25/24, 64/63} "Number 56"
mapping: [<1 1 2 4|, <0 2 1 -4|]

> 62 [6, 0, 0, -14, -17, 0] 5045.450988 5.526647 11.410361
commas: {50/49, 128/125} "Number 85" Has a 12 tone DE
mapping: [<6 10 14 17|, <0 -1 0 0|]

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/6/2004 12:10:20 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> 1 [0, 0, 2, 0, 3, 5] 662.236987 77.285947 2.153690
> 2 [1, 1, 0, -1, -3, -3] 806.955502 64.326132 2.467788
> 3 [0, 0, 3, 0, 5, 7] 829.171704 30.152577 3.266201
...

Thanks for the list. I can certainly get it into a spreadsheet and
plot it easily, but I have no idea what I'm plotting. I assume the
last two columns are error and complexity but I have no idea which is
which.

Also, I'm not yet up to speed on reading wedgies directly so I have no
idea of the identity of the temperaments. Can we please have
generators or mappings or comma pairs, if not names (where they exist)?

I suppose you figure it was difficult to generate so it should be
difficult to interpret as well. ;-)

🔗Gene Ward Smith <gwsmith@svpal.org>

2/6/2004 2:44:30 AM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
wrote:
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
> wrote:
> > 1 [0, 0, 2, 0, 3, 5] 662.236987 77.285947 2.153690
> > 2 [1, 1, 0, -1, -3, -3] 806.955502 64.326132 2.467788
> > 3 [0, 0, 3, 0, 5, 7] 829.171704 30.152577 3.266201
> ...
>
> Thanks for the list. I can certainly get it into a spreadsheet and
> plot it easily, but I have no idea what I'm plotting. I assume the
> last two columns are error and complexity but I have no idea which is
> which.

It's easy to tell which is which, but it's badness, error, complexity.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/6/2004 4:18:16 AM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...> wrote:

> Also, I'm not yet up to speed on reading wedgies directly so I have no
> idea of the identity of the temperaments.

The first three numbers give the relevant information re the mapping,
so this should not be a problem. The gcd is the period, and dividing
by the period gives the mapping to primes of the "generator". There is
no reason to think that [<1 2 4 7|, <0 -1 -4 -10|] is going to be any
clearer or better than <<1 4 10 4 13 12|| that I can see; it's only at
the 13-limit and beyond that the mapping ends up using fewer numbers
than the wedgie.
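
To make that recipe concrete, here is a minimal sketch (a hypothetical
helper, not code anyone has posted) of reading the period and the
generator's mapping to 3, 5 and 7 off the first three wedgie entries:

    from math import gcd
    from functools import reduce

    def period_and_generator(wedgie):
        # the first three entries are the generator's mapping to 3, 5, 7,
        # scaled by the number of periods per octave (their gcd)
        w1, w2, w3 = wedgie[:3]
        periods_per_octave = reduce(gcd, (abs(w1), abs(w2), abs(w3)))
        gen_map = [w // periods_per_octave for w in (w1, w2, w3)]
        return periods_per_octave, gen_map

    print(period_and_generator([1, 4, 10, 4, 13, 12]))     # meantone: (1, [1, 4, 10])
    print(period_and_generator([2, -4, -4, -11, -12, 2]))  # pajara:   (2, [1, -2, -2])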

🔗Paul Erlich <perlich@aya.yale.edu>

2/6/2004 8:44:34 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
> wrote:
> After all the complaints, no response. :(

Some of us have to sleep sometimes . . . patience . . .

🔗Paul Erlich <perlich@aya.yale.edu>

2/6/2004 10:47:29 AM

Since there's a huge empty gap between complexity ~25+ and ~31, I was
forced to look for a lower-complexity moat (probably a good thing
anyway). I'll upload a graph showing the temperaments indicated by
their ranking according to error/8.125 + complexity/25, since I saw a
reasonable linear moat where this measure equals 1. Twenty
temperaments make it in (a quick check of the measure is sketched
after the list):

1. Huygens meantone
2. Semisixths
3. Magic
4. Pajara
5. Tripletone
6. Superpythagorean
7. Negri
8. Kleismic
9. Hemifourths
10. Dominant Seventh
11. [598.4467109, 162.3159606],[[2, 4, 6, 7], [0, -3, -5, -5]]
12. Orwell
13. Injera
14. Miracle
15. Schismic
16. Flattone
17. Supermajor seconds
18. 1/12 oct. period, 25 cent generator (we discussed this years ago)
19. Nonkleismic
20. Porcupine
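
Here is the quick check mentioned above, a minimal sketch applied to
the error and complexity columns from Gene's list (the figures behind
the ranking may differ slightly, so membership right at the boundary
is not guaranteed by this):

    def linear_moat(error, complexity):
        # the linear moat measure: in if the value is below 1
        return error / 8.125 + complexity / 25

    # meantone (row 15 of Gene's list): comfortably inside
    print(linear_moat(1.698521, 11.765178))   # ~0.68 < 1
    # ennealimmal (row 9): excluded by the complexity term
    print(linear_moat(0.036377, 39.828719))   # ~1.60 > 1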

If we allow the moat to be slightly concave, we would include:

26. Diminished
29. Augmented

A bit more concavity still and we include

45. Blackwood

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> I first made a candidate list by the kitchen sink method:
>
> (1) All pairs n,m<=200 of standard vals
>
> (2) All pairs n,m<=200 of TOP vals
>
> (3) All pairs 100<=n,m<400 of standard vals
>
> (4) All pairs 100<=n,m<=400 of TOP vals
>
> (5) Generators of standard vals up to 100
>
> (6) Generators of certain nonstandard vals up to 100
>
> (7) Pairs of commas from Paul's list of relative error < 0.06,
> epimericity < 0.5
>
> (8) Pairs of vals with consistent badness figure < 1.5 up to 5000
>
> This lead to a list of 32201 candidate wedgies, most of which of
> course were incredible garbage. I then accepted everything with a
2.8
> exponent badness less than 10000, where error is TOP error and
> complexity is our mysterious L1 TOP complexity. I did not do any
> cutting off for either error or complexity, figuring people could
> decide how to do that for themselves. The first six systems are
> macrotemperaments of dubious utility, number 7 is the {15/14, 25/24}
> temperament, and 8 and 9 are the beep-ennealimmal pair, and number
13
> is father. After ennealimmal, we don't get back into the micros
until
> number 46; if we wanted to avoid going there we can cutoff at 4000.
> Number 46, incidentally, has TM basis {2401/2400, 65625/65536} and
is
> covered by 140, 171, 202 and 311; the last is interesting because of
> the peculiar talents of 311.
>
>
>
> 1 [0, 0, 2, 0, 3, 5] 662.236987 77.285947 2.153690
> 2 [1, 1, 0, -1, -3, -3] 806.955502 64.326132 2.467788
> 3 [0, 0, 3, 0, 5, 7] 829.171704 30.152577 3.266201
> 4 [0, 2, 2, 3, 3, -1] 870.690617 33.049025 3.216583
> 5 [1, 2, 1, 1, -1, -3] 888.831828 49.490949 2.805189
> 6 [1, 2, 3, 1, 2, 1] 1058.235145 33.404241 3.435525
> 7 [2, 1, 3, -3, -1, 4] 1076.506437 16.837898 4.414720
> 8 [2, 3, 1, 0, -4, -6] 1099.121425 14.176105 4.729524
> 9 [18, 27, 18, 1, -22, -34] 1099.959348 .036377 39.828719
> 10 [1, -1, 0, -4, -3, 3] 1110.471803 39.807123 3.282968
> 11 [0, 5, 0, 8, 0, -14] 1352.620311 7.239629 6.474937
> 12 [1, -1, -2, -4, -6, -2] 1414.400610 20.759083 4.516198
> 13 [1, -1, 3, -4, 2, 10] 1429.376082 14.130876 5.200719
> 14 [1, 4, -2, 4, -6, -16] 1586.917865 4.771049 7.955969
> 15 [1, 4, 10, 4, 13, 12] 1689.455290 1.698521 11.765178
> 16 [2, 1, -1, -3, -7, -5] 1710.030839 16.874108 5.204166
> 17 [1, 4, 3, 4, 2, -4] 1749.120722 14.253642 5.572288
> 18 [0, 0, 4, 0, 6, 9] 1781.787825 33.049025 4.153970
> 19 [1, -1, 1, -4, -1, 5] 1827.319456 54.908088 3.496512
> 20 [4, 4, 4, -3, -5, -2] 1926.265442 5.871540 7.916963
> 21 [2, -4, -4, -11, -12, 2] 2188.881053 3.106578 10.402108
> 22 [3, 0, 6, -7, 1, 14] 2201.891023 5.870879 8.304602
> 23 [0, 0, 5, 0, 8, 12] 2252.838883 19.840685 5.419891
> 24 [4, 2, 2, -6, -8, -1] 2306.678659 7.657798 7.679190
> 25 [2, 1, 6, -3, 4, 11] 2392.139586 9.396316 7.231437
> 26 [2, -1, 1, -6, -4, 5] 2452.275337 22.453717 5.345120
> 27 [0, 0, 7, 0, 11, 16] 2580.688285 9.431411 7.420171
> 28 [1, -3, -4, -7, -9, -1] 2669.323351 9.734056 7.425960
> 29 [5, 1, 12, -10, 5, 25] 2766.028555 1.276744 15.536039
> 30 [7, 9, 13, -2, 1, 5] 2852.991531 1.610469 14.458536
> 31 [2, -2, 1, -8, -4, 8] 3002.749158 14.130876 6.779481
> 32 [3, 0, -6, -7, -18, -14] 3181.791246 2.939961 12.125211
> 33 [2, 8, 1, 8, -4, -20] 3182.905310 3.668842 11.204461
> 34 [6, -7, -2, -25, -20, 15] 3222.094343 .631014 21.101881
> 35 [4, -3, 2, -14, -8, 13] 3448.998676 3.187309 12.124601
> 36 [1, -3, -2, -7, -6, 4] 3518.666155 18.633939 6.499551
> 37 [1, 4, 5, 4, 5, 0] 3526.975600 19.977396 6.345287
> 38 [2, 6, 6, 5, 4, -3] 3589.967809 8.400361 8.700992
> 39 [2, 1, -4, -3, -12, -12] 3625.480387 9.146173 8.470366
> 40 [2, -2, -2, -8, -9, 1] 3634.089963 14.531543 7.185526
> 41 [3, 2, 4, -4, -2, 4] 3638.704033 20.759083 6.329002
> 42 [6, 5, 3, -6, -12, -7] 3680.095702 3.187309 12.408714
> 43 [2, 8, 8, 8, 7, -4] 3694.344150 3.582707 11.917575
> 44 [2, 3, 6, 0, 4, 6] 3938.578264 20.759083 6.510560
> 45 [0, 0, 5, 0, 8, 11] 3983.263457 38.017335 5.266481
> 46 [22, -5, 3, -59, -57, 21] 4009.709706 .073527 49.166221
> 47 [3, 5, 9, 1, 6, 7] 4092.014696 6.584324 9.946084
> 48 [7, -3, 8, -21, -7, 27] 4145.427852 .946061 19.979719
> 49 [1, -8, -14, -15, -25, -10] 4177.550548 .912904 20.291786
> 50 [3, 5, 1, 1, -7, -12] 4203.022260 12.066285 8.088219
> 51 [1, 9, -2, 12, -6, -30] 4235.792998 2.403879 14.430906
> 52 [6, 10, 10, 2, -1, -5] 4255.362112 3.106578 13.189661
> 53 [2, 5, 3, 3, -1, -7] 4264.417050 21.655518 6.597656
> 54 [6, 5, 22, -6, 18, 37] 4465.462582 .536356 25.127403
> 55 [0, 0, 12, 0, 19, 28] 4519.315488 3.557008 12.840061
> 56 [1, -3, 3, -7, 2, 15] 4555.017089 15.315953 7.644302
> 57 [1, -1, -5, -4, -11, -9] 4624.441621 14.789095 7.782398
> 58 [16, 2, 5, -34, -37, 6] 4705.894319 .307997 31.211875
> 59 [4, -32, -15, -60, -35, 55] 4750.916876 .066120 54.255591
> 60 [1, -8, 39, -15, 59, 113] 4919.628715 .074518 52.639423
> 61 [3, 0, -3, -7, -13, -7] 4967.108742 11.051598 8.859010
> 62 [6, 0, 0, -14, -17, 0] 5045.450988 5.526647 11.410361
> 63 [37, 46, 75, -13, 15, 45] 5230.896745 .021640 83.678088
> 64 [1, 6, 5, 7, 5, -5] 5261.484667 11.970043 8.788871
> 65 [3, 2, -1, -4, -10, -8] 5276.949135 17.564918 7.671954
> 66 [1, 4, -9, 4, -17, -32] 5338.184867 2.536420 15.376139
> 67 [1, -3, 5, -7, 5, 20] 5338.971970 8.959294 9.797992
> 68 [10, 9, 7, -9, -17, -9] 5386.217633 1.171542 20.325677
> 69 [19, 19, 57, -14, 37, 79] 5420.385757 .046052 64.713343
> 70 [5, 3, 7, -7, -3, 8] 5753.932407 7.459874 10.743721
> 71 [3, 5, -6, 1, -18, -28] 5846.930660 3.094040 14.795975
> 72 [3, 12, -1, 12, -10, -36] 5952.918469 1.698521 18.448015
> 73 [6, 0, 3, -14, -12, 7] 6137.760804 5.291448 12.429144
> 74 [4, 4, 0, -3, -11, -11] 6227.282004 12.384652 9.221275
> 75 [3, 0, 9, -7, 6, 21] 6250.704457 6.584324 11.570803
> 76 [9, 5, -3, -13, -30, -21] 6333.111158 1.049791 22.396682
> 77 [0, 0, 8, 0, 13, 19] 6365.852053 14.967465 8.686091
> 78 [4, 2, 5, -6, -3, 6] 6370.380556 16.499269 8.391154
> 79 [1, -8, -2, -15, -6, 18] 6507.074340 4.974313 12.974488
> 80 [2, -6, 1, -14, -4, 19] 6598.741284 6.548265 11.820058
> 81 [2, 25, 13, 35, 15, -40] 6657.512727 .299647 35.677429
> 82 [6, -2, -2, -17, -20, 1] 6845.573750 3.740932 14.626943
> 83 [1, 7, 3, 9, 2, -13] 6852.061008 12.161876 9.603642
> 84 [0, 5, 5, 8, 8, -2] 7042.202107 19.368923 8.212986
> 85 [4, 2, 9, -6, 3, 15] 7074.478038 8.170435 11.196673
> 86 [8, 6, 6, -9, -13, -3] 7157.960980 3.268439 15.596153
> 87 [5, 8, 2, 1, -11, -18] 7162.155511 5.664628 12.817743
> 88 [3, 17, -1, 20, -10, -50] 7280.048554 .894655 24.922952
> 89 [4, 2, -1, -6, -13, -8] 7307.246603 13.289190 9.520562
> 90 [5, 13, -17, 9, -41, -76] 7388.593186 .276106 38.128083
> 91 [8, 18, 11, 10, -5, -25] 7423.457669 .968741 24.394122
> 92 [3, -2, 1, -10, -7, 8] 7553.291925 18.095699 8.628089
> 93 [3, 7, -1, 4, -10, -22] 7604.170165 7.279064 11.973078
> 94 [6, 10, 3, 2, -12, -21] 7658.950254 3.480440 15.622931
> 95 [14, 59, 33, 61, 13, -89] 7727.766150 .037361 79.148236
> 96 [3, -5, -6, -15, -18, 0] 7760.555544 4.513934 14.304666
> 97 [13, 14, 35, -8, 19, 42] 7785.862490 .261934 39.585940
> 98 [11, 13, 17, -5, -4, 3] 7797.739891 1.485250 21.312375
> 99 [2, -4, -16, -11, -31, -26] 7870.803242 1.267597 22.628529
> 100 [2, -9, -4, -19, -12, 16] 7910.552221 2.895855 16.877046
> 101 [0, 0, 9, 0, 14, 21] 7917.731843 14.176105 9.573860
> 102 [3, 12, 11, 12, 9, -8] 7922.981072 2.624742 17.489863
> 103 [1, -6, 3, -12, 2, 24] 8250.683192 8.474270 11.675656
> 104 [55, 73, 93, -12, -7, 11] 8282.844862 .017772 105.789216
> 105 [4, 7, 2, 2, -8, -15] 8338.658153 10.400103 10.893408
> 106 [0, 5, -5, 8, -8, -26] 8426.314560 8.215515 11.894828
> 107 [5, 8, 14, 1, 8, 10] 8428.707855 4.143252 15.190723
> 108 [6, 7, 5, -3, -9, -8] 8506.845926 6.986391 12.646486
> 109 [8, 13, 23, 2, 14, 17] 8538.660000 1.024522 25.136807
> 110 [0, 0, 10, 0, 16, 23] 8630.819015 11.358665 10.686371
> 111 [3, -7, -8, -18, -21, 1] 8799.551719 2.900537 17.521249
> 112 [0, 5, 10, 8, 16, 9] 8869.402675 6.941749 12.865826
> 113 [4, 16, 9, 16, 3, -24] 8931.184092 1.698521 21.324102
> 114 [6, 5, 7, -6, -6, 2] 8948.277847 9.097987 11.718042
> 115 [3, -3, 1, -12, -7, 11] 9072.759561 14.130876 10.062449
> 116 [0, 12, 24, 19, 38, 22] 9079.668325 .617051 30.795105
> 117 [33, 78, 90, 47, 50, -10] 9153.275887 .016734 112.014440
> 118 [5, 1, -7, -10, -25, -19] 9260.372155 3.148011 17.329377
> 119 [1, -6, -2, -12, -6, 12] 9290.939644 13.273963 10.377495
> 120 [2, -2, 4, -8, 1, 15] 9367.180611 25.460673 8.247748
> 121 [3, 5, 16, 1, 17, 23] 9529.360455 3.220227 17.366255
> 122 [6, 3, 5, -9, -9, 3] 9771.701969 9.773087 11.787090
> 123 [15, -2, -5, -38, -50, -6] 9772.798330 .479706 34.589494
> 124 [2, -6, -6, -14, -15, 3] 9810.819078 6.548265 13.618691
> 125 [1, 9, 3, 12, 2, -18] 9825.667878 9.244393 12.047225
> 126 [1, -13, -2, -23, -6, 32] 9884.172505 2.432212 19.449425

🔗Gene Ward Smith <gwsmith@svpal.org>

2/6/2004 12:56:41 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
> Since there's a huge empty gap between complexity ~25+ and ~31, I was
> forced to look for a lower-complexity moat (probably a good thing
> anyway). I'll upload a graph showing the temperaments indicated by
> their ranking according to error/8.125 + complexity/25, since I saw a
> reasonable linear moat where this measure equals 1. Twenty
> temperaments make it in:

Given that we normally relate error and complexity multiplicatively, I
think using log(err) and log(complexity) makes far more sense. Can you
justify using them additively?

I might be more willing to believe in this stuff if it made some
logical sense to me, but maybe I am just strange.

🔗Paul Erlich <perlich@aya.yale.edu>

2/8/2004 12:50:19 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
> > Since there's a huge empty gap between complexity ~25+ and ~31, I
was
> > forced to look for a lower-complexity moat (probably a good thing
> > anyway). I'll upload a graph showing the temperaments indicated
by
> > their ranking according to error/8.125 + complexity/25, since I
saw a
> > reasonable linear moat where this measure equals 1. Twenty
> > temperaments make it in:
>
> Given that we normally relate error and complexity multiplicatively,

normally . . .

> I
> think using log(err) and log(complexity) makes far more sense.

I don't think they make more sense practically.

> Can you
> justify using them additively?

Yes, or else some small power of them. Dave and I discussed this in
depth. He initially proposed a*error^2 + b*complexity^2, partly
because the local minima of harmonic entropy are parabolic.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/8/2004 12:22:48 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> > I
> > think using log(err) and log(complexity) makes far more sense.
>
> I don't think they make more sense practically.

I think they probably will make more sense both practically and
theoretically, but you've been ignoring this issue. Are you going to
think about it, at least?

> > Can you
> > justify using them additively?
>
> Yes, or else some small power of them. Dave and I discussed this in
> depth. He initially proposed a*error^2 + b*complexity^2, partly
> because the local minima of harmonic entropy are parabolic.

None of this has convinced me in the least.

🔗Paul Erlich <perlich@aya.yale.edu>

2/8/2004 12:33:28 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> wrote:
>
> > > I
> > > think using log(err) and log(complexity) makes far more sense.
> >
> > I don't think they make more sense practically.
>
> I think they probably will make more sense both practically and
> theoretically,

As I see it, no way. Example: when you look at the graph with log
(err) as one of the axes, the indication is that JI is infinitely far
away. This is ridiculous. The JI line should be right there, with
some temperaments many times more distant from it than others.
Otherwise, you're operating in the realm of hopelessly impractical
abstraction.

> but you've been ignoring this issue. Are you going to
> think about it, at least?

Countless hours already spent thinking about it, and discussing it
here.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/8/2004 12:48:50 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
> wrote:
> > --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> wrote:
> >
> > > > I
> > > > think using log(err) and log(complexity) makes far more sense.
> > >
> > > I don't think they make more sense practically.
> >
> > I think they probably will make more sense both practically and
> > theoretically,
>
> As I see it, no way.

You've taken me seriously enough to at least look at it, or are you
just blowing the issue off?

> Example: when you look at the graph with log
> (err) as one of the axes, the indication is that JI is infinitely far
> away. This is ridiculous.

No it isn't. JI has *zero error*!

> The JI line should be right there, with
> some temperaments many times more distant from it than others.
> Otherwise, you're operating in the realm of hopelessly impractical
> abstraction.

It's what we've been doing, in effect, for the last few years, so this
argument makes no sense at all to me.

> > but you've been ignoring this issue. Are you going to
> > think about it, at least?
>
> Countless hours already spent thinking about it, and discussing it
> here.

The only one who seems to have thought about it is me. I've been
trying to get you to at least think about it, so far with no success.
Do you care about convincing the rest of us that what you are doing
makes a particle of sense, or is it going to be a committee of two?

🔗Paul Erlich <perlich@aya.yale.edu>

2/8/2004 1:36:15 PM

> A bit more concavity still and we include
>
> 45. Blackwood

Following what Dave did for the 5-limit and ET cases, I found that an
exponent of 2/3 produces the desired moat, for example when

err^(2/3)/6.3+complexity^(2/3)/9.35 < 1.

Please look at the resulting graph:

/tuning-math/files/Erlich/7lin23.gif

The temperaments in this graph are identified by their ranking
according to the badness measure implied above:

1. Huygens meantone
2. Pajara
3. Magic
4. Semisixths
5. Dominant Seventh
6. Tripletone
7. Negri
8. Hemifourths
9. Kleismic/Hanson
10. Superpythagorean
11. Injera
12. Miracle
13. Biporky
14. Orwell
15. Diminished
16. Schismic
17. Augmented
18. 1/12 oct. period, 25 cent generator (we discussed this years ago)
19. Flattone
20. Blackwood
21. Supermajor seconds
22. Nonkleismic
23. Porcupine

Here is the data for the first three wedgie entries and implied badness,
for the implied top 32:

1 4 10 0.68784
2 -4 -4 0.78033
5 1 12 0.78742
7 9 13 0.78759
1 4 -2 0.82001
3 0 -6 0.83995
4 -3 2 0.86556
2 8 1 0.87254
6 5 3 0.87815
1 9 -2 0.88068
2 8 8 0.89638
6 -7 -2 0.90191
6 10 10 0.9041
7 -3 8 0.91204
4 4 4 0.91347
1 -8 -14 0.91872
3 0 6 0.93351
0 0 12 0.93521
1 4 -9 0.93554
0 5 0 0.9488
3 12 -1 0.9593
10 9 7 0.95971
3 5 -6 0.97257
9 5 -3 1.0207
8 6 6 1.0259
6 -2 -2 1.0335
6 5 22 1.0337
3 12 11 1.0342
2 -9 -4 1.0395
11 13 17 1.0435
6 10 3 1.0498
4 2 2 1.0499
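
For reference, a minimal sketch of how a ranking of this kind can be
computed (using the err^(2/3)/6.3 + complexity^(2/3)/9.35 measure from
above; the exact error and complexity inputs behind the table are not
reproduced here, so the example value is illustrative only):

    def moat_badness(error, complexity, p=2/3):
        # 2/3-exponent moat measure; below 1 means inside the moat
        return error**p / 6.3 + complexity**p / 9.35

    # e.g. meantone, using Gene's error/complexity columns
    print(moat_badness(1.698521, 11.765178))  # ~0.78 with these inputs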

🔗Paul Erlich <perlich@aya.yale.edu>

2/8/2004 1:44:38 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
> > --- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<gwsmith@s...>
> > wrote:
> > > --- In tuning-math@yahoogroups.com, "Paul Erlich"
<perlich@a...>
> > wrote:
> > >
> > > > > I
> > > > > think using log(err) and log(complexity) makes far more
sense.
> > > >
> > > > I don't think they make more sense practically.
> > >
> > > I think they probably will make more sense both practically and
> > > theoretically,
> >
> > As I see it, no way.
>
> You've taken me seriously enough to at least looked at it, or are
you
> just blowing the issue off?

What, you didn't believe me when I said "countless hours"?

> > Example: when you look at the graph with log
> > (err) as one of the axes, the indication is that JI is infinitely far
> > away. This is ridiculous.
>
> No it isn't. JI has *zero error*!

Yes, that's very different from "minus infinity error"!

> > The JI line should be right there, with
> > some temperaments many times more distant from it than others.
> > Otherwise, you're operating in the realm of hopelessly impractical
> > abstraction.
>
> It's what we've been doing, in effect, for the last few years, so this
> argument makes no sense at all to me.

The only criterion for making sense is agreeing with habit? Why don't
you actually take me seriously enough to at least look at it, instead
of blowing the issue off?

> > > but you've been ignoring this issue. Are you going to
> > > think about it, at least?
> >
> > Countless hours already spent thinking about it, and discussing it
> > here.
>
> The only one who seems to have thought about it is me.

Then you must not be reading our posts.

>I've been
> trying to get you to at least think about it, so far with no
>success.

What would count as "thinking about it" to you? Agreeing with you?

> Do you care about convincing the rest of us that what you are doing
> makes a particle of sense, or is it going to be a committee of two?

Not only have we tried to be completely explicit and impartial in our
logic, but you may note that Herman's guidelines had a very big
influence on us. How big is the committee that thinks log-flat has
any practical relevance?

🔗Gene Ward Smith <gwsmith@svpal.org>

2/8/2004 2:38:34 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> What, you didn't believe me when I said "countless hours"?

I'll believe it when I see the log-log plots I've been trying to get
you to do, with no success.

> > The only one who seems to have thought about it is me.
>
> Then you must not be reading our posts.

Which ones addressed this issue?

> >I've been
> > trying to get you to at least think about it, so far with no
> >success.
>
> What would count as "thinking about it" to you? Agreeing with you?

Some evidence you've actually considered it would be nice. A plot
would be grand. Some attempt to theoretically justify what you two are
doing would be appreciated.

🔗Paul Erlich <perlich@aya.yale.edu>

2/8/2004 2:46:38 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> wrote:
>
> > What, you didn't believe me when I said "countless hours"?
>
> I'll believe it when I see the log-log plots I've been trying to get
> you to do, with no success.

I've posted quite a few log-log plots, thanks very much. Seeing them
is what made me realize that they assume JI is infinitely far away,
and how absurd that is.

> > > The only one who seems to have thought about it is me.
> >
> > Then you must not be reading our posts.
>
> Which ones addressed this issue?

All the ones about 5-limit linear temperaments and about ETs in
various limits.

> > >I've been
> > > trying to get you to at least think about it, so far with no
> > >success.
> >
> > What would count as "thinking about it" to you? Agreeing with you?
>
> Some evidence you've actually considered it would be nice. A plot
> would be grand.

OK, what would you like me to plot that I haven't already plotted?
I've looked at every single plot I've posted in both log-log and
linear axes, just out of curiosity, but none of them change the fact
that considering JI to be infinitely far away is like agreeing with
Zeno that you can never traverse a room. Any measure of the "pain" of
error will not predict this infinite distance, if it has any
connection with the real world.

> Some attempt to theoretically justify what you two are
> doing would be appreciated.

Dave and I just recently shared some new theoretical insights, I
thought. Carl sort of got hooked in too, on the "rectangular badness"
issue. Why don't you take some time and re-read what you glossed
over -- I've glossed over reading things before, but I think I've
been pretty comprehensive since last month.

🔗Paul Erlich <perlich@aya.yale.edu>

2/8/2004 2:53:10 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:

> A plot
> would be grand.

Here's the log-log version of the most recent plot:

groups.yahoo.com/group/tuning_files/files/Erlich/7lin23loglog.gif

🔗Gene Ward Smith <gwsmith@svpal.org>

2/8/2004 2:54:36 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> I've posted quite a few log-log plots, thanks very much.

For instance? Don't assume I can see how you label the axes easily,
because I can't.

> Seeing them
> is what made me realize that they assume JI is infinitely far away,
> and how absurd that is.
>
> > > > The only one who seems to have thought about it is me.
> > >
> > > Then you must not be reading our posts.
> >
> > Which ones addressed this issue?
>
> All the ones about 5-limit linear temperaments and about ETs in
> various limits.

For instance? Can you show me specifically?

> > Some attempt to theoretically justify what you two are
> > doing would be appreciated.
>
> Dave and I just recently shared some new theoretical insights, I
> thought.

What insights? I've seen you say things which didn't register to me as
making sense, but I could be missing something. What, exactly, are you
up to?

> Carl sort of got hooked in too, on the "rectangular badness"
> issue. Why don't you take some time and re-read what you glossed
> over -- I've glossed over reading things before, but I think I've
> been pretty comprehensive since last month.

Can you say what, in particular, I should read, which has the theory
in it?

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/8/2004 4:40:53 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:

> Some evidence you've actually considered it would be nice. A plot
> would be grand. Some attempt to theoretically justify what you two are
> doing would be appreciated.

I'm not sure what "it" is that you think we haven't considered. If
it's log-flat badness then that seems to have been the only such
measure being considered on this list for the past several years,
despite the objections (from psychology) that I thought I spelled out
in great detail when it was first mooted.

And by "theoretically justify" do you mean justify purely from
mathematical considerations? I believe that to be futile. It
eventually needs to be grounded in human psychology, both perceptual
and cognitive.

I understand you're still in favour of log-flat cutoffs which can be
written in the form

log(err) + k * log(complexity) < x

Paul and I have been considering those of the form

err^p + k * comp^p < x

which can be made to look a lot like the previous one when 0<p<0.5.
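
(To spell out the resemblance: for small p, x^p is roughly 1 + p*log(x),
so

err^p + k * comp^p  ~=  (1 + k) + p * (log(err) + k * log(comp))

and the level curves of the left-hand side approach those of
log(err) + k * log(complexity) as p goes to 0.)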

Paul and I have not so much been trying to theoretically justify, but
rather empirically determine, appropriate values for p, admittedly
based on some pretty sketchy and anecdotal evidence. But that's all we
have.

By far the greatest body of evidence, about which temperaments people
consider musically interesting or useful, relates to equal
temperaments, particularly at the 5-limit.

And we find that what works best is a value of p that's slightly less
than one, i.e. the cutoff functions that we construct based on our
knowledge of which ETs have been popular historically, are somewhere
between log and linear, but much closer to linear.

Since you and Paul seem to have done a marvelous job of giving us
error and complexity measures that generalise from equal temps to
linear temps and beyond, then it seems likely that the general shape
of equal-interest contours we find for equal temps will be repeated
for higher dimensions. I suppose you could say this is the theoretical
part of the justification.

But rather than trying to come up with precise values for p and the
scaling constants for cutoffs, we are looking for what we call
"moats". These are places where moderate changes in these constants
will make no difference to which temperaments are included. They would
ideally look like a band of whitespace on the graph shaped like a pair
of back-to-back horns (something I hadn't realised before). In other
words it doesn't matter so much if a moat has a narrow waist. What is
most important is that it is wide near the axes.

But we can't just use any old moat. There are bound to be some very
wide moats that are unusable because they bear no resemblance to an
equal-interest cutoff.

The idea is that they should agree with the subjective cutoff
functions (implicit or otherwise) of as many different people as possible.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/8/2004 4:57:22 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
> > A bit more concavity still and we include
> >
> > 45. Blackwood
>
> Following what Dave did for the 5-limit and ET cases, I found that an
> exponent of 2/3 produces the desired moat, for example when
>
> err^(2/3)/6.3+complexity^(2/3)/9.35 < 1.

I prefer to put the scaling constants inside the exponentiation like this

(err/15.8)^(2/3) + (complexity/28.6)^(2/3) < 1.

Then you can see at a glance what maximum error and complexity are
allowed by this cutoff. For similar reasons I prefer to show the chart
with both axes starting from zero.
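
(The two forms describe the same cutoff, by the way: 15.8^(2/3) is about
6.29 and 28.6^(2/3) is about 9.36, i.e. the 6.3 and 9.35 in your version,
up to rounding.)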

> Please look at the resulting graph:
>
> /tuning-math/files/Erlich/7lin23.gif
>
> The temperaments in this graph are identified by their ranking
> according to the badness measure implied above:
>
> 1. Huygens meantone
> 2. Pajara
> 3. Magic
> 4. Semisixths
> 5. Dominant Seventh
> 6. Tripletone
> 7. Negri
> 8. Hemifourths
> 9. Kleismic/Hanson
> 10. Superpythagorean
> 11. Injera
> 12. Miracle
> 13. Biporky
> 14. Orwell
> 15. Diminished
> 16. Schismic
> 17. Augmented
> 18. 1/12 oct. period, 25 cent generator (we discussed this years ago)
> 19. Flattone
> 20. Blackwood
> 21. Supermajor seconds
> 22. Nonkleismic
> 23. Porcupine

This looks reasonable to me as a cutoff, although maybe still too
many, but making a badness measure out of it may be going too far.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/8/2004 5:01:30 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
wrote:
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
> wrote:
>
> > Some evidence you've actually considered it would be nice. A plot
> > would be grand. Some attempt to theoretically justify what you two
> > are doing would be appreciated.
>
> I'm not sure what "it" is that you think we haven't considered.

The "it" is using log(complexity) and log(error) in drawing moats,
and trying to make the moats involve straight lines if possible. Paul
thinks JI is a temperament too and should be on the chart somewhere,
which I think is an absurd thing to worry about. What's your view?

> And by "theoretically justify" do you mean justify purely from
> mathematical considerations?

I mean that combining error and complexity, and not their logs, seems
a little like adding the distance to the moon in inches to the gross
national product in dollars--where's the justification that we are
talking about something comparable?

> I understand you're still in favour of log-flat cutoffs which can be
> written in the form

> log(err) + k * log(complexity) < x

Those seem to me to make more sense logically.

> Paul and I have been considering those of the form
>
> err^p + k * comp^p < x
>
> which can be made to look a lot like the previous one when 0<p<0.5.

Which is what I thought. Why then the fanatical opposition to even
thinking about it?

> > And we find that what works best is a value of p that's slightly less
> than one, i.e. the cutoff functions that we construct based on our
> knowledge of which ETs have been popular historically, are somewhere
> between log and linear, but much closer to linear.

This is based on actually looking at loglog charts?

> But rather than trying to come up with precise values for p and the
> scaling constants for cutoffs, we are looking for what we call
> "moats". These are places where moderate changes in these constants
> will make no difference to which temperaments are included.

I'd like to put this moat business on a theoretical basis which makes
sense to me, and a good way to start would be shifting to loglog. I
really don't see why that idea is so horrible. Of course if we must
use curved lines the difference between the approaches is less
important, but first can we show it is somehow better to use curved
lines?

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/8/2004 5:09:10 PM

I wrote:
> This looks reasonable to me as a cutoff, although maybe still too
> many, ...

After more careful examination, I find this moat to be ideal. I can't
find one closer to the origin without leaving out temperaments I
really wouldn't want to leave out.

Well done.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/8/2004 6:07:04 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
> wrote:
> > --- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
> > wrote:
> >
> > > Some evidence you've actually considered it would be nice. A plot
> > > would be grand. Some attempt to theoretically justify what you two
> > > are doing would be appreciated.
> >
> > I'm not sure what "it" is that you think we haven't considered.
>
> The "it" is using log(complexity) and log(error) in drawing moats,
> and trying to make the moats involve straight lines if possible.

That would be nice, but I just don't think we can make a straight line
on a loglog chart agree with what we know about the historical (with
much weight on the last few decades) use of various ETs as 5 and
7-limit approximations.

But Paul, would you care to replot those ET plots as loglog and we'll
have a go.

I'm pretty sure we're gonna want to have almost quarter-elliptical
cutoffs on a loglog plot, so it's simpler to use linear plots in that
case.

> Paul
> thinks JI is a temperament too and should be on the chart somewhere,
> which I think is an absurd thing to worry about. What's your view?

I don't think that's quite what Paul means. It's not that we need to
see JI plotted, and certainly not because "it's a temperament too".

It's that when the error gets below say 0.3 of a cent it's as good as
JI in most circumstances (I'll never forget how your persistence paid
off in getting Johnny to agree to that :-) so you're not going to put
up with any significant amount of extra complexity just because some
temperament gets you to 0.03 cents or 0.003 cents. So any straight
line on a loglog plot will be too extreme in this regard.

> > And by "theoretically justify" do you mean justify purely from
> > mathematical considerations?
>
> I mean that combining error and complexity, and not their logs, seems
> a little like adding the distance to the moon in inches to the gross
> national product in dollars--where's the justification that we are
> talking about something comparable?

That's a good point. When we move to a psychological model we are
effectively considering a quantity that has been called "pain". And we
are considering how to convert error in cents to pain (in ouches?) and
complexity in notes to pain in ouches. Then we add pains to get the
total pain. And we assume there is a certain threshold of total pain
beyond which few musicians are willing to venture.
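
As a minimal sketch of that bookkeeping (the function shape and the
scale constants here are just placeholders, not values we have settled
on):

    def total_pain(error_cents, complexity_notes,
                   max_error=15.8, max_complexity=28.6, p=2/3):
        # convert error and complexity to "pain" and add the two pains
        return (error_cents / max_error)**p + (complexity_notes / max_complexity)**p

    def acceptable(error_cents, complexity_notes):
        # below the assumed threshold of total pain
        return total_pain(error_cents, complexity_notes) < 1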

Calculating pain as log of error or complexity just doesn't produce
cutoffs that agree with what we've seen on this list over the years.

> > I understand you're still in favour of log-flat cutoffs which can be
> > written in the form
>
> > log(err) + k * log(complexity) < x
>
> Those seem to me to make more sense logically.

But not psychologically.

> > Paul and I have been considering those of the form
> >
> > err^p + k * comp^p < x
> >
> > which can be made to look a lot like the previous one when 0<p<0.5.
>
> Which is what I thought. Why then the fanatical opposition to even
> thinking about it?

I'm sorry it came across that way. But the fact is we had already
thought about it and found it too extreme, not possible to match up
with the historical data (vague though that is). Sorry we didn't spell
that out.

> > And we find that what works best is a value of p that's slightly less
> > than one, i.e. the cutoff functions that we construct based on our
> > knowledge of which ETs have been popular historically, are somewhere
> > between log and linear, but much closer to linear.
>
> This is based on actually looking at loglog charts?

No. Paul started off giving those (or were they log-linear, I forget)
but it soon became apparent that the cutoffs we wanted (based,
admittedly on our somewhat intuitive distillation of what has been
talked about on the tuning lists for the last decade or so, and Herman
Miller's experiments) would be much less curved on a linear-linear plot.

> I'd like to put this moat business on a theoretical basis which makes
> sense to me, and a good way to start would be shifting to loglog. I
> really don't see why that idea is so horrible. Of course if we must
> use curved lines the difference between the approaches is less
> important, but first can we show it is somehow better to use curved
> lines?

I suspect the only thing that would convince you, and fair enough,
would be some kind of a survey of the tuning list. Perhaps we could
list a bunch of what we think are borderline useful 5-limit ETs and
ask people to say which are in and which are out based on considering
both the error and the complexity. Trouble is, to have an appreciation
of the complexity, it isn't enough to hear Pachelbel's Canon played in
each, you have to have considered composing or playing in the
temperament or building a fixed-pitch instrument for it or some such.

So there would be few people qualified to participate.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/8/2004 6:44:43 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
wrote:

> That would be nice, but I just don't think we can make a straight line
> on a loglog chart agree with what we know about the historical (with
> much weight on the last few decades) use of various ETs as 5 and
> 7-limit approximations.

To convince me of this, I'll need to see the plots.

> But Paul, would you care to replot those ET plots as loglog and we'll
> have a go.

That's log of complexity and error, not relative error, or
epimericity, or anything else. Stating exactly what is being plotted
would be nice.

> It's that when the error gets below say 0.3 of a cent it's as good as
> JI in most circumstances (I'll never forget how your persistence paid
> off in getting Johnny to agree to that :-) so you're not going to put
> up with any significant amount of extra complexity just because some
> temperament gets you to 0.03 cents or 0.003 cents. So any straight
> line on a loglog plot will be too extreme in this regard.

Your conclusion doesn't follow from your premises, but it is likely
that a straight line would include micros as well as macros. I don't
see why that is bad; I want a few of them included.

> Calculating pain as log of error or complexity just doesn't produce
> cutoffs that agree with what we've seen on this list over the years.

Show me.

The nice thing about log(error) vs log(complexity) in my mind is that
we know they are related.

> I'm sorry it came across that way. But the fact is we had already
> thought about it and found it too extreme, not possible to match up
> > with the historical data (vague though that is). Sorry we didn't spell
> > that out.

It would be nice if some attempt was made to bring the rest of us on
board. I don't know what Carl or Graham think, but I have not been
convinced.

> > I'd like to put this moat business on a theoretical basis which makes
> > sense to me, and a good way to start would be shifting to loglog. I
> > really don't see why that idea is so horrible. Of course if we must
> > use curved lines the difference between the approaches is less
> > important, but first can we show it is somehow better to use curved
> > lines?
>
> I suspect the only thing that would convince you, and fair enough,
> would be some kind of a survey of the tuning list.

Seeing the plots would be nice. It remains vaporware to me, even if
Paul has them, if they aren't made available.

> > Perhaps we could
> > list a bunch of what we think are borderline useful 5-limit ETs and
> > ask people to say which are in and which are out based on considering
> > both the error and the complexity.

Well, hey, what do you think I keep pestering people about, and why?
I've been trying to get a data set.

> Trouble is, to have an appreciation
> of the complexity, it isn't enough to hear Pachelbel's Canon played in
> each, you have to have considered composing or playing in the
> temperament or building a fixed-pitch instrument for it or some such.

High complexity really isn't such a big deal for some uses. JI can be
said to have infinite complexity in a sense (no amount of fifths and
octaves will net you a pure major third, etc) which I think shows
Paul's worry about where it is on the graph is absurd, and also shows
high complexity is not something we must necessarily be concerned
about. The question for some uses simply is, are we getting anything
out of this approximation?

> So there would be few people qualified to participate.

I'm working away at it.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/8/2004 7:42:16 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
> wrote:
> > It's that when the error gets below say 0.3 of a cent it's as good as
> > JI in most circumstances (I'll never forget how your persistence paid
> > off in getting Johnny to agree to that :-) so you're not going to put
> > up with any significant amount of extra complexity just because some
> > temperament gets you to 0.03 cents or 0.003 cents. So any straight
> > line on a loglog plot will be too extreme in this regard.
>
> Your conclusion doesn't follow from your premises, but it is likely
> that a straight line would include micros as well as macros. I don't
> see why that is bad; I want a few of them included.

Yeah but it may well include femtos as well.

Suppose for the sake of argument that 0.3 c is as good as JI, and
you have a 0.3 c temperament that is just barely acceptable because of
its complexity. Then all the pain is caused by the complexity, not the
error. Therefore a further reduction in error from 0.3 c to 0.03 c is
not going to reduce the pain and therefore you will not be able to
accept any more complexity.

> > Calculating pain as log of error or complexity just doesn't produce
> > cutoffs that agree with what we've seen on this list over the years.
>
> Show me.

I'm hoping Paul can easily replot those ET plots loglog. I don't have
the data and am not confident I could calculate it.

>
> The nice thing about log(error) vs log(complexity) in my mind is that
> we know they are related.

But to the musician they are experienced as two very different things.

> > I'm sorry it came across that way. But the fact is we had already
> > thought about it and found it too extreme, not possible to match up
> > with the historical data (vague though that is). Sorry we didn't spell
> > that out.
>
> It would be nice if some attempt was made to bring the rest of us on
> board. I don't know what Carl or Graham think, but I have not been
> convinced.

At one stage Carl gave some good arguments why the cutoff might be as
far from loglog as

err^2 + k * comp^2 < x

And I went along with this until I saw the ET plots. Perhaps this
still makes sense as a badness measure for ranking temperaments, but
not as a cutoff for what to include in an article. But I'm not even
sure if that's a coherent suggestion.

> > I suspect the only thing that would convince you, and fair enough,
> > would be some kind of a survey of the tuning list.
>
> Seeing the plots would be nice. It remains vaporware to me, even if
> Paul has them, if they aren't made available.
>
> > Perhaps we could
> > list a bunch of what we think are borderline useful 5-limit ETs and
> > ask people to say which are in and which are out based on considering
> > both the error and the complexity.
>
> Well, hey, what do you think I keep pestering people about, and why?
> I've been trying to get a data set.

Right. Well maybe you could put together something more formal that
sets out everything we know that's relevant about all the borderline
5-limit ETs along with something to listen to and then have a web form
where we just click yes, no, or don't know for each.

But again, I don't know how to ensure people have thought about how
painful the complexity might be, and aren't just responding to the
error alone.

Hmm. Now that I think about it, we seem to disagree most about the
temperaments near the axes. So what we most need to agree on is how
much error is acceptable and how much complexity, independently of
each other. That would go a long way to nailing things down.

So we could have separate surveys for error and complexity, for starters.

> High complexity really isn't such a big deal for some uses. JI can be
> said to have infinite complexity in a sense (no amount of fifths and
> octaves will net you a pure major third, etc) which I think shows
> Paul's worry about where it is on the graph is absurd, and also shows
> high complexity is not something we must necessarily be concerned
> about. The question for some uses simply is, are we getting anything
> out of this approximation?

Yes. That's a good point (about e.g. 5-limit JI having infinite
complexity as a linear temperament), but obviously there's another
point of view available where 5-limit JI has finite complexity as a
planar temperament.

Psychologically it would seem that there is some point in the
complexity of low-error 5-limit linear temperaments where one would
rather have the planar complexity of 5-limit JI than bother with the
linear complexity of a temperament. I suggest that occurs somewhere
between the complexities of schismic and the least complex temperament
with error less than schismic.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/8/2004 7:56:58 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
wrote:

> Hmm. Now that I think about it. We seem to disagree most about the
> temperaments near the axes. So what we most need to agree on is how
> much error is acceptable and how much complexity, independently of
> each other. That would go a long way to nailing things down.

Those are the dreaded error and complexity bounds.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/8/2004 8:22:51 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
> wrote:
>
> > Hmm. Now that I think about it. We seem to disagree most about the
> > temperaments near the axes. So what we most need to agree on is how
> > much error is acceptable and how much complexity, independently of
> > each other. That would go a long way to nailing things down.
>
> Those are the dreaded error and complexity bounds.

Yes. But with a different purpose in mind. They are only intended to
form part of the boundary as _points_at_the_axes_.

My objection was not to limits on them per se, but to acceptance
regions shaped like this (on a log-log plot).

err
|
|      (a)
|-------\
|        \
|         \
|          \ (b)
|           |
|           |
--------------- comp

as opposed to a smooth curve that rounds off those corners marked (a)
and (b).

If you have those corners (a) and (b) you then have to explain what is
special about, not only the max complexity and max error, but also the
complexity at (a) and the error at (b).

It turns out that the simplest way to round off those corners is to do
the following on a linear-linear plot.

err
|
|
|\
| \
|  \
|   \
|    \
------------ comp
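
A rough Python sketch of the two boundary shapes just drawn: the log-log
region with an error cap, a diagonal badness line between the corners (a)
and (b), and a complexity cap, versus the single straight line on a
linear-linear plot. All constants are placeholders (nothing here was
proposed in the thread), and the badness exponent of 2 is only illustrative.

def boundary_with_corners(comp, err_max=50.0, comp_max=20.0, badness_max=500.0):
    """Max allowed error at a given complexity for the cornered region:
    error cap down to corner (a), then the badness line err * comp**2 =
    badness_max (a straight line of slope -2 on a log-log plot), then
    nothing at all past the complexity cap at corner (b)."""
    if comp >= comp_max:
        return 0.0
    return min(err_max, badness_max / comp ** 2)

def boundary_straight_line(comp, k1=50.0, k2=20.0):
    """Max allowed error at a given complexity for the single line
    err/k1 + comp/k2 < 1 on a linear-linear plot."""
    return max(0.0, k1 * (1.0 - comp / k2))

for comp in (1.0, 3.0, 5.0, 10.0, 15.0, 19.0):
    print(f"comp={comp:4.1f}  cornered: {boundary_with_corners(comp):5.1f}"
          f"  straight line: {boundary_straight_line(comp):5.1f}")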

🔗Carl Lumma <ekin@lumma.org>

2/8/2004 10:42:43 PM

>> I'm sorry it came across that way. But the fact is we had already
>> thought about it and found it too extreme, not possible to match up
>> with the historical data (vague though that is). Sorry we didn't
>> spell that out.
>
>It would be nice if some attempt was made to bring the rest of us on
>board. I don't know what Carl or Graham think, but I have not been
>convinced.

My latest position is that I can live with log-flat badness with
appropriate cutoffs. The problem with anything more tricky is that
we have no data. Not vague historical data, actually no data. By
putting all this energy into the list of temperaments, we're losing
touch with reality. Rather than worry about what is and isn't on
the list, I'd like to figure out why Paul's creepy complexity gives
the numbers it does. But as long as Dave and Paul were having fun I
didn't want to say anything. They have a way of coming up with neat
stuff, though so far their conversation has been impenetrable to me.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/8/2004 10:48:19 PM

>At one stage Carl gave some good arguments why the cutoff might be
>as far from loglog as
>
>err^2 + k * comp^2 < x

Yes, I think I did say that, in multiplicative form.

>And I went along with this until I saw the ET plots.

Ok, can you recommend a plot to look at, and what you saw that
changed your mind? None of the plots I've seen have been labeled
nor made any sense to me.

>Perhaps this
>still makes sense as a badness measure for ranking temperaments, but
>not as a cutoff for what to include in an article. But I'm not even
>sure if that's a coherent suggestion.

Which suggestion?

-Carl

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/8/2004 11:52:53 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >At one stage Carl gave some good arguments why the cutoff might be
> >as far from loglog as
> >
> >err^2 + k * comp^2 < x
>
> Yes, I think I did say that, in multiplicitive form.
>
> >And I went along with this until I saw the ET plots.
>
> Ok, can you recommend a plot to look at, and what you saw that
> changed your mind? None of the plots I've seen have been labeled
> nor made any sense to me.

Those Paul gave in

/tuning-math/message/9202

Particularly the 5-limit one, which I assume most people have the
greatest feel for.

Complexity is horizontal, error is vertical, labels are the notes per
octave of the ET.

> >Perhaps this
> >still makes sense as a badness measure for ranking temperaments, but
> >not as a cutoff for what to include in an article. But I'm not even
> >sure if that's a coherent suggestion.
>
> Which suggestion?

That something might make a good badness measure for ranking temps but
not be good for determining a cutoff. I'd like to retract that now.

🔗Carl Lumma <ekin@lumma.org>

2/8/2004 11:56:24 PM

>> >At one stage Carl gave some good arguments why the cutoff might be
>> >as far from loglog as
>> >
>> >err^2 + k * comp^2 < x
>>
>> Yes, I think I did say that, in multiplicitive form.
>>
>> >And I went along with this until I saw the ET plots.
>>
>> Ok, can you recommend a plot to look at, and what you saw that
>> changed your mind? None of the plots I've seen have been labeled
>> nor made any sense to me.
>
>Those Paul gave in
>
>/tuning-math/message/9202
>
>Particularly the 5-limit one, which I assume most people have the
>greatest feel for.
>
>Complexity is horizontal, error is vertical,

Aha.

>labels are the notes per octave of the ET.

How can error be in notes?

>> >Perhaps this
>> >still makes sense as a badness measure for ranking temperaments, but
>> >not as a cutoff for what to include in an article. But I'm not even
>> >sure if that's a coherent suggestion.
>>
>> Which suggestion?
>
>That something might make a good badness measure for ranking temps but
>not be good for determining a cutoff. I'd like to retract that now.

Ok.

-Carl

🔗Graham Breed <graham@microtonal.co.uk>

2/9/2004 12:49:54 PM

Dave Keenan wrote:

> Psychologically it would seem that there is some point in the
> complexity of low-error 5-limit linear temperaments where one would
> rather have the planar complexity of 5-limit JI than bother with the
> linear complexity of a temperament. I suggest that occurs somewhere
> between the complexities of schismic and the least complex temperament
> with error less than schismic.

which is:

5/31, 193.2 cent generator

basis:
(1.0, 0.16099797612742392)

mapping by period and generator:
[(1, 0), (4, -15), (2, 2)]

mapping by steps:
[(25, 6), (40, 9), (58, 14)]

highest interval width: 17
complexity measure: 17 (19 for smallest MOS)
highest error: 0.000068 (0.081 cents)
unique

Graham
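
A quick Python sketch (my arithmetic, not Graham's script) that checks the
figures above, assuming the basis is (period, generator) in octaves and the
mapping rows give the periods and generators needed to reach primes 2, 3
and 5. It reproduces the reported worst error of about 0.081 cents and the
highest interval width of 17.

from math import log2

period, generator = 1.0, 0.16099797612742392   # basis, taken to be in octaves
mapping = [(1, 0), (4, -15), (2, 2)]            # periods and generators to primes 2, 3, 5
primes = [2, 3, 5]

for prime, (periods, gens) in zip(primes, mapping):
    tempered = (periods * period + gens * generator) * 1200.0   # cents
    just = log2(prime) * 1200.0
    print(f"prime {prime}: tempered {tempered:9.3f}c  just {just:9.3f}c"
          f"  error {tempered - just:+6.3f}c")

# Highest interval width: the widest spread in generator counts among the
# mapped primes (here 2 - (-15) = 17, as reported).
gen_counts = [g for _, g in mapping]
print("highest interval width:", max(gen_counts) - min(gen_counts))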

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/9/2004 1:41:58 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >Complexity is horizontal, error is vertical,
>
> Aha.
>
> >labels are the notes per octave of the ET.
>
> How can error be in notes?

Sorry. I was referring to the labels on the points. i.e. each point is
labelled with the n of the n-tET that it is.

The error is minimax error in cents where the weighting is log_2(n*d)
for the ratio n/d in lowest terms.

The complexity I'm not sure about. Paul?

But the point is, if you believe these are good error and complexity
measures, to try to draw a simple curve that cuts off those ETs that
have historically been used or recommended as approximations of
5-limit JI (you are allowed to exclude from your curve if you wish,
some that may only have been used because they were multiples of 12).

Or alternatively, those you would include in a catalog of "useful
5-limit ETs" or an article about the same.
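
For anyone wanting to reproduce the error figure being described, here is a
rough Python sketch of one way to compute it. It already uses the correction
that comes further down the thread (the weights are 1/log2(n*d), i.e. errors
are divided by log2(n*d)), takes the standard val for each n-ET, and lets the
step size stretch slightly so the worst weighted error is minimized; for this
weighting it suffices to check the primes. The printed values are only meant
to be in the right ballpark, not authoritative.

from math import log2

def weighted_minimax_error_cents(n, primes=(2, 3, 5)):
    """Worst error divided by log2(n*d), in cents, for the standard n-ET val,
    with the step size free to stretch a little (so octaves are tempered too)."""
    val = [round(n * log2(p)) for p in primes]   # steps to each prime in the standard val
    logs = [log2(p) for p in primes]

    def worst(step):                             # step size in octaves
        return max(abs(step * v - l) / l for v, l in zip(val, logs))

    lo, hi = 0.9 / n, 1.1 / n                    # bracket the optimal step size
    for _ in range(100):                         # simple ternary search (worst() is convex)
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if worst(m1) < worst(m2):
            hi = m2
        else:
            lo = m1
    return worst((lo + hi) / 2) * 1200.0

for n in (12, 19, 31, 53):
    print(n, round(weighted_minimax_error_cents(n), 3))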

🔗Carl Lumma <ekin@lumma.org>

2/9/2004 2:42:48 PM

>>> Complexity is horizontal, error is vertical,
>>
>> Aha.
>>
>>> labels are the notes per octave of the ET.
>>
>> How can error be in notes?
>
> Sorry. I was referring to the labels on the points. i.e. each point
> is labelled with the n of the n-tET that it is.
>
>The error is minimax error in cents where the weighting is log_2(n*d)
>for the ratio n/d in lowest terms.

Ok, thanks.

>But the point is, if you believe these are good error and complexity
>measures, to try to draw a simple curve that cuts off those ETs that
>have historically been used or recommended as approximations of
>5-limit JI (you are allowed to exclude from your curve if you wish,
>some that may only have been used because they were multiples of 12).

I don't place much stock in this sort of game. I have no idea what
ETs I'd include or exclude.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/9/2004 3:16:01 PM

>> ... the dreaded error and complexity bounds.
>
>My objection was not to limits on them per se, but to acceptance
>regions shaped like this (on a log-log plot).
>
>err
>|
>|   (a)
>|---\
>|    \
>|     \
>|      \ (b)
>|       |
>|       |
>------------ comp
>
>as opposed to a smooth curve that rounds off those corners marked (a)
>and (b).

Aha, now I understand your objection. But wait, what's stopping
this from being a rectangle? Is the badness bound giving the
line AB? If so, it looks like a badness cutoff alone would give a
finite region...

>It turns out that the simplest way to round off those corners is to
>do the following on a linear-linear plot.
>
>err
>|
>|
>|\
>| \
>|  \
>|   \
>|    \
>------------ comp

Why not this on a loglog plot?

-C.

🔗Paul Erlich <perlich@aya.yale.edu>

2/9/2004 3:30:37 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
> > > A bit more concavity still and we include
> > >
> > > 45. Blackwood
> >
> > Following what Dave did for the 5-limit and ET cases, I found that an
> > exponent of 2/3 produces the desired moat, for example when
> >
> > err^(2/3)/6.3+complexity^(2/3)/9.35 < 1.
>
> I prefer to put the scaling constants inside the exponentiation
like this
>
> (err/15.8)^(2/3) + (complexity/28.6)^(2/3) < 1.
>
> Then you can see at a glance what maximum error and complexity are
> allowed by this cutoff. For similar reasons I prefer to show the chart
> with both axes starting from zero.
>
> > Please look at the resulting graph:
> >
> > /tuning-math/files/Erlich/7lin23.gif
> >
> > The temperaments in this graph are identified by their ranking
> > according to the badness measure implied above:
> >
> > 1. Huygens meantone
> > 2. Pajara
> > 3. Magic
> > 4. Semisixths
> > 5. Dominant Seventh
> > 6. Tripletone
> > 7. Negri
> > 8. Hemifourths
> > 9. Kleismic/Hanson
> > 10. Superpythagorean
> > 11. Injera
> > 12. Miracle
> > 13. Biporky
> > 14. Orwell
> > 15. Diminished
> > 16. Schismic
> > 17. Augmented
> > 18. 1/12 oct. period, 25 cent generator (we discussed this years ago)
> > 19. Flattone
> > 20. Blackwood
> > 21. Supermajor seconds
> > 22. Nonkleismic
> > 23. Porcupine
>
> This looks reasonable to me as a cutoff, although maybe still too
> many, but making a badness measure out of it may be going too far.

The badness measure is only "implied" by the above; as you know, I
don't favor using actual badness measures.
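
A minimal Python sketch of the cutoff quoted above, using the constants
exactly as given in the exchange. It also shows that Paul's form and Dave's
rescaled form trace the same curve, since 15.8^(2/3) is about 6.3 and
28.6^(2/3) is about 9.35. The test points are made up for illustration,
not actual temperaments.

def inside_moat_paul(err, comp):
    """Paul's form: err^(2/3)/6.3 + complexity^(2/3)/9.35 < 1."""
    return err ** (2 / 3) / 6.3 + comp ** (2 / 3) / 9.35 < 1.0

def inside_moat_dave(err, comp):
    """Dave's rescaling of the same curve: (err/15.8)^(2/3) + (comp/28.6)^(2/3) < 1."""
    return (err / 15.8) ** (2 / 3) + (comp / 28.6) ** (2 / 3) < 1.0

# Made-up (error-in-cents, complexity) points, just to exercise the test.
for err, comp in [(2.0, 10.0), (5.0, 20.0), (1.0, 27.0), (15.0, 3.0)]:
    print(err, comp, inside_moat_paul(err, comp), inside_moat_dave(err, comp))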

🔗Paul Erlich <perlich@aya.yale.edu>

2/9/2004 3:34:44 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:

> >Paul
> >thinks JI is a temperament too

riiight . . .

> and should be on the chart somewhere,

Of course not -- it has infinite complexity according to any of the
charts, since it exceeds the dimension being considered.

> > And by "theoretically justify" do you mean justify purely from
> > mathematical considerations?
>
> I mean that combining error and complexity, and not their logs, seems
> a little like adding the distance to the moon in inches to the gross
> national product in dollars--where's the justification that we are
> talking about something comparable?

What do I get if I add the log of the distance to the moon in inches
to the log of the gross national product in dollars??

> Which is what I thought. Why then the fanatical opposition to even
> thinking about it?

I've been thinking about it and only it for years. The only
fanaticism I've seen is the opposition to thinking about something
different (and far more practical).

🔗Paul Erlich <perlich@aya.yale.edu>

2/9/2004 3:35:23 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
wrote:
> I wrote:
> > This looks reasonable to me as a cutoff, although maybe still too
> > many, ...
>
> After more careful examination, I find this moat to be ideal. I
can't
> find one closer to the origin without leaving out temperaments I
> really wouldn't want to leave out.
>
> Well done.

Thanks Dave!

🔗Paul Erlich <perlich@aya.yale.edu>

2/9/2004 3:41:36 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
wrote:
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
> wrote:
> > --- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
> > wrote:
> > > --- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<gwsmith@s...>
> > > wrote:
> > >
> > > > Some evidence you've actually considered it would be nice. A
plot
> > > > would be grand. Some attempt to theoretically justify what
you
> > two are
> > > > doing would be appreciated.
> > >
> > > I'm not sure what "it" is that you think we haven't considered.
> >
> > The "it" is using log(complexity) and log(error) in drawing
moats,
> > and trying to make the moats involve straight lines if possible.
>
> That would be nice, but I just don't think we can make a straight
line
> on a loglog chart agree with what we know about the historical (with
> much weight on the last few decades) use of various ETs as 5 and
> 7-limit approximations.

The problem is that a straight line on the loglog chart will *never*
cross the zero-error line! Never!!

> But Paul, would you care to replot those ET plots as loglog and
we'll
> have a go.

Yes, when I have time.

> I'm pretty sure we're gonna want to have almost quarter-elliptical
> cutoffs on a loglog plot,

You're forgetting that the zero-error line is infinitely far away on
the loglog plots.

> > > And by "theoretically justify" do you mean justify purely from
> > > mathematical considerations?
> >
> > I mean that combining error and complexity, and not their logs, seems
> > a little like adding the distance to the moon in inches to the gross
> > national product in dollars--where's the justification that we are
> > talking about something comparable?
>
> That's a good point. When we move to a psychological model we are
> effectively considering a quantity that has been called "pain". And
we
> are considering how to convert error in cents to pain (in ouches?)
and
> complexity in notes to pain in ouches. Then we add pains to get the
> total pain. And we assume there is a certain threshold of total pain
> beyond which few musicians are willing to venture.
>
> Calculating pain as log of error or complexity just doesn't produce
> cutoffs

Or pain values!

>that agree with what we've seen on this list over the years.

> > > And we find that what works best is a value of p that's slightly
> > > less than one, i.e. the cutoff functions that we construct based on
> > > our knowledge of which ETs have been popular historically, are
> > > somewhere between log and linear, but much closer to linear.
> >
> > This is based on actually looking at loglog charts?
>
> No. Paul started off giving those (or were they log-linear, I forget)

Oh crap, I must apologize to Gene. They were log-linear, weren't
they? SO SORRY, GENE!

🔗Paul Erlich <perlich@aya.yale.edu>

2/9/2004 3:45:38 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:

> The nice thing about log(error) vs log(complexity) in my mind is
that
> we know they are related.

Related how? Via log-flat badness? Meanwhile, error and complexity
are related?

> Seeing the plots would be nice. It remains vaporware to me, even if
> Paul has them, if they aren't made available.

OK, I'll do more of them when I have a chance, but ultimately, I
don't think I want to force any musician to think about what log
(error) means or what log(complexity) means.

> High complexity really isn't such a big deal for some uses. JI can
be
> said to have infinite complexity in a sense (no amount of fifths
and
> octaves will net you a pure major third, etc) which I think shows
> Paul's worry about where it is on the graph is absurd,

No, it shows the bullshit you're putting into my mouth is absurd, as
I agreed in a recent post.

🔗Paul Erlich <perlich@aya.yale.edu>

2/9/2004 3:49:59 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
wrote:

> Yes. That's a good point (about e.g. 5-limit JI having infinite
> complexity as a linear temperament), but obviously there's another
> point of view available where 5-limit JI has finite complexity as a
> planar temperament.

But there are no other 5-limit planar temperaments to compare it to,
so this is irrelevant.

> Psychologically it would seem that there is some point in the
> complexity of low-error 5-limit linear temperaments where one would
> rather have the planar complexity of 5-limit JI than bother with the
> linear complexity of a temperament.

Rather, it seems to me, one wouldn't care. Certainly the linear
temperament can never become *more* complex than a planar -- it can
merely become *equally* complex for all intents and purposes.

> I suggest that occurs somewhere
> between the complexities of schismic and the least complex
temperament
> with error less than schismic.

I suggest much more complex temperaments belong in a math paper, not
a music paper.

🔗Paul Erlich <perlich@aya.yale.edu>

2/9/2004 3:56:53 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
wrote:

> I'm hoping paul can easily replot those ET plots loglog.

When I do so, at least keep in mind that rather than log(complexity),
2^complexity has actually been proposed as a criterion (i.e., by
Fokker), and that error^2, at least, has gotten much attention as a
measure of pain, while log(error) has gotten none.

🔗Paul Erlich <perlich@aya.yale.edu>

2/9/2004 4:01:26 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> I'm sorry it came across that way. But the fact is we had already
> >> thought about it and found it too extreme, not possible to match
up
> >> with the historical data (vague though that is). Sorry we didn't
> >> spell that out.
> >
> >It would be nice if some attempt was made to bring the rest of us
on
> >board. I don't know what Carl or Graham think, but I have not been
> >convinced.
>
> My latest position is that I can live with log-flat badness with
> appropriate cutoffs. The problem with anything more tricky

More tricky? Log-flat is tricky enough to be interesting for
mathematicians and mind-boggling for musicians.

> is that
> we have no data. Not vague historical data, actually no data.

Less data than in the log-flat case?

> By
> putting all this energy into the list of temperaments, we're loosing
> touch with reality. Rather than worry about what is and isn't on
> the list, I'd like to figure out why Paul's creepy complexity gives
> the numbers it does.

Seems to be a creepy coincidence, since it's an affine-geometrical
measure of area in the Tenney lattice, not something with units of
number of notes. But I'm not surprised that it gives more "notes" for
more complex temperaments, and fewer for less complex temperaments. ;)

🔗Paul Erlich <perlich@aya.yale.edu>

2/9/2004 4:09:02 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
wrote:
> --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> > >Complexity is horizontal, error is vertical,
> >
> > Aha.
> >
> > >labels are the notes per octave of the ET.
> >
> > How can error be in notes?
>
> Sorry. I was referring to the labels on the points. i.e. each point
is
> labelled with the n of the n-tET that it is.
>
> The error is minimax error in cents where the weighting is log_2
(n*d)
> for the ratio n/d in lowest terms.
>
> The complexity I'm not sure about. Paul?

This was explained in the post itself, though it's obviously giving
something extremely close to the number of notes per octave.

🔗Paul Erlich <perlich@aya.yale.edu>

2/9/2004 4:10:19 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> ... the dreaded error and complexity bounds.
> >
> >My objection was not to limits on them per se, but to acceptance
> >regions shaped like this (on a log-log plot).
> >
> >err
> >|
> >| (a)
> >|---\
> >| \
> >| \
> >| \ (b)
> >| |
> >| |
> >------------ comp
> >
> >as opposed to a smooth curve that rounds off those corners marked
(a)
> >and (b).
>
> Aha, now I understand your objection. But wait, what's stopping
> this from being a rectangle? Is the badness bound giving the
> line AB?

Yes.

> If so, it looks like a badness cutoff alone would give a
> finite region...

No, because the zero-error line is infinitely far away on a loglog
plot.

> >It turns out that the simplest way to round off those corners is to
> >do the following on a linear-linear plot.
> >
> >err
> >|
> >|
> >|\
> >| \
> >| \
> >| \
> >| \
> >------------ comp
>
> Why not this on a loglog plot?

Same reason as above.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/9/2004 4:21:54 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
wrote:

> The error is minimax error in cents where the weighting is log_2
(n*d)
> for the ratio n/d in lowest terms.

What in the world does this mean? Do you mean TOP error for an equal
temperament, which is dual to the above?

🔗Gene Ward Smith <gwsmith@svpal.org>

2/9/2004 4:32:56 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> Aha, now I understand your objection. But wait, what's stopping
> this from being a rectangle? Is the badness bound giving the
> line AB? If so, it looks like a badness cutoff alone would give a
> finite region...

You don't need a finite region, just a finite number of temperaments
below the badness line. This is easily accomplished in loglog as
well; the difficulty is that people are likely to be unhappy with the
fact that both very high error, low complexity temperaments and very
low error, high complexity temperaments are likely to be included.
Since these are evil, and simply excluding them on the grounds we
don't want them is also evil, we are left with trying to cook up some
scheme which doesn't look as if we are simply cooking up some scheme
to get rid of them. If this isn't basically just a shell game, I
think the thing should be defined in a way where the definition gives
us the list, and not the list the definition. Some kind of cluster
analysis or something.

🔗Carl Lumma <ekin@lumma.org>

2/9/2004 4:35:58 PM

>> is that
>> we have no data. Not vague historical data, actually no data.
>
>Less data than in the log-flat case?

log-flat is natural, in a way. And it should be one of the easier
concepts around here to explain to musicians. So far, you and Dave
have not done any kind of job explaining "moats", or why we should
want to add instead of multiply to get badness.

I notice that you're now saying that you don't want to use badness
at all, which is what I said would be the logical extreme of a
suggestion you made, and you argued against it!!

>> By putting all this energy into the list of temperaments, we're
>> loosing touch with reality. Rather than worry about what is and
>> isn't on the list, I'd like to figure out why Paul's creepy
>> complexity gives the numbers it does.
>
>Seems to be a creepy coincidence, since it's an affine-geometrical
>measure of area in the Tenney lattice, not something with units of
>number of notes. But I'm not surprised that it gives more "notes"
>for more complex temperaments, and fewer for less complex
>temperaments. ;)

I mean, it seems to favor DE scales, but which ones and why?

-C.

🔗Carl Lumma <ekin@lumma.org>

2/9/2004 4:43:33 PM

>>>My objection was not to limits on them per se, but to acceptance
>>>regions shaped like this (on a log-log plot).
>>>
>>>err
>>>|
>>>| (a)
>>>|---\
>>>| \
>>>| \
>>>| \ (b)
>>>| |
>>>| |
>>>------------ comp
>>>
>>>as opposed to a smooth curve that rounds off those corners marked
>>>(a) and (b).
>>
>>Aha, now I understand your objection. But wait, what's stopping
>>this from being a rectangle? Is the badness bound giving the
>>line AB?
>
>Yes.
>
>>If so, it looks like a badness cutoff alone would give a
>>finite region...
>
>No, because the zero-error line is infinitely far away on a loglog
>plot.

Can you illustrate this? It looks like the zero-error line is
three dashes away on the above loglog plot. :)

>>>It turns out that the simplest way to round off those corners
>>>is to do the following on a linear-linear plot.
>> >
>> >err
>> >|
>> >|
>> >|\
>> >| \
>> >| \
>> >| \
>> >| \
>> >------------ comp
>>
>> Why not this on a loglog plot?
>
>Same reason as above.

-Carl

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/9/2004 4:44:19 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> My latest position is that I can live with log-flat badness with
> appropriate cutoffs. The problem with anything more tricky is that
> we have no data. Not vague historical data, actually no data.

Three questions regarding this statement.

1. Why is log-flat badness with cutoffs (on error and complexity) less
tricky than the cutoff functions Paul and I have been looking at?

Log-flat badness with cutoffs looks like this

max(err/k1, comp/k2, err * comp^k3) < x

or equivalently this (with different choices of k1, k2 and x)

max(err/k1, comp/k2, log(err) + k3*log(comp)) < x

where k3 is the number of primes divided by the number of primes less
the number of degrees of freedom.

This has two discontinuities in the cutoff curve.

Is that less tricky than the single straight line

err/k1 + comp/k2 < x ?

or the slightly curved line

(err/k1)^(2/3) + comp^(2/3) < x ?

If so, why?

2. Assuming for the moment that we have no data, why isn't that just
as much of a problem for log-flat badness with e&c cutoffs as for any
other proposed cutoff relation? i.e. How should we decide what cutoffs
to use on error, complexity and log-flat badness?

3. Why don't discussions of the value of various temperaments in the
archives of the tuning list constitute data on this, or at least
evidence?

I assume "evidence" is what you mean by "data" here. It's what I
meant. If, by "data", you mean something already organised as lists of
relevant numbers then I agree we don't have it, but what could
possibly be meant by "vague historical" lists of numbers.

> By
> putting all this energy into the list of temperaments, we're loosing
> touch with reality.

Well Paul and I see it as bringing it in closer touch with reality.

> Rather than worry about what is and isn't on
> the list, I'd like to figure out why Paul's creepy complexity gives
> the numbers it does.

Well sure. That would be a good thing to do. But I don't have a handle
on it. I think that's Paul and Gene's department. I'm happy just to
take it as evidence that Paul has hit on a very good complexity
measure and we should use it.

> But as long as Dave and Paul were having fun I
> didn't want to say anything. They have a way of coming up with neat
> stuff, though so far their conversation has been impenetrable to me.

Thanks and sorry. Did this one help?

/tuning-math/message/9330
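
As a concrete reading of question 1 above, here is a small Python sketch of
the two cutoff styles being compared: log-flat badness boxed in by error and
complexity caps, and the single additive cutoff. All constants are
placeholders rather than values proposed in the thread; the sample points
are chosen near the axes, where the two styles disagree most.

def logflat_with_cutoffs(err, comp, k1=20.0, k2=15.0, k3=2.0, k4=500.0, x=1.0):
    """max(err/k1, comp/k2, err * comp**k3 / k4) < x.  For 7-limit linear
    temperaments k3 = primes/(primes - degrees of freedom) = 4/(4 - 2) = 2.
    The boundary has two corners where the caps meet the badness line."""
    return max(err / k1, comp / k2, err * comp ** k3 / k4) < x

def additive_cutoff(err, comp, k1=20.0, k2=15.0, p=1.0):
    """(err/k1)**p + (comp/k2)**p < 1: a single straight line when p = 1,
    bowed gently when p is slightly below 1."""
    return (err / k1) ** p + (comp / k2) ** p < 1.0

# Made-up points near the axes, where the two styles disagree most.
for err, comp in [(19.0, 2.0), (2.0, 14.0), (5.0, 10.0), (8.0, 7.0)]:
    print(err, comp, logflat_with_cutoffs(err, comp), additive_cutoff(err, comp))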

🔗Carl Lumma <ekin@lumma.org>

2/9/2004 4:44:15 PM

>we are left with trying to cook up some
>scheme which doesn't look as if we are simply cooking up some scheme
>to get rid of them. If this isn't basically just a shell game, I
>think the thing should be defined in a way where the definition
gives
>us the list, and not the list the definition. Some kind of cluster
>analysis or something.

Agreed completely, but let's hear Paul and Dave out. They may
already have something!

-C.

🔗Carl Lumma <ekin@lumma.org>

2/9/2004 5:00:08 PM

>>My latest position is that I can live with log-flat badness with
>>appropriate cutoffs. The problem with anything more tricky is that
>>we have no data. Not vague historical data, actually no data.
>
>Three questions regarding this statement.
>
>1. Why is log-flat badness with cutoffs (on error and complexity)
>less tricky than the cutoff functions Paul and I have been looking
>at.

logflat is unique among badness functions I know of in that it does
not favor any region of complexity or error (thus it reveals
something about the natural distribution of temperaments) and has
zero free variables.

>Log-flat badness with cutoffs

The cutoffs are of course completely arbitrary, but can be easily
justified and explained in the context of a paper.

>2. Assuming for the moment that we have no data, why isn't that
>just as much of a problem for log-flat badness with e&c cutoffs
>as for any other proposed cutoff relation?

Ignoring the cutoffs, logflat does reveal something fundamental about
the distribution of temperaments. Whether musically appropriate or
not (utterly unfalsifiable assumptions), it gives an unbiased view
of ennealimmal vs. meantone, etc.

>i.e. How should we decide what cutoffs to use on error, complexity
>and log-flat badness?

You can tweak them to satisfy your sensibilities as best as possible,
same as you're tweaking the moat to factor infinity to satisfy your
sensibilities as best as possible.

>3. Why don't discussions of the value of various temperaments in
>the archives of the tuning list constitute data on this, or at
>least evidence?

Because nobody here or on the tuning list has the slightest clue
about what's musically useful. Nobody has composed more than a few
ditties in any of these systems.

>> But as long as Dave and Paul were having fun I
>> didn't want to say anything. They have a way of coming up with
>> neat stuff, though so far their conversation has been
>> impenetrable to me.
>
>Thanks and sorry. Did this one help?
>
>/tuning-math/message/9330

It doesn't explain what the heck a moat is, for starters.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

2/9/2004 5:03:30 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:

> Related how? Via log-flat badness? Meanwhile, error and complexity
> are related?

If C is the complexity and E is the error for a 7-limit linear
temperament which belongs to an infinite list of best examples, then
E ~ k C^2. Taking the logs, log(E) ~ 2 log(C) + c. In other words, we
have a relationship; one can very roughly be estimated in terms of
the other.

> > Seeing the plots would be nice. It remains vaporware to me, even
if
> > Paul has them, if they aren't made available.
>
> OK, I'll do more of them when I have a chance, but ultimately, I
> don't think I want to force any musician to think about what log
> (error) means or what log(complexity) means.

You are going to explain TOP error and complexity, but this would be
too much math??
>
> > High complexity really isn't such a big deal for some uses. JI
can
> be
> > said to have infinite complexity in a sense (no amount of fifths
> and
> > octaves will net you a pure major third, etc) which I think shows
> > Paul's worry about where it is on the graph is absurd,
>
> No, it shows the bullshit you're putting into my mouth is absurd,
as
> I agreed in a recent post.

You go on and on about not finding the zero error line, though
evidently not finding the infinite complexity line is not a problem.
I think that is absurd, but if you want it, you can find it on the
projective plane.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/9/2004 5:06:06 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
> --- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
> wrote:
>
> > I'm hoping paul can easily replot those ET plots loglog.
>
> When I do so, at least keep in mind that rather than log
(complexity),
> 2^complexity has actually been proposed as a criterion (i.e., by
> Fokker), and that error^2, at least, has gotten much attention as a
> measure of pain, while log(error) has gotten none.

That it's gotten none is what I'm complaining about. That
2^complexity has been discussed bores me to tears, unless you can
explain *why*.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/9/2004 5:15:50 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
wrote:

> Well Paul and I see it as bringing it in closer touch with reality.

Convince us. Make a case. Show some loglog plots and prove they make
no sense. Explain why what you are doing does make sense. Is this an
unreasonable request?

🔗Carl Lumma <ekin@lumma.org>

2/9/2004 5:16:32 PM

>>Related how? Via log-flat badness? Meanwhile, error and complexity
>>are related?
>
>If C is the complexity and E is the error for a 7-limit linear
>temperament which belongs to an infinite list of best examples,
>then E ~ k C^2.

Whoa dude, howdoyoufigure?

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

2/9/2004 5:23:26 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma" <ekin@l...> wrote:
> >>Related how? Via log-flat badness? Meanwhile, error and complexity
> >>are related?
> >
> >If C is the complexity and E is the error for a 7-limit linear
> >temperament which belongs to an infinite list of best examples,
> >then E ~ k C^2.
>
> Whoa dude, howdoyoufigure?

Should be E~k/C^2, sorry. The convex hull thing I was talking about
is relevant here.
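
A tiny numeric illustration of the corrected relation: if log-flat badness
E * C^2 sits at some roughly constant level along the frontier of best
7-limit linear temperaments, then E ~ k/C^2, and the frontier is a straight
line of slope -2 in log-log coordinates. The constant below is arbitrary,
purely for illustration.

from math import log

k = 100.0                                 # arbitrary badness level, for illustration only
Cs = [5.0, 10.0, 20.0, 40.0]
Es = [k / C ** 2 for C in Cs]             # the frontier relation E ~ k/C^2

for C, E in zip(Cs, Es):
    print(f"C = {C:5.1f}   E = {E:6.3f}   E * C^2 = {E * C ** 2:6.1f}")

# Slope between successive points in log-log coordinates: should be -2.
pairs = list(zip(Cs, Es))
for (c1, e1), (c2, e2) in zip(pairs, pairs[1:]):
    print("log-log slope:", round((log(e2) - log(e1)) / (log(c2) - log(c1)), 3))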

🔗Paul Erlich <perlich@aya.yale.edu>

2/9/2004 7:55:15 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
> wrote:
>
> > The error is minimax error in cents where the weighting is log_2
> (n*d)
> > for the ratio n/d in lowest terms.

The weighting is actually ONE OVER log2(n*d).

> What in the world does this mean?

Just because he's off by a multiplicative inverse, you suddenly have no
idea what he's talking about?

> Do you mean TOP error for an equal
> temperament,

Of course that's what he means.

> which is dual to the above?

Dual? How does duality come into play here?

🔗Paul Erlich <perlich@aya.yale.edu>

2/9/2004 8:00:13 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma" <ekin@l...> wrote:
> >> is that
> >> we have no data. Not vague historical data, actually no data.
> >
> >Less data than in the log-flat case?
>
> log-flat is natural, in a way. And it should be one of the easier
> concepts around here to explain to musicians.

I don't recall even Dave understanding its derivation, let alone any
full-time musicians.

> So far, you and Dave
> have not done any kind of job explaining "moats",

I thought we had, over and over again.

> or why we should
> want to add instead of multiply to get badness.

Why should we want to multiply instead of add?

> I notice that you're now saying that you don't want to use badness
> at all, which is what I said would be the logical extreme of a
> suggestion you made, and you argued against it!!

No, Carl, I was arguing against it being a logical extreme of, or
having any correlation in desirability with, that suggestion (which
was a hypothetical max(a*error, b*complexity) criterion).

> >> By putting all this energy into the list of temperaments, we're
> >> loosing touch with reality. Rather than worry about what is and
> >> isn't on the list, I'd like to figure out why Paul's creepy
> >> complexity gives the numbers it does.
> >
> >Seems to be a creepy coincidence, since it's an affine-geometrical
> >measure of area in the Tenney lattice, not something with units of
> >number of notes. But I'm not surprised that it gives more "notes"
> >for more complex temperaments, and fewer for less complex
> >temperaments. ;)
>
> I mean, it seems to favor DE scales,

Right, and that seems to me to be just a coincidence, but there are
many unanswered questions, for example in the post about 12-equal
complexity calculation.

🔗Paul Erlich <perlich@aya.yale.edu>

2/9/2004 8:01:15 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma" <ekin@l...> wrote:
> >>>My objection was not to limits on them per se, but to acceptance
> >>>regions shaped like this (on a log-log plot).
> >>>
> >>>err
> >>>|
> >>>| (a)
> >>>|---\
> >>>| \
> >>>| \
> >>>| \ (b)
> >>>| |
> >>>| |
> >>>------------ comp
> >>>
> >>>as opposed to a smooth curve that rounds off those corners marked
> >>>(a) and (b).
> >>
> >>Aha, now I understand your objection. But wait, what's stopping
> >>this from being a rectangle? Is the badness bound giving the
> >>line AB?
> >
> >Yes.
> >
> >>If so, it looks like a badness cutoff alone would give a
> >>finite region...
> >
> >No, because the zero-error line is infinitely far away on a loglog
> >plot.
>
> Can you illustrate this?

How can I illustrate infinity?

> It looks like the zero-error line is
> three dashes away on the above loglog plot. :)

Since you're smiliing, I'll assume you "got it".

🔗Paul Erlich <perlich@aya.yale.edu>

2/9/2004 8:10:32 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma" <ekin@l...> wrote:
> >>My latest position is that I can live with log-flat badness with
> >>appropriate cutoffs. The problem with anything more tricky is
that
> >>we have no data. Not vague historical data, actually no data.
> >
> >Three questions regarding this statement.
> >
> >1. Why is log-flat badness with cutoffs (on error and complexity)
> >less tricky than the cutoff functions Paul and I have been looking
> >at.
>
> logflat is unique among badness functions I know of in that it does
> not favor any region of complexity or error (thus it reveals
> something about the natural distribution of temperaments) and has
> zero free variables.

Thus it's great for a paper for mathematicians. Not for musicians.

> >Log-flat badness with cutoffs
>
> The cutoffs are of course completely arbitrary, but can be easily
> justified and explained in the context of a paper.

But there are *three* of them!

> >2. Assuming for the moment that we have no data, why isn't that
> >just as much of a problem for log-flat badness with e&c cutoffs
> >as for any other proposed cutoff relation?
>
> Ignoring the cutoffs, logflat does reveal something fundamental
about
> the distribution of temperaments. Whether musically appropriate or
> not (utterly unfalsifiable assumptions), it gives an unbiased view
> of ennealimmal vs. meantone, etc.

From a purely mathematical standpoint only.

> >i.e. How should we decide what cutoffs to use on error, complexity
> >and log-flat badness?
>
> You can tweak them to satisfy your sensibilities as best as
possible,
> same as you're tweaking the moat to factor infinity

To factor infinity??

> to satisfy your
> sensibilities as best as possible.

But there's less to tweak -- we just find the thickest moat that
encloses the systems in the same ballpark as the ones we know we
definitely want to include. This seems a lot less arbitrary than
tweaking *three* parameters to satisfy one's sensibilities as best as
possible.

🔗Paul Erlich <perlich@aya.yale.edu>

2/9/2004 8:16:22 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> wrote:
>
> > Related how? Via log-flat badness? Meanwhile, error and
complexity
> > are related?
>
> If C is the complexity and E is the error for a 7-limit linear
> temperament which belongs to an infinite list of best examples,

Other than via log-flat badness?

> then
> E ~ k C^2. Taking the logs, log(E) ~ 2 log(C) + c. In other words,
we
> have a relationship; one can very roughly be estimated in terms of
> the other.

> > OK, I'll do more of them when I have a chance, but ultimately, I
> > don't think I want to force any musician to think about what log
> > (error) means or what log(complexity) means.
>
> You are going to explain TOP error and complexity, but this would
be
> too much math??

No, just too abstract a measure to wrap one's head around intuitively.

> > > High complexity really isn't such a big deal for some uses. JI
> can
> > be
> > > said to have infinite complexity in a sense (no amount of
fifths
> > and
> > > octaves will net you a pure major third, etc) which I think
shows
> > > Paul's worry about where it is on the graph is absurd,
> >
> > No, it shows the bullshit you're putting into my mouth is absurd,
> as
> > I agreed in a recent post.
>
> You go on and on about not finding the zero error line, though
> evidently not finding the infinite complexity line is not a
>problem.

No, it's not a problem, any more than not finding the infinite error
line is a problem.

🔗Paul Erlich <perlich@aya.yale.edu>

2/9/2004 8:18:36 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> wrote:
> > --- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
> > wrote:
> >
> > > I'm hoping paul can easily replot those ET plots loglog.
> >
> > When I do so, at least keep in mind that rather than log
> (complexity),
> > 2^complexity has actually been proposed as a criterion (i.e., by
> > Fokker), and that error^2, at least, has gotten much attention as
a
> > measure of pain, while log(error) has gotten none.
>
> That it's gotten none is what I'm complaining about.

No one creates a psychological model where one of the response
variables goes to minus infinity!

> That
> 2^complexity has been discussed bores me to tears, unless you can
> explain *why*.

One reason might be because, for an ET, the number of possible chords
goes as 2^complexity.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/9/2004 8:28:02 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
> wrote:
>
> > The error is minimax error in cents where the weighting is log_2
> (n*d)
> > for the ratio n/d in lowest terms.
>
> What in the world does this mean? Do you mean TOP error for an equal
> temperament, which is dual to the above?

Yes, I should have said the weights were 1/log_2(n*d) or that the
errors were _divided_ by the weights I gave.

Yes. I mean TOP error but didn't want to assume all readers would know
what that meant.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/9/2004 9:03:50 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
> wrote:
>
> > Well Paul and I see it as bringing it in closer touch with reality.
>
> Convince us. Make a case. Show some loglog plots and prove they make
> no sense. Explain why what you are doing does make sense. Is this an
> unreasonable request?

I seem to have been doing nothing but that for the past two days. The
fact that you haven't recognised it as such says to me that we're
somehow talking past each other much worse than I thought.

We're trying to come up with some reasonable way to decide on which
temperaments of each type to include in a paper on temperaments, given
that space is always limited. We want to include those few (maybe only
about 20 of each type) which we feel are most likely to actually be
found useful by musicians, and we want to be able to answer questions
of the kind: "since you included this and this, then why didn't you
include this?". So Gene may have a point when he talks about cluster
analysis, I just don't find his applications of it so far to be
producing useful results.

Our starting point (but _only_ a starting point) is the knowledge
we've built up, over many years spent on the tuning list, regarding
what people find musically useful, with 5-limit ETs having had the
greatest coverage.

It may be an objective mathematical fact that log-flat badness gives
uniform distribution, but you don't need a multiple-choice survey to
know it is a psychological fact that musicians aren't terribly
interested in availing themselves of the full resources of 4276-ET
(or whatever it was). Nor are they interested in a 5-limit temperament
where 6/5 is distributed. So we add complexity and error cutoffs which
utterly violate log-flat badness in their region of application (so why
not violate log-flat badness elsewhere too), and we make the transition
to non-violation as smooth as possible.

Corners in the cutoff line are bad because there are too many ways for
a temperament to be close to the outside of a corner.

A moat is a wide and straight (or smoothly curved) band of white space
on the complexity-error chart, surrounding your included temperaments.
It is good to have a moat so that you can answer questions like "since
you included this and this, then why didn't you include this", by at
least offering that "it's a long way from any of the included
temperaments, on an error complexity plot".

The way to find a useful moat is to start with the temperaments you
know everyone will want included, and those that almost no one will
care about, and check out the space between the two.
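
A rough Python sketch (with made-up points, not real temperaments) of that
last procedure: insist that the agreed-on temperaments fall inside a
straight-line cutoff err/k1 + comp/k2 < 1 and that the agreed-on rejects
fall outside, then pick the line whose nearest point on either side is
farthest away, i.e. the widest moat. The straight-line family and the grid
of intercepts are my own simplifications.

from math import hypot

# Made-up (complexity, error-in-cents) points standing in for temperaments.
must_include = [(3.0, 6.0), (6.0, 4.0), (10.0, 2.0)]
must_exclude = [(2.0, 18.0), (14.0, 5.0), (20.0, 0.5), (9.0, 9.0)]

def distance_to_line(comp, err, k1, k2):
    """Perpendicular distance from (comp, err) to the line err/k1 + comp/k2 = 1."""
    return abs(err / k1 + comp / k2 - 1.0) / hypot(1.0 / k1, 1.0 / k2)

def moat_width(k1, k2):
    """Width of the empty band around the cutoff line, or None if the line
    fails to separate the two sets."""
    if any(e / k1 + c / k2 >= 1.0 for c, e in must_include):
        return None
    if any(e / k1 + c / k2 <= 1.0 for c, e in must_exclude):
        return None
    return min(distance_to_line(c, e, k1, k2)
               for c, e in must_include + must_exclude)

best = None
for k1 in [i / 2 for i in range(10, 61)]:      # error intercept, 5 to 30 cents
    for k2 in [i / 2 for i in range(10, 61)]:  # complexity intercept, 5 to 30
        width = moat_width(k1, k2)
        if width is not None and (best is None or width > best[0]):
            best = (width, k1, k2)

print("widest moat (width, err intercept k1, comp intercept k2):", best)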

🔗Gene Ward Smith <gwsmith@svpal.org>

2/9/2004 10:19:14 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
> Why should we want to multiply instead of add?

Oh, for God's sake Paul-have you looked at your own plots? Did you
notice how straight the thing looks in loglog coordinates? Your plots
make it clear that loglog is the right approach. Look at them!

🔗Gene Ward Smith <gwsmith@svpal.org>

2/9/2004 10:21:03 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:

> But there's less to tweak -- we just find the thickest moat than
> encloses the systems in the same ballpark as the ones we know we
> definitely want to include. This seems a lot less arbitrary than
> tweaking *three* parameters to satisfy one's sensibilities as best
as
> possible.

Your plots make it clear you'd better trash the idea of doing moats
in anything but loglog.

🔗Carl Lumma <ekin@lumma.org>

2/9/2004 10:20:11 PM

>> Do you mean TOP error for an equal
>> temperament,
>
>Of course that's what he means.

For 7-limit ets, how do you decide which comma to use?

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/9/2004 10:23:17 PM

>> >>>My objection was not to limits on them per se, but to acceptance
>> >>>regions shaped like this (on a log-log plot).
>> >>>
>> >>>err
>> >>>|
>> >>>| (a)
>> >>>|---\
>> >>>| \
>> >>>| \
>> >>>| \ (b)
>> >>>| |
>> >>>| |
>> >>>------------ comp
>> >>>
>> >>>as opposed to a smooth curve that rounds off those corners marked
>> >>>(a) and (b).
>> >>
>> >>Aha, now I understand your objection. But wait, what's stopping
>> >>this from being a rectangle? Is the badness bound giving the
>> >>line AB?
>> >
>> >Yes.
>> >
>> >>If so, it looks like a badness cutoff alone would give a
>> >>finite region...
>> >
>> >No, because the zero-error line is infinitely far away on a loglog
>> >plot.
>>
>> Can you illustrate this?
>
>How can I illustrate infinity?
>
>> It looks like the zero-error line is
>> three dashes away on the above loglog plot. :)
>
>Since you're smiliing, I'll assume you "got it".

No, I was just cracking wise. :(

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/9/2004 10:28:39 PM

>Thus it's great for a paper for mathematicians. Not for musicians.

The *contents* of the list is what's great for musicians, not
how it was generated.

>> >Log-flat badness with cutoffs
>>
>> The cutoffs are of course completely arbitrary, but can be easily
>> justified and explained in the context of a paper.
>
>But there are *three* of them!

...still trying to understand why the rectangle doesn't enclose
a finite number of temperaments...

>> >i.e. How should we decide what cutoffs to use on error, complexity
>> >and log-flat badness?
>>
>> You can tweak them to satisfy your sensibilities as best as
>> possible, same as you're tweaking the moat to factor infinity
>
>To factor infinity??

Sorry, it just means "a lot" here. Like "Kill factor infinity!".

>> to satisfy your
>> sensibilities as best as possible.
>
>But there's less to tweak -- we just find the thickest moat than
>encloses the systems in the same ballpark as the ones we know we
>definitely want to include. This seems a lot less arbitrary than
>tweaking *three* parameters to satisfy one's sensibilities as best
>as possible.

With moats it seems you're pretty-much able to hand pick the list,
which is more arbitrary (in the above sense) than not being able to.

My thoughts are that in the 5-limit, we might reasonably have a
chance of guessing a good list. But beyond that, I would cry
Judas if anyone here claimed they could hand-pick anything. So,
my question to you is: can a 5-limit moat be extrapolated upwards
nicely?

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

2/9/2004 10:43:29 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
wrote:

> I seem to have been doing nothing but that for the past two days.
The
> fact that you haven't recognised it as such says to me that we're
> somehow talking past each other much worse than I thought.

The first loglog plots I've seen were just now posted by Paul; they
make a *very* strong case for loglog, not at all to my surprise. It
would be interesting now to see linears.

> Corners in the cutoff line are bad because there are too many ways
for
> a temperament to be close to the outside of a corner.

There's only one way to do it, which is to do it. I don't see why
this is any kind of argument. Something on the very edge of your
criterion is by definition marginal, wherever your margin lies. You
can try to avoid this by moats, but that's only going to take you so
far, and, if you are not careful (and I've seen no signs of care), into
regions where the justification is dubious. If you want a list, why
not just pick your favorites and put them on it?

>
> A moat is a wide and straight (or smoothly curved) band of white
space
> on the complexity-error chart, surrounding your included
temperaments.
> It is good to have a moat so that you can answer questions
like "since
> you included this and this, then why didn't you included this", by
at
> least offering that "it's a long way from any of the included
> temperaments, on an error complexity plot".

If the moat is gerrymandered, you get that question anyway, don't you?

> The way to find a useful moat is to start with the temperaments you
> know everyone will want included, and those that almost no one will
> care about, and check out the space between the two.

Right. Then you put them on a loglog plot, and try to draw a straight
line between them, and find to your amazement that it works. Now you
only have the corners to worry about, and what you are doing is
easier to justify. Is this so bad? Why the opposition to even trying?
When the response is "this isn't helping" my impression is that I am
not being listened to at all, hence I started shouting. Now I think I
may have gotten through a little, so let's talk.

🔗Carl Lumma <ekin@lumma.org>

2/9/2004 10:44:01 PM

>We're trying to come up with some reasonable way to decide on which
>temperaments of each type to include in a paper on temperaments, given
>that space is always limited. We want to include those few (maybe only
>about 20 of each type)

For musicians, I'd make the list 5 for each limit; 10 tops. For
people reading a theory paper, 20 would be interesting.

>which we feel are most likely to actually be
>found useful by musicians, and we want to be able to answer questions
>of the kind: "since you included this and this, then why didn't you
>included this". So Gene may have a point when he talks about cluster
>analysis, I just don't find his applications of it so far to be
>producing useful results.

I haven't seen any cluster analysis yet!

>Our starting point (but _only_ a starting point) is the knowledge
>we've built up, over many years spent on the tuning list, regarding
>what people find musically useful, with 5-limit ETs having had the
>greatest coverage.

You're gravely mistaken about the pertinence of this 'data source'.
Even worse than culling intervals from the Scala archive.

>It may be an objective mathematical fact that log-flat badness gives
>uniform distribution, but you don't need a multiple-choice survey to
>know it is a psychological fact that musicians aren't terribly
>interested in availing themselves of the full resources of 4276-ET
>()or whatever it was.

So far this can be addressed with a complexity bound.

>So we add complexity and error cutoffs which
>utterly violate log-flat badness in their region of application (so
>why violate log-flat badness elsewhere and make the transition to
>non-violatedness as smooth as possible.

?

>Corners in the cutoff line are bad because there are too many ways for
>a temperament to be close to the outside of a corner.

Agreed.

>A moat is a wide and straight (or smoothly curved) band of white space
>on the complexity-error chart, surrounding your included temperaments.
>It is good to have a moat so that you can answer questions like "since
>you included this and this, then why didn't you included this", by at
>least offering that "it's a long way from any of the included
>temperaments, on an error complexity plot".

Okay, now I have a definition of moat. How do they compare to Gene's
"acceptance regions"?

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

2/9/2004 10:49:58 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >We're trying to come up with some reasonable way to decide on which
> >temperaments of each type to include in a paper on temperaments,
given
> >that space is always limited. We want to include those few (maybe
only
> >about 20 of each type)
>
> For musicians, I'd make the list 5 for each limit; 10 tops. For
> people reading a theory paper, 20 would be interesting.

Ridiculous. I've *composed* in about that many temperaments.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/9/2004 10:49:50 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> wrote:
> > Why should we want to multiply instead of add?
>
> Oh, for God's sake Paul-have you looked at your own plots? Did you
> notice how straight the thing looks in loglog coordinates? Your plots
> make it clear that loglog is the right approach. Look at them!

I don't much care how it's plotted, so long as we zoom in on the
interesting bit. So, on these plots, what shape would you make a
smooth curve that encloses only (or mostly) those ETs that musicians
have actually found useful (or that you think are likely to be found
useful) for approximating JI to the relevant limit? Having regard for
the difficulty caused by complexity as well as error.

I wonder if, when you say that there is no particular problem with
complexity you are thinking of cases where you may use a subset of an
ET, in the way that Joseph Pehrson is using a 21 note subset of 72-ET.
In that case you are really using a linear temperament, not the ET
itself. I think the complexity of an ET should be considered as if you
planned to use _all_ its notes.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/9/2004 10:58:04 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
wrote:

> I don't much care how it's plotted, so long as we zoom in on the
> interesting bit. So, on these plots, what shape would you make a
> smooth curve that encloses only (or mostly) those ETs that musicians
> have actually found useful (or that you think are likely to be found
> useful) for approximating JI to the relevant limit? Having regard
for
> the difficulty caused by complexity as well as error.

Straight lines, or if you absolutely must, round out the straight
lines by making them algebraic curves of high degree.

> I wonder if, when you say that there is no particular problem with
> complexity you are thinking of cases where you may use a subset of
an
> ET, in the way that Joseph Pehrson is using a 21 note subset of 72-
ET.
> In that case you are really using a linear temperament, not the ET
> itself. I think the complexity of an ET should be considered as if
you
> planned to use _all_ its notes.

You are not thinking like a computer-based composer, and that is
likely to be an increasingly important consideration as time goes on.
Technology has a habit of getting both better and more available.
Around here Fry's sometimes sells new computers--good ones--for $99
as a loss leader. A cheap monitor, a pair of those cheap earphones
I've been hearing about on tuning, and some freeware and you're in
business, if only people like us will just tell you how.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/9/2004 11:06:25 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> wrote:
>
> > But there's less to tweak -- we just find the thickest moat than
> > encloses the systems in the same ballpark as the ones we know we
> > definitely want to include. This seems a lot less arbitrary than
> > tweaking *three* parameters to satisfy one's sensibilities as best
> as
> > possible.
>
> Your plots make it clear you'd better trash the idea of doing moats
> in anything but loglog.

I don't have a problem with that. I still think the simplest curves
through moats that are in the right ballpark will be of the form

(err/k1)^p + (comp/k2)^p < x where p is 1 or slightly less than 1.

in terms of log(err) and log(comp) that's equivalent to

exp([log(err) - k1] * p) + exp([log(comp) - k2] * p) < x

with a different choice of k1, k2 and x.

Is there a simpler function of log(err) and log(comp) that gives
similar shaped curves in the region of interest?
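
A quick numeric check (mine, not from the thread) that the two forms above
trace the same curve once the constants subtracted inside the exp() are read
as log(k1) and log(k2). The sample constants reuse the 15.8, 28.6 and 2/3
quoted earlier; the test points are arbitrary.

from math import exp, log

def power_form(err, comp, k1, k2, p):
    return (err / k1) ** p + (comp / k2) ** p

def log_form(log_err, log_comp, K1, K2, p):
    return exp(p * (log_err - K1)) + exp(p * (log_comp - K2))

k1, k2, p = 15.8, 28.6, 2 / 3       # constants borrowed from earlier in the thread
for err, comp in [(1.0, 5.0), (3.0, 12.0), (8.0, 25.0)]:
    a = power_form(err, comp, k1, k2, p)
    b = log_form(log(err), log(comp), log(k1), log(k2), p)
    print(f"err = {err}, comp = {comp}:  {a:.6f}  vs  {b:.6f}")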

🔗Gene Ward Smith <gwsmith@svpal.org>

2/9/2004 11:12:58 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<gwsmith@s...>
> wrote:
> > --- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
> > wrote:
> >
> > > The error is minimax error in cents where the weighting is log_2
> > (n*d)
> > > for the ratio n/d in lowest terms.
>
> The weighting is actually ONE OVER log2(n*d).

It's not either one when I'm doing it; to me log(n/d)/log(n*d) is
just a variant on epimericity.

> > What in the world does this mean?
>
> Just because he's off by a multiplicate inverse, you suddenly have
no
> idea what he's talking about?

No, and you aren't making much sense to me either; we seem to have
differing ideas of what the topic under discussion is. Maybe I'm not
tracking it, but I thought we were talking about TOP error and
complexity.

> > Do you mean TOP error for an equal
> > temperament,
>
> Of course that's what he means.
>
> > which is dual to the above?
>
> Dual? How does duality come into play here?

The dual to Tenney distance is how the error is measured.

🔗Carl Lumma <ekin@lumma.org>

2/9/2004 11:14:07 PM

>> For musicians, I'd make the list 5 for each limit; 10 tops. For
>> people reading a theory paper, 20 would be interesting.
>
>Ridiculous. I've *composed* in about that many temperaments.

You're not a professional musician, are you?

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

2/9/2004 11:18:45 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
wrote:

> I don't have a problem with that. I still think the simplest curves
> through moats that are in the right ballpark will be of the form
>
> (err/k1)^p + (comp/k2)^p < x where p is 1 or slightly less than 1.

The ets Paul just got through plotting are lying more or less along
straight lines. I don't see any way to make a sensible moat unless
your line follows the lay of the land, so to speak.

> Is there a simpler function of log(err) and log(comp) that gives
> similar shaped curves in the region of interest?

Lines.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/9/2004 11:24:05 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> For musicians, I'd make the list 5 for each limit; 10 tops. For
> >> people reading a theory paper, 20 would be interesting.
> >
> >Ridiculous. I've *composed* in about that many temperaments.
>
> You're not a professional musician, are you?

For the right price, I'll compose in your choice of temperament.

🔗Carl Lumma <ekin@lumma.org>

2/9/2004 10:22:35 PM

>> log-flat is natural, in a way. And it should be one of the easier
>> concepts around here to explain to musicians.
>
>I don't recall even Dave understanding its derivation, let alone any
>full-time musicians.

I recall that Dave rejected the idea of a critical exponent. But I
didn't understand it until I coded it. But anyway, it's no big deal.
At the level I'd imagine this stuff being explained to a full-time
musician, it wouldn't be any harder to explain than a moat.

>> So far, you and Dave
>> have not done any kind of job explaining "moats",
>
>I thought we had, over and over again.

Can you give a definition? Is it a )) shaped region on a
log-linear plot, or...?

>> or why we should
>> want to add instead of multiply to get badness.
>
>Why should we want to multiply instead of add?

Gene multiplies logs, and you and Dave are adding them.
Or so I thought...

-Carl

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/9/2004 11:39:38 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
> wrote:

> The first loglog plots I've seen were just now posted by Paul; they
> make a *very* strong case for loglog, not at all to my surprise. It
> would be interesting now to see linears.

Paul posted the linear versions a while ago. And I posted a link to
Paul's message introducing them, for Carl earlier today.

> > Corners in the cutoff line are bad because there are too many ways
> for
> > a temperament to be close to the outside of a corner.
>
> There's only one way to do it, which is to do it. I don't see why
> this is any kind of argument. Something on the very edge of your
> criterion is by definition marginal, wherever your margin lies. You
> can try to avoid this by moats, but that's only going to take you so
> far, and if you are not careful (and I've seen no signs of care) it will
> take you into regions where the justification is dubious. If you want a list, why
> not just pick your favorites and put them on it?

Because if someone plots them on a graph (whether log or linear),
along with some nearby ones we left out, then if the only way to draw
a line separating them is to have lots of zigs and zags in it, they
will have good reason to complain.

> > A moat is a wide and straight (or smoothly curved) band of white
> space
> > on the complexity-error chart, surrounding your included
> temperaments.
> > It is good to have a moat so that you can answer questions
> like "since
> > you included this and this, then why didn't you included this", by
> at
> > least offering that "it's a long way from any of the included
> > temperaments, on an error complexity plot".
>
> If the moat is gerrymandered, you get that question anyway, don't you?
>

Sure. But the wider and smoother your moat, the easier you can be let
off the hook. :-)

Also "gerrymander" is a derogatory term and originally applied only to
electoral boundaries redefined to suit the incumbent.

We have no need to apologise for choosing boundaries that, to the best
of our combined knowledge, include only the X most useful
temperaments. Indeed that's the whole idea.

We're never going to agree with everyone, but a good moat will lessen
the scope for disagreement.

> > The way to find a useful moat is to start with the temperaments you
> > know everyone will want included, and those that almost no one will
> > care about, and check out the space between the two.
>
> Right. Then you put them on a loglog plot, and try to draw a straight
> line between them, and find to your amazement that it works.

No! I'm afraid I've tried, but I can find absolutely no way to make a
straight line work for this on a log log plot.

> Now you
> only have the corners to worry about, and what you are doing is
> easier to justify.

If it was a straight line, why would I have corners to worry about?

> Is this so bad?

Now I've tried, and got the results I fully expected from my
experience of the kinds of things that happen when you go from
linear-linear to loglog.

> Why the opposition to even trying?

Because I was pretty sure from the above experience and having already
looked at it on both linear-linear and log-linear that it would be a
waste of time.

> When the response is "this isn't helping" my impression is that I am
> not being listened to at all, hence I started shouting. Now I think I
> may have gotten through a little, so let's talk.

Sure.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/9/2004 11:51:47 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >We're trying to come up with some reasonable way to decide on which
> >temperaments of each type to include in a paper on temperaments, given
> >that space is always limited. We want to include those few (maybe only
> >about 20 of each type)
> >Our starting point (but _only_ a starting point) is the knowledge
> >we've built up, over many years spent on the tuning list, regarding
> >what people find musically useful, with 5-limit ETs having had the
> >greatest coverage.
>
> You're gravely mistaken about the pertinence of this 'data source'.
> Even worse than culling intervals from the Scala archive.

How do you know this?

> >So we add complexity and error cutoffs which
> >utterly violate log-flat badness in their region of application (so
> >why violate log-flat badness elsewhere and make the transition to
> >non-violatedness as smooth as possible.

That was meant to be "(so why not violate log-flat badness elsewhere ..."

> Okay, now I have a definition of moat. How do they compare to Gene's
> "acceptance regions"?

As I understand it, a moat is intended to surround an acceptance
region and quarantine it to some small degree from the kind of
objections I mentioned.

🔗Carl Lumma <ekin@lumma.org>

2/10/2004 12:14:08 AM

>> >Our starting point (but _only_ a starting point) is the knowledge
>> >we've built up, over many years spent on the tuning list, regarding
>> >what people find musically useful, with 5-limit ETs having had the
>> >greatest coverage.
>>
>> You're gravely mistaken about the pertinence of this 'data source'.
>> Even worse than culling intervals from the Scala archive.
>
>How do you know this?

Assuming a system is never exhausted, how close do you think we've
come to where schismic, meantone, dominant 7ths, augmented, and
> diminished are today with any other system?

If you had gone to apply your program in Bach's time, would you have
included augmented and diminished? "Oh, nobody's ever expressed
interest about them on a particular mailing list with about enough
aggregate musical talent to dimly light a pantry, so they must not be
worth mentioning." It is said the musicians of Bach's time did not
accept the errors of 12-tET.

5-limit ETs being shown musically useful on the tuning list?
Exactly what music are you thinking of? We're fortunate to have
had some great musicians working with new systems -- Haverstick,
Catler, Hobbs, Grady -- but we've chased all of them off the list,
and only Haverstick could be said to have worked in a "5-limit ET"
(and it's a stretch). We've got Miller, Smith and Pehrson left,
with the promising Erlich and monz stuck in theory and/or 12-tET
land. We're so far from any kind of form that would allow us to
make statements about musical utility that it's laughable.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 11:45:53 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> wrote:
>
> > But there's less to tweak -- we just find the thickest moat that
> > encloses the systems in the same ballpark as the ones we know we
> > definitely want to include. This seems a lot less arbitrary than
> > tweaking *three* parameters to satisfy one's sensibilities as
best
> as
> > possible.
>
> Your plots make it clear you'd better trash the idea of doing moats
> in anything but loglog.

On the contrary.

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 11:45:14 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> wrote:
> > Why should we want to multiply instead of add?
>
> Oh, for God's sake Paul--have you looked at your own plots?

Of course I have.

> Did you
> notice how straight the thing looks in loglog coordinates?

Yes, this has been clear to me for years.

> Your plots
> make it clear that loglog is the right approach. Look at them!

Geez, you must really be thinking like a mathematician and not a
musician.

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 11:48:03 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> >>>My objection was not to limits on them per se, but to
acceptance
> >> >>>regions shaped like this (on a log-log plot).
> >> >>>
> >> >>>err
> >> >>>|
> >> >>>| (a)
> >> >>>|---\
> >> >>>| \
> >> >>>| \
> >> >>>| \ (b)
> >> >>>| |
> >> >>>| |
> >> >>>------------ comp
> >> >>>
> >> >>>as opposed to a smooth curve that rounds off those corners
marked
> >> >>>(a) and (b).
> >> >>
> >> >>Aha, now I understand your objection. But wait, what's
stopping
> >> >>this from being a rectangle? Is the badness bound giving the
> >> >>line AB?
> >> >
> >> >Yes.
> >> >
> >> >>If so, it looks like a badness cutoff alone would give a
> >> >>finite region...
> >> >
> >> >No, because the zero-error line is infinitely far away on a
loglog
> >> >plot.
> >>
> >> Can you illustrate this?
> >
> >How can I illustrate infinity?
> >
> >> It looks like the zero-error line is
> >> three dashes away on the above loglog plot. :)
> >
> >Since you're smiling, I'll assume you "got it".
>
> No, I was just cracking wise. :(
>

Well, again, the zero-error line is infinitely far down. No matter how
you set up your log-log plot, and no matter how big it is, the zero-
error line will never be on it.

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 11:47:06 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> Do you mean TOP error for an equal
> >> temperament,
> >
> >Of course that's what he means.
>
> For 7-limit ets, how do you decide which comma to use?

What comma to use? There's no comma to use: n and d run over *all*
ratios n/d, not just one or more commas.
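
A minimal sketch of how that error can be computed for an equal
temperament, leaning on the fact that the worst Tenney-weighted error
over all ratios equals the worst weighted error over the primes; the
derivation of the optimal octave stretch is only spelled out in the
comments, and the val in the example is the standard 7-limit mapping
for 12 notes per octave:

    import math

    def top_error_et(val, primes=(2, 3, 5, 7)):
        # For a val <v2 v3 v5 v7| tuned with step size s cents, the weighted
        # error of prime p is s*v_p/log2(p) - 1200.  The worst weighted error
        # over all ratios n/d equals the worst over the primes, and choosing
        # s to balance the largest and smallest r_p = v_p/log2(p) gives a
        # minimax error of 1200*(max r - min r)/(max r + min r) cents.
        r = [v / math.log2(p) for v, p in zip(val, primes)]
        return 1200.0 * (max(r) - min(r)) / (max(r) + min(r))

    # e.g. top_error_et((12, 19, 28, 34)) for the standard 7-limit 12-et val.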

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 11:50:39 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >Thus it's great for a paper for mathematicians. Not for musicians.
>
> The *contents* of the list is what's great for musicians, not
> how it was generated.

No; I agree with Graham that we should "teach a man to fish".

> >> >Log-flat badness with cutoffs
> >>
> >> The cutoffs are of course completely arbitrary, but can be easily
> >> justified and explained in the context of a paper.
> >
> >But there are *three* of them!
>
> ...still trying to understand why the rectangle doesn't enclose
> a finite number of temperaments...

Which rectangle?

> With moats it seems you're pretty-much able to hand pick the list,

No way, dude! The decision is virtually made for us. If you can find
a wider moat in the vicinity, we'll adopt it.

> My thoughts are that in the 5-limit, we might reasonably have a
> chance of guessing a good list. But beyond that, I would cry
> Judas if anyone here claimed they could hand-pick anything. So,
> my question to you is: can a 5-limit moat be extrapolated upwards
> nicely?

Not sure what you mean by that.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/10/2004 11:58:27 AM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> > Your plots make it clear you'd better trash the idea of doing moats
> > in anything but loglog.
>
> On the contrary.

You did notice the approximately linear arrangement, I presume? Does
that suggest anything to you?

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 12:02:31 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >We're trying to come up with some reasonable way to decide on which
> >temperaments of each type to include in a paper on temperaments,
given
> >that space is always limited. We want to include those few (maybe
only
> >about 20 of each type)
>
> For musicians, I'd make the list 5 for each limit; 10 tops.

If that's the case, and if we're also going to use log-flat, then my
name is probably off the paper. It'll offer far too little of use for
ordinary musicians.

> >which we feel are most likely to actually be
> >found useful by musicians, and we want to be able to answer
questions
> >of the kind: "since you included this and this, then why didn't you
> >included this". So Gene may have a point when he talks about
cluster
> >analysis, I just don't find his applications of it so far to be
> >producing useful results.
>
> I haven't seen any cluster analysis yet!

It was principal components analysis, but the reasoning behind the
implementation was obscure.

> >Our starting point (but _only_ a starting point) is the knowledge
> >we've built up, over many years spent on the tuning list, regarding
> >what people find musically useful, with 5-limit ETs having had the
> >greatest coverage.
>
> You're gravely mistaken about the pertinence of this 'data source'.
> Even worse than culling intervals from the Scala archive.

OK, Carl, so everyone's been sorely underestimating the true
usefulness of 665-equal and 612-equal, yes?

> >It may be an objective mathematical fact that log-flat badness
gives
> >uniform distribution, but you don't need a multiple-choice survey
to
> >know it is a psychological fact that musicians aren't terribly
> >interested in availing themselves of the full resources of 4276-ET
> >(or whatever it was).
>
> So far this can be addressed with a complexity bound.

Which contradicts the notion of 'badness'.

> >So we add complexity and error cutoffs which
> >utterly violate log-flat badness in their region of application (so
> >why violate log-flat badness elsewhere and make the transition to
> >non-violatedness as smooth as possible.
>
> ?

Dave's exactly right. If we're violating it suddenly at the cutoffs
but nowhere else, we're clearly not conforming to any kind of
psychological badness criterion.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/10/2004 12:02:42 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
> wrote:

> > Your plots
> > make it clear that loglog is the right approach. Look at them!
>
> Geez, you must really be thinking like a mathematician and not a
> musician.

A musician is going to look at these plots, see that they show a
slantwise arrangement of ets, and conclude circles are the way to
analyze them, and select out the best ones? I don't think musicians
are brain-damaged, sorry. Can you take me seriously enough not to blow
me off with this bilge, and give a real argument for a clearly stated
position?

🔗Gene Ward Smith <gwsmith@svpal.org>

2/10/2004 12:06:23 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> No way, dude! The decision is virtually made for us.

Prove it. Give log-log plots for your proposed moats, and let us see
what you've got. It's possible we could come to some kind of consensus
if you would attempt to treat people with something better than the
contempt you have shown lately. Work with me, work with Carl.

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 12:05:57 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
wrote:
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
> wrote:
> > --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> > wrote:
> > > Why should we want to multiply instead of add?
> >
> > Oh, for God's sake Paul--have you looked at your own plots? Did
you
> > notice how straight the thing looks in loglog coordinates? Your
plots
> > make it clear that loglog is the right approach. Look at them!
>
> I don't much care how it's plotted, so long as we zoom in on the
> interesting bit. So, on these plots, what shape would you make a
> smooth curve that encloses only (or mostly) those ETs that musicians
> have actually found useful (or that you think are likely to be found
> useful) for approximating JI to the relevant limit? Having regard
for
> the difficulty caused by complexity as well as error.

Did either of you guys look at the loglog version of the moat-of-23 7-
limit linear temperaments?

> I wonder if, when you say that there is no particular problem with
> complexity you are thinking of cases where you may use a subset of
an
> ET, in the way that Joseph Pehrson is using a 21 note subset of 72-
ET.
> In that case you are really using a linear temperament, not the ET
> itself. I think the complexity of an ET should be considered as if
you
> planned to use _all_ its notes.

I'd say that if you planned to use any set of commas that generate
the ET's kernel (for chord-pump progressions, say), we're justified
to consider that you're planning to use the ET itself.

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 12:15:01 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> wrote:
> > --- In tuning-math@yahoogroups.com, "Gene Ward Smith"
> <gwsmith@s...>
> > wrote:
> > > --- In tuning-math@yahoogroups.com, "Dave Keenan"
<d.keenan@b...>
> > > wrote:
> > >
> > > > The error is minimax error in cents where the weighting is
log_2
> > > (n*d)
> > > > for the ratio n/d in lowest terms.
> >
> > The weighting is actually ONE OVER log2(n*d).
>
> It's not either one when I'm doing it, to me log(n/d)/log(n*d) is
> just a variant on epimericity.

I'm not following you, and I'm at a loss to understand why the above
definition of TOP error is suddenly a problem for you.

> > > What in the world does this mean?
> >
> > Just because he's off by a multiplicative inverse, you suddenly
have
> no
> > idea what he's talking about?
>
> No, and you aren't making much sense to me either; we seem to have
> differing ideas of what the topic under discussion is. Maybe I'm
not
> tracking it, but I thought we were talking about TOP error and
> complexity.

Yes; TOP error is always defined as above (though there are plenty of
equivalent definitions, such as over the primes alone).
>
> > > Do you mean TOP error for an equal
> > > temperament,
> >
> > Of course that's what he means.
> >
> > > which is dual to the above?
> >
> > Dual? How does duality come into play here?
>
> The dual to Tenney distance is how the error is measured.

Sure, but Dave's talking about measuring the errors directly (which
may be less useful for finding a solution, but may be more useful for
understanding what the solution means).

🔗Carl Lumma <ekin@lumma.org>

2/10/2004 12:18:09 PM

>> >Thus it's great for a paper for mathematicians. Not for musicians.
>>
>> The *contents* of the list is what's great for musicians, not
>> how it was generated.
>
>No; I agree with Graham that we should "teach a man to fish".

That's all the more reason particulars like moats v. logflat don't
matter -- if I can fish, I can implement my own list. But I still
don't think the hypothetical "musicians" you're battering Gene
with will be able to do it.

>> >> >Log-flat badness with cutoffs
>> >>
>> >> The cutoffs are of course completely arbitrary, but can be easily
>> >> justified and explained in the context of a paper.
>> >
>> >But there are *three* of them!
>>
>> ...still trying to understand why the rectangle doesn't enclose
>> a finite number of temperaments...
>
>Which rectangle?

The rectangle enclosed by error and complexity bounds. You answered
that the axes were infinitely far away, but the badness line AB
doesn't seem to be helping that.

>> My thoughts are that in the 5-limit, we might reasonably have a
>> chance of guessing a good list. But beyond that, I would cry
>> Judas if anyone here claimed they could hand-pick anything. So,
>> my question to you is: can a 5-limit moat be extrapolated upwards
>> nicely?
>
>Not sure what you mean by that.

Which part? Can the equation/coordinates that defines your fav. moat
be taken from a 5-limit plot and slapped onto a 7-limit one?

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 12:17:42 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
> wrote:
>
> > I don't have a problem with that. I still think the simplest
curves
> > through moats that are in the right ballpark will be of the form
> >
> > (err/k1)^p + (comp/k2)^p < x where p is 1 or slightly less than 1.
>
> The ets Paul just got through plotting are lying more or less along
> straight lines. I don't see any way to make a sensible moat unless
> your line follows the lay of the land, so to speak.

Except you suddenly depart from the lay of the land in two places?
Why this suddenness? What powerful psychological force operates at
these two points?

> > Is there a simpler function of log(err) and log(comp) that gives
> > similar shaped curves in the region of interest?
>
> Lines.

As I showed, the curve in question, on a log-log plot, looks clearly
unlike a line.

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 12:21:04 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> log-flat is natural, in a way. And it should be one of the
easier
> >> concepts around here to explain to musicians.
> >
> >I don't recall even Dave understanding its derivation, let alone
any
> >full-time musicians.
>
> I recall that Dave rejected the idea of a critical exponent. But I
> didn't understand it until I coded it. But anyway, it's no big
deal.
> At the level I'd imagine this stuff being explained to a full-time
> musician, it wouldn't be any harder to explain than a moat.

I think the regular plot will be easier to explain than the log-log
plot. After that, the moat will be quite easy to demonstrate. It
seems you just grokked it yourself.

> >> or why we should
> >> want to add instead of multiply to get badness.
> >
> >Why should we want to multiply instead of add?
>
> Gene multiplies logs, and you and Dave are adding them.
> Or so I thought...

No, Gene adds logs, while Dave and I add without taking logs.
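
Roughly, in code (the exponent and constants here are placeholders only,
not anyone's actual settings):

    def gene_badness(err, comp, k=2.0):
        # Multiplicative badness err * comp**k.  Taking logs turns the
        # product into a sum, so a cutoff on it is a straight line on a
        # log-log plot.
        return err * comp ** k

    def moat_badness(err, comp, k1=10.0, k2=12.0):
        # Additive combination of the raw quantities (the p = 1 moat case):
        # a cutoff on it is a straight line on a linear-linear plot.
        return err / k1 + comp / k2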

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 12:26:44 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> >Our starting point (but _only_ a starting point) is the
knowledge
> >> >we've built up, over many years spent on the tuning list,
regarding
> >> >what people find musically useful, with 5-limit ETs having had
the
> >> >greatest coverage.
> >>
> >> You're gravely mistaken about the pertinence of this 'data
source'.
> >> Even worse than culling intervals from the Scala archive.
> >
> >How do you know this?
>
> Assuming a system is never exhausted, how close do you think we've
> come to where schismic, meantone, dominant 7ths, augmented, and
> diminished are today with any other system?

We don't care, since we're including *all* the systems with error and
complexity no worse than *any* of these systems, as well as miracle.
And that's quite a few!

> If you had gone to apply your program in Bach's time, would you have
> included augmented and diminished? "Oh, nobody's ever expressed
> interest about them on a particular mailing list with about enough
> aggregate musical talent to dimly light a pantry, so they must not
be
> worth mentioning." It is said the musicians of Bach's time did not
> accept the errors of 12-tET.

Except on lutes . . .

> 5-limit ETs being shown musically useful on the tuning list?
> Exactly what music are you thinking of? We're fortunate to have
> had some great musicians working with new systems -- Haverstick,
> Catler, Hobbs, Grady -- but we've chased all of them off the list,

Nasty, nasty us.

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 12:34:23 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
>
> > > Your plots make it clear you'd better trash the idea of doing
moats
> > > in anything but loglog.
> >
> > On the contrary.
>
> You did notice the approximately linear arrangement, I presume?

Again: I've known this to be the case for years.

> Does
> that suggest anything to you?

Years ago, when you first made me aware of this fact, I was seduced
by it, to Dave's dismay. Did you forget? Now, I'm thinking about it
from a musician's point of view. Simply put, music based on
constructs requiring large numbers of pitches doesn't seem to be able
to cohere in the way almost all the world's music does. Of all
people, I'm surprised Carl is now throwing his investigations along
these lines by the wayside.

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 12:36:47 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
> > --- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<gwsmith@s...>
> > wrote:
>
> > > Your plots
> > > make it clear that loglog is the right approach. Look at them!
> >
> > Geez, you must really be thinking like a mathematician and not a
> > musician.
>
> A musician is going to look at these plots, see that they show a
> slantwise arrangement of ets, and conclude circles are the way to
> analyze them,

I wasn't one of those who brought up or discussed circles, but I
certainly wouldn't want to seduce musicians with a plot that is not
likely to correspond with musically meaningful pain measures -- not
by a long shot!

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 12:39:14 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
>
> > No way, dude! The decision is virtually made for us.
>
> Prove it. Give log-log plots for your proposed moats,

I already did that for one case, pointed it out twice, and asked for
your comments.

> It's possible we could come to some kind of consensus
> if you would attempt to treat people with something better than the
> contempt you have shown lately.

I can take your attitude in no other way, unless you either ignored
completely or have an abominably low level of respect for the
discussions Dave and I posted on the topic.

Let's start over. If I'm willing to tolerate a certain level of
error, and a certain level of complexity, why wouldn't I be willing
to tolerate both together?

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 12:44:31 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> >> ...still trying to understand why the rectangle doesn't enclose
> >> a finite number of temperaments...
> >
> >Which rectangle?
>
> The rectangle enclosed by error and complexity bounds.

Yes, that would enclose a finite number of temperaments.

> You answered
> that the axes were infinitely far away,

In log-log, this is true, and thus you can't *visually* verify that
there are a finite number of temperaments enclosed, as you can in the
linear-linear plot.

> >> My thoughts are that in the 5-limit, we might reasonably have a
> >> chance of guessing a good list. But beyond that, I would cry
> >> Judas if anyone here claimed they could hand-pick anything. So,
> >> my question to you is: can a 5-limit moat be extrapolated upwards
> >> nicely?
> >
> >Not sure what you mean by that.
>
> Which part? Can the equation/coordinates that defines your fav.
moat
> be taken from a 5-limit plot and slapped onto a 7-limit one?

No. The units are not the same. Well, depends which 5-limit one and
which 7-limit one you mean. Remember, we're dealing with a Pascal's
triangle, with one scenario for each number in the triangle, where
the number itself tells you the number of elements in the wedgie, the
row number is the number of primes, and the column number is the
codimension.
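
In code that bookkeeping is just a binomial coefficient; the two cases
in the comments are only examples of the pattern, a sketch and nothing
more:

    from math import comb

    def wedgie_length(n_primes, codim):
        # One scenario per entry of Pascal's triangle: a temperament of the
        # given codimension in a limit with n_primes primes has a wedgie
        # with C(n_primes, codim) entries.
        return comb(n_primes, codim)

    # wedgie_length(4, 2) == 6   (7-limit linear temperaments)
    # wedgie_length(3, 1) == 3   (5-limit linear temperaments)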

🔗Gene Ward Smith <gwsmith@svpal.org>

2/10/2004 12:52:23 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> It was principal components analysis, but the reasoning behind the
> implementation was obscure.

The reasoning was to draw an elliptical moat.

> OK, Carl, so everyone's been sorely underestimating the true
> usefulness of 665-equal and 612-equal, yes?

Sounds like you are. Not everyone plays live music and has that as
their focus, like you.

> Dave's exactly right. If we're violating it suddenly at the cutoffs
> but nowhere else, we're clearly not conforming to any kind of
> psychological badness criterion.

And if you are simply drawing squiggly lines on a graph, you are?

🔗Gene Ward Smith <gwsmith@svpal.org>

2/10/2004 12:57:43 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> Did either of you guys look at the loglog version of the moat-of-23 7-
> limit linear temperaments?

I have a plot with unlabeled axes and a curved red line on it.
Obviously, since I don't know what is being plotted, I draw no conclusion.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/10/2004 1:05:12 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> >Our starting point (but _only_ a starting point) is the knowledge
> >> >we've built up, over many years spent on the tuning list, regarding
> >> >what people find musically useful, with 5-limit ETs having had the
> >> >greatest coverage.
> >>
> >> You're gravely mistaken about the pertinence of this 'data source'.
> >> Even worse than culling intervals from the Scala archive.
> >
> >How do you know this?
>
> Assuming a system is never exhausted, how close do you think we've
> come to where schismic, meantone, dominant 7ths, augmented, and
> diminished are today with any other system?

Carl,

I've been saying we have evidence regarding "5-limit ETs". The above
are all linear temperaments, not equal temperaments, and one is
strictly 7-limit. Also I'm not sure I'm parsing that sentence correctly.

I think you're asking how well we have explored any systems other than
those linear temperaments you mention. My answer is, "Not very far".
But at least you seem to agree that the systems you mention have been
somewhat explored, and so we have some evidence of their musical
usefulness. The same goes for several _equal_ temperaments.

> If you had gone to apply your program in Bach's time, would you have
> included augmented and diminished? "Oh, nobody's ever expressed
> interest about them on a particular mailing list with about enough
> aggregate musical talent to dimly light a pantry, so they must not be
> worth mentioning." It is said the musicians of Bach's time did not
> accept the errors of 12-tET.

You've got the wrong end of the stick here, and are putting words in
my mouth. I never proposed using
failure-to-be-mentioned-on-the-tuning-lists as a reason to exclude a
temperament; that would be ridiculous. I only propose using those that
_have_ been mentioned (as useful), and general discussions on
desirable properties, as a starting point and then widening the circle
roughly equally in all directions from there.

> 5-limit ETs being shown musically useful on the tuning list?
> Exactly what music are you thinking of? We're fortunate to have
> had some great musicians working with new systems -- Haverstick,
> Catler, Hobbs, Grady -- but we've chased all of them off the list,

Even if that were true, it would not disqualify their past testimony.

> and only Haverstick could be said to have worked in a "5-limit ET"
> (and it's a stretch). We've got Miller, Smith and Pehrson left,
> with the promising Erlich and monz stuck in theory and/or 12-tET
> land. We're so far from any kind of form that would allow us to
> make statements about musical utility that it's laughable.

And why would you limit this information to those who have posted? We
have also heard about composers who never go near the tuning lists.
Darreg, Blackwood, Negri and Hanson come immediately to mind.

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 1:10:18 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
>
> > It was principal components analysis, but the reasoning behind
the
> > implementation was obscure.
>
> The reasoning was to draw an elliptical moat.

OK, I'd be happy to revisit that, then.

> > OK, Carl, so everyone's been sorely underestimating the true
> > usefulness of 665-equal and 612-equal, yes?
>
> Sounds like you are. Not everyone plays live music and has that as
> their focus, like you.

But are you using these to approximate JI or truly for their inherent
properties?

> > Dave's exactly right. If we're violating it suddenly at the
cutoffs
> > but nowhere else, we're clearly not conforming to any kind of
> > psychological badness criterion.
>
> And if you are simply drawing squiggly lines on a graph, you are?

What squiggly lines? Your lines, with their two "corners", are a lot
squigglier than ours.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/10/2004 1:13:56 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
> --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> > >Thus it's great for a paper for mathematicians. Not for musicians.
> >
> > The *contents* of the list is what's great for musicians, not
> > how it was generated.
>
> No; I agree with Graham that we should "teach a man to fish".

I disagree. It's just too hard for non-mathematicians. Unless by
"fish" you mean "go to Graham's web site and use the temperament
finder there" in which case I'm all for it! And this would let us not
worry too much that we may have left some temperament out of the paper
that someone someday may find useful.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/10/2004 1:18:49 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> > It's not either one when I'm doing it, to me log(n/d)/log(n*d) is
> > just a variant on epimericity.
>
> I'm not following you, and I'm at a loss to understand why the above
> definition of TOP error is suddenly a problem for you.

Sorry, brain short-circuit.

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 1:20:23 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
> > --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> > > >Thus it's great for a paper for mathematicians. Not for
musicians.
> > >
> > > The *contents* of the list is what's great for musicians, not
> > > how it was generated.
> >
> > No; I agree with Graham that we should "teach a man to fish".
>
> I disagree. It's just too hard for non-mathematicians. Unless by
> "fish" you mean "go to Graham's web site and use the temperament
> finder there" in which case I'm all for it! And this would let us
not
> worry too much that we may have left some temperament out of the
paper
> that someone someday may find useful.

This is a music *theory* paper, so presenting the bare minimum of
math to actually *derive* our results is appropriate. True, only
heavy theorists will probably want to reproduce the calculations. But
we want to leave referees, at least, with fairly complete confidence
that what we're doing is correct.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/10/2004 1:24:59 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> The rectangle enclosed by error and complexity bounds. You answered
> that the axes were infinitely far away, but the badness line AB
> doesn't seem to be helping that.

If you simply bound complexity alone, you get a finite number of
temperaments. Most are complete crap.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/10/2004 1:27:57 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
> --- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
> wrote:
> > --- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
> > wrote:
> > > --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> > > wrote:

Dave wrote:
> > I don't much care how it's plotted, so long as we zoom in on the
> > interesting bit. So, on these plots, what shape would you make a
> > smooth curve that encloses only (or mostly) those ETs that musicians
> > have actually found useful (or that you think are likely to be found
> > useful) for approximating JI to the relevant limit? Having regard
> for
> > the difficulty caused by complexity as well as error.
>
> Did either of you guys look at the loglog version of the moat-of-23 7-
> limit linear temperaments?

Sure. I looked at it and agree with it just fine. That should be
obvious since I agreed just fine with it on a linear-linear plot. I
was asking Gene what shape _he_ thought it should be, and particularly
in regard to 5-limit ETs. He says "a straight line", so I think we're
doomed to disagree.

> I'd say that if you planned to use any set of commas that generate
> the ET's kernel (for chord-pump progressions, say), we're justified
> to consider that you're planning to use the ET itself.

Fair enough.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/10/2004 1:28:45 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> Except you suddenly depart from the lay of the land in two places?
> Why this suddenness? What powerful psychological force operates at
> these two points?

Lack of interest in tempering out 3/2. However, you don't need to set
any error bound if this doesn't worry you, and don't need any
complexity bound at all. We discussed this extensively in the past,
back in the days when you didn't like the idea of getting a finite
list, which this would do for the right badness exponent or slope.

> As I showed, the curve in question, on a log-log plot, looks clearly
> unlike a line.

Can you tell me where this plot is, with axes clearly labeled, so the
rest of us have a clue?

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 1:29:22 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> > The rectangle enclosed by error and complexity bounds. You
answered
> > that the axes were infinitely far away, but the badness line AB
> > doesn't seem to be helping that.
>
> If you simply bound complexity alone, you get a finite number of
> temperaments.

That doesn't seem to be true. There are lots of low-complexity
temperaments with arbitrarily high error. Wouldn't you have to bound
epimericity or something like that?

> Most are complete crap.

Agreed.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/10/2004 1:31:21 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> I think the regular plot will be easier to explain than the log-log
> plot.

Are you going to actually explain it, or just sweep that under the
rug? In other words, are you going to explain why what you are doing
makes sense? If you propose a real explanation, in the sense of
rational justification, good luck.

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 1:33:26 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
> > --- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
> > wrote:
> > > --- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<gwsmith@s...>
> > > wrote:
> > > > --- In tuning-math@yahoogroups.com, "Paul Erlich"
<perlich@a...>
> > > > wrote:
>
> Dave wrote:
> > > I don't much care how it's plotted, so long as we zoom in on the
> > > interesting bit. So, on these plots, what shape would you make a
> > > smooth curve that encloses only (or mostly) those ETs that
musicians
> > > have actually found useful (or that you think are likely to be
found
> > > useful) for approximating JI to the relevant limit? Having
regard
> > for
> > > the difficulty caused by complexity as well as error.
> >
> > Did either of you guys look at the loglog version of the moat-of-
23 7-
> > limit linear temperaments?
>
> Sure. I looked at it and agree with it just fine. That should be
> obvious since I agreed just fine with it on a linear-linear plot. I
> was asking Gene what shape _he_ thought it should be, and
particularly
> in regard to 5-limit ETs. He says "a straight line", so I think
we're
> doomed to disagree.

Yes, a straight line in loglog will either include a finite (?)
number of ultra-high-error temperaments, or an infinite number of
ultra-high-complexity temperaments. Unless you replace the straight
line with a "staple", which places far too much importance on the
arbitrary location of the corners.

Personally, I don't think we should even be looking at the loglog
plots, since the axes don't represent quantities even in the ballpark
of what can be considered 'pain'. View the lay of the land as you
find it, not after an ultra-clever mathematical transformation of it.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/10/2004 1:40:50 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
> --- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
> wrote:
> > --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> wrote:
> > > --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> > > > >Thus it's great for a paper for mathematicians. Not for
> musicians.
> > > >
> > > > The *contents* of the list is what's great for musicians, not
> > > > how it was generated.
> > >
> > > No; I agree with Graham that we should "teach a man to fish".
> >
> > I disagree. It's just too hard for non-mathematicians. Unless by
> > "fish" you mean "go to Graham's web site and use the temperament
> > finder there" in which case I'm all for it! And this would let us
> not
> > worry too much that we may have left some temperament out of the
> paper
> > that someone someday may find useful.
>
> This is a music *theory* paper, so presenting the bare minimum of
> math to actually *derive* our results is appropriate. True, only
> heavy theorists will probably want to reproduce the calculations. But
> we want to leave referees, at least, with fairly complete confidence
> that what we're doing is correct.

Fair enough. A sketch that a mathematician can follow. But that seems
a long way from "teach a man to fish". The musicians will only be
relying on the lists we give (and Graham's finder).

🔗Gene Ward Smith <gwsmith@svpal.org>

2/10/2004 1:42:02 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> Years ago, when you first made me aware of this fact, I was seduced
> by it, to Dave's dismay. Did you forget? Now, I'm thinking about it
> from a musician's point of view. Simply put, music based on
> constructs requiring large numbers of pitches doesn't seem to be able
> to cohere in the way almost all the world's music does.

You've gotten all the way up to 22 notes to the octave. I suggest you
have zero experience along these lines.

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 1:47:03 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
>
> > Except you suddenly depart from the lay of the land in two
places?
> > Why this suddenness? What powerful psychological force operates
at
> > these two points?
>
> Lack of interest in tempering out 3/2.

But not 4/3?

> However, you don't need to set
> any error bound if this doesn't worry you, and don't need any
> complexity bound at all. We discussed this extensively in the past,
> back in the days when you didn't like the idea of getting a finite
> list, which this would do for the right badness exponent or slope.

Unfortunately, we can't publish an infinitely long paper.

> > As I showed, the curve in question, on a log-log plot, looks
clearly
> > unlike a line.
>
> Can you tell me where this plot is, with axes clearly labeled, so
the
> rest of us have a clue?

See

/tuning-math/message/9479

By your preceding comments, you must already 'have' the plot.

Now that you 'know', can you tell me how there could still have been
any question as to what the axes represent, since on all the graphs,
there never have been any exceptions to x=complexity, y=error; I
repeatedly pointed out this fact, and we've already established that
we're on the same page for these quantities? You'll have to forgive
me if I feel like you're latching on to any possible excuse, real or
imagined, to denigrate my work, and that of Dave as well. I've been
working extremely hard to wrap my brain around your nearly
impenetrable posts for years, going way out on a limb to defend you
against your detractors, and hailing you as the savior of our cause.
Now I'm working hard to satisfy everyone's needs in this discussion,
including yours. If I leave out a detail that could be ambiguous to
someone who's very far away from this discussion, I would expect that
you, of all people, would be able to tolerate this. If you can't
operate on the principle of 'trying to understand' rather
than 'trying to dismiss', then you have no right to expect others to,
and then your work here, I'm sorry to say, would be the first to fall
by the wayside.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/10/2004 1:48:10 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> > Prove it. Give log-log plots for your proposed moats,
>
> I already did that for one case, pointed it out twice, and asked for
> your comments.

I have saved a plot I don't know how to interpret.

> > It's possible we could come to some kind of consensus
> > if you would attempt to treat people with something better than the
> > contempt you have shown lately.
>
> I can take your attitude in no other way, unless you either ignored
> completely or have an abominably low level of respect for the
> discussions Dave and I posted on the topic.

Dave has been treating the rest of us with respect and discussing
things. I'd like the same from you.

> Let's start over. If I'm willing to tolerate a certain level of
> error, and a certain level of complexity, why wouldn't I be willing
> to tolerate both together?

From some points of view complexity doesn't even matter, so the whole
premise we've all been operating under can be questioned by someone
who is interested in the character of the commas in the kernel, not
what complexity they give. As for your question, are you arguing *for*
straight-line error and complexity bounds? Because I don't see where
else you can possibly go with it. Tolerating both together is exactly
what Dave doesn't tolerate, and I thought you agreed with that.

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 1:50:18 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
>
> > I think the regular plot will be easier to explain than the log-
log
> > plot.
>
> Are you going to actually explain it, or just sweep that under the
> rug?

Yes, excuse me for the brutal honesty, but my track record for
successfully explaining things to people is just a bit better than
yours. The next few months are going to be insanely brutal as we try
to work together putting together an explanation for people to read.

🔗Graham Breed <graham@microtonal.co.uk>

2/10/2004 1:56:27 PM

Dave Keenan wrote:

> I disagree. It's just too hard for non-mathematicians. Unless by
> "fish" you mean "go to Graham's web site and use the temperament
> finder there" in which case I'm all for it! And this would let us not
> worry too much that we may have left some temperament out of the paper
> that someone someday may find useful.

That's roughly what I meant. Of course, the temperament finder could always do with improving (even mathematicians have trouble understanding it!) and could do with a good user guide -- hence the "teach" part. And I could really do with help with that.

We also need to give mathematicians the instructions for writing their own temperament finders for their own websites, or software packages, or idle amusement.

Either endeavour would be more worthy of my time than endless discussions about what temperaments to include on a list. But, while I'm here:

- log-flat looks like a good place to start

- silence is negative infinity in decibels

- spherical projection!

- can somebody give a friendly explanation of convex hulls?

- would k-means have anything to do with the clustering?

http://people.revoledu.com/kardi/tutorial/kMean/WhatIs.htm

Graham
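
For what the k-means suggestion would amount to here, a minimal sketch
on 2-D points such as (log complexity, log error) pairs; the sample data
in the comment are made up, and nothing is tied to any particular error
or complexity measure:

    import random

    def kmeans(points, k, iters=50, seed=0):
        # Plain Lloyd's algorithm on 2-D points, e.g. (log complexity,
        # log error) pairs for a batch of temperaments.
        rng = random.Random(seed)
        centers = rng.sample(points, k)
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for p in points:
                nearest = min(range(k),
                              key=lambda j: (p[0] - centers[j][0]) ** 2
                                            + (p[1] - centers[j][1]) ** 2)
                clusters[nearest].append(p)
            # Move each centre to the mean of its cluster (keep it if empty).
            centers = [((sum(q[0] for q in c) / len(c),
                         sum(q[1] for q in c) / len(c)) if c else centers[i])
                       for i, c in enumerate(clusters)]
        return centers, clusters

    # e.g. kmeans([(1.2, 2.5), (1.3, 2.4), (3.0, 0.5), (3.1, 0.4)], k=2)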

🔗Gene Ward Smith <gwsmith@svpal.org>

2/10/2004 2:00:37 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> > > OK, Carl, so everyone's been sorely underestimating the true
> > > usefulness of 665-equal and 612-equal, yes?
> >
> > Sounds like you are. Not everyone plays live music and has that as
> > their focus, like you.
>
> But are you using these to approximate JI or truly for their inherent
> properties?

I'm in the middle of working on an ennealimmal piece now. Inherent
properties are a major aspect for this kind of thing. 612 is a fine
way to tune ennealimmal, though I plan on using TOP for this one. This
stuff really is practical if you care to practice it.

In terms of commas, we have a sort of complexity of the harmonic
relationships they imply--distance measured in terms of the
symmetrical lattice norm possibly being more relevant here than
Tenney. Past a certain point the equivalencies aren't going to make
any difference to you, and there is another sort of complexity bound
to think about.
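
Two of the comma complexity measures being contrasted, sketched side by
side; the unweighted Euclidean length below is only a crude stand-in for
the symmetrical lattice norm Gene has in mind, which differs in detail:

    import math

    def tenney_height(n, d):
        # Tenney complexity of the ratio n/d in lowest terms: log2(n*d).
        return math.log2(n * d)

    def unweighted_length(odd_monzo):
        # Plain Euclidean length of the exponent vector over the odd
        # primes (2s ignored), with no Tenney weighting.
        return math.sqrt(sum(e * e for e in odd_monzo))

    # e.g. for 81/80 = |-4 4 -1>:
    # tenney_height(81, 80), unweighted_length((4, -1))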

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 2:16:50 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
>
> > Years ago, when you first made me aware of this fact, I was
seduced
> > by it, to Dave's dismay. Did you forget? Now, I'm thinking about
it
> > from a musician's point of view. Simply put, music based on
> > constructs requiring large numbers of pitches doesn't seem to be
able
> > to cohere in the way almost all the world's music does.
>
> You've gotten all the way up to 22 notes to the octave.

False, and I don't appreciate the sarcastic tone of this either.

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 2:20:03 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
>
> > > > OK, Carl, so everyone's been sorely underestimating the true
> > > > usefulness of 665-equal and 612-equal, yes?
> > >
> > > Sounds like you are. Not everyone plays live music and has that
as
> > > their focus, like you.
> >
> > But are you using these to approximate JI or truly for their
inherent
> > properties?
>
> I'm in the middle of working on an ennealimmal piece now. Inherent
> properties are a major aspect for this kind of thing.

You're using a full basis for the kernel? And it's audible? (Real
questions, not rhetorical or riddles.)

> 612 is a fine
> way to tune ennealimmal, though I plan on using TOP for this one.
This
> stuff really is practical if you care to practice it.
>
> In terms of commas, we have a sort of complexity of the harmonic
> relationships they imply--distance measured in terms of the
> symmetrical lattice norm possibly being more relevant here than
> Tenney.

How so? You really think a progression by perfect fifths is as
complex as a progression by ratios of 7?

> Past a certain point the equivalencies aren't going to make
> any differences to you, and there is another sort of complexity
bound
> to think about.

I thought this was the only kind. Can you elaborate?

🔗Paul Erlich <perlich@aya.yale.edu>

2/10/2004 2:24:39 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:

> > > It's possible we could come to some kind of consensus
> > > if you would attempt to treat people with something better than
the
> > > contempt you have shown lately.
> >
> > I can take your attitude in no other way, unless you either
ignored
> > completely or have an abominably low level of respect for the
> > discussions Dave and I posted on the topic.
>
> Dave has been treating the rest of us with respect and discussing
> things. I'd like the same from you.

Then let's start over, and out with the sarcasm!

> > Let's start over. If I'm willing to tolerate a certain level of
> > error, and a certain level of complexity, why wouldn't I be
willing
> > to tolerate both together?
>
> From some points of view complexity doesn't even matter,

?

> so the whole
> premise we've all been operating under can be questioned by someone
> who is interested in the character of the commas in the kernel, not
> what complexity they give.

Please elaborate on this point of view -- I'm not seeing it.

> As for your question, are you arguing *for*
> straight line error and complexity bounds, because I don't see where
> else you can possibly go with it?

Could you do me a favor and attempt to speak to me as a human being,
and not deal with me like a chess opponent, trying to look several
moves ahead so that you can defeat me?

> Tolerating both together is exactly
> what Dave doesn't tolerate, and I thought you agreed with that.

I'll let Dave speak for himself, but I was hoping we would be
starting over, rather than arguing about what was said before.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/10/2004 3:37:42 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
> wrote:
> > --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >
> > > The rectangle enclosed by error and complexity bounds. You
> answered
> > > that the axes were infinitely far away, but the badness line AB
> > > doesn't seem to be helping that.
> >
> > If you simply bound complexity alone, you get a finite number of
> > temperaments.
>
> That doesn't seem to be true. There are lots of low-complexity
> temperaments with arbitrarily high error.

"Lots" is not the same as "infintely many". If you bound complexity,
you bound the size of the numbers in a wedgie, and hence bound the
number of wedgies.
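
The finiteness argument in one line of code; how tightly a given
complexity cutoff caps the wedgie entries is left open here, so
max_entry is just a placeholder:

    def candidate_wedgie_bound(max_entry, length=6):
        # If complexity caps every wedgie entry at max_entry in absolute
        # value, there are at most (2*max_entry + 1)**length integer vectors
        # of that length, hence only finitely many candidate wedgies
        # (most of them, as Gene says, complete crap).
        return (2 * max_entry + 1) ** length

    # e.g. candidate_wedgie_bound(10) == 21**6 == 85766121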

🔗Gene Ward Smith <gwsmith@svpal.org>

2/10/2004 4:02:50 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> Unfortunately, we can't publish an infinitely long paper.

You don't get an infinite list in this way for slopes less than the
critical exponent.

> I've been
> working extremely hard to wrap my brain around your nearly
> impenetrable posts for years, going way out on a limb to defend you
> against your detractors, and hailing you as the savior of our cause.

In the last week, however, you've demoted me to being an annoying
idiot. Can you think about my hyperbola plan and tell me if I should
bother?

🔗Gene Ward Smith <gwsmith@svpal.org>

2/10/2004 4:07:33 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
> wrote:
> > --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> wrote:
> >
> > > I think the regular plot will be easier to explain than the log-
> log
> > > plot.
> >
> > Are you going to actually explain it, or just sweep that under the
> > rug?
>
> Yes, excuse me for the brutal honesty, but my track record for
> successfully explaining things to people is just a bit better than
> yours.

I rely on you for that. Can you possibly believe my track record for
working out the logic of a proposal is not a bad one, and that if I am
saying something it might be worth thinking about?

🔗Carl Lumma <ekin@lumma.org>

2/10/2004 4:11:27 PM

>> I haven't seen any cluster analysis yet!
>
>It was principal components analysis, but the reasoning behind the
>implementation was obscure.

I have no idea what you're talking about.

>> >Our starting point (but _only_ a starting point) is the knowledge
>> >we've built up, over many years spent on the tuning list, regarding
>> >what people find musically useful, with 5-limit ETs having had the
>> >greatest coverage.
>>
>> You're gravely mistaken about the pertinence of this 'data source'.
>> Even worse than culling intervals from the Scala archive.
>
>OK, Carl, so everyone's been sorely underestimating the true
>usefulness of 665-equal and 612-equal, yes?

No.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

2/10/2004 4:13:28 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> False, and I don't appreciate the sarcastic tone of this either.

I didn't appreciate learning I write incoherent music.

🔗Carl Lumma <ekin@lumma.org>

2/10/2004 4:17:40 PM

>> Assuming a system is never exhausted, how close do you think we've
>> come to where schismic, meantone, dominant 7ths, augmented, and
>> diminished are today with any other system?
>
>We don't care, since we're including *all* the systems with error and
>complexity no worse than *any* of these systems, as well as miracle.
>And that's quite a few!

But you can still make the same kind of error.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/10/2004 4:20:12 PM

>Years ago, when you first made me aware of this fact, I was seduced
>by it, to Dave's dismay. Did you forget? Now, I'm thinking about it
>from a musician's point of view. Simply put, music based on
>constructs requiring large numbers of pitches doesn't seem to be able
>to cohere in the way almost all the world's music does. Of all
>people, I'm surprised Carl is now throwing his investigations along
>these lines by the wayside.

I'm not. It is well known that Dave, for example, is far more
micro-biased than I! I'm just exploring possibilities.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/10/2004 4:22:50 PM

>> A musician is going to look at these plots, see that they show a
>> slantwise arrangement of ets, and conclude circles are the way to
>> analyze them,
>
>I wasn't one of those who brought up or discussed circles, but I
>certainly wouldn't want to seduce musicians with a plot that is not
>likely to correspond with musically meaningful pain measures -- not
>by a long shot!

The circle rocks, dude. It penalizes temperaments equally for trading
too much of their error for complexity, or complexity for error. Look
at the plots, and the first things you hit are 19, 12, and 53. And
22 in the 7-limit. Further, my suggestion that 1cents = zero should
satisfy Dave's micro fears. Or make 0 cents = zero. It works either
way. No origin; pfff.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/10/2004 4:26:16 PM

>> >> ...still trying to understand why the rectangle doesn't enclose
>> >> a finite number of temperaments...
>> >
>> >Which rectangle?
>>
>> The rectangle enclosed by error and complexity bounds.
>
>Yes, that would enclose a finite number of temperaments.

Then why the hell do we need a badness bound?

Alternatively, then why doesn't the badness bound alone enclose a
finite triangle?

>> >> My thoughts are that in the 5-limit, we might reasonably have a
>> >> chance of guessing a good list. But beyond that, I would cry
>> >> Judas if anyone here claimed they could hand-pick anything. So,
>> >> my question to you is: can a 5-limit moat be extrapolated upwards
>> >> nicely?
>> >
>> >Not sure what you mean by that.
>>
>> Which part? Can the equation/coordinates that defines your fav.
>> moat be taken from a 5-limit plot and slapped onto a 7-limit one?
>
>No. The units are not the same. Well, depends which 5-limit one and
>which 7-limit one you mean.

See my prev. message about notes units for complexity. Error units
ought to be the same!

>Remember, we're dealing with a Pascal's
>triangle, with one scenario for each number in the triangle, where
>the number itself tells you the number of elements in the wedgie, the
>row number is the number of primes, and the column number is the
>codimension.

I never knew that or forgot it!

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

2/10/2004 4:25:19 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> > I'm in the middle of working on an ennealimmal piece now. Inherent
> > properties are a major aspect for this kind of thing.
>
> You're using a full basis for the kernel? And it's audible? (Real
> questions, not rhetorical or riddles.)

I'm not sure what your question means. So far I've been sticking to
the 45 note DE, and obviously doing that makes a clear audible
difference. However, experience has shown that comma pumps on
2401/2400 or 4375/4374 are not so long that they fail to be
comprehensible. The audibility of the difference between the starting
and ending note if you temper is another matter; it's not a hell of a
big change, and I don't hear it myself.

> > 612 is a fine
> > way to tune ennealimmal, though I plan on using TOP for this one.
> This
> > stuff really is practical if you care to practice it.
> >
> > In terms of commas, we have a sort of complexity of the harmonic
> > relationships they imply--distance measured in terms of the
> > symmetrical lattice norm possibly being more relevant here than
> > Tenney.
>
> How so? You really think a progression by perfect fifths is as
> complex as a progression by ratios of 7?

It's precisely as complex in terms of the chord relationships
involved, so long as you stay below the 9-limit.

> > Past a certain point the equivalencies aren't going to make
> > any differences to you, and there is another sort of complexity
> > bound to think about.
>
> I thought this was the only kind. Can you elaborate?

If |a b c d> is a 7-limit monzo, the symmetrical lattice norm
(seminorm, if we are including 2) is
sqrt(b^2 + c^2 + d^2 + bc + bd + cd), and this may be viewed as its
complexity in terms of harmonic relationships of 7-limit chords. How
many consonant intervalsteps at minimum are needed to get there is
another and related measure.
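
A small sketch of that norm in Python (assuming a monzo given as
exponents of 2, 3, 5, 7, with the exponent of 2 ignored, i.e. the
seminorm reading; the example commas are the ennealimmal pair mentioned
earlier in the thread):

from math import sqrt

def symmetrical_lattice_norm(monzo):
    # Seminorm sqrt(b^2 + c^2 + d^2 + bc + bd + cd) for |a b c d>;
    # the exponent of 2 is dropped, so octave equivalence is assumed.
    _, b, c, d = monzo
    return sqrt(b*b + c*c + d*d + b*c + b*d + c*d)

print(symmetrical_lattice_norm((-5, -1, -2, 4)))  # 2401/2400 = |-5 -1 -2 4>
print(symmetrical_lattice_norm((-1, -7, 4, 1)))   # 4375/4374 = |-1 -7 4 1>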

🔗Carl Lumma <ekin@lumma.org>

2/10/2004 4:32:35 PM

>> and only Haverstick could be said to have worked in a "5-limit ET"
>> (and it's a stretch). We've got Miller, Smith and Pehrson left,
>> with the promising Erlich and monz stuck in theory and/or 12-tET
>> land. We're so far from any kind of form that would allow us to
>> make statements about musical utility that it's laughable.
>
>And why would you limit this information to those who have posted? We
>have also heard about composers who never go near the tuning lists.
>Darreg, Blackwood, Negri and Hanson come immediately to mind.

Of course I see your point, and you *can* make a case for it. But I
am convinced the experience we have so far might in fact tell us
*nothing* about the future. Therefore I am inclined, as usual, to
derive everything from first principles or a well-controlled
experiment.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/10/2004 4:35:07 PM

>> The rectangle enclosed by error and complexity bounds. You answered
>> that the axes were infinitely far away, but the badness line AB
>> doesn't seem to be helping that.
>
>If you simply bound complexity alone, you get a finite number of
>temperaments. Most are complete crap.

Above I suggest a rectangle which bounds complexity and error, not
complexity alone.

In the circle suggestion I suggest a circle plus a complexity bound
is sufficient.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

2/10/2004 4:42:29 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> > so the whole
> > premise we've all been operating under can be questioned by someone
> > who is interested in the character of the commas in the kernel, not
> > what complexity they give.
>
> Please elaborate on this point of view -- I'm not seeing it.

You can look at meantone as something which gives nice triads, as a
superior system because it has fifths for generators, as a nice deal
because of a low badness figure. Or, you can say, wow, it has 81/80,
126/125 and 225/224 all in the kernel, and look what that implies. The
last point of view has nothing directly to do with error and
complexity, though the relationship is a close one when analyzed. As
you move to lower-error systems, your interest in the error per se
falls off, and complexity from the point of view of a vast conceptual
keyboard not too interesting--but oh, those commas! That's where the
action is in some ways, and more so as we increase the prime limit and
we have commas up the wazoo. Ennealimmal is not just low-error, it has
commas which are still in the useful range.

Then, finally, they get so complex they become pointless.

> Could you do me a favor and attempt to speak to me as a human being,
> and not deal with me like a chess opponent, trying to look several
> moves ahead so that you can defeat me?

I washed out of the first round of the US correspondence championship.
It's my brother who is the grandmaster.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/10/2004 4:48:15 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> >> ...still trying to understand why the rectangle doesn't enclose
> >> >> a finite number of temperaments...
> >> >
> >> >Which rectangle?
> >>
> >> The rectangle enclosed by error and complexity bounds.
> >
> >Yes, that would enclose a finite number of temperaments.
>
> Then why the hell do we need a badness bound?

To keep the utter crap at bay, and allow us not to try to publish a
list of 1000000 "temperaments". Did you see the vast clouds of
darkness on Paul's plots?

🔗Carl Lumma <ekin@lumma.org>

2/10/2004 4:56:32 PM

>> Could you do me a favor and attempt to speak to me as a human being,
>> and not deal with me like a chess opponent, trying to look several
>> moves ahead so that you can defeat me?
>
>I washed out of the first round of the US correspondence championship.
>It's my brother who is the grandmaster.

Is he really a grandmaster?

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/10/2004 5:13:55 PM

>> >> >> ...still trying to understand why the rectangle doesn't enclose
>> >> >> a finite number of temperaments...
>> >> >
>> >> >Which rectangle?
>> >>
>> >> The rectangle enclosed by error and complexity bounds.
>> >
>> >Yes, that would enclose a finite number of temperaments.
>>
>> Then why the hell do we need a badness bound?
>
>To keep the utter crap at bay, and allow us not to try to publish a
>list of 1000000 "temperaments". Did you see the vast clouds of
>darkness on Paul's plots?

I'm looking at...

/tuning-math/files/Paul/et5loglog.gif

...is a badness bound here shown by a line, say, through 12 and 171,
as on Dave's mock-up ASCII plot? Oh, so you want to keep the likes
of 3 and 2513 off the list? If so, a circle would do this, or an
error-comp. rectangle with a diagonal at about 45deg. to either
axis.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

2/10/2004 10:14:59 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> Could you do me a favor and attempt to speak to me as a human being,
> >> and not deal with me like a chess opponent, trying to look several
> >> moves ahead so that you can defeat me?
> >
> >I washed out of the first round of the US correspondence championship.
> >It's my brother who is the grandmaster.
>
> Is he really a grandmaster?

He's not an OTB grandmaster, he is a correspondence grandmaster and
several times US champion.

🔗Carl Lumma <ekin@lumma.org>

2/10/2004 10:51:06 PM

>> >> Could you do me a favor and attempt to speak to me as a human being,
>> >> and not deal with me like a chess opponent, trying to look several
>> >> moves ahead so that you can defeat me?
>> >
>> >I washed out of the first round of the US correspondence championship.
>> >It's my brother who is the grandmaster.
>>
>> Is he really a grandmaster?
>
>He's not an OTB grandmaster, he is a correspondence grandmaster and
>several times US champion.

Wow, cool! I don't follow the correspondence scene, but I always get
a kick out of the name of the column in No Life magazine: The Check Is
In The Mail.
:)

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

2/11/2004 12:05:58 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
> > --- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<gwsmith@s...>
> > wrote:
> > > --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...>
wrote:
> > >
> > > > The rectangle enclosed by error and complexity bounds. You
> > > > answered that the axes were infinitely far away, but the badness
> > > > line AB doesn't seem to be helping that.
> > >
> > > If you simply bound complexity alone, you get a finite number of
> > > temperaments.
> >
> > That doesn't seem to be true. There are lots of low-complexity
> > temperaments with arbitrarily high error.
>
> "Lots" is not the same as "infintely many". If you bound complexity,
> you bound the size of the numbers in a wedgie, and hence bound the
> number of wedgies.

I realized this last night. Thanks for bearing with my stupidity.

🔗Paul Erlich <perlich@aya.yale.edu>

2/11/2004 12:08:44 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
>
> > Unfortunately, we can't publish an infinitely long paper.
>
> You don't get an infinite list in this way for slopes less than the
> critical exponent.

Right, exactly as I said, but as I also said, you get a lot of
emphasis on high-error temperaments, and I know Dave would be very
unhappy with that.

> Can you think about my hyperbola plan and tell me if I should
> bother?

I can't understand your reasoning there. I'd love to consider it if
you would elaborate on your thinking.

🔗Paul Erlich <perlich@aya.yale.edu>

2/11/2004 12:11:03 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
> > --- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<gwsmith@s...>
> > wrote:
> > > --- In tuning-math@yahoogroups.com, "Paul Erlich"
<perlich@a...>
> > wrote:
> > >
> > > > I think the regular plot will be easier to explain than the
> > > > log-log plot.
> > >
> > > Are you going to actually explain it, or just sweep that under
> > > the rug?
> >
> > Yes, excuse me for the brutal honesty, but my track record for
> > successfully explaining things to people is just a bit better
> > than yours.
>
> I rely on you for that. Can you possibly believe my track record for
> working out the logic of a proposal is not a bad one, and that if I
> am saying something it might be worth thinking about?

I've been thinking about it for years, and mostly supporting it. It's
just that I think Dave and Graham should both be in on this, and we
were going to lose Dave entirely if we didn't at least try to address
his objections. I'm hoping this process will continue, whenever Dave
gets back.

🔗Paul Erlich <perlich@aya.yale.edu>

2/11/2004 12:11:57 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> >> I haven't seen any cluster analysis yet!
> >
> >It was principal components analysis, but the reasoning behind the
> >implementation was obscure.
>
> I have no idea what you're talking about.

Well, I didn't have much grasp of it either. And I do principal
components analysis all the time!

🔗Paul Erlich <perlich@aya.yale.edu>

2/11/2004 12:14:14 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
>
> > False, and I don't appreciate the sarcastic tone of this either.
>
> I didn't appreciate learning I write incoherent music.

?

🔗Paul Erlich <perlich@aya.yale.edu>

2/11/2004 12:15:29 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> Assuming a system is never exhausted, how close do you think
> >> we've come to where schismic, meantone, dominant 7ths, augmented, and
> >> diminished are today with any other system?
> >
> >We don't care, since we're including *all* the systems with error
> >and complexity no worse than *any* of these systems, as well as
> >miracle.
> >And that's quite a few!
>
> But you can still make the same kind of error.
>
> -Carl

How so?

🔗Paul Erlich <perlich@aya.yale.edu>

2/11/2004 12:16:37 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >Years ago, when you first made me aware of this fact, I was
> >seduced by it, to Dave's dismay. Did you forget? Now, I'm thinking about
> >it from a musician's point of view. Simply put, music based on
> >constructs requiring large numbers of pitches doesn't seem to be
> >able to cohere in the way almost all the world's music does. Of all
> >people, I'm surprised Carl is now throwing his investigations along
> >these lines by the wayside.
>
> I'm not.

Then why are you suddenly silent on all this?

> It is well known that Dave, for example, is far more
> micro-biased than I!

?

🔗Paul Erlich <perlich@aya.yale.edu>

2/11/2004 12:19:27 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> A musician is going to look at these plots, see that they show a
> >> slantwise arrangement of ets, and conclude circles are the way to
> >> analyze them,
> >
> >I wasn't one of those who brought up or discussed circles, but I
> >certainly wouldn't want to seduce musicians with a plot that is
> >not likely to correspond with musically meaningful pain measures --
> >not by a long shot!
>
> The circle rocks, dude. It penalizes temperaments equally for
> trading too much of their error for complexity, or complexity for error.
> Look at the plots, and the first things you hit are 19, 12, and 53. And
> 22 in the 7-limit. Further, my suggestion that 1cents = zero should
> satisfy Dave's micro fears. Or make 0 cents = zero. It works
> either way.

It does? Look at the graph! How can you make 0 cents = zero when it's
infinitely far away? And what about the position of the origin on the
*complexity* axis??

> No origin; pfff.

piano-forte-forte-forte?

P.S. The relative scaling of the two axes is completely arbitrary,
so, even if you actually selected an origin, the circle would produce
different results for a different relative scaling.

🔗Paul Erlich <perlich@aya.yale.edu>

2/11/2004 12:24:20 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> >> ...still trying to understand why the rectangle doesn't enclose
> >> >> a finite number of temperaments...
> >> >
> >> >Which rectangle?
> >>
> >> The rectangle enclosed by error and complexity bounds.
> >
> >Yes, that would enclose a finite number of temperaments.
>
> Then why the hell do we need a badness bound?

We'd get an awful long list of temperaments without it, especially in
the ET case, if we're insisting on including at least one with
relatively high error and at least one with relatively high
complexity.

> Alternatively, then why doesn't the badness bound alone enclose a
> finite triangle?

Not only is it, like the rectangle, infinite in area on the loglog
plot, since the zero-error line and zero-complexity lines are
infinitely far away, but it actually encloses an infinite number of
temperaments.

> Error units
> ought to be the same!

Yes, an argument could be made for that, though we'd tend to insist
on tighter error bounds for higher limits.

> >Remember, we're dealing with a Pascal's
> >triangle, with one scenario for each number in the triangle, where
> >the number itself tells you the number of elements in the wedgie,
> >the row number is the number of primes, and the column number is the
> >codimension.
>
> I never knew that or forgot it!

Well, the point is that 'limit' is not the only 'dimension' in this
problem; for example in 7-limit, we have ETs, linear temperaments,
and planar temperaments.
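
A quick sketch of that bookkeeping in Python (assuming the reading that
a wedgie for n primes and codimension k has C(n, k) entries; the 7-limit
cases below are the ones named just above):

from math import comb

def wedgie_length(num_primes, codimension):
    # One Pascal's-triangle cell per scenario:
    # row = number of primes, column = codimension.
    return comb(num_primes, codimension)

# 7-limit has four primes (2, 3, 5, 7):
print(wedgie_length(4, 3))  # ETs: 4 entries
print(wedgie_length(4, 2))  # linear temperaments: 6 entries
print(wedgie_length(4, 1))  # planar temperaments: 4 entries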

🔗Paul Erlich <perlich@aya.yale.edu>

2/11/2004 12:27:59 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
>
> > > I'm in the middle of working on an ennealimmal piece now.
> > > Inherent properties are a major aspect for this kind of thing.
> >
> > You're using a full basis for the kernel? And it's audible? (Real
> > questions, not rhetorical or riddles.)
>
> I'm not sure what your question means. So far I've been sticking to
> the 45 note DE, and obviously doing that makes a clear audible
> difference. However, experience has shown that comma pumps on
> 2401/2400 or 4375/4374 are not so long that they fail to be
> comprehensible. The audibility of the difference between the starting
> and ending note if you temper is another matter; it's not a hell of
> a big change, and I don't hear it myself.

I think it's Dave's turn to ponder this.

> > > 612 is a fine
> > > way to tune ennealimmal, though I plan on using TOP for this one.
> > > This stuff really is practical if you care to practice it.
> > >
> > > In terms of commas, we have a sort of complexity of the harmonic
> > > relationships they imply--distance measured in terms of the
> > > symmetrical lattice norm possibly being more relevant here than
> > > Tenney.
> >
> > How so? You really think a progression by perfect fifths is as
> > complex as a progression by ratios of 7?
>
> It's precisely as complex in terms of the chord relationships
> involved, so long as you stay below the 9-limit.

Why do you say this? Is this some mathematical result, or your
subjective feeling? My ears certainly don't seem to agree.

> > > Past a certain point the equivalencies aren't going to make
> > > any differences to you, and there is another sort of complexity
> > > bound to think about.
> >
> > I thought this was the only kind. Can you elaborate?
>
> If |a b c d> is a 7-limit monzo, the symmetrical lattice norm
> (seminorm, if we are including 2) is
> sqrt(b^2 + c^2 + d^2 + bc + bd + cd), and this may be viewed as its
> complexity in terms of harmonic relationships of 7-limit chords. How
> many consonant intervalsteps at minimum are needed to get there is
> another and related measure.

I think the Tenney lattice is pretty ideal for this, because
progressing by simpler consonances is more comprehensible and thus
allows for longer chord progressions with the same subjective
complexity.

🔗Paul Erlich <perlich@aya.yale.edu>

2/11/2004 12:31:07 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> The rectangle enclosed by error and complexity bounds. You
> >> answered that the axes were infinitely far away, but the badness line AB
> >> doesn't seem to be helping that.
> >
> >If you simply bound complexity alone, you get a finite number of
> >temperaments. Most are complete crap.
>
> Above I suggest a rectangle which bounds complexity and error, not
> complexity alone.
>
> In the circle suggestion I suggest a circle plus a complexity bound
> is sufficient.

Can you give an example of the latter?

🔗Paul Erlich <perlich@aya.yale.edu>

2/11/2004 12:35:10 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
>
> > > so the whole
> > > premise we've all been operating under can be questioned by
> > > someone who is interested in the character of the commas in the kernel,
> > > not what complexity they give.
> >
> > Please elaborate on this point of view -- I'm not seeing it.
>
> You can look at meantone as something which gives nice triads, as a
> superior system because it has fifths for generators, as a nice deal
> because of a low badness figure. Or, you can say, wow, it has 81/80,
> 126/125 and 225/224 all in the kernel, and look what that implies.

Having 81/80 in the kernel implies you can harmonize a diatonic scale
all the way through in consonant thirds. Similar commas have similar
implications of the kind Carl always seemed to care about.
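
A quick check of the 81/80 claim in Python (assuming, for illustration,
the standard 5-limit meantone mapping with an octave period and fifth
generator, [<1 1 0], <0 1 4]>; any equivalent form of the mapping
behaves the same way):

# Meantone maps 2 -> (1, 0), 3 -> (1, 1), 5 -> (0, 4) in (periods, generators).
MEANTONE = ((1, 1, 0),   # periods (octaves)
            (0, 1, 4))   # generators (fifths)

def image(mapping, monzo):
    # Map a 5-limit monzo |a b c> to (periods, generators).
    return tuple(sum(r * m for r, m in zip(row, monzo)) for row in mapping)

print(image(MEANTONE, (-4, 4, -1)))  # 81/80 = |-4 4 -1>  ->  (0, 0): tempered out
print(image(MEANTONE, (-1, 1, 0)))   # 3/2 -> (0, 1): one generator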

> The
> last point of view has nothing directly to do with error and
> complexity, though the relationship is a close one when analyzed. As
> you move to lower-error systems, your interest in the error per se
> falls off, and complexity from the point of view of a vast
> conceptual keyboard not too interesting--but oh, those commas!

Example?

> That's where the
> action is in some ways, and more so as we increase the prime limit
> and we have commas up the wazoo. Ennealimmal is not just low-error, it
> has commas which are still in the useful range.

It has 4375:4374 in the kernel. Look what that implies. Umm . . .
what (musically useful thing)?

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 12:53:47 PM

>> >> Assuming a system is never exhausted, how close do you think
>> >> we've come to where schismic, meantone, dominant 7ths,
>> >> augmented, and diminished are today with any other system?
>> >
>> >We don't care, since we're including *all* the systems with error
>> >and complexity no worse than *any* of these systems, as well as
>> >miracle. And that's quite a few!
>>
>> But you can still make the same kind of error.
>>
>> -Carl
>
>How so?

1. The process of expansion into temperament space might not be
finished in the 5-limit.

2. If we don't know anything about 7-limit music, listing all
temperaments at least as "good" (never mind how we determine that) as
the ones used to date in 5-limit music might not mean anything.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 12:54:43 PM

>> I'm not.
>
>Then why are you suddenly silent on all this?

Huh? I've been posting at a record rate.

>> It is well known that Dave, for example, is far more
>> micro-biased than I!
>
>?

What's your question?

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 12:58:01 PM

>> Alternatively, then why doesn't the badness bound alone enclose a
>> finite triangle?
>
>Not only is it, like the rectangle, infinite in area on the loglog
>plot, since the zero-error line and zero-complexity lines are
>infinitely far away, but it actually encloses an infinite number of
>temperaments.

Huh; I thought I just saw you and Gene agreeing that a badness bound
alone does return a finite list of temperaments.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 1:02:55 PM

>> The circle rocks, dude. It penalizes temperaments equally for
>> trading too much of their error for complexity, or complexity
>> for error. Look
>> at the plots, and the first things you hit are 19, 12, and 53.
>> And 22 in the 7-limit. Further, my suggestion that 1cents = zero
>> should satisfy Dave's micro fears. Or make 0 cents = zero. It
>> works either way.
>
>It does? Look at the graph! How can you make 0 cents = zero when
>it's infinitely far away?

I thought I had a way to fudge it by adding a constant later,
but I can't remember it at the moment.

>And what about the position of the origin on the
>*complexity* axis??

I already answered that.

>P.S. The relative scaling of the two axes is completely arbitrary,

Howso? They're both base2 logs of fixed units. You mean c is
arbitrary in y = x + c? Selecting an "origin" effectively fixes c.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

2/11/2004 1:03:23 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> >> Assuming a system is never exhausted, how close do you think
> >> >> we've come to where schismic, meantone, dominant 7ths,
> >> >> augmented, and diminished are today with any other system?
> >> >
> >> >We don't care, since we're including *all* the systems with
> >> >error and complexity no worse than *any* of these systems, as well as
> >> >miracle. And that's quite a few!
> >>
> >> But you can still make the same kind of error.
> >>
> >> -Carl
> >
> >How so?
>
> 1. The process of expansion into temperament space might not be
> finished in the 5-limit.

Sure, but we can show a graph of where the temperaments occur in the
space, and of course provide the necessary math, so that further
expansion can be explored by the reader.

> 2. If we don't know anything about 7-limit music, listing all
> temperaments at least as "good" (never mind how we determine that)
> as the ones used to date in 5-limit music might not mean anything.

It might not. And we're not suggesting any "goodness" measure which
is applicable to both 5-limit and 7-limit systems of any respective
dimensionalities. But we are suggesting something similar be used in
each of the Pascal's triangle of cases, which seems logical. If it's
wrong, it's wrong, and there goes the premise of our paper. But it's
a theory paper, not an edict. I think if the criteria we use are
easily grasped and well justified, we will have done a great job
publishing something truly pioneering and valuable as fodder for
experimentation.

🔗Paul Erlich <perlich@aya.yale.edu>

2/11/2004 1:05:06 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> I'm not.
> >
> >Then why are you suddenly silent on all this?
>
> Huh? I've been posting at a record rate.

Not on this subject of cognitive limits that used to occupy you so.

> >> It is well known that Dave, for example, is far more
> >> micro-biased than I!
> >
> >?
>
> What's your question?

What does micro-biased mean, on what basis do you say this about you
vs. Dave, and what is its relevance here?

🔗Gene Ward Smith <gwsmith@svpal.org>

2/11/2004 1:06:10 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> > It's precisely as complex in terms of the chord relationships
> > involved, so long as you stay below the 9-limit.
>
> Why do you say this? Is this some mathematical result, or your
> subjective feeling? My ears certainly don't seem to agree.

It's a non-subjective musical result. The number of intervals
involved, or the number of chord changes for chords sharing a note, or
sharing two notes, is going to be exactly the same. This is why you
can map them 1-1. Getting to a 3/2 from a 1 is two steps for chords
sharing two notes, and one step for chords sharing a note (since they
share a 3/2.) Exactly the same is true of any other 7-limit
consonance, such as 7/4. You go from one major tetrad to another in
two steps of chords sharing an interval, no more, no less.

> > > > Past a certain point the equivalencies aren't going to make
> > > > any differences to you, and there is another sort of complexity
> > > bound
> > > > to think about.
> > >
> > > I thought this was the only kind. Can you elaborate?
> >
> > If |a b c d> is a 7-limit monzo, the symmetrical lattice norm
> > (seminorm, if we are including 2) is
> > sqrt(b^2 + c^2 + d^2 + bc + bd + cd), and this may be viewed as its
> > complexity in terms of harmonic relationships of 7-limit chords. How
> > many consonant intervalsteps at minimum are needed to get there is
> > another and related measure.
>
> I think the Tenney lattice is pretty ideal for this, because
> progressing by simpler consonances is more comprehensible and thus
> allows for longer chord progressions with the same subjective
> complexity.

The Tenney lattice is no good for this, since I am assuming octave
equivalence. The octave-class Tenney lattice could be argued for, but
chords sharing notes or intervals seems far more basic to me so far as
chords go. We can start from chords and then get back to the notes.

🔗Paul Erlich <perlich@aya.yale.edu>

2/11/2004 1:09:57 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> Alternatively, then why doesn't the badness bound alone enclose a
> >> finite triangle?
> >
> >Not only is it, like the rectangle, infinite in area on the loglog
> >plot, since the zero-error line and zero-complexity lines are
> >infinitely far away, but it actually encloses an infinite number
> >of temperaments.
>
> Huh; I thought I just saw you and Gene agreeing that a badness bound
> alone does return a finite list of temperaments.

Not the one you were referring to here.

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 1:12:09 PM

>> I suggest a rectangle which bounds complexity and error, not
>> complexity alone.
>>
>> In the circle suggestion I suggest a circle plus a complexity bound
>> is sufficient.
>
>Can you give an example of the latter?

Fix the origin at 1 cent and 1 note, and the complexity < whatever
you want. 100 notes? 20 notes? Or just grow the radius until
you enclose the number of temperaments you want to list.
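
A minimal sketch of that selection rule in Python (assuming error in
cents, complexity in notes, and the suggested origin of 1 cent and 1
note; the toy data are hypothetical, not real figures):

from math import hypot

def pick_by_circle(temperaments, n_wanted, origin=(1.0, 1.0)):
    # Grow a circle around the origin until it encloses n_wanted temperaments;
    # each temperament is (name, error_in_cents, complexity_in_notes).
    e0, c0 = origin
    ranked = sorted(temperaments, key=lambda t: hypot(t[1] - e0, t[2] - c0))
    return ranked[:n_wanted]

toys = [("A", 4.2, 12), ("B", 1.7, 19), ("C", 0.3, 53), ("D", 9.8, 5)]
print(pick_by_circle(toys, 2))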

I originally thought to include only the upper-right quadrant, but
a circle all the way around the origin might be nice to see.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 1:13:19 PM

>> You can look at meantone as something which gives nice triads, as a
>> superior system because it has fifths for generators, as a nice deal
>> because of a low badness figure. Or, you can say, wow, it has 81/80,
>> 126/125 and 225/224 all in the kernel, and look what that implies.
>
>Having 81/80 in the kernel implies you can harmonize a diatonic scale
>all the way through in consonant thirds. Similar commas have similar
>implications of the kind Carl always seemed to care about.

Don't you mean 25:24?

Yes, I do care very much about this.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

2/11/2004 1:16:45 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> The circle rocks, dude. It penalizes temperaments equally for
> >> trading too much of their error for complexity, or complexity
> >> for error. Look
> >> at the plots, and the first things you hit are 19, 12, and 53.
> >> And 22 in the 7-limit. Further, my suggestion that 1cents = zero
> >> should satisfy Dave's micro fears. Or make 0 cents = zero. It
> >> works either way.
> >
> >It does? Look at the graph! How can you make 0 cents = zero when
> >it's infinitely far away?
>
> I thought I had a way to fudge it by adding a constant later,
> but I can't remember it at the moment.

Let me know when you do.

> >And what about the position of the origin on the
> >*complexity* axis??
>
> I already answered that.

Where? I didn't see anything on that, but I could have misunderstood
something.

> >P.S. The relative scaling of the two axes is completely arbitrary,
>
> Howso? They're both base2 logs of fixed units.

Actually, the vertical axis isn't base anything, since it's a ratio
of logs. The base of the horizontal axis is arbitrary, so the scaling
between the two is arbitrary. But then the loglog plots take an
additional log of both (I've been showing power-of-10 tick marks),
and in that case, I suppose you're right that the units aren't
arbitrary relative to one another . . . Unfortunately I haven't been
drawing them with the same scaling on both axes, as you can easily
see. So if your circle was based on looking at the graphs, it would
become a highly eccentric ellipse when the two logarithmic axes
actually do use the same scaling.

> You mean c is
> arbitrary in y = x + c?

Not what I meant, but this is the equation of a line, not a circle.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 1:29:13 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
> wrote:
> > I rely on you for that. Can you possibly believe my track record for
> > working out the logic of a proposal is not a bad one, and that if I
> > am saying something it might be worth thinking about?
>
> I've been thinking about it for years, and mostly supporting it. It's
> just that I think Dave and Graham should both be in on this, and we
> were going to lose Dave entirely if we didn't at least try to address
> his objections. I'm hoping this process will continue, whenever Dave
> gets back.

Hey Paul, I assume your recent change of mind on this stuff wasn't
just so you wouldn't "lose" me. I certainly never made any threats of
that kind.

Part of the reason I suggested recently that you go ahead without me,
was the difficulty I sometimes have communicating with Gene in a civil
manner, but also because I'm starting some new (paying) work where I'm
going to have a lot less time for this list, and won't be able to
continue to follow discussions as closely as the current one. I'm
already working two days a week at this new job (Wed & Fri) and will
be doing it four days a week starting next week (Tue to Fri).

Gene, I feel I've been bending over backwards to accommodate you
recently and I don't think I can keep it up. I have found it
extraordinary how often you have been unable to find plots or figure
out their axes, even after I have posted that information in response
to questions by Carl (but I agree Paul should have labelled the axes
unless the software made it difficult). And yet you seem to think it's
fine to post lists of numbers with no column headings, and that it's
easy to figure them out. And we have been considering your cutoff
proposals for years. I can't for the life of me understand why you say
we haven't.

Why can't you understand that the mathematical fact that temperaments
come out with a straight edge on a log-log plot has absolutely no
bearing on which of them will be found musically useful to humans. It
is a beautiful mathematical fact and no more.

Humans seem to find a particular region of complexity and error
attractive and have a certain approximate function relating error and
complexity to usefulness. Extra-terrestrial music-makers (or humpback
whales) may find completely different regions attractive.

These are _not_ facts of mathematics, but of psychology and
physiology. We don't have much data on them, but we are far from
having _none_ as Carl hyperbolically insists.

I can certainly understand Paul's impatience. But he has agreed to
work thru it with you again from the start, so let's see what happens
there.

🔗Paul Erlich <perlich@aya.yale.edu>

2/11/2004 1:33:20 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
>
> > > It's precisely as complex in terms of the chord relationships
> > > involved, so long as you stay below the 9-limit.
> >
> > Why do you say this? Is this some mathematical result, or your
> > subjective feeling? My ears certainly don't seem to agree.
>
> It's a non-subjective musical result. The number of intervals
> involved,

Not the only valid measure of complexity.

> or the number of chord changes for chords sharing a note, or
> sharing two notes, is going to be exactly the same.
> This is why you
> can map them 1-1. Getting to a 3/2 from a 1 is two steps for chords
> sharing two notes, and one step for chords sharing a note (since
> they share a 3/2.) Exactly the same is true of any other 7-limit
> consonance, such as 7/4.

How do you know that? What if your chords are 9-limit chords (either
complete or saturated)?

> You go from one major tetrad to another in
> two steps of chords sharing an interval, no more, no less.

Why should I only care about major tetrads?

> > > > > Past a certain point the equivalencies aren't going to make
> > > > > any differences to you, and there is another sort of
> > > > > complexity bound to think about.
> > > >
> > > > I thought this was the only kind. Can you elaborate?
> > >
> > > If |a b c d> is a 7-limit monzo, the symmetrical lattice norm
> > > (seminorm, if we are including 2) is
> > > sqrt(b^2 + c^2 + d^2 + bc + bd + cd), and this may be viewed as
> > > its complexity in terms of harmonic relationships of 7-limit
> > > chords. How many consonant intervalsteps at minimum are needed to get there
> > > is another and related measure.
> >
> > I think the Tenney lattice is pretty ideal for this, because
> > progressing by simpler consonances is more comprehensible and
> > thus allows for longer chord progressions with the same subjective
> > complexity.
>
> The Tenney lattice is no good for this, since I am assuming octave
> equivalence.

Then something like the Kees lattice should be used, but this
assumption would add a new chapter to our paper that would probably
make it too long.

> The octave-class Tenney lattice could be argued for,

The one that makes 15:8 equally complex as 5:3? Never.

> but
> chords sharing notes or intervals seems far more basic to me so far
> as chords go.

In C major, the progression between C major and D minor triads
doesn't use any shared notes. Is that a problem?

> We can start from chords and then get back to the notes.

I don't want to assume any particular chord structures; that would
make this whole enterprise far less general and might doom it to
being nothing more than an academic curiosity.

🔗Paul Erlich <perlich@aya.yale.edu>

2/11/2004 1:35:53 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> I suggest a rectangle which bounds complexity and error, not
> >> complexity alone.
> >>
> >> In the circle suggestion I suggest a circle plus a complexity
bound
> >> is sufficient.
> >
> >Can you give an example of the latter?
>
> Fix the origin at 1 cent and 1 note,

Complexity is only measured in "notes" in the ET cases, and even then
there's arbitrariness to it (notes per octave? tritave?)

> and the complexity < whatever
> you want. 100 notes? 20 notes?

Why would you need a complexity bound in addition to the circle? The
circle, being finite, would only extend to a certain maximum
complexity anyway . . .

🔗Paul Erlich <perlich@aya.yale.edu>

2/11/2004 1:37:06 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> You can look at meantone as something which gives nice triads,
> >> as a superior system because it has fifths for generators, as a nice
> >> deal because of a low badness figure. Or, you can say, wow, it has
> >> 81/80, 126/125 and 225/224 all in the kernel, and look what that
> >> implies.
> >
> >Having 81/80 in the kernel implies you can harmonize a diatonic
> >scale all the way through in consonant thirds. Similar commas have
> >similar implications of the kind Carl always seemed to care about.
>
> Don't you mean 25:24?

No, 81:80. 25:24 in the kernel doesn't give you either a diatonic
scale or 'consonant thirds'.

🔗Paul Erlich <perlich@aya.yale.edu>

2/11/2004 1:54:37 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
> > --- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<gwsmith@s...>
> > wrote:
> > > I rely on you for that. Can you possibly believe my track
> > > record for working out the logic of a proposal is not a bad one, and that
> > > if I am saying something it might be worth thinking about?
> >
> > I've been thinking about it for years, and mostly supporting it.
> > It's just that I think Dave and Graham should both be in on this, and
> > we were going to lose Dave entirely if we didn't at least try to
> > address his objections. I'm hoping this process will continue, whenever
> > Dave gets back.
>
> Hey Paul, I assume your recent change of mind on this stuff wasn't
> just so you wouldn't "lose" me. I certainly never made any threats
> of that kind.

It seemed like you were saying something like this (though not
a "threat") on the tuning list. I hate rehashing, but I could find
the posts in question . . . But I think it would be valuable if the
four of us could put something together, rather than splintering off
and then possibly having fights about priority or whatnot. So a
little politics didn't seem out of order. Not to mention I think a
lot more could be said for your case, particularly by the most
active "other", namely Carl.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/11/2004 2:20:05 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...> wrote:

> Why can't you understand that the mathematical fact that temperaments
> come out with a straight edge on a log-log plot has absolutely no
> bearing on which of them will be found musically useful to humans.

Does this mean you are simply going to blow me off and ignore the hard
work I have done trying to accommodate both you and Paul?

> I can certainly understand Paul's impatience.

Can you understand my impatience with the fact that neither you nor
Paul seems willing even to listen to me?

🔗Gene Ward Smith <gwsmith@svpal.org>

2/11/2004 2:21:31 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> How do you know that? What if your chords are 9-limit chords (either
> complete or saturated)?

Obviously they aren't, because I said we are staying below the 9-limit.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 2:51:28 PM

--- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:
...
> But, while I'm here:
>
> - log-flat looks like a good place to start

We _did_ start there, some years ago, and have been there ever since.
What I don't understand is the resistance to moving on, and finding
something that's approximately representative of human musical
interest in these things.

And if you mean it's a good function to modify to obtain the latter,
then that's fine. Every smooth cutoff function that Paul or Carl or I
have proposed so far (and almost certainly any that might be proposed
in the future) is tangent to log-flat badness. What more could you ask?

> - silence is negative infinity in decibels

Yes, what Paul said is technically wrong. But that doesn't alter the
fact that we don't experience mistuning pain as anything like
log(cents). Cents is already log(frequency).

> - spherical projection!

I think we can safely forget about that one.

> - can somebody give a friendly explanation of complex hulls?

I think that was _convex_ hull. That's just the smallest convex shape
that encloses a set of points - a polygon with the "outermost" points
as its vertices. I think it goes without saying that we would never
exclude a temperament that was inside the convex hull of the included
temperaments. Although not as formalised (yet), moats can be
considered as taking the convex hull idea a little further, by
including not only those _inside_ the convex hull but also those
_close_ to the outside of it, and insisting that the hull is not only
convex, but smooth.
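
For what it's worth, here is a short sketch of the convex hull
construction itself (Andrew's monotone chain) in Python; the points are
hypothetical (complexity, error) pairs, not temperament data:

def cross(o, a, b):
    # 2-D cross product of OA and OB; positive means a counter-clockwise turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    # Smallest convex polygon enclosing the points, vertices in ccw order.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

print(convex_hull([(1, 1), (2, 5), (3, 3), (5, 2), (4, 4), (2, 2)]))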

> - would k-means have anything to do with the clustering?
>
> http://people.revoledu.com/kardi/tutorial/kMean/WhatIs.htm

Sorry. Don't know anything about these.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/11/2004 3:03:31 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...>
wrote:

> What I don't understand is the resistance to moving on, and finding
> something that's approximately representative of human musical
> interest in these things.

What I don't understand is why when I try to do that I get either
ignored or berated.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 3:19:08 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
> wrote:
> > --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> wrote:
> >
> > > > I'm in the middle of working on an ennealimmal piece now.
> Inherent
> > > > properties are a major aspect for this kind of thing.
> > >
> > > You're using a full basis for the kernel? And it's audible? (Real
> > > questions, not rhetorical or riddles.)
> >
> > I'm not sure what your question means. So far I've been sticking to
> > the 45 note DE, and obviously doing that makes a clear audible
> > difference. However, experience has shown that comma pumps on
> > 2401/2400 or 4375/4374 are not so long that they fail to be
> > comprehensible. The audibility of the difference between the starting
> > and ending note if you temper is another matter; it's not a hell of
> > a big change, and I don't hear it myself.
>
> I think it's Dave's turn to ponder this.

I feel like Shaw in Monty Python's 'Oscar Wilde' sketch. :-)

http://www.geocities.com/TelevisionCity/8889/poetry/mp-wilde.htm

I'm afraid I lost the thread and don't have time to pick it up.

I'm guessing ennealimmal is so complex and so close to 7-limit JI that
most musicians would happily use the two interchangeably without
noticing. Is that something we could test?

> > > > 612 is a fine
> > > > way to tune ennealimmal, though I plan on using TOP for this
> one.

I don't see how the fact that x/612ths of an octave is a fine way to
tune the ennealimmal generator has any bearing on the musical
usefulness of 612-ET _as_an_ET_. One can't hear the difference between
ennealimmal tuned as a subset of 612-ET and tuned with an
ever-so-slightly different generator that is an irrational fraction of
an octave.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/11/2004 3:38:51 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...> wrote:

> I don't see how the fact that x/612ths of an octave is a fine way to
> tune the ennealimmal generator has any bearing on the musical
> usefulness of 612-ET _as_an_ET_.

Try thinking like a computer composer, who could well desire to keep
track of things more easily by scoreing in terms of reasonably small
integers.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 3:58:46 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
> --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> > >> It is well known that Dave, for example, is far more
> > >> micro-biased than I!
> > >
> > >?
> >
> > What's your question?
>
> What does micro-biased mean, on what basis do you say this about you
> vs. Dave, and what is its relevance here?

I'd like to know what you mean by micro-biased. It may well be true,
but I'd like to know.

At the moment I fell you should be calling me "centrally biased" or
some such. I don't want to include either the very high error low
complexity or very high complexity low error temperaments that a
log-flat cutoff alone would include.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 4:11:26 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
> --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> > The circle rocks, dude. It penalizes temperaments equally for
> > trading too much of their error for complexity, or complexity for error.
> > Look at the plots, and the first things you hit are 19, 12, and 53. And
> > 22 in the 7-limit. Further, my suggestion that 1cents = zero should
> > satisfy Dave's micro fears. Or make 0 cents = zero. It works
> > either way.
>
> It does? Look at the graph! How can you make 0 cents = zero when it's
> infinitely far away? And what about the position of the origin on the
> *complexity* axis??
>
> > No origin; pfff.
>
> piano-forte-forte-forte?
>
> P.S. The relative scaling of the two axes is completely arbitrary,
> so, even if you actually selected an origin, the circle would produce
> different results for a different relative scaling.

But an elliptical cutoff in log-log space could be made to work. You
do have to choose nonzero values of error and complexity to represent
zero pain (the center of the ellipse). But that's OK. As Graham
pointed out, that's exactly what we do with loudness.

So I'd also like to see if one of these elliptical-log beasties can be
made to give the same list as Paul's red line on the 7-limit-LT
log-log plot. How about it Carl?
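
A sketch of what such an elliptical cutoff could look like in Python
(the zero-pain center and the semi-axes below are illustrative
constants, not values anyone has agreed on):

from math import log

def elliptical_pain(error_cents, complexity, err0=1.0, comp0=1.0, a=3.0, b=3.0):
    # Squared elliptical distance of (log error, log complexity) from a
    # nonzero "zero-pain" center (err0, comp0); a and b set the semi-axes.
    x = (log(error_cents) - log(err0)) / a
    y = (log(complexity) - log(comp0)) / b
    return x * x + y * y

def inside_cutoff(error_cents, complexity, **kw):
    # Include a temperament if it falls inside the ellipse.
    return elliptical_pain(error_cents, complexity, **kw) <= 1.0

print(inside_cutoff(2.8, 12))    # hypothetical point well inside
print(inside_cutoff(60.0, 300))  # hypothetical point far outside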

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 4:21:38 PM

>And we're not suggesting any "goodness" measure which
>is applicable to both 5-limit and 7-limit systems of any respective
>dimensionalities.

Any fundamental reason why not?

>But we are suggesting something similar be used in
>each of the Pascal's triangle of cases, which seems logical.

I'm a bit lost with the Pascal's triangle stuff. Can you populate
a triangle with the things you're associating with it? Such would
be grand, in the Wilson tradition....

>If it's
>wrong, it's wrong, and there goes the premise of our paper. But it's
>a theory paper, not an edict. I think if the criteria we use are
>easily grasped and well justified, we will have done a great job
>publishing something truly pioneering and valuable as fodder for
>experimentation.

We have a choice -- derive badness from first principles or cook
it from a survey of the tuning list, our personal tastes, etc.

-Carl

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 4:28:59 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...> wrote:
>
>
> > Why can't you understand that the mathematical fact that temperaments
> > come out with a straight edge on a log-log plot has absolutely no
> > bearing on which of them will be found musically useful to humans.
>
> Does this mean you are simply going to blow me off and ignore the hard
> work I have done trying to accommodate both you and Paul?

Absolutely not.

> > I can certainly understand Paul's impatience.
>
> Can you understand my impatience with the fact that neither you nor
> Paul seems willing even to listen to me?

For god's sake man, what _are_ you talking about?

I'm starting to think that for you, "listen to me" means, "agree with
everything I say", or at least "follow up every direction of
investigation I suggest".

What do we have to do to convince you we're listening? You'd convince
us that you were listening to us if you didn't have to ask for reposts
of information that's already been recently posted twice.

Sorry. I'm getting hot under the collar again, and that's pretty bad
because I'm not even wearing a shirt ..., but it's already 30 degrees
Celsius here with extreme humidity, at 10 in the morning. :-)

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 4:30:04 PM

>> >> I'm not.
>> >
>> >Then why are you suddenly silent on all this?
>>
>> Huh? I've been posting at a record rate.
>
>Not on this subject of cognitive limits that used to occupy you so.

Those apply to scales, not tunings. Ideally the paper would show
how to use the tools of temperament to find both. But that's up
to you guys. Dave doesn't seem to want the macros which would
be necessary for the scale-building stuff.

>> >> It is well known that Dave, for example, is far more
>> >> micro-biased than I!
>> >
>> >?
>>
>> What's your question?
>
>What does micro-biased mean, on what basis do you say this about you
>vs. Dave, and what is its relevance here?

Micro-biased means biased in favor of microtemperaments. I've
historically fought for macros vs. Dave. But in general if it
ever appears that I'm taking a side on any of these lists, please
stop and consider that I rarely do so -- I sometimes appear to
do so if a position hasn't been _explored_ to my satisfaction.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 4:38:26 PM

>> >> Alternatively, then why doesn't the badness bound alone enclose a
>> >> finite triangle?
>> >
>> >Not only is it, like the rectangle, infinite in area on the loglog
>> >plot, since the zero-error line and zero-complexity lines are
>> >infinitely far away, but it actually encloses an infinite number
>> >of temperaments.

Yet on ET charts like this...

/tuning-math/files/Paul/et5loglog.gif

...the region beneath the 7-53 diagonal is empty. Is there stuff
there you haven't plotted?

Wait -- and how can ETs appear more than once -- different maps?
That might explain different errors, but they are appearing at
different complexities too... baffling.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 4:48:20 PM

>> >And what about the position of the origin on the
>> >*complexity* axis??
>>
>> I already answered that.
>
>Where? I didn't see anything on that, but I could have misunderstood
>something.

Sorry; you use 1 cent and 1 note as zeros.

>> >P.S. The relative scaling of the two axes is completely arbitrary,
>>
>> Howso? They're both base2 logs of fixed units.
>
>Actually, the vertical axis isn't base anything, since it's a ratio
>of logs.

That cents are log seems irrelevant. They're fundamental units!

>> You mean c is
>> arbitrary in y = x + c?
>
>Not what I meant, but this is the equation of a line, not a circle.

Yes, I know. But I wasn't trying to give a circle (IIRC that form
is like x**2 + y**2 something something), or a line, but the
intersection point of the axes, which is what I thought you meant by
relative scaling. That means I only meant the above to apply when
either x or y is zero, I think. Anyway, I don't think changing the
intersection point would turn a circle into an ellipse, so you must
have meant something else.

If a circle is just so unsatisfactory, please instead consider my
suggestion to be that we equally penalize temperaments for trading
too much of their error for comp., or too much of their comp for
error.
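
To make that concrete, here is a minimal sketch of a circular cutoff on
the log-log plot, assuming base-2 logs with 1 cent and 1 note as the
zeros mentioned above; the centre and radius are left as free
parameters rather than anyone's settled choice:

from math import log2, hypot

def inside_circle(error_cents, complexity_notes, radius, center=(0.0, 0.0)):
    # Point on the log-log plot: x = log2 of complexity (1 note -> 0),
    # y = log2 of error in cents (1 cent -> 0).
    x = log2(complexity_notes)
    y = log2(error_cents)
    cx, cy = center
    # Keep the temperament if it lies within `radius` of `center`.
    return hypot(x - cx, y - cy) <= radius

Anything outside the radius is cut, whether it got there through error,
through complexity, or by trading one for the other.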

Incidentally, I don't see the point of a moat vs. a circle, since
the moat's 'hole' is apparently empty on your charts -- but I
guess the moat is only meant for linear-linear, or?

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

2/11/2004 4:49:25 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> > It's a non-subjective musical result. The number of intervals
> > involved,
>
> Not the only valid measure of complexity.

Who said it was?

> Why should I only care about major tetrads?

The same applies to minor tetrads. As to why you should care about
tetrads, the only 7-limit JI harmony there is, is tetrads or subsets of
tetrads.

> Then something like the Kees lattice should be used, but this
> assumption would add a new chapter to our paper that would probably
> make it too long.

I wasn't suggesting adding it to our paper, I was hoping to add it to
our thinking.

> In C major, the progression between C major and D minor triads
> doesn't use any shared notes. Is that a problem?

It's not a problem, but it does mean that it is less organic than from
C to a or e; it is well-known that ii relates to both IV and V, and
this *does* involve shared notes. iii is sometimes hardly regarded as
a separate tonal presence, since it ties so closely to both I and V.

> > We can start from chords and then get back to the notes.
>
> I don't want to assume any particular chord structures; that would
> make this whole enterprise far less general and might doom it to
> being nothing more than an academic curiosity.

I hope you have not become allergic to thinking about theory except in
terms of a paper. As for chord structures, in the 5 and 7 odd limits,
with no tempering, you don't have a lot of choices.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 4:52:06 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...> wrote:
>
> > I don't see how the fact that x/612ths of an octave is a fine way to
> > tune the ennealimmal generator has any bearing on the musical
> > usefulness of 612-ET _as_an_ET_.
>
> Try thinking like a computer composer, who could well desire to keep
track of things more easily by scoring in terms of reasonably small
> integers.

OK. I'll grant that that is a form of musical usefulness for 612-ET
but it is not derived directly from the n-limit properties of 612-ET
itself, but only indirectly from the properties of ennealimmal (which
are themselves of dubious utility).

Even if we want to include _this_ kind of derived usefulness in our
considerations I think an ET should only inherit a tiny fraction of
the usefulness of an LT it supports.

The other way of approaching this (which I favour at present) is to
say that the support of LTs should have no such impact on the
inclusion or otherwise of an ET _as_an_ET_, because the ET will get
its due in this regard when the LTs are listed since we would include
a column giving the ETs that well support each ET.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 4:58:12 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> We have a choice -- derive badness from first principles or cook
> it from a survey of the tuning list, our personal tastes, etc.

What first principles of the human psychology of the musical use of
temperaments did you have in mind?

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 5:04:25 PM

>Humans seem to find a particular region of complexity and error
>attractive and have a certain approximate function relating error and
>complexity to usefulness. Extra-terrestrial music-makers (or humpback
>whales) may find completely different regions attractive.

This seems to be the key statement of this thread. I don't think
this has been established. If it had, I'd be all for it. But it
seems instead that whenever you cut out temperament T, somebody
could come along and do something with T that would make you wish
you hadn't cut it. Therefore it seems logical to use something
that allows a comparison of temperaments in any range (like logflat).
Then no matter what T is, we can say...

"You could have used U, which is in the same range but better."

...or...

"T's the best in that range. Bravo!"

...The worst that could happen would seem to be...

"T falls outside the range we established for our paper, sorry."

...in which case the reader could perform his own analysis in the
above way. With a cooked acceptance region, however, the following
could happen...

"Oh, T. It didn't meet our guesses about human cognition, but YMMV."

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 5:07:36 PM

>> and the complexity < whatever
>> you want. 100 notes? 20 notes?
>
>Why would you need a complexity bound in addition to the circle? The
>circle, being finite, would only extend to a certain maximum
>complexity anyway . . .

To determine its radius.

-C.

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 5:09:30 PM

>>>Having 81/80 in the kernel implies you can harmonize a diatonic
>>>scale all the way through in consonant thirds. Similar commas
>>>have similar implications of the kind Carl always seemed to care
>>>about.
>>
>> Don't you mean 25:24?
>
>No, 81:80. 25:24 in the kernel doesn't give you either a diatonic
>scale or 'consonant thirds'.

Oh, in the kernel means tempered out (right?) giving neutral
thirds. So it isn't immediately obvious why 81:80 throws 5:4
and 6:5 on the same scale degree (always thirds?)...

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

2/11/2004 5:11:12 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...> wrote:

> I'm starting to think that for you, "listen to me" means, "agree with
> everything I say", or at least "follow up every direction of
> investigation I suggest".

It means at least not to scoff at the idea of using lines without
first thinking about what they can do, not telling people that what
they are doing is useless and they should toddle off and compute
another list instead, and in general that you let someone outside of
the committee of two in on the conversation if they seem to want to help.

> What do we have to do to convince you we're listening?

> You'd convince
> us that you were listening to us if you didn't have to ask for reposts
> of information that's already been recently posted twice.

Paul posted a loglog graph, and I had trouble understanding it at
first--my eyesight is probably not as good as yours and I have trouble
seeing these things, and was relying mostly on what Paul said about
it--and it got me off onto the wrong foot, with the whole business of
trying to make elliptical regions. Then I forgot that's what the
graph was, sorry--but it wasn't labeled when I looked at it again.

> Sorry. I'm getting hot under the collar again, and that's pretty bad
> because I'm not even wearing a shirt ..., but it's already 30 degrees
> Celsius here with extreme humidity, at 10 in the morning. :-)

Ouch. Try moving to California. :)

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 5:12:48 PM

>> What I don't understand is the resistance to moving on, and finding
>> something that's approximately representative of human musical
>> interest in these things.
>
>What I don't understand is why when I try to do that I get either
>ignored or berated.

Gene, please understand that at least a few of us would love nothing
more than to be able to follow your posts, but it isn't necessarily
any easier to do it than it is for you to follow our posts. In fact,
it's probably harder.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 5:16:27 PM

>> > >> It is well known that Dave, for example, is far more
>> > >> micro-biased than I!
>> > >
>> > >?
>> >
>> > What's your question?
>>
>> What does micro-biased mean, on what basis do you say this about you
>> vs. Dave, and what is its relevance here?
>
>I'd like to know what you mean by micro-biased. It may well be true,
>but I'd like to know.

Of all the amazing things I've seen on these lists, the failure of
both you and Paul to understand the meaning of "micro-biased" is
possibly the most amazing.

>At the moment I feel you should be calling me "centrally biased" or
>some such.

Obviously you did understand it!

>I don't want to include either the very high error low
>complexity or very high complexity low error temperaments that a
>log-flat cutoff alone would include.

Yes, you are apparently centrally biased. You should like circles
in that case. :)

-Carl

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 5:17:20 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> Dave doesn't seem to want the macros which would
> be necessary for the scale-building stuff.

What are macros? Why can't you do scale-building stuff without them?

> >> >> It is well known that Dave, for example, is far more
> >> >> micro-biased than I!
> >> >
> >> >?
> >>
> >> What's your question?
> >
> >What does micro-biased mean, on what basis do you say this about you
> >vs. Dave, and what is its relevance here?
>
> Micro-biased means biased in favor of microtemperaments.

Then you are wrong. I'm not micro-biased. I'm biased in favour of
temperaments whose error and complexity are not enormously greater
than any that have ever been used before _as_approximations_of_JI_. In
my book, if you can't make a recognisably consonant complete otonal
n-limit chord in it, with no odentity duplicated at octaves, and
without using a custom-made scale-based timbre, then it ain't an
n-limit temperament (i.e. approximation of n-limit JI), useful though
it may be for other purposes.

> I've
> historically fought for macros vs. Dave. But in general if it
> ever appears that I'm taking a side on any of these lists, please
> stop and consider that I rarely do so -- I sometimes appear to
> do so if a position hasn't been _explored_ to my satisfaction.

And good on you for that. You and Paul forced me to nail down what I
object to about them, as above.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 5:25:56 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> Incidentally, I don't see the point of a moat vs. a circle, since
> the moat's 'hole' is apparently empty on your charts -- but I
> guess the moat is only meant for linear-linear, or?

There's no vs. between moats and circles. And moats are meant just as
much for log-log.

The circle would ideally be placed so that it just touches two or
three of the included temperaments and has no temperament close to the
outside of it, i.e. so it is surrounded by a moat.

You know what a moat is, right? You have the castle (the circle is its
outer bound) with people (temperaments) inside. Then you have the
moat surrounding that, with no people in it. Then you have the rest of
the world with the rest of the people in it.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 5:28:34 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...> wrote:
That should have been

> The other way of approaching this (which I favour at present) is to
> say that the support of LTs should have no such impact on the
> inclusion or otherwise of an ET _as_an_ET_, because the ET will get
> its due in this regard when the LTs are listed since we would include
> a column giving the ETs that well support each LT.

I had "ET" at the end there by mistake.

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 5:31:36 PM

>> We have a choice -- derive badness from first principles or cook
>> it from a survey of the tuning list, our personal tastes, etc.
>
>What first principles of the human psychology of the musical use of
>temperaments did you have in mind?

Since I'm not aware of any, and since we don't have the means to
experimentally determine any, I suggest using only mathematical
first principles, or very simple ideas like...

() For a number of notes n, we would expect more dyads in the
7-limit than the 5-limit.

() I expect to find a new best comma after searching n notes
in the 5-limit, n(something) notes in the 7-limit.

etc.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 5:36:19 PM

>> Dave doesn't seem to want the macros which would
>> be necessary for the scale-building stuff.
>
>What are macros?

Again, I'm amazed that this well-worn terminology isn't effective
here. AKA exos?

>Why can't you do scale-building stuff without them?

I don't know that it can't, but they're certainly fertile for
scale-building.

See another recent message for my response to the rest. :)

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 5:38:10 PM

>You know what a moat is right?

Obviously not! :(

>You have the castle (the circle is its
>outer bound) with people (temperaments) inside.

Then it's the same as a circle!

>Then you have the moat surrounding that, with no people in it.
>Then you have the rest of the world with the rest of the people
>in it.

Oooo!!!

-Carl

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 6:14:25 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >Humans seem to find a particular region of complexity and error
> >attractive and have a certain approximate function relating error and
> >complexity to usefulness. Extra-terrestrial music-makers (or humpback
> >whales) may find completely different regions attractive.
>
> This seems to be the key statement of this thread. I don't think
> this has been established. If it had, I'd be all for it. But it
> seems instead that whenever you cut out temperament T, somebody
> could come along and do something with T that would make you wish
> you hadn't cut it. Therefore it seems logical to use something
> that allows a comparison of temperaments in any range (like logflat).

So Carl. You really think it's possible that some human musician
could find the temperament where 3/2 vanishes to be a useful
approximation of 5-limit JI (but hey at least the complexity is
0.001)? And likewise for some temperament where the number of
generators to each prime is around a googol (but hey at least the
error is 10^-99 cents)?

> Then no matter what T is, we can say...
>
> "You could have used U, which is in the same range but better."
>
> ...or...
>
> "T's the best in that range. Bravo!"
>
> ...The worst that could happen would seem to be...
>
> "T falls outside the range we established for our paper, sorry."
>
> ...in which case the reader could perform his own analysis in the
> above way. With a cooked acceptance region, however, the following
> could happen...
>
> "Oh, T. It didn't meet our guesses about human cognition, but YMMV."

I don't understand why you think log-flat is a magic bullet in this
regard. If you use log flat badness and include the same number of
temperaments as Paul and I and Gene are considering (around 20), then
exactly the same scenario is possible, only this time it will be
temperaments with moderate amounts of both error and complexity that
are omitted and the objecting musician won't be fictitious, he'll be
Herman Miller.

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 6:18:05 PM

>> >Humans seem to find a particular region of complexity and error
>> >attractive and have a certain approximate function relating error and
>> >complexity to usefulness. Extra-terrestrial music-makers (or humpback
>> >whales) may find completely different regions attractive.
>>
>> This seems to be the key statement of this thread. I don't think
>> this has been established. If it had, I'd be all for it. But it
>> seems instead that whenever you cut out temperament T, somebody
>> could come along and do something with T that would make you wish
>> you hadn't cut it. Therefore it seems logical to use something
>> that allows a comparison of temperaments in any range (like logflat).
>
>So Carl. You really think it's possible that some human musician
>could find the temperament where 3/2 vanishes to be a useful
>approximation of 5-limit JI (but hey at least the complexity is
>0.001)? And likewise for some temperament where the number of
>generators to each prime is around a googol (but hey at least the
>error is 10^-99 cents)?

This is a false dilemma. The size of this thread shows how hard
it is to agree on the cutoffs.

>> Then no matter what T is, we can say...
>>
>> "You could have used U, which is in the same range but better."
>>
>> ...or...
>>
>> "T's the best in that range. Bravo!"
>>
>> ...The worst that could happen would seem to be...
>>
>> "T falls outside the range we established for our paper, sorry."
>>
>> ...in which case the reader could perform his own analysis in the
>> above way. With a cooked acceptance region, however, the following
>> could happen...
>>
>> "Oh, T. It didn't meet our guesses about human cognition, but YMMV."
>
>I don't understand why you think log-flat is a magic bullet in this
>regard. If you use log flat badness and include the same number of
>temperaments as Paul and I and Gene are considering (around 20), then
>exactly the same scenario is possible, only this time it will be
>temperaments with moderate amounts of both error and complexity that
>are omitted and the objecting musician won't be fictitious, he'll be
>Herman Miller.

Can you name the temperaments that fell outside of the top 20 on
Gene's 114 list?

-Carl

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 6:31:59 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> > >> It is well known that Dave, for example, is far more
> >> > >> micro-biased than I!
> >> > >
> >> > >?
> >> >
> >> > What's your question?
> >>
> >> What does micro-biased mean, on what basis do you say this about you
> >> vs. Dave, and what is its relevance here?
> >
> >I'd like to know what you mean by micro-biased. It may well be true,
> >but I'd like to know.
>
> Of all the amazing things I've seen on these lists, the failure of
> both you and Paul to understand the meaning of "micro-biased" is
> possibly the most amazing.

You misjudge. It wasn't failure to understand, it was carefulness in
checking for possible misunderstandings, rather than immediately
telling someone they are wrong. Something that surely we'd all like to
see more of.

>
> >At the moment I feel you should be calling me "centrally biased" or
> >some such.
>
> Obviously you did understand it!

Yes, as it turns out (but might not have).

> >I don't want to include either the very high error low
> >complexity or very high complexity low error temperaments that a
> >log-flat cutoff alone would include.
>
> Yes, you are apparently centrally biased. You should like circles
> in that case. :)

Yes, I do, so far. Haven't you read that?

For me there are three candidates on the table at the moment. log-log
circles or ellipses, log-log hyperbolae, and linear-linear
nearly-straight-lines.

I'm guessing that one can probably make any one of these fit within
any given moat. If so, a major reason to prefer one over another would
be the number of free parameters and the simplicity of the expression
for the cutoff relation in terms of error and complexity.
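
For comparison, here is a rough sketch of what those three cutoff
relations might look like, with e for error in cents and c for
complexity; the exact forms and the parameter names are my guesses,
not settled definitions:

from math import log2

def loglog_circle(e, c, r, x0=0.0, y0=0.0):
    # Circle of radius r about (x0, y0) on the log-log plot
    # (an ellipse if the two axes are scaled differently).
    return (log2(c) - x0) ** 2 + (log2(e) - y0) ** 2 <= r ** 2

def loglog_hyperbola(e, c, k, x0, y0):
    # One branch of a hyperbola on the log-log plot, asymptotes at
    # complexity 2**x0 and error 2**y0; keeps the lower-left region.
    x, y = log2(c), log2(e)
    return x < x0 and y < y0 and (x0 - x) * (y0 - y) >= k

def linear_line(e, c, e_max, c_max):
    # Nearly-straight cutoff on linear error/complexity axes.
    return e / e_max + c / c_max <= 1.0

Counted this way, the circle and the hyperbola each carry three free
parameters and the straight line two, which is the sort of comparison
suggested above.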

What happens to the curve once it is free of the pack (sorry, reading
about too many Antarctic sea voyages lately) doesn't much matter,
although I guess you could still argue psychological plausibility from
the curve's behaviour in regions that don't happen to be populated.

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 6:39:23 PM

>> >I'd like to know what you mean by micro-biased. It may well be true,
>> >but I'd like to know.
>>
>> Of all the amazing things I've seen on these lists, the failure of
>> both you and Paul to understand the meaning of "micro-biased" is
>> possibly the most amazing.
>
>You misjudge. It wasn't failure to understand, it was carefulness in
>checking for possible misunderstandings, rather than immediately
>telling someone they are wrong. Something that surely we'd all like to
>see more of.

It's a careful balance, but in this case I suppose you did the
right thing.

>> >I don't want to include either the very high error low
>> >complexity or very high complexity low error temperaments that a
>> >log-flat cutoff alone would include.
>>
>> Yes, you are apparently centrally biased. You should like circles
>> in that case. :)
>
>Yes, I do, so far. Haven't you read that?

Yes.

>For me there are three candidates on the table at the moment. log-log
>circles or ellipses, log-log hyperbolae, and linear-linear
>nearly-straight-lines.

Can we keep log-flat on the table for the moment?

>I'm guessing that one can probably make any one of these fit within
>any given moat. If so, a major reason to prefer one over another would
>be the number of free parameters and the simplicity of the expression
>for the cutoff relation in terms of error and complexity.

Ok.

-Carl

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 6:40:08 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> Dave doesn't seem to want the macros which would
> >> be necessary for the scale-building stuff.
> >
> >What are macros?
>
> Again, I'm amazed that this well-worn terminology isn't effective
> here. AKA exos?

Again, just being careful since my current understanding did not agree
with them being necessary for scale building.

It wouldn't be the first time we both thought we understood the
meaning of a term and eventually discovered we were poles apart. (Damn
those Antarctic stories :-)

> >Why can't you do scale-building stuff without them?
>
> I don't know that it can't, but they're certainly fertile for
> scale-building.

Carl, "necessary" means you can't do without them. Please be careful
about your use of hyperbole (as opposed to Gene's use of hyperbolas
:-), particularly since frustration is running high in all quarters at
present.

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 6:50:31 PM

>It wouldn't be the first time we both thought we understood the
>meaning of a term and eventually discovered we were poles apart. (Damn
>those Antarctic stories :-)

I wasn't familiar with the Oscar Wilde routine, by the way.
That's hilarious. Thanks for keeping a sense of humor.

>> >Why can't you do scale-building stuff without them?
>>
>> I don't know that it can't, but they're certainly fertile for
>> scale-building.
>
>Carl, "necessary" means you can't do without them. Please be careful
>about your use of hyperbole

Do *what* without them? Build any decent scale (the above sense)?
Or run any kind of decent scale-building program (the sense in which
I said "necessary")?

-Carl

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 6:54:02 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >You know what a moat is right?
>
> Obviously not! :(
>
> >You have the castle (the circle is its
> >outer bound) with people (temperaments) inside.
>
> Then it's the same as a circle!

No. A circle is infinitesimally thin. A moat has real thickness. If
we're talking circular cutoffs then we'd say the moat is annular. The
circle is only one edge of the moat. Of course it doesn't have to be
circular, but continuing in that vein ...

You could draw the smallest circle that encloses all the lucky
temperaments and then you could draw another one outside that which is
the largest circle that still encloses the same ones and no others.
The space between them is the moat. You can then give a quantitative
measure of the size of the moat as the percentage difference between
the radii of the two circles.
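
A minimal sketch of that measure, assuming the circles are drawn about
some chosen centre on the log-log plot and that we already know which
temperaments are in and which are out:

from math import log2, hypot

def moat_width(included, excluded, center=(0.0, 0.0)):
    # `included` and `excluded` are lists of (error_cents, complexity) pairs.
    def radius(e, c):
        return hypot(log2(c) - center[0], log2(e) - center[1])
    # Smallest circle about `center` enclosing all the lucky temperaments:
    r_inner = max(radius(e, c) for e, c in included)
    # Largest circle that still encloses the same ones and no others:
    r_outer = min(radius(e, c) for e, c in excluded)
    if r_outer <= r_inner:
        return r_inner, r_outer, 0.0  # the two sets interleave: no moat
    # Size of the moat as the percentage difference between the radii:
    return r_inner, r_outer, 100.0 * (r_outer - r_inner) / r_inner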

The term "moat" came to mind because the temperaments sometimes look
like constellations and in the Niven and Pournelle books "The Mote in
Gods Eye" and "The Moat around Murcheson's Eye", "the Moat" is a vast
region of space with no stars.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 7:00:50 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> We have a choice -- derive badness from first principles or cook
> >> it from a survey of the tuning list, our personal tastes, etc.
> >
> >What first principles of the human psychology of the musical use of
> >temperaments did you have in mind?
>
> Since I'm not aware of any, and since we don't have the means to
> experimentally determine any, I suggest using only mathematical
> first principles

But badness is clearly a psychological property; what have
mathematical first principles got to do with it?

> , or very simple ideas like...
>
> () For a number of notes n, we would expect more dyads in the
> 7-limit than the 5-limit.
>
> () I expect to find a new best comma after searching n notes
> in the 5-limit, n(something) notes in the 7-limit.

These sound reasonable, but I don't see how to use them to determine a
psychologically reasonable cutoff for lists of the temperaments most
likely to be musically useful.

I think we have no choice but to "cook it from a survey of the tuning
list, our personal tastes, etc.". Some of us have been doing informal
surveys of these questions on the tuning list for a decade or more.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/11/2004 7:08:01 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...> wrote:

> I don't understand why you think log-flat is a magic bullet in this
> regard. If you use log flat badness and include the same number of
> temperaments as Paul and I and Gene are considering (around 20), then
> exactly the same scenario is possible, only this time it will be
> temperaments with moderate amounts of both error and complexity that
> are omitted and the objecting musician won't be fictitious, he'll be
> Herman Miller.

Now explain why my list of 22 is no good, please, and what you think
should be on it that isn't, or off it that is.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 7:10:30 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> >Humans seem to find a particular region of complexity and error
> >> >attractive and have a certain approximate function relating error and
> >> >complexity to usefulness. Extra-terrestrial music-makers (or humpback
> >> >whales) may find completely different regions attractive.
> >>
> >> This seems to be the key statement of this thread. I don't think
> >> this has been established. If it had, I'd be all for it. But it
> >> seems instead that whenever you cut out temperament T, somebody
> >> could come along and do something with T that would make you wish
> >> you hadn't have cut it. Therefore it seems logical to use something
> >> that allows a comparison of temperaments in any range (like logflat).
> >
> >So Carl. You really think it's possible that some human musician
> >could find the temperament where 3/2 vanishes to be a useful
> >approximation of 5-limit JI (but hey at least the complexity is
> >0.001)? And likewise for some temperament where the number of
> >generators to each prime is around a google (but hey at least the
> >error is 10^-99 cents)?
>
> This is a false dilemma. The size of this thread shows how hard
> it is to agree on the cutoffs.

Well yeah but we're probably within a factor of 2 of agreeing. Another
species could disagree with us by orders of magnitude.

So you do want cutoffs on error and complexity? But cutoffs utterly
violate log-flat badness in the regions outside of them.

> Can you name the temperaments that fell outside of the top 20 on
> Gene's 114 list?

Yes.

Number 21 {21/20, 28/27}

[1, 4, 3, 4, 2, -4] [[1, 2, 4, 4], [0, -1, -4, -3]]
TOP tuning [1214.253642, 1919.106053, 2819.409644, 3328.810876]
TOP generators [1214.253642, 509.4012304]
bad: 42.300772 comp: 1.722706 err: 14.253642

Number 22 Injera

[2, 8, 8, 8, 7, -4] [[2, 3, 4, 5], [0, 1, 4, 4]]
TOP tuning [1201.777814, 1896.276546, 2777.994928, 3378.883835]
TOP generators [600.8889070, 93.60982493]
bad: 42.529834 comp: 3.445412 err: 3.582707

Number 23 Dicot

[2, 1, 6, -3, 4, 11] [[1, 1, 2, 1], [0, 2, 1, 6]]
TOP tuning [1204.048158, 1916.847810, 2764.496143, 3342.447113]
TOP generators [1204.048159, 356.3998255]
bad: 42.920570 comp: 2.137243 err: 9.396316

Number 24 Hemifourths

[2, 8, 1, 8, -4, -20] [[1, 2, 4, 3], [0, -2, -8, -1]]
TOP tuning [1203.668842, 1902.376967, 2794.832500, 3358.526166]
TOP generators [1203.668841, 252.4803582]
bad: 43.552336 comp: 3.445412 err: 3.668842

Number 25 Waage? Compton? Duodecimal?

[0, 12, 24, 19, 38, 22] [[12, 19, 28, 34], [0, 0, -1, -2]]
TOP tuning [1200.617051, 1900.976998, 2785.844725, 3370.558188]
TOP generators [100.0514209, 16.55882096]
bad: 45.097159 comp: 8.548972 err: .617051

Number 26 Wizard

[12, -2, 20, -31, -2, 52] [[2, 1, 5, 2], [0, 6, -1, 10]]
TOP tuning [1200.639571, 1900.941305, 2784.828674, 3368.342104]
TOP generators [600.3197857, 216.7702531]
bad: 45.381303 comp: 8.423526 err: .639571

Number 27 Kleismic

[6, 5, 3, -6, -12, -7] [[1, 0, 1, 2], [0, 6, 5, 3]]
TOP tuning [1203.187308, 1907.006766, 2792.359613, 3359.878000]
TOP generators [1203.187309, 317.8344609]
bad: 45.676063 comp: 3.785579 err: 3.187309

Number 28 Negri

[4, -3, 2, -14, -8, 13] [[1, 2, 2, 3], [0, -4, 3, -2]]
TOP tuning [1203.187308, 1907.006766, 2780.900506, 3359.878000]
TOP generators [1203.187309, 124.8419629]
bad: 46.125886 comp: 3.804173 err: 3.187309

Number 29 Nonkleismic

[10, 9, 7, -9, -17, -9] [[1, -1, 0, 1], [0, 10, 9, 7]]
TOP tuning [1198.828458, 1900.098151, 2789.033948, 3368.077085]
TOP generators [1198.828458, 309.8926610]
bad: 46.635848 comp: 6.309298 err: 1.171542

Number 30 Quartaminorthirds

[9, 5, -3, -13, -30, -21] [[1, 1, 2, 3], [0, 9, 5, -3]]
TOP tuning [1199.792743, 1900.291122, 2788.751252, 3365.878770]
TOP generators [1199.792743, 77.83315314]
bad: 47.721352 comp: 6.742251 err: 1.049791

Number 31 Tripletone

[3, 0, -6, -7, -18, -14] [[3, 5, 7, 8], [0, -1, 0, 2]]
TOP tuning [1197.060039, 1902.640406, 2793.140092, 3377.079420]
TOP generators [399.0200131, 92.45965769]
bad: 48.112067 comp: 4.045351 err: 2.939961

Number 32 Decimal

[4, 2, 2, -6, -8, -1] [[2, 4, 5, 6], [0, -2, -1, -1]]
TOP tuning [1207.657798, 1914.092323, 2768.532858, 3372.361757]
TOP generators [603.8288989, 250.6116362]
bad: 48.773723 comp: 2.523719 err: 7.657798

Number 33 {1029/1024, 4375/4374}

[12, 22, -4, 7, -40, -71] [[2, 5, 8, 5], [0, -6, -11, 2]]
TOP tuning [1200.421488, 1901.286959, 2785.446889, 3367.642640]
TOP generators [600.2107440, 183.2944602]
bad: 50.004574 comp: 10.892116 err: .421488

Number 34 Superpythagorean

[1, 9, -2, 12, -6, -30] [[1, 2, 6, 2], [0, -1, -9, 2]]
TOP tuning [1197.596121, 1905.765059, 2780.732078, 3374.046608]
TOP generators [1197.596121, 489.4271829]
bad: 50.917015 comp: 4.602303 err: 2.403879

Number 35 Supermajor seconds

[3, 12, -1, 12, -10, -36] [[1, 1, 0, 3], [0, 3, 12, -1]]
TOP tuning [1201.698521, 1899.262909, 2790.257556, 3372.574099]
TOP generators [1201.698520, 232.5214630]
bad: 51.806440 comp: 5.522763 err: 1.698521

Number 36 Supersupermajor

[3, 17, -1, 20, -10, -50] [[1, 1, -1, 3], [0, 3, 17, -1]]
TOP tuning [1200.231588, 1903.372996, 2784.236389, 3366.314293]
TOP generators [1200.231587, 234.3804692]
bad: 52.638504 comp: 7.670504 err: .894655

Number 37 {6144/6125, 10976/10935} Hendecatonic?

[11, -11, 22, -43, 4, 82] [[11, 17, 26, 30], [0, 1, -1, 2]]
TOP tuning [1199.662182, 1902.490429, 2787.098101, 3368.740066]
TOP generators [109.0601984, 48.46705632]
bad: 53.458690 comp: 12.579627 err: .337818

Number 38 {3136/3125, 5120/5103} Misty

[3, -12, -30, -26, -56, -36] [[3, 5, 6, 6], [0, -1, 4, 10]]
TOP tuning [1199.661465, 1902.491566, 2787.099767, 3368.765021]
TOP generators [399.8871550, 96.94420930]
bad: 53.622498 comp: 12.585536 err: .338535

Number 39 {1728/1715, 4000/3993}

[11, 18, 5, 3, -23, -39] [[1, 2, 3, 3], [0, -11, -18, -5]]
TOP tuning [1199.083445, 1901.293958, 2784.185538, 3371.399002]
TOP generators [1199.083445, 45.17026643]
bad: 55.081549 comp: 7.752178 err: .916555

Number 40 {36/35, 160/147} Hystrix?

[3, 5, 1, 1, -7, -12] [[1, 2, 3, 3], [0, -3, -5, -1]]
TOP tuning [1187.933715, 1892.564743, 2758.296667, 3402.700250]
TOP generators [1187.933715, 161.1008955]
bad: 55.952057 comp: 2.153383 err: 12.066285

Number 41 {28/27, 50/49}

[2, 6, 6, 5, 4, -3] [[2, 3, 4, 5], [0, 1, 3, 3]]
TOP tuning [1191.599639, 1915.269258, 2766.808679, 3362.608498]
TOP generators [595.7998193, 127.8698005]
bad: 56.092257 comp: 2.584059 err: 8.400361

Number 42 Porcupine

[3, 5, -6, 1, -18, -28] [[1, 2, 3, 2], [0, -3, -5, 6]]
TOP tuning [1196.905961, 1906.858938, 2779.129576, 3367.717888]
TOP generators [1196.905960, 162.3176609]
bad: 57.088650 comp: 4.295482 err: 3.094040

Number 43

[6, 10, 10, 2, -1, -5] [[2, 4, 6, 7], [0, -3, -5, -5]]
TOP tuning [1196.893422, 1906.838962, 2779.100462, 3377.547174]
TOP generators [598.4467109, 162.3159606]
bad: 57.621529 comp: 4.306766 err: 3.106578

Number 44 Octacot

[8, 18, 11, 10, -5, -25] [[1, 1, 1, 2], [0, 8, 18, 11]]
TOP tuning [1199.031259, 1903.490418, 2784.064367, 3366.693863]
TOP generators [1199.031259, 88.05739491]
bad: 58.217715 comp: 7.752178 err: .968741

Number 45 {25/24, 81/80} Jamesbond?

[0, 0, 7, 0, 11, 16] [[7, 11, 16, 20], [0, 0, 0, -1]]
TOP tuning [1209.431411, 1900.535075, 2764.414655, 3368.825906]
TOP generators [172.7759159, 86.69241190]
bad: 58.637859 comp: 2.493450 err: 9.431411

Number 46 Hemithirds

[15, -2, -5, -38, -50, -6] [[1, 4, 2, 2], [0, -15, 2, 5]]
TOP tuning [1200.363229, 1901.194685, 2787.427555, 3367.479202]
TOP generators [1200.363229, 193.3505488]
bad: 60.573479 comp: 11.237086 err: .479706

Number 47

[12, 34, 20, 26, -2, -49] [[2, 4, 7, 7], [0, -6, -17, -10]]
TOP tuning [1200.284965, 1901.503343, 2786.975381, 3369.219732]
TOP generators [600.1424823, 83.17776441]
bad: 61.101493 comp: 14.643003 err: .284965

Number 48 Flattone

[1, 4, -9, 4, -17, -32] [[1, 2, 4, -1], [0, -1, -4, 9]]
TOP tuning [1202.536420, 1897.934872, 2781.593812, 3361.705278]
TOP generators [1202.536419, 507.1379663]
bad: 61.126418 comp: 4.909123 err: 2.536420

Number 49 Diaschismic

[2, -4, -16, -11, -31, -26] [[2, 3, 5, 7], [0, 1, -2, -8]]
TOP tuning [1198.732403, 1901.885616, 2789.256983, 3365.267311]
TOP generators [599.3662015, 103.7870123]
bad: 61.527901 comp: 6.966993 err: 1.267597

Number 50 Superkleismic

[9, 10, -3, -5, -30, -35] [[1, 4, 5, 2], [0, -9, -10, 3]]
TOP tuning [1201.371917, 1904.129438, 2783.128219, 3369.863245]
TOP generators [1201.371918, 322.3731369]
bad: 62.364585 comp: 6.742251 err: 1.371918

Number 51

[8, 1, 18, -17, 6, 39] [[1, -1, 2, -3], [0, 8, 1, 18]]
TOP tuning [1201.135544, 1899.537544, 2789.855225, 3373.107814]
TOP generators [1201.135545, 387.5841360]
bad: 62.703297 comp: 6.411729 err: 1.525246

Number 52 Tritonic

[5, -11, -12, -29, -33, 3] [[1, 4, -3, -3], [0, -5, 11, 12]]
TOP tuning [1201.023211, 1900.333250, 2785.201472, 3365.953391]
TOP generators [1201.023211, 580.7519186]
bad: 63.536850 comp: 7.880073 err: 1.023211

Number 53

[1, 33, 27, 50, 40, -30] [[1, 2, 16, 14], [0, -1, -33, -27]]
TOP tuning [1199.680495, 1902.108988, 2785.571846, 3369.722869]
TOP generators [1199.680495, 497.2520023]
bad: 64.536886 comp: 14.212326 err: .319505

Number 54

[6, 10, 3, 2, -12, -21] [[1, 2, 3, 3], [0, -6, -10, -3]]
TOP tuning [1202.659696, 1907.471368, 2778.232381, 3359.055076]
TOP generators [1202.659696, 82.97467050]
bad: 64.556006 comp: 4.306766 err: 3.480440

Number 55

[0, 0, 12, 0, 19, 28] [[12, 19, 28, 34], [0, 0, 0, -1]]
TOP tuning [1197.674070, 1896.317278, 2794.572829, 3368.825906]
TOP generators [99.80617249, 24.58395811]
bad: 65.630949 comp: 4.295482 err: 3.557008

Number 56

[2, 1, -4, -3, -12, -12] [[1, 1, 2, 4], [0, 2, 1, -4]]
TOP tuning [1204.567524, 1916.451342, 2765.076958, 3394.502460]
TOP generators [1204.567524, 355.9419091]
bad: 66.522610 comp: 2.696901 err: 9.146173

Number 57

[2, -2, 1, -8, -4, 8] [[1, 2, 2, 3], [0, -2, 2, -1]]
TOP tuning [1185.869125, 1924.351909, 2819.124589, 3333.914203]
TOP generators [1185.869125, 223.6931705]
bad: 66.774944 comp: 2.173813 err: 14.130876

Number 58

[5, 8, 2, 1, -11, -18] [[1, 2, 3, 3], [0, -5, -8, -2]]
TOP tuning [1194.335372, 1892.976778, 2789.895770, 3384.728528]
TOP generators [1194.335372, 99.13879319]
bad: 67.244049 comp: 3.445412 err: 5.664628

Number 59

[3, 5, 9, 1, 6, 7] [[1, 2, 3, 4], [0, -3, -5, -9]]
TOP tuning [1193.415676, 1912.390908, 2789.512955, 3350.341372]
TOP generators [1193.415676, 158.1468146]
bad: 67.670842 comp: 3.205865 err: 6.584324

Number 60

[3, 0, 9, -7, 6, 21] [[3, 5, 7, 9], [0, -1, 0, -3]]
TOP tuning [1193.415676, 1912.390908, 2784.636577, 3350.341372]
TOP generators [397.8052253, 76.63521863]
bad: 68.337269 comp: 3.221612 err: 6.584324

Number 61 Hemikleismic

[12, 10, -9, -12, -48, -49] [[1, 0, 1, 4], [0, 12, 10, -9]]
TOP tuning [1199.411231, 1902.888178, 2785.151380, 3370.478790]
TOP generators [1199.411231, 158.5740148]
bad: 68.516458 comp: 10.787602 err: .588769

Number 62

[2, -2, -2, -8, -9, 1] [[2, 3, 5, 6], [0, 1, -1, -1]]
TOP tuning [1185.468457, 1924.986952, 2816.886876, 3409.621105]
TOP generators [592.7342285, 146.7842660]
bad: 68.668284 comp: 2.173813 err: 14.531543

Number 63

[8, 13, 23, 2, 14, 17] [[1, 2, 3, 4], [0, -8, -13, -23]]
TOP tuning [1198.975478, 1900.576277, 2788.692580, 3365.949709]
TOP generators [1198.975478, 62.17183489]
bad: 68.767371 comp: 8.192765 err: 1.024522

Number 64

[3, -7, -8, -18, -21, 1] [[1, 3, -1, -1], [0, -3, 7, 8]]
TOP tuning [1202.900537, 1897.357759, 2790.235118, 3360.683070]
TOP generators [1202.900537, 570.4479508]
bad: 69.388565 comp: 4.891080 err: 2.900537

Number 65

[3, 12, 11, 12, 9, -8] [[1, 3, 8, 8], [0, -3, -12, -11]]
TOP tuning [1202.624742, 1900.726787, 2792.408176, 3361.457323]
TOP generators [1202.624742, 569.0491468]
bad: 70.105427 comp: 5.168119 err: 2.624742

Number 66

[17, 6, 15, -30, -24, 18] [[1, -5, 0, -3], [0, 17, 6, 15]]
TOP tuning [1199.379215, 1900.971080, 2787.482526, 3370.568669]
TOP generators [1199.379215, 464.5804210]
bad: 71.416917 comp: 10.725806 err: .620785

Number 67

[11, 13, 17, -5, -4, 3] [[1, 3, 4, 5], [0, -11, -13, -17]]
TOP tuning [1198.514750, 1899.600936, 2789.762356, 3371.570447]
TOP generators [1198.514750, 154.1766650]
bad: 71.539673 comp: 6.940227 err: 1.485250

Number 68

[3, -24, -1, -45, -10, 65] [[1, 1, 7, 3], [0, 3, -24, -1]]
TOP tuning [1200.486331, 1902.481504, 2787.442939, 3367.460603]
TOP generators [1200.486331, 233.9983907]
bad: 72.714599 comp: 12.227699 err: .486331

Number 69

[23, -1, 13, -55, -44, 33] [[1, 9, 2, 7], [0, -23, 1, -13]]
TOP tuning [1199.671611, 1901.434518, 2786.108874, 3369.747810]
TOP generators [1199.671611, 386.7656515]
bad: 73.346343 comp: 14.944966 err: .328389

Number 70

[6, 29, -2, 32, -20, -86] [[1, 4, 14, 2], [0, -6, -29, 2]]
TOP tuning [1200.422358, 1901.285580, 2787.294397, 3367.645998]
TOP generators [1200.422357, 483.4006416]
bad: 73.516606 comp: 13.193267 err: .422358

Number 71

[7, -15, -16, -40, -45, 5] [[1, 5, -5, -5], [0, -7, 15, 16]]
TOP tuning [1200.210742, 1900.961474, 2784.858222, 3370.585685]
TOP generators [1200.210742, 585.7274621]
bad: 74.053446 comp: 10.869066 err: .626846

Number 72

[5, 3, 7, -7, -3, 8] [[1, 1, 2, 2], [0, 5, 3, 7]]
TOP tuning [1192.540126, 1890.131381, 2803.635005, 3361.708008]
TOP generators [1192.540126, 139.5182509]
bad: 74.239244 comp: 3.154649 err: 7.459874

Number 73

[4, 21, -3, 24, -16, -66] [[1, 0, -6, 4], [0, 4, 21, -3]]
TOP tuning [1199.274449, 1901.646683, 2787.998389, 3370.862785]
TOP generators [1199.274449, 475.4116708]
bad: 74.381278 comp: 10.125066 err: .725551

Number 74

[3, -5, -6, -15, -18, 0] [[1, 3, 0, 0], [0, -3, 5, 6]]
TOP tuning [1195.486066, 1908.381352, 2796.794743, 3356.153692]
TOP generators [1195.486066, 559.3589487]
bad: 74.989802 comp: 4.075900 err: 4.513934

Number 75

[6, 0, 3, -14, -12, 7] [[3, 4, 7, 8], [0, 2, 0, 1]]
TOP tuning [1199.400031, 1910.341746, 2798.600074, 3353.970936]
TOP generators [399.8000105, 155.5708520]
bad: 76.576420 comp: 3.804173 err: 5.291448

Number 76

[13, 2, 30, -27, 11, 64] [[1, 6, 3, 13], [0, -13, -2, -30]]
TOP tuning [1200.672456, 1900.889183, 2786.148822, 3370.713730]
TOP generators [1200.672456, 407.9342733]
bad: 76.791305 comp: 10.686216 err: .672456

Number 77 Shrutar

[4, -8, 14, -22, 11, 55] [[2, 3, 5, 5], [0, 2, -4, 7]]
TOP tuning [1198.920873, 1903.665377, 2786.734051, 3365.796415]
TOP generators [599.4604367, 52.64203308]
bad: 76.825572 comp: 8.437555 err: 1.079127

Number 78

[12, 10, 25, -12, 6, 30] [[1, 6, 6, 12], [0, -12, -10, -25]]
TOP tuning [1199.028703, 1903.494472, 2785.274095, 3366.099130]
TOP generators [1199.028703, 440.8898120]
bad: 77.026097 comp: 8.905180 err: .971298

Number 79 Beatles

[2, -9, -4, -19, -12, 16] [[1, 1, 5, 4], [0, 2, -9, -4]]
TOP tuning [1197.104145, 1906.544822, 2793.037680, 3369.535226]
TOP generators [1197.104145, 354.7203384]
bad: 77.187771 comp: 5.162806 err: 2.895855

Number 80

[6, -12, 10, -33, -1, 57] [[2, 4, 3, 7], [0, -3, 6, -5]]
TOP tuning [1199.025947, 1903.033657, 2788.575394, 3371.560420]
TOP generators [599.5129735, 165.0060791]
bad: 78.320453 comp: 8.966980 err: .974054

Number 81

[4, 4, 0, -3, -11, -11] [[4, 6, 9, 11], [0, 1, 1, 0]]
TOP tuning [1212.384652, 1905.781495, 2815.069985, 3334.057793]
TOP generators [303.0961630, 63.74881402]
bad: 78.879803 comp: 2.523719 err: 12.384652

Number 82

[6, -2, -2, -17, -20, 1] [[2, 2, 5, 6], [0, 3, -1, -1]]
TOP tuning [1203.400986, 1896.025764, 2777.627538, 3379.328030]
TOP generators [601.7004928, 230.8749260]
bad: 79.825592 comp: 4.619353 err: 3.740932

Number 83

[1, 6, 5, 7, 5, -5] [[1, 2, 5, 5], [0, -1, -6, -5]]
TOP tuning [1211.970043, 1882.982932, 2814.107292, 3355.064446]
TOP generators [1211.970043, 540.9571536]
bad: 79.928319 comp: 2.584059 err: 11.970043

Number 84 Squares

[4, 16, 9, 16, 3, -24] [[1, 3, 8, 6], [0, -4, -16, -9]]
TOP tuning [1201.698521, 1899.262909, 2790.257556, 3372.067656]
TOP generators [1201.698520, 426.4581630]
bad: 80.651668 comp: 6.890825 err: 1.698521

Number 85

[6, 0, 0, -14, -17, 0] [[6, 10, 14, 17], [0, -1, 0, 0]]
TOP tuning [1194.473353, 1901.955001, 2787.104490, 3384.341166]
TOP generators [199.0788921, 88.83392059]
bad: 80.672767 comp: 3.820609 err: 5.526647

Number 86

[7, 26, 25, 25, 20, -15] [[1, 5, 15, 15], [0, -7, -26, -25]]
TOP tuning [1199.352846, 1902.980716, 2784.811068, 3369.637284]
TOP generators [1199.352846, 584.8262161]
bad: 81.144087 comp: 11.197591 err: .647154

Number 87

[18, 15, -6, -18, -60, -56] [[3, 6, 8, 8], [0, -6, -5, 2]]
TOP tuning [1200.448679, 1901.787880, 2785.271912, 3367.566305]
TOP generators [400.1495598, 83.18491309]
bad: 81.584166 comp: 13.484503 err: .448679

Number 88

[9, -2, 14, -24, -3, 38] [[1, 3, 2, 5], [0, -9, 2, -14]]
TOP tuning [1201.918556, 1904.657347, 2781.858962, 3363.439837]
TOP generators [1201.918557, 189.0109248]
bad: 81.594641 comp: 6.521440 err: 1.918557

Number 89

[1, -8, -2, -15, -6, 18] [[1, 2, -1, 2], [0, -1, 8, 2]]
TOP tuning [1195.155395, 1894.070902, 2774.763716, 3382.790568]
TOP generators [1195.155395, 496.2398890]
bad: 82.638059 comp: 4.075900 err: 4.974313

Number 90

[3, 7, -1, 4, -10, -22] [[1, 1, 1, 3], [0, 3, 7, -1]]
TOP tuning [1205.820043, 1890.417958, 2803.215176, 3389.260823]
TOP generators [1205.820043, 228.1993049]
bad: 82.914167 comp: 3.375022 err: 7.279064

Number 91

[6, 5, -31, -6, -66, -86] [[1, 0, 1, 11], [0, 6, 5, -31]]
TOP tuning [1199.976626, 1902.553087, 2785.437532, 3369.885264]
TOP generators [1199.976626, 317.0921813]
bad: 83.023430 comp: 14.832953 err: .377351

Number 92

[8, 6, 6, -9, -13, -3] [[2, 5, 6, 7], [0, -4, -3, -3]]
TOP tuning [1198.553882, 1907.135354, 2778.724633, 3378.001574]
TOP generators [599.2769413, 272.3123381]
bad: 83.268810 comp: 5.047438 err: 3.268439

Number 93

[4, 2, 9, -6, 3, 15] [[1, 3, 3, 6], [0, -4, -2, -9]]
TOP tuning [1208.170435, 1910.173796, 2767.342550, 3391.763218]
TOP generators [1208.170435, 428.5843770]
bad: 83.972208 comp: 3.205865 err: 8.170435

Number 94 Hexidecimal

[1, -3, 5, -7, 5, 20] [[1, 2, 1, 5], [0, -1, 3, -5]]
TOP tuning [1208.959294, 1887.754858, 2799.450479, 3393.977822]
TOP generators [1208.959293, 530.1637287]
bad: 84.341555 comp: 3.068202 err: 8.959294

Number 95

[6, 0, 15, -14, 7, 35] [[3, 5, 7, 9], [0, -2, 0, -5]]
TOP tuning [1197.060039, 1902.856975, 2793.140092, 3360.572393]
TOP generators [399.0200131, 46.12154491]
bad: 84.758945 comp: 5.369353 err: 2.939961

Number 96

[0, 12, 12, 19, 19, -6] [[12, 19, 28, 34], [0, 0, -1, -1]]
TOP tuning [1198.015473, 1896.857833, 2778.846497, 3377.854234]
TOP generators [99.83462277, 16.52294019]
bad: 85.896401 comp: 5.168119 err: 3.215955

Number 97

[11, -6, 10, -35, -15, 40] [[1, 4, 1, 5], [0, -11, 6, -10]]
TOP tuning [1200.950404, 1901.347958, 2784.106944, 3366.157786]
TOP generators [1200.950404, 263.8594234]
bad: 85.962459 comp: 9.510433 err: .950404

Number 98 Slender

[13, -10, 6, -46, -27, 42] [[1, 2, 2, 3], [0, -13, 10, -6]]
TOP tuning [1200.337238, 1901.055858, 2784.996493, 3370.418508]
TOP generators [1200.337239, 38.43220154]
bad: 88.631905 comp: 12.499426 err: .567296

Number 99

[0, 5, 10, 8, 16, 9] [[5, 8, 12, 15], [0, 0, -1, -2]]
TOP tuning [1195.598382, 1912.957411, 2770.195472, 3388.313857]
TOP generators [239.1196765, 99.24064453]
bad: 89.758630 comp: 3.595867 err: 6.941749

Number 100

[1, -1, -5, -4, -11, -9] [[1, 2, 2, 1], [0, -1, 1, 5]]
TOP tuning [1185.210905, 1925.395162, 2815.448458, 3410.344145]
TOP generators [1185.210905, 445.0266480]
bad: 90.384580 comp: 2.472159 err: 14.789095

Number 101

[2, 8, -11, 8, -23, -48] [[1, 1, 0, 6], [0, 2, 8, -11]]
TOP tuning [1201.698521, 1899.262909, 2790.257556, 3373.586984]
TOP generators [1201.698520, 348.7821945]
bad: 92.100337 comp: 7.363684 err: 1.698521

Number 102

[3, 12, 18, 12, 20, 8] [[3, 5, 8, 10], [0, -1, -4, -6]]
TOP tuning [1202.260038, 1898.372926, 2784.451552, 3375.170635]
TOP generators [400.7533459, 105.3938041]
bad: 92.910783 comp: 6.411729 err: 2.260038

Number 103

[4, -8, -20, -22, -43, -24] [[4, 6, 10, 13], [0, 1, -2, -5]]
TOP tuning [1199.003867, 1903.533834, 2787.453602, 3371.622404]
TOP generators [299.7509668, 105.0280329]
bad: 93.029698 comp: 9.663894 err: .996133

Number 104

[3, 0, -3, -7, -13, -7] [[3, 5, 7, 8], [0, -1, 0, 1]]
TOP tuning [1205.132027, 1884.438632, 2811.974729, 3337.800149]
TOP generators [401.7106756, 124.1147448]
bad: 94.336372 comp: 2.921642 err: 11.051598

Number 105

[4, 7, 2, 2, -8, -15] [[1, 2, 3, 3], [0, -4, -7, -2]]
TOP tuning [1190.204869, 1918.438775, 2762.165422, 3339.629125]
TOP generators [1190.204869, 115.4927407]
bad: 94.522719 comp: 3.014736 err: 10.400103

Number 106

[13, 19, 23, 0, 0, 0] [[1, 0, 0, 0], [0, 13, 19, 23]]
TOP tuning [1200.0, 1904.187463, 2783.043215, 3368.947050]
TOP generators [1200., 146.4759587]
bad: 94.757554 comp: 8.202087 err: 1.408527

Number 107

[2, -6, -6, -14, -15, 3] [[2, 3, 5, 6], [0, 1, -3, -3]]
TOP tuning [1206.548264, 1891.576247, 2771.109113, 3374.383246]
TOP generators [603.2741324, 81.75384943]
bad: 94.764743 comp: 3.804173 err: 6.548265

Number 108

[2, -6, -6, -14, -15, 3] [[2, 3, 5, 6], [0, 1, -3, -3]]
TOP tuning [1206.548264, 1891.576247, 2771.109113, 3374.383246]
TOP generators [603.2741324, 81.75384943]
bad: 94.764743 comp: 3.804173 err: 6.548265

Number 109

[1, -13, -2, -23, -6, 32] [[1, 2, -3, 2], [0, -1, 13, 2]]
TOP tuning [1197.567789, 1904.876372, 2780.666293, 3375.653987]
TOP generators [1197.567789, 490.2592046]
bad: 94.999539 comp: 6.249713 err: 2.432212

Number 110

[9, 0, 9, -21, -11, 21] [[9, 14, 21, 25], [0, 1, 0, 1]]
TOP tuning [1197.060039, 1897.499011, 2793.140092, 3360.572393]
TOP generators [133.0066710, 35.40561749]
bad: 95.729260 comp: 5.706260 err: 2.939961

Number 111

[5, 1, 9, -10, 0, 18] [[1, 0, 2, 0], [0, 5, 1, 9]]
TOP tuning [1193.274911, 1886.640142, 2763.877849, 3395.952256]
TOP generators [1193.274911, 377.3280283]
bad: 99.308041 comp: 3.205865 err: 9.662601

Number 112 Muggles

[5, 1, -7, -10, -25, -19] [[1, 0, 2, 5], [0, 5, 1, -7]]
TOP tuning [1203.148010, 1896.965522, 2785.689126, 3359.988323]
TOP generators [1203.148011, 379.3931044]
bad: 99.376477 comp: 5.618543 err: 3.148011

Number 113

[11, 6, 15, -16, -7, 18] [[1, 1, 2, 2], [0, 11, 6, 15]]
TOP tuning [1202.072164, 1905.239303, 2787.690040, 3363.008608]
TOP generators [1202.072164, 63.92428535]
bad: 99.809415 comp: 6.940227 err: 2.072164

Number 114

[1, -8, -26, -15, -44, -38] [[1, 2, -1, -8], [0, -1, 8, 26]]
TOP tuning [1199.424969, 1900.336158, 2788.685275, 3365.958541]
TOP generators [1199.424969, 498.5137806]
bad: 99.875385 comp: 9.888635 err: 1.021376

Now what was the point of that?

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 7:12:54 PM

>> >You know what a moat is right?
>>
>> Obviously not! :(
>>
>> >You have the castle (the circle is its
>> >outer bound) with people (temperaments) inside.
>>
>> Then it's the same as a circle!
>
>No. A circle is infinitesimally thin.

In 3-D we have ball, but I'm not aware of a corresponding
term for 2-D. Disk?

>A moat has real thickness.

Oh, ... yes I'm aware my proposal doesn't include a region
of safety. I think it's nice but it isn't a must-have for me.

>Of course it doesn't have to be
>circular, but continuing in that vein ...

Again, you'll have to bear with my tendency for hyperbole.
I don't consider specifically circular curvature to be part
of the circle suggestion. This goes without saying for me,
but I can see how it might not for person X.

I think maximum hyperbole is the ideal way to use natural
languages, but I really should learn to back it up with
examples.

>You could draw the smallest circle that encloses all the lucky
>temperaments and then you could draw another one outside that which
>is the largest circle that still encloses the same ones and no
>others. The space between them is the moat. You can then give a
>quantitative measure of the size of the moat as the percentage
>difference between the radii of the two circles.

Good idea.

>The term "moat" came to mind because the temperaments sometimes look
>like constellations and in the Niven and Pournelle books "The Mote in
>Gods Eye" and "The Moat around Murcheson's Eye", "the Moat" is a vast
>region of space with no stars.

I've only ever read The Integral Trees, which was Niven-only IIRC (I
was just a lad). Oh, and a funny analysis of Superman's powers when
I was a bit older. Anyway "moat" makes sense because it is a region
of safety. If only you had explained (or I noticed) this sooner!

-C.

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 7:18:26 PM

>>>>We have a choice -- derive badness from first principles or cook
>>>>it from a survey of the tuning list, our personal tastes, etc.
>>>
>>>What first principles of the human psychology of the musical use
>>>of temperaments did you have in mind?
>>
>>Since I'm not aware of any, and since we don't have the means to
>>experimentally determine any, I suggest using only mathematical
>>first principles
>
>But badness is clearly a psychological property,

No it isn't! What evidence do we have that badness means anything
musical at all?

>what have mathematical first principles got to do with it?

What _don't_ they have to do with it? For folks into "digital physics"
like me, nothing.

>> , or very simple ideas like...
>>
>> () For a number of notes n, we would expect more dyads in the
>> 7-limit than the 5-limit.
>>
>> () I expect to find a new best comma after searching n notes
>> in the 5-limit, n(something) notes in the 7-limit.
>
>These sound reasonable, but I don't see how to use them to determine a
>psychologically reasonable cutoff for lists of the temperaments most
>likely to be musically useful.

I suspect that answering them would present possibilities for
psy.-reasonable cutoffs.

-Carl

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 7:19:45 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> [Dave Keenan wrote:]
> >For me there are three candidates on the table at the moment. log-log
> >circles or ellipses, log-log hyperbolae, and linear-linear
> >nearly-straight-lines.
>
> Can we keep log-flat on the table for the moment?

If you mean, log-flat with no other cutoffs, then no. I don't think
this was ever on the table, even from Gene. There is an infinite
number in the increasing complexity direction. I understand this would
be a single straight line on the log-log plot parallel to the apparent
lower left "edge" of the populated region.

If you mean log-flat badness in conjunction with error and complexity
cutoffs then it can stay on the table if you like, but I don't know
how you can psychologically justify the corners.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/11/2004 7:26:07 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> >But badness is clearly a psychological property,
>
> No it isn't! What evidence do we have that badness means anything
> musical at all?

Without a definition of "badness", we can't possibly have evidence for
what it means. Mostly, it's been a number used as an aid for a
decision proceedure. Log flat badness, occurring at the critical
exponent, could claim to be a "property". Keenan's psychological
badness is undefined.
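
For what it's worth, the bad figures in the list posted earlier in
this thread appear to be error times the square of complexity; a quick
check against two entries as printed (an observation about the
numbers, not a definition):

# comp, err, bad copied from the list above; err * comp**2 matches bad
# to within the rounding of the printed values.
entries = [
    ("Number 21", 1.722706, 14.253642, 42.300772),
    ("Number 45", 2.493450, 9.431411, 58.637859),
]
for name, comp, err, bad in entries:
    print(name, round(err * comp ** 2, 4), bad)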

🔗Gene Ward Smith <gwsmith@svpal.org>

2/11/2004 7:31:37 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...> wrote:

> If you mean, log-flat with no other cutoffs, then no. I don't think
> this was ever on the table, even from Gene. There is an infinite
> number in the increasing complexity direction. I understand this would
> be a single straight line on the log-log plot parallel to the apparent
> lower left "edge" of the populated region.

It would in particular be the line which is a lim sup for the slope,
and hence contains an infinite supply of temperaments.

http://en.wikipedia.org/wiki/Limit_inferior

> If you mean log-flat badness in conjunction with error and complexity
> cutoffs then it can stay on the table if you like, but I don't know
> how you can psychologically justify the corners.

Can you specifically cite when an obnoxious temperament turned up in a
corner, and you couldn't get rid of it without losing something good?

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 7:31:08 PM

Carl, you wrote:
> >> Dave doesn't seem to want the macros which would
> >> be necessary for the scale-building stuff.

In the context of the current highly mathematical discussion,
this said to me that you think macros are necessary (i.e. you can't do
without them) for scale-building stuff.

I think this is obviously wrong since you can show how to build a
scale using meantone, which is not a macrotemperament.

But since I now learn that you apparently only meant "desirable"
rather than "necessary" in the strict logical sense, you should note
that I long ago agreed to neutral thirds and pelogic being on the
5-limit list. Surely they are macro enough for your purposes.

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 7:36:32 PM

>>>>>Humans seem to find a particular region of complexity and error
>>>>>attractive and have a certain approximate function relating
>>>>>error and complexity to usefulness. Extra-terrestrial music-makers
>>>>>(or humpback whales) may find completely different regions
>>>>>attractive.
>>>>
>>>>This seems to be the key statement of this thread. I don't think
>>>>this has been established. If it had, I'd be all for it. But it
>>>>seems instead that whenever you cut out temperament T, somebody
>>>>could come along and do something with T that would make you wish
>>>>you hadn't cut it. Therefore it seems logical to use
>>>>something that allows a comparison of temperaments in any range
>>>>(like logflat).
>>>
>>>So Carl. You really think it's possible that some human musician
>>>could find the temperament where 3/2 vanishes to be a useful
>>>approximation of 5-limit JI (but hey at least the complexity is
>>>0.001)? And likewise for some temperament where the number of
>>>generators to each prime is around a googol (but hey at least the
>>>error is 10^-99 cents)?
>>
>>This is a false dilemma. The size of this thread shows how hard
>>it is to agree on the cutoffs.
>
>Well yeah but we're probably within a factor of 2 of agreeing.
>Another species could disagree with us by orders of magnitude.

This was addressed to "So Carl". Am I not human?

>So you do want cutoffs on error and complexity?

I think we want roughly the same things. Except I want to answer
questions like those I just mentioned (which among other things
investigate making complexity comparable across harmonic limit and
dimensionality), and why Paul's creepy complexity gives the
numbers it does, before continuing.

And I maintain that a survey of the tuning list would be a
cataclysmic scientific error. But with reasonable pain axes,
such as cents**2 and 2**notes, finding the widest reasonably-
convex moat that encloses the desired number of temperaments
for each case (limit, dimensionality) would seem to be a good
idea and sufficient to eliminate the need for a survey.
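
As a sketch of the transform those pain axes imply, reading cents**2
and 2**notes literally (the function name is mine); the moat hunt
would then run in this plane rather than on the log-log plot:

def pain_point(error_cents, complexity_notes):
    # Squared error in cents on one axis, 2 to the power of the
    # note-count complexity on the other.
    return error_cents ** 2, 2.0 ** complexity_notes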

>But cutoffs utterly violate log-flat badness in the regions
>outside of them.

I have no problem with smoothing the cutoff region.

>> Can you name the temperaments that fell outside of the top 20
>> on Gene's 114 list?
>
>Yes.

Eep! Sorry, I meant the ones that you want that fell outside
Gene's top 20/114.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 7:37:53 PM

>> [Dave Keenan wrote:]
>> >For me there are three candidates on the table at the moment. log-log
>> >circles or ellipses, log-log hyperbolae, and linear-linear
>> >nearly-straight-lines.
>>
>> Can we keep log-flat on the table for the moment?
>
>If you mean, log-flat with no other cutoffs, then no.

I mean log-flat with *some* kind of cutoffs.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 7:40:39 PM

>> >> Dave doesn't seem to want the macros which would
>> >> be necessary for the scale-building stuff.
>
>To me, in the context of the current highly mathematical discussion,
>this said to me that you think macros are necessary (i.e. you can't do
>without them) for scale-building stuff.
>
>I think this is obviously wrong since you can show how to build a
>scale using meantone which is not a macrotemperament.
>
>But since I now learn that you apparently only meant "desirable"
>rather than "necessary" in the strict logical sense,

Hate to nitpick now that we understand each other, but it has
nothing to do with strict logic, but rather *what* one wants to
do. Try this again:

>>Do *what* without them? Build any decent scale (the above sense)?
>>Or run any kind of decent scale-building program (the sense in
>>which I said "necessary")?

>you should note
>that I long ago agreed to neutral thirds and pelogic being on the
>5-limit list. Surely they are macro enough for your purposes.

Herman just got through posting on tuning how beep is a great
temperament for scale-building.

-Carl

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 7:43:05 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >>>>We have a choice -- derive badness from first principles or cook
> >>>>it from a survey of the tuning list, our personal tastes, etc.
> >>>
> >>>What first principles of the human psychology of the musical use
> >>>of temperaments did you have in mind?
> >>
> >>Since I'm not aware of any, and since we don't have the means to
> >>experimentally determine any, I suggest using only mathematical
> >>first principles
> >
> >But badness is clearly a psychological property,
>
> No it isn't! What evidence do we have that badness means anything
> musical at all?

So why call it "badness"? Bad for whom? Bad for what?

Humans and music, that's who and what.

We either want to _make_ badness mean something psychological or use a
different word for this thing we are trying to come up with to model
the psychology of musical usefulness of temperaments, at least in so
far as to produce a short-list.

> >what have mathematical first principles got to do with it?
>
> What _don't_ they have to do with? For folks into "digital physics"
> like me, nothing.

Sure. Everything may, _in principle_, be derivable from mathematics
but the intervening complexity of human neuro-physiology is such that
this is utterly irrelevant to what we are trying to do here.

🔗Carl Lumma <ekin@lumma.org>

2/11/2004 7:47:05 PM

>> >But badness is clearly a psychological property,
>>
>> No it isn't! What evidence do we have that badness means anything
>> musical at all?
>
>So why call it "badness"? Bad for whom? Bad for what?
>
>Humans and music, that's who and what.
>
>We either want to _make_ badness mean something psychological or use a
>different word for this thing we are trying to come up with to model
>the psychology of musical usefulness of temperaments, at least in so
>far as to produce a short-list.
>
>> >what have mathematical first principles got to do with it?
>>
>> What _don't_ they have to do with? For folks into "digital physics"
>> like me, nothing.
>
>Sure. Everything may, _in principle_, be derivable from mathematics
>but the intervening complexity of human neuro-physiology is such that
>this is utterly irrelevant to what we are trying to do here.

I want to make badness psychological. But if we choose it to fit
the "data", it's only as good as the data. If, on the other hand,
we derive it from first principles, and it happens to *match* a
survey of the tuning list, then you might have my attention.

-Carl

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 8:10:37 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> [Dave wrote:]
> >Well yeah but we're probably within a factor of 2 of agreeing.
> >Another species could disagree with us by orders of magnitude.
>
> This was addressed to "So Carl".

I think that's a fairly universal English-speaking way of announcing
that you are about to ask the named person a question. What of it?

> Am I not human?

I have always assumed so. Are you telling me you're not? :-) What's
your point?

> I think we want roughly the same things.

Wow. That would be great.

> Except I want to answer
> questions like those I just mentioned (which among other things
> investigate making complexity comparable across harmonic limit and
> dimensionality), and why Paul's creepy complexity gives the
> numbers it does, before continuing.

Go for it.

> And I maintain that a survey of the tuning list would be a
> cataclysmic scientific error. But with reasonable pain axes,
> such as cents**2 and 2**notes, finding the widest reasonably-
> convex moat that encloses the desired number of temperaments
> for each case (limit, dimensionality) would seem to be a good
> idea and sufficient to eliminate the need for a survey.

Woah. Now we're talking!

> >But cutoffs utterly violate log-flat badness in the regions
> >outside of them.
>
> I have no problem with smoothing the cutoff region.

Great!

> >> Can you name the temperaments that fell outside of the top 20
> >> on Gene's 114 list?
> >
> >Yes.
>
> Eep! Sorry, I meant the ones that you want that fell outside
> Gene's top 20/114.

Oh. Sorry. I just don't have any enthusiasm for working this out now.
I just know that I like Paul's latest list (which I can't easily find)
because it has the ones I want plus a few more that bring it up
against a reasonable moat.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 8:15:37 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@b...> wrote:
>
> > If you mean, log-flat with no other cutoffs, then no. I don't think
> > this was ever on the table, even from Gene. There is an infinite
> > number in the increasing complexity direction. I understand this would
> > be a single straight line on the log-log plot parallel to the apparent
> > lower left "edge" of the populated region.
>
> It would in particular be the line which is a lim sup for the slope,
> and hence containing an infinite supply of temperaments.
>
> http://en.wikipedia.org/wiki/Limit_inferior
>
> > If you mean log-flat badness in conjunction with error and complexity
> > cutoffs then it can stay on the table if you like, but I don't know
> > how you can psychologically justify the corners.
>
> Can you specifically cite when an obnoxious temperament turned up in a
> corner, and you couldn't get rid of it without losing something good?

No. I can't, but it may have happened and I wouldn't have known since
we only started plotting things very recently.

However, it may well be possible in some cases to find a wide enough
moat in the right (subjective) ballpark that can accommodate both a
smooth curve and a tri-linear (bad, comp, err) cutoff giving the same
list.

That's the beauty of moats.

🔗Dave Keenan <d.keenan@bigpond.net.au>

2/11/2004 8:31:53 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> >> Dave doesn't seem to want the macros which would
> >> >> be necessary for the scale-building stuff.
> >
> >To me, in the context of the current highly mathematical discussion,
> >this said to me that you think macros are necessary (i.e. you can't do
> >without them) for scale-building stuff.
> >
> >I think this is obviously wrong since you can show how to build a
> >scale using meantone which is not a macrotemperament.
> >
> >But since I now learn that you apparently only meant "desirable"
> >rather than "necessary" in the strict logical sense,
>
> Hate to nitpick now that we understand each other, but it has
> nothing to do with strict logic, but rather *what* one wants to
> do. Try this again:
>
> >>Do *what* without them? Build any decent scale (the above sense)?
> >>Or run any kind of decent scale-building program (the sense in
> >>which I said "necessary")?

By "scale-building stuff" I assumed you mean "showing readers how to
take temperaments and build scales from them, complete with several
examples".

This corresponds closely to the second option above. I don't think our
difference of interpretation has anything to do with that. But the
words "any kind of decent" did not appear in your original statement.
If they had, there would have been no problem.

So continuing the nitpicking:

"necessary for <whatever>" does not mean "indispensable for any kind
of decent <whatever>". But I agree that "necessary for any kind of
decent <whatever>" would have been just as good as "desirable for
<whatever>".

But of course I still disagree with your opinion on this.

> >you should note
> >that I long ago agreed to neutral thirds and pelogic being on the
> >5-limit list. Surely they are macro enough for your purposes.
>
> Herman just got through posting on tuning how beep is a great
> temperament for scale-building.

It may be good for scale building, but it isn't a temperament in the
sense of approximation of JI. Herman agrees.

🔗Carl Lumma <ekin@lumma.org>

2/12/2004 1:51:25 AM

>> >> Can you name the temperaments that fell outside of the top 20
>> >> on Gene's 114 list?
>> >
>> >Yes.
>>
>> Eep! Sorry, I meant the ones that you want that fell outside
>> Gene's top 20/114.
>
>Oh. Sorry. I just don't have any enthusiasm for working this out now.
>I just know that I like Paul's latest list (which I can't easily find)
>because it has the ones I want plus a few more that bring it up
>against a reasonable moat.

And which list is that?

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/12/2004 1:59:25 AM

>So continuing the nitpicking:
>
>"necessary for <whatever>" does not mean "indispensable for any kind
>of decent <whatever>". But I agree that "necessary for any kind of
>decent <whatever>" would have been just as good as "desirable for
><whatever>".

I'm sorry, I can't parse this.

>But of course I still disagree with your opinion on this.

I think you understand me, but ultimately I'm not sure since I
couldn't parse the above.

>> >you should note
>> >that I long ago agreed to neutral thirds and pelogic being on the
>> >5-limit list. Surely they are macro enough for your purposes.
>>
>> Herman just got through posting on tuning how beep is a great
>> temperament for scale-building.
>
>It may be good for scale building, but it isn't a temperament in the
>sense of approximation of JI. Herman agrees.

But if you view it as a "temperament in the sense of 'approximating'
JI" it still works. The point is that temperament, in whatever
sense, is useful for all sorts of reasons.

-Carl

🔗Graham Breed <graham@microtonal.co.uk>

2/12/2004 12:33:28 PM

Dave Keenan wrote:

> I think that was _convex_ hull. That's just the smallest convex shape
> that encloses a set of points - a polygon with the "outermost" points
> as its vertices. I think it goes without saying that we would never
> exclude a temperament that was inside the convex hull of the included
> temperaments. Although not as formalised (yet), moats can be
> considered as taking the convex hull idea a little further, by
> including not only those _inside_ the convex hull but also those
> _close_ to the outside of it, and insisting that the hull is not only
> convex, but smooth.

Thanks! I asked because they got mentioned in a paper relating to a project I was looking at and have now chosen. What I'll be looking at needn't be convex but does have to be smooth.

>>- would k-means have anything to do with the clustering?
>>
>>http://people.revoledu.com/kardi/tutorial/kMean/WhatIs.htm
> > Sorry. Don't know anything about these.

Same project -- I was told to look them up. Fuzzy k-means in fact, which will be the opposite of "moats". I suppose you could segment temperaments into two categories, one "good" and the other "bad" but it'd need a peculiar geometry. Alternatively, how about separate categories for "run of the mill", "way out" and "microtemperaments"?

Anyway, it may have something to do with clustering theory.

Happy de-tox! (if you haven't already left)

Graham
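
For concreteness, the convex hull Dave describes can be computed with the
standard monotone-chain algorithm; the (complexity, error) pairs below are
placeholders rather than real temperament data:

def cross(o, a, b):
    # z-component of (a - o) x (b - o); positive means a left turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    # Andrew's monotone chain; returns hull vertices in counter-clockwise order.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# The interior point (6, 4) is dropped; the other four are hull vertices.
print(convex_hull([(3, 5), (6, 2), (10, 1), (4, 9), (6, 4)]))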

🔗Graham Breed <graham@microtonal.co.uk>

2/12/2004 12:57:37 PM

Dave Keenan wrote:

> I'm guessing ennealimmal is so complex and so close to 7-limit JI that
> most musicians would happily use the two interchangeably without
> noticing. Is that something we could test?

Ennealimmal isn't *that* complex. By my measure, 27, so you get 18 tetrads in the 45 note MOS. That's within what's possible to fit to a keyboard, and more useful as a general notation system for computer music (it might be too strange for musicians to read, but who knows?)

It is very accurate -- to 0.2 cents.

IIRC, it tunes out the 2401:2400 comma. So that comma pump I came up with already shows something that would work in ennealimmal but not 7-limit JI. And you get conceptual simplicity relative to the planar temperament, as well as more accuracy than miracle.

> I don't see how the fact that x/612ths of an octave is a fine way to
> tune the ennealimmal generator has any bearing on the musical
> usefulness of 612-ET _as_an_ET_. One can't hear the difference between
> ennealimmal tuned as a subset of 612-ET and tuned with an
> ever-so-slightly different generator that is an irrational fraction of
> an octave.

Yes, the 7-limit minimax ennealimmal generator is 24.993/612 octaves, so you probably couldn't tell the difference. As Gene said, it's a matter of simplicity and convenience. It's much easier to think in integer steps than multiples of two generators. And a computer could easily hold a 612 note octave tuning table, to allow for arbitrary modulations. A lot of concepts relating to algorithmic composition work best with cyclic groups.

Quite indigestible for most of us, but not in loony territory yet.

Graham
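
A rough sketch of the 612-entry tuning table idea, assuming ennealimmal's
period of 1/9 octave (68 steps of 612) and rounding the generator to 25
steps; Graham's minimax figure of 24.993 steps differs from that by only
about 0.01 cents:

STEPS_PER_OCTAVE = 612
step_cents = 1200.0 / STEPS_PER_OCTAVE        # about 1.96 cents per step

# Full cyclic tuning table, one entry per step of 612-ET.
table = [i * step_cents for i in range(STEPS_PER_OCTAVE)]

# A 45-note ennealimmal chain: 9 periods of 68 steps, 5 generators of 25
# steps per period (45 being the MOS size mentioned in the previous post).
period_steps, gen_steps = STEPS_PER_OCTAVE // 9, 25
chain = sorted({(p * period_steps + g * gen_steps) % STEPS_PER_OCTAVE
                for p in range(9) for g in range(5)})
print(len(chain), round(table[gen_steps], 3))   # 45 notes, generator ~49.02 cents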

🔗Gene Ward Smith <gwsmith@svpal.org>

2/12/2004 2:42:29 PM

--- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:

> And a computer could easily hold a 612 note octave tuning table, to
> allow for arbitrary modulations.

Most people would probably be satisfied with the tuning accuracy of
171 notes. Joe would be satisfied with 72 notes, I suppose, given
that he thinks that is JI. Last time I started to do an ennealimmal
piece, I ended up with 171-et instead because the tuning sounded fine
to me and you also temper out the schisma, which is convenient. This
time I'm sticking to pure, top-tuned ennealimmal, but not because 171
isn't actually already an acceptable way to tune it, at least to me.

🔗Carl Lumma <ekin@lumma.org>

2/12/2004 3:47:30 PM

At 04:11 PM 2/11/2004, you wrote:
>--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
>> --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>> > The circle rocks, dude. It penalizes temperaments equally for
>> > trading too much of their error for complexity, or complexity for
>> > error. Look at the plots, and the first things you hit are 19, 12,
>> > and 53. And 22 in the 7-limit. Further, my suggestion that 1cents =
>> > zero should satisfy Dave's micro fears. Or make 0 cents = zero. It
>> > works either way.
>>
>> It does? Look at the graph! How can you make 0 cents = zero when it's
>> infinitely far away? And what about the position of the origin on the
>> *complexity* axis??
>>
>> > No origin; pfff.
>>
>> piano-forte-forte-forte?
>>
>> P.S. The relative scaling of the two axes is completely arbitrary,
>> so, even if you actually selected an origin, the circle would produce
>> different results for a different relative scaling.
>
>But an elliptical cutoff in log-log space could be made to work. You
>do have to choose nonzero values of error and complexity to represent
>zero pain (the center of the ellipse). But that's OK.

Glad you agree.

>So I'd also like to see if one of these elliptical-log beasties can be
>made to give the same list as Paul's red line on the 7-limit-LT
>log-log plot. How about it Carl?

I didn't follow how a circle became an ellipse.

Sorry all if I'm dropping correspondence of late. I'm having a
nightmare of a time with my provider at the moment.

-Carl
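
A sketch of what an elliptical cutoff in log-log space could look like,
using the "1 cent and 1 note as zeros" suggestion above for the center;
the two radii are made-up placeholders, since choosing them is exactly
what is under debate:

from math import log2

def inside_log_ellipse(error_cents, complexity, radii=(6.0, 4.0)):
    # Center at 1 cent and 1 note, i.e. log2(1) = 0 on both axes; the
    # radii (in octaves of log2) are hypothetical, not agreed-upon values.
    x = log2(error_cents) / radii[0]
    y = log2(complexity) / radii[1]
    return x**2 + y**2 <= 1.0

print(inside_log_ellipse(5.0, 8.0))   # a moderate-error, moderate-complexity case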

🔗Paul Erlich <perlich@aya.yale.edu>

2/13/2004 12:19:51 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> >And we're not suggesting any "goodness" measure which
> >is applicable to both 5-limit and 7-limit systems of any respective
> >dimensionalities.
>
> Any fundamental reason why not?

In some cases at least, it would be comparing lengths vs. areas vs.
volumes.

> >But we are suggesting something similar be used in
> >each of the Pascal's triangle of cases, which seems logical.
>
> I'm a bit lost with the Pascal's triangle stuff. Can you populate
> a triangle with the things you're associating with it? Such would
> be grand, in the Wilson tradition....

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1

First row -- ?
Second row -- 2-limit (all octaves), ?
Third row -- 3-limit JI, 3-limit temperament, ?
Fourth row -- 5-limit JI, 5-limit LT, 5-limit ET, ?
Fifth row -- 7-limit JI, 7-limit PT, 7-limit LT, 7-limit ET, ?

Haven't really thought about what the '?'s mean -- 1 note?

> >If it's
> >wrong, it's wrong, and there goes the premise of our paper. But it's
> >a theory paper, not an edict. I think if the criteria we use are
> >easily grasped and well justified, we will have done a great job
> >publishing something truly pioneering and valuable as fodder for
> >experimentation.
>
> We have a choice -- derive badness from first principles or cook
> it from a survey of the tuning list, our personal tastes, etc.

First principles seems fine insofar as there's nothing arbitrary and
also no flagrant disagreement with known reality. Personally, I don't
advocate using badness at all.

🔗Paul Erlich <perlich@aya.yale.edu>

2/13/2004 12:24:30 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> >> I'm not.
> >> >
> >> >Then why are you suddenly silent on all this?
> >>
> >> Huh? I've been posting at a record rate.
> >
> >Not on this subject of cognitive limits that used to occupy you so.
>
> Those apply to scales, not tunings. Ideally the paper would show
> how to use the tools of temperament to find both. But that's up
> to you guys. Dave doesn't seem to want the macros which would
> be necessary for the scale-building stuff.

Macros?

Anyway, since you brought up the inevitability of scales in the
thread earlier, I'd encourage you to continue with that line of
thought.

> >> >> It is well known that Dave, for example, is far more
> >> >> micro-biased than I!
> >> >
> >> >?
> >>
> >> What's your question?
> >
> >What does micro-biased mean, on what basis do you say this about you
> >vs. Dave, and what is its relevance here?
>
> Micro-biased means biased in favor of microtemperaments. I've
> historically fought for macros vs. Dave.

Well hopefully you're aware of Dave's current position.

Is that what you meant by macros above? I think Dave is saying the
opposite now, saying that temperaments should be included only if we
use enough of the relevant pitches to really necessitate the
particular temperament, and that seems to imply the use of scales
associated specifically with that temperament, at least if we use
your open-ended idea of "scales".

🔗Carl Lumma <ekin@lumma.org>

2/13/2004 12:28:44 PM

>> >And we're not suggesting any "goodness" measure which
>> >is applicable to both 5-limit and 7-limit systems of any
>> >respective dimensionalities.
>>
>> Any fundamental reason why not?
>
>In some cases at least, it would be comparing lengths vs. areas vs.
>volumes.

Yes, but I should think ideally we'd figure out how to normalize
in some way to bring this whole business back to scales.

>> >But we are suggesting something similar be used in
>> >each of the Pascal's triangle of cases, which seems logical.
>>
>> I'm a bit lost with the Pascal's triangle stuff. Can you populate
>> a triangle with the things you're associating with it? Such would
>> be grand, in the Wilson tradition....
>
>1
>1 1
>1 2 1
>1 3 3 1
>1 4 6 4 1
>
>First row -- ?
>Second row -- 2-limit (all octaves), ?
>Third row -- 3-limit JI, 3-limit temperament, ?
>Fourth row -- 5-limit JI, 5-limit LT, 5-limit ET, ?
>Fifth row -- 7-limit JI, 7-limit PT, 7-limit LT, 7-limit ET, ?

Great! Don't forget to mention that the number tells the number
of elements in the wedgie. This should go on the form chart.

-Carl
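
A quick check of that correspondence, reading row n of the triangle as the
binomial coefficients C(n, k) for n primes; C(4, 2) = 6 matches the
six-element wedgies of the 7-limit linear temperaments quoted in this
thread:

from math import comb

def pascal_row(n_primes):
    # Entry k is binomial(n_primes, k): the number of wedgie elements for
    # a temperament built from k independent vals on n_primes primes.
    return [comb(n_primes, k) for k in range(n_primes + 1)]

for n in range(1, 5):
    print(pascal_row(n))
# The last row printed is [1, 4, 6, 4, 1] -- the 7-limit (four-prime) case.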

🔗Paul Erlich <perlich@aya.yale.edu>

2/13/2004 12:28:56 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> >> Alternatively, then why doesn't the badness bound alone enclose a
> >> >> finite triangle?
> >> >
> >> >Not only is it, like the rectangle, infinite in area on the loglog
> >> >plot, since the zero-error line and zero-complexity lines are
> >> >infinitely far away, but it actually encloses an infinite number
> >> >of temperaments.
>
> Yet on ET charts like this...
>
> /tuning-math/files/Paul/et5loglog.gif
>
> ...the region beneath the 7-53 diagonal is empty.

Your point?

> Is there stuff
> there you haven't plotted?

With lower error? No, but you'd never know for sure just from looking
at the loglog graph.

> Wait -- and how can ETs appear more than once -- different maps?

Yes.

> That might explain different errors, but they are appearing at
> different complexities too... baffling.

I explained what I was doing with the complexity stuff, but these
complexity differences are very minor. My questions about this were
never answered, but you can look again at "Attn: Gene 2" to see how
it makes sense in the 3-limit (where complexity is proportional to
the number of TOP notes per acoustical octave, so the map matters
slightly).

🔗Carl Lumma <ekin@lumma.org>

2/13/2004 12:34:00 PM

>> >> >> why doesn't the badness bound alone
>> >> >> enclose a finite triangle?
>> >> >
>> >> > it actually encloses an infinite number
>> >> > of temperaments.
>>
>> Yet on ET charts like this...
>>
>> /tuning-math/files/Paul/et5loglog.gif
>>
>> ...the region beneath the 7-53 diagonal is empty.
>
>Your point?
>
>> Is there stuff
>> there you haven't plotted?
>
>With lower error? No, but you'd never know for sure just from looking
>at the loglog graph.

Ok, but you're saying there isn't. And we've gone down to 1 note,
and if the complexity variations are slight that means the triangle
is empty, as opposed to enclosing an infinite number of temperaments.

>> Wait -- and how can ETs appear more than once -- different maps?
>
>Yes.
>
>> That might explain different errors, but they are appearing at
>> different complexities too... baffling.
>
>I explained what I was doing with the complexity stuff, but these
>complexity differences are very minor. My questions about this were
>never answered, but you can look again at "Attn: Gene 2" to see how
>it makes sense in the 3-limit (where complexity is proportional to
>the number of TOP notes per acoustical octave, so the map matters
>slightly).

Ok...

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

2/13/2004 12:41:11 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> >And what about the position of the origin on the
> >> >*complexity* axis??
> >>
> >> I already answered that.
> >
> >Where? I didn't see anything on that, but I could have misunderstood
> >something.
>
> Sorry; you use 1 cent and 1 note as zeros.
>
> >> >P.S. The relative scaling of the two axes is completely arbitrary,
> >>
> >> Howso? They're both base2 logs of fixed units.
> >
> >Actually, the vertical axis isn't base anything, since it's a ratio
> >of logs.
>
> That cents are log seems irrelevant. They're fundamental units!

??

> >> You mean c is
> >> arbitrary in y = x + c?
> >
> >Not what I meant, but this is the equation of a line, not a circle.
>
> Yes, I know. But I wasn't trying to give a circle (IIRC that form
> is like x**2 + y**2 something something), or a line, but the
> intersection point of the axes, which is what I thought you meant by
> relative scaling.

Scaling is one thing, and where you depict the axes intersecting is
another.

> That means I only meant the above to apply when
> either x or y is zero, I think.

I lost you.

> If a circle is just so unsatisfactory, please instead consider my
> suggestion to be that we equally penalize temperaments for trading
> too much of their error for comp., or too much of their comp for
> error.

Have I ever *not* done this?

> Incidentally, I don't see the point of a moat vs. a circle, since
> the moat's 'hole' is apparently empty on your charts

Don't know what you mean.

> -- but I
> guess the moat is only meant for linear-linear, or?

It looks a little different on loglog but loglog's not a total deal-
stopper.

🔗Paul Erlich <perlich@aya.yale.edu>

2/13/2004 12:49:18 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> and the complexity < whatever
> >> you want. 100 notes? 20 notes?
> >
> >Why would you need a complexity bound in addition to the circle? The
> >circle, being finite, would only extend to a certain maximum
> >complexity anyway . . .
>
> To determine its radius.

Oh, so there isn't an additional complexity bound. Gotcha. I'll try
some circles when I have a chance . . .

🔗Gene Ward Smith <gwsmith@svpal.org>

2/13/2004 12:49:08 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> Yes, but I should think ideally we'd figure out how to normalize
> in some way to bring this whole business back to scales.

Why must we care that much about scales? Half the time I'm not using
them myself. For me temperaments are more significant.

🔗Carl Lumma <ekin@lumma.org>

2/13/2004 12:50:27 PM

>> >> >P.S. The relative scaling of the two axes is completely
>> >> >arbitrary,
>> >>
>> >> Howso? They're both base2 logs of fixed units.
>> >
>> >Actually, the vertical axis isn't base anything, since it's a
>> >ratio of logs.
>>
>> That cents are log seems irrelevant. They're fundamental units!
>
>??

I don't know what you meant by "ratio of logs".

>> >> You mean c is arbitrary in y = x + c?
>> >
>> >Not what I meant, but this is the equation of a line, not a circle.
>>
>> Yes, I know. But I wasn't trying to give a circle (IIRC that form
>> is like x**2 + y**2 something something), or a line, but the
>> intersection point of the axes, which is what I thought you meant by
>> relative scaling.
>
>Scaling is one thing, and where you depict the axes intersecting is
>another.

Yes, I gather. I have no clue what relative scaling is.

>> That means I only meant the above to apply when
>> either x or y is zero, I think.
>
>I lost you.

When x or y is zero in the above, you get the intersection point
for the axes.

>> Incidentally, I don't see the point of a moat vs. a circle, since
>> the moat's 'hole' is apparently empty on your charts
>
>Don't know what you mean.

You should see shortly that I thought the moat was the region of
acceptance, not the region of safety.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

2/13/2004 12:50:52 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >>>Having 81/80 in the kernel implies you can harmonize a diatonic
> >>>scale all the way through in consonant thirds. Similar commas
> >>>have similar implications of the kind Carl always seemed to care
> >>>about.
> >>
> >> Don't you mean 25:24?
> >
> >No, 81:80. 25:24 in the kernel doesn't give you either a diatonic
> >scale or 'consonant thirds'.
>
> Oh, in the kernel means tempered out (right?)

Yeah, well, what I meant was in the kernel of the temperament.

> giving neutral
> thirds. So it isn't immediately obvious why 81:80 throws 5:4
> and 6:5 on the same scale degree (always thirds?). . .

81:80 alone doesn't define a single finite scale, but the whole
family that it does imply includes the diatonic scale as a member.
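
One consequence that is easy to verify exactly: with 81:80 in the kernel,
four fifths up and two octaves down are identified with the major third
5:4, since (3/2)^4 / 2^2 = 81/64 differs from 5/4 by exactly 81/80:

from fractions import Fraction

fifth, octave, major_third = Fraction(3, 2), Fraction(2), Fraction(5, 4)

pythagorean_third = fifth**4 / octave**2    # four fifths minus two octaves
comma = pythagorean_third / major_third     # the gap that gets tempered out

print(pythagorean_third, comma)             # 81/64 81/80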

🔗Carl Lumma <ekin@lumma.org>

2/13/2004 1:03:11 PM

>> Yes, but I should think ideally we'd figure out how to normalize
>> in some way to bring this whole business back to scales.
>
>Why must we care that much about scales? Half the time I'm not using
>them myself. For me temperaments are more significant.

Dragnabbit, we've already been through this. I do *not* mean
*diatonics*, I mean *scales*, YOUR definition, pitches.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

2/13/2004 1:12:28 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> Dragnabbit, we've already been through this. I do *not* mean
> *diatonics*, I mean *scales*, YOUR definition, pitches.

A discrete set of notes? A periodic discrete set of notes? Some other
definition?

🔗Carl Lumma <ekin@lumma.org>

2/13/2004 1:32:08 PM

>> Dragnabbit, we've already been through this. I do *not* mean
>> *diatonics*, I mean *scales*, YOUR definition, pitches.
>
>A discrete set of notes?

Pitches, yes.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

2/13/2004 1:40:10 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> >> >> why doesn't the badness bound alone
> >> >> >> enclose a finite triangle?
> >> >> >
> >> >> > it actually encloses an infinite number
> >> >> > of temperaments.
> >>
> >> Yet on ET charts like this...
> >>
> >> /tuning-math/files/Paul/et5loglog.gif
> >>
> >> ...the region beneath the 7-53 diagonal is empty.
> >
> >Your point?
> >
> >> Is there stuff
> >> there you haven't plotted?
> >
> >With lower error? No, but you'd never know for sure just from looking
> >at the loglog graph.
>
> Ok, but you're saying there isn't. And we've gone down to 1 note,
> and if the complexity variations are slight that means the triangle
> is empty, as opposed to enclosing an infinite number of
>temperaments.

Which triangle are you talking about? I thought you were talking
about the one formed by using one of Gene's log-flat badness cutoffs
by itself, without any complexity or error cutoffs.

🔗Paul Erlich <perlich@aya.yale.edu>

2/13/2004 1:47:51 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> >> >P.S. The relative scaling of the two axes is completely
> >> >> >arbitrary,
> >> >>
> >> >> Howso? They're both base2 logs of fixed units.
> >> >
> >> >Actually, the vertical axis isn't base anything, since it's a
> >> >ratio of logs.
> >>
> >> That cents are log seems irrelevant. They're fundamental units!
> >
> >??
>
> I don't know what you meant by "ratio of logs".

log(n/d)
--------
log(n*d)

is one log divided by another log, hence a "ratio of logs". It
doesn't matter what base you use, you get the same answer.
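
For instance, taking 81/80 as the ratio, all three bases give the same
value (up to floating-point rounding):

from math import log, log2, log10

n, d = 81, 80
print(log(n/d)/log(n*d), log2(n/d)/log2(n*d), log10(n/d)/log10(n*d))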

> >> >> You mean c is arbitrary in y = x + c?
> >> >
> >> >Not what I meant, but this is the equation of a line, not a circle.
> >>
> >> Yes, I know. But I wasn't trying to give a circle (IIRC that form
> >> is like x**2 + y**2 something something), or a line, but the
> >> intersection point of the axes, which is what I thought you meant by
> >> relative scaling.
> >
> >Scaling is one thing, and where you depict the axes intersecting is
> >another.
>
> Yes, I gather. I have no clue what relative scaling is.

Relative scaling would be, for example, what one inch represents on
one of the axes, vs. what it represents on the other axis.

> >> That means I only meant the above to apply when
> >> either x or y is zero, I think.
> >
> >I lost you.
>
> When x or y is zero in the above, you get the intersection point
> for the axes.

What's "the above", exactly? y = x + c? This is the equation of a
line, which intersects the y-axis at c, and the x-axis at -c.

> >> Incidentally, I don't see the point of a moat vs. a circle, since
> >> the moat's 'hole' is apparently empty on your charts
> >
> >Don't know what you mean.
>
> You should see shortly that I thought the moat was the region of
> acceptance, not the region of safety.

So what's the 'hole', and what is apparently empty?

🔗Carl Lumma <ekin@lumma.org>

2/13/2004 1:52:24 PM

>> >> >> >> why doesn't the badness bound alone
>> >> >> >> enclose a finite triangle?
>> >> >> >
>> >> >> > it actually encloses an infinite number
>> >> >> > of temperaments.
>> >>
>> >> groups.yahoo.com/group/tuning-math/files/Paul/et5loglog.gif
>> >>
>> >> ...the region beneath the 7-53 diagonal is empty.
>> >> Is there stuff there you haven't plotted?
>> >
>> >With lower error? No, but you'd never know for sure just from
>> >looking at the loglog graph.
>>
>> Ok, but you're saying there isn't. And your graph goes down to
>> 1 note, and if the complexity variations are slight that means
>> the triangle is empty, as opposed to enclosing an infinite number
>> of temperaments.
>
>Which triangle are you talking about? I thought you were talking
>about the one formed by using one of Gene's log-flat badness cutoffs
>by itself, without any complexity or error cutoffs.

That's right.

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/13/2004 2:01:22 PM

>> >> >> >P.S. The relative scaling of the two axes is completely
>> >> >> >arbitrary,
>> >> >>
>> >> >> Howso? They're both base2 logs of fixed units.
>> >> >
>> >> >Actually, the vertical axis isn't base anything, since it's a
>> >> >ratio of logs.
>> >>
>> >> That cents are log seems irrelevant. They're fundamental units!
>> >
>> >??
>>
>> I don't know what you meant by "ratio of logs".
>
>log(n/d)
>--------
>log(n*d)
>
>is one log divided by another log, hence a "ratio of logs". It
>doesn't matter what base you use, you get the same answer.

Of course. I thought you were actually taking real cents error,
and then taking the log of that, though.

>> >Scaling is one thing, and where you depict the axes intersecting
>> >is another.
>>
>> Yes, I gather. I have no clue what relative scaling is.
>
>Relative scaling would be, for example, what one inch represents on
>one of the axes, vs. what it represents on the other axis.

Ok. Well however you did it, it seems that middle-of-the-road
temperaments like meantone and pajara are very close to 45deg.
off the axes.

>> >> Incidentally, I don't see the point of a moat vs. a circle, since
>> >> the moat's 'hole' is apparently empty on your charts
>> >
>> >Don't know what you mean.
>>
>> You should see shortly that I thought the moat was the region of
>> acceptance, not the region of safety.
>
>So what's the 'hole', and what is apparently empty?

Don't worry about this -- I was operating under a false definition
of moat.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

2/13/2004 2:02:43 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> >> >> >> why doesn't the badness bound alone
> >> >> >> >> enclose a finite triangle?
> >> >> >> >
> >> >> >> > it actually encloses an infinite number
> >> >> >> > of temperaments.
> >> >>
> >> >> groups.yahoo.com/group/tuning-math/files/Paul/et5loglog.gif
> >> >>
> >> >> ...the region beneath the 7-53 diagonal is empty.
> >> >> Is there stuff there you haven't plotted?
> >> >
> >> >With lower error? No, but you'd never know for sure just from
> >> >looking at the loglog graph.
> >>
> >> Ok, but you're saying there isn't. And your graph goes down to
> >> 1 note, and if the complexity variations are slight that means
> >> the triangle is empty, as opposed to enclosing an infinite number
> >> of temperaments.
> >
> >Which triangle are you talking about? I thought you were talking
> >about the one formed by using one of Gene's log-flat badness cutoffs
> >by itself, without any complexity or error cutoffs.
>
> That's right.

Well that would enclose an infinite number of temperaments unless
it's so low that it encloses none. But Gene never used such a low
cutoff, since he wanted more than zero temperaments to be included.

🔗Paul Erlich <perlich@aya.yale.edu>

2/13/2004 2:05:17 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> >> >> >P.S. The relative scaling of the two axes is completely
> >> >> >> >arbitrary,
> >> >> >>
> >> >> >> Howso? They're both base2 logs of fixed units.
> >> >> >
> >> >> >Actually, the vertical axis isn't base anything, since it's a
> >> >> >ratio of logs.
> >> >>
> >> >> That cents are log seems irrelevant. They're fundamental units!
> >> >
> >> >??
> >>
> >> I don't know what you meant by "ratio of logs".
> >
> >log(n/d)
> >--------
> >log(n*d)
> >
> >is one log divided by another log, hence a "ratio of logs". It
> >doesn't matter what base you use, you get the same answer.
>
> Of course. I thought you were actually taking real cents error,
> and then taking the log of that, though.

No, if we use log scaling on the error axis then we're essentially
taking the log of the above expression.

> >> >Scaling is one thing, and where you depict the axes intersecting
> >> >is another.
> >>
> >> Yes, I gather. I have no clue what relative scaling is.
> >
> >Relative scaling would be, for example, what one inch represents on
> >one of the axes, vs. what it represents on the other axis.
>
> Ok. Well however you did it, it seems that middle-of-the-road
> temperaments like meantone and pajara are very close to 45deg.
> off the axes.

It's too bad that 45-degree angle is completely arbitrary -- the
entire graph was scaled by Matlab so that it fit the screen.

🔗Carl Lumma <ekin@lumma.org>

2/13/2004 2:17:09 PM

>Well that would enclose an infinite number of temperaments unless
>it's so low that it encloses none. But Gene never used such a low
>cutoff, since he wanted more than zero temperaments to be included.

/tuning-math/files/Paul/et5loglog.gif

I can certainly enclose a finite number of plotted points here (as
with the line passing through 12 & 53), and I thought Gene said
badness alone *could* give a finite list, just that it would include
lots of crap (like 1 & 2).

-Carl

🔗Carl Lumma <ekin@lumma.org>

2/13/2004 2:19:03 PM

>> >> I don't know what you meant by "ratio of logs".
>> >
>> >log(n/d)
>> >--------
>> >log(n*d)
>> >
>> >is one log divided by another log, hence a "ratio of logs". It
>> >doesn't matter what base you use, you get the same answer.
>>
>> Of course. I thought you were actually taking real cents error,
>> and then taking the log of that, though.
>
>No, if we use log scaling on the error axis then we're essentially
>taking the log of the above expression.

Right! I thought that's why it's called loglog.

>> Ok. Well however you did it, it seems that middle-of-the-road
>> temperaments like meantone and pajara are very close to 45deg.
>> off the axes.
>
>It's too bad that 45-degree angle is completely arbitrary -- the
>entire graph was scaled by Matlab so that it fit the screen.

Well it'd still be nice to see a circle, out of curiosity.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

2/13/2004 2:26:55 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> >Well that would enclose an infinite number of temperaments unless
> >it's so low that it encloses none. But Gene never used such a low
> >cutoff, since he wanted more than zero temperaments to be included.
>
> /tuning-math/files/Paul/et5loglog.gif
>
> I can certainly enclose a finite number of plotted points here (as
> with the line passing through 12 & 53),

Just 1, 1, and 3, right? But then there might be more to the right of
the graph's range.

> and I thought Gene said
> badness alone *could* give a finite list, just that it would include
> lots of crap (like 1 & 2).

Right, but that wasn't log-flat badness, that was a badness measure
which uses any higher exponent on complexity than what you would use
to get log-flat badness.

🔗Gene Ward Smith <gwsmith@svpal.org>

2/13/2004 3:41:47 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >Well that would enclose an infinite number of temperaments unless
> >it's so low that it encloses none. But Gene never used such a low
> >cutoff, since he wanted more than zero temperaments to be included.
>
> /tuning-math/files/Paul/et5loglog.gif
>
> I can certainly enclose a finite number of plotted points here (as
> with the line passing through 12 & 53), and I thought Gene said
> badness alone *could* give a finite list, just that it would include
> lots of crap (like 1 & 2).

It can and will, so long as you go past the critical, log-flat
exponent.
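
For reference, the badness figure Gene used for the kitchen-sink cull has
this error-times-complexity**k form, with k = 2.8 and a cutoff of 10000.
A minimal sketch; the sample values are the error and complexity columns
of rows 1 and 9 of that list and should reproduce its badness column to
within rounding:

def badness(error, complexity, exponent=2.8):
    # Gene's 2.8-exponent badness figure.
    return error * complexity**exponent

def cull(candidates, cutoff=10000.0):
    # Keep only (error, complexity) pairs whose badness is under the cutoff.
    return [c for c in candidates if badness(*c) < cutoff]

# Rows 1 and 9 of the posted list; compare with 662.236987 and 1099.959348.
sample = [(77.285947, 2.153690), (0.036377, 39.828719)]
for err, comp in sample:
    print(round(badness(err, comp), 2))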

🔗Paul Erlich <perlich@aya.yale.edu>

4/4/2004 4:22:38 PM

Hi Gene,

Would you be so kind as to produce a file like the one below, but
instead of culling to 126 lines, leave all 32201 in there? That would
be great. If that's too much, you could cut off the error and
complexity wherever you see fit. The idea, though, is to produce a
graph, and as most pieces of paper are rectangular, the data should
fill a rectangular region. I'm *not* arguing for a rectangular
badness function.

Also could you provide the TM-reduced kernel bases -- at least for
the 126 below?

Thanks so much,
Paul

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> I first made a candidate list by the kitchen sink method:
>
> (1) All pairs n,m<=200 of standard vals
>
> (2) All pairs n,m<=200 of TOP vals
>
> (3) All pairs 100<=n,m<400 of standard vals
>
> (4) All pairs 100<=n,m<=400 of TOP vals
>
> (5) Generators of standard vals up to 100
>
> (6) Generators of certain nonstandard vals up to 100
>
> (7) Pairs of commas from Paul's list of relative error < 0.06,
> epimericity < 0.5
>
> (8) Pairs of vals with consistent badness figure < 1.5 up to 5000
>
> This lead to a list of 32201 candidate wedgies, most of which of
> course were incredible garbage. I then accepted everything with a
2.8
> exponent badness less than 10000, where error is TOP error and
> complexity is our mysterious L1 TOP complexity. I did not do any
> cutting off for either error or complexity, figuring people could
> decide how to do that for themselves. The first six systems are
> macrotemperaments of dubious utility, number 7 is the {15/14, 25/24}
> temperament, and 8 and 9 are the beep-ennealimmal pair, and number 13
> is father. After ennealimmal, we don't get back into the micros until
> number 46; if we wanted to avoid going there we can cutoff at 4000.
> Number 46, incidentally, has TM basis {2401/2400, 65625/65536} and is
> covered by 140, 171, 202 and 311; the last is interesting because of
> the peculiar talents of 311.
>
>
>
> 1 [0, 0, 2, 0, 3, 5] 662.236987 77.285947 2.153690
> 2 [1, 1, 0, -1, -3, -3] 806.955502 64.326132 2.467788
> 3 [0, 0, 3, 0, 5, 7] 829.171704 30.152577 3.266201
> 4 [0, 2, 2, 3, 3, -1] 870.690617 33.049025 3.216583
> 5 [1, 2, 1, 1, -1, -3] 888.831828 49.490949 2.805189
> 6 [1, 2, 3, 1, 2, 1] 1058.235145 33.404241 3.435525
> 7 [2, 1, 3, -3, -1, 4] 1076.506437 16.837898 4.414720
> 8 [2, 3, 1, 0, -4, -6] 1099.121425 14.176105 4.729524
> 9 [18, 27, 18, 1, -22, -34] 1099.959348 .036377 39.828719
> 10 [1, -1, 0, -4, -3, 3] 1110.471803 39.807123 3.282968
> 11 [0, 5, 0, 8, 0, -14] 1352.620311 7.239629 6.474937
> 12 [1, -1, -2, -4, -6, -2] 1414.400610 20.759083 4.516198
> 13 [1, -1, 3, -4, 2, 10] 1429.376082 14.130876 5.200719
> 14 [1, 4, -2, 4, -6, -16] 1586.917865 4.771049 7.955969
> 15 [1, 4, 10, 4, 13, 12] 1689.455290 1.698521 11.765178
> 16 [2, 1, -1, -3, -7, -5] 1710.030839 16.874108 5.204166
> 17 [1, 4, 3, 4, 2, -4] 1749.120722 14.253642 5.572288
> 18 [0, 0, 4, 0, 6, 9] 1781.787825 33.049025 4.153970
> 19 [1, -1, 1, -4, -1, 5] 1827.319456 54.908088 3.496512
> 20 [4, 4, 4, -3, -5, -2] 1926.265442 5.871540 7.916963
> 21 [2, -4, -4, -11, -12, 2] 2188.881053 3.106578 10.402108
> 22 [3, 0, 6, -7, 1, 14] 2201.891023 5.870879 8.304602
> 23 [0, 0, 5, 0, 8, 12] 2252.838883 19.840685 5.419891
> 24 [4, 2, 2, -6, -8, -1] 2306.678659 7.657798 7.679190
> 25 [2, 1, 6, -3, 4, 11] 2392.139586 9.396316 7.231437
> 26 [2, -1, 1, -6, -4, 5] 2452.275337 22.453717 5.345120
> 27 [0, 0, 7, 0, 11, 16] 2580.688285 9.431411 7.420171
> 28 [1, -3, -4, -7, -9, -1] 2669.323351 9.734056 7.425960
> 29 [5, 1, 12, -10, 5, 25] 2766.028555 1.276744 15.536039
> 30 [7, 9, 13, -2, 1, 5] 2852.991531 1.610469 14.458536
> 31 [2, -2, 1, -8, -4, 8] 3002.749158 14.130876 6.779481
> 32 [3, 0, -6, -7, -18, -14] 3181.791246 2.939961 12.125211
> 33 [2, 8, 1, 8, -4, -20] 3182.905310 3.668842 11.204461
> 34 [6, -7, -2, -25, -20, 15] 3222.094343 .631014 21.101881
> 35 [4, -3, 2, -14, -8, 13] 3448.998676 3.187309 12.124601
> 36 [1, -3, -2, -7, -6, 4] 3518.666155 18.633939 6.499551
> 37 [1, 4, 5, 4, 5, 0] 3526.975600 19.977396 6.345287
> 38 [2, 6, 6, 5, 4, -3] 3589.967809 8.400361 8.700992
> 39 [2, 1, -4, -3, -12, -12] 3625.480387 9.146173 8.470366
> 40 [2, -2, -2, -8, -9, 1] 3634.089963 14.531543 7.185526
> 41 [3, 2, 4, -4, -2, 4] 3638.704033 20.759083 6.329002
> 42 [6, 5, 3, -6, -12, -7] 3680.095702 3.187309 12.408714
> 43 [2, 8, 8, 8, 7, -4] 3694.344150 3.582707 11.917575
> 44 [2, 3, 6, 0, 4, 6] 3938.578264 20.759083 6.510560
> 45 [0, 0, 5, 0, 8, 11] 3983.263457 38.017335 5.266481
> 46 [22, -5, 3, -59, -57, 21] 4009.709706 .073527 49.166221
> 47 [3, 5, 9, 1, 6, 7] 4092.014696 6.584324 9.946084
> 48 [7, -3, 8, -21, -7, 27] 4145.427852 .946061 19.979719
> 49 [1, -8, -14, -15, -25, -10] 4177.550548 .912904 20.291786
> 50 [3, 5, 1, 1, -7, -12] 4203.022260 12.066285 8.088219
> 51 [1, 9, -2, 12, -6, -30] 4235.792998 2.403879 14.430906
> 52 [6, 10, 10, 2, -1, -5] 4255.362112 3.106578 13.189661
> 53 [2, 5, 3, 3, -1, -7] 4264.417050 21.655518 6.597656
> 54 [6, 5, 22, -6, 18, 37] 4465.462582 .536356 25.127403
> 55 [0, 0, 12, 0, 19, 28] 4519.315488 3.557008 12.840061
> 56 [1, -3, 3, -7, 2, 15] 4555.017089 15.315953 7.644302
> 57 [1, -1, -5, -4, -11, -9] 4624.441621 14.789095 7.782398
> 58 [16, 2, 5, -34, -37, 6] 4705.894319 .307997 31.211875
> 59 [4, -32, -15, -60, -35, 55] 4750.916876 .066120 54.255591
> 60 [1, -8, 39, -15, 59, 113] 4919.628715 .074518 52.639423
> 61 [3, 0, -3, -7, -13, -7] 4967.108742 11.051598 8.859010
> 62 [6, 0, 0, -14, -17, 0] 5045.450988 5.526647 11.410361
> 63 [37, 46, 75, -13, 15, 45] 5230.896745 .021640 83.678088
> 64 [1, 6, 5, 7, 5, -5] 5261.484667 11.970043 8.788871
> 65 [3, 2, -1, -4, -10, -8] 5276.949135 17.564918 7.671954
> 66 [1, 4, -9, 4, -17, -32] 5338.184867 2.536420 15.376139
> 67 [1, -3, 5, -7, 5, 20] 5338.971970 8.959294 9.797992
> 68 [10, 9, 7, -9, -17, -9] 5386.217633 1.171542 20.325677
> 69 [19, 19, 57, -14, 37, 79] 5420.385757 .046052 64.713343
> 70 [5, 3, 7, -7, -3, 8] 5753.932407 7.459874 10.743721
> 71 [3, 5, -6, 1, -18, -28] 5846.930660 3.094040 14.795975
> 72 [3, 12, -1, 12, -10, -36] 5952.918469 1.698521 18.448015
> 73 [6, 0, 3, -14, -12, 7] 6137.760804 5.291448 12.429144
> 74 [4, 4, 0, -3, -11, -11] 6227.282004 12.384652 9.221275
> 75 [3, 0, 9, -7, 6, 21] 6250.704457 6.584324 11.570803
> 76 [9, 5, -3, -13, -30, -21] 6333.111158 1.049791 22.396682
> 77 [0, 0, 8, 0, 13, 19] 6365.852053 14.967465 8.686091
> 78 [4, 2, 5, -6, -3, 6] 6370.380556 16.499269 8.391154
> 79 [1, -8, -2, -15, -6, 18] 6507.074340 4.974313 12.974488
> 80 [2, -6, 1, -14, -4, 19] 6598.741284 6.548265 11.820058
> 81 [2, 25, 13, 35, 15, -40] 6657.512727 .299647 35.677429
> 82 [6, -2, -2, -17, -20, 1] 6845.573750 3.740932 14.626943
> 83 [1, 7, 3, 9, 2, -13] 6852.061008 12.161876 9.603642
> 84 [0, 5, 5, 8, 8, -2] 7042.202107 19.368923 8.212986
> 85 [4, 2, 9, -6, 3, 15] 7074.478038 8.170435 11.196673
> 86 [8, 6, 6, -9, -13, -3] 7157.960980 3.268439 15.596153
> 87 [5, 8, 2, 1, -11, -18] 7162.155511 5.664628 12.817743
> 88 [3, 17, -1, 20, -10, -50] 7280.048554 .894655 24.922952
> 89 [4, 2, -1, -6, -13, -8] 7307.246603 13.289190 9.520562
> 90 [5, 13, -17, 9, -41, -76] 7388.593186 .276106 38.128083
> 91 [8, 18, 11, 10, -5, -25] 7423.457669 .968741 24.394122
> 92 [3, -2, 1, -10, -7, 8] 7553.291925 18.095699 8.628089
> 93 [3, 7, -1, 4, -10, -22] 7604.170165 7.279064 11.973078
> 94 [6, 10, 3, 2, -12, -21] 7658.950254 3.480440 15.622931
> 95 [14, 59, 33, 61, 13, -89] 7727.766150 .037361 79.148236
> 96 [3, -5, -6, -15, -18, 0] 7760.555544 4.513934 14.304666
> 97 [13, 14, 35, -8, 19, 42] 7785.862490 .261934 39.585940
> 98 [11, 13, 17, -5, -4, 3] 7797.739891 1.485250 21.312375
> 99 [2, -4, -16, -11, -31, -26] 7870.803242 1.267597 22.628529
> 100 [2, -9, -4, -19, -12, 16] 7910.552221 2.895855 16.877046
> 101 [0, 0, 9, 0, 14, 21] 7917.731843 14.176105 9.573860
> 102 [3, 12, 11, 12, 9, -8] 7922.981072 2.624742 17.489863
> 103 [1, -6, 3, -12, 2, 24] 8250.683192 8.474270 11.675656
> 104 [55, 73, 93, -12, -7, 11] 8282.844862 .017772 105.789216
> 105 [4, 7, 2, 2, -8, -15] 8338.658153 10.400103 10.893408
> 106 [0, 5, -5, 8, -8, -26] 8426.314560 8.215515 11.894828
> 107 [5, 8, 14, 1, 8, 10] 8428.707855 4.143252 15.190723
> 108 [6, 7, 5, -3, -9, -8] 8506.845926 6.986391 12.646486
> 109 [8, 13, 23, 2, 14, 17] 8538.660000 1.024522 25.136807
> 110 [0, 0, 10, 0, 16, 23] 8630.819015 11.358665 10.686371
> 111 [3, -7, -8, -18, -21, 1] 8799.551719 2.900537 17.521249
> 112 [0, 5, 10, 8, 16, 9] 8869.402675 6.941749 12.865826
> 113 [4, 16, 9, 16, 3, -24] 8931.184092 1.698521 21.324102
> 114 [6, 5, 7, -6, -6, 2] 8948.277847 9.097987 11.718042
> 115 [3, -3, 1, -12, -7, 11] 9072.759561 14.130876 10.062449
> 116 [0, 12, 24, 19, 38, 22] 9079.668325 .617051 30.795105
> 117 [33, 78, 90, 47, 50, -10] 9153.275887 .016734 112.014440
> 118 [5, 1, -7, -10, -25, -19] 9260.372155 3.148011 17.329377
> 119 [1, -6, -2, -12, -6, 12] 9290.939644 13.273963 10.377495
> 120 [2, -2, 4, -8, 1, 15] 9367.180611 25.460673 8.247748
> 121 [3, 5, 16, 1, 17, 23] 9529.360455 3.220227 17.366255
> 122 [6, 3, 5, -9, -9, 3] 9771.701969 9.773087 11.787090
> 123 [15, -2, -5, -38, -50, -6] 9772.798330 .479706 34.589494
> 124 [2, -6, -6, -14, -15, 3] 9810.819078 6.548265 13.618691
> 125 [1, 9, 3, 12, 2, -18] 9825.667878 9.244393 12.047225
> 126 [1, -13, -2, -23, -6, 32] 9884.172505 2.432212 19.449425

🔗Gene Ward Smith <gwsmith@svpal.org>

4/4/2004 5:45:15 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
> Hi Gene,
>
> Would you be so kind as to produce a file like the one below, but
> instead of culling to 126 lines, leave all 32201 in there? That would
> be great. If that's too much, you could cut off the error and
> complexity wherever you see fit. The idea, though, is to produce a
> graph, and as most pieces of paper are rectangular, the data should
> fill a rectangular region. I'm *not* arguing for a rectangular
> badness function.

I could either upload something or email it. Which is better?

> Also could you provide the TM-reduced kernel bases -- at least for
> the 126 below?

Not unless I write code to fully automate the process first.

Are we getting serious about the paper again? It's not as if the
market is flooded with good math-heavy theory papers or books.
Haluska's book is something of a mess, and the Diderot Forum book is
in some respects rather appalling, but they got published.

🔗Paul Erlich <perlich@aya.yale.edu>

4/4/2004 6:23:19 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> wrote:
> > Hi Gene,
> >
> > Would you be so kind as to produce a file like the one below, but
> > instead of culling to 126 lines, leave all 32201 in there? That
> > would be great. If that's too much, you could cut off the error and
> > complexity wherever you see fit. The idea, though, is to produce a
> > graph, and as most pieces of paper are rectangular, the data should
> > fill a rectangular region. I'm *not* arguing for a rectangular
> > badness function.
>
> I could either upload something or email it. Which is better?

Uploading keeps things public. If that doesn't work, e-mail.

> > Also could you provide the TM-reduced kernel bases -- at least for
> > the 126 below?
>
> Not unless I write code to fully automate the process first.
>
> Are we getting serious about the paper again?

John Chalmers just reneged on his much earlier reneging of having a
paper by me in XH18. He's putting a serious time-clamp on me, so
there's absolutely no way we can iron out our differences in time.
Right now I have 7.5 pages of text, 2 pages of endnotes, and I'm
planning to have loads of those horagrams (which look much nicer
printed directly to paper than as .jpgs or .gifs). Many people are
thanked, especially you, Graham, and Dave. The main concern at this
point is *understandability*. If the importance of the results can't
be communicated to the average xenharmonicist, then all is lost. My
XH17 paper was too hard for most people, and this one is much more
ambitious. It's still only going to scratch the surface, leaving out
most of the math, lattices, keyboard designs etc. that one would
ideally like to see. Only so much space (not to mention time).

> It's not as if the
> market is flooded with good math-heavy theory papers or books.

Gene, you've done work of such breadth and depth. Is it going to
remain unorganized, spread across thousands of posts, forever?

🔗Gene Ward Smith <gwsmith@svpal.org>

4/4/2004 7:40:43 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> Gene, you've done work of such breadth and depth. Is it going to
> remain unorganized, spread across thousands of posts, forever?

A little has been organized on my web site. I'm wondering again about
what and where to publish, after having seen what's gotten published
lately. There's enough material for a book, if I could get myself up
to speed and actually do it; but I started a book on Chebyshev
functions and never finished it, and in general have a problem
finishing projects.

🔗Paul Erlich <perlich@aya.yale.edu>

4/5/2004 3:41:17 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
>
> > Gene, you've done work of such breadth and depth. Is it going to
> > remain unorganized, spread across thousands of posts, forever?
>
> A little has been organized on my web site. I'm wondering again about
> what and where to publish, after having seen what's gotten published
> lately. There's enough material for a book, if I could get myself up
> to speed and actually do it; but I started a book on Chebyshev
> functions and never finished it, and in general have a problem
> finishing projects.

So do I -- particularly the projects that mean most to me. This is
why I have composed so little. You've done me better -- many, many
better -- in this regard. I applaud you, Gene, and your compositions
will receive due notice in my paper. Which website would you like me
to reference?

🔗Gene Ward Smith <gwsmith@svpal.org>

4/5/2004 4:38:02 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> So do I -- particularly the projects that mean most to me. This is
> why I have composed so little. You've done me better -- many, many
> better -- in this regard. I applaud you, Gene, and your compositions
> will receive due notice in my paper. Which website would you like me
> to reference?

Thanks, Paul. I only have one website: www.xenharmony.org

🔗Gene Ward Smith <gwsmith@svpal.org>

4/6/2004 1:21:40 AM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
> Hi Gene,
>
> Would you be so kind as to produce a file like the one below, but
> instead of culling to 126 lines, leave all 32201 in there?

I presume you want TOP error, but what complexity do you want? Should
I use logflat badness?

This will take some amount of time to compute, so I want to get it
right the first time.

🔗gooseplex <cfaah@eiu.edu>

4/6/2004 4:18:51 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<gwsmith@s...> wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich"
<perlich@a...> wrote:
>
> > Gene, you've done work of such breadth and depth. Is it going to
> > remain unorganized, spread across thousands of posts, forever?
>
> A little has been organized on my web site. I'm wondering again about
> what and where to publish, after having seen what's gotten published
> lately. There's enough material for a book, if I could get myself up
> to speed and actually do it; but I started a book on Chebyshev
> functions and never finished it, and in general have a problem
> finishing projects.

Yes, please write the book. I'll bet if you set aside a couple of
weeks to concentrate on a logical table of contents, you will be
able to sort previous and subsequent work into logical orderly
categories. After tackling the problem of overall organization, my
guess is that you could assemble your book in less than a year. I
hope you will do this.

Aaron

🔗Paul Erlich <perlich@aya.yale.edu>

4/6/2004 12:33:01 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> wrote:
> > Hi Gene,
> >
> > Would you be so kind as to produce a file like the one below, but
> > instead of culling to 126 lines, leave all 32201 in there?
>
> I presume you want TOP error, but what complexity do you want?

The same complexity you used there, when you culled to 126 (just
click "up thread" repeatedly if you forgot). It was the L1 kind.

> Should
> I use logflat badness?

No badness figure required.

> This will take some amount of time to compute, so I want to get it
> right the first time.

Well, if you can't reproduce the results you posted when you culled
to 126, it'll be a good sign something's wrong. Of course, I should
do an independent check myself, but time is running out.
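
As a starting point for such a check, one candidate reading of "the L1
kind" is the L1 norm of the Tenney-weighted wedgie: each coefficient is
divided by the log2's of the two primes it pairs, and the absolute
values are summed. The sketch below is an assumed definition, not a
confirmed one; the coefficient ordering follows the usual (2,3), (2,5),
(2,7), (3,5), (3,7), (5,7) convention for 7-limit linear wedgies.

from math import log2

PRIME_PAIRS = [(2, 3), (2, 5), (2, 7), (3, 5), (3, 7), (5, 7)]

def l1_complexity(wedgie):
    # L1 norm of a 7-limit linear wedgie under Tenney (log2) weighting:
    # |w| / (log2(p) * log2(q)) summed over the six prime pairs.
    return sum(abs(w) / (log2(p) * log2(q))
               for w, (p, q) in zip(wedgie, PRIME_PAIRS))

If this reading is right, running any wedgie from the 126-line list
through it should reproduce the complexity column posted there.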

🔗Paul Erlich <perlich@aya.yale.edu>

4/10/2004 3:08:14 PM

Hi Gene,

I hope you're making progress on un-culling the list.

Would it be rude of me to request a similar list for 11-
limit 'linears'? Dave told me I should include these in my paper, and
I agree.

Thanks,
Paul

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
> Hi Gene,
>
> Would you be so kind as to produce a file like the one below, but
> instead of culling to 126 lines, leave all 32201 in there? That would
> be great. If that's too much, you could cut off the error and
> complexity wherever you see fit. The idea, though, is to produce a
> graph, and as most pieces of paper are rectangular, the data should
> fill a rectangular region. I'm *not* arguing for a rectangular
> badness function.
>
> Also could you provide the TM-reduced kernel bases -- at least for
> the 126 below?
>
> Thanks so much,
> Paul
>
>
>
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<gwsmith@s...>
> wrote:
> > I first made a candidate list by the kitchen sink method:
> >
> > (1) All pairs n,m<=200 of standard vals
> >
> > (2) All pairs n,m<=200 of TOP vals
> >
> > (3) All pairs 100<=n,m<400 of standard vals
> >
> > (4) All pairs 100<=n,m<=400 of TOP vals
> >
> > (5) Generators of standard vals up to 100
> >
> > (6) Generators of certain nonstandard vals up to 100
> >
> > (7) Pairs of commas from Paul's list of relative error < 0.06,
> > epimericity < 0.5
> >
> > (8) Pairs of vals with consistent badness figure < 1.5 up to 5000
> >
> > This led to a list of 32201 candidate wedgies, most of which of
> > course were incredible garbage.
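
Since the quoted search builds its candidates from pairs of vals, the
one step worth spelling out is how a pair of vals becomes a candidate
wedgie: take the 2x2 minors of the two mapping rows over every pair of
prime axes, then normalize the sign. The sketch below is illustrative
only (the standard 7-limit vals for 12 and 19 are just a convenient
example), not the search code itself, and in practice one would also
divide out any common factor of the entries.

from itertools import combinations

def wedgie(val_a, val_b):
    # Wedge product of two vals: 2x2 determinants over each pair of
    # prime axes, with the sign flipped if needed so the leading
    # nonzero coefficient is positive.
    n = len(val_a)
    w = [val_a[i] * val_b[j] - val_a[j] * val_b[i]
         for i, j in combinations(range(n), 2)]
    lead = next((x for x in w if x != 0), 0)
    return [-x for x in w] if lead < 0 else w

print(wedgie([12, 19, 28, 34], [19, 30, 44, 53]))
# -> [1, 4, 10, 4, 13, 12], the familiar meantone wedgie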

🔗Gene Ward Smith <gwsmith@svpal.org>

4/11/2004 12:55:44 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
> Hi Gene,
>
> I hope you're making progress on un-culling the list.

I think I'll finish by today.

> Would it be rude of me to request a similar list for 11-
> limit 'linears'? Dave told me I should include these in my paper, and
> I agree.

I think getting a big list of 11-limits would be nice. I hope I don't
need to get 32000.

🔗Paul Erlich <perlich@aya.yale.edu>

4/11/2004 2:26:38 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> wrote:
> > Hi Gene,
> >
> > I hope you're making progress on un-culling the list.
>
> I think I'll finish by today.

Awesome!

> > Would it be rude of me to request a similar list for 11-
> > limit 'linears'? Dave told me I should include these in my paper, and
> > I agree.
>
> I think getting a big list of 11-limits would be nice. I hope I don't
> need to get 32000.

As long as we're sure we're getting a complete list up to some fairly
modest error and complexity bounds, I'm happy. I hope to
perform/verify all these calculations myself eventually, but don't
yet have code to calculate TOP tuning in the general case, and time
is running out.

Much appreciated,
Paul
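
On the last point -- TOP tuning in the general case -- one workable
route is to treat it as a linear minimax problem: choose generator
sizes that minimize the largest Tenney-weighted error among the primes,
which a small linear program handles directly. The sketch below is an
assumption about method (scipy's linprog), not anyone's posted code,
and the meantone mapping in the example is only an illustration.

import numpy as np
from scipy.optimize import linprog

def top_tuning(mapping, primes):
    # mapping: r x n list of lists, one row per generator, one column
    # per prime.  Returns (generator sizes in cents, max Tenney-weighted
    # error in cents).
    M = np.array(mapping, dtype=float)
    logs = np.log2(primes)
    r, n = M.shape
    # Variables: r generator sizes (in octaves), then t = max weighted error.
    c = np.zeros(r + 1)
    c[-1] = 1.0
    A, b = [], []
    for i in range(n):
        # Weighted error of prime i is (row . g) - 1, where row is the
        # mapping column divided by log2 of that prime.
        row = M[:, i] / logs[i]
        A.append(np.append(row, -1.0)); b.append(1.0)    # +error_i <= t
        A.append(np.append(-row, -1.0)); b.append(-1.0)  # -error_i <= t
    bounds = [(None, None)] * r + [(0, None)]
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    return res.x[:r] * 1200.0, res.x[-1] * 1200.0

# Septimal meantone, mapped by octave and fifth:
gens, err = top_tuning([[1, 1, 0, -3], [0, 1, 4, 10]], (2, 3, 5, 7))
print(gens, err)  # octave slightly wide of 1200 cents, error under 2 cents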