
An idea for evaluating badness measures

🔗Herman Miller <hmiller@IO.COM>

3/22/2008 3:48:42 PM

This is a bit of a long description, but I can't think of a clearer way to describe what I'm getting at, so here goes.

Say you have a way to measure complexity and a way to measure error for temperaments. I'll use the wedgie complexity and TOP-MAX error that Paul Erlich used in his "Middle Path" paper for examples. Some temperaments, such as miracle or ennealimmal, are better in error or complexity than any other temperament on the list: if any temperament has a lower error, it's higher in complexity (and vice versa). Graham Breed described a way to find all temperaments lower in complexity and error, so that you can verify if any temperament is on the "best temperaments" list, or find others that might be better.

So let's say that you have a list of all temperaments that might be worth considering. Each temperament starts out as a candidate for the highest grade of temperament -- let's call it the "gold medal" set. Not all of these are necessarily the best temperaments (e.g., father seems to be on the list), but no other temperament beats them in both error and complexity. Since the set starts out empty, the first candidate automatically gets in. Let's say the first candidate happens to be hedgehog, with complexity 13.189661 and error 3.106578. The next candidate is orwell, with a smaller error but higher complexity. So they both stay in the running for a gold medal.

[<2, 4, 6, 7], <0, -3, -5, -5]> c: 13.189661 e: 3.106578 hedgehog
[<1, 0, 3, 1], <0, 7, -3, 8]> c: 19.979719 e: 0.946061 orwell

Now injera comes along and challenges hedgehog. Its complexity is lower, but it's got a higher error, so they both stay in the gold list.

[<2, 3, 4, 5], <0, 1, 4, 4]> c: 11.917575 e: 3.582706 injera
[<2, 4, 6, 7], <0, -3, -5, -5]> c: 13.189661 e: 3.106578 hedgehog
[<1, 0, 3, 1], <0, 7, -3, 8]> c: 19.979719 e: 0.946061 orwell

Meantone has a lower error and complexity than both injera and hedgehog, so it takes its place in the gold list and sends the other two into the silver medal list.

Gold:
[<1, 2, 4, 7], <0, -1, -4, -10]> c: 11.765178 e: 1.698520 meantone
[<1, 0, 3, 1], <0, 7, -3, 8]> c: 19.979719 e: 0.946061 orwell

Silver:
[<2, 3, 4, 5], <0, 1, 4, 4]> c: 11.917575 e: 3.582706 injera
[<2, 4, 6, 7], <0, -3, -5, -5]> c: 13.189661 e: 3.106578 hedgehog

Augene comes along, but meantone is better in both error and complexity, so augene gets sent to the silver list. On the other hand, augene beats hedgehog and sends hedgehog to the bronze list.

Gold:
[<1, 2, 4, 7], <0, -1, -4, -10]> c: 11.765178 e: 1.698520 meantone
[<1, 0, 3, 1], <0, 7, -3, 8]> c: 19.979719 e: 0.946061 orwell

Silver:
[<2, 3, 4, 5], <0, 1, 4, 4]> c: 11.917575 e: 3.582706 injera
[<3, 5, 7, 8], <0, -1, 0, 2]> c: 12.125211 e: 2.939961 augene

Bronze:
[<2, 4, 6, 7], <0, -3, -5, -5]> c: 13.189661 e: 3.106578 hedgehog

Temperaments not as good as bronze are simply taken off the list. If you go on doing this for all temperaments in your list, eventually you'll have three grades of temperaments, something like this (although other temperaments not on this list might end up being better):

Gold:
[<2, 3, 5, 6], <0, 1, -2, -2]> c: 10.402108 e: 3.106578 pajara
[<1, 2, 4, 7], <0, -1, -4, -10]> c: 11.765178 e: 1.698520 meantone
[<1, -1, -1, -2], <0, 7, 9, 13]> c: 14.458536 e: 1.610469 sensi
[<1, 0, 2, -1], <0, 5, 1, 12]> c: 15.536039 e: 1.276744 magic
[<1, 0, 3, 1], <0, 7, -3, 8]> c: 19.979719 e: 0.946061 orwell

Silver:
[<1, 2, 4, 3], <0, -2, -8, -1]> c: 11.204461 e: 3.668841 semaphore
[<2, 3, 4, 5], <0, 1, 4, 4]> c: 11.917575 e: 3.582706 injera
[<1, 2, 2, 3], <0, -4, 3, -2]> c: 12.124601 e: 3.187309 negri
[<3, 5, 7, 8], <0, -1, 0, 2]> c: 12.125211 e: 2.939961 augene
[<1, 2, 6, 2], <0, -1, -9, 2]> c: 14.430906 e: 2.403879 superpyth
[<1, 1, 0, 3], <0, 3, 12, -1]> c: 18.448015 e: 1.698520 cynder

Bronze:
[<6, 10, 14, 17], <0, -1, 0, 0]> c: 11.410361 e: 5.526647 hexe
[<1, 0, 1, 2], <0, 6, 5, 3]> c: 12.408714 e: 3.187309 keemun
[<2, 4, 6, 7], <0, -3, -5, -5]> c: 13.189661 e: 3.106578 hedgehog
[<1, 2, 3, 2], <0, -3, -5, 6]> c: 14.795975 e: 3.094040 porcupine
[<1, 2, 4, -1], <0, -1, -4, 9]> c: 15.376139 e: 2.536419 flattone
[<1, 2, -3, 2], <0, -1, 13, 2]> c: 19.449425 e: 2.432211 quasisuper
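In code, this grading procedure is just repeated Pareto-front peeling on (complexity, error) pairs. A minimal Python sketch (the function and variable names are my own, not from any existing tool):

```python
def medal_ranks(temperaments, num_grades=3):
    """Sort temperaments into gold/silver/bronze by repeatedly peeling
    off the Pareto front: a temperament is on the front if no other
    remaining temperament beats it in BOTH complexity and error.

    `temperaments` is a list of (name, complexity, error) tuples.
    Anything left after `num_grades` rounds is dropped.
    """
    remaining = list(temperaments)
    grades = []
    for _ in range(num_grades):
        front = [t for t in remaining
                 if not any(o[1] < t[1] and o[2] < t[2] for o in remaining)]
        grades.append(sorted(front, key=lambda t: t[1]))  # order by complexity
        remaining = [t for t in remaining if t not in front]
    return grades
```

Fed the seventeen temperaments listed above, this reproduces the gold, silver, and bronze lists exactly.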

So, my idea is to look for a weighted badness measure that agrees as closely as possible with the gold, silver, and bronze ranking of all temperaments. The average badness of the gold medal temperaments should be roughly the same for any range of complexity. The badness of the silver medal temperaments should be higher than the badness of the gold ones, and so on.

A slight modification would penalize temperaments with unusually high error, such as [<3, 5, 7, 9], <0, 0, 0, -1]>, even though the error may be optimal for the complexity level, and give a slight bonus to temperaments with complexity in the middle range (like meantone or miracle). At any rate, I think the idea of gold, silver, and bronze medal categories for temperaments could be useful in getting a rough impression of how good a temperament might be.

🔗Graham Breed <gbreed@gmail.com>

3/24/2008 4:26:04 AM

Herman Miller wrote:

> So, my idea is to look for a weighted badness measure that agrees as
> closely as possible with the gold, silver, and bronze ranking of all
> temperaments. The average badness of the gold medal temperaments should
> be roughly the same for any range of complexity. The badness of the
> silver medal temperaments should be higher than the badness of the gold
> ones, and so on.

Logflat badness should do it. I also found a message in the archives where Gene says error x complexity badness is what I'd call "linear flat". That is, you get roughly the same number of temperament classes above the line in any given linear range.

> A slight modification would penalize temperaments with unusually high
> error, such as [<3, 5, 7, 9], <0, 0, 0, -1]>, even though the error may
> be optimal for the complexity level, and give a slight bonus to
> temperaments with complexity in the middle range (like meantone or
> miracle). At any rate, I think the idea of gold, silver, and bronze
> medal categories for temperaments could be useful in getting a rough
> impression of how good a temperament might be.

The parametric badness I've mentioned before for equal temperaments is

B**2 = E**2 k**2 + E_k**2 k**2

where E is error, k is complexity, and E_k is a free parameter. I haven't seen quite this form mentioned in the archives before. You can always add the right exponent on the k to turn it into logflat badness.
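As a sketch (Python; `e_k` stands in for the free parameter E_k, and the function name is my own):

```python
def parametric_badness(error, complexity, e_k):
    """B**2 = E**2 k**2 + E_k**2 k**2: simple badness (E*k) and
    scaled complexity (E_k*k) added in quadrature."""
    return complexity * (error ** 2 + e_k ** 2) ** 0.5
```

With e_k = 0 this reduces to simple badness, error times complexity.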

For higher ranks, you need to find a geometry where this is the distance of a val from the origin. Then the badness of a rank 2 temperament is the area of the parallelogram with the origin and the two vals at corners, and so on.

Graham

🔗Herman Miller <hmiller@IO.COM>

3/24/2008 5:49:50 PM

Graham Breed wrote:
> Herman Miller wrote:
>
>> So, my idea is to look for a weighted badness measure that agrees as
>> closely as possible with the gold, silver, and bronze ranking of all
>> temperaments. The average badness of the gold medal temperaments should
>> be roughly the same for any range of complexity. The badness of the
>> silver medal temperaments should be higher than the badness of the gold
>> ones, and so on.
>
> Logflat badness should do it. I also found a message in the
> archives where Gene says error x complexity badness is what
> I'd call "linear flat". That is, you get roughly the same
> number of temperament classes above the line in any given
> linear range.

I was hoping for something better than error * complexity, which excludes some of the less complex temperaments of interest unless you include dozens of unremarkable complex temperaments (unremarkable in the sense that we already know of so many better complex temperaments). I tried ranking temperaments by error * complexity from one to three stars (blacksmith and beatles being examples of * temperaments, while meantone and orwell are *** temperaments). Lemba didn't make the one-star temperament list, while the 3-star list starts including temperaments like valentine, clyde, and grackle (along with many that don't even have names).

A preliminary list of the best (gold medal) temperaments (preliminary since I haven't implemented the full temperament search) includes a handful of high-error temperaments with low complexity, followed by:

blacksmith, diminished, dominant, pajara, meantone, sensi, magic, orwell, garibaldi, miracle, catakleismic, hemiwürschmidt, hemififths, amity, parakleismic, ennealimmal, supermajor.

These are all good temperaments for their size. The next rank (silver) includes temperaments like:

decimal, august, semaphore, injera, negri, augene, superpyth, cynder, myna, valentine, octacot, rodan, compton, luna, unidec, misty, harry.

Still good temperaments, although not as good as the gold. The bronze list includes: hexe, keemun, hedgehog, porcupine, flattone, quasisuper, a temperament I don't know a name for, nusecond, and diaschismic. These are obviously starting to get into the less accurate range, but many of these are still usable.

In the next rank, quite a few of them don't have names, and the named ones include catler, doublewide, beatles, liese, squares, würschmidt, superkleismic, shrutar, and clyde. (Shrutar fares better in the 11- and 17-limits, earning a silver medal in each; it's just not a great 7-limit temperament.)

Finally the fifth rank includes temperaments like gorgo, schism, lemba, nautilus, muggles, and triton. These are still on the edge of being good temperaments, but they're starting to get into the higher error range. Grackle doesn't even make it into this list.

So far this system seems to be agreeing with my evaluations more or less, except on the low complexity end (which can be addressed by setting an error cutoff) and in the low ranking of temperaments like gorgo, lemba, and muggles. They might happen to be in a range where there are many good temperaments to compete with, or their appeal could come from qualities other than their error * complexity product (such as their step size ratios, another important consideration in evaluating temperaments).

>> A slight modification would penalize temperaments with unusually high
>> error, such as [<3, 5, 7, 9], <0, 0, 0, -1]>, even though the error may
>> be optimal for the complexity level, and give a slight bonus to
>> temperaments with complexity in the middle range (like meantone or
>> miracle). At any rate, I think the idea of gold, silver, and bronze
>> medal categories for temperaments could be useful in getting a rough
>> impression of how good a temperament might be.
>
> The parametric badness I've mentioned before for equal
> temperaments is
>
> B**2 = E**2 k**2 + E_k**2 k**2
>
> where E is error, k is complexity, and E_k is a free
> parameter. I haven't seen quite this form mentioned in the
> archives before. You can always add the right exponent on
> the k to turn it into logflat badness.
>
> For higher ranks, you need to find a geometry where this is
> the distance of a val from the origin. Then the badness of
> a rank 2 temperament is the area of the parallelogram with
> the origin and the two vals at corners, and so on.

Do you have a particular geometry in mind, or is this one of the unanswered questions?

🔗Carl Lumma <carl@lumma.org>

3/24/2008 7:53:02 PM

[Herman wrote]
>So, my idea is to look for a weighted badness measure that agrees as
>closely as possible with the gold, silver, and bronze ranking of all
>temperaments.

Cool idea, but don't the results depend heavily on how you define
"all temperaments"?

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/24/2008 8:01:10 PM

Herman Miller wrote:

> I was hoping for something better than error * complexity, which
> excludes some of the less complex temperaments of interest unless you
> include dozens of unremarkable complex temperaments (unremarkable in the
> sense that we already know of so many better complex temperaments). I
> tried ranking temperaments by error * complexity from one to three stars
> (blacksmith and beatles being examples of * temperaments, while meantone
> and orwell are *** temperaments). Lemba didn't make the one-star
> temperament list, while the 3-star list starts including temperaments
> like valentine, clyde, and grackle (along with many that don't even have
> names).

That's what logflat badness is for. E k**(d/(d-r)) where E is error, k is complexity, d is the number of prime intervals, and r is the rank of the temperament.
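In Python terms (a sketch; the function name is my own):

```python
def logflat_badness(error, complexity, d, r):
    """Logflat badness E * k**(d/(d-r)), where E is error, k is
    complexity, d is the number of primes, and r the rank.
    For 7-limit rank 2 (d=4, r=2) the exponent is 2, i.e.
    error * complexity**2."""
    return error * complexity ** (d / (d - r))
```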

>> The parametric badness I've mentioned before for equal
>> temperaments is
>>
>> B**2 = E**2 k**2 + E_k**2 k**2
>>
>> where E is error, k is complexity, and E_k is a free
>> parameter. I haven't seen quite this form mentioned in the
>> archives before. You can always add the right exponent on
>> the k to turn it into logflat badness.
>>
>> For higher ranks, you need to find a geometry where this is
>> the distance of a val from the origin. Then the badness of
>> a rank 2 temperament is the area of the parallelogram with
>> the origin and the two vals at corners, and so on.
>
> Do you have a particular geometry in mind, or is this one of the
> unanswered questions?

There's only one that will fit. I gave the matrix equations in "Prime Based Errors and Complexities" and the wedgie equations here recently.

The geometric argument is this:

Consider the projection for "simple badness space". You project each val parallel to the JI line onto the x+y+z+...=0 hyperplane. That means all equal temperaments are represented by a point on this hyperplane and the distance they get from it is zero.

For the parametric complexity, you move every val towards the same hyperplane, but not quite there. The distance from the hyperplane is then the parameter epsilon times the complexity. The distance from the origin is the square root of the simple badness squared plus the reduced complexity squared.

If you consider a ball around the origin, only equal temperaments with badness less than the radius will be inside it. And only equal temperaments with complexity less than the radius divided by epsilon.

You can define a rank 2 temperament class as a pair of vals. The measure of the temperament class (in whatever space) is the area of a parallelogram with corners at the origin and the two vals. Whatever pair of vals you choose, this area will be the same provided they define the same temperament class without contorsion. That means the further the vals are from the origin, the closer the angle between them will get to 0 degrees. The "best" vals will be closest to the origin, and so the angle between them will be approximately 90 degrees.
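The invariance of the area can be checked numerically with the Gram determinant, here in plain unweighted coordinates rather than the projected geometry described above (a sketch only; the vals are the 7-limit patent vals for 12-, 19-, and 31-ET, any two of which define meantone without contorsion):

```python
from math import sqrt

def parallelogram_area(v1, v2):
    """Area of the parallelogram with corners at the origin and the
    two vals: area**2 = (v1.v1)(v2.v2) - (v1.v2)**2 (Gram determinant)."""
    aa = sum(x * x for x in v1)
    ab = sum(x * y for x, y in zip(v1, v2))
    bb = sum(y * y for y in v2)
    return sqrt(aa * bb - ab * ab)

v12 = [12, 19, 28, 34]
v19 = [19, 30, 44, 53]
v31 = [31, 49, 72, 87]  # = v12 + v19, so it supports the same temperament
```

All three pairs give the same area (area squared is 446 for each), illustrating that the area depends only on the temperament class and not on the pair of vals chosen.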

That means to find good rank 2 temperaments all you need is to find equal temperaments that are good by the same measure. The parallelogram they define will be roughly a square. That means the badness of the rank 2 temperament will be slightly less than the product of the badnesses of the equal temperaments. So to find all rank 2 temperaments within a given badness you take the square root, make it a bit bigger, and look at all pairs of equal temperaments lying within a ball of this radius from the origin.

I don't know if this can be made more quantitative. But at least it means that the best equal temperaments are likely to lead to the best rank 2 temperaments (by the same measure).

Graham

🔗Graham Breed <gbreed@gmail.com>

3/24/2008 8:08:40 PM

I wrote:

> There's only one that will fit. I gave the matrix equations in "Prime
> Based Errors and Complexities" and the wedgie equations here recently.

Sorry, I only gave wedgie equations for the simple badness and complexity. I don't know how to mix them up to get parametric badness.

Graham

🔗Carl Lumma <carl@lumma.org>

3/24/2008 11:03:57 PM

[Graham wrote]
>Logflat badness should do it. I also found a message in the
>archives where Gene says error x complexity badness is what
>I'd call "linear flat". That is, you get roughly the same
>number of temperament classes above the line in any given
>linear range.

So I get about the same number of temperaments below
logflat cutoff x as I do between x and 10x, and between
10x and 100x, etc.?

And for linearflat, it's

below x ~ between x and 2x ~ between 2x and 3x

That's right, yes?

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/24/2008 11:05:56 PM

Carl Lumma wrote:
> [Graham wrote]
>> Logflat badness should do it. I also found a message in the
>> archives where Gene says error x complexity badness is what
>> I'd call "linear flat". That is, you get roughly the same
>> number of temperament classes above the line in any given
>> linear range.
>
> So I get about the same number of temperaments below
> logflat cutoff x as I do between x and 10x, and between
> 10x and 100x, etc.?
>
> And for linearflat, it's
>
> below x ~ between x and 2x ~ between 2x and 3x
>
> That's right, yes?

Yep, looks right to me!

Graham

🔗Herman Miller <hmiller@IO.COM>

3/25/2008 6:50:55 PM

Graham Breed wrote:
> Herman Miller wrote:
>
>> I was hoping for something better than error * complexity, which
>> excludes some of the less complex temperaments of interest unless you
>> include dozens of unremarkable complex temperaments (unremarkable in the
>> sense that we already know of so many better complex temperaments). I
>> tried ranking temperaments by error * complexity from one to three stars
>> (blacksmith and beatles being examples of * temperaments, while meantone
>> and orwell are *** temperaments). Lemba didn't make the one-star
>> temperament list, while the 3-star list starts including temperaments
>> like valentine, clyde, and grackle (along with many that don't even have
>> names).
>
> That's what logflat badness is for. E k**(d/(d-r)) where E
> is error, k is complexity, d is the number of prime
> intervals, and r is the rank of the temperament.

I was just looking at some charts of the best 7-limit temperaments and concluded that error * complexity^2 is a good measure of badness, which agrees with this formula (with d = 4, r = 2). The gold and silver temperaments are clearly separated, with one silver (unnamed "no. 57") among the bronze temperaments, and keemun grouped with the silver temperaments. Catler is ranked between silver and bronze, with the rest of the bronze temperaments grouped together.

When I rank the temperaments by badness, it becomes clear that the reason keemun was ranked lower is that it has the same error as negri, but is very slightly more complex. The lower ranking of keemun then pushed catler and lemba into lower grades.

🔗Herman Miller <hmiller@IO.COM>

3/25/2008 7:08:04 PM

Carl Lumma wrote:
> [Herman wrote]
>> So, my idea is to look for a weighted badness measure that agrees as
>> closely as possible with the gold, silver, and bronze ranking of all
>> temperaments.
>
> Cool idea, but don't the results depend heavily on how you define
> "all temperaments"?
>
> -Carl

To some extent. Degenerate temperaments (complexity = 0) are disqualified. I also disqualify temperaments with contorsion, since that would effectively allow them to win twice, but if you allow those temperaments, it could skew the results slightly. I haven't seen many of those in the results; their added complexity usually puts them into a lower grade unless they were really outstanding to begin with (e.g., meantone, ennealimmal).

And I'm not sure if it was clear, but I was thinking specifically all temperaments of the same rank and prime limit. So you'd have a 7-limit rank 2 competition, a 2.5.11.13-limit rank 2 competition, an 11-limit rank 3 competition, etc., each with their own winners and runners-up.

The results can also be skewed if you're only looking at a subset of temperaments, such as temperaments from optimal ET's, but most of the good temperaments are in the list of temperaments from optimal ET's anyway, and many of the others I've run across don't actually have an effect on the ranking. So it looks like you get a pretty good approximation from just combining optimal ET's.

🔗Carl Lumma <carl@lumma.org>

3/25/2008 8:23:41 PM

At 07:08 PM 3/25/2008, you wrote:
>Carl Lumma wrote:
>> [Herman wrote]
>>> So, my idea is to look for a weighted badness measure that agrees as
>>> closely as possible with the gold, silver, and bronze ranking of all
>>> temperaments.
>>
>> Cool idea, but don't the results depend heavily on how you define
>> "all temperaments"?
>>
>> -Carl
>
>To some extent. Degenerate temperaments (complexity = 0) are
>disqualified. I also disqualify temperaments with contorsion, since that
>would effectively allow them to win twice, but if you allow those
>temperaments, it could skew the results slightly. I haven't seen many of
>those in the results; their added complexity usually puts them into a
>lower grade unless they were really outstanding to begin with (e.g.,
>meantone, ennealimmal).
>
>And I'm not sure if it was clear, but I was thinking specifically all
>temperaments of the same rank and prime limit.

There are an infinite number of such temperaments.

-Carl

🔗Herman Miller <hmiller@IO.COM>

3/25/2008 9:30:24 PM

Carl Lumma wrote:
> At 07:08 PM 3/25/2008, you wrote:
>> Carl Lumma wrote:
>>> [Herman wrote]
>>>> So, my idea is to look for a weighted badness measure that agrees as
>>>> closely as possible with the gold, silver, and bronze ranking of all
>>>> temperaments.
>>> Cool idea, but don't the results depend heavily on how you define
>>> "all temperaments"?
>>>
>>> -Carl
>> To some extent. Degenerate temperaments (complexity = 0) are
>> disqualified. I also disqualify temperaments with contorsion, since that
>> would effectively allow them to win twice, but if you allow those
>> temperaments, it could skew the results slightly. I haven't seen many of
>> those in the results; their added complexity usually puts them into a
>> lower grade unless they were really outstanding to begin with (e.g.,
>> meantone, ennealimmal).
>>
>> And I'm not sure if it was clear, but I was thinking specifically all
>> temperaments of the same rank and prime limit.
>
> There are an infinite number of such temperaments.

All but a few will be worse in error and complexity than a temperament already on the list. It's possible there could be a batch of some really outstanding temperaments at some monstrously high level of complexity, but the farther out you go, the smaller tweaks you'll need to make to the overall badness weighting to get the averages to line up. So a few way-out temperaments may have a slight effect on the results, but I can't imagine it would be very significant. You could err on the side of higher badness for more complex temperaments if you're worried about that sort of thing.

🔗Carl Lumma <carl@lumma.org>

3/25/2008 9:48:41 PM

Hi Herman,

>>>>> So, my idea is to look for a weighted badness measure that agrees as
>>>>> closely as possible with the gold, silver, and bronze ranking of all
>>>>> temperaments.
>>>>
>>>> Cool idea, but don't the results depend heavily on how you define
>>>> "all temperaments"?
>>>
>>> To some extent. Degenerate temperaments (complexity = 0) are
>>> disqualified. I also disqualify temperaments with contorsion, since that
>>> would effectively allow them to win twice, but if you allow those
>>> temperaments, it could skew the results slightly. I haven't seen many of
>>> those in the results; their added complexity usually puts them into a
>>> lower grade unless they were really outstanding to begin with (e.g.,
>>> meantone, ennealimmal).
>>>
>>> And I'm not sure if it was clear, but I was thinking specifically all
>>> temperaments of the same rank and prime limit.
>>
>> There are an infinite number of such temperaments.
>
>All but a few will be worse in error and complexity than a temperament
>already on the list.

If you're using logflat badness, an infinite number will be better
than those on the list.

>It's possible there could be a batch of some really
>outstanding temperaments at some monstrously high level of complexity,
>but the farther out you go, the smaller tweaks you'll need to make to
>the overall badness weighting to get the averages to line up. So a few
>way-out temperaments may have a slight effect on the results, but I
>can't imagine it would be very significant. You could err on the side of
>higher badness for more complex temperaments if you're worried about
>that sort of thing.

I'm just missing how you generate the list of temperaments to test.

-Carl

🔗Herman Miller <hmiller@IO.COM>

3/26/2008 6:18:09 PM

Carl Lumma wrote:

> If you're using logflat badness, an infinite number will be better
> than those on the list.

I'm thinking the proportion of temperaments on the list to those not on the list becomes arbitrarily small if you go out far enough, since the number of temperaments increases with complexity while the number of temperaments on the list remains fixed at however many grades you keep track of (three is probably a reasonable number).

>> It's possible there could be a batch of some really
>> outstanding temperaments at some monstrously high level of complexity,
>> but the farther out you go, the smaller tweaks you'll need to make to
>> the overall badness weighting to get the averages to line up. So a few
>> way-out temperaments may have a slight effect on the results, but I
>> can't imagine it would be very significant. You could err on the side of
>> higher badness for more complex temperaments if you're worried about
>> that sort of thing.
>
> I'm just missing how you generate the list of temperaments to test.

In theory you could check every possible generator mapping. But that would be pretty wasteful.

I've been finding optimal ET mappings (minimizing TOP-MAX error) and wedging them. I occasionally double-check this list by selecting the second-nearest mappings of any one prime for each ET, and only rarely have I found a temperament that knocks out an existing temperament on the list and bumps it down to a lower grade. As an example, one 7-limit mavila [<1, 2, 1, 1], <0, -1, 3, 4]> was reduced from silver to bronze by a 7-limit dicot [<1, 1, 2, 1], <0, 2, 1, 6]> that wasn't on my original list. A really thorough check would test all possible combinations of best and second-best primes for each ET.

[<1, 1, 2, 1], <0, 2, 1, 6]>
P = 1204.048159, G = 356.399825
c = 7.231437, e = 9.396316

[<1, 2, 1, 1], <0, -1, 3, 4]>
P = 1209.734056, G = 532.941225
c = 7.425960, e = 9.734056

Most of the temperaments not found by wedging optimal ETs are in the high error / low complexity range, and very few of them have been above 20 in complexity. In any case, they don't appear to affect the average much.
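The mapping search just described — nearest-integer ("patent") vals plus second-nearest variants for one prime at a time — can be sketched like this (Python, 7-limit; the function names are my own):

```python
from math import log2

PRIMES = [2, 3, 5, 7]

def patent_val(n):
    """Map each prime to its nearest whole number of steps in n-ET."""
    return [round(n * log2(p)) for p in PRIMES]

def second_best_variants(n):
    """Vals differing from the patent val by taking the second-nearest
    mapping of exactly one prime (i.e. rounding that prime the other way)."""
    best = patent_val(n)
    variants = []
    for i, p in enumerate(PRIMES):
        exact = n * log2(p)
        v = list(best)
        v[i] += 1 if exact > best[i] else -1
        variants.append(v)
    return variants
```

For example, patent_val(12) gives [12, 19, 28, 34], and one of its variants maps prime 5 to 27 steps instead of 28.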

🔗Carl Lumma <carl@lumma.org>

3/26/2008 6:46:36 PM

Hi Herman,

>>> It's possible there could be a batch of some really
>>> outstanding temperaments at some monstrously high level of complexity,
>>> but the farther out you go, the smaller tweaks you'll need to make to
>>> the overall badness weighting to get the averages to line up. So a few
>>> way-out temperaments may have a slight effect on the results, but I
>>> can't imagine it would be very significant. You could err on the side of
>>> higher badness for more complex temperaments if you're worried about
>>> that sort of thing.
>>
>> I'm just missing how you generate the list of temperaments to test.
>
>In theory you could check every possible generator mapping. But that
>would be pretty wasteful.

I'm guessing it'd be slightly less wasteful to check every wedgie,
but I don't understand wedgies all that well.

>I've been finding optimal ET mappings (minimizing TOP-MAX error) and
>wedging them.

Ah; that's what I was missing. I think that's what Graham
does too.

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/27/2008 3:09:19 AM

Herman:
>> In theory you could check every possible generator mapping. But that
>> would be pretty wasteful.

Carl:
> I'm guessing it'd be slightly less wasteful to check every wedgie,
> but I don't understand wedgies all that well.

It'd be more wasteful to check every wedgie. The generator mapping's a subset of the wedgie (at least for a linear temperament -- in general you need a generator mapping and the number of periods per octave). The number of elements in the generator mapping is roughly proportional to the number of primes (strictly the number minus one). The number of elements in the wedgie is roughly proportional to the square of the number of primes (strictly n(n-1)/2).

Searching for "every mapping" will be exponential in the number of primes, which means it gets very wasteful. You also need to be able to extract the generator size from the mapping which isn't straightforward. If you want a complete mapping, the smallest number of integers you need to search is 2n-3 for n primes.

>> I've been finding optimal ET mappings (minimizing TOP-MAX error) and
>> wedging them.
>
> Ah; that's what I was missing. I think that's what Graham
> does too.

I find all ETs below a given simple badness (error*complexity) and number of notes to the octave. I hope there's a pseudo-polynomial time algorithm behind this.

Graham

🔗Carl Lumma <carl@lumma.org>

3/27/2008 10:09:22 AM

At 03:09 AM 3/27/2008, you wrote:
>Herman:
>>> In theory you could check every possible generator mapping. But that
>>> would be pretty wasteful.
>
>Carl:
>> I'm guessing it'd be slightly less wasteful to check every wedgie,
>> but I don't understand wedgies all that well.
>
>It'd be more wasteful to check every wedgie. The generator
>mapping's a subset of the wedgie (at least for a linear
>temperament -- in general you need a generator mapping and
>the number of periods per octave).

I assumed Herman meant the full mapping.

>The number of elements
>in the generator mapping is roughly proportional to the
>number of primes (strictly the number minus one). The
>number of elements in the wedgie is roughly proportional to
>the square of the number of primes (strictly n(n-1)/2).
>
>Searching for "every mapping" will be exponential in the
>number of primes, which means it gets very wasteful.

Sounds like I was right.

>>> I've been finding optimal ET mappings (minimizing TOP-MAX error) and
>>> wedging them.
>>
>> Ah; that's what I was missing. I think that's what Graham
>> does too.
>
>I find all ETs below a given simple badness
>(error*complexity) and number of notes to the octave.

...and then wedge them, yes?

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/27/2008 9:07:22 PM

Carl Lumma wrote:
> At 03:09 AM 3/27/2008, you wrote:
>> Herman:
>>>> In theory you could check every possible generator mapping. But that
>>>> would be pretty wasteful.
>> Carl:
>>> I'm guessing it'd be slightly less wasteful to check every wedgie,
>>> but I don't understand wedgies all that well.
>> It'd be more wasteful to check every wedgie. The generator
>> mapping's a subset of the wedgie (at least for a linear
>> temperament -- in general you need a generator mapping and
>> the number of periods per octave).
>
> I assumed Herman meant the full mapping.

Which full mapping? The least wasteful way is to look at the reduced mapping (Hermite or row echelon or something).

>> The number of elements
>> in the generator mapping is roughly proportional to the
>> number of primes (strictly the number minus one). The
>> number of elements in the wedgie is roughly proportional to
>> the square of the number of primes (strictly n(n-1)/2).
>>
>> Searching for "every mapping" will be exponential in the
>> number of primes, which means it gets very wasteful.
>
> Sounds like I was right.

For integer values of n where 2n-3 > n(n-1)/2, yes.

>>>> I've been finding optimal ET mappings (minimizing TOP-MAX error) and
>>>> wedging them.
>>> Ah; that's what I was missing. I think that's what Graham
>>> does too.
>> I find all ETs below a given simple badness
>> (error*complexity) and number of notes to the octave.
>
> ...and then wedge them, yes?

I don't know. What does "wedge them" mean?

Graham

🔗Carl Lumma <carl@lumma.org>

3/27/2008 11:27:04 PM

Graham wrote...

>>>>> In theory you could check every possible generator mapping. But that
>>>>> would be pretty wasteful.
>>>>
>>>> I'm guessing it'd be slightly less wasteful to check every wedgie,
>>>> but I don't understand wedgies all that well.
>>>
>>> It'd be more wasteful to check every wedgie. The generator
>>> mapping's a subset of the wedgie (at least for a linear
>>> temperament -- in general you need a generator mapping and
>>> the number of periods per octave).
>>
>> I assumed Herman meant the full mapping.
>
>Which full mapping? The least wasteful way is to look at
>the reduced mapping (Hermite or row echelon or something).

Both vals.

>>>>> I've been finding optimal ET mappings (minimizing TOP-MAX error)
>>>>> and wedging them.
>>>>
>>>> Ah; that's what I was missing. I think that's what Graham
>>>> does too.
>>>
>>> I find all ETs below a given simple badness
>>> (error*complexity) and number of notes to the octave.
>>
>> ...and then wedge them, yes?
>
>I don't know. What does "wedge them" mean?

Compute the wedge product of the vals endemic to the two ETs.

-Carl
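A minimal sketch of the wedging Carl describes, for concreteness. The helper names are mine, and the vals are the standard 7-limit patent vals for 12 and 19; wedging them recovers the meantone wedgie.

```python
from math import gcd
from functools import reduce
from itertools import combinations

def wedge(a, b):
    """Wedge product of two vals: all 2x2 minors a[i]*b[j] - a[j]*b[i], i < j."""
    return [a[i] * b[j] - a[j] * b[i] for i, j in combinations(range(len(a)), 2)]

def normalize(w):
    """Divide out common factors and make the first nonzero element positive."""
    g = reduce(gcd, (abs(x) for x in w)) or 1
    w = [x // g for x in w]
    lead = next((x for x in w if x != 0), 1)
    return [-x for x in w] if lead < 0 else w

# 7-limit patent vals for 12- and 19-note equal temperaments
v12 = [12, 19, 28, 34]
v19 = [19, 30, 44, 53]
print(normalize(wedge(v12, v19)))   # [1, 4, 10, 4, 13, 12] -- the meantone wedgie
```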

🔗Graham Breed <gbreed@gmail.com>

3/28/2008 12:59:25 AM

Carl Lumma wrote:
> Graham wrote...

>>>>>> I've been finding optimal ET mappings (minimizing TOP-MAX error)
>>>>>> and wedging them.
>>>>> Ah; that's what I was missing. I think that's what Graham
>>>>> does too.
>>>> I find all ETs below a given simple badness
>>>> (error*complexity) and number of notes to the octave.
>>> ...and then wedge them, yes?
>> I don't know. What does "wedge them" mean?
> Compute the wedge product of the vals endemic to the two ETs.

Then no, I don't do that. Part of it, maybe, to get the contorsion.

Graham

🔗Graham Breed <gbreed@gmail.com>

3/28/2008 5:20:11 AM

Carl Lumma wrote:
> Graham wrote...
>> Which full mapping? The least wasteful way is to look at
>> the reduced mapping (Hermite or row echelon or something).

> Both vals.

We'll have to leave it to Herman to say what he isn't doing. But note that a search for the period and generator mappings needn't be that wasteful. For a reasonable error, the period mappings are uniquely determined once you go past the 3-limit in most cases. The possible values for the generator are limited by the complexity and you can also restrict them by error, adding one prime at a time. So even if it's exponential it's only in the number of primes.
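A toy sketch of that prime-at-a-time idea (entirely illustrative and mine, not Graham's code; a real search would also prune each partial mapping by its optimal error before adding the next prime, which is what tames the growth):

```python
def extend(partial, bound):
    """All one-entry extensions of a partial generator mapping."""
    for g in range(-bound, bound + 1):
        yield partial + [g]

def search(n_primes, bound):
    """Enumerate generator mappings <0, g1, ..., g(n-1)] within a per-entry bound."""
    candidates = [[0]]        # the octave maps to 0 generator steps
    for _ in range(n_primes - 1):
        candidates = [ext for c in candidates for ext in extend(c, bound)]
        # a real search would prune here: drop any partial mapping whose
        # optimal error is already too high before adding the next prime
    return candidates

print(len(search(4, 8)))   # 7-limit with entries in [-8, 8]: 17**3 = 4913 candidates
```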

My complete search code does something similar. You can set dmax=1 so it searches for true linear temperaments where the period equals the octave. First it finds mappings for 1 note equal temperament (period mappings) within the error limit. Then it finds generators for them within the error and complexity limits.

You can surely do the same with a wedgie search because the number of independent elements is the same and it's always the same problem. But it isn't intuitive to me to predict error and complexity by adding elements to the wedgie.

Overall, the easiest way of finding good rank 2 temperaments is to pair off good equal temperaments. Which is what Herman is doing.

Graham

🔗Carl Lumma <carl@lumma.org>

3/28/2008 8:34:30 AM

>> Compute the wedge product of the vals endemic to the two ETs.
>
>Then no, I don't do that. Part of it, maybe, to get the
>contorsion.

How do you get LTs from ETs?

-Carl

🔗Carl Lumma <carl@lumma.org>

3/28/2008 8:44:14 AM

Graham wrote...

>>> Which full mapping? The least wasteful way is to look at
>>> the reduced mapping (Hermite or row echelon or something).
>>
>> Both vals.
>
>We'll have to leave it to Herman to say what he isn't doing.

He's already said what he's doing. You said something to
which I was replying, which isn't worth looking up now.

> But note that a search for the period and generator
>mappings needn't be that wasteful. For a reasonable error,
>the period mappings are uniquely determined once you go past
>the 3-limit in most cases. The possible values for the
>generator are limited by the complexity and you can also
>restrict them by error, adding one prime at a time. So even
>if it's exponential it's only in the number of primes.

And for similar reasons, searching wedgies shouldn't be
that wasteful. It should be even less wasteful since
there's a 1:many mapping from wedgies to mappings.

>You can surely do the same with a wedgie search because the
>number of independent elements are the same and it's always
>the same problem. But it isn't intuitive to me to predict
>error and complexity by adding elements to the wedgie.

If you're using wedgie complexity like Herman, that ought
to be straightforward. I don't understand how limiting the
error would work in either case.

>Overall, the easiest way of finding good rank 2 temperaments
>is to pair off good equal temperaments. Which is what
>Herman is doing.

But it's not quite as elegant.

-Carl

🔗Herman Miller <hmiller@IO.COM>

3/28/2008 6:33:10 PM

Graham Breed wrote:
> Carl Lumma wrote:
>> Graham wrote...
>>
>>> Which full mapping? The least wasteful way is to look at
>>> the reduced mapping (Hermite or row echelon or something).
>> Both vals.
> We'll have to leave it to Herman to say what he isn't doing.

I have a monster of a program that really ought to be rewritten someday. Basically I end up using the wedgie to calculate error and complexity, follow the method described at http://x31eq.com/temper/method.html to find the period and generator mapping, recalculate the wedgie from the temperament mapping (!), and use the wedgie to eliminate temperaments already in the list before comparing the error and complexity. Not the most efficient way to do it, but I don't have the time to rewrite it, and it's fast enough for what I need.

🔗Graham Breed <gbreed@gmail.com>

3/28/2008 9:31:16 PM

Carl Lumma wrote:
> Graham wrote...

>> But note that a search for the period and generator
>> mappings needn't be that wasteful. For a reasonable error,
>> the period mappings are uniquely determined once you go past
>> the 3-limit in most cases. The possible values for the
>> generator are limited by the complexity and you can also
>> restrict them by error, adding one prime at a time. So even
>> if it's exponential it's only in the number of primes.

> And for similar reasons, searching wedgies shouldn't be
> that wasteful. It should be even less wasteful since
> there's a 1:many mapping from wedgies to mappings.

For suitable definitions of "wedgie" and "mapping" there may be a 1:many mapping. But I said most of the period mappings are uniquely determined. That means many wedgies to one generator mapping. So the search space of mappings is smaller.

>> You can surely do the same with a wedgie search because the
>> number of independent elements is the same and it's always
>> the same problem. But it isn't intuitive to me to predict
>> error and complexity by adding elements to the wedgie.

> If you're using wedgie complexity like Herman, that ought
> to be straightforward. I don't understand how limiting the
> error would work in either case.

Herman's using sum-abs wedgie complexity. Of course, he could always change to something that makes more sense.

I get the max-abs wedgie complexity for miracle as 6.793. So if you want miracle in the output you need at least as much going in. The bounds on elements of the wedge product are:

|T[0,0]| < 7
|T[0,1]| < 6.793*log2(3) = 10.8
|T[0,2]| < 6.793*log2(5) = 15.8
|T[0,3]| < 6.793*log2(7) = 19.1
|T[1,2]| < 6.793*log2(3)*log2(5) = 25.0 (below 25)
|T[1,3]| < 6.793*log2(3)*log2(7) = 30.2
|T[2,3]| < 6.793*log2(5)*log2(7) = 44.3

If you do a naive search for all wedgies below this complexity, you'll need the first element to go from -6 to 6, which means 13 different values. The second element will be from -10 to 10 for 21 different values. The total number of wedgies you need to look at is

13*21*31*49*61*89 = 2,251,335,723

That means over 2 billion different wedgies, which may be possible but is extremely wasteful. Well, no problem, as one of the elements is redundant. So you can be clever and only search for

13*21*31*49*61 = 25,295,907

different wedgies. Certainly that's possible as long as you don't want to calculate the TOP-max error for each of them. You could even decide that the octave-equivalent wedgie is good enough to distinguish the temperament class (after all, it isn't a generator mapping unless you call it a generator mapping). Then you have to look at 13*21*31 = 8,463 different mappings. Certainly possible, but still wasteful compared to a few hundred paired equal temperaments. If you're using sum-abs complexity it gets worse because it isn't so easy to predict the overall complexity from the octave equivalent part.
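As an arithmetic check, the counts above follow from the integer ranges implied by the quoted bounds (which ranges correspond to which wedgie elements is Graham's accounting, taken as given here):

```python
from math import prod

ranges = [6, 10, 15, 24, 30, 44]        # integer ranges from the bounds quoted above
values = [2 * b + 1 for b in ranges]    # [13, 21, 31, 49, 61, 89] candidate values each

print(prod(values))        # 2251335723 -- the naive search
print(prod(values[:5]))    # 25295907   -- after dropping the redundant element
print(prod(values[:3]))    # 8463       -- octave-equivalent part only
```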

That covers the 7-limit. For higher limits it gets dramatically worse because of the exponential search space.

The point about limiting the error is that once you have the 5-limit part of the mapping you have the mapping for a 5-limit temperament class. From that you can calculate the optimal error for the same class. If it's already too high you can stop searching and tame the exponential growth.

>> Overall, the easiest way of finding good rank 2 temperaments
>> is to pair off good equal temperaments. Which is what
>> Herman is doing.

> But it's not quite as elegant.

That's a fine way to dismiss 7 years of work!

Graham

🔗Graham Breed <gbreed@gmail.com>

3/28/2008 9:38:45 PM

Carl Lumma wrote:
>>> Compute the wedge product of the vals endemic to the two ETs.
>> Then no, I don't do that. Part of it, maybe, to get the
>> contorsion.

> How do you get LTs from ETs?

What's an LT? By your definition of "temperament" I thought I already had one. By mine, it depends on which module and which error's being optimized for. The pure Python code does something relating to a formula in "Prime Based Errors and Complexities". The newer code for TOP-RMS goes straight to the error without needing a tuning so it's strictly about temperament classes and not temperaments. Where I use NumPy (formerly Numeric) I use the standard linear least squares function. For TOP-max error I use a linear programming library.

Graham

🔗Carl Lumma <carl@lumma.org>

3/28/2008 11:17:44 PM

Graham wrote...

>> And for similar reasons, searching wedgies shouldn't be
>> that wasteful. It should be even less wasteful since
>> there's a 1:many mapping from wedgies to mappings.
>
>For suitable definitions of "wedgie" and "mapping" there may
>be a 1:many mapping. But I said most of the period mappings
>are uniquely determined.

That made no sense. They can divide the octave an arbitrary
number of times (though this is better-controlled if you use
weighted complexity).

>That means many wedgies to one generator mapping.

Show me.

>Herman's using sum-abs wedgie complexity. Of course, he
>could always change to something that makes more sense.

Such as?

>I get the max-abs wedgie complexity for miracle as 6.793.
>So if you want miracle in the output you need at least as
>much going in. The bounds on elements of the wedge product are:
>
>|T[0,0]| < 7
>|T[0,1]| < 6.793*log2(3) = 10.8
>|T[0,2]| < 6.793*log2(5) = 15.8
>|T[0,3]| < 6.793*log2(7) = 19.1
>|T[1,2]| < 6.793*log2(3)*log2(5) = 25.0 (below 25)
>|T[1,3]| < 6.793*log2(3)*log2(7) = 30.2
>|T[2,3]| < 6.793*log2(5)*log2(7) = 44.3
>
>If you do a naive search for all wedgies below this
>complexity, you'll need the first element to go from -6 to
>6, which means 13 different values. The second element will
>be from -10 to 10 for 21 different values. The total number
>of wedgies you need to look at is
>
>13*21*31*49*61*89 = 2,251,335,723
>
>That means over 2 billion different wedgies, which may be
>possible but is extremely wasteful. Well, no problem, as
>one of the elements is redundant. So you can be clever and
>only search for
>
>13*21*31*49*61 = 25,295,907
>
>different wedgies. Certainly that's possible as long as you
>don't want to calculate the TOP-max error for each of them.
> You could even decide that the octave-equivalent wedgie is
>good enough to distinguish the temperament class (after all,
>it isn't a generator mapping unless you call it a generator
>mapping). Then you have to look at 13*21*31 = 8,463
>different mappings.

We jumped to "mappings" here in a way I didn't follow.

>>> Overall, the easiest way of finding good rank 2 temperaments
>>> is to pair off good equal temperaments. Which is what
>>> Herman is doing.
>>
>> But it's not quite as elegant.
>
>That's a fine way to dismiss 7 years' of work!

All that's missing is the proof that all low-complexity LTs
show up from wedging pairs of vals belonging to a small set
of ETs. Specifically, a relation between the largest ET
used and the largest complexity of the obtainable LTs.

-Carl

🔗Carl Lumma <carl@lumma.org>

3/28/2008 11:18:47 PM

At 09:38 PM 3/28/2008, you wrote:
>Carl Lumma wrote:
>>>> Compute the wedge product of the vals endemic to the two ETs.
>>> Then no, I don't do that. Part of it, maybe, to get the
>>> contorsion.
>>
>> How do you get LTs from ETs?
>
>What's an LT?

Sorry, rank 2 temperament. Please replace every occurrence
of "LT" with "R2T" in my last message.

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/29/2008 12:08:00 AM

Carl Lumma wrote:
> Graham wrote...
>>> And for similar reasons, searching wedgies shouldn't be
>>> that wasteful. It should be even less wasteful since
>>> there's a 1:many mapping from wedgies to mappings.
>> For suitable definitions of "wedgie" and "mapping" there may
>> be a 1:many mapping. But I said most of the period mappings
>> are uniquely determined.

> That made no sense. They can divide the octave an arbitrary
> number of times (though this is better-controlled if you use
> weighted complexity).

Who's using unweighted complexity?

It doesn't matter how many times they divide the octave. There's usually only one choice for the period mapping that makes sense. There may be a few exceptions but finding the period mapping is still generally linear-time once you have the 3-limit.

>> That means many wedgies to one generator mapping.
> Show me.

[1,4,4] is meantone (5-limit). [1,4,0] is not but the generator mapping's still <0, 1, 4].

>> Herman's using sum-abs wedgie complexity. Of course, he
>> could always change to something that makes more sense.

> Such as?

Max-abs wedgie complexity probably has some justification. Kees-max or STD complexity are simpler to calculate from the generator mapping (plus periods per octave). Scalar complexity is generally good. Odd-limit complexity obviously makes sense if you're using odd-limits.

>> different wedgies. Certainly that's possible as long as you
>> don't want to calculate the TOP-max error for each of them.
>> You could even decide that the octave-equivalent wedgie is
>> good enough to distinguish the temperament class (after all,
>> it isn't a generator mapping unless you call it a generator
>> mapping). Then you have to look at 13*21*31 = 8,463
>> different mappings.

> We jumped to "mappings" here in a way I didn't follow.

The octave-equivalent part of the wedgie is also the mapping by the octave-equivalent generator times the number of periods to the octave. That's what I thought Herman meant by "generator mapping" because I've been calling it that for several years now (give or take the periods per octave bit).

>>>> Overall, the easiest way of finding good rank 2 temperaments
>>>> is to pair off good equal temperaments. Which is what
>>>> Herman is doing.
>>> But it's not quite as elegant.
>> That's a fine way to dismiss 7 years' of work!
> All that's missing is the proof that all low-complexity LTs
> show up from wedging pairs of vals belonging to a small set
> of ETs. Specifically, a relation between the largest ET
> used and the largest complexity of the obtainable LTs.

http://x31eq.com/complete.pdf

I've also explained on the list how parametric scalar badness is the product of the badnesses of two equal temperaments and the angle between them in parametric scalar badness space. Also how the best pair of equal temperaments for a given rank 2 temperament class will be the nearest to orthogonal. All we need is a rule for how close vectors can get to being orthogonal in a given lattice.

Graham

🔗Carl Lumma <carl@lumma.org>

3/29/2008 1:03:59 AM

Graham wrote...

>>>> And for similar reasons, searching wedgies shouldn't be
>>>> that wasteful. It should be even less wasteful since
>>>> there's a 1:many mapping from wedgies to mappings.
>>> For suitable definitions of "wedgie" and "mapping" there may
>>> be a 1:many mapping. But I said most of the period mappings
>>> are uniquely determined.
>>
>> That made no sense. They can divide the octave an arbitrary
>> number of times (though this is better-controlled if you use
>> weighted complexity).
>
>Who's using unweighted complexity?

Me.

>>> That means many wedgies to one generator mapping.
>>
>> Show me.
>
>[1,4,4] is meantone (5-limit). [1,4,0] is not but the
>generator mapping's still <0, 1, 4].

Stop leaving out the period mapping and it won't look so
good. If you have a way to find the period easily then you
can show it here.

>>> Herman's using sum-abs wedgie complexity. Of course, he
>>> could always change to something that makes more sense.
>>
>> Such as?
>
>Max-abs wedgie complexity probably has some justification.
>Kees-max or STD complexity are simpler to calculate from the
>generator mapping (plus periods per octave). Scalar
>complexity is generally good. Odd-limit complexity
>obviously makes sense if you're using odd-limits.

I'm sure there's a justification for sum-abs, or Paul
wouldn't have used it.

>>>>> Overall, the easiest way of finding good rank 2 temperaments
>>>>> is to pair off good equal temperaments. Which is what
>>>>> Herman is doing.
>>>> But it's not quite as elegant.
>>> That's a fine way to dismiss 7 years' of work!
>>
>> All that's missing is the proof that all low-complexity LTs
>> show up from wedging pairs of vals belonging to a small set
>> of ETs. Specifically, a relation between the largest ET
>> used and the largest complexity of the obtainable LTs.
>
>http://x31eq.com/complete.pdf

Down at the moment, but fortunately I've archived it.

I think it would be clearer if you started from the result
and worked backwards. I don't see any relation there involving
only complexity of R2Ts vs. the max ET used. Error always
seems to be involved.

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/29/2008 1:35:00 AM

Carl Lumma wrote:
> Graham wrote...

>> [1,4,4] is meantone (5-limit). [1,4,0] is not but the
>> generator mapping's still <0, 1, 4].

> Stop leaving out the period mapping and it won't look so
> good. If you have a way to find the period easily then you
> can show it here.

What won't look as good as what? The period is the octave (generally equivalence interval) divided by the GCD of the octave-equivalent wedgie.
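That rule is easy to sketch. The injera-style example below assumes its octave-equivalent wedgie is <0, 2, 8, 8], i.e. twice the generator mapping, since it has two periods to the octave; the function name is mine.

```python
from math import gcd
from functools import reduce

def periods_per_octave(oe_wedgie):
    """Graham's rule: the octave divides into GCD-many equal periods."""
    return reduce(gcd, (abs(x) for x in oe_wedgie))

print(periods_per_octave([0, 1, 4, 4]))   # meantone: 1, so the period is the octave
print(periods_per_octave([0, 2, 8, 8]))   # injera-style: 2, so the period is a half-octave
```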

>>>> Herman's using sum-abs wedgie complexity. Of course, he
>>>> could always change to something that makes more sense.
>>> Such as?
>> Max-abs wedgie complexity probably has some justification.
>> Kees-max or STD complexity are simpler to calculate from the
>> generator mapping (plus periods per octave). Scalar
>> complexity is generally good. Odd-limit complexity
>> obviously makes sense if you're using odd-limits.

> I'm sure there's a justification for sum-abs, or Paul
> wouldn't have used it.

It's touching how much faith you have in your peers. How do you know it wasn't the best bet given imperfect knowledge?

>>>>>> Overall, the easiest way of finding good rank 2 temperaments
>>>>>> is to pair off good equal temperaments. Which is what
>>>>>> Herman is doing.
>>>>> But it's not quite as elegant.
>>>> That's a fine way to dismiss 7 years' of work!
>>> All that's missing is the proof that all low-complexity LTs
>>> show up from wedging pairs of vals belonging to a small set
>>> of ETs. Specifically, a relation between the largest ET
>>> used and the largest complexity of the obtainable LTs.
>> http://x31eq.com/complete.pdf
> Down at the moment, but fortunately I've archived it.

Yes, sorry, it's past midnight for you. They're doing maintenance this weekend.

> I think it would be clearer if you started from the result
> and worked backwards. I don't see any relation there involving
> only complexity of R2Ts vs. the max ET used. Error always
> seems to be involved.

It's too late to say where I should start from because that's last month's article and I've decreed it finished.

Why do you want to ignore error? Scalar complexity is a Euclidean lattice norm. Once again it comes down to how close you can get to being orthogonal.

Max ET without an error cutoff means nothing. You can get any strictly linear temperament as a mapping of 1-equal. Scalar complexity will be a bit better behaved but there'll still be a load of nonsense temperaments with low complexity.

Graham

🔗Carl Lumma <carl@lumma.org>

3/29/2008 1:53:55 AM

>Why do you want to ignore error?

Because I asked for a relationship between R2T complexity
and R1T /\ R1T complexity.

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/29/2008 9:19:10 AM

Carl Lumma wrote:
>> Why do you want to ignore error?
> Because I asked for a relationship between R2T complexity
> and R1T /\ R1T complexity.

Yes, and I gave you one. Why did you want it?

Graham

🔗Carl Lumma <carl@lumma.org>

3/29/2008 11:48:42 AM

Graham wrote...

>>> Why do you want to ignore error?
>>
>> Because I asked for a relationship between R2T complexity
>> and R1T /\ R1T complexity.
>
>Yes, and I gave you one.

Where in the paper is it?

>Why did you want it?

I'm thinking of implementing an R2T search.

-C.

🔗Herman Miller <hmiller@IO.COM>

3/29/2008 1:51:33 PM

Graham Breed wrote:

> The octave-equivalent part of the wedgie is also the mapping
> by the octave-equivalent generator times the number of
> periods to the octave. That's what I though Herman meant by
> "generator mapping" because I've been calling it that for
> several years now (give or take the periods per octave bit).

What I'm calling "generator mapping" is what I was told to call "generator mapping" when I asked what I should be calling it. I probably was calling it something else that was "wrong".

A few random examples for illustration:

[<2, 3, 5, 6], <0, 1, -2, -2]>
[<1, -1, 0, 1, -3], <0, 10, 9, 7, 25]>
[<1, 2, 3, 4, 4, 4, 4, 4, 6, 5], <0, -3, -5, -9, -4, -2, 1, 2, -11, -1]>

🔗Carl Lumma <carl@lumma.org>

3/29/2008 1:55:52 PM

At 01:51 PM 3/29/2008, you wrote:
>Graham Breed wrote:
>
>> The octave-equivalent part of the wedgie is also the mapping
>> by the octave-equivalent generator times the number of
>> periods to the octave. That's what I though Herman meant by
>> "generator mapping" because I've been calling it that for
>> several years now (give or take the periods per octave bit).
>
>What I'm calling "generator mapping" is what I was told to call
>"generator mapping" when I asked what I should be calling it. I probably
>was calling it something else that was "wrong".
>
>A few random examples for illustration:
>
>[<2, 3, 5, 6], <0, 1, -2, -2]>
>[<1, -1, 0, 1, -3], <0, 10, 9, 7, 25]>
>[<1, 2, 3, 4, 4, 4, 4, 4, 6, 5], <0, -3, -5, -9, -4, -2, 1, 2, -11, -1]>

That's what I'd call it, but Graham I think has in this thread
been using it to refer to the mapping without the period val.
Whichever one that is.

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/29/2008 8:29:31 PM

Carl Lumma wrote:
> Graham wrote...
>>>> Why do you want to ignore error?
>>> Because I asked for a relationship between R2T complexity
>>> and R1T /\ R1T complexity.
>> Yes, and I gave you one.
> Where in the paper is it?

Not in the paper. I described it here. It follows from Gene's geometry if you use a Euclidean metric.

>> Why did you want it?
> I'm thinking of implementing a R2T search.

In that case you should have some idea of the error as well and it'll help if you use it.

Graham

🔗Carl Lumma <carl@lumma.org>

3/29/2008 8:37:31 PM

>>>> Because I asked for a relationship between R2T complexity
>>>> and R1T /\ R1T complexity.
>>>
>>> Yes, and I gave you one.
>>
>> Where in the paper is it?
>
>Not in the paper. I described it here. It follows from
>Gene's geometry if you use a Euclidean metric.
>
>>> Why did you want it?
>>
>> I'm thinking of implementing a R2T search.
>
>In that case you should have some idea of the error as well
>and it'll help if you use it.

Yes, I'll need error for that, but I won't use the two
ETs method unless there's a relation with complexity of
the resulting R2Ts. If you gave it I missed it.

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/29/2008 10:04:50 PM

Carl Lumma wrote:

> Yes, I'll need error for that, but I won't use the two
> ETs method unless there's a relation with complexity of
> the resulting R2Ts. If you gave it I missed it.

The connection is that the scalar complexity of the R2T is an area in a complexity space, where the scalar complexities of the equal temperaments are lengths. It follows from this that the complexity of the R2T must be less than the product of the complexities of the two ETs.
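A quick numerical illustration of that area picture. The weighting and normalization here are mine, so the numbers are not Breed's scalar complexities; the point is only that the area of the parallelogram spanned by two weighted vals is bounded by the product of their lengths (Cauchy-Schwarz), and shrinks as the vals become parallel.

```python
from math import log2, sqrt

PRIMES = [2, 3, 5, 7]

def weighted(val):
    """Weight each prime's mapping by 1/log2(p) so octaves count as 1."""
    return [v / log2(p) for v, p in zip(val, PRIMES)]

def norm(x):
    return sqrt(sum(xi * xi for xi in x))

def area(a, b):
    """Parallelogram area from the Gram determinant |a|^2 |b|^2 - (a.b)^2."""
    dot = sum(ai * bi for ai, bi in zip(a, b))
    return sqrt(max(0.0, norm(a) ** 2 * norm(b) ** 2 - dot * dot))

# 10- and 31-ET patent vals, a pair whose wedge gives miracle
a = weighted([10, 16, 23, 28])
b = weighted([31, 49, 72, 87])
print(area(a, b) <= norm(a) * norm(b))   # True: the area never exceeds the product of lengths
```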

One rule of thumb is that the scalar complexities of the simplest ETs will be around the square root of the scalar complexity of the R2T. When you see that miracle has a scalar complexity of 2.4 you'll realize how small the ETs can get. In this region you can't approximate the scalar complexity as the number of notes to the octave -- you have to do the full RMS of the weighted mapping.

The problem is that good equal temperaments are not randomly distributed in complexity spaces. They cluster around the JI line. The closer they get the smaller the error. And the TOP-RMS error is exactly the sine of the angle the ET line makes with the JI line. That's pretty small -- usually less than 1 degree. So if the ETs have a small angle with the JI line they'll also have a small angle with each other. To guess how small you really do have to consider the error.

Graham

🔗Carl Lumma <carl@lumma.org>

3/29/2008 10:37:28 PM

At 10:04 PM 3/29/2008, you wrote:
>Carl Lumma wrote:
>
>> Yes, I'll need error for that, but I won't use the two
>> ETs method unless there's a relation with complexity of
>> the resulting R2Ts. If you gave it I missed it.
>
>The connection is that the scalar complexity of the R2T is
>an area in a complexity space, where the scalar complexities
>of the equal temperament are lengths. It follows from this
>that the complexity of the R2T must be less than the product
>of the complexities of the two ETs.
>
>One rule of thumb is that the scalar complexities of the
>simplest ETs will be around the square root of the scalar
>complexity of the R2T.

You mean if I use ETs up to 100 I can't find R2Ts with
scalar complexity of > 99? Will I get all of them?
If so that's a real deal.

>When you see that miracle has a scalar complexity of 2.4

It does? We're still talking about the number of notes
needed to complete the map, right? I assume 2.4 is some
bastardization produced by weighting?

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/29/2008 11:24:07 PM

Carl Lumma wrote:
> At 10:04 PM 3/29/2008, you wrote:

>> One rule of thumb is that the scalar complexities of the
>> simplest ETs will be around the square root of the scalar
>> complexity of the R2T.

> You mean if I use ETs up to 100 I can't find R2Ts with
> scalar complexity of > 99? Will I get all of them?
> If so that's a real deal.

ETs up to 100 will give you R2Ts with scalar complexity up to 9900. You can never get all of them.

>> When you see that miracle has a scalar complexity of 2.4
> It does? We're still talking about the number of notes
> needed to complete the map, right? I assume 2.4 is some
> bastardization produced by weighting?

It's the area of the weighted mapping where the weighting's in octaves.

There is a way of getting a number of notes out of a rank 2 temperament: sqrt(c/e) where c is complexity and e is error. It's rule of thumb 1 in complete.pdf and that points to Equation 89 on p.19 of primerr.pdf. The best equal temperaments will have fewer notes than this.
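A sketch of that rule of thumb; the sample numbers below are placeholders for illustration, not values taken from either paper, and c and e must be in consistent units for the ratio to make sense.

```python
from math import sqrt

def typical_et_size(complexity, error):
    """Rough number of notes for the best ETs of a rank 2 class: sqrt(c/e)."""
    return sqrt(complexity / error)

# e.g. a hypothetical class with complexity 2.4 and error 0.001 (octave fractions)
print(round(typical_et_size(2.4, 0.001)))   # about 49 notes
```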

Graham

🔗Carl Lumma <carl@lumma.org>

3/29/2008 11:33:45 PM

At 11:24 PM 3/29/2008, you wrote:
>Carl Lumma wrote:
>> At 10:04 PM 3/29/2008, you wrote:
>
>>> One rule of thumb is that the scalar complexities of the
>>> simplest ETs will be around the square root of the scalar
>>> complexity of the R2T.
>>
>> You mean if I use ETs up to 100 I can't find R2Ts with
>> scalar complexity of > 99? Will I get all of them?
>> If so that's a real deal.
>
>ETs up to 100 will give you R2Ts with scalar complexity up
>to 9900.

Ah, OK.

>You can never get all of them.

I mean all of them up to 9900.

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/29/2008 11:49:30 PM

Carl Lumma wrote:
> At 11:24 PM 3/29/2008, you wrote:
>> Carl Lumma wrote:
>>> At 10:04 PM 3/29/2008, you wrote:
>>>> One rule of thumb is that the scalar complexities of the
>>>> simplest ETs will be around the square root of the scalar
>>>> complexity of the R2T.
>>> You mean if I use ETs up to 100 I can't find R2Ts with
>>> scalar complexity of > 99? Will I get all of them?
>>> If so that's a real deal.
>> ETs up to 100 will give you R2Ts with scalar complexity up
>> to 9900.

> Ah, OK.

>> You can never get all of them.

> I mean all of them up to 9900.

Yes, and I really mean you'll never get them all. If you want an easier problem, try searching through all possible chess games.

To get an idea of weighted complexity, look at Table 9 on page 29 of primerr.pdf. The right hand column is the 7-odd limit complexity that you should be familiar with. The 1/2-Range column is the weighted equivalent. Half the Kees-max complexity by another name. It's weighted by octaves, so to make it comparable to the odd-limit complexity you multiply by the number of octaves that give the harmonics you're interested in. 7 is one less than 8 and 8 is three octaves, so the 7-odd limit uses roughly 3 octaves. Multiply the 1/2-range column by 3 and you'll see it roughly matches up.

Then, you can see that the Max-Abs column is very similar to the 1/2-Range column. I don't know why this is but there must be a good reason for it. So Max-Abs weighted wedgie complexity is the best weighted equivalent to odd-limit that doesn't give octaves a privileged status.

The values in the STD column are always smaller than those in the 1/2-range column. It's the best bet for the RMS weighted complexity of the prime limit. If you multiply this column by 3 it generally comes up less than the odd-limit complexity. It's less because it's an average rather than a worst case.

Now notice that the Scalar column is very close to the STD column. Scalar complexity is the octave-specific equivalent to STD complexity. Multiply it by the number of octaves corresponding to the complexity of the intervals you're interested in and it'll tell you the average number of notes per octave you'll need. There's still a "per octave" because the weighting is by octaves squared.

Graham