
Graham's Top 20 13-limit temperaments

🔗Gene Ward Smith <genewardsmith@juno.com>

1/24/2003 10:09:09 PM

Here they are again, in the form I would have given them.

Three comments--first, I don't know how this list was filtered. Second, I think 20 is too small a number for the 13-limit. Third, I used unweighted complexity because I don't have geometric complexity in the 13-limit coded as yet.

Mystery
[0, 29, 29, 29, 29, 46, 46, 46, 46, -14, -33, -40, -19, -26, -7]
[[29, 46, 67, 81, 100, 107], [0, 0, 1, 1, 1, 1]]
[41.3793103448276, 15.8257880102396]

unweighted badness 242.527516 unweighted complexity 22.463303 rms error 2.277984

Hemififth
[2, 25, 13, 5, -1, 35, 15, 1, -9, -40, -75, -95, -31, -51, -22]
[[1, 1, -5, -1, 2, 4], [0, 2, 25, 13, 5, -1]]
[1200., 351.617195107835]

unweighted badness 203.445092 unweighted complexity 13.364131 rms error 4.164244

Unidec
[12, 22, -4, -6, 4, 7, -40, -51, -38, -71, -90, -72, -3, 26, 36]
[[2, 5, 8, 5, 6, 8], [0, -6, -11, 2, 3, -2]]
[600., 183.225219270142]

unweighted badness 229.903139 unweighted complexity 17.401149 rms error 3.167218

Diaschismic
[2, -4, -16, -24, -30, -11, -31, -45, -55, -26, -42, -55, -12, -25, -15]
[[2, 3, 5, 7, 9, 10], [0, 1, -2, -8, -12, -15]]
[600., 103.786589235317]

unweighted badness 251.546815 unweighted complexity 19.682480 rms error 2.880707

Biminortonic
[14, 30, 4, 6, 22, 15, -33, -39, -17, -75, -90, -60, 3, 47, 54]
[[2, 1, 0, 5, 6, 4], [0, 7, 15, 2, 3, 11]]
[600., 185.994500197218]

unweighted badness 285.227918 unweighted complexity 17.175564 rms error 4.007057

Schismatic
[1, -8, -14, 23, 20, -15, -25, 33, 28, -10, 81, 76, 113, 108, -16]
[[1, 2, -1, -3, 13, 12], [0, -1, 8, 14, -23, -20]]
[1200., 497.872301871492]

unweighted badness 245.881799 unweighted complexity 19.724350 rms error 2.806870

Nonkleismic
[10, 9, 7, 25, -5, -9, -17, 5, -45, -9, 27, -45, 46, -40, -110]
[[1, -1, 0, 1, -3, 5], [0, 10, 9, 7, 25, -5]]
[1200., 310.312830917046]

unweighted badness 254.415991 unweighted complexity 15.006665 rms error 4.376411

Acute
[4, 21, -3, 39, 27, 24, -16, 48, 28, -66, 18, -15, 120, 87, -51]
[[1, 0, -6, 4, -12, -7], [0, 4, 21, -3, 39, 27]]
[1200., 475.694618357624]

unweighted badness 241.523384 unweighted complexity 22.614155 rms error 2.245891

Amity
[5, 13, -17, 9, -6, 9, -41, -3, -28, -76, -24, -62, 84, 46, -54]
[[1, 3, 6, -2, 6, 2], [0, -5, -13, 17, -9, 6]]
[1200., 339.412784647410]

unweighted badness 268.549368 unweighted complexity 15.295424 rms error 4.489333

Subminorsixth
[6, -48, 10, -50, 26, -90, -1, -100, 19, 158, 50, 238, -175, 36, 275]
[[2, 4, -2, 7, 0, 11], [0, -3, 24, -5, 25, -13]]
[600., 166.081391969544]

unweighted badness 207.849710 unweighted complexity 43.788126 rms error .717323

Minorsemi
[18, 15, -6, 9, 42, -18, -60, -48, 0, -56, -31, 42, 46, 140, 112]
[[3, 6, 8, 8, 11, 14], [0, -6, -5, 2, -3, -14]]
[400., 83.0061237758442]

unweighted badness 231.510127 unweighted complexity 25.260641 rms error 1.823490

Tricontaheximal
[0, 36, 0, 36, 0, 57, 0, 57, 0, -101, -41, -133, 101, 0, -133]
[[36, 57, 83, 101, 124, 133], [0, 0, 1, 0, 1, 0]]
[33.3333333333333, 15.7721139478765]

unweighted badness 416.491693 unweighted complexity 25.455844 rms error 3.242837

Spearmint
[2, -4, 30, 22, 16, -11, 42, 28, 18, 81, 65, 52, -42, -66, -26]
[[2, 3, 5, 3, 5, 6], [0, 1, -2, 15, 11, 8]]
[600., 104.895179196541]

unweighted badness 288.935571 unweighted complexity 18.477013 rms error 3.637922

Supersupermajor
[3, 17, -1, -13, -22, 20, -10, -31, -46, -50, -89, -114, -33, -58, -28]
[[1, 1, -1, 3, 6, 8], [0, 3, 17, -1, -13, -22]]
[1200., 234.480133729376]

unweighted badness 219.395264 unweighted complexity 18.448577 rms error 2.768745

Suprasubminorsixth
[6, 46, 10, 44, 26, 59, -1, 49, 19, -106, -57, -110, 89, 36, -73]
[[2, 4, 11, 7, 13, 11], [0, -3, -23, -5, -22, -13]]
[600., 165.810532415714]

unweighted badness 277.161590 unweighted complexity 26.724521 rms error 2.006172

Paraorwell
[11, -6, 10, 7, 15, -35, -15, -27, -17, 40, 37, 57, -15, 5, 26]
[[1, 4, 1, 5, 5, 7], [0, -11, 6, -10, -7, -15]]
[1200., 263.695224647719]

unweighted badness 271.488360 unweighted complexity 13.234425 rms error 5.638890

[3, 29, 11, 16, 7, 39, 9, 15, 0, -56, -63, -91, 7, -21, -35]
[[1, 3, 16, 8, 11, 7], [0, -3, -29, -11, -16, -7]]
[1200., 565.936121918345]

unweighted badness 282.926501 unweighted complexity 14.126217 rms error 5.328866

Miracle
[6, -7, -2, 15, 38, -25, -20, 3, 38, 15, 59, 114, 49, 114, 76]
[[1, 1, 3, 3, 2, 0], [0, 6, -7, -2, 15, 38]]
[1200., 116.779511703154]

unweighted badness 224.267777 unweighted complexity 21.718656 rms error 2.215733

[4, 9, 26, 10, -2, 5, 30, 2, -18, 35, -8, -38, -62, -102, -44]
[[1, 1, 1, -1, 2, 4], [0, 4, 9, 26, 10, -2]]
[1200., 175.873678872191]

unweighted badness 280.118702 unweighted complexity 13.315405 rms error 5.765149

[4, -37, -3, -19, -31, -68, -16, -44, -64, 97, 84, 65, -43, -76, -37]
[[1, 0, 17, 4, 11, 16], [0, 4, -37, -3, -19, -31]]
[1200., 476.008069832587]

unweighted badness 298.019307 unweighted complexity 25.845696 rms error 2.268099
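
To read these entries: the first line of each is the wedgie, the second is the period and generator mapping of the primes 2.3.5.7.11.13, and the third is the period and generator in cents. Here is a minimal sketch in plain Python (nothing from temper.py is assumed) of how the mapping and tuning give back the primes, using Mystery's numbers; the rms error figures are taken over the 13-limit consonances rather than only the primes, so this just sanity-checks the mapping and tuning.

from math import log2

# 13-limit primes and their just sizes in cents
primes = [2, 3, 5, 7, 11, 13]
just = [1200 * log2(p) for p in primes]

# Mystery's mapping and tuning, copied from the first entry above
mapping = [[29, 46, 67, 81, 100, 107],
           [0,  0,  1,  1,   1,   1]]
period, generator = 41.3793103448276, 15.8257880102396

# tempered size of each prime = (periods) * period + (generators) * generator
for i, p in enumerate(primes):
    tempered = mapping[0][i] * period + mapping[1][i] * generator
    print(p, round(tempered, 3), "error", round(tempered - just[i], 3), "cents")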

🔗Graham Breed <graham@microtonal.co.uk>

1/25/2003 12:12:59 AM

Gene Ward Smith wrote:
> Here they are again, in the form I would have given them.

Do you have a script that could have given them, as you do for the 11-limit?

> Three comments--first, I don't know how this list was filtered. Second, I think 20 is too small a number for the 13-limit. Third, I used unweighted complexity because I don't have geometric complexity in the 13-limit coded as yet.

Complexity<100, RMS error < 6 or so cents.

Why so? How many have you tuned up?

Graham

🔗Gene Ward Smith <genewardsmith@juno.com>

1/25/2003 12:40:10 AM

--- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:
> Gene Ward Smith wrote:

> > Here they are again, in the form I would have given them.
>
> Do you have a script that could have given them, as you do for the 11-limit?

I'm running on Maple, which is more powerful but much slower than Python, so it's getting to the point where I should really use something else, or else get you to do it.

> > Three comments--first, I don't know how this list was filtered. Second, I think 20 is too small a number for the 13-limit. Third, I used unweighted complexity because I don't have geometric complexity in the 13-limit coded as yet.

> Complexity<100, RMS error < 6 or so cents.

What kind of complexity? I didn't find anything nearly as high as 100 for unweighted complexity.

> Why so? How many have you tuned up?

As I say, I think you might be better for the job at this point. If I calculated the coefficients for determining geometric complexity, would you try that?

As for why 20 is too small, as we go up in prime limit, the number of reasonable systems increases; it more and more becomes the case that the lists will be completely different if we use slightly different complexity or error measures. I think we should take in a bigger haul at least to start out with.

🔗Graham Breed <graham@microtonal.co.uk>

1/25/2003 2:47:50 AM

Gene Ward Smith wrote:

> I'm running on Maple, which is more powerful but much slower than Python, so it's getting to the point where I should really use something else, or else get you to do it.

I'd have thought Maple would be better optimized for numerical work.

Anyway, I do have a version now using Numerical Python, which links to C and Fortran libraries for matrix operations. I can now do the 13-limit search within an hour, but there are bugs -- I'm not getting a correct period mapping for mystery.

It wouldn't be that difficult to convert the script to a more efficient language like C++ or Java, with a suitable numerical library. I can invert matrices in C++, but that isn't optimized code. Currently I still use my high-level Python library to do the RMS optimization, but I think that can be re-written.

>>Complexity<100, RMS error < 6 or so cents.
>
> What kind of complexity? I didn't find anything nearly as high as 100 for unweighted complexity.

It's the usual max-min complexity. And not calculated correctly because I'm only using primes. Maybe nothing's that high. I'm not recording what I throw away.
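
Roughly, and hedging that this is a sketch rather than the exact code, the primes-only shortcut amounts to the generator span over the primes times the number of periods to the octave:

def max_min_complexity(mapping):
    # mapping = [period row, generator row]; the first entry of the
    # period row is the number of periods to the octave
    periods_per_octave = mapping[0][0]
    gens = mapping[1]
    return (max(gens) - min(gens)) * periods_per_octave

# Mystery: generator counts span 0..1 over 29 periods, so 29
print(max_min_complexity([[29, 46, 67, 81, 100, 107], [0, 0, 1, 1, 1, 1]]))

A proper calculation would take the span over all the 13-limit consonances, not just the primes.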

A lot of inaccurate temperaments were getting in, which is why I tightened up the error cutoff.

> As I say, I think you might be better for the job at this point. If I calculated the coefficients for determining geometric complexity, would you try that?

Yes, if you give me an algorithm I can copy. Ideally Python code or pseudocode.

If it's only a question of checking that the unison vectors can produce all the temperaments you're interested in, there are more efficient ways of doing it. Start with the wedgie for the temperament, and you can filter for unison vectors that are consistent with the temperament. Then you only need to take subsets of those until you get one that's linearly independent.
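
The consistency check itself is a one-liner if you test against the mapping rather than the wedgie (they come to the same thing for commas). A sketch, written against modern NumPy rather than the old Numeric module, with the rank test doing the linear independence part:

import numpy as np

def tempered_out(mapping, monzo):
    # a comma is consistent with the temperament iff both rows of the
    # period/generator mapping send it to zero steps
    return all(np.dot(row, monzo) == 0 for row in mapping)

def independent(monzos):
    return np.linalg.matrix_rank(np.array(monzos)) == len(monzos)

# e.g. 2048/2025 = [11, -4, -2, 0, 0, 0> is tempered out by diaschismic
diaschismic = [[2, 3, 5, 7, 9, 10], [0, 1, -2, -8, -12, -15]]
print(tempered_out(diaschismic, [11, -4, -2, 0, 0, 0]))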

> As for why 20 is too small, as we go up in prime limit, the number of reasonable systems increases; it more and more becomes the case that the lists will be completely different if we use slightly different complexity or error measures. I think we should take in a bigger haul at least to start out with.

Good algorithms will be more likely to agree on the best temperaments than the mediocre ones. The best thing's to restrict the range of complexities and errors so the same temperaments will fall through. I can also run the ET and unison vector searches with the same parameters. Oh, and if two sets of results are held in RAM they can be compared automatically, so larger sets can be used.
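
The comparison step can be as simple as keying each result on a sign-normalized wedgie and intersecting sets. A sketch, where .wedgie stands for whatever attribute the results actually carry (made up here):

def canonical(wedgie):
    # flip the sign so the same temperament always gets the same key
    for x in wedgie:
        if x:
            return tuple(wedgie) if x > 0 else tuple(-y for y in wedgie)
    return tuple(wedgie)

def compare(results_a, results_b):
    a = {canonical(r.wedgie) for r in results_a}
    b = {canonical(r.wedgie) for r in results_b}
    return a & b, a - b, b - a   # shared, only in A, only in B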

Graham

🔗Gene Ward Smith <genewardsmith@juno.com>

1/25/2003 3:30:29 AM

--- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:
> If it's only a question of checking that the unison vectors can produce
> all the temperaments you're interested, there are more efficient ways of
> doing it. Start with the wedgie for the temperament, and you can filter
> for unison vectors that are consistent with the temperament. Then you
> only need to take subsets of those until you get one that's linearly
> independent.

The way I find a kernel basis is to find the invariant commas of the wedgie; that's four commas missing a prime in the 7-limit case, and 10 commas missing two primes in the 11-limit case. Then I LLL reduce that, which gives me a kernel basis, which I can then TM reduce if need be. The trouble is, I don't have a list of all interesting wedgies.
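
For what it's worth, if all you want is some kernel basis and don't care about the LLL and TM reduction, the rational nullspace of the mapping cleared of denominators will do. A sketch using sympy rather than my Maple code -- this is not the invariant-comma route, just a quick way to get a basis to reduce afterwards:

from math import lcm
from sympy import Matrix

def kernel_basis(mapping):
    # rational nullspace of the period/generator mapping, scaled up to
    # integer monzos; the result is a kernel basis but not a reduced one
    basis = []
    for col in Matrix(mapping).nullspace():
        d = lcm(*[x.q for x in col])
        basis.append([int(x * d) for x in col])
    return basis

# four kernel monzos for hemififth, for example
print(kernel_basis([[1, 1, -5, -1, 2, 4], [0, 2, 25, 13, 5, -1]]))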

> > As for why 20 is too small, as we go up in prime limit, the number of reasonable systems increases; it more and more becomes the case that the lists will be completely different if we use slightly different complexity or error measures. I think we should take in a bigger haul at least to start out with.

> Good algorithms will be more likely to agree on the best temperaments
> than the mediocre ones. The best thing's to restrict the range of
> complexities and errors so the same temperaments will fall through.

I don't think the same temperaments will always fall through for you quite this neatly, because the badness of the worst temperaments on the list is no longer so much higher than badness of the best ones.

It sounds like I am too slow, and you have bugs which need to be worked out.

🔗Graham Breed <graham@microtonal.co.uk>

1/25/2003 5:16:56 AM

Gene Ward Smith wrote:

> The way I find a kernel basis is to find the invariant commas of the wedgie; that's four commas missing a prime in the 7-limit case, and 10 commas missing two primes in the 11-limit case. Then I LLL reduce that, which gives me a kernel basis, which I can then TM reduce if need be. The trouble is, I don't have a list of all interesting wedgies.

I wasn't talking about that. If you have a set of candidate unison vectors, you can check that it's complete without doing the full search.

> I don't think the same temperaments will always fall through for you quite this neatly, because the badness of the worst temperaments on the list is no longer so much higher than badness of the best ones.

Yes, that's why I always take the best ones. If the filter's strict enough, the badness doesn't matter.

> It sounds like I am too slow, and you have bugs which need to be worked out.

I've sorted out the bug. I was getting temperaments with the same period and octave-equivalent mapping as mystery, but silly period mappings. But I was rejecting the real mystery because it looked the same. So now I have to reject fewer temperaments, and it's taking 82 minutes, but seems to be working correctly.

Graham

🔗Graham Breed <graham@microtonal.co.uk>

1/25/2003 5:56:07 AM

Gene Ward Smith wrote:

> What kind of complexity? I didn't find anything nearly as high as 100 for unweighted complexity.

I found 103089 such in the 13-limit search, some of them probably duplicates.

Graham

🔗Graham Breed <graham@microtonal.co.uk>

1/25/2003 1:45:47 PM

The optimized script for finding temperaments from unison vectors is at

http://microtonal.co.uk/selectNumeric.py

You need the latest version of Python, the Numeric extensions and my temperament library:

http://microtonal.co.uk/temper.py

That's been updated to search on all versions of equal temperaments where we allow inconsistency. So I ran the 13-limit search on both. The unison vectors

http://microtonal.co.uk/gene.limit13.result

take about 70 minutes. Searching through pairs of the simplest 100 ETs with a consistency cutoff of 0.8 scale steps, using

http://microtonal.co.uk/selectET.py

takes about 50 seconds. Those results

http://microtonal.co.uk/selectet.result

are the same as for the unison vector search!
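
In case the 0.8 figure is opaque: the idea, schematically rather than the exact selectET.py code, is that a prime may be carried to any number of steps lying within 0.8 steps of its just size, so an inconsistent ET can contribute more than one val to the search:

from math import log2, floor
from itertools import product

PRIMES = [2, 3, 5, 7, 11, 13]

def candidate_vals(n, cutoff=0.8):
    # every mapping of the primes onto steps of n-equal in which each
    # prime lands within `cutoff` steps of its just size; with a cutoff
    # above 0.5 an ET can return several vals
    choices = []
    for p in PRIMES:
        exact = n * log2(p)
        choices.append([k for k in (floor(exact), floor(exact) + 1)
                        if abs(exact - k) < cutoff])
    return [list(v) for v in product(*choices)]

# 46-equal gets two 13-limit vals this way (13 can go to 170 or 171 steps)
for val in candidate_vals(46):
    print(val)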

I'll look at getting the CGI updated to use the new inconsistency search.

Graham

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com>

1/25/2003 3:46:53 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith
<genewardsmith@j...>" <genewardsmith@j...> wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:
> > Gene Ward Smith wrote:
>
> > > Here they are again, in the form I would have given them.
> >
> > Do you have a script that could have given them, as you do for
>>the 11-limit?
>
> I'm running on Maple, which is more powerful but much slower than
>Python, so it's getting to the point where I should really use
>something else, or else get you to do it.

i have access to a 2.4 GHz machine for running Matlab overnight or
for however long it takes. i'd be happy to try whatever algorithms
you wish to spell out.

🔗Carl Lumma <clumma@yahoo.com>

1/25/2003 5:54:43 PM

>You need the latest version of Python, the Numeric extensions

numpy or Numarray?

-C.

> http://microtonal.co.uk/gene.limit13.result
> http://microtonal.co.uk/selectet.result

Awesome!

🔗Gene Ward Smith <genewardsmith@juno.com>

1/25/2003 6:38:59 PM

--- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:
> The optimized script for finding temperaments from unison vectors is at
>
> http://microtonal.co.uk/selectNumeric.py
>
> You need the latest version of Python, the Numeric extensions and my
> temperament library:
>
> http://microtonal.co.uk/temper.py

Last time I tried this I couldn't get it to work. I'll see about it again.

🔗Gene Ward Smith <genewardsmith@juno.com>

1/25/2003 6:49:16 PM

--- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:

> takes about 50 seconds. Those results
>
> http://microtonal.co.uk/selectet.result
>
> are the same as for the unison vector search!

Not that surprising, but the comma search seems more likely to catch oddball systems. I suppose I could try to identify these.

🔗Carl Lumma <clumma@yahoo.com>

1/25/2003 6:50:04 PM

>i have access to a 2.4 GHz machine for running Matlab overnight or
>for however long it takes. i'd be happy to try whatever algorithms
>you wish to spell out.

One page claims Matlab is implemented in C. I seem to think Maple
is implemented in Maple, but I can't find that in the manual now.
I'd be surprised if either of them were faster than python, but
I could very well be wrong.

-Carl

🔗Gene Ward Smith <genewardsmith@juno.com>

1/25/2003 7:47:43 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>" <clumma@y...> wrote:
> >i have access to a 2.4 GHz machine for running Matlab overnight or
> >for however long it takes. i'd be happy to try whatever algorithms
> >you wish to spell out.
>
> One page claims Matlab is implemented in C. I seem to think Maple
> is implemented in Maple, but I can't find that in the manual now.
> I'd be surprised if either of them were faster than python, but
> I could very well be wrong.

Maple is in C also, but it isn't designed for speed. For instance, the float data type has a precision defined by "Digits", and the int data type allows for ints as big as the machine can handle.

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com>

1/25/2003 11:24:43 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith
<genewardsmith@j...>" <genewardsmith@j...> wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:
>
> > takes about 50 seconds. Those results
> >
> > http://microtonal.co.uk/selectet.result
> >
> > are the same as for the unison vector search!
>
> Not that surprising, but the comma search seems more likely to
>catch oddball systems. I suppose I could try to identify these.

by duality, isn't the val search as likely to catch oddball systems,
though "differently odd", as the comma search?

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com>

1/25/2003 11:28:06 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith
<genewardsmith@j...>" <genewardsmith@j...> wrote:
> --- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>" <clumma@y...> wrote:
> > > i have access to a 2.4 GHz machine for running Matlab overnight or
> > > for however long it takes. i'd be happy to try whatever algorithms
> > > you wish to spell out.
> >
> > One page claims Matlab is implemented in C. I seem to think Maple
> > is implemented in Maple, but I can't find that in the manual now.
> > I'd be surprised if either of them were faster than python, but
> > I could very well be wrong.
>
> Maple is in C also, but it isn't designed for speed. For instance,
>the float data type has a precision defined by "Digits", and the int
>data type allows for ints as big as the machine can handle.

well then will somebody *please* *please* . . .

pretty please :) :) :)

calculate the numerators and denominators here which came out in
scientific notation, making it impossible for yahoo to sort by
denominator:

/tuning/database?
method=reportRows&tbl=10&sortBy=4
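
what i'm after is just this kind of exact arithmetic, run over every comma in the table -- a sketch, with the prime list and the example comma only illustrative:

from fractions import Fraction

PRIMES = [2, 3, 5, 7, 11, 13]

def monzo_to_ratio(monzo):
    # exact numerator and denominator from the prime exponents, using
    # python's unbounded integers -- no scientific notation
    r = Fraction(1)
    for p, e in zip(PRIMES, monzo):
        r *= Fraction(p) ** e
    return r.numerator, r.denominator

# the schisma, for instance: [-15, 8, 1, 0, 0, 0> -> (32805, 32768)
print(monzo_to_ratio([-15, 8, 1, 0, 0, 0]))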

🔗Gene Ward Smith <genewardsmith@juno.com>

1/26/2003 12:06:36 AM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus <wallyesterpaulrus@y...>" <wallyesterpaulrus@y...> wrote:

> by duality, isn't the val search as likely to catch oddball systems,
> though "differently odd", as the comma search?

The comma search is likely to catch low-complexity systems missed by the val search; one idea is to do two searches, one of which is a comma search using only relatively large commas. The comma search might miss systems with one or more dud commas, meaning equivalences which don't actually contribute much of practical use.
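
The large-comma filter is easy enough to state -- something along these lines, where the 10-cent floor is only a placeholder:

from math import log2

PRIMES = [2, 3, 5, 7, 11, 13]

def cents(monzo):
    return 1200 * sum(e * log2(p) for e, p in zip(monzo, PRIMES))

def large_commas(commas, floor_cents=10.0):
    # keep only the relatively large commas
    return [m for m in commas if abs(cents(m)) >= floor_cents]

# 2048/2025 (about 19.6 cents) survives, the schisma (about 2 cents) doesn't
print(large_commas([[11, -4, -2, 0, 0, 0], [-15, 8, 1, 0, 0, 0]]))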

🔗Graham Breed <graham@microtonal.co.uk>

1/26/2003 1:41:16 AM

Carl Lumma wrote:

> numpy or Numarray?

The one you get from "ppm install Numeric" in ActivePython. I think that's numpy.

Graham

🔗Graham Breed <graham@microtonal.co.uk>

1/26/2003 2:41:22 AM

wallyesterpaulrus wrote:

> calculate the numerators and denominators here which came out in
> scientific notation, making it impossible for yahoo to sort by
> denominator:
>
> /tuning/database?
> method=reportRows&tbl=10&sortBy=4

http://microtonal.co.uk/paul.table.results

Graham

🔗Graham Breed <graham@microtonal.co.uk>

1/26/2003 2:37:35 PM

Gene Ward Smith wrote:
> --- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>" <clumma@y...> wrote:
>
>>>i have access to a 2.4 GHz machine for running Matlab overnight or
>>>for however long it takes. i'd be happy to try whatever algorithms
>>>you wish to spell out.
>>
>>One page claims Matlab is implemented in C. I seem to think Maple
>>is implemented in Maple, but I can't find that in the manual now.
>>I'd be surprised if either of them were faster than python, but
>>I could very well be wrong.
>
> Maple is in C also, but it isn't designed for speed. For instance, the float data type has a precision defined by "Digits", and the int data type allows for ints as big as the machine can handle.

This has come up on comp.lang.python. People who've used both say that Numeric Python is slightly faster than Matlab, although Matlab's matrix operations are faster. Also, if there's a native library for Matlab but not Python then Matlab's much faster.

There are also people using Python to drive Matlab.

Probably Maple is similar. As Python doesn't come with arbitrary precision floating point, Maple will be faster when you need it. Python does have arbitrary sized integers, and they now interact seamlessly with the normal integers. So it looks comparable to Maple for what we need. However, the Numeric extensions use a Fortran library that only works with floating point -- there aren't any routines to efficiently find the adjoint of an integer matrix. There's also nothing for wedge products.
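
Not that a wedge product needs the Fortran library -- in plain Python it's just the 2x2 minors. A sketch:

from itertools import combinations

def wedge(v1, v2):
    # wedge product of two vals: the list of 2x2 minors, which is the
    # 15-entry form the 13-limit wedgies in this thread are written in
    return [v1[i] * v2[j] - v1[j] * v2[i]
            for i, j in combinations(range(len(v1)), 2)]

# the patent 13-limit vals for 29 and 58 give back Mystery's wedgie
print(wedge([29, 46, 67, 81, 100, 107], [58, 92, 135, 163, 201, 215]))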

Graham

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com>

1/26/2003 10:02:36 PM

--- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:
> wallyesterpaulrus wrote:
>
> > calculate the numerators and denominators here which came out in
> > scientific notation, making it impossible for yahoo to sort by
> > denominator:
> >
> > /tuning/database?
> > method=reportRows&tbl=10&sortBy=4
>
> http://microtonal.co.uk/paul.table.results
>
> Graham

thanks dude!

try now:

/tuning/database?
method=reportRows&tbl=10&sortBy=4

🔗Carl Lumma <clumma@yahoo.com>

1/26/2003 11:28:46 PM

> try now:
>
> /tuning/database?
> method=reportRows&tbl=10&sortBy=4

http://tinyurl.com/4xuu

**Now we're talking**. Sorting by the denominator
of the comma really works better than anything I
tried on Dave's spreadsheet.

Aside from the order, what bounds were used to
select temperaments for this list?

-Carl

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com>

1/26/2003 11:44:39 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> > try now:
> >
> > /tuning/database?
> > method=reportRows&tbl=10&sortBy=4
>
> http://tinyurl.com/4xuu
>
> **Now we're talking**. Sorting by the denominator
> of the comma really works better than anything I
> tried on Dave's spreadsheet.

better than any other complexity measure! cool, so you must *really*
like the heuristic for complexity . . .

> Aside from the order, what bounds were used to
> select temperaments for this list?

my best recollection, off the top of my head:

log-flat badness < 3500, rms error < 50 cents, geometric complexity <
104-151 (doesn't matter where you draw the line in this range).
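
i.e. something of this shape, with the badness, error and complexity taken as already computed (the field names are made up):

def keep(t, badness_cap=3500, error_cap=50.0, complexity_cap=120.0):
    # log-flat badness, rms error in cents, geometric complexity;
    # 120 is just a point inside the 104-151 range
    return (t.badness < badness_cap and t.error < error_cap
            and t.complexity < complexity_cap)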

🔗Carl Lumma <clumma@yahoo.com>

1/27/2003 1:17:34 AM

> better than any other complexity measure! cool, so you must
> *really* like the heuristic for complexity . . .

Apparently so.

> my best recollection, off the top of my head:
>
> log-flat badness < 3500, rms error < 50 cents, geometric
> complexity < 104-151 (doesn't matter where you draw the
> line in this range).

Ok, but two small nits:

() Is that geometric complexity as Gene defines it?

() Being that badness is just a combination of error and
complexity, why is it needed / how can it change the
bounds on the list?

-Carl

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com>

1/27/2003 1:37:43 AM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> > better than any other complexity measure! cool, so you must
> > *really* like the heuristic for complexity . . .
>
> Apparently so.
>
> > my best recollection, off the top of my head:
> >
> > log-flat badness < 3500, rms error < 50 cents, geometric
> > complexity < 104-151 (doesn't matter where you draw the
> > line in this range).
>
> Ok, but two small nits:
>
> () Is that geometric complexity as Gene defines it?

yes.

> () Being that badness is just a combination of error and
> complexity, why is it needed

because otherwise you'd have a huge number of temperaments, and not
the same number in each complexity range. for example, imagine how
many possible temperaments there must be with rms error < 50 cents
and complexity between, say, 74 and 104. some huge number.

🔗Carl Lumma <clumma@yahoo.com>

1/27/2003 2:55:35 AM

>because otherwise you'd have a huge number of temperaments, and not
>the same number in each complexity range. for example, imagine how
>many possible temperaments there must be with rms error < 50 cents
>and complexity between, say, 74 and 104. some huge number.

Right on. -C.