
Log-flat badness for TOP tuned ETs

🔗Kalle Aho <kalleaho@mappi.helsinki.fi>

8/3/2006 8:53:52 AM

Hello,

How would you compute log-flat badness for TOP tuned equal temperaments?

Kalle

🔗Carl Lumma <ekin@lumma.org>

8/3/2006 10:35:12 AM

At 08:53 AM 8/3/2006, you wrote:
>Hello,
>
>How would you compute log-flat badness for TOP tuned equal
>temperaments?
>
>Kalle

Logflat badness in the p-limit is...

error^(pi(p)-1) / complexity

For error I'd use TOP damage, and for complexity I'd simply use
the number of notes/octave of the ET.

But different mappings of the ET to JI will give different
results (for "inconsistent" ETs). To find the best mappings
regardless of consistency, I'd use this suggestion of Paul's...

>Start with the JI val. Multiply it by every possible positive real
>number (in practice you just need to find a small enough step size
>to increment this constant through) and round the result to the
>nearest integers. Use the TOP-et algorithm on all of these, and
>calculate their TOP damage. Then group the results by the octave
>part of the val and pick out the lowest-damage one in each group.
>
>I think any val that doesn't arise through this process can't be
>the minimum-TOP-damage one (when TOP-tuned) for a given octave
>cardinality.

Paul's Middle Path paper tells how to calculate TOP damage,
I believe. The "TOP-et algorithm" he refers to looks like this
in Scheme...

;; Returns the TOP-tuned patent val in cents for the given et at
;; the given prime limit.
;; Algorithm due to Paul Erlich.

(define top-et
  (lambda (et lim)
    (let* ((patent-val (patent-val et lim))
           (log2-primes (map log2 (reverse (primes lim))))
           (et-exact-val (map / standard-val log2-primes))
           (large (apply max et-exact-val))
           (small (apply min et-exact-val))
           (mean (/ (+ large small) 2))
           (top-val (map (lambda (x) (/ x mean))
                         standard-val))
           (cents-top-val (map (lambda (x) (* x 1200))
                               top-val)))
      cents-top-val)))

...but instead of "patent-val", you'd plug in the vals you found
by multiplying by JI and rounding per Paul's suggestion.

Pi looks like this in Scheme...

;; The number of primes not exceeding n.
;; Due to Minac (unpublished, proof in Ribenboim 1995, p. 181).

(define pi
  (lambda (n)
    (letrec ((loop
              (lambda (j sum)
                (let ((j-1! (fact (- j 1))))
                  (if (> j n)
                      sum
                      (+ (floor (- (/ (+ j-1! 1) j)
                                   (floor (/ j-1! j))))
                         (loop (+ j 1) sum)))))))
      (loop 2 0))))
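For comparison, here is a Python transliteration of the same function (an addition of mine, not part of the original post; `math.factorial` plays the role of the `fact` helper the Scheme assumes):

```python
from math import factorial

def prime_pi(n):
    """pi(n): the number of primes <= n, via Wilson's theorem --
    (j-1)! is congruent to -1 mod j exactly when j is prime."""
    total = 0
    for j in range(2, n + 1):
        f = factorial(j - 1)
        # This term is 1 when j is prime and 0 otherwise.
        total += (f + 1) // j - f // j
    return total
```

So prime_pi(5) is 3 and prime_pi(7) is 4, the values that feed the badness exponents discussed later in the thread.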

-Carl

🔗Graham Breed <gbreed@gmail.com>

8/4/2006 3:57:48 AM

Carl Lumma wrote:
> At 08:53 AM 8/3/2006, you wrote:
>>Hello,
>>
>>How would you compute log-flat badness for TOP tuned equal
>>temperaments?
>
> Logflat badness in the p-limit is...
>
> error^(pi(p)-1) / complexity
>
> For error I'd use TOP damage, and for complexity I'd simply use
> the number of notes/octave of the ET.

Surely not -- it means the more complex a temperament the lower its badness.

> But different mappings of the ET to JI will give different
> results (for "inconsistent" ETs). To find the best mappings
> regardless of consistency, I'd use this suggestion of Paul's...

You would?

>>Start with the JI val. Multiply it by every possible positive real
>>number (in practice you just need to find a small enough step size
>>to increment this constant through) and round the result to the
>>nearest integers. Use the TOP-et algorithm on all of these, and
>>calculate their TOP damage. Then group the results by the octave
>>part of the val and pick out the lowest-damage one in each group.
>>
>>I think any val that doesn't arise through this process can't be
>>the minimum-TOP-damage one (when TOP-tuned) for a given octave
>>cardinality.

He's right, with a caveat that there isn't always a single optimal mapping for a given number of notes to the octave. For example, these both have the same TOP damage:

<23, 36, 53, 64, 79, 85, 94], <23, 36, 53, 64, 79, 85, 93]

We know that the TOP damage of a temperament is the same as the worst Tenney-weighted error of a prime interval when you tune optimally. So the damage can only increase when you take an approximation other than the optimal one. But it may be the same because that prime has more than one mapping with an error lower than that in some other prime.
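Graham's example is easy to check numerically. A Python sketch (mine, not part of the original exchange) that weights each val entry by 1/log2(p) and applies the (large - small)/(large + small) TOP-error formula described below:

```python
from math import log2

PRIMES = [2, 3, 5, 7, 11, 13, 17]   # 17-limit, matching the vals above

def top_error_cents(val, primes=PRIMES):
    """TOP damage of an equal-temperament val, in cents."""
    weighted = [v / log2(p) for v, p in zip(val, primes)]
    large, small = max(weighted), min(weighted)
    return 1200 * (large - small) / (large + small)

a = top_error_cents([23, 36, 53, 64, 79, 85, 94])
b = top_error_cents([23, 36, 53, 64, 79, 85, 93])
# Changing 17's mapping from 94 to 93 leaves the largest and smallest
# weighted entries (primes 2 and 3) untouched, so a == b, about 7.5 cents.
```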

> Paul's Middle Path paper tells how to calculate TOP damage,
> I believe. The "TOP-et algorithm" he refers to looks like this
> in Scheme...
>
> ;; Returns the TOP-tuned patent val in cents for the given et at
> ;; the given prime limit.
> ;; Algorithm due to Paul Erlich.
>
> (define top-et
>   (lambda (et lim)
>     (let* ((patent-val (patent-val et lim))
>            (log2-primes (map log2 (reverse (primes lim))))
>            (et-exact-val (map / standard-val log2-primes))
>            (large (apply max et-exact-val))
>            (small (apply min et-exact-val))
>            (mean (/ (+ large small) 2))
>            (top-val (map (lambda (x) (/ x mean))
>                          standard-val))
>            (cents-top-val (map (lambda (x) (* x 1200))
>                                top-val)))
>       cents-top-val)))
>
> ...but instead of "patent-val", you'd plug in the vals you found
> by multiplying by JI and rounding per Paul's suggestion.

You can also calculate the TOP error directly, which I think means changing

(mean (/ (+ large small) 2))

to

(top-error (/ (- large small) (+ large small)))

and returning top-error*1200 for the standard units (that's (* top-error 1200))

It'll always be close to the Kees error

(error-range (- large small))

You can get the optimal temperament mapping by rounding the mapping for each prime both up and down to the nearest integers, and finding all combinations. Moving any prime further from perfect will only increase the error range so this is guaranteed to find the optimal error. It's probably going to give the optimal TOP mapping as well.

The problem with this method is that the complexity of the calculation grows exponentially with the number of odd primes. If that's a problem you have to bail out when you know the error's already too large. Only looking at mappings with a smaller error than the round-to-nearest one is a good way of reducing the complexity.
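That search can be sketched as follows (Python, illustrative; it uses the 23-ET 17-limit example from earlier in the message). It is guaranteed to find the optimal error, but not necessarily every val achieving it: the tied val ending in 93 is neither the floor nor the ceiling of 23*log2(17), which is about 94.01, so it never appears among the combinations.

```python
from itertools import product
from math import ceil, floor, log2

def best_mappings(n, primes):
    """Try the floor and ceiling of n*log2(p) for every prime, and keep
    the combinations with minimal TOP error (in cents)."""
    choices = [sorted({floor(n * log2(p)), ceil(n * log2(p))})
               for p in primes]
    best, winners = None, []
    for val in product(*choices):
        weighted = [v / log2(p) for v, p in zip(val, primes)]
        large, small = max(weighted), min(weighted)
        err = 1200 * (large - small) / (large + small)
        if best is None or err < best - 1e-9:
            best, winners = err, [val]
        elif abs(err - best) < 1e-9:
            winners.append(val)
    return best, winners

err, vals = best_mappings(23, [2, 3, 5, 7, 11, 13, 17])
# err is about 7.52 cents; vals contains <23, 36, 53, 64, 79, 85, 94].
```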

> Pi looks like this in Scheme...
> > ;; The number of primes less than n.

In context, why would you ever need this?

Graham

🔗Kalle Aho <kalleaho@mappi.helsinki.fi>

8/4/2006 8:17:53 AM

Hi Carl and thanks for your reply!

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> At 08:53 AM 8/3/2006, you wrote:
> >Hello,
> >
> >How would you compute log-flat badness for TOP tuned equal
> >temperaments?
> >
> >Kalle
>
> Logflat badness in the p-limit is...
>
> error^(pi(p)-1) / complexity

Really? I thought it was something like error*complexity^something.

> For error I'd use TOP damage, and for complexity I'd simply use
> the number of notes/octave of the ET.

What about nonoctave ETs?

Wouldn't it be more general if the complexity was something simply
inversely proportional to the size of the ET step?

> But different mappings of the ET to JI will give different
> results (for "inconsistent" ETs). To find the best mappings
> regardless of consistency, I'd use this suggestion of Paul's...
>
> >Start with the JI val. Multiply it by every possible positive real
> >number (in practice you just need to find a small enough step size
> >to increment this constant through) and round the result to the
> >nearest integers. Use the TOP-et algorithm on all of these, and
> >calculate their TOP damage. Then group the results by the octave
> >part of the val and pick out the lowest-damage one in each group.
> >
> >I think any val that doesn't arise through this process can't be
> >the minimum-TOP-damage one (when TOP-tuned) for a given octave
> >cardinality.
>
> Paul's Middle Path paper tells how to calculate TOP damage,
> I believe. The "TOP-et algorithm" he refers to looks like this
> in Scheme...

I know how to do that but thank you!

Kalle

🔗Carl Lumma <ekin@lumma.org>

8/4/2006 9:34:58 AM

>>>Hello,
>>>
>>>How would you compute log-flat badness for TOP tuned equal
>>>temperaments?
>>
>> Logflat badness in the p-limit is...
>>
>> error^(pi(p)-1) / complexity
>>
>> For error I'd use TOP damage, and for complexity I'd simply use
>> the number of notes/octave of the ET.
>
>Surely not -- it means the more complex a temperament the lower
>its badness.

Oh sorry, that was supposed to be *, not /.

>> But different mappings of the ET to JI will give different
>> results (for "inconsistent" ETs). To find the best mappings
>> regardless of consistency, I'd use this suggestion of Paul's...
>
>You would?

Yes, it's on my list.

>>>Start with the JI val. Multiply it by every possible positive real
>>>number (in practice you just need to find a small enough step size
>>>to increment this constant through) and round the result to the
>>>nearest integers. Use the TOP-et algorithm on all of these, and
>>>calculate their TOP damage. Then group the results by the octave
>>>part of the val and pick out the lowest-damage one in each group.
>>>
>>>I think any val that doesn't arise through this process can't be
>>>the minimum-TOP-damage one (when TOP-tuned) for a given octave
>>>cardinality.
>
>He's right, with a caveat that there isn't always a single optimal
>mapping for a given number of notes to the octave.

I remember that.

>For example, these
>both have the same TOP damage:
>
><23, 36, 53, 64, 79, 85, 94], <23, 36, 53, 64, 79, 85, 93]
>
>We know that the TOP damage of a temperament is the same as the worst
>Tenney-weighted error of a prime interval when you tune optimally. So
>the damage can only increase when you take an approximation other than
>the optimal one. But it may be the same because that prime has more
>than one mapping with an error lower than that in some other prime.

Didn't Paul say to minimize the 2nd-biggest error, etc. in such
cases? Like in the above, isn't the first mapping clearly better?

>> Paul's Middle Path paper tells how to calculate TOP damage,
>> I believe. The "TOP-et algorithm" he refers to looks like this
>> in Scheme...
>>
>> ;; Returns the TOP-tuned patent val in cents for the given et at
>> ;; the given prime limit.
>> ;; Algorithm due to Paul Erlich.
>>
>> (define top-et
>>   (lambda (et lim)
>>     (let* ((patent-val (patent-val et lim))
>>            (log2-primes (map log2 (reverse (primes lim))))
>>            (et-exact-val (map / patent-val log2-primes))
>>            (large (apply max et-exact-val))
>>            (small (apply min et-exact-val))
>>            (mean (/ (+ large small) 2))
>>            (top-val (map (lambda (x) (/ x mean))
>>                          patent-val))
>>            (cents-top-val (map (lambda (x) (* x 1200))
>>                                top-val)))
>>       cents-top-val)))
>>
>> ...but instead of "patent-val", you'd plug in the vals you found
>> by multiplying by JI and rounding per Paul's suggestion.

Crap, I just noticed I hadn't changed some instances of
"standard-val" to "patent-val" in that. Fixed above.

>You can also calculate the TOP error directly, which I think means
>changing
> (mean (/ (+ large small) 2))
>to
> (top-error (/ (- large small) (+ large small)))
>and returning top-error*1200 for the standard units (that's
>(* top-error 1200))

That's your max-min formula, then? I guess I should review your
proof of why it works.

>It'll always be close to the Kees error
>
> (error-range (- large small))

In this case we're talking ETs, so the Kees tuning will
be the ET itself, no?

>> Pi looks like this in Scheme...
>>
>> ;; The number of primes less than n.
>
>In context, why would you ever need this?

It's in the logflat-badness calculation.

-Carl

🔗Carl Lumma <ekin@lumma.org>

8/4/2006 9:42:57 AM

>> Logflat badness in the p-limit is...
>>
>> error^(pi(p)-1) / complexity
>
>Really? I thought it was something like error*complexity^something.

As I just wrote to Graham, I meant * instead of /, but I think
you're right, the ^something goes on complexity, and it's
the number of primes - 1.

>> For error I'd use TOP damage, and for complexity I'd simply use
>> the number of notes/octave of the ET.
>
>What about nonoctave ETs?

There will still be a mapping to 2 (or you can find it quickly if
there isn't), and it's as good as any of the primes to define a
simple complexity measure. I suppose it's ideal to take the entire
mapping into account.

>Wouldn't it be more general if the complexity was something simply
>inversely proportional to the size of the ET step?

The mapping to 2 will be proportional to the size of the step.

-Carl

🔗Carl Lumma <ekin@lumma.org>

8/4/2006 9:48:26 AM

At 09:42 AM 8/4/2006, you wrote:
>>> Logflat badness in the p-limit is...
>>>
>>> error^(pi(p)-1) / complexity
>>
>>Really? I thought it was something like error*complexity^something.
>
>As I just wrote to Graham, I meant * instead of /, but I think
>you're right, the ^something goes on complexity, and it's
>the number of primes - 1.

Actually that's for linear temperaments, which used to be rank 1
I think. For ETs it would have been primes - 0. But now that
linear temperaments are rank 2... maybe Gene can clear this up.

-Carl

🔗Graham Breed <gbreed@gmail.com>

8/4/2006 1:34:46 PM

Carl Lumma wrote:

>> For example, these
>> both have the same TOP damage:
>>
>> <23, 36, 53, 64, 79, 85, 94], <23, 36, 53, 64, 79, 85, 93]
>>
>> We know that the TOP damage of a temperament is the same as the worst
>> Tenney-weighted error of a prime interval when you tune optimally. So
>> the damage can only increase when you take an approximation other than
>> the optimal one. But it may be the same because that prime has more
>> than one mapping with an error lower than that in some other prime.
>
> Didn't Paul say to minimize the 2nd-biggest error, etc. in such
> cases? Like in the above, isn't the first mapping clearly better?

I don't know what Paul said, but I certainly prefer to avoid complicating definitions. In this case, the second mapping looks better to me. It has a lower TOP-RMS error. Even if it isn't better by your rule there's nothing clear about it.

>>You can also calculate the TOP error directly, which I think means
>>changing
>> (mean (/ (+ large small) 2))
>>to
>> (top-error (/ (- large small) (+ large small)))
>>and returning top-error*1200 for the standard units (that's
>>(* top-error 1200))
>
> That's your max-min formula, then? I guess I should review your
> proof of why it works.

How is it different from Paul's formula? You can at least substitute the TOP scale stretch into the normal TOP formula.

>>It'll always be close to the Kees error
>>
>> (error-range (- large small))
>
> In this case we're talking ETs, so the Kees tuning will
> be the ET itself, no?

In so far as there's a Kees tuning, it has pure octaves, so there's no choice for an ET tuning.

>>>Pi looks like this in Scheme...
>>>
>>>;; The number of primes less than n.
>>
>>In context, why would you ever need this?
>
> It's in the logflat-badness calculation.

There's also a loop over the primes, so how can you do the calculation without knowing the size of the prime limit?

Graham

🔗Graham Breed <gbreed@gmail.com>

8/4/2006 1:34:55 PM

Kalle Aho wrote:

> What about nonoctave ETs?
>
> Wouldn't it be more general if the complexity was something simply
> inversely proportional to the size of the ET step?

I do mention this in my errors and complexities thing

http://x31eq.com/primerr.pdf

The point is, there isn't an obvious way of comparing complexities of temperaments with different equivalence intervals. You have to think about what it is you're measuring the complexity of. For most people a non-octave scale will be inherently complex because of having to deal with the octaves not being there. But scaling it to be proportional to the size of an acoustic octave is certainly an option. It means the complexity will depend slightly on the tuning.

You can also use wedgie complexity, which happens to be close to the number of notes to an octave for an equal temperament. It doesn't stop working if you take away the octaves.

Graham

🔗Kalle Aho <kalleaho@mappi.helsinki.fi>

8/5/2006 2:13:43 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> Kalle Aho wrote:
>
> > What about nonoctave ETs?
> >
> > Wouldn't it be more general if the complexity was something simply
> > inversely proportional to the size of the ET step?
>
> I do mention this in my errors and complexities thing
>
> http://x31eq.com/primerr.pdf
>
> The point is, there isn't an obvious way of comparing complexities of
> temperaments with different equivalence intervals. You have to think
> about what it is you're measuring the complexity of. For most people a
> non-octave scale will be inherently complex because of having to deal
> with the octaves not being there. But scaling it to be proportional to
> the size of an acoustic octave is certainly an option. It means the
> complexity will depend slightly on the tuning.
>
> You can also use wedgie complexity, which happens to be close to the
> number of notes to an octave for an equal temperament. It doesn't stop
> working if you take away the octaves.

But is there something wrong in using e.g. the reciprocal of the step
size as a complexity measure?

If not, my question then becomes:

How to change the badness measure

max Tenney-weighted error / step size

to make it log-flat?

Kalle

🔗Graham Breed <gbreed@gmail.com>

8/5/2006 2:43:32 AM

Kalle Aho wrote:

> But is there something wrong in using e.g. the reciprocal of the step
> size as a complexity measure?

If you're happy with it I don't think there are any snakes lurking. It's just nice to have a complexity that doesn't depend on the tuning.

Graham

🔗Kalle Aho <kalleaho@mappi.helsinki.fi>

8/5/2006 6:46:48 AM

--- In tuning-math@yahoogroups.com, "Kalle Aho" <kalleaho@...> wrote:

> How to change the badness measure
>
> max tenney-weighted error/step size
>
> to make it log-flat?

I try to answer this myself with help from

http://tonalsoft.com/enc/b/badness.aspx

So is

max Tenney-weighted error / step size^(3/2)

correct for 5-limit?

🔗Carl Lumma <ekin@lumma.org>

8/5/2006 11:56:00 AM

Let me try to remember what I wrote the first time I replied to
this...

>>>For example, these
>>>both have the same TOP damage:
>>>
>>><23, 36, 53, 64, 79, 85, 94], <23, 36, 53, 64, 79, 85, 93]
>>>
>>>We know that the TOP damage of a temperament is the same as the worst
>>>Tenney-weighted error of a prime interval when you tune optimally. So
>>>the damage can only increase when you take an approximation other than
>>>the optimal one. But it may be the same because that prime has more
>>>than one mapping with an error lower than that in some other prime.
>>
>> Didn't Paul say to minimize the 2nd-biggest error, etc. in such
>> cases? Like in the above, isn't the first mapping clearly better?
>
>I don't know what Paul said, but I certainly prefer to avoid
>complicating definitions. In this case, the second mapping looks better
>to me. It has a lower TOP-RMS error. Even if it isn't better by your
>rule there's nothing clear about it.

Oh dear, I can barely understand TOP, let alone TOP-RMS. Do you
really have working variants of TOP for different types of averages?
What is the analogue of "the weighted error of any interval is not
greater than the TOP damage" for TOP-RMS?

>>>You can also calculate the TOP error directly, which I think means
>>>changing
>>> (mean (/ (+ large small) 2))
>>>to
>>> (top-error (/ (- large small) (+ large small)))
>>>and returning top-error*1200 for the standard units (that's
>>>(* top-error 1200))
>>
>> That's your max-min formula, then? I guess I should review your
>> proof of why it works.
>
>How is it different from Paul's formula?

I was wrong -- it doesn't look like Paul gives a general formula
for TOP damage in his Middle Path paper. So I don't know. But I
recall him being surprised by it.

>>>>Pi looks like this in Scheme...
>>>>
>>>>;; The number of primes less than n.
>>>
>>>In context, why would you ever need this?
>>
>> It's in the logflat-badness calculation.
>
>There's also a loop over the primes, so how can you do the calculation
>without knowing the size of the prime limit?

You can't. Kalle didn't specify, so I assume that's a free
parameter in his program.

-Carl

🔗Carl Lumma <ekin@lumma.org>

8/4/2006 2:17:36 PM

>I don't know what Paul said, but I certainly prefer to avoid
>complicating definitions. In this case, the second mapping looks better
>to me. It has a lower TOP-RMS error.

I can barely understand TOP, let alone TOP-RMS. Do you really
have good generalizations of TOP for various average error measures?
Like with TOP-RMS, what's the analogous statement to "the
weighted error of any interval does not exceed the TOP damage"?

>>>You can also calculate the TOP error directly, which I think means
>>>changing
>>> (mean (/ (+ large small) 2))
>>>to
>>> (top-error (/ (- large small) (+ large small)))
>>>and returning top-error*1200 for the standard units (that's
>>>(* top-error 1200))
>>
>> That's your max-min formula, then? I guess I should review your
>> proof of why it works.
>
>How is it different from Paul's formula?

I don't know... the Middle Path paper apparently contains the
formula only for the single-comma case.

>>>>Pi looks like this in Scheme...
>>>>
>>>>;; The number of primes less than n.
>>>
>>>In context, why would you ever need this?
>>
>> It's in the logflat-badness calculation.
>
>There's also a loop over the primes, so how can you do the calculation
>without knowing the size of the prime limit?

Kalle didn't specify a prime limit, so this is a free parameter
in the approach I suggested.

-C.

🔗Graham Breed <gbreed@gmail.com>

8/5/2006 2:32:56 PM

Carl Lumma wrote:

>>>> <23, 36, 53, 64, 79, 85, 94], <23, 36, 53, 64, 79, 85, 93]
>>>>
>>>> We know that the TOP damage of a temperament is the same as the worst
>>>> Tenney-weighted error of a prime interval when you tune optimally. So
>>>> the damage can only increase when you take an approximation other than
>>>> the optimal one. But it may be the same because that prime has more
>>>> than one mapping with an error lower than that in some other prime.
>>>
>>> Didn't Paul say to minimize the 2nd-biggest error, etc. in such
>>> cases? Like in the above, isn't the first mapping clearly better?
>>
>> I don't know what Paul said, but I certainly prefer to avoid
>> complicating definitions. In this case, the second mapping looks better
>> to me. It has a lower TOP-RMS error. Even if it isn't better by your
>> rule there's nothing clear about it.
>
> Oh dear, I can barely understand TOP, let alone TOP-RMS. Do you
> really have working variants of TOP for different types of averages?
> What is the analogue of "the weighted error of any interval is not
> greater than the TOP damage" for TOP-RMS?

I have one variant for a different type of average, the RMS, and variants for octave equivalent approximations. There's no analogy that I can prove to that TOP-max rule.

>>>>You can also calculate the TOP error directly, which I think means
>>>>changing
>>>> (mean (/ (+ large small) 2))
>>>>to
>>>> (top-error (/ (- large small) (+ large small)))
>>>>and returning top-error*1200 for the standard units (that's
>>>>(* top-error 1200))
>>>
>>> That's your max-min formula, then? I guess I should review your
>>> proof of why it works.
>>
>> How is it different from Paul's formula?
>
> I was wrong -- it doesn't look like Paul gives a general formula
> for TOP damage in his Middle Path paper. So I don't know. But I
> recall him being surprised by it.

There's a footnote in the Middle Path paper that points to a tuning-math message in which he gives a formula for equal temperaments.

>>>>>Pi looks like this in Scheme...
>>>>>
>>>>>;; The number of primes less than n.
>>>>
>>>>In context, why would you ever need this?
>>>
>>>It's in the logflat-badness calculation.
>>
>> There's also a loop over the primes, so how can you do the calculation
>> without knowing the size of the prime limit?
>
> You can't. Kalle didn't specify, so I assume that's a free
> parameter in his program.

You say the patent val is an input to the error function. How are we going to construct this val if we don't know how many primes it's supposed to contain?

Graham

🔗Carl Lumma <ekin@lumma.org>

8/5/2006 3:29:58 PM

>> Oh dear, I can barely understand TOP, let alone TOP-RMS. Do you
>> really have working variants of TOP for different types of averages?
>> What is the analogue of "the weighted error of any interval is not
>> greater than the TOP damage" for TOP-RMS?
>
>I have one variant for a different type of average, the RMS, //
>There's no analogy that I can prove to that TOP-max rule.

I assume you can calculate the TOP-RMS damage of a tuning. Have
you then checked the Tenney-weighted RMS errors of some chords
to see if any are greater?

>>>>>You can also calculate the TOP error directly, which I think means
>>>>>changing
>>>>> (mean (/ (+ large small) 2))
>>>>>to
>>>>> (top-error (/ (- large small) (+ large small)))
>>>>>and returning top-error*1200 for the standard units (that's
>>>>>(* top-error 1200))
>>>>
>>>>That's your max-min formula, then? I guess I should review your
>>>>proof of why it works.
>>>
>>>How is it different from Paul's formula?
>>
>> I was wrong -- it doesn't look like Paul gives a general formula
>> for TOP damage in his Middle Path paper. So I don't know. But I
>> recall him being surprised by it.
>
>There's a footnote in the Middle Path paper that points to a
>tuning-math message in which he gives a formula for equal
>temperaments.

The only links to tuning-math I see in the footnotes of the
version I have are to the original exposition of TOP, and to

/tuning-math/message/8512

which tells one how to TOP-tune an ET (it's the basis for the
code I've already given) but not how to calculate the TOP damage.

>> You can't. Kalle didn't specify, so I assume that's a free
>> parameter in his program.
>
>You say the patent val is an input to the error function. How are we
>going to construct this val if we don't know how many primes it's
>supposed to contain?

Obviously Kalle has to supply that information.

-Carl

🔗Carl Lumma <ekin@lumma.org>

8/5/2006 3:37:50 PM

>>There's a footnote in the Middle Path paper that points to a
>>tuning-math message in which he gives a formula for equal
>>temperaments.
>
>The only links to tuning-math I see in the footnotes of the
>version I have are to the original exposition of TOP, and to
>
>/tuning-math/message/8512
>
>which tells one how to TOP-tune an ET (it's the basis for the
>code I've already given) but not how to calculate the TOP damage.

I don't see anything searching my tuning-math archive either.
But I did come across Paul saying something about you 'cleverly
reducing the steps necessary' to calculate, I think, TOP
damage for ETs. Perhaps he was referring to your max-min thing.

So, given an ET (12) and a prime limit (5), how do I calculate
the TOP damage?

-Carl

🔗Graham Breed <gbreed@gmail.com>

8/6/2006 12:38:24 PM

Carl Lumma wrote:
>>>There's a footnote in the Middle Path paper that points to a
>>>tuning-math message in which he gives a formula for equal
>>>temperaments.
>>
>>The only links to tuning-math I see in the footnotes of the
>>version I have are to the original exposition of TOP, and to
>>
>>/tuning-math/message/8512
>>
>>which tells one how to TOP-tune an ET (it's the basis for the
>>code I've already given) but not how to calculate the TOP damage.

He says "For an ET, just stretch so that the weighted errors of the most
upward-biased prime and most downward-biased prime are equal in
magnitude and opposite in sign." The only other thing you need to know is that the equal magnitude you end up with is the TOP damage.

> I don't see anything searching my tuning-math archive either.
> But I did come across Paul saying something about you 'cleverly
> reducing the steps necessary' to calculate, I think, TOP
> damage for ETs. Perhaps he was referring to your max-min thing.

I simplified his formula. Algebra, nothing else.

> So, given an ET (12) and a prime limit (5), how do I calculate
> the TOP damage?

I've already told you what to change in your function, in this thread. And I've linked to a PDF with the formulae. What else do you need?

Graham

🔗Graham Breed <gbreed@gmail.com>

8/6/2006 12:37:48 PM

Carl Lumma wrote:
>>>Oh dear, I can barely understand TOP, let alone TOP-RMS. Do you
>>>really have working variants of TOP for different types of averages?
>>>What is the analogue of "the weighted error of any interval is not
>>>greater than the TOP damage" for TOP-RMS?
>>
>>I have one variant for a different type of average, the RMS, //
>>There's no analogy that I can prove to that TOP-max rule.
>
> I assume you can calculate the TOP-RMS damage of a tuning. Have
> you then checked the Tenney-weighted RMS errors of some chords
> to see if any are greater?

Of course there are chords that are going to have higher error than the average. The TOP-RMS is vaguely correlated with the Tenney-weighted error of an arbitrarily large Tenney-limited set of intervals. But I can't fiddle the weighting so that different large sets give a consistent result, and so it's only about as well correlated with the TOP-max error, as far as I can see.

The TOP-RMS is never larger than the TOP-max. That's a strict rule.

Graham

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

8/6/2006 9:58:00 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:

> Actually that's for linear temperaments, which used to be rank 1
> I think. For ETs it would have been primes - 0. But now that
> linear temperaments are rank 2... maybe Gene can clear this up.

The general formula is error * complexity^(pi(p)/(pi(p)-r)), where r is
the rank. If the rank is 1, that becomes e*c^(pi(p)/(pi(p)-1)).
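In code form (a Python sketch I've added; prime_pi stands in for the prime-counting function pi):

```python
def prime_pi(p):
    """pi(p): number of primes <= p, by trial division."""
    return sum(all(j % d for d in range(2, j)) for j in range(2, p + 1))

def logflat_badness(error, complexity, p, rank):
    """Gene's general formula: error * complexity^(pi(p)/(pi(p)-r))."""
    k = prime_pi(p)
    return error * complexity ** (k / (k - rank))

# In the 5-limit (pi(5) = 3) at rank 1, the exponent is 3/2.
```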

🔗Carl Lumma <ekin@lumma.org>

8/6/2006 10:04:49 PM

>> Actually that's for linear temperaments, which used to be rank 1
>> I think. For ETs it would have been primes - 0. But now that
>> linear temperaments are rank 2... maybe Gene can clear this up.
>
>The general formula is error * complexity^(pi(p)/(pi(p)-r)), where r is
>the rank. If the rank is 1, that becomes e*c^(pi(p)/(pi(p)-1)).

Thanks!

-Carl

🔗Kalle Aho <kalleaho@mappi.helsinki.fi>

8/7/2006 1:01:58 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<genewardsmith@...> wrote:
>
> --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@> wrote:
>
> > Actually that's for linear temperaments, which used to be rank 1
> > I think. For ETs it would have been primes - 0. But now that
> > linear temperaments are rank 2... maybe Gene can clear this up.
>
> The general formula is error * complexity^(pi(p)/(pi(p)-r)), where r is
> the rank. If the rank is 1, that becomes e*c^(pi(p)/(pi(p)-1)).

Thanks Gene,

when the error is Tenney-weighted, the sequence of 5-limit ETs of
decreasing badness is

1, 12, 53, 4296,...

Of meantone ETs, 12-equal has the lowest log-flat badness!
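Kalle's claim can be spot-checked with a sketch along these lines (Python, illustrative, not from the thread; patent vals, TOP damage in cents as the error, notes per octave as the complexity, exponent 3/2 for rank 1 in the 5-limit):

```python
from math import log2

PRIMES = [2, 3, 5]

def top_damage_cents(n):
    """TOP damage of the n-ET patent val in the 5-limit, in cents."""
    val = [round(n * log2(p)) for p in PRIMES]
    weighted = [v / log2(p) for v, p in zip(val, PRIMES)]
    large, small = max(weighted), min(weighted)
    return 1200 * (large - small) / (large + small)

def badness(n):
    # error * complexity^(pi(5)/(pi(5)-1)) = error * n^(3/2)
    return top_damage_cents(n) * n ** 1.5

series = [badness(n) for n in (1, 12, 53, 4296)]
# Badness falls at each step along Kalle's sequence.
```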

🔗Kalle Aho <kalleaho@mappi.helsinki.fi>

8/9/2006 12:23:28 AM

--- In tuning-math@yahoogroups.com, "Kalle Aho" <kalleaho@...> wrote:
>
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith"
> <genewardsmith@> wrote:
> >
> > --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@> wrote:
> >
> > > Actually that's for linear temperaments, which used to be rank 1
> > > I think. For ETs it would have been primes - 0. But now that
> > > linear temperaments are rank 2... maybe Gene can clear this up.
> >
> > The general formula is error * complexity^(pi(p)/(pi(p)-r)), where r is
> > the rank. If the rank is 1, that becomes e*c^(pi(p)/(pi(p)-1)).
>
>
> Thanks Gene,
>
> when the error is Tenney-weighted, the sequence of 5-limit ETs of
> decreasing badness is
>
> 1, 12, 53, 4296,...
>
> Of meantone ETs 12-equal has the lowest log-flat badness!

Are there any other badness measures which are as non-ad hoc as
log-flat but which would give more reasonable values for ETs?

Kalle

🔗Carl Lumma <ekin@lumma.org>

8/9/2006 1:04:57 AM

>> > > Actually that's for linear temperaments, which used to be rank 1
>> > > I think. For ETs it would have been primes - 0. But now that
>> > > linear temperaments are rank 2... maybe Gene can clear this up.
>> >
>> > The general formula is error * complexity^(pi(p)/(pi(p)-r)), where
>> > r is the rank. If the rank is 1, that becomes e*c^(pi(p)/(pi(p)-1)).
>>
>> Thanks Gene,
>>
>> when the error is Tenney-weighted, the sequence of 5-limit ETs of
>> decreasing badness is
>>
>> 1, 12, 53, 4296,...
>>
>> Of meantone ETs 12-equal has the lowest log-flat badness!
>
>Are there any other badness measures which are as non-ad hoc as
>log-flat but which would give more reasonable values for ETs?

Badness is a tough tradeoff to make. 12 is not very many tones.
It's hard to be non-ad hoc because one composer's delight is
another's monstrosity.

By the way, 53 is not a meantone.

'Barely infinite' is an elegant way to make the tradeoff, but in
practice one winds up using cutoffs anyway. So something stronger
that tapers itself off would be preferable in my mind.

As you know, an ET number isn't a mapping. If you don't specify
one, comparing two different ET numbers may be comparing apples and
oranges. Something that finds the 'area' of a map on the lattice
is appealing to me -- I think wedgie complexity is basically such
a thing.

-Carl

🔗Kalle Aho <kalleaho@mappi.helsinki.fi>

8/9/2006 1:31:09 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >> > > Actually that's for linear temperaments, which used to be rank 1
> >> > > I think. For ETs it would have been primes - 0. But now that
> >> > > linear temperaments are rank 2... maybe Gene can clear this up.
> >> >
> >> > The general formula is error * complexity^(pi(p)/(pi(p)-r)), where
> >> > r is the rank. If the rank is 1, that becomes e*c^(pi(p)/(pi(p)-1)).
> >>
> >> Thanks Gene,
> >>
> >> when the error is Tenney-weighted, the sequence of 5-limit ETs of
> >> decreasing badness is
> >>
> >> 1, 12, 53, 4296,...
> >>
> >> Of meantone ETs 12-equal has the lowest log-flat badness!
> >
> >Are there any other badness measures which are as non-ad hoc as
> >log-flat but which would give more reasonable values for ETs?

Hi Carl,

> Badness is a tough tradeoff to make. 12 is not very many tones.
> It's hard to be non-ad hoc because one composer's delight is
> another's monstrosity.

> By the way, 53 is not a meantone.

Yes, I know. My remark above implies that.

> 'Barely infinite' is an elegant way to make the tradeoff, but in
> practice one winds up using cutoffs anyway. So something stronger
> that tapers itself off would be preferable in my mind.

Actually I would prefer something weaker!

I would like to have a badness measure that would be good for
comparing ETs that temper out certain commas. Those commas would then
function as a cutoff. Something that would give 19 or 31 as the best
5-limit meantone, not 12.

Kalle

🔗Graham Breed <gbreed@gmail.com>

8/9/2006 2:24:54 AM

Kalle Aho wrote:

> I would like to have a badness measure that would be good for
> comparing ETs that temper out certain commas. Those commas would then
> function as a cutoff. Something that would give 19 or 31 as the best
> 5-limit meantone, not 12.

I think 31 is the poptimal meantone. Maybe Gene'll be along to explain it.

Otherwise, how non-ad hoc do you want? I've found error*complexity is easy to deal with mathematically.

Graham

🔗Carl Lumma <ekin@lumma.org>

8/9/2006 2:44:50 AM

>> >> when the error is Tenney-weighted, the sequence of 5-limit ETs of
>> >> decreasing badness is
>> >>
>> >> 1, 12, 53, 4296,...
>> >>
>> >> Of meantone ETs 12-equal has the lowest log-flat badness!
>> >
>> >Are there any other badness measures which are as non-ad hoc as
>> >log-flat but which would give more reasonable values for ETs?
>
>Hi Carl,
>
>> Badness is a tough tradeoff to make. 12 is not very many tones.
>> It's hard to be non-ad hoc because one composer's delight is
>> another's monstrosity.
>
>> By the way, 53 is not a meantone.
>
>Yes, I know. My remark above implies that.

Sorry, I read this backwards (and hence incorrectly) upon replying.

>> 'Barely infinite' is an elegant way to make the tradeoff, but in
>> practice one winds up using cutoffs anyway. So something stronger
>> that tapers itself off would be preferable in my mind.
>
>Actually I would prefer something weaker!
>
>I would like to have a badness measure that would be good for
>comparing ETs that temper out certain commas. Those commas would then
>function as a cutoff.

I'd expect an infinite number of ETs to temper out any comma. How
does it work as a cutoff?

-Carl

🔗Kalle Aho <kalleaho@mappi.helsinki.fi>

8/9/2006 12:25:15 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> Kalle Aho wrote:
>
> > I would like to have a badness measure that would be good for
> > comparing ETs that temper out certain commas. Those commas would then
> > function as a cutoff. Something that would give 19 or 31 as the best
> > 5-limit meantone, not 12.
>
> I think 31 is the poptimal meantone. Maybe Gene'll be along to
> explain it.

Hi Graham,

I remember it being 81 for 5-limit.

> Otherwise, how non-ad hoc do you want?

Ironically enough, non-ad hocness is pretty subjective. At least no
coefficients, please!

> I've found error*complexity is
> easy to deal with mathematically.

Yes, I would preserve the basic form and manipulate the complexity
measure. For example, error*log(number of tones to the octave) seems to
work pretty well despite being so soft on higher ETs.
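As a quick check of this measure, here is a Python sketch (my own framing: it assumes patent vals, TOP-max damage as the error, and "temper out 81/80" as the meantone cutoff discussed above):

```python
import math

PRIMES = [2, 3, 5]

def patent_val(n):
    # nearest-integer mapping of each prime in n-equal
    return [round(n * math.log2(p)) for p in PRIMES]

def top_damage(val):
    # TOP-max damage in cents (same construction as the Scheme top-et code)
    weighted = [v / math.log2(p) for v, p in zip(val, PRIMES)]
    hi, lo = max(weighted), min(weighted)
    return 1200 * (hi - lo) / (hi + lo)

def soft_badness(n):
    # error * log(number of tones to the octave)
    return top_damage(patent_val(n)) * math.log(n)

def is_meantone(n):
    # patent val sends 81/80 = |-4 4 -1> to zero steps
    v = patent_val(n)
    return -4 * v[0] + 4 * v[1] - v[2] == 0

meantones = [n for n in range(2, 200) if is_meantone(n)]
best = min(meantones, key=soft_badness)
```

On these assumptions the winner among the meantone ETs is 31 rather than 12, which is the kind of answer asked for above.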

Kalle

🔗Kalle Aho <kalleaho@mappi.helsinki.fi>

8/9/2006 12:35:47 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >> >> when the error is Tenney-weighted, the sequence of 5-limit ETs of
> >> >> decreasing badness is
> >> >>
> >> >> 1, 12, 53, 4296,...
> >> >>
> >> >> Of meantone ETs 12-equal has the lowest log-flat badness!
> >> >
> >> >Are there any other badness measures which are as non-ad hoc as
> >> >log-flat but which would give more reasonable values for ETs?
> >
> >Hi Carl,
> >
> >> Badness is a tough tradeoff to make. 12 is not very many tones.
> >> It's hard to be non-ad hoc because one composer's delight is
> >> another's monstrosity.
> >
> >> By the way, 53 is not a meantone.
> >
> >Yes, I know. My remark above implies that.
>
> Sorry, I read this backwards (and hence incorrectly) upon replying.

No problem!

> >> 'Barely infinite' is an elegant way to make the tradeoff, but in
> >> practice one winds up using cutoffs anyway. So something stronger
> >> that tapers itself off would be preferable in my mind.
> >
> >Actually I would prefer something weaker!
> >
> >I would like to have a badness measure that would be good for
> >comparing ETs that temper out certain commas. Those commas would then
> >function as a cutoff.
>
> I'd expect an infinite number of ETs to temper out any comma. How
> does it work as a cutoff?

You are right of course. But if we require that the best
approximations of primes in the ET are the same as given by its
mapping then I believe the list of such ETs will be finite for every
comma.

Kalle

🔗Kalle Aho <kalleaho@mappi.helsinki.fi>

8/9/2006 12:54:14 PM

--- In tuning-math@yahoogroups.com, "Kalle Aho" <kalleaho@...> wrote:
>
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@> wrote:
> >
> > Kalle Aho wrote:
> >
> > > I would like to have a badness measure that would be good for
> > > comparing ETs that temper out certain commas. Those commas would then
> > > function as a cutoff. Something that would give 19 or 31 as the best
> > > 5-limit meantone, not 12.
> >
> > I think 31 is the poptimal meantone. Maybe Gene'll be along to
> > explain it.
>
>
> Hi Graham,
>
> I remember it being 81 for 5-limit.
>
> > Otherwise, how non-ad hoc do you want?
>
>
> Ironically enough non-ad hocness is pretty subjective. At least no
> coefficients, please!
>
> > I've found error*complexity is
> > easy to deal with mathematically.
>
>
> Yes, I would preserve the basic form and manipulate the complexity
> measure. For example, error*log(number of tones to the octave) seems to
> work pretty well despite being so soft on higher ETs.

It doesn't work with 1-equal, though, because its badness will be 0 as
log(1)=0. Error*log(1/step size) is better.

Kalle

🔗Carl Lumma <ekin@lumma.org>

8/9/2006 1:14:01 PM

>> I'd expect an infinite number of ETs to temper out any comma. How
>> does it work as a cutoff?
>
>You are right of course. But if we require that the best
>approximations of primes in the ET are the same as given by its
>mapping then I believe the list of such ETs will be finite for
>every comma.

I think that's right. Herman Miller has suggested similar stuff
for linear temperaments.

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

8/9/2006 9:53:52 PM

--- In tuning-math@yahoogroups.com, "Kalle Aho" <kalleaho@...> wrote:

> when the error is Tenney-weighted, the sequence of 5-limit ETs of
> decreasing badness is
>
> 1, 12, 53, 4296,...
>
> Of meantone ETs 12-equal has the lowest log-flat badness!

This seems to be weighting 3 as much more valuable than 5.

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

8/9/2006 9:56:51 PM

--- In tuning-math@yahoogroups.com, "Kalle Aho" <kalleaho@...> wrote:

> Are there any other badness measures which are as non-ad hoc as
> log-flat but which would give more reasonable values for ETs?

Weighting the tonality diamond equally is pretty common, as it gives
good results. Kees tunings seem to me to give preference to 3 over 5
in a way more reasonable than what Carl posted.

🔗Carl Lumma <ekin@lumma.org>

8/9/2006 10:35:30 PM

>> Are there any other badness measures which are as non-ad hoc as
>> log-flat but which would give more reasonable values for ETs?
>
>Weighting the tonality diamond equally is pretty common, as it gives
>good results. Kees tunings seem to me to give preference to 3 over 5
>in a way more reasonable than what Carl posted.

What did I post?

-Carl

🔗Kalle Aho <kalleaho@mappi.helsinki.fi>

8/10/2006 12:52:02 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<genewardsmith@...> wrote:
>
> --- In tuning-math@yahoogroups.com, "Kalle Aho" <kalleaho@> wrote:
>
> > when the error is Tenney-weighted, the sequence of 5-limit ETs of
> > decreasing badness is
> >
> > 1, 12, 53, 4296,...
> >
> > Of meantone ETs 12-equal has the lowest log-flat badness!
>
> This seems to be weighting 3 as much more valuable than 5.

Yes, isn't that what Tenney-weighting does?

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

8/10/2006 7:34:25 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> I think 31 is the poptimal meantone. Maybe Gene'll be along to
> explain it.

In the 5-limit it goes a little bit flat from, not sharp from,
1/4-comma. But that's assuming we weight 6/5, 5/4, and 3/2 evenly.
That gives 81-et as 5-limit poptimal. The 7-limit poptimal goes very
slightly sharp of 1/4-comma, but may not even reach 31.

That's the theoretical flapdoodle; in practice I think 31 is about
perfect. It detunes 5/4 just enough to notice (0.783 cents sharp),
which is probably a good idea, and does both 5-limit and septimal
meantone about as well as it can be done. But the old classic,
1/4-comma, *is* poptimal in both the 5 and 7 limits, being in both
cases the minimax tuning.

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

8/10/2006 7:35:42 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:

> I'd expect an infinite number of ETs to temper out any comma. How
> does it work as a cutoff?

An infinite number of vals tempers out any comma. Only a finite number
of patent vals will.

🔗yahya_melb <yahya@melbpc.org.au>

8/11/2006 6:08:49 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" wrote:
>
> --- In tuning-math@yahoogroups.com, Graham Breed wrote:
>
> > I think 31 is the poptimal meantone. Maybe Gene'll be along to
> > explain it.
>
> In the 5-limit it goes a little bit flat from, not sharp from,
> 1/4-comma. But that's assuming we weight 6/5, 5/4, and 3/2 evenly.
> That gives 81-et as 5-limit poptimal. The 7-limit poptimal goes very
> slightly sharp of 1/4-comma, but may not even reach 31.
>
> That's the theoretical flapdoodle; in practice I think 31 is about
> perfect. It detunes 5/4 just enough to notice (0.783 cents sharp),
> which is probably a good idea, and does both 5-limit and septimal
> meantone about as well as it can be done. But the old classic,
> 1/4-comma, *is* poptimal in both the 5 and 7 limits, being in both
> cases the minimax tuning.

Would I be right in concluding that, for practical purposes, you
think 31-EDO is as good as it gets for a tuning that does good
octaves, fifths, thirds and sevenths, all close to the natural
integer overtones of the harmonic series?

I do hope that's what you mean! For it would mean that we *can*
have our cake and eat it (there's not much point in having a cake
you *can't* eat, is there?) - an equal temperament, with all its
implied simplicity of modulation, that nearly perfectly handles all
just ratios up to the 7-limit.

If so, that's a pretty important result, and so deserves much wider
publicity.

Regards,
Yahya

🔗Graham Breed <gbreed@gmail.com>

8/11/2006 6:14:27 AM

Kalle:
>>>I would like to have a badness measure that would be good for
>>>comparing ETs that temper out certain commas. Those commas would then
>>>function as a cutoff.

Carl:
>>I'd expect an infinite number of ETs to temper out any comma. How
>>does it work as a cutoff?

Kalle:
> You are right of course. But if we require that the best
> approximations of primes in the ET are the same as given by its
> mapping then I believe the list of such ETs will be finite for every
> comma.

You don't need an arbitrary rule for selecting the mappings. You know that no ETs that temper out a given set of commas can have a lower error than the optimal higher-rank temperament that tempers out the same commas. If the badness gives enough weight to the complexity, you can find the most complex ET that can possibly have a badness within a given range.

For example, taking badness as error*complexity we have

max_complexity = max_badness/min_error

5-limit meantone has a TOP-max error of 1.70 cents/octave. 31-equal has a TOP-max error of 1.81 cents/octave. Taking TOP-max error and steps per octave to calculate the badness, it comes out as 55.96 cents/octave^2 for 31-equal. A temperament that improves on this badness would need to have less than 55.96/1.70=32.93 steps. Hence no more than 32 steps, and so as 32-equal isn't a meantone we know that there's no point in looking at meantones beyond 31 steps.

Actually, I think 12-equal is still the optimal equal temperament for this example. But 31-equal is optimal in the 7-limit.

If you use a wedgie complexity instead of steps per octave you will get slightly different numbers. But the two complexities will be very close, and you can probably work out how close for a given badness.

If the badness is error*log(complexity) you'll get

max_complexity = exp(max_badness/min_error)

By my calculations, no 5-limit meantone with more than 38 notes can possibly improve on 31-equal. So being softer on complex temperaments doesn't matter.

With log-flat badness of

error*complexity^(size/(size-1))

(^ is exponentiation) the limit on complexity is

max_complexity = (max_badness/min_error)^((size-1)/size)
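Plugging the figures above into a Python sketch (the 1.70 and 55.96 numbers are from this post, I take 31-equal's error to be 55.96/31 cents/octave, and "size" is taken as the number of primes, 3 in the 5-limit):

```python
import math

MIN_ERROR = 1.70     # TOP-max error of optimal 5-limit meantone (cents/octave)
ERR_31 = 55.96 / 31  # 31-equal's error implied by the badness quoted above
SIZE = 3             # number of primes in the 5-limit

# badness = error * complexity
linear_bound = ERR_31 * 31 / MIN_ERROR                    # ~32.9, so 32 steps

# badness = error * log(complexity)
log_bound = math.exp(ERR_31 * math.log(31) / MIN_ERROR)   # ~38.3, so 38 notes

# log-flat badness = error * complexity^(SIZE/(SIZE-1))
flat_bound = (ERR_31 * 31 ** (SIZE / (SIZE - 1))
              / MIN_ERROR) ** ((SIZE - 1) / SIZE)         # also ~32
```

So on these figures the three cutoffs land at 32, 38 and 32 steps respectively.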

Graham

🔗Kalle Aho <kalleaho@mappi.helsinki.fi>

8/11/2006 9:18:28 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> You don't need an arbitrary rule for selecting the mappings.

Graham, I don't think that using the best prime approximations is
particularly arbitrary but anyway...

> You know
> that no ETs that temper out a given set of commas can have a lower
> error than the optimal higher-rank temperament that tempers out the
> same commas. If the badness gives enough weight to the complexity, you
> can find the most complex ET that can possibly have a badness within a
> given range.

This is great!

> For example, taking badness as error*complexity we have
>
> max_complexity = max_badness/min_error
>
> 5-limit meantone has a TOP-max error of 1.70 cents/octave. 31-equal
> has a TOP-max error of 1.81 cents/octave. Taking TOP-max error and
> steps per octave to calculate the badness, it comes out as 55.96
> cents/octave^2

Is that ^2 a typo?

> for 31-equal. A temperament that improves on this badness would need
> to have less than 55.96/1.70=32.93 steps. Hence no more than 32 steps,
> and so as 32-equal isn't a meantone we know that there's no point in
> looking at meantones beyond 31 steps.
>
> Actually, I think 12-equal is still the optimal equal temperament for
> this example. But 31-equal is optimal in the 7-limit.

True.

> If you use a wedgie complexity instead of steps per octave you will get
> slightly different numbers. But the two complexities will be very
> close, and you can probably work out how close for a given badness.
>
>
> If the badness is error*log(complexity) you'll get
>
> max_complexity = exp(max_badness/min_error)
>
> By my calculations, no 5-limit meantone with more than 38 notes can
> possibly improve on 31-equal. So being softer on complex temperaments
> doesn't matter.

Right, and it gives meaningful results. Here are the best ETs for the
temperaments in Paul's paper according to this measure:

5-limit

Father 8
Bug 9
Dicot 7
Meantone 31
Augmented 12
Mavila 7 (This is the only one which is a bit sad)
Porcupine 22
Blackwood 15
Dimipent 12
Srutal 46
Magic 41
Ripple 12
Hanson 53
Negripent 19
Tetracot 34
Superpyth 27
Helmholtz 171
Sensipent 65
Passion 73
Wuerschmidt 99
Compton 84
Amity 205
Orson 53
Vishnu 506
Luna 559

7-limit

Blacksmith 15
Dimisept 12
Dominant 12
August 12
Pajara 22
Semaphore 19
Meantone 31
Injera 26
Negrisept 19
Augene 27
Keemun 19
Catler 36
Hedgehog 22
Superpyth 27
Sensisept 46
Lemba 26
Porcupine 22
Flattone 45
Magic 41
Doublewide 22
Nautilus 29
Beatles 27
Liese 19
Cynder 31
Orwell 53
Garibaldi 94
Myna 89
Miracle 72
Ennealimmal 612

> With log-flat badness of
>
> error*complexity^(size/(size-1))
>
> (^ is exponentiation) the limit on complexity is
>
> max_complexity = (max_badness/min_error)^((size-1)/size)

"size" should be number of primes, right?

Kalle

🔗Carl Lumma <ekin@lumma.org>

8/11/2006 9:55:16 AM

Brilliant!

-Carl

At 06:14 AM 8/11/2006, you wrote:
>Kalle:
>>>>I would like to have a badness measure that would be good for
>>>>comparing ETs that temper out certain commas. Those commas would then
>>>>function as a cutoff.
>
>Carl:
>>>I'd expect an infinite number of ETs to temper out any comma. How
>>>does it work as a cutoff?
>
>Kalle:
>> You are right of course. But if we require that the best
>> approximations of primes in the ET are the same as given by its
>> mapping then I believe the list of such ETs will be finite for every
>> comma.
>
>You don't need an arbitrary rule for selecting the mappings. You know
>that no ETs that temper out a given set of commas can have a lower error
>than the optimal higher-rank temperament that tempers out the same
>commas. If the badness gives enough weight to the complexity, you can
>find the most complex ET that can possibly have a badness within a given
>range.
>
>
>For example, taking badness as error*complexity we have
>
>max_complexity = max_badness/min_error
>
>5-limit meantone has a TOP-max error of 1.70 cents/octave. 31-equal has a
>TOP-max error of 1.81 cents/octave. Taking TOP-max error and steps per
>octave to calculate the badness, it comes out as 55.96 cents/octave^2
>for 31-equal. A temperament that improves on this badness would need to
>have less than 55.96/1.70=32.93 steps. Hence no more than 32 steps, and
>so as 32-equal isn't a meantone we know that there's no point in looking
>at meantones beyond 31 steps.
>
>Actually, I think 12-equal is still the optimal equal temperament for
>this example. But 31-equal is optimal in the 7-limit.
>
>If you use a wedgie complexity instead of steps per octave you will get
>slightly different numbers. But the two complexities will be very
>close, and you can probably work out how close for a given badness.
>
>
>If the badness is error*log(complexity) you'll get
>
>max_complexity = exp(max_badness/min_error)
>
>By my calculations, no 5-limit meantone with more than 38 notes can
>possibly improve on 31-equal. So being softer on complex temperaments
>doesn't matter.
>
>
>With log-flat badness of
>
>error*complexity^(size/(size-1))
>
>(^ is exponentiation) the limit on complexity is
>
>max_complexity = (max_badness/min_error)^((size-1)/size)
>
>
> Graham

🔗Carl Lumma <ekin@lumma.org>

8/11/2006 10:18:27 AM

Hm, I wonder why hanson is 53 and keemun 19.

-Carl

>> If the badness is error*log(complexity) you'll get
>>
>> max_complexity = exp(max_badness/min_error)
>>
>> By my calculations, no 5-limit meantone with more than 38 notes can
>> possibly improve on 31-equal. So being softer on complex temperaments
>> doesn't matter.
>
>Right and it gives meaningful results. Here's the best ETs for
>temperaments in Paul's Paper according to this measure:
>
>5-limit
>
>Father 8
>Bug 9
>Dicot 7
>Meantone 31
>Augmented 12
>Mavila 7 (This is the only one which is a bit sad)
>Porcupine 22
>Blackwood 15
>Dimipent 12
>Srutal 46
>Magic 41
>Ripple 12
>Hanson 53
>Negripent 19
>Tetracot 34
>Superpyth 27
>Helmholtz 171
>Sensipent 65
>Passion 73
>Wuerschmidt 99
>Compton 84
>Amity 205
>Orson 53
>Vishnu 506
>Luna 559
>
>7-limit
>
>Blacksmith 15
>Dimisept 12
>Dominant 12
>August 12
>Pajara 22
>Semaphore 19
>Meantone 31
>Injera 26
>Negrisept 19
>Augene 27
>Keemun 19
>Catler 36
>Hedgehog 22
>Superpyth 27
>Sensisept 46
>Lemba 26
>Porcupine 22
>Flattone 45
>Magic 41
>Doublewide 22
>Nautilus 29
>Beatles 27
>Liese 19
>Cynder 31
>Orwell 53
>Garibaldi 94
>Myna 89
>Miracle 72
>Ennealimmal 612

🔗Kalle Aho <kalleaho@mappi.helsinki.fi>

8/11/2006 11:10:18 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> Hm, I wonder why hanson is 53 and keemun 19.

Probably because the patent val for 53 doesn't temper out 49:48!

🔗Graham Breed <gbreed@gmail.com>

8/11/2006 4:04:10 PM

Kalle Aho wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>>You don't need an arbitrary rule for selecting the mappings.
>
> Graham, I don't think that using the best prime approximations is
> particularly arbitrary but anyway...

Well, this is one of those points I always grumble about.

>>For example, taking badness as error*complexity we have
>>
>>max_complexity = max_badness/min_error
>>
>>5-limit meantone has a TOP-max error of 1.70 cents/octave. 31-equal
>>has a TOP-max error of 1.81 cents/octave. Taking TOP-max error and steps per
>>octave to calculate the badness, it comes out as 55.96 cents/octave^2
>
> Is that ^2 a typo?

No. Error is measured in cents/octave and complexity in steps/octave. Taking steps as dimensionless, that gives complexity*error in units of cents/octave^2.

Graham

🔗Kalle Aho <kalleaho@mappi.helsinki.fi>

8/11/2006 7:39:12 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> > Is that ^2 a typo?
>
> No. Error is measured in cents/octave and complexity in steps/octave.
> Taking steps as dimensionless, that gives complexity*error in units of
> cents/octave^2.

Oh, but of course! I'm sorry.

🔗Carl Lumma <ekin@lumma.org>

8/11/2006 11:25:11 AM

>> I'd expect an infinite number of ETs to temper out any comma. How
>> does it work as a cutoff?
>
>An infinite number of vals tempers out any comma. Only a finite number
>of patent vals will.

What's the largest ET whose patent val tempers out 81/80?

-Carl

🔗Graham Breed <gbreed@gmail.com>

8/12/2006 2:26:31 AM

Carl Lumma wrote:
>>>I'd expect an infinite number of ETs to temper out any comma. How
>>>does it work as a cutoff?
>>
>>An infinite number of vals tempers out any comma. Only a finite number
>>of patent vals will.
>
> What's the largest ET whose patent val tempers out 81/80?

What's an ET? If you're allowed contorsion, it's <129, 204, 300]. Otherwise, <117, 185, 272]. The full list is

5, 7, 12, 19, 24*, 26, 31, 36*, 38*, 43, 45, 50, 55, 57*, 62*, 67, 69, 74, 76*, 81, 86*, 88, 93*, 98, 100*, 105, 117, 129*

where a * denotes contorsion. A similar list for some kind of optimal mappings:

5, 7, 12, 14*, 17, 19, 24*, 26, 31, 33, 36*, 38*, 43, 45, 50, 52*, 55, 57*, 62*, 64, 69, 74, 76*, 81, 88, 93*, 100*, 112

Contorsion is where it's not possible to cover all notes of a regular temperament with consonances sharing a common note. For equal temperaments that means the elements of the mapping have a common factor. A contorted temperament can't strictly be a meantone because it isn't possible to notate it by octaves and fifths. It might not even be a temperament depending on your definition of "temperament".
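Both the finite list and the contorsion flags are easy to check mechanically. A Python sketch (my own code: patent vals only, with contorsion detected as a common factor in the mapping, per the definition above):

```python
import math
from functools import reduce

PRIMES = [2, 3, 5]
MEANTONE_COMMA = [-4, 4, -1]   # monzo for 81/80 = 2^-4 * 3^4 * 5^-1

def patent_val(n):
    # nearest-integer mapping of each prime in n-equal
    return [round(n * math.log2(p)) for p in PRIMES]

def tempers_out(val, monzo):
    # the val sends the comma to zero steps
    return sum(v * m for v, m in zip(val, monzo)) == 0

def contorted(val):
    # the elements of the mapping share a common factor
    return reduce(math.gcd, val) > 1

# only finitely many patent vals temper out 81/80
meantone_ets = [n for n in range(1, 1000)
                if tempers_out(patent_val(n), MEANTONE_COMMA)]
```

This reproduces the list: the largest entry is 129, whose val <129, 204, 300] has common factor 3 and so is contorted, and the largest uncontorted one is 117 with <117, 185, 272].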

Graham

🔗Carl Lumma <ekin@lumma.org>

8/12/2006 8:15:26 AM

Wow, maybe insisting on patent vals isn't as arbitrary as I thought.

-C.

>>>>I'd expect an infinite number of ETs to temper out any comma. How
>>>>does it work as a cutoff?
>>>
>>>An infinite number of vals tempers out any comma. Only a finite number
>>>of patent vals will.
>>
>> What's the largest ET whose patent val tempers out 81/80?
>
>What's an ET? If you're allowed contorsion, it's <129, 204, 300].
>Otherwise, <117, 185, 272]. The full list is
>
>5, 7, 12, 19, 24*, 26, 31, 36*, 38*, 43, 45, 50, 55, 57*, 62*, 67, 69,
>74, 76*, 81, 86*, 88, 93*, 98, 100*, 105, 117, 129*
>
>where a * denotes contorsion. A similar list for some kind of optimal
>mappings:
>
>5, 7, 12, 14*, 17, 19, 24*, 26, 31, 33, 36*, 38*, 43, 45, 50, 52*, 55,
>57*, 62*, 64, 69, 74, 76*, 81, 88, 93*, 100*, 112
>
>
>Contorsion is where it's not possible to cover all notes of a regular
>temperament with consonances sharing a common note. For equal
>temperaments that means the elements of the mapping have a common
>factor. A contorted temperament can't strictly be a meantone because it
>isn't possible to notate it by octaves and fifths. It might not even be
>a temperament depending on your definition of "temperament".
>
>
> Graham

🔗Carl Lumma <ekin@lumma.org>

8/12/2006 8:41:16 AM

>>>>I'd expect an infinite number of ETs to temper out any comma. How
>>>>does it work as a cutoff?
>>>
>>>An infinite number of vals tempers out any comma. Only a finite number
>>>of patent vals will.

When

error(patent(3)) < 4 * error(patent(5))

we no longer have a patent meantone. And since as ET size goes
up the patent errors will tend to go down at about the same rate,
the inequality will eventually be satisfied. Is that right?

>Contorsion is where it's not possible to cover all notes of a regular
>temperament with consonances sharing a common note. For equal
>temperaments that means the elements of the mapping have a common
>factor. A contorted temperament can't strictly be a meantone because it
>isn't possible to notate it by octaves and fifths. It might not even be
>a temperament depending on your definition of "temperament".

Yes, contorsion isn't ok. But it isn't a concern here because
there are an infinite number of primes.

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

8/14/2006 1:23:56 PM

--- In tuning-math@yahoogroups.com, "yahya_melb" <yahya@...> wrote:

> Would I be right in concluding that, for practical purposes, you
> think 31-EDO is as good as it gets for a tuning that does good
> octaves, fifths, thirds and sevenths, all close to the natural
> integer overtones of the harmonic series?

It's about as good as a *meantone* tuning gets, which is not the same.
I would say that for most 7-limit purposes 99-et is all you need,
however, so far as tuning accuracy goes. But other people no doubt
have different ideas.

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

8/14/2006 2:31:38 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >> I'd expect an infinite number of ETs to temper out any comma. How
> >> does it work as a cutoff?
> >
> >An infinite number of vals tempers out any comma. Only a finite number
> >of patent vals will.
>
> What's the largest ET whose patent val tempers out 81/80?

129-et. As expected, the fifth is a little sharp of minimax, at
697.67 cents. The NOT tuning is 698.02.

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

8/14/2006 2:35:58 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> What's an ET? If you're allowed contorsion, it's <129, 204, 300].
> Otherwise, <117, 185, 272]. The full list is

Good point. The fifth for 117 is 697.44 cents, still as expected sharp
of minimax.

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

8/14/2006 4:47:53 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<genewardsmith@...> wrote:
>
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@> wrote:
>
> > What's an ET? If you're allowed contorsion, it's <129, 204, 300].
> > Otherwise, <117, 185, 272]. The full list is
>
> Good point. The fifth for 117 is 697.44 cents, still as expected sharp
> of minimax.

Of course, we can do this stuff for other temperaments and other prime
limits as well. For septimal meantone, we get 105 et, which has a fifth
of 697.143 cents; the 7-limit NOT tuning is 697.646.

For miracle, the highest patent for both 7 and 11 limit is 175, which
is also 7-limit poptimal. It has a secor of 116.57 cents, as compared
to the 7- and 11-limit NOT tunings of 116.66 cents, very close to the
72-et values; but the 175 tuning is still pretty close to NOT (a
semiconvergent, in fact), which is all we can expect.

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

8/14/2006 4:53:28 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<genewardsmith@...> wrote:
>
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@> wrote:
>
> > What's an ET? If you're allowed contorsion, it's <129, 204, 300].
> > Otherwise, <117, 185, 272]. The full list is
>
> Good point. The fifth for 117 is 697.44 cents, still as expected sharp
> of minimax.

There's also the question of consistency. In the 5-limit, the largest
consistent et for meantone is 100, and the largest uncontorted is 88.
For the 7-limit, the largest consistent one is 93, and the largest
uncontorted is 81.

🔗Graham Breed <gbreed@gmail.com>

8/15/2006 7:31:28 AM

Gene Ward Smith wrote:

> There's also the question of consistency. In the 5-limit, the largest
> consistent et for meantone is 100, and the largest uncontorted is 88.
> For the 7-limit, the largest consistent one is 93, and the largest
> uncontorted is 81.

Yes, and consistency is a particular threshold on an error*complexity badness measure, which brings us right back where we started.

Graham

🔗Carl Lumma <ekin@lumma.org>

8/15/2006 10:22:39 AM

At 08:41 AM 8/12/2006, you wrote:
>>>>>I'd expect an infinite number of ETs to temper out any comma. How
>>>>>does it work as a cutoff?
>>>>
>>>>An infinite number of vals tempers out any comma. Only a finite number
>>>>of patent vals will.
>
>When
>
>error(patent(3)) < 4 * error(patent(5))
>
>we no longer have a patent meantone. And since as ET size goes
>up the patent errors will tend to go down at about the same rate,
>the inequality will eventually be satisfied. Is that right?

It still seems that some large ET could happen to have one error
4 times the size of another.

-Carl

🔗Graham Breed <gbreed@gmail.com>

8/15/2006 1:09:58 PM

Carl Lumma wrote:

>>When
>>
>>error(patent(3)) < 4 * error(patent(5))
>>
>>we no longer have a patent meantone. And since as ET size goes
>>up the patent errors will tend to go down at about the same rate,
>>the inequality will eventually be satisfied. Is that right?

I obviously wasn't paying attention when you posted this before, because it isn't right. 31-equal is consistent, so its val is a patent one, but its error in 5 is much smaller than its error in 3.

> It still seems that some large ET could happen to have one error
> 4 times the size of another.

No meantone can have all its 5-limit intervals tuned to better than a quarter of a syntonic comma, or about 5.4 cents. No equal temperament represented by a patent val can have an error in a prime interval greater than half a scale step. Although these two statements aren't connected, it still works out that there's a size of ET beyond which it's impossible for a meantone to be represented by a patent val.
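The half-a-step bound is just a consequence of the rounding in the patent val; a quick Python check (my own framing, measuring absolute prime errors in cents):

```python
import math

PRIMES = [2, 3, 5]

def patent_val(n):
    # rounding each prime to the nearest step of n-equal...
    return [round(n * math.log2(p)) for p in PRIMES]

def worst_prime_error(n):
    # ...keeps every prime within half a step of pure
    return max(abs(1200 * v / n - 1200 * math.log2(p))
               for v, p in zip(patent_val(n), PRIMES))

# half a step is 600/n cents, and no patent val ever exceeds it
half_step_ok = all(worst_prime_error(n) <= 600 / n for n in range(1, 300))
```

So the patent val's absolute errors shrink like 600/n as n grows, while any meantone val has to keep its primes mistuned enough to absorb the whole syntonic comma, which is why the patent meantones eventually run out.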

Graham

🔗yahya_melb <yahya@melbpc.org.au>

8/15/2006 9:22:15 PM

Gene,

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" wrote:
>
> --- In tuning-math@yahoogroups.com, "yahya_melb" wrote:
>
> > Would I be right in concluding that, for practical purposes, you
> > think 31-EDO is as good as it gets for a tuning that does good
> > octaves, fifths, thirds and sevenths, all close to the natural
> > integer overtones of the harmonic series?
>
> It's about as good as a *meantone* tuning gets, which is not the
same.
> I would say that for most 7-limit purposes 99-et is all you need,
> however, so far as tuning accuracy goes. But other people no doubt
> have different ideas.

Thanks for the clarification. So if I want to have good major and
minor tones (10/9 and 9/8), 31-EDO won't do the job, but 99-EDO will.
That's good to know.

Since there are around 6 meantones per octave, that means on average
16.5 steps of 99-EDO per meantone, so I'm guessing the minor and major
tones will be 16 and 17 steps respectively, and a septimal tone (8/7)
probably 18 or 19. Hmm, think I'll have to spreadsheet the numbers to
find out just how to use 99-EDO effectively in various limits. (Which
reminds me, there's a spreadsheet of Carl's I've been meaning to get
back to.)

Regards,
Yahya

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

8/16/2006 1:39:47 AM

--- In tuning-math@yahoogroups.com, "yahya_melb" <yahya@...> wrote:

> Thanks for the clarification. So if I want to have good major and
> minor tones (10/9 and 9/8), 31-EDO won't do the job, but 99-EDO will.
> That's good to know.

You were asking about the 7-limit. If all you need is 5-limit stuff
and in particular good 9/8 and 10/9, 53-et is fine.

> Since there are around 6 meantones per octave, that means on average
> 16.5 steps of 99-EDO per meantone, so I'm guessing the minor and major
> tones will be 16 and 17 steps respectively, and a septimal tone (8/7)
> probably 18 or 19.

10/9: 15
9/8: 17
8/7: 19
7/6: 22
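[Gene's step counts fall straight out of rounding n · log2(ratio); a quick Python check (function name mine):]

```python
from math import log2
from fractions import Fraction

def steps(n, ratio):
    """Best approximation of the given ratio in n-EDO, in steps."""
    return round(n * log2(ratio))

for r in (Fraction(10, 9), Fraction(9, 8), Fraction(8, 7), Fraction(7, 6)):
    print(r, steps(99, r))  # reproduces Gene's 15, 17, 19, 22
```

The same function gives 8 and 9 steps for 10/9 and 9/8 in 53-et.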

🔗yahya_melb <yahya@melbpc.org.au>

8/16/2006 6:34:08 AM

Gene,

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" wrote:
>
> --- In tuning-math@yahoogroups.com, "yahya_melb" wrote:
>
> > Thanks for the clarification. So if I want to have good major
> > and minor tones (10/9 and 9/8), 31-EDO won't do the job, but
> > 99-EDO will. That's good to know.

>
> You were asking about the 7-limit. If all you need is 5-limit
> stuff ...

I didn't actually say that! But if I did only want 5-limit, 53
would surely be a more manageable number of steps per octave (eg on
guitar) than would 99. Again, that's useful info.

> ... and in particular good 9/8 and 10/9, 53-et is fine.
>
> > Since there are around 6 meantones per octave, that means on
> > average 16.5 steps of 99-EDO per meantone, so I'm guessing the
> > minor and major tones will be 16 and 17 steps respectively, and
> > a septimal tone (8/7) probably 18 or 19.
>
> 10/9: 15
> 9/8: 17
> 8/7: 19
> 7/6: 22

Thanks for the numbers.

Regards,
Yahya

🔗Carl Lumma <carl@lumma.org>

1/8/2009 1:40:02 PM

Kalle wrote:

>> when the error is Tenney-weighted, the sequence of 5-limit
>> ETs of decreasing badness is
>>
>> 1, 12, 53, 4296,...
>>
>> Of meantone ETs 12-equal has the lowest log-flat badness!
>
>Are there any other badness measures which are as non-ad hoc as
>log-flat but which would give more reasonable values for ETs?

I was playing with logflat badness the other night, generating
this list, where I look at ETs up to 1000 and down to the
number of primes (e.g. 4-ET is smallest 7-limit ET considered),
and print the top results such that 12-ET is their median. The
val used for each ET is the one with the lowest TOP damage after
the TOP stretch is applied, and complexity is the ET number.

5-limit
<53 84 123| 117
<12 19 28| 148
<3 5 7| 157
7-limit
<99 157 230 278| 155
<12 19 28 34| 169
<31 49 72 87| 176
11-limit
<31 49 72 87 107| 132
<72 114 167 202 249| 135
<22 35 51 62 76| 156
<12 19 28 34 42| 170
<5 8 12 14 18| 190
<8 13 19 23 28| 201
<41 65 95 115 142| 202
13-limit
<72 114 167 202 249 266| 165
<12 19 28 34 42 45| 170
<8 13 19 23 28 30| 181
17-limit
<72 114 167 202 249 266 294| 143
<12 19 28 34 42 45 49| 156
<46 73 107 129 159 170 188| 161
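[These figures can be reproduced. Assuming TOP damage is computed from the spread of v_i / log2(p_i), per the TOP-et algorithm quoted at the top of the thread, and the complexity exponent is pi(p)/(pi(p)-1), the Python sketch below (function names mine; Carl's actual code may differ) matches the table for the patent-val entries:]

```python
from math import log2

def top_damage(val, primes):
    """TOP damage in cents: after the optimal stretch, the worst
    Tenney-weighted error is (max - min)/(max + min) of v_i / log2(p_i)."""
    r = [v / log2(p) for v, p in zip(val, primes)]
    return 1200 * (max(r) - min(r)) / (max(r) + min(r))

def logflat_badness(val, primes):
    """Damage times n^(k/(k-1)), n = notes/octave, k = number of primes."""
    n, k = val[0], len(primes)
    return top_damage(val, primes) * n ** (k / (k - 1))

print(round(logflat_badness((12, 19, 28), (2, 3, 5))))  # 148, as in the table
```

The same call gives 117 for <53 84 123| and 157 for <3 5 7|, matching the 5-limit entries above.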

As I've done on a few occasions in the past, I tinkered
with the exponents on error and complexity, but didn't find
anything that seemed to work better than logflat.

Then today I think I hit on an explanation: the badness is
doing its job, it's just that ETs are unmusical. Things
like 72 are not scales on which to base music, but rather
convenient tunings for rank 2 temperaments. ETs in the
complexity range of musical scales aren't accurate enough,
and are too melodically simple. Even 12-ET is not used
in music much -- the diatonic scale is. This isn't a new
realization, but I'm trying it again for the first time. :)

For rank 2 temperaments, something like the length of the
continuous chain of generators needed to produce a
saturated chord makes more sense. I think that's called
Graham complexity (right Graham?). And I think it's the
same as the unweighted wedgie complexity if the footprint
of both generators and periods is considered (right Graham?).
Then there's scalar complexity... maybe somebody can tell
me what that is.
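[For what the "chain of generators" idea looks like concretely: a minimal Python sketch of my understanding of Graham complexity, namely the span (max minus min) of generator counts over the identities of the chord. The mapping format and names are mine, not an established API:]

```python
def graham_complexity(gens_per_prime, chord_monzos):
    """Span of generator counts over a chord's identities.
    gens_per_prime maps each odd prime to its generator count;
    chord_monzos lists each identity as (prime, exponent) pairs."""
    counts = [sum(e * gens_per_prime[p] for p, e in m) for m in chord_monzos]
    return max(counts) - min(counts)

# Meantone: 3 is one generator (the fifth), 5 is four. The 4:5:6 triad
# has identities 1, 3, 5, so the chain must span 4 generators.
meantone = {3: 1, 5: 4}
triad = [[], [(3, 1)], [(5, 1)]]  # monzos for 1, 3, 5 (octaves ignored)
print(graham_complexity(meantone, triad))  # 4
```

With 7 mapped to 10 generators, the septimal-meantone tetrad comes out at 10, the familiar value.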

So one approach for ETs would be to report the complexity
of a rank 2 temperament it supports. How to choose? I'll
suggest using the complexity of the simplest comma tempered
out by the val under consideration. This means one has to
go all the way to badness before picking a val for a number
of notes/octave, rather than just picking the one with the
lowest error.

One might also disqualify commas that don't include all
primes in the limit. For example, the best 11-limit val
above, <31 49 72 87 107|, tempers out 81/80. But the
simplest complete 11-limit comma it tempers out seems to
be 441/440. I'd then take the log2 Tenney Height of this
comma as the val's weighted complexity (its taxicab
distance on a rectangular lattice would be its unweighted
complexity).
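[One brute-force way to hunt for that comma: enumerate small monzos in the kernel of the val and rank them by Tenney height. A Python sketch (names and the search window are mine; like any windowed search, including Carl's, it can miss a simpler comma outside the window, and a different window may surface a different winner):]

```python
from itertools import product
from math import log2

PRIMES = (2, 3, 5, 7, 11)

def simplest_complete_comma(val, bound=5):
    """Search monzos with exponents of 3..11 in [-bound, bound], solving the
    exponent of 2 from val . monzo == 0. Keep only 'complete' commas (every
    prime present) and return (log2 Tenney height, monzo) for the simplest."""
    best = None
    for tail in product(range(-bound, bound + 1), repeat=len(val) - 1):
        if 0 in tail:               # must involve all the limit's primes
            continue
        s = sum(v * e for v, e in zip(val[1:], tail))
        if s % val[0]:
            continue                # exponent of 2 wouldn't be an integer
        e2 = -s // val[0]
        if e2 == 0:
            continue
        monzo = (e2,) + tail
        height = sum(abs(e) * log2(p) for e, p in zip(monzo, PRIMES))
        if best is None or height < best[0]:
            best = (height, monzo)
    return best

print(simplest_complete_comma((31, 49, 72, 87, 107)))
```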

Has this been suggested before? I haven't recoded my
search for it yet, but I've got to get to work before they
fire me. Maybe somebody will beat me to it...

-Carl

🔗Carl Lumma <carl@lumma.org>

1/8/2009 3:52:33 PM

I wrote:

>One might also disqualify commas that don't include all
>primes in the limit. For example, the best 11-limit val
>above, <31 49 72 87 107|, tempers out 81/80. But the
>simplest complete 11-limit comma it tempers out seems to
>be 441/440.

Simplest by Tenney height, that is.

I just uploaded a plot of notes/octave vs. simplest 7-limit
comma, here:

/tuning-math/files/carl/

Looks like it might work.

-Carl

🔗Carl Lumma <carl@lumma.org>

1/11/2009 2:33:28 AM

>I just uploaded a plot of notes/octave vs. simplest 7-limit
>comma, here:
>
>/tuning-math/files/carl/
>
>Looks like it might work.

Here are the best 3 ETs up to 100 ET for several different
limits using this scheme.

These are (val comma badness), where val is the non-torsional
val with the least TOP damage for the ET after the corresponding
TOP stretch has been applied. Comma is the simplest vanishing
comma for that val which involves all the limit's prime factors,
where "simplest" is defined in terms of Tenney height (though I'm
generating candidate commas by unweighted lattice distance, and
there is a possibility that a comma of lower Tenney height is
outside my window). Badness is the log2 Tenney height of this
comma, raised to the standard logflat exponent, times the TOP
damage of the val.

5-limit:
((53 84 123) 15625/15552 45)
((65 103 151) 32805/32768 70)
((87 138 202) 15625/15552 73)
7-limit:
((99 157 230 278) 2401/2400 21)
((72 114 167 202) 225/224 25)
((94 149 218 264) 225/224 38)
11-limit:
((72 114 167 202 249) 441/440 23)
((99 157 230 278 343) 441/440 33)
((94 149 218 264 325) 539/540 36)
13-limit:
((94 149 218 264 325 348) 1715/1716 39)
((72 114 167 202 249 266) 1715/1716 39)
((87 138 202 244 301 322) 4235/4212 48)
17-limit:
((72 114 167 202 249 266 294) 715/714 30)
((94 149 218 264 325 348 384) 715/714 31)
((80 127 186 225 277 296 327) 715/714 36)
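[These badness figures check out numerically if "Tenney height" is taken in its log2 form (as in the previous message) and the "standard logflat exponent" is pi(p)/(pi(p)-1). A Python sketch (names mine), with TOP damage computed from the spread of v_i / log2(p_i) per the TOP-et algorithm at the top of the thread:]

```python
from math import log2

def top_damage(val, primes):
    """TOP damage in cents, from the spread of v_i / log2(p_i)."""
    r = [v / log2(p) for v, p in zip(val, primes)]
    return 1200 * (max(r) - min(r)) / (max(r) + min(r))

def comma_badness(val, primes, comma):
    """TOP damage times (log2 Tenney height of the comma)^(k/(k-1))."""
    n, d = comma
    k = len(primes)
    return top_damage(val, primes) * log2(n * d) ** (k / (k - 1))

print(round(comma_badness((99, 157, 230, 278), (2, 3, 5, 7), (2401, 2400))))  # 21
```

The same function gives 25 for 72-et with 225/224 and 45 for 53-et with 15625/15552, matching the lists above.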

The results are biased towards larger ETs, and I doubt the
results are logflat anymore. There's clearly a penalty to
having more notes/octave, though I'll argue it's less important
than the penalty of greater Graham complexity in the embedded
rank 2 temperaments. I was thinking maybe
(notes/octave * simplest comma)^logflat_expt
would be one compromise. If anyone is following this, I'd
love to have your comments.

-Carl

🔗Carl Lumma <carl@lumma.org>

1/11/2009 2:39:22 AM

I wrote:

>These are (val comma badness), where val is the non-torsional

Oh, and I'm assuming for rank 1 temperaments, removing torsion
is as simple as ignoring vals with GCD > 1. Somebody please
correct me if that's not right.
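[For rank 1 that check really is just a GCD over the val's entries; a tiny Python sketch (name mine):]

```python
from functools import reduce
from math import gcd

def has_torsion(val):
    """A val with GCD > 1 is an integer multiple of a smaller val, so
    everything it maps lands on a multiple of that GCD (contorsion)."""
    return reduce(gcd, val) > 1

print(has_torsion((24, 38, 56)))  # True: 2 * <12 19 28|
print(has_torsion((12, 19, 28)))  # False
```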

-Carl