
Re: further comparisons of well temperaments

🔗Carl Lumma <ekin@lumma.org>

8/29/2005 4:23:50 PM

[see...

/tuning/topicId_59914.html#59914

...]

> > > > I wanted to implement Absolute TOP error, but the Excel
> > > > formulas got to be unwieldy, as I'd seemingly have to use
> > > > if statements to avoid catastrophe wherever the error
> > > > from JI was zero.
> > >
> > > Why is that? Just use max(abs(...),abs(...),abs(...),...) and
> > > there should be no catastrophe. Of course, there's still a
> > > question as to what abs terms should be in this expression,
> > > but I would think you'd probably be evaluating a particular
> > > triad so there should be three terms(?) . . .
> >
> > For ATE you need to do logs of errors. When the errors are
> > zero, Excel's log function blows up.
>
> My guess is that you're doing something wrong. What is the formula
> you're trying to use exactly? You shouldn't be taking the log of
> anything with units of cents, and on your current spreadsheet it
> doesn't seem you are.

I'm not doing ATE, but I wanted to. For that, you take the
base n*d log of the error, right?

> > This reminds me that I never liked the minimax nature of
> > TOP. It's because...
> >
> > 3 3 3
> > mean = 3
> > max = 3
> >
> > 0 0 9
> > mean = 3
> > max = 9
> >
> > ...I think mean is closer to the truth here than max. Though
> > I'm not aware of this ever having been established with a
> > listening test.
>
> If you advocate the mean (or the sum, which amounts to the
> same thing as regards tuning comparisons), then you're
> advocating P=1. Meanwhile, Gene's "poptimal" is based on
> a range 2<=P<=infinity. Another reason for Gene to reconsider?

I've always agreed with John deLaubenfels that errors seem to
get worse with the square of their size. So I think I'm a P=2
guy. But it'd be great to get some tetrads and triads to
try to compare infinity with a low P. It's not immediately
obvious to me how to calculate examples like 009 vs. 333 since
the pairwise errors in a chord are not independent.
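[Carl's 333-vs-009 comparison can be checked numerically. A minimal sketch
(the error vectors are the ones from the example above; units are
arbitrary, cents if you like):]

```python
# Compare the two error vectors from the example above under several
# P-norms: P=1 is the mean, P=2 is RMS-like, and P -> infinity
# approaches the max, i.e. minimax.
def p_norm(errors, p):
    """P-th power mean of absolute errors; p=float('inf') gives max."""
    if p == float('inf'):
        return max(abs(e) for e in errors)
    return (sum(abs(e) ** p for e in errors) / len(errors)) ** (1.0 / p)

flat = [3, 3, 3]   # every interval off by 3
spiky = [0, 0, 9]  # all the error piled onto one interval

for p in (1, 2, float('inf')):
    print(p, p_norm(flat, p), p_norm(spiky, p))
# P=1 rates the two the same (mean 3 each); as P grows, 0-0-9 looks
# progressively worse, reaching 9 vs. 3 at the minimax limit.
```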

> The nice thing about minimax, though, is that once you've
> established it over the intervals in a tuning system, you
> know that the minimax for any particular chords in the
> tuning system will be less than or equal to the tuning's
> minimax. Not so for mean or for any intermediate value
> of P.

Ok, that's a key point. However, isn't the "tuning's minimax"
here just another "particular chord", ultimately? It's nice
to think of JI as a lattice, where none of the weighted errors
exceeds a certain bound. I agree that's nice. But really it
seems that only the primary intervals matter... yes, I suppose
mean says that a subset of your basic chord could be worse
than the entire basic chord. That's a pain in the butt, but
what if it corresponds to reality? I dunno, I hate to dig up
the error thing, but it never sat quite right with me. I
know Gene and Graham have delivered TOP-like results based on
RMS... I never took the time to see what those were like.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

8/29/2005 4:37:21 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> [see...
>
> /tuning/topicId_59914.html#59914
>
> ...]
>
> > > > > I wanted to implement Absolute TOP error, but the Excel
> > > > > formulas got to be unwieldy, as I'd seemingly have to use
> > > > > if statements to avoid catastrophe wherever the error
> > > > > from JI was zero.
> > > >
> > > > Why is that? Just use max(abs(...),abs(...),abs(...),...) and
> > > > there should be no catastrophe. Of course, there's still a
> > > > question as to what abs terms should be in this expression,
> > > > but I would think you'd probably be evaluating a particular
> > > > triad so there should be three terms(?) . . .
> > >
> > > For ATE you need to do logs of errors. When the errors are
> > > zero, Excel's log function blows up.
> >
> > My guess is that you're doing something wrong. What is the formula
> > you're trying to use exactly? You shouldn't be taking the log of
> > anything with units of cents, and on your current spreadsheet it
> > doesn't seem you are.
>
> I'm not doing ATE, but I wanted to. For that, you take the
> base n*d log of the error, right?

Maybe I don't know what you mean. What are you proposing, exactly,
and how does it differ from TOP or what you did on your spreadsheet?

> > > This reminds me that I never liked the minimax nature of
> > > TOP. It's because...
> > >
> > > 3 3 3
> > > mean = 3
> > > max = 3
> > >
> > > 0 0 9
> > > mean = 3
> > > max = 9
> > >
> > > ...I think mean is closer to the truth here than max. Though
> > > I'm not aware of this ever having been established with a
> > > listening test.
> >
> > If you advocate the mean (or the sum, which amounts to the
> > same thing as regards tuning comparisons), then you're
> > advocating P=1. Meanwhile, Gene's "poptimal" is based on
> > a range 2<=P<=infinity. Another reason for Gene to reconsider?
>
> I've always agreed with John deLaubenfels that errors seem to
> get worse with the square of their size. So I think I'm a P=2
> guy.

Not necessarily. These are separate issues. If you agree with John
deLaubenfels, you might square the final result of the calculation
regardless of what P is. But P only tells you about how to assess the
combined impact of different coexisting errors, and not at all about
how to compare one set of errors with another set that's x times as
large.
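[Paul's distinction can be illustrated: every P-norm is homogeneous, so
scaling all the errors by x scales the norm by x no matter what P is. A
deLaubenfels-style "pain grows with the square" assumption would be a
separate exponent applied *after* the norm. A sketch (the `pain` name is
illustrative, not established terminology):]

```python
# P governs how coexisting errors combine, not how badness scales with
# overall error size.  Scaling every error by 3 scales any P-norm by 3.
def p_norm(errors, p):
    if p == float('inf'):
        return max(abs(e) for e in errors)
    return (sum(abs(e) ** p for e in errors) / len(errors)) ** (1.0 / p)

errors = [1.0, 2.0, 4.0]
for p in (1, 2, float('inf')):
    assert abs(p_norm([3 * e for e in errors], p)
               - 3 * p_norm(errors, p)) < 1e-9

# Squaring the final result (one possible "pain" model) leaves the
# ranking of tunings unchanged but makes a tuning twice as far from JI
# four times as bad instead of twice as bad -- independently of P.
def pain(errors, p):
    return p_norm(errors, p) ** 2
```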

> But it'd be great to get some tetrads and triads to
> try to compare infinity with a low P. It's not immediately
> obvious to me how to calculate examples like 009 vs. 333 since
> the pairwise errors in a chord are not independent.

So what?

> > The nice thing about minimax, though, is that once you've
> > established it over the intervals in a tuning system, you
> > know that the minimax for any particular chords in the
> > tuning system will be less than or equal to the tuning's
> > minimax. Not so for mean or for any intermediate value
> > of P.
>
> Ok, that's a key point. However, isn't the "tuning's minimax"
> here just another "particular chord", ultimately? It's nice
> to think of JI as a lattice, where none of the weighted errors
> exceeds a certain bound. I agree that's nice. But really it
> seems that only the primary intervals matter...

Meaning n*d < 100, or something like that?

> yes, I suppose
> mean says that a subset of your basic chord could be worse
> than the entire basic chord.

Well, something along those lines, at any rate.

> That's a pain in the butt, but
> what if it corresponds to reality?

It certainly might.

> I dunno, I hate to dig up
> the error thing, but it never sat quite right with me. I
> know Gene and Graham have delivered TOP-like results based on
> RMS... I never took the time to see what those were like.

I know Graham did but I don't recall Gene getting involved.
Meanwhile, I would have imagined an RMS (or sum-squared) version of
Kees tuning (this is another interesting case to consider) would
amount to circles replacing the hexagons in the chart I keep showing,
and spheres replacing rhombic dodecahedra in a 7-limit version, but
Gene seems to be implying I'm wrong about the latter.

🔗Carl Lumma <ekin@lumma.org>

8/29/2005 4:53:01 PM

>> [see...
>>
>> /tuning/topicId_59914.html#59914
>>
>> ...]
>>
>> > > > > I wanted to implement Absolute TOP error, but the Excel
>> > > > > formulas got to be unwieldy, as I'd seemingly have to use
>> > > > > if statements to avoid catastrophe wherever the error
>> > > > > from JI was zero.
>> > > >
>> > > > Why is that? Just use max(abs(...),abs(...),abs(...),...) and
>> > > > there should be no catastrophe. Of course, there's still a
>> > > > question as to what abs terms should be in this expression,
>> > > > but I would think you'd probably be evaluating a particular
>> > > > triad so there should be three terms(?) . . .
>> > >
>> > > For ATE you need to do logs of errors. When the errors are
>> > > zero, Excel's log function blows up.
>> >
>> > My guess is that you're doing something wrong. What is the
>> > formula you're trying to use exactly? You shouldn't be taking
>> > the log of anything with units of cents, and on your current
>> > spreadsheet it doesn't seem you are.
>>
>> I'm not doing ATE, but I wanted to. For that, you take the
>> base n*d log of the error, right?
>
>Maybe I don't know what you mean. What are you proposing, exactly,
>and how does it differ from TOP or what you did on your spreadsheet?

/tuning-math/message/10579

...I think I parsed this right.

>> > > This reminds me that I never liked the minimax nature of
>> > > TOP. It's because...
>> > >
>> > > 3 3 3
>> > > mean = 3
>> > > max = 3
>> > >
>> > > 0 0 9
>> > > mean = 3
>> > > max = 9
>> > >
>> > > ...I think mean is closer to the truth here than max. Though
>> > > I'm not aware of this ever having been established with a
>> > > listening test.
>> >
>> > If you advocate the mean (or the sum, which amounts to the
>> > same thing as regards tuning comparisons), then you're
>> > advocating P=1. Meanwhile, Gene's "poptimal" is based on
>> > a range 2<=P<=infinity. Another reason for Gene to reconsider?
>>
>> I've always agreed with John deLaubenfels that errors seem to
>> get worse with the square of their size. So I think I'm a P=2
>> guy.
>
>Not necessarily. These are separate issues. If you agree with John
>deLaubenfels, you might square the final result of the calculation
>regardless of what P is. But P only tells you about how to assess the
>combined impact of different coexisting errors, and not at all about
>how to compare one set of errors with another set that's x times as
>large.

Aha. Very good to know.

>> But it'd be great to get some tetrads and triads to
>> try to compare infinity with a low P. It's not immediately
>> obvious to me how to calculate examples like 009 vs. 333 since
>> the pairwise errors in a chord are not independent.
>
>So what?

I don't know how to easily generate test cases given this
constraint. If I could, I'd listen to them.

>> > The nice thing about minimax, though, is that once you've
>> > established it over the intervals in a tuning system, you
>> > know that the minimax for any particular chords in the
>> > tuning system will be less than or equal to the tuning's
>> > minimax. Not so for mean or for any intermediate value
>> > of P.
>>
>> Ok, that's a key point. However, isn't the "tuning's minimax"
>> here just another "particular chord", ultimately? It's nice
>> to think of JI as a lattice, where none of the weighted errors
>> exceeds a certain bound. I agree that's nice. But really it
>> seems that only the primary intervals matter...
>
>Meaning n*d < 100, or something like that?

Say you've got an m-limit TOP tuning, I'd call the "primary
intervals" the n-odd-limit intervals where n is the smallest
odd number > m.
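[Carl's "primary intervals" are easy to enumerate, under the usual
reading that the n-odd-limit comprises the octave-reduced ratios whose
numerator and denominator have odd parts at most n. A sketch:]

```python
from fractions import Fraction

def odd_part(k):
    """Largest odd divisor of k."""
    while k % 2 == 0:
        k //= 2
    return k

def odd_limit_intervals(n):
    """Octave-reduced ratios in [1, 2) whose numerator and denominator
    (after reduction) have odd parts <= n."""
    out = set()
    for a in range(1, n + 1, 2):
        for b in range(1, n + 1, 2):
            r = Fraction(a, b)
            # octave-reduce into [1, 2); this only changes powers of 2,
            # so the odd parts stay within the limit
            while r >= 2:
                r /= 2
            while r < 1:
                r *= 2
            out.add(r)
    return sorted(out)

# Carl's example: for a 7-limit tuning, the smallest odd > 7 is 9, so
# the "primary intervals" would be the 9-odd-limit ratios.
print(odd_limit_intervals(9))
```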

>> yes, I suppose
>> mean says that a subset of your basic chord could be worse
>> than the entire basic chord.
>
>Well, something along those lines, at any rate.
>
>> That's a pain in the butt, but
>> what if it corresponds to reality?
>
>It certainly might.
>
>> I dunno, I hate to dig up
>> the error thing, but it never sat quite right with me. I
>> know Gene and Graham have delivered TOP-like results based on
>> RMS... I never took the time to see what those were like.
>
>I know Graham did but I don't recall Gene getting involved.
>Meanwhile, I would have imagined an RMS (or sum-squared) version of
>Kees tuning (this is another interesting case to consider) would
>amount to circles replacing the hexagons in the chart I keep showing,
>and spheres replacing rhombic dodecahedra in a 7-limit version, but
>Gene seems to be implying I'm wrong about the latter.

I need to review the IM discussion we had about the Kees lattice...
it's on my desktop in my to-do pile.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

8/29/2005 6:03:45 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> [see...
> >>
> >> /tuning/topicId_59914.html#59914
> >>
> >> ...]
> >>
> >> > > > > I wanted to implement Absolute TOP error, but the Excel
> >> > > > > formulas got to be unwieldy, as I'd seemingly have to use
> >> > > > > if statements to avoid catastrophe wherever the error
> >> > > > > from JI was zero.
> >> > > >
> >> > > > Why is that? Just use max(abs(...),abs(...),abs(...),...) and
> >> > > > there should be no catastrophe. Of course, there's still a
> >> > > > question as to what abs terms should be in this expression,
> >> > > > but I would think you'd probably be evaluating a particular
> >> > > > triad so there should be three terms(?) . . .
> >> > >
> >> > > For ATE you need to do logs of errors. When the errors are
> >> > > zero, Excel's log function blows up.
> >> >
> >> > My guess is that you're doing something wrong. What is the
> >> > formula you're trying to use exactly? You shouldn't be taking
> >> > the log of anything with units of cents, and on your current
> >> > spreadsheet it doesn't seem you are.
> >>
> >> I'm not doing ATE, but I wanted to. For that, you take the
> >> base n*d log of the error, right?
> >
> >Maybe I don't know what you mean. What are you proposing, exactly,
> >and how does it differ from TOP or what you did on your spreadsheet?
>
> /tuning-math/message/10579
>
> ...I think I parsed this right.

Oh. I had no idea that's what you meant by ATE. OK, here the error
would have to be expressed as a frequency *ratio* (not necessarily a
rational one) because the whole idea is to use the same type of log
for both the Tenney part of the calculation and the 'cents' part of
the calculation. So if the error is zero cents, it's 1 when expressed
as a frequency ratio. So you'd be taking the log of 1 at a minimum;
never of 0.
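[Paul's point can be sketched. Assuming "ATE" here means taking the
base-(n*d) log of the error expressed as a frequency ratio, as in the
linked message (the function name and cents convention below are
illustrative assumptions):]

```python
from math import log

def ate_term(n, d, tempered_cents):
    """Base-(n*d) log of the error expressed as a frequency ratio.
    The error ratio is 2**(error_cents/1200) >= 1, so the log's
    argument is at least 1 and never 0 -- the spreadsheet
    'catastrophe' can't occur."""
    just_cents = 1200 * log(n / d, 2)
    error_ratio = 2 ** (abs(tempered_cents - just_cents) / 1200)
    return log(error_ratio, n * d)

# A pure fifth has zero error, so the term is log(1) = 0.
print(ate_term(3, 2, 1200 * log(1.5, 2)))
# A 700-cent (12-equal) fifth gives a small positive term.
print(ate_term(3, 2, 700.0))
```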

> >So what?
>
> I don't know how to easily generate test cases given this
> constraint. If I could, I'd listen to them.

Oh, you mean you can't find a 333 case or a 009 case. True enough --
because of the constraint, you have to compare things which don't
show as much difference under the different criteria.

> >> > The nice thing about minimax, though, is that once you've
> >> > established it over the intervals in a tuning system, you
> >> > know that the minimax for any particular chords in the
> >> > tuning system will be less than or equal to the tuning's
> >> > minimax. Not so for mean or for any intermediate value
> >> > of P.
> >>
> >> Ok, that's a key point. However, isn't the "tuning's minimax"
> >> here just another "particular chord", ultimately? It's nice
> >> to think of JI as a lattice, where none of the weighted errors
> >> exceeds a certain bound. I agree that's nice. But really it
> >> seems that only the primary intervals matter...
> >
> >Meaning n*d < 100, or something like that?
>
> Say you've got an m-limit TOP tuning, I'd call the "primary
> intervals" the n-odd-limit intervals where n is the smallest
> odd number > m.

Wow. Of course that runs against the whole grain of TOP . . . and why
would you go to the next prime beyond the limit when m = 3, 5, 11,
17, 29 . . . ? How do you determine the mapping for that prime, etc. ?

> I need to review the IM discussion we had about the Kees lattice...
> it's on my desktop in my to-do pile.

Awesome. I gave you and especially Monz a lot of gems in IM.

🔗Carl Lumma <ekin@lumma.org>

8/29/2005 6:13:46 PM

>> /tuning-math/message/10579
>>
>> ...I think I parsed this right.
>
>Oh. I had no idea that's what you meant by ATE. OK, here the error
>would have to be expressed as a frequency *ratio* (not necessarily a
>rational one) because the whole idea is to use the same type of log
>for both the Tenney part of the calculation and the 'cents' part of
>the calculation. So if the error is zero cents, it's 1 when expressed
>as a frequency ratio. So you'd be taking the log of 1 at a minimum;
>never of 0.

Thanks!

>> >So what?
>>
>> I don't know how to easily generate test cases given this
>> constraint. If I could, I'd listen to them.
>
>Oh, you mean you can't find a 333 case or a 009 case. True enough --
>because of the constraint, you have to compare things which don't
>show as much difference under the different criteria.

I hate that. :)

>> >> > The nice thing about minimax, though, is that once you've
>> >> > established it over the intervals in a tuning system, you
>> >> > know that the minimax for any particular chords in the
>> >> > tuning system will be less than or equal to the tuning's
>> >> > minimax. Not so for mean or for any intermediate value
>> >> > of P.
>> >>
>> >> Ok, that's a key point. However, isn't the "tuning's minimax"
>> >> here just another "particular chord", ultimately? It's nice
>> >> to think of JI as a lattice, where none of the weighted errors
>> >> exceeds a certain bound. I agree that's nice. But really it
>> >> seems that only the primary intervals matter...
>> >
>> >Meaning n*d < 100, or something like that?
>>
>> Say you've got an m-limit TOP tuning, I'd call the "primary
>> intervals" the n-odd-limit intervals where n is the smallest
>> odd number > m.
>
>Wow. Of course that runs against the whole grain of TOP . . . and why
>would you go to the next prime beyond the limit when m = 3, 5, 11,
>17, 29 . . . ?

The next odd. 9 for the 7-limit, etc. I was just trying to say
what you've been saying all along, that 135/128 has no field of
attraction and saying its weighted error is still less than something
has little meaning. 15:8 has a field of attraction, but we shouldn't
care about its error in the 5-limit. If we do care, we should
say so explicitly.

>How do you determine the mapping for that prime, etc. ?

Er, I'm just trying to clarify my point... this wasn't a serious
proposal.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

8/29/2005 6:20:20 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> /tuning-math/message/10579
> >>
> >> ...I think I parsed this right.
> >
> >Oh. I had no idea that's what you meant by ATE. OK, here the error
> >would have to be expressed as a frequency *ratio* (not necessarily a
> >rational one) because the whole idea is to use the same type of log
> >for both the Tenney part of the calculation and the 'cents' part of
> >the calculation. So if the error is zero cents, it's 1 when expressed
> >as a frequency ratio. So you'd be taking the log of 1 at a minimum;
> >never of 0.
>
> Thanks!
>
> >> >So what?
> >>
> >> I don't know how to easily generate test cases given this
> >> constraint. If I could, I'd listen to them.
> >
> >Oh, you mean you can't find a 333 case or a 009 case. True enough --
> >because of the constraint, you have to compare things which don't
> >show as much difference under the different criteria.
>
> I hate that. :)
>
> >> >> > The nice thing about minimax, though, is that once you've
> >> >> > established it over the intervals in a tuning system, you
> >> >> > know that the minimax for any particular chords in the
> >> >> > tuning system will be less than or equal to the tuning's
> >> >> > minimax. Not so for mean or for any intermediate value
> >> >> > of P.
> >> >>
> >> >> Ok, that's a key point. However, isn't the "tuning's minimax"
> >> >> here just another "particular chord", ultimately? It's nice
> >> >> to think of JI as a lattice, where none of the weighted errors
> >> >> exceeds a certain bound. I agree that's nice. But really it
> >> >> seems that only the primary intervals matter...
> >> >
> >> >Meaning n*d < 100, or something like that?
> >>
> >> Say you've got an m-limit TOP tuning, I'd call the "primary
> >> intervals" the n-odd-limit intervals where n is the smallest
> >> odd number > m.
> >
> >Wow. Of course that runs against the whole grain of TOP . . . and why
> >would you go to the next prime beyond the limit when m = 3, 5, 11,
> >17, 29 . . . ?
>
> The next odd. 9 for the 7-limit, etc. I was just trying to say
> what you've been saying all along, that 135/128 has no field of
> attraction and saying its weighted error is still less than something
> has little meaning. 15:8 has a field of attraction, but we shouldn't
> care about its error in the 5-limit. If we do care, we should
> say so explicitly.

So what's wrong with only including the ratios in the lattice n/d
such that n*d < 100 or something like that? Seems a lot more
compatible with the spirit and assumptions of TOP.
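[Paul's proposal is easy to enumerate. A sketch that collects the
reduced ratios n/d with n > d and Tenney height n*d below a cutoff (the
cutoff of 100 is the "or something like that" from the message):]

```python
from fractions import Fraction

def tenney_bounded_ratios(cutoff=100):
    """Reduced ratios n/d with n > d >= 1 and n*d < cutoff -- one way
    to read Paul's 'n*d < 100' proposal."""
    out = []
    for d in range(1, cutoff):
        for n in range(d + 1, cutoff):
            if n * d >= cutoff:
                break  # n*d only grows with n, so stop this row
            r = Fraction(n, d)
            if r.numerator == n and r.denominator == d:  # already reduced
                out.append(r)
    return sorted(out)

ratios = tenney_bounded_ratios(100)
print(len(ratios), ratios[:8])
```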

🔗Carl Lumma <ekin@lumma.org>

8/29/2005 7:44:17 PM

>> >> Say you've got an m-limit TOP tuning, I'd call the "primary
>> >> intervals" the n-odd-limit intervals where n is the smallest
>> >> odd number > m.
>> >
>> >Wow. Of course that runs against the whole grain of TOP . . . and
>> >why would you go to the next prime beyond the limit when
>> >m = 3, 5, 11, 17, 29 . . . ?
>>
>> The next odd. 9 for the 7-limit, etc. I was just trying to say
>> what you've been saying all along, that 135/128 has no field of
>> attraction and saying its weighted error is still less than something
>> has little meaning. 15:8 has a field of attraction, but we shouldn't
>> care about its error in the 5-limit. If we do care, we should
>> say so explicitly.
>
>So what's wrong with only including the ratios in the lattice n/d
>such that n*d < 100 or something like that? Seems a lot more
>compatible with the spirit and assumptions of TOP.

Sounds good to me.

-Carl