TOP-like things?

🔗Carl Lumma <ekin@lumma.org>

3/10/2007 7:49:33 PM

TOP (aka TOP-MAX)
TOP with pure octaves (aka Kees?)
TOP-RMS
TOP-RMS with pure octaves (aka PORMSWE?)
NOT
POCPIT (aka Frobenius?)

any others?

-C.

🔗Graham Breed <gbreed@gmail.com>

3/11/2007 3:32:16 AM

On 11/03/07, Carl Lumma <ekin@lumma.org> wrote:
> TOP (aka TOP-MAX)

Yes. Or TOP-max because "max" isn't an acronym.

> TOP with pure octaves (aka Kees?)

I call this Kees-max because there are surely other things you can do
with a Kees metric.

> TOP-RMS

Yes.

> TOP-RMS with pure octaves (aka PORMSWE?)

PORMSWE is an old term for TOP-RMS error. Prime Optimal RMS Weighted
Error. A real mouthful and I forgot the "Tenney". Nothing to do with
pure octaves.

> NOT

Another kind of TOP with pure octave.

> POCPIT (aka Frobenius?)

A kind of TOP-RMS with pure octaves. Probably not biased towards
intervals between primes but I don't know how it works out. Frobenius
is an old word for a least squares optimization, probably unweighted.

> any others?

Pseudoinverse is a synonym for optimal RMS, and it's up to Gene whether
or not it implies weighting.

My preference for TOP-RMS with pure octaves is the standard deviation
tuning. This doesn't have a special name. It's biased towards
intervals between primes, and so suffers less than an octave-specific
RMS when you fix the octaves. The standard deviation is the RMS error
relative to the mean, instead of relative to zero.
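
To make that distinction concrete, here's a minimal sketch (my own
function names, and only a fixed equal temperament with pure octaves,
not a full TOP-RMS optimization): the RMS is taken relative to zero,
the standard deviation relative to the mean of the same weighted
prime errors.

import math

def weighted_errors(edo, val, primes=(2, 3, 5)):
    """Tenney-weighted errors of the primes: (tempered - just) / log2(p)."""
    step = 1.0 / edo                      # step size as a fraction of an octave
    return [(n * step - math.log2(p)) / math.log2(p)
            for n, p in zip(val, primes)]

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

errs = weighted_errors(19, (19, 30, 44))  # 19-equal, 5-limit patent val
print(rms(errs))    # RMS relative to zero
print(std(errs))    # standard deviation: RMS relative to the mean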

There's TOP-RMS with the scale restretched so the octaves are pure.

I worked out a wedgie error which is close to the TOP-RMS error but I
don't know if it's useful. Gene worked out a wedgie error but it
doesn't seem to correlate with anything else.

You could also do a weighted mean absolute deviation but nobody talks
about this. Probably rightly so.

Graham

🔗Carl Lumma <ekin@lumma.org>

3/11/2007 11:37:00 AM

At 03:32 AM 3/11/2007, you wrote:
>On 11/03/07, Carl Lumma <ekin@lumma.org> wrote:
>> TOP (aka TOP-MAX)
>
>Yes. Or TOP-max because "max" isn't an acronym.
>
>> TOP with pure octaves (aka Kees?)
>
>I call this Kees-max because there are surely other things you can do
>with a Kees metric.

Check.

>> TOP-RMS
>
>Yes.
>
>> TOP-RMS with pure octaves (aka PORMSWE?)
>
>PORMSWE is an old term for TOP-RMS error. Prime Optimal RMS Weighted
>Error. A real mouthful and I forgot the "Tenney". Nothing to do with
>pure octaves.

I was thinking PO must have been Pure Octaves.

>> NOT
>
>Another kind of TOP with pure octave.

Yes. I wish somebody would outline the differences between
these approaches.

>> POCPIT (aka Frobenius?)
>
>A kind of TOP-RMS with pure octaves. Probably not biased towards
>intervals between primes but I don't know how it works out. Frobenius
>is an old word for a least squares optimization, probably unweighted.

Oh? I thought the interesting thing was the val (for rank 1) is
also the eigenmonzo, or something. Is that a property of least-
squares error (unweighted I assume, or it'd be TOP-RMS)?

>> any others?
>
>Pseudoinverse is a synonym for optimal RMS, and it's up to Gene whether
>or not it implies weighting.

First of all, I thought Pseudoinverse = POCPIT = Frobenius.

Second of all, I thought you two *just* figured out that TOP-RMS
is *not* any of these.

>My preference for TOP-RMS with pure octaves is the standard deviation
>tuning. This doesn't have a special name.

Kees-RMS?

>There's TOP-RMS with the scale restretched so the octaves are pure.

That's what I'd do. :)

>You could also do a weighted mean absolute deviation but nobody talks
>about this. Probably rightly so.

It's one of the things that can be done for ETs with my latest Scheme
project. Looking at the ET rankings that came out, RMS and Tenney
weighting with logflat complexity weighting seemed best. But I'm
still not entirely happy with rankings, and it seems like the
complexity weighting is at fault.

-Carl

🔗Carl Lumma <ekin@lumma.org>

3/11/2007 1:17:26 PM

>It's one of the things that can be done for ETs with my latest Scheme
>project. Looking at the ET rankings that came out, RMS and Tenney
>weighting with logflat complexity weighting seemed best. But I'm
>still not entirely happy with rankings, and it seems like the
>complexity weighting is at fault.

In particular, and I guess this is the kind of thing Paul and Dave
objected to, I would like to see a little more in the 12-100 range.
But overall it seems to do a pretty good job.

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/11/2007 1:24:45 PM

--- In tuning-math@yahoogroups.com, "Graham Breed" <gbreed@...> wrote:

> Pseudoinverse is a synonym for optimal RMS, and it's up to Gene whether
> or not it implies weighting.

I guess I shouldn't use it to imply anything and should rewrite my web
page.

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/11/2007 1:29:14 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:

> First of all, I thought Pseudoinverse = POCPIT = Frobenius.

Frobenius is RMS unweighted. POCPIT is the pure octaves version.

> Second of all, I thought you two *just* figured out that TOP-RMS
> is *not* any of these.

I gave the weighted and unweighted calculation using the pseudoinverse
last year, and then stupidly got the idea that Graham liked the
unweighted; entirely the fault of my aging brain.

🔗Graham Breed <gbreed@gmail.com>

3/11/2007 10:56:17 PM

On 12/03/07, Carl Lumma <ekin@lumma.org> wrote:
> At 03:32 AM 3/11/2007, you wrote:
> >On 11/03/07, Carl Lumma <ekin@lumma.org> wrote:

> >> NOT
> >
> >Another kind of TOP with pure octave.
>
> Yes. I wish somebody would outline the differences between
> these approaches.

NOT is a No Octave Tempering version of TOP. All you do is ignore
octaves and optimize for odd primes only. That is, I think that's it;
I don't mean to be dismissive. But it doesn't check the balance
between positive and negative errors. The more consistent the prime
errors are, the smaller the errors of intervals between primes tend
to be. Odd
limits, standard deviations, the Kees metric, and optimizing the
octaves all correct for this. In many cases this doesn't matter, and
NOT will still distinguish really good from really bad temperaments.
But calculating the standard deviation or optimizing the octaves
aren't very difficult either.
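
If I've understood NOT correctly, a rank-2 sketch might look like
this; the meantone mapping, the max-error objective, and the crude
grid search are all my own illustration choices, not a description of
anyone's actual code.

import math

def odd_prime_max_error(fifth):
    """Largest Tenney-weighted error over the odd primes 3 and 5,
    with the octave held pure; fifth is given in octaves.
    Meantone mapping assumed: 3 -> octave + fifth, 5 -> four fifths."""
    tuned = {3: 1.0 + fifth, 5: 4.0 * fifth}
    return max(abs(tuned[p] - math.log2(p)) / math.log2(p) for p in (3, 5))

# crude grid search over fifths from 690 to 710 cents in 0.1-cent steps
err, fifth = min((odd_prime_max_error(c / 12000.0), c / 12000.0)
                 for c in range(6900, 7101))
print(round(1200 * fifth, 2), err)   # NOT-style meantone fifth, in cents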

> >> POCPIT (aka Frobenius?)
> >
> >A kind of TOP-RMS with pure octaves. Probably not biased towards
> >intervals between primes but I don't know how it works out. Frobenius
> >is an old word for a least squares optimization, probably unweighted.
>
> Oh? I thought the interesting thing was the val (for rank 1) is
> also the eigenmonzo, or something. Is that a property of least-
> squares error (unweighted I assume, or it'd be TOP-RMS)?

I think that's only the unweighted least squares. The eigenvalues may
have interesting values for TOP-RMS but probably don't represent
rational intervals.

> >> any others?
> >
> >Pseudoinverse is a synonym for optimal RMS, and it's up to Gene whether
> >or not it implies weighting.
>
> First of all, I thought Pseudoinverse = POCPIT = Frobenius.
>
> Second of all, I thought you two *just* figured out that TOP-RMS
> is *not* any of these.

The pseudoinverse is a matrix you can use to solve this class of least
squares problems. So any optimal RMS could be called a pseudoinverse
tuning, or not.
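
For anyone who wants to see the pseudoinverse in action, here's a
sketch of the weighted least-squares calculation as I understand it
(numpy, with meantone as my own example; I'm not claiming this is
exactly Gene's or Graham's setup).

import numpy as np

primes = np.array([2.0, 3.0, 5.0])
just = np.log2(primes)                  # just sizes of the primes, in octaves
V = np.array([[1, 1, 0],                # octaves per prime  (period mapping)
              [0, 1, 4]], float)        # fifths per prime   (generator mapping)

A = V / just                            # Tenney-weighted mapping
target = np.ones(len(primes))           # weighted just intonation point
g = target @ np.linalg.pinv(A)          # least-squares generators (octaves)

print(1200 * g)                         # tempered octave and fifth, in cents
print(1200 * (g @ V - just))            # signed errors of primes 2, 3, 5, in cents

Dropping the division by "just" (i.e. no weighting) would give the
unweighted least-squares tuning, which I take to be what Frobenius
refers to above.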

> >My preference for TOP-RMS with pure octaves is the standard deviation
> >tuning. This doesn't have a special name.
>
> Kees-RMS?

Maybe it is, but I can't prove it's anything to do with the Kees
metric. So for now it's imperfectly called STD-error.

> >There's TOP-RMS with the scale restretched so the octaves are pure.
>
> That's what I'd do. :)

It's extremely close to STD-error when the error's low. The STD-error
is a kind of second order approximation.

> >You could also do a weighted mean absolute deviation but nobody talks
> >about this. Probably rightly so.
>
> It's one of the things that can be done for ETs with my latest Scheme
> project. Looking at the ET rankings that came out, RMS and Tenney
> weighting with logflat complexity weighting seemed best. But I'm
> still not entirely happy with rankings, and it seems like the
> complexity weighting is at fault.

A MAD would put more emphasis on small errors. But these naive,
weighted-primes error measures don't imply small errors correctly. So
TOP-max still works perfectly, and TOP-RMS isn't far off an RMS of
some more useful intervals, but weighted MAD I'd guess would be less
so. It's also much easier to work with the RMS because it's
continuously differentiable and I think there is a subjective argument
for it.

Are you optimizing the octaves?

Graham

🔗Carl Lumma <ekin@lumma.org>

3/11/2007 11:26:37 PM

>> >> NOT
>> >
>> >Another kind of TOP with pure octave.
>>
>> Yes. I wish somebody would outline the differences between
>> these approaches.
>
>NOT is a No Octave Tempering version of TOP. All you do is ignore
>octaves and optimize for odd primes only.

I knew that. But I wasn't clear on if/where the results differed
from the likes of: whatever Kees-metric things have been proposed,
stretched TOP, stretched TOP-RMS, etc.

>> >> POCPIT (aka Frobenius?)
>> >
>> >A kind of TOP-RMS with pure octaves. Probably not biased towards
>> >intervals between primes but I don't know how it works out. Frobenius
>> >is an old word for a least squares optimization, probably unweighted.
>>
>> Oh? I thought the interesting thing was the val (for rank 1) is
>> also the eigenmonzo, or something. Is that a property of least-
>> squares error (unweighted I assume, or it'd be TOP-RMS)?
>
>I think that's only the unweighted least squares. The eigenvalues may
>have interesting values for TOP-RMS but probably don't represent
>rational intervals.

Ok.

>> >> any others?
>> >
>> >Pseudoinverse is a synonym for optimal RMS, and it's up to Gene whether
>> >or not it implies weighting.
>>
>> First of all, I thought Pseudoinverse = POCPIT = Frobenius.
>>
>> Second of all, I thought you two *just* figured out that TOP-RMS
>> is *not* any of these.
>
>The pseudoinverse is a matrix you can use to solve this class of least
>squares problems. So any optimal RMS could be called a pseudoinverse
>tuning, or not.

I gather from Gene's answer this is just a method, that could
presumably be applied to weighted errors as well.

>> >My preference for TOP-RMS with pure octaves is the standard deviation
>> >tuning. This doesn't have a special name.
>>
>> Kees-RMS?
>
>Maybe it is, but I can't prove it's anything to do with the Kees
>metric. So for now it's imperfectly called STD-error.

Ah.

>> >There's TOP-RMS with the scale restretched so the octaves are pure.
>>
>> That's what I'd do. :)
>
>It's extremely close to STD-error when the error's low. The STD-error
>is a kind of second order approximation.

Like I said... :)

>> >You could also do a weighted mean absolute deviation but nobody talks
>> >about this. Probably rightly so.
>>
>> It's one of the things that can be done for ETs with my latest Scheme
>> project. Looking at the ET rankings that came out, RMS and Tenney
>> weighting with logflat complexity weighting seemed best. But I'm
>> still not entirely happy with rankings, and it seems like the
>> complexity weighting is at fault.
>
>A MAD would put more emphasis on small errors. But these naive,
>weighted-primes error measures don't imply small errors correctly. So
>TOP-max still works perfectly, and TOP-RMS isn't far off an RMS of
>some more useful intervals, but weighted MAD I'd guess would be less
>so. It's also much easier to work with the RMS because it's
>continuously differentiable and I think there is a subjective argument
>for it.
>
>Are you optimizing the octaves?

I'm not coming up with tunings at all, er... well, yes I am,
they're ETs, so no, I'm not optimizing the octaves. So I suppose
I'm getting wonky errors like you warn about in your paper.
Except I didn't believe that bit.

On a slightly different note, do you have anything showing
how average errors-of-primes is bounded vs. average errors-of-
the-tonality-diamond? Something that didn't depend on the
choice of weighting schemes would be awesome...

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/11/2007 11:55:46 PM

On 12/03/07, Carl Lumma <ekin@lumma.org> wrote:
> >> >> NOT
> >> >
> >> >Another kind of TOP with pure octave.
> >>
> >> Yes. I wish somebody would outline the differences between
> >> these approaches.
> >
> >NOT is a No Octave Tempering version of TOP. All you do is ignore
> >octaves and optimize for odd primes only.
>
> I knew that. But I wasn't clear on if/where the results differed
> from the likes of: whatever Kees-metric things have been proposed,
> stretched TOP, stretched TOP-RMS, etc.

The standard example is 19 vs 22 in the 5-limit (and I think it
carries through into the 7-limit). For the actual size of the errors,
22 comes off a lot better. But 19 has a good 6:5, so I think we agree
that both should be rated about the same.

If you look at a list of optimal tunings, the more the octaves deviate
from just, the more different the NOT error would be.

> >> >> any others?
> >> >
> >> >Pseudoinverse is a synonym for optimal RMS, and it's up to Gene whether
> >> >or not it implies weighting.
> >>
> >> First of all, I thought Pseudoinverse = POCPIT = Frobenius.
> >>
> >> Second of all, I thought you two *just* figured out that TOP-RMS
> >> is *not* any of these.
> >
> >The pseudoinverse is a matrix you can use to solve this class of least
> >squares problems. So any optimal RMS could be called a pseudoinverse
> >tuning, or not.
>
> I gather from Gene's answer this is just a method, that could
> presumably be applied to weighted errors aswell.

Yes.

> >> >You could also do a weighted mean absolute deviation but nobody talks
> >> >about this. Probably rightly so.
> >>
> >> It's one of the things that can be done for ETs with my latest Scheme
> >> project. Looking at the ET rankings that came out, RMS and Tenney
> >> weighting with logflat complexity weighting seemed best. But I'm
> >> still not entirely happy with rankings, and it seems like the
> >> complexity weighting is at fault.
> >
> >A MAD would put more emphasis on small errors. But these naive,
> >weighted-primes error measures don't imply small errors correctly. So
> >TOP-max still works perfectly, and TOP-RMS isn't far off an RMS of
> >some more useful intervals, but weighted MAD I'd guess would be less
> >so. It's also much easier to work with the RMS because it's
> >continuously differentiable and I think there is a subjective argument
> >for it.
> >
> >Are you optimizing the octaves?
>
> I'm not coming up with tunings at all, er... well, yes I am,
> they're ETs, so no, I'm not optimizing the octaves. So I suppose
> I'm getting wonky errors like you warn about in your paper.
> Except I didn't believe that bit.

You're getting 22 but not 19. That may be valid because of the
11-limit, or it may be an artefact of not stretching the octaves. Try
using a stretched 19 for comparison.

> On a slightly different note, do you have anything showing
> how average errors-of-primes is bounded vs. average errors-of-
> the-tonality-diamond? Something that didn't depend on the
> choice of weighting schemes would be awesome...

No. I could do a table comparing unweighted odd limit to weighted
primes errors. There should be a vague correlation. I've tried to
work out the limit to infinity of a suitably weighted odd-limit
calculation, but I can't find a way of doing the weighting that gives
a stable answer.

There are certain ways of setting the weights that get you close to
odd limits. Up to the 7-limit, all odd primes are weighted equally.
For the 9-limit, you weight 3 double (implying a buoyancy of 0.5). At
the 15-limit, you need to introduce 15 as an independent harmonic with
a weight of 1. That screws up the averages but gives you an idea of
the true odd limit error. I can work this out in more detail but for
now I'm concentrating on getting the Tenney weighting clear and
correct.

Graham

🔗Carl Lumma <ekin@lumma.org>

3/12/2007 12:07:23 AM

>> >> >> NOT
>> >> >
>> >> >Another kind of TOP with pure octave.
>> >>
>> >> Yes. I wish somebody would outline the differences between
>> >> these approaches.
>> >
>> >NOT is a No Octave Tempering version of TOP. All you do is ignore
>> >octaves and optimize for odd primes only.
>>
>> I knew that. But I wasn't clear on if/where the results differed
>> from the likes of: whatever Kees-metric things have been proposed,
>> stretched TOP, stretched TOP-RMS, etc.
>
>The standard example is 19 vs 22 in the 5-limit (and I think it
>carries through into the 7-limit). For the actual size of the errors,
>22 comes off a lot better. But 19 has a good 6:5, so I think we agree
>that both should be rated about the same.
>If you look at a list of optimal tunings, the more the octaves deviate
>from just, the more different the NOT error would be.

You mean, the greater the octave error in TOP, the greater the
difference between NOT and... ?

I'm not sure why the relative errors of 19 and 22 would change
depending on whether octaves are tempered.

>> >> >You could also do a weighted mean absolute deviation but nobody talks
>> >> >about this. Probably rightly so.
>> >>
>> >> It's one of the things that can be done for ETs with my latest Scheme
>> >> project. Looking at the ET rankings that came out, RMS and Tenney
>> >> weighting with logflat complexity weighting seemed best. But I'm
>> >> still not entirely happy with rankings, and it seems like the
>> >> complexity weighting is at fault.
>> >
>> >A MAD would put more emphasis on small errors. But these naive,
>> >weighted-primes error measures don't imply small errors correctly. So
>> >TOP-max still works perfectly, and TOP-RMS isn't far off an RMS of
>> >some more useful intervals, but weighted MAD I'd guess would be less
>> >so. It's also much easier to work with the RMS because it's
>> >continuously differentiable and I think there is a subjective argument
>> >for it.
>> >
>> >Are you optimizing the octaves?
>>
>> I'm not coming up with tunings at all, er... well, yes I am,
>> they're ETs, so no, I'm not optimizing the octaves. So I suppose
>> I'm getting wonky errors like you warn about in your paper.
>> Except I didn't believe that bit.
>
>You're getting 22 but not 19. That may be valid because of the
>11-limit, or it may be an artefact of not stretching the octaves. Try
>using a stretched 19 for comparison.

Hm, I suppose I could update my code to use TOP-tuned patent vals.

>> On a slightly different note, do you have anything showing
>> how average errors-of-primes is bounded vs. average errors-of-
>> the-tonality-diamond? Something that didn't depend on the
>> choice of weighting schemes would be awesome...
>
>No. I could do a table comparing unweighted odd limit to weighted
>primes errors. There should be a vague correlation. I've tried to
>work out the limit to infinity of a suitably weighted odd-limit
>calculation, but I can't find a way of doing the weighting that gives
>a stable answer.
>
>There are certain ways of setting the weights that get you close to
>odd limits. Up to the 7-limit, all odd primes are weighted equally.
>For the 9-limit, you weight 3 double (implying a buoyancy of 0.5). At
>the 15-limit, you need to introduce 15 as an independent harmonic with
>a weight of 1. That screws up the averages but gives you an idea of
>the true odd limit error. I can work this out in more detail but for
>now I'm concentrating on getting the Tenney weighting clear and
>correct.

Hm, I thought I remember you saying somewhere how looking at primes
can't be far off looking at all intervals.

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/12/2007 12:24:10 AM

On 12/03/07, Carl Lumma <ekin@lumma.org> wrote:
> >> >> >> NOT
> >> >> >
> >> >> >Another kind of TOP with pure octave.
> >> >>
> >> >> Yes. I wish somebody would outline the differences between
> >> >> these approaches.
> >> >
> >> >NOT is a No Octave Tempering version of TOP. All you do is ignore
> >> >octaves and optimize for odd primes only.
> >>
> >> I knew that. But I wasn't clear on if/where the results differed
> >> from the likes of: whatever Kees-metric things have been proposed,
> >> stretched TOP, stretched TOP-RMS, etc.
> >
> >The standard example is 19 vs 22 in the 5-limit (and I think it
> >carries through into the 7-limit). For the actual size of the errors,
> >22 comes off a lot better. But 19 has a good 6:5, so I think we agree
> >that both should be rated about the same.
> >If you look at a list of optimal tunings, the more the octaves deviate
> >from just, the more different the NOT error would be.
>
> You mean, the greater the octave error in TOP, the greater the
> difference between NOT and... ?

Yes. Between NOT and TOP error or NOT and Kees error or tuning.

> I'm not sure why the relative errors of 19 and 22 would change
> depending on whether octaves are tempered.

Only in the 5 limit by the looks of it. 19 has errors in 3 and 5
almost (unweighted) equally flat so that 6:5 is almost perfect. The
average odd-limit error will be about two thirds the average prime
error. 22 has a sharp 3 and a flat 5 so 6:5 is worse than either and
the average odd-limit error will be higher than the average prime
error. When you stretch the octaves, 19 will benefit more, with the
errors in 3 and 5 both getting smaller and 2 taking up the slack.
This more fairly represents the odd-limit error.
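
For reference, the raw (unweighted) cent errors behind that
comparison, from nearest-step approximations with pure octaves:

import math

def cents(ratio):
    return 1200 * math.log2(ratio)

for edo in (19, 22):
    step = 1200.0 / edo
    for label, ratio in (("3/2", 3/2), ("5/4", 5/4), ("6/5", 6/5)):
        just = cents(ratio)
        tempered = round(just / step) * step
        print(edo, label, round(tempered - just, 2))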

Over-weighting to converge for arbitrarily high prime limits will
probably give results biased towards the 5-limit.

> >> >Are you optimizing the octaves?
> >>
> >> I'm not coming up with tunings at all, er... well, yes I am,
> >> they're ETs, so no, I'm not optimizing the octaves. So I suppose
> >> I'm getting wonky errors like you warn about in your paper.
> >> Except I didn't believe that bit.
> >
> >You're getting 22 but not 19. That may be valid because of the
> >11-limit, or it may be an artefact of not stretching the octaves. Try
> >using a stretched 19 for comparison.
>
> Hm, I suppose I could update my code to use TOP-tuned patent vals.

Make sure you used the stretched octave to find the nearest prime
approximation. Otherwise you're still biased against high stretches.

> >> On a slightly different note, do you have anything showing
> >> how average errors-of-primes is bounded vs. average errors-of-
> >> the-tonality-diamond? Something that didn't depend on the
> >> choice of weighting schemes would be awesome...
> >
> >No. I could do a table comparing unweighted odd limit to weighted
> >primes errors. There should be a vague correlation. I've tried to
> >work out the limit to infinity of a suitably weighted odd-limit
> >calculation, but I can't find a way of doing the weighting that gives
> >a stable answer.
> >
> >There are certain ways of setting the weights that get you close to
> >odd limits. Up to the 7-limit, all odd primes are weighted equally.
> >For the 9-limit, you weight 3 double (implying a buoyancy of 0.5). At
> >the 15-limit, you need to introduce 15 as an independent harmonic with
> >a weight of 1. That screws up the averages but gives you an idea of
> >the true odd limit error. I can work this out in more detail but for
> >now I'm concentrating on getting the Tenney weighting clear and
> >correct.
>
> Hm, I thought I remember you saying somewhere how looking at primes
> can't be far off looking at all intervals.

Maybe, but it's still at the hand waving stage. And must surely
depend on the weighting: Tenney weighting approximates unweighted odd
limits.

Graham

🔗Carl Lumma <ekin@lumma.org>

3/12/2007 12:32:50 AM

>> >> >NOT is a No Octave Tempering version of TOP. All you do is ignore
>> >> >octaves and optimize for odd primes only.
>> >>
>> >> I knew that. But I wasn't clear on if/where the results differed
>> >> from the likes of: whatever Kees-metric things have been proposed,
>> >> stretched TOP, stretched TOP-RMS, etc.
>> >
>> >The standard example is 19 vs 22 in the 5-limit (and I think it
>> >carries through into the 7-limit). For the actual size of the errors,
>> >22 comes off a lot better. But 19 has a good 6:5, so I think we agree
>> >that both should be rated about the same.
>> >If you look at a list of optimal tunings, the more the octaves deviate
>> >from just, the more different the NOT error would be.
>>
>> You mean, the greater the octave error in TOP, the greater the
>> difference between NOT and... ?
>
>Yes. Between NOT and TOP error

That's obvious.

>or NOT and Kees error or tuning.

...This is less obvious. I mean, a Kees metric is one that
ignores factors of 2, and if you take TOP and ignore 2, the
naive guess would be that... Yet I remember someone, maybe you
or Paul, showing they were different. I remember Paul going
on about cases when the Kees tuning has tempered octaves when
the TOP octaves are pure and vice versa, whereas the NOT
octaves are always pure.

>> I'm not sure why the relative errors of 19 and 22 would change
>> depending on whether octaves are tempered.
>
>Only in the 5 limit by the looks of it. 19 has errors in 3 and 5
>almost (unweighted) equally flat so that 6:5 is almost perfect. The
>average odd-limit error will be about two thirds the average prime
>error. 22 has a sharp 3 and a flat 5 so 6:5 is worse than either and
>the average odd-limit error will be higher than the average prime
>error. When you stretch the octaves, 19 will benefit more, with the
>errors in 3 and 5 both getting smaller and 2 taking up the slack.
>This more fairly represents the odd-limit error.

I see.

>Over-weighting to converge for arbitrarily high prime limits will
>probably give results biased towards the 5-limit.

Yes.

>> >> >Are you optimizing the octaves?
>> >>
>> >> I'm not coming up with tunings at all, er... well, yes I am,
>> >> they're ETs, so no, I'm not optimizing the octaves. So I suppose
>> >> I'm getting wonky errors like you warn about in your paper.
>> >> Except I didn't believe that bit.
>> >
>> >You're getting 22 but not 19. That may be valid because of the
>> >11-limit, or it may be an artefact of not stretching the octaves. Try
>> >using a stretched 19 for comparison.
>>
>> Hm, I suppose I could update my code to use TOP-tuned patent vals.
>
>Make sure you used the stretched octave to find the nearest prime
>approximation. Otherwise you're still biased against high stretches.

Hm, I first find the patent val and then tune it. I'm not sure
how that could go wrong.

>> Hm, I thought I remember you saying somewhere how looking at primes
>> can't be far off looking at all intervals.
>
>Maybe, but it's still at the hand waving stage. And must surely
>depend on the weighting: Tenney weighting approximates unweighted odd
>limits.

Aha! Well that sounds promising. Take what you can get, I
sometimes (though rarely) say. :)

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/12/2007 1:42:15 PM

--- In tuning-math@yahoogroups.com, "Graham Breed" <gbreed@...> wrote:
> > Kees-RMS?
>
> Maybe it is, but I can't prove it's anything to do with the Kees
> metric. So for now it's imperfectly called STD-error.

What's the exact definition?

🔗Graham Breed <gbreed@gmail.com>

3/12/2007 11:46:44 PM

Gene Ward Smith wrote:
> --- In tuning-math@yahoogroups.com, "Graham Breed" <gbreed@...> wrote:
>
>>>Kees-RMS?
>>
>>Maybe it is, but I can't prove it's anything to do with the Kees
>>metric. So for now it's imperfectly called STD-error.
>
> What's the exact definition?

The standard deviation of the Tenney-weighted errors of the prime intervals of a regular temperament with the scale stretch fixed such that the equivalence interval is perfect.
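
In symbols, with my own notation and on my reading of it (the error
of 2 is included, but it's zero once the stretch is fixed):

% t_p is the tempered size of prime p in octaves, after the stretch
% is fixed so that t_2 = 1; n is the number of primes in the limit.
\[
  e_p = \frac{t_p - \log_2 p}{\log_2 p}, \qquad
  \bar{e} = \frac{1}{n}\sum_{p} e_p, \qquad
  \mathrm{STD} = \sqrt{\frac{1}{n}\sum_{p}\bigl(e_p - \bar{e}\bigr)^2}
\]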

Graham

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/13/2007 12:27:28 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> Gene Ward Smith wrote:
> > --- In tuning-math@yahoogroups.com, "Graham Breed" <gbreed@> wrote:
> >
> >>>Kees-RMS?
> >>
> >>Maybe it is, but I can't prove it's anything to do with the Kees
> >>metric. So for now it's imperfectly called STD-error.
> >
> >
> > What's the exact definition?
>
> The standard deviation of the Tenney-weighted errors of the prime
> intervals of a regular temperament with the scale stretch fixed such
> that the equivalence interval is perfect.

Why are you now calling this standard deviation rather than RMS?

🔗Graham Breed <gbreed@gmail.com>

3/13/2007 1:18:23 AM

Carl Lumma wrote:

>>>You mean, the greater the octave error in TOP, the greater the
>>>difference between NOT and... ?
>>
>>Yes. Between NOT and TOP error
>
> That's obvious.
>
>>or NOT and Kees error or tuning.
>
> ...This is less obvious. I mean, a Kees metric is one that
> ignores factors of 2, and if you take TOP and ignore 2, the
> naive guess would be that... Yet I remember someone, maybe you
> or Paul, showing they were different. I remember Paul going
> on about cases when the Kees tuning has tempered octaves when
> the TOP octaves are pure and vice versa, whereas the NOT
> octaves are always pure.

If you take TOP and remove octaves in a naive way, you get NOT. I try to show in the PDF that Kees weighting gives the same results as (half) the Tenney weighting for infinitely small intervals. In other cases the closest match is between (twice) the Kees weight and the Tenney weight of the smallest octave-equivalent. So bringing in octaves and treating them as equal citizens in terms of Tenney weighting gives the same kind of balance as Kees weighting.

You have to optimize the octaves for the TOP averaging to work because otherwise large intervals will be worse than small ones. Kees weighting only represents small intervals. If you optimize the stretch, the error is shared amongst large and small intervals, and the whole average is representative of any range of interval sizes. Small intervals are largely unaffected by the small scale stretches that are typical of good temperaments.

Kees weighting isn't defined for tempered octaves. An infinite number of octaves would have infinite error, yet have zero complexity. Also the simple formula for the Kees-max (and the STD that may be Kees related) can give zero error if you shrink the scale such that all intervals are unisons, all primes have a weighted error of 1, and so the deviation of errors is zero.
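
That degenerate case is easy to check numerically (the prime list
here is just an example):

import math, statistics

# Collapse the whole scale to a single pitch: every tempered prime is 0,
# so each weighted error is exactly -1 and the deviation between them is 0,
# even though nothing is remotely in tune.
primes = (2, 3, 5, 7)
errors = [(0.0 - math.log2(p)) / math.log2(p) for p in primes]
print(errors)                        # [-1.0, -1.0, -1.0, -1.0]
print(statistics.pstdev(errors))     # 0.0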

>>>I'm not sure why the relative errors of 19 and 22 would change
>>>depending on whether octaves are tempered.
>>
>>Only in the 5 limit by the looks of it. 19 has errors in 3 and 5
>>almost (unweighted) equally flat so that 6:5 is almost perfect. The
>>average odd-limit error will be about two thirds the average prime
>>error. 22 has a sharp 3 and a flat 5 so 6:5 is worse than either and
>>the average odd-limit error will be higher than the average prime
>>error. When you stretch the octaves, 19 will benefit more, with the
>>errors in 3 and 5 both getting smaller and 2 taking up the slack.
>>This more fairly represents the odd-limit error.
>
> I see.
>
>>Over-weighting to converge for arbitrarily high prime limits will
>>probably give results biased towards the 5-limit.
>
> Yes.
>
>>>>>>Are you optimizing the octaves?
>>>>>
>>>>>I'm not coming up with tunings at all, er... well, yes I am,
>>>>>they're ETs, so no, I'm not optimizing the octaves. So I suppose
>>>>>I'm getting wonky errors like you warn about in your paper.
>>>>>Except I didn't believe that bit.
>>>>
>>>>You're getting 22 but not 19. That may be valid because of the
>>>>11-limit, or it may be an artefact of not stretching the octaves. Try
>>>>using a stretched 19 for comparison.
>>>
>>>Hm, I suppose I could update my code to use TOP-tuned patent vals.
>>
>>Make sure you used the stretched octave to find the nearest prime
>>approximation. Otherwise you're still biased against high stretches.
>
> Hm, I first find the patent val and then tune it. I'm not sure
> how that could go wrong.

Because the errors are all randomly distributed around zero for pure octaves. If you take the nearest approximations and then stretch the scale, the weighted errors will be randomly distributed around the average scale stretch. So the average absolute error will be greater.

You should take the nearest approximation to stretched scale steps. Then the errors will be distributed around zero for the stretched scale.
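
Something like this, I'd imagine; 13-equal with a cent of stretch
either way is purely my own example of where the choice of mapping
can flip:

import math

def val_for(edo, octave_cents):
    """Nearest-step mapping of primes 2, 3, 5, 7 under a given octave size."""
    step = octave_cents / edo
    return [round(1200 * math.log2(p) / step) for p in (2, 3, 5, 7)]

print(val_for(13, 1201.0))   # stretched octave:  [13, 21, 30, 36]
print(val_for(13, 1199.0))   # compressed octave: the mapping of 7 flips to 37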

Graham

Glossary:

Tenney weight/metric: weighting an interval by the logarithm of the product of the numerator and denominator of its ratio, for conventional JI.

Kees weight/metric: like the Tenney metric, but taking only the larger of the numerator and denominator after removing factors of 2.
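
To pin those two entries down numerically (base-2 logs assumed, which
seems to be the convention in this thread):

import math

def tenney(n, d):
    """Tenney weight of the ratio n/d."""
    return math.log2(n * d)

def remove_twos(x):
    while x % 2 == 0:
        x //= 2
    return x

def kees(n, d):
    """Kees weight: larger of numerator and denominator, factors of 2 removed."""
    return math.log2(max(remove_twos(n), remove_twos(d)))

print(tenney(6, 5), kees(6, 5))   # 6:5 -> log2(30), log2(5)
print(tenney(3, 2), kees(3, 2))   # 3:2 -> log2(6),  log2(3)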

TOP: Tempered Octaves Please or Tenney Optimal Prime. Some octave-specific tuning of a temperament that's optimal for some average of the Tenney-weighted errors in the primes.

Patent val: represents an equal temperament where all prime intervals are rounded to the nearest whole number of scale steps. Requires the number of steps to an octave to be chosen in advance (must be an integer?)

Overweighting: a weighting scheme where small primes are favored even more than with Tenney weighting.

🔗Graham Breed <gbreed@gmail.com>

3/13/2007 1:20:45 AM

Gene Ward Smith wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
>>Gene Ward Smith wrote:
>>
>>>--- In tuning-math@yahoogroups.com, "Graham Breed" <gbreed@> wrote:
>>>
>>>
>>>>>Kees-RMS?
>>>>
>>>>Maybe it is, but I can't prove it's anything to do with the Kees
>>>>metric. So for now it's imperfectly called STD-error.
>>>
>>>
>>>What's the exact definition?
>>
>>The standard deviation of the Tenney-weighted errors of the prime
>>intervals of a regular temperament with the scale stretch fixed such
>>that the equivalence interval is perfect.
>
> Why are you now calling this standard deviation rather than RMS?

Because I know it's the standard deviation of something interesting, but I don't know it's the RMS of anything interesting.

Graham

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/13/2007 12:55:22 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> If you take TOP and remove octaves in a naive way, you get NOT.

If you take TOP-RMS and remove octaves in a naive way, we get your
minimal standard deviation tuning. Which is one reason I
think "standard deviation" should probably be left off the terminology,
the other being it's a variance measure but you aren't using it that
way.

🔗Graham Breed <gbreed@gmail.com>

3/13/2007 10:21:01 PM

Gene Ward Smith wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
>>If you take TOP and remove octaves in a naive way, you get NOT.
>
> If you take TOP-RMS and remove octaves in a naive way, we get your
> minimal standard deviation tuning. Which is one reason I
> think "standard deviation" should probably be left off the terminology,
> the other being it's a variance measure but you aren't using it that
> way.

I don't want to sound arrogant, but I think you're getting confused again. You gave your TOP-RMS with octaves removed before, and it didn't match my standard deviation. I gave a worked example of my calculation that you haven't responded to. If you've now found the right balance between naivete and correctness you haven't explained it.

In what way do you think I'm not using what? What else should I call it when it plainly looks like a standard deviation? I could use the variance instead (it only means not taking a square root) but it wouldn't match the other error measures as well. I'm quite happy with the standard deviation. I use the term exactly as the Oxford Concise Dictionary of Mathematics uses it, and AFAICT the way all the results of a define: search on Google intend it. There are 2 kinds of standard deviation, of course. I think the one I want is the population standard deviation. But if you want to use the sample standard deviation instead, that will still work because it compares the same way for temperaments of the same prime limit.
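
A quick check of that last point (made-up numbers): the two kinds of
standard deviation differ only by a constant factor of sqrt(n/(n-1)),
so for a fixed set of primes they rank temperaments identically.

import statistics

errors = [0.002, -0.001, 0.004]            # hypothetical weighted prime errors
print(statistics.pstdev(errors))           # population standard deviation
print(statistics.stdev(errors))            # sample standard deviation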

I take it that "standard deviation" is a common enough term that we can assume tuning-math readers will either be familiar with it or know how to look it up???

Graham