
TOP for equal tunings

🔗Kalle Aho <kalleaho@...>

4/27/2010 8:13:51 AM

Hi tuning dorks :),

could someone give me a concise formula for calculating Tenney optimal
equal tunings from mappings to primes, for I seem to have forgotten how
to do it.

I'm planning to do some "cheating" and try out some evil Sethares-style
spectrally mapped timbres with TOP-tuned partials.

thanks,
Kalle Aho

🔗Carl Lumma <carl@...>

4/27/2010 10:57:11 AM

--- In tuning@yahoogroups.com, "Kalle Aho" <kalleaho@...> wrote:
>
> Hi tuning dorks :),
> could someone give me a concise formula for calculating Tenney
> optimal equal tunings from mappings to primes, for I seem to
> have forgotten how to do it.
> I'm planning to do some "cheating" and try out some evil
> Sethares-style spectrally mapped timbres with TOP-tuned partials.

Hi Kalle,

1. For an ET your mapping will be a val.

2. Divide each entry in the val by the base-2 log of its
corresponding prime. This gives you a list of the octave
stretches (in notes/oct) that tune each prime in JI.

3. Find the mean of the largest and smallest of these
stretches. Divide each entry of the original val by
this average.

4. Multiply each entry by 1200 to get the tuning in cents.

5. Divide the first term by the first term of the original
val to get the step size.
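
In Python, these five steps might look like the following sketch (the
5-limit val <12, 19, 28] for 12-equal and the function name are only
illustrative):

from math import log2

def top_max_et(val, primes):
    # Step 2: divide each val entry by log2 of its prime, giving the
    # divisions of the octave (notes/oct) that would tune that prime justly.
    stretches = [v / log2(p) for v, p in zip(val, primes)]
    # Step 3: average the largest and smallest stretch, then divide the
    # original val by that average.
    avg = (max(stretches) + min(stretches)) / 2
    # Step 4: multiply by 1200 to get each prime's tuning in cents.
    tuning = [1200 * v / avg for v in val]
    # Step 5: the first term over the first term of the original val
    # is the step size in cents.
    step = tuning[0] / val[0]
    return tuning, step

tuning, step = top_max_et([12, 19, 28], [2, 3, 5])
print(tuning)  # tempered 2, 3, 5 in cents; the octave comes out slightly flat
print(step)    # one step of TOP 12-equal, roughly 99.8 cents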

-Carl

🔗genewardsmith <genewardsmith@...>

4/27/2010 11:34:36 AM

--- In tuning@yahoogroups.com, "Kalle Aho" <kalleaho@...> wrote:
>
> Hi tuning dorks :),
>
> could someone give me a concise formula for calculating Tenney optimal
> equal tunings from mappings to primes, for I seem to have forgotten how
> to do it.

My formulas are not very concise, involving comparing distances in "val space" to a "JI point" and sometimes using linear programming. What is it you wanted to tune?

🔗Graham Breed <gbreed@...>

4/27/2010 10:07:13 PM

On 27 April 2010 22:34, genewardsmith <genewardsmith@...> wrote:

> My formulas are not very concise, involving comparing distances in
> "val space" to a "JI point" and sometimes using linear programming.
> What is it you wanted to tune?

Yes, I wanted to talk to you about that. How do you feel about
applying a Euclidean metric to this space?

Graham

🔗genewardsmith <genewardsmith@...>

4/27/2010 10:15:49 PM

--- In tuning@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> On 27 April 2010 22:34, genewardsmith <genewardsmith@...> wrote:
>
> > My formulas are not very concise, involving comparing distances in
> > "val space" to a "JI point" and sometimes using linear programming.
> > What is it you wanted to tune?
>
> Yes, I wanted to talk to you about that. How do you feel about
> applying a Euclidean metric to this space?

Fine, but it won't be TOP tuning any more.

🔗Graham Breed <gbreed@...>

4/27/2010 10:22:47 PM

On 27 April 2010 19:13, Kalle Aho <kalleaho@...> wrote:
> Hi tuning dorks :),
>
> could someone give me a concise formula for calculating Tenney optimal
> equal tunings from mappings to primes, for I seem to have forgotten how
> to do it.

First you need what I call the weighted primes, or w. You get that by
multiplying each entry in the equal temperament mapping by the weight
you assign to it and dividing by the number of steps to the octave.
For TOP, that weight is one divided by the log to base 2 of the
relevant prime number. Carl explained all this.

For example, 12-equal in the 5-limit is

<12, 19, 28]

The weights are 1/log2(2), 1/log2(3) and 1/log2(5), or 1.0, 0.63...,
0.43... and there are 12 steps to the octave. You should get

<1.0, 0.99897210982147411, 1.0049119688379171]

So, this is w. You can then calculate the TOP-RMS stretch as:

stretch = mean(w) / mean(w**2)

That is, the mean of w divided by the mean squared of w.

Or, for the dowdy old TOP-max:

stretch = 2 / [max(w) + min(w)]

What Carl gave seems to be neither of these. You can also calculate
the error in the optimal temperament:

TOP-RMS = mean(w)/rms(w)

TOP-max = [max(w) - min(w)]/[max(w) + min(w)]
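
As a sketch, the whole calculation for the 12-equal, 5-limit example can
be checked in a few lines of Python (the code is only illustrative):

from math import log2

val, primes, n = [12, 19, 28], [2, 3, 5], 12
w = [v / (n * log2(p)) for v, p in zip(val, primes)]  # weighted primes

def mean(xs):
    return sum(xs) / len(xs)

stretch_rms = mean(w) / mean([x * x for x in w])  # TOP-RMS stretch
stretch_max = 2 / (max(w) + min(w))               # TOP-max stretch
err_max = (max(w) - min(w)) / (max(w) + min(w))   # TOP-max error

print(w)                   # ~[1.0, 0.99897..., 1.00491...]
print(1200 * stretch_rms)  # TOP-RMS octave, about 1198.4 cents
print(1200 * stretch_max)  # TOP-max octave, about 1197.7 cents
print(1200 * err_max)      # TOP-max error in cents/octave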

Graham

🔗Graham Breed <gbreed@...>

4/27/2010 10:25:25 PM

On 28 April 2010 09:15, genewardsmith <genewardsmith@...> wrote:
>
>
> --- In tuning@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>>
>> On 27 April 2010 22:34, genewardsmith <genewardsmith@...> wrote:
>>
>> > My formulas are not very concise, involving comparing distances in
>> > "val space" to a "JI point" and sometimes using linear programming.
>> > What is it you wanted to tune?
>>
>> Yes, I wanted to talk to you about that.  How do you feel about
>> applying a Euclidean metric to this space?
>
> Fine, but it won't be TOP tuning any more.

You'll get a Tenney-optimal prime tuning, which I've been calling
TOP-RMS for a number of years now. Paul Erlich hasn't objected, which
is to say I haven't heard anything from him at all. I don't see why
the nice acronym "TOP" should be reserved for the less useful case.

Graham

🔗Carl Lumma <carl@...>

4/28/2010 1:57:44 AM

--- In tuning@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> Or, for the dowdy old TOP-max:
>
> stretch = 2 / [max(w) + min(w)]
>
> What Carl gave seems to be neither of these.

Sure I did. The above is the inverse of the mean I
mentioned.

-Carl

🔗Kalle Aho <kalleaho@...>

4/28/2010 3:47:40 PM

Thanks, Carl!

--- In tuning@yahoogroups.com, "Carl Lumma" <carl@...> wrote:
>
> --- In tuning@yahoogroups.com, "Kalle Aho" <kalleaho@> wrote:
> >
> > Hi tuning dorks :),
> > could someone give me a concise formula for calculating Tenney
> > optimal equal tunings from mappings to primes, for I seem to
> > have forgotten how to do it.
> > I'm planning to do some "cheating" and try out some evil
> > Sethares-style spectrally mapped timbres with TOP-tuned partials.
>
> Hi Kalle,
>
> 1. For an ET your mapping will be a val.
>
> 2. Divide each entry in the val by the base-2 log of its
> corresponding prime. This gives you a list of the octave
> stretches (in notes/oct) that tune each prime in JI.
>
> 3. Find the mean of the largest and smallest of these
> stretches. Divide each entry of the original val by
> this average.
>
> 4. Multiply each entry by 1200 to get the tuning in cents.
>
> 5. Divide the first term by the first term of the original
> val to get the step size.
>
> -Carl
>

🔗Kalle Aho <kalleaho@...>

4/28/2010 3:57:21 PM

Hi Graham,

what is the idea behind TOP-RMS? Why don't you like plain TOP?

--- In tuning@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> You'll get a Tenney-optimal prime tuning, which I've been calling
> TOP-RMS for a number of years now. Paul Erlich hasn't objected, which
> is to say I haven't heard anything from him at all. I don't see why
> the nice acronym "TOP" should be reserved for the less useful case.

🔗genewardsmith <genewardsmith@...>

4/28/2010 4:45:24 PM

--- In tuning@yahoogroups.com, "Kalle Aho" <kalleaho@...> wrote:
>
> Hi Graham,
>
> what is the idea behind TOP-RMS? Why don't you like plain TOP?

I'm not trying to answer for Graham, but just to put some mathematical perspective on the question. The "four Ms" of measures for "central tendency" (generalized averages) in statistics are mean, median, mode, and midrange. Of these, mode is not relevant in this case. The other three are associated to metrics. Normalizing by the midrange, which is what TOP does to a val, is associated with L-infinity norm and minimizes the maximum error. Normalizing by the mean is associated to the Euclidean norm and minimizes the sum of squared error. Normalizing by the median minimizes the sum of absolute error. Which you want to use depends on how much weight you want to give to outliers.

🔗Carl Lumma <carl@...>

4/28/2010 5:19:03 PM

--- In tuning@yahoogroups.com, "genewardsmith" <genewardsmith@...> wrote:

> > what is the idea behind TOP-RMS? Why don't you like plain TOP?
>
> I'm not trying to answer for Graham, but just to put some
> mathematical perspective on the question. The "four Ms" of
> measures for "central tendency" (generalized averages) in
> statistics are mean, median, mode, and midrange. Of these,
> mode is not relevant in this case. The other three are
> associated to metrics. Normalizing by the midrange, which is
> what TOP does to a val, is associated with L-infinity norm
> and minimizes the maximum error. Normalizing by the mean is
> associated to the Euclidean norm and minimizes the sum of
> squared error. Normalizing by the median minimizes the sum
> of absolute error. Which you want to use depends on how much
> weight you want to give to outliers.

Thanks, Gene, that's very clear. I'd like to bring this back
to Graham's post

/tuning/topicId_88292.html#88374

Where w is a val, he said the TOP RMS stretch was given by

stretch = mean(w) / mean(w**2)

but translated this into English as "...divided by the mean
squared of w" which left me confused. How to actually
implement this?

I suppose this is more a question for Graham at this point,
but I also don't know what this

TOP-RMS = mean(w)/rms(w)

is the error of in the resulting tuning. It can't be the
weighted rms error of any collection of intervals, can it?

-Carl

🔗Kalle Aho <kalleaho@...>

4/29/2010 7:02:44 AM

--- In tuning@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> First you need what I call the weighted primes, or w. You get that
> by multiplying each entry in the equal temperament mapping by the
> weight you assign to it and dividing by the number of steps to the
> octave. For TOP, that weight is one divided by the log to base 2 of
> the relevant prime number. Carl explained all this.

But it seems to me that for calculating the TOP tuning you don't
necessarily need base 2 logarithms. For example, you could divide
each entry in the val by the natural logarithm of the corresponding
prime and then the equal tempered step (in ratio form) would be

e^(2/(min+max)).
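
A quick numeric check (illustrative Python, again using the 5-limit val
<12, 19, 28]) shows this gives the same step as the base-2 route:

from math import e, log, log2

val, primes = [12, 19, 28], [2, 3, 5]

s_nat = [v / log(p) for v, p in zip(val, primes)]  # steps per natural-log unit
step_ratio = e ** (2 / (min(s_nat) + max(s_nat)))  # tempered step as a ratio
print(1200 * log2(step_ratio))                     # about 99.8 cents

s_oct = [v / log2(p) for v, p in zip(val, primes)] # steps per octave
print(1200 * 2 / (min(s_oct) + max(s_oct)))        # the same step, base-2 route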

Kalle Aho

🔗Kalle Aho <kalleaho@...>

4/29/2010 7:08:07 AM

--- In tuning@yahoogroups.com, "genewardsmith" <genewardsmith@...> wrote:
>
>
>
> --- In tuning@yahoogroups.com, "Kalle Aho" <kalleaho@> wrote:
> >
> > Hi Graham,
> >
> > what is the idea behind TOP-RMS? Why don't you like plain TOP?
>
> I'm not trying to answer for Graham, but just to put some
> mathematical perspective on the question. The "four Ms" of measures
> for "central tendency" (generalized averages) in statistics are mean,
> median, mode, and midrange. Of these, mode is not relevant in this
> case. The other three are associated to metrics. Normalizing by the
> midrange, which is what TOP does to a val, is associated with L-
> infinity norm and minimizes the maximum error. Normalizing by the
> mean is associated to the Euclidean norm and minimizes the sum of
> squared error. Normalizing by the median minimizes the sum of
> absolute error. Which you want to use depends on how much weight you
> want to give to outliers.
>

Okay, I understand that in the minimax TOP the composite ratios have
the same max Tenney-weighted error as the TOP tuned primes but I
don't understand how the primes and the composite ratios are related
in the other cases.

Kalle Aho

🔗Graham Breed <gbreed@...>

4/29/2010 9:32:01 AM

On 29 April 2010 02:57, Kalle Aho <kalleaho@...> wrote:
> Hi Graham,
>
> what is the idea behind TOP-RMS? Why don't you like plain TOP?

Plain TOP, or TOP-max as I call it, is harder to optimize outside of
rank 1 or n-1 cases. It's a linear program whereas TOP-RMS is a
linear least squares problem.

TOP-RMS also corresponds to a Euclidean "val space" or what I think
should be called a val lattice in tuning space. So I have neat
algebra built around it. TOP-max corresponds to a taxi cab metric.
We used to think that was a good idea but I don't any more. I can't
think of any fundamental reason why Euclidean is wrong for music. For
higher rank temperaments, you need to define areas or volumes and so
on, and I never learned how to do that in a geometry without angles.
Anybody who wants to enlighten me on the tuning-math list can do so.

There are also psychoacoustic arguments. What I've concluded is that
we should be minimizing mistuning badness. If you assume that badness
goes up as the square of mistuning, you don't need a detailed model.
The residual badness of each interval, which has something to do with
its discordance, cancels out when you do the mean. TOP-max, though,
is about the worst badness of any interval, with the badness added by
mistuning being linear. That can work, but you do need to specify the
residual badness. TOP doesn't. It assumes everything's zero and I
don't like that. Of course, TOP-max works well enough by
approximating TOP-RMS.

Graham

🔗Graham Breed <gbreed@...>

4/29/2010 9:37:11 AM

On 29 April 2010 18:02, Kalle Aho <kalleaho@...> wrote:

> But it seems to me that for calculating the TOP tuning you don't
> necessarily need base 2 logarithms. For example, you could divide
> each entry in the val by the natural logarithm of the corresponding
> prime and then the equal tempered step (in ratio form) would be
>
> e^(2/(min+max)).

You can use any logarithms as long as you normalize properly. The
first entry in w should be approximately 1 for the stretch to be
correct. For the TOP errors even that doesn't matter, because the
factors of w on the top and bottom are balanced. Hence the error is
dimensionless and you multiply by 1200 to get cents/octave.

Graham

🔗Graham Breed <gbreed@...>

4/29/2010 9:43:14 AM

On 29 April 2010 04:19, Carl Lumma <carl@...> wrote:

> Thanks, Gene, that's very clear.  I'd like to bring this back
> to Graham's post
>
> /tuning/topicId_88292.html#88374
>
> Where w is a val, he said the TOP RMS stretch was given by

No, w is a weighted tuning map.

> stretch = mean(w) / mean(w**2)
>
> but translated this into English as "...divided by the mean
> squared of w" which left me confused.  How to actually
> implement this?

How is it difficult? Scientific calculators have a sum-squared button.

> I suppose this is more a question for Graham at this point,
> but I also don't know what this
>
> TOP-RMS = mean(w)/rms(w)
>
> is the error of in the resulting tuning.  It can't be the
> weighted rms error of any collection of intervals, can it?

I talked about this in http://x31eq.com/composite.pdf remember? A
prime weighting corresponds to treating pairs like 5/3 and 15/1
equally. The Tenney prime weighting is an extrapolation to infinity
of a family of weights that treat equally complex intervals equally.
You can use other matrices if you like and keep most of the geometric
properties of TOP-RMS. If you keep octaves pure it makes sense to use
asymmetric weights so that 5/3 is more important than 15/1. That's why
I have other measures for the octave-equivalent case.

Graham

🔗genewardsmith <genewardsmith@...>

4/29/2010 10:03:35 AM

--- In tuning@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> TOP-RMS also corresponds to a Euclidean "val space" or what I think
> should be called a val lattice in tuning space. So I have neat
> algebra built around it. TOP-max corresponds to a taxi cab metric.
> We used to think that was a good idea but I don't any more. I can't
> think of any fundamental reason why Euclidean is wrong for music.

Paul was the one who liked taxicabs, and his observation that the weighted max error in TOP was constant as you went to more complex intervals was impressive. I did remark from time to time that Euclidean metrics held advantages, but at the time no one seemed to care. Until the TOP business came along with its plausible suggestion it wasn't clear how to define the Euclidean metric anyway.

🔗Carl Lumma <carl@...>

4/29/2010 10:37:51 AM

--- In tuning@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> > Thanks, Gene, that's very clear.  I'd like to bring this back
> > to Graham's post
> > /tuning/topicId_88292.html#88374
> > Where w is a val, he said the TOP RMS stretch was given by
>
> No, w is a weighted tuning map.

Ah, right.

> > stretch = mean(w) / mean(w**2)
> >
> > but translated this into English as "...divided by the mean
> > squared of w" which left me confused.  How to actually
> > implement this?
>
> How is it difficult? Scientific calculators have a sum-
> squared button.

It's not difficult now that I know what it is.

> > I suppose this is more a question for Graham at this point,
> > but I also don't know what this
> >
> > TOP-RMS = mean(w)/rms(w)
> >
> > is the error of in the resulting tuning.  It can't be the
> > weighted rms error of any collection of intervals, can it?
>
> I talked about this in http://x31eq.com/composite.pdf remember?
> A prime weighting corresponds to treating pairs like 5/3 and 15/1
> equally. The Tenney prime weighting is an extrapolation to
> infinity of a family of weights that treat equally complex
> intervals equally. You can use other matrices if you like and
> keep most of the geometric properties of TOP-RMS. If you keep
> octaves pure it makes sense to use asymmetric weights so that
> 5/3 is more important than 15/1. That's why I have other measures
> for the octave-equivalent case.

I notice you didn't answer the question. :P
Is there any section in particular of this 24-page paper
you think I should look at?

-Carl

🔗Graham Breed <gbreed@...>

4/30/2010 6:43:42 AM

On 29 April 2010 21:37, Carl Lumma <carl@...> wrote:

> I notice you didn't answer the question. :P
> Is there any section in particular of this 24-page paper
> you think I should look at?

"The Tenney prime weighting is an extrapolation to infinity of a
family of weights that treat equally complex intervals equally." I
thought that was the answer to your question.

Graham

🔗Graham Breed <gbreed@...>

4/30/2010 10:05:22 AM

On 29 April 2010 19:37, Carl Lumma <carl@...> wrote:
> --- In tuning@yahoogroups.com, Graham Breed <gbreed@...> wrote:

>> > I suppose this is more a question for Graham at this point,
>> > but I also don't know what this
>> >
>> > TOP-RMS = mean(w)/rms(w)
>> >
>> > is the error of in the resulting tuning.  It can't be the
>> > weighted rms error of any collection of intervals, can it?

Oh, that's wrong. Maybe that was the question. It should be

TOP-RMS = sqrt(1 - mean(w)**2/mean(w**2))

That is, the square root of one minus the square of the mean divided by
the mean of the squares. Oh the joys of English! Alternatively, the
standard deviation of w divided by the RMS of w.

I made one simplification too many. Sorry.
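
A small numeric check of that identity (illustrative Python, using the
12-equal w from earlier in the thread):

from math import sqrt
from statistics import mean, pstdev

w = [1.0, 0.99897210982147411, 1.0049119688379171]

rms = sqrt(mean(x * x for x in w))
print(pstdev(w) / rms)                                  # std(w)/rms(w)
print(sqrt(1 - mean(w) ** 2 / mean(x * x for x in w)))  # the same number
print(1200 * pstdev(w) / rms)                           # as cents per octave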

Graham

🔗Carl Lumma <carl@...>

4/30/2010 10:47:11 AM

--- In tuning@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> > I notice you didn't answer the question. :P
> > Is there any section in particular of this 24-page paper
> > you think I should look at?
>
> "The Tenney prime weighting is an extrapolation to infinity of a
> family of weights that treat equally complex intervals equally." I
> thought that was the answer to your question.

My question was, how does the TOP-RMS error you gave

TOP-RMS = mean(w)/rms(w)

relate to the weighted RMS error of arbitrary collections
of intervals? I believe Kalle asked the same question.
I remember you once answering it, probably offlist, but I
forget the answer.

-Carl

🔗genewardsmith <genewardsmith@...>

4/30/2010 1:11:26 PM

--- In tuning@yahoogroups.com, "Carl Lumma" <carl@...> wrote:
>
> --- In tuning@yahoogroups.com, Graham Breed <gbreed@> wrote:

> My question was, how does the TOP-RMS error you gave
>
> TOP-RMS = mean(w)/rms(w)
>
> relate to the weighted RMS error of arbitrary collections
> of intervals? I believe Kalle asked the same question.
> I remember you once answering it, probably offlist, but I
> forget the answer.
> -Carl

Another question is why don't you just take w and divide it by mean(w)?

🔗Graham Breed <gbreed@...>

4/30/2010 10:19:39 PM

On 30 April 2010 21:47, Carl Lumma <carl@...> wrote:

> My question was, how does the TOP-RMS error you gave
>
> TOP-RMS = mean(w)/rms(w)

That's the formula for a weighted unison vector. For a weighted val
or tuning map it should be

TOP-RMS = std(w)/rms(w)

I sent the correction yesterday, let's not lose it.

> relate to the weighted RMS error of arbitrary collections
> of intervals?  I believe Kalle asked the same question.
> I remember you once answering it, probably offlist, but I
> forget the answer.

For the correct formula, the answer is "yes". The weighting isn't
important but the way of collecting intervals is. So it isn't
completely arbitrary. Tenney limits as they become arbitrarily
complex will do it. I gave a link before and the section you want
will probably be called something like "Tenney Limits".

I originally answered the question in that PDF file that I linked to
from tuning-math when I wrote it. If I can remember the URL, along
with everything else:

http://x31eq.com/composite.pdf

Graham

🔗Graham Breed <gbreed@...>

4/30/2010 10:24:25 PM

On 1 May 2010 00:11, genewardsmith <genewardsmith@...> wrote:

> Another question is why don't you just take w and divide it by mean(w)?

Why would I want to? All I can think of is that 1/mean(w) might be a
reasonable scale stretch. Perhaps I wrote about it here:

http://x31eq.com/primerr.pdf

Graham

🔗genewardsmith <genewardsmith@...>

4/30/2010 10:37:33 PM

--- In tuning@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> On 1 May 2010 00:11, genewardsmith <genewardsmith@...> wrote:
>
> > Another question is why don't you just take w and divide it by mean(w)?
>
> Why would I want to?

Because the mean minimizes the squared error, just as the midrange minimizes the max error.

🔗Carl Lumma <carl@...>

4/30/2010 11:11:59 PM

--- In tuning@yahoogroups.com, "genewardsmith" <genewardsmith@...> wrote:
>
> > > Another question is why don't you just take w and divide it
> > > by mean(w)?
> >
> > Why would I want to?
>
> Because the mean minimizes the squared error, just as the midrange
> minimizes the max error.

So I think Graham's claiming that mean(w)/mean(w**2) gives the
*stretch* that minimizes the sum-squared error over a bag of
intervals bounded by Tenney height.

Graham's stretch is, I think, a scalar for w. So it's

w * mean(w) / mean(w**2)

You're claiming plain w/mean(w) minimizes sum-squared error
over... what bag of intervals, exactly?

-Carl

🔗genewardsmith <genewardsmith@...>

5/1/2010 2:07:06 AM

--- In tuning@yahoogroups.com, "Carl Lumma" <carl@...> wrote:

> You're claiming plain w/mean(w) minimizes sum-squared error
> over... what bag of intervals, exactly?

All I'm claiming is that it's just a general fact about the mean that it minimizes squared error. If you have n real numbers xi, and if M is the mean, then sum_i (xi - M)^2 is less for M than for any other number (if you don't believe me, take the derivative wrt M). Similarly, if M is the median, then sum_i |xi - M| is minimized. Finally, if M is the midrange, then max_i |xi - M| is minimized.
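
These three facts are easy to verify numerically; a throwaway Python
sketch with arbitrary sample data:

from statistics import mean, median

xs = [0.3, 1.1, 1.4, 2.0, 5.2]
midrange = (min(xs) + max(xs)) / 2

def sum_sq(M):
    return sum((x - M) ** 2 for x in xs)

def sum_abs(M):
    return sum(abs(x - M) for x in xs)

def max_abs(M):
    return max(abs(x - M) for x in xs)

# Scan a fine grid of candidate M; each minimizer lands (up to the grid
# spacing) on the mean, the median and the midrange respectively.
grid = [i / 1000 for i in range(6001)]
print(min(grid, key=sum_sq),  mean(xs))     # both 2.0
print(min(grid, key=sum_abs), median(xs))   # both 1.4
print(min(grid, key=max_abs), midrange)     # both 2.75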

🔗Carl Lumma <carl@...>

5/1/2010 1:42:40 PM

--- In tuning@yahoogroups.com, "genewardsmith" <genewardsmith@...> wrote:
> --- In tuning@yahoogroups.com, "Carl Lumma" <carl@> wrote:
>
> > You're claiming plain w/mean(w) minimizes sum-squared error
> > over... what bag of intervals, exactly?
>
> All I'm claiming is that it's just a general fact about the mean
> that it minimizes squared error. If you have n real numbers xi,
> and if M is the mean, then sum_i (xi - M)^2 is less for M than for
> any other number (if you don't believe me, take the derivative wrt
> to M). Similarly, is M is the median, then sum_i |xi - M| is
> minimized. Finally, if M is the midrange, then max_i |xi - M| is
> minimized.

I understood, and believe that. But we need to get to what I
wrote above at some point. -Carl

🔗genewardsmith <genewardsmith@...>

5/1/2010 2:12:59 PM

--- In tuning@yahoogroups.com, "Carl Lumma" <carl@...> wrote:

> I understood, and believe that. But we need to get to what I
> wrote above at some point. -Carl

Another thing I am claiming is that whatever the answer to your question is, "TOP mean" seems to give results quite close to what Graham gets, is very easy to compute for any rank of temperament, and minimizes a specific height function (exponents squared).

🔗Carl Lumma <carl@...>

5/1/2010 3:05:11 PM

Graham wrote:

> > relate to the weighted RMS error of arbitrary collections
> > of intervals?  I believe Kalle asked the same question.
> > I remember you once answering it, probably offlist, but I
> > forget the answer.
>
> For the correct formula, the answer is "yes". The weighting
> isn't important but the way of collecting intervals is. So it
> isn't completely arbitrary. Tenney limits as they become
> arbitrarily complex will do it. I gave a link before and the
> section you want will probably be called something like "Tenney
> Limits".
>
> I originally answered the question in that PDF file that I
> linked to from tuning-math when I wrote it. If I can remember
> the URL, along with everything else:
>
> http://x31eq.com/composite.pdf

It looks like you talk about the intersection of Tenney limits
and prime limits. I don't agree with your motivation for that
(19:1 should be about equally consonant to 5:4). You also talk
a lot about untempered-octave variants, which aren't interesting
to me. The right tuning optimization shouldn't make people ask
you for a version with untempered octaves.

The reason I always "forget" your answer is that you never loop
back and confirm I've understood. For example, is this right:

>So I think Graham's claiming that mean(w)/mean(w**2) gives
>the *stretch* that minimizes the sum-squared error over a bag
>of intervals bounded by Tenney height.

? If you do in fact prove this, it would be enough for me to
stop using TOP-max and start using TOP-RMS.

-Carl

🔗Graham Breed <gbreed@...>

5/1/2010 10:16:36 PM

On 2 May 2010 02:05, Carl Lumma <carl@...> wrote:

>> http://x31eq.com/composite.pdf
>
> It looks like you talk about the intersection of Tenney limits
> and prime limits.  I don't agree with your motivation for that
> (19:1 should be about equally consonant to 5:4).  You also talk
> a lot about untempered-octave variants, which aren't interesting
> to me.  The right tuning optimization shouldn't make people ask
> you for a version with untempered octaves.

Yes, it's intersections like that. It has to be like that. Otherwise
the weights shift for every new prime you add and it never converges.
You can still take the exact matrix for the exact intervals you want
to use.

Right, you shouldn't need to temper the octaves. You do for Tenney
limits because stupidly large intervals are considered on a par with
intervals you can actually hear. I think the Farey limits are more
reasonable. Using those matrices you can calculate the badness of any
tuning and optimize with the octaves fixed. I haven't looked at the
details but maybe it's as simple as removing the top row and left hand
column.

The same approach should work for odd limits. That's another case
where I haven't worked out the details.

My general solution for pure octaves is to optimize the standard
deviation instead of the RMS. The section on cross-weighted metrics
is leading towards that. But I didn't tie it up so that I can show
Farey limits lead to a cross-weighted metric which leads to a standard
deviation.

It also happens that my parametric badness is a kind of cross-weighted metric.

> The reason I always "forget" your answer is that you never loop
> back and confirm I've understood.  For example, is this right:

Oh, do I? I'm more likely to correct things that I think are wrong.

>>So I think Graham's claiming that mean(w)/mean(w**2) gives
>>the *stretch* that minimizes the sum-squared error over a bag
>>of intervals bounded by Tenney height.

Yes.

> ?  If you do in fact prove this, it would be enough for me to
> stop using TOP-max and start using TOP-RMS.

That's nice.

Graham

🔗Carl Lumma <carl@...>

5/1/2010 10:55:20 PM

--- In tuning@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> > It looks like you talk about the intersection of Tenney limits
> > and prime limits.  I don't agree with your motivation for that
> > (19:1 should be about equally consonant to 5:4).  You also talk
> > a lot about untempered-octave variants, which aren't interesting
> > to me.  The right tuning optimization shouldn't make people ask
> > you for a version with untempered octaves.
>
> Yes, it's intersections like that. It has to be like that.
> Otherwise the weights shift for every new prime you add and it
> never converges. You can still take the exact matrix for the
> exact intervals you want to use.

Sorry, you lost me here. I meant, I don't think there's
any problem with 19:1 = 5:4.

> Right, you shouldn't need to temper the octaves. You do for
> Tenney limits because stupidly large intervals are considered
> on a par with intervals you can actually hear.

No, I meant, octaves definitely should be tempered. At least,
they should not be protected from temperament. The correct
weighting is one that produces tunings people prefer to ones
where you left 2 out.  I don't know if log(p) does that or not --
some folks balked at the TOP tunings when they first debuted --
but for now it's close enough for me.

> > The reason I always "forget" your answer is that you never loop
> > back and confirm I've understood.  For example, is this right:
>
> Oh, do I? I'm more likely to correct things that I think
> are wrong.

I think psychologists have concluded that everyone is like that.
But on mailing lists I think we need to work against it. Sorry,
I didn't mean to say "never".

> >>So I think Graham's claiming that mean(w)/mean(w**2) gives
> >>the *stretch* that minimizes the sum-squared error over a bag
> >>of intervals bounded by Tenney height.
>
> Yes.

Well, there we go. I still don't see it in your paper, but
I generally trust you.

-Carl

🔗Graham Breed <gbreed@...>

5/1/2010 11:58:46 PM

On 2 May 2010 09:55, Carl Lumma <carl@...> wrote:

> Sorry, you lost me here.  I meant, I don't think there's
> any problem with 19:1 = 5:4.

Right, if you're actively considering the 19-limit that's fine. If
you want to consider arbitrarily complex intervals you have to enforce
a prime limit or favor the simple primes more than Tenney weighting
does. Favoring simple primes is theoretically fine if you aren't
interested in infinite complexity. But I don't have a formula for how
much you should favor them, other than plugging in the intervals you
want and turning the handle.

There's also the problem that, for example, 81/80 is comparable to
81*80. But 81/80 is clearly audible whereas 81*80 covers over 12
octaves and even good human hearing is only about 10 octaves (16 Hz to
16.4 kHz).

In real music, even ignoring aesthetic preferences, most intervals
will be small because they occur between notes in chords. That makes
them more important.

>> Right, you shouldn't need to temper the octaves.  You do for
>> Tenney limits because stupidly large intervals are considered
>> on a par with intervals you can actually hear.
>
> No, I meant, octaves definitely should be tempered.  At least,
> they should not be protected from temperament.  The correct
> weighting is one that produces tunings people prefer to ones
> where you left 2 out.  I don't know if log(p) that or not --
> some folks balked at the TOP tunings when they first debuted --
> but for now it's close enough for me.

What was the problem with my PDF, then?

>> > The reason I always "forget" your answer is that you never loop
>> > back and confirm I've understood.  For example, is this right:
>>
>> Oh, do I?  I'm more likely to correct things that I think
>> are wrong.
>
> I think psychologists have concluded that everyone is like that.
> But on mailing lists I think we need to work against it.  Sorry,
> I didn't mean to say "never".

I didn't take any offense at the "never", don't worry.

The danger now is that there's such a flurry of posts in different
threads that I might miss some replies.

I see we're still on the big list as well ...

>> >>So I think Graham's claiming that mean(w)/mean(w**2) gives
>> >>the *stretch* that minimizes the sum-squared error over a bag
>> >>of intervals bounded by Tenney height.
>>
>> Yes.
>
> Well, there we go.  I still don't see it in your paper, but
> I generally trust you.

I did calculations beyond what's in there, and it seemed to converge.
You'll need to find a mathematician for a formal proof. It's not at
all difficult to duplicate these metrics. You list your intervals in
vector form, put them in a matrix, and multiply by the transpose. Any
programming language with a linear algebra library can do it.
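
As a sketch of that recipe (the 5-limit interval list below is only an
illustration, not the matrix from the paper):

import numpy as np

# A few 5-limit intervals written as prime-exponent vectors for 2, 3, 5.
intervals = np.array([
    [-1,  1,  0],   # 3/2
    [-2,  0,  1],   # 5/4
    [ 1,  1, -1],   # 6/5
    [ 2, -1,  0],   # 4/3
    [ 0, -1,  1],   # 5/3
])

# Multiplying the matrix by its transpose (here V-transpose times V) gives
# a symmetric quadratic form on the space of prime tuning errors.
metric = intervals.T @ intervals
print(metric)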

Graham

🔗Carl Lumma <carl@...>

5/2/2010 12:39:52 AM

--- In tuning@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> > Sorry, you lost me here.  I meant, I don't think there's
> > any problem with 19:1 = 5:4.
>
> Right, if you're actively considering the 19-limit that's fine.

Oh right; obviously we're constrained to the prime limit of the
intervals we're mapping. So I withdraw that criticism.

> There's also the problem that, for example, 81/80 is comparable
> to 81*80. But 81/80 is clearly audible whereas 81*80 covers
> over 12 octaves and even good human hearing is only about 10
> octaves (16 Hz to 16.4 kHz).

81*80 is way beyond where Tenney height is claimed to work.
The highest 9-odd-limit interval is, what, 14*9 = 126. That
would be the upper limit of where I'd claim it worked.
128:1 is 7 octaves, which is audible.

> In real music, even ignoring aesthetic preferences, most
> intervals will be small because they occur between notes in
> chords. That makes them more important.

If you go to the orchestra you'll hear notes all over the place.
It's known that very large intervals are less important because
less of their spectra overlap. I suppose you could say that
Tenney weighting doesn't account for this. But we're regular-
mapping so we need prime limits (though I suppose we could
consider other fields that generate the rationals). So the best
thing to do is bend them to be psychoacoustically relevant.
I don't know of anything better than Tenney weighting for this.

> > No, I meant, octaves definitely should be tempered.  At least,
> > they should not be protected from temperament.  The correct
> > weighting is one that produces tunings people prefer to ones
> > where you left 2 out.  I don't know if log(p) does that or not --
> > some folks balked at the TOP tunings when they first debuted --
> > but for now it's close enough for me.
>
> What was the problem with my PDF, then?

I was just bellyaching that you spent so much ink on pure-octave
variants.

-Carl

🔗Graham Breed <gbreed@...>

5/2/2010 3:01:28 AM

On 2 May 2010 11:39, Carl Lumma <carl@...> wrote:

> Oh right; obviously we're constrained to the prime limit of the
> intervals we're mapping.  So I withdraw that criticism.

We need to choose some kind of limit, anyway.

>> There's also the problem that, for example, 81/80 is comparable
>> to 81*80.  But 81/80 is clearly audible whereas 81*80 covers
>> over 12 octaves and even good human hearing is only about 10
>> octaves (16 Hz to 16.4 kHz).
>
> 81*80 is way beyond where Tenney height is claimed to work.
> The highest 9-odd-limit interval is, what, 14*9 = 126.  That
> would be the upper limit of where I'd claim it worked.
> 128:1 is 7 octaves, which is audible.

You don't get anything very close to Tenney weighting until the
intervals get stupidly complex. Before that, lower primes are given
systematically higher weights. But that assumes equal weighting of
intervals within a Tenney limit is what you wanted. If you don't know
exactly what you want Tenney weighting's a pretty good bet.

>> In real music, even ignoring aesthetic preferences, most
>> intervals will be small because they occur between notes in
>> chords.  That makes them more important.
>
> If you go to the orchestra you'll hear notes all over the place.
> It's known that very large intervals are less important because
> less of their spectra overlap.  I suppose you could say that
> Tenney weighting doesn't account for this.  But we're regular-
> mapping so we need prime limits (though I suppose we could
> consider other fields that generate the rationals).  So the best
> thing to do is bend them to be psychoacoustically relevant.
> I don't know of anything better than Tenney weighting for this.

Notes all over the place means the majority of intervals will be
small. And, yes, the spectra overlap less. So I count 14:9 as much
more important than 126:1. The good thing about octave stretching is
that we can ignore this issue. With TOP-Max you get the same solution
for any set of intervals. With TOP-RMS you're optimizing the largest
intervals but the symmetry means it doesn't matter.

> I was just bellyaching that you spent so much ink on pure-octave
> variants.

That must be primerr.pdf because I didn't consider pure octaves with
the composite errors.

Graham