
Why pocpit tuning makes sense

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/22/2007 2:14:39 PM

This is easiest to see starting from a linear temperament. If we have a
temperament with a val v for mapping generator steps to primes, then
pocpit tuning makes 3^v[2] * 5^v[3] * ... * p^v[pi(p)] into an eigenmonzo.
This says to take more care with the tuning of complex primes than with
less complex primes. This is what you want to do, since fixing the less
complex primes too exactly is likely to throw everything out of whack.
For example, Pythagorean tuning is a terrible meantone tuning, but pure
fives or pure sevens is fine. A pure 3*5^4*7^10, which takes care not
to throw the 5s and especially the 7s out of whack, is also fine. Also,
the signs get the primes to pull, so to speak, in the right direction.
The pocpit tuning for 7-limit miracle fixes 3^6*5^(-7)*7^(-2). The net
effect is that 3s and 5s both get a good tuning. If we used instead
3^6*5^7*7^2, then 3 and 5 pull towards 3*5, so to speak--in other
words, the secor is moved closer to 16/15. This isn't what we wanted,
which was for them to balance.

The same kind of argument can now be extended to other temperaments, and
if we don't fix two, to argue that the Frobenius tuning likewise makes
sense. Graham's TOP-RMS is, as we've seen, similar, but it's weighted in
favor of smaller primes in its effect, which we may not want.

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/22/2007 2:30:00 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<genewardsmith@...> wrote:

> 7s out of whack, is also fine. Also,
> the signs get the primes to pull, so to speak, in the right direction.

Here's another way to see this which might be better. Take miracle
again. Now

cents(3/2)/6 = 116.9925
cents(8/5)/7 = 116.2409
cents(8/7)/2 = 115.5870

By fixing (3/2)^6 * (8/5)^7 * (8/7)^2, we are weighing the 3/2 and 8/5
values more heavily than the 8/7 value--which is what we want.

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/22/2007 2:41:32 PM

It may make sense but I guess it shouldn't be called POCPIT now. What
about POF tuning, for Pure Octaves Frobenius?

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/22/2007 4:57:23 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<genewardsmith@...> wrote:
>
> This is easiest to see starting from a linear temperament.

There's an even easier way to see why pof (pocpit) tuning makes
sense, which makes my comments worthy of the big duh of the week
award. Because of the least squares property of the pseudoinverse,
the pof tuning is just the unweighted rms tuning with odd primes as
the targets being optimized.

This, incidentally, allows us to cast rms with respect to the
tonality diamond or any other set of intervals into the same mold.
Simply take

sum generators(q) * monz(q)

over the set, where generators(q) is how many generators (+ or -) it
takes to get to q, and monz(q) is the monzo for q. Take this as an
eigenvector, and you get rms tuning.

Another way of looking at it fits it to my remarks on why it makes
sense to weight by generator steps. If we take 5-limit meantone, for
example, then not only do the generator tunings 3/2 and 5^(1/4) give a
pure consonance, but so does (10/3)^(1/3). So, now take (3/2)*5^4*(10/3)^3 =
312500/9 and use it for an eigenmonzo, and the result is the
7/26-comma Woolhouse tuning.

We can weight the smaller primes more heavily by making the exponents
all 1. So, for 3/2 and 5, take 15 as an eigenmonzo, and get
1/5-comma meantone as a result. Or, using 3/2, 5, and 10/3, take
(3/2)*5*(10/3) = 25 as an eigenmonzo, and get 1/4-comma meantone.

🔗Graham Breed <gbreed@gmail.com>

3/23/2007 3:49:38 AM

Gene Ward Smith wrote:
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith"
> <genewardsmith@...> wrote:
>
>> This is easiest to see starting from a linear temperament.
>
> There's an even easier way to see why pof (pocpit) tuning makes
> sense, which makes my comments worthy of the big duh of the week
> award. Because of the least squares property of the pseudoinverse,
> the pof tuning is just the unweighted rms tuning with odd primes as
> the targets being optimized.

Right, so it's an unweighted NOT then. And doesn't attempt to solve the problems with NOT that I addressed by using standard deviations. So pof/pocpit may make sense, but why should we care about it?

> This, incidentally, allows us to cast rms with respect to the
> tonality diamond or any other set of intervals into the same mold.
> Simply take

Why was that ever difficult?

> sum generators(q) * monz(q)
>
> over the set, where generators(q) is how many generators (+ or -) it
> takes to get to q, and monz(q) is the monzo for q. Take this as an
> eigenvector, and you get rms tuning.
>
> Another way of looking at it fits it to my remarks on why it makes
> sense to weight by generator steps. If we take 5-limit meantone, for
> example, then not only do the generator tunings 3/2 and 5^(1/4) give a
> pure consonance, but so does (10/3)^(1/3). So, now take
> (3/2)*5^4*(10/3)^3 = 312500/9 and use it for an eigenmonzo, and the
> result is the 7/26-comma Woolhouse tuning.

I think weighting by generator steps might be valid. But that's not what you're talking about. Any least squares optimization will pay most attention to the more complex intervals (given equivalent weight) because they change most rapidly. I have been known to define complexity as the gradient of (average) error as a function of the generator size. And this isn't even confined to least squares: any average should optimize in a similar way. Minimax hides this to an extent because the complex intervals may happen to be small enough not to matter near the optimal point.

Graham

🔗Graham Breed <gbreed@gmail.com>

3/23/2007 4:17:01 AM

Gene Ward Smith wrote:
> This is easiest to see starting from a linear temperament. If we have a
> temperament with a val v for mapping generator steps to primes, then
> pocpit tuning makes 3^v[2] * 5^v[3] * ... * p^v[pi(p)] into an eigenmonzo.
> This says to take more care with the tuning of complex primes than with
> less complex primes. This is what you want to do, since fixing the less
> complex primes too exactly is likely to throw everything out of whack.
> For example, Pythagorean tuning is a terrible meantone tuning, but pure
> fives or pure sevens is fine. A pure 3*5^4*7^10, which takes care not
> to throw the 5s and especially the 7s out of whack, is also fine. Also,
> the signs get the primes to pull, so to speak, in the right direction.
> The pocpit tuning for 7-limit miracle fixes 3^6*5^(-7)*7^(-2). The net
> effect is that 3s and 5s both get a good tuning. If we used instead
> 3^6*5^7*7^2, then 3 and 5 pull towards 3*5, so to speak--in other
> words, the secor is moved closer to 16/15. This isn't what we wanted,
> which was for them to balance.

This all seems to follow from it being a least squares optimum of something vaguely sensible, as you revealed in another post.

For the case of meantone, you will get a sensible tuning because equal prime weighting and averaging the magnitudes of odd primes pull in different directions. Equal prime weighting takes you towards 19-equal because fifths aren't so important. Magnitudes of odd primes take you away from 19 because 3 and 5 are both getting worse, and you miss the compensation of 5:3 getting better.

Miracle is pretty stable with respect to the optimization algorithm. All the primes become optimal at around the same place. That's why it works so well.

> The same kind of argument can now be extended to other temperaments, and
> if we don't fix two, to argue that the Frobenius tuning likewise makes
> sense. Graham's TOP-RMS is as we've seen similar, but it's weighted in
> favor of smaller primes in its effect, which we may not want.

We already knew that Frobenius was a least squares optimum. It will make a lot of sense if you think unweighted primes are the thing to optimize for. TOP is weighted in favor of smaller primes ... relative to equal weighting of primes. But equal weighting of primes favors large primes relative to Tenney weighting, where primes are considered on an equal footing with composites. You can take any weighting you want. Why do we care about equal weighting?

Really, there are three parts to pocpit/pof:

1) Least squares optimum

2) Equal weighted primes

3) Naively ignoring factors of 2

The first I won't argue with, of course.

Equal weighting is fine if it's really what you want. I didn't think anybody did want that, though, from musical arguments. It doesn't make much sense in theoretical terms because primes and composites are unbalanced. The only advantage I can see is that you can use integer libraries for a bit longer.

The last point I still think is wrong. I've argued before why something like a Kees metric or standard deviation of primes makes more sense. I haven't seen any counter-arguments. I thought this was agreed years ago, which is why we used to use odd limits, and then switched to TOP. Naive odd primes measures don't cut it. The Tenney-weighted prime STD is so easy to calculate that I don't even see a practical reason for using a prime RMS instead. I haven't worked it out for general weighting but I don't see that it should be a problem.

If you're looking at tunings of low-limit rank 2 temperaments maybe it doesn't matter. The primes are all around the same size, so weighting them equally and throwing away 2 will get you close enough. And as there's a parameter to optimize for, you'll probably avoid the worst tunings the same way that TOP avoids the worst error cancellation by optimizing the octave.

I'm more concerned about comparisons between temperaments. Does pocpit/pof order equal temperaments in a way that makes sense? I take it the definition works in the rank 1 case. Does it correctly order the unoptimal tunings of higher rank temperaments? What happens with higher prime limits?

Graham

🔗Carl Lumma <ekin@lumma.org>

3/23/2007 9:14:44 AM

>> Another way of looking at it fits it to my remarks on why it makes
>> sense to weight by generator steps. If we take 5-limit meantone, for
>> example, then not only do the generator tunings 3/2 and 5^(1/4) give a
>> pure consonance, but so does (10/3)^(1/3). So, now take (3/2)*5^4*(10/3)^3 =
>> 312500/9 and use it for an eigenmonzo, and the result is the
>> 7/26-comma Woolhouse tuning.
>
>I think weighting by generator steps might be valid. But that's not
>what you're talking about. Any least squares optimization will pay most
>attention to the more complex intervals (given equivalent weight)
>because they change most rapidly.

I was just thinking this when I read Gene's post.

>I have been known to define
>complexity as the gradient of (average) error as a function of the
>generator size.

size?

-Carl

🔗Carl Lumma <ekin@lumma.org>

3/23/2007 9:18:23 AM

>The last point I still think is wrong. I've argued before why something
>like a Kees metric or standard deviation of primes makes more sense. I
>haven't seen any counter-arguments.

I know you did, and so did Paul. Can you sum up the argument here?

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/23/2007 3:48:41 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> Right, so it's an unweighted NOT then. And doesn't attempt to solve the
> problems with NOT that I addressed by using standard deviations.

What problems?

> > This, incidentally, allows us to cast rms with respect to the
> > tonality diamond or any other set of intervals into the same mold.
> > Simply take
>
> Why was that ever difficult?

This way

(1) Uses rational arithmetic and gives exact answers, so that

(2) Your computer package never craps out on you and tells you there
isn't an answer when there is one, plus

(3) It's fast and easy.

> I think weighting by generator steps might be valid. But that's not
> what you're talking about. Any least squares optimization will pay most
> attention to the more complex intervals (given equivalent weight)
> because they change most rapidly.

Well, that's the point. It turns out they are the same, if
by "weighting" you mean weighting an eigenmonzo.

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/23/2007 4:02:33 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> The last point I still think is wrong.

You can call it wrong and naive all you like, and then Jon Szanto
starts screaming because he can hear the detuning of the octaves, and
it's driving him nuts. The fact is, people sometimes want pure
octaves.

> I'm more concerned about comparisons between temperaments. Does
> pocpit/pof order equal temperaments in a way that makes sense?

Does assuming octaves are pure in equal temperaments make sense?
People do it all the time.

> I take it the definition works in the rank 1 case.

It simply says the tuning of 12-et is an equal division of the octave
into 12 parts. Much more interesting is Frobenius, which works a lot
like TOP, only a bit more extreme. For instance, the 5-limit octave is
1196.778 cents, and the 7-limit octave is 1193.099 cents.

> Does it correctly order the unoptimal tunings of higher rank
> temperaments? What happens with higher prime limits?

I don't know what you mean.

🔗Graham Breed <gbreed@gmail.com>

3/23/2007 9:22:32 PM

Gene Ward Smith wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
>> The last point I still think is wrong.
>
> You can call it wrong and naive all you like, and then Jon Szanto
> starts screaming because he can hear the detuning of the octaves, and
> it's driving him nuts. The fact is, people sometimes want pure
> octaves.

Huh?

Given that you can't follow my arguments, what hope does anybody else have if you don't quote them? For the boys and girls following this at home, my third point was:

3) Naively ignoring factors of 2

Note the word "Naively" there, which does mean something.

>> I'm more concerned about comparisons between temperaments. Does
>> pocpit/pof order equal temperaments in a way that makes sense?
>
> Does assuming octaves are pure in equal temperaments make sense?
> People do it all the time.

Yes. But naively ignoring factors of 2 doesn't make sense. Everybody from Partch on has noticed that.

>> I take it the definition works in the rank 1 case.
>
> It simply says the tuning of 12-et is an equal division of the octave
> into 12 parts. Much more interesting is Frobenius, which works a lot
> like TOP, only a bit more extreme. For instance, the 5-limit octave is
> 1196.778 cents, and the 7-limit octave is 1193.099 cents.

What's interesting about that? Of course equal prime weights will exaggerate the tempering of small primes. The smallest prime, 2, will suffer most from this. I thought that was the whole point of using Tenney weighting in the first place.

>> Does it correctly order the unoptimal tunings of higher rank
>> temperaments? What happens with higher prime limits?
>
> I don't know what you mean.

Two points:

1) If we don't use the optimal tuning, does the error make sense? That is, the unweighted, naive octave-equivalent RMS implied by pocpit?

2) How does it behave in higher limits? Equal prime weighting should have a more dramatic effect the more unequal the sizes of the primes are.

Graham

🔗Graham Breed <gbreed@gmail.com>

3/23/2007 9:37:13 PM

Gene Ward Smith wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
>> Right, so it's an unweighted NOT then. And doesn't attempt to solve the
>> problems with NOT that I addressed by using standard deviations.
>
> What problems?

RTFP

>>> This, incidentally, allows us to cast rms with respect to the
>>> tonality diamond or any other set of intervals into the same mold.
>>> Simply take
>>
>> Why was that ever difficult?
>
> This way
>
> (1) Uses rational arithmetic and gives exact answers, so that

I thought we concluded before that you can't get the interval or error sizes without floating point calculations as long as the prime intervals are defined as floating point (and logarithms of integers are never rationals). If you wanted intermediate results in rational numbers all these years, why did you find it difficult?

> (2) Your computer package never craps out on you and tells you there
> isn't an answer when there is one, plus

I've included the relevant C code from my years-old computer package below. Tell me for what inputs it craps out and tells you there isn't an answer when there is one.

> (3) It's fast and easy.

The only thing slow about the C code below is that it loops over a list of the intervals in an odd limit. That makes it quadratic in the number of primes. Prime-based measures are faster because they're linear in the number of primes. I don't think anything you've done changes this.

What's difficult?

void linear_temperament_optimize_rms(linear_temperament *this,
        float_array *primes, generic_list *intervals) {

    assert(this != NULL);
    assert(intervals != NULL);
    assert(primes != NULL);
    double numerator = 0.0;
    double denominator = 0.0;

    double period_tuning = this->period_size;
    int_array *period_mapping = this->mapping->period_mapping;
    int_array *generator_mapping = this->mapping->generator_mapping;
    assert(primes->size == this->mapping->period_mapping->size);

    generic_list *node = intervals;
    while (node != NULL) {
        int_array *interval = (int_array*) generic_list_head(node);
        node = generic_list_tail(node);

        assert(period_mapping->size == interval->size);

        int equivs = 0;
        int gens = 0;
        for (int i=0; i<period_mapping->size; i++) {
            equivs += period_mapping->data[i] * interval->data[i];
            if (i>0) {
                gens += generator_mapping->data[i-1] * interval->data[i];
            }
        }
        if (gens != 0) {
            double optimum = magnitude(primes, interval);
            double gradient = (double) gens;
            double offset = ((double) equivs)*period_tuning - optimum;
            /*
             * y = mx + c
             * y**2 = m**2*x**2 + 2*cmx + c**2
             * d(y**2)/dx = 2*m**2*x + 2cm = 0
             * x = -2cm/2m**2 = -cm/m**2
             */
            numerator -= gradient*offset;
            denominator += gradient*gradient;
        }
    }
    assert(denominator != 0.0);
    this->generator_size = numerator / denominator;
}

🔗Gene Ward Smith <genewardsmith@sbcglobal.net>

3/24/2007 3:34:52 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

>> Much more interesting is Frobenius, which works a lot
>> like TOP, only a bit more extreme. For instance, the 5-limit octave is
>> 1196.778 cents, and the 7-limit octave is 1193.099 cents.
>
> What's interesting about that?

What's interesting is that you get results which tell you something,
unlike the situation where you are just told "to make octaves pure,
make them pure."

> 1) If we don't use the optimal tuning, does the error make sense?

What's the optimal tuning?

> That is, the unweighted, naive octave-equivalent RMS implied by pocpit?

That's what it is.

🔗Graham Breed <gbreed@gmail.com>

3/24/2007 4:59:10 AM

Gene Ward Smith wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>>> Much more interesting is Frobenius, which works a lot
>>> like TOP, only a bit more extreme. For instance, the 5-limit octave is
>>> 1196.778 cents, and the 7-limit octave is 1193.099 cents.
>>
>> What's interesting about that?
>
> What's interesting is that you get results which tell you something,
> unlike the situation where you are just told "to make octaves pure,
> make them pure."

What are you blathering about?

>> 1) If we don't use the optimal tuning, does the error make sense?
>
> What's the optimal tuning?

If you really are too dense to understand this, do you really have to worry what optimum you're not using?

>> That is, the unweighted, naive octave-equivalent RMS implied by pocpit?
>
> That's what it is.

So are you going to answer the question?

Graham

🔗Graham Breed <gbreed@gmail.com>

3/24/2007 4:53:01 AM

Me:
>> I have been known to define complexity as the gradient of (average)
>> error as a function of the generator size.

Carl:
> size?

Yes.

Plot the error in an interval as a function of the generator size. The gradient tells you the number of generators to that interval (once you take account of the period size). You can generalize this by looking at the gradient of the average error for a whole temperament. It gives you a clue as to what complexity measure might go with that kind of average.

For worst error, the complexity is the worst case of the error gradient. For RMS error, it's a constant that determines the gradient (the gradient itself goes from zero to infinity).

Graham

🔗Carl Lumma <ekin@lumma.org>

3/24/2007 9:21:11 AM

At 09:18 AM 3/23/2007, you wrote:
>>The last point I still think is wrong. I've argued before why something
>>like a Kees metric or standard deviation of primes makes more sense. I
>>haven't seen any counter-arguments.
>
>I know you did, and so did Paul. Can you sum up the argument here?

This is the 19 vs. 22 thing?

>>Only in the 5 limit by the looks of it. 19 has errors in 3 and 5
>>almost (unweighted) equally flat so that 6:5 is almost perfect. The
>>average odd-limit error will be about two thirds the average prime
>>error. 22 has a sharp 3 and a flat 5 so 6:5 is worse than either and
>>the average odd-limit error will be higher than the average prime
>>error. When you stretch the octaves, 19 will benefit more, with the
>>errors in 3 and 5 both getting smaller and 2 taking up the slack.
>>This more fairly represents the odd-limit error.

-Carl

🔗Carl Lumma <ekin@lumma.org>

3/24/2007 9:49:32 AM

>>>I have been known to define
>>>complexity as the gradient of (average) error as a function of the
>>>generator size.
>
>Carl:
>> size?
>
>Yes.
>
>Plot the error in an interval as a function of the generator size.

Why on earth would you do that? I thought you were talking about
weighting by generator complexity (mapping) which makes sense.

>The gradient tells you the number of generators to that interval (once
>you take account of the period size). You can generalize this by looking
>at the gradient of the average error for a whole temperament. It gives
>you a clue as to what complexity measure might go with that kind of
>average.

Oh weird. I'll have to take your word for it.

-Carl

🔗Gene Ward Smith <genewardsmith@sbcglobal.net>

3/24/2007 1:23:34 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> > What's the optimal tuning?
>
> If you really are too dense to understand this, do you really have to
> worry what optimum you're not using?

If you won't say what you mean, it leaves the impression you don't
really know what you mean.

> So are you going to answer the question?

I've seen no question which was clearly phrased, and none, incidentally,
which was politely put.

🔗Graham Breed <gbreed@gmail.com>

3/24/2007 4:20:18 PM

Carl Lumma wrote:
> At 09:18 AM 3/23/2007, you wrote:
>>> The last point I still think is wrong. I've argued before why something
>>> like a Kees metric or standard deviation of primes makes more sense. I
>>> haven't seen any counter-arguments.
>>
>>I know you did, and so did Paul. Can you sum up the argument here?

I don't have the message this quote comes from. I know I've missed some lately.

> This is the 19 vs. 22 thing?

Yes. Or 19 vs 12. By my calculations, 12 comes out much better than 19 with a naive odd primes RMS.

Graham

🔗Graham Breed <gbreed@gmail.com>

3/25/2007 3:22:09 AM

Gene Ward Smith wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>>> What's the optimal tuning?
>>
>> If you really are too dense to understand this, do you really have to
>> worry what optimum you're not using?
>
> If you won't say what you mean, it leaves the impression you don't
> really know what you mean.

Sure, let's get the ad hominems out of the way. I'm a complete fucking idiot and I haven't got the slightest idea what I'm talking about. I don't give a shit what you think about me. But please, can you read my messages?

>> So are you going to answer the question?
>
> I've seen no question which was clearly phrased, and none, incidentally,
> which was politely put.

Okay, let's try:

Excuse me, oh esteemed master. Your tuning implies that primes should be weighted all equal to one another, and the error in a temperament should only depend on the average value of the errors of the primes. I was wondering, perchance, would you be so kind as to provide some examples of these errors as they apply to common temperaments?

In addition, would it be too far out of your way to use tunings of those temperaments which differ from your most excellent "pocpit". From such information, many wondrous conclusions may be drawn.

Upon perusing these results, you may care to relate them to the subject you introduced of "Why pocpit tuning makes sense".

Graham

🔗Graham Breed <gbreed@gmail.com>

3/25/2007 7:30:33 AM

Carl Lumma wrote:
>> The last point I still think is wrong. I've argued before why something
>> like a Kees metric or standard deviation of primes makes more sense. I
>> haven't seen any counter-arguments.
>
> I know you did, and so did Paul. Can you sum up the argument here?

Oh, I do have this message. Well, "intervals between primes are more important than primes" is the simple argument. Beyond that, I wrote the PDF file so that I didn't have to keep explaining things like this.

Graham

🔗Carl Lumma <ekin@lumma.org>

3/25/2007 9:39:07 AM

At 07:30 AM 3/25/2007, you wrote:
>Carl Lumma wrote:
>>>The last point I still think is wrong. I've argued before why something
>>>like a Kees metric or standard deviation of primes makes more sense. I
>>>haven't seen any counter-arguments.
>>
>>
>> I know you did, and so did Paul. Can you sum up the argument here?
>
>Oh, I do have this message. Well, intervals between primes are more
>important than primes is the simple argument. Beyond that, I wrote the
>PDF file so that I didn't have to keep explaining things like this.

One thing about your explanations is that they tend to be like this.
Is your 2nd sentence the result of naively ignoring 2s, or the thing
that breaks when you do? Next, you could give an example of
it happening.

As a reader of your pdf, I hope this list is a place to discuss it,
not a replacement for discussions here.

-Carl

🔗Gene Ward Smith <genewardsmith@sbcglobal.net>

3/25/2007 7:52:12 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> Sure, let's get the ad hominems out of the way.

If rudeness bothers you, you shouldn't have been so rude.

> But please, can you read my
> messages?

They are often cryptic in their meaning.

> Excuse me, oh esteemed master. Your tuning implies that primes should
> be weighted all equal to one another, and the error in a temperament
> should only depend on the average value of the errors of the primes. I
> was wondering, perchance, would you be so kind as to provide some
> examples of these errors as they apply to common temperaments?

You want errors of what, exactly? And considering how easily it can
be computed, why ask me?

> In addition, would it be too far out of your way to use tunings of those
> temperaments which differ from your most excellent "pocpit".

I have no idea what this sentence means.

> Upon perusing these results, you may care to relate them to the subject
> you introduced of "Why pocpit tuning makes sense".

I didn't say it made more sense than everything else. I was trying to
point out that eigenmonzos derived from generator mappings, which at
first blush seem not to make much sense, actually do make sense. But
it isn't a claim that it makes *more* sense than other tuning systems, a
bee which buzzed into your bonnet from who knows where.

🔗Graham Breed <gbreed@gmail.com>

3/25/2007 10:15:54 PM

Carl Lumma wrote:
> At 07:30 AM 3/25/2007, you wrote:
> >>Carl Lumma wrote:
>>
>>>> The last point I still think is wrong. I've argued before why something
>>>> like a Kees metric or standard deviation of primes makes more sense. I
>>>> haven't seen any counter-arguments.
>>>
>>>
>>>I know you did, and so did Paul. Can you sum up the argument here?
>>
>> Oh, I do have this message. Well, intervals between primes are more
>> important than primes is the simple argument. Beyond that, I wrote the
>> PDF file so that I didn't have to keep explaining things like this.
>
> One thing about your explanations is that they tend to be like this.
> Is your 2nd sentence the result of naively ignoring 2s, or the thing
> that breaks when you do? Next, you could give an example of
> it happening.

Which second sentence? "Intervals between primes are more important than primes" is the rationale for using a standard deviation.

The naive approach could be either to remove 2s from a procedure that works with all primes, or to apply an average that works for optimal scale stretches where octaves are fixed to be perfect. The result will be the same in both cases, because if octaves are perfect they have an error of zero. I call this naive because it's the first thing everybody thinks of doing and they usually run into problems with it soon after.

Tables 1 and 2 from the PDF show examples of this. In Table 1 you can see the raw and Tenney weighted (the "Post-" column) errors for 19-equal. Note that 7:1 has a weighted error of 7.6 cents/octave but none of the simple intervals within the octave is worse than 4.5 cents/octave. So an average of prime errors is going to over-estimate the average error of intervals within the octave.

Oftentimes in music theory you take the complexity of an octave-equivalent interval from the octave-specific complexity of its equivalent within the octave. I won't give the rationale here because it adds more complexity to the argument. But it ties odd limits to Tenney weighting. If you compare the pure octaves errors with the stretched errors in Table 1, you should see they aren't much different for intervals within the octave. However, the weighted errors for large intervals are generally smaller when you optimize the stretch. This means that even if you're going to tune to pure octaves, the average error of the optimally stretched primes is more representative of intervals within the octave than the average error of the actual primes you're tuning.

In Table 2 you can compare different error measures for different equal temperaments. RMS is the naive average of prime errors, and STD is the standard deviation. You can see that the STD is closely correlated to the TOP-RMS error. 19 and 27 stand out as good octave sizes where taking the standard deviation makes a big difference. My subjective opinion of 19 is that it's closer to 22 than 12 in the 5-limit and this agrees with the STD.

(Note I was wrong before to say that 19 comes out worse than 12 for some RMS)

> As a reader of your pdf, I hope this list is a place to discuss it,
> not it a replacement for discussions here.

Certainly this is a place for discussions. But I don't want to keep being asked questions that I've answered in the PDF. Unless you can point to which part of the argument you're having trouble with.

Graham

🔗Graham Breed <gbreed@gmail.com>

3/26/2007 4:07:26 AM

Gene Ward Smith wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>> Sure, let's get the ad hominems out of the way.
>
> If rudeness bothers you, you shouldn't have been so rude.

Rudeness is fine. But we could try to keep it on topic.

>> But please, can you read my messages?
>
> They are often cryptic in their meaning.

Of course they're cryptic if you chop them up and misrepresent them. And there's not much I can do to clarify them when you've already thrown away the context.

>> Excuse me, oh esteemed master. Your tuning implies that primes should
>> be weighted all equal to one another, and the error in a temperament
>> should only depend on the average value of the errors of the primes. I
>> was wondering, perchance, would you be so kind as to provide some
>> examples of these errors as they apply to common temperaments?
>
> You want errors of what, exactly? And considering how easily it can
> be computed, why ask me?

Is this the answering a question with another question game?

>> In addition, would it be too far out of your way to use tunings of those temperaments which differ from your most excellent "pocpit".
>
> I have no idea what this sentence means.

Which words are you having difficulty with?

>> Upon perusing these results, you may care to relate them to the subject you introduced of "Why pocpit tuning makes sense".
>
> I didn't say it made more sense than everything else. I was trying to point out that eigenmonzos derived from generator mappings, which at first blush seem not to make much sense, actually do make sense. But it isn't a claim it makes *more* sense than other tuning systems, a bee which buzzed into your bonnet from who knows where.

IOW "vaguely sensible" like I said right at the start.

So when I said

>> Right, so it's an unweighted NOT then. And doesn't attempt to solve the problems with NOT that I addressed by using standard deviations. So pof/pocpit may make sense [but] why should we care about it?

you could have said "Sure, it still has those problems".

And when I said

>>> This, incidentally, allows us to cast rms with respect to the tonality diamond or any other set of intervals into the same mold. Simply take
>
> Why was that ever difficult?

you could have said "No reason, but I thought I'd mention it".

And when I said

>> Equal weighting is fine if it's really what you want. I didn't think anybody did want that, though, from musical arguments. It doesn't make much sense in theoretical terms because primes and composites are unbalanced. The only advantage I can see is that you can use integer libraries for a bit longer.

you could have said "Sure, it doesn't make that much sense".

And when I said

>> The last point I still think is wrong. I've argued before why something like a Kees metric or standard deviation of primes makes more sense. I haven't seen any counter-arguments. I thought this was agreed years ago, which is why we used to use odd limits, and then switched to TOP. Naive odd primes measures don't cut it. The Tenney-weighted prime STD is so easy to calculate that I don't even see a practical reason for using a prime RMS instead. I haven't worked it out for general weighting but I don't see that it should be a problem.

you could have said "Sure, that would make more sense".

And when I said

>> I'm more concerned about comparisons between temperaments. Does pocpit/pof order equal temperaments in a way that makes sense? I take it the definition works in the rank 1 case. Does it correctly order the unoptimal tunings of higher rank temperaments? What happens with higher prime limits?

you could have said "Sure, it doesn't make that much sense".

Instead of which, you chose to ignore, argue with, or go off at a tangent to everything I said. Now, is there anything above that you actually disagree with or can't understand?

While I'm here, though, you could make a case for why we should be interested in any of this.

Graham

🔗Graham Breed <gbreed@gmail.com>

3/26/2007 5:41:36 PM

Carl Lumma wrote:
>>>> I have been known to define complexity as the gradient of (average) error as a function of the generator size.
>>
>> Carl:
>>
>>> size?
>>
>> Yes.
>>
>> Plot the error in an interval as a function of the generator size.
>
> Why on earth would you do that? I thought you were talking about weighting by generator complexity (mapping) which makes sense.

These things should be on my website, but I lost them. You plot the (weighted) errors in the prime intervals and from them you can work out the errors in other intervals and visually find the minimax. It's a good way of seeing the trade-off between different primes for a given temperament.
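As a stand-in for the lost plots, here is a numerical sketch of the same idea for 5-limit meantone. The mapping convention and function names are my own choices, assuming pure octaves and the fifth as generator: each prime's Tenney-weighted error is a straight line in the generator size, and scanning for the smallest worst error finds the minimax tuning where the lines cross, instead of reading it off a graph:

```python
import math

# 5-limit meantone: prime -> (octaves, generator steps), generator = fifth.
# 3 ~ one octave plus a fifth; 5 ~ four fifths.
MAPPING = {2: (1, 0), 3: (1, 1), 5: (0, 4)}

def weighted_errors(g):
    """Tenney-weighted errors of primes 2, 3, 5 for a fifth of g octaves."""
    return [(octaves + gens * g) / math.log2(p) - 1.0
            for p, (octaves, gens) in MAPPING.items()]

def minimax_generator(lo=0.57, hi=0.59, steps=20001):
    """Scan generator sizes for the one minimizing the worst weighted error."""
    best_g, best_err = lo, float("inf")
    for i in range(steps):
        g = lo + (hi - lo) * i / (steps - 1)
        err = max(abs(e) for e in weighted_errors(g))
        if err < best_err:
            best_g, best_err = g, err
    return best_g, best_err
```

Plotting `weighted_errors` over the scan range shows the trade-off between the primes directly; the minimax sits where the error in 3 crosses the error in 5 with opposite signs.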

As for the overall error of a temperament, this is the function to minimize, so whether you plot it or not it can be useful to know the gradient.

>> The gradient tells you the number of generators to that interval (once you take account of the period size). You can generalize this by looking at the gradient of the average error for a whole temperament. It gives you a clue as to what complexity measure might go with that kind of average.
>
> Oh weird. I'll have to take your word for it.

For the pure-octaves Tenney-weighted STD, the error as a function of generator size is quadratic. The standard deviation of the complexities is one parameter, along with the number of periods to the octave, the optimal generator size, and the optimal error. So that suggests the standard deviation of the complexities as the complexity of the temperament, and this seems to work.
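This is easy to check numerically. Here's a sketch for 5-limit meantone with pure octaves (the formulation and names are mine): each weighted prime error is linear in the generator, so the squared STD error is exactly quadratic in the generator size, and its curvature works out to twice the variance of the Tenney-weighted generator counts:

```python
import math

# 5-limit meantone: prime -> (octaves, generator steps), generator = fifth
MAPPING = {2: (1, 0), 3: (1, 1), 5: (0, 4)}

def weighted_errors(g):
    """Tenney-weighted prime errors for a fifth of size g octaves."""
    return [(o + m * g) / math.log2(p) - 1.0
            for p, (o, m) in MAPPING.items()]

def var(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

def std_error_sq(g):
    """Squared standard deviation of the weighted prime errors."""
    return var(weighted_errors(g))

# Tenney-weighted complexities: generator counts divided by log2(p)
complexities = [m / math.log2(p) for p, (o, m) in MAPPING.items()]
```

The second difference of `std_error_sq` over any step h comes out as `2 * var(complexities) * h**2`, so the variance of the weighted complexities is exactly the curvature of the error function; that is the sense in which the STD of complexities acts as the complexity of the temperament.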

Graham