
TOP arguments

🔗Graham Breed <gbreed@gmail.com>

10/29/2005 6:02:24 AM

There was a lot of fuss about TOP tuning a while back. I won't actually review it here, so if you don't know what I'm talking about you'll have to search back. I've always had reservations about it and have voiced some of them here. But I haven't taken the time to present them all together until now. I don't mean to say that TOP isn't a valid way of fixing the tuning or assessing the accuracy, but that it isn't the best and shouldn't be the standard.

One thing TOP does get right is that weighting tends to go with octave-specific consonance. You can get pretty good measures by only considering the primes so long as you weight them and allow the octaves to become impure. This follows from the "triangular vs rectangular" thread, and I hope we agree on that now. With rectangular, octave-equivalent lattices you have 5:3 and 15:8 with the same complexity, so you have to consider the whole ratio or make subtraction easier than addition.

Tenney weighting's good in so far as it goes. The odd-limit can be seen as an octave-equivalent simplification of Tenney weighting after all. Tenney weighting is like a null-weighting for primes-based measures because it means all intervals are considered equivalent in so far as that's possible. But if you make it the only criterion it gives far too much weight to complex intervals. I don't think you even get a stable optimization. That's why you have to enforce a prime limit with these weighted schemes. It guarantees that intervals you don't consider are more consonant than most of those you do, by the very same complexity measure you're looking for.

You could try having the weighting drop off faster with complexity. The problem with that is, to get a stable result over all intervals, you'll have to get it to fall off so quickly that only the simplest intervals are going to have any importance at all. If you care about the greater 9-limit, you'll have to intervene somewhere, and you may as well do it with a hard limit.

So we end up back with Tenney weighting and a prime limit. It's a pragmatic approach that gives sensible results. But, as it's still a compromise, the precise mathematical properties of TOP aren't important. It's a perfect structure built on imperfect foundations.

My big, conceptual problem with TOP is that the minimax isn't appropriate for a weighted scheme. The point of the minimax is that you put a limit on how bad the tuning can get. The point of weighting is that you favour the most important intervals, and don't worry so much about the less important ones. If simple intervals carry most of the harmony you care about, their tuning is more important than the more complex ones. And good intervals in a chord can make up for bad ones. These considerations lead to some kind of mean error as the thing to be optimized. I really don't think that the weighted maximum error is anything we can hear. Ideally (if we had a theory we could trust) we'd do a weighted-mean optimization provided an unweighted-minimax hurdle is passed. The best weighting schemes would target a particular piece of music, and depend on how often different intervals are used in it. You could even use it to vary the tuning over the piece. For the general case you can guess that simple intervals are used most often, and so some Tenney-weighted mean is the best approximation.

TOP doesn't solve the problem of optimum octave-stretch. There are too many factors that aren't considered once you leave octave-equivalence. The biggest problem is the effect of the actual size of the interval. Quite small intervals sound most dissonant, and very large intervals are inaudible. Yet they have the same Tenney weight, and so are considered equally in the optimization if that's your only criterion. I don't know the solution to this, because there isn't enough theory to rest on, but for the best solution I'd want some kind of size-weighted minimax to make sure the smallest important intervals don't get out of control. Use the octave stretching to optimize 9:8 and 8:7, and leave 3:2 and 7:4 to look after themselves. Soon enough you get such a complex system that you may as well do the tuning subjectively. So TOP is fine to give you a general idea of the ideal stretch. But as we don't know which system is correct we may as well use the simplest vaguely correct one, and TOP isn't it. However, it is simple enough to have some validity.

The simplest weighted, prime-limit scheme is to optimize the RMS of the prime errors themselves. This is actually a mixture of a mean and minimax measure. It assumes errors always add up, whereas sometimes they cancel out, so 5:3 and 15:1 are always given the same error. That's why you have to optimize the octave. If the errors would tend to cancel out in an odd-limit, the RMS of primes is unrepresentative. But the octave stretching will compensate for this. So, paradoxically, the octave-stretched optimum may be most valid after you unstretch the octaves.
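To make the cancellation point concrete, here is a minimal sketch (an illustration added for this writeup, not from the original post) using 19-equal with pure octaves: the errors of primes 3 and 5 are each about 7 cents flat, so they nearly cancel in 5:3 but add up in 15:1, while an unsigned prime-error measure treats the two ratios alike.

```python
# Minimal sketch: prime errors in 19-equal with pure octaves, and how they
# combine in 5:3 (difference of errors) versus 15:1 (sum of errors).
from math import log2

edo = 19

def prime_error_cents(p):
    """Error of the nearest approximation to prime p, in cents."""
    steps = round(edo * log2(p))
    return steps * 1200 / edo - 1200 * log2(p)

e3 = prime_error_cents(3)      # about -7.2 cents
e5 = prime_error_cents(5)      # about -7.4 cents
print("5:3 error:", e5 - e3)   # the errors nearly cancel
print("15:1 error:", e5 + e3)  # the errors add, to about -14.6 cents
```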

The cleverest part of TOP is that it can still be applied with pure octaves and the Kees metric. Is the "Kees" metric from "Kees van Prooijen" or something? In that case, you can call the octave-equivalent TOP "Un-Tempered Octaves, Prooijen Ideal Accuracy". That gives the acronym "UTOPIA" which is better than "TOP" while showing the relationship. Even if that doesn't work, there's still "Un-Tempered Octaves, Please, In All Numbers". Despite this advantage, it requires more effort to calculate than the weighted primes least squares, so you may as well do the latter and restretch.

I still think the easiest metric to understand is the odd-limit minimax. I can see practical advantages in the weighted prime least squares, which I'd overlooked before. But TOP is so difficult to calculate, and rests on such dubious theory, that it doesn't fill any useful niche.

Graham

🔗Paul Erlich <perlich@aya.yale.edu>

10/31/2005 7:10:47 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> There was a lot of fuss about TOP tuning a while back. I won't
> actually review it here, so if you don't know what I'm talking about
> you'll have to search back. I've always had reservations about it and
> have voiced some of them here. But I haven't taken the time to
> present them all together until now. I don't mean to say that TOP
> isn't a valid way of fixing the tuning or assessing the accuracy, but
> that it isn't the best and shouldn't be the standard.
>
> One thing TOP does get right is that weighting tends to go with
> octave-specific consonance. You can get pretty good measures by only
> considering the primes so long as you weight them and allow the
> octaves to become impure. This follows from the "triangular vs
> rectangular" thread, and I hope we agree on that now. With
> rectangular, octave-equivalent lattices you have 5:3 and 15:8 with
> the same complexity, so you have to consider the whole ratio or make
> subtraction easier than addition.
>
> Tenney weighting's good in so far as it goes. The odd-limit can be
> seen as an octave-equivalent simplification of Tenney weighting after
> all. Tenney weighting is like a null-weighting for primes-based
> measures because it means all intervals are considered equivalent in
> so far as that's possible. But if you make it the only criterion it
> gives far too much weight to complex intervals. I don't think you
> even get a stable optimization. That's why you have to enforce a
> prime limit with these weighted schemes.

I'm dubious on this part of your claims. Can you set up an unstable
optimization to show this?

> It guarantees that intervals you don't consider are more consonant
> than most of those you do, by the very same complexity measure you're
> looking for.

I'm not sure exactly what you mean by this.

> You could try having the weighting drop off faster with complexity.
> The problem with that is, to get a stable result over all intervals,
> you'll have to get it to fall off so quickly that only the simplest
> intervals are going to have any importance at all. If you care about
> the greater 9-limit, you'll have to intervene somewhere, and you may
> as well do it with a hard limit.

This is the great thing about TOP. You can place the hard limit just
about anywhere you want and it won't change the results! I admit that
TOP may seem pretty unjustified if you don't realize this. But I
tried to point it out in my paper . . . And the results are
perfectly "stable", in every sense.

> So we end up back with Tenney weighting and a prime limit. It's a
> pragmatic approach that gives sensible results. But, as it's still a
> compromise,

I don't see it as a compromise. What is being compromised?

> the precise mathematical properties of TOP aren't important. It's a
> perfect structure built on imperfect foundations.
>
> My big, conceptual problem with TOP is that the minimax isn't
> appropriate for a weighted scheme.

Sure it is -- just as appropriate as with equal-weighting. I don't
see why one would say otherwise.

> The point of the minimax is that you
> put a limit on how bad the tuning can get.

Yes, and "how bad" is clearly something which can't be in units of
cents for all the different intervals you're looking at, IMO.

> The point of weighting is that you favour the most important
> intervals, and don't worry so much about the less important ones. If
> simple intervals carry most of the harmony you care about, their
> tuning is more important than the more complex ones.

Yes, and the weighting is exactly what takes this into account.

> And good intervals in a chord can make up for bad ones.

Possibly, though minimax is good enough for George Secor. Note that
for a triad, max error and sum-absolute-errors amount to the same
thing.

> These considerations lead to some kind of mean error as the thing to
> be optimized.

OK, I'm willing to delve further into your L2 variant of TOP . . . we
really need to think more about what it means, though.

> I really don't think that the weighted maximum error is anything we
> can hear. Ideally (if we had a theory we could trust) we'd do a
> weighted-mean optimization provided an unweighted-minimax hurdle is
> passed.

Why unweighted? You haven't given any arguments on that.

> The best weighting schemes would target a particular piece of music,
> and depend on how often different intervals are used in it. You could
> even use it to vary the tuning over the piece. For the general case
> you can guess that simple intervals are used most often, and so some
> Tenney-weighted mean is the best approximation.
>
> TOP doesn't solve the problem of optimum octave-stretch. There are
> too many factors that aren't considered once you leave
> octave-equivalence. The biggest problem is the effect of the actual
> size of the interval. Quite small intervals sound most dissonant, and
> very large intervals are inaudible. Yet they have the same Tenney
> weight, and so are considered equally in the optimization if that's
> your only criterion.

That's not a fair statement. The optimization can be "considered" in
many different ways while still yielding the same results.

> I don't know the solution to this, because there isn't enough theory
> to rest on, but for the best solution I'd want some kind of
> size-weighted minimax to make sure the smallest important intervals
> don't get out of control.

Again, you appear to be missing another important feature of TOP,
which is that you're free to consider only the intervals within some
range of interest for your optimization, and you still get exactly
the same result!

> Use the octave stretching to optimize 9:8 and 8:7, and leave 3:2 and
> 7:4 to look after themselves.

I don't understand that. Aren't they (the latter intervals) more
damaged by mistuning?

> Soon enough you get such a complex system that you may as well do the
> tuning subjectively. So TOP is fine to give you a general idea of the
> ideal stretch. But as we don't know which system is correct we may as
> well use the simplest vaguely correct one, and TOP isn't it. However,
> it is simple enough to have some validity.
>
> The simplest weighted, prime-limit scheme is to optimize the RMS of
> the prime errors themselves.

I think you've posted about that before but I don't think it's been
discussed enough.

> This is actually a mixture of a mean and
> minimax measure.

Well, for triads, mean and minimax give the same thing, while RMS
gives something else. So I'm not sure exactly what you mean by
this . . .

> It assumes errors always add up, whereas sometimes
> they cancel out, so 5:3 and 15:1 are always given the same error.

You lost me.

> That's why you have to optimize the octave. If the errors would tend
> to cancel out in an odd-limit, the RMS of primes is unrepresentative.
> But the octave stretching will compensate for this. So,
> paradoxically, the octave-stretched optimum may be most valid after
> you unstretch the octaves.

Can you be a little more elaborate?

> The cleverest part of TOP is that it can still be applied with pure
> octaves and the Kees metric.

I don't know what you mean. Though TOP, when stretched or compressed
to have pure octaves, usually coincides with minimax Kees in the
5-limit, it doesn't in higher limits, according to Gene.

> Is the "Kees" metric from "Kees van
> Prooijen" or something?

Yes. It's what we used to call log of "odd limit".

> In that case, you can call the octave-equivalent TOP "Un-Tempered
> Octaves, Prooijen Ideal Accuracy".

Unfortunately this seems to have little to do with TOP (or Prooijen,
depending what you mean) particularly beyond the 5-limit.

> That gives the acronym "UTOPIA" which is better than "TOP" while
> showing the relationship.

The relationship breaks down, according to Gene.

> Even if that doesn't work, there's still "Un-Tempered Octaves,
> Please, In All Numbers". Despite this advantage, it requires more
> effort to calculate than the weighted primes least squares, so you
> may as well do the latter and restretch.

If we can put your proposal on some good foundations . . . that has
yet to be seen (by me at least). I was disappointed when Gene
informed us that beyond the 5-limit, minimax Kees doesn't agree (even
modulo stretching) with TOP; I'm dubious that replacing minimax with
sum-of-squares (assuming we can define the latter appropriately in
the Kees case) will bring back the agreement . . .

> I still think the easiest metric to understand is the odd-limit
> minimax.

Whew! Score one for minimax, then.

> I can see practical advantages in the weighted prime least squares,
> which I'd overlooked before. But TOP is so difficult to calculate,
> and rests on such dubious theory, that it doesn't fill any useful
> niche.

I find it odd that you'd say this but of course you're entitled . . .
In many cases of interest, TOP is extremely easy to calculate, as you
know . . .

🔗Gene Ward Smith <gwsmith@svpal.org>

10/31/2005 9:40:18 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> If we can put your proposal on some good foundations . . . that has
> yet to be seen (by me at least). I was disappointed when Gene
> informed us that beyond the 5-limit, minimax Kees doesn't agree (even
> modulo stretching) with TOP...

In practice in the 7 and 11 limits it seems it nearly always does.

> In many cases of interest, TOP is extremely easy to calculate, as you
> know . . .

If you've got a linear programming routine available (and they are
easily found) it's quite simple to compute.

🔗Graham Breed <gbreed@gmail.com>

11/1/2005 12:09:09 PM

Paul Erlich wrote:

>>Tenney weighting is like a null-weighting for primes-based measures
>>because it means all intervals are considered equivalent in so far as
>>that's possible. But if you make it the only criterion it gives far
>>too much weight to complex intervals. I don't think you even get a
>>stable optimization. That's why you have to enforce a prime limit
>>with these weighted schemes.
>
> I'm dubious on this part of your claims. Can you set up an unstable
> optimization to show this?

For the least squares weighted prime function of the nearest prime approximation to 73-equal:

1000 0.000386288890546 1.00000525939
2000 0.000342631907251 0.999999878426
3000 0.000320740521709 1.00000066141
4000 0.00030851462036 0.999999470906
5000 0.000298744238076 1.00000050417
6000 0.000291497890021 0.99999998381
7000 0.000285393298895 0.999999137131
8000 0.000279937136959 1.00000022932
9000 0.000276009346747 0.999999628759

The left hand column is the number of primes, the middle column is the weighted RMS and the final column is the stretched octave. You could say it's stable about 1.0, but it is at least useless as a way of getting the optimum stretch. Here, for comparison, is 72-equal:

1000 0.000373784710919 1.0000057801
2000 0.000333632022278 1.00001005312
3000 0.000317872256548 1.00000028725
4000 0.000306200832323 0.999998925005
5000 0.000297006549111 0.999996450292
6000 0.00029044672927 0.999997819137
7000 0.000285145297883 0.999998100274
8000 0.000280218707906 0.999997806044
9000 0.000276665844039 0.999998361876

Beyond 8000 primes, 73-equal is closer to JI. In general, the error has more to do with the number of primes and steps per octave than any properties of the temperament in sensible limits.
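For anyone who wants to reproduce this kind of table, here is a minimal sketch of the computation as I read it (nearest-prime mapping, Tenney weighting, least squares over the octave stretch); the helper names are mine, and the figures may not match Graham's to every decimal place.

```python
# Sketch: Tenney-weighted least squares over the octave stretch for the
# nearest-prime approximation to an equal temperament, over the first N primes.
from math import log, log2, sqrt

def first_n_primes(n):
    """Return the first n primes via a sieve (bound from the nth-prime estimate)."""
    limit = 15 if n < 6 else int(n * (log(n) + log(log(n)))) + 10
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_p in enumerate(sieve) if is_p][:n]

def stretch_and_rms(edo, n_primes):
    """Optimal stretch and weighted RMS, using w(p) = n(p)/d/log2(p)."""
    ws = []
    for p in first_n_primes(n_primes):
        steps = round(edo * log2(p))       # nearest approximation to prime p
        ws.append(steps / edo / log2(p))   # weighted interval, pure octaves
    mean_w = sum(ws) / len(ws)
    mean_w2 = sum(w * w for w in ws) / len(ws)
    stretch = mean_w / mean_w2             # factor minimising <(k*w - 1)^2>
    rms = sqrt(max(0.0, 1 - mean_w ** 2 / mean_w2))
    return rms, stretch

for n in range(1000, 10000, 1000):
    rms, stretch = stretch_and_rms(73, n)
    print(n, rms, stretch)
```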

>>It guarantees that intervals you don't consider are more consonant
>>than most of those you do, by the very same complexity measure you're
>>looking for.
>
> I'm not sure exactly what you mean by this.

15:8 is part of the 5-limit, and so part of a weighted primes optimization. 7:4 is outside the 5 prime-limit, and so isn't considered at all. 18984375:16777216 is within the prime limit, and so it carries some weight, if not very much.

> This is the great thing about TOP. You can place the hard limit just
> about anywhere you want and it won't change the results! I admit that
> TOP may seem pretty unjustified if you don't realize this. But I
> tried to point it out in my paper . . . And the results are
> perfectly "stable", in every sense.

If that's true, it certainly is news. But how come "dimipent" and "dimisept" are given different periods, generators and damages in your paper? As dimipent is only dimisept without the prime 7, it certainly looks like the limit does change the result.

>>So we end up back with Tenney weighting and a prime limit. It's a
>>pragmatic approach that gives sensible results. But, as it's still a
>>compromise,
>
> I don't see it as a compromise. What is being compromised?

We have a weighting and a limit that isn't based on complexity. That compromises simplicity against only having the weighting, or sensibleness against considering all primes and getting rubbish out (unless you can demonstrate otherwise).

>>My big, conceptual problem with TOP is that the minimax isn't
>>appropriate for a weighted scheme.
>
> Sure it is -- just as appropriate as with equal-weighting. I don't
> see why one would say otherwise.

This is the thing I haven't mentioned before, because it would mean taking the time to set out all the arguments. But I'm sure it makes sense.

>>The point of the minimax is that you put a limit on how bad the
>>tuning can get.
>
> Yes, and "how bad" is clearly something which can't be in units of
> cents for all the different intervals you're looking at, IMO.

I think it can, and at least this is the best guess if we don't know what the true function should be.

>>The point of weighting is that you favour the most important
>>intervals, and don't worry so much about the less important ones. If
>>simple intervals carry most of the harmony you care about, their
>>tuning is more important than the more complex ones.
>
> Yes, and the weighting is exactly what takes this into account.

Uh, yes.

>>And good intervals in a chord can make up for bad ones.
>
> Possibly, though minimax is good enough for George Secor. Note that
> for a triad, max error and sum-absolute-errors amount to the same
> thing.

I don't think major seventh chords are as dissonant as other 15-limit chords, so there must be more to it than the most dissonant interval (or odd limit/Tenney complexity).

>>These considerations lead to some kind of mean error as the thing to
>>be optimized.
>
> OK, I'm willing to delve further into your L2 variant of TOP . . . we
> really need to think more about what it means, though.

It's the average error of the primes. That's not a difficult concept.

>>I really don't think that the weighted maximum error is anything we
>>can hear. Ideally (if we had a theory we could trust) we'd do a
>>weighted-mean optimization provided an unweighted-minimax hurdle is
>>passed.
>
> Why unweighted? You haven't given any arguments on that.

It makes sense that this would be the case. The worst interval is the one that's most out of tune. If anything, I'm with Partch and would expect the more complex intervals to have a smaller error. I think the various dissonance graphs back this up. If you draw a line for a particular worst dissonance level, the complex intervals have a narrower range than the simpler ones.

The absolute error also tells you how far a performer would have to move the pitch to get JI on a suitably flexible instrument.

>>Quite small intervals sound most dissonant, and very large intervals
>>are inaudible. Yet they have the same Tenney weight, and so are
>>considered equally in the optimization if that's your only criterion.
>
> That's not a fair statement. The optimization can be "considered" in
> many different ways while still yielding the same results.

Only if you use an algorithm that you know gives the same results, in which case you're implicitly considering all the other intervals by using that algorithm.

>>I don't know the solution to this, because there isn't enough theory
>>to rest on, but for the best solution I'd want some kind of
>>size-weighted minimax to make sure the smallest important intervals
>>don't get out of control.
>
> Again, you appear to be missing another important feature of TOP,
> which is that you're free to consider only the intervals within some
> range of interest for your optimization, and you still get exactly
> the same result!

No, if you're using TOP you're considering all intervals because you chose TOP which makes them the same.

>>Use the octave stretching to optimize 9:8 and 8:7, and leave 3:2 and
>>7:4 to look after themselves.
>
> I don't understand that. Aren't they (the latter intervals) more
> damaged by mistuning?

Yes, that's what will happen if you give them lower weight.

> Well, for triads, mean and minimax give the same thing, while RMS
> gives something else. So I'm not sure exactly what you mean by
> this . . .

RMS is a kind of mean. That's what the "M" stands for.

>>It assumes errors always add up, whereas sometimes they cancel out,
>>so 5:3 and 15:1 are always given the same error.
>
> You lost me.

An RMS of primes means the sign of the error is ignored. The error of 5:3 is the difference between the errors of 5 and 3, and the error of 15:1 is the sum of the errors of 5 and 3. A prime-based measure ignoring the sign will always assign the sum of the absolute errors of 5 and 3 to both ratios.

>>That's why you have to optimize the octave. If the errors would tend
>>to cancel out in an odd-limit, the RMS of primes is unrepresentative.
>>But the octave stretching will compensate for this. So,
>>paradoxically, the octave-stretched optimum may be most valid after
>>you unstretch the octaves.
>
> Can you be a little more elaborate?

Okay, prime-weighted schemes can be parameterized to use weighted-intervals which I call w(p). w(p) is the size of the tempered interval representing the prime p divided by the size of p:1. So, for an equal temperament with pure octaves this is

w(p) = n(p)/d/log2(p)

where n(p) is the number of steps approximating p, and d is the number of steps to an octave. You can think of TOP in terms of w(p) to an extent, in that it can be defined in terms of w(p) although you won't get the right weighting for complex intervals if you don't know anything else. The simplest error measure is some average comparing these intervals to 1:

e(p) = w(p) - 1

Average error = sqrt(<e^2>)

That is, the RMS. <..> means the mean, and ^2 means squared. The problem is that 19-equal is underestimated in the 5-limit if we keep pure octaves. The errors in 3 and 5 cancel out in 6:5, but the simple RMS ignores this. This is where we point out to newbies that they're getting it wrong, and previously we suggested odd limits.

An alternative is to take the standard deviation of the errors instead. Then, 19-equal looks good because the two 5-prime errors are about the same. The problem is that 19-equal is now overestimated, because its errors cancel out but they're still finite. Getting errors to be the same isn't enough. We really want errors that are close to each other, and close to zero. So, well, add zero to the standard deviation.

With pure octaves, w(2) is always 1.0 and so e(2) is always zero. Hence, the standard deviation of the errors already contains a zero if we include the octave explicitly, but keep it pure. So, a good bet for an octave-equivalent measure is

std(e) = sqrt(<e^2> - <e>^2)

where the octaves are still included in the calculation.

It happens that the least squares, prime-weighted error is

sqrt(1 - <w>^2/<w^2>)

which can be rewritten

sqrt[(<w^2> - <w>^2)/<w^2>]

or

std(w)/rms(w)

This is an interesting formula, because it's invariant with respect to the octave stretch. If you multiply each w by a constant, the effect on the standard deviation cancels out that on the RMS. It also happens that, because w(p) is only e(p) plus a constant, the two sets have the same standard deviation. That means our optimal error is

std(e)/rms(w)

If the temperament is at all sensible, each w(p) will be close to 1, and so this total error is close to std(e). Therefore it agrees with the octave-equivalent measure of standard deviation of weighted prime errors with a zero added in. It takes account of the sizes and signs of the prime errors, and doesn't assume a particular value for the octave.
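A small numerical check of the identities above (the 12- and 19-equal examples are mine; the formulas are as given): std(e)/rms(w) agrees with sqrt(1 - <w>^2/<w^2>), and multiplying every w(p) by a constant stretch leaves it unchanged.

```python
# Numerical check of std(e)/rms(w) = sqrt(1 - <w>^2/<w^2>), with w(p) as
# defined above (nearest approximations, pure octaves), 5-limit.
from math import log2, sqrt

def mean(xs): return sum(xs) / len(xs)
def rms(xs):  return sqrt(mean([x * x for x in xs]))
def std(xs):  return sqrt(mean([x * x for x in xs]) - mean(xs) ** 2)

primes = [2, 3, 5]
for edo in (12, 19):
    w = [round(edo * log2(p)) / edo / log2(p) for p in primes]
    e = [wi - 1 for wi in w]                       # e(p) = w(p) - 1
    lsq = sqrt(1 - mean(w) ** 2 / mean([wi * wi for wi in w]))
    print(edo, "std(e)/rms(w) =", std(e) / rms(w), " least-squares error =", lsq)
    # Stretch invariance: scaling every w by a constant changes nothing.
    stretched = [1.001 * wi for wi in w]
    print(edo, "after a 0.1% stretch:", std(stretched) / rms(stretched))
```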

>>The cleverest part of TOP is that it can still be applied with pure
>>octaves and the Kees metric.
>
> I don't know what you mean. Though TOP, when stretched or compressed
> to have pure octaves, usually coincides with minimax Kees in the
> 5-limit, it doesn't in higher limits, according to Gene.

That's not what I mean. The RMS of primes measure is easy to find, and easy to show invalid when you take the octaves out. The bit about the standard deviation above is something I hadn't noticed until after you showed us TOP. The clever bit about TOP is that it's a weighted measure that has a natural octave-equivalent definition: your minimax Kees. That it's at least close to stretched TOP is nice, but a side issue.

> If we can put your proposal on some good foundations . . . that has
> yet to be seen (by me at least). I was disappointed when Gene
> informed us that beyond the 5-limit, minimax Kees doesn't agree (even
> modulo stretching) with TOP; I'm dubious that replacing minimax with
> sum-of-squares (assuming we can define the latter appropriately in
> the Kees case) will bring back the agreement . . .

I thought I proved the relationship for any case of tempering out a single comma. That would include the 7-limit planar temperaments, and so on.

>>I can see practical advantages in the weighted prime least squares,
>>which I'd overlooked before. But TOP is so difficult to calculate,
>>and rests on such dubious theory, that it doesn't fill any useful
>>niche.
>
> I find it odd that you'd say this but of course you're entitled . . .
> In many cases of interest, TOP is extremely easy to calculate, as you
> know . . .

It's only easy when you're tempering out a single comma. For linear temperaments, that only covers the 5-limit. I can do searches up to the 19 limit, and I don't know how to calculate TOP then. It's only in the higher limits that the search gets difficult anyway.

Graham

🔗Graham Breed <gbreed@gmail.com>

11/1/2005 12:12:27 PM

Paul:
>>In many cases of interest, TOP is extremely easy to calculate, as you
>>know . . .

Gene:
> If you've got a linear programming routine available (and they are
> easily found) it's quite simple to compute.

It can't be that simple if you need such a routine! Can you find one for Python? Or any free language? How efficient is it for optimizing a billion 19-limit linear temperaments?

Graham

🔗Gene Ward Smith <gwsmith@svpal.org>

11/1/2005 12:27:21 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:

> The left hand column is the number of primes, the middle column is the
> weighted RMS and the final column is the stretched octave. You could
> say it's stable about 1.0, but it is at least useless as a way of
> getting the optimum stretch.

I don't know what you are using for a weighting, but it probably isn't
enough; you might try the p^(-1/2) as a weight for the prime p (apeing
the Zeta function on the critical line.)

🔗Gene Ward Smith <gwsmith@svpal.org>

11/1/2005 12:29:38 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:

> It can't be that simple if you need such a routine!

Why not? Routines are as common as dirt.

> Can you find one for Python?

Probably.

> How efficient is it for optimizing a
> billion 19-limit linear temperaments?

Why in the world do you want to do that? But simplex algorithms run
pretty fast for low dimensional problems like that in particular.

🔗Paul Erlich <perlich@aya.yale.edu>

11/1/2005 12:49:55 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
>
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
>
> > If we can put your proposal on some good foundations . . . that has
> > yet to be seen (by me at least). I was disappointed when Gene
> > informed us that beyond the 5-limit, minimax Kees doesn't agree
> > (even modulo stretching) with TOP...
>
> In practice in the 7 and 11 limits it seems it nearly always does.

By "in practice" do you mean "approximately"? What's the minimax Kees
pajara tuning?

🔗Gene Ward Smith <gwsmith@svpal.org>

11/1/2005 1:48:46 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> > In practice in the 7 and 11 limits it seems it nearly always does.
>
> By "in practice" do you mean "approximately"?

No, I mean "most of the time", and it seems that it happens more often
than one might suppose for the better 7-limit temperaments.

> What's the minimax Kees pajara tuning?

It's stretched TOP. This is the Kees minimax tuning for just about any
rank two 7-limit temperament worthy of mention.

🔗Paul Erlich <perlich@aya.yale.edu>

11/1/2005 2:30:41 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> Paul Erlich wrote:
>
> >>Tenney weighting is like a null-weighting for primes-based measures
> >>because it means all intervals are considered equivalent in so far
> >>as that's possible. But if you make it the only criterion it gives
> >>far too much weight to complex intervals. I don't think you even
> >>get a stable optimization. That's why you have to enforce a prime
> >>limit with these weighted schemes.
> >
> > I'm dubious on this part of your claims. Can you set up an unstable
> > optimization to show this?
>
> For the least squares weighted prime function of the nearest prime
> approximation to 73-equal:
>
> 1000 0.000386288890546 1.00000525939
> 2000 0.000342631907251 0.999999878426
> 3000 0.000320740521709 1.00000066141
> 4000 0.00030851462036 0.999999470906
> 5000 0.000298744238076 1.00000050417
> 6000 0.000291497890021 0.99999998381
> 7000 0.000285393298895 0.999999137131
> 8000 0.000279937136959 1.00000022932
> 9000 0.000276009346747 0.999999628759
>
> The left hand column is the number of primes,

!!! You need to specify your mapping before beginning the
optimization. In this case, you'd need mappings with hundreds or
thousands of elements.

> the middle column is the weighted RMS and the final column is the
> stretched octave. You could say it's stable about 1.0, but it is at
> least useless as a way of getting the optimum stretch. Here, for
> comparison, is 72-equal:
>
> 1000 0.000373784710919 1.0000057801
> 2000 0.000333632022278 1.00001005312
> 3000 0.000317872256548 1.00000028725
> 4000 0.000306200832323 0.999998925005
> 5000 0.000297006549111 0.999996450292
> 6000 0.00029044672927 0.999997819137
> 7000 0.000285145297883 0.999998100274
> 8000 0.000280218707906 0.999997806044
> 9000 0.000276665844039 0.999998361876
>
> Beyond 8000 primes, 73-equal is closer to JI. In general, the error
> has more to do with the number of primes and steps per octave than
> any properties of the temperament in sensible limits.

I have no idea what you're doing with this many primes, nor what this
is supposed to show, I'm afraid.

> >>It guarantees that intervals you don't consider are more consonant
> >>than most of those you do, by the very same complexity measure
> >>you're looking for.
> >
> > I'm not sure exactly what you mean by this.
>
> 15:8 is part of the 5-limit, and so part of a weighted primes
> optimization.

I don't need to explicitly include it. I could claim that its tuning
is purely a result of the tuning of simpler intervals.

> 7:4 is outside the 5 prime-limit, and so isn't considered
> at all.

If you don't care to have it in your scales, that strikes me as the
right thing to do.

> 18984375:16777216 is within the prime limit, and so it carries
> some weight, if not very much.

I don't need to explicitly include it. I could claim that its tuning
is purely a result of the tuning of simpler intervals.

> > This is the great thing about TOP. You can place the hard limit
> > just about anywhere you want and it won't change the results! I
> > admit that TOP may seem pretty unjustified if you don't realize
> > this. But I tried to point it out in my paper . . . And the results
> > are perfectly "stable", in every sense.
>
> If that's true, it certainly is news.

It says so in the paper. Perhaps I need to say it more times in there.

> But how come "dimipent" and
> "dimisept" are given different periods, generators and damages in
your
> paper? As dimipent is only dimisept without the prime 7, it
certainly
> looks like the limit does change the result.

I mean the hard limit on the complexity of the ratios included in the
optimization, *given* the prime limit, can be placed just about
anywhere you want, and the optimal result won't change.

> >>So we end up back with Tenney weighting and a prime limit. It's a
> >>pragmatic approach that gives sensible results. But, as it's still
> >>a compromise,
> >
> > I don't see it as a compromise. What is being compromised?
>
> We have a weighting and a limit that isn't based on complexity. That
> compromises simplicity against only having the weighting, or
> sensibleness against considering all primes and getting rubbish out
> (unless you can demonstrate otherwise).

I don't understand why this argument applies to TOP and not to all
the other methods, which also use a limit "that isn't based on
complexity". But with TOP, you can consider, say, all the intervals
with an integer limit of 10, and Tenney-weight their errors. With a
prime limit of 7, there are no "holes" -- there are no intervals
missing that "compromise" the weighting. High primes don't need to
enter the "compromise" scenario since you never have to explicitly
weight the complex intervals anyway.

> >>My big, conceptual problem with TOP is that the minimax isn't
> >>appropriate for a weighted scheme.
> >
> > Sure it is -- just as appropriate as with equal-weighting. I don't
> > see why one would say otherwise.
>
> This is the thing I haven't mentioned before, because it would mean
> taking the time to set out all the arguments. But I'm sure it makes
> sense.

Weighted minimax is used in many fields. I look forward to your
arguments :)

> >>The point of the minimax is that you
> >>put a limit on how bad the tuning can get.
> >
> > Yes, and "how bad" is clearly something which can't be in units
of
> > cents for all the different intervals you're looking at, IMO.
>
> I think it can, and at least this is the best guess if we don't know
> what the true function should be.

I think it's pretty clear from the various models (such as harmonic
entropy) and testimonies we have that by and large, simpler intervals
are more sensitive to mistuning than slightly more complex ones. Even
George Secor, who has used equal-weighted minimax, ended up agreeing
with this!

> >>The point of weighting is that you favour the most important
> >>intervals, and don't worry so much about the less important ones.
> >>If simple intervals carry most of the harmony you care about, their
> >>tuning is more important than the more complex ones.
> >
> > Yes, and the weighting is exactly what takes this into account.
>
> Uh, yes.
>
> >>And good intervals in a chord can make up for bad ones.
> >
> > Possibly, though minimax is good enough for George Secor. Note that
> > for a triad, max error and sum-absolute-errors amount to the same
> > thing.
>
> I don't think major seventh chords are as dissonant as other 15-limit
> chords, so there must be more to it than the most dissonant interval
> (or odd limit/Tenney complexity).

Of course! But this is a completely different question. First of all,
15:8 can be the most dissonant interval while being the most damaged,
or while being the least damaged, interval in the chord, depending on
the tuning. This is closer to what we're actually discussing here. In
TOP, the damage on the most dissonant interval matters least, far
from that interval mattering most as you imply above. Secondly, we're
not talking about a single chord, we're talking about an entire
tuning system, and minimax can mean a lot of different things in that
context. Your example seems like a perfect argument for TOP, in fact,
in that major seventh chords participate in a lot of the consonances
in the lattice, and you don't want any of these consonances to be too
far off, while the 15:8 itself can be further off while doing less
damage to the chord as a whole.

> >>These considerations lead to some kind of mean error as the thing
> >>to be optimized.
> >
> > OK, I'm willing to delve further into your L2 variant of TOP . . .
> > we really need to think more about what it means, though.
>
> It's the average

You mean RMS?

> error of the primes. That's not a difficult concept.

The idea is to work through the implications for *all* the intervals,
as has been done with TOP (see, for example, footnote xxvi in my
paper). Otherwise, there seems little justification for going along
with something that just looks at the primes and nothing else.

> >>I really don't think that the weighted maximum error is anything we
> >>can hear. Ideally (if we had a theory we could trust) we'd do a
> >>weighted-mean optimization provided an unweighted-minimax hurdle is
> >>passed.
> >
> > Why unweighted? You haven't given any arguments on that.
>
> It makes sense that this would be the case. The worst interval is the
> one that's most out of tune. If anything, I'm with Partch and would
> expect the more complex intervals to have a smaller error.

I agree if you're saying that we should first check that the ratios
or chords we intend to use as basic consonances aren't closer to
other, equally simple ratios or chords than the ones they're supposed
to approximate. But I'd rather leave this to the user, who will
simply chuck out some of the TOP systems as a result, and keep the
rest of them -- specifying "the ratios or chords we intend to use as
basic consonances" is more than I need or want for the purpose of
setting out some TOP tunings. The hurdle can always be placed at the
end without affecting the results.

> I think the various dissonance graphs back this up. If you draw a
> line for a particular worst dissonance level, the complex intervals
> have a narrower range than the simpler ones.

Right, but a given mistuning has a considerably greater impact on the
consonance of the simpler interval than it does on the consonance of
the complex ones. So if our target harmony is all the intervals in a
big harmonic-series chord, say, the closeness of the sound to JI is
best judged by weighting the errors on the simpler intervals *more*.

> The absolute error also tells you how far a performer would have to
> move the pitch to get JI on a suitably flexible instrument.

True, and that's one reason I've liked it in the past. It's a good
thing to consider when your ultimate goal is adaptive JI.

> >>Quite small intervals sound most dissonant, and very large
> >>intervals are inaudible. Yet they have the same Tenney weight, and
> >>so are considered equally in the optimization if that's your only
> >>criterion.
> >
> > That's not a fair statement. The optimization can be "considered"
> > in many different ways while still yielding the same results.
>
> Only if you use an algorithm that you know gives the same results, in
> which case you're implicitly considering all the other intervals by
> using that algorithm.

You could turn the argument around and say that no matter how you
look at it, you're implicitly considering the primes and only the
primes. Either way, the argument is invalid. The mathematical
identity of these various results doesn't mean that if one accepts
one set of desiderata that lead to it, you're automatically accepting
some other set of desiderata that also lead to it.

> >>I don't know the solution to this, because there isn't enough
> >>theory to rest on, but for the best solution I'd want some kind of
> >>size-weighted minimax to make sure the smallest important intervals
> >>don't get out of control.
> >
> > Again, you appear to be missing another important feature of TOP,
> > which is that you're free to consider only the intervals within
> > some range of interest for your optimization, and you still get
> > exactly the same result!
>
> No, if you're using TOP you're considering all intervals because you
> chose TOP which makes them the same.

Huh? Makes what the same? Is this the same type of argument as above?
You can't very well claim that I'm necessarily considering the wider
intervals in the optimization when ignoring them gives the results I
present.

> >>Use the octave stretching to optimize 9:8 and 8:7, and leave 3:2
> >>and 7:4 to look after themselves.
> >
> > I don't understand that. Aren't they (the latter intervals) more
> > damaged by mistuning?
>
> Yes, that's what will happen if you give them lower weight.

Well, it's not something my ears seem to like.

> > Well, for triads, mean and minimax give the same thing, while RMS
> > gives something else. So I'm not sure exactly what you mean by
> > this . . .
>
> RMS is a kind of mean. That's what the "M" stands for.

Yes I know but you didn't clarify your statement, you just snipped it.

> >>It assumes errors always add up, whereas sometimes
> >>they cancel out, so 5:3 and 15:1 are always given the same error.
> >
> > You lost me.
>
> An RMS of primes means the sign of the error is ignored. The error of
> 5:3 is the difference between the errors of 5 and 3, and the error of
> 15:1 is the sum of the errors of 5 and 3. A prime-based measure
> ignoring the sign will always assign the sum of the absolute errors
> of 5 and 3 to both ratios.

A prime-based measure always uses sum of absolute errors?

> >>That's why you have to optimize the octave. If the errors would
> >>tend to cancel out in an odd-limit, the RMS of primes is
> >>unrepresentative. But the octave stretching will compensate for
> >>this. So, paradoxically, the octave-stretched optimum may be most
> >>valid after you unstretch the octaves.
> >
> > Can you be a little more elaborate?
>
> Okay, prime-weighted schemes can be parameterized to use
> weighted-intervals which I call w(p). w(p) is the size of the
> tempered interval representing the prime p divided by the size of
> p:1. So, for an equal temperament with pure octaves this is
>
> w(p) = n(p)/d/log2(p)
>
> where n(p) is the number of steps approximating p, and d is the
> number of steps to an octave. You can think of TOP in terms of w(p)
> to an extent, in that it can be defined in terms of w(p) although you
> won't get the right weighting for complex intervals if you don't know
> anything else. The simplest error measure is some average comparing
> these intervals to 1:
>
> e(p) = w(p) - 1
>
> Average error = sqrt(<e^2>)
>
> That is, the RMS. <..> means the mean, and ^2 means squared. The
> problem is that 19-equal is underestimated in the 5-limit if we keep
> pure octaves. The errors in 3 and 5 cancel out in 6:5, but the simple
> RMS ignores this. This is where we point out to newbies that they're
> getting it wrong, and previously we suggested odd limits.
>
> An alternative is to take the standard deviation of the errors
> instead. Then, 19-equal looks good because the two 5-prime errors are
> about the same. The problem is that 19-equal is now overestimated,
> because its errors cancel out but they're still finite. Getting
> errors to be the same isn't enough. We really want errors that are
> close to each other, and close to zero. So, well, add zero to the
> standard deviation.
>
> With pure octaves, w(2) is always 1.0 and so e(2) is always zero.
> Hence, the standard deviation of the errors already contains a zero
> if we include the octave explicitly, but keep it pure. So, a good bet
> for an octave-equivalent measure is
>
> std(e) = sqrt(<e^2> - <e>^2)
>
> where the octaves are still included in the calculation.
>
>
> It happens that the least squares, prime-weighted error is
>
> sqrt(1 - <w>^2/<w^2>)
>
> which can be rewritten
>
> sqrt[(<w^2> - <w>^2)/<w^2>]
>
> or
>
> std(w)/rms(w)
>
> This is an interesting formula, because it's invariant with respect
> to the octave stretch. If you multiply each w by a constant, the
> effect on the standard deviation cancels out that on the RMS. It also
> happens that, because w(p) is only e(p) plus a constant, the two sets
> have the same standard deviation. That means our optimal error is
>
> std(e)/rms(w)
>
> If the temperament is at all sensible, each w(p) will be close to 1,
> and so this total error is close to std(e). Therefore it agrees with
> the octave-equivalent measure of standard deviation of weighted prime
> errors with a zero added in. It takes account of the sizes and signs
> of the prime errors, and doesn't assume a particular value for the
> octave.

I'll have to reread all this later . . . running out of time today.

> >>The cleverest part of TOP is that it can still be applied with pure
> >>octaves and the Kees metric.
> >
> > I don't know what you mean. Though TOP, when stretched or
> > compressed to have pure octaves, usually coincides with minimax
> > Kees in the 5-limit, it doesn't in higher limits, according to Gene.
>
> That's not what I mean. The RMS of primes measure is easy to find,
> and easy to show invalid when you take the octaves out. The bit about
> the standard deviation above is something I hadn't noticed until
> after you showed us TOP. The clever bit about TOP is that it's a
> weighted measure that has a natural octave-equivalent definition:
> your minimax Kees. That it's at least close to stretched TOP is nice,
> but a side issue.

I'll have to consider this later.

> > If we can put your proposal on some good foundations . . . that has
> > yet to be seen (by me at least). I was disappointed when Gene
> > informed us that beyond the 5-limit, minimax Kees doesn't agree
> > (even modulo stretching) with TOP; I'm dubious that replacing
> > minimax with sum-of-squares (assuming we can define the latter
> > appropriately in the Kees case) will bring back the agreement . . .
>
> I thought I proved the relationship for any case of tempering out a
> single comma. That would include the 7-limit planar temperaments, and
> so on.

Interesting. I wonder if Gene would reply. Single-comma TOP is the
only kind that's motivated with the original TOP construction; I
wonder if some other criterion could somehow generalize this
construction to more commas.

🔗Paul Erlich <perlich@aya.yale.edu>

11/1/2005 2:36:41 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
>
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
>
> > > In practice in the 7 and 11 limits it seems it nearly always does.
> >
> > By "in practice" do you mean "approximately"?
>
> No, I mean "most of the time",

So most of the time it nearly always does??

> and it seems that it happens more often
> than one might suppose for the better 7-limit temperaments.

I supposed it happened every time the TOP tuning didn't have pure
octaves, but you said otherwise.

> > What's the minimax Kees
> > pajara tuning?
>
> It's stretched TOP. This is the Kees minimax tuning for just about any
> rank two 7-limit temperament worthy of mention.

Wow. That's certainly not the impression you gave me before, with all
the talk of corners and whatnot. But you did jump to 11-limit for the
one example you gave me where they're different. Any differences for
the systems in my paper? Based on TOP, I told Igliashon that 13-equal
is better in the 7-limit using the Orwell approximation than any other
approximation of the 7-limit in 13-equal. Is this still true based on
Kees?

🔗Gene Ward Smith <gwsmith@svpal.org>

11/1/2005 4:51:24 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> Any differences for
> the systems in my paper?

I'll check, but if you have handy a table of wedgies you could post
here it would be nice.

> Based on TOP, I told Igliashon that 13-equal is better in the 7-limit
> using the Orwell approximation than any other approximation of the
> 7-limit in 13-equal. Is this still true based on Kees?

I don't even know what you mean.

🔗Graham Breed <gbreed@gmail.com>

11/2/2005 7:18:10 AM

Paul Erlich wrote:

>>For the least squares weighted prime function of the nearest prime
>>approximation to 73-equal:
>>
>>1000 0.000386288890546 1.00000525939
>>2000 0.000342631907251 0.999999878426
>>3000 0.000320740521709 1.00000066141
>>4000 0.00030851462036 0.999999470906
>>5000 0.000298744238076 1.00000050417
>>6000 0.000291497890021 0.99999998381
>>7000 0.000285393298895 0.999999137131
>>8000 0.000279937136959 1.00000022932
>>9000 0.000276009346747 0.999999628759
>>
>>The left hand column is the number of primes,
>
> !!! You need to specify your mapping before beginning the
> optimization. In this case, you'd need mappings with hundreds or
> thousands of elements.

I did specify it: "nearest prime approximation". I could try listing
all the elements, but I hardly think it would be an appropriate use of
bandwidth.

> I have no idea what you're doing with this many primes, nor what this
> is supposed to show, I'm afraid.

It shows that the weighted error depends too much on ridiculously complex intervals.

>>15:8 is part of the 5-limit, and so part of a weighted primes
>>optimization.
>
> I don't need to explicitly include it. I could claim that its tuning
> is purely a result of the tuning of simpler intervals.

If you choose a prime limit of 5, you include it unless you explicitly exclude it.

>>7:4 is outside the 5 prime-limit, and so isn't considered at all.
>
> If you don't care to have it in your scales, that strikes me as the
> right thing to do.

But why does every interval involving a prime of 7 or larger have to come with it?

> I mean the hard limit on the complexity of the ratios included in
> the optimization, *given* the prime limit, can be placed just about
> anywhere you want, and the optimal result won't change.

Yes, I made it perfectly clear in my original message that the "hard
limit" *is* the prime limit. So any other limit you may happen to
think of is irrelevant in response to it.

>>We have a weighting and a limit that isn't based on complexity. That
>>compromises simplicity against only having the weighting, or
>>sensibleness against considering all primes and getting rubbish out
>>(unless you can demonstrate otherwise).
>
> I don't understand why this argument applies to TOP and not to all
> the other methods, which also use a limit "that isn't based on
> complexity". But with TOP, you can consider, say, all the intervals
> with an integer limit of 10, and Tenney-weight their errors. With a
> prime limit of 7, there are no "holes" -- there are no intervals
> missing that "compromise" the weighting. High primes don't need to
> enter the "compromise" scenario since you never have to explicitly
> weight the complex intervals anyway.

It applies to any method that uses Tenney weighting with a prime limit. Where do I say otherwise?

> I think it's pretty clear from the various models (such as harmonic
> entropy) and testimonies we have that by and large, simpler intervals
> are more sensitive to mistuning than slightly more complex ones. Even
> George Secor, who has used equal-weighted minimax, ended up agreeing
> with this!

It doesn't matter how sensitive they are to mistuning unless they're the most painful intervals -- if you use a minimax. With a weighted mean, you can give most weight to the most sensitive intervals.

> Of course! But this is a completely different question. First of
> all, 15:8 can be the most dissonant interval while being the most
> damaged, or while being the least damaged, interval in the chord,
> depending on the tuning. This is closer to what we're actually
> discussing here. In TOP, the damage on the most dissonant interval
> matters least, far from that interval mattering most as you imply
> above. Secondly, we're not talking about a single chord, we're
> talking about an entire tuning system, and minimax can mean a lot of
> different things in that context. Your example seems like a perfect
> argument for TOP, in fact, in that major seventh chords participate
> in a lot of the consonances in the lattice, and you don't want any of
> these consonances to be too far off, while the 15:8 itself can be
> further off while doing less damage to the chord as a whole.

To me, it's the same question that leads to an unbounded, weighted average. You can't be sure what supplementary intervals you might end up using, so you try to get them as good as possible but don't make them as important as the primary consonances.

I disagree with what you say about TOP. Every interval you consider (and you get the same result whether you consider them or not, so this is a fairly nebulous consideration) is exactly as important as every other interval. The only difference is that some intervals are allowed more scope for mistuning than others. A more complex interval can take more mistuning for the same amount of damage.

The point of this is "good intervals in a chord make up for the bad ones." That was my original statement that these comments stem from. The minimax is inappropriate in that context. By enforcing the minimax, you assume that the worst interval (however you define it) carries all the badness of the chord. If the goodness of the chord is a trade-off between the good and bad intervals, then it should also be a trade-off between well and poorly tuned intervals -- hence a mean and not a minimax.

>>It's the average
>
> You mean RMS?

You can do any average of absolute errors you like. The RMS happens to be the simplest to optimize.

>>error of the primes. That's not a difficult concept.
>
> The idea is to work through the implications for *all* the intervals,
> as has been done with TOP (see, for example, footnote xxvi in my
> paper). Otherwise, there seems little justification for going along
> with something that just looks at the primes and nothing else.

That's exactly the attitude I tried to argue against, and you keep avoiding by saying "it doesn't matter which intervals you choose".

> I agree if you're saying that we should first check that the ratios
> or chords we intend to use as basic consonances aren't closer to
> other, equally simple ratios or chords than the ones they're supposed
> to approximate. But I'd rather leave this to the user, who will
> simply chuck out some of the TOP systems as a result, and keep the
> rest of them -- specifying "the ratios or chords we intend to use as
> basic consonances" is more than I need or want for the purpose of
> setting out some TOP tunings. The hurdle can always be placed at the
> end without affecting the results.

It could be placed either end. What I'm saying is that the hurdle of worst-error is different in nature to the consideration of average mistuning.

>>I think the various dissonance graphs back this up. If you draw a line for a particular worst dissonance level, the complex intervals have a narrower range than the simpler ones.
>
> Right, but a given mistuning has a considerably greater impact on the consonance of the simpler interval than it does on the consonance of the complex ones. So if our target harmony is all the intervals in a big harmonic-series chord, say, the closeness of the sound to JI is best judged by weighting the errors on the simpler intervals *more*.

The point of minimax, as I said before, is that all intervals have *equal* weight. I find it so obvious that your argument there applies to a mean, and not a minimax, that I'm not sure how to make it explicit.

> You could turn the argument around and say that no matter how you > look at it, you're implicitly considering the primes and only the > primes. Either way, the argument is invalid. The mathematical > identity of these various results doesn't mean that if one accepts > one set of desiderata that lead to it, you're automatically accepting > some other set of desiderata that also lead to it.

So what other conditions lead to a Tenney weighting?

>>>Again, you appear to be missing another important feature of TOP, which is that you're free to consider only the intervals within some range of interest for your optimization, and you still get exactly the same result!
>>
>>No, if you're using TOP you're considering all intervals because you chose TOP which makes them the same.
>
> Huh? Makes what the same? Is this the same type of argument as above? You can't very well claim that I'm necessarily considering the wider intervals in the optimization when ignoring them gives the results I present.

As you keep making the same argument, I keep giving the same reply. Or am I supposed to think up innovative new responses?

>>>I don't understand that. Aren't they (the latter intervals) more >>>damaged by mistuning?
>>
>>Yes, that's what will happen if you give them lower weight.
> > Well, it's not something my ears seem to like.

So you accept that a different weighting can give a different emphasis to the small intervals?

>>>Well, for triads, mean and minimax give the same thing, while RMS >>>gives something else. So I'm not sure exactly what you mean by >>>this . . .
>>
>>RMS is a kind of mean. That's what the "M" stands for.
> > Yes I know but you didn't clarify your statement, you just snipped it.

The mean (absolute) error of 2, 3 and 5 is going to be different to the minimax error. Why do you say otherwise? What do triads have to do with it anyway?

> A prime-based measure always uses sum of absolute errors?

An RMS of prime errors always ignores the sign. That comes from the square of a real number always being positive. Summing is a part of calculating the mean.

Graham

🔗Graham Breed <gbreed@gmail.com>

11/2/2005 7:18:27 AM

Gene Ward Smith wrote:

> I don't know what you are using for a weighting, but it probably isn't
> enough; you might try the p^(-1/2) as a weight for the prime p (apeing
> the Zeta function on the critical line.)

I'm using Tenney weighting: 1/log(p). Yes, it isn't enough for stability. But what I originally said is that any weighting that is stable over all primes will be too biased towards the very simple ratios.
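For concreteness, here is a rough sketch of the kind of stability check being discussed: take the nearest-prime mapping in an equal temperament (73-equal here), find the least-squares optimal stretch over the first N primes, and report the Tenney-weighted RMS error. The use of sympy.primerange is an assumption; any prime generator would do, and this is a reconstruction rather than anyone's actual code.

from math import log2, sqrt
from sympy import primerange

def stability(n_primes, edo=73):
    # first n_primes primes (the 9000th prime is well below 10**6)
    primes = []
    for p in primerange(2, 10**6):
        primes.append(p)
        if len(primes) == n_primes:
            break
    # Tenney-weighted size of each prime per unit of step stretch,
    # using the nearest-prime val entry round(edo * log2(p))
    a = [round(edo * log2(p)) / (edo * log2(p)) for p in primes]
    stretch = sum(a) / sum(x * x for x in a)   # least-squares optimum
    rms = sqrt(sum((x * stretch - 1) ** 2 for x in a) / len(a))
    return rms, stretch

for n in (1000, 3000, 9000):
    print(n, *stability(n))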

Graham

🔗Graham Breed <gbreed@gmail.com>

11/2/2005 7:17:56 AM

Gene Ward Smith wrote:

> Why not? Routines are as common as dirt.

I can do the weighted-prime least squares optimization with 14 lines of Python. No libraries and mostly simple arithmetic. If the TOP requires a specialist library routine, it can't be as simple.

>>Can you find one for Python?
>
> Probably.

Then where is it?

>>How efficient is it for optimizing a >>billion 19-limit linear temperaments?
> > Why in the world do you want to do that? But simplex algorithms run
> pretty fast for low dimensional problems like that in particular.

All combinations of 100 commas would do it comfortably. The least squares error of weighted primes is really snot-drenchingly fast. A single pass over the primes, and no unpredictable branches to break the pipeline. It is possible to get simpler (say, set the weighted, signed prime error to zero) but at least this is an optimization of some sensible kind of average.
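For the record, a minimal sketch of a weighted-prime least-squares optimization of this kind (this is not Graham's actual 14 lines; the mapping is a hypothetical 5-limit meantone example with period = octave and generator = fifth):

from math import log2

# Hypothetical example: 5-limit meantone.
# 2 -> 1 period, 3 -> 1 period + 1 generator, 5 -> 4 generators.
primes  = [2, 3, 5]
periods = [1, 1, 0]
gens    = [0, 1, 4]

# Tenney-weighted errors are (m*P + n*G)/log2(p) - 1, linear in P and G,
# so the least-squares optimum falls out of the 2x2 normal equations.
a = [m / log2(p) for m, p in zip(periods, primes)]
b = [n / log2(p) for n, p in zip(gens, primes)]
aa = sum(x * x for x in a)
bb = sum(y * y for y in b)
ab = sum(x * y for x, y in zip(a, b))
det = aa * bb - ab * ab
P = (bb * sum(a) - ab * sum(b)) / det   # optimal period, in octaves
G = (aa * sum(b) - ab * sum(a)) / det   # optimal generator, in octaves

print(P * 1200, G * 1200)   # roughly 1201.4 and 697.0 cents for this example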

Graham

🔗Gene Ward Smith <gwsmith@svpal.org>

11/2/2005 12:22:06 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:

> Then where is it?

Did you try googling for it? Python simplex returns 84000 hits, the
first of which already gives you code.

🔗Graham Breed <gbreed@gmail.com>

11/2/2005 12:59:09 PM

Gene Ward Smith wrote:

> Did you try googling for it? Python simplex returns 84000 hits, the
> first of which already gives you code.

How am I supposed to Google unless I know what I'm Googling for? In this case, the first few hits give dead links. But I want a Nelder-Mead Simplex, do I? It looks a bit drastic for an apparently simple optimization. How do I specify the TOP function? Can I assume the worst weighted prime error is also the minimax?

Graham

🔗Gene Ward Smith <gwsmith@svpal.org>

11/2/2005 2:36:03 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:

> How am I supposed to Google unless I know what I'm Googling for? In
> this case, the first few hits give dead links. But I want a
> Nelder-Mead Simplex, do I?

No, stay away from that. You want a linear programming routine, and
simplex would be fine for the low-dimension problems of music. Other
methods are the projective method and the predictor-corrector method.
Googling on "linear programming python" brings up 743000 hits, the
first of which is a linear programming routine called PuLP. I have no
idea what the best routine would be, but clearly they are available.

It looks a bit drastic for an apparently simple
> optimization. How do I specify the TOP function?

You want a tuning <x2, x3, ..., xp|, so you can consider the x's to be
variables to be solved for. You have a certain number of commas in the
comma basis for the temperament, and you add these to start out with,
as equations: c2x2 + c3x3 + ... + cpxp = 0. Then you add inequalities,
for each prime q you add r >= 1-xq/log2(q) and r >= xq/log2(q)-1,
which allows the routine to deal with r >= |xq/log2(q)-1|. Then you
minimize r subject to these constraints, to which you may add
nonnegativity; the values xq where r reaches the minimum are your tuning.
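A sketch of this formulation in code, using scipy.optimize.linprog as the linear programming routine (the choice of routine and the 5-limit meantone comma basis {81/80} are assumptions made for the example):

from math import log2
from scipy.optimize import linprog

primes = [2, 3, 5]
commas = [[-4, 4, -1]]              # monzos of the vanishing commas (81/80)
logs = [log2(p) for p in primes]
n = len(primes)

c = [0.0] * n + [1.0]               # minimize r, the last variable
A_eq = [m + [0.0] for m in commas]  # comma equations c2*x2 + ... + cp*xp = 0
b_eq = [0.0] * len(commas)

A_ub, b_ub = [], []
for i, lg in enumerate(logs):
    row = [0.0] * (n + 1)
    row[i], row[n] = -1.0 / lg, -1.0    # r >= 1 - xq/log2(q)
    A_ub.append(row); b_ub.append(-1.0)
    row = [0.0] * (n + 1)
    row[i], row[n] = 1.0 / lg, -1.0     # r >= xq/log2(q) - 1
    A_ub.append(row); b_ub.append(1.0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print([x * 1200 for x in res.x[:n]])    # TOP primes in cents
                                        # (about 1201.7, 1899.3, 2790.3 for meantone)

The default variable bounds in linprog are nonnegativity, which matches the optional extra condition mentioned above.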

🔗Paul Erlich <perlich@aya.yale.edu>

11/3/2005 12:27:26 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
>
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
>
> > Any differences for
> > the systems in my paper?
>
> I'll check, but if you have handy a table of wedgies you could post
> here it would be nice.

I know you insist that wedgies are wedge products of vals and not
wedge products of commas, but since taking the dual is so easy, I'll
give you the latter anyway :)

Commas' bivector Horagram name
[[-14 0 8 0 -5 0>> Blacksmith
[[2 -5 3 -4 4 -4>> Dimisept
[[16 -6 -4 2 4 -1>> Dominant
[[-14 1 7 -6 0 -3>> August
[[-2 -12 11 4 -4 -2>> Pajara
[[20 -4 -8 -1 8 -2>> Semaphore
[[-12 13 -4 -10 4 -1>> Meantone
[[4 7 -8 -8 8 -2>> Injera
[[13 8 -14 2 3 4>> Negrisept
[[14 -18 7 6 0 -3>> Augene
[[-7 12 -6 3 -5 6>> Keemun
[[28 -19 0 12 0 0>> Catler
[[-5 1 2 10 -10 6>> Hedgehog
[[-30 6 12 -2 -9 1>> Superpyth
[[-5 1 2 -13 9 -7>> Sensisept
[[1 20 -17 -2 2 6>> Lemba
[[-28 18 1 -6 -5 3>> Porcupine
[[32 -17 -4 9 4 -1>> Flattone
[[25 -5 -10 12 -1 5>> Magic
[[-3 13 -9 6 -6 8>> Doublewide
[[-21 12 2 3 -10 6>> Nautilus
[[-16 -12 19 4 -9 -2>> Beatles
[[8 9 -12 -11 12 -3>> Liese
[[36 -10 -12 1 12 -3>> Cynder
[[27 7 -21 8 3 7>> Orwell
[[-10 25 -15 -14 8 1>> Garibaldi
[[9 -17 9 -7 9 -10>> Myna
[[15 20 -25 -2 7 6>> Miracle

bonus:

[[-34 22 1 18 -27 18>> Ennealimmal

I'd also like to know, when the TOP tuning is not unique (such as for
5-limit Blackwood), whether a stretched Kees tuning *could* be a (non-
canonical) TOP tuning.

> > Based on TOP, I told Igliashon that 13-equal
> > is better in the 7-limit using the Orwell approximation than any
other
> > approximation of the 7-limit in 13-equal. Is this still true
based on
> > Kees?
>
> I don't even know what you mean.

I think I could state this as: Is the best 13-equal val for the 7-
limit the one which is an Orwell val?

🔗Paul Erlich <perlich@aya.yale.edu>

11/3/2005 2:21:31 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> Paul Erlich wrote:
>
> >>For the least squares weighted prime function of the nearest
prime
> >>approximation to 73-equal:
> >>
> >>1000 0.000386288890546 1.00000525939
> >>2000 0.000342631907251 0.999999878426
> >>3000 0.000320740521709 1.00000066141
> >>4000 0.00030851462036 0.999999470906
> >>5000 0.000298744238076 1.00000050417
> >>6000 0.000291497890021 0.99999998381
> >>7000 0.000285393298895 0.999999137131
> >>8000 0.000279937136959 1.00000022932
> >>9000 0.000276009346747 0.999999628759
> >>
> >>The left hand column is the number of primes,
> >
> > !!! You need to specify your mapping before beginning the
> > optimization. In this case, you'd need mappings with hundreds or
> > thousands of elements.
>
> I did specify it: "nearest prime approximation". I could try
listing
> all the elements, but I hardly think it would be an appropriate use
of
> bandwidth.

As you know, I don't endorse "nearest prime approximation", so I see
no reason to take these kinds of considerations into account. Do
you?

> > I have no idea what you're doing with this many primes, nor what
this
> > is supposed to show, I'm afraid.
>
> It shows that the weighted error depends too much on ridiculously
> complex intervals.

It doesn't show that to me. You have to hold the mapping constant if
you really want to show something like that.

> >>15:8 is part of the 5-limit, and so part of a weighted primes
> >>optimization.
> >
> > I don't need to explicity include it. I could claim that its
tuning
> > is purely a result of the tuning of simpler intervals.
>
> If you choose a prime limit of 5, you include it unless you
explicitly
> exclude it.

No I don't. I get the same optimal tuning either way.

> >>7:4 is outside the 5 prime-limit, and so isn't considered
> >>at all.
> >
> > If you don't care to have it in your scales, that strikes me as
the
> > right thing to do.
>
> But why does every interval involving a prime of 7 or larger have
to
> come with it?

You mean in the optimization? It doesn't -- you get the same answer
with virtually any cutoff on the complexity of these intervals.
Otherwise, they come with it simply because the lattice extends
infinitely in all directions; every traceable move in the lattice
corresponds to some rational interval.

> >>We have a weighting and a limit that isn't based on complexity.
> > That
> >>compromises simplicity against only having the weighting, or
> >>sensibleness against considering all primes and getting rubbish
out
> >>(unless you can demonstrate otherwise).
> >
> > I don't understand why this argument applies to TOP and not to
all
> > the other methods, which also use a limit "that isn't based on
> > complexity". But with TOP, you can consider, say, all the
intervals
> > with an integer limit of 10, and Tenney-weight their errors. With
a
> > prime limit of 7, there are no "holes" -- there are no intervals
> > missing that "compromise" the weighting. High primes don't need
to
> > enter the "compromise" scenario since you never have to
explicitly
> > weight the complex intervals anyway.
>
> It applies to any method that uses Tenney weighting with a prime
>limit.

But what about all the other methods (which is what I was asking
about)?

> Where do I say otherwise?

Huh. If we replace "TOP" above with "any method that uses Tenney
weighting with a prime limit", can we proceed with this conversation?

> > I think it's pretty clear from the various models (such as
harmonic
> > entropy) and testimonies we have that by and large, simpler
intervals
> > are more sensitive to mistuning than slightly more complex ones.
Even
> > George Secor, who has used equal-weighted minimax, ended up
agreeing
> > with this!
>
> It doesn't matter how sensitive they are to mistuning unless
>they're the
> most painful intervals

I completely disagree! If a mistuning makes a pleasant interval
painful, it matters a lot more than if a mistuning makes an already
painful interval stay at about the same level of pain.

> -- if you use a minimax.

Don't follow.

> With a weighted mean,
> you can give most weight to the most sensitive intervals.

You can do that with a weighted mean of any kind, including the
weighted minimax kind (an L_infinity weighted mean).

> > Of course! But this is a completely different question. First of
all,
> > 15:8 can be the most dissonant interval while being the most
damaged,
> > or while being the least damaged, interval in the chord,
depending on
> > the tuning. This is closer to what we're actually discussing
here. In
> > TOP, the damage on the most dissonant interval matters least, far
> > from that interval mattering most as you imply above. Secondly,
we're
> > not talking about a single chord, we're talking about an entire
> > tuning system, and minimax can mean a lot of different things in
that
> > context. Your example seems like a perfect argument for TOP, in
fact,
> > in that major seventh chords participate in a lot of the
consonances
> > in the lattice, and you don't want any of these consonances to be
too
> > far off, while the 15:8 itself can be further off while doing
less
> > damage to the chord as a whole.
>
> To me, it's the same question that leads to an unbounded, weighted
> average.

How so?

> You can't be sure what supplementary intervals you might end
> up using, so you try to get them as good as possible but don't make
them
> as important as the primary consonances.

OK, so what's the problem?

> I disagree with what you say about TOP. Every interval you
consider
> (and you get the same result whether you consider them or not, so
this
> is a fairly nebulous consideration) is exactly as important as
every
> other interval. The only difference is that some intervals are
allowed
> more scope for mistuning than others. A more complex interval can
take
> more mistuning for the same amount of damage.
>
> The point of this is "good intervals in a chord make up for the bad
> ones."

I'm not sure about this leap of yours from "tuning" to "chord". How
do you justify it?

> That was my original statement that these comments stem from.
> The minimax is inappropriate in that context. By enforcing the
minimax,
> you assume that the worst interval (however you define it) carries
all
> the badness of the chord.

Not applicable to the tuning as a whole. You may or may not even be
using more than 2-voice chord in the music. But in the tuning as a
whole, an infinite number of intervals will be "worst". Even ignoring
that, I don't see how this is at all like your major seventh chord
example.

> If the goodness of the chord is a trade-off
> between the good and bad intervals, then it should also be a trade-
off
> between well and poorly tuned intervals -- hence a mean and not a
>minimax.

Perhaps. But since we're dealing with a set of non-independent
intervals, things aren't quite as they seem here. For example, for
triads, mean and minimax are exactly the same (or proportional)
quantities! Either way, there are arguments to be made for minimax,
by George Secor for example.
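One concrete way to see the triad point: for any tempered triad, the equal-weighted sum of absolute errors over its three intervals is always exactly twice the equal-weighted maximum error, so the two criteria rank triads identically. A quick numerical check of that identity (a sketch, not anyone's posted code):

import random

for _ in range(10000):
    e1 = random.uniform(-20, 20)                # error of the lower interval, cents
    e2 = random.uniform(-20, 20)                # error of the upper interval, cents
    errors = [abs(e1), abs(e2), abs(e1 + e2)]   # the outer interval's error is e1 + e2
    assert abs(sum(errors) - 2 * max(errors)) < 1e-9
print("sum of absolute errors = 2 * maximum error for every triad tried")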

> >>It's the average
> >
> > You mean RMS?
>
> You can do any average of absolute errors you like. The RMS
happens to
> be the simplest to optimize.

RMS means you're squaring the absolute errors before taking the
average. If that counts as one way of doing "any average of absolute
errors you like", so should minimax, which replaces squaring with
taking a very high power.

> >>error of the primes. That's not a difficult >concept.
> >
> > The idea is to work through the implications for *all* the
intervals,
> > as has been done with TOP (see, for example, footnote xxvi in my
> > paper). Otherwise, there seems little justification for going
along
> > with something that just looks at the primes and nothing else.
>
> That's exactly the attitude I tried to argue against, and you keep
> avoiding by saying "it doesn't matter which intervals you choose".

Huh? I don't get it. Is the problem that I said *all* intervals
rather than just some reasonable set? Clearly the former implies that
you've taken care of the latter, so that shouldn't be a problem.

If many, many different optimizations lead to the very same tuning,
that only serves as a still stronger justification for the tuning --
far from weakening it below what a single optimization would yield.
Do you disagree?

> > I agree if you're saying that we should first check that the
ratios
> > or chords we intend to use as basic consonances aren't closer to
> > other, equally simple ratios or chords than the ones they're
supposed
> > to approximate. But I'd rather leave this to the user, who will
> > simply chuck out some of the TOP systems as a result, and keep
the
> > rest of them -- specifying "the ratios or chords we intend to use
as
> > basic consonances" is more than I need or want for the purpose of
> > setting out some TOP tunings. The hurdle can always be placed at
the
> > end without affecting the results.
>
> It could be placed either end. What I'm saying is that the hurdle
of
> worst-error is different in nature to the consideration of average
> mistuning.

Which is different from worst mistuning which is different from
average error. Yes, they're all different.

> >>I think the
> >>various dissonance graphs back this up. If you draw a line for a
> >>particular worst dissonance level, the complex intervals have a
> > narrower
> >>range than the simpler ones.
> >
> > Right, but a given mistuning has a considerably greater impact on
the
> > consonance of the simpler interval than it does on the consonance
of
> > the complex ones. So if our target harmony is all the intervals
in a
> > big harmonic-series chord, say, the closeness of the sound to JI
is
> > best judged by weighting the errors on the simpler intervals
*more*.
>
> The point of minimax, as I said before, is that all intervals have
> *equal* weight.

Of course that's not true, since one can do weighted minimax. So I
don't know what you mean.

> I find it so obvious that your argument there

Where?

> applies
> to a mean, and not a minimax, that I'm not sure how to make it
>explicit.

Well, don't give up! And I'm still very much open to the idea of an
L_2 version of TOP, which you came up with an acronym for like a year
and a half ago, and little has been said about since just recently. I
just want to understand better what it *means* (what its implications
are for all the intervals we might care about) in some fairly
finitary, comprehensible way.

> > You could turn the argument around and say that no matter how you
> > look at it, you're implicitly considering the primes and only the
> > primes. Either way, the argument is invalid. The mathematical
> > identity of these various results doesn't mean that if one
accepts
> > one set of desiderata that lead to it, you're automatically
accepting
> > some other set of desiderata that also lead to it.
>
> So what other conditions lead to a Tenney weighting?

The conditions we're talking about all lead to the same tuning *if*
you're using Tenney weighting to begin with -- I don't see how they
could possibly *lead* to a weighting themselves. Given that the
inverse of Tenney Harmonic Distance is the appropriate coefficient
for converting mistuning to damage, you can include just the primes
in the optimization; you can include just the intervals within one
octave; you can include just the intervals below any limit on n*d (as
long as this limit is not lower than the lowest prime); you can
choose just the intervals within one octave that are below almost any
limit on n*d; etc., and you get the same minimax-damage tuning.

> >>>Again, you appear to be missing another important feature of
TOP,
> >>>which is that you're free to consider only the intervals within
> > some
> >>>range of interest for your optimization, and you still get
> > exactly
> >>>the same result!
> >>
> >>No, if you're using TOP you're considering all intervals because
> > you
> >>chose TOP which makes them the same.
> >
> > Huh? Makes what the same? Is this the same type of argument as
above?
> > You can't very well claim that I'm necessarily considering the
wider
> > intervals in the optimization when ignoring them gives the
results I
> > present.
>
> As you keep making the same argument, I keep giving the same
reply. Or
> am I supposed to think up innovative new responses?

I don't know, because I really think my argument is valid! If
ignoring wider intervals could give a different answer, that answer
would sure be interesting and important. Since it doesn't, the
original answer must have wider applicability than we originally
supposed.

> >>>I don't understand that. Aren't they (the latter intervals) more
> >>>damaged by mistuning?
> >>
> >>Yes, that's what will happen if you give them lower weight.
> >
> > Well, it's not something my ears seem to like.
>
> So you accept that a different weighting can give a different
emphasis
> to the small intervals?

In this case, I was accepting that a different weighting can give a
different emphasis to the *simple* intervals.

> >>>Well, for triads, mean and minimax give the same thing, while
RMS
> >>>gives something else. So I'm not sure exactly what you mean by
> >>>this . . .
> >>
> >>RMS is a kind of mean. That's what the "M" stands for.
> >
> > Yes I know but you didn't clarify your statement, you just
snipped it.
>
> The mean (absolute) error of 2, 3 and 5 is going to be different to
the
> minimax error. Why do you say otherwise?

I did? Where?

> What do triads have to do
> with it anyway?

Triads are just a simple example of how our reasoning can lead us
astray if we forget that the terms in our optimization are not all
independent.

> > A prime-based measure always uses sum of absolute errors?
>
> An RMS of prime errors always ignores the sign. That comes from
the
> square of a real number always being positive. Summing is a part
of
> calculating the mean.

All that is quite clear so I suspect my query referred to something
wider than just the RMS case. Perhaps I had misunderstood something
you wrote earlier.

Oh, and I still need to read part of your post from last week --
don't let me forget! (That is, remind me!!)

🔗Paul Erlich <perlich@aya.yale.edu>

11/3/2005 2:26:08 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> Gene Ward Smith wrote:
>
> > I don't know what you are using for a weighting, but it probably
isn't
> > enough; you might try the p^(-1/2) as a weight for the prime p
(apeing
> > the Zeta function on the critical line.)
>
> I'm using Tenney weighting: 1/log(p). Yes, it isn't enough for
> stability. But what I originally said is that any weighting that
is
> stable over all primes will be too biased towards the very simple
ratios.
>
>
> Graham

I don't buy into this notion of "stability". We're looking at more
than just ETs here. Nearest prime approximations in an ET don't mean
very much to me; you've agreed they don't lead to the "best" val.
And regardless of which prime approximations you're talking about, I
don't know why anyone would expect stability when appending more and
more of these onto the val (given that the optimizations we're
talking about require you to specify a val). It doesn't seem like a
reasonable expectation.

🔗Graham Breed <gbreed@gmail.com>

11/4/2005 2:03:21 PM

Gene Ward Smith wrote:

> No, stay away from that. You want a linear programming routine, and
> simplex would be fine for the low-dimension problems of music. Other
> methods are the projective method and the predictor-corrector method.
> Googling on "linear programming python" brings up 743000 hits, the
> first of which is a linear programming routine called PuLP. I have no
> idea what the best routine would be, but clearly they are available.

I found the Nelder-Mead Simplex on the Wayback Machine, anyway. I got it working for a normal (1-D) minimax so it will presumably work for TOP. It won't be especially efficient. Fine for one temperament at a time, but even slower than my current minimax algorithm. The problem is that it's easy to get the gradient for a minimax, but this function only uses the value.
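A rough sketch of that direct approach, running scipy's Nelder-Mead on the worst weighted prime error as a function of period and generator (scipy is an assumption, and the mapping is the same hypothetical meantone example as in the earlier sketches):

from math import log2
from scipy.optimize import minimize

primes  = [2, 3, 5]
periods = [1, 1, 0]
gens    = [0, 1, 4]

def worst_error(x):
    P, G = x
    return max(abs((m * P + n * G) / log2(p) - 1)
               for m, n, p in zip(periods, gens, primes))

# downhill simplex only uses function values, so it copes with the
# non-smooth max(), just not very efficiently
res = minimize(worst_error, x0=[1.0, 0.585], method='Nelder-Mead')
print([v * 1200 for v in res.x])   # should land near the TOP meantone tuning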

PuLP looks interesting -- symbolic algebra with Python syntax. I'll have to look into it. It depends on C code as well. It looks like it'd work for least squares optimizations too :P

> You want a tuning <x2, x3, ..., xp|, so you can consider the x's to be
> variables to be solved for. You have a certain number of commas in the
> comma basis for the temperament, and you add these to start out with,
> as equations: c2x2 + c3x3 + ... + cpxp = 0. Then you add inequalities,
> for each prime q you add r >= 1-xq/log2(q) and r >= xq/log2(q)-1,
> which allows the routine to deal with r >= |xq/log2(q)-1|. Then you
> minimize r subject to these constraints, to which you may add
> nonnegativity; the values xq where r reaches the minimum are your tuning.

I only need to look at the primes, then? I don't have the commas at this stage, but writing a formula for the mistuning of each prime in terms of the generator and period is easy.

Graham

🔗Graham Breed <gbreed@gmail.com>

11/4/2005 2:03:38 PM

Paul Erlich wrote:

>>It applies to any method that uses Tenney weighting with a prime >>limit.
> > But what about all the other methods (which is what I was asking > about)?

What other methods are there? I only know Tenney weighted prime limits, the odd limit (which is an octave-equivalent guess to complexity) and integer-limits (which are complexity).

> Huh. If we replace "TOP" above with "any method that uses Tenney > weighting with a prime limit", can we proceed with this conversation?

It wouldn't make much sense because we're comparing two different Tenney-weighted measures. I'm not sure where this is leading (or where it's gone!)

> I completely disagree! If a mistuning makes a pleasant interval > painful, it matters a lot more than if a mistuning makes an already > painful interval stay at about the same level of pain.

What's "pain"? Are you suggesting "tuning relative to JI" is a directly observable measure? I'm looking at dissonance.

> You can do that with a weighted mean of any kind, including the > weighted minimax kind (an L_infinity weighted mean).

Oh, then I bow to your superior knowledge of statistics. The RMS will give a better average because all intervals have some weight.

TOP will do fine as an approximation to the RMS, or an extreme kind of mean. The only problem is that it's more complicated. If it were simpler, I'd argue for it. As it is, I still have to spend time arguing that the RMS is simpler. I also happen to think that the prime-weighted RMS is better anyway but that's not so important.

>>To me, it's the same question that leads to an unbounded, weighted >>average.
> > How so?

If you allow dissonant but not completely random intervals in chords, you care a bit how close they are to JI. You don't care very much because the good intervals carry most of the consonance. The weighting tells you how much you care. This is only valid for chords that are musically dissonant, in particular where the dissonance is for coloration so that consonance still matters.

>>You can't be sure what supplementary intervals you might end up using, so you try to get them as good as possible but don't make them as important as the primary consonances.
> > OK, so what's the problem?

No problem.

> I'm not sure about this leap of yours from "tuning" to "chord". How > do you justify it?

I'm starting with chords and trying to justify the tuning.

>>That was my original statement that these comments stem from. The minimax is inappropriate in that context. By enforcing the minimax, you assume that the worst interval (however you define it) carries all the badness of the chord.
> > Not applicable to the tuning as a whole. You may or may not even be > using more than 2-voice chord in the music. But in the tuning as a > whole, an infinite number of intervals will be "worst". Even ignoring > that, I don't see how this is at all like your major seventh chord > example.

It isn't like the major seventh example. It's supposed to be the alternative idea. For chords that are musically consonant, and so every interval should be heard as a consonance. You assume that the simplicity of the interval and closeness to JI tell you how dissonant an interval will be. Then you draw a line of maximum allowable dissonance for an interval to be counted as a consonance. You check that all the intervals you want to be consonances are below the line. The minimax error of an odd limit is a way of guessing if intervals are below the line. There's all kinds of arbitrariness and approximation but we don't have a solid enough theory to avoid that anyway.

>>If the goodness of the chord is a trade-off between the good and bad intervals, then it should also be a trade-off between well and poorly tuned intervals -- hence a mean and not a minimax.
> > Perhaps. But since we're dealing with a set of non-independent > intervals, things aren't quite as they seem here. For example, for > triads, mean and minimax exactly the same (or proportional) > quantities! Either way, there are arguments to be made for minimax, > by George Secor for example.

I don't understand the triads example. What are George's arguments? You've mentioned him twice now.

> Huh? I don't get it. Is the problem that I said *all* intervals > rather than just some reasonable set? Clearly the former implies that > you've taken care of the latter, so that shouldn't be a problem.

The problem is that you're looking at consistency rather than validity.

> If many, many different optimizations lead to the very same tuning, > that only serves as a still stronger justification for the tuning -- > far from weakening it below what a single optimization would yield. > Do you disagree?

It's a little stronger, yes. But it isn't that important. The main advantage is that it simplifies the definition and calculation, but any prime based measure will do that.

>>The point of minimax, as I said before, is that all intervals have >>*equal* weight.
> > Of course that's not true, since one can do weighted minimax. So I > don't know what you mean.

I don't see how it makes sense to weight the intervals and call it a minimax. Any weighting has to be on the errors.

> Well, don't give up! And I'm still very much open to the idea of an > L_2 version of TOP, which you came up with an acronym for like a year > and a half ago, and little has been said about since just recently. I > just want to understand better what it *means* (what its implications > are for all the intervals we might care about) in some fairly > finitary, comprehensible way.

It was PORMSWE for Prime Optimum RMS Weighted Error. But it's not a good acronym because you lose the O when it isn't optimized.

The implication is that it's a guess as to what the Tenney weighted average mistuning of any given set of intervals (within the prime limit) will be. If I trusted any way of selecting octave-specific intervals, I could do a comparison.

> Triads are just a simple example of how our reasoning can lead us > astray if we forget that the terms in our optimization are not all > independent.

Can you explain this one?

>>>A prime-based measure always uses sum of absolute errors?

...

> All that is quite clear so I suspect my query referred to something > wider than just the RMS case. Perhaps I had misunderstood something > you wrote earlier.

You chop my paragraphs up so much in the replies I don't know what the original context was.

Graham

🔗Graham Breed <gbreed@gmail.com>

11/4/2005 2:03:49 PM

Paul Erlich wrote:

> I don't buy into this notion of "stability". We're looking at more > than just ETs here. Nearest prime approximations in an ET don't mean > very much to me, you've agreed they're don't lead to the "best" val. > And regardless of which prime approximations you're talking about, I > don't know why anyone would expect stability when appending more and > more of these onto the val (given that the optimizations we're > talking about require you to specify a val). It doesn't seem like a > reasonable expectation.

It doesn't matter if it's the best val or not. However badly the 1000th prime is approximated, it makes no audible difference to the temperament. Any badness measure that it does make a difference to is wrong.

Graham

🔗Paul Erlich <perlich@aya.yale.edu>

11/4/2005 4:02:59 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> Paul Erlich wrote:
>
> > I don't buy into this notion of "stability". We're looking at
more
> > than just ETs here. Nearest prime approximations in an ET don't
mean
> > very much to me, you've agreed they're don't lead to the "best"
val.
> > And regardless of which prime approximations you're talking
about, I
> > don't know why anyone would expect stability when appending more
and
> > more of these onto the val (given that the optimizations we're
> > talking about require you to specify a val). It doesn't seem like
a
> > reasonable expectation.
>
> It doesn't matter if it's the best val or not.

That's not my main point. You're still appending more approximations
to the system either way.

> However badly the 1000th
> prime is approximated, it makes no audible difference to the
> temperament. Any badness measure that it does make a difference to
is
> wrong.

I don't see how this is more damaging to TOP than to any other
method, once the idea of approximating intervals involving the 1000th
prime enter the picture in the first place.

🔗Paul Erlich <perlich@aya.yale.edu>

11/4/2005 4:00:07 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> Paul Erlich wrote:
>
> >>It applies to any method that uses Tenney weighting with a prime
> >>limit.
> >
> > But what about all the other methods (which is what I was asking
> > about)?
>
> What other methods are there? I only know Tenney weighted prime
limits,
> the odd limit (which is an octave-equivalent guess to complexity)
and
> integer-limits (which are complexity).

"Odd limit" can be interpreted in several ways, including equal-
weighted Hahn, weighted Hahn, and Kees . . . but yes, these all serve
as examples of "other methods".

> > Huh. If we replace "TOP" above with "any method that uses Tenney
> > weighting with a prime limit", can we proceed with this
conversation?
>
> It wouldn't make much sense because we're comparing two differnet
> Tenney-weighted measures. I'm not sure where this is leading (or
where
> it's gone!)

You said, "It applies to any method that uses Tenney weighting with a
prime limit." So how can it serve for a comparison between two
different such methods?

> > I completely disagree! If a mistuning makes a pleasant interval
> > painful, it matters a lot more than if a mistuning makes an
already
> > painful interval stay at about the same level of pain.
>
> What's "pain"? Are you suggesting "tuning relative to JI" is a
directly
> observable measure?

You can observe that, but more importantly, you can observe the
differences between different tunings of the same temperament. And
the differences sound qualitively greater, for a given cents
difference in an interval, for the simpler ratios than for the more
complex ratios.

> I'm looking at dissonance.

In terms of the relative dissonance of two tunings of a given
temperament, I stand by my statement above -- a simpler interval
differing by a given amount in cents between the two tunings will
have more impact on the relative dissonance of the two than a more
complex interval differing by the same amount (ceteris parebis).

> > You can do that with a weighted mean of any kind, including the
> > weighted minimax kind (an L_infinity weighted mean).
>
> Oh, then I bow to you superior knowledge of statistics. The RMS
will
> give a better average because all intervals have some weight.

Is the logical conclusion of this that sum-absolute error gives a
still better average?

> TOP will do fine as an approximation to the RMS, or an extreme kind
of
> mean. The only problem is that it's more complicated. If it were
> simpler, I'd argue for it.

To me, it's simpler. Simpler definition, simpler motivation, and I
don't think the calculation is significantly more complicated.

> As it is, I still have to spend time arguing
> that the RMS is simpler. I also happen to think that the
> prime-weighted RMS is better anyway but that's not so important.

I, on the other hand, would like to see that pursued further.

> >>To me, it's the same question that leads to an unbounded,
weighted
> >>average.
> >
> > How so?
>
> If you allow dissonant but not completely random intervals in
chords,
> you care a bit how close they are to JI. You don't care very much
> because the good intervals carry most of the consonance. The
weighting
> tells you how much you care. This is only valid for chords that
are
> musically dissonant, in particular where the dissonance is for
> coloration so that consonance still matters.

OK; now how does this fit into our conversation?

> >>You can't be sure what supplementary intervals you might end
> >>up using, so you try to get them as good as possible but don't
make
> > them
> >>as important as the primary consonances.
> >
> > OK, so what's the problem?
>
> No problem.

:)

> > I'm not sure about this leap of yours from "tuning" to "chord".
How
> > do you justify it?
>
> I'm starting with chords and trying to justify the tuning.

I tend to think of chords as smallish blobs in the lattice. This is
very different from thinking of them as similar to the tuning, or the
whole lattice. So I'm still not seeing the justification for the
leap from one to the other in your reasoning.

> >>That was my original statement that these comments stem from.
> >>The minimax is inappropriate in that context. By enforcing the
> > minimax,
> >>you assume that the worst interval (however you define it)
carries
> > all
> >>the badness of the chord.
> >
> > Not applicable to the tuning as a whole. You may or may not even
be
> > using more than 2-voice chord in the music. But in the tuning as
a
> > whole, an infinite number of intervals will be "worst". Even
ignoring
> > that, I don't see how this is at all like your major seventh
chord
> > example.
>
> It isn't like the major seventh example. It's supposed to be the
> alternative idea. For chords that are musically consonant, and so
every
> interval should be heard as a consonance. You assume that the
> simplicity of the interval and closeness to JI tell you how
dissonant an
> interval will be. Then you draw a line of maximum allowable
dissonance
> for an interval to be counted as a consonance. You check that all
the
> intervals you want to be consonances are below the line. The
minimax
> error of an odd limit is a way of guessing if intervals are below
the
> line. There's all kinds of arbitrariness and approximation but we
don't
> have a solid enough theory to avoid that anyway.

OK (in general terms).

> >>If the goodness of the chord is a trade-off
> >>between the good and bad intervals, then it should also be a
trade-
> > off
> >>between well and poorly tuned intervals -- hence a mean and not a
> >>minimax.
> >
> > Perhaps. But since we're dealing with a set of non-independent
> > intervals, things aren't quite as they seem here. For example,
for
> > triads, mean and minimax exactly the same (or proportional)
> > quantities! Either way, there are arguments to be made for
minimax,
> > by George Secor for example.
>
> I don't understand the triads example.

Try evaluating (ranking or rating) a set of tempered triads using
both equal-weighted minimax error and equal-weighted sum-absolute
error (I assume the latter is what you mean by "mean"). Do the two
criteria lead to different conclusions?

> What are George's arguments?
> You've mentioned him twice now.

I want him to chime in himself!

> > Huh? I don't get it. Is the problem that I said *all* intervals
> > rather than just some reasonable set? Clearly the former implies
that
> > you've taken care of the latter, so that shouldn't be a problem.
>
> The problem is that you're looking at consistency rather than
>validity.

If *any* of the equivalent criteria that lead to the optimal result
are valid (and I believe some of them are quite valid, it's just that
I don't know *exactly* where to cut off the list of ratios for
complexity), then the optimal result carries validity, regardless of
how consistent it may be with the results of other "less-valid"
optimizations. You seemed to be implying that this consistency is
somehow a bad thing, as if it somehow forced you to claim that all
the "less-valid" optimizations that lead to that same result are
therefore themselves valid. This doesn't follow.

> > If many, many different optimizations lead to the very same
tuning,
> > that only serves as a still stronger justification for the
tuning --
> > far from weakening it below what a single optimization would
yield.
> > Do you disagree?
>
> It's a little stronger, yes. But it isn't that important. The
main
> advantage is that it simplifies the definition and calculation, but
any
> prime based measure will do that.

I'd like to see that worked out, especially as regards definition,
for the case of weighted prime RMS. But I believe there's part of a
recent message from you that I still need to really read and respond
to -- please don't let me forget!

> >>The point of minimax, as I said before, is that all intervals
have
> >>*equal* weight.
> >
> > Of course that's not true, since one can do weighted minimax. So
I
> > don't know what you mean.
>
> I don't see how it makes sense to weight the intervals and call it
a
> minimax. Any weighting has to be on the errors.

OK, I think I see what you mean. But this doesn't argue against
weighting the errors in any way I can see.

> > Well, don't give up! And I'm still very much open to the idea of
an
> > L_2 version of TOP, which you came up with an acronym for like a
year
> > and a half ago, and little has been said about since just
recently. I
> > just want to understand better what it *means* (what its
implications
> > are for all the intervals we might care about) in some fairly
> > finitary, comprehensible way.
>
> It was PORMSWE for Prime Optimum RMS Weighted Error. But it's not
a
> good acronym because you lose the O when it isn't optimized.
>
> The implication is that it's a guess as to what the Tenney weighted
> average

A straight average? Or RMS? Or . . . (?)

> mistuning of any given set of intervals (within the prime limit)
> will be. If I trusted any way of selecting octave-specific
intervals, I
> could do a comparison.

Two ways are to use some Tenney limit, and some integer limit.

> > Triads are just a simple example of how our reasoning can lead us
> > astray if we forget that the terms in our optimization are not
all
> > independent.
>
> Can you explain this one?

Hopefully your comparison of eq-wtd-sum-abs error vs. eq-wtd-minimax
error will shed some light on this for you.

> >>>A prime-based measure always uses sum of absolute errors?
>
> ...
>
> > All that is quite clear so I suspect my query referred to
something
> > wider than just the RMS case. Perhaps I had misunderstood
something
> > you wrote earlier.
>
> You chop my paragraphs up so much in the replies I don't know what
the
> original context was.

Sorry! Try looking back over the posts, then, if you feel so
inclined . . .

🔗Graham Breed <gbreed@gmail.com>

11/5/2005 5:32:00 AM

Paul Erlich wrote:

> "Odd limit" can be interpreted in several ways, including equal-
> weighted Hahn, weighted Hahn, and Kees . . . but yes, these all serve > as examples of "other methods".

I take "odd limit" to be a set of intervals. It's a good approximation of complexity, but fails in comparison to the Tenney limit.

> You said, "It applies to any method that uses Tenney weighting with a > prime limit." So how can it serve for a comparison between two > different such methods?

It doesn't. This thread is about TOP, not comparing TOP with one other method.

> You can observe that, but more importantly, you can observe the > differences between different tunings of the same temperament. And > the differences sound qualitively greater, for a given cents > difference in an interval, for the simpler ratios than for the more > complex ratios.

I don't think the listeners can hear these differences. Even relative to JI, how many piano players know what a 5:4 sound like?

> In terms of the relative dissonance of two tunings of a given > temperament, I stand by my statement above -- a simpler interval > differing by a given amount in cents between the two tunings will > have more impact on the relative dissonance of the two than a more > complex interval differing by the same amount (ceteris parebis).

You're talking about "impact" there, which suggests averaging. A minimax isn't about measuring the total impact, but whether one interval has enough force to break the chord.

>>Oh, then I bow to your superior knowledge of statistics. The RMS will give a better average because all intervals have some weight.
>
> Is the logical conclusion of this that sum-absolute error gives a still better average?

No, but that some power will give the best approximation to how the ear does the averaging, however that might be.
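A small sketch of that "some power" idea: a Tenney-weighted p-norm of the prime errors, where p = 1 is the mean absolute error, p = 2 the RMS, and large p approaches the weighted minimax. The meantone mapping is the same hypothetical example used in the earlier sketches in this thread:

from math import log2

primes  = [2, 3, 5]
periods = [1, 1, 0]
gens    = [0, 1, 4]

def weighted_p_norm_error(P, G, p):
    errs = [abs((m * P + n * G) / log2(q) - 1)
            for m, n, q in zip(periods, gens, primes)]
    return (sum(e ** p for e in errs) / len(errs)) ** (1.0 / p)

# e.g. a quarter-comma-style tuning scored under different exponents
for p in (1, 2, 4, 16):
    print(p, weighted_p_norm_error(1.0, 696.578 / 1200, p))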

>>TOP will do fine as an approximation to the RMS, or an extreme kind of mean. The only problem is that it's more complicated. If it were simpler, I'd argue for it.
>
> To me, it's simpler. Simpler definition, simpler motivation, and I don't think the calculation is significantly more complicated.

The definitions are much the same either way. (Well, you can take the "R" out of "RMS"). I don't know about motivation. But the calculation is surely more complicated for TOP. How are you calculating it? Everything I've seen so far is numerical optimization, but the least squares is algebraic. And only 14 lines of simple code with no libraries.

>>As it is, I still have to spend time arguing >> that the RMS is simpler. I also happen to think that the >>prime-weighted RMS is better anyway but that's not so important.
> > I, on the other hand, would like to see that pursued further.

Do you agree that weighting is more appropriate for the octave-equivalent case? Or, at least, that an unweighted, octave-equivalent measure doesn't make sense? This is all part of the same revelation to me.

>>If you allow dissonant but not completely random intervals in chords, you care a bit how close they are to JI. You don't care very much because the good intervals carry most of the consonance. The weighting tells you how much you care. This is only valid for chords that are musically dissonant, in particular where the dissonance is for coloration so that consonance still matters.
> > OK; now how does this fit into our conversation?

For these kind of chords, we want to know the average dissonance. Or at least the average relative to the ideal tuning. An average of weighted errors relative to JI is a good guess for that.

> I tend to think of chords as smallish blobs in the lattice. This is > very different from thinking of them as similar to the tuning, or the > whole lattice. So I'm still not seeing the justification for the > leap from one to the other in your reasoning.

I think of chords as more than one note sounding together.

> I'd like to see that worked out, especially as regards definition, > for the case of weighted prime RMS. But I believe there's part of a > recent message from you that I still need to really read and respond > to -- please don't let me forget!

I don't have any correlation between the WPRMS and any other measure. I had a look, but the math isn't as clean as the TOP :(

> OK, I think I see what you mean. But this doesn't argue against > weighting the errors in any way I can see.

You can weight the errors as long as the weighted error itself has a meaning. That isn't something you can argue over. But once you apply a minimax, only the worst interval (however weighted the error) matters.

>>The implication is that it's a guess as to what the Tenney weighted >>average
> > A straight average? Or RMS? Or . . . (?)

Any average. I use the word deliberately vaguely. Ideally it will be the one closest to what we hear. In practice, I expect an RMS will approximate an RMS best. You can think of TOP as an average, and that's exact.

>>mistuning of any given set of intervals (within the prime limit) will be. If I trusted any way of selecting octave-specific intervals, I could do a comparison.
> > Two ways are to use some Tenney limit, and some integer limit.

Simple, but as I said, I don't trust them. They don't take account of the interval size. They have some validity, but so does the prime RMS, so neither would be dignified by comparison with the other.

Graham

🔗Gene Ward Smith <gwsmith@svpal.org>

11/6/2005 2:07:37 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:

> I only need to look at the primes, then? I don't have the commas at
> this stage, but writing a formula for the mistuning of each prime in
> terms of the generator and period is easy.

It's as I said--TOP tuning is not that difficult to compute.

🔗Paul Erlich <perlich@aya.yale.edu>

11/8/2005 12:19:08 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> Paul Erlich wrote:
>
> > "Odd limit" can be interpreted in several ways, including equal-
> > weighted Hahn, weighted Hahn, and Kees . . . but yes, these all
serve
> > as examples of "other methods".
>
> I take "odd limit" to be a set of intervals. It's a good
approximation
> of complexity, but fails in comparison to the Tenney limit.

It seems like you're using both definitions of odd limit above, one
right after the other. How can a set of intervals be an approximation
of complexity?

> > You said, "It applies to any method that uses Tenney weighting
with a
> > prime limit." So how can it serve for a comparison between two
> > different such methods?
>
> It doesn't. This thread is about TOP, not comparing TOP with one
other
> method.

OK . . . (I forgot what that was about anyway).

> > You can observe that, but more importantly, you can observe the
> > differences between different tunings of the same temperament.
And
> > the differences sound qualitively greater, for a given cents
> > difference in an interval, for the simpler ratios than for the
more
> > complex ratios.
>
> I don't think the listeners can hear these differences. Even
relative
> to JI, how many piano players know what a 5:4 sound like?

The point is that a given cents difference in the major third will
have less of an aural effect than a given cents difference in the
perfect fifth. Are you saying that no differences will be heard in
any case, regardless of how many cents and which intervals? Or . . .
(?)

> > In terms of the relative dissonance of two tunings of a given
> > temperament, I stand by my statement above -- a simpler interval
> > differing by a given amount in cents between the two tunings will
> > have more impact on the relative dissonance of the two than a
more
> > complex interval differing by the same amount (ceteris parebis).
>
> You're talking about "impact" there, which suggests averaging.

Not necessarily. We could be considering each of these intervals in
isolation.

> A
> minimax isn't about measuring the total impact,

No, it's about measuring the maximum impact.

And if "total impact" is supposed to be a linear thing that one can
apply with respect to JI as well as with respect to some other
temperament, neither is any average of errors as far as I can see.

> but whether one interval
> has enough force to break the chord.

You shouldn't assume that there are necessarily any many-interval
chords in the music.

> >>Oh, then I bow to you superior knowledge of statistics. The RMS
> > will
> >>give a better average because all intervals have some weight.
> >
> >
> > Is the logical conclusion of this that sum-absolute error gives a
> > still better average?
>
> No, but that some power will give the best approximation to how the
ear
> does the averaging, however that might be.

Might this not depend entirely or almost entirely on context and
other "accidents"?

> >>TOP will do fine as an approximation to the RMS, or an extreme
kind
> > of
> >>mean. The only problem is that it's more complicated. If it
were
> >>simpler, I'd argue for it.
> >
> > To me, it's simpler. Simpler definition, simpler motivation, and
I
> > don't think the calculation is significantly more complicated.
>
> The definitions are much the same either way. (Well, you can take
the
> "R" out of "RMS"). I don't now about motivation. But the
calculation
> is surely more complicated for TOP. How are you calculating it?
> Everything I've seen so far is numerical optimization, but the
least
> squares is algebraic. And only 14 lines of simple code with no
libraries.

I bet the linear programming involved here can be optimized down to a
similarly short piece of code.

> >>As it is, I still have to spend time arguing
> >> that the RMS is simpler. I also happen to think that the
> >>prime-weighted RMS is better anyway but that's not so important.
> >
> > I, on the other hand, would like to see that pursued further.
>
> Do you agree that weighting is more appropriate for the
> octave-equivalent case?

More appropriate than what? What kind of weighting? The Kees or
inverse-log-of-minimum-odd-limit weighting seems most appropriate to
me . . .

> Or, at least, that an unweighted,
> octave-equivalent measure doesn't make sense?

I thought you were arguing that it *does* make sense in contexts
like, for example, adaptive JI where a given retuning motion will be
equally "painful" regardless of which interval it "corrects". So have
you now changed your mind?

> This is all part of the
> same revelation to me.

Please elaborate!

> >>If you allow dissonant but not completely random intervals in
> > chords,
> >>you care a bit how close they are to JI. You don't care very
much
> >>because the good intervals carry most of the consonance. The
> > weighting
> >>tells you how much you care. This is only valid for chords that
> > are
> >>musically dissonant, in particular where the dissonance is for
> >>coloration so that consonance still matters.
> >
> > OK; now how does this fit into our conversation?
>
> For these kind of chords, we want to know the average dissonance.
Or at
> least the average relative to the ideal tuning. An average of
weighted
> errors relative to JI is a good guess for that.

How wrong can you go with maximum weighted error? Because with Tenney
weighting, there's no way to create a chord where a complex interval
has greater weighted error than any of the simple intervals of which
it is constructed.

> > I tend to think of chords as smallish blobs in the lattice. This
is
> > very different from thinking of them as similar to the tuning, or
the
> > whole lattice. So I'm still not seeing the justification for the
> > leap from one to the other in your reasoning.
>
> I think of chords as more than one note sounding together.

Still.

> > I'd like to see that worked out, especially as regards
definition,
> > for the case of weighted prime RMS. But I believe there's part of
a
> > recent message from you that I still need to really read and
respond
> > to -- please don't let me forget!
>
> I don't have any correlation between the WPRMS and any other
measure. I
> had a look, but the math isn't as clean as the TOP :(

That's what I suspected. WPRMS (I thought you had a different,
slightly longer acronym for it) may be wonderful, but its
specification with respect to some list of intervals we care about
ends up being so abstract that I don't think I could go with it for
an expository paper.

> > OK, I think I see what you mean. But this doesn't argue against
> > weighting the errors in any way I can see.
>
> You can weight the errors as long as the weighted error itself has
a
> meaning. That isn't something you can argue over. But once you
apply a
> minimax, only the worst interval (however weighted the error)
matters.

Right, where "worst" means "biggest weight*error" and not "biggest
error in cents".

> >>The implication is that it's a guess as to what the Tenney
> >>weighted average
> >
> > A straight average? Or RMS? Or . . . (?)
>
> Any average. I use the word deliberately vaguely. Ideally it will
> be the one closest to what we hear. In practice, I expect an RMS
> will approximate it best. You can think of TOP as an average, and
> that's exact.

Whew!

> >>mistuning of any given set of intervals (within the prime limit)
> >>will be. If I trusted any way of selecting octave-specific
> >>intervals, I could do a comparison.
> >
> > Two ways are to use some Tenney limit, and some integer limit.
>
> Simple, but as I said, I don't trust them. They don't take account
> of the interval size.

Integer limit does -- it favors smaller intervals.

> They have some validity, but so does the prime RMS,

Huh? How does that give you a different way of selecting octave-
specific intervals than integer or product limit?

> so neither would be dignified by comparison with the other.

It seems like there are at least two different things here -- the
optimization criterion (which includes weights (such as Tenney) and
an exponent (such as 2)), and the way of selecting which intervals to
include. You can't compare these things, though you can certainly
combine them in different ways, and compare the combinations . . .

🔗Graham Breed <gbreed@gmail.com>

11/8/2005 3:33:50 PM

[ Attachment content not displayed ]

🔗Gene Ward Smith <gwsmith@svpal.org>

11/8/2005 5:37:30 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> I bet the linear programming involved here can be optimized down to a
> similarly short piece of code.

For small problems like this, linear programming can be solved using
brute force.
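For instance, for an equal temperament there is only one free
parameter, so a dumb scan already gets you the TOP tuning to any
precision you like (a throwaway sketch, not anyone's actual code):

from math import log2

def top_et_brute_force(val, primes, spread=0.01, steps=100001):
    # Scan step sizes around 1/val[0] octaves and keep the one that
    # minimizes the largest Tenney-weighted prime error.  Sketch only.
    logs = [log2(p) for p in primes]
    centre = 1.0 / val[0]
    best = (None, float("inf"))
    for k in range(steps):
        s = centre * (1.0 - spread + 2.0 * spread * k / (steps - 1))
        damage = max(abs(v * s / l - 1.0) for v, l in zip(val, logs))
        if damage < best[1]:
            best = (s, damage)
    return best

step, damage = top_et_brute_force([12, 19, 28, 34], [2, 3, 5, 7])
print(1200 * 12 * step, 1200 * damage)   # stretched octave of 12-equal, roughly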

🔗Carl Lumma <ekin@lumma.org>

11/9/2005 5:04:42 PM

>In terms of the relative dissonance of two tunings of a given
>temperament, I stand by my statement above -- a simpler interval
>differing by a given amount in cents between the two tunings will
>have more impact on the relative dissonance of the two than a more
>complex interval differing by the same amount (ceteris parebis).

I believe that's ceteris paribus.

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/9/2005 5:28:37 PM

>> You can observe that, but more importantly, you can observe the
>> differences between different tunings of the same temperament. And
>> the differences sound qualitively greater, for a given cents
>> difference in an interval, for the simpler ratios than for the more
>> complex ratios.
>
>I don't think the listeners can hear these differences. Even relative
>to JI, how many piano players know what a 5:4 sound like?

I certainly find mistuning of the octave more objectionable than
mistuning of the 5:4.

But it's an age-old question... mistuning simpler intervals is
more painful, but complex intervals require more accuracy of
tuning to be 'evoked', since you're more likely to run into
other consonances. Though it amazes me how good 1000 cents is
at doing 7:4.....

>>>As it is, I still have to spend time arguing
>>> that the RMS is simpler. I also happen to think that the
>>>prime-weighted RMS is better anyway but that's not so important.
>>
>> I, on the other hand, would like to see that pursued further.
>
>Do you agree that weighting is more appropriate for the
>octave-equivalent case?

I don't. Weighting is a great way to allow octaves to be
tempered.

>Or, at least, that an unweighted, octave-equivalent measure
>doesn't make sense?

Sorry for barging in, but make sense for what? Harmonic
complexity? I don't think any octave-equivalent measure is
very good. But certainly an unweighted-factors approach
will fail.

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/9/2005 6:22:12 PM

>>The point is that a given cents difference in the major third will
>>have less of an aural effect than a given cents difference in the
>>perfect fifth. Are you saying that no differences will be heard in
>>any case, regardless of how many cents and which intervals? Or . . .
>>(?)
>
>I'm saying no differences will be heard in most cases. Octaves will
>probably be heard as different from a pure octave because most people
>will know what a pure octave sounds like.

That goes for all intervals of the normal diatonic scale, and
not necessarily their 12-tET versions. But even when bizzare
things like a Partch hexad appears, most people can tell it's
smooth (or not).

>No, what I did is get my "octave-specific" and "octave-equivalent"
>mixed up. I knew I'd do it eventually. Weighting makes sense with
>octave specificity because 2 is such a small number it has to be
>treated differently from 7 and 11 (or however high you're going).

Ah!

>>It seems like there are at least two different things here -- the
>>optimization criterion (which includes weights (such as Tenney) and
>>an exponent (such as 2)), and the way of selecting which intervals to
>>include. You can't compare these things, though you can certainly
>>combine them in different ways, and compare the combinations . . .
>
>Yes, two things. But the point is that the average of primes is some
>approximation to the average of real intervals. You could compare
>the prime-limit RMS to the RMS of some specific set of intervals. But
>if there's a correlation, it would only show the prime-limit RMS is
>valid if the RMS of a set of consonances has some validity in the first
>place, and that hasn't been established.

In my recent thread, I consider the prime factor basis of a comma
the consonant chord of the tuning, and find ms deviation of them.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

11/9/2005 8:52:00 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> But it's an age-old question... mistuning simpler intervals is
> more painful, but complex intervals require more accuracy of
> tuning to be 'evoked', since you're more likely to run into
> other consonances. Though it amazes me how good 1000 cents is
> at doing 7:4.....

It sounds fairly putrid, but it evokes pretty well. The two things
seem to be different.

🔗oyarman@ozanyarman.com

11/10/2005 4:41:01 AM

And putrid is the word. Insipid also comes to mind.

----- Original Message -----
From: "Gene Ward Smith" <gwsmith@svpal.org>
To: <tuning-math@yahoogroups.com>
Sent: 10 Kasım 2005 Perşembe 6:52
Subject: [tuning-math] Re: TOP arguments

> --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> > But it's an age-old question... mistuning simpler intervals is
> > more painful, but complex intervals require more accuracy of
> > tuning to be 'evoked', since you're more likely to run into
> > other consonances. Though it amazes me how good 1000 cents is
> > at doing 7:4.....
>
> It sounds fairly putrid, but it evokes pretty well. The two things
> seem to be different.
>
>
>

🔗Carl Lumma <ekin@lumma.org>

11/10/2005 9:51:34 AM

>> But it's an age-old question... mistuning simpler intervals is
>> more painful, but complex intervals require more accuracy of
>> tuning to be 'evoked', since you're more likely to run into
>> other consonances. Though it amazes me how good 1000 cents is
>> at doing 7:4.....
>
>It sounds fairly putrid, but it evokes pretty well. The two things
>seem to be different.

Agree. -Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/10/2005 1:54:04 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> On 11/9/05, Paul Erlich <perlich@a...> wrote:
>
> > It seems like you're using both definitions of odd limit above,
> > one right after the other. How can a set of intervals be an
> > approximation of complexity?
>
> Choosing an odd limit gives you an octave-equivalent set of
> intervals

OK.

> bounded by complexity.

which is a function with only two values in this case?

> >The point is that a given cents difference in the major third will
> >have less of an aural effect than a given cents difference in the
> >perfect fifth. Are you saying that no differences will be heard in
> >any case, regardless of how many cents and which intervals?
> >Or . . . (?)
> I'm saying no differences will be heard in most cases.

Hmm . . .

> Octaves will
> probably be heard as different from a pure octave because most
> people will know what a pure octave sounds like.

:)

> >> but whether one interval
> >> has enough force to break the chord.
>
> >You shouldn't assume that there are necessarily any many-interval
> >chords in the music.
> Why not? You certainly shouldn't assume there won't be.

OK, so should we develop a sum-abs-error version of TOP? Could we
work out its implications the way we can for (minimax) TOP?

> An all-purpose
> measure should take account of usual practice, which is
> many-interval chords.

How is that "usual practice"?

> You could assume only one note at a time, in which case anything
> goes.
>
> >> No, but that some power will give the best approximation to how
> >> the ear does the averaging, however that might be.
>
> >Might this not depend entirely or almost entirely on context and
> >other "accidents"?
> It could well do, and if we understood how we could do special-case
> optimizations. But as we don't know we have to make some good
> guesses that will work in a range of situations.

OK.

> > I bet the linear programming involved here can be optimized down
> > to a similarly short piece of code.
> Maybe, if it were horribly slow.

?

> But least squares can be done numerically
> as well -- should be easier as the function's continually
> differentiable. Are you arguing that an algebraic solution isn't
> superior to a numerical one?

I could see both types of solutions as being *geometric*.

> >> Do you agree that weighting is more appropriate for the
> >> octave-equivalent case?
>
> >More appropriate than what? What kind of weighting? The Kees or
> >inverse-log-of-minimum-odd-limit weighting seems most appropriate
> >to me . . .
>
> >> Or, at least, that an unweighted,
> >> octave-equivalent measure doesn't make sense?
>
> >I thought you were arguing that it *does* make sense in contexts
> >like, for example, adaptive JI where a given retuning motion will
> >be equally "painful" regardless of which interval it "corrects". So
> >have you now changed your mind?
> No, what I did is get my "octave-specific" and "octave-equivalent"
> mixed up. I knew I'd do it eventually. Weighting makes sense with octave
> specificity because 2 is such a small number it has to be treated
> differently from 7 and 11 (or however high you're going).

OK.

> >How wrong can you go with maximum weighted error? Because with
> >Tenney weighting, there's no way to create a chord where a complex
> >interval has greater weighted error than any of the simple
> >intervals of which it is constructed.
> Not far wrong, because it's going to be pretty close to the least
> squares.

:)

> >That's what I suspected. WPRMS (I thought you had a different,
> >slightly longer acronym for it) may be wonderful, but its
> >specification with respect to some list of intervals we care about
> >ends up being so abstract that I don't think I could go with it for
> >an expository paper.

> This is where I have a problem with your TOP propaganda. You're
> suggesting by implication that it isn't abstract in some way.

I think it's easier when we talk about a set of intervals we care
about, the exact boundaries of what is and isn't in the set being
flexible, and specify some function of the error in these intervals
that is minimized in the tuning you care about. And yes, minimax is
less abstract than RMS. But I don't suggest what you imply.

> >> You can weight the errors as long as the weighted error itself
> >> has a meaning. That isn't something you can argue over. But once
> >> you apply a minimax, only the worst interval (however weighted
> >> the error) matters.
>
> >Right, where "worst" means "biggest weight*error" and not "biggest
> >error in cents".
> I'd say it should include the inherent dissonance of the JI
> interval being approximated in some way. But perhaps that's
> the "weight".
> >Integer limit does -- it favors smaller intervals.
> Oh, maybe. In comparison to the product limit at least. But to get
> 11:6 you need 11:1 as well.

This is why I like things like TOP where you can justify it all using
just the intervals within two octaves or one octave or whatever if
you like . . .

> >> They have some validity, but so does the prime RMS,
> >
> >Huh? How does that give you a different way of selecting octave-
> >specific intervals than integer or product limit?
> It means you don't have to (or only a prime limit).

I thought you didn't buy this idea of "you don't have to" . . .

> >> so neither would be dignified by comparison with the other.
>
> >It seems like there are at least two different things here -- the
> >optimization criterion (which includes weights (such as Tenney) and
> >an exponent (such as 2)), and the way of selecting which intervals
> >to include. You can't compare these things, though you can certainly
> >combine them in different ways, and compare the combinations . . .
> Yes, two things. But the point is that the average of primes is
> some approximation to the average of real intervals. You could
> compare the prime-limit RMS to the RMS of some specific set of
> intervals. But if there's a correlation, it would only show the
> prime-limit RMS is valid if the RMS of a set of consonances has
> some validity in the first place, and that hasn't been established.

Hmm . . . but that's your favorite assumption, isn't it?

🔗Graham Breed <gbreed@gmail.com>

11/10/2005 3:07:01 PM

[ Attachment content not displayed ]

🔗Graham Breed <gbreed@gmail.com>

11/10/2005 3:29:41 PM

[ Attachment content not displayed ]

🔗Carl Lumma <ekin@lumma.org>

11/10/2005 10:13:13 PM

>>That goes for all intervals of the normal diatonic scale, and
>>not necessarily their 12-tET versions. But even when bizzare
>>things like a Partch hexad appears, most people can tell it's
>>smooth (or not).
>
>Yes, I'm suggesting people can hear "smooth" or "rough" but not
>the deviation (weighted or otherwise) from an interval they don't
>know.

But doesn't roughness correspond to deviation fairly well?

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/10/2005 10:17:03 PM

>> OK, so should we develop a sum-abs-error version of TOP? Could we
>> work out its implications the way we can for (minimax) TOP?
>
>I don't think it will work. The problem is that m:n and mn:1 will have
>different errors. All you can predict from the errors in m and n is the
>maximum error in m:n and mn:1. Hence a minimax has properties that a
>sum doesn't.

Wouldn't knowing the signs of the errors fix this?

-Carl

🔗Graham Breed <gbreed@gmail.com>

11/12/2005 6:12:45 PM

[ Attachment content not displayed ]

🔗Graham Breed <gbreed@gmail.com>

11/12/2005 6:19:37 PM

[ Attachment content not displayed ]

🔗Carl Lumma <ekin@lumma.org>

11/12/2005 7:42:07 PM

>>> OK, so should we develop a sum-abs-error version of TOP? Could we
>>> work out its implications the way we can for (minimax) TOP?
>>
>>I don't think it will work. The problem is that m:n and mn:1 will have
>>different errors. All you can predict from the errors in m and n is the
>>maximum error in m:n and mn:1. Hence a minimax has properties that a
>>sum doesn't.
>
>Wouldn't knowing the signs of the errors fix this?
>
>Yes, but what do you do with them?

I may not have understood the context.

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/12/2005 7:46:49 PM

>>But doesn't roughness correspond to deviation fairly well?
>
>With what weighting? I'm happy with absolute deviation relative
>to a small odd limit as a rough guess.

I don't think any weighting is necessary at all if you start
with a consonant chord. If you want to compare chords, each
with a deviation, you need weighting. I like the 'simple
intervals get rougher faster' approximation, though I'm not
sure Tenney weighting is ideal.

>(Deviation from arbitrary
>complex intervals is meaningless.) But the Tenney weighting
>would give a 1 cent mistuning of a simple interval as rougher
>than a 1 cent mistuning of a complex interval. I don't like that
>at all.

It seems like the first and last sentences here are
contradictory.

>Whatever Tenney weighted deviation measures, it ain't
>roughness, and I don't think it's directly perceptible (unless
>perhaps you've spent a long time on JI ear training).

Hm! Tenney seems to work for JI dyads (many tests) and
tetrads ("tuning lab" test). I can't say I've ever tried
weighted deviations in the real world.

-Carl

🔗Graham Breed <gbreed@gmail.com>

11/13/2005 12:10:20 AM

[ Attachment content not displayed ]

🔗Carl Lumma <ekin@lumma.org>

11/13/2005 10:26:05 AM

>>Hm! Tenney seems to work for JI dyads (many tests) and
>>tetrads ("tuning lab" test). I can't say I've ever tried
>>weighted deviations in the real world.
>
>Seems to work for what property of JI dyads?

Concordance?

>It's still what I go for when I want weighting.

Have you tried Gene's sqrt(p) suggestion?

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 12:26:08 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >In terms of the relative dissonance of two tunings of a given
> >temperament, I stand by my statement above -- a simpler interval
> >differing by a given amount in cents between the two tunings will
> >have more impact on the relative dissonance of the two than a more
> >complex interval differing by the same amount (ceteris parebis).
>
> I believe that's ceteris paribus.

Huh? I wrote "ceteris paribus" because in reality, other intervals
would have to be different as well, but for this simplified statement,
we assume that somehow, they're the same.

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 12:28:59 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> In my recent thread, I consider the prime factor basis of a comma
> the consonant chord of the tuning

I don't understand this and I certainly don't recall reading anything
like this. How do you justify this? It seems dead wrong to me.

>, and find ms deviation of them.

Them?

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 12:35:28 PM

At 12:26 PM 11/14/2005, you wrote:
>--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>>
>> >In terms of the relative dissonance of two tunings of a given
>> >temperament, I stand by my statement above -- a simpler interval
>> >differing by a given amount in cents between the two tunings will
>> >have more impact on the relative dissonance of the two than a more
>> >complex interval differing by the same amount (ceteris parebis).
>>
>> I believe that's ceteris paribus.
>
>Huh? I wrote "ceteris paribus" because in reality, other intervals
>would have to be different as well, but for this simplified statement,
>we assume that somehow, they're the same.

You wrote "parebis". :)

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 12:36:10 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> Carl wrote:
> >That goes for all intervals of the normal diatonic scale, and
> >not necessarily their 12-tET versions. But even when bizzare
> >things like a Partch hexad appears, most people can tell it's
> >smooth (or not).
> Yes, I'm suggesting people can hear "smooth" or "rough" but not the
> deviation (weighted or otherwise) from an interval they don't know.
> Graham

How about loud, intense beating?

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 12:44:24 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> On 11/11/05, Paul Erlich <perlich@a...> wrote:
> >
> > --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
> > >
> > > Choosing an odd limit gives you an octave-equivalent set of
> > > intervals
> >
> > OK.
> >
> > > bounded by complexity.
> >
> > which is a function with only two values in this case?
>
> It's a circular definition really, in that there's no independent
> measure of octave-equivalent complexity to compare it with.

Why not? You can use expressibility, in which case any odd limit is
just a certain bound on it.

> But odd limits give
> roughly the same hierarchy as a Tenney metric once you accept octave
> equivalence. At least it's a better indication of complexity than a
> prime limit.

Prime limit is not an indication of complexity, since any prime limit
always allows for intervals whose complexity approaches infinity.
However, for characterizing infinite JI lattices, prime limit is
preferable to odd limit for several reasons.

> > >> but whether one interval
> > >> has enough force to break the chord.
> >
> > >You shouldn't assume that there are necessarily any many-interval
> > >chords in the music.
> > Why not? You certainly shouldn't assume there won't be.
>
> > OK, so should we develop a sum-abs-error version of TOP? Could we
> > work out its implications the way we can for (minimax) TOP?

> I don't think it will work. The problem is that m:n and mn:1 will
> have different errors. All you can predict from the errors in m and
> n is the maximum error in m:n and mn:1. Hence a minimax has
> properties that a sum doesn't. Perhaps if you know enough
> statistics you can prove a correlation.
>
> >> An all-purpose
> >> measure should take account of usual practice, which is
> >> many-interval chords.
>
> > How is that "usual practice"?
> Most music I hear has more than two instruments at once if it has
> more than one instrument at once.
>
>
> > > > I bet the linear programming involved here can be optimized
> > > > down to a similarly short piece of code.
> > > Maybe, if it were horribly slow.
>
> > ?
> You can do for loops over all possible values for the period and
> generator, to within some resolution, and you'll find the right
> answer but it'll take a long time.

Of course that's not what I had in mind.

> Any more complicated numerical approach will use more code than
> the least squares and still be slower to run.
> Anyway, I have code that uses the simplex library now.

Good! This is far simpler and far faster than what you outlined above.

> I'm wondering if
> there are any cases where the minimax doesn't give a unique
> solution and how I'd deal with them.

Yes, there's no unique TOP tuning for Blackwood, for example, as you
know. The convention in my paper is to leave pure the primes that
have some flexibility in their tuning under the TOP criterion.

> >> But least squares can be done numerically
> >> as well -- should be easier as the function's continually
> >> differentiable. Are you arguing that an algebraic solution isn't
> >> superior to a numerical one?
>
> >I could see both types of solutions as being *geometric*.
> Meaning what?

They can be pictured in a geometric diagram.

>I'm using the definition that algebraic solutions give exact
> answers but numeric solutions give successive approximations.

Well, there's no need for successive approximations when doing a
simplex algorithm.

> At least, I
> thought I was, but you can get an exact solution for a piecewise
> linear graph I suppose. But you need to iterate to find it

Iterate what? Just search, not iterate, I'd say.

> whereas you can write
> down the least squares solution in all cases.
>
> >I think it's easier when we talk about a set of intervals we care
> >about, the exact boundaries of what is and isn't in the set being
> >flexible, and specify some function of the error in these intervals
> >that is minimized in the tuning you care about. And yes, minimax is
> >less abstract than RMS. But I don't suggest what you imply.

> A fuzzy set, then? In many cases, there should be intervals with a
> higher prime limit in there as well.

There should?

> >> >> They have some validity, but so does the prime RMS,
> >> >
> >> >Huh? How does that give you a different way of selecting octave-
> >> >specific intervals than integer or product limit?
> >> It means you don't have to (or only a prime limit).
> >
> > I thought you didn't buy this idea of "you don't have to" . . .
> Choosing a prime limit is more flexible than choosing the exact
> intervals. It'd be nice if we didn't have to choose the prime limit
> either, but I don't think I'd like the results. So I choose the
> prime limit as a compromise.

:)

> >> a correlation, it would only show the prime-limit RMS is valid
> >> if the RMS of a set of consonances has some validity in the
> >> first place, and that hasn't been established.
>
> > Hmm . . . but that's your favorite assumption, isn't it?
>
> Yes, none of this has been established, that's why I look for
> simplicity first.

OK -- I guess there are different possible notions of simplicity!

-P

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 1:00:27 PM

>> In my recent thread, I consider the prime factor basis of a comma
>> the consonant chord of the tuning
>
> I don't understand this and I certainly don't recall reading anything
> like this.

You just called it a fair assumption!

> How do you justify this? It seems dead wrong to me.
>
>> and find ms deviation of them.
>
>Them?

Example
81:80 - 2, 3, 5 are consonant.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 1:26:01 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >>That goes for all intervals of the normal diatonic scale, and
> >>not necessarily their 12-tET versions. But even when bizzare
> >>things like a Partch hexad appears, most people can tell it's
> >>smooth (or not).
> >
> >Yes, I'm suggesting people can hear "smooth" or "rough" but not
> >the deviation (weighted or otherwise) from an interval they don't
> >know.
>
> But doesn't roughness correspond to deviation fairly well?

I think Graham's point is that since the starting intervals have
different roughnesses to begin with, deviation can't capture roughness.

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 1:33:21 PM

>> >>That goes for all intervals of the normal diatonic scale, and
>> >>not necessarily their 12-tET versions. But even when bizzare
>> >>things like a Partch hexad appears, most people can tell it's
>> >>smooth (or not).
>> >
>> >Yes, I'm suggesting people can hear "smooth" or "rough" but not
>> >the deviation (weighted or otherwise) from an interval they don't
>> >know.
>>
>> But doesn't roughness correspond to deviation fairly well?
>
>I think Graham's point is that since the starting intervals have
>different roughnesses to begin with, deviation can't capture
>roughness.

I've always assumed the starting thing is a chord, and the
rms deviation of its intervals is what should be minimized.
With TOP that's not the case, and I guess that's some of
what you've been discussing.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 1:39:48 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> On 11/11/05, Carl Lumma <ekin@l...> wrote:
> >
> > >> OK, so should we develop a sum-abs-error version of TOP? Could
> > >> we work out its implications the way we can for (minimax) TOP?
> > >
> > >I don't think it will work. The problem is that m:n and mn:1
> > >will have different errors. All you can predict from the errors
> > >in m and n is the maximum error in m:n and mn:1. Hence a minimax
> > >has properties that a sum doesn't.
> >
> > Wouldn't knowing the signs of the errors fix this?
>
> Yes, but what do you do with them? The RMS has the advantage of
> being simple. The optimum happens to be the same as the standard
> deviation
> ignoring the stretch.

What does that mean? The optimum the same as the standard deviation??

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 1:45:50 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> On 11/11/05, Carl Lumma <ekin@l...> wrote:
> >
> >
> > But doesn't roughness correspond to deviation fairly well?
>
> With what weighting? I'm happy with absolute deviation relative to
> a small odd limit as a rough guess.

As a rough guess for roughness? You'd say that an interval 1 cent
from 2:1 is just as rough as an interval 1 cent from 8:5??

> (Deviation from arbitrary complex intervals is
> meaningless.) But the Tenney weighting would give a 1 cent
> mistuning of a simple interval as rougher than a 1 cent mistuning
> of a complex interval. I don't like that at all.

Good, because it doesn't say that. But which of the measures we've
considered would work better when forced into this interpretation?

>Whatever Tenney weighted deviation measures, it
> ain't roughness, and I don't think it's directly perceptible
> (unless perhaps you've spent a long time on JI ear training).
> Graham

The same seems to go for any of the deviation measures.

What you're failing to consider is that when comparing tunings, the
*absolute* roughnesses of JI intervals (which are different) can fall
out in the wash, and you can end up with the deviation comparisons
being equivalent to roughness comparisons.

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 1:48:11 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> >Whatever Tenney weighted deviation measures, it ain't
> >roughness, and I don't think it's directly perceptible (unless
> >perhaps you've spent a long time on JI ear training).
>
> Hm! Tenney seems to work for JI dyads (many tests) and
> tetrads ("tuning lab" test).

I think you're talking about Tenney complexity, while the relevant
quantity here would be Tenney-weighted error. Completely different
animals/scenarios.

> I can't say I've ever tried
> weighted deviations in the real world.

How about *any* deviations?

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 1:49:12 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> On 11/13/05, Carl Lumma <ekin@l...> wrote:
>
> >
> > >(Deviation from arbitrary
> > >complex intervals is meaningless.) But the Tenney weighting
> > >would give a 1 cent mistuning of a simple interval as rougher
> > >than a 1 cent mistuning of a complex interval. I don't like that
> > >at all.
> >
> > It seems like the first and last sentences here are
> > contradictory.
>
> For the sake of simplicity, you can set a rule for all intervals
> and not worry if it also includes some meaningless ones. The weight
> on the overly complex intervals should work out as being
> negligible. That isn't quite true with Tenney weighting,

Why not?

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 1:50:02 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >>Hm! Tenney seems to work for JI dyads (many tests) and
> >>tetrads ("tuning lab" test). I can't say I've ever tried
> >>weighted deviations in the real world.
> >
> >Seems to work for what property of JI dyads?
>
> Concordance?

What was at issue here was how to measure/weight *errors* or mistunings.

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 1:51:50 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> In my recent thread, I consider the prime factor basis of a comma
> >> the consonant chord of the tuning
> >
> > I don't understand this and I certainly don't recall reading
> > anything like this.
>
> You just called it a fair assumption!

Absolutely not. I said it's a fair assumption that each of the prime
factors of the comma will be consonances. That's all.

> > How do you justify this? It seems dead wrong to me.
> >
> >> and find ms deviation of them.
> >
> >Them?
>
> Example
> 81:80 - 2, 3, 5 are consonant.

But not the only consonances, right? For example, 4 is consonant too,
right?

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 2:00:14 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >>That goes for all intervals of the normal diatonic scale, and
> >> >>not necessarily their 12-tET versions. But even when bizzare
> >> >>things like a Partch hexad appears, most people can tell it's
> >> >>smooth (or not).
> >> >
> >> >Yes, I'm suggesting people can hear "smooth" or "rough" but not
> >> >the deviation (weighted or otherwise) from an interval they
> >> >don't know.
> >>
> >> But doesn't roughness correspond to deviation fairly well?
> >
> >I think Graham's point is that since the starting intervals have
> >different roughnesses to begin with, deviation can't capture
> >roughness.
>
> I've always assumed the starting thing is a chord, and the
> rms deviation of its intervals is what should be minimized.
> With TOP that's not the case, and I guess that's some of
> what you've been discussing.

In some of the other schemes it's also not the case. For his 9-limit
and higher-odd-limit optimizations, Gene counts intervals like 3:1
only once even though they occur multiple times in the "complete"
chord -- it's impossible to construct a chord with each interval
occuring only once. I'd rather avoid assuming this (chord thing) for
*any* of the schemes -- even in the simplest equal-weighted 5-limit
case, is it a major or minor chord? -- and then one can show later
for some of the schemes that starting with a chord would in a sense
be equivalent.

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 2:02:11 PM

>> >Whatever Tenney weighted deviation measures, it ain't
>> >roughness, and I don't think it's directly perceptible (unless
>> >perhaps you've spent a long time on JI ear training).
>>
>> Hm! Tenney seems to work for JI dyads (many tests) and
>> tetrads ("tuning lab" test).
>
>I think you're talking about Tenney complexity, while the relevant
>quantity here would be Tenney-weighted error. Completely different
>animals/scenarios.

Funny that, since I seemed to recognize this below...

>> I can't say I've ever tried weighted deviations in the real world.
>
>How about *any* deviations?

Sure. Sum and RMS quite a bit.

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 2:06:30 PM

>> >> In my recent thread, I consider the prime factor basis of a comma
>> >> the consonant chord of the tuning
>> >
>> > I don't understand this and I certainly don't recall reading
>> > anything like this.
>>
>> You just called it a fair assumption!
>
>Absolutely not. I said it's a fair assumption that each of the prime
>factors of the comma will be consonances. That's all.

Ok, but the "chord" part doesn't make any difference in anything
I was doing. I understand it can make a big difference in
optimizing tunings...

>> > How do you justify this? It seems dead wrong to me.
>> >
>> >> and find ms deviation of them.
>> >
>> >Them?
>>
>> Example
>> 81:80 - 2, 3, 5 are consonant.
>
>But not the only consonances, right? For example, 4 is consonant too,
>right?

Nope. You have to use the active distance measure to get to it.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 2:10:24 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> In my recent thread, I consider the prime factor basis of a
> >> >> comma the consonant chord of the tuning
> >> >
> >> > I don't understand this and I certainly don't recall reading
> >> > anything like this.
> >>
> >> You just called it a fair assumption!
> >
> >Absolutely not. I said it's a fair assumption that each of the
> >prime factors of the comma will be consonances. That's all.
>
> Ok, but the "chord" part doesn't make any difference in anything
> I was doing. I understand it can make a big difference in
> optimizing tunings...

The "chord" part can? How so?

> >> > How do you justify this? It seems dead wrong to me.
> >> >
> >> >> and find ms deviation of them.
> >> >
> >> >Them?
> >>
> >> Example
> >> 81:80 - 2, 3, 5 are consonant.
> >
> >But not the only consonances, right? For example, 4 is consonant
> >too, right?
>
> Nope. You have to use the active distance measure to get to it.

I don't get it. What active distance measure, how does it get to 4,
and how does it not get to 2, 3, or 5?

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 2:12:00 PM

>> I've always assumed the starting thing is a chord, and the
>> rms deviation of its intervals is what should be minimized.
>> With TOP that's not the case, and I guess that's some of
>> what you've been discussing.
>
>In some of the other schemes it's also not the case. For his 9-limit
>and higher-odd-limit optimizations, Gene counts intervals like 3:1
>only once even though they occur multiple times in the "complete"
>chord -- it's impossible to construct a chord with each interval
>occuring only once. I'd rather avoid assuming this (chord thing) for
>*any* of the schemes -- even in the simplest equal-weighted 5-limit
>case, is it a major or minor chord? -- and then one can show later
>for some of the schemes that starting with a chord would in a sense
>be equivalent.

Yes, that was a great appeal of TOP. Bravo, I say!

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 2:16:07 PM

>> >> >> In my recent thread, I consider the prime factor basis of
>> >> >> a comma the consonant chord of the tuning
>> >> >
>> >> > I don't understand this and I certainly don't recall reading
>> >> > anything like this.
>> >>
>> >> You just called it a fair assumption!
>> >
>> >Absolutely not. I said it's a fair assumption that each of the
>> >prime factors of the comma will be consonances. That's all.
>>
>> Ok, but the "chord" part doesn't make any difference in anything
>> I was doing. I understand it can make a big difference in
>> optimizing tunings...
>
>The "chord" part can? How so?

If the numbers are only expected to be individually consonant,
you can measure their deviations. If they're expected to be
consonant with eachother, you have to consider "all the intervals"
or use something like chordadic harmonic entropy.

>> >> Example
>> >> 81:80 - 2, 3, 5 are consonant.
>> >
>> >But not the only consonances, right? For example, 4 is
>> >consonant too, right?
>>
>> Nope. You have to use the active distance measure to get to it.
>
>I don't get it. What active distance measure, how does it get to 4,
>and how does it not get to 2, 3, or 5?

I assumed you were asking in the exploring badness thread context.
There, I tried several different distance measures. Since
2 is in the comma, it's length 1, and 4 would be, say, length 2.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 2:17:55 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> I've always assumed the starting thing is a chord, and the
> >> rms deviation of its intervals is what should be minimized.
> >> With TOP that's not the case, and I guess that's some of
> >> what you've been discussing.
> >
> >In some of the other schemes it's also not the case. For his
> >9-limit and higher-odd-limit optimizations, Gene counts intervals
> >like 3:1 only once even though they occur multiple times in the
> >"complete" chord -- it's impossible to construct a chord with each
> >interval occuring only once. I'd rather avoid assuming this (chord
> >thing) for *any* of the schemes -- even in the simplest
> >equal-weighted 5-limit case, is it a major or minor chord? -- and
> >then one can show later for some of the schemes that starting with
> >a chord would in a sense be equivalent.
>
> Yes, that was a great appeal of TOP.

Not at all what I was thinking.

> Bravo, I say!

Thanks anyway!

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 2:19:57 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >> In my recent thread, I consider the prime factor basis of
> >> >> >> a comma the consonant chord of the tuning
> >> >> >
> >> >> > I don't understand this and I certainly don't recall reading
> >> >> > anything like this.
> >> >>
> >> >> You just called it a fair assumption!
> >> >
> >> >Absolutely not. I said it's a fair assumption that each of the
> >> >prime factors of the comma will be consonances. That's all.
> >>
> >> Ok, but the "chord" part doesn't make any difference in anything
> >> I was doing. I understand it can make a big difference in
> >> optimizing tunings...
> >
> >The "chord" part can? How so?
>
> If the numbers are only expected to be individually consonant,
> you can measure their deviations. If they're expected to be
> consonant with eachother, you have to consider "all the intervals"
> or use something like chordadic harmonic entropy.

How can you do that? For optimizing tunings, there's no way to tune
the intervals in a minor n-ad vs. a major n-ad differently, but
clearly they're very different as regards chordal harmonic entropy.

> >> >> Example
> >> >> 81:80 - 2, 3, 5 are consonant.
> >> >
> >> >But not the only consonances, right? For example, 4 is
> >> >consonant too, right?
> >>
> >> Nope. You have to use the active distance measure to get to it.
> >
> >I don't get it. What active distance measure, how does it get to
> >4, and how does it not get to 2, 3, or 5?
>
> I assumed you were asking in the exploring badness thread context.
> There, I tried several different distance measures. Since
> 2 is in the comma, it's length 1, and 4 would be, say, length 2.

Oof.

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 2:33:33 PM

At 02:17 PM 11/14/2005, you wrote:
>--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>>
>> >> I've always assumed the starting thing is a chord, and the
>> >> rms deviation of its intervals is what should be minimized.
>> >> With TOP that's not the case, and I guess that's some of
>> >> what you've been discussing.
>> >
>> >In some of the other schemes it's also not the case. For his
>> >9-limit
>> >and higher-odd-limit optimizations, Gene counts intervals like 3:1
>> >only once even though they occur multiple times in the "complete"
>> >chord -- it's impossible to construct a chord with each interval
>> >occuring only once. I'd rather avoid assuming this (chord thing)
>> >for *any* of the schemes -- even in the simplest equal-weighted
>> >5-limit case, is it a major or minor chord? -- and then one can
>> >show later for some of the schemes that starting with a chord
>> >would in a sense be equivalent.
>>
>> Yes, that was a great appeal of TOP.
>
>Not at all what I was thinking.

What were you thinking?

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 3:46:07 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> At 02:17 PM 11/14/2005, you wrote:
> >--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >>
> >> >> I've always assumed the starting thing is a chord, and the
> >> >> rms deviation of its intervals is what should be minimized.
> >> >> With TOP that's not the case, and I guess that's some of
> >> >> what you've been discussing.
> >> >
> >> >In some of the other schemes it's also not the case. For his
> >> >9-limit and higher-odd-limit optimizations, Gene counts
> >> >intervals like 3:1 only once even though they occur multiple
> >> >times in the "complete" chord -- it's impossible to construct a
> >> >chord with each interval occuring only once. I'd rather avoid
> >> >assuming this (chord thing)
> >> >for *any* of the schemes -- even in the simplest equal-weighted
> >> >5-limit case, is it a major or minor chord? -- and then one can
> >> >show later for some of the schemes that starting with a chord
> >> >would in a sense be equivalent.
> >>
> >> Yes, that was a great appeal of TOP.
> >
> >Not at all what I was thinking.
>
> What were you thinking?

Woolhouse, for example.

🔗Graham Breed <gbreed@gmail.com>

11/14/2005 7:25:04 PM

[ Attachment content not displayed ]

🔗Graham Breed <gbreed@gmail.com>

11/14/2005 7:28:12 PM

[ Attachment content not displayed ]

🔗Graham Breed <gbreed@gmail.com>

11/14/2005 7:41:55 PM

On 11/15/05, Paul Erlich <perlich@aya.yale.edu> wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
> >
> > > It's a circular definition really, in that there's no
> > > independent measure of octave-equivalent complexity to compare
> > > it with.
>
> Why not? You can use expressibility, in which case any odd limit is
> just a certain bound on it.

I don't count that as "independent".

> Prime limit is not an indication of complexity, since any prime limit
> always allows for intervals whose complexity approaches infinity.
> However, for characterizing infinite JI lattices, prime limit is
> preferable to odd limit for several reasons.

Yes. But we aren't talking about infinite JI lattices. At least, I'm not.

> > Any more complicated numerical approach will use more code than
> > the least squares and still be slower to run.
> > Anyway, I have code that uses the simplex library now.
>
> Good! This is far simpler and far faster than what you outlined above.

Than what I outlined where? It's got about 200 lines of code behind
it. It calls the function to be optimized several times and doesn't
get to the exact solution.

> Yes, there's no unique TOP tuning for Blackwood, for example, as you
> know. The convention in my paper is to leave pure the primes that
> have some flexibility in their tuning under the TOP criterion.

I didn't know that, but I'm not surprised. Anyway, the simple code
using the simplex won't take any account of this.

> Well, there's no need for successive approximations when doing a
> simplex algorithm.

There is for the one I've been pointed towards. If you know of a
better one, show me!

> > At least, I
> > thought I was, but you can get an exact solution for a piecewise
> > linear graph I suppose. But you need to iterate to find it
>
> Iterate what? Just search, not iterate, I'd say.

The way I do it for 1-D minimax, I start with two points and work out
the point where the lines they're on join. Then replace the one of
the old points with the new one such that the solution lies in the
middle. And loop round until I get to the bottom.

How can you search without iterating?

> > A fuzzy set, then? In many cases, there should be intervals with
> > a higher prime limit in there as well.
>
> There should?

7-limit music in miracle is likely to hit neutral thirds, which will
be heard as 11:9. 7-limit music in Orwell will use a load of 11-limit
intervals as dissonances.

Graham

🔗Graham Breed <gbreed@gmail.com>

11/14/2005 7:54:58 PM

[ Attachment content not displayed ]

🔗Paul Erlich <perlich@aya.yale.edu>

11/15/2005 11:34:24 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> On 11/15/05, Paul Erlich <perlich@a...> wrote:
> >
> >
> > As a rough guess for roughness? You'd say that an interval 1 cent
> > from 2:1 is just as rough as an interval 1 cent from 8:5??
>
> No, but the question would never arise with an odd limit because
> octaves are taken out of the equation.

So substitute 3:2 for 2:1.

> I said that octave equivalence, odd limits,
> minimax and absolute error all go together.

?

> > > (Deviation from arbitrary complex intervals is
> > > meaningless.) But the Tenney weighting would give a 1 cent
> > > mistuning of a simple interval as rougher than a 1 cent
> > > mistuning of a complex interval. I don't like that at all.
> >
> > Good, because it doesn't say that. But which of the measures we've
> > considered would work better when forced into this interpretation?
>
> Error as roughness works better.

Then JI has some error already, a contradiction in terms.

> We haven't considered measures that do
> better than that, but I'm sure they exist.
>
> > >Whatever Tenney weighted deviation measures, it
> > >ain't roughness, and I don't think it's directly perceptible
> > >(unless perhaps you've spent a long time on JI ear training).
> > >Graham
> >
> > The same seems to go for any of the deviation measures.
>
> The minimax absolute error relative to an odd limit at least puts
> some cap on the roughness.

Huh? How so? I don't see this at all.

> > What you're failing to consider is that when comparing tunings,
> > the *absolute* roughnesses of JI intervals (which are different)
> > can fall out in the wash, and you can end up with the deviation
> > comparisons being equivalent to roughness comparisons.
>
> You can say I fail to consider it. I still say it's exactly what I
> consider when I say that a minimax should be unweighted, and a mean
> weighted.

And a sum-abs should be ____? Sorry, I don't follow this at all. Am I
trying your patience to ask you to explain this again?

🔗Paul Erlich <perlich@aya.yale.edu>

11/15/2005 11:35:51 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> On 11/15/05, Paul Erlich <perlich@a...> wrote:
> >
> >
> > What does that mean? The optimum the same as the standard
> > deviation??
>
> The optimum RMS error in the primes is approximately the same as the
> (optimum) standard deviation of the errors in the primes.

Wow -- so if all the primes had the same error, the standard deviation
would be zero, and the RMS error would therefore have to be
approximately zero?

> It's explained in
> the bit you told me to remind you to look at.
> Graham

Excellent. What post was that again?

🔗Paul Erlich <perlich@aya.yale.edu>

11/15/2005 11:46:54 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> On 11/15/05, Paul Erlich <perlich@a...> wrote:
> > --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
> > >
> > > It's a circular definition really, in that there's no
> > > independent measure of octave-equivalent complexity to compare
> > > it with.
> >
> > Why not? You can use expressibility, in which case any odd limit
> > is just a certain bound on it.
>
> I don't count that as "independent".

What would count?

> > Prime limit is not an indication of complexity, since any prime
> > limit always allows for intervals whose complexity approaches
> > infinity. However, for characterizing infinite JI lattices, prime
> > limit is preferable to odd limit for several reasons.
>
> Yes. But we aren't talking about infinite JI lattices. At least,
>I'm not.

I am. I see all tempered tunings as imposing some periodicity on, or
rolling up, an infinite JI lattice.

> > > Any more complicated numerical approach will use more code than
> > > the least squares and still be slower to run.
> > > Anyway, I have code that uses the simplex library now.
> >
> > Good! This is far simpler and far faster than what you outlined
> > above.
>
> Than what I outlined where?

You snipped it. You mentioned numerically searching all possible
generators.

> It's got about 200 lines of code behind
> it. It calls the function to be optimized several times and doesn't
> get to the exact solution.

The simplex algorithm doesn't get to the exact solution? How could
that be?? All it does is search the small number of corners, one of
which is exactly at the exact solution.

> > Yes, there's no unique TOP tuning for Blackwood, for example, as
> > you know. The convention in my paper is to leave pure the primes
> > that
> > have some flexibility in their tuning under the TOP criterion.
>
> I didn't know that, but I'm not surprised. Anyway, the simple code
> using the simplex won't take any account of this.
>
> > Well, there's no need for successive approximations when doing a
> > simplex algorithm.
>
> There is for the one I've been pointed towards.

How bizarre. I don't see how that could be a simplex algorithm then.
We're talking about simplex as in linear programming, right?

> If you know of a
> better one, show me!

Gene has a method on his TOP page.

> > > At least, I
> > > thought I was, but you can get an exact solution for a piecewise
> > > linear graph I suppose. But you need to iterate to find it
> >
> > Iterate what? Just search, not iterate, I'd say.
>
> The way I do it for 1-D minimax, I start with two points and work
> out the point where the lines they're on join. Then replace the one of
> the old points with the new one such that the solution lies in the
> middle. And loop round until I get to the bottom.
>
> How can you search without iterating?

Just look at each of the corners. One of the corners is guaranteed to
be exactly at the minimax solution.
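In the rank-1 case you don't even have to search: with Tenney
weighting the winning corner is where the largest and smallest
weighted val entries have equal and opposite errors, so you can write
the answer down directly (a little sketch of my own for illustration,
not from the paper or Gene's page):

from math import log2

def top_et_corner(val, primes):
    # Rank-1 sketch: the two extreme weighted val entries tie at the optimum.
    a = [v / log2(p) for v, p in zip(val, primes)]
    step = 2.0 / (min(a) + max(a))                     # octaves per step
    damage = (max(a) - min(a)) / (max(a) + min(a))     # max weighted error
    return step, damage

step, damage = top_et_corner([12, 19, 28, 34], [2, 3, 5, 7])
print(1200 * 12 * step, 1200 * damage)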

> > > A fuzzy set, then? In many cases, there should be intervals
> > > with a higher prime limit in there as well.
> >
> > There should?
>
> 7-limit music in miracle is likely to hit neutral thirds, which will
> be heard as 11:9.

I don't know about that.

> 7-limit music in Orwell will use a load of 11-limit
> intervals as dissonances.

If they're indeed being heard that way . . .

🔗Graham Breed <gbreed@gmail.com>

11/15/2005 5:09:25 PM

On 11/16/05, Paul Erlich <perlich@aya.yale.edu> wrote:
> Wow -- so if all the primes had the same error, the standard deviation
> would be zero, and the RMS error would therefore have to be
> approximately zero?

If all the primes have the same error before you optimize, you can
stretch the scale so they all have a very small error. If all the
primes have the same error after you optimize, that error must be
zero. This is taking account of the sign of the error (which the
standard deviation uses but the RMS doesn't).
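Put another way: to first order, a stretch shifts every weighted prime
error by the same amount, so the best RMS you can reach by stretching
is just the spread of the errors about their mean. A toy check with
made-up numbers:

import numpy as np

errors = np.array([0.0, -0.0015, 0.0022, 0.0008])   # made-up weighted prime errors

d = -errors.mean()                                   # stretch that minimizes the RMS
optimised_rms = np.sqrt(np.mean((errors + d) ** 2))

print(optimised_rms, errors.std())                   # (approximately) the same thing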

> It's explained in
> > the bit you told me to remind you to look at.
> > Graham
>
> Excellent. What post was that again?

November 2nd

Graham

🔗Graham Breed <gbreed@gmail.com>

11/15/2005 5:24:50 PM

On 11/16/05, Paul Erlich <perlich@aya.yale.edu> wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
> > > Why not? You can use expressibility, in which case any odd
> > > limit is just a certain bound on it.
> >
> > I don't count that as "independent".
>
> What would count?

An octave-equivalent harmonic entropy would count, I suppose.

> I am. I see all tempered tunings as imposing some periodicity on, or
> rolling up, an infinite JI lattice.

Right. The prime limit's only relevant before you roll up the lattice.

> > > Good! This is far simpler and far faster than what you outlined
> > > above.
> >
> > Than what I outlined where?
>
> You snipped it. You mentioned numerically searching all possible
> generators.

Well, that's simple but slow. I don't know by what criteria you say any
other method is simpler. I'll go by lines of code for now. Faster,
yes, of course.

> The simplex algorithm doesn't get to the exact solution? How could
> that be?? All it does is search the small number of corners, one of
> which is exactly at the exact solution.

This simplex algorithm has a parameter for how close you want the
match to be. And it works for arbitrary functions, so there needn't
be corners.

> How bizarre. I don't see how that could be a simplex algorithm then.
> We're talking about simplex as in linear programming, right?

We're talking about Nelder-Mead, which is what you get by searching
for "Python simplex" on Google. According to Google, it isn't linear
programming, in that linear programming requires linear functions but
this library doesn't.

> > If you know of a
> > better one, show me!
>
> Gene has a method on his TOP page.

And where's that?

> > How can you search without iterating?
>
> Just look at each of the corners. One of the corners is guaranteed to
> be exactly at the minimax solution.

So you have to iterate over the corners! Which makes it slower than a
least squares optimization because you have to evaluate the function
at all corners.

> > 7-limit music in Orwell will use a load of 11-limit
> > intervals as dissonances.
>
> If they're indeed being heard that way . . .

You don't know the 9-limit intervals will be either, but a prime limit
includes them.

Graham

🔗Graham Breed <gbreed@gmail.com>

11/15/2005 5:38:09 PM

On 11/16/05, Paul Erlich <perlich@aya.yale.edu> wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
> >
> > On 11/15/05, Paul Erlich <perlich@a...> wrote:
> > >
> > >
> > > As a rough guess for roughness? You'd say that an interval 1 cent
> > > from 2:1 is just as rough as an interval 1 cent from 8:5??
> >
> > No, but the question would never arise with an odd limit because
> > octaves are taken out of the equation.
>
> So substitute 3:2 for 2:1.

It's a rough bet. Quarter comma meantone gives equal error to 3:2 and
6:5, but that works with the weighted minimax anyway. Really, absolute
error's already a compromise between pure roughness and pure weighted
error for small errors.

> > I said that octave equivalence, odd limits,
> > minimax and absolute error all go together.
>
> ?

The odd limit only works for octave equivalence. Without octave
equivalence, you have to consider 2 alongside other primes, when it
should naturally be given some different weighting (although integer
limits would sort this out anyway, but it breaks the spirit of a
minimax). The minimax makes more sense for a small set, which the odd
limit gives you.

> > Error as roughness works better.
>
> Then JI has some error already, a contradiction in terms.

That's true however you weight the errors. No weighting gets closer
than Tenney weighting.

> > The minimax absolute error relative to an odd limit at least puts
> > some cap on the roughness.
>
> Huh? How so? I don't see this at all.

More complex intervals have more natural roughness. Simpler intervals
get rougher more quickly as the error gets larger. So for some given
error, a simpler and more complex interval will have the same
roughness. For some given odd limit and worst error, you know how
rough any of those consonances can get.

> And a sum-abs should be ____? Sorry, I don't follow this at all. Am I
> trying your patience to ask you to explain this again?

A sum-abs would have the same motivations as a sum-squared as far as I
can see, except it's harder to optimize.

Graham

🔗Paul Erlich <perlich@aya.yale.edu>

11/16/2005 2:23:02 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> On 11/16/05, Paul Erlich <perlich@a...> wrote:
> > Wow -- so if all the primes had the same error, the standard
> > deviation would be zero, and the RMS error would therefore have to
> > be approximately zero?
>
> If all the primes have the same error before you optimize, you can
> stretch the scale so they all have a very small error.

Aha.

> If all the
> primes have the same error after you optimize, that error must be
> zero. This is taking account of the sign of the error (which the
> standard deviation uses but the RMS doesn't).
>
> > > It's explained in
> > > the bit you told me to remind you to look at.
> > > Graham
> >
> > Excellent. What post was that again?
>
> November 2nd

Gotta run but I'll try to find it next time.

🔗Paul Erlich <perlich@aya.yale.edu>

11/16/2005 2:28:32 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> On 11/16/05, Paul Erlich <perlich@a...> wrote:
> > --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...>
> > wrote:
> > > > Why not? You can use expressibility, in which case any odd
> > > > limit is just a certain bound on it.
> > >
> > > I don't count that as "independent".
> >
> > What would count?
>
> An octave-equivalent harmonic entropy would count, I suppose.

Is it a problem that this is a continuous function, so that complex
ratios very close to a simple ratio would have to be considered
somehow 'simple'?

> > I am. I see all tempered tunings as imposing some periodicity on,
> > or rolling up, an infinite JI lattice.
>
> Right. The prime limit's only relevant before you roll up the
> lattice.

I think it's still relevant after.

> > Gene has a method on his TOP page.
>
> And where's that?

http://66.98.148.43/~xenharmo/top.htm

> > > How can you search without iterating?
> >
> > Just look at each of the corners. One of the corners is
> > guaranteed to be exactly at the minimax solution.
>
> So you have to iterate over the corners! Which makes it slower
> than a least squares optimization because you have to evaluate the
> function at all corners.

Have you actually tested the speeds?

> > > 7-limit music in Orwell will use a load of 11-limit
> > > intervals as dissonances.
> >
> > If they're indeed being heard that way . . .
>
> You don't know the 9-limit intervals will be either, but a prime
> limit includes them.

But it doesn't necessarily call them "consonances" or "dissonances",
so whether they're heard as the relevant ratios is irrelevant :)

🔗Paul Erlich <perlich@aya.yale.edu>

11/16/2005 7:06:45 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> On 11/16/05, Paul Erlich <perlich@a...> wrote:
> > --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...>
> > wrote:
> > >
> > > On 11/15/05, Paul Erlich <perlich@a...> wrote:
> > > >
> > > >
> > > > As a rough guess for roughness? You'd say that an interval 1
> > > > cent from 2:1 is just as rough as an interval 1 cent from 8:5??
> > >
> > > No, but the question would never arise with an odd limit
> > > because octaves are taken out of the equation.
> >
> > So substitute 3:2 for 2:1.
>
> It's a rough bet. Quarter comma meantone gives equal error to 3:2
> and 6:5, but that works with the weighted minmax anyway.

And with sum-abs error, among other things.

> Really, absolute
> error's already a compromise between pure roughness and pure
> weighted error for small errors.

What would better capture "pure roughness" in your view?

> > > I said that octave equivalence, odd limits,
> > > minimax and absolute error all go together.
> >
> > ?
>
> The odd limit only works for octave equivalence.

Right.

> Without octave
> equivalence, you have to consider 2 alongside other primes, when it
> should naturally be given some different weighting (although integer
> limits would sort this out anyway, but it breaks the spirit of a
> minimax). The minimax makes more sense for a small set, which the
> odd limit gives you.

One could argue that the minimax makes more sense for a large set.

And absolute error?

> > > Error as roughness works better.
> >
> > Then JI has some error already, a contradiction in terms.
>
> That's true however you weight the errors. No weighting gets closer
> than Tenney weighting.

You mean equal weighting gets closer than Tenney weighting? I don't
believe that's true, when you're talking about comparing different
tunings with it, and how well that parallels comparing their
roughnesses.

> > > The minimax absolute error relative to an odd limit at least
> > > puts some cap on the roughness.
> >
> > Huh? How so? I don't see this at all.
>
> More complex intervals have more natural roughness.

Right.

> Simpler intervals
> get rougher more quickly as the error gets larger.

Right.

> So for some given
> error, a simpler and more complex interval will have the same
> roughness.

I don't know about that. You may leave the local minimum associated
with the more complex interval altogether before the given error
brings the simpler interval up to the same roughness.

> For some given odd limit and worst error, you know how
> rough any of those consonances can get.

If the worst error is small, the simple intervals will never (even if
they have that worst error) approach the roughness of even JI
renditions of the purer intervals.

> > And a sum-abs should be ____? Sorry, I don't follow this at all.
> > Am I trying your patience to ask you to explain this again?
>
> A sum-abs would have the same motivations as a sum-squared as far
> as I can see, except it's harder to optimize.

I doubt it's significantly harder for modern computers.

🔗Graham Breed <gbreed@gmail.com>

11/17/2005 5:14:30 AM

On 11/17/05, Paul Erlich <perlich@aya.yale.edu> wrote:

> What would better capture "pure roughness" in your view?

Dunno, that's why I stick with the equal weighted errors. I suppose
you could add a Tenney residual roughness to a Tenney weighted error.

> One could argue that the minimax makes more sense for a large set.

It's harder to calculate at any rate.

> And absolute error?

Much the same as squared error with anything.

> You mean equal weighting gets closer than Tenney weighting? I don't
> believe that's true, when you're talking about comparing different
> tunings with it, and how well that parallels comparing their
> roughnesses.

The simpler intervals should have smaller bands but Tenney weighting
gives them larger bands. Equal sized bands are a compromise.

> I don't know about that. You may leave the local minimum associated
> with the more complex interval altogether before the given error
> brings the simpler interval up to the same roughness.

Oh yes, there's that.

> If the worst error is small, the simple intervals will never (even if
> they have that worst error) approach the roughness of even JI
> renditions of the purer intervals.

But at least you have some pure, simple intervals.

> > > And a sum-abs should be ____? Sorry, I don't follow this at all.
> > > Am I trying your patience to ask you to explain this again?
> >
> > A sum-abs would have the same motivations as a sum-squared as far
> > as I can see, except it's harder to optimize.
>
> I doubt it's significantly harder for modern computers.

It's significantly harder on any computer if you do it often enough
with a high level language.

Graham

🔗Paul Erlich <perlich@aya.yale.edu>

11/17/2005 1:36:25 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> On 11/17/05, Paul Erlich <perlich@a...> wrote:
>
> > What would better capture "pure roughness" in your view?
>
> Dunno, that's why I stick with the equal weighted errors. I suppose
> you could add a Tenney residual roughness to a Tenney weighted error.

Could you elaborate?

> > One could argue that the minimax makes more sense for a large set.
>
> It's harder to calculate at any rate.

Maybe.

> > And absolute error?
>
> Much the same as squared error with anything.

But you specifically said absolute error goes with octave-equivalence
and the rest (?)

> > You mean equal weighting gets closer than Tenney weighting? I
> > don't
> > believe that's true, when you're talking about comparing different
> > tunings with it, and how well that parallels comparing their
> > roughnesses.
>
> The simpler intervals should have smaller bands but Tenney weighting
> gives them larger bands.

Quite the contrary -- Tenney weighting gives them smaller bands.

> Equal sized bands are a compromise.
>
> > I don't know about that. You may leave the local minimum
> > associated
> > with the more complex interval altogether before the given error
> > brings the simpler interval up to the same roughness.
>
> Oh yes, there's that.
>
> > If the worst error is small, the simple intervals will never
> > (even if
> > they have that worst error) approach the roughness of even JI
> > renditions of the purer intervals.
>
> But at least you have some pure, simple intervals.

So it's not really all about roughness then, is it? :)

> > > > And a sum-abs should be ____? Sorry, I don't follow this at
> > > > all. Am I trying your patience to ask you to explain this again?
> > >
> > > A sum-abs would have the same motivations as a sum-squared as
> > > far as I can see, except it's harder to optimize.
> >
> > I doubt it's significantly harder for modern computers.
>
> It's significantly harder on any computer if you do it often enough
> with a high level language.

Compared to other kinds of optimizations, which involve numerical
searches and the like, I think all of these kinds have essentially
zero complexity and take essentially zero computer time.

🔗Graham Breed <gbreed@gmail.com>

11/17/2005 4:03:42 PM

On 11/17/05, Paul Erlich <perlich@aya.yale.edu> wrote:

> > An octave-equivalent harmonic entropy would count, I suppose.
>
> Is it a problem that this is a continuous function, so that complex
> ratios very close to a simple ratios would have to be considered
> somehow 'simple'?

It'd be a problem if an interval within the limit you're looking at
came out simpler than it should be.

> > > Gene has a method on his TOP page.
> >
> > And where's that?
>
> http://66.98.148.43/~xenharmo/top.htm

There's one method for equal temperaments, which is simple but will be
fairly slow. For higher ranks, he doesn't give a method but says in
what branch of mathematics the solution can be found.

> Have you actually tested the speeds?

I can't test the speed of an algorithm I haven't implemented. For the
odd limit calculations, I found the minimax optimization to be
significantly slower than the RMS, by measuring the time taken to do
the search for linear temperaments. I have a faster algorithm for the
minimax now, which assumes the function only has a single, global
minimum.

If you think an O(n) algorithm might be slower than an O(n**3) one,
the onus is really on you to demonstrate it. But I might look at this
anyway. The only TOP algorithm I can code directly is for equal
temperaments, though. So I won't be able to show anything about
linear temperaments. Octave equivalent optimizations are very
efficient for equal temperaments because you don't have to do
anything.

Graham

🔗Graham Breed <gbreed@gmail.com>

11/17/2005 4:13:15 PM

On 11/18/05, Paul Erlich <perlich@aya.yale.edu> wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
> > Dunno, that's why I stick with the equal weighted errors. I suppose
> > you could add a Tenney residual roughness to a Tenney weighted error.
>
> Could you elaborate?

No. I don't know enough about the perception of dissonance to specify
a quantitative model. Instead, I stick with the simplest algorithm
that has some meaning.

> > > One could argue that the minimax makes more sense for a large set.
> >
> > It's harder to calculate at any rate.
>
> Maybe.

The other thing is that minimax is more meaningful when you can grasp
the whole set that the maximum is over.

> But you specifically said absolute error goes with octave-equivalence
> and the rest (?)

Oh, yes. You can't optimize for absolute error over a whole prime
limit. Ludicrously complex ratios have equal weight to the simple
ones. So you have to choose a specific set. An odd limit is a
convenient set to choose. You could use an integer limit or some such
instead, but it's bound to be bigger than an equivalent odd limit and
there's no need for it because it's easier to use weighted primes.

> > > You mean equal weighting gets closer than Tenney weighting? I
> > > don't
> > > believe that's true, when you're talking about comparing different
> > > tunings with it, and how well that parallels comparing their
> > > roughnesses.
> >
> > The simpler intervals should have smaller bands but Tenney weighting
> > gives them larger bands.
>
> Quite the contrary -- Tenney weighting gives them smaller bands.

Yes, the contrary. Simple intervals can have larger mistunings for a
given roughness.

> So it's not really all about roughness then, is it? :)

It's mostly about simplicity.

> Compared to other kinds of optimizations, which involve numerical
> searches and the like, I think all of these kinds have essentially
> zero complexity and take essentially zero computer time.

Prime limit algorithms will be easier than odd limit ones, but I
expect it'll still matter if you use a large enough search space.
Large searches are good because the user doesn't have to specify
arbitrary parameters.

Graham

🔗Paul Erlich <perlich@aya.yale.edu>

11/21/2005 2:31:44 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> On 11/17/05, Paul Erlich <perlich@a...> wrote:
>
> > > An octave-equivalent harmonic entropy would count, I suppose.
> >
> > Is it a problem that this is a continuous function, so that
> > complex ratios very close to a simple ratio would have to be
> > considered somehow 'simple'?
>
> It'd be a problem if an interval within the limit you're looking at
> came out simpler than it should be.

So where does this leave us?

> > > > Gene has a method on his TOP page.
> > >
> > > And where's that?
> >
> > http://66.98.148.43/~xenharmo/top.htm
>
> There's one method for equal temperaments, which is simple but will
> be fairly slow.

My method for equal temperaments is referenced in my paper and is
immediate.
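
For reference, here's one closed-form recipe for a single val, sketched
under the usual Tenney-weighted minimax assumptions (not necessarily the
method referenced in the paper): the weighted prime errors are all linear
in the step size, so the best stretch simply centres the largest and
smallest weighted mappings on 1200 cents.

    import numpy as np

    def top_et(val, primes):
        """Tenney-weighted minimax (TOP) tuning of one equal-temperament val."""
        val = np.asarray(val, float)
        weighted = val / np.log2(primes)   # weighted size of each prime, in steps
        step = 2400.0 / (weighted.max() + weighted.min())   # cents per ET step
        return step * val                  # tempered primes in cents

    primes = np.array([2.0, 3.0, 5.0, 7.0])
    print(top_et([12, 19, 28, 34], primes))   # 7-limit 12-equal; octave comes out flat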

> For higher ranks, he doesn't give a method but says in
> what branch of mathematics the solution can be found.

I wish Gene himself would chime in at this point.

🔗Paul Erlich <perlich@aya.yale.edu>

11/21/2005 2:38:28 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> On 11/18/05, Paul Erlich <perlich@a...> wrote:
> > --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...>
> > wrote:
> > > Dunno, that's why I stick with the equal weighted errors. I
> > > suppose you could add a Tenney residual roughness to a Tenney
> > > weighted error.
> >
> > Could you elaborate?
>
> No. I don't know enough about the perception of dissonance to
> specify
> a quantitative model. Instead, I stick with the simplest algorithm
> that has some meaning.

I'd like to do more than simply ignore the ideas you throw out.

> > > > One could argue that the minimax makes more sense for a large
> > > > set.
> > >
> > > It's harder to calculate at any rate.
> >
> > Maybe.
>
> The other thing is that minimax is more meaningful when you can
> grasp
> the whole set that the maximum is over.
>
> > But you specifically said absolute error goes with
> > octave-equivalence and the rest (?)
>
> Oh, yes. You can't optimize for absolute error over a whole prime
> limit. Ludicrously complex ratios have equal weight to the simple
> ones.

Seems like you said "absolute error" but meant "unweighted error".
Absolute error means something different to me -- that you're taking
the absolute value, rather than the square or something else, of the
error.

> So you have to choose a specific set. An odd limit is a
> convenient set to choose. You could use an integer limit or some
> such instead, but it's bound to be bigger than an equivalent odd
> limit and
> there's no need for it because it's easier to use weighted primes.
>
> > > > You mean equal weighting gets closer than Tenney weighting? I
> > > > don't believe that's true, when you're talking about comparing
> > > > different tunings with it, and how well that parallels comparing
> > > > their roughnesses.
> > >
> > > The simpler intervals should have smaller bands but Tenney
> > > weighting gives them larger bands.
> >
> > Quite the contrary -- Tenney weighting gives them smaller bands.
>
> Yes, the contrary. Simple intervals can have larger mistunings for
> a given roughness.

Not if you're comparing them against like intervals. If you're
comparing like against like, a given mistuning will contribute more
roughness for a simple interval than for a complex interval. As
different tunings of the same temperament will have the same set of
important intervals, you can always compare them purely by comparing
like against like.

> > So it's not really all about roughness then, is it? :)
>
> It's mostly about simplicity.

I can sympathize with that sentiment.

> > Compared to other kinds of optimizations, which involve numerical
> > searches and the like, I think all of these kinds have essentially
> > zero complexity and take essentially zero computer time.
>
> Prime limit algorithms will be easier than odd limit ones, but I
> expect it'll still matter if you use a large enough search space.
> Large searches are good because the user doesn't have to specify
> arbitrary parameters.

🔗Gene Ward Smith <gwsmith@svpal.org>

11/21/2005 4:52:36 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> I wish Gene himself would chime in at this point.

Quite recently I explained how you set up the linear programming
problem. What more needs to be said?
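
For anyone who wants to try it, a bare-bones sketch of that kind of
linear-programming setup (an illustrative formulation, not necessarily the
one Gene posted): minimise the worst weighted error e over the generator
sizes, subject to each prime's weighted error lying between -e and +e.

    import numpy as np
    from scipy.optimize import linprog

    primes = np.array([2.0, 3.0, 5.0])
    just = 1200.0 * np.log2(primes)
    w = np.log2(primes)
    M = np.array([[1, 0],    # prime 2 = 1 period
                  [1, 1],    # prime 3 = period + generator
                  [0, 4]])   # prime 5 = 4 generators (meantone)

    # Variables [period, generator, e]; constraints (M @ g - just)/w <= e and >= -e.
    c = np.array([0.0, 0.0, 1.0])
    A_ub = np.vstack([np.hstack([ M / w[:, None], -np.ones((3, 1))]),
                      np.hstack([-M / w[:, None], -np.ones((3, 1))])])
    b_ub = np.concatenate([just / w, -just / w])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None), (None, None), (0, None)])
    print(res.x[:2], res.x[2])   # TOP period and generator in cents, worst weighted error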

🔗Graham Breed <gbreed@gmail.com>

11/22/2005 5:50:18 PM

On 11/22/05, Paul Erlich <perlich@aya.yale.edu> wrote:

> I'd like to do more than simply ignore the ideas you throw out.

Oh, well, that's sweet :) My life's a bit too chaotic to follow these
things through now, and I'm not sure it's an interesting problem
anyway. It's not so difficult to tell a good temperament from a bad
one, and telling a quite good one from a middling good one isn't so
important. Generally, I can't see many interesting gaps to be filled
in. A systematic way of deciding what ET pairs to look at for the
temperament search would be nice. I'm also trying to get the search
working in C.

> Seems like you said "absolute error" but meant "unweighted error".
> Absolute error means something different to me -- that you're taking
> the absolute value, rather than the square or something else, of the
> error.

Yes, I agree with that. I thought you shifted meaning somewhere, but
it's not worth looking back now. Unweighted error goes with odd
limits.

> > Yes, the contrary. Simple intervals can have larger mistunings for
> > a given roughness.
>
> Not if you're comparing them against like intervals. If you're
> comparing like against like, a given mistuning will contribute more
> roughness for a simple interval than for a complex interval. As
> different tunings of the same temperament will have the same set of
> important intervals, you can always compare them purely by comparing
> like against like.

I'm not really sure what you mean here. If you're comparing like
intervals, the weighting shouldn't matter. And you're talking about
the contribution of the mistuning again. We seem to be going round in
circles. For the minimax, I'm not interested in the contribution but
the absolute pain, however that's likely to be perceived. Also, I'm
interested in comparing different temperaments more than different
tunings of the same temperament. To get a figure for the badness you
need to first do an optimization.

> > > So it's not really all about roughness then, is it? :)
> >
> > It's mostly about simplicity.
>
> I can sympathize with that sentiment.

I didn't see a reply to my benchmarking post, so I'll mention it here:
I sped up the TOP optimization by removing the repeated calls to the
weighted primes function. It's now much faster, but still an order of
magnitude slower than the RMS for higher limits. There's also one
more line of code.

Graham

🔗Paul Erlich <perlich@aya.yale.edu>

11/23/2005 1:08:22 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:

> I'm not really sure what you mean here. If you're comparing like
> intervals, the weighting shouldn't matter.

Why not? There will be several comparisons, each of which compares
like intervals to one another, but overall many types of intervals
will get compared.

> And you're talking about
> the contribution of the mistuning again. We seem to be going round
> in circles.

I'd like to understand your point of view better, then.

> For the minimax, I'm not interested in the contribution but
> the absolute pain,

In each interval, right?

> however that's likely to be perceived.

And you can't assume this is zero in JI, right?

> Also, I'm
> interested in comparing different temperaments more than different
> tunings of the same temperament.

I should have included comparing different temperaments of the same
JI system too, as my arguments applied to that as well.

> To get a figure for the badness you
> need to first do an optimization.
>
> > > > So it's not really all about roughness then, is it? :)
> > >
> > > It's mostly about simplicity.
> >
> > I can sympathize with that sentiment.
>
> I didn't see a reply to my benchmarking post,

It seemed clear you weren't looking at the best formulae at least for
the ET case, so I wanted to make sure that was cleared up first . . .

> so I'll mention it here:
> I sped up the TOP optimization by removing the repeated calls to the
> weighted primes function. It's now much faster, but still an order
> of magnitude slower than the RMS for higher limits. There's also one
> more line of code.
>
>
> Graham
>

🔗Paul Erlich <perlich@aya.yale.edu>

11/23/2005 1:39:04 PM

I'm still waiting for a reply on this from you, Gene.

Also, everyone seems to think Yahoo's search feature improved like a
few months ago, but as far as I can tell, it's gotten worse. It
refuses to find my posts!

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
>
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
> wrote:
> >
> > --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> > wrote:
> >
> > > Any differences for
> > > the systems in my paper?
> >
> > I'll check, but if you have handy a table of wedgies you could
> > post here it would be nice.
>
> I know you insist that wedgies are wedge products of vals and not
> wedge products of commas, but since taking the dual is so easy,
> I'll give you the latter anyway :)
>
> Commas' bivector Horagram name
> [[-14 0 8 0 -5 0>> Blacksmith
> [[2 -5 3 -4 4 -4>> Dimisept
> [[16 -6 -4 2 4 -1>> Dominant
> [[-14 1 7 -6 0 -3>> August
> [[-2 -12 11 4 -4 -2>> Pajara
> [[20 -4 -8 -1 8 -2>> Semaphore
> [[-12 13 -4 -10 4 -1>> Meantone
> [[4 7 -8 -8 8 -2>> Injera
> [[13 8 -14 2 3 4>> Negrisept
> [[14 -18 7 6 0 -3>> Augene
> [[-7 12 -6 3 -5 6>> Keemun
> [[28 -19 0 12 0 0>> Catler
> [[-5 1 2 10 -10 6>> Hedgehog
> [[-30 6 12 -2 -9 1>> Superpyth
> [[-5 1 2 -13 9 -7>> Sensisept
> [[1 20 -17 -2 2 6>> Lemba
> [[-28 18 1 -6 -5 3>> Porcupine
> [[32 -17 -4 9 4 -1>> Flattone
> [[25 -5 -10 12 -1 5>> Magic
> [[-3 13 -9 6 -6 8>> Doublewide
> [[-21 12 2 3 -10 6>> Nautilus
> [[-16 -12 19 4 -9 -2>> Beatles
> [[8 9 -12 -11 12 -3>> Liese
> [[36 -10 -12 1 12 -3>> Cynder
> [[27 7 -21 8 3 7>> Orwell
> [[-10 25 -15 -14 8 1>> Garibaldi
> [[9 -17 9 -7 9 -10>> Myna
> [[15 20 -25 -2 7 6>> Miracle
>
> bonus:
>
> [[-34 22 1 18 -27 18>> Ennealimmal
>
> I'd also like to know, when the TOP tuning is not unique (such as
> for 5-limit Blackwood), whether a stretched Kees tuning *could* be a
> (non-canonical) TOP tuning.
>
> > > Based on TOP, I told Igliashon that 13-equal
> > > is better in the 7-limit using the Orwell approximation than any
> > > other approximation of the 7-limit in 13-equal. Is this still true
> > > based on Kees?
> >
> > I don't even know what you mean.
>
> I think I could state this as: Is the best 13-equal val for the 7-
> limit the one which is an Orwell val?

🔗Graham Breed <gbreed@gmail.com>

11/24/2005 7:26:35 PM

On 11/24/05, Paul Erlich <perlich@aya.yale.edu> wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> > And you're talking about
> > the contribution of the mistuning again. We seem to be going round
> > in circles.
>
> I'd like to understand your point of view better, then.

For a minmax, it's how bad the interval sounds that's important, not
how much the mistuning contributes to the badness.

> > For the minimax, I'm not interested in the contribution but
> > the absolute pain,
>
> In each interval, right?
>
> > however that's likely to be perceived.
>
> And you can't assume this is zero in JI, right?

If you're looking at a minimax, the intervals closest to JI don't
matter. Only the interval furthest away is scored. So it's only that
interval we need to worry about. If some intervals are reasonably far
from JI, they'll be at the point where an interval sounds bad
according to how mistuned it is, by unweighted mistuning. Some
intervals may be so mistuned that they become meaningless, and with a
minimax you can also specify that no such intervals exist. If they're
all close to JI, at least the minimax can tell you that. You can
specify how close you want to be to JI, and make sure all intervals
are within that distance. Perhaps the unweighted minimax isn't so
useful for temperaments that are close to JI. But, for a given
complexity, such temperaments are difficult to find so the precise
scoring isn't so important.

Then again, from a practical point of view, you may not be able to
tune to better than 1 cent say. So if all intervals are within 1 cent
of JI the temperament is as good as (and pointless compared to ;) JI.
You can select the minimax according to your tuning resolution.

Graham

🔗Paul Erlich <perlich@aya.yale.edu>

11/25/2005 2:00:11 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> On 11/24/05, Paul Erlich <perlich@a...> wrote:
> > --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...>
> > wrote:
> >
> > > And you're talking about
> > > the contribution of the mistuning again. We seem to be going
> > > round in circles.
> >
> > I'd like to understand your point of view better, then.
>
> For a minmax, it's how bad the interval sounds that's important,

I don't think you can state that as a matter of dogma. For one thing,
would you consider a pure 8:5 to sound "badder" than a somewhat-
mistuned 3:2?

> not
> how much the mistuning contributes to the badness.

> > > For the minimax, I'm not interested in the contribution but
> > > the absolute pain,
> >
> > In each interval, right?
> >
> > > however that's likely to be perceived.
> >
> > And you can't assume this is zero in JI, right?
>
> If you're looking at a minimax, the intervals closest to JI don't
> matter. Only the interval furthest away is scored. So it's only
> that interval we need to worry about. If some intervals are
> reasonably far from JI, they'll be at the point where an interval
> sounds bad according to how mistuned it is, by unweighted mistuning.
> Some intervals may be so mistuned that they become meaningless, and
> with a minimax you can also specify that no such intervals exist. If
> they're
> all close to JI, at least the minimax can tell you that. You can
> specify how close you want to be to JI, and make sure all intervals
> are within that distance. Perhaps the unweighted minimax isn't so
> useful for temperaments that are close to JI. But, for a given
> complexity, such temperaments are difficult to find so the precise
> scoring isn't so important.

Sounds like a lot of hand-waving to me, but my mind is still
open . . . We also haven't much considered how much various intervals
can be mistuned when they coexist in *chords*, which you claim is the
normal case . . .

> Then again, from a practical point of view, you may not be able to
> tune to better than 1 cent say. So if all intervals are within 1
> cent of JI the temperament is as good as (and pointless compared to ;)
> JI.
> You can select the minimax according to your tuning resolution.

?

🔗Gene Ward Smith <gwsmith@svpal.org>

11/25/2005 11:45:29 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
>
> I'm still waiting for a reply on this from you, Gene.

Problems only arise for blacksmith and catler, and this is because the
5 (in the case of blacksmith) or the 7 (in the case of catler) are not
determined by the minimax condition. Hence, an exact value for a 5 or
7, when the octave is adjusted, becomes an inexact value, whereas the
exact value is probably what is wanted.

🔗Graham Breed <gbreed@gmail.com>

11/27/2005 4:40:10 PM

On 11/26/05, Paul Erlich <perlich@aya.yale.edu> wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:

> > For a minmax, it's how bad the interval sounds that's important,
>
> I don't think you can state that as a matter of dogma. For one thing,
> would you consider a pure 8:5 to sound "badder" than a somewhat-
> mistuned 3:2?

Who's being dogmatic? But yes, I would. Fifths are still the
strongest (octaves aside) consonances in quarter comma meantone
despite being out of tune.

> Sounds like a lot of hand-waving to me, but my mind is still
> open . . . We also haven't much considered how much various intervals
> can be mistuned when they coexist in *chords*, which you claim is the
> normal case . . .

Everything's hand-waving. What else do we have? The idea is that the
dissonance of the chord is determined by the most dissonant
constituent interval. That's what led to odd limits in the first
place.
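
A tiny sketch of that idea, scoring a JI chord by the largest odd limit
among its intervals (the function names are only illustrative):

    from fractions import Fraction

    def odd_part(n):
        while n % 2 == 0:
            n //= 2
        return n

    def odd_limit(ratio):
        return max(odd_part(ratio.numerator), odd_part(ratio.denominator))

    def chord_odd_limit(chord):
        """Worst odd limit over every interval in a chord of frequency ratios."""
        return max(odd_limit(hi / lo)
                   for i, lo in enumerate(chord) for hi in chord[i + 1:])

    print(chord_odd_limit([Fraction(4), Fraction(5), Fraction(6), Fraction(7)]))  # 7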

Graham

🔗Carl Lumma <ekin@lumma.org>

11/27/2005 9:32:03 PM

>> Sounds like a lot of hand-waving to me, but my mind is still
>> open . . . We also haven't much considered how much various intervals
>> can be mistuned when they coexist in *chords*, which you claim is the
>> normal case . . .
>
>Everything's hand-waving. What else do we have? The idea is that the
>dissonance of the chord is determined by the most dissonant
>constituent interval. That's what led to odd limits in the first
>place.

I'm not sure what proposal this is, but for the record I wouldn't take
this in support of weighting more heavily the errors of the most complex
intervals in a chord. Neighborhood-of-the-octave-you're-in dissonance
isn't at all like mistuned-by-a-small-amount dissonance.

Odd limits work decently well overall, but something which takes all
the chord's intervals into account would be better. Similarly, I
think RMS that took the errors of all the intervals into account
would be better than what I think you're doing (and better than TOP).
You said somewhere that the latter tracks the former, but I didn't
notice if you posted evidence. I bet the former is harder to optimize
for?

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

11/28/2005 11:06:40 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> Odd limits work decently well overall, but something which takes all
> the chord's intervals into account would be better. Similarly, I
> think RMS that took the errors of all the intervals into account
> would be better than what I think you're doing (and better than TOP).
> You said somewhere that the latter tracks the former, but I didn't
> notice if you posted evidence. I bet the former is harder to optimize
> for?

RMS with all the intervals is easy to optimize for, and is what is
normally done.
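
For example, with a fixed interval set the unweighted RMS optimum drops
straight out of ordinary linear least squares. A small sketch for 5-limit
meantone with pure octaves (interval set and basis chosen only for
illustration):

    import numpy as np

    # Rows: fifths per interval (octaves held at 1200); rhs absorbs the octaves.
    # 3/2 = 1 fifth, 5/4 = 4 fifths - 2 octaves, 6/5 = 2 octaves - 3 fifths.
    A = np.array([[1.0], [4.0], [-3.0]])
    b = np.array([701.955, 386.314 + 2400.0, 315.641 - 2400.0])

    fifth, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    errors = A @ fifth - b
    print(fifth[0], np.sqrt(np.mean(errors ** 2)))   # ~696.16 cents, the RMS-optimal fifth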

🔗Carl Lumma <ekin@lumma.org>

11/28/2005 11:47:20 AM

>> Odd limits work decently well overall, but something which takes all
>> the chord's intervals into account would be better. Similarly, I
>> think RMS that took the errors of all the intervals into account
>> would be better than what I think you're doing (and better than TOP).
>> You said somewhere that the latter tracks the former, but I didn't
>> notice if you posted evidence. I bet the former is harder to optimize
>> for?
>
>RMS with all the intervals is easy to optimize for, and is what is
>normally done.

I thought that was the pre-TOP paradigm (though perfect octaves
were generally assumed).

But Paul has a point about the 'assuming a chord' thing. TOP
doesn't require that assumption.

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/28/2005 11:50:06 AM

>> Odd limits work decently well overall, but something which takes all
>> the chord's intervals into account would be better. Similarly, I
>> think RMS that took the errors of all the intervals into account
>> would be better than what I think you're doing (and better than TOP).
>> You said somewhere that the latter tracks the former, but I didn't
>> notice if you posted evidence. I bet the former is harder to optimize
>> for?
>
>RMS with all the intervals is easy to optimize for, and is what is
>normally done.

Also, pairwise (dyadic) consideration isn't necessarily what I
meant here. Chordadic harmonic entropy, long one of the most
interesting problems in tuning theory IMO, now seems to be the
single most significant missing piece. I know you've made a
few stabs at getting into harmonic entropy, Gene ("poor man's",
and maybe others), but I never saw Paul reply. Meanwhile, it's
been quite a while since I thought Paul had an approach for
triads that he was just waiting for computer time to run......

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/28/2005 4:38:58 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
>
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> wrote:
> >
> > I'm still waiting for a reply on this from you, Gene.
>
> Problems only arise for blacksmith and catler, and this is because
> the
> 5 (in the case of blacksmith) or the 7 (in the case of catler) are not
> determined by the minimax condition. Hence, an exact value for a 5 or
> 7, when the octave is adjusted, becomes an inexact value, whereas the
> exact value is probably what is wanted.

Probably? What *are* the minimax Kees tunings of (the 7-limit)
Blacksmith and Catler? I seriously doubt the latter has pure 7s!

🔗Gene Ward Smith <gwsmith@svpal.org>

11/28/2005 11:02:21 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> I know you've made a
> few stabs at getting into harmonic entropy, Gene ("poor man's",
> and maybe others), but I never saw Paul reply. Meanwhile, it's
> been quite a while since I thought Paul had an approach for
> triads that he was just waiting for computer time to run......

My idea of using Stieltjes integration with the ? function never got
followed up on, but I don't see that that's going to help with chords.

🔗Gene Ward Smith <gwsmith@svpal.org>

11/28/2005 11:07:21 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> Probably? What *are* the minimax Kees tunings of (the 7-limit)
> Blacksmith and Catler? I seriously doubt the latter has pure 7s!

I get <1200 1900 2800 3368.8259| for Catler. Why is this wrong, in
your view?

🔗Paul Erlich <perlich@aya.yale.edu>

11/29/2005 2:02:51 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
>
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> wrote:
>
> > Probably? What *are* the minimax Kees tunings of (the 7-limit)
> > Blacksmith and Catler? I seriously doubt the latter has pure 7s!
>
> I get <1200 1900 2800 3368.8259| for Catler. Why is this wrong, in
> your view?

See below. I think the problem is that the minimax Kees result is not
unique. So I'm beginning to wonder if your statement that for some
temperaments, a minimax Kees tuning cannot be a stretched or
compressed TOP tuning is incorrect.

Here are some of the Kees-weighted errors for your result. We only
need to look at ratios involving 7 since the rest of the tuning is
fixed:

error(7/4)/lg2(7) = 0
error(7/5)/lg2(7) = 4.88
error(7/6)/lg2(7) = 0.70
error(9/7)/lg2(9) = 1.23
error(15/14)/lg2(15) = 3.00

Let's try a stretched TOP Catler instead:

<1200 1900 2800 3375.37|

error(7/4)/lg2(7) = 2.33
error(7/5)/lg2(7) = 2.54
error(7/6)/lg2(7) = 3.03
error(9/7)/lg2(9) = 3.30
error(15/14)/lg2(15) = 1.33

This seems to have a lower minimax! It certainly looks like a better
way to implement Catler in an octave-equivalent world, qualitatively
speaking.
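
A quick sketch that reproduces the two tables above (the helper names are
mine; the Kees weight of a ratio is taken as the base-2 log of its larger
odd part):

    from fractions import Fraction
    from math import log2

    def cents(ratio, tuning):
        """Size of a ratio under a prime tuning given in cents."""
        n, d, total = ratio.numerator, ratio.denominator, 0.0
        for p, c in tuning.items():
            while n % p == 0:
                total += c
                n //= p
            while d % p == 0:
                total -= c
                d //= p
        return total

    def odd_part(n):
        while n % 2 == 0:
            n //= 2
        return n

    def kees_error(ratio, tuning):
        just = 1200.0 * log2(ratio.numerator / ratio.denominator)
        weight = log2(max(odd_part(ratio.numerator), odd_part(ratio.denominator)))
        return abs(cents(ratio, tuning) - just) / weight

    for seven in (3368.8259, 3375.37):
        tuning = {2: 1200.0, 3: 1900.0, 5: 2800.0, 7: seven}
        print([round(kees_error(Fraction(*r), tuning), 2)
               for r in ((7, 4), (7, 5), (7, 6), (9, 7), (15, 14))])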

What I think is going on is that, since 6/5 has a Kees-weighted error
of 6.74 in both tunings, 7 is free to wiggle around quite a bit
without changing the minimax.

But we have a convention for non-unique TOP tunings which is to state
the tuning that achieves the minimax over the subset of ratios whose
tuning can vary without changing the overall minimax. Can we apply
this sort of convention for minimax Kees as well?

🔗Paul Erlich <perlich@aya.yale.edu>

11/29/2005 2:34:49 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> On 11/26/05, Paul Erlich <perlich@a...> wrote:
> > --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...>
> > wrote:
>
> > > For a minmax, it's how bad the interval sounds that's important,
> >
> > I don't think you can state that as a matter of dogma. For one
> > thing,
> > would you consider a pure 8:5 to sound "badder" than a somewhat-
> > mistuned 3:2?
>
> Who's being dogmatic? But yes, I would. Fifths are still the
> strongest (octaves aside) consonances in quarter comma meantone
> despite being out of tune.

You say that for a minimax it's how bad the intervals sound that's
important. But if the pure 8:5 sounds "badder" than a whole host of
different fifths, and so on, doesn't that imply that only the tuning
of the most complex intervals is going to matter? I certainly don't
see the use of minimax as forcing you into anything like that kind of
scenario; hence your statement seems like groundless dogma to me
right now. (But I'm sure it isn't . . . you just have to explain it
better)

> > Sounds like a lot of hand-waving to me, but my mind is still
> > open . . . We also haven't much considered how much various
> > intervals can be mistuned when they coexist in *chords*, which you
> > claim is the
> > normal case . . .
>
> Everything's hand-waving. What else do we have? The idea is that
> the dissonance of the chord

What's "the chord"?

> is determined by the most dissonant
> constituent interval.

That's "the idea" of what?

> That's what led to odd limits in the first
> place.

When we use RMS or MAD with odd limits, how are we following the idea
that the dissonance of "the chord" is determined by the most
dissonant constituent interval?

🔗Graham Breed <gbreed@gmail.com>

11/29/2005 7:58:42 PM

On 11/30/05, Paul Erlich <perlich@aya.yale.edu> wrote:

> You say that for a minimax it's how bad the intervals sound that's
> important. But if the pure 8:5 sounds "badder" than a whole host of
> different fifths, and so on, doesn't that imply that only the tuning
> of the most complex intervals is going to matter? I certainly don't
> see the use of minimax as forcing you into anything like that kind of
> scenario; hence your statement seems like groundless dogma to me
> right now. (But I'm sure it isn't . . . you just have to explain it
> better)

It means the unweighted minimax isn't perfect. But I'm only looking
at a simple set of options:

Tenney weighting vs no weighting
octave equivalent vs octave specific
minimax vs rms
primes vs interval sets

These are the simple things you can do to calculate the tuning damage.
I don't think the basic theory's solid enough to justify more
complicated methods. So as Tenney weighting goes the wrong way, we
use no weighting. There's got to be some other weighting which is
better, but I don't know what it is, and it's all going to depend on
context anyway.

> > Everything's hand-waving. What else do we have? The idea is that
> > the dissonance of the chord
>
> What's "the chord"?

Whatever chord you want to find the dissonance of.

> > is determined by the most dissonant
> > constituent interval.
>
> That's "the idea" of what?

The idea of odd limits being a measure of dissonance.

> > That's what led to odd limits in the first
> > place.
>
> When we use RMS or MAD with odd limits, how are we following the idea
> that the dissonance of "the chord" is determined by the most
> dissonant constituent interval?

We aren't for RMS. I don't know what MAD is. Maximum absolute
deviation? In that case, it assumes that there's a particular
mistuning beyond which an interval becomes unrecognizable, like
there's a particular complexity beyond which the ratio becomes
unrecognizable.

Graham

🔗Gene Ward Smith <gwsmith@svpal.org>

11/30/2005 11:31:31 AM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> See below. I think the problem is that the minimax Kees result is not
> unique. So I'm beginning to wonder if your statement that for some
> temperaments, a minimax Kees tuning cannot be a stretched or
> compressed TOP tuning is incorrect.

Usually it is unique, and that's how you find examples.

> But we have a convention for non-unique TOP tunings which is to state
> > the tuning that achieves the minimax over the subset of ratios whose
> tuning can vary without changing the overall minimax. Can we apply
> this sort of convention for minimax Kees as well?

That's certainly a plan.

🔗Paul Erlich <perlich@aya.yale.edu>

11/30/2005 3:25:35 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> On 11/30/05, Paul Erlich <perlich@a...> wrote:
>
> > You say that for a minimax it's how bad the intervals sound that's
> > important. But if the pure 8:5 sounds "badder" than a whole host
> > of different fifths, and so on, doesn't that imply that only the
> > tuning of the most complex intervals is going to matter? I certainly
> > don't see the use of minimax as forcing you into anything like that
> > kind of scenario; hence your statement seems like groundless dogma
> > to me right now. (But I'm sure it isn't . . . you just have to
> > explain it better)
>
> It means the unweighted minimax isn't perfect.

This seems to be a different issue, one that remains regardless of
what weighting you use.

> But I'm only looking
> at a simple set of options:
>
> Tenney weighting vs no weighting
> octave equivalent vs octave specific
> minimax vs rms
> primes vs interval sets
>
> These are the simple things you can do to calculate the tuning
> damage.
> I don't think the basic theory's solid enough to justify more
> complicated methods. So as Tenney weighting goes the wrong way,

The wrong way??

> we
> use no weighting. There's got to be some other weighting which is
> better, but I don't know what it is, and it's all going to depend on
> context anyway.
>
> > > Everything's hand-waving. What else do we have? The idea is
> > > that the dissonance of the chord
> >
> > What's "the chord"?
>
> Whatever chord you want to find the dissonance of.
>
> > > is determined by the most dissonant
> > > constituent interval.
> >
> > That's "the idea" of what?
>
> The idea of odd limits being a measure of dissonance.

It is? I don't see it that way at all. Why do you say this?

> > > That's what led to odd limits in the first
> > > place.
> >
> > When we use RMS or MAD with odd limits, how are we following the
> > idea
> > that the dissonance of "the chord" is determined by the most
> > dissonant constituent interval?
>
> We aren't for RMS.

Oh, so you're talking specifically about the use of minimax in an odd
limit here?

> I don't know what MAD is. Maximum absolute
> deviation?

Mean absolute deviation. What you said sounds like minimax.

> In that case,

In the minimax case then?

> it assumes that there's a particular
> mistuning beyond which an interval becomes unrecognizable,

It does? I think I've lost your reasoning. Can you clarify?

> like
> there's a particular complexity beyond which the ratio becomes
> unrecognizable.
>
>
> Graham
>

🔗Paul Erlich <perlich@aya.yale.edu>

11/30/2005 3:43:39 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
>
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
> wrote:
>
> > See below. I think the problem is that the minimax Kees result is
> > not unique. So I'm beginning to wonder if your statement that for
> > some temperaments, a minimax Kees tuning cannot be a stretched or
> > compressed TOP tuning is incorrect.
>
> Usually it is unique, and that's how you find examples.

Are there any 7-limit examples?

> > But we have a convention for non-unique TOP tunings which is to
> > state the tuning that achieves the minimax over the subset of
> > ratios whose tuning can vary without changing the overall minimax.
> > Can we apply this sort of convention for minimax Kees as well?
>
> That's certainly a plan.

I think that would be useful for the list of systems I gave in this
thread (same as the list in my paper). Is it something you could try
calculating?