In-Reply-To: <a2g36a+7sl3@eGroups.com>

Gene:

> > I don't know. What I'd like to know is what a version of your

> > heuristic would be which applies to sets of commas--is this what you

> > are aiming at?

Paul:

> Eventually. It would probably involve some definition of the dot

> product of the commas in a tri-taxicab metric. But I like to start

> simple, and perhaps if we can formulate the right error measure in 5-

> limit, we can generalize it and use it for 7-limit even without

> knowing how one would apply the heuristic.

My experience of generating and sorting linear temperaments from the 5- to

the 21-limit is that the "right" error metric for one can be wildly

inappropriate for others.

One assumption behind the heuristic is that the error is proportional to

the size/complexity of the unison vector. If you measure complexity as

the number of consonant intervals, that's the best case of tempering it

out. Higher-limit linear temperaments tend not to be best cases, but the

proportionality might still work. At least if you can magically produce

orthogonal unison vectors. I'll have to look at lattice theory more.

The other assumption is that the octave-specific Tenney metric

approximates the number of consonant intervals a comma's composed of. I'm

not sure how closely this holds. The Tenney metric is a good match for

the first-order odd limit of small intervals. But extended limits can

behave differently.

For example, 2401:2400 works well in the 7-limit because the numerator

only involves 7, so it has a complexity of 4 despite being fairly complex

and superparticular. Whereas a comma involving 11**4, or 14641, still

only has a complexity of 4 in the 11-limit. So if you could get a

superparticular like that, it'd lead to a much smaller error.

It should follow that 5**4:(13*3*2**4) or 625:624 will be particularly

inefficient between the 13- and 23-limits relative to what the heuristic

would predict. It still has a complexity of 4, whereas 13**3 is already

2197 and 23**2 is 529. Yes, 12168:12167 is a 23-limit comma with a

complexity of 3. (8*9*13*13):(23**3).
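To make the counting in these examples concrete, here's a small Python sketch. The `complexity` function below is only my reading of the convention used here (the largest odd-prime exponent in the comma's factorisation, i.e. how many steps of that prime the comma requires), not an agreed definition:

```python
# Sketch of the complexity counts used above: take the largest absolute
# exponent of any odd prime in the comma's factorisation. This matches
# the examples in this thread but is only an illustrative convention.
def complexity(factors):
    """factors: dict of odd primes -> exponents (powers of 2 ignored)."""
    return max(abs(e) for e in factors.values())

# 2401:2400 = 7**4 : 2**5 * 3 * 5**2       -> four steps of 7
print(complexity({7: 4, 3: -1, 5: -2}))     # 4
# 625:624 = 5**4 : 2**4 * 3 * 13            -> four steps of 5
print(complexity({5: 4, 3: -1, 13: -1}))    # 4
# 12168:12167 = 2**3 * 3**2 * 13**2 : 23**3 -> three steps of 23
print(complexity({3: 2, 13: 2, 23: -3}))    # 3
```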

I'd prefer to see a heuristic for how complex a temperament produced for a

set of unison vectors or pair of ETs will be. Or one for how small the

error will be when it's generated by ETs.

Graham

--- In tuning-math@y..., graham@m... wrote:

> My experience of generating and sorting linear temperaments from the 5- to

> the 21-limit is that the "right" error metric for one can be wildly

> inappropriate for others.

Can you give an example?

> One assumption behind the heuristic is that the error is proportional to

> the size/complexity of the unison vector.

You can call it an assumption, if you wish -- I've verified its

approximate correctness for all 10 (wildly different) temperaments

I've tried, against Gene's rms measures.

> If you measure complexity as

> the number of consonant intervals, that's the best case of tempering it

> out.

What does that mean?

> Higher-limit linear temperaments tend not to be best cases, but the

> proportionality might still work. At least if you can magically produce

> orthogonal unison vectors. I'll have to look at lattice theory more.

Well, so far I've only considered the case where one unison vector is

tempered out.

> The other assumption is that the octave-specific Tenney metric

> approximates the number of consonant intervals a comma's composed of. I'm

> not sure how closely this holds.

This is based on the Kees van Prooijen lattice metric, and again its

good approximation was verified relative to Gene's rms measure.

> The Tenney metric is a good match for

> the first-order odd limit of small intervals. But extended limits can

> behave differently.

>

> For example, 2401:2400 works well in the 7-limit because the numerator

> only involves 7, so it has a complexity of 4 despite being fairly complex

> and superparticular.

This is only one possible complexity measure, not the one Gene's

currently using, which already showed a good match with the

heuristic. A better one awaits . . .

> Whereas a comma involving 11**4, or 14641, still

> only has a complexity of 4 in the 11-limit. So if you could get a

> superparticular like that, it'd lead to a much smaller error.

You're missing the lattice justification for the heuristic. No wonder

you're skeptical!

> It should follow that 5**4:(13*3*2**4) or 625:624 will be particularly

> inefficient between the 13- and 23-limits relative to what the heuristic

> would predict.

Why? Try a 2D system based on 5 and 13. The heuristics should work

fine, especially if you weight ratios of 13 as less important than

ratios of 5.

In-Reply-To: <a2jliq+9hrb@eGroups.com>

Me:

> > My experience of generating and sorting linear temperaments from the 5- to

> > the 21-limit is that the "right" error metric for one can be wildly

> > inappropriate for others.

Paul:

> Can you give an example?

The first run through of my temperament generator, when I was using

step-cents, gave absurdly complex and accurate 5-limit temperaments. Using

only consistent ETs works well enough up to the 15-limit, but beyond that

optimal temperaments are missed. At least with the current metrics.

Me:

> > One assumption behind the heuristic is that the error is proportional to

> > the size/complexity of the unison vector.

Paul:

> You can call it an assumption, if you wish -- I've verified its

> approximate correctness for all 10 (wildly different) temperaments

> I've tried, against Gene's rms measures.

How many dimensions?

Me:

> > If you measure complexity as

> > the number of consonant intervals, that's the best case of tempering it

> > out.

Paul:

> What does that mean?

It's the microtemperament formula. Last time I mentioned it, you pointed

me to one of your own messages. For the minimax temperament, tempering

out one unison vector, the error is the size of the comma divided by the

number of consonances making it up. When you're tempering out more than

one comma, the result will typically be worse than the best case for any

of the commas on their own. But it can never be better than for only one

comma.
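The rule as stated can be sketched in a few lines of Python. The function names here are illustrative, and the consonance count is taken as an input rather than computed (a minimal sketch, assuming the single-comma case described above):

```python
from math import log2

def cents(n, d):
    """Size of the ratio n:d in cents."""
    return 1200 * log2(n / d)

def minimax_error(n, d, n_consonances):
    """The rule described above for one tempered-out comma: the minimax
    error is the comma's size divided by the number of consonances
    making it up."""
    return cents(n, d) / n_consonances

# 2401:2400 tempered out across four 7-limit consonances:
print(round(minimax_error(2401, 2400, 4), 2))  # 0.18 cents
```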

Paul:

> Well, so far I've only considered the case where one unison vector is

> tempered out.

I'm only questioning size/complexity as a heuristic when you have more

than one unison vector. It might still work then.

> > The other assumption is that the octave-specific Tenney metric

> > approximates the number of consonant intervals a comma's composed of. I'm

> > not sure how closely this holds.

>

> This is based on the Kees van Prooijen lattice metric, and again its

> good approximation was verified relative to Gene's rms measure.

From the exposition I have, 'The "length" of a unison vector

... in the Tenney lattice with taxicab metric ... is proportional to ...

the "number" ... of consonant intervals making

up that unison vector.' That's what I'm disagreeing with. Why does the

KvP metric behave differently?

Me:

> > For example, 2401:2400 works well in the 7-limit because the numerator

> > only involves 7, so it has a complexity of 4 despite being fairly complex

> > and superparticular.

Paul:

> This is only one possible complexity measure, not the one Gene's

> currently using, which already showed a good match with the

> heuristic. A better one awaits . . .

It's a complexity measure based on

1) The odd limit

2) Minimax tuning

I thought we agreed that (1) was as good as any simple, all-purpose,

numerical dissonance metric. Also that it gave the same results as the

octave-specific Tenney metric (or product limit) for small intervals. I'm

not prepared to abandon this solely in order to make your heuristic work.

I use (2) because it's simple to find the rule, at least for only one

commatic unison vector. I expect RMS optimisation would give similar

results provided all consonances are treated equally. If this isn't the

case, I want a good reason why.

Me:

> > Whereas a comma involving 11**4, or 14641, still

> > only has a complexity of 4 in the 11-limit. So if you could get a

> > superparticular like that, it'd lead to a much smaller error.

Paul:

> You're missing the lattice justification for the heuristic. No wonder

> you're skeptical!

I'm working with the precise theory I already have. And you're asking me

to give it up for a heuristic?

Me:

> > It should follow that 5**4:(13*3*2**4) or 625:624 will be particularly

> > inefficient between the 13- and 23-limits relative to what the heuristic

> > would predict.

Paul:

> Why? Try a 2D system based on 5 and 13. The heuristics should work

> fine, especially if you weight ratios of 13 as less important than

> ratios of 5.

Yes, it'll work fine as long as you fudge the metric to get it to work.

Making ratios of 13 "less important" means allowing them to be more out of

tune. My experience is that the more complex intervals get, the more

accurately they have to be tuned to sound right.

A planar temperament optimising the minimax would give exactly

1200*log2(625/624)/4 = 0.7 cents for the worst interval. In fact, I think

that's with 13:8 just so 5:4 and 13:10 are both out by 0.7 cents. Other

methods are hardly likely to make anything badly out of tune, but that's

fourth-order, superparticular, planar temperaments for you.
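The quoted figure checks out; here is the arithmetic as a one-liner, under the assumption stated above that 625:624 is spread over four consonances:

```python
from math import log2

# Worst-case error when 625:624 is distributed over four consonances,
# as claimed above: 1200*log2(625/624)/4.
error = 1200 * log2(625 / 624) / 4
print(round(error, 2))  # 0.69, i.e. the "0.7 cents" figure quoted
```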

How is one temperament supposed to work or not work according to a

heuristic that only states a proportionality? My rule gives exact

results, and it works. But only when one unison vector is being tempered

out. Actually, not always then. For example, 9*3 would be two consonances

in the 9-limit but should be weighted as 1.5. But the difference between

1.5 and 2 is less than that between 5 and 23.

Graham

--- In tuning-math@y..., graham@m... wrote:

> In-Reply-To: <a2jliq+9hrb@e...>

> Me:

> > > My experience of generating and sorting linear temperaments from the 5- to

> > > the 21-limit is that the "right" error metric for one can be wildly

> > > inappropriate for others.

>

> Paul:

> > Can you give an example?

>

> The first run through of my temperament generator, when I was using

> step-cents, gave absurdly complex and accurate 5-limit temperaments. Using

> only consistent ETs works well enough up to the 15-limit, but beyond that

> optimal temperaments are missed. At least with the current metrics.

What does any of this have to do with the validity of the heuristics?

You seem to be talking about goodness/badness, as well as the

generating-from-ETs-missed-some issue, neither of which have anything

to do with the validity of the heuristics. Of course, once you define

a goodness/badness measure, you should be able to use the heuristic

for step/complexity, combined with the heuristic for cents/error, to

approximate that goodness/badness measure.

> Me:

> > > One assumption behind the heuristic is that the error is proportional to

> > > the size/complexity of the unison vector.

>

> Paul:

> > You can call it an assumption, if you wish -- I've verified its

> > approximate correctness for all 10 (wildly different) temperaments

> > I've tried, against Gene's rms measures.

>

> How many dimensions?

These were all 5-limit linear temperaments.

>

> I'm only questioning size/complexity as a heuristic

Hmm . . . you may be misunderstanding something. Can you clarify what

you mean by this?

> when you have more

> than one unison vector.

More than one tempered out? Why don't we focus on the case of just

one tempered out first.

> > > The other assumption is that the octave-specific Tenney metric

> > > approximates the number of consonant intervals a comma's composed of. I'm

> > > not sure how closely this holds.

> >

> > This is based on the Kees van Prooijen lattice metric, and again its

> > good approximation was verified relative to Gene's rms measure.

>

> From the exposition I have, 'The "length" of a unison vector

> ... in the Tenney lattice with taxicab metric ... is proportional to ...

> the "number" ... of consonant intervals making

> up that unison vector.' That's what I'm disagreeing with.

Why are you disagreeing? Note that "number" is _weighted_ -- more

complex consonances are longer and count as "fewer" consonances.


>

> Me:

> > > For example, 2401:2400 works well in the 7-limit because the numerator

> > > only involves 7, so it has a complexity of 4 despite being fairly complex

> > > and superparticular.

>

> Paul:

> > This is only one possible complexity measure, not the one Gene's

> > currently using, which already showed a good match with the

> > heuristic. A better one awaits . . .

>

> It's a complexity measure based on

>

> 1) The odd limit

>

> 2) Minimax tuning

>

> I thought we agreed that (1) was as good as any simple, all-purpose,

> numerical dissonance metric. Also that it gave the same results as the

> octave-specific Tenney metric (or product limit) for small intervals. I'm

> not prepared to abandon this solely in order to make your heuristic work.

My heuristic says the complexity is proportional to log(d), where d

is either the numerator or denominator (since they're close). Since

either n or d is the odd limit, my heuristic is equivalent to (1). So

you wouldn't be abandoning anything.
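The equivalence claim above is easy to check numerically: for a superparticular comma n:(n-1), numerator and denominator are nearly equal, so half the octave-specific Tenney height log2(n*d) sits very close to log2(d). A quick check (the test commas are just familiar examples):

```python
from math import log2

# For superparticular commas, n and d are nearly equal, so half the
# Tenney height log2(n*d) is close to log2(d). The gap is exactly half
# the comma's own size in octaves, hence tiny.
for n, d in [(81, 80), (225, 224), (2401, 2400)]:
    gap = log2(n * d) / 2 - log2(d)
    assert abs(gap - log2(n / d) / 2) < 1e-9
    print(n, d, round(gap, 4))
```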

> I use (2) because it's simple to find the rule, at least for only one

> commatic unison vector. I expect RMS optimisation would give similar

> results provided all consonances are treated equally.

RMS optimization gave similar results to my heuristic -- so what's

the problem?

>

> Yes, it'll work fine as long as you fudge the metric to get it to work.

> Making ratios of 13 "less important" means allowing them to be more out of

> tune. My experience is that the more complex intervals get, the more

> accurately they have to be tuned to sound right.

Well, this is the age-old question. It depends what you mean

by "sounds right". We've spent so much time discussing this in the

past, how this could go either way . . . personally, I don't find

ratios of 13 to be "meaningful" as isolated dyads, and in the context

of big otonalities, you can notice mistuning in the most consonant

ratios more easily than mistuning in the 13 identity.

> How is one temperament supposed to work or not work according to a

> heuristic that only states a proportionality?

I calculated the constants of proportionality for both heuristics for

10 vastly different temperaments, using Gene's rms optimized results,

and the constants of proportionality for each heuristic were all

within a factor of 2 of one another. Did you miss that post?

--- In tuning-math@y..., graham@m... wrote:

> My experience is that the more complex intervals get, the more

> accurately they have to be tuned to sound right.

So perhaps you'd like to temper the octaves most of all?

In-Reply-To: <a2k66t+5ccs@eGroups.com>

paulerlich wrote:

> --- In tuning-math@y..., graham@m... wrote:

>

> > My experience is that the more complex intervals get, the more

> > accurately they have to be tuned to sound right.

>

> So perhaps you'd like to temper the octaves most of all?

I've mostly used octave-equivalent systems so far, so the opportunity

doesn't present itself. Making the octaves worse would also make the most

complex intervals worse, and some instruments don't allow you to do so

anyway. Besides, I treat octaves and fifths as special cases. But this

is certainly something to look at in the future.

Oh, and if we're getting into specifics, a system with 11:8, 9:7, 9:8 and

11:7 tends to leave 8 on a par with 7, 11 and 9. So 2 would only end up

with a third the error of 7 and 11.

Graham

In-Reply-To: <a2k64u+qmc3@eGroups.com>

paulerlich wrote:

> What does any of this have to do with the validity of the heuristics?

> You seem to be talking about goodness/badness, as well as the

> generating-from-ETs-missed-some issue, neither of which have anything

> to do with the validity of the heuristics. Of course, once you define

> a goodness/badness measure, you should be able to use the heuristic

> for step/complexity, combined with the heuristic for cents/error, to

> approximate that goodness/badness measure.

I'm saying that one-dimensional results don't usually generalise well to

more complex cases.

Me:

> > I'm only questioning size/complexity as a heuristic

Paul:

> Hmm . . . you may be misunderstanding something. Can you clarify what

> you mean by this?

Me:

> > when you have more

> > than one unison vector.

Paul:

> More than one tempered out? Why don't we focus on the case of just

> one tempered out first.

Yes, that's fine, we agree on that case. That's why I said I wasn't

questioning it.

Me:

> > From the exposition I have, 'The "length" of a unison vector

> > ... in the Tenney lattice with taxicab metric ... is proportional to ...

> > the "number" ... of consonant intervals making

> > up that unison vector.' That's what I'm disagreeing with.

Paul:

> Why are you disagreeing? Note that "number" is _weighted_ -- more

> complex consonances are longer and count as "fewer" consonances.

I was assuming that numbers were numbers and didn't carry weights. I

thought that was the difference between numbers and amounts.

> My heuristic says the complexity is proportional to log(d), where d

> is either the numerator or denominator (since they're close). Since

> either n or d is the odd limit, my heuristic is equivalent to (1). So

> you wouldn't be abandoning anything.

The examples I gave before show that the Tenney length of a unison vector

isn't a good predictor of the smallest number of intervals within a given

odd limit that make it up. The numerator and denominator being close

simply means that the Tenney metric will be a predictor of the odd limit

*for that interval*. It works differently when you look at combinations

of intervals.
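The disagreement can be put in numbers (a sketch; the consonance counts are the ones claimed earlier in the thread): the two commas' Tenney lengths order one way while their consonance counts order the other.

```python
from math import log2

def tenney_height(n, d):
    """Octave-specific Tenney 'length' of the ratio n:d."""
    return log2(n * d)

# 625:624 takes four 13-limit consonances (four steps of 5), while
# 12168:12167 takes only three 23-limit consonances (three steps of 23),
# even though its Tenney length is considerably larger.
print(round(tenney_height(625, 624), 1))       # 18.6
print(round(tenney_height(12168, 12167), 1))   # 27.1
```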

> RMS optimization gave similar results to my heuristic -- so what's

> the problem?

I assume there's no problem with RMS as opposed to minimax. If your

results agree, there's no problem.

> Well, this is the age-old question. It depends what you mean

> by "sounds right". We've spent so much time discussing this in the

> past, how this could go either way . . . personally, I don't find

> ratios of 13 to be "meaningful" as isolated dyads, and in the context

> of big otonalities, you can notice mistuning in the most consonant

> ratios more easily than mistuning in the 13 identity.

Yes, it's not something I would normally pursue. I work on the

simplest-case metric that all intervals deemed consonant are treated

equally in tuning. I can then do the fine tuning by ear. But if you're

suggesting something that will only work with the opposite weighting to

what I now prefer, I'll disagree with it.

> > How is one temperament supposed to work or not work according to a

> > heuristic that only states a proportionality?

>

> I calculated the constants of proportionality for both heuristics for

> 10 vastly different temperaments, using Gene's rms optimized results,

> and the constants of proportionality for each heuristic were all

> within a factor of 2 of one another. Did you miss that post?

I obviously didn't pay much attention to it. Do you have a rough figure

for "Erlich's constant" then? A factor of 2 still sounds a bit wayward.

Graham

--- In tuning-math@y..., graham@m... wrote:

> In-Reply-To: <a2k66t+5ccs@e...>

> paulerlich wrote:

>

> > --- In tuning-math@y..., graham@m... wrote:

> >

> > > My experience is that the more complex intervals get, the more

> > > accurately they have to be tuned to sound right.

> >

> > So perhaps you'd like to temper the octaves most of all?

>

> I've mostly used octave-equivalent systems so far, so the opportunity

> doesn't present itself. Making the octaves worse would also make the most

> complex intervals worse,

Huh? And if this is true, why wouldn't this be true of, say, ratios

of 5?

> and some instruments don't allow you to do so

> anyway.

?

--- In tuning-math@y..., graham@m... wrote:

> Me:

> > > From the exposition I have, 'The "length" of a unison vector

> > > ... in the Tenney lattice with taxicab metric ... is proportional to ...

> > > the "number" ... of consonant intervals making

> > > up that unison vector.' That's what I'm disagreeing with.

>

> Paul:

> > Why are you disagreeing? Note that "number" is _weighted_ -- more

> > complex consonances are longer and count as "fewer" consonances.

>

> I was assuming that numbers were numbers and didn't carry weights.

Note that I put "number" in quotes.

> > My heuristic says the complexity is proportional to log(d), where d

> > is either the numerator or denominator (since they're close). Since

> > either n or d is the odd limit, my heuristic is equivalent to (1). So

> > you wouldn't be abandoning anything.

>

> The examples I gave before show that the Tenney length of a unison vector

> isn't a good predictor of the smallest number of intervals within a given

> odd limit that make it up.

But I've always argued that the complexity measure should be

weighted. It's easier to hear progressions by 3/2 than progressions

by 5/4 . . .

> The numerator and denominator being close

> simply means that the Tenney metric will be a predictor of the odd limit

> *for that interval*. It works differently when you look at combinations

> of intervals.

Again, just focusing on linear temperaments from a "two-dimensional"

just lattice for now . . .

> Yes, it's not something I would normally pursue. I work on the

> simplest-case metric that all intervals deemed consonant are treated

> equally in tuning. I can then do the fine tuning by ear. But if you're

> suggesting something that will only work with the opposite weighting to

> what I now prefer, I'll disagree with it.

Maybe we each have to write our own paper, then. I'm hoping someone

will help me with the math for mine . . .

> > > How is one temperament supposed to work or not work according to a

> > > heuristic that only states a proportionality?

> >

> > I calculated the constants of proportionality for both heuristics for

> > 10 vastly different temperaments, using Gene's rms optimized results,

> > and the constants of proportionality for each heuristic were all

> > within a factor of 2 of one another. Did you miss that post?

>

> I obviously didn't pay much attention to it.

/tuning-math/message/2491

"Expand Messages" as usual.

> A factor of 2 still sounds a bit wayward.

Not bad at all considering the wide range of complexities of these

temperaments . . . but I'm still hunting for the "natural" set of

definitions of "error" and "complexity" that will make the heuristic

work real real good. Once you've found a good temperament, changing

its error function is not going to change its goodness very much. So

why not look for a mathematically pretty way to find good

temperaments? That's something I'm interested in, at any rate.

In-Reply-To: <a2k8io+j4al@eGroups.com>

Me:

> > I've mostly used octave-equivalent systems so far, so the opportunity

> > doesn't present itself. Making the octaves worse would also make the most

> > complex intervals worse,

Paul:

> Huh? And if this is true, why wouldn't this be true of, say, ratios

> of 5?

It would be true in a 25-limit system. Also, where 3 is involved, in a

15-limit system, although I'm not sure how the maths work out for that.

So far, I've only tuned up 11-limit systems.

> > and some instruments don't allow you to do so

> > anyway.

>

> ?

I have a Korg X5D (currently being repaired) which only supports octave

based tuning tables. If I want to use it, everything else has to fall in

line.

Graham

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> Maybe we each have to write our own paper, then. I'm hoping someone

> will help me with the math for mine . . .

We could make a deal--we each help the other guy with what they need help on the most. :)

> So why not look for a mathematically pretty way to find good

> temperaments? That's something I'm interested in, at any rate.

I'd like to see a quick and easy estimate which inputs a wedgie and outputs a badness measure; a second pass could then refine that.

Can you guys please drop the "(Was: Hi Dave K.)" from the title of

this thread. I have so little time to spend on tuning at the moment

that I'm only reading posts that have my name in them. (in the body

text is fine). But the search finds it whether in the body or the

title and this thread is driving me crazy.

Thanks.

By the way, I didn't have a clue what "taxicab error" was. I'm glad if

Gene found a way to give it meaning. :-)

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> I'm glad if

> Gene found a way to give it meaning.

Haven't understood it, as of yet . . .

:-)