I now understand that Gene's logarithmically flat distribution is a very important starting point.

But even Gene sees the need to apply cutoffs to it for (effectively if not directly) very large errors and very high numbers of steps or generators.

I agree that none of the alternatives I have suggested so far have been very good. But I spent some more time on it and I thought "How could I make a single badness function with a parameter that lets me vary it continuously between Gene's log-flat-dist measure and something more like what I've been groping towards?" Well, I found it, and it gives the effect of a gentle rolloff, as opposed to a sharp cutoff, for those properties mentioned above.

steps^(4/3) * exp((cents/k)^r)

The parameter r gives the rate of rolloff. From my spreadsheet experiments I'd guess that putting r at about 0.5 is going to make the most people happy, but maybe it could go a bit lower to make Paul and Gene happy. Note that as r gets closer to 0 this approaches:

steps^(4/3) * exp(ln(cents/k))

= steps^(4/3) * cents/k

which gives the log-flat distribution for the 7-limit (if I've remembered correctly). The parameter k becomes a mere scale factor here and so is irrelevant to the ranking.

In the more general rolled-off form, the parameter k indirectly (and in conjunction with r) determines where the middle of the band is for rolloff. My favoured way of calibrating this (for 7-limit ETs) is to make 9-tET come out with only slightly more badness than 72-tET, thus agreeing with the log-flat-dist in this regard. When r is 0.5 that makes k about 2.1 cents. The formula can probably be massaged into a form where k doesn't have to be readjusted whenever you change r.
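To make this concrete, here is a minimal Python sketch of the rolled-off measure (the function and variable names are mine, and the RMS errors are simply the rounded figures from the table below rather than recomputed):

```python
import math

def badness(steps, cents, k=2.1, r=0.5):
    """Rolled-off badness: steps^(4/3) * exp((cents/k)^r)."""
    return steps ** (4 / 3) * math.exp((cents / k) ** r)

# (steps, 7-limit RMS error in cents) pairs, rounded as in the table below.
ets = [(31, 4.0), (22, 8.6), (27, 7.9), (41, 4.2), (19, 12.7),
       (72, 1.8), (9, 29.1), (99, 0.9)]

ranked = [s for s, c in sorted(ets, key=lambda sc: badness(*sc))]
print(ranked)  # → [31, 22, 27, 41, 19, 72, 9, 99]
```

With k = 2.1 and r = 0.5 this reproduces the relative order of the table below; the badness figures themselves come out a few units different because the table was computed from unrounded errors. Note that 9-tET and 72-tET land close together, which is the calibration criterion described above.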

[To see the columns line up in the tables below, when reading from Yahoo's web interface, you can choose "Message Index" then "Expand Messages".]

Here's a top-20 7-limit ET ranking according to

steps^(4/3) * exp(sqrt(cents/2.1))

Steps  RMS error  Badness
        (cents)
-------------------------
  31      4.0        390
  22      8.6        466
  27      7.9        562
  41      4.2        583
  19     12.7        595
  26     10.3        709
  46      4.5        714
  15     18.5        720
  53      3.5        722
  72      1.8        747
   9     29.1        776
  10     27.4        799
  68      2.4        815
  37      7.6        824
  12     24.5        836
  50      5.0        858
  99      0.9        876
  36      8.6        896
  58      4.0        899
  62      4.0        982

Here for comparison is the log-flat

steps^(4/3) * cents

with a cutoff above 612-tET.

Steps  RMS error  Badness
        (cents)
-------------------------
 171      0.3        254
 270      0.2        384
  31      4.0        393
  99      0.9        405
   4     64.6        410
   5     52.2        446
 441      0.1        457
   1    478.8        479
  72      1.8        526
  22      8.6        530
   9     29.1        545
  10     27.4        591
  41      4.2        596
  27      7.9        638
 342      0.3        641
  19     12.7        645
  12     24.5        673
   2    268.0        675
  68      2.4        676
  15     18.5        684

Regards,

-- Dave Keenan

Brisbane, Australia

http://dkeenan.com

--- In tuning-math@y..., David C Keenan <d.keenan@u...> wrote:

> steps^(4/3) * exp((cents/k)^r)

This seems to be a big improvement, though the correct power is

steps^2 for the 7 or 9 limit, and steps^(5/3) for the 11-limit. It

still seems to me that a rolloff is just as arbitrary as a sharp

cutoff, and disguises the fact that this is what it is, so tastes will

differ about whether it is a good idea.

In-Reply-To: <9v741u+62fl@eGroups.com>

Gene wrote:

> This seems to be a big improvement, though the correct power is

> steps^2 for the 7 or 9 limit, and steps^(5/3) for the 11-limit. It

> still seems to me that a rolloff is just as arbitrary as a sharp

> cutoff, and disguises the fact that this is what it is, so tastes will

> differ about whether it is a good idea.

A sharp cutoff won't be what most people want. For example, in looking

for an 11-limit temperament I might have thought, well, I don't want more

than 24 notes in the scale because then it can't be mapped to two keyboard

octaves. So, if I want three identical, transposable hexads in that scale

I need to set a complexity cutoff at 21. But I'd still be very pleased if

the program throws up a red hot temperament with a complexity of 22,

because it was only an arbitrary criterion I was applying.

I suggest the flat badness be calculated first, and then shelving

functions applied for worst error and complexity. The advantage of a

sharp cutoff would be that you could store the temperaments in a database,

to save repetitive calculations, and get the list from a single SQL

statement, like

SELECT * FROM Scales WHERE complexity<25 AND minimax<10.0 ORDER BY

goodness

but you'd have to go to all the trouble of setting up a database.

Graham
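A shelving function of the kind Graham describes could look something like the following sketch (the shape and parameter values are mine, purely illustrative): a multiplier near 1 below the chosen limit, climbing steeply above it, applied after the flat badness.

```python
def shelf(x, x0, power=8):
    """~1 for x well below the shelf point x0, grows rapidly above it."""
    return 1 + (x / x0) ** power

def shelved_badness(flat_badness, complexity, worst_error,
                    max_complexity=24, max_error=10.0):
    # Flat badness first, then shelves for complexity and worst error,
    # so a temperament just past a limit is penalised, not discarded.
    return (flat_badness
            * shelf(complexity, max_complexity)
            * shelf(worst_error, max_error))

print(round(shelved_badness(100.0, 12, 3.0)))  # → 100, well inside both shelves
print(round(shelved_badness(100.0, 30, 3.0)))  # → 696, past the complexity shelf
```

A "red hot" temperament just past the complexity limit still shows up with a finite, if inflated, badness, rather than being filtered out by an infinite cutoff.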

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., David C Keenan <d.keenan@u...> wrote:

>

> > steps^(4/3) * exp((cents/k)^r)

>

> This seems to be a big improvement, though the correct power is

> steps^2 for the 7 or 9 limit, and steps^(5/3) for the 11-limit.

Ok. Sorry. But I find I don't really understand _log_ flat. I only

understand flat. When I plot steps*cents against steps I can _see_

that this is flat. I expect steps*cents to be flat irrespective of the

odd-limit. If I change the steps axis to logarithmic it's still gonna

look flat, and anything with a higher power of steps is gonna have a

general upward trend to the right.

So please tell me again how I can tell if something is log flat, in

such a way that I can check it empirically in my spreadsheet. And

please tell me why a log-flat distribution should be of more interest

than a simply flat one.

>I now understand that Gene's logarithmically flat distribution

>is a very important starting point.

Wow- how did that happen? One heckuva switch from the last

post I can find in this thread. Not that I understand what

any of this is about. Badness??

-Carl

--- In tuning-math@y..., graham@m... wrote:

> In-Reply-To: <9v741u+62fl@e...>

> Gene wrote:

>

> > This seems to be a big improvement, though the correct power is

> > steps^2 for the 7 or 9 limit, and steps^(5/3) for the 11-limit. It

> > still seems to me that a rolloff is just as arbitrary as a sharp

> > cutoff, and disguises the fact that this is what it is, so tastes will

> > differ about whether it is a good idea.

>

> A sharp cutoff won't be what most people want. For example, in looking

> for an 11-limit temperament I might have thought, well, I don't want more

> than 24 notes in the scale because then it can't be mapped to two keyboard

> octaves. So, if I want three identical, transposable hexads in that scale

> I need to set a complexity cutoff at 21. But I'd still be very pleased if

> the program throws up a red hot temperament with a complexity of 22,

> because it was only an arbitrary criterion I was applying.

That's why I suggested that we place our sharp cutoffs where we find

some big gaps -- typically right after the "capstone" temperaments.

> I suggest the flat badness be calculated first, and then shelving

> functions applied for worst error and complexity. The advantage of a

> sharp cutoff would be that you could store the temperaments in a database,

> to save repetitive calculations, and get the list from a single SQL

> statement, like

>

> SELECT * FROM Scales WHERE complexity<25 AND minimax<10.0 ORDER BY

> goodness

>

> but you'd have to go to all the trouble of setting up a database.

>

> Graham

Well, database or no, I still like the idea of using a flat badness

measure, since it doesn't automatically have to be modified just

because we decide to look outside our original range.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> --- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> > --- In tuning-math@y..., David C Keenan <d.keenan@u...> wrote:

> >

> > > steps^(4/3) * exp((cents/k)^r)

> >

> > This seems to be a big improvement, though the correct power is

> > steps^2 for the 7 or 9 limit, and steps^(5/3) for the 11-limit.

>

> Ok. Sorry. But I find I don't really understand _log_ flat. I only

> understand flat. When I plot steps*cents against steps I can _see_

> that this is flat. I expect steps*cents to be flat irrespective of the

> odd-limit. If I change the steps axis to logarithmic it's still gonna

> look flat, and anything with a higher power of steps is gonna have a

> general upward trend to the right.

>

> So please tell me again how I can tell if something is log flat, in

> such a way that I can check it empirically in my spreadsheet. And

> please tell me why a log-flat distribution should be of more interest

> than a simply flat one.

My (incomplete) understanding is that flatness is flatness. It's what

you achieve when you hit the critical exponent. The "logarithmic"

character that we see is simply a by-product of the criticality.

I look forward to a fuller and more accurate reply from Gene.

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> Badness??

The "badness" of a linear temperament is a function of two

components -- how many generators it takes to get the consonant

intervals, and how large the deviations from JI are in the consonant

intervals. Total "badness" is therefore some function of these two

components.

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> >I now understand that Gene's logarithmically flat distribution

> >is a very important starting point.

>

> Wow- how did that happen? One heckuva switch from the last

> post I can find in this thread.

Hey. I had a coupla days to cool off. :-) I'm only saying it's a

starting point and I probably should have written "I now understand

that Gene's flat distribution is a very important starting point",

since I now realise I don't really understand the

"logarithmically-flat" business.

While I'm waiting for clarification on that, I should point out that

once we go to the rolled-off version, the power that "steps" is raised

to is not an independent parameter (it can be subsumed in k), so it

doesn't really matter where we start from.

steps^p * exp((cents/k)^r)

= ( steps * (exp((cents/k)^r))^(1/p) )^p

= ( steps * exp((cents/k)^r /p) )^p

= ( steps * exp((cents/(k * p^(1/r)))^r) )^p

Now raising badness to a positive power doesn't affect the ranking so

we can just use

steps * exp((cents/(k * p^(1/r)))^r)

and we can simply treat k * p^(1/r) as a new version of k.

So we have

steps * exp((cents/k)^r)

so my old k of 2.1 cents becomes a new one of 3.7 cents and my

proposed badness measure becomes

steps * exp(sqrt(cents/3.7))
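A quick numerical check of the reduction above (using the exact k' = 2.1 * (4/3)^2, of which 3.7 is the rounded value): since the one-parameter form is just the original raised to the power 3/4, the two must rank any set of ETs identically.

```python
import math

def old_badness(steps, cents):
    return steps ** (4 / 3) * math.exp(math.sqrt(cents / 2.1))

def new_badness(steps, cents):
    k_new = 2.1 * (4 / 3) ** 2   # = 3.733..., quoted as 3.7 above
    return steps * math.exp(math.sqrt(cents / k_new))

ets = [(31, 4.0), (22, 8.6), (27, 7.9), (41, 4.2), (19, 12.7),
       (72, 1.8), (9, 29.1), (99, 0.9), (12, 24.5), (53, 3.5)]

rank_old = sorted(ets, key=lambda sc: old_badness(*sc))
rank_new = sorted(ets, key=lambda sc: new_badness(*sc))
print(rank_old == rank_new)  # → True
```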

> Not that I understand what

> any of this is about. Badness??

As has been done many times before, we are looking for a single figure

that combines the error in cents with the number of notes in the

tuning to give a single figure-of-demerit with which to rank tunings

for the purpose of deciding what to leave out of a published list or

catalog for which limited space is available. One simply lowers the

maximum badness bar until the right number of tunings get under it.

It is ultimately aimed at automatically generated linear temperaments.

But we are using 7-limit ETs as a trial run since we have much more

collective experience of their subjective badness to draw on.

So "steps" is the number of divisions in the octave and "cents" is the

7-limit rms error.

I understand that Paul and Gene favour a badness metric for these that

looks like this

steps^2 * cents * if(min<=steps<=max, 1, infinity)

I think they have agreed to set min = 1 and max will correspond to

some locally-good ET, but I don't know how they will decide exactly

which one. This sharp cutoff in number of steps seems entirely

arbitrary to me and (as Graham pointed out) doesn't correspond to the

human experience of these things. I would rather use a gentle rolloff

that at least makes some attempt to represent the collective

subjective experience of people on the tuning lists.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> But we are using 7-limit ETs as a trial run since we have much more

> collective experience of their subjective badness to draw on.

>

> So "steps" is the number of divisions in the octave and "cents" is

the

> 7-limit rms error.

>

> I understand that Paul and Gene favour a badness metric for these

that

> looks like this

>

> steps^2 * cents * if(min<=steps<=max, 1, infinity)

The exponent would be 4/3, not 2, for ETs.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

>

> > But we are using 7-limit ETs as a trial run since we have much

more

> > collective experience of their subjective badness to draw on.

> >

> > So "steps" is the number of divisions in the octave and "cents" is

> the

> > 7-limit rms error.

> >

> > I understand that Paul and Gene favour a badness metric for these

> that

> > looks like this

> >

> > steps^2 * cents * if(min<=steps<=max, 1, infinity)

>

> The exponent would be 4/3, not 2, for ETs.

Hey Paul, that's what I had originally but see what Gene wrote in

/tuning-math/message/1833

But as far as I can tell, the only flat one is steps * cents. I'll

post my spreadsheet when I get it cleaned up. Or you can plot them for

yourself.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > The exponent would be 4/3, not 2, for ETs.

>

> Hey Paul, that's what I had originally but see what Gene wrote in

> /tuning-math/message/1833

He was talking about linear temperaments there, not ETs (right,

Gene?).

> But as far as I can tell, the only flat one is steps * cents.

That's "flat" for all ETs overall (though the wiggles aren't), but

what we really care about is whether the goodness/badness values for

the "very best" within each range show a flat pattern, or if their

values go off to infinity or zero as "steps" increases.

2nd attempt at replying . . .

> > The exponent would be 4/3, not 2, for ETs.

>

> Hey Paul, that's what I had originally but see what Gene wrote in

> /tuning-math/message/1833

I think Gene is referring to linear temperaments, not ETs, there.

> But as far as I can tell, the only flat one is steps * cents.

That's "flat" (but the wiggles aren't) if you look at each and every

ET. But if you look at only the best ones in each range, or the best

ones smaller than all better ones, or anything like that, you'll see

that the "goodness" keeps increasing without bound. Gene was

referring to the kind of "flatness" where it doesn't do that, nor

does it drop toward zero after a certain point.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> > But as far as I can tell, the only flat one is steps * cents.

>

> That's "flat" for all ETs overall (though the wiggles aren't), but

> what we really care about is whether the goodness/badness values for

> the "very best" within each range show a flat pattern, or if their

> values go off to infinity or zero as "steps" increases.

Well the size of wiggles and the best in each range look pretty damn

flat to me for steps * cents (and not for steps^(4/3)*cents or

steps^2*cents). Take a look for yourself.

http://dkeenan.com/Music/7LimitETBadness.xls.zip

155 KB

It comes set for steps*cents, so take a look at the "cut-off badness"

chart, then change the yellow cell E6 to "=4/3" or "2" and look at the

"cut-off badness" chart again.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> > > But as far as I can tell, the only flat one is steps * cents.

> >

> > That's "flat" for all ETs overall (though the wiggles aren't),

but

> > what we really care about is whether the goodness/badness values

for

> > the "very best" within each range show a flat pattern, or if

their

> > values go off to infinity or zero as "steps" increases.

>

> Well the size of wiggles and the best in each range look pretty

damn

> flat to me for steps * cents (and not for steps^(4/3)*cents or

> steps^2*cents). Take a look for yourself.

>

> http://dkeenan.com/Music/7LimitETBadness.xls.zip

> 155 KB

Dave, you have to plot "goodness", not "badness".

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> > The exponent would be 4/3, not 2, for ETs.

>

> Hey Paul, that's what I had originally but see what Gene wrote in

> /tuning-math/message/1833

I seem to have confused the issue, since I thought it was about

temperaments; it would be 4/3=1+1/3 for the 7-limit ets, and

2=(1+1/3)/(1-1/3) for 7-limit temperaments, but 5/4 = 1+1/4 for

11-limit ets, and 5/3 = (1+1/4)/(1-1/4) for 11-limit temperaments.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> My (incomplete) understanding is that flatness is flatness. It's

what

> you acheive when you hit the critical exponent. The "logarithmic"

> character that we see is simply a by-product of the criticality.

>

> I look forward to a fuller and more accurate reply from Gene.

When you measure the size of an et n by log(n), and are at the

critical exponent, the ets less than a certain fixed badness are

evenly distributed on average; if you plotted numbers of ets less

than the limit up to n versus log(n), it should be a rough line. If

you go over the critical exponent, you should get a finite list. If

you go under, it is weighted in favor of large ets, in terms of the

log of the size.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > My (incomplete) understanding is that flatness is flatness. It's

> what

> > you acheive when you hit the critical exponent. The "logarithmic"

> > character that we see is simply a by-product of the criticality.

> >

> > I look forward to a fuller and more accurate reply from Gene.

>

> When you measure the size of an et n by log(n), and are at the

> critical exponent, the ets less than a certain fixed badness are

> evenly distributed on average;

This is only true if you choose a very low value for your "certain

fixed badness", right?

> if you plotted numbers of ets less

> than the limit up to n versus log(n), it should be a rough line. If

> you go over the critical exponent, you should get a finite list. If

> you go under, it is weighted in favor of large ets, in terms of the

> log of the size.

What if you used n instead of log(n)? Would there still be this same

critical function? Or could a function with a different form be the

critical one?

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> Dave, you have to plot "goodness", not "badness".

Paul, I assume goodness = 1/badness? How could any reasonable

transformation from badness to goodness change whether it looks flat

or not?

I've added goodness plots below the badness plots in

http://dkeenan.com/Music/7LimitETBadness.xls.zip

I agree that goodness lets you see the trends in the best more easily.

But with the limited sample we have, up to 612-tET, it looks like the

goodness of the best in any range is already falling off with

increasing number of steps, even with steps*cents. Going to

steps^(4/3)*cents just makes it fall off faster.

So I still think steps*cents is the flat one.

Yahoo's advertising has sure taken a quantum leap in

obtrusiveness, if not badness! Yikes.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > Dave, you have to plot "goodness", not "badness".

>

> Paul, I assume goodness = 1/badness? How could any reasonable

> transformation from badness to goodness change whether it looks

flat

> or not?

You'll be looking at the opposite extremes of the graph.

> I've added goodness plots below the badness plots in

> http://dkeenan.com/Music/7LimitETBadness.xls.zip

>

> I agree that goodness lets you see the trends in the best more

easily.

> But with the limited sample we have, up to 612-tET, it looks like

the

> goodness of the best in any range is already falling off with

> increasing number of steps, even with steps*cents. Going to

> steps^(4/3)*cents just makes it fall off faster.

Not really. At 612, you can't really see the difference yet. Go much

further and you'll see it.

I wrote,

> Not really. At 612, you can't really see the difference yet. Go

much

> further and you'll see it.

Well I extended the graph out to 32768, and 4/3 starts to make more

sense as an exponent.

But I noticed something else -- something totally unexpected.

Rather than looking like random "noise", the pattern of "best local

ETs" seems to have a definite "wave" to it, with a frequency of about

1680 -- that is, the "wave" repeats itself about 19 1/2 times within

the first 32768 ETs, seemingly with quite a bit of regularity.

Are my eyes decieving me here, or is something going on? Gene?

I'll try Matlab next . . .

I wrote,

> Rather than looking like random "noise", the pattern of "best local

> ETs" seems to have a definite "wave" to it, with a frequency of

about

> 1680 -- that is, the "wave" repeats itself about 19 1/2 times

within

> the first 32768 ETs, seemingly with quite a bit of regularity.

>

> Are my eyes decieving me here, or is something going on? Gene?

>

> I'll try Matlab next . . .

Take a look at the two pictures in

(I didn't enforce consistency, but we're only focusing on

the "goodest" ones, which are consistent anyway).

In both of them, you can spot the same periodicity, occuring 60 times

with regular frequency among the first 100,000 ETs.

Thus we see a frequency of about 1670 in the wave, agreeing closely

with the previous estimate?

What the heck is going on here? Riemann zetafunction weirdness?

In-Reply-To: <9vaje4+gncf@eGroups.com>

paulerlich wrote:

> Take a look at the two pictures in

>

> /tuning-math/files/Paul/

>

> (I didn't enforce consistency, but we're only focusing on

> the "goodest" ones, which are consistent anyway).

>

> In both of them, you can spot the same periodicity, occuring 60 times

> with regular frequency among the first 100,000 ETs.

>

> Thus we see a frequency of about 1670 in the wave, agreeing closely

> with the previous estimate?

>

> What the heck is going on here? Riemann zetafunction weirdness?

I don't know either, but I'll register an interest in finding out. I've

thought for a while that the set of consistent ETs may have properties

similar to the set of prime numbers. It really gets down to details of

the distribution of rational numbers. One thing I noticed is that you

seem to get roughly the same number of consistent ETs within any linear

range. Is that correct?

As to these diagrams, one thing I notice is that the resolution is way

below the number of ETs being considered. So could this be some sort of

aliasing problem? Best way of checking is to be sure each bin contains

the same *number* of ETs, not merely that the x axis is divided into

near-enough equal parts.

Graham

--- In tuning-math@y..., graham@m... wrote:

>

> I don't know either, but I'll register an interest in finding out.

I've

> thought for a while that the set of consistent ETs may have

properties

> similar to the set of prime numbers.

Well, this pattern I found shows up regardless of whether you look at

consistent ETs only, or fail to enforce consistency at all.

> It really gets down to details of

> the distribution of rational numbers. One thing I noticed is that

you

> seem to get roughly the same number of consistent ETs within any

linear

> range. Is that correct?

Yup -- in the 7-limit, it's always half!

You know how to view this table:

range #inconsistent

1-10000 5006

10001-20000 4996

20001-30000 5004

30001-40000 5002

40001-50000 4996

50001-60000 4996

60001-70000 4996

70001-80000 5002

80001-90000 5006

90001-100000 4999 (the first odd number so far)

>

> As to these diagrams, one thing I notice is that the resolution is

way

> below the number of ETs being considered. So could this be some

sort of

> aliasing problem?

No, because the same exact behavior showed up in the Excel chart, no

matter how I stretched it out . . .

> Best way of checking is to be sure each bin contains

> the same *number* of ETs, not merely that the x axis is divided

into

> near-enough equal parts.

Hmm . . . all the maxima are visible, so I'm not sure this is

relevant anyway.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > When you measure the size of an et n by log(n), and are at the

> > critical exponent, the ets less than a certain fixed badness are

> > evenly distributed on average;

>

> This is only true if you choose a very low value for your "certain

> fixed badness", right?

Or start a bit away from 0.

> What if you used n instead of log(n)? Would there still be this

same

> critical function? Or could a function with a different form be the

> critical one?

This is what I was talking about in a previous posting; if we look at

|h(q)-n*log2(q)|^3, where q is in {3,5,7,5/3,7/3,7/5}, we can apply a

condition that |h(q)-n*log2(q)|^3 < f(n), where the integral of

f(n) or the sum of f(n) diverge--for instance, f(n) = 1/n, so

1+1/2+1/3+..., the harmonic series, diverges, where int_1^n 1/x dx =

ln(n). The ln(n) means this is logarithmic; we can get other sorts of

density by changing it, but this is easiest and seems the best to me

anyway.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> > What if you used n instead of log(n)? Would there still be this

> same

> > critical function? Or could a function with a different form be

the

> > critical one?

>

> This is what I was talking about in a previous posting; if we look

at

> |h(q)-n*log2(q)|^3, where q is in {3,5,7,5/3,7/3,7/5}, we can apply

a

> condition that |h(q)-n*log2(q)|^3 < f(n), where the integral of

> f(n) or the sum of f(n) diverge--for instance, f(n) = 1/n, so

> 1+1/2+1/3+..., the harmonic series, diverges, where int_1^n 1/x dx

=

> ln(n). The ln(n) means this is logarithmic; we can get other sorts

of

> density by changing it, but this is easiest and seems the best to

me

> anyway.

But it's not really unique as a critical asymptotic function (?), is

it?

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> Rather than looking like random "noise", the pattern of "best local

> ETs" seems to have a definite "wave" to it, with a frequency of

about

> 1680 -- that is, the "wave" repeats itself about 19 1/2 times

within

> the first 32768 ETs, seemingly with quite a bit of regularity.

This partly makes sense to me and partly doesn't; it should have wave

frequencies corresponding to the good 7-limit ets, but why 1680? It

would be interesting to see a Fourier analysis of this.

Furthermore, noting a striking symmetry centered just above 50,000, I

surmised that there must be an especially exceptional ET just above

100,000. And in fact there is -- 103169-tET, the new champion, only

about 3/5 as bad as 171-tET.

Now the periodicity we saw before appears to occur exactly 62 times

from 1-tET to 103169-tET -- thus my current best estimate of

the "wave period" is

103168/62 = 1664 exactly!!

What is this magical mystical number 1664, and does the 62 suggest

that somehow 31-tET is making itself known across this vaster survey?

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> But it's not really unique as a critical asymptotic function (?),

is

> it?

It is among functions n^e, for some fixed e; the value e=-1 is the

critical exponent where n^(e+1)/(e+1) no longer works as an

antiderivative, and going to smaller values of e leads to convergent

series and integrals.

You can get cute at the critical exponent, by looking at things like

1/(n ln n), 1/(n ln n ln ln n) and so forth. These diverge even more

slowly than 1/n.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> This partly makes sense to me and partly doesn't; it should have

wave

> frequencies corresponding to the good 7-limit ets, but why 1680? It

> would be interesting to see a Fourier analysis of this.

Matlab has fft. The FFT of the set of results up to 2^17 has a few

extremely sharp peaks. With what formula should I interpret the

results?

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> You can get cute at the critical exponent, by looking at things

like

> 1/(n ln n), 1/(n ln n ln ln n) and so forth. These diverge even

more

> slowly than 1/n.

It should be noted that these work only "almost always", whereas 1/n

works without exception, giving us an infinite set. It is highly

probable that the badness of the very best systems using the critical

expondent goes to zero, and goes fast enough that we could work in an

extra log factor, but proving it would be another matter.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> Matlab has fft. The FFT of the set of results up to 2^17 has a few

> extremely sharp peaks. With what formula should I interpret the

> results?

I don't know what that means, but where are the spikes?

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > Matlab has fft. The FFT of the set of results up to 2^17 has a

few

> > extremely sharp peaks. With what formula should I interpret the

> > results?

>

> I don't know what that means, but where are the spikes?

I figured out how to get the power spectrum.

Result: one big giant spike right at 1665-1666.

I will upload the graph shortly.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> Result: one big giant spike right at 1665-1666.

Actually, the Nyquist resolution (?) prevents me from saying whether

it's 1659.12658227848 (the nominal peak) or something plus or minus a

dozen or so. But clearly my visual estimate of 1664 has been

corroborated.

I wrote,

> But clearly my visual estimate of 1664 has been

> corroborated.

1664 = 2^7 * 13

Pretty spooky!!

Thus,

103169 = 2^8 * 13 * 31 + 1

paulerlich wrote:

> 103168/62 = 1664 exactly!!

>

> What is this magical mystical number 1664, and does the 62 suggest

> that somehow 31-tET is making itself known across this vaster survey?

1664 is 128*13. So 103169 is 13*256*31. Interesting, don't know if it's

meaningful, that it's lots of 2s and two prime numbers. The obvious

reason for it dividing by 31 is that it contains an interval taken from

31-equal. Well, I can't find any, but the best 7:5 in 2*10369-equal is

15/31, so its influence can certainly be felt this far.

Graham

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> Actually, the Nyquist resolution (?) prevents me from saying

whether

> it's 1659.12658227848 (the nominal peak) or something plus or minus

a

> dozen or so. But clearly my visual estimate of 1664 has been

> corroborated.

In the 5-limit, one sees a similar pattern, and the "big peak" is at

612, predicably enough . . .

spooky: 1664/612 = 2.718......

--- In tuning-math@y..., graham@m... wrote:

> paulerlich wrote:

>

> > 103168/62 = 1664 exactly!!

> >

> > What is this magical mystical number 1664, and does the 62

suggest

> > that somehow 31-tET is making itself known across this vaster

survey?

>

> 1664 is 128*13. So 103169 is 13*256*31.

No, but 103168 is. 103169 is 11*83*113.

> Interesting, don't know if it's

> meaningful, that it's lots of 2s and two prime numbers. The

obvious

> reason for it dividing by 31 is that it contains an interval taken

from

> 31-equal. Well, I can't find any, but the best 7:5 in 2*10369-

equal is

> 15/31, so its influence can certainly be felt this far.

Confused . . . you mean 2*103168-equal? That's not consistent in the

7-limit . . .

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> You'll be looking at the opposite extremes of the graph.

So what? I was looking at the best in both cases.

> Not really. At 612, you can't really see the difference yet. Go much

> further and you'll see it.

If you have to go much further than 612-tET it's hardly relevant to

human beings, is it? Just how much further out were you planning to put

your cutoff? How much further do you think I need to go to see it? Or

to convince you that it doesn't exist? This reminds me of faiths

regarding the second coming of Jesus. :-)

[Dave Keenan wrote...]

> As has been done many times before, we are looking for a single

> figure that combines the error in cents with the number of notes

> in the tuning to give a single figure-of-demerit with which to

> rank tunings for the purpose of deciding what to leave out of a

> published list or catalog for which limited space is available.

> One simply lowers the maximum badness bar until the right number

> of tunings get under it.

Thanks.

Has consistency been considered? It is an error per note

measure.

[Gene Ward Smith wrote...]

>it would be 4/3=1+1/3 for the 7-limit ets, and 2=(1+1/3)/(1-1/3)

>for 7-limit temperaments, but 5/4 = 1+1/4 for 11-limit ets, and

>5/3 = (1+1/4)/(1-1/4) for 11-limit temperaments

Sorry, Gene, but I'm not following where you're getting these

exponents. Is there a simple rule or reason I'm missing?

-Carl

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> Sorry, Gene, but I'm not following where you're getting these

> exponents. Is there a simple rule or reason I'm missing?

>

> -Carl

It has to do with Diophantine approximation theory. Have you read

Dave Benson's course notes?

>> Sorry, Gene, but I'm not following where you're getting these

>> exponents. Is there a simple rule or reason I'm missing?

>>

>> -Carl

>

> It has to do with Diophantine approximation theory. Have you read

> Dave Benson's course notes?

I've looked at them. What I could understand looked mundane,

and what I couldn't looked like it required quite a bit more

math than I know. Is there a section of the Benson which is

particularly helpful here?

-Carl

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> >> Sorry, Gene, but I'm not following where you're getting these

> >> exponents. Is there a simple rule or reason I'm missing?

> >>

> >> -Carl

> >

> > It has to do with Diophantine approximation theory. Have you read

> > Dave Benson's course notes?

>

> I've looked at them. What I could understand looked mundane,

> and what I couldn't looked like it required quite a bit more

> math than I know. Is there a section of the Benson which is

> particularly helpful here?

>

> -Carl

Well, he does mention the Diophantine approximation exponent for

N-term ratios.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > > It has to do with Diophantine approximation theory. Have you read

> > > Dave Benson's course notes?

>

> Well, he does mention the Diophantine approximation exponent for

> N-term ratios.

Could you tell me what section this is in? I have searched all 8 pdf

files for the word "diophantine" with no success.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > > > It has to do with Diophantine approximation theory. Have you read

> > > > Dave Benson's course notes?

> >

> > Well, he does mention the Diophantine approximation exponent for

> > N-term ratios.

>

> Could you tell me what section this is in?

I don't remember.

> I have searched all 8 pdf

> files for the word "diophantine" with no success.

You can search .pdf files for a particular word? I've never heard of

this ability. Try searching for "the".

> You can search .pdf files for a particular word?

Absolutely.

-Carl

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > I have searched all 8 pdf

> > files for the word "diophantine" with no success.

>

> You can search .pdf files for a particular word?

Sure. Type Ctrl-F or choose Edit/Find or click the binoculars icon.

I believe it is possible to generate a PDF which is essentially just

an image and searching in those is impossible. But that is not the

case for Dave Benson's files.

> I've never heard of this ability. Try searching for "the".

Just in case, I tried searching for "the" in all 8 files. It works

fine.

> I believe it is possible to generate a PDF which is essentially

> just an image and searching in those is impossible. But that is

> not the case for Dave Benson's files.

Correct, and it's possible to have a file containing mixed

data types -- you can embed images right along with postscript

stuff, so if "diophantine" (or whatever) is in an image with

a bunch of math symbols, then you won't find it.

Dave- do you have a spreadsheet for badness measures? I'd

love to see consistency plotted against steps*rms.

-Carl

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> Dave- do you have a spreadsheet for badness measures?

Yes. http://dkeenan.com/Music/7LimitETBadness.xls.zip

> I'd love to see consistency plotted against steps*rms.

This has been added at your request.

Hey guys,

Is there any chance someone could take the time to convince me that

steps^(4/3)*cents is flat in some sense, for 7-limit ETs. It still

looks to me like steps*cents is flatter.

The purported reference to Diophantine approximation seems to have

evaporated from Dave Benson's files, and my request for a steps*cents

version of Paul's goodness chart (for comparison with

steps^(4/3)*cents), seems to have been ignored.

And surely someone can define "flatness" in such a way that I can make

it a formula in my spreadsheet.

I remind the reader that the value of this exponent becomes academic

if one attempts to actually model human perception/cognition of

7-limit badness by adjusting values of k and r in

badness = steps * exp((cents/k)^r), e.g. k = 3.7 cents, r = 0.5

Regards,

-- Dave Keenan
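[Dave's rolled-off badness formula drops straight into code. A minimal sketch follows; the Python rendering and the function name are mine, and the default k and r are just the example values quoted in the message above:]

```python
import math

def badness(steps, cents, k=3.7, r=0.5):
    """Dave Keenan's rolled-off badness: steps * exp((cents/k)^r).

    k sets roughly where the rolloff band sits (in cents) and r sets
    its rate; the post argues that as r approaches 0 the ranking
    approaches that of the log-flat steps*cents form.
    """
    return steps * math.exp((cents / k) ** r)

# A large-error ET is penalised much more heavily than a small-error
# one, but via a gentle rolloff rather than a sharp cutoff.
```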

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> Hey guys,

>

> Is there any chance someone could take the time to convince me that

> steps^(4/3)*cents is flat in some sense, for 7-limit ETs. It still

> looks to me like steps*cents is flatter.

You or someone might try graphing log steps vs badness for both of these, or log steps vs number less than a cut-off value.

> And surely someone can define "flatness" in such a way that I can make

> it a formula in my spreadsheet.

I'm not sure what you mean by this.

> Yes. http://dkeenan.com/Music/7LimitETBadness.xls.zip

Got it!

>> I'd love to see consistency plotted against steps*rms.

>

> This has been added at your request.

Thanks! I wish I could say I knew what I was looking at.

Cut-off badness on x and "is consistent" on y? I was thinking

steps on x and real-number (unrounded) Hahn consistency on y

(it looks like you're using boolean consistency).

I'd just do it myself, but I can't wrap my head around Excel.

I'm currently looking into a graphing calculator

library/interface for Scheme...

-Carl

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> > Hey guys,

> >

> > Is there any chance someone could take the time to convince me that

> > steps^(4/3)*cents is flat in some sense, for 7-limit ETs. It still

> > looks to me like steps*cents is flatter.

>

> You or someone might try graphing log steps vs badness for both of

> these, or log steps vs number less than a cut-off value.

Gene, I'm disappointed. Dave has produced tons of graphs, in case you

didn't notice. Dave wants to understand how you use Diophantine

approximation theory here.

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> > Yes. http://dkeenan.com/Music/7LimitETBadness.xls.zip

>

> Got it!

>

> >> I'd love to see consistency plotted against steps*rms.

> >

> > This has been added at your request.

>

> Thanks! I wish I could say I knew what I was looking at.

> Cut-off badness on x and "is consistent" on y? I was thinking

> steps on x and real-number (unrounded) Hahn consistency on y

> (it looks like you're using boolean consistency).

If you're talking about the maximum real-number consistency _level_

obeyed by an ET, this will be equal to 1/(max_error*1200*steps)

wherever it's greater than 1.5. So for the good ETs, you'll just be

plotting maximum error vs. rms error.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> --- In tuning-math@y..., "clumma" <carl@l...> wrote:

> > > Yes. http://dkeenan.com/Music/7LimitETBadness.xls.zip

> >

> > Got it!

> >

> > >> I'd love to see consistency plotted against steps*rms.

> > >

> > > This has been added at your request.

> >

> > Thanks! I wish I could say I knew what I was looking at.

> > Cut-off badness on x and "is consistent" on y? I was thinking

> > steps on x and real-number (unrounded) Hahn consistency on y

> > (it looks like you're using boolean consistency).

>

> If you're talking about the maximum real-number consistency _level_

> obeyed by an ET, this will be equal to 1/(max_error*1200*steps)

> wherever it's greater than 1.5. So for the good ETs, you'll just be

> plotting maximum error vs. rms error.

Oops -- that's true if you plot maximum real-number consistency level

against 1/(steps*rms), which is what I'm guessing you really had in

mind anyway. . . ?

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > You or someone might try graphing log steps vs badness for both of

> > these, or log steps vs number less than a cut-off value.

> Gene, I'm disappointed. Dave has produced tons of graphs, in case you

> didn't notice.

Did he do the graphs I suggested? They might explain flatness better than some other sort of graph.

> Dave wants to understand how you use Diophantine

> approximation theory here.

Well, in case you haven't noticed, I did explain it. What is left to do?

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> --- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> > You or someone might try graphing log steps vs badness for both of

> > these, or log steps vs number less than a cut-off value.

>

> Gene, I'm disappointed. Dave has produced tons of graphs, in case you

> didn't notice. Dave wants to understand how you use Diophantine

> approximation theory here.

Not quite true. While a gentle introduction to Diophantine

approximation theory would be welcome, I would also like to see it

tested empirically.

Paul, your criticism of my graphs was that they didn't go far enough.

I only went to 612-tET. Going much further than 2000-tET is probably

not practical for my spreadsheet approach, but you have plotted

1/(steps^(4/3)*cents) to 10^7-tET or some such. Could you maybe do the

same for 1/(steps*cents)?

However, plotting stuff and eyeballing it is one thing, but we should

really be able to _calculate_ the flatness of a given badness measure

out to large number ETs.

Gene seems to be saying that for any given badness cutoff, the number

less than that should be about the same in every decade (1 to 9-tET,

10 to 99-tET, 100 to 999-tET, etc). Could you check that with Matlab

Paul, for both steps^(4/3)*cents and steps*cents? For various cutoffs?
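[The per-decade count Dave asks for here can be sketched in a few lines of Python. The helper names are mine, and the RMS error below is simply the nearest-approximation error to the primes 3, 5 and 7 in cents, which may differ in detail from the spreadsheet's measure:]

```python
import math

PRIMES = (3, 5, 7)  # 7-limit primes; 2 is exact in an n-tET by construction

def rms_error_cents(n):
    """RMS error, in cents, of n-tET's nearest approximations to 3, 5, 7."""
    errs = []
    for p in PRIMES:
        exact = n * math.log2(p)             # ideal (fractional) step count
        err_steps = exact - round(exact)     # signed error in steps
        errs.append(err_steps * 1200.0 / n)  # convert steps to cents
    return math.sqrt(sum(e * e for e in errs) / len(errs))

def count_per_decade(badness, cutoff, decades=4):
    """Count ETs with badness(n) < cutoff in 1-9, 10-99, 100-999, ..."""
    counts = []
    for d in range(decades):
        lo, hi = 10 ** d, 10 ** (d + 1)
        counts.append(sum(1 for n in range(lo, hi) if badness(n) < cutoff))
    return counts

# The two exponents under discussion:
flat_43 = lambda n: n ** (4 / 3) * rms_error_cents(n)
flat_1 = lambda n: n * rms_error_cents(n)
```

Roughly equal counts in each decade would support calling a measure "flat" in Gene's sense.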

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> > Yes. http://dkeenan.com/Music/7LimitETBadness.xls.zip

>

> Got it!

>

> >> I'd love to see consistency plotted against steps*rms.

> >

> > This has been added at your request.

>

> Thanks! I wish I could say I knew what I was looking at.

> Cut-off badness on x and "is consistent" on y? I was thinking

> steps on x and real-number (unrounded) Hahn consistency on y

> (it looks like you're using boolean consistency).

Yes. steps*rms on x and Boolean consistency on y.

Tell me how to calculate real-number Hahn consistency.

Do you mean you want to see both Hahn consistency and steps*cents

badness (or do you want 1/(steps*cents) goodness?) plotted against

steps?

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> Gene seems to be saying that for any given badness cutoff,

Only if it's low enough, or if you start far enough away from zero.

> the number

> less than that should be about the same in every decade (1 to 9-tET,

> 10 to 99-tET, 100 to 999-tET, etc). Could you check that with Matlab

> Paul, for both steps^(4/3)*cents and steps*cents? For various cutoffs?

Sure -- just tell me how far out I should go, what cutoffs to use --

perhaps Gene would like to weigh in on these decisions to help guide

us toward something that will make the distinction more clear . . .

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> Sure -- just tell me how far out I should go, what cutoffs to use --

> perhaps Gene would like to weigh in on these decisions to help guide

> us toward something that will make the distinction more clear . . .

You could fit a line to log n vs ets which pass a badness test.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > Sure -- just tell me how far out I should go, what cutoffs to use --

> > perhaps Gene would like to weigh in on these decisions to help guide

> > us toward something that will make the distinction more clear . . .

>

> You could fit a line to log n vs ets which pass a badness test.

What badness test would you choose? As I showed, the "waves" seem to

take over after the badness test is relaxed to allow only a

reasonable number of ETs to pass.

>> If you're talking about the maximum real-number consistency

>> _level_ obeyed by an ET, this will be equal to

>> 1/(max_error*1200*steps) wherever it's greater than 1.5. So for

>> the good ETs, you'll just be plotting maximum error vs. rms error.

>

> Oops -- that's true if you plot maximum real-number consistency

> level against 1/(steps*rms), which is what I'm guessing you really

> had in mind anyway. . . ?

Nnn... I had in mind plotting consistency against steps for all

ETs.

But, right, since consistency is just steps*max_error... I guess

I was just wondering how this looked, over the ETs, compared to

steps*max_rms_error. Is there still periodicity at good ets?

-C.

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> But, right, since consistency is just steps*max_error... I guess

> I was just wondering how this looked, over the ETs, compared to

> steps*max_rms_error.

You mean steps*rms_error?

> Is there still periodicity at good ets?

I'll check it out, but I bet there is. 5-limit or 7-limit?

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

>

> > Gene seems to be saying that for any given badness cutoff,

>

> Only if it's low enough, or if you start far enough away from zero.

This is all sounding a little contrived. How low is low enough? How

far from 1-tET is far enough? If we have to go out beyond what is

musically relevant, then what's the point?

> > the number

> > less than that should be about the same in every decade (1 to 9-tET,

> > 10 to 99-tET, 100 to 999-tET, etc). Could you check that with Matlab

> > Paul, for both steps^(4/3)*cents and steps*cents? For various cutoffs?

>

> Sure -- just tell me how far out I should go, what cutoffs to use --

> perhaps Gene would like to weigh in on these decisions to help guide

> us toward something that will make the distinction more clear . . .

Go out to (10^7 - 1)-tET and use whatever badness cutoff gives 10 ETs

in the second decade (i.e. from 10-tET to 99-tET). Then tell us how

many we get in the other 6 decades, using steps^(4/3)*cents in one

case and steps*cents in the other.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> This is all sounding a little contrived. How low is low enough? How

> far from 1-tET is far enough? If we have to go out beyond what is

> musically relevant, then what's the point?

Why not just exclude the inconsistent ets? The point is to get rid of the crap parade at the very beginning, and this looks like a use of consistency I could go for.

> Go out to (10^7 - 1)-tET and use whatever badness cutoff gives 10 ETs

> in the second decade (i.e. from 10-tET to 99-tET). Then tell us how

> many we get in the other 6 decades, using steps^(4/3)*cents in one

> case and steps*cents in the other.

Why not just fit a line to log n vs rank, in the sense of the m-th item on a tops list? Paul's top 75 list should work, if you sort it from smallest to largest.
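[Gene's line-fitting suggestion is also easy to set up. A sketch in pure Python, with helper names of my own; under log-flatness, log n should grow roughly linearly with rank down a badness-sorted list:]

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def flatness_slope(top_list):
    """Slope of log10(n) against rank for a badness-sorted list of ETs.

    top_list holds ET sizes sorted from smallest to largest badness.
    A steady slope is one reading of "flat": each step down the
    ranking multiplies the typical ET size by a constant factor.
    """
    xs = [float(r) for r in range(1, len(top_list) + 1)]
    ys = [math.log10(n) for n in top_list]
    return fit_line(xs, ys)[0]
```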

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

>

> > This is all sounding a little contrived. How low is low enough? How

> > far from 1-tET is far enough? If we have to go out beyond what is

> > musically relevant, then what's the point?

>

> Why not just exclude the inconsistent ets? The point is to get rid of

> the crap parade at the very beginning, and this looks like a use of

> consistency I could go for.

>

I understand this to be equivalent to putting a sharp cutoff at 600 on

steps*cents. The usual objection to sharp cutoffs applies, namely

people don't usually apply sharp cutoffs when making decisions about

the usefulness of tunings.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> I understand this to be equivalent to putting a sharp cutoff at 600 on

> steps*cents. The usual objection to sharp cutoffs applies, namely

> people don't usually apply sharp cutoffs when making decisions about

> the usefulness of tunings.

Oops! Only when cents is max-absolute error, not rms.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> > I understand this to be equivalent to putting a sharp cutoff at 600 on

> > steps*cents. The usual objection to sharp cutoffs applies, namely

> > people don't usually apply sharp cutoffs when making decisions about

> > the usefulness of tunings.

>

> Oops! Only when cents is max-absolute error, not rms.

Maybe it should be minimax. Maybe we should give a _range_ of optimal

generators, rather than just one, when the same minimax is achieved

for all within the range. Maybe we should also give the points at

which the minimax is doubled. This would give an idea of the

sensitivity of the tuning.

>Tell me how to calculate real-number Hahn consistency.

According to Paul, it's 1/(max_error*1200*steps), but

I don't see this coming from the algorithm I've always

used, given by Paul Hahn:

| consistency_level(ET_number, limit):

| max <- 0

| min <- 0

| FOR loop <- 3 TO limit BY 2

| exact_steps <- ET_number * log2(loop)

| error <- exact_steps - round(exact_steps)

| IF error > max THEN

| max <- error

| ELSEIF error < min THEN

| min <- err

| ENDIF

| ENDFOR

| RETURN integer_part(0.5 / (max - min))
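[A direct Python transcription of the listing above, for anyone who would rather run it than read pseudocode; `hi` and `lo` stand in for the pseudocode's `max` and `min` to avoid shadowing Python built-ins:]

```python
import math

def consistency_level(et_number, limit):
    """Paul Hahn's consistency level of et_number-tET at an odd limit.

    Tracks the extreme signed step errors over the odd numbers
    3, 5, ..., limit and returns the truncated reciprocal of twice
    their total spread, as in the pseudocode above.
    """
    hi = 0.0
    lo = 0.0
    for odd in range(3, limit + 1, 2):
        exact_steps = et_number * math.log2(odd)
        error = exact_steps - round(exact_steps)
        if error > hi:
            hi = error
        elif error < lo:
            lo = error
    return int(0.5 / (hi - lo))
```

For example, 12-tET comes out consistent to level 3 in the 5-limit but only level 1 in the 7-limit.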

>Do you mean you want to see both Hahn consistency and steps*cents

>badness /.../ plotted against steps?

Yes.

>(or do you want 1/(steps*cents) goodness?

I don't know why I'd care.

>>But, right, since consistency is just steps*max_error... I guess

>>I was just wondering how this looked, over the ETs, compared to

>>steps*max_rms_error.

>

>You mean steps*rms_error?

Yes.

>>Is there still periodicity at good ets?

>

>I'll check it out, but I bet there is. 5-limit or 7-limit?

7, of course.

-Carl

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> >Tell me how to calculate real-number Hahn consistency.

>

> According to Paul, it's 1/(max_error*1200*steps)

No, that's only correct if it's greater than 1.5.

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> >>Is there still periodicity at good ets?

> >

> >I'll check it out, but I bet there is. 5-limit or 7-limit?

>

> 7, of course.

I checked, and there's still a nice big peak at 1664, but the shape

is a little different -- the curve rises ~exponentially up to a peak

at 1664, makes a rapid plunge down _below_ the trend line, and

then "decays" back _up_ after that.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> > --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> > > I understand this to be equivalent to putting a sharp cutoff at 600 on

> > > steps*cents. The usual objection to sharp cutoffs applies, namely

> > > people don't usually apply sharp cutoffs when making decisions about

> > > the usefulness of tunings.

> >

> > Oops! Only when cents is max-absolute error, not rms.

>

> Maybe it should be minimax.

That's a tough choice. I'd rather see two separate rankings. One based

on minimum-rms and another based on minimum maximum-absolute

(minimax).

> Maybe we should give a _range_ of optimal

> generators, rather than just one, when the same minimax is achieved

> for all within the range.

Nah! Wouldn't you still want to know what value of generator minimises

the max-absolute error of all those intervals that actually _depend_

on the generator?

> Maybe we should also give the points at

> which the minimax is doubled. This would give an idea of the

> sensitivity of the tuning.

The error-sensitivity of the tuning is already given by the maximum

over the diamond, of the absolute value of the number of generators

required for an interval. Which is the same as the maximum over the

relevant primes minus the minimum over those primes, of the (signed)

number of generators required for a prime.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > Maybe we should give a _range_ of optimal

> > generators, rather than just one, when the same minimax is achieved

> > for all within the range.

>

> Nah! Wouldn't you still want to know what value of generator minimises

> the max-absolute error of all those intervals that actually _depend_

> on the generator?

Sorry, I was actually thinking MAD (mean absolute deviation), not

minimax.