I haven't read any of the messages about this in tuning-math. I'm purely responding to Paul's summary and subsequent responses by Paul and Gene on the tuning list.

--- In tuning@y..., "paulerlich" <paul@s...> wrote:

> --- In tuning@y..., "dkeenanuqnetau" <D.KEENAN@U...> wrote:

> > Thanks for this summary Paul, but ...

>

> You mean you haven't been on tuning-math@y... ? Get thee

> hence :)

>

> > > He proposed a 'badness' measure defined as

> > >

> > > step^3 cent

> > >

> > > where step is a measure of the typical number of notes in a scale for this temperament (given any desired degree of harmonic depth),

> >

> > What the heck does that mean?

>

> step is the RMS of the numbers of generators required to get to each

> ratio of the tonality diamond from the 1/1, I think.

This is good. More comprehensive than what Graham and I were using.

> > How does he justify cubing it?

>

--- In tuning@y..., "ideaofgod" <genewardsmith@j...> wrote:

> An order of growth estimate shows there should be an infinite list

> for step^2, but not necessarily for anything higher, and looking far

> out makes it clear step^3 gives a finite list. What this means, of

> course, is that in some sense step^2 is the right way to measure

> goodness.

Yes! Only squared, not cubed.

> Step^3 weighs the small systems more heavily, and that is

> why we see so many of them to start with.

I believe the way to fix this is not to go to step^3 (I don't think there's any human-perception-or-cognition-based justification for doing that), but instead to correct the raw cents to some kind of dissonance or justness measure (more on this below).

> > > and

> > > cent is a measure of the deviation from JI 'consonances' in cents.

> >

> > Yes, but which measure of deviation? Minimum maximum-absolute, or minimum root-mean-squared, or something else?

>

> RMS

Fine.

> > How does he justify not applying a human sensory correction to this?

>

> A human sensory correction?

Yes. Once the deviation goes past about 20 cents it's irrelevant how big it is, and a 0.1 cent deviation does not sound 10 times better than a 1.0 cent deviation; it sounds about the same. I suggest this figure-of-demerit:

step^2 * exp((cents/k)^2), where k is somewhere between 5 and 15 cents

I think this will give a ranking of temperaments that corresponds more to how composers or performers would rank them.
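A quick sketch of how this figure-of-demerit behaves (the value of k and the sample numbers here are purely illustrative, not taken from any actual temperament):

```python
import math

def demerit(steps, cents_error, k=10.0):
    # Proposed figure-of-demerit: step^2 * exp((cents/k)^2).
    # Errors well under k barely change the score, so a 0.1c error
    # and a 1.0c error rate about the same; errors much beyond k
    # are penalized steeply.
    return steps**2 * math.exp((cents_error / k) ** 2)

print(demerit(10, 0.1))   # barely above the error-free score of 100
print(demerit(10, 1.0))   # still roughly 100
print(demerit(10, 25.0))  # a 25c error dominates completely
```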

-- Dave Keenan

Brisbane, Australia

http://dkeenan.com

-- A country which has dangled the sword of nuclear holocaust over the world for half a century and claims that someone else invented terrorism is a country out of touch with reality. --John K. Stoner

--- In tuning-math@y..., David C Keenan <d.keenan@u...> wrote:

> > An order of growth estimate shows there should be an infinite list for step^2, but not necessarily for anything higher, and looking far out makes it clear step^3 gives a finite list. What this means, of course, is that in some sense step^2 is the right way to measure goodness.

>

> Yes! Only squared, not cubed.

>

> > Step^3 weighs the small systems more heavily, and that is

> > why we see so many of them to start with.

>

> I believe the way to fix this is not to go to step^3 (I don't think there's any human-perception-or-cognition-based justification for doing that),

What human-perception-or-cognition-based justification is there for using step^2 ???

> Yes. Once the deviation goes past about 20 cents it's irrelevant how big it is,

That's not true -- you're ignoring both adaptive tuning and adaptive timbring.

> and a 0.1 cent deviation does not sound 10 times better than a 1.0 cent deviation, it sounds about the same.

In my own musical endeavors, this is true, but with all the strict-JI obsessed people out there, a 0.1 cent deviation may end up being 10 times more interesting than a 1.0 cent deviation.

> I suggest this figure-of-demerit.

>

> step^2 [...]

Again, what on earth does step^2 tell you about how composers and performers would rate a temperament? OK, step^2 is the number of possible dyads in the typical scale. Step^3 is the number of possible triads. Why is the former so much more "human-perception-or-cognition-based" to you than the latter?

As for the other part, the dissonance measure . . . by doing it Gene's way, we're going to end up with all the most interesting temperaments for a wide variety of different ranges, from "you'll never hear a beat" to "wafso-just" to "quasi-just" to "tempered" to "needing adaptive tuning/timbring". Thus our top 30 or whatever will have much of interest to all different schools of microtonal composers.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> --- In tuning-math@y..., David C Keenan <d.keenan@u...> wrote:

> > Yes. Once the deviation goes past about 20 cents it's irrelevant how big it is,

>

> That's not true -- you're ignoring both adaptive tuning and adaptive

> timbring.

You can adaptively tune or timbre just about anything, so it seems like we _should_ ignore it.

> > and a 0.1 cent deviation does not sound 10 times better than a 1.0 cent deviation, it sounds about the same.

>

> In my own musical endeavors, this is true, but with all the strict-JI obsessed people out there, a 0.1 cent deviation may end up being 10 times more interesting than a 1.0 cent deviation.

A strict JI obsessed person will not be the slightest bit interested in linear temperaments, or at least that has been my experience. If they are at all interested then I think they will be quite happy to have a 1c error rather than a 0.1c one if it lets them halve (actually divide by 10^(1/3)) the number of notes in the scale. Given that 1c is way below the typical accuracy of non-electronic instruments.

> > I suggest this figure-of-demerit.

> >

> > step^2 [...]

>

> Again, what on earth does step^2 tell you about how composers and performers would rate a temperament? OK, step^2 is the number of possible dyads in the typical scale. Step^3 is the number of possible triads. Why is the former so much more "human-perception-or-cognition-based" to you than the latter?

Ok. Maybe I don't have a good argument for that. Try

step^3 * exp((cents/k)^2)

> As for the other part, the dissonance measure . . . by doing it

> Gene's way, we're going to end up with all the most interesting

> temperaments for a wide variety of different ranges, from "you'll

> never hear a beat" to "wafso-just" to "quasi-just" to "tempered"

> to "needing adaptive tuning/timbring". Thus our top 30 or whatever

> will have much of interest to all different schools of microtonal

> composers.

I think it has some extreme cases that are of interest to no one. This can be fixed.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> > > Yes. Once the deviation goes past about 20 cents it's irrelevant how big it is,

> >

> > That's not true -- you're ignoring both adaptive tuning and adaptive timbring.

>

> You can adaptively tune or timbre just about anything,

Not true -- in adaptive tuning, you don't want the horizontal shifts to be too big, or you lose the melodic coherence of the scale; and in adaptive timbring, you don't want the partials to deviate too far from a harmonic series, or you'll lose the sense that each note has a definite pitch.

> A strict JI obsessed person will not be the slightest bit interested in linear temperaments, or at least that has been my experience. If they are at all interested then I think they will be quite happy to have a 1c error rather than a 0.1c one if it lets them halve (actually divide by 10^(1/3)) the number of notes in the scale.

You don't know that for sure. But look, I myself was trying to get Gene to adopt some exponential, rather than polynomial, function of the number of notes in the scale. He resisted . . .

> Given that 1c is

> way below the typical accuracy of non-electronic instruments.

Hey, it won't be the first time a feature of tuning that is highly removed from most musicians' possible realm of experience has gotten published!

>

> > > I suggest this figure-of-demerit.

> > >

> > > step^2 [...]

> >

> > Again, what on earth does step^2 tell you about how composers and performers would rate a temperament? OK, step^2 is the number of possible dyads in the typical scale. Step^3 is the number of possible triads. Why is the former so much more "human-perception-or-cognition-based" to you than the latter?

>

> Ok. Maybe I don't have a good argument for that. Try

>

> step^3 * exp((cents/k)^2)

That's the _last_ conclusion I wanted you to reach!

> I think it has some extreme cases that are of interest to no one. This can be fixed.

I tried to argue this point to Gene, but he seems to really like Ennealimmal. Hey, if we're getting mathematical elegance with this criterion, and all our favorite systems are showing up (I'm still waiting for double-diatonic ~26), shouldn't we be willing to pay the price of letting the guy who's doing all the work get his favorite system in too?

Personally I'd feel much better if everyone could somehow agree what was the overall most sensible measure regardless of the results!

In Gene's case, I would hope that it would be some elegant internal consistency that ties the whole deal together. I'd personally settle for that even if the results were a tad exotic.

Of course it might help if I understood it all a bit better too! I feel like I'm getting there though, I just wish Gene were a little bit more generous with the narrative--either that or someone else besides him were saying the same things slightly differently... that helps me sometimes too.

--Dan Stearns

----- Original Message -----

From: "paulerlich" <paul@stretch-music.com>

To: <tuning-math@yahoogroups.com>

Sent: Wednesday, December 05, 2001 5:33 PM

Subject: [tuning-math] Re: The grooviest linear temperaments for 7-limit music



--- In tuning-math@y..., "D.Stearns" <STEARNS@C...> wrote:

> Personally I'd feel much better if everyone could somehow agree what

> was the overall most sensible measure regardless of the results!

Fat chance :)

> In Gene's case, I would hope that it would be some elegant internal

> consistency that ties the whole deal together. I'd personally settle

> for that even if the results were a tad exotic.

I feel the same way.

> Of course it might help if I understood it all a bit better too! I feel like I'm getting there though, I just wish Gene were a little bit more generous with the narrative--either that or someone else besides him were saying the same things slightly differently... that helps me sometimes too.

I think he's the only one who understands abstract algebra around here, so in a lot of cases, that isn't really possible, unfortunately . . . of course, I should study up on it, but I should also make more music, and get more sleep, and . . .

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> You don't know that for sure. But look, I myself was trying to get

> Gene to adopt some exponential, rather than polynomial, function of

> the number of notes in the scale. He resisted . . .

You wanted to have exponential growth for the "step" factor, and Dave for the "cents" factor, which have opposite tendencies; Dave seems to want to filter out the very things on the low end that you wanted included.

If we added an exponential growth to "cents", I would suggest trying k sinh(cents/k) for various k.

> > Given that 1c is way below the typical accuracy of non-electronic instruments.

>

> Hey, it won't be the first time a feature of tuning that is highly removed from most musicians' possible realm of experience has gotten published!

It seems to me it is quite relevant to the strict JI school of thought. I got roasted for mentioning Partch in such a connection, but it's hard to see what theoretical objection he could raise to 45 notes of ennealimmal in the 7-limit.

> I tried to argue this point to Gene, but he seems to really like Ennealimmal. Hey, if we're getting mathematical elegance with this criterion, and all our favorite systems are showing up (I'm still waiting for double-diatonic ~26), shouldn't we be willing to pay the price of letting the guy who's doing all the work get his favorite system in too?

I think the only way you will get rid of Ennealimmal is to have an upper-end cut-off, and you said you wanted none. Sorry, you are stuck with it, and it has nothing to do with my liking it really. I've never even tried it!

--- In tuning-math@y..., "D.Stearns" <STEARNS@C...> wrote:

> In Gene's case, I would hope that it would be some elegant internal

> consistency that ties the whole deal together. I'd personally settle

> for that even if the results were a tad exotic.

Elegant internal consistency suggests to me steps^2 cents as a measure, but that would need an upper cut-off. We do it for ets, however, so I don't see that as a big deal myself.

> Of course it might help if I understood it all a bit better too! I feel like I'm getting there though, I just wish Gene were a little bit more generous with the narrative--either that or someone else besides him were saying the same things slightly differently... that helps me sometimes too.

I'm hoping Paul will absorb it all and start coming out with his own interpretations, but I can't get him to compute a wedge product. :)

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> You wanted to have exponential growth for the "step" factor, and Dave for the "cents" factor,

I think you misunderstood Dave -- he wanted the *goodness* for the cents factor to be a Gaussian.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "D.Stearns" <STEARNS@C...> wrote:

>

> > In Gene's case, I would hope that it would be some elegant internal consistency that ties the whole deal together. I'd personally settle for that even if the results were a tad exotic.

>

> Elegant internal consistency suggests to me steps^2 cents as a measure, but that would need an upper cut-off. We do it for ets, however, so I don't see that as a big deal myself.

Who's we?

>

> > Of course it might help if I understood it all a bit better too! I feel like I'm getting there though, I just wish Gene were a little bit more generous with the narrative--either that or someone else besides him were saying the same things slightly differently... that helps me sometimes too.

>

> I'm hoping Paul will absorb it all and start coming out with his own interpretations, but I can't get him to compute a wedge product. :)

I'll take a look at it again when I get a chance.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> I think you misunderstood Dave -- he wanted the *goodness* for the

> cents factor to be a Gaussian.

I don't think penalizing a system for being good can possibly be defended, so I'm at a loss here.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> Ok. Maybe I don't have good argument for that. Try

>

> step^3 * exp((cents/k)^2)

This looks like hyper-exponential growth penalizing badness, not goodness.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > I think you misunderstood Dave -- he wanted the *goodness* for the

> > cents factor to be a Gaussian.

>

> I don't think penalizing a system for being good can possibly be

> defended, so I'm at a loss here.

I'm not sure who is confused about what.

gaussian(x) = exp(-(x/k)^2)

goodness = gaussian(cents_error)

badness = 1/goodness
        = 1/exp(-(cents_error/k)^2)
        = exp((cents_error/k)^2)

sinh might be fine too; I'm not familiar with it.
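To make the comparison concrete, here is a small sketch (k = 10 cents is chosen arbitrarily from the suggested 5-15 range) of the two proposed cents penalties side by side:

```python
import math

k = 10.0  # illustrative; somewhere between 5 and 15 cents was suggested

def gaussian_badness(cents):
    # badness = 1/goodness, with goodness = exp(-(cents/k)^2):
    # essentially flat below a cent, explosive growth well beyond k
    return math.exp((cents / k) ** 2)

def sinh_badness(cents):
    # Gene's k*sinh(cents/k): approximately equal to the raw cents
    # for small errors (so sub-cent differences still count), and
    # growing exponentially for large ones
    return k * math.sinh(cents / k)

for cents in (0.1, 1.0, 10.0, 30.0):
    print(cents, gaussian_badness(cents), sinh_badness(cents))
```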

The problems, as I see them, are

(a) some temperaments that require ridiculously large numbers of notes are near the top of the list only because they have errors of a fraction of a cent, but once it's less than about a cent, this should not be enough to redeem them. And

(b) some others with ridiculously large errors are near the top of the list only because they come out needing few notes.

I think that the first can be fixed by applying a function to the cents error that treats all very small errors as being equal, and the latter might be fixed by dropping back from steps^3 to steps^2.

-- Dave Keenan

Paul wrote:

> As for the other part, the dissonance measure . . . by doing it

> Gene's way, we're going to end up with all the most interesting

> temperaments for a wide variety of different ranges, from "you'll

> never hear a beat" to "wafso-just" to "quasi-just" to "tempered"

> to "needing adaptive tuning/timbring". Thus our top 30 or whatever

> will have much of interest to all different schools of microtonal

> composers.

Oh, that's only if you think one list can please everybody. I'd rather ask people what they want, and produce a short list that's likely to have their ideal temperament on it. That's why I keep up the .key and .micro files. Most importantly, it's why I release all the source code for a Free platform so that anybody can try out their own ideas. Nothing Gene's done so far couldn't have been done by modifying that code.

Graham

Dave Keenan wrote:

> (b) some others with ridiculously large errors are near the top of the list only because they come out needing few notes.

>

> I think that the first can be fixed by applying a function to the cents error that treats all very small errors as being equal, and the latter might be fixed by dropping back from steps^3 to steps^2.

No, you get ridiculously large errors near the top with steps^2 as well.

Graham

Dan Stearns:

> > Of course it might help if I understood it all a bit better too! I feel like I'm getting there though, I just wish Gene were a little bit more generous with the narrative--either that or someone else besides him were saying the same things slightly differently... that helps me sometimes too.

Paul Erlich:

> I think he's the only one who understands abstract algebra around

> here, so in a lot of cases, that isn't really possible,

> unfortunately . . . of course, I should study up on it, but I should

> also make more music, and get more sleep, and . . .

Most of the results Gene's getting don't require anything I don't understand. So I said all these things differently a few months ago. If you want to catch up, try getting the source code from <http://x31eq.com/temper.html> and an interpreter and try puzzling it out. I haven't had any feedback at all on readability, so I don't know how easy it'll be for a newbie.

The method shouldn't be difficult for Dan to understand. You generate a linear temperament from two equal temperaments. That's exactly like finding an MOS on the scale tree, except you have to do it for all consonant intervals instead of only the octave.
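To illustrate the idea in the 5-limit (a minimal sketch, not Graham's actual code): combining 12-equal and 19-equal yields meantone, and the comma they both temper out falls straight out of the cross product of their vals:

```python
import math
from fractions import Fraction

def patent_val(n, primes=(2, 3, 5)):
    # n-equal's best whole number of steps for each prime
    return [round(n * math.log2(p)) for p in primes]

def common_comma(u, v):
    # the interval both ETs temper out: the cross product of the
    # two vals, read as a monzo (exponents of 2, 3, 5)
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

u, v = patent_val(12), patent_val(19)   # [12, 19, 28] and [19, 30, 44]
monzo = common_comma(u, v)              # [-4, 4, -1]
ratio = Fraction(1)
for p, e in zip((2, 3, 5), monzo):
    ratio *= Fraction(p) ** e
print(ratio)  # 81/80 -- the syntonic comma, so 12&19 is meantone
```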

The wedge products are more difficult, but I don't see them as being at all important in this context. Working with unison vectors is more trouble. I've got code for that at <http://x31eq.com/vectors.html>. Going from temperaments to unison vectors is an outstanding problem that Gene might have solved, but I haven't seen any source code yet.

Graham

--- In tuning-math@y..., graham@m... wrote:

> The wedge products are more difficult, but I don't see them as being at all important in this context. Working with unison vectors is more trouble.

If working with unison vectors is more trouble, why not wedge products? The wedgie is good for the following reasons:

(1) It is easy to compute, given either a pair of ets, a pair of unison vectors, or a generator map.

(2) It uniquely defines the temperament, so that temperaments obtained by any method can be merged into one list.

(3) It automatically eliminates torsion problems.

(4) Given the wedgie, it is easy to compute associated ets, a generating pair of unison vectors, or a generator map. Hence it is easy to go from any one of these to any other.

(5) By adding or subtracting wedgies we can produce new temperaments.

Given all of that, I think you are missing a bet by dismissing them; they could easily be incorporated into your code.
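For instance, point (1) for a pair of ets can be sketched like this (using one common ordering and sign convention for the six coefficients, not necessarily Gene's exact normalization):

```python
import math
from itertools import combinations

def patent_val(n, primes=(2, 3, 5, 7)):
    # n-equal's best whole number of steps for each prime
    return [round(n * math.log2(p)) for p in primes]

def wedgie(u, v):
    # wedge product of two 7-limit vals: the six 2x2 minors
    # u[i]*v[j] - u[j]*v[i] for i < j, with the overall sign flipped
    # if needed so the first nonzero coefficient is positive
    w = [u[i]*v[j] - u[j]*v[i] for i, j in combinations(range(4), 2)]
    lead = next((x for x in w if x != 0), 1)
    return [x if lead > 0 else -x for x in w]

# 12-equal and 19-equal jointly define 7-limit meantone:
print(wedgie(patent_val(12), patent_val(19)))  # [1, 4, 10, 4, 13, 12]
```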

> I've got code for that at <http://x31eq.com/vectors.html>. Going from temperaments to unison vectors is an outstanding problem that Gene might have solved, but I haven't seen any source code yet.

I don't know what good Maple code will do, but here it is:

findcoms := proc(l)
  # Given a wedgie l, return a pair of commas defining the temperament.
  local p,q,r,p1,q1,r1,s,u,v,w;
  # The 5-limit comma determined by the wedgie:
  s := igcd(l[1], l[2], l[6]);
  u := [l[6]/s, -l[2]/s, l[1]/s, 0];
  # Solve for a second comma of the form 2^p 3^q 5^r 7 whose wedge
  # with u reproduces the wedgie:
  v := [p, q, r, 1];
  w := w7l(u, v);
  s := isolve({l[1]-w[1], l[2]-w[2], l[3]-w[3], l[4]-w[4], l[5]-w[5], l[6]-w[6]});
  s := subs(_N1=0, s);
  p1 := subs(s, p);
  q1 := subs(s, q);
  r1 := subs(s, r);
  v := 2^p1 * 3^q1 * 5^r1 * 7;
  if v < 1 then v := 1/v fi;
  w := 2^u[1] * 3^u[2] * 5^u[3];
  if w < 1 then w := 1/w fi;
  [w, v]
end:

coms := proc(l)
  local v;
  v := findcoms(l);
  com7(v[1], v[2])
end:

"w7l" takes two vectors representing intervals, and computes the wedge product. "isolve" gives integer solutions to a linear equation; I get an undetermined variable "_N1" in this way which I can set equal to any integer, so I set it to 0. The pair of unisons returned in this way can be LLL reduced by the "com7" function, which takes a pair of intervals and LLL reduces them.

--- In tuning-math@y..., graham@m... wrote:

> Dan Stearns:

> > > Of course it might help if I understood it all a bit better too! I feel like I'm getting there though, I just wish Gene were a little bit more generous with the narrative--either that or someone else besides him were saying the same things slightly differently... that helps me sometimes too.

>

> Paul Erlich:

> > I think he's the only one who understands abstract algebra around here, so in a lot of cases, that isn't really possible, unfortunately . . . of course, I should study up on it, but I should also make more music, and get more sleep, and . . .

>

> Most of the results Gene's getting don't require anything I don't understand. So I said all these things differently a few months ago. If you want to catch up, try getting the source code from <http://x31eq.com/temper.html> and an interpreter and try puzzling it out. I haven't had any feedback at all on readability, so I don't know how easy it'll be for a newbie.

>

> The method shouldn't be difficult for Dan to understand. You generate a linear temperament from two equal temperaments.

I _really hope_ that that's not what all or even most of Gene's narrative has been about!!

> That's exactly like finding an MOS on the scale tree, except you have to do it for all consonant intervals instead of only the octave.

This I don't see at all. Don't you mean "all fractions 1/N of an octave" rather than "all consonant intervals"?

> The wedge products are more difficult, but I don't see them as being at all important in this context.

Well then, when Dan asks about what is going on here, and you come back saying you already understood it all a few months ago, you're actually making a very selective reply to Dan's question, aren't you?

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> [...]

> "w7l" takes two vectors representing intervals, and computes the wedge product. "isolve" gives integer solutions to a linear equation; I get an undetermined variable "_N1" in this way which I can set equal to any integer, so I set it to 0.

The solutions represent?

> The pair of unisons returned in this way can be LLL reduced by the "com7" function, which takes a pair of intervals and LLL reduces them.

Why not TM-reduce them?

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> --- In tuning-math@y..., "D.Stearns" <STEARNS@C...> wrote:

>

> > Personally I'd feel much better if everyone could somehow agree what was the overall most sensible measure regardless of the results!

>

> Fat chance :)

>

> > In Gene's case, I would hope that it would be some elegant internal consistency that ties the whole deal together. I'd personally settle for that even if the results were a tad exotic.

>

> I feel the same way.

It's nice to have pretty looking (i.e. simple) formulae but we can hardly ignore the fact that we're trying to come up with a list of linear temperaments that will be of interest to the largest possible number of human beings. Unfortunately, human perception and cognition are messy to model mathematically, not well established experimentally, and highly variable between individuals. But I'm sure we can come up with something that is both reasonably elegant mathematically and that we (in this forum) can all agree isn't too bad. We certainly do it without trying some out and looking at the results!

We should probably hone the badness metric using 5-limit, where the most experience exists.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> But I'm sure we can come up with something that is both reasonably elegant mathematically and that we (in this forum) can all agree isn't too bad.

I felt that way about steps^3 cents, except where was 12+14?

> We certainly do it

> without trying some out and looking at the results!

You mean a priori? The more arbitrary parameters we put into it, the more we'll have to rely on a particular assumption about how someone is going to be making music, and this assumption will be violated for the next person. The top 25 or 40 according to a very generalized criterion will best serve to present the _pattern_ of this whole endeavor, upon which any musician can base their _own_ evaluation, and if they don't want to, at least pick off one or two temperaments that interest them.

But I have a nagging suspicion that there are even more "slippery" ones out there, especially on the ultra-simple end of things . . .

I suspect we can use step^2 cents and cut it off at some point where there's a long gap in the step-cent plane. For example, the next point out after Ennealimmal is probably a long way out, so we can probably put a cutoff there. As for simple temperaments with large errors, I suspect there are more than Gene and Graham have found so far that would end up looking good on this criterion, so it may end up making sense to place another cutoff there . . . but I want to be sure we've caught all the slippery fish before we decide that.

I would still like to see the "step" thing weighted -- there should be something very mathematically and acoustically elegant about doing it that way (if defined correctly) since we are using the Tenney lattice after all!

>

> We should probably hone the badness metric using 5-limit, where the most experience exists.

Yes, I was just going to say we should write the whole paper first in the 5-limit.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

>

> > But I'm sure we can come up

> > with something that is both reasonably elegant mathematically and that

> > we (in this forum) can all agree isn't too bad.

>

> I felt that way about steps^3 cents, except where was 12+14?

>

> > We certainly do it

> > without trying some out and looking at the results!

Oops! That should have been

We certainly _can't_ do it without trying some out and looking at the

results!

> You mean a priori? The more arbitrary parameters we put into it, the

> more we'll have to rely on a particular assumption about how someone is

> going to be making music, and this assumption will be violated for the

> next person.

"Not putting in" an arbitrary parameter is usually equivalent to

putting it in but giving it an even more arbitrary value like 0 or 1.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > You mean a priori? The more arbitrary parameters we put into it, the

> > more we'll have to rely on a particular assumption about how someone is

> > going to be making music, and this assumption will be violated for the

> > next person.

>

> "Not putting in" an arbitrary parameter is usually equivalent to

> putting it in but giving it an even more arbitrary value like 0 or 1.

Well, I think Gene is saying that step^2 cents is clearly the right

measure of "remarkability".

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> The solutions represent?

I take the 5-limit comma defined by the temperament, and then find

another comma 2^p 3^q 5^r 7 such that the wedgie of this and the 5-

limit comma is the correct wedgie, that means these two commas define

the temperament.

> > The pair of unisons

> > returned in this way can be LLL reduced by the "com7" function, which

> > takes a pair of intervals and LLL reduces them.

>

> Why not TM-reduce them?

I'd always LLL reduce them first.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > The solutions represent?

>

> I take the 5-limit comma defined by the temperament, and then find

> another comma 2^p 3^q 5^r 7 such that the wedgie of this and the 5-

> limit comma is the correct wedgie, that means these two commas define

> the temperament.

>

>

> > > The pair of unisons

> > > returned in this way can be LLL reduced by the "com7" function, which

> > > takes a pair of intervals and LLL reduces them.

> >

> > Why not TM-reduce them?

>

> I'd always LLL reduce them first.

How come?

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > I'd always LLL reduce them first.

>

> How come?

Because it makes the TM reduction dead easy.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> Well, I think Gene is saying that step^2 cents is clearly the right

> measure of "remarkability".

Huh? "Remarkability" sounds like a kind of goodness. Step^2 * cents is

obviously a form of badness. I think I've already explained why no

product of polynomials of these two things will ever be acceptable to

me, at least not without cutoffs applied to them first. And I

understand Gene to be saying that he wants at least an upper cutoff on

"steps" (which seems like a bad name to me since it suggests scale

steps, I prefer "num_gens" or just "gens").

gens^2 * cents

gives exactly the same ranking as

log(gens^2 * cents) [where the log base is arbitrary]

because log(x) is monotonically increasing. Right?
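Right. The monotonicity claim is easy to check numerically; here is a minimal sketch with made-up (gens, cents) pairs (the values are hypothetical, purely for illustration):

```python
import math

# Hypothetical (gens, cents) pairs -- illustrative values only.
temperaments = [(6, 4.2), (10, 1.6), (18, 0.06), (2, 28.9)]

by_product = sorted(temperaments, key=lambda t: t[0]**2 * t[1])
by_log = sorted(temperaments, key=lambda t: 2 * math.log(t[0]) + math.log(t[1]))

# log is monotonically increasing, so the two rankings agree.
print(by_product == by_log)  # True
```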

Now

log(gens^2 * cents)

= log(gens^2) + log(cents)

= 2*log(gens) + log(cents)

So this says that a doubling of the number of generators is twice as

bad as a doubling of the error. And previously someone suggested it

was 3 times as bad. You've arbitrarily decided that only the

logarithms are comparable (when cents is already a logarithmic

quantity) and you arbitrarily decided that the constant of

proportionality between them must be an integer!

So what's wrong with k*steps + cents? The basic idea here is that the

unit of badness is the cent and we decide for a given odd-limit how

many cents the error would need to be reduced for you to tolerate an

extra generator in the width of your tetrads (or whatever), or how

many generators you'd need to reduce the tetrad (or whatever) width by

in order to tolerate another cent of error.

Or maybe you think that a _doubling_ of the number of generators is

worth a fixed number of cents. i.e. badness = k*log(gens) + cents

But always you must decide a value for one parameter k that gives the

proportionality between gens and cents because there is no

relationship between their two units of measurement apart from the one

that comes through human experience. Or at least I can't see any.
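To see how much hangs on that one parameter, here is a small sketch (the temperament names and numbers are invented, purely for illustration): under badness = k*gens + cents, changing k can reverse the ranking of a simple-but-impure system against a complex-but-pure one.

```python
# Hypothetical (name, gens, cents) triples -- invented for illustration.
temps = [("simple-but-impure", 4, 18.0), ("complex-but-pure", 16, 2.0)]

def badness(gens, cents, k):
    # k is the human-judgment trade-off: how many cents of error one
    # extra generator is worth.
    return k * gens + cents

rank_low_k = [t[0] for t in sorted(temps, key=lambda t: badness(t[1], t[2], 0.5))]
rank_high_k = [t[0] for t in sorted(temps, key=lambda t: badness(t[1], t[2], 3.0))]
print(rank_low_k)   # ['complex-but-pure', 'simple-but-impure']
print(rank_high_k)  # ['simple-but-impure', 'complex-but-pure']
```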

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> Yes, I was just going to say we should write the whole paper first in

> the 5-limit.

There's not much to the 5-limit--it basically is a mere comma search,

and that can be done expeditiously using a decent 5-limit notation.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > Well, I think Gene is saying that step^2 cents is clearly the right

> > measure of "remarkability".

>

> Huh? "Remarkability" sounds like a kind of goodness. Step^2 * cents is

> obviously a form of badness.

Right, but it's the _objective_ kind. Not the kind that has anything

to do with any particular musician's desiderata. It's the only

measure that doesn't favor a certain range of acceptable values for

error or for complexity. It only favors the best examples within each

range. The particular users of our findings can then decide what

range suits them best. Within any narrow range, all reasonable

measures will give the same ranking.

This is kind of like using Tenney complexity to determine the seed

set for harmonic entropy -- with different complexity measures the

overall slope of the curve changes, changing the consonance ranking

of intervals of different sizes, but the consonance ranking of nearby

intervals remains the same regardless of how complexity is defined

(as long as the 2-by-2 matrix formed by the numbers in adjacent seed

fractions always has a determinant of 1).
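The determinant condition mentioned here is the classic property of adjacent fractions in a Farey sequence; a quick numerical check (my own sketch, not from the original discussion):

```python
from fractions import Fraction

def farey(n):
    """All fractions a/b in [0, 1] with denominator b <= n, ascending."""
    return sorted({Fraction(a, b) for b in range(1, n + 1) for a in range(b + 1)})

seq = farey(8)
# For adjacent a/b < c/d in a Farey sequence, b*c - a*d = 1, i.e. the
# 2x2 matrix of their numerators and denominators has determinant 1.
dets = {y.numerator * x.denominator - x.numerator * y.denominator
        for x, y in zip(seq, seq[1:])}
print(dets)  # {1}
```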

> I think I've already explained why no

> product of polynomials of these two things will ever be acceptable to

> me, at least not without cutoffs applied to them first.

> And I

> understand Gene to be saying that he wants at least an upper cutoff

Yes -- I discussed the situation a few messages back. We use an

objective measure, and cut things off in a nice wide gap.

> on "steps" (which seems like a bad name to me since it suggests scale

> steps, I prefer "num_gens" or just "gens").

Yes.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > Yes, I was just going to say we should write the whole paper first in

> > the 5-limit.

>

> There's not much to the 5-limit--it basically is a mere comma search,

> and that can be done expeditiously using a decent 5-limit notation.

A decent 5-limit notation?

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> > --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > > Well, I think Gene is saying that step^2 cents is clearly the right

> > > measure of "remarkability".

> >

> > Huh? "Remarkability" sounds like a kind of goodness. Step^2 * cents is

> > obviously a form of badness.

>

> Right, but it's the _objective_ kind. Not the kind that has anything

> to do with any particular musician's desiderata.

Paul! You seem to have ignored most of the rest of my message.

What the heck is _objective_ about deciding that a doubling of the

number of generators is twice as bad as a doubling of the error? It's

completely arbitrary.

> It's the only

> measure that doesn't favor a certain range of acceptable values for

> error or for complexity. It only favors the best examples within each

> range.

What _objective_ reason is there to choose it over gens^3 * cents or

gens^2.3785 * cents?

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> Paul! You seem to have ignored most of the rest of my message.

Not at all.

> > It's the only

> > measure that doesn't favor a certain range of acceptable values for

> > error or for complexity. It only favors the best examples within each

> > range.

>

> What _objective_ reason is there to choose it over gens^3 * cents or

> gens^2.3785 * cents?

Because those measures give an overall "slope" to the results, in

analogy to what the Farey series seeding does to harmonic entropy.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > The solutions represent?

>

> I take the 5-limit comma defined by the temperament, and then find

> another comma 2^p 3^q 5^r 7 such that the wedgie of this and the 5-

> limit comma is the correct wedgie, that means these two commas

define

> the temperament.

This should be 2^p 3^q 5^r 7^s where s is gcd(a,b,c), and the 5-limit

comma is 2^a 3^b 5^c.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> Yes -- I discussed the situation a few messages back. We use an

> objective measure, and cut things off in a nice wide gap.

You are thinking that gens^2 cents, and Ennealimmal as the shut-off

point, would be a good plan?

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> > > It's the only

> > > measure that doesn't favor a certain range of acceptable values

> for

> > > error or for complexity. It only favors the best examples within

> > each

> > > range.

> >

> > What _objective_ reason is there, to choose it over gens^3 * cents

> or

> > gens^2.3785 * cents?

>

> Because those measures give an overall "slope" to the results, in

> analogy to what the Farey series seeding does to harmonic entropy.

What's objective about that? A certain slope may be _real_, i.e.

humans on average may experience it that way, in which case the "flat"

case will really be favouring one extreme.

I understand what the slope is in the HE case, but what slope are you

talking about re badness of linear temperament? Badness wrt what?

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > There's not much to the 5-limit--it basically is a mere comma search,

> > and that can be done expeditiously using a decent 5-limit notation.

> A decent 5-limit notation?

We could search (16/15)^a (25/24)^b (81/80)^c to start out with, and

go to something more extreme if wanted.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > Yes -- I discussed the situation a few messages back. We use an

> > objective measure, and cut things off in a nice wide gap.

>

> You are thinking that gens^2 cents, and Ennealimmal as the shut-off

> point, would be a good plan?

Possibly, though since gens and cents are two dimensions, we really

need a shut-off _curve_, don't we?

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> I understand what the slope is in the HE case, but what slope are you

> talking about re badness of linear temperament? Badness wrt what?

What is the problem with a "flat" system and a cutoff? It doesn't

commit to any particular theory about what humans are like and what

they should want, and I think that's a good plan.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > > There's not much to the 5-limit--it basically is a mere comma

> > search,

> > > and that can be done expeditiously using a decent 5-limit

> notation.

>

> > A decent 5-limit notation?

>

> We could search (16/15)^a (25/24)^b (81/80)^c to start out with, and

> go to something more extreme if wanted.

More extreme? I'm not getting this.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> > Because those measures give an overall "slope" to the results, in

> > analogy to what the Farey series seeding does to harmonic entropy.

>

> What's objective about that? A certain slope may be _real_, i.e.

> humans on average may experience it that way, in which case the "flat"

> case will really be favouring one extreme.

But I don't feel comfortable deciding that for anyone.

> I understand what the slope is in the HE case, but what slope are you

> talking about re badness of linear temperament? Badness wrt what?

Both step and cent.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> Possibly, though since gens and cents are two dimensions, we really

> need a shut-off _curve_, don't we?

If we bound one of them and gens^2 cents, we've bound the other;

that's what I'd do.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

>

> > I understand what the slope is in the HE case, but what slope are you

> > talking about re badness of linear temperament? Badness wrt what?

>

> What is the problem with a "flat" system and a cutoff?

Dave is trying to understand why this _is_ a flat system.

> It doesn't

> commit to any particular theory about what humans are like and what

> they should want, and I think that's a good plan.

Thank you.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > Possibly, though since gens and cents are two dimensions, we really

> > need a shut-off _curve_, don't we?

>

> If we bound one of them and gens^2 cents, we've bound the other;

> that's what I'd do.

Hmm . . . so if we simply put an upper bound on the RMS cents error,

we'll have a closed search? That doesn't seem right . . .

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

>

> > I understand what the slope is in the HE case, but what slope are you

> > talking about re badness of linear temperament? Badness wrt what?

>

> What is the problem with a "flat" system and a cutoff?

I may be able to answer that when someone explains what is flat with

respect to what.

> It doesn't

> commit to any particular theory about what humans are like and what

> they should want, and I think that's a good plan.

Don't the cutoffs have to be based on a theory about what humans are

like?

If a "flat" system was miles from anything related to what humans are

like, would you still be interested in it?

I don't think you can avoid this choice. You must publish a finite

list. If you include more of certain extremes, you must omit more

of the middle-of-the-road.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > We could search (16/15)^a (25/24)^b (81/80)^c to start out with, and

> > go to something more extreme if wanted.

>

> More extreme? I'm not getting this.

(78732/78125)^a (32805/32768)^b (2109375/2097152)^c also gives the

5-limit, but is better for finding much smaller commas, to take a

more or less random example.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> --- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> > It doesn't

> > commit to any particular theory about what humans are like and what

> > they should want, and I think that's a good plan.

>

> Don't the cutoffs have to be based on a theory about what humans are

> like?

I'm suggesting we place the cutoffs where we find big gaps, and

comfortably outside any system that has been used to date.

>

> If a "flat" system was miles from anything related to what humans are

> like, would you still be interested in it?

Again, any system that is "best" according to a "human" criterion

will show up as "best in its neighborhood" under a flat criterion.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > > We could search (16/15)^a (25/24)^b (81/80)^c to start out with,

> > > and go to something more extreme if wanted.

> >

> > More extreme? I'm not getting this.

>

> (78732/78125)^a (32805/32768)^b (2109375/2097152)^c also gives the

> 5-limit, but is better for finding much smaller commas, to take a

> more or less random example.

Once a, b, and c are big enough, the original choice of commas will

do little to induce any tendency of smallness or largeness in the

result, correct?

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

>

> > > Because those measures give an overall "slope" to the results, in

> > > analogy to what the Farey series seeding does to harmonic entropy.

> >

> > What's objective about that? A certain slope may be _real_, i.e.

> > humans on average may experience it that way, in which case the "flat"

> > case will really be favouring one extreme.

>

> But I don't feel comfortable deciding that for anyone.

But you _are_ deciding it. You can't help but decide it, unless you

intend to publish an infinite list. No matter what you do there will

be someone who thinks there's a lot of fluff in there and you missed

out some others. They aren't going to be impressed by any argument

that "our metric is 'objective' or 'flat'".

> > I understand what the slope is in the HE case, but what slope are

> you

> > talking about re badness of linear temperament? Badness wrt what?

>

> Both step and cent.

Huh? Obviously any badness metric _must_ slope down towards (0,0) on

the (cents,gens) plane. If you make the gens and cents axes

logarithmic then badness = gens^k * cents is simply a tilted plane.

The only way you can decide on whether it should tilt more towards

gens or cents (the exponent k) is through human considerations.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> > If a "flat" system was miles from anything related to what humans are

> > like, would you still be interested in it?

>

> Again, any system that is "best" according to a "human" criterion

> will show up as "best in its neighborhood" under a flat criterion.

But some neighbourhoods may be so disadvantaged that their best

doesn't even make it into the list.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> Hmm . . . so if we simply put an upper bound on the RMS cents error,

> we'll have a closed search? That doesn't seem right . . .

I was suggesting a *lower* bound on RMS cents as one possibility.

If with all quantities positive we have g^2 c < A and c > B, then

1/c < 1/B, and so g^2 < A/B and g < sqrt(A/B). However, it probably

makes more sense to use g>=1, so that if g^2 c <= A then c <= A.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> >

> > > > Because those measures give an overall "slope" to the results, in

> > > > analogy to what the Farey series seeding does to harmonic entropy.

> > >

> > > What's objective about that? A certain slope may be _real_, i.e.

> > > humans on average may experience it that way, in which case the "flat"

> > > case will really be favouring one extreme.

> >

> > But I don't feel comfortable deciding that for anyone.

>

> But you _are_ deciding it. You can't help but decide it, unless you

> intend to publish an infinite list. No matter what you do there will

> be someone who thinks there's a lot of fluff in there and you missed

> out some others. They aren't going to be impressed by any argument

> that "our metric is 'objective' or 'flat'".

We won't be missing out on anyone's "best" (unless they are really

far out on the plane, beyond the big gap where we will establish the

cutoff). Then they can come up with their own criterion and get their

own ranking. But at least we'll have something for everyone.

> > > I understand what the slope is in the HE case, but what slope are you

> > > talking about re badness of linear temperament? Badness wrt what?

> >

> > Both step and cent.

>

> Huh? Obviously any badness metric _must_ slope down towards (0,0) on

> the (cents,gens) plane.

The badness metric does, but the results don't. The results have a

similar distribution everywhere on the plane, but only when gens^2

cents is the badness metric.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> > > If a "flat" system was miles from anything related to what humans are

> > > like, would you still be interested in it?

> >

> > Again, any system that is "best" according to a "human" criterion

> > will show up as "best in its neighborhood" under a flat criterion.

>

> But some neighbourhoods may be so disadvantaged that their best

> doesn't even make it into the list.

That won't happen -- that's the point of the "flat" criterion. Only

the neighborhoods outside our cutoff will be disadvantaged, but at

least this will be explicit.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > Hmm . . . so if we simply put an upper bound on the RMS cents error,

> > we'll have a closed search? That doesn't seem right . . .

>

> I was suggesting a *lower* bound on RMS cents as one possibility.

Oh . . . well I don't think we should frame it _that_ way!

> If with all quantities positive we have g^2 c < A and c > B, then

> 1/c < 1/B, and so g^2 < A/B and g < sqrt(A/B). However, it probably

> makes more sense to use g>=1, so that if g^2 c <= A then c <= A.

Are you saying that using g>=1 is enough to make this a closed search?

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> > Huh? Obviously any badness metric _must_ slope down towards (0,0) on

> > the (cents,gens) plane.

>

> The badness metric does, but the results don't. The results have a

> similar distribution everywhere on the plane, but only when gens^2

> cents is the badness metric.

You're not making any sense. The results are all just discrete points

in the badness surface with respect to gens and cents, so they have

exactly the same slope. The results have a similar distribution of

what? Everywhere on what plane?

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> I may be able to answer that when someone explains what is flat with

> respect to what.

Paul did that. An analogy would be to use n^(4/3) cents when searching

for 7-limit ets; this will give you a list which does not favor

either high or low numbers n, but it has nothing to do with human

perception, and you would use a different exponent in a different

prime limit--n^2 cents in the 3-limit, n^(3/2) cents in the 5-limit,

and so forth.

> > It doesn't

> > commit to any particular theory about what humans are like and what

> > they should want, and I think that's a good plan.

>

> Don't the cutoffs have to be based on a theory about what humans are

> like?

I don't think you can have much of a theory about what a bunch of

cranky individualists might like, but I hope we could cut it off when

the difference could no longer be perceived. Can anyone hear the

difference between Ennealimmal and just?

> If a "flat" system was miles from anything related to what humans are

> like, would you still be interested in it?

I might, most people would not be. I've discovered though that even

the large, "useless" ets have uses.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> > > Huh? Obviously any badness metric _must_ slope down towards (0,0) on

> > > the (cents,gens) plane.

> >

> > The badness metric does, but the results don't. The results have a

> > similar distribution everywhere on the plane, but only when gens^2

> > cents is the badness metric.

>

> You're not making any sense. The results are all just discrete points

> in the badness surface with respect to gens and cents, so they have

> exactly the same slope. The results have a similar distribution of

> what? Everywhere on what plane?

I see Gene is, at this very moment, doing a good job explaining these

issues to you; meanwhile, my brain is toast.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> --- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> > --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> >

> > > > We could search (16/15)^a (25/24)^b (81/80)^c to start out with,

> > > > and go to something more extreme if wanted.

> > >

> > > More extreme? I'm not getting this.

> >

> > (78732/78125)^a (32805/32768)^b (2109375/2097152)^c also gives the

> > 5-limit, but is better for finding much smaller commas, to take a

> > more or less random example.

>

> Once a, b, and c are big enough, the original choice of commas will

> do little to induce any tendency of smallness or largeness in the

> result, correct?

(78732/78125)^53 (32805/32768)^(-84) (2109375/2097152)^65 = 2

I wouldn't search that far myself.
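Gene's identity here can be checked exactly with rational arithmetic; a quick verification sketch:

```python
from fractions import Fraction

c1 = Fraction(78732, 78125)      # 2^2 * 3^9 / 5^7
c2 = Fraction(32805, 32768)      # 3^8 * 5 / 2^15 (the schisma)
c3 = Fraction(2109375, 2097152)  # 3^3 * 5^7 / 2^21

# 53, -84, 65 is the exponent combination quoted above.
print(c1**53 * c2**(-84) * c3**65)  # 2
```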

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > --- In tuning-math@y..., "genewardsmith" <genewardsmith@j...>

> > > (78732/78125)^a (32805/32768)^b (2109375/2097152)^c also gives the

> > > 5-limit, but is better for finding much smaller commas, to take a

> > > more or less random example.

> >

> > Once a, b, and c are big enough, the original choice of commas will

> > do little to induce any tendency of smallness or largeness in the

> > result, correct?

>

> (78732/78125)^53 (32805/32768)^(-84) (2109375/2097152)^65 = 2

>

> I wouldn't search that far myself.

How do you know you wouldn't be missing any good ones?

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > If with all quantities positive we have g^2 c < A and c > B, then

> > 1/c < 1/B, and so g^2 < A/B and g < sqrt(A/B). However, it probably

> > makes more sense to use g>=1, so that if g^2 c <= A then c <= A.

> Are you saying that using g>=1 is enough to make this a closed search?

All it does is put an upper limit on how far out of tune the worst

cases can be, so we really need to bound c below or g above to get a

finite search.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> How do you know you wouldn't be missing any good ones?

You'd need bounds on what counted for good; I'll think about it.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

>

> > I may be able to answer that when someone explains what is flat with

> > respect to what.

>

> Paul did that.

Not in any way that makes any sense to me. I don't think Paul

really understands it either and may be starting to realise that.

I'm starting to wonder if there's a conspiracy here to make me think

I'm going crazy. :-) Is anyone else getting this "gens^2 * cents is

the only 'flat' metric" thing?

> An analogy would be to use n^(4/3) cents when searching

> for 7-limit ets; this will give you a list which does not favor

> either high or low numbers n,

I'm sorry. This makes no sense to me either. _How_ would you use

n^(4/3) cents? Can you prove this to me? Or better still just prove

whatever it is you are trying to say about gens^2 * cents being a

"flat" badness metric for linear temperaments.

> I don't think you can have much of a theory about what a bunch of

> cranky individualists might like, but I hope we could cut it off

when

> the difference could no longer be percieved. Can anyone hear the

> difference between Ennealimmal and just?

Well that is precisely a theory about humans, as opposed to say

grasshoppers or rocks or computers.

If you guys can't explain this to me, I don't think you've got much

chance of getting published in a refereed journal. It doesn't involve

anything beyond high school math.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> > An analogy would be to use n^(4/3) cents when

> seaching

> > for 7-limit ets; this will give you a list which does not favor

> > either high or low numbers n,

> I'm sorry. This makes no sense to me either. _How_ would you use

> n^(4/3) cents? Can you prove this to me?

The argument for n^(4/3) is required in order to get the argument for

gens^2 cents, so this is the place to start. The argument comes from

the theory of simultaneous Diophantine approximation, where it is

shown that there is a constant c, depending on d, such that for any d

irrational numbers x1, x2, ... xd there will be an infinite number of

solutions n to

n^(1+1/d) |xi - pi/n| < c  (simultaneously for all i, with integers pi)

In the case of the 7-limit, we want to simultaneously approximate

log2(3), log2(5) and log2(7), so d=3.
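One way to see the claim numerically (my own sketch; it uses the maximum error over log2(3), log2(5), log2(7), matching the theorem's form, rather than the RMS used elsewhere in the thread): if n^(1+1/d) * max|xi - pi/n| is the badness, good values keep turning up at all magnitudes of n instead of thinning out.

```python
import math

XS = [math.log2(p) for p in (3, 5, 7)]  # the d = 3 irrationals to approximate

def weighted_error(n, d=3):
    # n^(1+1/d) * max_i |x_i - p_i/n|, with p_i the nearest integer to n*x_i.
    return n**(1 + 1/d) * max(abs(x - round(n * x) / n) for x in XS)

vals = {n: weighted_error(n) for n in range(1, 1000)}
low = min(vals[n] for n in range(1, 200))
high = min(vals[n] for n in range(200, 1000))
# "Flat": the best weighted errors in a high block of n are comparable to
# those in a low block, rather than systematically worse.
print(low, high)
```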

> If you guys can't explain this to me, I don't think you've got much

> chance of getting published in a refereed journal. It doesn't involve

> anything beyond high school math.

Explain what? Diophantine approximation, or why to use that

theoretical basis, or what? *What* doesn't involve more than high

school math? The theorem I mentioned isn't hard to prove but it does

use Dirichlet's pigeonhole principle, which is also not hard but

which you probably did not learn in high school and which I would not

propose to discuss in the pages of a music journal, given that I have

reason to think that there is a limit to how much math they would

find acceptable.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

>

> > > An analogy would be to use n^(4/3) cents when searching

> > > for 7-limit ets; this will give you a list which does not favor

> > > either high or low numbers n,

>

> > I'm sorry. This makes no sense to me either. _How_ would you use

> > n^(4/3) cents? Can you prove this to me?

>

> The argument for n^(4/3) is required in order to get the argument for

> gens^2 cents, so this is the place to start. The argument comes from

> the theory of simultaneous Diophantine approximation,

Oh damn. Ok forget about proving it to me. Just please try to get me

to understand what it is you are saying. I just thought that getting

you to prove it to me may be the easiest way for me to understand what

it was I had asked you to prove. Apparently not.

So ... What is n? What is a 7-limit et? How does one use n^(4/3) to

get a list of them? How would one check to see whether the list

favours high or low n?

> > If you guys can't explain this to me, I don't think you've got much

> > chance of getting published in a refereed journal. It doesn't involve

> > anything beyond high school math.

>

> Explain what? Diophantine approximation, or why to use that

> theoretical basis, or what? *What* doesn't involve more than high

> school math?

Your (and Paul's) statements so far about badness metrics and

flatness.

> The theorem I mentioned isn't hard to prove but it does

> use Dirichlet's pigeonhole principle, which is also not hard but

> which you probably did not learn in high school and which I would not

> propose to discuss in the pages of a music journal, given that I have

> reason to think that there is a limit to how much math they would

> find acceptable.

Agreed.

But surely you can get me to understand what you actually mean by

"flat" here. I may well be prepared to just believe the theorem as

stated, if I can understand what it means.

But no matter what you come up with I can't see how you can get past

the fact that gens and cents are fundamentally incommensurable

quantities, so somewhere there has to be a parameter that says how bad

they are relative to each other.

Currently you are saying that doubling gens is twice as bad as

doubling cents. Why? What if 99% of humans don't experience it like

that?

And why should they both be treated logarithmically? k*log(gens) +

log(cents) gives the same ranking as gens^2 * cents when k=2. Why not

use k*gens + cents. e.g. if badness was simply gens + cents and you

listed everything with badness not more than 30 then you don't need

any additional cutoffs. You automatically eliminate anything with

gens > 30 or cents > 30 (actually cents > 29 because gens can't go below

1).
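Dave's ranking claim is easy to check numerically. Here is a small sketch (mine, not from the thread; the (gens, cents) pairs are made-up examples) showing that 2*log(gens) + log(cents) orders systems identically to gens^2 * cents, since one is a monotonic function of the other:

```python
import math

# Two badness formulations; k*log(gens) + log(cents) with k=2 is just
# log(gens^2 * cents), so it must induce the same ranking.
def badness_product(gens, cents):
    return gens ** 2 * cents

def badness_log(gens, cents, k=2):
    return k * math.log(gens) + math.log(cents)

# Hypothetical (gens, cents) pairs, for illustration only.
systems = [(2, 10), (5, 1), (10, 0.5), (19, 12.7), (31, 4.0), (41, 4.2)]

rank_product = sorted(systems, key=lambda s: badness_product(*s))
rank_log = sorted(systems, key=lambda s: badness_log(*s))
assert rank_product == rank_log  # identical orderings
```

Only the value of k couples the two scales; the ranking argument says nothing about which k matches perception, which is the point at issue.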

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> So ... What is n? What is a 7-limit et? How does one use n^(4/3) to

> get a list of them? How would one check to see whether the list

> favours high or low n.

"n" is how many steps to the octave, or in other words what 2 is

mapped to. By a "7-limit et" I mean something which maps 7-limit

intervals to numbers of steps in a consistent way. Since we are

looking for the best, we can safely restrict these to what we get by

rounding n*log2(3), n*log2(5) and n*log2(7) to the nearest integer,

and defining the n-et as the map one gets from this.

Let's call this map "h"; for the 12-et, h(2)=12, h(3)=19, h(5)=28 and

h(7)=34; this entails that h(5/3) = h(5)-h(3) = 9, h(7/3)=15 and

h(7/5)=6. I can now measure the relative badness of "h" by taking the

sum, or maximum, or rms, of the differences of |h(3)-n*log2(3)|,

|h(5)-n*log2(5)|, |h(7)-n*log2(7)|, |h(5/3)-n*log2(5/3)|,

|h(7/3)-n*log2(7/3)| and |h(7/5)-n*log2(7/5)|.
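Gene's construction is concrete enough to sketch in code. The following is my reconstruction from his definitions (the function names are mine); it builds the n-et map h by rounding and measures the error on the six 7-limit consonances, in steps:

```python
import math

def et_map(n):
    """The n-et map h: each prime p up to 7 goes to round(n * log2(p))."""
    return {p: round(n * math.log2(p)) for p in (2, 3, 5, 7)}

def raw_badness(n, kind="max"):
    """Error of the n-et on the 7-limit consonances, measured in steps."""
    h = et_map(n)
    ratios = [(3, 1), (5, 1), (7, 1), (5, 3), (7, 3), (7, 5)]
    # h extends to ratios via h(p/q) = h(p) - h(q), with h(1) = 0.
    errs = [abs(h[p] - (h[q] if q > 1 else 0) - n * math.log2(p / q))
            for p, q in ratios]
    if kind == "max":
        return max(errs)
    if kind == "sum":
        return sum(errs)
    return math.sqrt(sum(e * e for e in errs) / len(errs))  # rms

# For n = 12 this reproduces h(2)=12, h(3)=19, h(5)=28, h(7)=34.
```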

This measure of badness is flat in the sense that the density is the

same everywhere, so that we would be selecting about the same number

of ets in a range around 12 as we would in a range around 1200. I

don't really want this sort of "flatness", so I use the theory of

Diophantine approximation to tell me that if I multiply this badness

by the cube root of n, so that the density falls off at a rate of

n^(-1/3), I will still get an infinite list of ets, but if I make it

fall off faster I probably won't. I can use either the maximum of the

above numbers, or the sum, or the rms, and the same conclusion holds;

in fact, I can look at the 9-limit instead of the 7-limit and the

same conclusion holds. If I look at the maximum, and multiply by 1200

so we are looking at units of n^(4/3) cents, I get the following list

of ets which come out as less than 1000, for n going from 1 to 2000:

1 884.3587134

2 839.4327178

4 647.3739047

5 876.4669184

9 920.6653451

10 955.6795096

12 910.1603254

15 994.0402775

31 580.7780905

41 892.0787789

72 892.7193923

99 716.7738001

171 384.2612749

270 615.9368489

342 968.2768986

441 685.5766666

1578 989.4999106

This list just keeps on going, so I cut it off at 2000. I might look

at it, and decide that it doesn't have some important ets on it, such

as 19, 22 and 27; I decide to put those on, not really caring about

any other range, by raising the ante to 1200; I then get the

following additions:

3 1154.683345

6 1068.957518

19 1087.886603

22 1078.033523

27 1108.589256

68 1090.046322

130 1182.191130

140 1091.565279

202 1143.628876

612 1061.222492

1547 1190.434242

My decision to add 19, 22, and 27 leads me to add 3 and 6 at the low

end, and 68 and so forth at the high end. It tells me that if I'm

interested in 27 in the range around 31, I should also be interested

in 68 in the range around 72, in 140 and 202 around 171, 612 around

441, and 1547 near 1578. That's the sort of "flatness" Paul was

talking about; it doesn't favor one range over another.
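Gene's numbers can be reproduced with a few lines of Python. This is my reconstruction, on the assumption that his figures are 1200 * n^(1/3) * (maximum 7-limit error in steps), i.e. the error in cents multiplied by n^(4/3); it gives about 910.16 for n = 12, matching his list:

```python
import math

def n43_badness(n):
    """Max 7-limit error of the n-et, in Gene's n^(4/3)-cents units."""
    h = {p: round(n * math.log2(p)) for p in (2, 3, 5, 7)}
    ratios = [(3, 1), (5, 1), (7, 1), (5, 3), (7, 3), (7, 5)]
    worst = max(abs(h[p] - (h[q] if q > 1 else 0) - n * math.log2(p / q))
                for p, q in ratios)
    # Error in steps times 1200/n is cents; multiplying by n^(4/3)
    # leaves 1200 * n^(1/3) * (error in steps).
    return 1200 * n ** (1 / 3) * worst

# Everything under the 1000 cutoff, for n from 1 to 2000:
shortlist = [n for n in range(1, 2001) if n43_badness(n) < 1000]
```

If this reconstruction is right, raising the cutoff to 1200 should admit 3, 6, 19, 22, 27 and the rest of the second list.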

> But no matter what you come up with I can't see how you can get past

> the fact that gens and cents are fundamentally incommensurable

> quantities, so somewhere there has to be a parameter that says how bad

> they are relative to each other.

"n" and cents are incommensurable also, and n^(4/3) is only right for

the 7 and 9 limits, and wrong for everything else, so I don't think

this is the issue if we adopt this point of view.

> Why not use k*gens + cents. e.g. if badness was simply gens + cents and you

> listed everything with badness not more than 30 then you don't need

> any additional cutoffs. You automatically eliminate anything with

> gens > 30 or cents > 30 (actually cents > 29 because gens can't go below

> 1).

Gens^3 cents also automatically cuts things off, but I rather like

the idea of keeping it "flat" in the above sense and doing the

cutting off deliberately, it seems more objective.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > > If with all quantities positive we have g^2 c < A and c > B, then

> > > 1/c < 1/B, and so g^2 < A/B and g < sqrt(A/B). However, it probably

> > > makes more sense to use g>=1, so that if g^2 c <= A then c <= A.

>

> > Are you saying that using g>=1 is enough to make this a closed

> search?

>

> All it does is put an upper limit on how far out of tune the worst

> cases can be, so we really need to bound c below or g above to get a

> finite search.

So do you still stand by this statement:

"If we bound one of them and gens^2 cents, we've bound the other;

that's what I'd do."

(which you wrote after I said that a single cutoff point wouldn't be

enough, that we would need a cutoff curve)?

Thanks Gene, for taking the time to explain this in a way that a

mere computer scientist can understand. :-)

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

>

> > So ... What is n? What is a 7-limit et? How does one use n^(4/3)

to

> > get a list of them? How would one check to see whether the list

> > favours high or low n.

>

> "n" is how many steps to the octave, or in other words what 2 is

> mapped to. By a "7-limit et" I mean something which maps 7-limit

> intervals to numbers of steps in a consistent way. Since we are

> looking for the best, we can safely restrict these to what we get by

> rounding n*log2(3), n*log2(5) and n*log2(7) to the nearest integer,

> and defining the n-et as the map one gets from this.

OK so far.

> Let's call this map "h"; for the 12-et, h(2)=12, h(3)=19, h(5)=28 and

> h(7)=34; this entails that h(5/3) = h(5)-h(3) = 9, h(7/3)=15 and

> h(7/5)=6.

Fine.

> I can now measure the relative badness of "h" by taking the

> sum, or maximum, or rms, of the differences of |h(3)-n*log2(3)|,

> |h(5)-n*log2(5)|, |h(7)-n*log2(7)|, |h(5/3)-n*log2(5/3)|,

> |h(7/3)-n*log2(7/3)| and |h(7/5)-n*log2(7/5)|.

I'd say this is just one component of badness. It's the error expressed

as a proportion of the step size. The number of steps in the octave n

has an effect on badness independent of the relative error.

> This measure of badness is flat in the sense that the density is the

> same everywhere, so that we would be selecting about the same number

> of ets in a range around 12 as we would in a range around 1200.

Yes. I believe this. See the two charts near the end of

http://dkeenan.com/Music/EqualTemperedMusicalScales.htm

although it uses a weighted error that only includes the primes

(only the "rooted" intervals) that I now find dubious.

> I don't really want this sort of "flatness",

Hardly anyone would. Not without some additional penalty for large n,

even if it's just a crude sudden cutoff. But _why_ don't you want this

sort of flatness? Did you reject it on "objective" grounds? Is there

some other sort of flatness that you _do_ want? If so, what is it? How

many sorts of flatness are there and how did you choose between them?

> so I use the theory of

> Diophantine approximation to tell me that if I multiply this badness

> by the cube root of n, so that the density falls off at a rate of

> n^(-1/3), I will still get an infinite list of ets, but if I make it

> fall off faster I probably won't.

Here's where the real leap-of-faith occurs.

First of all, I take it that when you say you will (or won't) "get an

infinite list of ets", you mean "when the list is limited to ETs whose

badness does not exceed a given badness limit, greater than zero".

There are an infinite number of ways of defining badness to achieve a

finite list with a cutoff only on badness itself. Most of these will

produce a finite list that is of absolutely no interest to 99.99%

of the population (of people who are interested in the topic at all).

Why do you immediately leap to the theory of Diophantine approximation

as giving the best way to achieve a finite list?

I think a good way to achieve it is simply to add an amount k*n to the

error in cents (absolute, not relative to step size). I suggest

initially trying a k of about 0.5 cents per step.
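Dave's proposal can be sketched the same way (my illustration; function names are mine, the rms error is computed from the pure-octave n-et as Gene defined it, and k = 0.5 cents per step is the value suggested above):

```python
import math

def rms_error_cents(n):
    """7-limit rms error of the n-et, in cents."""
    h = {p: round(n * math.log2(p)) for p in (2, 3, 5, 7)}
    ratios = [(3, 1), (5, 1), (7, 1), (5, 3), (7, 3), (7, 5)]
    errs = [(h[p] - (h[q] if q > 1 else 0) - n * math.log2(p / q)) * 1200 / n
            for p, q in ratios]
    return math.sqrt(sum(e * e for e in errs) / len(errs))

def keenan_badness(n, k=0.5):
    # Additive penalty of k cents per step: a cutoff on badness alone
    # now bounds n (badness >= k*n), so the surviving list is finite.
    return rms_error_cents(n) + k * n

# A single cutoff suffices: badness < 40 forces n < 80.
finite_list = [n for n in range(1, 2001) if keenan_badness(n) < 40]
```

The additive form trades the "flatness" property for a built-in size penalty, which is exactly the trade being argued over here.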

The only way to tell if this is better than something based on the

theory of Diophantine equations is to suck it and see. Some of us have

been on the tuning lists long enough to know what a lot of other

people find useful or interesting, even though we don't necessarily

find them so ourselves.

> I can use either the maximum of the

> above numbers, or the sum, or the rms, and the same conclusion holds;

> in fact, I can look at the 9-limit instead of the 7-limit and the

> same conclusion holds. If I look at the maximum, and multiply by 1200

> so we are looking at units of n^(4/3) cents, I get the following list

> of ets which come out as less than 1000, for n going from 1 to 2000:

>

> 1 884.3587134

> 2 839.4327178

> 4 647.3739047

> 5 876.4669184

> 9 920.6653451

> 10 955.6795096

> 12 910.1603254

> 15 994.0402775

> 31 580.7780905

> 41 892.0787789

> 72 892.7193923

> 99 716.7738001

> 171 384.2612749

> 270 615.9368489

> 342 968.2768986

> 441 685.5766666

> 1578 989.4999106

>

> This list just keeps on going, so I cut it off at 2000. I might look

> at it, and decide that it doesn't have some important ets on it, such

> as 19, 22 and 27; I decide to put those on, not really caring about

> any other range, by raising the ante to 1200; I then get the

> following additions:

>

> 3 1154.683345

> 6 1068.957518

> 19 1087.886603

> 22 1078.033523

> 27 1108.589256

> 68 1090.046322

> 130 1182.191130

> 140 1091.565279

> 202 1143.628876

> 612 1061.222492

> 1547 1190.434242

>

> My decision to add 19,22, and 27 leads me to add 3 and 6 at the low

> end, and 68 and so forth at the high end. It tells me that if I'm

> interested in 27 in the range around 31, I should also be interested

> in 68 in the range around 72, in 140 and 202 around 171, 612 around

> 441, and 1547 near 1578. That's the sort of "flatness" Paul was

> talking about; it doesn't favor one range over another.

But this is nonsense. It simply isn't true that 3, 6, 612 and 1547 are

of approximately equal interest to 19, 22 and 27. Sure you'll always

be able to find one person who'll say they are. But ask anyone who has

actually used 19-tET or 22-tET when they plan to try 3-tET or

1547-tET. It's just a joke. I suspect you've been seduced by the

beauty of the math and forgotten your actual purpose. This metric

clearly favours both very small and very large n over middle ones.

> > But no matter what you come up with I can't see how you can get past

> > the fact that gens and cents are fundamentally incommensurable

> > quantities, so somewhere there has to be a parameter that says how bad

> > they are relative to each other.

>

> "n" and cents are incommensurable also,

Yes.

> and n^(4/3) is only right for

> the 7 and 9 limits, and wrong for everything else, so I don't think

> this is the issue if we adopt this point of view.

>

> > Why not use k*gens + cents. e.g. if badness was simply gens + cents

> > and you listed everything with badness not more than 30 then you don't

> > need any additional cutoffs. You automatically eliminate anything with

> > gens > 30 or cents > 30 (actually cents > 29 because gens can't go

> > below 1).

>

> Gens^3 cents also automatically cuts things off, but I rather like

> the idea of keeping it "flat" in the above sense and doing the

> cutting off deliberately, it seems more objective.

_Seems_ more objective? You mean that subjectively, to you, it seems

more objective?

Well I'm afraid that it seems to me that this quest for an "objective"

badness metric (with ad hoc cutoffs) is the silliest thing I've heard

in quite a while.

If you're combining two or more incommensurable quantities into a

single badness metric, the choice of the constant of proportionality

between them (and the choice of whether this constant should relate

the plain quantities or their logarithms or whatever) should be

decided so that as many people as possible agree that it actually

gives something like what they perceive as badness, even if it's only

roughly so.

An isobad that passes near 3, 6, 19, 22, 612 and 1547 isn't one. The

fact that it's based on the theory of Diophantine equations is utterly

irrelevant.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> But this is nonsense. It simply isn't true that 3, 6, 612 and 1547 are

> of approximately equal interest to 19, 22 and 27. Sure you'll always

> be able to find one person who'll say they are. But ask anyone who has

> actually used 19-tET or 22-tET when they plan to try 3-tET or

> 1547-tET. It's just a joke.

For the third or fourth time Dave, this isn't intended to appeal to

any one person, but rather to the widest possible audience. Since

this is a "flat" measure, it will rank the systems in the _vicinity_

of *your* #1 system, the same way you would, whoever *you* happen to

be. But it makes absolutely no preference for one end of the spectrum

over another, or the middle. That's what makes it flat

and "objective". Look at Gene's list for 7-limit ETs again. Can it be

denied that 31-tET is by far the best _in its vicinity_, and 171-tET

is by far the best _in its vicinity_?

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > So do you still stand by this statement:

> >

> > "If we bound one of them and gens^2 cents, we've bound the other;

> > that's what I'd do."

> >

> > (which you wrote after I said that a single cutoff point wouldn't be

> > enough, that we would need a cutoff curve)?

>

> Sure. I think bounding g makes the most sense, since we can calculate

> it more easily. I've been thinking about how one might calculate

> cents without going through the map stage, but for gens we can get it

> immediately from the wedgie with no trouble.

I don't immediately know what "the map stage" means, but I've been

thinking that, in regard to "standardizing the wedge product", we

might want to use something that has the Tenney lattice built in.

> We could then toss

> anything with too high a gens figure before even calculating anything

> else, which should help.

So I'm not getting where g>=1 comes into all this.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> So I'm not getting where g>=1 comes into all this.

What I wrote was confused, but you've already replied, I see. Bounding

g from below is easy, since it bounds itself. Bounding it from above

could mean just setting a bound, or bounding g^2 c; I think just

setting an upper bound to it makes a lot of sense.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

>

> > But this is nonsense. It simply isn't true that 3, 6, 612 and 1547 are

> > of approximately equal interest to 19, 22 and 27. Sure you'll always

> > be able to find one person who'll say they are. But ask anyone who has

> > actually used 19-tET or 22-tET when they plan to try 3-tET or

> > 1547-tET. It's just a joke.

>

> For the third or fourth time Dave, this isn't intended to appeal to

> any one person, but rather to the widest possible audience.

But that's exactly my intention too. I'm trying to help you find a

metric that will appeal, not to me, but to all those people whose

divergent views I've read on the tuning list over the years. I'm

simply claiming that your metric is seriously flawed in achieving your

intended goal. Practically _nobody_ thinks 3, 6, 612 and 1547 are equally as

good or bad or interesting as 19 or 22. If you include fluff like that

then there will be less room for ETs of interest to actual humans.

> Since

> this is a "flat" measure, it will rank the systems in the _vicinity_

> of *your* #1 system, the same way you would, whoever *you* happen to

> be. But it makes absolutely no preference for one end of the spectrum

> over another, or the middle. That's what makes it flat

> and "objective".

You seem to be arguing in circles.

> Look at Gene's list for 7-limit ETs again. Can it be

> denied that 31-tET is by far the best _in its vicinity_, and 171-tET

> is by far the best _in its vicinity_?

Of course I don't deny that. I claim that it is irrelevant. _Any_ old

half-baked way of monotonically combining steps and cents into a

badness metric will be the same as any other, _locally_. You said the

same yourself in regard to your HE curves. Maybe you need more sleep.

:-)

Since when does merely local behaviour determine if something is

_flat_ or not?

In any case, I don't think you understand Gene's particular kind of

flatness, you certainly weren't able to explain it to me, as Gene has

now done. This particular kind of "flatness" is just one of many.

There's nothing objective about a decision to favour it, and then to

introduce additional ad hoc cutoffs besides the one for badness.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > So I'm not getting where g>=1 comes into all this.

>

> What I wrote was confused, but you've already replied, I see. Bounding

> g from below is easy, since it bounds itself. Bounding it from above

> could mean just setting a bound, or bounding g^2 c; I think just

> setting an upper bound to it makes a lot of sense.

Yes -- g could play the role that N plays in your ET lists. One would

order the results by g, give the g^2 c score for each (or not), and

give about a page of nice musician-friendly information on each.

Gene, there are a lot of outstanding questions and comments . . . I

wanted to know if there would have been a lot more "slippery" ones

had you included simpler unison vectors in your source list . . . I

want to use a Tenney-distance weighted "gens" measure . . . but for

now, a master list would be great. Can someone produce such a list,

with columns for "cents" and "gens" at least as currently defined?

I'd like to try to find omissions.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> I'd say this is just one component of badness. It's the error expressed

> as a proportion of the step size. The number of steps in the octave n

> has an effect on badness independent of the relative error.

Then you should be happier with an extra cube root of n adjustment.

> Hardly anyone would. Not without some additional penalty for large n,

> even if it's just a crude sudden cutoff. But _why_ don't you want this

> sort of flatness?

Because my interest isn't independent of size--you need more at

higher levels to make me care.

> Did you reject it on "objective" grounds? Is there

> some other sort of flatness that you _do_ want? If so, what is it? How

> many sorts of flatness are there and how did you choose between them?

You could use the Riemann Zeta function and the omega estimates based

on the assumption of the Riemann hypothesis and do it that way, if

you liked. Or there are no doubt other ways; this one seems the

simplest and it gets the job done, and the alternatives would have a

certain family resemblance.

> Why do you immediately leap to the theory of Diophantine approximation

> as giving the best way to achieve a finite list?

It gives me a measure which is connected to the nature of the

problem, which is a Diophantine approximation problem, which seems to

make a lot of sense both in practice and theory to me, if not to you.

> I think a good way to achieve it is simply to add an amount k*n to the

> error in cents (absolute, not relative to step size). I suggest

> initially trying a k of about 0.5 cents per step.

Should I muck around in the dark until I make this measure behave in

a way something like the measure I already have behaves, which would

be both pointless and inelegant, or is there something about it to

recommend it?

> The only way to tell if this is better than something based on the

> theory of Diophantine equations is to suck it and see.

Better how? The measure I already have does exactly what I'd want a

measure to do.

> Some of us have

> been on the tuning lists long enough to know what a lot of other

> people find useful or interesting, even though we don't necessarily

> find them so ourselves.

One of the advantages of the measure I'm using is that it accommodates

this well.

> But this is nonsense. It simply isn't true that 3, 6, 612 and 1547 are

> of approximately equal interest to 19, 22 and 27.

I'm not trying to measure your interest, I'm only saying if you want

to look at a certain range, look at these.

> Sure you'll always

> be able to find one person who'll say they are. But ask anyone who has

> actually used 19-tET or 22-tET when they plan to try 3-tET or

> 1547-tET. It's just a joke.

The 4-et is actually interesting in connection with the 7-limit, as

the 3-et is with the 5-limit, and the large ets have uses other than

tuning up a set of marimbas as well.

> I suspect you've been seduced by the

> beauty of the math and forgotten your actual purpose. This metric

> clearly favours both very small and very large n over middle ones.

In other words, the range *you* happen to care about is the only

interesting range; it's that which I was regarding as not objective.

> An isobad that passes near 3, 6, 19, 22, 612 and 1547, isn't one.

An isobad which passes near 3, 6, 19, 22, 612 and 1547 makes a lot of

sense to me, so I think I would probably *not* like your alternative

as well.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> >

> > > But this is nonsense. It simply isn't true that 3, 6, 612 and 1547

> > > are of approximately equal interest to 19, 22 and 27. Sure you'll

> > > always be able to find one person who'll say they are. But ask anyone

> > > who has actually used 19-tET or 22-tET when they plan to try 3-tET or

> > > 1547-tET. It's just a joke.

> >

> > For the third or fourth time Dave, this isn't intended to appeal to

> > any one person, but rather to the widest possible audience.

>

> But that's exactly my intention too. I'm trying to help you find a

> metric that will appeal, not to me, but to all those people whose

> divergent views I've read on the tuning list over the years. I'm

> simply claiming that your metric is seriously flawed in achieving your

> intended goal. Practically _nobody_ thinks 3, 6, 612 and 1547 are

> equally as good or bad or interesting as 19 or 22. If you include

> fluff like that then there will be less room for ETs of interest to

> actual humans.

Dave, if you don't have a cutoff, you'd have an infinite number of

ETs better than 1547. Of course there has to be a cutoff.

>

> > Look at Gene's list for 7-limit ETs again. Can it be

> > denied that 31-tET is by far the best _in its vicinity_, and 171-tET

> > is by far the best _in its vicinity_?

>

> Of course I don't deny that. I claim that it is irrelevant. _Any_ old

> half-baked way of monotonically combining steps and cents into a

> badness metric will be the same as any other, _locally_. You said the

> same yourself in regard to your HE curves. Maybe you need more sleep.

> :-)

I mean that only Gene's measure tells you exactly _how much_ better a

system is than the systems in their vicinity, _in units of_ the

average differences between different systems in their vicinity.

> Since when does merely local behaviour determine if something is

> _flat_ or not?

It doesn't.

> In any case, I don't think you understand Gene's particular kind of

> flatness, you certainly weren't able to explain it to me, as Gene has

> now done. This particular kind of "flatness" is just one of many.

I'd like to see a list of ETs, as far as you'd like to take it, above

some cutoff different from Gene's, that shows this kind of behavior

(not just the flatness of the measure itself, but also the flatness

of the size of the wiggles).

Gene wrote:

> Sure. I think bounding g makes the most sense, since we can calculate

> it more easily. I've been thinking about how one might calculate

> cents without going through the map stage, but for gens we can get it

> immediately from the wedgie with no trouble. We could then toss

> anything with too high a gens figure before even calculating anything

> else, which should help.

My program throws out bad temperaments before doing the optimization, if

that's what you're suggesting. It's one of the changes I made this, er,

yesterday morning. It does make a difference, but not much now that my

optimization's faster. Big chunks of time are being spent generating the

ETs and formatting the results currently.

Graham

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> > I'd say this is just one component of badness. Its the error

> expressed

> > as a proportion of the step size. The number of steps in the

> octave n

> > has an effect on badness independent of the relative error.

>

> Then you should be happier with an extra cube root of n adjustment.

Yes I am. But still a long way from as happy as I think most people would be

with something not based on k*log(gens) + log(cents) but instead on

k*gens + cents (or maybe something else).

> > But _why_ don't you want this

> > sort of flatness?

>

> Because my interest isn't independent of size--you need more at

> higher levels to make me care.

Indeed.

> > Did you reject it on "objective" grounds? Is there

> > some other sort of flatness that you _do_ want? If so, what is it? How

> > many sorts of flatness are there and how did you choose between them?

>

> You could use the Riemann Zeta function and the omega estimates based

> on the assumption of the Riemann hypothesis and do it that way, if

> you liked. Or there are no doubt other ways; this one seems the

> simplest and it gets the job done, and the alternatives would have a

> certain family resemblance.

But there's nothing "objective" about these decisions. You're just

finding stuff so it matches what you think everyone likes. Right?

> > Why do you immediately leap to the theory of Diophantine approximation

> > as giving the best way to achieve a finite list?

>

> It gives me a measure which is connected to the nature of the

> problem, which is a Diophantine approximation problem, which seems to

> make a lot of sense both in practice and theory to me, if not to you.

There are probably many such things "connected to the nature of the

problem" which give entirely different results.

> > I think a good way to achieve it is simply to add an amount k*n to the

> > error in cents (absolute, not relative to step size). I suggest

> > initially trying a k of about 0.5 cents per step.

>

> Should I muck around in the dark until I make this measure behave in

> a way something like the measure I already have behaves, which would

> be both pointless and inelegant, or is there something about it to

> recommend it?

Yes. The fact that I've been reading the tuning list and thinking

about and discussing these things with others for many years. So it's

hardly groping in the dark. I'm not saying this particular one I

pulled out of the air is the one most representative of all views, but

I do know that we can do a lot better than your current proposal.

> > The only way to tell if this is better than something based on the

> > theory of Diophantine equations is to suck it and see.

>

> Better how? The measure I already have does exactly what I'd want a

> measure to do.

Answered below.

> > Some of us have

> > been on the tuning lists long enough to know what a lot of other

> > people find useful or interesting, even though we don't necessarily

> > find them so ourselves.

>

> One of the advantages of the measure I'm using is that it accommodates

> this well.

How do you know that?

> > But this is nonsense. It simply isn't true that 3, 6, 612 and 1547

> > are of approximately equal interest to 19, 22 and 27.

>

> I'm not trying to measure your interest,

I keep saying that I'm trying to consider as wide a set of interests

as possible. You and Paul keep accusing me of only trying to serve my

own interests. I accept that you're trying to consider as wide a set

of interests as possible, I just claim that you're failing.

> I'm only saying if you want

> to look at a certain range, look at these.

Yes, but some _ranges_ are more interesting than others and so if you

include an equal number in every range then you won't be including

enough in the most interesting ranges. It isn't just _my_ prejudice

that there are more ETs of interest in the vicinity of 26-tET than

there are in the vicinity of 3-tET or 1550-tET. It's practically

everyone's.

> > Sure you'll always

> > be able to find one person who'll say they are. But ask anyone who has

> > actually used 19-tET or 22-tET when they plan to try 3-tET or

> > 1547-tET. It's just a joke.

>

> The 4-et is actually interesting in connection with the 7-limit, as

> the 3-et is with the 5-limit, and the large ets have uses other than

> tuning up a set of marimbas as well.

Those are good points, which maybe says that my metric is too harsh on

the extremes, but I still say yours is way too soft. There's got to be

something pretty damn exceptional about an ET greater than 100 for it

to be of interest. But note that our badness metric is only based on

steps and cents (or gens and cents for temperaments) so we can't claim

that our metric should include some exceptional high ET if its

exceptional property has nothing to do with the magnitude of the

number of steps or the cents error.

> > I suspect you've been seduced by the

> > beauty of the math and forgotten your actual purpose. This metric

> > clearly favours both very small and very large n over middle ones.

>

> In other words, the range *you* happen to care about is the only

> interesting range; it's that which I was regarding as not objective.

There you go again. Accusing me of only trying to serve my own

interests.

> > An isobad that passes near 3, 6, 19, 22, 612 and 1547, isn't one.

>

> An isobad which passes near 3, 6, 19, 22, 612 and 1547 makes a lot of

> sense to me, so I think I would probably *not* like your alternative

> as well.

Whether you or I would like it, isn't the point. The only way this

could be settled is by some kind of experiment or survey, say on the

tuning list.

We could put together two lists of ETs of roughly equal "badness".

One using your metric, one using mine. They should contain the same

number of ETs (you've already given a suitable list of 11). They

should have as many ETs as possible in common. We would tell people

the 7-limit rms error of each and the number of steps per octave in

each, but nothing more. Then we'd ask them to choose which list was a

better example of a list of ETs of approximately equal 7-limit

goodness, badness, usefulness, interestingness or whatever you want to

call it, based only on considerations of the number of steps and the

error.

We could even ask them to rate each list on a scale of 1 to 10

according to how well they think each list manages to capture equal

7-limit interestingness or whatever, based only on considerations of

the number of steps and the error.

Here they are:

ET List 1

Steps per   7-limit RMS
octave      error (cents)
-------------------------
    3       176.9
    6        66.9
   19        12.7
   22         8.6
   27         7.9
   68         2.4
  130         1.1
  140         1.0
  202         0.61
  612         0.15
 1547         0.040

ET list 2

Steps per   7-limit RMS
octave      error (cents)
-------------------------
   15        18.5
   19        12.7
   22         8.6
   24        15.1
   26        10.4
   27         7.9
   31         4.0
   35         9.9
   36         8.6
   37         7.6
   41         4.2

Do we really need to do the experiment? Paul?
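For reference, figures like the RMS errors in the lists above can be recomputed. The sketch below assumes the error measure is the RMS of the deviations of the six 7-limit consonances (3/2, 5/4, 6/5, 7/4, 7/5, 7/6) under the patent val with pure octaves; the thread doesn't pin down the exact convention (which intervals are included, whether the octave is optimized), so this is only one plausible reading:

```python
from math import log2, sqrt

# 7-limit consonances as exponent vectors over the primes 2, 3, 5, 7.
# Which intervals enter the RMS is an assumption; the thread doesn't say.
CONSONANCES = {
    "3/2": (-1, 1, 0, 0),
    "5/4": (-2, 0, 1, 0),
    "6/5": (1, 1, -1, 0),
    "7/4": (-2, 0, 0, 1),
    "7/5": (0, 0, -1, 1),
    "7/6": (-1, -1, 0, 1),
}
PRIMES = (2, 3, 5, 7)

def rms_error(n):
    """RMS error (cents) of n-tET on the 7-limit consonances, using the
    patent val (each prime rounded to its nearest step count) and pure
    octaves."""
    val = [round(n * log2(p)) for p in PRIMES]   # steps to each prime
    step = 1200.0 / n                            # cents per step
    errs = []
    for monzo in CONSONANCES.values():
        just = sum(e * 1200 * log2(p) for e, p in zip(monzo, PRIMES))
        tempered = step * sum(e * v for e, v in zip(monzo, val))
        errs.append(tempered - just)
    return sqrt(sum(e * e for e in errs) / len(errs))
```

With these assumptions rms_error(19) comes out at about 12.7 cents, agreeing with the table; some other entries (e.g. 22-tET) come out a little higher than listed, so the figures in the thread presumably use a slightly different convention such as an optimized octave.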

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> Dave, if you don't have a cutoff, you'd have an infinite number of

> ETs better than 1547. Of course there has to be a cutoff.

Yes. This just shows that this isn't a very good badness metric.

A decent badness metric would not need a cutoff in anything but

badness in order to arrive at a finite list.

> I mean that only Gene's measure tells you exactly _how much_ better a
> system is than the systems in their vicinity,

How do you know it does that? "Exactly"?

> _in units of_ the

> average differences between different systems in their vicinity.

I don't understand that bit. Can you explain?

> I'd like to see a list of ETs, as far as you'd like to take it, above
> some cutoff different from Gene's, that shows this kind of behavior

> (not just the flatness of the measure itself, but also the flatness

> of the size of the wiggles).

But why ever do you think the size of the wiggles should be flat? I

think it is quite expected that the size of the wiggles in badness

around 1-tET to 9-tET are _much_ bigger than the wiggles around 60-tET

to 69-tET. Apparently you agree that the wiggles around 100000-tET are

completely irrelevant, since you're happy to have a cutoff in

steps, somewhere below that.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> ET list 2
>
> Steps per   7-limit RMS
> octave      error (cents)
> -------------------------
>    15        18.5
>    19        12.7
>    22         8.6
>    24        15.1
>    26        10.4
>    27         7.9
>    31         4.0
>    35         9.9
>    36         8.6
>    37         7.6
>    41         4.2

If you're going to do this, let's at least do it right and use the

right list:

   1   884.3587134
   2   839.4327178
   4   647.3739047
   5   876.4669184
   9   920.6653451
  10   955.6795096
  12   910.1603254
  15   994.0402775
  31   580.7780905
  41   892.0787789
  72   892.7193923
  99   716.7738001
 171   384.2612749
 270   615.9368489
 342   968.2768986
 441   685.5766666
1578   989.4999106

The first point to note is that the two lists are clearly not

intended to do the same thing. The second is that while you object to

this characterization, your list seems to want to do our thinking for

us more than mine; you've decided the important place to look is

around 27. The third thing to notice is that if you want to look at a

limited range, you always can. Suppose I look from 10 to 50 and see

what the top 11 are, using my measure:

10   .796
12   .758
15   .828
16  1.113
19   .906
22   .898
26  1.122
27   .924
31   .484
41   .743
46  1.181

I'm afraid I like this list better than yours, but your mileage may
vary.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

>

> > ET list 2
> >
> > Steps per   7-limit RMS
> > octave      error (cents)
> > -------------------------
> >    15        18.5
> >    19        12.7
> >    22         8.6
> >    24        15.1
> >    26        10.4
> >    27         7.9
> >    31         4.0
> >    35         9.9
> >    36         8.6
> >    37         7.6
> >    41         4.2

>

> If you're going to do this, let's at least do it right and use the

> right list:

>

>    1   884.3587134
>    2   839.4327178
>    4   647.3739047
>    5   876.4669184
>    9   920.6653451
>   10   955.6795096
>   12   910.1603254
>   15   994.0402775
>   31   580.7780905
>   41   892.0787789
>   72   892.7193923
>   99   716.7738001
>  171   384.2612749
>  270   615.9368489
>  342   968.2768986
>  441   685.5766666
> 1578   989.4999106

But this doesn't look like an approximate isobad. It looks like a list

of ETs less than a certain badness. i.e. it's a top 17. Right?

We can do it that way if you like. So I'll have to give my top 17. I

wasn't proposing that we give the badness measure (since it was meant

to be an isobad). But I guess we could if it's a top 17. However I

don't want people distracted by 9 significant digits of badness.

Couldn't we normalise to a 10-point scale and only give whole numbers?

And you need to supply the RMS error.

> The first point to note is that the two lists are clearly not

> intended to do the same thing.

Mine is intended to pack the maximum number of ETs likely to be of

interest to musicians, composers, music theorists etc. who are

interested in 7-limit music, into a list of a given size. Maybe you

need to explain what yours is intended to do.

> The second is that while you object to this characterization, your
> list seems to want to do our thinking for us more than mine; you've
> decided the important place to look is around 27.

Not at all. It just comes out that way. I simply decided that an extra

note per octave was worth about the same badness as an increase of 0.5

cent in the RMS error. This comes thru experience and tuning list

discussions.

> The third thing to notice is that if you want to look at a
> limited range, you always can. Suppose I look from 10 to 50 and see
> what the top 11 are, using my measure:

>

> 10   .796
> 12   .758
> 15   .828
> 16  1.113
> 19   .906
> 22   .898
> 26  1.122
> 27   .924
> 31   .484
> 41   .743
> 46  1.181

Sure. I can do that too.

> I'm afraid I like this list better than yours, but your mileage may
> vary.

I might like it better than mine too. Mine's still got problems. But

you had to arbitrarily limit it to 10<n<50 to get this list. This is

clearly doing our thinking for us.

I thought we were talking about a single published list, not a piece
of software that lets you enter your favourite limits.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> But this doesn't look like an approximate isobad. It looks like a list
> of ETs less than a certain badness. i.e. it's a top 17. Right?

Right, but your list looked like a top 11 in a certain range also.

>

> We can do it that way if you like. So I'll have to give my top 17. I
> wasn't proposing that we give the badness measure (since it was meant
> to be an isobad).

The things on your list didn't make sense to me as an isobad, and I

didn't know that was what it was supposed to be. Trying a top n and

comparing makes more sense to me, but I need to pick a range.

> Mine is intended to pack the maximum number of ETs likely to be of

> interest to musicians, composers, music theorists etc. who are

> interested in 7-limit music, into a list of a given size.

It needs work.

> Maybe you need to explain what yours is intended to do.

Mine is intended to show what the relatively best 7-limit ets are, in

a measurement which has the logarithmic flatness I describe in

another posting.

> I might like it better than mine too. Mine's still got problems. But
> you had to arbitrarily limit it to 10<n<50 to get this list. This is
> clearly doing our thinking for us.

And I can reduce that problem to essentially nil, by putting in a

high cut-off and leaving it at that. You are stuck with it as an

intrinsic feature.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

>

> > But this doesn't look like an approximate isobad. It looks like a

> list

> > of ETs less than a certain badness. i.e. it's a top 17. Right?

>

> Right, but your list looked like a top 11 in a certain range also.

It happens to also be the top 11 by the 0.5*steps + cents metric, but

not limited to any range.
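Now that the formula has been stated, the metric behind that top 11 can be written down directly. A minimal sketch, taking the step counts and RMS errors from the "ET list 2" table earlier in the thread (the 0.5*steps + cents figure of demerit is exactly as stated above):

```python
# Dave's figure of demerit: each extra note per octave costs the same
# as 0.5 cent of 7-limit RMS error.  Steps and errors are the values
# from the "ET list 2" table earlier in the thread.
rms_error = {
    15: 18.5, 19: 12.7, 22: 8.6, 24: 15.1, 26: 10.4, 27: 7.9,
    31: 4.0, 35: 9.9, 36: 8.6, 37: 7.6, 41: 4.2,
}

def badness(n):
    """Badness of n-tET: 0.5 * steps + 7-limit RMS error in cents."""
    return 0.5 * n + rms_error[n]

# Rank the candidates from best (lowest badness) to worst.
ranked = sorted(rms_error, key=badness)
```

Under this metric 31-tET (0.5 * 31 + 4.0 = 19.5) narrowly edges out 22-tET (19.6) for the top spot.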

> > We can do it that way if you like. So I'll have to give my top 17. I
> > wasn't proposing that we give the badness measure (since it was meant
> > to be an isobad).

>

> The things on your list didn't make sense to me as an isobad,

Obviously they wouldn't, given what your isobad looked like.

> and I

> didn't know that was what it was supposed to be.

I thought I made that pretty clear.

> Trying a top n and

> comparing makes more sense to me,

Fine.

> but I need to pick a range.

Objectively of course. Ha ha. If you have to pick a range then your

so-called badness metric obviously isn't really a badness metric at

all!

> > Mine is intended to pack the maximum number of ETs likely to be of

> > interest to musicians, composers, music theorists etc. who are

> > interested in 7-limit music, into a list of a given size.

>

> It needs work.

I think I said that.

> Mine is intended to show what the relatively best 7-limit ets are,

in

> a measurement which has the logarithmic flatness I describe in

> another posting.

Even if you and Paul are the only folks on the planet who find that
interesting? In that case I think it's very misleading to call it a
badness metric when it only gives relative badness _locally_.

> > I might like it better than mine too. Mine's still got problems. But
> > you had to arbitrarily limit it to 10<n<50 to get this list. This is
> > clearly doing our thinking for us.

>

> And I can reduce that problem to essentially nil, by putting in a

> high cut-off and leaving it at that.

How high? How will this fix the problem that folks will assume you're
saying that 3-tET and 1547-tET are about as useful as 22-tET for
7-limit?

> You are stuck with it as an

> intrinsic feature.

And a damn fine feature it is too. :-) Seriously, mine was proposed
without any great amount of research or deliberation, to show that it
is easy to find alternatives that do _much_ better than yours
_globally_ and about the same _locally_.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> --- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> > --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> >

> > > But this doesn't look like an approximate isobad. It looks like a
> > > list of ETs less than a certain badness. i.e. it's a top 17. Right?

> >

> > Right, but your list looked like a top 11 in a certain range also.

>

> It happens to also be the top 11 by the 0.5*steps + cents metric, but
> not limited to any range.

You could describe my top 11 in the range from 10 to 50 as the top 11
using a measure which multiplies our badness measure by a function
equal to 1 from 10 to 50 and 10^n otherwise, and so end up with a top
11 "not limited by range". The difference is that you have blurry
outlines to your chosen region, which seems to me to be a bad thing,
not a good one. It allows you to imagine you have not chosen a range,
which hardly clarifies matters, since in effect you have.

> Objectively of course. Ha ha. If you have to pick a range then your

> so-called badness metric obviously isn't really a badness metric at

> all!

See above; I can screw it up in an _ad hoc_ way and make it a
screwed-up, _ad hoc_ measure also, but why should I want to?

> Even if you and Paul are the only folks on the planet who find that

> interesting? In that case I think its very misleading to call it a

> badness metric when it only gives relative badness _locally_.

Global relative badness means what, exactly? This makes no sense to

me.

> How high? How will this fix the problem that folks will assume you're
> saying that 3-tET and 1547-tET are about as useful as 22-tET for
> 7-limit?

I think you would be one of the very few who looked at it that way.

After all, this is hardly the first time such a thing has been done.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> > Even if you and Paul are the only folks on the planet who find that
> > interesting? In that case I think it's very misleading to call it a
> > badness metric when it only gives relative badness _locally_.

>

> Global relative badness means what, exactly? This makes no sense to

> me.

It means if two ETs have around the same badness number then they are
about as bad as each other, no matter how far apart they are on the
spectrum.

> > How high? How will this fix the problem that folks will assume
> > you're saying that 3-tET and 1547-tET are about as useful as 22-tET
> > for 7-limit?

>

> I think you would be one of the very few who looked at it that way.

> After all, this is hardly the first time such a thing has been done.

Ok. So I'm the only person who will assume that two ETs with about the
same badness number are roughly as bad as each other. In that case, I
shan't bother you any more. We are apparently speaking different
languages.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> > Global relative badness means what, exactly? This makes no sense to
> > me.

>

> It means if two ETs have around the same badness number then they are
> about as bad as each other, no matter how far apart they are on the
> spectrum.

This strikes me as subjective to the point of being meaningless.

Gene wrote:

> I don't know what good Maple code will do, but here it is:

>

> findcoms := proc(l)

> local p,q,r,p1,q1,r1,s,u,v,w;

More descriptive variable names might help. Is l the wedge invariant?

> s := igcd(l[1], l[2], l[6]);

> u := [l[6]/s, -l[2]/s, l[1]/s,0];

Presumably this is simplifying the octave-equivalent part?

> v := [p,q,r,1];

What values do p, q and r have? Is it important?

> w := w7l(u,v);

> "w7l" takes two vectors representing intervals, and computes the

> wedge product.

So w is the wedge product of u and v, whatever they are.
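As an aside, the wedge product of two 4-element exponent vectors (over the primes 2, 3, 5, 7) is easy to sketch in Python. This is only a guess at what "w7l" computes; Gene's component ordering and sign conventions may differ:

```python
from itertools import combinations

def wedge(u, v):
    """Exterior (wedge) product of two 4-vectors, returned as the six
    components u[i]*v[j] - u[j]*v[i] for i < j, in lexicographic order
    of the index pairs (0,1), (0,2), (0,3), (1,2), (1,3), (2,3)."""
    return [u[i] * v[j] - u[j] * v[i] for i, j in combinations(range(4), 2)]
```

For example, wedging the exponent vectors of two commas, such as 81/80 = [-4, 4, -1, 0] and 126/125 = [1, 2, -3, 1], gives a 6-component wedgie for the linear temperament they jointly define, up to the ordering convention.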

> s := isolve({l[1]-w[1],l[2]-w[2],l[3]-w[3],l[4]-w[4],l[5]-w[5],l[6]-w[6]});

> "isolve" gives integer solutions to a linear

> equation;

Oh, that sounds useful.

> s := subs(_N1=0,s);

> I get an undetermined variable "_N1" in this way which I

> can set equal to any integer, so I set it to 0.

Okay.

> p1 := subs(s,p);

> q1 := subs(s,q);

> r1 := subs(s,r);

What about this?

> v := 2^p1 * 3^q1 * 5^r1 * 7;

And here ^ is exponentiation instead of a wedge product.

> if v < 1 then v := 1/v fi;

So v must be a ratio, and you want it to be ascending.

> w := 2^u[1] * 3^u[2] * 5^u[3];

> if w < 1 then w := 1/w fi;

Same for w.

> [w, v] end:

And that's the result, is it? Two unison vectors?

> coms := proc(l)

> local v;

> v := findcoms(l);

> com7(v[1],v[2]) end:

> The pair of unisons

> returned in this way can be LLL reduced by the "com7" function, which

> takes a pair of intervals and LLL reduces them.

That makes sense. Return the reduced results of the other function.

> "w7l" takes two vectors representing intervals, and computes the
> wedge product. "isolve" gives integer solutions to a linear
> equation; I get an undetermined variable "_N1" in this way which I
> can set equal to any integer, so I set it to 0. The pair of unisons
> returned in this way can be LLL reduced by the "com7" function, which
> takes a pair of intervals and LLL reduces them.

Looks like the magic is being done by "isolve" which I presume is built-in

to Maple.

Graham

--- In tuning-math@y..., graham@m... wrote:

> Gene wrote:

> > I don't know what good Maple code will do, but here it is:

> >

> > findcoms := proc(l)

> > local p,q,r,p1,q1,r1,s,u,v,w;

>

> More descriptive variable names might help. Is l the wedge
> invariant?

Yes.

> > s := igcd(l[1], l[2], l[6]);

> > u := [l[6]/s, -l[2]/s, l[1]/s,0];

>

> Presumably this is simplifying the octave-equivalent part?

"s" is the gcd of the first, second and sixth coordinates of the

wedgie, these are the ones used to construct the 5-limit comma. I

divide out by s, and get u, which is a vector representing this comma.

> > v := [p,q,r,1];

>

> What values do p, q and r have? Is it important?

p, q, and r are indeterminates, and the "1" above should be "s", the

gcd I obtained before.

Here is a more recent version, which should be used instead of the

old one as a reference:

findcoms := proc(l)
local p,q,r,p1,q1,r1,s,t,u,v,w;
s := igcd(l[1], l[2], l[6]);
u := [l[6]/s, -l[2]/s, l[1]/s,0];
v := [p,q,r,s];
w := w7l(u,v);
t := isolve({l[1]-w[1],l[2]-w[2],l[3]-w[3],l[4]-w[4],l[5]-w[5],l[6]-w[6]});
t := subs(_N1=0,t);
p1 := subs(t,p);
q1 := subs(t,q);
r1 := subs(t,r);
v := 2^p1 * 3^q1 * 5^r1 * 7^s;
if v < 1 then v := 1/v fi;
w := 2^u[1] * 3^u[2] * 5^u[3];
if w < 1 then w := 1/w fi;
[w, v] end:

> So w is the wedge product of u and v, whatever they are.

Right, and "u" is the 5-limit comma, while "v" is undetermined aside

from the fact that the power of 7 is "s".

> > s := isolve({l[1]-w[1],l[2]-w[2],l[3]-w[3],l[4]-w[4],l[5]-w[5],l[6]-w[6]});

>

> > "isolve" gives integer solutions to a linear

> > equation;

>

> Oh, that sounds useful.

It is; a linear Diophantine equation routine would be a good thing to

acquire.
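For a single linear equation in two unknowns, such a routine can be built on the extended Euclidean algorithm. A minimal Python sketch of the standard textbook method (the kind of thing Niven and Zuckerman cover), not a port of Gene's code or of Maple's "isolve":

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def solve_diophantine(a, b, c):
    """One integer solution (x, y) of a*x + b*y == c, or None if no
    solution exists.  All solutions are (x + t*(b//g), y - t*(a//g))
    for integer t, which is the role Maple's free variable _N1 plays."""
    g, x, y = ext_gcd(a, b)
    if c % g != 0:
        return None  # c must be divisible by gcd(a, b)
    k = c // g
    return (x * k, y * k)
```

A system of equations, as in "findcoms", additionally needs the rational solution plus congruence conditions Gene describes later in the thread.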

> > p1 := subs(s,p);

> > q1 := subs(s,q);

> > r1 := subs(s,r);

>

> What about this?

I've now re-named "s" (bad programming style if I was going to

publish the code, but I didn't write it with that in mind) to be the

set of solutions of the linear Diophantine equation. In my newer

version, that is "t"; t is a particular solution, and I substitute

this solution into the indeterminates, getting a specific value. It's

Maple-specific idiocy, and you would no doubt do something different

using Python.

> > v := 2^p1 * 3^q1 * 5^r1 * 7;

>

> And here ^ is exponentiation instead of a wedge product.

Right, and 7 should be "7^s".

> > if v < 1 then v := 1/v fi;

>

> So v must be a ratio, and you want it to be ascending.

I just like to standardize things.

> > w := 2^u[1] * 3^u[2] * 5^u[3];

> > if w < 1 then w := 1/w fi;

>

> Same for w.

>

> > [w, v] end:

>

> And that's the result, is it? Two unison vectors?

Correct; two unison vectors free of torsion problems which define the

linear temperament.

> Looks like the magic is being done by "isolve" which I presume is
> built-in to Maple.

It's a built-in Maple function; however much of the magic can still

be had by solving the system over the rationals, because part of the

magic was to start out in such a way that torsion problems would be

exterminated. One way to solve a linear Diophantine system is to

solve over the rationals, and then solve the congruence conditions

required to give an integer solution, in fact. You might look in

Niven and Zuckerman if you have a copy for linear Diophantine

equations.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > Dave, if you don't have a cutoff, you'd have an infinite number of
> > ETs better than 1547. Of course there has to be a cutoff.

>

> Yes. This just shows that this isn't a very good badness metric.

> A decent badness metric would not need a cutoff in anything but

> badness in order to arrive at a finite list.

>

> > I mean that only Gene's measure tells you exactly _how much_ better
> > a system is than the systems in their vicinity,

>

> How do you know it does that? "Exactly"?

Sure, in a limit-probability sense. How many digits did Gene report?

Anyhow, I'll have to refer you to Gene on the details of how it does

that.

I'd just like this paper to have some very simple systems with large

errors, where a combined adaptive-tuning & adaptive-timbre approach

would be needed, as well as systems to satisfy people like Rami

Vitale, for whom even the _melodic_ distinctions of 225:224 cannot be

tempered out.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> But why ever do you think the size of the wiggles should be flat? I

> think it is quite expected that the size of the wiggles in badness

> around 1-tET to 9-tET are _much_ bigger than the wiggles around
> 60-tET to 69-tET.

The two ranges would have to be the same size logarithmically, for
example 1-tET to 9-tET and 10-tET to 90-tET.