Nonoctave scales and linear temperaments

Gene Ward Smith <genewardsmith@juno.com>

1/6/2003 10:02:33 PM

I pointed out over on PostTonality that 88CET is closely related to the Octafifths temperament; you might say it is Octafifths without the octave reduction. If Octafifths (with a generator around 8/109 of an octave) has a generator small enough for this, so does Quartaminorthirds, with a generator around 7/108. Moreover, if the nonoctave people have not yet tried the secor, they are missing a bet. Slender, with a generator of about 4/125, gives something perversely related to 31-et, and we might even try Porcupine, Tertiathirds or Hemikleismic.

Wendy Carlos's alpha is related to an 11-limit temperament which appeared on my best-20 list; it is what you get by wedging 121/120,
126/125 and 176/175, [9, 5, -3, 7, -13, -30, -20, -21, -1, 30], and we might call it Wendy or Alpha. Which name is best?

I also looked at beta and gamma, but I didn't find much. Beta can be related to a cheeseball system obtainable as h75&h94, and it is possible to regard gamma as 5/171. Maybe Graham or Dave can point out something I am missing here.
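Quick arithmetic behind these identifications, sketched in Python (the cents figures are mine, reading each generator as the stated fraction of an octave):

print(1200 * 8 / 109)   # Octafifths: ~88.07 cents, cf. 88CET
print(1200 * 7 / 108)   # Quartaminorthirds: ~77.78 cents
print(1200 * 4 / 125)   # Slender: ~38.4 cents
print(1200 * 7 / 72)    # the secor, roughly 7/72 octave: ~116.67 cents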

Carl Lumma <clumma@yahoo.com>

1/7/2003 3:46:51 AM

>I also looked at beta and gamma, but I didn't find much. Beta
>can be related to a cheeseball system obtainable as h75&h94,
>and it is possible to regard gamma as 5/171. Maybe Graham or
>Dave can point out something I am missing here.

Would you expect that tempering the "octave" instead of the
"generator" at a given limit would lead to a different list
of optimal temperaments or optimal generator for a temperament?

Perhaps we should, since our lists tend to keep 2:1 on both
axes of the map (or at least with a very short period of IEs,
intervals of equivalence) while never considering tempering it.
That is, we often discuss temperaments with 2:1 "octaves" and
irrational or dissonant "generators". Perhaps we should instead
allow the "octave" to be any size while applying a single
weighted error function to all consonances, including the 2:1.

Most weighting schemes would probably attract the good
generators of good temperaments to consonant intervals. It
might even be possible to always force one of the generators
to a pure 2:1 with a steep enough weighting. Rather than
guessing, it seems like a good place to plug in harmonic
entropy.

But why average (max, rms, etc.) complexity and error across
a map before weighting and calculating badness? Why not
weight per harmonic identity, then just sum to find badness at
the given limit? If you weight the error right, you shouldn't
have to weight the complexity.

Let g(x) be the graham complexity of identity x, and e(x) be
the weighted error of that identity. Then minimize

Sum [g(r) * e(r)]

where r goes over all the identities in the given limit. If a
minimum could be found, we would know the optimal generator of
the optimal temperament. Assuming a perfect error function,
which can't exist outside of perfect, deterministic neuroscience.

Given the way the error and graham complexity of identities
compound when covering a limit (some sort of consistency is
required by Gene's def. for linear temperaments, IIRC), per-
identity consideration should be enough. But one could imagine
weighting per-dyad, or even per-chord.

Or maybe I'm missing something?

-Carl

Carl Lumma <clumma@yahoo.com>

1/7/2003 11:41:10 AM

>Let g(x) be the graham complexity of identity x, and e(x) be
>the weighted error of that identity. Then minimize
>
>Sum [g(r) * e(r)]
>
>where r goes over all the identities in the given limit.

That's supposed to be...

Let g(x) be the graham complexity of identity x, and e(x) be
the weighted error of that identity. Then minimize

Sum [g(r) * e(r)]

over all maps, where r goes over the identities of the given map.
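A minimal runnable sketch of that search, assuming scipy for the optimizer, absolute errors in cents, and a taxicab step count standing in for g(x); the second candidate map below is made up purely for illustration:

from math import log2
from scipy.optimize import minimize

JUST = {2: 1200.0, 3: 1200.0 * log2(3), 5: 1200.0 * log2(5)}  # cents

def badness(gens, mapping):
    # Sum over identities of g(r) * e(r) for one candidate map.
    g1, g2 = gens
    total = 0.0
    for prime, (a, b) in mapping.items():
        g = abs(a) + abs(b)                       # taxicab complexity
        e = abs(a * g1 + b * g2 - JUST[prime])    # absolute error in cents
        total += g * e
    return total

candidates = [
    {2: (1, 0), 3: (1, 1), 5: (0, 4)},    # the octave-fifth meantone map
    {2: (1, 0), 3: (2, -1), 5: (3, -1)},  # a made-up rival map
]
best = min(candidates,
           key=lambda m: minimize(badness, [1200.0, 700.0], args=(m,),
                                  method="Nelder-Mead").fun)
print(best)   # the map whose optimized generators give the least sum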

-Carl

Gene Ward Smith <genewardsmith@juno.com>

1/7/2003 12:51:22 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>" <clumma@y...> wrote:

> Would you expect that tempering the "octave" instead of the
> "generator" at a given limit would lead to a different list
> of optimal temperaments or optimal generator for a temperament?

The problem with this is that it makes the question of what the consonances of the temperament are murky--we can't simply use everything in the odd limit for some odd n. Of course, we could simply create a set of intervals and then temper, or use n-limit
including evens.

> Perhaps we should instead allow the
> "octave" to be any size while applying a single weighted error
> function to all consonances, including the 2:1.

Should we have 8-limit as well as 7-limit and 9-limit?

> But why average (max, rms, etc.) complexity and error across
> a map before weighting and calculating badness? Why not
> weight per harmonic identity, then just sum to find badness at
> the given limit? If you weight the error right, you shouldn't
> have to weight the complexity.

I'll return to this after I've had my breakfast coffee. :)

Carl Lumma <clumma@yahoo.com>

1/7/2003 1:17:42 PM

>The problem with this is that it makes the question of what the
>consonances of the temperament are murky--we can't simply use
>everything in the odd limit for some odd n. Of course, we could
>simply create a set of intervals and then temper, or use n-limit
>including evens.

Perhaps I'm not seeing it, but I don't think we need to change
our concept of limit.

>>But why average (max, rms, etc.) complexity and error across
>>a map before weighting and calculating badness? Why not
>>weight per harmonic identity, then just sum to find badness at
>>the given limit? If you weight the error right, you shouldn't
>>have to weight the complexity.
>
>I'll return to this after I've had my breakfast coffee. :)

That was a seriously late-night post, and hopefully it made
sense. For all I know you could already be calculating badness
this way.

For linear temperaments, we have a 2-D lattice of generators.
The map turns points on this lattice into points on the
(weighted, if you like) harmonic lattice, and back again. The
complexity of a pair of such points is the taxicab distance on
the lattice of generators, and the error is the taxicab distance
on the harmonic lattice.

You can define as many mapping points as you like. But if you
stick to consistent maps of the identities only, you should get
reasonable results for all the members of what we normally call
a limit.

-Carl

wallyesterpaulrus <wallyesterpaulrus@yahoo.com>

1/7/2003 4:08:32 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >The problem with this is that it makes the question of what the
> >consonances of the temperament are murky--we can't simply use
> >everything in the odd limit for some odd n. Of course, we could
> >simply create a set of intervals and then temper, or use n-limit
> >including evens.
>
> Perhaps I'm not seeing it, but I don't think we need to change
> our concept of limit.

we certainly would, and could use "integer limit" as gene suggests, or
use product limit (tenney).

Carl Lumma <clumma@yahoo.com>

1/7/2003 5:15:21 PM

>>Perhaps I'm not seeing it, but I don't think we need to change
>>our concept of limit.
>
>we certainly would, and could use "integer limit" as gene
>suggests, or use product limit (tenney).

Maybe so, but I don't see why. I'm suggesting we think only of
the map, and let it do the walking. We get to pick what goes
in the map. Picking 2, 3, 5, 7 and calling it "7-limit" seems
fine to me.

>>If you weight the error right, you shouldn't have to weight
>>the complexity.
>
>I'll return to this after I've had my breakfast coffee. :)

Sorry, all I should have said is, * is commutative. So it's
really...

Sum ( raw-error(i) * graham-complexity(i) * weighting-factor(i) )

I'll wager a coke this eliminates the need for an averaging
function over the intervals of the limit. If so, it would
approximate traditional badness, and this could be checked.

-Carl

Gene Ward Smith <genewardsmith@juno.com>

1/7/2003 5:25:53 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>" <clumma@y...> wrote:

> Sum ( raw-error(i) * graham-complexity(i) * weighting-factor(i) )
>
> I'll wager a coke this eliminates the need for an averaging
> function over the intervals of the limit. If so, it would
> approximate traditional badness, and this could be checked.

I've had my coffee, but still can't see the advantage of this system. We want to have a measure of absolute error independent of complexity in any case, do we not?

wallyesterpaulrus <wallyesterpaulrus@yahoo.com>

1/8/2003 4:06:16 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>Perhaps I'm not seeing it, but I don't think we need to change
> >>our concept of limit.
> >
> >we certainly would, and could use "integer limit" as gene
> >suggests, or use product limit (tenney).
>
> Maybe so, but I don't see why. I'm suggesting we think only of
> the map, and let it do the walking. We get to pick what goes
> in the map. Picking 2, 3, 5, 7 and calling it "7-limit" seems
> fine to me.

you're talking prime limit, which is fine for the mapping, as usual.
but for the optimization of the generator size, we need a list of
consonances to target.

Carl Lumma <clumma@yahoo.com>

1/8/2003 6:52:21 PM

>you're talking prime limit, which is fine for the mapping,
>as usual.

I'm talking 'put whatever you want in the map'.

> but for the optimization of the generator size, we need a
> list of consonances to target.

Why not optimize the generator size for the map, and let
it target the consonances? Presumably because in some
tunings the errors for say 3 and 5 will cancel on consonances
like 5:3.

-Carl

wallyesterpaulrus <wallyesterpaulrus@yahoo.com>

1/9/2003 12:40:09 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >you're talking prime limit, which is fine for the mapping,
> >as usual.
>
> I'm talking 'put whatever you want in the map'.
>
> > but for the optimization of the generator size, we need a
> > list of consonances to target.
>
> Why not optimize the generator size for the map, and let
> it target the consonances? Presumably because in some
> tunings the errors for say 3 and 5 will cancel on consonances
> like 5:3.

i'm not following you, or where you differ from what's "standard"
around here . . . why don't you post a complete calculation for the
meantone case, or if you wish, some other, more contrived case . . .

Carl Lumma <clumma@yahoo.com>

1/9/2003 1:34:32 PM

>>Why not optimize the generator size for the map, and let
>>it target the consonances? Presumably because in some
>>tunings the errors for say 3 and 5 will cancel on consonances
>>like 5:3.
>
>i'm not following you, or where you differ from what's
"standard" around here . . .

As I say, I don't know how much I'm differing from what's
standard. Calculations are seldom posted here at the
undergrad level.

As usual, I'm trying to figure things out by synthesizing
something and asking about it. When I hit something that
works, I keep it.

>why don't you post a complete calculation for the meantone
>case, or if you wish, some other, more contrived case . . .

Map for 5-limit meantone...

2 3 5
gen1 1 1 -2
gen2 0 1 4

Complexity for each identity...

2= 1
3= 2
5= 6

Let's weight by 1/base2log(i)...

2= 1.00
3= 1.26
5= 2.58

Now gen1 and gen2 are variables, and minimize...

error(2) + 1.26(error(3)) + 2.58(error(5))

I don't know how to do such a calculation, or even
if it's guaranteed to have a minimum. It would
give us minimum-badness generators, not minimum
error gens.
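One way such a calculation could go, as a sketch: it assumes scipy's Nelder-Mead, reads the three columns as targeting 2/1, 3/1 and 5/4 (my reading, chosen so the arithmetic is self-consistent), and takes the weights as listed above:

from scipy.optimize import minimize

TARGETS = [1200.0, 1901.955, 386.314]   # just 2/1, 3/1, 5/4 in cents
STEPS   = [(1, 0), (1, 1), (-2, 4)]     # (gen1, gen2) counts per target
WEIGHTS = [1.00, 1.26, 2.58]

def weighted_error(gens):
    g1, g2 = gens
    return sum(w * abs(a * g1 + b * g2 - t)
               for (a, b), t, w in zip(STEPS, TARGETS, WEIGHTS))

best = minimize(weighted_error, x0=[1200.0, 697.0], method="Nelder-Mead")
print(best.x)   # optimized (gen1, gen2); the octave is free to temper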

The log2(i) weighting is only off the top of my head.
One could imagine no weighting. One could imagine
weighting so steep we could find the optimal generators
for harmonic limit infinity.

If this does cause us to miss temperaments with good
composite consonances like 5:3, we can go back to
minimizing the error of all the intervals in the given
limit, and keep the summed graham complexity.

-Carl

wallyesterpaulrus <wallyesterpaulrus@yahoo.com>

1/13/2003 11:16:03 AM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>Why not optimize the generator size for the map, and let
> >>it target the consonances? Presumably because in some
> >>tunings the errors for say 3 and 5 will cancel on consonances
> >>like 5:3.
> >
> >i'm not following you, or where you differ from what's
> "standard" around here . . .
>
> As I say, I don't know how much differing from what's
> standard. Calculations are seldom posted here at the
> undergrad level.

well then you need to ask for clarification in such cases. there's no
reason anyone should be left behind.

> >why don't you post a complete calculation for the meantone
> >case, or if you wish, some other, more contrived case . . .
>
> Map for 5-limit meantone...
>
> 2 3 5
> gen1 1 1 -2
> gen2 0 1 4

hmm . . . gen1 is an octave, gen2 is a fifth . . . right?

> Complexity for each identity...
>
> 2= 1
> 3= 2
> 5= 6

defined how?

> Let's weight by 1/base2log(i)...
>
> 2= 1.00
> 3= 1.26
> 5= 2.58

> Now gen1 and gen2 are variables, and minimize...
>
> error(2) + 1.26(error(3)) + 2.58(error(5))
>
> I don't know how to do such a calculation, or even
> if it's guaranteed to have a minimum. It would
> give us minimum-badness generators, not minimum
> error gens.

i'm not sure what you're getting at. given the mapping, there's no
way to change the complexity, so we'd be holding complexity constant.
so isn't minimizing badness then the same thing as minimizing error?

Carl Lumma <clumma@yahoo.com>

1/13/2003 12:57:34 PM

>>Map for 5-limit meantone...
>>
>> 2 3 5
>>gen1 1 1 -2
>>gen2 0 1 4
>
>hmm . . . gen1 is an octave, gen2 is a fifth . . . right?

Right.

>>Complexity for each identity...
>>
>>2= 1
>>3= 2
>>5= 6
>
>defined how?

Those should be 2/1, 3/2, and 5/4. It's the taxicab distance
on the rectangular lattice of generators. Which I cooked up
as a generalization of Graham complexity for temperaments that
don't necessarily have octaves. How have you been calculating
Graham complexity for temperaments with more than one period
to an octave?
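A small sketch of that count, taking the map exactly as in the table above (prime -> generator steps) and reproducing the 1, 2, 6 figures:

MAP = {2: (1, 0), 3: (1, 1), 5: (-2, 4)}   # prime -> (gen1, gen2) steps

def taxicab(monzo):
    # monzo: prime -> exponent, pushed through MAP and counted.
    g1 = sum(e * MAP[p][0] for p, e in monzo.items())
    g2 = sum(e * MAP[p][1] for p, e in monzo.items())
    return abs(g1) + abs(g2)

print(taxicab({2: 1}))   # 1
print(taxicab({3: 1}))   # 2
print(taxicab({5: 1}))   # 6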

>>Let's weight by 1/base2log(i)...
>>
>>2= 1.00
>>3= 1.26
>>5= 2.58
>
>>Now gen1 and gen2 are variables, and minimize...
>>
>>error(2) + 1.26(error(3)) + 2.58(error(5))
>>
>>I don't know how to do such a calculation, or even
>if it's guaranteed to have a minimum. It would
>>give us minimum-badness generators, not minimum
>>error gens.
>
>i'm not sure what you're getting at. given the mapping,
>there's no way to change the complexity, so we'd be
>holding complexity constant. so isn't minimizing
>badness then the same thing as minimizing error?

Doesn't the presence of the weighting factors change the
result?

How would you calculate complexity, error, badness, and
optimum generators for 5-limit meantone?

-Carl

wallyesterpaulrus <wallyesterpaulrus@yahoo.com>

1/13/2003 1:28:32 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>Map for 5-limit meantone...
> >>
> >> 2 3 5
> >>gen1 1 1 -2
> >>gen2 0 1 4
> >
> >hmm . . . gen1 is an octave, gen2 is a fifth . . . right?
>
> Right.
>
> >>Complexity for each identity...
> >>
> >>2= 1
> >>3= 2
> >>5= 6
> >
> >defined how?
>
> Those should be 2/1, 3/2, and 5/4. It's the taxicab distance
> on the rectangular lattice of generators.

ok . . .

> Which I cooked up
> as a generalization of Graham complexity for temperaments that
> don't necessarily have octaves.

there seems to be a problem, in that by defining the generators as an
octave and a fifth, you get different numbers than by defining them
as an octave and a twelfth, say.

plus, graham complexity doesn't operate on a per-identity basis.

> How have you been calculating
> Graham complexity for temperaments with more than one period
> to an octave?

multiply the generator span of the otonal (or utonal) n-ad by the
number of periods per octave.
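A sketch of that rule; the pajara generator counts below are my assumption of its standard 7-limit map, with two periods per octave:

def graham_complexity(gen_counts, periods_per_octave):
    span = max(gen_counts) - min(gen_counts)   # generator span of the n-ad
    return span * periods_per_octave

pajara_gens = [0, 1, -2, -2]   # generators for identities 1, 3, 5, 7
print(graham_complexity(pajara_gens, 2))   # -> 6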

> >>Let's weight by 1/base2log(i)...
> >>
> >>2= 1.00
> >>3= 1.26
> >>5= 2.58
> >
> >>Now gen1 and gen2 are variables, and minimize...
> >>
> >>error(2) + 1.26(error(3)) + 2.58(error(5))
> >>
> >>I don't know how to do such a calculation, or even
> >>if it's guaranteed to have a minimum. It would
> >>give us minimum-badness generators, not minimum
> >>error gens.
> >
> >i'm not sure what you're getting at. given the mapping,
> >there's no way to change the complexity, so we'd be
> >holding complexity constant. so isn't minimizing
> >badness then the same thing as minimizing error?
>
> Doesn't the presence of the weighting factors change the
> result?

well, if you mean minimizing the expression above (assuming you meant
the errors to be absolute values or squares), basically you've
just come up with a different weighting scheme for the error. not one
which i like, by the way, since you don't penalize the error in 3 and
the error in 5 for being in opposite directions; that is, you don't
take into account the error in 5:3.

> How would you calculate complexity, error, badness, and
> optimum generators for 5-limit meantone?

well, certainly woolhouse's derivation is known by everyone by now,
isn't it? that gives you the error and optimum generator, in the
equal-weighted RMS case. the complexity is a function of the mapping,
and can be defined in various ways (the graham complexity is 4), but
does not depend on the precise choice of generator. badness also has
several definitions -- log-flat badness is pretty much gene's
territory -- but is typically error times complexity to some power.

Carl Lumma <clumma@yahoo.com>

1/13/2003 1:52:21 PM

>>Which I cooked up as a generalization of Graham complexity for
>>temperaments that don't necessarily have octaves.
>
>there seems to be a problem, in that by defining the generators
>as an octave and a fifth, you get different numbers than by
>defining them as an octave and a twelfth, say.

I thought of that, but I thought also that as long as one always
uses the same set of targets across temperaments, one is ok.
Whaddya think?

>plus, graham complexity doesn't operate on a per-identity basis.

Indeed. That's part of my inquiry into the order of operations.

>>How have you been calculating Graham complexity for temperaments
>>with more than one period to an octave?
>
>multiply the generator span of the otonal (or utonal) n-ad by the
>number of periods per octave.

That's what I thought. How does this compare to the taxicab
approach? Say, for Pajara.

>>>>error(2) + 1.26(error(3)) + 2.58(error(5))
//
>>>i'm not sure what you're getting at. given the mapping,
>>>there's no way to change the complexity, so we'd be
>>>holding complexity constant. so isn't minimizing
>>>badness then the same thing as minimizing error?
>>
>>Doesn't the presence of the weighting factors change the
>>result?
>
>well, if you mean minimizing the expression above (assuming you
>meant the errors to be absolute values or squares),

yep.

>basically you've just come up with a different weighting scheme
>for the error.

Ok.

>not one which i like, by the way, since you don't penalize the
>error in 3 and the error in 5 for being in opposite directions;
>that is, you don't take into account the error in 5:3.

That's what I said at the beginning of the thread. So how do
you do weighted error? Do you weight the error for an entire
limit by the limit, or for intervals individually?

>>How would you calculate complexity, error, badness, and
>>optimum generators for 5-limit meantone?
>
>well, certainly woolhouse's derivation is known by everyone
>by now, isn't it? that gives you the error and optimum generator,
>in the equal-weighted RMS case. the complexity is a function of
>the mapping, and can be defined in various ways (the graham
>complexity is 4), but does not depend on the precise choice of
>generator. badness also has several definitions -- log-flat
>badness is pretty much gene's territory -- but is typically error
>times complexity to some power.

Okay, that's what I thought.

-Carl

wallyesterpaulrus <wallyesterpaulrus@yahoo.com>

1/13/2003 2:35:56 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>Which I cooked up as a generalization of Graham complexity for
> >>temperaments that don't necessarily have octaves.
> >
> >there seems to be a problem, in that by defining the generators
> >as an octave and a fifth, you get different numbers than by
> >defining them as an octave and a twelfth, say.
>
> I thought of that, but I thought also that as long as one always
> uses the same set of targets across temperaments, one is ok.
> Whaddya think?

what about temperaments without octaves, or without fifths? and
anyway, why would keeping the same set of targets help? complexity
shouldn't be this arbitrary!

> >plus, graham complexity doesn't operate on a per-identity basis.
>
> Indeed. That's part of my inquiry into the order of operations.

?

> >>How have you been calculating Graham complexity for temperaments
> >>with more than one period to an octave?
> >
> >multiply the generator span of the otonal (or utonal) n-ad by the
> >number of periods per octave.
>
> That's what I thought. How does this compare to the taxicab
> approach? Say, for Pajara.

i'm unclear on what taxicab approach you mean. be patient with me, i
know this would be easier in person. but i have to go now.

> >>>>error(2) + 1.26(error(3)) + 2.58(error(5))
> //
> >>>i'm not sure what you're getting at. given the mapping,
> >>>there's no way to change the complexity, so we'd be
> >>>holding complexity constant. so isn't minimizing
> >>>badness then the same thing as minimizing error?
> >>
> >>Doesn't the presence of the weighting factors change the
> >>result?
> >
> >well, if you mean minimizing the expression above (assuming you
> >meant the errors to be absolute values or squares),
>
> yep.
>
> >basically you've just come up with a different weighting scheme
> >for the error.
>
> Ok.
>
> >not one which i like, by the way, since you don't penalize the
> >error in 3 and the error in 5 for being in opposite directions;
> >that is, you don't take into account the error in 5:3.
>
> That's what I said at the beginning of the thread. So how do
> you do weighted error? Do you weight the error for an entire
> limit by the limit, for intervals individually?

i don't think anyone's been doing weighted error on this list. but if
you did, you'd minimize

f(w3*error(3),w5*error(5),w5*error(5:3))

where f is either RMS or MAD or MAX or whatever, and w3 is your
weight on ratios of 3, and w5 is your weight on ratios of 5.
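A minimal sketch of this with f = RMS; the weights and the choice of inputs are assumptions for illustration:

from math import log2, sqrt

def weighted_rms(fifth, third, w3=1.0, w5=1.0):
    # fifth, third: tempered sizes of 3:2 and 5:4 in cents.
    e3  = fifth - 1200 * log2(3 / 2)              # error(3)
    e5  = third - 1200 * log2(5 / 4)              # error(5)
    e53 = (fifth - third) - 1200 * log2(6 / 5)    # error(5:3)
    terms = [w3 * e3, w5 * e5, w5 * e53]
    return sqrt(sum(t * t for t in terms) / len(terms))

print(weighted_rms(696.58, 386.31))   # quarter-comma meantone: ~4.4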

Carl Lumma <clumma@yahoo.com>

1/13/2003 2:50:59 PM

>>I thought of that, but I thought also that as long as one always
>>uses the same set of targets across temperaments, one is ok.
>>Whaddya think?
//
>why would keeping the same set of targets help? complexity
>shouldn't be this arbitrary!

Because complexity is comparative. The idea of a complete
otonal chord is not less arbitrary.

>>>multiply the generator span of the otonal (or utonal) n-ad by the
>>>number of periods per octave.
>>
>>That's what I thought. How does this compare to the taxicab
>>approach? Say, for Pajara.
>
>i'm unclear on what taxicab approach you mean. be patient with me,
>i know this would be easier in person. but i have to go now.

Oh, sure, dude. You still abroad? I've got to go now, too.
Just count the number of generators it takes, on the shortest
route in a rect. lattice, to get to the approx. of the target
interval.

>i don't think anyone's been doing weighted error on this list.

Oh, crap. What is it you've been pushing, then?
Weighted complexity?

>but if you did, you'd minimize
>
>f(w3*error(3),w5*error(5),w5*error(5:3))
>
>where f is either RMS or MAD or MAX or whatever, and w3 is your
>weight on ratios of 3, and w5 is your weight on ratios of 5.

Thanks. So my f is +, where you tend to use RMS. And I've got
complexity in the w's, which is a mistake, as I'll post about
shortly...

-Carl

Gene Ward Smith <genewardsmith@juno.com>

1/13/2003 3:04:16 PM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus <wallyesterpaulrus@y...>" <wallyesterpaulrus@y...> wrote:
> > Which I cooked up
> > as a generalization of Graham complexity for temperaments that
> > don't necessarily have octaves.
>
> there seems to be a problem, in that by defining the generators as an
> octave and a fifth, you get different numbers than by defining them
> as an octave and a twelfth, say.
>
> plus, graham complexity doesn't operate on a per-identity basis.

All of which is why I came up with geometric complexity, which is invariant with respect to choice of generators and does not have this problem.

Carl Lumma <clumma@yahoo.com>

1/13/2003 5:46:51 PM

>All of which is why I came up with geometric complexity,
>which is invariant with respect to choice of generators and
>does not have this problem.

Unfortunately, you may be the only one on this list who
understands geometric complexity. :(

-Carl

wallyesterpaulrus <wallyesterpaulrus@yahoo.com>

1/14/2003 11:09:14 AM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>I thought of that, but I thought also that as long as one always
> >>uses the same set of targets across temperaments, one is ok.
> >>Whaddya think?
> //
> >why would keeping the same set of targets help? complexity
> >shouldn't be this arbitrary!
>
> Because complexity is comparative. The idea of a complete
> otonal chord is not less arbitrary.

right, but whether a particular mapping is more complex than another
shouldn't be this arbitrary!

> >i don't think anyone's been doing weighted error on this list.
>
> Oh, crap. What is it you've been pushing, then?
> Weighted complexity?

that's been much more common, yes.

> >but if you did, you'd minimize
> >
> >f(w3*error(3),w5*error(5),w5*error(5:3))
> >
> >where f is either RMS or MAD or MAX or whatever, and w3 is your
> >weight on ratios of 3, and w5 is your weight on ratios of 5.
>
> Thanks. So my f is +,

are you sure? aren't there absolute values, in which case it's
equivalent to MAD? (or p=1, which gene doesn't want to consider)

wallyesterpaulrus <wallyesterpaulrus@yahoo.com>

1/14/2003 11:10:14 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith
<genewardsmith@j...>" <genewardsmith@j...> wrote:
> --- In tuning-math@yahoogroups.com, "wallyesterpaulrus
<wallyesterpaulrus@y...>" <wallyesterpaulrus@y...> wrote:
> > > Which I cooked up
> > > as a generalization of Graham complexity for temperaments that
> > > don't necessarily have octaves.
> >
> > there seems to be a problem, in that by defining the generators as an
> > octave and a fifth, you get different numbers than by defining them
> > as an octave and a twelfth, say.
> >
> > plus, graham complexity doesn't operate on a per-identity basis.
>
> All of which is why I came up with geometric complexity, which
> is invariant with respect to choice of generators and does not have
> this problem.

can you demonstrate the "problem" for other complexity measures, say
for meantone?

Carl Lumma <clumma@yahoo.com>

1/14/2003 2:38:49 PM

>>Because complexity is comparative. The idea of a complete
>>otonal chord is not less arbitrary.
>
>right, but whether a particular mapping is more complex
>than another shouldn't be this arbitrary!

I'm lost. If you agree with that, then what's arbitrary?

>>Oh, crap. What is it you've been pushing, then?
>>Weighted complexity?
>
>that's been much more common, yes.

Ok. Here's my latest thinking, as promised.

Ideally we'd base everything on complete n-ads, with
harmonic entropy. Since that's not available, we'll look
at dyadic breakdowns.

If you use the concept of odd limits, and your best way
of measuring the error of an n-ad is to break it down
into dyads, you're basically saying that a ratio containing
n is much different from any ratio containing at most n-2.
Thus, I suspect that my sum of abs-errors for each odd
identity up to the limit would make sense despite the fact
that for dyads like 5:3 the errors may cancel.

If we throw out odd-limit, however, we might be better off.
If there were a weighting that followed Tenney limit but
was steep enough to make near-perfect 2:1s a fact of life
and anything much beyond the 17-limit go away, we could
have individually-weighted errors and 'limit infinity'.

We should be able to search map space and assign generator
values from scratch. Pure 2:1 generators should definitely
not be assumed. Instead, we might use the appearance of
many near-octave generators as evidence the weighting is
right.

As far as my combining error and complexity before optimizing
generators, that was wrong. Moreover, combining them at all
is not for me. I'm not bound to ask, "What's the 'best' temp.
in size range x?". Rather, I might ask, "What's the most
accurate temperament in complexity range x?". Which is just
a sort on all possible temperaments, first by complexity, then
by accuracy. Which is how I set up Dave's 5-limit spreadsheet
after endlessly trying exponents in the badness calc. without
being able to get a sensible ranking.
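A tiny sketch of that sort, with placeholder numbers rather than computed values:

temps = [
    ("meantone", 4, 4.2),    # (name, complexity, error): illustrative
    ("augmented", 3, 9.7),
    ("schismic", 9, 0.4),
]
for name, complexity, error in sorted(temps, key=lambda t: (t[1], t[2])):
    print(name, complexity, error)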

As for which complexity to use, we have the question of how to
define a map at 'limit infinity'. . . In the meantime, what
about standard n-limit complexity?

() Gene's geometric complexity sounds interesting (assuming
it's limit-specific...).

() The number of notes of the temperament needed to get all of
the n-limit dyads.

() The taxicab complexity of the n-limit commas on the harmonic
lattice. Or something that measured how much smaller the
average harmonic structure would be in the temperament than in
JI. This sort of formulation is probably best, and may in fact
be what Gene's geometric complexity does...

>>>f(w3*error(3),w5*error(5),w5*error(5:3))
>>>
>>>where f is either RMS or MAD or MAX or whatever, and w3 is
>>>your weight on ratios of 3, and w5 is your weight on ratios
>>>of 5.
>>
>>Thanks. So my f is +,
>
>are you sure? aren't there absolute values, in which case it's
>equivalent to MAD? (or p=1, which gene doesn't want to consider)

Yep, you're right. Though its choice was just an expedient
here, and it would be the MAD of just the identities, not of
all the dyads in the limit.

Last time we tested these things for all-the-dyads-in-a-chord,
I believe I preferred RMS. Which is not to say that MAD
shouldn't be included in the poptimal series.

-Carl

wallyesterpaulrus <wallyesterpaulrus@yahoo.com>

1/14/2003 2:57:16 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>Because complexity is comparative. The idea of a complete
> >>otonal chord is not less arbitrary.
> >
> >right, but whether a particular mapping is more complex
> >than another shouldn't be this arbitrary!
>
> I'm lost. If you agree with that, then what's arbitrary?

if you choose a different set of generators, you'll get a different
ranking for which mapping is more complex than which!

on the rest of this message, i'll have to get back to you later . . .

Carl Lumma <clumma@yahoo.com>

1/14/2003 5:36:49 PM

>if you choose a different set of generators, you'll get
>a different ranking for which mapping is more complex
>than which!

Doesn't a map uniquely determine its generators?

-Carl

Gene Ward Smith <genewardsmith@juno.com>

1/14/2003 7:55:06 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>" <clumma@y...> wrote:

> >if you choose a different set of generators, you'll get
> >a different ranking for which mapping is more complex
> >than which!

> Doesn't a map uniquely determine its generators?

The problem is that the temperament does not uniquely determine the map.

Carl Lumma <clumma@yahoo.com>

1/14/2003 8:44:36 PM

>>Doesn't a map uniquely determine its generators?
>
>The problem is that the temperament does not uniquely
>determine the map.

What is a temperament, then, if not a map?

-Carl

Gene Ward Smith <genewardsmith@juno.com>

1/14/2003 9:43:31 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>" <clumma@y...> wrote:
> >>Doesn't a map uniquely determine its generators?
> >
> >The problem is that the temperament does not uniquely
> >determine the map.
>
> What is a temperament, then, if not a map?

A regular temperament, when tuned, has values for the primes which map it to a group of lower rank. For instance, meantone might send
2 to 2, 3 to 2 * 5^(1/4) and 5 to 5. Starling, a 7-limit planar temperament, when tuned, might send 2 to 2, 3 to (375/14)^(1/3), 5 to 5 and 7 to (6125/18)^(1/3) (these are minimax tunings.) However, we want to define things more abstractly, and call the above 1/4-comma meantone, etc., not just "Meantone" or "Starling".

In the case of a linear temperament, we can assume one generator is an octave or portion thereof, which means that while we have not defined the temperament as a mapping, we've at least come close. In the case of a planar temperament like Starling, this won't work. We could define standardized mappings, such as the Hermite reduced mapping, but it becomes more artificial. My answer has been to define the temperament by its associated wedgie; another option is to reduce a basis for the kernel (the commas of the temperament) in a standard way, such as Tenney-Minkowski. The more abstract creature we get in this way is what I think of as the temperament; it does not, for instance, presume octave equivalence.
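A quick numeric check of the quarter-comma example (my code; the prime images are as stated above), showing 81/80 mapping to 1:

images = {2: 2.0, 3: 2.0 * 5 ** 0.25, 5: 5.0}

def tuned(monzo):
    # monzo: prime -> exponent; returns the tuned value of the ratio.
    x = 1.0
    for p, e in monzo.items():
        x *= images[p] ** e
    return x

print(tuned({2: -4, 3: 4, 5: -1}))   # 81/80 -> 1.0 (tempered out)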

wallyesterpaulrus <wallyesterpaulrus@yahoo.com>

1/15/2003 9:39:53 AM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>Because complexity is comparative. The idea of a complete
> >>otonal chord is not less arbitrary.
> >
> >right, but whether a particular mapping is more complex
> >than another shouldn't be this arbitrary!
>
> I'm lost. If you agree with that, then what's arbitrary?
>
> >>Oh, crap. What is it you've been pushing, then?
> >>Weighted complexity?
> >
> >that's been much more common, yes.
>
> Ok. Here's my latest thinking, as promised.
>
> Ideally we'd base everything on complete n-ads, with
> harmonic entropy. Since that's not available, we'll look
> at dyadic breakdowns.
>
> If you use the concept of odd limits, and your best way
> of measuring the error of an n-ad is to break it down
> into dyads, you're basically saying that a ratio containing
> n is much different from any ratio containing at most n-2.
> Thus, I suspect that my sum of abs-errors for each odd
> identity up to the limit would make sense despite the fact
> that for dyads like 5:3 the errors may cancel.

i don't understand this reasoning at all. completely baffled, i am.

> If we throw out odd-limit, however, we might be better off.
> If there were a weighting that followed Tenney limit but
> was steep enough to make near-perfect 2:1s a fact of life
> and anything much beyond the 17-limit go away, we could
> have individually-weighted errors and 'limit infinity'.

but there would have to be an infinitely long map, or wedgie, or list
of unison vectors in order to define the temperament family.

> We should be able to search map space and assign generator
> values from scratch.

i don't understand this.

> Pure 2:1 generators should definitely
> not be assumed. Instead, we might use the appearance of
> many near-octave generators as evidence the weighting is
> right.

as gene explained, we can let the 2:1s fall as they may even with the
current framework. though the choice of what gets defined
as "generator" becomes arbitrary when the tuning actually has more
than one dimension of freedom to it, the actual parameters describing
the temperament's tuning remain perfectly well-defined.

> As far as my combining error and complexity before optimizing
> generators, that was wrong. Moreover, combining them at all
> is not for me. I'm not bound to ask, "What's the 'best' temp.
> in size range x?". Rather, I might ask, "What's the most
> accurate temperament in complexity range x?".

that's exactly how i've been looking at all this for the entire
history of this list -- witness my comments to dave defending gene's
log-flat badness measures; i took exactly this tack!

> Which is just
> a sort on all possible temperaments, first by complexity,

this is exactly how i proposed that we present the results in our
paper . . .

> then
> by accuracy.

well, you'll rarely have two temperaments with the same complexity,
though many complexity measures do put meantone and pelogic as
identical complexity, and my heuristic puts blackwood and porcupine
as identical complexity, for instance . . .

> Which is how I set up Dave's 5-limit spreadsheet
> after endlessly trying exponents in the badness calc. without
> being able to get a sensible ranking.

well, at this point, it's easy enough to sort the 5-limit database by
complexity, at least complexity as defined by my heuristic:

/tuning/database?method=reportRows&tbl=10&sortBy=3

or

/tuning/database?method=reportRows&tbl=10&sortBy=8

> >>>f(w3*error(3),w5*error(5),w5*error(5:3))
> >>>
> >>>where f is either RMS or MAD or MAX or whatever, and w3 is
> >>>your weight on ratios of 3, and w5 is your weight on ratios
> >>>of 5.
> >>
> >>Thanks. So my f is +,
> >
> >are you sure? aren't there absolute values, in which case it's
> >equivalent to MAD? (or p=1, which gene doesn't want to consider)
>
> Yep, you're right. Though its choice was just an expedient
> here, and it would be the MAD of just the identities, not of
> all the dyads in the limit.

this again. how strange since the original context in which you
proposed MAD was, iirc, 15-equal. 15-equal has a minor third which is
less that 4 cents off just, giving its triads a "locked" quality
which you liked. if you keep the magnitudes of the errors on the
identities, but make them disagree in sign, the minor third will be
32 cents off just. is such a tuning really just as good as 15-equal?
try it!

Carl Lumma <clumma@yahoo.com>

1/15/2003 10:48:32 AM

>>If we throw out odd-limit, however, we might be better off.
>>If there were a weighting that followed Tenney limit but
>>was steep enough to make near-perfect 2:1s a fact of life
>>and anything much beyond the 17-limit go away, we could
>>have individually-weighted errors and 'limit infinity'.
>
>but there would have to be an infinitely long map, or wedgie,
>or list of unison vectors in order to define the temperament
>family.

Yeah, but we could approximate it with a finite map.

>>We should be able to search map space and assign generator
>>values from scratch.
>
>i don't understand this.

The number of possible maps I'm interested in isn't that
large. Gene didn't deny that a map uniquely defined its
generators (still working through how a list of commas
could be more fundamental than a map...).

>as gene explained, we can let the 2:1s fall as they may even
>with the current framework.

What is the current framework? How have we been searching
for new systems?

>though the choice of what gets defined as "generator" becomes
>arbitrary

I never gave any special significance to the generator
vs. the ie.

>>As far as my combining error and complexity before optimizing
>>generators, that was wrong. Moreover, combining them at all
>>is not for me. I'm not bound to ask, "What's the 'best' temp.
>>in size range x?". Rather, I might ask, "What's the most
>>accurate temperament in complexity range x?".
>
>that's exactly how i've been looking at all this for the entire
>history of this list -- witness my comments to dave defending
>gene's log-flat badness measures; i took exactly this tack!

How could it defend Gene's log-flat badness? It's utterly
opposed to it!

>>Which is just a sort on all possible temperaments, first by
>>complexity,
>
>this is exactly how i proposed that we present the results in
>our paper . . .

Cool.

>>then by accuracy.
>
>well, you'll rarely have two temperaments with the same
>complexity,

Funny, I don't see Dave's 5-limitTemp spreadsheet on his
website, but with Graham complexity, you do get a fair
number of collisions IIRC.

>well, at this point, it's easy enough to sort the 5-limit
>database by complexity, at least complexity as defined by
>my heuristic:

Too bad there's nothing explaining the heuristic. :(

-C.

wallyesterpaulrus <wallyesterpaulrus@yahoo.com>

1/15/2003 11:00:05 AM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>If we throw out odd-limit, however, we might be better off.
> >>If there were a weighting that followed Tenney limit but
> >>was steep enough to make near-perfect 2:1s a fact of life
> >>and anything much beyond the 17-limit go away, we could
> >>have individually-weighted errors and 'limit infinity'.
> >
> >but there would have to be an infinitely long map, or wedgie,
> >or list of unison vectors in order to define the temperament
> >family.
>
> Yeah, but we could approximate it with a finite map.

hmm . . . aren't you then just talking about a finite limit of some
sort?

> >>We should be able to search map space and assign generator
> >>values from scratch.
> >
> >i don't understand this.
>
> The number of possible maps I'm interested in isn't that
> large. Gene didn't deny that a map uniquely defined its
> generators

not sure if this concept has been pinned down . . .

> (still working through how a list of commas
> could be more fundamental than a map...).

given a list of commas, you can determine the mapping, no matter how
you define your generators.

> >as gene explained, we can let the 2:1s fall as they may even
> >with the current framework.
>
> What is the current framework? How have we been searching
> for new systems?

well, i guess i really meant a *generalization* of the current
framework, using say an integer limit or product limit instead of an
odd limit to optimize (there, we've come full circle). gene has
pulled out a few examples, iirc.

we've been searching by commas, i believe -- i'll let gene answer
this more fully. graham has been searching by "+"ing ET maps to get
linear temperament maps -- something i'm not sure i can explain right
now.

> >though the choice of what gets defined as "generator" becomes
> >arbitrary
>
> I never gave any special significance to the generator
> vs. the ie.

that's not what i mean -- i mean, if you're dealing with a planar
temperament (which might simply be a linear temperament with
tweakable octaves) or something with higher dimension, there's no
unique choice of the basis of generators -- gene's used things such
as hermite reduction to make this arbitrary choice for him.

> >>As far as my combining error and complexity before optimizing
> >>generators, that was wrong. Moreover, combining them at all
> >>is not for me. I'm not bound to ask, "What's the 'best' temp.
> >>in size range x?". Rather, I might ask, "What's the most
> >>accurate temperament in complexity range x?".
> >
> >that's exactly how i've been looking at all this for the entire
> >history of this list -- witness my comments to dave defending
> >gene's log-flat badness measures; i took exactly this tack!
>
> How could it defend Gene's log-flat badness? It's utterly
> opposed to it!

hardly!! the idea is that, if you sort by complexity, using a log-
flat badness criterion guarantees that you'll have a similar number
of temperaments to look at within each complexity range, so the
complexity will increase rather smoothly in your list.

> Too bad there's nothing explaining the heuristic. :(

you can find old posts on this list explaining it, and other posts
which link to that explanation. the logic is complete, though the
mathematics of it is -- naturally -- heuristic in nature.

Graham Breed <graham@microtonal.co.uk>

1/15/2003 1:14:05 PM

wallyesterpaulrus wrote:

> we've been searching by commas, i believe -- i'll let gene answer
> this more fully. graham has been searching by "+"ing ET maps to get
> linear temperament maps -- something i'm not sure i can explain right
> now.

I wrote my method up a while back:

http://x31eq.com/temper/method.html

I've implemented a search by unison vectors as well. But it isn't as efficient if you use an arbitrarily large set of unison vectors, as you really should for an exhaustive search.

> that's not what i mean -- i mean, if you're dealing with a planar
> temperament (which might simply be a linear temperament with
> tweakable octaves) or something with higher dimension, there's no
> unique choice of the basis of generators -- gene's used things such
> as hermite reduction to make this arbitrary choice for him.

Even with linear temperaments, it can be difficult to make the choice unique. I had a lot of trouble with scales with a small period generated by unison vectors. The relative sizes of the possible generators can change after you optimize.

Graham

Gene Ward Smith <genewardsmith@juno.com>

1/15/2003 2:03:30 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>" <clumma@y...> wrote:

> The number of possible maps I'm interested in isn't that
> large. Gene didn't deny that a map uniquely defined its
> generators (still working through how a list of commas
> could be more fundamental than a map...).

It is easy to define a canonical set of commas by insisting it be TM reduced; and thereby associate something (a set of commas) as a unique marker for the temperament. We can also do this with maps by for instance Hermite reducing the map. No one seems to like Hermite reduction much, so if you want to define a canonical map which is better the field of opportunity is open.

> >as gene expained, we can let the 2:1s fall as they may even
> >with the current framework.
>
> What is the current framework? How have we been searching
> for new systems?

I've been coming up with a set of wedgies, setting limits to error, complexity and log-flat badness, and filtering the list. This is, obviously, not the only way to do things, but it works, and has advantages.

> >>As far as my combining error and complexity before optimizing
> >>generators, that was wrong. Moreover, combining them at all
> >>is not for me. I'm not bound to ask, "What's the 'best' temp.
> >>in size range x?". Rather, I might ask, "What's the most
> >>accurate temperament in complexity range x?".
> >
> >that's exactly how i've been looking at all this for the entire
> >history of this list -- witness my comments to dave defending
> >gene's log-flat badness measures; i took exactly this tack!
>
> How could it defend Gene's log-flat badness? It's utterly
> opposed to it!

Eh? I go with Paul; this is the point of log-flat measures.

Gene Ward Smith <genewardsmith@juno.com>

1/15/2003 2:15:10 PM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus <wallyesterpaulrus@y...>" <wallyesterpaulrus@y...> wrote:
> --- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
> <clumma@y...> wrote:
> > >>If we throw out odd-limit, however, we might be better off.
> > >>If there were a weighting that followed Tenney limit but
> > >>was steep enough to make near-perfect 2:1s a fact of life
> > >>and anything much beyond the 17-limit go away, we could
> > >>have individually-weighted errors and 'limit infinity'.
> > >
> > >but there would have to be an infinitely long map, or wedgie,
> > >or list of unison vectors in order to define the temperament
> > >family.
> >
> > Yeah, but we could approximate it with a finite map.
>
> hmm . . . aren't you then just talking about a finite limit of some
> sort?

My Zeta function method in theory goes out to limit infinity, but gives much greater weight to smaller primes, 2 in particular.

> we've been searching by commas, i believe -- i'll let gene answer
> this more fully.

One can use various methods to produce a list of wedgies to test, and merge the lists.

> graham has been searching by "+"ing ET maps to get
> linear temperament maps -- something i'm not sure i can explain right
> now.

I think plusing is the same as taking the wedge product. If you want
n-dimensional temperaments (where linear is 2, planar 3, etc.) then
you can wedge n et maps. You may also wedge pi(p)-n commas together for the same result, where p is the prime limit and pi(x) is the number theory function counting primes less than or equal to x.
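A sketch of wedging two ET maps, assuming the 5-limit patent vals for 19 and 31; the pairwise 2x2 determinants are the wedgie coefficients:

from itertools import combinations

def wedge(v, w):
    return [v[i] * w[j] - v[j] * w[i]
            for i, j in combinations(range(len(v)), 2)]

h19 = [19, 30, 44]
h31 = [31, 49, 72]
print(wedge(h19, h31))   # [1, 4, 4]: the 5-limit meantone wedgie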

Carl Lumma <clumma@yahoo.com>

1/16/2003 12:19:28 AM

>>The number of possible maps I'm interested in isn't that
>>large. Gene didn't deny that a map uniquely defined its
>>generators (still working through how a list of commas
>>could be more fundamental than a map...).
>
>It is easy to define a canonical set of commas by insisting
>it be TM reduced; and thereby associate something (a set of
>commas) as a unique marker for the temperament. We can also
>do this with maps by for instance Hermite reducing the map.
>No one seems to like Hermite reduction much, so if you want
>to define a canonical map which is better the field of
>opportunity is open.

Well, I don't know what a basis is, let alone a TM reduced
one, let alone a Hermite normal form.

>>>>As far as my combining error and complexity before optimizing
>>>>generators, that was wrong. Moreover, combining them at all
>>>>is not for me. I'm not bound to ask, "What's the 'best' temp.
>>>>in size range x?". Rather, I might ask, "What's the most
>>>>accurate temperament in complexity range x?".
>>>
>>>that's exactly how i've been looking at all this for the entire
>>>history of this list -- witness my comments to dave defending
>>>gene's log-flat badness measures; i took exactly this tack!
>>
>>How could it defend Gene's log-flat badness? It's utterly
>>opposed to it!
>
>Eh? I go with Paul; this is the point of log-flat measures.

Eh? Name your complexity range, and take the x most accurate
temperaments within it. Why do I need badness at all?

-Carl

Carl Lumma <clumma@yahoo.com>

1/16/2003 12:59:23 AM

>hmm . . . aren't you then just talking about a finite limit
>of some sort?

With a steep weighting and a promise that what we've left
out affects things less than x, sure. Maybe the Zeta
function stuff can do even better...

>>>>We should be able to search map space and assign
>>>>generator values from scratch.
>>>
>>>i don't understand this.
>>
>>The number of possible maps I'm interested in isn't that
>>large. Gene didn't deny that a map uniquely defined its
>>generators
>
>not sure if this concept has been pinned down . . .

Certainly it hasn't. For linear 5-limit temps, if I take
something like this...

2 3 5
gen1
gen2

...and start filling in numbers, either every choice of
numbers converges on a pair of exact sizes for gen1 and
gen2 under an error function of a certain class (say, one
to which RMS, and apparently not minimax, belongs), or it
doesn't. Am I to understand it doesn't?

>>(still working through how a list of commas
>>could be more fundamental than a map...).
>
>given a list of commas, you can determine the
>mapping, no matter how you define your generators.

Are you saying that the same set of commas could vanish
under two different maps, each with different gens? If
so, can you give an example?

>we've been searching by commas, i believe -- i'll let gene
>answer this more fully. graham has been searching by "+"ing
>ET maps to get linear temperament maps -- something i'm not
>sure i can explain right now.

I remember Graham using the +ing ets method. He claimed it
was pretty fast and comprehensive. I remember thinking it
was anything but pretty.

I don't understand wedgies, so...

If the above is true about commas, then complexity should be
defined in terms of commas, and we could search all sets of
simple commas...

>that's not what i mean -- i mean, if you're dealing with a
>planar temperament (which might simply be a linear
>temperament with tweakable octaves)

How can tweaking one of the generators of a linear temperament
turn it into a planar temperament? You need a third gen!

>or something with higher dimension, there's no unique choice
>of the basis of generators -- gene's used things such as
>hermite reduction to make this arbitrary choice for him.

What's a "basis of generators"?

>the idea is that, if you sort by complexity, using a log-
>flat badness criterion guarantees that you'll have a similar
>number of temperaments to look at within each complexity
>range, so the complexity will increase rather smoothly in
>your list.

You mean if I take the twenty "best" 5-limit temperaments
and sort by badness, the resulting list will alse be sorted
by complexity, then accuracy? There didn't seem to be any
exponents that would do this on Dave's spreadsheet, and I
thought I tried the critical exponent for log-flat temps.

>though the mathematics of it is -- naturally -- heuristic
>in nature.

?

AFAIK, a heuristic is an algorithm that attempts to search
only a fraction of a network yet still deliver results one
can have confidence in.

-Carl

Graham Breed <graham@microtonal.co.uk>

1/16/2003 3:58:16 AM

Gene Ward Smith wrote:

> I think plusing is the same as taking the wedge product. If you want
> n-dimensional temperaments (where linear is 2, planar 3, etc.) then
> you can wedge n et maps. You may also wedge pi(p)-n commas together for the same result, where p is the prime limit and pi(x) is the number theory function counting primes less than or equal to x.

The dual/complement of the wedge product. But the wedge product can't distinguish torsion from contorsion.

I'm using the & operator for combining equal temperaments to get a linear temperament. The + operator can then be for adding equal temperaments to get another equal temperament. So h19&h31=meantone, but h19+h31=h50.
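Sketched on 5-limit patent vals (my assumption), & corresponds to the wedge shown a couple of messages back, while + is just elementwise addition:

h19 = [19, 30, 44]
h31 = [31, 49, 72]
print([a + b for a, b in zip(h19, h31)])   # [50, 79, 116], i.e. h50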

Graham

Graham Breed <graham@microtonal.co.uk>

1/16/2003 5:48:33 AM

Carl Lumma wrote:
> If the above is true about commas, then complexity should be
> defined in terms of commas, and we could search all sets of
> simple commas...

What sets of simple commas do you propose to take? The files I have here show equivalences between second-order tonality diamonds. If you have p odd numbers in your ratios, the tonality diamond will have of the order of p**2 (p squared) ratios. The second order diamond is made by combining these, so that gives O(p**4) ratios. You're then setting pairs of these ratios to be equivalent, giving O(p**8) commas.

You then need to take combinations of pi(p)-1 commas. (That is, the number of primes in the ratios, minus one.) If you have many more commas than primes, that will go as the pi(p)-1st power. So the total number of wedgies you have to consider is O((p**8)**(pi(p)-1))

Let's simplify this by setting pi(p)~p. To get an underestimate, I'll count it as pi(p) but still call it p. The complexity of finding p-prime linear temperaments is then O(p**(8*(p-1)))

In the 7-limit, there are only 4 primes, so the calculation is O(4**(8*3)) = O(4**24) = O(2.8e14) candidates (not that many, but that doesn't matter because we don't know how long each one will take).

In the 19-limit, there are 8 primes. So we need O(8**(8*7)) = O(8**56) = O(3.7e50)

If that really is 3.7e50 candidates, it's impossible. But even in comparison to the 7-limit case, it's huge. And (without much optimisation) I couldn't even do the 7-limit calculation!

For display purposes, I'm only showing commas between the first n*3 ratios from the second-order diamond, where n is the size of the second order diamond. Using that would mean only O(p**(4*(p-1))) candidates. So only O(8**28) in the 19-limit, or O(2e25). But this isn't good enough -- most of my top 19-limit temperaments don't have 7 such equivalences.

Whereas combining equal temperaments only gives O(n**2) calculations, where n is the number of ETs you consider. I find n=20 works well, requiring O(400) candidates. This is true in the 5-limit and also the 21-limit. I haven't heard of anybody doing a similar search with unison vectors in the 21-limit, or even suggesting ways to reduce the complexity.
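The counts above, checked quickly:

print(f"{4**24:.1e}")   # 7-limit comma search: ~2.8e+14 candidates
print(f"{8**56:.1e}")   # 19-limit comma search: ~3.7e+50 candidates
print(20 ** 2)          # pairing 20 ETs: on the order of 400 candidates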

Graham

wallyesterpaulrus <wallyesterpaulrus@yahoo.com>

1/16/2003 12:37:35 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:

> >Eh? I go with Paul; this is the point of log-flat measures.
>
> Eh? Name your complexity range, and take the x most accurate
> temperaments within it. Why do I need badness at all?

it's a much easier, and prettier, way to achieve this. you get the
distribution you want regardless of where you put the endpoints of
your complexity ranges.

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/16/2003 12:50:17 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:

> Are you saying that the same set of commas could vanish
> under two different maps, each with different gens? If
> so, can you give an example?

81:80

in terms of octave and fifth, the map is [1,0] [1,1] [4,0]

in terms of octave and twelfth, the map is [1,0] [0,1] [4,-4]
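
For concreteness, here is one way to check this sort of thing in Python; the two-vals-per-basis layout is my own normalization, not necessarily the bracket convention above:

monzo = [-4, 4, -1]                        # 81/80 = 2**-4 * 3**4 * 5**-1

octave_fifth   = ([1, 1, 0], [0, 1, 4])    # where 2, 3, 5 land in octaves/fifths
octave_twelfth = ([1, 0, -4], [0, 1, 4])   # where 2, 3, 5 land in octaves/twelfths

for basis in (octave_fifth, octave_twelfth):
    print([sum(v * m for v, m in zip(val, monzo)) for val in basis])
    # [0, 0] both times: 81:80 vanishes under either choice of generators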

> If the above is true about commas, then complexity should be
> defined in terms of commas,

it's nice when there's only one comma. then the log of the numbers in
the comma (say, the log of the odd limit) is an excellent estimate of
complexity (it's what i call the heuristic complexity). if there's
more than one comma being tempered out, we need a notion of
the "angle" between the commas . . .

> >that's not what i mean -- i mean, if you're dealing with a
> >planar temperament (which might simply be a linear
> >temperament with tweakable octaves)
>
> How can tweaking one of the generators of a linear temperament
> turn it into a planar temperament? You need a third gen!

it's a planar temperament in the sense that there are two
independently tweakable generators, two independent dimensions needed
in order to define each pitch of the tuning.

> >or something with higher dimension, there's no unique choice
> >of the basis of generators -- gene's used things such as
> >hermite reduction to make this arbitrary choice for him.
>
> What's a "basis of generators"?

for the case of meantone, one example would be octave and fifth.
another example would be octave and twelfth. another example would be
fifth and fourth. another example would be major second and minor
second. each pair comprises a complete basis for the vector space of
pitches in the tuning.

> >the idea is that, if you sort by complexity, using a log-
> >flat badness criterion guarantees that you'll have a similar
> >number of temperaments to look at within each complexity
> >range, so the complexity will increase rather smoothly in
> >your list.
>
> You mean if I take the twenty "best" 5-limit temperaments
> and sort by badness, the resulting list will alse be sorted
> by complexity, then accuracy?

no. you use a badness cutoff simply to define the list of
temperaments in the first place. *then* you sort by complexity.

> >though the mathematics of it is -- naturally -- heuristic
> >in nature.
>
> ?
>
> AFAIK, a heuristic is an algorithm that attempts to search
> only a fraction of a network yet still deliver results one
> can have confidence in.

from yourdictionary.com:

heu·ris·tic
adj.
Of or relating to a usually speculative formulation serving as a
guide in the investigation or solution of a problem: "The historian
discovers the past by the judicious use of such a heuristic device as
the 'ideal type'" (Karl J. Weintraub).

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/16/2003 12:53:12 PM

--- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:
> Gene Ward Smith wrote:
>
> > I think plusing is the same as taking the wedge product. If you want
> > n-dimensional temperaments (where linear is 2, planar 3, etc.) then
> > you can wedge n et maps. You may also wedge pi(p)-n commas together
> > for the same result, where p is the prime limit and pi(x) is the
> > number theory function counting primes less than or equal to x.
>
> The dual/complement of the wedge product. But the wedge product can't
> distinguish torsion from contorsion.
>
> I'm using the & operator for combining equal temperaments to get a
> linear temperament.

that's the wedge product. i've never had any problems detecting when
torsion is or isn't present with it. when the gcd isn't 1, you have
torsion. right?

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/16/2003 12:57:38 PM

--- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:

> Whereas combining equal temperaments only gives O(n**2) calculations,
> where n is the number of ETs you consider. I find n=20 works well,
> requiring O(400) candidates. This is true in the 5-limit and also the
> 21-limit. I haven't heard of anybody doing a similar search with unison
> vectors in the 21-limit, or even suggesting ways to reduce the
> complexity.
>
> Graham

hasn't gene done a 13-limit search . . . *not* by starting from ETs?

🔗Graham Breed <graham@microtonal.co.uk>

1/16/2003 1:45:18 PM

wallyesterpaulrus wrote:

> that's the wedge product. i've never had any problems detecting when
> torsion is or isn't present with it. when the gcd isn't 1, you have
> torsion. right?

The wedge product of a set of commas is the complement of the wedge product of a pair of equal mappings that define the same linear temperament. The two are different, and it matters when you do quantitative calculations, although Gene seems to get round this in a way I don't understand.

Graham

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/16/2003 1:48:28 PM

>>If the above is true about commas, then complexity should be
>>defined in terms of commas, and we could search all sets of
>>simple commas...
>
>What sets of simple commas do you propose to take?

All of them within taxicab radius r on the triangular harmonic
lattice. T(r), the number of tones within r, then seems to be
6r + T(r-1); T(0)=1 in the 5-limit. That seems to be roughly
O(r**2). The number of possible commas is then the 2-combination
of this, which again is roughly a squaring. So O(r**4) at the end
of the day.

Perhaps that first 2 is the number of dimensions on the lattice,
in which case the 15-limit would be O(r**14).
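
A quick sketch of that count in Python (the closed form 1 + 3r(r+1) follows from the recursion):

def T(r):                    # tones within taxicab radius r, 5-limit hex lattice
    return 1 if r == 0 else 6 * r + T(r - 1)

for r in range(1, 6):
    assert T(r) == 1 + 3 * r * (r + 1)       # closed form, so O(r**2) tones
    print(r, T(r), T(r) * (T(r) - 1) // 2)   # tones, then pairs: O(r**4)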

>The files I have here show equivalences between second-order
>tonality diamonds.

Which are? Tonality diamonds of the pitches in a tonality
diamond?

>If you have p odd numbers in your ratios, the tonality diamond
>will have of the order of p**2 (p squared) ratios. The second
>order diamond is made by combining these, so that gives O(p**4)
>ratios. You're then setting pairs of these ratios to be
>equivalent, giving O(p**8) commas.

Ah yeah, that's a lot of commas, and it seems a rather ad hoc
way to select them.

>You then need to take combinations of pi(p)-1 commas.
>(That is one minus the number of primes in the ratios.)

Right.

>If you have many more commas than primes, that will go as
>the pi(p)-1st power.

I'll take your word for it.

>So the total number of wedgies you have to consider is
>O((p**8)**(pi(p)-1)). Let's simplify this by setting pi(p)~p.
>To get an underestimate, I'll count it as pi(p) but still
>call it p. The complexity of finding p-prime linear
>temperaments is then O(p**(8(p-1)))
//
>In the 19-limit, there are 8 primes. So we need
>O(8**(8*7)) = O(8**56) = O(3.7e50). If that really
>is 3.7e50 candidates, it's impossible.

:(

>Whereas combining equal temperaments only gives O(n**2)
>calculations, where n is the number of ETs you consider.
>I find n=20 works well, requiring O(400) candidates.
>This is true in the 5-limit and also the 21-limit. I
>haven't heard of anybody doing a similar search with
>unison vectors in the 21-limit, or even suggesting ways
>to reduce the complexity.

There must be many ways to reduce the complexity of the
method I suggest. For example, rather than finding all
the pitches and taking combinations, we might set a max
comma size (as an interval) and step through ratios
involving up to r compoundings of the allowed factors
that are smaller than that bound.

-Carl

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/16/2003 1:56:23 PM

--- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:
> wallyesterpaulrus wrote:
>
> > that's the wedge product. i've never had any problems detecting when
> > torsion is or isn't present with it. when the gcd isn't 1, you have
> > torsion. right?
>
> The wedge product of a set of commas is the complement of the wedge
> product of a pair of equal mappings that define the same linear
> temperament. The two are different, and it matters when you do
> quantitative calculations, although Gene seems to get round this in a
> way I don't understand.

well let's get these things understood!

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/16/2003 2:00:05 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>If the above is true about commas, then complexity should be
> >>defined in terms of commas, and we could search all sets of
> >>simple commas...
> >
> >What sets of simple commas do you propose to take?
>
> All of them within taxicab radius r on the triangular harmonic
> lattice. T(r), the number of tones within r, then seems to be
> 6r + T(r-1); T(0)=1 in the 5-limit. That seems to be roughly
> > O(r**2). The number of possible commas is then the 2-combination
> > of this, which again is roughly a squaring. So O(r**4) at the end
> of the day.

this is silly. all you need to do to get all these commas to come up
as *tones* is to double your radius. so 2*O(r**2) will do -- the
O(r**4) method is extremely redundant.

> Perhaps that first 2 is the number of dimensions on the lattice,
> in which case the 15-limit would be O(r**14).

if we're looking for an efficient search, we'd definitely use a prime-
limit lattice, not one with redundant points!

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/16/2003 2:03:45 PM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus
<wallyesterpaulrus@y...>" <wallyesterpaulrus@y...> wrote:
> --- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
> <clumma@y...> wrote:
> > >>If the above is true about commas, then complexity should be
> > >>defined in terms of commas, and we could search all sets of
> > >>simple commas...
> > >
> > >What sets of simple commas do you propose to take?
> >
> > All of them within taxicab radius r on the triangular harmonic
> > lattice. T(r), the number of tones within r, then seems to be
> > 6r + T(r-1); T(0)=1 in the 5-limit. That seems to be roughly
> > O(r**2). The number of possible commas is then the 2-combination
> > of this, which again is roughly a squaring. So O(r**4) at the end
> > of the day.
>
> this is silly. all you need to do to get all these commas to come up
> as *tones* is to double your radius. so 2*O(r**2) will do -- the
> O(r**4) method is extremely redundant.

and, in fact, kees van prooijen does exactly this search on tones,
here:

http://www.kees.cc/tuning/perbl.html

and keeps the smallest, second-smallest, and third-smallest commas
for each possible r, and comes up with this list:

http://www.kees.cc/tuning/s235.html

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/16/2003 2:13:29 PM

>>Are you saying that the same set of commas could vanish
>>under two different maps, each with different gens? If
>>so, can you give an example?
>
>81:80
>
>in terms of octave and fifth, the map is [1,0] [1,1] [4,0]
>
>in terms of octave and twelfth, the map is [1,0] [0,1] [4,-4]

Hmm. How bad are such cases? Could we take them out at the
end?

>>If the above is true about commas, then complexity should be
>>defined in terms of commas,
>
>it's nice when there's only one comma. then the log of the
>numbers in the comma (say, the log of the odd limit) is an
>excellent estimate of complexity (it's what i call the
>heuristic complexity).

That's what I call taxicab complexity, I think.

Isn't it also the case that the same temperament may be defined
by different lists of commas?

>if there's more than one comma being tempered out, we need
>a notion of the "angle" between the commas . . .

Please explain.

> > How can tweaking one of the generators of a linear temperament
> > turn it into a planar temperament? You need a third gen!
>
> it's a planar temperament in the sense that there are two
> independently tweakable generators, two independent dimensions
> needed in order to define each pitch of the tuning.

You assume there was an untweakable one in the 5-limit case?
Bah!

And in the 'now it's planar' case, you would no longer have an
untweakable one. Something's got to give!

>>What's a "basis of generators"?
>
>for the case of meantone, one example would be octave and fifth.
>another example would be octave and twelfth. another example
>would be fifth and fourth. another example would be major second
>and minor second. each pair comprises a complete basis for the
>vector space of pitches in the tuning.

Thanks. There is also, I gather, such a thing as a basis of
commas? TM (TM stands for?) reduction applies to commas only,
right?

> > You mean if I take the twenty "best" 5-limit temperaments
> > and sort by badness, the resulting list will also be sorted
> > by complexity, then accuracy?
>
> no. you use a badness cutoff simply to define the list of
> temperaments in the first place.

That's the same as taking the 20 "best" temperaments.

>*then* you sort by complexity.

Aha! This isn't better than taking the 20 simplest temperaments
and sorting by accuracy.

-Carl

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/16/2003 2:18:01 PM

>and, in fact, kees van prooijen does exactly this search
>on tones, here:
>
> http://www.kees.cc/tuning/perbl.html

Man, it's been a while since I looked at Kees' site. This
is a goldmine!

>and keeps the smallest, second-smallest, and third-smallest
>commas for each possible r, and comes up with this list:
>
> http://www.kees.cc/tuning/s235.html

That's badass.

-C.

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/16/2003 2:28:25 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>Are you saying that the same set of commas could vanish
> >>under two different maps, each with different gens? If
> >>so, can you give an example?
> >
> >81:80
> >
> >in terms of octave and fifth, the map is [1,0] [1,1] [4,0]
> >
> >in terms of octave and twelfth, the map is [1,0] [0,1] [4,-4]
>
> Hmm. How bad are such cases? Could we take them out at the
> end?

i have no idea what you're asking. bad?

> >>If the above is true about commas, then complexity should be
> >>defined in terms of commas,
> >
> >it's nice when there's only one comma. then the log of the
> >numbers in the comma (say, the log of the odd limit) is an
> >excellent estimate of complexity (it's what i call the
> >heuristic complexity).
>
> That's what I call taxicab complexity, I think.

not quite. for one thing, read this:

http://www.kees.cc/tuning/lat_perbl.html

including the link to my observations.

> Isn't it also the case that the same temperament may be defined
> by different lists of commas?
>
> >if there's more than one comma being tempered out, we need
> >a notion of the "angle" between the commas . . .
>
> Please explain.

search for "straightess" in these archives . . .

> > > How can tweaking one of the generators of a linear temperament
> > > turn it into a planar temperament? You need a third gen!
> >
> > it's a planar temperament in the sense that there are two
> > independently tweakable generators, two independent dimensions
> > needed in order to define each pitch of the tuning.
>
> You assume there was an untweakable one in the 5-limit case?
> Bah!

are you saying the octave should never be assumed to be exactly 1200
cents?

> And in the 'now it's planar' case, you would no longer have an
> untweakable one. Something's got to give!

i'm not following you.

> >>What's a "basis of generators"?
> >
> >for the case of meantone, one example would be octave and fifth.
> >another example would be octave and twelfth. another example
> >would be fifth and fourth. another example would be major second
> >and minor second. each pair comprises a complete basis for the
> >vector space of pitches in the tuning.
>
> Thanks. There is also, I gather, such a thing as a basis of
> commas? TM (TM stands for?)

tenney-minkowski. tenney is the metric being minimized, and minkowski
provided a basis-reduction algorithm applicable to such a case.

> reduction applies to commas only,
> right?

right.

> > > You mean if I take the twenty "best" 5-limit temperaments
> > > and sort by badness, the resulting list will also be sorted
> > > by complexity, then accuracy?
> >
> > no. you use a badness cutoff simply to define the list of
> > temperaments in the first place.
>
> That's the same as taking the 20 "best" temperaments.

well, if your badness cutoff, extreme error cutoff, and extreme
complexity cutoff leave you with 20 inside, and if such a clunky
tripartite criterion is what you define as "best".

> >*then* you sort by complexity.
>
> Aha! This isn't better than taking the 20 simplest temperaments
> and sorting by accuracy.

of course it is. most of the 20 simplest temperaments are garbage.
don't you want the several best temperaments in each complexity
range, going up to complexity higher than you could ever use?

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/16/2003 2:31:24 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >and, in fact, kees van prooijen does exactly this search
> >on tones, here:
> >
> > http://www.kees.cc/tuning/perbl.html
>
> Man, it's been a while since I looked at Kees' site. This
> is a goldmine!
>
> >and keeps the smallest, second-smallest, and third-smallest
> >commas for each possible r, and comes up with this list:
> >
> > http://www.kees.cc/tuning/s235.html
>
> That's badass.
>
> -C.

and, since there are only two coordinates in the 5-limit, the comma
search (and thus the linear temperament search) requires only O(r^2)
operations -- kees apparently took r to be on the order of 10^5, much
more than anyone could ever possibly need :)
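
In outline, a brute-force version of that scan looks like this in Python (kees's actual procedure and weighting are on the pages above; this just uses plain octave-reduced size):

from math import log2

r = 30
found = []
for a in range(-r, r + 1):                       # exponent of 3
    for b in range(abs(a) - r, r - abs(a) + 1):  # exponent of 5, |a|+|b| <= r
        if (a, b) == (0, 0):
            continue
        size = (a * log2(3) + b * log2(5)) % 1.0      # octaves, mod the octave
        found.append((min(size, 1.0 - size) * 1200, a, b))
found.sort()
print(found[:3])    # the smallest commas within this radius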

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/16/2003 2:32:41 PM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus
<wallyesterpaulrus@y...>" <wallyesterpaulrus@y...> wrote:

> search for "straightess" in these archives . . .

oops -- i meant "straightness"!

🔗Graham Breed <graham@microtonal.co.uk>

1/16/2003 2:46:08 PM

Carl Lumma wrote:

> Perhaps that first 2 is the number of dimensions on the lattice,
> in which case the 15-limit would be O(r**14).

Yes, it's the volume of a hypersphere. So it climbs dramatically with the number of dimensions, which is one less than the number of primes (or odd numbers if you're being masochistic). For 8 dimensions, you still get O((r**7)**7) = O(r**49) candidate wedgies.

>>The files I have here show equivalences between second-order
>>tonality diamonds.
>
> Which are? Tonality diamonds of the pitches in a tonality
> diamond?

Yes

> Ah yeah, that's a lot of commas, and it seems a rather ad hoc
> way to select them.

The advantage of commas taken from the nth-order tonality diamond is that you can predict how good a temperament they can be involved in. The size of the comma in cents divided by the order of the tonality diamond is the best possible minimax error.

> There must be many ways to reduce the complexity of the
> method I suggest. For example, rather than finding all
> the pitches and taking combinations, we might set a max
> comma size (as an interval) and step through ratios
> involving up to r compoundings of the allowed factors
> that are smaller than that bound.

Yes, you could, for example, take only commas less than 10 cents. That means only 10/1200 or 1/120 of the commas stay. So you save two orders of magnitude from your set of commas, but no more. I've found the code for my search, and I was using this kind of optimization.
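
A sketch of that pre-filter in Python (the comma list here is a sample, not my actual candidates):

from math import log2

def cents(n, d):
    return 1200 * log2(n / d)

candidates = [(81, 80), (2048, 2025), (32805, 32768), (15625, 15552)]
small = [(n, d) for (n, d) in candidates if cents(n, d) < 10.0]
print(small)    # keeps only the schisma and the kleisma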

The big reduction in complexity would come from reducing the number of combinations of commas you need to take. Ideally some way of predicting that a small set of commas (all good on their own) will never give a good temperament by adding more.

There are certainly a lot of commas that can be relevant. Below are all 42 of the equivalences from my list of 19-limit temperaments (generated from 20 equal temperaments). Some may not be unique unison vectors, but whatever, they aren't sufficient to generate most of the linear temperaments anyway. There are 27 million combinations of 7 from 42. 7 from 28 already gives over a million combinations.

22:19 =~ 15:13
16:15 =~ 17:16
15:11 =~ 26:19
135:128 =~ 19:18
65:48 =~ 19:14
20:19 =~ 19:18
24:19 =~ 19:15
39:32 =~ 128:105
14:13 =~ 128:119
45:32 =~ 128:91
256:195 =~ 21:16
256:221 =~ 22:19
65:64 =~ 64:63
128:117 =~ 35:32
17:13 =~ 64:49
13:10 =~ 64:49
21:20 =~ 20:19
40:33 =~ 17:14
48:35 =~ 256:187
18:17 =~ 128:121
40:39 =~ 49:48
128:117 =~ 12:11
11:9 =~ 39:32
256:221 =~ 22:19
25:24 =~ 133:128
22:17 =~ 128:99
128:91 =~ 7:5
64:57 =~ 28:25
32:27 =~ 13:11
256:209 =~ 11:9
19:18 =~ 128:121
28:25 =~ 143:128
18:17 =~ 17:16
9:8 =~ 64:57
171:128 =~ 4:3
28:27 =~ 133:128
128:95 =~ 27:20
32:27 =~ 19:16
24:19 =~ 81:64
135:128 =~ 20:19
165:128 =~ 128:99
135:128 =~ 19:18 =~ 128:121

Graham

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/16/2003 2:48:41 PM

> > Hmm. How bad are such cases? Could we take them out at the
> > end?
>
> i have no idea what you're asking. bad?

Can you find them at the end of a map-space search, and take
them out?

>>That's what I call taxicab complexity, I think.
>
> not quite. for one thing, read this:
>
> http://www.kees.cc/tuning/lat_perbl.html
>
> including the link to my observations.
//
> > >if there's more than one comma being tempered out, we need
> > >a notion of the "angle" between the commas . . .
> >
> > Please explain.
>
> search for "straightess" in these archives . . .

Working...

> > You assume there was an untweakable one in the 5-limit case?
> > Bah!
>
> are you saying the octave should never be assumed to be
> exactly 1200 cents?

Yes, yes, and... yes.

>>And in the 'now it's planar' case, you would no longer have an
>>untweakable one. Something's got to give!
>
> i'm not following you.

If you insist on there being an untweakable generator, you won't
have it if you take a linear temperament, tweak the octave, and
call it planar.

> > Thanks. There is also, I gather, such a thing as a basis of
> > commas? TM (TM stands for?)
>
> tenney-minkowski. tenney is the metric being minimized, and
> minkowski provided a basis-reduction algorithm applicable to
> such a case.
>
> > reduction applies to commas only,
> > right?
>
> right.

Thanks again. So if reduction is necc., it means that a
temperament can be described by two different lists of commas,
right? This means we'll have the same problem searching comma
space as we did map space. So wedgies are our last hope.

> > > no. you use a badness cutoff simply to define the list of
> > > temperaments in the first place.
> >
> > That's the same as taking the 20 "best" temperaments.
>
> well, if your badness cutoff, extreme error cutoff, and extreme
> complexity cutoff leave you with 20 inside, and if such a clunky
> tripartite criterion is what you define as "best".

Are you saying a badness cutoff is not sufficient to give a
finite list of temperaments?

-Carl

🔗Graham Breed <graham@microtonal.co.uk>

1/16/2003 3:03:24 PM

>
>
>Yes, it's the volume of a hypersphere. So it climbs dramatically with
>the number of dimensions, which is one less than the number of primes
>(or odd numbers if you're being masochistic). For 8 dimensions, you
>still get O((r**7)**7) = O(r**49) candidate wedgies.
>
Oops! That should be O((r**7)**6) or O(r**42) and 6 from 42 only gives 5 million or so combinations. I made the same mistake in the earlier post: there are n-2 commas needed to define a linear temperament using n primes.

Graham

🔗Gene Ward Smith <genewardsmith@juno.com> <genewardsmith@juno.com>

1/16/2003 3:11:12 PM

--- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:
> Carl Lumma wrote:

> Lets simplify this by setting pi(p)~p.

The prime number theorem says pi(x)~x/(ln(x)-1).

> Whereas combing equal temperaments only gives O(n**2) calculations,
> where n is the number of ETs you consider. I find n=20 works well,
> requiring O(400) candidates.

Once you take wedgies, you should have fewer candidates.

🔗Gene Ward Smith <genewardsmith@juno.com> <genewardsmith@juno.com>

1/16/2003 3:25:13 PM

--- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:

> The wedge product of a set of commas is the complement of the wedge
> product of a pair of equal mappings that define the same linear
> temperament. The two are different, and it matters when you do
> quantitative calculations, although Gene seems to get round this in a
> way I don't understand.

The two may be identified via Poincare duality, so I identify them. In practice, I reorder the terms of the wedge product I get from wedging commas so that it is the same list as the wedge product I get from wedging ets.

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/16/2003 3:28:28 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> > > Hmm. How bad are such cases? Could we take them out at the
> > > end?
> >
> > i have no idea what you're asking. bad?
>
> Can you find them at the end of a map-space search, and take
> them out?

what do you want to take out? from what?

> >>That's what I call taxicab complexity, I think.
> >
> > not quite. for one thing, read this:
> >
> > http://www.kees.cc/tuning/lat_perbl.html
> >
> > including the link to my observations.
> //

what does "//" mean?

> If you insist on there being an untweakable generator, you won't
> have it if you take a linear temperament, tweak the octave, and
> call it planar.

correct. the untweakable generator has been tweaked. is that all?

> > > reduction applies to commas only,
> > > right?
> >
> > right.
>
> Thanks again. So if reduction is necc., it means that a
> temperament can be described by two different lists of commas,
> right?

right, although for single-comma temperaments, only one choice leaves
you without torsion.

> This means we'll have the same problem searching comma
> space as we did map space. So wedgies are our last hope.

the problem i was pointing out with map space, i think, was that the
arbitrariness of the set of generators means your complexity ranking
(if it's just based on the numbers of the map) will be meaningless.

> > > > no. you use a badness cutoff simply to define the list of
> > > > temperaments in the first place.
> > >
> > > That's the same as taking the 20 "best" temperaments.
> >
> > well, if your badness cutoff, extreme error cutoff, and extreme
> > complexity cutoff leave you with 20 inside, and if such a clunky
> > tripartite criterion is what you define as "best".
>
> Are you saying a badness cutoff is not sufficient to give a
> finite list of temperaments?

exactly. in *every* complexity range you have about the same number
of temperaments with log-flat badness lower than some cutoff -- and
there are an infinite number of non-overlapping complexity ranges.

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/16/2003 3:29:04 PM

--- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:
> >
> >
> >Yes, it's the volume of a hypersphere. So it climbs dramatically with
> >the number of dimensions, which is one less than the number of primes
> >(or odd numbers if you're being masochistic). For 8 dimensions, you
> >still get O((r**7)**7) = O(r**49) candidate wedgies.
> >
> Oops! That should be O((r**7)**6) or O(r**42) and 6 from 42 only gives
> 5 million or so combinations.

still vastly redundant.

🔗Gene Ward Smith <genewardsmith@juno.com> <genewardsmith@juno.com>

1/16/2003 3:41:33 PM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus <wallyesterpaulrus@y...>" <wallyesterpaulrus@y...> wrote:
>
> tenney-minkowski. tenney is the metric being minimized, and minkowski
> provided a basis-reduction algorithm applicable to such a case.

He supplied a criterion; there seem to be no algorithms better than brute force.

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/16/2003 3:42:30 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith
<genewardsmith@j...>" <genewardsmith@j...> wrote:
> --- In tuning-math@yahoogroups.com, "wallyesterpaulrus
<wallyesterpaulrus@y...>" <wallyesterpaulrus@y...> wrote:
> >
> > tenney-minkowski. tenney is the metric being minimized, and minkowski
> > provided a basis-reduction algorithm applicable to such a case.
>
> He supplied a criterion; there seem to be no algorithms better than
> brute force.

but it's a criterion guaranteed to give a unique answer, right?

🔗Gene Ward Smith <genewardsmith@juno.com> <genewardsmith@juno.com>

1/16/2003 4:09:55 PM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus <wallyesterpaulrus@y...>" <wallyesterpaulrus@y...> wrote:

> but it's a criterion guaranteed to give a unique answer, right?

It's unique.

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/16/2003 4:13:49 PM

>>Can you find them at the end of a map-space search, and take
>>them out?
>
>what do you want to take out?

Can I identify the duplicate temperaments?

>from what?

A search of all possible maps.

> what does "//" mean?

Cut. I've been using it since I've been on these lists.

>correct. the untweakable generator has been tweaked. is that all?

Assuming 2:1 reduction makes me squirm in my chair, is all.
Plentiful near-2:1s should emerge from the search if the criteria
are right.

>>Thanks again. So if reduction is necc., it means that a
>>temperament can be described by two different lists of commas,
>>right?
>
>right, although for single-comma temperaments, only one choice
>leaves you without torsion.

Thanks yet again.

>>This means we'll have the same problem searching comma
>>space as we did map space. So wedgies are our last hope.
>
>the problem i was pointing out with map space, i think, was
>that the arbitrariness of the set of generators means your
>complexity ranking (if it's just based on the numbers of the
>map) will be meaningless.

Oh. Now I get it! You're right. But doesn't the same
problem occur with different commatic representations, when
defining complexity off the commas?

>>Are you saying a badness cutoff is not sufficient to give a
>>finite list of temperaments?
>
>exactly. in *every* complexity range you have about the same
>number of temperaments with log-flat badness lower than some
>cutoff -- and there are an infinite number of non-overlapping
>complexity ranges.

Oh. I guess I need some examples, then, of most of the simple
temperaments that are garbage...

-Carl

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/16/2003 4:22:12 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>Can you find them at the end of a map-space search, and take
> >>them out?
> >
> >what do you want to take out?
>
> Can I identify the duplicate temperaments?

duplicate? any temperament can be generated from an infinite number
of possible basis vectors.

> >correct. the untweakable generator has been tweaked. is that all?
>
> Assuming 2:1 reduction makes me squirm in my chair, is all.
> Plentiful near-2:1s should emerge from the search if the criteria
> are right.

if the criteria include mapping to, and minimizing error from, 2:1,
then of course a near 2:1 will emerge in each temperament.

> >>This means we'll have the same problem searching comma
> >>space as we did map space. So wedgies are our last hope.
> >
> >the problem i was pointing out with map space, i think, was
> >that the arbitrariness of the set of generators means your
> >complexity ranking (if it's just based on the numbers of the
> >map) will be meaningless.
>
> Oh. Now I get it! You're right. But doesn't the same
> problem occur with different commatic representations, when
> defining complexity off the commas?

not if you define complexity right!

> >>Are you saying a badness cutoff is not sufficient to give a
> >>finite list of temperaments?
> >
> >exactly. in *every* complexity range you have about the same
> >number of temperaments with log-flat badness lower than some
> >cutoff -- and there are an infinite number of non-overlapping
> >complexity ranges.
>
> Oh. I guess I need some examples, then, of most of the simple
> temperaments that are garbage...

what are the 20 simplest 5-limit intervals? now set each of these to
be the commatic unison vector, and what temperaments do you get? gene
has studied a few of them because they actually had low badness --
but the error is so high that the generators themselves sometimes
provide better approximations to a consonance than the interval the
consonance is mapped to!
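
To see the garbage concretely, a small Python sketch using the error heuristic that comes up later in this thread, (n-d)/(d*log(d)) -- the simplest intervals, treated as commas, have enormous heuristic error:

from math import log

for n, d in [(4, 3), (5, 4), (6, 5), (9, 8), (10, 9), (81, 80)]:
    print(f"{n}:{d}  heuristic error ~ {(n - d) / (d * log(d)):.4f}")
    # e.g. tempering out 4:3 scores about 100 times worse than 81:80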

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/16/2003 4:25:26 PM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus
<wallyesterpaulrus@y...>" <wallyesterpaulrus@y...> wrote:
> --- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
> <clumma@y...> wrote:
> > >>Can you find them at the end of a map-space search, and take
> > >>them out?
> > >
> > >what do you want to take out?
> >
> > Can I identify the duplicate temperaments?
>
> duplicate? any temperament can be generated from an infinite number
> of possible basis vectors.

oops -- i meant sets of possible basis vectors, assuming the
temperament is 2-d or more.

🔗Gene Ward Smith <genewardsmith@juno.com> <genewardsmith@juno.com>

1/16/2003 4:45:24 PM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus <wallyesterpaulrus@y...>" <wallyesterpaulrus@y...> wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:

> > Oops! That should be O((r**7)**6) or O(r**42) and 6 from 42 only gives
> > 5 million or so combinations.
>
> still vastly redundant.

One possibility would be to choose a comma list with a badness cutoff, where the badness was the badness of the corresponding codimension one, or one-comma, temperament. I think I'll try that unless someone has a better suggestion.

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/16/2003 4:59:54 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith
<genewardsmith@j...>" <genewardsmith@j...> wrote:
> --- In tuning-math@yahoogroups.com, "wallyesterpaulrus
<wallyesterpaulrus@y...>" <wallyesterpaulrus@y...> wrote:
> > --- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:
> >
> > > Oops! That should be O((r**7)**6) or O(r**42) and 6 from 42 only gives
> > > 5 million or so combinations.
> >
> > still vastly redundant.
>
> One possibility would be to choose a comma list with a badness
>cutoff, where the badness was the badness of the corresponding
>codimension one, or one-comma, temperament.

you'd get a lot of temperaments that are worse than a lot you wouldn't
get, because of the straightness thing.

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/16/2003 5:37:37 PM

>> Can I identify the duplicate temperaments?
>
> duplicate? any temperament can be generated from an infinite
> number of possible basis vectors.

"The results of a search of all possible maps is bound to return
pairs, trios, etc. of maps that represent the same temperament.
Can we find them in the mess of results?" Was all I was asking!

>>Assuming 2:1 reduction makes me squirm in my chair, is all.
>>Plentiful near-2:1s should emerge from the search if the criteria
>>are right.
>
>if the criteria include mapping to, and minimizing error from,
>2:1, then of course a near 2:1 will emerge in each temperament.

Yup, that's what I've been saying alright.

> > Oh. Now I get it! You're right. But doesn't the same
> > problem occur with different commatic representations, when
> > defining complexity off the commas?
>
> not if you define complexity right!

Aha!

> > >>Are you saying a badness cutoff is not sufficient to give a
> > >>finite list of temperaments?
> > >
> > >exactly. in *every* complexity range you have about the same
> > >number of temperaments with log-flat badness lower than some
> > >cutoff -- and there are an infinite number of non-overlapping
> > >complexity ranges.
> >
> > Oh. I guess I need some examples, then, of most of the simple
> > temperaments that are garbage...
>
> what are the 20 simplest 5-limit intervals? now set each of
> these to be the commatic unison vector, and what temperaments
> do you get?

Perhaps we could enforce "validity", and maybe also Kees'
'complexity validity'.

-Carl

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/16/2003 5:41:31 PM

[I wrote...]
>Perhaps we could enforce "validity", and maybe also Kees'
>'complexity validity'.

I guess I should have called that 'expressibility validity'.

-Carl

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/16/2003 5:48:49 PM

> > >it's nice when there's only one comma. then the log of the
> > >numbers in the comma (say, the log of the odd limit) is an
> > >excellent estimate of complexity (it's what i call the
> > >heuristic complexity).
> >
> > That's what I call taxicab complexity, I think.
>
> not quite. for one thing, read this:
>
> http://www.kees.cc/tuning/lat_perbl.html
>
> including the link to my observations.

All I get from this is that it depends on whether one uses a
triangular or rectangular lattice. I must be missing
something...

-----

> > >if there's more than one comma being tempered out, we need
> > >a notion of the "angle" between the commas . . .
> >
> > Please explain.
>
> search for "straightess" in these archives . . .

For a given block, notes can be transposed by unison vectors.
This changes the shape of the block. Does it change its
straightness?

Don't follow why the less straight blocks are supposed to
be less interesting.

-----

Here's what I found on the heuristic. Last time I asked,
you referred me to this message:

/tuning-math/messages/2491?expand=1

Which I can't follow at all. Which column is the heuristic,
what are the other columns, and what are their values expected
to do (go down or up...)?

I also found this blurb:

>the heuristics are only formulated for the one-unison-vector
>case (e.g., 5-limit linear temperaments), and no one has bothered
>to figure out the metric that makes it work exactly (though it
>seems like a tractable math problem). but they do seem to work
>within a factor of two for the current "step" and "cent"
>functions. "step" is approximately proportional to log(d),
>and "cent" is approximately proportional to (n-d)/(d*log(d)).

Why are they called "step" and "cent"? How were they derived?

-Carl

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/16/2003 5:55:47 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >> Can I identify the duplicate temperaments?
> >
> > duplicate? any temperament can be generated from an infinite
> > number of possible basis vectors.
>
> "The results of a search of all possible maps is bound to return
> pairs, trios, etc. of maps that represent the same temperament.
> Can we find them in the mess of results?" Was all I was asking!

yes, since you can calculate the wedgie of each. but clearly this is
a terrible way to go about the search. how would you even delimit it?

> >>Assuming 2:1 reduction makes me squirm in my chair, is all.
> >>Plentiful near-2:1s should emerge from the search if the criteria
> >>are right.
> >
> >if the criteria include mapping to, and minimizing error from,
> >2:1, then of course a near 2:1 will emerge in each temperament.
>
> Yup, that's what I've been saying alright.

and this kind of optimization has been done a few times, for example
by gene.

> > > >>Are you saying a badness cutoff is not sufficient to give a
> > > >>finite list of temperaments?
> > > >
> > > >exactly. in *every* complexity range you have about the same
> > > >number of temperaments with log-flat badness lower than some
> > > >cutoff -- and there are an infinite number of non-overlapping
> > > >complexity ranges.
> > >
> > > Oh. I guess I need some examples, then, of most of the simple
> > > temperaments that are garbage...
> >
> > what are the 20 simplest 5-limit intervals? now set each of
> > these to be the commatic unison vector, and what temperaments
> > do you get?
>
> Perhaps we could enforce "validity",

?

> and maybe also Kees'
> 'complexity validity'.

??

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/16/2003 6:06:51 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> > > >it's nice when there's only one comma. then the log of the
> > > >numbers in the comma (say, the log of the odd limit) is an
> > > >excellent estimate of complexity (it's what i call the
> > > >heuristic complexity).
> > >
> > > That's what I call taxicab complexity, I think.
> >
> > not quite. for one thing, read this:
> >
> > http://www.kees.cc/tuning/lat_perbl.html
> >
> > including the link to my observations.
>
> All I get from this is that it depends whether one uses a
> triangular or rectangular lattice. I must be missing
> something...

it shows that the 'expressibility' metric is not quite the same as
the taxicab metric on the isosceles-triangular lattice. you need to
use scalene triangles of a certain type, which i don't think is what
you were thinking when you wrote "taxicab complexity" above.

> > > >if there's more than one comma being tempered out, we need
> > > >a notion of the "angle" between the commas . . .
> > >
> > > Please explain.
> >
> > search for "straightess" in these archives . . .
>
> For a given block, notes can be transposed by unison vectors.
> This changes the shape of the block. Does it change its
> straightness?

straightness applies to a set of unison vectors. different sets of
unison vectors can define the same temperament. a temperament may
look good on the basis of being defined by good unison vectors. but
in fact you may end up with a terrible temperament if the unison
vectors point in approximately the same direction.

> Here's what I found on the heuristic. Last time I asked,
> you referred me to this message:
>
> /tuning-math/messages/2491?expand=1
>
> Which I can't follow at all.

well, let me help you then.

> Which column is the heuristic,

column V is proportional to the heuristic error, and Y is
proportional to the heuristic complexity.

> what are the other columns,

U is the rms error, W is the ratio of the two error measures. X is
the complexity (weighted rms of generators-per-consonance at that
point i believe), and Z is the ratio of the two complexity measures.

> and what are their values expected
> to do (go down or up...)?

W and Z are expected to remain relatively constant.

> I also found this blurb:
>
> >the heuristics are only formulated for the one-unison-vector
> >case (e.g., 5-limit linear temperaments), and no one has bothered
> >to figure out the metric that makes it work exactly (though it
> >seems like a tractable math problem). but they do seem to work
> >within a factor of two for the current "step" and "cent"
> >functions. "step" is approximately proportional to log(d),
> >and "cent" is approximately proportional to (n-d)/(d*log(d)).
>
> Why are they called "step" and "cent"? How were they derived?

that's what gene used to call them. "step" is simply complexity,
and "cent" is simply rms error.

🔗Gene Ward Smith <genewardsmith@juno.com> <genewardsmith@juno.com>

1/16/2003 7:13:30 PM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus <wallyesterpaulrus@y...>" <wallyesterpaulrus@y...> wrote:

> > >but they do seem to work
> > >within a factor of two for the current "step" and "cent"
> > >functions. "step" is approximately proportional to log(d),
> > >and "cent" is approximately proportional to (n-d)/(d*log(d)).
> >
> > Why are they called "step" and "cent"? How were they derived?
>
> that's what gene used to call them. "step" is simply complexity,
> and "cent" is simply rms error.

We could put this together, and get a heuristic "bad" as

bad = (n-d)*log(d)^2/d

We might also do multiple linear regression analysis on log badness vs log(n-d), loglog(d) and log(d) and see what we got.
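
As a sanity check, a short Python sketch of the three heuristics together (proportionality constants dropped throughout):

from math import log

def step(n, d):                  # heuristic complexity ~ log(d)
    return log(d)

def cent(n, d):                  # heuristic rms error ~ (n-d)/(d*log(d))
    return (n - d) / (d * log(d))

def bad(n, d):                   # cent * step**3 = (n-d)*log(d)**2/d
    return cent(n, d) * step(n, d) ** 3

for n, d in [(81, 80), (128, 125), (32805, 32768)]:
    print(n, d, step(n, d), cent(n, d), bad(n, d))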

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/16/2003 9:40:49 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith
<genewardsmith@j...>" <genewardsmith@j...> wrote:
> --- In tuning-math@yahoogroups.com, "wallyesterpaulrus
<wallyesterpaulrus@y...>" <wallyesterpaulrus@y...> wrote:
>
> > > >but they do seem to work
> > > >within a factor of two for the current "step" and "cent"
> > > >functions. "step" is approximately proportional to log(d),
> > > >and "cent" is approximately proportional to (n-d)/(d*log(d)).
> > >
> > > Why are they called "step" and "cent"? How were they derived?
> >
> > that's what gene used to call them. "step" is simply complexity,
> > and "cent" is simply rms error.
>
> We could put this together, and get "bad" as heuristically as
>
> bad = (n-d)*log(d)^2/d

of course -- where the second "d" should really be odd limit.

> We might also do multiple linear regression analysis on log badness
>vs log(n-d), loglog(d) and log(d) and see what we got.

i'm not following. i do multiple linear regression analysis every
day, so please clarify!

🔗Gene Ward Smith <genewardsmith@juno.com> <genewardsmith@juno.com>

1/16/2003 10:17:50 PM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus <wallyesterpaulrus@y...>" <wallyesterpaulrus@y...> wrote:

> > We could put this together, and get "bad" as heuristically as
> >
> > bad = (n-d)*log(d)^2/d
>
> of course -- where the second "d" should really be odd limit.

"d" does not mean denominator?

> > We might also do multiple linear regression analysis on log badness
> >vs log(n-d), loglog(d) and log(d) and see what we got.

> i'm not following. i do multiple linear regression analysis every
> day, so please clarify!

I'm not following what you're not following.

I find that log(d) is a good heuristic for geometric 5-limit complexity, at least in the comma range of sizes. A linear regression
of log(complexity) vs log(d) gives c ~ .991685*log(d)^.986763, which is awfully close to log(d); in fact some of the time the geometric complexity is exactly log(d). A model using log|n-d|, loglog(d), and log(d) (each d being the denominator, and n the numerator) gives

log(c) ~ -.009541*log|n-d|+.009534*log(d)+.967642*loglog(d)-.0090687
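
That kind of fit is easy to reproduce in outline; here is a toy version in Python with numpy, using c = log(d) as a stand-in for the geometric complexities (so the fitted slope should come out at exactly 1):

import numpy as np

d = np.array([80.0, 125.0, 2025.0, 32768.0])      # sample comma denominators
c = np.log(d)                                     # stand-in complexity values
A = np.column_stack([np.log(np.log(d)), np.ones_like(d)])
slope, intercept = np.linalg.lstsq(A, np.log(c), rcond=None)[0]
print(slope, intercept)                           # ~1 and ~0 by construction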

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/16/2003 11:06:09 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith
<genewardsmith@j...>" <genewardsmith@j...> wrote:
> --- In tuning-math@yahoogroups.com, "wallyesterpaulrus
<wallyesterpaulrus@y...>" <wallyesterpaulrus@y...> wrote:
>
> > > We could put this together, and get "bad" as heuristically as
> > >
> > > bad = (n-d)*log(d)^2/d
> >
> > of course -- where the second "d" should really be odd limit.
>
> "d" does not mean denominator?

it can, or it can mean odd limit of the ratio (in other words, the
ratio is a "Ratio Of d") -- the difference is slight.

the second and third "d", i meant to say.

> > > We might also do multiple linear regression analysis on log badness
> > > vs log(n-d), loglog(d) and log(d) and see what we got.
>
> > i'm not following. i do multiple linear regression analysis every
> > day, so please clarify!
>
> I'm not following what you're not following.
>
> I find that log(d) is a good heuristic for geometric 5-limit
>complexity, at least in the comma range of sizes.

in all ranges, i think you'll find.

> A linear regression
> of log(complexity) vs log(d) gives c ~ .991685*log(d)^.986763,

oh, so you mean to use *your* complexity/badness/whatever as the
dependent variable in the regression!

clearly, though, you understand the heuristic.

>which is awfully close to log(d); in fact some of the time the
>geometric complexity is exactly log(d). A model using log|n-d|,
>loglog(d), and log(d) (each d being the denominator, and n the
>numerator) gives
>
> log(c) ~ -.009541*log|n-d|+.009534*log(d)+.967642*loglog(d)-.0090687

where c is complexity . . .

don't forget to report standard error ranges for your coefficient
estimates!

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/16/2003 11:53:09 PM

>>"The results of a search of all possible maps is bound to return
>>pairs, trios, etc. of maps that represent the same temperament.
>>Can we find them in the mess of results?" Was all I was asking!
>
>yes, since you can calculate the wedgie of each. but clearly this
>is a terrible way to go about the search.

Well, well by now I of course agree.

>how would you even delimit it?

I won't ask...

>>Perhaps we could enforce "validity",
>
> ?

That's Gene's name for a concept you said was equivalent to
the condition that all steps of a block be larger than its
unison vectors.

>>and maybe also Kees' 'complexity validity'.
>
> ??

See later message.

-Carl

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/17/2003 12:25:19 AM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>"The results of a search of all possible maps is bound to return
> >>pairs, trios, etc. of maps that represent the same temperament.
> >>Can we find them in the mess of results?" Was all I was asking!
> >
> >yes, since you can calculate the wedgie of each. but clearly this
> >is a terrible way to go about the search.
>
> Well, well by now I of course agree.
>
> >how would you even delimit it?
>
> I won't ask...
>
> >>Perhaps we could enforce "validity",
> >
> > ?
>
> That's Gene's name for a concept you said was equivalent to
> the condition that all steps of a block be larger than its
> unison vectors.

there are no blocks here, just temperaments.

🔗Graham Breed <graham@microtonal.co.uk>

1/17/2003 12:26:46 AM

Gene Ward Smith wrote:

> The prime number theorem says pi(x)~x/(ln(x)-1).

Oh, I think I had that backwards.

>>Whereas combining equal temperaments only gives O(n**2) calculations,
>>where n is the number of ETs you consider. I find n=20 works well,
>>requiring O(400) candidates.
>
> Once you take wedgies, you should have fewer candidates.

Maybe, but isn't taking wedgies as hard as finding the maps? How do you avoid calculating a huge number of wedge products?

(Oh, n=20 gives exactly 190 candidates)

Graham

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/17/2003 12:31:53 AM

>it shows that the 'expressibility' metric is not quite the same
>as the taxicab metric on the isosceles-triangular lattice. you
>need to use scalene triangles of a certain type, which i don't
>think is what you were thinking when you wrote "taxicab
>complexity" above.

That's true; I just count the rungs. Any lattice that's
'topologically' (?) equivalent to the triangular lattice
will do.

>straightness applies to a set of unison vectors. different sets
>of unison vectors can define the same temperament. a temperament
>may look good on the basis of being defined by good unison
>vectors. but in fact you may end up with a terrible temperament
>if the unison vectors point in approximately the same direction.

Why would it be terrible?

>>/tuning-math/messages/2491?expand=1
//
>>Which column is the heuristic,
>
>column V is proportional to the heuristic error, and Y is
>proportional to the heuristic complexity.
>
>>what are the other columns,
>
>U is the rms error, W is the ratio of the two error measures. X
>is the complexity (weighted rms of generators-per-consonance at
>that point i believe), and Z is the ratio of the two complexity
>measures.
>
>>and what are their values expected
>>to do (go down or up...)?
>
>W and Z are expected to remain relatively constant.

Thanks again! Now everything is clear. Except how you
derived the heuristics!

Seriously man, one expository blurb would save you from having
to do this for each person who's interested, and for me again
when I've forgotten it in 6 months. :~)

-Carl

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/17/2003 12:50:23 AM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >it shows that the 'expressibility' metric is not quite the same
> >as the taxicab metric on the isosceles-triangular lattice. you
> >need to use scalene triangles of a certain type, which i don't
> >think is what you were thinking when you wrote "taxicab
> >complexity" above.
>
> That's true; I just count the rungs. Any lattice that's
> 'topologically' (?) equivalent to the triangular lattice
> will do.

you mean approximately?

> >straightness applies to a set of unison vectors. different sets
> >of unison vectors can define the same temperament. a temperament
> >may look good on the basis of being defined by good unison
> >vectors. but in fact you may end up with a terrible temperament
> >if the unison vectors point in approximately the same direction.
>
> Why would it be terrible?

heuristically speaking,

in most cases the *difference* between the unison vectors will be of
similar or greater magnitude, in terms of JI comma interval size, than
the unison vectors themselves, but since the angle is very small,
this *difference* vector will be very short (i.e., low complexity). a
comma of a given JI interval size will lead to much higher error if
tempered out over much fewer consonant rungs (i.e., if it's very
short) than if it's tempered out over more consonant rungs (i.e., if
it's long). therefore, you may end up with a temperament with much
larger error than you would have expected given your original pair of
unison vectors.
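
A crude illustration in Python (plain euclidean angle on the monzos, since no one has fixed the right metric yet):

from math import acos, degrees, sqrt

def angle(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return degrees(acos(dot / (sqrt(sum(a * a for a in u)) *
                               sqrt(sum(b * b for b in v)))))

syntonic = [-4, 4, -1]                     # 81:80
diaschisma = [-11, 4, 2]                   # 2025:2048
diff = [a - b for a, b in zip(syntonic, diaschisma)]
print(angle(syntonic, diaschisma), diff)
# ~32 degrees, and the difference [7, 0, -3] is 128:125 --
# temper both out and you are in 12-equal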

> Thanks again! Now everything is clear. Except how you
> derrived the heuristics!

i did post the derivation a while back, probably before i even used
the word "heuristic", but if you search for "heuristic", i think
you'll find a post that links to the derivation post.

> Seriously man, one expository blurb would save you from having
> to do this for each person who's interested, and for me again
> when I've forgotten it in 6 months. :~)

i welcome suggestions or just make your own blurb, and let's put this
on a webpage somewhere.

🔗Gene Ward Smith <genewardsmith@juno.com> <genewardsmith@juno.com>

1/17/2003 1:39:16 AM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus <wallyesterpaulrus@y...>" <wallyesterpaulrus@y...> wrote:

> > A linear regression
> > of log(complexity) vs log(d) gives c ~ .991685*log(d)^.986763,
>
> oh, so you mean to use *your* complexity/badness/whatever as the
> dependent variable in the regression!
>
> clearly, though, you understand the heuristic.

The heuristic for complexity works particularly well as an estimate for geometric complexity. I did a regression analysis (which you could no doubt do better, if you have stat software) which also confirmed that your heuristic for error seems to have the right coefficients. The result is that your estimates look like an excellent way of quickly filtering a list of commas for geometric complexity, rms error, and geometric badness, which could then be used to compute wedgies. If we had estimates for the wedgies that would be even better.

> don't forget to report standard error ranges for your coefficient
> estimates!

I don't have a program which gives me residuals or F-ratios or anything else one really wants in order to do this right. You can probably do it better; just remember to take the log of both sides.

🔗Graham Breed <graham@microtonal.co.uk> <graham@microtonal.co.uk>

1/17/2003 2:16:06 AM

--- wallyesterpaulrus wrote:

> straightness applies to a set of unison vectors. different sets of
> unison vectors can define the same temperament. a temperament may
> look good on the basis of being defined by good unison vectors. but
> in fact you may end up with a terrible temperament if the unison
> vectors point in approximately the same direction.

Then that's what you need to reduce the complexity of the search. I
can't find a quantitative definition of "straightness" in the
archives. What is it? I presume it works for insufficient sets of
unison vectors.

Graham

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/17/2003 2:18:52 AM

>>That's true; I just count the rungs. Any lattice that's
>>'topologically' (?) equivalent to the triangular lattice
>>will do.
>
>you mean approximately?

? I originally meant that counting the rungs between a dyad
and the origin would be approximately the same as the log of
the odd limit of the dyad...

interval  ln(oddlimit)  7-limit taxicab  ratio
15:8      2.7           2                1.35
5:3       1.6           1                1.6
105:64    4.7           3                1.6
225:224   5.4           4                1.35

>>>but in fact you may end up with a terrible temperament
>>>if the unison vectors point in approximately the same
>>>direction.
>>
>>Why would it be terrible?
>
>heuristically speaking,
>
>in most cases the *difference* between the unison vectors
>will be of similar or greater magnitude in terms of JI comma
>interval size as the unison vectors themselves, but since
>the angle is very small, this *difference* vector will be
>very short (i.e., low complexity). a comma of a given JI
>interval size will lead to much higher error if tempered out
>over much fewer consonant rungs (i.e., if it's very short)
>than if it's tempered out over more consonant rungs (i.e.,
>if it's long). therefore, you may end up with a temperament
>with much larger error than you would have expected given
>your original pair of unison vectors.

Right, the difference vector has to vanish, too. Ok. What
I don't get is, for a given temperament, can I change the
straightness by changing the unison vector representation?
If so, this means that badness is not fixed for a given
temperament...

Also, can I change the straightness by transposing pitches
by uvs?

Finally, is "commatic basis" an acceptable synonym for
"kernel"?

>i did post the derivation a while back, probably before i
>even used the word "heuristic", but if you search for
>"heuristic", i think you'll find a post that links to the
>derivation post.

I think I remember it coming out, but I couldn't find it
today in my searches. I did try.

>i welcome suggestions or just make your own blurb, and
>let's put this on a webpage somewhere.

Maybe the original exposition can just be updated a bit, and
then monz or I could host it, certainly.

-C.

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/17/2003 2:22:37 AM

--- In tuning-math@yahoogroups.com, "Graham Breed <graham@m...>"
<graham@m...> wrote:
> --- wallyesterpaulrus wrote:

> Then that's what you need to reduce the complexity of the search. I
> can't find a quantitative definition of "straightness" in the
> archives.

i haven't come up with one yet.

> I presume it works for insufficient sets of
> unison vectors.

insufficient? how can a set of unison vectors be insufficient?

🔗Graham Breed <graham@microtonal.co.uk>

1/17/2003 2:26:40 AM

wallyesterpaulrus wrote:

> insufficient? how can a set of unison vectors be insufficient?

One unison vector is insufficient to define a 7-limit linear temperament; two or three unison vectors are insufficient to define a 21-limit linear temperament. For an arbitrary search to be practicable, there has to be a way of rejecting sets of three unison vectors because you know they can't give a good linear temperament. That would reduce the search to more like O(n**3) in the number of unison vectors instead of O(n**6).

Graham

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/17/2003 2:28:00 AM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"

> Right, the difference vector has to vanish, too. Ok. What
> I don't get is, for a given temperament, can I change the
> straightness by changing the unison vector representation?

yes.

> If so, this means that badness is not fixed for a given
> temperament...

that's not true. since both the defining unison vectors *and* the
straightness change, the badness can (and will) remain constant.

> Also, can I change the straightness by transposing pitches
> by uvs?

this is meaningless, as we're talking about temperaments, not
irregular finite periodicity blocks. we're talking either equal
temperaments or infinite regular tunings.

> Finally, is "commatic basis" an acceptable synonym for
> "kernel"?

no. every vector that can be generated from the basis belongs to the
kernel, but not every set of n members of the kernel (even if they're
linearly independent) is a basis (since you may get torsion).
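
A standard example of that torsion caveat, sketched as monzo arithmetic in Python (vectors are exponents of 2, 3, 5; the pair chosen here is the classic textbook one, not anything specific to this exchange):

    # 648/625 = [3, 4, -4] and 2048/2025 = [11, -4, -2] both belong to
    # 12-et's kernel and are linearly independent, yet as a basis they
    # generate (128/125)^2 = [14, 0, -6] but never 128/125 = [7, 0, -3]
    # itself -- torsion, so they are not a valid commatic basis.
    def monzo_add(a, b):
        return [x + y for x, y in zip(a, b)]

    total = monzo_add([3, 4, -4], [11, -4, -2])
    print(total)                    # [14, 0, -6]
    print([c // 2 for c in total])  # [7, 0, -3], i.e. 128/125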

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/17/2003 2:30:03 AM

--- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:
> wallyesterpaulrus wrote:
>
> > insufficient? how can a set of unison vectors be insufficient?
>
> For an arbitrary search to be practicable,
> there has to be a way of rejecting sets of three unison vectors
because
> you know they can't give a good linear temperament.

then you need a 3-d generalization of "straightness". i bet if i went
and learned grassmann algebra i'd be able to get a better grasp on
all this.

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/17/2003 2:32:34 AM

> > Right, the difference vector has to vanish, too. Ok. What
> > I don't get is, for a given temperament, can I change the
> > straightness by changing the unison vector representation?
>
> yes.

Ok.

> > If so, this means that badness is not fixed for a given
> > temperament...
>
> that's not true. since both the defining unison vectors *and*
> the straightness change, the badness can (and will) remain
> constant.

Then how can it become "terrible"?

> > Also, can I change the straightness by transposing pitches
> > by uvs?
>
> this is meaningless, as we're talking about temperaments, not
> irregular finite periodicity blocks. we're talking either equal
> temperaments or infinite regular tunings.

Straightness certainly sounds like it can be defined on an
untempered block.

> > Finally, is "commatic basis" an acceptable synonym for
> > "kernel"?
>
> no. every vector that can be generated from the basis belongs
> to the kernel, but not every set of n members of the kernel
> (even if they're linearly independent) is a basis (since you may
> get torsion).

So is it possible to have more than one kernel representation
for a temperament?

-Carl

🔗Graham Breed <graham@microtonal.co.uk>

1/17/2003 2:33:29 AM

wallyesterpaulrus wrote:

> then you need a 3-d generalization of "straightness". i bet if i went
> and learned grassmann algebra i'd be able to get a better grasp on
> all this.

Yes, probably. Vectors are straighter the smaller the area of the parallelogram they describe. The wedge product is a generalisation of area, as it tends towards the determinant. So hopefully we can do something with the intermediate wedge products. But I don't know what :(

Graham
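
In the 5-limit, with the 2s thrown away (as Gene suggests just below), the wedge product of two commas is a single number -- a 2x2 determinant -- and the parallelogram area is easy to play with. A Python sketch (comma choices and names are mine, for illustration):

    # commas as (exponent of 3, exponent of 5), 2s ignored
    syntonic = (4, -1)   # 81/80
    schisma = (8, 1)     # 32805/32768
    magic = (-1, 5)      # 3125/3072

    def area(u, v):
        # |2x2 determinant| = norm of the wedge product = enclosed area
        return abs(u[0] * v[1] - u[1] * v[0])

    print(area(syntonic, schisma))  # 12: both commas vanish in 12-et
    print(area(syntonic, magic))    # 19: both commas vanish in 19-et

Note that two nearly parallel vectors of the same lengths would give a much smaller determinant.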

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/17/2003 2:49:05 AM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> > > Right, the difference vector has to vanish, too. Ok. What
> > > I don't get is, for a given temperament, can I change the
> > > straightness by changing the unison vector representation?
> >
> > yes.
>
> Ok.
>
> > > If so, this means that badness is not fixed for a given
> > > temperament...
> >
> > that's not true. since both the defining unison vectors *and*
> > the straightness change, the badness can (and will) remain
> > constant.
>
> Then how can it become "terrible"?

if you change to a "straighter" pair of unison vectors, one or both
of them will have to be a lot shorter, thus less distribution of
error and a worse temperament.

> > > Also, can I change the straightness by transposing pitches
> > > by uvs?
> >
> > this is meaningless, as we're talking about temperaments, not
> > irregular finite periodicity blocks. we're talking either equal
> > temperaments or infinite regular tunings.
>
> Straightness certainly sounds like it can be defined on an
> untempered block.

well, that's not the context in which it's been discussed, and thus
not the context in which your questions about it were posed.

> > > Finally, is "commatic basis" an acceptable synonym for
> > > "kernel"?
> >
> > no. every vector that can be generated from the basis belongs
> > to the kernel, but not every set of n members of the kernel
> > (even if they're linearly independent) is a basis (since you may
> > get torsion).
>
> So is it possible to have more than one kernel representation
> for a temperament?

no. the kernel is an infinite set of vectors, and is unique to the
temperament.

🔗Gene Ward Smith <genewardsmith@juno.com> <genewardsmith@juno.com>

1/17/2003 2:52:41 AM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>" <clumma@y...> wrote:

> Maybe the original exposition can just be updated a bit, and
> then monz or I could host it, certainly.

You might want to add to

complexity ~ log(d)

error ~ log(n-d)/(d log(d))

a badness heuristic of

badness ~ log(n-d) log(d)^e / d

where e = pi(prime limit)-1 = number of odd primes in limit.
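
All three heuristics are one-liners. A Python sketch, using |n-d| where Gene wrote log(n-d) -- the correction Paul makes later in the thread:

    import math

    def heuristics(n, d, e=2):  # e = 2 for the 5-limit
        complexity = math.log(d)
        error = abs(n - d) / (d * math.log(d))
        badness = abs(n - d) * math.log(d) ** e / d
        return complexity, error, badness

    for n, d in [(81, 80), (250, 243), (32805, 32768)]:
        c, err, bad = heuristics(n, d)
        print(f"{n}/{d}: complexity={c:.2f} error={err:.2e} badness={bad:.3f}")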

🔗Gene Ward Smith <genewardsmith@juno.com> <genewardsmith@juno.com>

1/17/2003 2:56:01 AM

--- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:
> wallyesterpaulrus wrote:
>
> > then you need a 3-d generalization of "straightness". i bet if i went
> > and learned grassmann algebra i'd be able to get a better grasp on
> > all this.
>
> Yes, probably. Vectors are straighter the smaller the area of the
> parallelogram they describe. The wedge product is a generalisation of
> area, as it tends towards the determinant. So hopefully we can do
> something with the intermediate wedge products. But I don't know what :(

The geometric complexity is a measure of the length of the wedge product of the commas.

🔗Gene Ward Smith <genewardsmith@juno.com> <genewardsmith@juno.com>

1/17/2003 2:59:34 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith <genewardsmith@j...>" <genewardsmith@j...> wrote:

> The geometric complexity is a measure of the length of the wedge product of the commas.

Of commas, taken mod 2 (note classes or whatever you want to call it.)

I'm not sure what you want here--if all of the commas point in about the same direction, do you mean with or without 2?

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/17/2003 3:07:42 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith
<genewardsmith@j...>" <genewardsmith@j...> wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:
> > wallyesterpaulrus wrote:
> >
> > > then you need a 3-d generalization of "straightness". i bet if i
> > > went and learned grassmann algebra i'd be able to get a better
> > > grasp on all this.
> >
> > Yes, probably. Vectors are straighter the smaller the area of the
> > parallelogram they describe. The wedge product is a generalisation
> > of area, as it tends towards the determinant. So hopefully we can
> > do something with the intermediate wedge products. But I don't know
> > what :(
>
> The geometric complexity is a measure of the length of the wedge
> product of the commas.

or the length of the comma, if there's only one.

so the main difference is that you're using a euclidean metric (for
geometric complexity), while i'm using a taxicab one (for heuristic
complexity).
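
For a single comma the two metrics differ only in how the exponent vector is summed. A Python sketch on the rectangular 5-limit lattice, with 2s dropped and weighting omitted (both simplifications of mine):

    import math

    for name, (a, b) in [("81/80", (4, -1)), ("128/125", (0, -3)),
                         ("32805/32768", (8, 1))]:
        taxicab = abs(a) + abs(b)     # L1: rung count
        euclidean = math.hypot(a, b)  # L2: straight-line length
        print(f"{name}: taxicab={taxicab} euclidean={euclidean:.2f}")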

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/17/2003 3:08:35 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith
<genewardsmith@j...>" <genewardsmith@j...> wrote:

> I'm not sure what you want here--if all of the commas point in
>about the same direction, do you mean with or without 2?

i'm thinking without.

🔗Graham Breed <graham@microtonal.co.uk>

1/17/2003 3:14:25 AM

wallyesterpaulrus wrote:

> so the main difference is that you're using a euclidean metric (for
> geometric complexity), while i'm using a taxicab one (for heuristic
> complexity).

I take it you're adding either the moduli or squares of the coefficients of the exterior element (wedge product of vectors)? Well, that's easy enough. And the more complex the better, because that covers the complexity of the original unison vectors and their straightness. If we've already chosen unison vectors that are small pitch intervals, do we have a badness measure?

Graham

🔗Gene Ward Smith <genewardsmith@juno.com> <genewardsmith@juno.com>

1/17/2003 3:31:56 AM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus <wallyesterpaulrus@y...>" <wallyesterpaulrus@y...> wrote:
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith
> <genewardsmith@j...>" <genewardsmith@j...> wrote:
>
> > I'm not sure what you want here--if all of the commas point in
> >about the same direction, do you mean with or without 2?
>
> i'm thinking without.

One method which might come to the same thing as "straightness" in effect is to take two commas, and combine to get a codimension 2 wedgie. Produce a list of these by taking the best (here you run into geometric badness) and then wedge these with another comma, and so forth.
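
A sketch of that incremental search in Python: wedge every pair of commas via 2x2 minors, rank by the euclidean norm of the resulting bivector (standing in here for the geometric ranking; the actual badness cutoff is omitted), and the survivors would then be wedged with a third comma. The comma pool is mine, for illustration:

    import math
    from itertools import combinations

    def wedge2(u, v):
        # all 2x2 minors of the pair: the coefficients of u ^ v
        n = len(u)
        return [u[i] * v[j] - u[j] * v[i]
                for i in range(n) for j in range(i + 1, n)]

    def norm(w):
        return math.sqrt(sum(c * c for c in w))

    # a tiny pool of 7-limit commas as (2, 3, 5, 7)-monzos
    commas = {"81/80": (-4, 4, -1, 0), "126/125": (1, 2, -3, 1),
              "64/63": (6, -2, 0, -1), "225/224": (-5, 2, 2, -1)}

    for a, b in sorted(combinations(commas, 2),
                       key=lambda p: norm(wedge2(commas[p[0]], commas[p[1]]))):
        print(a, b, round(norm(wedge2(commas[a], commas[b])), 2))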

🔗Graham Breed <graham@microtonal.co.uk>

1/17/2003 3:38:47 AM

Gene Ward Smith wrote:

> One method which might come to the same thing as "straightness" in effect is to take two commas, and combine to get a codimension 2 wedgie. Produce a list of these by taking the best (here you run into geometric badness) and then wedge these with another comma, and so forth.

So is "geometric badness" simplicity or complexity of the exterior element?

Graham

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/17/2003 10:38:30 AM

> > > that's not true. since both the defining unison vectors *and*
> > > the straightness change, the badness can (and will) remain
> > > constant.
> >
> > Then how can it become "terrible"?
>
> if you change to a "straighter" pair of unison vectors, one or
> both of them will have to be a lot shorter, thus less distribution
> of error and a worse temperament.

I was trying to point out that badness here has failed to reflect
your opinion of the temperament.

>>>>Also, can I change the straightness by transposing pitches
>>>>by uvs?
>>>
>>>this is meaningless, as we're talking about temperaments, not
>>>irregular finite periodicity blocks. we're talking either equal
>>>temperaments or infinite regular tunings.
>>
>>Straightness certainly sounds like it can be defined on an
>>untempered block.
>
>well, that's not the context in which it's been discussed, and
>thus not the context into which your questions about it were posed.

That's true, but I couldn't see making a separate post. I'm
trying to understand your concept of straightness.

>no. the kernel is an infinite set of vectors, and is unique to the
>temperament.

Whew. Got it!

-C.

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/17/2003 3:49:59 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> > > > that's not true. since both the defining unison vectors *and*
> > > > the straightness change, the badness can (and will) remain
> > > > constant.
> > >
> > > Then how can it become "terrible"?
> >
> > if you change to a "straighter" pair of unison vectors, one or
> > both of them will have to be a lot shorter, thus less distribution
> > of error and a worse temperament.
>
> I was trying to point out that badness here has failed to reflect
> your opinion of the temperament.

how so?

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/17/2003 3:47:34 PM

--- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:
> wallyesterpaulrus wrote:
>
> > so the main difference is that you're using a euclidean metric (for
> > geometric complexity), while i'm using a taxicab one (for heuristic
> > complexity).
>
> I take it you're adding either the moduli or squares of the
> coefficients of the exterior element (wedge product of vectors)?

i don't even know what that means.

> Well, that's easy
> enough. And the more complex the better, because that covers the
> complexity of the original unison vectors and their straightness.
> If we've already chosen unison vectors that are small pitch
> intervals, do we have a badness measure?

not following.

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/17/2003 5:19:14 PM

>>I was trying to point out that badness here has failed
>>to reflect your opinion of the temperament.
>
> how so?

You said the temperament got worse, but the badness
remained constant. -C.

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/17/2003 5:53:18 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>I was trying to point out that badness here has failed
> >>to reflect your opinion of the temperament.
> >
> > how so?
>
> You said the temperament got worse,

shortening the unison vectors makes the temperament worse, but in a
given temperament, this would be counteracted by an increase in
straightness, which makes the temperament better. overall, the measure
must remain fixed for a given temperament, otherwise it's meaningless.

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/17/2003 9:43:22 PM

>>in most cases the *difference* between the unison vectors
>>will be of similar or greater magnitude in terms of JI comma
>>interval size as the unison vectors themselves, but since
>>the angle is very small, this *difference* vector will be
>>very short (i.e., low complexity). a comma of a given JI
>>interval size will lead to much higher error if tempered out
>>over much fewer consonant rungs (i.e., if it's very short)
>>than if it's tempered out over more consonant rungs (i.e.,
>>if it's long). therefore, you may end up with a temperament
>>with much larger error than you would have expected given
>>your original pair of unison vectors.
>
>Right, the difference vector has to vanish, too. Ok. What
>I don't get is, for a given temperament, can I change the
>straightness by changing the unison vector representation?
>If so, this means that badness is not fixed for a given
>temperament...
>
>>that's not true. since both the defining unison vectors *and*
>>the straightness change, the badness can (and will) remain
>>constant.
>
>Then how can it [the temperament] become "terrible"?
>
>>if you change to a "straighter" pair of unison vectors,

Wait a minute -- straightness goes up or down with the angle
between the vectors? I thought up.

>>one or both of them will have to be a lot shorter, thus less
>>distribution of error and a worse temperament.

I don't follow the 'shorter' bit. The only thing I thought
straightness did was make the difference vector more complex.

>shortening the unison vectors makes the temperament worse, but
>in a given temperament, this would be counteracted by an
>increase in straightness, which makes the temperament better.

You lost me.

>overall, the measure must remain fixed for a given temperament,
>otherwise it's meaningless.

If by "the measure", you mean badness, I agree.

-Carl

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/18/2003 2:51:43 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>in most cases the *difference* between the unison vectors
> >>will be of similar or greater magnitude in terms of JI comma
> >>interval size as the unison vectors themselves, but since
> >>the angle is very small, this *difference* vector will be
> >>very short (i.e., low complexity). a comma of a given JI
> >>interval size will lead to much higher error if tempered out
> >>over much fewer consonant rungs (i.e., if it's very short)
> >>than if it's tempered out over more consonant rungs (i.e.,
> >>if it's long). therefore, you may end up with a temperament
> >>with much larger error than you would have expected given
> >>your original pair of unison vectors.
> >
> >Right, the difference vector has to vanish, too. Ok. What
> >I don't get is, for a given temperament, can I change the
> >straightness by changing the unison vector representation?
> >If so, this means that badness is not fixed for a given
> >temperament...
> >
> >>that's not true. since both the defining unison vectors *and*
> >>the straightness change, the badness can (and will) remain
> >>constant.
> >
> >Then how can it [the temperament] become "terrible"?
> >
> >>if you change to a "straighter" pair of unison vectors,
>
> Wait a minute -- straightness goes up or down with the angle
> between the vectors? I thought up.

yep -- until you reach 90 degrees (or whatever the equivalent is in
the metric you're using).

> >>one or both of them will have to be a lot shorter, thus less
> >>distribution of error and a worse temperament.
>
> I don't follow the 'shorter' bit. The only thing I thought
> straightness did was make the difference vector more complex.

yes, or if you're decreasing from >90 degrees to 90 degrees, it's the
sum vector that gets more complex.

> >shortening the unison vectors makes the temperament worse, but
> >in a given temperament, this would be counteracted by an
> >increase in straightness, which makes the temperament better.
>
> You lost me.

well, there are probably too many counterfactuals here. why don't we
start again with any examples from the archives which came up in
connection with straightness. your choice.

> >overall, the measure must remain fixed for a given temperament,
> >otherwise it's meaningless.
>
> If by "the measure", you mean badness, I agree.

good!

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/18/2003 9:50:09 PM

>>>shortening the unison vectors makes the temperament worse, but
>>>in a given temperament, this would be counteracted by an
>>>increase in straightness, which makes the temperament better.
>>
>>You lost me.
>
>well, there are probably too many counterfactuals here. why don't
>we start again with any examples from the archives which came up
>in connection with straightness. your choice.

Good idea. I did find some talk between you and Gene, referring
to results I couldn't find. :( At one point, you mention low-
badness blocks of one note... If you pick the example, I'm happy
to attempt to build the block and find the alternate versions...
no promises, though, since I'm new to this.

If we get a good example or two, I'll collect everything and
give you a URL. Bless you, monz, for doing this sort of thing.

-Carl

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/19/2003 12:23:55 AM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>>shortening the unison vectors makes the temperament worse, but
> >>>in a given temperament, this would be counteracted by an
> >>>increase in straightness, which makes the temperament better.
> >>
> >>You lost me.
> >
> >well, there are probably too many counterfactuals here. why don't
> >we start again with any examples from the archives which came up
> >in connection with straightness. your choice.
>
> Good idea. I did find some talk between you and Gene, referring
> to results I couldn't find. :( At one point, you mention low-
> badness blocks of one note... If you pick the example, I'm happy
> to attempt to build the block and find the alternate versions...
> no promises, though, since I'm new to this.

what are we talking about, anyway? (no offence)

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/19/2003 12:24:13 PM

>>>>>shortening the unison vectors makes the temperament worse,
>>>>>but in a given temperament, this would be counteracted by
>>>>>an increase in straightness, which makes the temperament
>>>>>better.
>>>>
>>>>You lost me.
>>>
>>>well, there are probably too many counterfactuals here. why
>>>don't we start again with any examples from the archives
>>>which came up in connection with straightness. your choice.
>>
>>Good idea. I did find some talk between you and Gene,
>>referring to results I couldn't find. :( At one point, you
>>mention low-badness blocks of one note... If you pick the
>>example, I'm happy to attempt to build the block and find the
>>alternate versions... no promises, though, since I'm new to
>>this.
>
>what are we talking about, anyway? (no offence)

I'm trying to figure out what you meant in the top paragraph,
there...

>>>>>shortening the unison vectors makes the temperament worse,

If we heuristically ignore the sizes, then yes.

>>>>>but in a given temperament, this would be counteracted by
>>>>>an increase in straightness, which makes the temperament
>>>>>better.

??? If straightness is maximal when the uvs are maximally
orthogonal, how does this mean the uvs have gotten shorter?
I thought it was a *decrease* in straightness that made the
difference/sum vector shorten, making the temperament *worse*.

Ultimately, I'm trying to figure out how changing the
straightness can make a temperament "worse" but keep badness
constant. Either badness is broken, or changing the straightness
doesn't actually make the temperament worse!

-Carl

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/20/2003 12:27:45 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma
<clumma@y...>" <clumma@y...> wrote:
> >>>>>shortening the unison vectors makes the temperament worse,
> >>>>>but in a given temperament, this would be counteracted by
> >>>>>an increase in straightness, which makes the temperament
> >>>>>better.
> >>>>
> >>>>You lost me.
> >>>
> >>>well, there are probably too many counterfactuals here. why
> >>>don't we start again with any examples from the archives
> >>>which came up in connection with straightness. your choice.
> >>
> >>Good idea. I did find some talk between you and Gene,
> >>referring to results I couldn't find. :( At one point, you
> >>mention low-badness blocks of one note... If you pick the
> >>example, I'm happy to attempt to build the block and find the
> >>alternate versions... no promises, though, since I'm new to
> >>this.
> >
> >what are we talking about, anyway? (no offence)
>
> I'm trying to figure out what you meant in the top paragraph,
> there...

if you have a set of unison vectors A and a set of unison vectors
B, where both sets have the same straightness, and the vectors in A
and B represent about the same interval sizes in JI, but the vectors
in B are shorter than those in A, then the temperament defined by B
will have larger error than the temperament defined by A.

but if the straightness is different, and the vectors are the same
length, then it is the straighter set that will have lower error.

does the paragraph make more sense now?

> >>>>>shortening the unison vectors makes the temperament worse,
>
> If we heuristically ignore the sizes, then yes.
>
> >>>>>but in a given temperament, this would be counteracted by
> >>>>>an increase in straightness, which makes the temperament
> >>>>>better.
>
> ??? If straightness is maximal when the uvs are maximally
> orthogonal, how does this mean the uvs have gotten shorter?

any basis for the temperament that uses much longer unison
vectors will have to have much less straightness, because the
area/volume/etc. enclosed by the UVs must remain constant.
otherwise, you have torsion.
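
That constant-area claim is easy to check on 12-et's kernel with the mod-2 determinant again: {81/80, 128/125} and {81/80, 2048/2025} are two bases for the same kernel, and the longer, less orthogonal 2048/2025 trades exactly against straightness (Python, illustrative):

    def det2(u, v):
        return abs(u[0] * v[1] - u[1] * v[0])

    syntonic = (4, -1)     # 81/80, length ~4.12
    diesis = (0, -3)       # 128/125, length 3, closer to orthogonal
    diaschisma = (-4, -2)  # 2048/2025, length ~4.47, further from orthogonal

    print(det2(syntonic, diesis))      # 12
    print(det2(syntonic, diaschisma))  # 12: same area, same temperament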

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/21/2003 10:57:16 AM

>>??? If straightness is maximal when the uvs are maximally
>>orthogonal, how does this mean the uvs have gotten shorter?
>
>any basis for the temperament that uses much longer unison
>vectors will have to have much less straightness, because the
>area/volume/etc. enclosed by the UVs must remain constant.
>otherwise, you have torsion.

So is this right:

Straightness...LengthUVs...Length+/-UV...Badness
Down...........Up..........Down..........Same
Up.............Down........Up............Same

?

If so, you can't say that changing the straightness of a
given temperament makes it "worse". But am I correct
that across temperaments, 'crooked' ones will tend to be
worse (assuming we've already considered the LengthUVs)?

-Carl

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/21/2003 11:56:45 AM

>>>the heuristics are only formulated for the one-unison-vector
>>>case (e.g., 5-limit linear temperaments), and no one has
>>>bothered to figure out the metric that makes it work exactly
>>>(though it seems like a tractable math problem). but they do
>>>seem to work within a factor of two for the current "step"
>>>and "cent" functions. "step" is approximately proportional to
>>>log(d), and "cent" is approximately proportional to
>>>(n-d)/(d*log(d)).
>>
>>Why are they called "step" and "cent"? How were they derived?
>
>that's what gene used to call them. "step" is simply complexity,
>and "cent" is simply rms error.

Now, look here. Maybe this was obvious to everyone but me, but
a single paragraph on the derivation of each of these would have
saved us much heartache...

"There are complexity and error heuristics. They approximate
many different complexity and error functions (resp.) of
temperaments in which one comma is tempered out, through simple
math on the ratio, n/d (in lowest terms, n > d) representing
the comma that is tempered out.

"The complexity heuristic is log(d). It works because ratios
sharing denominator d are confined to a certain radius on the
harmonic lattice. blah blah blah

"The error heuristic is |n-d|/d*log(d). It works because it
reflects the size of the comma, per the number of consonant
intervals over which it must vanish. That is, ratios whose
n is far bigger than d are larger, and you'll recognize the
complexity heuristic underneath. blah blah blah

"To apply the heuristics to temperaments where more than one
comma vanishes, we might consider each of them in turn, but
we must be careful to include the difference/sum vector,
because it too must vanish. A concept called "straightness"
measures the angle between the commas on the harmonic lattice,
and therefore the relative length of the difference/sum vector.
blah blah blah"

Just an example, and probably contains errors. I must fly
to lunch!

-Carl

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/21/2003 2:59:09 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>??? If straightness is maximal when the uvs are maximally
> >>orthogonal, how does this mean the uvs have gotten shorter?
> >
> >any basis for the temperament that uses much longer unison
> >vectors will have to have much less straightness, because the
> >area/volume/etc. enclosed by the UVs must remain constant.
> >otherwise, you have torsion.
>
> So is this right:
>
> Straightness...LengthUVs...Length+/-UV...Badness
> Down...........Up..........Down..........Same
> Up.............Down........Up............Same
>
> ?

if you replace "badness" with "error", it's right.

> If so, you can't say that changing the straightness of a
> given temperament makes it "worse". But am I correct
> that across temperaments, 'crooked' ones will tend to be
> worse (assuming we've already considered the LengthUVs)?

if you're using TM reduction or something similar to define the
basis, then yes, this will be the general trend.

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/21/2003 3:06:33 PM

sorry for the heartache, folk(s). save this post!

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>>the heuristics are only formulated for the one-unison-vector
> >>>case (e.g., 5-limit linear temperaments), and no one has
> >>>bothered to figure out the metric that makes it work exactly
> >>>(though it seems like a tractable math problem). but they do
> >>>seem to work within a factor of two for the current "step"
> >>>and "cent" functions. "step" is approximately proportional to
> >>>log(d), and "cent" is approximately proportional to
> >>>(n-d)/(d*log(d)).
> >>
> >>Why are they called "step" and "cent"? How were they derived?
> >
> >that's what gene used to call them. "step" is simply complexity,
> >and "cent" is simply rms error.
>
> Now, look here. Maybe this was obvious to everyone but me, but
> a single paragraph on the derivation of each of these would have
> saved us much heartache...
>
> "There are complexity and error heuristics. They approximate
> many different complexity and error functions (resp.) of
> temperaments in which one comma is tempered out, through simple
> math on the ratio, n/d (in lowest terms, n > d) representing
> the comma that is tempered out.
>
> "The complexity heuristic is log(d). It works because ratios
> sharing denominator d are confined to a certain radius on the
> harmonic lattice. blah blah blah
>
> "The error heuristic is |n-d|/d*log(d). It works because it
> reflects the size of the comma, per the number of consonant
> intervals over which it must vanish. That is, ratios whose
> n is far bigger than d are larger, and you'll recognize the
> complexity heuristic underneath. blah blah blah
>
> "To apply the heuristics to temperaments where more than one
> comma vanishes, we might consider each of them in turn, but
> we must be careful to include the difference/sum vector,
> because it too must vanish. A concept called "straightness"
> measures the angle between the commas on the harmonic lattice,
> and therefore the relative length of the difference/sum vector.
> blah blah blah"
>
>
> Just an example, and probably contains errors. I must fly
> to lunch!
>
> -Carl

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/21/2003 4:01:47 PM

> > So is this right:
> >
> > Straightness...LengthUVs...Length+/-UV...Badness
> > Down...........Up..........Down..........Same
> > Up.............Down........Up............Same
> >
> > ?
>
> if you replace "badness" with "error", it's right.

I should have noted that this was for a given temperament,
not for all temperaments, though I take it you took it
that way.

So what should the badness column be?

-Carl

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/22/2003 2:32:55 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> > > So is this right:
> > >
> > > Straightness...LengthUVs...Length+/-UV...Badness
> > > Down...........Up..........Down..........Same
> > > Up.............Down........Up............Same
> > >
> > > ?
> >
> > if you replace "badness" with "error", it's right.
>
> I should have noted that this was for a given temperament,
> not for all temperaments, though I take it you took it
> that way.

yes, because of the last column being all "same".

> So what should the badness column be?

well, i guess that's all "same" too!

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

1/22/2003 3:55:41 PM

> > > > Straightness...LengthUVs...Length+/-UV...Badness
> > > > Down...........Up..........Down..........Same
> > > > Up.............Down........Up............Same
> > > >
> > > > ?
> > >
> > > if you replace "badness" with "error", it's right.
> >
> > I should have noted that this was for a given temperament,
> > not for all temperaments, though I take it you took it
> > that way.
>
> yes, because of the last column being all "same".
>
> > So what should the badness column be?
>
> well, i guess that's all "same" too!

So if error is the same and badness is the same, then
complexity is the same (which I suppose makes sense,
if the volume of the block is not to change). So is it
safe to conclude that straightness is important for
heuristically searching temperaments, but not for
choosing a commatic basis for a given temperament?

-Carl

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

1/22/2003 5:27:31 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> > > > > Straightness...LengthUVs...Length+/-UV...Badness
> > > > > Down...........Up..........Down..........Same
> > > > > Up.............Down........Up............Same
> > > > >
> > > > > ?
> > > >
> > > > if you replace "badness" with "error", it's right.
> > >
> > > I should have noted that this was for a given temperament,
> > > not for all temperaments, though I take it you took it
> > > that way.
> >
> > yes, because of the last column being all "same".
> >
> > > So what should the badness column be?
> >
> > well, i guess that's all "same" too!
>
> So if error is the same and badness is the same, then
> complexity is the same (which I suppose makes sense,
> if the volume of the block is not to change). So is it
> safe to conclude that straightness is important for
> heuristically searching temperaments,

i think so.

> but not for
> choosing a commatic basis for a given temperament?

the commatic basis with the shortest uvs will generally be the
straightest one.

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

2/3/2003 5:36:43 PM

carl, here's an old message where i explained the error heuristic:

/tuning-math/message/1437

and you can see that gene, in his reply, was the one who actually
suggested the word "heuristic" in connection with this . . .

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>>the heuristics are only formulated for the one-unison-vector
> >>>case (e.g., 5-limit linear temperaments), and no one has
> >>>bothered to figure out the metric that makes it work exactly
> >>>(though it seems like a tractable math problem). but they do
> >>>seem to work within a factor of two for the current "step"
> >>>and "cent" functions. "step" is approximately proportional to
> >>>log(d), and "cent" is approximately proportional to
> >>>(n-d)/(d*log(d)).
> >>
> >>Why are they called "step" and "cent"? How were they derived?
> >
> >that's what gene used to call them. "step" is simply complexity,
> >and "cent" is simply rms error.
>
> Now, look here. Maybe this was obvious to everyone but me, but
> a single paragraph on the derivation of each of these would have
> saved us much heartache...
>
> "There are complexity and error heuristics. They approximate
> many different complexity and error functions (resp.) of
> temperaments in which one comma is tempered out, through simple
> math on the ratio, n/d (in lowest terms, n > d) representing
> the comma that is tempered out.
>
> "The complexity heuristic is log(d). It works because ratios
> sharing denominator d are confined to a certain radius on the
> harmonic lattice. blah blah blah
>
> "The error heuristic is |n-d|/d*log(d). It works because it
> reflects the size of the comma, per the number of consonant
> intervals over which it must vanish. That is, ratios whose
> n is far bigger than d are larger, and you'll recognize the
> complexity heuristic underneath. blah blah blah
>
> "To apply the heuristics to temperaments where more than one
> comma vanishes, we might consider each of them in turn, but
> we must be careful to include the difference/sum vector,
> because it too must vanish. A concept called "straightness"
> measures the angle between the commas on the harmonic lattice,
> and therefore the relative length of the difference/sum vector.
> blah blah blah"
>
>
> Just an example, and probably contains errors. I must fly
> to lunch!
>
> -Carl

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

2/3/2003 8:46:03 PM

> carl, here's an old message where i explained the error
> heuristic:
>
> /tuning-math/message/1437

Great, thanks! I hadn't seen this, as "heuristic" doesn't
appear in it.

> and you can see that gene, in his reply, was the one who
> actually suggested the word "heuristic" in connection
> with this . . .

I do see that... you were already using the term for the
complexity heuristic at that time, right?

I understand everything but a few details...

>log(n) + log(d) (hence approx. proportional to log(d))

Is it any more proportional to log(d) than log(n) in this
case? Since n~=d?

>w=log(n/d)

Got that.

>w~=n/d-1

How do you get this from that?

>w~=(n-d)/d

Ditto.

-Carl

🔗Graham Breed <graham@microtonal.co.uk>

2/4/2003 1:33:03 AM

Carl Lumma wrote:

>>log(n) + log(d) (hence approx. proportional to log(d))
> > Is it any more proportional to log(d) than log(n) in this
> case? Since n~=d?

No, and the spreadsheet sorted by d is also sorted by n. And in that case it would have been easier to go straight to log(n*d).

>>w=log(n/d)
> > > Got that.
> > >>w~=n/d-1
> > > How do you get this from that?

It's the first order approximation where n/d ~= 1. See (8) in

http://mathworld.wolfram.com/NaturalLogarithm.html

or check on your calculator.

>>w~=(n-d)/d
> > Ditto.

That's subtracting fractions. Did you do fractions at school?

4/3 - 1 = 4/3 - 3/3 = (4-3)/3 = 1/3

5/4 - 1 = 5/4 - 4/4 = (5-4)/4 = 1/4

n/d - 1 = n/d - d/d = (n-d)/d

Graham

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

2/4/2003 2:08:37 AM

>>>log(n) + log(d) (hence approx. proportional to log(d))
>>
>> Is it any more proportional to log(d) than log(n) in this
>> case? Since n~=d?
>
>No, and the spreadsheet sorted by d is also sorted by n.

So it could just as well be (n-d)/(d*log(n))?

>And in that case it would have been easier to go straight
>to log(n*d).

Straight to where (do you see log(n*d))?

>>>w=log(n/d)
//
>>>w~=n/d-1
>>
>>
>>How do you get this from that?

Oh, (n/d)-1, not n/(d-1).

>It's the first order approximaton where n/d ~= 1. See (8) in
>http://mathworld.wolfram.com/NaturalLogarithm.html

The Mercator series?? And all the stuff on this page applies
only to ln, not log in general (which is what I assume Paul
meant), right?

>>>w~=(n-d)/d
>>
>>Ditto.
>
>That's subtracting fractions. Did you do fractions at school?

Yes; I was still seeing n/(d-1).

-Carl

🔗Graham Breed <graham@microtonal.co.uk>

2/4/2003 2:26:03 AM

Carl Lumma wrote:

> So it could just as well be (n-d)/(d*log(n))?

This is approximately log(n/d)/log(n)

The order will be reversed. If you can calculate it, see if it holds the ordering. With numerators it's easy as they're already in the database.

>>And in that case it would have been easier to go straight
>>to log(n*d).
> > Straight to where (do you see log(n*d))?

log(n*d) = log(n) + log(d)

The intervals start out in prime factor notation. So log(n*d) can be calculated as log(2)*abs(x_1) + log(3)*abs(x_2) + log(5)*abs(x_3) + log(7)*abs(x_4) where x_i is the ith prime component of n/d.

> The Mercator series?? And all the stuff on this page applies
> only to ln, not log in general (which is what I assume Paul
> meant), right?

log2(x) = log(x)/log(2). In this case we're only estimating the complexity, so the common factor of 1/log(2) isn't important.

For estimating comma sizes using mental arithmetic, remembering that 1200/log(2) is approximately 1730 comes in handy. 1730/80 = 21.625, so a syntonic comma's around 22 cents.

Graham
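
Both recipes fit in a few lines of Python (function names are mine, for illustration):

    import math

    def log_nd(monzo, primes=(2, 3, 5, 7)):
        # log(n*d) = sum over primes of log(p) * |exponent of p|
        return sum(math.log(p) * abs(x) for p, x in zip(primes, monzo))

    def cents(monzo, primes=(2, 3, 5, 7)):
        # 1200/log(2) ~ 1730 is the constant mentioned above
        return 1200 / math.log(2) * sum(math.log(p) * x
                                        for p, x in zip(primes, monzo))

    syntonic = (-4, 4, -1, 0)  # 81/80
    print(log_nd(syntonic))    # ~8.78 = log(81*80)
    print(cents(syntonic))     # ~21.5 cents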

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

2/4/2003 11:10:35 AM

>>>And in that case it would have been easier to go straight
>>>to log(n*d).
>>
>>Straight to where (do you see log(n*d))?
>
>log(n*d) = log(n) + log(d)

Of course... so we're coming from log(n*d), not going to it.

>>The Mercator series?? And all the stuff on this page applies
>>only to ln, not log in general (which is what I assume Paul
>>meant), right?
>
>log2(x) = log(x)/log(2). In this case we're only estimating
>the complexity, so the common factor of 1/log(2) isn't important.

Ok, but I still don't get how the "Mercator series" shown in (8)
dictates the rules for this approximation.

-Carl

🔗Graham Breed <graham@microtonal.co.uk>

2/4/2003 1:04:13 PM

Carl Lumma wrote:

> Ok, but I still don't get how the "Mercator series" shown in (8)
> dictates the rules for this approximation.

Oh, I thought you had followed that.

It's usually called the Taylor series. I don't know what Mercator's got to do with it. But anyway it's

ln(1+x) = x - x**2/2 + x**3/3 - ...

where x is small. Because x is small, x**2 must be even smaller, so you can use the first order approximation

ln(1+x) =~ x

In this case, 1+x is n/d, so x = n/d - 1

ln(n/d) =~ n/d - 1

If we really wanted the logarithm to base 2, that'd be

log2(n/d) = ln(n/d)/ln(2) =~ (n/d - 1)/ln(2)

or

log2(n/d) ~ n/d - 1

where ~ is "roughly proportional to" which is all we need to know. And the same's true whatever base logarithm you use.

Graham
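
The quality of that first-order approximation is easy to eyeball numerically (Python, illustrative):

    import math

    for n, d in [(81, 80), (250, 243), (32805, 32768)]:
        print(f"{n}/{d}: ln(n/d)={math.log(n / d):.7f}"
              f"  (n-d)/d={(n - d) / d:.7f}")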

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

2/4/2003 1:18:35 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> > carl, here's an old message where i explained the error
> > heuristic:
> >
> > /tuning-math/message/1437
>
> Great, thanks! I hadn't seen this, as "heuristic" doesn't
> appear in it.
>
> > and you can see that gene, in his reply, was the one who
> > actually suggested the word "heuristic" in connection
> > with this . . .
>
> I do see that... you were already using the term for the
> complexity heuristic at that time, right?

no, gene introduced the word "heuristic".

> I understand everything but a few details...
>
> >log(n) + log(d) (hence approx. proportional to log(d))
>
> Is it any more proportional to log(d) than log(n) in this
> case?

no.

> Since n~=d?

yes.

i prefer log(odd limit) over either log(n) or log(d).

> >w=log(n/d)
>
> Got that.
>
> >w~=n/d-1
>
> How do you get this from that?

standard taylor series approximation for log . . . if x is close to
1, then log(x) is close to x-1 (since the derivative of log(x) near
x=1 is 1/1 = 1).

> >w~=(n-d)/d
>
> Ditto.

arithmetic. n/d - 1 = n/d - d/d = (n-d)/d.

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

2/4/2003 1:23:11 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>>log(n) + log(d) (hence approx. proportional to log(d))
> >>
> >> Is it any more proportional to log(d) than log(n) in this
> >> case? Since n~=d?
> >
> >No, and the spreadsheet sorted by d is also sorted by n.
>
> So it could just as well be (n-d)/(d*log(n))?

a very different sorting. that would be heuristic error, not
heuristic complexity. of course it's a very different sorting, since
knowing log(n) or log(d) tells you nothing about (n-d).

> >And in that case it would have been easier to go straight
> >to log(n*d).
>
> Straight to where (do you see log(n*d))?

meaning the sorting by log(n*d) would be virtually identical to the
sorting by log(n) or by log(d).

>
> >It's the first order approximaton where n/d ~= 1. See (8) in
> >http://mathworld.wolfram.com/NaturalLogarithm.html
>
> The Mercator series?? And all the stuff on this page applies
> only to ln, not log in general (which is what I assume Paul
> meant), right?

i meant ln. i always use matlab, in which "log" means ln.

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

2/4/2003 1:28:26 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>>And in that case it would have been easier to go straight
> >>>to log(n*d).
> >>
> >>Straight to where (do you see log(n*d))?
> >
> >log(n*d) = log(n) + log(d)
>
> Of course... so we're coming from log(n*d), not going to it.

what do you mean, we're coming from log(n*d)??

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

2/4/2003 7:02:49 PM

>>>> Is it any more proportional to log(d) than log(n) in this
>>>> case? Since n~=d?
>>>
>>>No, and the spreadsheet sorted by d is also sorted by n.
>>
>>So it could just as well be (n-d)/(d*log(n))?
>
>a very different sorting. that would be heuristic error, not
>heuristic complexity. of course it's a very different sorting,
>since knowing log(n) or log(d) tells you nothing about (n-d).

? I was asking if the *error* heuristic could become
(n-d)/(d*log(n)) if we substituted log(n) for log(d).

>>>It's the first order approximaton where n/d ~= 1. See (8) in
>>>http://mathworld.wolfram.com/NaturalLogarithm.html
>>
>>The Mercator series?? And all the stuff on this page applies
>>only to ln, not log in general (which is what I assume Paul
>>meant), right?
>
>i meant ln. i always use matlab, in which "log" means ln.

Oh. How odd.

-Carl

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

2/4/2003 7:07:34 PM

>>>>>And in that case it would have been easier to go straight
>>>>>to log(n*d).
>>>>
>>>>Straight to where (do you see log(n*d))?
>>>
>>>log(n*d) = log(n) + log(d)
>>
>>Of course... so we're coming from log(n*d), not going to it.
>
>what do you mean, we're coming from log(n*d)??

In your original message, you *start from* "is proportional to
log(n) + log(d)" and *arrive at* "Hence the amount of tempering
implied by the unison vector is approx. proportional to
(n-d)/(d*log(d))".

-Carl

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

2/4/2003 7:10:55 PM

> > I do see that... you were already using the term for the
> > complexity heuristic at that time, right?
>
> no, gene introduced the word "heuristic".

K.

> > >w=log(n/d)
> >
> > Got that.
> >
> > >w~=n/d-1
> >
> > How do you get this from that?
>
> standard taylor series approximation for log . . . if x is
> close to 1, then log(x) is close to x-1 (since the derivative
> of log(x) near x=1 is 1/1 = 1).

Cool.

> > >w~=(n-d)/d
> >
> > Ditto.
>
> arithmetic. n/d - 1 = n/d - d/d = (n-d)/d.

Yeah, I thought the 1 was in the denominator.

-C.

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

2/4/2003 7:29:37 PM

>>Ok, but I still don't get how the "Mercator series" shown in (8)
>>dictates the rules for this approximation.
>
>Oh, I thought you had followed that.
>
>It's usually called the Taylor series. I don't know what
>Mercator's got to do with it.

On the mathworld page, it says "the Mercator series gives a
Taylor series for the natural logarithm", and in fact makes
it look like the Taylor series is the Mercator series.

>But anyway it's
>
> ln(1+x) = x - x**2/2 + x**3/3 - ...
>
> where x is small. Because x is small, x**2 must be even
> smaller, so you can use the first order approximation
>
> ln(1+x) =~ x
>
> In this case, 1+x is n/d, so x = n/d - 1
>
> ln(n/d) =~ n/d - 1

Thanks!

-Carl

🔗Carl Lumma <clumma@yahoo.com> <clumma@yahoo.com>

2/4/2003 8:02:23 PM

>>It's usually called the Taylor series. I don't know what
>>Mercator's got to do with it.
>
>On the mathworld page, it says "the Mercator series gives a
>Taylor series for the natural logarithm", and in fact makes
>it look like the Taylor series is the Mercator series.

...by the way the page is formatted.

Turns out that the Mercator series is what we're talking
about; a special case of the Taylor series.

http://mathworld.wolfram.com/TaylorSeries.html

"A Taylor series is a series expansion of a function about
a point. ... "

-Carl

🔗Gene Ward Smith <genewardsmith@juno.com> <genewardsmith@juno.com>

2/5/2003 5:21:50 AM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>" <clumma@y...> wrote:

> On the mathworld page, it says "the Mercator series gives a
> Taylor series for the natural logarithm", and in fact makes
> it look like the Taylor series is the Mercator series.

It is in this case--it's a special name for this series alone, sort of like "Gregory/Leibniz" for the arctangent (or the special value when x=1, namely pi/4.)

This is "our" Mercator, of the Mercator comma, incidentally, and not the map Mercator.

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

2/5/2003 2:40:38 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>>> Is it any more proportional to log(d) than log(n) in this
> >>>> case? Since n~=d?
> >>>
> >>>No, and the spreadsheet sorted by d is also sorted by n.
> >>
> >>So it could just as well be (n-d)/(d*log(n))?
> >
> >a very different sorting. that would be heuristic error, not
> >heuristic complexity. of course it's a very different sorting,
> >since knowing log(n) or log(d) tells you nothing about (n-d).
>
> ? I was asking if the *error* heuristic could become
> (n-d)/(d*log(n)) if we substituted log(n) for log(d).

oh . . . yeah sure. these are almost identical. i use odd limit for
both occurrences of n or d in the denominator.

🔗wallyesterpaulrus <wallyesterpaulrus@yahoo.com> <wallyesterpaulrus@yahoo.com>

2/5/2003 2:44:24 PM

--- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
> >>>>>And in that case it would have been easier to go straight
> >>>>>to log(n*d).
> >>>>
> >>>>Straight to where (do you see log(n*d))?
> >>>
> >>>log(n*d) = log(n) + log(d)
> >>
> >>Of course... so we're coming from log(n*d), not going to it.
> >
> >what do you mean, we're coming from log(n*d)??
>
> In your original message, you *start from* "is proportional to
> log(n) + log(d)" and *arrive at* "Hence the amount of tempering
> implied by the unison vector is approx. proportional to
> (n-d)/(d*log(d))".
>
> -Carl

right . . . well if you temper the octave too, then you'd use the
tenney metric which is indeed log(n) + log(d) . . . but typically
we've not been tempering the octave, so it's the van prooijen metric,
or log of the "odd limit" of the ratio, which is really relevant
here . . .

🔗Paul Erlich <perlich@aya.yale.edu>

10/27/2003 9:49:45 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith
<genewardsmith@j...>" <genewardsmith@j...> wrote:
> --- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
<clumma@y...> wrote:
>
> > Maybe the original exposition can just be updated a bit, and
> > then monz or I could host it, certainly.
>
> You might want to add to
>
> complexity ~ log(d)
>
> error ~ log(n-d)/(d log(d))
>
> > a badness heuristic of
>
> badness ~ log(n-d) log(d)^e / d
>
> where e = pi(prime limit)-1 = number of odd primes in limit.

gene, you too got the error heuristic wrong, it's

error ~ |n-d|/(d log(d))

and what kind of temperaments was this badness heuristic meant to
apply to?

🔗Paul Erlich <perlich@aya.yale.edu>

10/27/2003 11:48:11 AM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith
> <genewardsmith@j...>" <genewardsmith@j...> wrote:
> > --- In tuning-math@yahoogroups.com, "Carl Lumma <clumma@y...>"
> <clumma@y...> wrote:
> >
> > > Maybe the original exposition can just be updated a bit, and
> > > then monz or I could host it, certainly.
> >
> > You might want to add to
> >
> > complexity ~ log(d)
> >
> > error ~ log(n-d)/(d log(d))
> >
> > > a badness heuristic of
> >
> > badness ~ log(n-d) log(d)^e / d
> >
> > where e = pi(prime limit)-1 = number of odd primes in limit.
>
> gene, you too got the error heuristic wrong, it's
>
> error ~ |n-d|/(d log(d))
>
> and what kind of temperaments was this badness heuristic meant to
> apply to?

if i correct the error and use 5-limit linear temperaments (of course
you meant single-comma temperaments, duh), and thus use e=2, and cut
off the numerator and denominator at about 10^50, but don't cut off
for error (i just insist the size of the comma is under 600 cents), i
get the following for lowest badness:

numerator denominator name
( 1 1)
2.92300327466181e+048 2.92297733949268e+048 atomic
32805 32768 schismic
1.77635683940025e+034 1.77630864952823e+034 pirate
81 80 meantone
4.5035996273705e+017 4.50283905890997e+017 monzismic
4 3 -
25 24 dicot
1.7179869184e+047 1.7179250691067e+047 raider
15625 15552 kleismic
7629394531250 7625597484987 ennealimmal
9.01016235351562e+015 9.00719925474099e+015 kwazy
16 15 father
6 5 -
5 4 -
9 8 -
10 9 -
274877906944 274658203125 semithirds
128 125 augmented
3.81520424476946e+029 3.814697265625e+029 senior
2048 2025 diaschismic
1600000 1594323 amity
27 25 beep
1.16450459770592e+023 1.16415321826935e+023 whoosh
250 243 porcupine
1.62285243890121e+032 1.62259276829213e+032 fortune
5.00315450989997e+016 5e+016 minortone
1076168025 1073741824 UNNAMED!!!!!!!!
6115295232 6103515625 semisuper
78732 78125 semisixths
3125 3072 magic
393216 390625 würschmidt
2109375 2097152 orwell
135 128 pelogic
10485760000 10460353203 vulture
68719476736000 68630377364883 tricot
4.44089209850063e+035 4.44002166576103e+035 egads
1224440064 1220703125 parakleismic
2.23007451985306e+043 2.22975839456296e+043 gross
19073486328125 19042491875328 enneadecal
648 625 diminished
20000 19683 tetracot
256 243 blackwood
2.47588007857076e+027 2.47471500188112e+027 astro
6561 6400 -
32 27 -
2.02824096036517e+035 2.02755595904453e+035 -
531441 524288 aristoxenean
2.95431270655083e+021 2.95147905179353e+021 counterschismic
31381059609 31250000000 -
5.82076609134674e+023 5.81595589965365e+023 -
4294967296 4271484375 escapade
75 64 -
16875 16384 negri
27 20 -
95367431640625 95105071448064 -
2.25283995449392e+034 2.25179981368525e+034 -
32 25 -
25 18 -
129140163 128000000 -
125 108 -
390625000 387420489 -
2.9557837600708e+020 2.95147905179353e+020 vavoom
625 576 -
35303692060125 35184372088832 -
3.4359738368e+030 3.43368382029251e+030 -
67108864 66430125 misty
244140625 241864704 -
etc. etc. etc. etc. etc. etc. etc. etc. etc. etc. etc. etc. etc.

gene, did 1076168025:1073741824 not make your geometric badness
cutoff, or did i mistakenly skip over it when i was working from your
list? 67108864:66430125 made it onto your list, does that have lower
geometric badness? if so, why is 1076168025:1073741824 so unusual
from the point of view of heuristic vs. geometric badness?
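
(To reproduce the list, here is a sketch of the start-from-the-factors
approach -- paul confirms below that this is how the search was done:
enumerate exponents of 3 and 5, octave-reduce each comma to under 600
cents with exact rational arithmetic, apply the 10^50 cutoff, and sort
by badness with e = 2. Natural logs are an assumption, as is the radius
of 90 in the exponents, which is enough to reach the atomic comma at
the head of the list.)

from fractions import Fraction
import math

R = 90  # radius in exponents of 3 and 5
seen, rows = set(), []
for a in range(-R, R + 1):
    for b in range(-R, R + 1):
        if a == 0 and b == 0:
            continue
        q = Fraction(3)**a * Fraction(5)**b
        q *= Fraction(2)**(-math.floor(math.log2(q)))  # into [1, 2)
        while q >= 2:  # guard against float rounding at the boundary
            q /= 2
        while q < 1:
            q *= 2
        if q * q > 2:  # fold into (1, sqrt(2)], i.e. under 600 cents
            q = 2 / q
        n, d = q.numerator, q.denominator
        if n > 10**50 or (n, d) in seen:
            continue
        seen.add((n, d))
        rows.append(((n - d) * math.log(d)**2 / d, n, d))

for bad, n, d in sorted(rows)[:10]:
    print(f"{bad:.4f}  {n}/{d}")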

🔗Gene Ward Smith <gwsmith@svpal.org>

10/27/2003 1:12:36 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> 1076168025 1073741824 UNNAMED!!!!!!!!

Unnamed since it is a schisma squared.

🔗Carl Lumma <ekin@lumma.org>

10/27/2003 1:50:20 PM

>> > You might want to add to
>> >
>> > complexity ~ log(d)
>> >
>> > error ~ log(n-d)/(d log(d))
>> >
>> > a badness heuristic of
>> >
>> > badness ~ log(n-d) log(d)^e / d
>> >
>> > where e = pi(prime limit)-1 = number of odd primes in limit.
>>
>> gene, you too got the error heuristic wrong, it's
>>
>> error ~ |n-d|/(d log(d))
>>
>> and what kind of temperaments was this badness heuristic meant to
>> apply to?
>
>if i correct the error

Giving (n-d)log(d)^e / d ?

I don't get the point of the log(d)^e term.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

10/27/2003 2:00:02 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
>
>
> > 1076168025 1073741824
UNNAMED!!!!!!!!
>
> Unnamed since it is a schisma squared.

OOPS!!!!!!!! (i can feel that torsion in my gut)

🔗Paul Erlich <perlich@aya.yale.edu>

10/27/2003 2:00:25 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> > You might want to add to
> >> >
> >> > complexity ~ log(d)
> >> >
> >> > error ~ log(n-d)/(d log(d))
> >> >
> >> > a badness heursitic of
> >> >
> >> > badness ~ log(n-d) log(d)^e / d
> >> >
> >> > where e = pi(prime limit)-1 = number of odd primes in limit.
> >>
> >> gene, you too got the error heuristic wrong, it's
> >>
> >> error ~ |n-d|/(d log(d))
> >>
> >> and what kind of temperaments was this badness heuristic meant
> >> to apply to?
> >
> >if i correct the error
>
> Giving (n-d)log(d)^e / d ?
>
> I don't get the point of the log(d)^e term.
>
> -Carl

that gives you a log-flat badness measure.

🔗Carl Lumma <ekin@lumma.org>

10/27/2003 2:08:36 PM

>> >> > a badness heuristic of
>> >> >
>> >> > badness ~ log(n-d) log(d)^e / d
>> >> >
>> >> > where e = pi(prime limit)-1 = number of odd primes in limit.
>> >>
>> >> gene, you too got the error heuristic wrong, it's
>> >>
>> >> error ~ |n-d|/(d log(d))
//
>> >if i correct the error
>>
>> Giving (n-d)log(d)^e / d ?
>>
>> I don't get the point of the log(d)^e term.
>>
>> -Carl
>
>that gives you a log-flat badness measure.

Aha. Can we get results with e = 7 (19-limit)?

You said I just needed to penalize complexity, but:

() Wouldn't this ruin the log-flatness?
() Here you are restricting yourself to the 5-limit!

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

10/27/2003 2:20:58 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> >> > a badness heuristic of
> >> >> >
> >> >> > badness ~ log(n-d) log(d)^e / d
> >> >> >
> >> >> > where e = pi(prime limit)-1 = number of odd primes in limit.
> >> >>
> >> >> gene, you too got the error heuristic wrong, it's
> >> >>
> >> >> error ~ |n-d|/(d log(d))
> //
> >> >if i correct the error
> >>
> >> Giving (n-d)log(d)^e / d ?

yes, if n>d.

> >> I don't get the point of the log(d)^e term.
> >>
> >> -Carl
> >
> >that gives you a log-flat badness measure.
>
> Aha. Can we get results with e = 7 (19-limit)?

sure, but then we'd be talking about 6-dimensional temperaments. i
*might* attempt the calculation if you give me a nice low cutoff for
numerator and denominator . . .

> You said I just needed to penalize complexity,

penalize it more, yes.

> but:
>
> () Wouldn't this ruin the log-flatness?

no, it would get you closer to it.

> () Here you are restricting yourself to the 5-limit!

yes, though a few of the commas are 3-limit too. so?

🔗Carl Lumma <ekin@lumma.org>

10/27/2003 2:37:25 PM

>> >> Giving (n-d)log(d)^e / d ?
>
>yes, if n>d.

I thought you might say that!

>> >> I don't get the point of the log(d)^e term.
>> >>
>> >> -Carl
>> >
>> >that gives you a log-flat badness measure.
>>
>> Aha. Can we get results with e = 7 (19-limit)?
>
>sure, but then we'd be talking about 6-dimensional temperaments.

Saints preserve us!

>i *might* attempt the calculation if you give me a nice low cutoff
>for numerator and denominator . . .

Well, 10^50 would send my code to the sun. I could probably do this
with an imperative style, but since you apparently already have done
so, I thought I'd ask you.

What I don't get is why upping the prime limit from 5 to 19 would
make it any harder. The way I'd do it, is for each d < 10^50, run
n until n/d > 600 cents, kicking out any ratios where n*d has a
factor greater than 19. The factoring algorithm I'm using walks
up from 2, so aborting it after 19 or 5 wouldn't make much difference.

>> You said I just needed to penalize complexity,
>
>penalize it more, yes.
>
>> but:
>>
>> () Wouldn't this ruin the log-flatness?
>
>no, it would get you closer to it.

You mean without the log(d)^e term? Because if that term gives
flatness, and then I put an exponent on d, wouldn't I be ruining
the flatness?

>> () Here you are restricting yourself to the 5-limit!
>
>yes, though a few of the commas are 3-limit too. so?

I asked if these searches could be done without a restriction
on prime-limit, and you said yes.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

10/27/2003 2:42:55 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> >> Giving (n-d)log(d)^e / d ?
> >
> >yes, if n>d.
>
> I thought you might say that!
>
> >> >> I don't get the point of the log(d)^e term.
> >> >>
> >> >> -Carl
> >> >
> >> >that gives you a log-flat badness measure.
> >>
> >> Aha. Can we get results with e = 7 (19-limit)?
> >
> >sure, but then we'd be talking about 6-dimensional temperaments.
>
> Saints preserve us!
>
> >i *might* attempt the calculation if you give me a nice low cutoff
> >for numerator and denominator . . .
>
> Well, 10^50 would send my code to the sun. I could probably do this
> with an imperative style, but since you apparently already have done
> so, I thought I'd ask you.

i've only done it for prime limit 5.

> What I don't get is why upping the prime limit from 5 to 19 would
> make it any harder. The way I'd do it, is for each d < 10^50, run
> n until n/d > 600 cents, kicking out any ratios where n*d has a
> factor greater than 19. The factoring algorithm I'm using walks
> up from 2, so aborting it after 19 or 5 wouldn't make much
>difference.

ok, so why don't you do it? (seriously -- my factoring algorithm
refuses numbers higher than 2^32). see if you can reproduce my 5-
limit results first.

> >> You said I just needed to penalize complexity,
> >
> >penalize it more, yes.
> >
> >> but:
> >>
> >> () Wouldn't this ruin the log-flatness?
> >
> >no, it would get you closer to it.
>
> You mean without the log(d)^e term?

i mean, you were using complexity * error, and that didn't penalize
complexity enough, while a higher power on complexity would.

> Because if that term gives
> flatness, and then I put an exponent on d, wouldn't I be ruining
> the flatness?
>
> >> () Here you are restricting yourself to the 5-limit!
> >
> >yes, though a few of the commas are 3-limit too. so?
>
> I asked if these searches could be done without a restriction
> on prime-limit, and you said yes.

i don't think i would have been referring to the same search. the
exponent on complexity in the log-flat badness formula, at least
according to gene, depends on the prime limit.

🔗Paul Erlich <perlich@aya.yale.edu>

10/27/2003 2:54:30 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> What I don't get is why upping the prime limit from 5 to 19 would
> make it any harder.

i did it this way:

http://www.kees.cc/tuning/perbl.html

🔗Carl Lumma <ekin@lumma.org>

10/27/2003 5:38:49 PM

>> What I don't get is why upping the prime limit from 5 to 19 would
>> make it any harder. The way I'd do it, is for each d < 10^50, run
>> n until n/d > 600 cents, kicking out any ratios where n*d has a
>> factor greater than 19. The factoring algorithm I'm using walks
>> up from 2, so aborting it after 19 or 5 wouldn't make much
>> difference.
>
>ok, so why don't you do it? (seriously -- my factoring algorithm
>refuses numbers higher than 2^32). see if you can reproduce my 5-
>limit results first.

Ok, maybe later tonight/this morning. But how'd you do 10^50 if
you can't factor above 2^32?

>> >> You said I just needed to penalize complexity,
>> >
>> >penalize it more, yes.
>> >
>> >> but:
>> >>
>> >> () Wouldn't this ruin the log-flatness?
>> >
>> >no, it would get you closer to it.
>>
>> You mean without the log(d)^e term?
>
>i mean, you were using complexity * error, and that didn't penalize
>complexity enough, while a higher power on complexity would.

Ok.

>> Because if that term gives
>> flatness, and then I put an exponent on d, wouldn't I be ruining
>> the flatness?
>>
>> >> () Here you are restricting yourself to the 5-limit!
>> >
>> >yes, though a few of the commas are 3-limit too. so?
>>
>> I asked if these searches could be done without a restriction
>> on prime-limit, and you said yes.
>
>i don't think i would have been referring to the same search. the
>exponent on complexity in the log-flat badness formula, at least
>according to gene, depends on the prime limit.

You were referring to |n-d|/d. I get it now.

-Carl

🔗Carl Lumma <ekin@lumma.org>

10/28/2003 11:07:41 AM

>> What I don't get is why upping the prime limit from 5 to 19 would
>> make it any harder.
>
>i did it this way:
>
>http://www.kees.cc/tuning/perbl.html

This doesn't make a method clear to me. Could you go over it,
and/or post your code?

How long did it take you to do n=10^50? I've been running n=5000
for the last hour, and I'm still waiting.

Here's what I do...

For each d <= n, I run n up to 600 cents. That's well less than
n(n+1)/2 ratios -- shouldn't be a problem for n=5000. I check if
the ratio's in lowest-terms; if not, I throw it out. I then
compute its prime-limit; if > p (runtime param, 5 in this case),
I throw it out. I then compute the badness, using this handy-dandy
formula for pi(x)...

;; Minac (unpublished, proof in Ribenboim 1995, p. 181).
;; pi(x) = sum_{j=2..x} ( floor(((j-1)! + 1)/j) - floor((j-1)!/j) ).

I believe all these tests to be quite cheap. Anywho, I then place
the ratio in a running list of the best r (runtime param, in this
case 10) ratios.

I really don't know what other optimizations to make. Anyway,
here are the results for n=1000, p=5, r=10, which takes about a
minute...

(badness primelimit ratio)
((0.24002696783739128 5 81/80)
(0.25993019270997947 3 9/8)
(0.2938674846231569 3 256/243)
(0.3662040962227033 3 4/3)
(0.42083442285788536 5 25/24)
(0.4804530139182014 5 5/4)
(0.4889023927793147 5 16/15)
(0.5180580787960469 5 6/5)
(0.5364217603611476 5 10/9)
(0.5595027250997308 5 128/125))

...they look roughly consistent with your results. p=7 takes no
longer, but returns the same top-10 results as above. In results
10-20 we do see some 7-limit ratios...

(0.6103401603711721 3 32/27)
(0.7075223009389495 7 225/224)
(0.828892926073675 5 27/25)
(0.8692019265111186 5 250/243)
(0.9004848978857011 7 126/125)
(0.9587113625980545 7 7/6)
(1.0526168298843461 7 8/7)
(1.1047033190174127 3 81/64)
(1.1288769832608407 7 64/63)
(1.2029906627249671 7 50/49))

-Carl
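
(Carl's Scheme isn't posted, but the procedure he describes is easy to
sketch in Python. The Minac formula is implemented literally -- exact,
via Wilson's theorem, and absurdly slow for large x, which hardly
matters since it is only ever called on tiny prime limits. Note this
version computes e from the prime limit of each ratio, which is what
produces the table above; the correction to that convention follows
below.)

import math
from math import factorial, gcd

def pi_minac(x):
    # pi(x) = sum_{j=2..x} ( floor(((j-1)! + 1)/j) - floor((j-1)!/j) )
    return sum((factorial(j - 1) + 1) // j - factorial(j - 1) // j
               for j in range(2, x + 1))

def smooth_limit(x, limit):
    # largest prime factor of x if it is <= limit, else None; trial
    # division walks up from 2 and aborts past the limit, as carl suggests
    biggest = 1
    for p in range(2, limit + 1):
        while x % p == 0:
            biggest, x = p, x // p
    return biggest if x == 1 else None

best, MAXD, LIMIT = [], 1000, 5
for d in range(2, MAXD + 1):
    n = d + 1
    while n * n < 2 * d * d:  # keep n/d under 600 cents, exactly
        if gcd(n, d) == 1:
            p = smooth_limit(n * d, LIMIT)
            if p is not None:
                e = pi_minac(p) - 1  # per-ratio e, as in the run above
                best.append((abs(n - d) * math.log(d)**e / d, p, f"{n}/{d}"))
        n += 1

for row in sorted(best)[:10]:
    print(row)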

🔗Carl Lumma <ekin@lumma.org>

10/28/2003 11:08:21 AM

>>> |n-d|log(d)^e / d

() Is e here supposed to be based on the prime limit of the
particular ratio, or the prime limit of the group of ratios?

() log-flat is supposed to give us an equal number of results
in all size ranges? I tend to think this allows too much
complexity in the small ratios.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

10/28/2003 11:14:20 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> What I don't get is why upping the prime limit from 5 to 19 would
> >> make it any harder. The way I'd do it, is for each d < 10^50, run
> >> n until n/d > 600 cents, kicking out any ratios where n*d has a
> >> factor greater than 19. The factoring algorithm I'm using walks
> >> up from 2, so aborting it after 19 or 5 wouldn't make much
> >> difference.
> >
> >ok, so why don't you do it? (seriously -- my factoring algorithm
> >refuses numbers higher than 2^32). see if you can reproduce my 5-
> >limit results first.
>
> Ok, maybe later tonight/this morning. But how'd you do 10^50 if
> you can't factor above 2^32?

i started with the factors!

🔗Paul Erlich <perlich@aya.yale.edu>

10/28/2003 11:26:17 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >> What I don't get is why upping the prime limit from 5 to 19 would
> >> make it any harder.
> >
> >i did it this way:
> >
> >http://www.kees.cc/tuning/perbl.html
>
> This doesn't make a method clear to me. Could you go over it,
> and/or post your code?

i did it with command-line instructions. first create your 2-d array
of 5-limit ratios (the 5-limit lattice) and then calculate the
numerator and denominator of each ratio . . .

> How long did it take you to do n=10^50?

the computer time was negligible.

> I then
> compute its prime-limit; if > p (runtime param, 5 in this case),
> I throw it out. I then compute the badness, using this handy-dandy
> formula for pi(x)...
>
> ;; Minac (unpublished, proof in Ribenboim 1995, p. 181).
> ;; pi(x) = sum_{j=2..x} ( floor(((j-1)! + 1)/j) - floor((j-1)!/j) ).

you're misunderstanding something. the highest prime in the ratio is
not necessarily the prime limit in which it will be used. witness the
pythagorean limma and pythagorean comma in the 5-limit case, for
example.

🔗Paul Erlich <perlich@aya.yale.edu>

10/28/2003 11:27:55 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >>> |n-d|log(d)^e / d
>
> () Is e here supposed to be based on the prime limit of the
> particular ratio, or the prime limit of the group of ratios?

the prime limit of the lattice you're tempering.

> () log-flat is supposed to give us an equal number of results
> in all size ranges?

when they're defined logarithmically, yes.

> I tend to think this allows too much
> complexity in the small ratios.

so you want to penalize complexity even more? go for it.

🔗Carl Lumma <ekin@lumma.org>

10/28/2003 12:04:19 PM

>> I then
>> compute its prime-limit; if > p (runtime param, 5 in this case),
>> I throw it out. I then compute the badness, using this handy-dandy
>> formula for pi(x)...
>>
>> ;; Minac (unpublished, proof in Ribenboim 1995, p. 181).
>> ;; pi(x) = sum_{j=2..x} ( floor(((j-1)! + 1)/j) - floor((j-1)!/j) ).
>
>you're misunderstanding something. the highest prime in the ratio is
>not necessarily the prime limit in which it will be used. witness the
>pythagorean limma and pythagorean comma in the 5-limit case, for
>example.
//
>> >>> |n-d|log(d)^e / d
>>
>> () Is e here supposed to be based on the prime limit of the
>> particular ratio, or the prime limit of the group of ratios?
>
>the prime limit of the lattice you're tempering.

Aha! Well then, I certainly don't need to compute pi(x) more
than once.

There's no apparent speedup, but at least these should be correct...

(primelimit=5 max-d=1000, res=10)
(badness primelimit ratio)
((0.24002696783739128 5 81/80)
(0.4023163202708607 3 4/3)
(0.42083442285788536 5 25/24)
(0.4804530139182014 5 5/4)
(0.4889023927793147 5 16/15)
(0.5180580787960469 5 6/5)
(0.5364217603611476 5 10/9)
(0.5405096406579765 3 9/8)
(0.5595027250997308 5 128/125)
(0.828892926073675 5 27/25))

(primelimit=7 max-d=1000, res=20)
(badness primelimit ratio)
((0.4419896533813025 3 4/3)
(0.6660493039778589 5 5/4)
(0.7075223009389495 7 225/224)
(0.8337823128571303 5 6/5)
(0.9004848978857011 7 126/125)
(0.9587113625980545 7 7/6)
(1.0518045661034596 5 81/80)
(1.0526168298843461 7 8/7)
(1.1239582004626365 3 9/8)
(1.1288769832608407 7 64/63)
(1.1786390756834733 5 10/9)
(1.2029906627249671 7 50/49)
(1.20863712518982 7 49/48)
(1.284039331328201 7 36/35)
(1.312860066467948 7 15/14)
(1.3239722230853748 5 16/15)
(1.3259689601439075 7 28/27)
(1.3374344495057697 5 25/24)
(1.3442467614814364 7 21/20)
(1.3641655968558717 7 245/243))

>> () log-flat is supposed to give us an equal number of results
>> in all size ranges?
>
>when they're defined logarithmically, yes.

Does this also give an equal number of results in all
complexity ranges? 'Cause that seems more intuitive to me.

>> I tend to think this allows too much
>> complexity in the small ratios.
>
>so you want to penalize complexity even more? go for it.

Possibly. If I can get this working. Looks like the start-from-
factors approach is the only workable solution. There's no
native array in Scheme, but I'm sure you can make them with vectors
of vectors.

Weirdly, though, my method is apparently O(n^2) no matter what the
limit, whereas your lattice-based method is O(n^2) at the 5-limit,
O(n^3) at the 7-limit, etc. No wait, my n is max-d, and your n is
radius on the lattice. Not the same. Did you just up the radius
until max-d was exceeded somewhere?

-Carl
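
(The only change needed for the corrected runs above is to compute e
once, from the prime limit of the lattice rather than of each ratio --
i.e. replace the two lines inside the loop of the earlier sketch with:)

E = pi_minac(LIMIT) - 1  # fixed once; 2 for the 5-limit lattice
best.append((abs(n - d) * math.log(d)**E / d, p, f"{n}/{d}"))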

🔗Paul Erlich <perlich@aya.yale.edu>

10/28/2003 12:13:07 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> >> () log-flat is supposed to give us an equal number of results
> >> in all size ranges?
> >
> >when they're defined logarithmically, yes.
>
> Does this also give an equal number of results in all
> complexity ranges? 'Cause that seems more intuitive to me.

by 'size', i meant complexity, or size of the numerator and
denominator. i didn't mean cents size of the comma in JI!

> Weirdly, though, my method is apparently O(n^2) no matter what the
> limit, whereas your lattice-based method is O(n^2) at the 5-limit,
> O(n^3) at the 7-limit, etc. No wait, my n is max-d, and your n is
> radius on the lattice. Not the same. Did you just up the radius
> until max-d was exceeded somewhere?

i made sure it was large enough to enclose kees' figure. note the
relationship with his work mentioned here:

http://www.sonic-arts.org/dict/heuristic-complexity.htm

🔗Carl Lumma <ekin@lumma.org>

10/28/2003 12:35:57 PM

>> >> () log-flat is supposed to give us an equal number of results
>> >> in all size ranges?
>> >
>> >when they're defined logarithmically, yes.
>>
>> Does this also give an equal number of results in all
>> complexity ranges? 'Cause that seems more intuitive to me.
>
>by 'size', i meant complexity, or size of the numerator and
>denominator. i didn't mean cents size of the comma in JI!

Oh. I guess I'm satisfied, then.

>> Weirdly, though, my method is apparently O(n^2) no matter what the
>> limit, whereas your lattice-based method is O(n^2) at the 5-limit,
>> O(n^3) at the 7-limit, etc. No wait, my n is max-d, and your n is
>> radius on the lattice. Not the same. Did you just up the radius
>> until max-d was exceeded somewhere?
>
>i made sure it was large enough to enclose kees' figure. note the
>relationship with his work mentioned here:
>
>http://www.sonic-arts.org/dict/heuristic-complexity.htm

...

>heuristic complexity
>
>for a unison vector n/d, which is a ratio of R, the heuristic for
>complexity is log(R).
>
>the complexity heuristic is identical to kees van prooijen's
>'expressibility' measure

Huh? For n/d, I thought it was just log(d). What's R here, odd-
or prime-limit?

Further, isn't there something on kees' page about your measure
and his being subtly different?

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

10/28/2003 12:40:02 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> >heuristic complexity
> >
> >for a unison vector n/d, which is a ratio of R, the heuristic for
> >complexity is log(R).
> >
> >the complexity heuristic is identical to kees van prooijen's
> >'expressibility' measure
>
> Huh? For n/d, I thought it was just log(d). What's R here, odd-
> or prime-limit?

ratio of R, related to odd limit (click on "ratio of" for the
definition). for a small (in cents) comma, d is a very good
approximation of R.

> Further, isn't there something on kees' page about your measure
> and his being subtly different?

that was a different measure of mine, namely taxicab distance on the
isosceles triangular lattice.
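
(A quick numerical check of that last point, with odd_part stripping
the factors of 2 as in the earlier sketch:)

import math

def odd_part(x):
    while x % 2 == 0:
        x //= 2
    return x

def R(n, d):
    return max(odd_part(n), odd_part(d))

print(math.log2(R(81, 80)), math.log2(80))           # 6.340 vs 6.322
print(math.log2(R(32805, 32768)), math.log2(32768))  # 15.002 vs 15.000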

🔗Carl Lumma <ekin@lumma.org>

10/28/2003 12:44:11 PM

>> >heuristic complexity
>> >
>> >for a unison vector n/d, which is a ratio of R, the heuristic for
>> >complexity is log(R).
>> >
>> >the complexity heuristic is identical to kees van prooijen's
>> >'expressibility' measure
>>
>> Huh? For n/d, I thought it was just log(d). What's R here, odd-
>> or prime-limit?
>
>ratio of R, related to odd limit (click on "ratio of" for the
>definition). for a small (in cents) comma, d is a very good
>approximation of R.
>
>> Further, isn't there something on kees' page about your measure
>> and his being subtly different?
>
>that was a different measure of mine, namely taxicab distance on the
>isosceles triangular lattice.

Sweet. Tx.

-Carl

🔗Carl Lumma <ekin@lumma.org>

10/26/2005 2:49:37 AM

>>> >the complexity heuristic is identical to kees van prooijen's
>>> >'expressibility' measure

>>> Isn't there something on kees' page about your measure
>>> and his being subtly different?

>>that was a different measure of mine, namely taxicab distance on the
>>isosceles triangular lattice.

>...but I still don't understand what's wrong with an
>octave-equiv. rect. lattice.

Doesn't this

http://www.kees.cc/tuning/erl_perbl.html

show that Kees' expressibility is a rectangular, octave-equivalent
measure?

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

10/26/2005 11:17:29 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> Doesn't this
>
> http://www.kees.cc/tuning/erl_perbl.html
>
> show that Kees' expressibility is a rectangular, octave-equivalent
> measure?

The other way to define it is that given a monzo |a0 a1 ... an> we first
form |a0 log2(3)a1 log2(5)a2 ... log2(p)an> = |b0 ... bn>, and then
the so-called "expressibility" will be

(|b1| + |b2| + ... + |bn| + |b1+b2+...+bn|)/2

From this it is clear that things cannot be described as rectangular,
even to the extent that this sort of terminology applies at all in
non-Euclidean contexts.
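
(Gene's definition translates directly into code; a Python sketch, with
the prime list an assumption for illustration. It also confirms the 5/4
vs 5/3 point that comes up below: both come out at log2(5).)

import math

PRIMES = (2, 3, 5, 7, 11)

def expressibility(monzo):
    # weight each exponent beyond the 2s by log2 of its prime, then take
    # (|b1| + ... + |bn| + |b1 + ... + bn|) / 2
    b = [a * math.log2(p) for a, p in zip(monzo[1:], PRIMES[1:])]
    return (sum(abs(x) for x in b) + abs(sum(b))) / 2

print(expressibility((-2, 0, 1)))  # 5/4 = |-2 0 1>  ->  ~2.32 = log2(5)
print(expressibility((0, -1, 1)))  # 5/3 = |0 -1 1>  ->  ~2.32 = log2(5)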

🔗Carl Lumma <ekin@lumma.org>

10/26/2005 2:36:44 PM

>> Doesn't this
>>
>> http://www.kees.cc/tuning/erl_perbl.html
>>
>> show that Kees' expressibility is a rectangular, octave-equivalent
>> measure?
>
>The other way to define it is that given a monzo |a0 a1 ... an> we
>first form |a0 log2(3)a1 log2(5)a2 ... log2(p)an> = |b0 ... bn>, and
>then the so-called "expressibility" will be
>
>(|b1| + |b2| + ... + |bn| + |b1+b2+...+bn|)/2
>
>From this it is clear that things cannot be described as rectangular,
>even to the extent that this sort of terminology applies at all in
>non-Euclidean contexts.

I would call a taxicab distance measure weighted/unweighted,
rectangular/triangular if there is a sphere (centered on a lattice
point with an arbitrary radius) that encloses all of the
lattice points of taxicab distance <= x and no or few other lattice
points.

I think you get rectangular in this sense if you factor n*d,
whether weighted or unweighted. Am I wrong?

Unweighted triangular is max ((n1+n2...) (d1+d2...)), where
n1... and d1... are the exponents in the prime factorizations of
n and d. Am I wrong?

Weighted triangular is trickier. I believe the formula is
>Given a Fokker-style interval vector (I1, I2, . . . In):
>
>1. Go to the rightmost nonzero exponent; add the product of its
>absolute value with the log of its base to the total.
>2. Use that exponent to cancel out as many exponents of the opposite
>sign as possible, starting to its immediate left and working leftward;
>discard anything remaining of that exponent.
> Example: starting with, say, (4 2 -3), we would add 3 lg(7) to
> our total, then cancel the -3 against the 2, then the remaining
> -1 against the 4, leaving (3 0 0). OTOH, starting with
> (-2 3 5), we would add 5 lg(7) to our total, then cancel 2 of
> the 5 against the -2 and discard the remainder, leaving (0 3 0).
>3. If any nonzero exponents remain, go back to step one, otherwise
>stop.
This gives Paul's "taxicab isosceles" measure from the link above.
Kees shows it on the lattice at
http://kees.cc/tuning/lat_perbl.html
and it doesn't look spherical, but that's possibly because he's
still got the axes at 90deg.

So I take it back that expressibility is rectangular. In fact
I can't see how it has anything to do with lattice distance, unless
we have a limit-infinity lattice. I made this criticism in 99, I
think, when it first came up.

-Carl
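
(The quoted step-cancel procedure, sketched in Python and checked
against its own worked examples. The vector entries are read as
exponents of 3, 5, 7 and lg as log base 2 -- both assumptions, since
the quote doesn't pin them down.)

import math

def weighted_triangular(vec, primes=(3, 5, 7)):
    v, total = list(vec), 0.0
    while any(v):
        i = max(k for k, x in enumerate(v) if x != 0)  # rightmost nonzero
        mag, sign = abs(v[i]), (1 if v[i] > 0 else -1)
        total += mag * math.log2(primes[i])
        v[i] = 0
        for j in range(i - 1, -1, -1):  # cancel opposite signs, leftward
            if mag == 0:
                break
            if v[j] * sign < 0:
                used = min(mag, abs(v[j]))
                v[j] += sign * used
                mag -= used
        # any remainder of the current exponent is discarded
    return total

assert math.isclose(weighted_triangular((4, 2, -3)),
                    3*math.log2(3) + 3*math.log2(7))
assert math.isclose(weighted_triangular((-2, 3, 5)),
                    3*math.log2(5) + 5*math.log2(7))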

🔗Paul Erlich <perlich@aya.yale.edu>

10/26/2005 4:46:22 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >>> >the complexity heuristic is identical to kees van prooijen's
> >>> >'expressibility' measure
>
> >>> Isn't there something on kees' page about your measure
> >>> and his being subtly different?
>
> >>that was a different measure of mine, namely taxicab distance on the
> >>isosceles triangular lattice.
>
> >...but I still don't understand what's wrong with an
> >octave-equiv. rect. lattice.
>
> Doesn't this
>
> http://www.kees.cc/tuning/erl_perbl.html
>
> show that Kees' expressibility is a rectangular, octave-equivalent
> measure?
>
> -Carl

No, absolutely not. I don't know what gave you that impression. When
the lattice is rectangular, Kees expressibility is given by a
hexagonal norm of skewed shape (while a rectangular measure of
complexity would be given by a rectangular norm here). When the
hexagon is made regular, the lattice ends up looking like this (see
the second-to-last lattice here):

http://www.kees.cc/tuning/lat_perbl.html

Recall that expressibility is not a taxicab metric but is a hexagonal-
norm metric. So there is no paradox (contra Kees); 5/4 and 5/3 lie on
the same regular hexagon centered on 1/1, and thus they get the
same "distance" according to the regular-hexagon metric.

🔗Paul Erlich <perlich@aya.yale.edu>

10/26/2005 5:01:30 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> Doesn't this
> >>
> >> http://www.kees.cc/tuning/erl_perbl.html
> >>
> >> show that Kees' expressibility is a rectangular, octave-
> >> equivalent measure?
> >
> >The other way to define it is that given a monzo |a0 a1 ... an> we
> >first form |a0 log2(3)a1 log2(5)a2 ... log2(p)an> = |b0 ... bn>, and
> >then the so-called "expressibility" will be
> >
> >(|b1| + |b2| + ... + |bn| + |b1+b2+...+bn|)/2
> >
> >From this it is clear that things cannot be described as rectangular,
> >even to the extent that this sort of terminology applies at all in
> >non-Euclidean contexts.
>
> I would call a taxicab distance measure weighted/unweighted,
> rectangular/triangular if there is a sphere (centered on a lattice
> point with an arbitrary radius) that encloses all of the
> lattice points of taxicab distance <= x and no or few other lattice
> points.
>
> I think you get rectangular in this sense

You gave only one if clause above; you didn't explain how one "gets"
or doesn't "get" rectangular.

> Unweighted triangular is max ((n1+n2...) (d1+d2...)), where
> n1... and d1... are the exponents in the prime factorizations of
> n and d. Am I wrong?

You probably don't want to use prime factorization here, but rather
Hahn's "odd factorization". Also below.

> Weighted triangular is trickier. I believe the formula is
> >Given a Fokker-style interval vector (I1, I2, . . . In):
> >
> >1. Go to the rightmost nonzero exponent; add the product of its
> >absolute value with the log of its base to the total.
> >2. Use that exponent to cancel out as many exponents of the opposite
> >sign as possible, starting to its immediate left and working leftward;
> >discard anything remaining of that exponent.
> > Example: starting with, say, (4 2 -3), we would add 3 lg(7) to
> > our total, then cancel the -3 against the 2, then the remaining
> > -1 against the 4, leaving (3 0 0). OTOH, starting with
> > (-2 3 5), we would add 5 lg(7) to our total, then cancel 2 of
> > the 5 against the -2 and discard the remainder, leaving (0 3 0).
> >3. If any nonzero exponents remain, go back to step one, otherwise
> >stop.
> This gives Paul's "taxicab isosceles" measure from the link above.
> Kees shows it on the lattice at
> http://kees.cc/tuning/lat_perbl.html
> and it doesn't look spherical, but that's possibly because he's
> still got the axes at 90deg.

It's the second lattice on that page. I don't see any 90 degree
angles there!

> So I take it back that expressibility is rectangular. In fact
> I can't see how it has anything to do with lattice distance, unless
> we have a limit-infinity lattice. I made this criticism in 99, I
> think, when it first came up.

And it took me years to address it, but I eventually did. If you give
up both taxicab and Euclidean metrics, and use a hexagonal norm
instead, then Kees expressibility can be made exactly identical to
lattice distance, no matter how few or many dimensions (primes minus
1) there are. This norm is a regular hexagon if you use the lengths
and angles in the second-to-last lattice on that page
(http://kees.cc/tuning/lat_perbl.html).

🔗Carl Lumma <ekin@lumma.org>

10/26/2005 5:15:13 PM

>Recall that expressibility is not a taxicab metric but is a hexagonal-
>norm metric. So there is no paradox (contra Kees); 5/4 and 5/3 lie on
>the same regular hexagon centered on 1/1, and thus they get the
>same "distance" according to the regular-hexagon metric.

That's what I'd forgotten. Why is hexagon better than taxicab?

-Carl

🔗Carl Lumma <ekin@lumma.org>

10/26/2005 5:21:48 PM

>> I would call a taxicab distance measure weighted/unweighted,
>> rectangular/triangular if there is a sphere (centered on a lattice
>> point with an arbitrary radius) that encloses all of the
>> lattice points of taxicab distance <= x and no or few other lattice
>> points.
>>
>> I think you get rectangular in this sense
>
>You gave only one if clause above; you didn't explain how one "gets"
>or doesn't "get" rectangular.

If such a sphere exists on an un/weighted tri/rectangular lattice
the distance measure is called un/weighted tri/rectangular. Makes
sense to me.

>> Unweighted triangular is max ((n1+n2...) (d1+d2...)), where
>> n1... and d1... are the exponents in the prime factorizations of
>> n and d. Am I wrong?
>
>You probably don't want to use prime factorization here, but rather
>Hahn's "odd factorization". Also below.

For unweighted I usually prefer prime factorization. I'm not
trying to outdo harmonic entropy here in terms of a dissonance
measure, I'm just trying to get a reasonable notion of lattice
distance.

>> Weighted triangular is trickier. I believe the formula is
//
>> This gives Paul's "taxicab isosceles" measure from the link above.
>> Kees shows it on the lattice at
>> http://kees.cc/tuning/lat_perbl.html
>> and it doesn't look spherical, but that's possibly because he's
>> still got the axes at 90deg.
>
>It's the second lattice on that page. I don't see any 90 degree
>angles there!

I don't see the unit shell there either.

>> So I take it back that expressibility is rectangular. In fact
>> I can't see how it has anything to do with lattice distance, unless
>> we have a limit-infinity lattice. I made this criticism in 99, I
>> think, when it first came up.
>
>And it took me years to address it, but I eventually did. If you give
>up both taxicab and Euclidean metrics, and use a hexagonal norm
>instead, then Kees expressibility can be made exactly identical to
>lattice distance, no matter how few or many dimensions (primes minus
>1) there are. This norm is a regular hexagon if you use the lengths
>and angles in the second-to-last lattice on that page
>(http://kees.cc/tuning/lat_perbl.html).

Is the motivation for this that odd limit seems to be such a good
measure of octave-equivalent dissonance?

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

10/26/2005 5:57:28 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >Recall that expressibility is not a taxicab metric but is a hexagonal-
> >norm metric. So there is no paradox (contra Kees); 5/4 and 5/3 lie on
> >the same regular hexagon centered on 1/1, and thus they get the
> >same "distance" according to the regular-hexagon metric.
>
> That's what I'd forgotten. Why is hexagon better than taxicab?

It's not "better", it's just what works here.

🔗Paul Erlich <perlich@aya.yale.edu>

10/26/2005 6:05:45 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> I would call a taxicab distance measure weighted/unweighted,
> >> rectangular/triangular if there is a sphere (centered on a lattice
> >> point with an arbitrary radius) that encloses all of the
> >> lattice points of taxicab distance <= x and no or few other lattice
> >> points.
> >>
> >> I think you get rectangular in this sense
> >
> >You gave only one if clause above; you didn't explain how one "gets"
> >or doesn't "get" rectangular.
>
> If such a sphere exists on an un/weighted tri/rectangular lattice
> the distance measure is called un/weighted tri/rectangular. Makes
> sense to me.

It seems circular to me. Start with the weighting part. A weighted
taxicab measure means to me that some rungs are longer than others.
So the sphere will naturally accommodate fewer rungs in the longer
direction, which seems to mean you "get" weighted according to the
above. But if you started with unweighted, the lengths would be
equal, so you'd "get" unweighted of course . . . (?)

> >> Unweighted triangular is max ((n1+n2...) (d1+d2...)), where
> >> n1... and d1... are the exponents in the prime factorizations of
> >> n and d. Am I wrong?
> >
> >You probably don't want to use prime factorization here, but rather
> >Hahn's "odd factorization". Also below.
>
> For unweighted I usually prefer prime factorization. I'm not
> trying to outdo harmonic entropy here in terms of a dissonance
> measure, I'm just trying to get a reasonable notion of lattice
> distance.

I think Hahn gives a reasonable notion of lattice distance, and it's
based on odd factorization and is unweighted. Replacing odd with
prime just seems to reduce the reasonableness of the lattice
distances.

> >> Weighted triangular is trickier. I believe the formula is
> //
> >> This gives Paul's "taxicab isosceles" measure from the link above.
> >> Kees shows it on the lattice at
> >> http://kees.cc/tuning/lat_perbl.html
> >> and it doesn't look spherical, but that's possibly because he's
> >> still got the axes at 90deg.
> >
> >It's the second lattice on that page. I don't see any 90 degree
> >angles there!
>
> I don't see the unit shell there either.

It's shown on the fourth graphic on that page.

> >> So I take it back that expressibility is rectangluar. In fact
> >> I can't see how it has anything to do with lattice distance, unless
> >> we have a limit-infinity lattice. I made this criticism in 99, I
> >> think, when it first came up.
> >
> >And it took me years to address it, but I eventually did. If you give
> >up both taxicab and Euclidean metrics, and use a hexagonal norm
> >instead, then Kees expressibility can be made exactly identical to
> >lattice distance, no matter how few or many dimensions (primes minus
> >1) there are. This norm is a regular hexagon if you use the lengths
> >and angles in the second-to-last lattice on that page
> >(http://kees.cc/tuning/lat_perbl.html).
>
> Is the motivation for this that odd limit seems to be such a good
> measure of octave-equivalent dissonance?

Partly. I wonder what Kees has to say?

🔗Carl Lumma <ekin@lumma.org>

10/26/2005 6:42:51 PM

>> > Recall that expressibility is not a taxicab metric but is a
>> > hexagonal-norm metric. So there is no paradox (contra Kees);
>> > 5/4 and 5/3 lie on the same regular hexagon centered on 1/1,
>> > and thus they get the same "distance" according to the
>> > regular-hexagon metric.
>>
>> That's what I'd forgotten. Why is hexagon better than taxicab?
>
>It's not "better", it's just what works here.

Can you describe the desiderata making up "here"?

-Carl

🔗Carl Lumma <ekin@lumma.org>

10/26/2005 6:50:29 PM

>> >> I would call a taxicab distance measure weighted/unweighted,
>> >> rectangular/triangular if there is a sphere (centered on a
>> >> lattice point with an arbitrary radius) that encloses
>> >> all of the lattice points of taxicab distance <= x and no or
>> >> few other lattice points.
//
>> >You gave only one if clause above; you didn't explain how
>> >one "gets" or doesn't "get" rectangular.
>>
>> If such a sphere exists on an un/weighted tri/rectangular lattice
>> the distance measure is called un/weighted tri/rectangular. Makes
>> sense to me.
>
>It seems circular to me. Start with the weighting part. A weighted
>taxicab measure means to me that some rungs are longer than others.
>So the sphere will naturally accommodate fewer rungs in the longer
>direction, which seems to mean you "get" weighted according to the
>above. But if you started with unweighted, the lengths would be
>equal, so you'd "get" unweighted of course . . . (?)

You're right, it doesn't work for weighting. But for tri/rect,
I still think it does.

>> For unweighted I usually prefer prime factorization. I'm not
>> trying to outdo harmonic entropy here in terms of a dissonance
>> measure, I'm just trying to get a reasonable notion of lattice
>> distance.
>
>I think Hahn gives a reasonable notion of lattice distance, and it's
>based on odd factorization and is unweighted. Replacing odd with
>prime just seems to reduce the reasonableness of the lattice
>distances.

This doesn't work for arbitrary commas with no a priori harmonic
limit, since everything would just be 1.

>> >> Weighted triangular is trickier. I believe the formula is
>> //
>> >> This gives Paul's "taxicab isosceles" measure from the link
>> >> above.
>> >> Kees shows it on the lattice at
>> >> http://kees.cc/tuning/lat_perbl.html
>> >> and it doesn't look spherical, but that's possibly because he's
>> >> still got the axes at 90deg.
>> >
>> >It's the second lattice on that page. I don't see any 90 degree
>> >angles there!
>>
>> I don't see the unit shell there either.
>
>It's shown on the fourth graphic on that page.

Looks like the axes are at 90 degrees to me!

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

10/27/2005 1:36:17 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> > Recall that expressibility is not a taxicab metric but is a
> >> > hexagonal-norm metric. So there is no paradox (contra Kees);
> >> > 5/4 and 5/3 lie on the same regular hexagon centered on 1/1,
> >> > and thus they get the same "distance" according to the
> >> > regular-hexagon metric.
> >>
> >> That's what I'd forgotten. Why is hexagon better than taxicab?
> >
> >It's not "better", it's just what works here.
>
> Can you describe the desiderata making up "here"?

"Distance" = Kees expressibility.

🔗Paul Erlich <perlich@aya.yale.edu>

10/27/2005 1:44:41 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> I would call a taxicab distance measure weighted/unweighted,
> >> >> rectangular/triangular if there is a sphere (centered on a
> >> >> lattice point with an arbitrary radius) that encloses
> >> >> all of the lattice points of taxicab distance <= x and no or
> >> >> few other lattice points.
> //
> >> >You gave only one if clause above; you didn't explain how
> >> >one "gets" or doesn't "get" rectangular.
> >>
> >> If such a sphere exists on an un/weighted tri/rectangular lattice
> >> the distance measure is called un/weighted tri/rectangular. Makes
> >> sense to me.
> >
> >It seems circular to me. Start with the weighting part. A weighted
> >taxicab measure means to me that some rungs are longer than others.
> >So the sphere will naturally accommodate fewer rungs in the longer
> >direction, which seems to mean you "get" weighted according to the
> >above. But if you started with unweighted, the lengths would be
> >equal, so you'd "get" unweighted of course . . . (?)
>
> You're right, it doesn't work for weighting. But for tri/rect,
> I still think it does.

The same argument applies. With the triangular lattice, you
have "shortcuts" the taxicab can take, and the sphere best
approximates the hexagonal balls when the geometry is actually
triangular. For rectangular, the same argument leads to a rectangular
geometry.

> >> For unweighted I usually prefer prime factorization. I'm not
> >> trying to outdo harmonic entropy here in terms of a dissonance
> >> measure, I'm just trying to get a reasonable notion of lattice
> >> distance.
> >
> >I think Hahn gives a reasonable notion of lattice distance, and it's
> >based on odd factorization and is unweighted. Replacing odd with
> >prime just seems to reduce the reasonableness of the lattice
> >distances.
>
> This doesn't work for arbitrary commas with no a priori harmonic
> limit, since everything would just be 1.

Huh? How did commas get into this? Everything would just be 1?? I
must be on crack . . .

> >> >> Weighted triangular is trickier. I believe the formula is
> >> //
> >> >> This gives Paul's "taxicab isosceles" measure from the link
> >> >> above.
> >> >> Kees shows it on the lattice at
> >> >> http://kees.cc/tuning/lat_perbl.html
> >> >> and it doesn't look spherical, but that's possibly because he's
> >> >> still got the axes at 90deg.
> >> >
> >> >It's the second lattice on that page. I don't see any 90 degree
> >> >angles there!
> >>
> >> I don't see the unit shell there either.
> >
> >It's shown on the fourth graphic on that page.
>
> Looks like the axes are at 90 degrees to me!

Ha ha. Kees superimposed a horizontal and vertical axis on
the "shell" diagram. So what? That doesn't mean that either the shell
or the corresponding lattice have any 90 degree angles in them!

:)

🔗Carl Lumma <ekin@lumma.org>

10/28/2005 1:51:31 AM

>> >> > Recall that expressibility is not a taxicab metric but is a
>> >> > hexagonal-norm metric. So there is no paradox (contra Kees);
>> >> > 5/4 and 5/3 lie on the same regular hexagon centered on 1/1,
>> >> > and thus they get the same "distance" according to the
>> >> > regular-hexagon metric.
>> >>
>> >> That's what I'd forgotten. Why is hexagon better than taxicab?
>> >
>> >It's not "better", it's just what works here.
>>
>> Can you describe the desiderata making up "here"?
>
>"Distance" = Kees expressibility.

If the only reason for using hexagons is that they correspond to
the measure you like, it hardly justifies liking the measure.
But I think you've already said elsewhere in this thread that that
was based on psychoacoustics. Anyway, now may be an excellent time
for me to finally review that long IM we had about hexagonal
contours in projective tuningspace (or whatever)....

-Carl

🔗Carl Lumma <ekin@lumma.org>

10/28/2005 2:03:05 AM

>> >> >> I would call a taxicab distance measure weighted/unweighted,
>> >> >> rectangular/triangular if there is a sphere (centered on a
>> >> >> lattice point with an arbitrary radius) that encloses
>> >> >> all of the lattice points of taxicab distance <= x and no or
>> >> >> few other lattice points.
//
>> You're right, it doesn't work for weighting. But for tri/rect,
>> I still think it does.
>
>With the triangular lattice, you have "shortcuts" the taxicab can
>take, and the sphere best approximates the hexagonal balls

And isn't unweighted triangular (Hahn diameter) the hexagonal
ball measure?

Given the agreement Gene's found between Euclidean distance and
unweighted Hahn diameter, I tend to think I'm right here.

>when the geometry is actually triangular. For rectangular, the
>same argument leads to a rectangular geometry.

Nah, there are better sphere approximations on the rect. lattice
than rectangles.

>> >> For unweighted I usually prefer prime factorization.
//
>> >I think Hahn gives a reasonable notion of lattice distance,
>> >and it's based on odd factorization and is unweighted.
//
>> This doesn't work for arbitrary commas with no a priori harmonic
>> limit, since everything would just be 1.
>
>Huh? How did commas get into this? Everything would just be 1?? I
>must be on crack . . .

I just realized the subject line is different, but I thought I
explained myself in the "the thing to do" message, and (after
the original message of this subject) in the "exploring badness"
thread.

Everything would be 1 because we assume the entire comma is
intended to be consonant and (since we're not weighting) assign
it length 1.

>> >> >> Weighted triangular is trickier. I believe the formula is
>> >> //
>> >> >> This gives Paul's "taxicab isosceles" measure from the link
>> >> >> above.
>> >> >> Kees shows it on the lattice at
>> >> >> http://kees.cc/tuning/lat_perbl.html
>> >> >> and it doesn't look spherical, but that's possibly because he's
>> >> >> still got the axes at 90deg.
>> >> >
>> >> >It's the second lattice on that page. I don't see any 90 degree
>> >> >angles there!
>> >>
>> >> I don't see the unit shell there either.
>> >
>> >It's shown on the fourth graphic on that page.
>>
>> Looks like the axes are at 90 degrees to me!
>
>Ha ha. Kees superimposed a horizontal and vertical axis on
>the "shell" diagram. So what? That doesn't mean that either the shell
>or the corresponding lattice have any 90 degree angles in them!
>
>:)

Well, it sure fooled me, since that's how he draws the axes on
every other lattice on his site.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

10/28/2005 12:22:08 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> And isn't unweighted triangular (Hahn diameter) the hexagonal
> ball measure?
>
> Given the agreement Gene's found between Euclidean distance and
> unweighted Hahn diameter, I tend to think I'm right here.

The big difference is between weighted and unweighted. If for monzos
we define the unweighted p-distance to be

|| |a1 ... an> ||_p = ((|a1|^p + ... + |an|^p + |a1+...+an|^p)/2)^(1/p)

then the unweighted 2-distance is symmetrical Euclidean, and the
1-distance is the unweighted version of Kees distance. The
infinity-distance isn't Hahn distance, but it is related and in many
cases will give the same answer.

> >Ha ha. Kees superimposed a horizontal and vertical axis on
> >the "shell" diagram. So what? That doesn't mean that either the shell
> >or the corresponding lattice have any 90 degree angles in them!

Given that the lattices are not Euclidean, they can't have any 90
degree angles. Angles don't exist for Kees, Tenney, Hahn etc. lattices.
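
(Gene's p-distance in Python, with the ^p restored on the final term as
in his later message:)

import math

def p_dist(monzo, p):
    # unweighted p-distance over the octave-equivalent exponents |a1 ... an>
    terms = [abs(x)**p for x in monzo] + [abs(sum(monzo))**p]
    return (sum(terms) / 2)**(1 / p)

# 81/80 with the 2s removed is (4, -1) over the primes (3, 5):
print(p_dist((4, -1), 1))  # (4 + 1 + 3)/2 = 4.0, unweighted Kees-style
print(p_dist((4, -1), 2))  # sqrt((16 + 1 + 9)/2) = sqrt(13), Euclidean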

🔗Carl Lumma <ekin@lumma.org>

10/28/2005 1:30:12 PM

>> And isn't unweighted triangular (Hahn diameter) the hexagonal
>> ball measure?
>>
>> Given the agreement Gene's found between Euclidean distance and
>> unweighted Hahn diameter, I tend to think I'm right here.
>
>The big difference is between weighted and unweighted. If for monzos
>we define the unweighted p-distance to be
>
>|| |a1 ... an> ||_p = ((|a1|^p + ... + |an|^p + |a1+...+an|^p)/2)^(1/p)
>
>then the unweighted 2-distance is symmetrical Euclidean, and the
>1-distance is the unweighted version of Kees distance. The
>infinity-distance isn't Hahn distance, but it is related and in many
>cases will give the same answer.

Neat formula.
Just for my records, you've previously given Hahn distance as:

||(a,b,c)||_Hahn = max(|a|,|b|,|c|,|b+c|,|a+c|,|a+b|,|a+b+c|)

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

10/28/2005 6:35:32 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> Neat formula.
> Just for my records, you've previously given Hahn distance as:
>
> ||(a,b,c)||_Hahn = max(|a|,|b|,|c|,|b+c|,|a+c|,|a+b|,|a+b+c|)

Right, whereas the infinity-distance would be

max(|a|,|b|,|c|,|a+b+c|)

🔗Gene Ward Smith <gwsmith@svpal.org>

10/28/2005 10:50:21 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
>
> --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> > Neat formula.
> > Just for my records, you've previously given Hahn distance as:
> >
> > ||(a,b,c)||_Hahn = max(|a|,|b|,|c|,|b+c|,|a+c|,|a+b|,|a+b+c|)
>
> Right, whereas the infinity-distance would be
>
> max(|a|,|b|,|c|,|a+b+c|)

However, be it noted that we can define not only

||(a,b,c)||_p = ((|a|^p+|b|^p+|c|^p+|a+b+c|^p)/2)^(1/p)

but also

||(a,b,c)||_q = ((|a|^q+|b|^q+|c|^q+|a+b|^q+|b+c|^q+|c+a|^q+
|a+b+c|^q)/4)^(1/q)

It happens that *both* p=2 and q=2 give the symmetric Euclidean norm,
and q=infinity gives the Hahn norm.
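
(Both norms, plus the Hahn norm as Carl recorded it above, as a Python
sketch; the checks confirm that p=2 and q=2 agree, and that large q
approaches the Hahn norm:)

import math

def p_norm(v, p):
    a, b, c = v
    return (sum(abs(t)**p for t in (a, b, c, a + b + c)) / 2)**(1 / p)

def q_norm(v, q):
    a, b, c = v
    terms = (a, b, c, a + b, b + c, c + a, a + b + c)
    return (sum(abs(t)**q for t in terms) / 4)**(1 / q)

def hahn(v):
    a, b, c = v
    return max(abs(t) for t in (a, b, c, b + c, a + c, a + b, a + b + c))

v = (2, -3, 1)
print(p_norm(v, 2), q_norm(v, 2))  # both ~2.6458, symmetric Euclidean
print(q_norm(v, 64), hahn(v))      # ~2.97 vs 3: q -> infinity gives Hahn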

🔗Carl Lumma <ekin@lumma.org>

10/29/2005 1:42:20 AM

>If for monzos we define the unweighted p-distance to be
>
>|| |a1 ... an> ||_p = ((|a1|^p + ... + |an|^p + |a1+...+an|^p)/2)^(1/p)
>
>then the unweighted 2-distance is symmetrical Euclidean, and the
>1-distance is the unweighted version of Kees distance. The
>infinity-distance isn't Hahn distance, but it is related and in many
>cases will give the same answer.
//
>||(a,b,c)||_q = ((|a|^q+|b|^q+|c|^q+|a+b|^q+|b+c|^q+|c+a|^q+
>|a+b+c|^q)/4)^(1/q)
>
>It happens that *both* p=2 and q=2 give the symmetric Euclidean norm,
>and q=infinity gives the Hahn norm.

It looks like q=1 will still give unweighted Kees, so ||_q looks
better. Aside from that it's harder to compute when the monzo
gets longer, is there any reason to use ||_p?

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

10/31/2005 5:46:08 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> > Recall that expressibility is not a taxicab metric but is a
> >> >> > hexagonal-norm metric. So there is no paradox (contra Kees);
> >> >> > 5/4 and 5/3 lie on the same regular hexagon centered on 1/1,
> >> >> > and thus they get the same "distance" according to the
> >> >> > regular-hexagon metric.
> >> >>
> >> >> That's what I'd forgotten. Why is hexagon better than taxicab?
> >> >
> >> >It's not "better", it's just what works here.
> >>
> >> Can you describe the desiderata making up "here"?
> >
> >"Distance" = Kees expressibility.
>
> If the only reason for using hexagons is that they correspond to
> the measure you like, it hardly justifies liking the measure.

Of course! I liked the measure long before I found the hexagons.

> But I think you've already said elsewhere in this thread that that
> was based on psychoacoustics. Anyway, now may be an excellent time
> for me to finally review that long IM we had about hexagonal
> contours in projective tuningspace (or whatever)....

That's related to the dual picture to this one . . .

🔗Paul Erlich <perlich@aya.yale.edu>

10/31/2005 6:23:02 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >> I would call a taxicab distance measure weighted/unweighted,
> >> >> >> rectangular/triangular if there is a sphere (centered on a
> >> >> >> lattice point with an arbitrary radius) that encloses
> >> >> >> all of the lattice points of taxicab distance <= x and no or
> >> >> >> few other lattice points.
> //
> >> You're right, it doesn't work for weighting. But for tri/rect,
> >> I still think it does.
> >
> >With the triangular lattice, you have "shortcuts" the taxicab can
> >take, and the sphere best approximates the hexagonal balls
>
> And isn't unweighted triangular (Hahn diameter) the hexagonal
> ball measure?

In the very lowest limits, it can be, depending on what shape
hexagons you mean and how you're constructing the lattice.

> Given the agreement Gene's found between Euclidean distance and
> unweighted Hahn diameter, I tend to think I'm right here.

Huh? I have no idea how this fits in, or how it contradicts what I
said. Your statement above didn't bring in the ratios on the lattice at
all, so something like Hahn seems a separate consideration.

>> >when the geometry is actually triangular. For rectangular, the
>> >same argument leads to a rectangular geometry.
>
>> Nah, there are better sphere approximations on the rect. lattice
>> than rectangles.

The relevant approximations would be close to regular octahedra in
the 3D case. If you want to get even closer to spheres, it seems you
have to throw taxicab out the window. But your original statement was
all about taxicab measures!

>> >> >> For unweighted I usually prefer prime factorization.
> //
>> >> >I think Hahn gives a reasonable notion of lattice distance,
>> >> >and it's based on odd factorization and is unweighted.
> //
>> >> This doesn't work for arbitrary commas with no a priori harmonic
>> >> limit, since everything would just be 1.
> >
>> >Huh? How did commas get into this? Everything would just be 1?? I
>> >must be on crack . . .
>
>> I just realized the subject line is different, but I thought I
>> explained myself in the "the thing to do" message, and (after
>> the original message of this subject) in the "exploring badness"
>> thread.
>
>> Everything would be 1 because we assume the entire comma is
>> intended to be consonant and (since we're not weighting) assign
>> it length 1.

I think you're trying to imply that everything is consonant when
there's no a priori harmonic limit. But "odd factorization" in the
sense required here makes no sense without one.

> >> >> >> Weighted triangular is trickier. I believe the formula is
> >> >> //
> >> >> >> This gives Paul's "taxicab isosceles" measure from the link
> >> >> >> above.
> >> >> >> Kees shows it on the lattice at
> >> >> >> http://kees.cc/tuning/lat_perbl.html
> >> >> >> and it doesn't look spherical, but that's possibly because he's
> >> >> >> still got the axes at 90deg.
> >> >> >
> >> >> >It's the second lattice on that page. I don't see any 90 degree
> >> >> >angles there!
> >> >>
> >> >> I don't see the unit shell there either.
> >> >
> >> >It's shown on the fourth graphic on that page.
> >>
> >> Looks like the axes are at 90 degrees to me!
> >
> >Ha ha. Kees superimposed a horizontal and vertical axis on
> >the "shell" diagram. So what? That doesn't mean that either the
shell
> >or the coresponding lattice have any 90 angles in them!
> >
> >:)
>
> Well, it sure fooled me,

Why? Kees clearly indicates which lattice each shell corresponds to.

> since that's how he draws the axes on
> every other lattice on his site.

What are you talking about? The 2nd and 3rd lattices on this page,
for example, clearly don't have any 90 degree angles in them; when he
draws the shells for these lattices, he superimposes a vertical and
horizontal line just for reference (since the transformation
matrices tell you how to switch the lattice between one set of x and
y axes and another).

🔗Paul Erlich <perlich@aya.yale.edu>

10/31/2005 6:32:28 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
>
> --- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> > And isn't unweighted triangular (Hahn diameter) the hexagonal
> > ball measure?
> >
> > Given the agreement Gene's found between Euclidean distance and
> > unweighted Hahn diameter, I tend to think I'm right here.
>
> The big difference is between weighted and unweighted. If for monzos
> we define the unweighted p-distance to be
>
> || |a1 ... an> ||_p = ((|a1|^p + ... + |an|^p + |a1+...+an|^p)/2)^(1/p)
>
> then the unweighted 2-distance is symmetrical Euclidean, and the
> 1-distance is the unweighted version of Kees distance. The
> infinity-distance isn't Hahn distance, but it is related and in many
> cases will give the same answer.
>
> > >Ha ha. Kees superimposed a horizontal and vertical axis on
> > >the "shell" diagram. So what? That doesn't mean that either the
> > >shell or the corresponding lattice have any 90-degree angles in them!
>
> Given that the lattices are not Euclidean, they can't have any
> 90-degree angles. Angles don't exist for Kees, Tenney, Hahn etc. lattices.

Kees depicted the very same lattice, and the very same shell, using
several different conventions of how to embed it in familiar 2D
space. For each of these, he displays the shell against a vertical
and horizontal axis at 90 degree angles. This gives the
transformation matrices he's talking about direct meaning. But the
vertical and horizontal axes don't have to signify anything in
particular for any of these conventions.

🔗Paul Erlich <perlich@aya.yale.edu>

10/31/2005 6:42:16 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >If for monzos we define the unweighted p-distance to be
> >
> >|| |a1 ... an> ||_p = ((|a1|^p + ... + |an|^p + |a1+...+an|^p)/2)^(1/p)
> >
> >then the unweighted 2-distance is symmetrical Euclidean, and the
> >1-distance is the unweighted version of Kees distance. The
> >infinity-distance isn't Hahn distance, but it is related and in
> >many cases will give the same answer.
> //
> >||(a,b,c)||_q = ((|a|^q+|b|^q+|c|^q+|a+b|^q+|b+c|^q+|c+a|^q+
> >|a+b+c|^q)/4)^(1/q)
> >
> >It happens that *both* p=2 and q=2 give the symmetric Euclidean
> >norm, and q=infinity gives the Hahn norm.
>
> It looks like q=1 will still give unweighted Kees,

What does that mean? In my view, the Kees lattice doesn't even have
rungs, so I don't know what it would mean for them all to be the same
length -- or, if you picked some and made them the same, why this
wouldn't be the same as some already-named thing. Unweighted Kees is
an oxymoron to me but what does it mean to you?

🔗Carl Lumma <ekin@lumma.org>

11/1/2005 2:34:35 AM

>> >> >> >> I would call a taxicab distance measure
>> >> >> >> rectangular/triangular if there is a sphere (centered on a
>> >> >> >> lattice point with an arbitrary radius) that encloses
>> >> >> >> all of the lattice points of taxicab distance <= x and no
>> >> >> >> or few other lattice points.
//
>> Given the agreement Gene's found between Euclidean distance and
>> unweighted Hahn diameter, I tend to think I'm right here.
>
>Huh? I have no idea how this fits in, or how it contradicts what I
>said. Your statement above didn't bring in the ratios on the lattice at
>all, so something like Hahn seems a separate consideration.

I'm not sure what "statement above" you're referring to. The one
at the top of this message seems to be the one in contention, and
mentions "lattice points" which are "ratios on the lattice".

Gene has reported that ||_Hahn and Euclidean norm shells differ by
few notes in a variety of harmonic limits, which is all I'm saying
at the top.

>>> >when the geometry is actually triangular. For rectangular, the
>>> >same argument leads to a rectangular geometry.
>>
>>> Nah, there are better sphere approximations on the rect. lattice
>>> than rectangles.
>
>The relevant approximations would be close to regular octahedra in
>the 3D case. If you want to get even closer to spheres, it seems you
>have to throw taxicab out the window. But your original statement was
>all about taxicab measures!

Howabout: The closest you can get with unit lengths to spheres on the
triangular lattice is to count ratios of consonances as a single
step. The closest you can get with unit lengths to spheres on the
rectangular lattice is to count ratios of consonances as two steps.

>>> >> >I think Hahn gives a reasonable notion of lattice distance,
>>> >> >and it's based on odd factorization and is unweighted.
>> //
>>> >> This doesn't work for arbitrary commas with no a priori harmonic
>>> >> limit, since everything would just be 1.
//
>>> Everything would be 1 because we assume the entire comma is
>>> intended to be consonant and (since we're not weighting) assign
>>> it length 1.
>
>I think you're trying to imply that everything is consonant when
>there's no a priori harmonic limit. But "odd factorization" in the
>sense required here makes no sense without one.

Exactly.

>> >> >> >> Weighted triangular is trickier. I believe the formula is
>> >> >> //
>> >> >> >> This gives Paul's "taxicab isosceles" measure from the link
>> >> >> >> above.
>> >> >> >> Kees shows it on the lattice at
>> >> >> >> http://kees.cc/tuning/lat_perbl.html
>> >> >> >> and it doesn't look spherical, but that's possibly because
>> >> >> >> he's still got the axes at 90deg.
>> >> >> >
>> >> >> >It's the second lattice on that page. I don't see any 90
>> >> >> >degree angles there!
>> >> >>
>> >> >> I don't see the unit shell there either.
>> >> >
>> >> >It's shown on the fourth graphic on that page.
>> >>
>> >> Looks like the axes are at 90 degrees to me!
>> >
>> >Ha ha. Kees superimposed a horizontal and vertical axis on
>> >the "shell" diagram. So what? That doesn't mean that either the
>> >shell or the corresponding lattice have any 90-degree angles in them!
>> >
>> >:)
>>
>> Well, it sure fooled me,
>
>Why? Kees clearly indicates which lattice each shell corresponds to.

Let's go back...

> >It seems circular to me. Start with the weighting part. A
> >weighted taxicab measure means to me that some rungs are longer
> >than others. So the sphere will naturally accomodate fewer
> >rungs in the longer direction, which seems to mean you "get"
> >weighted according to the above. But if you started with
> >unweighted, the lengths would be equal, so you'd "get" unweighted
> >of course . . . (?)

Faced with this, I took back that it worked for weighted. But it
does, and in fact now I read this to be agreeing with me.

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/1/2005 2:50:02 AM

>> >||(a,b,c)||_q = ((|a|^q+|b|^q+|c|^q+|a+b|^q+|b+c|^q+|c+a|^q+
>> >|a+b+c|^q)/4)^(1/q)
>> >
>> >It happens that *both* p=2 and q=2 give the symmetric Euclidean
>> >norm and q=infinity gives the Hahn norm.
>>
>> It looks like q=1 will still give unweighted Kees,
>
>What does that mean? In my view, the Kees lattice doesn't even have
>rungs, so I don't know what it would mean for them all to be the same
>length -- or, if you picked some and made them the same, why this
>wouldn't be the same as some already-named thing. Unweighted Kees is
>an oxymoron to me but what does it mean to you?

Gene defined it in that message.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/1/2005 12:58:19 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >> >> I would call a taxicab distance measure
> >> >> >> >> rectangular/triangular if there is a sphere (centered
> >> >> >> >> on a lattice point with an arbitrary radius) that
> >> >> >> >> encloses all of the lattice points of taxicab distance
> >> >> >> >> <= x and no or few other lattice points.
> //
> >> Given the agreement Gene's found between Euclidean distance and
> >> unweighted Hahn diameter, I tend to think I'm right here.
> >
> >Huh? I have no idea how this fits in, or how it contradicts what I
> >said. Your statement above didn't bring in the ratios on the lattice
> >at all, so something like Hahn seems a separate consideration.
>
> I'm not sure what "statement above" you're referring to. The one
> at the top of this message seems to be the one in contention, and
> mentions "lattice points" which are "ratios on the lattice".
>
> Gene has reported that ||_Hahn and Euclidean norm shells differ by
> few notes in a variety of harmonic limits, which is all I'm saying
> at the top.

It is? Can you clarify your point then?

>> >>> >when the geometry is actually triangular. For rectangular, the
>> >>> >same argument leads to a rectangular geometry.
> >>
>> >>> Nah, there are better sphere approximations on the rect.
>> >>> lattice than rectangles.
> >
>> >The relevant approximations would be close to regular octahedra
>> >in the 3D case. If you want to get even closer to spheres, it seems
>> >you have to throw taxicab out the window. But your original statement
>> >was all about taxicab measures!
>
>> Howabout: The closest you can get with unit lengths to spheres on
>> the triangular lattice is to count ratios of consonances as a single
>> step.

If the triangular lattice is equilateral and has (all of the)
consonances for rungs, then this gets pretty close, though of course
Euclidean gets closer.

> The closest you can get with unit lengths to spheres on the
> rectangular lattice is to count ratios of consonances as two steps.

Two steps? This makes no sense to me. Can you elaborate on your
thinking here?

> >>> >> >I think Hahn gives a reasonable notion of lattice distance,
> >>> >> >and it's based on odd factorization and is unweighted.
> >> //
> >>> >> This doesn't work for arbitrary commas with no a priori
> >>> >> harmonic limit, since everything would just be 1.
> //
> >>> Everything would be 1 because we assume the entire comma is
> >>> intended to be consonant and (since we're not weighting) assign
> >> it length 1.
> >
> >I think you're trying to imply that everything is consonant when
> >there's no a priori harmonic limit. But "odd factorization" in the
> >sense required here makes no sense without one.
>
> Exactly.

:-p

> >> >> >> >> Weighted triangular is trickier. I believe the formula is
> >> >> >> //
> >> >> >> >> This gives Paul's "taxicab isosceles" measure from the
> >> >> >> >> link above.
> >> >> >> >> Kees shows it on the lattice at
> >> >> >> >> http://kees.cc/tuning/lat_perbl.html
> >> >> >> >> and it doesn't look spherical, but that's possibly
> >> >> >> >> because he's still got the axes at 90deg.
> >> >> >> >
> >> >> >> >It's the second lattice on that page. I don't see any 90
> >> >> >> >degree angles there!
> >> >> >>
> >> >> >> I don't see the unit shell there either.
> >> >> >
> >> >> >It's shown on the fourth graphic on that page.
> >> >>
> >> >> Looks like the axes are at 90 degrees to me!
> >> >
> >> >Ha ha. Kees superimposed a horizontal and vertical axis on
> >> >the "shell" diagram. So what? That doesn't mean that either the
> >> >shell or the corresponding lattice have any 90-degree angles in them!
> >> >
> >> >:)
> >>
> >> Well, it sure fooled me,
> >
> >Why? Kees clearly indicates which lattice each shell corresponds to.
>
> Let's go back...
>
> > >It seems circular to me. Start with the weighting part. A
> > >weighted taxicab measure means to me that some rungs are longer
> > >than others. So the sphere will naturally accommodate fewer
> > >rungs in the longer direction, which seems to mean you "get"
> > >weighted according to the above. But if you started with
> > >unweighted, the lengths would be equal, so you'd "get" unweighted
> > >of course . . . (?)
>
> Faced with this, I took back that it worked for weighted. But it
> does, and in fact now I read this to be agreeing with me.

OK, maybe we need to rephrase what we're talking about so that it
means the same thing to both of us. I certainly believe in the
principle of embedding these lattices in Euclidean space such that
their balls come as close to being spherical as possible. This
dictates a rectangular geometry for octave-specific Tenney, an
equilateral triangular geometry for equal-weighted Hahn (breaks down
beyond 7-limit), and a geometry without rungs but conforming to the
third full lattice diagram on the Kees page we've been looking at for
expressibility or "odd limit" (really the smallest odd limit the
interval belongs to).

🔗Paul Erlich <perlich@aya.yale.edu>

11/1/2005 1:03:19 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >||(a,b,c)||_q = ((|a|^q+|b|^q+|c|^q+|a+b|^q+|b+c|^q+|c+a|^q+
> >> >|a+b+c|^q)/4)^(1/q)
> >> >
> >> >It happens that *both* p=2 and q=2 give the symmetric Euclidean
> >> >norm and q=infinity gives the Hahn norm.
> >>
> >> It looks like q=1 will still give unweighted Kees,
> >
> >What does that mean? In my view, the Kees lattice doesn't even
have
> >rungs, so I don't know what it would mean for them all to be the
same
> >length -- or, if you picked some and made them the same, why this
> >wouldn't be the same as some already-named thing. Unweighted Kees
is
> >an oxymoron to me but what does it mean to you?
>
> Gene defined it in that message.
>
> -Carl

Gene, how do you respond? I don't see how "unweighted Kees" could
possibly make any sense, so I look to you to fill me in.

🔗Gene Ward Smith <gwsmith@svpal.org>

11/1/2005 1:51:37 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> OK, maybe we need to rephrase what we're talking about so that it
> means the same thing to both of us. I certainly believe in the
> principle of embedding these lattices in Euclidean space such that
> their balls come as close to being spherical as possible.

Are you talking about introducing two norms, a Euclidean and a
non-Euclidean one?

🔗Gene Ward Smith <gwsmith@svpal.org>

11/1/2005 1:57:09 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> Gene, how do you respond? I don't see how "unweighted Kees" could
> possibly make any sense, so I look to you to fill me in.

We can define the "p-norm" on 7-limit monzo classes as

|| |* a b c> ||_p = ((|a|^p + |b|^p + |c|^p + |a+b+c|^p)/2)^(1/p)

Then || ||_2 is the symmetrical Euclidean norm, and || ||_1 is the
unweighted Kees norm. The reason for calling it that is that if
a, b and c are weighted by log2(p) factors, then what you get is in
fact the Kees norm. Hence, the Kees norm can be described as a
weighted p-norm.
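
Editorial sketch (mine, not from the thread): this claim is easy to
spot-check numerically. The Python below compares the log2-weighted
1-norm against expressibility computed directly as log2 of the larger
odd part of the ratio, on a few 7-limit intervals of my choosing.

from math import log2

W3, W5, W7 = log2(3), log2(5), log2(7)

def weighted_1norm(a, b, c):
    # Gene's 1-norm with the coordinates weighted by log2 of each prime
    s = a*W3 + b*W5 + c*W7
    return (abs(a)*W3 + abs(b)*W5 + abs(c)*W7 + abs(s)) / 2

def expressibility(n, d):
    # log2 of the larger odd part of numerator and denominator
    while n % 2 == 0: n //= 2
    while d % 2 == 0: d //= 2
    return log2(max(n, d))

# monzo classes |* a b c> paired with representative ratios
samples = {(1, 0, 0): (3, 2), (-1, 1, 0): (5, 3), (2, -1, 0): (9, 5),
           (1, 1, 0): (15, 8), (0, 0, 1): (7, 4), (-1, 0, 1): (7, 6)}
for (a, b, c), (n, d) in samples.items():
    assert abs(weighted_1norm(a, b, c) - expressibility(n, d)) < 1e-9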

🔗Paul Erlich <perlich@aya.yale.edu>

11/1/2005 2:46:30 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
>
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
>
> > OK, maybe we need to rephrase what we're talking about so that it
> > means the same thing to both of us. I certainly believe in the
> > principle of embedding these lattices in Euclidean space such
> > that their balls come as close to being spherical as possible.
>
> Are you talking about introducing two norms, a Euclidean and a
> non-Euclidean one?

Not really, I'm just talking about visually depicting these lattices
in such a way that the norms we want to use with them come out
looking as spherical as possible.

I see no reason we can't understand them as living in Euclidean space
and then use various custom notions of "distance" to represent the
complexity measures we care about; for example, the third lattice on
the Kees page we keep looking at can be associated with a little
vehicle that is constrained to be able to move in only the
six "cardinal" directions at 60 degree increments from one another;
the distance that this vehicle has to travel to get from one lattice
point to another corresponds to Kees expressibility (due to the ball
becoming a regular hexagon in this depiction).

In the Tenney case, there's less motivation to do so, since the
taxicab distance is so straightforward and remains the same
regardless of what the angles are. To give the sense of this, I
suggested imagining a Tenney lattice where the angle between the axes
is constantly changing; the quantities we care about are invariants
in this world.

🔗Paul Erlich <perlich@aya.yale.edu>

11/1/2005 2:48:59 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
>
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
>
> > Gene, how do you respond? I don't see how "unweighted Kees" could
> > possibly make any sense, so I look to you to fill me in.
>
> We can define the "p-norm" on 7-limit monzo classes as
>
> || |* a b c> ||_p = ((|a|^p + |b|^p + |c|^p + |a+b+c|^p)/2)^(1/p)
>
> Then || ||_2 is the symmetrical Euclidean norm, and || ||_1 is the
> unweighted Kees norm. The reason for calling it that is that if
> a, b and c are weighted by log2(p) factors, then what you get is in
> fact the Kees norm. Hence, the Kees norm can be described as a
> weighted p-norm.

But don't we have another name for this (what you're
calling "unweighted Kees norm") already?

🔗Gene Ward Smith <gwsmith@svpal.org>

11/1/2005 4:59:19 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> I see no reason we can't understand them as living in Euclidean space
> and then use various custom notions of "distance" to represent the
> complexity measures we care about...

You can, but then technically you don't get a Tenney, Hahn etc.
lattice. There are advantages at times to having these be honest
lattices, but it isn't too helpful if you want to picture them.

🔗Gene Ward Smith <gwsmith@svpal.org>

11/1/2005 5:05:08 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> > We can define the "p-norm" on 7-limit monzo classes as
> >
> > || |* a b c> ||_p = ((|a|^p + |b|^p + |c|^p + |a+b+c|^p)/2)^(1/p)

> But don't we have another name for this (what you're
> calling "unweighted Kees norm") already?

None that I know of. A better name for this p-norm, and also
the "q-norm"

|| |* a b c> ||_q = ((|a|^q + |b|^q + |c|^q +
|a+b|^q + |b+c|^q + |c+a|^q + |a+b+c|^q)/4)^(1/q)

would be a good start.
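
Editorial sketch (mine, not from the thread): the two norms Gene has
just written down, rendered in Python. The infinity cases take the max
of the terms directly, which is the limit of the formulas as the
exponent grows.

from math import inf

def p_norm(a, b, c, p):
    terms = [abs(a), abs(b), abs(c), abs(a + b + c)]
    if p == inf:
        return max(terms)
    return (sum(t**p for t in terms) / 2) ** (1 / p)

def q_norm(a, b, c, q):
    terms = [abs(a), abs(b), abs(c),
             abs(a + b), abs(b + c), abs(c + a), abs(a + b + c)]
    if q == inf:
        return max(terms)  # the Hahn norm
    return (sum(t**q for t in terms) / 4) ** (1 / q)

m = (2, -1, 1)
print(p_norm(*m, 1))                 # the unweighted Kees norm
print(p_norm(*m, 2), q_norm(*m, 2))  # both: symmetric Euclidean
print(q_norm(*m, inf))               # the Hahn norm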

🔗Carl Lumma <ekin@lumma.org>

11/1/2005 5:14:13 PM

>> >> >> >> >> I would call a taxicab distance measure
>> >> >> >> >> rectangular/triangular if there is a sphere (centered
>> >> >> >> >> on a lattice point with an arbitrary radius) that
>> >> >> >> >> encloses all of the lattice points of taxicab
>> >> >> >> >> distance <= x and no or few other lattice points.
//
>> Gene has reported that ||_Hahn and Euclidean norm shells differ by
>> few notes in a variety of harmonic limits, which is all I'm saying
>> at the top.
>
>It is?

Well, that's the triangular case.

>Can you clarify your point then?

I thought you'd asked what I meant by triangular and rectangular.

>>> >>> Nah, there are better sphere approximations on the rect.
>>> >>> lattice than rectangles.
>> >
>>> >The relevant approximations would be close to regular octahedra
>>> >in the 3D case.

Yes, I think that's right.

>>> If you want to get even closer to spheres, it seems you have
>>> to throw taxicab out the window. But your original statement
>>> was all about taxicab measures!

Where's the problem? I'm defining "triangular" to be 'as close
as you can get to Euclidean with taxicab on a triangular lattice'.

>>> Howabout: The closest you can get with unit lengths to spheres on
>>> the triangular lattice is to count ratios of consonances as a
>>> single step.
>
>If the triangular lattice is equilateral and has (all of the)
>consonances for rungs, then this gets pretty close, though of course
>Euclidean gets closer.

Euclidean *is* "spheres".

>> The closest you can get with unit lengths to spheres on the
>> rectangular lattice is to count ratios of consonances as two steps.
>
>Two steps? This makes no sense to me. Can you elaborate on your
>thinking here?

If you count them (ie 5:3) as two steps on the rectangular lattice,
it looks like the nth shell of that agrees with the Euclidean shell
of radius n. Though it does look like you can count them as one
step and make it agree with the Euclidean shell of radius n*sqrt(2).

>> Let's go back...
>>
>> > >It seems circular to me. Start with the weighting part. A
>> > >weighted taxicab measure means to me that some rungs are longer
>> > >than others. So the sphere will naturally accommodate fewer
>> > >rungs in the longer direction, which seems to mean you "get"
>> > >weighted according to the above. But if you started with
>> > >unweighted, the lengths would be equal, so you'd "get" unweighted
>> > >of course . . . (?)
>>
>> Faced with this, I took back that it worked for weighted. But it
>> does, and in fact now I read this to be agreeing with me.
>
>OK, maybe we need to rephrase what we're talking about so that it
>means the same thing to both of us. I certainly believe in the
>principle of embedding these lattices in Euclidean space such that
>their balls come as close to being spherical as possible. This
>dictates a rectangular geometry for octave-specific Tenney, an
>equilateral triangular geometry for equal-weighted Hahn

Now we're in business. But doesn't it dictate a rectangular
geometry for any measure where rungs represent factors (not ratios),
and a triangular geometry where the reverse is true? (And have
nothing to do with octaves or the particular weighting?)

>(breaks down beyond 7-limit),

Yes, I remember something about this. Something about A3 = D3...
but I can't find anything more on this at mathworld or google.

>and a geometry without rungs but conforming to the third full
>lattice diagram on the Kees page we've been looking at for
>expressibility or "odd limit" (really the smallest odd limit the
>interval belongs to).

With hexagonal contours, gotcha. Does this depend on limit?

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/3/2005 12:29:38 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
>
> --- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...>
wrote:
>
> > > We can define the "p-norm" on 7-limit monzo classes as
> > >
> > > || |* a b c> ||_p = ((|a|^p + |b|^p + |c|^p + |a+b+c|^p)/2)^(1/p)
>
> > But don't we have another name for this (what you're
> > calling "unweighted Kees norm") already?
>
> None that I know of.

So this isn't either L_1 or L_inf in the symmetrical (equilateral
triangular) lattice? How do they differ?

🔗Paul Erlich <perlich@aya.yale.edu>

11/3/2005 12:54:12 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

I think we're good up to here . . .

> >> The closest you can get with unit lengths to spheres on the
> >> rectangular lattice is to count ratios of consonances as two
> >> steps.
> >
> >Two steps? This makes no sense to me. Can you elaborate on your
> >thinking here?
>
> If you count them (ie 5:3) as two steps on the rectangular lattice,

This can be a ratio of consonances but so can, say, 9:5, which is the
ratio between 5/6 and 3/2; and so can, say, 5:4, which is one step
(right?) . . .

> it looks like the nth shell of that agrees with the Euclidean shell
> of radius n.

You're saying you can get a perfect sphere? Wow. I'd love to see this
demonstrated in more detail.

> Though it does look like you can count them as one
> step and make it agree with the Euclidean shell of radius n*sqrt(2).

Again, I'd love a demonstration. That would blow my mind.

> >OK, maybe we need to rephrase what we're talking about so that it
> >means the same thing to both of us. I certainly believe in the
> >principle of embedding these lattices in Euclidean space such that
> >their balls come as close to being spherical as possible. This
> >dictates a rectangular geometry for octave-specific Tenney, an
> >equilateral triangular geometry for equal-weighted Hahn
>
> Now we're in business.

Whew!

> But doesn't it

What's "it"?

> dictate a rectangular
> geometry for any measure where rungs represent factors (not ratios),
> and a triangular geometry where the reverse is true?

The Kees lattice, for one, is best understood with no rungs
whatsoever, so you can't *begin* by asking what the rungs represent
for a given measure. Since the other measures can be also be
understood in terms of a ball of particular shape rather that in
terms of taxicab distance, I could say the same for all of them. We
can do away with rungs altogether and still come up with the
configurations of ratios that best "spherify" the balls; that's what
I had in mind above.

> (And have
> nothing to do with octaves or the particular weighting?)

My statement above to which you replied "Now we're in business" has
everything to do with octaves and the particular weightings.

> >(breaks down beyond 7-limit),
>
> Yes, I remember something about this. Something about A3 = D3...
> but I can't find anything more on this at mathworld or google.

Huh? I don't know if we've jumped the tracks here. Beyond the 7-
limit, no Euclidean lattice can represent each of the consonances,
and only the consonances, by an equal and minimal length. Think about
ratios of 9 . . .

> >and a geometry without rungs but conforming to the third full
> >lattice diagram on the Kees page we've been looking at for
> >expressibility or "odd limit" (really the smallest odd limit the
> >interval belongs to).
>
> With hexagonal contours, gotcha. Does this depend on limit?

For higher prime limits, you need more dimensions, and the hexagon
becomes a rhombic dodecahedron and then its higher-dimensional
analogues . . . but if you can accept higher-dimensional Euclidean
spaces, this doesn't break down beyond the 7-limit the way the Hahn
construction does . . .

🔗Carl Lumma <ekin@lumma.org>

11/3/2005 2:44:27 PM

>> >> The closest you can get with unit lengths to spheres on the
>> >> rectangular lattice is to count ratios of consonances as two
>> >> steps.
>> >
>> >Two steps? This makes no sense to me. Can you elaborate on your
>> >thinking here?
>>
>> If you count them (ie 5:3) as two steps on the rectangular lattice,
>
>This can be a ratio of consonances but so can, say, 9:5, which is the
>ratio between 5/6 and 3/2; and so can, say, 5:4, which is one step
>(right?) . . .

Depends on what's consonant. In an octave-specific formulation,
5:4 is three steps... Howabout if we swap "is to count fractional
factors (as opposed to integer factors) as two steps" in the above.

>> it looks like the nth shell of that agrees with the Euclidean shell
>> of radius n.
>
>You're saying you can get a perfect sphere? Wow. I'd love to see this
>demonstrated in more detail.
>
>> Though it does look like you can count them as one
>> step and make it agree with the Euclidean shell of radius n*sqrt(2).
>
>Again, I'd love a demonstration. That would blow my mind.

We're obviously talking about different things. I'm superimposing
a sphere on a lattice (a Euclidean object centered on a lattice point)
and then asking what lattice points it encloses. Yes, at least in
2-D, n-taxicab on a rect. lattice agrees perfectly with radius-n
Euclidean (I assume this works in higher-D since I'm under the
impression that 90deg axes are always orthogonal). I originally
said no such spheres existed for taxicab of fractional-factor
taxicab on a rectangular lattice. However, n*sqrt(2) spheres
apparently do the trick.

>> >OK, maybe we need to rephrase what we're talking about so that it
>> >means the same thing to both of us. I certainly believe in the
>> >principle of embedding these lattices in Euclidean space such that
>> >their balls come as close to being spherical as possible. This
>> >dictates a rectangular geometry for octave-specific Tenney, an
>> >equilateral triangular geometry for equal-weighted Hahn
>>
>> Now we're in business.
>
>Whew!
>
>> But doesn't it
>
>What's "it"?

The business we're now in. :)

>> dictate a rectangular
>> geometry for any measure where rungs represent factors (not ratios),
>> and a triangular geometry where the reverse is true?
>
>The Kees lattice, for one, is best understood with no rungs
>whatsoever, so you can't *begin* by asking what the rungs represent
>for a given measure.

Sure. Let's not worry about Kees for now.

>Since the other measures can also be understood in terms of a
>ball of particular shape rather than in terms of taxicab distance,
>I could say the same for all of them. We can do away with rungs
>altogether and still come up with the configurations of ratios
>that best "spherify" the balls; that's what I had in mind above.

Great. I'd love to see more of this.

>> (And have
>> nothing to do with octaves or the particular weighting?)
>
>My statement above to which you replied "Now we're in business" has
>everything to do with octaves and the particular weightings.

How could this possibly be so?

>> >(breaks down beyond 7-limit),
>>
>> Yes, I remember something about this. Something about A3 = D3...
>> but I can't find anything more on this at mathworld or google.
>
>Huh? I don't know if we've jumped the tracks here. Beyond the 7-
>limit, no Euclidean lattice

I don't know what a Euclidean lattice is.

>can represent each of the consonances,
>and only the consonances, by an equal and minimal length. Think about
>ratios of 9 . . .

All of the stuff in my "exploring badness" msg. and since has assumed
prime limits, btw (aside from log-weighted versions, which come out
the same). Have you been thinking odd limits?

>> >and a geometry without rungs but conforming to the third full
>> >lattice diagram on the Kees page we've been looking at for
>> >expressibility or "odd limit" (really the smallest odd limit the
>> >interval belongs to).
>>
>> With hexagonal contours, gotcha. Does this depend on limit?
>
>For higher prime limits, you need more dimensions, and the hexagon
>becomes a rhombic dodecahedron and then its higher-dimensional
>analogues . . . but if you can accept higher-dimensional Euclidean
>spaces, this doesn't break down beyond the 7-limit the way the Hahn
>construction does . . .

Huh.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/3/2005 3:20:08 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> The closest you can get with unit lengths to spheres on the
> >> >> >> rectangular lattice is to count ratios of consonances as two
> >> >> steps.
> >> >
> >> >Two steps? This makes no sense to me. Can you elaborate on your
> >> >thinking here?
> >>
> >> If you count them (ie 5:3) as two steps on the rectangular
> >> lattice,
> >
> >This can be a ratio of consonances but so can, say, 9:5, which is
> >the ratio between 5/6 and 3/2; and so can, say, 5:4, which is one step
> >(right?) . . .
>
> Depends on what's consonant. In an octave-specific formulation,
> 5:4 is three steps... Howabout if we swap "is to count fractional
> factors (as opposed to integer factors) as two steps" in the above.

I don't get it -- how do you define fractional factors, and what are
we swapping this for?

> >> it looks like the nth shell of that agrees with the Euclidean
> >> shell of radius n.
> >
> >You're saying you can get a perfect sphere? Wow. I'd love to see
> >this demonstrated in more detail.
> >
> >> Though it does look like you can count them as one
> >> step and make it agree with the Euclidean shell of radius n*sqrt(2).
> >
> >Again, I'd love a demonstration. That would blow my mind.
>
> We're obviously talking about different things. I'm superimposing
> a sphere on a lattice (a Euclidean object centered on a lattice
> point) and then asking what lattice points it encloses.

Yes, we're talking about the same thing.

> Yes, at least in
> 2-D, n-taxicab on a rect. lattice agrees perfectly with radius-n
> Euclidean

Of course it doesn't. If the sphere (circle) is big enough, it won't
be able to contain the same lattice points as the diamond which is
the taxicab "shell". Perhaps you're only looking at very small
circles?
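
Editorial sketch (mine, not from the thread), bearing this out by brute
force: on the square lattice, the radius-r circle and the radius-r
taxicab ball first part company at r = 3, where the circle picks up
(+-2, +-2) -- points of taxicab distance 4.

from math import hypot

taxi = lambda x, y: abs(x) + abs(y)
euc = lambda x, y: hypot(x, y)

def ball(r, dist):
    rng = range(-r - 1, r + 2)
    return {(x, y) for x in rng for y in rng if dist(x, y) <= r + 1e-9}

for r in range(1, 5):
    extra = ball(r, euc) - ball(r, taxi)  # points only the circle holds
    print(r, sorted(extra))
# r = 1, 2: none; r = 3: (+-2, +-2); r = 4: (+-2, +-3) and (+-3, +-2)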

> (I assume this works in higher-D since I'm under the
> impression that 90deg axes are always orthogonal).

:)

> I originally
> said no such spheres existed for taxicab of fractional-factor
> taxicab on a rectangular lattice.

I must be misunderstanding "fractional-factor taxicab" -- what does
that mean?

> However, n*sqrt(2) spheres
> apparently do the trick.

How did you arrive at this startling conclusion?

> >> >OK, maybe we need to rephrase what we're talking about so that
> >> >it means the same thing to both of us. I certainly believe in the
> >> >principle of embedding these lattices in Euclidean space such
> >> >that their balls come as close to being spherical as possible. This
> >> >dictates a rectangular geometry for octave-specific Tenney, an
> >> >equilateral triangular geometry for equal-weighted Hahn
> >>
> >> Now we're in business.
> >
> >Whew!
> >
> >> But doesn't it
> >
> >What's "it"?
>
> The business we're now in. :)
>
> >> dictate a rectangular
> >> geometry for any measure where rungs represent factors (not
> >> ratios), and a triangular geometry where the reverse is true?
> >
> >The Kees lattice, for one, is best understood with no rungs
> >whatsoever, so you can't *begin* by asking what the rungs
> >represent for a given measure.
>
> Sure. Let's not worry about Kees for now.
>
> >Since the other measures can also be understood in terms of a
> >ball of particular shape rather than in terms of taxicab distance,
> >I could say the same for all of them. We can do away with rungs
> >altogether and still come up with the configurations of ratios
> >that best "spherify" the balls; that's what I had in mind above.
>
> Great. I'd love to see more of this.

I don't know what else to say to convince you that the answer to your
question above is "no".

> >> (And have
> >> nothing to do with octaves or the particular weighting?)
> >
> >My statement above to which you replied "Now we're in business"
> >has everything to do with octaves and the particular weightings.
>
> How could this possibly be so?

Because the complexity measures in question have everything to do
with them.

> >> >(breaks down beyond 7-limit),
> >>
> >> Yes, I remember something about this. Something about A3 = D3...
> >> but I can't find anything more on this at mathworld or google.
> >
> >Huh? I don't know if we've jumped the tracks here. Beyond the 7-
> >limit, no Euclidean lattice
>
> I don't know what a Euclidean lattice is.

No lattice that exists in a Euclidean space.

> >can represent each of the consonances,
> >and only the consonances, by an equal and minimal length. Think
> >about ratios of 9 . . .
>
> All of the stuff in my "exploring badness" msg. and since has
> assumed
> prime limits, btw (aside from log-weighted versions, which come out
> the same). Have you been thinking odd limits?

In one sense, no -- ratios of 9 exist whether or not anyone thinks
odd limits. In another sense, "odd limit" -- rather, the minimum odd
limit that a particular ratio could belong to -- is supposed to be
the determinant of lattice distance in both the Hahn and Kees cases.

> >> >and a geometry without rungs but conforming to the third full
> >> >lattice diagram on the Kees page we've been looking at for
> >> >expressibility or "odd limit" (really the smallest odd limit
> >> >the interval belongs to).
> >>
> >> With hexagonal contours, gotcha. Does this depend on limit?
> >
> >For higher prime limits, you need more dimensions, and the hexagon
> >becomes a rhombic dodecahedron and then its higher-dimensional
> >analogues . . . but if you can accept higher-dimensional Euclidean
> >spaces, this doesn't break down beyond the 7-limit the way the
> >Hahn construction does . . .
>
> Huh.

Mm-hm.

🔗Carl Lumma <ekin@lumma.org>

11/3/2005 4:04:25 PM

>> >> >> The closest you can get with unit lengths to spheres on the
>> >> >> rectangular lattice is to count ratios of consonances as two
>> >> >> steps.
>> >> >
>> >> >Two steps? This makes no sense to me. Can you elaborate on your
>> >> >thinking here?
>> >>
>> >> If you count them (ie 5:3) as two steps on the rectangular
>> >> lattice,
>> >
>> >This can be a ratio of consonances but so can, say, 9:5, which is
>> >the ratio between 5/6 and 3/2; and so can, say, 5:4, which is one
>> >step (right?) . . .
>>
>> Depends on what's consonant. In an octave-specific formulation,
>> 5:4 is three steps... Howabout if we swap "is to count fractional
>> factors (as opposed to integer factors) as two steps" in the above.
>
>I don't get it -- how do you define fractional factors,

Rational numbers with a non-1 numerator and denominator.

>and what are
>we swapping this for?

Find "is to count" in the quoted text at top.

>> We're obviously talking about different things. I'm superimposing
>> a sphere on a lattice (a Euclidean object centered on a lattice
>> point) and then asking what lattice points it encloses.
>
>Yes, we're talking about the same thing.
>
>> Yes, at least in
>> 2-D, n-taxicab on a rect. lattice agrees perfectly with radius-n
>> Euclidean
>
>Of course it doesn't. If the sphere (circle) is big enough, it won't
>be able to contain the same lattice points as the diamond which is
>the taxicab "shell". Perhaps you're only looking at very small
>circles?

Maybe so...

>> (I assume this works in higher-D since I'm under the
>> impression that 90deg axes are always orthogonal).
>
>:)

Is this wrong?

>> I originally
>> said no such spheres existed for taxicab of fractional-factor
>> taxicab on a rectangular lattice.
>
>I must be misunderstanding "fractional-factor taxicab" -- what does
>that mean?

5/3 = 1

>> However, n*sqrt(2) spheres apparently do the trick.
>
>How did you arrive at this startling conclusion?

Since the Euclidean distance to fractional factors is sqrt(2).
But maybe I'm not looking at big enough circles...

>> >> >OK, maybe we need to rephrase what we're talking about so that
>> >> >it means the same thing to both of us. I certainly believe in
>> >> >the principle of embedding these lattices in Euclidean space
>> >> >such that their balls come as close to being spherical as
>> >> >possible. This dictates a rectangular geometry for octave-
>> >> >specific Tenney, an equilateral triangular geometry for equal-
>> >> >weighted Hahn
//
>> >Since the other measures can also be understood in terms of a
>> >ball of particular shape rather than in terms of taxicab distance,
>> >I could say the same for all of them. We can do away with rungs
>> >altogether and still come up with the configurations of ratios
>> >that best "spherify" the balls; that's what I had in mind above.
>>
>> Great. I'd love to see more of this.
>
>I don't know what else to say to convince you that the answer to your
>question above is "no".

I meant, I'd love to know the ball shapes.

>> >> (And have
>> >> nothing to do with octaves or the particular weighting?)
>> >
>> >My statement above to which you replied "Now we're in business"
>> >has everything to do with octaves and the particular weightings.
>>
>> How could this possibly be so?
>
>Because the complexity measures in question have everything to do
>with them.

Yes, but forgetting about psychoacoustics just for the moment...

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/4/2005 3:06:10 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >> The closest you can get with unit lengths to spheres on the
> >> >> >> rectangular lattice is to count ratios of consonances as two
> >> >> >> steps.
> >> >> >
> >> >> >Two steps? This makes no sense to me. Can you elaborate on
> >> >> >your thinking here?
> >> >>
> >> >> If you count them (ie 5:3) as two steps on the rectangular
> >> >> lattice,
> >> >
> >> >This can be a ratio of consonances but so can, say, 9:5, which
> >> >is the ratio between 5/6 and 3/2; and so can, say, 5:4, which is
> >> >one step (right?) . . .
> >>
> >> Depends on what's consonant. In an octave-specific formulation,
> >> 5:4 is three steps... Howabout if we swap "is to count
> >> fractional factors (as opposed to integer factors) as two steps"
> >> in the above.
> >
> >I don't get it -- how do you define fractional factors,
>
> Rational numbers with a non-1 numerator and denominator.

So 81:80 would be two steps?

> >and what are
> >we swapping this for?
>
> Find "is to count" in the quoted text at top.

OK . . . I'm still not grasping your construction . . .

> >> We're obviously talking about different things. I'm superimposing
> >> a sphere on a lattice (a Euclidean object centered on a lattice
> >> point) and then asking what lattice points it encloses.
> >
> >Yes, we're talking about the same thing.
> >
> >> Yes, at least in
> >> 2-D, n-taxicab on a rect. lattice agrees perfectly with radius-n
> >> Euclidean
> >
> >Of course it doesn't. If the sphere (circle) is big enough, it won't
> >be able to contain the same lattice points as the diamond which is
> >the taxicab "shell". Perhaps you're only looking at very small
> >circles?
>
> Maybe so...
>
> >> (I assume this works in higher-D since I'm under the
> >> impression that 90deg axes are always orthogonal).
> >
> >:)
>
> Is this wrong?

90 degree axes are always orthogonal, yes, but I don't see how that
supports your assumption.

>> >> I originally
>> >> said no such spheres existed for taxicab of fractional-factor
>> >> taxicab on a rectangular lattice.
> >
>> >I must be misunderstanding "fractional-factor taxicab" -- what
>> >does that mean?
>
> 5/3 = 1

What is that supposed to mean? A major sixth is a unison?

> >> However, n*sqrt(2) spheres apparently do the trick.
> >
> >How did you arrive at this startling conclusion?
>
> Since the Euclidean distance to fractional factors is sqrt(2).
> But maybe I'm not looking at big enough circles...

Clearly.

> >> >> >OK, maybe we need to rephrase what we're talking about so
> >> >> >that it means the same thing to both of us. I certainly believe in
> >> >> >the principle of embedding these lattices in Euclidean space
> >> >> >such that their balls come as close to being spherical as
> >> >> >possible. This dictates a rectangular geometry for octave-
> >> >> >specific Tenney, an equilateral triangular geometry for
> >> >> >equal-weighted Hahn
> //
> >> >Since the other measures can also be understood in terms of a
> >> >ball of particular shape rather than in terms of taxicab
> >> >distance,
> >> >I could say the same for all of them. We can do away with rungs
> >> >altogether and still come up with the configurations of ratios
> >> >that best "spherify" the balls; that's what I had in mind above.
> >>
> >> Great. I'd love to see more of this.
> >
> >I don't know what else to say to convince you that the answer to
> >your question above is "no".
>
> I meant, I'd love to know the ball shapes.

For 3-limit Tenney HD, you get equilateral diamonds (squares rotated
by 45 degrees). For 5-limit Tenney HD, as you acknowledged before,
you get regular octahedra. For 5-limit equal-weighted Hahn diameter,
you get regular hexagons. For 5-limit Kees expressibility or "minimum
odd limit" (using the configuration of lattice points shown in the
third lattice on that Kees page), you get regular hexagons again. The
7-limit and higher-limit extensions to this are straightforward,
except that equal-weighted Hahn breaks down with the first odd
composite.
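
Editorial sketch (mine, not from the thread): the 5-limit hexagon can
be enumerated directly. Taking the two-coordinate analogue of Gene's
q-norm at q = infinity, max(|a|, |b|, |a+b|), as the equal-weighted
Hahn diameter (my assumption), the unit shell comes out as exactly six
monzo classes -- the hexagon's vertices.

hahn5 = lambda a, b: max(abs(a), abs(b), abs(a + b))
shell = [(a, b) for a in range(-2, 3) for b in range(-2, 3)
         if hahn5(a, b) == 1]
print(shell)  # the classes of 3, 5, 5/3 and their inverses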

> >> >> (And have
> >> >> nothing to do with octaves or the particular weighting?)
> >> >
> >> >My statement above to which you replied "Now we're in business"
> >> >has everything to do with octaves and the particular weightings.
> >>
> >> How could this possibly be so?
> >
> >Because the complexity measures in question have everything to do
> >with them.
>
> Yes, but forgetting about psychoacoustics just for the moment...

Forget that lattice distance is supposed to correspond to some
measure of ratio complexity? Then we're back to the "circular"
scenario where the lattice defines its own geometry again.

🔗Carl Lumma <ekin@lumma.org>

11/9/2005 4:26:48 PM

Sorry it's taken me so long to reply. I've come down with the flu.
First time since I was a kid. Nasty stuff. I'm still running a
fever, but I'll try...

>> >Of course it doesn't. If the sphere (circle) is big enough, it
>> >won't be able to contain the same lattice points as the diamond
>> >which is the taxicab "shell". Perhaps you're only looking at
>> >very small circles?
>>
>> Maybe so...
>>
>> >> (I assume this works in higher-D since I'm under the
>> >> impression that 90deg axes are always orthogonal).
>> >
>> >:)
>>
>> Is this wrong?
>
>90 degree axes are always orthogonal, yes, but I don't see how that
>supports your assumption.

If it were true, I assume it would remain true in higher-D.

>>> >> I originally said no such spheres existed for taxicab of
>>> >> fractional-factor taxicab on a rectangular lattice.
>> >
>>> >I must be misunderstanding "fractional-factor taxicab" -- what
>>> >does that mean?

Just noticed there's a typo here -- delete "for taxicab of".

>> 5/3 = 1
>
>What is that supposed to mean? A major sixth is a unison?

A major sixth has length 1.

>> >> However, n*sqrt(2) spheres apparently do the trick.
>> >
>> >How did you arrive at this startling conclusion?
>>
>> Since the Euclidean distance to fractional factors is sqrt(2).
>> But maybe I'm not looking at big enough circles...
>
>Clearly.

Ok.

>> >> >Since the other measures can also be understood in terms of
>> >> >a ball of particular shape rather than in terms of taxicab
>> >> >distance, I could say the same for all of them. We can do away
>> >> >with rungs altogether and still come up with the configurations
>> >> >of ratios that best "spherify" the balls; that's what I had in
>> >> >mind above.
//
>> I'd love to know the ball shapes.
>
>For 3-limit Tenney HD, you get equilateral diamonds (squares rotated
>by 45 degrees). For 5-limit Tenney HD, as you acknowledged before,
>you get regular octahedra.

Yes.

>For 5-limit equal-weighted Hahn diameter, you get regular hexagons.

Knew that.

>For 5-limit Kees expressibility or "minimum odd limit" (using the
>configuration of lattice points shown in the third lattice on that
>Kees page), you get regular hexagons again.

Exactly what sort of lattice is this?

>The 7-limit and higher-limit extensions to this are straightforward,
>except that equal-weighted Hahn breaks down with the first odd
>composite.

How so? Does this depend on the fact that Hahn typically used odd
limits (I've been using a prime-limit approach)?

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/10/2005 2:09:58 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> Sorry it's taken me so long to reply. I've come down with the flu.
> First time since I was a kid. Nasty stuff. I'm still running a
> fever, but I'll try...

Hope you feel better. I'll try to go easy on you . . . :)

> >>> >> I originally said no such spheres existed for taxicab of
> >>> >> fractional-factor taxicab on a rectangular lattice.
> >> >
> >>> >I must be misunderstanding "fractional-factor taxicab" -- what
> >>> >does that mean?
>
> Just noticed there's a typo here -- delete "for taxicab of".
>
> >> 5/3 = 1
> >
> >What is that supposed to mean? A major sixth is a unison?
>
> A major sixth has length 1.

OK. But intervals that aren't "fractional factors" can also get
length one, right?

> >> >> >Since the other measures can also be understood in terms
> >> >> >of a ball of particular shape rather than in terms of taxicab
> >> >> >distance, I could say the same for all of them. We can do
> >> >> >away with rungs altogether and still come up with the
> >> >> >configurations of ratios that best "spherify" the balls;
> >> >> >that's what I had in mind above.
> //
> >> I'd love to know the ball shapes.
> >
> >For 3-limit Tenney HD, you get equilateral diamonds (squares
> >rotated
> >by 45 degrees). For 5-limit Tenney HD, as you acknowledged before,
> >you get regular octahedra.
>
> Yes.
>
> >For 5-limit equal-weighted Hahn diameter, you get regular hexagons.
>
> Knew that.
>
> >For 5-limit Kees expressibility or "minimum odd limit" (using the
> >configuration of lattice points shown in the third lattice on that
> >Kees page), you get regular hexagons again.
>
> Exactly what sort of lattice is this?

It is what it is. I don't know what else to say. But I certainly
would have loved to know of this much earlier.

> >The 7-limit and higher-limit extensions to this are
> >straightforward,
> >except that equal-weighted Hahn breaks down with the first odd
> >composite.
>
> How so? Does this depend on the fact that Hahn typically used odd
> limits (I've been using a prime-limit approach)?

Yes, so that in the 9-limit and beyond, ratios of 9 can still have
length 1, etc. I thought you mentioned "odd factorization" so I'm
surprised if you now say that you've been using a primes-only version
of Hahn's algorithm . . . (?)

🔗Carl Lumma <ekin@lumma.org>

11/10/2005 9:45:51 PM

>> Sorry it's taken me so long to reply. I've come down with the flu.
>> First time since I was a kid. Nasty stuff. I'm still running a
>> fever, but I'll try...
>
>Hope you feel better. I'll try to go easy on you . . . :)

Probably wasn't the flu after all. I feel better, but bloody
diarrhea has set in. Waiting on the results of a stool culture.
Sorry if that's too much information. They don't seem to think
it's terribly serious.

>> >>> >> I originally said no such spheres existed for taxicab of
>> >>> >> fractional-factor taxicab on a rectangular lattice.
>> >> >
>> >>> >I must be misunderstanding "fractional-factor taxicab" -- what
>> >>> >does that mean?
>>
>> Just noticed there's a typo here -- delete "for taxicab of".
>>
>> >> 5/3 = 1
>> >
>> >What is that supposed to mean? A major sixth is a unison?
>>
>> A major sixth has length 1.
>
>OK. But intervals that aren't "fractional factors" can also get
>length one, right?

(I've answered this affirmatively elsewhere.)

>> >For 5-limit Kees expressibility or "minimum odd limit" (using the
>> >configuration of lattice points shown in the third lattice on that
>> >Kees page), you get regular hexagons again.
>>
>> Exactly what sort of lattice is this?
>
>It is what it is. I don't know what else to say. But I certainly
>would have loved to know of this much earlier.

Yes, I'm grokking it now. For some reason Kees assumes the two
5-limit axes should be equally scaled. And indeed it seems that
way, since they have the same expressibility! Is this the paradox
he refers to, and how have you decided it doesn't exist?

Why do you like it better than your isosceles lattice?

>> >The 7-limit and higher-limit extensions to this are
>> >straightforward, except that equal-weighted Hahn breaks
>> >down with the first odd composite.
>>
>> How so? Does this depend on the fact that Hahn typically used odd
>> limits (I've been using a prime-limit approach)?
>
>Yes, so that in the 9-limit and beyond, ratios of 9 can still have
>length 1, etc. I thought you mentioned "odd factorization" so I'm
>surprised if you now say that you've been using a primes-only version
>of Hahn's algorithm . . . (?)

Where did I mention odd factorization? Only when I was claiming
it's the same as prime factorization under log weighting. But
I've been using strictly prime limit thinking this entire thread.

So was your claim the generalized FCC lattice dosen't work
in > 3-D related to pure geometry, or some expectation based
on odd limit?

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 12:55:01 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> Sorry it's taken me so long to reply. I've come down with the
> >> flu.
> >> First time since I was a kid. Nasty stuff. I'm still running a
> >> fever, but I'll try...
> >
> >Hope you feel better. I'll try to go easy on you . . . :)
>
> Probably wasn't the flu after all. I feel better, but bloody
> diarrhea has set in. Waiting on the results of a stool culture.
> Sorry if that's too much information. They don't seem to think
> it's terribly serious.

I hope not. I suffered with food poisoning for 10 days (including an
appendectomy) before it was discovered. Good luck!

> >> >>> >> I originally said no such spheres existed for taxicab of
> >> >>> >> fractional-factor taxicab on a rectangular lattice.
> >> >> >
> >> >>> >I must be misunderstanding "fractional-factor taxicab" --
> >> >>> >what does that mean?
> >>
> >> Just noticed there's a typo here -- delete "for taxicab of".
> >>
> >> >> 5/3 = 1
> >> >
> >> >What is that supposed to mean? A major sixth is a unison?
> >>
> >> A major sixth has length 1.
> >
> >OK. But intervals that aren't "fractional factors" can also get
> >length one, right?
>
> (I've answered this affirmatively elsewhere.)

OK, so where does that leave us?

> >> >For 5-limit Kees expressibility or "minimum odd limit" (using
> >> >the configuration of lattice points shown in the third lattice
> >> >on that Kees page), you get regular hexagons again.
> >>
> >> Exactly what sort of lattice is this?
> >
> >It is what it is. I don't know what else to say. But I certainly
> >would have loved to know of this much earlier.
>
> Yes, I'm grokking it now. For some reason Kees assumes the two
> 5-limit axes should be equally scaled.

Huh?

> And indeed it seems that
> way, since they have the same expressibility!

What do you mean? 3 doesn't have the same expressibility as 5.

> IS this the paradox
> he refers to, and how have you decided it doesn't exist?

He refers to 5:4 and 5:3 looking like different distances in the
lattice. But that's not correct, since the relevant distance measure
is one that only allows motions (of whatever length) along six
equally-spaced directions (60 degrees apart) -- not along the rungs
Kees draws -- or equivalently, one that measures the size of a
regular hexagon with the origin at the center and the destination on
the outer edge.

> Why do you like it better than your isosceles lattice?

It's better for a prime limit of 5 and the new paradigm where
consonance/dissonance is shades-of-gray rather than black-and-white,
because all the (hexagonally-determined) distances correspond to
complexity (expressibility), no matter how complex the ratios. This
is why I believe the rungs should be deleted. If ratios of 9, in
particular, are to be considered something other than mere
dissonances in a 5-prime-limit tuning, then you want to represent
them as shorter than ratios of 15. This is exactly what this lattice,
with a hexagonal-norm metric, accomplishes perfectly. I should make a
diagram with lots of ratios on it . . .
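
Editorial sketch (mine, not from the thread): the 9-versus-15 claim
checks out under the log2-weighted norm from earlier in the thread.

from math import log2

def hex_norm(a, b):
    # hexagonal (expressibility) metric on 5-limit monzo classes |* a b>
    s = a * log2(3) + b * log2(5)
    return (abs(a) * log2(3) + abs(b) * log2(5) + abs(s)) / 2

print(hex_norm(2, 0))  # 9/8  -> log2(9)  ~= 3.17
print(hex_norm(1, 1))  # 15/8 -> log2(15) ~= 3.91

So a ratio of 9 does come out shorter than a ratio of 15, as desired.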

> >> >The 7-limit and higher-limit extensions to this are
> >> >straighforward, except that equal-weighted Hahn breaks
> >> >down with the first odd composite.
> >>
> >> How so? Does this depend on the fact that Hahn typically used odd
> >> limits (I've been using a prime-limit approach)?
> >
> >Yes, so that in the 9-limit and beyond, ratios of 9 can still have
> >length 1, etc. I thought you mentioned "odd factorization" so I'm
> >surprised if you now say that you've been using a primes-only
> >version
> >of Hahn's algorithm . . . (?)
>
> Where did I mention odd factorization? Only when I was claiming
> it's the same as prime factorization under log weighting. But
> I've been using strictly prime limit thinking this entire thread.

So you're not actually using Hahn's algorithm as he proposed it?

> So was your claim the generalized FCC lattice doesn't work
> in
> 3-D related to pure geometry,

No, if I even said such a thing.

> or some expectation based
> on odd limit?

I don't know what you mean by "expectation", but clearly it fails for
the Hahn case, where ratios of 9 must be given length 1.

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 1:06:20 PM

>I hope not. I suffered with food poisoning for 10 days (including an
>appendectomy) before it was discovered. Good luck!

I remembered that!

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 1:04:02 PM

Turned out to be food poisoning (Campylobacter). Antibiotics
to the rescue!

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 1:54:48 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> Turned out to be food poisoning (Campylobacter).

Exactly what I had -- sucks when it goes undiagnosed for a while . . .

> Antibiotics
> to the rescue!

Good luck again!!

🔗oyarman@ozanyarman.com

11/14/2005 2:06:41 PM

Good to know you're still alive and kicking, my incurable naturalist
colleague!

Cordially,
Ozan

----- Original Message -----
From: "Carl Lumma" <ekin@lumma.org>
To: <tuning-math@yahoogroups.com>
Sent: Monday, 14 November 2005 23:04
Subject: [tuning-math] my health

> Turned out to be food poisoning (Campylobacter). Antibiotics
> to the rescue!
>
> -Carl
>
>

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 2:10:31 PM

>> Turned out to be food poisoning (Campylobacter).
>
>Exactly what I had -- sucks when it goes undiagnosed for a while . . .

Dude, the stuff's a real killer. I'm lucky I didn't wait for
the stool culture (72 hours) to finish -- I demanded antibiotics
on Friday. I'm hip to antibiotic overuse, but I rarely get sick
so I knew something serious was happening and I could feel I
wasn't making progress on my own.

>> Antibiotics to the rescue!
>
>Good luck again!!

I'm basically all better now, except having to rebuild my
strength and be careful with keeping everything clean (still
shedding going on according to the Doc).

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 2:39:17 PM

>> >> >>> >> I originally said no such spheres existed for taxicab of
>> >> >>> >> fractional-factor taxicab on a rectangular lattice.
>> >> >> >
>> >> >>> >I must be misunderstanding "fractional-factor taxicab" --
>> >> >>> >what does that mean?
>> >>
>> >> Just noticed there's a typo here -- delete "for taxicab of".
>> >>
>> >> >> 5/3 = 1
>> >> >
>> >> >What is that supposed to mean? A major sixth is a unison?
>> >>
>> >> A major sixth has length 1.
>> >
>> >OK. But intervals that aren't "fractional factors" can also get
>> >length one, right?
>>
>> (I've answered this affirmatively elsewhere.)
>
>OK, so where does that leave us?

I'm not sure.

'be nice to have a list of complexity measures, showing their ball
shapes on different lattices in 1- through 3-D (or higher)...

lattice type................Triangu. Rect.
dimensions..................1 2 3... 1 2 3...

taxicab
odd limit
Tenney height
symmetric Euclidean

The weighted versions of these would be

"isosceles"
expressibility
Tenney HD
?

>> >> >For 5-limit Kees expressibility shown in the third lattice on
>> >> >that Kees page), you get regular hexagons again.
>> >>
>> >> Exactly what sort of lattice is this?
>> >
>> >It is what it is. I don't know what else to say. But I certainly
>> >would have loved to know of this much earlier.
>>
>> Yes, I'm grokking it now. For some reason Kees assumes the two
>> 5-limit axes should be equally scaled.
>
>Huh?

"Now we seem to hit a paradox, because the measure of a 5/4 (blue)
and a 5/3 (green) should be the same"

>> And indeed it seems that
>> way, since they have the same expressibility!
>
>What do you mean? 3 doesn't have the same expressibility as 5.

5/3 and 5/4 do.

>> IS this the paradox
>> he refers to, and how have you decided it doesn't exist?
>
>He refers to 5:4 and 5:3 looking like different distances in the
>lattice. But that's not correct, since the relevant distance measure
>is one that only allows motions (of whatever length) along six
>equally-spaced directions (60 degrees apart) -- not along the rungs
>Kees draws -- or equivalently, one that measures the size of a
>regular hexagon with the origin at the center and the destination
>on the outer edge.

Huh. Is there such a motion of length n from the unison to
both 5:3 and 5:4?

>> Why do you like it better than your isosceles lattice?
>
>It's better for a prime limit of 5 and the new paradigm where
>consonance/dissonance is shades-of-gray rather than black-and-white,
>because all the (hexagonally-determined) distances correspond to
>complexity (expressibility), no matter how complex the ratios. This
>is why I believe the rungs should be deleted. If ratios of 9, in
>particular, are to be considered something other than mere
>dissonances in a 5-prime-limit tuning, then you want to represent
>them as shorter than ratios of 15. This is exactly what this lattice,
>with a hexagonal-norm metric, accomplishes perfectly. I should make a
>diagram with lots of ratios on it . . .

Hmm...

>> >> >The 7-limit and higher-limit extensions to this are
> >> >> >straightforward, except that equal-weighted Hahn breaks
>> >> >down with the first odd composite.
>> >>
> >> >> How so? Does this depend on the fact that Hahn typically used odd
>> >> limits (I've been using a prime-limit approach)?
>> >
>> >Yes, so that in the 9-limit and beyond, ratios of 9 can still have
>> >length 1, etc. I thought you mentioned "odd factorization" so I'm
>> >surprised if you now say that you've been using a primes-only
>> >version of Hahn's algorithm . . . (?)
>>
>> Where did I mention odd factorization? Only when I was claiming
>> it's the same as prime factorization under log weighting. But
>> I've been using strictly prime limit thinking this entire thread.
>
>So you're not actually using Hahn's algorithm as he proposed it?

Right.

>> So was your claim the generalized FCC lattice doesn't work
>> in 3-D related to pure geometry,
>
>No, if I even said such a thing.

That was supposed to be "> 3-D" not "in 3-D".

>> or some expectation based on odd limit?
>
>I don't know what you mean by "expectation", but clearly it fails for
>the Hahn case, where ratios of 9 must be given length 1.

That's an expectation.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

11/14/2005 2:52:47 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> Dude, the stuff's a real killer. I'm lucky I didn't wait for
> the stool culture (72 hours) to finish -- I demanded antibiotics
> on Friday.

Sometimes that is wise. I went into an emergency room once, and the ER
doctor said he didn't know what I had, but it looked like it was
probably mono or strep throat. Whatever it was, it was bad, and he was
going to give me an antibiotic right now, because my throat looked
like raw liver.

It turned out to be mono *and* strep throat.

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 2:56:40 PM

>> Dude, the stuff's a real killer. I'm lucky I didn't wait for
>> the stool culture (72 hours) to finish -- I demanded antibiotics
>> on Friday.
>
>Sometimes that is wise. I went into an emergency room once, and the ER
>doctor said he didn't know what I had, but it looked like it was
>probably mono or strep throat. Whatever it was, it was bad, and he was
>going to give me an antibiotic right now, because my throat looked
>like raw liver.
>
>It turned out to be mono *and* strep throat.

Heh! -C.

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 3:53:43 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> 'be nice to have a list of complexity measures, showing their ball
> shapes on different lattices in 1- through 3-D (or higher)...

Yes, I'd like to draw the relevant pictures soon.

>
> lattice type................Triangu. Rect.
> dimensions..................1 2 3... 1 2 3...
>
> taxicab
> odd limit
> Tenney height
> symmetric Euclidean
>
> The weighted versions of these would be
>
> "isosceles"
> expressibility
> Tenney HD
> ?

Makes no sense to me, so I'll ignore it :)

> >> >> >For 5-limit Kees expressibility shown in the third lattice on
> >> >> >that Kees page), you get regular hexagons again.
> >> >>
> >> >> Exactly what sort of lattice is this?
> >> >
> >> >It is what it is. I don't know what else to say. But I certainly
> >> >would have loved to know of this much earlier.
> >>
> >> Yes, I'm grokking it now. For some reason Kees assumes the two
> >> 5-limit axes should be equally scaled.
> >
> >Huh?
>
> "Now we seem to hit a paradox, because the measure of a 5/4 (blue)
> and a 5/3 (green) should be the same"

Right, that's what I discussed below. I don't consider these to be
the two 5-limit axes, though.

> >> And indeed it seems that
> >> way, since they have the same expressibility!
> >
> >What do you mean? 3 doesn't have the same expressibility as 5.
>
> 5/3 and 5/4 do.

Yes -- that's what I discussed below.

> >> IS this the paradox
> >> he refers to, and how have you decided it doesn't exist?
> >
> >He refers to 5:4 and 5:3 looking like different distances in the
> >lattice. But that's not correct, since the relevant distance measure
> >is one that only allows motions (of whatever length) along six
> >equally-spaced directions (60 degrees apart) -- not along the rungs
> >Kees draws -- or equivalently, one that measures the size of a
> >regular hexagon with the origin at the center and the destination
> >on the outer edge.
>
> Huh. Is there such a motion of length n from the unison to
> both 5:3 and 5:4?

There are an infinite number of such motions to either, and they're
all the same length (assuming you don't backtrack in any of the six
directions).

> >> Why do you like it better than your isosceles lattice?
> >
> >It's better for a prime limit of 5 and the new paradigm where
> >consonance/dissonance is shades-of-gray rather than black-and-white,
> >because all the (hexagonally-determined) distances correspond to
> >complexity (expressibility), no matter how complex the ratios. This
> >is why I believe the rungs should be deleted. If ratios of 9, in
> >particular, are to be considered something other than mere
> >dissonances in a 5-prime-limit tuning, then you want to represent
> >them as shorter than ratios of 15. This is exactly what this lattice,
> >with a hexagonal-norm metric, accomplishes perfectly. I should make a
> >diagram with lots of ratios on it . . .
>
> Hmm...

As in you understand, or you don't?

> >> >> >The 7-limit and higher-limit extensions to this are
> >> >> >straightforward, except that equal-weighted Hahn breaks
> >> >> >down with the first odd composite.
> >> >>
> >> >> How so? Does this depend on the fact that Hahn typically used odd
> >> >> limits (I've been using a prime-limit approach)?
> >> >
> >> >Yes, so that in the 9-limit and beyond, ratios of 9 can still have
> >> >length 1, etc. I thought you mentioned "odd factorization" so I'm
> >> >surprised if you now say that you've been using a primes-only
> >> >version of Hahn's algorithm . . . (?)
> >>
> >> Where did I mention odd factorization? Only when I was claiming
> >> it's the same as prime factorization under log weighting. But
> >> I've been using strictly prime limit thinking this entire thread.
> >
> >So you're not actually using Hahn's algorithm as he proposed it?
>
> Right.
>
> >> So was your claim the generalized FCC lattice doesn't work
> >> in 3-D related to pure geometry,
> >
> >No, if I even said such a thing.
>
> That was supposed to be "> 3-D" not "in 3-D".

I got that.

> >> or some expectation based on odd limit?
> >
> >I don't know what you mean by "expectation", but clearly it fails for
> >the Hahn case, where ratios of 9 must be given length 1.
>
> That's an expectation.

What do you mean, an expectation?

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 5:19:02 PM

>> lattice type................Triangu. Rect.
>> dimensions..................1 2 3... 1 2 3...
>>
>> taxicab
>> odd limit
>> Tenney height
>> symmetric Euclidean
>>
>> The weighted versions of these would be
>>
>> "isosceles"
>> expressibility
>> Tenney HD
>> ?
>
>Makes no sense to me, so I'll ignore it :)

The entire chart? The entries would be things like
"rhombic dodecahedron", under each of the six (in this case)
columns (lattice types) for each measure.

Certainly you're not balking at the term "isosceles", which
we've been using to refer to your weighted-rungs thing, which you
originally thought would be the same as expressibility.

You just finished lecturing me on the difference between
Tenney height and Tenney HD.

"Odd limit" should perhaps be called "Ratio of".

What else?

>> >> Yes, I'm grokking it now. For some reason Kees assumes the two
>> >> 5-limit axes should be equally scaled.
>> >
>> >Huh?
>>
>> "Now we seem to hit a paradox, because the measure of a 5/4 (blue)
>> and a 5/3 (green) should be the same"
>
>Right, that's what I discussed below. I don't consider these to be
>the two 5-limit axes, though.

Obviously.

>> >Kees refers to 5:4 and 5:3 looking like different distances in
>> >the lattice. But that's not correct, since the relevant distance
>> >measure is one that only allows motions (of whatever length)
>> >along six equally-spaced directions (60 degrees apart) -- not
>> >along the rungs Kees draws -- or equivalently, one that measures
>> >the size of a regular hexagon with the origin at the center and
>> >the destination on the outer edge.
>>
>> Huh. Is there such a motion of length n from the unison to
>> both 5:3 and 5:4?
>
>There are an infinite number of such motions to either, and they're
>all the same length (assuming you don't backtrack in any of the six
>directions).

Wow. Sounds good.

>> >> Why do you like it better than your isosceles lattice?
>> >
>> >It's better for a prime limit of 5 and the new paradigm where
>> >consonance/dissonance is shades-of-gray rather than black-and-
>> >white, because all the (hexagonally-determined) distances
>> >correspond to complexity (expressibility), no matter how
>> >complex the ratios. This is why I believe the rungs should
>> >be deleted. If ratios of 9, in particular, are to be considered
>> >something other than mere dissonances in a 5-prime-limit tuning,
>> >then you want to represent them as shorter than ratios of 15.
>> >This is exactly what this lattice, with a hexagonal-norm metric,
>> >accomplishes perfectly. I should make a diagram with lots of
>> >ratios on it . . .
>>
>> Hmm...
>
>As in you understand, or you don't?

Don't. I'm not sure what paradigm you're referring to.
The weighted paradigm?

I do take it you're saying allowed motions to 9 are of length
log(9)?

>> >> >> >equal-weighted Hahn breaks
>> >> >> >down with the first odd composite.
>> >> >>
>> >> >> How so? Does this depend on the fact that Hahn typically used odd
>> >> >> limits (I've been using a prime-limit approach)?
>> >>
>> >> it fails for the Hahn case, where ratios of 9 must be given
>> >> length 1.
>>
>> That's an expectation.
>
>What do you mean, an expectation?

If I construct a 4-D triangular lattice with four prime
numbers, are all the ratios of these primes and the primes
themselves, and no other intervals, mapped to single rungs?
If so, the "Hahn approach" does not break down above 3-D.
It breaks down when you don't define limits the way I do,
and the only problem is an expectation on your or Hahn's
part. If not, it's a problem of geometry, and I need a
new lattice (of course I'm already doing this numerically
with no apparent problems).

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/15/2005 11:26:06 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> lattice type................Triangu. Rect.
> >> dimensions..................1 2 3... 1 2 3...
> >>
> >> taxicab
> >> odd limit
> >> Tenney height
> >> symmetric Euclidean
> >>
> >> The weighted versions of these would be
> >>
> >> "isosceles"
> >> expressibility
> >> Tenney HD
> >> ?
> >
> >Makes no sense to me, so I'll ignore it :)
>
> The entire chart? The entries would be things like
> "rhombic dodecahedron", under each of the six (in this case)
> columns (lattice types) for each measure.

I'd rather use only one lattice type for each measure; the one that
makes the balls of that measure most symmetrical.

> Certainly you're not balking at the term "isosceles", which
> we've been using to refer to your weighted-rungs thing you
> originally thought would be the same as expressibility.

Especially if you *do* use multiple lattice types for each complexity
measure, it's a bad idea to use names for complexity measures which
sound more like names of lattice types.

And although you can make a lattice where expressibility corresponds
to a distance measure, you can't do that for odd limit or Tenney
height. Can you do it for taxicab? Makes no sense. Can you do it for
symmetric Euclidean? Makes no sense.

> You just finished lecturing me on the difference between
> Tenney height and Tenney HD.
>
> "Odd limit" should perhaps be called "Ratio of".
>
> What else?
>
> >> >> Yes, I'm grokking it now. For some reason Kees assumes the two
> >> >> 5-limit axes should be equally scaled.
> >> >
> >> >Huh?
> >>
> >> "Now we seem to hit a paradox, because the measure of a 5/4
(blue)
> >> and a 5/3 (green) should be the same"
> >
> >Right, that's what I discussed below. I don't consider these to be
> >the two 5-limit axes, though.
>
> Obviously.
>
> >> >Kees refers to 5:4 and 5:3 looking like different distances in
> >> >the lattice. But that's not correct, since the relevant distance
> >> >measure is one that only allows motions (of whatever length)
> >> >along six equally-spaced directions (60 degrees apart) -- not
> >> >along the rungs Kees draws -- or equivalently, one that measures
> >> >the size of a regular hexagon with the origin at the center and
> >> >the destination on the outer edge.
> >>
> >> Huh. Is there such a motion of length n from the unison to
> >> both 5:3 and 5:4?
> >
> >There are an infinite number of such motions to either, and they're
> >all the same length (assuming you don't backtrack in any of the six
> >directions).
>
> Wow. Sounds good.

And you only need to move in two of these directions for any
particular interval.

> >> >> Why do you like it better than your isosceles lattice?
> >> >
> >> >It's better for a prime limit of 5 and the new paradigm where
> >> >consonance/dissonance is shades-of-gray rather than black-and-
> >> >white, because all the (hexagonally-determined) distances
> >> >correspond to complexity (expressibility), no matter how
> >> >complex the ratios. This is why I believe the rungs should
> >> >be deleted. If ratios of 9, in particular, are to be considered
> >> >something other than mere dissonances in a 5-prime-limit tuning,
> >> >then you want to represent them as shorter than ratios of 15.
> >> >This is exactly what this lattice, with a hexagonal-norm metric,
> >> >accomplishes perfectly. I should make a diagram with lots of
> >> >ratios on it . . .
> >>
> >> Hmm...
> >
> >As in you understand, or you don't?
>
> Don't.

Think of a set of nested, concentric regular hexagons. These
represent increasing "distances". This is the notion of "distance" we
need here.

> I'm not sure what paradigm you're referring to.
> The weighted paradigm?

No, just the paradigm where we drop the assumption of a "consonance
limit", and things like consistency are out the window.

> I do take it you're saying allowed motions to 9 are of length
> log(9)?

Yes; if you can't see that on the diagram as it is, I'll make a new
one for you.

> >> >> >> >equal-weighted Hahn breaks
> >> >> >> >down with the first odd composite.
> >> >> >>
> >> >> >> How so? Does this depend on the fact that Hahn typically used odd
> >> >> >> limits (I've been using a prime-limit approach)?
> >> >>
> >> >> it fails for the Hahn case, where ratios of 9 must be given
> >> >> length 1.
> >>
> >> That's an expectation.
> >
> >What do you mean, an expectation?
>
> If I construct a 4-D triangular lattice with four prime
> numbers, are all the ratios of these primes and the primes
> themselves, and no other intervals, mapped to single rungs?

Yes.

> If so, the "Hahn approach" does not break down above 3-D.

Yes it does, if you ever read anything Hahn wrote.

> It breaks down when you don't define limits the way I do,
> and the only problem is an expectation on your or Hahn's
> part.

Ha!

> If not,

If not what?

> it's a problem of geometry, and I need a
> new lattice (of course I'm already doing this numerically
> with no apparent problems).
>
> -Carl
>

🔗Paul Erlich <perlich@aya.yale.edu>

11/15/2005 11:26:56 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> If I construct a 4-D triangular lattice

This is what Gene calls the "symmetrical" lattice, right?

🔗Paul Erlich <perlich@aya.yale.edu>

11/15/2005 11:51:07 AM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> > I do take it you're saying allowed motions to 9 are of length
> > log(9)?
>
> Yes; if you can't see that on the diagram as it is, I'll make a new
> one for you.

Actually, for the case of 9 (9:8, 9:4, 16:9 etc.), which appears to be
what you're asking about, it's trivial to see it. Since it's located at
an angle zero degrees from the horizontal, there's only one allowed
motion -- a straight line. And since the length from 1/1 to 3/2 is
log(3), the length from 1/1 to 9/4 is log(3)+log(3)=log(9).
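
This arithmetic is easy to check numerically. A minimal Python sketch
(the function and its name are illustrative, not code from the thread)
computes expressibility -- the log of the Kees odd limit -- straight
from the exponents of 3 and 5, totalling the positive and the negative
weighted exponents separately and taking the larger total:

    from math import log2

    def expressibility(a, b):
        # Kees expressibility of the pitch class 3^a * 5^b (octaves
        # ignored): the larger of the weighted positive and weighted
        # negative exponent totals is log2 of the odd limit.
        up = max(a, 0) * log2(3) + max(b, 0) * log2(5)
        down = max(-a, 0) * log2(3) + max(-b, 0) * log2(5)
        return max(up, down)

    print(expressibility(0, 1))   # 5/4  -> log2(5), about 2.32
    print(expressibility(-1, 1))  # 5/3  -> log2(5), same length as 5/4
    print(expressibility(2, 0))   # 9/4  -> log2(9) = log2(3) + log2(3)
    print(expressibility(1, 1))   # 15/8 -> log2(15), longer than 9/4

On this measure 5/4 and 5/3 come out the same length, 9/4 lands at
log(9) exactly as computed above, and ratios of 15 come out longer
than ratios of 9 -- the behavior Paul describes for the hexagonal-norm
lattice.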

🔗Carl Lumma <ekin@lumma.org>

11/15/2005 12:44:13 PM

>> >> lattice type................Triangu. Rect.
>> >> dimensions..................1 2 3... 1 2 3...
>> >>
>> >> taxicab
>> >> odd limit
>> >> Tenney height
>> >> symmetric Euclidean
>> >>
>> >> The weighted versions of these would be
>> >>
>> >> "isosceles"
>> >> expressibility
>> >> Tenney HD
>> >> ?
>> >
>> >Makes no sense to me, so I'll ignore it :)
>>
>> The entire chart? The entries would be things like
>> "rhombic dodecahedron", under each of the six (in this case)
>> columns (lattice types) for each measure.
>
>I'd rather use only one lattice type for each measure; the one that
>makes the balls of that measure most symmetrical.

One way to find that out is to read the ball shapes off a chart
like this!

>And although you can make a lattice where expressibility corresponds
>to a distance measure, you can't do that for odd limit //
>Can you do it for symmetric Euclidean? Makes no sense.

Why not? It's only a difference of a log.

What do you mean by "distance measure"? Crow flies? Surely
Euclidean distance as Gene's defined it measures crow's distance
on some lattice.

>> >> >> Why do you like it better than your isosceles lattice?
>> >> >
>> >> >It's better for a prime limit of 5 and the new paradigm where
>> >> >consonance/dissonance is shades-of-gray rather than black-and-
>> >> >white, because all the (hexagonally-determined) distances
>> >> >correspond to complexity (expressibility), no matter how
>> >> >complex the ratios. This is why I believe the rungs should
>> >> >be deleted. If ratios of 9, in particular, are to be considered
>> >> >something other than mere dissonances in a 5-prime-limit tuning,
>> >> >then you want to represent them as shorter than ratios of 15.
>> >> >This is exactly what this lattice, with a hexagonal-norm metric,
>> >> >accomplishes perfectly. I should make a diagram with lots of
>> >> >ratios on it . . .
>> >>
>> >> Hmm...
>> >
>> >As in you understand, or you don't?
>>
>> Don't.
>
>Think of a set of nested, concentric regular hexagons. These
>represent increasing "distances". This is the notion of "distance" we
>need here.

I have the picture, it's an example of 'grey-area dissonance and
ratios of 9' that I'm missing.

>> I'm not sure what paradigm you're referring to.
>> The weighted paradigm?
>
>No, just the paradigm where we drop the assumption of a "consonance
>limit",

How long has this been going on?

>> I do take it you're saying allowed motions to 9 are of length
>> log(9)?
>
>Yes; if you can't see that on the diagram as it is, I'll make a new
>one for you.

It's hard to visualize regular hexagons over that lattice's skewed
ones.

>> If so, the "Hahn approach" does not break down above 3-D.
>
>Yes it does, if you ever read anything Hahn wrote.

Hahn's code showed 9-limit examples working perfectly, and my
code tests against those examples.

>> If not, it's a problem of geometry, and I need a
>> new lattice (of course I'm already doing this numerically
>> with no apparent problems).
>
>If not what?

If the 4-D triangular lattice failed to map consonances to
rungs.

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/15/2005 12:49:07 PM

>> If I construct a 4-D triangular lattice
>
>This is what Gene calls the "symmetrical" lattice, right?

By "symmetrical" I think Gene simply means "unweighted".

I just meant a 4-D thing with 60 deg. angles, which I
assume exists but have no knowledge of.

I think Gene's Euclidean norm gives crow's distance
on an unweighted rectangular lattice, but I could be
wrong.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

11/15/2005 2:17:34 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> >And although you can make a lattice where expressibility corresponds
> >to a distance measure, you can't do that for odd limit //
> >Can you do it for symmetric Euclidean? Makes no sense.
>
> Why not? It's only a difference of a log.

One reason I have been assuming you mean something different by
"lattice" are the discussions like this.

🔗Gene Ward Smith <gwsmith@svpal.org>

11/15/2005 2:23:23 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> I just meant a 4-D thing with 60 deg. angles, which I
> assume exists but have no knowledge of.

Well, you do now, since you have this general formula which in the
4-D (11-limit) case becomes

|| |* a b c d> || = sqrt(((a^2+b^2+c^2+d^2)+(a+b+c+d)^2)/2)

> I think Gene's Euclidean norm gives crow's distance
> on an unweighted rectangular lattice, but I could be
> wrong.

No, on an unweighted "triangular" (An) lattice. Of course in the
11-limit it probably makes more sense to make 9, not 3, the same
length as 5, 7, and 11.
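
Gene's formula is easy to try out. A minimal Python sketch
(illustrative code, not Gene's; the function name is invented here)
takes the exponents of 3, 5, 7 and 11, with the 2-exponent ignored:

    from math import sqrt
    from itertools import combinations

    def an_norm(*exps):
        # sqrt(((a^2+b^2+c^2+d^2) + (a+b+c+d)^2)/2)
        return sqrt((sum(e * e for e in exps) + sum(exps) ** 2) / 2)

    print(an_norm(0, -1, 1, 0))   # 7/5 -> 1.0

    # Each prime 3, 5, 7, 11 is a single rung of length 1 ...
    for i in range(4):
        v = [0, 0, 0, 0]
        v[i] = 1
        assert abs(an_norm(*v) - 1.0) < 1e-12

    # ... and so is every ratio of two of those primes:
    for i, j in combinations(range(4), 2):
        v = [0, 0, 0, 0]
        v[i], v[j] = 1, -1
        assert abs(an_norm(*v) - 1.0) < 1e-12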

🔗Carl Lumma <ekin@lumma.org>

11/15/2005 2:27:34 PM

>> I think Gene's Euclidean norm gives crow's distance
>> on an unweighted rectangular lattice, but I could be
>> wrong.
>
>No, on an unweighted "triangular" (An) lattice. Of course in the
>11-limit it probably makes more sense to make 9, not 3, the same
>length as 5, 7, and 11.
>
>> I just meant a 4-D thing with 60 deg. angles, which I
>> assume exists but have no knowledge of.
>
>Well, you do now, since you have this general formula which in the
>4-D (11-limit) case becomes
>
>|| |* a b c d> || = sqrt(((a^2+b^2+c^2+d^2)+(a+b+c+d)^2)/2)

Aha! I should have known that, since ||7/5||=1. What's the formula
for distance on a rectangular lattice?

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

11/15/2005 2:32:51 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> Aha! I should have known that, since ||7/5||=1. What's the formula
> for distance on a rectangular lattice?

|| (a1, ..., an) || = sqrt(a1^2+...+an^2)
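
For contrast with the triangular norm above, a short illustrative
sketch of this rectangular norm on the same monzos: 7/5 comes out at
sqrt(2) here rather than the 1 it gets on the triangular lattice.

    from math import sqrt

    def rect_norm(*exps):
        return sqrt(sum(e * e for e in exps))

    print(rect_norm(0, -1, 1, 0))   # 7/5 -> sqrt(2), about 1.414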

🔗Carl Lumma <ekin@lumma.org>

11/15/2005 2:53:33 PM

>> Aha! I should have known that, since ||7/5||=1. What's the formula
>> for distance on a rectangular lattice?
>
>|| (a1, ..., an) || = sqrt(a1^2+...+an^2)

Rad! I didn't know the Pythagorean theorem generalized like this,
but it's perfectly obvious after spending 15 seconds with a pen
and scrap of paper (here at the cafe). The ^2 on the previously-
derived side will remove the sqrt there, and the new side will
get a new ^2.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

11/15/2005 4:32:15 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> Rad! I didn't know the Pythagorean theorem generalized like this,
> but it's perfectly obvious after spending 15 seconds with a pen
> and scrap of paper (here at the cafe). The ^2 on the previously-
> derived side will remove the sqrt there, and the new side will
> get a new ^2.

The Pythagorean theorem can be generalized in a number of different
ways. For the purposes of this list, another good one to keep in mind
is the Law of Cosines:

c^2 = a^2 + b^2 - 2ab cos(C)

where C is the angle between sides of length a and b of a triangle,
and c is the length of the opposite side. When C=90 degrees, you get
the Pythagorean theorem. When C = 60 degrees, you get

c^2 = a^2 + b^2 - ab

When C = 120 degrees, you get

c^2 = a^2 + b^2 + ab

which is the 5-limit symmetrical lattice distance formula. It tells us
that the 10/9 interval from 3/2 to 5/3, the 16/15 interval from 3/2 to
8/5, and the 25/24 interval from 8/5 to 5/3 all have length sqrt(3),
so 3/2, 5/3 and 8/5 sit on the vertices of an equilateral triangle
with 1 as the centroid. The same is true of 6/5, 5/4 and 4/3, and all
six form a regular hexagon surrounding 1.
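
These sqrt(3) claims check out numerically; an illustrative Python
sketch (pitch classes written as exponents of 3 and 5, octaves
ignored):

    from math import sqrt

    def dist(p, q):
        # c^2 = a^2 + b^2 + ab, the 120-degree law-of-cosines form above
        a, b = q[0] - p[0], q[1] - p[1]
        return sqrt(a * a + b * b + a * b)

    fifth, maj6, min6 = (1, 0), (-1, 1), (0, -1)   # 3/2, 5/3, 8/5
    for p, q in [(fifth, maj6), (fifth, min6), (min6, maj6)]:
        print(dist(p, q))                  # each pair is sqrt(3) apart

    hexagon = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]
    print({dist((0, 0), v) for v in hexagon})   # all six at distance 1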

🔗Paul Erlich <perlich@aya.yale.edu>

11/16/2005 2:15:05 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> lattice type................Triangu. Rect.
> >> >> dimensions..................1 2 3... 1 2 3...
> >> >>
> >> >> taxicab
> >> >> odd limit
> >> >> Tenney height
> >> >> symmetric Euclidean
> >> >>
> >> >> The weighted versions of these would be
> >> >>
> >> >> "isosceles"
> >> >> expressibility
> >> >> Tenney HD
> >> >> ?
> >> >
> >> >Makes no sense to me, so I'll ignore it :)
> >>
> >> The entire chart? The entries would be things like
> >> "rhombic dodecahedron", under each of the six (in this case)
> >> columns (lattice types) for each measure.
> >
> >I'd rather use only one lattice type for each measure; the one that
> >makes the balls of that measure most symmetrical.
>
> One way to find that out is to read the ball shapes off a chart
> like this!

But then you would need even more columns, for the different kinds of
geometry that are most symmetrical for these different measures.

> >And although you can make a lattice where expressibility corresponds
> >to a distance measure, you can't do that for odd limit //
> >Can you do it for symmetric Euclidean? Makes no sense.
>
> Why not? It's only a difference of a log.

Distance measures must satisfy the triangle inequality: the distance
from A to C must be less than or equal to the distance from A to B
plus the distance from B to C.

> What do you mean by "distance measure"? Crow flies? Surely
> Euclidean distance as Gene's defined it measures crow's distance
> on some lattice.

Euclidean distance equals crow's distance on any lattice. But any
reference to actual ratios or musical intervals is missing here.

> >> >> >> Why do you like it better than your isosceles lattice?
> >> >> >
> >> >> >It's better for a prime limit of 5 and the new paradigm where
> >> >> >consonance/dissonance is shades-of-gray rather than black-and-
> >> >> >white, because all the (hexagonally-determined) distances
> >> >> >correspond to complexity (expressibility), no matter how
> >> >> >complex the ratios. This is why I believe the rungs should
> >> >> >be deleted. If ratios of 9, in particular, are to be considered
> >> >> >something other than mere dissonances in a 5-prime-limit tuning,
> >> >> >then you want to represent them as shorter than ratios of 15.
> >> >> >This is exactly what this lattice, with a hexagonal-norm metric,
> >> >> >accomplishes perfectly. I should make a diagram with lots of
> >> >> >ratios on it . . .
> >> >>
> >> >> Hmm...
> >> >
> >> >As in you understand, or you don't?
> >>
> >> Don't.
> >
> >Think of a set of nested, concentric regular hexagons. These
> >represent increasing "distances". This is the notion of "distance" we
> >need here.
>
> I have the picture, it's an example of 'grey-area dissonance and
> ratios of 9' that I'm missing.

Hmm . . . what kind of example are you looking for?

> >> I'm not sure what paradigm you're referring to.
> >> The weighted paradigm?
> >
> >No, just the paradigm where we drop the assumption of a "consonance
> >limit",
>
> How long has this been going on?

For quite a while. Remember my posts on the tuning list about
consistency going out the window in such a paradigm?

> >> I do take it you're saying allowed motions to 9 are of length
> >> log(9)?
> >
> >Yes; if you can't see that on the diagram as it is, I'll make a new
> >one for you.
>
> It's hard to visualize regular hexagons over that lattice's skewed
> ones.

True. As I said, the rungs shouldn't even be there. I'll have to draw
a version with no rungs, with ratios written in, and with the regular
hexagons shown.

> >> If so, the "Hahn approach" does not break down above 3-D.
> >
> >Yes it does, if you ever read anything Hahn wrote.
>
> Hahn's code showed 9-limit examples working perfectly, and my
> code tests against those examples.

But what does the lattice look like?

> >> If not, it's a problem of geometry, and I need a
> >> new lattice (of course I'm already doing this numerically
> >> with no apparent problems).
> >
> >If not what?
>
> If the 4-D triangular lattice failed to map consonances to
> rungs.

🔗Carl Lumma <ekin@lumma.org>

11/16/2005 3:57:35 PM

>> >> >> lattice type................Triangu. Rect.
>> >> >> dimensions..................1 2 3... 1 2 3...
>> >> >>
>> >> >> taxicab
>> >> >> odd limit
>> >> >> Tenney height
>> >> >> symmetric Euclidean
>> >> >>
>> >> >> The weighted versions of these would be
>> >> >>
>> >> >> "isosceles"
>> >> >> expressibility
>> >> >> Tenney HD
>> >> >> ?
>> >> >
>> >> >Makes no sense to me, so I'll ignore it :)
>> >>
>> >> The entire chart? The entries would be things like
>> >> "rhombic dodecahedron", under each of the six (in this case)
>> >> columns (lattice types) for each measure.
>> >
>> >I'd rather use only one lattice type for each measure; the one that
>> >makes the balls of that measure most symmetrical.
>>
>> One way to find that out is to read the ball shapes off a chart
>> like this!
>
>But then you would need even more columns, for the different kinds of
>geometry that are most symmetrical for these different measures.

There are 6 columns in this chart.

>> >And although you can make a lattice where expressibility
>> >corresponds to a distance measure, you can't do that for
>> >odd limit //
>>
>> Why not? It's only a difference of a log.
>
>Distance measures must satisfy the triangle inequality: the distance
>from A to C must be less than or equal to the distance from A to B
>plus the distance from B to C.

That's true.

>> >Can you do it for symmetric Euclidean? Makes no sense.
//
>Euclidean distance equals crow's distance on any lattice.
>But any reference to actual ratios or musical intervals
>is missing here.

Can you give an example?

>> >> I'm not sure what paradigm you're referring to.
>> >> The weighted paradigm?
>> >
>> >No, just the paradigm where we drop the assumption of
>> >a "consonance limit",
>>
>> How long has this been going on?
>
>For quite a while. Remember my posts on the tuning list about
>consistency going out the window in such a paradigm?

No...

>> It's hard to visualize regular hexagons over that lattice's skewed
>> ones.
>
>True. As I said, the rungs shouldn't even be there. I'll have to draw
>a version with no rungs, with ratios written in, and with the regular
>hexagons shown.

Sweet.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/16/2005 7:32:50 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >> lattice type................Triangu. Rect.
> >> >> >> dimensions..................1 2 3... 1 2 3...
> >> >> >>
> >> >> >> taxicab
> >> >> >> odd limit
> >> >> >> Tenney height
> >> >> >> symmetric Euclidean
> >> >> >>
> >> >> >> The weighted versions of these would be
> >> >> >>
> >> >> >> "isosceles"
> >> >> >> expressibility
> >> >> >> Tenney HD
> >> >> >> ?
> >> >> >
> >> >> >Makes no sense to me, so I'll ignore it :)
> >> >>
> >> >> The entire chart? The entries would be things like
> >> >> "rhombic dodecahedron", under each of the six (in this case)
> >> >> columns (lattice types) for each measure.
> >> >
> >> >I'd rather use only one lattice type for each measure; the one
> >that
> >> >makes the balls of that measure most symmetrical.
> >>
> >> One way to find that out is to read the ball shapes off a chart
> >> like this!
> >
> >But then you would need even more columns, for the different kinds of
> >geometry that are most symmetrical for these different measures.
>
> There are 6 columns in this chart.

OK, OK.

> >> >And although you can make a lattice where expressibility
> >> >corresponds to a distance measure, you can't do that for
> >> >odd limit //
> >>
> >> Why not? It's only a difference of a log.
> >
> >Distance measures must satisfy the triangle inequality: the distance
> >from A to C must be less than or equal to the distance from A to B
> >plus the distance from B to C.
>
> That's true.
>
> >> >Can you do it for symmetric Euclidean? Makes no sense.
> //
> >Euclidean distance equals crow's distance on any lattice.
> >But any reference to actual ratios or musical intervals
> >is missing here.
>
> Can you give an example?

An example of what?

> >> >> I'm not sure what paradigm you're referring to.
> >> >> The weighted paradigm?
> >> >
> >> >No, just the paradigm where we drop the assumption of
> >> >a "consonance limit",
> >>
> >> How long has this been going on?
> >
> >For quite a while. Remember my posts on the tuning list about
> >consistency going out the window in such a paradigm?
>
> No...

Really?? It was less than two years ago . . .

> >> It's hard to visualize regular hexagons over that lattice's skewed
> >> ones.
> >
> >True. As I said, the rungs shouldn't even be there. I'll have to draw
> >a version with no rungs, with ratios written in, and with the regular
> >hexagons shown.
>
> Sweet.

Don't let me forget!

🔗Carl Lumma <ekin@lumma.org>

11/16/2005 11:20:26 PM

>> >> >Can you do it for symmetric Euclidean? Makes no sense.

I think "it" referred to:

>> >> >make a lattice where ___________
>> >> >corresponds to a distance measure

>> >Euclidean distance equals crow's distance on any lattice.
>> >But any reference to actual ratios or musical intervals
>> >is missing here.
>>
>> Can you give an example?
>
>An example of what?

How a lack of reference to ratios or musical intervals causes
Euclidean distance to fail to be a distance measure.

>> >> >No, just the paradigm where we drop the assumption of
>> >> >a "consonance limit",
>> >>
>> >> How long has this been going on?
>> >
>> >For quite a while. Remember my posts on the tuning list about
>> >consistency going out the window in such a paradigm?
>>
>> No...
>
>Really?? It was less than two years ago . . .

Are you sure I wasn't on hiatus at the time?

>> I'll have to draw a version with no rungs, with ratios written
>> in, and with the regular hexagons shown.
>>
>> Sweet.
>
>Don't let me forget!

Heh! I can't even remember to tie my shoes.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/17/2005 1:16:30 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >Can you do it for symmetric Euclidean? Makes no sense.
>
> I think "it" referred to:
>
> >> >> >make a lattice where ___________
> >> >> >corresponds to a distance measure
>
> >> >Euclidean distance equals crow's distance on any lattice.
> >> >But any reference to actual ratios or musical intervals
> >> >is missing here.
> >>
> >> Can you give an example?
> >
> >An example of what?
>
> How a lack of reference to ratios or musical intervals causes
> Euclidean distance to fail to be a distance measure.

How can I supply an example for what isn't there? I meant it isn't a
distance measure on JI intervals, but you seemed to want to compare
it directly against distance measures on JI intervals. Meanwhile, it
seems it *is* comparable to the things you wanted to put in the
*columns* (rather than rows) of your table.

> >> >> >No, just the paradigm where we drop the assumption of
> >> >> >a "consonance limit",
> >> >>
> >> >> How long has this been going on?
> >> >
> >> >For quite a while. Remember my posts on the tuning list about
> >> >consistency going out the window in such a paradigm?
> >>
> >> No...
> >
> >Really?? It was less than two years ago . . .
>
> Are you sure I wasn't on hiatus at the time?

Maybe . . .

> >> I'll have to draw a version with no rungs, with ratios written
> >> in, and with the regular hexagons shown.
> >>
> >> Sweet.
> >
> >Don't let me forget!
>
> Heh! I can't even remember to tie my shoes.

OK . . . I'll be dreaming of this lattice every night until I make it
for you.

🔗Carl Lumma <ekin@lumma.org>

11/17/2005 1:55:57 PM

>> >> >> >Can you do it for symmetric Euclidean? Makes no sense.
>>
>> I think "it" referred to:
>>
>> >> >> >make a lattice where ___________
>> >> >> >corresponds to a distance measure
>>
>> >> >Euclidean distance equals crow's distance on any lattice.
>> >> >But any reference to actual ratios or musical intervals
>> >> >is missing here.
>> >>
>> >> Can you give an example?
>> >
>> >An example of what?
>>
>> How a lack of reference to ratios or musical intervals causes
>> Euclidean distance to fail to be a distance measure.
>
>How can I supply an example for what isn't there? I meant it isn't a
>distance measure on JI intervals,

Then can you give an example of where it breaks the triangle
inequality?

>Meanwhile, it seems it *is* comparable to the things you wanted
>to put in the *columns* (rather than rows) of your table.

We must be talking about different "it"s here.

The rows are measures, all of which should obey the triangle
inequality. The columns are dimensions of either triangular
or rectangular latti. The entries are ball shapes.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/17/2005 2:06:41 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >> >Can you do it for symmetric Euclidean? Makes no sense.
> >>
> >> I think "it" referred to:
> >>
> >> >> >> >make a lattice where ___________
> >> >> >> >corresponds to a distance measure
> >>
> >> >> >Euclidean distance equals crow's distance on any lattice.
> >> >> >But any reference to actual ratios or musical intervals
> >> >> >is missing here.
> >> >>
> >> >> Can you give an example?
> >> >
> >> >An example of what?
> >>
> >> How a lack reference to ratios or musical intervals causes
> >> Euclidean distance to fail to be a distance measure.
> >
> >How can I supply an example for what isn't there? I meant it isn't a
> >distance measure on JI intervals,
>
> Then can you give an example of where it breaks the triangle
> inequality?

It doesn't -- Tenney height and 'odd limit' do.

> >Meanwhile, it seems it *is* comparable to the things you wanted
> >to put in the *columns* (rather than rows) of your table.
>
> We must be talking about different "it"s here.
>
> The rows are measures, all of which should obey the triangle
> inequality. The columns are dimensions of either triangular
> or rectangular latti.

Dimensions? I thought they were different varieties. If you don't
have different varieties of triangular lattice, you don't have enough
columns to get at the thing you're interested in, which is to find
which lattice gives most "sphericity" to each measure. So you'd need
more columns.

Anyway, "symmetrical Euclidean" to me refers to a particular variety
of triangular lattice. If there's a ratio-complexity measure that
agrees well with this lattice, that's great, but using the former as
a name for it is asking for confusion, particularly when you're
planning to look at the ball shapes produced by every possible ratio-
complexity measure in every possible lattice!

> The entries are ball shapes.

🔗Carl Lumma <ekin@lumma.org>

11/17/2005 2:25:15 PM

>> Then can you give an example of where it breaks the triangle
>> inequality?
>
>It doesn't -- Tenney height and 'odd limit' do.

Ah, right, they should be taken off the list (rows). Or
better yet, their entries should be "N/A".

>> The rows are measures, all of which should obey the triangle
>> inequality. The columns are dimensions of either triangular
>> or rectangular latti.
>
>Dimensions?

That's why it says "dimensions"

>> >> lattice type................Triangu. Rect.
>> >> dimensions..................1 2 3... 1 2 3...
>> >>
>> >> taxicab
>> >> odd limit
>> >> Tenney height
>> >> symmetric Euclidean
>> >>
>> >> The weighted versions of these would be
>> >>
>> >> "isosceles"
>> >> expressibility
>> >> Tenney HD
>> >> ?

Probably those column labels should have been above
the columns.... :)

>If you don't
>have different varieties of triangular lattice, you don't have enough
>columns to get at the thing you're interested in, which is to find
>which lattice gives most "sphericity" to each measure. So you'd need
>more columns.

I'm cool with more columns (and rows).

>Anyway, "symmetrical Euclidean" to me refers to a particular variety
>of triangular lattice. If there's a ratio-complexity measure that
>agrees well with this lattice, that's great, but using the former as
>a name for it is asking for confusion, particularly when you're
>planning to look at the ball shapes produced by every possible ratio-
>complexity measure in every possible lattice!

They don't have to be "ratio" complexity measures (as in a simple
rule for ratios?). They just have to be complexity measures.
By "Symmetric Euclidean" I mean the actual thing, not a simplified
version. It's true that "Symmetric Euclidean" and "taxicab",
unlike "Tenney HD" and "expressibility" require different formulas
for their tri. and rect. varieties.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/21/2005 2:24:02 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> >Anyway, "symmetrical Euclidean" to me refers to a particular
variety
> >of triangular lattice. If there's a ratio-complexity measure that
> >agrees well with this lattice, that's great, but using the former
as
> >a name for it is asking for confusion, particularly when you're
> >planning to look at the ball shapes produced by every possible
ratio-
> >complexity measure in every possible lattice!
>
> They don't have to be "ratio" complexity measures (as in a simple
> rule for ratios?). They just have to be complexity measures.

But what are they measuring the complexity of if not ratios?

> By "Symmetric Euclidean" I mean the actual thing, not a simplified
> version.

Why would anyone think otherwise?

> It's true that "Symmetric Euclidean" and "taxicab",
> unlike "Tenney HD" and "expressibility" require different formulas
> for their tri. and rect. varieties.

🔗Carl Lumma <ekin@lumma.org>

11/21/2005 2:48:22 PM

>> >Anyway, "symmetrical Euclidean" to me refers to a particular
>> >variety of triangular lattice. If there's a ratio-complexity
>> >measure that agrees well with this lattice, that's great, but
>> >using the former as a name for it is asking for confusion,
>> >particularly when you're planning to look at the ball shapes
>> >produced by every possible ratio-complexity measure in every
>> >possible lattice!
>>
>> They don't have to be "ratio" complexity measures (as in a simple
>> rule for ratios?). They just have to be complexity measures.
>
>But what are they measuring the complexity of if not ratios?

You dropped the term "ratio-complexity measure". What does it
mean? Clearly the formula Gene gave for symmetrical Euclidean
distance takes ratios as input.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/21/2005 3:01:39 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >Anyway, "symmetrical Euclidean" to me refers to a particular
> >> >variety of triangular lattice. If there's a ratio-complexity
> >> >measure that agrees well with this lattice, that's great, but
> >> >using the former as a name for it is asking for confusion,
> >> >particularly when you're planning to look at the ball shapes
> >> >produced by every possible ratio-complexity measure in every
> >> >possible lattice!
> >>
> >> They don't have to be "ratio" complexity measures (as in a simple
> >> rule for ratios?). They just have to be complexity measures.
> >
> >But what are they measuring the complexity of if not ratios?
>
> You dropped the term "ratio-complexity measure".

Dropped it?

> What does it
> mean?

It means something that measures the complexity of ratios.

> Clearly the formula Gene gave for symmetrical Euclidean
> distance takes ratios as input.

Then it needs a name to distinguish it from things that are supposed
to go in *columns* of your table. That's what I was trying to say in
the bit at the top.

🔗Carl Lumma <ekin@lumma.org>

11/21/2005 3:39:49 PM

>> >> >Anyway, "symmetrical Euclidean" to me refers to a particular
>> >> >variety of triangular lattice. If there's a ratio-complexity
>> >> >measure that agrees well with this lattice, that's great, but
>> >> >using the former as a name for it is asking for confusion,
>> >> >particularly when you're planning to look at the ball shapes
>> >> >produced by every possible ratio-complexity measure in every
>> >> >possible lattice!
>> >>
>> >> They don't have to be "ratio" complexity measures (as in a simple
>> >> rule for ratios?). They just have to be complexity measures.
>> >
>> >But what are they measuring the complexity of if not ratios?
>>
>> You dropped the term "ratio-complexity measure".
>
>Dropped it?
>
>> What does it
>> mean?
>
>It means something that measures the complexity of ratios.
>
>> Clearly the formula Gene gave for symmetrical Euclidean
>> distance takes ratios as input.
>
>Then it needs a name to distinguish it from things that are supposed
>to go in *columns* of your table. That's what I was trying to say in
>the bit at the top.

The columns of the table are lattice types... 2-D triangular, 3-D rect.,
etc. Symmetric Euclidean is on a row in the table.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/21/2005 4:16:09 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >Anyway, "symmetrical Euclidean" to me refers to a particular
> >> >> >variety of triangular lattice. If there's a ratio-complexity
> >> >> >measure that agrees well with this lattice, that's great, but
> >> >> >using the former as a name for it is asking for confusion,
> >> >> >particularly when you're planning to look at the ball shapes
> >> >> >produced by every possible ratio-complexity measure in every
> >> >> >possible lattice!
> >> >>
> >> >> They don't have to be "ratio" complexity measures (as in a simple
> >> >> rule for ratios?). They just have to be complexity measures.
> >> >
> >> >But what are they measuring the complexity of if not ratios?
> >>
> >> You dropped the term "ratio-complexity measure".
> >
> >Dropped it?
> >
> >> What does it
> >> mean?
> >
> >It means something that measures the complexity of ratios.
> >
> >> Clearly the formula Gene gave for symmetrical Euclidean
> >> distance takes ratios as input.
> >
> >Then it needs a name to distinguish it from things that are supposed
> >to go in *columns* of your table. That's what I was trying to say in
> >the bit at the top.
>
> The columns of the table are lattice types... 2-D triangular, 3-D rect.,
> etc. Symmetric Euclidean is on a row in the table.

Yes, I know this is what you said, and the above was a response to
that. As I stated before, you'll need different kinds of triangular
lattice types if you want the balls of each measure to be as
spherical as possible in one of the lattice types. One of these
triangular lattice types would be the symmetric type. So, to repeat
myself, I'm trying to encourage you to avoid confusion by not
referring to the *rows* by names that sound more appropriate for
*columns*. At least, you confused *me* by doing that. So why are you
merely repeating yourself here? Did you misunderstand what I was
saying?

🔗Carl Lumma <ekin@lumma.org>

11/21/2005 4:27:15 PM

>> The columns of the table are lattice types... 2-D triangular, 3-D
>> rect., etc. Symmetric Euclidean is on a row in the table.
>
>Yes, I know this is what you said, and the above was a response to
>that. As I stated before, you'll need different kinds of triangular
>lattice types if you want the balls of each measure to be as
>spherical as possible in one of the lattice types.

These should be added to the columns, I thought I said. The
entries only report the ball shapes. The 'most spherical' ones
are to be determined by the reader.

>I'm trying to encourage you to avoid confusion by not
>referring to the *rows* by names that sound more appropriate for
>*columns*. At least, you confused *me* by doing that. So why are you
>merely repeating yourself here? Did you misunderstand what I was
>saying?

The rows are distance measures. "Symmetric Euclidean" is a
distance measure. What's confusing about that?

You correctly pointed out that odd limit and Tenney height are
not distance measures, and I said they should be deleted from
the rows.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/21/2005 4:33:53 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> The columns of the table are lattice types... 2-D triangular, 3-D
> >> rect., etc. Symmetric Euclidean is on a row in the table.
> >
> >Yes, I know this is what you said, and the above was a response to
> >that. As I stated before, you'll need different kinds of triangular
> >lattice types if you want the balls of each measure to be as
> >spherical as possible in one of the lattice types.
>
> These should be added to the columns, I thought I said.

Yes, you did.

> The
> entries only report the ball shapes. The 'most spherical' ones
> are to be determined by the reader.

Well right now it seems you and I would agree on 'most spherical' --
I just meant you need enough columns so that each of the rows gets
one column where the corresponding measure gives a ball that is as
spherical as possible.

> >I'm trying to encourage you to avoid confusion by not
> >referring to the *rows* by names that sound more appropriate for
> >*columns*. At least, you confused *me* by doing that. So why are you
> >merely repeating yourself here? Did you misunderstand what I was
> >saying?
>
> The rows are distance measures. "Symmetric Euclidean" is a
> distance measure. What's confusing about that?

I thought the rows were supposed to be measures of ratio-complexity.
Each measure of ratio complexity corresponds to a different distance
measure on each lattice, since each lattice is a different way of
arranging the ratios as points in space.

Anyway, what's confusing (to repeat myself once again) is that this
one sounds more like a lattice type than a ratio-complexity measure.

🔗Carl Lumma <ekin@lumma.org>

11/21/2005 4:44:51 PM

>I just meant you need enough columns so that each of the rows gets
>one column where the corresponding measure gives a ball that is as
>spherical as possible.

Agreed.

>> >I'm trying to encourage you to avoid confusion by not
>> >referring to the *rows* by names that sound more appropriate for
>> >*columns*. At least, you confused *me* by doing that. So why are
>> >you merely repeating yourself here? Did you misunderstand what I
>> >was saying?
>>
>> The rows are distance measures. "Symmetric Euclidean" is a
>> distance measure. What's confusing about that?
>
>I thought the rows were supposed to be measures of ratio-complexity.

Distance measures. That's why odd limit and Tenney height need
to be deleted.

>Each measure of ratio complexity corresponds to a different distance
>measure on each lattice, since each lattice is a different way of
>arranging the ratios as points in space.

Ok, I see how you're thinking of this. I was thinking, start with
the distance measure you like and then find the best lattice for
them (by plotting the points in each ball and comparing them to a
sphere). You're thinking, start with a ratio complexity measure,
then find the lattice on which there is some corresponding distance
measure... yes? But I'm not clear on what difference there is
between these two approaches, since a ratio complexity measure
like odd limit could never correspond to a distance measure, could
it?

>Anyway, what's confusing (to repeat myself once again) is that this
>one sounds more like a lattice type than a ratio-complexity measure.

Ah, I see! I was just trying to follow Gene's usage.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

11/21/2005 4:56:45 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> You dropped the term "ratio-complexity measure". What does it
> mean? Clearly the formula Gene gave for symmetrical Euclidean
> distance takes ratios as input.

Equivalence classes, actually.

🔗Paul Erlich <perlich@aya.yale.edu>

11/21/2005 4:59:29 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >I just meant you need enough columns so that each of the rows gets
> >one column where the corresponding measure gives a ball that is as
> >spherical as possible.
>
> Agreed.
>
> >> >I'm trying to encourage you to avoid confusion by not
> >> >referring to the *rows* by names that sound more appropriate for
> >> >*columns*. At least, you confused *me* by doing that. So why are
> >> >you merely repeating yourself here? Did you misunderstand what I
> >> >was saying?
> >>
> >> The rows are distance measures. "Symmetric Euclidean" is a
> >> distance measure. What's confusing about that?
> >
> >I thought the rows were supposed to be measures of ratio-complexity.
>
> Distance measures. That's why odd limit and Tenney height need
> to be deleted.
>
> >Each measure of ratio complexity corresponds to a different distance
> >measure on each lattice, since each lattice is a different way of
> >arranging the ratios as points in space.
>
> Ok, I see how you're thinking of this. I was thinking, start with
> the distance measure you like and then find the best lattice for
> them (by plotting the points in each ball

You mean plotting the ball in each lattice?

> and comparing them to a
> sphere).

If you agree with the substitution above, that's what I'm thinking.

> You're thinking, start with a ratio complexity measure,
> then find the lattice on which there is some corresponding distance
> measure... yes?

No.

> But I'm not clear on what difference there is
> between these two approaches, since a ratio complexity measure
> like odd limit could never correspond to a distance measure, could
> it?

It can if you allow a series of concentric regular hexagons, instead
of circles, to define a distance measure -- then the required lattice
is the lattice that we've been talking about so much lately (the
second-to-last one on that http://www.kees.cc/tuning/lat_perbl.html
page, preferably with the rungs deleted). Technically, any convex
figure symmetric about the origin can be used to define a distance
measure or _metric_.

> >Anyway, what's confusing (to repeat myself once again) is that this
> >one sounds more like a lattice type than a ratio-complexity measure.
>
> Ah, I see!

Whew!

🔗Gene Ward Smith <gwsmith@svpal.org>

11/21/2005 5:02:23 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> But I'm not clear on what difference there is
> between these two approaches, since a ratio complexity measure
> like odd limit could never correspond to a distance measure, could
> it?

Eh? What about the Kees norm?

> >Anyway, what's confusing (to repeat myself once again) is that this
> >one sounds more like a lattice type than a ratio-complexity measure.
>
> Ah, I see! I was just trying to follow Gene's usage.

Even though I have no idea what all this is about.

🔗Carl Lumma <ekin@lumma.org>

11/21/2005 5:10:59 PM

>> Ok, I see how you're thinking of this. I was thinking, start with
>> the distance measure you like and then find the best lattice for
>> them (by plotting the points in each ball
>
>You mean plotting the ball in each lattice?

Yes!

>> and comparing them to a sphere).
>
>If you agree with the substitution above, that's what I'm thinking.

Oh.

>> You're thinking, start with a ratio complexity measure,
>> then find the lattice on which there is some corresponding distance
>> measure... yes?
>
>No.

Oh.

>> But I'm not clear on what difference there is
>> between these two approaches, since a ratio complexity measure
>> like odd limit could never correspond to a distance measure, could
>> it?
>
>It can if you allow a series of concentric regular hexagons, instead
>of circles, to define a distance measure -- then the required lattice
>is the lattice that we've been talking about so much lately (the
>second-to-last one on that http://www.kees.cc/tuning/lat_perbl.html
>page, preferably with the rungs deleted). Technically, any convex
>figure can be used to define a distance measure or _metric_.

How could this be, since odd limit doesn't obey the triangle
inequality?

>> >Anyway, what's confusing (to repeat myself once again) is that
>> >this one sounds more like a lattice type than a ratio-complexity
>> >measure.
>>
>> Ah, I see!
>
>Whew!

What should we call it?

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/21/2005 5:49:11 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> Ok, I see how you're thinking of this. I was thinking, start with
> >> the distance measure you like and then find the best lattice for
> >> them (by plotting the points in each ball
> >
> >You mean plotting the ball in each lattice?
>
> Yes!
>
> >> and comparing them to a sphere).
> >
> >If you agree with the substitution above, that's what I'm thinking.
>
> Oh.
>
> >> You're thinking, start with a ratio complexity measure,
> >> then find the lattice on which there is some corresponding distance
> >> measure... yes?
> >
> >No.
>
> Oh.
>
> >> But I'm not clear on what difference there is
> >> between these two approaches, since a ratio complexity measure
> >> like odd limit could never correspond to a distance measure, could
> >> it?
> >
> >It can if you allow a series of concentric regular hexagons, instead
> >of circles, to define a distance measure -- then the required lattice
> >is the lattice that we've been talking about so much lately (the
> >second-to-last one on that http://www.kees.cc/tuning/lat_perbl.html
> >page, preferably with the rungs deleted). Technically, any convex
> >figure can be used to define a distance measure or _metric_.
>
> How could this be, since odd limit doesn't obey the triangle
> inequality?

Sorry, I was thinking log of odd limit, or expressibility.

> >> >Anyway, what's confusing (to repeat myself once again) is that
> >> >this one sounds more like a lattice type than a ratio-complexity
> >> >measure.
> >>
> >> Ah, I see!
> >
> >Whew!
>
> What should we call it?

I don't know but for the purposes of the table, we could just label
it Gene1 or whatever. As long as it doesn't sound like it could be
the label for a column, we're good.

🔗Gene Ward Smith <gwsmith@svpal.org>

11/21/2005 6:06:31 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> How could this be, since odd limit doesn't obey the triangle
> inequality?

As applied to pitch classes, it does.
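
A brute-force check of this point, assuming the pitch-class version of
odd limit is expressibility (log2 of the odd limit, which is constant
on octave-equivalence classes); multiplying ratios adds monzos, so the
subadditivity tested below is exactly the triangle inequality:

from fractions import Fraction
from math import log2

def odd_part(n):
    while n % 2 == 0:
        n //= 2
    return n

def expr(r):
    # expressibility: log2 of odd limit, a function of the pitch class
    return log2(max(odd_part(r.numerator), odd_part(r.denominator)))

ratios = [Fraction(n, d) for n in range(1, 20) for d in range(1, 20)]
for a in ratios:
    for b in ratios:
        # triangle inequality on classes: d(1, a*b) <= d(1, a) + d(1, b)
        assert expr(a * b) <= expr(a) + expr(b) + 1e-9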

🔗Gene Ward Smith <gwsmith@svpal.org>

11/21/2005 6:07:33 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> I don't know but for the purposes of the table, we could just label
> it Gene1 or whatever. As long as it doesn't sound like it could be
> the label for a column, we're good.

Could one of you explain what these rows and columns are?

🔗Carl Lumma <ekin@lumma.org>

11/21/2005 6:15:33 PM

>> I don't know but for the purposes of the table, we could just label
>> it Gene1 or whatever. As long as it doesn't sound like it could be
>> the label for a column, we're good.
>
>Could one of you explain what these rows and columns are?

The rows are different distance measures and the columns
are types of things you don't like calling lattices, and
the entries are ball shapes. Ex. (hit reply to view on
the web):

3-D triangular
Gene1 (aka symmetric Euclidean distance) cuboctahedron

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/21/2005 6:25:23 PM

But I wouldn't be too bothered by this incredibly convoluted
discussion Paul and I are having. I think the key points
are, we're curious about the lattice on Kees' website with
the angles you recently supplied...

"Then put 3/2 and 5/4 at 60 degrees from each other, and
scale then so that the length of 3/2 to 5/4 is in the
proportion 1:log3(5)."

And I'm curious about anything you have on the size of the
smallest interval we should expect to find as we search n
notes of whatever-limit JI.

-Carl

At 06:15 PM 11/21/2005, you wrote:
>>> I don't know but for the purposes of the table, we could
>>> just label it Gene1 or whatever. As long as it doesn't
>>> sound like it could be the label for a column, we're good.
>>
>>Could one of you explain what these rows and columns are?
>
>The rows are different distance measures and the columns
>are types of things you don't like calling lattices, and
>the entries are ball shapes. Ex. (hit reply to view on
>the web):
>
> 3-D triangular
>Gene1 (aka symmetric Euclidean distance) cuboctahedron
>
>-Carl
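
A small sketch of the quoted construction, assuming it means: plot the
pitch class 3^a * 5^b at a*v3 + b*v5, with v3 and v5 separated by 60
degrees and their lengths in the proportion 1:log3(5):

from math import cos, sin, log, pi

W = log(5) / log(3)                      # log3(5), about 1.465
V3 = (1.0, 0.0)                          # step for 3/2
V5 = (W * cos(pi / 3), W * sin(pi / 3))  # step for 5/4, 60 degrees away

def coords(a, b):
    # planar position of the pitch class 3^a * 5^b
    return (a * V3[0] + b * V5[0], a * V3[1] + b * V5[1])

print(coords(-1, 1))  # 5/3
print(coords(2, -1))  # 9/5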

🔗Gene Ward Smith <gwsmith@svpal.org>

11/21/2005 8:54:57 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> And I'm curious about anything you have on the size of the
> smallest interval we should expect to find as we search n
> notes of whatever-limit JI.

A partial answer to that is the n-diamond comma, which is the smallest
comma in the n-limit tonality diamond. It's either n^2/(n^2-1) or
(n^2+1)/n^2.
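
A quick numerical check of this, assuming "smallest comma in the
diamond" means the smallest ratio between two distinct members of the
octave-reduced n-limit tonality diamond:

from fractions import Fraction

def octave_reduce(r):
    while r >= 2:
        r /= 2
    while r < 1:
        r *= 2
    return r

def diamond(n):
    # octave-reduced ratios o/u for odd o, u <= n
    odds = range(1, n + 1, 2)
    return {octave_reduce(Fraction(o, u)) for o in odds for u in odds}

def diamond_comma(n):
    members = sorted(diamond(n))
    # the smallest quotient of distinct members occurs at a consecutive pair
    return min(hi / lo for lo, hi in zip(members, members[1:]))

for n in (3, 5, 7, 9, 11):
    print(n, diamond_comma(n))  # expect 9/8, 25/24, 50/49, 81/80, 121/120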

🔗Carl Lumma <ekin@lumma.org>

11/22/2005 9:35:22 AM

>> And I'm curious about anything you have on the size of the
>> smallest interval we should expect to find as we search n
>> notes of whatever-limit JI.
>
>A partial answer to that is the n-diamond comma, which is the smallest
>comma in the n-limit tonality diamond. It's either n^2/(n^2-1) or
>(n^2+1)/n^2.

That's a pretty restricted case that I don't think would be
generally applicable to lattice searches.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

11/22/2005 12:19:43 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> That's a pretty restricted case that I don't think would be
> generally applicable to lattice searches.

The answer is going to depend on what norm you use, however. For Hahn
in the 7-limit case, the smallest interval greater than one in the
various shells from 1 to 10 goes 8/7, 50/49, 126/125, 2401/2400,
2401/2400, 2401/2400, 4375/4374, 4375/4374, 250047/250000,
250047/250000. It's possible log log regression would come up with a
rate of growth.
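
A brute-force sketch reproducing this kind of list, assuming the Hahn
norm on 7-limit pitch classes works out to the larger of the positive
and negative exponent sums of the (3,5,7)-monzo (the word metric over
the consonances 3, 5, 7, 5/3, 7/3, 7/5 and their inverses); this
appears to reproduce the shell values above:

from math import log2
from itertools import product

def hahn_norm(a, b, c):
    # larger of the positive and negative exponent sums: the number of
    # consonant 7-limit steps needed to reach the class 3^a * 5^b * 7^c
    pos = sum(x for x in (a, b, c) if x > 0)
    neg = -sum(x for x in (a, b, c) if x < 0)
    return max(pos, neg)

def smallest_in_ball(r):
    # smallest octave-reduced interval above a unison with norm <= r
    best = None
    for a, b, c in product(range(-r, r + 1), repeat=3):
        if (a, b, c) == (0, 0, 0) or hahn_norm(a, b, c) > r:
            continue
        cents = 1200 * ((a * log2(3) + b * log2(5) + c * log2(7)) % 1)
        if best is None or cents < best:
            best = cents
    return best

for r in range(1, 11):
    print(r, round(smallest_in_ball(r), 3))
# 231.174 (8/7), 34.976 (50/49), 13.795 (126/125), 0.721 (2401/2400), ...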

🔗Gene Ward Smith <gwsmith@svpal.org>

11/22/2005 1:22:20 PM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:

> The answer is going to depend on what norm you use, however. For Hahn
> in the 7-limit case, the smallest interval greater than one in the
> various shells from 1 to 10 goes 8/7, 50/49, 126/125, 2401/2400,
> 2401/2400, 2401/2400, 2401/2400, 4375/4374, 4375/4374, 250047/250000,
> 250047/250000. It's possible log log regression would come up with a
> rate of growth.

I left off a 2401/2400, there are three of them.

🔗Carl Lumma <ekin@lumma.org>

11/22/2005 1:44:36 PM

>> That's a pretty restricted case that I don't think would be
>> generally applicable to lattice searches.
>
>The answer is going to depend on what norm you use, however. For
>Hahn in the 7-limit case, the smallest interval greater than one
>in the various shells from 1 to 10 goes 8/7, 50/49, 126/125,
>2401/2400, 2401/2400, 2401/2400, 4375/4374, 4375/4374,
>250047/250000, 250047/250000. It's possible log log regression
>would come up with a rate of growth.

The only reason I asked is that Paul said you had already posted
formulas for this.

I wouldn't expect it to depend much on the norm as the number of
notes gets large, but I'm prepared to be surprised. But maybe a
better first question is: for your favorite norm, how does this
change in the 3-, 5-, 7-limit?

I don't know how to do log log regression, but maybe it's time
I learned. Neither Wikipedia nor MathWorld has an entry for it.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/23/2005 12:54:47 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> I don't know but for the purposes of the table, we could just label
> >> it Gene1 or whatever. As long as it doesn't sound like it could be
> >> the label for a column, we're good.
> >
> >Could one of you explain what these rows and columns are?
>
> The rows are different distance measures

!!!

They are??

I thought we had this clear, that the rows were ratio-complexity
measures, not distance measures. What happened?

> and the columns
> are types of things you don't like calling lattices,

Are you sure about that?

> and
> the entries are ball shapes. Ex. (hit reply to view on
> the web):
>
> 3-D triangular
> Gene1 (aka symmetric Euclidean distance) cuboctahedron

I know you don't mean for this to be nitpicked, but if it were, I'd
change the above to "equilateral triangular", since other forms of
triangular lattice are going to be needed as columns in this table if
some of the other ratio-complexity measures are going to show up as
having fairly symmetrical balls in some lattice -- for example odd
limit or expressibility has "regular" cuboctahedron balls when the
lattice is based on scalene triangles (with only one 60-degree angle)
of the kind we've been discussing lately.

🔗Paul Erlich <perlich@aya.yale.edu>

11/23/2005 1:00:49 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> That's a pretty restricted case that I don't think would be
> >> generally applicable to lattice searches.
> >
> >The answer is going to depend on what norm you use, however. For
> >Hahn in the 7-limit case, the smallest interval greater than one
> >in the various shells from 1 to 10 goes 8/7, 50/49, 126/125,
> >2401/2400, 2401/2400, 2401/2400, 4375/4374, 4375/4374,
> >250047/250000, 250047/250000. It's possible log log regression
> >would come up with a rate of growth.
>
> The only reason I asked is that Paul said you had already posted
> formulas for this.

I thought the idea was to compare different prime limits. Comma size
as a function of comma complexity (the latter is directly related to
the "number of notes" you need to search) is related to temperament
error as a function of comma complexity. For the latter, Gene found
some formulas that depend on the number of independent factors, so I
figured we could therefore derive some formulas for the former from
that.

> I wouldn't expect it to depend much on the norm as the number of
> notes gets large, but I'm prepared to be surprised. But maybe a
> better first question is: for your favorite norm, how does this
> change in the 3-, 5-, 7-limit?
>
> I don't know how to do log log regression, but maybe it's time
> I learned. Neither Wikipedia nor MathWorld has an entry for it.

You take the log of one quantity and plot it against the log of the
other quantity. If you find a straight line, then you know the
relationship between the original quantities was a power law,
where the slope of the line gives you the exponent:

y=x^a
implies that
log(y) = a*log(x)
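
For example, with numpy, using the shell data Gene posted above (ball
radius against the size of the smallest interval, in cents):

import numpy as np

# ball radius vs. smallest interval in cents, taking the radii at which
# new values first appear in Gene's list
r = np.array([1, 2, 3, 4, 7, 9])
cents = np.array([231.17, 34.98, 13.79, 0.72, 0.40, 0.33])

# fit log(cents) = a*log(r) + b; a straight line here means a power law,
# cents ~ exp(b) * r**a
a, b = np.polyfit(np.log(r), np.log(cents), 1)
print("cents ~ %.3g * r**%.3g" % (np.exp(b), a))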

🔗Carl Lumma <ekin@lumma.org>

11/23/2005 3:47:07 PM

>> >> I don't know but for the purposes of the table, we could just
>> >> label it Gene1 or whatever. As long as it doesn't sound like
>> >> it could be the label for a column, we're good.
>> >
>> >Could one of you explain what these rows and columns are?
>>
>> The rows are different distance measures
>
>!!!
>
>They are??
>
>I thought we had this clear, that the rows were ratio-complexity
>measures, not distance measures. What happened?

No, you never told me what you mean (and I still don't know)
by "ratio-complexity measure". You were the one who mentioned
that odd limit and Tenney height should be removed since
they didn't qualify as distance measures.

>> and the columns are types of things you don't like calling
>> lattices,
>
>Are you sure about that?

I guess you can put anything there you want. But I expect to see
all of the major "lattices" discussed on these lists in the last
10 years.

>> and the entries are ball shapes. Ex. (hit reply to view on
>> the web):
>>
>> 3-D triangular
>> Gene1 (aka symmetric Euclidean distance) cuboctahedron
>
>I know you don't mean for this to be nitpicked, but if it were, I'd
>change the above to "equilateral triangular", since other forms of
>triangular lattice are going to be needed as columns in this table

Yes, very good.

-Carl

🔗Carl Lumma <ekin@lumma.org>

11/23/2005 3:47:59 PM

>> I don't know how to do log log regression, but maybe it's time
>> I learned. Neither Wikipedia nor MathWorld has an entry for it.
>
>You take the log of one quantity and plot it against the log of the
>other quantity. If you find a straight line, then you know the
>relationship between the original quantities was an exponential one,
>where the slope of the line gives you the exponent:
>
>y=x^a
>implies that
>log(y) = a*log(x)

Thanks!

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/25/2005 1:50:23 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> I don't know but for the purposes of the table, we could just
> >> >> label it Gene1 or whatever. As long as it doesn't sound like
> >> >> it could be the label for a column, we're good.
> >> >
> >> >Could one of you explain what these rows and columns are?
> >>
> >> The rows are different distance measures
> >
> >!!!
> >
> >They are??
> >
> >I thought we had this clear, that the rows were ratio-complexity
> >measures, not distance measures. What happened?
>
> No, you never told me what you mean (and I still don't know)
> by "ratio-complexity measure".

I thought we had already come to agreement on this -- did you forget
so soon? The input is a JI interval (two pitch-ratios or one interval-
ratio; octave-equivalence may or may not be assumed), and the output
is some reckoning of the "complexity" of that interval.

> You were the one who mentioned
> that odd limit and Tenney height should be removed since
> they didn't qualify as distance measures.

I said that no distance measure could agree with them. And you seemed
to understand me perfectly when you responded that the relevant
entries in these rows would therefore be "N/A". That seemed fine --
what happened to the mutual understanding we had come to?

> >> and the columns are types of things you don't like calling
> >> lattices,
> >
> >Are you sure about that?
>
> I guess you can put anything there you want.

I mean are you sure Gene doesn't like calling them lattices?

> But I expect to
> all of the major "lattices" discussed on these lists in the last
> 10 years.

Oh, ok, so that might include some things which are really
only "graphs" and not "lattices"?

🔗Carl Lumma <ekin@lumma.org>

11/25/2005 2:50:55 PM

>> >I thought we had this clear, that the rows were ratio-complexity
>> >measures, not distance measures. What happened?
>>
>> No, you never told me what you mean (and I still don't know)
>> by "ratio-complexity measure".
>
>I thought we had already come to agreement on this -- did you forget
>so soon? The input is a JI interval (two pitch-ratios or one interval-
>ratio; octave-equivalence may or may not be assumed), and the output
>is some reckoning of the "complexity" of that interval.
>
>> You were the one who mentioned
>> that odd limit and Tenney height should be removed since
>> they didn't qualify as distance measures.
>
>I said that no distance measure could agree with them. And you seemed
>to understand me perfectly when you responded that the relevant
>entries in these rows would therefore be "N/A". That seemed fine --
>what happened to the mutual understanding we had come to?

Why would anyone include a row if all its entries were N/A?
It seems that if the ratio-complexity measure is not a distance
measure, it will have more than one ball shape per lattice. So
I don't know why you want to keep these on the chart.

>> >> and the columns are types of things you don't like calling
>> >> lattices,
>> >
>> >Are you sure about that?
>>
>> I guess you can put anything there you want.
>
>I mean are you sure Gene doesn't like calling them lattices?

It doesn't look like either of us fully understand Gene's position
on this.

>> But I expect to see all of the major "lattices" discussed on these
>> lists in the last 10 years.
>
>Oh, ok, so that might include some things which are really
>only "graphs" and not "lattices"?

The latter term is apparently under contention, so I don't know
how to answer this.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

11/25/2005 3:33:52 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:

> > >> and the columns are types of things you don't like calling
> > >> lattices,
> > >
> > >Are you sure about that?
> >
> > I guess you can put anything there you want.
>
> I mean are you sure Gene doesn't like calling them lattices?

I'm fine calling them lattices so long as they live inside a space
with a distance measure on it.

> Oh, ok, so that might include some things which are really
> only "graphs" and not "lattices"?

Or things which are abelian groups and not lattices, or things which are
tessellations but not lattices, etc.

🔗Gene Ward Smith <gwsmith@svpal.org>

11/25/2005 3:38:34 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> The latter term is apparently under contention, so I don't know
> how to answer this.

I suggest that the definition this group needs is "discrete subgroup
of a finite-dimensional normed real vector space".
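
For example, A3 fits this definition as the integer vectors in R^4 with
coordinate sum zero, carrying the Euclidean norm; a minimal numpy
sketch:

import numpy as np
from itertools import product

# A3 as a discrete subgroup of a normed space: integer vectors in Z^4
# with coordinate sum zero, with the Euclidean norm of R^4
pts = [np.array(v) for v in product(range(-1, 2), repeat=4)
       if sum(v) == 0 and any(v)]
norms = sorted(np.linalg.norm(p) for p in pts)
# the 12 shortest vectors, the permutations of (1, -1, 0, 0), are the
# "kissing" vectors of the face-centered cubic packing
print(sum(1 for x in norms if abs(x - norms[0]) < 1e-9))  # 12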

🔗Carl Lumma <ekin@lumma.org>

11/25/2005 3:52:59 PM

>> The latter term is apparently under contention, so I don't know
>> how to answer this.
>
>I suggest that the definition this group needs is "discrete subgroup
>of a finite-dimensional normed real vector space".

The only problem with this that I can see is that intuitively,
the A3 "lattice" is the A3 "lattice" whether we're using Hahn
diameter or symmetric Euclidean distance. Whereas if I understand
the above, they'd actually be different lattices.

By the way, things like "A3" are impossible to search for.
Does anybody know of a good resource listing the various
instances of this nomenclature? Or is there a name for
this nomenclature ("Whostartedit symbols" or something)?

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

11/25/2005 11:00:22 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> >I suggest that the definition this group needs is "discrete subgroup
> >of a finite-dimensional normed real vector space".
>
> The only problem with this that I can see is that intuitively,
> the A3 "lattice" is the A3 "lattice" whether we're using Hahn
> diameter or symmetric Euclidean distance. Whereas if I understand
> the above, they'd actually be different lattices.

They are, but then I think they should be. But there can be two
different lattices by this definition which clearly are both "the" A3
lattice even without non-Euclidean distances.

> By the way, things like "A3" are impossible to search for.
> Does anybody know of a good resource listing the various
> instances of this nomenclature? Or is there a name for
> this nomenclature ("Whostartedit symbols" or something)?

I thought you owned a copy of Conway and Sloane?

🔗Carl Lumma <ekin@lumma.org>

11/26/2005 11:38:18 PM

>> >I suggest that the definition this group needs is "discrete
>> >subgroup of a finite-dimensional normed real vector space".
>>
>> The only problem with this that I can see is that intuitively,
>> the A3 "lattice" is the A3 "lattice" whether we're using Hahn
>> diameter or symmetric Euclidean distance. Whereas if I understand
>> the above, they'd actually be different lattices.
>
>They are, but then I think they should be. But there can be two
>different lattices by this definition which clearly are both "the"
>A3 lattice even without non-Euclidean distances.

Only two? What are they?

>> By the way, things like "A3" are impossible to search for.
>> Does anybody know of a good resource listing the various
>> instances of this nomenclature? Or is there a name for
>> this nomenclature ("Whostartedit symbols" or something)?
>
>I thought you owned a copy of Conway and Sloane?

This one

http://www.amazon.com/gp/product/0387985859/

?

Nah, I have Conway & Guy _The Book of Numbers_. And at $77, it's
a library trip for me...

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

11/27/2005 12:47:35 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> >They are, but then I think they should be. But there can be two
> >different lattices by this definition which clearly are both "the"
> >A3 lattice even without non-Euclidean distances.
>
> Only two? What are they?

I didn't mean only two. Take an A3 lattice, and rotate, reflect or
dilate it and you get an A3 lattice.

> >I thought you owned a copy of Conway and Sloane?
>
> This one
>
> http://www.amazon.com/gp/product/0387985859/
>
> ?

That's the one. Your basic resource for Euclidean lattices.

🔗Carl Lumma <ekin@lumma.org>

11/27/2005 8:22:52 PM

>> >They are, but then I think they should be. But there can be two
>> >different lattices by this definition which clearly are both "the"
>> >A3 lattice even without non-Euclidean distances.
>>
>> Only two? What are they?
>
>I didn't mean only two. Take an A3 lattice, and rotate, reflect or
>dilate it and you get an A3 lattice.

What's the name of the class of things to which A3 belongs?

>> >I thought you owned a copy of Conway and Sloane?
>>
>> This one
>>
>> http://www.amazon.com/gp/product/0387985859/
>>
>> ?
>
>That's the one. Your basic resource for Euclidean lattices.

I'll pick it up.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

11/28/2005 11:03:42 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> >I didn't mean only two. Take an A3 lattice, and rotate, reflect or
> >dilate it and you get an A3 lattice.
>
> What's the name of the class of things to which A3 belongs?

It belongs to a wider class of "root lattices", but the point is that
often you will equate lattices by similarity. All equilateral
triangles are the same shape, and so forth. On the other hand for
number theoretic purposes you might require that the corresponding
quadratic form is reduced with integer coefficients, for example.