
exploring badness

🔗Carl Lumma <ekin@lumma.org>

10/26/2005 10:52:05 PM

Cadence Research released the long-awaited Chez Scheme 7
recently (www.scheme.com), and I dusted off my single-comma
badness code to celebrate.

Generally, badness has been error * complexity, and that seems
good to me. But what kind of error and complexity? My goal was
to try and define these using as few free parameters as possible,
and in a way that makes sense from a composer's point of view.

The first thing I decided to do away with was the notion of
harmonic limit. While we can reasonably expect our composer to
tell us what intervals she considers consonant, it doesn't seem
reasonable to expect those intervals to conform to a complete
odd- or prime-limit. Classifying temperaments based on traditional
harmonic limits therefore seems too prescriptive.

As far as badness is concerned, harmonic limit has mainly been
used to decide how to measure the 'length' of a comma on the
lattice. Is a compound number like 35 one step (of 35) or
two (one each of 5 and 7)?

One solution is to use log lengths, since log(n*d) is equal to
the sum of the logs of the factors of n*d. It so happens that
log(n*d) is called Tenney harmonic distance. I'll call it
comma-thd for short. Here's my Scheme procedure:

(define thd
  (lambda (f)
    (log2 (tenneyheight f))))
(thd 225/224) -> 15.621136113274641

An unweighted version of thd:

(define comma-dist
  (lambda (f)
    (apply + (all-vals (occurrences (factor (tenneyheight f)))))))
(comma-dist 225/224) -> 10

An octave-equivalent, triangular version of comma-dist (which
is like Paul Hahn's "diameter"):

(define comma-hahn
  (lambda (f)
    ;; octave-equivalent, triangular: count the prime steps (ignoring 2)
    ;; on each side of the ratio and take the larger of the two
    (let ((over (apply + (all-vals
                           (occurrences
                             (remove 2 (factor (numerator f)))))))
          (under (apply + (all-vals
                            (occurrences
                              (remove 2 (factor (denominator f))))))))
      (max over under))))
(comma-hahn 225/224) -> 4

A weighted version of comma-hahn:

(define comma-isosceles
  (lambda (f)
    ;; walk the monzo (a list of (prime exponent) pairs) from the largest
    ;; prime down; each rung costs log2 of its prime times the size of its
    ;; exponent, and cancel lets a rung absorb opposite-signed exponents of
    ;; smaller primes, so a compound step like 5/3 is one weighted rung
    (letrec ((loop
              (lambda (ls total)
                (if (null? ls)
                    total
                    (loop (cancel (cadar ls) (cdr ls))
                          (+ (* (log2 (caar ls)) (abs (cadar ls)))
                             total)))))
             (cancel
              (lambda (exp ls)
                (cond
                  [(null? ls) '()]
                  [(zero? exp) ls]
                  [(eq? (sign exp) (sign (cadar ls)))
                   (cons (car ls) (cancel exp (cdr ls)))]
                  [(< (abs exp) (abs (cadar ls)))
                   (cons (list (caar ls) (+ exp (cadar ls)))
                         (cdr ls))]
                  [else (cancel (+ exp (cadar ls)) (cdr ls))]))))
      (loop (reverse (monzo (remove-2s f))) 0))))
(comma-isosceles 225/224) -> 8.29920801838728

That brings us to error. John deLaubenfels proposed that the
amount of "pain" mistuning causes us is the square of the error
in a simultaneity, and I agree with him. My own listening tests
indicate the exponent should be > 1, and 2 is natural because
it gives a nice distribution over the target. Also, there would
scarcely be a reason to temper with an exponent of 1: if we
spread a 24-cent comma over 12 fifths, we'd experience the same
amount of pain once we heard all of them, no matter how we
tempered (12 * 2 = 24 either way). But with an exponent of 2,
spreading wins: 12 * 2^2 = 48 < 24^2 = 576.

So, for error I arrived at:

((cents comma)/(comma-dist comma))^2 * (comma-dist comma)

or

(cents comma)^2 / (comma-dist comma)

I use comma-dist here because I want distances that are
octave-specific (allowing tempered octaves) and unweighted (the
available 'pain relief' depends only on the number of intervals
to temper over).
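Roughly, in code (a sketch: comma-error is just a name I'm using here,
and cents comes from the same unshown library as tenneyheight):

(define comma-error
  (lambda (f)
    ;; squared size in cents, spread over the number of unweighted
    ;; lattice steps the comma spans
    (/ (expt (cents f) 2)
       (comma-dist f))))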

Now complexity. For Fokker blocks in JI, complexity can be
defined as the number of notes in the block (I know it's possible
to refactor the unison vectors of a block... can this change
its volume?). In the single-comma case, we can't guarantee a
closed block, but we can estimate the comma's contribution to the
volume of any block by finding the volume of the 'cube'.

(comma-hahn comma)^((comma-rank comma) - 1)

Here, comma-rank is simply the number of different primes needed
to factor comma, which I take to be the implied dimensionality of
the lattice. I chose comma-hahn here because diagonal lengths
seemed to best estimate the comma's contribution to Fokker block
volume (and can be thought of as the number of dyad-preserving
modulations in its pump) and because measuring a volume of "notes"
usually implies octave-equivalence. I subtract 1 from the rank
because 2 has been removed from the lattice basis. I don't take
any steps to ensure logflat badness, because massively complex
commas are musically useless. I want my badness to eventually go
high with complexity regardless of size. This ruins some nice
number-theoretical aspects of badness but frees me from having to
use arbitrary, sharp bounds on complexity to keep useless commas
out of my top-10 list.
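Putting the pieces together, here's roughly what the whole measure
looks like. This is a sketch: comma-badness is the name used in the
results below, but the body of comma-rank, the comma-complexity name,
and comma-error (from the sketch above) are my own readings of the
description.

(define comma-rank
  (lambda (f)
    ;; number of distinct primes needed to factor the comma,
    ;; counted from the flat factor list of n*d
    (let loop ((ps (factor (tenneyheight f))) (seen '()))
      (cond
        [(null? ps) (length seen)]
        [(memv (car ps) seen) (loop (cdr ps) seen)]
        [else (loop (cdr ps) (cons (car ps) seen))]))))

(define comma-complexity
  (lambda (f)
    ;; estimated contribution to Fokker-block volume: the
    ;; octave-equivalent diameter raised to the (rank - 1) power
    (expt (comma-hahn f) (- (comma-rank f) 1))))

(define comma-badness
  (lambda (f)
    ;; badness = error * complexity
    (* (comma-error f) (comma-complexity f))))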

*Results 1*
Here are the top 10 commas < 600 cents with denominators < 1731.

(0.1111111111111111 1664/1663)
(0.14285714285714285 1697/1696)
(0.16 1409/1408)
(0.18 1472/1471)
(0.19599999999999998 1280/1279)
(0.2016666666666667 1553/1552)
(0.24 1424/1423)
(0.24200000000000005 1544/1543)
(0.24499999999999997 1217/1216)
(0.25 1724/1723)

Lo and behold, they're all superparticulars near our maximum
denominator. That's bad (though they're not just in size order).

(comma-badness 225/224) -> 379.456
(comma-badness 64/63) -> 745.2900000000001
(comma-badness 81/80) -> 821.7777777777778
(comma-badness 36/35) -> 3175.2533333333326
(comma-badness 16/15) -> 8317.926666666668
(comma-badness 21/20) -> 11424.4

Obviously, the complexity term needs a shot in the arm. One
way to do this would be to use weighted measures that will
favor the simpler commas we're used to seeing in these lists.
How much will it help?

(comma-dist 81/80) -> 9
(comma-dist 1664/1663) -> 9

(thd 81/80) -> 12.66177809777199
(thd 1664/1663) -> 21.40001217142836

(comma-hahn 81/80) -> 4
(comma-hahn 1664/1663) -> 1

(comma-isosceles 81/80) -> 7.076815597050832
(comma-isosceles 1664/1663) -> 10.699572453287269

Using a weighted measure in the error term will make things
worse, but

*2*
replacing comma-hahn with comma-isosceles will help.
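In terms of the sketch above, that's just swapping the length measure
inside the complexity term (again, comma-complexity is my name for it):

(define comma-complexity
  (lambda (f)
    ;; weighted: isosceles length in place of the Hahn diameter
    (expt (comma-isosceles f) (- (comma-rank f) 1))))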

(12.720094520349305 1664/1663)
(16.443789134570917 1697/1696)
(17.507382009382244 1409/1408)
(19.93044997474295 1472/1471)
(20.877709098079915 1280/1279)
(21.460940274368443 1699/1697)
(21.501413972449896 1723/1721)
(22.237030720523645 1025/1024)
(22.60480829291785 1601/1600)
(22.662867129390435 1553/1552)

Hm, that didn't help as much as I thought it would.

(comma-badness 81/80) -> 2572.23218947583
(comma-badness 64/63) -> 2958.6253522577035
(comma-badness 225/224) -> 3389.154763755284
(comma-badness 16/15) -> 31740.780048904246
(comma-badness 36/35) -> 53562.19689380825
(comma-badness 21/20) -> 121010.92138877488

A different and probably better ranking than the unweighted
version, but the top 10 is still dominated by small commas.
Part of the problem is that such commas benefit from their large
exponent of 2 in the error term but aren't penalized for it in
our octave-equivalent complexity term.

*3*
So let's put the octaves back into the complexity term by using
plain f instead of "remove-2s f" in comma-isosceles. If 2 can be
tempered, there's no reason to assume it's the interval of
equivalence, eh?
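One way to sketch that without writing the letrec twice: pull the
loop/cancel body out into a helper that takes a monzo directly.
isosceles-length is a name I'm introducing here, and the
comma-isosceles-spec name anticipates the formula in the next section.

(define isosceles-length
  (lambda (m)
    ;; weighted triangular length of a monzo given as (prime exponent)
    ;; pairs; same loop/cancel body as comma-isosceles above
    (letrec ((loop
              (lambda (ls total)
                (if (null? ls)
                    total
                    (loop (cancel (cadar ls) (cdr ls))
                          (+ (* (log2 (caar ls)) (abs (cadar ls)))
                             total)))))
             (cancel
              (lambda (exp ls)
                (cond
                  [(null? ls) '()]
                  [(zero? exp) ls]
                  [(eq? (sign exp) (sign (cadar ls)))
                   (cons (car ls) (cancel exp (cdr ls)))]
                  [(< (abs exp) (abs (cadar ls)))
                   (cons (list (caar ls) (+ exp (cadar ls))) (cdr ls))]
                  [else (cancel (+ exp (cadar ls)) (cdr ls))]))))
      (loop (reverse m) 0))))

(define comma-isosceles        ; octave-equivalent, as before
  (lambda (f) (isosceles-length (monzo (remove-2s f)))))

(define comma-isosceles-spec   ; octave-specific
  (lambda (f) (isosceles-length (monzo f))))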

...

Well it turns out that doesn't change the top 10 much, and
the ranking for simpler commas was already pretty good
before the change.

*4*
It seems we're stuck as long as we're considering primes like
1663 consonant. Time to pull out the big guns. We'll use
the highest prime in the comma, instead of comma-rank, as the
exponent in the complexity term.

(define comma-primelimit
  (lambda (f)
    (apply max (factor (tenneyheight f)))))

Badness is now:

(cents comma)^2 / (comma-dist comma)
  * (comma-isosceles-spec comma)^(comma-primelimit comma)
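Or as actual Scheme (a sketch, reusing the names from the earlier
sketches):

(define comma-badness
  (lambda (f)
    (* (/ (expt (cents f) 2) (comma-dist f))
       (expt (comma-isosceles-spec f) (comma-primelimit f)))))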

(602905.4076750558 9/8)
(816040.4736314346 256/243)
(1427907.8221372738 4/3)
(1765367.9922481766 81/80)
(3332379.266365136 32/27)
(7570872.389905203 10/9)
(9646486.522800893 81/64)
(10784218.878497453 25/24)
(13430861.561113684 6/5)
(14953736.77353493 16/15)

As I feared, we've gone too far. Let's divide comma-primelimit
by comma-rank. This basically finds the smallest, most compound
ratios.

(30.199638182549783 1716/1715)
(235.22671475300302 1275/1274)
(251.22205507245735 1001/1000)
(270.2512143047205 715/714)
(294.25699881529823 441/440)
(302.9311779376435 540/539)
(327.4415910954278 1156/1155)
(351.06550017138903 225/224)
(364.34255767758054 1540/1539)
(555.3403731857843 1575/1573)

This is perhaps only interesting because of the unusually low
score of the top result.

I didn't get very far, but I wrote some code and I had some
fun.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

10/27/2005 12:49:53 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> comma-thd for short. Here's my Scheme procedure:

Normally the best way to give a definition is to give a mathematical
definition. If you want to give code, the standard pseudo-code is
Algol-like. Scheme is a version of Lisp, and Lisp is infamous for
being an obfuscated language. I think trying to explain things using
Lisp code is a bad idea.

> (define thd
> (lambda (f)
> (log2 (tenneyheight f))))
> (thd 225/224) -> 15.621136113274641

I would simply say this is log base 2 of the Tenney height, which is
the traditional Tenney norm.

> An unweighted version of thd:
>
> (define comma-dist
> (lambda (f)
> (apply + (all-vals (occurrences (factor (tenneyheight f)))))))
> (comma-dist 225/224) -> 10

I'd explain this also, but I don't know what it means. Log2 of the
Kees height would give log2(225), which is 7.81. Why not just give a
definition?

> (0.1111111111111111 1664/1663)

> It seems we're stuck as long as we're considering primes like
> 1663 consonant.

Don't look at me. From now on, you own that comma.

> As I feared, we've gone too far. Let's divide comma-primelimit
> by comma-rank. This basically finds the smallest, most compound
> ratios.
>
> (30.199638182549783 1716/1715)

Finally, a useful comma! But hardly a godlike comma...

🔗Carl Lumma <ekin@lumma.org>

10/27/2005 1:05:48 AM

>> comma-thd for short. Here's my Scheme procedure:
>
>Normally the best way to give a definition is to give a mathematical
>definition. If you want to give code, the standard pseudo-code is
>Algol-like. Scheme is a version of Lisp, and Lisp is infamous for
>being an obfuscated language. I think trying to explain things using
>Lisp code is a bad idea.

I know, but hell, it's my one chance to be as obscure as you are. :)
I know this is tuning-math and not tuning-lisp, but these examples
are fairly simple.

>> (define thd
>> (lambda (f)
>> (log2 (tenneyheight f))))
>> (thd 225/224) -> 15.621136113274641
>
>I would simply say this is log base 2 of the Tenney height, which is
>the traditional Tenney norm.

See, you're getting the hang of it already. :)

>> An unweighted version of thd:
>>
>> (define comma-dist
>> (lambda (f)
>> (apply + (all-vals (occurrences (factor (tenneyheight f)))))))
>> (comma-dist 225/224) -> 10
>
>I'd explain this also, but I don't know what it means. Log2 of the
>Kees height would give log2(225), which is 7.81. Why not just give a
>definition?

Ach, you're probably right. This is the sum of the abs. values
of the elements in the comma's monzo.
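In code, assuming monzo returns a list of (prime exponent) pairs the
way the comma-isosceles procedure treats it, that's something like:

(define comma-dist
  (lambda (f)
    ;; sum of the absolute values of the exponents in the monzo
    (apply + (map (lambda (pair) (abs (cadr pair))) (monzo f)))))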

>> (0.1111111111111111 1664/1663)
>
>> It seems we're stuck as long as we're considering primes like
>> 1663 consonant.
>
>Don't look at me. From now on, you own that comma.
>
>> As I feared, we've gone too far. Let's divide comma-primelimit
>> by comma-rank. This basically finds the smallest, most compound
>> ratios.
>>
>> (30.199638182549783 1716/1715)
>
>Finally, a useful comma! But hardly a godlike comma...

One day, I'll figure out how to make the badness roll off and
give a finite list of interesting commas without separate size or
error bounds (these may be needed for the computation, of course),
without assuming a particular harmonic limit.

-C.

🔗Gene Ward Smith <gwsmith@svpal.org>

10/27/2005 6:17:29 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> I know, but hell, it's my one chance to be as obscure as you are. :)
> I know this is tuning-math and not tuning-lisp, but these examples
> are fairly simple.

I've actually programmed in Lisp (though never Scheme) and I still
couldn't sort it out. But at least it isn't Forth.

> Ach, you're probably right. This is the sum of the abs. values
> of the elements in the comma's monzo.

Do you find that particularly interesting?

🔗Carl Lumma <ekin@lumma.org>

10/28/2005 2:09:32 AM

>> I know, but hell, it's my one chance to be as obscure as you are. :)
>> I know this is tuning-math and not tuning-lisp, but these examples
>> are fairly simple.
>
>I've actually programmed in Lisp (though never Scheme) and I still
>couldn't sort it out. But at least it isn't Forth.

I tried to make it as simple as possible. But things like
tenneyheight, factor, and others are defined in libraries I didn't
show, and since you don't know how they're returning data, I can
see how it might be confusing.

>> Ach, you're probably right. This is the sum of the abs. values
>> of the elements in the comma's monzo.
>
>Do you find that particularly interesting?

It's what I'd call unweighted Tenney Harmonic Distance. I use
it in the error term of the badness formula. There, I think,
the important thing is the number of intervals over which the
error must be tempered, not the weighted length over which it
must be tempered.

But I was basically just playing around with everything I
could think of.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

10/31/2005 6:24:35 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> >> Ach, you're probably right. This is the sum of the abs. values
> >> of the elements in the comma's monzo.
> >
> >Do you find that particularly interesting?
>
> It's what I'd call unweighted Tenney Harmonic Distance. I use
> it in the error term of the badness formula. There, I think,
> the important thing is the number of intervals over which the
> error must be tempered, not the weighted length over which it
> must be tempered.

Why can error only be tempered over primes and not over other
consonances?

🔗Carl Lumma <ekin@lumma.org>

11/1/2005 2:38:47 AM

>> >> Ach, you're probably right. This is the sum of the abs. values
>> >> of the elements in the comma's monzo.
>> >
>> >Do you find that particularly interesting?
>>
>> It's what I'd call unweighted Tenney Harmonic Distance. I use
>> it in the error term of the badness formula. There, I think,
>> the important thing is the number of intervals over which the
>> error must be tempered, not the weighted length over which it
>> must be tempered.
>
>Why can error only be tempered over primes and not over other
>consonances?

It will be tempered over all intervals containing those primes.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/1/2005 1:02:25 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> Ach, you're probably right. This is the sum of the abs. values
> >> >> of the elements in the comma's monzo.
> >> >
> >> >Do you find that particularly interesting?
> >>
> >> It's what I'd call unweighted Tenney Harmonic Distance. I use
> >> it in the error term of the badness formula. There, I think,
> >> the important thing is the number of intervals over which the
> >> error must be tempered, not the weighted length over which it
> >> must be tempered.
> >
> >Why can error only be tempered over primes and not over other
> >consonances?
>
> It will be tempered over all intervals containing those primes.

Above you used "tempered over" to mean a distributing of a fixed
amount of comma so as to contribute equally to several errors. Now
that you say "tempered over all intervals", it seems to lose that
meaning. Is there something precise you mean here, or is this just a
cop-out?

🔗Carl Lumma <ekin@lumma.org>

11/1/2005 5:18:25 PM

>> >> >> This is the sum of the abs. values
>> >> >> of the elements in the comma's monzo.
>> >> >
>> >> >Do you find that particularly interesting?
>> >>
>> >> It's what I'd call unweighted Tenney Harmonic Distance. I use
>> >> it in the error term of the badness formula. There, I think,
>> >> the important thing is the number of intervals over which the
>> >> error must be tempered, not the weighted length over which it
>> >> must be tempered.
>> >
>> >Why can error only be tempered over primes and not over other
>> >consonances?
>>
>> It will be tempered over all intervals containing those primes.
>
>Above you used "tempered over" to mean a distributing of a fixed
>amount of comma so as to contribute equally to several errors.

Yes.

>Now that you say "tempered over all intervals", it seems to lose that
>meaning. Is there something precise you mean here, or is this just a
>cop-out?

Eh? If I temper the 2-axis, intervals containing 2 will be affected.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/3/2005 12:55:48 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >> This is the sum of the abs. values
> >> >> >> of the elements in the comma's monzo.
> >> >> >
> >> >> >Do you find that particularly interesting?
> >> >>
> >> >> It's what I'd call unweighted Tenney Harmonic Distance. I use
> >> >> it in the error term of the badness formula. There, I think,
> >> >> the important thing is the number of intervals over which the
> >> >> error must be tempered, not the weighted length over which it
> >> >> must be tempered.
> >> >
> >> >Why can error only be tempered over primes and not over other
> >> >consonances?
> >>
> >> It will be tempered over all intervals containing those primes.
> >
> >Above you used "tempered over" to mean a distributing of a fixed
> >amount of comma so as to contribute equally to several errors.
>
> Yes.
>
> >Now that you say "tempered over all intervals", it seems to lose that
> >meaning. Is there something precise you mean here, or is this just a
> >cop-out?
>
> Eh? If I temper the 2-axis, intervals containing 2 will be affected.

Clearly. How is that different from "tempering over primes?"

🔗Carl Lumma <ekin@lumma.org>

11/3/2005 2:47:12 PM

>> >> >> >> This is the sum of the abs. values
>> >> >> >> of the elements in the comma's monzo.
>> >> >> >
>> >> >> >Do you find that particularly interesting?
>> >> >>
>> >> >> It's what I'd call unweighted Tenney Harmonic Distance. I use
>> >> >> it in the error term of the badness formula. There, I think,
>> >> >> the important thing is the number of intervals over which the
>> >> >> error must be tempered, not the weighted length over which it
>> >> >> must be tempered.
>> >> >
>> >> >Why can error only be tempered over primes and not over other
>> >> >consonances?
>> >>
>> >> It will be tempered over all intervals containing those primes.
>> >
>> >Above you used "tempered over" to mean a distributing of a fixed
>> >amount of comma so as to contribute equally to several errors.
>>
>> Yes.
>>
>> >Now that you say "tempered over all intervals", it seems to lose
>> >that meaning. Is there something precise you mean here, or is this
>> >just a cop-out?
>>
>>Eh? If I temper the 2-axis, intervals containing 2 will be
>>affected.
>
>Clearly. How is that different from "tempering over primes?"

The only question I'm trying to answer here is: When is it
appropriate to use one-factor (rectangular) measures vs.
two-factor (triangular) ones?

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/3/2005 3:22:15 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> >> >> This is the sum of the abs. values
> >> >> >> >> of the elements in the comma's monzo.
> >> >> >> >
> >> >> >> >Do you find that particularly interesting?
> >> >> >>
> >> >> >> It's what I'd call unweighted Tenney Harmonic Distance. I use
> >> >> >> it in the error term of the badness formula. There, I think,
> >> >> >> the important thing is the number of intervals over which the
> >> >> >> error must be tempered, not the weighted length over which it
> >> >> >> must be tempered.
> >> >> >
> >> >> >Why can error only be tempered over primes and not over other
> >> >> >consonances?
> >> >>
> >> >> It will be tempered over all intervals containing those primes.
> >> >
> >> >Above you used "tempered over" to mean a distributing of a fixed
> >> >amount of comma so as to contribute equally to several errors.
> >>
> >> Yes.
> >>
> >> >Now that you say "tempered over all intervals", it seems to lose
> >> >that meaning. Is there something precise you mean here, or is this
> >> >just a cop-out?
> >>
> >>Eh? If I temper the 2-axis, intervals containing 2 will be
> >>affected.
> >
> >Clearly. How is that different from "tempering over primes?"
>
> The only question I'm trying to answer here is: When is it
> appropriate to use one-factor (rectangular) measures vs.
> two-factor (triangular) ones?

As for rectangular vs. triangular, I've tried to answer that by
saying the geometry in each case comes from making the distance
measure agree as well as possible with a Euclidean sphere. I have no
idea what one-factor vs. two-factor means, or why you equate these to
rectangular vs. triangular.

🔗Carl Lumma <ekin@lumma.org>

11/3/2005 4:05:49 PM

>> The only question I'm trying to answer here is: When is it
>> appropriate to use one-factor (rectangular) measures vs.
>> two-factor (triangular) ones?
>
>As for rectangular vs. triangular, I've tried to answer that by
>saying the geometry in each case comes from making the distance
>measure agree as well as possible with a Euclidean sphere. I have no
>idea what one-factor vs. two-factor means, or why you equate these to
>rectangular vs. triangular.

Unit motions on a triangular lattice remove a pair of factors.
" " rect. " " a single factor.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/4/2005 3:08:12 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> The only question I'm trying to answer here is: When is it
> >> appropriate to use one-factor (rectangular) measures vs.
> >> two-factor (triangular) ones?
> >
> >As for rectangular vs. triangular, I've tried to answer that by
> >saying the geometry in each case comes from making the distance
> >measure agree as well as possible with a Euclidean sphere. I have no
> >idea what one-factor vs. two-factor means, or why you equate these to
> >rectangular vs. triangular.
>
> Unit motions on a triangular lattice remove a pair of factors.

How do you get "remove"?

> " " rect. " " a single factor.

Ditto.

And aren't there some unit motions on a triangular lattice
that "remove" a single factor, if you're not counting 1 as a factor?

🔗Carl Lumma <ekin@lumma.org>

11/9/2005 4:28:13 PM

>> >> The only question I'm trying to answer here is: When is it
>> >> appropriate to use one-factor (rectangular) measures vs.
>> >> two-factor (triangular) ones?
>> >
>> >As for rectangular vs. triangular, I've tried to answer that by
>> >saying the geometry in each case comes from making the distance
>> >measure agree as well as possible with a Euclidean sphere. I have no
>> >idea what one-factor vs. two-factor means, or why you equate these to
>> >rectangular vs. triangular.
>>
>> Unit motions on a triangular lattice remove a pair of factors.
>
>How do you get "remove"?

Another way of saying "factor out".

>> " " rect. " " a single factor.
>
>Ditto.
>
>And aren't there some unit motions on a triangular lattice
>that "remove" a single factor, if you're not counting 1 as a factor?

Yes. I should have said "up to a pair" I guess.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/10/2005 2:12:38 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> >> >> The only question I'm trying to answer here is: When is it
> >> >> appropriate to use one-factor (rectangular) measures vs.
> >> >> two-factor (triangular) ones?
> >> >
> >> >As for rectangular vs. triangular, I've tried to answer that by
> >> >saying the geometry in each case comes from making the distance
> >> >measure agree as well as possible with a Euclidean sphere. I have no
> >> >idea what one-factor vs. two-factor means, or why you equate these to
> >> >rectangular vs. triangular.
> >>
> >> Unit motions on a triangular lattice remove a pair of factors.
> >
> >How do you get "remove"?
>
> Another way of saying "factor out".

Or "factor in", right?

> >> " " rect. " " a single factor.
> >
> >Ditto.
> >
> >And aren't there some unit motions on a triangular lattice
> >that "remove" a single factor, if you're not counting 1 as a
factor?
>
> Yes. I should have said "up to a pair" I guess.

So where does this leave us?

:)

🔗Carl Lumma <ekin@lumma.org>

11/10/2005 9:48:35 PM

>> >> >> The only question I'm trying to answer here is: When is it
>> >> >> appropriate to use one-factor (rectangular) measures vs.
>> >> >> two-factor (triangular) ones?
>> >> >
>> >> >As for rectangular vs. triangular, I've tried to answer that by
>> >> >saying the geometry in each case comes from making the distance
>> >> >measure agree as well as possible with a Euclidean sphere.
//
>> >> Unit motions on a triangular lattice remove a pair of factors.
>> >
>> >How do you get "remove"?
>>
>> Another way of saying "factor out".
>
>Or "factor in", right?

I think so.

>> >> " " rect. " " a single factor.
>> >
>> >Ditto.
>> >
>> >And aren't there some unit motions on a triangular lattice
>> >that "remove" a single factor, if you're not counting 1 as a
>factor?
>>
>> Yes. I should have said "up to a pair" I guess.
>
>So where does this leave us?
>
>:)

I'm not sure. Wait, there's a question at the very top of this
message. Maybe you can lay down a precise definition of how to
make "the distance measure agree as well as possible with a
Euclidean sphere", since that's what I tried to do.

-Carl

🔗Paul Erlich <perlich@aya.yale.edu>

11/14/2005 12:58:02 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> I'm not sure. Wait, there's a question at the very top of this
> message. Maybe you can lay down a precise definition of how to
> make "the distance measure agree as well as possible with a
> Euclidean sphere", since that's what I tried to do.

I guess it means that the ball is represented in Euclidean space in
such a way that it has as many of the symmetries of the sphere as
possible. A regular hexagon has more of the rotational and mirror
symmetries of the circle than any other hexagon.

🔗Carl Lumma <ekin@lumma.org>

11/14/2005 1:02:56 PM

>> I'm not sure. Wait, there's a question at the very top of this
>> message. Maybe you can lay down a precise definition of how to
>> make "the distance measure agree as well as possible with a
>> Euclidean sphere", since that's what I tried to do.
>
>I guess it means that the ball is represented in Euclidean space in
>such a way that it has as many of the symmetries of the sphere as
>possible. A regular hexagon has more of the rotational and mirror
>symmetries of the circle than any other hexagon.

That's a good thought.

-Carl