
comma lists

🔗Carl Lumma <ekin@lumma.org>

3/18/2007 12:44:46 PM

Hi Dave and George,

Of all commas < 600 cents with denominator <= 5000, here
are the 12 best, where badness is the size of the comma,
squared, times sqrt(n*d); the middle column of each entry
is log2(n*d)...

((7.1782819948765155 3.5849625007211565 4/3)
(7.552542930418188 4.321928094887363 5/4)
(7.827746223937746 4.906890595608519 6/5)
(8.057433812385074 5.392317422778761 7/6)
(8.260394893171524 5.807354922057605 8/7)
(8.444927568164685 6.169925001442312 9/8)
(8.615410480981284 6.491853096329675 10/9)
(8.77449807989303 6.78135971352466 11/10)
(8.92397771359202 7.044394119358453 12/11)
(9.065148920192453 7.285402218862249 13/12)
(9.199009985700116 7.507794640198696 14/13)
(9.326358497457118 7.714245517666122 15/14))

I can give you 300 of these, but it looks like the moral
is: use simple superparticular commas. This result doesn't
look at prime limits, so it's probably of limited use to you.
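
For concreteness, that first measure might be sketched in Scheme
like this (the names are illustrative, and the absolute scale of
the figures above may differ):

;; Size of the comma in cents.
(define (cents ratio)
  (* 1200 (/ (log ratio) (log 2))))

;; First badness measure: size squared, times sqrt(n*d).
(define (badness-1 ratio)
  (let ((n (numerator ratio))
        (d (denominator ratio)))
    (* (expt (cents ratio) 2)
       (sqrt (* n d)))))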

This next list restricts itself to p-limit commas (in
this case 5-limit), but these are still drawn from
ratios < 600 cents with denominators <= 5000. Here
badness is (n-d)*log(d)^e / d, where log is the natural
log and e is one less than the number of primes in the
p-limit (so e = 2 at the 5-limit); the middle column is
the comma's own prime limit...

((0.24002696783739128 5 81/80)
(0.4023163202708607 3 4/3)
(0.42083442285788536 5 25/24)
(0.4804530139182014 5 5/4)
(0.4889023927793147 5 16/15)
(0.5180580787960469 5 6/5)
(0.5364217603611476 5 10/9)
(0.5405096406579765 3 9/8)
(0.5595027250997308 5 128/125)
(0.6583419736366627 5 2048/2025)
(0.828892926073675 5 27/25)
(0.8692019265111186 5 250/243))
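
In Scheme, that measure is just (a sketch):

;; Second badness measure: (n - d) * log(d)^e / d, with log the
;; natural log and e one less than the number of primes in the
;; p-limit (so e = 2 at the 5-limit).
(define (badness-2 ratio e)
  (let ((n (numerator ratio))
        (d (denominator ratio)))
    (/ (* (- n d) (expt (log d) e)) d)))

(badness-2 81/80 2) gives 0.24002696..., matching the first entry
above.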

I've been meaning to write a version that searches
monzos instead of ratios; this version would give a much
better view of prime limits.

In the meantime, here's a third list. Again we'll consider
ratios < 600 cents with denominators <=5000, but this time
badness is (size^2 * euclid-length^rank) / length, where
size is the size of the comma, rank is the number of primes
needed to factor the comma, length is the city block length
of the comma on a rectangular lattice, and euclid-length is
the symmetric Euclidean length of the comma on a triangular
lattice...

((0.09000000000000001 4955/4954)
(0.1508494466531302 3863/3862)
(0.1508494466531302 3947/3946)
(0.1508494466531302 4007/4006)
(0.1508494466531302 4022/4021)
(0.1508494466531302 4058/4057)
(0.1508494466531302 4079/4078)
(0.1508494466531302 4127/4126)
(0.1508494466531302 4139/4138)
(0.1508494466531302 4178/4177)
(0.1508494466531302 4259/4258)
(0.1508494466531302 4262/4261))
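
As a sketch of that combination in Scheme (here `monzo' is a
stand-in for a function returning the list of prime exponents of
f, 2s included; `cents' is as above and `comma-euclid' is quoted
later in the thread):

;; Third badness measure: (size^2 * euclid-length^rank) / length.
(define (badness-3 f)
  (let* ((m (monzo f)) ; list of prime exponents, 2 included
         ;; rank: number of primes needed to factor f
         (rank (apply + (map (lambda (x) (if (zero? x) 0 1)) m)))
         ;; length: city-block distance on a rectangular lattice
         (city (apply + (map abs m))))
    (/ (* (expt (cents f) 2)
          (expt (comma-euclid f) rank))
       city)))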

Here the moral seems to be: use complex superparticular ratios.

Clearly these would work better if I used weighted
complexity.

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/18/2007 4:34:18 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:

> In the meantime, here's a third list. Again we'll consider
> ratios < 600 cents with denominators <=5000, but this time
> badness is (size^2 * euclid-length^rank) / length, where
> size is the size of the comma, rank is the number of primes
> needed to factor the comma, length is the city block length
> of the comma on a rectangular lattice, and euclid-length is
> the symmetric Euclidean length of the comma on a triangular
> lattice...

> Clearly these would work better if I used weighted
> complexity.

I think you need to weight the Euclidean length for this to make any
sense. One effective way to do that is to multiply each coefficient in
the monzo associated to odd prime p by log2(p). Since 2 isn't counted
you can multiply there by zero if you like. Then
sqrt(sum_{i <= j} a[i]*a[j]) gives a weighted Euclidean length on
interval classes, which has the desirable property that 3 < 5 < 7 < 9.

🔗Carl Lumma <ekin@lumma.org>

3/18/2007 5:00:51 PM

>> In the meantime, here's a third list. Again we'll consider
>> ratios < 600 cents with denominators <=5000, but this time
>> badness is (size^2 * euclid-length^rank) / length, where
>> size is the size of the comma, rank is the number of primes
>> needed to factor the comma, length is the city block length
>> of the comma on a rectangular lattice, and euclid-length is
>> the symmetric Euclidean length of the comma on a triangular
>> lattice...
>
>> Clearly these would work better if I used weighted
>> complexity.
>
>I think you need to weight the Euclidean length for this to make any
>sense. One effective way to do that is to multiply each coefficient in
>the monzo associated to odd prime p by log2(p).

Yes. Except I think I'm going to wind up choosing something
closer to 1/p.

>Since 2 isn't counted
>you can multiply there by zero if you like.

2 isn't counted? I have in my notes that the version I'm using
is octave-specific.

;; Symmetric Euclidean norm (octave-specific).
;; Crow's distance on a triangular lattice.
;; Algorithm due to Gene Ward Smith.

(define comma-euclid
  (lambda (f)
    (let ((m (all-vals (factor-rational f))))
      (sqrt (/ (+ (apply + (map (lambda (x) (expt x 2)) m))
                  (expt (apply + m) 2))
               2)))))
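
(Note that (sum of x^2 + (sum of x)^2)/2 equals
sum_{i <= j} x[i]*x[j], since (sum x)^2 = sum x^2 + 2*sum_{i < j}
x[i]*x[j], so this computes the same quantity as the sum formula.)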

Hmm, but IIRC you use the word "symmetric" to mean 'gives
the same result regardless of 2s'?

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/18/2007 5:22:22 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:

> 2 isn't counted? I have in my notes that the version I'm using
> is octave-specific.

What's the point of it being octave-specific? We nearly always assume
octave-equivalence. But trying it both ways might make sense.

> Hmm, but IIRC you use the word "symmetric" to mean 'gives
> the same result regardless of 2s'?

Symmetric should have a symmetry, and if you really want to do it right
you should toss the formula I gave before and use

sqrt(sum_{i <= j; i>1} log2(ithprime(i))^2 e[i] * e[j])

Here e[i] is the exponent of the ith prime, in other words the ith
monzo element. The point is that you really don't want 6/5 to be
smaller than 5/4; this makes them the same size.
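
In Scheme this might look like the following sketch (taking the
monzo as a list of exponents for the odd primes 3, 5, 7, ..., so
6/5 is (1 -1) and 5/4 is (0 1); the names are illustrative):

(define odd-primes '(3 5 7 11 13 17 19 23 29 31 37))

(define (log2 x) (/ (log x) (log 2)))

;; sqrt( sum_{i <= j} log2(p_i)^2 * e[i] * e[j] ), p_1 = 3.
(define (weighted-euclid m)
  (define (term i j)
    (* (expt (log2 (list-ref odd-primes i)) 2)
       (list-ref m i)
       (list-ref m j)))
  (let loop ((i 0) (acc 0))
    (if (= i (length m))
        (sqrt acc)
        (loop (+ i 1)
              (let inner ((j i) (s acc))
                (if (= j (length m))
                    s
                    (inner (+ j 1) (+ s (term i j)))))))))

Both (weighted-euclid '(1 -1)) and (weighted-euclid '(0 1)) come
out to log2(5), about 2.32, confirming that 6/5 and 5/4 get the
same size.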

🔗Carl Lumma <ekin@lumma.org>

3/18/2007 5:25:06 PM

>> 2 isn't counted? I have in my notes that the version I'm using
>> is octave-specific.
>
>What's the point of it being octave-specific? We nearly always assume
>octave-equivalence. But trying it both ways might make sense.

This was part of an approach I was taking a year or two ago where
I wasn't making that assumption. I'd do it differently today.

>> Hmm, but IIRC you use the word "symmetric" to mean 'gives
>> the same result regardless of 2s'?
>
>Symmetric should have a symmetry, and if you really want to do it right
>you should toss the formula I gave before and use
>
>sqrt(sum_{i <= j; i>1} log2(ithprime(i))^2 e[i] * e[j])
>
>Here e[i] is the exponent of the ith prime, in other words the ith
>monzo element. The point is that you really don't want 6/5 to be
>smaller than 5/4; this makes them the same size.

That's what I'm using, except without the weighting.

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/18/2007 10:42:35 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:

> >sqrt(sum_{i <= j; i>1} log2(ithprime(i))^2 e[i] * e[j])
> >
> >Here e[i] is the exponent of the ith prime, in other words the ith
> >monzo element. The point is that you really don't want 6/5 to be
> >smaller than 5/4; this makes them the same size.
>
> That's what I'm using, except without the weighting.

I think you need the weighting for this to work. Also, I don't
understand your formula. Why the division by L? Is "size" a logarithmic
measure?

I would think cents(q) * EucWeighted(q)^primelimit(q) would make sense,
or if you don't want to bother with the square root for Euclidean
distance, then cents(q)^2.

🔗Carl Lumma <ekin@lumma.org>

3/18/2007 11:42:01 PM

>> >sqrt(sum_{i <= j; i>1} log2(ithprime(i))^2 e[i] * e[j])
>> >
>> >Here e[i] is the exponent of the ith prime, in other words the ith
>> >monzo element. The point is that you really don't want 6/5 to be
>> >smaller than 5/4; this makes them the same size.
>>
>> That's what I'm using, except without the weighting.
>
>I think you need the weighting for this to work.

It just means you consider 1663 to be as consonant as 5. Not
terribly realistic, but perhaps interesting anyway.

>> In the meantime, here's a third list. Again we'll consider
>> ratios < 600 cents with denominators <=5000, but this time
>> badness is (size^2 * euclid-length^rank) / length, where
>> size is the size of the comma, rank is the number of primes
>> needed to factor the comma, length is the city block length
>> of the comma on a rectangular lattice, and euclid-length is
>> the symmetric Euclidean length of the comma on a triangular
>> lattice...
>
>> Clearly these would work better if I used weighted
>> complexity.
//
>Also, I don't
>understand your formula. Why the division by L?

See:
/tuning-math/message/13064

>Is "size" a logarithmic measure?

Yes.

>I would think cents(q) * EucWeighted(q)^primelimit(q) would make sense,
>or if you don't want to bother with the square root for Euclidean
>distance, then cents(q)^2.

That's certainly one way to do it. If you read message 13064,
maybe what I was doing will make sense.

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/19/2007 12:51:17 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:

> It just means you consider 1663 to be as consonant as 5. Not
> terribly realistic, but perhaps interesting anyway.

It makes no sense if this is supposed to connect to actual music. Not
that I always demand that, but saying all primes are equal doesn't
even seem interesting in theory.

> See:
> /tuning-math/message/13064

As I think I told you at the time, you do NOT use a dialect of Lisp
for pseudocode. It's a really, really, really bad choice.

Really.

> >I would think cents(q) * EucWeighted(q)^primelimit(q) would make
> >sense, or if you don't want to bother with the square root for
> >Euclidean distance, then cents(q)^2.
>
> That's certainly one way to do it.

One advantage it has is that the numbers which come out make some
kind of sense. For instance, you get 4.7866 * 10^346 for 1664/1663,
which allows you to conclude it isn't nearly as good as 81/80, which
gets 4555.96, or even 2401/2400, which gets 7506.63. A sorted list of
rational numbers 1<q<2 which get this figure under a million might be
useful for this project.

> If you read message 13064,
> maybe what I was doing will make sense.

It didn't make sense then, and still doesn't now. Why not lose the
Lisp and try to explain what your ideas are? As for me, my
computational resources are not all that terrific and if someone else
wanted to compute cents(q) * EucWeighted(q)^primelimit(q) I'd be
interested.
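
A sketch of that computation, building on weighted-euclid above
(`odd-monzo' and `prime-count' are hypothetical helpers; reading
primelimit(q) as the number of primes in q's limit, i.e. 3 at the
5-limit, reproduces the 81/80 figure above):

;; cents(q) * EucWeighted(q)^primelimit(q)
(define (gene-badness q)
  (* (cents q)
     (expt (weighted-euclid (odd-monzo q))
           (prime-count q))))

(gene-badness 81/80) comes out around 4556, matching the figure
quoted for 81/80.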

🔗Carl Lumma <ekin@lumma.org>

3/19/2007 1:11:21 AM

>> See:
>> /tuning-math/message/13064
>
>As I think I told you at the time, you do NOT use a dialect of Lisp
>for pseudocode. It's a really, really, really bad choice.
>
>Really.

It's entirely explained in English also, in particular the
part about why I'm dividing by length.

-Carl

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

3/19/2007 10:44:31 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> >> See:
> >> /tuning-math/message/13064
> >
> >As I think I told you at the time, you do NOT use a dialect of Lisp
> >for pseudocode. It's a really, really, really bad choice.
> >
> >Really.
>
> It's entirely explained in English also, in particular the
> part about why I'm dividing by length.

You mean this?

"(cents comma)^2 / (comma-dist comma)

I use comma-dist here because I want octave-specific (allowing
tempered octaves), unweighted (the available 'pain relief' depends
only on the number of intervals to temper over) distances."

That didn't help.

🔗Carl Lumma <ekin@lumma.org>

3/19/2007 11:16:35 AM

>You mean this?

Nope.

-Carl

🔗Dave Keenan <d.keenan@bigpond.net.au>

3/19/2007 4:55:22 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> Hi Dave and George,
>
> Of all commas < 600 cents with denominator <= 5000, here
> are
...

Thanks Carl.

We only need commas less than half an apotome in size, say less than
57 cents, because other symbol values are assigned automatically based
on those.

We're really talking extreme Sagittal here. We should probably include
primes up to 37 or so (although we don't expect to be able to notate
anything other than the straight harmonic for primes above 23).

Your middle list looked to have the most promise.

Numerators and denominators should probably not be limited to anything
less than 10^9.

Already at the 5-limit we have an 8-digit numerator and denominator in
the notationally useful 5^6-comma

34171875/33554432 31.567c <25 -7, -6]

which will probably be given the symbol .(|'

Regards,
-- Dave K

🔗Dave Keenan <d.keenan@bigpond.net.au>

3/19/2007 5:47:33 PM

--- In tuning-math@yahoogroups.com, "Dave Keenan" <d.keenan@...> wrote:
> We only need commas less than half an apotome in size, say less than
> 57 cents, because other symbol values are assigned automatically based
> on those.

I should explain that.

The apotome is 2187/2048 113.685c <-11 7]. It probably makes sense to
look at commas up to the double-apotome initially. All commas larger
than an apotome should then be mapped down by subtracting an apotome
from them, and all commas still greater than a half-apotome should be
mapped down by subtracting them from an apotome (i.e. obtaining their
apotome complement).
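
In other words, something like this sketch (working in cents):

;; The apotome is 2187/2048, about 113.685 cents.
(define apotome (* 1200 (/ (log 2187/2048) (log 2))))

;; Map a comma size c (in cents, 0 <= c <= 2*apotome) down to
;; the range below a half-apotome.
(define (map-down c)
  (let ((c (if (> c apotome) (- c apotome) c)))
    (if (> c (/ apotome 2))
        (- apotome c) ; apotome complement
        c)))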

This may mean that several commas of varying badness are mapped down
to the same comma less than a half-apotome. Ideally they should all
contribute to the "goodness" of that sub-half-apotome comma.

But it's OK if you don't want to worry about that.

Here's how these comma lists are to be used:

There are a limited number of accented Sagittal symbols below the
half-apotome: somewhere between 116 and 390 (the exact number is
undecided but likely to be closer to the lower figure). These initially have
approximate sizes assigned to them by taking the sum of the core
symbol size and the standard sizes for the accents (as given in the
recent character map spreadsheet). We call this the symbol's SoCA (Sum
of core and accents).

A comma will be considered for notation by the symbol whose SoCA is
closest. There will always be one within 0.25 cents.

If there are two or more commas vying for the same symbol the one with
the lowest "badness" will get it and the others will miss out.

So that's what these lists are all about. The ordering of two commas
on the list only really matters if they are within 0.5 cents of each
other.
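
For concreteness, the symbol lookup could be sketched like this
(hypothetical representation: each symbol as a (name . soca-cents)
pair):

;; The symbol whose SoCA is nearest a comma's size in cents;
;; per the above, there is always one within 0.25 cents.
(define (closest-symbol comma-cents symbols)
  (let loop ((ss (cdr symbols)) (best (car symbols)))
    (cond ((null? ss) best)
          ((< (abs (- comma-cents (cdar ss)))
              (abs (- comma-cents (cdr best))))
           (loop (cdr ss) (car ss)))
          (else (loop (cdr ss) best)))))

Commas would then be processed in order of increasing badness,
each claiming its closest symbol if it is still free.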

-- Dave K

🔗Dave Keenan <d.keenan@bigpond.net.au>

3/19/2007 6:32:08 PM

Oops! Sorry I put the angle brackets on the wrong end of my exponent
vectors.

-- Dave K

🔗Carl Lumma <ekin@lumma.org>

3/19/2007 10:10:46 PM

> > It's entirely explained in English also, in particular the
> > part about why I'm dividing by length.
>
> You mean this?
>
> "(cents comma)^2 / (comma-dist comma)
>
> I use comma-dist here because I want octave-specific (allowing
> tempered octaves), unweighted (the available 'pain relief' depends
> only on the number of intervals to temper over) distances."
>
> That didn't help.

The part before it didn't help?

>That brings us to error. John deLaubenfels proposed that the
>amount of "pain" mistuning causes us is the square of the error
>in a simultaneity, and I agree with him. My own listening tests
>indicate the exponent should be > 1, and 2 is natural because
>it gives a nice distribution over the target. Also, there would
>scarcely be a reason to temper with an exponent of 1... if we
>spread a 24-cent comma over 12 fifths, we'd experience the same
>amount of pain once we heard all of them, no matter how we
>tempered. But (12 * 2^2) = 48 < 24^2 = 576.
>
>So, for error I arrived at:
>
>((cents comma)/(comma-dist comma))^2 * (comma-dist comma)
>
>or
>
>(cents comma)^2 / (comma-dist comma)
>
>I use comma-dist here because I want octave-specific (allowing
>tempered octaves), unweighted (the available 'pain relief' depends
>only on the number of intervals to temper over) distances.

?

-Carl