rankin's rankings, OEIS

Jon Wild <wild@music.mcgill.ca>

3/15/2006 4:53:40 PM

I happened across these sequences at the Online Encyclopedia of Integer Sequences:

http://www.research.att.com/~njas/sequences/?q=author%3Arankin

Here's an example (each one uses a different set of intervals):

----------------------
A054540 A list of equal temperaments (equal divisions of the octave) whose nearest scale steps are closer and closer approximations to the six simple ratios of musical harmony: 6/5, 5/4, 4/3, 3/2, 8/5 and 5/3.

1, 2, 3, 5, 7, 12, 19, 31, 34, 53, 118, 171, 289, 323, 441, 612, 730, 1171, 1783, 2513, 4296, 12276, 16572, 20868, 25164, 46032, 48545, 52841, 73709, 78005, 151714, 229719, 537443, 714321, 792326, 944040, 1022045, 1251764, 3755292, 3985011

COMMENT The sequence was found by a computer search of all of the equal divisions of the octave from 1 to over 3985011. There seems to be a hidden aspect or mystery here: what is it about the more and more harmonious equal temperaments that causes them to express themselves collectively as a perfect, self-accumulating recurrent sequence?

FORMULA Stochastic recurrence rule - the next term equals the current term plus one or more previous terms: a(n+1) = a(n) + a(n-x) + a(n-y) + ... + a(n-z).

EXAMPLE 34 = 31 + the earlier term 3. Again, 118 = 53 + the earlier terms 34 and 31.

AUTHOR Mark William Rankin, Apr 09 2000; Dec 17 2000
-------------------------------------------------

There are several more, some with pretty arbitrary sets of intervals. Is Mark Rankin here? (The name seems familiar.) Are these lists based on maximum error in the interval approximations, sum of squared errors, sum of errors, product of errors, or what? Is there a weighting that penalises higher cardinalities at all? (It doesn't appear so.) Why include ratios *and* their inversions on the list, when the error is the same for each?

The "stochastic recurrence" thing is fairly bogus too. We know why it tends to happen (two quite good temperaments will combine to form an excellent temperament if their errors are in opposite directions), but the sequence starts out dense enough that just about *any* number, not just the ones that appear on the list, is expressible as a sum of previous terms (e.g. 250 = 171 + 53 + 19 + 7).

I have lists that I generated for my own purposes using different kinds of weighting, and I know many other people here have done the same. Is there agreement on what a canonical version should be? I was thinking of submitting improvements to the OEIS.

I have separate sequences for various definitions of "most accurate": smallest maximum error; smallest mean squared error; smallest mean error; and smallest product of errors. (This last one is a little silly and I don't know if anyone has seriously proposed it - it rewards EDOs very highly for having one very accurate interval. Errors of 1 cent and 100 cents combine to be just as good, using this measure, as two errors of 10 cents each. It doesn't give any surprises in the 5-limit, but in the 7-limit 18 appears - because of the excellent approximation to 7/6, despite its lousy performance on the other intervals.)

There are also various "penalty" schemes for higher cardinalities: unpenalised; penalised by multiplying by the cardinality; and an intermediate version that penalises by multiplying by the square root of the cardinality.
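
For the curious, here's roughly the computation behind my lists, as a Python sketch (the 200-EDO search bound is arbitrary, and I won't swear it's free of quibbles at the edges):

```python
import math

# Sketch: score each EDO against a set of target ratios under one of
# the four error measures, apply a cardinality penalty, and keep the
# EDOs that set a new record (beat every smaller EDO's score).
RATIOS_5LIMIT = [3/2, 5/4, 6/5]

def errors_cents(n, ratios):
    """Absolute error, in cents, of the nearest n-EDO step to each ratio."""
    step = 1200 / n
    return [abs(1200 * math.log2(r) - step * round(1200 * math.log2(r) / step))
            for r in ratios]

MEASURES = {
    "max":     max,
    "mean":    lambda e: sum(e) / len(e),
    "mean^2":  lambda e: sum(x * x for x in e) / len(e),
    "product": math.prod,
}
PENALTIES = {
    "none":    lambda n: 1,
    "sqrt(n)": math.sqrt,
    "n":       lambda n: n,
}

def record_list(ratios, measure, penalty, n_max=200):
    """EDOs whose penalised error beats every smaller EDO's score."""
    records, best = [], math.inf
    for n in range(1, n_max + 1):
        score = MEASURES[measure](errors_cents(n, ratios)) * PENALTIES[penalty](n)
        if score < best:
            best = score
            records.append(n)
    return records

# all twelve candidate sequences for the 5-limit:
for m in MEASURES:
    for p in PENALTIES:
        print(f"{m:7} {p:7}", record_list(RATIOS_5LIMIT, m, p))
```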

Combining four definitions of error with three penalty schemes gives me twelve candidate sequences for each set of intervals. I'll paste them in here, for the five-limit and seven-limit cases. They're displayed so the same EDOs are vertically aligned (though for many of you I'm afraid the lines will break). Oh, and Yahoo will surely break the formatting if you're reading on the web (click "view original", I think, to see a proper version).

Five-limit (3/2, 5/4, 6/5):
MAXIMUM ERROR
unpenalised: 1 2 3 5 7 12 19 31 34 53 118 171
penalty = sqrt(n): 1 2 3 7 12 19 34 53 118
penalty = n: 1 3 12 19 34 53 118

MEAN ERROR
unpenalised: 1 2 3 5 7 12 19 31 34 53 118 171
penalty = sqrt(n): 1 2 3 7 12 19 34 53 118
penalty = n: 1 3 12 19 34 53 118

MEAN ERROR^2
unpenalised: 1 2 3 5 7 12 19 31 34 53 118 171
penalty = sqrt(n): 1 2 3 7 12 19 31 34 53 118 171
penalty = n: 1 2 3 7 12 19 31 34 53 118

PRODUCT OF ERRORS
unpenalised: 1 2 3 7 12 19 53 118 171
penalty = sqrt(n): 1 2 3 7 12 19 53 118
penalty = n: 1 2 3 12 19 53

Seven-limit (3/2, 5/4, 6/5, 7/4, 7/6, 7/5):

MAXIMUM ERROR
unpenalised: 1 2 3 4 6 7 8 9 10 12 15 19 22 27 31 68 72 99 140 171
penalty = sqrt(n): 1 2 3 4 7 8 9 10 12 15 19 22 27 31 68 72 99 171
penalty = n: 1 2 3 4 12 22 27 31 99 171

MEAN ERROR
unpenalised: 1 2 3 4 6 7 8 9 10 12 15 19 22 27 31 53 68 72 99 140 171
penalty = sqrt(n): 1 2 3 4 9 10 12 15 19 22 27 31 68 72 99 171
penalty = n: 1 2 3 4 19 27 31 99 171

MEAN ERROR^2
unpenalised: 1 2 3 4 6 7 8 9 10 12 15 19 22 27 31 68 72 99 140 171
penalty = sqrt(n): 1 2 3 4 7 8 9 10 12 15 19 22 27 31 68 72 99 171
penalty = n: 1 2 3 4 9 10 12 15 19 22 27 31 68 72 99 171

PRODUCT OF ERRORS
unpenalised: 1 2 3 4 5 8 9 12 15 18 19 27 31 53 68 72 99 140 171
penalty = sqrt(n): 1 2 3 4 9 18 19 27 31 53 72 99 171
penalty = n: 1 2 3 4 9 19 31 99 171

Anyway, all very basic, but I wanted to lay it out explicitly with the various options (as you can see, they agree to a large extent). What do you recommend for the OEIS? Sorry if it's all been done here recently - I haven't always been reading the lists. I have other lists that do the same with other sets of intervals.

Best --Jon

Carl Lumma <ekin@lumma.org>

3/15/2006 5:02:32 PM

At 04:53 PM 3/15/2006, you wrote:
>
>I happened across these sequences at the Online Encyclopedia of Integer
>Sequences:
>
>http://www.research.att.com/~njas/sequences/?q=author%3Arankin
>
>Here's an example (each one uses a different set of intervals):
>
> [... full A054540 entry snipped ...]
>
>There are several more, some with pretty arbitrary sets of intervals. Is
>Mark Rankin here?

I don't think he's on this list, but he reads tuning off and on.
We've met on several occasions: once through my friend Denny (a
microtonalist), and again through my friend Stephen (who wrote the
C program Mark used to generate the above sequence).

>Are these lists based on maximum error in the interval approximations,
>sum of squared errors, sum of errors, product of errors, or what?

I think max, but I'm not sure.

>Is there a weighting that penalises higher cardinalities at all?

I don't think so. It's just max 5-limit error, I think. But
again, I'm not sure.

>Why include ratios
>*and* their inversions on the list, when the error is the same for each?

Good question.

>The "stochastic recurrence" thing is fairly bogus too. We know why it
>tends to happen (two quite good temperaments will combine to form an
>excellent temperament if their errors are in opposite directions), but the
>sequence starts out dense enough that just about *any* number, not just
>the ones that appear on the list, is expressible as a sum of previous
>terms (e.g. 250 = 171 + 53 + 19 + 7).

Yeah. Paul E. and Gene have done more work on this sort of
thing, though. Gene can explain it better than I can. In fact there
was a message about it here not long ago...

>I have lists that I generated for my own purposes using different kinds of
>weighting, and I know many other people here have done the same. Is there
>agreement on what a canonical version should be?

"No" would be an understatement.

>There are also various "penalty" schemes for higher cardinalities:
>unpenalised; penalised by multiplying by the cardinality; and an
>intermediate version that penalises by multiplying by the square root of
>the cardinality.

If you dig in the archives here, you'll find literally thousands of
messages about this sort of thing. The thread pops up, we burn
ourselves out, and then it goes away for a while.

-Carl

Gene Ward Smith <genewardsmith@coolgoose.com>

3/15/2006 6:15:06 PM

--- In tuning-math@yahoogroups.com, Jon Wild <wild@...> wrote:

> I have lists that I generated for my own purposes using different
> kinds of weighting, and I know many other people here have done the
> same. Is there agreement on what a canonical version should be? I
> was thinking of submitting improvements to the OEIS.

It could use some help. Possible lists: decreasing L1, L2, and
L-infinity error; everything below a logflat cutoff figure, such as 1
for L-infinity error; decreasing Pepper ambiguity; and one that they
might find really cute: nearest integers to the increasing absolute
values of the integral of the Z function between successive zeros,
with the argument normalized by log(2)/(2 pi).

I wrote a Wikipedia article on the Z function with the intent of
explaining this stuff on my own web page, and then never did.
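
The computation itself is only a few lines with mpmath. The normalization and the bookkeeping below are just a sketch of the idea, so adjust to taste:

```python
# Sketch of the Z function list: |integral of Z| between successive
# zeros, with the argument scaled by log(2)/(2*pi) so that x ~ edo.
from mpmath import mp, siegelz, zetazero, quad

mp.dps = 15
scale = mp.log(2) / (2 * mp.pi)        # t -> x = t*log(2)/(2*pi)

# ordinates of the first zeros of Z (imaginary parts of zeta zeros)
t = [zetazero(k).imag for k in range(1, 60)]

best = 0
for a, b in zip(t, t[1:]):
    size = abs(quad(siegelz, [a, b])) * scale   # |integral|, in x units
    if size > best:                             # keep the increasing ones
        best = size
        print(round(float((a + b) / 2 * scale)))  # nearest-integer edo
```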

Other fun ideas are smallest consistent edo for each odd integer
greater than 1, and smallest edo which conflates nothing in the
n-diamond for each odd integer greater than 1.
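
Both are easy to test. Here's a Python sketch, using what I take to be the obvious definitions: consistent means the best approximation of every diamond ratio agrees with the step arithmetic, and conflating nothing means distinct diamond ratios get distinct step counts:

```python
import math

def steps(n, r):
    """Nearest n-edo step count to the ratio r."""
    return round(n * math.log2(r))

def diamond(q):
    """Octave-reduced ratios a/b for odd a, b <= q."""
    odds = range(1, q + 1, 2)
    out = set()
    for a in odds:
        for b in odds:
            r = a / b
            while r < 1:
                r *= 2
            while r >= 2:
                r /= 2
            out.add(r)
    return out

def consistent(n, q):
    """Best approximation of every ratio a/b equals steps(a) - steps(b)."""
    odds = range(1, q + 1, 2)
    return all(steps(n, a / b) == steps(n, a) - steps(n, b)
               for a in odds for b in odds)

def conflates_nothing(n, q):
    """Distinct diamond ratios map to distinct step counts."""
    d = diamond(q)
    return len({steps(n, r) for r in d}) == len(d)

def smallest(test, q):
    n = 1
    while not test(n, q):
        n += 1
    return n

for q in range(3, 16, 2):
    print(q, smallest(consistent, q), smallest(conflates_nothing, q))
```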

> Combining four definitions of error with three penalty schemes gives me
> twelve candidate sequences for each set of intervals.

I like logflat penalty schemes, as they give infinite lists but thin
them out pretty well. Here are some lists with logflat minimax error
less than 1; since Paul seems to be gone for the moment it may be
safe to remark that this search tested patent vals only. (A sketch of
the kind of filter I mean follows the lists.)

5-limit

1, 2, 3, 4, 5, 7, 12, 15, 19, 31, 34, 53, 65, 118, 171, 289, 441, 559,
612, 730, 1171, 1783, 2513, 4296, 6809, 8592

7-limit

1, 2, 3, 4, 5, 6, 9, 10, 12, 15, 19, 22, 27, 31, 41, 68, 72, 99, 130,
140, 171, 202, 270, 342, 441, 612, 1547, 1578, 2019, 3125, 3395, 3566,
5144, 6520, 6691

9-limit

2, 5, 12, 19, 22, 31, 41, 72, 99, 171, 270, 342, 441, 612, 3125, 6691

11-limit

2, 7, 12, 22, 31, 41, 46, 58, 72, 118, 152, 270, 342, 494, 612, 764,
836, 1578, 1848, 6421, 6691, 7927
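
And for anyone who wants to experiment, here is the shape of such a logflat filter in Python. The details - patent vals, minimax error over the consonances measured in steps, badness = error * n^(1/(d-1)) for d primes, cutoff 1 - are my reconstruction for illustration, not necessarily the exact recipe behind the lists above:

```python
import math
from fractions import Fraction

# primes and consonances for each prime limit
LIMITS = {
    5: ([2, 3, 5], [Fraction(3, 2), Fraction(5, 4), Fraction(6, 5)]),
    7: ([2, 3, 5, 7], [Fraction(3, 2), Fraction(5, 4), Fraction(6, 5),
                       Fraction(7, 4), Fraction(7, 6), Fraction(7, 5)]),
}

def patent_val(n, primes):
    """Round n*log2(p) to the nearest integer for each prime p."""
    return [round(n * math.log2(p)) for p in primes]

def monzo(ratio, primes):
    """Exponent vector of the ratio over the given primes."""
    num, den = ratio.numerator, ratio.denominator
    out = []
    for p in primes:
        e = 0
        while num % p == 0:
            num //= p
            e += 1
        while den % p == 0:
            den //= p
            e -= 1
        out.append(e)
    return out

def error_steps(n, primes, consonances):
    """Minimax error of the patent val over the consonances, in steps."""
    val = patent_val(n, primes)
    return max(abs(sum(v * e for v, e in zip(val, monzo(r, primes)))
                   - n * math.log2(r))
               for r in consonances)

def logflat_list(limit, cutoff=1.0, n_max=700):
    primes, cons = LIMITS[limit]
    expo = 1 / (len(primes) - 1)   # this exponent is what makes it logflat
    return [n for n in range(1, n_max + 1)
            if error_steps(n, primes, cons) * n ** expo < cutoff]

print(logflat_list(5))
print(logflat_list(7))
```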