
Extension of a Riemann zeta function sequence

🔗genewardsmith <genewardsmith@sbcglobal.net>

4/20/2010 12:21:43 PM

I received an email from Tony D. Noe, noe(AT)sspectra.com, telling me he has extended the integer sequence A117536:

http://www.research.att.com/~njas/sequences/index.html?q=A117536&language=english&go=Search

"These are the locations of the increasingly larger peaks of the absolute value of the Riemann zeta function along the critical line. Equivalently, the locations of the increasingly large peaks of the absolute value of the Z function for increasing real t. If Z'(s)=0 is a positive zero of the derivative of Z, then |Z(s)| is the peak value. We renormalize s by r = ln(2) s /2 pi and round to the nearest integer to get the terms of the sequence. The fractional parts of these values are not randomly distributed; r shows a very strong tendency to be near an integer.

It would be interesting to have theorems on the distribution of the fractional part of the "r" above, for which the Riemann hypothesis would surely be needed. It would be particularly interesting to know if the absolute value of the fractional part was constrained to be less than some bound, such as 0.25. This computation could be pushed much farther by someone using a better algorithm, for instance the Riemann-Siegel formula, and better computing resources. The computations were done using Maple's accurate but very slow zeta function evaluation. They are correct as far as they go, but do not go very far. The terms of the sequence have an interpretation in terms of music theory; the terms which appear in it, 12, 19, 22 and so forth, are equal divisions of the octave which do relatively well at approximating intervals given by rational numbers with small numerators and denominators."
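
Here is a minimal Python sketch of the computation described above -- not the Maple code actually used; it assumes the mpmath library, and brackets one extremum of |Z| between each pair of consecutive zeros:

from mpmath import mp, siegelz, zetazero, findroot, diff, log, pi, nint

mp.dps = 30
record = mp.mpf(0)
terms = []
for n in range(1, 60):
    # Z changes sign at consecutive zeros, so Z' has a root between them.
    a = zetazero(n).imag
    b = zetazero(n + 1).imag
    t = findroot(lambda x: diff(siegelz, x), (a, b), solver='illinois')
    peak = abs(siegelz(t))
    if peak > record:              # keep only the increasingly large peaks
        record = peak
        r = t * log(2) / (2 * pi)  # renormalize: good EDOs land near integers
        terms.append(int(nint(r)))
print(terms)  # begins 2, 3, 4, 5, 7, ... (0 and 1 lie below the first zero)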

While it does not seem to have appeared on the integer sequence site as yet, Tony has very considerably extended this sequence. I had

0, 1, 2, 3, 4, 5, 7, 10, 12, 19, 22, 27, 31, 41, 53, 72, 99, 118, 130, 152, 171, 217, 224, 270

Tony now gives us

0,1,2,3,4,5,7,10,12,19,22,27,31,41,53,72,99,118,130,152,171,217,224,
270,342,422,441,494,742,764,935,954,1012,1106,1178,1236,1395,1448,1578,
2460,2684,3395,5585,6079,7033,8269,8539,11664,14348,16808,28742,34691,
36269,57578,58973

🔗gdsecor <gdsecor@yahoo.com>

4/20/2010 2:48:49 PM

--- In tuning-math@yahoogroups.com, "genewardsmith" <genewardsmith@...> wrote:
>
> I received an email from Tony D. Noe, noe(AT)sspectra.com, telling me he has extended the integer sequence A117536:
> http://www.research.att.com/~njas/sequences/index.html?q=A117536&language=english&go=Search
>
> "These are the locations of the increasingly larger peaks of the absolute value of the Riemann zeta function along the critical line. Equivalently, the locations of the increasingly large peaks of the absolute value of the Z function for increasing real t. If Z'(s)=0 is a positive zero of the derivative of Z, then |Z(s)| is the peak value. We renormalize s by r = ln(2) s /2 pi and round to the nearest integer to get the terms of the sequence. The fractional parts of these values are not randomly distributed; r shows a very strong tendency to be near an integer.
>
> It would be interesting to have theorems on the distribution of the fractional part of the "r" above, for which the Riemann hypothesis would surely be needed. It would be particularly interesting to know if the absolute value of the fractional part was constrained to be less than some bound, such as 0.25. This computation could be pushed much farther by someone using a better algorithm, for instance the Riemann-Siegel formula, and better computing resources. The computations were done using Maple's accurate but very slow zeta function evaluation. They are correct as far as they go, but do not go very far. The terms of the sequence have an interpretation in terms of music theory; the terms which appear in it, 12, 19, 22 and so forth, are equal divisions of the octave which do relatively well at approximating intervals given by rational numbers with small numerators and denominators."
>
> While it does not seem to have appeared on the integer sequence site as yet, Tony has very considerably extended this sequence. I had
>
> 0, 1, 2, 3, 4, 5, 7, 10, 12, 19, 22, 27, 31, 41, 53, 72, 99, 118, 130, 152, 171, 217, 224, 270
>
> Tony now gives us
>
> 0,1,2,3,4,5,7,10,12,19,22,27,31,41,53,72,99,118,130,152,171,217,224,
> 270,342,422,441,494,742,764,935,954,1012,1106,1178,1236,1395,1448,1578,
> 2460,2684,3395,5585,6079,7033,8269,8539,11664,14348,16808,28742,34691,
> 36269,57578,58973

Hi Gene,

Around a year ago I was looking for low-error EDOs consistent at high prime limits, with exceptionally low error at the 7 limit, and I did happen to find 58973 (47-limit consistent). I was particularly interested in finding one that would distinguish 40960:41553 from 6561:6656 by a different number of degrees, to use as a tuning measure for commas notated in Sagittal. I found one that filled the bill: 324296 (59-limit consistent); this also happened to be the first one greater than 58973 that clearly stood out.

Beyond 58973, I came across a few 59-limit consistent divisions (639400, 1422445, 2805055), 57-limit ones (2836528, 3463634, 3492376), 65-limit (3605631, 4393509), 69-limit (9648692), and 71-limit (8595351), all with exceptionally low 7-limit error. The champ turned out to be the only stand-out above 324296: 2901533 (131-limit consistent, and error of <8.02% of a degree at the 17 limit). (This was done with a program using the brute-force method; I stopped at 10,000,000, when I failed to find anything better than 2901533.)

The early stand-outs were 612, 2460, 6079, and 28342 (28742 was skipped because of >14% error for prime 5).

--George

🔗Graham Breed <gbreed@gmail.com>

4/20/2010 11:05:27 PM

On 21 April 2010 01:48, gdsecor <gdsecor@yahoo.com> wrote:

> Around a year ago I was looking for low-error EDOs consistent at
> high prime limits, with exceptionally low error at the 7 limit, and
> I did happen to find 58973 (47-limit consistent).  I was particularly
> interested in finding one that would distinguish 40960:41553 from
> 6561:6656 by a different number of degrees, to use as a tuning
> measure for commas notated in Sagittal.  I found one that filled the bill:
> 324296 (59-limit consistent); this also happened to be the first one
> greater than 58973 that clearly stood out.

Well, this is pretty hardcore. Three questions spring to mind:

1) How are you defining prime limit consistency?

2) How can something consistent with so many notes *not* have an
exceptionally low error at the 7-limit?

3) Why the blue blazes are you playing around with these limits anyway?

> Beyond 58973, I came across a few 59-limit consistent divisions
> (639400, 1422445, 2805055), 57-limit ones (2836528, 3463634,
> 3492376), 65-limit (3605631, 4393509), 69-limit (9648692), and
> 71-limit (8595351), all with exceptionally low 7-limit error. The
> champ turned out to be the only stand-out above 324296:
> 2901533 (131-limit consistent, and error of <8.02% of a degree at
> the 17 limit).  (This was done with a program using the brute-force
> method; I stopped at 10,000,000, when I failed to find anything
> better than 2901533.)

I've got as far as these 59-limit divisions:

4380, 6151, 10257, 20567, 28771, 37592, 38828, 40649, 43119, 48203

All claim to be consistent and make your intervals distinct.
Interestingly enough, they have no intersection with Gene's list.
This list of 19-limit divisions sorted by 0.1 cent/oct badness,
though, does intersect:

311, 94, 217, 270, 282, 130, 243, 193, 190, 183

As does this list, of 31-limit divisions ordered by 0.01 cent/oct badness:

311, 422, 217, 388, 270, 190, 270, 436, 103, 183

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

4/22/2010 2:37:10 AM

--- In tuning-math@yahoogroups.com, "genewardsmith" <genewardsmith@...> wrote:
>
> I received an email from Tony D. Noe, noe(AT)sspectra.com, telling me he has extended the integer sequence A117536:

At my suggestion, he has now also extended the list for A117538, obtaining 2, 5, 7, 12, 19, 31, 41, 53, 72, 130, 171, 224, 270, 764, 954, 1178, 1395, 1578, 2684, 3395, 7033, 8269, 8539, 14348, 16808, 36269, 58973.

🔗genewardsmith <genewardsmith@sbcglobal.net>

4/22/2010 2:42:06 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> As does this list, of 31-limit divisions ordered by 0.01 cent/oct badness:
>
> 311, 422, 217, 388, 270, 190, 270, 436, 103, 183

270 is hot stuff, but does it really appear on the list twice?

🔗Graham Breed <gbreed@gmail.com>

4/22/2010 7:44:47 AM

On 22 April 2010 11:42, genewardsmith <genewardsmith@sbcglobal.net> wrote:
>
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
>> As does this list, of 31-limit divisions ordered by 0.01 cent/oct badness:
>>
>> 311, 422, 217, 388, 270, 190, 270, 436, 103, 183
>
> 270 is hot stuff, but does it really appear on the list twice?

Because it makes the list with two different mappings.

Graham

🔗gdsecor <gdsecor@yahoo.com>

4/22/2010 10:05:00 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> On 21 April 2010 01:48, gdsecor <gdsecor@...> wrote:
>
> > Around a year ago I was looking for low-error EDOs consistent at
> > high prime limits, with exceptionally low error at the 7 limit, and
> > I did happen to find 58973 (47-limit consistent). I was particularly
> > interested in finding one that would distinguish 40960:41553 from
> > 6561:6656 by a different number of degrees, to use as a tuning
> > measure for commas notated in Sagittal. I found one that filled the bill:
> > 324296 (59-limit consistent); this also happened to be the first one
> > greater than 58973 that clearly stood out.
>
> Well, this is pretty hardcore. Three questions spring to mind:
>
> 1) How are you defining prime limit consistency?

Oops, sorry! I meant odd-limit consistency.

> 2) How can something consistent with so many notes *not* have an
> exceptionally low error at the 7-limit?

I computed error as a percentage of the interval between consecutive tones of each EDO, so it's evaluated relative to complexity. The program kept only those EDOs with error no more than 2%, 10%, and 15% of an EDO-degree for primes 3, 5, and 7, respectively. When the program got past about 30000, I was getting too many results, so I added a condition that the odd-limit consistency not be below 27 beyond that point; I also raised the odd-limit cutoff to 41 once 324296 (with 59-limit consistency) was encountered. This resulted in 2901533 being the 91st EDO on the list, and a total of 199 below 10,000,000.

I then went through the list, plugging each EDO number into a spreadsheet that calculates the error of each odd harmonic in both cents and as % of an EDO-degree (and also shows odd-limit consistency). I took note of those EDO numbers that have exceptionally low % error at the 7 limit. For example, for 2901533 the figures are 0.04% for 3, 0.91% for 5, 5.09% for 7, and total spread of only +5.09% -0.27% at the 15 limit; for 324296 they are 0.09% for 3, 0.65% for 5, 2.82% for 7, and total spread of +20.12% -0.00% at the 15 limit.
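
Here is a rough Python equivalent of that calculation, in case anyone wants to check the figures (a reconstruction, not my actual program or spreadsheet):

from math import log2

def signed_errors(edo, odd_max):
    """Signed error of each odd harmonic 3..odd_max, in % of one degree.
    Positive means the nearest degree is sharp of the just harmonic."""
    errs = {}
    for odd in range(3, odd_max + 1, 2):
        x = edo * log2(odd)
        errs[odd] = (round(x) - x) * 100
    return errs

def consistency_limit(edo, odd_max=201):
    """Largest odd limit whose signed errors (with 0 for harmonic 1)
    stay within a spread of 50% of a degree -- the consistency condition."""
    errs = signed_errors(edo, odd_max)
    lo = hi = 0.0
    limit = 1
    for odd in range(3, odd_max + 1, 2):
        lo, hi = min(lo, errs[odd]), max(hi, errs[odd])
        if hi - lo >= 50:
            break
        limit = odd
    return limit

errs = signed_errors(324296, 7)
print(errs[3], errs[5], errs[7])   # magnitudes near 0.09, 0.65, 2.82
print(consistency_limit(324296))   # 59, matching the figure above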

> 3) Why the blue blazes are you playing around with these limits anyway?

Dave Keenan asked Manuel Op de Coul to compile a list of small intervals, or "commas" (in order of "popularity", as determined by the number of occurrences in the .scl files in the Scala tunings archive). Dave & I then used this list to determine which ratios should be considered to define some of the more obscure Sagittal symbols (particularly those with combinations of accent marks). We were running into high prime limits well before the 100th one in the list (ordered by popularity): 31:32 (#48), 47:48 (#72), 243:248 (#73), 128:129 (#79), 36:37 (#82), 999:1024 (#88), 7936:8019 (#89), 296:297 (#96), 53:54 (#104), etc.

The reason for wanting low percentage error at the 7-limit is that low primes (3 in particular) frequently occur as factors in notational commas multiple times, and I didn't want the accumulated error to be anywhere near 50%. This is to allow intervals to be expressed as integers of the tuning measure (single degree of the EDO), without rounding errors. (The two commas I wanted to distinguish, 40960:41553 and 6561:6656, have 3^7 and 3^8 as factors, respectively.)

> > Beyond 58973, I came across a few 59-limit consistent divisions
> > (639400, 1422445, 2805055), 57-limit ones (2836528, 3463634,
> > 3492376), 65-limit (3605631, 4393509), 69-limit (9648692), and
> > 71-limit (8595351), all with exceptionally low 7-limit error. The
> > champ turned out to be the only stand-out above 324296:
> > 2901533 (131-limit consistent, and error of <8.02% of a degree at
> > the 17 limit). (This was done with a program using the brute-force
> > method; I stopped at 10,000,000, when I failed to find anything
> > better than 2901533.)
>
> I've got as far as these 59-limit divisions:
>
> 4380,

4380 is only 31-limit consistent (error for 23 is -46.8% and for 35 is +44.6%, for a total spread of >90% of a degree).

> 6151,

only 37-limit consistent

> 10257,

only 25-limit

> 20567,

57-limit consistent

> 28771,

only 25-limit

> 37592,

57-limit consistent

> 38828,

55-limit consistent

> 40649,

31-limit

> 43119,

33-limit

> 48203

43-limit

All of these, except 43119, failed my 3-limit criterion, and 43119 fails both 5 and 7.

> All claim to be consistent and make your intervals distinct.

48203 has 40960:41553 (the smaller interval) as 1000 degrees and 6561:6656 (the larger interval) as 999 degrees, so they're not represented correctly. 10257 suffers from the same situation (213 & 212 deg), while 4380 has them 2 degrees apart (90 & 92 deg, respectively).
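
To spell out why: the degree count for a ratio comes from the EDO's mapping of each prime (its patent val), not from rounding the ratio's size directly, which is how a larger interval can end up with fewer degrees. A quick Python check (my own illustration, not part of any of the programs discussed):

from math import log2
from fractions import Fraction

def factor(n):
    """{prime: exponent} for a positive integer, by trial division."""
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def degrees(edo, ratio):
    """Degrees of ratio in the EDO: each prime p maps to round(edo*log2(p))."""
    val = lambda m: sum(e * round(edo * log2(p)) for p, e in factor(m).items())
    return val(ratio.numerator) - val(ratio.denominator)

print(degrees(48203, Fraction(41553, 40960)))  # 1000 -- the smaller comma
print(degrees(48203, Fraction(6656, 6561)))    # 999  -- the larger comma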

Besides, these are not the only two intervals I needed to distinguish (another pair is 45927:47104 and 39:40, a difference of ~0.0226 cents); they are merely the pair with the smaller difference in size, ~0.003347 cents. So I needed an EDO with enough resolution to distinguish them properly, i.e., with its single degree in that ballpark; the best solution was 1deg324296, ~0.003700 cents. The runners-up were 363466 (51-limit consistent), 412064 (47-limit consistent), and 693400 (59-limit consistent and very good at the 15 limit, but with twice the resolution I needed).

> Interestingly enough, they have no intersection with Gene's list.
> This list of 19-limit divisions sorted by 0.1 cent/oct badness,
> though, does intersect:
>
> 311, 94, 217, 270, 282, 130, 243, 193, 190, 183
>
> As does this list, of 31-limit divisions ordered by 0.01 cent/oct badness:
>
> 311, 422, 217, 388, 270, 190, 270, 436, 103, 183

I don't see 224 there, which is arguably the best division in which the 5-schisma (32768:32805) vanishes (and is hence closely associated with athenian-level Sagittal, which also does not distinguish the 5-schisma).

Due to the rather stringent 2% error requirement for prime 3, the lowest numbers that made my list were 612, 665, 1236, 1848, 2460, 5585, 6079, 8539, 10428, 11664, 20203, 22092, 25841, 26282, and 28342. (53 fails due to excessive error for prime 7.)

--George

🔗Graham Breed <gbreed@gmail.com>

4/23/2010 11:00:59 PM

On 22 April 2010 21:05, gdsecor <gdsecor@yahoo.com> wrote:

> I computed error as a percentage of the interval between
> consecutive tones of each EDO, so it's evaluated relative to complexity.
> The program kept only those EDOs with error no more than
> 2%, 10%, and 15% of an EDO-degree for primes 3, 5, and 7,
> respectively.  When the program got past about 30000, I was getting
> too many results, so I added a condition that the odd-limit
> consistency not be below 27 beyond that point; I also raised the
> odd-limit cutoff to 41 once 324296 (with 59-limit consistency)
> was encountered.  This resulted in 2901533 being the 91st EDO
> on the list, and a total of 199 below 10,000,000.

For those following at home, 30,000 EDO has scale steps of 40
millicents. For consistency, primes should be within half a step, so
20 millicents. But once you hit the 49 limit, primes up to 7 are
represented at least twice, so they must be within 10 millicents of
just. I'd call that an exceptionally low error. A 2% criterion gets
you 0.8 millicents. 2,901,533 has steps of 0.4 millicents.
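
The same arithmetic in a few lines of Python, for anyone checking at home:

step_mc = lambda edo: 1_200_000 / edo   # step size in millicents

print(step_mc(30000))         # 40.0; half a step, the consistency bound, is 20
print(step_mc(30000) / 4)     # 10.0; the bound on 7 once 49 needs it twice
print(step_mc(30000) * 0.02)  # 0.8; what the 2% criterion allows for prime 3
print(step_mc(2901533))       # about 0.41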

Error as a proportion of step size is a kind of "badness", BTW.

> Dave Keenan asked Manuel Op de Coul to compile a list of
> small intervals, or "commas" (in order of "popularity", as
> determined by the number of occurrences in the .scl files in the
> Scala tunings archive).  Dave & I then used this list to determine
> which ratios should be considered to define some of the more
> obscure Sagittal symbols (particularly those with combinations
> of accent marks).  We were running into high prime limits well
> before the 100th one in the list (ordered by popularity):
> 31:32 (#48), 47:48 (#72), 243:248 (#73), 128:129 (#79),
> 36:37 (#82), 999:1024 (#88), 7936:8019 (#89), 296:297 (#96),
> 53:54 (#104), etc.

Why do you assume these are to be used in a context of odd-limit
harmony? Could any of them be approximations to more complex (but
lower limit) ratios? I don't remember any scales with millions of
notes in the Scala archive.

> The reason for wanting low percentage error at the 7-limit is that
> low primes (3 in particular) frequently occur as factors in notational
> commas multiple times, and I didn't want the accumulated error
> to be anywhere near 50%.  This is to allow intervals to be expressed
> as integers of the tuning measure (single degree of the EDO),
> without rounding errors.  (The two commas I wanted to distinguish,
> 40960:41553 and 6561:6656, have 3^7 and 3^8 as factors, respectively.)

Odd limits correct for that to an extent. 3^4 is already a factor
when you reach the 81-limit. For the size of those ratios, you'd
expect them to factor into post-81-limit "consonances".

> 4380 is only 31-limit consistent (error for 23 is -46.8% and for 35 is +44.6%, for a total spread of >90% of a degree).

I don't get that. And to check it's not my code that's wrong, here it
is in standard Python:

>>> import math
>>> 4380*math.log(23)/math.log(2)
19813.201367529717
>>> 4380*math.log(35)/math.log(2)
22466.259614218954

That's about 20% and 26% out, right? And in the same direction so
they cancel out in the odd limit. The Windows Calculator agrees. Is
there a bug in my floating point library or am I missing something
obvious?

>> As does this list, of 31-limit divisions ordered by 0.01 cent/oct badness:
>>
>> 311, 422, 217, 388, 270, 190, 270, 436, 103, 183
>
> I don't see 224 there, which is arguably the best division in which
> the 5-schisma (32768:32805) vanishes (and is hence closely
> associated with athenian-level Sagittal, which also does not
> distinguish the 5-schisma).

Not that hot beyond the 17-limit. Here's the 13-limit top 10 with a
0.1 cent/oct badness:

270, 224, 72, 130, 58, 87, 311, 494, 198, 140

17-limit:

72, 270, 183, 140, 311, 224, 111, 121, 46, 217

19-limit:

270, 72, 311, 217, 111, 422, 183, 94, 354, 103

and way up in the 31-limit with the same badness:

311, 217, 103, 190, 121, 183, 270, 193, 270, 99

I see two versions of 270 there. The better is

<270, 428, 627, 758, 934, 999, 1104, 1147, 1221, 1312, 1338]

And the other:

<270, 428, 627, 758, 934, 999, 1104, 1147, 1222, 1312, 1338]

These kinds of mappings are thoroughly annoying when you get to the
31-limit. It may be because higher primes have so little weight that
their mapping becomes almost random. Some of the rank 2 classes end
up as a combination of two different ETs with different 29- or
31-limit mappings.

Note: I've added my Python code for this to the Files section. Here's
an ugly link:

http://f1.grp.yahoofs.com/v1/0HrSS1iKax0H0qqfr0gWDw8MWd97Tmm7IPXY0VUCw-yC1QvtIlUJIQ011bAX1p8qsOlNz34SoLbur3oHvxVV/x31eq/regular.zip

It would be nice if you could run these searches on my website, and
who knows, maybe that could happen if I got hold of a Secure FTP
connection. The odd limit code is restricted to something like
21-limit, and likely to stay that way. Our previous opinion was that
you'd have to be insane to be interested in anything higher than that
;-)

Graham

🔗gdsecor <gdsecor@yahoo.com>

4/25/2010 9:43:46 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> On 22 April 2010 21:05, gdsecor <gdsecor@...> wrote:
>
> > I computed error as a percentage of the interval between
> > consecutive tones of each EDO, so it's evaluated relative to complexity.
> > The program kept only those EDOs with error no more than
> > 2%, 10%, and 15% of an EDO-degree for primes 3, 5, and 7,
> > respectively. When the program got past about 30000, I was getting
> > too many results, so I added a condition that the odd-limit
> > consistency not be below 27 beyond that point; I also raised the
> > odd-limit cutoff to 41 once 324296 (with 59-limit consistency)
> > was encountered. This resulted in 2901533 being the 91st EDO
> > on the list, and a total of 199 below 10,000,000.
>
> For those following at home, 30,000 EDO has scale steps of 40
> millicents. For consistency, primes should be within half a step, so
> 20 millicents. But once you hit the 49 limit, primes up to 7 are
> represented at least twice, so they must be within 10 millicents of
> just. I'd call that an exceptionally low error. A 2% criterion gets
> you 0.8 millicents. 2,901,533 has steps of 0.4 millicents.
>
> Error as a proportion of step size is a kind of "badness", BTW.

Yes.

> > Dave Keenan asked Manuel Op de Coul to compile a list of
> > small intervals, or "commas" (in order of "popularity", as
> > determined by the number of occurrences in the .scl files in the
> > Scala tunings archive). Dave & I then used this list to determine
> > which ratios should be considered to define some of the more
> > obscure Sagittal symbols (particularly those with combinations
> > of accent marks). We were running into high prime limits well
> > before the 100th one in the list (ordered by popularity):
> > 31:32 (#48), 47:48 (#72), 243:248 (#73), 128:129 (#79),
> > 36:37 (#82), 999:1024 (#88), 7936:8019 (#89), 296:297 (#96),
> > 53:54 (#104), etc.
>
> Why do you assume these are to be used in a context of odd-limit
> harmony? Could any of them be approximations to more complex (but
> lower limit) ratios? I don't remember any scales with millions of
> notes in the Scala archive.

I don't know the details concerning how the ratios were expressed and interpreted, but a 47-limit tuning wouldn't require a large number of tones; all you'd need is 1/1 and another tone containing 47 as a factor, e.g., 47/24 or 47/32 (written that way rather than in cents), which would then require an accidental defined as 47:48 -- Sagittal has a symbol for this, BTW: ~|)'' .

> > The reason for wanting low percentage error at the 7-limit is that
> > low primes (3 in particular) frequently occur as factors in notational
> > commas multiple times, and I didn't want the accumulated error
> > to be anywhere near 50%. This is to allow intervals to be expressed
> > as integers of the tuning measure (single degree of the EDO),
> > without rounding errors. (The two commas I wanted to distinguish,
> > 40960:41553 and 6561:6656, have 3^7 and 3^8 as factors, respectively.)
>
> Odd limits correct for that to an extent. 3^4 is already a factor
> when you reach the 81-limit. For the size of those ratios, you'd
> expect them to factor into post-81-limit "consonances".

But this is nowhere near what I need if I wanted to quantify some of the ratios in a comma list that Dave got from Gene, e.g., 2*5^18:3^27 (~0.8618 cents).

> > 4380 is only 31-limit consistent (error for 23 is -46.8% and for 35 is +44.6%, for a total spread of >90% of a degree).
>
> I don't get that. And to check it's not my code that's wrong, here it
> is in standard Python:
>
> >>> import math
> >>> 4380*math.log(23)/math.log(2)
> 19813.201367529717
> >>> 4380*math.log(35)/math.log(2)
> 22466.259614218954
>
> That's about 20% and 26% out, right? And in the same direction so
> they cancel out in the odd limit. The Windows Calculator agrees. Is
> there a bug in my floating point library or am I missing something
> obvious?

Sorry, my mistake in the details, made in haste, but my conclusion is still correct. The error for 19 is +7.75% and for 33 is -44.62%, for a total spread of 52.37% at the 33 limit, which is inconsistent (>50%). Thus the division is only 31-limit consistent.
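
In Python, with the convention that a positive sign means the EDO degree is sharp of just:

from math import log2

def err_pct(edo, odd):
    x = edo * log2(odd)
    return (round(x) - x) * 100

print(round(err_pct(4380, 19), 2))  # 7.75
print(round(err_pct(4380, 33), 2))  # -44.62; spread 52.37% > 50%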

> >> As does this list, of 31-limit divisions ordered by 0.01 cent/oct badness:
> >>
> >> 311, 422, 217, 388, 270, 190, 270, 436, 103, 183
> >
> > I don't see 224 there, which is arguably the best division in which
> > the 5-schisma (32768:32805) vanishes (and is hence closely
> > associated with arthenian-level Sagittal, which also does not
> > distinguish the 5-schisma).
>
> Not that hot beyond the 17-limit.

Yes, I see now -- nor beyond the 15 limit.

> Here's the 13-limit top 10 with a
> 0.1 cent/oct badness:
>
> 270, 224, 72, 130, 58, 87, 311, 494, 198, 140
>
> 17-limit:
>
> 72, 270, 183, 140, 311, 224, 111, 121, 46, 217
>
> 19-limit:
>
> 270, 72, 311, 217, 111, 422, 183, 94, 354, 103
>
> and way up in the 31-limit with the same badness:
>
> 311, 217, 103, 190, 121, 183, 270, 193, 270, 99
>
> I see two versions of 270 there. The better is
>
> <270, 428, 627, 758, 934, 999, 1104, 1147, 1221, 1312, 1338]
>
> And the other:
>
> <270, 428, 627, 758, 934, 999, 1104, 1147, 1222, 1312, 1338]
>
> These kinds of mappings are thoroughly annoying when you get to the
> 31-limit. It may be because higher primes have so little weight that
> their mapping becomes almost random. Some of the rank 2 classes end
> up as a combination of two different ETs with different 29- or
> 31-limit mappings.
>
> Note: I've added my Python code for this to the Files section. Here's
> an ugly link:
>
> http://f1.grp.yahoofs.com/v1/0HrSS1iKax0H0qqfr0gWDw8MWd97Tmm7IPXY0VUCw-yC1QvtIlUJIQ011bAX1p8qsOlNz34SoLbur3oHvxVV/x31eq/regular.zip
>
> It would be nice if you could run these searches on my website, and
> who knows, maybe that could happen if I got hold of a Secure FTP
> connection. The odd limit code is restricted to something like
> 21-limit, and likely to stay that way. Our previous opinion was that
> you'd have to be insane to be interested in anything higher than that
> ;-)

I had all sorts of assumptions regarding what's "insane" before I started working on Sagittal. (For example, I thought that you had to be nuts even to entertain the idea of composing with anything over 100 tones/octave.) Over the past 10 years, various folks have come up with all sorts of requests regarding the notation that have gradually changed my opinion as to what's useful and what's truly "insane". It's refreshing, for once, to be turning the tables and spouting even greater insanity. :-)

--George

🔗Graham Breed <gbreed@gmail.com>

4/27/2010 5:14:39 AM

On 26 April 2010 08:43, gdsecor <gdsecor@yahoo.com> wrote:

> I don't know the details concerning how the ratios were expressed
> and interpreted, but a 47-limit tuning wouldn't require a large number
> of tones; all you'd need is 1/1 and another tone containing 47 as a
> factor, e.g., 47/24 or 47/32 (written that way rather than in cents),
> which would then require an accidental defined as 47:48 --
> Sagittal has a symbol for this, BTW: ~|)'' .

There you go, you can already do it. You don't need to define a
notation for a huge equal temperament.

> But this is nowhere near what I need if I wanted to quantify some of
> the ratios in a comma list that Dave got from Gene, e.g.,
> 2*5^18:3^27 (~0.8618 cents).

This is moving the goalposts. Who's using that ratio? Why would it
make sense in the context of a high prime limit?

> Sorry, my mistake in the details, made in haste, but my conclusion
> is still correct.  The error for 19 is +7.75% and for 33 is -44.62%, for a
> total spread of 52.37% at the 33 limit, which is inconsistent (>50%).
>  Thus the division is only 31-limit consistent.

That's right. I've found out what was wrong with my code, and it
seems to make sense now. I think I can say that you meant 693,400
when you wrote 639400.

> I had all sorts of assumptions regarding what's "insane" before
> I started working on Sagittal.  (For example, I thought that you
> had to be nuts even to entertain the idea of composing with
> anything over 100 tones/octave.)  Over the past 10 years, various
> folks have come up with all sorts of requests regarding the notation
> that have gradually changed my opinion as to what's useful and
> what's truly "insane".  It's refreshing, for once, to be turning the
> tables and spouting even greater insanity. :-)

If this is something people are going to want to do, maybe the search
could be made less brutal. The simplest thing is to check for
consistency in each successive limit before you calculate the next
prime. Or maybe your "brute force" search is already doing that.
Brutality is in the eye of the beholder.

Everything I try has problems. My old code is very slow. The new
code with parametric badness has problems with floating point
precision when the limit gets beyond 31. That could be fixed with a
library, but it's still a complication.

One thing that might work is a badness-lattice basis reduction. I
read one of the early tuning-math posts where Gene said that LLL
reduction should work for any inner product defined by a positive
definite matrix. My parametric badness is one of those. So you
should be able to start with an identity matrix, and reduce it to get
some good equal temperaments. The first one should be the best
possible. The other ones will be linearly independent. They're what
you want for constructing higher rank temperaments (which may be
useful for notations). All the genetic material is there. And you
can find other equal temperaments from them if that's what you want.

The problem here is that the LLL algorithm as I have it doesn't work
with this kind of inner product. I haven't got any other lattice
reduction algorithms working. If anybody has libraries that are
likely to work, they can give it a try, or give me some pointers.

I have a naive lattice reduction algorithm. But it fails in the
higher limits because some prime intervals are local minima, or at
least close enough given floating point imprecision that they don't
get reduced.

Whether it applies to music or not, this is an interesting question.

Graham

🔗gdsecor <gdsecor@yahoo.com>

4/27/2010 2:38:29 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> On 26 April 2010 08:43, gdsecor <gdsecor@...> wrote:
>
> > I don't know the details concerning how the ratios were expressed
> > and interpreted, but a 47-limit tuning wouldn't require a large number
> > of tones; all you'd need is 1/1 and another tone containing 47 as a
> > factor, e.g., 47/24 or 47/32 (written that way rather than in cents),
> > which would then require an accidental defined as 47:48 --
> > Sagittal has a symbol for this, BTW: ~|)'' .
>
> There you go, you can already do it. You don't need to define a
> notation for a huge equal temperament.

Yes, if it were simply a matter of my own personal requirements, the symbols in olympian-level Sagittal (resolution of 2460-ET) already provide much more than I would ever need. But the plot thickens.

A half-dozen of those 1/2460th-octave "buckets" (or "minas", short for schisminas, alternatively defined as 1/233rd-apotome) are occupied by two symbols (sharing a single mina). While the boundaries between symbols in adjacent minas can be equally spaced (1/2460th-octave or 1/233rd-apotome wide), the boundary between the symbols used in my (by now) infamous example:
40960:41553, or 5:19-comma (~24.884308 cents), notated with )/|
and
6561:6656, or 13-comma (~24.887655 cents), notated with .|)
was arbitrarily set as the halfway point between the two ratios. And likewise for the boundaries between the other 5 pairs of shared-mina symbols.

This is not the end, however. Someone requested a finer resolution than this offlist, so Dave Keenan & I looked for a finer division than 233-EDA (2460-EDO), which led to the "tina" (809-EDA or 8539-EDO). We devised diacritical markings that could modify the existing symbols to notate tinas. (We also found that we had passed the point where we could assign rational definitions, or ratios, for each tina, but that's another matter that I don't care to go into, other than to point up how "insane" these requests seem to be getting.)

The point I'm leading up to is that, with tinas as "buckets" for the symbols (instead of minas), the symbol boundaries didn't stay put, nor did all 6 of the symbol pairs sharing minas drop into separate tina-buckets. The only way to fix the problem of shifting boundaries was to find a set of buckets that would hold all possible Sagittal symbols (past, present, and future, no matter how "insane" the requirement), with no more than one symbol per bucket. For this purpose I sought & found 30723-EDA (alias 324296-EDO), with 59-limit consistency and fantastically low 7-limit error. With this, the boundary between the 5:19-comma and 13-comma symbols can now be firmly fixed at ~24.886519 cents. (Mina and tina boundaries still need to be shifted slightly so that they coincide with 30723-EDA boundaries.)

> > But this is nowhere near what I need if I wanted to quantify some of
> > the ratios in a comma list that Dave got from Gene, e.g.,
> > 2*5^18:3^27 (~0.8618 cents).
>
> This is moving the goalposts. Who's using that ratio?

It was 95th (when sorted in order of badness) on a list of 318 intervals (23-limit, from 0 to 600 cents) that Dave got from Gene. I don't know what Gene was using it for. There are other 5-limit ratios farther up the list (with less badness) that have larger exponents:
#60: 2^90*3^15:5^49 (~0.0470 cents)
#47: 2^161:3^84*5^12 (~0.0154 cents)

> Why would it
> make sense in the context of a high prime limit?

Because seemingly "insane" requests can make unpredictable demands, you have to be prepared to accommodate as many of them as you can. I wanted an interval measuring unit that is free of rounding errors under highly extreme conditions. If you have 5^49 and 0.654% error for 5, your accumulated 5 error is a hair over 32%, which keeps you in the right bucket. If you want high prime limits, like some of the JI folks are using, you need high-prime consistency. (I hope 59-limit will do it.)

> ...
> If this is something people are going to want to do, maybe the search
> could be made less brutal. The simplest thing is to check for
> consistency in each successive limit before you calculate the next
> prime. Or maybe your "brute force" search is already doing that.
> Brutality is in the eye of the beholder.

Since I've already found what I need, the brutality is over & done. I can't locate the source code for the program I wrote, but it went something like this (a Python rendering follows the list of steps):

1. Initialize n to 1, max-n to 10,000,000.
2. Execute the following loop for each n.
3. If n > max-n then quit.
4. Initialize most negative% and most positive% errors to 0, harmonic number to 3, and odd limit to 1.
5. Calculate signed % error for the current harmonic number.
6. If unsigned 3 error > 2% or unsigned 5 error > 10% or unsigned 7 error > 15%, then skip EDO: increment n, and go to top of loop (step 3).
7. If the signed % error < the most negative% error or > the most positive% error, then replace the most negative% or most positive% error with the new value.
8. If most positive% error minus most negative% error < 50% (i.e., is consistent), then odd limit = harmonic number; add 2 to harmonic number & repeat from step 5.
9. If the odd limit (i.e., odd-limit consistency) is too low (<11 if n<2460 or <27 if n<28342), then skip EDO: increment n, and go to top of loop (step 3).
10. Write line to comma-delimited output file containing: n, odd limit, most negative% error, most positive% error.
11. Increment n and repeat loop (step 3).
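
Rendered as Python, the outline would look something like this -- a rendering of the steps above, not the lost original, with the cutoffs quoted in step 9:

from math import log2

def survey(max_n=10_000_000):
    """Brute-force EDO survey following steps 1-11 above (very slow)."""
    lines = []
    for n in range(1, max_n + 1):                     # steps 1-3, 11
        neg = pos = 0.0                               # step 4
        harmonic, odd_limit = 3, 1
        skipped = False
        while True:
            x = n * log2(harmonic)
            err = (round(x) - x) * 100                # step 5: signed % error
            if ((harmonic == 3 and abs(err) > 2) or   # step 6: prime filters
                    (harmonic == 5 and abs(err) > 10) or
                    (harmonic == 7 and abs(err) > 15)):
                skipped = True
                break
            neg, pos = min(neg, err), max(pos, err)   # step 7
            if pos - neg < 50:                        # step 8: still consistent
                odd_limit = harmonic
                harmonic += 2
            else:
                break
        if skipped:
            continue
        if odd_limit < (11 if n < 2460 else 27):      # step 9's cutoffs
            continue
        lines.append((n, odd_limit, neg, pos))        # step 10
    return lines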

This gave me 198 lines of output, which I imported into a spreadsheet. I plugged the numbers with high-limit consistency into another spreadsheet in which I could examine the % errors for each odd harmonic. I evaluated each one against the others, deciding (for example) that an error exceeding 30% would be more acceptable for harmonic 41 (or even 17) than for harmonics 11 or 13 (which factors are more likely to be raised to the 2nd power in a ratio). I bolded the best ones on the first spreadsheet and highlighted the standouts (of which there were only 7).

> Everything I try has problems. My old code is very slow. The new
> code with parametric badness has problems with floating point
> precision when the limit gets beyond 31. That could be fixed with a
> library, but it's still a complication.

If I remember correctly, I think I first had the program calculate cents for each odd harmonic (reduced to the octave between 1/1 and 2/1) into an array. Floating point precision then becomes an issue only when the number of tones/octave gets very large.

--George

🔗genewardsmith <genewardsmith@sbcglobal.net>

4/28/2010 9:28:22 PM

--- In tuning-math@yahoogroups.com, "gdsecor" <gdsecor@...> wrote:

> I had all sorts of assumptions regarding what's "insane" before I started working on Sagittal. (For example, I thought that you had to be nuts even to entertain the idea of composing with anything over 100 tones/octave.)

Damn. I'd been eyeing 111 for my next project, and now I have to quit. And I've already done 99.

🔗genewardsmith <genewardsmith@sbcglobal.net>

4/28/2010 9:35:35 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> On 26 April 2010 08:43, gdsecor <gdsecor@...> wrote:

> The problem here is that the LLL algorithm as I have it doesn't work
> with this kind of inner product. I haven't got any other lattice
> reduction algorithms working. If anybody has libraries that are
> likely to work, they can give it a try, or me some pointers.

What is this inner product of which you speak?

🔗Graham Breed <gbreed@gmail.com>

4/29/2010 9:18:38 AM

On 29 April 2010 08:35, genewardsmith <genewardsmith@sbcglobal.net> wrote:

> What is this inner product of which you speak?

See http://x31eq.com/badness.pdf

Graham