
What are some high-numbered ET's that have extremely low error and are consistent to very high limits?

🔗 Mike Battaglia <battaglia01@gmail.com>

2/25/2011 3:14:45 PM

This would be really helpful for some of the research I've been doing
with "fuzzy mapping," as I've spoken about it on Facebook and the
like. In this case, I represent intervals as impulses that lie across
the number line. By representing the "basis set" of vectors in a fuzzy
group as this minimal set of impulses and then convolving the signal
with itself a bajillion times, the entire n-limit JI lattice can be
computed. By then convolving the whole thing with a Gaussian, you end
up getting some interesting results. If you expand these results into
a prime-based Tenney lattice, it looks like this:

http://www.mikebattagliamusic.com/music/contours.gif

Or this, if you weight each interval by complexity:

http://www.mikebattagliamusic.com/music/contours2.gif

The lines are pitch contours and the Euclidean subspace of the second
graph that you get by taking the view from a flat running
perpendicular to the origin is DC exactly, and the whole thing may or
may not be workable to set up such that it exactly yields HE.

When you're doing this with discrete intervals, rounding error frankly
makes life hell on earth. So it seems best to do these computations
with respect to some highly-numbered "universe ET," designed such that
it's extremely consistent, such that if you go 10 steps out into the
11-axis you might at most accrue a round-off error of a few cents. The
accuracy of the calculations can be increased by using a larger
universe ET, but consistency is what's most important here.

Can anyone suggest anything to go with? I hear the "mina" is good for
this. I think something that's 2197-consistent would be good for
starters, as it lets us go three steps out into the 13-axis with no
roundoff error at all.
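For anyone who wants to test candidate ETs, here's a minimal sketch of the standard consistency check (the function name is mine; the criterion is the usual one):

```python
import math

def is_consistent(n_edo, odd_limit):
    """Check whether n_edo-EDO is consistent through the given odd limit.

    With e(o) = N*log2(o) - round(N*log2(o)) for each odd harmonic o,
    every ratio a/b of odd harmonics <= odd_limit rounds the same way
    directly as via its factors iff max(e) - min(e) < 1/2.
    """
    errs = [n_edo * math.log2(o) - round(n_edo * math.log2(o))
            for o in range(1, odd_limit + 1, 2)]
    return max(errs) - min(errs) < 0.5
```

For example, is_consistent(12, 9) is True while is_consistent(12, 11) is False, reproducing the familiar fact that 12-equal is consistent through the 9-limit but not the 11-limit.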

-Mike

🔗 gdsecor <gdsecor@yahoo.com>

2/25/2011 7:17:42 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> This would be really helpful for some of the research I've been doing
> with "fuzzy mapping," as I've spoken about it on Facebook and the
> like. In this case, I represent intervals as impulses that lie across
> the number line. By representing the "basis set" of vectors in a fuzzy
> group as this minimal set of impulses and then convolving the signal
> with itself a bajillion times, the entire n-limit JI lattice can be
> computed. By then convolving the whole thing with a Gaussian, you end
> up getting some interesting results. If you expand these results into
> a prime-based Tenney lattice, it looks like this:
>
> http://www.mikebattagliamusic.com/music/contours.gif
>
> Or this, if you weight each interval by complexity:
>
> http://www.mikebattagliamusic.com/music/contours2.gif
>
> The lines are pitch contours and the Euclidean subspace of the second
> graph that you get by taking the view from a flat running
> perpendicular to the origin is DC exactly, and the whole thing may or
> may not be workable to set up such that it exactly yields HE.
>
> When you're doing this with discrete intervals, rounding error frankly
> makes life hell on earth. So it seems best to do these computations
> with respect to some highly-numbered "universe ET," designed such that
> it's extremely consistent, such that if you go 10 steps out into the
> 11-axis you might at most accrue a round-off error of a few cents. The
> accuracy of the calculations can be increased by using a larger
> universe ET, but consistency is what's most important here.
>
> Can anyone suggest anything to go with? I hear the "mina" is good for
> this.

The mina is a single degree of 2460-EDO (27-limit consistent), but that's only for practical calculations. What you're looking for is something truly insane, such as 324296 (59-limit consistent) or 2901533 (131-limit consistent). Here's how I found these numbers:
/tuning-math/message/17561

--George

🔗 Mike Battaglia <battaglia01@gmail.com>

2/25/2011 7:33:28 PM

On Fri, Feb 25, 2011 at 10:17 PM, gdsecor <gdsecor@yahoo.com> wrote:
>
> The mina is a single degree of 2460-EDO (27-limit consistent), but that's only for practical calculations. What you're looking for is something truly insane, such as 324296 (59-limit consistent) or 2901533 (131-limit consistent). Here's how I found these numbers:
> /tuning-math/message/17561

Maybe it's not consistency that's important, then. If we're in
324296-EDO, each step corresponds to 0.0037 cents, so who cares about
consistency at that point? Maybe what I'm looking for is extremely low
error, and I guess if that's the goal, the zeta integral tuning list
is probably the way to go.
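A quick sanity check on those step sizes, in cents:

```python
def step_cents(n_edo):
    """Size of one step of n_edo-EDO, in cents."""
    return 1200 / n_edo

step_cents(2460)     # ~0.4878 cents: one mina
step_cents(324296)   # ~0.0037 cents
step_cents(2901533)  # ~0.0004 cents
```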

-Mike

🔗 gdsecor <gdsecor@yahoo.com>

2/25/2011 9:43:24 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> On Fri, Feb 25, 2011 at 10:17 PM, gdsecor <gdsecor@...> wrote:
> >
> > The mina is a single degree of 2460-EDO (27-limit consistent), but that's only for practical calculations. What you're looking for is something truly insane, such as 324296 (59-limit consistent) or 2901533 (131-limit consistent). Here's how I found these numbers:
> > /tuning-math/message/17561
>
> Maybe it's not consistency that's important, then. If we're in
> 324296-EDO, each step corresponds to 0.0037 cents, so who cares about
> consistency at that point? Maybe what I'm looking for is extremely low
> error, and I guess if that's the goal, the zeta integral tuning list
> is probably the way to go.
>
> -Mike

If it's extremely low error you want, then any number will do, but if you want to use integers, keep the size of the integers more manageable, and minimize the rounding error, then low *relative* error (< 10% of your measuring unit) in the lower primes *and* consistency to the required odd limit is what you really need. Minas are 27-limit consistent and have very low relative error where it's most needed: no rounding error up to 3^64, 5^8, and 7^5.
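Those per-prime figures can be checked directly: stacking p accrues error linearly, at e = N*log2(p) - round(N*log2(p)) steps per factor, and the aggregate stays rounding-free while |k*e| < 1/2. A quick sketch (function name mine):

```python
import math

def max_clean_power(n_edo, p):
    """Largest exponent k such that k copies of p's best mapping still
    round to the best mapping of p**k in n_edo-EDO."""
    e = n_edo * math.log2(p) - round(n_edo * math.log2(p))
    if abs(e) < 1e-12:          # p maps (near-)exactly; no practical limit
        return float('inf')
    # the accumulated error k*e first reaches half a step at k = 0.5/|e|
    return math.ceil(0.5 / abs(e)) - 1
```

For 2460 this gives 64 for prime 3, 8 for 5, and 5 for 7, matching the figures above.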

In rereading your prior message, you specified "extremely consistent, such that if you go 10 steps out into the 11-axis you might at most accrue a round-off error of a few cents." With minas, 11^10 accumulates to an error of 4 minas, which is slightly less than 2 cents. If that's acceptable to you, then it looks as if minas might fill the bill after all. In the survey of rational intervals that were tabulated in the Scala archive for the Sagittal notation project, I did not find any ratios within the 23 prime limit that suffered from any rounding error when the minas of the constituent primes were combined to give an aggregate total for each ratio.

Why don't you give 2460 a try?

BTW, what sort of tuning measure is given by the zeta integral tuning list?

--George

🔗 Mike Battaglia <battaglia01@gmail.com>

2/25/2011 9:50:37 PM

On Sat, Feb 26, 2011 at 12:43 AM, gdsecor <gdsecor@yahoo.com> wrote:
>
> If it's extremely low error you want, then any number will do, but if you want to use integers, keep the size of the integers more manageable, and minimize the rounding error, then low *relative* error (< 10% of your measuring unit) in the lower primes *and* consistency to the required odd limit is what you really need. Minas are 27-limit consistent and have very low relative error where it's most needed: no rounding error up to 3^64, 5^8, and 7^5.

What do you mean by "if you want to use integers?" I want to use this
approach to see if I can re-derive some common linear temperaments
from this and see how they rank up to one another in a vaguely
psychoacoustic sense.

> In rereading your prior message, you specified "extremely consistent, such that if you go 10 steps out into the 11-axis you might at most accrue a round-off error of a few cents." With minas, 11^10 accumulates to an error of 4 minas, which is slightly less than 2 cents. If that's acceptable to you, then it looks as if minas might fill the bill after all. In the survey of rational intervals that were tabulated in the Scala archive for the Sagittal notation project, I did not find any ratios within the 23 prime limit that suffered from any rounding error when the minas of the constituent primes were combined to give an aggregate total for each ratio.

That's fantastic. I'll have to see how sensitive all of this is to
that, but so far that looks great. And this is what happens if you
compare 10 * the mapping for 11 to the actual nearest mapping for
11^10?

> Why don't you give 2460 a try?
>
> BTW, what sort of tuning measure is given by the zeta integral tuning list?

It basically gives a list of ET's that successively give greater and
greater accuracy for all primes, regardless of limit. At least that's
what I think it does.

-Mike

🔗 genewardsmith <genewardsmith@sbcglobal.net>

2/25/2011 10:28:24 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> It basically gives a list of ET's that successively give greater and
> greater accuracy for all primes, regardless of limit. At least that's
> what I think it does.

That would be impossible. But it doesn't optimize any specific prime limit; it weighs smaller primes more than larger ones.

🔗 Mike Battaglia <battaglia01@gmail.com>

2/25/2011 10:32:19 PM

On Sat, Feb 26, 2011 at 1:28 AM, genewardsmith
<genewardsmith@sbcglobal.net> wrote:
>
> --- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> > It basically gives a list of ET's that successively give greater and
> > greater accuracy for all primes, regardless of limit. At least that's
> > what I think it does.
>
> That would be impossible. But it doesn't optimize any specific prime limit; it weighs smaller primes more than larger ones.

I thought everyone was talking about zeta error as giving us a measure
of tuning error that didn't require any limit? Or is it just that it
weighs smaller primes more than larger ones?

-Mike

🔗 Mike Battaglia <battaglia01@gmail.com>

2/25/2011 10:34:57 PM

On Sat, Feb 26, 2011 at 1:32 AM, Mike Battaglia <battaglia01@gmail.com> wrote:
> On Sat, Feb 26, 2011 at 1:28 AM, genewardsmith
> <genewardsmith@sbcglobal.net> wrote:
>>
>> --- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>>
>> > It basically gives a list of ET's that successively give greater and
>> > greater accuracy for all primes, regardless of limit. At least that's
>> > what I think it does.
>>
>> That would be impossible. But it doesn't optimize any specific prime limit; it weighs smaller primes more than larger ones.
>
> I thought everyone was talking about zeta error as giving us a measure
> of tuning error that didn't require any limit? Or is it just that it
> weighs smaller primes more than larger ones?

In fact, to get into this more, how exactly does it weigh smaller
primes more than larger primes? My half-assed knowledge of the zeta
function, the Mellin transform, and rect functions tells me that there
should be some kind of sinc-function weighting going on. Am I right?

-Mike

🔗 genewardsmith <genewardsmith@sbcglobal.net>

2/26/2011 12:53:06 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> In fact, to get into this more, how exactly does it weigh smaller
> primes more than larger primes?

This might help:

http://en.wikipedia.org/wiki/Riemann–Siegel_formula

Along the critical line, this gives a 1/sqrt(p) weighting, more or less.

> My half-assed knowledge of the zeta
> function, the Mellin transform, and rect functions tells me that there
> should be some kind of sinc-function weighting going on. Am I right?

Decades ago I was led to the zeta function starting from cosines, but starting with a weighted sum of sinc functions would certainly make sense. But I don't see any close connection with zeta(s+it).
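A toy illustration of that starting point, not Gene's actual construction and not the zeta measure itself: score an ET by a cosine of each prime's signed mapping error, weighted 1/sqrt(p) as above, so that smaller primes count for more. The function name and prime cutoff are arbitrary choices for the sketch.

```python
import math

PRIMES = [2, 3, 5, 7, 11, 13]

def cosine_score(n_edo):
    """Toy ET goodness measure: each prime contributes the cosine of
    2*pi times its signed mapping error, weighted 1/sqrt(p); the sum
    peaks at ETs whose primes all land near integer step counts."""
    total = 0.0
    for p in PRIMES:
        x = n_edo * math.log2(p)
        err = x - round(x)          # signed error in steps, in [-1/2, 1/2)
        total += math.cos(2 * math.pi * err) / math.sqrt(p)
    return total
```

This peaks at the familiar good ETs; for instance it scores 12 above both of its neighbors 11 and 13.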

🔗 Mike Battaglia <battaglia01@gmail.com>

2/27/2011 12:05:13 AM

On Sat, Feb 26, 2011 at 3:53 PM, genewardsmith
<genewardsmith@sbcglobal.net> wrote:
>
> --- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> > In fact, to get into this more, how exactly does it weigh smaller
> > primes more than larger primes?
>
> This might help:
>
> http://en.wikipedia.org/wiki/Riemannโ€“Siegel_formula
>
> Along the critical line, this gives a 1/sqrt(p) weighting, more or less.

OK, I think I understand. I see that the gamma function keeps turning
up here, although I'm not quite sure why.

> > My half-assed knowledge of the zeta
> > function, the Mellin transform, and rect functions tells me that there
> > should be some kind of sinc-function weighting going on. Am I right?
>
> Decades ago I was led to the zeta function starting from cosines, but starting with a weighted sum of sinc functions would certainly make sense. But I don't see any close connection with zeta(s+it).

After thinking about this, it wouldn't be sinc at all - I don't know
why I said that. But when you said starting from a sum of cosines -
would these not be exponentially warped cosines, something like
cos(e^x), since that's closer to what the basis vectors for the Mellin
transform are?

-Mike

🔗 genewardsmith <genewardsmith@sbcglobal.net>

2/27/2011 6:42:59 AM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> After thinking about this, it wouldn't be sinc at all - I don't know
> why I said that. But when you said starting from a sum of cosines -
> would these not be exponentially warped cosines, something like
> cos(e^x), since that's closer to what the basis vectors for the Mellin
> transform are?

Just plain old cosines. But I soon found I wanted to tweak that, and in the course of tweaking, the zeta function suggested itself. Then I went over the edge by crossing into the critical strip.

🔗 gdsecor <gdsecor@yahoo.com>

2/27/2011 8:26:28 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> On Sat, Feb 26, 2011 at 12:43 AM, gdsecor <gdsecor@...> wrote:
> >
> ...
> > In rereading your prior message, you specified "extremely consistent, such that if you go 10 steps out into the 11-axis you might at most accrue a round-off error of a few cents." With minas, 11^10 accumulates to an error of 4 minas, which is slightly less than 2 cents. If that's acceptable to you, then it looks as if minas might fill the bill after all. In the survey of rational intervals that were tabulated in the Scala archive for the Sagittal notation project, I did not find any ratios within the 23 prime limit that suffered from any rounding error when the minas of the constituent primes were combined to give an aggregate total for each ratio.
>
> That's fantastic. I'll have to see how sensitive all of this is to
> that, but so far that looks great. And this is what happens if you
> compare 10 * the mapping for 11 to the actual nearest mapping for
> 11^10?

Yes.

--George