
CAT tuning

🔗genewardsmith <genewardsmith@sbcglobal.net>

4/29/2010 6:13:12 PM

Suppose we have a val v for an n edo. Dividing by successive log2(prime) for the successive primes, we get a vector in some vector space which Graham had a name for which I have forgotten. These have coefficients which are all approximately n; call this thing V. We can apply our favorite measure of central tendency to these values clustered around n, and get a value m. Call the vector [m m m ... m] M, where the dimension is the same as that of V. M has the same measure of central tendency m as V, and hence V-M has an average of 0. If we now take a vector N = [n n n ... n], it has an average value of n. Hence V-M+N has an average of n. If we multiply through the successive values by log2(prime), and then scalar multiply by 1200/n, we get a tuning map valued in cents.
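
A quick Maple sketch of this recipe, using the midrange as the measure of central tendency for the 5-limit 12-et val (names are illustrative; the result matches the midrange figures given below):

v := [12, 19, 28]: n := 12:
logs := evalf([log[2](2), log[2](3), log[2](5)]):
V := [seq(v[i]/logs[i], i = 1..3)]:    # the weighted val; entries near n
m := (max(op(V)) + min(op(V)))/2:      # midrange of V
[seq((V[i] - m + n)*logs[i]*1200/n, i = 1..3)];
# [1197.6696, 1896.3063, 2794.5889]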

I propose calling this family of tunings "Center Additively Translated", or CAT tunings. What, you are probably asking, is the point of a CAT tuning? This emerges when we take wedge products. The wedge products of M, N, and N-M with each other vanish, and hence the wedge products of CAT tuned mappings are the same as those from the original Vs. What are these wedge products? If we multiply through by the corresponding pair of log2(prime) for each entry, we get the wedgie (up to sign). This weighted wedgie is something I've mentioned in connection with complexity measurement.

Of course, since the wedge product is bilinear, what you get by stretching and shrinking is also in proportion, but it isn't invariant with respect to the choice of mappings to wedge together. Whether this has any real significance I will leave for subsequent considerations to decide.

🔗Carl Lumma <carl@lumma.org>

4/29/2010 7:37:30 PM

Gene wrote:

>Suppose we have a val v for an n edo. Dividing by successive
>log2(prime) for the successive primes, we get a vector in some vector
>space which Graham had a name for which I have forgotten. These have
>coefficients which are all approximately n; call this thing V. We can
>apply our favorite measure of central tendency to these values
>clustered around n, and get a value m. Call the vector [m m m ... m]
>M, where the dimension is the same as that of V. M has the same measure
>of central tendency m as V, and hence V-M has an average of 0. If we
>now take a vector N = [n n n ... n], it has an average value of n.
>Hence V-M+N has an average of n. If we multiply through the successive
>values by log2(prime), and then scalar multiply by 1200/n, we get a
>tuning map valued in cents.
>I propose calling this family of tunings "Center Additively
>Translated", or CAT tunings. What, you are probably asking, is the
>point of a CAT tuning? This emerges when we take wedge products. The
>wedge products of M, N, and N-M with each other vanish, and hence the
>wedge products of CAT tuned mappings are the same as those from the
>original Vs. What are these wedge products? If we multiply through by
>the corresponding pair of log2(prime) for each entry, we get the
>wedgie (up to sign). This weighted wedgie is something I've mentioned
>in connection with complexity measurement.

Ok, so TOP and TOP-RMS are both CAT.

>Of course, since the wedge product is bilinear, what you get by
>stretching and shrinking is also in proportion, but it isn't invariant
>with respect to the choice of mappings to wedge together.

Hm.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

4/29/2010 10:11:04 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> Ok, so TOP and TOP-RMS are both CAT.

I get that TOP for 5-limit 12-et has a tuning map

[1197.6740698521903352, 1896.3172772659680307,
2794.5728296551107820]

whereas 5-limit midrange CAT tuning is

[1197.6695528043651638, 1896.3063285850080046,
2794.5888691828040084]

Obviously this huge difference, amounting to more than 0.016 cents in the case of the map from 5, makes the two tunings distinct.

🔗Graham Breed <gbreed@gmail.com>

4/30/2010 10:18:51 AM

On 30 April 2010 03:13, genewardsmith <genewardsmith@sbcglobal.net> wrote:
> Suppose we have a val v for an n edo. Dividing by
> successive log2(prime) for the successive primes,
> we get a vector in some vector space which Graham
> had a name for which I have forgotten. These have
> coefficients which are all approximately n; call this
> thing V. We can apply our favorite measure of central
> tendency to these values clustered around n, and get
> a value m. Call the vector [m m m ... m] M, where
> the dimension is the same as that of V. M has the same
> measure of central tendency m as V, and hence V-M
> has an average of 0. If we now take a vector N =
> [n n n ... n], it has an average value of n. Hence
> V-M+N has an average of n. If we multiply through the
> successive values by log2(prime), and then scalar
> multiply by 1200/n, we get a tuning map valued in cents.

I called it tuning space, because each point specifies the tuning of
the prime intervals. It isn't a temperament space because not all
points map to a system of lower rank. And it isn't a val space
because the vals (as I understand them) are only the lattice points.

Right, so, what's a "measure of central tendency"? Ah, "These
measures indicate the middle or center of a distribution."

http://writing.colostate.edu/guides/research/glossary

Or synonym of "average" according to Wikipedia. So the RMS and
mean-abs are measures of central tendency, and so is the (max+min)/2
used in the TOP-max formulae.

So, V - M = 0. Where did M come from again?

Perhaps this is a generalization of something I called zero mean
deviation, anyway. Or something more devious that I got to come out
as DZMS.

> I propose calling this family of tunings "Center Additively
> Translated", or CAT tunings. What, you are probably asking,
> is the point of a CAT tuning? This emerges when we take
> wedge products. The wedge products of M, N, and N-M with
> each other vanish, and hence the wedge products of CAT
> tuned mappings are the same as those from the original Vs.
> What are these wedge products? If we multiply through by
> the corresponding pair of log2(prime) for each entry, we get
> the wedgie (up to sign). This weighted wedgie is something
> I've mentioned in connection with complexity measurement.

Wedge products, right.

> Of course, since the wedge product is bilinear, what you get
> by stretching and shrinking is also in proportion, but it isn't
> invariant with respect to the choice of mappings to wedge
> together. Whether this has any real significance I will leave
> for subsequent considerations to decide.

I don't know. Zero mean deviation made some calculations easier, but
I didn't think it had any great significance.

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

4/30/2010 11:39:45 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> Right, so, what's a "measure of central tendency"? Ah, "These
> measures indicate the middle or center of a distribution."
>
> http://writing.colostate.edu/guides/research/glossary
>
> Or synonym of "average" according to Wikipedia. So the RMS and
> mean-abs are measures of central tendency, and so is the (max+min)/2
> used in the TOP-max formulae.

Better known under the names "mean", "median" and "midrange".

http://en.wikipedia.org/wiki/Arithmetic_mean
http://en.wikipedia.org/wiki/Median
http://en.wikipedia.org/wiki/Mid-range

🔗Graham Breed <gbreed@gmail.com>

4/30/2010 11:34:00 PM

On 30 April 2010 09:11, genewardsmith <genewardsmith@sbcglobal.net> wrote:

> I get that TOP for 5-limit 12-et has a tuning map
>
> [1197.6740698521903352, 1896.3172772659680307,
> 2794.5728296551107820]

Oh, shrunk not stretched. That means there's a bug on my website :(

> whereas 5-limit midrange CAT tuning is
>
> [1197.6695528043651638, 1896.3063285850080046,
> 2794.5888691828040084]

The unstretched tuning map is

[1200.0, 1900.0, 2800.0]

That's CAT stretched by a factor of about 1.001942039329696

The unstretched, weighted tuning map is

w = [1.0, 0.99897210982147422, 1.0049119688379171]

[max(w) + min(w)]/2 is the same number as above. So when you say
"apply our favorite measure of central tendency to these values" you
mean you divide by it. There we go, I know what you're talking about
now.
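
A one-line Maple check of that (a sketch; names illustrative):

w := evalf([1, (1900/1200)/log[2](3), (2800/1200)/log[2](5)]):
(max(op(w)) + min(op(w)))/2;
# 1.001942039329696...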

It looks like it is the zero mean deviation when you use a mean as the
average. But I can't find a mention of this in primerr.pdf. I'll
check the revision control tonight to see if it used to be there, as I
suspect it was. The point of it was that the RMS and standard
deviation of the weighted tuning maps end up the same.

Graham

🔗Graham Breed <gbreed@gmail.com>

4/30/2010 11:44:00 PM

I wrote:
> It looks like it is the zero mean deviation when you use a mean as the
> average.  But I can't find a mention of this in primerr.pdf.  I'll
> check the revision control tonight to see if it used to be there, as I
> suspect it was.  The point of it was that the RMS and standard
> deviation of the weighted tuning maps end up the same.

Weighted errors, rather. The standard deviation of the weighted
tuning map is always an approximation to the optimal error (and you
can generalize this to other measures of dispersion). The ZMD tuning
has the property that the RMS weighted error is equal to the STD
weighted error, and so approximates the TOP-RMS error. It would be
useful if the TOP-RMS tuning were difficult to calculate.

Graham

🔗Carl Lumma <carl@lumma.org>

5/1/2010 12:03:39 AM

Graham wrote:
>[max(w) + min(w)]/2 is the same number as above. So when you say
>"apply our favorite measure of central tendency to these values" you
>mean you divide by it. There we go, I know what you're talking about
>now.

See also: /tuning/topicId_88292.html#88394

-Carl

🔗Graham Breed <gbreed@gmail.com>

5/1/2010 12:52:43 AM

On 1 May 2010 11:03, Carl Lumma <carl@lumma.org> wrote:
> Graham wrote:
>>[max(w) + min(w)]/2 is the same number as above.  So when you say
>>"apply our favorite measure of central tendency to these values" you
>>mean you divide by it.  There we go, I know what you're talking about
>>now.
>
> See also:  /tuning/topicId_88292.html#88394

Ah, right, so TOP-max should be this kind of CAT. Why does Gene get
something different? Hmm. I think I got muddled up and was using TOP
instead of CAT before. So TOP-max really seems to be what I thought
CAT with half-range was. And I still don't understand CAT. It
doesn't even look like a valid stretch of 12-equal.

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/1/2010 1:48:48 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> Ah, right, so TOP-max should be this kind of CAT. Why does Gene get
> something different? Hmm. I think I got muddled up and was using TOP
> instead of CAT before. So TOP-max really seems to be what I thought
> CAT with half-range was. And I still don't understand CAT. It
> doesn't even look like a valid stretch of 12-equal.

Well, CAT is basically pointless I expect. But I still want to stretch weighted vals by dividing by the mean or the median.

🔗Carl Lumma <carl@lumma.org>

5/1/2010 2:51:28 PM

At 12:52 AM 5/1/2010, you wrote:

>> See also: /tuning/topicId_88292.html#88394
>
>Ah, right, so TOP-max should be this kind of CAT. Why does Gene get
>something different? Hmm. I think I got muddled up and was using TOP
>instead of CAT before. So TOP-max really seems to be what I thought
>CAT with half-range was. And I still don't understand CAT. It
>doesn't even look like a valid stretch of 12-equal.
>
> Graham

I get

< 1197.6740698521903 1896.3172772659682 2794.5728296551106 |

for 5-limit 12-tone TOP-max. Gene got

[ 1197.6740698521903352, 1896.3172772659680307, 2794.5728296551107820 ]

Other than the fact that he's clearly giving more digits than
his computing system can handle (:P), we agree. What are you
doing? This is a really simple calculation.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/1/2010 3:19:16 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> I get
>
> < 1197.6740698521903 1896.3172772659682 2794.5728296551106 |
>
> for 5-limit 12-tone TOP-max. Gene got
>
> [ 1197.6740698521903352, 1896.3172772659680307, 2794.5728296551107820 ]
>
> Other than the fact that he's clearly giving more digits than
> his computing system can handle (:P), we agree. What are you
> doing? This is a really simple calculation.

I'm taking the weighted val, dividing it by the midrange, unweighting it and multiplying by 1200. Top-mean and Top-median are the same as Top-midrange, only using the mean or the median instead. I don't think Top-median is a very good idea, though. Top-mean I like.
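
As a Maple sketch for 5-limit 12-et (names illustrative):

v := [12, 19, 28]:
logs := evalf([log[2](2), log[2](3), log[2](5)]):
w := [seq(v[i]/logs[i], i = 1..3)]:   # the weighted val
m := (max(op(w)) + min(op(w)))/2:     # midrange; swap in mean or median for Top-mean or Top-median
[seq(1200*(w[i]/m)*logs[i], i = 1..3)];
# [1197.674070, 1896.317277, 2794.572830]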

🔗Carl Lumma <carl@lumma.org>

5/1/2010 7:43:46 PM

Gene wrote:

>> I get
>> < 1197.6740698521903 1896.3172772659682 2794.5728296551106 |
>> Gene got
>> [ 1197.6740698521903352, 1896.3172772659680307, 2794.5728296551107820 ]
>>
>> Other than the fact that he's clearly giving more digits than
>> his computing system can handle (:P), we agree.
>
>I'm taking the weighted val, dividing it by the midrange, unweighting
>it and multiplying by 1200. Top-mean and Top-median are the same as
>Top-midrange, only using the mean or the median instead. I don't think
>Top-median is a very good idea, though. Top-mean I like.

I was kidding about your computing system, though I would be curious
to know whether maple or scheme is handling this better (you gave more
digits but they don't round to mine). I thought both are supposed to
have infinite precision.

Looping back to what I wrote on Tuning, Graham wants to multiply the weighted val w by mean(w)/mean-sq(w). That's like (a+b+c...)/(a^2+b^2+c^2...) where a, b & c are elements of w. You want to multiply by k/(a+b+c...), where k is the number of primes. In your original post you noted that w consists of numbers near n (the ET) and in fact, the more accurate the temperament the more this is true. So it's clear that for good temperaments, both your scalar and Graham's are close to 1/n and the resulting product is a vector filled with near-ones. The question is how the two scalars differ as the temperament gets worse.
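
Concretely, for 5-limit 12-et (a Maple sketch; names illustrative) the two scalars, multiplied by 12, give the TOP-RMS and zero-mean-deviation stretches Graham posts below:

w := evalf([12/log[2](2), 19/log[2](3), 28/log[2](5)]):
sGraham := add(w[i], i = 1..3)/add(w[i]^2, i = 1..3):  # mean/mean-sq
sGene := 3/add(w[i], i = 1..3):                        # 1/mean
12*sGraham, 12*sGene;
# 0.998700289, 0.998706981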

Paul proved that any bag of intervals has max weighted error less
than the TOP damage. Getting something similar for one of these CAT
variants seems critical if TOP aka TOP-max aka CAT-max is to be
unseated as the de facto standard. Kalle seemed to agree.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/1/2010 8:02:30 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> I was kidding about your computing system, though I would be curious
> to know whether maple or scheme is handling this better (you gave more
> digits but they don't round to mine). I thought both are supposed to
> have infinite precision.

You can set the precision level on Maple, but you need to tell it to round off to something sensible to get rid of junk digits and I didn't bother.

> Paul proved that any bag of intervals has max weighted error less
> than the TOP damage. Getting something similar for one of these CAT
> variants seems critical if TOP aka TOP-max aka CAT-max is to be
> unseated as the de facto standard. Kalle seemed to agree.

Firstly, these aren't CAT variants. Forget CAT, it isn't worth pursuing. Secondly, I think we do have something similar for TOP mean, but I haven't looked at the question really.

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/1/2010 9:58:23 PM

--- In tuning-math@yahoogroups.com, "genewardsmith" <genewardsmith@...> wrote:

> Firstly, these aren't CAT variants. Forget CAT, it isn't worth pursuing. Secondly, I think we do have something similar for TOP mean, but I haven't looked at the question really.

I've thought about this, and it occurs to me that while you can cook up a height function designed to work with TOP-mean, it shouldn't really matter, since its ratio with Tenney height will be bounded anyway. These different metrics are bounded with respect to each other, which is why they define the same topology. So the error of the TOP-mean tuning, divided by Tenney height, ought to be bounded.

🔗Graham Breed <gbreed@gmail.com>

5/1/2010 11:31:39 PM

On 2 May 2010 06:43, Carl Lumma <carl@lumma.org> wrote:

> I was kidding about your computing system, though I would be curious
> to know whether maple or scheme is handling this better (you gave more
> digits but they don't round to mine).  I thought both are supposed to
> have infinite precision.

You can't have infinite precision floats. You may have arbitrary
precision, but you have to set it. I forgot until now that Python
does come with this as standard in the Decimal type. So here's the
TOP-max tuning map for 5-limit 12-equal given a precision of 30
(whatever that means)

<1197.67406985219033511192385283,
1896.31727726596803059387943365, 2794.57282965511078192782232328|

I calculated it using the 2/(max(w)+min(w)) method and verified it to
most of these digits using different calculations:

0.00296417296773805329795313419
0.002964172967738053297953134194
0.00296417296773805329795313419681

One of those converted to cents/octave:

3.55700756128566395754376102800

The TOP-max scale stretch is

0.998061724876825279259936544028

TOP-RMS:

0.998700288812473324599129797606

TOP-like zero mean deviation (the thing Gene's talking about):

0.998706981175809944769329868549

The TOP-RMS error in cents/octave:

3.10636124174717351638458621504

The related standard deviation error:

3.11039344698373043622764925746

The related RMS error for the zero mean deviation:

3.10637164970614308020782531921

So that isn't the same as the STD error. But it is the tuning for
which RMS error and STD error are identical.
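
A Maple check of that identity (a sketch; names illustrative):

w := evalf([12/log[2](2), 19/log[2](3), 28/log[2](5)]):
zmd := 3/add(w[i], i = 1..3):              # zero mean deviation scalar
errs := [seq(zmd*w[i] - 1, i = 1..3)]:     # weighted errors
mu := add(errs[i], i = 1..3)/3:            # 0, by construction
sqrt(add(errs[i]^2, i = 1..3)/3), sqrt(add((errs[i] - mu)^2, i = 1..3)/3);
# RMS and STD print the same: the 3.10637 figure above, over 1200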

You could do the same calculations with bc, which is a standard Unix
utility that you can probably download for Windows, but I haven't.

Note that the standard deviation of the weighted errors is the same as
the standard deviation of the weighted tuning map. Sometimes you can
get better precision by working with the errors.

> Looping back to what I wrote on Tuning, Graham wants to multiply the
> weighted val w by mean(w)/mean-sq(w).  That's like
> (a+b+c...)/(a^2+b^2+c^2...) where a b & c are elements of w.  You want
> to multiply by k/(a+b+c...), where k is the number of primes. In your original post you noted that w
> consists of numbers near n (the ET) and in fact, the more accurate the
> temperament the more this is true.  So it's clear that for good
> temperaments, both your scalar and Graham's are close to 1/n and the
> resulting product is a vector filled with near-ones.  The question is
> how the two scalars differ as the temperament gets worse.

Multiplying the real val is more to the point. I don't know why Gene
wants to multiply the weighted val and then un-weight it. Also, w is
the weighted tuning map, not the weighted val. You can get the
optimal errors straight from the weighted val but not the scale
stretches.

I've noticed on my website, now that it's fixed, that some of the
TOP-RMS stretches are looking very similar to the TOP-RMS errors. It
may be this approximation at work.

Graham

🔗Carl Lumma <carl@lumma.org>

5/1/2010 11:42:21 PM

Graham wrote:

>> I was kidding about your computing system, though I would be curious
>> to know whether maple or scheme is handling this better (you gave more
>> digits but they don't round to mine). I thought both are supposed to
>> have infinite precision.
>
>[snip] So here's the TOP-max tuning map for 5-limit 12-equal given a
>precision of 30 (whatever that means)
>
><1197.67406985219033511192385283,
>1896.31727726596803059387943365, 2794.57282965511078192782232328|

Drat. Both Gene and I gave junk digits, but his were better
than mine.

>You can't have infinite precision floats. You may have arbitrary
>precision, but you have to set it.

I'm pretty sure I could design a system that never gave incorrect
digits.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/1/2010 11:51:21 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> I'm pretty sure I could design a system that never gave incorrect
> digits.

http://en.wikipedia.org/wiki/Interval_arithmetic

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/1/2010 11:53:44 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> Multiplying the real val is more to the point. I don't know why Gene
> wants to multiply the weighted val and then un-weight it.

What do you propose as an alternative Euclidean norm on tuning space?

🔗Graham Breed <gbreed@gmail.com>

5/2/2010 12:13:34 AM

On 2 May 2010 10:53, genewardsmith <genewardsmith@sbcglobal.net> wrote:

> What do you propose as an alternative Euclidean norm on tuning space?

If you're using that space, you can find the optimal stretch as the
closest approach on the val line to the JI point. But for general
arithmetic there's no need to do that.

I invented a term there: val line. It's the line passing through the
origin and the val. Each point on it is a different tuning mapped by
the val. I could also call it a temperament line as each point is a
temperament, most of them stupidly inaccurate.

The geometry is simple: the optimal tuning of any temperament class is
its closest approach to the JI point. The TOP-RMS error is the
shortest distance from the JI point to the temperament line/(hyper-)surface.
Alternatively, it's the angle between the hypersurface and the JI
line. (All of these definitions will be off by a constant.)

If you want to fix the octave stretch, projecting onto the unit
sphere, or a hyperplane passing through the JI point orthogonal to the
origin, might work. The standard deviation error is probably one of
these. But beyond the rank 2 case I don't think it makes anything
simpler. There's extra work in throwing away one dimension and then
putting it back again.

For a single norm, you know my parametric badness, which is a linear
transformation of the tuning space.

The complexity, in case you missed that, is the distance from the
origin to the val. It generalizes to higher ranks no problem.

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/2/2010 7:49:24 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> On 2 May 2010 10:53, genewardsmith <genewardsmith@...> wrote:
>
> > What do you propose as an alternative Euclidean norm on tuning space?
>
> If you're using that space, you can find the optimal stretch as the
> closest approach on the val line to the JI point.

I get it, I think--this is where your TOP-RMS stretch comes from.

> But for general
> arithmetic there's no need to do that.

What's general arithmetic?

> I invented a term there: val line. It's the line passing through the
> origin and the val. Each point on it is a different tuning mapped by
> the val. I could also call it a temperament line as each point is a
> temperament, most of them stupidly inaccurate.

You could call it a point in projective space and I wouldn't object.

> For a single norm, you know my parametric badness, which is a linear
> transformation of the tuning space.

I know that, do I?

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/2/2010 12:29:41 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> If you want to fix the octave stretch, projecting onto the unit
> sphere, or a hyperplane passing through the JI point orthogonal to the
> origin, might work. The standard deviation error is probably one of
> these. But beyond the rank 2 case I don't think it makes anything
> simpler. There's extra work in throwing away one dimension and then
> putting it back again.

I proposed "Euclidean tuning" for what you seem to want: that fixes the search for a nearest point to the JI point to a flat whose first coordinate has value 1. Is this a good name? Anyway, it doesn't seem to involve any extra work throwing things away and then putting them back in.

🔗Graham Breed <gbreed@gmail.com>

5/2/2010 11:00:24 PM

On 2 May 2010 18:49, genewardsmith <genewardsmith@sbcglobal.net> wrote:

> What's general arithmetic?

I mean if you have to calculate the stretched tuning, there's no need
to appeal to the geometry. It's easier to stretch the unweighted
tuning map than to stretch the weighted one and then unweight it.

>> I invented a term there: val line.  It's the line passing through the
>> origin and the val.  Each point on it is a different tuning mapped by
>> the val.  I could also call it a temperament line as each point is a
>> temperament, most of them stupidly inaccurate.
>
> You could call it a point in projective space and I wouldn't object.

I don't think that's a useful approximation. You need the full space
to calculate complexity. And there are different kinds of
projections. One gives simple badness as error*complexity.

>> For a single norm, you know my parametric badness, which is a linear
>> transformation of the tuning space.
>
> I know that, do I?

I gave you a link: http://x31eq.com/badness.pdf

It's only in two columns, so zoom in and try to understand equation 3
if you can't read the whole thing.

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/2/2010 11:40:31 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> >> I invented a term there: val line.  It's the line passing through the
> >> origin and the val.  Each point on it is a different tuning mapped by
> >> the val.  I could also call it a temperament line as each point is a
> >> temperament, most of them stupidly inaccurate.

> > You could call it a point in projective space and I wouldn't object.
>
> I don't think that's a useful approximation.

That was a kind of a joke. It's not an approximation, it's a definition:

http://en.wikipedia.org/wiki/Projective_space

> > I know that, do I?
>
> I gave you a link: http://x31eq.com/badness.pdf
>
> It's only in two columns, so zoom in and try to understand equation 3
> if you can't read the whole thing.

Thanks. I've been amusing myself with "Measures of Composite Intervals" so I should be able to read it.

🔗Graham Breed <gbreed@gmail.com>

5/2/2010 11:54:31 PM

On 3 May 2010 10:40, genewardsmith <genewardsmith@sbcglobal.net> wrote:
>
>
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>>
>> >> I invented a term there: val line.  It's the line passing through the
>> >> origin and the val.  Each point on it is a different tuning mapped by
>> >> the val.  I could also call it a temperament line as each point is a
>> >> temperament, most of them stupidly inaccurate.
>
>> > You could call it a point in projective space and I wouldn't object.
>>
>> I don't think that's a useful approximation.
>
> That was a kind of a joke. It's not an approximation, it's a definition:
>
> http://en.wikipedia.org/wiki/Projective_space

It was probably me picking the wrong word. You really do need the
full space to get complexity. And there really is a projective space
that gives error*complexity badness.

> Thanks. I've been amusing myself with "Measures of Composite
> Intervals" so I should be able to read it.

That one does come in an alternative format:

http://x31eq.com/composite_onecol.pdf

You may also notice the same formula in another guise.

Graham

🔗Graham Breed <gbreed@gmail.com>

5/3/2010 11:43:49 PM

On 3 May 2010 10:54, Graham Breed <gbreed@gmail.com> wrote:
> On 3 May 2010 10:40, genewardsmith <genewardsmith@sbcglobal.net> wrote:

>> That was a kind of a joke. It's not an approximation, it's a definition:
>>
>> http://en.wikipedia.org/wiki/Projective_space
>
> It was probably me picking the wrong word.  You really do need the
> full space to get complexity.  And there really is a projective space
> that gives error*complexity badness.

In fact, there is a definition that matches what you were talking
about there. So badness space is a projection into another Euclidean
space, not a projective space.

How does projective space make anything simpler?

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/4/2010 12:14:59 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> How does projective space make anything simpler?

There are lots of contexts where projective space makes things simpler, but I doubt this is one of them.

🔗Graham Breed <gbreed@gmail.com>

5/6/2010 7:17:21 AM

On 02/05/2010, genewardsmith <genewardsmith@sbcglobal.net> wrote:
>
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
>> If you want to fix the octave stretch, projecting onto the unit
>> sphere, or a hyperplane passing through the JI point orthogonal to the
>> origin, might work. The standard deviation error is probably one of
>> these. But beyond the rank 2 case I don't think it makes anything
>> simpler. There's extra work in throwing away one dimension and then
>> putting it back again.
>
> I proposed "Euclidean tuning" for what you seem to want: that fixes the
> search for a nearest point to the JI point to a flat whose first coordinate
> has value 1. Is this a good name? Anyway, it doesn't seem to involve any
> extra work throwing things away and then putting them back in.

What's "a flat"? I still don't understand this. It does seem to
involve extra work in that you've fixed the first coordinate of
something to 1. TOP-RMS still looks like the obvious Euclidean
tuning.

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/6/2010 1:10:25 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> > I proposed "Euclidean tuning" for what you seem to want: that fixes the
> > search for a nearest point to the JI point to a flat whose first coordinate
> > has value 1. Is this a good name? Anyway, it doesn't seem to involve any
> > extra work throwing things away and then putting them back in.

> What's "a flat"?

http://en.wikipedia.org/wiki/Flat_%28geometry%29

> I still don't understand this. It does seem to
> involve extra work in that you've fixed the first coordinate of
> something to 1. TOP-RMS still looks like the obvious Euclidean
> tuning.

Of course it is, but it doesn't have pure octaves.

🔗Carl Lumma <carl@lumma.org>

5/6/2010 1:29:02 PM

At 01:10 PM 5/6/2010, you wrote:
>> TOP-RMS still looks like the obvious Euclidean
>> tuning.
>
>Of course it is, but it doesn't have pure octaves.

Forgive me for butting in, but are you guys talking about
the error or the complexity? When I read Euclidean I think
of Euclidean complexity, e.g. vs. taxicab. When I read RMS
I think of error. I can imagine reasons for relating the two
but ultimately, if we wanted to satisfy Paul with a taxicab
complexity and, say, George Secor with a sum-abs error,
there's nothing preventing this... is there?

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/6/2010 4:42:01 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> Forgive me for butting in, but are you guys talking about
> the error or the complexity?

Given that the dual space to a vector space with Euclidean norm also has Euclidean norm, an interesting question. The Euclidean distance to the JIP from a point in tuning space (ie weighted vals, etc) is an error measure. The Euclidean norm of an "unweighted" monzo, which is in the dual space to the tuning space, is a height or complexity measure.

There are various reasons for liking Euclidean norms. One I've just now been playing with is projecting an interval to the subspace tempering out a list of intervals. I've done that before, but if we are now all happy with the RMS/Euclidean Tenney analogue, we can use it, and it seems to work fine.

🔗Carl Lumma <carl@lumma.org>

5/6/2010 6:42:50 PM

Gene wrote:

>> Forgive me for butting in, but are you guys talking about
>> the error or the complexity?
>
>Given that the dual space to a vector space with Euclidean norm also
>has Euclidean norm, an interesting question. The Euclidean distance to
>the JIP from a point in tuning space (ie weighted vals, etc) is an
>error measure. The Euclidean norm of an "unweighted" monzo, which is
>in the dual space to the tuning space, is a height or complexity measure.

It's amazing but I completely understand this.

>There are various reasons for liking Euclidean norms. One I've just
>now been playing with is projecting an interval to the subspace
>tempering out a list of intervals. I've done that before, but if we
>are now all happy with the RMS/Euclidean Tenney analogue, we can use
>it, and it seems to work fine.

Lost you here. If you can link me to a post describing the
technique, I'll read it. In particular, not sure how such a
technique would depend on the particular flavor of TOP used.

I know Euclidean norms are nice because there's lots of analysis
you can do with the Pythagorean theorem, etc.

I can't tell if Graham's convinced you that TOP-RMS is better
than TOP-ZMD, but if so I'd like to know why. Of ZMD, you wrote
on tuning that "Normalizing by the mean is associated to the
Euclidean norm and minimizes the sum of squared error" which sounds
like what Graham's now claiming of TOP-RMS. Can you explain the
difference if you've figured it out?

Thanks,

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/7/2010 1:56:55 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> >There are various reasons for liking Euclidean norms. One I've just
> >now been playing with is projecting an interval to the subspace
> >tempering out a list of intervals. I've done that before, but if we
> >are now all happy with the RMS/Euclidean Tenney analogue, we can use
> >it, and it seems to work fine.

> Lost you here. If you can link me to a post describing the
> technique, I'll read it. In particular, not sure how such a
> technique would depend on the particular flavor of TOP used.

The technique, orthogonal projection, requires an inner product (Euclidean) space:

http://en.wikipedia.org/wiki/Projection_%28linear_algebra%29

The article is, I think, harder than it needs to be and you may find my Maple code easier to digest:

dot := proc(u, v)
  # generic dot product of lists u and v
  local i, s;
  if not nops(u) = nops(v) then RETURN('dimensions') fi;
  s := 0;
  for i from 1 to nops(u) do
    s := s + u[i]*v[i]
  od;
  expand(s)
end:

rmsproj := proc(u, s)
  # projection of interval u by comma list s
  # (plim, vecl and unweight are helper procedures defined elsewhere)
  local i, m, q, t, v, z;
  m := plim(u);                      # largest prime limit involved
  for i from 1 to nops(s) do
    m := max(m, plim(s[i]))
  od;
  v := unweight(vecl(u, m));
  for i from 1 to nops(s) do
    z := unweight(vecl(s[i], m));
    v := v + q[i]*z                  # add an unknown multiple of each comma
  od;
  v := expand(v);
  t := {};
  for i from 1 to nops(s) do
    z := unweight(vecl(s[i], m));
    t := t union {dot(z, v)}         # require v orthogonal to every comma
  od;
  z := solve(t);                     # solve {dot = 0} for the q[i]
  subs(z, v)
end:

> I can't tell if Graham's convinced you that TOP-RMS is better
> than TOP-ZMD, but if so I'd like to know why.

I liked ZMD because it was so easy to compute and I guessed it would be close to what Graham was doing, but now that I know what Graham is doing, I know it is also pretty easy to compute. The justification for using TOP-RMS seems straightforward, but in practical terms there's not much to choose between them.

> Of ZMD, you wrote
> on tuning that "Normalizing by the mean is associated to the
> Euclidean norm and minimizes the sum of squared error" which sounds
> like what Graham's now claiming of TOP-RMS. Can you explain the
> difference if you've figured it out?

ZMD forces the average deviation from the JIP to be zero. For TOP-RMS this average is very nearly, but not exactly, equal to zero.
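
A quick Maple check of this for 5-limit 12-et (a sketch; names illustrative):

w := evalf([12/log[2](2), 19/log[2](3), 28/log[2](5)]):
zmd := 3/add(w[i], i = 1..3):                       # divide by the mean
rms := add(w[i], i = 1..3)/add(w[i]^2, i = 1..3):   # TOP-RMS scalar
add(zmd*w[i] - 1, i = 1..3)/3;   # exactly 0 (up to rounding)
add(rms*w[i] - 1, i = 1..3)/3;   # roughly -0.7e-5: nearly, not exactly, zero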

🔗Graham Breed <gbreed@gmail.com>

5/7/2010 9:56:44 AM

On 7 May 2010 00:29, Carl Lumma <carl@lumma.org> wrote:

> Forgive me for butting in, but are you guys talking about
> the error or the complexity?  When I read Euclidean I think
> of Euclidean complexity, e.g. vs. taxicab.  When I read RMS
> I think of error.  I can imagine reasons for relating the two
> but ultimately, if we wanted to satisfy Paul with a taxicab
> complexity and, say, George Secor with a sum-abs error,
> there's nothing preventing this...  is there?

We're talking about error, but it goes for complexity as well.

There are advantages to measuring error and complexity in similar
ways. Like I show in that good old primerr.pdf, with some kind of RMS
complexity, the optimal RMS error is a simple "badness" function
divided by the complexity. That is, error*complexity badness is
easier to calculate than the error, and gives the same result whether
or not you use the octave-equivalent approximations. Also, badness
and complexity are both quadratic forms, which means you can mix them
together to get my parametric badness. That has some nice algebraic
properties which make the searches easier -- hence my website requires
less user input and returns the answers pretty quickly.

I don't know how to get this working with taxicab geometry. I don't
even know how to get complexity working beyond rank 2 beyond arbitrary
functions of the wedgie that may or may not have some logic I don't
understand.

Graham

🔗Graham Breed <gbreed@gmail.com>

5/7/2010 10:03:58 AM

On 7 May 2010 12:56, genewardsmith <genewardsmith@sbcglobal.net> wrote:

> The technique, orthogonal projection, requires an inner product (Euclidean) space:
>
> http://en.wikipedia.org/wiki/Projection_%28linear_algebra%29

Oh, orthogonal projections, there you go. The simple answer, Carl, to
why standard deviations and covariances get used is that they happen
to be the same formula as orthogonal projections.

The STD error is a projection orthogonal to the JI line. That's the
same as measuring the shortest distance from the temperament point to
the JI line, which I think is what Gene said above in the thread. The
TOP-RMS error is the shortest distance from the JI point to the
temperament line/plane/whatever. One of the differences between them
is sine vs tangent, which doesn't matter because the angle's so small.
The other is that the TOP-RMS tuning will be a different distance
from the origin, corresponding to the stretch.

If I'd known about orthogonal projections before I worked through the
algebra the hard way I could have saved myself a lot of trouble.

Graham

🔗Carl Lumma <carl@lumma.org>

5/7/2010 11:15:19 AM

Gene wrote:
>ZMD forces the average deviation from the JIP to be zero. For TOP-RMS
>this average is very nearly, but not exactly, equal to zero.

Thank you. -C.

🔗Carl Lumma <carl@lumma.org>

5/7/2010 11:16:28 AM

Graham wrote:

>I don't know how to get this working with taxicab geometry. I don't
>even know how to get complexity working beyond rank 2 beyond arbitrary
>functions of the wedgie that may or may not have some logic I don't
>understand.

Are you referring to Paul's weighted wedgie complexity? -Carl

🔗Graham Breed <gbreed@gmail.com>

5/7/2010 10:16:51 PM

On 7 May 2010 22:16, Carl Lumma <carl@lumma.org> wrote:
> Graham wrote:
>
>>I don't know how to get this working with taxicab geometry.  I don't
>>even know how to get complexity working beyond rank 2 beyond arbitrary
>>functions of the wedgie that may or may not have some logic I don't
>>understand.
>
> Are you referring to Paul's weighted wedgie complexity?  -Carl

There are two different wedgie complexities: max-abs and mean-abs.
For the rank 2 case they give very consistent results (one is about
double the other). I don't know what happens after that. I remember
Gene and I getting different rank 3 orderings which was partly to do
with the choices of complexity. RMS and minimax will disagree
sometimes. My guess is that max-abs is the correct way to do it,
mathematically speaking.

In practical terms, I still like the old odd-limit complexity. But I
don't know how to generalize that to higher ranks. I've been
thinking, though, about a single generator orthogonal to the
Pythagorean plane, like the generator of a rank 2 temperament is
orthogonal to the octave.

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/7/2010 11:34:53 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> There are advantages to measuring error and complexity in similar
> ways. Like I show in that good old primerr.pdf, with some kind of RMS
> complexity, the optimal RMS error is a simple "badness" function
> divided by the complexity.

Can you give an abstract characterization of this without invoking matrices?

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/7/2010 11:39:57 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> The STD error is a projection orthogonal to the JI line.

Is the JI line the line through the origin and the JI point or some other line? What is this projection a projection of?

> That's the
> same as measuring the shortest distance from the temperament point to
> the JI line, which I think is what Gene said above in the thread.

I thought we wanted the shortest distance from the temperament line to the JI point.

> The
> TOP-RMS error is the shortest distance from the JI point to the
> temperament line/plane/whatever.

And I guess I was right, so what were you saying above?

> One of the differences between them
> is sine vs tangent, which doesn't matter because the angle's so small.

What are "them"?

🔗Graham Breed <gbreed@gmail.com>

5/7/2010 11:49:17 PM

On 8 May 2010 10:39, genewardsmith <genewardsmith@sbcglobal.net> wrote:
>
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
>> The STD error is a projection orthogonal to the JI line.
>
> Is the JI line the line through the origin and the JI point
> or some other line? What is this projection a projection of?

Yes, that's the JI line.

It's a projection of the weighted deviations from JI. Which entails
an error space instead of tuning space. So I don't know how the
geometry works out.

>
>  That's the
>> same as measuring the shortest distance from the temperament point to
>> the JI line, which I think is what Gene said above in the thread.
>
> I thought we wanted the shortest distance from the
> temperament line to the JI point.

That's TOP-RMS error.

> The
>> TOP-RMS error is the shortest distance from the JI point to the
>> temperament line/plane/whatever.
>
> And I guess I was right, so what were you saying above?

STD error.

>  One of the differences between them
>> is sine vs tangent, which doesn't matter because the angle's so small.
>
> What are "them"?

TOP-RMS error and STD error.

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/8/2010 12:00:39 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> On 8 May 2010 10:39, genewardsmith <genewardsmith@...> wrote:
> >
> > --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@> wrote:
> >
> >> The STD error is a projection orthogonal to the JI line.
> >
> > Is the JI line the line through the origin and the JI point
> > or some other line? What is this projection a projection of?
>
> Yes, that's the JI line.
>
> It's a projection of the weighted deviations from JI. Which entails
> an error space instead of tuning space. So i don't know how the
> geometry works out.

Why isn't that Tuning Map - JIP, which is in tuning space?

> STD error.

I guess you are taking the average of the coordinates of the tuning map and finding the distance from JIP times this average to the tuning map, which is where the JIP line comes in? This would be looking at Tuning Map - average * JIP, still a point in tuning space.
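
For what it's worth, that reading checks out numerically in Maple (a sketch; names illustrative): the nearest point on the JIP line to the unstretched weighted tuning map is mean(w) times the JIP, and the RMS-style distance to it is the STD error.

w := evalf([1, (1900/1200)/log[2](3), (2800/1200)/log[2](5)]):
mu := add(w[i], i = 1..3)/3:            # nearest point is mu*[1, 1, 1]
1200*sqrt(add((w[i] - mu)^2, i = 1..3)/3);
# 3.11039..., the STD error quoted earlier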

🔗Graham Breed <gbreed@gmail.com>

5/8/2010 12:23:41 AM

On 8 May 2010 10:34, genewardsmith <genewardsmith@sbcglobal.net> wrote:
>
>
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
>> There are advantages to measuring error and complexity in similar
>> ways.  Like I show in that good old primerr.pdf, with some kind of RMS
>> complexity, the optimal RMS error is a simple "badness" function
>> divided by the complexity.
>
> Can you give an abstract characterization of this without invoking matrices?

I can invoke exterior algebra: ||Q^V|| / ||Q|| ||V||

Or I can invoke geometry. The complexity is the distance from the
origin to the val point in tuning space. The TOP-RMS error is the
sine of the angle between the JI line and the val line (or JI and val
vectors) in the same space.

You can construct a right angled triangle to get that angle. I can't
draw diagrams here, so you'll have to imagine it. Call it ABC. A is
the val, B is the origin, C is a point on the JI line. The length AB
is the distance from the origin to the val, hence the complexity.
I'll call it k.

Let's look at another triangle, involving the JI point and the optimal
tuning. There, the tuning is the nearest point on the val/tuning line
to the JI point. You can set the geometry so that the JI point is a
distance of 1 from the origin. The distance from the JI point to the
val line is the error. Call it e. For this to be minimized, there
must be a right angle between the val line and this far side of the
triangle. Call the angle between the JI and val lines, the smallest
angle in both triangles, t. Trigonometry gives us

sin(t) = e/1

e = sin(t)

Let's go back to the first triangle. As the val point is fixed, it's
the JI line we need the closest approach to, so the right angle is in
a different place. But the small angle's the same, so

sin(t) = AC/AB

e = AC/k

AC = ek

So the side of the triangle opposite the origin has a length equal to
error*complexity badness.

To find it, you do an orthogonal projection parallel to the JI line
onto a plane normal to the JI line including the origin. This is what
standard orthogonal projections do. The length of AC is unchanged,
but point C (which wasn't interesting) becomes the origin. So AC, the
simple badness, is the distance from the origin to the val point under
this projection.
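
Putting numbers to the triangle for 5-limit 12-equal (a Maple sketch; names illustrative, with the complexity left as the raw Euclidean length):

V := evalf([12/log[2](2), 19/log[2](3), 28/log[2](5)]):   # the val, point A
jv := add(V[i], i = 1..3): vv := add(V[i]^2, i = 1..3):
k := sqrt(vv);                   # |AB|, the complexity: 20.8116
e := sqrt(1 - jv^2/(3*vv));      # sin(t): 0.00258863, i.e. the 3.10636
                                 # cents/octave TOP-RMS error from earlier
e*k;                             # |AC|, error*complexity: 0.053874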

Graham

🔗Carl Lumma <carl@lumma.org>

5/8/2010 2:00:02 AM

Gene wrote:

>> The STD error is a projection orthogonal to the JI line.
>
>Is the JI line the line through the origin and the JI point or some
>other line? What is this projection a projection of?
>
>> That's the
>> same as measuring the shortest distance from the temperament point to
>> the JI line, which I think is what Gene said above in the thread.
>
>I thought we wanted the shortest distance from the temperament line to
>the JI point.
>
>> The
>> TOP-RMS error is the shortest distance from the JI point to the
>> temperament line/plane/whatever.
>
>And I guess I was right, so what were you saying above?

I have a draft of an almost identical message in my outbox. -Carl

🔗Carl Lumma <carl@lumma.org>

5/8/2010 2:01:59 AM

Graham wrote

>> That's the
>>> same as measuring the shortest distance from the temperament point to
>>> the JI line, which I think is what Gene said above in the thread.
>>
>> I thought we wanted the shortest distance from the
>> temperament line to the JI point.
>
>That's TOP-RMS error.
>
>>> The
>>> TOP-RMS error is the shortest distance from the JI point to the
>>> temperament line/plane/whatever.
>>
>> And I guess I was right, so what were you saying above?
>
>STD error.

So these live in different spaces... -Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/8/2010 2:04:07 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> > Can you give an abstract characterization of this without invoking matrices?
>
> I can invoke exterior algebra: ||Q^V|| / ||Q|| ||V||

Cool. You didn't define Q and V, but I'm guessing Q is the JI point and V is a weighted val. The above can be rewritten
||(Q/||Q||) ^ (V/||V||)||

a wedge between two unit vectors. Hence, this would be your measure of TOP-RMS error. Right so far?

> Or I can invoke geometry. The complexity is the distance from the
> origin to the val point in tuning space. The TOP-RMS error is the
> sine of the angle between the JI line and the val line (or JI and val
> vectors) in the same space.

Complexity is ||V|| and error is ||Q^V||/(||Q|| ||V||)? That would make error*complexity ||(Q/||Q||) ^ V||, which would be your badness measure?

🔗Carl Lumma <carl@lumma.org>

5/8/2010 2:09:58 AM

Graham wrote:

>The complexity is the distance from the
>origin to the val point in tuning space.

Do vals live in tuning space? We had this debate in the past.
Paul argued that vals with noninteger entries were just points
off the val lattice in tuning space, whereas Gene, if I understood,
argued the vals should live in a space with only integer coordinates.

But taking what you say above at face value -- the complexity
depends upon the tuning? That doesn't seem right.

-Carl

🔗Graham Breed <gbreed@gmail.com>

5/8/2010 2:14:32 AM

On 8 May 2010 13:04, genewardsmith <genewardsmith@sbcglobal.net> wrote:
>
>
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
>> > Can you give an abstract characterization of this without invoking matrices?
>>
>> I can invoke exterior algebra: ||Q^V|| / ||Q|| ||V||
>
> Cool. You didn't define Q and V, but I'm guessing Q is the JI point and V is a weighted val. The above can be rewritten
> ||(Q/||Q||) ^ (V/||V||)||

No, the other way round. But it doesn't really matter, does it? I
did define them in another message, as well.

Whatever it is doesn't have to be a val. It could be the "wedgie" for
any rank temperament, as a multivector.

> a wedge between two unit vectors. Hence, this would be your measure of TOP-RMS error. Right so far?

Yes.

>> Or I can invoke geometry.  The complexity is the distance from the
>> origin to the val point in tuning space.  The TOP-RMS error is the
>> sine of the angle between the JI line and the val line (or JI and val
>> vectors) in the same space.
>
> Complexity is ||V|| and error is ||Q^V||/(||Q|| ||V||)? That would make error*complexity ||(Q/||Q||) ^ V||, which would be your badness measure?

Simple badness is ||Q^V||/||whatever we call the JI point|| where the
denominator is a constant for the given prime limit. Unless you mess
it up by starting with unit vectors. (A unit JI vector is fine.)

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/8/2010 2:26:06 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> Do vals live in tuning space? We had this debate in the past.
> Paul argued that vals with noninteger entries were just points
> off the val lattice in tuning space, whereas Gene, if I understood,
> argued the vals should live in a space with only integer coordinates.

Not exactly. Vals are, to start out with, elements of a finitely generated free abelian group. But embed a finitely generated free abelian group in a real vector space and you get, by definition, a lattice.

🔗Graham Breed <gbreed@gmail.com>

5/8/2010 4:54:27 AM

On 8 May 2010 13:09, Carl Lumma <carl@lumma.org> wrote:

> Do vals live in tuning space? We had this debate in the past.
> Paul argued that vals with noninteger entries were just points
> off the val lattice in tuning space, whereas Gene, if I understood,
> argued the vals should live in a space with only integer coordinates.

You can say that vals live in a val lattice and tunings exist in
tuning space. But, still, certain points in tuning space look very
much like vals.

> But taking what you say above at face value -- the complexity
> depends upon the tuning? That doesn't seem right.

If you define complexity as the number of consonances per octave, it
does depend on the tuning of the octave.

The val as a point in tuning space isn't a tuning. It does define a
set of tunings, which lie on the line in tuning space linking the val
to the origin. One of those points is the TOP-RMS tuning.

Say you have 12 note equal temperament. You can find a false val on
the right line but exactly 12 units from the origin. You can still
construct a triangle to get the right badness, though. Divide by the
complexity of the false val and you get the STD error. Scale down by
12 units and you have a point corresponding to the STD tuning with a
distance to the JI point corresponding to the STD error. So there is
a relationship between vals, tunings, complexity, and error.
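
Numerically this checks out for 12-equal, with the complexity scaled as an RMS, i.e. divided by the square root of the number of primes (a Maple sketch; names illustrative):

V := evalf([12/log[2](2), 19/log[2](3), 28/log[2](5)]):
jv := add(V[i], i = 1..3): vv := add(V[i]^2, i = 1..3):
badness := sqrt(1 - jv^2/(3*vv))*sqrt(vv/3):   # error times RMS complexity
1200*badness/12;
# 3.11039..., the STD error quoted earlier in the thread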

Graham

🔗Graham Breed <gbreed@gmail.com>

5/8/2010 5:00:08 AM

On 8 May 2010 13:01, Carl Lumma <carl@lumma.org> wrote:
> Graham wrote

>>That's TOP-RMS error.

>>STD error.
>
> So these live in different spaces...  -Carl

It depends on what you mean by "space". If it's a set of a points
with certain algebraic properties and a metric, then they're different
spaces, because the two metrics are different. But the sets of points
are the same.

Error space would be tuning space translated so that the JI point is
at the origin. I think STD error comes from a projection in error
space but it must be some transformation of tuning space as well.
Simple badness is a distance measured in a projection of tuning space.

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/8/2010 1:28:55 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> You can say that vals live in a val lattice and tunings exist in
> tuning space. But, still, certain points in tuning space look very
> much like vals.

That's because the lattice is a lattice in tuning space.

> The val as a point in tuning space isn't a tuning. It does define a
> set of tunings, which lie on the line in tuning space linking the val
> to the origin.

The projective val. I used to think Paul liked those and you didn't.

> Say you have 12 note equal temperament. You can find a false val on
> the right line but exactly 12 units from the origin.

What's a false val?

🔗Carl Lumma <carl@lumma.org>

5/8/2010 2:48:41 PM

Graham wrote:

>> But taking what you say above at face value -- the complexity
>> depends upon the tuning? That doesn't seem right.
>
>If you define complexity as the number of consonances per octave, it
>does depend on the tuning of the octave.

What kind of complexity is this?? It sounds more like the inverse
of complexity for a constant number of notes/octave.

>The val as a point in tuning space isn't a tuning. It does define a
>set of tunings, which lie on the line in tuning space linking the val
>to the origin. One of those points is the TOP-RMS tuning.

We should say how the axes are scaled. I can think of

* weighted log units, JIP is [1 1 1 ...]

* constant log units, JIP is [1200 1901.955 ...]

* relative log units, JIP is [12 19.01955 27.86314 ...]
or [31 49.13384 71.97977 ...] ... Are the JIPs colinear here?
Maybe that's what you were talking about "JI line".

Do you have names for each of these? The val lattice must be
found in the latter, which looks like a different beast to the
first two.

If we call them weighted tuning space, tuning space, and val
space, respectively, then I could almost believe a point in
val space is a line in either of the tuning spaces. If the
JIP really is a JIL in val space. That would make the spaces
duals or something.

You convinced me before that with a Euclidean norm on val space,
the distance from the origin to a point is a good complexity
(unweighted though).

For error we definitely want weighted tuning space.
I believe rank 1 temperaments are lines in this space, which
fits the idea that vals are lines. Then we want the point on
the line closest to the JIP and the only choice is: which norm.
You claim TOP-RMS is the answer if we choose a Euclidean norm,
which is believable since the formula for RMS looks a lot like
Euclidean distance.
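
With that norm the closest point is a one-liner (a Maple sketch; names illustrative):

V := evalf([12/log[2](2), 19/log[2](3), 28/log[2](5)]):   # the 12-et line is c*V
c := add(V[i], i = 1..3)/add(V[i]^2, i = 1..3):   # least squares: c = (V.J)/(V.V), J = [1 1 1]
12*c;          # 0.998700289, Graham's TOP-RMS stretch from earlier
1200*12*c;     # 1198.440 cents: the TOP-RMS octave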

Rank 2 temperaments would be planes I suppose.

-Carl

🔗Graham Breed <gbreed@gmail.com>

5/8/2010 10:41:19 PM

On 9 May 2010 01:48, Carl Lumma <carl@lumma.org> wrote:
> Graham wrote:
>
>>> But taking what you say above at face value -- the complexity
>>> depends upon the tuning?  That doesn't seem right.
>>
>>If you define complexity as the number of consonances per octave, it
>>does depend on the tuning of the octave.
>
> What kind of complexity is this??  It sounds more like the inverse
> of complexity for a constant number of notes/octave.

Erm, yes. But however complexity's defined, it's the number of something
per octave, or some other reference interval.

>>The val as a point in tuning space isn't a tuning.  It does define a
>>set of tunings, which lie on the line in tuning space linking the val
>>to the origin.  One of those points is the TOP-RMS tuning.
>
> We should say how the axes are scaled.  I can think of
>
> * weighted log units, JIP is [1 1 1 ...]

That's Tenney-weighted tuning space.

> * constant log units, JIP is [1200 1901.955 ...]

Equal weighting, which isn't the same thing.

> * relative log units, JIP is [12 19.01955 27.86314 ...]
> or [31 49.13384 71.97977 ...] ...  Are the JIPs colinear here?
> Maybe that's what you were talking about "JI line".

Yes, the JI line is the line through the JI point.

> Do you have names for each of these?  The val lattice must be
> found in the latter, which looks like a different beast to the
> first two.

The val lattice is found in the first one, and the metric defines complexity.

> If we call them weighted tuning space, tuning space, and val
> space, respectively, then I could almost believe a point in
> val space is a line in either of the tuning spaces.  If the
> JIP really is a JIL in val space.  That would make the spaces
> duals or something.

Not duals. The metrics would measure different things: in tuning space
you measure pitch differences or errors, and in val space you measure
complexity. The form of the metric, though, is really the same.

> You convinced me before that with a Euclidean norm on val space,
> the distance from the origin to a point is a good complexity
> (unweighted though).

Yes. But only for lattice points. It doesn't make sense to talk
about the complexity of a tuning.
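
For a lattice point it's just the norm of the weighted val; a sketch,
with the RMS scaling as one convention among several:

    from math import log2, sqrt

    def te_complexity(val, primes=(2, 3, 5)):
        # RMS norm of the Tenney-weighted val
        w = [v / log2(p) for v, p in zip(val, primes)]
        return sqrt(sum(x * x for x in w) / len(w))

    print(te_complexity([12, 19, 28]))   # roughly 12
    print(te_complexity([31, 49, 72]))   # roughly 31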

> For error we definitely want weighted tuning space.
> I believe rank 1 temperaments are lines in this space, which
> fits the idea that vals are lines.  Then we want the point on
> the line closest to the JIP and the only choice is: which norm.
> You claim TOP-RMS is the answer if we choose a Euclidean norm,
> which is believable since the formula for RMS looks a lot like
> Euclidean distance.

Euclidean distance gives root sum squared. That's proportional to
RMS. So they're essentially the same thing.

Vals aren't lines. Vals are lattice points. The lines are equal
temperaments or temperament classes. Each point on the line is a
different tuning of the temperament class. Of course the nearest
tuning to JI will be the optimal one, according to whatever metric you
use, and the distance from it to JI will be the optimal error.
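
For k primes the constant is sqrt(k), so any minimization comes out the
same; a trivial check with rough 12-equal prime errors:

    from math import sqrt

    errs = [0.0, -1.955, 13.686]   # cents errors of 12-equal for 2, 3, 5
    rss = sqrt(sum(e * e for e in errs))
    rms = sqrt(sum(e * e for e in errs) / len(errs))
    print(rss, sqrt(len(errs)) * rms)   # the same number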

> Rank 2 temperaments would be planes I suppose.

Yes.

Graham

🔗Graham Breed <gbreed@gmail.com>

5/8/2010 10:43:29 PM

On 9 May 2010 00:28, genewardsmith <genewardsmith@sbcglobal.net> wrote:
>
>
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
>> You can say that vals live in a val lattice and tunings exist in
>> tuning space.  But, still, certain points in tuning space look very
>> much like vals.
>
> That's because the lattice is a lattice in tuning space.

Yes, but vals aren't tunings and the metric you apply to them doesn't
measure tuning.

>> The val as a point in tuning space isn't a tuning.  It does define a
>> set of tunings, which lie on the line in tuning space linking the val
>> to the origin.
>
> The projective val. I used to think Paul liked those and you didn't.

Whyever would that be?

>> Say you have 12 note equal temperament.  You can find a false val on
>> the right line but exactly 12 units from the origin.
>
> What's a false val?

It's a point close to a val but not on the lattice.

Graham

🔗Carl Lumma <carl@lumma.org>

5/8/2010 11:11:41 PM

Graham wrote:

>>>If you define complexity as the number of consonances per octave, it
>>>does depend on the tuning of the octave.
>>
>> What kind of complexity is this?? It sounds more like the inverse
>> of complexity for a constant number of notes/octave.
>
>Erm, yes. But however complexity's defined, it's the number of something
>per octave, or some other reference interval.

I use rounded notes/octave, which doesn't really depend on the
octave stretch.

>>>The val as a point in tuning space isn't a tuning. It does define a
>>>set of tunings, which lie on the line in tuning space linking the val
>>>to the origin. One of those points is the TOP-RMS tuning.
>>
>> We should say how the axes are scaled. I can think of
>>
>> * weighted log units, JIP is [1 1 1 ...]
>
>That's Tenney-weighted tuning space.
>
>> * constant log units, JIP is [1200 1901.955 ...]
>
>Equal weighting, which isn't the same thing.

OK

>> * relative log units, JIP is [12 19.01955 27.86314 ...]
>> or [31 49.13384 71.97977 ...] ... Are the JIPs collinear here?
>> Maybe that's what you were talking about with the "JI line".
>
>Yes, the JI line is the line through the JI point.

But I gave two JI points, and there are quite a lot of them.
Which one is "the" JI point?

>> Do you have names for each of these? The val lattice must be
>> found in the latter, which looks like a different beast to the
>> first two.
>
>The val lattice is found in the first one, and the metric defines
>complexity.

The val lattice ought to have vals in it. Things like <31 49 72|
are vals. Weighted, we get things like <n n n n| which don't seem
to make much of a lattice.

>> You convinced me before that with a Euclidean norm on val space,
>> the distance from the origin to a point is a good complexity
>> (unweighted though).
>
>Yes. But only for lattice points. It doesn't make sense to talk
>about the complexity of a tuning.

OK

>> For error we definitely want weighted tuning space.
>> I believe rank 1 temperaments are lines in this space, which
>> fits the idea that vals are lines. Then we want the point on
>> the line closest to the JIP and the only choice is: which norm.
>> You claim TOP-RMS is the answer if we choose a Euclidean norm,
>> which is believable since the formula for RMS looks a lot like
>> Euclidean distance.
>
>Euclidean distance gives root sum squared. That's proportional to
>RMS. So they're essentially the same thing.
>
>Vals aren't lines. Vals are lattice points. The lines are equal
>temperaments or temperament classes. Each point on the line is a
>different tuning of the temperament class. Of course the nearest
>tuning to JI will be the optimal one, according to whatever metric you
>use, and the distance from it to JI will be the optimal error.

A val is an ET, or rank 1 temperament class. So I don't see how
they are lattice points.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/9/2010 12:19:41 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> The val lattice ought to have vals in it. Things like <31 49 72|
> are vals. Weighted, we get things like <n n n n| which don't seem
> to make much of a lattice.

It's a coordinate transformation. If you find <31 30.9155579 31.00871218| in it, unweight and you have integers. It's an embedding of the vals into a vector space as a lattice, which is a discrete group spanning the vector space. The definition requires a topology, but people often restrict it to the case of a Euclidean normed vector space. Hence you have the abstract abelian group, dual of the group of p-limit intervals, which is the group of vals, and an embedding map weight:Vals --> R^n, where n is the rank of the val group.
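
In code the embedding is just a diagonal rescaling; a minimal 5-limit
sketch:

    from math import log2

    logs = [log2(p) for p in (2, 3, 5)]

    def weight(val):
        return [v / l for v, l in zip(val, logs)]

    def unweight(point):
        return [x * l for x, l in zip(point, logs)]

    print(weight([31, 49, 72]))             # [31.0, 30.9155..., 31.0087...]
    print(unweight(weight([31, 49, 72])))   # back to [31.0, 49.0, 72.0]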

🔗Carl Lumma <carl@lumma.org>

5/9/2010 12:53:31 AM

Gene wrote:

>> The val lattice ought to have vals in it. Things like <31 49 72|
>> are vals. Weighted, we get things like <n n n n| which don't seem
>> to make much of a lattice.
>
>It's a coordinate transformation. If you find <31 30.9155579
>31.00871218| in it, unweight and you have integers.

Are you sure it's a JI point? It seems more like a JI line.
Weighted, it's all points n * <1 1 1 ...| where n is an integer.
Unweighted, it's n * <1 1.58496 2.32193 ...|.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/9/2010 1:49:59 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:
>
> Gene wrote:
>
> >> The val lattice ought to have vals in it. Things like <31 49 72|
> >> are vals. Weighted, we get things like <n n n n| which don't seem
> >> to make much of a lattice.
> >
> >It's a coordinate transformation. If you find <31 30.9155579
> >31.00871218| in it, unweight and you have integers.
>
> Are you sure it's a JI point? It seems more like a JI line.

It's neither, it's the lattice point associated to <31 49 72|. Transform coordinates and it becomes <31.0 49.0 72.0|.

🔗Carl Lumma <carl@lumma.org>

5/9/2010 2:01:51 AM

Gene wrote:

>> Are you sure it's a JI point? It seems more like a JI line.
>
>It's neither, it's the lattice point associated to <31 49 72|.
>Transform coordinates and it becomes <31.0 49.0 72.0|.

Isn't <15.0 24.5 36| a valid transformed point for this val?

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/10/2010 12:32:04 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:
>
> Gene wrote:
>
> >> Are you sure it's a JI point? It seems more like a JI line.
> >
> >It's neither, it's the lattice point associated to <31 49 72|.
> >Transform coordinates and it becomes <31.0 49.0 72.0|.
>
> Isn't <15.0 24.5 36| a valid transformed point for this val?

You'd need to define your coordinate transformation to be able to answer that, but it doesn't look like it makes any sense. Did you mean
<15.5 24.5 36|?

🔗Carl Lumma <carl@lumma.org>

5/10/2010 12:39:34 AM

Gene wrote:

>> >> Are you sure it's a JI point? It seems more like a JI line.
>> >
>> >It's neither, it's the lattice point associated to <31 49 72|.
>> >Transform coordinates and it becomes <31.0 49.0 72.0|.
>>
>> Isn't <15.0 24.5 36| a valid transformed point for this val?
>
>You'd need to define your coordinate transformation to be able to
>answer that, but it doesn't look like it makes any sense. Did you mean
><15.5 24.5 36|?

Sorry, yes. -C.

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/10/2010 2:46:50 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> ><15.5 24.5 36|?
>
> Sorry, yes. -C.

So now you have a coordinate transformation, but no apparent point to it.

🔗Carl Lumma <carl@lumma.org>

5/10/2010 10:39:24 AM

At 02:46 AM 5/10/2010, you wrote:

>> ><15.5 24.5 36|?
>>
>> Sorry, yes. -C.
>
>So now you have a coordinate transformation, but no apparent point to it.

? Clearly this is a point in tuning space, not on the val
lattice, but collinear with <31 49 72|.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/10/2010 4:20:38 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:
>
> At 02:46 AM 5/10/2010, you wrote:
>
> >> ><15.5 24.5 36|?
> >>
> >> Sorry, yes. -C.
> >
> >So now you have a coordinate transformation, but no apparent point to it.
>
> ? Clearly this is a point in tuning space, not on the val
> lattice, but collinear with <31 49 72|.

In that case, you aren't transforming coordinates.

🔗Carl Lumma <carl@lumma.org>

5/10/2010 4:38:05 PM

At 04:20 PM 5/10/2010, you wrote:

>> >> ><15.5 24.5 36|?
>> >>
>> >> Sorry, yes. -C.
>> >
>> >So now you have a coordinate transformation, but no apparent point to it.
>>
>> ? Clearly this is a point in tuning space, not on the val
>> lattice, but collinear with <31 49 72|.
>
>In that case, you aren't transforming coordinates.

There are three things I mentioned, raw vals like the above being
one of them. But it doesn't matter, let's talk Tenney space.
<31 31 31| and <5 5 5| are both JIPs, no?

-Carl

🔗Graham Breed <gbreed@gmail.com>

5/10/2010 11:31:27 PM

On 11 May 2010 03:38, Carl Lumma <carl@lumma.org> wrote:

> There are three things I mentioned, raw vals like the above being
> one of them.  But it doesn't matter, let's talk Tenney space.
> <31 31 31| and <5 5 5| are both JIPs, no?

The fact that raw vals have to be translated to Tenney space is what
suggests it isn't really a val space. You could, though, use integers
to label the points but have rectangular instead of square spacing.
Then measure Euclidean distances. This amounts to applying a metric
to the lattice instead of transforming the vals to fit the space. It
would also mean tuning space was labeled by intervals in octaves. A
different metric would give labels in cents.
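
That is, keep the integer labels and put the weights into the metric;
the distances come out the same as in the transformed picture:

    from math import log2, sqrt

    logs = [log2(p) for p in (2, 3, 5)]
    val = [31, 49, 72]

    # weighted metric applied directly to the integer lattice point
    d_lattice = sqrt(sum((v / l) ** 2 for v, l in zip(val, logs)))

    # the same val transformed first, then plain Euclidean distance
    transformed = [v / l for v, l in zip(val, logs)]
    d_euclid = sqrt(sum(x * x for x in transformed))

    print(d_lattice, d_euclid)   # identical either way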

In general, tuning/complexity space, error space and badness space are
different inner product spaces.

In Tenney space, the JIP is <1 1 1]. Those other things are points on
the JI line. Considering all points on the JI line equivalent is like
saying you don't care about the scale stretch.

I know different threads are hanging. Are there any questions people
really want answered?

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/11/2010 1:42:35 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:
>
> At 04:20 PM 5/10/2010, you wrote:
>
> >> >> ><15.5 24.5 36|?
> >> >>
> >> >> Sorry, yes. -C.
> >> >
> >> >So now you have a coordinate transformation, but no apparent point to it.
> >>
> >> ? Clearly this is a point in tuning space, not on the val
> >> lattice, but colinear with <31 49 72|.
> >
> >In that case, you aren't transforming coordinates.
>
> There are three things I mentioned, raw vals like the above being
> one of them. But it doesn't matter, let's talk Tenney space.
> <31 31 31| and <5 5 5| are both JIPs, no?

No, only <1 1 1| is the JIP.

🔗Carl Lumma <carl@lumma.org>

5/11/2010 2:03:00 PM

Graham wrote:

>In Tenney space, the JIP is <1 1 1]. Those other things are points on
>the JI line. Considering all points on the JI line equivalent is like
>saying you don't care about the scale stretch.
>
>I know different threads are hanging. Are there any questions people
>really want answered?

Just trying to figure out this geometry. Maybe we should talk about
units. The units are

< notes/octave notes/twelfth etc |

So <31 31 31| is the same tuning as <1 1 1| but not the same...
morphism? icon?

Fine and good, but I still don't see why we're minimizing distance
to the JIP -- we should be using the JI line. Here's where maybe we
got it wrong. A rank 2 temperament isn't a plane (which I think you
agreed to recently) but rather a line. Rank 1 temperaments are points.
To find the best rank 1 temperament (val) and its tuning for a given
number of notes/octaves n, we pick from the points that unweight to
all-integers (true vals) the one closest to <n n n|. Not the one
closest to <1 1 1|.

For rank 2, rather than starting with just notes/octave, we start
with two complete vals. Then the tuning of the primes is the point
on their line closest to the JI line. The two lines can never
intersect of course.

I don't know how to tune the generators. They live in the dual
space of monzos, correct?

-Carl

🔗Graham Breed <gbreed@gmail.com>

5/11/2010 9:56:27 PM

On 12 May 2010 01:03, Carl Lumma <carl@lumma.org> wrote:

> Just trying to figure out this geometry.  Maybe we should talk about
> units.  The units are
>
> < notes/octave  notes/twelfth  etc |
>
> So <31 31 31| is the same tuning as <1 1 1| but not the same...
> morphism?  icon?

They're not the same tuning. <31 31 31] means the octave is tuned to
2^31:1 and so on. It sits in a completely different part of the
space. They're the same temperament class.

> Fine and good, but I still don't see why we're minimizing distance
> to the JIP -- we should be using the JI line.  Here's where maybe we
> got it wrong.  A rank 2 temperament isn't a plane (which I think you
> agreed to recently) but rather a line.  Rank 1 temperaments are points.

We're minimizing distance to the JIP because that's JI. We can use
the JI line as well, but that entails octave equivalence (or
scale-stretch equivalence). A rank 2 temperament class is a plane.
It becomes a line in projective space. A rank 1 temperament class is
a point in projective space.
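
One way to picture it: rescale every point so its coordinates average
to 1, which collapses each line through the origin to a single point
(the normalization is only one choice):

    def projective(point):
        # normalize so the coordinates average to 1
        m = sum(point) / len(point)
        return [x / m for x in point]

    print(projective([1, 1, 1]))      # [1.0, 1.0, 1.0]
    print(projective([31, 31, 31]))   # the same projective point
    print(projective([12, 11.98766, 12.05894]))   # 12-equal: nearby,
                                                  # but distinct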

> To find the best rank 1 temperament (val) and its tuning for a given
> number of notes/octaves n, we pick from the points that unweight to
> all-integers (true vals) the one closest to <n n n|.  Not the one
> closest to <1 1 1|.

You find the one that projects closest to <1 1 1]

> For rank 2, rather than starting with just notes/octave, we start
> with two complete vals.  Then the tuning of the primes is the point
> on their line closest to the JI line.  The two lines can never
> intersect of course.

The tuning of the primes is the point in the plane closest to the JI
point. The tuning you're talking about ignores scale stretch.

> I don't know how to tune the generators.  They live in the dual
> space of monzos, correct?

I'm not sure. They're probably points in tuning space.

Graham

🔗Carl Lumma <carl@lumma.org>

5/11/2010 10:22:51 PM

Graham wrote:

>> So <31 31 31| is the same tuning as <1 1 1| but not the same...
>> morphism? icon?
>
>They're not the same tuning. <31 31 31] means the octave is tuned to
>2^31:1 and so on.

It sure as shinola unweights to the same thing as <1 1 1|. I call
that the same tuning.

>> To find the best rank 1 temperament (val) and its tuning for a given
>> number of notes/octaves n, we pick from the points that unweight to
>> all-integers (true vals) the one closest to <n n n|. Not the one
>> closest to <1 1 1|.
>
>You find the one that projects closest to <1 1 1]

I find the one closest to <n n n|, and it works. Gene's code
seems to work the same way.

>> I don't know how to tune the generators. They live in the dual
>> space of monzos, correct?
>
>I'm not sure. They're probably points in tuning space.

Maybe Gene knows.

-Carl

🔗Carl Lumma <carl@lumma.org>

5/11/2010 10:26:40 PM

I wrote:

>>> So <31 31 31| is the same tuning as <1 1 1| but not the same...
>>> morphism? icon?
>>
>>They're not the same tuning. <31 31 31] means the octave is tuned to
>>2^31:1 and so on.
>
>It sure as shinola unweights to the same thing as <1 1 1|. I call
>that the same tuning.

Different step size of course. I'll call that a different icon,
until Gene corrects me.

-Carl

🔗Graham Breed <gbreed@gmail.com>

5/11/2010 10:39:43 PM

On 12 May 2010 09:22, Carl Lumma <carl@lumma.org> wrote:
> Graham wrote:
>
>>> So <31 31 31| is the same tuning as <1 1 1| but not the same...
>>> morphism?  icon?
>>
>>They're not the same tuning.  <31 31 31] means the octave is tuned to
>>2^31:1 and so on.
>
> It sure as shinola unweights to the same thing as <1 1 1|.  I call
> that the same tuning.

It doesn't unweight to the same thing. And it doesn't matter how many
times you say it does, because it doesn't.

>>> To find the best rank 1 temperament (val) and its tuning for a given
>>> number of notes/octaves n, we pick from the points that unweight to
>>> all-integers (true vals) the one closest to <n n n|.  Not the one
>>> closest to <1 1 1|.
>>
>>You find the one that projects closest to <1 1 1]
>
> I find the one closest to <n n n|, and it works.  Gene's code
> seems to work the same way.

It works because the triangles are similar. But the JI point is
still <1 1 1]. Gene agrees. If you're doing something else you're in
a different space.

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/11/2010 11:02:50 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:
>
> Graham wrote:
>
> >> So <31 31 31| is the same tuning as <1 1 1| but not the same...
> >> morphism? icon?
> >
> >They're not the same tuning. <31 31 31] means the octave is tuned to
> >2^31:1 and so on.
>
> It sure as shinola unweights to the same thing as <1 1 1|. I call
> that the same tuning.

Does not, it unweights to something 31 times JI, or JI to the 31st power.
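
Spelled out, unweighting multiplies back by the prime logs:

    from math import log2

    logs = [log2(p) for p in (2, 3, 5)]
    print([31 * l for l in logs])
    # [31.0, 49.1338..., 71.9797...]: 31 times JI, not JI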

> >> To find the best rank 1 temperament (val) and its tuning for a given
> >> number of notes/octaves n, we pick from the points that unweight to
> >> all-integers (true vals) the one closest to <n n n|. Not the one
> >> closest to <1 1 1|.
> >
> >You find the one that projects closest to <1 1 1]
>
> I find the one closest to <n n n|, and it works. Gene's code
> seems to work the same way.

Since it's causing confusion I suppose I'd better rewrite the code.

By the way, I'm adding stuff to the xenharmonic wiki as per your suggestion. I've already been tagged for stealing my own articles from your web site.

> >> I don't know how to tune the generators. They live in the dual
> >> space of monzos, correct?
> >
> >I'm not sure. They're probably points in tuning space.
>
> Maybe Gene knows.

Since they are intervals, they clearly don't live in tuning space.

🔗Carl Lumma <carl@lumma.org>

5/11/2010 11:54:40 PM

Gene wrote:

>> >They're not the same tuning. <31 31 31] means the octave is tuned to
>> >2^31:1 and so on.
>>
>> It sure as shinola unweights to the same thing as <1 1 1|. I call
>> that the same tuning.
>
>Does not, it unweights to something 31 times JI, or JI to the 31st power.

I asked if the unweighted units are steps/prime. If they are,
clearly <n n n| is the same tuning for all n. The step size would
be different.

>> I find the one closest to <n n n|, and it works. Gene's code
>> seems to work the same way.
>
>Since it's causing confusion I suppose I'd better rewrite the code.

The whole thing you said before about minimizing deviation, by some
measure of central tendency, from, lo and behold, <n n n|, means you're
minimizing distance to the JI line, not the JI point.

>By the way, I'm adding stuff to the xenharmonic wiki as per your
>suggestion. I've already been tagged for stealing my own articles from
>your web site.

Heh. I guess I'll pop over there.

>> >> I don't know how to tune the generators. They live in the dual
>> >> space of monzos, correct?
>> >
>> >I'm not sure. They're probably points in tuning space.
>>
>> Maybe Gene knows.
>
>Since they are intervals, they clearly don't live in tuning space.

Right. They're representable by monzos and I recollect that monzos
and vals are duals. So I have to wedge, take the complement, and
unwedge. Or something.

But let me go on about the geometry I'm pretending to understand.
The way I'm looking at it, a rank 2 temperament will be a line
connecting two vals. If I have a tuning point like

<11.976740698521905 18.963172772659682 27.94572829655111|

presumably it sits between the vals, or if not, at least they are
the nearest pair of points on the line with integer coordinates.
Beyond them, other points with integer coords can be found, but
they would lead to torsional maps. Then I'd find the optimal
tuning by

1. drawing a ball around each of the line's defining vals, of
radius .49 along each axis (this has to be done while weighted)

2. allowing the line to be defined instead by any pair of points,
one from each ball, such that the distance to the JI line is
minimized. This is the hard part.

3. then I could find the tuning of each generator by using the
trick for ETs, where you can get the step size right from the val
dividing any prime by its number of steps.

I tried this before without the geometry picture, and it only
works if there's at least one prime in the val you're trying to
tune for which the other val has a zero. So there must be
something wrong with my assumptions, since each of the vals
defining the line ought to be independent and the ET trick ought
to work. Hrm.

-Carl

🔗Graham Breed <gbreed@gmail.com>

5/12/2010 2:46:12 AM

Carl:
>> >> I don't know how to tune the generators.  They live in the dual
>> >> space of monzos, correct?

Gene:
> Since they are intervals, they clearly don't live in tuning space.

Each generator corresponds to a val, which represents it in tuning
space. You can also define a line of tunings of that val, the same as
for an equal temperament (the special case of only one generator). So
it may be possible but it's not clear to me how.

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/12/2010 7:44:52 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> Carl:
> >> >> I don't know how to tune the generators.  They live in the dual
> >> >> space of monzos, correct?
>
> Gene:
> > Since they are intervals, they clearly don't live in tuning space.
>
> Each generator corresponds to a val, which represents it in tuning
> space.

There are different things you can call a generator. First, there are rational numbers, as we might say 2 and 3/2 are generators for meantone temperament. Then there are the real numbers obtained by applying a vector in tuning space to these first generators. Corresponding to each generator is a val telling how many of that generator to take when generating the retuned version of any specific p-limit rational number. If you apply it to the rational number which was retuned to get the generator, it should tell you to take one of that generator, and none of any of the others, which is the correspondence. You can also cook up a vector in interval space corresponding to a generator, by applying the tuning map to each prime in succession and weighting the coordinates if you are using an unweighted tuning map.
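
For a concrete example, take meantone with generators 2 and 3/2, and
assume the standard mapping rows [<1 1 0|, <0 1 4|], so that 3 is an
octave plus a fifth and 5 is four fifths:

    # generator vals for meantone: one row per generator
    octave_val = [1, 1, 0]   # octaves taken for each of 2, 3, 5
    fifth_val  = [0, 1, 4]   # tempered fifths taken for each of 2, 3, 5

    def apply_val(val, monzo):
        return sum(v * m for v, m in zip(val, monzo))

    two   = [1, 0, 0]    # monzo for 2
    fifth = [-1, 1, 0]   # monzo for 3/2

    # each generator's val takes one of that generator, none of the other
    print(apply_val(octave_val, two),   apply_val(fifth_val, two))     # 1 0
    print(apply_val(octave_val, fifth), apply_val(fifth_val, fifth))   # 0 1

    # a choice of generator tunings then gives the tuning map on the primes
    gens = [1200.0, 696.578]   # cents; quarter-comma fifth, for instance
    tuning_map = [gens[0] * o + gens[1] * f
                  for o, f in zip(octave_val, fifth_val)]
    print(tuning_map)   # [1200.0, 1896.578, 2786.312]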