Zeta integral tunings and 7-tet vs 10-tet

🔗Mike Battaglia <battaglia01@gmail.com>

1/22/2011 3:56:38 PM

After messing with the HE algorithm over the past few days, I was left
with the following question: why do we generally say, that as far as
very low-numbered EDO's are concerned, that 7-tet stands out above
10-tet?

The two are very similar: 7-tet's fifth is 2 cents more accurate than
10-tet's fifth, and they're both dicot temperaments, which are
generally not that high accuracy. But in 10-tet, the 5/4's are 17
cents closer to just than in 7-tet, and unlike 7-tet, 10-tet has an
approximation to 7/4 as well. Both have an approximation to 13/8, but
7-tet's is very sharp and 10-tet's is almost perfect. 7-tet has an
interval 20 cents flat of 11/8, though, and 10-tet doesn't really have
11/8 at all.

They also both approximate quote unquote "tonal" systems: 7-tet can be
viewed as a prototypical meantone, and 10-tet can be viewed as a
prototypical blackwood. (It would be nice to make a knowsur-style
album with 20-tet...)

But in light of the above, it seems like 10-tet should swamp 7-tet in
terms of proto-accuracy, its only real detriment being practical: once
you get to 10, you might as well go to 12. But the same applies also
to 7, so bah. Why does the zeta integral sequence prioritize 10 above
7?

In a sense I'm also asking why 19 pops up on the zeta integral
sequence instead of 22.

Is it just because you're taking the integral between two consecutive
renormalized zeros, whereas if we were going to do this the HE way,
you'd be taking the integral of the whole function, weighted by a
Gaussian (or an exponentially warped Gaussian, I guess, since we're
dealing with the Mellin transform)?

-Mike

🔗Carl Lumma <carl@lumma.org>

1/22/2011 7:00:16 PM

Mike wrote:

>After messing with the HE algorithm over the past few days, I was left
>with the following question: why do we generally say, that as far as
>very low-numbered EDO's are concerned, that 7-tet stands out above
>10-tet?

Best ET > 4, >= #primes, < 12

logflat badness from notes/octave & TOP damage w/TOP stretch
5-limit
((7 11 16) 174.67218406839893)
7-limit
((5 8 12 14) 183.08890042350328)
11-limit
((5 8 12 14 18) 190.41688185073954)
13-limit
((8 13 19 23 28 30) 181.49146878806235)
17-limit
((8 13 19 23 28 30 33) 169.33752806173496)

logflat from notes/oct & RMS Tenney-weighted error w/pure octaves
5-limit
((7 11 16) 228.15133004347476)
7-limit
((10 16 23 28) 176.32669201889667)
11-limit
((5 8 12 14 17) 156.5661071052294)
13-limit
((7 11 16 20 24 26) 136.19831288585243)
17-limit
((10 16 23 28 35 37 41) 121.48598796823913)

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/22/2011 11:35:42 PM

On Sat, Jan 22, 2011 at 10:00 PM, Carl Lumma <carl@lumma.org> wrote:
>
> Best ET > 4, >= #primes, < 12
>
> logflat badness from notes/octave & TOP damage w/TOP stretch
> 5-limit
> ((7 11 16) 174.67218406839893)
> 7-limit
> ((5 8 12 14) 183.08890042350328)
> 11-limit
> ((5 8 12 14 18) 190.41688185073954)
> 13-limit
> ((8 13 19 23 28 30) 181.49146878806235)
> 17-limit
> ((8 13 19 23 28 30 33) 169.33752806173496)

Logflat badness is something I just spent literally an hour looking up
on tuning-math and I still don't understand it. I couldn't find the
first message where it was defined, just a huge discussion between
Paul, Gene, and Dave Keenan about whether or not it was useful. It
seems to represent error as a proportion of step size. Can you give me
some background on this?

Why are 11 and 16 showing up in the 5-limit and not 12? Why is 22
beating out 18 for the 11-limit too?

-Mike

🔗Carl Lumma <carl@lumma.org>

1/23/2011 12:26:29 AM

Mike wrote:

>> Best ET > 4, >= #primes, < 12
>>
>> logflat badness from notes/octave & TOP damage w/TOP stretch
>> 5-limit
>> ((7 11 16) 174.67218406839893)
>> 7-limit
>> ((5 8 12 14) 183.08890042350328)
>> 11-limit
>> ((5 8 12 14 18) 190.41688185073954)
>> 13-limit
>> ((8 13 19 23 28 30) 181.49146878806235)
>> 17-limit
>> ((8 13 19 23 28 30 33) 169.33752806173496)
>
>Logflat badness is something I just spent literally an hour looking up
>on tuning-math and I still don't understand it. I couldn't find the
>first message where it was defined, just a huge discussion between
>Paul, Gene, and Dave Keenan about whether or not it was useful. It
>seems to represent error as a proportion of step size. Can you give me
>some background on this?

The formula is: error * complexity^(pi(p)/(pi(p)-r))

for the p-limit where r is the rank of the temperament. This kind
of thing is usually lookupable @ Tonalsoft
http://tonalsoft.com/enc/b/badness.aspx
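
For illustration, a one-line GP version of that formula (the function name and the sample inputs are mine, not anything from the thread):

\\ logflat badness as stated above: error * complexity^(pi(p)/(pi(p)-r)),
\\ where pi(p) counts the primes up to the limit p and r is the rank
logflat_badness(err, cplx, p, r) = err * cplx^(primepi(p) / (primepi(p) - r))

\\ e.g. a rank 1 (equal) temperament in the 5-limit: primepi(5) = 3, exponent 3/2
\\ ? logflat_badness(3.0, 12, 5, 1)   \\ hypothetical: 3 units of error at 12 notes/octave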

There are an infinite number of temperaments better than any fixed
logflat badness cutoff. However, the histogram of these results is
flat when complexity is on a log scale. That is, if there are 5
results of complexity < 10, there will be about 5 of complexity
10-100, etc.

You can think of the exponent as normalizing the complexity across
temperaments of different ranks, by turning a 'number of notes'
into a radius on the lattice. JI is always rank 0 so the badness
is just error * complexity. We would expect a codimension 1
temperament to deliver more for this complexity, so we penalize it.
This should be the weighted lattice so we should use weighted
complexity. However for ETs this is proportional to notes/oct
(since notes/prime ~~ notes/oct * log2(prime), any weighting that is a
function of the primes won't change things). For rank 2 and beyond
you need real weighted complexity.

There are newer badness formulae now that I haven't followed,
including Cangwu badness, "parametric badness", TE badness, "simple
badness" and probably others. I think the first three are different
names for the same thing, but nobody has yet admitted this to me in
a single breath.

>Why are 11 and 16 showing up in the 5-limit and not 12? Why is 22
>beating out 18 for the 11-limit too?

Sorry, these are vals. The best ETs above are 7, 5, 5, 8, 8.

-Carl

🔗Graham Breed <gbreed@gmail.com>

1/23/2011 12:49:31 AM

On 23 January 2011 12:26, Carl Lumma <carl@lumma.org> wrote:

> There are newer badness formulae now that I haven't followed,
> including Cangwu badness, "parametric badness", TE badness, "simple
> badness" and probably others.  I think the first three are different
> names for the same thing, but nobody has yet admitted this to me in
> a single breath.

Cangwu badness is the only thing called "parametric badness" AFAIK,
but there are other ways you could parametrize badness hence it has a
special name. TE badness could plausibly be the same thing, or it
might not. I don't remember seeing the term. Simple badness is
usually Cangwu badness with the parameter set to zero (or equivalent
for different formulations) but could plausibly be a synonym for
relative error (error times complexity).

Graham

🔗Carl Lumma <carl@lumma.org>

1/23/2011 1:23:05 AM

Graham wrote:

>> There are newer badness formulae now that I haven't followed,
>> including Cangwu badness, "parametric badness", TE badness, "simple
>> badness" and probably others. I think the first three are different
>> names for the same thing, but nobody has yet admitted this to me in
>> a single breath.
>
>Cangwu badness is the only thing called "parametric badness" AFAIK,
>but there are other ways you could parametrize badness hence it has a
>special name. TE badness could plausibly be the same thing, or it
>might not. I don't remember seeing the term. Simple badness is
>usually Cangwu badness with the parameter set to zero (or equivalent
>for different formulations) but could plausibly be a synonym for
>relative error (error times complexity).
>

Alright, thanks. Cangwu it is then. By the way, I've come to
really like the error-centric way your temperament finder works,
so I retract my earlier statements about complexity being more
important.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/23/2011 1:52:51 AM

On Sun, Jan 23, 2011 at 3:26 AM, Carl Lumma <carl@lumma.org> wrote:
>
> Mike wrote:
>
> The formula is: error * complexity^(pi(p)/(pi(p)-r))
>
> for the p-limit where r is the rank of the temperament. This kind
> of thing is usually lookupable @ Tonalsoft
> http://tonalsoft.com/enc/b/badness.aspx

Oh snap, forgot about that. I looked at the xenharmonic wiki and
tuning-math for an hour and then gave up.

> There are an infinite number of temperaments better than any fixed
> logflat badness cutoff. However, the histogram of these results is
> flat when complexity is on a log scale. That is, if there are 5
> results of complexity < 10, there will be about 5 of complexity
> 10-100, etc.

And the motivation behind this is that the practical difficulties
in conceptualizing a scale grow roughly with the log of the size involved?
As in, the jump from 12 to 22 would be more difficult than the jump
from 72 to 82? That's an idea I was tossing around to myself the other
day.

> You can think of the exponent as normalizing the complexity across
> temperaments of different ranks, by turning a 'number of notes'
> into a radius on the lattice. JI is always rank 0 so the badness
> is just error * complexity.

I see. But JI is rank 0? I thought 5-limit JI was rank 3...? And would
JI just have a badness of 0, since the error is 0?

> We would expect a codimension 1 temperament to deliver more for this complexity, so we penalize it.

Is codimension 1 the same thing as rank 1? As in an equal temperament?

> This should be the weighted lattice so we should use weighted
> complexity. However for ETs this is proportional to notes/oct
> (since notes/prime ~~ notes/oct * log2(prime), any weighting that is a
> function of the primes won't change things). For rank 2 and beyond
> you need real weighted complexity.

What do you mean by weighted complexity - each prime somehow factors
into complexity differently?

-Mike

🔗Carl Lumma <carl@lumma.org>

1/23/2011 10:46:16 AM

Mike wrote:

>> There are an infinite number of temperaments better than any fixed
>> logflat badness cutoff. However, the histogram of these results is
>> flat when complexity is on a log scale. That is, if there are 5
>> results of complexity < 10, there will be about 5 of complexity
>> 10-100, etc.
>
>And the motivation behind this is that the practical difficulties
>in conceptualizing a scale grow roughly with the log of the size involved?
>As in, the jump from 12 to 22 would be more difficult than the jump
>from 72 to 82? That's an idea I was tossing around to myself the other
>day.

That's how I usually think of it, yes.

>> You can think of the exponent as normalizing the complexity across
>> temperaments of different ranks, by turning a 'number of notes'
>> into a radius on the lattice. JI is always rank 0 so the badness
>> is just error * complexity.
>
>I see. But JI is rank 0? I thought 5-limit JI was rank 3...? And would
>JI just have a badness of 0, since the error is 0?

Here the rank is the dimension of the kernel of the temperament,
which is empty in the case of JI. See
http://en.wikipedia.org/wiki/Linear_map#Kernel.2C_image_and_the_rank-nullity_theorem

It's the complexity term that's normalized to JI. I didn't say the
error term was applicable to JI.

>> We would expect a codimension 1 temperament to deliver more for
>> this complexity, so we penalize it.
>
>Is codimension 1 the same thing as rank 1? As in an equal temperament?

If you look at the rank nullity theorem there, V is JI, W is the
temperament and f() is the mapping. dim(im(f)) is the rank and
dim(ker(f)) is the codimension.
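
To make that concrete, here is 5-limit meantone as a GP sketch (the mapping is written in the usual octave/fifth form; matrank and matker are built-ins), showing rank 2 and codimension 1:

M = [1, 1, 0; 0, 1, 4];   \\ meantone mapping of 2, 3, 5 onto octaves and fifths: <1 1 0], <0 1 4]
print(matrank(M));        \\ 2 = dim(im(f)), the rank
print(matker(M));         \\ one column, proportional to [-4 4 -1> (the syntonic comma),
                          \\ so dim(ker(f)) = 1 = pi(5) - rank, the codimension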

>> This should be the weighted lattice so we should use weighted
>> complexity. However for ETs this is proportional to notes/oct
>> (since notes/prime ~~ notes/oct * log2(prime), any weighting that is a
>> function of the primes won't change things). For rank 2 and beyond
>> you need real weighted complexity.
>
>What do you mean by weighted complexity - each prime somehow factors
>into complexity differently?

Yes. Tenney-weighted primes are the basis of TOP and TE complexity.
Other weightings are possible. I've played with sqrt(p).

-Carl

🔗Carl Lumma <carl@lumma.org>

1/23/2011 11:28:37 AM

I wrote:

>Here the rank is the dimension of the kernel of the temperament,
>which is empty in the case of JI.

Sorry, the dimension of the *image* (as I said below). What's
the image in JI? The mapping is the identity matrix, so images
will have dimension equal to the number of primes and we get a
zero in the denominator in Gene's formula (which I made this
error to avoid). But this seems a technicality. The point is
that as the rank of the temperament goes up, the complexity is
expected to be less.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

1/23/2011 3:46:42 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> After messing with the HE algorithm over the past few days, I was left
> with the following question: why do we generally say, that as far as
> very low-numbered EDO's are concerned, that 7-tet stands out above
> 10-tet?

We do?

> But in light of the above, it seems like 10-tet should swamp 7-tet in
> terms of proto-accuracy, its only real detriment being practical: once
> you get to 10, you might as well go to 12. But the same applies also
> to 7, so bah. Why does the zeta integral sequence prioritize 10 above
> 7?

Why shouldn't it? Its 5 is better, and its 7 is much better.

> In a sense I'm also asking why 19 pops up on the zeta integral
> sequence instead of 22.

19 is a smaller number and does the 5-limit better, and the 7-limit advantage of 22 just isn't enough.

> Is it just because you're taking the integral between two consecutive
> renormalized zeros, whereas if we were going to do this the HE way,
> you'd be taking the integral of the whole function, weighted by a
> Gaussian (or an exponentially warped Gaussian, I guess, since we're
> dealing with the Mellin transform)?

You've lost me.

🔗genewardsmith <genewardsmith@sbcglobal.net>

1/23/2011 3:53:46 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

Why does the zeta integral sequence prioritize 10 above
> 7?

It occurred to me after posting a reply to this that the premise is wrong: actually, it prioritizes 7 over 10. Since 7 is smaller and has a better fifth, that's less of a mystery than it would be the other way around. The zeta integral edo thing gives considerably more weight to smaller primes.

🔗Mike Battaglia <battaglia01@gmail.com>

1/23/2011 3:57:01 PM

On Sun, Jan 23, 2011 at 2:28 PM, Carl Lumma <carl@lumma.org> wrote:
>
> I wrote:
>
> >Here the rank is the dimension of the kernel of the temperament,
> >which is empty in the case of JI.
>
> Sorry, the dimension of the *image* (as I said below). What's
> the image in JI? The mapping is the identity matrix, so images
> will have dimension equal to the number of primes and we get a
> zero in the denominator in Gene's formula (which I made this
> error to avoid). But this seems a technicality. The point is
> that as the rank of the temperament goes up, the complexity is
> expected to be less.

OK, thanks for the explanation. I'm going to have to take some time to
figure this one out.

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

1/23/2011 4:01:10 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> Cangwu badness is the only thing called "parametric badness" AFAIK,
> but there are other ways you could parametrize badness hence it has a
> special name. TE badness could plausibly be the same thing, or it
> might not.

TE badness is just logflat badness using TE error and TE complexity.

🔗Mike Battaglia <battaglia01@gmail.com>

1/23/2011 4:06:17 PM

On Sun, Jan 23, 2011 at 6:46 PM, genewardsmith
<genewardsmith@sbcglobal.net> wrote:
>
> > After messing with the HE algorithm over the past few days, I was left
> > with the following question: why do we generally say, that as far as
> > very low-numbered EDO's are concerned, that 7-tet stands out above
> > 10-tet?
>
> We do?

I thought so. I guess it's because of the TOP damage metric that Carl posted.

> > But in light of the above, it seems like 10-tet should swamp 7-tet in
> > terms of proto-accuracy, its only real detriment being practical: once
> > you get to 10, you might as well go to 12. But the same applies also
> > to 7, so bah. Why does the zeta integral sequence prioritize 10 above
> > 7?
>
> Why shouldn't it? Its 5 is better, and its 7 is much better.

Sorry, I'm stupid. Why does the zeta integral sequence prioritize 7
above 10? That was the question.

> > In a sense I'm also asking why 19 pops up on the zeta integral
> > sequence instead of 22.
>
> 19 is a smaller number and does the 5-limit better, and the 7-limit advantage of 22 just isn't enough.

Curses.

> > Is it just because you're taking the integral between two consecutive
> > renormalized zeros, whereas if we were going to do this the HE way,
> > you'd be taking the integral of the whole function, weighted by a
> > Gaussian (or an exponentially warped Gaussian, I guess, since we're
> > dealing with the Mellin transform)?
>
> You've lost me.

I'm going to have to figure out what you're doing with the Zeta
function before I can express this concretely, but I'm in a sense
asking what would instead happen if rather than integrating between
two zeroes, you took the highest point between two zeroes, centered a
Gaussian at that point, multiplied the entire critical line pointwise
with that Gaussian, and integrated.

More precisely, let's say Z(s) is the Z function that you sent me, and
s is the point along the critical strip

You're doing Integral_z1,z2(|Z(s)|), where z1 and z2 are two consecutive zeroes

What happens if instead, you do Integral_-Inf,Inf(G(s) * |Z(s)|),
where G(s) is some Gaussian centered at the local maximum between the
two zeroes, and of a suitable standard deviation?

Except, to clarify, what I'm really saying is to use the Mellin
transform of a Gaussian with a standard deviation s=1.0% and s=1.2%,
which will probably be G(exp(s)) instead of G(s). But this is a very
rough train of thought that you can perhaps finish, I don't understand
the reasoning behind the zeta integral in the first place so I'm not
sure how to play with it. Does it have something to do with omega
approximations?
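
For what it's worth, a very rough GP sketch of the two quantities being contrasted here, taking Z as just |zeta(1/2 + it)| on the critical line; the zero locations z1 and z2, the centre t0 and the width sigma all have to be supplied by hand, and the infinite integral is truncated to a few sigma (the function names are mine, not Gene's):

\\ integral of |Z(t)| between two consecutive zeros z1 < z2
zeta_between(z1, z2) = intnum(t = z1, z2, abs(zeta(1/2 + I*t)))

\\ Gaussian-weighted integral over the critical line, centred at t0 with width sigma
zeta_gaussian(t0, sigma) = intnum(t = t0 - 8*sigma, t0 + 8*sigma, exp(-(t - t0)^2 / (2*sigma^2)) * abs(zeta(1/2 + I*t)))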

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

1/23/2011 3:58:50 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> There are an infinite number of temperaments better than any fixed
> logflat badness cutoff.

Probably, but where's the proof?

🔗Carl Lumma <carl@lumma.org>

1/23/2011 4:39:41 PM

At 03:58 PM 1/23/2011, you wrote:
>
>
>--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:
>
>> There are an infinite number of temperaments better than any fixed
>> logflat badness cutoff.
>
>Probably, but where's the proof?

I was only parroting your claim.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

1/23/2011 6:10:21 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> >Probably, but where's the proof?
>
> I was only parroting your claim.

I believe my claim was that this was probably true, but that at least there will be some fixed cutoff below which an infinite number of results will appear.

🔗Carl Lumma <carl@lumma.org>

1/23/2011 6:17:54 PM

>>>>There are an infinite number of temperaments better than any fixed
>>>>logflat badness cutoff.
>>>
>>>Probably, but where's the proof?
>>
>>I was only parroting your claim.
>
>I believe my claim was that this was probably true, but that at least
>there will be some fixed cutoff below which an infinite number of
>results will appear.

I think I said that above. My recollection is the claim that it's
the least exponent such that the results are "infinite, but only just".
Not that you said you'd proved it.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

1/23/2011 7:37:05 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> I think I said that above. My recollection is the claim that it's
> the least exponent such that the results are "infinite, but only just".
> Not that you said you'd proved it.

That I believe can be proven using standard results from Diophantine approximation, but I haven't done so.

🔗Carl Lumma <carl@lumma.org>

1/24/2011 4:39:17 PM

Gene wrote:

> TE badness is just logflat badness using TE error and TE complexity.

TE error = TOP-RMS error

TE complexity is defined here
http://xenharmonic.wikispaces.com/Tenney-Euclidean+temperament+measures

It looks like it's a distance, as opposed to a volume. In particular,
it's the average radius of what you get when you put the unit ball of
interval space into tuning space.

For badness we want the complexity term to be a number of notes -- the
volume of a ball with radius equal to the complexity of some target(s),
since each lattice point in this ball is a chance to hit them.

I don't understand how the usual logflat exponent of pi(p)/co(T)
(where co(T) is the codimension of the temperament) makes sense. It
seems to convert a volume in the commatic subspace for T to a volume
in JI. But no complexity we use seems to be a volume in commatic
subspace.

What used to be called Graham complexity was a volumetric version
of TE complexity.

In the case of "temperamental complexity"
http://xenharmonic.wikispaces.com/Tenney-Euclidean+metrics
it's not clear to me what the targets are. They live in the quotient
space of JI by the commatic subspace. I'm having trouble getting a
handle on this quotient space. What do its basis elements look like?

-Carl

🔗Graham Breed <gbreed@gmail.com>

1/25/2011 1:18:53 AM

On 25 January 2011 04:39, Carl Lumma <carl@lumma.org> wrote:
> Gene wrote:
>
>> TE badness is just logflat badness using TE error and TE complexity.
>
> TE error = TOP-RMS error
>
> TE complexity is defined here
> http://xenharmonic.wikispaces.com/Tenney-Euclidean+temperament+measures
>
> It looks like it's a distance, as opposed to a volume.  In particular,
> it's the average radius of what you get when you put the unit ball of
> interval space into tuning space.

No, it's a distance for an equal temperament, an area for a linear
temperament, a volume for a planar temperament, and so on. You can
relate it to a radius of a unit ball, like any hypervolume, but you
need to do an nth root for the dimensions to be correct.

> For badness we want the complexity term to be a number of notes -- the
> volume of a ball with radius equal to the complexity of some target(s),
> since each lattice point in this ball is a chance to hit them.

We do?

> I don't understand how the usual logflat exponent of pi(p)/co(T)
> (where co(T) is the codimension of the temperament) makes sense.  It
> seems to convert a volume in the commatic subspace for T to a volume
> in JI.  But no complexity we use seems to be a volume in commatic
> subspace.

It agrees with what I could find about linear Diophantine
approximations. I couldn't find anything about TE measures applied to
Diophantine approximations, which is a shame, because they should be
as valid in the general case as they are for musical applications.

> What used to be called Graham complexity was a volumetric version
> of TE complexity.

It was? It should be called odd-limit complexity. It counts a number
of steps in an octave-equivalent space. To make it octave-equivalent
you call it steps per octave, or multiply by the size of an octave to
get an area. The distinction can be swept under the carpet if you set
the size of an octave to be 1.

There's a chain of associations that leads from odd limit complexity
to TE complexity. The max weighted complexity is the equivalent of
odd limit complexity with weighted primes. The STD complexity is the
equivalent with an RMS instead of a maximum, and will be strictly less
than the odd-limit complexity. And that's a good approximation to TE
complexity. So the units should agree.

> In the case of "temperamental complexity"
> http://xenharmonic.wikispaces.com/Tenney-Euclidean+metrics
> it's not clear to me what the targets are.  They live in the quotient
> space of JI by the commatic subspace.  I'm having trouble getting a
> handle on this quotient space.  What do its basis elements look like?

It's the equivalent of TE complexity in a space defined by the
generators of a temperament instead of the prime intervals of JI. It
should be defined so that it's independent of the choice of
generators. I think that works until you try to make it octave
specific. It's related to the temperament class being a subspace of
JI. I explained that on this list and can't remember the details now.

Graham

🔗Carl Lumma <carl@lumma.org>

1/25/2011 12:01:42 PM

Hi Graham,

>> TE complexity is defined here
>> http://xenharmonic.wikispaces.com/Tenney-Euclidean+temperament+measures
>> It looks like it's a distance, as opposed to a volume. In particular,
>> it's the average radius of what you get when you put the unit ball of
>> interval space into tuning space.
>
>No, it's a distance for an equal temperament, an area for a linear
>temperament, a volume for a planar temperament, and so on. You can
>relate it to a radius of a unit ball, like any hypervolume, but you
>need to do an nth root for the dimensions to be correct.

It's the RMS of the weighted multival. How is that ever an area?
Can you show me? (I thought later that it must be a diameter not
a radius but anyway...)

>> For badness we want the complexity term to be a number of notes -- the
>> volume of a ball with radius equal to the complexity of some target(s),
>> since each lattice point in this ball is a chance to hit them.
>
>We do?

That's my premise. Perhaps I should have asked, "Has anyone checked
that the pi(p)/co(T) exponent produces logflatness for all these
different complexity measures that have been proposed?"

>> What used to be called Graham complexity was a volumetric version
>> of TE complexity.
>
>It was?

As I recall, it was contiguous-generators * contiguous-periods needed
to complete the map. I believe that is approximately the volume of
the JI unit ball (identity matrix of monzos) mapped into tuning space.

>It should be called odd-limit complexity. It counts a number
>of steps in an octave-equivalent space. To make it octave-equivalent
>you call it steps per octave, or multiply by the size of an octave to
>get an area. The distinction can be swept under the carpet if you set
>the size of an octave to be 1.

I have no idea what you mean, so let's call the thing I define above
"Foo complexity" for now.

>> In the case of "temperamental complexity"
>> http://xenharmonic.wikispaces.com/Tenney-Euclidean+metrics
>> it's not clear to me what the targets are. They live in the quotient
>> space of JI by the commatic subspace. I'm having trouble getting a
>> handle on this quotient space. What do its basis elements look like?
>
>It's the equivalent of TE complexity in a space defined by the
>generators of a temperament instead of the prime intervals of JI.

I can see there is no danger of me understanding any of these
spaces unless some brave soul is willing to exhibit examples.

-Carl

🔗Graham Breed <gbreed@gmail.com>

1/25/2011 8:55:04 PM

On 26 January 2011 00:01, Carl Lumma <carl@lumma.org> wrote:
> Hi Graham,
>
>>> TE complexity is defined here
>>> http://xenharmonic.wikispaces.com/Tenney-Euclidean+temperament+measures
>>> It looks like it's a distance, as opposed to a volume.  In particular,
>>> it's the average radius of what you get when you put the unit ball of
>>> interval space into tuning space.
>>
>>No, it's a distance for an equal temperament, an area for a linear
>>temperament, a volume for a planar temperament, and so on.  You can
>>relate it to a radius of a unit ball, like any hypervolume, but you
>>need to do an nth root for the dimensions to be correct.
>
> It's the RMS of the weighted multival.  How is that ever an area?
> Can you show me?  (I thought later that it must be a diameter not
> a radius but anyway...)

It can be calculated as the RMS of the entries of the multival. That
means it has the same dimensions as each element of the multival. The
"multi" part stops it being a diameter. You'll have to check your
multilinear algebra reference for the geometric interpretation of a
wedge product.

http://en.wikipedia.org/wiki/Exterior_algebra

http://www.av8n.com/physics/area-volume.htm
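
For the rank 2, 5-limit case, a minimal GP sketch of what "RMS of the entries of the multival" refers to (the function name is mine, and the normalization conventions are glossed over):

\\ the bival (wedge product) of two 5-limit vals: the three 2x2 minors,
\\ one for each pair of primes (2,3), (2,5), (3,5) -- i.e. a list of signed areas
bival(a, b) = [a[1]*b[2] - a[2]*b[1], a[1]*b[3] - a[3]*b[1], a[2]*b[3] - a[3]*b[2]]

\\ ? bival([12,19,28], [19,30,44])   \\ gives [-1, -4, -4]: meantone's wedgie <<1 4 4]]
\\                                   \\ up to sign; TE complexity is, up to normalization,
\\                                   \\ the RMS of these entries after Tenney weighting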

> That's my premise.  Perhaps I should have asked, "Has anyone checked
> that the pi(p)/co(T) exponent produces logflatness for all these
> different complexity measures that have been proposed?"

I haven't.

>>> What used to be called Graham complexity was a volumetric version
>>> of TE complexity.
>>
>>It was?
>
> As I recall, it was contiguous-generators * contiguous-periods needed
> to complete the map.  I believe that is approximately the volume of
> the JI unit ball (identity matrix of monzos) mapped into tuning space.

Okay, yes, you multiply by the number of periods to the octave. But
that means the dimensions become steps/octave instead of steps/period.
It counts how many notes you need to get the hairiest consonance on
the set -- minus one, or how many notes you have left over after
getting all the consonances of a given complexity.

>>It should be called odd-limit complexity.  It counts a number
>>of steps in an octave-equivalent space.  To make it octave-equivalent
>>you call it steps per octave, or multiply by the size of an octave to
>>get an area.  The distinction can be swept under the carpet if you set
>>the size of an octave to be 1.
>
> I have no idea what you mean, so let's call the thing I define above
> "Foo complexity" for now.
>
>>> In the case of "temperamental complexity"
>>> http://xenharmonic.wikispaces.com/Tenney-Euclidean+metrics
>>> it's not clear to me what the targets are.  They live in the quotient
>>> space of JI by the commatic subspace.  I'm having trouble getting a
>>> handle on this quotient space.  What do its basis elements look like?
>>
>>It's the equivalent of TE complexity in a space defined by the
>>generators of a temperament instead of the prime intervals of JI.
>
> I can see there is no danger of me understanding any of these
> spaces unless some brave soul is willing to exhibit examples.

I copied my Pari session before. If you follow that you can produce
whatever examples you want.

Graham

🔗Carl Lumma <carl@lumma.org>

1/25/2011 11:10:34 PM

>You'll have to check your
>multilinear algebra reference for the geometric interpretation of a
>wedge product.
>
>http://en.wikipedia.org/wiki/Exterior_algebra
> http://www.av8n.com/physics/area-volume.htm

Ok, the latter link convinced me it's an area/volume/etc. I was
thrown by the meantone case, where the wedgie elements are about
the same size as the elements in the component vals. Seems to be
due to the fact that meantone's a very simple temperament.

>> I can see there is no danger of me understanding any of these
>> spaces unless some brave soul is willing to exhibit examples.
>
>I copied my Pari session before. If you follow that you can produce
>whatever examples you want.

It's time to cut the crap and actually document this stuff in a
meaningful way, so that the average determined member of the community
could study the wiki over a couple of afternoons and understand it.
It's not that hard -- that much is evident. It would be a trivial
exercise for either you or Gene to add the required examples at this
point. The two of you developed methods of practical utility in music
and music theory. Pointing people to Clifford algebra textbooks or
Python libraries when they ask about them is, frankly, ridiculous.

A first step might be to merge these two pages
http://xenharmonic.wikispaces.com/Tenney-Euclidean+temperament+measures
http://xenharmonic.wikispaces.com/Tenney-Euclidean+metrics
and preface the result with descriptions of the four spaces used:
interval, tuning, commatic subspace, and quotient space of the
first by the last. Then each of the measures can be defined by
referencing the appropriate space, which should be easy since they
all seem to use the same weighting and L2 norm.

-Carl

🔗Graham Breed <gbreed@gmail.com>

1/26/2011 3:24:14 AM

On 26 January 2011 11:10, Carl Lumma <carl@lumma.org> wrote:
>>You'll have to check your
>>multilinear algebra reference for the geometric interpretation of a
>>wedge product.
>>
>>http://en.wikipedia.org/wiki/Exterior_algebra
>> http://www.av8n.com/physics/area-volume.htm
>
> Ok, the latter link convinced me it's an area/volume/etc.  I was
> thrown by the meantone case, where the wedgie elements are about
> the same size as the elements in the component vals.  Seems to be
> due to the fact that meantone's a very simple temperament.

The octave-equivalent generator mapping is always a subset of the
wedgie for a strictly linear temperament -- that is, period=octave.

Note that Fokker's determinants also define the area/volume/whatever
of periodicity blocks. TE complexity is a weighted generalization of
this.

>>> I can see there is no danger of me understanding any of these
>>> spaces unless some brave soul is willing to exhibit examples.
>>
>>I copied my Pari session before.  If you follow that you can produce
>>whatever examples you want.
>
> It's time to cut the crap and actually document this stuff in a
> meaningful way, so that the average determined member of the community
> could study the wiki over a couple of afternoons and understand it.
> It's not that hard -- that much is evident.  It would be a trivial
> exercise for either you or Gene to add the required examples at this
> point.  The two of you developed methods of practical utility in music
> and music theory.  Pointing people to Clifford algebra textbooks or
> Python libraries when they ask about them is, frankly, ridiculous.

It clearly is hard, because over the years I haven't succeeded in
doing it. I can have another look at that PDF we were working on.
Temperamental complexity is more advanced so it's going to be more
difficult.

Graham

🔗Carl Lumma <carl@lumma.org>

1/26/2011 9:45:31 AM

>The octave-equivalent generator mapping is always a subset of the
>wedgie for a strictly linear temperament -- that is, period=octave.

Ok yeah. I'm surprised the numbers are big enough but there you go.

>Note that Fokker's determinants also define the area/volume/whatever
>of periodicity blocks.

Yes, that I understand from the definition of determinants.

>TE complexity is a weighted generalization of this.

Good. Anyway I don't understand the rationale for the logflat
exponent on it. It looks like it scales from the commatic subspace
to JI.

>> It's time to cut the crap and actually document this stuff in a
>> meaningful way, so that the average determined member of the community
>> could study the wiki over a couple of afternoons and understand it.
>> It's not that hard -- that much is evident. It would be a trivial
>> exercise for either you or Gene to add the required examples at this
>> point. The two of you developed methods of practical utility in music
>> and music theory. Pointing people to Clifford algebra textbooks or
>> Python libraries when they ask about them is, frankly, ridiculous.
>
>It clearly is hard, because over the years I haven't succeeded in
>doing it. I can have another look at that PDF we were working on.

That PDF was good but it was prior to this TE stuff.

Are you saying you can't give examples of points in these four
spaces?

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

1/26/2011 10:38:09 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> It's time to cut the crap and actually document this stuff in a
> meaningful way, so that the average determined member of the community
> could study the wiki over a couple of afternoons and understand it.
> It's not that hard -- that much is evident. It would be a trivial
> exercise for either you or Gene to add the required examples at this
> point.

What, precisely, do you want examples of? Would it make a good Xenwiki entry?

> A first step might be to merge these two pages
> http://xenharmonic.wikispaces.com/Tenney-Euclidean+temperament+measures
> http://xenharmonic.wikispaces.com/Tenney-Euclidean+metrics

You think it's going to clarify things to make the articles longer?

🔗Carl Lumma <carl@lumma.org>

1/26/2011 11:09:09 AM

>What, precisely, do you want examples of? Would it make a good Xenwiki entry?

The whole shebang uses four spaces:
1. Interval Space and TE Interval Space
2. Tuning Space and TE Tuning Space
3. the "commatic subspace" (and TE version?)
4. the quotient space of 1 by 3 (and TE version?)

There should be one entry defining them and providing at least one
example of each. Probably start by combining these pages:
http://xenharmonic.wikispaces.com/Monzos+and+Interval+Space
http://xenharmonic.wikispaces.com/Vals+and+Tuning+Space

1. doesn't depend on the temperament so it is sufficient to
show its basis in the 5-limit

|1 0 0>
|0 1 0>
|0 0 1>

You say the Tenney Interval Space basis is

|1 0 0>
|0 1.585 0>
|0 0 2.322>

but you don't say what the TE Interval Space basis is.

2 - 4 depend on the temperament and prime limit so each should
be demonstrated against a standard set of examples, say
* 12-ET (5-limit rank 1)
* porcupine (5-limit rank 2)
* pajara (7-limit rank 2)

>> A first step might be to merge these two pages
>> http://xenharmonic.wikispaces.com/Tenney-Euclidean+temperament+measures
>> http://xenharmonic.wikispaces.com/Tenney-Euclidean+metrics
>
>You think it's going to clarify things to make the articles longer?

It clarifies things to put content that belongs together together.

-Carl

🔗Graham Breed <gbreed@gmail.com>

1/27/2011 1:13:59 PM

Carl Lumma <carl@lumma.org> wrote:

> That PDF was good but it was prior to this TE stuff.

It's all about TE. It was prior to the term "TE" but I
decided I may as well update it to do so instead of
apologize for not doing so. It should be here:

http://x31eq.com/te.pdf

http://x31eq.com/te.tex

I can see that the geometry's wrong, because I said it's a
distance, but that's only for equal temperaments.

What counts as "this stuff" isn't clear to me. The topic
is still Zeta integrals. I don't think temperamental
complexity is settled enough to be written up.

> Are you saying you can't give examples of points in these
> four spaces?

I couldn't have said that before because you hadn't posted
about the four spaces, or not that I'd noticed. I still
don't understand them but it's late.

Graham

🔗Carl Lumma <carl@lumma.org>

1/27/2011 1:52:58 PM

>> That PDF was good but it was prior to this TE stuff.
>
>It's all about TE. It was prior to the term "TE"

Yes, I know.

>but I
>decided I may as well update it to do so instead of
>apologize for not doing so. It should be here:
>
> http://x31eq.com/te.pdf

Thanks. The pagination could use touching up. And the
date should be smaller than your name.

>What counts as "this stuff" isn't clear to me. The topic
>is still Zeta integrals.

I've changed the subject here.

>I don't think temperamental
>complexity is settled enough to be written up.

There's clearly a simple geometric interpretation for it.
Too bad Gene won't stoop to explain it to peons like me.

>> Are you saying you can't give examples of points in these
>> four spaces?
>
>I couldn't have said that before because you hadn't posted
>about the four spaces, or not that I'd noticed. I still
>don't understand them but it's late.

So the number of people who potentially understand the wiki
content on TE is not 2 but 1. Got it.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

1/27/2011 3:27:58 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> There's clearly a simple geometric interpretation for it.
> Too bad Gene won't stoop to explain it to peons like me.

Give me a break. I've started putting examples on the pages you mentioned.

🔗Carl Lumma <carl@lumma.org>

1/27/2011 4:39:38 PM

Gene wrote:

>> There's clearly a simple geometric interpretation for it.
>> Too bad Gene won't stoop to explain it to peons like me.
>
>Give me a break. I've started putting examples on the pages you mentioned.

Thank you. First question:

http://xenharmonic.wikispaces.com/Vals+and+Tuning+Space
"It useful to renormalize to the RMS (root mean square) instead,
which requires dividing the above by sqrt(n)"

Understood that the Euclidean distance proper doesn't have
the "mean" part. Why is it useful?

-Carl

🔗Carl Lumma <carl@lumma.org>

1/27/2011 5:11:48 PM

Perhaps the answer is,
"The point of this normalization is that measures of corresponding
temperaments in different prime limits can be meaningfully compared."

? -Carl

At 04:39 PM 1/27/2011, you wrote:
>Gene wrote:
>
>>> There's clearly a simple geometric interpretation for it.
>>> Too bad Gene won't stoop to explain it to peons like me.
>>
>>Give me a break. I've started putting examples on the pages you mentioned.
>
>Thank you. First question:
>
>http://xenharmonic.wikispaces.com/Vals+and+Tuning+Space
>"It useful to renormalize to the RMS (root mean square) instead,
>which requires dividing the above by sqrt(n)"
>
>Understood that the Euclidean distance proper doesn't have
>the "mean" part. Why is it useful?
>
>-Carl
>
>

🔗genewardsmith <genewardsmith@sbcglobal.net>

1/27/2011 6:28:32 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:
>
> Gene wrote:
>
> >> There's clearly a simple geometric interpretation for it.
> >> Too bad Gene won't stoop to explain it to peons like me.
> >
> >Give me a break. I've started putting examples on the pages you mentioned.
>
> Thank you. First question:
>
> http://xenharmonic.wikispaces.com/Vals+and+Tuning+Space
> "It useful to renormalize to the RMS (root mean square) instead,
> which requires dividing the above by sqrt(n)"
>
> Understood that the Euclidean distance proper doesn't have
> the "mean" part. Why is it useful?

Look at the example of 7-limit 31, which in the RMS norm comes
out as approximately 31. Then, if you compare 31 in the 5-limit, the 7-limit or the 11-limit, they are all approximately 31, and hence all approximately the same size.
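
For illustration, the same normalization as a GP one-liner (the name is mine), checked against the patent vals for 31:

\\ RMS of the Tenney-weighted val entries: divide entry i by log2 of the i-th prime,
\\ square, average over the n entries, take the square root
rms_size(v) = sqrt(sum(i = 1, #v, (v[i] / (log(prime(i))/log(2)))^2) / #v)

\\ ? rms_size([31, 49, 72])             \\ 5-limit 31: approximately 31
\\ ? rms_size([31, 49, 72, 87])         \\ 7-limit 31: approximately 31
\\ ? rms_size([31, 49, 72, 87, 107])    \\ 11-limit 31: approximately 31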

🔗Graham Breed <gbreed@gmail.com>

1/27/2011 9:21:11 PM

Carl Lumma <carl@lumma.org> wrote:

> http://xenharmonic.wikispaces.com/Vals+and+Tuning+Space
> "It useful to renormalize to the RMS (root mean square)
> instead, which requires dividing the above by sqrt(n)"
>
> Understood that the Euclidean distance proper doesn't have
> the "mean" part. Why is it useful?

The mean can still give a Euclidean distance, but with
different weighting. It's there because it makes the TE
complexity of an equal temperament close to the number of
steps to the octave, the TE error comparable to the TOP
error, and so on.

Graham

🔗Graham Breed <gbreed@gmail.com>

1/27/2011 10:21:32 PM

Carl Lumma <carl@lumma.org> wrote:
> >but I
> >decided I may as well update it to do so instead of
> >apologize for not doing so. It should be here:
> >
> > http://x31eq.com/te.pdf
>
> Thanks. The pagination could use touching up. And the
> date should be smaller than your name.

What about the pagination?

The name and date are printed by the standard macro under
scrartcl. I don't see any need to mess with the styles for
this document. Perhaps I could make my name longer, or
finish in March.

> >What counts as "this stuff" isn't clear to me. The topic
> >is still Zeta integrals.
>
> I've changed the subject here.

It looks like it's not applying to me.

> >I don't think temperamental
> >complexity is settled enough to be written up.
>
> There's clearly a simple geometric interpretation for it.
> Too bad Gene won't stoop to explain it to peons like me.

It's a distance on a lattice, like TE complexity, and
always a distance if you only take one interval at a time.
The metric is defined by a non-diagonal matrix that happens
to be the inverse of the one in Equation 15 of my PDF. So
instead of taking a determinant, you take the inverse, and
use it in place of W squared.

Gene does things a different way.
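
For what it's worth, here is a guess at that construction in GP, reverse-engineered from the numbers in Graham's Pari session later in the thread rather than from his code or Equation 15, so treat it as an assumption: form the Gram matrix of the Tenney-weighted mapping, scale by the number of primes, and invert.

\\ guessed metric (not Graham's actual tcmet): M has primes as rows and vals as columns
guess_tcmet(M) = {
  my(n = matsize(M)[1], W);
  W = matdiagonal(vector(n, i, 1 / (log(prime(i))/log(2))));
  return((n * (W*M)~ * (W*M))^(-1));
}

\\ ? guess_tcmet([12;19;28])   \\ roughly [0.000769605...], matching the tcmet([12;19;28])
\\                             \\ value printed in the session later in the thread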

> >> Are you saying you can't give examples of points in
> >> these four spaces?
> >
> >I couldn't have said that before because you hadn't
> >posted about the four spaces, or not that I'd noticed.
> >I still don't understand them but it's late.
>
> So the number of people who potentially understand the
> wiki content on TE is not 2 but 1. Got it.

It's something to do with the Wiki, then? I've got this
page:

http://xenharmonic.wikispaces.com/Tenney-Euclidean+metrics

Now I need to cross reference with your other message.

> 1. Interval Space and TE Interval Space

Are these one space? In TE interval space, the distance of
an interval from the origin would be its TE complexity.
An example would be [-4, 4, -1> for the syntonic comma.
Its magnitude is

sqrt([4^2 + (4*log2(3))^2 + (1*log2(5))^2]/3)
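
A GP one-liner for that magnitude (the name is mine): weight each exponent by log2 of its prime and take the RMS.

te_magnitude(m) = sqrt(sum(i = 1, #m, (m[i] * log(prime(i))/log(2))^2) / #m)

\\ ? te_magnitude([-4, 4, -1])   \\ = sqrt([4^2 + (4*log2(3))^2 + (1*log2(5))^2]/3)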

> 2. Tuning Space and TE Tuning Space

That's a space of vals where you measure the complexity of
a tuning class. For example, <12, 19, 28]. The complexity
is

sqrt([12^2 + (19/log2(3))^2 + (28/log2(5))^2]/3)

> 3. the "commatic subspace" (and TE version?)

The commatic subspace is the subspace of interval space
containing commas, or unison vectors: things tempered out
in a given regular temperament. The same intervals belong
to it as interval space.

> 4. the quotient space of 1 by 3 (and TE version?)

The transformation of interval space so that unison
vectors are tempered to unisons. It's the way Gene does
temperamental complexity and he explains it on the page.
I'd have to get Pari out to give examples.

Graham

🔗Carl Lumma <carl@lumma.org>

1/28/2011 12:04:23 AM

>> > http://x31eq.com/te.pdf
>>
>> Thanks. The pagination could use touching up. And the
>> date should be smaller than your name.
>
>What about the pagination?

The last page shouldn't exist. The 2nd page starts
with "ment is".

>> >What counts as "this stuff" isn't clear to me. The topic
>> >is still Zeta integrals.
>>
>> I've changed the subject here.
>
>It looks like it's not applying to me.

The web site has both subject and thread titles. There's no
way to change the latter via an e-mail that I know of.

>> So the number of people who potentially understand the
>> wiki content on TE is not 2 but 1. Got it.
>
>It's something to do with the Wiki, then? I've got this
>page:
>
>http://xenharmonic.wikispaces.com/Tenney-Euclidean+metrics
>
>Now I need to cross reference with your other message.

I've been talking about the wiki all along. All of my posts
about it link to the pages I was talking about and/or quote
text verbatim.

>> 1. Interval Space and TE Interval Space
>
>Are these one space?

This is one aspect I understand. The former employs an L1 norm
with no weighting. The latter an L2 norm with Tenney weighting.
In between there is Tenney Interval space, which has Tenney
weighting and L1.

>In TE interval space, the distance of
>an interval from the origin would be its TE complexity.
>An example would be [-4, 4, -1> for the syntonic comma.
>Its magnitude is
>
>sqrt([4^2 + (4*log2(3))^2 + (1*log2(5))^2]/3)

Yes.

>> 2. Tuning Space and TE Tuning Space
>
>That's a space of vals where you measure the complexity of
>a tuning class. For example, <12, 19, 28]. The complexity is
>
>sqrt([12^2 + (19/log2(3))^2 + (28/log2(5))^2]/3)

Tuning space ought to be unweighted vals with the dual norm
to L1, which is Linf, which I think is minimax. TE Tuning space
ought to be Tenney-weighted vals with an L2 norm, as above.
One key realization I think I made is that the duality turns the
weights from factors into divisors. I've been doing that all
along but would often mix them up when writing about it and never
paid attention to the role of duality, which at the moment I'm still
kind of guessing is the case.
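
A quick numerical check of that duality in GP (picking 16/15 and the 12-equal val as the example): weighting the val by 1/log2(p) and the monzo by log2(p) leaves their pairing unchanged.

v = [12, 19, 28];  m = [4, -1, -1];                  \\ 12-equal val; monzo for 16/15
wv = vector(3, i, v[i] / (log(prime(i))/log(2)));    \\ weights act as divisors on the val
wm = vector(3, i, m[i] * (log(prime(i))/log(2)));    \\ weights act as factors on the monzo
print(v*m~, "  ", wv*wm~);                           \\ both give 1 (the second up to rounding)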

Anyway, the above is for a single val. If I've got a rank 2
temperament I need to wedge first and then apply the distance to
the wedgie. Presumably I can also find the distance of each and
multiply them together with an angle. Wedgies ought to be easier,
especially with 3 vals and beyond.

>> 3. the "commatic subspace" (and TE version?)
>
>The commatic subspace is the subspace of interval space
>containing commas, or unison vectors: things tempered out
>in a given regular temperament. The same intervals belong
>to it as interval space.

What are its basis vectors?

>> 4. the quotient space of 1 by 3 (and TE version?)
>
>The transformation of interval space so that unison
>vectors are tempered to unisons. It's the way Gene does
>temperamental complexity and he explains it on the page.
>I'd have to get Pari out to give examples.

It reminds me of talk of the 5-limit lattice turning into a
helix. But again, I have no idea where to begin with such
a space. Examples against the three cases I gave would be a
huge help. Those were 12-ET 5-limit, porcupine 5-limit, and
pajara 7-limit.

-Carl

🔗Graham Breed <gbreed@gmail.com>

1/28/2011 12:44:31 AM

Carl Lumma <carl@lumma.org> wrote:
> >> > http://x31eq.com/te.pdf
> >>
> >> Thanks. The pagination could use touching up. And the
> >> date should be smaller than your name.
> >
> >What about the pagination?
>
> The last page shouldn't exist. The 2nd page starts
> with "ment is".

It's going to be different when it's finished, isn't it?
There's no point fiddling with the hyphenation now.

> >> So the number of people who potentially understand the
> >> wiki content on TE is not 2 but 1. Got it.
> >
> >It's something to do with the Wiki, then? I've got this
> >page:
> >
> >http://xenharmonic.wikispaces.com/Tenney-Euclidean+metrics
> >
> >Now I need to cross reference with your other message.
>
> I've been talking about the wiki all along. All of my
> posts about it link to the pages I was talking about
> and/or quote text verbatim.

Do not!

> >> 1. Interval Space and TE Interval Space
> >
> >Are these one space?
>
> This is one aspect I understand. The former employs an
> L1 norm with no weighting. The latter an L2 norm with
> Tenney weighting. In between there is Tenney Interval
> space, which has Tenney weighting and L1.

Okay.

> >> 2. Tuning Space and TE Tuning Space
> >
> >That's a space of vals where you measure the complexity
> >of a tuning class. For example, <12, 19, 28]. The
> >complexity is
> >
> >sqrt([12^2 + (19/log2(3))^2 + (28/log2(5))^2]/3)
>
> Tuning space ought to be unweighted vals with the dual
> norm to L1, which is Linf, which I think is minimax. TE
> Tuning space ought to be Tenney-weighted vals with an L2
> norm, as above. One key realization I think I made is
> that the duality turns the weights from factors into
> divisors. I've been doing that all along but would often
> mix them up when writing about it and never paid
> attention to role of duality, which at the moment I'm
> still kind of guessing is the case.

Yes, worst-abs is dual to sum-abs. It would become minimax
when you have a function to minimize.

Yes, duality does that. The metric for an interval space
is the inverse of the metric for a tuning space, when that
inverse exists.

> Anyway, the above is for a single val. If I've got a
> rank 2 temperament I need to wedge first and then apply
> the distance to the wedgie. Presumably I can also find
> the distance of each and multiply them together with an
> angle. Wedgies ought to be easier, especially with 3 vals
> and beyond.

If you have a rank 2 temperament, you need to measure the
area of a parallelogram. Wedge products are one way of
finding the area of a parallelogram. If you find them easy
to calculate, fine, use them. But you (or, at least, I)
are still going to need linear algebra for temperamental
complexity or cangwu badness.

> >> 3. the "commatic subspace" (and TE version?)
> >
> >The commatic subspace is the subspace of interval space
> >containing commas, or unison vectors: things tempered out
> >in a given regular temperament. The same intervals
> >belong to it as interval space.
>
> What are its basis vectors?

I don't think it matters. Gene can disagree if he likes.

> >> 4. the quotient space of 1 by 3 (and TE version?)
> >
> >The transformation of interval space so that unison
> >vectors are tempered to unisons. It's the way Gene does
> >temperamental complexity and he explains it on the page.
> >I'd have to get Pari out to give examples.
>
> It reminds me of talk of the 5-limit lattice turning into
> a helix. But again, I have no idea where to begin with
> such a space. Examples against the three cases I gave
> would be a huge help. Those were 12-ET 5-limit,
> porcupine 5-limit, and pajara 7-limit.

Sure, somebody could do that, if they have time.

Graham

🔗Graham Breed <gbreed@gmail.com>

1/28/2011 4:17:12 AM

I've uploaded the Pari code now. Hopefully
http://x31eq.com/parametric.gp if it isn't recognized as a
script.

Carl Lumma <carl@lumma.org> wrote:

> It reminds me of talk of the 5-limit lattice turning into
> a helix. But again, I have no idea where to begin with
> such a space. Examples against the three cases I gave
> would be a huge help. Those were 12-ET 5-limit,
> porcupine 5-limit, and pajara 7-limit.

To get the metric matrix for 12-equal, do

? tcmet([12;19;28])

[0.0007696056717124342220452659560]

Not very exciting. I don't have the code for Gene's space
and I won't try now because I might not get it working by
the time my connection's cut off. The complexity of some
intervals:

? temperamental complexity([12;19;28], [1])
0.02774176763857044171470867238
? temperamental complexity([12;19;28], [2])
0.05548353527714088342941734477

I can do the complexity of a JI vector (monzo) like this:

? temperamental complexity([12;19;28], [-4,4,-1]*[12;19;28])
0.E-14

So 81:80 is tempered out by 12-equal.

Porcupine:

[<12, 19, 28],
<22, 35, 51]>

The metric for temperamental complexity is

? porcupine = [12,22;19,35;28,51];
? tcmet(porcupine)

[35.91889642472442969260549560 -19.60332363292845572268672490]

[-19.60332363292845572268672490 10.69906288246315471594267892]

Intervals in these coordinates:

? [-4,4,-1]*porcupine
[0, 1]
? [1,0,0]*porcupine
[12, 22]
? [-1,1,0]*porcupine
[7, 13]
? [-2,0,1]*porcupine
[4, 7]
? [1,1,-1]*porcupine
[3, 6]

Complexities of these intervals:

? temperamental complexity(porcupine, [0,1])
3.270942201027580786185515086
? temperamental complexity(porcupine, [12,22])
0.3356219394797316775335614686
? temperamental complexity(porcupine, [7,13])
0.6022049109657446184434357601
? temperamental complexity(porcupine, [4,7])
1.081804322551881696086452127
? temperamental complexity(porcupine, [3,6])
1.648235664511914082209104851

Pajara:

[<12, 19, 28, 34],
<22, 35, 51, 62]>

? pajara = [12,22;19,35;28,51;34,62];

The metric for temperamental complexity is

? tcmet(pajara)

[21.20663857848723498888771491 -11.58782885863549431115869754]

[-11.58782885863549431115869754 6.332003415583224781021377144]

Some monzos in Pajara:

? temperamental complexity(pajara, [-4,4,-1,0]*pajara)
2.516347236687183595942550563
? temperamental complexity(pajara, [-1,1,0,0]*pajara)
0.4990143362465028591553842796
? temperamental complexity(pajara, [-2,0,1,0]*pajara)
0.8099188451851782090152286618
? temperamental complexity(pajara, [-2,0,0,1]*pajara)
0.7839485881806127231228259873

Those intervals using the Pajara basis:

? [-4,4,-1,0]*pajara
[0, 1]
? [-1,1,0,0]*pajara
[7, 13]
? [-2,0,1,0]*pajara
[4, 7]
? [-2,0,0,1]*pajara
[10, 18]

Pajara in Hermite normal form:

? pajara = rowhnf(pajara~)~

[2 0]

[0 1]

[11 -2]

[12 -2]

The same intervals in these coordinates:

? [-4,4,-1,0]*pajara
[-19, 6]
? [-1,1,0,0]*pajara
[-2, 1]
? [-2,0,1,0]*pajara
[7, -2]
? [-2,0,0,1]*pajara
[8, -2]

Complexity of these intervals (agrees with above)

? temperamental complexity(pajara, [-19,6])
2.516347236687183595942550848
? temperamental complexity(pajara, [-2,1])
0.4990143362465028591553843377
? temperamental complexity(pajara, [7,-2])
0.8099188451851782090152287496
? temperamental complexity(pajara, [8,-2])
0.7839485881806127231228260703

Graham

🔗Graham Breed <gbreed@gmail.com>

1/28/2011 6:19:14 AM

I've worked out the projection matrices!

For 12-equal:

? [12;19;28]*tcmet([12;19;28])*[12,19,28]

[0.1108232167265905279745182977 0.1754700931504350026263206380 0.2585875056953778986072093612]

[0.1754700931504350026263206380 0.2778276474881887541583410101 0.4094302173510150061280814886]

[0.2585875056953778986072093612 0.4094302173510150061280814886 0.6033708466225484300834885095]

This is the metric that defines distances in the projective
space. Some distances calculated with it:

? [-4,4,-1]*([12;19;28]*tcmet([12;19;28])*[12,19,28])*[-4;4;-1]
[-2.524354897 E-29]

(81:80 is tempered out)
? sqrt([4,-1,-1]*([12;19;28]*tcmet([12;19;28])*[12,19,28])*[4;-1;-1])
[0.02774176763857044171470867217]

(16:15 is one step)
? sqrt([-3,2,0]*([12;19;28]*tcmet([12;19;28])*[12,19,28])*[-3;2;0])
[0.05548353527714088342941734485]

(9:8 is two steps -- twice as far as 16:15)

For Porcupine:

? porcupine * tcmet(porcupine) * porcupine~

[0.1126420862601366728572157578 0.2015999397532055784402978521 0.2163315949243405438340909635]

[0.2015999397532055784402979329 0.6532085480375348870214699040 -0.1976174274324390926213011769]

[0.2163315949243405438340908698 -0.1976174274324390926213015129 1.585058626948751176330102810]

To find an interval according to this metric,

? sqrt([-1,-1,1] * porcupine * tcmet(porcupine) * porcupine~ * [-1;-1;1])
[1.648235664511914082209104852]

Compare this with the value from my other message.

The metric for Pajara:

[0.07197108490163612238518138582 0.1556423576284261018708583131 0.08455625170214646937678099589 0.1205417941529645305693716888]

[0.1556423576284261018708583131 0.4883289381347538985843501721 -0.1206249093131642368789796223 -0.04280373049895118594355046577]

[0.08455625170214646937678099589 -0.1206249093131642368789796223 0.7063092029881340553302547220 0.7485873288392072900186452200]

[0.1205417941529645305693716888 -0.04280373049895118594355046577 0.7485873288392072900186452200 0.8088582259156895553033310644]

Graham

🔗Carl Lumma <carl@lumma.org>

1/28/2011 9:55:12 AM

>> >> 1. Interval Space and TE Interval Space
>> >
>> >Are these one space?
>>
>> This is one aspect I understand. The former employs an
>> L1 norm with no weighting. The latter an L2 norm with
>> Tenney weighting. In between there is Tenney Interval
>> space, which has Tenney weighting and L1.
>
>Okay.

Very well. I'll be using these terms precisely from now on.

>> >> 2. Tuning Space and TE Tuning Space
>> >
>> >That's a space of vals where you measure the complexity
>> >of a tuning class. For example, <12, 19, 28]. The
>> >complexity is
>> >
>> >sqrt([12^2 + (19/log2(3))^2 + (28/log2(5))^2]/3)
>>
>> Tuning space ought to be unweighted vals with the dual
>> norm to L1, which is Linf, which I think is minimax. TE
>> Tuning space ought to be Tenney-weighted vals with an L2
>> norm, as above. One key realization I think I made is
>> that the duality turns the weights from factors into
>> divisors. I've been doing that all along but would often
>> mix them up when writing about it and never paid
> >> attention to the role of duality, which at the moment I'm
>> still kind of guessing is the case.
>
>Yes, worst-abs is dual to sum-abs. It would become minimax
>when you have a function to minimize.

Agreed.

>Yes, duality does that. The metric for an interval space
>is the inverse of the metric for a tuning space, when that
>inverse exists.

Thanks for confirming my suspicion.

>> >> 3. the "commatic subspace" (and TE version?)
>> >
>> >The commatic subspace is the subspace of interval space
>> >containing commas, or unison vectors: things tempered out
>> >in a given regular temperament. The same intervals
>> >belong to it as interval space.
>>
>> What are its basis vectors?
>
>I don't think matters. Gene can disagree if he likes.

I ask because I have no idea what lives in this space.
Seeing an example of a basis, and an example of a point
corresponding to something familiar, would certainly help.

>> >> 4. the quotient space of 1 by 3 (and TE version?)
>> >
>> >The transformation of interval space so that unison
>> >vectors are tempered to unisons. It's the way Gene does
>> >temperamental complexity and he explains it on the page.
>> >I'd have to get Pari out to give examples.
>>
>> It reminds me of talk of the 5-limit lattice turning into
>> a helix. But again, I have no idea where to begin with
>> such a space. Examples against the three cases I gave
>> would be a huge help. Those were 12-ET 5-limit,
>> porcupine 5-limit, and pajara 7-limit.
>
>Sure, somebody could do that, if they have time.

No, somebody couldn't. Somewhere between 1 and 4 people on
this planet could. They happen to be people who spend years
going round in circles in confusing threads with one another
and with me. I submit that providing such examples on the
wiki would be a much better use of their time.

-Carl

🔗Graham Breed <gbreed@gmail.com>

1/28/2011 10:07:24 AM

Carl Lumma <carl@lumma.org> wrote:
> >> >> 1. Interval Space and TE Interval Space
> >> >
> >> >Are these one space?
> >>
> >> This is one aspect I understand. The former employs an
> >> L1 norm with no weighting. The latter an L2 norm with
> >> Tenney weighting. In between there is Tenney Interval
> >> space, which has Tenney weighting and L1.
> >
> >Okay.
>
> Very well. I'll be using these terms precisely from now
> on.

Yikes!

> >> >> 3. the "commatic subspace" (and TE version?)
> >> >
> >> >The commatic subspace is the subspace of interval
> >> >space containing commas, or unison vectors: things
> >> >tempered out in a given regular temperament. The
> >> >same intervals belong to it as interval space.
> >>
> >> What are its basis vectors?
>
> I ask because I have no idea what lives in this space.
> Seeing an example of a basis, and an example of a point
> corresponding to something familiar, would certainly help.

What live in this space are unison vectors -- or things
like unison vectors that people might not like to call
unisons or vectors. A basis would be a minimal set of
unison vectors, without torsion. But the space itself
isn't interesting. The point, in the context of Gene's (or
Xenwiki's) explanation, is that you define the projective
space such that all unison vectors are given zero
magnitude. You don't need to find those unison vectors.

The space is defined by the mapping. The mapping matrix
multiplied by its pseudoinverse gives the projection
matrix. That's all defined way above. The business with
the quotient space seems to be there to make it clearer :-p
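
A minimal, unweighted GP sketch of that last point, just to show the shape of the construction (Graham's actual projection uses the Tenney-weighted metric, and the order of the factors depends on how the mapping is oriented):

M = [1, 1, 0; 0, 1, 4];       \\ 5-limit meantone mapping, vals as rows
P = M~ * (M*M~)^(-1) * M;     \\ pseudoinverse of M times M: a projection of interval space
print(P * [-4; 4; -1]);       \\ the syntonic comma (a unison vector) goes to zero
print(P*P - P);               \\ P is idempotent: this prints the zero matrix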

Graham