
Deprecating weighted coordinates and the two kinds of Tp complexity

🔗Mike Battaglia <battaglia01@gmail.com>

11/3/2012 12:26:42 AM

I reached some clarity on the TE complexity issues we were talking
about on the tuning list; below is what I've worked out.

The context is, we're noting that 2.9.5 81/80 (13&19) has a TE
complexity equal to half that of 2.3.5 81/80 (12&19), but that the
contorted and insane 2.9.5 81/80 (12&19) is equal to 2.3.5 81/80.
However, the weighted monzo for 81/80 is the same in all these cases
regardless of subgroup, which is |-4 6.340 -2.322>. What gives; how
can the TE complexity differ?

On Fri, Nov 2, 2012 at 11:38 PM, Mike Battaglia <battaglia01@gmail.com> wrote:
>
> This looks
> like a thing with the way that Gene's "dual" is defined on the wiki;
> it isn't invariant under a change of basis. The L2 norms of the
> multimonzo and multival associated with a temperament will be
> identical if the coordinates aren't weighted, but once you weight
> things, one will be a multiple of the other.

This is a lot messier than I thought; in general the weighted "dual"
ends up being a scaled version of what you want. So let's compare
these two multivals:

1) take 81/80, put that into a weighted monzo, and then get the dual
of it; you now have <<2.322 6.340 4||
2) take <12 19 28| and <7 11 16|, weight those vals, and then take the
wedge product; you now have <<0.631 1.723 1.087||

These are clearly not equal. However, the former is a multiple of the
latter; specifically it's equal to the latter times log2(2) * log2(3)
* log2(5), which is the determinant of the monzo weighting matrix.
Now, if all we care about is ranking all of the temperaments within a
single prime limit by TE complexity, then this fact doesn't matter at
all, because this scaling doesn't change the rankings, but should we
ever care to venture outside of this comfort zone and look at the
behavior of this function across limits and subgroups, we'll need to
get a better handle on what this means.
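As a quick numerical check of that determinant claim, here's a numpy sketch (signs are dropped throughout, as they are in the coordinates above):

```python
import numpy as np

# Monzo weighting matrix for 2.3.5; det(W) = log2(2)*log2(3)*log2(5)
W = np.diag(np.log2([2.0, 3.0, 5.0]))

# 1) weight the monzo for 81/80, then take its dual bival
#    (complement coordinates, signs dropped):
m = W @ np.array([-4.0, 4.0, -1.0])          # |-4 6.340 -2.322>
dual = np.abs(m)[::-1]                       # <<2.322 6.340 4||

# 2) weight the vals, then take the wedge product:
v1 = np.array([12.0, 19.0, 28.0]) / np.diag(W)
v2 = np.array([7.0, 11.0, 16.0]) / np.diag(W)
wedge = np.abs([v1[0]*v2[1] - v1[1]*v2[0],   # e0^e1 coordinate
                v1[0]*v2[2] - v1[2]*v2[0],   # e0^e2 coordinate
                v1[1]*v2[2] - v1[2]*v2[1]])  # <<0.631 1.723 1.087||

print(dual / wedge)   # each ratio is det(W), about 3.68
```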

I was going to try to fix this "problem" by creating a refinement of
the dual which can handle weighted coordinates, and which I was going
to call the Tenney dual. This is simple enough to do, but it didn't
take long before I realized that this wasn't going to solve the
original problem. The problem is that the entire concept of weighted
coordinates MAKES NO SENSE for the vast majority of subgroups. It's
not just that working out Tenney height is more complicated in these
subgroups, it's that the entire concept of weighting coordinates at
all falls apart and becomes useless.

The whole thing is a nice mathematical trick for when you're working in
a full-limit group (or a few special subgroups), because it simplifies
things so you can just weight tuning maps or whatever and compute raw
"post-weighted" distances to the JIP. However, there's literally no
incentive to weight the axes if your axes are 2.9.15; this is stupid
and evil and makes nothing easier. The whole thing is basically
"deprecated" now that we're looking at subgroups in their full
generality (but it can serve an indirect mathematical purpose as an
intermediary in certain calculations).

Things get much clearer if you get rid of the idea of weighting the
coordinates of monzos and vals directly. Instead, just keep the
coordinates unweighted and natural, and add all of this
complexification in the definition of the -norm-, not the coordinate
system. For full-limits, this basically just means that rather than
scaling the axes and using the normal Lp norm, we're going to keep the
axes normal but use a scaled Lp norm. For more complex subgroups, the
norm -won't- just be a scaled Lp norm, but will have to be obtained by
a more elaborate transformation. If we're talking about a norm on
monzos to start, then what I'm describing here is what I called the
"Tp" norm on the wiki (T stands for Tenney rather than L for
Lebesgue); the T1 norm is always Tenney height no matter what subgroup
you're in, and likewise with the T2 norm and TE height, etc.
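A minimal sketch of this, with a hypothetical `tp_norm` helper and a subgroup simple enough that the norm really is just a weighted Lp norm (general subgroups need the more elaborate transformation described above):

```python
import math

def tp_norm(monzo, basis, p):
    """Tp norm of an unweighted monzo, for a subgroup basis simple
    enough that the norm reduces to a weighted Lp norm (hypothetical
    helper; general subgroups need a more elaborate transformation)."""
    weights = [math.log2(b) for b in basis]
    if p == math.inf:
        return max(abs(c) * w for c, w in zip(monzo, weights))
    return sum((abs(c) * w) ** p for c, w in zip(monzo, weights)) ** (1 / p)

# T1 of 81/80 = |-4 4 -1> in 2.3.5 is its Tenney height log2(81*80):
print(tp_norm([-4, 4, -1], [2, 3, 5], 1))   # about 12.661
# T2 gives the TE version of the same height:
print(tp_norm([-4, 4, -1], [2, 3, 5], 2))   # about 7.848
```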

This makes things much clearer. Once you have a norm, you
automatically get a bunch of "induced" norms related to it. For
instance, using the Tp norm automatically gives you a dual norm on the
dual space, and it also induces norms on the exterior algebra of that
vector space, etc. From this perspective, it's now obvious what's
going on with TE complexity if you just think about Tp complexity in
general. So say you have some (unweighted!) multival V and its dual M,
representing the kernel. Then the induced norm on M is NOT (!!) the
same as the induced norm on V, its dual! So the norm of the multimonzo
representing some temperament's kernel is NOT the same as the norm of
the multival representing the temperament itself.

Therefore, there are two types of Tp complexity - kernel complexity
and "mapping complexity" or something, whatever we want to call it,
and they're emphatically not the same. They actually have two
different musical interpretations as well, though that's a subject for
a different post. This means that it's a total coincidence that we
even have this scenario where the TE complexity from the multival
agrees with the TE complexity from the multimonzo, because this is
usually NOT the case.

For instance, let's go back to our variables V and M from above, and
say you're using the naive unweighted L1 norm on unweighted monzos
(just to keep the math simple for now). Then the norm on M is also the
L1 norm, but the norm on V is the Linf norm, since it's dual to M and
lives in the dual space. These are the two complexity measures; the
former is the direct complexity of the kernel and the latter has a
really nice interpretation in terms of how much simple intervals are
"made more complex" by the temperament. It's -ONLY- if you're using
the L2 norm on M that the norm on M and the norm on V agree, since the
L2 norm is dual to itself.
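The L1/Linf duality above can be checked directly (plain Python, no weighting):

```python
import math

kernel = [-4, 4, -1]   # monzo for 81/80
bival  = [1, 4, 4]     # its dual multival <<1 4 4||

l1   = lambda v: sum(abs(x) for x in v)
l2   = lambda v: math.sqrt(sum(x * x for x in v))
linf = lambda v: max(abs(x) for x in v)

# kernel complexity (L1 on M) vs mapping complexity (Linf on V):
print(l1(kernel), linf(bival))   # 9 vs 4: not the same
# only L2 is dual to itself, and there the two agree exactly:
print(l2(kernel), l2(bival))     # both sqrt(33), about 5.745
```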

If instead we use the real T1 and T2 norms above, a similar thing
happens. You still get the T1 norm on M and then you get a dual norm
on V which is Linf-based just like T1 is L1-based. In the case of the
T2 norm, the dual norm on V is still L2-based, but the scaling ends up
working differently, so that one is a scaled version of the other. For
other values of p, though, the two norms aren't even constant multiples
of one another.

So the fact that these two types of TE complexity differ is related to
the fact that there are two types of Tp complexity in general. For TE
complexity you get lucky and the complexity you get by looking at the
multival is just a scaled version of the complexity you get by looking
at the multimonzo; for other Tp norms these diverge and are even
further apart from one another.

-Mike

🔗Keenan Pepper <keenanpepper@gmail.com>

11/5/2012 11:38:14 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> [...]
>
> So the fact that these two types of TE complexity differ is related to
> the fact that there are two types of Tp complexity in general. For TE
> complexity you get lucky and the complexity you get by looking at the
> multival is just a scaled version of the complexity you get by looking
> at the multimonzo; for other Tp norms these diverge and are even
> further apart from one another.

I'm confused about the difference in meaning or interpretation of T1 multimonzo complexity vs T1 val complexity (these are not duals, they just seem like they're very similar). If you take a bunch of good temperaments (e.g. the Middle Path 5-limit list) and plot them with T1 multimonzo complexity on one axis and T1 multival complexity on the other axis, what does the scatter plot look like? What does the axis orthogonal to the simple-complex axis represent?

Keenan

🔗Carl Lumma <carl@lumma.org>

11/7/2012 12:28:51 AM

Mike wrote:
>This is a lot messier than I thought; in general the weighted "dual"
>ends up being a scaled version of what you want. So let's compare
>these two multivals:
>
>1) take 81/80, put that into a weighted monzo, and then get the dual
>of it; you now have <<2.322 6.340 4||
>2) take <12 19 28| and <7 11 16|, weight those vals, and then take the
>wedge product; you now have <<0.631 1.723 1.087||

Indeed.

>Things get much clearer if you get rid of the idea of weighting the
>coordinates of monzos and vals directly. Instead, just keep the
>coordinates unweighted and natural, and add all of this
>complexification in the definition of the -norm-, not the coordinate
>system. For full-limits, this basically just means that rather than
>scaling the axes and using the normal Lp norm, we're going to keep the
>axes normal but use a scaled Lp norm. For more complex subgroups, the
>norm -won't- just be a scaled Lp norm, but will have to be obtained by
>a more elaborate transformation.

Ok!

>If we're talking about a norm on
>monzos to start, then what I'm describing here is what I called the
>"Tp" norm on the wiki (T stands for Tenney rather than L for
>Lebesgue); the T1 norm is always Tenney height no matter what subgroup
>you're in, and likewise with the T2 norm and TE height, etc.

Lost me here. I thought the 'problem' we're trying to solve is
the complexity of 2.3.5 meantone being different than 2.9.5
meantone.

>If instead we use the real T1 and T2 norms above, a similar thing
>happens. You still get the T1 norm on M and then you get a dual norm
>on V which is Linf-based just like T1 is L1-based. In the case of the
>T2 norm, the dual norm on V is still L2-based, but the scaling ends up
>working differently, so that one is a scaled version of the other. For
>other values of p, though, the two norms aren't even constant multiples
>of one another.
>So the fact that these two types of TE complexity differ is related to
>the fact that there are two types of Tp complexity in general. For TE
>complexity you get lucky and the complexity you get by looking at the
>multival is just a scaled version of the complexity you get by looking
>at the multimonzo; for other Tp norms these diverge and are even
>further apart from one another.

What happens if we apply the weighting after getting the multival?
e.g. <<1 2 4|| vs <<1 4 4||. What are weighting coefficients here?

-Carl

🔗Carl Lumma <carl@lumma.org>

11/7/2012 1:59:03 PM

I wrote:
> What happens if we apply the weighting after getting the multival?
> e.g. <<1 2 4|| vs <<1 4 4||. What are weighting coefficients here?

I think the weights are 1/p1, 1/p2, and 1/p0p2 where pi is the
base-2 log of the (0-indexed) ith prime in the smonzo. Right?

If so, then for TE complexity I get 0.87 and 1.27 respectively.
Drat.

I get 0.62 and 1.23 if I weight the reduced vals before wedging.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

11/7/2012 9:20:42 PM

I'm trying to keep up here... promised too many responses to too many
people (and I still haven't forgotten your explanation of
thermoeconomics on FB). Anyways, enjoy...

On Wed, Nov 7, 2012 at 3:28 AM, Carl Lumma <carl@lumma.org> wrote:
>
> >If we're talking about a norm on
> >monzos to start, then what I'm describing here is what I called the
> >"Tp" norm on the wiki (T stands for Tenney rather than L for
> >Lebesgue); the T1 norm is always Tenney height no matter what subgroup
> >you're in, and likewise with the T2 norm and TE height, etc.
>
> Lost me here. I thought the 'problem' we're trying to solve is
> the complexity of 2.3.5 meantone being different than 2.9.5
> meantone.

The reason I'm going into more abstract details about the structure of
the Tp norm is that it's necessary to see why the 2.3.5 vs 2.9.5 issue
arises to begin with. The essential topic being discussed here is:
given the Tp norm being placed on the vector space of monzos M, what
are the induced norms on various vector spaces associated with M, and
what complexity measures do they represent? These spaces would include
V, the dual space to M, as well as Λ(M) and Λ(V), the exterior algebra
on M and V. Our overall strategy here is to first impose a norm on M
representing intervallic complexity, and to then start chaining
together "induced" norms on these various related spaces in standard
linear-algebraic ways and see what we come up with.

Λⁿ(M), in simple terms, is the vector space of n-dimensional
multimonzos, and Λⁿ(V) is the vector space of n-dimensional multivals.
For any space of multimonzos Λⁿ(M), the space of "dual" or
"complement" multivals exists in Λᵈ⁻ⁿ(V), where d is the
dimensionality of M (i.e. the rank of the subgroup that M represents). And
what we're trying to do specifically is to compare the norm induced on
Λⁿ(M), where each multimonzo represents the kernel of some
temperament, with the one induced on Λᵈ⁻ⁿ(V), where each multival is
dual to a multimonzo in the former space, and see how their behaviors
change as the subgroup changes.

> What happens if we apply the weighting after getting the multival?
> e.g. <<1 2 4|| vs <<1 4 4||. What are weighting coefficients here?

I'll do the math out for 2.3.5 and 2.9.5 so you can compare.

So for instance, if we're looking at 2.3.5 meantone, then the kernel
|-4 4 -1> exists in Λ¹(M) = M, the space of monzos, and its dual
multival <<1 4 4|| exists in Λ³⁻¹(V) = Λ²(V), the space of bivals.
Then the question we're asking ourselves is, if we equip M with a Tp
norm, how do the norms on M and Λ²(V) differ? Well, in general, they
differ a lot. Here's the Tp norm on M for |-4 4 -1>:

|| |-4 4 -1> || = (|log2(2)·(-4)|^p + |log2(3)·(4)|^p + |log2(5)·(-1)|^p)^(1/p)

So for p=1 (Tenney Height) you get 12.661, for p=2 (Tenney-Euclidean
height) you get 7.848, for p=Inf you get 6.340, etc.

You can see that in this case, since the subgroup is "neat", the Tp
norm just reduces to a simple weighted Lp norm. There is hence a dual
weighted Lq norm induced on V, where 1/p + 1/q = 1, but this time the
weighting is backwards: we weight by 1/log2(2), 1/log2(3), etc., the
reciprocals of the log2(p) weights we used for monzos.
Then, from there, there's also a sort of higher-dimensional weighted
Lq norm induced on Λ²(V), which is as follows:

|| <<1 4 4|| || = (|1/(log2(2)log2(3))·(1)|^q +
|1/(log2(2)log2(5))·(4)|^q + |1/(log2(3)log2(5))·(4)|^q)^(1/q)
(Note that the weights now look like 1/(log2(2)log2(3)), which is the
product of the individual weights of the basis vals you wedge to get
the basis bivals.)

So for p=1 you get q=Inf, which works out to 1.723. If p=2 then q=2,
which works out to 2.132. If p=Inf then q=1, which works out to 3.441.

The key thing to note here is that these things are all NOT the same,
even in the case of p=2/q=2. Even in that case, you still do not
actually get the exact same thing if you take the induced norm on the
monzo representing the kernel vs the induced norm on the multival
representing the mapping, because the weightings are different, and
the weightings are an inherent part of the Tp norm. If we were using
-unweighted- Lp norms, this situation wouldn't arise, and the L2 norm
would give you the same exact thing for |-4 4 -1> as it would for <<1
4 4||. But this isn't the case here.
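These figures can be reproduced with a short script (hypothetical `norm` helper; the `dual` flag just flips to reciprocal weights):

```python
import math

log2 = math.log2
P  = [log2(2), log2(3), log2(5)]         # monzo weights for 2.3.5
BW = [P[0]*P[1], P[0]*P[2], P[1]*P[2]]   # bival basis weights (products)

def norm(coords, weights, p, dual=False):
    """Weighted Lp norm; dual=True uses reciprocal weights, as for vals."""
    w = [1 / x for x in weights] if dual else weights
    if p == math.inf:
        return max(abs(c) * wi for c, wi in zip(coords, w))
    return sum((abs(c) * wi) ** p for c, wi in zip(coords, w)) ** (1 / p)

kernel, bival = [-4, 4, -1], [1, 4, 4]
for p, q in [(1, math.inf), (2, 2), (math.inf, 1)]:
    print(norm(kernel, P, p), norm(bival, BW, q, dual=True))
# p=1: 12.661 vs 1.723; p=2: 7.848 vs 2.132; p=Inf: 6.340 vs 3.441
```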

However, look what happens if you multiply these multival norm results
by log2(2)·log2(3)·log2(5): for q=Inf you get 6.340, for q=2 you get
7.848, and for q=1 you get 12.661. If we look at this scaled norm,
then the p=2/q=2 norms on multimonzos and multivals agree at 7.848.
Setting p=1 on multimonzos gives you q=Inf on multivals - in this
case, the result for multivals is the same as if you'd set p=Inf on
multimonzos, which is 6.340. And if you set p=Inf on multimonzos, you
get q=1 on multivals, and now the scaled multival norm agrees with p=1
on multimonzos at 12.661.

Is this scaled norm "correct," with the unscaled norm being messed up
somehow? No, I don't think so. There are simply two different norms
here, one for the temperament's kernel and another for its multival,
both of which are naturally induced from the original norm on monzos.
In the case of TE (T2) and -only- TE, you can sort of force them to be
the same by doing this scaling trick above, but in general they won't
even coincide with a simple constant scaling factor and they
shouldn't. They have two different linear-algebraic interpretations
and tell you two different things about the temperament, and even in
the case of T2 the scaling changes differently when the subgroup
changes, as we'll see now.

So let's do the whole thing again for the 2.9.5 subgroup, assuming we
want the non-contorted 81/80 temperament generated by 2/1 and 9/8
(e.g. there's a deliberate generator change). Then the kernel is now
|-4 2 -1>, and the bival is <<1 2 4||. Since this is still a "nice"
subgroup, we can continue to just weight the axes without worrying
about anything more complicated. Let's see what we get:

|| |-4 2 -1> || = (|log2(2)·(-4)|^p + |log2(9)·(2)|^p + |log2(5)·(-1)|^p)^(1/p)

So for p=1 (Tenney Height) you get 12.661, for p=2 (Tenney-Euclidean
height) you get 7.848, for p=Inf you get 6.340, etc. As you can see,
these values are all completely identical to the 2.3.5 81/80 version.
And for the bival:

|| <<1 2 4|| || = (|1/(log2(2)log2(9))·(1)|^q +
|1/(log2(2)log2(5))·(2)|^q + |1/(log2(9)log2(5))·(4)|^q)^(1/q)

So for p=1 you get q=Inf, which works out to 0.861. If p=2 then q=2,
which works out to 1.066. If p=Inf then q=1, which works out to 1.720.

As before, if we multiply these latter values by
log2(2)·log2(9)·log2(5), things align with the values for 2.9.5 |-4 2
-1>. For p=1/q=Inf you get 6.340, for p=2/q=2 you get 7.848, for
p=Inf/q=1 you get 12.661. However, if we don't do this scaling, these
values are exactly half of the values for the 2.3.5 81/80 bival. You
can go compare these with the values above.

For posterity's sake, we might as well compare the 2.9.5 contorted and
insane 12&19 temperament. In this case, the kernel is still |-4 2 -1>,
so you know the kernel complexity is the same as above. However, the
multival is now <<2 4 8||. Let's see what we get:

|| <<2 4 8|| || = (|1/(log2(2)log2(9))·(2)|^q +
|1/(log2(2)log2(5))·(4)|^q + |1/(log2(9)log2(5))·(8)|^q)^(1/q)

For p=1/q=Inf you get 1.723, for p=2/q=2, you get 2.132, for
p=Inf/q=1, you get 3.441. These values are now the same as the 2.3.5
81/80 values. But, now, if you scale it by log2(2)·log2(9)·log2(5),
you get p=1/q=Inf: 12.680, p=2/q=2: 15.695, p=Inf/q=1: 25.324. Now
everything is twice the complexity of the 2.3.5 81/80 temperament.
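The factor-of-2 relationship between the contorted and non-contorted versions can be checked numerically (hypothetical `dual_lq` helper):

```python
import math

log2 = math.log2
W  = [log2(2), log2(9), log2(5)]         # weights for the 2.9.5 basis
BW = [W[0]*W[1], W[0]*W[2], W[1]*W[2]]   # bival basis weights

def dual_lq(bival, q):
    """Weighted Lq norm induced on 2.9.5 bivals (hypothetical helper)."""
    w = [abs(c) / bw for c, bw in zip(bival, BW)]
    return max(w) if q == math.inf else sum(x ** q for x in w) ** (1 / q)

for q in [math.inf, 2, 1]:
    plain     = dual_lq([1, 2, 4], q)    # non-contorted 2.9.5 81/80
    contorted = dual_lq([2, 4, 8], q)    # contorted 12&19 version
    print(plain, contorted, contorted / plain)   # ratio is exactly 2
```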

-Mike

🔗Carl Lumma <carl@lumma.org>

11/8/2012 11:16:18 AM

I wrote:

>> What happens if we apply the weighting after getting the multival?
>> e.g. <<1 2 4|| vs <<1 4 4||. What are weighting coefficients here?
>
>I think the weights are 1/p1, 1/p2, and 1/p0p2 where pi is the
>base-2 log of the (0-indexed) ith prime in the smonzo. Right?

Gene: are these the right weights?

>If so, then for TE complexity I get 0.87 and 1.27 respectively.
>Drat.
>
>I get 0.62 and 1.23 if I weight the reduced vals before wedging.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

11/8/2012 11:23:21 AM

On Wed, Nov 7, 2012 at 4:59 PM, Carl Lumma <carl@lumma.org> wrote:
>
> I wrote:
> > What happens if we apply the weighting after getting the multival?
> > e.g. <<1 2 4|| vs <<1 4 4||. What are weighting coefficients here?
>
> I think the weights are 1/p1, 1/p2, and 1/p0p2 where pi is the
> base-2 log of the (0-indexed) ith prime in the smonzo. Right?

No, they're going to be 1/p0p1, 1/p0p2, 1/p1p2. If your basis vectors
are e0, e1, and e2, then the first coefficient is for e0^e1, the
second is e0^e2, and the third is e1^e2. Each coefficient represents a
combination of ordinary grade-1 basis vectors. Since the wedge product
is bilinear and respects (ka)^b = a^(kb) = k(a^b) for some scalar k,
the weight for any basis element of the bivector will be the product
of the weights of the basis vectors you wedge to get that bivector.
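The bilinearity argument can be verified directly: weighting the vals first and then wedging gives the same coordinates as wedging first and then dividing each coordinate by the product of the corresponding basis weights. A small Python sketch:

```python
import math
from itertools import combinations

log2 = math.log2
P = [log2(2), log2(3), log2(5)]          # weights for the 2.3.5 basis vals
pairs = list(combinations(range(3), 2))  # (0,1), (0,2), (1,2)

def wedge(a, b):
    return [a[i]*b[j] - a[j]*b[i] for i, j in pairs]

v1, v2 = [12, 19, 28], [7, 11, 16]

# weight the vals first, then wedge:
a = wedge([c / p for c, p in zip(v1, P)], [c / p for c, p in zip(v2, P)])
# wedge first, then weight each bivector coordinate by the product of
# the weights of the two basis vectors wedged to produce it:
b = [c / (P[i] * P[j]) for c, (i, j) in zip(wedge(v1, v2), pairs)]

print(a)   # identical to b, by bilinearity of the wedge product
```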

-Mike

🔗Carl Lumma <carl@lumma.org>

11/8/2012 1:07:57 PM

Mike wrote:

>> I think the weights are 1/p1, 1/p2, and 1/p0p2 where pi is the
>> base-2 log of the (0-indexed) ith prime in the smonzo. Right?
>
>No, they're going to be 1/p0p1, 1/p0p2, 1/p1p2. If your basis vectors
>are e0, e1, and e2, then the first coefficient is for e0^e1, the
>second is e0^e2, and the third is e1^e2. Each coefficient represents a
>combination of ordinary grade-1 basis vectors. Since the wedge product
>is bilinear and respects (ka)^b = a^(kb) = k(a^b) for some scalar k,
>the weight for any basis element of the bivector will be the product
>of the weights of the basis vectors you wedge to get that bivector.

Thanks. p0p2 was a typo, which I then followed when I did the
calculation. And quite right, I shouldn't assume p0 = log2(2).
So here's a correction and a few more things to look at:

2.3.5 81/80
weighted wedgie 1.2311451463736638
weighted wedgie, 1/(p0+p1) etc. 0.9394665497911078
wedged weighted vals 0.5043897296092386
wedged weighted vals, 1/(p0+p1) etc. 0.297447306060962
2.9.5 81/80
weighted wedgie 0.6155725731868319
weighted wedgie, 1/(p0+p1) etc. 0.5628742631174743
wedged weighted vals 0.16389775164911985
wedged weighted vals, 1/(p0+p1) etc. 0.14249183178107935

These are all RMS. "1/(p0+p1)" means I add the weights instead
of multiplying them (or use the log of the product of the two
basis elements).

Can you reproduce any of these numbers?

The only one of these with a theoretical justification known to
me is plain "weighted wedgie", and that gives the 'unwanted'
result that the 2.9.5-based temperament is half as complex. And
none of the rest helps.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

11/8/2012 2:03:54 PM

On Thu, Nov 8, 2012 at 4:07 PM, Carl Lumma <carl@lumma.org> wrote:
>
> 2.3.5 81/80
> weighted wedgie 1.2311451463736638
> weighted wedgie, 1/(p0+p1) etc. 0.9394665497911078
> wedged weighted vals 0.5043897296092386
> wedged weighted vals, 1/(p0+p1) etc. 0.297447306060962

> 2.9.5 81/80
> weighted wedgie 0.6155725731868319
> weighted wedgie, 1/(p0+p1) etc. 0.5628742631174743
> wedged weighted vals 0.16389775164911985
> wedged weighted vals, 1/(p0+p1) etc. 0.14249183178107935
>
> These are all RMS. "1/(p0+p1)" means I add the weights instead
> of multiplying them (or use the log of the product of the two
> basis elements).
>
> Can you reproduce any of these numbers?

The figures I gave don't use RMS, they're using the T2/weighted L2
norm, which is root-sum-squared. I'd rather not use RMS because it
complicates the algebra involved. But for the sake of comparison, if I
weight <<1 4 4|| and take RMS instead of the L2 norm, I get
1.23114514637367, which is what you had above. If I first weight the
vals, take the wedge product and then take the RMS, I also get
1.23114514637367.

The weighting on the multival is defined so that these two values are
supposed to be identical, which they are for my calculation but not
yours. You must be doing something different for your "wedged weighted
vals" calculation.

For 2.9.5 81/80, the RMS I get is 0.615572573186832. If I weight the
vals beforehand and then take the wedge product and RMS, I likewise get
0.615572573186825.
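For anyone checking along, the RSS/RMS relationship is just a division by sqrt(3), and it reproduces the table entry above (a small sketch):

```python
import math

log2 = math.log2
# weighted wedgie coordinates for 2.3.5 <<1 4 4||:
w = [1 / (log2(2) * log2(3)),
     4 / (log2(2) * log2(5)),
     4 / (log2(3) * log2(5))]
rss = math.sqrt(sum(x * x for x in w))   # T2 / weighted L2 norm
rms = rss / math.sqrt(3)                 # RMS divides out sqrt(n)
print(rms)                               # about 1.23114514637366
```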

I don't get the "wedged weighted vals, 1/(p0+p1) etc." thing. How are
you weighting the individual vals so as to get that result when you
wedge them?

-Mike

🔗Carl Lumma <carl@lumma.org>

11/8/2012 2:41:41 PM

>> 2.3.5 81/80
>> weighted wedgie 1.2311451463736638
>> weighted wedgie, 1/(p0+p1) etc. 0.9394665497911078
>> wedged weighted vals 0.5043897296092386
>> wedged weighted vals, 1/(p0+p1) etc. 0.297447306060962
>
>> 2.9.5 81/80
>> weighted wedgie 0.6155725731868319
>> weighted wedgie, 1/(p0+p1) etc. 0.5628742631174743
>> wedged weighted vals 0.16389775164911985
>> wedged weighted vals, 1/(p0+p1) etc. 0.14249183178107935
>>
>> These are all RMS. "1/(p0+p1)" means I add the weights instead
>> of multiplying them (or use the log of the product of the two
>> basis elements).
>>
>> Can you reproduce any of these numbers?
>
>The figures I gave don't use RMS, they're using the T2/weighted L2
>norm, which is root-sum-squared. I'd rather not use RMS because it
>complicates the algebra involved. But for the sake of comparison, if I
>weight <<1 4 4|| and take RMS instead of the L2 norm, I get
>1.23114514637367, which is what you had above. If I first weight the
>vals, take the wedge product and then take the RMS, I also get
>1.23114514637367.

Crap, yeah, I used the wedgie weighting on the vals by mistake.

>I don't get the "wedged weighted vals, 1/(p0+p1) etc." thing. How are
>you weighting the individual vals so as to get that result when you
>wedge them?

Totally incorrectly. Here is the corrected table

2.3.5 81/80
weighted wedgie 1.2311451463736638
wedged weighted vals 1.2311451463736638
weighted wedgie, 1/(p0+p1) etc. 0.9394665497911078
2.9.5 81/80
weighted wedgie 0.6155725731868319
wedged weighted vals 0.6155725731868319
weighted wedgie, 1/(p0+p1) etc. 0.5628742631174743

Thanks for looking at this.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

11/8/2012 2:56:58 PM

On Thu, Nov 8, 2012 at 5:41 PM, Carl Lumma <carl@lumma.org> wrote:
>
> Crap, yeah, I used the wedgie weighting on the vals by mistake.

There you go. So then, to address this:

> The only one of these with a theoretical justification known to
> me is plain "weighted wedgie", and that gives the 'unwanted'
> result that the 2.9.5-based temperament is half as complex. And
> none of the rest helps.

So weighted wedgie and wedged weighted vals are the same thing, as you
know. What you've computed here is basically 1/sqrt(3) times the
natural norm induced on multivals that you get if you put the TE norm
on monzos. This is indeed one thing with some sort of theoretical
justification; what its exact interpretation is is something that I'm
still figuring out. That'll probably be my next post on this topic.

The thing which you were advocating for, however, is slightly
different. Try taking various types of weighted norm of |-4 4 -1>
under both subgroups and you'll see it works out to be the same. This
is a second type of complexity which is also theoretically justified.
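
For instance (a sketch, assuming log2(p) weighting on monzo coordinates): the weighted coordinates of 81/80 come out identical under both bases, since 2*log2(9) = 4*log2(3), so any weighted norm of the monzo agrees.

```python
# Sketch (not from the thread): 81/80 has the same weighted coordinates
# in either basis, so every weighted norm (L1, L2, Linf, ...) agrees.
from math import log2

def weighted_monzo(monzo, basis):
    return [c * log2(p) for c, p in zip(monzo, basis)]

m235 = weighted_monzo([-4, 4, -1], (2, 3, 5))  # 81/80 as |-4 4 -1> in 2.3.5
m295 = weighted_monzo([-4, 2, -1], (2, 9, 5))  # 81/80 as |-4 2 -1> in 2.9.5
print(m235)  # ~[-4.0, 6.3399, -2.3219]
print(m295)  # the same, since 2*log2(9) = 4*log2(3)
```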

If we go back to the old school and start using regular Tenney height
instead of TE, then the difference between these two norms is
heightened. The equivalent of your norm is the weighted L1 norm of the
multimonzo representing the kernel, whereas the equivalent of the
thing you calculated before would be the weighted Linf norm of the
wedgie.
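
Concretely, here's a sketch of that contrast for 81/80, assuming log2(p) weights on monzo coordinates and products of those weights on the wedgie components; the kernel is one-dimensional here, so its multimonzo is just the monzo.

```python
# Sketch (not from the thread): Tenney-style L1 comma complexity vs
# weighted Linf wedgie complexity for 81/80 in two subgroup bases.
from math import log2

def weighted_l1_monzo(monzo, basis):
    # weighted L1 norm of the comma: sum of |coeff| * log2(p)
    return sum(abs(c) * log2(p) for c, p in zip(monzo, basis))

def weighted_linf_wedgie(wedgie, basis):
    # weighted Linf norm of the wedgie: max |component| / (log2(p)*log2(q))
    p0, p1, p2 = basis
    weights = [log2(p0) * log2(p1), log2(p0) * log2(p2), log2(p1) * log2(p2)]
    return max(abs(c) / w for c, w in zip(wedgie, weights))

# The L1 multimonzo complexity is the same under both bases...
print(weighted_l1_monzo([-4, 4, -1], (2, 3, 5)))   # ~12.66
print(weighted_l1_monzo([-4, 2, -1], (2, 9, 5)))   # ~12.66
# ...but the Linf wedgie complexity of the 2.9.5 version is half as big.
print(weighted_linf_wedgie([1, 4, 4], (2, 3, 5)))  # ~1.72
print(weighted_linf_wedgie([1, 2, 4], (2, 9, 5)))  # ~0.86
```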

-Mike

🔗Carl Lumma <carl@lumma.org>

11/8/2012 3:16:45 PM

At 02:56 PM 2012/11/08, you wrote:
>On Thu, Nov 8, 2012 at 5:41 PM, Carl Lumma <carl@lumma.org> wrote:
>>
>> Crap, yeah, I used the wedgie weighting on the vals by mistake.
>
>There you go. So then, to address this:
>
>> The only one of these with a theoretical justification known to
>> me is plain "weighted wedgie", and that gives the 'unwanted'
>> result that the 2.9.5-based temperament is half as complex. And
>> none of the rest helps.
>
>So weighted wedgie and wedged weighted vals are the same thing, as you
>know. What you've computed here is basically 1/sqrt(3) times the
>natural norm induced on multivals that you get if you put the TE norm
>on monzos. This is indeed one thing with some sort of theoretical
>justification; what its exact interpretation is is something that I'm
>still figuring out. That'll probably be my next post on this topic.

It's just Euclidean harmonic distance, right? It isn't quite as
good as Tenney harmonic distance, but neither is it bad. I don't
think the choice of L1 or L2 is terribly significant here...

>The thing which you were advocating for, however, is slightly
>different. Try taking various types of weighted norm of |-4 4 -1>
>under both subgroups and you'll see it works out to be the same. This
>is a second type of complexity which is also theoretically justified.

You mean the multimonzo complexity? I don't want to be accused
of advocating it. I pointed out it was the same under different
subgroups... I agree that reaching a better understanding of the
differences between val-based and comma-based complexity is a
good goal.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

11/8/2012 6:06:33 PM

On Thu, Nov 8, 2012 at 6:16 PM, Carl Lumma <carl@lumma.org> wrote:
>
> At 02:56 PM 2012/11/08, Mike wrote:
> >So weighted wedgie and wedged weighted vals are the same thing, as you
> >know. What you've computed here is basically 1/sqrt(3) times the
> >natural norm induced on multivals that you get if you put the TE norm
> >on monzos. This is indeed one thing with some sort of theoretical
> >justification; what its exact interpretation is is something that I'm
> >still figuring out. That'll probably be my next post on this topic.
>
> It's just Euclidean harmonic distance, right? It isn't quite as
> good as Tenney harmonic distance, but neither is it bad. I don't
> think the choice of L1 or L2 is terribly significant here...

No, I mean the norm on multivals is one thing with some sort of
theoretical justification. The specific thing it's measuring is hidden
somewhere in this page:
http://en.wikipedia.org/wiki/Operator_norm

Multivals are bounded linear operators, so there's a very interesting
musical interpretation for some of that, which I'll post about soon
once I have it really nailed down well.

Basically, it tells you something like, in its worst-case scenario,
how much the temperament increases the complexity of its intervals,
using a very particular definition of "complexity increase" that works
across ranks. But the devil is in the details here.

I agree that L1 vs L2 doesn't make much of a difference.

> >The thing which you were advocating for, however, is slightly
> >different. Try taking various types of weighted norm of |-4 4 -1>
> >under both subgroups and you'll see it works out to be the same. This
> >is a second type of complexity which is also theoretically justified.
>
> You mean the multimonzo complexity? I don't want to be accused
> of advocating it. I pointed out it was the same under different
> subgroups... I agree that reaching a better understanding of the
> differences between val-based and comma-based complexity is a
> good goal.

Well, Paul's an advocate of it, for whatever reason. I think it's
useful in its own way.

-Mike

🔗Carl Lumma <carl@lumma.org>

11/8/2012 9:43:56 PM

Mike wrote:
>> >So weighted wedgie and wedged weighted vals are the same thing, as you
>> >know. What you've computed here is basically 1/sqrt(3) times the
>> >natural norm induced on multivals that you get if you put the TE norm
>> >on monzos. This is indeed one thing with some sort of theoretical
>> >justification; what its exact interpretation is is something that I'm
>> >still figuring out. That'll probably be my next post on this topic.
>>
>> It's just Euclidean harmonic distance, right? It isn't quite as
>> good as Tenney harmonic distance, but neither is it bad. I don't
>> think the choice of L1 or L2 is terribly significant here...
>
>No, I mean the norm on multivals is one thing with some sort of
>theoretical justification.

Thanks for clarifying - I thought the 2nd half of the paragraph
referred to the TE norm on monzos.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

11/8/2012 10:07:55 PM

Sorry Keenan, I missed this post.

On Tue, Nov 6, 2012 at 2:38 AM, Keenan Pepper <keenanpepper@gmail.com>
wrote:
>
> I'm confused about the difference in meaning or interpretation of T1
> multimonzo complexity vs T1 val complexity (these are not duals, they just
> seem like they're very similar).

First off, assuming that you put the T1 norm on monzos, then you'll
naturally induce a norm on multimonzos, which we might as well also
just call the T1 norm for multimonzos. For "nice" subgroups it's going
to be a weighted L1 norm, where the weights for each basis multivector
of the exterior power are the products of the weights of the basis
vectors you have to wedge to get it. Then, the natural norm induced on
multivals is NOT a T1 norm, but something kind of like a Tinf norm,
but where you're using 1/log2(p) weighting rather than log2(p)
weighting. Maybe I'll call this the Tinf* norm from now on.
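
Here's a sketch of those induced weights in the full 5-limit, using the kernel of 12-EDO as an illustration; the choice of generating commas (81/80 and 128/125) is my own, not from the thread.

```python
# Sketch: the T1 norm induced on 5-limit bimonzos, where each basis
# bivector e_p^e_q gets weight log2(p)*log2(q).
# Illustration: the bimonzo for the kernel of 12-EDO, generated by
# 81/80 = |-4 4 -1> and 128/125 = |7 0 -3>.
from math import log2

def wedge(m1, m2):
    # components on e2^e3, e2^e5, e3^e5
    return [m1[0] * m2[1] - m1[1] * m2[0],
            m1[0] * m2[2] - m1[2] * m2[0],
            m1[1] * m2[2] - m1[2] * m2[1]]

def t1_bimonzo(bimonzo, basis):
    p0, p1, p2 = basis
    weights = [log2(p0) * log2(p1), log2(p0) * log2(p2), log2(p1) * log2(p2)]
    return sum(abs(c) * w for c, w in zip(bimonzo, weights))

bm = wedge([-4, 4, -1], [7, 0, -3])
print(bm)                          # [-28, 19, -12], dual to <12 19 28| up to sign
print(t1_bimonzo(bm, (2, 3, 5)))   # ~132.66
```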

I'm not sure if, when you say "T1 multival complexity," you mean the
complexity on multivals which is ultimately derived from the T1 norm
on monzos (e.g. the result is the Tinf* norm on multivals), or you
mean putting the T1 norm on multivals itself. I'll assume the former.

One interpretation of the T1 norm of the kernel of a temperament is as
a measure of how efficient the temperament is, overall, at using tempered intervals
to represent a wide array of simple JI intervals. If the kernel is
simple then it "appears often" between simple JI intervals, or is
reachable by very small and simple circuits using only simple JI
intervals; therefore, if you temper a very simple kernel out then we
can expect to see, as a rule, less notes standing in for more JI
intervals.

The interpretation of the Tinf* norm on the multival for the
temperament is a bit more complex. Multivals are linear functionals on
same-grade multimonzos, and in that capacity they're also bounded
linear operators; the norm in this case is telling you what the bound
is. So if V is a multival and M is a multimonzo of the same grade,
then ||V|| = sup |V(M)|/||M||. It's a bit mysterious, and pinning down
the meaning of that is what I'm working on now.
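
As a sketch of that definition in the weighted-L2 setting (my illustration, not from the post): for meantone's wedgie V = <<1 4 4||, the sup over bimonzos M has a closed form by Cauchy-Schwarz, namely the dually-weighted L2 norm of V, and it's attained at M_i = V_i/w_i^2.

```python
# Sketch: estimate ||V|| = sup |V(M)| / ||M|| for V = <<1 4 4|| (meantone),
# where ||M|| is the weighted L2 norm on 5-limit bimonzos (weights are
# products of the prime log-weights). By Cauchy-Schwarz the sup equals
# sqrt(sum (V_i/w_i)^2), attained at M_i = V_i / w_i^2.
from math import log2, sqrt
import random

V = [1, 4, 4]                                        # meantone wedgie
w = [log2(2) * log2(3), log2(2) * log2(5), log2(3) * log2(5)]

def pairing(V, M):
    return sum(v * m for v, m in zip(V, M))

def weighted_l2(M, w):
    return sqrt(sum((wi * mi) ** 2 for wi, mi in zip(w, M)))

bound = sqrt(sum((v / wi) ** 2 for v, wi in zip(V, w)))  # closed-form sup

random.seed(0)
best = 0.0
for _ in range(10000):                               # random sampling stays below
    M = [random.gauss(0, 1) for _ in range(3)]
    best = max(best, abs(pairing(V, M)) / weighted_l2(M, w))

M_star = [v / wi ** 2 for v, wi in zip(V, w)]        # the maximizer
print(bound)                 # ~2.1324 (= sqrt(3) * 1.2311...)
print(best <= bound + 1e-9)  # True
```

Dividing this bound by sqrt(3) recovers the RMS figure of ~1.2311 for meantone that Carl computed earlier in the thread.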

> If you take a bunch of good temperaments
> (e.g. the Middle Path 5-limit list) and plot them with T1 multimonzo
> complexity on one axis and T1 multival complexity on the other axis, what
> does the scatter plot look like? What does the axis orthogonal to the
> simple-complex axis represent?

So assuming my interpretation before was correct, and that T1 multival
complexity means the Linf-based complexity on multivals which is
derived from the original T1 norm on monzos, then I'm not sure. I
haven't done it yet. I've been wondering about things like the ratio
of the two quantities as well.

-Mike

🔗Keenan Pepper <keenanpepper@gmail.com>

11/9/2012 8:51:32 AM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
> I'm not sure if, when you say "T1 multival complexity," you mean the
> complexity on multivals which is ultimately derived from the T1 norm
> on monzos (e.g. the result is the Tinf* norm on multivals), or you
> mean putting the T1 norm on multivals itself. I'll assume the former.

What if we're talking about the latter? Is there any actual difference between using the T1 norm on multimonzos and using the T1 norm (not Tinf*) on multivals?

Keenan

🔗Mike Battaglia <battaglia01@gmail.com>

11/9/2012 11:22:59 AM

On Fri, Nov 9, 2012 at 11:51 AM, Keenan Pepper <keenanpepper@gmail.com>
wrote:
>
> What if we're talking about the latter? Is there any actual difference
> between using the T1 norm on multimonzos and using the T1 norm (not Tinf*)
> on multivals?

Do you mean the T1 norm here and not the T1* norm? Meaning you want us
to weight vals with log2(p) weighting instead of 1/log2(p)?

-Mike