
Temperamental complexity idea dual to Gene's idea about Lp tunings

Mike Battaglia <battaglia01@gmail.com>

7/14/2012 12:32:10 AM

There's an interesting notion of temperamental complexity which is
dual to Gene's Lp tuning thing. I haven't investigated it yet, but it
seems interesting.

If JI interval space is V and a tempered interval space is W, then any
temperament implies an equivalence class of linear transformations V
-> W. For each linear transformation in this class, there's a dual
transformation W* -> V*. This transformation will be injective but not
surjective, and the image of this transformation will be a subspace of
V*: specifically the subspace consisting of all vals supporting the
temperament. If we equip V* with some norm, we can thus look at the
induced norm on the relevant subspace of V* and assign that as a norm
to W*. Then, by analogy with Gene's Lp idea, we can then take the dual
norm to W* and get a norm on W.

This is a measure of temperamental complexity for W, which has been
induced in a strange roundabout way from the original Lp norm on V*.
This is what you get if you take Gene's Lp tuning and apply all of the
concepts to the dual of the objects he was applying it to: e.g.
replace monzos with vals and smonzos with tvals.
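
At least in the L2 case, every step of that chain is computable, so
here's a rough numpy sketch for meantone. The Tenney-style weighting on
vals and the particular 5-limit mapping are assumptions I'm making just
for the example; the part I can't do is the same thing for general Lp:

import numpy as np

primes = np.array([2., 3., 5.])
W = np.diag(1 / np.log2(primes))    # assumed Tenney-style weighting on vals
M = np.array([[1., 1., 0.],         # a mapping for 5-limit meantone:
              [0., 1., 4.]])        # rows give periods and generator steps

# A tval t corresponds to the val t@M in V*, so the induced L2 norm on
# W* is ||t@M@W||, i.e. sqrt(t G t) with Gram matrix G:
G = (M @ W) @ (M @ W).T

# The dual of that norm, on tempered intervals x in W, is sqrt(x inv(G) x):
Ginv = np.linalg.inv(G)
def temperamental_complexity(x):
    return float(np.sqrt(x @ Ginv @ x))

# e.g. the tempered 3/2, which maps to one generator step:
print(temperamental_complexity(M @ np.array([-1., 1., 0.])))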

Unfortunately, I still have no intuition at all for what these crazy
dual norms to induced norms on subspaces of Lp look like, either for
Gene's Lp tunings or for this. It would be nice to know, for instance,
what sort of polytope the unit ball for meantone ends up being under
this setup, which is the same sort of question as what sort of polytope
the unit ball for the space of 2.5/3-svals ends up being under Gene's
setup. How
do you actually compute the unit ball for a dual norm to an induced
norm on a subspace of an Lp?

-Mike

Mike Battaglia <battaglia01@gmail.com>

7/14/2012 12:35:52 AM

On Sat, Jul 14, 2012 at 3:32 AM, Mike Battaglia <battaglia01@gmail.com> wrote:
> There's an interesting notion of temperamental complexity which is
> dual to Gene's Lp tuning thing.

To make the analogy explicit: Gene's thing is

monzos -> pick subspace of monzos defining subgroup -> put norm on
monzos -> induce norm on subgroup -> give induced norm to space of
smonzos -> give dual norm to space of svals

My thing is

vals -> pick subspace of vals defining temperament -> put norm on vals
-> induce norm on temperament -> give induced norm to space of tvals
-> give dual norm to space of tmonzos

-Mike

Mike Battaglia <battaglia01@gmail.com>

7/16/2012 1:53:48 PM

Anyone have any thoughts on this? Is this in any way related to the
temperamental complexity metrics we've been using?

-Mike

On Sat, Jul 14, 2012 at 3:32 AM, Mike Battaglia <battaglia01@gmail.com> wrote:
> There's an interesting notion of temperamental complexity which is
> dual to Gene's Lp tuning thing. I haven't investigated it yet, but it
> seems interesting.

Graham Breed <gbreed@gmail.com>

7/16/2012 2:30:54 PM

Mike Battaglia <battaglia01@gmail.com> wrote:
> Anyone have any thoughts on this? Is this in any way
> related to the temperamental complexity metrics we've
> been using?

Temperamental complexity comes from Tenney-Euclidean
complexity, which is an L2 norm. That's easy because the
dual of such a norm (an inner product) is defined by the
inverse of the defining matrix (maybe the Gram matrix, I'm
not sure). I don't know how to generalize this to other
norms, like I don't know how to generalize complexity at
all beyond rank 2.

I don't even have a satisfactory inverse for the inner
product for Cangwu badness. I can get something sensible
for individual intervals but it goes wrong (gives zero, I
think) for matrices representing sets of unison vectors.
Maybe I should be looking at pseudoinverses instead of
transposes.

The problem's there for simple TE badness. Simplify it
further by ignoring weighting. Then the badness of a
mapping [M> is

<M|M> - <M|J><J|M>/<J|J>

where <M| is the inverse of |M>, |J> is the JI vector, and
<J| is its inverse.

You can write that using a complexity operator K as <M|K|M>
where

K = I - |J><J|/<J|J>

So what is the inverse of that? One problem is that it's a
projection matrix.

KK = (I - |J><J|/<J|J>)(I - |J><J|/<J|J>)
= II - |J><J|I/<J|J> - I|J><J|/<J|J>
+|J><J|J><J|/<J|J><J|J>

= I - |J><J|/<J|J> - |J><J|/<J|J> + |J><J|/<J|J>
= I - |J><J|/<J|J>

It happens that the determinant is zero, because it's only positive
semidefinite, not positive definite: any multiple of the JI vector has
a measure of zero.

<J|(I - |J><J|/<J|J>)|J>
= <J|I|J> - <J|J><J|J>/<J|J>
= <J|J> - <J|J> = 0

Cangwu badness should avoid this by being positive
definite. I still don't have a satisfactory formula
for its inverse.
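
If it helps, here's a quick numpy check of the two identities above.
The 12&19 meantone mapping is just a sample <M|, and weighting is
ignored as before:

import numpy as np

J = np.log2([2., 3., 5.])                  # the JI vector (any overall scaling works)
K = np.eye(3) - np.outer(J, J) / (J @ J)   # K = I - |J><J|/<J|J>

print(np.allclose(K @ K, K))               # True: K is a projection
print(np.isclose(J @ K @ J, 0.0))          # True: the JI direction has measure zero

M = np.array([[19., 30., 44.],             # a sample mapping <M|: meantone as 19&12
              [12., 19., 28.]])
print(M @ K @ M.T)                         # <M|K|M> = <M|M> - <M|J><J|M>/<J|J>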

Graham

> On Sat, Jul 14, 2012 at 3:32 AM, Mike Battaglia
> <battaglia01@gmail.com> wrote:
> > There's an interesting notion of temperamental
> > complexity which is dual to Gene's Lp tuning thing. I
> > haven't investigated it yet, but it seems interesting.

Mike Battaglia <battaglia01@gmail.com>

7/16/2012 3:54:30 PM

On Mon, Jul 16, 2012 at 5:30 PM, Graham Breed <gbreed@gmail.com> wrote:
>
> Mike Battaglia <battaglia01@gmail.com> wrote:
> > Anyone have any thoughts on this? Is this in any way
> > related to the temperamental complexity metrics we've
> > been using?
>
> Temperamental complexity comes from Tenney-Euclidean
> complexity, which is an L2 norm. That's easy because the
> dual of such a norm (an inner product) is defined by the
> inverse of the defining matrix (maybe the Gram matrix, I'm
> not sure). I don't know how to generalize this to other
> norms, like I don't know how to generalize complexity at
> all beyond rank 2.

Here's a general explanation of dual norms for Lp norms, which I hope
is intuitive.

Say that M is a monzo in unweighted coordinates and that we have some
norm on monzos given by ||M||, which we presume corresponds to
interval complexity. Then for some unweighted tuning map T, <T|M> is
the number of cents that M maps to.

If J is the JIP, then we can define the error map E = T - J. <E|M>
gives you the amount of unweighted error in cents that M has under the
tuning map T. And if ||M|| is the norm on M, then <E|M>/||M|| is the
amount of weighted error in cents that M has under the map T.

For whatever norm || · || that we put on monzos, there's a
corresponding dual norm || · ||* on vals and tuning maps satisfying
this relationship for any val V:

||V||* = sup <V|M>/||M||

In other words, the norm of a val is the maximum, over all nonzero
monzos, of the value a monzo maps to divided by the complexity of that
monzo. So if we apply
this to an error map such as E, then ||E||* using the dual norm is the
maximum weighted error in cents over all monzos in the entire JI
lattice (!), where the weight of each monzo is given by its norm. So
if you're using the weighted L1 norm on monzos, then ||E||* would give
you the maximum Tenney-weighted error in cents over all monzos, and if
you're using the weighted L2 norm on monzos, then ||E||* would give
you the maximum Tenney Euclidean-weighted error in cents over all
monzos, etc.

For any Lp norm, we can calculate the corresponding dual as the Lq
norm, where q = p/(p-1). So the dual of the L2 norm is again the L2
norm, and the dual of the L1 norm is the Linf norm, and the dual of
the L3 norm is the L1.5 norm. If we want to use the L1 norm on
weighted monzos, therefore, the dual will be the Linf norm on weighted
vals. This means that, for any tuning map, its weighted-Linf distance
to the JIP tells you the maximum Tenney-weighted error over all monzos
for that map - and the tuning which minimizes this for some temperament
is the TOP tuning. Likewise, a tuning map's weighted-L2 distance to the
JIP tells you the maximum Tenney Euclidean-weighted error over all
monzos for that map - and the tuning which minimizes this is the TE
tuning.
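
As a sanity check on the L1/Linf case, here's a small numpy/itertools
sketch. I'm assuming quarter-comma meantone as the concrete tuning map
and the Tenney-weighted L1 norm on monzos, and only searching a box of
small monzos; the biggest weighted error found equals the weighted-Linf
distance from the tuning map to the JIP, as it should:

import itertools
import numpy as np

primes = np.array([2., 3., 5.])
w = np.log2(primes)
J = 1200 * w                                 # the JIP in cents, unweighted coordinates

fifth = 1200 * np.log2(5) / 4                # quarter-comma meantone: four fifths reach 5/1
T = np.array([1200., 1200. + fifth, 1200 * np.log2(5)])   # tuning map for 2, 3, 5
E = T - J                                    # error map

linf = np.max(np.abs(E) / w)                 # weighted-Linf norm of E, i.e. ||E||*

worst = max(abs(E @ np.array(m)) / (np.abs(np.array(m)) @ w)
            for m in itertools.product(range(-5, 6), repeat=3) if any(m))

print(linf, worst)                           # both come out ~3.39 cents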

Unfortunately, I don't have a clue how to work this out for arbitrary
non-Lp norms, such as the strange sorts of norms you get if you look
at a subspace of L1. The unit balls in this space, as Keenan's pointed
out, can be hexagons. What's the dual norm to a norm where the unit
ball's a hexagon? But anyway, maybe that'll give some insight into how
to solve your problem for Cangwu badness.

> The problem's there for simple TE badness. Simplify it
> further by ignoring weighting. Then the badness of a
> mapping [M> is
>
> <M|M> - <M|J><J|M>/<J|J>
>
> where <M| is the inverse of |M>, |J> is the JI vector, and
> <J| is its inverse.

What's simple TE badness, and how does it relate to TE error and TE complexity?

Also, am I to assume that <M|M> is going to be a square matrix? And
when you say "inverse" here, do you mean the pseudoinverse of it?

-Mike

Graham Breed <gbreed@gmail.com>

7/17/2012 2:28:15 PM

Mike Battaglia <battaglia01@gmail.com> wrote:
> On Mon, Jul 16, 2012 at 5:30 PM, Graham Breed

> > The problem's there for simple TE badness. Simplify it
> > further by ignoring weighting. Then the badness of a
> > mapping [M> is
> >
> > <M|M> - <M|J><J|M>/<J|J>
> >
> > where <M| is the inverse of |M>, |J> is the JI vector,
> > and <J| is its inverse.
>
> What's simple TE badness, and how does it relate to TE
> error and TE complexity?

Simple badness is complexity*error; it's also known as relative error.

> Also, am I to assume that <M|M> is going to be a square
> matrix? And when you say "inverse" here, do you mean the
> pseudoinverse of it?

<M|M> is a square matrix. It has a real inverse. But the
Moore-Penrose pseudoinverse is a generalization of that
inverse, so you can call it a pseudoinverse if you like.

Graham

Mike Battaglia <battaglia01@gmail.com>

7/17/2012 4:45:13 PM

On Tue, Jul 17, 2012 at 5:28 PM, Graham Breed <gbreed@gmail.com> wrote:
>
> <M|M> is a square matrix. It has a real inverse. But the
> Moore-Penrose pseudoinverse is a generalization of that
> inverse, so you can call it a pseudoinverse if you like.

No, I mean that you said this:

Graham wrote:
> the badness of a mapping [M> is
>
> <M|M> - <M|J><J|M>/<J|J>
>
> where <M| is the inverse of |M>, |J> is the JI vector, and
> <J| is its inverse.

You're saying <M| is the inverse of |M>. First I'm confused because
you have the mapping matrix in a ket - are you still using the setup
where vals are bras?

Secondly, when you say "inverse" do you mean the pseudoinverse of M here?

Lastly, you say that |J> is the JI vector, and <J| is its inverse. Am
I interpreting |J> as a monzo then, so that if we're using unweighted
coordinates in cents, |J> is 1200*|log2(2) log2(3) log2(5) ...>? And when
you say <J| is the "inverse" of |J>, do you mean the pseudoinverse of
it, which is NOT the usual JIP in val space?

-Mike

Graham Breed <gbreed@gmail.com>

7/18/2012 11:51:00 AM

Mike Battaglia <battaglia01@gmail.com> wrote:

> No, I mean that you said this:
>
> Graham wrote:
> > the badness of a mapping [M> is
> >
> > <M|M> - <M|J><J|M>/<J|J>
> >
> > where <M| is the inverse of |M>, |J> is the JI vector,
> > and <J| is its inverse.
>
> You're saying <M| is the inverse of |M>. First I'm
> confused because you have the mapping matrix in a ket -
> are you still using the setup where vals are bras?

Oh, sorry, I meant transpose.

Graham

Graham Breed <gbreed@gmail.com>

7/21/2012 3:53:49 AM

Mike Battaglia <battaglia01@gmail.com> wrote:
> There's an interesting notion of temperamental complexity
> which is dual to Gene's Lp tuning thing. I haven't
> investigated it yet, but it seems interesting.
>
> If JI interval space is V and a tempered interval space
> is W, then any temperament implies an equivalence class
> of linear transformations V -> W. For each linear
> transformation in this class, there's a dual
> transformation W* -> V*. This transformation will be
> injective but not surjective, and the image of this
> transformation will be a subspace of V*: specifically the
> subspace consisting of all vals supporting the
> temperament. If we equip V* with some norm, we can thus
> look at the induced norm on the relevant subspace of V*
> and assign that as a norm to W*. Then, by analogy with
> Gene's Lp idea, we can then take the dual norm to W* and
> get a norm on W.

Oh, I was busy last week so I missed this. I always
thought this was the general case of temperamental
complexity, with temperamental TE complexity the special
case that we could calculate.

The way to show the formula matches:

<M] is the mapping in V* so <M|X> gives generator steps to
the interval |X> in V.

<m] is the mapping in W* so <m|x> gives generator step to
[x> defined in W according to the generators given by <M].

You can transform <m] to V* as <m]<M]. So [x> = <M|X> and
<m|x> = <m|<M|X>.

The complexity of a mapping matrix <M| is defined as
<M|K|M>. The equivalent mapping in W*, <m], should have
the same complexity. That means <m|k|m> = <m|<M|K|M>|m>
and k = <M|K|M> defines the norm. The dual operation, on
vectors in W, is defined by the inverse of <M|K|M>. This
is temperamental TE complexity.

It transforms to the projection matrix in W as

<x| inv(<M|K|M>) |x> = <X|M> inv(<M|K|M>) <M|X>
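
To make that concrete, here's a numpy check that <x| inv(<M|K|M>) |x>
only depends on the temperament and the interval, not on which mapping
you take from the equivalence class. I'm assuming K is the usual TE
weighting, diag(1/log2(p))^2, purely for the example:

import numpy as np

primes = np.array([2., 3., 5.])
K = np.diag(1 / np.log2(primes)**2)        # assumed TE-style weighting on vals

def complexity(M, X):
    x = M @ X                              # [x> = <M|X>: generator steps for the interval
    k = M @ K @ M.T                        # k = <M|K|M>
    return float(x @ np.linalg.inv(k) @ x) # <x| inv(<M|K|M>) |x>

M1 = np.array([[19., 30., 44.],            # meantone as 19&12
               [12., 19., 28.]])
M2 = np.array([[1., 1., 0.],               # meantone again, in another basis
               [0., 1., 4.]])
X = np.array([-1., 1., 0.])                # the monzo for 3/2

print(complexity(M1, X), complexity(M2, X))   # the same number either way

The two mappings differ by a unimodular change of basis, which is why
the quadratic form agrees.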

Graham

Mike Battaglia <battaglia01@gmail.com>

7/21/2012 4:00:11 AM

On Sat, Jul 21, 2012 at 6:53 AM, Graham Breed <gbreed@gmail.com> wrote:
>
> Oh, I was busy last week so I missed this. I always
> thought this was the general case of temperamental
> complexity, with temperamental TE complexity the special
> case that we could calculate.

You mean "calculate easily," right? It seems like we should be able to
calculate TOP temperamental complexity as well, it'll just be a pain
in the ass because we don't have nice things like the pseudoinverse
for L1 and Linf.

> <m] is the mapping in W* so <m|x> gives generator step to
> [x> defined in W according to the generators given by <M].

So it looks like <m| is always the identity matrix, yes? It has to
satisfy the identity <m|*<M| = <M|.

> The complexity of a mapping matrix <M| is defined as
> <M|K|M>.

What's K?

-Mike

Graham Breed <gbreed@gmail.com>

7/21/2012 4:12:08 AM

Mike Battaglia <battaglia01@gmail.com> wrote:
> On Sat, Jul 21, 2012 at 6:53 AM, Graham Breed
> <gbreed@gmail.com> wrote:
> >
> > Oh, I was busy last week so I missed this. I always
> > thought this was the general case of temperamental
> > complexity, with temperamental TE complexity the special
> > case that we could calculate.
>
> You mean "calculate easily," right? It seems like we
> should be able to calculate TOP temperamental complexity
> as well, it'll just be a pain in the ass because we don't
> have nice things like the pseudoinverse for L1 and Linf.

I mean "could calculate" past tense. If you know how to
calculate temperamental TOP complexity now, that's good. I
didn't notice anybody doing it when TE complexity came out.

> > <m] is the mapping in W* so <m|x> gives generator step
> > to [x> defined in W according to the generators given
> > by <M].
>
> So it looks like <m| is always the identity matrix, yes? It
> has to satisfy the identity <m|*<M| = <M|.

No. <m|*<M| is a different mapping matrix in the same
space as <M|.

Let's take meantone, so as 19&12 <M] is

[<19, 30, 44], <12, 19, 28]>

You can define the mapping for diatonics in this space as

<m] = <1, -1|

That multiplies out as

<m|<M] = <1, -1|<19, 30, 44], <12, 19, 28]>
= [<19, 30, 44] - <12, 19, 28]>
= [<7, 11, 16]>

Every meantone val is a member of the subgroup defined by
<M].

> > The complexity of a mapping matrix <M| is defined as
> > <M|K|M>.
>
> What's K?

It's a matrix that defines the complexity of vals.

Graham

Mike Battaglia <battaglia01@gmail.com>

7/21/2012 4:39:52 AM

On Sat, Jul 21, 2012 at 7:12 AM, Graham Breed <gbreed@gmail.com> wrote:
>
> I mean "could calculate" past tense. If you know how to
> calculate temperamental TOP complexity now, that's good. I
> didn't notice anybody doing it when TE complexity came out.

I know conceptually how to calculate it by brute force for any
interval; you just find the JI interval mapping to it with the shortest
L1 norm. If my conjecture is right that there's a TOP projection map
for any temperament sending every interval to the interval in the same
coset with minimal L1 norm, that'll make life easier. Lp projection
maps that generalize TE projection maps would be very useful.
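
Here's the brute-force version in numpy, just to pin down what I mean.
I'm assuming the Tenney-weighted L1 norm and a particular meantone
mapping for the example, and only searching a box of small monzos:

import itertools
import numpy as np

primes = np.array([2., 3., 5.])
w = np.log2(primes)
M = np.array([[1, 1, 0],                   # an assumed mapping for 5-limit meantone
              [0, 1, 4]])

def top_temperamental_complexity(target):
    # minimal weighted-L1 norm over (small) monzos in the coset mapping to target
    best = None
    for m in itertools.product(range(-5, 6), repeat=3):
        if any(m) and np.array_equal(M @ np.array(m), target):
            norm = np.abs(np.array(m)) @ w
            if best is None or norm < best:
                best = norm
    return best

print(top_temperamental_complexity(M @ np.array([-1, 1, 0])))   # tempered 3/2: ~2.585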

> > > <m] is the mapping in W* so <m|x> gives generator step
> > > to [x> defined in W according to the generators given
> > > by <M].
> >
> > So it looks like <m| is always the identity matrix, yes? It
> > has to satisfy the identity <m|*<M| = <M|.
>
> No. <m|*<M| is a different mapping matrix in the same
> space as <M|.

OK, so <m| just represents a mapping matrix that you want to create
out of tempered vals, right? So it's a matrix in which the rows are
what I was calling tvals. Then I see.

> Let's take meantone, so as 19&12 <M] is
//snip
> Every meantone val is a member of the subgroup defined by
>
> <M].

OK, yeah. So if you've already been down this road before, then my
recent posts can be summed up as follows: this whole concept has a nice
dual concept involving subgroups. If you think about matrices which
look like the transpose of mapping matrices, i.e. things where the
columns are monzos (what I've been calling V-maps), then
right-multiplication of such a matrix by a column vector turns smonzos
back into monzos, and left-multiplication of it by a row vector turns
vals into svals. So that's what it means to "temper out a val": to move
to a subgroup of your original group.
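
To illustrate with the 2.5/3 subgroup I mentioned before, here's a tiny
numpy example of a V-map whose columns are the monzos for 2 and 5/3,
acting on an smonzo and on a val:

import numpy as np

Vmap = np.array([[ 1,  0],       # columns are the monzos for 2 and 5/3
                 [ 0, -1],
                 [ 0,  1]])

smonzo = np.array([1, 2])        # 50/9 = 2 * (5/3)^2 in subgroup coordinates
print(Vmap @ smonzo)             # right-multiplication: the full monzo |1 -2 2>

val = np.array([12, 19, 28])     # the 5-limit patent val for 12-EDO
print(val @ Vmap)                # left-multiplication: the sval <12, 9] on 2.5/3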

So now I'm trying to find the dual to many of the concepts we already
have, now that I have this new intuition. The dual operation to
tempering is subgroup reduction. It looks like Lp temperamental
complexity and the Lp sval norm are completely dual to one another in
this regard, the former being an induced norm on a quotient space of
monzos, and the latter being an induced norm on a quotient space of
vals.

I doubt anyone's following my insane wave of posts today, but I'm
starting to perceive so many layers and layers of duality that I
wonder if category theory might be helpful to organize things.

> > > The complexity of a mapping matrix <M| is defined as
> > > <M|K|M>.
> >
> > What's K?
>
> It's a matrix that defines the complexity of vals.

Are you saying that K is a matrix that gives you the norm on vals? Is
K some kind of norm matrix? What does it look like for L2?

-Mike