
Another notion of subgroup/temperament complexity

🔗Mike Battaglia <battaglia01@gmail.com>

7/29/2012 7:24:30 AM

There are two types of matrix norm here which are interesting:
http://en.wikipedia.org/wiki/Matrix_norm

The first is the induced Lp matrix norm, which is defined as
||M||_p = max ||Mx||_p/||x||_p over all nonzero x. This generalizes the
definition of the dual norm of a val in a straightforward way. There
are very easy formulas for this norm for the cases p=1 and p=Inf, and
also apparently for p=2, although I don't have the algorithm handy
(MATLAB can do it).
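For the easy cases, those formulas are just the maximum absolute column sum (p=1) and the maximum absolute row sum (p=Inf). A minimal sketch in Python (rather than the MATLAB mentioned above), using the meantone mapping that comes up later in the thread as the example matrix:

```python
# Induced L1 and L-infinity norms of a mapping matrix.
# M is the meantone mapping [<1 0 -4|, <0 1 4|].
M = [[1, 0, -4],
     [0, 1,  4]]

def induced_l1(M):
    # induced L1 norm = maximum absolute column sum
    return max(sum(abs(row[j]) for row in M) for j in range(len(M[0])))

def induced_linf(M):
    # induced L-infinity norm = maximum absolute row sum
    return max(sum(abs(x) for x in row) for row in M)

print(induced_l1(M))    # 8  (the third column, |-4| + |4|)
print(induced_linf(M))  # 5
```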

The second is the Schatten norm, of which the Frobenius norm is a
special case. The Schatten p-norm is the Lp norm of the vector of
singular values of the matrix. The L1 norm of this vector (also known
as the nuclear norm) is a generalization of trace for nonsquare
matrices in the same sense that the product of the singular values is
a generalization of the determinant. The L2 norm of this vector ends
up being the same as the Frobenius norm on the matrix.
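Since the mapping matrices here have only two rows, the singular values can be read off from the 2x2 Gram matrix M*M^T with the quadratic formula, with no SVD routine needed. A Python sketch (my illustration, again using the meantone mapping as the test matrix):

```python
import math

# Meantone mapping [<1 0 -4|, <0 1 4|]
M = [[1, 0, -4],
     [0, 1,  4]]

# Gram matrix G = M * M^T (2x2)
G = [[sum(a*b for a, b in zip(r1, r2)) for r2 in M] for r1 in M]
tr = G[0][0] + G[1][1]
det = G[0][0]*G[1][1] - G[0][1]*G[1][0]
disc = math.sqrt(tr*tr - 4*det)

# Singular values of M are the square roots of G's eigenvalues
sv = [math.sqrt((tr + disc)/2), math.sqrt((tr - disc)/2)]

schatten1 = sum(sv)                          # L1 of singular values (nuclear norm)
schatten2 = math.sqrt(sum(s*s for s in sv))  # L2 of singular values
frobenius = math.sqrt(sum(x*x for row in M for x in row))

print(sv)                    # ~ [5.7446, 1.0], i.e. [sqrt(33), 1]
print(schatten2, frobenius)  # both sqrt(34) ~ 5.8310: Schatten-2 = Frobenius
```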

These norms are well-defined for any temperament or subgroup. So, we
can define the complexity of any mapping matrix M of any kind, either
V-map or M-map, as

||M|| = min ||U*M||

where U is any unimodular matrix. This can be applied to any matrix
norm. Conceptually, this makes the most sense if U is restricted to
unimodular matrices with integer coordinates, as all such U*M will
have the same Hermite form as M; however, I still haven't seen any
benefit from doing it that way vs taking the minimum over all U.

I'm sure there's some clever way to linearize this problem, but for
the moment it's a pretty straightforward nonlinear optimization
problem, and MATLAB/octave's fminsearch should do the trick pretty
well.
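As a sanity check that doesn't depend on fminsearch, the integer-unimodular version of the minimization can simply be brute-forced over U with small entries. This is a sketch under my own assumptions (Python instead of MATLAB/Octave, Frobenius norm, U entries bounded by 3), not the thread's actual method:

```python
import math
from itertools import product

# Meantone mapping [<1 0 -4|, <0 1 4|]
M = [[1, 0, -4],
     [0, 1,  4]]

def matmul(U, M):
    # (2x2) * (2x3) matrix product
    return [[sum(U[i][k]*M[k][j] for k in range(2)) for j in range(3)]
            for i in range(2)]

def frob(A):
    return math.sqrt(sum(x*x for row in A for x in row))

# Brute force over 2x2 integer U with |entries| <= 3 and det(U) = +-1
best_norm, best_U = float('inf'), None
for a, b, c, d in product(range(-3, 4), repeat=4):
    if a*d - b*c in (1, -1):
        n = frob(matmul([[a, b], [c, d]], M))
        if n < best_norm:
            best_norm, best_U = n, [[a, b], [c, d]]

print(best_norm)  # ~ 4.3589 = sqrt(19), e.g. from rows <1 1 0| and <0 1 4|
print(best_U)
```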

Why is this better than just taking the norm of the multivector? I
dunno, should 2.77.121 have the same complexity as 4.7.11?

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

7/29/2012 1:14:38 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
> There's
> very easy formulas for this norm for the cases p=1 and p=Inf, and also
> apparently for p=2, although I don't have the algorithm handy (MATLAB
> can do it).

Try Lagrange multipliers.

> where U is any unimodular matrix. This can be applied to any matrix
> norm. Conceptually, this makes the most sense if U is restricted to
> unimodular matrices with integer coordinates, as all such U*M will
> have the same hermite form as M; however, I still haven't seen any
> benefit from doing it that way vs taking the norm over all of U.

A unimodular matrix has integer coordinates by definition, but certainly you can define your complexity measure in terms of real square matrices with determinant +-1.

> I'm sure there's some clever way to linearize this problem, but for
> the moment it's a pretty straightforward nonlinear optimization
> problem, and MATLAB/octave's fminsearch should do the trick pretty
> well.

Have you tried it, and with what result?

🔗Mike Battaglia <battaglia01@gmail.com>

7/29/2012 5:19:28 PM

On Sun, Jul 29, 2012 at 4:14 PM, genewardsmith <genewardsmith@sbcglobal.net>
wrote:
>
> A unimodular matrix has integer coordinates by definition, but certainly
> you can define your complexity measure in terms of real square matricies
> with determinant +-1.

OK, I had that definition wrong I guess.

> > I'm sure there's some clever way to linearize this problem, but for
> > the moment it's a pretty straightforward nonlinear optimization
> > problem, and MATLAB/octave's fminsearch should do the trick pretty
> > well.
>
> Have you tried it, and with what result?

I've done some preliminary stuff and it worked out well, but I'm still
tweaking fminsearch to make it not take forever. I need some time to
compile a more thorough listing of temperaments and such. If you have
a good go-to list of 7-limit temperaments that'd be useful.

One thing that was really strange is, for meantone, the matrix [<1 0
-4|, <0 1 4|] has an induced L2 norm (not the Schatten norm) of
5.74456264653803. Eerily, the L2 norm of the meantone wedgie is also
exactly 5.74456264653803. So here we have a case where the induced L2
norm of the matrix is equal to the product of its singular values.
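The numbers can be checked in pure Python (my sketch, not part of the original exchange). One hedged observation: by Cauchy-Binet the product of the singular values always equals the L2 norm of the wedgie, so the induced L2 norm (the largest singular value) coincides with that product exactly when the remaining singular value is 1, which happens to be the case for this particular meantone matrix:

```python
import math

# Meantone mapping [<1 0 -4|, <0 1 4|]
M = [[1, 0, -4],
     [0, 1,  4]]

# Singular values via eigenvalues of the 2x2 Gram matrix M*M^T
G = [[sum(a*b for a, b in zip(r1, r2)) for r2 in M] for r1 in M]
tr, det = G[0][0] + G[1][1], G[0][0]*G[1][1] - G[0][1]*G[1][0]
disc = math.sqrt(tr*tr - 4*det)
s1, s2 = math.sqrt((tr + disc)/2), math.sqrt((tr - disc)/2)

wedgie = [1, 4, 4]  # the 2x2 minors of M, i.e. the meantone wedgie <<1 4 4||
wedgie_l2 = math.sqrt(sum(w*w for w in wedgie))

print(s1)        # induced L2 norm ~ 5.74456264653803
print(s1 * s2)   # product of singular values: the same value, since s2 = 1
print(wedgie_l2) # sqrt(33), equal to the product by Cauchy-Binet
```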

I don't understand why it syncs up like that. For instance, that above
matrix for meantone isn't the one with the lowest L2 norm, but it is
the one where these values magically sync up like that. Do you have
any idea why that would be the case?

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

7/30/2012 2:39:53 AM

On Sun, Jul 29, 2012 at 4:14 PM, genewardsmith <genewardsmith@sbcglobal.net>
wrote:
>
> > where U is any unimodular matrix. This can be applied to any matrix
> > norm. Conceptually, this makes the most sense if U is restricted to
> > unimodular matrices with integer coordinates, as all such U*M will
> > have the same hermite form as M; however, I still haven't seen any
> > benefit from doing it that way vs taking the norm over all of U.
>
> A unimodular matrix has integer coordinates by definition, but certainly
> you can define your complexity measure in terms of real square matricies
> with determinant +-1.

Quick note: it has to be integer unimodular. Otherwise, matrices like
[<2 0 0|, <0 1 0|, <0 -0.5 0.5|] end up being fair game, and 2.77.121
and 4.7.11 end up in the same equivalence class. It has to be
restricted to matrices with integer coordinates, i.e. those for which
U*M has the same Hermite form as M.

-Mike