
Applying a tuning map using wedgies

🔗Mike Battaglia <battaglia01@gmail.com>

4/26/2012 4:12:18 PM

Someone asked a question similar to this over on XA and I realized I
had no answer for them:

How does one work out a tuning map if one prefers to do things using
only wedgies, and not matrices?

For instance, say I'm looking at 5-limit meantone, which is <<1 4 4||
with vanishing comma |-4 4 -1>. Is there a way to calculate something
like the POTE optimal tuning using only exterior algebra?

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

4/26/2012 7:52:22 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> Someone asked a question similar to this over on XA and I realized I
> had no answer for them:
>
> How does one work out a tuning map if one prefers to do things using
> only wedgies, and not matrices?

I'm not sure what the point of an allergy to matrices is. A wedgie-based method for rank two is as follows: form the matrix of interior products M = [Wv2 Wv3 ... Wvp], where W is the wedgie. Now P = M`M, where M` is the pseudoinverse, is the Frobenius projection map. It has rows which are fractional monzos equalling the tuning values. Put M into weighted coordinates to get the TE tuning instead.
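
For concreteness, here is a minimal sketch of this in Python with numpy, for 5-limit meantone. (The antisymmetric-matrix layout for the wedgie and the 1/log2(p) weighting are just one set of conventions, so adjust to taste.)

import numpy as np

# 5-limit meantone wedgie <<1 4 4|| stored as an antisymmetric matrix,
# W[i][j] = w_ij, so that row i is the interior product of the wedgie
# with the basis monzo for the i-th prime.
W = np.array([[ 0.0,  1.0,  4.0],
              [-1.0,  0.0,  4.0],
              [-4.0, -4.0,  0.0]])

# Each row is a val supporting meantone, e.g. <0 1 4| kills 81/80.
M = W

# Frobenius projection: P = pinv(M) M projects onto the temperament's val space.
P = np.linalg.pinv(M) @ M
J = 1200.0 * np.log2([2.0, 3.0, 5.0])      # the JIP in cents
print(J @ P)                                # Frobenius tuning map

# TE tuning: the same thing in weighted coordinates (divide val entries by log2 p).
w = np.log2([2.0, 3.0, 5.0])
Mw = M / w
Pw = np.linalg.pinv(Mw) @ Mw
Tw = (1200.0 * np.ones(3)) @ Pw             # weighted JIP is <1200 1200 1200|
print(Tw * w)                               # unweight: TE tuning map in cents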

🔗Mike Battaglia <battaglia01@gmail.com>

4/26/2012 8:25:35 PM

On Thu, Apr 26, 2012 at 10:52 PM, genewardsmith
<genewardsmith@sbcglobal.net> wrote:
>
> --- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...>
> wrote:
> >
> > Someone asked a question similar to this over on XA and I realized I
> > had no answer for them:
> >
> > How does one work out a tuning map if one prefers to do things using
> > only wedgies, and not matrices?
>
> I'm not sure what the point of an allergy to matrices is. A wedgie-based
> method for rank two is as follows: form the matrix of interior products M =
> [Wv2 Wv3 ... Wvp], where W is the wedgie. Now P = M`M, where M` is the
> pseudoinverse, is the Frobenius projection map. It has rows which are
> fractional monzos equalling the tuning values. Put M into weighted
> coordinates to get the TE tuning instead.

Haha... I see what you did there.

There's no allergy. I actually prefer matrices. It's just that I want
to know what exterior algebra is capable of.

If it requires turning wedgies into matrices to do stuff, then what
advantage do they have? Just that they're elegant representations and
easy to notate?

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

4/27/2012 9:08:51 AM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> Haha... I see what you did there.

Did you see that [Wv2 Wv3 ... Wvp] is really nothing more than a matrix version of the wedgie? Of course, this just works in rank two.

> There's no allergy. I actually prefer matrices. It's just that I want
> to know what exterior algebra is capable of.

Clearly you could contrive something which would not use the language of matrices, but what would be the point?

> If it requires turning wedgies into matrices to do stuff, then what
> advantage do they have? Just that they're elegant representations and
> easy to notate?

I do some things with wedgies, some things with tuning maps, and some things with projection matrices depending on what seems easiest.

🔗Mike Battaglia <battaglia01@gmail.com>

4/27/2012 11:21:14 PM

On Fri, Apr 27, 2012 at 12:08 PM, genewardsmith
<genewardsmith@sbcglobal.net> wrote:
>
> --- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...>
> wrote:
>
> > Haha... I see what you did there.
>
> > Did you see that [Wv2 Wv3 ... Wvp] is really nothing more than a matrix
> > version of the wedgie? Of course, this just works in rank two.

Yes, I saw it. I said "how do you calculate a tuning map with wedgies
and not using matrices?" and your answer was "turn the wedgie into a
matrix and calculate the tuning map." Tryin' to pull one over on ol'
Mike, I see.

> > There's no allergy. I actually prefer matrices. It's just that I want
> > to know what exterior algebra is capable of.
>
> Clearly you could contrive something which would not use the language of
> matricies, but what would be the point?

I dunno, isn't that like asking what the point of exterior algebra is?
Exterior algebra itself is something that was contrived to not use the
language of matrices. "The point" is to come up with more elegant ways
of talking about stuff. You just made this argument to Dave on XA.

> > If it requires turning wedgies into matrices to do stuff, then what
> > advantage do they have? Just that they're elegant representations and
> > easy to notate?
>
> I do some things with wedgies, some things with tuning maps, and some
> things with projection matrices depending on what seems easiest.

What's an example of something which is easiest with wedgies?

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

4/28/2012 1:07:40 AM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> What's an example of something which is easiest with wedgies?

Putting a multival into the canonical form of a wedgie is much easier than putting a mapping matrix into the canonical form of Hermite normal form. I'm not even sure how I would do all the stuff I recently did with Fokker blocks not using wedgies, though I am sure there is a way. Defining TE complexity, TE relative error/simple badness, TE error, TE badness is easier and makes more sense with wedgies, and I'm not sure how you even do the TOP analogs Paul likes without wedgies. You can use tuning maps or Frobenius stuff for testing if a monzo or val is associated to a temperament or not, etc, but I find wedgies easiest myself. This is all off the top of my head. The point is, you can pretty much do everything with tuning maps or Frobenius projection matrices also, if you prefer, but wedgies are nice for some things.

Would it make you happier to use only one approach?

🔗Mike Battaglia <battaglia01@gmail.com>

4/28/2012 1:35:22 AM

On Sat, Apr 28, 2012 at 4:07 AM, genewardsmith <genewardsmith@sbcglobal.net>
wrote:
>
> Defining TE complexity,
> TE relative error/simple badness, TE error, TE badness is easier and makes
> more sense with wedgies, and I'm not sure how you even do the TOP analogs
> Paul likes without wedgies.

You can do TE error with wedgies?? How??

> Would it make you happier to use only one approach?

No - the point is that I'm deliberately trying to learn more than one
approach, so I can understand what's going on from as many angles as
possible.

I'm pretty used to both matrices and wedgies at this point, and I've
been doing things like you say you are - sometimes use matrices for
some things, sometimes use wedgies for other things. The reason I'm
asking all these questions is because I'm trying to see what clever
insights I might be missing, like the connection between TE complexity
and the L2 norm of a wedgie which totally eluded me. It's easier for
me to think in general if I can understand things from more than one
standpoint.

Other people may prefer to use only one approach, but that's where I'm at.

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

4/28/2012 7:53:33 AM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> You can do TE error with wedgies?? How??

TE error = ||J^W||/||W||

Here J is the JIP, and W is a wedgie.

Here's how to find the TE tuning using wedgies:

(1) Take the wedge product of W with X = <x1 x2 ... xn|, where the xi are indeterminates.

(2) Solve W^X = 0 and substitute into X, getting Y.

(3) Put Y into weighted coordinates, getting Z.

(4) Compute (Z-J).(Z-J), the dot product of Z minus J in weighted coordinates with itself; in these coordinates, J = <1 1 1 ... 1|.

(5) Take partial derivatives wrt any remaining variables, set equal to zero, solve and substitute. That's your TE tuning map.

Steps 4 and 5 simply expand on "solve for the minimum distance using least squares." You can use whatever method you like to find the nearest point to J.
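
Here is a sketch of steps (1)-(5) in Python with sympy, again for 5-limit meantone. The only non-obvious piece is the sign convention when expanding W^X, which is written out in the comment; the result should come out as the TE tuning map, roughly <1201.4 1898.4 2788.2| in cents.

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
w12, w13, w23 = 1, 4, 4                              # meantone wedgie <<1 4 4||

# (1)-(2): in the 5-limit, W^X has the single component
#          w12*x3 - w13*x2 + w23*x1; set it to zero and solve for x3.
Y = [x1, x2, sp.solve(w12*x3 - w13*x2 + w23*x1, x3)[0]]

# (3): weighted coordinates -- divide the entry for prime p by log2(p).
logs = [sp.log(p, 2) for p in (2, 3, 5)]
Z = [Y[i] / logs[i] for i in range(3)]

# (4): squared distance from J = <1 1 1| in weighted coordinates.
dist2 = sum((z - 1)**2 for z in Z)

# (5): zero the partial derivatives, solve, substitute back into Y.
opt = sp.solve([sp.diff(dist2, v) for v in (x1, x2)], [x1, x2])
T = [1200 * y.subs(opt) for y in Y]
print([sp.N(t, 7) for t in T])                       # TE tuning map in cents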

> It's easier for
> me to think in general if I can understand things from more than one
> standpoint.

Next you'll want to know how to do everything the Frobenius way.

🔗Mike Battaglia <battaglia01@gmail.com>

4/29/2012 3:42:46 AM

On Sat, Apr 28, 2012 at 10:53 AM, genewardsmith
<genewardsmith@sbcglobal.net> wrote:
> --- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...>
> wrote:
>
> > You can do TE error with wedgies?? How??
>
> TE error = ||J^W||/||W||
>
> Here J is the JIP, and W is a wedgie.

Aha! Very clever. So if W is a bivector, the volume of the
parallelepiped formed by J^W, divided by the volume of the
parallelogram formed by W, gives you the length of the "altitude" of
the parallelepiped (assuming that W is the "base"). And this altitude
is exactly the L2 distance of the vector normal to the base which
intersects the JIP.

That's handy!
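
As a quick numerical check of that picture, here's a short numpy sketch for 5-limit meantone. The weighting convention (divide w_ij by log2(pi)*log2(pj), so the JIP becomes <1 1 1|) is one choice among several, and the second print shows what happens if ||J|| is also divided out.

import numpy as np

w = np.log2([2.0, 3.0, 5.0])

# Weighted wedgie <<1 4 4||: divide component w_ij by log2(pi)*log2(pj).
w12, w13, w23 = 1/(w[0]*w[1]), 4/(w[0]*w[2]), 4/(w[1]*w[2])
norm_W = np.sqrt(w12**2 + w13**2 + w23**2)

# In three dimensions J^W has the single component J1*w23 - J2*w13 + J3*w12,
# and the weighted JIP is <1 1 1|.
JW = w23 - w13 + w12

print(1200 * abs(JW) / norm_W)                # about 2.8 cents
print(1200 * abs(JW) / norm_W / np.sqrt(3))   # about 1.6 cents if ||J|| is divided out too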

> Here's how to find the TE tuning using wedgies:

Very neat. So if T is the tuning map, that's equivalent to solving the
following equations, right? (With everything being expressed in
weighted coordinates)

T^W = 0 (lies in the wedgie)
(T-J)^W* = 0 (where W* is the Hodge dual, i.e. orthogonal to the wedgie)
||T-J|| = ||J^W||/||W|| (magnitude of the error map)

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

4/29/2012 10:17:24 AM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> T^W = 0 (lies in the wedgie)

Correct!

> (T-J)^W* = 0 (where W* is the Hodge dual, i.e. orthogonal to the wedgie)

Not sure where this floated in from, or even what it means. The Hodge dual is the dual, except you flip the angle braces around. That is, say in the 11-limit you take the dual of <<stuff|| and get |||dual stuff>>>, then the Hodge dual is <<<dual stuff|||.

> ||T-J|| = ||J^W||/||W|| (magnitude of the error map)

Correct, except that ||T-J|| would mean you divide by the dimension. The usual procedure is not to do this and to multiply the result by 1200 to get a result in cents.

You've got me writing code in two different ways now, once with wedgies and once with matrices. It makes for an interesting comparison and I hope for a good check.

🔗Graham Breed <gbreed@gmail.com>

5/3/2012 9:28:31 AM

On 29/04/2012, Mike Battaglia <battaglia01@gmail.com> wrote:
> On Sat, Apr 28, 2012 at 10:53 AM, genewardsmith
> <genewardsmith@sbcglobal.net> wrote:

>> TE error = ||J^W||/||W||
>>
>> Here J is the JIP, and W is a wedgie.
>
> Aha! Very clever. So if W is a bivector, the volume of the
> parallelepiped formed by J^W, divided by the volume of the
> parallelogram formed by W, gives you the length of the "altitude" of
> the parallelepiped (assuming that W is the "base"). And this altitude
> is exactly the L2 distance of the vector normal to the base which
> intersects the JIP.

It could also be ||J^W||/||W||/||J|| because ||J|| is a constant.
This defines the angle between J and W (whatever they are).

Graham

🔗Mike Battaglia <battaglia01@gmail.com>

5/3/2012 9:41:55 AM

On Sun, Apr 29, 2012 at 1:17 PM, genewardsmith <genewardsmith@sbcglobal.net>
wrote:
>
> > (T-J)^W* = 0 (where W* is the Hodge dual, i.e. orthogonal to the wedgie)
>
> Not sure where this floated in from, or even what it means. The Hodge dual
> is the dual, except you flip the angle braces around. That is, say in the
> 11-limit you take the dual of <<stuff|| and get |||dual stuff>>>, then the
> Hodge dual is <<dual stuff||.

Right. The Hodge dual of a multivector is an orthogonal multivector,
right? So the expression means that T-J lies in the Hodge dual of the
wedgie, which means that T-J is orthogonal to the wedgie, which is
exactly what you'd expect of the L2-optimal tuning map.

> > ||T-J|| = ||J^W||/||W|| (magnitude of the error map)
>
> Correct, except that ||T-J|| would mean you divide by the dimension. The
> usual procedure is not to do this and to multiply the result by 1200 to get
> a result in cents.

What do you mean by divide by the dimension here?

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

5/3/2012 9:43:20 AM

On Thu, May 3, 2012 at 12:28 PM, Graham Breed <gbreed@gmail.com> wrote:
>
> It could also be ||J^W||/||W||/||J|| because ||J|| is a constant.
> This defines the angle between J and W (whatever they are).
>
> Graham

So that bottom ||J|| is going to equal sqrt(r), where r is the rank of
the JI system you're starting with, right?

-Mike

🔗Graham Breed <gbreed@gmail.com>

5/3/2012 9:49:17 AM

On 28/04/2012, genewardsmith <genewardsmith@sbcglobal.net> wrote:
>
>
> --- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
>> What's an example of something which is easiest with wedgies?
>
> Putting a multival into the canonical form of a wedgie is much easier than
> putting a mapping matrix into the canonical form of Hermite normal form. I'm
> not even sure how I would do all the stuff I recently did with Fokker blocks
> not using wedgies, though I am sure there is a way. Defining TE complexity,
> TE relative error/simple badness, TE error, TE badness is easier and makes
> more sense with wedgies, and I'm not sure how you even do the TOP analogs
> Paul likes without wedgies. You can use tuning maps or Frobenius stuff for
> testing if a monzo or val is associated to a temperament or not, etc, but I
> find wedgies easiest myself. This is all off the top of my head. The point
> is, you can pretty much do everything with tuning maps or Frobenius
> projection matrices also, if you prefer, but wedgies are nice for some
> things.

Finding Hermite normal form may be NP, I'm not sure. Wedgies are
polynomial. But the wedgie is bigger. You could take the independent
elements of the wedgie but you can define the same things using matrix
operations. It's the reduced row echelon form multiplied through by
common factors.

TE complexity is dead simple with matrices. It's an absolute
determinant. The relative error is an orthogonalization. It is
simpler with wedgies because orthogonalization is what the wedge
product does. Matrices allow it to be generalized to give Cangwu
badness. I can't get that to work with pure wedgies.
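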

TE error is relative error/complexity whatever the formalism.

There are some minimax complexity measures that need wedgies to go
beyond rank 2. They seem to work but I don't know the theory behind
the geometry to make them do so. Maybe they'd also work using reduced
row echelon form multiplied by the GCD of the lowest common
denominators -- so it isn't a wedgie unless you call it a wedgie.

Detecting rank 2 contorsion is easier using the simple function to
generate the wedgie. Finding a contorsion free basis requires
matrices, and currently I use Hermite normal form, so that earns its
keep. In terms of simplicity of code there are cases where I need
matrices (including HNF and the inverse) but no cases where I need
wedgies, so there's no need to implement the latter. Outside TE (or
some kind of Euclidean) you may need wedgies because we don't have any
better ideas. If you want to work with only wedgies you can go a long
way. It should be possible to replace any matrix formula with
exterior algebra. I think there's a theorem that says so.

Graham

🔗Graham Breed <gbreed@gmail.com>

5/3/2012 9:53:46 AM

On 04/05/2012, Mike Battaglia <battaglia01@gmail.com> wrote:
> On Thu, May 3, 2012 at 12:28 PM, Graham Breed <gbreed@gmail.com> wrote:
>>
>> It could also be ||J^W||/||W||/||J|| because ||J|| is a constant.
>> This defines the angle between J and W (whatever they are).
>>
>> Graham
>
> So that bottom ||J|| is going to equal sqrt(r), where r is the rank of
> the JI system you're starting with, right?

Yes, if you define W and J to be dimensionless. The formula is such
that they could be in cents/octave and the units cancel out.

Graham

🔗Mike Battaglia <battaglia01@gmail.com>

5/4/2012 12:40:19 AM

First paragraph has me stumped

On Thu, May 3, 2012 at 12:49 PM, Graham Breed <gbreed@gmail.com> wrote:
>
> Finding Hermite normal form may be NP, I'm not sure. Wedgies are
> polynomial.

Do you mean factorial? It seems like they'd be in O(N!) or something.

> But the wedgie is bigger.

In what sense is a wedgie bigger? A 5-limit rank-2 matrix in rref form
has 6 coefficients, but a 5-limit rank-2 wedgie has 3.

> You could take the independent
> elements of the wedgie but you can define the same things using matrix
> operations. It's the reduced row echelon form multiplied through by
> common factors.

What is rref multiplied by common factors? Hermite form?

> TE complexity is dead simple with matrices. It's an absolute
> determinant. The relative error is an orthogonalization. It is
> simpler with wedgies because orthogonalization is what the wedge
> product does. Matrices allow it to be generalized to give Cangwu
> badness. I can't get that to work with pure wedgies.

What do you mean by "relative error?" Right below this you say

> TE error is relative error/complexity whatever the formalism.

So "relative error" is TE complexity * TE error, which I note is "TE
simple badness" on the wiki. Did you call this something else in
primerr.pdf?

> There are some minimax complexity measures that need wedgies to go
> beyond rank 2. They seem to work but I don't know the theory behind
> the geometry to make them do so. Maybe they'd also work using reduced
> row echelon form multiplied by the GCD of the lowest common
> denominators -- so it isn't a wedgie unless you call it a wedgie.

Yeah, I still don't get this thing about rref form. I was just talking
about this to Keenan the other day, because I was trying to come up
with an easier alternative to Hermite form that people could end up
doing. We couldn't get to something like Hermite form by just
multiplying the rref form by the lowest common denominator of the
entries of the rref matrix. Sometimes we'd get a lattice that wasn't
saturated, for instance.

-Mike

🔗Graham Breed <gbreed@gmail.com>

5/7/2012 4:23:45 AM

Mike Battaglia <battaglia01@gmail.com> wrote:
> First paragraph has me stumped
>
> On Thu, May 3, 2012 at 12:49 PM, Graham Breed
> <gbreed@gmail.com> wrote:
> >
> > Finding Hermite normal form may be NP, I'm not sure.
> > Wedgies are polynomial.
>
> Do you mean factorial? It seems like they'd be in O(N!)
> or something.

They may well be. I don't know. What I mean is
nondeterministic polynomial time: it should be possible to
verify that a matrix is in HNF in polynomial time, but it's
harder to reduce an arbitrary matrix to that form.

> > But the wedgie is bigger.
>
> In what sense is a wedgie bigger? A 5-limit rank-2 matrix
> in rref form has 6 coefficients, but a 5-limit rank-2
> wedgie has 3.

It's bigger for most cases we're interested in, including
the ones where it's big enough to be worth worrying about.

The matrix in RREF has 6 coefficients, of which one is
guaranteed to be 1 and, depending on the position of the
first 1 in the next row, two other coefficients are
guaranteed to be zero. That gives three independent
coefficients to match the wedgie. They could be given as
rationals with one coefficient defined as 1, or as integers
by multiplying through by the lowest common denominator.

The HNF will be bigger. Generally only one coefficient
is defined to be zero for rank 2, so two wasted
coefficients. The rule for rank 2 is:

HNF: 2n - 1 coefficients
Wedgie: (n**2 - n)/2 coefficients

for n primes. Where n=3, yes, there are 5 for the HNF and
only 3 for the wedgie. For n=4, 7-limit, there are 7 for
the HNF and 6 for the wedgie. For n=7, the 17-limit, there
are 13 for HNF and 21 for the wedgie.

> > You could take the independent
> > elements of the wedgie but you can define the same
> > things using matrix operations. It's the reduced row
> > echelon form multiplied through by common factors.
>
> What is rref multiplied by common factors? Hermite form?

No, Hermite form is guaranteed to preserve contorsion.
RREF ignores it. RREF is defined such that certain entries
are 1 and others are 0. Other entries are rational. If
you multiply through by their lowest common denominator,
you get something very like the wedgie. It's simpler than
the HNF because you know that certain entries are zero and
so can be thrown away.
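
A short sympy sketch of that recipe, using 5-limit porcupine since its RREF actually has fractional entries (the two vals are an arbitrary choice):

import sympy as sp
from functools import reduce

# Two vals supporting 5-limit porcupine.
M = sp.Matrix([[15, 24, 35],
               [22, 35, 51]])

R, _ = M.rref()
print(R)    # Matrix([[1, 0, -1/3], [0, 1, 5/3]])

# Clear denominators row by row: multiply each row by the LCM of its denominators.
rows = []
for i in range(R.rows):
    lcd = reduce(sp.lcm, [sp.fraction(x)[1] for x in R.row(i)])
    rows.append(R.row(i) * lcd)
print(sp.Matrix.vstack(*rows))    # Matrix([[3, 0, -1], [0, 3, 5]])

The wedge product of those two vals is <<3 5 1|| (porcupine) up to sign, so the cleared-denominator rows and the wedgie carry the same independent numbers here, whereas the HNF, as noted above, also keeps track of contorsion.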

> > TE complexity is dead simple with matrices. It's an
> > absolute determinant. The relative error is an
> > orthogonalization. It is simpler with wedgies because
> > orthogonalization is what the wedge product does.
> > Matrices allow it to be generalized to give Cangwu
> > badness. I can't get that to work with pure wedgies.
>
> What do you mean by "relative error?" Right below this
> you say
>
> > TE error is relative error/complexity whatever the
> > formalism.
>
> So "relative error" is TE complexity * TE error, which I
> note is "TE simple badness" on the wiki. Did you call
> this something else in primerr.pdf?

Yes. I called it simple badness before.

> > There are some minimax complexity measures that need
> > wedgies to go beyond rank 2. They seem to work but I
> > don't know the theory behind the geometry to make them
> > do so. Maybe they'd also work using reduced row echelon
> > form multiplied by the GCD of the lowest common
> > denominators -- so it isn't a wedgie unless you call it
> > a wedgie.
>
> Yeah, I still don't get this thing about rref form. I was
> just talking about this to Keenan the other day, because
> I was trying to come up with an easier alternative to
> Hermite form that people could end up doing. We couldn't
> get to something like Hermite form by just multiplying
> the rref form by the lowest common denominator of the
> entries of the rref matrix. Sometimes we'd get a lattice
> that wasn't saturated, for instance.

That's right. It has the same expressiveness as the
wedgie, but with the minimal number of coefficients.

You can probably take a subset of the wedgie as well. I
don't know how that works for ranks beyond 2.

Graham

🔗Graham Breed <gbreed@gmail.com>

5/7/2012 4:29:54 AM

Mike Battaglia <battaglia01@gmail.com> wrote:
> First paragraph has me stumped
>
> On Thu, May 3, 2012 at 12:49 PM, Graham Breed
> <gbreed@gmail.com> wrote:
> >
> > Finding Hermite normal form may be NP, I'm not sure.
> > Wedgies are polynomial.
>
> Do you mean factorial? It seems like they'd be in O(N!)
> or something.

Oh, you mean wedgies are O(N!)? I think it's O(n**r) for n
primes and rank r.

Graham