
tiptop.py constraint matrices

Mike Battaglia <battaglia01@gmail.com>

9/10/2012 6:13:54 AM

To Keenan and others tuned into this: I spent some time today going
over your tiptop.py code. I get what's going on at a high level, and
what you're doing with "constraint matrices", which look basically
like V-maps you've chosen to help find the generator tuning maps for
all the tunings that split the error evenly among r primes, where r is
the rank of the temperament. Then you pick the generator maps with the
lowest max weighted prime error, as evaluated on the original mapping
matrix, and these serve as vertices of the TOP tuning polytope.

After this, you do some magic involving A_back and b_back that I
couldn't figure out at all. What's going on here? I'm talking about
this chunk of code:

# Used to get "back" to the solution x of the original
# problem at the end
A_back = numpy.eye(A.shape[1])  # identity matrix of size equal to the rank of the mapping (number of generators)
b_back = numpy.zeros([A.shape[1], 1])  # zero column vector

while len(xs) > 1:  # keep doing this until we only have one candidate

    b = b - A * xs[0]  # error map
    X = numpy.hstack([xs[i] - xs[0] for i in range(1, len(xs))])  # all generator maps
    A = A * X  # all tuning maps

    b_back = A_back * xs[0] + b_back  # make b_back equal to ...
    A_back = A_back * X

    xs = candidates(A, b, tol, num_already)
    num_already += A.shape[1] + 1
What the heck is this?

-Mike

Keenan Pepper <keenanpepper@gmail.com>

9/14/2012 2:11:58 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
> What the heck is this?

Okay, so you try out all these tunings that might minimize the Linf norm, and some subset of them actually do; those are the "candidates", which in the first step are the vertices of the TOP polytope. The TIP-TOP tuning is always in the affine subspace spanned by the candidates. In the second iteration, we want to do the same thing again, but we're constrained to this affine subspace, and now we're trying to minimize the second-greatest error rather than the greatest error. In the third iteration we're restricted to a smaller affine subspace and are trying to minimize the third-greatest error, and so on, until there is only one candidate tuning left (the TIP-TOP).
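
In other words, the criterion being converged to is lexicographic: sort each candidate's absolute weighted errors from largest to smallest and compare entry by entry. A little sketch of that comparison (just to illustrate the criterion, not how tiptop.py actually computes it):

import numpy

def sorted_errors(A, b, x):
    # absolute residuals |Ax - b|, largest first
    return numpy.sort(numpy.abs(A @ x - b))[::-1]

def better_candidate(A, b, x1, x2):
    # smaller largest error wins; if tied, smaller second-largest error
    # wins; and so on down the sorted list
    e1, e2 = sorted_errors(A, b, x1), sorted_errors(A, b, x2)
    for u, v in zip(e1, e2):
        if not numpy.isclose(u, v):
            return x1 if u < v else x2
    return x1  # identical error profiles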

The "candidates" function just finds the best approximate solutions to Ax=b, where "best" means the Linf norm of Ax-b is minimized. We want to reuse this function for all the iterations, so for all the later iterations we're giving it a transformed version of the problem. Look at the return value of the "tiptop" function - it's (A_back * xs[0] + b_back), so that's the real tuning vector x we want (not xs[0] itself).

Note that the columns of the matrix capital X span the linear subspace parallel to the affine subspace we're restricting ourselves to. So by updating A to A*X and b to b - A*xs[0], we're giving the "candidates" function a new version of the problem that is restricted to that affine subspace; the only problem is that the xs it returns are in these funny coordinates now. A_back and b_back just keep track of the affine transformation needed to convert those xs back into the standard coordinates. You need a new affine transformation for each iteration, but they're accumulated into the single affine transformation (A_back, b_back) so you don't have to keep a big list of A_backs and b_backs.
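
Here's a quick numerical check of that bookkeeping (a standalone sketch, not code from tiptop.py): if a point y in the current coordinates is written as xs[0] + X*z in the next iteration's coordinates, then plugging that into A_back*y + b_back gives exactly the two update lines above.

import numpy

rng = numpy.random.default_rng(0)
r, k = 3, 2                              # original generators, subspace dimension

A_back = rng.normal(size=(r, r))         # accumulated linear part
b_back = rng.normal(size=(r, 1))         # accumulated offset
X = rng.normal(size=(r, k))              # columns are xs[i] - xs[0]
x0 = rng.normal(size=(r, 1))             # plays the role of xs[0]
z = rng.normal(size=(k, 1))              # a point in the new coordinates

lhs = A_back @ (x0 + X @ z) + b_back               # convert y, then map back
rhs = (A_back @ X) @ z + (A_back @ x0 + b_back)    # updated A_back, b_back
print(numpy.allclose(lhs, rhs))                    # True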

Keenan