
Lenstra, Lenstra and Lovász

Graham Breed <gbreed@gmail.com>

4/8/2008 6:47:45 AM

I've been looking at LLL (Lenstra-Lenstra-Lovász) lattice reduction again. I tried using LLL reduction with parametric badness before and it didn't work. I've now found the bug in my code, so it sort of works now.

Here's a reference for the algorithm:

http://www.farcaster.com/papers/sm-thesis/node7.html

I think it should work for scalar complexity, which is the Euclidean distance on a lattice of mappings (vals), with the distances governed by the weighting. This is a special case of parametric badness.
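As a concrete reading of that, here's a minimal sketch assuming Tenney-style weighting, where each entry of the mapping is divided by log2 of its prime (the exact weighting is my assumption):

```python
import math

primes = [2, 3, 5]                             # 5-limit, for illustration
weights = [1 / math.log2(p) for p in primes]   # assumed Tenney weighting

def complexity(val):
    """Weighted Euclidean length of a mapping (val)."""
    return math.sqrt(sum((w * v) ** 2 for w, v in zip(weights, val)))

print(complexity([12, 19, 28]))   # the 12-equal mapping
```

Under this weighting every entry of a good equal temperament's mapping lands near the same number (the number of steps to the octave), which is why the weighted length behaves like a complexity.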

I don't know whether parametric badness in general is the kind of norm that LLL works for. I do know that, except for simple scalar badness (when the parameter is zero), it comes from an inner product, in the sense of inner product spaces, which is what matters.
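For concreteness, here is one parametric family with exactly those properties (the specific form and the Tenney-style weighting are my assumptions, not necessarily the badness meant here): with weighted entries w_i, take B(v, eps)^2 = sum((w_i - mean(w))^2) + eps^2 * sum(w_i^2). At eps = 0 it vanishes on any vector whose weighted image is constant, so it is only a seminorm; for eps != 0 it is positive definite and so comes from a genuine inner product.

```python
import math

primes = [2, 3, 5]
weights = [1 / math.log2(p) for p in primes]   # assumed Tenney weighting

def badness(val, eps):
    """Sketch of a parametric badness: a deviation term (scalar badness)
    plus eps^2 times squared complexity.  Assumed form, not necessarily
    the exact function discussed here."""
    w = [wi * vi for wi, vi in zip(weights, val)]
    mean = sum(w) / len(w)
    dev = sum((x - mean) ** 2 for x in w)      # scalar-badness part
    comp = sum(x * x for x in w)               # squared complexity
    return math.sqrt(dev + eps * eps * comp)

print(badness([12, 19, 28], 0.0))   # scalar badness of the 12-equal mapping
print(badness([12, 19, 28], 1.0))   # parameter large: complexity dominates
```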

To find equal temperaments, you seed the algorithm with prime intervals. The shortest vectors in the lattice then correspond to the mappings of the best equal temperaments.
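That procedure can be sketched end to end: plain LLL over an arbitrary inner product, seeded with one unit val per prime, with the quadratic form being a guessed parametric badness (deviation from the mean weighted entry plus eps^2 times squared weighted length, under Tenney-style weighting). All of those details are my assumptions, not necessarily the exact setup described here.

```python
import math

primes = [2, 3, 5]
weights = [1 / math.log2(p) for p in primes]   # assumed Tenney weighting
eps = 0.01                                     # assumed badness parameter

def inner(u, v):
    """Symmetric bilinear form for the assumed parametric badness."""
    wu = [w * x for w, x in zip(weights, u)]
    wv = [w * x for w, x in zip(weights, v)]
    n = len(wu)
    dot = sum(a * b for a, b in zip(wu, wv))
    return (1 + eps * eps) * dot - n * (sum(wu) / n) * (sum(wv) / n)

def norm(v):
    return math.sqrt(inner(v, v))

def lll(basis, ip, delta=0.75):
    """Textbook LLL reduction under an arbitrary inner product `ip`.
    Recomputes Gram-Schmidt from scratch each pass: slow but simple."""
    basis = [list(b) for b in basis]
    n = len(basis)

    def gram_schmidt():
        gs, mu, B = [], [[0.0] * n for _ in range(n)], [0.0] * n
        for i in range(n):
            v = [float(x) for x in basis[i]]
            for j in range(i):
                mu[i][j] = ip(basis[i], gs[j]) / B[j]
                v = [x - mu[i][j] * y for x, y in zip(v, gs[j])]
            gs.append(v)
            B[i] = ip(v, v)
        return mu, B

    k = 1
    while k < n:
        mu, B = gram_schmidt()
        for j in range(k - 1, -1, -1):          # size reduction
            q = round(mu[k][j])
            if q:
                basis[k] = [x - q * y for x, y in zip(basis[k], basis[j])]
                mu, B = gram_schmidt()
        if B[k] >= (delta - mu[k][k - 1] ** 2) * B[k - 1]:
            k += 1                              # Lovász condition holds
        else:
            basis[k - 1], basis[k] = basis[k], basis[k - 1]
            k = max(k - 1, 1)
    return basis

# Seed with one unit val per prime; integer combinations of these
# are all possible mappings, so the lattice is Z^3 under the form above.
seeds = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
reduced = lll(seeds, inner)
for v in reduced:
    print(v, round(norm(v), 4))
```

The reduced basis vectors are the short vals under this form, which is where the candidate equal-temperament mappings come from.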

The results I get are that the LLL-reduced mappings are the best equal temperaments for the given badness function at lower prime limits. But when I go to the 13-limit, some less good (but not completely stupid) mappings come out. I don't know whether this is because my code's wrong, because LLL doesn't properly apply to this norm, or because LLL isn't close enough to the true lattice reduction.

For proving completeness of searches, LLL reduction is good enough: it places strict bounds on the sizes of the vectors that come out. What we need is to relate those sizes to the parametric badness we're looking for, so that we can say a rank 2 temperament with a given parametric badness must be produced by a pair of equal temperaments within a certain parametric badness. Then you can search for equal temperaments within that badness, which (for parameters other than zero) is a finite search, if not an efficient one.
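To illustrate why a nonzero parameter makes the search finite: with eps > 0 the badness of an n-note equal temperament is at least eps * n, because the octave entry alone (which is n, with weight 1) contributes that much to the weighted length. So only finitely many divisions can fall under any cutoff. A naive sketch, restricted to nearest-rounding (patent) vals for simplicity, with the badness form, weighting and cutoff all assumed:

```python
import math

primes = [2, 3, 5]
weights = [1 / math.log2(p) for p in primes]   # assumed Tenney weighting
eps = 0.05                                     # assumed badness parameter

def badness(val):
    w = [wi * vi for wi, vi in zip(weights, val)]
    mean = sum(w) / len(w)
    return math.sqrt(sum((x - mean) ** 2 for x in w)
                     + eps * eps * sum(x * x for x in w))

# The octave entry of an n-note equal temperament is n with weight 1,
# so B(v) >= eps * n: divisions above cutoff / eps cannot qualify.
cutoff = 0.6
results = []
for n in range(1, math.ceil(cutoff / eps) + 1):
    val = [round(n * math.log2(p)) for p in primes]   # patent val only
    if badness(val) <= cutoff:
        results.append((val, round(badness(val), 3)))

for val, b in sorted(results, key=lambda t: t[1]):
    print(val, b)
```

Restricting to patent vals is a real simplification: a complete search would also try the neighbouring roundings for each prime, but the finiteness argument is the same.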

Graham