I've been finding the LLL lattice basis reduction algorithm very

useful for finding a basis for a planar temperament, and it probably

would be even more useful for higher dimensions.

The notation [h19,h22,h27,h31] is a useful one for looking at some of

these, since [h19,h22,h27,h31]^(-1) =

<1728/1715,126/125,225/224,245/243>. This gives us four planar

temperaments, each of which is interesting.
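[Ed.: the inverse relationship above can be checked directly. Writing the four vals as columns of a 4x4 matrix over the primes 2, 3, 5, 7, the rows of its inverse are exactly the monzos (prime-exponent vectors) of the four commas. A quick sketch in plain Python; h27 = [27, 43, 63, 76] is filled in here as the patent val, since the post only spells out the other three columns:]

```python
# Vals as columns over primes 2, 3, 5, 7.  h27 = [27, 43, 63, 76] is
# an assumption (the patent val); h19, h22, h31 are from the post.
vals = [
    [19, 22, 27, 31],
    [30, 35, 43, 49],
    [44, 51, 63, 72],
    [53, 62, 76, 87],
]

# Monzos of the four commas, in the order given by the post.
commas = {
    "1728/1715": [6, 3, -1, -3],
    "126/125":   [1, 2, -3, 1],
    "225/224":   [-5, 2, 2, -1],
    "245/243":   [0, -5, 1, 2],
}

# If these monzos really are the rows of [h19,h22,h27,h31]^(-1), then
# monzo-row i times val-column j must give the identity matrix.
rows = list(commas.values())
product = [[sum(rows[i][k] * vals[k][j] for k in range(4)) for j in range(4)]
           for i in range(4)]
identity = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
assert product == identity
print("rows of the inverse are the monzos of", list(commas))
```

In particular this is why each et "sets its own comma to a unison and the others to one step": the (i,j) entry is val j applied to comma i.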

The above means, for instance, that the 19, 22, and 31 ets each set

225/224 to a unison, whereas h27(225/224)=1, a single step. To get a

basis for the 225/224 planar temperament, we therefore eliminate h27.

We now may perform lattice basis reduction on the columns of the

remaining three vals, getting:

[19 22 31]       [ 0  1  0]

[30 35 49]       [-1  1 -1]

[44 51 72]  ==>  [ 1  1  0]

[53 62 87]       [ 0 -1 -2]
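[Ed.: one way to check a reduction like this without an LLL routine: the new columns must be integer combinations of the old ones via a unimodular (determinant +/-1) matrix. The transformation T below is my own computation, found by solving B = A*T over the rationals, not something stated in the post:]

```python
# Columns of A are the vals h19, h22, h31; columns of B are the
# reduced columns from the post.
A = [[19, 22, 31],
     [30, 35, 49],
     [44, 51, 72],
     [53, 62, 87]]
B = [[ 0,  1,  0],
     [-1,  1, -1],
     [ 1,  1,  0],
     [ 0, -1, -2]]

# T solves B = A*T (my hand computation); for a valid change of
# basis it must be integer with determinant +/-1.
T = [[-4, 11,  3],
     [-5, -1, -4],
     [ 6, -6,  1]]

AT = [[sum(A[i][k] * T[k][j] for k in range(3)) for j in range(3)]
      for i in range(4)]
assert AT == B

det = (T[0][0] * (T[1][1] * T[2][2] - T[1][2] * T[2][1])
     - T[0][1] * (T[1][0] * T[2][2] - T[1][2] * T[2][0])
     + T[0][2] * (T[1][0] * T[2][1] - T[1][1] * T[2][0]))
assert abs(det) == 1  # unimodular: same lattice, new basis
print("B = A*T with det(T) =", det)
```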

Similar calculations with the other commas give us the following:

1728/1715: basis 6,1/2,12/7 equivalent to 2,3/2,7/6

126/125: basis 6/25,5,5/3 equivalent to 2,5,5/3 or 2,3,5/3

245/243: basis 1/2,9/7,3 equivalent to 2,3/2,9/7
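[Ed.: the "equivalent to" claims just say that the two generator sets generate the same multiplicative group. A quick spot-check with exact fractions; the particular products used below are my own working, not from the post:]

```python
from fractions import Fraction as F

# 1728/1715 temperament: {6, 1/2, 12/7} vs {2, 3/2, 7/6}
assert F(1) / F(1, 2) == 2                  # 2   = (1/2)^-1
assert F(6) * F(1, 2) * F(1, 2) == F(3, 2)  # 3/2 = 6*(1/2)^2
assert F(2) / F(12, 7) == F(7, 6)           # 7/6 = 2/(12/7)

# 126/125 temperament: {6/25, 5, 5/3} vs {2, 5, 5/3} vs {2, 3, 5/3}
assert F(6, 25) * F(5) * F(5, 3) == 2       # 2 = (6/25)*5*(5/3)
assert F(5) / F(5, 3) == 3                  # 3 = 5/(5/3)

# 245/243 temperament: {1/2, 9/7, 3} vs {2, 3/2, 9/7}
assert F(1) / F(1, 2) == 2
assert F(3) * F(1, 2) == F(3, 2)
```

Each target generator is a product of integer powers of the source generators and vice versa, so the groups coincide.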

The LLL lattice basis reduction algorithm is in Maple, which may mean

it is in Matlab also.

We keep the top row, since we want octave equivalence, and by

preference the second row, since we like 3. We then choose which of

the last two rows will give us a unimodular 3x3 matrix; in this case

it is the third row, so 2, 3, and 5 are the remaining primes. We then

invert:

[ 0  1  0]^(-1)     [-1  0  1]

[-1  1 -1]       =  [ 1  0  0]

[ 1  1  0]          [ 2 -1 -1]

The three rows of the inverted matrix represent 5/2, 2, and 4/15; a

minor adjustment gives us the equivalent basis of 2,5/4, and 16/15,

which is the Miracle-Magic basis.
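[Ed.: this last step can be reproduced mechanically: invert the unimodular matrix and read each row of the inverse as exponents of 2, 3, 5. A small sketch with exact fractions; the inversion is a hand-rolled adjugate formula, nothing from the post:]

```python
from fractions import Fraction as F

M = [[ 0, 1,  0],
     [-1, 1, -1],
     [ 1, 1,  0]]

def inv3(m):
    """Inverse of a 3x3 integer matrix via the adjugate formula."""
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    # cyclic-index trick: builds the signed cofactors without
    # explicit sign bookkeeping
    cof = [[m[(i+1) % 3][(j+1) % 3] * m[(i+2) % 3][(j+2) % 3]
          - m[(i+1) % 3][(j+2) % 3] * m[(i+2) % 3][(j+1) % 3]
            for j in range(3)] for i in range(3)]
    # inverse = adjugate / det = transpose(cofactors) / det
    return [[F(cof[j][i], det) for j in range(3)] for i in range(3)]

inv = inv3(M)
assert all(x.denominator == 1 for row in inv for x in row)  # unimodular

# read each row as exponents of (2, 3, 5)
ratios = [F(2)**int(r[0]) * F(3)**int(r[1]) * F(5)**int(r[2]) for r in inv]
assert ratios == [F(5, 2), F(2), F(4, 15)]

# the "minor adjustment": octave-shift 5/2 and 4/15 into 5/4 and 16/15
assert F(5, 2) / 2 == F(5, 4) and F(4, 15) * 4 == F(16, 15)
print(ratios)
```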

--- In tuning-math@y..., genewardsmith@j... wrote:

> I've been finding the LLL lattice basis reduction algorithm very

> useful for finding a basis for a planar temperament, and it probably

> would be even more useful for higher dimensions.

>

> The notation [h19,h22,h27,h31] is a useful one for looking at some of

> these, since [h19,h22,h27,h31]^(-1) =

> <1728/1715,126/125,225/224,245/243>. This gives us four planar

> temperaments, each of which is interesting.

>

> The above means, for instance, that the 19, 22, and 31 ets each set

> 225/224 to a unison, whereas h27(225/224)=1, a single step. To get a

> basis for the 225/224 planar temperament, we therefore eliminate h27.

> We now may perform lattice basis reduction on the columns of the

> remaining three vals, getting:

>

> [19 22 31]       [ 0  1  0]

> [30 35 49]       [-1  1 -1]

> [44 51 72]  ==>  [ 1  1  0]

> [53 62 87]       [ 0 -1 -2]

>

> Similar calculations with the other commas give us the following:

>

> 1728/1715: basis 6,1/2,12/7 equivalent to 2,3/2,7/6

>

> 126/125: basis 6/25,5,5/3 equivalent to 2,5,5/3 or 2,3,5/3

>

> 245/243: basis 1/2,9/7,3 equivalent to 2,3/2,9/7

Not sure what this is telling me.

>

> The LLL lattice basis reduction algorithm is in Maple, which may mean

> it is in Matlab also.

I don't see it:

MAERESIZE Change the size of a matrix to be [m n].

mldivide.m: %\ Backslash or left matrix divide.

mpower.m: %^ Matrix power.

mrdivide.m: %/ Slash or right matrix divide.

mtimes.m: %* Matrix multiply.

slash.m: %Matrix division.

COMPAN Companion matrix.

DIAG Diagonal matrices and diagonals of a matrix.

EYE Identity matrix.

FLIPDIM Flip matrix along specified dimension.

FLIPLR Flip matrix in left/right direction.

FLIPUD Flip matrix in up/down direction.

HADAMARD Hadamard matrix.

HANKEL Hankel matrix.

HILB Hilbert matrix.

INVHILB Inverse Hilbert matrix.

ISEMPTY True for empty matrix.

PASCAL Pascal matrix.

ROT90 Rotate matrix 90 degrees.

SIZE Size of matrix.

TOEPLITZ Toeplitz matrix.

VANDER Vandermonde matrix.

WILKINSON Wilkinson's eigenvalue test matrix.

EXPM Matrix exponential.

EXPM1 Matrix exponential via Pade approximation.

EXPM2 Matrix exponential via Taylor series.

EXPM3 Matrix exponential via eigenvalues and eigenvectors.

FUNM Evaluate general matrix function.

INV Matrix inverse.

LOGM Matrix logarithm.

NORM Matrix or vector norm.

NORMEST Estimate the matrix 2-norm.

RANK Matrix rank.

SQRTM Matrix square root.

COV Covariance matrix.

POLYVALM Evaluate polynomial with matrix argument.

FULL Convert sparse matrix to full matrix.

ISSPARSE True for sparse matrix.

NNZ Number of nonzero matrix elements.

NONZEROS Nonzero matrix elements.

NZMAX Amount of storage allocated for nonzero matrix elements.

SPALLOC Allocate space for sparse matrix.

SPARSE Create sparse matrix.

SPCONVERT Import from sparse matrix external format.

SPDIAGS Sparse matrix formed from diagonals.

SPEYE Sparse identity matrix.

SPFUN Apply function to nonzero matrix elements.

SPONES Replace nonzero sparse matrix elements with ones.

SPPARMS Set parameters for sparse matrix routines.

SPRAND Sparse uniformly distributed random matrix.

SPRANDN Sparse normally distributed random matrix.

SPRANDSYM Sparse random symmetric matrix.

UNMESH Convert a list of bedges to a graph or matrix.

VIEWMTX View transformation matrix.

PLOTMATRIX Scatter plot matrix.

MATQUEUE Creates and manipulates a figure-based matrix queue.

TEXTWRAP Return wrapped string matrix for given UI Control.

MAT2STR Convert matrix to eval'able string.

STR2MAT Form blank padded character matrix from strings.

STR2NUM Convert string matrix to numeric array.

AIRFOIL Display sparse matrix from NASA airfoil.

FEM1ODE Stiff problem with a time-dependent mass matrix, M(t)*y' = f(t,y).

FEM2ODE Stiff problem with a constant mass matrix, M*y' = f(t,y).

MATDEMS For setting up matrix computation demos from the MATLAB DEMO.

PLTMAT Display a matrix in a figure window.

SPIRAL SPIRAL(n) is an n-by-n matrix with elements ranging

CASEWRITE Writes casenames from a string matrix to a file.

PCACOV Principal Component Analysis using the covariance matrix.

SQUAREFORM Square matrix formatted distance.

X2FX Factor settings matrix (x) to design matrix (fx).

ATAMULT Example Jacobian-matrix multiply

FINDMAX2 Interpolates the maxima in a matrix of data.

HMULT Hessian-matrix product

normal.m: % NORMAL.M - function to normalize a matrix.

standardize.m: % STANDARDIZE.M - function to standardize a matrix.

DISP3D Display 3D matrix

ELEM3D Element positions of 3-D matrix packed in a 2-D matrix.

FIND3D Return position of non-zero elements in 3-D matrix.

NDX3D Index into 3-D matrix packed in a 2-D matrix.

SIZE3D Size of 2-D matrix to hold 3-D matrix.

CAUCHY Cauchy matrix.

CHEBSPEC Chebyshev spectral differentiation matrix.

CHEBVAND Vandermonde-like matrix for the Chebyshev polynomials.

CHOW Chow matrix (singular Toeplitz lower Hessenberg matrix).

CIRCUL Circulant matrix.

CLEMENT Clement matrix.

CONDEX "Counter-examples" to matrix condition number estimators.

CYCOL Matrix whose columns repeat cyclically.

DORR Dorr matrix.

DRAMADAH Matrix of zeros and ones whose inverse has large integer entries.

FIEDLER Fiedler matrix.

FORSYTHE Forsythe matrix (perturbed Jordan block).

FRANK Frank matrix.

GEARMAT Gear matrix.

GRCAR Grcar matrix.

HANOWA Matrix whose eigenvalues lie on a vertical line.

HOUSE Householder matrix.

INVHESS Inverse of an upper Hessenberg matrix.

INVOL Involutory matrix.

IPJFACT Hankel matrix with factorial elements.

KAHAN Kahan matrix.

KMS Kac-Murdock-Szego Toeplitz matrix.

KRYLOV Krylov matrix.

LAUCHLI Lauchli matrix.

LEHMER Lehmer matrix.

LESP Tridiagonal matrix with real, sensitive eigenvalues.

LOTKIN Lotkin matrix.

MINIJ Symmetric positive definite matrix MIN(i,j).

MOLER Moler matrix (symmetric positive definite).

NEUMANN Singular matrix from the discrete Neumann problem.

PARTER Parter matrix (Toeplitz with singular values near pi).

PEI Pei matrix.

POISSON Block tridiagonal matrix from Poisson's equation.

PROLATE Prolate matrix (symmetric, ill-conditioned Toeplitz matrix).

RANDHESS Random, orthogonal upper Hessenberg matrix.

RANDO Random matrix with elements -1, 0 or 1.

RANDSVD Random matrix with pre-assigned singular values.

REDHEFF Redheffer matrix.

RIEMANN Matrix associated with the Riemann hypothesis.

RIS Symmetric Hankel matrix.

SMOKE Complex matrix with a "smoke ring" pseudospectrum.

TOEPPD Symmetric positive definite Toeplitz matrix.

TOEPPEN Pentadiagonal Toeplitz matrix.

TRIDIAG Tridiagonal matrix (sparse).

TRIW Upper triangular matrix discussed by Wilkinson and others.

WATHEN Wathen matrix.

ITERAPP Apply matrix operator to vector and error gracefully.

FINDP Nonsingular basis permutation matrix.

> We keep the top row, since we want octave equivalence, and by

> preference the second row, since we like 3. We then choose which of

> the last two rows will give us a unimodular 3x3 matrix; in this case

> it is the third row, so 2, 3, and 5 are the remaining primes. We then

> invert:

>

> [ 0  1  0]^(-1)     [-1  0  1]

> [-1  1 -1]       =  [ 1  0  0]

> [ 1  1  0]          [ 2 -1 -1]

>

> The three rows of the inverted matrix represent 5/2, 2, and 4/15; a

> minor adjustment gives us the equivalent basis of 2,5/4, and 16/15,

> which is the Miracle-Magic basis.

Wish I followed this post.

--- In tuning-math@y..., "Paul Erlich" <paul@s...> wrote:

> > The LLL lattice basis reduction algorithm is in Maple, which may mean

> > it is in Matlab also.

> I don't see it:

In Maple the function is called "lattice" and requires a

"readlib(lattice)" before it can be used; however it's clear that

Matlab mostly has different names. Since it had a lot of

functionality before the Maple addition, that's to be expected. Does

it have a help function? Can you put in "lattice", "lattice

basis", "lattice basis reduction", or "LLL" and see what happens?

Thanks for the entropy post; I'll see if I can finally figure out

what you are doing.

--- In tuning-math@y..., genewardsmith@j... wrote:

> however it's clear that

> Matlab mostly has different names. Since it had a lot of

> functionality before the Maple addition, that's to be expected. Does

> it have a help function? Can you put in "lattice", "lattice

> basis", "lattice basis reduction", or "LLL" and see what happens?

Nothing found.

--- In tuning-math@y..., "Paul Erlich" <paul@s...> wrote:

> Nothing found.

If you are very adventurous, or if by some chance you are on a Linux

box, you could try the freeware mathematicians have cooked up--Pari

and Lydia both have lattice basis reduction. Alas, they come from a

Unix/academic environment, and getting them up and running on a PC is

a pain.

--- In tuning-math@y..., genewardsmith@j... wrote:

> --- In tuning-math@y..., "Paul Erlich" <paul@s...> wrote:

>

> > Nothing found.

>

> If you are very adventurous, or if by some chance you are on a Linux

> box, you could try the freeware mathematicians have cooked up--Pari

> and Lydia both have lattice basis reduction. Alas, they come from a

> Unix/academic environment, and getting them up and running on a PC is

> a pain.

Unfortunately I don't understand what the point of this is. Could you

please explain your original post to me?

--- In tuning-math@y..., "Paul Erlich" <paul@s...> wrote:

> Unfortunately I don't understand what the point of this is. Could you

> please explain your original post to me?

Let's first see if we have the idea of lattice basis reduction. I

randomly picked 225/224, 1029/1024 and 1728/1715 from a list of

7-limit commas. The lattice they generate is

(225/224)^a (1029/1024)^b (1728/1715)^c, which as it happens is the

kernel of the 31-et in the 7-limit. However, there might be

a "better" basis for the kernel, in the sense that the Euclidean

lengths of these, considered as vectors, and the dot products of the

corresponding unit vectors, are smaller. In other words, we

take "better" to mean smaller vectors, more nearly orthogonal. The

idea of lattice basis reduction is to find a "better" basis.

If I apply the LLL algorithm to the basis [-5,2,2,-1], [-10,1,0,3],

and [6,3,-1,-3] I get [1,2,-3,1], or 126/125; [-4,4,-1,0], or 81/80,

and [-1,-5,-1,4], or 2401/2430. If we check the Euclidean length and

the dot products for these, we find that they are, indeed, "better".

Moreover, they still generate the kernel of the 31-et. Considered as

commas for a block, they would give a 31-note block, but one with

more regularly sized steps. Also, they are larger as intervals.

Does this make sense so far?
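[Ed.: a quick check of those claims (the monzo for 126/125 is [1,2,-3,1]): every reduced comma should still be killed by the 31-et val, the vectors should be shorter in Euclidean length, and the intervals themselves larger in cents:]

```python
import math

h31 = [31, 49, 72, 87]
old = [[-5, 2, 2, -1], [-10, 1, 0, 3], [6, 3, -1, -3]]  # 225/224, 1029/1024, 1728/1715
new = [[1, 2, -3, 1], [-4, 4, -1, 0], [-1, -5, -1, 4]]  # 126/125, 81/80, 2401/2430

def cents(monzo):
    # interval size of a monzo, in cents
    return 1200 * sum(e * math.log2(p) for e, p in zip(monzo, (2, 3, 5, 7)))

# All six monzos lie in the kernel of h31 (mapped to 0 steps).
for v in old + new:
    assert sum(a * b for a, b in zip(v, h31)) == 0

# "Better" basis: shorter vectors ...
assert (sum(sum(x * x for x in v) for v in new)
      < sum(sum(x * x for x in v) for v in old))

# ... yet larger intervals (more cents), hence more regular block steps.
assert min(abs(cents(v)) for v in new) > max(abs(cents(v)) for v in old)
```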

--- In tuning-math@y..., genewardsmith@j... wrote:

> --- In tuning-math@y..., "Paul Erlich" <paul@s...> wrote:

>

> > Unfortunately I don't understand what the point of this is. Could you

> > please explain your original post to me?

>

> Let's first see if we have the idea of lattice basis reduction. I

> randomly picked 225/224, 1029/1024 and 1728/1715 from a list of

> 7-limit commas. The lattice they generate is

> (225/224)^a (1029/1024)^b (1728/1715)^c, which as it happens is the

> kernel of the 31-et in the 7-limit. However, there might be

> a "better" basis for the kernel, in the sense that the Euclidean

> lengths of these, considered as vectors, and the dot products of the

> corresponding unit vectors, are smaller. In other words, we

> take "better" to mean smaller vectors, more nearly orthogonal. The

> idea of lattice basis reduction is to find a "better" basis.

Aha! So this is related (as I suspected from its name) to the quest

for "canonical unison vectors" for PBs that I asked you about a while

back. However, I wouldn't compute lengths on a Cartesian or

rectangular grid, which it seems you're doing here . . .

> If I apply the LLL algorithm to the basis [-5,2,2,-1], [-10,1,0,3],

> and [6,3,-1,-3] I get [1,2,-3,1], or 126/125; [-4,4,-1,0], or 81/80,

> and [-1,-5,-1,4], or 2401/2430. If we check the Euclidean length and

> the dot products for these, we find that they are, indeed, "better".

> Moreover, they still generate the kernel of the 31-et. Considered as

> commas for a block, they would give a 31-note block, but one with

> more regularly sized steps. Also, they are larger as intervals.

>

> Does this make sense so far?

Yup! Can we modify the algorithm so that it uses lengths _not_

based on a Cartesian or rectangular grid? For example, if we used

Kees van Prooijen's lattice, we might end up with the set of unison

vectors that uses the smallest numbers in their ratios.

--- In tuning-math@y..., "Paul Erlich" <paul@s...> wrote:

> Aha! So this is related (as I suspected from its name) to the quest

> for "canonical unison vectors" for PBs that I asked you about a

> while back.

I think I mentioned lattice basis reduction then; I thought I'd try

it.

> However, I wouldn't compute lengths on a Cartesian or

> rectangular grid, which it seems you're doing here . . .

That's the way the classic LLL, which is what I have on Maple, works.

It's easy enough to adjust it somewhat, however, by adding

multipliers, so that we could weight 3 and 5 more than 17 and 19,

which might be a good plan.

> Yup! Can we modify the algorithm so that it uses lengths _not_

> based on a Cartesian or rectangular grid? For example, if we used

> Kees van Prooijen's lattice, we might end up with the set of unison

> vectors that uses the smallest numbers in their ratios.

I don't know what the van Prooijen lattice is; however, there *are*

versions of LLL which work for non-Euclidean metrics. I don't know if

they've been implemented or how to lay hands on one, but I could try

if this is what you mean.

--- In tuning-math@y..., genewardsmith@j... wrote:

> --- In tuning-math@y..., "Paul Erlich" <paul@s...> wrote:

>

> > Aha! So this is related (as I suspected from its name) to the quest

> > for "canonical unison vectors" for PBs that I asked you about a

> > while back.

>

> I think I mentioned lattice basis reduction then; I thought I'd try

> it.

Awesome.

>

> > However, I wouldn't compute lengths on a Cartesian or

> > rectangular grid, which it seems you're doing here . . .

>

> That's the way the classic LLL, which is what I have on Maple, works.

> It's easy enough to adjust it somewhat, however, by adding

> multipliers, so that we could weight 3 and 5 more than 17 and 19,

> which might be a good plan.

Is 2 considered a prime on equal footing with the others?

>

> I don't know what the van Prooijen lattice is; however, there *are*

> versions of LLL which work for non-Euclidean metrics. I don't know if

> they've been implemented or how to lay hands on one, but I could try

> if this is what you mean.

Well, Euclidean is not so bad to start with -- what I'm thinking is

more triangular vs. rectangular, along the lines of the "15:8 should

be a longer distance than 6:5" kind of thing . . .

I wrote,

>

> Well, Euclidean is not so bad to start with -- what I'm thinking is

> more triangular vs. rectangular, along the lines of the "15:8 should

> be a longer distance than 6:5" kind of thing . . .

Hmm . . . the point of this is to have simpler ratios span a smaller

distance -- of course that's what the Tenney city-block metric, where

the orientations of the axes don't matter, does so well. Can we

develop a version of LLL for this metric? BTW, what's the optimality

criterion used by the original LLL . . . that the sum of the lengths

of the basis vectors be minimized, or what?

--- In tuning-math@y..., "Paul Erlich" <paul@s...> wrote:

> Hmm . . . the point of this is to have simpler ratios span a smaller

> distance -- of course that's what the Tenney city-block metric, where

> the orientations of the axes don't matter, does so well.

I've not heard of a Tenney metric, but it sounds like you are talking

about the L1 (taxicab) metric, for which there is an LLL variant, at

least in theory--or so I've heard.

> Can we develop a version of LLL for this metric? BTW, what's the

> optimality criterion used by the original LLL . . . that the sum of

> the lengths of the basis vectors be minimized, or what?

The criterion is somewhat complicated, but I could post it if you really

need it, and want to hear about Gram-Schmidt and put up with the fact

that there is a parameter involved, so there really isn't just one

version anyway. The problem was that if you insist on a canonical

optimally reduced basis, the problem is NP-complete, but the trick is

to relax the condition enough to get a polynomial time algorithm. It

then turns out that most of the time, it gives you more than you've

proven it ought to, anyway. It's been a real breakthrough in

computational math.
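[Ed.: for the curious, the shape of the criterion: after Gram-Schmidt orthogonalization, a basis is LLL-reduced when it is size-reduced (|mu[k][j]| <= 1/2) and each consecutive pair satisfies the Lovász condition with the parameter delta mentioned above (classically 3/4). A minimal exact-arithmetic sketch, my own textbook-style implementation, recomputing Gram-Schmidt from scratch each pass for clarity rather than speed:]

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(B):
    """Exact Gram-Schmidt: orthogonal vectors Bs and coefficients mu."""
    Bs, mu = [], []
    for i, b in enumerate(B):
        v = [Fraction(x) for x in b]
        row = []
        for j in range(i):
            m = dot(b, Bs[j]) / dot(Bs[j], Bs[j])
            row.append(m)
            v = [a - m * c for a, c in zip(v, Bs[j])]
        Bs.append(v)
        mu.append(row)
    return Bs, mu

def lll(basis, delta=Fraction(3, 4)):
    B = [list(v) for v in basis]
    k = 1
    while k < len(B):
        Bs, mu = gram_schmidt(B)
        # size reduction: force |mu[k][j]| <= 1/2
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q:
                B[k] = [a - q * b for a, b in zip(B[k], B[j])]
                Bs, mu = gram_schmidt(B)
        # Lovász condition with parameter delta; relaxing delta below 1
        # is what buys the polynomial running time
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1
        else:
            B[k - 1], B[k] = B[k], B[k - 1]
            k = max(k - 1, 1)
    return B

# The 31-et kernel basis from earlier in the thread.
commas = [[-5, 2, 2, -1], [-10, 1, 0, 3], [6, 3, -1, -3]]
reduced = lll(commas)

# Row operations are unimodular, so the result still lies in ker(h31).
h31 = [31, 49, 72, 87]
assert all(dot(v, h31) == 0 for v in reduced)
print(reduced)
```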

--- In tuning-math@y..., genewardsmith@j... wrote:

> --- In tuning-math@y..., "Paul Erlich" <paul@s...> wrote:

>

> > Hmm . . . the point of this is to have simpler ratios span a smaller

> > distance -- of course that's what the Tenney city-block metric, where

> > the orientations of the axes don't matter, does so well.

>

> I've not heard of a Tenney metric,

City-block metric, where the length of one rung along prime axis p is

log(p).

> but it sounds like you are talking

> about the L1 (taxicab) metric,

Yes, but with the above rung lengths.
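[Ed.: in code, that metric is just a weighted L1 norm on monzos, and for a ratio n/d in lowest terms it collapses to log2(n*d). A tiny sketch, using the 15:8 vs. 6:5 comparison from earlier in the thread:]

```python
import math

def tenney(monzo, primes=(2, 3, 5, 7)):
    """City-block length where one rung on the prime-p axis costs log2(p)."""
    return sum(abs(e) * math.log2(p) for e, p in zip(monzo, primes))

# 15/8 = 2^-3 * 3 * 5 -> log2(15*8); 6/5 = 2 * 3 / 5 -> log2(6*5)
assert math.isclose(tenney([-3, 1, 1, 0]), math.log2(15 * 8))
assert math.isclose(tenney([1, 1, -1, 0]), math.log2(6 * 5))
assert tenney([-3, 1, 1, 0]) > tenney([1, 1, -1, 0])  # 15:8 "farther" than 6:5
```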

> for which there is an LLL variant, at

> least in theory--or so I've heard.

Cool.

> > Can we develop a version of LLL for this metric? BTW, what's the

> > optimality criterion used by the original LLL . . . that the sum of

> > the lengths of the basis vectors be minimized, or what?

>

> The criterion is somewhat complicated, but I could post it if you really

> need it, and want to hear about Gram-Schmidt and put up with the fact

> that there is a parameter involved, so there really isn't just one

> version anyway. The problem was that if you insist on a canonical

> optimally reduced basis, the problem is NP-complete, but the trick is

> to relax the condition enough to get a polynomial time algorithm. It

> then turns out that most of the time, it gives you more than you've

> proven it ought to, anyway. It's been a real breakthrough in

> computational math.

Well then I'd like to understand the condition, and if it (for some

or all choices of the parameter) has any relatively simple musical

interpretation. Especially if you can hunt down the taxicab version.