We begin with a real n-dimensional vector space R^n with an inner

product, which we denote by <u, v>. In orthonormal coordinates this

will be the ordinary dot product; if we want another inner product

for LLL reduction, one way to do it is to transform coordinates and

then transform them back.

The goal of lattice basis reduction is to obtain a lattice basis,

equivalent to a given basis, that consists of short vectors or, stated

another way, of vectors that are pairwise nearly orthogonal.

Starting from an ordered lattice basis [b_1, b_2, ..., b_m] we can

obtain another basis [g_1, g_2, ..., g_m] by the Gram-Schmidt

orthogonalization process. This will span the same linear subspace,

but *not*, normally, the same lattice. We also obtain Gram-Schmidt

coefficients:

c[i,j] = <b_i, g_j>/<g_j, g_j>

We do this by the standard Gram-Schmidt recursion from linear algebra:

g_1 = b_1

g_i = b_i - sum_{j = 1 to i - 1} c[i,j] g_j
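In Python the recursion and the coefficients can be computed together; a quick illustrative sketch (the function names are my own, not from any implementation discussed in this thread):

```python
def dotprod(x, y):
    # inner product <x, y> in orthonormal coordinates
    return sum(xi * yi for xi, yi in zip(x, y))

def gram_schmidt(b):
    """Return (g, c): the orthogonalized vectors g_i and the
    Gram-Schmidt coefficients c[i][j] = <b_i, g_j>/<g_j, g_j>."""
    m = len(b)
    g = []
    c = [[0.0] * m for _ in range(m)]
    for i in range(m):
        gi = list(b[i])
        for j in range(i):
            c[i][j] = dotprod(b[i], g[j]) / dotprod(g[j], g[j])
            gi = [x - c[i][j] * y for x, y in zip(gi, g[j])]
        g.append(gi)
    return g, c
```

Note that g_1 = b_1 falls out of the loop automatically, since the inner sum is empty when i = 1.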

Definition:

An ordered basis [b_1, b_2, ..., b_m] in R^n is called *size-reduced*

if |c[i,j]| <= 1/2 for 1 <= j < i <= m.

Definition:

An ordered basis [b_1, b_2, ..., b_m] in R^n is called

*LLL-reduced with constant d*, where d satisfies 1/4 < d < 1, if it

is size-reduced and if

d |g_{k-1}|^2 <= |g_k + c[k, k-1] g_{k-1}|^2 for k = 2, 3, ..., m.
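Spelled out in Python, the two conditions together might look like this (a self-contained sketch only; `is_lll_reduced` is my own name, and it recomputes the Gram-Schmidt data from scratch rather than maintaining it):

```python
def norm2(v):
    # squared length |v|^2
    return sum(x * x for x in v)

def gram_schmidt(b):
    # orthogonalized vectors g and coefficients
    # c[i][j] = <b_i, g_j>/<g_j, g_j>
    m = len(b)
    g, c = [], [[0.0] * m for _ in range(m)]
    for i in range(m):
        gi = list(b[i])
        for j in range(i):
            c[i][j] = sum(p * q for p, q in zip(b[i], g[j])) / norm2(g[j])
            gi = [x - c[i][j] * y for x, y in zip(gi, g[j])]
        g.append(gi)
    return g, c

def is_lll_reduced(b, d=0.75):
    g, c = gram_schmidt(b)
    # size-reduced: |c[i][j]| <= 1/2 for 1 <= j < i
    for i in range(len(b)):
        for j in range(i):
            if abs(c[i][j]) > 0.5:
                return False
    # Lovasz condition: d |g_{k-1}|^2 <= |g_k + c[k][k-1] g_{k-1}|^2
    for k in range(1, len(b)):
        v = [x + c[k][k - 1] * y for x, y in zip(g[k], g[k - 1])]
        if d * norm2(g[k - 1]) > norm2(v):
            return False
    return True
```

For example, [[2, 0], [0, 1]] is size-reduced but fails the Lovász condition at d = 3/4, since 0.75 * 4 > 1; swapping the two vectors fixes it.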

The constant d is not documented in the sketchy Maple help file, but

chances are good it is 3/4, since this is what Lenstra, Lenstra and

Lovasz used originally and it is a sort of default standard. The

larger d is, the better your chance of an optimal reduction, but for

such small problems as we are looking at I suspect we are getting

that anyway.

--- In tuning-math@y..., genewardsmith@j... wrote:

> We begin with a real n-dimensional vector space R^n with an inner

> product, which we denote by <u, v>. In orthonormal coordinates this

> will be the ordinary dot product; if we want another inner product

> for LLL reduction one way to do it is to transform coordinates and

> then transform them back.

Is there a continuous, reversible transformation between Euclidean

and taxicab?

>

> Starting from an ordered lattice basis [b_1, b_2, ..., b_m] we can

> obtain another basis [g_1, g_2, ..., g_m] by the Gram-Schmidt

> orthogonalization process. This will span the same linear subspace,

> but *not*, normally, the same lattice.

What do you mean by the "lattice" spanned by a particular basis, as

opposed to the linear subspace spanned by a particular basis? Do you

mean something that musically speaking, is or corresponds to the

particular JI Fokker periodicity block (just a guess)?

> We also obtain Gram-Schmidt

> coefficients:

>

> c[i,j] = <b_i, g_j>/<g_j, g_j>

OK . . .

> We do this by the standard Gram-Schmidt recursion from linear algebra:

> g_1 = b_1

> g_i = b_i - sum_{j = 1 to i - 1} c[i,j] g_j

This I'm familiar with.

> Definition:

> An ordered basis [b_1, b_2, ..., b_m] in R^n is called *size-reduced*

> if |c[i,j]| <= 1/2 for 1 <= j < i <= m.

I'm confused as to why you use b here rather than g. I thought b was

the given basis, and g the derived one.

Given a basis (but allowing changes that leave the linear subspace

unchanged), does this size-reduced basis always exist? Is it always

unique?

> Definition:

> An ordered basis [b_1, b_2, ..., b_m] in R^n is called

> *LLL-reduced with constant d*, where d satisfies 1/4 < d < 1, if it

> is size-reduced and if

>

> d |g_{k-1}|^2 <= |g_k + c[k, k-1] g_{k-1}|^2 for k = 2, 3, ..., m.

Hmm . . . how is this motivated?

> The constant d is not documented in the sketchy Maple help file, but

> chances are good it is 3/4, since this is what Lenstra, Lenstra and

> Lovasz used originally and it is a sort of default standard. The

> larger d is the better your chance of an optimal reduction, but for

> such small problems as we are looking at I suspect we are getting

> that anyway.

Hmm . . . do certain values of d guarantee existence, and/or do

certain other values guarantee uniqueness?

--- In tuning-math@y..., "Paul Erlich" <paul@s...> wrote:

> Is there a continuous, reversible transformation between Euclidean

> and taxicab?

In the finite-dimensional case, they define the same topology; hence

the identity map is continuous.

> What do you mean by the "lattice" spanned by a particular basis, as

> opposed to the linear subspace spanned by a particular basis?

The lattice consists of the Z-linear combinations of basis vectors;

if they are [1, 2, 1] and [0, 1, 0] then the lattice is [a, 2a + b, a]

for any pair of integers a and b. The subspace consists of the

R-linear combinations, so in this case a and b are any real numbers.
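A concrete way to see the difference, in illustrative Python:

```python
# Z-linear combinations of [1, 2, 1] and [0, 1, 0]: every lattice
# point has the shape [a, 2a + b, a] for integers a and b.
b1, b2 = [1, 2, 1], [0, 1, 0]
lattice_points = [[a * x + b * y for x, y in zip(b1, b2)]
                  for a in range(-2, 3) for b in range(-2, 3)]

# The first and last coordinates always agree, so a point like
# [1/2, 1, 1/2] lies in the real span (a = 1/2, b = 0) but is
# not a lattice point.
assert all(p[0] == p[2] for p in lattice_points)
assert [1, 3, 1] in lattice_points   # a = 1, b = 1
```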

> > Definition:

> > An ordered basis [b_1, b_2, ..., b_m] in R^n is called *size-reduced*

> > if |c[i,j]| <= 1/2 for 1 <= j < i <= m.

> I'm confused as to why you use b here rather than g. I thought b was

> the given basis, and g the derived one.

So it is, but it's b which is or isn't size reduced. We create g and

c in order to find if b is size-reduced.

> Given a basis (but allowing changes that leave the linear subspace

> unchanged), does this size-reduced basis always exist? Is it always

> unique?

It exists, since we can show the LLL algorithm works, and that gives

size-reduction plus more. It is not in general unique.

>

> > Definition:

> > An ordered basis [b_1, b_2, ..., b_m] in R^n is called

> > *LLL-reduced with constant d*, where d satisfies 1/4 < d < 1, if it

> > is size-reduced and if

> >

> > d |g_{k-1}|^2 <= |g_k + c[k, k-1] g_{k-1}|^2 for k = 2, 3, ..., m.

> Hmm . . . how is this motivated?

I warned you this was ugly. Using this definition, we can prove

theorems showing that the basis vectors of an LLL-reduced basis can't

be too large, relatively speaking. We can also get a polynomial time

algorithm which gives us such a reduced basis.

> Hmm . . . do certain values of d guarantee existence, and/or do

> certain other values guarantee uniqueness?

We don't have uniqueness, though in practice for such small problems

as we are looking at we very likely do get it. Existence we do have,

since LLL will work for this range of d.

--- In tuning-math@y..., genewardsmith@j... wrote:

> --- In tuning-math@y..., "Paul Erlich" <paul@s...> wrote:

>

> > Is there a continuous, reversible transformation between Euclidean

> > and taxicab?

>

> In the finite-dimensional case, they define the same topology; hence

> the identity map is continuous.

So that's promising for being able to use the Tenney metric

eventually, yes?

>

> > What do you mean by the "lattice" spanned by a particular basis, as

> > opposed to the linear subspace spanned by a particular basis?

>

> The lattice consists of the Z-linear combinations of basis vectors;

> if they are [1, 2, 1] and [0, 1, 0] then the lattice is [a, 2a + b, a]

> for any pair of integers a and b. The subspace consists of the

> R-linear combinations, so in this case a and b are any real numbers.

OK . . . somehow I had skipped over the whole orthogonalization step

in my mind when I asked this question. Sorry! Now I'm confused . . .

> > > Definition:

> > > An ordered basis [b_1, b_2, ..., b_m] in R^n is called *size-reduced*

> > > if |c[i,j]| <= 1/2 for 1 <= j < i <= m.

>

> > I'm confused as to why you use b here rather than g. I thought b was

> > the given basis, and g the derived one.

>

> So it is, but it's b which is or isn't size reduced. We create g and

> c in order to find if b is size-reduced.

OK -- so you should have said "for all g" in the definition . . .

yes? Or is g uniquely defined? It would seem to depend on how you

order the elements of b, since the first vector is made to be the

same.

>

> > Given a basis (but allowing changes that leave the linear subspace

> > unchanged), does this size-reduced basis always exist? Is it always

> > unique?

>

> It exists, since we can show the LLL algorithm works, and that gives

> size-reduction plus more. It is not in general unique.

Well that kind of slams the door on the "classification" I was hoping

to achieve, though I suppose a classification by optimal generators

would be good enough (and would eliminate the need to use LLL, but I

feel we should still have some lower bound on

allowable "orthogonality").

> > Hmm . . . do certain values of d guarantee existence, and/or do

> > certain other values guarantee uniqueness?

>

> We don't have uniqueness, though in practice for such small problems

> as we are looking at we very likely do get it.

Can we determine for certain whether we do or don't in any particular

case?

> Existence we do have,

> since LLL will work for this range of d.

I was thinking that two different choices of b might turn out to be

equal contenders for the LLL throne for a particular equivalence

class of b's. Possible or impossible?

--- In tuning-math@y..., "Paul Erlich" <paul@s...> wrote:

> --- In tuning-math@y..., genewardsmith@j... wrote:

> > In the finite-dimensional case, they define the same topology; hence

> > the identity map is continuous.

> So that's promising for being able to use the Tenney metric

> eventually, yes?

So far as I can see we can't use it unless we dig up an L1 version of

LLL, but it does give grounds for hope the answers would often be the

same.

> OK -- so you should have said "for all g" in the definition . . .

> yes? Or is g uniquely defined? It would seem to depend on how you

> order the elements of b, since the first vector is made to be the

> same.

It does depend on the fact that b is ordered, which is why we started

with an ordered basis. It is uniquely defined given the ordering.

> Can we determine for certain whether we do or don't in any particular

> case?

I think in such small cases that might be possible; in general this

turns into a non-polynomial time mess.

> > Existence we do have,

> > since LLL will work for this range of d.

> I was thinking that two different choices of b might turn out to be

> equal contenders for the LLL throne for a particular equivalence

> class of b's. Possible or impossible?

In theory possible, in practice I think we may be OK. We are not

working in the 101 limit, after all.

--- In tuning-math@y..., genewardsmith@j... wrote:

> --- In tuning-math@y..., "Paul Erlich" <paul@s...> wrote:

> > --- In tuning-math@y..., genewardsmith@j... wrote:

>

> > > In the finite-dimensional case, they define the same topology; hence

> > > the identity map is continuous.

>

> > So that's promising for being able to use the Tenney metric

> > eventually, yes?

>

> So far as I can see we can't use it unless we dig up an L1 version of

> LLL, but it does give grounds for hope the answers would often be the

> same.

But you said,

"if we want another inner product

for LLL reduction one way to do it is to transform coordinates and

then transform them back."

Is there an inner product that is appropriate to the taxicab metric?

Am I confused about something?

> > OK -- so you should have said "for all g" in the definition . . .

> > yes? Or is g uniquely defined? It would seem to depend on how you

> > order the elements of b, since the first vector is made to be the

> > same.

>

> It does depend on the fact that b is ordered, which is why we started

> with an ordered basis.

Hmm . . . shouldn't we then try all permutations of a given basis? Or

do the elements have to be in descending order of length? I don't

think this is assumed for Gram-Schmidt generally, is it?

> > Can we determine for certain whether we do or don't in any particular

> > case?

>

> I think in such small cases that might be possible; in general this

> turns into a non-polynomial time mess.

Well, it would be cool to be sure of uniqueness.

> > > Existence we do have,

> > > since LLL will work for this range of d.

>

> > I was thinking that two different choices of b might turn out to be

> > equal contenders for the LLL throne for a particular equivalence

> > class of b's. Possible or impossible?

>

> In theory possible, in practice I think we may be OK. We are not

> working in the 101 limit, after all.

Well, having equal contenders is something that, to my intuition,

could happen just as easily in few dimensions as in many . . . no?

In-Reply-To: <9t7fj7+odag@eGroups.com>

Gene wrote:

> Starting from an ordered lattice basis [b_1, b_2, ..., b_m] we can

> obtain another basis [g_1, g_2, ..., g_m] by the Gram-Schmidt

> orthogonalization process. This will span the same linear subspace,

> but *not*, normally, the same lattice. We also obtain Gram-Schmidt

> coefficients:

What's an ordered basis?

> c[i,j] = <b_i, g_j>/<g_j, g_j>

I've implemented this in Python code:

import operator

def dotprod(x, y):
    return reduce(operator.add,
                  map(operator.mul, x, y))

def makec(b, g):
    c = []
    for i in range(len(g)):
        row = []
        for j in range(len(g)):
            row.append(float(dotprod(b[i], g[j]))
                       / dotprod(g[j], g[j]))
        c.append(row)
    return c

Please say if I'm going wrong anywhere. One thing that worries me is that

I'm getting floating point results where you stay with integers.

> We do this by the standard Gram-Schmidt recursion from linear algebra:

>

> g_1 = b_1

> g_i = b_i - sum_{j = 1 to i - 1} c[i,j] g_j

Again, I've implemented that

def LLL(b, g):
    c = makec(b, g)
    newg = []
    for i in range(len(g)):
        row = []
        for k in range(len(g)):
            thing = b[i][k]
            for j in range(i):
                thing -= c[i][j] * g[j][k]
            row.append(thing)
        newg.append(row)
    return newg

I'm assuming that repeatedly applying this will give the right result, but

it clearly doesn't. For one thing, the first ratio can never change.

Also, you haven't said what seed value to use for g. I'm using the

initial value of b.

If you can supply an alternative algorithm in some kind of procedural

code, that would be good too. I've found LiDIA, and it comes with the

source code but an imperfect license. I'll have a look at it sometime.

Graham

--- In tuning-math@y..., graham@m... wrote:

> What's an ordered basis?

It's a basis you take in a certain order--first element, second

element, etc.

> > c[i,j] = <b_i, g_j>/<g_j, g_j>

> I've implemented this in Python code:

> Please say if I'm going wrong anywhere.

It looks right to me, but I don't know Python.

> One thing that worries me is that

> I'm getting floating point results where you stay with integers.

If you start with integers, you get rational numbers, and hence you

will get floating points in your implementation of Gram-Schmidt. The

integers I am staying with are simply the integers in a reduced basis

that starts out with integers.

> Again, I've implemented that

> I'm assuming that repeatedly applying this will give the right result, but

> it clearly doesn't.

What I gave was a definition of LLL reduced, not the LLL algorithm

itself; that has two different kinds of steps in it.
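To make the two kinds of steps concrete: here is a minimal Python sketch of the standard algorithm (a plain textbook version with exact rational arithmetic and d = 3/4, not a transcription of the Maple code below; it naively recomputes the orthogonalization after every change, where a real implementation would update it incrementally):

```python
from fractions import Fraction

def gso(b):
    # Gram-Schmidt orthogonalization: vectors g and coefficients
    # c[i][j] = <b_i, g_j>/<g_j, g_j>
    m = len(b)
    g, c = [], [[Fraction(0)] * m for _ in range(m)]
    for i in range(m):
        gi = list(b[i])
        for j in range(i):
            c[i][j] = (sum(p * q for p, q in zip(b[i], g[j]))
                       / sum(q * q for q in g[j]))
            gi = [x - c[i][j] * y for x, y in zip(gi, g[j])]
        g.append(gi)
    return g, c

def lll(b, d=Fraction(3, 4)):
    b = [[Fraction(x) for x in row] for row in b]
    k = 1
    while k < len(b):
        g, c = gso(b)
        # step 1: size-reduce b_k against the earlier vectors
        for j in range(k - 1, -1, -1):
            q = round(c[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                g, c = gso(b)
        # step 2: swap b_{k-1} and b_k if the Lovasz condition fails
        lhs = d * sum(x * x for x in g[k - 1])
        rhs = (sum(x * x for x in g[k])
               + c[k][k - 1] ** 2 * sum(x * x for x in g[k - 1]))
        if lhs > rhs:
            b[k - 1], b[k] = b[k], b[k - 1]
            k = max(k - 1, 1)
        else:
            k += 1
    return [[int(x) for x in row] for row in b]
```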

Here is some code I took off the web, which comes with a reference.

Starting with the pseudocode in a reference book on algorithms, or

trying to adapt some code in another implementation, seems like the

way to go to me.

# This follows the algorithm given in Maurice Mignotte's
# "Mathematics for Computer Algebra"

LLL := proc()
    local dim, basis, new_basis, local_iprod, local_norm, local_c, i,
        j, u, B, k, h, l;
    basis := args[1];
    dim := nops(basis);
    local_iprod := linalg[innerprod];
    local_norm := proc(v::vector)
        RETURN(linalg[norm](v,2));
    end:
    local_c := 0.75;
    for i from 2 to nargs do
        if op(1,args[i]) = iprod then
            local_iprod := op(2,args[i]);
            local_norm := proc(v::vector)
                RETURN(sqrt(local_iprod(v,v)));
            end;
        elif op(1,args[i]) = norm then
            local_norm := op(2,args[i]);
        elif op(1,args[i]) = c then
            local_c := op(2,args[i]);
        fi;
    od;

> For one thing, the first ratio can never change.

> Also, you haven't said what seed value to use for g. I'm using the

> initial value of b.

g doesn't need a seed value, it's just the Gram-Schmidt

orthogonalization of b.

Paul wrote:

> Well that kind of slams the door on the "classification" I was hoping

> to acheive, though I suppose a classification by optimal generators

> would be good enough (and would eliminate the need to use LLL, but I

> feel we should still have some lower bound on

> allowable "orthogonality").

The mapping by period and generator is unique for any linear temperament,

which should be relevant here. Some caveats:

There are always two choices of generator within the period. I used to

always take the smaller, but sometimes it became the larger after

optimization. So you can make the arbitrary choice that the first nonzero

term be negative.

For similar reasons, there may be more than one period mapping. I think we

can do without it, and take the raw generator mapping, including a common

factor showing the number of periods to the octave.

If you're going to do that, you have to get rid of torsion. I expect any

decent LLL algorithm will do this.

Going from unison vectors to linear temperaments is a done deal. I have a

fresh installation of Mandrake 8.1 here, which comes with Numeric Python as

standard! So, all I need to do is make a local copy of my website and I can

look into these things.

It also comes with Octave. Is that supposed to do LLL?

Graham

--- In tuning-math@y..., Graham Breed <graham@m...> wrote:

> > Well that kind of slams the door on the "classification" I was

> > hoping to achieve, though I suppose a classification by optimal

> > generators would be good enough (and would eliminate the need to

> > use LLL, but I feel we should still have some lower bound on

> > allowable "orthogonality").

> The mapping by period and generator is unique for any linear

> temperament, which should be relevant here. Some caveats:

Minkowski reduction should do what Paul wants; the caveats would

concern symmetries which shouldn't happen.

> There are always two choices of generator within the period. I used to

> always take the smaller, but sometimes it became the larger after

> optimization. So you can make the arbitrary choice that the first

> nonzero term be negative.

The lattice reduction approach doesn't always give us

generator/period; it is optimizing steps to get to good intervals, and

that isn't always generator/period.

In-Reply-To: <9tefqp+vq4h@eGroups.com>

Gene wrote:

> The lattice reduction approach doesn't always give us

> generator/period; it is optimizing steps to get to good intervals, and

> that isn't always generator/period.

If Paul's conjecture is correct (and last time I mentioned it, you said

you'd proved it) you should always be able to get the generator and period

for a complete, linearly independent set of chromatic and commatic unison

vectors. I also have a program, the source code of which you can inspect,

that does the job. I did run into problems using an arbitrary chromatic

unison vector last weekend, and will look at that next weekend. It should

be possible to get a mapping by generator and period, but the number of

steps to either won't be defined.

See <http://x31eq.com/vectors.html>.

None of this requires the lattice to be reduced. That's for going in the

other direction (mapping to unison vectors) and getting the simplest

results. As I'm having trouble getting this working, perhaps you could

reduce this octave-specific basis for me. Then I can see if it's what I

want.

[46, -29, 0, 0, 0, 0]

[-14, 0, -29, 29, 0, 0]

[33, 0, 29, 0, -29, 0]

[7, 0, 0, 0, 29, -29]

Graham

--- In tuning-math@y..., graham@m... wrote:

> If Paul's conjecture is correct (and last time I mentioned it, you said

> you'd proved it) you should always be able to get the generator and

> period for a complete, linearly independent set of chromatic and

> commatic unison vectors.

I certainly didn't mean to suggest I'd proven your statement; I had

to put conditions on to get a proof.

> As I'm having trouble getting this working, perhaps you could

> reduce this octave-specific basis for me. Then I can see if it's

> what I want.

>

> [46, -29, 0, 0, 0, 0]

> [-14, 0, -29, 29, 0, 0]

> [33, 0, 29, 0, -29, 0]

> [7, 0, 0, 0, 29, -29]

Since I don't know what inner product you want, I made no adjustment,

and got the following:

[-14, 0, -29, 29, 0, 0]

[19, 0, 0, 29, -29, 0]

[7, 0, 0, 0, 29, -29]

[13, -29, -29, 0, 29, 0]

In-Reply-To: <9tg1rs+ddom@eGroups.com>

Me:

> > As I'm having trouble getting this working, perhaps you could

> > reduce this octave-specific basis for me. Then I can see if it's

> > what I want.

> >

> > [46, -29, 0, 0, 0, 0]

> > [-14, 0, -29, 29, 0, 0]

> > [33, 0, 29, 0, -29, 0]

> > [7, 0, 0, 0, 29, -29]

Gene:

> Since I don't know what inner product you want, I made no adjustment,

> and got the following:

>

> [-14, 0, -29, 29, 0, 0]

> [19, 0, 0, 29, -29, 0]

> [7, 0, 0, 0, 29, -29]

> [13, -29, -29, 0, 29, 0]

Oh dear. I wanted something like this:

(2, -1, 2, -1, 0)

(0, -3, 1, 1, 1)

(-3, 1, -1, -1, 1)

(-3, 0, 0, 1, -1)

So it looks like this off the shelf LLL algorithm isn't what I want at

all. Do you know of any way of doing that kind of reduction? Or, more

specifically, of getting a simple set of unison vectors for the consistent

29+58 temperament?

mapping by period and generator:

[(29, 0), (46, 0), (67, 1), (81, 1), (100, 1), (107, 1)]

mapping by steps:

[(29, 0), (46, 0), (66, 1), (80, 1), (99, 1), (106, 1)]

Graham

--- In tuning-math@y..., graham@m... wrote:

> Oh dear. I wanted something like this:

>

> (2, -1, 2, -1, 0)

> (0, -3, 1, 1, 1)

> (-3, 1, -1, -1, 1)

> (-3, 0, 0, 1, -1)

I can certainly LLL reduce this, after adding in the 2s; I get

<196/195, 364/363, 441/440, 1575/1573>
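"Adding in the 2s" is mechanical: choose the power of 2 that brings each ratio closest to 1. A Python sketch of just that step (the helper name is my own), interpreting the vectors as exponents of the odd primes 3, 5, 7, 11, 13:

```python
from fractions import Fraction
from math import floor, log

def monzo_to_ratio(monzo, primes=(3, 5, 7, 11, 13)):
    # Interpret monzo as exponents of the odd primes, then "add in
    # the 2s": pick the power of 2 that brings the ratio closest to
    # 1 in the log sense.
    r = Fraction(1)
    for p, e in zip(primes, monzo):
        r *= Fraction(p) ** e
    two = -int(floor(log(r, 2) + 0.5))  # nearest integer to -log2(r)
    return r * Fraction(2) ** two
```

For instance, (2, -1, 2, -1, 0) stands for 3^2 5^-1 7^2 11^-1 = 441/55, and multiplying by 2^-3 gives 441/440.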

> So it looks like this off the shelf LLL algorithm isn't what I want at

> all. Do you know of any way of doing that kind of reduction? Or, more

> specifically, of getting a simple set of unison vectors for the

> consistent 29+58 temperament?

The above does it; however you did most of the work, so I presume you

must have some idea how to proceed. One can brute force it by first

getting a 13-limit notation with a basis of about the right size,

dual to a set of ets containing 29 and 58, and then searching for

elements of the kernel of 29&58--one should not need exponents beyond

+-2, so a search would be feasible. If we find something of rank 4

which can be extended to make a basis for the kernel of both 29 and

58, we are ready to LLL reduce it. Since you did the above

calculation, however, perhaps you have another idea.

In-Reply-To: <9thca4+t6o3@eGroups.com>

Gene wrote:

> The above does it; however you did most of the work, so I presume you

> must have some idea how to proceed.

It took me a lot of work to get the "correct" lattice. I had to guess a

transformation matrix that would allow me to divide through by a common

factor of 29, and check that it worked in Mathcad. And I wasn't following

a deterministic process. I had to think quite hard about how the unison

vectors could be combined to get the 29s, and still be linearly

independent. I expect I could get a computer to do the same job, but that

would mean a lot more work, and I don't want to go reinventing any wheels.

I'm not that hung up on unison vectors anyway. The program already does

what I want it to.

> One can brute force it by first

> getting a 13-limit notation with a basis of about the right size,

> dual to a set of ets containing 29 and 58, and then searching for

> elements of the kernel of 29&58--one should not need exponents beyond

> +-2, so a search would be feasible. If we find something of rank 4

> which can be extended to make a basis for the kernel of both 29 and

> 58, we are ready to LLL reduce it. Since you did the above

> calculation, however, perhaps you have another idea.

Yes, brute force would give the right results, but I'm not happy with it

as a solution. For the theory to be complete, we really need to prove

that you can always find a set of simplified unison vectors, and that a

particular algorithm will always produce them. I think that'd be harder

for brute force.

This may even be more important than going from unison vectors to a

temperament. History shows that good temperaments don't get discovered by

somebody sitting down and saying "Hey, I wonder what would happen if I

simplified all these intervals". It's more likely that they'll hit on a

good temperament, and then think about what simplifications it makes.

Graham