All;

I'm looking for a way to generate all chords of a

given card. in a given odd limit. The tonality

diamond is easy, but I want ASSs too. According

to Graham, there aren't that many, so I could just

dope them in. Naturally, this is unacceptable. :)

Besides, while I trust Graham, I can't divine from

his presentation a proof that his method catches

all the ASSs, so to speak.

So far, I can think of no better method than brute

force... building up the chords by 2nds, and removing

those that break the odd-limit at each iteration.

Can anyone do better?

-Carl

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> I'm looking for a way to generate all chords of a

> given card. in a given odd limit. The tonality

> diamond is easy, but I want ASSs too. According

> to Graham, there aren't that many, so I could just

> dope them in. Naturally, this is unacceptable. :)

> Besides, while I trust Graham, I can't divine from

> his presentation a proof that his method catches

> all the ASSs, so to speak.

Brute force sounds like a good way to be sure you are getting everything you want. Is the problem a limit on your computing power?

>> I'm looking for a way to generate all chords of a

>> given card. in a given odd limit. The tonality

>> diamond is easy, but I want ASSs too. According

>> to Graham, there aren't that many, so I could just

>> dope them in. Naturally, this is unacceptable. :)

>> Besides, while I trust Graham, I can't divine from

>> his presentation a proof that his method catches

>> all the ASSs, so to speak.

>

>Brute force sounds like a good way to be sure you are

>getting everything you want. Is the problem a limit on

>your computing power?

I don't have a scheme compiler, so, yes, it will be.

At limit n, there are on the order of n^2 dyads, and

at card k, there will be (n^2)^k untested chords, and

the test will cost an average of (k^2)/2, or something.

-Carl

>>I'm looking for a way to generate all chords of a

>>given card. in a given odd limit. The tonality

>>diamond is easy, but I want ASSs too. According

>>to Graham, there aren't that many, so I could just

>>dope them in. Naturally, this is unacceptable. :)

>>Besides, while I trust Graham, I can't divine from

>>his presentation a proof that his method catches

>>all the ASSs, so to speak.

>

>Brute force sounds like a good way to be sure you

>are getting everything you want. Is the problem a

>limit on your computing power?

Anyway, Gene, what I was thinking... under the

dyadic definition, all n-limit chords must be

connected on the n-limit lattice, and must have

a Hahn-diameter of 1. See:

http://library.wustl.edu/~manynote/music.html

There ought to be a geometric way to find these

structures...

-Carl

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> Anyway, Gene, what I was thinking... under the

> dyadic definition, all n-limit chords must be

> connected on the n-limit lattice, and must have

> a Hahn-diameter of 1. See:

>

> http://library.wustl.edu/~manynote/music.html

>

> There ought to be a geometric way to find these

> structures...

Does this have to work for temperaments or only rational scales?

>Does this have to work for temperaments or only rational scales?

Paul H. applied it to temperaments, and scales. I'm using it

to define n-limit chords (rational only). Did I make a mistake?

Part of the difficulty for me is, the smallest ASSs are 9-limit,

and that requires more than 3 dimensions.

-Carl

> Paul H. applied it to temperaments, and scales. I'm using it

> to define n-limit chords (rational only). Did I make a mistake?

To put it another way: every note in the chord must be connected

to every other note by exactly one lattice link. In the brute-

force method, the huge majority of chords will later fail the

test. Why not generate them directly by searching the lattice?

There's got to be some graph theory somewhere that will do this.

-Carl

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> Paul H. applied it to temperaments, and scales. I'm using it

> to define n-limit chords (rational only). Did I make a mistake?

> Part of the difficulty for me is, the smallest ASSs are 9-limit,

> and that requires more than 3 dimensions.

I would work in 3-dimensions for the 9-limit, and just make 3 half the size of 5 or 7. In other words,

||3^a 5^b 7^c|| = sqrt(a^2 + 4b^2 + 4c^2 + 2ab + 2ac + 4bc)

would be the length of 3^a 5^b 7^c. Everything in a radius of 2 of anything will be consonant.
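(For reference, Gene's length formula is easy to check numerically. A minimal Python sketch -- the name `norm9` is mine, not Gene's:)

```python
import math

def norm9(a, b, c):
    """Gene's 9-limit length of the octave class 3^a * 5^b * 7^c."""
    return math.sqrt(a*a + 4*b*b + 4*c*c + 2*a*b + 2*a*c + 4*b*c)

# 3 is half the size of 5 or 7, and the 9-limit consonances
# 9/5 = 3^2 * 5^-1 and 9/7 = 3^2 * 7^-1 land exactly at radius 2:
print(norm9(1, 0, 0))   # 1.0  (3)
print(norm9(0, 1, 0))   # 2.0  (5)
print(norm9(2, -1, 0))  # 2.0  (9/5)
print(norm9(2, 0, -1))  # 2.0  (9/7)
```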

>>Paul H. applied it to temperaments, and scales. I'm using it

>>to define n-limit chords (rational only). Did I make a mistake?

Turns out _diameter_ is already graph-theory terminology, and it

is the term we want.

I'm afraid I didn't keep the URL for the source of:

"Let G be a graph and v be a vertex of G. The eccentricity of the

vertex v is the maximum distance from v to any vertex. That is,

e(v)=max{d(v,w):w in V(G)}."

"The diameter of G is the maximum eccentricity among the vertices

of G. Thus, diameter(G)=max{e(v):v in V(G)}."

The "radius" of G is the minimum eccentricity.

-Carl

>>Paul H. applied it to temperaments, and scales. I'm using it

>>to define n-limit chords (rational only). Did I make a mistake?

>>Part of the difficulty for me is, the smallest ASSs are 9-limit,

>>and that requires more than 3 dimensions.

>

>I would work in 3-dimensions for the 9-limit, and just make 3 half

>the size of 5 or 7. In other words,

>

> ||3^a 5^b 7^c|| = sqrt(a^2 + 4b^2 + 4c^2 + 2ab + 2ac + 4bc)

>

>would be the length of 3^a 5^b 7^c. Everything in a radius of 2 of

>anything will be consonant.

Thanks, Gene. I _really_ can't visualize this, but perhaps it

will provide a general method for finding the chords I seek.

Would everything still hold if I used 4 dimensions and kept all

edges the same length? By gods, I can't figure out where you're

getting the coefficients here. And what are the double pipes?

Not abs. -- there's a sqrt on the other side... I confess I

don't know the distance formula for triangular plots. I could

derive it with trig... nope, it's a mess, 'cause there are many

different triangles involved in the different diagonals. So I

guess I would use the standard Euclidean distance, but I need to

know how to get delta(x) and delta(y) off a triangular lattice.

-Carl

In-Reply-To: <a20hs9+7sd3@eGroups.com>

Gene:

> >I would work in 3-dimensions for the 9-limit, and just make 3 half

> >the size of 5 or 7. In other words,

> >

> > ||3^a 5^b 7^c|| = sqrt(a^2 + 4b^2 + 4c^2 + 2ab + 2ac + 4bc)

> >

> >would be the length of 3^a 5^b 7^c. Everything in a radius of 2 of

> >anything will be consonant.

Carl:

> Thanks, Gene. I _really_ can't visualize this, but perhaps it

> will provide a general method for finding the chords I seek.

> Would everything still hold if I used 4 dimensions and kept all

> edges the same length? By gods, I can't figure out where you're

> getting the coefficients here. And what are the double pipes?

> Not abs. -- there's a sqrt on the other side... I confess I

> don't know the distance formula for triangular plots. I could

> derive it with trig... nope, it's a mess, 'cause there are many

> different triangles involved in the different diagonals. So I

> guess I would use the standard Euclidean distance, but I need to

> know how to get delta(x) and delta(y) off a triangular lattice.

The double pipes are simply for the Euclidean distance. I think the

triangular generalisation is like this:

Set the x axis to be constant. The y axis is then the 5:4 direction. To

get distances, first convert to new axes

x' = x + y cos(theta)

y' = y sin(theta)

where theta is the angle between the x and y axis, 90 degrees for a square

lattice or 60 degrees for equilateral triangles. The Euclidean distance

is simply sqrt((x')**2 + (y')**2).
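(Graham's change of axes is easy to sketch in Python; `skew_distance` is my name for it, not his:)

```python
import math

def skew_distance(x, y, theta_deg=60.0):
    """Euclidean length of lattice vector (x, y) when the two axes
    meet at angle theta: 90 deg for a square lattice, 60 deg for
    equilateral triangles."""
    theta = math.radians(theta_deg)
    xp = x + y * math.cos(theta)   # x' = x + y cos(theta)
    yp = y * math.sin(theta)       # y' = y sin(theta)
    return math.hypot(xp, yp)
```

At 60 degrees a unit step along either axis has length 1, and so does the (1, -1) diagonal, which is what makes the lattice triangular.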

I did work out the general, multidimensional case, but I don't have the

result to hand. Probably, something like

x' = x + y*cos(theta) + z*cos(theta)

y' = y*sin(theta) + z*cos(theta)

z' = z*sin(phi)

will work. I don't know how to get from theta, the angle between axes, to

phi, the angle between the z axis and the x-y plane. Certainly not in

general.

Oh, for the algorithm, trying all combinations of consonances above the

tonic should work. That'll be O(n**m) where m is the number of notes in

the chord, but shouldn't be a problem for the kind of numbers we're

talking about. A more efficient way would be to use the general method

for ASSes I give on the web page, but you said you don't trust that.

Graham

Hi Graham,

>The double pipes are simply for the Euclidean distance.

Thanks!

>I think the triangular generalisation is like this:

>

>Set the x axis to be constant. The y axis is then the 5:4

>direction. To get distances, first convert to new axes

>

> x' = x + y cos(theta)

> y' = y sin(theta)

>

>where theta is the angle between the x and y axis, 90 degrees

>for a square lattice or 60 degrees for equilateral triangles.

>The Euclidean distance is simply sqrt((x')**2 + (y')**2).

Sweet! I'll see if I can play with this in the coming week.

>I did work out the general, multidimensional case, but I don't

>have the result to hand. Probably, something like

>

> x' = x + y*cos(theta) + z*cos(theta)

> y' = y*sin(theta) + z*cos(theta)

> z' = z*sin(phi)

>

>will work. I don't know how to get from theta, the angle between

>axes, to phi, the angle between the z axis and the x-y plane.

>Certainly not in general.

Can't it always just be 60 degrees? There is that point where

this is no longer the closest packing -- that's bad I presume...

Paul once posted something from Mathworld...

"The analog of face-centered cubic packing is the densest lattice

packing in 4- and 5-D. In 8-D, the densest lattice packing is made

up of two copies of face-centered cubic. In 6- and 7-D, the densest

lattice packings are cross sections of the 8-D case. In 24-D, the

densest packing appears to be the Leech Lattice."

>Oh, for the algorithm, trying all combinations of consonances

>above the tonic should work. That'll be O(n**m) where m is the

>number of notes in the chord, but shouldn't be a problem for the

>kind of numbers we're talking about.

What's the double-star? Combinations of notes or intervals?

Notes gives you CPSs, and intervals don't get all the chords

because some chords contain more than one instance of an

interval. Plus, order matters, at least for lists of 2nds,

so we wind up with the procedure I described when complaining

about my lack of a scheme compiler.

>A more efficient way would be to use the general method for

>ASSes I give on the web page, but you said you don't trust that.

I trust it, but I don't understand it. So I can't _really_

trust it.

-Carl

Wait a minute. Gene, what are you using to generate all the

chords (magic and otherwise) in 31-tET, on the main list?

-Carl

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> Can't it always just be 60 degrees? There is that point where

> this is no longer the closest packing -- that's bad I presume...

> Paul once posted something from Mathworld...

That's exactly what I've been talking about and presenting for the last few months!

For the 5-limit, it is

||3^a 5^b|| = sqrt(a^2 + ab + b^2)

For the 7-limit

||3^a 5^b 7^c|| = sqrt(a^2 + b^2 + c^2 + ab + ac + bc)

Beyond that we need to decide if 3 stays the same size as 5, 7, and 11, or is half as long.

> "The analog of face-centered cubic packing is the densest lattice

> packing in 4- and 5-D. In 8-D, the densest lattice packing is made

> up of two copies of face-centered cubic. In 6- and 7-D, the densest

> lattice packings are cross sections of the 8-D case. In 24-D, the

> densest packing appears to be the Leech Lattice."

Forget the densest packing, we want the packing corresponding to octave classes of intervals. There's a lot of great mathematics in the above if you ever get interested in pure math, though. :)

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> Wait a minute. Gene, what are you using to generate all the

> chords (magic and otherwise) in 31-tET, on the main list?

Pure brute force, using a routine which steps through all the partitions of 31 and tests them. In the case of 72, there are five million or so and I will probably need to do something different if I do it.
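(Gene doesn't show the routine; a generic recursive partition generator along these lines, in Python -- a sketch, not his actual code:)

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as non-increasing tuples of parts."""
    if max_part is None or max_part > n:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(max_part, 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

print(list(partitions(4)))
# [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
```

Each partition of 31 is a candidate step pattern to test; the counts grow quickly, which is why 72 gets expensive.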

>>Wait a minute. Gene, what are you using to generate all the

>>chords (magic and otherwise) in 31-tET, on the main list?

>

>Pure brute force, using a routine which steps through all the

>partitions of 31 and tests them. In the case of 72, there are

>five million or so and I will probably need to do something

>different if I do it.

Sorry to cross-list things here, but I assume you get magic

chords just because you define the edges in degrees of the

tempered system in the beginning... leaving you no easy way

to get only the magic chords at the end. The posts have

been great, but it's a lot of work to figure out which ones

are magic, even with the ratios.

On finding all the non-magic chords, Graham has a point.

Looking at all the "faces" of the Euler genus has potential.

Actually, I fully expect Dave Keenan to just pull something

out of his hat here. How's it going Dave! Back from your

trip to the coast (was it?). How was it! I'm glad you seem

to be able to balance your list involvement lately. Looks

like I've slipped back into total addiction, for the time

being.

-Carl

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "clumma" <carl@l...> wrote:

>

> > Paul H. applied it to temperaments, and scales. I'm using it

> > to define n-limit chords (rational only). Did I make a mistake?

> > Part of the difficulty for me is, the smallest ASSs are 9-limit,

> > and that requires more than 3 dimensions.

>

> I would work in 3-dimensions for the 9-limit, and just make 3 half

the size of 5 or 7. In other words,

>

> ||3^a 5^b 7^c|| = sqrt(a^2 + 4b^2 + 4c^2 + 2ab + 2ac + 4bc)

>

> would be the length of 3^a 5^b 7^c. Everything in a radius of 2 of

> anything will be consonant.

What happens to 9:5 and 9:7?

>>Can't it always just be 60 degrees? There is that point where

>>this is no longer the closest packing -- that's bad I presume...

>>Paul once posted something from Mathworld...

>

>That's exactly what I've been talking about and presenting for

>the last few months!

Wow, sorry I've been so dense. There's a big hole in my algebra

where polynomials of degree >2 should be. I was amazed in 10th

grade when my questions about cubic and higher polynomials were

explicitly dismissed, and I haven't had algebra since.

>For the 5-limit, it is

>

> ||3^a 5^b|| = sqrt(a^2 + ab + b^2)

>

> For the 7-limit

>

> ||3^a 5^b 7^c|| = sqrt(a^2 + b^2 + c^2 + ab + ac + bc)

Is there a post where the derivation for this is given?

>Beyond that we need to decide if 3 stays the same size as 5, 7,

>and 11, or is half as long.

You can expect me to become completely confused when dealing

with prime limits. I've spent all my time thinking in odd-limits,

and frankly I think all the stuff having to do with music supports

this. Why bother with different lengths -- unless you're trying

to weight for consonance, which is a different matter? It is true

that log(n) lengths unifies the taxicab metric for odd- and prime-

limits. I understand that. But I still want to visualize my

9-edges apart from my 3-edges. Musically, I don't care if the

polynomials are easier to factor with prime coefficients. I want

the option of keeping my lengths unweighted, and thinking in the

9-limit. Can your distance stuff be adapted to odd limits?

>Forget the densest packing, we want the packing corresponding to

>octave classes of intervals.

What does this mean?

-Carl

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> >For the 5-limit, it is

> >

> > ||3^a 5^b|| = sqrt(a^2 + ab + b^2)

> >

> > For the 7-limit

> >

> > ||3^a 5^b 7^c|| = sqrt(a^2 + b^2 + c^2 + ab + ac + bc)

>

> Is there a post where the derivation for this is given?

>

> >Beyond that we need to decide if 3 stays the same size as 5, 7,

> >and 11, or is half as long.

>

> You can expect me to become completely confused when dealing

> with prime limits.

Gene is talking about odd limits, just as we are. He's just

attempting to capture them in a Euclidean lattice with prime axes.

I'm concerned that this won't always work -- Gene, perhaps you could

post the implied lengths for

1:3

1:5

3:5

1:7

3:7

5:7

1:9

5:9

7:9

1:11

3:11

5:11

7:11

9:11

etc. (up to whatever odd limit you could see yourself being

interested in).

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> On finding all the non-magic chords, Graham has a point.

> Looking at all the "faces" of the Euler genus has potential.

> Actually, I fully expect Dave Keenan to just pull something

> out of his hat here.

Sorry, nothing in my hat but bird poop.

> How's it going Dave! Back from your

> trip to the coast (was it?). How was it!

It was a coral cay on the Great Barrier Reef. Human population 40,

bird population 40,000.

> I'm glad you seem

> to be able to balance your list involvement lately. Looks

> like I've slipped back into total addiction, for the time

> being.

Thanks for the encouragement. Just say no. ;-)

Dave, perhaps you can help with this Kees lattice business. It's pure

math, but Gene appears to have no interest in it. It seems to me that

we should be able, based on Kees's lattice with a 'taxicab' metric,

to define an error function so that the optimal temperament

according to that error function, which tempers out the small

interval n:d, should have an error proportional to

|n-d|/(d*log(d))

I've shown that our usual RMS error function works for this within

about a factor of 2, based on ten examples with an extremely wide

range of n and d values. But we should be able to do much better.

Also, a 'complexity' measure similar to the 'gens' ones we've been

using, but defined appropriately, should turn out to be proportional

to

log(d)

I've shown that our usual RMS 'gens' measure works for this within

about a factor of 2, based on the same ten examples. But we should be

able to do much better.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > ||3^a 5^b 7^c|| = sqrt(a^2 + 4b^2 + 4c^2 + 2ab + 2ac + 4bc)

> >

> > would be the length of 3^a 5^b 7^c. Everything in a radius of 2 of

> > anything will be consonant.

> What happens to 9:5 and 9:7?

||9/5|| = ||3^2 5^(-1)|| = sqrt(4+4+0-4+0+0) = 2

||9/7|| = ||3^2 7^(-1)|| = sqrt(4+0+4+0-4+0) = 2

Hence both 9/5 and 9/7 are consonant with 1.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > > ||3^a 5^b 7^c|| = sqrt(a^2 + 4b^2 + 4c^2 + 2ab + 2ac + 4bc)

> > >

> > > would be the length of 3^a 5^b 7^c. Everything in a radius of 2

of

> > > anything will be consonant.

>

> > What happens to 9:5 and 9:7?

>

> ||9/5|| = ||3^2 5^(-1)|| = sqrt(4+4+0-4+0+0) = 2

>

> ||9/7|| = ||3^2 7^(-1)|| = sqrt(4+0+4+0-4+0) = 2

>

> Hence both 9/5 and 9/7 are consonant with 1.

I'm very impressed, Gene! And there's a Euclidean model for this?

Will it break down in, say, the 15-limit?

>>> ||3^a 5^b 7^c|| = sqrt(a^2 + 4b^2 + 4c^2 + 2ab + 2ac + 4bc)

>>>

>>>would be the length of 3^a 5^b 7^c. Everything in a radius of 2

>>>of anything will be consonant.

>>

>>What happens to 9:5 and 9:7?

>

> ||9/5|| = ||3^2 5^(-1)|| = sqrt(4+4+0-4+0+0) = 2

>

> ||9/7|| = ||3^2 7^(-1)|| = sqrt(4+0+4+0-4+0) = 2

>

>Hence both 9/5 and 9/7 are consonant with 1.

But they should have the same distance from 1 as 3/2,

or any other tonality-diamond pitch, no?

-Carl

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> >>> ||3^a 5^b 7^c|| = sqrt(a^2 + 4b^2 + 4c^2 + 2ab + 2ac + 4bc)

> >>>

> >>>would be the length of 3^a 5^b 7^c. Everything in a radius of 2

> >>>of anything will be consonant.

> >>

> >>What happens to 9:5 and 9:7?

> >

> > ||9/5|| = ||3^2 5^(-1)|| = sqrt(4+4+0-4+0+0) = 2

> >

> > ||9/7|| = ||3^2 7^(-1)|| = sqrt(4+0+4+0-4+0) = 2

> >

> >Hence both 9/5 and 9/7 are consonant with 1.

>

> But they should have the same distance from 1 as 3/2,

> or any other tonality-diamond pitch, no?

Why? Certainly if we were picky about it, we'd want a precise length

to be associated with each interval, and whether these are equal or

un-equal, we'd almost certainly be in non-Euclidean space pretty

fast. But Gene has simply found a handy mathematical formula for

asking the binary question "is a certain interval o-limit consonant" -

- that was the context in which he brought this up.

>

> -Carl

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> I'm concerned that this won't always work -- Gene, perhaps you could

> post the implied lengths for

As I said, you need to decide for anything above the 7-limit what to do about prime powers.

Here are 7-limit lengths:

> 1:3

> 1:5

> 3:5

> 1:7

> 3:7

> 5:7

All one.

Now let us suppose ||3||=1 and ||5||=||7||=||9||=||11||=2, then

> 1:9

> 5:9

> 7:9

> 1:11

All 2.

> 3:11

sqrt(3)

> 5:11

> 7:11

> 9:11

All 2.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> I'm very impressed, Gene! And there's a Euclidean model for this?

This is based on a positive-definite quadratic form, so it *is* a Euclidean model.

> Will it break down in, say, the 15-limit?

Let o be an odd limit, and suppose q_i are the largest odd prime powers for each prime <= o. We may write any "2-unit" rational number (meaning one with odd numerator and denominator) uniquely in the form

r = q_1^e_1 ... q_k^e_k.

Then if we define a quadratic form on the exponents by

A(r) = \sum_{i <= j} e_i e_j

we have

A(q_i) = 1,

A(q_i/q_j) = 1

Since the number of coefficients of A is (k+1) choose 2, the same as the number of conditions above, this defines A uniquely.

We then can define the canonical o-limit length of any 2-unit rational number, and therefore of any octave equivalence class, by

||r|| = sqrt(A(r))

Since A is positive definite (it is in fact a well-known such form in mathematics) it defines a Euclidean distance.

If we like, we may adjust matters by multiplying through by the lcm of the exponents of the largest prime powers, so as to be able to work with integers.

This sort of thing is what I meant when I said I came upon hexanies and the like geometrically. This metric is useful partly because two octave equivalence classes separated by a distance of one or less are o-consonant, and by a distance of greater than one are o-dissonant.
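(Gene's form is easy to verify numerically; here e is the exponent vector of r against the q_i basis -- a Python sketch:)

```python
from itertools import combinations_with_replacement

def A(e):
    """Gene's quadratic form: sum over i <= j of e_i * e_j."""
    return sum(e[i] * e[j]
               for i, j in combinations_with_replacement(range(len(e)), 2))

print(A([1, 0, 0]))   # 1 : A(q_i) = 1
print(A([1, -1, 0]))  # 1 : A(q_i/q_j) = 1
# In the 7-limit (q = 3, 5, 7) this is a^2 + b^2 + c^2 + ab + ac + bc;
# e.g. 9/7 = 3^2 * 7^-1 gives A = 3, so ||9/7|| = sqrt(3) > 1: 7-limit dissonant.
print(A([2, 0, -1]))  # 3
```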

>>>>What happens to 9:5 and 9:7?

>>>

>>> ||9/5|| = ||3^2 5^(-1)|| = sqrt(4+4+0-4+0+0) = 2

>>>

>>> ||9/7|| = ||3^2 7^(-1)|| = sqrt(4+0+4+0-4+0) = 2

>>>

>>>Hence both 9/5 and 9/7 are consonant with 1.

>>

>>But they should have the same distance from 1 as 3/2,

>>or any other tonality-diamond pitch, no?

>

>Why? Certainly if we were picky about it, we'd want a precise

>length to be associated with each interval, and whether these are

>equal or un-equal, we'd almost certainly be in non-Euclidean space

>pretty fast. But Gene has simply found a handy mathematical

>formula for asking the binary question "is a certain interval

>o-limit consonant" - that was the context in which he brought this

>up.

Okay, I guess I was just thinking too taxicab-ish. But I'm a

long way from understanding this stuff. What can it do that

a taxicab metric can't?

-Carl

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> >>>Hence both 9/5 and 9/7 are consonant with 1.

> >>

> >>But they should have the same distance from 1 as 3/2,

> >>or any other tonality-diamond pitch, no?

In fact, ||3/2|| = 1, not 2.

> >Why? Certainly if we were picky about it, we'd want a precise

> >length to be associated with each interval, and whether these are

> >equal or un-equal, we'd almost certainly be in non-Euclidean space

> >pretty fast.

No, everything gets a length and it is all Euclidean.

> Okay, I guess I was just thinking too taxicab-ish. But I'm a

> long way from understanding this stuff. What can it do that

> a taxicab metric can't?

It's not trying to be a consonance measure; what it does is the same kind of stuff as the lattice diagrams with triangles and tetrahedra and what not. These are called the "root lattices" A2 and A3, and have corresponding bilinear forms or "Gram matrices"; it can be generalized to An but then we need to decide what to do about the prime power problem. My o-limit quadratic forms correspond to lattices which are unions of translates of An.

By taking things a certain fixed distance from points, we get various kinds of things such as diamonds, hexanies and so forth; I was starting to explain this once but got the impression people understood it in their own way. One nice thing about the lattice point of view is that these things can be enumerated via theta functions, which may be worth explaining. This is all also connected to the transformation groups Robert and I were playing with.

>>But they should have the same distance from 1 as 3/2,

>>or any other tonality-diamond pitch, no?

>

> In fact, ||3/2|| = 1, not 2.

This was my objection.

>>Why? Certainly if we were picky about it, we'd want a precise

>>length to be associated with each interval, and whether these

>>are equal or un-equal, we'd almost certainly be in non-Euclidean

>>space pretty fast.

>

> No, everything gets a length and it is all Euclidean.

I wouldn't think we'd need to break any of the Euclidean

axioms, either way. If you define anything > 3-D non-Euclidean,

then...

>>Okay, I guess I was just thinking too taxicab-ish. But I'm a

>>long way from understanding this stuff. What can it do that

>>a taxicab metric can't?

>

>It's not trying to be a consonance measure; what it does is the

>same kind of stuff as the lattice diagrams with triangles and

>tetrahedra and what not. These are called the "root lattices" A2

>and A3, and have corresponding bilinear forms or "Gram matrices";

>it can be generalized to An but then we need to decide what to do

>about the prime power problem. My o-limit quadratic forms

>correspond to lattices which are unions of translates of An.

What's the problem?

>By taking things a certain fixed distance from points, we get

>various kinds of things such as diamonds, hexanies and so forth;

>I was starting to explain this once but got the impression people

>understood it in their own way. One nice thing about the lattice

>point of view is that these things can be enumerated via theta

>functions, which may be worth explaining. This is all also

>connected to the transformation groups Robert and I were playing

>with.

For radius r, what determines what you get? Whether you start on

a vertex or not, etc.? I believe your stuff works, but I'm not

clear on how to use it to turn out what I want.

-Carl

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> I wouldn't think we'd need to break any of the the Euclidean

> axioms, either way. If you define anything > 3-D non-Euclidean,

> then...

Please don't do that! "Euclidean" means, more or less, affine R^n with inner product--it has R^n as a vector space as a model.

> What's the problem?

Is 9 the same length as 7, or twice as long?

> For radius r, what determines what you get? Whether you start on

> a vertex or not, etc.?

Right--it's what the center is--a vertex, a face, an edge, a "deep hole" center such as the center of the octahedron, or what have you.

In-Reply-To: <a21q6u+hlkj@eGroups.com>

Me:

> >I did work out the general, multidimensional case, but I don't

> >have the result to hand. Probably, something like

> >

> > x' = x + y*cos(theta) + z*cos(theta)

> > y' = y*sin(theta) + z*cos(theta)

> > z' = z*sin(phi)

> >

> >will work. I don't know how to get from theta, the angle between

> >axes, to phi, the angle between the z axis and the x-y plane.

> >Certainly not in general.

Carl:

> Can't it always just be 60 degrees? There is that point where

> this is no longer the closest packing -- that's bad I presume...

> Paul once posted something from Mathworld...

That's what Gene seems to have done, so go with that. I was thinking of

the general case with a free parameter that can be set to give either

triangular or rectangular lattice, or anything in between.

Me:

> >Oh, for the algorithm, trying all combinations of consonances

> >above the tonic should work. That'll be O(n**m) where m is the

> >number of notes in the chord, but shouldn't be a problem for the

> >kind of numbers we're talking about.

Carl:

> What's the double-star? Combinations of notes or intervals?

> Notes gives you CPSs, and intervals don't get all the chords

> because some chords contain more than one instance of an

> interval. Plus, order matters, at least for lists of 2nds,

> so we wind up with the procedure I described when complaining

> about my lack of a scheme compiler.

** is Fortran for exponentiation. Actually, that formula's wrong, but

probably isn't important anyway. The number of consonances is more

significant than the number of notes in the chords.

I'm guessing notes (that is, intervals relative to the 1/1) will give a

more manageable calculation. That also means it is the usual combinations

algorithm, which I posted before in Python, and should be easier in

Scheme.

Example: 5-limit

Consonances are 1/1, 5/4, 6/5, 4/3, 3/2, 8/5, 5/3. 1/1 is redundant, so

take all pairs of the others.

1/1 5/4 6/5 contains an interval of 25:24, so throw it out.

1/1 5/4 4/3 contains an interval of 16:15, so throw it out.

1/1 5/4 3/2 is consonant

1/1 5/4 8/5 contains, what, 32:25? So throw that out.

1/1 5/4 5/3 is consonant

1/1 6/5 4/3 contains an interval of 10:9, so throw it out.

and so on.

The consonance checking is also a combinations problem: take each pair of

notes. You can also exclude the 1/1, because everything's consonant

relative to that, so we're only choosing 1 from 1 in the 5-limit.

These calculations are laborious to do by hand, but shouldn't be a problem

for a computer.

Graham
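(Graham's 5-limit walk-through can be sketched with the standard combinations algorithm in Python; the helper name `odd_limit` is mine, not from the thread:)

```python
from fractions import Fraction
from itertools import combinations

def odd_limit(r):
    """Largest odd factor in the numerator or denominator of r."""
    n, d = r.numerator, r.denominator
    while n % 2 == 0:
        n //= 2
    while d % 2 == 0:
        d //= 2
    return max(n, d)

# The 5-limit consonances above 1/1 (1/1 itself is redundant):
notes = [Fraction(*p) for p in [(5, 4), (6, 5), (4, 3), (3, 2), (8, 5), (5, 3)]]

# Keep a pair only if the interval between its two notes is also 5-limit:
triads = [pair for pair in combinations(notes, 2)
          if odd_limit(pair[0] / pair[1]) <= 5]
print(len(triads))  # 6 -- e.g. 1/1 5/4 3/2 survives, 1/1 5/4 6/5 does not
```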

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> Dave, perhaps you can help with this Kees lattice business. It's

pure

> math, but Gene appears to have no interest in it. It seems to me

that

> we should be able, based on Kees's lattice with a 'taxicab' metric,

> be able to define an error function so that

...

Sorry. No time. But here's how I'd approach it. Set up a spreadsheet

with the ten examples in 10 rows. Columns would be n and d of ratio to

be tempered out, and for each (octave-equivalent) interval the error

in cents and the number of generators required, then the error and

gens functions of n and d that you propose. Then I'd try a weighted

rms error and a weighted rms gens where the weights can be changed for

each interval (indep. wts for err and gens). I'd calculate the

least-squares difference between the weighted rms results and your

proposed functions of n and d. Then I'd fool around with the error

weights to minimise the error in the errors and fool around with the

gens weights to minimise the error in the gens. I hope this makes

sense.

>Carl:

>> Can't it always just be 60 degrees? There is that point where

>> this is no longer the closest packing -- that's bad I presume...

>> Paul once posted something from Mathworld...

>

>That's what Gene seems to have done, so go with that. I was

>thinking of the general case with a free parameter that can be

>set to give either triangular or rectangular lattice, or anything

>in between.

Okay. Thanks for the follow-up.

>I'm guessing notes (that is, intervals relative to the 1/1) will

>give a more manageable calculation.

...than intervals. You're probably right. It seems to get around

the problem of having more than one instance of the same thing,

turning n^2 into n!. But for my ultimate purpose, I'll need to

consider all inversions if I use notes, which I think turns it

back into n^2.

>That also means it is the usual combinations algorithm, which I

>posted before in Python, and should be easier in Scheme.

I didn't see your python post, but here's the scheme:

(define combo
  (lambda (k ls)
    (if (zero? k)
        (list (list))
        (if (null? ls)
            (list)
            (append (combo k (cdr ls))
                    (map (lambda (y) (cons (car ls) y))
                         (combo (sub1 k) (cdr ls))))))))

You can't count lines of code in scheme like you can in

a block-structured imperative language. And comparing

expressions -- () pairs -- to # of lines isn't fair. Frankly,

I don't know how they measure algorithms in scheme, but I'll

bet eggs Benedict you can't get more compact than this,

considering this exposes some of the actual operations on

the UTM tape (car and cdr).
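[For readers who don't read Scheme, here is a minimal Python transcription of the combo function above. The function name is carried over from the Scheme; everything else is a sketch, not part of the original posts.]

```python
# Direct transcription of the Scheme combo: choose k elements from ls
# by either skipping the head, or keeping it and choosing k-1 from
# the tail.
def combo(k, ls):
    if k == 0:
        return [[]]      # exactly one way to choose nothing
    if not ls:
        return []        # no way to choose k > 0 items from nothing
    return (combo(k, ls[1:]) +
            [[ls[0]] + rest for rest in combo(k - 1, ls[1:])])
```

combo(2, [1, 2, 3]) yields the three 2-element combinations, in the same order the Scheme version produces them.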

>The consonance checking is also a combinations problem: take each

>pair of notes. You can also exclude the 1/1, because everything's

>consonant relative to that, so we're only choosing 1 from 1 in

>the 5-limit.

>

>These calculations are laborious to do by hand, but shouldn't be

>a problem for a computer.

Without a compiler, I think these brute-force methods

are all off limits, so to speak. Scheme compilers are all mucho

bucks, industrial (though Chez hints at plans to release a

consumer version within the year!), except ones that "compile"

to C, which I've never been able to get to work. Anyhow, it's

high time I learned another language.

In the mean time, I'm convinced there's a better way than

brute force. Gene may have found one...

-Carl

>>What's the problem?

>

>Is 9 the same length as 7, or twice as long?

I suppose I should plug these choices in, and see how the results

differ... having a hard time seeing how there could be uncertainty.

Either a given r exists that encloses all consonances and excludes

everything else, or not.

>>For radius r, what determines what you get? Whether you start on

>>a vertex or not, etc.?

>

>Right--it's what the center is--a vertex, a face, an edge, a

>"deep hole" center such as the center of the octahedron, or what

>have you.

Okay, so now all we need are rules that tell us what we get

for a given r and origin pair.

I don't suppose there's any hope of getting subsets directly

from this method?

-Carl

What is supposed to be the point of the space-removal

thing? I can't fathom it. And "view unformatted message"

does nothing (my eyes, the goggles do nothing!).

Needless to say, my beautiful indenting was destroyed.

> I didn't see your python post, but here's the scheme:

>

> (define combo
>   (lambda (k ls)
>     (if (zero? k)
>         (list (list))
>         (if (null? ls)
>             (list)
>             (append (combo k (cdr ls))
>                     (map (lambda (y) (cons (car ls) y))
>                          (combo (sub1 k) (cdr ls))))))))

-C.

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> >>What's the problem?

> >

> >Is 9 the same length as 7, or twice as long?

>

> I suppose I should plug these choices in, and see how the results

> differ... having a hard time seeing how there could be uncertainty.

> Either a given r exists that encloses all consonances and excludes

> everything else, or not.

If you want the 9-limit, 9 is the same length as 7. If you want the

7-limit, 9 is twice as long.

> Okay, so now all we need are rules that tell us what we get

> for a given r and origin pair.

>

> I don't suppose there's any hope of getting subsets directly

> from this method?

I'll put it on my list of things to think about. I'm afraid I haven't really given the problem any thought, partly because at the moment I'm more interested in tempered versions of it.

>>>Is 9 the same length as 7, or twice as long?

>>

>>I suppose I should plug these choices in, and see how the results

>>differ... having a hard time seeing how there could be uncertainty.

>>Either a given r exists that encloses all consonances and excludes

>>everything else, or not.

>

>If you want the 9-limit, 9 is the same length as 7. If you want the

>7-limit, 9 is twice as long.

So, problem solved. n is a fair variable.

>>Okay, so now all we need are rules that tell us what we get

>>for a given r and origin pair.

>>

>>I don't suppose there's any hope of getting subsets directly

>>from this method?

>

>I'll put it on my list of things to think about. I'm afraid I

>haven't really given the problem any thought, partly because

>at the moment I'm more interested in tempered versions of it.

Cool! One thing it will allow us to do is subtract out the

natural JI chords and leave only magic chords in your lists

on the main list.

-Carl

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> What is supposed to be the point of the space-removal

> thing? I can't fathom it. And "view unformatted message"

> does nothing (my eyes, the goggles do nothing!).

I think Yahoo screwed up and forgot to put </PRE> around

everything. The workaround is, when formatting matters, tell your

readers, if they are viewing it on the web they need to click Message

Index then Expand Messages.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > Dave, perhaps you can help with this Kees lattice business. It's

> > pure math, but Gene appears to have no interest in it. It seems

> > to me that we should be able, based on Kees's lattice with a

> > 'taxicab' metric, to define an error function so that

> ...

>

> Sorry. No time. But here's how I'd approach it. Set up a

> spreadsheet with the ten examples in 10 rows. Columns would be n

> and d of ratio to be tempered out, and for each (octave-equivalent)

> interval the error in cents and the number of generators required,

> then the error and gens functions of n and d that you propose.

> Then I'd try a weighted rms error and a weighted rms gens where the

> weights can be changed for each interval (indep. wts for err and

> gens). I'd calculate the least-squares difference between the

> weighted rms results and your proposed functions of n and d. Then

> I'd fool around with the error weights to minimise the error in the

> errors and fool around with the gens weights to minimise the error

> in the gens. I hope this makes sense.

Thanks, but I don't think RMS will work. That implies a Euclidean

metric, but a "taxicab" metric seems to be what we want here.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > I'm concerned that this won't always work -- Gene, perhaps you

> > could post the implied lengths for

>

> As I said, you need to decide for anything above the 7-limit what

> to do about prime powers.

>

> Here are 7-limit lengths:

>

> > 1:3

> > 1:5

> > 3:5

> > 1:7

> > 3:7

> > 5:7

>

> All one.

>

> Now let us suppose ||3||=1 and ||5||=||7||=||9||=||11||=2, then

>

> > 1:9

> > 5:9

> > 7:9

> > 1:11

>

> All 2.

>

> > 3:11

>

> sqrt(3)

>

> > 5:11

> > 7:11

> > 9:11

>

> All 2.

How about 1:15? What is the shortest "dissonant" interval?

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> Cool! One thing it will allow us to do is subtract out the

> natural JI chords and leave only magic chords in your lists

> on the main list.

How's this as a method: using the standard o-limit metric, take everything in a radius of 1 of the unison, which should give you the o-limit diamond. Now take all subsets of size k, find the centroid by averaging the coordinates (which should be in the prime-power basis,

so that in the 9-limit 5/3 would be 9^(-1/2) * 5^1 * 7^0 =

[-1/2, 1, 0], for instance) and test if everything is within a radius of 1/2 of the centroid, in which case put it on your list. For larger values of o, this would be faster than simply testing for pairwise consonance.
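[The coordinate convention Gene uses here can be sketched in Python. The helper name is mine, not his; it just reduces a ratio to its 3, 5, 7 exponents, ignoring factors of 2 for octave equivalence, and halves the 3-exponent so the basis is (9, 5, 7) as in his 9-limit example.]

```python
# Sketch (hypothetical helper): prime-power coordinates in the basis
# (9, 5, 7).  Factors of 2 are dropped (octave equivalence), and the
# 3-exponent is halved, so 5/3 = 9^(-1/2) * 5^1 * 7^0 -> [-1/2, 1, 0].
from fractions import Fraction

def nine_limit_coords(ratio):
    r = Fraction(ratio)
    exps = []
    for p in (3, 5, 7):
        e, n, d = 0, r.numerator, r.denominator
        while n % p == 0:
            n //= p
            e += 1
        while d % p == 0:
            d //= p
            e -= 1
        exps.append(e)
    a, b, c = exps
    return [a / 2, b, c]
```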

>>Cool! One thing it will allow us to do is subtract out the

>>natural JI chords and leave only magic chords in your lists

>>on the main list.

>

>How's this as a method: using the standard o-limit metric, take

>everything in a radius of 1 of the unison, which should give you

>the o-limit diamond. Now take all subsets of size k, find the

>centroid by averaging the coordinates (which should be in the

>prime-power basis, so that in the 9-limit 5/3 would be

>9^(-1/2) * 5^1 * 7^0 = [-1/2, 1, 0], for instance) and test if

>everything is within a radius of 1/2 of the centroid, in which

>case put it on your list. For larger values of o, this would be

>faster than simply testing for pairwise consonance.

Okay, thanks. To answer your question, I'm going to have to

do some homework. Anyone else is welcome to beat me to it!

-Carl

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> How about 1:15?

The 7-limit quadratic form is

A7(3^a 5^b 7^c) = a^2 + b^2 + c^2 + a*b + a*c + b*c

If I substitute a/2 for a in this, I get what I just called (with more optimism than accuracy) the "standard" o-limit form,

A9(3^a 5^b 7^c) = 1/4(a^2+4*b^2+4*c^2+2*a*b+2*a*c+4*b*c)

Clearing the denominators so as to be able to work only with integers gives us the equivalent

B9(3^a 5^b 7^c) = a^2+4*b^2+4*c^2+2*a*b+2*a*c+4*b*c

If I plug 15 = 3^1 5^1 7^0 into this, I get 7, so the length is

sqrt(7).

What is the shortest "dissonant" interval?

The theta series of the above quadratic form, defined as sum from

-infinity to infinity of i, j, and k of

q^B9(i,j,k)

is the q-series

Th9(q)=1+2*q+4*q^3+12*q^4+4*q^5+8*q^7+6*q^8+6*q^9+4*q^11+24*q^12+

12*q^13+8*q^15+12*q^16+8*q^17+12*q^19+24*q^20+8*q^21+8*q^23+

8*q^24+14*q^25+16*q^27+48*q^28+4*q^29+16*q^31+6*q^32+16*q^33+

8*q^35+36*q^36+20*q^37+8*q^39+24*q^40+8*q^41+20*q^43+24*q^44+

20*q^45+16*q^47+24*q^48+18*q^49+ ...

This defines a modular form, but I presume the deeper properties we needn't worry about. I see from the term 4 q^5 that there are four terms of length sqrt(5), the shortest 9-limit dissonances. These turn out to be 7/15, 5/21, 21/5, and 15/7, which we can also write as

14/15, 20/21, 21/20 and 15/14.
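[Gene's B9 form is easy to check mechanically. A small sketch, with helper names that are mine, confirming B9(15) = 7 and that his shortest 9-limit dissonances have length sqrt(5):]

```python
# B9(3^a 5^b 7^c) = a^2 + 4b^2 + 4c^2 + 2ab + 2ac + 4bc, evaluated
# on the 3, 5, 7 exponents of a ratio (factors of 2 ignored).
from fractions import Fraction

def B9(ratio):
    r = Fraction(ratio)
    exps = []
    for p in (3, 5, 7):
        e, n, d = 0, r.numerator, r.denominator
        while n % p == 0:
            n //= p
            e += 1
        while d % p == 0:
            d //= p
            e -= 1
        exps.append(e)
    a, b, c = exps
    return a*a + 4*b*b + 4*c*c + 2*a*b + 2*a*c + 4*b*c
```

For instance, B9(15) = 1 + 4 + 2 = 7, while 7/15 and its octave equivalents come out at 5.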

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > How about 1:15?

>

> The 7-limit quadratic form is

I thought we were talking 11-limit.

> I see from the term 4 q^5 that there are four terms of length

> sqrt(5), the shortest 9-limit dissonances. These turn out to be

> 7/15, 5/21, 21/5, and 15/7, which we can also write as

> 14/15, 20/21, 21/20 and 15/14.

This holds true for the 11-limit, I presume?

BTW, the taxicab metric and Kees' lattice, which I keep banging my

head against but am not enough of a mathematician for, would seem to

be a more natural approach -- this should interest Carl especially.

> From: Carl Lumma <carl@lumma.org>

> To: monz <joemonz@yahoo.com>

> Sent: Wednesday, January 16, 2002 12:45 PM

> Subject: Re: [tuning-math] yahoo spaced out (was re: algorithm sought)

>

>

> Yahoo is undoing the 30-year-old tradition of ASCII art in

> mailing lists, for no apparent reason whatever.

Yeah, when I read this post this afternoon (just before

leaving for work), I wrote to the guy at Yahoo who handles

mail regarding copyright violations -- only because I couldn't

find anyone appropriate to contact. I explained this business

to him, and gave him a sample link to a tuning-math post

with lots of ASCII graphics. Waiting for a response now.

-monz

_________________________________________________________

Do You Yahoo!?

Get your free @yahoo.com address at http://mail.yahoo.com

In-Reply-To: <a24ku6+60k1@eGroups.com>

carl wrote:

> ...than intervals. You're probably right. It seems to get around

> the problem of having more than one instance of the same thing,

> turning n^2 into n!. But for my ultimate purpose, I'll need to

> consider all inversions if I use notes, which I think turns it

> back into n^2.

n^2 is a piddling calculation for a modern computer. In this case, though,

it is quite a bit higher. But n is also quite low, so no problem.

> I didn't see your python post, but here's the scheme:

>

> (define combo
>   (lambda (k ls)
>     (if (zero? k)
>         (list (list))
>         (if (null? ls)
>             (list)
>             (append (combo k (cdr ls))
>                     (map (lambda (y) (cons (car ls) y))
>                          (combo (sub1 k) (cdr ls))))))))

>

> You can't count lines of code in scheme like you can in

> a block-structured imperative language. And comparing

> expressions -- () pairs -- to # of lines isn't fair. Frankly,

> I don't know how they measure algorithms in scheme, but I'll

> bet eggs Benedict you can't get more compact than this,

> considering this exposes some of the actual operations on

> the UTM tape (car and cdr).

The best way of measuring algorithms in any language is to use a profiler.

I'm guessing the biggest constraints on this in Python are going to be

the recursion depth and the size of the returned list. It works fine for

4 combinations of 29, which is the same as enumerating all 5-note 11-limit

chords. 1001 of them, apparently. 5 from 29 takes 12 seconds. 6 from 29

takes about 52 seconds.

Scheme is optimised for recursion, so that won't be a problem. You can

get around storing the resulting list in both languages if you really want

to. I'm guessing that'll be easier for Scheme.

> Without a compiler, I think any of these brute-force methods

> are all off limits, so to speak. Scheme compilers are all mucho

> bucks, industrial (though Chez hints at plans to release a

> consumer version within the year!), except ones that "compile"

> to C, which I've never been able to get to work. Anyhow, it's

> high time I learned another language.

I thought there were good, Free compilers for either Scheme or Common

Lisp. Whatever, a compiler should only buy you about an order of

magnitude improvement, which is the difference between finishing while you

wait and finishing while you go and make a cup of tea. The real problem,

not finishing at all within the lifetime of the universe, won't be solved

by either.

> In the mean time, I'm convinced there's a better way than

> brute force. Gene may have found one...

The method on <http://x31eq.com/ass.htm> combined with a normal

otonal and utonal enumeration will work for JI. All you need brute force

for is to verify that, which is certainly possible in the 19-limit. Asses

are only 4 note chords, which is 3-from-whatever. If there are

exceptions, you'll find them with a 4-from-whatever search.

I don't know off-hand how many intervals there are in the 15-limit. But

it can't be more than 64. 4 from 64 is 229 seconds for my Python

algorithm. If you're actually doing something with them, it should still

be within the cup-of-tea timescale. There's a lot of disk activity, so

use a generator or whatever and it should speed it up. Rejecting all

chords that are already outside the limit before you add notes to them

should speed it up a great deal more. Give it a try.
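[The pruning Graham suggests can be sketched like this. The names are mine, and the consonance predicate is a stand-in -- plug in a real odd-limit test. Partial chords containing a non-consonant pair are rejected before any note is added to them.]

```python
# Grow chords one note at a time; discard any partial chord that
# already contains a non-consonant pair, so bad branches are never
# extended (Graham's suggested speed-up).
def grow_chords(notes, size, consonant):
    def extend(chord, rest):
        if len(chord) == size:
            yield chord
            return
        for i, note in enumerate(rest):
            # prune: extend only if the new note is consonant with
            # every note already in the chord
            if all(consonant(note, old) for old in chord):
                yield from extend(chord + [note], rest[i + 1:])
    yield from extend([], list(notes))
```

With a toy predicate like lambda x, y: abs(x - y) <= 2 over notes 0..4, only the three runs of adjacent notes survive.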

The usual rule of thumb is that Scheme is faster than Python. Perhaps

that assumes you have a compiler.

Graham

In-Reply-To: <a24mrc+bhm9@eGroups.com>

carl wrote:

> What is supposed to be the point of the space-removal

> thing? I can't fathom it. And "view unformatted message"

> does nothing (my eyes, the goggles do nothing!).

It's partly to reduce the bandwidth, not having to send so many spaces.

Also, to allow them to put adverts in the messages.

> Needless to say, my beautiful indenting was destroyed.

Be glad you're not using a language with syntactic indentation.

Graham

>>...than intervals. You're probably right. It seems to get around

>>the problem of having more than one instance of the same thing,

>>turning n^2 into n!. But for my ultimate purpose, I'll need to

>>consider all inversions if I use notes, which I think turns it

>>back into n^2.

>

>n^2 is a piddling calculation for a modern computer.

Whoops. That was supposed to be n^k for intervals, n!/(n-k)!

for pitches, and k(n!)/(n-k)! for all inversions of pitches.
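[Carl's corrected counts can be spot-checked numerically. A sketch; math.perm needs Python 3.8+.]

```python
# n^k interval k-tuples, n!/(n-k)! ordered pitch selections, and
# k * n!/(n-k)! once every inversion of each selection is counted.
import math

def search_sizes(n, k):
    ordered = math.perm(n, k)      # n!/(n-k)!
    return n ** k, ordered, k * ordered
```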

>The best way of measuring algorithms in any language is to use a

>profiler.

Measuring algorithms, or their compactness in different languages?

>I'm guessing the biggest constraints on this in Python are going

>to be the recursion depth and the size of the returned list. It

>works fine for 4 combinations of 29, which is the same as

>enumerating all 5-note 11-limit chords.

Aren't there any ASSes that contain more than one instance of

an 11-limit interval?

>>Without a compiler, I think these brute-force methods

>>are all off limits, so to speak. Scheme compilers are all mucho

>>bucks, industrial (though Chez hints at plans to release a

>>consumer version within the year!), except ones that "compile"

>>to C, which I've never been able to get to work. Anyhow, it's

>>high time I learned another language.

>

>I thought there were good, Free compilers for either Scheme or

>Common Lisp.

Not that I could find. The only ones I know of are Gambit scheme

(C translator), chicken (ditto), and something that comes with

Dr. Scheme from Rice university, which I can't stand, though I

probably should have looked at more closely. I did verify that

the Dr. Scheme interpreter blows up at the same point the Chez

one does.

>Whatever, a compiler should only buy you about an order of

>magnitude improvement, which is the difference between finishing

>while you wait and finishing while you go and make a cup of tea.

Mmm, tea.

I'm not sure. The problem isn't cps, it's memory. The garbage

collection thingy runs out of RAM and starts hitting the swap,

and you're better off waiting for the sun to collapse than for

Windows to recover. I assume compiled code doesn't have this

problem, for some reason.

>The real problem, not finishing at all within the lifetime of

>the universe, won't be solved by either.

There's nothing NP here that I can see.

>> In the mean time, I'm convinced there's a better way than

>> brute force. Gene may have found one...

>

>The method on <http://x31eq.com/ass.htm> combined

>with a normal otonal and utonal enumeration will work for JI.

As in, I have to tack on the ASSes. I can just use your table.

I admitted this is quite satisfactory, that I was just being

obstinate, from the start. It isn't just question of results

for me -- it's understanding. The nature of the problem is

fairly simple, and the ASSes and o- and u-tonalities should all

spin out from the same process. If you could get your ASS

method to produce Partchian tonalities...

Actually, the picking notes instead of intervals thing is worth

trying. I'll do that. Thanks. I'm sure I've given away to

you, by stating my eventual need for inversions, that I'm going

after n-adic uniqueness.

>I don't know off-hand how many intervals there are in the 15-

>limit. But it can't be more than 64. 4 from 64 is 229 seconds

>for my Python algorithm. If you're actually doing something

>with them, it should still be within the cup-of-tea timescale.

>There's a lot of disk activity, so use a generator or whatever

>and it should speed it up.

Generator?

>Rejecting all chords that are already outside the limit before

>you add notes to them should speed it up a great deal more.

Yeah, it pays to check every round I think.

>Give it a try.

Will do.

>The usual rule of thumb is that Scheme is faster than Python.

>Perhaps that assumes you have a compiler.

It must. I suspect also that companies like Chez intentionally

leave out smarts from their free interpreters in order to protect

their industrial licenses. The interp. you get for free (the

"petite") isn't the same one that comes with a $3K license, I'll

wager.

>Be glad you're not using a language with syntactic indentation.

True, ()'s are great. Any scheme expression can be unambiguously

parsed no matter how you lay it out (even on one line).

-Carl

>>The best way of measuring algorithms in any language is to use a

>>profiler.

>

>Measuring algorithms, or their compactness in different languages?

As in, I have no idea what a profiler is, but it sounds like

something which gets rid of the difference between languages

rather than exposing it?

>>I'm guessing the biggest constraints on this in Python are going

>>to be the recursion depth and the size of the returned list. It

>>works fine for 4 combinations of 29, which is the same as

>>enumerating all 5-note 11-limit chords.

>

>Aren't there any ASSes that contain more than one instance of

>an 11-limit interval?

Of course there are -- they wouldn't be ASSes otherwise, they'd

just be subsets of Partchian tonalities. Hmm, that's a shortcut

right there. . . .

-C.

In-Reply-To: <a26lrb+nnau@eGroups.com>

Me:

> >The best way of measuring algorithms in any language is to use a

> >profiler.

Carl:

> Measuring algorithms, or their compactness in different languages?

Measuring efficiency in one particular language implementation. Profilers

are tools that inspect running code, and record how much time is spent in

each section. That means you can go straight to the least efficient

parts, and not waste effort optimising things that aren't a problem in the

first place.

Carl:

> Aren't there any ASSes that contain more than one instance of

> an 11-limit interval?

If you're measuring from the root, that'd mean two notes at the same

pitch, which is silly.

Me:

> >I thought there were good, Free compilers for either Scheme or

> >Common Lisp.

Carl:

> Not that I could find. The only ones I know of are Gambit scheme

> (C translator), chicken (ditto), and something that comes with

> Dr. Scheme from Rice university, which I can't stand, though I

> probably should have looked at more closely. I did verify that

> the Dr. Scheme interpreter blows up at the same point the Chez

> one does.

There's a GNU interpreter for Common Lisp, at least. Perhaps it's one of

the C translators.

Me:

> >Whatever, a compiler should only buy you about an order of

> >magnitude improvement, which is the difference between finishing

> >while you wait and finishing while you go and make a cup of tea.

Carl:

> Mmm, tea.

>

> I'm not sure. The problem isn't cps, it's memory. The garbage

> collection thingy runs out of RAM and starts hitting the swap,

> and you're better off waiting for the sun to collapse than for

> Windows to recover. I assume compiled code doesn't have this

> problem, for some reason.

You may need a good garbage collector, but that's independent of being

compiled or interpreted. I expect a GNU interpreter will be fine. RMS is

a Lisp programmer, after all.

> >The real problem, not finishing at all within the lifetime of

> >the universe, won't be solved by either.

>

> There's nothing NP here that I can see.

Polynomial time is all you need if it gets complex enough. 10^50

operations does it. That's roughly the total number of chess games. And

it's only 48!.

Me:

> >The method on <http://x31eq.com/ass.htm> combined

> >with a normal otonal and utonal enumeration will work for JI.

Carl:

> As in, I have to tack on the ASSes. I can just use your table.

> I admitted this is quite satisfactory, that I was just being

> obstinate, from the start. It isn't just question of results

> for me -- it's understanding. The nature of the problem is

> fairly simple, and the ASSes and o- and u-tonalities should all

> spin out from the same process. If you could get your ASS

> method to produce Partchian tonalities...

It might be useful to generalise the method to work with inharmonic

timbres, provided the consonances are defined on a short list of partials.

You can get all three chord types as subsets of Euler genera. Perhaps

that would be a helpful approach.

Me:

> >I don't know off-hand how many intervals there are in the 15-

> >limit. But it can't be more than 64. 4 from 64 is 229 seconds

> >for my Python algorithm. If you're actually doing something

> >with them, it should still be within the cup-of-tea timescale.

> >There's a lot of disk activity, so use a generator or whatever

> >and it should speed it up.

Carl:

> Generator?

See <http://python.sourceforge.net/peps/pep-0255.html> for Simple

Generators. The code I have now uses a function to generate a huge list,

and then iterates over it. If I turned the function into a generator,

it'd only have to return one result at a time, without the code getting

much more complex. I haven't looked at this yet, partly because I'm

trying to keep compatibility with older versions of Python.
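[The generator rewrite Graham describes might look like this: the same combinations recursion, but yielding one result at a time so the full list never has to be held in memory. A sketch in PEP 255 style, not his actual code.]

```python
# Generator version of k-combinations: each result is produced on
# demand instead of being appended to one huge list.
def combos(k, ls):
    if k == 0:
        yield []
        return
    for i, head in enumerate(ls):
        for tail in combos(k - 1, ls[i + 1:]):
            yield [head] + tail
```

sum(1 for _ in combos(4, range(29))) counts the 4-from-29 chords without ever materialising the list.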

I think you can do similar, but more advanced things with the Lisp family.

The interpreter/compiler may even do them for you. You can also write a

function that takes a function as argument, and calls it for each item.

But in this case it'll probably be easier to forget about reusability, and

put the functionality in the middle of the combinations code, so that the

following can work:

> >Rejecting all chords that are already outside the limit before

> >you add notes to them should speed it up a great deal more.

>

> Yeah, it pays to check every round I think.

>

> >Give it a try.

>

> Will do.

Me:

> >The usual rule of thumb is that Scheme is faster than Python.

> >Perhaps that assumes you have a compiler.

Carl:

> It must. I suspect also that companies like Chez intentionally

> leave out smarts from their free interpreters in order to protect

> their industrial licenses. The interp. you get for free (the

> "petite") isn't the same one that comes with a $3K license, I'll

> wager.

I think a GNU implementation of some Lisp dialect has beaten Python in

benchmarks.

Me:

> >Be glad you're not using a language with syntactic indentation.

Carl:

> True, ()'s are great. Any scheme expression can be unambiguously

> parsed no matter how you lay it out (even on one line).

Oh, syntactic indentation's fine in itself. But it does mean your code

gets rendered invalid by Yahoo's white space stripping.

Graham

In-Reply-To: <a26m3j+6noq@eGroups.com>

I've been doing a bit of research. There are two GNU implementations of

Common Lisp:

<http://www.gnu.org/software/gcl/gcl.html>

<http://clisp.cons.org/summary.html>

I don't actually know the differences between Common Lisp and Scheme,

or why it would be heretical to move from one to the other.

Enumerating all 15-limit, 5-note chords is actually a 4 from 48

combinations problem, which takes 45 seconds on my machine with

unoptimized Python.

Graham

>>Measuring algorithms, or their compactness in different languages?

>

>Measuring efficiency in one particular language implementation.

>Profilers are tools that inspect running code, and record how much

>time is spent in each section. That means you can go straight to

>the least efficient parts, and not waste effort optimising things

>that aren't a problem in the first place.

Sounds handy, but I wasn't referring to how much work the computer

does in scheme. Obviously, at least in my implementation, it's

quite a bit more than in other languages! I was referring to how

compactly the language represents algorithms to humans. Like Knuth,

I think that computer programs can be useful not only for the

answers they give, but for the explanations they present to humans.

There's a continuum in abstraction from assembly to C to functional

languages like scheme, to actual math. For me, scheme is the ideal

point on this continuum -- math is too compact, C is egregious.

>>Aren't there any ASSes that contain more than one instance of

>>an 11-limit interval?

Measuring intervals from the root is equivalent to the choosing

notes method, and I thought you weren't talking about that. Just

got our wires crossed, is all.

>There's a GNU interpreter for Common Lisp, at least. Perhaps it's

>one of the C translators.

Oh no, I don't think so. Too bad scheme code doesn't compile as

common lisp.

>You may need a good garbage collector, but that's independent of

>being compiled or interpreted.

I suppose that's true.

>I expect a GNU interpreter will be fine.

As I say, I suspect these commercial outfits cripple their

free interpreters.

>RMS is a Lisp programmer, after all.

Who's he?

>> There's nothing NP here that I can see.

>

>Polynomial time is all you need if it gets complex enough. 10^50

>operations does it. That's roughly the total number of chess

>games. And it's only 48!.

That's true.

> Carl:

>> Generator?

>

>See <http://python.sourceforge.net/peps/pep-0255.html> for Simple

>Generators. The code I have now uses a function to generate a huge

>list, and then iterates over it. If I turned the function into a

>generator, it'd only have to return one result at a time, without

>the code getting much more complex. I haven't looked at this yet,

>partly because I'm trying to keep compatibility with older versions

>of Python.

Interesting. These guys are really up to some cool stuff. To me,

as someone who finally got decent at scheme, it all seems redundant.

But compared to C, it sounds pretty revolutionary.

>I think you can do similar, but more advanced things with the Lisp

>family. The interpreter/compiler may even do them for you. You

>can also write a function that takes a function as argument, and

>calls it for each item.

That's map. It's just simple recursion on the cdr.

(define map
  (lambda (proc ls)
    (if (null? ls)
        '()
        (cons (proc (car ls))
              (map proc (cdr ls))))))

Then you can pass it any function that returns a list, including

map! And if you pass it a function that doesn't return a list,

instead of getting a bug, you get an error. What a concept. But

I digress, this is OT.

>But in this case it'll probably be easier to forget about

>reusability, and put the functionality in the middle of the

>combinations code, so that the following can work:

>

>>>Rejecting all chords that are already outside the limit before

>>>you add notes to them should speed it up a great deal more.

There may be a way to keep the combinations code generalized

and still do this, by taking advantage of lazy evaluation. I

can't think of how, though, at the moment.

>I think a GNU implementation of some Lisp dialect has beaten

>Python in benchmarks.

I'm surprised. I'll look in to how much it would take to port

myself over to common Lisp.

-Carl

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> As in, I have to tack on the ASSes. I can just use your table.

> I admitted this is quite satisfactory, that I was just being

> obstinate, from the start. It isn't just question of results

> for me -- it's understanding. The nature of the problem is

> fairly simple, and the ASSes and o- and u-tonalities should all

> spin out from the same process. If you could get your ASS

> method to produce Partchian tonalities...

Mine should, but I'm with Graham--this problem doesn't sound that bad. Should I give it a try, and see if I'm wrong?

> Actually, the picking notes instead of intervals thing is worth

> trying. I'll do that. Thanks. I'm sure I've given away to

> you, by stating my eventual need for inversions, that I'm going

> after n-adic uniqueness.

If you start saying "scheme" and "n-adic" in the same sentence I'm going to get confused.

>>There's a GNU interpreter for Common Lisp, at least. Perhaps it's

>>one of the C translators.

>

>Oh no, I don't think so.

Well, that's wrong as of the first list item on the page. :)

Just goes to show how low-level C is, that they can get the

kind of performance they claim by translating to it.

-Carl

>>As in, I have to tack on the ASSes. I can just use your table.

>>I admitted this is quite satisfactory, that I was just being

>>obstinate, from the start. It isn't just question of results

>>for me -- it's understanding. The nature of the problem is

>>fairly simple, and the ASSes and o- and u-tonalities should all

>>spin out from the same process. If you could get your ASS

>>method to produce Partchian tonalities...

>

>Mine should, but I'm with Graham--this problem doesn't sound

>that bad. Should I give it a try, and see if I'm wrong?

It depends on how long you're willing to wait to find out. It

could take me a long time indeed to be able to tell you.

>>Actually, the picking notes instead of intervals thing is worth

>>trying. I'll do that. Thanks. I'm sure I've given away to

>>you, by stating my eventual need for inversions, that I'm going

>>after n-adic uniqueness.

>

> If you start saying "scheme" and "n-adic" in the same sentence

>I'm going to get confused.

?

-Carl

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> > If you start saying "scheme" and "n-adic" in the same sentence

> >I'm going to get confused.

>

> ?

>>>If you start saying "scheme" and "n-adic" in the same sentence

>>>I'm going to get confused.

>>

>> ?

>

>http://mathworld.wolfram.com/Scheme.html

>http://mathworld.wolfram.com/p-adicNumber.html

MCCT- For all pairs of terms [t_i, t_(i+1)], from finite

alphabet [gamma], it is possible to select a pair of

terms [m_i, m_(i+1)] from the Mathworld website which

are identical.

-C.

carl wrote:

> Sounds handy, but I wasn't referring to how much work the computer

> does in scheme. Obviously, at least in my implementation, it's

> quite a bit more than in other languages! I was referring to how

> compactly the language represents algorithms to humans. Like Knuth,

> I think that computer programs can be useful not only for the

> answers they give, but for the explanations they present to humans.

> There's a continuum in abstraction from assembly to C to functional

> languages like scheme, to actual math. For me, scheme is the ideal

> point on this continuum -- math is too compact, C is egregious.

Well, this is something I'd prefer to leave subjective. Everybody has

their own preferences with languages. Also, a lot depends on what library

functions you're allowed to use as "gifts". Some of the stuff with

periodicity blocks is simple iff you have a library for handling and

inverting matrices. There's a sense in which a brute force algorithm is

less satisfactory than another kind. Comparing execution time between

different programs using the same language implementation can be a good

way of indicating this.
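As a hedged illustration of that point (this is neither poster's code, and the function names are invented), Python's standard `timeit` module times snippets within a single interpreter, which keeps same-implementation comparisons fair:

```python
import timeit

# Two ways of building the same list; timing them in one interpreter
# isolates the algorithmic difference from interpreter overhead.
def squares_comprehension(n):
    return [i * i for i in range(n)]

def squares_append(n):
    out = []
    for i in range(n):
        out.append(i * i)
    return out

t_comp = timeit.timeit(lambda: squares_comprehension(1000), number=200)
t_app = timeit.timeit(lambda: squares_append(1000), number=200)
print(t_comp, t_app)
```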

Where is Haskell on your continuum?

> Measuring intervals from the root is equivalent to the choosing

> notes method, and I thought you weren't talking about that. Just

> got our wires crossed, is all.

Yes. Well, I've gone ahead and implemented this. Code is at

<http://microtonal.co.uk/temper.py> if I remember. The relevant stuff is

in the "Tonality diamonds" section, with the combinations routine in

"Utilities". CGI may follow.

Here are all the 9-limit tetrads:

>>> for chord in temper.limit9.allConsonantChords(4):

...     print chord

[(0, 0, 0), (1, 0, 0), (-1, 0, 0), (-1, 1, 0)]

[(0, 0, 0), (1, 0, 0), (-1, 0, 0), (1, -1, 0)]

[(0, 0, 0), (1, 0, 0), (-1, 0, 0), (-1, 0, 1)]

[(0, 0, 0), (1, 0, 0), (-1, 0, 0), (1, 0, -1)]

[(0, 0, 0), (1, 0, 0), (0, 1, 0), (-1, 1, 0)]

[(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]

[(0, 0, 0), (1, 0, 0), (0, 1, 0), (2, 0, 0)]

[(0, 0, 0), (1, 0, 0), (-1, 1, 0), (-1, 0, 1)]

[(0, 0, 0), (1, 0, 0), (1, -1, 0), (1, 0, -1)]

[(0, 0, 0), (1, 0, 0), (1, -1, 0), (2, -1, 0)]

[(0, 0, 0), (1, 0, 0), (0, 0, 1), (-1, 0, 1)]

[(0, 0, 0), (1, 0, 0), (0, 0, 1), (2, 0, 0)]

[(0, 0, 0), (1, 0, 0), (1, 0, -1), (2, 0, -1)]

[(0, 0, 0), (1, 0, 0), (2, 0, 0), (2, -1, 0)]

[(0, 0, 0), (1, 0, 0), (2, 0, 0), (2, 0, -1)]

[(0, 0, 0), (1, 0, 0), (2, -1, 0), (2, 0, -1)]

[(0, 0, 0), (-1, 0, 0), (0, -1, 0), (1, -1, 0)]

[(0, 0, 0), (-1, 0, 0), (0, -1, 0), (0, 0, -1)]

[(0, 0, 0), (-1, 0, 0), (0, -1, 0), (-2, 0, 0)]

[(0, 0, 0), (-1, 0, 0), (-1, 1, 0), (-1, 0, 1)]

[(0, 0, 0), (-1, 0, 0), (-1, 1, 0), (-2, 1, 0)]

[(0, 0, 0), (-1, 0, 0), (1, -1, 0), (1, 0, -1)]

[(0, 0, 0), (-1, 0, 0), (0, 0, -1), (1, 0, -1)]

[(0, 0, 0), (-1, 0, 0), (0, 0, -1), (-2, 0, 0)]

[(0, 0, 0), (-1, 0, 0), (-1, 0, 1), (-2, 0, 1)]

[(0, 0, 0), (-1, 0, 0), (-2, 0, 0), (-2, 1, 0)]

[(0, 0, 0), (-1, 0, 0), (-2, 0, 0), (-2, 0, 1)]

[(0, 0, 0), (-1, 0, 0), (-2, 1, 0), (-2, 0, 1)]

[(0, 0, 0), (0, 1, 0), (-1, 1, 0), (0, 1, -1)]

[(0, 0, 0), (0, 1, 0), (-1, 1, 0), (-2, 1, 0)]

[(0, 0, 0), (0, 1, 0), (0, 0, 1), (2, 0, 0)]

[(0, 0, 0), (0, 1, 0), (0, 1, -1), (-2, 1, 0)]

[(0, 0, 0), (0, -1, 0), (1, -1, 0), (0, -1, 1)]

[(0, 0, 0), (0, -1, 0), (1, -1, 0), (2, -1, 0)]

[(0, 0, 0), (0, -1, 0), (0, 0, -1), (-2, 0, 0)]

[(0, 0, 0), (0, -1, 0), (0, -1, 1), (2, -1, 0)]

[(0, 0, 0), (-1, 1, 0), (0, 1, -1), (-2, 1, 0)]

[(0, 0, 0), (1, -1, 0), (0, -1, 1), (2, -1, 0)]

[(0, 0, 0), (0, 0, 1), (-1, 0, 1), (0, -1, 1)]

[(0, 0, 0), (0, 0, 1), (-1, 0, 1), (-2, 0, 1)]

[(0, 0, 0), (0, 0, 1), (0, -1, 1), (-2, 0, 1)]

[(0, 0, 0), (0, 0, -1), (1, 0, -1), (0, 1, -1)]

[(0, 0, 0), (0, 0, -1), (1, 0, -1), (2, 0, -1)]

[(0, 0, 0), (0, 0, -1), (0, 1, -1), (2, 0, -1)]

[(0, 0, 0), (-1, 0, 1), (0, -1, 1), (-2, 0, 1)]

[(0, 0, 0), (1, 0, -1), (0, 1, -1), (2, 0, -1)]

[(0, 0, 0), (2, 0, 0), (2, -1, 0), (2, 0, -1)]

[(0, 0, 0), (-2, 0, 0), (-2, 1, 0), (-2, 0, 1)]

The output's rather clumsy -- it's octave-equivalent vectors in no

particular order. Whatever, I did spot at least one ASS in there.
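For comparison, the brute-force method Carl described can be sketched in self-contained Python (this is not Graham's temper.py; the note representation and names here are invented): pick card-1 consonant intervals above a 1/1 root, then keep only the chords whose pairwise intervals also stay within the odd limit.

```python
from fractions import Fraction
from itertools import combinations

def odd_part(k):
    # Strip factors of two, since octaves are free.
    while k % 2 == 0:
        k //= 2
    return k

def octave_reduce(r):
    # Bring a ratio into the octave [1, 2).
    while r < 1:
        r *= 2
    while r >= 2:
        r /= 2
    return r

def consonant(r, limit):
    """An interval is consonant when the odd parts of its numerator
    and denominator both stay within the odd limit."""
    return odd_part(r.numerator) <= limit and odd_part(r.denominator) <= limit

def consonant_intervals(limit):
    # All distinct octave-reduced consonances, unison excluded.
    ivs = {octave_reduce(Fraction(a, b))
           for a in range(1, limit + 1, 2)
           for b in range(1, limit + 1, 2)}
    ivs.discard(Fraction(1))
    return sorted(ivs)

def all_chords(limit, card):
    """Brute force: all chords containing 1/1 whose pairwise
    intervals are all consonant in the given odd limit."""
    chords = []
    for combo in combinations(consonant_intervals(limit), card - 1):
        notes = (Fraction(1),) + combo
        if all(consonant(octave_reduce(y / x), limit)
               for x, y in combinations(notes, 2)):
            chords.append(notes)
    return chords
```

For example, `all_chords(9, 2)` returns the 18 consonant 9-limit dyads, and the otonal tetrad 1/1-5/4-3/2-7/4 shows up among the tetrads.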

15-limit tetrads take a bit of time to work out, but not much. Here's how

to count them:

>>> len(temper.limit15.allConsonantChords(4))

612

There are 590 15-limit pentads, and it takes two minutes to count them all

on this machine. Here's how I did that:

>>> import time

>>> stamp=time.time();len(temper.limit15.allConsonantChords(5));time.time()-stamp

590

126.549999952

If all is correct, that should include no ASSes. The execution time may

well be too great for CGI at my site. Although I have now cut it in half

with some superficial changes. I wonder what I did right.

Graham

>>There's a continuum in abstraction from assembly to C to functional

>>languages like scheme, to actual math. For me, scheme is the ideal

>>point on this continuum -- math is too compact, C is egregious.

>

>Well, this is something I'd prefer to leave subjective. Everybody

>has their own preferences with languages.

Very true. I was just stating mine.

>Also, a lot depends on what library functions your allowed to use

>as "gifts".

Very true, and this should certainly be taken into account.

Actually, it's nothing but library functions, all the way down;

in a sense, this is my argument.

As a stock language, scheme comes with very little more than a

UTM. Math comes with the biggest library of all.

>There's a sense in which a brute force algorithm is

>less satisfactory than another kind.

Right.

>Where is Haskell on your continuum?

Near scheme, closer to math.

>Yes. Well, I've gone ahead and implemented this. Code is at

><http://microtonal.co.uk/temper.py> if I remember. The relevant

>stuff is in the "Tonality diamonds" section, with the

>combinations routine in "Utilities". CGI may follow.

Suh-weet!

>The output's rather clumsy -- it's octave-equivalent vectors

>in no particular order.

D'oh!

>Whatever, I did spot at least one ASS in there.

Cool.

>15-limit tetrads take a bit of time to work out, but not much.

>Here's how to count them:

>

> >>> len(temper.limit15.allConsonantChords(4))

> 612

>

> There are 590 15-limit pentads, and it takes two minutes to

> count them all on this machine. Here's how I did that:

>

> >>> import time

> >>> stamp=time.time();len(temper.limit15.allConsonantChords(5));time.time()-stamp

> 590

> 126.549999952

>

> If all is correct, that should include no ASSes. The execution

> time may well be too great for CGI at my site. Although I have

> now cut it in half with some superficial changes. I wonder what

> I did right.

Thanks, Graham!

-C.

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> MCCT- For all pairs of terms [t_i, t_(i+1)], from finite

> alphabet [gamma], it is possible to select a pair of

> terms [m_i, m_(i+1)] from the Mathworld website which

> are identical.

My brother Robin was complaining to me about this, and brought up the word "pencil". It turned out he was *not* referring to

http://mathworld.wolfram.com/Pencil.html

but had picked a word at random. He was pretty triumphant when he found out it had also been made off with by mathematicians, though I think the mathematical use is older than those yellow things.

>My brother Robin was complaining to me about this, and brought

>up the word "pencil". It turned out he was *not* referring to

>

> http://mathworld.wolfram.com/Pencil.html

>

>but had picked a word at random. He was pretty triumphant when he

>found out it had also been made off with by mathematicians, though

>I think the mathematical use is older than those yellow things.

It says 1960...?

-Carl

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> >My brother Robin was complaining to me about this, and brought

> >up the word "pencil". It turned out he was *not* referring to

> >

> > http://mathworld.wolfram.com/Pencil.html

> >

> >but had picked a word at random. He was pretty triumphant when he

> >found out it had also been made off with by mathematicians, though

> >I think the mathematical use is older than those yellow things.

>

> It says 1960...?

>

> -Carl

Cremona was 1960. Desargues lived 1591-1661.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> Thanks, but I don't think RMS will work. That implies a Euclidean

> metric, but a "taxicab" metric seems to be what we want here.

Yes of course. Sorry. Just replace every occurrence of "rms" with

"taxicab" in what I wrote.

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> > Thanks, but I don't think RMS will work. That implies a Euclidean

> > metric, but a "taxicab" metric seems to be what we want here.

>

> Yes of course. Sorry. Just replace every occurrence of "rms" with

> "taxicab" in what I wrote.

How do you calculate "taxicab" error?

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> How do you calculate "taxicab" error?

What corresponds in L1 (unweighted taxicab) is the median in place of the mean, and the mean of the absolute values of the deviations from the median in place of rms.
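Concretely (a toy sketch, not Gene's notation): in L2 the mean minimizes the sum of squared deviations, while in L1 the median minimizes the sum of absolute deviations, so the natural analogue of rms error is the mean absolute deviation from the median:

```python
import statistics

def l1_error(data):
    """Mean absolute deviation from the median: the L1 analogue of rms."""
    m = statistics.median(data)
    return sum(abs(x - m) for x in data) / len(data)

data = [1.0, 2.0, 4.0, 8.0]
m = statistics.median(data)  # 3.0
# The median minimizes the sum of absolute deviations: nearby
# candidate centers do no better than the median itself.
sums = {c: sum(abs(x - c) for x in data) for c in (m - 0.5, m, m + 0.5)}
print(l1_error(data), sums)
```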

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > How do you calculate "taxicab" error?

>

> What corresponds to rms in L1 (unweighted taxicab) is the median in

>place of the mean, and the mean of the absolute values of

>the deviations from the median in place of rms.

THANK YOU GENE! Can this be applied to a _triangular_, instead of

quadrangular, city-block graph? If so, can you tell me how to apply

that to this lattice metric:

http://www.kees.cc/tuning/perbl.html

Note that the only distances I'm concerned with "accurately

capturing" are those of intervals m:n where m/n is approximately 1.

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> THANK YOU GENE! Can this be applied to a _triangular_, instead of

> quadrangular, city-block graph? If so, can you tell me how to apply

> that to this lattice metric:

>

> http://www.kees.cc/tuning/perbl.html

The hexagonal region H consisting of all (m,n) with measure less than or equal to 1 is convex, so this defines a norm for a normed vector space: if v = (m, n), then ||v|| = r, where r is the smallest nonnegative number such that v is in r*H.

Given a set of vectors {v1, ... , vk} you could then seek to find a vector t which minimizes

||v1 - t|| + ||v2-t|| + ... + ||vk-t||

which could be used as a central point, and define error as the minimum value thus achieved.
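Restricted to 5-limit lattice points (a, b) (exponents of 3 and 5, octaves ignored), and ignoring the log-of-odd-limit weighting on Kees's page so that every rung has length 1, the hexagonal norm and this central-point search can be sketched as follows; `hex_norm` and `central_point` are invented names, and the exhaustive search is purely illustrative:

```python
from itertools import product

def hex_norm(v):
    """Taxicab distance on the triangular 5-limit lattice, with unit
    rungs along 3, 5, and 5/3: the unit ball is a hexagon, and
    max(|a|, |b|, |a+b|) counts the rungs."""
    a, b = v
    return max(abs(a), abs(b), abs(a + b))

def central_point(vectors, search=3):
    """Exhaustively search lattice points t for one minimizing
    sum ||v_i - t||; the minimal sum serves as the error."""
    best = None
    for t in product(range(-search, search + 1), repeat=2):
        err = sum(hex_norm((a - t[0], b - t[1])) for a, b in vectors)
        if best is None or err < best[1]:
            best = (t, err)
    return best

# The major triad 1/1-5/4-3/2 is (0,0), (0,1), (1,0); every pair is
# one rung apart, and the best central point leaves total error 2.
print(central_point([(0, 0), (0, 1), (1, 0)]))
```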

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > THANK YOU GENE! Can this be applied to a _triangular_, instead of

> > quadrangular, city-block graph? If so, can you tell me how to apply

> > that to this lattice metric:

> >

> > http://www.kees.cc/tuning/perbl.html

>

> The hexagonal region H consisting of all (m,n) with measure less

> than or equal to 1 is convex, so this defines a norm for a normed

> vector space: if v = (m, n), then ||v|| = r, where r is the smallest

> nonnegative number such that v is in r*H.

>

> Given a set of vectors {v1, ... , vk} you could then seek to find a

> vector t which minimizes

>

> ||v1 - t|| + ||v2-t|| + ... + ||vk-t||

>

> which could be used as a central point, and define error as the

> minimum value thus achieved.

I'm not getting this last part. Will it help make my heuristic work?

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> I'm not getting this last part. Will it help make my heuristic work?

I don't know. What I'd like to know is what a version of your heuristic would be which applies to sets of commas--is this what you are aiming at?

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

>

> > I'm not getting this last part. Will it help make my heuristic work?

>

> I don't know. What I'd like to know is what a version of your

>heuristic would be which applies to sets of commas--is this what you

>are aiming at?

Eventually. It would probably involve some definition of the dot

product of the commas in a tri-taxicab metric. But I like to start

simple, and perhaps if we can formulate the right error measure in

5-limit, we can generalize it and use it for 7-limit even without

knowing how one would apply the heuristic.