In-Reply-To: <a31vrt+4sau@eGroups.com>

genewardsmith wrote:

> What's an inconsistent chord?

I meant chords inconsistent with the "official" mapping but which
approximate some interval better. So in 34-as-twintone, anything from h34
is inconsistent. Using the other diaschismic mappings (h34&h46 or h34&h22
which could be written as h46&h58 and h56&h22 respectively using only
consistent ETs) it's chords from g34 that are inconsistent.

> > > It shows 34-et as a part of a range of twintone et possibilities.
> >
> > Well, that's a radical idea. I'm sure I'd never have worked that out
> > myself.
>
> You seemed to be objecting to it strongly, so I don't know what your
> point is.

What am I objecting to now? I thought it was what I said before, which you
described as "ridiculous". Still, it was something I hadn't said that was
ridiculous then as well.

I can't find any more 7-limit diaschismics covering 34 with a complexity
of less than 34. The closest is g34&h46 (prime mapping of 46 with
alternative mapping of 34) which has a complexity of exactly 34 (so 36
notes for two complete otonalities) and a minimax error of 4 cents.

Period/generator mapping

[(2, 0), (3, 1), (5, -2), (3, 15)]

and my wedge invariant

(2, -4, 30, -11, 42, 81)

(this is different to Gene's wedge invariants which I still don't know how
to get).
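For what it's worth, the six coefficients are just the 2x2 minors of the
period and generator columns of the mapping, read off in lexicographic
index-pair order. A minimal sketch (the function name is mine; the h12 and
h22 vals below are the standard prime mappings of 12- and 22-et):

```python
from itertools import combinations

def wedgie(v1, v2):
    """Wedge product of two mappings: all 2x2 minors, keyed by
    index pair (i, j) with i < j."""
    return {(i, j): v1[i] * v2[j] - v1[j] * v2[i]
            for i, j in combinations(range(len(v1)), 2)}

# Period and generator columns of [(2, 0), (3, 1), (5, -2), (3, 15)]
w = wedgie((2, 3, 5, 3), (0, 1, -2, 15))
invariant = tuple(w[k] for k in sorted(w))
print(invariant)  # (2, -4, 30, -11, 42, 81)

# The same calculation from two equal temperaments: h12 & h22 gives
# the twintone wedgie.
t = wedgie((12, 19, 28, 34), (22, 35, 51, 62))
twintone = tuple(t[k] for k in sorted(t))
print(twintone)  # (2, -4, -4, -11, -12, 2)
```

The same function covers both the period-matrix and two-et routes, which is
part of what's at issue below.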

Oh, time for a new conjecture:

The linear temperament formed by combining two consistent equal
temperaments will never have a higher complexity than the number of notes
in the more complex ET.

This is using the same definitions of complexity and consistency as my
program. Anybody care to prove/refute it?

> That might depend on what inner product I use, and I don't know what
> this is supposed to represent. If I use the standard dot product, I get
...

Yes, that's what I meant. It's one of the examples from the book, and my
function doesn't get it right. So thanks for confirming that I'm wrong
and not the book.

I'm not clear what to do with the function when it is working. It takes
square matrices, but the matrices formed by commatic unison vectors aren't
square.

Do you have an inner product that works for octave-equivalent harmony? At
least then we could get a reduced basis for a periodicity block.

Also, the book covers simultaneous Diophantine approximations (of which
more sometime) but not simultaneous linear Diophantine equations. So I
still don't know how to get an original basis without torsion. Any clues?

Graham

--- In tuning-math@y..., graham@m... wrote:

> [(2, 0), (3, 1), (5, -2), (3, 15)]
>
> and my wedge invariant
>
> (2, -4, 30, -11, 42, 81)
>
> (this is different to Gene's wedge invariants which I still don't know how
> to get).

What about a compromise? You change the order of the basis elements, so
that this reads (2, -4, 30, 81, 42, -11). The point is to make the wedgie
calculated from the period matrix or from two ets the same as the wedgie
calculated from two commas. In return, I could normalize by your method.
Mine was chosen to correspond with the 5-limit, where a wedgie is just the
comma of the temperament, and where the obvious way to normalize is to
make the comma greater than one. However this isn't going to work in the
11-limit, and we should decide on a single system which may as well be
the one you are using now.
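If I've read the proposal right, the reordering amounts to listing the
pairs as (0,1), (0,2), (0,3), (2,3), (1,3), (1,2) instead of plain
lexicographic order, i.e. reversing the last three coefficients. A sketch
(the function name is mine):

```python
def reorder(invariant):
    """Convert from lexicographic pair order (01,02,03,12,13,23)
    to the proposed order (01,02,03,23,13,12) by reversing the
    last three entries."""
    return invariant[:3] + invariant[:2:-1]

print(reorder((2, -4, 30, -11, 42, 81)))  # (2, -4, 30, 81, 42, -11)
```

The function is its own inverse, so it converts in either direction.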

>

> Oh, time for a new conjecture:
>
> The linear temperament formed by combining two consistent equal
> temperaments will never have a higher complexity than the number of notes
> in the more complex ET.
>
> This is using the same definitions of complexity and consistency as my
> program. Anybody care to prove/refute it?

Have you done enough calculations to check its plausibility? It sounds plausible, though.

> I'm not clear what to do with the function when it is working. It takes
> square matrices, but the matrices formed by commatic unison vectors aren't
> square.

You should be able to change the code to handle nonsquare matrices easily
enough. If that doesn't work, try filling out the matrix with rows of
zeros.

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> > This is using the same definitions of complexity and consistency as my
> > program. Anybody care to prove/refute it?
>
> Have you done enough calculations to check its plausibility? It sounds
> plausible, though.

It occurs to me that the theorems I've already posted here should suffice for this. I'll get back to it.

In-Reply-To: <a349q8+dd3a@eGroups.com>

Me:

> > (2, -4, 30, -11, 42, 81)
> >
> > (this is different to Gene's wedge invariants which I still don't
> > know how to get).

Gene:

> What about a compromise? You change the order of the basis elements, so
> that this reads (2, -4, 30, 81, 42, -11). The point is to make the
> wedgie calculated from the period matrix or from two ets the same as
> the wedgie calculated from two commas.

Ah! Well, in my case this isn't true. This is the first time I've seen
you acknowledge the problem. The wedgie above is

{(2, 3): 81, (0, 1): 2, (1, 3): 42, (0, 3): 30, (0, 2): -4, (1, 2): -11}

which has the invariant

(2, -4, 30, -11, 42, 81)

its complement is

{(2, 3): 2, (1, 2): 30, (1, 3): 4, (0, 3): -11, (0, 2): -42, (0, 1): 81}

with invariant

(81, -42, -11, 30, 4, 2)

How am I supposed to know which of these to take? They're dimensionally
identical. It looks like your invariant function magically removes the
distinction, but like I said before, I don't know how to do that.
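To show my working, here is the complement as I understand it: each pair
(i, j) maps to the complementary pair, with the sign of the permutation
(i, j, k, m). A sketch assuming a plain 4-D Hodge-style complement (the
function names are mine):

```python
def perm_sign(p):
    """Sign of a permutation, by counting inversions."""
    inversions = sum(1 for a in range(len(p)) for b in range(a + 1, len(p))
                     if p[a] > p[b])
    return -1 if inversions % 2 else 1

def complement(w):
    """Complement of a 7-limit wedgie given as a dict keyed by
    index pairs (i, j) with i < j."""
    result = {}
    for (i, j), coeff in w.items():
        k, m = sorted(set(range(4)) - {i, j})
        result[(k, m)] = perm_sign((i, j, k, m)) * coeff
    return result

w = {(0, 1): 2, (0, 2): -4, (0, 3): 30, (1, 2): -11, (1, 3): 42, (2, 3): 81}
c = complement(w)
print(tuple(c[key] for key in sorted(c)))  # (81, -42, -11, 30, 4, 2)
```

Note that applying it twice gets back the original, which is exactly why
nothing in the arithmetic singles out one of the pair as "correct".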

I did make a mistake above. It's

{(2, 3): 2, (1, 2): 30, (1, 3): 4, (0, 3): -11, (0, 2): -42, (0, 1): 81}

that gives the period/generator mapping

[(2, 0), (0, 1), (11, -2), (-42, 15)]

Whereas

{(2, 3): 81, (0, 1): 2, (1, 3): 42, (0, 3): 30, (0, 2): -4, (1, 2): -11}

gives

[(1, 0), (-74, 81), (38, -42), (10, -11)]

which has a stonking 3256 cent 7-limit minimax, but is still legally
defined. How do you tell which is "correct"? I really don't know.

Gene:

> In return, I could normalize by
> your method. Mine was chosen to correspond with the 5-limit, where a
> wedgie is just the comma of the temperament, and where the obvious way
> to normalize is to make the comma greater than one. However this isn't
> going to work in the 11-limit, and we should decide on a single system
> which may as well be the one you are using now.

All I do is take the simpler of a "pair" of wedgies, which is ambiguous
only in the 7 prime limit. Then, I sort each basis into ascending order
and sort the whole thing in ascending order of bases.
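Concretely, with the wedgie held as a dict keyed by index pairs (as
printed above), the sorting step looks something like this (a sketch;
picking the simpler member of the pair is left out):

```python
def invariant(wedgie):
    """Read a wedgie dict out as a tuple: put each index pair in
    ascending order, then sort the pairs themselves."""
    canon = {tuple(sorted(k)): v for k, v in wedgie.items()}
    return tuple(canon[k] for k in sorted(canon))

w = {(2, 3): 81, (0, 1): 2, (1, 3): 42, (0, 3): 30, (0, 2): -4, (1, 2): -11}
print(invariant(w))  # (2, -4, 30, -11, 42, 81)
```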

Me:

> > The linear temperament formed by combining two consistent equal
> > temperaments will never have a higher complexity than the number of
> > notes in the more complex ET.
> >
> > This is using the same definitions of complexity and consistency as
> > my program. Anybody care to prove/refute it?

Gene:

> Have you done enough calculations to check its plausibility? It sounds
> plausible, though.

I haven't found any exceptions yet, but haven't done a systematic search.
It's easy to refute if you don't enforce consistency. Pretty much
anything can be written as g0&g1 with the right g0, g1 and period.

Me:

> > I'm not clear what to do with the function when it is working. It
> > takes square matrices, but the matrices formed by commatic unison
> > vectors aren't square.

Gene:

> You should be able to change the code to handle nonsquare matrices
> easily enough. If that doesn't work, try filling out the matrix with
> rows of zeros.

The former might work. I'll have to see if I can get my brain around it.
The latter certainly won't. The Gram-Schmidt orthogonalization involves
dividing by inner products, which will be zero if a row is zero.
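To spell out that failure mode: classical Gram-Schmidt divides each
projection by <u, u>, and a padded zero row makes that denominator zero.
A minimal sketch with exact rationals (all names are mine):

```python
from fractions import Fraction

def gram_schmidt(rows):
    """Classical Gram-Schmidt orthogonalization over the rationals.
    Dies with ZeroDivisionError on any zero (or dependent) row."""
    basis = []
    for v in rows:
        u = [Fraction(x) for x in v]
        for b in basis:
            bb = sum(x * x for x in b)
            # raises ZeroDivisionError when bb == 0
            coeff = sum(x * y for x, y in zip(u, b)) / bb
            u = [x - coeff * y for x, y in zip(u, b)]
        basis.append(u)
    return basis

try:
    gram_schmidt([[1, 0, 0], [0, 0, 0], [0, 1, 0]])
except ZeroDivisionError:
    print("padding with zero rows breaks the orthogonalization")
```

So handling nonsquare matrices directly really is the only option of the
two.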

Graham