I put together the ones Carl mentioned with the ones I cooked up, and compared them using some of my measures and some from Scala. Playing about with them, I got the impression that high lumma stability, propriety, and CS (which seemed to go together) were good things for scales to have, so that the ones with the most harmony did not necessarily sound the best melodically. I'm still trying to figure out all the arcane measures Carl and Graham are tossing at each other; maybe they could explain using these scales as examples.

I also put in "Wille's k value" to get a start on some of the measures I don't understand; this seemed to be a good place to start since it makes no sense to me at all. Does anyone have a clue?

Class

[1, 21/20, 35/32, 6/5, 5/4, 21/16, 7/5, 3/2, 25/16, 42/25, 7/4, 15/8]

triads 26 intervals 31 connectivity 3

improper CS lumma .043920 k 437

Stelhex

[1, 21/20, 7/6, 6/5, 5/4, 21/16, 7/5, 3/2, 8/5, 42/25, 7/4, 9/5]

triads 26 intervals 30 connectivity 3

improper lumma .081284 k 787

Euchex

[1, 15/14, 8/7, 6/5, 5/4, 4/3, 10/7, 3/2, 8/5, 12/7, 7/4, 15/8]

triads 25 intervals 30 connectivity 3

strictly proper CS lumma .253235 k 787

Prism

[1, 16/15, 28/25, 7/6, 5/4, 4/3, 7/5, 112/75, 8/5, 5/3, 7/4, 28/15]

triads 24 intervals 30 connectivity 3

strictly proper CS lumma .440966 k 262

Tet-a

[1, 21/20, 35/32, 6/5, 5/4, 21/16, 7/5, 3/2, 8/5, 5/3, 7/4, 15/8]

triads 24 intervals 30 connectivity 2

improper CS lumma .321977 k 262

Tet-b

[1, 21/20, 35/32, 7/6, 5/4, 21/16, 7/5, 3/2, 8/5, 5/3, 7/4, 15/8]

triads 22 intervals 29 connectivity 2

improper CS lumma .355766 k 262

Lumma

[1, 36/35, 8/7, 6/5, 5/4, 48/35, 10/7, 3/2, 5/3, 12/7, 9/5, 40/21]

triads 22 intervals 29 connectivity 3

improper lumma .081284 k 262

Euctetrad

[1, 15/14, 35/32, 6/5, 5/4, 21/16, 7/5, 3/2, 8/5, 5/3, 7/4, 15/8]

triads 21 intervals 28 connectivity 2

improper CS lumma .166834 k 1837

Gene

[1, 21/20, 9/8, 6/5, 5/4, 4/3, 7/5, 3/2, 8/5, 5/3, 7/4, 15/8]

triads 19 intervals 27 connectivity 2

strictly proper CS lumma .437710 k 112

Eucvert

[1, 15/14, 8/7, 6/5, 5/4, 4/3, 7/5, 3/2, 8/5, 5/3, 7/4, 28/15]

triads 19 intervals 27 connectivity 2

strictly proper CS lumma .262832 k 367

Lester

[1, 21/20, 9/8, 7/6, 5/4, 4/3, 7/5, 3/2, 14/9, 5/3, 7/4, 15/8]

triads 18 intervals 26 connectivity 2

strictly proper CS lumma .490032 k 337

Gene-a

[1, 15/14, 9/8, 6/5, 5/4, 4/3, 10/7, 3/2, 8/5, 5/3, 7/4, 15/8]

triads 18 intervals 26 connectivity 2

strictly proper CS lumma .333977 k 787
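The propriety labels above came from Scala, but they're easy to check by hand. Here's a minimal Python sketch, assuming Rothenberg's definitions (strictly proper: every class-k interval smaller than every class-(k+1) interval; proper: adjacent classes may touch but not overlap); the function names are mine:

```python
from fractions import Fraction
from math import log2

def interval_matrix(ratios):
    """Sizes in cents of every k-step interval (k = 1 .. n-1):
    one row per interval class, one column per starting note."""
    n = len(ratios)
    cents = [1200 * log2(float(r)) for r in ratios]
    return [[cents[(i + k) % n] - cents[i] + (1200 if (i + k) % n < i else 0)
             for i in range(n)]
            for k in range(1, n)]

def classify(ratios, tol=1e-9):
    """Rothenberg's categories: strictly proper if every class-k interval
    is smaller than every class-(k+1) interval; proper if adjacent classes
    merely touch; improper if they overlap."""
    rows = interval_matrix(ratios)
    strict = True
    for a, b in zip(rows, rows[1:]):
        if max(a) > min(b) + tol:
            return "improper"
        if max(a) > min(b) - tol:
            strict = False
    return "strictly proper" if strict else "proper"

# "Class" from the list above, flagged improper:
class_scale = [Fraction(p, q) for p, q in
               [(1, 1), (21, 20), (35, 32), (6, 5), (5, 4), (21, 16),
                (7, 5), (3, 2), (25, 16), (42, 25), (7, 4), (15, 8)]]
print(classify(class_scale))  # improper
```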

genewardsmith wrote:

> I put together the ones Carl mentioned with the ones I cooked up, and

> compared them using some of my measures and some from Scala. Playing

> about with them, I got the impression that high lumma stability,

> propriety, and CS (which seemed to go together) were good things for

> scales to have, so that the ones with the most harmony did not

> necessarily sound the best melodically. I'm still trying to figure out

> all the arcane measures Carl and Graham are tossing at each other;

> maybe they could explain using these scales as examples.

The main ones you're missing are Rothenberg stability and efficiency. The

former is the proportion of unambiguous intervals in the scale. Ambiguous

intervals have the same size, but belong to different interval classes.

The canonical example is the tritone in the 12-equal diatonic, which is

both a fourth and a fifth.
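If stability is just the unambiguous fraction of the interval matrix, it can be sketched like this for scales given as step patterns in an equal division (a rough implementation of my own, not Rothenberg's algorithm):

```python
def stability(steps):
    """Rothenberg stability: the fraction of entries in the interval
    matrix whose size occurs in only one interval class.  `steps` is
    the scale's step pattern in units of some equal division."""
    n = len(steps)
    # row k-1 lists the sizes of all k-step intervals
    rows = [[sum(steps[(i + j) % n] for j in range(k)) for i in range(n)]
            for k in range(1, n)]
    classes_of = {}
    for k, row in enumerate(rows):
        for size in row:
            classes_of.setdefault(size, set()).add(k)
    ambiguous = sum(1 for row in rows for s in row if len(classes_of[s]) > 1)
    return 1 - ambiguous / (n * (n - 1))

# 12-equal diatonic: two ambiguous tritones out of 42 intervals
print(stability([2, 2, 1, 2, 2, 2, 1]))  # 20/21 ~ 0.952
```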

Rothenberg stability is undefined for improper scales, and unity for

strictly proper scales. As all your examples fall into these categories

(as all JI scales do) they can't be used as examples for it -- unless you

interpret a pair of intervals as being so close in size that they'll be heard as equal.

If a scale has low Rothenberg stability, you need a lot of context to

decide which interval class each interval belongs to. An example of such

a scale is the rendering of Rast in 10-equal, which has a propriety grid

2 1 1 2 2 1 1

3 2 3 4 3 2 3

4 4 5 5 4 4 4

6 6 6 6 6 5 5

8 7 7 8 7 6 7

9 8 9 9 8 8 9

That shows the number of steps of 10-equal to each second, third, fourth

and so on down the grid. Seconds and thirds can both be 2 steps, thirds

and fourths can both be 4 steps, fourths and fifths can both be 5 steps,

fifths and sixths can both be 6 steps and sixths and sevenths can both be

8 steps. It's only intervals of 1, 3, 7 or 9 steps that are unambiguous.
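That grid can be reproduced from the step pattern 2 1 1 2 2 1 1 (my reading of its top row); a short sketch that also recovers the unambiguous sizes:

```python
def interval_grid(steps):
    """One row per interval class: sizes of the k-step intervals."""
    n = len(steps)
    return [[sum(steps[(i + j) % n] for j in range(k)) for i in range(n)]
            for k in range(1, n)]

def unambiguous_sizes(steps):
    """Sizes that occur in exactly one interval class."""
    classes_of = {}
    for k, row in enumerate(interval_grid(steps)):
        for size in row:
            classes_of.setdefault(size, set()).add(k)
    return sorted(s for s, ks in classes_of.items() if len(ks) == 1)

rast10 = [2, 1, 1, 2, 2, 1, 1]  # Rast rendered in 10-equal
print(unambiguous_sizes(rast10))  # [1, 3, 7, 9]
```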

If a proper scale has a small number of ambiguous intervals that may be

better than none at all. Rothenberg says that an ambiguous interval

resolving to an unambiguous one adds to the cadential effect. I'm dubious

about this -- the tritone is used in diatonic cadences because there's

only one of them, so it almost describes the scale. But tritone

substitutions only work because of the ambiguity, so if you want an analog

of tritone substitutions you need at least one ambiguous interval.

Rothenberg efficiency relates to patterns of notes that uniquely determine

the key. The more notes you can play without the key being specified, the

higher the efficiency. MOS scales with an octave period always have a

high efficiency because you can play all but one note and there's still

ambiguity. The diatonic scale in 12-equal is particularly efficient

because the interval that shares the same interval class as the generator

(the tritone) is ambiguous. In 31-equal, if you assume the listener can

distinguish 7:5 and 10:7, you can uniquely specify the key by playing one

such interval.

Scales with low efficiency either have lots of different sized intervals

or a number of periods to the octave. The latter case is what Rothenberg

usually seems to have in mind when he talks of low-efficiency scales. He says they're

suitable for atonal type music because they don't have a clearly defined

key center. That's the opposite of what efficiency is supposed to show,

and only works this way because of a peculiarity of the definition.

So, I'm coming to dislike efficiency. I'd rather rate scales according to

the number of periods to the octave (which should be low for a diatonic),

the largest number of notes you can play without establishing the key

(which should be high for modulation to work, and is easier to calculate

than Rothenberg efficiency) and the smallest number of notes you need to

establish the key (which should be low, to give a strong sense of

tonality).

Carl did ask before about Consistency, the measure Rothenberg uses to rate

how good a scale will be for modal transposition. I think I've worked it

out now.

You start with the ordered mapping rather than the matrix above. But

after working out what the ordered mapping (alpha_ij) is, I don't think it

matters. So, I'll keep working with the usual matrix (delta_ij). With

the only example Rothenberg works through, the 12-equal diatonic, the two

are the same anyway.

2 2 2 1 2 2 1

4 4 3 3 4 3 3

6 5 5 5 5 5 5

7 7 7 6 7 7 7

9 9 8 8 9 9 8

11 10 10 10 11 10 10

You then construct "consistent sets" out of columns of this matrix that

don't share ambiguous intervals. In this example, any set that doesn't

contain both tritones is consistent.

Consistency is defined as the average, over each number of columns

greater than 1, of the proportion of column sets of that size which are consistent.

So, for this example there are 7 consistent sets of 1 column, which will

be true for all 7-note scales, so we ignore this statistic.

There are 20 consistent sets of 2 columns. The total number of

combinations of 2 objects from 7 is 21. One of those is invalid -- the

two columns that both have a tritone in them. Hence 21-1=20.

30 out of 35 sets with 3 columns are consistent.

25 out of 35 sets with 4 columns are consistent.

11 out of 21 sets with 5 columns are consistent.

2 out of 7 sets with 6 columns are consistent (the ones you get by

excluding either of the columns with a tritone in it).

There are no consistent sets with 7 columns.

The calculation then is

((20/21) + (30/35) + (25/35) + (11/21) + (2/7))/6

= 70/126 = 5/9 = 0.556 which agrees with Rothenberg.
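Here's a brute-force Python check of that procedure; it's my reading of the definition, but it does reproduce the 5/9 figure for the 12-equal diatonic:

```python
from itertools import combinations

def consistency(steps):
    """Average, over set sizes 2 .. n, of the proportion of column sets
    that are consistent (share no ambiguous interval)."""
    n = len(steps)
    # one column of the interval matrix per mode: class -> size
    columns = [{k: sum(steps[(i + j) % n] for j in range(k))
                for k in range(1, n)}
               for i in range(n)]

    def consistent(cols):
        # inconsistent if some size is assigned to two different classes
        seen = {}
        for col in cols:
            for k, size in col.items():
                if seen.setdefault(size, k) != k:
                    return False
        return True

    props = []
    for size in range(2, n + 1):
        sets = list(combinations(columns, size))
        props.append(sum(consistent(s) for s in sets) / len(sets))
    return sum(props) / len(props)

print(consistency([2, 2, 1, 2, 2, 2, 1]))  # 5/9 ~ 0.556
```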

So there you go. I prefer Lumma stability which does much the same job,

and is much simpler. It also works for improper and strictly proper

scales. Which means it doesn't depend on assumptions about the smallest

perceivable differences between intervals and doesn't suddenly change when

you hit an equal temperament.

Graham

>If a proper scale has a small number of ambiguous intervals that may be

>better than none at all. Rothenberg says that an ambiguous interval

>resolving to an unambiguous one adds to the cadential effect. I'm dubious

>about this -- the tritone is used in diatonic cadences because there's

>only one of them, so it almost describes the scale.

That's due to its effect as a nearly "sufficient subset" of the scale,

which R. also discusses.

>Scales with low efficiency either have lots of different sized intervals

>or a number of periods to the octave. The latter case is what Rothenberg

>usually seems to be thinking of by low efficiency scales. He says they're

>suitable for atonal type music because they don't have a clearly defined

>key center. That's the opposite of what efficiency is supposed to show,

>and only works this way because of a peculiarity of the definition.

Namely, that you only need to hear one note of the scale to determine

the key... or maybe you can never determine the key. I think my solution

at lumma.org/gd3.txt is elegant.

>So, I'm coming to dislike efficiency. I'd rather rate scales according

>to the number of periods to the octave (which should be low for a

>diatonic),

Agree.

>the largest number of notes you can play without establishing the key

>(which should be high for modulation to work,

Modulation? Are you sure?

>the smallest number of notes you need to establish the key (which should

>be low, to give a strong sense of tonality).

So you're saying you want there to be one sufficient subset which is

small, and the rest maximally large? This is a tonality measure, which

as I say, I'm avoiding on purpose. It could be a good one, though.

>You start with the ordered mapping rather than the matrix above. But

>after working out what the ordered mapping (alpha_ij) is, I don't think

>it matters. So, I'll keep working with the usual matrix (delta_ij).

>With the only example Rothenberg works through, the 12-equal diatonic,

>the two are the same anyway.

It doesn't matter, but with the ordered mapping you get results

applicable to any scale that shares it, whereas the usual matrix

uniquely specifies the scale in question.

> 2 2 2 1 2 2 1

> 4 4 3 3 4 3 3

> 6 5 5 5 5 5 5

> 7 7 7 6 7 7 7

> 9 9 8 8 9 9 8

>11 10 10 10 11 10 10

>

>You then construct "consistent sets" out of columns of this matrix that

>don't share ambiguous intervals. In this example, any set that doesn't

>contain both tritones is consistent.

>

>Consistency is defined as the average of the proportion of sets with each

>number of columns greater than 2 which are consistent.

>

>So, for this example there are 7 consistent sets of 12 column, which will

>be true for all 7 note scales so we ignore this statistic.

>

>There are 20 consistent sets of 2 columns. The total number of

>combinations of 2 objects from 7 is 21. One of those is invalid -- the

>two columns that both have a tritone in them. Hence 21-1=20.

>

>30 out of 35 sets with 3 columns are consistent.

>

>25 out of 35 sets with 4 columns are consistent.

>

>11 out of 21 sets with 5 columns are consistent.

>

>2 out of 7 sets with 6 columns are consistent (the ones you get by

>excluding either of the columns with a tritone in it).

>

>There are no consistent sets with 7 columns.

>

>The calculation then is

>

>((20/21) + (30/35) + (25/35) + (11/21) + (2/7))/6

>

>= 70/126 = 5/9 = 0.556 which agrees with Rothenberg.

>

>

>So there you go. I prefer Lumma stability which does much the same job,

>and is much simpler. It also works for improper and strictly proper

>scales. Which means it doesn't depend on assumptions about the smallest

>perceivable differences between intervals and doesn't suddenly change when

>you hit an equal temperament.

Thanks for explaining this, Graham. This rings a bell now. Yes, I

agree that Lumma stability is usually better, especially when you

leave the world of et subsets that R. worked in.

-Carl

Me:

> >Scales with low efficiency either have lots of different sized

> >intervals or a number of periods to the octave. The latter case is

> >what Rothenberg usually seems to be thinking of by low efficiency

> >scales. He says they're suitable for atonal type music because they

> >don't have a clearly defined key center. That's the opposite of what

> >efficiency is supposed to show, and only works this way because of a

> >peculiarity of the definition.

Carl:

> Namely, that you only need to hear one note of the scale to determine

> the key... or maybe you can never determine the key. I think my

> solution at lumma.org/gd3.txt is elegant.

I suppose the idea is that once you've heard a few notes (one for an equal

temperament, a number of pairs for an octatonic scale) you know all the

notes in the scale, even if there isn't a key center.

Um, which bit of gd3?

> >the largest number of notes you can play without establishing the key

> >(which should be high for modulation to work,

>

> Modulation? Are you sure?

I don't see anything wrong there. If you want to change key according to

shared notes you need a lot of notes to be shared. There may be other

ways of doing modulations, but this is an obvious criterion to add.

An alternative would be to modulate between pairs of diatonics, like I

suggested for neutral third scales.

> >the smallest number of notes you need to establish the key (which

> >should be low, to give a strong sense of tonality).

>

> So you're saying you want there to be one sufficient subset which is

> small, and the rest maximally large? This is a tonality measure, which

> as I say, I'm avoiding on purpose. It could be a good one, though.

I'd rather have one maximally large sufficient subset, and the rest small.

It's only the former that's rare, and apart from the bizarre way periodic

modes are treated I don't see another reason for efficiency being

considered.

> Thanks for explaining this, Graham. This rings a bell now. Yes, I

> agree that Lumma stability is usually better, especially when you

> leave the world of et subsets that R. worked in.

Rothenberg worked with discrete measuring scales, not ET subsets. You

could apply all the measures to arbitrary scales if you defined a

tolerance within which intervals are considered equal. And you'd really

want to.
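As a sketch of what that might look like: Rothenberg stability with a tolerance in cents, so it applies to arbitrary JI scales (the 5- and 25-cent figures below are arbitrary choices of mine):

```python
from fractions import Fraction
from math import log2

def stability_with_tolerance(ratios, tol):
    """Rothenberg stability for an arbitrary scale: sizes in different
    interval classes count as equal when within `tol` cents."""
    n = len(ratios)
    cents = [1200 * log2(float(r)) for r in ratios]
    rows = [[cents[(i + k) % n] - cents[i] + (1200 if (i + k) % n < i else 0)
             for i in range(n)]
            for k in range(1, n)]
    ambiguous = 0
    for a, row_a in enumerate(rows):
        for x in row_a:
            if any(a != b and any(abs(x - y) <= tol for y in row_b)
                   for b, row_b in enumerate(rows)):
                ambiguous += 1
    return 1 - ambiguous / (n * (n - 1))

just_major = [Fraction(p, q) for p, q in
              [(1, 1), (9, 8), (5, 4), (4, 3), (3, 2), (5, 3), (15, 8)]]
print(stability_with_tolerance(just_major, 5))   # 1.0 -- strictly proper
print(stability_with_tolerance(just_major, 25))  # the tritones blur: 20/21
```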

Graham

>Um, which bit of gd3?

Where I set the score of the efficiency section equal to the generalized

fifths section if there are only 1 or 2 unique keys.

>I'd rather have one maximally large sufficient subset, and the rest small.

>It's only the former that's rare, and apart from the bizarre way periodic

>modes are treated I don't see another reason for efficiency being

>considered.

I'm afraid this would be far too specific for me.

>> Thanks for explaining this, Graham. This rings a bell now. Yes, I

>> agree that Lumma stability is usually better, especially when you

>> leave the world of et subsets that R. worked in.

>

>Rothenberg worked with discrete measuring scales, not ET subsets. You

>could apply all the measures to arbitrary scales if you defined a

>tolerance within which intervals are considered equal. And you'd really

>want to.

Yes, he says that a couple of times, but he only ever calculates anything

based on et subsets.

-Carl