
More lists

🔗graham@microtonal.co.uk

12/6/2001 12:48:00 PM

I've updated the script at <http://x31eq.com/temper.html> to
produce files using Dave Keenan's new figure of demerit. That is

width**2 * math.exp((error/self.stdError*3)**2)
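
Spelled out as a standalone function, that's roughly (just a sketch; the
class attribute becomes an explicit argument here):

import math

def demerit(width, error, std_error):
    # width (complexity) squared, blown up by a Gaussian-style penalty
    # on the optimised error
    return width**2 * math.exp((error / std_error * 3)**2)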

The stdError is from some complexity calculations we did before. I forget
what, but it's 17 cents. The results are at

<http://x31eq.com/limit5.gauss>
<http://x31eq.com/limit7.gauss>
<http://x31eq.com/limit9.gauss>
<http://x31eq.com/limit11.gauss>
<http://x31eq.com/limit13.gauss>
<http://x31eq.com/limit15.gauss>
<http://x31eq.com/limit17.gauss>
<http://x31eq.com/limit19.gauss>
<http://x31eq.com/limit21.gauss>

They seem to make good enough sense. I haven't taken the training wheels
off completely, but loosened them as far as I did for the
microtemperaments. The other files haven't been updated, and I'm not even
calculating the MOS-rated list any more.

I've also changed the program to print out equivalences between
second-order ratios instead of unison vectors. That means the higher
limits have a huge number of equivalences. For example, at the bottom of
the 21-limit list there's an 11-limit unique temperament consistent with
111 and 282. It has a complexity of 174 and all intervals to within 2
cents of just. With something that complex, are there any second-order
equivalences? Yes, lots. Including one interval that can be taken 11
different ways:

144:143 =~ 196:195 =~ 171:170 =~ 210:209 =~ 225:224 =~ 209:208 =~ 221:220
=~ 170:169 =~ 273:272 =~ 289:288 =~ 190:189

and that picked out of 197 lines of numerical vomit. I could clean it up,
but I don't know if I should. If anybody thought the extended 21-limit
was pretty, they can't have been paying attention.

It should be possible to get some unison vectors without torsion from this
list! If the temperament's second-order unique, I'll have to use the
original method. Some 5-limit temperaments are, but they aren't a problem
anyway. A few 7-limit temperaments are too, notably including shrutar.
Ennealimmal for all its complexity has

49:40 =~ 60:49
50:49 =~ 49:48

One problem with calculating the unison vectors from these equivalences is
that I'd have to check they were linearly independent without using Numeric.
Or move the generating function to vectors.py. But I don't know if I'll
bother, because the equivalences are the important things anyway.

Another idea would be to take all the intervals between second-order
intervals below a certain size, and use them as unison vectors to generate
temperaments. I might try that.

Oh yes. Seeing as a 7-limit microtemperament is now causing something of
a storm, notice that the top 11-limit one is 26+46 (complexity of 30,
errors within 2.5 cents). And the simplest with all errors below a cent
is 118+152 (complexity of 74).

Graham

🔗paulerlich <paul@stretch-music.com>

12/6/2001 6:45:07 PM

--- In tuning-math@y..., graham@m... wrote:
> I've updated the script at <http://x31eq.com/temper.html> to
> produce files using Dave Keenan's new figure of demerit. That is
>
> width**2 * math.exp((error/self.stdError*3)**2)

I thought Dave Keenan wanted to use Gene's "step" measure. In
addition, I think it should be weighted to favor the simpler
consonances.
>
> The stdError is from some complexity calculations we did before. I forget
> what, but it's 17 cents. The results are at
>
> <http://x31eq.com/limit5.gauss>
> <http://x31eq.com/limit7.gauss>
> <http://x31eq.com/limit9.gauss>
> <http://x31eq.com/limit11.gauss>
> <http://x31eq.com/limit13.gauss>
> <http://x31eq.com/limit15.gauss>
> <http://x31eq.com/limit17.gauss>
> <http://x31eq.com/limit19.gauss>
> <http://x31eq.com/limit21.gauss>
>
> They seem to make good enough sense.

Are you missing any "slippery" examples that don't come easily out of
two ETs?

Since you're doing so much work to get the unison vectors, shouldn't
we be thinking about _starting_ with unison vectors?
>
> Another idea would be to take all the intervals between second-order
> intervals below a certain size, and use them as unison vectors to
> generate temperaments. I might try that.

That should plug a lot of holes.

🔗dkeenanuqnetau <d.keenan@uq.net.au>

12/6/2001 7:12:24 PM

--- In tuning-math@y..., graham@m... wrote:
> I've updated the script at <http://x31eq.com/temper.html> to
> produce files using Dave Keenan's new figure of demerit. That is
>
> width**2 * math.exp((error/self.stdError*3)**2)

Thanks for doing that Graham.

I note that Graham is using maximum width and (optimised) maximum
error where Gene is using rms width and (optimised) rms error. It will
be interesting to see if this alone makes much difference to the
rankings. I doubt it.

> The stdError is from some complexity calculations we did before. I forget
> what, but it's 17 cents.

Actually that looks like the 1% std dev in frequency that came from
some dude's experiments with actual live humans experiencing actual
air vibrations. Paul can you remind us who it was and what s/he
measured?

So I see that while the gaussian with std error of 17 cents seems to
do the right thing in eliminating temperaments with tiny errors but
huge numbers of generators, it is too hard on those with larger
errors. Notice that Ennealimmal is still in the 7-limit list (about
number 22). The problem is that Paultone isn't there at all! It has
17.5 c error with 6 gens per tetrad.

Those lists don't contain any temperament with errors greater than 10
cents. The 5-limit 163 cent neutral second temperament has the largest
at 9.8 cents, with 5 generators per triad.

So I have to agree with Paul that
badness = num_gens^2 / gaussian(error/17c)
doesn't work.

I realised there's no need to have non-linear functions of _both_
num_gens and error (steps and cents) in this badness metric. For example,
this will give the same ranking as the above:

badness = num_gens / gaussian(error/(17c * sqrt(2)))
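
A quick way to see that: for any gaussian of the form exp(-a*x^2), the
second expression is exactly the square root of the first, and square root
is monotonic, so the ordering can't change. A little sketch to check it
numerically (function names are mine):

import math, random

def gaussian(x):
    return math.exp(-x * x)   # any exp(-a*x^2) behaves the same way

def badness_a(g, e):
    return g**2 / gaussian(e / 17.0)

def badness_b(g, e):
    return g / gaussian(e / (17.0 * math.sqrt(2)))

# badness_b == sqrt(badness_a), so the two rankings coincide
cases = [(random.uniform(1, 50), random.uniform(0, 20)) for _ in range(100)]
assert sorted(cases, key=lambda c: badness_a(*c)) == \
       sorted(cases, key=lambda c: badness_b(*c))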

So all we really want to know is the relationship between error and
num_gens. What shape is a line of constant badness (an isobad) on a
plot of number of generators needed for a complete otonality (or
diamond) against error in cents.

The simplest badness, num_gens * error, would mean the isobads are
hyperbolas (and num_gens^2 * error, or equivalently num_gens * sqrt(error),
is of course very similar), but I think it is clear that, for constant
badness, as error goes to zero, num_gens does _not_ go to infinity, but
levels off. Even for zero error there is a limit to how many generators
you can tolerate. I find it difficult to imagine anyone being seriously
interested in using a temperament that needs 30 generators to get a single
complete otonality, no matter how small the error is. And I think this
limiting number of generators decreases as the odd-limit decreases.

We can introduce this as a sudden limit, as Gene suggested, or we can
use some continuous function to make it come on gradually.

An isobad will also have a maximum number of cents error that can be
tolerated even when everything is approximated by a single generator.
Notice that the number of generators can't go below 1 (even for rms),
so we don't care what an isobad does for num_gens < 1.

What's a nice simple badness metric that will give us these effects?

> Oh yes. Seeing as a 7-limit microtemperament is now causing something of
> a storm, notice that the top 11-limit one is 26+46 (complexity of 30,
> errors within 2.5 cents). And the simplest with all errors below a cent
> is 118+152 (complexity of 74).

Yes, even though we don't consider it a microtemperament at the
11-limit, Miracle temperament is really a serious 11-limit optimum, by
any (reasonable) goodness measure. You have to pay an enormous cost in
extra complexity to get the max error even _slightly_ lower than
11-limit-Miracle's 3.3 cents, or an enormous cost in cents to get the
complexity down even slightly below 11-limit-Miracle's 22 generators.
Is that what you are indicating?

🔗paulerlich <paul@stretch-music.com>

12/6/2001 7:38:36 PM

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> Actually that looks like the 1% std dev in frequency that came from
> some dude's experiments with actual live humans experiencing actual
> air vibrations. Paul can you remind us who it was and what s/he
> measured?

It measured the typical uncertainties with which sine-wave partials
in an optimal frequency range were heard, based on the uncertainties
with which the virtual fundamentals were heard.

🔗dkeenanuqnetau <d.keenan@uq.net.au>

12/6/2001 7:47:20 PM

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
> What's a nice simple badness metric that will give us these effects?

Hey! What's wrong with simply

badness = num_gens + error_in_cents

(i.e. steps + cents)

or if that seems too arbitrary, how about agreeing on some value of k in

badness = k * num_gens + error_in_cents, where k ~= 1

or maybe even

badness = k/odd_limit * num_gens + error_in_cents, where k ~= 5
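
As a sketch (the function name and signature are mine), all three of those
are just:

def badness(num_gens, error_cents, k=1.0, odd_limit=None):
    # "steps + cents": weighted generator count plus optimised error.
    # With an odd_limit given, the weight becomes k/odd_limit (k ~= 5).
    weight = k / odd_limit if odd_limit else k
    return weight * num_gens + error_cents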

Wanna give this one a spin Graham?

🔗graham@microtonal.co.uk

12/7/2001 1:14:00 PM

Dave Keenan wrote:

> I note that Graham is using maximum width and (optimised) maximum
> error where Gene is using rms width and (optimised) rms error. It will
> be interesting to see if this alone makes much difference to the
> rankings. I doubt it.

I've implemented RMS error now. It's actually faster than the minimax, so
I've made it the default. I've uploaded new copies of the .txt and .gauss
files. There are also other changes to the code to make it more
efficient. As it stands, the ET matching is broken. I've fixed that, but
not uploaded.

You could implement the RMS width easily enough, but I expect it'll slow
down execution, so you can do it on your own time.

> So I see that while the gaussian with std error of 17 cents seems to
> do the right thing in eliminating temperaments with tiny errors but
> huge numbers of generators, it is too hard on those with larger
> errors. Notice that Ennealimmal is still in the 7-limit list (about
> number 22). The problem is that Paultone isn't there at all! It has
> 17.5 c error with 6 gens per tetrad.

I'm dividing the 17 cents by 3 in this case, to give a figure more like
what you asked for.

> Those lists don't contain any temperament with errors greater than 10
> cents. The 5-limit 163 cent neutral second temperament has the largest
> at 9.8 cents, with 5 generators per triad.
>
> So I have to agree with Paul that
> badness = num_gens^2 / gaussian(error/17c)
> doesn't work.

It works fine. You asked for errors of around 6 cents, so why should you
expect errors greater than 10 cents?

Graham

🔗graham@microtonal.co.uk

12/7/2001 1:14:00 PM

paul@stretch-music.com (paulerlich) wrote:

> --- In tuning-math@y..., graham@m... wrote:
> > I've updated the script at <http://x31eq.com/temper.html> to
> > produce files using Dave Keenan's new figure of demerit. That is
> >
> > width**2 * math.exp((error/self.stdError*3)**2)
>
> I thought Dave Keenan wanted to use Gene's "step" measure. In
> addition, I think it should be weighted to favor the simpler
> consonances.

Yes, but width is the analog for the way I'm calculating it.

> Are you missing any "slippery" examples that don't come easily out of
> two ETs?

Yes, as always.

> Since you're doing so much work to get the unison vectors, shouldn't
> we be thinking about _starting_ with unison vectors?

Yes, I've done that, and so has Gene.

> > Another idea would be to take all the intervals between second-order
> > intervals below a certain size, and use them as unison vectors to
> > generate temperaments. I might try that.
>
> That should plug a lot of holes.

I need to be able to take all combinations. So far, I can only do that
for the 7-limit, where they're pairs. I'll have to think about the
general case. It'll probably involve recursion. I'm also worried about
the speed of this search, because there are going to be a lot more unison
vector combinations than ET pairs for the higher limits.

It may be more efficient to take different readings of inconsistent ETs.

Graham

🔗graham@microtonal.co.uk

12/7/2001 1:33:00 PM

Dave Keenan wrote:

> I note that Graham is using maximum width and (optimised) maximum
> error where Gene is using rms width and (optimised) rms error. It will
> be interesting to see if this alone makes much difference to the
> rankings. I doubt it.

Oh yes, I forgot to say before. Here's the difference RMS errors make in
the 11-limit:

1 1
2 2
4 4
3 3
7 6
5 9
15 5
12 14
6 16
14 17

The left hand column is the minimax ranking in terms of the RMS one, and
the other one is the other way round. So they mostly agree on the best
ones, but disagree on the mediocre ones.

To check my RMS optimization's working, is a 116.6722643 cent generator
right for Miracle in the 11-limit? RMS error of 1.9732 cents.

Graham

🔗dkeenanuqnetau <d.keenan@uq.net.au>

12/7/2001 2:33:29 PM

--- In tuning-math@y..., graham@m... wrote:
> Oh yes, I forgot to say before. Here's the difference RMS errors make in
> the 11-limit:
...
> The left hand column is the minimax ranking in terms of the RMS one, and
> the other one is the other way round. So they mostly agree on the best
> ones, but disagree on the mediocre ones.

Ok. Thanks. That was a good way of showing it.

> To check my RMS optimization's working, is a 116.6722643 cent generator
> right for Miracle in the 11-limit? RMS error of 1.9732 cents.

I get 116.678 and 1.9017. Did you include the squared error for 1:3
twice? I think you should since it occurs twice in an 11-limit hexad,
as both 1:3 and 3:9. So then you must divide by 15, not 14, to get the
mean.

Actually, I see that this doesn't explain our discrepancy.

🔗genewardsmith <genewardsmith@juno.com>

12/7/2001 4:30:26 PM

--- In tuning-math@y..., graham@m... wrote:

> I need to be able to take all combinations. So far, I can only do that
> for the 7-limit, where they're pairs. I'll have to think about the
> general case. It'll probably involve recursion.

It's certainly possible to start with a certain prime limit, and use
that for the next one; I've been thinking about that from the point
of view of 5-->7.

> I'm also worried about
> the speed of this search, because there are going to be a lot more unison
> vector combinations than ET pairs for the higher limits.

Already for the 11-limit you need to wedge three unison vectors to
get the wedgie for a linear temperament, but only two ets. However,
to get the wedgie for a *planar* temperament, it is two unisons vs
three ets, and in higher limits it gets more involved yet.
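
For the two-et 7-limit case the wedge is just the six 2x2 minors of the two
vals. A quick sketch (function name mine, sign convention arbitrary), using
the pair of vals that appears later in the thread as Paultone's "mapping by
steps":

from itertools import combinations

def wedge2(v1, v2):
    # wedge product of two vals: one 2x2 minor per pair of primes
    return [v1[i] * v2[j] - v1[j] * v2[i]
            for i, j in combinations(range(len(v1)), 2)]

val12 = [12, 19, 28, 34]   # 7-limit val for 12
val10 = [10, 16, 23, 28]   # 7-limit val for 10
print(wedge2(val12, val10))   # [2, -4, -4, -11, -12, 2]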

🔗genewardsmith <genewardsmith@juno.com>

12/7/2001 4:58:28 PM

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
> --- In tuning-math@y..., graham@m... wrote:
> > Oh yes, I forgot to say before. Here's the difference RMS errors make in
> > the 11-limit:
> ...
> > The left hand column is the minimax ranking in terms of the RMS one, and
> > the other one is the other way round. So they mostly agree on the best
> > ones, but disagree on the mediocre ones.
>
> Ok. Thanks. That was a good way of showing it.
>
> > To check my RMS optimization's working, is a 116.6722643 cent generator
> > right for Miracle in the 11-limit? RMS error of 1.9732 cents.
>
> I get 116.678 and 1.9017.

I got 116.672264296056... which checks with Graham, so that's
progress of some kind.

🔗dkeenanuqnetau <d.keenan@uq.net.au>

12/7/2001 6:11:04 PM

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> I got 116.672264296056... which checks with Graham, so that's
> progress of some kind.

So what's wrong with this spreadsheet?

http://dkeenan.com/Music/Miracle/Miracle11RMS.xls

🔗paulerlich <paul@stretch-music.com>

12/7/2001 6:12:56 PM

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> I got 116.672264296056... which checks with Graham, so that's
> progress of some kind.

I get 116.6775720762089, which agrees with Dave. Gene, did you have
15 error terms like we did?

🔗graham@microtonal.co.uk

12/7/2001 9:01:00 PM

Me:
> > To check my RMS optimization's working, is a 116.6722643 cent generator
> > right for Miracle in the 11-limit? RMS error of 1.9732 cents.

Dave:
> I get 116.678 and 1.9017. Did you include the squared error for 1:3
> twice? I think you should since it occurs twice in an 11-limit hexad,
> as both 1:3 and 3:9. So then you must divide by 15, not 14, to get the
> mean.

I include 1:3 and 1:9.

> Actually, I see that this doesn't explain our discrepancy.

It may depend on whether or not you include the zero error for 1/1 in the
mean.

Graham

🔗dkeenanuqnetau <d.keenan@uq.net.au>

12/7/2001 10:22:38 PM

--- In tuning-math@y..., graham@m... wrote:
> It may depend on whether or not you include the zero error for 1/1 in the
> mean.

I don't. Seems like a silly idea. And that wouldn't change _where_ the
minimum occurs.

Are you able to look at the Excel spreadsheet I gave the URL for in my
previous message in this thread?

🔗graham@microtonal.co.uk

12/8/2001 1:32:00 PM

Me:
> > It may depend on whether or not you include the zero error for 1/1 in
> > the mean.

Dave:
> I don't. Seems like a silly idea. And that wouldn't change _where_ the
> minimum occurs.

Yes, won't change the position. But, looking carefully at your previous
mail, I see you're including 1/3, 9/3 and 9/1, so that'll be it. I remove
the duplicates.
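
A quick numerical check bears that out. This sketch (assuming the usual
Miracle mapping, with the octave as period and the secor as generator)
reproduces both figures from the two interval sets:

from math import log2
from itertools import combinations

# (octaves, secors) for each odd identity in Miracle's 11-limit mapping
MAP = {1: (0, 0), 3: (1, 6), 5: (3, -7), 7: (3, -2), 9: (2, 12), 11: (2, 15)}

def rms_optimal_secor(drop_9_3):
    # Least-squares optimal generator over all intervals between the odd
    # identities, with octaves held at exactly 1200 cents.
    pairs = [p for p in combinations(sorted(MAP), 2)
             if not (drop_9_3 and set(p) == {3, 9})]
    num = den = 0.0
    for a, b in pairs:
        oct_a, sec_a = MAP[a]
        oct_b, sec_b = MAP[b]
        dsec = sec_b - sec_a                              # secors in the interval
        dt = 1200 * (log2(b) - log2(a) - oct_b + oct_a)   # cents they must supply
        num += dsec * dt
        den += dsec * dsec
    return num / den

print(rms_optimal_secor(drop_9_3=False))   # ~116.678 (9:3 kept: 15 intervals)
print(rms_optimal_secor(drop_9_3=True))    # ~116.672 (9:3 removed: 14 intervals)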

> Are you able to look at the Excel spreadsheet I gave the URL for in my
> previous message in this thread?

I'll be able to look at it on Monday, when I get back to work. I *might*
be able to check it in Star Office first, but probably won't.

Graham

🔗dkeenanuqnetau <d.keenan@uq.net.au>

12/9/2001 7:35:32 PM

--- In tuning-math@y..., graham@m... wrote:
> Yes, won't change the position. But, looking carefully at your previous
> mail, I see you're including 1/3, 9/3 and 9/1, so that'll be it. I remove
> the duplicates.

Whoa there! It's arguable that one could omit 3:9 since it is a
duplicate of 1:3 (although Paul and I agree it should stay duplicated
because the interval really does occur at two different places in the
complete chord) but 1:9 isn't a duplicate of anything. You can't
ignore the 1:9 error. It really exists. When you listen to a bare 4:9
you don't hear two 2:3 errors.

🔗paulerlich <paul@stretch-music.com>

12/9/2001 7:47:28 PM

--- In tuning-math@y..., graham@m... wrote:
> Me:
> > > It may depend on whether or not you include the zero error for 1/1
> > > in the mean.
>
> Dave:
> > I don't. Seems like a silly idea. And that wouldn't change _where_ the
> > minimum occurs.
>
> Yes, won't change the position. But, looking carefully at your previous
> mail, I see you're including 1/3, 9/3 and 9/1, so that'll be it. I remove
> the duplicates.

9/3 is not a duplicate of 3/1 -- all saturated chords in the 11-limit
contain not one, but two of the intervals that this is equal to. And
9/1 is a duplicate of what?

🔗genewardsmith <genewardsmith@juno.com>

12/9/2001 9:07:46 PM

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> Whoa there! It's arguable that one could omit 3:9 since it is a
> duplicate of 1:3 (although Paul and I agree it should stay duplicated
> because the interval really does occur at two different places in the
> complete chord) but 1:9 isn't a duplicate of anything.

Since Graham checked with me, he must be including 9, and counting 3
once (on the grounds that there is only one 3 among all the numbers.)
Since we already double up on 3 a lot by having 3,9,5/3,9/5,7/3,9/7,
11/3,11/9 we are giving it quite a lot of weight as it is. I don't
think there are any pragmatic arguments either way, but counting it
twice seems a little random to me.

🔗graham@microtonal.co.uk

12/10/2001 5:06:00 AM

Me:
> > Yes, won't change the position. But, looking carefully at your previous
> > mail, I see you're including 1/3, 9/3 and 9/1, so that'll be it. I
> > remove the duplicates.

Paul:
> 9/3 is not a duplicate of 3/1 -- all saturated chords in the 11-limit
> contain not one, but two of the intervals that this is equal to. And
> 9/1 is a duplicate of what?

9/3 and 3/1 are duplicates. They both simplify to the same interval. 9/1 isn't
a duplicate, so I don't remove it.

What about 3/3, 5/5, 7/7, 9/9 and 11/11? Should they be included in the
average? How about 11/8, 16/11, 32/11, 11/4, 11/2, etc?

Graham

🔗paulerlich <paul@stretch-music.com>

12/10/2001 12:35:55 PM

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
>
> > Whoa there! It's arguable that one could omit 3:9 since it is a
> > duplicate of 1:3 (although Paul and I agree it should stay duplicated
> > because the interval really does occur at two different places in the
> > complete chord) but 1:9 isn't a duplicate of anything.
>
> Since Graham checked with me, he must be including 9, and counting 3
> once (on the grounds that there is only one 3 among all the numbers.)
> Since we already double up on 3 a lot by having 3,9,5/3,9/5,7/3,9/7,
> 11/3,11/9 we are giving it quite a lot of weight as it is. I don't
> think there are any pragmatic arguments either way, but counting it
> twice seems a little random to me.

Sorry, I have to disagree. Graham is specifically considering the
harmonic entity that consists of the first N odd numbers, in a chord.
If a particular interval occurs twice, then we _have_ to weight it
twice. And this is to say nothing of all the other saturated chords
Graham is not using!

🔗paulerlich <paul@stretch-music.com>

12/10/2001 12:37:57 PM

--- In tuning-math@y..., graham@m... wrote:

> What about 3/3, 5/5, 7/7, 9/9 and 11/11? Should they be included in the
> average?

It wouldn't affect the result.

> How about 11/8, 16/11, 32/11, 11/4, 11/2, etc?

If you included an equal number of octave-equivalents to each ratio,
again, the result wouldn't be affected.

🔗genewardsmith <genewardsmith@juno.com>

12/10/2001 12:42:42 PM

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> Sorry, I have to disagree. Graham is specifically considering the
> harmonic entity that consists of the first N odd numbers, in a chord.

That may be what Graham was doing, but it wasn't what I was doing; I
seldom go beyond four parts.

🔗paulerlich <paul@stretch-music.com>

12/10/2001 2:49:25 PM

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:
>
> > Sorry, I have to disagree. Graham is specifically considering the
> > harmonic entity that consists of the first N odd numbers, in a chord.
>
> That may be what Graham was doing, but it wasn't what I was doing; I
> seldom go beyond four parts.

Even if you don't, don't you think chords like

1:3:5:9
1:3:7:9
1:3:9:11
10:12:15:18
12:14:18:21
18:22:24:33

which contain only 11-limit consonant intervals, would be important
to your music?

🔗genewardsmith <genewardsmith@juno.com>

12/10/2001 10:49:47 PM

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> 1:3:5:9
> 1:3:7:9
> 1:3:9:11
> 10:12:15:18
> 12:14:18:21
> 18:22:24:33
>
> which contain only 11-limit consonant intervals, would be important
> to your music?

Indeed they are, but they are taken care of.

🔗paulerlich <paul@stretch-music.com>

12/10/2001 11:30:38 PM

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:
>
> > 1:3:5:9
> > 1:3:7:9
> > 1:3:9:11
> > 10:12:15:18
> > 12:14:18:21
> > 18:22:24:33
> >
> > which contain only 11-limit consonant intervals, would be important
> > to your music?
>
> Indeed they are, but they are taken care of.

Shouldn't you weight them _twice_ if they're occurring twice as often?
How can you justify equal-weighting?

🔗genewardsmith <genewardsmith@juno.com>

12/10/2001 11:58:41 PM

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> Shouldn't you weight them _twice_ if they're occurring twice as often?
> How can you justify equal-weighting?

I do weight them twice, more or less, depending on how you define
this. 3 is weighted once as a 3, and then its error is doubled, so it
is weighted again 4 times as much from 3 and 9 together; so 3 is
weighted 5 times, or 9 1.25 times, from one point of view. Then we
double dip with 5/3, 5/9 etc. with similar effect.

🔗paulerlich <paul@stretch-music.com>

12/11/2001 12:06:23 AM

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:
>
> > Shouldn't you weight them _twice_ if they're occurring twice as often?
> > How can you justify equal-weighting?
>
> I do weight them twice, more or less, depending on how you define
> this. 3 is weighted once as a 3, and then its error is doubled, so it
> is weighted again 4 times as much from 3 and 9 together; so 3 is
> weighted 5 times, or 9 1.25 times, from one point of view. Then we
> double dip with 5/3, 5/9 etc. with similar effect.

I don't see it that way. 9:1 is an interval of its own and needs to
be weighted independently of whether any 3:1s or 9:3s are actually
used. 5/3 and 5/9 could be seen as weighting 5 commensurately more, but I
don't buy the "double dip" bit one bit!

These are conclusions I've reached after years of playing with
tunings with large errors and comparing them and thinking hard about
this problem.

🔗graham@microtonal.co.uk

12/11/2001 4:55:00 AM

Paul:
> > > Sorry, I have to disagree. Graham is specifically considering the
> > > harmonic entity that consists of the first N odd numbers, in a chord.

I'm considering a set of consonant intervals with equal weighting. 9:3
and 3:1 are the same interval. Perhaps I should improve my minimax
algorithm so the whole debate becomes moot.

Gene:
> > That may be what Graham was doing, but it wasn't what I was doing; I
> > seldom go beyond four parts.

Paul:
> Even if you don't, don't you think chords like
>
> 1:3:5:9
> 1:3:7:9
> 1:3:9:11
> 10:12:15:18
> 12:14:18:21
> 18:22:24:33
>
> which contain only 11-limit consonant intervals, would be important
> to your music?

Yes, but so is 3:4:5:6 which involves both 2:3 and 3:4. And 1/1:11/9:3/2,
which has two neutral thirds (and so far I've not used 11-limit
temperaments in which this doesn't work) so should they be weighted
double? My experience so far of Miracle is that the "wolf fourth" of 4
secors is also important, but I don't have a rational approximation (21:16
isn't quite right). It may be that chords of 0-2-4-6-8 secors become
important, in which case 8:7 should be weighted three times as high as
12:7 and twice as high as 3:2.

I'd much rather stay with the simple rule that all consonant intervals are
weighted equally until we can come up with an improved, subjective
weighting. For that, I'm thinking of taking Partch at his word weighting
more complex intervals higher. But Paul was talking about a Tenney
metric, which would have the opposite effect. So it looks like we're not
going to agree on that one.

Graham

🔗paulerlich <paul@stretch-music.com>

12/11/2001 6:21:53 PM

--- In tuning-math@y..., graham@m... wrote:

> Yes, but so is 3:4:5:6 which involves both 2:3 and 3:4.

But you can do that with _any_ interval.

> And 1/1:11/9:3/2,
> which has two neutral thirds (and so far I've not used 11-limit
> temperaments in which this doesn't work) so should they be weighted
> double?

Only if that were your target harmony. I thought hexads were your
target harmony.

> My experience so far of Miracle is that the "wolf fourth" of 4
> secors is also important, but I don't have a rational approximation
> (21:16 isn't quite right).

What do you mean it's important?

> It may be that chords of 0-2-4-6-8 secors become
> important, in which case 8:7 should be weighted three times as high as
> 12:7 and twice as high as 3:2.

If that was the harmony you were targeting, sure.

> I'd much rather stay with the simple rule that all consonant intervals are
> weighted equally until we can come up with an improved, subjective
> weighting. For that, I'm thinking of taking Partch at his word weighting
> more complex intervals higher. But Paul was talking about a Tenney
> metric, which would have the opposite effect. So it looks like we're not
> going to agree on that one.

If you don't agree with me that you're targeting the hexad (I thought
you had said as much at one point, when I asked you to consider
running some lists for other saturated chords), then maybe we better
go to minimax (of course, we'll still have a problem in cases like
paultone, where the maximum error is fixed -- what do we do then, go
to 2nd-worst error?).

🔗dkeenanuqnetau <d.keenan@uq.net.au>

12/11/2001 8:13:51 PM

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:
> If you don't agree with me that you're targeting the hexad (I thought
> you had said as much at one point, when I asked you to consider
> running some lists for other saturated chords), then maybe we better
> go to minimax (of course, we'll still have a problem in cases like
> paultone, where the maximum error is fixed -- what do we do then, go
> to 2nd-worst error?).

Yes. That's what I do. You still give the error as the worst one, but
you give the optimum generator based on the worst error that actually
_depends_ on the generator (as opposed to being fixed because it only
depends on the period).
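
Here's a little sketch of that rule for Paultone (half-octave period, with
the mapping by period and generator [(2, 0), (3, 1), (5, -2), (6, -2)] that
Graham posts below). It just picks out the consonances whose error is fixed
by the period alone; the helper and names are mine:

from math import log2

PERIOD = 600.0
MAP = {2: (2, 0), 3: (3, 1), 5: (5, -2), 7: (6, -2)}   # (periods, generators)

def monzo(n, d):
    # exponents of 2, 3, 5, 7 in the ratio n/d
    exps = []
    for p in (2, 3, 5, 7):
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        while d % p == 0:
            d //= p
            e -= 1
        exps.append(e)
    return exps

for n, d in [(3, 2), (5, 4), (7, 4), (5, 3), (7, 6), (7, 5)]:
    exps = monzo(n, d)
    periods = sum(e * MAP[p][0] for e, p in zip(exps, (2, 3, 5, 7)))
    gens = sum(e * MAP[p][1] for e, p in zip(exps, (2, 3, 5, 7)))
    if gens == 0:   # the error doesn't depend on the generator at all
        print(n, ":", d, "fixed error of",
              round(periods * PERIOD - 1200 * log2(n / d), 3), "cents")

Only 7:5 (and so 10:7) comes out, fixed at 17.488 cents, which is exactly
the "highest error" figure quoted below; so that gets reported, and the
generator is optimised over the rest.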

🔗graham@microtonal.co.uk

12/12/2001 2:47:00 AM

Paul wrote:

> If you don't agree with me that you're targeting the hexad (I thought
> you had said as much at one point, when I asked you to consider
> running some lists for other saturated chords), then maybe we better
> go to minimax (of course, we'll still have a problem in cases like
> paultone, where the maximum error is fixed -- what do we do then, go
> to 2nd-worst error?).

The hexads are targeted by the complexity formula. But that's because
it's the simplest such measure, not because I actually think they're
musically useful. I'm coming to the opinion that anything over a 7-limit
tetrad is quite ugly, but some smaller 11-limit chords (and some chords
with the four-secor wolf) are strikingly beautiful, if they're tuned right.
So Blackjack is a good 11-limit scale although it doesn't contain any
hexads.

I've always preferred minimax as a measure, but currently speed is the
most important factor. The RMS optimum can be calculated much faster, and
although I can improve the minimax algorithm I don't think it can be made
as fast.
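
The speed difference is because the RMS optimum has a closed form: each
error is linear in the generator, so the least-squares minimum drops out in
a single pass, whereas minimax needs a search. A sketch (names mine):

def rms_optimum(coeffs, targets):
    # errors are coeffs[i]*g - targets[i]; setting the derivative of the
    # sum of squared errors to zero gives g directly
    num = sum(c * t for c, t in zip(coeffs, targets))
    den = sum(c * c for c in coeffs)
    return num / den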

Scales such as Paultone can be handled by excluding all intervals that
don't depend on the generator. But the value used for rankings still has
to include all intervals. My program should be doing this, but I'm not
sure if it is working correctly, so if you'd like to check, this should be
Paultone minimax:

2/11, 106.8 cent generator

basis:
(0.5, 0.089035952556318909)

mapping by period and generator:
[(2, 0), (3, 1), (5, -2), (6, -2)]

mapping by steps:
[(12, 10), (19, 16), (28, 23), (34, 28)]

highest interval width: 3
complexity measure: 6 (8 for smallest MOS)
highest error: 0.014573 (17.488 cents)
unique

9:7 =~ 32:25 =~ 64:49
8:7 =~ 9:8
4:3 =~ 21:16
35:32 =~ 10:9

consistent with: 10, 12, 22

Hmm, why isn't 7:5 =~ 10:7 on that list?

Graham

🔗paulerlich <paul@stretch-music.com>

12/12/2001 1:04:37 PM

--- In tuning-math@y..., graham@m... wrote:

> so if you'd like to check this should be
> Paultone minimax:
>
>
> 2/11, 106.8 cent generator

That's clearly wrong, as the 7:4 is off by 17.5 cents!

> basis:
> (0.5, 0.089035952556318909)
>
> mapping by period and generator:
> [(2, 0), (3, 1), (5, -2), (6, -2)]
>
> mapping by steps:
> [(12, 10), (19, 16), (28, 23), (34, 28)]
>
> highest interval width: 3
> complexity measure: 6 (8 for smallest MOS)
> highest error: 0.014573 (17.488 cents)
> unique

I don't think it should count as unique since

> 7:5 =~ 10:7

🔗graham@microtonal.co.uk

12/13/2001 3:47:00 AM

Me:
> > so if you'd like to check this should be
> > Paultone minimax:
> >
> >
> > 2/11, 106.8 cent generator

Paul:
> That's clearly wrong, as the 7:4 is off by 17.5 cents!

Well, that is interesting. It turns out the minimax temperament
corresponds to a just 35:24. But I'd previously assumed that the minimax
had to have a just interval within the consonance limit. Well, I've added
some sticking tape to the algorithm, but I'm not sure it'll hold.

This is what I get now:

2/11, 109.4 cent generator

basis:
(0.5, 0.09113589675523795)

mapping by period and generator:
[(2, 0), (3, 1), (5, -2), (6, -2)]

mapping by steps:
[(12, 10), (19, 16), (28, 23), (34, 28)]

highest interval width: 3
complexity measure: 6 (8 for smallest MOS)
highest error: 0.014573 (17.488 cents)

7:5 =~ 10:7

consistent with: 10, 12, 22

> I don't think it should count as unique since
>
> > 7:5 =~ 10:7

Yes, that was a different problem. I wasn't including
tritone-equivalences as duplicates.

Graham