
Metastable Intervals and Tenney Complexity

🔗Kalle Aho <kalleaho@mappi.helsinki.fi>

10/9/2007 8:16:39 AM

Hi,

if the point of metastability between two intervals should be
proportional to the complexities of the two intervals, why not weight
them in the calculation with something that really is proportional to
their complexities, for example Tenney weighting?

Kalle Aho

🔗Dave Keenan <d.keenan@bigpond.net.au>

10/9/2007 5:56:59 PM

Hi Kalle,

That would probably give a reasonable result. But don't take the
planetary analogy too seriously. It isn't really a "two-body problem".
Consider the metastable major third. It must avoid not only 3:4 and
4:5 but also 7:9 and to some small degree 11:14. The appropriate part
of the Stern Brocot tree looks like the following. (If viewing on the
web, choose: Show message option/Use fixed width font):

4/3
      9/7
            23/18
         14/11 \/\/\...
   5/4
1/1

With Tenney weighting you would get a different result depending on
which adjacent two you choose. With Phi weighting, it doesn't matter
which adjacent two you choose, you will always get the same result. In
effect, the phi weighting takes them all into account, by giving the
limit of the Fibonacci-like series. That is,

NobleMediant(1/1, 4/3) =
NobleMediant(4/3, 5/4) =
NobleMediant(5/4, 9/7) =
NobleMediant(9/7, 14/11) =
....
= (1+4*phi)/(1+3*phi)
~= 1.276
~= 422.5 cents
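Dave's claim that every adjacent pair along the zigzag gives the same limit is easy to check numerically. A minimal sketch (the function name noble_mediant is ours, not from the thread):

```python
from math import log

PHI = (1 + 5 ** 0.5) / 2  # the golden ratio, ~1.618

def noble_mediant(a, b, c, d):
    """Noble mediant of a/b and c/d, with the phi weight on c/d
    (conventionally the more complex interval)."""
    return (a + c * PHI) / (b + d * PHI)

def cents(ratio):
    """Size of a frequency ratio in cents."""
    return 1200 * log(ratio, 2)

# Each adjacent pair on the zigzag toward the metastable major third
# yields the same value, ~1.276, i.e. ~422.5 cents:
for pair in [(1, 1, 4, 3), (4, 3, 5, 4), (5, 4, 9, 7), (9, 7, 14, 11)]:
    r = noble_mediant(*pair)
    print(round(r, 4), round(cents(r), 1))
```
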

Assuming a harmonic timbre, this is a point at which the partials of
the two notes maximally fail to align with each other. You could call
it "maximal beating" from a theoretical or physics point of view, but
it doesn't _sound_ like beating at all, presumably because there are
so many harmonics interacting in such complex ways.

I suggest we do not refer to these as "noble" or "golden" intervals
since those terms are already in use for intervals which are noble
fractions of an octave (logarithmically) and as such refer to a
melodic property rather than the harmonic one we are referring to here.

We could call them metastable or "noble intonation"/"nobly intoned"
(NI) intervals. I would prefer to define these by the way they _sound_
when their pitch is varied up and down, in the same way I like to
define justly intoned intervals by the way _they_ sound when varied.

That way we can hopefully agree they exist without having to agree on
any particular theory as to why.

-- Dave Keenan

--- In tuning@yahoogroups.com, "Kalle Aho" <kalleaho@...> wrote:
>
> Hi,
>
> if the point of metastability between two intervals should be
> proportional to the complexities of the two intervals why not weight
> them in the calculation with something that really is proportional to
> their complexities, for example Tenney weighting?
>
> Kalle Aho
>

🔗Carl Lumma <carl@lumma.org>

10/9/2007 11:28:14 PM

Dave- wouldn't it be better just to take the maxima of harmonic
entropy? Has anybody checked if these correspond to noble mediants?

-Carl

🔗Dave Keenan <d.keenan@bigpond.net.au>

10/10/2007 12:14:00 AM

Carl,

RTFM ;-)

http://dkeenan.com/Music/NobleMediant.txt

As shown in the paper, the largest discrepancy we found was 8.5 cents
for the metastable subfourth (between 7:9 and 3:4). Most were within
2.6 cents.

If you can look up a table or a chart of HE then sure. But which value
of the S parameter and what kind of mediant should you use to generate
this chart or table?

An enormous amount of computation is involved in HE. In comparison,
the noble mediant function is a closed form expression that is easy to
remember and calculate whenever you need it. It's just a mediant
that's phi-weighted towards the most complex interval. As we say in
the paper, consider it a rule-of-thumb. And metastable intervals are
not audibly as sharply defined as just intervals.

-- Dave Keenan

--- In tuning@yahoogroups.com, "Carl Lumma" <carl@...> wrote:
>
> Dave- wouldn't it be better just to take the maxima of harmonic
> entropy? Has anybody checked if these correspond to noble mediants?
>
> -Carl
>

🔗Kalle Aho <kalleaho@mappi.helsinki.fi>

10/10/2007 1:34:55 AM

--- In tuning@yahoogroups.com, "Dave Keenan" <d.keenan@...> wrote:
>
> Hi Kalle,
>
> That would probably give a reasonable result. But don't take the
> planetary analogy too seriously. It isn't really a "two-body problem".

Ok, is there another metastable interval then that is nearer the less
complex interval?

> Consider the metastable major third. It must avoid not only 3:4 and
> 4:5 but also 7:9 and to some small degree 11:14. The appropriate part
> of the Stern Brocot tree looks like the following. (If viewing on the
> web, choose: Show message option/Use fixed width font):
>
> 4/3
>       9/7
>             23/18
>          14/11 \/\/\...
>    5/4
> 1/1
>
> With Tenney weighting you would get a different result depending on
> which adjacent two you choose. With Phi weighting, it doesn't matter
> which adjacent two you choose, you will always get the same result. In
> effect, the phi weighting takes them all into account, by giving the
> limit of the Fibonacci-like series. That is,
>
> NobleMediant(1/1, 4/3) =
> NobleMediant(4/3, 5/4) =
> NobleMediant(5/4, 9/7) =
> NobleMediant(9/7, 14/11) =
> ....
> = (1+4*phi)/(1+3*phi)
> ~= 1.276
> ~= 422.5 cents

Is there then some kind of shadow of the Stern-Brocot tree which
contains all the metastable intervals? Do they have any kind of
systematic relationships between them, do they form an algebra? Would
it be possible to build scales with metastable intervals? Perhaps
those wouldn't be octave- but phi-based. That's a lot of questions, I
know, but this is über-interesting!

> Assuming a harmonic timbre, this is a point at which the partials of
> the two notes maximally fail to align with each other. You could call
> it "maximal beating" from a theoretical or physics point of view, but
> it doesn't _sound_ like beating at all, presumably because there are
> so many harmonics interacting in such complex ways.

This suggests that the notes are also heard as maximally separate, a
handy feature in dissonances that "want" to resolve.

I wonder, is there some kind of ideal diminished fifth or augmented
fourth between 3:2 and 4:3? What about a metastable "meantone" between
9:8 and 10:9? That gives a fifth pretty close to the LucyTuning fifth.

> I suggest we do not refer to these as "noble" or "golden" intervals
> since those terms are already in use for intervals which are noble
> fractions of an octave (logarithmically) and as such refer to a
> melodic property rather than the harmonic one we are referring to here.
>
> We could call them metastable or "noble intonation"/"nobly intoned"
> (NI) intervals. I would prefer to define these by the way they _sound_
> when their pitch is varied up and down, in the same way I like to
> define justly intoned intervals by the way _they_ sound when varied.
>
> That way we can hopefully agree they exist without having to agree on
> any particular theory as to why.
>
> -- Dave Keenan

🔗Dave Keenan <d.keenan@bigpond.net.au>

10/10/2007 5:52:51 AM

--- In tuning@yahoogroups.com, "Kalle Aho" <kalleaho@...> wrote:
> Ok, is there another metastable interval then that is nearer the less
> complex interval?

There may be, in the case where there is a third interval, more
complex but still audibly just, between the outer two. I encourage
anyone interested in this to do their own listening experiments.

I must emphasise that not all noble numbers correspond to audibly
metastable intervals, but we do expect that every audibly metastable
interval will be at a simple noble number.

This is related to the fact that not all rational numbers correspond
to audibly just intervals but every audibly just interval is a simple
rational. Assuming harmonic timbres in both cases of course.

> Is there then some kind of shadow of Stern-Brocot tree which contains
> all the metastable intervals?

Yes. And although I can find no publication of Erv's that relates the
nobles (other than Lorne Temes' phi) to what we are now calling
metastable intervals, he certainly did a beautiful job of diagramming
the noble numbers and their relationship to the Stern-Brocot tree.

See http://www.anaphoria.com/sctree.PDF

Only those few whose dotted lines start high up on the page are likely
to correspond to metastable intervals.

> Do they have any kind of systematic
> relationships between them,

Sure. They are all of the form

i + m*phi
---------
j + n*phi

where i, j, m and n are natural numbers, and i*j < m*n, and i/j and
m/n are adjacent on the Stern-Brocot tree.
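The conditions Dave lists can be checked directly. In this sketch (the helper names are ours), adjacency on the Stern-Brocot tree is tested via the standard determinant condition |i*n - j*m| = 1, which is an equivalent restatement of "adjacent on the tree":

```python
PHI = (1 + 5 ** 0.5) / 2  # the golden ratio

def is_sb_adjacent(i, j, m, n):
    """i/j and m/n are neighbours on the Stern-Brocot tree exactly
    when the determinant |i*n - j*m| equals 1."""
    return abs(i * n - j * m) == 1

def noble(i, j, m, n):
    """Noble number (i + m*phi)/(j + n*phi), with i/j the simpler
    fraction (i*j < m*n) and the two fractions tree-adjacent."""
    assert is_sb_adjacent(i, j, m, n) and i * j < m * n
    return (i + m * PHI) / (j + n * PHI)

# The metastable major third again: i/j = 1/1, m/n = 4/3.
print(noble(1, 1, 4, 3))  # ~1.2764
```
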

> do they form an algebra?

I'm sorry I don't know. Sounds like a question for Gene on the
tuning-math list.

> Would it be
> possible to build scales with metastable intervals?

I expect so. But probably not _all_ the intervals could be metastable,
just as not _all_ the intervals of a typical JI scale are audibly
just, unless it is a very small scale (say 5 notes or less).

> Perhaps those
> wouldn't be octave- but phi-based.

Could be, but that certainly won't be experienced as an interval of
equivalence.

> That's a lot of questions, I know,
> but this is über-interesting!

Great! Thanks, Cameron, for igniting interest in this topic.

> This suggests that the notes are also heard as maximally separate,

No, I don't think that is the case, but you must listen for yourself.

> handy feature in dissonances that "want" to resolve.

But it presumably doesn't have any great preference as to which side
it wants to resolve to. And, as Cameron puts it, it might function as
either (or both) stable and unstable depending on context.

I don't think the whole resolution thing is very well understood,
particularly in its relation to melody and voice-leading (at least not
by me :-).

I would really love to hear something done with this. Do you have any
demonstration .mpg's or .mid's Cameron?

> I wonder, is there some kind of ideal diminished fifth or augmented
> fourth between 3:2 and 4:3?

In theory at least. So we pull up a spreadsheet and calculate
=(SQRT(5)+1)/2
which gives us phi,
and then we use that (say it's in cell A1) to calculate
=(3+4*A1)/(2+3*A1)
and we get 1.381966...
Then we convert that (say it's in cell A2) to cents with
=LN(A2)/LN(2)*1200
and we get 560 cents.

No need for decimal places of cents since these things are not sharply
defined.
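The same three spreadsheet steps, transcribed as a Python sketch:

```python
from math import log, sqrt

phi = (sqrt(5) + 1) / 2                # =(SQRT(5)+1)/2
ratio = (3 + 4 * phi) / (2 + 3 * phi)  # =(3+4*A1)/(2+3*A1), phi weight on 4/3
cents = log(ratio, 2) * 1200           # =LN(A2)/LN(2)*1200
print(round(ratio, 6), round(cents))   # ~1.381966, ~560 cents
```
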

> What about a metastable "meantone" between
> 9:8 and 10:9? That gives a fifth pretty close to LucyTuning fifth.

Well now, I think a metastable wholetone or major second is unlikely,
as 8:9 and particularly 9:10 are not particularly strong attractors to
begin with. But it's worth a listen. And yes, that's a 190 cent whole
tone, and a 695 cent meantone fifth will generate it.

As Carl suggested, take a look at the local maxima on a Harmonic
Entropy chart, as a better predictor of where to find metastable
intervals. A lot depends on the actual timbre of course.
http://sonic-arts.org/td/erlich/entropy-erlich.htm

-- Dave Keenan

🔗Dave Keenan <d.keenan@bigpond.net.au>

10/10/2007 5:58:12 AM

I should have said, "but we do expect that every audibly metastable
interval will be _near_ a simple noble number."

-- Dave Keenan

🔗Dave Keenan <d.keenan@bigpond.net.au>

10/10/2007 7:38:55 AM

--- In tuning@yahoogroups.com, "Kalle Aho" <kalleaho@...> wrote:
> Do they have any kind of systematic
> relationships between them, do they form an algebra?

Here's the Mathworld definition.
http://mathworld.wolfram.com/NobleNumber.html

-- Dave Keenan

🔗Carl Lumma <carl@lumma.org>

10/10/2007 9:51:35 AM

Dave wrote...

> If you can look up a table or a chart of HE then sure. But
> which value of the S parameter and what kind of mediant
> should you use to generate this chart or table?

I prefer 1% for s. Do you mean what kind of series do we
take the mediants of? Paul showed it doesn't much matter,
but that Tenney complexity is in some sense the most
natural choice.

> In comparison, the noble mediant function is a closed form
> expression that is easy to remember and calculate whenever
> you need it.

Well, that's a point.

-Carl

🔗George D. Secor <gdsecor@yahoo.com>

10/10/2007 2:29:15 PM

--- In tuning@yahoogroups.com, "Dave Keenan" <d.keenan@...> wrote:
> --- In tuning@yahoogroups.com, "Carl Lumma" <carl@> wrote:
> >
> > Dave- wouldn't it be better just to take the maxima of harmonic
> > entropy? Has anybody checked if these correspond to noble mediants?
> >
> > -Carl
> >
>
> Carl,
>
> RTFM ;-)
>
> http://dkeenan.com/Music/NobleMediant.txt
>
> As shown in the paper, the largest discrepancy we found was 8.5 cents
> for the metastable subfourth (between 7:9 and 3:4). Most were within
> 2.6 cents.
>
> If you can look up a table or a chart of HE then sure. But which value
> of the S parameter and what kind of mediant should you use to generate
> this chart or table?

Hi Dave,

Several years ago Paul Erlich ran a Vos-curve for me with n*d<65536,
s=1%:
/harmonic_entropy/topicId_652.html#667
for which I asked for raw data in increments of 1 cent, which Paul
supplied:
/harmonic_entropy/topicId_652.html#669

These raw figures have been transferred to a spreadsheet, which I
just copied here:
/tuning/files/Secor/raw-entr.xls

The raw entropy numbers are in column C. (I was experimenting with a
formula to convert these numbers to reflect consonance relative to
1:1 in Col. B, which you can ignore.)

Anyway, I was curious to see how closely metastable intervals
correspond to this curve, so I added some cells to calculate these
(result in I19:I20). I noticed that reversing the order of the
consonances gives a different result, so I calculated that also, in
J19:J20.

I tried finding metastable intervals for several combinations of
intervals and found that they all come within a cent or two of local
HE maxima in this curve. So something seems to be working for the
particular parameters Paul used for this curve.

As Paul explained to me, changing the value of S can change the
sensitivity of the curve in such a way that local maxima can become
new local minima (or vice versa), so selection of a value can be a
bit arbitrary.

In case you're wondering how we arrived at this, you'll need to see
how this discussion started:
/tuning/topicId_38179.html#38291
The curve that Paul produced was a result of tweaking the parameters
in order to conform as closely as possible to the results of a
listening experiment I conducted with a retuned electronic organ many
years previous (note that at first I tended to use the term "maximum"
to refer to local maxima of *consonance* rather than harmonic
entropy.). I also ran a couple more experiments with my Scalatron
during the course of our discussion to re-evaluate what I perceived
to be the most harmonically dissonant intervals just above 1:1 and
below 1:2, which gave Paul some idea as to what to aim for in
adjusting the HE parameters to produce a curve with HE maxima near
these points. (This was after I wrote my 17-tone paper, in which I
estimated the most discordant "semitone" at around 70 cents.) In the
curve we ended up with, the global HE max. is at 67 cents, with a
local sub-octave max. at 1139 cents.

I find the general agreement of the HE maxima in this curve with the
figures for metastable intervals most encouraging, to say the least.

> An enormous amount of computation is involved in HE. In comparison,
> the noble mediant function is a closed form expression that is easy to
> remember and calculate whenever you need it. It's just a mediant
> that's phi-weighted towards the most complex interval. As we say in
> the paper, consider it a rule-of-thumb. And metastable intervals are
> not audibly as sharply defined as just intervals.

Now that I've said all that, I guess I should go back and read the
noble mediant paper. :-)

--George

🔗Dave Keenan <d.keenan@bigpond.net.au>

10/10/2007 3:13:54 PM

--- In tuning@yahoogroups.com, "George D. Secor" <gdsecor@...> wrote:
> Anyway, I was curious to see how closely metastable intervals
> correspond to this curve, so I added some cells to calculate these
> (result in I19:I20). I noticed that reversing the order of the
> consonances gives a different result, so I calculated that also, in
> J19:J20.

I note that we define the noble mediant only as applying the phi
weight to the most complex interval. So you could easily alter your
formula to choose the correct one of the two values with an
IF(i*j<m*n, ...

> I tried finding metastable intervals for several combinations of
> intervals and found that they all come within a cent or two of local
> HE maxima in this curve. So something seems to be working for the
> particular parameters Paul used for this curve.

Thanks George. That's great.

🔗Dave Keenan <d.keenan@bigpond.net.au>

10/10/2007 4:27:15 PM

--- In tuning@yahoogroups.com, "Carl Lumma" <carl@...> wrote:
>
> Dave wrote...
>
> > If you can look up a table or a chart of HE then sure. But
> > which value of the S parameter and what kind of mediant
> > should you use to generate this chart or table?
>
> I prefer 1% for s. Do you mean what kind of series do we
> take the mediants of? Paul's showed it doesn't much matter,
> but that Tenney complexity is in some sense the most
> natural choice.

Hi Carl,

That's another parameter I had forgotten. I think there is a stage
where he takes mediants. I thought he had experimented with some
weighted ones. As we say in the paper, perhaps he should use noble
mediants in calculating HE.

George,

You speak of "calculating metastable intervals". Being my usual
pedantic self, I'd prefer to speak of "calculating noble mediants"
which may or may not correspond to metastable intervals.

-- Dave Keenan

🔗Dave Keenan <d.keenan@bigpond.net.au>

10/10/2007 8:19:05 PM

--- In tuning@yahoogroups.com, "George D. Secor" <gdsecor@...> wrote:
> Anyway, I was curious to see how closely metastable intervals
> correspond to this curve, so I added some cells to calculate these
> (result in I19:I20).
...
> I tried finding metastable intervals for several combinations of
> intervals and found that they all come within a cent or two of local
> HE maxima in this curve. So something seems to be working for the
> particular parameters Paul used for this curve.

Yes indeed. But I note that on no HE curve do noble mediants agree
with the maxima near the unison and octave (and to a lesser degree
near the fifth and fourth) i.e. near the really consonant intervals.

I used your spreadsheet to generate the following list of noble
mediants most likely to be perceived as metastable intervals.

Limit of Cents Description
-----------------------------------------------
5:6 6:7 11:13 ... 284c wide subminor third
4:5 5:6 9:11 ... 339c narrow neutral third
4:5 7:9 11:14 ... 422c narrow supermajor third
3:4 7:9 10:13 ... 448c narrow subfourth
3:4 5:7 8:11 ... 560c wide superfourth
2:3 5:7 7:10 ... 607c narrow diminished fifth
2:3 5:8 7:11 ... 792c narrow minor sixth
3:5 5:8 8:13 ... 833c narrow neutral sixth
3:5 4:7 7:12 ... 943c narrow subminor seventh or supermajor sixth
4:7 5:9 9:16 ... 1002c minor seventh
3:7 4:9 7:16 ... 1424c narrow supermajor 9th
2:5 3:7 5:12 ... 1503c narrow minor 10th
2:5 3:8 5:13 ... 1666c sub-11th
1:3 3:8 4:11 ... 1735c narrow super-11th
1:3 3:10 4:13 ... 2055c neutral 13th
2:7 3:10 5:17 ... 2109c wide major 13th or narrow supermajor 13th
2:7 3:11 5:18 ... 2226c narrow neutral 14th
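The cents column can be reproduced from the first two ratios in each row. A sketch covering the first six rows (noble_cents is our helper, not from the thread; intervals are written small:large as in the table):

```python
from math import log

PHI = (1 + 5 ** 0.5) / 2  # the golden ratio

def noble_cents(a, b, c, d):
    """Cents of the noble mediant of a:b and c:d (small:large form),
    with the phi weight on c:d, the more complex interval."""
    return 1200 * log((b + d * PHI) / (a + c * PHI), 2)

# First two ratios of the first six table rows:
rows = [((5, 6), (6, 7)), ((4, 5), (5, 6)), ((4, 5), (7, 9)),
        ((3, 4), (7, 9)), ((3, 4), (5, 7)), ((2, 3), (5, 7))]
for (a, b), (c, d) in rows:
    print(f"{a}:{b} {c}:{d}  {noble_cents(a, b, c, d):.0f}c")
```

This reproduces 284c, 339c, 422c, 448c, 560c and 607c, matching the table.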

It does appear, as Kalle Aho suggested, that repeating at the phi
neutral sixth would be a good way to make a scale out of them.

Now I really must get back to other things. Sorry Carl.

-- Dave Keenan

🔗Cameron Bobro <misterbobro@yahoo.com>

10/11/2007 1:50:36 AM

--- In tuning@yahoogroups.com, "Dave Keenan" <d.keenan@...> wrote:
>
> Hi Kalle,
>
> That would probably give a reasonable result. But don't take the
> planetary analogy too seriously. It isn't really a "two-body
> problem".

I believe the phi-mediant works with two bodies as well, creating
maximally "suspended" intervals between the two "bodies" you choose.
I have used 0.5*pi in this way, then tweaked by ear; it's quite close
to phi, but I think phi is better.

> Consider the metastable major third. It must avoid not only 3:4 and
> 4:5 but also 7:9 and to some small degree 11:14. The appropriate part
> of the Stern Brocot tree looks like the following. (If viewing on the
> web, choose: Show message option/Use fixed width font):
>
> 4/3
>       9/7
>             23/18
>          14/11 \/\/\...
>    5/4
> 1/1
>
> With Tenney weighting you would get a different result depending on
> which adjacent two you choose. With Phi weighting, it doesn't matter
> which adjacent two you choose, you will always get the same result. In
> effect, the phi weighting takes them all into account, by giving the
> limit of the Fibonacci-like series. That is,
>
> NobleMediant(1/1, 4/3) =
> NobleMediant(4/3, 5/4) =
> NobleMediant(5/4, 9/7) =
> NobleMediant(9/7, 14/11) =
> ....
> = (1+4*phi)/(1+3*phi)
> ~= 1.276
> ~= 422.5 cents
>
> Assuming a harmonic timbre, this is a point at which the partials of
> the two notes maximally fail to align with each other. You could call
> it "maximal beating" from a theoretical or physics point of view, but
> it doesn't _sound_ like beating at all, presumably because there are
> so many harmonics interacting in such complex ways.
>
> I suggest we do not refer to these as "noble" or "golden" intervals
> since those terms are already in use for intervals which are noble
> fractions of an octave (logarithmically) and as such refer to a
> melodic property rather than the harmonic one we are referring to
> here.
>
> We could call them metastable or "noble intonation"/"nobly intoned"
> (NI) intervals.
>
> We could call them metastable or "noble intonation"/"nobly intoned"
> (NI) intervals.

That sounds quite nice! I've been calling the set of them
the "Other" shadow, as my approach to tuning is based on
interlacing different "shadow tunings" with "JI".

>I would prefer to define these by the way they _sound_
> when their pitch is varied up and down, in the same way I like to
> define justly intoned intervals by the way _they_ sound when
>varied.

Heartily agree here, as I discovered them by ear first then
picked and poked and guessed as to ways to tune them up with
numbers.

I believe that the points of maximum "floating" are narrow
regions which are affected by timbre, so we'd have to weight
according to spectra each time to really nail them.

-Cameron Bobro

🔗Graham Breed <gbreed@gmail.com>

10/11/2007 2:17:39 AM

Kalle Aho wrote:
> Hi,
>
> if the point of metastability between two intervals should be
> proportional to the complexities of the two intervals why not weight
> them in the calculation with something that really is proportional to
> their complexities, for example Tenney weighting?

What's so special about Tenney weighting? I suggest we turn the
question around, and find a weighting scheme that gives the right
metastable intervals.

Note that Tenney, in his Cage paper, used the term "tolerance range" for this concept. The more complex an interval, the smaller its tolerance range. He specifically avoids making it quantitative, but the size being inversely proportional to his harmonic distance would have the right basic properties. (This is the opposite of what so called Tenney-weighting of prime limits does, which solves a different problem, and not one Tenney himself talked about.)

A simple model of consonance would be maxima at simple ratios and minima at the noble mediants. It's easy to use this model to rate the error of a temperament. You could take the (worst or average) distance from the correct intervals to the tempered ones as a proportion of the tolerance range. If an approximation moves beyond the metastable point, and therefore outside its tolerance range, the degree of approximation is meaningless.

For this to work you need to specify a finite set of intervals. Perhaps there's a way of fixing the relative sizes of the tolerance ranges for simple intervals regardless of the limit and, who knows, maybe Tenney complexity approximates it.

Using noble mediants as the metastable points would naturally give an error measure that's asymmetrical with regard to sharp and flat mistunings. The regular mapping paradigm is fine with this as far as I can see.
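Graham's tolerance-range idea can be sketched in miniature. This toy model (all names and the scale factor k are our assumptions, not from Tenney or the thread) takes the tolerance range as inversely proportional to Tenney harmonic distance, and rates a mistuning as a fraction of that range:

```python
from math import log2

def tenney_hd(n, d):
    """Tenney harmonic distance of the ratio n/d, in octaves: log2(n*d)."""
    return log2(n * d)

def weighted_error(target_cents, tempered_cents, n, d, k=100.0):
    """Mistuning of n/d expressed as a fraction of a tolerance range
    taken to be k / harmonic-distance cents wide (k is an arbitrary
    scale factor in this sketch).  Values >= 1 would mean the
    approximation has left its tolerance range entirely."""
    tolerance = k / tenney_hd(n, d)
    return abs(tempered_cents - target_cents) / tolerance

# The same 5-cent mistuning costs more on 5/4 than on 3/2, because
# 5/4 is more complex and so has the narrower tolerance range:
print(weighted_error(700, 705, 3, 2), weighted_error(700, 705, 5, 4))
```
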

Graham

🔗Graham Breed <gbreed@gmail.com>

10/11/2007 2:18:58 AM

Kalle Aho wrote:

<snip>
> What about a metastable "meantone" between
> 9:8 and 10:9? That gives a fifth pretty close to LucyTuning fifth.

The interval between a pair of neighboring intervals on the
Stern-Brocot tree is always a superparticular ratio. So as
superparticular ratios tend to make good unison vectors you can often
associate a noble mediant with a unison vector. I don't read any more
into it than that.

Graham

🔗Graham Breed <gbreed@gmail.com>

10/11/2007 2:19:11 AM

Dave Keenan wrote:
> --- In tuning@yahoogroups.com, "George D. Secor" <gdsecor@...> wrote:
> > Anyway, I was curious to see how closely metastable intervals
> > correspond to this curve, so I added some cells to calculate these
> > (result in I19:I20). I noticed that reversing the order of the
> > consonances gives a different result, so I calculated that also, in
> > J19:J20.
>
> I note that we define the noble mediant only as applying the phi
> weight to the most complex interval. So you could easily alter your
> formula to choose the correct one of the two values with an
> IF(i*j<m*n, ...

It's still a noble mediant if you do it the wrong way round, but of a different pair of intervals. To get the right pair you replace the more complex interval with the classic mediant of the two intervals.

Graham
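Graham's observation checks out numerically. A sketch using the metastable-major-third pair (1/1, 4/3) (the function name nm is ours):

```python
PHI = (1 + 5 ** 0.5) / 2  # the golden ratio

def nm(a, b, c, d):
    """Noble mediant of a/b and c/d, with the phi weight on c/d."""
    return (a + c * PHI) / (b + d * PHI)

# Right way round: phi weight on the more complex interval, 4/3.
right = nm(1, 1, 4, 3)
# Wrong way round: phi weight on the simpler interval, 1/1.
wrong = nm(4, 3, 1, 1)
# Graham's point: the wrong-way value is still a noble mediant, namely
# of 1/1 and the classic mediant (1+4)/(1+3) = 5/4.
assert abs(wrong - nm(1, 1, 5, 4)) < 1e-12
print(right, wrong)  # two different noble numbers
```
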

🔗Cameron Bobro <misterbobro@yahoo.com>

10/11/2007 2:19:57 AM

--- In tuning@yahoogroups.com, "Dave Keenan" <d.keenan@...> wrote:
>
> --- In tuning@yahoogroups.com, "Kalle Aho" <kalleaho@> wrote:
> > Ok, is there another metastable interval then that is nearer the
> > less complex interval?
>
> There may be, in the case where there is a third interval, more
> complex but still audibly just, between the outer two. I encourage
> anyone interested in this to do their own listening experiments.

Kalle:
> I wonder, is there some kind of ideal diminished fifth or
> augmented fourth between 3:2 and 4:3?

I found 674 cents by ear to have the juice- if you're looking for
something nearish to a very simple interval, the noble mediant
of 3/2 and 10/7 might do you right, check it out.

> > is it possible to build scales with metastable intervals?

Almost all my tunings use them. As I wrote on the list a couple
of months ago:

"> > According to the charts on your page, the skeleton of
> > my music basically consists of the points of
> > maximum harmonic entropy."

("your page" referring to Paul Erlich's HE charts at Carl's
crib.) To which Carl replied that there's "nothing wrong
with that". I guess where I disagree with Carl is that
I find these intervals (among others) to fulfill my
requirements for non-Just, non-equal intervals with
a characteristic sound and internal cohesiveness/integrity
of their own- and I find them strangely "consonant".

> Great! Thanks, Cameron, for igniting interest in this topic.

Well I'm glad someone's interested, as it's a main-ingredient
thing for me.

Roughly equal "34" tunings give you noble mediants interlaced
with "Just" intervals. So you have alternating minimum and maximum,
and in context the noble mediants function as Just intervals
enharmonically.

>
> I would really love to hear something done with this. Do you have
>any
> demonstration .mpg's or .mid's Cameron?

Well everything at my zebox page incorporates these intervals,
but a particular example would be the second tune, "Lunica".

http://www.zebox.com/bobro

And I'll get a "tall chord workout" piece or two up ASAP
(have to mix, bounce and convert) that demonstrate the
enharmonic min/max effect.

> > What about a metastable "meantone" between
> > 9:8 and 10:9? That gives a fifth pretty close to LucyTuning fifth.

That sounds pretty good, and I suspect that it's more
solid than it might appear at first, because I believe
that octaves of the audible overtones, 9:8 in this case, are
slightly "heavier" than their number might indicate, and
I find the means and mediants of all kinds are stronger
when the intervals in question are "linked"- in this case,
consecutive superparticulars "linked" at the ninth partial.
I predict it will work much better than it "should".
> Well now, I think a metastable wholetone or major second is unlikely,
> as 8:9 and particularly 9:10 are not particularly strong attractors to
> begin with. But it's worth a listen. And yes, that's a 190 cent whole
> tone and a 695 cent meantone fifth will generate it.

Dave wrote....
> A lot depends on the actual timbre of course.

Aye, there's the rub, and probably why I find myself using a
number of microscopically different "shadow" intervals- depends
on timbre.

-Cameron Bobro

🔗Cameron Bobro <misterbobro@yahoo.com>

10/11/2007 2:28:56 AM

--- In tuning@yahoogroups.com, Graham Breed <gbreed@...> wrote:

>
> It's still a noble mediant if you do it the wrong way round,
> but of a different pair of intervals. To get the right pair
> you replace the more complex interval with the classic
> mediant of the two intervals.

The true "floating" effect, to my ears, requires that the interval
lean toward the more complex interval- but the ~670 interval
I found is backwards in this respect, if it really is suspended
between 3/2 and 10/7 and not something else, have to check...

🔗Dave Keenan <d.keenan@bigpond.net.au>

10/11/2007 4:24:26 AM

--- In tuning@yahoogroups.com, "Cameron Bobro" <misterbobro@...> wrote:
>
> --- In tuning@yahoogroups.com, "Dave Keenan" <d.keenan@> wrote:
> >
> > --- In tuning@yahoogroups.com, "Kalle Aho" <kalleaho@> wrote:
> > > I wonder, is there some kind of ideal diminished fifth or
> > > augmented fourth between 3:2 and 4:3?
>
> I found 674 cents by ear to have the juice- if you're looking for
> something nearish to a very simple interval, the noble mediant
> of 3/2 and 10/7 might do you right, check it out.

I think this just shows that harmonic entropy maxima are better
predictors of metastability close to very consonant intervals. The
noble mediant of 3/2 and 10/7 is only 630 cents.

> I find these intervals (among others) to fulfill my
> requirements for non-Just, non-equal intervals with
> a characteristic sound and internal cohesiveness/integrity
> of their own- and I find them strangely "consonant".

Well, we might have to agree not to call them "consonant" as such, as
this could cause a lot of confusion, but they seem to have
_something_, as Erv Wilson agrees.

> Roughly equal "34" tunings give you noble mediants interlaced
> with "Just" intervals. So you have alternating minimum and maximum,
> and in context the noble mediants function as Just intervals
> enharmonically.

Neat.

> > I would really love to hear something done with this. Do you have
> > any demonstration .mpg's or .mid's Cameron?
>
> Well everything at my zebox page incorporates these intervals,
> but a particular example would be the second tune, "Lunica".
>
> http://www.zebox.com/bobro

Master Bobro. You have blown my mind! Into a parallel universe. Or
into a "shadow" universe.

That's awesome stuff.

Sorry George (Secor),

Stop the presses (for the extreme sagittal JI notations). You remember
how you had a hard time finding meaningful commas for some of those
mina symbols. Do any of them happen to coincide with irrational
"commas" for simple noble mediants?

-- Dave Keenan

🔗Charles Lucy <lucy@harmonics.com>

10/11/2007 9:16:14 AM

This short piece of footage may amuse some of the tunaniks.
The plot is "Holly" explains her life to prospective "friends" via her webcam.

This is her reality of some "running exercise".

The sound design on the voices, and the final music mix are yet to be completed; although you should get the general idea;-)

http://www.lucytune.com/Holly/HollyChaseMovie.m4v

Film runs in iTunes movie format.

Charles Lucy lucy@lucytune.com

----- Promoting global harmony through LucyTuning -----

For information on LucyTuning go to: http://www.lucytune.com

LucyTuned Lullabies (from around the world):
http://www.lullabies.co.uk

Skype user = lucytune

http://www.myspace.com/lucytuning

On 11 Oct 2007, at 12:24, Dave Keenan wrote:

🔗Carl Lumma <carl@lumma.org>

10/11/2007 10:28:08 AM

Graham wrote...

> Note that Tenney, in his Cage paper, used the term
> "tolerance range" for this concept. The more complex an
> interval, the smaller its tolerance range. He specifically
> avoids making it quantitative, but the size being inversely
> proportional to his harmonic distance would have the right
> basic properties. (This is the opposite of what so called
> Tenney-weighting of prime limits does, which solves a
> different problem, and not one Tenney himself talked about.)

The justification for Tenney-weighting in Paul's middle
path paper is quite different from the one in your primes
paper. Being such a simple concept, it probably has many
justifications. I can't say I fully understand the one in
your primes paper, actually.

-Carl

🔗George D. Secor <gdsecor@yahoo.com>

10/11/2007 10:58:07 AM

--- In tuning@yahoogroups.com, "Dave Keenan" <d.keenan@...> wrote:
>
> --- In tuning@yahoogroups.com, "Cameron Bobro" <misterbobro@> wrote:
> >
> > --- In tuning@yahoogroups.com, "Dave Keenan" <d.keenan@> wrote:
> > >
> > > --- In tuning@yahoogroups.com, "Kalle Aho" <kalleaho@> wrote:
> > > > I wonder, is there some kind of ideal diminished fifth or
> > > > augmented fourth between 3:2 and 4:3?
> >
> > I found 674 cents by ear to have the juice- if you're looking for
> > something nearish to a very simple interval, the noble mediant
> > of 3/2 and 10/7 might do you right, check it out.
>
> I think this just shows that harmonic entropy maxima are better
> predictors of metastability close to very consonant intervals. The
> noble mediant of 3/2 and 10/7 is only 630 cents.

Hmmm, the local max. entropy (in Paul's Secor4 file) for the region
just below 3/2 is at 654 cents, significantly different IMO. The
first local max. that shows up below 3/2 is at 664 cents, only half
as far, but still different from what you arrived at by ear.

> > I find these intervals (among others) to fulfill my
> > requirements for non-Just, non-equal intervals with
> > a characteristic sound and internal cohesiveness/integrity
> > of their own- and I find them strangely "consonant".
>
> Well, we might have to agree not to call them "consonant" as such, as
> this could cause a lot of confusion, but they seem to have
> _something_, as Erv Wilson agrees.
>
> > Roughly equal "34" tunings give you noble mediants interlaced
> > with "Just" intervals. So you have alternating minimum and maximum,
> > and in context the noble mediants function as Just intervals
> > enharmonically.
>
> Neat.
>
> > > I would really love to hear something done with this. Do you have
> > > any demonstration .mpg's or .mid's Cameron?
> >
> > Well everything at my zebox page incorporates these intervals,
> > but a particular example would be the second tune, "Lunica".
> >
> > http://www.zebox.com/bobro
>
> Master Bobro. You have blown my mind! Into a parallel universe. Or
> into a "shadow" universe.
>
> That's awesome stuff.

I'd love to hear it, but the link didn't work for me.

> Sorry George (Secor),

Yeah, I really wanted to hear it.

> Stop the presses (for the extreme sagittal JI notations). You remember
> how you had a hard time finding meaningful commas for some of those
> mina symbols. Do any of them happen to coincide with irrational
> "commas" for simple noble mediants?

I don't have anything set up to start with a particular symbol (or
mina, or tina), find the pitches that might use that symbol (computed
by taking each symbol in combination with at least a couple dozen
nominals), and then compare those pitches with a list of noble
mediant pitches (which I don't have).

It's much easier to do this in reverse by pasting the cents values
for a noble mediant interval into cell T8 of the last notation
calculation spreadsheet I sent you, reading off the 2 or 3 possible
comma sizes (below cell E8), and then comparing these with our symbol
definitions. So I tried the commas required for the preferred
spellings of three noble mediants, and all 3 of them fell within the
same *tina* boundaries (a max. possible error of 0.14 cents!) as
existing olympian-level symbols. (Sorry, can't take up any more time
with this -- other things to get to. But why was I doing this
anyway? -- we're supposed to be notating JI!)

--George

🔗Kalle Aho <kalleaho@mappi.helsinki.fi>

10/11/2007 3:35:35 PM

--- In tuning@yahoogroups.com, "Dave Keenan" <d.keenan@...> wrote:
>
> --- In tuning@yahoogroups.com, "Kalle Aho" <kalleaho@> wrote:

> > Do they have any kind of systematic
> > relationships between them,
>
> Sure. They are all of the form
>
> i + m*phi
> ---------
> j + n*phi
>
> where i, j, m and n are natural numbers, and i*j < m*n, and i/j and
> m/n are adjacent on the Stern-Brocot tree.

I mean between them, for example the noble number 1.38196601... plus
phi is 3. By the way, while it probably doesn't mean anything, their
geometric mean happens to be the 1/4-comma meantone fifth. :)
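Kalle's aside holds exactly, not just approximately: the noble number 1.38196601... equals 3 - phi, and since phi^2 = phi + 1 we get (3 - phi)*phi = 2*phi - 1 = sqrt(5), so the geometric mean of the two is exactly 5^(1/4), the quarter-comma meantone fifth. A quick numeric check (my own sketch):

```python
import math

PHI = (1 + math.sqrt(5)) / 2

noble = 3 - PHI      # 1.38196601..., the noble number mentioned above
print(noble + PHI)   # exactly 3

# Geometric mean with phi is exactly 5**(1/4), since (3 - phi)*phi = sqrt(5):
gm = math.sqrt(noble * PHI)
print(gm, 5 ** 0.25)           # both ~1.495349
print(1200 * math.log2(gm))    # ~696.58 cents, the 1/4-comma meantone fifth
```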

> > Would it be
> > possible to build scales with metastable intervals?
>
> I expect so. But probably not _all_ the intervals could be metastable,
> just as not _all_ the intervals of a typical JI scale are audibly
> just, unless it is a very small scale (say 5 notes or less).
>
> > Perhaps those
> > wouldn't be octave- but phi-based.
>
> Could be, but that certainly won't be experienced as an interval of
> equivalence.

In his piece "Stria" John Chowning uses frequency modulation with a
phi ratio between the carrier and modulator. This is supposed to cause
the interval 1:phi to become some kind of pseudo-octave. But I don't
know if the other noble numbers work with this kind of nonharmonic
timbre.

> > This suggests that the notes are also heard as maximally separate,
>
> No, I don't think that is the case, but you must listen for yourself.

But if the partials of the two notes maximally fail to align with each
other this means that there is minimal timbral fusion of the notes
which should mean that they are heard as separate. So why not?

🔗Dave Keenan <d.keenan@bigpond.net.au>

10/11/2007 5:12:08 PM

--- In tuning@yahoogroups.com, "George D. Secor" <gdsecor@...> wrote:
> It's much easier to do this in reverse by pasting the cents values
> for a noble mediant interval into cell T8 of the last notation
> calculation spreadsheet I sent you, reading off the 2 or 3 possible
> comma sizes (below cell E8), and then comparing these with our symbol
> definitions. So I tried the commas required for the preferred
> spellings of three noble mediants, and all 3 of them fell within the
> same *tina* boundaries (a max. possible error of 0.14 cents!) as
> existing olympian-level symbols. (Sorry, can't take up any more time
> with this -- other things to get to. But why was I doing this
> anyway? -- we're supposed to be notating JI!)

Sorry. It was a dumb idea. I over-reacted. But my question was
effectively, "How should we notate noble mediants?" I gave a list of
some here
/tuning/topicId_73794.html#73819
but only to the nearest cent.

But it's clear to me now that the JI notation system (which extends
to an RI notation system) would be sufficient. A noble mediant would
end up being notated the same as one of the more complex ratios in the
zig-zag chain of classic mediants whose limit it is. The composer
would simply specify that these symbols do not represent rationals but
noble mediants, relative to 1/1. So no need to spend any more time on it.

[RI stands for rational intonation, which means that a tuning is
specified by ratios (sometimes quite complex) but there is no
suggestion that they _sound_ justly intoned.]

e.g. Since phi is the limit of
1/1 2/1 3/2 5/3 8/5 13/8 21/13 34/21 55/34 89/55 ...
it would be notated the same as the last one shown above. The last
three are respectively 1.1c, 0.4c and 0.16c away from it, and as bare
dyads they are probably too complex to be heard as justly intoned
anyway (assuming harmonic timbres).
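Those cents figures are easy to reproduce. A short sketch (mine) walks the Fibonacci ratios and prints each one's distance from phi:

```python
import math

PHI = (1 + math.sqrt(5)) / 2

def cents_from_phi(a, b):
    """Signed distance of the ratio a/b from phi, in cents."""
    return 1200 * math.log2((a / b) / PHI)

# Successive Fibonacci ratios 2/1, 3/2, 5/3, ... converge to phi,
# alternating above and below it.
a, b = 1, 1
for _ in range(9):
    a, b = a + b, a
    print(f"{a}/{b}: {cents_from_phi(a, b):+.2f} cents from phi")
# The last three lines show 34/21, 55/34 and 89/55 at roughly
# +1.1, -0.4 and +0.16 cents, matching the figures quoted above.
```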

So with 1/1 = C the phi neutral sixth would be notated Ab.//| (where
.//| is our way of representing in plain text, what is really a single
accented sagittal symbol). And the accent would be dropped if it was
unambiguous in the context.
http://dkeenan.com/Sagittal

-- Dave Keenan

🔗Dave Keenan <d.keenan@bigpond.net.au>

10/11/2007 11:33:00 PM

--- In tuning@yahoogroups.com, "Kalle Aho" <kalleaho@...> wrote:
> I mean between them, for example the noble number 1.38196601... plus
> phi is 3.

Right. But any such properties should hopefully be discernible from
the above form of the noble number -- in lowest terms.

Note that, as Graham Breed mentioned,

i + m*phi
--------- =
j + n*phi

m-i + i*phi
-----------
n-j + j*phi

That operation can be repeated until m-i and n-j are as low as
possible without going negative. This puts the noble number in lowest
terms, for easy comparison with other noble mediants.

If you have two noble numbers in lowest terms, where the numerator of
one is the same as the denominator of the other, then you know the two
intervals can be stacked (ratios multiplied) to give another noble number.

I also had the thought that it might be more useful to write them as

m + i/phi
---------
n + j/phi

as they do on the Mathworld page, thereby minimising the irrational part.
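The reduction step can be automated. Its correctness rests on the identity phi*((m-i) + i*phi) = i + m*phi (using phi^2 = phi + 1), so a common factor of phi cancels from numerator and denominator each time. A sketch of mine; the stopping rule follows the "without going negative" description above:

```python
import math

PHI = (1 + math.sqrt(5)) / 2

def value(i, m, j, n):
    """The number (i + m*phi) / (j + n*phi)."""
    return (i + m * PHI) / (j + n * PHI)

def reduce_noble(i, m, j, n):
    """Repeat (i, m, j, n) -> (m-i, i, n-j, j) while no term goes negative."""
    while m - i >= 0 and n - j >= 0:
        i, m, j, n = m - i, i, n - j, j
    return i, m, j, n

# The metastable major third, starting from the form (4 + 5*phi)/(3 + 4*phi):
start = (4, 5, 3, 4)
low = reduce_noble(*start)
print(low)                               # a lower-terms form of the same number
print(abs(value(*start) - value(*low)))  # ~0: the value is unchanged
```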

> But if the partials of the two notes maximally fail to align with each
> other this means that there is minimal timbral fusion of the notes
> which should mean that they are heard as separate. So why not?

I don't know. But Margo just added her voice to those who have
listened and found it not so, at least in the case of Phi itself, in
the latest version of

http://users.bigpond.net/d.keenan/Music/NobleMediant.txt

Maybe the analysis problem just becomes too difficult with standard
ear-brain wiring.

-- Dave

🔗Graham Breed <gbreed@gmail.com>

10/12/2007 12:28:46 AM

Carl Lumma wrote:

> The justification for Tenney-weighting in Paul's middle
> path paper is quite different from the one in your primes
> paper. Being such a simple concept, it probably has many
> justifications. I can't say I fully understand the one in
> your primes paper, actually.

He mixes up his justification of Tenney weighting with his justification of prime-based weights in general and the minimax in particular. I start with the properties of a certain family of prime-based weighting schemes (in a take it or leave it kind of way) and then argue for Tenney weighting.

Ultimately, Paul's justification for Tenney weighting is that it gives TOP-max the useful property of being the same weighted minimax for any sufficiently large subset of a prime limit. That property follows from Tenney weighting treating all primes on an equal footing (relative to interval size) which is also the property I argue for.

I disagree with the idea that Tenney weighting correlates with the sensitivity of an interval to mistuning. The fact that TOP errors depend so much on which prime limit you choose (and don't have a sensible limit for an arbitrarily large prime limit) tells me that it doesn't work. A truly subjective weighting will have to punish more complex intervals more severely.

I also disagree that a minimax is the best way of measuring errors in a weighted prime limit. Because the logic is that we're following the sensitivity to mistuning, rather than overall damage as a result of mistuning, we should take an average that includes the mistuning of all intervals we're interested in. A minimax makes more sense for the kind of weighting Tenney proposed -- where simple intervals have more wiggle room, not less.

Graham

🔗Cameron Bobro <misterbobro@yahoo.com>

10/12/2007 12:33:22 AM

--- In tuning@yahoogroups.com, "George D. Secor" <gdsecor@...> wrote:
>
> --- In tuning@yahoogroups.com, "Dave Keenan" <d.keenan@> wrote:
> >
> > --- In tuning@yahoogroups.com, "Cameron Bobro" <misterbobro@> wrote:
> > >
> > > --- In tuning@yahoogroups.com, "Dave Keenan" <d.keenan@> wrote:
> > > >
> > > > --- In tuning@yahoogroups.com, "Kalle Aho" <kalleaho@> wrote:
> > > > > I wonder, is there some kind of ideal diminished fifth or
> augmented
> > > > > fourth between 3:2 and 4:3?
> > >
> > > I found 674 cents by ear to have the juice- if you're looking for
> > > something nearish to a very simple interval, the noble mediant
> > > of 3/2 and 10/7 might do you right, check it out.
> >
> > I think this just shows that harmonic entropy maxima are better
> > predictors of metastability close to very consonant intervals. The
> > noble mediant of 3/2 and 10/7 is only 630 cents.
>
> Hmmm, the local max. entropy (in Paul's Secor4 file) for the region
> just below 3/2 is at 654 cents, significantly different IMO.

But I wasn't talking about the local maximum entropy- I was
suggesting a possible ideal diminished fifth to go with
the max. HE minor and major third. Because the min. and
max. points of HE go so well together, simply slapping a 5/4
on top of the max. HE minor third makes a suitable low "fifth".
The half-octave is 6/5 above the max. HE minor third, you'll
notice as well.

> The first local max. that shows up below 3/2 is at 664 cents, only half
> as far, but still different from what you arrived at by ear.

It would be a grievous mistake to limit the use of the noble
mediant to those points where it coincides with max. HE.
Even more grievous would be to fail to grasp the most
important implication of these intervals' audible
relation to Just intervals.

> But why was I doing this
> anyway? -- we're supposed to be notating JI!

There is no need to use special notation in any tuning
where these intervals are also, enharmonically, Just intervals.
Since these tunings make the strongest use of the noble
intervals, IMO, any problems would probably show up only in
custom one-off kinds of situations.

-Cameron Bobro

🔗Cameron Bobro <misterbobro@yahoo.com>

10/12/2007 2:44:48 AM

--- In tuning@yahoogroups.com, "Dave Keenan" <d.keenan@...> wrote:

> > I find these intervals (among others) to fulfill my
> > requirements for non-Just, non-equal intervals with
> > a characteristic sound and internal cohesiveness/integrity
> > of their own- and I find them strangely "consonant".
>
> Well, we might have to agree not to call them "consonant" as such, as
> this could cause a lot of confusion, but they seem to have
> _something_, as Erv Wilson agrees.

I've also been calling these and some other intervals "distant" or
"soft" consonances. Strangely enough, when I first
played them for him, a friend of mine who is a math teacher as well
as a professional contrabassist described these intervals in exactly
the same manner I have, even eerily using similar hand gestures (I
hadn't seen him for a year)- soft, weeping and far-away. He and
the singer he works with found the chords of tonic, max. HE minor third,
Phi-6th, and tonic, middle-3d, Phi-6th to be the cat's pajamas.

There's more to this than just correspondence to HE maxima-
hopefully I'll soon get a chance to post some demonstrations of
this, but gotta run, and thanks for the kind words, Dave!

More on Margo's zeta tuning asap, too. Take care.

-Cameron Bobro

🔗Carl Lumma <carl@lumma.org>

10/12/2007 3:27:46 PM

--- In tuning@yahoogroups.com, Graham Breed <gbreed@...> wrote...

> > The justification for Tenney-weighting in Paul's middle
> > path paper is quite different from the one in your primes
> > paper. Being such a simple concept, it probably has many
> > justifications. I can't say I fully understand the one in
> > your primes paper, actually.
>
> He mixes up his justification of Tenney weighting with his
> justification of prime-based weights in general and the
> minimax in particular.

Can you quote where? Tenney weighting is the weighting that
makes max weighted-primes error and max weighted-odds error
come out the same. And probably it makes the average
versions of these come out as close to the same as possible.

> Ultimately, Paul's justification for Tenney weighting is
> that it gives TOP-max the useful property of being the same
> weighted minimax for any sufficiently large subset of a
> prime limit.

Sufficiently large??

> That property follows from Tenney weighting
> treating all primes on an equal footing (relative to
> interval size) which is also the property I argue for.

I never did, and still don't, understand what size has
to do with anything.

> I disagree with the idea that Tenney weighting correlates
> with the sensitivity of an interval to mistuning.

Well I'm at work and I forget what he says about that
exactly, but simple ratios are associated with both deeper
and wider harmonic entropy minima. But they are deeper by
more than they are wider. Therefore mistuning them
in constant cents results in a greater increase in
absolute entropy.

Historically, the most popular temperaments have distributed
errors least to the octaves. Meantone does favor thirds
over fifths, but 12-ET is the other way around. Personally,
I would rather a 5-cent error on thirds than on fifths.

> The fact that TOP errors depend so much on which prime
> limit you choose (and don't have a sensible limit for an
> arbitrarily large prime limit) tells me that it doesn't
> work.

Not sure what you mean here. TOP errors don't change
with limit unless you need to throw a new set of commas
in the kernel. But usually there is more than one way
to extend a temperament to higher limits, so one would
hope the tuning would change between them.

Or maybe you meant something to do with the following...
For conceptual reasons, it has long appealed to me to use
a weighting stronger than Tenney's, such that any sort of
harmonic limit would be unnecessary -- the changes in the
error would get so small beyond, say, the 17-limit, that
you could just as well not bother to calculate them.

This approach would appeal to someone who likes the
harmonic series but wants to simplify his pitch set (say,
for use on a keyboard). The problem is, the number of
notes needed to make accurate temperament of these larger
limits grows very rapidly. The 'juicy plums' of these
extended tunings seem to be the new intervals themselves,
with modulation on top of it being too much to handle at
this stage (in the evolution of music). Therefore,
targeting temperaments at the 7-limit and so forth seems
a useful thing to do, and just use JI in the 17-limit.
OK, I'm thinking aloud at this point.

> I also disagree that a minimax is the best way of measuring
> errors in a weighted prime limit.

How different are minimax and RMS tunings, really?

> A minimax makes more sense for the kind of
> weighting Tenney proposed -- where simple intervals have
> more wiggle room, not less.

You're saying this is what he proposed in his Cage paper?
I've been meaning to read that, after having downloaded
it recently.

-Carl

🔗Dave Keenan <d.keenan@bigpond.net.au>

10/12/2007 4:32:56 PM

--- In tuning@yahoogroups.com, "Dave Keenan" <d.keenan@...> wrote:
> If you have two noble numbers in lowest terms, where the numerator of
> one is the same as the denominator of the other, then you know the two
> intervals can be stacked (ratios multiplied) to give another noble
> number.

This is wrong because not all numbers of the form

i + m*phi
---------
j + n*phi

are noble numbers, only those where i/j and m/n are adjacent in some
Farey series, or equivalently, adjacent in the Stern-Brocot tree. For
any noble number, i/j and m/n are two successive convergents of the
noble number. And I suspect that when i/j and m/n are its first two
convergents, then the noble number is expressed in lowest terms.

If you multiply two noble numbers in this form, to get another number
in this form, and you find that either the new i/j or the new m/n is
not in lowest terms, or when you take their classic mediant
(i+m)/(j+n) it is not in lowest terms, e.g. you get something like
4/2, then you know it does not represent a noble number.
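The adjacency condition has a handy arithmetic form: i/j and m/n are neighbours in a Farey series (equivalently, adjacent on the Stern-Brocot tree) exactly when |i*n - j*m| = 1. So a quick test for whether a form like the above is noble might look like this (my sketch):

```python
def farey_adjacent(i, j, m, n):
    """True when i/j and m/n are Farey (Stern-Brocot) neighbours."""
    return abs(i * n - j * m) == 1

# 4/3 and 5/4 are neighbours, so (4 + 5*phi)/(3 + 4*phi) is noble:
print(farey_adjacent(4, 3, 5, 4))   # True:  |4*4 - 3*5| = 1
# So are 3/2 and 10/7:
print(farey_adjacent(3, 2, 10, 7))  # True:  |3*7 - 2*10| = 1
# But e.g. 4/2 and 5/3 are not (4/2 is not even in lowest terms):
print(farey_adjacent(4, 2, 5, 3))   # False: |4*3 - 2*5| = 2
```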

This stuff really should be in tuning-math. Sorry. But I thought I'd
better post this correction to the same list as the mistake.

-- Dave Keenan

🔗Graham Breed <gbreed@gmail.com>

10/13/2007 3:41:18 AM

Carl Lumma wrote:
> --- In tuning@yahoogroups.com, Graham Breed <gbreed@...> wrote...
>>>The justification for Tenney-weighting in Paul's middle
>>>path paper is quite different from the one in your primes
>>>paper. Being such a simple concept, it probably has many
>>>justifications. I can't say I fully understand the one in
>>>your primes paper, actually.
>>
>>He mixes up his justification of Tenney weighting with his
>>justification of prime-based weights in general and the
>>minimax in particular.
>
> Can you quote where? Tenney weighting is the weighting that
> makes max weighted-primes error and max weighted-odds error
> come out the same. And probably it makes the average
> versions of these come out as close to the same as possible.

It's on page 13. Bar the averages (which he doesn't mention, and I don't think what you say is right), that's what I say next:

>>Ultimately, Paul's justification for Tenney weighting is
>>that it gives TOP-max the useful property of being the same
>>weighted minimax for any sufficiently large subset of a
>>prime limit.
>
> Sufficiently large??

You need at least as many intervals as there are primes. And presumably a linearly independent set at that point.

>>That property follows from Tenney weighting
>>treating all primes on an equal footing (relative to
>>interval size) which is also the property I argue for.
>
> I never did, and still don't, understand what size has
> to do with anything.

For any regular temperament, 9:4 will approximate to an interval twice as large as 3:2. The error will also be twice as large. With prime weighting, the buoyancy's always twice as large as well. That means both intervals have the same weighted error. That's the point of prime weighting, so the obvious choice of weights is to weight all partials according to their sizes.
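Graham's 9:4 example can be made concrete with a 700-cent fifth (12-ET with pure octaves, chosen here purely as an illustration). Since 9:4 is two just fifths, its error is exactly twice the fifth's; and since Tenney harmonic distance of n:d is log2(n*d), its weight log2(36) is exactly twice log2(6), so the weighted errors coincide (a sketch of mine, not Graham's):

```python
import math

def cents(ratio):
    return 1200 * math.log2(ratio)

def tenney_weight(n, d):
    """Tenney harmonic distance of the ratio n:d."""
    return math.log2(n * d)

fifth_error = 700 - cents(3 / 2)      # ~-1.955 cents (the 12-ET fifth)
ninth_error = 2 * 700 - cents(9 / 4)  # 9:4 mapped to two tempered fifths

print(ninth_error / fifth_error)      # 2.0: twice the raw error
print(abs(fifth_error) / tenney_weight(3, 2),
      abs(ninth_error) / tenney_weight(9, 4))  # equal weighted errors
```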

>>I disagree with the idea that Tenney weighting correlates
>>with the sensitivity of an interval to mistuning.
>
> Well I'm at work and I forget what he says about that
> exactly, but simple ratios are associated with both deeper
> and wider harmonic entropy minima. But they are deeper by
> more than they are wider. Therefore mistuning them
> in constant cents results in a greater increase in
> absolute entropy.

There you go, arguing for prime weighting in general rather than Tenney weighting in particular.

> Historically, the most popular temperaments have distributed
> errors least to the octaves. Meantone does favor thirds
> over fifths, but 12-ET is the other way around. Personally,
> I would rather a 5-cent error on thirds than on fifths.

This is a poor argument. Why would you only include a single octave in your optimizing set? You're likely to care about 8:1 and 16:1 as well. The error in a 2:1 is always going to be a third that in the 8:1 and a quarter that in the 16:1. If you look at the factorization of all intervals you're interested in, trying to treat the most complex ones on equal terms, you'll end up with something close to Tenney weighting. So firstly, equal weighting of intervals is consistent with what you observe, and secondly, Tenney weighting of primes gives similar results to equal weighting of intervals, for arbitrarily complex intervals.

As for fifths, it depends on how you like added ninths.

>>The fact that TOP errors depend so much on which prime
>>limit you choose (and don't have a sensible limit for an
>>arbitrarily large prime limit) tells me that it doesn't
>>work.
>
> Not sure what you mean here. TOP errors don't change
> with limit unless you need to throw a new set of commas
> in the kernel. But usually there is more than one way
> to extend a temperament to higher limits, so one would
> hope the tuning would change between them.

You can always choose a smaller prime limit in a unique way. When you work it out, dropping a load of inaudible primes always has too much impact with Tenney weighting.

> Or maybe you meant something to do with the following...
> For conceptual reasons, it has long appealed to me to use
> a weighting strong than Tenney's, such that any sort of
> harmonic limit would be unnecessary -- the changes in the
> error would get so small beyond, say, the 17-limit, that
> you could just as well not bother to calculate them.

Yes. But you have to make sure you don't penalize 7-limit intervals so much that they don't count either. You need to be nice to the intervals you want and severely punish the ones you don't want. Which means it becomes highly subjective and I don't think we have the empirical data to decide on that yet. You may as well choose a prime or odd limit.

> This approach would appeal to someone who likes the
> harmonic series but wants to simplify his pitch set (say,
> for use on a keyboard). The problem is, the number of
> notes needed to make accurate temperament of these larger
> limits grows very rapidly. The 'juicy plums' of these
> extended tunings seem to be the new intervals themselves,
> with modulation on top of it being too much to handle at
> this stage (in the evolution of music). Therefore,
> targeting temperaments at the 7-limit and so forth seems
> a useful thing to do, and just use JI in the 17-limit.
> OK, I'm thinking aloud at this point.

When you use temperaments there's always a chance that a higher prime interval will be simple on the keyboard. So by all means concentrate on simple intervals but it's better not to segregate according to prime limits. Except that it makes everything easier so we do it anyway.

>>I also disagree that a minimax is the best way of measuring
>>errors in a weighted prime limit.
>
> How different are minimax and RMS tunings, really?

It seems to be more important in higher limits. But if they're similar, why not go for RMS, which is simpler to calculate? The default still seems to be minimax.

>>A minimax makes more sense for the kind of
>>weighting Tenney proposed -- where simple intervals have
>>more wiggle room, not less.
>
> You're saying this is what he proposed in his Cage paper?
> I've been meaning to read that, after having downloaded
> it recently.

It's on page 22. About consonance, not temperament.

Graham

🔗Carl Lumma <carl@lumma.org>

10/13/2007 3:48:40 PM

Graham wrote...

> >>>The justification for Tenney-weighting in Paul's middle
> >>>path paper is quite different from the one in your primes
> >>>paper. Being such a simple concept, it probably has many
> >>>justifications. I can't say I fully understand the one in
> >>>your primes paper, actually.
> >>
> >>He mixes up his justification of Tenney weighting with his
> >>justification of prime-based weights in general and the
> >>minimax in particular.
> >
> > Can you quote where? Tenney weighting is the weighting that
> > makes max weighted-primes error and max weighted-odds error
> > come out the same. And probably it makes the average
> > versions of these come out as close to the same as possible.
>
> It's on page 13.

That's not a quote. If something's wrong you should be able
to say exactly what it is.

> Bar the averages (which he doesn't mention
> and I don't think what you say's right)

The max Tenney-weighted errors of (2 3 5 7) and of
(2 3 5 7 9) are the same for any tuning. That's wrong?

> >>Ultimately, Paul's justification for Tenney weighting is
> >>that it gives TOP-max the useful property of being the same
> >>weighted minimax for any sufficiently large subset of a
> >>prime limit.
> >
> > Sufficiently large??
>
> You need at least as many intervals as there are primes.
> And presumably a linearly independent set at that point.

The TOP damage of any single interval will be <= the
TOP damage for the tuning. So I don't know what you mean.

> >>That property follows from Tenney weighting
> >>treating all primes on an equal footing (relative to
> >>interval size) which is also the property I argue for.
> >
> > I never did, and still don't, understand what size has
> > to do with anything.
>
> For any regular temperament, 9:4 will approximate to an
> interval twice as large as 3:2. The error will also be
> twice as large. With prime weighting, the buoyancy's always
> twice as large as well. That means both intervals have the
> same weighted error. That's the point of prime weighting,

That's the point of _Tenney_ weighted primes. There are
other weighting schemes. The "point of prime weighting", as
you say, is to 1. save calculations and 2. be vague about
what intervals you're interested in.

> so the obvious choice of weights is to weight all partials
> according to their sizes.

Then the buoyancy of 225/224 will be large compared to the
buoyancies of 3, 5, and 7, yet it is a much smaller interval.
So this seems a really odd way to explain it.

> >>I disagree with the idea that Tenney weighting correlates
> >>with the sensitivity of an interval to mistuning.
> >
> > Well I'm at work and I forget what he says about that
> > exactly, but simple ratios are associated with both deeper
> > and wider harmonic entropy minima. But they are deeper by
> > more than they are wider. Therefore mistuning them
> > in constant cents results in a greater increase in
> > absolute entropy.
>
> There you go, arguing for prime weighting in general rather
> than Tenney weighting in particular.

Yes. But we might want to choose the weighting that makes
cents detuning cause roughly the same entropy increase
regardless of what interval we're detuning. I believe that's
what Paul thinks Tenney weighting does.

> > Historically, the most popular temperaments have distributed
> > errors least to the octaves. Meantone does favor thirds
> > over fifths, but 12-ET is the other way around. Personally,
> > I would rather a 5-cent error on thirds than on fifths.
>
> This is a poor argument. Why would you only include a
> single octave in your optimizing set?

Notice all of the intervals I mentioned are also primes.
In fact I was talking about primes here, not intervals.

> You're likely to care
> about 8:1 and 16:1 as well. The error in a 2:1 is always
> going to be a third that in the 8:1 and a quarter that in
> the 16:1. If you look at the factorization of all intervals
> you're interested in, trying to treat the most complex ones
> on equal terms, you'll end up with something close to Tenney
> weighting.

Sure.

> So firstly, equal weighting of intervals is
> consistent with what you observe,

??? Equal weighting of all "the most complex ones", yes.
Or the most simple ones. But not both.

> and secondly, Tenney
> weighting of primes gives similar results to equal weighting
> of intervals, for arbitrarily complex intervals.

What about for simple intervals (the ones where error
matters in the first place)?

> >>The fact that TOP errors depend so much on which prime
> >>limit you choose (and don't have a sensible limit for an
> >>arbitrarily large prime limit) tells me that it doesn't
> >>work.
> >
> > Not sure what you mean here. TOP errors don't change
> > with limit unless you need to throw a new set of commas
> > in the kernel. But usually there is more than one way
> > to extend a temperament to higher limits, so one would
> > hope the tuning would change between them.
>
> You can always choose a smaller prime limit in a unique way.
> When you work it out, dropping a load of inaudible primes
> always has too much impact with Tenney weighting.

Don't know what you mean here. Can you give an example?

> > Or maybe you meant something to do with the following...
> > For conceptual reasons, it has long appealed to me to use
> > a weighting stronger than Tenney's, such that any sort of
> > harmonic limit would be unnecessary -- the changes in the
> > error would get so small beyond, say, the 17-limit, that
> > you could just as well not bother to calculate them.
>
> Yes. But you have to make sure you don't penalize 7-limit
> intervals so much that they don't count either. You need to
> be nice to the intervals you want and severely punish the
> ones you don't want. Which means it becomes highly
> subjective and I don't think we have the empirical data to
> decide on that yet.

Yes, I ran into this. My first thought was to weight
according to the typical timbre of the human voice, which
according to

Schwartz, Howe, Purves
The Statistical Structure of Human Speech Sounds Predicts
Musical Universals
The Journal of Neuroscience, August 6, 2003

is ~ 1/n^2 for harmonic n. But yeah, this probably
penalizes 7 and 11 too much (though I haven't calculated
any tunings, just looked at the curve). You can
subtract 1 from the harmonic number before squaring to
help a little bit (and to fix the octave value at 1, which
is nice), but it probably still isn't enough.

Plain 1/n has the justification that it is the fraction
of partials with n as a factor.

And then there's Gene's 1/sqrt(p), "aping the Zeta function
on the critical line".

But all of these seem too flat between 15 and 23.
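
A quick numerical comparison of the candidate weighting curves mentioned above (illustrative only; "flatness" here is just the ratio of the weight at 23 to the weight at 15, so values near 1 mean the curve barely distinguishes those harmonics):

```python
import math

# Candidate harmonic weighting curves from the discussion above.
curves = {
    "1/n^2":            lambda n: 1.0 / n**2,
    "1/(n-1)^2":        lambda n: 1.0 / (n - 1)**2,
    "1/n":              lambda n: 1.0 / n,
    "1/sqrt(n)":        lambda n: 1.0 / math.sqrt(n),
    "Tenney 1/log2(n)": lambda n: 1.0 / math.log2(n),
}

for name, w in curves.items():
    # Ratio of the weight at 23 to the weight at 15.
    flatness = w(23) / w(15)
    print(f"{name:18s} w(15)={w(15):.4f}  w(23)={w(23):.4f}  ratio={flatness:.3f}")
```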

There's also something suggested by Vos in his experiments,
but I never got the precise details of what it was, I just
remember it says 1. simpler intervals suffer worse from a
given amount of detuning and 2. the suffering is an exponential
of the detuning.

> > This approach would appeal to someone who likes the
> > harmonic series but wants to simplify his pitch set (say,
> > for use on a keyboard). The problem is, the number of
> > notes needed to make accurate temperament of these larger
> > limits grows very rapidly. The 'juicy plums' of these
> > extended tunings seem to be the new intervals themselves,
> > with modulation on top of it being too much to handle at
> > this stage (in the evolution of music). Therefore,
> > targeting temperaments at the 7-limit and so forth seems
> > a useful thing to do, and just use JI in the 17-limit.
> > OK, I'm thinking aloud at this point.
>
> When you use temperaments there's always a chance that a
> higher prime interval will be simple on the keyboard. So by
> all means concentrate on simple intervals but it's better
> not to segregate according to prime limits. Except that it
> makes everything easier so we do it anyway.

What I meant was, high-limit JI has plenty of low-hanging
fruit to occupy composers without lots of modulation. If
you're interested in this fruit, you're probably not going
to want to sacrifice the purity of the intervals enough to
improve modulation much on physical instruments.

> >>I also disagree that a minimax is the best way of measuring
> >>errors in a weighted prime limit.
> >
> > How different are minimax and RMS tunings, really?
>
> It seems to be more important in higher limits. But if
> they're similar, why not go for RMS, which is simpler to
> calculate? The default still seems to be minimax.

Minimax is just easier to understand the implications of.
Maybe that's only the case because Paul's paper is easier
to read than yours, I dunno.

> >>A minimax makes more sense for the kind of
> >>weighting Tenney proposed -- where simple intervals have
> >>more wiggle room, not less.
> >
> > You're saying this is what he proposed in his Cage paper?
> > I've been meaning to read that, after having downloaded
> > it recently.
>
> It's on page 22. About consonance, not temperament.

Well simpler intervals are associated with wider harmonic
entropy minima (as I said). So consonance maybe should be
a two-dimensional affair -- one for the slope of minima
and one for the width. Any suggestions how that might be
formulated?

-Carl

🔗Graham Breed <gbreed@gmail.com>

10/13/2007 10:40:20 PM

Carl Lumma wrote:
> Graham wrote...
>
>>>>>The justification for Tenney-weighting in Paul's middle
>>>>>path paper is quite different from the one in your primes
>>>>>paper. Being such a simple concept, it probably has many
>>>>>justifications. I can't say I fully understand the one in
>>>>>your primes paper, actually.
>>>>
>>>>He mixes up his justification of Tenney weighting with his
>>>>justification of prime-based weights in general and the
>>>>minimax in particular.
>>>
>>>Can you quote where? Tenney weighting is the weighting that
>>>makes max weighted-primes error and max weighted-odds error
>>>come out the same. And probably it makes the average
>>>versions of these come out as close to the same as possible.
>>
>>It's on page 13.
>
> That's not a quote. If something's wrong you should be able
> to say exactly what it is.

What's wrong? I didn't say anything was wrong. I said the justification for Tenney weighting was mixed up with the justification for prime weights in general. They're mixed up over the bottom of page 12 and a large part of page 13. I'm not going to copy all that.

>>Bar the averages (which he doesn't mention
>>and I don't think what you say's right)
>
> The max Tenney-weighted error of (2 3 5 7) and of
> (2 3 5 7 9) are the same for any tuning. That's wrong?

That's a max error. I don't think any analogous property of other averages depends on the weighting.

>>>>Ultimately, Paul's justification for Tenney weighting is
>>>>that it gives TOP-max the useful property of being the same
>>>>weighted minimax for any sufficiently large subset of a
>>>>prime limit.
>>>
>>>Sufficiently large??
>>
>>You need at least as many intervals as there are primes.
>>And presumably a linearly independent set at that point.
>
> The TOP damage of any single interval will be <= to the
> TOP damage for the tuning. So I don't know what you mean.

And how do you know the TOP-max damage until you've looked at the errors of a sufficiently large subset of the intervals in a TOP-max tuning?

>>For any regular temperament, 9:4 will approximate to an
>>interval twice as large as 3:2. The error will also be
>>twice as large. With prime weighting, the buoyancy's always
>>twice as large as well. That means both intervals have the
>>same weighted error. That's the point of prime weighting,
>
> That's the point of _Tenney_ weighted primes. There are
> other weighting schemes. The "point of prime weighting", as
> you say, is to 1. save calculations and 2. be vague about
> what intervals you're interested in.

There are different ways of justifying prime weighting. Paul doesn't mention the simplicity. But whatever your reasons are for choosing them, the weights always add up like this.

>>so the obvious choice of weights is to weight all partials
>>according to their sizes.
>
> Then the buoyancy of 225/224 will be large compared to the
> buoyancies of 3, 5, and 7, yet it is a much smaller interval.
> So this seems a really odd way to explain it.

225/224 isn't a partial of a harmonic series. It's an interval between partials, and so its buoyancy is the sum of the buoyancies of the partials it's measured between. This is true for all prime weightings, and so identifies buoyancy with complexity and makes the weighting inversely proportional to the complexity.

Obviously you can't weight all intervals by their sizes. You can weight them by complexity but the definition of complexity gets circular. So instead you can look at the weights of the partials. Tenney weighting is the unique choice where the weight of a partial depends only on its size. Alternatively, the weight of a ratio depends on the sizes of the numbers and not their prime factorizations.
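
The additivity described above can be checked mechanically: computing Tenney complexity prime factor by prime factor gives the same number as log2(n*d) computed directly, which is why 225/224's buoyancy is the sum of the buoyancies of the partials it lies between. (The helper functions here are an illustrative sketch, not from any of the papers under discussion.)

```python
import math
from fractions import Fraction

def prime_factors(n):
    """Return {prime: exponent} for a positive integer by trial division."""
    factors, p = {}, 2
    while n > 1:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    return factors

def tenney_complexity(ratio):
    """Tenney complexity as a sum over prime factors: sum |e_p| * log2(p)."""
    r = Fraction(ratio)
    total = 0.0
    for p, e in prime_factors(r.numerator).items():
        total += e * math.log2(p)
    for p, e in prime_factors(r.denominator).items():
        total += e * math.log2(p)
    return total

# The factor-wise sum agrees with log2(n*d) computed directly.
assert abs(tenney_complexity(Fraction(225, 224)) - math.log2(225 * 224)) < 1e-9
```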

Tenney weighting is the only prime weighting scheme that's fair to all primes. It takes account of the fact that nature didn't create all primes equal.

> Yes. But we might want to choose the weighting that makes
> cents detuning cause roughly the same entropy increase
> regardless of what interval we're detuning. I believe that's
> what Paul thinks Tenney weighting does.

So does it? Can you show a correlation between Tenney weighting and harmonic entropy gradients?

>>>Historically, the most popular temperaments have distributed
>>>errors least to the octaves. Meantone does favor thirds
>>>over fifths, but 12-ET is the other way around. Personally,
>>>I would rather a 5-cent error on thirds than on fifths.
>>
>>This is a poor argument. Why would you only include a
>>single octave in your optimizing set?
>
> Notice all of the intervals I mentioned are also primes.
> In fact I was talking about primes here, not intervals.

You were making a historical argument. Historically, musicians were not only interested in primes. The tunings they used were chosen in the light of all the intervals they used.

You're also betraying octave-equivalent thinking if you equate thirds and fifths with prime numbers. In octave-equivalent terms, the optimal tuning of octaves is always pure. When you look at the full prime factorization you'll see that 2 is still naturally more important.

>>So firstly, equal weighting of intervals is
>>consistent with what you observe,
>
> ??? Equal weighting of all "the most complex ones", yes.
> Or the most simple ones. But not both.

If you weight intervals equally and optimize a temperament, you'll still see some intervals have a larger error than others. Octaves will naturally be among the intervals with a relatively small error. Maybe also fifths.

>>and secondly, Tenney
>>weighting of primes gives similar results to equal weighting
>>of intervals, for arbitrarily complex intervals.
>
> What about for simple intervals (the ones where error
> matters in the first place)?

The exact results for equal weighting depend strongly on which intervals you choose.

>>>>The fact that TOP errors depend so much on which prime
>>>>limit you choose (and don't have a sensible limit for an
>>>>arbitrarily large prime limit) tells me that it doesn't
>>>>work.
<snip>

> Don't know what you mean here. Can you give an example?

I did some experiments calculating the errors of equal temperaments in stupidly large prime limits a while back. I don't have the results to hand. But I think you're aware of the problems from what you say below.

> Yes, I ran into this. My first thought was to weight
> according to the typical timbre of the human voice, which
> according to <snip>
> is ~ 1/n^2 for harmonic n. But yeah, this probably
> penalizes 7 and 11 too much (though I haven't calculated
> any tunings, just looked at the curve). You can
> subtract 1 from the harmonic number before squaring to
> help a little bit (and to fix the octave value at 1, which
> is nice), but it probably still isn't enough.

I think 1/n^2 will converge on a single answer, but still too slowly. You can try it anyway.

> Plain 1/n has the justification that it is the fraction
> of partials with n as a factor.

Sum of 1/n doesn't converge. Perhaps it'll work for errors, because they're randomly distributed about zero, but it's still biased towards stupidly high primes.

> And then there's Gene's 1/sqrt(p), "aping the Zeta function
> on the critical line".
>
> But all of these seem too flat between 15 and 23.

There you go. It has to be more complicated.

> There's also something suggested by Vos in his experiments,
> but I never got the precise details of what it was, I just
> remember it says 1. simpler intervals suffer worse from a
> given amount of detuning and 2. the suffering is an exponential
> of the detuning.

That sounds right. An exponential will give more emphasis to simpler intervals.

> What I meant was, high-limit JI has plenty of low-hanging
> fruit to occupy composers without lots of modulation. If
> you're interested in this fruit, you're probably not going
> to want to sacrifice the purity of the intervals enough to
> improve modulation much on physical instruments.

There are also considerations of notation for flexible pitched instruments. It depends on how high your limit gets. I think there are applications for temperaments (or at least mappings) in middling-limits. And prime limits are a crutch.

> Minimax is just easier to understand the implications of.
> Maybe that's only the case because Paul's paper is easier
> to read than yours, I dunno.

You should have known about TOP-max before reading the paper. I was arguing for TOP-RMS before it was written. I thought maybe it was a question of branding, because "TOP" sounds better than "Tenney-weighted prime least squares optimum" or whatever mouthful I had to describe TOP-RMS as. But I couldn't find a better acronym than TOP, so I redefined TOP to mean the wider family of tunings. Unfortunately this hasn't caught on so "TOP-RMS" still looks more complicated than plain "TOP".

TOP-RMS is such a blindingly simple concept that you shouldn't need to read a paper to understand it. It's the least squares optimal tuning of weighted primes. What more obvious way could there be to optimize a temperament, given weighted prime limits? Any explanation I try to give is going to fail because whatever words I use are going to get in the way of the brute simplicity of the concept. I'd expect people to rediscover it themselves as the most obvious thing to do. Gene did so by at least one route. Most of the paper's about less obvious ways of calculating it.

The argument for TOP-RMS would be that it's a crude approximation to average mistuning pain. The pain of mistuning an interval depends on its complexity and the amount of mistuning. The dependence on mistuning is more than linear because 10 cents out is more than twice as bad as 5 cents out. Maybe it should be exponential but quadratic will do given all the other things that are wrong.
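
For the rank-1 (equal temperament) case, the least-squares optimum described above has a closed form: writing w_i = v_i / log2(p_i) for the val entries over the weighted primes, the weighted error of prime i at step size s octaves is s*w_i - 1, and minimizing the sum of squares gives s = sum(w) / sum(w^2). A minimal sketch (the function name and the restriction to equal temperaments are mine, not from the paper):

```python
import math

def top_rms_step(val, primes=(2, 3, 5, 7)):
    """Least-squares optimal step size (in octaves) for an equal
    temperament, minimizing the RMS of Tenney-weighted prime errors.
    `val` gives the number of steps approximating each prime."""
    w = [v / math.log2(p) for v, p in zip(val, primes)]
    return sum(w) / sum(x * x for x in w)

# 12-ET maps primes 2, 3, 5, 7 to 12, 19, 28, 34 steps.
step = top_rms_step((12, 19, 28, 34))
print(f"TOP-RMS 12-ET step:   {1200 * step:.4f} cents")
print(f"TOP-RMS 12-ET octave: {14400 * step:.4f} cents")
```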

I'm assuming that mistuning pain manifests itself as an increased dissonance in the interval. So different mistunings will be perceived with different pains but this doesn't depend on the listener recognizing the target interval.

Because listeners are only hearing relative mistuning pain for different tunings, the overall mistuning pain of a chord will be the average mistuning pain of its intervals. Mistuning also contributes to the overall dissonance of a chord but for that you have to start simpler intervals with a lower dissonance.

I can see that it does make sense to assume that listeners do some kind of prime factorization, and are then troubled by the discrepancy between theory and practice. In this case they will be aware of the absolute amount of mistuning. But it's easier to recognize simple intervals with a given amount of mistuning. So TOP-like measures don't work here. It's Tenney's tolerance range that we need to look at.

What, then, does the minimax of the prime-weighted errors mean? It isn't the worst mistuning pain because that isn't linearly proportional to the error. And with this weighting of errors, listeners aren't aware of the absolute pain, only the relative pain for different tunings. So how are they hearing the worst absolute mistuning pain? This has never made sense to me.

> Well simpler intervals are associated with wider harmonic
> entropy minima (as I said). So consonance maybe should be
> a two-dimensional affair -- one for the slope of minima
> and one for the width. Any suggestions how that might be
> formulated?

We know that consonance is more than one dimensional. But this is more about mistuning pain. Even a one dimensional plot of sensory consonance can support different ways of measuring mistuning pain. I can think of three:

1) Average mistuning pain resulting from dissonance

This is what TOP-RMS approximates. For a more rigorous calculation, you need a dissonance curve. Note that the model behind Sethares-type dissonance naturally leads to average dissonance of intervals in a chord as the thing you look at, rather than worst dissonance.

2) Most dissonant interval

You can see odd-limit minimaxes as approximating this. The main advantage of an odd-limit minimax is that it's the easiest kind of error to explain: you take these intervals, and they're all no more than so many cents out of tune. Realistically, that's going to be the justification for using them as well.

You can also draw equal-dissonance lines on a dissonance graph and see that there's a point where all intervals of an odd-limit have about the same mistuning. So choosing an odd limit and a worst error means no intervals will be worse than a given dissonance.

This does assume that you'll only use consonances with the intervals you chose. I think it's also less valid for higher limits. If I'm using the 11-limit, that doesn't mean I want all 11-limit intervals to be of equal dissonance. But if you're happy with a set of intervals being not much more dissonant than the worst 11-limit ones you can choose temperaments this way.

3) Mistuning pain from identifying JI archetypes

This is where complex intervals should be more in tune because they're harder to recognize. You have to choose the intervals you think it's possible to hear relative to JI. It follows Tenney's tolerance range, or Partch's observation, or whatever. As I said a while back in this thread, metastable points may be useful in determining the tolerance range.

It makes sense to take a minimax of this, but maybe an average is valid as well.

Graham

🔗Carl Lumma <carl@lumma.org>

10/14/2007 1:29:06 AM

Graham wrote...

> >>>>Ultimately, Paul's justification for Tenney weighting is
> >>>>that it gives TOP-max the useful property of being the same
> >>>>weighted minimax for any sufficiently large subset of a
> >>>>prime limit.
> >>>
> >>>Sufficiently large??
> >>
> >>You need at least as many intervals as there are primes.
> >>And presumably a linearly independent set at that point.
> >
> > The TOP damage of any single interval will be <= to the
> > TOP damage for the tuning. So I don't know what you mean.
>
> And how do you know the TOP-max damage until you've looked
> at the errors of a sufficiently large subset of the
> intervals in a TOP-max tuning?

Calling the primes of a prime limit a "sufficiently large
subset" of the prime limit seems unnatural to me, even if
it is technically correct.

> >>For any regular temperament, 9:4 will approximate to an
> >>interval twice as large as 3:2. The error will also be
> >>twice as large. With prime weighting, the buoyancy's always
> >>twice as large as well. That means both intervals have the
> >>same weighted error. That's the point of prime weighting,
> >
> > That's the point of _Tenney_ weighted primes. There are
> > other weighting schemes. The "point of prime weighting", as
> > you say, is to 1. save calculations and 2. be vague about
> > what intervals you're interested in.
>
> There are different ways of justifiying prime weighting.
> Paul doesn't mention the simplicity. But whatever your
> reasons are for choosing them, the weights always add up
> like this.

It always adds up, but it doesn't always come out the same as
weighting the compound interval directly.

> >>so the obvious choice of weights is to weight all partials
> >>according to their sizes.
> >
> > Then the buoyancy of 225/224 will be large compared to the
> > buoyancies of 3, 5, and 7, yet it is a much smaller interval.
> > So this seems a really odd way to explain it.
>
> 225/224 isn't a partial of a harmonic series.

So we have intervals, primes, and now partials. What's
the point of invoking partials?

> Obviously you can't weight all intervals by their sizes.
> You can weight them by complexity but the definition of
> complexity gets circular. So instead you can look at the
> weights of the partials. Tenney weighting is the unique
> choice where the weight of a partial depends only on its
> size.

Maybe so, but I don't see how it's significant.

> Tenney weighting is the only prime weighting scheme that's
> fair to all primes. It takes account of the fact that
> nature didn't create all primes equal.

I still don't know what you mean by this.

> > Yes. But we might want to choose the weighting that makes
> > cents detuning cause roughly the same entropy increase
> > regardless of what interval we're detuning. I believe that's
> > what Paul thinks Tenney weighting does.
>
> So does it? Can you show a correlation between Tenney
> weighting and harmonic entropy gradients?

There's this:

http://lumma.org/stuff/TenneyWeightingAndHarmonicEntropy.xls

Looks like there's room for improvement.

> >>>Historically, the most popular temperaments have distributed
> >>>errors least to the octaves. Meantone does favor thirds
> >>>over fifths, but 12-ET is the other way around. Personally,
> >>>I would rather a 5-cent error on thirds than on fifths.
> >>
> >>This is a poor argument. Why would you only include a
> >>single octave in your optimizing set?
> >
> > Notice all of the intervals I mentioned are also primes.
> > In fact I was talking about primes here, not intervals.
>
> You were making a historical argument. Historically,
> musicians were not only interested in primes. The tunings
> they used were chosen in the light of all the intervals they
> used.

These tunings can certainly be thought of in terms of
primes, and the errors of the minor third and perfect
fourth have not gotten anywhere near the attention the
errors of the major third and perfect fifth have.

> You're also betraying octave-equivalent thinking if you
> equate thirds and fifths with prime numbers.

Historically the 2 wasn't tempered, so it's valid.

> In
> octave-equivalent terms, the optimal tuning of octaves is
> always pure.

I don't see how you could justify this statement.

> I was arguing for TOP-RMS before it was written.

I believe I was the first to suggest RMS of weighted
primes, but I could be wrong. I think I made it within
a few days of Paul's first TOP e-mail.

> TOP-RMS is such a blindingly simple concept that you
> shouldn't need to read a paper to understand. It's the
> least squares optimal tuning of weighted primes.

Yes, but the behavior of the errors of all the intervals
is harder to understand.

> Any explanation I try to give is
> going to fail because whatever words I use are going to get
> in the way of the brute simplicity of the concept.

Can you write an inequality relating the weighted error of
an arbitrary interval with the RMS error of the weighted
primes?

> What, then, does the minimax of the prime-weighted errors
> mean? It isn't the worst mistuning pain because that isn't
> linearly proportional to the error.
//
> And with this weighting
> of errors, listeners aren't aware of the absolute pain, only
> the relative pain for different tunings. So how are they
> hearing the worst absolute mistuning pain? This has never
> made sense to me.

Really, average dyadic errors are most appropriate for
dyadic music. Adapting them to heptads and stuff is always
going to be a compromise, I think.

-Carl

🔗Graham Breed <gbreed@gmail.com>

10/15/2007 9:53:35 PM

Carl Lumma wrote:
> Graham wrote...

>>>>For any regular temperament, 9:4 will approximate to an
>>>>interval twice as large as 3:2. The error will also be
>>>>twice as large. With prime weighting, the buoyancy's always
>>>>twice as large as well. That means both intervals have the
>>>>same weighted error. That's the point of prime weighting,
<snip>
> It always adds up, but it doesn't always come out the same as
> weighting the compound interval directly.

It does if you're doubling an interval with all its factors. At least, with the way I interpret prime weightings. It's all about interpretation because all that's specified are the prime intervals.

>>>>so the obvious choice of weights is to weight all partials
>>>>according to their sizes.
>>>
>>>Then the buoyancy of 225/224 will be large compared to the
>>>buoyancies of 3, 5, and 7, yet it is a much smaller interval.
>>>So this seems a really odd way to explain it.
>>
>>225/224 isn't a partial of a harmonic series.
>
> So we have intervals, primes, and now partials. What's
> the point of invoking partials?

Because they're the intervals for which the Tenney complexity equals their size. The weighting of primes determines the weighting of partials. And the weighting of other intervals is determined by the weighting of the partials. So you can think about the partials when you choose the weighting of the primes. But once the partials are fixed you don't have any choice about the other intervals.

>>Tenney weighting is the only prime weighting scheme that's
>>fair to all primes. It takes account of the fact that
>>nature didn't create all primes equal.
>
> I still don't know what you mean by this.

The weight of a partial only depends on its size, and not its prime factorization. Any other prime weighting suggests the ear is doing some kind of factorization and assumes that it finds some primes easier to deal with than others. Tenney weighting assumes that the ear doesn't care about primes.

> There's this:
>
> http://lumma.org/stuff/TenneyWeightingAndHarmonicEntropy.xls
>
> Looks like there's room for improvement.

How can I read that? I don't think I have a spreadsheet installed.

>>You're also betraying octave-equivalent thinking if you
>>equate thirds and fifths with prime numbers.
>
> Historically the 2 wasn't tempered, so it's valid.

But you were using this as an argument for *why* the 2 wasn't tempered!

>>In octave-equivalent terms, the optimal tuning of octaves is
>>always pure.
>
> I don't see how you could justify this statement.

If all intervals separated by a number of octaves are equivalent to one another for harmony, then they all have equal harmonic complexity, and so errors in them should be given equal weight. That means an interval of an infinite number of octaves has the same weight as a unison. (Common sense tells you that such intervals are meaningless, but octave equivalence stops you formulating a rule to say where to stop.) Any finite mistuning in an octave will mean an infinite mistuning in an infinite number of octaves. So before you worry about the other intervals you have to set octaves pure.

>>I was arguing for TOP-RMS before it was written.
>
> I believe I was the first to suggest RMS of weighted
> primes, but I could be wrong. I think I made it within
> a few days of Paul's first TOP e-mail.

There you go!

>>Any explanation I try to give is
>>going to fail because whatever words I use are going to get
>>in the way of the brute simplicity of the concept.
>
> Can you write an inequality relating the weighted error of
> an arbitrary interval with the RMS error of the weighted
> primes?

The TOP-RMS error is always less than the TOP-max. Or the optimal RMS is always less than the minimax.

The rationale for RMS is that it's some kind of average over all intervals, or a prediction for a random set of intervals. I've got a PDF in the tuning-math file section that tries to prove something to do with this. (Maybe the algebra's wrong, but the result's right. Gene was happy with it anyway.)

> Really, average dyadic errors are most appropriate for
> dyadic music. Adapting them to heptads and stuff is always
> going to be a compromise, I think.

Weighted prime limits are always going to be a compromise. Any optimal temperament that isn't intended for a single piece of music is a compromise.

Graham

🔗Carl Lumma <carl@lumma.org>

10/16/2007 10:58:09 AM

> >>>>For any regular temperament, 9:4 will approximate to an
> >>>>interval twice as large as 3:2. The error will also be
> >>>>twice as large. With prime weighting, the buoyancy's always
> >>>>twice as large as well. That means both intervals have the
> >>>>same weighted error. That's the point of prime weighting,
> <snip>
> > It always adds up, but it doesn't always come out the same as
> > weighting the compound interval directly.
>
> It does if you're doubling an interval with all its
> factors. At least, with the way I interpret prime
> weightings. It's all about interpretation because all
> that's specified are the prime intervals.

What other weighting schemes besides Tenney have
B(3:2) + B(3:2) = B(9:4)?

> > There's this:
> >
> > http://lumma.org/stuff/TenneyWeightingAndHarmonicEntropy.xls
> >
> > Looks like there's room for improvement.
>
> How can I read that? I don't think I have a spreadsheet
> installed.

http://docs.google.com

> >>You're also betraying octave-equivalent thinking if you
> >>equate thirds and fifths with prime numbers.
> >
> > Historically the 2 wasn't tempered, so it's valid.
>
> But you were using this as an argument for *why* the 2
> wasn't tempered!

That too. I don't see a problem. The history of western
keyboard tuning fully supports weighting simpler intervals
more.

-Carl

🔗Graham Breed <gbreed@gmail.com>

10/17/2007 4:45:14 AM

Carl Lumma wrote:

> What other weighting schemes besides Tenney have
> B(3:2) + B(3:2) = B(9:4)?

There's equal prime weighting:

1+1 + 1+1 = 2+2

or an 8-limit weighting, where an interval is weighted according to the maximum number of factors it has in integers up to 8:

1+3 + 1+3 = 2+6

or a similar 9-integer limit weighting

2+3 + 2+3 = 4+6

These weights can be used to get the prime-based functions to approximate odd- or integer-limit functions. The least squares optimizations also leave rational intervals perfectly tuned (which Gene feels is important) without being as crude as equal prime weighting.

Generally, any prime weighting scheme follows this identity.
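
The identity can be checked mechanically for any scheme that assigns each prime a fixed weight, since 9:4 has exactly twice the prime exponents of 3:2. A small sketch (the weighting names are illustrative stand-ins, not the exact odd- or integer-limit weights described above):

```python
import math
from fractions import Fraction

def factor(n):
    """Prime factorization of a positive integer as {prime: exponent}."""
    out, p = {}, 2
    while n > 1:
        while n % p == 0:
            out[p] = out.get(p, 0) + 1
            n //= p
        p += 1
    return out

def complexity(ratio, weight):
    """Sum of |exponent| * weight(prime) over the ratio's factorization."""
    r = Fraction(ratio)
    return (sum(e * weight(p) for p, e in factor(r.numerator).items())
          + sum(e * weight(p) for p, e in factor(r.denominator).items()))

weightings = {
    "Tenney": math.log2,           # weight(p) = log2(p)
    "equal":  lambda p: 1.0,       # every prime weighted the same
    "sqrt":   math.sqrt,           # an arbitrary alternative
}

# B(3:2) + B(3:2) = B(9:4) holds under every prime weighting.
for name, w in weightings.items():
    b32 = complexity(Fraction(3, 2), w)
    b94 = complexity(Fraction(9, 4), w)
    assert abs(2 * b32 - b94) < 1e-9, name
```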

>>>http://lumma.org/stuff/TenneyWeightingAndHarmonicEntropy.xls
>>>
>>>Looks like there's room for improvement.
>>
>>How can I read that? I don't think I have a spreadsheet
>>installed.
>
> http://docs.google.com

Oh, yes, that works! And indeed, not a good correlation.

>>>>You're also betraying octave-equivalent thinking if you
>>>>equate thirds and fifths with prime numbers.
>>>
>>>Historically the 2 wasn't tempered, so it's valid.
>>
>>But you were using this as an argument for *why* the 2
>>wasn't tempered!
>
> That too. I don't see a problem. The history of western
> keyboard tuning fully supports weighting simpler intervals
> more.

More than what null hypothesis? And how can a system that ignores octaves say anything about how you should weight octaves?

Graham

🔗Carl Lumma <carl@lumma.org>

10/17/2007 6:46:15 PM

--- In tuning@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> Carl Lumma wrote:
>
> > What other weighting schemes besides Tenney have
> > B(3:2) + B(3:2) = B(9:4)?
>
> There's equal prime weighting:
>
> 1+1 + 1+1 = 2+2

You're factoring 9. That's not an odd limit. The
original example I gave was damage of (2 3 5 7) =
damage of (2 3 5 7 9).

> >>>http://lumma.org/stuff/TenneyWeightingAndHarmonicEntropy.xls
> >>>
> >>>Looks like there's room for improvement.
> >>
> >>How can I read that? I don't think I have a spreadsheet
> >>installed.
> >
> > http://docs.google.com
>
> Oh, yes, that works! And indeed, not a good correlation.

It's not too bad, now that I fixed one wrong cell (depending
on when you pulled the file).

> >>>>You're also betraying octave-equivalent thinking if you
> >>>>equate thirds and fifths with prime numbers.
> >>>
> >>>Historically the 2 wasn't tempered, so it's valid.
> >>
> >>But you were using this as an argument for *why* the 2
> >>wasn't tempered!
> >
> > That too. I don't see a problem. The history of western
> > keyboard tuning fully supports weighting simpler intervals
> > more.
>
> More than what null hypothesis? And how can a system that
> ignores octaves say anything about how you should weight
> octaves?

The octave is not ignored, it is zero-tempered. Other things
that could have happened are TOP 12-ET, stick with 1/4-comma
meantone favoring 5 over 3 instead of evolving to favor 3
over 5 with 12-ET, etc. etc.

-Carl

🔗Graham Breed <gbreed@gmail.com>

10/17/2007 10:54:04 PM

Carl Lumma wrote:
> --- In tuning@yahoogroups.com, Graham Breed <gbreed@...> wrote:
> >>Carl Lumma wrote:
>>
>>
>>>What other weighting schemes besides Tenney have
>>>B(3:2) + B(3:2) = B(9:4)?
>>
>>There's equal prime weighting:
>>
>>1+1 + 1+1 = 2+2
>
> You're factoring 9. That's not an odd limit. The
> original example I gave was damage of (2 3 5 7) =
> damage of (2 3 5 7 9).

I'm factorizing 9, yes. Finding its prime factors. That's what you do to use prime weighting.

Indeed it isn't an odd limit. What have odd limits got to do with it?

You gave that example in a different context. But what of it? As 9 is a prime power, it will have the same prime-weighted error as 3 for any prime weighting. That means the worst weighted error of {2, 3, 5, 7} will be the same as for {2, 3, 5, 7, 9} given any prime weighting, regardless of the tuning. The RMS, however, will be different, because by including a power of 3 you're giving 3 more weight, and this affects the average.
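A few lines make the point concrete (a sketch, not from the thread; the tuning values are hypothetical, chosen only so that each prime has some error):

```python
import math

def cents(x):
    return 1200 * math.log2(x)

# A hypothetical tuning of the primes, in cents (12-ET with pure octaves).
tuning = {2: 1200.0, 3: 1900.0, 5: 2800.0, 7: 3400.0}

def weighted_error(n):
    """Tenney-weighted error of n: tuned size minus just size, divided
    by log2(n).  n is tuned via its prime factorization, so a power
    p^k gets k times the tuning of p -- and the same weighted error."""
    tuned, m = 0.0, n
    for p in (2, 3, 5, 7):
        while m % p == 0:
            tuned += tuning[p]
            m //= p
    return abs(tuned - cents(n)) / math.log2(n)

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

a = [weighted_error(n) for n in (2, 3, 5, 7)]
b = [weighted_error(n) for n in (2, 3, 5, 7, 9)]
print(max(a) == max(b), rms(a) == rms(b))   # True False
```

Including 9 leaves the worst case untouched (its weighted error duplicates 3's) but shifts the RMS, exactly as described above.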

The point -- way, way back in the thread -- is that I don't think there's anything analogous to the special property of TOP-max for TOP-RMS, or other averages, that requires Tenney weighting. For that you need to look at composite numbers, and ratios of them.

>>>>>http://lumma.org/stuff/TenneyWeightingAndHarmonicEntropy.xls
>>>>>
>>>>>Looks like there's room for improvement.
<snip>
> It's not too bad, now that I fixed one wrong cell (depending
> on when you pulled the file).

Not good enough to derive Tenney weighting from what I can see.

>>>>>>You're also betraying octave-equivalent thinking if you
>>>>>>equate thirds and fifths with prime numbers.
>>>>>
>>>>>Historically the 2 wasn't tempered, so it's valid.
>>>>
>>>>But you were using this as an argument for *why* the 2
>>>>wasn't tempered!
>>>
>>>That too. I don't see a problem. The history of western
>>>keyboard tuning fully supports weighting simpler intervals
>>>more.
>>
>>More than what null hypothesis? And how can a system that
>>ignores octaves say anything about how you should weight
>>octaves?
>
> The octave is not ignored, it is zero-tempered. Other things
> that could have happened are TOP 12-ET, stick with 1/4-comma
> meantone favoring 5 over 3 instead of evolving to favor 3
> over 5 with 12-ET, etc. etc.

How can a system that considers intervals separated by octaves to be the same not ignore octaves?

At the start of this thread you were arguing *for* Tenney weighting. Now you're using the same arguments *against* it! What gives?

1/4 comma meantone does not favor 3 over 5. It treats all 5-limit errors equally and minimizes the worst error. 12-ET does not favor 3 over 5. It's a simple equal temperament with a relatively small 5-limit error. You can't deduce anything about Tenney vs equal weighting from these examples.

Graham

🔗Carl Lumma <carl@lumma.org>

10/18/2007 6:02:31 PM

Graham wrote...

> >>There's equal prime weighting:
> >>
> >>1+1 + 1+1 = 2+2
> >
> > You're factoring 9. That's not an odd limit. The
> > original example I gave was damage of (2 3 5 7) =
> > damage of (2 3 5 7 9).
>
> I'm factorizing 9, yes. Finding its prime factors. That's
> what you do to use prime weighting.
>
> Indeed it isn't an odd limit. What have odd limits got to
> do with it?

I was just saying, you can treat 9 as a prime in
Tenney weighting. It doesn't make a difference.

> The point -- way, way back in the thread -- is that I don't
> think there's anything analagous to the special property of
> TOP-max for the TOP-RMS, or other averages, that requires
> Tenney weighting. For that you need to look at composite
> numbers, and ratios of them.

I asked for an inequality, and you said TOP-RMS was always
less than TOP-max, which seems obvious. TOP gives an
inequality between the max errors of the primes and the
max errors of all intervals. That's a nice-to-have, and I
thought you had shown something like this for RMS but I can't
remember what it was.

> > The octave is not ignored, it is zero-tempered. Other things
> > that could have happened are TOP 12-ET, stick with 1/4-comma
> > meantone favoring 5 over 3 instead of evolving to favor 3
> > over 5 with 12-ET, etc. etc.
>
> How can a system that considers intervals separated by
> octaves to be the same not ignore octaves?

Tempered octaves and octave equivalence aren't mutually
exclusive.

> At the start of this thread you were arguing *for* Tenney
> weighting. Now you're using the same arguments *against*
> it! What gives?

As far as I know, I've been consistently arguing for
weighting simpler intervals more. Tenney weighting is
one such weighting.

> 1/4 comma meantone does not favor 3 over 5.

I said 5 over 3.

> It treats all
> 5-limit errors equally and minimizes the worst error.

That's one of at least four possible interpretations:

Weighting| Max. error |Sum-squared error|Sum-absolute error|
---------+------------+-----------------+------------------+
 Inverse |  697.3465  |    696.5354     |     696.5784     |
 Limit   | 3/14-comma |  63/250-comma   |    1/4-comma     |
Weighted | Riccati    |TD 162.10 5/5/99 |       Aron       |
---------+------------+-----------------+------------------+
 Equal   |  696.5784  |    696.1648     |     696.5784     |
Weighted | 1/4-comma  |   7/26-comma    |    1/4-comma     |
         |    Aron    |    Woolhouse    |       Aron       |
---------+------------+-----------------+------------------+
 Limit   |  695.9810  |    696.0187     |     696.5784     |
Weighted | 5/18-comma |  175/634-comma  |    1/4-comma     |
         |   ~Smith   | Erlich (TTTTTT) |       Aron       |
---------+------------+-----------------+------------------+

But really I think it was just the easiest to tune.
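Two cells of that table can be reproduced in a few lines (a sketch, assuming pure octaves and the usual meantone mapping, where 5/4 is four fifths up and 6/5 is a fifth minus a 5/4):

```python
import math

cents = lambda r: 1200 * math.log2(r)
comma = cents(81 / 80)        # syntonic comma, ~21.506 cents
just_fifth = cents(3 / 2)     # ~701.955 cents

# Narrow each fifth by d cents; the 5-limit errors are then
# err(3/2) = -d, err(5/4) = comma - 4d, err(6/5) = 3d - comma.
# Equal-weighted least squares: set the derivative of
# d^2 + (comma - 4d)^2 + (3d - comma)^2 to zero => d = 7*comma/26.
d_lstsq = 7 * comma / 26      # Woolhouse's 7/26-comma meantone
# Equal-weighted minimax: err(3/2) and err(6/5) balance at d = comma/4
# (with err(5/4) = 0), and no smaller worst error is possible.
d_minimax = comma / 4         # Aron's 1/4-comma meantone

print(round(just_fifth - d_lstsq, 4))     # 696.1648, as in the table
print(round(just_fifth - d_minimax, 4))   # 696.5784, as in the table
```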

> 12-ET does not favor 3 over 5. It's a simple equal temperament
> with a relatively small 5-limit error. You can't deduce
> anything about Tenney vs equal weighting from these examples.

I can deduce something about whether simpler intervals should
be weighted more or less.

-Carl

🔗Graham Breed <gbreed@gmail.com>

10/19/2007 2:34:52 AM

Carl Lumma wrote:

> I asked for an inequality, and you said TOP-RMS was always
> less than TOP-max, which seems obvious. TOP gives an
> inequality between the max errors of the primes and the
> max errors of all intervals. That's a nice-to-have, and I
> thought you had shown something like this for RMS but I can't
> remember what it was.

That's the only inequality I can give because RMS is about averages, not worst cases.

TOP doesn't give an inequality to do with max errors. It deals with an arbitrary function of the max error. That's nice to have but not crucial.

I worked out a multi-dimensional integral, which is in the tuning-math files section. Gene asked the question, but he's worryingly quiet now, so he can't explain what it means. It looks like TOP-RMS is the optimal tuning for the whole Tenney-weighted prime limit though. Working out what the error means is harder. I can't solve the integral, and I can't find the right normalized average that converges numerically.
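For an equal temperament, at least, the TOP-RMS optimum reduces to a one-line least-squares fit over the primes. A sketch for 5-limit 12-ET (my own restatement of the condition, not taken from the files section):

```python
import math

mapping = {2: 12, 3: 19, 5: 28}   # steps of 12-ET to each 5-limit prime

# With step size s cents, the Tenney-weighted error of prime p is
# s*m_p/log2(p) - 1200.  Minimizing the sum of squares over the
# primes gives s = 1200 * sum(a) / sum(a^2) with a_p = m_p/log2(p).
a = [m / math.log2(p) for p, m in mapping.items()]
s = 1200 * sum(a) / sum(x * x for x in a)

print(round(s, 3))        # TOP-RMS step size, slightly under 100 cents
print(round(12 * s, 2))   # the (slightly compressed) octave
```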

In practice I don't care about any of this because I know that Tenney weighting's wrong. All that matters is that it gives better results for the temperaments we know about than anything else as simple. It's easy to work out the numbers and I give a lot of examples in the PDF. (I could do that better but I don't like to think about it because it reminds me how much work there is left.) I don't think TOP-RMS is any worse in practice than TOP-max so it wins by being simpler (Occam's razor).

If you want to find an average for which TOP-RMS is the approximation, you could look at Bayesian statistics. It might keep you quiet at least ;-) I say this because the book I'm reading now says "Bayesian analysis is an excellent alternative to use of large sample asymptotic statistical procedures" and it seems to know what it's talking about. I think you objected to the large sample part. More of this would have to be on tuning-math.

>>>The octave is not ignored, it is zero-tempered. Other things
>>>that could have happened are TOP 12-ET, stick with 1/4-comma
>>>meantone favoring 5 over 3 instead of evolving to favor 3
>>>over 5 with 12-ET, etc. etc.
>>
>>How can a system that considers intervals separated by
>>octaves to be the same not ignore octaves?
>
> Tempered octaves and octave equivalence aren't mutually
> exclusive.

So here we're talking about whether you're really talking about octave equivalence, which isn't interesting.

>>At the start of this thread you were arguing *for* Tenney
>>weighting. Now you're using the same arguments *against*
>>it! What gives?
>
> As far as I know, I've been consistently arguing for
> weighting simpler intervals more. Tenney weighting is
> one such weighting.

More than what? What intervals?

>>1/4 comma meantone does not favor 3 over 5.
>
> I said 5 over 3.

Sorry, but it still doesn't.

>>It treats all
>>5-limit errors equally and minimizes the worst error.
>
> That's one of at least four possible interpretations:
<snip>

Right, and without a definite interpretation we can't prove anything.

> But really I think it was just the easiest to tune.

Both for pure thirds and pure octaves.

>>12-ET does not favor 3 over 5. It's a simple equal temperament
>>with a relatively small 5-limit error. You can't deduce
>>anything about Tenney vs equal weighting from these examples.
>
> I can deduce something about whether simpler intervals should
> be weighted more or less.

No, you can speculate. There isn't enough evidence for a deduction.

Graham

🔗Carl Lumma <carl@lumma.org>

10/19/2007 10:49:28 AM

Graham wrote...
> >> At the start of this thread you were arguing *for* Tenney
> >> weighting. Now you're using the same arguments *against*
> >> it! What gives?
> >
> > As far as I know, I've been consistently arguing for
> > weighting simpler intervals more. Tenney weighting is
> > one such weighting.
>
> More than what? What intervals?

Complex ones!

> > But really meantone was just the easiest to tune.
>
> Both for pure thirds and pure octaves.

Pythagorean tuning has pure fifths and octaves, and
meantone was the other way. Circulating temperaments
have only pure octaves. It shows that octaves are the
last thing people wanted to temper, at least.

-Carl