
Re: Harmonic entropy

🔗Glen Peterson <Glen@xxxxxxxxxxxxx.xxxx>

11/11/1999 9:30:35 PM

Given an interval within an octave which does not correspond exactly with
any small whole number ratio, how will it be heard?

Consider 300 cents. It is generally heard as a 16 cent flat 6/5 even though
it's only 11 cents sharp of 13/11. This implies that smaller ratios have
more "pull" on the ear than larger ones. Partch described this as
Observation One, or ratio magnetism, or gravity. Why do I ask? See:

http://www.organicdesign.org/peterson/tuning/equal.html

Definition of Harmonic Entropy:
> A concept developed by Paul Erlich, measuring the dissonance
> of an interval
> based on the uncertainty involved in interpreting that
> interval in terms of
> an integer ratio. The underlying mathematics, adapted from
> Van Eck, appear
> to confirm Partch's Observation One. Harmonic entropy is
> intended to be a
> second component in measuring the sonance of an interval, alongside
> roughness.

Don't have Van Eck's book. What are the underlying mathematics? I know
Partch speaks of relating the gravity of a ratio to its limit. Hence the
300 cent note in our earlier example would be pulled 5/18 toward 13/11 and
13/18 toward 6/5. (The number 18 represents the two limits added together
or 13 + 5.)

I've played around with some numbers and come up with this table showing the
relative gravitational pull of ratios of different limits:

1-limit - 90 cents sharp or flat
2-limit - 45 cents
3-limit - 30 cents
5-limit - 18 cents
7-limit - 13 cents
11-limit - 8 cents
13-limit - 7 cents

Has anyone experimentally come up with more accurate numbers? Based on
anyone's aural observations, should I include 9-limit? I was trying to
experiment with a guitar, but my dog apparently didn't like it and whined
intermittently, making an incredibly high-pitched sound and nullifying my
ability to hear beats or tune accurately. I'll try again when he's asleep,
but it sure would be easier to just confirm or modify someone else's gravity
ranges!

Thanks Paul for your earlier response, you really helped me clarify my
question.

Thanks to whoever else reads this. I know I can't keep up with every
thread. I appreciate your time.

---
Glen Peterson
Peterson Stringed Instruments
30 Elm Street North Andover, MA 01845
(978) 975-1527
http://www.organicdesign.org/peterson

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

11/11/1999 10:09:18 PM

See http://www.ixpres.com/interval/td/entropy.htm, which contains a
compilation of my postings to the list on the subject.

Glen Peterson wrote,

>Don't have Van Eck's book. What are the underlying mathematics?

I'll get to that later. Maybe tomorrow.

>I know
>Partch speaks of relating the gravity of a ratio to it's limit. Hence the
>300 cent note in our earlier example would be pulled 5/18 toward 13/11 and
>13/18 toward 6/5. (The number 18 represents the two limits added together
>or 13 + 5.)

Would 295 cents and 305 cents also come out exactly the same? You're not
taking into account the fact that the interval is closer to 13/11 than to
6/5.

>I've played around with some numbers and come up with this table showing
the
>relative gravitational pull of ratios of different limits:

>1-limit - 90 cents sharp or flat
>2-limit - 45 cents
>3-limit - 30 cents
>5-limit - 18 cents
>7-limit - 13 cents
>11-limit - 8 cents
>13-limit - 7 cents

>Has anyone experimentally come up with more accurate numbers?

I certainly don't think it goes according to _prime_ limit, as you're
suggesting!!! Partch went by _odd_ limit for this and other
consonance-related issues. If you want to distinguish intervals by octave
and inversion, then something like the sum or product of the numerator and
denominator comes out looking like the best rating scheme (for simple
ratios).

>Based on
>anyone's aural observations, should I include 9-limit?

Well, I thought you _meant_ to use prime limit, but if you really meant odd
limit the above table has a totally different meaning (for example, 25:16
has a prime limit of 5 but an odd limit of 25), and I might be willing to
accept it (if you included something like "9-limit - 10 cents"). But then it
would make no sense to include 2-limit as a separate category, unless you
really meant _integer_ limit, in which case you should include 4, 6, 8, 10,
and 12 as well. In which case you'd be looking just at the numerator, though
the denominator seems to matter too . . .

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

11/11/1999 10:34:39 PM

Glen, Partch in fact gives the field of attraction of the 1-limit as 1200
cents, and continues:

" . . . 400 cents, is the field of 3; . . . 240 cents, is the field of 5; .
. . 171+ cents, is the field of 7, . . . 133+ cents, is the field of 9, . .
. 109+ cents, is the field of 11."

(_Genesis of a Music_, p. 184).
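As an aside, the fields Partch quotes all appear to be simply 1200/n cents
for identity n; a quick check (my arithmetic, not anything stated in
_Genesis_):

for n in (1, 3, 5, 7, 9, 11):
    print(n, round(1200 / n, 1))   # 1200.0, 400.0, 240.0, 171.4, 133.3, 109.1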

Clearly Partch is referencing his "field" to a much lower degree of
certainty than you are. Real gravitational fields don't end, they just peter
out, but you can draw a surface where a specific strength of attraction is
reached. Similarly, the field of attraction of a ratio can only be given a
sharp boundary if you specify what level of certainty you'd like the ratio
to be heard with. I hope that addresses your question:

>Has anyone experimentally come up with more accurate numbers?

🔗gbreed@xxx.xxxxxxxxx.xx.xxxxxxxxxxxxxxxx)

11/12/1999 6:56:00 AM

Glen Peterson wrote in digest 392.23:

> I've played around with some numbers and come up with this table
> showing the
> relative gravitational pull of ratios of different limits:
>
> 1-limit - 90 cents sharp or flat
> 2-limit - 45 cents
> 3-limit - 30 cents
> 5-limit - 18 cents
> 7-limit - 13 cents
> 11-limit - 8 cents
> 13-limit - 7 cents
>
> Has anyone experimentally come up with more accurate numbers? Based on
> anyone's aural observations, should I include 9-limit?

I haven't made a systematic study of this, partly because I don't really
understand the question. However, I have made some observations.

Firstly, when working out a tuning for my guitar, I tried to work out the
flattest meantone fifth where the 4:6:9 chord still sounded okay. The
chord is two stacked fifths, so the most complex (and out-of-tune)
interval is the 4:9.

I concluded it got bad somewhere between 1/4-comma meantone and the golden
meantone. The change was so abrupt, I ascribed it to moving past a tuning
step on my synthesizer, so the 4:9 became another cent out of tune.
Anyway, that gives a threshold for 4:9 sounding "okay" at between 10.3 and
10.8 cents. This is over-precise -- say it gets bad at around 11 cents.
As this is in the same region as your figures, I wonder if there's a
connection.

The other thing is that, after a fair amount of experience with 31-equal,
I've decided it isn't good enough in the 7-limit. It works great in the
5-limit. I think it's around the optimum mistuning (yes, beats can be
good). But in the 7-limit, although all intervals are clearly
recognizable, I've found it difficult to get strong consonances.

The same is true of my near-Pythagorean schismic tuning. However, using a
schismic tuning where 4:7 is just, all of a sudden the 7-limit consonances
become a lot more stable. I think this is another artificial threshold on
the synth, but I haven't worked out which intervals are affected.

To put numbers on it: for 4:7 just, the worst interval is 3:5 at 4.3
cents. (The worst interval of 7 is 5:7 at 4.1 cents.) For Pythagorean
schismic, the 5:7 is 5.8 cents out of tune. So, let's say for 7-limit
chords to be highly consonant, all intervals should be nearer than 5 cents
to JI, ideally 4 cents or even less.

The worst 5-limit intervals in 31= are about 6 cents out, and I think this
is some kind of optimum. If mistuning is inversely proportional to the
limit, we get

3-limit 10.0 cents
5-limit 6.0 cents
7-limit 4.3 cents
9-limit 3.3 cents
11-limit 2.7 cents
13-limit 2.3 cents

for optimum tuning of the worst intervals. The 7-limit agreement is
remarkable, and probably coincidental. 10 cent flat fifths sound a bit
much to me.
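In other words, the rule applied here comes to roughly 30/limit cents; a
quick check against the table (my arithmetic, not Graham's):

for limit in (3, 5, 7, 9, 11, 13):
    print(limit, round(30 / limit, 1))   # 10.0, 6.0, 4.3, 3.3, 2.7, 2.3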

A 9-limit complete chord contains more 5-limit intervals than a 7-limit
complete chord. So, it may break the pattern in that more deviation is
allowed. Another complication is that sharp intervals are probably more
acceptable than flat ones.

For more on schismic temperament, see
http://x31eq.com/schismic.htm

🔗Glen Peterson <Glen@xxxxxxxxxxxxx.xxxx>

11/12/1999 6:02:15 PM

> from Paul:
> See http://www.ixpres.com/interval/td/entropy.htm, which contains a
> compilation of my postings to the list on the subject.

Thanks! Interesting stuff - especially the graphs! I read this page a
couple times before, and it seems to represent a bold push forward toward
systemizing our perception of tonality or dare I say, consonance. I wish
there was a companion page which shows how a group of people reacted to
different pitches, and how the graph of their reactions related to the
Harmonic Entropy graphs. I think you have a great theory; has anyone set
about proving or disproving it?

If people are interested, I'd be willing to help put a web site together
that would play audio samples and allow people to vote on their relative
consonance/dissonance. I would need someone to help me with MIDI files for
the sound samples. Maybe the people on the list would be guinea-pigs and
take the test, then we could tabulate the results, and see if the average
response of the human ear would be similar to your predicted results. I bet
it would be very similar.

> > Me:
> >I know
> >Partch speaks of relating the gravity of a ratio to it's
> limit. Hence the
> >300 cent note in our earlier example would be pulled 5/18
> toward 13/11 and
> >13/18 toward 6/5. (The number 18 represents the two limits
> added together
> >or 13 + 5.)
>
> Paul:
> Would 295 cents and 305 cents also come out exactly the same?

No.

> You're not
> taking into account the fact that the interval is closer to
> 13/11 than to
> 6/5.

That was my point! Obviously I should have given my question a little more
context. Does the following clarify or just confuse things?

13/11 = 289 cents up to 296 cents.
6/5 = 316 cents down to 298 cents.

So 295 would be heard as 13/11, 305 would be heard as 6/5.

> table showing
> the
> >relative gravitational pull of ratios of different limits:
>
> >1-limit - 90 cents sharp or flat
> >2-limit - 45 cents
> >3-limit - 30 cents
> >5-limit - 18 cents
> >7-limit - 13 cents
> >11-limit - 8 cents
> >13-limit - 7 cents
>
> I certainly don't think it goes according to _prime_ limit, as you're
> suggesting!!! Partch went by _odd_ limit for this and other
> consonance-related issues. If you want to distinguish
> intervals by octave
> and inversion, then something like the sum or product of the
> numerator and
> denominator comes out looking like the best rating scheme (for simple
> ratios).

I usually work with odd limit, but since I was confused, I thought prime
limit safer somehow. I'll go back to odd limit. I'm not quite ready for
the sum or product scheme.

> I might
> be willing to
> accept it (if you included something like "9-limit - 10
> cents).

9-limit - 10 cents.

> would make no sense to include 2-limit as a seperate
> category, unless you
> really meant _integer_ limit, in which case you should
> include 4, 6, 8, 10,
> and 12 as well.

OK. "Odd or prime limit." Meaning: All odd numbers and the number 2.

> Glen, Partch in fact gives the field of attraction of the
> 1-limit as 1200
> cents, and continues:
>
> " . . . 400 cents, is the field of 3; . . . 240 cents, is the
> field of 5; .
> . . 171+ cents, is the field of 7, . . . 133+ cents, is the
> field of 9, . .
> . 109+ cents, is the field of 11."
> (_Genesis of a Music_, p. 184).

I don't find that very useful. Some intervals sound like a pure ratio.
Others like an out of tune ratio. Others, just dissonant. That is what I
want to capture in my analysis.

> reached. Similarly, the field of attraction of an ratio can
> only be given a
> sharp boundary if you specify what level of certainty you'd
> like the ratio
> to be heard with.

That's the meat of my question. What is the average ear's level of
certainty? How far from perfect can an interval be and still sound like a
pure ratio? How far from that will it be recognized as an out-of-tune
ratio? How far from that will it just sound rough and be ready to go
anywhere?

> I hope that addresses your question:

Thanks! The more you answer, the more I seem to ask.

> From: Joe Monzo <monz@juno.com>
> I haven't experimented with this to gain any empirical
> knowledge myself, but years ago I wrote a computer program
> to calculate Partch's 'Field of Attraction' for every
> possible chord progression in the 19-Limit Tonality Diamond.
> The printout gives the ratios and cents values in roughly
> graphical page-layout for both the starting and ending
> chords, with the interval size in cents for each 'permissible'
> resolution. I followed Partch's guidelines exactly.

Sounds interesting. How much output is there? Is it small enough to post
somewhere?

---
Glen Peterson
Peterson Stringed Instruments
30 Elm Street North Andover, MA 01845
(978) 975-1527
http://www.organicdesign.org/peterson

🔗D.Stearns <stearns@xxxxxxx.xxxx>

11/14/1999 1:26:13 AM

[Glen Peterson:]
>Consider 300 cents. It is generally heard as a 16 cent flat 6/5 even
though it's only 11 cents sharp of 13/11. This implies that smaller
ratios have more "pull" on the ear than larger ones. Partch described
this as Observation One, or ratio magnetism, or gravity.

Awhile back I was very interested in seeing if I could come up with a
simple algorithm for this sort of a thing. I think that this one was
probably the most successful. (Successful meaning that I thought that
this one gave the most agreeable results.)

If you take:

[((x1*y)*2)+1]
--------------
[((x2*y)*2)+1]

as a margin of "gravity" to the left, in other words a minus border,
and:

[((x1*y)*2)-1]
--------------
[((x2*y)*2)-1]

as a plus border, and let "x1" and "x2" be [N/(D/a)/b] and
[D/(D/a)/b], and "a" and "b" be self-regulating factors at a=N*D and
b=N+D, and let "y" be [(log(N)-log(D))*(12/log(2))], you could
illustrate say an 8/7 through 4/3 sequence as:

8/7 219 231 244.43
15/13 248
7/6 253.26 267 282.03
13/11 289 297.15
19/16 298
6/5 299.73 316 333.34
17/14 336
11/9 338.49 347 356.81
16/13 359
5/4 367.15 386 407.60
19/15 409
14/11 410.42 418
9/7 424.14 435 446.61
13/10 454 462.13
17/13 464 470.46
21/16 470.78
4/3 473.93 498 525

And if this should seem too fine, or for some reason not fine enough,
you could rather easily adjust it to make it just about as fine or as
coarse as you'd like by adding or subtracting mediants and truncating
any overlapping borders to the stronger, or smaller ratios by
eliminating the weak border.
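In case anyone wants to plug in other ratios, here is my reading of the
formula in Python (a transcription of the definitions above, not Dan's own
worksheet), with the 12 in "y" left as an adjustable parameter n; for N=6,
D=5 it reproduces the 299.73 / 316 / 333.34 row for 6/5 in the table above.

import math

def borders(N, D, n=12):
    # Dan's 'gravity' borders for the ratio N/D, in cents: (flat, just, sharp)
    a = N * D                     # self-regulating factor a = N*D
    b = N + D                     # self-regulating factor b = N+D
    x1 = N / (D / a) / b
    x2 = D / (D / a) / b
    y = (math.log(N) - math.log(D)) * (n / math.log(2))
    minus = ((x1 * y) * 2 + 1) / ((x2 * y) * 2 + 1)   # margin of gravity to the left
    plus = ((x1 * y) * 2 - 1) / ((x2 * y) * 2 - 1)    # margin of gravity to the right
    return tuple(1200 * math.log2(r) for r in (minus, N / D, plus))

print([round(v, 2) for v in borders(6, 5)])   # [299.73, 315.64, 333.34] -- cf. the 6/5 row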

Dan

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

11/13/1999 3:00:19 PM

Graham Breed wrote,

>A 9-limit complete chord contains more 5-limit intervals

Meaning, in this case, more ratios of 3. We need to come up with some better
terminology.

>than a 7-limit
>complete chord. So, it may break the pattern in that more deviation is
>allowed.

I don't know quite what you mean. I thought the mistuning allowances you
posted were for ratios of N, not averages for complete N-limit chords. But
if they were for the latter, the 9-limit would only break the pattern a
little bit. It's these small modifications that Brian McLaren's incorrect
formula for the number of distinct ratios in a tonality diamond (namely,
N²-N-1) failed to take into account. (I only brought it up again because a
giant trove of McLaren's essays has appeared on the Web.)

>Another complication is that sharp intervals are probably more
>acceptable than flat ones.

That would cancel itself out if you're averaging within a given limit.
Stretching the octaves a bit can help if you evaluate intervals on an
octave-specific level.

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

11/13/1999 4:00:25 PM

Glen wrote,

>Thanks! Interesting stuff - especially the graphs! I read this page a
>couple times before, and it seems to represent a bold push forward toward
>systemizing our perception of tonality or dare I say, consonance. I wish
>there was a companion page which shows how a group of people reacted to
>different pitches, and how the graph of their reactions related to the
>Harmonic Entropy graphs. I think you have a great theory, has anyone set
>about proving or disproving it?

Well, there have been many papers empirically investigating the perception
of consonance. Some of them (Plomp, for example) claim that with sine waves,
nothing happens outside the critical band except for a small effect near the
octave. If this were true, Sethares would be the last word on consonance.
But I've seen other papers where people _did_ prefer sine waves in just
ratios. I think one of the authors of the one I remember best was named
Vos(?). According to that paper, some of this was due to combination tones,
but even when the examples were played so quietly that there were no
combination tones, there was still an effect. That's one way to start
investigating harmonic entropy, but sine waves are very different from real
musical sounds. With those, you'd have to disentangle three effects --
harmonic entropy, Plomp/Sethares roughness, and combination tones. Very
hard.

>> reached. Similarly, the field of attraction of an ratio can
>> only be given a
>> sharp boundary if you specify what level of certainty you'd
>> like the ratio
>> to be heard with.

>That's the meat of my question. What is the average ear's level of
>certainty?

The harmonic entropy requires the central virtual pitch processor's
resolution as an input. I often use 1% (17 cents) as a nice round figure, in
the range of Goldstein's (JASA 1974) results.

>How far from perfect can an interval be and still sound like a
>pure ratio?

That depends more on the perception of beats and the answer is like 0.1-0.2
cents.

>How far from that will it be recognized as an out-of tune
>ratio? How far from that will it just sound rough and be ready to go
>anywhere.

Again, I don't think you can put a strict boundary on these, but Sethares'
graphs and my harmonic entropy graphs give a good general idea.

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

11/13/1999 4:11:47 PM

Dan Stearns wrote,

>Awhile back I was very interested in seeing if I could come up with a
>simple algorithm for this sort of a thing. I think that this one was
>probably the most successful. (Successful meaning that I thought that
>this one gave the most agreeable results.)

I remember this one and it's cool. I thought you had shelved it due to some
problems with overlapping boundaries. If you've fixed it, I wonder what kind
of theoretical underpinning we can give it.

🔗D.Stearns <stearns@xxxxxxx.xxxx>

11/14/1999 3:27:57 PM

[Paul H. Erlich:]
>I thought you had shelved it due to some problems with overlapping
boundaries.

If you write "y" as [(log(N)-log(D))*(n/log(2))], you can empirically
tweak "n" to eliminate any overlap... and while this will always give
you clearly defined borders, it can sometimes seem
counterproductive... for instance, in the example I gave in the
previous post where "n" was 12, I manually truncated the overlapping
boundaries (and while this is certainly tedious, it isn't necessarily
an undesirable way to go about it), as the n=12 results seemed
especially nice in the context of the 300¢ example Glen was using...
If you set n=~18.9 you eliminate any overlap, but you also make 300¢ a
19/16:

7/6 258 267 276
13/11 284 289 294.1957
19/16 294.1961 298 301
6/5 305 316 327
17/14 332 336 340

If you wanted 300¢ to be a 6/5 with non-overlapping boundaries you
could have

7/6 253 267 281.7918
13/11 281.7921 289 297
6/5 299.95 316 333
11/9 338 347 357

where n=~12.2.

>If you've fixed it, I wonder what kind of theoretical underpinning we
can give it.

So yes, I did fix it, but as far as any theoretical underpinning goes,
I'm not really sure... In a recent reply to Glen Peterson you wrote "I
don't think you can put a strict boundary on these, but Sethares'
graphs and my harmonic entropy graphs give a good general idea," to
his question "how far from that will it be recognized as an out-of
tune ratio? How far from that will it just sound rough and be ready to
go anywhere." And therein lies the problem with an algorithm like
this - it does give cut and dried boundaries, and worse yet they're
adjustable... So while I agree with you when you say that you can't
really put strict boundaries on an interval's field of pull, or range
of distinction, I guess I'd have to say that I saw it as a simple way
to put some actual numbers on something along the lines of Partch's
observation one... But I'm certainly open to any suggestions.

Dan

🔗gbreed@xxx.xxxxxxxxx.xx.xxxxxxxxxxxxxxxx)

11/14/1999 12:19:00 PM

Paul Erlich wrote in digest 395.14:

> >A 9-limit complete chord contains more 5-limit intervals
>
> Meaning, in this case, more ratios of 3. We need to come up with some
> better
> terminology

Yes. A better example would be a maj7 chord, where each note takes part
in at least two 5-limit intervals, but the chord as a whole is 15-limit.

  5---15
 / \ /
1---3

> >than a 7-limit
> >complete chord. So, it may break the pattern in that more deviation is
> >allowed.
>
> I don't know quite what you mean. I thought the mistuning allowances you
> posted were for ratios of N, not averages for complete N-limit chords.

The deviations are the largest allowable for an N-limit chord. That
largest interval needn't be a ratio of N. I found 7-limit triads to be
less of a problem than tetrads, so it may be as much a factor of the
number of "independent" notes in the chord as the odd limit.

Now, I've discovered that the tuning tolerance can be much improved by
turning the chorusing down. That done, the 7-limit in 31= sounds great.
I wonder I didn't think of this before...

With the chorusing on, observation 1 did hold very well. Fifths did start
becoming less severe when they were 10 cents out of tune, and got really
out of tune at around 30 cents. Without the chorusing, the transitions
are less easy to hear. 3-limit chords sound bad at 25 cents mistuning,
but not that bad. 4:6:9 chords sound fine in all meantones.

> But if they were for [complete chords], the 9-limit would
> only break the pattern a little bit.

Yes. Most new intervals are still new.

> >Another complication is that sharp intervals are probably more
> >acceptable than flat ones.
>
> That would cancel itself out if you're averaging within a given limit.
> Stretching the octaves a bit can help if you evaluate intervals on an
> octave-specific level.

Being a complication, it isn't as simple as that! I think a significant
problem with 7-limit harmony is the smallness of the interval 8/7. So,
it's better off being sharp, and a flat 7/4 is much less important.
Octave stretching would also help.

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

11/14/1999 5:25:14 PM

Dan Stearns wrote,

>If you write "y" as [(log(N)-log(D))*(n/log(2))], you can empirically
>tweak "n" to eliminate any overlap...

What do you mean? Isn't it true that you get a nonzero range for any
conceivable ratio? If that's true, then there has to be some overlap, since
there are an infinite number of ratios arbitrarily near any given ratio.

To accomplish what you are trying to do, I'd simply look at the ratio that
gets the highest probability going into the harmonic entropy calculation.
Complex ratios will never be the most likely interpretation and so will get
a zero range this way. I'll try to post some calculation results soon.

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

11/14/1999 7:05:04 PM

Graham, since chorusing doubles each note with a vibratoed one, at the crest
and peak of each chorusing cycle, a given interval would really be a tetrad
whose six intervals include two instances of the original interval, an
enlarged version and a contracted version of that interval, and two very
small intervals. Clearly, when moving an interval away from just intonation,
the contracted and enlarged versions of the interval would hit whatever
bounds of permissible mistuning there might be before the original interval.
So that may be why you found more mistuning acceptable when you turned the
chorusing off.

🔗D.Stearns <stearns@xxxxxxx.xxxx>

11/15/1999 10:30:51 AM

[Paul H. Erlich:]
> What do you mean?

Well if you take 7/6 and 6/5 and the 13/11 mediant, the first (whole)
number that would give non-overlapping borders would be n=13:

7/6 254 267 281
13/11 282 289 297
6/5 301 316 332

n=12.2 would be

7/6 253.47 266.87 281.77
13/11 281.80 289.21 297.02
6/5 299.98 315.64 333.04

So the question (in an example like this) is where to stop - what
minor thirds in this space (and in this type of an example) no longer
matter... If you wanted to include 20/17 and 19/16 n=19.4 would give:

7/6 258 267 276
20/17 278 281 284
13/11 285 289 294.1
19/16 294.3 298 301
6/5 306 316 326

etc., etc.

> To accomplish what you are trying to do, I'd simply look at the
ratio that gets the highest probability going into the harmonic
entropy calculation.

I'm not sure I completely understand... could you explain a bit more
(maybe some examples would make it clearer to me).

> Complex ratios will never be the most likely interpretation and so
will get a zero range this way.

Right, but one of the things I was interested in was creating
'decisive' boundaries in the context of denser or more clustered ratio
interpretations of a single, or shades of a single, interval... the
difficulty as I see it is how far to go without veering off into a
counterproductive, or largely irrelevant extreme.

Dan

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

11/14/1999 7:31:48 PM

I wrote,

>> To accomplish what you are trying to do, I'd simply look at the
>>ratio that gets the highest probability going into the harmonic
>>entropy calculation.

Dan Stearns wrote,

>I'm not sure I completely understand... could you explain a bit more
>(maybe some examples would make it clearer to me).

Do you follow the idea of harmonic entropy as I've presented it so far? If
not, it may be time for a tutorial, as soon as I finish the Fokker
periodicity block ones (I'm working on them, really).

>> Complex ratios will never be the most likely interpretation and so
>>will get a zero range this way.

>Right, but one of the things I was interested in was creating
>'decisive' boundaries in the context of denser or more clustered ratio
>interpretations of a single, or shades of a single, interval... the
>difficulty as I see it is how far to go without veering off into a
>counterproductive, or largely irrelevant extreme.

Looking at the ratio with second-highest probability (in the harmonic
entropy model) may capture what you want here without getting too hairy.

🔗D.Stearns <stearns@xxxxxxx.xxxx>

11/15/1999 11:12:37 AM

[Paul H. Erlich:]
> Do you follow the idea of harmonic entropy as I've presented it so
far?

Yes, I think so... But a good part of what I was interested in was
giving a bit more of a 'numerically decisive' answer to say Glen
Peterson's "Consider 300 cents. It is generally heard as a 16 cent
flat 6/5 even though it's only 11 cents sharp of 13/11." In other
words, mine is an attempt to say something along the lines of "this is
'exactly how flat' a 6/5 is *if* you're going to consider 13/11 (or
19/16, etc.) a distinct intervallic entity..." But don't misunderstand
me here, I think that your harmonic entropy graphs represent most all
of this as it should be (for as I said earlier, I agree with you that
these sorts of boundaries can't be decisively or strictly rendered). I
just think that perhaps we are asking, or dealing with, slightly
different questions here, and that they're ones whose answers really
only differ trivially.

Dan

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

11/14/1999 8:09:46 PM

But what I was saying is that you _can_ assign sharp boundaries to ratios if
you only consider the ratio given the largest probability in the harmonic
entropy model.

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

11/15/1999 3:22:21 PM

Glen Peterson wrote,

>Don't have Van Eck's book. What are the underlying mathematics?

If you didn't get that from my harmonic entropy page, try the version with
Joe Monzo's comments: http://www.ixpres.com/interval/td/erlich/entropy.htm

If anything is still unclear, come back to me and I'll put some kind of
tutorial together.

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

11/15/1999 8:13:38 PM

I wrote,

>> To accomplish what you are trying to do, I'd simply look at the
>>ratio that gets the highest probability going into the harmonic
>>entropy calculation.

Dan Stearns wrote,

>I'm not sure I completely understand... could you explain a bit more
>(maybe some examples would make it clearer to me).

OK -- I looked at the region from 250 cents to 335 cents using a Farey
series of order 100 (should be high enough) and assuming a standard
deviation of 1% in the central pitch processor's accuracy. The most likely
interpretations are 7:6 up to 286 cents, 13:11 from 287 to 292 cents, and
6:5 from 293 cents up. The harmonic entropy reaches a local minimum at 273
cents, a local maximum at 283 cents, and another local minimum at 315 cents.
So in the entire 13:11 region one is being "pulled" toward 6:5 by the strong
gravitation of the latter.

Then I tried a standard deviation of 0.6%, that of the most accurate
listener in Goldstein's experiment. The most likely interpretations are
15:13 up to 252 cents, 7:6 from 253 to 281 cents, 13:11 from 282 to 297
cents, 19:16 from 298 to 299 cents, 6:5 from 300 to 331 cents, and 17:14
from 332 cents up. The harmonic entropy had a local minimum at 268 cents, a
local maximum at 290 cents, and another local minimum at 315 cents. So in
this case the 13:11 region, which is larger than before, is divided into a
region that is pulled toward 7:6 and a region that is pulled toward 6:5.

Interestingly, in either case 13:11 is not a point of stability but a point
of relative instability.

I repeated the study with a Farey series of order 150. Using a 1% standard
deviation, the borders were the same as with order 100, the local minima
were at 273 and 318 cents, and the local maximum was at 286 cents. Using a
0.6% standard deviation, the borders are the same as with order 100 (except
that 13:11 takes over 298 cents from 19:16, leaving the latter as the most
likely interpretation only at 299 cents), the local minima were at 268 cents
and 316 cents, and the local maximum was at 291 cents.

So, thankfully, the results do not change significantly when increasing the
Farey order from 100 to 150.
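For anyone who wants to experiment, here is a rough sketch in Python of the
calculation as I read it from the description above (my own reconstruction,
not Paul's code). The assumptions: "Farey order 100" means all reduced ratios
n/d with n, d <= 100; each candidate ratio is allotted the span between its
mediants with its neighbors; its probability is the area under a Gaussian of
width equal to the hearing resolution (1% ~ 17 cents) over that span; and the
entropy is -sum(p*log(p)) over those probabilities. It should reproduce the
flavor of the numbers above -- e.g. 6:5 as the most likely reading of 300
cents -- though the exact boundaries will depend on how the ratio set is
delimited.

import math
from fractions import Fraction
from statistics import NormalDist

def cents(r):
    return 1200 * math.log2(float(r))

LIMIT = 100   # "Farey order 100", read here as: all reduced n/d with n, d <= 100
ratios = sorted({Fraction(n, d) for d in range(1, LIMIT + 1)
                 for n in range(d, 2 * d + 1) if n <= LIMIT})

# each candidate ratio is allotted the span between its mediants with its neighbors
bounds = [cents(Fraction(a.numerator + b.numerator, a.denominator + b.denominator))
          for a, b in zip(ratios, ratios[1:])]

sigma = 1200 * math.log2(1.01)   # the "1%" hearing resolution, about 17 cents

def probabilities(c):
    # P(each ratio | a heard dyad of c cents): Gaussian area over each ratio's span
    g = NormalDist(mu=c, sigma=sigma)
    lows, highs = [-math.inf] + bounds, bounds + [math.inf]
    return [g.cdf(h) - g.cdf(l) for l, h in zip(lows, highs)]

def harmonic_entropy(c):
    return -sum(p * math.log(p) for p in probabilities(c) if p > 0)

p = probabilities(300)
best = max(range(len(ratios)), key=lambda i: p[i])
print(ratios[best], round(harmonic_entropy(300), 4))   # 6/5 should come out on top at 300 cents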

🔗D.Stearns <stearns@xxxxxxx.xxxx>

11/16/1999 11:55:14 AM

[Paul H. Erlich:]
> Then I tried a standard deviation of 0.6%, that of the most accurate
listener in Goldstein's experiment.

Interestingly these results are almost identical to the ones I
originally posted in response to Glen Peterson:

15/13 248
7/6 253.26 267 282.03
13/11 289 297.15
19/16 298
6/5 299.73 316 333.34
17/14 336

where the "n" of "y" was 12... which I empirically decided on (as the
results it gave seemed most right), and then manually truncated.

So maybe this algorithm is really just a poor man's harmonic entropy
formula after all...

Dan

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

11/16/1999 9:54:10 AM

>So maybe this algorithm is really just a poor man's harmonic entropy
>formula after all...

If so, that could be worthwhile -- the calculations for my last message
required 220 million mathematical operations (performed by a computer, of
course). It took only a few minutes, though, on my new Dell Dimension XPS
T550.

🔗D.Stearns <stearns@xxxxxxx.xxxx>

11/17/1999 2:06:10 AM

[Paul Erlich:]
> If so, that could be worthwhile -- the calculations for my last
message required 220 million mathematical operations

No doubt about it - that's a lot of operations! Although I really
couldn't do much of anything with it until I got Excel, the algorithm
I gave was hammered out with pen, paper, and a five dollar pocket
calculator... Let me try giving you another truncated chunk, say 3/2
through 2/1, and if all goes as I think it should, then this should
resemble a Farey series of order 100 (or 150) using the 0.6% standard
deviation.

So if you take:

[((x1*y)*2)+1]
--------------
[((x2*y)*2)+1]

as the margin of gravity to the left, and:

[((x1*y)*2)-1]
--------------
[((x2*y)*2)-1]

as the margin of gravity to the right, where "x1" and "x2" are
x1=[N/(D/a)/b] and x2=[D/(D/a)/b], and "a" and "b" are a=N*D and
b=N+D, and "y" is y=[(log(N)-log(D))*(n/log(2))], and if you then let
n=12, this would be the truncated 3/2 through 2/1 that results:

3/2 669.31 701.96 737.99
23/15 740.01
20/13 740.67 745.79
17/11 747.62 753.64
14/9 757.61 764.92 772.37
25/16 772.63
11/7 773.20 782.49 792.01
19/12 795.56
8/5 800.93 813.69 826.87
21/13 830.25
13/8 832.61 840.53 848.60
18/11 852.59 858.42
23/14 859.45
5/3 863.97 884.36 905.76
27/16 905.87
22/13 906.06 910.79
17/10 912.52 918.64
12/7 924.47 933.13 941.96
19/11 946.20 951.76
26/15 952.26
7/4 954.02 968 984.11
23/13 987.75
16/9 989.53 996 1002.74
25/14 1003.80
9/5 1005.97 1017 1029.50
20/11 1035.00
11/6 1039.80 1049 1059.11
24/13 1061.43
13/7 1063.57 1071 1079.96
28/15 1080.56
15/8 1081.20 1088 1095.43
17/9 1101.05 1107.37
19/10 1111.20 1116.86
21/11 1119.46 1124.59
23/12 1126.32 1131.00
25/13 1132.10 1136.41
27/14 1137.04 1141.03
29/15 1141.31 1145.02
31/16 1145.04
2/1 1148.32 1200.00 1256.77

Dan

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

11/16/1999 1:45:49 PM

I went from 665 cents to 1260 cents with 0.6% standard deviation and Farey
order 100. I only sampled at integer cent values, so some ratios may have
been left out. Here are the ranges for most likely interpretations I got:

??? - 668¢: 22:15
669 - 672¢: 25:17
673 - 676¢: 28:19
677 - 727¢: 3:2
728 - 737¢: 26:17
738 - 739¢: 23:15
740 - 747¢: 20:13
748 - 756¢: 17:11
757 - 772¢: 14:9
773 - 793¢: 11:7
794 - 799¢: 19:12
800 - 829¢: 8:5
830 - 849¢: 13:8
850 - 860¢: 18:11
861 - 864¢: 23:14
865 - 904¢: 5:3
905 - 911¢: 22:13
912 - 922¢: 17:10
923 - 943¢: 12:7
944 - 952¢: 19:11
953 - 985¢: 7:4
986 -1003¢: 16:9
1004-1031¢: 9:5
1032-1037¢: 20:11
1038-1061¢: 11:6
1062-1080¢: 13:7
1081-1096¢: 15:8
1097-1107¢: 17:9
1108-1115¢: 19:10
1118-1125¢: 21:11
1126-1130¢: 23:12
1131-1135¢: 25:13
1136-1142¢: 27:14
1143-1146¢: 29:15
1147-1148¢: 33:17
1149-1157¢: 35:18
1158-1159¢: 37:19
1160-1162¢: 39:20
1163-1164¢: 41:21
 1165¢: 53:27
1166-1234¢: 2:1
1235-1236¢: 53:26
1237-1238¢: 41:20
1239-1241¢: 39:19
1242-1244¢: 37:18
1245-1252¢: 35:17
1253-1255¢: 33:16
1256-1259¢: 29:14
1260-????¢: 27:13

Whew! That required 182 million flops (floating-point operations).

The more complex ratios here are heavily influenced by the Farey limit of
100. I tried 150 but my computer ran out of memory. Anyway, our results seem
to be in pretty good agreement except for those more complex ratios and the
chunks of the really simple ratios that they steal from. I'm not sure what
your truncation involves and why you have sometimes 1, sometimes 2, and
sometimes 3 cents values for a given ratio.

But none of this is harmonic entropy. That's just Van Eck's model. I'll
e-mail you and Joe a graph of the harmonic entropy associated with these
results (perhaps Joe can put it up on the web). It has the following local
extrema:

max @ 673¢: 3.8331
min @ 702¢: 2.3441 (3:2)
max @ 731¢: 3.7903
min @ 782¢: 3.5505 (11:7)
max @ 796¢: 3.5909
min @ 814¢: 3.4390 (8:5)
max @ 834¢: 3.6059
min @ 843¢: 3.5898 (13:8)
max @ 859¢: 3.6417
min @ 884¢: 2.9408 (5:3)
max @ 911¢: 3.6259
min @ 935¢: 3.4790 (12:7)
max @ 947¢: 3.5138
min @ 968¢: 3.2208 (7:4)
max @ 992¢: 3.5537
min @1019¢: 3.3214 (9:5)
max @1036¢: 3.4490
min @1050¢: 3.3874 (11:6)
max @1067¢: 3.4391
min @1080¢: 3.4348 (insubstantial dip; between 13:7 and 15:8)
max @1112¢: 3.4970
min @1119¢: 3.4955 (insubstantial dip; 21:11?)
max @1163¢: 3.7285
min @1200¢: 0.6962 (2:1)
max @1236¢: 3.6972

These results provide another possible answer to the "gravitational field"
question, in this case fewer ratios (3:2, 11:7, 8:5, 13:8, 5:3, 12:7, 7:4,
9:5) being allotted a field. I might say that from 1036-1067¢ you're pulled
weakly toward 11:6, then there's a "plateau" from 1067-1080¢, a "slope"
rising from 1080¢-1112¢, another "plateau" from 1112¢-1119¢, and a "hill"
rising from 1119-1163¢, whereupon you see the gaping "valley" of 2:1 below.
Also note that some of the local maxima are lower than some of the local
minima -- so with this interpretation it's even hard to justify calling 11:7
and 13:8 "consonant".

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

11/16/1999 4:25:36 PM

Thanks, Daniel. There are many differences between Barlow's idea and
harmonic entropy. Harmonic entropy makes no reference to a scale, referring
instead to a simultaneous dyad presented with no musical context. I don't
even agree with Barlow that there is a "182 cent major second between the
second and third degrees of a major scale" -- since even the strictest of
strict JI advocates would lower the second degree to 204 cents below the
third degree when the second degree is used in a subdominant function. In
fact, I see no evidence that melodic seconds or thirds in a scale
automatically get interpreted in terms of simple ratios -- if they did, how
could one explain the wide variety of values they take in the world's
scales?

Besides these qualitative differences, note that harmonic entropy really
only has one "arbitrary" parameter -- the standard deviation -- while Barlow
is discussing "1. the deviation of the imagined pitch from the given one",
"2. the contraction or expansion of the steps in the scale resulting from
the aforesaid stretch of the imagination", 3. an arbitrary prime weighting
scheme.

Note that harmonic entropy starts with no assumptions about simple ratios
being more important than complex ratios. The attraction to simple ratios
comes out of the harmonic entropy calculations simply because, for any Farey
order (i.e. integer limit), the simple ratios have no near neighbor ratios
while the complex ratios have many. So even if _a priori_ we can recognize
ratios of arbitrary complexity (which is what the harmonic entropy model
assumes), the imprecision of our hearing mechanism will spread any stimulus
over a finite range in interval space. If the true interval is near a simple
ratio, there won't be much confusion as to which ratio is heard; otherwise,
many ratios will be included within the "smear" and the listener is
confused. Harmonic entropy models dissonance with a mathematical measure of
this confusion, namely the entropy of the set of probabilities associated
with the proportion of the "smear" that is assigned to the various ratios.
Entropy is the natural measure for disorder and also for informational
complexity.

Now it may be that simple ratios _are_ more important even apart from the
"smearing", in which case harmonic entropy gives too much importance to
complex ratios. Using a standard deviation of 0.6% is also quite idealized;
even the best listener in Goldstein's experiment only achieved this accuracy
at some optimal frequency (was it 2000Hz?) and did worse than 2% in other
musically relevant areas. So the results I presented were absolutely the
best-case scenario for perceiving complex ratios, aside from
combination-tone effects. I think this is an important exercise before
plunging into the endless array of ratios that are at our disposal for
constructing JI systems or evaluating tempered systems.

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

11/16/1999 4:49:40 PM

I wrote,

>1149-1157¢: 35:18
>1158-1159¢: 37:19
>1160-1162¢: 39:20
>1163-1164¢: 41:21
> 1165¢: 53:27
>1166-1234¢: 2:1

>The more complex ratios here are heavily influenced by the Farey >limit of
100. I tried 150 but my computer ran out of memory.

To save on memory, I looked at just the region from 1155-1180¢, this time
with Farey order 200. The most likely interpretations in this region were
now:

????-1156¢: 35:18
1157-1164¢: 41:21
 1165¢: 43:22
1166-1170¢: 51:26
1171-????¢: 2:1

So clearly the ratios of the most likely interpretations in this region are
highly dependent on the Farey limit.

What about the local extrema? The 200-limit version had a local maximum at
1168¢, compared with 1163¢ for the 100-limit version. That is more worrisome
-- perhaps the extrema do not converge to a limit as the Farey limit
approaches infinity. Well, then the Farey limit may truly be an additional
"arbitrary" parameter.

🔗D.Stearns <stearns@xxxxxxx.xxxx>

11/17/1999 9:22:04 AM

[Paul H. Erlich:]
> I'm not sure what your truncation involves, and why you have
sometimes 1, sometimes 2, and sometimes 3 cents values for a given
ratio.

Well n=12 was an empirically chosen parameter, and for any single
ratio the flat and sharp range is based on both the size of N & D and
what you call "n," so the ranges are adjustable... Theoretically, I'm
starting with 1/1 & 2/1, and then I'm incrementally filling out the
n=12 spaces with mediants. The truncation involves an after-the-fact
manual elimination of any overlapping flat and sharp ranges... in other
words keeping the smaller ratios' ranges intact by eliminating the
larger, more complex ratios' borders... this creates a cents continuum
through a range of ratios.

Dan

BTW, thanks for the graph.

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

11/17/1999 1:39:48 PM

Daniel Wolf wrote,

>I understand that. But unless considered in a particular context, I can't
>ascribe much importance to a single interval,

Important or not, a single interval will evoke a sensation involving a
greater or lesser degree of "blending", and harmonic entropy is one model
for explaining this.

>and Barlough seems to be the
>first to propose some criteria for evaluating such a context.

If that context is a melodic scale, I object to Barlow's view of the scale
as implying a single, immutable set of ratios. If not, can you explain?

>Your argument makes a strange circle here. Any given limit is an assumption

>of greater importance for simpler ratios in that the limit is (a) a limit

Good point! Though if I include all ratios of numbers up to 200, isn't the
fact that you only get local minima around, say, 11-odd-limit ratios
significant? Isn't the fact that, even though 63:50 and dozens of other
ratios closer to 400 cents are included in the model on an equal footing
with 5:4, the model still says we're most likely to hear 400 cents as 5:4,
significant?

>and
>(b) the distribution of prime factors in any given octave will be biased
>towards the lower factors.

I don't know what you mean. Prime factors have nothing to do with either the
premises or the results of the harmonic entropy model.

>While the
>entropy idea may have no such assumption in itself, by using the Farey
>series for your calculations you are stuck with this bias.

I agree that there is some bias, but there are general features of the
harmonic entropy curves that seem fairly constant over Farey series of
various orders, series defined with the sum of the numerator and
denominator less than various constants, and probably some of the
alternatives we've discussed on this list that I haven't been able to try
yet.

>I don't happen
>to disagree with the results, but you won't get a significantly different
>answer by taking a set similar to that chosen by Barlow.

I'll put that to the test. But first there's a question we have to answer.
In the Farey and other sets I'd consider, the ratios are delimited by the
mediants (or freshman sums), which are simply the first fractions which
would come between the ratios as the Farey (or other) order is increased.
That would not be the case for Barlow's set. Although the mediant between
two of Barlow's fractions may have a very high prime limit, it would not be
a part of a "Barlow set" of slightly increased order, which would conform
instead to a prime limit of 13 or 17. If we can come up with a definition of
Barlow set that is defined on a single "order" (or complexity) parameter,
then we can systematically (but with more difficulty than simply taking the
freshman sum) delimit the ratios and proceed to calculate entropies. The
naive suspicion is that these will show local minima at ratios with higher
numbers but low prime factors (like 15:8 and 27:16), while my preferred
models do not.

>This "confusion" doesn't conform to any real listening experience I've ever

>had. In experimental settings -- my work with all those minor thirds last
>year, for example -- the length of the listening period was everything in
>sorting out a series of intervals. My sensation in analysing an interval
>over time is not that I am sorting through a smear of possible identities
but
>rather that a single identity is gradually speaking, and that identity is
not
>isolated from beating or difference tones.

Let me quote the first two paragraphs of the conclusion of the article
"Vision: A Window on Consciousness" in the Nov. '99 _Scientific American_:

"What do these findings reveal about visual awareness? First, they
show that we are unaware of a great deal of activity in our brains. We have
long known that we are mostly unaware of the activity in the brain that
maintains the body in a stable state--one of its evolutionarily most ancient
tasks. Our experiments show that we are also unaware of much of the neural
activity that generates--at least in part--our conscious experiences.
"We can say this because many neurons in our brains respond to
stimuli that we are not conscious of. Only a tiny fraction of neurons seem
to be plausible candidates for what physiologists call the 'neural
correlate' of conscious perception--that is, they respond in a manner that
reliably reflects perception"

What I'm suggesting is that the confusion exists on a neural level (which is
why an informational approach such as entropy is appropriate) but the many
stages of neural processing between the sense perception and the conscious
experience serve to provide the illusion that our experience is in most
respects clear and distinct. It may be that one is only conscious in a
general sense of the difficulty with which this neural processing occurs,
and that that contributes to the sensation we call "dissonance" -- this is
one interpretation of the harmonic entropy model. Nevertheless, I've had the
conscious experience of listening to a 400-cent major third and hearing it
as a confusion between 5:4 (mostly) and 9:7 (less so), involving a
fluctuation in time between the two, much like the visual experiments
described in this article, which show a shifting perception when different
patterns are presented to each eye.

What is known is that the experiments on virtual fundamental perception do
indicate a range of "fudging" in our perception of intervals and that this
range does not extend much below 0.6%. For example, with three sine waves at
1100Hz, 1300Hz, and 1500Hz, subjects report agreement between the perceived
fundamental and a probe tone at both 217 Hz (suggesting a 5:6:7
interpretation) and at 186 Hz (suggesting a 6:7:8 interpretation) but not at
other pitches. See Hall, _Musical Acoustics_ and Plomp, _Aspects of Tone
Sensation_. Clearly there are at least two harmonic interpretations here, if
not simultaneous at the conscious level, then quite relevant to what can be
consciously perceived. Terhardt's and Parncutt's understanding of consonance
includes a factor related to the degree of certainty in making these
harmonic interpretations. Is that so unreasonable?
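For concreteness, the arithmetic behind those two probe tones (my own check
of the figures quoted above, in Python):

partials = (1100, 1300, 1500)
for harmonics in ((5, 6, 7), (6, 7, 8)):
    print(harmonics, [round(f / h, 1) for f, h in zip(partials, harmonics)])
# (5, 6, 7) -> [220.0, 216.7, 214.3], clustering near the 217 Hz probe tone
# (6, 7, 8) -> [183.3, 185.7, 187.5], clustering near the 186 Hz probe tone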

>But similar to Tenney's use of entropy as a measure of variation in musical

>form, or the various applications of cybernetic theory in analysis used by
>Franz-Jochen Herfert, I am still left with the feeling that a power tool
is
>being used to tackle a task that can best be done manually

Entropy is defined by an exceedingly simple function, H = -sum(p*log(p)),
and I know of nothing else that could do the job. Feel free to suggest any
alternatives you may think of. The actual technical derivation of the
entropy function involves the following two conditions:

Continuity: H(p,1-p) is a continuous function of p

Grouping: H(p1,p2,p3,...,pM) = H(p1+p2,p3,...,pM) + (p1+
p2)*H(p1/(p1+p2),p2/(p1+p2))

These are sufficient to pin down the definition of entropy. The impetus for
continuity should be clear; the impetus for grouping is that it implies that
splitting one possibility into two increases the entropy by an amount equal
to the entropy of those two possibilities alone (as if they were the only
possibilities) multiplied by the total probability represented by those two
possibilities. In other words, the total disorder of a system increases
exactly as you would expect when you increase the disorder of a subset of
the system.
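A quick numerical check of the grouping condition (my own illustration, in
Python):

import math

def H(*p):
    # H = -sum(p*log(p)) over a probability distribution
    return -sum(q * math.log(q) for q in p if q > 0)

p1, p2, p3 = 0.5, 0.3, 0.2
lhs = H(p1, p2, p3)
rhs = H(p1 + p2, p3) + (p1 + p2) * H(p1 / (p1 + p2), p2 / (p1 + p2))
print(math.isclose(lhs, rhs))   # True: splitting one possibility into two behaves as described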

>I am certain that Goldstein was working with very limited sample periods

I seriously doubt his sample periods were shorter than the length of time
most intervals are heard in most real music. We should check the original
reference to be sure (Carl, didn't you dig this up recently?).

>and
>a realistic evaluation of musical listening is going to have to take this
>into account, as well as work with timbres where beating becomes
unavoidable.

Well, I have always stated that an additional component of dissonance
calculations, besides harmonic entropy, should be the
Helmholtz/Plomp/Sethares-type measures where interference between
closely-spaced partials (beating, roughness) is summed up in some way. The
former was never intended to replace the latter.

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

11/17/1999 7:41:06 PM

>Sorry - just strike the word prime. The fact is plain: one half of the
terms
>will be factor of two, one third will be factors of three, etc. There's
just
>a built-in-bias to lower factors.

That's how the integers, and thus the harmonic series, work. You just don't
run across numbers that are multiples of your telephone number every day.

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

11/17/1999 7:52:28 PM

><< For example, with three sine waves at
> 1100Hz, 1300Hz, and 1500Hz, subjects report agreement between the
perceived
> fundamental and a probe tone at both 217 Hz (suggesting a 5:6:7
> intepretation) and at 186 Hz (suggesting a 6:7:8 interpretation) but not
at
> other pitches. >>

>But then, with three harmonic timbres, the ambiguity disappears.

Very often this is true. That's because you then get 11:13:15 and related
relationships between the upper partials, which tend to be in the frequency
range that matters for the central pitch processor's precision. If it is 1% up
there, as for an average listener, the errors involved in interpreting
11:13:15 as 5:6:7 or 6:7:8 (which are greater than 1%) are not very likely
to occur. Note that, as I recently reported, for a 1% precision the 11:13
_alone_ occupies a full 6 cents in most-likely-interpretation space (though
sharpening it only decreases its harmonic entropy until 6:5 is reached). The
13:15 and 11:15 in the full triad would give the ear extra information and
make it even easier to recognize the 11:13 ratio correctly.

>(I imagine
>one could make a lovely piece of orchestration by taking advantage of this
>effect -- a chord of flutes with one fundamental supported by a bass flute,

>say, and then a chord of oboes with the other...).

Sounds nice!

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

11/17/1999 9:06:25 PM

I wrote,

>>1149-1157¢: 35:18
>>1158-1159¢: 37:19
>>1160-1162¢: 39:20
>>1163-1164¢: 41:21
>> 1165¢: 53:27
>>1166-1234¢: 2:1

>>The more complex ratios here are heavily influenced by the Farey >>limit
of 100. I tried 150 but my computer ran out of memory.

>To save on memory, I looked at just the region from 1155-1180¢, this >time
with Farey order 200. The most likely interpretations in this >region were
now:

>????-1156¢: 35:18
>1157-1164¢: 41:21
> 1165¢: 43:22
>1166-1170¢: 51:26
>1171-????¢: 2:1

>So clearly the ratios of the most likely interpretations in this >region
are highly dependent on the Farey limit.

>What about the local extrema? The 200-limit version had a local >maximum at
1168¢, compared with 1163¢ for the 100-limit version. >That is more
worrisome -- perhaps the extrema do not converge to a >limit as the Farey
limit approaches infinity. Well, then the Farey >limit may truly be an
additional "arbitrary" parameter.

I tried 1165-1180¢ with Farey limit 400, maintaining the 0.6% precision
estimate. The lower limit of the 2:1 region was now 1173¢, and the local
maximum of harmonic entropy was at 1170¢. Converging?

🔗Carl Lumma <clumma@xxx.xxxx>

11/17/1999 9:53:42 PM

>I seriously doubt his sample periods were shorter than the length of time
>most intervals are heard in most real music. We should check the original
>reference to be sure (Carl, didn't you dig this up recently)?

You gave me: "The Central Origin of the Pitch of Complex Tones: Evidence
from Musical Interval Recognition." It was interesting, but has nothing to
do with the smearing factor. The reference which does is...

Goldstein, J. L. 1973. "An optimum processor theory for the central
formation of the pitch of complex tones." J. Acoust. Soc. Amer. Vol. 54 p.
1499.

I haven't read this one, but I worry about your application of his sine
tone results to complex tones.

-Carl

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

11/18/1999 10:05:06 AM

Carl Lumma wrote,

>I haven't read this one, but I worry about your application of his sine
>tone results to complex tones.

I got the idea from Parncutt and Van Eck.

🔗David J. Finnamore <daeron@bellsouth.net>

8/28/2000 6:16:07 AM

You guys are writing way too fast and furious for
me at the moment. Forgive me if something
similar to this has been dealt with in the past
couple of issues.

Keenan's brave and brilliant attempt to
analogize the harmonic landscape and harmonic
entropy has inspired me to try, once again, to
understand what harmonic entropy is and what it
means to music and tuning. I think I might be
starting to get it. Since I understood "entropy"
only from high school physics, not information
theory, I was trying to understand harmonic
entropy in terms of wasted harmonic energy. It
seems to me now that it's really a measure of
harmonic ambiguity. Am I on the right track?
This is something that happens in the human
perceptual system, not merely on paper, right?
And there's nothing wrong with an interval,
scale, or tuning exhibiting harmonic entropy,
then. The primary usefulness of the concept is
in qualifying temperaments, the primary intent
of which is to approximate a certain set of low
prime ratios. The local minima are essentially
what Gary Morrison used to call "buoys in the
the water of intonation." Am I there yet?

--
David J. Finnamore
Nashville, TN, USA
http://members.xoom.com/dfinn.1
--

🔗Pierre Lamothe <plamothe@aei.ca>

9/22/2000 3:04:30 PM

Carl,

In 13297 you wrote :

<< The problem with 1-D for triads is that there doesn't seem
to be a single "simplest" triad "between" two given triads.
For example, what's the simplest triad "between" 3:4:5 and
4:5:6? >>

I would like to say that even if an analogue of mediants doesn't exist
generally in Z^3, there is a partial possibility for it. Your example just
shows the mediant property. As in the Stern-Brocot tree, the starting values
0/1 and 1/0 are insignificant.

I soon explored a triad tree which has the mediant property, and was
surprised that the mediant was also valid on the IG (incremental generator)
I use for showing the internal numerical structure of chords. (I wrote the
definition of IG in post 12905 and quote it in the Post Scriptum.) I didn't
expect to find a new way there; I was (as I often am) on automatic search
pilot.

I got a partially valid tree, since the minimum variations on an a:b:c triad
are (a-1):(b-1):(c-1) and (a+1):(b+1):(c+1). So I tried the mediant starting
with 3:4:5 between 2:3:4 and 4:5:6, and it was OK. As such it's nothing,
since it's partial. I tried the IG and it was also OK. It's only there that I
was surprised, for the IG reveals why it's OK. On the representation of a
chord with ratios like (3:4:5) or harmonics like (1 3 5) it's not obvious.

-----------------------------------------
2:3:4
9:13:17
7:10:13
12:17:22
5:7:9
13:18:23
8:11:14
11:15:19
3:4:5
13:17:21
10:13:16
17:22:27
7:9:11
18:23:28
11:14:17
15:19:23
4:5:6
-----------------------------------------

(9 13 17)
(5 7 13)
(3 11 17)
(5 7 9)
(9 13 23)
(1 7 11)
(11 15 19)
(1 3 5)
(13 17 21)
(1 5 13)
(11 17 27)
(7 9 11)
(7 9 23)
(11 17 27)
(15 19 23)

-----------------------------------------

However, in the IG representation, that corresponds to the mediant starting
with IG 11-1 between IGs 11-0 and 11-2 (but keeping the order aa-b for
calculation: normally, by definition, if b=2c then aa-b is reduced to the
most compact form cb-b):

44-1
33-1
15-5
22-1
55-3
13-3
44-3
11-1
44-5
23-3
55-7
22-3
45-5
33-5
44-7

The mediant property becomes obvious. It's OK only because there are only two
distinct values a and b. There is no chance of getting a reducible ratio. The
tree is isomorphic to the Stern-Brocot tree.
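For what it's worth, here is a small Python sketch of the componentwise-mediant
construction (my own transcription, not Pierre's code); with the first mediant
6:8:10 reduced to 3:4:5, it regenerates the seventeen triads listed above.

from math import gcd

def mediant(t1, t2):
    # componentwise mediant of two triads, reduced to lowest terms
    m = [x + y for x, y in zip(t1, t2)]
    g = gcd(gcd(m[0], m[1]), m[2])
    return tuple(x // g for x in m)

row = [(2, 3, 4), (4, 5, 6)]
for _ in range(4):                     # four levels of mediants give the 17 triads above
    filled = [row[0]]
    for left, right in zip(row, row[1:]):
        filled += [mediant(left, right), right]
    row = filled
for t in row:
    print(":".join(map(str, t)))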

Pierre Lamothe

---------------------------------

[Post Scriptum : IG definition]

<<
I use the incremental generator value abc-d [...] for chord
comparison. It's not a deep theoretical concept, it's
only useful. The values are differentials in most compact form.

abc-d == [a+b+c+d]:[2a+b+c+d]:[2a+2b+c+d]:[2a+2b+2c+d]

If a:b:c:d is the most compact form then

a:b:c:d == [b-a][c-b][d-c]-[2a-d]

Examples :

111-1 == 4:5:6:7 == (1 3 5 7)
121-1 == 5:6:8:9 == (1 3 5 9)
123-3 == 9:10:12:15 == (3 5 9 15) 10:12:15:18 (m7 Chord)
12:15:18:20 (6 Chord)
425-9 == 20:24:26:31 == (3 5 13 31) 24:31:40:52 0-441-884-1326
>>
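Read literally, the tetrad definition above translates to the following (my
sketch, not Pierre's code):

def ig_to_chord(a, b, c, d):
    # abc-d == [a+b+c+d]:[2a+b+c+d]:[2a+2b+c+d]:[2a+2b+2c+d]
    return (a + b + c + d, 2*a + b + c + d, 2*a + 2*b + c + d, 2*a + 2*b + 2*c + d)

def chord_to_ig(p, q, r, s):
    # for a chord p:q:r:s in most compact form: [q-p][r-q][s-r]-[2p-s]
    return (q - p, r - q, s - r, 2*p - s)

print(ig_to_chord(1, 1, 1, 1))   # (4, 5, 6, 7), i.e. 111-1 == 4:5:6:7
print(ig_to_chord(1, 2, 3, 3))   # (9, 10, 12, 15), i.e. 123-3 == 9:10:12:15
print(chord_to_ig(5, 6, 8, 9))   # (1, 2, 1, 1), i.e. 121-1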