Harmonic Entropy

🔗Graham Breed <g.breed@xxx.xx.xxx>

3/23/1999 5:39:40 AM

I discussed this privately with Paul Erlich a while back. I was happy with
the derivation, except for its singling out of the denominator rule. It
seemed to me that the following rules all followed:

Hold the upper note constant, and entropy is proportional to the
denominator.

Hold the lower note constant, and entropy is proportional to the numerator.

Hold the arithmetic mean of the frequencies of the two notes constant, and
entropy is proportional to the arithmetic mean of the numerator and
denominator. This is a sum rule.

Hold the geometric mean of the frequencies of the two notes constant, and
entropy is proportional to the geometric mean of the numerator and
denominator. This is a product or LCM rule.

So, Paul, are you happy with this or do we have to start that row again?

Dave Keenan's conclusion:

>What we are suggesting here is that the dissonance is related to the period
>(inverse of frequency) of the virtual fundamental (VF). When the ratio for
>an interval is in lowest terms, the VF corresponds to a 1.

is in the right area but wrong. Harmonic entropy, which may or may not be
related to dissonance, is in fact related to the frequency an LCM above the
VF. I believe this is known as the "guide tone" in the literature. So, the
lower the guide tone, the lower the entropy.
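
For concreteness, a minimal sketch of that reading, taking the VF as the "1" of
the reduced ratio and the guide tone as an LCM above it (the 200 Hz lower note
is an arbitrary choice):

```python
from math import gcd

def vf_and_guide_tone(n, d, lower_freq=200.0):
    """For an interval n:d (upper:lower) over a lower note of lower_freq Hz,
    return the virtual fundamental (the '1' of the reduced ratio) and the
    guide tone, an LCM above the VF (lcm(n, d) = n*d once the ratio is reduced)."""
    g = gcd(n, d)
    n, d = n // g, d // g
    vf = lower_freq / d
    guide = vf * n * d
    return vf, guide

print(vf_and_guide_tone(5, 3))    # 5/3 over 200 Hz: VF ~66.7 Hz, guide tone 1000 Hz
print(vf_and_guide_tone(15, 1))   # 15/1 over 200 Hz: VF 200 Hz, guide tone 3000 Hz
```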

Dave Keenan's assertion:

>If we want to talk about the relative dissonance of intervals (or chords
>for that matter) without considering frequency, then the only sensible
>thing to do is to compare them all with the same average frequency, hence
>n+d or (n+d)/2 is the right approximation for dyads, not d.

seems to apply also to the geometric mean. But maybe we're splitting hairs.
I'll consider all these rules to be equally good for the time being. They
are all qualitatively the same for most intervals within an octave.
However, for large intervals they are divergent. 15/1 is the same as 5/3
with a product rule. With a denominator rule, 15/1 is better. With a sum
rule, 5/3 is better. With a numerator rule, 5/3 is much better. So, the
four rules can be ranked in order of how much they prefer large intervals:

big is better ->
numerator sum product denominator
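
A quick numeric check of that ranking for 15/1 against 5/3:

```python
from math import sqrt

# 15/1 and 5/3 share the product 15, so the product (geometric mean) rule
# cannot separate them; the other three rules split as described above.
for n, d in [(15, 1), (5, 3)]:
    print(f"{n}/{d}: numerator {n}, sum {(n + d) / 2}, "
          f"product {sqrt(n * d):.2f}, denominator {d}")
```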

As large intervals do appear to be less consonant, the denominator rule
looks like the most "musical" of the four. I am not convinced that the
harmonic entropy derivation proves it to be so.

I don't know how to generalise the harmonic entropy beyond dyads. However,
my intuition strongly suggests that it will still lead to the guide tone.

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

3/24/1999 12:50:30 PM

Graham Breed wrote,

>Hold the upper note constant, and entropy is proportional to the
>denominator.

>Hold the lower note constant, and entropy is proportional to the
>numerator.

>Hold the arithmetic mean of the frequencies of the two notes constant,
>and entropy is proportional to the arithmetic mean of the numerator and
>denominator. This is a sum rule.

>Hold the geometric mean of the frequencies of the two notes constant,
>and entropy is proportional to the geometric mean of the numerator and
>denominator. This is a product or LCM rule.

>So, Paul, are you happy with this or do we have to start that row
>again?

First of all, entropy is not the right word to use in the above
statements. "Width" or even "field of attraction" would be better, but
then the proportionalities hold inversely. It is an input into the
entropy formula, not the output.

Secondly, I've only proved and numerically verified that the first of
the four statements above (suitably reworded) is a good approximation. I
just tried the third one numerically and the results look very poor --
the width for some of the most complex ratios can be up to half the
width for 1/1! That may explain some of the weird minima I found using
the Mann (num+den<N) series. You tried modifying my derivation to prove
the last one but I thought your manipulations were not valid, since they
assumed a contradiction. What I just found for the last case is that in
fact the width is, to an even better approximation than the one for the
first (Farey) case, inversely proportional to the _square root_ of the
product of the numerator and denominator. This is interesting!
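
A sketch of the kind of numerical check described here, assuming the "product"
series means all reduced ratios n/d below some bound on n*d (the bound and the
sample ratios below are arbitrary):

```python
from fractions import Fraction
from math import log2, sqrt

# Assumed "product" series: all reduced ratios between 1/1 and 4/1 with n*d <= N_MAX.
N_MAX = 10000
ratios = sorted({Fraction(n, d)
                 for d in range(1, 101) for n in range(d, 4 * d + 1)
                 if n * d <= N_MAX})

def cents(x):
    return 1200 * log2(x)

targets = {Fraction(3, 2), Fraction(5, 3), Fraction(7, 4), Fraction(13, 8), Fraction(23, 16)}
for lo, mid, hi in zip(ratios, ratios[1:], ratios[2:]):
    if mid in targets:
        left = Fraction(lo.numerator + mid.numerator, lo.denominator + mid.denominator)
        right = Fraction(mid.numerator + hi.numerator, mid.denominator + hi.denominator)
        width = cents(right) - cents(left)
        # If the observation holds, width * sqrt(n*d) stays roughly constant.
        gm = sqrt(mid.numerator * mid.denominator)
        print(f"{mid}: width {width:.2f} cents, width * sqrt(n*d) = {width * gm:.1f}")
```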

>>What we are suggesting here is that the dissonance is related to the
>>period (inverse of frequency) of the virtual fundamental (VF). When the
>>ratio for an interval is in lowest terms, the VF corresponds to a 1.

>is in the right area but wrong. Harmonic entropy, which may or may not
>be related to dissonance, is in fact related to the frequency an LCM
>above the VF. I believe this is known as the "guide tone" in the
>literature. So, the lower the guide tone, the lower the entropy.

I tend to side with Keenan here and don't really know where you're
coming from.

>I don't know how to generalise the harmonic entropy beyond dyads.
>However, my intuition strongly suggests that it will still lead to the
>guide tone.

The guide tone is lower and simpler for utonal chords than otonal
chords. However, harmonic entropy will definitely be lower for otonal
chords.

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

3/24/1999 2:18:17 PM

Of course, the square root of the product _is_ the geometric mean. So
Graham's fourth rule is correct, except entropy needs to be replaced
with width and the proportionality is inverse. Now the geometric mean is
of course the center of the interval, as normally measured (e.g., in
cents). So this is probably the most sensible way of comparing
intervals. I don't see how LCM figures into it. If we can somehow
believe that for triads, the "area" is inversely proportional to the
geometric mean of the three integers most simply expressing the chord
(e.g., 4:5:6 or 10:12:15), then we have a way to compute harmonic
entropy for triads. But if you look at a 2-d plot (such as a Dalitz
plot) of all the triads, you'll see that this "area" may have a very
strange shape and might not be possible to define with some
generalization of mediants. Then again, it might . . .
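
For what it's worth, the geometric means of the integers in those two spellings
of the triad come out as follows (a sketch only; whether this is the right
quantity for the triadic "area" is exactly the open question above):

```python
from math import prod

def chord_geometric_mean(chord):
    """Geometric mean of the integers in a chord spelling like (4, 5, 6)."""
    return prod(chord) ** (1 / len(chord))

print(chord_geometric_mean((4, 5, 6)))     # otonal major triad: about 4.93
print(chord_geometric_mean((10, 12, 15)))  # utonal minor triad: about 12.16
```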

The conceptual advantage of the Farey series (which leads to a
denominator rule) is that it seems reasonable to believe that the brain
has a "template" of sorts of the harmonic series up to a certain limit.
Then all intervals within that template are possible interpretations,
and others aren't. If this is the way it works, it doesn't matter if you
are holding the lower note, upper note, arithmetic mean, or geometric
mean constant. With the other series, there isn't a corresponding
"template" that the brain could be referencing. I admit that this isn't
a major conceptual advantage, though.

🔗Daniel Wolf <DJWOLF_MATERIAL@xxxxxxxxxx.xxxx>

3/24/1999 2:22:33 PM

Message text written by Paul Erlich
> mediants<

Isn't the word "median"? Mediant is a musical term for a tonal function.

Daniel Wolf

🔗Paul H. Erlich <PErlich@xxxxxxxxxxxxx.xxxx>

3/24/1999 3:19:11 PM

I wrote,

>I just tried the third one numerically and the results look very poor --
>the width for some of the most complex ratios can be up to half the
>width for 1/1! That may explain some of the weird minima I found using
>the Mann (num+den<N) series.

Well, I don't know about that last statement, but here's an example of
the first one:

For N=80, the nearest fractions to 1/1 are 39/40 and 40/39. The mediants
are therefore 40/41 and 41/40, for a width of 1681/1600, or 85.5 cents.
Now the nearest fractions to 39/1 are 40/1 and 77/2. The mediants are
79/2 and 116/3, for a width of 36.9 cents. This doesn't compare well
with the 20-fold decrease in width Graham's third rule would predict!
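
That example can be reproduced mechanically; the sketch below assumes the
series in question is all reduced n/d with n+d < N:

```python
from fractions import Fraction
from math import log2

def cents(x):
    return 1200 * log2(x)

def mediant(x, y):
    return Fraction(x.numerator + y.numerator, x.denominator + y.denominator)

N = 80
series = sorted({Fraction(n, d)
                 for d in range(1, N) for n in range(1, N)
                 if n + d < N})

def width_around(target):
    i = series.index(target)
    below, above = series[i - 1], series[i + 1]
    return cents(mediant(target, above)) - cents(mediant(below, target))

print(width_around(Fraction(1, 1)))    # ~85.5 cents (mediants 40/41 and 41/40)
print(width_around(Fraction(39, 1)))   # ~36.9 cents (mediants 116/3 and 79/2)
```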

Using just the denominator (i.e., smaller integer) works much better for
widths in the Mann series than using the sum. But the approximation of
using just the denominator is rougher in the Mann series than in the
Farey series.

🔗Paul H. Erlich <PErlich@Acadian-Asset.com>

3/25/1999 2:05:44 PM

I wrote,

>> mediants<

Daniel Wolf wrote,

>Isn't the word "median"? Mediant is a musical term for a tonal
>function.

The word is mediant; it is a mathematical term as well as a musical
term. The median is a type of average which takes the central entry in a
rank-ordered list.
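
A small illustration of the difference, with arbitrary fractions (mediant of
two fractions versus median of a list):

```python
from fractions import Fraction
from statistics import median

def mediant(x, y):
    """Mediant of a/b and c/d is (a + c) / (b + d)."""
    return Fraction(x.numerator + y.numerator, x.denominator + y.denominator)

print(mediant(Fraction(3, 5), Fraction(2, 3)))                   # 5/8, between 3/5 and 2/3
print(median([Fraction(3, 5), Fraction(5, 8), Fraction(2, 3)]))  # 5/8, the middle entry
```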