
Calculating Harmonic Entropy

🔗John A. deLaubenfels <jdl@adaptune.com>

9/9/2000 5:12:59 AM

[Carl Lumma:]
>>>Sorry for not being more clear. Could you show the 20 ratios, from
>>>the farey series used, with the lowest harmonic entropy?

[Paul Erlich:]
>>I've only calculated the harmonic entropy at cents values.

[Carl:]
>Can you ask it about the cents values represented by the ratios of a
>given farey order?

I've just gotten a C++ program up and running; it calculates the
following values:

For 551.32, entropy is 4.234279
For 617.49, entropy is 4.179904
For 782.49, entropy is 4.080664
For 231.17, entropy is 4.383823
For 1071.70, entropy is 3.940651
For 435.08, entropy is 4.270048
For 933.13, entropy is 4.015730
For 266.87, entropy is 4.341114
For 1049.36, entropy is 3.922407
For 813.69, entropy is 4.028784
For 1017.60, entropy is 3.912540
For 315.64, entropy is 4.257082
For 582.51, entropy is 4.119130
For 968.83, entropy is 3.875105
For 386.31, entropy is 4.141619
For 884.36, entropy is 3.742208
For 498.04, entropy is 3.905651
For 701.96, entropy is 3.405798
For 1200.00, entropy is 1.840298
For 0.00, entropy is 2.023188

(these are your 20 points, plus 1/1, in reverse order). This is for
Farey N=100, s=.01 (internally changed to .01/log(2)).

[Paul:]
>>>>right (although what I call s=1% becomes, in your units,
>>>>s=1%/log(2)=1.4427%.)

[Carl:]
>>>Eh? How does s apply to what I did?

[Paul:]
>>It doesn't, but if you're going to proceed to calculate harmonic
>>entropy, it'll be good to have this straight.

[Carl:]
>Ah, thanks. I think I'll need to go over to something a little
>higher-level than LISP before I calculate harmonic entropy. I have
>Maple V, but perhaps I should start with Mathematica or Matlab...

Carl, I don't know LISP, but I'm sure it has plenty of power! You've
already done the hard part, sorting the Farey series and figuring the
mediant widths. Here's all you've got to do to finish the job (Paul E,
correct me if I'm wrong about anything here!):

. You've already got a list of ratios representing the Farey series
for N=100 (natch, N can be anything else as well!). You've already
sorted the list and figured out the logarithmic "width" of each
interval using the mediant method; thus, for example, 3/2 has width
0.01414622 octaves, as you've calculated already.

. Pick any interval for which you want to calculate harmonic entropy.
Paul traverses even cent values, but the entropy curve is a
continuous function. Express the interval as a number of octaves, to
align with the widths above. Example: 3/2 is approx. 701.96 cents,
or 0.5849625 octaves.

. Loop over all the ratios in the Farey series, comparing each to the
interval you've picked. For each ratio, calculate a "raw probability"
as follows:

dist = pickedInterval - fareyInterval;
sdev = dist / s; // s may be .01/log(2), etc.
prob = exp(-sdev * sdev / 2) * intervalWidth;

Suppose, for example, you are calculating the entropy at 699 cents.
The "raw probability" that this will be heard as 3/2 is:

pickedInterval = .5825000 octaves
fareyInterval = .5849625 octaves
dist = -.0024625 octaves
s = .0144270 octaves
sdev = -.1706875
prob = exp(-0.0145671) * 0.01414622
= 0.9855385 * 0.01414622
= 0.0139416

So, you'll calculate a raw probability that 699 cents will be
heard as each one of the Farey series ratios; as you might expect,
the values get very small as you compare, say, 699 cents to 1/1!

. Sum all the "raw probabilities"; they should add up to 1, but of
course they won't! Save the sum, then make a second probability
pass and calculate the "true probability" that the interval will
be heard as a particular just ratio.

prob2 = prob / probSum; // sigma(prob2) = 1.0

. In this second pass you're going to build the actual entropy value
using each of the individual probabilities, by summing:

entropy = entropy - (prob2 * log(prob2));

That's all there is to it! For dyads, that is...

Note that entropy is positive because each probability is less than
one and therefore prob*log(prob) is negative.
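
Putting the steps above together, here's a bare-bones C++ sketch (not
my actual program, just the recipe above in compilable form; the names
are made up for illustration, and the Farey list with its mediant
widths is assumed to be built already, as you've done):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct FareyRatio {
        double interval; // ratio expressed in octaves, e.g. 0.5849625 for 3/2
        double width;    // mediant-to-mediant width, in octaves
    };

    // Harmonic entropy of pickedInterval (in octaves); s is also in
    // octaves, e.g. .01/log(2).
    double harmonicEntropy(const std::vector<FareyRatio> &farey,
                           double pickedInterval, double s)
    {
        std::vector<double> prob(farey.size());
        double probSum = 0.0;

        // First pass: raw probabilities (rectangle approximation).
        for (std::size_t i = 0; i < farey.size(); i++) {
            double dist = pickedInterval - farey[i].interval;
            double sdev = dist / s;
            prob[i] = std::exp(-sdev * sdev / 2) * farey[i].width;
            probSum += prob[i];
        }

        // Second pass: normalize so the probabilities sum to 1, then
        // accumulate entropy = -sigma(prob2 * log(prob2)).
        double entropy = 0.0;
        for (std::size_t i = 0; i < farey.size(); i++) {
            double prob2 = prob[i] / probSum;
            if (prob2 > 0.0) // guard against exp() underflowing to zero
                entropy -= prob2 * std::log(prob2);
        }
        return entropy;
    }

Feed it pickedInterval = 701.96/1200.0 and s = .01/log(2.0) and it
should land near the 3.405798 value in the table above.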

JdL

🔗Carl Lumma <CLUMMA@NNI.COM>

9/10/2000 11:04:55 AM

>>Can you ask it about the cents values represented by the ratios of a
>>given farey order?
>
>I've just gotten a C++ program up and running; it calculates the
>following values:
//
>(these are your 20 points, plus 1/1, in reverse order). This is for
>Farey N=100, s=.01 (internally changed to .01/log(2)).

Cool, but I was asking for the 20 ratios from Farey order 100 with
the lowest harmonic entropy, as opposed to the h.e. values for the
"widest" 20 ratios from Farey order 100.

>Carl, I don't know LISP, but I'm sure it has plenty of power!

Well, you can code any computable function with it, so in theory,
it can do everything. And, it's very fast and efficient for an
interpreted language. But it only has as much power as you give
it -- data types, even basic math, must all be built from simple
list-processing routines. I even had to write a procedure to do
base-2 logs (I'm actually using a stripped-down dialect of LISP
known as Scheme).

> sdev = dist / s; // s may be .01/log(2), etc.

? Isn't it pickedInterval * s?

> prob = exp(-sdev * sdev / 2) * intervalWidth;

What's intervalWidth? The width of the fareyInterval?

I'm not sure what's going on here, but am I to believe it's equivalent
to finding the area of a standard distribution centered on pickedInterval
that's sectioned off by the width of the fareyInterval?

> . Sum all the "raw probabilities"; they should add up to 1, but of
> course they won't! Save the sum, then make a second probability
> pass and calculate the "true probability" that the interval will
> be heard as a particular just ratio.
>
> prob2 = prob / probSum; // sigma(prob2) = 1.0

Okay, we're finding what portion of the total probability each raw
probability is. This finishes the job I doubted the step above did?

> . In this second pass you're going to build the actual entropy value
> using each of the individual probabilities, by summing:
>
> entropy = entropy - (prob2 * log(prob2));
>
>Note that entropy is positive because each probability is less than
>one and therefore prob*log(prob) is negative.

Right. Zee familiar formula.

>That's all there is to it!

Thanks dude. Maybe I can do this in Scheme.

-Carl

🔗John A. deLaubenfels <jdl@adaptune.com>

9/12/2000 6:35:39 AM

[I wrote:]
>>(these are your 20 points, plus 1/1, in reverse order). This is for
>>Farey N=100, s=.01 (internally changed to .01/log(2)).

[Carl Lumma:]
>Cool, but I was asking for the 20 ratios from Farey order 100 with
>the lowest harmonic entropy, as opposed to the h.e. values for the
>"widest" 20 ratios from Farey order 100.

Oops! Sorry, Carl. I don't have a minimum-finding wrapper routine
written yet.

[JdL:]
>>Carl, I don't know LISP, but I'm sure it has plenty of power!

[Carl:]
>Well, you can code any computable function with it, so in theory,
>it can do everything. And, it's very fast and efficient for an
>interpreted language. But it only has as much power as you give
>it -- data types, even basic math, must all be built from simple
>list-processing routines. I even had to write a procedure to do
>base-2 logs (I'm actually using a stripped-down dialect of LISP
>known as Scheme).

Woah - had to write your own log function? I'm impressed! I do happen
to remember the infinite series for the exp() function, if you're
interested.
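
For the record, it's just

    exp(x) = 1 + x + x^2/2! + x^3/3! + ...

which, as a quick C++ sketch (summing until the terms stop mattering;
fine for the small arguments we care about here), might look like:

    #include <cmath>

    // Taylor series for exp(x); 'term' holds x^n/n! at each step.
    double myExp(double x)
    {
        double term = 1.0, sum = 1.0;
        for (int n = 1; n < 100; n++) {
            term *= x / n;
            sum += term;
            if (std::fabs(term) < 1e-16 * std::fabs(sum))
                break; // converged to double precision
        }
        return sum;
    }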

[Carl:]
>What's intervalWidth? The width of the fareyInterval?

Yes.

[Carl:]
>I'm not sure what's going on here, but am I to believe it's equivalent
>to finding the area of a standard distribution centered on
>pickedInterval that's sectioned off by the width of the fareyInterval?

Yes, the bell curve is centered around pickedInterval, and crosses each
of the fareyIntervals.

>> . Sum all the "raw probabilities"; they should add up to 1, but of
>> course they won't! Save the sum, then make a second probability
>> pass and calculate the "true probability" that the interval will
>> be heard as a particular just ratio.
>>
>> prob2 = prob / probSum; // sigma(prob2) = 1.0
>
>Okay, we're finding what portion of the total probability each raw
>probability is. This finishes the job I doubted the step above did?

If I'm understanding your question, yes.

[Paul E:]
>These are pretty close to my values, except for the most consonant
>ratios, which are farther off (I get 2.2122 for 0.00 cents).

[JdL:]
>>. Sum all the "raw probabilities"; they should add up to 1, but of
>> course they won't!

[Paul:]
>They should sum to sqrt(2*pi)*s.

OK. Glad to have that number; it provides a check for the original
sum! When I said "should", I meant "should be normalized to 1, before
entropy is calculated."
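(For s = .01/log(2) = approx .0144270 octaves, that works out to
sqrt(2*pi) * .0144270 = approx .0362.)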

[Paul:]
>I think I figured out why my results differ from John deLaubenfels' --
>I'm actually integrating over the bell curve from mediant to mediant
>(by subtracting the two corresponding values of the Error Function),
>while John is approximating the integral with the area of a rectangle.

That would fit the observation that the biggest discrepancies are at
locations like 1/1! OK, Paul, help me out: the "Error Function" is
the integral of the bell curve function, exp(-x^2/2), yes? If memory
serves, that's one of those nasty functions for which there is no nice
expression to represent the integral, true? So, how do I write the
"Error Function"? I can, of course, do it by brute force: integrate
exp(-x^2/2) in tiny slices, save some of the values in a compile-time
array, and interpolate. Got any better ideas?
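
Just so I'm sure I understand the target, the mediant-to-mediant
version of the probability step would look something like this,
assuming an erf() routine exists (the very thing I'm asking how to
write), since the area under a bell curve with standard deviation s
between two points a and b is 0.5*(erf(b/(s*sqrt(2))) - erf(a/(s*sqrt(2)))):

    #include <cmath>

    // Probability that the interval heard (bell curve centered on
    // pickedInterval, std dev s) lands between the two mediants lo and
    // hi that bracket a given Farey ratio. Unlike my rectangle version,
    // these come out already normalized (up to the far tails).
    double mediantProb(double pickedInterval, double lo, double hi, double s)
    {
        double a = (lo - pickedInterval) / (s * std::sqrt(2.0));
        double b = (hi - pickedInterval) / (s * std::sqrt(2.0));
        return 0.5 * (std::erf(b) - std::erf(a));
    }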

JdL

🔗John A. deLaubenfels <jdl@adaptune.com>

9/12/2000 8:28:59 AM

[I wrote:]
>>OK, Paul, help me out: the "Error Function" is the integral of the
>>bell curve function, exp(-x^2/2), yes? If memory serves, that's one
>>of those nasty functions for which there is no nice expression to
>>represent the integral, true? So, how do I write the "Error
>>Function"?

[Manuel Op de Coul:]
>Yeah, use the approximation that I used. I get nearly the same entropy
>values as Paul.

Thanks! Do my eyes deceive me, or is that Pascal? Haven't seen that
in a while... But Pascal is easy to translate into C, much easier than
Matlab!

JdL

🔗Ed Borasky <znmeb@teleport.com>

9/12/2000 8:34:42 PM

> -----Original Message-----
> From: John A. deLaubenfels [mailto:jdl@adaptune.com]
> Sent: Tuesday, September 12, 2000 6:36 AM
> To: tuning@egroups.com
> Subject: [tuning] Re: Calculating Harmonic Entropy
>
> That would fit the observation that the biggest discrepancies are at
> locations like 1/1! OK, Paul, help me out: the "Error Function" is
> the integral of the bell curve function, exp(-x^2/2), yes? If memory
> serves, that's one of those nasty functions for which there is no nice
> expression to represent the integral, true? So, how do I write the
> "Error Function"? I can, of course, do it by brute force: integrate
> exp(-x^2/2) in tiny slices, save some of the values in a compile-time
> array, and interpolate. Got any better ideas?

The error function (erf) and the probability integral (integral under the
bell curve) are similar but not identical; I don't have my tables handy but
I believe they differ only by a scaling factor. No, there is not a closed
form expression for either of them. However, there are good approximations
which do not require numerical integration. If you tell me which one it is
you want (error function or integral under the bell curve) I will look up
the approximations for you. Me, I just do it in Derive, where they're both
built in :-).

Speaking of which, I've figured out how to do Sethares' contour plots of
dissonance surfaces in Derive, and will post an example to the list in the
near future. How do I post a "JPEG"??
--
M. Edward (Ed) Borasky
znmeb@teleport.com
http://www.borasky-research.com/

🔗Monz <MONZ@JUNO.COM>

9/12/2000 9:48:05 PM

--- In tuning@egroups.com, "Ed Borasky" <znmeb@t...> wrote:
> http://www.egroups.com/message/tuning/12699
>
> ... I've figured out how to do Sethares' contour plots of
> dissonance surfaces in Derive, and will post an example to the
> list in the near future. How do I post a "JPEG"??

The best way to do this is to create a folder for your files
in the tuning 'files' section that egroups provides on the
Tuning List website:

http://www.egroups.com/files/tuning/

Click on 'Add folder', then name it and give a brief
description. Then click 'Upload file' and it will
open a browser dialog-box where you select the file
from your hard-drive. That's all there is to it.

Don't try to send it to the List with your posting
as an attachment - most subscribers won't get it,
and neither will the archives.

-monz
http://www.ixpres.com/interval/monzo/homepage.html

🔗John A. deLaubenfels <jdl@adaptune.com>

9/13/2000 6:56:23 AM

[I wrote:]
>> Do my eyes deceive me, or is that Pascal?

[Manuel Op de Coul:]
>Nope, its big sister Ada.

Oops! I shoulda looked more closely! So you have access to an actual
Ada compiler, eh?

[I wrote:]
>>OK, Paul, help me out: the "Error Function" is
>>the integral of the bell curve function, exp(-x^2/2), yes?

[Paul E:]
>Basically, yes -- it's actually 2/sqrt(pi) * integral from 0 to x of
>exp(-t^2) dt.

OK, kyool...

[JdL:]
>>If memory
>>serves, that's one of those nasty functions for which there is no nice
>>expression to represent the integral, true?

[Paul:]
>Right -- it's a "special function".

I hate it when that happens! It's SO much easier to integrate, say,
x^2.

[JdL:]
>>So, how do I write the
>>"Error Function"? I can, of course, do it by brute force: integrate
>>exp(-x^2/2) in tiny slices, save some of the values in a compile-time
>>array, and interpolate. Got any better ideas?

[Paul:]
>Matlab uses an approximation algorithm, published in "Rational
>Chebyshev approximations for the error function" by W. J. Cody, Math.
>Comp., 1969, pp. 631-638. See if you can dig that up.

Well, thanks for the ref; next time I'm near a good reference library
I'll hafta look it up! Meanwhile...

[Ed Borasky:]
>The error function (erf) and the probability integral (integral under
>the bell curve) are similar but not identical; I don't have my tables
>handy but I believe they differ only by a scaling factor. No, there is
>not a closed form expression for either of them. However, there are
>good approximations which do not require numerical integration. If you
>tell me which one it is you want (error function or integral under the
>bell curve) I will look up the approximations for you. Me, I just do it
>in Derive, where they're both built in :-).

If they're factors of each other, either is fine - I can figure out how
to scale the input & output. Thanks!
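
For the archives, the exact relation is

    Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))

(a rescaled argument plus a shift, so "factors of each other" is close
enough), and one classic closed-form approximation - Abramowitz &
Stegun eq. 7.1.26, good to about 1.5e-7, though not necessarily the
Cody routine Paul mentioned - sketches out like this:

    #include <cmath>

    // erf(x) via the Abramowitz & Stegun eq. 7.1.26 polynomial
    // approximation; |error| < 1.5e-7 over the whole real line.
    double erfApprox(double x)
    {
        double sign = (x < 0.0) ? -1.0 : 1.0;
        x = std::fabs(x);
        double t = 1.0 / (1.0 + 0.3275911 * x);
        double poly = ((((1.061405429 * t - 1.453152027) * t
                         + 1.421413741) * t - 0.284496736) * t
                         + 0.254829592) * t;
        return sign * (1.0 - poly * std::exp(-x * x));
    }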

I just love those mathematical terms, "special function" and "no closed
form expression". Both my brothers were math majors, but, as will be
obvious, I only dabble.

BTW! Ed, I never did hear back from you on the Shostakovich Preludes
and Fugues, Opus 87. Didja like the retunings or no? I love the music,
and find it nicely retuned, but my ear is often very different from
others'! You directed me to download the complete MIDI (in 12-tET)
from:

http://www.geocities.com/Vienna/5619/

The web owner (whose name is not in front of me at the moment) did not
respond to my request to post tunings, which is a shame, IMO.

JdL