
In-browser HE calculator

Mike Battaglia <battaglia01@...>

3/4/2012 8:13:32 AM

Check it out:
http://www.mikebattagliamusic.com/HE-JS/HE.html

This calculates Harmonic Renyi Entropy in O(n log n) time, which I
explained pretty thoroughly on XA some time ago. You'll all know what
most of this is, so play around. So far it's been confirmed to work in
Chrome and Firefox, and will probably not work in Internet Explorer.
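
Here's a minimal Python sketch of one way to get that kind of scaling
(not the HE-JS source; the weighting and grid choices below are just
placeholder assumptions, and the Renyi entropy itself is recapped in
the next paragraph). The idea is that both the numerator and the
denominator of the normalized ratio probabilities are Gaussian
convolutions against a weighted delta train of ratio locations, so the
whole curve can be evaluated with FFT-based convolution:

    # Minimal sketch (not the HE-JS source): Renyi harmonic entropy over a
    # grid of dyad widths, using FFT convolution so the whole curve comes
    # out in O(n log n). Weighting and grid choices are placeholders.
    import numpy as np
    from scipy.signal import fftconvolve

    def renyi_he_curve(grid, ratio_cents, weights, s, a):
        # grid        - uniform numpy grid of dyad widths in cents
        # ratio_cents - cents values of the candidate ratios (inside grid)
        # weights     - a priori weight of each ratio (placeholder choice)
        # s           - Gaussian smearing width in cents
        # a           - Renyi order (a != 1 here; take the limit for a = 1)
        step = grid[1] - grid[0]
        idx = np.round((ratio_cents - grid[0]) / step).astype(int)

        # Weighted delta trains: one with the weights, one with weights^a.
        d1 = np.zeros_like(grid)
        da = np.zeros_like(grid)
        np.add.at(d1, idx, weights)
        np.add.at(da, idx, weights ** a)

        # Gaussian kernel, and the same kernel raised to the a-th power.
        k = np.arange(-np.ceil(5 * s / step), np.ceil(5 * s / step) + 1) * step
        g = np.exp(-k ** 2 / (2 * s ** 2))

        # sum_i K_i(d) and sum_i K_i(d)^a at every grid point, where
        # K_i(d) = w_i * exp(-(d - c_i)^2 / (2 s^2)), via FFT convolution.
        denom = fftconvolve(d1, g, mode='same')
        numer = fftconvolve(da, g ** a, mode='same')

        # H_a(d) = (1/(1-a)) * log(sum_i p_i^a)
        #        = (1/(1-a)) * (log(sum_i K_i^a) - a * log(sum_i K_i))
        return (np.log(numer) - a * np.log(denom)) / (1 - a)

In this form, the number of ratios only affects how many spikes go
into the delta trains, not the cost of the convolution itself.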

There are a few things you may not know. The first may be Renyi
entropy itself, given by Ha(d) for some dyad d. As a refresher, the
"a" parameter is the thing that Renyi entropy adds to generalize
Shannon entropy. For those who weren't following, the point of this is
that it can be interpreted as the extent to which we believe the brain
is "actively guessing" at ratios.

A value of a=1, which corresponds to the usual Shannon entropy, means
that you're more or less content to assume that the incoming signal
remains in and is perceived as being in some kind of "ambiguous" state
that is in some meaningful sense a superposition of ratios. You might
assume this means that the pitches in the dyad are "noisy" and
fluctuate, or that phenomena from more than one ratio are occurring at
the same time, or something like that. A value of a=1 means that
you're not assuming the existence of any sort of intelligent process
that attempts to analyze the ambiguity to guess at the actual ratio:
you're simply measuring the extent of the uncertainty or ambiguity,
which is exactly what the Shannon entropy measures. You might
interpret this as an affirmation that the signal remains perceptually
in this sort of state.

A value of a=Infinity, on the other hand, means that you're more or
less assuming that the incoming signal doesn't remain perceptually
ambiguous but is actively trying to be "resolved" or "guessed at," and
that ratios are in a competition in which one can "win." This type of
entropy is called "min-entropy," and is commonly used rather than
Shannon entropy to measure the security of cryptographic keys, because
it measures the worst-case susceptibility of a secret key to being
correctly cracked by an intelligent hacker with a priori knowledge of
the probability distribution. In our case, we might say it's measuring
the best-case susceptibility of an ambiguous ratio to being correctly
resolved by an intelligent brain with the same sort of a priori
knowledge. One need not make any assumptions about the sort of
algorithm that might be taking place, only that it exists as some sort
of black box, and is "as good as possible": min-entropy, as a result,
measures how surprising it would be if a random key/ratio turned out
to be the most likely one in the distribution.
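
Concretely, the min-entropy is just the negative log of the largest
probability in the distribution, so exp(-Hinf) is exactly the success
probability of that single best guess. For example, with a made-up
distribution:

    # Min-entropy: the surprisal of the single most probable outcome.
    # exp(-Hinf) is the chance that one optimal guess is correct.
    import numpy as np

    p = np.array([0.5, 0.3, 0.2])    # hypothetical ratio probabilities
    h_inf = -np.log(p.max())         # min-entropy
    print(np.exp(-h_inf))            # 0.5: the best single guess wins half the time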

Intermediate values, such as a=2 (the "collision" entropy),
demonstrate behavior intermediate between a=1 and a=Inf. Paul liked
a=2 a lot, because it's based on the probability that two random
draws from the distribution are equal to one another, and so can be
said to measure the uncertainty in a "confirmation" of the incoming
signal. "a" can thus be interpreted as some kind of "activeness" or
"adaptiveness" parameter. It would be interesting to see if lots of
musical training with complex ratios correlates well with an increase
in a (note that for the same s, an increase in a can turn a maximum
like 11/9 into a minimum).
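
To make the a=2 case concrete: sum_i p_i^2 is the probability that two
independent draws from the distribution "collide" (come out equal),
and the collision entropy is the negative log of that. With the same
made-up distribution as above:

    # Collision entropy: -log of the probability that two independent
    # draws from the distribution are equal to one another.
    import numpy as np

    p = np.array([0.5, 0.3, 0.2])    # hypothetical ratio probabilities
    p_collide = np.sum(p ** 2)       # 0.25 + 0.09 + 0.04 = 0.38
    h2 = -np.log(p_collide)          # Renyi entropy of order 2
    print(h2)                        # same value as renyi(p, 2) above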

Other than that, the only thing you might not know is the "normalize
by Hartley entropy" option. The special case a=0 is the "Hartley
entropy," which is like an infinitely dumb guesser; it always treats
any probability distribution as a uniform distribution. This isn't all
that notable by itself, but it's useful when trying to get the HE
curve to converge as N increases. It's a well-known result that, in
general, the Renyi entropy of order a is greater than or equal to the
Renyi entropy of order b if a < b. A proof of this can be found in
proposition 2.4 here:
ftp://ftp.inf.ethz.ch/doc/dissertations/th12187.ps.gz - and an
implication of this is that the value a=0, called the "Hartley
entropy" or sometimes "max entropy," will always be the greatest Renyi
entropy possible and will never be exceeded by larger values of a.

The Hartley entropy is simply log(size of sample space). Therefore, by
looking at Ha(d)/H0(d), we can normalize the curve regardless of N.
H0(d) will always be log(size of sample space), or in our case
log(number of ratios in the set), so this is pretty simple.
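
So the normalized value Ha(d)/H0(d) always lands in [0, 1], and the
ordering result above means it can only decrease as a goes up. A tiny
check with a made-up distribution:

    # Normalizing by the Hartley entropy: H_a / H_0 is always in [0, 1],
    # since H_0 is the largest Renyi entropy for any distribution.
    import numpy as np

    p = np.array([0.5, 0.3, 0.2])    # hypothetical ratio probabilities
    h0 = np.log(p.size)              # Hartley entropy: log(number of ratios)
    h1 = -np.sum(p * np.log(p))      # Shannon entropy (a = 1)
    h2 = -np.log(np.sum(p ** 2))     # collision entropy (a = 2)
    hinf = -np.log(p.max())          # min-entropy (a = Inf)
    print(h1 / h0, h2 / h0, hinf / h0)   # non-increasing, all <= 1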

-Mike