Reproducing HE exactly vs coming up with a sensible model

Mike Battaglia <battaglia01@gmail.com>

1/29/2011 7:27:30 AM

If you've been to the HE Convolution Theorem page recently, you may have
noticed it says that the proof is broken. I noticed a pretty major oversight
which had invalidated a lot of it. I realized this after trying to come up
with a perfectly correlated output for Paul's HE - I soon discovered that
something was wrong with my model as far as exactly replicating HE is
concerned. In light of that, I sought to work things out and really see if
HE has anything to do with a convolution at all.

After some time, I finally got to the following horrid-looking summation:

H(d) = -\frac{1}{(G_s \star K)(d)} \left( \sum_i \left[ \left(G_s \star \frac{\delta_{\mathrm{cents}(i)}}{\sqrt{i_{num} i_{den}}}\right)(d) \, \log\left( \left(G_s \star \frac{\delta_{\mathrm{cents}(i)}}{\sqrt{i_{num} i_{den}}}\right)(d) \right) \right]
- \sum_i \left[ \left(G_s \star \frac{\delta_{\mathrm{cents}(i)}}{\sqrt{i_{num} i_{den}}}\right)(d) \, \log\left( (G_s \star K)(d) \right) \right] \right)
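
Numerically, that whole mess is just -sum_i p_i*log(p_i), where each p_i is a
Gaussian-smeared spike of height 1/sqrt(n*d) sitting at ratio i's cents value,
normalized by the sum over all of them. A slow, direct MATLAB sketch of
evaluating it that way (the seed set, the s value, and the grid here are
placeholder choices of mine, not Paul's settings):

% Direct (slow) evaluation of the summation above, on a cents grid.
% Seed set, s, and grid spacing are arbitrary placeholders.
s = 17;                                    % Gaussian width in cents
d = 0:0.5:1200;                            % dyad sizes to evaluate
[n, m] = meshgrid(1:80, 1:80);             % candidate ratios n/m
keep = gcd(n, m) == 1 & n./m >= 1/2 & n./m <= 4;
n = n(keep); m = m(keep);
c = 1200*log2(n./m);                       % cents(i)
a = 1./sqrt(n.*m);                         % 1/sqrt(i_num*i_den)
H = zeros(size(d));
for k = 1:numel(d)
    q = a .* exp(-(d(k) - c).^2/(2*s^2));  % (G_s * weighted deltas) at d
    p = q / sum(q);                        % divide by (G_s * K)(d)
    nz = p > 0;                            % skip underflowed terms
    H(k) = -sum(p(nz) .* log(p(nz)));      % entropy at this dyad size
end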

About 4 hours and something like 34 steps later, I had finally managed to
work it down to three convolutions, and I think this is as good as it's
going to get:

H(d) = -\frac{W_2(d)}{W(d)} + \frac{W_3(d)}{2 s^2 W(d)} + \log W(d)

So each of those W_n(d)'s is a different preconvolved vector that you can
manipulate to regenerate the original H(d) curve, keeping us in O(n log n)
time. All of them should have the same minima and maxima as the final HE
curve, but look slightly different themselves. This is the result for
sqrt(n*d) widths, and I'm not going to try to work it out for
mediant-to-mediant widths.
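
To spell out what I think those preconvolved vectors have to be (this is my
back-reading from the end result above, with an unnormalized Gaussian, so if
my derivation is wrong these are wrong too): W is the 1/sqrt(n*d) spike
train convolved with the Gaussian, W_2 is the same spike train with each
spike scaled by an extra log of its own height, and W_3 is the plain spike
train convolved with x^2 times the Gaussian. A MATLAB sketch, with every
specific number a placeholder:

% Three-convolution form on a uniform cents grid. W, W2, W3 below are my
% guess at the preconvolved vectors, consistent with the end result.
s = 17; step = 0.5;
d = 0:step:1200;
[n, m] = meshgrid(1:80, 1:80);
keep = gcd(n, m) == 1;
n = n(keep); m = m(keep);
c = 1200*log2(n./m);                       % cents(i)
a = 1./sqrt(n.*m);                         % 1/sqrt(n*d) weights
K = zeros(size(d)); K2 = K;                % weighted impulse trains
for i = 1:numel(c)
    j = round(c(i)/step) + 1;
    if j >= 1 && j <= numel(d)             % ratios off the grid are dropped
        K(j)  = K(j)  + a(i);              % spikes are snapped to the grid,
        K2(j) = K2(j) + a(i)*log(a(i));    % so this is only grid-exact
    end
end
x  = -600:step:600;                        % kernel support in cents
g  = exp(-x.^2/(2*s^2));                   % unnormalized Gaussian
W  = conv(K,  g,        'same');           % (G_s * K)(d)
W2 = conv(K2, g,        'same');
W3 = conv(K,  x.^2.*g,  'same');           % x^2-weighted Gaussian kernel
H  = -W2./W + W3./(2*s^2*W) + log(W);      % the end result above
% conv() here is brute force; an FFT-based convolution keeps it O(n log n).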

So now that I've spent all of this time doing that, this is what I have to
ask all of you:
- Before realizing the error in my proof, I erroneously stated that the one
simple convolution would yield HE exactly. Now it looks like we're up to
three, but we should now have a one-to-one correspondence between this and
good ol' legit classic HE.
- Well, that is, we have a correspondence, if I didn't screw it up again.
- Well, that is, we have a correspondence, if round-off error doesn't
accumulate differently than Paul's code.
- Well, that is, we have a correspondence, if I manage to even figure out
how to code this whole thing properly.
- Well, that is, we have a correspondence, if I ever somehow come across a
free 8 hour block to actually code this in MATLAB to begin with.
- And then there's the issue of whether all HE data plots are themselves
perfectly correlated, which Carl claims yes to, but Paul's code says no.

So my question to all of you is: what is the purpose of me doing any of this
at all, when I already have a perfectly good model that can probably be
computed 1,000 times as fast, is already finished and coded, appears
well-behaved, yields all of the proper minima and maxima and general
characteristics of the curve that everyone was trying to engineer HE to give
anyway, looks almost identical to a scaled version of HE, certainly holds
some psychoacoustic validity even in its embryonic stage, enables us to
start exploring tetrads and such straight away, naturally ties into the
proposed "fuzzy regular mapping" structure I've been talking about, etc?

The 1-convolution model, which I'll go back to calling "DC" again, is an
approximation of the 3-convolution model, which, assuming I haven't screwed
up tonight's work, is what HE is actually doing under the hood. Exactly how
good an approximation it is could be turned into a well-defined mathematical
statement. So DC is obviously related to HE in some sense. Call it "a fast
HE approximation but not actually HE" if you want.

Rather than waste any more time trying to replicate HE, tomorrow I'm going
to post some examples of what the model spits out with 1/(n*d) heights, a
Farey series generator, and s=1.0%.
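
For concreteness, here's roughly what I mean by the generator. My reading of
"Farey series" here is the reduced ratios between 1/1 and 2/1 with
denominator up to some order N, and I'm taking s = 1.0% to mean a frequency
smear of 1200*log2(1.01), about 17.2 cents. The order and grid spacing below
are arbitrary:

% Farey-style seed with 1/(n*d) heights: reduced ratios n/d between 1/1
% and 2/1 with d <= N, one spike of height 1/(n*d) at each ratio's cents
% value. N and the grid spacing are placeholders.
N = 80; step = 0.5;
cents = 0:step:1200;
basis = zeros(size(cents));
for den = 1:N
    for num = den:2*den                    % keeps n/d between 1/1 and 2/1
        if gcd(num, den) == 1
            j = round(1200*log2(num/den)/step) + 1;
            basis(j) = basis(j) + 1/(num*den);
        end
    end
end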

-Mike

genewardsmith <genewardsmith@sbcglobal.net>

1/29/2011 12:22:18 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> So now that I've spent all of this time doing that, this is what I have to
> ask all of you:
> - Before realizing the error in my proof, I erroneously stated that the one
> simple convolution would yield HE exactly. Now it looks like we're up to
> three, but we should now have a one-to-one correspondence between this and
> good ol' legit classic HE.
> - Well, that is, we have a correspondence, if I didn't screw it up again.
> - Well, that is, we have a correspondence, if round-off error doesn't
> accumulate differently than Paul's code.
> - Well, that is, we have a correspondence, if I manage to even figure out
> how to code this whole thing properly.
> - Well, that is, we have a correspondence, if I ever somehow come across a
> free 8 hour block to actually code this in MATLAB to begin with.
> - And then there's the issue of whether all HE data plots are themselves
> perfectly correlated, which Carl claims yes to, but Paul's code says no.
>
> So my question to all of you is: what is the purpose of me doing any of this
> at all, when I already have a perfectly good model that can probably be
> computed 1,000 times as fast, is already finished and coded, appears
> well-behaved, yields all of the proper minima and maxima and general
> characteristics of the curve that everyone was trying to engineer HE to give
> anyway, looks almost identical to a scaled version of HE, certainly holds
> some psychoacoustic validity even in its embryonic stage, enables us to
> start exploring tetrads and such straight away, naturally ties into the
> proposed "fuzzy regular mapping" structure I've been talking about, etc?

Beats me. But I'm not Paul or Carl, I'm the guy who got you convolving ?(x), which is obviously evil. A function quick to compute and which Maple or Pari could work with would mean other people might find a use for the results. Does your web page, wherever it is, lay out what, exactly, you are convolving with a Gaussian?

Mike Battaglia <battaglia01@gmail.com>

1/29/2011 6:19:20 PM

On Sat, Jan 29, 2011 at 3:22 PM, genewardsmith
<genewardsmith@sbcglobal.net> wrote:
>
> Beats me. But I'm not Paul or Carl, I'm the guy who got you convolving ?(x), which is obviously evil. A function quick to compute and which Maple or Pari could work with would mean other people might find a use for the results. Does your web page, wherever it is, lay out what, exactly, you are convolving with a Gaussian?

The original model was to convolve a "basis kernel" with a Gaussian.
At one point I thought that HE itself, as derived from the entropy
summation, could be represented as the convolution of a certain
weird-looking kernel with a Gaussian, but what I'm saying in my above
message is that I screwed it up pretty bad.

Outside of trying to apply the original convolution technique to speed
up HE exactly, at one point I was messing with the idea of just making
the basis kernel actually equal to a bunch of impulses of height n*d.
This yields something that looks almost exactly like harmonic entropy,
and I thought for a while that it was. Turns out that it's not, but
it's really close. There do seem to be strict bounds on the
approximation; e.g. using n*d heights yields a reasonable curve, but
with sqrt(n*d) heights things don't converge correctly. I think it was
1/log(n*d+1) heights that make it look very similar to HE and is
computable in under a second.
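
To spell out the idea in code (this is a throwaway sketch with placeholder
numbers, not the actual code I've been running): drop a weighted impulse at
each seed ratio's cents value, convolve once with a Gaussian, and that's the
whole model.

% The whole model in one pass: weighted impulses at each seed ratio's
% cents value, then a single Gaussian smear. Heights, seed, s, and grid
% here are placeholders.
s = 17; step = 0.5;                        % smearing width in cents
cents = 0:step:1200;
[n, m] = meshgrid(1:50, 1:50);
keep = gcd(n, m) == 1 & n >= m & n <= 2*m; % reduced ratios in [1/1, 2/1]
n = n(keep); m = m(keep);
basis = zeros(size(cents));
for i = 1:numel(n)
    j = round(1200*log2(n(i)/m(i))/step) + 1;
    basis(j) = basis(j) + 1/log(n(i)*m(i) + 1);   % e.g. 1/log(n*d+1) heights
end
x = -200:step:200;                         % kernel support in cents
g = exp(-x.^2/(2*s^2));
DC = conv(basis, g, 'same');               % the one convolution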

BTW, I almost sent you a message about this last night, as I was so
frustrated with the derivation, to see if you could help me out with
it. In the middle of my message I figured out a way to keep going and
finished it, but now I'm paranoid that I've screwed something up
again. As a PS in this message, I asked if you could send a version of
?(2^x) that went from -1 to 2, but you didn't get the PS, because I
didn't send the message.

My MATLAB-fu isn't strong enough to figure out the interpolation for
the exponential version, so if you can send it again that would be
appreciated.

-Mike