Summing up the Convolution-HE stuff

🔗Mike Battaglia <battaglia01@gmail.com>

1/26/2011 5:06:07 AM

Let me make sure that I have this right before I go any further:

- The rational numbers are uniformly distributed if you integrate
with respect to dx.
- The fact that the Farey/Tenney numbers are NOT uniformly distributed
if you integrate with respect to dx is due to the fact that these
series are SUBSETS of the rationals, and as such their distribution
isn't going to be uniform:
http://linas.org/math/chap-rat/chap-rat.html
- Therefore, when we say that intervals have 1/d widths or 1/sqrt(n*d)
widths, we're talking about their widths with respect to some series,
NOT with respect to the actual distribution of the rationals,
integrated with respect to dc.
- How uniform the Farey numbers end up being seems to be tied to
the Riemann hypothesis, and although this is really interesting, in
light of the above, it doesn't really matter for this model.
- I have proven that as N goes to infinity, HE is better and better
modeled using the convolution of a Gaussian and a bunch of impulses:
http://www.mikebattagliamusic.com/music/HEConvolutionTheorem.html

So in light of this, my DC model was the same thing as HE all along.
By giving the impulses in DC different heights, you can
approximate the behavior in the limit of different subsets of the
rationals (series, enumerators, whatever). They're just two related
models that happen to converge at infinity, or rather they may or may
not converge because of some weird facet of number theory that
involves the Riemann hypothesis, but who cares. So I'm not going to
call it DC anymore, I'm just going to call it HE from now on.

So this is the way forward then:
1) Center an impulse around each rational.
2) Give these impulses heights that make some kind of logical sense.
If you want to go with the Farey series, give them 1/d heights and you
get Thomae's function. If you like the way the Tenney series looks,
give them 1/sqrt(n*d) heights.
3) Convolve with a Gaussian.
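
To make that concrete, here is a minimal MATLAB sketch of steps 1-3
(the 1-cent grid, the N=40 bound, and the 1/(n*d) heights I settle on
below are just example choices, not the only ones):

    % Steps 1-3 as a sketch: impulses at each rational's cents position,
    % 1/(n*d) heights, then a Gaussian convolution.
    cents = 0:1:1200;                   % 1-cent grid over one octave
    train = zeros(size(cents));
    for d = 1:40
      for n = d:2*d                     % ratios n/d in [1, 2]
        if gcd(n, d) == 1
          c = round(1200 * log2(n/d));  % nearest grid bin
          train(c + 1) = train(c + 1) + 1/(n*d);
        end
      end
    end
    s = 17;                             % s of about 1%, i.e. ~17 cents
    g = exp(-(-100:100).^2 / (2*s^2));  % truncated Gaussian kernel
    curve = conv(train, g, 'same');     % step 3: the smoothed curve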

I have opted to give them 1/n*d heights. This is because I left
something out about #2 - to actually make it equivalent to Paul's HE
calculation, you don't actually give them 1/sqrt(n*d) heights, you
give them something like N/sqrt(n*d) * log(N/sqrt(n*d)) heights. This
is complicated, and since it's all pretty much arbitrary and
basically still a scaled version of 1/n*d, why not just do that? I
think that 1/log(n*d+1) might be a better estimate for what's going
on, but might as well keep it simple.

If you do that, you get this, with all of the minima and maxima labeled:

http://www.mikebattagliamusic.com/music/HEs1.0ndN40.png

Keep in mind - I'm using the n*d approximation, which is more related
to the Tenney series widths, but I'm using the Farey series to
actually seed the entropy calculation. That is - I'm using the Farey
series to come up with the list of rationals that I'm going to use,
and then I'm giving them Tenney-ish heights. In this case, it doesn't
matter what series you use, so long as it generates rationals
relatively evenly - what matters is the heights you choose. The Farey
series can be calculated in real time, so that's good.
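
For anyone who wants to reproduce that, the standard next-term
recurrence is what makes the Farey series cheap to compute; a sketch
(the order N and the [0, 1] range are just the textbook setup):

    % Sketch: enumerate the Farey sequence of order N on [0, 1] with the
    % next-term recurrence: given consecutive terms a/b < c/d, the next
    % term is p/q where k = floor((N + b)/d), p = k*c - a, q = k*d - b.
    N = 40;
    a = 0; b = 1; c = 1; d = N;         % first two terms: 0/1, 1/N
    terms = [a b; c d];
    while c < d                         % stop once we reach 1/1
      k = floor((N + b) / d);
      p = k*c - a; q = k*d - b;
      a = c; b = d; c = p; d = q;
      terms(end+1, :) = [c d];          %#ok<AGROW>
    end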

The same should apply to the triadic case - the whole process of
coming up with an enumerator for the triads and then giving them
Voronoi cells will, in and of itself, cause a particular pattern of
"heights" for the triads; however, the triads should themselves be
just as evenly distributed as the rationals. Rather than waste time
playing with different series to find the most sensible looking one,
we can just work backwards and pick sensible heights to begin with.

So let's run everything above through the good ol Lumma shredder, and
assuming it passes that acid test, let's move onto triads and then,
finally, tetrads.

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

1/26/2011 5:19:21 AM

On Wed, Jan 26, 2011 at 8:06 AM, Mike Battaglia <battaglia01@gmail.com> wrote:
> - Therefore, when we say that intervals have 1/d widths or 1/sqrt(n*d)
> widths, we're talking about their widths with respect to some series,
> NOT with respect to the actual distribution of the rationals,
> integrated with respect to dc.

Typo - meant with respect to dc here.

Also, one last thing: if everyone is in agreement that this is a
sensible way to proceed - to try and develop a well-behaved function
that generalizes what HE is doing, rather than delving into the annals
of number theory to study the behavior of the rationals at infinite
scales - and if I haven't missed anything major and stupid, then all
of this can be done really quickly!

I keep talking about some mysterious speedup, and I've sent this to
Carl offlist already, but the idea is that by doing this, we're
basically "Gaussian blurring" the impulses in 1d space rather than 2d
space. Gaussian blurring = low pass filtering, so all of the high
frequencies get shut down. "Get shut down" is a statement that can be
made mathematically rigorous by looking at the magnitude response of
the FFT of the computed entropy curve.

Since this is the case, we can compute HE at like 10 cent resolution
and then use sinc interpolation to recover the original curve with
something like 0.01% error; whatever the exact number is, I've
overlaid the two on top of each other without seeing any visible
difference.
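
If you want to check that yourself, MATLAB's interpft does exactly
this kind of FFT-based (i.e. periodic sinc) interpolation; a sketch,
where 'coarse' is assumed to hold the entropy curve sampled every 10
cents:

    % Sketch: recover a 1-cent curve from a 10-cent one. interpft does
    % periodic sinc interpolation, so treat the curve as one period.
    fine = interpft(coarse, 10 * numel(coarse));
    % overlay 'fine' on a directly computed 1-cent curve to see the error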

It is also the case that with this approach, it is no longer
necessary to use Farey series of N=80, 90, etc., since we're
pre-programming the heights into the equation. Empirically speaking,
N=40 seems to work really well, and N=20 seems to work well enough
except for some exaggerated entropy spikes near the maxima around 1/1,
3/2, etc. So this will make life easy for triads and tetrads, where
the computational complexity of all of this will really take off.

I'll go further in detail once I get the first part worked out.

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

1/26/2011 11:06:31 AM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> Let me make sure that I have this right before I go any further:
>
> - The rational numbers are uniformly distributed if you integrate
> with respect to dx.
> - The fact that the Farey/Tenney numbers are NOT uniformly distributed
> if you integrate with respect to dx is due to the fact that these
> series are SUBSETS of the rationals, and as such their distribution
> isn't going to be uniform:

I wouldn't put it like that. We get a uniform measure dx if we use the equal divisions En of the interval you used in email. That is,
[0 1/n 2/n ... n/n], and take the limit. This is probably best done by way of powers, so they fit inside each other, as for instance with the dyadic rationals. We get another measure, d?, if we take a limit instead of Farey sequences. You can call either measure "correct"; it's just a matter of how you define the measure; however, dx makes the most sense for the reals and hence the most sense in general. You wouldn't want to do geometry with a d? measure.

> The same should apply to the triadic case - the whole process of
> coming up with an enumerator for the triads and then giving them
> Voronoi cells will, in and of itself, cause a particular pattern of
> "heights" for the triads; however, the triads should themselves be
> just as evenly distributed as the rationals.

Good! Sounds worth pursuing.

🔗genewardsmith <genewardsmith@sbcglobal.net>

1/26/2011 11:09:45 AM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> Also, one last thing: if everyone is in agreement that this is a
> sensible way to proceed - to try and develop a well-behaved function
> that generalizes what HE is doing, rather than delving into the annals
> of number theory to study the behavior of the rationals at infinite
> scales - and if I haven't missed anything major and stupid, then all
> of this can be done really quickly!

I've always thought that was the way to go, since I didn't know why you were delving into those things anyway.

🔗Carl Lumma <carl@lumma.org>

1/26/2011 11:13:54 AM

>So let's run everything above through the good ol Lumma shredder, and
>assuming it passes that acid test, let's move onto triads and then,
>finally, tetrads.

The Lumma shredder likes tabular data. 1-cent increments, s=1% please.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/26/2011 12:43:06 PM

On Wed, Jan 26, 2011 at 8:19 AM, Mike Battaglia <battaglia01@gmail.com> wrote:
> On Wed, Jan 26, 2011 at 8:06 AM, Mike Battaglia <battaglia01@gmail.com> wrote:
>> - Therefore, when we say that intervals have 1/d widths or 1/sqrt(n*d)
>> widths, we're talking about their widths with respect to some series,
>> NOT with respect to the actual distribution of the rationals,
>> integrated with respect to dc.
>
> Typo - meant with respect to dc here.

Wow, I can't believe I just did this. With respect to dc, I mean. Just
kidding. dx.

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

1/26/2011 12:56:00 PM

On Wed, Jan 26, 2011 at 2:06 PM, genewardsmith
<genewardsmith@sbcglobal.net> wrote:
>
> I wouldn't put it like that. We get a uniform measure dx if we use the equal divisions En of the interval you used in email. That is,
> [0 1/n 2/n ... n/n], and take the limit. This is probably best done by way of powers, so they fit inside each other, as for instance with the dyadic rationals. We get another measure, d?, if we take a limit instead of Farey sequences. You can call either measure "correct"; it's just a matter of how you define the measure; however, dx makes the most sense for the reals and hence the most sense in general. You wouldn't want to do geometry with a d? measure.

What do you mean if we take a limit instead of Farey sequences? A limit of what?

Would it be more correct to say the widths of different subsets of the
rationals, with respect to the dx measure specifically, vary depending
on the type of subset used? e.g. the Farey series has a different set
of widths than the Tenney series, and they both have a different set
of widths than the Stern-Brocot tree, etc.

In general I think I could devise a series with any set of widths I
wanted, and just as long as for two consecutive ratios a/b and c/d,
ad-bc = 1, we'll never get unreduced fractions. All of these series are
basically subsets of the Stern-Brocot tree anyway that are distributed
more evenly. Maybe a good exercise would be to find a series in which
more complex fractions have a larger width than simpler ones, but
where ad-bc still equals 1. Anyways...
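
For the record, the mediant is what preserves that unimodular
property, which is why everything generated this way stays in lowest
terms; a quick MATLAB check, with the sign convention b*c - a*d = 1
for neighbors a/b < c/d:

    % Sketch: the mediant of unimodular neighbors a/b < c/d is
    % (a+c)/(b+d), and both new adjacent pairs stay unimodular, since
    % b*(a+c) - a*(b+d) = b*c - a*d = 1, and likewise on the right.
    a = 1; b = 2; c = 2; d = 3;        % neighbors 1/2 < 2/3
    m = [a + c, b + d];                % mediant 3/5
    disp(b*m(1) - a*m(2))              % left pair:  prints 1
    disp(m(2)*c - m(1)*d)              % right pair: prints 1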

> > Also, one last thing: if everyone is in agreement that this is a
> > sensible way to proceed - to try and develop a well-behaved function
> > that generalizes what HE is doing, rather than delving into the annals
> > of number theory to study the behavior of the rationals at infinite
> > scales - and if I haven't missed anything major and stupid, then all
> > of this can be done really quickly!
>
> I've always thought that was the way to go, since I didn't know why you were delving into those things anyway.

Partly at Paul's behest; he thought that what I was doing was more or
less an attempt to generate a "curve of a certain shape" and wasn't as
deep as the actual HE model, which he claims spontaneously predicts
the complexity of intervals due to the distribution of the rationals.
As it seemed like we were saying above, this isn't true; it really
reflects the distribution of the Farey numbers, and/or the Tenney
numbers, which is why you can get 1/d widths for one series and
1/sqrt(n*d) widths for another. I don't know if Paul was referring to
using ?(x) as a measure or if he'd really explored it, but in general,
it seems like he was mistaken about this. Maybe a different way to
formalize it would have been to integrate the Gaussian in his model
with respect to ?(x).

Now that I've worked out all the math, I'm pretty confident in saying
that I'm doing something that really is harmonic entropy, or related
to harmonic entropy.

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

1/26/2011 1:05:07 PM

On Wed, Jan 26, 2011 at 3:56 PM, Mike Battaglia <battaglia01@gmail.com> wrote:
>
> In general I think I could devise a series with any set of widths I
> wanted, and just as long as for two consecutive ratios a/b and c/d,
> ad-bc = 1, we'll never get unreduced fractions. All of these series are
> basically subsets of the Stern-Brocot tree anyway that are distributed
> more evenly. Maybe a good exercise would be to find a series in which
> more complex fractions have a larger width than simpler ones, but
> where ad-bc still equals 1. Anyways...

Actually, in light of this, a good way to go would be to find a series
that is as evenly distributed as possible, but is still a unimodular
series that passes through all of the simple ratios first. This would
make for the greatest ease of computing. Since we're going to
artificially seed their heights with n*d anyway, an evenly
distributed series is exactly what we want.

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

1/26/2011 1:28:21 PM

On Wed, Jan 26, 2011 at 2:13 PM, Carl Lumma <carl@lumma.org> wrote:
>
> >So let's run everything above through the good ol Lumma shredder, and
> >assuming it passes that acid test, let's move onto triads and then,
> >finally, tetrads.
>
> The Lumma shredder likes tabular data. 1-cent increments, s=1% please.
>
> -Carl

OK, here's 3 examples:
/tuning-math/files/MikeBattaglia/

These correspond to Farey series of N=30, 40, and 80. Keep in mind
that although I'm using the Farey series to seed the model, since
it can be computed really quickly, I have the model set up so that the
impulses have Tenney-ish heights.

Keep in mind when I send this to you that, last time I did this, s=1%
in the impulse model corresponded to I think something like s=1.2% in
HE. After looking at the equations it's probably because of what
happens when you multiply G(d) .*log(G(d)). When you check this,
compare to s=1.2% in HE as well. Then we'll find a way to normalize
it.

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

1/26/2011 1:30:00 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> What do you mean if we take a limit instead of Farey sequences? A limit of what?

A limit of the Farey sequences. If you take the successive Farey sequences of order n and go to the limit, you get a measure distinct from the usual one. If you are integrating f(x) on the unit interval, you take the limit of the average of f evaluated at the Farey fractions over [0 1]. That's not the usual measure.

> Would it be more correct to say the widths of different subsets of the
> rationals, with respect to the dx measure specifically, vary depending
> on the type of subset used? e.g. the Farey series has a different set
> of widths than the Tenney series, and they both have a different set
> of widths than the Stern-Brocot tree, etc.

Not sure what you are saying here.

> Partly at Paul's behest; he thought that what I was doing was more or
> less an attempt to generate a "curve of a certain shape" and wasn't as
> deep as the actual HE model, which he claims spontaneously predicts
> the complexity of intervals due to the distribution of the rationals.

I've never been convinced of that. And I'd still like to see what you get by taking the derivative of various Weierstrass transforms of ?. One difference we can predict right off is that it is periodic, assigning the same octave-equivalent numbers to a ratio, its inversion, and its octave translates. That, I am thinking, could be useful.

🔗Mike Battaglia <battaglia01@gmail.com>

1/26/2011 1:39:40 PM

On Wed, Jan 26, 2011 at 4:30 PM, genewardsmith
<genewardsmith@sbcglobal.net> wrote:
>
> > What do you mean if we take a limit instead of Farey sequences? A limit of what?
>
> A limit of the Farey sequences. If you take the successive Farey sequences of order n and go to the limit, you get a measure distinct from the usual one. If you are integrating f(x) on the unit interval, you take the limit of the average of f evaluated at the Farey fractions over [0 1]. That's not the usual measure.

OK, I see. So then this is just a more formal way to say what I've
been trying to say: while the Farey numbers do converge on the
rationals, their widths with respect to dx don't converge on the
widths of the rationals with respect to dx. Rather, if you use the
limit of the widths of the Farey sequences, you simply get one
measure, and it's this measure that actually generates the behavior of
the HE curve. If you use the limit of the widths of the Tenney
sequence, you get a different measure, and this measure generates a
better looking HE curve. If you use the limit of the widths of the
Stern-Brocot tree, you get yet another measure, etc.

> > Partly at Paul's behest; he thought that what I was doing was more or
> > less an attempt to generate a "curve of a certain shape" and wasn't as
> > deep as the actual HE model, which he claims spontaneously predicts
> > the complexity of intervals due to the distribution of the rationals.
>
> I've never been convinced of that.

But it does spontaneously predict the complexity of intervals due to
the distribution of the Farey numbers, right? Or whatever series you
decide to use.

> And I'd still like to see what you get by taking the derivative of various Weierstrass transforms of ?. One difference we can predict right off is that it is periodic, assigning the same octave-equivalent numbers to a ratio, its inversion, and its octave translates. That, I am thinking, could be useful.

I need to find an algorithm to compute it. I found a MATLAB program to
do it and spent a long time on it, and couldn't get it to work
satisfactorily. I think the problem is that if I'm in MATLAB and I
compute ?(x) for 0:0.1:1200, I end up basically computing it for only
rationals. So I need some function that converges to ?(x) that I can
use.

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

1/26/2011 1:57:53 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> I need to find an algorithm to compute it. I found a MATLAB program to
> do it and spent a long time on it, and couldn't get it to work
> satisfactorily. I think the problem is that if I'm in MATLAB and I
> compute ?(x) for 0:0.1:1200, I end up basically computing it for only
> rationals. So I need some function that converges to ?(x) that I can
> use.

Do you have a routine for computing the continued fraction of a floating point number? Starting from that, it's easy to compute ?(x).
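
For reference, a sketch of that in MATLAB, using the standard series
?([0; a1, a2, ...]) = 2 * sum over k of (-1)^(k+1) * 2^-(a1+...+ak)
(the function name and iteration caps are just illustrative):

    function y = minkowski_q(x)
    % Sketch: ?(x) for x in [0, 1], computed from the continued
    % fraction of x one partial quotient at a time.
      y = 0; s = 0; sgn = 1; frac = x;
      for k = 1:64                  % 64 partial quotients is plenty
        if frac == 0, break; end
        frac = 1 / frac;
        a = floor(frac);
        frac = frac - a;
        s = s + a;
        if s > 1074, break; end     % 2^-s underflows to zero past here
        y = y + sgn * 2^(1 - s);    % the 2 * (-1)^(k+1) * 2^-s term
        sgn = -sgn;
      end
    end

As a spot check, minkowski_q(3/5) returns 0.625, i.e. the dyadic
rational 5/8.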

🔗Mike Battaglia <battaglia01@gmail.com>

1/26/2011 2:01:00 PM

On Wed, Jan 26, 2011 at 4:57 PM, genewardsmith
<genewardsmith@sbcglobal.net> wrote:
>
> --- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> > I need to find an algorithm to compute it. I found a MATLAB program to
> > do it and spent a long time on it, and couldn't get it to work
> > satisfactorily. I think the problem is that if I'm in MATLAB and I
> > compute ?(x) for 0:0.1:1200, I end up basically computing it for only
> > rationals. So I need some function that converges to ?(x) that I can
> > use.
>
> Do you have a routine for computing the continued fraction of a floating point number? Starting from that, it's easy to compute ?(x).

I do, but like I said, all of the numbers we use are going to be
rational. Let's say we go between 0 and 1 with a resolution of 0.0001
- the best you're going to do is get a rational number that's 4
decimal places out. Is that still acceptable?

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

1/26/2011 2:45:06 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> I do, but like I said, all of the numbers we use are going to be
> rational. Let's say we go between 0 and 1 with a resolution of 0.0001
> - the best you're going to do is get a rational number that's 4
> decimal places out. Is that still acceptable?

Since the dyadic rational this computes is precisely ?(x) for that value, of course.

🔗genewardsmith <genewardsmith@sbcglobal.net>

1/26/2011 2:52:17 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> I do, but like I said, all of the numbers we use are going to be
> rational. Let's say we go between 0 and 1 with a resolution of 0.0001
> - the best you're going to do is get a rational number that's 4
> decimal places out. Is that still acceptable?

I could send you a list of ?(i/10000) for i from 1 to 10000 if that would help.

🔗Mike Battaglia <battaglia01@gmail.com>

1/26/2011 2:54:17 PM

That would be nice. Can you put it in a CSV?

-Mike

On Wed, Jan 26, 2011 at 5:52 PM, genewardsmith
<genewardsmith@sbcglobal.net> wrote:
>
> --- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> > I do, but like I said, all of the numbers we use are going to be
> > rational. Let's say we go between 0 and 1 with a resolution of 0.0001
> > - the best you're going to do is get a rational number that's 4
> > decimal places out. Is that still acceptable?
>
> I could send you a list of ?(i/10000) for i from 1 to 10000 if that would help.

🔗genewardsmith <genewardsmith@sbcglobal.net>

1/26/2011 3:03:31 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> That would be nice. Can you put it in a CSV?

I don't really know the specification of a CSV, but Maple output will simply consist of values separated by commas, and I could put each value on its own separate line, with or without terminating commas. How many digits of accuracy?

🔗Mike Battaglia <battaglia01@gmail.com>

1/26/2011 3:07:44 PM

On Wed, Jan 26, 2011 at 6:03 PM, genewardsmith
<genewardsmith@sbcglobal.net> wrote:
>
> --- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
> >
> > That would be nice. Can you put it in a CSV?
>
> I don't really know the specification of a CSV, but Maple output will simply consist of values separated by commas, and I could put each value on its own separate line, with or without terminating commas. How many digits of accuracy?

CSV stands for "comma-separated values," so there you go. How many
digits of accuracy - as many as you can cram into the file, doesn't
matter here.
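
For what it's worth, a sketch of dumping such a table straight from
MATLAB, assuming the minkowski_q sketch from earlier in the thread:

    % Sketch: tabulate ?(i/10000), one value per line, full precision.
    xs = (1:10000)' / 10000;
    qs = arrayfun(@minkowski_q, xs);
    fid = fopen('minkowski.csv', 'w');
    fprintf(fid, '%.17g\n', qs);
    fclose(fid);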

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

1/26/2011 3:08:45 PM

Actually, to do this right, could you do all of the above for
?(log_2(x)) instead? The thing is that we're convolving in log space,
not linear space.

-Mike

On Wed, Jan 26, 2011 at 6:07 PM, Mike Battaglia <battaglia01@gmail.com> wrote:
> On Wed, Jan 26, 2011 at 6:03 PM, genewardsmith
> <genewardsmith@sbcglobal.net> wrote:
>>
>> --- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>> >
>> > That would be nice. Can you put it in a CSV?
>>
>> I don't really know the specification of a CSV, but Maple output will simply consist of values separated by commas, and I could put each value on its own separate line, with or without terminating commas. How many digits of accuracy?
>
> CSV stands for "comma-separated values," so there you go. How many
> digits of accuracy - as many as you can cram into the file, doesn't
> matter here.
>
> -Mike
>

🔗Mike Battaglia <battaglia01@gmail.com>

1/26/2011 3:18:29 PM

On Wed, Jan 26, 2011 at 6:08 PM, Mike Battaglia <battaglia01@gmail.com> wrote:
> Actually, to do this right, could you do all of the above for
> ?(log_2(x)) instead? The thing is that we're convolving in log space,
> not linear space.

Or I guess that it would be ?(2^x). Agh. But this brings up another
point, which is - how do we work this out?

So check out Thomae's function, which I keep coming back to because
it's almost the projective version of the Lambdoma:

http://en.wikipedia.org/wiki/Euclid%27s_orchard

This is just between 0 and 1. So we want to map the intervals between
[0,1] to the intervals between [-Inf, Inf]? So I guess we'd have to
set up some kind of hyperbolic curve for this?

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

1/26/2011 3:32:36 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> Actually, to do this right, could you do all of the above for
> ?(log_2(x)) instead? The thing is that we're convolving in log space,
> not linear space.

Can we do both? I'm more interested in the results for linear space.

🔗genewardsmith <genewardsmith@sbcglobal.net>

1/26/2011 3:48:44 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> So check out Thomae's function, which I keep coming back to because
> it's almost the projective version of the Lambdoma:
>
> http://en.wikipedia.org/wiki/Euclid%27s_orchard
>
> This is just between 0 and 1. So we want to map the intervals between
> [0,1] to the intervals between [-Inf, Inf]? So I guess we'd have to
> set up some kind of hyperbolic curve for this?

Well, hell. Sure, tanh would work, but why all this complication? I really think ?(x) itself is the most interesting: it's octave equivalent by nature.

🔗genewardsmith <genewardsmith@sbcglobal.net>

1/26/2011 4:39:37 PM

--- In tuning-math@yahoogroups.com, "genewardsmith" <genewardsmith@...> wrote:

> Can we do both? I'm more interested in the results for linear space.
>

Ah hell, what am I saying? It won't work for an OE score. I'll send something shortly.

🔗Mike Battaglia <battaglia01@gmail.com>

1/26/2011 4:42:21 PM

On Wed, Jan 26, 2011 at 6:32 PM, genewardsmith
<genewardsmith@sbcglobal.net> wrote:
>
> --- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
> >
> > Actually, to do this right, could you do all of the above for
> > ?(log_2(x)) instead? The thing is that we're convolving in log space,
> > not linear space.
>
> Can we do both? I'm more interested in the results for linear space.

Sure, but the result won't be the same as harmonic entropy. The log
space convolution is what produces the beauty of the model. Otherwise
you would end up with a result whereby wider intervals are more
sensitive to mistuning than narrower intervals. I don't think this is
true, but there might be a correlation between more -complex-
intervals being more sensitive to mistuning than simpler ones.
Or less sensitive. But I don't think wideness has anything to do with
it.

If you send me both we can see what the model spits out anyways.

-Mike

🔗Carl Lumma <carl@lumma.org>

1/27/2011 1:49:30 AM

>OK, here's 3 examples:
>/tuning-math/files/MikeBattaglia/
>
>These correspond to Farey series of N=30, 40, and 80. Keep in mind
>that although I'm using the Farey series to seed the model, since
>it can be computed really quickly, I have the model set up so that the
>impulses have Tenney-ish heights.
>
>Keep in mind when I send this to you that, last time I did this, s=1%
>in the impulse model corresponded to I think something like s=1.2% in
>HE. After looking at the equations it's probably because of what
>happens when you multiply G(d) .*log(G(d)). When you check this,
>compare to s=1.2% in HE as well. Then we'll find a way to normalize
>it.

Here's a plot

http://i.min.us/ibTr3W.png

going to bed... more tomorrow. -C.

🔗Mike Battaglia <battaglia01@gmail.com>

1/27/2011 1:07:18 PM

On Thu, Jan 27, 2011 at 4:49 AM, Carl Lumma <carl@lumma.org> wrote:
>
> Here's a plot
>
> http://i.min.us/ibTr3W.png
>
> going to bed... more tomorrow. -C.

What is it, the difference between this and normal HE?

-Mike

🔗Carl Lumma <carl@lumma.org>

1/27/2011 1:40:08 PM

At 01:07 PM 1/27/2011, you wrote:
>On Thu, Jan 27, 2011 at 4:49 AM, Carl Lumma <carl@lumma.org> wrote:
>>
>> Here's a plot
>>
>> http://i.min.us/ibTr3W.png
>>
>> going to bed... more tomorrow. -C.
>
>What is it, the difference between this and normal HE?

One plotted against the other, yep. -Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/27/2011 2:17:22 PM

On Thu, Jan 27, 2011 at 4:40 PM, Carl Lumma <carl@lumma.org> wrote:
>
> At 01:07 PM 1/27/2011, you wrote:
> >
> >What is it, the difference between this and normal HE?
>
> One plotted against the other, yep. -Carl

I would assume that stems from this:

> I have opted to give them 1/n*d heights. This is because I left
> something out about #2 - to actually make it equivalent to Paul's HE
> calculation, you don't actually give them 1/sqrt(n*d) heights, you
> give them something like N/sqrt(n*d) * log(N/sqrt(n*d)) heights. This
> is complicated, and since it's all pretty much arbitrary and
> basically still a scaled version of 1/n*d, why not just do that? I
> think that 1/log(n*d+1) might be a better estimate for what's going
> on, but might as well keep it simple.

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

1/27/2011 11:21:54 PM

On Thu, Jan 27, 2011 at 5:17 PM, Mike Battaglia <battaglia01@gmail.com> wrote:
> On Thu, Jan 27, 2011 at 4:40 PM, Carl Lumma <carl@lumma.org> wrote:
>>
>> At 01:07 PM 1/27/2011, you wrote:
>> >
>> >What is it, the difference between this and normal HE?
>>
>> One plotted against the other, yep. -Carl
>
> I would assume that stems from this:
>
>> I have opted to give them 1/n*d heights. This is because I left
>> something out about #2 - to actually make it equivalent to Paul's HE
>> calculation, you don't actually give them 1/sqrt(n*d) heights, you
>> give them something like N/sqrt(n*d) * log(N/sqrt(n*d)) heights. This
>> is complicated, and since it's all pretty much arbitrary and
>> basically still a scaled version of 1/n*d, why not just do that? I
>> think that 1/log(n*d+1) might be a better estimate for what's going
>> on, but might as well keep it simple.

So what are you getting at here, exactly? That this entire approach is
invalid unless I work out the Tenney series heights?

-Mike

🔗Carl Lumma <carl@lumma.org>

1/27/2011 11:40:56 PM

What you have doesn't seem to correspond to harmonic entropy,
or exp(entropy), etc. That's what you claimed. That's all I'm
saying, no more no less. -Carl

>So what are you getting at here, exactly? That this entire approach is
>invalid unless I work out the Tenney series heights?
>
>-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

1/27/2011 11:54:12 PM

On Fri, Jan 28, 2011 at 2:40 AM, Carl Lumma <carl@lumma.org> wrote:
>
> What you have doesn't seem to correspond to harmonic entropy,
> or exp(entropy), etc. That's what you claimed. That's all I'm
> saying, no more no less. -Carl

I never claimed that the 3 examples I just posted were supposed to be
1-for-1 equivalent to harmonic entropy. In fact, my specific claim was
this:

> I have opted to give them 1/n*d heights. This is because I left
> something out about #2 - to actually make it equivalent to Paul's HE
> calculation, you don't actually give them 1/sqrt(n*d) heights, you
> give them something like N/sqrt(n*d) * log(N/sqrt(n*d)) heights. This
> is complicated, and since it's all pretty much arbitrary and
> basically still a scaled version of 1/n*d, why not just do that? I
> think that 1/log(n*d+1) might be a better estimate for what's going
> on, but might as well keep it simple.

Given that, I'm not sure why you took the results from this and
plotted them against HE. For me to make it actually yield a cent for
cent equivalent to HE, I'd have to figure out the Tenney series widths
and work out the convolution integral and go from there.

But my real question is, is there some reason why the use of the Tenney
series is supposed to be more psychoacoustically valid than other
series?

-Mike

🔗Graham Breed <gbreed@gmail.com>

1/28/2011 12:05:07 AM

Mike Battaglia <battaglia01@gmail.com> wrote:

> But my real question is, is there some reason why the use of
> the Tenney series is supposed to be more
> psychoacoustically valid than other series?

It's a pretty good bet. We've done the best research we
can and we're happy with it. Of course, we're not
psychoacousticians, and we don't have a budget for
properly controlled experiments.

It's also a result that comes out of Harmonic Entropy. To
an extent, you get out what you feed in. I'm remembering
way back for this, but you have to specify a threshold for
intervals. I worked out that if you set that threshold
according to Tenney harmonic distance, you get consonances
ranked by Tenney harmonic distance. You can also set the
threshold according to the size of the numerator and get
consonances ranked by either the numerator or
denominator. Or something. Like I said, it's a while ago,
and I can't even remember how I worked it out. But
Harmonic Entropy is still measuring facts about the
distribution of rationals, and there'll be a whole load of
rankings you can't get out of it.

Graham

🔗Carl Lumma <carl@lumma.org>

1/28/2011 12:07:20 AM

Mike wrote:

>> What you have doesn't seem to correspond to harmonic entropy,
>> or exp(entropy), etc. That's what you claimed. That's all I'm
>> saying, no more no less. -Carl
>
>I never claimed that the 3 examples I just posted were supposed to be
>1-for-1 equivalent to harmonic entropy.

I asked specifically for the equivalent of s=1.0 in one cent
increments and this is what you gave me!

You've been claiming it's identical to HE for eons already.
In fact, you've called it a way to speed up HE computations.
In fact, you just announced that you're going to start calling
it HE instead of DC. WTF are we supposed to think?

-Carl

🔗Carl Lumma <carl@lumma.org>

1/28/2011 12:10:07 AM

>It's also a result that comes out of Harmonic Entropy. To
>an extent, you get out what you feed in. I'm remembering
>way back for this, but you have to specify a threshold for
>intervals. I worked out that if you set that threshold
>according to Tenney harmonic distance, you get consonances
>ranked by Tenney harmonic distance. You can also set the
>threshold according to the size of the numerator and get
>consonances ranked by either the numerator or
>denominator. Or something. Like I said, it's a while ago,
>and I can't even remember how I worked it out. But
>Harmonic Entropy is still measuring facts about the
>distribution of rationals, and there'll be a whole load of
>rankings you can't get out of it.

According to Paul, if you set the threshold according to
denominator, you still get consonances ranked by Tenney harmonic
distance. And same for sum of numerator and denominator. -Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/28/2011 12:15:07 AM

On Fri, Jan 28, 2011 at 3:05 AM, Graham Breed <gbreed@gmail.com> wrote:
>
> Mike Battaglia <battaglia01@gmail.com> wrote:
>
> > But my real question is, there some reason why the use of
> > the Tenney series is supposed to be more
> > psychoacoustically valid than other series?
>
> It's a pretty good bet. We've done the best research we
> can and we're happy with it. Of course, we're not
> psychoacousticians, and we don't have a budget for
> properly controlled experiments.

Alright, I'll work it out for the Tenney series. I'm still not
convinced that it's that miraculous though.

> It's also a result that comes out of Harmonic Entropy. To
> an extent, you get out what you feed in. I'm remembering
> way back for this, but you have to specify a threshold for
> intervals. I worked out that if you set that threshold
> according to Tenney harmonic distance, you get consonances
> ranked by Tenney harmonic distance.

I think that what it is is 1/sqrt(n*d) rather than 1/log(n*d), but
yeah. However, if you do this with a Farey series, you just get 1/d,
I believe.

> But Harmonic Entropy is still measuring facts about the
> distribution of rationals, and there'll be a whole load of
> rankings you can't get out of it.

Right, but as per this discussion we just had, it seems more like it's
measuring facts about the series you use. If you use a Farey series,
you get 1/d widths, and if you use a Tenney series, you get
1/sqrt(n*d) widths. If you use Gene's ?(x) function, you get something
else, and if you use the Stern-Brocot tree, you end up with something
that looks completely ridiculous, despite having the unimodular
property. According to Gene, at least, if you measure the actual
distribution of the rationals with respect to dx, they're completely
even. Most of what I've seen in my own research on this shows that
they are completely even, but that different subsets of them can be
distributed differently (e.g. Farey series).

That being said, I decided to run the model such that the consonances
really do end up with n*d-scaled minima, which you could probably
reverse engineer to work out some series that gives you widths
proportional to that. But I'll figure out equivalents for Farey,
Tenney series, etc.

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

1/28/2011 12:18:15 AM

On Fri, Jan 28, 2011 at 3:07 AM, Carl Lumma <carl@lumma.org> wrote:
>
> Mike wrote:
>
> >> What you have doesn't seem to correspond to harmonic entropy,
> >> or exp(entropy), etc. That's what you claimed. That's all I'm
> >> saying, no more no less. -Carl
> >
> >I never claimed that the 3 examples I just posted were supposed to be
> >1-for-1 equivalent to harmonic entropy.
>
> I asked specifically for the equivalent of s=1.0 in one cent
> increments and this is what you gave me!

Sorry, I thought you had been following the recent developments in the
thread. We went through a huge discussion about the sqrt(n*d) widths
being arbitrary and I thought you were on the same page. Plus, you
sent something offlist that says that as long as the function is
reasonably well-behaved, that would be enough to satisfy you. But for
proof of concept, I'll work out the Tenney-series equivalent version.

The one I sent ends up making the minima lower in entropy and smushes
the maxima together more, which is what your graph is showing.

-Mike

🔗Carl Lumma <carl@lumma.org>

1/28/2011 12:22:32 AM

>> I asked specifically for the equivalent of s=1.0 in one cent
>> increments and this is what you gave me!
>
>Sorry, I thought you had been following the recent developments in the
>thread. We went through a huge discussion about the sqrt(n*d) widths
>being arbitrary and I thought you were on the same page.

Nope, I haven't been reading it. Since it led you to conclude
that sqrt(n*d) is arbitrary I apparently didn't miss much.

>Plus, you
>sent something offlist that says that as long as the function is
>reasonably well-behaved, that would be enough to satisfy you.

It depends on the claims made. If you say you've got something
that does something, great. If you say you've got something that
you're calling "harmonic entropy" because it is harmonic entropy,
then show us the gravy.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/28/2011 8:15:22 PM

On Fri, Jan 28, 2011 at 3:22 AM, Carl Lumma <carl@lumma.org> wrote:
>
> >> I asked specifically for the equivalent of s=1.0 in one cent
> >> increments and this is what you gave me!
> >
> >Sorry, I thought you had been following the recent developments in the
> >thread. We went through a huge discussion about the sqrt(n*d) widths
> >being arbitrary and I thought you were on the same page.
>
> Nope, I haven't been reading it. Since it led you to conclude that sqrt(n*d) is arbitrary I apparently didn't miss much.

I notice that you never do seem to miss much. I'll have to acquire that skill.

> It depends on the claims made. If you say you've got something
> that does something, great. If you say you've got something that
> you're calling "harmonic entropy" because it is harmonic entropy,
> then show us the gravy.

If you really want to do that kind of a plot, then you need to tell me
what you're plotting the results against. I'm working right now with
n*d < 10000, mediant-to-mediant widths. I can't find a clear
explanation anywhere of exactly how the sqrt(n*d) widths are set up to
compare.

If you plot Paul's n*d<10000 against his n*d<5000, or his sqrt(n*d)
widths against his mediant-to-mediant widths, I would expect that you
also, unsurprisingly, do not get a line.

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

1/28/2011 8:41:16 PM

On Fri, Jan 28, 2011 at 11:15 PM, Mike Battaglia <battaglia01@gmail.com> wrote:
> I can't find a clear explanation anywhere of exactly how the sqrt(n*d) widths are set up to
> compare.

Paul just wrote this on my facebook wall:

"Each probability slice's area is simply assumed to be *proportional*
to width times Gaussian height at the corresponding ratio. Then one
normalizes all the areas so that they sum to one."

I think I'm going to go shoot myself in the face now.

-Mike

🔗Carl Lumma <carl@lumma.org>

1/28/2011 10:17:37 PM

Mike wrote:

>If you really want to do that kind of a plot, then you need to tell me
>what you're plotting the results against.

The only thing that should matter is s, which as I said is 1%.
(In DC do you represent this as the variance of the Gaussian you
convolve with?)

>I'm working right now with n*d < 10000, mediant-to-mediant widths.
>I can't find a clear explanation anywhere of exactly how the sqrt(n*d)
>widths are set up to compare.

I divide the Gaussian's height at n/d by sqrt(n*d) and then
normalize the sum of these areas to 1. See here for the
triadic version: /tuning/topicId_93392.html#93468
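
For concreteness, here's that whole recipe as a MATLAB sketch, with
the n*d < 10000 seed, the 1-cent grid, and the reading of s = 1% as
about 17.2 cents all taken as example assumptions:

    % Sketch of the dyadic HE recipe: Gaussian height at each n/d
    % divided by sqrt(n*d), areas normalized to 1, then entropy.
    ratios = [];
    for d = 1:100
      for n = d:2*d                     % ratios n/d in [1, 2]
        if gcd(n, d) == 1 && n*d < 10000
          ratios(end+1, :) = [n d];     %#ok<AGROW>
        end
      end
    end
    ji = 1200 * log2(ratios(:,1) ./ ratios(:,2)); % cents of each ratio
    w  = 1 ./ sqrt(prod(ratios, 2));              % 1/sqrt(n*d) factors
    s  = 1200 * log2(1.01);                       % s = 1% ~ 17.2 cents
    cents = 0:1:1200;
    HE = zeros(size(cents));
    for i = 1:numel(cents)
      p = w .* exp(-(cents(i) - ji).^2 / (2*s^2));
      p = p / sum(p);                   % normalize the areas to 1
      nz = p > 0;
      HE(i) = -sum(p(nz) .* log(p(nz)));
    end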

Since you're after the heights of the impulses and there is
no Gaussian yet, I dunno... you tell me. In our offlist chat
months ago you produced a breakdown of the contribution of each
impulse at a given point, and one of the columns was
normalized to 1...

>If you plot Paul's n*d<10000 against his n*d<5000, or his sqrt(n*d)
>widths against his mediant-to-mediant widths, I would expect that
>you also, unsurprisingly, do not get a line.

You certainly do.

-Carl

🔗Carl Lumma <carl@lumma.org>

1/28/2011 10:19:25 PM

>Paul just wrote this on my facebook wall:
>
>"Each probability slice's area is simply assumed to be *proportional*
>to width times Gaussian height at the corresponding ratio. Then one
>normalizes all the areas so that they sum to one."
>
>I think I'm going to go shoot myself in the face now.

Er, don't do that. I'm still not clear how you'll perform this
normalization and retain the speedup, since it seemingly requires
you 'pause' the computation for each incoming dyad... -Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/29/2011 12:31:34 AM

On Sat, Jan 29, 2011 at 1:17 AM, Carl Lumma <carl@lumma.org> wrote:
>
> Mike wrote:
>
> >If you really want to do that kind of a plot, then you need to tell me
> >what you're plotting the results against.
>
> The only thing that should matter is s, which as I said is 1%.
> (In DC do you represent this as the variance of the Gaussian you
> convolve with?)

Right. Except it turns out that the convolution isn't that much of a
speedup, anyway. What does seem to be a real speedup comes from the
concept I sent you offlist a long time ago, which is that the value of a
Gaussian of some mean m taken at a point p equals the value of a
Gaussian at mean p taken at a point m. If I start with a vector of
zeroes and manually add Gaussians, it's still really fast.
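
Concretely, that version is just this (a sketch; ji and h are assumed
to hold the cents positions and heights of whatever series and
weighting you seeded with):

    % Sketch of "start with zeros and manually add Gaussians": by the
    % symmetry above, one pass over the just intervals covers every
    % dyad on the grid at once. ji and h come from the chosen seeding.
    cents = 0:1:1200;
    s = 1200 * log2(1.01);              % s = 1% ~ 17.2 cents
    curve = zeros(size(cents));
    for k = 1:numel(ji)
      curve = curve + h(k) * exp(-(cents - ji(k)).^2 / (2*s^2));
    end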

The "true HE" equivalent is to add slightly altered versions of each
Gaussian around each interval, rather than just add stock versions of
each Gaussian. I'm not sure how to model this as a convolution, but
since manually adding them is already pretty fast, and since there are
other speedups, I don't think it matters.

Read the thread. :)

> >I'm working right now with n*d < 10000, mediant-to-mediant widths.
> >I can't find a clear explanation anywhere of exactly how the sqrt(n*d)
> >widths are set up to compare.
>
> I divide the Gaussian's height at n/d by sqrt(n*d) and then
> normalize the sum of these areas to 1. See here for the
> triadic version: /tuning/topicId_93392.html#93468

Not dealing with triads yet.

> Since you're after the heights of the impulses and there is
> no Gaussian yet, I dunno... you tell me. In our offlist chat
> months ago you produced a breakdown of the contribution of each
> impulse at a given point, and one of the columns was
> normalized to 1...

Right. Well, I have to work it all out again now that the rect
functions are going to disappear, but the way it works is that you
distribute the complexity of each point by -plogp, with normalized
p's, rather than just p.

A really good -approximation- for this is just to add p together,
which means that you can get a speedup by performing a convolution.
This doesn't seem to be as much of a speedup, as previously mentioned,
but it might come in handy when it comes to calculating tetrads and
such.

At one point I thought that the two converged at infinity, and then I
didn't think so. Then I did again, and then I didn't, and now in light
of what Paul has said, I think that it does again. I'm losing interest
in this aspect of it, though.

> >If you plot Paul's n*d<10000 against his n*d<5000, or his sqrt(n*d)
> >widths against his mediant-to-mediant widths, I would expect that
> >you also, unsurprisingly, do not get a line.
>
> You certainly do.

Not in Paul's code that he sent me, which uses mediant-to-mediant widths.

> Er, don't do that. I'm still not clear how you'll perform this
> normalization and retain the speedup, since it seemingly requires
> you 'pause' the computation for each incoming dyad... -Carl

The main speedup doesn't seem to be coming from the convolution anymore.

-Mike

🔗Carl Lumma <carl@lumma.org>

1/29/2011 9:29:51 AM

>Right. Except it turns out that the convolution isn't that much of a
>speedup, anyway. What does seem to be a real speedup comes from the
>concept I sent you offlist a long time ago, which is that the value of a
>Gaussian of some mean m taken at a point p equals the value of a
>Gaussian at mean p taken at a point m. If I start with a vector of
>zeroes and manually add Gaussians, it's still really fast.

I don't see much room for a speedup there, since you still have
to do something for each incoming dyad.

>> I divide the Gaussian's height at n/d by sqrt(n*d) and then
>> normalize the sum of these areas to 1. See here for the
>> triadic version: /tuning/topicId_93392.html#93468
>
>Not dealing with triads yet.

There's very little difference in the formulas, shown there.

>A really good -approximation- for this is just to add p together,

That's what we discussed months ago -- whether the absolute sum
tends to be greater when it is dominated by one Gaussian. Until
you run it and compare we won't know how true it is.

>which means that you can get a speedup by performing a convolution.
>This doesn't seem to be as much of a speedup, as previously mentioned,
>but it might come in handy when it comes to calculating tetrads and
>such.

The real problem is the need to perform some task for each incoming
interval, the number of which is 1200^(n-1) for n-ads. Something
like McLoed pitch will get around that but I don't see DC doing it.

>> >If you plot Paul's n*d<10000 against his n*d<5000, or his sqrt(n*d)
>> >widths against his mediant-to-mediant widths, I would expect that
>> >you also, unsurprisingly, do not get a line.
>>
>> You certainly do.
>
>Not in Paul's code that he sent me, which uses mediant-to-mediant widths.

Paul ran such comparisons endlessly, which is the basis for his
claim that HE has only a single free variable, s.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/29/2011 5:32:20 PM

On Sat, Jan 29, 2011 at 12:29 PM, Carl Lumma <carl@lumma.org> wrote:
>
> >Right. Except it turns out that the convolution isn't that much of a
> >speedup, anyway. What does seem to be a real speedup comes from the
> >concept I sent you offlist a long time ago, which is that the value of a
> >Gaussian of some mean m taken at a point p equals the value of a
> >Gaussian at mean p taken at a point m. If I start with a vector of
> >zeroes and manually add Gaussians, it's still really fast.
>
> I don't see much room for a speedup there, since you still have
> to do something for each incoming dyad.

The speedup is that the Gaussian centered on an incoming dyad d,
evaluated at a just interval i, is the same thing as the Gaussian
centered around the just interval and evaluated at the
dyad. So you can just put a Gaussian around each interval i, add them
together and sum them at each point d and get the probability at each
dyad, which can be represented by a convolution...

Or so I thought. It turns out, however, that what you actually
have to do is do the above three different times with
slightly different basis vectors, add them together and sum them at
each point d, and then take the log of that, then multiply it by
another pre-convolved vector, and then another, divide the last one by
1/sqrt(2*pi*c^2), etc.

Which is still nlogn time, assuming I haven't screwed up the proof,
which I very well may have, and which I'll get on my website some time
when I no longer have a real life.

But in case you didn't get it from my other thread, I am losing
interest in this part of the project. And I'm going to start computing
triads whether the results yield a 1 to 1 correspondence with HE or
not. You can pretend that it has nothing to do with HE, or that it's
just an HE approximation, or "HE-inspired," or a "freehand drawing,"
or whatever derogatory language you'd like to call the end model.

> >A really good -approximation- for this is just to add p together,
>
> That's what we discussed months ago -- whether the absolute sum
> tends to be greater when it is dominated by one Gaussian. Until
> you run it and compare we won't know how true it is.

I've run it and compared loads of times. The tabular data I just sent
you was precisely that; n*d heights but seeded with a Farey series.
Took me like 0.5 seconds to compute. It doesn't look exactly like HE.

> The real problem is the need to perform some task for each incoming
> interval, the number of which is 1200^(n-1) for n-ads. Something
> like McLeod pitch will get around that but I don't see DC doing it.

If McLeod pitch has to perform an autocorrelation for every dyad, then
you're going to be at nlogn time for every dyad that you measure.
Manually adding a Gaussian puts you at linear time for every dyad that
you measure.

> Paul ran such comparisons endlessly, which is the basis for his
> claim that HE has only a single free variable, s.

Not looking like that's the case for mediant-to-mediant widths.

-Mike

🔗Carl Lumma <carl@lumma.org>

1/29/2011 5:57:17 PM

>So you can just put a Gaussian around each interval i, add them
>together and sum them at each point d and get the probability at each
>dyad,

...an entropy at each dyad. That's the kicker.

>It turns out, however, that what you actually
>have to do is do the above three different times with
>slightly different basis vectors, add them together and sum them at
>each point d, and then take the log of that, then multiply it by
>another pre-convolved vector, and then another, divide the last one by
>1/sqrt(2*pi*c^2), etc.

Sounds complicated.

>And I'm going to start computing
>triads whether the results yield a 1 to 1 correspondence with HE or
>not. You can pretend that it has nothing to do with HE, or that it's
>just an HE approximation, or "HE-inspired," or a "freehand drawing,"
>or whatever derogatory language you'd like to call the end model.

Freehand drawing sounds about right.

>> Paul ran such comparisons endlessly, which is the basis for his
>> claim that HE has only a single free variable, s.
>
>Not looking like that's the case for mediant-to-mediant widths.

Not sure what you mean. Plots speak louder than words.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/29/2011 5:59:33 PM

On Sat, Jan 29, 2011 at 8:57 PM, Carl Lumma <carl@lumma.org> wrote:
>
> >And I'm going to start computing
> >triads whether the results yield a 1 to 1 correspondence with HE or
> >not. You can pretend that it has nothing to do with HE, or that it's
> >just an HE approximation, or "HE-inspired," or a "freehand drawing,"
> >or whatever derogatory language you'd like to call the end model.
>
> Freehand drawing sounds about right.

Right, but the second part is that while you can sit in your chair
and arbitrate about the conceptual validity of some model that I spent
hours on, that's really just your humble opinion :)

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

1/29/2011 6:50:00 PM

Furthermore, let's go back to here:

Carl wrote:
>
> Paul ran such comparisons endlessly, which is the basis for his
> claim that HE has only a single free variable, s.

It has at least two free variables, those being s and the series that
you choose to seed the model with.

-Mike

🔗Carl Lumma <carl@lumma.org>

1/29/2011 6:55:48 PM

Mike wrote:

>> Freehand drawing sounds about right.
>
>Right, but the second part is that while you can sit in your chair
>and arbitrate about the conceptual validity of some model that I spent
>hours on, that's really just your humble opinion :)

I didn't mean it as an insult. Convolving with a set of impulses
based on the rationals is a good idea, but it is also a broad brush.
From your webpage on the matter, it isn't clear you've succeeded in
excluding any possible convolution fitting that description.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/29/2011 7:07:14 PM

On Sat, Jan 29, 2011 at 9:55 PM, Carl Lumma <carl@lumma.org> wrote:
>
> Mike wrote:
>
> >> Freehand drawing sounds about right.
> >
> >Right, but the second part is that while you can sit in your chair
> >and arbitrate about the conceptual validity of some model that I spent
> >hours on, that's really just your humble opinion :)
>
> I didn't mean it as an insult. Convolving with a set of impulses
> based on the rationals is a good idea, but it is also a broad brush.
> From your webpage on the matter, it isn't clear you've succeeded in
> excluding any possible convolution fitting that description.

What do you mean by "excluding any possible convolution?" And the
webpage is a bit out of date right now, in light of what happened last
night. I've also started considering that it might end up being
simpler to work it out for the mediant-to-mediant version after all,
but I can't delve into that right now.

The main point I'm getting at, which is something that you missed, is
in my followup message, where I assert that HE is a function of two
parameters: s, and a measure you use to figure out the distribution of
the rationals. This is what you missed last week, and props to Gene
for elucidating it. This is outside of the convolution model now,
this is just a result in better understanding HE.

So if you use a Farey series, you get a certain measure for the
distribution of the rationals, if you use a Tenney series, you get a
different measure, etc. If you use Gene's ?(x) function, you get
another measure, if you use the Stern-Brocot tree, you get yet another
measure. If you use dx as a measure, you end up with all the widths
being even.

I think that for any measure that you CHOOSE to use, you can probably
reverse engineer a series that corresponds to it or converges to it in
the limit.

Does that all make sense?

-Mike

🔗Carl Lumma <carl@lumma.org>

1/29/2011 7:18:19 PM

Mike wrote:

>The main point I'm getting at, which is something that you missed, is
>in my followup message, where I assert that HE is a function of two
>parameters: s, and a measure you use to figure out the distribution of
>the rationals.

I didn't miss it. You missed the part where I said "plots
speak louder than words".

>So if you use a Farey series, you get a certain measure for the
>distribution of the rationals, if you use a Tenney series, you get a
>different measure, etc.

The "distribution of the rationals" isn't what we're measuring.

>If you use Gene's ?(x) function, you get
>another measure, if you use the Stern-Brocot tree, you get yet another
>measure. If you use dx as a measure, you end up with all the widths
>being even.

I think you're talking about what happens when you use one of
these in DC, not HE.

>I think that for any measure that you CHOOSE to use, you can probably
>reverse engineer a series that corresponds to it or converges to it in
>the limit.

In the case of HE, Paul showed that the relative entropy at the
rationals is always proportional to Tenney height, whether Farey or
Tenney or Mann series are CHOSEN. But even if he hadn't, Tenney
is the right choice for other reasons.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/29/2011 7:31:05 PM

On Sat, Jan 29, 2011 at 10:18 PM, Carl Lumma <carl@lumma.org> wrote:
>
> Mike wrote:
>
> >The main point I'm getting at, which is something that you missed, is
> >in my followup message, where I assert that HE is a function of two
> >parameters: s, and a measure you use to figure out the distribution of
> >the rationals.
>
> I didn't miss it. You missed the part where I said "plots
> speak louder than words".

There are plenty of plots that compare HE measured with a Farey series
to HE measured with a Tenney series, and you already know how they
differ.

> >So if you use a Farey series, you get a certain measure for the
> >distribution of the rationals, if you use a Tenney series, you get a
> >different measure, etc.
>
> The "distribution of the rationals" isn't what we're measuring.

It is in HE.

> >If you use Gene's ?(x) function, you get
> >another measure, if you use the Stern-Brocot tree, you get yet another
> >measure. If you use dx as a measure, you end up with all the widths
> >being even.
>
> I think you're talking about what happens when you use one of
> these in DC, not HE.

I'm talking about HE.

> >I think that for any measure that you CHOOSE to use, you can probably
> >reverse engineer a series that corresponds to it or converges to it in
> >the limit.
>
> In the case of HE, Paul showed that the relative entropy at the
> rationals is always proportional to Tenney height, whether Farey or
> Tenney or Mann series are CHOSEN. But even if he hadn't, Tenney
> is the right choice for other reasons.

Farey series have the widths proportional to 1/d, Tenney series have
the widths proportional to 1/sqrt(n*d). The Farey series has a curve
where all of the minima at 1/1, 2/1, 3/1, 4/1, 5/1, 6/1, 7/1, 8/1, etc
have relatively similar entropy, and the whole curve slopes down. I
know you've seen this before, so I'm not sure what you're saying here.
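
To spell the two width rules out in code (a toy Python comparison;
the constants of proportionality are dropped):

from math import sqrt

def farey_width(n, d):    # Farey seeding: widths go as 1/d
    return 1.0 / d

def tenney_width(n, d):   # Tenney seeding: widths go as 1/sqrt(n*d)
    return 1.0 / sqrt(n * d)

for n, d in [(3, 2), (2, 1), (15, 1)]:
    print(f"{n}/{d}: Farey {farey_width(n, d):.3f},"
          f" Tenney {tenney_width(n, d):.3f}")
# under 1/d widths, 15/1 gets the same width as 2/1 -- hence the
# overall downward slope of the Farey curve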

-Mike

🔗Carl Lumma <carl@lumma.org>

1/29/2011 9:34:43 PM

>> I didn't miss it. You missed the part where I said "plots
>> speak louder than words".
>
>There are plenty of plots that compare HE measured with a Farey series
>to HE measured with a Tenney series, and you already know how they
>differ.

Yes - minimally.

>> >So if you use a Farey series, you get a certain measure for the
>> >distribution of the rationals, if you use a Tenney series, you get a
>> >different measure, etc.
>>
>> The "distribution of the rationals" isn't what we're measuring.
>
>It is in HE.

No, it measures the entropy of the distribution.

>> >If you use Gene's ?(x) function, you get
>> >another measure, if you use the Stern-Brocot tree, you get yet another
>> >measure. If you use dx as a measure, you end up with all the widths
>> >being even.
>>
>> I think you're talking about what happens when you use one of
>> these in DC, not HE.
>
>I'm talking about HE.

Plots speak louder than words.

>> >I think that for any measure that you CHOOSE to use, you can probably
>> >reverse engineer a series that corresponds to it or converges to it in
>> >the limit.
>>
>> In the case of HE, Paul showed that the relative entropy at the
>> rationals is always proportional to Tenney height, whether Farey or
>> Tenney or Mann series are CHOSEN. But even if he hadn't, Tenney
>> is the right choice for other reasons.
>
>Farey series have the widths proportional to 1/d, Tenney series have
>the widths proportional to 1/sqrt(n*d).

I said *entropy*.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/29/2011 10:13:34 PM

On Sun, Jan 30, 2011 at 12:34 AM, Carl Lumma <carl@lumma.org> wrote:
>
> >> >So if you use a Farey series, you get a certain measure for the
> >> >distribution of the rationals, if you use a Tenney series, you get a
> >> >different measure, etc.
> >>
> >> The "distribution of the rationals" isn't what we're measuring.
> >
> >It is in HE.
>
> No, it measures the entropy of the distribution.

The distribution doesn't have an entropy. HE measures the entropy of
a dyad by integrating a Gaussian over each interval's "domain"; each
interval's "domain" is equal to its "width," and that width is
completely dependent on what series you use.
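
To make that concrete, here's a minimal Python sketch of the whole
calculation as I understand it - my own paraphrase, not Paul's actual
code, and the Farey order and s value are just placeholders:

from fractions import Fraction
from math import erf, log, sqrt

def cents(r):
    return 1200.0 * log(r) / log(2)

def farey_dyads(N):
    # reduced ratios n/d in [1/1, 2/1] with n, d <= N (Farey-style seed)
    return sorted({Fraction(n, d) for d in range(1, N + 1)
                   for n in range(d, 2 * d + 1) if n <= N})

def harmonic_entropy(c, ratios, s=17.0):
    # each ratio's domain runs mediant-to-mediant; its probability is
    # the integral over that domain of a Gaussian centered on the heard
    # dyad c (in cents), and the entropy is -sum(p * log p)
    meds = [cents(Fraction(a.numerator + b.numerator,
                           a.denominator + b.denominator))
            for a, b in zip(ratios, ratios[1:])]
    edges = [cents(ratios[0])] + meds + [cents(ratios[-1])]
    cdf = lambda x: 0.5 * (1.0 + erf((x - c) / (s * sqrt(2.0))))
    ps = [cdf(hi) - cdf(lo) for lo, hi in zip(edges, edges[1:])]
    return -sum(p * log(p) for p in ps if p > 0.0)

rats = farey_dyads(80)
print(harmonic_entropy(702.0, rats))   # at 3/2: a dip
print(harmonic_entropy(650.0, rats))   # between dips: higher entropy

Swap farey_dyads out for a Tenney or Stern-Brocot seed and the same
routine gives you a different curve - that's the second parameter.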

> >> >If you use Gene's ?(x) function, you get
> >> >another measure, if you use the Stern-Brocot tree, you get yet another
> >> >measure. If you use dx as a measure, you end up with all the widths
> >> >being even.
> >>
> >> I think you're talking about what happens when you use one of
> >> these in DC, not HE.
> >
> >I'm talking about HE.
>
> Plots speak louder than words.

Here's Paul's Tenney plot:
/tuning-math/files/dyadic/default.gif
Here's his Farey plot:
/tuning-math/files/PaulErlich/harment.gif

You should see the Stern-Brocot version, which doesn't even look
remotely close to either of these.

> >Farey series have the widths proportional to 1/d, Tenney series have
> >the widths proportional to 1/sqrt(n*d).
>
> I said *entropy*.

The entropy results from the widths that you use.

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

1/29/2011 10:30:02 PM

On Sun, Jan 30, 2011 at 1:13 AM, Mike Battaglia <battaglia01@gmail.com> wrote:
>
> You should see the Stern-Brocot version, which doesn't even look
> remotely close to either of these.

I should add to this that I can come up with a Stern-Brocot plot if
people want, but I realized what was wrong about Paul's code not
correlating to itself. Paul sent me the mediant-to-mediant version,
but the memory footprint on this is so extensive that it actually
won't run. I noticed this a long time ago and set it to only use
ratios from a Tenney series that are within 4 octaves of the lowest
and highest dyad that you want. So if we're going from 0 to 1200 cents
in 1 cent increments, it uses only ratios in which n*d < N and 1/8 <
n/d < 8/1.

This is a lot faster to calculate but could be the reason that the
curves don't correlate properly. So I'll have to tweak the code a bit
and see how it works then.
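
For clarity, the pruning I'm describing amounts to something like
this (a Python sketch, not Paul's actual code):

from fractions import Fraction
from math import gcd

def pruned_tenney_seed(N):
    # Tenney series n*d < N, keeping only ratios within 4 octaves of
    # the dyads under test: 1/8 < n/d < 8/1 (the speed hack above)
    out = []
    for n in range(1, N):
        for d in range(1, N // n + 1):
            if n * d < N and gcd(n, d) == 1 and n < 8 * d and d < 8 * n:
                out.append(Fraction(n, d))
    return sorted(out)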

-Mike

🔗Carl Lumma <carl@lumma.org>

1/29/2011 11:06:01 PM

>> Plots speak louder than words.
>
>Here's his Farey plot:
> /tuning-math/files/PaulErlich/harment.gif

That's not "Paul's Farey plot".

>You should see the Stern-Brocot version, which doesn't even look
>remotely close to either of these.

Great, where is it?

>> >Farey series have the widths proportional to 1/d, Tenney series have
>> >the widths proportional to 1/sqrt(n*d).
>>
>> I said *entropy*.
>
>The entropy results from the widths that you use.

Show me.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/30/2011 12:42:30 AM

On Sun, Jan 30, 2011 at 2:06 AM, Carl Lumma <carl@lumma.org> wrote:
>
> >> Plots speak louder than words.
> >
> >Here's his Farey plot:
> > /tuning-math/files/PaulErlich/harment.gif
>
> That's not "Paul's Farey plot".

You'll have to stop "being" "so cryptic." This is "the Farey series HE
plot" in his "files folder" on "tuning." This is what he told me to
look at, this is exactly what he described the Farey series version as
looking like, and this is why he told me he moved onto using the
"Tenney series."

Here's a message from him on Facebook, reprinted with his permission,
about this very subject:

> If we look at a Farey series, where the numerators are limited by N, the "widths" then go as 1/d rather than 1/sqrt(n*d) -- and I proved that rigorously in the original version of my 22 paper. The result is a harmonic entropy curve that has an overall downward slope. But other than the overall slope it looks incredibly similar to the curve deriving from a "Tenney" series; in particular, the "dips" at the major simple ratios seem to maintain the same width and even depth relative to the "crests" or any other features immediately in their vicinity.

But let's back up here. I'm working under the belief that the choice
of series affects the resultant curve. This is backed up by Paul's own
statements, the curves I've seen and now posted comparing the two, the
discussion we had this last week about number theory and the
distribution of the rationals, my own understanding of the calculus
involved in the entropy summation, and pretty much every tidbit of
information on the models I've ever found at all.

You came in asserting that whatever results I posted should correlate
perfectly to whatever random HE dataset you decide to plot them
against, because the choice of series doesn't matter, and because the
choice of n*d doesn't matter, and because the use of
mediant-to-mediant widths vs sqrt(n*d) doesn't matter. That's quite an
extraordinary claim to make, and I'd like to see some proof of it.

> >You should see the Stern-Brocot version, which doesn't even look
> >remotely close to either of these.
>
> Great, where is it?
//
> Show me.

See followup message.

-Mike

🔗Carl Lumma <carl@lumma.org>

1/30/2011 1:04:03 AM

>You'll have to stop "being" "so cryptic." This is "the Farey series HE
>plot" in his "files folder" on "tuning." This is what he told me to
>look at, this is exactly what he described the Farey series version as
>looking like, and this is why he told me he moved onto using the
>"Tenney series."

He probably intended it as an exaggerated example to show the trend
in the curve -- the Farey order is only 100!

>Here's a message from him on Facebook, reprinted with his permission,
>about this very subject:
>
>If we look at a Farey series, where the numerators are limited by N,
>the "widths" then go as 1/d rather than 1/sqrt(n*d)

The mediant-mediant widths, not the widths of anything on the
resulting entropy curve.

>You came in asserting that whatever results I posted should correlate
>perfectly to whatever random HE dataset

It's odd that you think this. You drew the comparison to HE, not me.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/30/2011 1:24:21 AM

On Sun, Jan 30, 2011 at 4:04 AM, Carl Lumma <carl@lumma.org> wrote:
>
> >You'll have to stop "being" "so cryptic." This is "the Farey series HE
> >plot" in his "files folder" on "tuning." This is what he told me to
> >look at, this is exactly what he described the Farey series version as
> >looking like, and this is why he told me he moved onto using the
> >"Tenney series."
>
> He probably intended it as an exaggerated example to show the trend
> in the curve -- the Farey order is only 100!

Here are some other Farey series graphs:

http://sonic-arts.org/td/erlich/entropy.htm
/tuning-math/files/PaulErlich/ent_006.jpg
/tuning-math/files/PaulErlich/ent_015.jpg

Here's one where N goes from 80 to 405:
/tuning-math/files/PaulErlich/manuel.jpg

Here's exp(entropy), showing that the curve actually gets steeper as N
gets higher:
/tuning-math/files/PaulErlich/manuel2.jpg

This would mean if you plotted them against one another you wouldn't get a line.

> >Here's a message from him on Facebook, reprinted with his permission,
> >about this very subject:
> >
> >If we look at a Farey series, where the numerators are limited by N,
> >the "widths" then go as 1/d rather than 1/sqrt(n*d)
>
> The mediant-mediant widths, not the widths of anything on the
> resulting entropy curve.

Why did you cut out the next sentence? "The result is a harmonic
entropy curve that has an overall downward slope."

> >You came in asserting that whatever results I posted should correlate
> >perfectly to whatever random HE dataset
>
> It's odd that you think this. You drew the comparison to HE, not me.

I drew the comparison to HE based off of a bunch of calculus that I
had done. You came in asserting that the curves should correlate
perfectly to HE, despite the fact that what I was claiming at the time
was that they'd converge as N -> Infinity. However, I decided to play
along, since I wanted to see how close the models really would get.
OK, fine, burden of proof is on me, and it turns out that there was an
error in my derivation anyway, which I admitted.

But to continue to strive for a solution, I asked you what data I'm
supposed to be correlating to, and you said that you weren't going to
tell me because it doesn't matter, because the curve looks pretty much
the same no matter what N is set to, no matter what series you use, no
matter what widths you use, etc. This is a pretty novel claim, and it
isn't supported by the charts I've seen and posted, by what Paul has
said about the use of sqrt(n*d) widths, by my current understanding of
the entropy calculation, or by my runs of an optimized version of
Paul's code, which may or may not be accurate but generally reflected
the charts.

You then said that the choice of series in particular doesn't matter,
and that HE has only one free parameter - s. Since the entire point of
my work, and the main result of last week's discussion that you
missed, is that the choice of series is all-important and dictates the
end result, and that the end result can be efficiently approximated by
picking reasonable heights and convolving with a Gaussian, this is yet
another pretty bold claim to make. It again doesn't seem to match up
with anything I've heard or seen, or what Paul has said, or what the
math itself would seem to suggest, although I could have always
screwed up the math again.

So now I ask you, since you're claiming that the only way my model is
validated is if it matches up to all HE curves, since they're all
apparently correlated, and since you're also claiming that the choice
of series doesn't do anything to the end curve, for some proof of
these statements. It's good to hold my work to a high standard, but I
don't think these specific claims are true and I don't think Paul does
either.

-Mike

🔗Carl Lumma <carl@lumma.org>

1/30/2011 2:20:49 AM

>> He probably intended it as an exaggerated example to show the trend
>> in the curve -- the Farey order is only 100!
>
>Here are some other Farey series graphs:
>
>http://sonic-arts.org/td/erlich/entropy.htm
>/tuning-math/files/PaulErlich/ent_006.jpg
>/tuning-math/files/PaulErlich/ent_015.jpg

N=167 is the highest those go. That's starting to look reasonable
but note he says it's too low right on it!

>Here's one where N goes from 80 to 405:
>/tuning-math/files/PaulErlich/manuel.jpg

Labeled "discordance".

>Here's exp(entropy), showing that the curve actually gets steeper as N
>gets higher:
>/tuning-math/files/PaulErlich/manuel2.jpg

That's probably exp amplification at the higher starting points.

>> >If we look at a Farey series, where the numerators are limited by N,
>> >the "widths" then go as 1/d rather than 1/sqrt(n*d)
>>
>> The mediant-mediant widths, not the widths of anything on the
>> resulting entropy curve.
>
>Why did you cut out the next sentence? "The result is a harmonic
>entropy curve that has an overall downward slope."

I know about the slope. Why do you keep talking about mediant
to mediant widths as if they came out in the result?

>But to continue to strive for a solution, I asked you what data I'm
>supposed to be correlating to, and you said that you weren't going to
>tell me because it doesn't matter,

I said I thought the standard harmonic entropy formulation is a
good target. That's n*d < 10000, 1 <= n/d <= 2, 1/sqrt(n*d) widths,
s = 1%. I even showed that your thing doesn't match up to it.
What else would you like me to do?
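
Spelled out, that target seed is just (a rough Python sketch, not
Paul's actual code; s = 1% works out to roughly 17 cents):

from fractions import Fraction
from math import gcd, sqrt

def standard_he_seed(limit=10000):
    # n*d < limit, 1 <= n/d <= 2, widths proportional to 1/sqrt(n*d)
    seed = []
    for n in range(1, int((2 * limit) ** 0.5) + 1):
        for d in range((n + 1) // 2, n + 1):   # enforces 1 <= n/d <= 2
            if n * d < limit and gcd(n, d) == 1:
                seed.append((Fraction(n, d), 1.0 / sqrt(n * d)))
    return sorted(seed)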

>You then said that the choice of series in particular doesn't matter,
>and that HE has only one free parameter - s. Since the entire point of
>my work, and the main result of last week's discussion that you
>missed, is that the choice of series is all-important and dictates the
>end result,

I'm waiting for evidence of this.

>and that the end result can be efficiently approximated by
>picking reasonable heights and convolving with a Gaussian,

Evidence of this?

>So now I ask you, since you're claiming that the only way my model is
>validated is if it matches up to all HE curves,

I didn't say that. I said you could alternatively demonstrate
why your model makes sense -- why you're not free to choose any
convolution integral you want, or how such a choice doesn't change
the result. You replied by saying you think you need to do three
convolutions and cobble them together, or something.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/30/2011 3:09:51 AM

On Sun, Jan 30, 2011 at 5:20 AM, Carl Lumma <carl@lumma.org> wrote:
>
> >Here's one where N goes from 80 to 405:
> >/tuning-math/files/PaulErlich/manuel.jpg
>
> Labeled "discordance".

Yes...?

> >Here's exp(entropy), showing that the curve actually gets steeper as N
> >gets higher:
> >/tuning-math/files/PaulErlich/manuel2.jpg
>
> That's probably exp amplification at the higher starting points.

OK, good point.

> >Why did you cut out the next sentence? "The result is a harmonic
> >entropy curve that has an overall downward slope."
>
> I know about the slope. Why do you keep talking about mediant
> to mediant widths as if they came out in the result?

You said that the minima of harmonic entropy are proportional to
Tenney height, but now you're saying you knew about the slope? Look at
the Farey picture - what has a lower entropy, 2/1 or 1/1? What has
lower entropy, the maxima right before 2/1, or 4/3? Look on the N=405
plot at the top here:
/tuning-math/files/PaulErlich/manuel.jpg

While I never claimed that the Farey series curve has no valid
interpretation, which it obviously does, I am saying that the choice
of series biases the curve at the end of the day.

> >But to continue to strive for a solution, I asked you what data I'm
> >supposed to be correlating to, and you said that you weren't going to
> >tell me because it doesn't matter,
>
> I said I thought the standard harmonic entropy formulation is a
> good target. That's n*d < 10000, 1 <= n/d <= 2, 1/sqrt(n*d) widths,
> s = 1%. I even showed that your thing doesn't match up to it.
> What else would you like me to do?

You never told me n*d < 10000 or 1/sqrt(n*d) widths.

> >You then said that the choice of series in particular doesn't matter,
> >and that HE has only one free parameter - s. Since the entire point of
> >my work, and the main result of last week's discussion that you
> >missed, is that the choice of series is all-important and dictates the
> >end result,
>
> I'm waiting for evidence of this.

I'm shoving it in your face. Go look at the Tenney series results and
then go look at the Farey series results. The entire curve changes.
And if you'd be patient, you'll have Stern-Brocot tree results soon as
well.

> >and that the end result can be efficiently approximated by
> >picking reasonable heights and convolving with a Gaussian,
>
> Evidence of this?

Here's a plot with heights that I think are reasonable; 1/(n*d) heights:

/tuning-math/files/MikeBattaglia/ndheights.png

Here's it with atan(4/nd) heights, which smooths out the curve a bit:

/tuning-math/files/MikeBattaglia/atan-4nd-heights.png

This is not going to give a 1-1 correspondence to HE, so I'm not going
to bother to give you plots (the first one is the one I gave you plots
of).

If you think that there is no utility in the fact that this model
generates a curve like that and can be computed in under a second,
then I don't really know what to tell you.
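
For the record, the entire computation is about this much code (a
NumPy sketch; the grid, s, and the atan weighting are my own
placeholder choices):

import numpy as np
from math import atan, gcd, log2

GRID = 1.0                                   # cents per sample
axis = np.arange(0.0, 1200.0 + GRID, GRID)

def dc_curve(N=10000, s=17.0, height=lambda n, d: 1.0 / (n * d)):
    # an impulse at every rational in [1/1, 2/1] with n*d < N, weighted
    # by a complexity-based height, then convolved with a Gaussian of
    # standard deviation s cents
    spikes = np.zeros_like(axis)
    for n in range(1, int((2 * N) ** 0.5) + 1):
        for d in range((n + 1) // 2, n + 1):
            if n * d < N and gcd(n, d) == 1:
                i = int(round(1200.0 * log2(n / d) / GRID))
                spikes[i] += height(n, d)
    t = np.arange(-6.0 * s, 6.0 * s + GRID, GRID)
    kernel = np.exp(-t * t / (2.0 * s * s))
    return np.convolve(spikes, kernel / kernel.sum(), mode="same")

curve = dc_curve()                                    # 1/(n*d) heights
smoother = dc_curve(height=lambda n, d: atan(4.0 / (n * d)))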

> >So now I ask you, since you're claiming that the only way my model is
> >validated is if it matches up to all HE curves,
>
> I didn't say that. I said you could alternatively demonstrate
> why your model makes sense -- why you're not free to choose any
> convolution integral you want, or how such a choice doesn't change
> the result. You replied by saying you think you need to do three
> convolutions and cobble them together, or something.

You can choose any convolution integral that you want. You can also
choose any series that you want in HE, and you'll get out what you put
in.

In DC, you choose a basis function that consists of impulses weighted
sensibly according to complexity, so that when a Gaussian is convolved
with it, the model produces sensible results. This is analogous to
what you guys did when you switched from the Farey to the Tenney
series - made it produce more sensible results. And when I figure out
how to run Paul's code, I'll post the Stern-Brocot tree example so you
can see for yourself how dumb it looks.

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

1/30/2011 3:11:33 AM

On Sun, Jan 30, 2011 at 6:09 AM, Mike Battaglia <battaglia01@gmail.com> wrote:
>
> This is not going to give a 1-1 correspondence to HE, so I'm not going
> to bother to give you plots (the first one is the one I gave you plots
> of).

Sorry, I meant tabular data. The first one is the one I already gave
you tabular data of.

-Mike

🔗Carl Lumma <carl@lumma.org>

1/30/2011 12:30:35 PM

>> >Here's one where N goes from 80 to 405:
>> >/tuning-math/files/PaulErlich/manuel.jpg
>>
>> Labeled "discordance".
>
>Yes...?

So, what is it?

>You said that the minima of harmonic entropy are proportional to
>Tenney height, but now you're saying you knew about the slope?

Yep, I've written several times recently about the slope on several
of the lists.

>> >You then said that the choice of series in particular doesn't matter,
>> >and that HE has only one free parameter - s. Since the entire point of
>> >my work, and the main result of last week's discussion that you
>> >missed, is that the choice of series is all-important and dictates the
>> >end result,
>>
>> I'm waiting for evidence of this.
>
>I'm shoving it in your face. Go look at the Tenney series results and
>then go look at the Farey series results. The entire curve changes.
>And if you'd be patient, you'll have Stern-Brocot tree results soon as
>well.

The entropies of the minima are proportional to log(n*d) regardless
of the series used. That is Paul's result. You have a similar
result for DC?

>> >and that the end result can be efficiently approximated by
>> >picking reasonable heights and convolving with a Gaussian,
>>
>> Evidence of this?
>
>Here's a plot with heights that I think are reasonable; 1/(n*d) heights:
>
>/tuning-math/files/MikeBattaglia/ndheights.png

Looks vaguely reasonable. Hard to tell much from it.

>Here's it with atan(4/nd) heights, which smooths out the curve a bit:

Why bother? Just get a pencil and connect the dots.

>If you think that there is no utility in the fact that this model
>generates a curve like that and can be computed in under a second,
>then I don't really know what to tell you.

I've said it's a good idea.

>In DC, you choose a basis function that consists of impulses weighted
>sensibly according to complexity, so that when a Gaussian is convolved
>with it, the model produces sensible results. This is analogous to
>what you guys did when you switched from the Farey to the Tenney
>series - made it produce more sensible results. And when I figure out
>how to run Paul's code, I'll post the Stern-Brocot tree example so you
>can see for yourself how dumb it looks.

Why would you use the Stern-Brocot tree??

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/30/2011 12:56:50 PM

On Sun, Jan 30, 2011 at 3:30 PM, Carl Lumma <carl@lumma.org> wrote:
>
> >> >Here's one where N goes from 80 to 405:
> >> >/tuning-math/files/PaulErlich/manuel.jpg
> >>
> >> Labeled "discordance".
> >
> >Yes...?
>
> So, what is it?

Isn't it just entropy? He labels entropy as "discordance" on a lot of
these plots, and the numbers on the side are generally in the same
range as the nats this model generates as a measure of entropy.

> >You said that the minima of harmonic entropy are proportional to
> >Tenney height, but now you're saying you knew about the slope?
>
> Yep, I've written several times recently about the slope on several
> of the lists.

OK...? So then you get my point?

> The entropies of the minima are proportional to log(n*d) regardless
> of the series used.

This is a false statement. Please stop making it. I responded to this
statement in my last message, and you cut the statement out, didn't
respond to it, and just said the same thing again. It's getting absurd
now.

15/1 has a lower entropy than 3/2 if a Farey series is used.

> You have a similar result for DC?

That's the whole point of DC. Rather than picking a series to
indirectly engineer the curve to have a certain slope and/or shape,
and picking a value of N high enough to give the curve linear behavior
and avoid clipping, you just give it perfectly linear behavior from
the outset with the convolution and pick the minima that you want.

> >Here's it with atan(4/nd) heights, which smooths out the curve a bit:
>
> Why bother? Just get a pencil and connect the dots.

Troll.

> I've said it's a good idea.

You've said it in nothing but the most unflattering, derogatory terms
you can think of, which, although you might find amusing, is not as
amusing from my vantage point after spending countless hours on this.

You left for a week, and in that week we had a huge discussion about
number theory, did a huge investigation into why exactly DC and HE
yield such similar results, investigated the distribution of different
subsets of the rationals, figured out why different series yield
different curves, etc. You came back a week later, missed the entire
thing, arrogantly asserted you "apparently didn't miss much," and now
you're telling me that this is an arbitrary freehand drawing? Jesus
Carl, WTF? That is absolutely ridiculous.

> >In DC, you choose a basis function that consists of impulses weighted
> >sensibly according to complexity, so that when a Gaussian is convolved
> >with it, the model produces sensible results. This is analogous to
> >what you guys did when you switched from the Farey to the Tenney
> >series - made it produce more sensible results. And when I figure out
> >how to run Paul's code, I'll post the Stern-Brocot tree example so you
> >can see for yourself how dumb it looks.
>
> Why would you use the Stern-Brocot tree??

Why wouldn't you? It generates simpler rationals first and more
complex ones after, has the unimodular property, and generally has all
of the properties that people like about the Farey and Tenney series.
The problem is that there are so few rationals around the simple
ratios that the curve ends up looking ridiculous. It looks like a
"distorted" or "clipped" version of the Tenney series version.

I initially thought to use it because it's a lot quicker to compute
than the Tenney series version, but it messed the curve up so badly
that it was unusable. You had 1/1 with a ridiculously, unrealistically
large field of attraction, etc.

-Mike

🔗Carl Lumma <carl@lumma.org>

1/30/2011 1:40:49 PM

>> >> Labeled "discordance".
>> >
>> >Yes...?
>>
>> So, what is it?
>
>Isn't it just entropy? He labels entropy as "discordance" on a lot of
>these plots, and the numbers on the side are generally in the same
>range as the nats this model generates as a measure of entropy.

I don't know. He did a lot of weird experiments over the years.

>> Yep, I've written several times recently about the slope on several
>> of the lists.
>
>OK...? So then you get my point?

No...

>> The entropies of the minima are proportional to log(n*d) regardless
>> of the series used.
>
>This is a false statement. Please stop making it. I responded to this
>statement in my last message, and you cut the statement out, didn't
>respond to it, and just said the same thing again. It's getting absurd
>now.

Paul made this claim. I haven't seen anything from you that
debunks it.

>15/1 has a lower entropy than 3/2 if a Farey series is used.

Troll.

>You left for a week, and in that week we had a huge discussion about
>number theory, did a huge investigation into why exactly DC and HE
>yield such similar results, investigated the distribution of different
>subsets of the rationals, figured out why different series yield
>different curves, etc.

I wasn't gone, I just thought it was bullshit. If you've reached
any conclusions in this business I doubt anyone here can name one.

>> Why would you use the Stern-Brocot tree??
>
>Why wouldn't you? It generates simpler rationals first and more
>complex ones after, has the unimodular property, and generally has all
>of the properties that people like about the Farey and Tenney series.

Here is the relation of the subtree between 1/1 and 2/1 to the
Tenney series:

> (df-dyads 2 (df-limit 2 10))
(3/2 2)
> (df-dyads 2 (df-limit 2 12))
(4/3 3/2 2)
> (df-dyads 2 (df-limit 2 15))
(4/3 5/3 3/2 2)
> (df-dyads 2 (df-limit 2 20))
(5/4 4/3 5/3 3/2 2)
> (df-dyads 2 (df-limit 2 28))
(5/4 7/4 4/3 5/3 3/2 2)
> (df-dyads 2 (df-limit 2 30))
(6/5 5/4 7/4 4/3 5/3 3/2 2)
> (df-dyads 2 (df-limit 2 35))
(6/5 7/5 5/4 7/4 4/3 5/3 3/2 2)
> (df-dyads 2 (df-limit 2 40))
(6/5 7/5 8/5 5/4 7/4 4/3 5/3 3/2 2)
> (df-dyads 2 (df-limit 2 42))
(7/6 6/5 7/5 8/5 5/4 7/4 4/3 5/3 3/2 2)
> (df-dyads 2 (df-limit 2 45))
(7/6 6/5 7/5 8/5 9/5 5/4 7/4 4/3 5/3 3/2 2)
> (df-dyads 2 (df-limit 2 56))
(8/7 7/6 6/5 7/5 8/5 9/5 5/4 7/4 4/3 5/3 3/2 2)
> (df-dyads 2 (df-limit 2 63))
(8/7 9/7 7/6 6/5 7/5 8/5 9/5 5/4 7/4 4/3 5/3 3/2 2)
> (df-dyads 2 (df-limit 2 66))
(8/7 9/7 7/6 11/6 6/5 7/5 8/5 9/5 5/4 7/4 4/3 5/3 3/2 2)

1/1 2/1
3/2
4/3 3/2
4/3 3/2 5/3
5/4 4/3 3/2 5/3
5/4 4/3 3/2 5/3 7/4
6/5 5/4 4/3 3/2 5/3 7/4
6/5 5/4 4/3 7/5 3/2 5/3 7/4
6/5 5/4 4/3 7/5 3/2 8/5 5/3 7/4
7/6 6/5 5/4 4/3 7/5 3/2 8/5 5/3 7/4
7/6 6/5 5/4 4/3 7/5 3/2 8/5 5/3 7/4 9/5
8/7 7/6 6/5 5/4 4/3 7/5 3/2 8/5 5/3 7/4 9/5
8/7 7/6 6/5 5/4 9/7 4/3 7/5 3/2 8/5 5/3 7/4 9/5
8/7 7/6 6/5 5/4 9/7 4/3 7/5 3/2 8/5 5/3 7/4 9/5 11/6

So it's almost close enough to consider using, but probably
not quite. I dunno, I haven't tried it.
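
If you want to reproduce the tree side in something common, here's a
Python sketch (df-dyads and df-limit above are from my own Scheme
utilities; this rebuilds the subtree and sorts by Tenney height):

from fractions import Fraction

def sb_subtree(lo, hi, depth):
    # breadth-first mediants between lo and hi: the Stern-Brocot subtree
    nodes, frontier = [], [(lo, hi)]
    for _ in range(depth):
        nxt = []
        for a, b in frontier:
            m = Fraction(a.numerator + b.numerator,
                         a.denominator + b.denominator)
            nodes.append(m)
            nxt += [(a, m), (m, b)]
        frontier = nxt
    return nodes

tree = sb_subtree(Fraction(1), Fraction(2), 6)
by_tenney = sorted((m for m in tree if m.numerator * m.denominator <= 66),
                   key=lambda m: m.numerator * m.denominator)
print(" ".join(map(str, by_tenney)))
# 3/2 4/3 5/3 5/4 7/4 6/5 7/5 8/5 7/6 9/5 8/7 9/7 11/6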

>The problem is that there are so few rationals around the simple
>ratios that the curve ends up looking ridiculous. It looks like a
>"distorted" or "clipped" version of the Tenney series version.

It sounds like you did something wrong - I don't expect that big
of a difference.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/30/2011 2:08:02 PM

On Sun, Jan 30, 2011 at 4:40 PM, Carl Lumma <carl@lumma.org> wrote:
>
> >> The entropies of the minima are proportional to log(n*d) regardless
> >> of the series used.
> >
> >This is a false statement. Please stop making it. I responded to this
> >statement in my last message, and you cut the statement out, didn't
> >respond to it, and just said the same thing again. It's getting absurd
> >now.
>
> Paul made this claim. I haven't seen anything from you that
> debunks it.

That this is true stems from the fact that the entropy equation is being
used with a series that generates simple rationals first and tends to
cluster rationals together in packs of complex ratios. If you come up with
a series that doesn't behave like that, you aren't going to get the entropy
curve that you all know and love, because the simple ratios won't have the
largest widths and eat up most of the Gaussian.

The slope occurs because the reciprocal of the Farey series that Paul is
using will always have the highest rationals distributed the most
sparsely.

> >You left for a week, and in that week we had a huge discussion about
> >number theory, did a huge investigation into why exactly DC and HE
> >yield such similar results, investigated the distribution of different
> >subsets of the rationals, figured out why different series yield
> >different curves, etc.
>
> I wasn't gone, I just thought it was bullshit.

The main conclusion was that your claim that the rationals tend to cluster
away from simple ratios, even in the limit, isn't true. It depends on what
measure of "width" you use. If you use the Farey measure of width, you get a
setup where the rationals end up having larger relative widths in the
infinite limit, if you use the Tenney measure of width, you get a different
setup, and if you use the Stern-Brocot tree, you get a different setup, and
if you use ?(x), you get a different setup. And if you use dx, it's even.

> So it's almost close enough to consider using, but probably
> not quite. I dunno, I haven't tried it.

Why are we comparing it to the Tenney series? I don't understand.

> >The problem is that there are so few rationals around the simple
> >ratios that the curve ends up looking ridiculous. It looks like a
> >"distorted" or "clipped" version of the Tenney series version.
>
> It sounds like you did something wrong - I don't expect that big
> of a difference.

We'll find out once I fix Paul's code.

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

1/30/2011 2:22:49 PM

On Sun, Jan 30, 2011 at 5:08 PM, Mike Battaglia <battaglia01@gmail.com> wrote:
>
> The fact that this is true stems because the entropy equation is being used
> with a series that generates simple rationals first and tends to cluster
> rationals together in packs of complex ratios. If you come up with a series
> that doesn't behave like that, you aren't going to get the entropy curve
> that you all know and love, because the simple ratios won't have the largest
> widths and eat up most of the Gaussian.

I should add that the choice of mediants is also responsible for
a lot of it: it means that the "midpoint" is placed closer to the
more complex ratio. The use of mean-to-mean widths ends up working out
the same anyway for the series that we're used to. Not going to be
the case for this one:

http://mathworld.wolfram.com/Calkin-WilfTree.html

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

1/30/2011 3:03:42 PM

On Sun, Jan 30, 2011 at 5:22 PM, Mike Battaglia <battaglia01@gmail.com>
wrote:
> Not going to be the case for this one:
>
> http://mathworld.wolfram.com/Calkin-WilfTree.html

Sorry, it appears I was misunderstanding the Calkin-Wilf tree - I didn't
realize that you had to concatenate the previous entries each time as well.
But if a suitable pathological example is needed - a series that can be
used to seed HE, that still completely enumerates the rationals, that has
the unimodular property, and that still screws up the resultant entropy of
the curve anyway - I can probably work something out :)

-Mike

🔗Carl Lumma <carl@lumma.org>

1/31/2011 4:20:43 PM

>
>> >> The entropies of the minima are proportional to log(n*d) regardless
>> >> of the series used.
>> >
>> >This is a false statement. Please stop making it. I responded to this
>> >statement in my last message, and you cut the statement out, didn't
>> >respond to it, and just said the same thing again. It's getting absurd
>> >now.
>>
>> Paul made this claim. I haven't seen anything from you that
>> debunks it.
>
>That this is true stems from the fact that the entropy equation is being used with a series that generates simple rationals first and tends to cluster rationals together in packs of complex ratios. If you come up with a series that doesn't behave like that, you aren't going to get the entropy curve that you all know and love,

By "series", you honestly thought I meant any possible number series?

>The main conclusion was that your claim that the rationals tend to cluster away from simple ratios, even in the limit, isn't true. It depends on what measure of "width" you use.

Width? There's no width, there's distance, log(a)-log(b). At the
limit the rationals are dense, so that's the opposite of what I said
during the time you claim I wasn't here.

>Why are we comparing it to the Tenney series? I don't understand.

Because the Tenney series models how we hear.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/31/2011 4:46:33 PM

On Mon, Jan 31, 2011 at 7:20 PM, Carl Lumma <carl@lumma.org> wrote:
>
> >That this is true stems from the fact that the entropy equation is being used with a series that generates simple rationals first and tends to cluster rationals together in packs of complex ratios. If you come up with a series that doesn't behave like that, you aren't going to get the entropy curve that you all know and love,
>
> By "series", you honestly thought I meant any possible number series?

Yes. You said that HE has one free parameter, s, and that my picking
n*d heights and convolving was just a freehand drawing. I'm saying
that no, it doesn't have one free parameter: it depends on your picking
a series that meets whatever psychoacoustic criteria you're trying to
fulfill.

Paul has made the claim that HE spontaneously predicts the dissonance
and consonance of intervals based on the distribution of the rational
numbers, and I'm saying no, it spontaneously predicts the dissonance
and consonance of intervals based on the distribution of a series that
you choose a priori based on what intervals are consonant and
dissonant. How in the hell does that make it not a freehand drawing?
You guys are indirectly engineering a "freehand drawing" that I am
trying to draw directly, and much more quickly.

The widths translate over to heights, and the use of a Gaussian
spreads everything out Gaussian-wise. Something almost
indistinguishable happens in DC, and I have yet to fully resolve whether
or not it approaches a real convolution in the limit (talking to Gene
off-list about this now). If not, it's some kind of nonlinear
quasi-convolution, which only makes a difference at very small
intervals. In any case the two obviously don't seem to yield very
different results, and to answer your question, yes, the maxima and
minima end up being the same in DC as they are in HE.

So why, in DC, can't you convolve with any kernel you want? Well, you
can, and you can also choose whatever series you want in HE, but in
both cases it makes sense to pick one that matches some kind of
psychoacoustically reasonable criterion. So please stop pretending
otherwise in HE.

> >The main conclusion was that your claim that the rationals tend to cluster away from simple ratios, even in the limit, isn't true. It depends on what measure of "width" you use.
>
> Width? There's no width, there's distance, log(a)-log(b).

This is either a word game or I don't understand what you mean here.

> At the limit the rationals are dense, so that's the opposite of what I said
> during the time you claim I wasn't here.

They are dense, but you can still claim that there are differing
relative distances between "pairs of adjacent rationals" based on
whether you're using dx as a measure of width, or something else. If
you look at the Farey numbers and observe their distribution in the
limit as N goes to infinity, it does not approach a perfectly even
distribution.
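
Here's a quick numerical check (Python; this uses the classical Farey
fractions on [0, 1], but the same point carries over):

from fractions import Fraction

def farey(N):
    return sorted({Fraction(n, d) for d in range(1, N + 1)
                   for n in range(d + 1)})

# compare the gap just above 1/2 with the mean gap: if the Farey
# numbers were tending toward uniformity under dx, this ratio would
# approach 1; instead it keeps growing with N
for N in (50, 100, 200, 400):
    f = farey(N)
    gaps = [float(b - a) for a, b in zip(f, f[1:])]
    i = f.index(Fraction(1, 2))
    print(N, round(gaps[i] / (sum(gaps) / len(gaps)), 1))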

-Mike

🔗Carl Lumma <carl@lumma.org>

1/31/2011 4:55:03 PM

>Paul has made the claim that HE spontaneously predicts the dissonance
>and consonance of intervals based on the distribution of the rational
>numbers, and I'm saying no, it spontaneously predicts the dissonance
>and consonance of intervals based on the distribution of a series that
>you choose a priori based on what intervals are consonant and
>dissonant.

For any list of rationals bounded by some reasonable measure of
their complexity, the distance between them when ordered by size
will be a function of that same measure of complexity.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/31/2011 5:01:13 PM

On Mon, Jan 31, 2011 at 7:55 PM, Carl Lumma <carl@lumma.org> wrote:
>
> For any list of rationals bounded by some reasonable measure of
> their complexity

This is not a negligible restriction to place on the model.

-Mike

🔗Carl Lumma <carl@lumma.org>

1/31/2011 5:37:48 PM

>I should add that the choice of mediants is also responsible for
>a lot of it: it means that the "midpoint" is placed closer to the
>more complex ratio.

The fact that mediants generally give you the simplest rational between
two others should tell you something.

Mediants also aren't needed. You can just take the distance all the
way to neighbors on either side. For the Farey series, that'll be
the same as using the previous-order series with mediants, but with
Tenney or Mann it won't.
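
For example (Python; for unimodular neighbors the mediant is the
unique simplest rational between them):

from fractions import Fraction

def mediant(a, b):
    return Fraction(a.numerator + b.numerator,
                    a.denominator + b.denominator)

print(mediant(Fraction(4, 3), Fraction(3, 2)))
# 7/5 -- the simplest rational strictly between 4/3 and 3/2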

-Carl

🔗Carl Lumma <carl@lumma.org>

1/31/2011 5:38:31 PM

>> For any list of rationals bounded by some reasonable measure of
>> their complexity
>
>This is not a negligible restriction to place on the model.

It's the whole point of the model -- hardly a free parameter.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/31/2011 5:40:04 PM

On Mon, Jan 31, 2011 at 8:38 PM, Carl Lumma <carl@lumma.org> wrote:
>
> >> For any list of rationals bounded by some reasonable measure of
> >> their complexity
> >
> >This is not a negligible restriction to place on the model.
>
> It's the whole point of the model -- hardly a free parameter.

Well then there's your answer about why you can't use any convolution
kernel you want in DC.

-Mike

🔗Carl Lumma <carl@lumma.org>

1/31/2011 5:45:33 PM

>> >> For any list of rationals bounded by some reasonable measure of
>> >> their complexity
>> >
>> >This is not a negligible restriction to place on the model.
>>
>> It's the whole point of the model -- hardly a free parameter.
>
>Well then there's your answer about why you can't use any convolution
>kernel you want in DC.

That was my point - you seemed to be using any kernel you wanted.
Assuming you agree to desist on that front, I'm still worried about
the convolution integral.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/31/2011 5:47:58 PM

On Mon, Jan 31, 2011 at 8:45 PM, Carl Lumma <carl@lumma.org> wrote:
>
> >Well then there's your answer about why you can't use any convolution
> >kernel you want in DC.
>
> That was my point - you seemed to be using any kernel you wanted.

I always picked kernels that order intervals by complexity. Even the
atan(4/(n*d)) one was ordered like that, but your response was "why
not just connect the dots?" Something fishy's going on here.

> Assuming you agree to desist on that front, I'm still worried about
> the convolution integral.

Worried about it how?

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

1/31/2011 7:07:02 PM

On Mon, Jan 31, 2011 at 7:55 PM, Carl Lumma <carl@lumma.org> wrote:
>
> For any list of rationals bounded by some reasonable measure of
> their complexity, the distance between them when ordered by size
> will be a function of that same measure of complexity.

In fact, let's go back to this. I don't think that this is true. What
happens if you use Tenney-Euclidean complexity, rather than just
Tenney height? You'll get 15/8 appearing before 9/8, which I doubt is
what you want. 1/n + 1/d, which is the reciprocal of (n*d)/(n+d), also
yields some pretty gnarly results.
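
A quick check (Python; I'm using the plain prime-weighted Euclidean
norm of the monzo, which is only one possible normalization of TE
complexity):

from math import log2, sqrt

PRIMES = (2, 3, 5)

def monzo(n, d):
    # prime exponent vector of n/d over PRIMES
    m = []
    for p in PRIMES:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        while d % p == 0:
            d //= p
            e -= 1
        m.append(e)
    assert n == d == 1, "ratio outside the 5-limit"
    return m

def te_complexity(n, d):
    return sqrt(sum((e / log2(p)) ** 2
                    for e, p in zip(monzo(n, d), PRIMES)))

print(te_complexity(15, 8), te_complexity(9, 8))
# ~3.10 vs ~3.25 -- 15/8 really does come out simpler than 9/8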

-Mike

🔗Carl Lumma <carl@lumma.org>

1/31/2011 7:35:04 PM

At 07:07 PM 1/31/2011, you wrote:
>On Mon, Jan 31, 2011 at 7:55 PM, Carl Lumma <carl@lumma.org> wrote:
>>
>> For any list of rationals bounded by some reasonable measure of
>> their complexity, the distance between them when ordered by size
>> will be a function of that same measure of complexity.
>
>In fact, let's go back to this. I don't think that this is true. What
>happens if you use Tenney-Euclidean complexity, rather than just
>Tenney height? You'll get 15/8 appearing before 9/8, which I doubt is
>what you want. 1/n + 1/d, which is the reciprocal of (n*d)/(n+d), also
>yields some pretty gnarly results.

I didn't say any reasonable complexity gives good agreement with
psychoacoustics, I said it gives lists of rationals that are
distributed unevenly. In this sense, the rationals do have the
property I claimed. -Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/31/2011 7:42:04 PM

On Mon, Jan 31, 2011 at 10:35 PM, Carl Lumma <carl@lumma.org> wrote:
>
> I didn't say any reasonable complexity gives good agreement with
> psychoacoustics, I said it gives lists of rationals that are
> distributed unevenly. In this sense, the rationals do have the
> property I claimed. -Carl

1/n + 1/d also completely breaks the model. So if your goal is to
tweak the model to give good results with psychoacoustics, then there
are two parameters you're tweaking: s, and a method of enumerating the
rationals by complexity. Not every method will work. These are two
parameters.

-Mike

🔗Carl Lumma <carl@lumma.org>

1/31/2011 7:45:54 PM

At 07:42 PM 1/31/2011, you wrote:
>On Mon, Jan 31, 2011 at 10:35 PM, Carl Lumma <carl@lumma.org> wrote:
>>
>> I didn't say any reasonable complexity gives good agreement with
>> psychoacoustics, I said it gives lists of rationals that are
>> distributed unevenly. In this sense, the rationals do have the
>> property I claimed. -Carl
>
>1/n + 1/d also completely breaks the model.

I should say. n/1 + d/1 doesn't though.

>So if your goal is to
>tweak the model to give good results with psychoacoustics, then there
>are two parameters you're tweaking: s, and a method of enumerating the
>rationals by complexity. Not every method will work.

Every method should work reasonably well in an entropy-based model.
I dunno about DC.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

1/31/2011 7:51:10 PM

On Mon, Jan 31, 2011 at 10:45 PM, Carl Lumma <carl@lumma.org> wrote:
>
> >1/n + 1/d also completely breaks the model.
>
> I should say. n/1 + d/1 doesn't though.

Oops. (n*d)/(n+d) might though.

> >So if your goal is to
> >tweak the model to give good results with psychoacoustics, then there
> >are two parameters you're tweaking: s, and a method of enumerating the
> >rationals by complexity. Not every method will work.
>
> Every method should work reasonably well in an entropy-based model.
> I dunno about DC.

We're talking about strict HE now. So if I can come up with a series
that bounds things by some measure of complexity other than n*d, n+d,
and d that doesn't work reasonably well, that means I can claim it's
two parameters, right?

-Mike

🔗Carl Lumma <carl@lumma.org>

1/31/2011 7:58:08 PM

>We're talking about strict HE now. So if I can come up with a series
>that bounds things by some measure of complexity other than n*d, n+d,
>and d that doesn't work reasonably well, that means I can claim it's
>two parameters, right?

That depends on how many posts you force me to reply to between
now and then. :P

-Carl