re: Dan, Dave, experience, entropy

🔗Carl Lumma <ekin@...>

10/20/2005 11:00:57 AM

Just saw a post from 2001 (when I wasn't on the lists) about
11:9 having lower entropy than 8:5 with very large s.

The thread apparently started here...
/tuning/topicId_18643.html#18643
...with s=1.5.

Does anyone else think this is a highly anomalous result?

More recent data from Paul, with s=1.0, gives...

347 4.6234905
814 4.5761788

...which seems much better.

Comments?

-Carl

🔗wallyesterpaulrus <wallyesterpaulrus@...>

10/20/2005 5:46:53 PM

--- In harmonic_entropy@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>
> Just saw a post from 2001 (when I wasn't on the lists) about
> 11:9 having lower entropy than 8:5 with very large s.
>
> The thread apparently started here...
> /tuning/topicId_18643.html#18643
> ...with s=1.5.
>
> Does anyone else think this is a highly anomalous result?

Not I.

> More recent data from Paul, with s=1.0, gives...
>
> 347 4.6234905
> 814 4.5761788
>
> ...which seems much better.
>
> Comments?
>
> -Carl

Have you looked at all four graphs, not just stearns.jpg referred to
in the message above, but also stearns2.jpg, stearns3.jpg, and
stearns4.jpg?

🔗Carl Lumma <ekin@...>

10/20/2005 9:09:44 PM

>> Just saw a post from 2001 (when I wasn't on the lists) about
>> 11:9 having lower entropy than 8:5 with very large s.
>>
>> The thread apparently started here...
>> /tuning/topicId_18643.html#18643
>> ...with s=1.5.
>>
>> Does anyone else think this is a highly anomalous result?
>
>Not I.
>
>> More recent data from Paul, with s=1.0, gives...
>>
>> 347 4.6234905
>> 814 4.5761788
>>
>> ...which seems much better.
>>
>> Comments?
>>
>> -Carl
>
>Have you looked at all four graphs, not just stearns.jpg referred to
>in the message above, but also stearns2.jpg, stearns3.jpg, and
>stearns4.jpg?

Where do I get them?

-Carl

🔗yahya_melb <yahya@...>

10/21/2005 7:13:27 AM

--- In harmonic_entropy@yahoogroups.com, Carl Lumma <ekin@l...>
wrote:
>
> >> Just saw a post from 2001 (when I wasn't on the lists) about
> >> 11:9 having lower entropy than 8:5 with very large s.
> >>
> >> The thread apparently started here...
> >> /tuning/topicId_18643.html#18643
> >> ...with s=1.5.
> >>
> >> Does anyone else think this is a highly anomalous result?
> >
> >Not I.
> >
> >> More recent data from Paul, with s=1.0, gives...
> >>
> >> 347 4.6234905
> >> 814 4.5761788
> >>
> >> ...which seems much better.
> >>
> >> Comments?
> >>
> >> -Carl
> >
> >Have you looked at all four graphs, not just stearns.jpg referred
> >to in the message above, but also stearns2.jpg, stearns3.jpg, and
> >stearns4.jpg?
>
> Where do I get them?
>
> -Carl

The reference for stearns.jpg is to a file on the files list;
however, that list says it can't find it. So please do tell us
where they are.

I'd also be interested to know just how sensitive this kind
of "anomalous" result is to the factor s (= 1.2% in the graph in the
group description). What is the experimentally observed range of s?

Regards,
Yahya

🔗wallyesterpaulrus <wallyesterpaulrus@...>

10/26/2005 2:28:45 PM

--- In harmonic_entropy@yahoogroups.com, "yahya_melb" <yahya@m...>
wrote:
>
> --- In harmonic_entropy@yahoogroups.com, Carl Lumma <ekin@l...>
> wrote:
> >
> > >> Just saw a post from 2001 (when I wasn't on the lists) about
> > >> 11:9 having lower entropy than 8:5 with very large s.
> > >>
> > >> The thread apparently started here...
> > >> /tuning/topicId_18643.html#18643
> > >> ...with s=1.5.
> > >>
> > >> Does anyone else think this is a highly anomalous result?
> > >
> > >Not I.
> > >
> > >> More recent data from Paul, with s=1.0, gives...
> > >>
> > >> 347 4.6234905
> > >> 814 4.5761788
> > >>
> > >> ...which seems much better.
> > >>
> > >> Comments?
> > >>
> > >> -Carl
> > >
> > >Have you looked at all four graphs, not just stearns.jpg referred
> > >to in the message above, but also stearns2.jpg, stearns3.jpg, and
> > >stearns4.jpg?
> >
> > Where do I get them?
> >
> > -Carl
>
> The reference for stearns.jpg is to a file on the files list;
> however, that list says it can't find it. So please do tell us
> where they are.
>
> I'd also be interested to know just how sensitive this kind
> of "anomalous" result is to the factor s (= 1.2% in the graph in the
> group description).

Well that's part of what the graphs tell you:

/harmonic_entropy/files/perlich/stearns.jpg
/harmonic_entropy/files/perlich/stearns2.jpg
/harmonic_entropy/files/perlich/stearns3.jpg
/harmonic_entropy/files/perlich/stearns4.jpg

Two different values of s are used; for s significantly less than the
smaller value or significantly more than the larger value, this
particular "anomaly" concerning 11:9 disappears.

>What is the experimentally observed range of s?

In one Goldsmith experiment, using small numbers of sine waves, s was
found to be as low as 0.6% for the best listener in the best
frequency range (~3 kHz), and as high as 3% or more for most
listeners once one is well outside that range.

🔗Magnus Jonsson <magnus@...>

10/26/2005 3:16:56 PM

On Wed, 26 Oct 2005, wallyesterpaulrus wrote:

> Well that's part of what the graphs tell you:
>
> /harmonic_entropy/files/perlich/stearns.jpg
> /harmonic_entropy/files/perlich/stearns2.jpg
> /harmonic_entropy/files/perlich/stearns3.jpg
> /harmonic_entropy/files/perlich/stearns4.jpg

I will go a bit off-topic here, thinking about n*d. This will probably be a newbie question:

As far as I can see, n*d measures how sensitive an interval is to mistuning. It is also a measure of the period of the temporal or spectral repetition.

Another possible measurement which I propose would be "fraction of harmonics that overlap". For properly factored dyadic intervals that would be 1/(n+d-1). It can be inverted to get a measurement of discordance or spectral messiness: n+d-1.

I am wondering if there is a reason to prefer using n*d to n+d-1. Could they be combined to get an overall better approximation?
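As a side note, the two candidate measures can be tabulated with a tiny script (my own sketch, not from the thread; the dyads listed are just illustrative):

```python
from math import gcd

def measures(n, d):
    # Reduce to lowest terms first, since both formulas assume a
    # "properly factored" dyad n:d.
    g = gcd(n, d)
    n, d = n // g, d // g
    return n * d, n + d - 1  # (mistuning sensitivity, "spectral messiness")

for n, d in [(3, 2), (5, 4), (8, 5), (11, 9)]:
    nd, overlap_inv = measures(n, d)
    print(f"{n}:{d}  n*d = {nd}  n+d-1 = {overlap_inv}")
```

Both measures happen to give the same ordering for these four dyads, though they can disagree elsewhere (e.g. 9:1 vs 4:3).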

- Magnus

🔗wallyesterpaulrus <wallyesterpaulrus@...>

10/26/2005 6:21:13 PM

--- In harmonic_entropy@yahoogroups.com, Magnus Jonsson <magnus@s...>
wrote:
>
> On Wed, 26 Oct 2005, wallyesterpaulrus wrote:
>
> > Well that's part of what the graphs tell you:
> >
> >
/harmonic_entropy/files/perlich/stearns.jpg
> >
/harmonic_entropy/files/perlich/stearns2.jpg
> >
/harmonic_entropy/files/perlich/stearns3.jpg
> >
/harmonic_entropy/files/perlich/stearns4.jpg
>
> I will go a bit offtopic here, thinking about n*d. It will probably
> be a newbie question:
>
> As far as I can see, n*d measures how sensitive an interval is to
> mistuning.

Unless n*d is too large, or undefined (as it is for irrational
intervals).

> It is also a measurement of the period of the temporal or
> spectral repetition.
>
> Another possible measurement which I propose would be "fraction
> of harmonics that overlap". For properly factored dyadic intervals
> that would be 1/(n+d-1). It can be inverted to get a measurement of
> discordance or spectral messiness: n+d-1.

If every nth harmonic of note a overlaps with every dth harmonic of
note b, it seems that, overall, ((1/d)+(1/n))/2 or (n+d)/(n*d)/2 of
the harmonics overlap. Why don't we agree?
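For 5:4 this average can be worked out exactly (a quick check of my own, using exact rational arithmetic):

```python
from fractions import Fraction

n, d = 5, 4
# Average of the two notes' individual overlap fractions:
# 1/n of note a's harmonics coincide, 1/d of note b's.
frac = (Fraction(1, d) + Fraction(1, n)) / 2
assert frac == Fraction(n + d, n * d) / 2
print(frac)  # 9/40
```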

> I am wondering if there is a reason to prefer using n*d to n+d-1.

When used for the seeding, the former gives a harmonic entropy curve
which has an overall flat character, while the latter gives a
harmonic entropy curve with a decreasing trend as one moves to larger
and larger intervals.

>Could
> they be combined to get an overall better approximation?

For these graphs, n*d was used, in a sense, for both quantities --
it's used to seed the harmonic entropy calculation, and to compute a
naive discordance value for various simple just intervals. When you
use n+d or n+d-1 to seed the harmonic entropy calculation, you'd get
a worse agreement whether you use n*d or n+d-1 to compute the naive
discordance value for various simple just intervals. But maybe you
meant "an overall better approximation" in some other sense?

🔗Magnus Jonsson <magnus@...>

10/27/2005 3:29:25 AM

On Thu, 27 Oct 2005, wallyesterpaulrus wrote:

> --- In harmonic_entropy@yahoogroups.com, Magnus Jonsson <magnus@s...>
> wrote:
>>
>> As far as I can see, n*d measures how sensitive an interval is to
>> mistuning.
>
> Unless n*d is too large, or undefined (as it is for irrational
> intervals).

Ah, that is possible.

>> It is also a measurement of the period of the temporal or
>> spectral repetition.
>>
>> Another possible measurement which I propose would be "fraction
>> of harmonics that overlap". For properly factored dyadic intervals
>> that would be 1/(n+d-1). It can be inverted to get a measurement of
>> discordance or spectral messiness: n+d-1.
>
> If every nth harmonic of note a overlaps with every dth harmonic of
> note b, it seems that, overall, ((1/d)+(1/n))/2 or (n+d)/(n*d)/2 of
> the harmonics overlap. Why don't we agree?

Let's take the 5/4 case as an example (I will assume a very bright/sharp timbre in which all harmonics are strong):

5.......: |....|....|....|....|....|....|....|....|..
4.......: |...|...|...|...|...|...|...|...|...|...|..
combined: |...||..|.|.|..||...|...||..|.|.|..||...|..
period..: {..................}{..................}{..

(whew, that's a lot of dots)

Within each period, there is one shared harmonic, and
8 harmonics total (combined). That gives 1/8.
One of the combined harmonics is the shared one, three
come from the 5, and four come from the 4.

Your formula gives here 9/40 ~= 1/4.

In the n/d case: The period will be n*d. n will have d harmonics in each period and d will have n harmonics in each period. One harmonic will overlap in each period, so there will be a total of n+d-1 harmonics in each period. Hence 1/(n+d-1).
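The counting argument above can be verified by brute force; this sketch (mine, assuming the lower note sits at frequency d and the upper at n after reduction, so shared harmonics fall on multiples of n*d) enumerates one common period:

```python
from math import gcd

def period_counts(n, d):
    # Reduce n:d, then list harmonic positions within one period n*d.
    g = gcd(n, d)
    n, d = n // g, d // g
    lower = {k * d for k in range(1, n + 1)}  # n harmonics of the lower note
    upper = {k * n for k in range(1, d + 1)}  # d harmonics of the upper note
    combined = lower | upper                  # distinct positions in one period
    shared = lower & upper                    # coinciding harmonics
    return len(combined), len(shared)

total, shared = period_counts(5, 4)
print(total, shared)  # 8 distinct harmonics, 1 shared -> fraction 1/8 = 1/(n+d-1)
```

For any reduced n:d this yields n+d-1 distinct harmonics with exactly one shared, matching the 1/(n+d-1) fraction.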

>> I am wondering if there is a reason to prefer using n*d to n+d-1.
>
> When used for the seeding, the former gives a harmonic entropy curve
> which has an overall flat character, while the latter gives a
> harmonic entropy curve with a decreasing trend as one moves to larger
> and larger intervals.

I see.

>> Could
>> they be combined to get an overall better approximation?
>
> For these graphs, n*d was used, in a sense, for both quantities --
> it's used to seed the harmonic entropy calculation, and to compute a
> naive discordance value for various simple just intervals. When you
> use n+d or n+d-1 to seed the harmonic entropy calculation, you'd get
> a worse agreement whether you use n*d or n+d-1 to compute the naive
> discordance value for various simple just intervals. But maybe you
> meant "an overall better approximation" in some other sense?

I meant using n*d or n+d to estimate which intervals sound more discordant than others. I assume that the entropy calculation is very accurate and does this almost perfectly. So whichever formula agrees most closely with the entropy curve (in terms of ordering, not straightness of lines) would be better.

One way to combine them would be to use for example n*d+n+d or (n+1)*(d+1)
instead of n*d.
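Incidentally, those two combinations are nearly the same measure, since (n+1)*(d+1) expands to n*d + n + d + 1; a one-line check (mine):

```python
# (n+1)(d+1) = n*d + n + d + 1, so the two combined measures differ
# only by a constant and rank all dyads identically.
for n, d in [(3, 2), (5, 4), (8, 5), (11, 9)]:
    assert (n + 1) * (d + 1) == n * d + n + d + 1
print("identity holds")
```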

- Magnus Jonsson

🔗wallyesterpaulrus <wallyesterpaulrus@...>

10/27/2005 2:22:06 PM

--- In harmonic_entropy@yahoogroups.com, Magnus Jonsson <magnus@s...>
wrote:
>
> On Thu, 27 Oct 2005, wallyesterpaulrus wrote:
>
> > --- In harmonic_entropy@yahoogroups.com, Magnus Jonsson <magnus@s...>
> > wrote:
> >>
> >> As far as I can see, n*d measures how sensitive an interval is to
> >> mistuning.
> >
> > Unless n*d is too large, or undefined (as it is for irrational
> > intervals).
>
> Ah, that is possible.
>
> >> It is also a measurement of the period of the temporal or
> >> spectral repetition is.
> >>
> >> Another possible measurement which I propose would be "fraction
> >> of harmonics that overlap". For properly factored dyadic intervals
> >> that would be 1/(n+d-1). It can be inverted to get a measurement
> >> of discordance or spectral messiness: n+d-1.
> >
> > If every nth harmonic of note a overlaps with every dth harmonic
> > of note b, it seems that, overall, ((1/d)+(1/n))/2 or (n+d)/(n*d)/2
> > of the harmonics overlap. Why don't we agree?
>
> Let's take the 5/4 case as an example (I will assume a very bright/sharp
> timbre in which all harmonics are strong):
>
> 5.......: |....|....|....|....|....|....|....|....|..
> 4.......: |...|...|...|...|...|...|...|...|...|...|..
> combined: |...||..|.|.|..||...|...||..|.|.|..||...|..
> period..: {..................}{..................}{..
>
> (whew, that's a lot of dots)
>
> Within each period, there is one shared harmonic, and
> 8 harmonics total (combined). That gives 1/8.
> One of the combined harmonics is the shared one, three
> come from the 5, and four come from the 4.

I count 9 harmonics in each "period", 2 of which overlap.

> Your formula gives here 9/40 ~= 1/4.

2/9 = 0.2222
9/40 = 0.2250
1/4 = 0.25

> >> I am wondering if there is a reason to prefer using n*d to n+d-1.
> >
> > When used for the seeding, the former gives a harmonic entropy curve
> > which has an overall flat character, while the latter gives a
> > harmonic entropy curve with a decreasing trend as one moves to larger
> > and larger intervals.
>
> I see.
>
> >> Could
> >> they be combined to get an overall better approximation?
> >
> > For these graphs, n*d was used, in a sense, for both quantities --
> > it's used to seed the harmonic entropy calculation, and to compute a
> > naive discordance value for various simple just intervals. When you
> > use n+d or n+d-1 to seed the harmonic entropy calculation, you'd get
> > a worse agreement whether you use n*d or n+d-1 to compute the naive
> > discordance value for various simple just intervals. But maybe you
> > meant "an overall better approximation" in some other sense?
>
> I meant using n*d or n+d to estimate which intervals sound more
> discordant than others. I assume that the entropy calculation is
> very accurate and does this almost perfectly.

But you have to seed it somehow. You assume that a finite probability
gets associated with each ratio in some huge set of ratios. The huge
set of ratios has to be selected somehow. I often use n*d < 10000 or
n*d < 65536 . . .
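A sketch of what such a seeding set might look like (my own reading of the n*d < 10000 criterion; the details of Paul's actual procedure may differ):

```python
from math import gcd

def seed_ratios(limit=10000):
    """All reduced ratios n/d with n >= d >= 1 and n*d < limit."""
    seeds = []
    for d in range(1, int(limit ** 0.5) + 1):  # n >= d forces d*d < limit
        for n in range(d, limit // d + 1):
            if n * d < limit and gcd(n, d) == 1:
                seeds.append((n, d))
    return seeds

seeds = seed_ratios()
print(len(seeds), "seed ratios under the n*d < 10000 limit")
```

Each seed ratio would then receive a finite probability in the entropy calculation.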

> So whichever formula agrees most closely with
> the entropy curve (in terms of ordering, not straightness of lines)
> would be better.

n*d agrees with the entropy curve much better when the entropy curve
is seeded with n*d. I don't know how to seed it to make n+d agree
with the curve, but it doesn't look straightforward . . .

In these early postings, I used n to seed and I also used n+d to
seed . . . I hadn't tried n*d yet . . .

http://sonic-arts.org/td/erlich/entropy-erlich.htm

Since you're a newbie, it might be good to read this . . .

As you can see, using n leads to much less variation in the shape of
the curve as you increment your seed limit on n, than using n+d does
as you increment your seed limit on n+d. Using n seems more "stable"
than using n+d . . . Using n*d keeps the resulting curve "flat" in
its overall trend.

What's striking is that it seems that no matter how you seed the
calculation, the *depth* of the (inverted) humps, from crest to
trough, at the simple-integer ratios goes according to an n*d
ranking, and the set of ratios which show humps at all is the set
under some limit of n*d. So n*d "pops out" even if you didn't put it
in in the first place.