
About harmonic entropy

🔗massimilianolabardi <labardi@...>

3/3/2009 11:47:34 PM

I would like to understand better the concept of harmonic entropy illustrated by Erlich

http://sonic-arts.org/td/erlich/entropy-erlich.htm

Unfortunately I wasn't able to find much further documentation about it. In particular, I would like to learn more about the procedure for calculating such entropy, which is described only qualitatively in the above note.

I have tried myself to translate the algorithm described in the note above into mathematical terms, ending up with the following formula for a "kind of" harmonic entropy, calculated from fractions with numerator and denominator of at most N, as a function of the frequency ratio x:

S(x)=1 - Sum(for j=1 to N) [Sum (for i=j to N) [(1/(i j)) Exp(-(x-(i/j))^2 / (2 sigma^2))]]

where the exponential is a Gaussian (normal) function of standard deviation sigma, and 1/(i j) is a weight function.

For a given frequency ratio x and integer numerator i and denominator j, a value is subtracted from S(x) only when x is within (about) sigma of the ratio i/j. The larger both i and j, the smaller the weight (this is to give more importance to ratios of small numbers than to ratios of large numbers). For instance, when x = 1.52, a value ( Exp[-(1.52-1.50)^2/(2 sigma^2)] ) will be subtracted from S(x) for i=3 and j=2 (so i/j = 1.50), with weight 1/(3 2) = 1/6. When i = 23 and j = 15 (so i/j = 1.533), a smaller value will be subtracted (because i/j is further than 1 sigma from x, and also because the weight 1/(23 15) = 0.0029 is small), and when i = 79 and j = 52 (i/j = 1.519), the normal function will have a higher value ( Exp[-(1.52-1.519)^2/(2 sigma^2)] ) but the weight will be even smaller ( 1/(79 52) = 0.00024 ).

For sigma = 0.01 and N = 20, 50 and 80, I get plots (uploaded in my directory Max here on the Tuning list, named "HE.jpg") that resemble the ones reported by Erlich. However, I have intentionally not used the information-theory definition of entropy, in particular any function of a probability p of the form p log p. Also, I have not used the Farey series, but just summed over all possible ratios, therefore including ratios not contained in the Farey series (for instance, I have used 3/2 as well as 6/4... but the weight of 6/4 is less than that of 3/2, although their probability is the same).
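
For reference, the formula above can be evaluated with a short program along these lines (a minimal Python sketch; the function name, the default N and the sampling grid are illustrative choices, not part of the original post):

```python
import math

def S(x, N=50, sigma=0.01):
    """The "kind of" harmonic entropy from the formula above: 1 minus a
    weighted sum of Gaussians centered on every ratio i/j with 1 <= j <= i <= N."""
    total = 0.0
    for j in range(1, N + 1):
        for i in range(j, N + 1):
            weight = 1.0 / (i * j)   # simpler ratios count more
            total += weight * math.exp(-(x - i / j) ** 2 / (2 * sigma ** 2))
    return 1.0 - total

# Sample the curve over one octave (ratios 1 to 2) in steps of 0.005:
curve = [(round(1 + k * 0.005, 3), S(1 + k * 0.005)) for k in range(201)]
print(curve[100])   # the value near x = 1.5, where the dip at 3/2 appears
```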

Nevertheless, the result I get looks to me rather similar to the one by Erlich.

The formula I used for the calculations is reported on the plot in a clearer form than the one given above.

In order to assess similarities and differences: would it be possible to find a reference, or does any of you know how exactly the plots in Erlich's note are calculated?

Thanks a lot,

Max

🔗Carl Lumma <carl@...>

3/4/2009 1:03:15 AM

--- In tuning@yahoogroups.com, "massimilianolabardi" <labardi@...> wrote:
>
> I would like to understand better the concept of harmonic entropy
> illustrated by Erlich
>
> http://sonic-arts.org/td/erlich/entropy-erlich.htm
>
> Unfortunately I wasn't able to find much further documentation
> about it. Particularly I would like to learn more about the
> procedure to calculate such entropy, that is described only
> qualitatively in the above note.

Have you seen these two posts?
/harmonic_entropy/topicId_347.html#350
/harmonic_entropy/topicId_707.html#708

-Carl

🔗massimilianolabardi <labardi@...>

3/4/2009 1:16:43 AM

Thank you Carl, looks great. Lots of formulae in there.

Surely it will be useful.

Max

--- In tuning@yahoogroups.com, "Carl Lumma" <carl@...> wrote:
>
> Have you seen these two posts?
> /harmonic_entropy/topicId_347.html#350
> /harmonic_entropy/topicId_707.html#708
>
> -Carl
>

🔗rick_ballan <rick_ballan@...>

3/4/2009 7:46:06 AM

--- In tuning@yahoogroups.com, "Carl Lumma" <carl@...> wrote:
>
> --- In tuning@yahoogroups.com, "massimilianolabardi" <labardi@> wrote:
> >
> > I would like to understand better the concept of harmonic entropy
> > illustrated by Erlich
> >
> > http://sonic-arts.org/td/erlich/entropy-erlich.htm
> >
> > Unfortunately I wasn't able to find much further documentation
> > about it. Particularly I would like to learn more about the
> > procedure to calculate such entropy, that is described only
> > qualitatively in the above note.
>
> Have you seen these two posts?
> /harmonic_entropy/topicId_347.html#350
> /harmonic_entropy/topicId_707.html#708
>
> -Carl

Hi Carl,

I too am interested in finding out more about harmonic entropy. I haven't delved into the maths of these postings yet, but just one preliminary question. It's been a vague idea building in the back of my mind for some time now that in reality a type of uncertainty principle applies to each harmonic interval. For example, the brain/ear can still hear a perfect fifth when it is fairly out of tune, but how far out can it go before it becomes, say, a flat fifth? Like many things we take for granted, this seems basic enough, but it is actually very difficult to answer properly (like trying to pin down how far is "almost"). I've also imagined that probability maths will come into it at some point. My question is, does harmonic entropy provide a method for explaining this 'give' around each interval?

Thanks mate

-Rick

🔗Carl Lumma <carl@...>

3/4/2009 8:31:20 AM

--- In tuning@yahoogroups.com, "rick_ballan" <rick_ballan@...> wrote:

> My question is, does harmonic entropy provide a method for
> explaining this 'give' around each interval?
>
> Thanks mate
>
> -Rick

That's exactly right. Try this article:

http://www.soundofindia.com/showarticle.asp?in_article_id=1905806937

-Carl

🔗Michael Sheiman <djtrancendance@...>

3/4/2009 8:03:35 AM

Taken from http://sonic-arts.org/td/erlich/entropy-erlich.htm:

--- But a phenomenon called "virtual pitch" or "fundamental tracking" is central to Parncutt's treatment of dissonance and does represent, I believe, an additional factor besides critical band roughness.

--- There is a very strong propensity for the ear to try to fit what it hears into one or a small number of harmonic series, and the fundamentals of these series, even if not physically present, are either heard outright, or provide a more subtle sense of overall pitch known to musicians as the "root".

   In the past I have only really looked at Plomp's algorithm in the form of dissonance curves (as done by Sethares) and, on the side, known that the harmonic series is categorized in the brain as a stronger tone than, say, a sine wave played on the root tone.
*****************************
   Oddly enough, in fact, my last JI-type scale based on the x/16 harmonic series tuning (where x = about 16 to 32) was built on the idea of simplifying a series of tones to point to one root tone.  And, meanwhile, my 9/8 * 10/9 * 11/10 * 12/11 * 10/9 * 11/10 * 12/11 scale pointed to two different harmonic roots about a ratio of 6/5 away from each other.

    So they both appear to lend themselves toward the concept of virtual pitch... if I have it right.  In fact, I wonder (if virtual pitch is so important) why he doesn't base all non-adaptive scales on only one or two harmonic series when searching for consonance in that form?
********************************
   But, still... I've had as good if not better luck with my PHI scale (as explained at http://www.geocities.com/djtrancendance/PHINOT.html), in which virtually every note beats, but the beating somehow feels uniform/predictable and not random.
    So it seems to hint that some degree of beating can actually aid consonance and result in a different kind of consonance.  Same goes with, say, violins (whose strings cause beating all over the place) vs. organs (whose overtones barely beat)...violins "mysteriously" sound just as good if not better.
*****************************************************
    I believe, Max, you had said something before about how either all tones, or virtually none across the spectrum, must beat in order to result in true consonance.
   In that case I wonder if perhaps my PHI scale finds the solution for the scenario where all notes are beating, while stacking harmonic series (a la JI) approaches the solution where no notes are beating (although that solution, ultimately, appears to lie in adaptive JI and not a fixed scale).

-Michael

🔗Chris Vaisvil <chrisvaisvil@...>

3/5/2009 1:45:43 PM

Good subject.

I find it interesting and counter-intuitive that 3:1 and 4:1 would be ranked substantially less consonant than 2:1.

(I found a graph going up to 4:1 by following a link somewhere in this thread, on a page with Indian music in the title.)

A question if anyone knows the answer

is a triad perceived as 3 notes sounding at once

or 3 dyads? C-E, C-G, E-G ?

Chris

On Wed, Mar 4, 2009 at 2:47 AM, massimilianolabardi <labardi@...> wrote:

> I would like to understand better the concept of harmonic entropy
> illustrated by Erlich
>
> http://sonic-arts.org/td/erlich/entropy-erlich.htm
>
> Unfortunately I wasn't able to find much further documentation about it.
> Particularly I would like to learn more about the procedure to calculate
> such entropy, that is described only qualitatively in the above note.
>
> I have tried myself to translate in mathematical terms the algorithm
> described in the note above, ending out in the following formula for a "kind
> of" harmonic entropy calculated from fractions of maximum numerator and
> denominator of N, as a function of frequency ratio x:
>
> S(x)=1 - Sum(for j=1 to N) [Sum (for i=j to N) [(1/(i j)) Exp(-(x-(i/j))^2
> / (2 sigma^2))]]
>
> where the Exponential is a normal distribution of standard deviation sigma,
> and 1/(i j) is a weight function.
>
> For a given frequency ratio x and integer numerator i and denominator j, some
> value is subtracted to S(x) only when x is as close as (about) sigma to the
> ratio i/j. The larger both i and j, the smaller the weight (this is to give
> more importance to small ratios than to large ratios). For instance, when x
> = 1.52, a value ( Exp[-(1.52-1.50)^2/(2 sigma^2)] ) will be subtracted to
> S(x) when i=3 and j=2 (so i/j = 1.50) with weight 1/(3 2) = 1/6, while for
> instance when i = 23 and j = 15 (so i/j = 1.533) a smaller value will be
> subtracted (because i/j is further than 1 sigma from x, and also because the
> weight 1/(23 15) = 0.0029 is small), and when i = 79 and j = 52 (i/j =
> 1.519), the normal function will have higher value ( Exp[-(1.52-1.519)^2/(2
> sigma^2)] ) but the weight will be even smaller ( 1/(79 52)= 0.00024 ).
>
> For sigma = 0.01, n=20,50 and 80, I get plots (uploaded in my directory Max
> here on Tuning list named "HE.jpg") that resemble the ones reported by
> Erlich. However, I have not intentionally used information theory definition
> for entropy, in particular any function of probability p of the form p log
> p. Also, I have not used Farey series, but just summed over all possible
> ratios, therefore including ratios not contained in the Farey series (for
> instance, I have used 3/2 as well as 6/4.... but the weight of 6/4 is less
> than the one of 3/2 although their probability is the same).
>
> Nevertheless, the result I get looks to me rather similar to the one by
> Erlich.
>
> The formula I used for calculations is reported on the plot in clearer form
> than above in my post.
>
> In order to assess similarity and differences, would it be possible to find
> a reference, or some of you knows how to get some information on how exactly
> are the plots on the Erlich's note calculated?
>
> Thanks a lot,
>
> Max
>
>
>

🔗Carl Lumma <carl@...>

3/5/2009 1:59:03 PM

--- In tuning@yahoogroups.com, Chris Vaisvil <chrisvaisvil@...> wrote:
>
> Good subject.
>
> I find it interesting and counter-intuitive that 3:1 and 4:1
> would be ranked substantially less consonant that 2:1
> (I found a graph to 4:1 by following a link somewhere in this
> thread with Indian music in the title on the page.)

As I recall some later versions of harmonic entropy got rid
of the overall downward slope. But generally, the consensus
of theorists here has been that as intervals get very wide
(4:1, 8:1, etc.) they lose both consonance and dissonance.
That is, they do not interact as strongly either way. In that
light, I would certainly call 4:1 less consonant than 2:1.

This is part of the view of consonance as a thing in and of
itself -- not merely the lack of dissonance.

> A question if anyone knows the answer
> is a triad perceived as 3 notes sounding at once
> or 3 dyads? C-E, C-G, E-G ?

Presently, _all_ models of consonance (which have any serious
support) are dyadic theories. It remains a major challenge in
music theory to directly model larger chords like triads and
tetrads. Paul came tantalizingly close to doing triadic harmonic
entropy calculations. We know that for triads composed of small
whole numbers, the triadic harmonic entropy is proportional to
the geometric mean of the product of the chord tones -- the
a*b*c rule. A full triadic harmonic entropy calc would also be
able to handle chords containing irrational (or large rational)
intervals.
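
As a crude illustration of the a*b*c rule mentioned above, the following sketch ranks a few triads by the geometric mean of their chord tones (this is only a complexity ranking in the spirit of the rule, not a harmonic entropy calculation; the cube-root reading of "geometric mean of the product" and the example chords are my own):

```python
from math import prod

def abc_complexity(chord):
    """Geometric mean of the chord tones a:b:c -- the quantity that, per the
    post above, tracks triadic harmonic entropy for small-whole-number triads."""
    return prod(chord) ** (1.0 / len(chord))

for chord in [(3, 4, 5), (4, 5, 6), (6, 7, 9), (10, 12, 15)]:
    print(chord, round(abc_complexity(chord), 2))
# Lower values (3:4:5, 4:5:6) correspond to triads usually heard as more
# consonant; 10:12:15, the just minor triad, ranks noticeably higher.
```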

-Carl

🔗Chris Vaisvil <chrisvaisvil@...>

3/5/2009 2:18:08 PM

Thanks for the really good information.

Now here is a thought.

If :

" But generally, the consensus
of theorists here has been that as intervals get very wide
(4:1, 8:1, etc.) they lose both consonance and dissonance.
That is, they do not interact as strongly either way. In that
light, I would certainly call 4:1 less consonant than 2:1."

is true - and as you go up 3:1, 4:1, 5:1 you are replicating the harmonic
series - is there a correlation between the harmonic entropy ranking of the
harmonic number and that of the interval its octave-"normalized" (I guess
reduced?) note makes in a dyad?

After some thought - this is probably trivial.

On Thu, Mar 5, 2009 at 4:59 PM, Carl Lumma <carl@...> wrote:

> --- In tuning@yahoogroups.com, Chris Vaisvil <chrisvaisvil@...> wrote:
> >
> > Good subject.
> >
> > I find it interesting and counter-intuitive that 3:1 and 4:1
> > would be ranked substantially less consonant that 2:1
> > (I found a graph to 4:1 by following a link somewhere in this
> > thread with Indian music in the title on the page.)
>
> As I recall some later versions of harmonic entropy got rid
> of the overall downward slope. But generally, the consensus
> of theorists here has been that as intervals get very wide
> (4:1, 8:1, etc.) they lose both consonance and dissonance.
> That is, they do not interact as strongly either way. In that
> light, I would certainly call 4:1 less consonant than 2:1.
>
> This is part of the view of consonance as a thing in and of
> itself -- not merely the lack of dissonance.
>
> > A question if anyone knows the answer
> > is a triad perceived as 3 notes sounding at once
> > or 3 dyads? C-E, C-G, E-G ?
>
> Presently, _all_ models of consonance (which have any serious
> support) are dyadic theories. It remains a major challenge in
> music theory to directly model larger chords like triads and
> tetrads. Paul came tantalizingly to doing triadic harmonic
> entropy calculations. We know that for triads composed of small
> whole numbers, the triadic harmonic entropy is proportional to
> the geometric mean of the product of the chord tones -- the
> a*b*c rule. A full triadic harmonic entropy calc would also be
> able to handle chords containing irrational (or large rational)
> intervals.
>
> -Carl
>
>
>

🔗rick_ballan <rick_ballan@...>

3/5/2009 9:28:36 PM

--- In tuning@yahoogroups.com, Michael Sheiman <djtrancendance@...> wrote:
>
> Taken from http://sonic-arts.org/td/erlich/entropy-erlich.htm:
>
> --- But a phenomenon called "virtual pitch" or "fundamental tracking" is central to Parncutt's treatment of dissonance and does represent, I believe, an additional factor besides critical band roughness.
>
> --- There is a very strong propensity for the ear to try to fit what it hears into one or a small number of harmonic series, and the fundamentals of these series, even if not physically present, are either heard outright, or provide a more subtle sense of overall pitch known to musicians as the "root".
>
> In the past I have only really looked at Plomp's algorithm in the form of a dissonance curves (as done by Sethares) before and, on the side, known that the harmonic series is categorized in the brain as a stronger tone than, say, a sine wave played on the root tone.
> *****************************
> Oddly enough, in fact, my last JI-type scale based on the x/16 harmonic series tuning (where x = about 16 to 32) was built on the idea of simplifying a series of tone to point to one root tone. And, meanwhile, my 9/8 * 10/9 * 11/10 * 12/11 * 10/9 * 11/10 * 12/11 pointed to two different harmonic roots about the ratio of 6/5th away from each other.
>
> So they both appear to lend themselves toward the concept of virtual pitch...if I have it right. In fact, I wonder (if virtual pitch is so important), why he don't make all non-adaptive scales based on only one or two harmonic series when searching for consonance in that form?
> ********************************
> But, still...I've had as good if not better luck with my PHI scale (as explained on http://www.geocities.com/djtrancendance/PHINOT.html in which virtually every note beats, but the beating somehow feels uniform/predictable and not random.
> So it seems to hint that some degree of beating can actually aid consonance and result in a different kind of consonance. Same goes with, say, violins (whose strings cause beating all over the place) vs. organs (whose overtones barely beat)...violins "mysteriously" sound just as good if not better.
> *****************************************************
> I believe, Max, you had said something before about that either all tones must be beating or virtually none across the spectrum to result in true consonance.
> In that case I wonder if perhaps my PHI scale finds the solution of the scenario where all notes are beating while stacking harmonic series (ALA JI) approaches the solution where no notes are beating (although that solution, ultimately, appears to lie in adaptive JI and not a fixed scale).
>
> -Michael
>
Hi Chris,

A question if anyone knows the answer
> is a triad perceived as 3 notes sounding at once
> or 3 dyads? C-E, C-G, E-G ?

It is a triad, firstly because this is the meaning of the Latin "tri" and secondly because if you play three G notes simultaneously then you are merely changing the tone or volume of a single G note.

-Rick

🔗Charles Lucy <lucy@...>

3/6/2009 12:11:17 AM

http://www.bbc.co.uk/radio4/history/inourtime/inourtime.shtml

Roger Penrose and others discuss the measurement paradox, which (to my mind) is extremely pertinent to our studies of musical tuning.

Wave functions, probabilities, super-positions etc.

Charles Lucy
lucy@...

- Promoting global harmony through LucyTuning -

for information on LucyTuning go to:
http://www.lucytune.com

For LucyTuned Lullabies go to:
http://www.lullabies.co.uk

🔗massimilianolabardi <labardi@...>

3/6/2009 12:33:46 AM

--- In tuning@yahoogroups.com, Charles Lucy <lucy@...> wrote:
>
> http://www.bbc.co.uk/radio4/history/inourtime/inourtime.shtml
>
> Roger Penrose and others discuss the measurement paradox, which (to my
> mind) is extremely pertinent to our studies of musical tuning.
>
> Wave functions, probabilities, super-positions etc.

Wave functions and superpositions are ingredients of wave mechanics, while probability more properly concerns statistical mechanics, thermodynamics (by the way, entropy is a thermodynamic quantity) and quantum physics. Sound is waves, therefore wave mechanics most probably applies pretty well there. In my view, statistical concepts could enter the psychoacoustical problem, in the sense that our brain could perform a kind of statistical analysis of aural data. I might be wrong, but I see no way in which the uncertainty principle, which is the basis of the "measurement paradox," could apply to either acoustics or psychoacoustics. The uncertainty (or Heisenberg) principle applies to quantum objects (atoms, photons...), but its implications are hidden (kind of averaged out) when we deal with huge ensembles of such objects, as in ordinary matter and at ordinary length and time scales. If quantum effects were observable at a macroscopic scale, they would be part of everyday experience, while evidently they are not.

Max

🔗massimilianolabardi <labardi@...>

3/7/2009 7:37:01 AM

--- In tuning@yahoogroups.com, "Carl Lumma" <carl@...> wrote:

> Have you seen these two posts?
> /harmonic_entropy/topicId_347.html#350
> /harmonic_entropy/topicId_707.html#708

I have tried to use the information in them. Now I apply the (p log p) form, following the entropy definition. Still, I get qualitatively similar results compared to my previous, simpler model. The new plot is at:

/tuning/files/Max/HE2.jpg

(while the old one was at /tuning/files/Max/HE2.jpg).

What I retain from the previous model is: weighting simpler ratios with a higher probability 1/(ij) (i = numerator, j = denominator), and using all of the possible ratios up to an integer N for both i and j. So basically I am not using the Farey sequence, nor its mediants, because I still don't fully understand whether it is just an arbitrary choice or, if not, what the underlying reason for such a choice is. Also, I use a linear scale and not a log one.

Before going on, I notice that, in essence, the plots obtained by giving a higher weight to simpler ratios all come out similar to each other. Also, in my opinion, the difference between the maxima and minima of harmonic entropy for ratios like 5/4 and 6/5 looks too small, compared for instance to the depth of the minima at x = 3/2 and x = 4/3, to account for clear differences in the perception of dyadic consonance (insofar as the latter is one of the aims of this model). So, at present, I am not really persuaded of the importance of "details" (like using different ensembles of fractions, or log units, or...) in this description.

I would be glad to learn whether I miss something...

Max

🔗Carl Lumma <carl@...>

3/7/2009 1:33:04 PM

--- In tuning@yahoogroups.com, "massimilianolabardi" <labardi@...> wrote:
>
> --- In tuning@yahoogroups.com, "Carl Lumma" <carl@> wrote:
>
> > Have you seen these two posts?
> > /harmonic_entropy/topicId_347.html#350
> > /harmonic_entropy/topicId_707.html#708
>
> I have tried to use information within. Now I apply (p log p)
> form recalling entropy definition. Still, I get qualitatively
> similar results compared to my previous simpler modeling. The
> new plot is at:
>
> /tuning/files/Max/HE2.jpg

What are the three contours here?

> (while the old one was at http://launch.groups.yahoo.com/group
> /tuning/files/Max/HE2.jpg).

You gave the same link twice.

> What I retain from the previous model is: weighing simpler
> ratios with higher probability 1/ij (i=numerator, j=denominator),
> and using all of the possible ratios up to an integer N for
> both i and j. So basically I am not using Farey sequence, nor its
> mediants, because still I don't fully understand whether it is
> just an arbitrary choice or, in turn, what is the underlying
> reason for such a choice. Also, I use linear scale and not log
> ones.

Paul showed that the harmonic entropy converges (or at least
becomes stable in some sense) as the order of the Farey series
goes to infinity.

> Before going on, I notice that, in essence, plots obtained by
> giving a higher weight to simpler ratios come out all similar
> to each other.

Yup.

> Also, in my opinion, the difference between max
> and min of harmonic entropy for ratios like 5/4 and 6/5 looks
> too small, compared for instance to the depth of the minimum
> at x=3/2 and x=4/3, to account for clear differences in
> perception of dyadic consonance (as far as the latter is one
> of the aims of this model).

Can you explain a bit more what you mean here?

-Carl

🔗massimilianolabardi <labardi@...>

3/7/2009 3:01:59 PM

--- In tuning@yahoogroups.com, "Carl Lumma" <carl@...> wrote:

> >
> > /tuning/files/Max/HE2.jpg
>
> What are the three contours here?

N=20, N=50, N=80 (from top to bottom).

> > (while the old one was at http://launch.groups.yahoo.com/group
> > /tuning/files/Max/HE2.jpg).
>
> You gave the same link twice.
>

Sorry. /tuning/files/Max/HE.jpg

>
> Paul showed that the harmonic entropy converges (or at least
> becomes stable in some sense) as the order of the farey series
> goes to infinity.
>

Ok. I noticed more convergence with the simpler model (HE.jpg); instead, using "p log p" (HE2.jpg), there is less. I guess some normalization is needed, but that will depend on the choice of the fraction ensemble. I'll try to work this out better.

> > Also, in my opinion, the difference between max
> > and min of harmonic entropy for ratios like 5/4 and 6/5 looks
> > too small, compared for instance to the depth of the minimum
> > at x=3/2 and x=4/3, to account for clear differences in
> > perception of dyadic consonance (as far as the latter is one
> > of the aims of this model).
>
> Can you explain a bit more what you mean here?

I meant that, looking at Erlich's plots and concentrating on the entropy maximum at 348 cents, between the two local minima at 315 and 387 cents (the ones corresponding to the fractions 6/5 and 5/4), the difference in entropy does not look so large (of the order of 0.1-0.2 units, whereas the corresponding depth for the interval 3/2 is about 1 and for 4/3 about 0.5). In turn, if I understand correctly, an interval close to 6/5 (minor third) is classified by our brain as belonging to a very different category than an interval close to 5/4 (major third), whereas an interval in between (i.e. at the local entropy maximum) is hard to classify in one category or the other and would therefore sound more "strange" or dissonant.

My observation is just that the major and minor "categories" have very strong characters (indeed, a lot of Western music is based on the contrast between the two "modes"), but on the other hand their harmonic entropies are not as different as one might expect. However, this is just an impression.

Max

🔗Carl Lumma <carl@...>

3/7/2009 9:23:28 PM

--- In tuning@yahoogroups.com, "massimilianolabardi" <labardi@...> wrote:
>
> > > /tuning/files/Max/HE2.jpg
> >
> > What are the three contours here?
>
> N=20, N=50, N=80 (from top to bottom).

Ok, yes, that looks about right.

> > > (while the old one was at http://launch.groups.yahoo.com/group
> > > /tuning/files/Max/HE2.jpg).
> >
> > You gave the same link twice.
>
> Sorry. /tuning/files/Max/HE.jpg

How are you assigning i and j for a given x? Just taking
the nearest ratio i/j to x?

> I meant, by looking at Erlich's plots and concentrating on the
> entropy maximum at 348 cents, in between the two local minima
> at 315 and 387 cents (the ones corresponding to 6/5 and 5/4
> fractions), the difference in entropy looks not so high (of the
> order of 0.1-0.2 units while the one corresponding to the
> interval 3/2 is 1 and to 4/3 is 0.5).

Using the formulation of harmonic entropy I prefer, the
minimum at 5/4 is 4.4903844 nats, and the maximum near 11/9
is 4.6245254 nats, for a difference of ~ 0.13.

3/2 is at 4.1300928 and 659 cents is at 4.6575969, for
a difference of ~ 0.53. You can get a difference on the
order 1 nat, between the octave and a wolf octave.

> In turn, if I understand
> correctly, an interval close to 6/5 (minor third) is classified
> by our brain as belonging to a very different category than an
> interval close to 5/4 (major third), whereas an interval in
> between (i.e. on the local entropy maximum) is hard to classify
> in one or the other category and therefore would sound more
> "strange" or dissonant.
>
> My observation is just that major and minor "categories" have
> very strong character (indeed, a lot in western music is based
> on the contrast between the two "modes") but on the other hand
> their harmonic entropy is not so different as one could expect.
> However, this is just an impression.

One must bear in mind that these harmonic entropy calculations
model what we hear between a pair of sine tones. It's a
simplification to apply it to the pitches of normal timbres.
Nevertheless, the numbers above aren't entirely unreasonable.
They say the dissonance increase between a 5th and a wolf 5th
is about five times greater than the increase between a 3rd
and a neutral 3rd.

-Carl

🔗massimilianolabardi <labardi@...>

3/8/2009 1:58:00 AM

--- In tuning@yahoogroups.com, "Carl Lumma" <carl@...> wrote:

> > Sorry. /tuning/files/Max/HE.jpg
>
> How are you assigning i and j for a given x? Just taking
> the nearest ratio i/j to x?
>

Basically, yes. More precisely, given a ratio x, one calculates the difference x - i/j for each of the fractions i/j in a given ensemble. Then you have a normal probability function of that difference, with a width sigma, that works as follows:

If x - i/j = 0 (that is, x is exactly equal to the considered fraction i/j) then the Gaussian is maximum.

If x - i/j is close to zero, say within 1 or 2 sigma (widths), there is still some probability but its value is smaller.

If x is far from i/j, then probability is negligible.

This alone would give the same significance to very close ratios, say to 3/2 and 3001/2000 when x = 1.5. So we need a weight function to give more importance to ratios made up of the smallest numerators and denominators. I have chosen 1/(ij) as such a function. This means 3/2 will count as 1/6, while 3001/2000 will count as 1/6002000, that is, negligibly.

So in essence I am not taking the nearest ratio, but all ratios within a width of about sigma around x, weighted in such a way that simpler ratios contribute a lot while more complex ones contribute much less. I think this might be about the same as what is done with the Farey sequence and its mediants. Perhaps the role of the weight function is played by the width of the integration interval for each pair of elements of the Farey series.
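
Putting this together with the (p log p) form from the earlier post, the procedure amounts to something like the sketch below (the normalization of the weighted Gaussian values into probabilities is my assumption about a detail the posts leave implicit):

```python
import math

def weighted_entropy(x, N=80, sigma=0.01):
    """For each ratio i/j (1 <= j <= i <= N), take weight(i,j) * Gaussian(x - i/j)
    as an unnormalized probability, normalize the set, and return -sum(p * ln p)."""
    raw = []
    for j in range(1, N + 1):
        for i in range(j, N + 1):
            w = 1.0 / (i * j)
            raw.append(w * math.exp(-(x - i / j) ** 2 / (2 * sigma ** 2)))
    total = sum(raw)
    return -sum((p / total) * math.log(p / total) for p in raw if p > 0.0)

# Low value at a simple ratio, higher value slightly off it:
print(round(weighted_entropy(1.5), 3), round(weighted_entropy(1.52), 3))
```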

> Using the formulation of harmonic entropy I prefer, the
> minimum at 5/4 is 4.4903844 nats, and the maximum near 11/9
> is 4.6245254 nats, for a difference of ~ 0.13.

What is that formulation, and which plot do you refer to?

> 3/2 is at 4.1300928 and 659 cents is at 4.6575969, for
> a difference of ~ 0.53. You can get a difference on the
> order 1 nat, between the octave and a wolf octave.
<snip>
> Nevertheless, the numbers above aren't entirely unreasonable.
> They say the dissonance increase between a 5th and a wolf 5th
> is about five times greater than the increase between a 3rd
> and a neutral 3rd.

Ok, I see. Comparing a consonant ratio with a close dissonant one makes more sense. Comparing absolute values of H.E. all over the octave is more difficult.

Thanks a lot Carl,

Max

🔗massimilianolabardi <labardi@...>

3/8/2009 5:45:17 AM

--- In tuning@yahoogroups.com, Michael Sheiman <djtrancendance@...> wrote:

>     I believe, Max, you had said something before about that either all tones must be beating or virtually none across the spectrum to result in true consonance.
>    In that case I wonder if perhaps my PHI scale finds the solution of the scenario where all notes are beating while stacking harmonic series (ALA JI) approaches the solution where no notes are beating (although that solution, ultimately, appears to lie in adaptive JI and not a fixed scale).
>

At the beginning I tried to put forward the hypothesis that the most consonant triads could be characterized by equal beat frequencies (or beat frequencies related by simple ratios). Afterwards, the issue was to understand whether the effect of such beats could be audible or not, and the conclusions were rather controversial. So, I really don't know.

About the tunings involving the golden ratio, I think that the irrational Phi has some real peculiarity with respect to other irrationals - transcendental or not - sometimes used to build up temperaments. The interesting property of Phi is, in my opinion, that 1/Phi = Phi - 1. This implies that, for instance, if you press a cello string at the golden-ratio position (61.8% of the full length) and pluck the two parts, the difference of the two resulting frequencies is the frequency of the full string. I have really no idea whether possible implications of this may exist for the musical world... but at least it's something! I couldn't find any similar mathematical or physical role for, e.g., pi or other peculiar irrationals so far.

Also, if you make a triad as 1:(Phi^2)/2:Phi, that is 1:1.309:1.618, it happens that the 2nd harmonic of 1.309 is 2.618, and then the difference frequency between the 2nd harmonic of 1.309 and 1.618 is exactly 1. Similarly, the difference frequency between the 2nd harmonic of 1.618 (3.236) and the 4th harmonic of 1.309 (5.236) is exactly the 2nd harmonic of 1 (that is, 2)... I have no idea how such a chord sounds, but surely its harmonic properties seem peculiar, as long as difference tones play some role.
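
These relations are easy to check numerically (a quick sketch; the variable names are mine):

```python
phi = (1 + 5 ** 0.5) / 2                 # 1.6180339...

# String pressed at 1/phi (about 61.8%) of its length: the two segments have
# lengths 1/phi and 1/phi**2, hence frequencies phi and phi**2 (open string = 1).
print(phi ** 2 - phi)                    # -> 1.0, the open-string frequency

# Triad 1 : phi^2/2 : phi
root, mid, top = 1.0, phi ** 2 / 2, phi  # 1 : 1.309 : 1.618
print(2 * mid - top)                     # 2nd harmonic of middle minus the top -> 1.0
print(4 * mid - 2 * top)                 # 4th harmonic of middle minus 2nd of top -> 2.0
```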

Max

🔗Carl Lumma <carl@...>

3/8/2009 10:48:06 AM

--- In tuning@yahoogroups.com, "massimilianolabardi" <labardi@...> wrote:

> > How are you assigning i and j for a given x? Just taking
> > the nearest ratio i/j to x?
>
> Basically, yes. More precisely, given a ratio x, one calculates
> the difference x - i/j for each of the fractions i/j in a given
> ensamble.

Ah, good.

> So in essence I am not taking the nearest ratio, but taking all
> ratios within a width sigma around x, but weighted in such a way
> that simpler ratios contribute a lot while more complex ones
> contribute much less. I think this might be about the same of what
> done with Farey sequence and its mediants.

Yes indeed.

>Perhaps the role of the weight function is taken by the width of
>the integration interval for each couple of elements of the Farey
>series.

When using the Farey series, there is no weight function.
The mediants are used to partition the region under a
Gaussian centered on x. The weight function (1/sqrt(n*d))
is used instead of the mediant-mediant distances.
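
For comparison with the approach above, here is a rough sketch of the Farey/mediant partition just described. The choices of order N, the octave range [1, 2], and a Gaussian with sigma = 1% in linear frequency (mentioned below) are my own assumptions about details, and actual formulations differ on such points:

```python
import math
from fractions import Fraction

def farey_ratios(N):
    """Reduced fractions with denominator <= N lying in the octave [1, 2], sorted."""
    return sorted({Fraction(n, d) for d in range(1, N + 1) for n in range(d, 2 * d + 1)})

def normal_cdf(z, mu, sigma):
    return 0.5 * (1.0 + math.erf((z - mu) / (sigma * math.sqrt(2.0))))

def mediant_entropy(x, N=80, sigma=0.01):
    """Each ratio's probability is the Gaussian mass (centered on x) between the
    mediants flanking it; harmonic entropy is then -sum(p * ln p)."""
    r = farey_ratios(N)
    mediants = [(r[k].numerator + r[k + 1].numerator) /
                (r[k].denominator + r[k + 1].denominator) for k in range(len(r) - 1)]
    bounds = [-math.inf] + mediants + [math.inf]   # crude handling at the octave edges
    H = 0.0
    for k in range(len(r)):
        p = normal_cdf(bounds[k + 1], x, sigma) - normal_cdf(bounds[k], x, sigma)
        if p > 0.0:
            H -= p * math.log(p)
    return H

print(round(mediant_entropy(1.5), 3), round(mediant_entropy(1.53), 3))
```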

> > Using the formulation of harmonic entropy I prefer, the
> > minimum at 5/4 is 4.4903844 nats, and the maximum near 11/9
> > is 4.6245254 nats, for a difference of ~ 0.13.
>
> What is such formulation, and the plot you refer to?

I've never bothered to plot it; I just have the data.
The formulation is 1/sqrt(n*d) weighting function where
the Gaussian on x has a standard deviation of 1% in
frequency space.

> Ok, I see. Comparing a consonant ratio with a close dissonant
> one makes more sense. Comparing absolute values of H.E. all
> over the octave is more difficult.

No, that should be possible too. I didn't follow the example
you were giving, so I suggested an alternative.

-Carl

🔗massimilianolabardi <labardi@...>

3/12/2009 3:07:12 AM

--- In tuning@yahoogroups.com, "massimilianolabardi" <labardi@...> wrote:

> Also, if you make a triad as 1:(Phi^2)/2:Phi, that is 1:1.309:1.618,
>it happens that the 2nd harmonic of 1.309 is 2.618 and then the
>difference frequency between the 2nd harmonic of 1.309 and 1.618 is
>exactly 1. Similarly, the difference frequency between the 2nd
>harmonic of 1.618 (3.236) and the fourth harmonic of 1.309 (5.236) is
>exactly the 2nd harmonic of 1 (2).... I have no idea of how such
>chord sounds like, but surely its harmonic properties seem peculiar,
>as long as difference tones play some role.

Actually, I realize that this is not such a special property. It is typical for rationals: e.g. for 1:5/4:3/2 you get 2*(5/4) = 10/4 and 10/4 - 3/2 = 1. Also, 2*(3/2) = 3 and 4*(5/4) = 5, so that their difference is 5 - 3 = 2 (the second harmonic of 1), so the same property holds. Therefore the harmonic properties could be the same as those characterizing rationals... only, Phi is irrational.

But... of course you can make any number (also irrational, and even transcendental) behave like that, if you define the intervals properly. E.g., taking pi/2 = 1.57079... instead of Phi, you can calculate the "middle" tone x of the chord having the same property: x/2 = pi/2 - 1, and you get x = pi - 2 (proof: (pi - 2)/2 = pi/2 - 1). So the chord would be in this case 1:pi-2:pi/2 (1:1.1415...:1.57079...).
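
A quick numerical check of the two examples above (the variable names are mine):

```python
import math

# Rational chord 1 : 5/4 : 3/2
print(2 * (5 / 4) - 3 / 2)        # -> 1.0  (2nd harmonic of the middle minus the top)
print(4 * (5 / 4) - 2 * (3 / 2))  # -> 2.0  (equals the 2nd harmonic of the root)

# Chord 1 : (pi - 2) : pi/2, built so that the middle tone x satisfies x/2 = pi/2 - 1
x = math.pi - 2
print(math.isclose(x / 2, math.pi / 2 - 1))   # -> True
```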

Phi differs in that the middle tone coincides with the one resulting from the "cycle of Phi", while in the other cases (including the rationals) this seems not to be the case.

I have no idea whether this observation is a trivial one or whether it could give some hint; I hope so.

Max