
Re: [harmonic_entropy] Digest Number 53

Robert Walker <robert_walker@...>

2/1/2001 6:06:48 AM

Paul,

Sorry, I need some context to understand it.

Can you work through a simple example, take an example dyad,
and show how it works?

Ex. of where you lose me:

> OK. For the dyadic case, first you calculate all the ratios such that the
> product does not exceed a certain limit. Then either:

What is the scope of the "all" here? Product of what with what?

Probably some small detail that I'm missing, after which it will
make sense.

The thing is, I haven't really been following the discussion on
this list in any detail yet; otherwise, maybe I would understand.

Robert

> OK. For the dyadic case, first you calculate all the ratios such that the
> product does not exceed a certain limit. Then either:
>
> (a) calculate the mediants between the adjacent ratios, and assign each of
> the original ratios a "width" according to the distance between the mediants
>
> (b) assign a "width" to each ratio based on an approximate formula (this
> approach will have to be used for the triadic case since no one has figured
> out how to generalize mediants to 2-d).
>
> Now, for the interval in question (often, every cents value from 0 to 2400),
> construct a bell curve centered around that interval and with the particular
> standard deviation you've assumed. Assign a probability to each of the
> original set of ratios: the probability is proportional to the product of
> the ratio's "width" times the height of the bell curve at that ratio.
> Finally, calculate the following sum over all the probabilities p:
>
> entropy = -sum(p*log(p))
>
> Does this make sense? If so, I'll proceed to explain the triadic case (which
> is virtually identical in concept).
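Below is a minimal, self-contained sketch of the dyadic recipe quoted above, using approach (a): every reduced ratio whose numerator-times-denominator product stays within a limit is kept, widths come from the mediants between sorted neighbours, each probability is proportional to width times the bell curve's height at the ratio, and the probabilities are normalised before the entropy sum. The procedure name, the limit of 1000, the truncated edge handling at the two ends of the list, and the test intervals are all illustrative choices, not code from Scala or from the archives.

with Ada.Text_IO;
with Ada.Numerics.Long_Elementary_Functions;

procedure Dyadic_Entropy_Sketch is
   use Ada.Numerics.Long_Elementary_Functions;

   Product_Limit : constant := 1_000;            -- keep ratios with Num * Den <= this
   Sigma         : constant Long_Float := 0.01;  -- bell-curve std. dev., in octaves

   Max_Ratios : constant := 8_000;
   Num, Den   : array (1 .. Max_Ratios) of Positive;
   Val        : array (1 .. Max_Ratios) of Long_Float;  -- log2 (Num / Den)
   Count      : Natural := 0;

   function GCD (A, B : Natural) return Natural is
   begin
      if B = 0 then return A; end if;
      return GCD (B, A mod B);
   end GCD;

   function Entropy (Cents : Long_Float) return Long_Float is
      X      : constant Long_Float := Cents / 1200.0;  -- target interval, in octaves
      P      : array (1 .. Count) of Long_Float;
      Lo, Hi : Long_Float;
      Total  : Long_Float := 0.0;
      Result : Long_Float := 0.0;
   begin
      for I in 1 .. Count loop
         -- The ratio's "width" runs from the mediant with its lower
         -- neighbour to the mediant with its upper neighbour.
         if I = 1 then
            Lo := Val (I);
         else
            Lo := Log (Long_Float (Num (I - 1) + Num (I)) /
                       Long_Float (Den (I - 1) + Den (I)), 2.0);
         end if;
         if I = Count then
            Hi := Val (I);
         else
            Hi := Log (Long_Float (Num (I) + Num (I + 1)) /
                       Long_Float (Den (I) + Den (I + 1)), 2.0);
         end if;
         -- Probability ~ width times the bell curve's height at the ratio.
         P (I) := (Hi - Lo) * Exp (-(Val (I) - X) ** 2 / (2.0 * Sigma ** 2));
         Total := Total + P (I);
      end loop;
      for I in 1 .. Count loop
         P (I) := P (I) / Total;
         if P (I) > 0.0 then
            Result := Result - P (I) * Log (P (I));  -- entropy = -sum (p * log p)
         end if;
      end loop;
      return Result;
   end Entropy;

begin
   -- Step 1: all reduced ratios with Num * Den <= Product_Limit.
   for N in 1 .. Product_Limit loop
      for D in 1 .. Product_Limit / N loop
         if GCD (N, D) = 1 then
            Count := Count + 1;
            Num (Count) := N;
            Den (Count) := D;
            Val (Count) := Log (Long_Float (N) / Long_Float (D), 2.0);
         end if;
      end loop;
   end loop;
   -- Step 2: sort by size so that neighbours are adjacent
   -- (a plain insertion sort keeps the sketch self-contained).
   for I in 2 .. Count loop
      declare
         TN : constant Positive   := Num (I);
         TD : constant Positive   := Den (I);
         TV : constant Long_Float := Val (I);
         J  : Natural := I - 1;
      begin
         while J >= 1 and then Val (J) > TV loop
            Num (J + 1) := Num (J);
            Den (J + 1) := Den (J);
            Val (J + 1) := Val (J);
            J := J - 1;
         end loop;
         Num (J + 1) := TN;
         Den (J + 1) := TD;
         Val (J + 1) := TV;
      end;
   end loop;
   -- Step 3: evaluate at a couple of sample intervals.
   Ada.Text_IO.Put_Line ("702 cents:" & Long_Float'Image (Entropy (702.0)));
   Ada.Text_IO.Put_Line ("600 cents:" & Long_Float'Image (Entropy (600.0)));
end Dyadic_Entropy_Sketch;

Because the probabilities are normalised by their total, the Gaussian's constant factor cancels, so the raw Exp height is enough.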

manuel.op.de.coul@...

2/1/2001 5:06:34 AM

Maybe my code will help. I hope the little bit of optimisation
doesn't make it harder to understand.

Manuel

with Pitches;
with Quick_Sort;
with Rational_Math_Lib;
with Scale_Math;

subtype Pitch is Pitches.Pitch;
subtype Rational is Rational_Math_Lib.Rational;  -- assumed: the rational type used below
type Long_Floats is array (Positive range <>) of Long_Float;
type Subscale is array (Positive range <>) of Pitch;

Argument_Error : exception;  -- raised on invalid input

-- If Farey_Order > 160 then make these arrays larger.
Farey       : Subscale (1 .. 16_000);
Bounds      : Long_Floats (Farey'Range);
Farey_Count : Natural := 0;
Farey_Order : Natural := 0;

procedure Set_Entropy_Order (The_Order : in Natural) is
   Numl, Numr, Denl, Denr : Integer;
   Templ, Tempr           : Rational;

   function Smaller_Than (Left, Right : in Pitch) return Boolean is
   begin
      return Pitches.To_Float (Left) < Pitches.To_Float (Right);
   end Smaller_Than;

   procedure Pitch_Sort is new Quick_Sort
     (Item  => Pitch,
      Index => Positive,
      Items => Subscale,
      "<"   => Smaller_Than,
      "<="  => Pitches."<=");
begin
   Farey_Count := 0;
   Farey_Order := The_Order;
   -- Collect every reduced fraction Num/Den with numerator and
   -- denominator each at most The_Order.
   for Den in 1 .. The_Order loop
      for Num in 1 .. The_Order loop
         if Scale_Math.Greatest_Common_Divisor (Num, Den) = 1 then
            Farey_Count := Farey_Count + 1;
            Farey (Farey_Count) := Pitches.To_Pitch (
              Rational_Math_Lib.Construct_Unchecked (Num, Den));
         end if;
      end loop;
   end loop;
   Pitch_Sort (Farey (1 .. Farey_Count));
   -- Bounds (Index) is the log2 of the mediant between the adjacent
   -- ratios Farey (Index - 1) and Farey (Index); these mediants mark
   -- off each ratio's "width".
   for Index in 2 .. Farey_Count loop
      Templ := Pitches.To_Rational (Farey (Index - 1));
      Tempr := Pitches.To_Rational (Farey (Index));
      Rational_Math_Lib.Value_Of (Templ, Numl, Denl);
      Rational_Math_Lib.Value_Of (Tempr, Numr, Denr);
      Bounds (Index) := Scale_Math.Log2 (Long_Float (Numl + Numr) /
                                         Long_Float (Denl + Denr));
   end loop;
exception
   when Constraint_Error =>
      -- E.g. The_Order too large for the fixed-size arrays above.
      raise Argument_Error;
end Set_Entropy_Order;

function Harmonic_Entropy (The_Pitch : in Pitch;
                           Sigma     : in Long_Float := 0.01)
  return Long_Float is
   Log_Pitch, Log_Diff, Prob, Cur_Prob, Last_Prob : Long_Float;
   Float_Pitch : constant Long_Float := Pitches.To_Float (The_Pitch);
   -- Cin = 1 / (Sigma * sqrt (2)), the argument scale for Erf.
   Cin : constant Long_Float := 0.5 * Scale_Math.Square_Root (2.0) / Sigma;
   First_Todo     : Integer;
   Result         : Long_Float := 0.0;
   Started_Adding : Boolean := False;
begin
   if Float_Pitch = 0.0 then
      raise Argument_Error;
   end if;
   -- Initialize Farey bounds if not done already.
   if Farey_Count = 0 then
      Set_Entropy_Order (80);
   end if;
   -- Fold intervals below 1/1 onto their inversions.
   Log_Pitch := abs Scale_Math.Log2 (Float_Pitch);
   -- Optimisation: after folding, Log_Pitch >= 0, so the lower part of
   -- the sorted list contributes essentially nothing and can be skipped.
   First_Todo := Farey_Count / 3;
   Log_Diff := Bounds (First_Todo - 1) - Log_Pitch;
   Last_Prob := Scale_Math.Erf (Cin * Log_Diff);
   for Index in First_Todo .. Farey_Count loop
      Log_Diff := Bounds (Index) - Log_Pitch;
      Cur_Prob := Scale_Math.Erf (Cin * Log_Diff);
      -- The bell curve's mass between two adjacent mediant bounds,
      -- i.e. the probability slice for the ratio between them.
      Prob := Cur_Prob - Last_Prob;
      if Prob >= 1.0E-9 then
         Result := Result - Prob * Scale_Math.Log (Prob);
         Started_Adding := True;
      elsif Prob = 0.0 then
         -- Once past the bell curve, all remaining slices are zero.
         exit when Started_Adding;
      end if;
      Last_Prob := Cur_Prob;
   end loop;
   return Result;
end Harmonic_Entropy;
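
A hypothetical call sequence, inferred from the declarations above rather than from any documented Scala interface:

-- Hypothetical usage; Result is assumed declared as Long_Float.
Set_Entropy_Order (80);            -- build and sort the ratio list once
Result := Harmonic_Entropy
  (Pitches.To_Pitch (Rational_Math_Lib.Construct_Unchecked (3, 2)),
   Sigma => 0.01);                 -- entropy of the just fifth 3/2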

Paul H. Erlich <PERLICH@...>

2/1/2001 12:15:58 PM

Hi Robert:

I went through a sample calculation before -- check the archives . . .

Paul H. Erlich <PERLICH@...>

2/1/2001 12:17:35 PM

Hi Manuel:

I presume you're aware that I've essentially moved completely from the Farey
series to the "Tenney series" approach? The "Tenney series" is defined by a
particular limit on the product of numerator and denominator.

-Paul
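
To make the definition concrete, here is a short sketch (the limit of 36 and all names are illustrative) that enumerates one such Tenney series, i.e. every reduced ratio n/d whose product n * d stays within the limit:

with Ada.Text_IO;

procedure Tenney_Series_Sketch is
   Limit : constant := 36;  -- illustrative product limit

   function GCD (A, B : Natural) return Natural is
   begin
      if B = 0 then return A; end if;
      return GCD (B, A mod B);
   end GCD;
begin
   -- N * D <= Limit is equivalent to D <= Limit / N (integer division).
   for N in 1 .. Limit loop
      for D in 1 .. Limit / N loop
         if GCD (N, D) = 1 then
            Ada.Text_IO.Put_Line (Integer'Image (N) & "/" & Integer'Image (D));
         end if;
      end loop;
   end loop;
end Tenney_Series_Sketch;

So 9/4 (product 36) makes the cut while 10/7 (product 70) does not, whereas a Farey-style list of order N bounds numerator and denominator separately.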

manuel.op.de.coul@...

2/5/2001 3:21:08 AM

>I presume you're aware that I've essentially moved completely from the
>Farey series to the "Tenney series" approach? The "Tenney series" is
>defined by a particular limit on the product of numerator and denominator.

Yes, I agree it's better. I'll change it; it's only a small change.

Manuel
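
For what it's worth, the change could plausibly be as small as the inner loop bound in Set_Entropy_Order, so that the product of numerator and denominator, rather than each of them separately, is capped at The_Order. A guess at the revision, not Scala's actual code:

-- Hypothetical revision: bound Num * Den instead of each term.
for Den in 1 .. The_Order loop
   for Num in 1 .. The_Order / Den loop   -- now Num * Den <= The_Order
      if Scale_Math.Greatest_Common_Divisor (Num, Den) = 1 then
         Farey_Count := Farey_Count + 1;
         Farey (Farey_Count) := Pitches.To_Pitch (
           Rational_Math_Lib.Construct_Unchecked (Num, Den));
      end if;
   end loop;
end loop;

The meaning of The_Order shifts accordingly: it caps the product rather than each term, so the same value yields far fewer ratios.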