
Re: [tuning] [MMM] Re: various - any tuning is potentially a good tuning - for someone.

🔗Kurt Bigler <kkb@...>

5/3/2004 10:45:39 PM

on 5/3/04 9:18 PM, wallyesterpaulrus <paul@...> wrote:

> --- In tuning@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
>
>> I was thinking possibly worse than that. I was just thinking with my
>> "neural intuition" and figuring that any expectation of continuity
>> might not be based in neural reality.
>
> Hmm . . . I don't see how continuity could fail to hold, because
> frequency resolution is always finite in practice . . .
> Discontinuities would seem to require infinite precision, which would
> seem to require infinite duration . . .

Faulty neural intuition! ;)

>> Goal: to make the *least* consonant intervals in a scale more consonant by
>> sacrificing the consonance of some of the more consonant ones. In short to
>> reduce the variation in consonance among all the intervals in the scale
>> such that the worse ones are less bad and the best ones less good.
>>
>> I thought about it some more, thought about how prime factors work, with
>> the example of stacking two 5:4s as representative of the typical problem,
>> which is 2 stacked intervals inevitably producing a higher-odd-limit
>> interval, nothing you can do about that basically, unless you are tempering
>> something away from just.
>>
>> That's what I *was* thinking and your comment about "unrealistic" sort of
>> inspired that thinking and tentatively confirmed it.
>>
>> But now I'm thinking that if you choose carefully what intervals are
>> available to stack, maybe you can improve things. But on the other hand you
>> had apparently thought about it and considered Duodene to be typical,
>> neither particularly good nor particularly bad, and I took that to apply to
>> my criterion of optimizing the least clear interval in a scale.
>
> Probably the best you could do in this regard would be a big 12-tone
> harmonic series (or subharmonic series, if you're only looking
> dyadically it's just as good).

Well that's an interesting perspective. Indeed that's literally correct.
And it proves that that wasn't what I was really looking for. So my other
point about "otonal clarity" (which I think may still be vague to you) turns
out to be part of the requirement.

>>> Did you see this message:
>>>
>>> /tuning/topicId_53249.html#53280
>>
>> I understand abs exponential and pointy. So you are saying the pointy ones
>> are so pointy that inevitably any change will drive down the value of the
>> consonance function.
>
> Pretty much.
>
>> But seems to me that depends on how you add them up
>> over all the intervals being optimized, and doing an ad-hoc
>> smoothing step
>> might not really be to the point.
>
> Don't follow.

You apply the consonance function independently to a whole set of target
intervals (the ones being optimized), don't you? If so then you have to
combine all those results into a single metric that you can use to compare
one scale to another, right?
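To make that concrete, here's a minimal sketch of the kind of combination I
mean, with a stand-in "diss" curve (a toy with pointy minima at the just 3:2
and 5:4 -- not your actual harmonic-entropy function; all names here are mine):

diss[c_] := -Exp[-Abs[c - 701.955]/20.] - Exp[-Abs[c - 386.314]/20.];
intervals[scale_] := Mod[#2 - #1, 1200] & @@@ Subsets[Sort[scale], {2}];  (* all dyads, in cents *)
scaleCost[scale_] := Total[diss /@ intervals[scale]];                     (* one combined metric *)
scaleCost[{0, 386.314, 701.955}] < scaleCost[{0, 400., 700.}]             (* compare two scales: True *)

Straight addition is just one choice; taking the worst interval (Max of diss
instead of Total) would be closer to my "least consonant interval" goal above.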

>>> This regards the use of "pointy" rather than "smooth" consonance
>>> functions. You can see some pictures here:
>>>
>>> smooth: /harmonic_entropy/
>>
>> Hmm. You just pointed me to the H.E. page. No picture there.
>
> Really? That's odd . . . no picture on the homepage? I see one.

Hmm. I tried another browser and it's no different. There is a *missing*
picture to the left of the description. It fails to load and fails to
reload upon manual request.

>>>> and Duodene has a 4:5:6:7 approximation in two keys,
>>> In two keys? Perhaps you really just meant "positions"
>>> or "transpositions", not "keys"?
>> I meant the 128:225 appears twice in any fixed tuning of the
>> scale. The 225:224-based approximations of the following 7-limit ratios
>> occur as follows:
>>
>> 9:7 (actually 16:25) appears 4 times
>> 6:7 (actually 32:75) appears 3 times
>> 4:7 (actually 128:225) appears 2 times
>>
>> I'm sure you know that so this must be just some language thing.
>> To me I use a fixed Duodene tuning in various "keys". I thought that was
>> normal lingo.
>
> "Keys" to me means C major, D minor, etc.

Yes, exactly. For a Duodene with F-C-G-D in the middle row, the 128:225
appears in two keys: C# and G#. Right?

>>> But if so, what would your remark
>>> above -- "with hopefully a choice of several chords achieving that in
>>> most keys" -- mean? Did you really just mean "on most possible
>>> roots/tonics"?
>>
>> Yes, exactly. I'd settle for "most" if I couldn't get "all".
>
> Then the big harmonic series scale probably isn't for you . . .

Right, not once you introduce the otonal-clarity requirement, which I
perhaps stated too vaguely. I'll try to restate it here to incorporate the
additional idea of key clarity....

The otonal clarity of a *chord* is the degree to which a single or a very
small number of implied fundamentals are clearly present. A strong single
implied fundamental is the condition of good clarity. A weaker implied
fundamental or a blending of several without a clear "winner" is indicative
of less clarity.

Otonal clarity is key-appropriate when the strong implied fundamental is in
agreement with the root of the chord. (I personally tend to call this the
key of the chord and I'm not sure that is good music-theory lingo.)

The otonally-clear chord variety of a key in a scale is the degree to which
there is a variety of available otonally clear chords appropriate to that
key.

The key-relevant otonal versatility of a scale over a set of specified
important keys is the degree to which all the important keys have good
otonally-clear chord variety. There may be a secondary set of keys that
have at least one otonally-clear chord but no variety, and this would be
part of the overall measure of otonal versatility of a scale.
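For the exact-ratio case there is at least a crisp toy version of "implied
fundamental" (purely my illustration, writing a chord in harmonic numbers
such as 4:5:6:7; real, tempered chords would need something fuzzier, which is
why the definitions above talk about "degree"):

impliedFundamental[h_List] := GCD @@ h  (* harmonic number of the implied fundamental *)
rootAgrees[h_List] := IntegerQ[Log2[Min[h]/(GCD @@ h)]]  (* is the root a power of 2 above it? *)
impliedFundamental[{4, 5, 6, 7}]  (* -> 1: a single fundamental two octaves below the root *)
rootAgrees[{4, 5, 6, 7}]          (* -> True: key-appropriate in the above sense *)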

>> You can reply there next time if
>> appropriate, perhaps with a note here that you are making the
>> transition.
>
> I'm making the transition with my next post.

Ok, well this is my final post on this thread to the tunings list. I will
also post it to the H.E. list, so you and others can reply there.

-Kurt

🔗wallyesterpaulrus <wallyesterpaulrus@...>

5/4/2004 8:14:23 AM

--- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:

> >> But seems to me that depends on how you add them up
> >> over all the intervals being optimized, and doing an ad-hoc
> >> smoothing step
> >> might not really be to the point.
> >
> > Don't follow.
>
> You apply the consonance function independently to a whole set of target
> intervals (the ones being optimized), don't you? If so then you have to
> combine all those results into a single metric that you can use to compare
> one scale to another, right?

Yes, and admittedly, we could question the straight additivity. But
what do you mean by "ad-hoc smoothing step"? That didn't seem to be
clarified above . . .

> >>> This regards the use of "pointy" rather than "smooth" consonance
> >>> functions. You can see some pictures here:
> >>>
> >>> smooth: /harmonic_entropy/
> >>
> >> Hmm. You just pointed me to the H.E. page. No picture there.
> >
> > Really? That's odd . . . no picture on the homepage? I see one.
>
> Hmm. I tried another browser and it's no different. There is a *missing*
> picture to the left of the description. It fails to load and fails to
> reload upon manual request.

Really strange. I don't have these problems at all. Anyone else?

Try:

/harmonic_entropy/files/dyadic/default.gif

> >>>> and Duodene has a 4:5:6:7 approximation in two keys,
> >>> In two keys? Perhaps you really just meant "positions"
> >>> or "transpositions", not "keys"?
> >> I meant the 128:225 appears twice in any fixed tuning of the
> >> scale. The 225:224-based approximations of the following 7-limit ratios
> >> occur as follows:
> >>
> >> 9:7 (actually 16:25) appears 4 times
> >> 6:7 (actually 32:75) appears 3 times
> >> 4:7 (actually 128:225) appears 2 times
> >>
> >> I'm sure you know that so this must be just some language thing.
> >> To me I use a fixed Duodene tuning in various "keys". I thought that was
> >> normal lingo.
> >
> > "Keys" to me means C major, D minor, etc.
>
> Yes, exactly. For a Duodene with F-C-G-D in the middle row, the 128:225
> appears in two keys: C# and G#. Right?

I wouldn't put it that way. "The key of C#" by itself usually means
the key of C# major, which would be the notes C#, D#, E#, F#, G#, A#,
B#. One could say that a 128:225 appears in the key of F#, since F#
major has both B and C# among its pitches.

> >>> But if so, what would your remark
> >>> above -- "with hopefully a choice of several chords achieving that in
> >>> most keys" -- mean? Did you really just mean "on most possible
> >>> roots/tonics"?
> >>
> >> Yes, exactly. I'd settle for "most" if I couldn't get "all".
> >
> > Then the big harmonic series scale probably isn't for you . . .
>
> Right, not once you introduce the otonal-clarity requirement, which I
> perhaps stated too vaguely. I'll try to restate it here to incorporate the
> additional idea of key clarity....
>
> The otonal clarity of a *chord* is the degree to which a single or a very
> small number of implied fundamentals are clearly present. A strong single
> implied fundamental is the condition of good clarity. A weaker implied
> fundamental or a blending of several without a clear "winner" is indicative
> of less clarity.

Ok, so the single harmonic series chord looks good so far.

> Otonal clarity is key-appropriate when the strong implied fundamental is in
> agreement with the root of the chord. (I personally tend to call this the
> key of the chord and I'm not sure that is good music-theory lingo.)

It would be better to stick with "root".

> The otonally-clear chord variety of a key in a scale is the degree to which
> there is a variety of available otonally clear chords appropriate to that
> key.

That root?

🔗Kurt Bigler <kkb@...>

5/4/2004 5:32:52 PM

on 5/4/04 8:14 AM, wallyesterpaulrus <wallyesterpaulrus@...> wrote:

> --- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
>
>>>> But seems to me that depends on how you add them up
>>>> over all the intervals being optimized, and doing an ad-hoc
>>>> smoothing step
>>>> might not really be to the point.
>>>
>>> Don't follow.
>>
>> You apply the consonance function independently to a whole set of target
>> intervals (the ones being optimized), don't you? If so then you have to
>> combine all those results into a single metric that you can use to compare
>> one scale to another, right?
>
> Yes, and admittedly, we could question the straight additivity. But
> what do you mean by "ad-hoc smoothing step"? That didn't seem to be
> clarified above . . .

I just meant that if you *have* a function that is "correct" based on
theory, and it's not working, applying smoothing to it strikes me as an
ad-hoc response, especially if there is a conceivable alternative to work
on.

My random thoughts are that if you square the difference between something
pointy and the value of the tip of the point, then you get something
rounded, as in squared(abs(x)) equals squared(x). But unless the HE
function decomposes naturally into independent functions that each have a
single "point" whose value can be computed, then I can think of no way to
make use of this. But maybe you can? Mind you that's like recreating the
HE function and I don't know that that can be justified *either* unless
there is a "natural" reason for it.
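As a toy check of the squaring idea, take the simplest possible pointy
function, Abs[x] itself:

f[x_] := Abs[x];
g[x_] := (f[x] - f[0])^2;       (* square the difference from the tip value *)
Plot[{f[x], g[x]}, {x, -1, 1}]  (* g comes out as x^2: the corner at 0 is gone *)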

>>>>> This regards the use of "pointy" rather than "smooth" consonance
>>>>> functions. You can see some pictures here:
>>>>>
>>>>> smooth: /harmonic_entropy/
>>>>
>>>> Hmm. You just pointed me to the H.E. page. No picture there.
>>>
>>> Really? That's odd . . . no picture on the homepage? I see one.
>>
>> Hmm. I tried another browser and it's no different. There is a *missing*
>> picture to the left of the description. It fails to load and fails to
>> reload upon manual request.
>
> Really strange. I don't have these problems at all. Anyone else?
>
> Try:
>
> /harmonic_entropy/files/dyadic/default.gif

It's the same old problem. Entering the above url gives me a specific mirror
url with a lot of gibberish, starting with

http://f4.grp.yahoofs.com/

I changed the f4 to f3 and then I saw the picture. But also I already found
it on the HE page of the tuning dictionary.

>>>>>> and Duodene has a 4:5:6:7 approximation in two keys,
>>>>> In two keys? Perhaps you really just meant "positions"
>>>>> or "transpositions", not "keys"?
>>>> I meant the 128:225 appears twice in any fixed tuning of the
>>>> scale. The 225:224-based approximations of the following 7-limit ratios
>>>> occur as follows:
>>>>
>>>> 9:7 (actually 16:25) appears 4 times
>>>> 6:7 (actually 32:75) appears 3 times
>>>> 4:7 (actually 128:225) appears 2 times
>>>>
>>>> I'm sure you know that so this must be just some language thing.
>>>> To me I use a fixed Duodene tuning in various "keys". I thought that was
>>>> normal lingo.
>>>
>>> "Keys" to me means C major, D minor, etc.
>>
>> Yes, exactly. For a Duodene with F-C-G-D in the middle row, the 128:225
>> appears in two keys: C# and G#. Right?
>
> I wouldn't put it that way. "The key of C#" by itself usually means
> the key of C# major, which would be the notes C#, D#, E#, F#, G#, A#,
> B#. One could say that a 128:225 appears in the key of F#, since F#
> major has both B and C# among its pitches.

Right. Implicit is that in constructing chords, intervals that are not good
in a given key won't be used, and so I ignore them, and in that sense the
4:7 approx appears in C#. But also the 4:7 spelling, having a power of 2 at
the bottom, kind of implies that 4 is the root and therefore the "key".

I guess I'm getting my music theory lingo updated to a more functional
level. I always hated music theory and tried to ignore it because it was
"stupid". ;) 4:5:6:7 makes much more sense to me than
subdominant-augmented-major-bologna.

>>>>> But if so, what would your remark
>>>>> above -- "with hopefully a choice of several chords achieving that in
>>>>> most keys" -- mean? Did you really just mean "on most possible
>>>>> roots/tonics"?
>>>>
>>>> Yes, exactly. I'd settle for "most" if I couldn't get "all".
>>>
>>> Then the big harmonic series scale probably isn't for you . . .
>>
>> Right, not once you introduce the otonal-clarity requirement, which I
>> perhaps stated too vaguely. I'll try to restate it here to incorporate the
>> additional idea of key clarity....
>>
>> The otonal clarity of a *chord* is the degree to which a single or a very
>> small number of implied fundamentals are clearly present. A strong single
>> implied fundamental is the condition of good clarity. A weaker implied
>> fundamental or a blending of several without a clear "winner" is indicative
>> of less clarity.
>
> Ok, so the single harmonic series chord looks good so far.

Yes, but you can't expect such a chord to exist in many keys of a
fixed-tuning scale, so it is doomed to failure unless you have dynamic
retuning.

>> Otonal clarity is key-appropriate when the strong implied fundamental is in
>> agreement with the root of the chord. (I personally tend to call this the
>> key of the chord and I'm not sure that is good music-theory lingo.)
>
> It would be better to stick with "root".

So a little OT music theory question here. If you have a piece in C and you
are playing a G major chord isn't it correct to say that the piece has at
that moment modulated to G major?

>> The otonally-clear chord variety of a key in a scale is the degree to which
>> there is a variety of available otonally clear chords appropriate to that
>> key.
>
> That root?

Yes.

Mind you those definitions need some work, but it seems to me they are a
start at expressing something that is rather essential to the functioning of
a scale, especially a finite (fixed tuning) scale.

-Kurt

🔗wallyesterpaulrus <wallyesterpaulrus@...>

5/5/2004 11:50:56 AM

--- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
> on 5/4/04 8:14 AM, wallyesterpaulrus <wallyesterpaulrus@y...> wrote:
>
> > --- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
> >
> >>>> But seems to me that depends on how you add them up
> >>>> over all the intervals being optimized, and doing an ad-hoc
> >>>> smoothing step
> >>>> might not really be to the point.
> >>>
> >>> Don't follow.
> >>
> >> You apply the consonance function independently to a whole set of target
> >> intervals (the ones being optimized), don't you? If so then you have to
> >> combine all those results into a single metric that you can use to
> >> compare one scale to another, right?
> >
> > Yes, and admittedly, we could question the straight additivity. But
> > what do you mean by "ad-hoc smoothing step"? That didn't seem to be
> > clarified above . . .
>
> I just meant that if you *have* a function that is "correct" based on
> theory, and it's not working, applying smoothing to it strikes me as an
> ad-hoc response, especially if there is a conceivable alternative to work
> on.

Who mentioned smoothing? I must be missing something. There was
nothing "ad-hoc" or less "correct" about the curves with quadratic
minima vs. those with abs-exponential minima, if that's what you have
in mind. In fact, the theory behind the former is more clear --
gaussian-distributed uncertainty spreads are typical, and can be
derived from a central limit theorem, etc. . . . while abs-
exponential-distributed uncertainty spreads are entirely ad-hoc as
far as I know.

> My random thoughts are that if you square the difference between something
> pointy and the value of the tip of the point, then you get something
> rounded, as in squared(abs(x)) equals squared(x).

Hmm . . . I don't think this would work here because (exp|x|)^2 =
exp(|2x|), right? Still pointy.

> But unless the HE
> function decomposes naturally into independent functions that each have a
> single "point" whose value can be computed, then I can think of no way to
> make use of this. But maybe you can?

I lost you.

> >>>>>> and Duodene has a 4:5:6:7 approximation in two keys,
> >>>>> In two keys? Perhaps you really just meant "positions"
> >>>>> or "transpositions", not "keys"?
> >>>> I meant the 128:225 appears twice in any fixed tuning of the
> >>>> scale. The 225:224-based approximations of the following 7-limit ratios
> >>>> occur as follows:
> >>>>
> >>>> 9:7 (actually 16:25) appears 4 times
> >>>> 6:7 (actually 32:75) appears 3 times
> >>>> 4:7 (actually 128:225) appears 2 times
> >>>>
> >>>> I'm sure you know that so this must be just some language thing.
> >>>> To me I use a fixed Duodene tuning in various "keys". I thought that was
> >>>> normal lingo.
> >>>
> >>> "Keys" to me means C major, D minor, etc.
> >>
> >> Yes, exactly. For a Duodene with F-C-G-D in the middle row, the 128:225
> >> appears in two keys: C# and G#. Right?
> >
> > I wouldn't put it that way. "The key of C#" by itself usually means
> > the key of C# major, which would be the notes C#, D#, E#, F#, G#, A#,
> > B#. One could say that a 128:225 appears in the key of F#, since F#
> > major has both B and C# among its pitches.
>
> Right. Implicit is that in constructing chords, intervals that are not good
> in a given key won't be used, and so I ignore them, and in that sense the
> 4:7 approx appears in C#. But also the 4:7 spelling, having a power of 2 at
> the bottom, kind of implies that 4 is the root and therefore the "key".

Root yes, key no (to me).

> >> Otonal clarity is key-appropriate when the strong implied fundamental is
> >> in agreement with the root of the chord. (I personally tend to call this
> >> the key of the chord and I'm not sure that is good music-theory lingo.)
> >
> > It would be better to stick with "root".
>
> So a little OT music theory question here. If you have a piece in C and you
> are playing a G major chord isn't it correct to say that the piece has at
> that moment modulated to G major?

No, it would only have modulated to G major if you played a D7 chord
or F# half-diminished seventh or something similar resolving to G
major in a true cadence. Have you read my paper
http://www.lumma.org/tuning/erlich/erlich-decatonic.pdf ? There I
explain this conventional thinking in terms of the "characteristic
dissonances" the chord forms with the scale. The G major chord
contains a B, which forms a "characteristic dissonance" against F in
the scale, demanding resolution -- unless the F had already been
clearly replaced with F#. The only "resolved" chords in the "white
notes" scale, against which any melody in scale tones can be played
without involving the characteristic dissonance, are C major and A
minor. Thus these become the only two tonal 'keys' that consist of
the "white notes".

🔗Kurt Bigler <kkb@...>

5/5/2004 5:37:27 PM

on 5/5/04 11:50 AM, wallyesterpaulrus <wallyesterpaulrus@...> wrote:

> --- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
>> on 5/4/04 8:14 AM, wallyesterpaulrus <wallyesterpaulrus@y...> wrote:
>>
>>> --- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
>>>
>>>>>> But seems to me that depends on how you add them up
>>>>>> over all the intervals being optimized, and doing an ad-hoc
>>>>>> smoothing step
>>>>>> might not really be to the point.
>>>>>
>>>>> Don't follow.
>>>>
>>>> You apply the consonance function independently to a whole set of target
>>>> intervals (the ones being optimized), don't you? If so then you have to
>>>> combine all those results into a single metric that you can use to
>>>> compare one scale to another, right?
>>>
>>> Yes, and admittedly, we could question the straight additivity. But
>>> what do you mean by "ad-hoc smoothing step"? That didn't seem to be
>>> clarified above . . .
>>
>> I just meant that if you *have* a function that is "correct" based on
>> theory, and it's not working, applying smoothing to it strikes me as an
>> ad-hoc response, especially if there is a conceivable alternative to work
>> on.
>
> Who mentioned smoothing?

I got it by putting 2 and 2 together from these messages...

on 4/30/04 9:32 PM, Gene Ward Smith <gwsmith@...> wrote:
> --- In tuning@yahoogroups.com, "wallyesterpaulrus" <paul@s...> wrote:
>>> Yow. That suggests that if you ran a cubic spline through your points
>> I don't get this part . . .
> It's a way of smoothly interpolating; however I was assuming the
> function in question was supposed to be smooth.

on 5/3/04 2:29 PM, wallyesterpaulrus <paul@...> wrote:
> I don't think the problem necessarily was the consonance function
> (although a cubic spline, as Gene suggested, might have helped)

But of course you also said:

on 4/30/04 9:38 PM, wallyesterpaulrus <paul@...> wrote:
> The older stuff used a function that might be more
> amenable to "splining", since it is differentiable everywhere

> I must be missing something. There was
> nothing "ad-hoc" or less "correct" about the curves with quadratic
> minima vs. those with abs-exponential minima, if that's what you have
> in mind. In fact, the theory behind the former is more clear --
> gaussian-distributed uncertainty spreads are typical, and can be
> derived from a central limit theorem, etc. . . . while abs-
> exponential-distributed uncertainty spreads are entirely ad-hoc as
> far as I know.

Ok. I know too little about this and I'm probably not going to be able to
learn it quickly.

>> My random thoughts are that if you square the difference between something
>> pointy and the value of the tip of the point, then you get something
>> rounded, as in squared(abs(x)) equals squared(x).
>
> Hmm . . . I don't think this would work here because (exp|x|)^2 =
> exp(|2x|), right? Still pointy.

If it approaches linear in the limit of the point then the same idea
applies. Visually I thought it looked like it approached linear near the
points.
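(And it does: exp(-|x|) = 1 - |x| + x^2/2 - ..., so each side of the point is
locally a line of slope -1 or +1.)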

>> But unless the HE
>> function decomposes naturally into independent functions that each have a
>> single "point" whose value can be computed, then I can think of no way to
>> make use of this. But maybe you can?
>
> I lost you.

If a single point could be smoothed by squaring the difference near a point
(topic of previous quote) then this is addressing the question of how that
could be done for *all* the points.

>> Right. Implicit is that in constructing chords, intervals that are not good
>> in a given key won't be used, and so I ignore them, and in that sense the
>> 4:7 approx appears in C#. But also the 4:7 spelling, having a power of 2 at
>> the bottom, kind of implies that 4 is the root and therefore the "key".
>
> Root yes, key no (to me).

Well darn.

>>>> Otonal clarity is key-appropriate when the strong implied fundamental is
>>>> in agreement with the root of the chord. (I personally tend to call this
>>>> the key of the chord and I'm not sure that is good music-theory lingo.)
>>>
>>> It would be better to stick with "root".
>>
>> So a little OT music theory question here. If you have a piece in C and you
>> are playing a G major chord isn't it correct to say that the piece has at
>> that moment modulated to G major?
>
> No, it would only have modulated to G major if you played a D7 chord
> or F# half-diminished seventh or something similar resolving to G
> major in a true cadence. Have you read my paper
> http://www.lumma.org/tuning/erlich/erlich-decatonic.pdf ? There I
> explain this conventional thinking in terms of the "characteristic
> dissonances" the chord forms with the scale. The G major chord
> contains a B, which forms a "characteristic dissonance" against F in
> the scale, demanding resolution -- unless the F had already been
> clearly replaced with F#. The only "resolved" chords in the "white
> notes" scale, against which any melody in scale tones can be played
> without involving the characteristic dissonance, are C major and A
> minor. Thus these become the only two tonal 'keys' that consist of
> the "white notes".

Well I guess I have a lot to learn there.

-Kurt

🔗wallyesterpaulrus <wallyesterpaulrus@...>

5/5/2004 6:23:27 PM

--- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
> on 5/5/04 11:50 AM, wallyesterpaulrus <wallyesterpaulrus@y...> wrote:
>
> > --- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
> >> on 5/4/04 8:14 AM, wallyesterpaulrus <wallyesterpaulrus@y...> wrote:
> >>
> >>> --- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
> >>>
> >>>>>> But seems to me that depends on how you add them up
> >>>>>> over all the intervals being optimized, and doing an ad-hoc
> >>>>>> smoothing step
> >>>>>> might not really be to the point.
> >>>>>
> >>>>> Don't follow.
> >>>>
> >>>> You apply the consonance function independently to a whole set of
> >>>> target intervals (the ones being optimized), don't you? If so then you
> >>>> have to combine all those results into a single metric that you can
> >>>> use to compare one scale to another, right?
> >>>
> >>> Yes, and admittedly, we could question the straight additivity. But
> >>> what do you mean by "ad-hoc smoothing step"? That didn't seem to be
> >>> clarified above . . .
> >>
> >> I just meant that if you *have* a function that is "correct" based on
> >> theory, and it's not working, applying smoothing to it strikes me as an
> >> ad-hoc response, especially if there is a conceivable alternative to
> >> work on.
> >
> > Who mentioned smoothing?
>
> I got it by putting 2 and 2 together from these messages...

'Splining' is not smoothing. It's a method of *interpolation*. In
other words, if I've calculated a function only at integer cents
values, it can 'fill in the gaps' of the function based on the
already-calculated points -- and it's often a really good
approximation that can be computationally more efficient than
computing the function all over again at whatever fractional-cents
point you need when searching.

Gene said "however" below because the function with abs-exponential
minima, since it is not smooth to begin with, is not one where
splining is going to give good approximations. It'll "round out"
those minima too much.
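If you want to see both effects, here's a minimal sketch (my toy curve, not
the real harmonic-entropy data; Mathematica's built-in Interpolation with
Method -> "Spline" stands in for the cubic spline):

pointy[c_] := -Exp[-Abs[c - 701.955]/20.];   (* abs-exponential minimum at the just fifth *)
tab = Table[{c, pointy[c]}, {c, 650, 750}];  (* tabulate once, at integer cents *)
sp = Interpolation[tab, Method -> "Spline"]; (* cheap to evaluate between the knots *)
{pointy[701.955], sp[701.955]}               (* -1. exactly vs. roughly -0.997: the spline rounds the corner off *)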

> on 4/30/04 9:32 PM, Gene Ward Smith <gwsmith@s...> wrote:
> > --- In tuning@yahoogroups.com, "wallyesterpaulrus" <paul@s...> wrote:
> >>> Yow. That suggests that if you ran a cubic spline through your points
> >> I don't get this part . . .
> > It's a way of smoothly interpolating; however I was assuming the
> > function in question was supposed to be smooth.

In the next snippet, I'm suggesting that the 'spline' method of
interpolation could have worked better than the linear-interpolation
method I originally used, to avoid 'traps' in the 11-dimensional
search space.

> on 5/3/04 2:29 PM, wallyesterpaulrus <paul@s...> wrote:
> > I don't think the problem necessarily was the consonance function
> > (although a cubic spline, as Gene suggested, might have helped)

And the next snippet again refers to the fact that smooth functions
are more amenable to 'splining' than pointy functions:

> But of course you also said:
>
> on 4/30/04 9:38 PM, wallyesterpaulrus <paul@s...> wrote:
> > The older stuff used a function that might be more
> > amenable to "splining", since it is differentiable everywhere

Is this beginning to make sense?

> > I must be missing something. There was
> > nothing "ad-hoc" or less "correct" about the curves with quadratic
> > minima vs. those with abs-exponential minima, if that's what you have
> > in mind. In fact, the theory behind the former is more clear --
> > gaussian-distributed uncertainty spreads are typical, and can be
> > derived from a central limit theorem, etc. . . . while abs-
> > exponential-distributed uncertainty spreads are entirely ad-hoc as
> > far as I know.
>
> Ok. I know too little about this and I'm probably not going to be able to
> learn it quickly.

I wouldn't be so pessimistic. Keep asking questions.

> >> My random thoughts are that if you square the difference between
> >> something pointy and the value of the tip of the point, then you get
> >> something rounded, as in squared(abs(x)) equals squared(x).
> >
> > Hmm . . . I don't think this would work here because (exp|x|)^2 =
> > exp(|2x|), right? Still pointy.
>
> If it approaches linear in the limit of the point then the same idea
> applies. Visually I thought it looked like it approached linear near the
> points.

It was approaching exp(-|x|), actually. So upon squaring it would be
approaching exp(-|2x|), still pointy.

> >> But unless the HE
> >> function decomposes naturally into independent functions that each
> >> have a single "point" whose value can be computed, then I can think of
> >> no way to make use of this. But maybe you can?
> >
> > I lost you.
>
> If a single point could be smoothed by squaring the difference near a point
> (topic of previous quote) then this is addressing the question of how that
> could be done for *all* the points.

Ah . . . So is this moot now?

🔗Kurt Bigler <kkb@...>

5/6/2004 10:35:29 AM

on 5/5/04 6:23 PM, wallyesterpaulrus <wallyesterpaulrus@...> wrote:

> --- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
>> on 5/5/04 11:50 AM, wallyesterpaulrus <wallyesterpaulrus@y...> wrote:
>>
>>> --- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
>>>> on 5/4/04 8:14 AM, wallyesterpaulrus <wallyesterpaulrus@y...> wrote:
>>>>
>>>>> --- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
>>>>>
>>>>>>>> But seems to me that depends on how you add them up
>>>>>>>> over all the intervals being optimized, and doing an ad-hoc
>>>>>>>> smoothing step
>>>>>>>> might not really be to the point.
>>>>>>>
>>>>>>> Don't follow.
>>>>>>
>>>>>> You apply the consonance function independently to a whole set of
>>>>>> target intervals (the ones being optimized), don't you? If so then you
>>>>>> have to combine all those results into a single metric that you can
>>>>>> use to compare one scale to another, right?
>>>>>
>>>>> Yes, and admittedly, we could question the straight additivity. But
>>>>> what do you mean by "ad-hoc smoothing step"? That didn't seem to be
>>>>> clarified above . . .
>>>>
>>>> I just meant that if you *have* a function that is "correct" based on
>>>> theory, and it's not working, applying smoothing to it strikes me as an
>>>> ad-hoc response, especially if there is a conceivable alternative to
>>>> work on.
>>>
>>> Who mentioned smoothing?
>>
>> I got it by putting 2 and 2 together from these messages...
>
> 'Splining' is not smoothing. It's a method of *interpolation*. In
> other words, if I've calculated a function only at integer cents
> values, it can 'fill in the gaps' of the function based on the
> already-calculated points -- and it's often a really good
> approximation that can be computationally more efficient than
> computing the function all over again at whatever fractional-cents
> point you need when searching.
>
> Gene said "however" below because the function with abs-exponential
> minima, since it is not smooth to begin with, is not one where
> splining is going to give good approximations. It'll "round out"
> those minima too much.

Ok, I got confused by that one sentence of Gene's. By "function in
question" I didn't understand "function being approximated" but rather
"function being calculated" and that threw me.

>> on 4/30/04 9:32 PM, Gene Ward Smith <gwsmith@s...> wrote:
>>> --- In tuning@yahoogroups.com, "wallyesterpaulrus" <paul@s...> wrote:
>>>>> Yow. That suggests that if you ran a cubic spline through your points
>>>> I don't get this part . . .
>>> It's a way of smoothly interpolating; however I was assuming the
>>> function in question was supposed to be smooth.
>
> In the next snippet, I'm suggesting that the 'spline' method of
> interpolation could have worked better than the linear-interpolation
> method I originally used, to avoid 'traps' in the 11-dimensional
> search space.

Ah, it was what wasn't said that caused me the confusion, and somehow you and
Gene just understood each other anyway. Not so unusual, of course.

> Is this beginning to make sense?

I get the whole thing now. Basically it amounted to the difference between
"smoothly interpolating" and "smoothing" something to make it smooth because
it was "supposed to be smooth".

>>> abs-exponential-distributed uncertainty spreads are entirely ad-hoc as far
>>> as I know.
>> Ok. I know too little about this and I'm probably not going to be able to
>> learn it quickly.
> I wouldn't be so pessimistic. Keep asking questions.

I'm merely realistic. I don't have time to ask about everything!

>>>> My random thoughts are that if you square the difference between
>>>> something pointy and the value of the tip of the point, then you get
>>>> something rounded, as in squared(abs(x)) equals squared(x).
>>>
>>> Hmm . . . I don't think this would work here because (exp|x|)^2 =
>>> exp(|2x|), right? Still pointy.

Now I know why you lost me in the following part. You're squaring the wrong
thing. Go back to what I originally said:

on 5/4/04 5:32 PM, Kurt Bigler <kkb@...> wrote:
> My random thoughts are that if you square the difference between something
> pointy and the value of the tip of the point, then you get something rounded,
> as in squared(abs(x)) equals squared(x).

So my understanding is that your function is continuous, and differentiable
except at the pointy points, where the first derivative jumps. Therefore
it has a meaningful tangent on each side of such a point, apparently with
slopes of equal magnitude and opposite sign, from what you say. At a given
"pointy" point you have the value f(x0). So what you square is f(x) -
f(x0). You probably didn't recognize this because it seems useless: it
deals with only one pointy point at x0 and doesn't help with the rest of
them. That's where this next part comes in:

on 5/4/04 5:32 PM, Kurt Bigler <kkb@...> wrote:
> But unless the HE function decomposes naturally into independent functions
> that each have a single "point" whose value can be computed, then I can think
> of no way to make use of this.

In other words I was looking for the way to apply this same fix
independently at each "pointy" point. But:

on 5/4/04 5:32 PM, Kurt Bigler <kkb@...> wrote:
> Mind you that's like recreating the HE function and I don't know that that can
> be justified *either* unless there is a "natural" reason for it.

>>
>> If it approaches linear in the limit of the point then the same idea
>> applies. Visually I thought it looked like it approached linear near the
>> points.
>
> It was approaching exp(-|x|), actually. So upon squaring it would be
> approaching exp(-|2x|), still pointy.

Yes, so square exp(-|x|) - 1 instead.

>>>> But unless the HE
>>>> function decomposes naturally into independent functions that each
>>>> have a single "point" whose value can be computed, then I can think of
>>>> no way to make use of this. But maybe you can?
>>>
>>> I lost you.
>>
>> If a single point could be smoothed by squaring the difference near a point
>> (topic of previous quote) then this is addressing the question of how that
>> could be done for *all* the points.
>
> Ah . . . So is this moot now?

I don't know. You tell me.

-Kurt

🔗wallyesterpaulrus <wallyesterpaulrus@...>

5/7/2004 4:16:24 PM

--- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:

> >>>> My random thoughts are that if you square the difference between
> >>>> something pointy and the value of the tip of the point, then you get
> >>>> something rounded, as in squared(abs(x)) equals squared(x).
> >>>
> >>> Hmm . . . I don't think this would work here because (exp|x|)^2 =
> >>> exp(|2x|), right? Still pointy.
>
> Now I know why you lost me in the following part. You're squaring the wrong
> thing. Go back to what I originally said:
>
> on 5/4/04 5:32 PM, Kurt Bigler <kkb@b...> wrote:
> > My random thoughts are that if you square the difference between something
> > pointy and the value of the tip of the point, then you get something
> > rounded, as in squared(abs(x)) equals squared(x).
>
> So my understanding is that your function is continuous, and differentiable
> except at the pointy points, where the first derivative jumps. Therefore
> it has a meaningful tangent on each side of such a point, apparently with
> slopes of equal magnitude and opposite sign, from what you say. At a given
> "pointy" point you have the value f(x0). So what you square is f(x) -
> f(x0). You probably didn't recognize this because it seems useless: it
> deals with only one pointy point at x0 and doesn't help with the rest of
> them. That's where this next part comes in:
>
> on 5/4/04 5:32 PM, Kurt Bigler <kkb@b...> wrote:
> > But unless the HE function decomposes naturally into independent functions
> > that each have a single "point" whose value can be computed, then I can
> > think of no way to make use of this.
>
> In other words I was looking for the way to apply this same fix
> independently at each "pointy" point. But:
>
> on 5/4/04 5:32 PM, Kurt Bigler <kkb@b...> wrote:
> > Mind you that's like recreating the HE function and I don't know that
> > that can be justified *either* unless there is a "natural" reason for it.
>
> >>
> >> If it approaches linear in the limit of the point then the same idea
> >> applies. Visually I thought it looked like it approached linear near
> >> the points.
> >
> > It was approaching exp(-|x|), actually. So upon squaring it would be
> > approaching exp(-|2x|), still pointy.
>
> Yes, so square exp(-|x|) - 1 instead.

That gives you exp(-|2x|) - 2*exp(-|x|) + 1, which is still pointy,
right?

> >>>> But unless the HE
> >>>> function decomposes naturally into independent functions that each
> >>>> have a single "point" whose value can be computed, then I can think of
> >>>> no way to make use of this. But maybe you can?
> >>>
> >>> I lost you.
> >>
> >> If a single point could be smoothed by squaring the difference near a
> >> point (topic of previous quote) then this is addressing the question of
> >> how that could be done for *all* the points.
> >
> > Ah . . . So is this moot now?
>
> I don't know. You tell me.

Lovely day, isn't it? :)

🔗Kurt Bigler <kkb@...>

5/7/2004 6:10:53 PM

on 5/7/04 4:16 PM, wallyesterpaulrus <wallyesterpaulrus@...> wrote:

> --- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
>
>>
>>>>>> My random thoughts are that if you square the difference between
>>>>>> something pointy and the value of the tip of the point, then you get
>>>>>> something rounded, as in squared(abs(x)) equals squared(x).
>>>>>
>>>>> Hmm . . . I don't think this would work here because (exp|x|)^2 =
>>>>> exp(|2x|), right? Still pointy.
>>
>> Now I know why you lost me in the following part. You're squaring the
>> wrong thing. Go back to what I originally said:
>>
>> on 5/4/04 5:32 PM, Kurt Bigler <kkb@b...> wrote:
>>> My random thoughts are that if you square the difference between something
>>> pointy and the value of the tip of the point, then you get something
>>> rounded, as in squared(abs(x)) equals squared(x).
>>
>> So my understanding is that your function is continuous, and differentiable
>> except at the pointy points, where the first derivative jumps. Therefore
>> it has a meaningful tangent on each side of such a point, apparently with
>> slopes of equal magnitude and opposite sign, from what you say. At a given
>> "pointy" point you have the value f(x0). So what you square is f(x) -
>> f(x0). You probably didn't recognize this because it seems useless: it
>> deals with only one pointy point at x0 and doesn't help with the rest of
>> them. That's where this next part comes in:
>>
>> on 5/4/04 5:32 PM, Kurt Bigler <kkb@b...> wrote:
>>> But unless the HE function decomposes naturally into independent functions
>>> that each have a single "point" whose value can be computed, then I can
>>> think of no way to make use of this.
>>
>> In other words I was looking for the way to apply this same fix
>> independently at each "pointy" point. But:
>>
>> on 5/4/04 5:32 PM, Kurt Bigler <kkb@b...> wrote:
>>> Mind you that's like recreating the HE function and I don't know that
>>> that can be justified *either* unless there is a "natural" reason for it.
>>
>>>>
>>>> If it approaches linear in the limit of the point then the same idea
>>>> applies. Visually I thought it looked like it approached linear near
>>>> the points.
>>>
>>> It was approaching exp(-|x|), actually. So upon squaring it would be
>>> approaching exp(-|2x|), still pointy.
>>
>> Yes, so square exp(-|x|) - 1 instead.
>
> That gives you exp(-|2x|) - 2*exp(-|x|) + 1, which is still pointy,
> right?

(exp(-|x|) - 1)(exp(-|x|) - 1) =

exp(-|x|)^2 - 2exp(-|x|) + 1 =

exp(-2|x|) - 2exp(-|x|) + 1 =

exp(-|2x|) - 2exp(-|x|) + 1

And the first derivative of that for x > 0 is

d/dx [exp(-2x) - 2exp(-x) + 1] = -2exp(-2x) + 2exp(-x)

which in the limit x=0 equals

-2 + 2 = 0

And you can confirm that the slope will approach zero on the x < 0 side
also.

So, the first two terms are pointy in exactly opposite ways, causing the
point to cancel out. Mathematica's Plot function just confirmed this for
me. If you have it handy try:

Plot[
  {(Exp[-Abs[x]])^2,
   2 Exp[-Abs[x]],
   (Exp[-Abs[x]])^2 - 2 Exp[-Abs[x]]},
  {x, -1, 1}
]

It's not easy to send you an attachment of a screenshot from OS X, but
if you really want me to, I will try.

But that's the difficult way of looking at it. The easy way which I was
alluding to (though not trivial to describe here...) is that any
function with linear tangents that can be seen as a "fold" in the function
(a point at which the 1st derivative jumps, changing sign) creates the same
local situation as a simple line through that point (with the "fold"
artificially removed) when it comes to squaring that function "around" the
point in question. By squaring f(x) around x0 I mean squaring the difference
f(x) - f(x0).
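(Spelled out: near a fold, f(x) = f(x0) + s*|x - x0| plus higher-order stuff,
so (f(x) - f(x0))^2 = s^2*(x - x0)^2 plus higher-order stuff, whose derivative
at x0 is zero -- exactly as if the fold had been a straight line through x0
before the squaring.)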

-Kurt

🔗wallyesterpaulrus <wallyesterpaulrus@...>

5/10/2004 10:28:57 AM

Gotcha


🔗Kurt Bigler <kkb@...>

5/10/2004 10:41:24 AM

on 5/10/04 10:28 AM, wallyesterpaulrus <wallyesterpaulrus@...> wrote:

> Gotcha

Well then we can revisit the implications. I didn't say all that just to be
right. ;) Does the other stuff I wrote make more sense now and do you have
any new replies to the points that were previously pointless?

Thanks,
Kurt


🔗wallyesterpaulrus <wallyesterpaulrus@...>

5/10/2004 10:50:08 AM

--- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
> on 5/10/04 10:28 AM, wallyesterpaulrus <wallyesterpaulrus@y...> wrote:
>
> > Gotcha
>
> Well then we can revisit the implications. I didn't say all that just to be
> right. ;) Does the other stuff I wrote make more sense now and do you have
> any new replies to the points that were previously pointless?
>
> Thanks,
> Kurt

Well, I'm not sure what to do with this because

1. As you pointed out, you could only apply this to one minimum at a
time, not clear how to apply it to the whole function;

2. I'm not sure what the purpose was in the first place . . .

Let me know
Paul

>
>
>
>
> >
> > --- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...>
wrote:
> >> on 5/7/04 4:16 PM, wallyesterpaulrus <wallyesterpaulrus@y...>
wrote:
> >>
> >>> --- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...>
> > wrote:
> >>>
> >>>>
> >>>>>>>> My random thoughts are that if you square the difference
> >>> between
> >>>>>>> something
> >>>>>>>> pointy and the value of the tip of the point, then you get
> >>>>> something
> >>>>>>>> rounded, as in squared(abs(x)) equals squared(x).
> >>>>>>>
> >>>>>>> Hmm . . . I don't think this would work here because
(exp|x|)
> > ^2
> >>> =
> >>>>> exp
> >>>>>>> (|2x|), right? Still pointy.
> >>>>
> >>>> Now I know why you lost me in the following part. You're
> > squaring
> >>> the wrong
> >>>> thing. Go back to what I originally said:
> >>>>
> >>>> on 5/4/04 5:32 PM, Kurt Bigler <kkb@b...> wrote:
> >>>>> My random thoughts are that if you square the difference
> > between
> >>> something
> >>>>> pointy and the value of the tip of the point, then you get
> >>> something rounded,
> >>>>> as in squared(abs(x)) equals squared(x).
> >>>>
> >>>> So my understanding is that your function is continuous, and
the
> >>> first
> >>>> derivative is continuous and differentiable except at the
points.
> >>> Therefore
> >>>> it has a meaningful tangent at each side of the point,
apparently
> >>> having
> >>>> identical slopes of the opposite sign, from what you say. At a
> >>> given
> >>>> "pointy" point you have the value f(x0). So what you square
is f
> >>> (x) -
> >>>> f(x0). You probably didn't recognize this because it seems
> > useless
> >>> because
> >>>> it deals with only one pointy point at x0 and doesn't help with
> > the
> >>> rest of
> >>>> them. That's where this next part comes in:
> >>>>
> >>>> on 5/4/04 5:32 PM, Kurt Bigler <kkb@b...> wrote:
> >>>>> But unless the HE function decomposes naturally into
independent
> >>> functions
> >>>>> that each have a single "point" whose value can be computed,
> > then
> >>> I can think
> >>>>> of no way to make use of this.
> >>>>
> >>>> In other words I was looking for the way to apply this same fix
> >>>> independently at each "pointy" point. But:
> >>>>
> >>>> on 5/4/04 5:32 PM, Kurt Bigler <kkb@b...> wrote:
> >>>>> Mind you that's like recreating the HE function and I don't
know
> >>> that that can
> >>>>> be justified *either* unless there is a "natural" reason for
it.
> >>>>
> >>>>>>
> >>>>>> If it approaches linear in the limit of the point then the
same
> >>> idea
> >>>>>> applies. Visually I thought it looked like it approached
> > linear
> >>>>> near the
> >>>>>> points.
> >>>>>
> >>>>> It was approaching exp(-|x|), actually. So upon squaring it
> > would
> >>> be
> >>>>> approaching exp(-|2x|), still pointy.
> >>>>
> >>>> Yes, so square exp(-|x|) - 1 instead.
> >>>
> >>> That gives you exp(-|2x)) + 2*exp(|x|) + 1, which is still
pointy,
> >>> right?
> >>
> >> (exp(-|x|) - 1)(exp(-|x|) - 1) =
> >>
> >> exp(-|x|)^2 - 2exp(-|x|) + 1 =
> >>
> >> exp(-2|x|) - 2exp(-|x|) + 1 =
> >>
> >> exp(-|2x|) - 2exp(-|x|) + 1
> >>
> >>
> >> And the first derivative of that for x > 0 is
> >>
> >> d/dx [exp(-2x) - 2exp(-x) + 1] = -2exp(-2x) + 2exp(-x)
> >>
> >> which in the limit x -> 0 equals
> >>
> >> -2 + 2 = 0
> >>
> >>
> >> And you can confirm that the slope will approach zero on the x < 0
> >> side also.
> >>
> >>
> >> So, the first two terms are pointy in exactly opposite ways, causing
> >> the point to cancel out. Mathematica's plot function just confirmed
> >> this for me. If you have it handy try:
> >>
> >> Plot
> >> [
> >> { (Exp[-Abs[x]] )^2,
> >> 2Exp[-Abs[x]],
> >> (Exp[-Abs[x]] )^2 - 2Exp[-Abs[x]]
> >> }, {x, -1, 1}
> >> ]
> >>
> >> It's not easy to send you an attachment of a screenshot from OS X,
> >> but if you really want me to, I will try.
> >>
> >> But that's the difficult way of looking at it. The easy way which I
> >> was alluding to (though not trivial to describe here...) is that any
> >> function with linear tangents that can be seen as a "fold" in the
> >> function (a point at which the 1st derivative changes sign
> >> discontinuously) creates the same local situation as a simple line
> >> through that point (with the "fold" artificially removed) when it
> >> comes to squaring that function "around" the point in question. By
> >> squaring f(x) around x0 I mean squaring the difference
> >> f(x) - f(x0).
> >>
> >>
> >>
> >> -Kurt
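
To make the cancellation concrete, here is a minimal Python check (not
from the original thread; the step sizes are arbitrary): the one-sided
difference quotients of (exp(-|x|) - 1)^2 at x = 0 shrink toward zero
from both sides, while the same quotients for exp(-|x|) itself approach
-1 and +1, which is the corner.

    import math

    def f(x):
        # Kurt's smoothed component: the square of (exp(-|x|) - 1)
        return (math.exp(-abs(x)) - 1.0) ** 2

    for h in (1e-2, 1e-4, 1e-6):
        right = (f(h) - f(0.0)) / h   # one-sided slope from x > 0
        left = (f(0.0) - f(-h)) / h   # one-sided slope from x < 0
        print(h, right, left)         # both tend to 0: no corner at 0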

🔗Kurt Bigler <kkb@...>

5/10/2004 12:02:32 PM

on 5/10/04 10:50 AM, wallyesterpaulrus <wallyesterpaulrus@...> wrote:

> --- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
>> on 5/10/04 10:28 AM, wallyesterpaulrus <wallyesterpaulrus@y...>
>> wrote:
>>
>>> Gotcha
>>
>> Well then we can revisit the implications. I didn't say all that just
>> to be right. ;) Does the other stuff I wrote make more sense now and
>> do you have any new replies to the points that were previously
>> pointless?
>>
>> Thanks,
>> Kurt
>
> Well, I'm not sure what to do with this because
>
> 1. As you pointed out, you could only apply this to one minimum at a
> time; it's not clear how to apply it to the whole function;
>
> 2. I'm not sure what the purpose was in the first place . . .
>
> Let me know
> Paul

Ok, you're making me repeat myself, but I guess that's not too bad, since I
apparently wasn't clear enough. But throwing out half-baked ideas is part of
my style. I'm already thinking alone too much and this helps to create a
balance. ;)

In answer to point 1, this is why I asked the question about whether the HE
function could be decomposed into individual (additive) component functions
with a single pointy point each. In that case you could consider the idea
of applying this transformation to each component prior to adding them. And
as I said, that probably changes the resulting function so much that it may
not make sense.
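
As a toy illustration of that per-component idea (this is not the real
HE function; the ratios, depths, and width s below are made-up
assumptions), each pointy kernel can be squared around its own minimum
before the components are summed:

    import numpy as np

    ratios = [1.0, 6/5, 5/4, 4/3, 3/2, 5/3, 2.0]    # dip locations (assumed)
    depths = [1.0, 0.4, 0.6, 0.7, 0.9, 0.5, 1.0]    # dip depths (assumed)
    s = 0.02                                         # kernel width (assumed)
    x = np.linspace(1.0, 2.0, 2001)

    def pointy(x):
        # each component has a corner at its ratio
        return sum(-w * np.exp(-np.abs(x - r) / s)
                   for r, w in zip(ratios, depths))

    def smoothed(x):
        # the fix applied per component: square (g - g(minimum)), then
        # shift so each dip keeps its depth; the corners cancel as above
        return sum(w * ((np.exp(-np.abs(x - r) / s) - 1.0) ** 2 - 1.0)
                   for r, w in zip(ratios, depths))

    # both curves dip at the same ratios; only `smoothed` has zero slope there
    print(pointy(np.array([1.5])), smoothed(np.array([1.5])))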

So as a little half-baked hint, the next line of thinking might come from
realizing this decomposing idea has a real problem but might suggest an
alternate way of creating an H.E. function to avoid it. Maybe that's exactly
what your original smoother function was, maybe not. Part of the idea here
is to "push" the squaring down into a deeper level of the HE function in
any way that might make sense.

So does this make you think of anything useful? I don't think there is more
that I can add until I know a *lot* more about H.E. and exactly how you came
up with those functions.

In answer to point 2 the goal is to avoid the problems that the pointy
places are causing for your optimization algorithm. It could also be
that a customized optimization algorithm that is "used to" these pointy
places might be a way to go, if that turns out to make any sense.

-Kurt

🔗wallyesterpaulrus <wallyesterpaulrus@...>

5/10/2004 1:58:01 PM

--- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
> on 5/10/04 10:50 AM, wallyesterpaulrus <wallyesterpaulrus@y...> wrote:
>
> > --- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
> >> on 5/10/04 10:28 AM, wallyesterpaulrus <wallyesterpaulrus@y...>
> >> wrote:
> >>
> >>> Gotcha
> >>
> >> Well then we can revisit the implications. I didn't say all that
> >> just to be right. ;) Does the other stuff I wrote make more sense
> >> now and do you have any new replies to the points that were
> >> previously pointless?
> >>
> >> Thanks,
> >> Kurt
> >
> > Well, I'm not sure what to do with this because
> >
> > 1. As you pointed out, you could only apply this to one minimum at a
> > time; it's not clear how to apply it to the whole function;
> >
> > 2. I'm not sure what the purpose was in the first place . . .
> >
> > Let me know
> > Paul
>
> Ok, you're making me repeat myself, but I guess that's not too bad,
> since I apparently wasn't clear enough. But throwing out half-baked
> ideas is part of my style. I'm already thinking alone too much and
> this helps to create a balance. ;)
>
> In answer to point 1, this is why I asked the question about whether
> the HE function could be decomposed into individual (additive)
> component functions with a single pointy point each.

Yes, I remember that.

> In that case you could consider the idea of applying this
> transformation to each component prior to adding them. And as I said,
> that probably changes the resulting function so much that it may not
> make sense.

Well, harmonic entropy is computed partly by assuming that, in a
sense, there's a certain probability function for what interval you
hear when any interval is played, and this heard distribution is
centered around the played interval. In the "pointy" case this
distribution is a normalized exp(-|cx|) distribution, while in the
default case it's a normalized exp(-cx^2) distribution, i.e., a
normal distribution. So it would seem that the default harmonic
entropy curves get at your desires more directly without even
worrying about the question of additive decomposition, yes?
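
For concreteness, here is a minimal sketch of that recipe (the n*d
cutoff and the use of mediants as bin boundaries follow the descriptions
in this thread; s = 17 cents and the 10000 ceiling are illustrative
values, not definitive ones):

    import math
    from fractions import Fraction

    def candidates(nd_limit=10000):
        # reduced ratios n/d in [1, 2] with n*d below the cutoff;
        # since n >= d, only denominators up to sqrt(nd_limit) qualify
        out, d = set(), 1
        while d * d <= nd_limit:
            for n in range(d, min(2 * d, nd_limit // d) + 1):
                out.add(Fraction(n, d))
            d += 1
        return sorted(out)

    def cents(r):
        return 1200.0 * math.log2(float(r))

    def entropy(interval_cents, s=17.0, nd_limit=10000):
        rs = candidates(nd_limit)
        # bin edges at the mediant between each pair of neighboring ratios
        edges = [-math.inf] + [
            cents(Fraction(a.numerator + b.numerator,
                           a.denominator + b.denominator))
            for a, b in zip(rs, rs[1:])] + [math.inf]
        Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF
        ps = [Phi((hi - interval_cents) / s) - Phi((lo - interval_cents) / s)
              for lo, hi in zip(edges, edges[1:])]
        return -sum(p * math.log(p) for p in ps if p > 0.0)

    print(entropy(702.0))   # near 3/2: a local minimum
    print(entropy(650.0))   # far from simple ratios: higher entropy

Swapping the normal CDF for the CDF of a two-sided exponential would
give the "pointy" variant being discussed.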

However, the question is a very interesting one and I hope some of
the mathematicians on this list will look into it.

> So as a little half-baked hint, the next line of thinking might come
> from realizing this decomposing idea has a real problem but might
> suggest an alternate way of creating an H.E. function to avoid it.
> Maybe that's exactly what your original smoother function was, maybe
> not. Part of the idea here is to "push" the squaring down into a
> deeper level of the HE function in any way that might make sense.

Well there we go.

> So does this make you think of anything useful? I don't think there
> is more that I can add until I know a *lot* more about H.E. and
> exactly how you came up with those functions.

Have you looked at, say, the most recent 20 posts on this list, as
well as the Tonalsoft pages?

> In answer to point 2 the goal is to avoid the problems that the
> pointy places are causing for your optimization algorithm.

Right . . . well, I never tried optimizing using any of the "pointy"
harmonic entropy curves at all. As I tried to clarify, these problems
arose when using the smooth curves, but the optimization might have been
getting stuck because I was using linear interpolation, instead of
splines, to get the values for intervals in-between the integer-cents
intervals where the function had been calculated.
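
The difference is easy to see in code. In this sketch the table is a
stand-in curve, not real HE data: np.interp yields a piecewise-linear
function with a corner at every grid point, while a cubic spline through
the same table has continuous derivatives for an optimizer to follow.

    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.optimize import minimize_scalar

    grid = np.arange(0.0, 1201.0)                    # integer-cents grid
    table = np.cos(grid / 1200.0 * 14 * np.pi) + grid / 2400.0  # stand-in data

    linear = lambda c: np.interp(c, grid, table)     # corners at grid points
    spline = CubicSpline(grid, table)                # smooth through same data

    res = minimize_scalar(spline, bounds=(650.0, 750.0), method="bounded")
    print(res.x, float(spline(res.x)), float(linear(res.x)))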

🔗wallyesterpaulrus <wallyesterpaulrus@...>

5/10/2004 2:04:45 PM

--- In harmonic_entropy@yahoogroups.com, "wallyesterpaulrus"
<wallyesterpaulrus@y...> wrote:

> Have you looked at, say, the most recent 20 posts on this list,

Actually I was thinking this one:

/harmonic_entropy/topicId_707.html#708

🔗Kurt Bigler <kkb@...>

5/10/2004 2:52:19 PM

on 5/10/04 1:58 PM, wallyesterpaulrus <wallyesterpaulrus@...> wrote:

> Well, harmonic entropy is computed partly by assuming that, in a
> sense, there's a certain probability function for what interval you
> hear when any interval is played, and this heard distribution is
> centered around the played interval. In the "pointy" case this
> distribution is a normalized exp(-|cx|) distribution, while in the
> default case it's a normalized exp(-cx^2) distribution, i.e., a
> normal distribution. So it would seem that the default harmonic
> entropy curves get at your desires more directly without even
> worrying about the question of additive decomposition, yes?

Yes, that may indeed be true. I think I'm getting it more clearly now. The
different assumption of distribution affects the curve, but in a slightly
indirect way, right? So the difference between changing the distribution
assumption and altering the function more after-the-fact (including by
decomposition etc.) might have interesting subtle implications.

> However, the question is a very interesting one and I hope some of
> the mathematicians on this list will look into it.
>
>> So as a little half-baked hint, the next line of thinking might come
>> from realizing this decomposing idea has a real problem but might
>> suggest an alternate way of creating an H.E. function to avoid it.
>> Maybe that's exactly what your original smoother function was, maybe
>> not. Part of the idea here is to "push" the squaring down into a
>> deeper level of the HE function in any way that might make sense.
>
> Well there we go.

Yes, still different from changing the distribution assumption, or not?
Feel free to wait to answer until I have studied the H.E. stuff more.

> Have you looked at, say, the most recent 20 posts on this list,

That should be an easy enough assignment!

> as well as the Tonalsoft pages?

I've looked at the tuning dictionary pages at various times, but it's been a
few months now. I'll also look at them again.

>> In answer to point 2 the goal is to avoid the problems that the
>> pointy places are causing for your optimization algorithm.
>
> Right . . . well, I never tried optimizing using any of the "pointy"
> harmonic entropy curves at all. As I tried to clarify, these problems
> arose when using the smooth curves, but the optimization might have
> been getting stuck because I was using linear interpolation, instead
> of splines, to get the values for intervals in-between the
> integer-cents intervals where the function had been calculated.

I thought the point was that the pointy ones wouldn't spline so well. So
the idea here was to find a variant on the pointy ones that would spline
better and therefore not have this problem. Of course if the answer is just
"don't use the pointy ones" that's fine. My impression was at one point
that the pointy ones represented something "better", or at least worth
trying because there are possibly better, and I was looking at how to retain
that [possible] advantage while getting rid of the pointy problem if in fact
that is not a contradiction, the point again being the possible "subtle
implications" I referred to above.

However, even the pointy ones could be represented by *segmented* spline
functions, no?

-Kurt

🔗wallyesterpaulrus <wallyesterpaulrus@...>

5/10/2004 3:07:12 PM

--- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
> on 5/10/04 1:58 PM, wallyesterpaulrus <wallyesterpaulrus@y...> wrote:
>
> > Well, harmonic entropy is computed partly by assuming that, in a
> > sense, there's a certain probability function for what interval you
> > hear when any interval is played, and this heard distribution is
> > centered around the played interval. In the "pointy" case this
> > distribution is a normalized exp(-|cx|) distribution, while in the
> > default case it's a normalized exp(-cx^2) distribution, i.e., a
> > normal distribution. So it would seem that the default harmonic
> > entropy curves get at your desires more directly without even
> > worrying about the question of additive decomposition, yes?
>
> Yes, that may indeed be true. I think I'm getting it more clearly
> now. The different assumption of distribution affects the curve, but
> in a slightly indirect way, right?

Yes.

> So the difference between changing the distribution assumption and
> altering the function more after-the-fact (including by decomposition
> etc.) might have interesting subtle implications.

Don't rule out the possibility that there might be no difference!

> > However, the question is a very interesting one and I hope some of
> > the mathematicians on this list will look into it.
> >
> >> So as a little half-baked hint, the next line of thinking might
> >> come from realizing this decomposing idea has a real problem but
> >> might suggest an alternate way of creating an H.E. function to
> >> avoid it. Maybe that's exactly what your original smoother function
> >> was, maybe not. Part of the idea here is to "push" the squaring
> >> down into a deeper level of the HE function in any way that might
> >> make sense.
> >
> > Well there we go.
>
> Yes, still different from changing the distribution assumption, or
> not?

I thought it sounded like changing the distribution assumption, since
that is at a deeper level.

> > Right . . . well, I never tried optimizing using any of the
> > "pointy" harmonic entropy curves at all. As I tried to clarify,
> > these problems arose when using the smooth curves, but the
> > optimization might have been getting stuck because I was using
> > linear interpolation, instead of splines, to get the values for
> > intervals in-between the integer-cents intervals where the function
> > had been calculated.
>
> I thought the point was that the pointy ones wouldn't spline so
> well. So the idea here was to find a variant on the pointy ones that
> would spline better and therefore not have this problem. Of course if
> the answer is just "don't use the pointy ones" that's fine. My
> impression at one point was that the pointy ones represented
> something "better", or at least possibly better and worth trying, and
> I was looking at how to retain that [possible] advantage while
> getting rid of the pointy problem, if in fact that is not a
> contradiction; the point again being the possible "subtle
> implications" I referred to above.

Well, the pointy ones may be "better" because they will tend to favor
JI scales, and that may be what you or someone is looking for.
However, making them un-pointy will make them favor tempered scales,
so this would appear to defeat the whole purpose.

Optimization routines aren't going to work well on the pointy ones in
the first place. It would take a different strategy to find
the "global optima" of, say, 12-note scales such as to minimize total
dyadic entropy according to the pointy ones. One strategy might be to
plug in every 12-note scale in the Scala archive which has 2/1 as its
interval of repetition. Then one might get a sense of the pattern
of "goodness/badness" according to this measure, and perhaps discover
something even "gooder" :)

> However, even the pointy ones could be represented by *segmented*
> spline functions, no?

Well, they appear to be fractal in nature, with local minima at
*every* rational number. So you'd need an infinite number of
segments . . .

🔗Kurt Bigler <kkb@...>

5/11/2004 1:02:54 AM

on 5/10/04 3:07 PM, wallyesterpaulrus <wallyesterpaulrus@...> wrote:

> --- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:

>> So the difference between changing the distribution assumption and
>> altering the function more after-the-fact (including by decomposition
>> etc.) might have interesting subtle implications.
>
> Don't rule out the possibility that there might be no difference!

Right.

>>>> So as a little half-baked hint, the next line of thinking might
>>>> come from realizing this decomposing idea has a real problem but
>>>> might suggest an alternate way of creating an H.E. function to
>>>> avoid it. Maybe that's exactly what your original smoother function
>>>> was, maybe not. Part of the idea here is to "push" the squaring
>>>> down into a deeper level of the HE function in any way that might
>>>> make sense.
>>>
>>> Well there we go.
>>
>> Yes, still different from changing the distribution assumption, or
>> not?
>
> I thought it sounded like changing the distribution assumption, since
> that is at a deeper level.

Yes, even deeper than I intended to go!

> Well, the pointy ones may be "better" because they will tend to favor
> JI scales, and that may be what you or someone is looking for.
> However, making them un-pointy will make them favor tempered scales,
> so this would appear to defeat the whole purpose.

Interesting.

> Optimization routines aren't going to work well on the pointy ones in
> the first place.

What kind of algorithms are you using?

> It would take a different strategy to find
> the "global optima" of, say, 12-note scales such as to minimize total
> dyadic entropy according to the pointy ones. One strategy might be to
> plug in every 12-note scale in the Scala archive which has 2/1 as its
> interval of repetition. Then one might get a sense of the pattern
> of "goodness/badness" according to this measure, and perhaps discover
> something even "gooder" :)

You are saying this is a way of rating the rating function? So you might
try some random alterations to the function, say between pointiness and
smoothness and see what degree of pointiness matches listening test
judgements?

>> However, even the pointy ones could be represented by *segmented*
>> spline
>> functions, no?
>
> Well, they appear to be fractal in nature, with local minima at
> *every* rational number. So you'd need an infinite number of
> segments . . .

But when you evaluate, you aren't doing an infinite calculation. So how do
you limit it? (Maybe you already answered that.) Hmm. Optimizing a
fractal function. Is there some literature on that?

-Kurt

🔗wallyesterpaulrus <wallyesterpaulrus@...>

5/11/2004 10:56:32 AM

--- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
> on 5/10/04 3:07 PM, wallyesterpaulrus <wallyesterpaulrus@y...> wrote:
>
> > --- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
>
> >> So the difference between changing the distribution assumption and
> >> altering the function more after-the-fact (including by
> >> decomposition etc.) might have interesting subtle implications.
> >
> > Don't rule out the possibility that there might be no difference!
>
> Right.
>
> >>>> So as a little half-baked hint, the next line of thinking might
> >>>> come from realizing this decomposing idea has a real problem but
> >>>> might suggest an alternate way of creating an H.E. function to
> >>>> avoid it. Maybe that's exactly what your original smoother
> >>>> function was, maybe not. Part of the idea here is to "push" the
> >>>> squaring down into a deeper level of the HE function in any way
> >>>> that might make sense.
> >>>
> >>> Well there we go.
> >>
> >> Yes, still different from changing the distribution assumption, or
> >> not?
> >
> > I thought it sounded like changing the distribution assumption,
> > since that is at a deeper level.
>
> Yes, even deeper than I intended to go!
>
> > Well, the pointy ones may be "better" because they will tend to
> > favor JI scales, and that may be what you or someone is looking for.
> > However, making them un-pointy will make them favor tempered scales,
> > so this would appear to defeat the whole purpose.
>
> Interesting.
>
> > Optimization routines aren't going to work well on the pointy ones
> > in the first place.
>
> What kind of algorithms are you using?

I have the Matlab Optimization Toolbox. It's got all the
state-of-the-art algorithms.

> > It would take a different strategy to find the "global optima" of,
> > say, 12-note scales such as to minimize total dyadic entropy
> > according to the pointy ones. One strategy might be to plug in every
> > 12-note scale in the Scala archive which has 2/1 as its interval of
> > repetition. Then one might get a sense of the pattern of
> > "goodness/badness" according to this measure, and perhaps discover
> > something even "gooder" :)
>
> You are saying this is a way of rating the rating function?

No, just the scales.

> So you might try some random alterations to the function, say between
> pointiness and smoothness and see what degree of pointiness matches
> listening test judgements?

That would be an entirely different and separate issue.

> >> However, even the pointy ones could be represented by *segmented*
> >> spline functions, no?
> >
> > Well, they appear to be fractal in nature, with local minima at
> > *every* rational number. So you'd need an infinite number of
> > segments . . .
>
> But when you evaluate, you aren't doing an infinite calculation.

Right -- it's not a *true* fractal, but it approaches one.

> So how do
> you limit it?

Typically, with n*d<10000 or n*d<65536 or something like that. Are
you up to speed on the calculation now?
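
In code, that cutoff amounts to listing reduced ratios in the octave
whose numerator-times-denominator stays under the ceiling; the two
ceilings below are the ones just quoted, and the rest is a sketch:

    from fractions import Fraction

    def ratios_below(nd_limit):
        # reduced n/d with 1 <= n/d <= 2 and n*d <= nd_limit; since
        # n >= d, only denominators up to sqrt(nd_limit) can qualify
        out, d = set(), 1
        while d * d <= nd_limit:
            for n in range(d, min(2 * d, nd_limit // d) + 1):
                out.add(Fraction(n, d))   # Fraction reduces automatically
            d += 1
        return sorted(out)

    for limit in (10000, 65536):
        rs = ratios_below(limit)
        print(limit, len(rs), rs[:4], rs[-3:])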

🔗Kurt Bigler <kkb@...>

5/11/2004 12:26:10 PM

on 5/11/04 10:56 AM, wallyesterpaulrus <wallyesterpaulrus@...> wrote:

> --- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
>> on 5/10/04 3:07 PM, wallyesterpaulrus <wallyesterpaulrus@y...> wrote:

>> What kind of algorithms are you using?
>
> I have the Matlab Optimization Toolbox. It's got all the
> state-of-the-art algorithms.

So does it choose them for you or do you choose them? Seems like there
should be a class of algorithms specific to fractal-like functions that
might offer more stability. You can't be the first person to run into this.

>> So how do
>> you limit it?
>
> Typically, with n*d<10000 or n*d<65536 or something like that. Are
> you up to speed on the calculation now?

I've started, but I've got a ways to go. I just looked at your answer to a
newbie question from february. What is the "s" parameter?

What are the consequences of using Farey series versus other approaches.
Does it make a big difference? Seems like the plots I remember seeing on
the tuning dictonary pages were all based on Farey series. (But I didn't
just look.)

-Kurt

🔗wallyesterpaulrus <wallyesterpaulrus@...>

5/11/2004 12:47:08 PM

--- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
> on 5/11/04 10:56 AM, wallyesterpaulrus <wallyesterpaulrus@y...> wrote:
>
> > --- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
> >> on 5/10/04 3:07 PM, wallyesterpaulrus <wallyesterpaulrus@y...>
> >> wrote:
>
> >> What kind of algorithms are you using?
> >
> > I have the Matlab Optimization Toolbox. It's got all the
> > state-of-the-art algorithms.
>
> So does it choose them for you or do you choose them?

Either way.

> Seems like there should be a class of algorithms specific to
> fractal-like functions that might offer more stability.

Umm . . . no

> You can't be the first person to run into this.

Well, finding *global* minima, even of smooth functions, is already
very hard, and there's no general method or algorithm. What I did was
a big monte-carlo to seed a whole bunch of *local* minima searches,
and took the minimum of those.

When the function isn't smooth, there will be local minima all over
the place. So I'd end up with almost as many scales as I started with
in the monte-carlo simulation. For scales with a decent number of
notes, this would take forever. So a more intelligent approach is
needed.
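
A sketch of that procedure (the objective is a stand-in whose minima sit
at 12-equal, just to keep the code self-contained; the seed count and
the choice of Nelder-Mead are arbitrary):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    def objective(free_notes):
        # stand-in for total dyadic entropy: how far each dyad of the
        # scale (0 and 1200 fixed, 11 free degrees in cents) sits from
        # a multiple of 100 cents
        notes = np.concatenate(([0.0], np.sort(free_notes), [1200.0]))
        dyads = (notes[None, :] - notes[:, None]).ravel() % 1200.0
        return float(np.sum(np.minimum(dyads % 100.0, 100.0 - dyads % 100.0)))

    best = None
    for _ in range(50):                          # monte-carlo seeds
        x0 = rng.uniform(0.0, 1200.0, size=11)   # a random 12-note scale
        res = minimize(objective, x0, method="Nelder-Mead")  # local search
        if best is None or res.fun < best.fun:
            best = res                           # keep the lowest local minimum
    print(round(best.fun, 3), np.sort(best.x).round(1))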

> >> So how do
> >> you limit it?
> >
> > Typically, with n*d<10000 or n*d<65536 or something like that. Are
> > you up to speed on the calculation now?
>
> I've started, but I've got a ways to go. I just looked at your answer
> to a newbie question from February. What is the "s" parameter?

This measures the uncertainty of the listener's pitch resolution. The
width of the normal curve, or exp(-|x|) curve, that we were talking
about before.

> What are the consequences of using Farey series versus other
> approaches? Does it make a big difference? Seems like the plots I
> remember seeing on the tuning dictionary pages were all based on
> Farey series.

Yes, most of them were. As a result, they have an overall downward
slope. Using a series delimited by a maximum n*d, instead of a
maximum n, removes the overall downward slope, but leaves the same
pattern of local minima. The overall downward slope can be a
disadvantage if you don't want to be biased toward very large
intervals.
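
One rough way to see the bias in code is to compare candidate-ratio
density near the bottom and top of the octave under the two cutoffs (the
ceilings 80 and 6400 are arbitrary choices, and raw counts are only a
crude proxy for the slope of the entropy curve):

    import math
    from fractions import Fraction

    def by_max_n(n_max):
        # Farey-style cutoff: numerator alone bounded
        return {Fraction(n, d) for d in range(1, n_max + 1)
                for n in range(d, min(2 * d, n_max) + 1)}

    def by_max_nd(nd_max):
        # product cutoff, as discussed above
        return {Fraction(n, d) for d in range(1, int(nd_max ** 0.5) + 1)
                for n in range(d, min(2 * d, nd_max // d) + 1)}

    def count_cents(rs, lo, hi):
        return sum(1 for r in rs if lo <= 1200.0 * math.log2(float(r)) <= hi)

    farey, nd = by_max_n(80), by_max_nd(6400)
    for lo, hi in ((0.0, 100.0), (1100.0, 1200.0)):
        # the Farey set thins out toward 2/1; the n*d set stays more even
        print((lo, hi), count_cents(farey, lo, hi), count_cents(nd, lo, hi))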

🔗Kurt Bigler <kkb@...>

5/11/2004 1:18:34 PM

on 5/11/04 12:47 PM, wallyesterpaulrus <wallyesterpaulrus@...> wrote:

> --- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
>> on 5/11/04 10:56 AM, wallyesterpaulrus <wallyesterpaulrus@y...>

>>> I have the Matlab Optimization Toolbox. It's got all the
>>> state-of-the-art algorithms.
>> So does it choose them for you or do you choose them?
> Either way.
>
>> Seems like there should be a class of algorithms specific to
>> fractal-like functions that might offer more stability.
> Umm . . . no
>
>> You can't be the first person to run into this.
>
> Well, finding *global* minima, even of smooth functions, is already
> very hard, and there's no general method or algorithm. What I did was
> a big monte-carlo to seed a whole bunch of *local* minima searches,
> and took the minimum of those.
>
> When the function isn't smooth, there will be local minima all over
> the place. So I'd end up with almost as many scales as I started with
> in the monte-carlo simulation. For scales with a decent number of
> notes, this would take forever. So a more intelligent approach is
> needed.

Well then my gut reaction (which I'm not attached to, either) is that there
may not *be* any more intelligent approach for automated optimization of
this kind of function. No local information leads to anything generalizable
to more global information. No helpful global pattern will be easy to come
by, just as is the case with the distribution of primes. The pattern is
only "itself" (a suchness, as they say) and it does not resemble anything
less complex which describes it approximately.

However, an approach that comes to mind that may be helpful is to be able to
create spaces that can be explored intuitively by means of visual maps,
selected projections of larger dimensional spaces for example. The question
is what degree of choice in reduced-dimensional presentation can be
achieved. Does this make any sense? My own personal way of learning is
heavily intuitively-weighted and extremely connected with my propensity to
create exploratory situations in which my experience can be enriched. Even
though such processes are in some sense inherently free of goals, nonetheless
something like "goal availability" is facilitated as a result.

>> What is the "s" parameter?
>
> This measures the uncertainty of the listener's pitch resolution. The
> width of the normal curve, or exp(-|x|) curve, that we were talking
> about before.

So this assumes the listener has some absolute limits of perception that are
not overcome in certain contexts, such as an 11/8 being heard as exact when
in a larger chord? I know you're not doing larger chords yet, but the same
idea probably applies to a single interval, especially if the timbre is rich
enough (e.g. brass), i.e. that context will fine-tune perceptibility beyond
any preexisting limits of perception. This being connected with that
phenomenon of inner-ear whistles (whatever they are called) that aid in
pitch perception.

>
>> What are the consequences of using Farey series versus other
>> approaches? Does it make a big difference? Seems like the plots I
>> remember seeing on the tuning dictionary pages were all based on
>> Farey series.
>
> Yes, most of them were. As a result, they have an overall downward
> slope. Using a series delimited by a maximum n*d, instead of a
> maximum n, removes the overall downward slope, but leaves the same
> pattern of local minima. The overall downward slope can be a
> disadvantage if you don't want to be biased toward very large
> intervals.

Hmm. But isn't there some evidence to justify one way or the other? It
should be possible to cleanly distinguish preferences in a solution set from
measures of entropy. Or can you model how preferences interact with entropy
in individual perception?

-Kurt

🔗wallyesterpaulrus <wallyesterpaulrus@...>

5/11/2004 2:42:10 PM

--- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
> on 5/11/04 12:47 PM, wallyesterpaulrus <wallyesterpaulrus@y...> wrote:
>
> > --- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
> >> on 5/11/04 10:56 AM, wallyesterpaulrus <wallyesterpaulrus@y...>
>
> >>> I have the Matlab Optimization Toolbox. It's got all the
> >>> state-of-the-art algorithms.
> >> So does it choose them for you or do you choose them?
> > Either way.
> >
> >> Seems like there should be a class of algorithms specific to
> >> fractal-like functions that might offer more stability.
> > Umm . . . no
> >
> >> You can't be the first person to run into this.
> >
> > Well, finding *global* minima, even of smooth functions, is already
> > very hard, and there's no general method or algorithm. What I did
> > was a big monte-carlo to seed a whole bunch of *local* minima
> > searches, and took the minimum of those.
> >
> > When the function isn't smooth, there will be local minima all over
> > the place. So I'd end up with almost as many scales as I started
> > with in the monte-carlo simulation. For scales with a decent number
> > of notes, this would take forever. So a more intelligent approach is
> > needed.
>
> Well then my gut reaction (which I'm not attached to, either) is that
> there may not *be* any more intelligent approach for automated
> optimization of this kind of function.

I think there may be, because the function has a lot of patterns,
like conforming to Tenney's Harmonic Distance function at the
simplest ratios. This, in turn, is why I suggested a lattice approach
in the first place.
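
For reference, Tenney's harmonic distance for a reduced ratio n/d is
log2(n*d); a tiny sketch (the example ratios are arbitrary):

    import math
    from fractions import Fraction

    def tenney_hd(r):
        # Tenney harmonic distance: log2(n*d) for the reduced ratio n/d
        return math.log2(r.numerator * r.denominator)

    for s in ("3/2", "5/4", "9/8", "16/15"):
        print(s, round(tenney_hd(Fraction(s)), 3))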

> My own personal way of learning is heavily intuitively-weighted and
> extremely connected with my propensity to create exploratory
> situations in which my experience can be enriched.

Sounds like you'd be very sympathetic to the approach I just
suggested which starts with plugging in Scala scales . . .

> >> What is the "s" parameter?
> >
> > This measures the uncertainty of the listener's pitch resolution.
> > The width of the normal curve, or exp(-|x|) curve, that we were
> > talking about before.
>
> So this assumes the listener has some absolute limits of perception
> that are not overcome in certain contexts, such as an 11/8 being heard
> as exact when in a larger chord?

I don't know what you mean by "11/8 being heard as exact". At least,
this concept doesn't correspond to how harmonic entropy describes
anything. Entropy can be higher or lower, that's all.

> >> What are the consequences of using Farey series versus other
> >> approaches? Does it make a big difference? Seems like the plots I
> >> remember seeing on the tuning dictionary pages were all based on
> >> Farey series.
> >
> > Yes, most of them were. As a result, they have an overall downward
> > slope. Using a series delimited by a maximum n*d, instead of a
> > maximum n, removes the overall downward slope, but leaves the same
> > pattern of local minima. The overall downward slope can be a
> > disadvantage if you don't want to be biased toward very large
> > intervals.
>
> Hmm. But isn't there some evidence to justify one way or the other?

I think the cop-out we came up with on the tuning list a while back
is that really wide intervals are neither consonant nor dissonant.

> It should be possible to cleanly distinguish preferences in a solution
> set from measures of entropy.

I don't understand that sentence.

🔗wallyesterpaulrus <wallyesterpaulrus@...>

5/11/2004 10:19:12 PM

--- In harmonic_entropy@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:

> >> What is the "s" parameter?
> >
> > This measures the uncertainty of the listener's pitch resolution.
> > The width of the normal curve, or exp(-|x|) curve, that we were
> > talking about before.
>
> So this assumes the listener has some absolute limits of perception
> that are not overcome in certain contexts, such as an 11/8 being heard
> as exact when in a larger chord? I know you're not doing larger chords
> yet, but the same idea probably applies to a single interval,
> especially if the timbre is rich enough (e.g. brass), i.e. that
> context will fine-tune perceptibility beyond any preexisting limits of
> perception.

What I've always said is that timbre can have a large impact on the
appropriate choice of s. Timbres with weak partials will imply larger
values of s (less resolution) than timbres with strong harmonic
partials. It seems to me that the interaction of large harmonic-
series chords, when the fully chordal formulation of harmonic entropy
is worked out, will support this "rule of thumb". Harmonic overtones
can only strengthen the certainty with which whatever ratios the
fundamental is involved in are heard as such.