
Re: Eureka part one (actually, complexity measures)

Robert C Valentine <BVAL@...>

5/9/2001 12:56:27 AM

I only included "Eureka part I" because I'm waiting for Paul
and other mathematically inclined list members to decode that
paper for me.

As members of the tuning list know, I've been working on
a program which

1) inputs a scale described as a sequence of step-sizes
   (for instance BBaBBBa, which would be the familiar
   12tet Ionian when B=200 and a=100 cents) (currently
   limited to diatonics but later will be extended
   to more notes...)
2) constructs some sort of pseudo-harmonic-entropy
   table (more later)
3) for all values of B and a (the scale must add up to
   1200c and be monotonically increasing)
3b) for all modes (rotations) of the scale
3c) calculate complexity, accuracy, maximum error
    and other good stuff against the table of
    RI and complexity generated in step 2
    (a rough sketch of this loop follows the list)
4) take output into excel and ooo and ahhh when sensible
   results appear (like 204c being the 'best' B for
   the Ionian, followed by 194c), or (more likely)
   say "now why did that happen?" when the local
   minimum is occupied by a scale with some
   extremely spicy RI intervals or (more puzzling),
   overly out-of-tune simple intervals. ("now why
   did that happen" is much more common than
   "eureka" even though the results may be the
   same).
5) see if interesting candidates can be used musically
   (this stage takes the rest of one's life)
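
In case it helps, here is roughly what the search loop (steps 1,
3, 3b and 3c) looks like in Python. The lookup() argument, the
1-cent grid and the particular stats returned are all placeholders
rather than anything final:

    # rough sketch of the search loop; lookup(cents) stands in for the
    # table built in step 2 and should return the nearest RI's cents
    # and that RI's complexity
    def scan(lookup, pattern="BBaBBBa"):
        n_big, n_small = pattern.count("B"), pattern.count("a")
        results = []
        for B in range(1, 1200):                   # every candidate large step, 1c grid
            rem = 1200 - n_big * B
            if rem <= 0 or rem % n_small:
                continue                           # steps must be positive and total 1200c
            a = rem // n_small
            if a >= B:
                continue                           # keep B as the larger step
            steps = [B if s == "B" else a for s in pattern]
            for mode in range(len(steps)):         # 3b) every rotation of the scale
                rot = steps[mode:] + steps[:mode]
                degrees = [sum(rot[:i + 1]) for i in range(len(rot))]
                errs, cplx = [], []
                for c in degrees:                  # 3c) score each degree against the table
                    ri_cents, ri_complexity = lookup(c)
                    errs.append(abs(c - ri_cents))
                    cplx.append(ri_complexity)
                results.append((B, a, mode, sum(cplx),
                                sum(errs) / len(errs), max(errs)))
        return results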

Okay, so one thing where input from this list would be
interesting is 'step 2'. Paul has produced his lovely graphs, and
from the documentation I downloaded, I couldn't figure out
a hack that produced similar ones.

That's okay, I'm an engineer, not a mathematician. If I can
hack a solution that produces 'good enough' results (like
the 204c and 194c mentioned above) then I have something I
can feel good about investigating.

So the current mechanism to populate the graph is to
perform a complexity calculation for all RI intervals
in the known universe. If an interval is less complex than the
current entry in the table, it takes over the spot from the RI
that is there. A linear factor is applied to the
complexity to build the two sides of the bucket out from this
point (actually an inverted cone). As long as the new value is
less than the value in the table, the RI value that lived there,
along with its complexity, is REPLACED by the new RI value and
its complexity. So for instance, when adding 3/2 it
would look like

   bucket        bucket
    from          from
     1/1           3/2
       /    \
      /      \
     /        \      /
    /          \    /
   /            \  /
  /              \/
 /
------------------------------
The result of this is a very bumpy graph, but one I've
coerced into what I considered acceptable behavior. (The
main tuning was to get the 3/2 constrained to +-18c, which
I'd seen proposed as limits on the tuning list. Once I got
it so that this became a local maximum between 3/2 and its
neighbor RI, I was satisfied).
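
For the curious, the table builder is roughly the following
(plain n*d stands in for the complexity measure discussed below,
and SLOPE plus the n*d cutoff are made-up knobs I keep fiddling
with):

    from math import gcd, log2

    SLOPE = 1.0                                        # wall steepness, made-up value

    def build_table(max_product=1000):
        table = [(None, float("inf"))] * 1201          # (RI, cone value) per cent
        for n in range(1, 100):
            for d in range(1, n + 1):
                if gcd(n, d) != 1 or n > 2 * d or n * d > max_product:
                    continue                           # lowest terms, within one octave
                cents = 1200 * log2(n / d)
                cx = n * d                             # complexity of this RI
                for i in range(1201):                  # spread the inverted cone
                    v = cx + SLOPE * abs(i - cents)
                    if v < table[i][1]:                # lower cone takes over the cent
                        table[i] = ((n, d), v)
        return table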

So, firstly: am I right that to produce something more like
Paul's HE graph I would sum at each point, rather than take
the minimum, while still using the existing technique to
identify which RI owns that point on the graph?

Regarding the complexity calculation: I am currently just
using the product, with a pinch of fudging to favor otonal
relationships

    complexity = numerator * denominator
    if ( is_power_of_2( denominator ) )
        complexity = complexity / 2

This moves 5/4 below 4/3 and 5/3. In point of fact, leaving
out the otonal 'correction' does not have much effect on the
final results once all rotations are considered. It may have
more of an effect on the modal minima, which are also
interesting, but which I'm not looking at until I feel
'better' about the basic algorithm.
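
In runnable form (with the power-of-two test written out as a
bit trick), and checking the ordering mentioned above:

    def complexity(numerator, denominator):
        c = numerator * denominator
        if (denominator & (denominator - 1)) == 0:     # power of two => otonal side
            c = c / 2
        return c

    # the ordering claimed above:
    #   complexity(5, 4) == 10.0 < complexity(4, 3) == 12 < complexity(5, 3) == 15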

Oh, an IMPORTANT point (probably THE most important) is the
data culling. Originally, I would sum the complexity read
out of the graph and look for local minima there. Although
this sort of worked, it was apparent that sometimes there
were nearby neighbors which actually appeared to be
preferable (by having less average error from the same RI
intervals, for instance). What I currently do is just use the
pseudo-harmonic-entropy graph to determine what the RI pockets
are. Then I take the complexity based on the RI intervals
played 'perfectly', then the error of this scale from the RI
scale, and also the standard deviation of the error. (Maximum
error is important and will eventually be used in some sort
of consistency sense to weed out scales which 'are not doing
what they say they are doing'.)
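
Per candidate scale, the culling stage boils down to something
like this (table is the pocket table from above, scale_cents
lists each degree in cents above the tonic, and whether the
errors should be signed or absolute before taking the standard
deviation is one of the things I'm still unsure about):

    from math import log2
    from statistics import pstdev

    def score(scale_cents, table):
        pockets = [table[round(c)][0] for c in scale_cents]         # RI pocket per degree
        targets = [1200 * log2(n / d) for (n, d) in pockets]
        errors = [c - t for c, t in zip(scale_cents, targets)]      # signed, in cents
        return {
            "complexity": sum(complexity(n, d) for (n, d) in pockets),  # RI played 'perfectly'
            "error":      sum(abs(e) for e in errors) / len(errors),
            "stdev":      pstdev(errors),
            "max_error":  max(abs(e) for e in errors),               # for the consistency check later
        }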

The point here is that harmonic entropy mixes the complexity of
the intervals being heard with the accuracy with which they are
being produced. Until I feel that I am mixing them in a sensible
manner, I'll get more trustworthy results by separating them.

Something that came up on the other list, and is also something
to consider here, is ways to make my algorithm produce results
which are more 'lattice-like'. This may be an interesting area
for harmonic entropy anyhow, as I believe that the 9/8 pocket
SHOULD be deeper than the 8/7 or 10/9, although if this opens
a debate that can be settled by creating this structure with a
different name, I'm all for it.

So (engineer, not scientist) a complexity measure that I
proposed on the other list was to consider both the prime factors
and the distance on the lattice (exponents). For instance,
just the product of these would be

81/64 = 3^4/2^6 => 3*4*2*6 = 144

The case I had which this seemed to solve was that 19/15 at
409c swallowed (for better or worse) the 81/64 at 408c. Although
Pythagorean major scales came out as a minimum in the program,
the fact that they report the third as 19/15 suggests a usage
model that may be misleading.

There may be other good measures here.

3*2* (4*6)^(1/2) uses a more accurate distance metric (the 2 in
the exponent comes from the number of dimensions one is
travelling in), and

3*2* (4+6) uses more of a Manhattan distance.
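
Spelled out for 81/64 = 3^4/2^6, the three variants would look
something like the following (factor() is just a brute-force
helper, and I've generalized the ^(1/2) to one over the number
of prime dimensions involved):

    from math import prod

    def factor(n):
        f, p = {}, 2
        while n > 1:
            while n % p == 0:
                f[p] = f.get(p, 0) + 1
                n //= p
            p += 1
        return f

    def lattice_complexities(num, den):
        f = factor(num)
        for p, e in factor(den).items():
            f[p] = f.get(p, 0) + e                 # numerator and denominator treated alike
        if not f:                                  # 1/1
            return 1, 1, 1
        primes, exps = list(f.keys()), list(f.values())
        product = prod(p * e for p, e in f.items())              # 3*4 * 2*6 = 144
        euclid  = prod(primes) * prod(exps) ** (1 / len(exps))   # 3*2 * (4*6)^(1/2)
        manhat  = prod(primes) * sum(exps)                       # 3*2 * (4+6) = 60
        return product, euclid, manhat

    # lattice_complexities(81, 64) -> (144, ~29.4, 60)
    # lattice_complexities(19, 15) -> (285, 285.0, 855), so under any of
    # these the 81/64 pocket at 408c is no longer swallowed by 19/15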

I haven't addressed octave equivalence here, and that's
important. A later thing to investigate may be multi-octave
scales, which will need some notion of multi-octave equivalence.

Any tips and advice? Questions? etc????

Bob Valentine