
More prime errors and complexities

🔗Graham Breed <gbreed@gmail.com>

1/1/2007 11:35:46 PM

I've been working on my prime errors and complexities paper over the
past week. Because I've had a really bad Internet connection I
haven't been able to update you on it. I've discovered that Yahoo!
Groups is available, so I put a copy of the PDF in the files section
of this group. It only needs to stay there until I can get at my new
website -- at the moment I can connect by FTP but not transfer anything.

The new results are for octave-equivalent weighted standard-deviations
of a rank 2 temperament given two equal temperaments. I had a nice
equation for the error*complexity badness given the canonical mapping
(by period and generator) and I've now proved that it works for an
arbitrary pair of mappings. Along the way I found a simple formula
for the optimal error.
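
For flavour, here's a minimal Python sketch of the kind of quantities involved: the Tenney-weighted mapping w_i = m_i/log2(p_i) of a single ET and a std/mean style error built from it. The function names and the exact formula are only illustrative; the real octave-equivalent definitions are the ones in the paper.

import math

PRIMES = (2, 3, 5, 7, 11)  # 11-limit, purely as an example

def weighted_mapping(et_mapping, primes=PRIMES):
    """Tenney-weight an ET mapping: w_i = m_i / log2(p_i)."""
    return [m / math.log2(p) for m, p in zip(et_mapping, primes)]

def std_error(et_mapping, primes=PRIMES):
    """A standard-deviation style error for one ET (sketch only)."""
    w = weighted_mapping(et_mapping, primes)
    mean = sum(w) / len(w)
    var = sum((x - mean) ** 2 for x in w) / (len(w) - 1)  # sample variance
    return math.sqrt(var) / mean

# 12-equal in the 11-limit
print(std_error([12, 19, 28, 34, 42]))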

Because of this simplicity I now prefer the standard deviation error
to the TOP-RMS error it was designed to approximate. And
error*complexity badness is clearly the easiest kind to work with as
well. I probably need some cute names for them. Maybe TOPPO for TOP
Pure Octaves.

I've also implemented rank 2 temperament searches using the new
formulae. They're very efficient because you don't need to calculate
the canonical mapping to get the error and complexity. You still need
the generator mapping for the invariant but it's much easier to
calculate than the period mapping (I wonder why I didn't notice that
before...) I could previously calculate the error without the
canonical mapping but I had to fudge the calculation because it lost
precision. The new formula involves standard deviations of weighted
errors, so it holds the precision much better. Also, the same
standard deviations can be shared between the error and complexity
calculations. And some of them only depend on the equal temperaments,
so they only need to be calculated once for each rank 2 temperament
that involves a given equal temperament. With these improvements, and
some code profiling, I've reduced the runtime of the big pure-Python
searches by two-thirds. The small searches can't be improved much
because so little time's actually spent on the errors and
complexities. Anyway, I haven't uploaded the code yet, but this is to
show that it is practical.
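
The shape of the search loop is roughly this (a sketch only, not the real code: the per-ET statistics are stubbed out with the weighted mapping rather than the actual standard deviations, and the pairwise step is left empty):

import math
from itertools import combinations

def per_et_stats(et_mapping, primes=(2, 3, 5, 7, 11)):
    """Quantities that depend only on one ET, so they're computed once
    and reused for every rank 2 temperament that ET appears in."""
    return [m / math.log2(p) for m, p in zip(et_mapping, primes)]

def rank2_search(et_mappings):
    # compute the per-ET statistics once each...
    cache = {tuple(m): per_et_stats(m) for m in et_mappings}
    results = []
    # ...then combine them for every pair of ETs
    for m1, m2 in combinations(et_mappings, 2):
        s1, s2 = cache[tuple(m1)], cache[tuple(m2)]
        # the error, complexity and badness of the rank 2 temperament
        # would be worked out from s1 and s2 here, with no canonical
        # mapping needed
        results.append((m1, m2))
    return results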

There's a formula for the rank 2 badness that looks similar to the
geometric definition of a vector product. Perhaps a wedge product
would give similar results. I remember Gene giving a formula for
wedgie error before, and it not having much correlation with the other
errors. Perhaps this approach will do better.

Another thing is I've got confused about the units for weighted
complexity.

Graham

🔗Mohajeri Shahin <shahinm@kayson-ir.com>

1/2/2007 3:38:52 AM

Hi Graham

Do you permit me to put it on my site until your problem is solved?

Shaahin Mohajeri

Tombak Player & Researcher , Microtonal Composer

My web site <http://240edo.googlepages.com/>

My Farsi page in Harmonytalk <http://www.harmonytalk.com/mohajeri>

Shaahin Mohajeri in Wikipedia <http://en.wikipedia.org/wiki/Shaahin_mohajeri>


🔗Graham Breed <gbreed@gmail.com>

1/2/2007 4:41:01 AM

On 02/01/07, Mohajeri Shahin <shahinm@kayson-ir.com> wrote:
>
>
> Hi Graham
>
> Do you permit me to put it on my site until your problem is solved?

Certainly you may!

Graham

🔗Graham Breed <gbreed@gmail.com>

1/2/2007 11:30:47 PM

I've updated it now with a new equation that means the octave-specific
error might be almost as simple and stable as the octave-equivalent
one. That's Equation 32 on page 10. It's got a beautiful symmetry to
it, so it's probably the one to put on a T-shirt.

I've also got a feeling I uploaded an old file before. (I can't
*download* from Yahoo Groups to check.) This one should have today's
date at the top (January 3, 2007).

Graham

🔗Carl Lumma <ekin@lumma.org>

1/2/2007 11:44:36 PM

At 11:30 PM 1/2/2007, you wrote:
>I've updated it now with a new equation that means the octave-specific
>error might be almost as simple and stable as the octave-equivalent
>one. That's Equation 32 on page 10. It's got a beautiful symmetry to
>it, so it's probably the one to put on a T-shirt.
>
>I've also got a feeling I uploaded an old file before.

You did.

-Carl

🔗Carl Lumma <ekin@lumma.org>

1/2/2007 11:48:17 PM

At 11:30 PM 1/2/2007, you wrote:
>I've updated it now with a new equation that means the octave-specific
>error might be almost as simple and stable as the octave-equivalent
>one. That's Equation 32 on page 10. It's got a beautiful symmetry to
>it, so it's probably the one to put on a T-shirt.

Good work, dude!

-Carl

🔗Graham Breed <gbreed@gmail.com>

1/3/2007 12:26:55 AM

On 03/01/07, Carl Lumma <ekin@lumma.org> wrote:
> At 11:30 PM 1/2/2007, you wrote:
> >I've updated it now with a new equation that means the octave-specific
> >error might be almost as simple and stable as the octave-equivalent
> >one. That's Equation 32 on page 10. It's got a beautiful symmetry to
> >it, so it's probably the one to put on a T-shirt.
>
> Good work, dude!

Thanks!

One thing to note is that the numerator's an approximation to badness
squared. As the whole thing's error squared that means the
denominator must be an approximation to complexity squared. And it
works!

It doesn't appear to calculate a generator mapping or wedge product by
the back door. But it is the determinant of a 2x2 matrix containing
the means-of-errors-of-products so there is a similarity.
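
Something like this, as a sketch (err_m and err_n standing for the weighted error vectors of the two ETs; any normalization applied to the entries in the paper is left out):

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

def det_of_mean_products(err_m, err_n):
    """Determinant of the 2x2 matrix whose entries are means of
    products of the two weighted error vectors."""
    mm = mean(a * a for a in err_m)
    mn = mean(a * b for a, b in zip(err_m, err_n))
    nn = mean(b * b for b in err_n)
    return mm * nn - mn * mn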

Graham

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

1/3/2007 2:27:27 PM

--- In tuning-math@yahoogroups.com, "Graham Breed" <gbreed@...> wrote:
>
> I've been working on my prime errors and complexities paper over the
> past week.

Wow--looks great! I look forward to reading it.

> I've also implemented rank 2 temperament searches using the new
> formulae. They're very efficient because you don't need to calculate
> the canonical mapping to get the error and complexity.

Right.

> You still need
> the generator mapping for the invariant but it's much easier to
> calculate than the period mapping (I wonder why I didn't notice that
> before...)

> There's a formula for the rank 2 badness that looks similar to the
> geometric definition of a vector product. Perhaps a wedge product
> would give similar results. I remember Gene giving a formula for
> wedgie error before, and it not having much correlation with the other
> errors. Perhaps this approach will do better.

It seems worth comparing; some sort of wedgie computation, without
ever calculating generators, would seem to be the way to go anyway.

🔗Graham Breed <gbreed@gmail.com>

1/4/2007 3:33:13 AM

On 04/01/07, Gene Ward Smith <genewardsmith@coolgoose.com> wrote:
> --- In tuning-math@yahoogroups.com, "Graham Breed" <gbreed@...> wrote:
> > There's a formula for the rank 2 badness that looks similar to the
> > geometric definition of a vector product. Perhaps a wedge product
> > would give similar results. I remember Gene giving a formula for
> > wedgie error before, and it not having much correlation with the other
> > errors. Perhaps this approach will do better.
>
> It seems worth comparing; some sort of wedgie computation, without
> ever calculating generators, would seem to be the way to go anyway.

I've got a good correlation between the std error*complexity badness and

m*n*std(err_m ^ err_n)/sqrt(n_primes)

where

m and n are the numbers of notes to the octave in each ET
err_m and err_n are the weighted errors of the ETs
n_primes is the number of prime intervals (including 2)
std is the sample standard deviation
sqrt is the square root
^ is an exterior product, giving a result in vector form

For some perfectly reasonable temperaments the two badnesses agree to
2 significant figures, which is probably not a coincidence. As we
have a reasonable wedgie complexity this is what we need for a
reasonable wedgie error.
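
As a Python sketch, using the definitions above (the weighted error vectors err_m and err_n are taken as given):

from itertools import combinations
from math import sqrt
from statistics import stdev  # sample standard deviation

def wedge(a, b):
    """Exterior product of two vectors, as a flat list of the
    components a[i]*b[j] - a[j]*b[i] for i < j."""
    return [a[i] * b[j] - a[j] * b[i]
            for i, j in combinations(range(len(a)), 2)]

def wedge_badness(m, n, err_m, err_n):
    """m*n*std(err_m ^ err_n)/sqrt(n_primes)."""
    n_primes = len(err_m)
    return m * n * stdev(wedge(err_m, err_n)) / sqrt(n_primes)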

It's also a better agreement than that between what I call rin (the
square root of 1 minus the square of the statistical correlation) and
the sine of the angle between the error vectors, generalized as
|a^b|/|a||b|.
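
For comparison, those two quantities in the same sketch form (a and b again standing for weighted error vectors; |a^b| is computed from Lagrange's identity rather than an explicit wedge product):

from math import sqrt

def correlation(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / sqrt(sum((x - ma) ** 2 for x in a) *
                      sum((y - mb) ** 2 for y in b))

def rin(a, b):
    """sqrt(1 - r**2), r being the statistical correlation."""
    return sqrt(max(1 - correlation(a, b) ** 2, 0.0))

def sine_of_angle(a, b):
    """|a^b| / (|a| |b|), using |a^b|**2 = |a|**2*|b|**2 - (a.b)**2."""
    aa = sum(x * x for x in a)
    bb = sum(y * y for y in b)
    ab = sum(x * y for x, y in zip(a, b))
    return sqrt(max(aa * bb - ab * ab, 0.0)) / sqrt(aa * bb)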

The trouble is, I don't know how to extract this weighted-error wedge
product from the standard wedgie.

Graham

🔗Paul G Hjelmstad <paul_hjelmstad@allianzlife.com>

1/5/2007 9:02:44 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@...> wrote:
>
> At 11:30 PM 1/2/2007, you wrote:
> >I've updated it now with a new equation that means the octave-specific
> >error might be almost as simple and stable as the octave-equivalent
> >one. That's Equation 32 on page 10. It's got a beautiful symmetry to
> >it, so it's probably the one to put on a T-shirt.
>
> Good work, dude!
>
> -Carl

I agree. I hope to finish your paper this weekend. You make it
meaningful too, which really helps me get my mind around it :)

- Paul Hj

🔗Graham Breed <gbreed@gmail.com>

1/6/2007 9:40:22 PM

I wrote:

> One thing to note is that the numerator's an approximation to badness
> squared. As the whole thing's error squared that means the
> denominator must be an approximation to complexity squared. And it
> works!
>
> It doesn't appear to calculate a generator mapping or wedge product by
> the back door. But it is the determinant of a 2x2 matrix containing
> the means-of-errors-of-products so there is a similarity.

You can also generalize it to an nxn matrix containing
means-of-errors-of-products. That gives us a complexity measure that
works for temperaments of any rank. Previously we needed the wedgie
for this. You can also calculate it with wedge products, of course.
It's close to the std-complexity, and so is also correlated with the
Kees-max complexity.
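
As a sketch, that generalization is the determinant of an nxn matrix of means of products over n error vectors (normalizing it into an actual complexity value is left out):

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

def det(matrix):
    """Determinant by Laplace expansion; fine for small matrices."""
    if len(matrix) == 1:
        return matrix[0][0]
    total = 0.0
    for j, entry in enumerate(matrix[0]):
        minor = [row[:j] + row[j + 1:] for row in matrix[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

def complexity_squared(vectors):
    """Determinant of the nxn matrix of means-of-products."""
    gram = [[mean(x * y for x, y in zip(a, b)) for b in vectors]
            for a in vectors]
    return det(gram)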

Graham.