
Yet another octave-stretching method

Gene Ward Smith <gwsmith@svpal.org>

10/7/2005 11:43:04 PM

Here's yet another method for stretching octaves which could be
interesting. I'll explain it by way of an example.

The projections for 7-limit meantone form a four-parameter family.
Within that family, we can solve for the projection which is closest
to the identity according to some metric. I solved using the
unweighted L2 metric, but make no claim that this is the best choice. The
result was

[|117/446 73/223 58/223 -61/446>,
|73/223 93/223 80/223 -19/223>,
|58/223 80/223 88/223 46/223>,
|-61/446 -19/223 46/223 413/446>]
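For concreteness, here is a minimal Python sketch (sympy, so the
rationals come out exact) that reproduces this matrix. It assumes the
kernel of the projection is spanned by the 7-limit meantone commas
81/80 = |-4 4 -1 0> and 126/125 = |1 2 -3 1>; among all projections
with that kernel, the orthogonal one, I - C (C^T C)^(-1) C^T, is the
closest to the identity in the Frobenius norm:

    # Sketch: orthogonal projection annihilating the meantone commas.
    from sympy import Matrix, eye

    # Comma monzos as columns: 81/80 and 126/125 (assumed kernel).
    C = Matrix([[-4,  1],
                [ 4,  2],
                [-1, -3],
                [ 0,  1]])
    P = eye(4) - C * (C.T * C).inv() * C.T
    print(P)  # entry for entry the matrix above, e.g. P[0, 0] == 117/446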

Applying this to <1 log2(3) log2(5) log2(7)| gives a meantone tuning
with an octave stretch of 1.34 cents, a fifth of 697.22 cents, a
twelfth of 1898.56 cents and so forth. Pretty reasonable-looking values.
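The arithmetic is easy to check in floating point (a sketch; the
matrix is just copied from above):

    from math import log2

    P = [[117/446,  73/223,  58/223, -61/446],
         [ 73/223,  93/223,  80/223, -19/223],
         [ 58/223,  80/223,  88/223,  46/223],
         [-61/446, -19/223,  46/223, 413/446]]
    jip = [1, log2(3), log2(5), log2(7)]  # <1 log2(3) log2(5) log2(7)|
    cents = [1200 * sum(p * j for p, j in zip(row, jip)) for row in P]
    print(cents[0])             # ~1201.34: octave, stretched by 1.34 cents
    print(cents[1])             # ~1898.56: the twelfth
    print(cents[1] - cents[0])  # ~697.22: the fifth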

Graham Breed <gbreed@gmail.com>

10/9/2005 7:07:31 AM

Gene Ward Smith wrote:
> Here's yet another method for stretching octaves which could be
> interesting. I'll explain it by way of an example.
> 
> The projections for 7-limit meantone form a four-parameter family.
> Within that family, we can solve for the projection which is closest
> to the identity according to some metric. I solved using the
> unweighted L2 metric, but make no claim that this is the best choice. The
> result was

You lost me there. Is an L2 metric like the least squares optimization?

> [|117/446 73/223 58/223 -61/446>,
> |73/223 93/223 80/223 -19/223>,
> |58/223 80/223 88/223 46/223>,
> |-61/446 -19/223 46/223 413/446>]

I really have no idea where this comes from. I notice that it's equal to its transpose, and that each row and column roughly adds up to 1. I did work out a quantity that you could be minimizing, but it looked like it had too many unknowns.

> Applying this to <1 log2(3) log2(5) log2(7)| gives a meantone tuning
> with an octave stretch of 1.34 cents, a fifth of 697.22 cents, a
> twelfth of 1898.56 cents and so forth. Pretty reasonable-looking values.

I can see how you get that from the matrix. I'm guessing this has similar implications to the weighted least-squares. But it's mathematically distinct, and you're putting the weighting in at a later stage. I can't see it as simpler, or more valid, than the weighted least-squares. But it'll probably give reasonable results.

Graham

Gene Ward Smith <gwsmith@svpal.org>

10/9/2005 7:58:51 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:

> You lost me there. Is an L2 metric like the least squares optimization?

I meant by that the Frobenius norm I defined in a subsequent post: the
square root of the sum of the squares of the matrix coefficients.
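In code the definition is a one-liner (a sketch):

    from math import sqrt

    def frobenius(M):
        # Square root of the sum of the squares of the entries.
        return sqrt(sum(x * x for row in M for x in row))

Applied entrywise to P - I, this is the distance to the identity that
the posted projection minimizes.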

> > [|117/446 73/223 58/223 -61/446>,
> > |73/223 93/223 80/223 -19/223>,
> > |58/223 80/223 88/223 46/223>,
> > |-61/446 -19/223 46/223 413/446>]
>
> I really have no idea where this comes from. I notice that it's equal
> to its transpose, and that each row and column roughly adds up to 1.

If you read the rows as exponents (which is why I wrote them
monzo-fashion) you'll see they are approximations to the primes; for
instance 2^(117/446) 3^(73/223) 5^(58/223) 7^(-61/446) is nearly 2, etc.
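That claim is a one-line check (sketch):

    print(2**(117/446) * 3**(73/223) * 5**(58/223) * 7**(-61/446))
    # ~2.0016, i.e. the octave stretched to about 1201.34 cents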

> I can see how you get that from the matrix. I'm guessing this has
> similar implications to the weighted least-squares. But it's
> mathematically distinct, and you're putting the weighting in at a later
> stage. I can't see it as simpler, or more valid than the weighted
> least-squares. But it'll probably give reasonable results.

It's something like a weighted least-squares, but it has a different
origin.