
Observations about wedgie-defined measures

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/29/2010 10:56:23 PM

If W is an equal temperament val for n-edo, then ||W|| is about n, which is right for complexity. ||W^J|| is proportional to standard deviation, and ||W^J||/||W|| makes sense as an error measure; the adjustment to a variance from the mean value means that tendencies towards sharpness (e.g. 27edo) or flatness (e.g. 19edo) are taken into account. So it clearly works.

||W^J|| is defined on the Kees subspace or whatever we should call it, where k*J is projected down to 0 and the rest gives us a positive definite inner product and so a Euclidean space.

What about higher ranks? Once again, there's a projection down to a lower dimensional space with a Euclidean structure on that space going on, because the subspace of all u^J (for bivals), or all u^v^J (for trivals), and so on will be sent to zero. So you start out with a semidefinite form, associated to a symmetric matrix with some eigenvalues zero and the rest positive, and can convert it into a definite one by projection. What that all really means for us I have not thought about as yet. But there certainly seems to be no barrier to defining logflatness starting from here.
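The rank-1 claims above can be checked numerically. The sketch below is mine, assuming Tenney weighting (each entry of the val divided by log2 of its prime), so that the JIP J becomes the all-ones vector in weighted coordinates; the 5-limit primes and the 19edo patent val are chosen purely for illustration.

```python
import math

# Minimal numeric check of the rank-1 claims, assuming Tenney weighting
# (my assumption): the weighted val has entries v_i / log2(p_i), and the
# JIP J becomes the all-ones vector in these coordinates.
primes = (2, 3, 5)
val = (19, 30, 44)                    # patent val for 19edo
w = [v / math.log2(p) for v, p in zip(val, primes)]
J = [1.0] * len(w)
d = len(w)

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

# ||W ^ J||^2 = ||W||^2 ||J||^2 - (W.J)^2, the Gram identity for a 2-blade.
wedge_norm = math.sqrt(norm(w) ** 2 * norm(J) ** 2 - dot(w, J) ** 2)

# "Proportional to standard deviation": in fact ||W ^ J|| = d * std(w),
# where std is the population standard deviation of the weighted entries.
mean = sum(w) / d
std = math.sqrt(sum((t - mean) ** 2 for t in w) / d)

print(norm(w))               # grows like n (times sqrt(d) with this plain norm)
print(wedge_norm, d * std)   # these two coincide
print(wedge_norm / norm(w))  # the proposed error measure
```

Because the deviation is taken about the mean of the weighted entries, a uniformly sharp or flat tuning shifts the mean rather than inflating the error, which is the adjustment to a variance mentioned above.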

🔗Graham Breed <gbreed@gmail.com>

5/29/2010 11:01:58 PM

On 30 May 2010 09:56, genewardsmith <genewardsmith@sbcglobal.net> wrote:
> If W is an equal temperament val for n-edo, then ||W|| is about n,
> which is right for complexity. ||W^J|| is proportional to standard
> deviation, and ||W^J||/||W|| makes sense as an error measure;
> the adjustment to a variance from the mean value means that
> tendencies towards sharpness (e.g. 27edo) or flatness (e.g. 19edo)
> are taken into account. So it clearly works.

Proportional to what standard deviation?

> ||W^J|| is defined on the Kees subspace or whatever we should
> call it, where k*J is projected down to 0 and the rest gives us a
> positive definite inner product and so a Euclidean space.

Kees subspace? You mean it's positive definite if you define away the
zeros (just intonation)?

> What about higher ranks? Once again, there's a projection down
> to a lower dimensional space with a Euclidean structure on
> that space going on, because the subspace of all u^J (for bivals),
> or all u^v^J (for trivals), and so on will be sent to zero. So you start out
> with a semidefinite form, associated to a symmetric matrix with
> some eigenvalues zero and the rest positive, and can convert it
> into a definite one by projection. What that all really means for us
> I have not thought about as yet. But there certainly seems to be
> no barrier to defining logflatness starting from here.

Yes, it's positive semidefinite, and also a projection into a space
of one lower rank.

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/29/2010 11:48:07 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> On 30 May 2010 09:56, genewardsmith <genewardsmith@...> wrote:
> > If W is an equal temperament val for n-edo, then ||W|| is about n,
> > which is right for complexity. ||W^J|| is proportional to standard
> > deviation, and ||W^J||/||W|| makes sense as an error measure;
> > the adjustment to a variance from the mean value means that
> > tendencies towards sharpness (e.g. 27edo) or flatness (e.g. 19edo)
> > are taken into account. So it clearly works.
>
> Proportional to what standard deviation?

The standard deviation of the coordinates of W.

> > ||W^J|| is defined on the Kees subspace or whatever we should
> > call it, where k*J is projected down to 0 and the rest gives us a
> > positive definite inner product and so a Euclidean space.
>
> Kees subspace? You mean it's positive definite if you define away the
> zeros (just intonation)?

I don't know what you mean. I was talking about the space of "zero sized intervals", where J maps the whole thing to 0 (in other words, it's the null space for J.)
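The projection being described has a simple concrete form if one assumes Tenney-weighted coordinates (my assumption), where J is the all-ones covector: projecting out the direction of J is just subtracting the mean entry. A sketch:

```python
import math

# Sketch of the projection onto the orthogonal complement of J, assuming
# Tenney-weighted coordinates so that J = (1, ..., 1). Any multiple k*J
# goes to 0; what survives carries a positive definite inner product.
def project_out_J(x):
    mean = sum(x) / len(x)
    return [t - mean for t in x]

def norm(x):
    return math.sqrt(sum(t * t for t in x))

print(project_out_J([5.0, 5.0, 5.0]))   # a multiple of J maps to 0

# For a weighted val W, the projected norm recovers the earlier quantity:
# ||project_out_J(W)|| = ||W ^ J|| / ||J||.
primes = (2, 3, 5)
val = (19, 30, 44)                      # patent val for 19edo, for illustration
w = [v / math.log2(p) for v, p in zip(val, primes)]
print(norm(project_out_J(w)))
```

On the image of this projection the induced form is positive definite, which is the Euclidean structure on the "Kees subspace" from the first message.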

🔗Graham Breed <gbreed@gmail.com>

5/30/2010 10:01:54 PM

On 30 May 2010 10:48, genewardsmith <genewardsmith@sbcglobal.net> wrote:
>
>
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>>
>> On 30 May 2010 09:56, genewardsmith <genewardsmith@...> wrote:
>> > If W is an equal temperament val for n-edo, then ||W|| is about n,
>> > which is right for complexity. ||W^J|| is proportional to standard
>> > deviation, and ||W^J||/||W|| makes sense as an error measure;
>> > the adjustment to a variance from the mean value means that
>> tendencies towards sharpness (e.g. 27edo) or flatness (e.g. 19edo)
>> > are taken into account. So it clearly works.
>>
>> Proportional to what standard deviation?
>
> The standard deviation of the coordinates of W.

The full rule is that the scalar badness is the square root of the
determinant of the covariance matrix of the weighted mapping. This is
a standard deviation in the special case of rank 1.

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/30/2010 10:31:35 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> The full rule is that the scalar badness is the square root of the
> determinant of the covariance matrix of the weighted mapping. This is
> a standard deviation in the special case of rank 1.

It's also probability and statistics terminology, which is confusing since it isn't appropriate in this case. I would prefer to say that you take a list of vals Vi in weighted coordinates, subtract off the average value times the JIP, obtaining a new list Ui, and then form the square matrix of dot products [Ui.Uj]. The determinant of that is, you are claiming, the same as ||W^J||? Or what?

🔗Graham Breed <gbreed@gmail.com>

5/30/2010 10:44:10 PM

On 31 May 2010 09:31, genewardsmith <genewardsmith@sbcglobal.net> wrote:
>
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
>> The full rule is that the scalar badness is the square root of the
>> determinant of the covariance matrix of the weighted mapping.  This is
>> a standard deviation in the special case of rank 1.
>
> It's also probability and statistics terminology, which is confusing
> since it isn't appropriate in this case. I would prefer to say that
> you take a list of vals Vi in weighted coordinates, subtract off the
> average value times the JIP, obtaining a new list Ui, and then form
> the square matrix of dot products [Ui.Uj]. The determinant of that
> is, you are claiming, the same as ||W^J||? Or what?

You started the statistics by mentioning the standard deviation. In
this case it's a special case of the covariance matrix. And the fact
that this term "covariance matrix" exists for exactly what we want is
surely worth mentioning, once statistics comes into it.

I'm assuming Tenney weighting, so the JIP is what implies the sum.
Otherwise, I think you're describing it correctly.

It's also an orthogonal projection. It shouldn't be a surprise that
wedge products are linked to orthogonal projections. It happens that
the covariances are also related. But because they're in different
books these relationships aren't usually remarked on.

Graham
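The identity behind this exchange can be verified directly. The sketch below is mine: it takes the 5-limit patent vals for 12edo and 19edo (an arbitrary rank-2 example), builds the Gram matrix [Ui.Uj] of the mean-subtracted weighted vals, and checks that its determinant equals ||V1^V2^J||^2 / ||J||^2, from which the square-root-of-covariance-determinant rule follows.

```python
import math

# Check of the claimed identity, with the 5-limit patent vals for 12edo
# and 19edo (both choices are mine, for illustration) and Tenney weighting.
primes = (2, 3, 5)
vals = [(12, 19, 28), (19, 30, 44)]
d = len(primes)

def weight(v):
    return [x / math.log2(p) for x, p in zip(v, primes)]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def det(m):                      # determinant by cofactor expansion
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

V = [weight(v) for v in vals]
J = [1.0] * d

# The construction above: subtract each val's average value times the JIP...
U = [[x - sum(v) / d for x in v] for v in V]
gram_U = [[dot(a, b) for b in U] for a in U]

# ...versus the Gram determinant of (V1, V2, J), which is ||V1^V2^J||^2.
gram_VJ = [[dot(a, b) for b in V + [J]] for a in V + [J]]

# Because each Ui is orthogonal to J, the wedge norm factors:
#   ||V1^V2^J||^2 = det[Ui.Uj] * ||J||^2, with ||J||^2 = d.
lhs = det(gram_VJ)
rhs = det(gram_U) * d
print(lhs, rhs)                  # agree

# The covariance matrix is [Ui.Uj]/d, so the scalar badness is
#   sqrt(det cov) = ||V1^V2^J|| / d^((r+1)/2)   for rank r = 2.
cov = [[g / d for g in row] for row in gram_U]
badness = math.sqrt(det(cov))
print(badness, math.sqrt(lhs) / d ** 1.5)  # agree
```

So the answer to "the same as ||W^J||? Or what?" is: the Gram determinant det[Ui.Uj] is ||V1^...^Vr^J||^2 divided by ||J||^2, and the covariance determinant is the same thing up to a further power of d.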