
Cangwu Badness

🔗Graham Breed <gbreed@gmail.com>

7/4/2010 11:36:16 PM

This is about the badness function I came up with as a generalization
of scalar badness. I decided to call it "Cangwu badness" although
I've never needed to use the name. If you want a cute name to compare
it with other badnesses you can call it that. It's also parametric
scalar badness, and I see I've called it scalar parametric badness,
which looks like a bad word order, but never mind.

It's documented at http://x31eq.com/badness.pdf and that's what I'm
using to check the details. I've just spotted an error in the
formulas, which runs all the way through the file, so I'll attend to
that sometime. W^T should always be W^2 because W is its own
transpose so it doesn't make sense otherwise.

I'll need to talk about transposes of matrices for this, so I'll use
bra-ket notation. A column vector is [X> and a row vector is <X]. A
scalar product is <X|Y>. A product involving a rectangular matrix is
<X|A|Y>. Sometimes rectangular matrices are also written as bras or
kets and, whatever the standard, <A] is always the transpose of [A>.

Scalar complexity is ||<M|W><W|M>/<H|W><W|H>|| where [M> is the
mapping matrix with vals as columns, [H> is the sizes of prime
intervals as a column vector, and <W] is a matrix containing the
weights of primes, which means it will be diagonal, so <W] = [W>.
||...|| is the square root of the determinant.

To save typing, I'll set a metric G to be the square of W, that is G =
[W><W]. If you're using weighted mappings, G, [W> and <W] are all the
identity matrix anyway. So the scalar complexity is now
||<M|G|M>/<H|G|H>||.
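
[Editor's sketch: the formula above can be computed directly in numpy. The Tenney-style 1/log2(p) weights and the 12&19 example vals are illustrative assumptions; the post only requires W to be diagonal.]

```python
import numpy as np

# [H>: sizes of the prime intervals (in octaves) as a vector
H = np.log2([2.0, 3.0, 5.0])
# <W] = [W>: diagonal weighting matrix (Tenney-style weights, an assumption)
W = np.diag(1.0 / H)
# The metric G = [W><W]
G = W @ W

# [M>: mapping matrix with vals as columns (12- and 19-note ETs, 5-limit)
M = np.array([[12, 19],
              [19, 30],
              [28, 44]], dtype=float)

def scalar_complexity(M, G, H):
    """||<M|G|M>/<H|G|H>||, where ||...|| is the square root of the determinant."""
    return np.sqrt(np.linalg.det(M.T @ G @ M / (H @ G @ H)))

print(scalar_complexity(M, G, H))
```

With a single val as the matrix, this comes out very close to the number of notes to the octave, matching the claim about equal temperaments later in the post.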

The formula for scalar badness is:

||<M|G|M>/<H|G|H> - <M|G|H><H|G|M>/<H|G|H><H|G|H>||

The parametric badness is a mixture of scalar badness and scalar
complexity with a parameter E_k squared that I'll call x.

B(x) = ||<M|G|M>/<H|G|H>(1 + x) - <M|G|H><H|G|M>/<H|G|H><H|G|H>||

When x=0, this is identical to scalar badness. As x tends to
infinity, the badness tends to (1+x) times scalar complexity.
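
[Editor's sketch: B(x) translated into numpy, under the same assumed Tenney-style weighting and example vals as above; nothing here is fixed by the post beyond W being diagonal.]

```python
import numpy as np

H = np.log2([2.0, 3.0, 5.0])
G = np.diag(1.0 / H**2)            # G = [W><W] with W = diag(1/H), an assumption
M = np.array([[12, 19],            # vals as columns: 12- and 19-note ETs
              [19, 30],
              [28, 44]], dtype=float)

def cangwu_badness(M, G, H, x):
    """B(x) = ||<M|G|M>/<H|G|H>(1 + x) - <M|G|H><H|G|M>/<H|G|H><H|G|H>||"""
    hGh = H @ G @ H                # <H|G|H>: a scalar
    MGH = M.T @ G @ H              # <M|G|H> as a 1-D array
    inner = (1 + x) * (M.T @ G @ M) / hGh - np.outer(MGH, MGH) / hGh**2
    return np.sqrt(np.linalg.det(inner))

print(cangwu_badness(M, G, H, 0.0))   # scalar badness when x = 0
print(cangwu_badness(M, G, H, 1.0))   # larger, as the complexity term kicks in
```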

It's also possible to write it as a function of operators. If
||<M|K|M>|| is scalar complexity and ||<M|B|M>|| is scalar badness,
then ||<M|K|M>x + <M|B|M>|| is the parametric badness. Note that K is
not as simple as G/<H|G|H> but it is still a matrix. When 0 < x < 1,
the parametric badness is defined by a positive definite matrix,
making it an inner product.

For equal temperaments, the determinant is redundant. So the square
of the parametric badness is the sum of two squares: the scalar
badness, and the scalar complexity multiplied by E_k (the square root
of x). This looks like a sum of squares of two error-times-complexity
badnesses. That's why E_k is written with an E. It has dimensions of
error. It's roughly the worst error of temperaments you're interested
in.
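
[Editor's check: for a single val the determinant is 1x1, so the identity B(x)^2 = badness^2 + x * complexity^2 follows directly from the formula. A numerical confirmation, with the same assumed weighting:]

```python
import numpy as np

H = np.log2([2.0, 3.0, 5.0])
G = np.diag(1.0 / H**2)            # assumed Tenney-style metric
m = np.array([12.0, 19.0, 28.0])   # a single val, so everything is scalar

hGh = H @ G @ H
mGm = m @ G @ m
mGh = m @ G @ H

complexity2 = mGm / hGh                      # scalar complexity squared
badness2 = mGm / hGh - (mGh / hGh) ** 2      # scalar badness squared

x = 0.25                                     # x = E_k^2
B2 = (1 + x) * mGm / hGh - (mGh / hGh) ** 2  # parametric badness squared

print(B2, badness2 + x * complexity2)        # the two agree
```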

Because the scalar complexity of an equal temperament is almost
exactly the number of notes to the octave, you can predict the most
notes you need to look at for equal temperaments within a given
badness cutoff. That makes it possible to produce a guaranteed top 10
list of equal temperaments, which is very useful.

As Cangwu badness is an inner product, it obeys the Cauchy-Schwarz
inequality. That means the badness of a rank 2 temperament is always
less than the product of the badnesses of the equal temperaments that
generate it. This is extremely useful when you're generating
temperament classes by pairing equal temperaments. Any rank 2
temperament will have, for any given parameter, an optimal pair of
generating equal temperaments. Geometrically, they're roughly
orthogonal. Their badnesses can't vary by much, because if they did
there'd be a better one between them. So there's a high chance that
the best rank 2 temperament is generated by the two best equal
temperaments.
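
[Editor's check of the Cauchy-Schwarz claim: the Cangwu matrix for a pair of vals is positive semidefinite for x > 0, so its determinant is at most the product of its diagonal entries, giving the inequality. The vals and weighting below are illustrative assumptions.]

```python
import numpy as np

H = np.log2([2.0, 3.0, 5.0])
G = np.diag(1.0 / H**2)            # assumed Tenney-style metric

def B(M, x):
    """Cangwu badness of a mapping matrix M (vals as columns)."""
    hGh = H @ G @ H
    MGH = M.T @ G @ H
    inner = (1 + x) * (M.T @ G @ M) / hGh - np.outer(MGH, MGH) / hGh**2
    return np.sqrt(np.linalg.det(inner))

v12 = np.array([[12.0], [19.0], [28.0]])
v19 = np.array([[19.0], [30.0], [44.0]])
pair = np.hstack([v12, v19])       # the rank 2 temperament they generate

x = 0.01
print(B(pair, x) <= B(v12, x) * B(v19, x))   # True
```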

The hardest rank 2 temperaments to find happen to be the ones like
Mystery or Compton that are an equal temperament with a different
mapping for one or more primes. For these, it's difficult to get a
pair of orthogonal equal temperaments, so the poorer one can be quite
high. There are a lot of such temperaments in the higher prime
searches. If you're not interested in such things, the searches
become a lot easier.

You can see how efficient the searches are with my web application
(http://x31eq.com/temper/). The higher rank results come from pairing
rank 2 temperaments with equal temperaments, and each of those with
the equal temperaments again, and so on.

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

7/5/2010 5:07:05 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> This is about the badness function I came up with as a generalization
> of scalar badness. I decided to call it "Cangwu badness" although
> I've never needed to use the name. If you want a cute name to compare
> it with other badnesses you can call it that.

I'm all for cute names, and this looks interesting. But would you PLEASE stop using the appalling phrases "scalar badness" and "scalar complexity" when these are always given as real numbers, and hence there are no vector-valued, matrix-valued, multivector-valued, tensor-valued, division ring valued or anything else of that sort valued badnesses to distinguish it from. ANY badness measure is "scalar badness" and hence the characterization means nothing and is inherently confusing. And a number should be called a number, not a "scalar", unless in a context where you need to distinguish it from vectors, etc. Would you call something "number badness" or "real number badness"? Didn't think so.

I'll need to wait to make substantive comments, and I hope this does not annoy you too greatly, but it really would help if you would pay more attention to things which serve to confuse people.

And FYI, I have no idea what you even mean by "scalar badness". Do you mean one of the badness measures defined from wedge products?

🔗genewardsmith <genewardsmith@sbcglobal.net>

7/5/2010 5:29:39 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> Scalar complexity is ||<M|W><W|M>/<H|W><W|H>|| where [M> is the
> mapping matrix with vals as columns, [H> is the sizes of prime
> intervals as a column vector, and <W] is a matrix containing the
> weights of primes, which means it will be diagonal, so <W] = [W>.
> ||...|| is the square root of the determinant.

What determinant? You've got a ratio of two numbers inside ||...||, don't you? Weighted dot products.

> The formula for scalar badness is:
>
> ||<M|G|M>/<H|G|H> - <M|G|H><H|G|M>/<H|G|H><H|G|H>||

Assuming G is the identity, this becomes

||M.M/H.H - (M.H/H.H)^2||

Where does this formula come from, and what does "[H> is the sizes of prime intervals as a column vector" mean?

🔗Graham Breed <gbreed@gmail.com>

7/5/2010 10:35:55 AM

On 5 July 2010 15:29, genewardsmith <genewardsmith@sbcglobal.net> wrote:
>
>
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
>> Scalar complexity is ||<M|W><W|M>/<H|W><W|H>|| where [M> is the
>> mapping matrix with vals as columns, [H> is the sizes of prime
>> intervals as a column vector, and <W] is a matrix containing the
>> weights of primes, which means it will be diagonal, so <W] = [W>.
>> ||...|| is the square root of the determinant.
>
> What determinant? You've got a ratio of two numbers inside ||...||, don't you? Weighted dot products.

You know what a determinant is, don't you? No, it isn't a ratio of two numbers.

>> The formula for scalar badness is:
>>
>> ||<M|G|M>/<H|G|H> - <M|G|H><H|G|M>/<H|G|H><H|G|H>||
>
> Assuming G is the identity, this becomes
>
> ||M.M/H.H - (M.H/H.H)^2||

Maybe, but when you write it like that it looks like a vector formula.

> Where does this formula come from, and what does "[H> is the sizes of prime intervals as a column vector" mean?

It comes from primerr.pdf. Or it's the same scalar badness we had
last time around. It's an orthogonal projection.

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

7/5/2010 12:53:42 PM

I won't be able to follow this very well if I keep being distracted by the word "scalar", so below I replace it with "breed".

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> breed complexity is ||<M|W><W|M>/<H|W><W|H>|| where [M> is the
> mapping matrix with vals as columns, [H> is the sizes of prime
> intervals as a column vector, and <W] is a matrix containing the
> weights of primes, which means it will be diagonal, so <W] = [W>.
> ||...|| is the square root of the determinant.

OK, I see that M is a matrix, not a vector, so getting rid of the W by assuming it is the identity, we get

sqrt(det(Matrix(vi.vj))) / ||H||

as the breed complexity. Here Matrix(vi.vj) is the matrix whose i,j-th component is the dot product of the (weighted) ith and jth vals, and [H> is "the sizes of prime intervals as a column vector". Graham refuses to say what that means, and I can't do a very good job of translating this unless I know. Is anyone else following this? Is there an obvious point I am missing? If we assume that H is the JIP, then ||H|| becomes sqrt(n), where n is the number of primes, and the complexity becomes

sqrt(det(Matrix(vi.vj))/n)

This would make sense in terms of the wedgie-defined complexity sqrt(wedgie.wedgie/n).

> The formula for breed badness is:
>
> ||<M|G|M>/<H|G|H> - <M|G|H><H|G|M>/<H|G|H><H|G|H>||

Or in other words

sqrt(det(Matrix(vi.vj) - (JIP.M)^2)/n)

This formula doesn't appear to make sense, as I am subtracting a scalar (note how the word is correctly used) from a matrix. So if anyone else is following this, please chime in.

🔗Graham Breed <gbreed@gmail.com>

7/5/2010 9:52:12 PM

On 5 July 2010 16:07, genewardsmith <genewardsmith@sbcglobal.net> wrote:

> I'm all for cute names, and this looks interesting. But would you
> PLEASE stop using the appalling phrases "scalar badness" and
> "scalar complexity" when these are always given as real numbers,

No, please stop arguing about terminology every time I say something.
I've been calling it scalar complexity for the past three years. You
were on the list when I announced it. That was the proper time to
object to the term.

> And FYI, I have no idea what you even mean by "scalar badness". Do you mean one of the badness measures defined from wedge products?

I did define it in that message. It's the badness defined from scalar
products of wedge products.

Graham

🔗Graham Breed <gbreed@gmail.com>

7/5/2010 10:06:37 PM

On 5 July 2010 23:53, genewardsmith <genewardsmith@sbcglobal.net> wrote:

>> breed complexity is ||<M|W><W|M>/<H|W><W|H>|| where [M> is the
>> mapping matrix with vals as columns, [H> is the sizes of prime
>> intervals as a column vector, and <W] is a matrix containing the
>> weights of primes, which means it will be diagonal, so <W] = [W>.
>> ||...|| is the square root of the determinant.
>
> OK, I see that M is a matrix, not a vector, so getting rid of the W by assuming it is the identity, we get
>
> sqrt(det(Matrix(vi.vj))) / ||H||

I don't think it's safe to take the ||H|| outside the determinant.

> as the breed complexity. Here Matrix(vi.vj) is the matrix whose i,j-th
> component is the dot product of the (weighted) ith and jth vals, and [H> is
> "the sizes of prime intervals as a column vector". Graham refuses to
> say what that means, and I can't do a very good job of translating this
> unless I know. Is anyone else following this? Is there an obvious point I am
> missing? If we assume that H is the JIP, then ||H|| becomes sqrt(n), where n
> is the number of primes, and the complexity becomes

I didn't refuse to define it. I missed that question when I went
online last night. Yes, H is like the JIP, but not weighted (unless
you make it so by setting W=I) and not a point.

> sqrt(det(Matrix(vi.vj))/n)
>
> This would make sense in terms of the wedgie-defined complexity
> sqrt(wedgie.wedgie/n).

It's one wedgie-defined complexity. The scalar product of the wedgie
with itself.

>> The formula for breed badness is:
>>
>> ||<M|G|M>/<H|G|H> - <M|G|H><H|G|M>/<H|G|H><H|G|H>||
>
> Or in other words
>
> sqrt(det(Matrix(vi.vj) - (JIP.M)^2)/n)
>
> This formula doesn't appear to make sense, as I am subtracting a scalar
> (note how the word is correctly used) from a matrix. So if anyone else is
> following this, please chime in.

No, you're using the word "scalar" incorrectly. Or, at least, you may
genuinely think <M|H><H|M> gives a scalar but you're wrong. Try
thinking about which way round [H> and <H] are instead of jumping to a
dot product that appears to be commutative but isn't.

[H> is a column vector and <H] is a row vector. The product <H|H>
gives a scalar, even if you stick a square weighting matrix in the
middle. The product [H><H] is a column multiplied by a row, so it's a
square matrix, the same size as the weighting. Hence <M|H><H|M> is
the same size as <M|M> and the two matrices can be subtracted.
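
[Editor's sketch: this shape bookkeeping can be checked directly in numpy. The vectors and vals are illustrative.]

```python
import numpy as np

H = np.array([1.0, 1.5849625, 2.3219281])   # any 3-vector will do here

inner = H @ H              # <H|H>: a single number
outer = np.outer(H, H)     # [H><H]: a column times a row, so a 3x3 matrix

M = np.array([[12.0, 19.0],
              [19.0, 30.0],
              [28.0, 44.0]])                 # vals as columns

MHHM = M.T @ outer @ M     # <M|H><H|M>: 2x2, the same size as <M|M>
MM = M.T @ M               # <M|M>: also 2x2, so the two can be subtracted

print(np.ndim(inner), outer.shape, MHHM.shape, MM.shape)
```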

<M|H><H|M>/<H|H> is something to do with orthogonal projections. I
forget exactly what right now. It isn't surprising that a formula
with orthogonal projections gives the same result as one with wedge
products.

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

7/6/2010 12:17:35 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> On 5 July 2010 16:07, genewardsmith <genewardsmith@...> wrote:
>
> > I'm all for cute names, and this looks interesting. But would you
> > PLEASE stop using the appalling phrases "scalar badness" and
> > "scalar complexity" when these are always given as real numbers,
>
> No, please stop arguing about terminology every time I say something.
> I've been calling it scalar complexity for the past three years. You
> were on the list when I announced it. That was the proper time to
> object to the term.

I didn't know what it was. For all I knew, there was some reason for your use of the term. I have since learned that the only reason is your limpet-like insistence on clinging to a solecism. If you are too pig-headed to admit you've made a mistake in your mathematical diction and change it, I reserve the right to point out your error when and if it suits me, because wrong is wrong. Or you could come up with a reason why "scalar complexity" makes sense when "number complexity" clearly would not. Do that, and amaze us all.

> I did define it in that message. It's the badness defined from scalar
> products of wedge products.

I can't see how scalar products could be so used in any very meaningful sense. You can rescale a wedge product by means of a scalar product, but that hardly amounts to using it to define a numerical quantity; you can always avoid doing that and achieve the same end, since in the end you are going to produce a number, and multiplication is commutative.

What is this scalar product which you claim justifies the use of the term?

Graham, I am trying to understand you. So far as I can tell, no one else is. Does that matter to you at all?

🔗genewardsmith <genewardsmith@sbcglobal.net>

7/6/2010 12:56:55 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> It's one wedgie-defined complexity. The scalar product of the wedgie
> with itself.

The light dawns! This is a usage of "scalar product" common among physicists which mathematicans generally avoid. The reason they avoid it is that it's bad terminology, since scalar multiplication is also sometimes called the "scalar product", and I thought that's what you meant. Mathematicians do not usually think of vectors as arrows, regard them as constructed over arbitrary fields, think inner products can live in any ordered field and Hermitian inner products over any complexification of one, and in general don't inhabit the same world as physicists and engineers, and I'm sorry if this caused confusion. Mathematicians call what you and many others call a "scalar product" an inner product, or a dot product if it's the standard orthonormal basis version, or in some contexts a positive-definite symmetric bilinear form.

🔗genewardsmith <genewardsmith@sbcglobal.net>

7/6/2010 1:08:25 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> > sqrt(det(Matrix(vi.vj))) / ||H||
>
> I don't think it's safe to take the ||H|| outside the determinant.

Good point. I'm not sure what it's doing there, however.

> I didn't refuse to define it. I missed that question when I went
> online last night. Yes, H is like the JIP, but not weighted (unless
> you make it so by setting W=I) and not a point.

Sorry, I don't know what this means. Do you mean it isn't living inside any kind of space you are considering, but is somehow a vector anyway? Maybe you could give an example.

🔗Graham Breed <gbreed@gmail.com>

7/6/2010 2:33:12 AM

On 6 July 2010 12:08, genewardsmith <genewardsmith@sbcglobal.net> wrote:
>
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
>> I didn't refuse to define it.  I missed that question when I went
>> online last night.  Yes, H is like the JIP, but not weighted (unless
>> you make it so by setting W=I) and not a point.
>
> Sorry, I don't know what this means. Do you mean it isn't living inside
> any kind of space you are considering, but is somehow a vector
> anyway? Maybe you could give an example.

It lives in the vector space, so it's a vector. Maybe it's my physics
training coming back again, but I remember vectors not being points.
It is pretty much what you call the JIP. I don't think any music
theoretic planes will crash if you keep calling it the JIP.

Graham

🔗Graham Breed <gbreed@gmail.com>

7/6/2010 3:43:14 AM

On 6 July 2010 11:56, genewardsmith <genewardsmith@sbcglobal.net> wrote:
>
>
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
>> It's one wedgie-defined complexity.  The scalar product of the wedgie
>> with itself.
>
> The light dawns! This is a usage of "scalar product" common among
> physicists which mathematicans generally avoid. The reason they avoid
> it is that it's bad terminology, since scalar multiplication is also sometimes
> called the "scalar product", and I thought that's what you meant.
> Mathematicians do not usually think of vectors as arrows, regard them as
> constructed over arbitrary fields, think inner products can live in any
> ordered field and Hermitian inner products over any complexification of one,
> and in general don't inhabit the same world as physicists and engineers,
> and I'm sorry if this caused confusion. Mathematicians call what you and
> many others call a "scalar product" an inner product, or a dot product if
> it's the standard orthonormal basis version, or in some contexts a
> positive-definite symmetric bilinear form.

I got it from Browne, and it looks like I got it wrong. Checking now
I see he reserves "scalar product" for grade 1. But the Wikipedia
article on geometric algebra says that inner and scalar products are
the same.

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

7/6/2010 7:50:04 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> > Sorry, I don't know what this means. Do you mean it isn't living inside
> > any kind of space you are considering, but is somehow a vector
> > anyway? Maybe you could give an example.
>
> It lives in the vector space, so it's a vector. Maybe it's my physics
> training coming back again, but I remember vectors not being points.

Mathematicians think vectors are elements of an algebraic structure called a "vector space", and since it's a sort of space, the elements of it are sort of points. Since adding an inner product gives the standard model for the axioms of Euclidean geometry, and not adding it the standard model for affine space, that seems well justified.

> It is pretty much what you call the JIP. I don't think any music
> theoretic planes will crash if you keep calling it the JIP.

Could be, but what else should it be called?

🔗genewardsmith <genewardsmith@sbcglobal.net>

7/6/2010 7:58:45 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> I see he reserves "scalar product" for grade 1. But the Wikipedia
> article on geometric algebra says that inner and scalar products are
> the same.

Pure math people are not in love with geometric algebra either, by the way. Clifford algebras are much in use in certain areas. Originally and I think still most commonly "scalar product" is confined to being the dot product in three dimensions. I hope Wikipedia isn't spreading the usage of the term, which really should be deprecated. I'll check Wikipedia out--I would have thought they would claim the dot product and scalar product were the same, not inner product and scalar product.

🔗Mike Battaglia <battaglia01@gmail.com>

7/6/2010 2:05:52 PM

On Tue, Jul 6, 2010 at 3:17 AM, genewardsmith
<genewardsmith@sbcglobal.net> wrote:
>
> Graham, I am trying to understand you. So far as I can tell, no one else is. Does that matter to you at all?

For the record, I generally try to understand everything that both of
you say on this list. The problem is that the scalar product of my
music theory knowledge and my knowledge of abstract math is too low,
for the moment.

-Mike

🔗Graham Breed <gbreed@gmail.com>

7/6/2010 9:32:25 PM

On 6 July 2010 18:50, genewardsmith <genewardsmith@sbcglobal.net> wrote:
>
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

>> It is pretty much what you call the JIP.  I don't think any music
>> theoretic planes will crash if you keep calling it the JIP.
>
> Could be, but what else should it be called?

I don't care what you call it. I call it H.

How are you getting on with the parametric badness formula?

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

7/7/2010 5:27:12 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> How are you getting on with the parametric badness formula?

I can compute things, and maybe will need to move to that stage. I don't see the reasoning behind these formulas, and I have been looking at my old stuff about Frobenius norms and projection maps in the hope of enlightenment and to give myself a break from Cangwu. It makes more sense when weighted, defines an inner product, and is clearly highly relevant but I can't see if it's relevant to this business.

Here's the old article:

/tuning-math/message/12836

To bring it up to date, so to speak, just weight all vals and monzos, and the Frobenius tuning is now exactly the same as TOP-rms. The Frobenius projection map is easily computed from the definition in the unweighted case, but it's not a good idea to try in the weighted case. But an alternative definition in terms of the pseudoinverse works. So here is a positive-semidefinite (eigenvalues either 0 or 1) matrix defining an inner product on intervals and mappings belonging to a regular temperament, and giving a projection to it. Does this plug in to what you are doing?
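
[Editor's sketch: one reading of the pseudoinverse construction. V holds weighted vals as rows; the Tenney-style weighting and the 12&19 example are assumptions for illustration, not from the post.]

```python
import numpy as np

H = np.log2([2.0, 3.0, 5.0])
V = np.array([[12.0, 19.0, 28.0],
              [19.0, 30.0, 44.0]]) / H     # weight each prime axis by 1/log2(p)

# Projection onto the row space of V, built from the pseudoinverse
P = np.linalg.pinv(V) @ V

# P is symmetric and idempotent, so its eigenvalues are all 0 or 1
print(np.allclose(P, P.T), np.allclose(P @ P, P))
```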

🔗genewardsmith <genewardsmith@sbcglobal.net>

7/7/2010 10:45:30 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> The parametric badness is a mixture of scalar badness and scalar
> complexity with a parameter E_k squared that I'll call x.
>
> B(x) = ||<M|G|M>/<H|G|H>(1 + x) - <M|G|H><H|G|M>/<H|G|H><H|G|H>||
>
> When x=0, this is identical to scalar badness. As x tends to
> infinity, the badness tends to (1+x) times scalar complexity.

What's the payoff here? You can take simple badness, which I think is
the same as your so-called "scalar badness" and which is a term I
thought you had agreed you could use, and multiply it by some power of
complexity, producing a one-parameter family of badness measures. Why
don't we keep doing that?

🔗Graham Breed <gbreed@gmail.com>

7/8/2010 8:28:39 AM

On 08/07/2010, genewardsmith <genewardsmith@sbcglobal.net> wrote:

> What's the payoff here? You can take simple badness, which I think
> is the same as your so-called "scalar badness" and which is a term
> I thought you had agreed you could use, and multiply it by some
> power of complexity, producing a one-parameter family of
> badness measures. Why don't we keep doing that?

Because it isn't a positive definite quadratic form. If it obeys the
Cauchy-Schwarz inequality, I can't prove it. If a badness cutoff
corresponds to a complexity cutoff for equal temperaments, I don't
know how to find it.

I used things like this before. I always needed a complexity cutoff
as well as the parameter you mentioned. If there's a way around that
I don't know it.

Graham