Scalar complexity from unison vectors

Graham Breed <gbreed@gmail.com>

9/9/2007 1:20:02 AM

I've mentioned scalar complexity here before but it still isn't in my errors and complexities PDF. One definition is that it's the size, under a Euclidean metric, of the wedge product of the weighted equal temperament mappings that define the temperament. That gives the length of an equal temperament, the area of a linear temperament, and so on. You divide it by the number of prime dimensions to make it comparable to the related complexity measures.
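
For example, in the 5-limit with the 12- and 19-note equal temperaments (a sketch in modern NumPy terms rather than Numeric; the two mappings are arbitrary example choices), the size of the wedge product comes out as the root sum of squares of the RxR minors of the weighted mapping matrix:

import itertools
import math
import numpy as np

# Sizes of the primes 2, 3, 5 in octaves.
primes = np.log2([2.0, 3.0, 5.0])

# 5-limit mappings for 12- and 19-note equal temperaments.
ets = np.array([[12, 19, 28], [19, 30, 44]], dtype=float)
weighted = ets / primes   # mappings are weighted by dividing

# Euclidean size of the wedge product: root sum of squares of
# all RxR minors (R = 2 here), then divide by D = 3.
minors = [np.linalg.det(weighted[:, list(cols)])
          for cols in itertools.combinations(range(3), 2)]
print(math.sqrt(sum(m*m for m in minors)) / 3)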

The other definition is that you take the matrix whose rows are the weighted equal temperament mappings and multiply it by its transpose, giving an RxR matrix for a rank R temperament. The square root of the determinant of that matrix, divided by the number of prime dimensions, is the scalar complexity. Equivalently, you can form the matrix by taking the [i,j]th element as the dot product of the ith and jth mappings.
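
The Cauchy-Binet identity says the determinant of M multiplied by its transpose equals the sum of squares of the RxR minors of M, so this form should reproduce the wedge-product value. A self-contained NumPy sketch with the same example mappings as above:

import math
import numpy as np

primes = np.log2([2.0, 3.0, 5.0])
# Rows are the weighted mappings for 12- and 19-note ETs.
M = np.array([[12, 19, 28], [19, 30, 44]], dtype=float) / primes
gram = M @ M.T            # RxR matrix of pairwise dot products
print(math.sqrt(np.linalg.det(gram)) / 3)   # matches the wedge value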

I've been looking at the PDF book of Grassmann Algebra and concluded that the two definitions are provably identical. That being the case, it should also be possible to calculate it using unison vectors (or whatever you want to call them) instead of equal temperament mappings.

I still haven't noticed an agreed term for unison vectors. They're commas which are tempered to a unison in a given regular temperament. Each regular temperament class can be defined using a minimal set of unison vectors. In this case we need them to be without torsion.

You weight a unison vector by multiplying each element by the size of the corresponding prime. (This is the opposite of the weighting of mappings, where you divide instead of multiply, as befits the algebraic dual.) To turn the wedge product of unison vectors into the wedge product of mappings you have to take the complement and divide by the product of the sizes of the prime intervals (or something like that).
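
As a concrete example of the convention (the syntonic comma, in NumPy terms):

import numpy as np

primes = np.log2([2.0, 3.0, 5.0])
comma = np.array([-4.0, 4.0, -1.0])   # 81/80 as a monzo |-4 4 -1>
print(comma * primes)                 # unison vectors: multiply
# A mapping such as <12 19 28] would be divided by primes instead.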

Unison vectors being the dual of mappings, you can also calculate the complexity using a matrix of unison vectors. So you put the minimal set of weighted unison vectors into a matrix as rows. Then multiply it by its transpose to get the smaller square matrix ((D-R)x(D-R) for a rank R temperament with D prime dimensions). Alternatively, construct the matrix so that the [i,j]th element is the dot product of the ith and jth weighted unison vectors. The scalar complexity is the square root of the determinant of this matrix, divided by the product of the prime weights and the number of prime dimensions.

This should be the most efficient way of calculating the complexity from unison vectors most of the time, because it avoids the need to calculate the wedgie, and wedgies can get big. So it may be useful as part of a search for temperaments by unison vectors, because it tells you when a set of unison vectors is already too complex for what you're looking for. It's particularly efficient for such searches because the square matrices keep re-using the same values. For example, if you construct an NxN lookup table for your N candidate commas where the [i,j]th entry is the dot product of the ith and jth commas, it will contain all the numbers you need for the matrices, at the cost of an up-front calculation that's quadratic in the number of candidate commas. (About half its entries are duplicates as well.)
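
A sketch of that lookup-table trick, with a few arbitrary 7-limit commas standing in as the candidates (NumPy again):

import itertools
import math
import numpy as np

primes = np.log2([2.0, 3.0, 5.0, 7.0])

# Example 7-limit candidate commas, weighted once, up front.
candidates = np.array([
    [-4, 4, -1, 0],    # 81/80
    [6, -2, 0, -1],    # 64/63
    [-5, 2, 2, -1],    # 225/224
], dtype=float) * primes

# NxN table of pairwise dot products: computed once, reused for
# every subset of commas the search considers.
table = candidates @ candidates.T

prime_product = float(np.prod(primes))
for subset in itertools.combinations(range(len(candidates)), 2):
    gram = table[np.ix_(subset, subset)]   # read straight from the table
    print(subset, math.sqrt(np.linalg.det(gram)) / prime_product / 4)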

Here's some Python code to tie down the algorithm. It uses a Numeric array "primes" containing the sizes of the prime intervals in octaves, and takes the unison vectors as a sequence of sequences.

import math
import Numeric, LinearAlgebra

def scalarComplexity(uvs):
    dimension = len(uvs[0])
    active_primes = primes[:dimension]
    # Product of the prime weights, for the final normalization.
    prime_product = 1.0
    for prime in active_primes:
        prime_product *= prime
    # Weight each unison vector: multiply elementwise by the primes.
    weighted = [uv*active_primes for uv in uvs]

    MT = Numeric.array(weighted)
    M = Numeric.transpose(MT)
    # Gram matrix of the weighted unison vectors: (D-R)x(D-R).
    MTM = Numeric.matrixmultiply(MT, M)
    det = LinearAlgebra.determinant(MTM)
    return math.sqrt(det)/prime_product/dimension
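
For example, with "primes" filled in for the 5-limit, a single comma should give the complexity of the rank 2 temperament that tempers it out:

primes = Numeric.array([1.0, 1.5849625, 2.3219281])  # 2, 3, 5 in octaves
print scalarComplexity([[-4, 4, -1]])  # meantone from 81/80, roughly 0.711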

I still don't have an efficient way of calculating the error of a temperament from its unison vectors.

Graham

Carl Lumma <ekin@lumma.org>

9/9/2007 8:23:31 AM

At 01:20 AM 9/9/2007, you wrote:
>I've mentioned scalar complexity here before but it still
>isn't in my errors and complexities PDF. One definition is
>that it's the size, under a Euclidean metric, of the wedge
>product of the weighted equal temperament mappings that
>define the temperament. That gives the length of an equal
>temperament, the area of a linear temperament, and so on.
>You divide it by the number of prime dimensions to make it
>comparable to the related complexity measures.

That sounds like what Paul uses in Middle Path.

>The other definition is that you take the matrix whose rows
>are the weighted equal temperament mappings and multiply it
>by its transpose, giving an RxR matrix for a rank R
>temperament. The square root of the determinant of that
>matrix, divided by the number of prime dimensions, is the
>scalar complexity. Equivalently, you can form the matrix by
>taking the [i,j]th element as the dot product of the ith
>and jth mappings.

Huh.

>I've been looking at the PDF book of Grassmann Algebra and
>concluded that the two definitions are provably identical.
>That being the case, it should also be possible to calculate
>it using unison vectors (or whatever you want to call them)
>instead of equal temperament mappings.
>
>I still haven't noticed an agreed term for unison vectors.
>They're commas which are tempered to a unison in a given
>regular temperament. Each regular temperament class can be
>defined using a minimal set of unison vectors. In this case
>we need them to be without torsion.
>
>You weight a unison vector by multiplying each element by
>the size of the corresponding prime. (This is the opposite
>of the weighting of mappings, where you divide instead of
>multiply, as befits the algebraic dual.) To turn the wedge
>product of unison vectors into the wedge product of mappings
>you have to take the complement and divide by the product of
>the sizes of the prime intervals (or something like that).
>
>Unison vectors being the dual of mappings, you can also
>calculate the complexity using a matrix of unison vectors.
>So you put the minimal set of weighted unison vectors into a
>matrix as rows. Then multiply it by its transpose to get
>the smaller square matrix ((D-R)x(D-R) for a rank R
>temperament with D prime dimensions). Alternatively,
>construct the matrix so that the [i,j]th element is the dot
>product of the ith and jth weighted unison vectors. The
>scalar complexity is the square root of the determinant of
>this matrix, divided by the product of the prime weights
>and the number of prime dimensions.
>
>This should be the most efficient way of calculating the
>complexity from unison vectors most of the time, because it
>avoids the need to calculate the wedgie, and wedgies can
>get big. So it may be useful as part of a search for
>temperaments by unison vectors, because it tells you when a
>set of unison vectors is already too complex for what
>you're looking for. It's particularly efficient for such
>searches because the square matrices keep re-using the same
>values. For example, if you construct an NxN lookup table
>for your N candidate commas where the [i,j]th entry is the
>dot product of the ith and jth commas, it will contain all
>the numbers you need for the matrices, at the cost of an
>up-front calculation that's quadratic in the number of
>candidate commas. (About half its entries are duplicates
>as well.)
>
>Here's some Python code to tie down the algorithm. It uses
>a Numeric array "primes" containing the sizes of the prime
>intervals in octaves, and takes the unison vectors as a
>sequence of sequences.
>
>import math
>import Numeric, LinearAlgebra
>
>def scalarComplexity(uvs):
>    dimension = len(uvs[0])
>    active_primes = primes[:dimension]
>    # Product of the prime weights, for the final normalization.
>    prime_product = 1.0
>    for prime in active_primes:
>        prime_product *= prime
>    # Weight each unison vector: multiply elementwise by the primes.
>    weighted = [uv*active_primes for uv in uvs]
>
>    MT = Numeric.array(weighted)
>    M = Numeric.transpose(MT)
>    # Gram matrix of the weighted unison vectors: (D-R)x(D-R).
>    MTM = Numeric.matrixmultiply(MT, M)
>    det = LinearAlgebra.determinant(MTM)
>    return math.sqrt(det)/prime_product/dimension
>
>I still don't have an efficient way of calculating the error
>of a temperament from its unison vectors.

Wow.

-Carl

Graham Breed <gbreed@gmail.com>

9/9/2007 5:54:35 PM

Carl Lumma wrote:
> At 01:20 AM 9/9/2007, you wrote:
>>I've mentioned scalar complexity here before but it still
>>isn't in my errors and complexities PDF. One definition is
>>that it's the size, under a Euclidean metric, of the wedge
>>product of the weighted equal temperament mappings that
>>define the temperament. That gives the length of an equal
>>temperament, the area of a linear temperament, and so on.
>>You divide it by the number of prime dimensions to make it
>>comparable to the related complexity measures.
>
> That sounds like what Paul uses in Middle Path.

According to my PDF, he used the sum-abs of the weighted wedgie. That's proportional to the Tenney harmonic distance of a single unison vector, and probably a sensible taxicab distance for an equal temperament. In general, I don't know what it means. How do you calculate areas with a taxicab metric anyway?
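
For a single pair of equal temperaments (12 and 19 again as arbitrary examples, sketched with NumPy), the sum-abs is just a different norm over the same list of minors:

import itertools
import numpy as np

primes = np.log2([2.0, 3.0, 5.0])
weighted = np.array([[12, 19, 28], [19, 30, 44]], dtype=float) / primes

# The weighted wedgie's components are the 2x2 minors.
minors = [np.linalg.det(weighted[:, list(c)])
          for c in itertools.combinations(range(3), 2)]
print(sum(abs(m) for m in minors))   # sum-abs of the weighted wedgie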

Note that Fokker's periodicity block determinants are a special case of unweighted scalar complexity.

Graham