TE complexity

🔗Mike Battaglia <battaglia01@gmail.com>

4/26/2012 9:57:21 PM

Is the TE complexity of a temperament the area of the primitive cell
of the lattice formed by the vals that support it? (ignoring
contorsion)

-Mike

🔗Graham Breed <gbreed@gmail.com>

4/27/2012 1:09:13 AM

On 27/04/2012, Mike Battaglia <battaglia01@gmail.com> wrote:
> Is the TE complexity of a temperament the area of the primitive cell
> of the lattice formed by the vals that support it? (ignoring
> contorsion)

Yes. Something like that.

Graham

🔗Mike Battaglia <battaglia01@gmail.com>

4/27/2012 2:17:50 AM

On Fri, Apr 27, 2012 at 4:09 AM, Graham Breed <gbreed@gmail.com> wrote:
>
> On 27/04/2012, Mike Battaglia <battaglia01@gmail.com> wrote:
> > Is the TE complexity of a temperament the area of the primitive cell
> > of the lattice formed by the vals that support it? (ignoring
> > contorsion)
>
> Yes. Something like that.

OK, so therefore the TE complexity is the exact same thing as the
hypervolume of the parallelopiped formed by any n linearly independent
generators which form and saturate that lattice.

Since this is exactly the same thing as the magnitude of the
multivector formed by those vectors, TE complexity is the exact same
thing as the RMS of the coefficients of the wedgie for that
temperament.
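
Here's a quick numerical sketch of that claim (Python with numpy assumed; the two vals are just an arbitrary pair supporting 5-limit meantone, left unweighted purely to show the geometry, so this checks the norm-equals-area relation rather than TE complexity proper):

# Sketch only: the Euclidean norm of the wedge product's coefficients
# (the 2x2 minors) equals the parallelogram area from the Gram determinant.
import numpy as np
from itertools import combinations

v1 = np.array([7, 11, 16])    # example val: 7-ET, 5-limit
v2 = np.array([12, 19, 28])   # example val: 12-ET, 5-limit
V = np.vstack([v1, v2])

minors = [np.linalg.det(V[:, [i, j]]) for i, j in combinations(range(3), 2)]
norm_of_wedgie = sum(m**2 for m in minors) ** 0.5    # |v1 ^ v2|
area_of_cell   = np.linalg.det(V @ V.T) ** 0.5       # sqrt of Gram determinant

print([int(round(m)) for m in minors])   # [1, 4, 4], the meantone wedgie
print(norm_of_wedgie, area_of_cell)      # both sqrt(33), about 5.745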

Is this true? How have I never noticed this before?

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

4/27/2012 2:20:47 AM

On Fri, Apr 27, 2012 at 5:17 AM, Mike Battaglia <battaglia01@gmail.com> wrote:
> On Fri, Apr 27, 2012 at 4:09 AM, Graham Breed <gbreed@gmail.com> wrote:
>>
>> On 27/04/2012, Mike Battaglia <battaglia01@gmail.com> wrote:
>> > Is the TE complexity of a temperament the area of the primitive cell
>> > of the lattice formed by the vals that support it? (ignoring
>> > contorsion)
>>
>> Yes. Something like that.
>
> OK, so therefore the TE complexity is the exact same thing as the
> hypervolume of the parallelopiped formed by any n linearly independent
> generators which form and saturate that lattice.

This should say "parallelotope."

-Mike

🔗Wolf Peuker <wolfpeuker@googlemail.com>

4/27/2012 2:25:08 AM

Hi Mike,

On 27.04.2012 11:17, Mike Battaglia wrote:
> OK, so therefore the TE complexity is the exact same thing as the
> hypervolume of the parallelopiped formed by any n linearly independent
> generators which form and saturate that lattice.
>
> Since this is exactly the same thing as the magnitude of the
> multivector formed by those vectors, TE complexity is the exact same
> thing as the RMS of the coefficients of the wedgie for that
> temperament.
What does RMS mean?
Where can I find a description of TE complexity for non-mathematicians?

Thanks!
Wolf

🔗Mike Battaglia <battaglia01@gmail.com>

4/27/2012 2:51:08 AM

On Fri, Apr 27, 2012 at 5:25 AM, Wolf Peuker <wolfpeuker@googlemail.com>
wrote:
>
> Hi Mike,
>
> On 27.04.2012 11:17, Mike Battaglia wrote:
>
>
> > OK, so therefore the TE complexity is the exact same thing as the
> > hypervolume of the parallelopiped formed by any n linearly independent
> > generators which form and saturate that lattice.
> >
> > Since this is exactly the same thing as the magnitude of the
> > multivector formed by those vectors, TE complexity is the exact same
> > thing as the RMS of the coefficients of the wedgie for that
> > temperament.
> What does RMS mean?
> Where can I find a description of TE complexity for non-mathematicians?
>
> Thanks!
> Wolf

RMS means "root mean square." It's a type of average. Given a set of
numbers like {1, 2, 3}, the usual average (the mean) is (1 + 2 + 3)/3
= 2. The RMS of this set is sqrt((1^2 + 2^2 + 3^2)/3), which is about
2.160.
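
In code, that's just (a trivial sketch, plain Python):

# Root mean square: square everything, average, then take the square root.
def rms(xs):
    return (sum(x**2 for x in xs) / len(xs)) ** 0.5

print(rms([1, 2, 3]))   # sqrt((1 + 4 + 9)/3) = sqrt(14/3), about 2.160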

If Gene or Graham answers my question about TE complexity in the
affirmative, then you'll have your description of TE complexity for
non-mathematicians. This way of thinking about it is much simpler and
more elegant, and I hope it's correct.

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

4/27/2012 9:22:53 AM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> Is this true? How have I never noticed this before?

I don't know. Did you read this:

http://xenharmonic.wikispaces.com/Tenney-Euclidean+temperament+measures

"Given a multival or multimonzo which is a wedge product of weighted vals or monzos, we may define a norm by means of the usual Euclidean norm. We can rescale this by taking the sum of squares of the entries of the multivector, dividing by the number of entries, and taking the square root. This will give a norm which is the RMS (root mean square) average of the entries of the multivector."

"Given a wedgie M, that is a canonically reduced r-val correspondng to a temperament of rank r, the norm ||M|| is a measure of the complexity of M; that is, how many notes in some sort of weighted average it takes to get to intervals."

Note, however, that Graham doesn't normalize via the RMS route.

🔗Mike Battaglia <battaglia01@gmail.com>

4/28/2012 12:44:08 AM

On Fri, Apr 27, 2012 at 12:22 PM, genewardsmith
<genewardsmith@sbcglobal.net> wrote:
>
> "Given a multival or multimonzo which is a wedge product of weighted vals
> or monzos, we may define a norm by means of the usual Euclidean norm. We can
> rescale this by taking the sum of squares of the entries of the multivector,
> dividing by the number of entries, and taking the square root. This will
> give a norm which is the RMS (root mean square) average of the entries of
> the multivector."

So what happens if you apply a different Lp norm to the wedgie? If
using L2 gives you a geometric interpretation that's something like
the area of the parallelogram formed by the factors of the wedgie,
what geometric interpretation would using something like the L1 or
Linf norm have?

> Note, however, that Graham doesn't normalize via the RMS route.

He just does sum-squared or something?

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

4/28/2012 1:32:56 AM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> So what happens if you apply a different Lp norm to the wedgie? If
> using L2 gives you a geometric interpretation that's something like
> the area of the parallelogram formed by the factors of the wedgie,
> what geometric interpretation would using something like the L1 or
> Linf norm have?

That's what we were asking ourselves ten years ago.

> > Note, however, that Graham doesn't normalize via the RMS route.
>
> He just does sum-squared or something?

Or something.

By the way, I think I can define the tuning map by a purely wedgie approach, but it's bedtime so I'm not going to try right now.

🔗Mike Battaglia <battaglia01@gmail.com>

4/29/2012 4:19:11 AM

On Sat, Apr 28, 2012 at 4:32 AM, genewardsmith <genewardsmith@sbcglobal.net>
wrote:
>
> --- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...>
> wrote:
>
> > So what happens if you apply a different Lp norm to the wedgie? If
> > using L2 gives you a geometric interpretation that's something like
> > the area of the parallelogram formed by the factors of the wedgie,
> > what geometric interpretation would using something like the L1 or
> > Linf norm have?
>
> That's what we were asking ourselves ten years ago.

I guess these different Lp norms generalize the "area" of a bivector
differently, just like different Lp norms generalize the length of a
vector differently.

So if multivectors generalize vectors as being signed hypervolumes in
a subspace, then the components of a wedgie in this case are like the
signed areas of the projection of the wedgie onto various component
subspaces. So for a bivector like <<1 4 4||, the 1 represents the area
of the parallelogram you get by projecting onto the e2^e3 plane, the 4
is the area of the parallelogram you get by projecting onto the e2^e5
plane, and the other 4 is the area you get by projecting onto the
e3^e5 plane. And then the norm tells you how to take this info to
reconstruct the area of the original parallelogram.

So for L2, using De Gua's theorem, which Keenan found out about, it
all works out to reconstruct the area. L1 is basically just the
manhattan version of all that. If you look at the L1 norm of a vector,
it's like saying "first you have to go this length along component x,
then you turn and go another length along component y, then do it
again for z, and count the total distance you travel." So for
bivectors, that'd be something like, "count the surface area you need
to cover along component x^y, then count the surface area you need to
cover along component x^z, then count the surface area you need to
cover along component y^z, and add up the total surface area you've
covered."

So it would appear to be half the surface area of some sort of
bounding parallelotope for the parallelogram. I'm not sure exactly how
to imagine it though, only that it's half the surface area for
some sort of bounding thing.
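
As a sketch of how the different norms combine those projected areas (plain Python, using the <<1 4 4|| components from above, unweighted, just to show the arithmetic):

# The three projected areas of the meantone bivector <<1 4 4||.
components = [1, 4, 4]

l2   = sum(abs(c)**2 for c in components) ** 0.5   # sqrt(33), about 5.745: the parallelogram area
l1   = sum(abs(c) for c in components)             # 9: the total "manhattan" area covered
linf = max(abs(c) for c in components)             # 4: the single largest projection

print(l2, l1, linf)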

-Mike

🔗Graham Breed <gbreed@gmail.com>

5/3/2012 10:02:07 AM

On 28/04/2012, Mike Battaglia <battaglia01@gmail.com> wrote:
> On Fri, Apr 27, 2012 at 12:22 PM, genewardsmith
> <genewardsmith@sbcglobal.net> wrote:

>> Note, however, that Graham doesn't normalize via the RMS route.
>
> He just does sum-squared or something?

I do it properly, so that the geometry works out. The measure of a
rank 2 mapping is the area of the parallelogram defined by the two
mappings that produce it. Normalizing by the number of elements of
the wedgie is wrong. It breaks theorems.

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/3/2012 6:10:04 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> On 28/04/2012, Mike Battaglia <battaglia01@...> wrote:
> > On Fri, Apr 27, 2012 at 12:22 PM, genewardsmith
> > <genewardsmith@...> wrote:
>
> >> Note, however, that Graham doesn't normalize via the RMS route.
> >
> > He just does sum-squared or something?
>
> I do it properly, so that the geometry works out. The measure of a
> rank 2 mapping is the area of the parallelogram defined by the two
> mappings that produce it. Normalizing by the number of elements of
> the wedgie is wrong. It breaks theorems.

Any allegedly "broken" theorem can always be reformulated so as not to be broken. The point is to come up with a definition which works well for our specific purposes. Some questions:

(1) How well does your definition perform in high ranks? Doesn't the complexity tend to become small?

(2) How well does it work when comparing a wedgie with its dual?

🔗Mike Battaglia <battaglia01@gmail.com>

5/3/2012 11:51:03 PM

On Thu, May 3, 2012 at 1:02 PM, Graham Breed <gbreed@gmail.com> wrote:
>
> On 28/04/2012, Mike Battaglia <battaglia01@gmail.com> wrote:
> > On Fri, Apr 27, 2012 at 12:22 PM, genewardsmith
> > <genewardsmith@sbcglobal.net> wrote:
>
> >> Note, however, that Graham doesn't normalize via the RMS route.
> >
> > He just does sum-squared or something?
>
> I do it properly, so that the geometry works out. The measure of a
> rank 2 mapping is the area of the parallelogram defined by the two
> mappings that produce it. Normalizing by the number of elements of
> the wedgie is wrong. It breaks theorems.
>
> Graham

OK, so it'd just be root-sum-squared, right?

-Mike

🔗Graham Breed <gbreed@gmail.com>

5/7/2012 3:50:47 AM

"genewardsmith" <genewardsmith@sbcglobal.net> wrote:
>
> --- In tuning-math@yahoogroups.com, Graham Breed
> <gbreed@...> wrote:

> > I do it properly, so that the geometry works out. The
> > measure of a rank 2 mapping is the area of the
> > parallelogram defined by the two mappings that produce
> > it. Normalizing by the number of elements of the
> > wedgie is wrong. It breaks theorems.
>
> Any allegedly "broken" theorem can always be reformulated
> so as not to be broken. The point is to come up with a
> definition which works well for our specific purposes.
> Some questions:

Yes, you can always add complications to the theorems to
match the complications you added to the definition.

> (1) How well does your definition perform in high ranks?
> Doesn't the complexity tend to become small?

It works fine in any rank. It doesn't matter what size the
complexity is.

> (2) How well does it work when comparing a wedgie with
> its dual?

That would depend on how the dual metric's defined. It
should probably work so the complexity of an interval has
something to do with its Tenney height.

Graham

🔗Graham Breed <gbreed@gmail.com>

5/7/2012 4:04:28 AM

Mike Battaglia <battaglia01@gmail.com> wrote:

> OK, so it'd just be root-sum-squared, right?

You could do that. The units I usually use are normalized
to be similar to other weighted complexity measures. To
normalize, you divide by some power of the number of
primes. You can make this part of the weighting. Divide
by whatever power of the number of primes (I think it's
the square root) and then you can take sums of squares of
wedgies or determinants.

The argument for the square root: for an ET, the complexity
should be roughly the number of steps to the octave. Call
the weighted primes (prime mappings divided by the sizes of
the primes) w, and that gives

RMS(w) = sqrt(sum(w[i]**2)/n)

for n primes. That can be rewritten

RMS(w) = sqrt(sum((w[i]/sqrt(n))**2))

So it becomes a root-sum-squared when w[i]->w[i]/sqrt(n)
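
A sketch of that calculation for a concrete ET (plain Python; 12-ET in the 5-limit is just an example):

from math import log2, sqrt

val    = [12, 19, 28]                 # example val: 12-ET, 5-limit
primes = [2, 3, 5]

w = [m / log2(p) for m, p in zip(val, primes)]   # weighted primes
n = len(w)

rms = sqrt(sum(x**2 for x in w) / n)
rss = sqrt(sum((x / sqrt(n))**2 for x in w))     # same number, written as a root-sum-squared

print(rms, rss)   # both about 12.0, roughly the steps to the octave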

Graham