
Redefining complexity to get rid of those damned subgroups

🔗Mike Battaglia <battaglia01@gmail.com>

6/27/2011 1:34:52 PM

Let's say that you have two different 13-limit temperaments:

1) This temperament covers 90% of the 13-limit tonality diamond within
10 generators, but doesn't finish covering the whole thing until 300
generators out.
2) This temperament covers 5% of the 13-limit tonality diamond within
10 generators, but finishes covering the whole thing by 100 generators
out.

After consulting with Graham, it seems like temperament #2 would end
up being rated as lower in TE complexity, by virtue of the fact that
it covers the whole diamond by 100 generators out. However, the first
temperament would probably be a great candidate for a subgroup
temperament, whereas the second one is a good candidate for the trash.
My aim is to come up with a notion of complexity that would rank the
first one as being higher than the second one.

In general, if we could come up with a notion of complexity that
satisfies the following requirements, I think we'd have a good vehicle
to get away from subgroups:

1) A temperament's "simplicity" should be greatly boosted for
increases in the coverage of the lattice (or tonality diamond) that it
makes early on in the chain of generators.
2) A temperament's "simplicity" should be boosted less for increases
that come later on in the chain.
3) A temperament should receive a greater boost in "simplicity" when a
successive generator ends up covering a really simple chunk of the
lattice than when it covers a really complex one.

Holy grail #4) Ideally, a temperament search for temperaments in the
n-limit, sorted by badness, would converge on some specific list as n
tends to infinity.

I think that if we can pin down requirements 1-3, we might get #4,
which would enable us to get away not only from subgroups but from
limits as well - after all, if something covers a huge portion of the
11-limit tonality diamond early on, then it also covers a large
portion of the 13-limit tonality diamond early on.
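
To make 1-3 a little more concrete, here's a rough sketch in Python of
the kind of scoring I'm imagining (the function names, the decay
constant, and the use of log Tenney height as the "simplicity" weight
are all placeholder choices, not a worked-out proposal):

from fractions import Fraction
from math import log2

def diamond(odd_limit):
    """Octave-reduced tonality diamond for the given odd limit."""
    odds = range(1, odd_limit + 1, 2)
    ratios = set()
    for n in odds:
        for d in odds:
            r = Fraction(n, d)
            while r >= 2:
                r /= 2
            while r < 1:
                r *= 2
            ratios.add(r)
    ratios.discard(Fraction(1))
    return ratios

def monzo(r, primes):
    """Exponent vector of a ratio over the given primes, or None if it
    needs a prime outside the list."""
    n, d = r.numerator, r.denominator
    vec = []
    for p in primes:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        while d % p == 0:
            d //= p
            e -= 1
        vec.append(e)
    return vec if n == 1 and d == 1 else None

def coverage_score(mapping, primes, odd_limit, decay=0.8):
    """Bigger = simpler.  Each diamond interval contributes
    decay**(generator steps) / log2(n*d), so intervals reached early in
    the generator chain count for more (requirements 1-2) and simple
    intervals count for more than complex ones (requirement 3)."""
    score = 0.0
    for r in diamond(odd_limit):
        vec = monzo(r, primes)
        if vec is None:
            continue
        steps = abs(sum(e * g for e, (_, g) in zip(vec, mapping)))
        score += decay ** steps / log2(r.numerator * r.denominator)
    return score

# e.g. 5-limit meantone, mapping each prime to (periods, generators):
meantone = [(1, 0), (1, 1), (0, 4)]
print(coverage_score(meantone, [2, 3, 5], 5))

The reciprocal of that score then behaves like a complexity, and a
temperament like #1 above should still come out looking simple even
if a few stray intervals take hundreds of generators to reach.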

I have what I think is a decent starting algorithm towards this ideal,
but I'm currently struggling to see if I can relate it to TE scalar
complexity. However, before I reinvent the wheel, does anyone know if
something like this already exists, or have any ideas on how to
formally implement it?

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

6/27/2011 1:39:49 PM

On Mon, Jun 27, 2011 at 4:34 PM, Mike Battaglia <battaglia01@gmail.com> wrote:
> My aim is to come up with a notion of complexity that would rank the
> first one as being higher than the second one.

Er, as being lower than the second one. As being more simple. As being
better. Less complexity. Something that manages to cover 95% of the
13-limit lattice within 10 generators should be rated as being fairly
low in complexity, even if it doesn't manage to get 5/1 until a
million generators out.

-Mike

🔗Carl Lumma <carl@lumma.org>

6/27/2011 11:49:58 PM

Mike wrote:

>Let's say that you have two different 13-limit temperaments:
>1) This temperament covers 90% of the 13-limit tonality diamond within
>10 generators, but doesn't finish covering the whole thing until 300
>generators out.
>2) This temperament covers 5% of the 13-limit tonality diamond within
>10 generators, but finishes covering the whole thing by 100 generators
>out.
>After consulting with Graham, it seems like temperament #2 would end
>up being rated as lower in TE complexity, by virtue of the fact that
>it covers the whole diamond by 100 generators out. However, the first
>temperament would probably be a great candidate for a subgroup
>temperament, whereas the second one is a good candidate for the trash.
>My aim is to come up with a notion of complexity that would rank the
>first one as being higher than the second one.

Every element in the 13-limit diamond is a combination of no more
than 2 primes, so it can't take more than twice the TE complexity
to map the diamond. But I get your meaning.
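
A quick check of that, reading "primes" as the odd primes (each
diamond element is a ratio of two odd numbers up to 13, octave
reduced) - the helper here is just throwaway code:

def odd_primes(n):
    """Distinct odd prime factors of n."""
    ps, p = set(), 3
    while n % 2 == 0:
        n //= 2
    while n > 1:
        while n % p == 0:
            ps.add(p)
            n //= p
        p += 2
    return ps

# every 13-odd-limit diamond element n/d uses at most two distinct odd primes
odds = [1, 3, 5, 7, 9, 11, 13]
assert all(len(odd_primes(n) | odd_primes(d)) <= 2 for n in odds for d in odds)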

>Holy grail #4) Ideally, a temperament search for temperaments in the
>n-limit, sorted by badness, would converge on some specific list as n
>tends to infinity.

Yes, I've pondered this. One way is to use an error weighting
that's stronger than Tenney height, so that the weighted errors
of the large primes approach zero. The problem is finding a
weighting with this property, which also gives reasonable
relative weights to the small primes, like 5 vs 7. And you lose
the nice property of log Tenney height, where the weights add the
same way error does on the lattice, e.g. log(9) = log(3)+log(3).
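
Toy illustration of that last point (the "steep" weighting is just a
stand-in for anything stronger than log Tenney height):

from math import log2

te = lambda n: log2(n)          # log Tenney height: te(9) == te(3) + te(3)
steep = lambda n: log2(n) ** 2  # a "stronger" weighting

print(te(9), 2 * te(3))         # both ~3.17: the weight adds like error on the lattice
print(steep(9), 2 * steep(3))   # ~10.0 vs ~5.0: additivity is lost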

Then there's zeta tuning, which seems to be the right thing, but
I don't know how to express it as a weighting function so I
feel like I don't really understand it.

A final note is that, while getting rid of limits is attractive,
there is some expressive power in the concept. Like, the zeta
tuning is often close to the 7-limit TOP tuning, but you might
be writing a 5-limit piece and if so you'll always be able to get
better results by using a 5-limit optimization.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

6/28/2011 12:18:05 AM

On Tue, Jun 28, 2011 at 2:49 AM, Carl Lumma <carl@lumma.org> wrote:
>
>
> >Holy grail #4) Ideally, a temperament search for temperaments in the
> >n-limit, sorted by badness, would converge on some specific list as n
> >tends to infinity.
>
> Yes, I've pondered this. One way is to use an error weighting
> that's stronger than Tenney height, so that the weighted errors
> of the large primes approach zero. The problem is finding a
> weighting with this property, which also gives reasonable
> relative weights to the small primes, like 5 vs 7. And you lose
> the nice property of log Tenney height, where the weights add the
> same way error does on the lattice, e.g. log(9) = log(3)+log(3).

The spectral complexity thing I posted in the other thread seems like
it'd be a good start. I'm not on a computer with MATLAB, so I can't
test it, and after tomorrow I'm not going to have any time to test it
anyway. But it seems to be on the right track, and there are plenty of
ways to simplify the general concept further. Perhaps just multiplying
that coefficient (I guess it's measuring "efficiency" more than
complexity) by TE scalar complexity would be useful.

> Then there's zeta tuning, which seems to be the right thing, but
> I don't know how to express it as a weighting function so I
> feel like I don't really understand it.

I believe that Gene has said it weights the primes roughly equal to
sqrt(n), or something like that, but also that it weights full
integers and not just primes. So that's in line with Tenney height.
Perhaps adapting it to something Graham's covered in composite.pdf
would do the trick.

> A final note is that, while getting rid of limits is attractive,
> there is some expressive power in the concept. Like, the zeta
> tuning is often close to the 7-limit TOP tuning, but you might
> be writing a 5-limit piece and if so you'll always be able to get
> better results by using a 5-limit optimization.

You can work with altered versions of the zeta function that exclude
certain primes (the zeta function with holes in it), for example, and
you can also build up faux-zeta functions from only a limited subset
of primes. I had an interesting discussion with Keenan on tuning-math
about this a little while ago, here:

/tuning-math/message/19122
/tuning-math/message/19140

Check out Keenan's plots; they're pretty revealing. Actually, using
that logic, you could probably work with something like zeta^2 or
log(zeta^2) to alter the weighting of the primes involved in some kind
of desirable way.
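
As a rough sketch of what a faux-zeta looks like: a finite prime
subset just gives a truncated Euler product, something like the
following, where sigma and the prime subset are arbitrary knobs and
the t = 2*pi*x/ln(2) substitution is the usual zeta-EDO correspondence
as I understand it:

from math import log, pi

def faux_zeta(x, primes=(2, 3, 5, 7, 11, 13), sigma=2.0):
    """|truncated Euler product| at s = sigma + i*t, with t chosen so
    that x reads as a number of equal divisions of the octave.  Drop
    primes from the tuple to punch "holes" in it."""
    s = sigma + 2j * pi * x / log(2)
    prod = 1.0
    for p in primes:
        prod *= 1 / (1 - p ** (-s))
    return abs(prod)

# peaks over x should sit near EDOs that handle the chosen primes well
for x in (12, 19, 22, 31, 41, 53):
    print(x, round(faux_zeta(x), 3))

Taking the log turns the product into a sum over the chosen primes,
which makes the per-prime weighting explicit.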

-Mike