
A weird structural issue with infinite-limit JI

🔗Mike Battaglia <battaglia01@gmail.com>

10/21/2012 9:36:18 PM

So let's say we want to deal with infinite-limit JI, which is just the
multiplicative group of Q+ as a whole. By unique prime factorization,
Q+ is isomorphic to the direct sum of countably many copies of Z.
Since it's the direct sum and not the direct product, only finitely
many entries of each monzo are nonzero, and Q+ is still a free abelian
group. |1 0 0 ...>, |0 1 0 ...>, |0 0 1 ...>, etc. serve as a (Hamel)
basis for this group.
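
To make the isomorphism concrete, here's a minimal Python sketch; the
function name "monzo" and the dict-of-exponents representation are just
for illustration:

    from fractions import Fraction

    def monzo(q):
        # Factor a positive rational into its exponent vector ("monzo").
        # Only finitely many exponents are ever nonzero, which is exactly
        # why Q+ is a direct *sum* of Z's rather than a direct product.
        q = Fraction(q)
        exps = {}
        for n, sign in ((q.numerator, 1), (q.denominator, -1)):
            p = 2
            while n > 1:
                while n % p == 0:
                    exps[p] = exps.get(p, 0) + sign
                    n //= p
                p += 1
        return {p: e for p, e in exps.items() if e}

    # 81/80, the syntonic comma, is |-4 4 -1>:
    assert monzo(Fraction(81, 80)) == {2: -4, 3: 4, 5: -1}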

However, vals are a different story. The group Hom(Q+, Z) admits vals
with infinitely many nonzero coordinates, so now we're looking at the
direct -product- of countably many copies of Z. This is NOT a free
abelian group but the "Baer-Specker group," which has no basis at all.
If one allows Schauder-style bases that admit infinite summations,
though, then <1 0 0 ...|, <0 1 0 ...|, <0 0 1 ...|, etc. serve as a
Schauder basis for this group.
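
The asymmetry is harmless for evaluation, though: applying a val with
infinitely many nonzero coordinates to a monzo is always a finite sum,
because the monzo has finite support. A sketch, representing a val as a
rule on primes rather than a stored vector, with the patent-val rule
round(n*log2(p)) as the example:

    import math

    def patent_val(n):
        # The n-EDO patent val as a *function* on primes: it has
        # infinitely many nonzero coordinates, but we never need to
        # store them all.
        return lambda p: round(n * math.log2(p))

    def bracket(val, m):
        # <val|monzo>: a finite sum, since the monzo m has finite support.
        return sum(e * val(p) for p, e in m.items())

    v12 = patent_val(12)
    # 12-EDO tempers out the syntonic comma |-4 4 -1>:
    assert bracket(v12, {2: -4, 3: 4, 5: -1}) == 0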

Much of the work done in the past few years has involved embedding
these groups into vector spaces, so that we can define norms and tuning
maps, have the Hahn-Banach theorem to work with, etc. If we embed the
group of infinite-limit JI monzos into a vector space, we find that
|1 0 0 ...>, |0 1 0 ...>, |0 0 1 ...>, etc. form a (Hamel) basis for
this space. The dual space is then R^N, the direct -product- of
countably many copies of R, and <1 0 0 ...|, <0 1 0 ...|, <0 0 1 ...|,
etc. form a Schauder basis for it. These vectors don't serve as a Hamel
basis for the dual space, though: any finite linear combination of them
has only finitely many nonzero coordinates, whereas the JIP
<1 log2(3) log2(5) ...| is nonzero at every single prime.

However, it is a theorem of ZFC that a Hamel basis exists for R^N,
though it must be uncountable. Can a construction be given of such a
basis?

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

10/22/2012 7:44:09 AM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> However, it is a theorem of ZFC that a Hamel basis exists for R^N,
> though it must be uncountable. Can a construction be given of such a
> basis?

Can I use Zorn's Lemma?

🔗Mike Battaglia <battaglia01@gmail.com>

10/22/2012 7:56:58 AM

On Mon, Oct 22, 2012 at 10:44 AM, genewardsmith
<genewardsmith@sbcglobal.net> wrote:
>
> --- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...>
> wrote:
>
> > However, it is a theorem of ZFC that a Hamel basis exists for R^N,
> > though it must be uncountable. Can a construction be given of such a
> > basis?
>
> Can I use Zorn's Lemma?

I'm hoping to find a way to express your Hamel basis in terms of the
Schauder basis <1 0 0 0 ...|, <0 1 0 0 ...|, <0 0 1 0 ...|, <0 0 0 1
...|, etc. Put another way, what I really want to know is what the
actual vals are in this Hamel basis, i.e. how they map 2/1, 3/1, etc.

So, if by using Zorn's lemma you just mean you're going to lay out the
standard proof that every vector space has a basis without expressing
it in terms of the standard Schauder basis, then no :)

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

10/22/2012 9:11:21 AM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> So, if by using Zorn's lemma you just mean you're going to lay out the
> standard proof that every vector space has a basis without expressing
> it in terms of the standard Schauder basis, then no :)

You could use Zorn's Lemma starting from the Schauder basis. This whole thing strikes me as a pretty dubious enterprise, though.

🔗Mike Battaglia <battaglia01@gmail.com>

11/11/2012 12:19:43 PM

On Mon, Oct 22, 2012 at 12:11 PM, genewardsmith
<genewardsmith@sbcglobal.net> wrote:
>
> --- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...>
> wrote:
>
> > So, if by using Zorn's lemma you just mean you're going to lay out the
> > standard proof that every vector space has a basis without expressing
> > it in terms of the standard Schauder basis, then no :)
>
> You could use Zorn's Lemma starting from the Schauder basis. This whole
> thing strikes me as a pretty dubious enterprise, though.

Sorry, I missed this reply. Maybe this particular approach is dubious,
but my goal with this was to eventually figure out something like
tuning optimizations in infinite-limit JI, and this involves putting a
norm on vals and tuning maps. But vals can have infinitely many
nonzero coordinates, so how does something like the L2 norm even work?
I figured I needed to work out some sort of basis first, but maybe
not.

I guess we need to restrict our attention to the continuous dual
space. Keenan's earlier work, showing that infinite-limit TOP error
is bounded for GPV's, suggests that most (all?) GPV's are in the
continuous dual space.
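
As a rough sanity check on that boundedness claim, here's a sketch: for
the n-EDO patent val with pure octaves, the max Tenney-weighted error
over primes up to increasing limits levels off almost immediately (the
helper function is just for illustration):

    import math

    def primes(limit):
        # all primes below `limit` (slow, but fine for a sketch)
        return [k for k in range(2, limit)
                if all(k % d for d in range(2, int(k ** 0.5) + 1))]

    n = 12
    for limit in (10, 100, 1000, 10000):
        # weighted error of prime p: |round(n log2 p)/n - log2 p| / log2 p
        worst = max(abs(round(n * math.log2(p)) / n - math.log2(p))
                    / math.log2(p) for p in primes(limit))
        print(limit, worst)
    # Each prime rounds to within half a step, so its weighted error is
    # at most 1/(2n log2 p) <= 1/(2n): bounded, with the bound approached
    # (if at all) only at small primes.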

-Mike

🔗Carl Lumma <carl@lumma.org>

11/11/2012 2:26:04 PM

Mike wrote:
>my goal with this was to eventually figure out something like
>tuning optimizations in infinite-limit JI, and this involves putting a
>norm on vals and tuning maps. But vals can have infinitely many
>nonzero coordinates, so how does something like the L2 norm even work?
>I figured I needed to work out some sort of basis first, but maybe
>not.
>I guess we need to restrict our attention to the continuous dual
>space. Keenan's earlier work, showing that infinite-limit TOP error
>is bounded for GPV's, suggests that most (all?) GPV's are in the
>continuous dual space.

What do you mean by 'being in the continuous dual space'?

Keenan's TOP-FP vals show that the weighted TOP error of vals
rapidly stops changing. What about the weighted complexity?
I think the weighted error works because you get to pick the best
approximation to each new prime whilst the weights are going down.
But the complexity terms aren't bounded to anything like 'half a
step', are they? With infinitely many primes to map, it seems like
the complexity must get infinitely large.

For any average complexity like L2 though, the prime number
theorem does mean we'll have to consider absolutely huge numbers
to change it. With an infinite val like <1 2 3 4...|, some large
prime p will have mapping approximately p/log(p) and weight
1/log(p), and will contribute only log(p)/p to the average.
That means its influence is 1/log(p). Since we're interested in
only the simplest temperaments, vals not much worse than <1 2 3...|
seem reasonable. Hm.
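
Here's a quick numerical look at those growth rates (a sketch; the
constants depend on which log base you pick):

    import math

    def primes(limit):
        # all primes below `limit` (slow, but fine for a sketch)
        return [k for k in range(2, limit)
                if all(k % d for d in range(2, int(k ** 0.5) + 1))]

    ps = primes(110000)
    for k in (10, 100, 1000, 10000):
        p = ps[k - 1]
        # The k-th coordinate of <1 2 3 4 ...| is k ~ p/ln(p) by the PNT,
        # so with weight 1/log2(p) the weighted coordinate still grows,
        # roughly like p/log(p)^2.
        print(p, k, p / math.log(p), k / math.log2(p))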

-Carl

🔗Carl Lumma <carl@lumma.org>

11/11/2012 2:41:21 PM

I wrote:
>For any average complexity like L2 though, the prime number
>theorem does mean we'll have to consider absolutely huge numbers
>to change it. With an infinite val like <1 2 3 4...|, some large
>prime p will have mapping approximately p/log(p) and weight
>1/log(p), and will contribute only log(p)/p to the average.
>That means its influence is 1/log(p).

Or more closely, p/log^3(p) for L2. -Carl

🔗Mike Battaglia <battaglia01@gmail.com>

11/12/2012 12:47:21 AM

On Sun, Nov 11, 2012 at 5:26 PM, Carl Lumma <carl@lumma.org> wrote:
>
> What do you mean by 'being in the continuous dual space'?

Well, first off, I screwed up in what I said - it's not that the val
is in the continuous dual space, it's that the optimum error map for
every GPV seems to be in the continuous dual space.

But basically, for any finite-dimensional vector space, there's only
one dual space - you know, the dual space. This space is the set of
linear functionals mapping vectors to scalars (aka tuning maps/vals),
and they all have this lovely property of being "bounded" linear
functionals, which is basically what powers all of tuning optimization
now.

When you take the difference between some tuning map and the JIP to get
an error map e, there is a bound on |<e|m>|/||m|| over all monzos m.
Note that |<e|m>| is the error of that monzo and ||m|| is its
complexity, so what we're looking at is weighted error. So the max
weighted error of every error map is bounded; more generally, the max
"weighted mapping" of every tuning map is bounded.

For infinite-dimensional vector spaces, things get messier. There are
now two dual spaces: the algebraic dual space and the "continuous" dual
space. In our case, the vector space is the set of all monzos with
only finitely many nonzero coordinates: e.g. |1 2 3 4 0 0 0 ...> is
fine, but |1 1 1 1 1 1 1 1 ...> is not. If you work out the usual dual
space by looking at the set of linear functionals that map these
monzos to scalars, you get what's called the "algebraic dual space"; it
consists of all vals, which can have infinitely many nonzero
coordinates: so <12 19 28 ... round(12*log2(p)) ...| is fine for all p.

But not all of these vals have that nice bounded property. The ones
which do form the "continuous dual space," a subspace which you can
read about here:

http://en.wikipedia.org/wiki/Dual_space#Continuous_dual_space

This continuous dual space is a subspace of the algebraic dual space,
and it's the only space for which the concept of a dual norm makes
sense. The choice of norm you put on monzos affects which tuning maps
get identified as being in this privileged continuous dual space.
For instance, it looks like T1-weighted error is bounded for all GPV's
- but is T2-weighted error bounded for these vals? (I think it was
shown previously that it's not; TE error doesn't converge as the limit
goes up.)
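
Here's a rough numerical sketch of that contrast for the 12-EDO patent
val: the max weighted error stays flat as the prime limit grows, while
the root-sum-squared weighted error keeps creeping upward. (The
divergence is very slow, so take the trend, not the endpoint.)

    import math

    def primes(limit):
        # all primes below `limit` (slow, but fine for a sketch)
        return [k for k in range(2, limit)
                if all(k % d for d in range(2, int(k ** 0.5) + 1))]

    n = 12
    for limit in (10, 100, 1000, 10000, 100000):
        errs = [abs(round(n * math.log2(p)) / n - math.log2(p))
                / math.log2(p) for p in primes(limit)]
        # max (T1-style) stays flat; root-sum-squared (T2-style) grows.
        print(limit, max(errs), math.sqrt(sum(e * e for e in errs)))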

So basically, the continuous dual space is more well-behaved for our
purposes where the infinite-limit is involved. And that's all I've
pieced together so far, so I don't want to say anything else yet and
lead anyone in the wrong direction.

> Keenan's TOP-FP vals show that the weighted TOP error of vals
> rapidly stops changing. What about the weighted complexity?

That's a good question, maybe Keenan can answer.

> I think the weighted error works because you get to pick the best
> approximation to each new prime whilst the weights are going down.
> But the complexity terms aren't bounded to anything like 'half a
> step', are they? With infinitely many primes to map, it seems like
> the complexity must get infinitely large.

If we're talking about "T1 complexity," meaning just the weighted Linf
norm of the val, I think it should be bounded for GPV's, since the
weighted mapping of a prime can never be more than
round(n*log2(p))/log2(p), where n is the raw, unrounded number of
steps per octave that you're putting into the GPV calculation.

To see that this value is bounded across all p, note that
round(n*log2(p)) < n*log2(p) + 1, so round(n*log2(p))/log2(p) <
n + 1/log2(p). The right-hand side is maximized at p=2, which gives
round(n*log2(p))/log2(p) < n + 1.

So in other words, once you put the n-EDO GPV into weighted
coordinates, the weighted values must be less than n+1. So yeah, for
weighted T1 complexity, things are good. For T2 complexity or anything
that involves taking an infinite sum of some kind, this is definitely
going to snap.
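
A brute-force check of that n+1 bound over the primes below 100000,
just as a sanity test:

    import math

    def primes(limit):
        # all primes below `limit` (slow, but fine for a sketch)
        return [k for k in range(2, limit)
                if all(k % d for d in range(2, int(k ** 0.5) + 1))]

    n = 12
    worst = max(round(n * math.log2(p)) / math.log2(p)
                for p in primes(100000))
    # Individual weighted entries approach n as p grows, so the max is
    # attained at some small prime; it can never reach n + 1.
    assert worst < n + 1
    print(worst)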

> For any average complexity like L2 though, the prime number
> theorem does mean we'll have to consider absolutely huge numbers
> to change it. With an infinite val like <1 2 3 4...|, some large
> prime p will have mapping approximately p/log(p) and weight
> 1/log(p), and will contribute only log(p)/p to the average.
> That means its influence is 1/log(p). Since we're interested in
> only the simplest temperaments, vals not much worse than <1 2 3...|
> seem reasonable. Hm.

But even if you divide by log(p), L2 breaks here. L2 isn't RMS, it's
root-sum-squared, so you basically just have the sum-squared of an
infinite number of positive values inside of a square root. You could
perhaps try to make it RMS, but that requires you to divide by "the
square root of infinity." Maybe there's a better way to formalize the
means of infinite data sets that I don't know of. (Or maybe
nonstandard analysis might be useful here.)
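
To see the breakage concretely: the weighted coordinates of an n-EDO
patent val are all roughly n, so their root-sum-squared over the first
K primes grows like n*sqrt(K). A sketch:

    import math

    def primes(limit):
        # all primes below `limit` (slow, but fine for a sketch)
        return [k for k in range(2, limit)
                if all(k % d for d in range(2, int(k ** 0.5) + 1))]

    n = 12
    ps = primes(10000)
    for K in (10, 100, 1000):
        w = [round(n * math.log2(p)) / math.log2(p) for p in ps[:K]]
        # each weighted entry is ~n, so the L2 norm grows like n*sqrt(K)
        print(K, math.sqrt(sum(x * x for x in w)), n * math.sqrt(K))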

So far, it looks like L1 is the name of the game for infinite-dimensional JI.

-Mike

🔗Graham Breed <gbreed@gmail.com>

11/12/2012 12:40:52 PM

Mike Battaglia <battaglia01@gmail.com> wrote:

> But even if you divide by log(p), L2 breaks here. L2
> isn't RMS, it's root-sum-squared, so you basically just
> have the sum-squared of an infinite number of positive
> values inside of a square root. You could perhaps try to
> make it RMS, but that requires you to divide by "the
> square root of infinity." Maybe there's a better way to
> formalize the means of infinite data sets that I don't
> know of. (Or maybe nonstandard analysis might be useful
> here.)

An RMS may converge in the limit of the prime limit
approaching infinity. It depends on the weighting. You
may be able to predict how much you'd expect it to increase
for random errors (within half a scale step for equal
temperaments) and normalize it to be relative to that.
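
Something like this sketch, maybe, for an n-EDO patent val: compare the
observed sum of squared weighted errors against what you'd expect if
each prime's rounding error were uniform within half a step (so
E[delta^2] = 1/12 in step units). This is one guess at the
construction, not a definitive formula:

    import math

    def primes(limit):
        # all primes below `limit` (slow, but fine for a sketch)
        return [k for k in range(2, limit)
                if all(k % d for d in range(2, int(k ** 0.5) + 1))]

    n = 12
    for limit in (100, 1000, 10000):
        ps = primes(limit)
        obs = sum(((round(n * math.log2(p)) - n * math.log2(p))
                   / (n * math.log2(p))) ** 2 for p in ps)
        # expected value of the same sum for uniform rounding errors:
        exp = sum((1 / 12) / (n * math.log2(p)) ** 2 for p in ps)
        print(limit, obs / exp)  # hovers near 1 for a "typical" val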

Graham

🔗Carl Lumma <carl@lumma.org>

11/12/2012 1:00:27 PM

Mike wrote:
>> I think the weighted error works because you get to pick the best
>> approximation to each new prime whilst the weights are going down.
>> But the complexity terms aren't bounded to anything like 'half a
>> step', are they? With infinitely many primes to map, it seems like
>> the complexity must get infinitely large.
>
>If we're talking about "T1 complexity," meaning just the weighted Linf
>norm of the val, I think it should be bounded for GPV's, since the
>weighted mapping of a prime can never be more than
>round(n*log2(p))/log2(p), where n is the raw, unrounded number of
>steps per octave that you're putting into the GPV calculation.

Derr, quite right. Each element is approximately n. Forget
everything I said. -Carl