Will this do?

🔗genewardsmith <genewardsmith@sbcglobal.net>

7/21/2012 11:17:40 AM

http://xenharmonic.wikispaces.com/Lp+tuning#Applying the Hahn-Banach theorem

Comments?

🔗Mike Battaglia <battaglia01@gmail.com>

7/21/2012 1:23:01 PM

On Sat, Jul 21, 2012 at 2:17 PM, genewardsmith <genewardsmith@sbcglobal.net>
wrote:
>
> http://xenharmonic.wikispaces.com/Lp+tuning#Applying the Hahn-Banach
> theorem
>
> Comments?

This is close, but there's one thing I think might make it more
understandable for someone who didn't follow the discussion here:

"By the Hahn–Banach theorem, Ɛ can be extended to an element Ƹ of the
full p-limit tuning space with the same norm; that is, so that ||Ɛ|| =
||Ƹ||. This norm must be minimal for the whole tuning space, or the
restriction of Ƹ to G would improve on Ɛ. Hence, Ƹ must be the tuning
for the full p-limit for the same group of null elements c generated
by the commas S. Thus to find the Lp tuning for the group G, we may
first find the tuning for the corresponding higher-rank temperament
for the full p-limit group, and then apply it to the normal interval
list giving the standard form of generators for G."

The thing is that while Ƹ indeed is the vector in the full-limit error
map, there's an infinite number of ways to turn this error map back
into a tuning map, all of which restrict correctly to Ɛ+sJIP.
Specifically, the entire coset Ƹ+preimage(sJIP) will restrict
correctly back to Ɛ+sJIP. As far as I know, Hahn-Banach doesn't single
out the choice of Ƹ+JIP as being special, but it seems like the
paragraph above is saying it does. But it doesn't need to single out
Ƹ+JIP as being special: the JIP is at least one point in
preimage(sJIP), so Ƹ+JIP is at least one tuning which will restrict
correctly back to the subgroup TOP tuning we want. Therefore the
algorithm works.

For instance, if you have a temperament in the 2.5/3 subgroup and do
the above algorithm, it's not just the corresponding 2.3.5 TOP tuning
map T which will restrict back to the 2.5/3 case correctly, but the
whole coset of T + k*<0 1 1| as well. But T is in that coset, so it
all works out.
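
Here's a minimal numpy check of that coset point, using the plain 5-limit JIP as a stand-in for the tuning map T (any 2.3.5 map would do): <0 1 1| annihilates both 2/1 and 5/3, so adding any multiple of it leaves the restriction to 2.5/3 untouched.

```python
import numpy as np

# Basis monzos for the 2.5/3 subgroup, written in the 2.3.5 prime basis.
basis = np.array([[1, 0, 0],     # 2/1
                  [0, -1, 1]])   # 5/3

# Any 2.3.5 tuning map works for the check; use the 5-limit JIP in cents.
T = 1200.0 * np.log2([2.0, 3.0, 5.0])

# <0 1 1| kills both 2/1 and 5/3, so the whole coset T + k*<0 1 1|
# restricts to the same 2.5/3 tuning.
annihilator = np.array([0.0, 1.0, 1.0])
for k in (0.0, -7.3, 100.0):
    assert np.allclose(basis @ T, basis @ (T + k * annihilator))

print(basis @ T)   # cents values of 2/1 and 5/3 under T
```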

I still wish we could at least firm up the terminology a bit as I
wrote in my last reply to you, especially because I'm still doing work
on this and the terminology becomes more of a mess the more I do.
There's a very nice, orderly structure here, but our terminology
doesn't fit it.

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

7/21/2012 3:09:53 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> The thing is that while Ƹ indeed is the vector in the full-limit error
> map, there's an infinite number of ways to turn this error map back
> into a tuning map, all of which restrict correctly to Ɛ+sJIP.
> Specifically, the entire coset Ƹ+preimage(sJIP) will restrict
> correctly back to Ɛ+sJIP. As far as I know, Hahn-Banach doesn't single
> out the choice of Ƹ+JIP as being special, but it seems like the
> paragraph above is saying it does. But it doesn't need to single out
> Ƹ+JIP as being special: the JIP is at least one point in
> preimage(sJIP), so Ƹ+JIP is at least one tuning which will restrict
> correctly back to the subgroup TOP tuning we want. Therefore the
> algorithm works.

I did point out it singles out a tuning since by definition we want minimum error. How do I make that clearer?

> I still wish we could at least firm up the terminology a bit as I
> wrote in my last reply to you, especially because I'm still doing work
> on this and the terminology becomes more of a mess the more I do.
> There's a very nice, orderly structure here, but our terminology
> doesn't fit it.

You could start by not trying to call everything TOP.

🔗Mike Battaglia <battaglia01@gmail.com>

7/21/2012 3:49:29 PM

On Sat, Jul 21, 2012 at 6:09 PM, genewardsmith <genewardsmith@sbcglobal.net>
wrote:
>
> I did point out it singles out a tuning since by definition we want
> minimum error. How do I make that clearer?

What you said by definition is that we get an error map E with minimum
norm. But for any such minimal error map E which restricts to a
minimal subgroup error e, the whole set of E+preimage(sJIP) restricts
to e+sJIP. e+sJIP is what we're attempting to calculate.

Thus E+JIP isn't the only vector which will fit the bill. Since the
goal is to get to e+sJIP, it doesn't matter if we restrict down to
that from E+JIP or from anything else in E+JIP+ker(V-map). IOW, there
are infinitely many tuning maps which are horrible in the full-limit,
but which restrict properly down to the TOP tuning for the temperament
on the subgroup.

Of course, since our goal is to get to e+sJIP, and we can pick anything
in E+preimage(sJIP), the natural and obvious way to go is to use E+JIP
for this purpose.

> > I still wish we could at least firm up the terminology a bit as I
> > wrote in my last reply to you, especially because I'm still doing work
> > on this and the terminology becomes more of a mess the more I do.
> > There's a very nice, orderly structure here, but our terminology
> > doesn't fit it.
>
> You could start by not trying to call everything TOP.

What are you saying I'm calling TOP that you don't think should be
called TOP? I only used the term "Lp-TOP" or "TOP-Lp" or whatever to
refer to the optimal Tenney-weighted tuning for some choice of Lp
norm, and I did that to get away from calling everything under the
sun just "Lp".

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

7/21/2012 5:56:52 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> On Sat, Jul 21, 2012 at 6:09 PM, genewardsmith <genewardsmith@...>
> wrote:
> >
> > I did point out it singles out a tuning since by definition we want
> > minimum error. How do I make that clearer?
>
> What you said by definition is that we get an error map E with minimum
> norm.

> But for any such minimal error map E which restricts to a
> minimal subgroup error e, the whole set of E+preimage(sJIP) restricts
> to e+sJIP. e+sJIP is what we're attempting to calculate.

Yes, and restricting E to e does it. What's your point?

>
> Thus E+JIP isn't the only vector which will fit the bill.

So what?

> What are you saying I'm calling TOP that you don't think should be
> called TOP?

Everything not using L1 on monzos, Linf on vals.

🔗Mike Battaglia <battaglia01@gmail.com>

7/21/2012 6:39:20 PM

On Sat, Jul 21, 2012 at 8:56 PM, genewardsmith <genewardsmith@sbcglobal.net>
wrote:
>
> But for any such minimal error map E which restricts to a
> > minimal subgroup error e, the whole set of E+preimage(sJIP) restricts
> > to e+sJIP. e+sJIP is what we're attempting to calculate.
>
> Yes, and restricting E to e does it. What's your point?

I'm just saying where I think it'll be unclear. After the
Hahn-Banach acrobatics, I don't think it'll be obvious to everyone
that this magic E you just derived is the same vector as the
full-limit Lp-TOP tuning minus the full-limit JIP. I'll just add a few
sentences and you can see how you like it.

> > What are you saying I'm calling TOP that you don't think should be
> > called TOP?
>
> Everything not using L1 on monzos, Linf on vals.

I haven't been calling it TOP, I've been calling it Lp-TOP. But even
if you don't like abbreviating "Lp-Tenney-weighted optimal tuning" to
Lp-TOP tuning, it's better to call the norm Lp, and the
Tenney-weighted optimal tuning [something else], than to call both of
them Lp.

The chances someone will confuse L2-TOP with (L1)-TOP are much less
than the chances someone will confuse L2 (Tenney-weighted optimal
tuning) with L2 (unweighted norm).

This is also a name that Paul approved of (although he thinks L1-TOP
is the only musically relevant one and etc).

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

7/21/2012 6:59:05 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> I'm just saying where I think it'll be unclear. I think that after the
> Hahn-Banach acrobatics, I don't think it'll be obvious to everyone
> that this magic E you just derived is the same vector as the
> full-limit Lp-TOP tuning minus the full-limit JIP. I'll just add a few
> sentences and you can see how you like it.

OK. It wasn't my idea in the first place, though between you, Keenan and Claudi I don't know who thought it up.

> But even
> if you don't like abbreviating "Lp-Tenney-weighted optimal tuning" to
> Lp-TOP tuning, It's better to call the norm Lp, and the
> Tenney-weighted optimal tuning [something else], than to call both of
> them Lp.

I'm calling one Lp, and the other Lp tuning. Not the same.

🔗Mike Battaglia <battaglia01@gmail.com>

7/21/2012 7:15:28 PM

On Sat, Jul 21, 2012 at 9:59 PM, genewardsmith <genewardsmith@sbcglobal.net>
wrote:
>
> --- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...>
> wrote:
>
> > I'm just saying where I think it'll be unclear. I think that after the
> > Hahn-Banach acrobatics, I don't think it'll be obvious to everyone
> > that this magic E you just derived is the same vector as the
> > full-limit Lp-TOP tuning minus the full-limit JIP. I'll just add a few
> > sentences and you can see how you like it.
>
> OK. It wasn't my idea in the first place, though between you, Keenan and
> Claudi I don't know who thought it up.

I don't know who else was thinking of this independently, but from my
perspective all of this came from my thinking on V-maps, which was my
attempt to answer the question "what happens if you temper out a
val?". I wasn't sure how to prove a conjecture I had and then Claudi
filled in the missing piece showing that the quotient norm of (val
space)(ker(V-map)) is the same as the dual norm we were looking for.

I then realized that you can transform a subgroup Lp-TOP or Lp or
whatever problem into a full-limit version by figuring out if, for all
cosets, the collection of vals with minimum norm would form a
subspace, which would let you turn the subgroup temperament into a
same-rank higher-limit temperament where every higher-limit val would
have the same norm as the corresponding subgroup sval. Keenan then
figured out we can bypass that conjecture by instead looking at the
higher-rank higher-limit temperament that we're talking about now. And
that's the story.

> But even
> > if you don't like abbreviating "Lp-Tenney-weighted optimal tuning" to
> > Lp-TOP tuning, It's better to call the norm Lp, and the
> > Tenney-weighted optimal tuning [something else], than to call both of
> > them Lp.
>
> I'm calling one Lp, and the other Lp tuning. Not the same.

And I'm calling one TOP, and the other Lp-TOP. Also not the same.

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

7/21/2012 11:42:10 PM

On Sat, Jul 21, 2012 at 10:15 PM, Mike Battaglia <battaglia01@gmail.com> wrote:
> On Sat, Jul 21, 2012 at 9:59 PM, genewardsmith <genewardsmith@sbcglobal.net>
> wrote:
>>
>> I'm calling one Lp, and the other Lp tuning. Not the same.
>
> And I'm calling one TOP, and the other Lp-TOP. Also not the same.

Anyways, I don't get why this is such a contentious issue. I've been
using the name "Lp-TOP", and have posted using it on this list before,
so it's not like I'm just inventing a new name for this on the spot.
But, if you really think it's terrible and confusing, I'll change it.
All I want is clear and unambiguous names for these things:

1) The unweighted Lp norm on intervals
2) The weighted Lp norm on intervals
3) The unweighted optimal Lp tuning on intervals
4) The weighted optimal Lp tuning on intervals

I proposed that #1 should be Lp, #2 should be Tenney-Lp, #3 should be
Lp-optimal or maybe Lp-Op or something for short, and #4 should be
Lp-TOP. If you instead want #2 to be "Lp" and #4 to be "Lp tuning,"
what do I call #1 and #3?

-Mike

🔗Carl Lumma <carl@lumma.org>

7/22/2012 11:17:48 AM

Mike wrote:

>>> I'm calling one Lp, and the other Lp tuning. Not the same.
>>
>> And I'm calling one TOP, and the other Lp-TOP. Also not the same.
>
>Anyways, I don't get why this is such a contentious issue. I've been
>using the name "Lp-TOP", and have posted using it on this list before,
>so it's not like I'm just inventing a new name for this on the spot.
>But, if you really think it's terrible and confusing, I'll change it.

TOP and Lp-TOP aren't the same? Lp and TOP are the same,
when p = 1.

>All I want is clear and unambiguous names for these things:
>1) The unweighted Lp norm on intervals
>2) The weighted Lp norm on intervals
>3) The unweighted optimal Lp tuning on intervals
>4) The weighted optimal Lp tuning on intervals

How do you get a norm on intervals without weighting?

-Carl

🔗Graham Breed <gbreed@gmail.com>

7/22/2012 11:25:49 AM

Mike Battaglia <battaglia01@gmail.com> wrote:

> Anyways, I don't get why this is such a contentious
> issue. I've been using the name "Lp-TOP", and have posted
> using it on this list before, so it's not like I'm just
> inventing a new name for this on the spot. But, if you
> really think it's terrible and confusing, I'll change it.
> All I want is clear and unambiguous names for these
> things:

If Paul supports the expansion of TOP for something like
Tenney-weighted optimal prime-based measures, then yes,
let's keep with that. But only if they're Tenney weighted,
optimal, impure of the octave, and preferably prime-based.
I don't see why Lp should imply Tenney weighting.

Graham

🔗Mike Battaglia <battaglia01@gmail.com>

7/22/2012 12:07:19 PM

On Sun, Jul 22, 2012 at 2:17 PM, Carl Lumma <carl@lumma.org> wrote:
>
> TOP and Lp-TOP aren't the same? Lp and TOP are the same,
> when p = 1.

Yeah, that's what I'm proposing. The special name for Lp-TOP in the
case of p=1 is TOP, and in the case of p=2 it's TE. And I think
there's some interesting stuff for when it's p=inf, which leads to you
using the L1 norm on vals.

> How do you get a norm on intervals without weighting?

You just apply the norm to the raw monzo without weighting anything.
So 2/1, 3/1, and 5/1 all have a norm of 1. Chalmers and Wilson used
this in an article in XH1 (which I'll be uploading soon).

This norm itself isn't as useful for directly measuring the complexity
of intervals, but it leads indirectly to useful outcomes, like
unweighted Lp tunings. For instance, the tuning you get if you use the
L2 unweighted norm is the Frobenius tuning. One use I've been
considering for these: Igs was talking about computing unweighted
interval error for a set of target intervals, and I thought you might
be able to use Graham's composite.pdf conjecture to prove some useful
theorems about unweighted subgroup error over different bounded
subsets of the lattice.
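
Here's a rough numpy sketch of both points, assuming the Frobenius tuning is constructed as the unweighted least-squares tuning map T = J V+ V (plain pseudoinverse, no weighting), with 5-limit meantone as the example:

```python
import numpy as np

primes = np.array([2.0, 3.0, 5.0])
J = 1200.0 * np.log2(primes)            # JIP in cents

# Unweighted norm on raw monzos: 2/1, 3/1 and 5/1 all have norm 1.
for monzo in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):
    assert np.linalg.norm(monzo) == 1.0

# Frobenius tuning (unweighted L2-optimal) for 5-limit meantone,
# mapping [<1 0 -4|, <0 1 4|], which tempers out 81/80.
V = np.array([[1, 0, -4],
              [0, 1,  4]], dtype=float)
T = J @ np.linalg.pinv(V) @ V           # tuning map in cents
print(T)
```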

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

7/22/2012 12:15:58 PM

On Sun, Jul 22, 2012 at 2:17 PM, Carl Lumma <carl@lumma.org> wrote:
>
> TOP and Lp-TOP aren't the same? Lp and TOP are the same,
> when p = 1.

Yeah, that's what I'm proposing. The special name for Lp-TOP in the
case of p=1 is TOP, and in the case of p=2 it's TE. And I think
there's some interesting stuff for when it's p=inf, which leads to you
using the L1 norm on vals.

> How do you get a norm on intervals without weighting?

You just apply the norm to the raw monzo without weighting anything.
So 2/1, 3/1, and 5/1 all have a norm of 1. Chalmers and Wilson used
this in an article in XH1 (which I'll be uploading soon). Though it's
not this norm on intervals which is the important thing, but the dual
norm on vals, which leads to things like the Frobenius tuning for L2.

I've been wanting to work with these because of a conversation that I
had with Igs a while ago about computing unweighted interval error for
a set of target intervals. I thought you might be able to use Graham's
composite.pdf conjecture, along with one of these unweighted optimal
tunings, to compute something like that.

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

7/22/2012 12:30:30 PM

On Sun, Jul 22, 2012 at 2:25 PM, Graham Breed <gbreed@gmail.com> wrote:
>
> If Paul supports the expansion of TOP for something like
> Tenney-weighted optimal prime-based measures, then yes,
> let's keep with that.

He approved the name offlist, but now I can't find the email. I'll ask
him again to make sure he's OK with it. If he's not, then I'll give up
on it.

> But only if they're Tenney weighted,
> optimal, impure of the octave, and preferably prime-based.
> I don't see why Lp should imply Tenney weighting.

Using the theorems we just proved, you can turn any subgroup
Lp/Lp-TOP/whatever problem into a full-limit one. There's a unique map
from the kernel of your temperament to a corresponding full-limit
kernel for a higher-rank temperament, and then the TOP tuning for this
restricts properly down to the TOP tuning on the subgroup.

So for instance, say you want to calculate the Lp-TOP/Lp/argh tuning
for 2.3.7/5 50/49. You first calculate the Lp-TOP/Lp/dsfkljhasdfl
tuning for 2.3.5.7 50/49, and then you just restrict it onto the
subgroup. This isn't just some nice random idea, but stems from the
definition of the dual norm to the induced Lp norm on a subgroup and
the stuff Claudi posted about the Hahn-Banach theorem. It always has
to work out that way.
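
Here's a minimal numpy sketch of that procedure for the p = 2 case (where there's a closed form), assuming one valid choice of mapping for the rank-3 7-limit temperament whose kernel is generated by 50/49; the TE tuning map is computed from the weighted pseudoinverse and then just evaluated on the 2.3.7/5 basis monzos:

```python
import numpy as np

primes = np.array([2.0, 3.0, 5.0, 7.0])
J = 1200.0 * np.log2(primes)           # 7-limit JIP in cents
W = np.diag(1.0 / np.log2(primes))     # Tenney weighting

# One mapping for the rank-3 7-limit temperament tempering out 50/49.
V = np.array([[2, 0, 0, 1],
              [0, 1, 0, 0],
              [0, 0, 1, 1]], dtype=float)
comma = np.array([1, 0, 2, -2])        # 50/49
assert np.allclose(V @ comma, 0)

# TE (p = 2) tuning map: minimize ||G*V_W - J_W|| over generator maps G.
V_W = V @ W
J_W = J @ W                            # = <1200 1200 1200 1200|
T = J_W @ np.linalg.pinv(V_W) @ V      # full 7-limit tuning map in cents

# Restrict to the 2.3.7/5 subgroup by evaluating T on its basis monzos.
basis = {"2": [1, 0, 0, 0], "3": [0, 1, 0, 0], "7/5": [0, 0, -1, 1]}
print({k: float(T @ np.array(m)) for k, m in basis.items()})
```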

-Mike

🔗Carl Lumma <carl@lumma.org>

7/22/2012 12:54:33 PM

Mike wrote:

>> TOP and Lp-TOP aren't the same? Lp and TOP are the same,
>> when p = 1.
>
>Yeah, that's what I'm proposing. The special name for Lp-TOP in the
>case of p=1 is TOP, and in the case of p=2 it's TE.

Why not Lp-TE?

>> How do you get a norm on intervals without weighting?
>
>You just apply the norm to the raw monzo without weighting anything.
>So 2/1, 3/1, and 5/1 all have a norm of 1.

OK.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

7/22/2012 1:18:20 PM

On Sun, Jul 22, 2012 at 3:54 PM, Carl Lumma <carl@lumma.org> wrote:
>
> >Yeah, that's what I'm proposing. The special name for Lp-TOP in the
> >case of p=1 is TOP, and in the case of p=2 it's TE.
>
> Why not Lp-TE?

Because the E stands for Euclidean, so that wouldn't make much sense.

I found the conversation, which highlights another issue - the
distinction between the Lp-optimal error of a temperament and the Lp
error of a specific tuning map. I've gotten confused on this point
before. Here's the abridged recap of the thread. Original thread is
here: http://www.facebook.com/groups/xenharmonic/permalink/10150591117659482/?comment_id=10150594024754482&offset=0&total_comments=299

The basic gist is: Paul thinks that TE error refers to the weighted L2
error of a specific tuning map. However, it actually refers to the
error of the OPTIMAL TE tuning for a temperament, what I'm calling the
"L2-TOP" error of a temperament. You can see the confusion below, and
also that
Graham doesn't realize Paul's confused about it.

I'll put comments in [brackets].

--

Paul Erlich: ...What if you just uniformly stretch or squash the JI
tuning? [he means the JIP.] Won't the angle be zero, while the TE
error is greater than zero?

Graham Breed: What JI tuning of what? Any uniform stretching or
squashing doesn't change the angle of the vector. TE error really is
the sine of this angle. In tuning space, it's the distance from the
optimal tuning to the JI point with the distance from the JI point to
the origin fixed as 1 (or 1200 for cents per octave). The optimal
tuning is the nearest point on the temperament line to the JI point.
Draw the triangle and you'll find the sine.

Mike Battaglia: oh oh oh, nm, I see what's going on now. I believe
that "TE" error refers to the optimal RMS error, not the weighted RMS
error.

Mike Battaglia: In other words, "TE error" refers to an analogue of
"TOP damage," not just "weighted damage" in general.

Graham Breed: TE refers to the Tenney weighted RMS error. The
weighting is a property of the space/lattice defined by the inner
product.

Mike Battaglia: You mean optimal Tenney-weighted RMS error, right? Not
just Tenney-weighted RMS error in general.

Paul Erlich: ‎Graham, any uniform stretching or squashing of JI will
lead to a tuning with TE error > 0. TE error is L2 distance in the
tuning space I'm trying to talk about. The JIP is the only point in
that space with zero TE error. [Again, Paul's mistaken understanding
of the term "TE error"]

Paul Erlich: TE error is the Tenney-weighted sum of the squares of the
deviations of the primes from JI. Thus it is Euclidean distance in the
tuning space I'm trying to talk about. It's not an angle.

Paul Erlich: I don't know why "temperament line" is coming up here --
we are not discussing any classes of temperaments of any rank, and a
temperament line would correspond to a class of temperaments of a
specific rank.

Mike Battaglia: ‎Paul Erlich - every instance of TE error I've ever
heard of uses it to refer to optimal weighted RMS error. I think it
was originally called "TOP-RMS error," but then it was changed to TE
because I don't know why. I like the names TOP-L1, TOP-L2, TOP-L3 ...
TOP-Linf myself. But that's just me.

Paul Erlich: TE means Tenney-Euclidean; in other words, Euclidean
distance in the Tenney lattice. TOP-L2 would be a good way to refer to
tunings that *minimize* the TE error, because the "OP" stands for
"optimal".

--
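
As an aside, Graham's geometric description above is easy to check numerically; here's a small sketch using the 5-limit 12edo patent val as an arbitrary example, showing that the sine of the angle between the weighted val and the weighted JIP agrees with the distance from the nearest point on the tuning line to the JIP once the JIP's length is scaled to 1:

```python
import numpy as np

primes = np.array([2.0, 3.0, 5.0])
w = 1.0 / np.log2(primes)              # Tenney weights for vals

val = np.array([12.0, 19.0, 28.0])     # 12edo patent val in the 5-limit
v_w = val * w                          # weighted val
j_w = np.ones(3)                       # weighted JIP (each coordinate 1)

# Sine of the angle between the weighted val and the weighted JIP.
cos_t = (v_w @ j_w) / (np.linalg.norm(v_w) * np.linalg.norm(j_w))
sin_t = np.sqrt(1.0 - cos_t**2)

# Nearest point on the tuning line {g * v_w} to the JIP (the optimally
# stretched val), and its distance to the JIP scaled by |JIP|.
t_opt = ((j_w @ v_w) / (v_w @ v_w)) * v_w
dist = np.linalg.norm(j_w - t_opt) / np.linalg.norm(j_w)

print(sin_t, dist)                     # the two numbers agree
```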

So that's where it came from; I knew I wasn't just misremembering. But
the discussion goes on:

Mike Battaglia: Paul Erlich - that isn't what TE error means

Paul Erlich: Yup, it sure is. It's the Tenney-weighted sum-of-squares
of the primes' deviations from JI.

Mike Battaglia: ‎Paul Erlich - Didn't Graham define this term? That's
not how "TOP-RMS error" is defined in primerr or how TE error is
defined on the wiki.

Paul Erlich: What's the term for the Tenney-weighted sum-of-squares of
the primes' deviations from JI, aka Tenney-Euclidean distance in the
tuning space I was talking about, aka the thing that TOP-RMS or TE
tunings minimize?

Mike Battaglia: Tenney-weighted RMS error, as far as I know. Or
Tenney-weighted sum-squared error, in this case. This was why I was
suggesting we call it TEP or TEOP or whatever before.

Paul Erlich: OK. And TOP-RMS (TE) tunings minimize this mouthful of a
thing subject to some matrix of constraints, representing a basis for
the group of vanishing commas.

Paul Erlich: Well, this all came up here because of something Carl
posted. But I find it significant because of something I was
nitpicking about in another thread, which I'll elaborate offlist.
Anyway, this terminology of tuning space and TE and such strikes me as
absolutely horrible. Why can't we call tunings "x-optimal" if they
minimize error measure "x"? Hate to say it, but it seems that the
terminology that has sprung up over the last six years seems to lack
any consideration of how easy (or even possible) it would be to teach
this stuff to a newbie. Making logical terminology is half the battle
of making this stuff comprehensible to others, and if we can't make it
comprehensible to others, we are wasting our lives.

Mike Battaglia: Paul Erlich - I tried TE-optimal, people didn't like it...

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

7/22/2012 1:20:47 PM

On Sun, Jul 22, 2012 at 4:18 PM, Mike Battaglia <battaglia01@gmail.com> wrote:
>
> Paul Erlich: Well, this all came up here because of something Carl
> posted. But I find it significant because of something I was
> nitpicking about in another thread, which I'll elaborate offlist.
> Anyway, this terminology of tuning space and TE and such strikes me as
> absolutely horrible. Why can't we call tunings "x-optimal" if they
> minimize error measure "x"? Hate to say it, but it seems that the
> terminology that has sprung up over the last six years seems to lack
> any consideration of how easy (or even possible) it would be to teach
> this stuff to a newbie. Making logical terminology is half the battle
> of making this stuff comprehensible to others, and if we can't make it
> comprehensible to others, we are wasting our lives.

Anyway, although I do sometimes feel that Paul's terminology is also
too complicated (like with the discussion over MOS), I agree with him
on this point. I got thrown off by TE and so on as well.

And really, I don't see any reason why we shouldn't call these tunings
Lp-TOP. What's the big deal? Is this not a sensible name for the Lp
Tenney-weighted Optimal tuning for a temperament?

-Mike

🔗Carl Lumma <carl@lumma.org>

7/22/2012 1:42:35 PM

Mike wrote:

>The basic gist is: Paul thinks that TE error refers to the weighted L2
>error of a specific tuning map. However, it actually refers to the
>OPTIMAL TE tuning for a temperament, what I'm calling the "L2-TOP"
>error of a temperament.

Whoa, L2-TOP. You're talking about something different than
I thought you were. You want to redefine TOP to mean any Tenney-
weighted optimal tuning. Paul's OK with this? He's always been
a very strong advocate of the L1 norm.

>Paul Erlich: TE means Tenney-Euclidean; in other words, Euclidean
>distance in the Tenney lattice. TOP-L2 would be a good way to refer to
>tunings that *minimize* the TE error, because the "OP" stands for
>"optimal".

Hm. If we were to make this change, I think the TOP should
come first. TOP-L2 sounds better than L2-TOP.

But is the unweighted case really important enough to justify
the change? Seems like Tenney weighting is used the vast majority
of the time. Instead of Lp and TOP-Lp, it could be ULp and Lp.
But then the U interferes with the PO, as in POL2. UPOL2? Bah.
Maybe just say "unweighted POL2".

>Paul Erlich: Well, this all came up here because of something Carl
>posted. But I find it significant because of something I was
>nitpicking about in another thread, which I'll elaborate offlist.
>Anyway, this terminology of tuning space and TE and such strikes me as
>absolutely horrible. Why can't we call tunings "x-optimal" if they
>minimize error measure "x"? Hate to say it, but it seems that the
>terminology that has sprung up over the last six years seems to lack
>any consideration of how easy (or even possible) it would be to teach
>this stuff to a newbie. Making logical terminology is half the battle
>of making this stuff comprehensible to others, and if we can't make it
>comprehensible to others, we are wasting our lives.

I remember this comment, but I don't remember which post of mine
he's referring to. I've tripped up on this same distinction
myself. In the rank 1 case, Graham used to say stuff like "with
the optimal stretch". It took me a while to figure out he meant
"for the optimal tuning".

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

7/22/2012 2:06:01 PM

On Sun, Jul 22, 2012 at 4:42 PM, Carl Lumma <carl@lumma.org> wrote:
>
> Whoa, L2-TOP. You're talking about something different than
> I thought you were. You want to redefine TOP to mean any Tenney-
> weighted optimal tuning. Paul's OK with this? He's always been
> a very strong advocate of the L1 norm.

He thinks that the L1 norm is best, but that the rest are useful
because of the stuff Graham figured out in composite.pdf. For
instance, TE approximates the RMS error of weighted L1-bounded subsets
in the lattice, and if Graham's conjecture is correct this
approximation becomes exact in the limit. If not, it's still close
enough. So these other things have uses anyway.

> >Paul Erlich: TE means Tenney-Euclidean; in other words, Euclidean
> >distance in the Tenney lattice. TOP-L2 would be a good way to refer to
> >tunings that *minimize* the TE error, because the "OP" stands for
> >"optimal".
>
> Hm. If we were to make this change, I think the TOP should
> come first. TOP-L2 sounds better than L2-TOP.

OK, that's fine.

> But is the unweighted case really important enough to justify
> the change? Seems like Tenney weighting is used the vast majority
> of the time. Instead of Lp and TOP-Lp, it could be ULp and Lp.
> But then the U interferes with the PO, as in POL2. UPOL2? Bah.
> Maybe just say "unweighted POL2".

You know, something I just thought of: the L in Lp stands for
Lebesgue. But now, we're talking about a variant of the Lp norm which
deliberately weights the axes in a musically relevant way, and which
Tenney came up with. So instead of talking about Lebesgue space and
the Lebesgue or Lp norm, we're talking about Tenney space (and we even
call it this), so we might as well just call it the Tenney or Tp norm.
(We could also call it Tenney-Lebesgue or the TLp norm, but I don't
think Lebesgue will care because he's dead).

So then it's simple: Tenney-Optimal tunings just become TOP_p tunings,
with a subscript p. Or maybe TOP-p to make it nicer in ASCII. Then the
phrase "unweighted TOP" is totally redundant, because the unweighted
Tenney norm is the Lebesgue norm, so it could just be
Lebesgue-Optimal, or LOP_p or LOP-p or whatever. I think something
like that would be nice and simple. We probably won't talk about LOP-p
tunings as much, but at least this way you can.

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

7/22/2012 11:24:37 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> All I want is clear and unambiguous names for these things:
>
> 1) The unweighted Lp norm on intervals

Unweighted Lp.

> 2) The weighted Lp norm on intervals

Lp.

> 3) The unweighted optimal Lp tuning on intervals

Unweighted Lp tuning.

> 4) The weighted optimal Lp tuning on intervals

Lp tuning.

🔗Carl Lumma <carl@lumma.org>

7/23/2012 2:05:00 AM

Works for me. -Carl

At 11:24 PM 7/22/2012, Gene wrote:
>
>
>--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
>> All I want is clear and unambiguous names for these things:
>>
>> 1) The unweighted Lp norm on intervals
>
>Unweighted Lp.
>
>> 2) The weighted Lp norm on intervals
>
>Lp.
>
>> 3) The unweighted optimal Lp tuning on intervals
>
>Unweighted Lp tuning.
>
>> 4) The weighted optimal Lp tuning on intervals
>
>Lp tuning.
>
>

🔗Mike Battaglia <battaglia01@gmail.com>

7/23/2012 12:04:36 PM

On Mon, Jul 23, 2012 at 2:24 AM, genewardsmith <genewardsmith@sbcglobal.net>
wrote:
>
> > 4) The weighted optimal Lp tuning on intervals
>
> Lp tuning.

Fine, then what do I call these two things?

5) The Lp distance from a specific tuning map to the JIP, i.e. the Lp
error for that tuning map
6) The optimal Lp error for a temperament

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

7/23/2012 2:29:48 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> 5) The Lp distance from a specific tuning map to the JIP, e.g. the Lp
> error for that tuning map

If the tuning map is T, it's the Lp error of T.

> 6) The optimal Lp error for a temperament

Lp error, or if the temperament is S, Lp error of S.
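
In code, "the Lp error of T" is just the weighted Lp distance from T to the JIP; a tiny sketch, assuming cents-valued maps and Tenney weights 1/log2(p), with a quarter-comma meantone map as an arbitrary input:

```python
import numpy as np

primes = np.array([2.0, 3.0, 5.0])
J = 1200.0 * np.log2(primes)              # JIP in cents
W = np.diag(1.0 / np.log2(primes))        # Tenney weighting

def lp_error(T, p):
    """Weighted Lp distance from a tuning map T (in cents) to the JIP."""
    return np.linalg.norm((np.asarray(T) - J) @ W, ord=p)

# e.g. a quarter-comma meantone tuning map <1200 1896.578 2786.314|
print(lp_error([1200.0, 1896.578, 2786.314], 2))
```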

🔗Mike Battaglia <battaglia01@gmail.com>

7/23/2012 2:59:11 PM

On Mon, Jul 23, 2012 at 5:29 PM, genewardsmith <genewardsmith@sbcglobal.net>
wrote:
>
> > 6) The optimal Lp error for a temperament
>
> Lp error, or if the temperament is S, Lp error of S.

OK, and then while we're at it again, how about these two:

7) The complexity of a tempered interval within a temperament, as
measured by the quotient norm on tempered intervals induced by the Lp
norm on intervals
8) The complexity of a temperament on the whole, as measured by taking
the Lp norm of the multivector*

?

-Mike

* (we also need to discuss whether it's the Lp norm of the multimonzo
or the multival you need to take, but that's another topic.)
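
For item 7, here's one possible reading in code for p = 2: a sketch that takes the quotient norm of a tempered interval to be the distance from the Tenney-weighted monzo to the real span of the weighted commas, using 5-limit meantone and 3/2 as the example. Whether this matches the intended definition of "Lp temperamental complexity" is an assumption on my part.

```python
import numpy as np

primes = np.array([2.0, 3.0, 5.0])
W = np.diag(np.log2(primes))                 # Tenney weighting for monzos

# 5-limit meantone: kernel generated by 81/80 = |-4 4 -1>
commas = np.array([[-4.0, 4.0, -1.0]])

def l2_temperamental_complexity(monzo):
    """Quotient-norm reading: distance from the weighted monzo to the
    real span of the weighted commas."""
    m_w = W @ np.asarray(monzo, dtype=float)
    K = (commas @ W).T                       # weighted commas as columns
    proj = K @ np.linalg.pinv(K) @ m_w       # orthogonal projection onto span
    return float(np.linalg.norm(m_w - proj))

print(l2_temperamental_complexity([-1, 1, 0]))   # 3/2 in meantone
```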

🔗genewardsmith <genewardsmith@sbcglobal.net>

7/23/2012 4:34:30 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> OK, and then while we're at it again, how about these two:
>
> 7) The complexity of a tempered interval within a temperament, as
> measured by the quotient norm on tempered intervals induced by the Lp
> norm on intervals

Isn't this temperamental complexity? So Lp temperamental complexity.

> 8) The complexity of a temperament on the whole, as measured by taking
> the Lp norm of the multivector*

I'm not sure what a good measure is. You could try the complexity of the associated full p-limit temperament, but that doesn't make much sense.

🔗Mike Battaglia <battaglia01@gmail.com>

7/23/2012 4:47:44 PM

On Mon, Jul 23, 2012 at 7:34 PM, genewardsmith <genewardsmith@sbcglobal.net>
wrote:
>
> > 8) The complexity of a temperament on the whole, as measured by taking
> > the Lp norm of the multivector*
>
> I'm not sure what a good measure is. You could try the complexity of the
> associated full p-limit temperament, but that doesn't make much sense.

OK. So, this is "Lp complexity" then, and the other thing is "Lp
temperamental complexity"? I'll address whether it makes sense in
another thread.

-Mike