TOP-RMS vs TOP-MEAN

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/1/2010 10:00:42 AM

Playing with this:

http://x31eq.com/temper/net.html

suggests it gives results very close but not identical to what you get by dividing the weighted val (now a tuning space element because of the weighting) by its mean. I suspected this was going to be the case.

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/1/2010 10:55:54 AM

--- In tuning-math@yahoogroups.com, "genewardsmith" <genewardsmith@...> wrote:
>
> Playing with this:
>
> http://x31eq.com/temper/net.html
>
> suggests it gives results very close but not identical to what you get by dividing the weighted val (now a tuning space element because of the weighting) by its mean. I suspected this was going to be the case.

If you have two vals, both of which you weight and then divide by their mean, you obtain two elements u and v of tuning space which are near to the JI point of [1,1,1...1] and which have a mean value of 1. The linear combination t*u + (1-t)*v will also then have a mean value of 1, and the least squares value for t, which will be the same as the least squares value for t*u+(1-t)*v-JIP, will give the closest point on this line to the JI point. The result we may call the "TOP mean" tuning for the linear temperament for u and v. The same can be done for any number of vals. The results seem to be very close to Graham's, and since the method is so easy and the theory so straightforward it seems to me it's self-recommending.
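
To spell out the least squares step: minimizing sum((t*u[i] + (1-t)*v[i] - 1)^2) over t and setting the derivative to zero gives the closed form

t = sum((u[i] - v[i])*(1 - v[i])) / sum((u[i] - v[i])^2)

so no symbolic solver is actually needed for the rank two case.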

🔗Carl Lumma <carl@lumma.org>

5/1/2010 9:38:15 PM

Gene wrote:

>If you have two vals, both of which you weight and then divide by
>their mean, you obtain two elements u and v of tuning space which are
>near to the JI point of [1,1,1...1] and which have a mean value of 1.
>The linear combination t*u + (1-t)*v will also then have a mean value
>of 1, and the least squares value for t, which will be the same as the
>least squares value for t*u+(1-t)*v-JIP, will give the closest point
>on this line to the JI point. The result we may call the "TOP mean"
>tuning for the linear temperament for u and v. The same can be done
>for any number of vals. The results seem to be very close to Graham's,
>and since the method is so easy and the theory so straightforward it
>seems to me it's self-recommending.

Mainly trying to understand the ET case first, but don't want
to let this get away. Can you give an example?

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/1/2010 10:53:06 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> Mainly trying to understand the ET case first, but don't want
> to let this get away. Can you give an example?

Is this what you mean by an example?

Five limit meantone

[1201.3989399811339609, 1898.4492624304611520, 2788.2012897973087643]

Seven limit meantone

[1201.2437488484395853, 1898.4605315017968221, 2788.8671306134289470,
3368.4365799882536116]

Seven limit miracle

[1200.8219844838630001, 1901.3526190278542256, 2785.1802131502659039,
3368.9557419369252584]

Eleven limit miracle

[1200.7637319435031390, 1901.0055693890144328, 2785.3423854774129076,
3368.8772500153389856, 4152.1320575007845125]

🔗Carl Lumma <carl@lumma.org>

5/1/2010 11:01:18 PM

>--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:
>
>> Mainly trying to understand the ET case first, but don't want
>> to let this get away. Can you give an example?
>
>Is this what you mean by an example?

Not exactly. Those are results, which certainly help, but I meant
a worked example of the process you described. Or maybe you could
post pseudocode. Something that shows what "the linear combination
t*u + (1-t)*v" is.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/1/2010 11:21:17 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> Not exactly. Those are results, which certainly help, but I meant
> a worked example of the process you described. Or maybe you could
> post pseudocode. Something that shows what "the linear combination
> t*u + (1-t)*v" is.

Here's my Maple code, which most of the time makes pretty good pseudocode, one reason I like it.

weight := proc(l)
  # log prime weighting of list l
  local i, w;
  for i from 1 to nops(l) do
    w[i] := l[i]/log2(ithprime(i)) od;
  convert(convert(w, array), list) end:

unweight := proc(l)
  # log prime unweighting of list l
  local i, w;
  for i from 1 to nops(l) do
    w[i] := l[i]*log2(ithprime(i)) od;
  convert(convert(w, array), list) end:

tmean := proc(v)
  # top-mean tuning of val v
  local i, m, w;
  w := weight(v);
  m := mean(w);
  w/m end:

topmean2 := proc(u, v)
  # topmean tuning map of rank two temperament from u and v
  local i, s, t, uu, vv, w;
  uu := tmean(u);
  vv := tmean(v);
  w := expand(t*uu + (1-t)*vv);
  s := 0;
  for i from 1 to nops(u) do
    s := s + (w[i]-1)^2 od;
  s := solve(diff(s, t));
  w := subs(t=s, w);
  1200.0 * unweight(w) end:
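
For anyone without Maple, here is a rough Python transcription of the same thing, using the closed-form least squares value of t instead of solve/diff. The 12 and 19 equal vals in the comment are just one convenient way of specifying 5-limit meantone:

from math import log2

def zmd_rank2(val1, val2, primes):
    # ZMD ("TOP mean") tuning map, in cents, of the rank two temperament
    # defined by two vals: weight, scale each to mean 1, then take the
    # least squares point on the line joining them to the JI point
    def tmean(val):
        w = [x / log2(p) for x, p in zip(val, primes)]   # Tenney weighting
        m = sum(w) / len(w)
        return [x / m for x in w]                        # mean scaled to 1
    u, v = tmean(val1), tmean(val2)
    d = [ui - vi for ui, vi in zip(u, v)]
    t = sum(di * (1 - vi) for di, vi in zip(d, v)) / sum(di * di for di in d)
    w = [t * ui + (1 - t) * vi for ui, vi in zip(u, v)]
    return [1200.0 * wi * log2(p) for wi, p in zip(w, primes)]  # unweight to cents

# zmd_rank2([12, 19, 28], [19, 30, 44], [2, 3, 5]) reproduces the 5-limit
# meantone figures above, roughly [1201.40, 1898.45, 2788.20].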

🔗Carl Lumma <carl@lumma.org>

5/2/2010 1:06:55 AM

Gene wrote:
>> Not exactly. Those are results, which certainly help, but I meant
>> a worked example of the process you described. Or maybe you could
>> post pseudocode. Something that shows what "the linear combination
>> t*u + (1-t)*v" is.
>
>Here's my Maple code, which most of the time makes pretty good
>pseudocode, one reason I like it.
>
>weight := proc(l)
># log prime weighting of list l
>local i, w;
>for i from 1 to nops(l) do
>w[i] := l[i]/log2(ithprime(i)) od;
>convert(convert(w, array), list) end:

Here's how it looks in scheme

;; assuming a helper (primes n) that returns the first n primes, (2 3 5 ...)
(define weight (lambda (l) (map / l (map log2 (primes (length l))))))

It's short enough I probably wouldn't spend a keyword on it.

>unweight := proc(l)
># log prime unweighting of list l
>local i, w;
>for i from 1 to nops(l) do
>w[i] := l[i]*log2(ithprime(i)) od;
>convert(convert(w, array), list) end:

As here, in scheme it's the same as above but with * instead of /.

>tmean := proc(v)
># top-mean tuning of val v
>local i, m, w;
>w := weight(v);
>m := mean(w);
>w/m end:

Ok, yep.

>topmean2 := proc(u, v)
># topmean tuning map of rank two temperament from u and v
>local i, s, t, uu, vv, w;
>uu := tmean(u);
>vv := tmean(v);
>w := expand(t*uu + (1-t)*vv);
>s := 0;
>for i from 1 to nops(u) do
>s := s + (w[i]-1)^2 od;
>s := solve(diff(s, t));
>w := subs(t=s, w);
>1200.0 * unweight(w) end:

Here we are. If uu = [1.04, 0.98], does t*uu = [1.04t, 0.98t]?

Would (1-t)*uu = [1.04-1.04t, 0.98-0.98t]?

And what would t*uu + (1-t)uu produce?

Whatever it produces, you subtract 1 from, and then square,
each element, and then add through. That looks like finding the
sum-squared error.

diff(s, t) looks like ds/dt... the rate of change of s with
respect to t. Not sure exactly how solve() gives us the t at
which s is minimal, but then again, I don't really know calculus.
You reuse s to store this answer (shame on you) then stick it
into w in place of t, which was still an unknown. Then you get
cents out.

If you answer those three questions above, I'll try and track
down how to do this solve(diff()) stuff and I'll have an
implementation.

-Carl

🔗Graham Breed <gbreed@gmail.com>

5/2/2010 5:14:25 AM

On 1 May 2010 21:55, genewardsmith <genewardsmith@sbcglobal.net> wrote:

> If you have two vals, both of which you weight and then divide
> by their mean, you obtain two elements u and v of tuning space
> which are near to the JI point of [1,1,1...1] and which have a
> mean value of 1. The linear combination t*u + (1-t)*v will also
> then have a mean value of 1, and the least squares value for t,
> which will be the same as the least squares value for
> t*u+(1-t)*v-JIP, will give the closest point on this line to the JI
> point. The result we may call the "TOP mean" tuning for the
> linear temperament for u and v. The same can be done for
> any number of vals. The results seem to be very close to
> Graham's, and since the method is so easy and the theory so
> straightforward it seems to me it's self-recommending.

TOP mean is not a good term for this. TOP-max optimizes the max
weighted error. TOP-RMS optimizes the RMS weighted error. This
doesn't optimize the mean. It optimizes the RMS with the mean held
constant. I called it ZMD, for zero mean deviation, before. It's set
so that the mean deviations from JI are zero.

Yes, it's close to the TOP-RMS. This follows from the weighted
standard deviation being a good approximation to the TOP-RMS error.
You have

stretch = mean(w)/mean_sq(w)
= mean(w)/[var(w) + mean(w)^2]

Where var(w) is the variance defined as
var(w) = mean_sq(w) - mean(w)^2

Divide top and bottom by mean(w) and you get

stretch = 1/[var(w)/mean(w) + mean(w)]

If std(w) is roughly the TOP-RMS error, it means var(w) is roughly the
error squared. A 10 cent per octave error is about 7e-5 when you
square it. The means are roughly 1 for any reasonable temperament.
So you have

stretch = 1/mean(w)

to at least 1 part in 10^4. Proofs about the approximate equivalence
of std(e) and var(w) are in primerr.pdf.
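
As a quick numerical check of that, a Python sketch using 12-equal as an arbitrary example (scaling w so that a pure-octave tuning would be all ones is just a convenient normalization):

from math import log2

w = [(n / 12) / log2(p) for n, p in zip([12, 19, 28], [2, 3, 5])]
mean = sum(w) / len(w)
mean_sq = sum(x * x for x in w) / len(w)
print(mean / mean_sq)   # TOP-RMS octave stretch, about 0.998700
print(1 / mean)         # ZMD octave stretch, about 0.998707
# the two agree to well within 1 part in 10^4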

Now, how is this easier and more straightforward than TOP-RMS? I
originally talked about it (November 2008 I think) but took it out
because it didn't seem to be important. TOP-RMS error is the shortest
Euclidean distance in Tenney-weighted tuning space. Simple as that.

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/2/2010 7:53:50 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> Here we are. If uu = [1.04, 0.98], does t*uu = [1.04t, 0.98t]?

Yep.

> Would (1-t)*uu = [1.04-1.04t, 0.98-0.98t]?

Uh huh.

> And what would t*uu + (1-t)uu produce?

uu.

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/2/2010 8:02:01 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> TOP mean is not a good term for this. TOP-max optimizes the max
> weighted error. TOP-RMS optimizes the RMS weighted error. This
> doesn't optimize the mean. It optimizes the RMS with the mean held
> constant. I called it ZMD, for zero mean deviation, before. It's set
> so that the mean deviations from JI are zero.

OK, on the basis of priority ZMD it is.

> Now, how is this easier and more straightforward than TOP-RMS? I
> originally talked about it (November 2008 I think) but took it out
> because it didn't seem to be important. TOP-RMS error is the shortest
> Euclidean distance in Tenney-weighted tuning space. Simple as that.

If you tell us, then we'd know. Perhaps I did know a few years ago, but I was already having trouble reading things like pdf files so I'm far from certain. I note however that computationally ZMD *is* easier.

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/2/2010 9:31:13 AM

--- In tuning-math@yahoogroups.com, "genewardsmith" <genewardsmith@...> wrote:

> If you tell us, then we'd know. Perhaps I did know a few years ago, but I was already having trouble reading things like pdf files so I'm far from certain.

I'm happy to report that my code for TOP-RMS checks with Graham's results. Knowing what the hell he was optimizing was a BIG help. It's the choice with the least Euclidean norm of the error, and hence lowest Euclidean height weighted errors, which seems like a good enough justification for using it, though I doubt it would make Paul happy.

🔗Carl Lumma <carl@lumma.org>

5/2/2010 10:19:51 AM

--- In tuning-math@yahoogroups.com, "genewardsmith" <genewardsmith@...> wrote:

>I'm happy to report that my code for TOP-RMS checks with Graham's
>results. Knowing what the hell he was optimizing was a BIG help. It's the
>choice with the least Euclidean norm of the error, and hence lowest
>Euclidean height weighted errors, which seems like a good enough
>justification for using it, though I doubt it would make Paul happy.

Why do you think Paul would object? -Carl

🔗Carl Lumma <carl@lumma.org>

5/2/2010 10:32:06 AM

Gene wrote:

>> Here we are. If uu = [1.04, 0.98], does t*uu = [1.04t, 0.98t]?
>
>Yep.
>
>> Would (1-t)*uu = [1.04-1.04t, 0.98-0.98t]?
>
>Uh huh.
>
>> And what would t*uu + (1-t)uu produce?
>
>uu.

Thanks!

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/2/2010 12:14:56 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> Why do you think Paul would object? -Carl
>

For one thing, because he did. I pointed out we could use Euclidean norms instead, and get what Graham is now using, and he didn't like it, because Tenney height makes more sense.

🔗Carl Lumma <carl@lumma.org>

5/2/2010 12:47:19 PM

Gene wrote:

>> Why do you think Paul would object? -Carl
>
>For one thing, because he did. I pointed out we could use Euclidean
>norms instead, and get what Graham is now using, and he didn't like
>it, because Tenney height makes more sense.

Yeah, but I remember you demonstrated the difference between
taxicab and Euclidean is quite minor, at least in the 2-D.
But I guess I don't know why Graham prefers it... presumably for
geometric reasons. Does it make calculations easier for rank 2
and rank 3?

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/2/2010 1:17:52 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> Yeah, but I remember you demonstrated the difference between
> taxicab and Euclidean is quite minor, at least in the 2-D.
> But I guess I don't know why Graham prefers it... presumably for
> geometric reasons. Does it make calculations easier for rank 2
> and rank 3?

It makes such calculations much easier, yes. The Euclidean norm isn't too awful, but it is less well-behaved. When you've got an interval involving more than one prime in the factorization, it likes it better.

Still, it's not as extreme as what you get from the L-infinity norm, where you factor into prime powers and just pick the biggest prime power and use that as a height. The Euclidean norm likes 33 better than 31 or 32, which seems unreasonable, but not to the degree that it says 33 is just the same as 11, which is pretty hard to buy into. But there are a whole range of norms you can use and, employing Hölder's inequality, prove similar things about the boundedness of maximum relative error to TOP. Of them all, Euclidean is easiest, and the easiest to visualize. You even have angles.
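
To make that concrete, here is a small Python comparison on Tenney-weighted monzos (the helper name and the choice of examples are arbitrary):

from math import log2

def norms(monzo):
    # monzo: {prime: exponent}; each exponent is weighted by log2(prime)
    w = [e * log2(p) for p, e in monzo.items()]
    return (sum(abs(x) for x in w),        # taxicab: the Tenney height, log2(n*d)
            sum(x * x for x in w) ** 0.5,  # Euclidean
            max(abs(x) for x in w))        # L-infinity: the largest prime power

# norms({3: 1, 11: 1})  # 33 -> about (5.04, 3.80, 3.46)
# norms({2: 5})         # 32 -> (5.00, 5.00, 5.00)
# norms({31: 1})        # 31 -> about (4.95, 4.95, 4.95)
# norms({11: 1})        # 11 -> about (3.46, 3.46, 3.46)
# Euclidean ranks 33 below 31 and 32, while L-infinity makes 33 equal to 11.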

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/2/2010 1:25:37 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> But I guess I don't know why Graham prefers it...

Graham can speak for himself, but one reason you might like it is that it gives practical, middle of the road answers. Although for many purposes constraining the octave to be pure is most practical, and you seem not to like that for some reason.

🔗Carl Lumma <carl@lumma.org>

5/2/2010 2:04:07 PM

I wrote:

>If you answer those three questions above, I'll try and track
>down how to do this solve(diff()) stuff and I'll have an
>implementation.

The diff part looks like pretty simple symbolic jockeying.
I think I've already found the needed code online. The solve
part is another matter. Sigh.

Are there any neat shortcuts for doing this solving? Or
must one use the standard computer algebra system approach of
barraging it with every analytical trick in the book until
it cracks? I only care about rank 2.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/2/2010 2:35:11 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> Are there any neat shortcuts for doing this solving? Or
> must one use the standard computer algebra system approach of
> barraging it with every analytical trick in the book until
> it cracks? I only care about rank 2.

The general formulas are not as simple as in the rank one case, but I think they could be expressed in terms of elementary symmetric functions. I'm not enthusiastic about tackling the question, because all you need is a routine for solving two linear equations in two unknowns anyway.
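
For instance, reading that as the unconstrained least squares problem (which is Graham's TOP-RMS rather than ZMD), the whole thing is a 2x2 linear solve. A sketch, with the example vals chosen arbitrarily:

from math import log2

def rms_rank2(val1, val2, primes):
    # least squares a, b making a*u + b*v as close as possible to the
    # JI point [1, 1, ..., 1] in Tenney-weighted tuning space
    u = [x / log2(p) for x, p in zip(val1, primes)]
    v = [x / log2(p) for x, p in zip(val2, primes)]
    uu = sum(x * x for x in u)
    uv = sum(x * y for x, y in zip(u, v))
    vv = sum(y * y for y in v)
    su, sv = sum(u), sum(v)
    det = uu * vv - uv * uv
    a = (su * vv - sv * uv) / det   # the two normal equations,
    b = (uu * sv - uv * su) / det   # solved by Cramer's rule
    return [1200.0 * (a * x + b * y) * log2(p)
            for x, y, p in zip(u, v, primes)]   # tuning map in cents

# rms_rank2([12, 19, 28], [19, 30, 44], [2, 3, 5]) should come out very
# close to the ZMD meantone figures earlier in the thread.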

🔗Carl Lumma <carl@lumma.org>

5/2/2010 2:25:51 PM

Gene wrote:
>Graham can speak for himself, but one reason you might like it is that
>it gives practical, middle of the road answers. Although for many
>purposes constraining the octave to be pure is most practical, and you
>seem not to like that for some reason.

Pure octaves is just ad hoc. If you've got the right weighting
you should be able to treat all primes the same way.

It looks like ZMD, TOP-RMS, and TOP are close enough that nobody
would really balk at switching one for another. But it's nice to
have something practical to tell people in musical terms, like
'no interval will ever be worse than this'. You said the ZMD damage
"divided by the Tenney height" should be bounded. I assumed you
meant the height of the tallest interval in a bag, provided that we
don't add any intervals of height h until we've added all shorter
intervals. Correct? And I guess we still want to know how quickly
it approaches the bound as we add them. To say it another way,
the plot of

ZMD-damage(TenneySeries(h)) / h

as a function of h.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/2/2010 3:03:30 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> You said the ZMD damage
> "divided by the Tenney height" should be bounded. I assumed you
> meant the height of the tallest interval in a bag, provided that we
> don't add any intervals of height h until we've added all shorter
> intervals. Correct?

No, I'm talking about all intervals in the prime limit of whatever temperament it is we are using. And actually, I was talking about TOP-RMS vs Tenney height, but it's also true for ZMD if we put reasonable restrictions on just how goofy these things can be, so we can bound ZMD in terms of TOP-RMS.

> And I guess we still want to know how quickly
> it approaches the bound as we add them.

Well, there's a fun computation project to test out some examples. For this purpose, you don't actually need to look at anything but integers in the prime limit you are using, which is nice.

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/2/2010 3:06:10 PM

--- In tuning-math@yahoogroups.com, "genewardsmith" <genewardsmith@...> wrote:

> The general formulas are not as simple as in the rank one case, but I think they could be expressed in terms of elementary symmetric functions.

If I do this formula thing I presume you would prefer the answer in terms of sums of powers of a list of numbers rather than what you get as the coefficients of a polynomial with these as its roots?

🔗Carl Lumma <carl@lumma.org>

5/2/2010 3:54:22 PM

Gene wrote:

>>The general formulas are not as simple as in the rank one case, but
>>I think they could be expressed in terms of elementary symmetric
>>functions.
>
>If I do this formula thing I presume you would prefer the answer in
>terms of sums of powers of a list of numbers rather than what you get
>as the coefficients of a polynomial with these as its roots?

If I understand your question, yes. -Carl

🔗Carl Lumma <carl@lumma.org>

5/2/2010 3:54:29 PM

Gene wrote:

>> You said the ZMD damage
>> "divided by the Tenney height" should be bounded. I assumed you
>> meant the height of the tallest interval in a bag, provided that we
>> don't add any intervals of height h until we've added all shorter
>> intervals. Correct?
>
>No, I'm talking about all intervals in the prime limit of whatever
>temperament it is we are using.

The Tenney height would be infinite then, so I don't see how
that would work.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/2/2010 4:00:25 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> >No, I'm talking about all intervals in the prime limit of whatever
> >temperament it is we are using.
>
> The Tenney height would be infinite then, so I don't see how
> that would work.

This is bad?

🔗Carl Lumma <carl@lumma.org>

5/2/2010 4:48:15 PM

Gene:

>> The Tenney height would be infinite then, so I don't see how
>> that would work.
>
>This is bad?

The weighted error (ZMD damage I guess) doesn't go to infinity,
does it? -C.

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/2/2010 5:07:17 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:
>
> Gene:
>
> >> The Tenney height would be infinite then, so I don't see how
> >> that would work.
> >
> >This is bad?
>
> The weighted error (ZMD damage I guess) doesn't go to infinity,
> does it? -C.

Of course it does.

🔗Carl Lumma <carl@lumma.org>

5/2/2010 5:24:27 PM

Gene:

>> >> The Tenney height would be infinite then, so I don't see how
>> >> that would work.
>> >
>> >This is bad?
>>
>> The weighted error (ZMD damage I guess) doesn't go to infinity,
>> does it? -C.
>
>Of course it does.

The addition of each prime to an interval increases its error but
decreases its weight. How does the weighted error get infinite?

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/2/2010 7:17:12 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> The addition of each prime to an interval increases its error but
> decreases its weight. How does the weighted error get infinite?

Carl, I am totally confused. I have no idea what you are saying or what is bothering you. Maybe Herman or Graham will have a clue.

🔗Carl Lumma <carl@lumma.org>

5/2/2010 8:54:10 PM

Gene wrote:

>> The addition of each prime to an interval increases its error but
>> decreases its weight. How does the weighted error get infinite?
>
>Carl, I am totally confused. I have no idea what you are saying or
>what is bothering you. Maybe Herman or Graham will have a clue.

Well thanks for trying, but it's me who's confused. I'm trying to
figure out what you meant in this message

/tuning-math/message/17614

I confess I haven't a clue. If Graham or Herman know and care
to chime in, that'd be awesome.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/2/2010 9:37:19 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> Well thanks for trying, but it's me who's confused. I'm trying to
> figure out what you meant in this message
>
> /tuning-math/message/17614
>
> I confess I haven't a clue. If Graham or Herman know and care
> to chime in, that'd be awesome.

If T is a p-limit tuning map, then for p-limit rational numbers q
sup |T(q) - cents(q)|/Tenney(q) exists and defines some sort of relative error measure for T, weighted relative error.
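
In code, for a concrete tuning map (brute force over small monzos is only for illustration; the supremum is in fact attained at one of the primes, so only the weighted prime errors matter):

from math import log2
from itertools import product

def weighted_relative_error(tuning_map, primes, max_exp=3):
    # sup of |T(q) - cents(q)| / Tenney(q) over p-limit ratios q, searched
    # over monzos with exponents up to max_exp; tuning_map is in cents
    just = [1200.0 * log2(p) for p in primes]
    worst = 0.0
    for monzo in product(range(-max_exp, max_exp + 1), repeat=len(primes)):
        if not any(monzo):
            continue
        err = sum(e * (t - j) for e, t, j in zip(monzo, tuning_map, just))
        tenney = sum(abs(e) * log2(p) for e, p in zip(monzo, primes))  # log2(n*d)
        worst = max(worst, abs(err) / tenney)
    return worst

# e.g. weighted_relative_error([1201.399, 1898.449, 2788.201], [2, 3, 5])
# for the ZMD meantone map given earlier.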

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/2/2010 10:07:45 PM

--- In tuning-math@yahoogroups.com, "genewardsmith" <genewardsmith@...> wrote:

The point, such as it is, is that while you can cook up a relative error measure which makes TOP-RMS the "best" system in terms of a somewhat dubious method of measuring relative error, you don't need much at all to prove boundedness wrt Tenney height, which is more or less obvious.

🔗Graham Breed <gbreed@gmail.com>

5/2/2010 10:22:18 PM

On 3 May 2010 01:04, Carl Lumma <carl@lumma.org> wrote:

> Are there any neat shortcuts for doing this solving?  Or
> must one use the standard computer algebra system approach of
> barraging it with every analytical trick in the book until
> it cracks?  I only care about rank 2.

I've deleted some context that I didn't understand. The equations for
solving TOP-RMS are in http://x31eq.com/primerr.pdf and I probably
gave you rank 2 code in Scheme once. If that's not what you wanted
you'll have to explain.

To solve the general rank 2 case of TOP-max you need a linear
programming library.

I do have Python code at home for solving rank 2 ZMD but if there's
any simplification I missed it. The only thing I note from
primerr.pdf is that Equation 60 on page 15, for an approximate error,
can be simplified for ZMD stretch because the variances become
mean-squareds. But that's an illusory simplification because you need
the means to calculate the ZMD stretch. All you do is move them from
one part of the calculation to another.

Graham

🔗Graham Breed <gbreed@gmail.com>

5/2/2010 10:23:32 PM

On 3 May 2010 08:37, genewardsmith <genewardsmith@sbcglobal.net> wrote:

> If T is a p-limit tuning map, then for p-limit rational numbers q
> sup |T(q) - cents(q)|/Tenney(q) exists and defines some sort of relative error measure for T, weighted relative error.

Isn't that what TOP-max is optimizing?

Graham

🔗Graham Breed <gbreed@gmail.com>

5/2/2010 10:33:14 PM

On 2 May 2010 21:19, Carl Lumma <carl@lumma.org> wrote:

> Why do you think Paul would object?  -Carl

In the past he resisted Euclidean metrics even as approximations to
taxi cab metrics. But he also wanted a geometric model for the error.
In arbitrary ranks, Euclidean geometry is the model that works, and
does so very simply. If taxi cabs can even be made to work, nobody's
shown me how. It really looks like his two goals are inconsistent,
and I'd like to argue it through with him, but we aren't in touch.

The fact is I can't think of any arguments for why a taxi cab metric
should be valid. It's something we've always assumed. Tenney used
it, I found it independently, and last we know Paul believed in it.
But where are the psychoacoustic laws to back it up? It makes some
things simpler, but when it becomes a complication you can throw it
away!

Graham

🔗Graham Breed <gbreed@gmail.com>

5/2/2010 10:36:15 PM

On 3 May 2010 01:25, Carl Lumma <carl@lumma.org> wrote:
> Gene wrote:
>>Graham can speak for himself, but one reason you might like it is that
>>it gives practical, middle of the road answers. Although for many
>>purposes constraining the octave to be pure is most practical, and you
>>seem not to like that for some reason.
>
> Pure octaves is just ad hoc.  If you've got the right weighting
> you should be able to treat all primes the same way.

The approximations I use for pure octaves do treat all primes the same
way. If you define a standard scale stretch that doesn't involve pure
octaves, you can feed that in. The only problem is if you try to
optimize the scale stretch because you can get stupid results, with
everything collapsing to a unison.

Graham

🔗Carl Lumma <carl@lumma.org>

5/2/2010 10:42:37 PM

Graham wrote:

>I've deleted some context that I didn't understand. The equations for
>solving TOP-RMS are in http://x31eq.com/primerr.pdf and I probably
>gave you rank 2 code in Scheme once. If that's not what you wanted
>you'll have to explain.

I was trying to translate Gene's Maple ZMD to Scheme. If you gave
me TOP-RMS in Scheme I've definitely forgotten it. The closest thing
I can find in all my mail is

/tuning-math/message/17455?var=0&l=1

-Carl

🔗Graham Breed <gbreed@gmail.com>

5/2/2010 10:44:38 PM

On 2 May 2010 19:02, genewardsmith <genewardsmith@sbcglobal.net> wrote:

>> Now, how is this easier and more straightforward than TOP-RMS?  I
>> originally talked about it (November 2008 I think) but took it out
>> because it didn't seem to be important.  TOP-RMS error is the shortest
>> Euclidean distance in Tenney-weighted tuning space.  Simple as that.
>
> If you tell us, then we'd know. Perhaps I did know a few years ago, but I was already having trouble reading things like pdf files so I'm far from certain. I note however that computationally ZMD *is* easier.

It went in July 5th 2006 and came out November 11 2007.

The point is I never did find it made error or complexity calculations
easier. It relates to the STD error, which is simpler for ranks 1 and
2. ZMD is a good rule of thumb for finding the octave stretch. But
when I worked out the rank 2 formulas they weren't any simpler.

If you were having trouble with reading the PDFs you should have said
so. Note that some of the more recent ones come in a single column
format, with a larger font. The only problem with doing that for
primerr.pdf is that some of the tables are optimized to fit on a page.

Graham

🔗Graham Breed <gbreed@gmail.com>

5/2/2010 10:50:29 PM

On 3 May 2010 09:42, Carl Lumma <carl@lumma.org> wrote:

> I was trying to translate Gene's Maple ZMD to Scheme.  If you gave
> me TOP-RMS in Scheme I've definitely forgotten it.  The closest thing
> I can find in all my mail is <snip>

See the file regular.scm in my code bundle then.
http://x31eq.com/temper/regular.zip or in the files section of
tuning-math.

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/2/2010 11:24:39 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> On 3 May 2010 08:37, genewardsmith <genewardsmith@...> wrote:
>
> > If T is a p-limit tuning map, then for p-limit rational numbers q
> > sup |T(q) - cents(q)|/Tenney(q) exists and defines some sort of relative error measure for T, weighted relative error.
>
> Isn't that what TOP-max is optimizing?

Precisely.

🔗Carl Lumma <carl@lumma.org>

5/2/2010 11:56:39 PM

Gene wrote:

>> Well thanks for trying, but it's me who's confused. I'm trying to
>> figure out what you meant in this message
>> /tuning-math/message/17614
>> I confess I haven't a clue. If Graham or Herman know and care
>> to chime in, that'd be awesome.
>
>If T is a p-limit tuning map, then for p-limit rational numbers q
>sup |T(q) - cents(q)|/Tenney(q) exists and defines some sort of
>relative error measure for T, weighted relative error.

This is "TOP damage". Neatly, it's what TOP tuning minimizes.
I'm asking what ZMD tuning minimizes. Since ZMD is not TOP, it
can't be this. I gather you're saying that whatever it is, it's
close. I believe you. Just nice to know what it is when pushing
ZMD on the streets.

-Carl

🔗Carl Lumma <carl@lumma.org>

5/3/2010 12:22:02 AM

Graham wrote:

>> Pure octaves is just ad hoc. If you've got the right weighting
>> you should be able to treat all primes the same way.
>
>The approximations I use for pure octaves do treat all primes the same
>way.

I can't imagine how...

>If you define a standard scale stretch that doesn't involve pure
>octaves, you can feed that in. The only problem is if you try to
>optimize the scale stretch because you can get stupid results, with
>everything collapsing to a unison.

I have no idea what this means. The first occurrence of the word
"pure" in composite_onecol is in the table on pg.5 where you give
tunings with pure octaves. Next, you say this

""Table 1 shows the errors for different equal temperaments. The
errors on the left hand side assume a tuning with pure octaves. On
the right hand side, the tuning is chosen to optimize the RMS error.
Naturally, the optimized errors are generally lower than those for
pure octaves.""

The last sentence is certainly a reason to use tempered octaves.
Still it's not telling me how you're doing the pure versions.
Let's check primerr. You say,

""Equation 25 on page 8 and Equation 47 on the preceding page both
give a TOP error that's independent of the scale stretch. You can
verify this by replacing w with aw and noticing that it doesn't
alter the result.""

That didn't help much. Every equation in the paper requires
memorizing every prior equation to understand. You also say,

"For higher rank temperaments, you need to find the optimal
generators, and then unstretch the scale so that octaves are pure."

Ok, this sounds like you first temper the octaves along with
everything else normally and then stretch the scale pure.
A completely ad hoc operation.

Another way to do it is to leave 2 out. Gene's NOT did that I
think. Kees complexity is odd-limit based, so anything that
optimizes it should also.

Another way to do it would be to us a weighting that places far
more importance on 2 vs the other primes than Tenney weighting does.

All of these are ad hoc. Are you doing something else?

-Carl

🔗Carl Lumma <carl@lumma.org>

5/3/2010 12:27:13 AM

Graham wrote:

>> I was trying to translate Gene's Maple ZMD to Scheme. If you gave
>> me TOP-RMS in Scheme I've definitely forgotten it. The closest thing
>> I can find in all my mail is <snip>
>
>See the file regular.scm in my code bundle then.
>http://x31eq.com/temper/regular.zip or in the files section of
>tuning-math.

I don't see anything like Gene's topmean2 procedure jumping
out at me. -C.

🔗Graham Breed <gbreed@gmail.com>

5/3/2010 12:33:25 AM

On 3 May 2010 11:27, Carl Lumma <carl@lumma.org> wrote:
> Graham wrote:
>
>>> I was trying to translate Gene's Maple ZMD to Scheme.  If you gave
>>> me TOP-RMS in Scheme I've definitely forgotten it.  The closest thing
>>> I can find in all my mail is <snip>
>>
>>See the file regular.scm in my code bundle then.
>>http://x31eq.com/temper/regular.zip or in the files section of
>>tuning-math.
>
> I don't see anything like Gene's topmean2 procedure jumping
> out at me.  -C.

That's right. It only does optimal errors, not tunings.

Graham

🔗Graham Breed <gbreed@gmail.com>

5/3/2010 1:28:54 AM

On 3 May 2010 11:22, Carl Lumma <carl@lumma.org> wrote:
> Graham wrote:

>>The approximations I use for pure octaves do treat all primes the same
>>way.
>
> I can't imagine how...

Where do you see an error definition that gives special treatment to a prime?

>>If you define a standard scale stretch that doesn't involve pure
>>octaves, you can feed that in.  The only problem is if you try to
>>optimize the scale stretch because you can get stupid results, with
>>everything collapsing to a unison.
>
> I have no idea what this means.  The first occurrence of the word
> "pure" in composite_onecol is in the table on pg.5 where you give
> tunings with pure octaves.  Next, you say this

There are no pure octave approximations in that PDF.

> Let's check primerr.  You say,
>
> ""Equation 25 on page 8 and Equation 47 on the preceding page both
> give a TOP error that's independent of the scale stretch. You can
> verify this by replacing w with aw and noticing that it doesn't
> alter the result.""
>
> That didn't help much.  Every equation in the paper requires
> memorizing every prior equation to understand.  You also say,

They don't depend on previous equations. Equation 47 is obviously the
TOP-max error as a function of the weighted tuning map. It's the same
equation in w I gave last week. Surely you can see that? And yes,
try multiplying each w by a constant and you'll see it doesn't affect
the result. So this isn't a pure octaves approximation.

Equation 25 is for TOP-RMS error. I say that right above. So it
isn't an approximation and doesn't need pure octaves. It also uses the
same w as for TOP-max. It's explained in English underneath.

> "For higher rank temperaments, you need to find the optimal
> generators, and then unstretch the scale so that octaves are pure."
>
> Ok, this sounds like you first temper the octaves along with
> everything else normally and then stretch the scale pure.
> A completely ad hoc operation.

Yes.

> Another way to do it is to leave 2 out.  Gene's NOT did that I
> think.  Kees complexity is odd-limit based, so anything that
> optimizes it should also.

NOT did that and is a poorer approximation to the other measures.

Kees weighting is in section 4.3, page 15.

> Another way to do it would be to use a weighting that places far
> more importance on 2 vs the other primes than Tenney weighting does.

No, that'd still be octave-specific.

> All of these are ad hoc.  Are you doing something else?

What wouldn't count as ad hoc? The approximate error formulas are
Equations 50 and 51 under "Approximations". Then I give them in terms
of weighted errors. Like with NOT, you know the error in the octaves
is always zero when you leave them pure. But they approximate the
true TOP errors better. And I prove that. There's no logic other
than them being approximations.

Equation 56 gives you the generator for a rank 2 temperament in terms
of two weighted mappings. That's the equivalent of Gene's Maple
function. You don't need any other equations to understand it. It
depends on octaves being tempered pure, but only so that M_{00} works
as the number of periods to an octave. You can replace M_{00} with
the true number of periods to a 2:1.

Equation 60, the one I mentioned could be simplified with the ZMD,
gives you an approximate rank 2 error with no special treatment of
octaves.

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/3/2010 10:48:23 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> All of these are ad hoc. Are you doing something else?

I'm not sure what counts as ad hoc. The "Euclidean tuning" and NOT approach is to constrain the set of possible solutions to those with pure octaves. Is that ad hoc?

🔗Carl Lumma <carl@lumma.org>

5/3/2010 11:34:58 AM

Gene wrote:

>I'm not sure what counts as ad hoc. The "Euclidean tuning" and NOT
>approach is to constrain the set of possible solutions to those with
>pure octaves. Is that ad hoc?

I don't know any way to say it other than what I've already said.
If you want to do this it means you have the wrong weighting function.

NOT, that is. I don't see what using a Euclidean metric has to do
with octaves.

-Carl

🔗Carl Lumma <carl@lumma.org>

5/3/2010 11:42:37 AM

Graham wrote:

>>>If you define a standard scale stretch that doesn't involve pure
>>>octaves, you can feed that in. The only problem is if you try to
>>>optimize the scale stretch because you can get stupid results, with
>>>everything collapsing to a unison.
>>
>> I have no idea what this means. The first occurrence of the word
>> "pure" in composite_onecol is in the table on pg.5 where you give
>> tunings with pure octaves. Next, you say this
>
>There are no pure octave approximations in that PDF.

Really! Perhaps you'd care to explain what table 1 shows then,
as well as the sentence you clipped above.

>> "For higher rank temperaments, you need to find the optimal
>> generators, and then unstretch the scale so that octaves are pure."
>>
>> Ok, this sounds like you first temper the octaves along with
>> everything else normally and then stretch the scale pure.
>> A completely ad hoc operation.
>
>Yes.

...which treats prime 2 specially.

>> All of these are ad hoc. Are you doing something else?
>
>What wouldn't count as ad hoc?

A psychoacoustically-derived weighting (like Tenney height) that
puts so much weight on 2 it almost never gets tempered.

>The approximate error formulas are
>Equations 50 and 51 under "Approximations". Then I give them in terms
>of weighted errors. Like with NOT, you know the error in the octaves
>is always zero when you leave them pure. But they approximate the
>true TOP errors better. And I prove that. There's no logic other
>than them being approximations.

I don't know what an "approximation" is or why the heck you seem to
think something with pure octaves has lower TOP damage than TOP,
but I'm getting tired of you referring me to your paper instead of
making clear statements here.

>Equation 56 gives you the generator for a rank 2 temperament in terms
>of two weighted mappings. That's the equivalent of Gene's Maple
>function. You don't need any other equations to understand it.

I have no idea what any of those variables are. So I took a
screenshot so I could look at it and the glossary at the back
at the same time (convenient!). Looks like Gopt is the generator
size. Too bad for a rank 2 temperament, there are two generators.
The Ms are weighted tuning maps, where the usual row/column
formalism is conveniently reversed. I have no idea what the 00,
0, or 1 subscripts mean. Or how to parse the final ratio.
Looks like the STD of the product of M0 and M1 over the product
of M00 and the squared STD of M1, but since there are at least
five levels of text in the denominator alone, I won't say I'm sure.

-Carl

🔗Herman Miller <hmiller@IO.COM>

5/3/2010 7:47:54 PM

Graham Breed wrote:
> On 3 May 2010 01:04, Carl Lumma <carl@lumma.org> wrote:
>
>> Are there any neat shortcuts for doing this solving? Or
>> must one use the standard computer algebra system approach of
>> barraging it with every analytical trick in the book until
>> it cracks? I only care about rank 2.
>
> I've deleted some context that I didn't understand. The equations for
> solving TOP-RMS are in http://x31eq.com/primerr.pdf and I probably
> gave you rank 2 code in Scheme once. If that's not what you wanted
> you'll have to explain.
>
> To solve the general rank 2 case of TOP-max you need a linear
> programming library.

This may be true, but I get by without a linear programming library by finding the TOP-max tunings of all the 3-prime commas and taking the one with the largest TOP error. Still, this process is more cumbersome than the TOP-RMS method, and I haven't tried to generalize it to higher rank temperaments.

🔗Graham Breed <gbreed@gmail.com>

5/3/2010 11:18:49 PM

On 3 May 2010 22:42, Carl Lumma <carl@lumma.org> wrote:
> Graham wrote:

>>There are no pure octave approximations in that PDF.
(This is composite.pdf or composite_onecol.pdf.)
>
> Really!  Perhaps you'd care to explain what table 1 shows then,
> as well as the sentence you clipped above.

Table 1 shows the actual RMS error for pure octaves, and the actual
RMS error for optimized scale stretches. No approximations, other
than the decimal numbering system.

Which sentence then? I found two here:

>>If you define a standard scale stretch that doesn't involve pure
>>octaves, you can feed that in. The only problem is if you try to
>>optimize the scale stretch because you can get stupid results, with
>>everything collapsing to a unison.

The first one says you can choose a scale stretch that doesn't require
pure octaves. The second one says why you can't optimize the scale
stretch for the approximations that don't require you to.

And this one that you drew attention to: "Naturally, the optimized
errors are generally lower than those for pure octaves." That means
optimized errors are smaller than unoptimized errors, which shouldn't
be surprising.

>>> "For higher rank temperaments, you need to find the optimal
>>> generators, and then unstretch the scale so that octaves are pure."
>>>
>>> Ok, this sounds like you first temper the octaves along with
>>> everything else normally and then stretch the scale pure.
>>> A completely ad hoc operation.
>>
>>Yes.
>
> ...which treats prime 2 specially.

Yes, if you want to treat prime 2 specially. But you could restretch
them to get a ZMD stretch (zero mean weighted error) or a pure 3:1.

>>What wouldn't count as ad hoc?
>
> A psychoacoustically-derived weigthing (like Tenney height) that
> puts so much weight on 2 it almost never gets tempered.

I don't have anything like that for pure or impure octaves.

>>The approximate error formulas are
>>Equations 50 and 51 under "Approximations".  Then I give them in terms
>>of weighted errors.  Like with NOT, you know the error in the octaves
>>is always zero when you leave them pure.  But they approximate the
>>true TOP errors better.  And I prove that.  There's no logic other
>>than them being approximations.
>
> I don't know what an "approximation" is or why the heck you seem to
> think something with pure octaves has lower TOP damage than TOP,
> but I'm getting tired of you referring me to your paper instead of
> making clear statements here.

If you don't know what an approximation is, that's a big problem when
you're reading a section about approximations, isn't it? Wikipedia
defines them well:

http://en.wikipedia.org/wiki/Approximation

See also:

http://en.wikipedia.org/wiki/Orders_of_approximation

The octave equivalent errors are first order approximations to the TOP
errors in the mathematical sense. As TOP-RMS error is the square root
of a quadratic quantity, the STD error is also quadratic. So you
could call it a second order approximation to the square error.

>>Equation 56 gives you the generator for a rank 2 temperament in terms
>>of two weighted mappings.  That's the equivalent of Gene's Maple
>>function.  You don't need any other equations to understand it.
>
> I have no idea what any of those variables are.  So I took a
> screenshot so I could look at it and the glossary at the back
> at the same time (convenient!).  Looks like Gopt is the generator
> size.  Too bad for a rank 2 temperament, there are two generators.
> The Ms are weighted tuning maps, where the usual row/column
> formalism is conveniently reversed.  I have no idea what the 00,
> 0, or 1 subscripts mean.  Or how to parse the final ratio.
> Looks like the STD of the product of M0 and M1 over the product
> of M00 and the squared STD of M1, but since there are at least
> five levels of text in the denominator alone, I won't say I'm sure.

What happened to the sheet of paper you used to write down the new
symbols you came across? Can you understand other mathematical
articles without doing that?

It's not Gopt, it's g_opt. G would be a vector quantity, so a set of
generators. Yes, it's the generator for a rank 2 temperament. I say
that in the paragraph that's still quoted above. We sometimes talk
about "the generator" for a rank 2 temperament. It's the one
orthogonal to the octave. I define that in the background paper:

http://x31eq.com/paradigm.html#keygen

You don't need to know what the 00 subscript means because I define
M_{00} at the start of the section and again after it's introduced in
Equation 54. If you don't know what the single subscripts mean,
you're in trouble, because everything to do with rank 2 temperaments
uses them. The two mappings are defined at the top of section 2.5
"Rank 2 Temperaments". How can you expect to follow the equations for
rank 2 temperaments without understanding that?

What usual row/column formalism?

It's not the STD of the product of M_0 and M_1. It's the covariance
of M_0 and M_1. This is standard mathematical formalism. It's still
in Mathworld, although they seem to be phasing it out:

http://mathworld.wolfram.com/Covariance.html

I tell you the equation I defined it in, and when that's introduced
there's a citation. And if you still don't know what it means, look
at the equation before (56) which is the same thing in terms of
element-wise means.
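
In element-wise terms that is simply cov(x, y) = mean(x*y) - mean(x)*mean(y): the mean of the products minus the product of the means.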

What have levels of text got to do with anything?

Note: if you want a single tuning parameter that doesn't depend on
octaves, see Equation 124 on page 32. It's in an appendix because I
was trying to take octave-equivalent stuff out of the body. The W_0
and W_1 are weighted tuning maps. They're defined according to
octaves, but as we're talking about in this thread, you can use other
standard scale stretches if you like. Equation 119 tells you how to
get the weighted tuning map for a rank 2 temperament from them. If
you start with ZMD tunings you'll get a rank 2 ZMD tuning.

Graham

🔗Carl Lumma <carl@lumma.org>

5/4/2010 12:13:07 AM

Graham wrote:

>>>> Ok, this sounds like you first temper the octaves along with
>>>> everything else normally and then stretch the scale pure.
>>>> A completely ad hoc operation.
>>>
>>>Yes.
>>
>> ...which treats prime 2 specially.
>
>Yes, if you want to treat prime 2 specially. But you could restretch
>them to get a ZMD stretch (zero mean weighted error) or a pure 3:1.

I can stretch any scale any way I like! Or, I could use an
optimization based on a psychoacoustically-valid weighted error.

>>>What wouldn't count as ad hoc?
>>
>>> A psychoacoustically-derived weighting (like Tenney height) that
>> puts so much weight on 2 it almost never gets tempered.
>
>I don't have anything like that for pure or impure octaves.

You couldn't have one for impure octaves, could you, since it would
make them pure. Or are you hinting that you don't think Tenney height
is psychoacoustically valid? For sure it's the most valid weighting
you're gonna find that works over prime limits. For the record, let's
remember that it

* has deep connections with harmonic entropy ('Farey in Tenney out,
Mann in Tenney out, Tenney in Tenney out')

* explained results of a blind listening test on tetrads of complex
tones, which was about as large/rigorous as the average campus
psychoacoustics study on dyads of sine tones

* smashed all other prime-factor methods (digestibility, etc.) on
unblinded tests

* represents the period of a chord's composite waveform

* has been recommended by Galileo, Tenney, and Denny Genovese

>It's not Gopt, it's g_opt. G would be a vector quantity, so a set of
>generators. Yes, it's the generator for a rank 2 temperament. I say
>that in the paragraph that's still quoted above. We sometimes talk
>about "the generator" for a rank 2 temperament. It's the one
>orthogonal to the octave.

If there is an octave... which is why I think it's bad terminology.
There's nothing I know of preventing the octave being mapped by
both generators. Just that badness will tend to favor temperaments
that map 2 quickly. ... Hm, I may be wrong about this, since I
don't see a single exception in my rank 2 list. Perhaps there's a
way to guarantee you can always find a map such that one val starts
with zero?

But anyway, it's still bad terminology for no-2s temperaments.

>You don't need to know what the 00 subscript means because I define
>M_{00} at the start of the section and again after it's introduced in
>Equation 54.

Hey, you said equation 56 was self-contained!

>What usual row/column formalism?

Generators in rows, primes in columns.

>I tell you the equation I defined it in, and when that's introduced
>there's a citation. And if you still don't know what it means, look
>at the equation before (56) which is the same thing in terms of
>element-wise means.

This is an awful lot of work for something that is apparently
incredibly simple.

>What have levels of text got to do with anything?

Parsability. You've probably noticed I'm not an astute reader of
mathematical papers. I am, however, one of the 3.5 humans who have
attempted to read yours.

>Note: if you want a single tuning parameter that doesn't depend on
>octaves, see Equation 124 on page 32.

I can't imagine what it might mean for a parameter to depend on
octaves. The generator sizes depend on the size of the octave.
The error depends on the size of the octave...

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/4/2010 12:29:34 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> Hm, I may be wrong about this, since I
> don't see a single exception in my rank 2 list. Perhaps there's a
> way to guarantee you can always find a map such that one val starts
> with zero?

Certainly.
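
One standard way to see it: the extended Euclidean algorithm on the first entries of the two vals always gives an integer change of basis in which the second val starts with 0. A sketch, with the example vals chosen arbitrarily:

def period_generator_form(val1, val2):
    # combine two vals into a period map, whose first entry is the gcd of
    # the originals' first entries, and a generator map starting with 0
    a, b = val1[0], val2[0]
    # extended Euclid: find x0, y0 with x0*a + y0*b = gcd(a, b)
    x0, x1, y0, y1, r0, r1 = 1, 0, 0, 1, a, b
    while r1:
        q = r0 // r1
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
        r0, r1 = r1, r0 - q * r1
    g = r0
    period = [x0 * u + y0 * v for u, v in zip(val1, val2)]
    generator = [(b // g) * u - (a // g) * v for u, v in zip(val1, val2)]
    # the change of basis matrix has determinant -1, so the pair defines
    # the same temperament as the original vals
    return period, generator

# period_generator_form([12, 19, 28], [19, 30, 44])
# -> ([1, 2, 4], [0, 1, 4]), a period/generator mapping for meantone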

🔗Carl Lumma <carl@lumma.org>

5/4/2010 12:45:27 AM

Gene wrote:
>> Hm, I may be wrong about this, since I
>> don't see a single exception in my rank 2 list. Perhaps there's a
>> way to guarantee you can always find a map such that one val starts
>> with zero?
>
>Certainly.

I may have even known this once.

While we normally think of maps that wedge to a single wedgie as
equivalent, they have musical implications that may differentiate
them. People are liable to use them on the axes of a generalized
keyboard, for instance. The 'fingering complexity' of the various
primes is affected. So if folks have been doing this just so one
generator comes out close to an octave, it may be worth asking if
that's really the best way of doing it. I had thought Hermite normal
form was the recommended method for finding maps... does it always
give an octave val?

I think Herman has thought about this kind of thing... it may have
come up with Gene's Bosanquet lattices also.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/4/2010 1:13:23 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> I had thought Hermite normal
> form was the recommended method for finding maps... does it always
> give an octave val?

It gives a generator and period pair.

🔗Carl Lumma <carl@lumma.org>

5/4/2010 1:31:56 AM

>> I had thought Hermite normal form was the recommended method for
>> finding maps... does it always give an octave val?
>
>It gives a generator and period pair.

I think that's a way of saying one val starts with a zero.

-Carl

🔗Graham Breed <gbreed@gmail.com>

5/4/2010 4:50:45 AM

On 4 May 2010 12:31, Carl Lumma <carl@lumma.org> wrote:
>>> I had thought Hermite normal form was the recommended method for
>>> finding maps... does it always give an octave val?
>>
>>It gives a generator and period pair.
>
> I think that's a way of saying one val starts with a zero.

It does. You may get two vals starting with zeros, but only for a
very strange temperament. It'd mean octaves were tempered out.

Graham

🔗Graham Breed <gbreed@gmail.com>

5/4/2010 11:52:45 PM

On 4 May 2010 11:13, Carl Lumma <carl@lumma.org> wrote:
> Graham wrote:

>>Yes, if you want to treat prime 2 specially.  But you could restretch
>>them to get a ZMD stretch (zero mean weighted error) or a pure 3:1.
>
> I can stretch any scale any way I like!  Or, I could use an
> optimization based on a psychoacoustically-valid weighted error.

Yes!

>>>>What wouldn't count as ad hoc?
>>>
>>> A psychoacoustically-derived weighting (like Tenney height) that
>>> puts so much weight on 2 it almost never gets tempered.
>>
>>I don't have anything like that for pure or impure octaves.
>
> You couldn't have one for impure octaves, could you, since it would
> make them pure.  Or are you hinting that you don't think Tenney height
> is psychoacoustically valid?  For sure it's the most valid weighting
> you're gonna find that works over prime limits.  For the record, let's
> remember that it

I said Tenney height isn't psychoacoustically derived and that's what
I meant. Don't put other words into my mouth and then attack me for
it. Your references are lousy. You haven't given any citations to
the psychoacoustic literature, and if you had they wouldn't convince
me because I probably wouldn't be able to understand them. So let's
leave psychoacoustics out of this.

The point is that the STD error is an extremely good approximation to
the TOP-RMS error. You can see that in Tables 5, 8, and 11 of
primerr.pdf. That's what it was supposed to do. Given all the
ad-hoccery behind the TOP-RMS, it's an approximation that we can say
is practically indistinguishable from the original.

And it really is simpler. Compare Equation 60 to Equation 38. Note
that Equation 38 is more numerically stable. Compare Equations 49 and
56, which are also measuring the same things. You can work out the
figures if you like and you'll see that the one is a very good
approximation of the other. It gets that close because there's real
logic behind it.

>>It's not Gopt, it's g_opt.  G would be a vector quantity, so a set of
>>generators.  Yes, it's the generator for a rank 2 temperament.  I say
>>that in the paragraph that's still quoted above.  We sometimes talk
>>about "the generator" for a rank 2 temperament.  It's the one
>>orthogonal to the octave.
>
> If there is an octave... which is why I think it's bad terminology.
> There's nothing I know of preventing the octave being mapped by
> both generators.  Just that badness will tend to favor temperaments
> that map 2 quickly. ...  Hm, I may be wrong about this, since I
> don't see a single exception in my rank 2 list.  Perhaps there's a
> way to guarantee you can always find a map such that one val starts
> with zero?

If there isn't an octave, you have to define another equivalence
interval. That's old news.

The definition of the period prevents the octave (or equivalence
interval) being mapped by both intervals. The definition of the
single generator of a rank 2 temperament ensures that it isn't mapped
by the octave. Badness should not depend on the choice of generators.
Yes, you do seem to be wrong, as I hope you've realized from the rest
of the thread.

> But anyway, it's still bad terminology for no-2s temperaments.

So don't use it for no-2s temperaments!

>>You don't need to know what the 00 subscript means because I define
>>M_{00} at the start of the section and again after it's introduced in
>>Equation 54.
>
> Hey, you said equation 56 was self-contained!

Where? I introduced it as follows:

"""
Equation 56 gives you the generator for a rank 2 temperament in terms
of two weighted mappings. That's the equivalent of Gene's Maple
function. You don't need any other equations to understand it. It
depends on octaves being tempered pure, but only so that M_{00} works
as the number of periods to an octave. You can replace M_{00} with
the true number of periods to a 2:1.
"""

You then come back and tell me you don't know what it's a function of,
what it gives, and what M_{00} is. That really makes it look like you
aren't reading what I write. How else am I supposed to make it
clearer for you?

>>What usual row/column formalism?
>
> Generators in rows, primes in columns.

Which means what? As far as I can tell, all the matrix equations are
consistent in that PDF. Fortunately the section we're talking about
doesn't require matrices so this is irrelevant.

>>I tell you the equation I defined it in, and when that's introduced
>>there's a citation.  And if you still don't know what it means, look
>>at the equation before (56) which is the same thing in terms of
>>element-wise means.
>
> This is an awful lot of work for something that is apparently
> incredibly simple.

Who says it's incredibly simple? It's exactly as complicated as it
needs to be. And it's written in standard notation. Of course you'll
need to do a bit of reading if you don't already know that. No
statement in any language can be meaningful if you don't know the
language.

>>What have levels of text got to do with anything?
>
> Parsability.  You've probably noticed I'm not an astute reader of
> mathematical papers.  I am, however, one of the 3.5 humans who have
> attempted to read yours.

Complaining at this stage won't make any difference. If you had a
problem back in 2007 you could have said so and I'd have considered
revising it. You seem to be able to parse it fine, you just don't read
the definitions, or follow the hints. Really, how am I supposed to
make it easier if you don't read what I write? You missed the
definitions I gave you in the e-mail. You missed the word
"covariance" which is the fourth one after the equation. Everything
you needed to understand it was there.

>>Note: if you want a single tuning parameter that doesn't depend on
>>octaves, see Equation 124 on page 32.
>
> I can't imagine what it might mean for a parameter to depend on
> octaves.  The generator sizes depend on the size of the octave.
> The error depends on the size of the octave...

The generator size is a function of the weighted mappings. If that
function requires you to distinguish between the period and generator
mappings, then it depends on octaves.

The ratio of the period to the generator doesn't depend on the size of
the octave. The normalized errors don't depend on the size of the
octave.

Graham

🔗Carl Lumma <carl@lumma.org>

5/5/2010 1:09:25 AM

Graham wrote:

>I said Tenney height isn't psychoacoustically derived and that's what
>I meant.

But it was. Certainly Genovese and Erlich independently derived it
from psychoacoustics considerations. You may not like the references
but they're about as good as we've got, including the published
sensory dissonance literature.

>If there isn't an octave, you have to define another equivalence
>interval. That's old news.

Would you offer tunings with it capriciously stretched pure?
;)

>The definition of the period prevents the octave (or equivalence
>interval) being mapped by both intervals.

Why care if it is?

>Badness should not depend on the choice of generators.
>Yes, you do seem to be wrong, as I hope you've realized from the
>rest of the thread.

Did I say badness depends on the choice of generators?

>> But anyway, it's still bad terminology for no-2s temperaments.
>
>So don't use it for no-2s temperaments!

Quite a lot of the music output of the community lately has been in
no-2s temperaments. They had a symposium on BP about a month ago.

>Really, how am I supposed to
>make it easier if you don't read what I write? You missed the
>definitions I gave you in the e-mail. You missed the word
>"covariance" which is the fourth one after the equation. Everything
>you needed to understand it was there.

I'm not sure what you think I'm trying to understand, but it isn't
obvious any of this stuff is relevant. I'm not sure why I should
read 60 pages of PDF to understand why you're talking about
covariance, for example.

>The ratio of the period to the generator doesn't depend on the size of
>the octave.

So what? Do you intend to optimize it and then let people pick
whatever octave size they like?

>The normalized errors don't depend on the size of the octave.

I know about error and I know about weighted error. What's
normalized error, and why do you bring it up?

Your papers are apparently a major accomplishment but they definitely
take a nonobvious approach to some things. Take "octave stretch"
for example. I find it a bizarre concept. You go on about it at
length, admonishing readers what kinds of optimizations they can and
can't do with or without doing something about the octave stretch in
which order. It's weird! Why is Gene's version of this 5 lines
of code?

-Carl

🔗Graham Breed <gbreed@gmail.com>

5/6/2010 7:42:29 AM

On 05/05/2010, Carl Lumma <carl@lumma.org> wrote:
> Graham wrote:
>
>>I said Tenney height isn't psychoacoustically derived and that's what
>>I meant.
>
> But it was. Certainly Genovese and Erlich independently derived it
> from psychoacoustics considerations. You may not like the references
> but they're about as good as we've got, including the published
> sensory dissonance literature.

Neither are psychoacousticians that I'm aware of. Yes, the references
aren't good.

>>If there isn't an octave, you have to define another equivalence
>>interval. That's old news.
>
> Would you offer tunings with it capriciously stretched pure?
> ;)

What does that mean? Why not?

>>The definition of the period prevents the octave (or equivalence
>>interval) being mapped by both intervals.
>
> Why care if it is?

Because then you haven't identified the period correctly. What sense
would it make to call something a period that isn't the period?

>>Badness should not depend on the choice of generators.
>>Yes, you do seem to be wrong, as I hope you've realized from the
>>rest of the thread.
>
> Did I say badness depends on the choice of generators?

I don't know. Google's in simple mode today, so I can't check the history.

Of course, complexity can depend on the generator choice. Some
measures are functions of the octave-equivalent generator mapping. I
forgot about that before. So maybe that's what you were talking
about, whatever it was. These are the simplest kind of complexities,
and tie into the octave equivalent error measures, which is another
reason for them being in my PDF.

>>> But anyway, it's still bad terminology for no-2s temperaments.
>>
>>So don't use it for no-2s temperaments!
>
> Quite a lot of the music output of the community lately has been in
> no-2s temperaments. They had a symposium on BP about a month ago.

What's that got to do with periods and generators?

>>Really, how am I supposed to
>>make it easier if you don't read what I write? You missed the
>>definitions I gave you in the e-mail. You missed the word
>>"covariance" which is the fourth one after the equation. Everything
>>you needed to understand it was there.
>
> I'm not sure what you think I'm trying to understand, but it isn't
> obvious any of this stuff is relevant. I'm not sure why I should
> read 60 pages of PDF to understand why you're talking about
> covariance, for example.

Relevant to what? Where did you get 60 pages of PDF from? You asked
about these tunings, from what I remember. Or made a false statement
that I tried to correct.

>>The ratio of the period to the generator doesn't depend on the size of
>>the octave.
>
> So what? Do you intend to optimize it and then let people pick
> whatever octave size they like?

Yes.

>>The normalized errors don't depend on the size of the octave.
>
> I know about error and I know about weighted error. What's
> normalized error, and why do you bring it up?

Normalized errors are where the factors of M or w balance on the top
and bottom of the formula, so you get the same result regardless of
the octave stretch. I brought it up because you asked how something
couldn't depend on the size of the octave.
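
(As a concrete illustration, a two-line sketch with Python and numpy
assumed: a ratio like std(w)/mean(w) comes out the same whatever
stretch you apply to the weighted tuning map w.)

import numpy as np

w = np.array([12, 19, 28, 34]) / (12 * np.log2([2, 3, 5, 7]))  # weighted tuning map
for stretch in (1.0, 1.002, 0.998):
    print(np.std(stretch * w) / np.mean(stretch * w))  # identical each time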

> Your papers are apparently a major accomplishment but they definitely
> take a nonobvious approach to some things. Take "octave stretch"
> for example. I find it a bizarre concept. You go on about it at
> length, admonishing readers what kinds of optimizations they can and
> can't do with or without doing something about the octave stretch in
> which order. It's weird! Why is Gene's version of this 5 lines
> of code?

You don't like octave stretch now? You were arguing in favor of it
before. Gene's code for what?

Anyway, as you mention code, I do have some more Scheme for you. I
can't log in to Yahoo! Groups today, and I don't really trust this
machine to connect to my website, so I'll paste it in here.

Here are the methods for TOP-RMS generators. You can add them to
regular.scm somewhere.

; first generator in cents with optimal tuning
(define (lt.g1 lt)
  (let* ((var1 (et.var (lt.et1 lt)))
         (var2 (et.var (lt.et2 lt)))
         (mean1 (et.mean (lt.et1 lt)))
         (mean2 (et.mean (lt.et2 lt)))
         (mw1 (+ 1 mean1))
         (mw2 (+ 1 mean2))
         (mw12 (+ 1 (lt.cov12 lt) (* mean1 mean2) mean1 mean2))
         (mw1sq (+ 1 var1 (* mean1 mean1) (* 2 mean1)))
         (mw2sq (+ 1 var2 (* mean2 mean2) (* 2 mean2))))
    (* 1200
       (/ (- (* mw1 mw2sq) (* mw2 mw12))
          (- (* mw1sq mw2sq) (* mw12 mw12))
          (car (et.mapping (lt.et1 lt)))))))

; second generator in cents with optimal tuning
(define (lt.g2 lt)
  (let* ((var1 (et.var (lt.et1 lt)))
         (var2 (et.var (lt.et2 lt)))
         (mean1 (et.mean (lt.et1 lt)))
         (mean2 (et.mean (lt.et2 lt)))
         (mw1 (+ 1 mean1))
         (mw2 (+ 1 mean2))
         (mw12 (+ 1 (lt.cov12 lt) (* mean1 mean2) mean1 mean2))
         (mw1sq (+ 1 var1 (* mean1 mean1) (* 2 mean1)))
         (mw2sq (+ 1 var2 (* mean2 mean2) (* 2 mean2))))
    (* 1200
       (/ (- (* mw2 mw1sq) (* mw1 mw12))
          (- (* mw1sq mw2sq) (* mw12 mw12))
          (car (et.mapping (lt.et2 lt)))))))

They're very long, like with the error formula in that file, because
everything's set up for the octave-equivalent measures and I have to
convert back to the things that would have been easier to calculate
directly. So this isn't a good way of comparing the simplicity of the
formulas. You have to ignore all the let* stuff.

Here's an example of using it:

(load "regular.scm")
(define h31 (et.new '(31 49 72 87) (limit 7)))
(define h19 (et.new '(19 30 44 53) (limit 7)))
(define meantone (lt.new h19 h31 (limit 7)))
(lt.g1 meantone)
(lt.g2 meantone)

In Guile, that gives me the two generators.

Here's the equivalent code to add to regular_oe.scm:

; Equation 124 of primerr.pdf
(define (lt.tuning lt)
  (let ((var1 (et.var (lt.et1 lt)))
        (var2 (et.var (lt.et2 lt)))
        (cov12 (lt.cov12 lt)))
    (/ (- var2 cov12) (+ var1 var2 (* -2 cov12)))))

; generators in cents with optimal tuning
(define (lt.g1 lt)
  (* 1200 (/ (lt.tuning lt) (car (et.mapping (lt.et1 lt))))))
(define (lt.g2 lt)
  (* 1200 (/ (- 1 (lt.tuning lt)) (car (et.mapping (lt.et2 lt))))))

You can use it the same way as regular.scm and it'll give the
octave-equivalent generators.

Here's a re-scaling to get approximations to the TOP-RMS generators:

; approximation of TOP-RMS stretch; not even the true ZMD stretch
(define (lt.zmd-stretch lt)
  (- 1 (* (lt.tuning lt) (et.mean (lt.et1 lt)))
     (* (- 1 (lt.tuning lt)) (et.mean (lt.et2 lt)))))

; ad hoc approximations of the TOP-RMS generators
(define (lt.g1-zmd lt)
  (* (lt.g1 lt) (lt.zmd-stretch lt)))
(define (lt.g2-zmd lt)
  (* (lt.g2 lt) (lt.zmd-stretch lt)))

Here's an example of using them:

(load "regular_oe.scm")
(define h31 (et.new '(31 49 72 87) (limit 7)))
(define h19 (et.new '(19 30 44 53) (limit 7)))
(define meantone (lt.new h19 h31 (limit 7)))
(lt.g1-zmd meantone)
(lt.g2-zmd meantone)

In reality, you'd maybe save 1 or 2 lines of code by the STD-ZMD
method. But I think a fair comparison of the code above would
actually show TOP-RMS simpler. Maybe if you wrote everything to use
ZMD stretches all the way through it'd be simpler, because you
wouldn't need to distinguish variances from mean-squareds and
covariances from mean-products. I didn't find this when I looked at
ZMD stretches before because I wasn't bothered with the generators.
It still can't save you much because the true TOP-RMS search isn't
that complicated. There may be a use for it. Of course, it's
something I took out of primerr.pdf because I didn't think it was
interesting enough, and now it's come up in discussion. So you never
know.

Graham

🔗Carl Lumma <carl@lumma.org>

5/6/2010 10:35:27 AM

Graham wrote:

>>>If there isn't an octave, you have to define another equivalence
>>>interval. That's old news.
>>
>> Would you offer tunings with it capriciously stretched pure?
>> ;)
>
>What does that mean? Why not?

Because the optimization should be based on a psychoacoustically-
sound weighting. We really are going in circles, aren't we?

>>>The definition of the period prevents the octave (or equivalence
>>>interval) being mapped by both intervals.
>>
>> Why care if it is?
>
>Because then you haven't identified the period correctly. What sense
>would it make to call something a period that isn't the period?

Does "period" mean anything other than "the larger generator"?

>>>> But anyway, it's still bad terminology for no-2s temperaments.
>>>
>>>So don't use it for no-2s temperaments!
>>
>> Quite a lot of the music output of the community lately has been in
>> no-2s temperaments. They had a symposium on BP about a month ago.
>
>What's that got to do with periods and generators?

Sorry, I missed the "it for" in the >>> quoted sentence.

>>>Really, how am I supposed to
>>>make it easier if you don't read what I write? You missed the
>>>definitions I gave you in the e-mail. You missed the word
>>>"covariance" which is the fourth one after the equation. Everything
>>>you needed to understand it was there.
>>
>> I'm not sure what you think I'm trying to understand, but it isn't
>> obvious any of this stuff is relevant. I'm not sure why I should
>> read 60 pages of PDF to understand why you're talking about
>> covariance, for example.
>
>Relevant to what? Where did you get 60 pages of PDF from?

Primerr is 36 pages and composite is 24. That happens to make
exactly 60.

>>>The ratio of the period to the generator doesn't depend on the size of
>>>the octave.
>>
>> So what? Do you intend to optimize it and then let people pick
>> whatever octave size they like?
>
>Yes.

If you also tell them the optimal octave size, then they'd have
everything they need.

>> Your papers are apparently a major accomplishment but they definitely
>> take a nonobvious approach to some things. Take "octave stretch"
>> for example. I find it a bizarre concept. You go on about it at
>> length, admonishing readers what kinds of optimizations they can and
>> can't do with or without doing something about the octave stretch in
>> which order. It's weird! Why is Gene's version of this 5 lines
>> of code?
>
>You don't like octave stretch now? You were arguing in favor of it
>before. Gene's code for what?

Pulling it out as a parameter is what I find odd.

No, I've been arguing in favor of tempered octaves for many years.
In fact, I'm the guy who put the bug under Paul's skin. I also happen
to be the first guy to ask if a TOP-like approach could be made to
minimize RMS error instead of max error.

>Anyway, as you mention code, I do have some more Scheme for you. I
>can't log in to Yahoo! Groups today, and I don't really trust this
>machine to connect to my website, so I'll past it in here.
>
>Here are the methods for TOP-RMS generators. You can add them to
>regular.scm somewhere.
>
>; first generator in cents with optimal tuning
>(define (lt.g1 lt)
> (let* ((var1 (et.var (lt.et1 lt)))
> (var2 (et.var (lt.et2 lt)))
> (mean1 (et.mean (lt.et1 lt)))
> (mean2 (et.mean (lt.et2 lt)))
> (mw1 (+ 1 mean1))
> (mw2 (+ 1 mean2))
> (mw12 (+ 1 (lt.cov12 lt) (* mean1 mean2) mean1 mean2))
> (mw1sq (+ 1 var1 (* mean1 mean1) (* 2 mean1)))
> (mw2sq (+ 1 var2 (* mean2 mean2) (* 2 mean2))))
> (* 1200
> (/ (- (* mw1 mw2sq) (* mw2 mw12))
> (- (* mw1sq mw2sq) (* mw12 mw12))
> (car (et.mapping (lt.et1 lt)))))))
>
>; second generator in cents with optimal tuning
>(define (lt.g2 lt)
> (let* ((var1 (et.var (lt.et1 lt)))
> (var2 (et.var (lt.et2 lt)))
> (mean1 (et.mean (lt.et1 lt)))
> (mean2 (et.mean (lt.et2 lt)))
> (mw1 (+ 1 mean1))
> (mw2 (+ 1 mean2))
> (mw12 (+ 1 (lt.cov12 lt) (* mean1 mean2) mean1 mean2))
> (mw1sq (+ 1 var1 (* mean1 mean1) (* 2 mean1)))
> (mw2sq (+ 1 var2 (* mean2 mean2) (* 2 mean2))))
> (* 1200
> (/ (- (* mw2 mw1sq) (* mw1 mw12))
> (- (* mw1sq mw2sq) (* mw12 mw12))
> (car (et.mapping (lt.et2 lt)))))))
>
>They're very long, like with the error formula in that file, because
>everything's set up for the octave-equivalent measures and I have to
>convert back to the things that would have been easier to calculate
>directly. So this isn't a good way of comparing the simplicity of the
>formulas. You have to ignore all the let* stuff.

Thanks. I can't ignore the lets because they define the variables
below. And in the lets, it looks like you're using some objects
defined elsewhere (et.var, lt.et1, et.mean, etc.).

I did actually have a look at regular.scm recently but it seemed
impenetrable. Then again, I suck at reading other people's code
and it's generally a hard thing to do and as you say, you may be
supporting a history. The above looks completely straightforward
though, so maybe I should have another look for those object
definitions.

>Here's an example of using it:
>
>(load "regular.scm")
>(define h31 (et.new '(31 49 72 87) (limit 7)))
>(define h19 (et.new '(19 30 44 53) (limit 7)))
>(define meantone (lt.new h19 h31 (limit 7)))
>(lt.g1 meantone)
>(lt.g2 meantone)
>
>In Guile, that gives me the two generators.
>
>Here's the equivalent code to add to regular_oe.scm:
>
>; Equation 124 of primerr.pdf
>(define (lt.tuning lt)
> (let ((var1 (et.var (lt.et1 lt)))
> (var2 (et.var (lt.et2 lt)))
> (cov12 (lt.cov12 lt)))
> (/ (- var2 cov12) (+ var1 var2 (* -2 cov12)))))
>
>; generators in cents with optimal tuning
>(define (lt.g1 lt)
> (* 1200 (/ (lt.tuning lt) (car (et.mapping (lt.et1 lt))))))
>(define (lt.g2 lt)
> (* 1200 (/ (- 1 (lt.tuning lt)) (car (et.mapping (lt.et2 lt))))))
>
>You can use it the same way as regular.scm and it'll give the octave
>eqivalent generators.
>
>Here's a re-scaling to get approximations to the TOP-RMS generators:
>
>; approximation of TOP-RMS stretch; not even the true ZMD stretch
>(define (lt.zmd-stretch lt)
> (- 1 (* (lt.tuning lt) (et.mean (lt.et1 lt)))
> (* (- 1 (lt.tuning lt)) (et.mean (lt.et2 lt)))))
>
>; ad hoc approximations of the TOP-RMS generators
>(define (lt.g1-zmd lt)
> (* (lt.g1 lt) (lt.zmd-stretch lt)))
>(define (lt.g2-zmd lt)
> (* (lt.g2 lt) (lt.zmd-stretch lt)))
>
>Here's an example of using them:
>
>(load "regular_oe.scm")
>(define h31 (et.new '(31 49 72 87) (limit 7)))
>(define h19 (et.new '(19 30 44 53) (limit 7)))
>(define meantone (lt.new h19 h31 (limit 7)))
>(lt.g1-zmd meantone)
>(lt.g2-zmd meantone)
>
>In reality, you'd maybe save 1 or 2 lines of code by the STD-ZMD
>method. But I think a fair comparison of the code above would
>actually show TOP-RMS simpler. Maybe if you wrote everything to use
>ZMD stretches all the way through it'd be simpler, because you
>wouldn't need to distinguish variances from mean-squareds and
>covariances from mean-products.

Sounds complicated. The function just needs to take a map (two
vals) and a prime basis and return two points in tuning space
(the generators). Gene's procedure, which you said is equivalent
to ZMD, simply weights the vals and divides through by a least-
squares somethingorother. If I had maple's solve I could
schemify it in 2 minutes. Perhaps your variances and such are an
alternative to solving his derivative, in which case I'd be
interested to learn more.

-Carl

🔗Graham Breed <gbreed@gmail.com>

5/7/2010 10:34:50 AM

On 6 May 2010 21:35, Carl Lumma <carl@lumma.org> wrote:
> Graham wrote:
>
>>>>If there isn't an octave, you have to define another equivalence
>>>>interval.  That's old news.
>>>
>>> Would you offer tunings with it capriciously stretched pure?
>>> ;)
>>
>>What does that mean?  Why not?
>
> Because the optimization should be based on a psychoacoustically-
> sound weighting.  We really are going in circles, aren't we?

Why can't I use whatever tuning I want, provided my instrument can handle it?

> Does "period" mean anything other than "the larger generator"?

It's the generator that equally divides the equivalence interval.

>>> I'm not sure what you think I'm trying to understand, but it isn't
>>> obvious any of this stuff is relevant.  I'm not sure why I should
>>> read 60 pages of PDF to understand why you're talking about
>>> covariance, for example.
>>
>>Relevant to what?  Where did you get 60 pages of PDF from?
>
> Primerr is 36 pages and composite is 24.  That happens to make
> exactly 60.

You don't need composite to understand covariances. And from primerr,
you don't need the sections on minimax errors, octave equivalence,
complexity, wedgies, or the appendices (that I remember). That cuts
it down quite a bit.

>>>>The ratio of the period to the generator doesn't depend on the size of
>>>>the octave.
>>>
>>> So what?  Do you intend to optimize it and then let people pick
>>> whatever octave size they like?
>>
>>Yes.
>
> If you also tell them the optimal octave size, then they'd have
> everything they need.

What's this "if"? I do tell them exactly that. Which is fine if
that's what they wanted. What if they were looking for pure octaves?

>>You don't like octave stretch now?  You were arguing in favor of it
>>before.  Gene's code for what?
>
> Pulling it out as a parameter is what I find odd.

The character of a temperament doesn't change much as you change the
octave stretch. Having the generator/period size and scale stretch as
parameters makes a lot of sense to me. I wish I'd written MIDI Relay
like that. It was difficult to experiment with impure octaves because
every time you changed the octave size you had to adjust the generator
to match.

> No, I've been arguing in favor of tempered octaves for many years.
> In fact, I'm the guy who put the bug under Paul's skin.  I also happen
> to be the first guy to ask if a TOP-like approach could be made to
> minimize RMS error instead of max error.

How do you know? I was talking about it around 1998.

>>They're very long, like with the error formula in that file, because
>>everything's set up for the octave-equivalent measures and I have to
>>convert back to the things that would have been easier to calculate
>>directly.  So this isn't a good way of comparing the simplicity of the
>>formulas.  You have to ignore all the let* stuff.
>
> Thanks.  I can't ignore the lets because they define the variables
> below.  And in the lets, it looks like you're using some objects
> defined elsewhere (et.var, lt.et1, et.mean, etc.).

You can ignore them for comparing the simplicity of the formulas.

> I did actually have a look at regular.scm recently but it seemed
> impenetrable.  Then again, I suck at reading other people's code
> and it's generally a hard thing to do and as you say, you may be
> supporting a history.  The above looks completely straightforward
> though, so maybe I should have another look for those object
> definitions.

One thing about the Scheme is that it's optimized for execution speed,
as well as floating point stability. And it's mostly about the
searches, not about finding the generators. The means and variances are
stored in the equal temperament objects because they can be reused,
which makes it faster.

There's all new code in parametric.py as well, if you have a recent
source bundle. I forget exactly how it works, though. Oh, no
generator sizes :P That's in regular.py. The most general case is
extremely simple because I use a library.

>>In reality, you'd maybe save 1 or 2 lines of code by the STD-ZMD
>>method.  But I think a fair comparison of the code above would
>>actually show TOP-RMS simpler.  Maybe if you wrote everything to use
>>ZMD stretches all the way through it'd be simpler, because you
>>wouldn't need to distinguish variances from mean-squareds and
>>covariances from mean-products.
>
> Sounds complicated.  The function just needs to take a map (two
> vals) and a prime basis and return two points in tuning space
> (the generators).  Gene's procedure, which you said is equivalent
> to ZMD, simply weights the vals and divides through by a least-
> squares somethingorother.  If I had maple's solve I could
> schemify it in 2 minutes.  Perhaps your variances and such are an
> alternative to solving his derivative, in which case I'd be
> interested to learn more.

If that's all you need, yes. You take the maps, weight them by the
prime basis, and calculate the variances and covariance. Are they
standalone functions in the Scheme? I'll check tonight. Then you use
the functions I gave you. My Scheme's rusty so I'm not sure about how
to do the weighting, but I must have worked it out before.

Actually, I don't think the TOP-RMS generators need variance or
covariance. They use the straight mean, mean-squared, and
mean-products. That's why I had to do extra work to reverse engineer
these from the (co)variances.

Here we go:

(mean1 (et.mean (lt.et1 lt)))
(mean2 (et.mean (lt.et2 lt)))

Weighted means of errors of ETs.

(mw1 (+ 1 mean1))
(mw2 (+ 1 mean2))

Weighted tuning maps of ETs: weighted vals divided by the numbers of
steps to the octave (don't panic: no sneaky octave equivalence).

(mw12 (+ 1 (lt.cov12 lt) (* mean1 mean2) mean1 mean2))

Mean of the element-wise products of the two weighted tuning maps.
That is, like a mean-squared, but with two different lists.

(mw1sq (+ 1 var1 (* mean1 mean1) (* 2 mean1)))
(mw2sq (+ 1 var2 (* mean2 mean2) (* 2 mean2))))

Mean squareds of the weighted tuning maps.

Gene's procedure does something with the ZMD, and then uses a library
to get the solution. You could do the same thing with the same
library and forget the ZMD. It all depends on having the library.
What I gave you, which is also in Python as well as a chunk of
primerr.pdf, is indeed about not needing the library for ranks 1 and
2. Also about going straight to the error, or even badness, without
ever calculating the generators, which is convenient for the searches.
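
(For comparison, here is what the library route looks like as a sketch
in Python with numpy, not the code from regular.py, just an
illustration: weight the vals by log2 of the primes, then least-squares
fit their combination to the JI point, which is all ones in weighted
coordinates. The fitted coefficients, times 1200, should agree with
what lt.g1 and lt.g2 above return.)

import numpy as np

primes = np.array([2.0, 3.0, 5.0, 7.0])
h19 = np.array([19, 30, 44, 53])
h31 = np.array([31, 49, 72, 87])

# weighted vals as columns: entry (j, i) is val_i(prime_j) / log2(prime_j)
A = np.column_stack([h19, h31]) / np.log2(primes)[:, None]

# least-squares fit t1*h19 + t2*h31 to the JI point (all ones when weighted)
t, *_ = np.linalg.lstsq(A, np.ones(len(primes)), rcond=None)
g1, g2 = 1200 * t

print(g1, g2)             # TOP-RMS sizes of one step of 19 and of 31, in cents
print(19 * g1 + 31 * g2)  # the TOP-RMS tempered octave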

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/7/2010 11:46:14 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> Actually, I don't think the TOP-RMS generators need variance or
> covariance.

Are you actually talking about random variables, or can we stick to bilinear and quadratic forms and, hopefully, simple dot products?

🔗Graham Breed <gbreed@gmail.com>

5/8/2010 12:27:56 AM

On 8 May 2010 10:46, genewardsmith <genewardsmith@sbcglobal.net> wrote:
>
>
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
>> Actually, I don't think the TOP-RMS generators need variance or
>> covariance.
>
> Are you actually talking about random variables, or can we stick to bilinear and quadratic forms and, hopefully, simple dot products?

No, they're not random variables. They happen to be the same functions.

What's a bilinear form?

Graham

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/8/2010 2:18:27 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:

> What's a bilinear form?

It's a function B(u,v) on vectors in a vector space which is simultaneously linear in both slots.

http://en.wikipedia.org/wiki/Bilinear_form
http://en.wikipedia.org/wiki/Quadratic_form

Inner products are symmetric bilinear forms.

🔗Carl Lumma <carl@lumma.org>

5/8/2010 4:03:13 AM

Graham wrote:
>> Because the optimization should be based on a psychoacoustically-
>> sound weighting. We really are going in circles, aren't we?
>
>Why can't I use whatever tuning I want, provided my instrument can handle it?

You can, you just shouldn't call it optimal.

>> Does "period" mean anything other than "the larger generator"?
>
>It's the generator that equally divides the equivalence interval.

Why do I need an equivalence interval?

>>>>>The ratio of the period to the generator doesn't depend on the size of
>>>>>the octave.
>>>>
>>>> So what? Do you intend to optimize it and then let people pick
>>>> whatever octave size they like?
>>>
>>>Yes.
>>
>> If you also tell them the optimal octave size, then they'd have
>> everything they need.
>
>What's this "if"? I do tell them exactly that. Which is fine if
>that's what they wanted. What if they were looking for pure octaves?

If they want pure octaves, your parameterized octave stretch is
useful. Some instruments can't handle tempered octaves, so yes,
it is useful.

>> No, I've been arguing in favor of tempered octaves for many years.
>> In fact, I'm the guy who put the bug under Paul's skin. I also happen
>> to be the first guy to ask if a TOP-like approach could be made to
>> minimize RMS error instead of max error.
>
>How do you know? I was talking about it around 1998.

Were you? TOP dates to 2002 I think. Why didn't you claim
priority at the time?

>> I did actually have a look at regular.scm recently but it seemed
>> impenetrable. Then again, I suck at reading other people's code
>> and it's generally a hard thing to do and as you say, you may be
>> supporting a history. The above looks completely straightforward
>> though, so maybe I should have another look for those object
>> definitions.
>
>One thing about the Scheme is ... it's mostly about the searches
>not finding the generators.

Yes.

-Carl

🔗Graham Breed <gbreed@gmail.com>

5/8/2010 11:06:45 PM

On 8 May 2010 15:03, Carl Lumma <carl@lumma.org> wrote:
> Graham wrote:
>
>>Why can't I use whatever tuning I want, provided my instrument can handle it?
>
> You can, you just shouldn't call it optimal.

If you optimize the generator with the octaves constrained to be pure,
what you get is still an optimal generator.

>>> Does "period" mean anything other than "the larger generator"?
>>
>>It's the generator that equally divides the equivalence interval.
>
> Why do I need an equivalence interval?

Because you can't define the period otherwise. Where is this going?

> If they want pure octaves, your parameterized octave stretch is
> useful.  Some instruments can't handle tempered octaves, so yes,
> it is useful.

There you go.

>>> No, I've been arguing in favor of tempered octaves for many years.
>>> In fact, I'm the guy who put the bug under Paul's skin.  I also happen
>>> to be the first guy to ask if a TOP-like approach could be made to
>>> minimize RMS error instead of max error.
>>
>>How do you know?  I was talking about it around 1998.
>
> Were you?  TOP dates to 2002 I think.  Why didn't you claim
> priority at the time?

I don't have any evidence that I was talking about it. TOP was first
clearly stated in January 2004, along with the observation that a
weighted minimax of primes is also the minimax for an arbitrary set of
intervals. That was Paul's discovery. Everything else seemed
obvious, and I said so. The first statement of general TOP that I can
find is from you, from January 2003. But I can't have been paying
attention because I didn't say anything about it. Gene and I had both
argued for Euclidean metrics at different times. I haven't checked
the old tuning list archives or my private e-mails with Paul.

Once we had a kind of minimax error, it was obvious to apply it to
RMS. You seem to have been the first to suggest that, and very soon
after Paul started pushing TOP(-max), maybe because it suited your
time zone, or you were in the habit of making suggestions. There was
a discussion about Euclidean metrics at the same time, the context of
which I haven't tracked down. There seem to be some messages missing.

The other thing about 2004 is that I happened to learn the matrix
formula for solving arbitrary rank least squares problems. Until then
I could probably have solved TOP-RMS, but it wouldn't have been as
easy. So I stuck with odd limits instead, which I knew were octave
equivalent approximations to Tenney harmonic distance, but had the
advantage of only requiring one parameter to be optimized. Once I had
the formula, TOP-RMS became the easiest thing to calculate, and I
don't think anybody implemented it before me.

I don't remember what I was doing before odd limits. But I'm sure I
knew that Tenney weighting would do fine if you treated 2 equally with
other primes, that square lattices worked fine as long as you optimized
the octaves, and that Euclidean lattices could approximate taxi cab
ones. I don't know if I treated that optimization seriously. And I
didn't know that optimizations over different sets of intervals gave
roughly the same results.

Graham

🔗Carl Lumma <carl@lumma.org>

5/8/2010 11:29:53 PM

Graham wrote:

>>>> Does "period" mean anything other than "the larger generator"?
>>>
>>>It's the generator that equally divides the equivalence interval.
>>
>> Why do I need an equivalence interval?
>
>Because you can't define the period otherwise. Where is this going?

Sounds like a tautology to me. I don't think the term "period"
means anything other than "the largest generator" if we really want
to generalize octave equivalence away. I'm willing to be convinced
otherwise but all I ever get when I ask are circular definitions
(from Paul too).

>> If they want pure octaves, your parameterized octave stretch is
>> useful. Some instruments can't handle tempered octaves, so yes,
>> it is useful.
>
>There you go.

Yes ok, I give. My intention was only to get you to think about
generalizing as completely as possible.

>>>> No, I've been arguing in favor of tempered octaves for many years.
>>>> In fact, I'm the guy who put the bug under Paul's skin. I also happen
>>>> to be the first guy to ask if a TOP-like approach could be made to
>>>> minimize RMS error instead of max error.
>>>
>>>How do you know? I was talking about it around 1998.
>>
>> Were you? TOP dates to 2002 I think. Why didn't you claim
>> priority at the time?
>
>I don't have any evidence that I was talking about it. TOP was first
>clearly stated in January 2004,

Sorry, you're right. Can't believe I was 2 years off!

>along with the observation that a
>weighted minimax of primes is also the minimax for an arbitrary set of
>intervals. That was Paul's discovery. Everything else seemed
>obvious, and I said so.

I said so too. I also referenced Dave K's method for uniformly
tempering out a comma.

>The first statement of general TOP that I can
>find is from you, from January 2003.

Really? I don't recall that. What I remember is asking Paul, shortly
after the Jan. 2004 thread, if there could be a version that minimized
RMS error instead of max error. He didn't warm to the idea, but you
did, so probably you had been thinking about it, but you didn't come
back with TOP-RMS for a year or so. That's my recollection, which
apparently was just 2 years off so take it with a grain of salt.

>Once we had a kind of minimax error, it was obvious to apply it to
>RMS. You seem to have been the first to suggest that, and very soon
>after Paul started pushing TOP(-max), maybe because it suited your
>time zone, or you were in the habit of making suggestions.

I am always in the habit of making suggestions. In this case I
recalled that RMS did a better job than max error in describing
my ratings of tempered major and minor triads.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/9/2010 12:38:22 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> Sounds like a tautology to me. I don't think the term "period"
> means anything other than "the largest generator" if we really want
> to generalize octave equivalence away. I'm willing to be convinced
> otherwise but all I ever get when I ask are circular definitions
> (from Paul too).

Start with two independent equal temperament vals for the same prime limit. Since they are et vals, they will map 2. Now form a two-column matrix from these vals, and Hermite reduce it. The column with the nonzero integer n on top defines the period, and the actual period is 1/n of an octave. How's that?

Of course, if you put in weird values you may get a weird result. Using both 34 equal 7-limit maps gives a period of an octave, and a generator of exactly 3, aka Pythagorean tuning, for example. But that still has a period.

If you don't have a handy Hermite reduction package you can get the same result via number theory (continued fractions).
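
(A minimal sketch of the two-val case in Python, with no Hermite
package; the names ext_gcd and period_and_generator are made up here.
It does the one column reduction that matters: one column ends up with
a zero on top, the other with the gcd of the leading entries.)

def ext_gcd(a, b):
    """Return (g, x, y) with x*a + y*b == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def period_and_generator(val1, val2):
    """Column-reduce the two-column matrix [val1 | val2]: the column with
    the gcd g on top is the period mapping (period = 1/g octave), the
    column with 0 on top is the generator mapping."""
    a, b = val1[0], val2[0]
    g, x, y = ext_gcd(a, b)
    period = [x * m + y * n for m, n in zip(val1, val2)]
    gener = [(b // g) * m - (a // g) * n for m, n in zip(val1, val2)]
    return g, period, gener

# 7-limit meantone from the 19 and 31 vals used earlier in the thread:
print(period_and_generator([19, 30, 44, 53], [31, 49, 72, 87]))
# -> (1, [1, 2, 4, 7], [0, -1, -4, -10]); further column reduction and a
# sign flip give the familiar [1, 0, -4, -13] period mapping and
# [0, 1, 4, 10] generator mapping, and the 1 on top of the period column
# means the period is a whole octave.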

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/9/2010 12:49:29 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> Really? I don't recall that. What I remember is asking Paul, shortly
> after the Jan. 2004 thread, if there could be a version that minimized
> RMS error instead of max error. He didn't warm to the idea, but you
> did, so probably you had been thinking about it, but you didn't come
> back with TOP-RMS for a year or so. That's my recollection, which
> apparently was just 2 years off so take it with a grain of salt.

My recollection was that I said "sure, it's easy" but no one seemed to really want it--not you, Graham, and certainly not Paul. I thought Graham had taken up something else, actually. The problem was that it was a convenience for theory and calculation, but didn't seem to have the canonical oomph of TOP and we were, in part, looking for a canonical system of defining tunings.

One confusing aspect is that there are these systems which are very close indeed to TOP-RMS, but not quite identical. ZMD is one of them. Another is the system, unnamed so far as I know, which requires the solution to have the same norm as the JI point.

🔗Carl Lumma <carl@lumma.org>

5/9/2010 1:00:00 AM

Gene wrote:

>> Sounds like a tautology to me. I don't think the term "period"
>> means anything other than "the largest generator" if we really want
>> to generalize octave equivalence away. I'm willing to be convinced
>> otherwise but all I ever get when I ask are circular definitions
>> (from Paul too).
>
>Start with two independent equal temperament vals of the same rank.
>Since they are et vals, they will map 2. Now form a two column matrix
>from these vals, and Hermite reduce it. The column with the nonzero
>integer n on top defines the period, and the actual period is 1/n.
>How's that?

I get that. It's a definition. What's its purpose? Is the point
of calling it a "period" based on the notion of octave equivalence?
Or is there a deeper reason for it? Couldn't I wrap a chain of
octaves inside a fifth and tile that?

498 294 90 588 384

90 294 384 498 588 702 792 996 1086 1200 1290 1404

-Carl

🔗Carl Lumma <carl@lumma.org>

5/16/2010 5:17:02 PM

I wrote:

>>Start with two independent equal temperament vals of the same rank.
>>Since they are et vals, they will map 2. Now form a two column matrix
>>from these vals, and Hermite reduce it. The column with the nonzero
>>integer n on top defines the period, and the actual period is 1/n.
>>How's that?
>
>I get that. It's a definition. What's its purpose? Is the point
>of calling it a "period" based on the notion of octave equivalence?
>Or is there a deeper reason for it? Couldn't I wrap a chain of
>octaves inside a fifth and tile that?
>
>498 294 90 588 384
>
>90 294 384 498 588 702 792 996 1086 1200 1290 1404

No answer? I assumed you'd know exactly what to call such
a transformation. -Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/18/2010 2:23:18 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> No answer? I assumed you'd know exactly what to call such
> a transformation. -Carl
>

If you get two step sizes inside the period, it's a MOS. Otherwise, it isn't, so call it something else. I guess I don't get what you are asking.

🔗Carl Lumma <carl@lumma.org>

5/18/2010 10:58:02 AM

>> No answer? I assumed you'd know exactly what to call such
>> a transformation. -Carl
>
>If you get two step sizes inside the period, it's a MOS. Otherwise, it
>isn't, so call it soemthing else. I guess I don't get what you are asking.

I'm asking what makes it a "period", other than convention.
If the scale I gave had one too few notes for you, try this

!
blah
7
!
90.
180.
294.
384.
498.
588.
702.

This has a "period" of 3/2, and a "generator" of 2/1. Or does it?
You tell me.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/18/2010 2:22:33 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> I'm asking what makes it a "period", other than convention.

If you use it as a period when constructing a scale, it's a period.
Since scales conceptually are mostly periodic, that makes it generally easy if you happen to have a scale. Of course, if given a tetrachord with no set of assembly instructions in the box, all bets are off.

So far as pure rank two tunings go, there is a presumption in favor of 2^(1/n) or its approximates as being periods, since that is the default assumption about scales which might be constructed using the tuning.

http://xenharmonic.wikispaces.com/Periodic+scale For the latest exciting news about periodic scales.

http://xenharmonic.wikispaces.com/MOSScales for important information concerning your MOS.

> If the scale I gave had one too few notes for you, try this
>
> !
> blah
> 7
> !
> 90.
> 180.
> 294.
> 384.
> 498.
> 588.
> 702.
>
> This has a "period" of 3/2, and a "generator" of 2/1. Or does it?
> You tell me.
>
> -Carl
>

🔗Carl Lumma <carl@lumma.org>

5/18/2010 2:35:39 PM

>If you use it as a period when constructing a scale, it's a period.

Stop dodging the question. The scale I just gave looks an awful lot
like the normal 12-tone pythagorean (which has a period of 2/1 and
a generator of 3/2) except my scale has a period of 3/2 and a
generator of 2/1.

>So far as pure rank two tunings go, there is a presumption in favor of
>2^(1/n) or its approximates as being periods, since that the default
>assumption about scales which might be constructed using the tuning.

Sounds like it's just a convention, which is what I said before
when you and Graham told me I was wrong. Recall that this all
started when I asked for a method to tell which generator of a
rank 2 temperament is the "period". I still haven't seen such
a method and the distinction appears totally arbitrary.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/18/2010 7:20:37 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:
>
> >If you use it as a period when constructing a scale, it's a period.
>
> Stop dodging the question.

Formulate a clear question and I'll try to answer it. That means for instance if you ask about 2 and 3/2, give a scale in rational numbers so I don't have to worry what ELSE you might mean, and why you gave cents instead.

> The scale I just gave looks an awful lot
> like the normal 12-tone pythagorean (which has a period of 2/1 and
> a generator of 3/2) except my scale has a period of 3/2 and a
> generator of 2/1.

Great. And the point is?

> >So far as pure rank two tunings go, there is a presumption in favor of
> >2^(1/n) or its approximates as being periods, since that the default
> >assumption about scales which might be constructed using the tuning.
>
> Sounds like it's just a convention, which is what I said before
> when you and Graham told me I was wrong.

You seemed to me to be asking how "period and generator" were derived from an abstract temperament, and I told you how. But obviously, if g is a generator and P is a period, we can by definition iterate g and reduce modulo P. But we can just as well iterate P and reduce modulo g. That's sort of massively obvious from a mathematical point of view, so I presumed you weren't asking how you could tell which one you iterated. You can tell by doing the iterating, or looking at the results, but hardly from an abstract characterization of the temperament without additional assumptions (which happen usually to be true ones, due to the phenomenon of octave equivalence.)
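
(To make the symmetry concrete, here is a little sketch in Python, with
pure 2/1 and 3/2 in cents assumed for definiteness, where the same
little function does either job:)

import math

def stack(step, modulus, count):
    # iterate `step`, reducing each pitch into [0, modulus); values in cents
    return sorted((i * step) % modulus for i in range(count))

octave = 1200.0
fifth = 1200.0 * math.log2(3 / 2)   # about 701.955 cents

print(stack(fifth, octave, 12))  # a chain of fifths reduced by the octave
print(stack(octave, fifth, 6))   # a chain of octaves reduced by the fifth:
                                 # roughly 0, 90, 294, 384, 498, 588, as in
                                 # the scale given earlier in the thread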

> Recall that this all
> started when I asked for a method to tell which generator of a
> rank 2 temperament is the "period". I still haven't seen such
> a method and the distinction appears totally arbitrary.

And I answered, giving such a method, for abstractly characterized rank 2 temperaments assuming the "paradigm". I didn't assume you were asking a manifestly silly question, which seems to be what you are now faulting me for.

🔗Carl Lumma <carl@lumma.org>

5/19/2010 12:06:00 AM

Gene wrote:

>You seemed to me to be asking how "period and generator" were derived
>from an abstract temperament, and I told you how. But obviously, if g
>is a generator and P is a period, we can by definition iterate g and
>reduce modulo P. But we can just as well iterate P and reduce modulo
>g. That's sort of massively obvious from a mathematical point of view,
>so I presumed you weren't asking how you could tell which one you
>iterated. You can tell by doing the iterating, or looking at the
>results,

The results look identical to me, at least for the example I gave.

>but hardly from an abstract characterization of the temperament
>without additional assumptions (which happen usually to be true ones,
>due to the phenomenon of octave equivalence.)

This started when I claimed that without octave equivalence,
there'd be no point in christening one of the generators a "period".
If that isn't accurate, you and Graham are certainly taking a
circuitous route to saying why.

-Carl

🔗genewardsmith <genewardsmith@sbcglobal.net>

5/19/2010 2:46:01 AM

--- In tuning-math@yahoogroups.com, Carl Lumma <carl@...> wrote:

> The results look identical to me, at least for the example I gave.

Since one repeats at 3/2 and the other at 2, clearly not.

> This started when I claimed that without octave equivalence,
> there'd be no point in christening one of the generators a "period".
> If that isn't accurate, you and Graham are certainly taking a
> circuitous route to saying why.

That's the sort of comment I see no need to reply to. It's a point in psychology, and doesn't bring up any tuning-math issues I can see.