Well, my big "mean" math question has to do with the idea of

the "root mean square" (RMS) method of finding averages that Graham

Breed was talking about on the "fat" list...

I'm actually intrigued by this, since I'm not understanding why

squaring everything, adding it all together and then taking the

SQUARE ROOT of the sum is going to lead to an accurate average...

Why is this done this way again?? This is pretty interesting,

actually...

_________ ________ _______

Joseph Pehrson

On 6/8/01 1:40 PM, "jpehrson@rcn.com" <jpehrson@rcn.com> wrote:

> Well, my big "mean" math question has to do with the idea of

> the "root mean square" (RMS) method of finding averages that Graham

> Breed was talking about on the "fat" list...

>

> I'm actually intrigued by this, since I'm not understanding why

> squaring everything, adding it all together and then taking the

> SQUARE ROOT of the sum is going to lead to an accurate average...

>

> Why is this done this way again?? This is pretty interesting,

> actually...

Joseph -

I think there's an easier explanation than using vectors.

It's basically the Pythagorean theorem:

a^2 + b^2 = c^2

Try imagining having to take an average of only two variables.

I think it's the easiest way to visualize it.

Imagine an x-y grid.

You have your ideal point,

(by your problem, a JI interval?)

which is somewhere on the grid.

Then you have your test data scattered all over the grid.

If you want to give each variable equal "weight",

effectively you have to give it its own "dimension".

That is, if a 2% error is of equivalent deviance,

no matter which direction it's coming from,

then the error of any one set of two test values

would be equivalent to the distance between

that particular test point and the ideal point.

Which you can calculate with the Pythagorean theorem.

The same method holds true for 3-dimensional space.

The geometric proof is very short.

Once you prove it true for 3-dimensional space,

given the original question,

to plot the accuracy of equally relevant data,

you use the square root

of the sum of all the squares of deviances.
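A minimal Python sketch of Marc's picture (my own illustration; the ideal and test points are made-up values):

```python
import math

# Hypothetical "ideal" point on the grid (e.g. where the JI interval sits)
ideal = (0.0, 0.0)

# A made-up test point; each coordinate is one equally weighted variable
measured = (3.0, 4.0)

# Pythagoras: the combined error is the straight-line distance
# between the test point and the ideal point
error = math.sqrt(sum((m - i) ** 2 for m, i in zip(measured, ideal)))
print(error)  # sqrt(3^2 + 4^2) = 5.0
```

The same line works unchanged with three or more coordinates, which is the extension to 3-dimensional space Marc mentions. Note this is the root of the *sum* of squares, a point John deLaubenfels picks up later in the thread.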

I'm in between naps,

so I hope you can follow this wording.

Marc

--- In tuning-math@y..., jpehrson@r... wrote:

> Well, my big "mean" math question has to do with the idea of

> the "root mean square" (RMS) method of finding averages that Graham

> Breed was talking about on the "fat" list...

>

> I'm actually intrigued by this, since I'm not understanding why

> squaring everything, adding it all together and then taking the

> SQUARE ROOT of the sum is going to lead to an accurate average...

>

> Why is this done this way again?? This is pretty interesting,

> actually...

You don't want to take the _straight_ average because it might be

zero just from positive and negative signs canceling out.

The two simplest alternatives are to take the _maximum_ error, or to

take the average of the absolute values of the errors (called MAD,

for Mean Absolute Deviation).

The RMS is known as the Standard Deviation in statistics. It's the

standard measure of error in science and engineering. There are

several reasons for this. Let me give you a rough idea of why it

makes some sense in this context.

Look at the dips in the harmonic entropy curve. Notice how they

are "rounded" at the bottom. Any curve with a round minimum like this

(not getting too technical) approximates a parabola more and more

closely the more you zoom in on the minimum. A parabola is just the

curve representing squared error. So if you sum the squared errors,

you're summing the dissonances, in a sense. And then you have to take

the square root at the end so that the result is comparable with the

units for a _single_ error. For example, in the 3-limit there's only

one interval to evaluate. Let's say it has a 2-cent error. So any

sort of _average_ over this one interval would have to be 2 cents. It

wouldn't make much sense to say the average was 4 cents when there's

only a single 2 cent error, would it? So that's why you have to take

the square root after summing. If you want to get more technical,

check out a statistics book.
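A rough Python sketch of the alternatives Paul lists (the error values are invented, and the RMS line divides by the count, per the textbook definition spelled out later in the thread):

```python
import math

errors = [2.0, -3.0, 1.0]  # made-up signed errors, in cents
n = len(errors)

straight = sum(errors) / n                       # signs cancel: 0.0
worst = max(abs(e) for e in errors)              # maximum error: 3.0
mad = sum(abs(e) for e in errors) / n            # mean absolute deviation: 2.0
rms = math.sqrt(sum(e * e for e in errors) / n)  # root mean square: ~2.16

print(straight, worst, mad, round(rms, 2))
```

The straight average comes out exactly zero here even though every interval is off, which is Paul's point about why the signs must be squared away (or absolute-valued away) first.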

On 6/8/01 5:01 PM, "Paul Erlich" <paul@stretch-music.com> wrote:

> The RMS is known as the Standard Deviation in statistics. It's the

> standard measure of error in science and engineering.

THAT'S what standard deviation IS? Ahh..

Thank you Paul. I don't think I ever knew that.

Or if I did, I managed to not retain it...

--- In tuning-math@y..., "Paul Erlich" <paul@s...> wrote:

Thanks, Paul... this gives me a good overview on this one! It's

pretty interesting...

________ _______ ______

Joseph Pehrson

--- In tuning-math@y..., "Orphon Soul, Inc." <tuning@o...> wrote:

> On 6/8/01 5:01 PM, "Paul Erlich" <paul@s...> wrote:

>

> > The RMS is known as the Standard Deviation in statistics. It's the

> > standard measure of error in science and engineering.

>

> THAT'S what standard deviation IS? Ahh..

>

> Thank you Paul. I don't think I ever knew that.

> Or if I did, I managed to not retain it...

Sorry, I was wrong about that. The Standard Deviation is something

different. It's actually the RMS deviation of a set of measurements

from their collective mean. The RMS we're talking about here, rather,

is the RMS deviation from a pre-determined standard, namely the JI

interval. So it's somewhat different.
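The distinction can be shown in a short sketch (the measured fifths and the 701.955-cent just-fifth target are my own illustration, not values from the thread):

```python
import math

data = [698.0, 704.0, 701.0, 699.0]  # made-up tempered fifths, in cents
target = 701.955                     # the just fifth (3/2) as fixed standard

mean = sum(data) / len(data)

# Standard deviation: RMS deviation of the data from its own mean
std_dev = math.sqrt(sum((x - mean) ** 2 for x in data) / len(data))

# RMS error: RMS deviation of the data from the pre-determined target
rms_error = math.sqrt(sum((x - target) ** 2 for x in data) / len(data))

print(round(std_dev, 3), round(rms_error, 3))
```

The standard deviation comes out smaller than the RMS error here, since the data's own mean is by construction closer to the data than any fixed target is.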

On 6/8/01 5:40 PM, "Paul Erlich" <paul@stretch-music.com> wrote:

> Sorry, I was wrong about that. The Standard Deviation is something

> different. It's actually the RMS deviation of a set of measurements

> from their collective mean.

Right, I remember it had to do with the mean,

just didn't know how it was calculated.

Thanks for clearing that up.

> The RMS we're talking about here, rather,

> is the RMS deviation from a pre-determined standard, namely the JI

> interval. So it's somewhat different.

Actually I've worked with that myself.

Nice to see other people having the same

intuitions and/or conclusions.

[Joseph Pehrson wrote:]

>Well, my big "mean" math question has to do with the idea of

>the "root mean square" (RMS) method of finding averages that Graham

>Breed was talking about on the "fat" list...

>I'm actually intrigued by this, since I'm not understanding why

>squaring everything, adding it all together and then taking the

>SQUARE ROOT of the sum is going to lead to an accurate average...

>Why is this done this way again?? This is pretty interesting,

>actually...

Is it my imagination, or has nobody already caught the error in this?

Paul E, even you???

Before you take the square root, you divide by the number of values

whose square has been summed. Thus, the RMS of 3 and 4 is:

RMS = sqrt((3^2 + 4^2) / 2)

    = sqrt((9 + 16) / 2)

    = sqrt(12.5)

    =~ 3.54

NOT sqrt(25) = 5!!

JdL
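John's arithmetic as a Python sketch, with the root *sum* square alongside for contrast (a sketch of my own):

```python
import math

values = [3.0, 4.0]

# RMS: square, take the MEAN (divide by the count), then the root
rms = math.sqrt(sum(v * v for v in values) / len(values))

# Skip the division and you get the root SUM square instead
rss = math.sqrt(sum(v * v for v in values))

print(round(rms, 2))  # 3.54
print(rss)            # 5.0
```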

--- In tuning-math@y..., "John A. deLaubenfels" <jdl@a...> wrote:

> Is it my imagination, or has nobody already caught the error in this?

> Paul E, even you???

>

> Before you take the square root, you divide by the number of values

> whose square has been summed. Thus, the RMS of 3 and 4 is:

>

> RMS = sqrt((3^2 + 4^2) / 2)

>     = sqrt((9 + 16) / 2)

>     = sqrt(12.5)

>     =~ 3.54

>

> NOT sqrt(25) = 5!!

>

> JdL

Actually, John... this is interesting because, if I'd known this, I

probably wouldn't have been quite as "mystified" as I was after

Graham's original post. The method you outline immediately above

seems somewhat "averagy" to me... so it would have seemed more

sensible.

Here was Graham's original quote from post 24541:

>Averages are trickier, you do need to consider all intervals then.

>The most popular is the root mean squared (RMS). So you take the

>errors in all intervals, square them all, add them together and

>return the square root.

________ ______ _____

Joseph Pehrson

[Joseph Pehrson wrote:]

>Actually, John... this is interesting because, if I'd known this, I

>probably wouldn't have been quite as "mystified" as I was after

>Graham's original post. The method you outline immediately above

>seems somewhat "averagy" to me... so it would have seemed more

>sensible.

>Here was Graham's original quote from post 24541:

>>Averages are trickier, you do need to consider all intervals then.

>>The most popular is the root mean squared (RMS). So you take the

>>errors in all intervals, square them all, add them together and

>>return the square root.

Right. That'd be the "Root Sum Square", which, as you've surmised,

wouldn't be very "averagy". In fact, I'm not sure what it would be

useful for. I'm sure Graham, and probably all the other people who

responded to your post yesterday, _do_ know the correct definition, but

all had a "brain fart" (which I know a lot about, 'cause I get them all

the time!).

The RMS value will always be less than the largest absolute value that

goes into its calculation (or equal, if all the input values have the

same absolute value). I can see that you were grasping for that in your

original post. So, you have a better math sense than you realized!

JdL
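John's bounding property can be checked directly (a sketch of my own; the input lists are made up):

```python
import math

def rms(values):
    """Root mean square: square, average, then take the root."""
    return math.sqrt(sum(v * v for v in values) / len(values))

mixed = [3.0, -4.0, 1.0]
same_magnitude = [2.0, -2.0, 2.0]

# Strictly below the largest absolute value...
print(rms(mixed) < max(abs(v) for v in mixed))  # True

# ...and equal to it when every input has the same absolute value
print(rms(same_magnitude))  # 2.0
```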

[I wrote:]

>Right. That'd be the "Root Sum Square", which, as you've surmised,

>wouldn't be very "averagy". In fact, I'm not sure what it would be

>useful for.

Oops! Well, maybe it'd be slightly useful for such abstractions as

the length of a hypotenuse of a right triangle. ;->

JdL

--- In tuning-math@y..., "John A. deLaubenfels" <jdl@a...> wrote:

> [Joseph Pehrson wrote:]

> >Actually, John... this is interesting because, if I'd known this, I

> >probably wouldn't have been quite as "mystified" as I was after

> >Graham's original post. The method you outline immediately above

> >seems somewhat "averagy" to me... so it would have seemed more

> >sensible.

>

> >Here was Graham's original quote from post 24541:

>

> >>Averages are trickier, you do need to consider all intervals then.

> >>The most popular is the root mean squared (RMS). So you take the

> >>errors in all intervals, square them all, add them together and

> >>return the square root.

>

> Right. That'd be the "Root Sum Square", which, as you've surmised,

> wouldn't be very "averagy". In fact, I'm not sure what it would be

> useful for. I'm sure Graham, and probably all the other people who

> responded to your post yesterday, _do_ know the correct definition,

> but all had a "brain fart" (which I know a lot about, 'cause I get

> them all the time!).

>

> The RMS value will always be less than the largest absolute value that

> goes into its calculation (or equal, if all the input values have the

> same absolute value). I can see that you were grasping for that in your

> original post. So, you have a better math sense than you realized!

>

> JdL

Actually, John... that's pretty funny and, frankly, encouraging.

It's never too late to learn at least *something!* I think part of

the problem was the math "training" I had as a student. Math was

always presented as a "practical" study with ugly and dull-

looking "engineering type" books. No wonder I would read art and

music books instead. The subject was *ruined* for me... or at least

for *my* particular sensibilities...

_______ _____ ________

Joseph Pehrson