Re: [tuning] Re: new-staff notations in Tonalsoft Musica

🔗Aaron K. Johnson <akjmicro@...>

12/20/2004 9:23:02 AM

On Monday 20 December 2004 09:19 am, Aaron K. Johnson wrote:

> I read a very convincing article a couple of years ago in Harper's about
> how, when something has been around a long time, the probability of it
> staying around for the same amount of time into the future is very, very
> high. The reasoning was that we are most often in the middle of something
> (like life); it is very rare to be at the beginning (or end) of something.

It turns out that it was the New Yorker (12 July 1999), in an article written
by Timothy Ferris called "How to Predict Everything".

The blurb below suggests that Gott, the subject of the article, is probably
wrong in his reasoning, so, Monz, maybe your notation WILL take over!

###############

"Point, Counterpoint and the Duration of Everything"

by James Glanz, The New York Times, 8 February 2000, F5.

In our last issue we considered the article "How to Predict Everything" (The
New Yorker, 12 July 1999, pp. 35-39), which describes how physicist John Gott
proposes to compute prediction intervals for the future duration of any
observed phenomenon. Gott's method hinges on the "Copernican assumption" that
there is nothing special about the particular time of your observation, so
with 95% confidence it occurs in the middle 95% of the lifetime. If the
phenomenon is observed to have started A years ago, Gott infers that A
represents between 1/40 (2.5%) and 39/40 (97.5%) of the total life. He
therefore predicts that the remaining life will extend between A/39 and 39A
years into the future. (Given Gott's assumptions, this is simple algebra: if
A = (1/40)L where L is the total life, then the future life is L - A = 39A.)
Gott has used the method to predict everything from the run of Broadway plays
to the survival of the human species!
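
For anyone who wants to check that algebra, here is a minimal sketch in
Python (the function name gott_interval is mine, chosen for illustration,
not Gott's or Glanz's):

    def gott_interval(age, confidence=0.95):
        # Copernican assumption: with probability `confidence`, the moment
        # of observation falls in the middle `confidence` fraction of the
        # total lifetime.  If the phenomenon is `age` units old, the
        # remaining life then lies between age*(1-c)/(1+c) and
        # age*(1+c)/(1-c).
        c = confidence
        return age * (1 - c) / (1 + c), age * (1 + c) / (1 - c)

    # At 95% confidence the bounds reduce to A/39 and 39A, as above.
    low, high = gott_interval(1.0, 0.95)
    print(low, high)   # ~0.0256 (= 1/39) and 39.0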

But can such broad applicability really be justified? Not according to Dr.
Carlton Caves, a physicist at the University of New Mexico (and a New Yorker
reader!) who has put together a systematic critique of Gott's work. His
article, "Predicting Future Duration from Present Age: A Critical
Assessment," will be published in Contemporary Physics.

Caves' ideas are based on Bayesian analysis. He says that Gott errs by
ignoring prior information about the lifetimes of the phenomena in question.
For example, Gott claims to have invented his method while standing at the
Berlin Wall in 1969, eight years after it was erected. With 50% confidence he
inferred that those eight years represented between 1/4 and 3/4 of its total
life, so he predicted that the Wall would last between 2 2/3 years and 24
years into the future. (For the record, twenty years later, the Wall did come
down.) But what sense does it make, asks Caves, to ignore historical and
political conditions when making such a prediction? Surely, such prior
knowledge is relevant, and Bayesian ideas provide a framework for
incorporating it. In Caves' view, failing to do so in favor of some
"universal rule" is unscientific.

To illustrate the matter more simply, Caves imagines discovering a party in
progress, where we learn that the guest of honor is celebrating her 50th
birthday. Gott's theory predicts that, with 95% certainty, she will live
between 1.28 and 1950 additional years, a range which Caves dismisses as too
wide to be useful. Even worse, he points out, would be to predict that with
33% confidence she is in the first third of her lifetime and thus has a 33%
chance to live past the age of 150! As a challenge to Gott, Caves has
produced a notarized list of 24 dogs owned by people associated with his
department. He identified the half dozen who are older than 10 years -- prior
information that Gott would presumably ignore. Gott's method would then
predict that each had a 50% chance of living to twice its current age. Caves
is willing to bet Gott $1000 on each dog, offering 2-to-1 odds that it won't
live that long. Caves cites Gott's refusal to bet as evidence that he doesn't
believe his own rule.
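
A quick check of those numbers as well; note that how to read Caves' "2-to-1
odds" is my own assumption, since the article does not spell out the stakes:

    # 95% confidence, age A = 50 years: remaining life between A/39 and 39A.
    A = 50
    print(A / 39, 39 * A)   # ~1.28 and 1950 additional years

    # The dog bet, under Gott's own rule: each dog has a 1/2 chance of
    # living to at least twice its current age (remaining life >= age).
    # Reading "2-to-1 odds" as Caves risking $2000 against Gott's $1000,
    # Gott's expected gain per dog would be positive if he trusted the rule.
    p_win = 0.5
    print(p_win * 2000 - (1 - p_win) * 1000)   # 500.0 dollars per dog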

For more technical details, you can read Caves' paper online at

http://xxx.lanl.gov/abs/astro-ph/0001414

and Gott's rebuttal at

http://www.physicsweb.org/article/news/4/2/6/1/news-04-02-06a

As for the dogs, Gott thinks his analysis would apply to the whole sample, not
to each dog individually hand-picked by Caves. Trying to sort out all of this
is an interesting discussion exercise involving the notions of sampling,
confidence levels, and prediction.

################
Aaron Krister Johnson
http://www.akjmusic.com
http://www.dividebypi.com

🔗Graham Breed <graham@...>

12/21/2004 5:08:52 AM

There was a similar article in New Scientist, so perhaps he gets
around. I don't see any flaw in the reasoning. The supposed
rebuttal is simply about not including any other information. Well,
of course predictions will be poor if you only use one data point,
and certainly the article I read made it clear that no prior
information was assumed. Of course the predictions will improve with
prior information: who here would assume otherwise?

It looks like the academic habit of claiming controversy where there
is none, in order to push forward your own ideas.

Somewhat depressing for me, in that after waiting nearly 2 months for
the university to provide me with the computer I'm entitled to, I can
now expect to wait another 2 months, since I don't really have any
other information to go on :(

Graham