Re: Penrose's argument - was Re: Digest Number 1022 was re: Belief-o-matic

🔗Robert Walker <robertwalker@...>

7/10/2004 5:21:04 AM

Hi Carl,

> So in that sense he shows that
> no program can ever pass the
> Turing argument in full generality.

Does that cover self-extensible programs?

Yes, if the extension is done by adding
axioms, using an inbuilt axiom system
to evaluate their truth.

If it is done by some other method, then
you are back in the same situation as
with neural nets and expert systems:
you don't know how it works, so how can
you say that it understands truth, or
even that it will always come up with
true answers?

Of course we don't know how we work
either, but then we are the ones asking
the question: we know what truth is,
and we are trying to find out whether a
computer can understand the same notion
of truth as a human being.

That's enough for me anyway, though I
can understand it mightn't convince
everyone, depending on one's views.
Maybe Roger Penrose has found more
elaborate arguments to deal with
such objections.

Even if the program is extensible, it
is still a program. Provided the way it
extends its programming is itself
programmable, you can treat it as data
for another (quite possibly smaller)
program: an interpreter that reads the
first program and executes its
instructions (including any
instructions to overwrite parts of
itself, or whatever it does). The
interpreter plus its initialisation
data, taken together as one thing, can
then be considered as the program you
are testing for its understanding of
truth. That is a single fixed-size
thing that never changes its
programming, though the data it works
on gets transformed and changed, and
can grow in size.

So unless you say it also changes the
language in which it is written as it
goes along (and there is no great
advantage in doing that, since all
general-purpose computer languages are
universal as far as computing is
concerned), that gives you a finite
program plus data.
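
Here is a minimal sketch of that
reduction in Python (the toy opcodes
are invented purely for illustration,
not taken from any real system): the
"self-modifying" program is just a
mutable list handled by a fixed
interpreter.

def interpret(program):
    # Run `program`, a mutable list of (opcode, arg)
    # pairs. The interpreter itself is fixed; only
    # its data (the list) ever changes.
    pc, acc = 0, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "add":
            acc += arg
        elif op == "print":
            print(acc)
        elif op == "rewrite":
            # The "self-modification": the program
            # overwrites one of its own instructions,
            # but from the interpreter's point of
            # view this is just an edit to data.
            program[arg] = ("add", acc)
        pc += 1

# This program rewrites its own instruction 3
# before reaching it; the interpreter never changes.
interpret([("add", 2), ("rewrite", 3), ("print", 0),
           ("add", 0), ("print", 0)])   # prints 2, then 4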

So, having got that far, one wonders
whether there is some kind of
diagonalisation argument that could be
used to show that a Gödel sentence
exists for any such program - maybe
even a constructive argument that
builds one. But I won't try to figure
that out; I'm sure Roger Penrose would
have followed up that line of
investigation if it can be done, at
least if it can be done in any way I'm
likely to hit on in a short time. So
I'll wait until I read Shadows of the
Mind and see if he covers it.
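
The bare diagonalisation idea can at
least be sketched (this is only the
classical diagonal argument, not
Penrose's Gödel-sentence construction):
given any enumeration of total
0/1-valued functions, you can build a
function that differs from every one of
them, so no such enumeration can be
complete.

def diagonal(enumeration):
    # Flip the n-th function's value at n, so the
    # result disagrees with every enumerated
    # function somewhere.
    return lambda n: 1 - enumeration(n)(n)

# Toy example: pretend these were "all" the
# 0/1-valued functions.
funcs = [lambda m: m % 2, lambda m: 0, lambda m: 1]
enum = lambda n: funcs[n % len(funcs)]

d = diagonal(enum)
for n in range(6):
    assert d(n) != enum(n)(n)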

BTW sorry about the replies to digests
- I try to remember not to do that
but slip up now and again.

Robert

🔗Robert Walker <robertwalker@...>

7/10/2004 5:59:35 AM

Hi Carl,

That got sent by accident while I was
still thinking about the
self-extensible program idea.

Here is that bit again:

BTW, a self-extensible program is still
a finite program with a fixed number of
instructions if you consider it another
way: as data for an interpreter, with
the interpreter executing those
instructions as the actual program
running on a larger machine. The
self-modifying bit then just becomes a
particular way of modifying that data.
So there is no really sharp dividing
line between data and program, and
whatever you can do with a
self-modifying program you can do
equally well with a normal program -
fixed in size, with a finite number of
instructions - that modifies data.

If the programming language itself can
change as the program goes along, that
is a bit more complex, and one could
imagine it being used - but presumably
everything must eventually be compiled
to run on some machine or other, so you
can then just use a software simulator
of that machine as your interpreter.

If the machine can rebuild itself then
that gets harder, because it could
build in glitches, deliberately or
through evolution. Eventually it might
evolve into a truly aware machine, I
suppose - that's an idea that gets
explored in SF.

I remember a short SF story in which
there is a great taboo amongst humans
on ever building self-replicating
machines, because then you could get
imperfect replication and evolution,
and then who knows what could happen.
The machines can copy themselves, but
there are always safeguards to limit
how much is done - they have to go to a
human (or ET) surveyor who unlocks them
so that they can replicate.

But another group of ETs, not knowing
or understanding the taboo (being by
temperament less interested in robot
technology than humans are), buy some
of the human robots and remove the
safeguards to make it easier to
construct lots of them.

By the time the humans find out what
they have done it is too late to do
anything about it - and the story just
leaves you hanging, wondering what
happened next.

Just a short story, in one of those
shared-universe series Asimov set up as
a context for stories written by others
- I can't remember the name now,
though.

BTW sorry about the replies to digests
- I try to remember not to do that
but slip up now and again.

Robert

🔗Carl Lumma <clumma@...>

7/10/2004 9:44:23 AM

> > So in that sense he shows that
> > no program can ever pass the
> > Turing argument in full generality.
>
> Does that cover self-extensible programs?
>
> Yes, if the extension is done by adding
> axioms, using an inbuilt axiom system
> to evaluate their truth.

Hmm... sounds right.

> If it is done by some other method, then
> you are back in the same situation as
> with neural nets and expert systems:
> you don't know how it works, so how can
> you say that it understands truth, or
> even that it will always come up with
> true answers?

You can say that they understand truth in
the same way you can say other humans
understand truth: by observing their behavior.

> Of course we don't know how we work
> either, but then we are the ones asking
> the question: we know what truth is,
> and we are trying to find out whether a
> computer can understand the same notion
> of truth as a human being.
>
> That's enough for me anyway, though I
> can understand it mightn't convince
> everyone, depending on one's views.

Yes: functional isomorphism is enough
for me.

> So unless you say it also changes the
> language in which it is written as it
> goes along (and there is no great
> advantage in doing that, since all
> general-purpose computer languages are
> universal as far as computing is
> concerned), that gives you a finite
> program plus data.

Yes. But it might still be the case that
G today is no longer G tomorrow.

-Carl

🔗Carl Lumma <clumma@...>

7/10/2004 9:57:45 AM

> Hi Carl,
>
> That got sent by accident while I was
> still thinking about the
> self-extensible program idea.

Oops! I've already replied. I'll
play catch-up here...

> BTW, a self-extensible program is still
> a finite program with a fixed number of
> instructions

Yes, it is finite at any point in
time.

> If the machine can rebuild itself then
> that gets harder, because it could
> build in glitches, deliberately or
> through evolution.

It doesn't need to rebuild itself to
get glitches. Turing machines already
have access to the most random
behaviors known.

> I remember a short SF story in which
> there is a great taboo amongst humans
> on ever building self-replicating
> machines, because then you could get
> imperfect replication and evolution,
> and then who knows what could happen.
> The machines can copy themselves, but
> there are always safeguards to limit
> how much is done - they have to go to a
> human (or ET) surveyor who unlocks them
> so that they can replicate.

Our cells do have telomeres, which limit
the number of times they can divide.
This is believed to have evolved as an
anti-cancer measure, though I'm not sure
it works in all cases.

> But another group of ETs, not knowing
> or understanding the taboo (being by
> temperament less interested in robot
> technology than humans are), buy some
> of the human robots and remove the
> safeguards to make it easier to
> construct lots of them.
>
> By the time the humans find out what
> they have done it is too late to do
> anything about it - and the story just
> leaves you hanging, wondering what
> happened next.

Cool!

-Carl

🔗Paul Erlich <PERLICH@...>

7/10/2004 10:09:36 PM

--- In metatuning@yahoogroups.com, "Carl Lumma" <clumma@y...> wrote:

> Turing machines already
> have access to the most random
> behaviors known.

They do? Can you elaborate? I thought quantum phenomena, like
radioactive decay, were the most random behaviors known. I thought
there were no algorithms that generate this level of randomness,
unless you count Chaitin's.

🔗Gene Ward Smith <gwsmith@...>

7/10/2004 10:48:22 PM

--- In metatuning@yahoogroups.com, "Paul Erlich" <PERLICH@A...> wrote:
> --- In metatuning@yahoogroups.com, "Carl Lumma" <clumma@y...> wrote:
>
> > Turing machines already
> > have access to the most random
> > behaviors known.
>
> They do? Can you elaborate? I thought quantum phenomena, like
> radioactive decay, were the most random behaviors known. I thought
> there were no algorithms that generate this level of randomness,
> unless you count Chaitin's.

"Anyone who considers arithmetical methods of producing random digits
is, of course, in a state of sin."
-- John von Neumann, 1951

🔗Carl Lumma <clumma@...>

7/11/2004 12:19:35 AM

> > Turing machines already have access to the most random
> > behaviors known.
>
> They do? Can you elaborate? I thought quantum phenomena, like
> radioactive decay, were the most random behaviors known. I
> thought there were no algorithms that generate this level of
> randomness, unless you count Chaitin's.

Chaitin's stuff is the most random stuff there *isn't*. He
defines things so random, they're uncomputable. In fact the
act of computing them would make them known, and therefore
less random in some sense. There's an astoundingly beautiful
result that's like Gödel's, but based on the Berry paradox
instead of the Russell paradox. He defines a number that's
too short to speak of!
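
One way to make the link between randomness and
computation concrete (a rough illustration of the
underlying idea, not Chaitin's actual construction):
Kolmogorov complexity is uncomputable, but any
compressor gives a computable upper bound on it,
and a maximally random string is one that nothing
can shrink.

import random
import zlib

ordered = bytes(1000)   # 1000 zero bytes
scrambled = bytes(random.getrandbits(8)
                  for _ in range(1000))

# Highly ordered data compresses to almost nothing;
# random-looking data barely compresses at all.
print(len(zlib.compress(ordered)))    # ~17 bytes
print(len(zlib.compress(scrambled)))  # ~1000+ bytes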

-Carl

🔗Carl Lumma <clumma@...>

7/11/2004 1:19:04 AM

> > Turing machines already have access to the most random
> > behaviors known.
>
> They do? Can you elaborate? I thought quantum phenomena,
> like radioactive decay, were the most random behaviors
> known. I thought there were no algorithms that generate
> this level of randomness, unless you count Chaitin's.

Not according to Wolfram. He compares various empirical
sources of randomness to Rule 30 (as well as to some more
well-known algorithmic methods that are embarrassingly
poor!), including:

() Electronic noise
    () thermal (Johnson) noise
    () shot noise
    () flicker (1/f) noise
() Mechanical randomness
    () dice
    () roulette wheels
() Quantum randomness
"It is usually assumed that even if all else fails a
quantum process such as radioactive decay will yield
perfect randomness. But in practice the most accurate
measurements show phenomena such as 1/f noise,
presumably as a result of features of the detector and
perhaps of electromagnetic fields associated with decay
products. Acceptable randomness has however been
obtained at rates of tens of bits per second. Recent
attempts have also been made to produce quantum
randomness at megahertz rates by detecting paths of
single photons."
He's referring here to Bell-inequality-violating
experiments...
"In ordinary quantum theory, a straightforward calculation
implies that the expected value of the product of the two
measured spin values will be -Cos[theta]. But now imagine
instead that when each photon is produced it is assigned
a "hidden variable" Phi that in effect explicitly specifies
the angle of its polarization. Then assume that a polarizer
oriented at 0deg. will measure the spin of such a photon to
have value f[Phi] for some fixed function f. Now the
expected value of the product of the two measured spin
values is found just by averaging over Phi as
Integrate[f[Phi]f[theta-Phi], {Phi, 0, 2Pi}]/(2Pi)
A version of Bell's inequalities is then that this integral
can decrease with theta no faster than theta/(2Pi)-1 -- as
achieved when f = Sign (in 3-D Phi must be extended to a
sphere, but the same final result holds). Yet as mentioned
on pg. 1058, actual experiments show that in fact the
decrease with theta is more rapid -- and is instead consistent
with the quantum theory result -Cos[theta]. So what this
means is that there is in a sense more correlation between
measurements made on separated photons than can apparently
be explained by the individual photons carrying some kind
of hidden property. (In the standard formalism of quantum
theory this is normally explained by saying that the two
photons can only meaningfully be considered as part of a
single "entangled" state. Note that because of the
probabilistic nature of the correlations it turns out to
be impossible to use them to do anything that would
normally be considered communicating information faster
than the speed of light.)
A basic assumption in deriving Bell's inequalities is
that the choice of polarizer angle for measuring one
photon is not affected by the choice of". . .
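
As a rough numerical check on the quoted calculation
(my choice of f is an assumption -- Wolfram writes
only "f = Sign", so take f(phi) = sign(cos phi), and
sign conventions for the correlation vary), the
hidden-variable average comes out piecewise linear
in theta, while the quantum prediction is a cosine,
and the two disagree at intermediate angles:

import math

# Assumed hidden-variable response function.
def f(phi):
    return 1.0 if math.cos(phi) >= 0 else -1.0

def hidden_variable_correlation(theta, n=20000):
    # Numerically average f(phi) * f(theta - phi)
    # over the hidden angle phi, approximating
    # Integrate[f[Phi] f[theta-Phi], {Phi, 0, 2Pi}]/(2Pi)
    total = sum(f(2 * math.pi * k / n)
                * f(theta - 2 * math.pi * k / n)
                for k in range(n))
    return total / n

for theta in (0, math.pi / 4, math.pi / 2,
              3 * math.pi / 4, math.pi):
    print(round(hidden_variable_correlation(theta), 3),
          round(math.cos(theta), 3))
# Hidden-variable column: approx 1.0, 0.5, 0.0, -0.5,
# -1.0 (piecewise linear in theta). Quantum column:
# 1.0, 0.707, 0.0, -0.707, -1.0 (a cosine). Up to an
# overall sign convention they agree at 0, Pi/2 and
# Pi, but differ at the intermediate angles.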

Hey, I forgot, the book is now online!

http://www.wolframscience.com/nksonline/page-1065

You'll need to register, though.

Of course, he seldom gives references to back up his
claims. And he uses an annoying number of self-
references -- wormholes in the book. He obviously
tagged sections with variables that were later replaced
with page numbers once the book was complete. I think
programs like LaTeX let you do this, but Wolfram
apparently used Mathematica.

In short, Wolfram says that there's ultimately only one
source for randomness, and that's "intrinsic randomness
generation" (which I attempted to characterize in a
previous post). He accuses randomness in chaotic systems
of having been injected in the initial conditions. And
where there really is randomness being observed, he
thinks its source is intrinsic randomness generation.
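
For reference, here is a minimal sketch of Rule 30
used that way (this is the standard cellular-automaton
update rule; the function name is mine): run it from a
single black cell and read off the centre column as
the random bit stream.

def rule30_center_column(steps):
    width = 2 * steps + 1   # wide enough that edge
                            # effects never reach the
                            # centre within `steps`
    cells = [0] * width
    center = width // 2
    cells[center] = 1       # single black cell
    bits = []
    for _ in range(steps):
        bits.append(cells[center])
        # Rule 30: new cell = left XOR (centre OR right)
        cells = [cells[i - 1] ^ (cells[i] | cells[(i + 1) % width])
                 for i in range(width)]
    return bits

print(''.join(str(b) for b in rule30_center_column(64)))
# starts 110111... and passes many statistical tests
# for randomness despite being fully deterministic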

One prediction of such a position is that where one does
observe randomness in nature, one might occasionally
expect to see the same random sequence occurring again.
In fact Wolfram suggests the morphology of turbulent
flows (among other things) provides an example of this.

However, he says empirical sources of randomness are
generally poor in quality because whatever intrinsic
random generation is going on underneath is degraded
by correlations stemming from things like conservation
laws. For example, after a spark crosses between two
plates, effects on the medium between the plates make
it less (or more) likely that a spark will travel that
path at subsequent times.

Just don't go trying to *prove* that certain numbers
are random -- you get into all kinds of trouble. Mainly,
you need about as many bits of axioms as it takes to
write the number you're trying to prove is random.

-Carl