
Re: [metatuning] Digest Number 1022

Robert Walker <robertwalker@...>

7/9/2004 9:35:15 PM

Hi Carl,

> It may also be worth pointing out here that
> finding an algorithm that produces a given
> desired output can be a very hard problem. So
> hard, in fact, that I think it *extremely*
> unlikely any pure expert system will ever
> pass a Turing test.

Oh, I agree. From what I've seen, nothing in the
field of computing is anywhere near achieving
this, of being able to behave like a human,
except in specially set-up situations where
rules are imposed that make it easy for
the computer to "bluff" its way as a
purported human.

Penrose's argument doesn't actually
show it to be impossible to build good mimics
of human understanding that might
deceive a human for a long time.
But it does suggest a test that could
be used: if you have a program and have
access to its source code, you will
be able to construct a Godel sentence
for it. Give the program as long as it needs
to try to see whether the sentence is true,
and do the same with a human, where of course
the human has to be a mathematician
I suppose.

To even out the field,
give the human access to whatever
tools they need to analyse
the statement. You can do the same with
the computer program, but having the tools
won't do it any good, because the
problem is that it won't understand
the statement at the meta level.
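
For what it's worth, the construction behind that test
can be sketched in the standard textbook way (this is
the usual Godel-sentence setup, nothing specific to any
particular program): for a consistent, recursively
axiomatized system F strong enough for arithmetic, the
diagonal lemma yields a sentence G_F asserting its own
unprovability.

```latex
% Sketch of the standard Goedel-sentence construction for a
% consistent, recursively axiomatized theory F containing
% enough arithmetic. Prov_F is F's provability predicate and
% \ulcorner G_F \urcorner is the Goedel number of G_F.
% (Uses amssymb for \nvdash, \mathbb, \ulcorner.)
\[
  F \vdash G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)
\]
% If F is consistent, F does not prove G_F; and since G_F
% "says" exactly that it is unprovable, G_F is true in the
% standard model of arithmetic:
\[
  F \nvdash G_F, \qquad \mathbb{N} \models G_F .
\]
```

The catch is the one mentioned above: you need access
to the program's rules to build G_F, and the program,
running those same rules, has no standpoint from which
to recognise G_F as true.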

So in that sense he shows that
no program can ever pass the
Turing test in full generality.
But whether one could pass it
in a more limited sense is, I suppose,
another question. The thought that
we may someday have to deal with
robots that have a notion of pseudo-truth,
almost but not quite corresponding
to human truth, is interesting.
I wonder if anyone has explored that
in novels.

Of course Asimov's robots
were "positronic robots", and
with a bit of leniency one could
understand them as relying on non-
computable phenomena. Maybe
now they would be called
"gravitonic robots" and use
Penrose's gravitational collapse
of the wave function. But who
knows, perhaps that is just fun
SF and will never be practical.

> But why should the intoxicated view be
> "distorted", while the sober view is
> raised to the level of defining reality?

Because the distorted view distorts
one's interactions with others,
so that you no longer have a clear
shared reality in which to communicate.

> > Even in Maths actually I feel that
> > it is sometimes insightful to study the history
> > of the subject, not to understand the
> > proofs, but to understand the context
> > for the proof and what gave rise to it,
> > and to give it more interest and life
> > by seeing it in its original setting.

> I actually study it to understand the proofs. :)

Yes, indeed. Later the proofs get streamlined
so much, and the explanatory material removed
from them, that this makes sense. I think it is
a good way to learn maths, in fact: we are
presented with ideas that took humans
centuries to assimilate, and we learn them
in a few years. Maybe one misses something
there, and it could be a help to go back
through it again a bit more slowly.

> Once again, humans are irrelevant. Any proof
> a human can write could be written by Hilbert's
> automated proof-checker.

But humans wrote the proof checker. Every time
you make a new axiom system you can
write a proof checker for that axiom system.
But until you codify your ideas into an
axiom system, you have no proof checker.
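
To illustrate the "axiom system first, then proof checker"
point, here is a minimal toy checker in Python for a
Hilbert-style propositional system (axiom schemas K and S
plus modus ponens). It is only a sketch of the mechanical
character of the job; real checkers for systems like ZFC
are vastly larger, but the principle is the same.

```python
# Toy proof checker for a Hilbert-style propositional system.
# Formulas are atoms (strings) or implications ('->', a, b).
# Axiom schemas:
#   K:  A -> (B -> A)
#   S:  (A -> (B -> C)) -> ((A -> B) -> (A -> C))
# Inference rule: modus ponens (from A and A -> B, infer B).

def is_impl(f):
    return isinstance(f, tuple) and len(f) == 3 and f[0] == '->'

def matches_K(f):
    # A -> (B -> A): consequent's consequent equals the antecedent.
    return is_impl(f) and is_impl(f[2]) and f[2][2] == f[1]

def matches_S(f):
    # (A -> (B -> C)) -> ((A -> B) -> (A -> C))
    if not (is_impl(f) and is_impl(f[1]) and is_impl(f[1][2])
            and is_impl(f[2]) and is_impl(f[2][1]) and is_impl(f[2][2])):
        return False
    a, b, c = f[1][1], f[1][2][1], f[1][2][2]
    return f[2][1] == ('->', a, b) and f[2][2] == ('->', a, c)

def check(proof):
    """Accept iff every step is an axiom instance or follows by
    modus ponens from two earlier steps."""
    derived = []
    for f in proof:
        ok = (matches_K(f) or matches_S(f)
              or any(('->', g, f) in derived for g in derived))
        if not ok:
            return False
        derived.append(f)
    return True

# The classic five-step derivation of A -> A:
I = ('->', 'A', 'A')
proof = [
    ('->', ('->', 'A', ('->', I, 'A')), ('->', ('->', 'A', I), I)),  # S
    ('->', 'A', ('->', I, 'A')),                                     # K
    ('->', ('->', 'A', I), I),                                       # MP
    ('->', 'A', I),                                                  # K
    I,                                                               # MP
]
print(check(proof))               # True
print(check([('->', 'A', 'B')]))  # False: not an axiom, no premises
```

And the point stands in the code: the checker is nothing
but the consequence of schemas a human chose to write down;
change the axioms and you need new `matches_*` functions.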

Historically and experientially, the
truth comes first; then you write an
axiom system to try to capture
as much of it as possible in a system of
rules for a particular field, say
geometry, numbers or whatever.

Eventually people made grand theories
like ZFC, and you can have a proof checker for
that, if you can persuade the mathematicians to
put their results into that form. But
ZFC isn't a complete theory, and by
Godel's theorem you will never have one.

If humans really are irrelevant, then
you need some way for the proof checker
to spontaneously assemble without the
work of human engineers or programmers.
And then who is it who says that it is true,
anyway?

That's not to say "humans" has to mean humans
as in bipedal mammals who are closely related
genetically to chimpanzees. I'm sure
any conscious being will be able to share
the same notions of truth if it
has a developed enough understanding
of the world to be able to formulate the
concept. I think it is instinctive, in
fact, so even if you can't express it
well conceptually, you have it within you,
like the character who discovered he had been
speaking prose all his life :-).

Even chimpanzees are capable of lying,
I remember reading, in their actions,
which I think means they have some kind
of rudimentary, probably not very well
formed, notion of truth; you have to know
what truth is in order to be able to lie,
don't you? Probably other creatures
too have some kind of rudimentary
understanding that certain things
are true and others aren't, to guide
their actions. E.g. a dog that knows
there is meat in a room, by smelling
or whatever, could be said
to have some kind of notion of truth.
Of course it can't understand Godel's
theorem, but it has the potential there;
it just isn't bright enough to follow the
proof.

But a programmed computer, I think,
can't have even that spark that could
be developed into a fully fledged
notion of truth, because of
Penrose's argument. I used to think
it was possible, and that one might
be born as an aware
programmed computer. But I can't
see how it could be now. You could
mistakenly think you are a computer,
possibly, but you wouldn't actually
be one; if you were aware, you would depart
from your programming, sort of when
no-one is looking or something.

If somehow, mysteriously, an awareness
capable of understanding truth could
inhabit a computer, then it could
only do so as a result of
hardware glitches etc. somehow
patterned to let it survive
(maybe by gravitationally induced
quantum coherence again :-)).
Probably a story of some kind
could be written there.

Robert

Carl Lumma <clumma@...>

7/9/2004 11:38:23 PM

> Though, Penrose's argument doesn't actually
> show it to be impossible to get good mimics
> of human understanding that might
> deceive a human for a long time.
> But it does give a test that could
> be used - if you have a program and have
> access to its programming, you will
> be able to construct a Godel sentence
> for it. Give it as long as it needs
> to try and see if it is true, and
> same with a human, where of course
> the human has to be a mathematician
> I suppose.

Really!? I'd love to know how to give that
test!

> So in that sense he shows that
> no program can ever pass the
> Turing test in full generality.

Does that cover self-extensible programs?

> > But why should the intoxicated view be
> > "distorted", while the sober view is
> > raised to the level of defining reality?
>
> Because the distorted view distorts
> one's interactions with others,
> so that you no longer have a clear
> shared reality in which to communicate.

Oh, I dunno...

> If humans really are irrelevant, then
> you need some way for the proof checker
> to spontaneously assemble without the
> work of human engineers or programmers.
> And then who is it who says that it is true,
> anyway?

There is work on minimal proof checkers,
I think....

-Carl