According to techno-futurists,
the exponential development of technology in general and artificial
intelligence (“AI”) in particular — including the complete digital replication
of human brains — will radically transform humanity via two revolutions. The
first is the "singularity," when artificial intelligence will
redesign itself recursively and progressively, such that AI will become vastly
more powerful than human intelligence ("super strong AI"). The second
revolution will be "virtual immortality," when the fullness of our
mental selves can be uploaded perfectly to non-biological media (such as silicon
chips), and our mental selves will live on beyond the demise of our fleshy,
physical bodies.
AI singularity and virtual
immortality would mark a startling, transhuman world that techno-futurists
envision as inevitable and perhaps just over the horizon. They do not question
whether their vision can be actualized; they only debate when it will occur,
with estimates ranging from 10 to 100 years.
I'm not so sure. Actually, I'm a
skeptic — not because I doubt the science, but because I challenge the
philosophical foundation of the claims. Consciousness is the elephant in the
room, and most techno-futurists do not see it. Whatever consciousness may be,
it affects the nature of the AI singularity and determines whether virtual
immortality is even possible.
It is an open question whether, post-singularity, super strong AI without inner awareness would be in all respects just as powerful as super strong AI with inner awareness, and in no respects deficient. In other words, are there kinds of cognition that, in principle or of necessity, require true consciousness? For assessing the AI singularity, the question of consciousness is profound.
Is virtual immortality
possible?
Now, what about virtual
immortality — digitizing and uploading the fullness of one's first-person
mental self (the "I") from wet, mushy, physical brains that die and decay
to new, more permanent (non-biological) media or substrates? Could this
actually work?
Again, the possibilities for
virtual immortality relate to each of the alternative causes of consciousness.
1. If consciousness is entirely
physical, then our first-person mental self would be uploadable, and some kind
of virtual immortality would be attainable. The technology might take hundreds
or thousands of years — not decades, as techno-optimists believe — but barring
human-wide catastrophe, it would happen.
2. If consciousness is an independent, non-reducible feature of physical reality, then our first-person mental self might be uploadable — though less clearly than in No. 1 above, because, not knowing what this consciousness-causing feature is, we could not know whether it could be manipulated by technology, no matter how advanced. But because consciousness would still be physical, effective manipulation and successful uploading would seem possible.
3. If consciousness is a non-reducible
feature of each and every elementary physical field and particle (panpsychism),
then it would seem probable that our first-person mental self would be
uploadable, because there would probably be regularities in the way particles
would need to be aggregated to produce consciousness, and if regularities, then
advanced technologies could learn to control them.
4. If consciousness is a
radically separate, nonphysical substance (dualism), then it would seem
impossible to upload our first-person mental self by digitally replicating the
brain, because a necessary cause of our consciousness, this nonphysical
component, would be absent.
5. If consciousness is ultimate
reality, then consciousness would exist of itself, without any physical
prerequisites. But would the unique digital pattern of a complete physical
brain (derived, in this case, from consciousness) favor a specific segment of
the cosmic consciousness (i.e., our unique first-person mental self)? It's not
clear, in this extreme case, that uploading would make much difference (or much
sense).
In trying to distinguish these alternatives, I am troubled by a simple observation. Assume that a perfect digital replication of my brain does, in fact, generate human-level consciousness (surely under alternative 1, possibly under 2, probably under 3, not under 4; under 5 it doesn't matter). This would mean that my first-person self and personal awareness could be uploaded to a new medium (non-biological or even, for that matter, a new biological body). But if "I" can be replicated once, then I can be replicated twice; and if twice, then an unlimited number of times.
So, what happens to my
first-person inner awareness? What happens to my "I"?
Assume I do the digital replication
procedure and it works perfectly — say, five times.
Where is my first-person inner
awareness located? Where am I?
Each of the five replicas would
state with unabashed certainty that he is "Robert Kuhn," and no one
could dispute them. (For simplicity of the argument, physical appearances of
the clones are neutralized.) Inhabiting my original body, I would also claim to
be the real “me,” but I could not prove my priority.
I'll frame the question more
precisely. Comparing my inner awareness from right before to right after the
replications, will I feel or sense differently? Here are four obvious
possibilities, with their implications:
1. I do not sense any difference in my first-person awareness. This would mean that the five replicas are like super-identical twins — they are independent conscious entities, such that each begins instantly to diverge from the others. This would imply that consciousness is the local expression or manifestation of a set of physical factors or patterns. (An alternative explanation would be that the replicas are zombies, with no inner awareness — a charge, of course, they would deny and denounce.)
2. My first-person awareness suddenly has six parts — my original and the five replicas in different locations — and they all somehow merge or blur together into a single conscious frame, the six conscious entities fusing into a single composite (if not coherent) "picture." In this way, the unified effect of my six conscious centers would be like the "binding problem" on steroids. (The binding problem in psychology asks how our separate sense modalities, like sight and sound, come together such that our normal conscious experience feels singular and smooth, not built up from discrete, disparate elements.) This would mean that consciousness has some kind of overarching presence or a kind of supra-physical structure.
3. My personal first-person awareness shifts from one conscious entity to another, or fragments, or fractionates. These states are logically (if remotely) possible, but only, I think, if consciousness were an imperfect, incomplete emanation of evolution, devoid of fundamental grounding.
4. My personal first-person awareness disappears upon replication, although each of the six (original plus five) claims to be the original and really believes it. (This, too, would make consciousness even more mysterious.)
Suppose, after the replicas are made, the original (me) is destroyed. What then? Almost certainly my first-person awareness would vanish, although each of the five replicas would assert indignantly that he is the real "Robert Kuhn" and would advise, perhaps smugly, not to fret over the deceased and discarded original.
At some time in the future, assuming that the deep cause of consciousness permits it, the technology will be ready. If I were around, would I submit? I might, because I'm confident that alternative 1 (above) is true and alternatives 2, 3 and 4 are false, and that the replication procedure would not affect my first-person mental self one whit. (So I sure wouldn't let them destroy the original.)
Bottom line, for me for now: The
AI singularity and virtual immortality must confront the deep cause of
consciousness.
