DO ANDROIDS CATHECT?

by Loren Means



An android is an artificial human, fabricated from flesh and blood or from chemicals. Androids resemble humans, and breathe as humans do, but they do not need to eat, drink, or sleep, and they can be stronger than humans. They look exactly like humans so that they can pass among them, which makes them more effective when used as weapons. Androids are usually made to do work or to fight.

An android differs from a cyborg in that a cyborg is part machine and part living tissue, while an android is entirely organic. (The word "cyborg" comes from "cybernetic organism": "cybernetic" refers to a machine that controls itself, and an "organism" is a living being; a "cyborg" is thus a combination of machine and living creature.) As Reese describes the cyborg in James Cameron's 1984 The Terminator: "Underneath it's a hyperalloy combat chassis. Microprocessor controlled. Fully armored. Very tough. But outside it's living, human tissue. Flesh, skin, hair...blood. All grown for the cyborgs." Cyborgs are invariably stronger than humans.

Androids and cyborgs are both different from robots, which are all machine. The first robots Isaac Asimov wrote about in the stories collected in I, Robot (written beginning in 1939) didn't have to look like humans, and everybody knew they were machines. Asimov formulated the Three Laws of Robotics and coined the word "robotics" for the technology of machines that do the work of humans and control themselves. We now say "android" when we mean manufactured humans, but the word "robot" was first used in Karel Capek's 1921 play R. U. R. In that play the robots looked just like humans and were made from chemicals, so they were really androids, but the word "android" wasn't in use then.

Androids, cyborgs, and robots all have computer memory banks superior to human memory and intelligence: once they learn something, they can recall all of it immediately. Androids are made as adults, and have no childhood memories because they have no childhoods. They usually have no need of feelings or empathy. Memories can be implanted to make them believe they had a childhood, and thus that they are human, which makes them harder for humans to identify. In Do Androids Dream of Electric Sheep? by Philip K. Dick, the only way the humans can identify an android is by administering a test for empathy. The androids in this book have emotions (the android Roy Baty cries when his android wife is killed), but no empathy for other creatures (the androids pull the legs off a spider).

Issues of androids being endowed with human-like memories and emotions come down to the question of whether these attributes make androids easier for humans to control. This question is crucial not only because of the inherent superiority of androids, but because, if viable androids are developed, they will eventually outnumber humans.

Asimov was probably surprised when ensuing authors such as Dick and Silverberg had the temerity to ignore his Three Laws and once again create murderous androids.

Ironically, the institution within robots of these laws, which inhibit them from harming humans, ultimately makes robots responsible for the welfare of humans, as surrogate parental figures. The robots are thus elevated into positions of authority over humans. The provocative aspects of this situation are pointedly alluded to in Asimov's series of novels about a human cop, Baley, who reluctantly finds himself working occasionally with a robot partner, Daneel. (These novels, The Caves of Steel (1953), The Naked Sun (1956), and The Robots of Dawn (1983), have probably escaped designation as the first tech noir fiction because Asimov's positivistic sensibility prohibited him from being adequately dystopic.) Despite the robot's superiority, the human must solve the cases primarily on his own in order to stay in business.

The parental nature of Daneel's relation to Baley is exemplified in the opening of The Naked Sun, in which Daneel insists on protecting Baley from his claustrophobia by refusing to open the roof of the car in which they are riding:

The robot would, of course, be quite beyond the reach of force. Daneel's strength, if exerted fully, would be a hundred times that of flesh and blood. He would be perfectly capable of restraining Baley without ever hurting him...A threat of destruction was useless against a robot. Self-preservation was only the Third Law...It would not trouble Daneel to be destroyed if the alternative were breaking the First Law. And Baley did not wish to destroy Daneel...Yet he did want to see out of the car. It was becoming an obsession with him. He couldn't allow this nurse-infant relationship to build up.

Baley tricks Daneel by getting a robot ignorant of Baley's claustrophobia to open the roof. Daneel is proven right, of course.

Asimov's last robot story, "The Bicentennial Man," entails his ultimate attempt to define the difference between humans and robots. As James Gunn puts it in Isaac Asimov: The Foundations of Science Fiction, regarding the story's robot, Andrew, who becomes human:

But the final distinction is Andrew's sentimental and hard-to-rationalize desire to be human when he is so clearly superior to humans in every way. The sentimentality that threatens the story is essential to the argument: robots are always rational and humans are not. Humans act for emotional reasons, and, ultimately, so does Andrew. Andrew, indeed, has become human.

The prominent British science fiction author Brian Aldiss discusses the difference between humans and robots in The Mechanical God:

"Order is not possible in human affairs, or not at our present youthful evolutionary stage; nor will it be until we are reduced to a robotlike state of obedience. Robots, being amenable to laws and orders, are amenable to order. They make ideal citizens--but only of a dead culture.

The ideas robots conform to are, of course, humanity's ideas. But man comprises emotion as well as intellect. Man, being whole, is always in conflict with his own ideas. Robots are only half human. In consequence, they are able to conform to man's intellectual ideas against which his spirit constantly rebels.

If...we are to become beings without emotional tone, with merely automatic responses to given situations--then robots represent in symbolic form the next stage of human evolution. In which case, we should take heed of the warning and accept a measure of chaos in preference to a rule of logic. Such is the message we receive from the novels of Philip K. Dick, one of the best robotic-writers, because he generally uses his robots as buffers between the living and nonliving. Dick's...robots are paradigms of people isolated through illness, with low-voltage ontological currents."

Patricia Warrick is skeptical that a robot could ever possess creativity. In Many Futures, Many Worlds: Theme and Form in Science Fiction, Thomas D. Clareson, editor, she says:

"Granted that a machine can think logically, does not the human brain have another kind of capacity--creative intelligence and intuition--which the machine can never be made to duplicate? Since we have no agreement about what intelligence is and have little understanding of how the human brain works, it is not possible at this point to make any kind of meaningful prediction about whether it can ever be reproduced mechanically."

On the other hand, Ted Krulik, while quoting Warrick in his article on machine intelligence, takes issue with her in The Intersection of Science Fiction and Philosophy, Robert E. Myers, editor:

"Many computer programmers and robotics technicians, however, disagree with Warrick. Intelligence in the computer mind is observable: the computer is capable of accumulating vast amounts of information, assimilating them, and presenting a workable output that may vary slightly according to the characteristics of the computer. Perhaps, when it is possible to break the wide variations in human traits into individual actions and to program them into a robot, a kind of creative ability and enjoyment may be inscribed onto the mind of the mechanical being."

Krulik's statement raises two questions: the tenuous and problematical relationship between "intelligence" on the one hand and "creative ability" and "enjoyment" on the other; and what possible justification there could be for the vast effort and expense of replicating "creative ability," and especially "enjoyment," in a robot.

Warrick sees Isaac Asimov as postulating the perfect union of the best properties of human and robot. In The Cybernetic Imagination in Science Fiction she says:

"The two robot detective novels, The Caves of Steel and The Naked Sun, illustrate Asimov's faith that man and machine can form a harmonious relationship. Machines can perform dependably as accurate logic machines, handling large masses of data and doing mathematical calculations at fantastic speeds. They are incorruptible because they are without emotions and consequently have no ambitions, loves, or other distractions to subvert the functioning of logic. Man, in contrast, is capable of creative problem solving and can exercise judgement in choosing between alternatives. His intuition can be of value if his insights are supported and developed through the mathematical logic that the computer provides."

In at least one of his robot short stories, "Risk," however, Asimov posited humans as superior to robots in at least one area, that of ingenuity. As Asimov's archetypal fictional roboticist, Susan Calvin, puts it:

"Robots have no ingenuity. Their minds are finite and can be calculated to the last decimal. That, in fact, is my job...`Find out what's wrong' is not an order you can give to a robot; only to a man. The human brain, so far at least, is beyond calculation."

Asimov himself delineates his perception of the differences between machine and human intelligence in his anthology Machines That Think:

"If insight, intuition, creativity, the ability to view a problem as a whole and guess the answer by the "feel" of the situation is a measure of intelligence, computers are very unintelligent indeed. Nor can we see right now how this deficiency in computers can be easily remedied, since human beings cannot program a computer to be intuitive or creative for the very good reason that we do not know what we ourselves do when we exercise these qualities.

Two different intelligences, specializing in two different directions, each useful, can in a symbiotic relationship learn to cooperate with the natural law of the Universe far more efficiently than either could alone. Viewed in this fashion, the robot/computer will not replace us but will serve us as friend and ally in the march toward the glorious future."

One wonders if Asimov had his tongue in his cheek while writing the last sentence in the above statement. Patricia Warrick also engages in some fanciful flights while exploring the implications of Asimov's Three Laws in The Cybernetic Imagination in Science Fiction:

"Defining ideal behavior and writing a computer program to obtain it would be possible. The program would control the performance of the technology, not the performance of man himself. However, man increasingly expresses himself through technology. Programming the technology to operate according to ethical principles would be a great step toward an ethical society. The world's great religious systems have attempted to program man's mind with an ethical system, but they have been only partially effective because man's emotions, ambitions, and aggressions often override the programming. Overriding would not be a problem, however, in the computer program....The implementation of the model would mean some restriction in individual liberty; a degree of conformity would be the result...Perhaps in the real world ethical concepts could be operationalized in computer technology. No other science fiction writer has given the world that vision."

Later in the same essay, however, Warrick discusses the last Asimov stories (from the collection The Bicentennial Man, 1976), which carry these ideas further. "Feminine Intuition" (1969) postulates the use of the principle of uncertainty in the design of robot brains. As Warrick puts it, "If the uncertainty effect can be introduced into the robot brain, it will share the creativity of the human brain." This is, of course, an absurdly simplistic and reductive conception of the nature of creativity. "That Thou Art Mindful of Him" (1974), on the other hand, postulates the development of robots with the capacity for judgement. Warrick says "The possibility that machine intelligence may be both superior to human intelligence and likely to dominate human intelligence appears for the first time in this story."

"The Bicentennial Man" (1976) further develops these ideas, but in a negative way: a robot is produced which is rendered creative by a defect. The defect is corrected in later models. Warrick uses this story to arrive at a rather far-fetched conclusion:

"The story follows the movement of mechanical intelligence toward human intelligence and death...against man's development of technology and movement toward artificial intelligence and immortality. Knowledge or information eventually dies in the organic brain, but it can survive indefinitely in a mechanical brain. Thus the inorganic form may well be the most likely form for the survival of intelligence in the universe. As machine intelligence evolves to human form, human intelligence is evolving toward machine form."

The Swedish science fiction enthusiast Sam Lundwall concocted a provocative if simplistic view of the difference between robots and androids in Science Fiction: What It's All About:

"But if man has succeeded in taming his robots, this is not the case with the androids. They are disagreeably like man in all respects save the ability to procreate. The androids are manufactured in android factories and are sent out into society with a production number stamped on the forehead. This number is the only thing that tells them apart from human beings; they even have a sex urge, and they are, like human beings, utterly undependable."

Stanislaw Lem wrote several stories about androids, many of them collected in the 1977 anthology Mortal Engines. One of the most challenging of these is the last, "The Mask" (1976). Mark Rose summarizes the use of human-like memories in the story in his book Alien Encounters:

"Composed in the first person from the point of view of an anonymous robot programmed...to seduce and murder..., the narrative concerns the machine's discovery of its nature and its struggle to free itself from the instructions "written" within it...At first the machine regards itself as a woman. But gradually, meditating upon certain anomalies and inconsistencies in her memories--for example, she finds that she recalls several different personal histories--she begins to suspect that her human appearance is a mask...The multiple memories that play through the robot's consciousness whenever she turns inward to seek her true identity are a repertoire provided so that she will be able to assume various guises depending upon the requirements of the moment."

Here the memories are used to keep the robot from the self-knowledge which might allow it the free will to resist its programming to kill a human. The ending is ambiguous, but Lem's assumption that a robot could manifest consciousness beyond what is programmed into it is not logically justified.

In Robert Silverberg's 1970 Tower of Glass, the androids are highly emotional, sexual, and filled with religious fervor. The reason for building the androids with such human-mimetic characteristics is not revealed by Silverberg, with the exception of a passing reference to the fact that the androids were desirable because they were "complex in personality." The androids' religious fetish develops unbeknownst to their creator, and when he manifests indifference to their aspirations, the androids rise against the defenseless humans and kill them.

To endow an android with emotion is no mean task. All of the qualities necessary for the fulfillment of common slave duties--logic, memory, obedience--are present in that part of the human brain, the left hemisphere, which is duplicated to a certain extent in a computer. But to build in emotion entails building in infinitely more capacity: duplicating the right hemisphere of the brain and the limbic system to supply the instinctual, sensory, and past-oriented random associations which make up the illogic, bordering at least to some extent on the neurotic, that makes a human. This is so monumental a task, for so flimsy a justification, that it suggests an error of judgement on Silverberg's part. Certainly other science fiction writers who have postulated androids have not so blithely and cavalierly ascribed emotions to them. Given the internal logic of Silverberg's hypothesis, it follows that if the androids developed religion, they would probably also develop creativity.

James Cameron's two Terminator films, The Terminator (1984) and Terminator 2: Judgment Day (1991), postulate a cyborg so powerful that it cannot be stopped by humans unless it is manipulated into the clutches of another machine which will either stamp or melt it into an inert state. This machine is a representative of an army of cyborgs fighting a winning battle to stamp out humans in the future. The second film adds complexity by pitting the cyborg model from the first film against the newer model which renders it obsolescent. These cyborgs appear human but lack any vestige of human interiority, since they have no need of any. As the human who has returned from the future to combat the first Terminator puts it, "It absolutely will not stop. Ever. Until you are dead." The ultimate message of the films is that the only way to combat such a machine effectively is never to create it in the first place.

The child John Connor tries to teach his postmodernist version of morality to the cyborg, which at the film's end annihilates itself to save humanity. Is this an outcropping of emotion, or simply refined logic?

Dick also postulated robots who reproduced themselves and tried to take over the world. As Patricia Warrick put it in Robots, Androids, and Mechanical Oddities: The Science Fiction of Philip K. Dick:

In "The Last of the Masters," [1955], we saw a world that had rebelled against the oppressive leadership of a robot capable only of maintaining a logical, regulated, mechanistic society. The robot could not respond with empathy to individual needs...in "To Serve the Master" [1956] the robot government has been replaced by a corporate structure run by men just as rigid and uncompromising as the robots were. They have driven the masses of little men into the underground of existence where they tunnel along in dull repetitive work...For Dick any large organizational structure, be it corporate or governmental, becomes oppressive. Ideally, work and creativity should combine...The good fortune of the craftsman who combines art and industry is defined by its sharp contrast with the worker trapped in mechanical corporate production.

Ridley Scott's Blade Runner was released in 1982, based on Philip K. Dick's 1968 novel Do Androids Dream of Electric Sheep? There is little resemblance between the novel and the film, and the major difference between them is Scott's emphasis on the problem of android consciousness.

An android has no memory except that which is programmed into it by its manufacturer. The cyborg in The Terminator, for instance, has a computer in its brain with a "memory bank." This is computer memory, a concept derived from, but not identical to, human memory. Computer memory conceives of time only in terms of order, duration, and attachment to dates. Computers have no conception of past and future as humans think of them, since computer memory consists only of usable facts, while the human "past" consists in large part of unmotivated images with strong emotional resonances. This difference follows from the fact that the human brain consists of two relatively distinct parts: the limbic system, or "old brain," which is principally concerned with sensory perception and emotions, and the cerebral cortex, or "new brain," which handles language and analysis. Arthur Koestler calls this evolutionary grafting "schizophysiology." An android is a computer with flesh, and it seems unlikely that such a product would be deemed to require the emotional aspects of the limbic system.

Note the use of the term "memory" in these excerpts from the novelization of the script of Terminator:

"He walked to the edge of the parking lot and looked out on the city below. A map from his memory overlaid itself on the scene below...He...studied the relief map of Los Angeles, planning a hundred strategies, charting a thousand pathways, and accumulating valuable environmental data before setting off on his mission...After he'd first stolen the station wagon, it had taken him about sixteen minutes to adjust to the random pattern of city traffic...But then he learned to calculate the ebb and flow of the vehicles and through memory and analysis of contextual activity piece together the rules of the road."

In Blade Runner an android is given imitations of human memories. Few androids in science fiction are thus equipped. Why did Scott find this necessary?

The androids in Dick's novel are indifferent to their four-year lifespan, and the issue of android memory is glossed over by Dick. The primary issue in his novel is that of empathy, but, although the only section which is lifted directly from the book is the empathy test, the theme itself is largely ignored by Scott.

Perhaps the most provocative of the passages in Do Androids Dream of Electric Sheep? to be omitted from Blade Runner is a section in which Rick Deckard, the protagonist, is captured by a cadre of androids posing as human police who call into question Deckard's standing as a human and a bounty hunter as well as his sanity. One of the androids, Inspector Garland, says of a bounty hunter under his jurisdiction: "We all came here together on the same ship from Mars. Not Resch; he stayed behind another week, receiving the synthetic memory system." Confronted with this assertion, Resch doubts his own humanity:

"Then at one time an authentic Garland existed, and somewhere along the way got replaced. Or--I've been impregnated with a false memory system. Maybe I only remember Garland over the whole time. But--only androids show up with false memory systems; it's been found ineffective in humans."

Not atypically for Dick, this reference is not amplified in the balance of the novel. (The issue of memory implants is much more fully developed, although in regard to humans, in Dick's 1966 short story "We Can Remember It For You Wholesale," which formed the germ of Paul Verhoeven's film Total Recall.) Resch is tested and found to be human, and disappears from the narrative. (In the film, Rachael suggests to Deckard that he might be an android who has replaced the original human Deckard.) The concept of androids duplicating and replacing humans, like that of memory implants, is alluded to only in this anomalous section, but is an integral part of Dick's work as a whole.

It was typical of Dick to borrow plot devices from earlier novels, and this section is an example of his conception of "parallel worlds." As Stanislaw Lem puts it in an essay praising Dick as the finest American science fiction writer in Science-Fiction Studies:

"The essential point is that a world equipped with the means of splitting perceived reality into indistinguishable likenesses of itself creates practical dilemmas that are known only to the theoretical speculations of philosophy. This is a world in which, so to speak, this philosophy goes out into the street and becomes for every ordinary mortal no less of a burning question than is for us the threatened destruction of the biosphere."

The issues of android emotion and memory are elevated to paramount importance in Scott's movie, however. Two verbal exchanges early in the film enunciate this point. In the first, Deckard is being briefed by his superior officer, Bryant:

Bryant: They were designed to copy human beings in every way except their emotions. The designers reckoned that after a few years they might develop their own emotional responses. You know, hate, love, fear, anger, envy. So they built in a fail-safe device.

Deckard: Which is what?

Bryant: Four-year lifespan.

Later, after successfully identifying an android using an empathy test, Deckard receives an explanation for the further humanization of the latest group of androids from their manufacturer:

Tyrell: We began to recognize in them a strange obsession. After all, they are emotionally inexperienced, with only a few years in which to store up the experiences which you and I take for granted. If we gift them with a past, we create a cushion, a pillow for their emotions, and consequently we can control them better.

Deckard: Memories. You're talking about memories.

Although he speaks in the plural, Tyrell is referring to one experimental android, Rachael, a unique prototype who is, as Tyrell puts it, "more human than human." Scott apparently expected viewers to take these "explanations" at face value, and most viewers do. One wonders on what basis we should believe that androids would spontaneously develop emotions, or that a four-year lifespan would alleviate the problem (if it would indeed constitute a problem), or that the implantation of memories would make them easier to control.

Robert Silverberg, though admiring the visual qualities of the film, is quite straightforward in his condemnation of the script's incredibilities in Omni's Screen Flights/Screen Fantasies, Danny Peary, editor:

"Blade Runner is simply silly. We are asked to believe that...we have populated the stars with "replicants," synthetic human beings that are superior in most ways to ourselves, although they are designed to live only four years; and that a handful of these replicants, having rebelled at being assigned to slavery in the star-colonies, have found their way back to Earth and are running amok in Los Angeles. Out of this cluster of implausibilities is generated a perfunctory plot in which the androids, hoping to find a way to have their lifespans extended, seek to enlist the aid of their designer, while a peace officer follows their trail, taking desperate measures to destroy them--at the risk of his own life, even though the androids have only a few weeks to live anyway."

A 1984 interview with Ridley Scott demonstrates that he held his assumptions about android consciousness to be self-evident:

"If you create a machine through genetic engineering, biochemistry, or whatever, the very fact that it has been created by a human being indicates to me that when it becomes truly sophisticated it will ultimately be free-thinking. I'm sure that in the near future, computers will start to think for themselves and develop at least a limited set of emotions, and make their own decisions."

J. P. Telotte is willing to accept the significance of android memory, although with a different emphasis, in Film Quarterly (Spring 1983):

"Because they have programmed their replicant creatures with memories of a life that never was--even providing them with photographs of supposed relations and friends--and thus tried to convince them of their humanity, these engineers have erected a potentially dangerous bridge between the human and the android realms. In fact, they have succeeded too well, for they have unleashed a synthetic but powerful desire for life, one which--as is the case in films like Frankenstein, Invasion of the Body Snatchers, and The Thing--initially places itself in opposition to the possessors of normal life."

Whatever the merits of this argument, it is inaccurate when applied to Blade Runner. The only android equipped with memories in the film is Rachael, the experimental model of whom Tyrell spoke, and it is she, rather than the murderous androids from the "off-world," whom Deckard spares and attempts to save. The only other "replicant" (a term for android not employed in the novel) possessing photographs is Leon, who would not have shot the bounty hunter Holden at the beginning of the film for inquiring about a "mother" if he had received memory implants of one. Fred Glass makes an assertion similar to Telotte's, even more vehemently, in Film Quarterly (Fall 1990):

"As in Robocop, the hero [of Total Recall] is an amnesiac, and the plot evolves from Quaid's attempt to recover his identity...This aspect of Total Recall recalls Bladerunner. The replicants' "memories," implanted at "birth," establish lives they've never lived, right down to photo albums of family and friends who have never existed. For Quaid the identity loss has occurred more recently. But for replicants, Robocop, and Quaid alike, their missing identity is a symbolic castration, a loss of power over their lives that must be regained...As individuals we are always attempting to recall things we have repressed...We are all amnesiacs, both in this individual-psychological sense and in a broader representation: as victims of social amnesia, the peculiar anti-historical mechanism of our culture that works to keep rulers and ruled in their places."

If androids don't go through the human developmental stages, wouldn't they be superior to humans, since they would not inherit the neurotic baggage of human childhood? Androids also differ from humans in that they not only lack childhood memories, they also lack amnesia regarding those memories, an essential aspect of the psychological makeup of psychically healthy humans. As Fairbairn puts it in An Object-Relations Theory of the Personality:

"It is impossible for anyone to pass through childhood without having bad objects which are internalized and repressed...This would appear to be the real explanation of the classic massive amnesia for events of early childhood, which is only found to be absent in individuals whose ego is disintegrating (e.g. in incipient schizophrenics, who so often display a most remarkable capacity for reviving traumatic incidents of early childhood)."

Giuliana Bruno describes the fundamental condition of the android as akin to schizophrenia in Alien Zone, edited by Annette Kuhn:

"The schizophrenic condition is characterized by the inability to experience the persistence of the "I" over time. There is neither past nor future at the two poles of that which thus becomes a perpetual present. Jameson writes, "The schizophrenic does not have our experience of temporal continuity but is condemned to live a perpetual present with which the various moments of his or her past have little connection and for which there is no conceivable future on the horizon." Replicants are condemned to a life composed only of a present tense; they have neither past nor memory. There is for them no conceivable future. They are denied a personal identity, since they cannot name their "I" as an existence over time."

The relationship of the lack of affect manifested by the schizophrenic and the characteristic reaction patterns of androids is developed much more extensively in Dick's novel than in Scott's film, and the danger of killing a schizoid human who is mistaken for an android is a running theme throughout the book. Dick himself suffered from mental disturbances throughout his life, and it is speculated that the stroke which killed him was stress-related. Dick made this statement regarding the role of schizophrenia in his work:

"I draw a sharp line between the schizoid personality and actual schizophrenia, which I have the utmost respect for, and for the people who do it--or have it, whatever. I see it this way: the schizoid personality overuses his thinking function at the expense of his feeling function (in Jungian terms) and so has inappropriate or flattened affect; he is android-like. But in schizophrenia, the denied feeling function breaks through from the unconscious in an effort to establish balance and parity between the functions. Therefore it can be said that in essence I regard what is called "schizophrenia" as an attempt by a one-sided mind to compensate and achieve wholeness: schizophrenia is a brave journey into the realm of the archetypes, and those who take it--who will no longer settle for the cold schizoid personality--are to be honored. Many never survive this journey, and so trade imbalance for total chaos, which is tragic. Others, however, return from the journey in a state of wholeness; they are the fortunate ones, the truly sane. Thus I see schizophrenia as closer to sanity (whatever that may mean) than the schizoid is. The terrible danger about the schizoid is that he can function; he can even get hold of a position of power over others, whereas the lurid schizophrenic wears a palpable tag saying, 'I am nuts, pay no attention to me.'"

Juliet Mitchell, Melanie Klein's biographer, says that Freud's theory of hysteria was based on a determinant in the past. The castration complex, which demands the repression of Oedipal wishes, "inaugurates history within the individual. The clearly observed phenomenon of an amnesia that covers our infancy indicates the construction of memory...Infancy is a perpetual present."

It is conceivable that it is freedom from the sense of the temporal which renders androids innately superior to humans, and that any acquisition of human traits by the androids is actually an impoverishment. Tyrell tries to suggest this to Roy Batty, the android whom Scott tries to elevate to heroic status, arguing that the intensity of Batty's experience should outweigh considerations of longevity. Batty insists on longevity instead, and when Tyrell cannot provide it, Batty inverts the Oedipus situation, blinding his father in the process of killing him. Batty recapitulates the history of humanity by renouncing the perpetual present of infancy for the historicity of the Oedipal conflict.

The conception that the android state is inherently superior to that of the human is suggested by Scott's avowed desire to end the film with a strong suggestion that Deckard might be an android. As Mark Salisbury points out in Empire magazine:

"Blade Runner was not one of my favorite films," [Harrison Ford, who played Deckard] recalls. "I tangled with Ridley. He wanted the audience to find out that Deckard was a replicant, I fought that because I felt the audience needed somebody to cheer for."..."The original focus of the film ought to have been the fact, or at least the innuendo, that Harrison Ford is a replicant and that they were being turned loose deliberately," explains Scott now. "In other words, the whole thing was under control because that's the way the world was. I think that would have been the most satisfying ending. In a way it's a bleak ending, but it's also a bleak film..."

Rather than the android Deckard of Scott's desires, Harrison Ford and the Ladd Company's voiceovers provide instead a retired killer closer to another cinematic archetype: the hard-boiled noir detective. In his violent relationship to, and revulsion toward, the technological dystopia in which he finds himself, Deckard resembles Lemmy Caution in Jean-Luc Godard's 1965 Alphaville, perhaps the first tech noir film. Deckard and Caution are not only humans fighting technology, but reactionary humans, representatives of frontier values. They are not only anti-technology, but anti-social. Alphaville does not deal with androids, but it does concern the recovery of the past through poetry. The amnesia that Caution fights is the social amnesia of which Fred Glass speaks. Roud characterizes the female protagonist, Natasha, as:

"The woman who can just barely remember life before words like redbreast, autumn light, conscience, tears and tenderness were eradicated from the Bible/Dictionary owned by every inhabitant of Alphaville, new editions of which are distributed daily--as more and more words are forbidden."

Deckard and Caution are anachronisms in much the way that obsolete androids are replaced by a later model: Batty is rendered outmoded by the memory-implanted Rachael, and the earlier, virtually invincible Terminator is forced to fight a later, superior model. This is especially noteworthy in RoboCop, in which the older executive has to pit his superseded robot technology against the younger usurper's cyborg. The cyborg becomes a surrogate Oedipus, killing the father by destroying his alter ego, the machine.