Saturday, March 31, 2007

A challenge to the AI-deniers

Forget global warming and intelligent design. The real debate is whether or not (strong) AI is possible. Now, this isn't much of a debate for me or most people I encounter in my academic circles. We read Dennett and Hofstadter while still in diapers, and use "dualist" as some sort of slur. Of course, occasional dissent does creep in. At a recent open house for newly admitted PhD students, I was talking to several colleagues and was surprised when one -- a respected machine learning theorist -- was willing to stick his neck out against AI. Without quite committing to dualism, his argument (if I recall correctly) was along the lines of Penrose's; if I may paraphrase: consciousness is just too freaky to be explained by classical mechanics, and so must be swept under the quantum gravity rug. My friend Gahl (I surmise from our conversation) is basically being indoctrinated to reject strong AI in her freshman class.

So, I thought I'd use this space to give a strong AI opponent (or several) an opportunity to defend their views. Tell us why machines will never think or be conscious. Is it the missing ghost? Something magical about neural tissue? A misapplication of Gödel's theorem?

Have I caricatured your stance? Here's your chance to set the record straight!

23 comments:

Anonymous said...

Why not start with Hofstadter, who in an interview in today's NY Times magazine says this:

Q: Your entry in Wikipedia says that your work has inspired many students to begin careers in computing and artificial intelligence.

A: I have no interest in computers. The entry is filled with inaccuracies, and it kind of depresses me.

Q: So fix it.

A: The next day someone will fix it back.

Q: You don’t have any interest in artificial intelligence?

A: I’ve taught a course called “Hype vs. Hope in A.I.” Why does this field inspire such nonsense? People who claim that computer programs can understand short stories, or compose great pieces of music — I find that stuff ridiculously overblown.

Q: What does a computer lack that a person has?

A: It has no concepts.

Q: I know some people who have no concepts.

A: They do have concepts. People are filled to the brim with concepts. You don’t have to know what a concept is in order to have one.

(http://tinyurl.com/383jfk)

Anonymous said...

Leo, you chose again a topic as ill-defined as "existence".

According to Wikipedia (vox populi...) "A computer enters the framework of strong AI if a machine approaches or supersedes human intelligence".

I am not a strong AI denier in this sense; I do expect this to happen some time.
But I don't see any need for consciousness to reach that goal (see my comments on a singularitarian website).

As for machine consciousness I am agnostic, strongly leaning toward denial, or rather a denier with a whiff of agnosticism.
Not because of any "spiritual" or "mystical" alleged properties of consciousness, but because consciousness is an inner experience which we can in no way relate to external causes; it is the OTHER WAY AROUND.
We can relate external causes (the five senses, anesthetics, psychotropics) to perceived events in consciousness, but AT NO POINT in this do we have a place where we can observe the transition from "external" to "internal".
Even when self-experimenting you can ONLY observe before and after the "event" (just get drunk...), NOT the transition.
Therefore I don't see how we can ever understand the emergence of consciousness, and postulating that "silicon" can breed consciousness seems a bit over the top given the VAST differences between organic chemistry and electronics.
But the strongest doubt for me comes from the fact that the so-called "external" events and evidence actually come THROUGH consciousness (as I argued already); we must be AWARE that we are seeing an experiment in order to "think about it".
Conversely, it has been shown long ago (dissociation, Pierre Janet, 1892) that structured actions DO NOT NEED consciousness.

P.S. Did you mean this whole posting as an April Fools' joke? ;-)

Anonymous said...

Some reference material on this topic :
10 Important Differences Between Brains and Computers

Aryeh said...

First, let me assure the readers that this post was made fully in earnest. When this blog has been up and running for a few years, I'll be able to afford pranks like this and this, but for now I'm still trying to build credibility and would like to be taken at face value.

Kai von Fintel:
attacking Hofstadter, no matter how off-the-wall his recent comments might be, doesn't even begin to address the issue. Even if you quoted him saying "Just kidding guys, everything I wrote in Gödel, Escher, Bach is complete BS" -- that would not add anything useful to the debate. The disavowal of a key proponent does not invalidate a theory.

Kevembuangga, thanks for that 10 differences link -- very interesting and instructive stuff, indeed. You write,
Leo, you chose again a topic as ill-defined as "existence".
but this topic, unlike some previous ones, is actually far more grounded in reality.

What scientific principle prevents us from hypothetically, in some distant future, building machines that approach or supersede human intelligence? If you can't name such a principle, you must concede the possibility of strong AI. Chris Chatham's post (in your link) underscores the many difficulties involved in bringing that about, and corrects some engineering misconceptions regarding the brain. But nothing in that post suggests that the brain's behavior is impossible to reproduce in principle.

Aryeh said...

I've never seen log treated as a bounded function. Inverse Ackermann -- maybe...
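The quip about inverse Ackermann is easy to make concrete. As a minimal sketch (using the iterated logarithm log* as a stand-in -- the true inverse Ackermann function grows even more slowly, so the point holds a fortiori):

```python
import math

def log_star(n) -> int:
    """Iterated logarithm: how many times log2 must be applied
    before the value drops to 1 or below."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

# log2 grows without bound, yet log* stays tiny even for
# astronomically large inputs -- "bounded" for every practical purpose:
print(log_star(2**16))    # 4
print(log_star(2**64))    # 5
print(log_star(2**1024))  # 5
```

This is why, in the analysis of union-find, the amortized O(α(n)) cost per operation is routinely treated as constant: for any input that fits in the observable universe, α(n) never exceeds 4 or 5.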

AI proponents certainly did make some extravagant claims -- mainly because the problems are deceptively hard (how hard could speech recognition be if 5-year-olds can do it?).

And no, I don't think we'll have strong AI in the foreseeable future. I do think we will start to see machines performing more and more intelligent recognition, classification, and decision-making tasks. Part of the matter is that as soon as computers learn how to do something, we stop calling it an intelligent task!

Unknown said...

I know virtually nothing about formal definitions of "strong" AI; however, the discussion brought up two thoughts for me:

1) You're probably familiar with David Gelernter, who I imagine would be classified as an AI "denier". He recently debated Ray Kurzweil here at MIT: webcast here .... quite interesting.

I tend to agree with Gelernter's skepticism about the possibility of artificial consciousness because ...

2) ... I similarly lean toward denying the likelihood of machine consciousness, for reasons similar to kevembuangga's. Understanding the source of the inner life and the self-awareness of a human mind is something that we are not even close to understanding, even with all our detailed knowledge about neural circuits, brain regions, biochemical pathways, etc. Until we start to have some degree of insight into what it is that makes people feel alive, I don't think we have any hope of creating an artificial consciousness.

Personally, I think in the longer term this will require bridging, and to some degree reconciling, the scientific-religious divide, and thereby gaining some understanding of the origin and source of life ("God?"), but all this gets much more difficult to argue about. Partly because purely rational arguments won't get us there ... we'll have to figure out how to integrate the irrational/emotional components.

Aryeh said...

Thanks for that link, Maxim -- worthwhile viewing for everyone. I pretty much side with Kurzweil on all the points. Gelernter concedes that consciousness is an emergent phenomenon, occurring at the level of the brain but not of any single neuron. Yet his human-running-a-program example (essentially the Chinese Room thought experiment, as noted by Kurzweil) seems to show a lack of full appreciation of that notion. He also insists that consciousness remains "private" -- i.e., inaccessible to outside observers. That's a strange claim to make: already with fMRI we can gain some access to your mental states, and our ability to do so can only improve with time. Do you not believe that it should be possible, in principle, to know exactly what you're thinking and feeling from scanning your brain?

Your second objection reminds me of a delicious epistemological trap. Suppose we build a machine that acts exactly as if it were human. You know, laughs at (the right) jokes, ponders the mysteries of existence out loud, flirts with women in bars, etc. You can ask it if it's conscious and it'll certainly say "yes", and wholeheartedly believe it. Yet according to you, it would be wrong about its own consciousness status!

Anonymous said...

He also insists that consciousness remains "private" -- i.e., inaccessible to outside observers. That's a strange claim to make: already with fMRI we can gain some access to your mental states, and our ability to do so can only improve with time.

No, this isn't a "strange claim to make"; you are confusing the map and the territory, and this is the crux of the disagreement on ALL these matters.
When you say you "gain some access to your mental states", you are only able to figure out what those mental states COULD BE because you assume that the "subject's" mental states are akin to YOUR OWN.
If it were not for this very hypothetical assumption you would have ZERO KNOWLEDGE of the subject's mental state.
What is the "mental state" of a dolphin or a bat when they "feel" their echolocation sense of the surrounding world?

As I already argued, the taste of sugar is entirely "private" and cannot be deduced from the chemical formula of sugar, nor from the gustatory papillae, nor from the nerve impulses to the brain, nor etc...
There is no certainty that my feeling of sweetness resembles yours in any way, even if we use the same words to describe it, BECAUSE WE LEARNT THOSE WORDS from correlations with "external events".

Suppose we build a machine that acts exactly as if it were human. You know, laughs at (the right) jokes, ponders the mysteries of existence out loud, flirts with women in bars, etc. You can ask it if it's conscious and it'll certainly say "yes", and wholeheartedly believe it. Yet according to you, it would be wrong about its own consciousness status!

Sorry but this is just a rephrasing of the Chinese Room argument, no more no less, and thus as inconclusive for either side in the controversy.

Aryeh said...

You are trying to have it both ways. On the one hand, you insist on having qualia -- "the taste of sugar is entirely 'private' and cannot be deduced...". On the other hand, you deny such qualia to a machine -- why? If your conscious experience is more than the sum of the neuron firings, why can't a machine's be more than the sum of the transistor firings?

How do you know that I am conscious, for that matter? And if I weren't, would you feel silly arguing with a non-conscious entity (about consciousness, of all things)?

Suppose you're talking to a machine that claims to have qualia. It claims to experience the taste of sugar, not merely register it. Presumably, you'd deny that possibility. I can see the machine growing exasperated. "What do I have to do to prove to you I'm conscious?!" it despairs. "Nothing," you reply, "you lack the requisite neural tissue." Is that your stance?

Anonymous said...

On the other hand, you deny such qualia to a machine -- why? If your conscious experience is more than the sum of the neuron firings, why can't a machine's be more than the sum of the transistor firings?

I am NOT "fully" denying this; if you read carefully, I said I am a "machine consciousness" denier with a whiff of agnosticism, yet not a strong AI denier.

I am only saying that, given the differences in substrates, complexity and paths of development, it is HIGHLY IMPLAUSIBLE (to me) that we can "build" a conscious machine.
As for the paths of development: like any living thing, humans are grown from the inside out, from an ovum; machines are built by ASSEMBLING parts together. At which point in the assembly process would consciousness set in?
Same question for the ovum, BTW ;-)

How do you know that I am conscious, for that matter?

I DON'T; maybe you are just an "intelligent" chatterbot running a blog.
But if you are indeed a human like me, I find it (conversely from the above) HIGHLY PLAUSIBLE that you have an "inner experience" akin to my own, if not identical, since we then share so many "external" (i.e. intersubjectively comparable) traits, though solipsism on either your part or mine cannot be absolutely excluded "on principle".

And if I weren't, would you feel silly arguing with a non-conscious entity (about consciousness, of all things)?

Not at all, because our arguments are in the domain of discourse, i.e. handling MAPS of reality, and I am not denying that a non-conscious "mapping" of reality can be interactive.
Actually EVERY piece of software is a fragmentary and very primitive map of a "subset" (informal...) of reality.
This is why I think strong AI is much more plausible than machine consciousness.

"What do I have to do to prove to you I'm conscious?!" it despairs. "Nothing," you reply "you lack the requisite neural tissue." Is that your stance?

I wouldn't be specific about neural tissue, and I would probably grant it the "right" to assert its own consciousness under the benefit of the doubt.

I guess this whole discussion rests on a lack of distinction between high (un)plausibility and certainty.

Anonymous said...

Great topic, Leo! And a very interesting discussion, indeed.
A couple of comments, if I may ...
>>We can relate external causes (the five senses, anesthetics, psychotropics) to perceived events in consciousness, but AT NO POINT in this do we have a place where we can observe the transition from "external" to "internal". Even when self-experimenting you can ONLY observe before and after the "event" (just get drunk...), NOT the transition.
Well, tons of [trained] people have observed and documented their various transient mind states. Some have even done it while in the process of dying [ever heard of the "Tibetan Book of the Dead"?]. This vastly depends on the [trainable] ability to fully concentrate on a single thought while ignoring input from all other senses. Can't we train a computer to ignore our input [other than by getting it into a permanent wait state (smile)] and still do something productive for itself? I don't see why not, but we would also lose our control over such a computer, as it won't take our input. Just like we [usually] have no control over another "intelligence"...

>>Understanding the source of the inner life and the self-awareness of a human mind is something that we are not even close to understanding, even with all our detailed knowledge about neural circuits, brain regions, biochemical pathways, etc. Until we start to have some degree of insight into what it is that makes people feel alive, I don't think we have any hope of creating an artificial consciousness.

What makes you think that people have not already studied and understood these issues? The ancient science is full of key signs, and some of them are truly amazing. For example, the ancient science makes a clear distinction between the "hardware" [the body, the brain] and the "software" [the aura]. Which also means that our current focus on the brain is a dead end. The ancients studied the human aura and concluded that it is designed pretty much like the system of stars [hence the idea of a "microcosm" and the importance of astrology]. As described in Ouspensky's book "In Search of the Miraculous", people may have various degrees of self-understanding, ranging from being totally asleep all the way up to the ability to directly influence the external powers. The most "awakened" of them might even consider us to be the "computers", in the sense that we do not have their control of life [nothing is as planned, everything just happens to me by itself]. Does this not make our intelligence "artificial"?

Unknown said...

Leo, you wrote: If your conscious experience is more than the sum of the neuron firings, why can't a machine's be more than the sum of the transistor firings?

This question gets at what this argument hinges on, and I don't think there's any way to reach objective agreement on the answer.

Yes, my conscious experience is definitely more than the sum of the neuron firings in my brain. But brain science currently has no idea of what consciousness is beyond the sum of neuron-firings. Will we one day know how to explain this? Possibly. But my gut (which is an awesome scientist!) says that current directions of inquiry in AI aren't getting us there.

Does consciousness happen when some critical threshold of interconnected neurons is passed? Clearly no: there's a continuum -- from bacterium, to snail, to cockroach, to fish, to bird, to cow, to dog, to dolphin, to chimp, to human. Certainly a bacterium isn't conscious, and I am fairly certain that snails and cockroaches aren't either (in the sense we humans are), whereas dogs and chimps are. But nobody can tell me what that threshold is, nor even how to describe it, except "lots of neurons interconnected in a complex massively-parallel net." I can feel with every fiber of my being that my consciousness is something more than that.

And even if it is not more than that, because transistors and neurons operate so differently, even if you build me a network of transistors as extensive and complex as the network of neurons in a human brain, that's insufficient to convince me that the emergent property of this network (what you call "greater than the sum of its transistor firings") will be consciousness. And because nowadays stringing lots of transistors (plus software) together is our only serious attempt at achieving AI, I'm quite confident that we won't get there that way.

If, Leo, as you say, we were able to build a machine that behaves convincingly like a human (cf. Data of Star Trek: TNG), then sure, I would concede that such a machine has consciousness. But it would have to be pretty darn good at human behavior. I do not necessarily deny that a machine can ever have qualia. But I set a much higher bar for a machine to "prove" to me that it has them, since I know a priori that a human does, and that a machine normally doesn't. Yes, this is certainly a prejudice, but it comes from my experience of the world, which is how I judge and integrate new information. I can't simply turn off the prejudices of my world experience (a side-effect of biological consciousness, I might add).

As an interesting side note, it seems that human efforts to produce artificial consciousness are consistently aimed at creating a human-like one, since we're invariably trying to make something with a mastery of human language. A human-like consciousness is a tall order! An artificial animal-like (more primitive) consciousness with, say, basic needs/wants and not much in the way of abstract concepts/emotional intelligence would seem somehow easier. And yet, AI people seem convinced that artificial intelligence should be human-like, thereby making it much harder to convince us skeptics that they can ever succeed. ;)

Aryeh said...

Good points by Victor and Maxim. It's indeed not clear that we want to create human-like intelligence, as it may well refuse to carry out our orders (or even attempt to enslave us, etc.). And yes, AI people as well as behaviorists blundered big-time when they assumed intelligence had to be intelligible. It may well be that your brain is already the simplest description of your intelligence/consciousness/self; in that case, any other "explanation" would have to be at least as complicated as the myriad synapse connections and parameters, and would not be "interpretable" in any way other than just getting to know you!

And I also agree with Maxim that our main visceral objection to machine consciousness is our tendency to ascribe mental states to other conscious beings based on our own experience.

Finally, if as Kevembuangga says, "this whole discussion rests on a lack of distinction between high (un)plausibility and certainty" -- then we have no argument at all! I personally don't believe we'll achieve strong AI within the next 50 years, much less create conscious machines [insert favorite quip about when we'll achieve human intelligence]. Back when I worked in Natural Language Processing, I'd get a kick out of occasionally being asked if I was building a super-intelligent computer that would take over the world. ("I wish!" would be my answer.) Now I've switched to math, which people view as a much less ominous field (though I believe that the main hurdles in computer and information technology are mathematical).

Anonymous said...

Victor : Well, tons of [trained] people have observed and documented their various transient mind states.

This is NOT what I am talking about. What you refer to is no different from the experience of, say, professional wine tasters or perfume creators.
That is, you describe the unfolding of some of your inner experiences using metaphors, relating them to external events (the "experiment") in order to establish the basis of an intersubjective discourse.
Actually this is not even different from just tasting sugar: put sugar in your mouth, it feels "good".
This is just a correlation between an external event and an internal experience; there is still a "gap" in the explanation, it doesn't actually "explain" anything.
And FYI, I did try Iboga and Salvia Divinorum: nice eye candy, deep psychological insights, but no "spirits".
Also tried Vipassana meditation: deep changes too, but no eye candy and no spirits either.
I guess this is a matter of (lack of) inclination toward "mystical" explanations.

people may have various degrees of self-understanding, ranging from being totally asleep all the way up to the ability to directly influence the external powers.

The knowledge of "the ancients" is highly debatable. I happen to know quite a number of shamans, psychics and the like, and while it is obvious to me that they are indeed doing "something" which is not adequately explained by current scientific standards, their epistemology is total crap: they are not reliable and they don't really "know what they are doing".
It may turn out that the whole "spiritual knowledge" shebang is just a mix of intuition, placebo effect, subliminal processing (a HORSE can do it too!) and delusions about causality.
Whenever we are able to consistently measure properties of "things" like the aura, I will have more confidence in that sort of "knowledge".

Maxim : And because nowadays stringing lots of transistors (plus software) together is our only serious attempt at achieving AI, I'm quite confident that we won't get there that way.

Here it seems you are still conflating AI and consciousness. While I agree that "stringing lots of transistors (plus software) together" isn't a way to achieve or understand consciousness, I see no reason why it cannot lead to strong AI.
If there were such an impossibility, that would mean we could someday pinpoint it, and therefore bridge the gap (even if negatively) between inner experience and external events; I don't think we will ever do that.

since we're invariably trying to make something with a mastery of human language

I think this is a valuable goal, one not necessarily related to consciousness or full human mimicry.
Of course there are deniers.

Leo : then we have no argument at all

Even on Atheism?
Also there are certainly large discrepancies about the magnitudes of plausibilities and many other important points.

And yes, AI people as well as behaviorists blundered big-time when they assumed intelligence had to be intelligible.

I will NOT accept any AI unless it is intelligible; this is the only safeguard against the risk you mention, which is the Singularitarians' nightmare that AI will "take over" mankind.
As we have obvious limitations in our capacity for intelligibility, the only way seems to be to use AI itself to improve OUR capacities in this domain, in some sort of bootstrap process in which we will be part of the bootstrap loop.

though I believe that the main hurdles in computer and information technology are mathematical

I am not so sure; to me the main hurdles seem to be located at the "conversion" from informal specs to formal ones.
This is the place where things can go totally out of whack; after that it may be really hard (even NP-hard ;-) but less exposed to disastrous blunders.

Anonymous said...

Dear Kevembuangga,
Apparently we are miscommunicating, because I am talking about "science" [which, in my view, translates into "knowledge about the algorithm(s) of a process control"], while you seem to be focused on standalone experiments and/or magic rituals. Since I have no desire to argue, especially on Leo's blog, let me just attract your attention to the acupuncture meridians [see, e.g. http://www.dcfirst.com/atlas_of_acupuncture_points.html], which are a direct product of the "highly debatable" ancient science. Regardless of your beliefs in their healing properties or even their existence, they do absorb and transmit light [see, e.g. http://www.explorepub.com/articles/light_therapy.html]. Where does this put our modern science, which is still unable to explain these effects?

And there are other unexplainable effects, e.g. Kirlian photos [see, e.g. http://en.wikipedia.org/wiki/Kirlian_photography].
Now, what's truly interesting to follow up on for an AI mathematician is the "algorithm(s) of a process control", particularly in the area of "feedback error detection and compensation". But this is another topic altogether...

Anonymous said...

Victor : Apparently we are miscommunicating,

I don't think so, see below.

Since I have no desire to argue, especially on Leo's blog,

I don't have a blog myself; do you have any other place where you would "like" to argue?

let me just attract your attention to the acupuncture meridians [see, e.g. http://www.dcfirst.com/atlas_of_acupuncture_points.html], which are a direct product of the "highly debatable" ancient science.

I am not denying the effectiveness of acupuncture, which is one of the best-structured bodies of "ancient knowledge"; what I am saying is that the "organisation" of such fields of ancient knowledge is very poor relative to the engineering standards of today.
These are a hodgepodge of "recipes" which don't lend themselves well to investigation and improvement; there isn't much "theory" behind them, and where there is, it is an awful mess. This is exactly the case with acupuncture.
This is what I mean by "crappy epistemology".

And there are other unexplainable effects, e.g. Kirlian photos

Well... Whenever these "experimenters" are willing to stick to more reproducible experimental protocols and to look for better "theories" than metaphysics and the "supernatural" (which amount to content-free statements), they will get more consideration.
What is REALLY detrimental is that the disrepute they gather turns away more knowledgeable scientists from investigating those feats, for fear of damaging their reputations and careers.

Anonymous said...

Kevembuangga, here's a link to the most profound chapter -- "Mud Shadows" -- of Carlos Castaneda's book "The Active Side of Infinity".

http://tinyurl.com/23hrpa

Note that it was first published less than 10 years ago, so until then people [except a very few in the special schools] had no access to the concept. [Not that the ancients didn't document their similar experiences, but their libraries got destroyed. Some sources claim that the Alexandrian library was burnt solely for the references to the word "Archons" in Gnostic texts.]

If after reading this short chapter you keep insisting that "their epistemology is total crap", I am gonna throw in my towel.

If, however, you come back saying "Wow! Now I know exactly what I was trying to achieve at Vipassana!" we might continue our discussion offline.

[I have to admit that although the concept is quite simple and seems to model-explain every nonsense that has gone on on this Earth for centuries, the average reader doesn't seem to perceive it well. Amazingly, this is being discussed there too!]

Anonymous said...

here's a link to the most profound chapter -- "Mud Shadows" -- of Carlos Castaneda's book "The Active Side of Infinity".

http://tinyurl.com/23hrpa


Pure insanity!
The very stuff of religious paranoid delusion it pretends to warn against.

Language is inadequate. All these experiences are beyond syntax.

Oh! Cannot rely on intersubjective communication, Eh?
Except of course for the "sorcerer" words!

I sort of looked at it from the corner of my eye, I would see a fleeting shadow crossing my field of vision.

Yeah! Trust "fleeting" visual illusions over reproducible measurements.
I learnt JUST THE OPPOSITE from my use of psychotropics.

Down in the depths of every human being, there is an ancestral, visceral [* visceral- obtained through intuition rather than from reasoning or observation] knowledge about the predators' existence.

That's about the ONLY sensible statement in this piece.
Yes, we have an ingrained fear of predators; however, for predators to nurture our paranoia they have no need to come "from the depths of the cosmos". Read Scott Atran on the origins of the supernatural.

Castaneda is a crook; NOBODY ever found any evidence of his following any kind of teachings, nor of the putative "Don Juan".
The "New Age" literature is replete with that kind of crap.

THOUGH, I am not dismissing the kind of "inner experiences" described; Castaneda could not have invented all this out of thin air, he is reusing a lot of tales from "tradition" which did not come out of thin air either.
What I am dismissing is the silly approach of trying to mimic physics with talk of "energies" and the like (Quantum Mechanics is also a favorite of kooks).
Put all that where it belongs: neurophysiology, psychiatry and evolutionary psychology.

Anonymous said...

FYI, regarding Hofstadter - check his Wikipedia discussion page. Apparently, during his talk the next day, he claimed the editing of the NYT piece was a bit too 'tight'.

Kevembuangga said...

As I said, Castaneda was a crook, but he was also a cult leader: The dark legacy of Carlos Castaneda.
When will we get rid of the New Age retards?
(Well, this is the Internets, sigh...)

Anonymous said...

This is NOT how a mathematician thinks. I mean, not by opinions, feelings or authority. Did I ever say I BELIEVE in Castaneda?

Mathematicians work with assumptions [sometimes counter-intuitive] and build models which may or may not correspond to reality. How do you FEEL about the ASSUMPTION that the speed of light is independent of the speed of its source? It FEELS like BS, because it's totally against our daily experience, but so far the resulting model has proved much better than Newton's at describing reality.

Speaking of Newton, did you know that he was also a keen student of alchemy and left remarkable manuscripts on the prophecies of Daniel, on the Apocalypse and on a history of creation? When once asked by a Fellow of the Royal Society why he believed in astrology and other such arrant nonsense, he is reputed to have retorted: 'Well, Sir, I have studied the matter, and you clearly have not!'

Back to Castaneda:
"Sorcerers found out that if they taxed the flyers' mind with inner silence, the foreign installation would flee, and give any one of the practitioners involved in this maneuver the total certainty of the mind's foreign origin".

The concept of "inner silence" is also taught by raja yoga, Buddhism [with its "empty mind"] and Kabbalah [the Hokhmah sefirah], but this is not why I am accepting it. I'd like to treat this as an ASSUMPTION [about the foreign origin of our mind, that is] and ask you to come up with a CONTRADICTION. Only then would you have the right to dismiss it as "pure insanity".

BTW, this simple assumption explains pretty much all the stupidity that is/was happening on our Earth, like the Inquisition, the Holocaust or Jihad. What does it contradict?

Anonymous said...

Sorry Leo, I guess you know what I mean...

AaronSw said...

For the best argument against strong AI, read John Searle, The Rediscovery of the Mind. (Not the Chinese Room stuff -- he apologizes for that.) The "for kids" version is Mind: A Brief Introduction, and the shorter version is The Mystery of Consciousness.