Talk:Orch-OR
{{subpages}}
== History of this article ==

This article was published in Wikipedia and featured on the webpage of Stuart Hameroff, one of the co-founders of this well-known theory of consciousness. Since then, the page has lost interesting features (http://en.wikipedia.org/wiki/Orch-OR). For instance, the "questions" section is lost and there is an overemphasis on criticisms. Usual problems with WP. I consider that this article is lively and fascinating. I see it as an example of what CZ should look like; it is written in a style that is similar to the [[Life]] article. Of course, I'm not saying it can't be improved. But some of WP's "improvements" should be avoided. [[User:Pierre-Alain Gouanvic|Pierre-Alain Gouanvic]] 22:32, 25 April 2008 (CDT)
I have some problems with this article. This theory has attracted virtually no interest from academic neuroscientists; the Annals paper cited here has been cited just 20 times. Personally, I am unsurprised: I don't see anything of substance or significance in the theory, but I do see many errors of fact in this article. As it is a WP import I've left it (for now) but flagged it as a WP import pending other views. It could be trimmed back and given a fair face here, but my view is that this falls below any notability threshold among fringe theories, and I don't really think it's worth the candle. Personal view: harmless nonsense. [[User:Gareth Leng|Gareth Leng]] 17:07, 10 May 2010 (UTC)
: I cannot judge the quality of this article myself and trust your opinion. Moreover, the mere fact that it is a direct WP import and is said to be copied from the website of one of the proponents of the theory makes it a candidate for removal. But even though it is not a topic as "popular" as [[Ormus]], it justifies a page, I think. Roger Penrose is well known and recognized as a theoretical physicist and mathematician (though not as a neuroscientist). This, I think, makes even a bogus theory "notable". My approach would be to blank the page, and then use a (more critically) rewritten version of the lead as a short article on the subject (that may or may not be extended later). --[[User:Peter Schmitt|Peter Schmitt]] 11:50, 11 May 2010 (UTC)
OK, I'll give that a try. [[User:Gareth Leng|Gareth Leng]] 20:40, 11 May 2010 (UTC)
:Thanks, Gareth, looks good. I moved the sentence about neuroscientists' response to the first paragraph since, in this short synopsis, it acts as the lead. You can move it back if you're looking for the article to develop differently. [[User:D. Matt Innis|D. Matt Innis]] 12:51, 13 May 2010 (UTC)
::Being someone with very little knowledge of this model, or of any models of consciousness for that matter, I find this article reads as if Orch-OR is a legitimate possibility, at least as viable as any other model of consciousness. Is that the case, or should we clarify that better? [[User:D. Matt Innis|D. Matt Innis]] 13:01, 13 May 2010 (UTC)
:::I vaguely remember reading ''The Emperor's New Mind'', but don't remember it very well. As the text reads right now, I am unclear what is meant by "algorithmic computation" being insufficient. That, in and of itself, isn't outside mainstream computer science, where neural and semantic nets aren't always called algorithmic -- things requiring pattern recognition often are not. What is unclear is whether he is alluding to things that are outside Turing or Gödel scope. As it reads, there's almost an Intelligent Design flavor. Could someone clarify? [[User:Howard C. Berkowitz|Howard C. Berkowitz]] 14:38, 13 May 2010 (UTC)
To Matt: I really hate it when you cut straight to the heart of the matter so sharply. I wish I could be as concise in my reply, but I guess you'll have to suffer a more cumbersome response.
Conventionally, neuroscientists assume that consciousness ("self-awareness") is an emergent property of classical computer-like activities in the brain's neural networks. The brain is hugely complex, with perhaps as many as 10,000 different types of neurone, each with different intrinsic properties and using many different chemical signalling systems. Altogether in the human brain there are several billion neurones, each making perhaps 10,000 synaptic connections - but as well as communicating by synapses, these neurones also intercommunicate by autocrine, paracrine, and neurohormonal chemical signals.

We believe that neurones and the (mainly) chemical signals that pass between them are the fundamental units of information in the brain, that these signals interact with the intrinsic properties of individual neurones to generate patterns of electrical activity within those neurones, and that these patterns in turn determine what chemical messengers are made and when they are released. We believe that experience alters the patterns and strengths of connectivity between neurones, and that this is the basis of learning. We believe that patterns of neural network activities correlate with mental states.

We know that in massively complex systems like the brain, the complexity can give rise to unexpected higher-order properties, so-called emergent properties, that could not have been easily foreseen from understanding the properties of the neurones themselves. Such emergent properties are very difficult to work with and understand, precisely because they arise unpredictably in highly complex systems. Nevertheless, an important part of contemporary neuroscience is about trying to understand, in various systems, the properties of neuronal networks - in order to relate network behaviour to the behaviour of single cells, and to study, at this level at least, how higher-order behaviours emerge from the properties of the component units from which networks are built.
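To make that classical picture a little more concrete, here is a toy sketch (in Python; every parameter is an arbitrary illustration, nothing here is a physiological value) of a leaky integrate-and-fire network. Each unit only leaks, sums its inputs, and fires at a threshold, yet the population settles into collective firing patterns that none of the single-cell rules mention:

<pre>
# A toy leaky integrate-and-fire network (illustration only; all numbers
# are arbitrary, not physiological values).
import numpy as np

rng = np.random.default_rng(0)
N = 100                                       # number of model neurones
W = rng.normal(0, 0.4, (N, N)) / np.sqrt(N)   # random synaptic weights
v = np.zeros(N)                               # membrane potentials
tau, threshold = 0.9, 1.0                     # leak factor, firing threshold

for step in range(200):
    spikes = (v > threshold).astype(float)    # which cells fire this step
    drive = rng.normal(0.1, 0.3, N)           # noisy external input
    # fired cells reset; the rest leak, then integrate synaptic + external input
    v = tau * v * (1 - spikes) + W @ spikes + drive
    if step % 50 == 0:
        print(f"step {step:3d}: {int(spikes.sum())} cells firing")
</pre>

The sketch shows only that population-level patterns can arise from units that individually know nothing about patterns; whether consciousness is that kind of emergent property is precisely what nobody has demonstrated.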
Thus, classically, we assume that consciousness emerges as a novel property of computational complexity among neural networks, but we really don't have a way of studying this yet - so mainly we leave it to psychologists, philosophers, and computational theorists. The only real area where neuroscientists engage with the problem is through gross correlations of things like EEG or regional brain activity with conscious state; these are associational studies and don't actually tell us much, if anything, about mechanism. We don't at present see a reason to doubt that the signalling mechanisms we know about are sufficient to provide an explanation of consciousness, given the vast complexity of the brain. However, we won't be able to say that we understand consciousness until we can build a model of the brain that is conscious. That is beyond us at present; it may be beyond us forever, or it may come along in the next few years - I really wouldn't even hazard a guess on this one.
The best review I've found is this:
Seth AK, Izhikevich E, Reeke GN, Edelman GM. Theories and measures of consciousness: an extended framework. Proc Natl Acad Sci U S A. 2006 Jul 11;103:10799-804. PMID 16818879
But don't look at it without a large tub of aspirin close to hand.
So, in brief answer to your too-sharp questions: I wanted to avoid going into the deficiencies of Orch-OR in depth. It is perhaps as viable as any current theory that attempts to explain consciousness in terms of neuronal properties - but, by default, wise neuroscientists just wouldn't think it sensible at present to formulate a theory of consciousness until we can formulate it in terms that are clearly predictive and testable, which Orch-OR frankly isn't.
To Howard: also a sharp question. Penrose is a mathematician, and a good one, and argues from Gödel's incompleteness theorem. He's not a biologist, and ''The Emperor's New Mind'' scarcely touches biology. In particular, one of the things not addressed anywhere in Orch-OR is the question of whether it is even conceivable that the quantum computational mechanisms it proposes could have evolved. It's not enough to say that a biological element like a protein has a massive capacity for encoding information - we know this, but for it to be useful there must be ways for the information to be put in and read out, and we must understand how the whole thing could have evolved. Otherwise it does indeed look like an ID theory. [[User:Gareth Leng|Gareth Leng]] 17:30, 13 May 2010 (UTC)
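As a footnote on the "Turing or Gödel scope" point above: the standard diagonalization argument is the cleanest illustration of a question no algorithm can answer. The sketch below is just that textbook argument (in Python, with the impossible oracle left unimplemented); it is nothing specific to Penrose, whose further claim is that human mathematical insight somehow outruns such formal limits:

<pre>
# The classic halting-problem diagonalization (Turing, 1936), as a sketch.
# `halts` is the assumed oracle; the whole point is that it cannot exist.

def halts(program, data):
    """Hypothetical: return True iff program(data) eventually halts."""
    raise NotImplementedError("no such total procedure can exist")

def contrary(program):
    # Do the opposite of whatever the oracle predicts for self-application.
    if halts(program, program):
        while True:
            pass      # loop forever if predicted to halt
    else:
        return        # halt immediately if predicted to loop

# Asking whether contrary(contrary) halts yields a contradiction either way,
# so no algorithm can decide halting in general.
</pre>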
:Thanks, Gareth, you made that make perfect sense. I almost want to just cut and paste it! So would it be safe to infer that, at this point in time, neuroscientists continue to doubt that any model, no matter how many variables are integrated, can successfully simulate consciousness by mathematically quantifying synapses alone? Each synapse depends too variably on other factors - both dependent on and independent of the number of synapses - to predict its effect on the next synapse, much less on the billions of synapses that would be necessary to be considered equivalent to a thought. Any model would have to be able to quantify the effects of stored memory, and of things like hormonal modulation and emotion, that would be necessary for what we experience as consciousness. Besides, I suppose millions of synapses occur that don't even reach a level that would alter what we experience as consciousness. Additionally, since the theory is currently practically impossible to test, making such a claim is premature and makes it suspect. What do you think about putting something like that at the bottom of this... you know, one of your famous synopses! [[User:D. Matt Innis|D. Matt Innis]] 02:14, 14 May 2010 (UTC)
:: Dear Matt, you should be doing my job. You have it absolutely right, and put it much better than I would have. [[User:Gareth Leng|Gareth Leng]] 08:15, 14 May 2010 (UTC)
:::Haha, I don't think so!! Your job is way too important to let just anybody do it. The nice thing about being a chiropractor is that no-one expects much, so it sounds really good when we occasionally do get it right. You, on the other hand, HAVE to get it right *every* time - too much pressure! Besides, I think I left out the whole part about the microtubules and proteins. [[User:D. Matt Innis|D. Matt Innis]] 12:24, 14 May 2010 (UTC)
===Chiropractors and neuroscience===
Now, it would probably be fringe, but Matt's last comments make me think of my observations about the consciousness -- and I have not the slightest doubt he had it -- of my late and dearly missed cat Clifford.  Clifford was a gleaming coal black, except that the last centimeter of his tail was snow white. For his first several years, he didn't recognize the white spot as part of him, so he'd desperately run from it. As he matured, he became a very wise and social cat, but would have moments of incredible foolishness.
It was our contention that he had a second brain in the white spot, and there was occasional competition for control at both ends of his spine. If that isn't chiropractic neuroscience, what is? [[User:Howard C. Berkowitz|Howard C. Berkowitz]] 13:45, 14 May 2010 (UTC)
:I think we could go ahead and continue the metaphor of black and white. Consider that it is a battle between good and evil. The issue would be whether to cut the tail off and be damned, or cut the cat off and keep the tail as a good luck charm. :) [[User:D. Matt Innis|D. Matt Innis]] 13:50, 14 May 2010 (UTC)
::Ah, but evil is tricky.  The black part was full of good will toward all men (if not all cats); he was the most conscious clown of any cat I've known.  The white part would get him into trouble -- and others as well. Without really thinking about the consequences, we redid the bathroom with a plush black rug, and a clear plastic trash can. After the rug was cut to fit, Clifford stared at it, and we realized it was almost identical to his fur. He eventually leaped on the strange giant cat, all claws extended, and was puzzled by its passivity.
::The next morning, bleary-eyed and enthroned in that little room, I watched as an apparent makeup-removing cotton ball levitated from the trash can and started coming for me. I stopped screaming when I realized Clifford had been resting his tail across the trash container, and merely wanted to come close and cuddle. [[User:Howard C. Berkowitz|Howard C. Berkowitz]] 14:25, 14 May 2010 (UTC)
