Talk:Scientific method/Draft
Revision as of 03:33, 5 February 2008
An observation on 'method' article
I can see why the 'scientific method' article is so controversial. The frequent citation of Peter Medawar in this particular version is a classic sign of Popperian devotion. Still, any article purporting to state what the scientific method is will open a can of worms.
The various different theories of 'scientific method' remain highly debated, as is the basic question of whether there is any one such method. I should think that an historical account presenting the different major traditions of thought -- induction, deduction, abduction (pragmatism), hypothetico-deductivism, the current 'empirical' trend of just describing what scientists do (etc) -- together with the contrasts between positions would be useful, in providing readers with a basic understanding of what has been said and what is currently disputed. That is, how about simply describing the debate?
I wonder if the Cambridge Dictionary of Philosophy or the Stanford Encyclopedia of Philosophy (http://plato.stanford.edu/) might be willing to allow links or abridged excerpts? That way we might, in fairly little space, be able to describe the alternatives and allow readers to pursue further.
Nick Rasmussen
Thanks Nick and welcome. It's certainly true that Popper has been a huge influence on scientists in the UK (hypothesis-driven science has been the key word in Research Council guidance on grants for a long time now), and a lot of that has been indirectly through Medawar. One of the things to be aware of is that there is a separate article on History of scientific method - hence this was conceived as focussed on the scientific method in current practice. Gareth Leng 11:11, 9 March 2007 (CST)
- Please see the copyright statement for the Stanford Encyclopedia of Philosophy: <http://plato.stanford.edu/info.html#c> The encyclopedia project has the right to distribute it on the web or prepare derivative works, but the individual authors hold the copyright in their individual articles. They do not use a GNU or GFDL license. There is no way we can incorporate text from there beyond fair use, unless both the authors and the encyclopedia release it into the public domain. I do not think that likely. What we can do is refer to it for a more formally academic treatment of many of our topics. DavidGoodman 04:19, 15 March 2007 (CDT)
- The current article is not really about "the scientific method" as much as the Methodology of Science. By the latter I mean the ways scientists go about practicing their profession.
- By the former, I mean a short (4-step) procedure used by scientists to confirm or refute a specific discovery. This "method" is more of a checklist, to ensure you've done your homework.
- Proper scientific methodology usually requires four steps:
- Observation. Objectivity is very important at this stage.
- The induction of general hypotheses or possible explanations for what has been observed. Here one must be imaginative yet logical. Occam's Razor should be considered but need not be strictly applied: Entia non sunt multiplicanda - entities should not be multiplied unnecessarily - or, as it is usually paraphrased, the simplest hypothesis is the best.
- The deduction of corollary assumptions that must be true if the hypothesis is true. Specific testable predictions are made based on the initial hypothesis.
- Testing the hypothesis by investigating and confirming the deduced implications. Observation is repeated and data is gathered with the goal of confirming or falsifying the initial hypothesis.
- Pseudoscience often omits the last two steps above. [1]
- For me, the most important part of this 4-step process is where it recommends drawing conclusions from the hypothesis. The scientist then compares each conclusion with the facts. Any facts which contradict a conclusion invalidate the hypothesis.
- Logically, it works like this:
- Hypothesis: the moon is made of green cheese.
- If this is true, then the spectrum of light coming from the moon should match the spectrum for green cheese.
- Astronomer X did a spectral analysis of moonlight and found that it did not match green cheese.
- Therefore, the hypothesis is untrue.
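The deductive pattern in this four-step chain is just modus tollens: from "H implies P" and "not P", conclude "not H". A minimal sketch (the astronomer's result here is invented purely for illustration):

```python
# Modus tollens: from "H implies P" and "not P", conclude "not H".
# The observation below is invented purely for illustration.

def modus_tollens(h_implies_p: bool, p_observed: bool):
    """Return False if H is refuted, None if the test leaves H undecided."""
    if h_implies_p and not p_observed:
        return False   # the prediction failed, so the hypothesis is refuted
    return None        # the hypothesis survives, but is not thereby proved

# H: the moon is made of green cheese.
# P (deduced from H): moonlight's spectrum matches that of green cheese.
spectra_match = False                       # Astronomer X's reported result
print(modus_tollens(True, spectra_match))   # → False: H is refuted
```

Note that the sketch encodes only the deductive step; it says nothing about whether the observation itself is reliable.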
- If you want an example that isn't so light-hearted, we could list the criteria used by medical researchers to determine whether a particular germ causes a disease. Such factors as:
- Does the disease ever occur without the presence of the germ (or at least antibodies indicating its presence)?
- Does the germ ever appear without the disease manifesting? If so, how much? Is there a threshold?
- I think this was used in determining whether E. coli bacteria in water makes people sick. --Ed Poor 10:42, 10 May 2007 (CDT)
Ed, all due respect, your understanding of the scientific method makes sense, and sounds good - but is off, whereas Gareth's is extremely sophisticated, and despite that (or maybe because of it) sounds wrong. Everything you are saying is smart and well written - but at least after "Logically, it works like this:", it is also completely incorrect. (Ouch!! sorry, I mean it with a giggle, but still, it's true) In fact, the failure to confirm that the spectral analysis of green cheese matches that of moonlight in no way proves that the moon is not made of green cheese. It could be that the spectrographer was inexperienced and read the machine incorrectly. It could be that some artifact interfered with the reading. It could be that the type of green cheese that the moon is made of gives off a different wavelength than green cheese on earth. Negative evidence is never proof. The criteria for a germ causing a disease that you list are also naive, and unfortunately have been used by many physicians to mistreat patients - despite good intentions. Please understand, I am not trying to put you down - I just think that this point is at the very heart of this article. You are presenting a common, perhaps even majority, view which is actually incorrect. In the scientific method, the logic has to be provable - the proof has to be positive. Again, if a person is found with a disease and the germ and antibodies are not found, this in no way proves that the disease can be caused without the germ. What it shows is that the presence of the germ cannot be proven in that person with current methods. This actually happens all the time. Sometimes, with more sophisticated tests, the person can be shown to have evidence of the germ. Of course, it may be that the person does not have the germ, but that cannot be proven by the lack of a positive result. Koch's postulates, which are used to prove an infectious cause of disease, work differently, by positive evidence.
Anyway, it is really possible to have a false negative, and sometimes that will be found in every single case. For example, it is very hard to culture spirochetes, and so an attempt at culturing the organisms for syphilis and Lyme disease will always come up negative - unless extremely special methods are used that are only available in a couple of research labs - even when only people who are infected are cultured. In other cases, a culture or antibody test to identify an organism will not be positive, but still the organism is there, and the negative result is due to some glitch in timing (the person was infected 3 days ago, and although most infected people show antibodies, this person has not yet begun producing them) or the specific way the test was done (the culture is usually positive, but in this case the culture plate was left out of the incubator too long). I say patients have been mistreated because sometimes doctors have depended completely on the results of lab tests without really thinking about the limits of proof, and in this way denied antibiotics to those with anthrax, and transfused blood from high-risk patients when the antibody test for HIV was negative. All these tests have limitations, and no test that fails to show a result is proof, ever. PS - I made the same mistake in Pseudoscience when talking about narcotic drugs that are found to be "not addictive" and stood corrected, myself. Nancy Sculerati 10:43, 15 May 2007 (CDT)
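Nancy's point about false negatives can be put in numbers with Bayes' rule. A minimal sketch; the prior, sensitivity, and specificity below are invented for illustration:

```python
# How much does a negative test lower the probability of infection?
# Bayes' rule with an imperfect test; all numbers invented for illustration.

def p_infected_given_negative(prior, sensitivity, specificity):
    """P(infected | negative test result)."""
    p_neg_if_infected = 1 - sensitivity     # false-negative rate
    p_neg_if_healthy = specificity          # true-negative rate
    p_neg = p_neg_if_infected * prior + p_neg_if_healthy * (1 - prior)
    return p_neg_if_infected * prior / p_neg

# A hard-to-culture organism (cf. the spirochete example): sensitivity is
# tiny, so a negative culture barely changes the probability of infection.
posterior = p_infected_given_negative(prior=0.5, sensitivity=0.05, specificity=0.99)
print(round(posterior, 2))  # → 0.49: the negative result proves almost nothing
```

With a highly sensitive test the same formula would drive the posterior toward zero - which is exactly why the limits of each particular test matter.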
Bayesian inference
Somewhat ironically, I just stumbled across Probability yesterday, and today I just stumbled across this article. It seems that the familiar (and misleading) truism that "you can't prove a negative" is causing a lot of (potentially) unnecessary angst. If you don't accept Kolmogorov's axioms or, at least, some axiomatization of probability, then any mathematical analysis is going to be meaningless. But if I observe that on 100 cloudless days it does not rain, I actually am collecting quantifiable information on the likelihood of rain on a cloudy day (assuming I also pay attention to how often it rains, and how often it is cloudy!) Of course, this doesn't really address the epistemologic issues raised here, but it does seem to me that arguments about "negative" evidence being unable to support positive assertions are basically fallacious. Greg Woodhouse 11:57, 15 May 2007 (CDT)
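Greg's weather example in code: tallying co-occurrences does yield a usable estimate of P(rain | cloudy). The counts below are invented for illustration:

```python
# Estimating P(rain | cloudy) from tallies of observed days.
# All counts are invented for illustration.

days = {("cloudy", "rain"): 20, ("cloudy", "dry"): 30,
        ("clear", "rain"): 1,   ("clear", "dry"): 100}

cloudy_total = days[("cloudy", "rain")] + days[("cloudy", "dry")]
p_rain_given_cloudy = days[("cloudy", "rain")] / cloudy_total
print(p_rain_given_cloudy)  # → 0.4
```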
Popper's attack on Bayes was founded in logic: a universal statement of the form "All swans are white" is logically equivalent to the statement "all non-white things are not swans". Thus if you accept that you can infer the truth of the first statement by mere observation of white swans, you must equally accept that you can infer its truth by sufficient observation of non-white things that aren't swans. As few will accept that observing that blades of grass are green would ever be good grounds for believing that all swans are white, Popper concluded that it is logically invalid to infer a universal from any finite set of observations. More generally, he declared that to do so is unwise because there may be any number of different possible explanations for any finite set of observations. Accordingly he argued that instead we propose a hypothesis and seek to disprove it, not to support it.
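The logical equivalence Popper starts from can be checked mechanically: material implication always agrees with its contrapositive. A small sketch:

```python
# "All swans are white" has the form: for every x, swan(x) -> white(x).
# Its contrapositive: for every x, not white(x) -> not swan(x).
# The two agree for every possible object:
from itertools import product

def implies(a: bool, b: bool) -> bool:
    return (not a) or b     # material implication

for is_swan, is_white in product([True, False], repeat=2):
    assert implies(is_swan, is_white) == implies(not is_white, not is_swan)
print("implication and contrapositive agree in all four cases")
```

This equivalence is exactly why, on a naive confirmation view, a green blade of grass would count as evidence for the swan hypothesis - the absurdity described above.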
Medicine, as Nancy says, at its best proceeds in exactly this way: diagnosis is made not by observing symptoms alone, as many different diseases produce similar symptoms. Instead, diagnosis proceeds by a process of exclusion, by considering possible causes and seeking by further tests to exclude them as actual causes. What survives this process is regarded as the most likely diagnosis.
In science generally, confirmatory evidence of the sort you describe is not looked on favourably. There are many reasons for this, but one is simply that we often see what we wish to see, and tend to report that which is consistent with what we would like to believe and disregard that which is inconsistent. Knowing this weakness in ourselves, we distrust it when we see it in others. Gareth Leng 13:00, 15 May 2007 (CDT)
But the point is- a negative test, by itself, does not exclude the diagnosis. Nancy Sculerati 13:05, 15 May 2007 (CDT)
Indeed, doubt is eternal. (Well, until the pathologist gets there anyway). Gareth Leng 13:31, 15 May 2007 (CDT)
"Negative" evidence needs some defining I think. Negative evidence in the sense of the failure to observe an expected effect is generally considered weak for many reasons (lack of evidence is not evidence of lack being one reason). It is certainly not possible to show absolutely that there is no effect from any measurements, only that the effect may be smaller than the ability to detect it. However as Nancy says, science is full of examples of false negatives caused by a wide variety of factors.
However experiments to exclude a cause are not negative evidence in this sense. For example, if a child of otherwise normal proportions and in good health is growing only slowly, the physician might suspect a disorder in the endocrine regulation of bone growth. One cause of dwarfism is a congenital failure to synthesize growth hormone, and a blood sample may well reveal that growth hormone is absent. However this is now known to be unreliable, with a high incidence of false negatives, as growth hormone is secreted not continuously but intermittently. In fact this is not the only explanation - another is that it is the receptors for growth hormone that are absent, and a third is that the signal from the hypothalamus - the growth hormone releasing factor - is absent. So there are at least three plausible hypotheses for the failure to grow. The physician can begin by attempting to exclude each in turn. If you give the child an injection of GRF, then this should elicit a release of GH if GH is being made by the pituitary. In this case, if there is an observed release, then the hypothesis of a lack of GH production is excluded - excluded by positive demonstration. However, again the failure to see GH secretion may be a false negative, in this case either because the defect may be a lack of GRF receptors, or an excess of somatostatin, which is an inhibitory factor that can override the actions of GRF. The first can be excluded by challenging with a different GH secretagogue, say ghrelin, which acts through a separate receptor. A response to the secretagogue will exclude a lack of GH production, leaving a specific defect in the GRF receptors as the remaining likely cause.
The point is that in all this the logical process is to exclude plausible hypotheses systematically, rather than to try to verify one in particular.
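The exclusion process described above can be sketched as filtering a hypothesis set. The test names and outcomes below are invented for illustration and greatly simplify the clinical picture:

```python
# Diagnosis by exclusion: a positive response rules a hypothesis out;
# a negative response excludes nothing (it may be a false negative).
# Hypotheses and outcomes are invented for illustration.

hypotheses = {"no GH production", "defective GRF receptors", "no hypothalamic GRF"}

# (challenge, positive response observed?, hypothesis excluded by a positive)
challenges = [
    ("GRF injection elicits GH release",     False, "no GH production"),
    ("ghrelin injection elicits GH release", True,  "no GH production"),
]

for name, positive_response, ruled_out in challenges:
    if positive_response:            # only positive evidence excludes
        hypotheses.discard(ruled_out)

print(sorted(hypotheses))            # the survivors remain the likely causes
```

Note how the first (negative) result leaves the set untouched; only the positive ghrelin response does any logical work.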
Having said that, in science all paths are open and all are taken. Nevertheless many of the most influential experiments in science are designed in the following way: from a generally accepted theory and a given set of observations draw a conclusion A, then find a different explanation for the set of observations, B. Design an experiment where A implies one outcome and B a different outcome. Do the experiment to disprove either A or B. The surviving hypothesis is not "proved"; it merely survives while the other falls. Gareth Leng 03:27, 16 May 2007 (CDT)
- Gareth, I myself (let's stay in context and not read the last thing and jump to conclusions) am not given to philosophical discourse. Please read Ed's example of disproving the moon is made of green cheese, and my reply. If you disagree with my reply, please let me know. I know that I can learn from you. If you don't, you might say so - so as to avoid leaving that impression here. Nancy Sculerati 08:14, 16 May 2007 (CDT)
- Sorry to get sidetracked, I fully agree with your reply to Ed. The thread above was really addressed to Greg's comments on Bayesian inference and negative evidence, not a qualification of your response above. Gareth Leng 10:46, 16 May 2007 (CDT)
It's interesting that you use medicine as an example, too, because this seems to me to be the model par excellence of Bayesian inference. Think about it: a patient presents with difficulty breathing. Is he a smoker? No. Well, that decreases the likelihood of emphysema somewhat. Does he have any (known) allergies? No. Well, that decreases the likelihood of an allergic reaction. Is he over 40? Yes. Well, that makes heart disease a more likely possibility. His blood pressure is 170/110. Oops. Now, that really does make heart disease look more likely. You get the idea. (Caveat: I'm not a doctor, so I make no claim that this scenario is realistic.) Like Nancy(?), I'm really less interested in philosophical discussions than I am in a practical understanding of scientific inquiry, and Bayesian inference seems to me to have the advantage of plausibility (no one would go about testing the hypothesis that all swans are white by examining every non-white thing they can find to see if it is, in fact, not a swan) and the added advantage of being practical from an implementation standpoint (i.e., programs can be written to implement Bayesian networks). Greg Woodhouse 16:50, 16 May 2007 (CDT)
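This step-by-step updating is Bayes' rule in odds form: each answer multiplies the current odds by a likelihood ratio. A minimal sketch; the prior odds and the ratios below are invented for illustration:

```python
# Sequential Bayesian updating in odds form.
# Prior odds and likelihood ratios are invented for illustration.

def update(odds: float, likelihood_ratio: float) -> float:
    """Posterior odds = prior odds x likelihood ratio."""
    return odds * likelihood_ratio

odds = 0.2                   # prior odds of heart disease (invented)
odds = update(odds, 0.7)     # non-smoker: evidence points slightly away
odds = update(odds, 3.0)     # over 40: more likely
odds = update(odds, 5.0)     # blood pressure 170/110: much more likely

probability = odds / (1 + odds)
print(round(probability, 2))  # → 0.68
```

This is also where the objection about priors bites: the whole chain is only as good as the numbers attached to the prior odds and the ratios.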
The logical problem is that the Bayesian should consider examining every non-white thing they can find to see if it is, in fact, not a swan. Except for one thing: for Bayesian processes to work you have to attach values to the prior probabilities, and this is impossible in the swans case. Trouble is, it is also impossible in most cases. The example you give is one where there might be rational grounds for attaching probabilities, but these are exceptions rather than the norm. I wouldn't dispute that scientists often reason in the way you describe, but I think they do so to generate hypotheses which they can then subsequently test. You don't draw conclusions on circumstantial evidence unless you have to, and when this is the best evidence you've got. Doctors of course have to make a best guess sometimes, but in acting on it they are still in effect testing their hypothesis, in that they wouldn't draw a conclusion, prescribe a treatment, and then discharge the patient - instead they follow up, checking to see whether their hypothesised diagnosis is disproved by the effects of the treatment on the patient.
To what extent medicine is in fact scientific is a different issue. I think what makes the scientific approach different from the medical approach is the goal - for a doctor the overriding concern is that the patient gets better, and explaining an illness is merely a route to finding a treatment that works, and it doesn't really matter why the treatment works, or even that the improvement is in fact because of the treatment. A scientist would probably prefer to decapitate the patient if that was needed for a critical experiment to better establish the real cause of disease, but this is generally frowned on in medicine.
I think one index of how scientists feel is in the kinds of comments that referees give when rejecting manuscripts: phrases like "mere correlation", "circumstantial evidence", "circular reasoning", "no mechanistic explanation", "failure to exclude alternative explanations", "no clear hypothesis", "merely confirmatory evidence", "lacking a critical test" and "purely negative evidence" are very common grounds for rejection, if not always fair. Gareth Leng 03:40, 17 May 2007 (CDT)
Math workgroup
Does it really belong in the Math Workgroup? Inspired by recent discussions on ID (or on its assignment to the Bio Workgroup), I think that no mathematical training includes problems of this kind. Mathematicians are not working on this, are they... They can neither approve it nor prevent its approval over some mathematical parts (are there any?). Actually, even some portions relevant to statistics, IMHO, belong in Philosophy rather than in Math. IMHO, a math editor can act here as an author only. Paradoxically, from a point of view, Math is not considered a science at all ;-) While the topic is of my personal interest, I suggest deleting the Math Workgroup tag. --Aleksander Stos 01:31, 1 June 2007 (CDT)
natural phenomena
From the article: "Scientists propose hypotheses to explain natural phenomena". Does this not (inadvertently, I am sure) imply that only "natural science" is real science? Daniel Demaret 02:15, 13 January 2008 (CST)
- Thanks :-) Gareth Leng 09:36, 4 February 2008 (CST)
Is the "scientific method" pseudoscience?
Growing up in school I always thought that the scientific method was how you develop and test theories... is it actually a load of bunk? --Robert W King 10:09, 4 February 2008 (CST)
- Some of us (try) to do that (to adopt a "philosophical" approach). But if you look at what most scientists actually do - well, they do all kinds of things, and quietly, they often think that what some of their colleagues do is close to pseudoscience. It works from both sides: data collectors often treat theory with contempt (the current line in denigration is that hypotheses encourage a biased view of data), some theorists think that data collection is mindless (that data without theory is garbage), and people with different ideas regard each other with scarcely concealed suspicion about their sanity, intellect or honesty - sometimes even when the ideas are scarcely distinguishable to an outsider. It's competitive; maybe it has to be. But I don't think it's bunk, not at all; it's just that human elements are important. :-) Gareth Leng 03:33, 5 February 2008 (CST)