Evidence-based medicine: Difference between revisions

From Citizendium
imported>Gareth Leng
(/* Studies */ moved to new article)
==Studies on how to teach evidence-based medicine==
This section includes implications based on the preceding discussion of studies of effectiveness. In addition, this section includes other studies or reports of teaching methods, even though these methods have not been subjected to study of effect on clinical outcomes.


===Search strategies===
A search strategy similar to the 5S strategy should be taught for use when the searcher has limited time available during clinical care. This is based on one positive study of its use<ref name="pmid17082828">{{cite journal |author=Patel MR ''et al.'' |title=Randomized trial for answers to clinical questions: evaluating a pre-appraised versus a [[MEDLINE]] search protocol |journal=JMLA |volume=94 |pages=382–7 |year=2006 |pmid=17082828 |doi=}}</ref> and two negative studies<ref name="pmid17683300"/><ref name="pmid11532204"/> of teaching the use of secondary and primary publications. In addition, indirect evidence on the time needed to search also supports the emphasis on using tertiary publications. Doctors may have two minutes available to search,<ref name="pmid10435959">{{cite journal |author=Ely JW ''et al.'' |title=Analysis of questions asked by family doctors regarding patient care |journal=BMJ |volume=319 |pages=358–61 |year=1999 |pmid=10435959 |doi=}}</ref> whereas using MEDLINE may take 20 minutes or more.<ref name="pmid8708623">{{cite journal |author=Chambliss ML, Conley J |title=Answering clinical questions |journal=The Journal of Family Practice |volume=43  |pages=140–4 |year=1996 |pmid=8708623 |doi=}}</ref><ref name="pmid11903763">{{cite journal |author=Cabell CH ''et al.'' |title=Resident utilization of information technology |journal=J Gen Intern Med |volume=16|pages=838–44 |year=2001 |pmid=11903763 |doi=}}</ref>
Teaching [[MEDLINE]] searching would be appropriate for ''Doers'' who might be willing to invest time in searching MEDLINE when not hurried by clinical care. Based on studies of common errors in searching MEDLINE, learners should be taught Medical Subject Headings (MeSH) terms and their explosion, appropriate limits, and the best evidence to search for.<ref name="pmid16186614">{{cite journal |author=Gruppen LD ''et al.''|title=A controlled comparison study of the efficacy of training medical students in evidence-based medicine literature searching skills |journal=Academic medicine |volume=80  |pages=940–4 |year=2005 |pmid=16186614 |doi=}}</ref> The mnemonic PEARL may guide how to teach.<ref name="pmid16501264">{{cite journal |author=Silk H ''et al.''|title=A new way to integrate clinically relevant technology into small-group teaching |journal=Academic Medicine  |volume=81 |pages=239–44 |year=2006 |pmid=16501264 |doi=}}</ref> PEARL stands for:
# "Choose a ''''P'''replanned search intervention'"
# "Allow learners to ''''E'''xecute the search,' thus committing themselves"
# "''''A'''llow learners to teach other learners' about their search process"
# "''''R'''eview the quality of evidence' for the information found"
# "Discuss ''''L'''essons of the search.'"
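As an illustrative sketch (not drawn from the studies cited above), a MEDLINE query using a MeSH heading, explosion, and limits can be composed against NCBI's public E-utilities interface; the function name and the particular limits chosen here are hypothetical:

```python
from urllib.parse import urlencode

# NCBI E-utilities endpoint for searching PubMed (MEDLINE)
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_search_url(mesh_term, limit_to_rct=True, humans_only=True):
    """Compose an esearch URL for a MeSH heading with common limits.

    MeSH headings in PubMed are "exploded" (include narrower terms) by
    default; the publication-type limit approximates a "best evidence"
    filter. Illustrative sketch only.
    """
    parts = ['"%s"[MeSH Terms]' % mesh_term]
    if limit_to_rct:
        parts.append('randomized controlled trial[Publication Type]')
    if humans_only:
        parts.append('humans[MeSH Terms]')
    query = " AND ".join(parts)
    return BASE + "?" + urlencode({"db": "pubmed", "term": query})

url = build_search_url("Myocardial Infarction")
```

Building the URL rather than sending the request keeps the example self-contained; in practice the searcher would fetch it and parse the returned PubMed IDs.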
===Clinical reasoning===
Methods of clinical reasoning include probabilistic (Bayesian), causal (physiologic), and deterministic (rule-based) reasoning.<ref name="pmid2655522">{{cite journal | author = Kassirer JP | title = Diagnostic reasoning | journal = Ann Intern Med | volume = 110 | pages = 893–900 | year = 1989 | pmid = 2655522 | doi = }}</ref> In addition, medical experts rely more on pattern recognition, which is faster and less prone to error<ref name="pmid7503827">{{cite journal | author = Leape LL | title = Error in medicine | journal = JAMA | volume = 272 | pages = 1851–7 | year = 1994 | pmid = 7503827 | doi = }}</ref>; however, clinical experts seem flexible and may use whichever method of reasoning most easily represents and solves a given problem.<ref name="pmid17124025">{{cite journal | author = Norman G | title = Building on experience--the development of clinical reasoning | journal = N Engl J Med | volume = 355  | pages = 2251–2 | year = 2006 | pmid = 17124025 | doi = 10.1056/NEJMe068134}}</ref> Scales to measure clinical reasoning have been proposed.<ref name="pmid9231115">{{cite journal | author = Boshuizen HP ''et al.''| title = Measuring knowledge and clinical reasoning skills in a problem-based curriculum | journal = Medical education | volume = 31 | pages = 115–21 | year = 1997 | pmid = 9231115 | doi = }}</ref> Explicit Bayesian thinking with precise numbers is rarely done.<ref name="pmid3277516">{{cite journal | author = Moskowitz AJ ''et al.''| title = Dealing with uncertainty, risks, and tradeoffs in clinical decisions. A cognitive science approach | journal = Ann. Intern. Med.
| volume = 108| pages = 435–49 | year = 1988 | pmid = 3277516 | doi = }}</ref><ref name="pmid9576412">{{cite journal |author=Reid MC, Lane DA, Feinstein AR |title=Academic calculations versus clinical judgments: practicing physicians' use of quantitative measures of test accuracy |journal=Am J Med |volume=104 |pages=374–80 |year=1998 |pmid=9576412 |doi=}}</ref> Basic science knowledge is probably "encapsulated" into clinical knowledge.<ref name="pmid16043534">{{cite journal | author = de Bruin AB ''et al.'' | title = The role of basic science knowledge and clinical knowledge in diagnostic reasoning: a structural equation modeling approach | journal = Academic Medicine  | volume = 80 |  pages = 765–73 | year = 2005 | pmid = 16043534 | doi = }}</ref>
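The kind of explicit Bayesian calculation the paragraph above notes is rarely done can be sketched in a few lines: convert pretest probability to odds, multiply by the test's likelihood ratio, and convert back. The pretest probability and likelihood ratio below are illustrative, not drawn from any cited study:

```python
def post_test_probability(pretest_prob, likelihood_ratio):
    """Bayes' theorem in odds form: post-test odds = pretest odds x LR."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pretest_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Illustrative: 20% pretest probability, positive test with LR+ = 8
p = post_test_probability(0.20, 8)   # pretest odds 0.25 -> post odds 2 -> 2/3
```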
{| align="right" border="1"
|+'''Competing-hypotheses heuristic'''<ref name="pmid3385753">{{cite journal | author = Wolf FM ''et al.'' | title = Use of the competing-hypotheses heuristic to reduce 'pseudodiagnosticity' | journal = J Med Educ | volume = 63 | pages = 548–54 | year = 1988 | pmid = 3385753 | doi = }}</ref>
! Finding || Disease A || Disease B
|-
| Fever || 66%  || cell B
|-
| Rash || cell C  || cell D
|-
| colspan="3" | The most important missing information is cell B
|}
Possible strategies to improve clinical reasoning have been reviewed.<ref name="pmid17124019">{{cite journal |author=Bowen JL |title=Educational strategies to promote clinical diagnostic reasoning |journal=N Engl J Med |volume=355  |pages=2217–25 |year=2006 |pmid=17124019 |doi=10.1056/NEJMra054782}}</ref><ref name="pmid12377672">{{cite journal |author=Graber M ''et al.'' |title=Reducing diagnostic errors in medicine: what's the goal? |journal=Academic Medicine |volume=77  |pages=981–92 |year=2002 |pmid=12377672 |doi=}}</ref> They include using problem-based learning<ref name="pmid12377672"/>, teaching appropriate problem representation by creating a one-sentence summary of a case<ref name="pmid17124019"/>, using standardized patients<ref name="pmid16423099">{{cite journal |author=Windish DM ''et al.'' |title=Teaching medical students the important connection between communication and clinical reasoning |journal=J Gen Intern Med |volume=20 |pages=1108–13 |year=2005 |pmid=16423099 |doi=10.1111/j.1525-1497.2005.0244.x}}</ref>, teaching hypothetico-deductive reasoning<ref name="pmid11893348">{{cite journal |author=Wiese J ''et al.''|title=Improving oral presentation skills with a clinical reasoning curriculum: a prospective controlled study |journal=Am J Med |volume=112 |pages=212–8 |year=2002 |pmid=11893348 |doi=}}</ref><ref name="pmid7070446">{{cite journal |author=Eddy DM, Clanton CH |title=The art of diagnosis: solving the clinicopathological exercise |journal=N Engl J Med |volume=306 |pages=1263–8 |year=1982 |pmid=7070446 |doi=}}</ref>, using cognitive forcing strategies<ref name="pmid11073470">{{cite journal |author=Croskerry P |title=The cognitive imperative: thinking about how we think |journal=Academic Emergency Medicine |volume=7 |pages=1223–31 |year=2000 |pmid=11073470 |doi=}}</ref><ref name="pmid12414468">{{cite journal |author=Croskerry P |title=Achieving quality in clinical decision making: cognitive strategies and detection of bias |journal=Academic Emergency Medicine |volume=9 |pages=1184–204 |year=2002 |pmid=12414468 |doi=}}</ref> to avoid premature closure<ref name="pmid3736379">{{cite journal |author=Dubeau CE ''et al.''|title=Premature conclusions in the diagnosis of iron-deficiency anemia: cause and effect |journal=Medical Decision Making  |volume=6  |pages=169–73 |year=1986 |pmid=3736379 |doi=}}</ref>, teaching the competing-hypotheses heuristic<ref name="pmid3385753"/>, and using fuzzy-trace theory<ref name="pmid11251760">{{cite journal |author=Lloyd FJ, Reyna VF |title=A web exercise in evidence-based medicine using cognitive theory |journal=J Gen Intern Med|volume=16 |pages=94–9 |year=2001 |pmid=11251760 |doi=}} [http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pubmed&pubmedid=11251760 PubMed Central]</ref>.
Studies are inconclusive on using cognitive feedback<ref name="pmid7898300">{{cite journal |author=Poses RM ''et al.''|title=You can lead a horse to water--improving physicians' knowledge of probabilities may not affect their decisions |journal=Medical Decision Making |volume=15  |pages=65–75 |year=1995 |pmid=7898300 |doi=}}</ref> and teaching logic<ref name="pmid3742999">{{cite journal |author=Cheng PW ''et al.'' |title=Pragmatic versus syntactic approaches to training deductive reasoning |journal=Cognitive Psychology |volume=18 |pages=293–328 |year=1986 |pmid=3742999 |doi=10.1016/0010-0285(86)90002-2}}</ref><ref name="pmid16907682">{{cite journal | author = Jenicek M | title = The hard art of soft science: Evidence-Based Medicine, Reasoned Medicine or both? | journal = Journal of Evaluation in Clinical Practice | volume = 12  | pages = 410–9 | year = 2006 | pmid = 16907682 | doi = 10.1111/j.1365-2753.2006.00718.x}}</ref>.


==Criticisms of evidence-based medicine==

Revision as of 06:30, 19 November 2007


Evidence-based medicine is "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients".[1] Alternative definitions are "the process of systematically finding, appraising, and using contemporaneous research findings as the basis for clinical decisions"[2] or "evidence-based medicine (EBM) requires the integration of the best research evidence with our clinical expertise and our patient's unique values and circumstances."[3] Better known as EBM, evidence-based medicine emerged in the early 1990s to help healthcare providers and policy makers evaluate the efficacy of different treatments.

Evidence-based practice is not restricted to medicine; dentistry, nursing and other allied health sciences are adopting "evidence-based medicine", as are alternative medical approaches such as acupuncture.[4][5] Evidence-Based Health Care or evidence-based practice extends the concept of EBM to all health professions, including management[6][7] and policy[8][9][10].

Why do we need evidence-based medicine?

It is easy to assume that physicians always use scientific evidence conscientiously and judiciously in treating patients. In fact, most of the specific practices of physicians and surgeons are based on traditional techniques learned from their mentors in the care of patients during training. Additional modifications come from personal clinical experience, from information in the medical literature and continuing education courses. Although these practices almost always have a rational basis in biology, the actual efficacy of treatments is rarely tested by experimental trials in people. Further, even when the results of experimental trials or other evidence have been reported, there is a lag time between the acceptance of changes to medical practice and establishing them as routine in clinical care. EBM seeks to address these issues by promoting practices that have been shown to have validity using the scientific method.

Steps in evidence-based medicine

Ask

"Ask" - Formulate a well-structured clinical question.

Acquire

The ability to "acquire" evidence in a timely manner may improve healthcare.[11] Unfortunately, doctors may be led astray when acquiring information as often as they find correct answers.[12]

A proposed structure of the evidence search is the 5S search strategy,[13] which starts with the search of "summaries" (textbooks). A randomized controlled trial supports the efficiency of this approach.[14]

Appraise

To "appraise" the quality of the answer found is very important, as one third of the results of even the most visible medical research are eventually either attenuated or refuted.[15] There are many reasons for this[16]; two important reasons are publication bias[17] and conflict of interest[18]. These two problems interact, as conflict of interest often leads to publication bias.[19][17]

However, an obvious and important reason is that many (if not all) studies contain potential flaws in their design. Even when there are no clear methodological flaws, any outcome evaluated by a statistical test has a margin of error; this means that some positive outcomes will be "false positives".

Publication bias

Whether a treatment or medical intervention is effective or not may be judged either on the basis of the experience of the practising physician, or on the basis of what has been published by others. The publications with greatest authority are generally those that appear in peer-reviewed scientific journals, particularly those journals generally thought to have the highest standards of editorial scrutiny. However, it is no simple matter to get a study published in any peer-reviewed journal, least of all in the best journals. Accordingly, many studies go unreported. It is often thought to be particularly difficult to publish small studies whose outcomes conflict with the reported outcomes of larger previously published studies, or to publish studies where the outcome is equivocal, with no clear conclusion to be drawn. In part this reflects the wish of the best journals to publish influential papers, and in part it simply reflects authors choosing not to put their energies into publishing studies that are thought to be uninteresting. Such publication bias can be difficult to recognise, but its effects generally tend to encourage publication of studies that support an already formed conclusion, while discouraging publication of contradictory or equivocal findings.[17][20] Publication bias may be more prevalent in industry-sponsored research.[21]

In performing a meta-analysis, a file drawer analysis[22] or a funnel plot analysis[23][24] may help detect underlying publication bias among the included studies.
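A minimal sketch of one common funnel-plot asymmetry check, Egger's regression of the standardized effect on precision, implemented here with plain least squares (toy data; real analyses also test whether the intercept differs significantly from zero):

```python
def egger_intercept(effects, std_errors):
    """Egger's regression intercept: regress effect/SE on 1/SE.

    A pronounced nonzero intercept suggests funnel-plot asymmetry, one
    possible signal of publication bias. Illustrative sketch only.
    """
    z = [e / s for e, s in zip(effects, std_errors)]   # standardized effects
    x = [1 / s for s in std_errors]                    # precisions
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    slope = (sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
             / sum((xi - mx) ** 2 for xi in x))
    return mz - slope * mx

# Symmetric toy data: identical effect at every precision, so intercept ~ 0
intercept = egger_intercept([0.5, 0.5, 0.5, 0.5], [0.1, 0.2, 0.3, 0.4])
```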

Conflict of interest

In any publication, there is always some potential conflict of interest. All the work of scientists is funded by groups such as charities, public bodies or private industry, so there may be pressure to overstate outcomes or to bias a trial to favor a particular outcome. Unfortunately, the presence of authors with a conflict of interest is not reliably indicated in journal articles.[25] Worse, it has been reported that some published articles use 'ghost writers'.[26] Ghost writers may have a conflict of interest that is not apparent, since they are not credited as authors in the byline. Finally, academic scientists gain their professional reputations by publishing in quality journals, and purely factual summaries do not necessarily impress journal editors any more than they inspire casual readers.

In the design of randomized controlled trials, industry-sponsored studies may be more likely to select an inappropriate comparator group that would favor finding benefit in the experimental group. This may manifest itself in comparing the effectiveness of a new drug with that of an established older treatment rather than with a competitor's current treatment.[21] When reporting data from randomized controlled trials, industry-sponsored studies may be more likely to omit intention-to-treat analyses.[19] Regarding the conclusions reached in randomized controlled trials, industry-sponsored studies may be more likely to conclude that drugs are safe, even when they have increased adverse effects.[27] Alternatively, the usefulness of drugs may be overstated, although this is contentious since one study did not find evidence of overstatement.[28] In contrast, a later study found that industry-sponsored studies are more likely to recommend the experimental drug as treatment of choice even after adjusting for the treatment effect.[29]

Obviously a pharmaceutical company wants to report that its drug is better than a competitor's drug, or better than no treatment; however, due to the threat of litigation, it is not in its interests to suppress or minimise evidence of harm. For the scientists who are conducting the trials, however, the perspective might be different: if it becomes clear that a drug is useless or harmful, then the company will cease to work on the drug and a scientist's livelihood could be threatened. Consequently, the responsibility for the integrity of the design and analysis of studies lies squarely with the authors. If the scientists involved in any trial lack competence or integrity, then this will prejudice the value of the trial both for the public and indeed for their industrial sponsors.

Other issues

Statistical analysis of the outcomes of a clinical trial is a complex and highly technical process, often requiring the involvement of a professional medical statistician whose advice is needed in the design of the trial as well as in the analysis of its outcome. Flaws in the design of a trial can lead subsequently to weaknesses in statistical analysis. Ideally, a trial protocol should be carefully designed with statistical issues in mind, with the hypothesis under test clearly formulated, and the agreed protocol should then be strictly adhered to. Often, however, problems arise during the trial; for example, there may be unanticipated outcomes, or problems in patient recruitment or in compliance with the trial protocol, and these problems can weaken the power and authority of the trial. Common problems include small sample sizes in some of the groups[30], problems of "multiple comparisons" when several different outcomes are being assessed, and biasing of study populations by selection criteria.

Application

It is important to "apply" the best practices found to the correct situation. One common problem in applying evidence is difficulty with numeracy: both patients and healthcare professionals have difficulties with health numeracy and probabilistic reasoning.[31] A second problem is recognising the patient population that will benefit from the new practices. Extrapolating study results to the wrong patient populations (over-generalization)[32][33][34] and not applying study results to the correct population (under-utilization)[35][36] can both increase adverse outcomes.

The problem of over-generalization of study results may be more common among specialist physicians.[37] Two studies found specialists were more likely to adopt cyclooxygenase-2 inhibitor drugs before the drug rofecoxib was withdrawn by its manufacturer after it emerged that its use had unanticipated adverse effects.[38][39] One of the studies went on to state:

"using COX-2s as a model for physician adoption of new therapeutic agents, specialists were more likely to use these new medications for patients likely to benefit but were also significantly more likely to use them for patients without a clear indication".[39]

Similarly, orthopedists may provide more intensive care for back pain, but without benefit from the increased care.[40] Specialists may be less discriminating in their choice of journal reading.[41]

The problem of under-utilizing study results may be more common when physicians are practicing outside of their expertise. For example, specialist physicians are less likely to under-utilize specialty care[42][43], while primary care physicians are less likely to under-utilize preventive care[44][45].

Classification

Two types of evidence-based medicine have been proposed.[46]

Evidence-based guidelines

Evidence-based guidelines (EBG) is the practice of evidence-based medicine at the organizational or institutional level. This includes the production of guidelines, policy, and regulations.

Evidence-based individual decision making

Evidence-based individual decision (EBID) making is evidence-based medicine as practiced by the individual health care provider and an individual patient. There is concern that current evidence-based medicine focuses excessively on EBID.[46]

Evidence-based individual decision making can be further divided into three modes, "doer", "user", and "replicator", according to the intensity of the work by the individual.[47]

This categorization somewhat parallels the theory of Diffusion of innovations, in which adopters of innovation are categorized as innovators (2.5%), early adopters (13%), early majority (33%), late majority (33%), and laggards (16%), though without the pejorative terms.[48] This categorization for doctors is supported by a preliminary empirical study of Green et al. that grouped doctors into Seekers, Receptives, Traditionalists, and Pragmatists.[49] The study of Green et al. has not been externally validated.

The same doctor may resemble a different group depending on how much time is available to seek evidence during clinical care.[50] Medical residents early in training tend to prefer being taught the practitioner model, whereas residents later in training tend to prefer the user model.[51]

Doer

The "doer"[47] or "practitioner"[52] of evidence-based medicine performs at least the first four steps (above) of evidence-based medicine, yielding "self-acquired"[50] knowledge.

If the Doers are the same as the "Seekers" in the study of Green, then this group may be 3% of physicians.[49]

This group may also be the similarly small group of doctors who use formal Bayesian calculations[53] or MEDLINE searches[54].

User

For the "user" of evidence-based medicine, "[literature] searches are restricted to evidence sources that have already undergone critical appraisal by others, such as evidence-based guidelines or evidence summaries".[47] More recently, the 5S search strategy,[13] which starts with a search of "summaries" (evidence-based textbooks), offers a quicker approach.[14]

If the Users are the same as the "Receptives" in the study of Green, then this group may be 57% of physicians.[49]

Replicator

For the "replicator", "decisions of respected opinion leaders are followed"[47]. This has been called "'borrowed' expertise".[50]

If the Replicators are the same as the "Traditionalists" and "Pragmatists" combined in the study of Green, then this group may be 40% of physicians.[49] This is a very broad group of doctors. Possibly the lowest end of this group may be equivalent to the laggards of Rogers. This much smaller group of doctors, ones who have "severely diminished capacity for self-improvement", may be at increased risk of disciplinary action by medical boards.[55]

Metrics used in evidence-based medicine

Diagnosis

  • Sensitivity and specificity
  • Likelihood ratios (Odds ratios)
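These diagnostic metrics follow directly from a 2×2 table of test results against disease status; a minimal sketch with illustrative counts (not taken from any cited study):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and likelihood ratios from a 2x2 table."""
    sens = tp / (tp + fn)            # proportion of diseased who test positive
    spec = tn / (tn + fp)            # proportion of healthy who test negative
    lr_pos = sens / (1 - spec)       # LR+: how much a positive test raises odds
    lr_neg = (1 - sens) / spec       # LR-: how much a negative test lowers odds
    return sens, spec, lr_pos, lr_neg

# Illustrative table: 90 true positives, 10 false negatives,
# 80 true negatives, 20 false positives
sens, spec, lr_pos, lr_neg = diagnostic_metrics(90, 20, 10, 80)
# sensitivity 0.9, specificity 0.8, LR+ 4.5, LR- 0.125
```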

Interventions

Relative measures

  • Relative risk ratio
  • Relative risk reduction

Absolute measures

  • Absolute risk reduction
  • Number needed to treat
  • Number needed to screen
  • Number needed to harm
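The relative and absolute intervention measures above can all be derived from two event rates; a minimal sketch with illustrative rates (not from any cited trial):

```python
def intervention_metrics(control_event_rate, treated_event_rate):
    """Relative and absolute effect measures for an intervention."""
    cer, eer = control_event_rate, treated_event_rate
    rr = eer / cer              # relative risk ratio
    rrr = (cer - eer) / cer     # relative risk reduction
    arr = cer - eer             # absolute risk reduction
    nnt = 1 / arr               # number needed to treat to prevent one event
    return rr, rrr, arr, nnt

# Illustrative rates: 10% events on control, 5% on treatment
rr, rrr, arr, nnt = intervention_metrics(0.10, 0.05)
# rr 0.5, rrr 0.5, arr 0.05, nnt 20
```

Note how the relative measures (rr, rrr) look identical whether the baseline risk is 10% or 0.1%, while the absolute measures (arr, nnt) change dramatically, which is why both kinds are reported.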

Health policy

  • Cost per year of life saved[56]
  • Years (or months or days) of life saved. "A gain in life expectancy of a month from a preventive intervention targeted at populations at average risk and a gain of a year from a preventive intervention targeted at populations at elevated risk can both be considered large."[57]

Statistical significance

The outcome of a trial or study is often summarised by calculation of a "P-value": a statistical calculation of the chance that an observed difference between treatment groups reflects the chance outcome of random sampling rather than a true difference in treatment effectiveness. Some have argued that focussing on P values neglects other important sources of knowledge and information that should properly be used to assess the likely efficacy of a treatment.[58] In particular, some argue that the P-value should be interpreted in light of how plausible the hypothesis is based on the totality of prior research and physiologic knowledge.[59][58][60]
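The argument that P-values should be weighed against prior plausibility can be made concrete with a small calculation: given a prior probability that the hypothesis is true, a significance threshold, and statistical power, one can estimate how often a "significant" result reflects a true effect. The numbers below are illustrative:

```python
def prob_finding_is_true(prior, alpha=0.05, power=0.8):
    """Probability that a statistically significant result reflects a true
    effect, given the prior plausibility of the hypothesis. Illustrative
    sketch; assumes a single pre-specified test with the stated power.
    """
    true_pos = prior * power           # truly effective and detected
    false_pos = (1 - prior) * alpha    # ineffective but "significant" by chance
    return true_pos / (true_pos + false_pos)

# A long-shot hypothesis (prior 10%) that reaches P < 0.05:
p = prob_finding_is_true(0.10)   # 0.08 / (0.08 + 0.045) = 0.64
```

Even with conventional alpha and good power, an implausible hypothesis that reaches significance is still wrong roughly a third of the time in this sketch, which is the intuition behind interpreting P-values in light of prior research.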

Experimental trials: producing the evidence

For more information, see: Randomized controlled trial.

"A clinical trial is defined as a prospective scientific experiment that involves human subjects in whom treatment is initiated for the evaluation of a therapeutic intervention. In a randomized controlled clinical trial, each patient is assigned to receive a specific treatment intervention by a chance mechanism."[61] The theory behind these trials is that the value of a treatment will be shown in an objective way, and, though usually unstated, there is an assumption that the results of the trial will be applicable to the care of patients who have the condition that was treated.

The best evidence is thought to come from large multicentre clinical trials that are randomised and placebo-controlled, and which are conducted double-blind according to a predetermined schedule that is strictly adhered to. Trials should be large, so that serious adverse events might be detected even when they occur rarely. Multi-centre trials minimise problems that can arise when a single geographical locus has a population that is not fully representative of the global population, and they can minimise the effect of geographical variations in environment and health care delivery. Randomisation (if the study population is large enough) should mean that the study groups are unbiased. A double-blind trial is one in which neither the patient nor the deliverer of the treatment is aware of the nature of the treatment offered to any particular individual, and this avoids bias caused by the expectations of either the doctor or the patient. Placebo controls are important, because the placebo effect can often be very strong.

However such trials are very expensive, difficult to co-ordinate properly, and are often impractical to design optimally. For example, for many types of medical intervention, no satisfactory placebo treatment is possible. For several medical interventions, the use of a placebo, although feasible, is considered unethical (see section on Unethical use of placebos).

Sackett, one of the founders of evidence-based medicine, recognized that large-scale trials were not conducted for many conditions (see section below), and that it might not be possible to conduct them. Underlining the inherent difficulty in extrapolating from large scale trials, Sackett proposed the use of N of 1 randomized controlled trials (also called single-subject randomized trials). In these trials, the patient is both the treatment group and the placebo group, but at different time periods. Blinding must be done with the collaboration of the pharmacist, and treatment effects must appear and disappear quickly following introduction and cessation of the therapy. This type of RCT can be performed for many chronic, stable conditions.[62] The individualized nature of the single-subject randomized trial, and the fact that it often requires the active participation of the patient (questionnaires, diaries), appeals to the patient and promotes better insight and self-management[63][64] as well as patient safety,[65] in a cost-effective manner.
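A single-subject randomized trial needs a blinded, randomized schedule of treatment and placebo periods. A minimal sketch of generating such a schedule is below; the pairing-of-periods design shown is one common choice, not a requirement of the cited work:

```python
import random

def n_of_1_schedule(n_pairs, seed=None):
    """Randomized schedule for a single-subject (N-of-1) trial.

    Each pair of periods contains active treatment and placebo in random
    order, so the patient serves as both treatment and control group.
    In practice the pharmacist holds this schedule to preserve blinding.
    """
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_pairs):
        pair = ["treatment", "placebo"]
        rng.shuffle(pair)
        schedule.extend(pair)
    return schedule

# e.g. three treatment/placebo pairs for a stable chronic condition
periods = n_of_1_schedule(3, seed=42)
```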

Evidence synthesis: summarizing the evidence

Systematic review

For more information, see: Systematic review.

A systematic review is a summary of healthcare research that involves a thorough literature search and critical appraisal of individual studies to identify the valid and applicable evidence. It often, but not always, uses appropriate techniques (meta-analysis) to combine these valid studies, and may grade the quality of the particular pieces of evidence according to the methodology used, and according to strengths or weaknesses of the study design.

While many systematic reviews are based on an explicit quantitative meta-analysis of available data, there are also qualitative reviews which nonetheless adhere to the standards for gathering, analyzing and reporting evidence.
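The quantitative combination step (meta-analysis) most often pools study effect sizes by inverse-variance weighting, so that more precise studies count for more. A minimal fixed-effect sketch with illustrative study results:

```python
def fixed_effect_pool(effects, std_errors):
    """Fixed-effect inverse-variance pooling of study effect sizes.

    Each study is weighted by 1/SE^2, so large precise studies dominate.
    Illustrative sketch; real reviews also assess heterogeneity and may
    use a random-effects model instead.
    """
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Two illustrative studies: the more precise one (SE 0.1) dominates
pooled, pooled_se = fixed_effect_pool([0.4, 0.1], [0.1, 0.3])
# pooled estimate lands near 0.37, close to the precise study's 0.4
```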

Clinical practice guidelines

For more information, see: Clinical practice guideline.

Clinical practice guidelines are defined as "Directions or principles presenting current or future rules of policy for assisting health care practitioners in patient care decisions regarding diagnosis, therapy, or related clinical circumstances. The guidelines may be developed by government agencies at any level, institutions, professional societies, governing boards, or by the convening of expert panels. The guidelines form a basis for the evaluation of all aspects of health care and delivery."[66]

Medical informatics: Incorporating evidence into clinical care

For more information, see: Medical informatics.

Practicing clinicians usually cite a lack of time for reading newer textbooks or journals. However, the emergence of new types of evidence can change the way doctors treat patients. Unfortunately, recent scientific evidence gathered through well-controlled clinical trials usually does not reach busy clinicians in real time. Another potential problem is that there may be numerous trials on similar interventions and outcomes that have not been systematically reviewed or meta-analyzed.

Medical informatics is an essential adjunct to EBM, and focuses on creating tools to access and apply the best evidence for making decisions about patient care.[3]

Before practicing EBM, informaticians (or informationists) must be familiar with medical journals, literature databases, medical textbooks, practice guidelines, and the growing number of other dedicated evidence-based resources, like the Cochrane Database of Systematic Reviews and Clinical Evidence.[67]

Similarly, for practicing medical informatics properly, it is essential to have an understanding of EBM, including the ability to phrase an answerable question, locate and retrieve the best evidence, and critically appraise and apply it.[68][69]



Criticisms of evidence-based medicine

There are a number of criticisms of EBM.[70][47] Most generally, EBM has been criticized as an attempt to define knowledge in medicine in the same way that was done unsuccessfully by the logical positivists in epistemology, "trying to establish a secure foundation for scientific knowledge based only on observed facts".[71]

Unethical use of placebos

According to EBM, placebo control is an important element of a well-done randomized controlled trial. The Declaration of Helsinki by the World Medical Association condemns the use of placebos if a beneficial treatment exists.[72][73] Many scientists and ethicists consider that the U.S. Food and Drug Administration, by demanding placebo-controlled trials, encourages the systematic violation of the Declaration of Helsinki.[74] The use of placebo controls remains a convenient way to avoid direct comparisons with a competing drug.

As EBM evolves, appropriate use of placebo is being revised.[75][76] When guidelines suggest a placebo is an unethical control, then an "active-control noninferiority trial" may be used.[77] To establish non-inferiority, the following three conditions should be - but frequently are not - established:[77]

  1. "The treatment under consideration exhibits therapeutic noninferiority to the active control."
  2. "The treatment would exhibit therapeutic efficacy in a placebo-controlled trial if such a trial were to be performed."
  3. "The treatment offers ancillary advantages in safety, tolerability, cost, or convenience."

Lack of randomized controlled trials for clinical decisions

Randomized controlled trials are available to support 21%[78] to 53%[79] of principal therapeutic decisions.[80] Accordingly, evidence-based medicine has evolved to accept lesser levels of evidence when randomized controlled trials are not available.[81]

Ulterior motives

An early criticism of evidence-based medicine was that it would serve as a guise for rationing resources or pursuing other goals not in the interest of the patient.[82][83] In 1994, the American Medical Association helped introduce the "Patient Protection Act" in Congress to reduce the power of insurers to use guidelines to deny payment for medical services.[84]

As a possible example, Milliman Care Guidelines states that it has produced "evidence-based clinical guidelines since 1990".[85] In 2000, an academic pediatrician sued Milliman for using his name as an author on practice guidelines that he said were "dangerous".[86][87][88] A similar suit disputing the origin of care decisions at Kaiser has been filed.[89] The outcomes of both suits are not known.

Conversely, clinical practice guidelines by the Infectious Diseases Society of America are being investigated by Connecticut's attorney general on the grounds that the guidelines, which do not recognize a chronic form of Lyme disease, are anticompetitive.[90][91]

EBM not recognizing the limits of clinical epidemiology

A common criticism of epidemiology is that it can show association but not causation. Evidence-based medicine is a set of techniques derived from clinical epidemiology. While clinical epidemiology has a role in informing clinical decisions when complemented with testable hypotheses about disease,[92] many critics consider that evidence-based medicine is a form of clinical epidemiology that became so prevalent in health care systems, and imposed such an empiricist bias on medical research, that it has undermined the very notion of causal inference in clinical practice.[93] It is argued that it has even become condemnable to use common sense,[94] as was cleverly illustrated in a systematic review of randomized controlled trials studying the effects of parachutes against gravitational challenge (free fall).[95]

Fallibility of knowledge

Evidence-based medicine has been criticized on epistemologic grounds as "trying to establish a secure foundation for scientific knowledge based only on observed facts"[71] and as failing to recognize the fallible nature[96] of knowledge in general. The inevitable failure of reliance on empirical evidence as a sole foundation for knowledge was recognized centuries ago and is known as the "problem of induction" or "Hume's problem".[97]

Complexity theory

Complexity theory and chaos theory have been proposed as further explaining the nature of medical knowledge.[98][99] Regarding health services research, although complexity theory has not advanced to the point of being able to mathematically model healthcare delivery, it has been used as a framework for case studies[100][101][102][103] and traditional bivariate analysis[104] of healthcare delivery. For example, a systematic review of organizational interventions to improve the quality of care of diabetes mellitus type 2 suggests that interventions based on complexity theory will be more successful.[104] If the goal of modeling healthcare is to design interventions that comply with specific quality indicators, interventions based on systems theory may be more effective than those based on complexity theory.[105]

Regarding basic science research, fractals have been found in cardiac conduction.[106]

References

  1. Sackett DL et al. (1996). "Evidence based medicine: what it is and what it isn't". BMJ 312: 71–2. PMID 8555924[e]
  2. Evidence-Based Medicine Working Group (1992). "Evidence-based medicine. A new approach to teaching the practice of medicine. Evidence-Based Medicine Working Group". JAMA 268: 2420–5. PMID 1404801[e]
  3. 3.0 3.1 Glasziou, Paul; Strauss, Sharon Y. (2005). Evidence-based medicine: how to practice and teach EBM. Elsevier/Churchill Livingstone. ISBN 0-443-07444-5. 
  4. Manheimer E et al. (2007). "Meta-analysis: acupuncture for osteoarthritis of the knee". Ann Intern Med 146: 868–77. PMID 17577006[e]
  5. Assefi NP et al. (2005). "A randomized clinical trial of acupuncture compared with sham acupuncture in fibromyalgia". Ann Intern Med 143: 10–9. PMID 15998750[e]
  6. Clancy CM, Cronin K (2005). "Evidence-based decision making: global evidence, local decisions". Health affairs (Project Hope) 24: 151–62. DOI:10.1377/hlthaff.24.1.151. PMID 15647226. Research Blogging.
  7. Shojania KG, Grimshaw JM (2005). "Evidence-based quality improvement: the state of the science". Health affairs (Project Hope) 24: 138–50. DOI:10.1377/hlthaff.24.1.138. PMID 15647225. Research Blogging.
  8. Fielding JE, Briss PA (2006). "Promoting evidence-based public health policy: can we have better evidence and more action?". Health affairs (Project Hope) 25: 969–78. DOI:10.1377/hlthaff.25.4.969. PMID 16835176. Research Blogging.
  9. Foote SB, Town RJ (2007). "Implementing evidence-based medicine through medicare coverage decisions". Health affairs (Project Hope) 26 (6): 1634–42. DOI:10.1377/hlthaff.26.6.1634. PMID 17978383. Research Blogging.
  10. Fox DM (2005). "Evidence of evidence-based health policy: the politics of systematic reviews in coverage decisions". Health affairs (Project Hope) 24: 114–22. DOI:10.1377/hlthaff.24.1.114. PMID 15647221. Research Blogging.
  11. Banks DE et al. (2007). "Decreased hospital length of stay associated with presentation of cases at morning report with librarian support". Journal of the Medical Library Association : JMLA 95: 381–7. DOI:10.3163/1536-5050.95.4.381. PMID 17971885. Research Blogging.
  12. McKibbon KA, Fridsma DB (2006). "Effectiveness of clinician-selected electronic information resources for answering primary care physicians' information needs". Journal of the American Medical Informatics Association : JAMIA 13: 653–9. DOI:10.1197/jamia.M2087. PMID 16929042. Research Blogging.
  13. 13.0 13.1 Haynes RB (2006). "Of studies, syntheses, synopses, summaries, and systems: the "5S" evolution of information services for evidence-based health care decisions". ACP J Club 145: A8. PMID 17080967[e]
  14. 14.0 14.1 Patel MR et al. (2006). "Randomized trial for answers to clinical questions: evaluating a pre-appraised versus a MEDLINE search protocol". JMLA 94: 382–7. PMID 17082828[e]
  15. Ioannidis JP (2005). "Contradicted and initially stronger effects in highly cited clinical research". JAMA 294: 218–28. DOI:10.1001/jama.294.2.218. PMID 16014596. Research Blogging.
  16. Ioannidis JP et al. (1998). "Issues in comparisons between meta-analyses and large trials". JAMA 279: 1089–93. PMID 9546568[e]
  17. 17.0 17.1 17.2 Dickersin K et al. (1992). "Factors influencing publication of research results. Follow-up of applications submitted to two institutional review boards". JAMA 267: 374–8. PMID 1727960[e]
  18. Smith R (2005). "Medical journals are an extension of the marketing arm of pharmaceutical companies". PLoS Med 2: e138. DOI:10.1371/journal.pmed.0020138. PMID 15916457. Research Blogging.
  19. 19.0 19.1 Melander H et al. (2003). "Evidence b(i)ased medicine--selective reporting from studies sponsored by pharmaceutical industry: review of studies in new drug applications". BMJ 326: 1171–3. DOI:10.1136/bmj.326.7400.1171. PMID 12775615. Research Blogging.
  20. Krzyzanowska MK et al. (2003). "Factors associated with failure to publish large randomized trials presented at an oncology meeting". JAMA 290: 495–501. DOI:10.1001/jama.290.4.495. PMID 12876092. Research Blogging.
  21. 21.0 21.1 Lexchin J et al. (2003). "Pharmaceutical industry sponsorship and research outcome and quality: systematic review". BMJ 326: 1167–70. DOI:10.1136/bmj.326.7400.1167. PMID 12775614. Research Blogging.
  22. Pham B et al. (2001). "Is there a "best" way to detect and minimize publication bias? An empirical evaluation". Evaluation & the Health Professions 24 (2): 109–25. PMID 11523382[e]
  23. Egger M et al. (1997). "Bias in meta-analysis detected by a simple, graphical test". BMJ 315: 629–34. PMID 9310563[e]
  24. Terrin N et al. (2005). "In an empirical evaluation of the funnel plot, researchers could not visually identify publication bias". J Clinical Epidemiol 58: 894–901. DOI:10.1016/j.jclinepi.2005.01.006. PMID 16085192. Research Blogging.
  25. Papanikolaou GN et al. (2001). "Reporting of conflicts of interest in guidelines of preventive and therapeutic interventions". BMC medical research methodology 1: 3. PMID 11405896[e]
  26. Laine C, Mulrow CD (2005). "Exorcising ghosts and unwelcome guests". Ann Intern. Med 143: 611–2. PMID 16230729[e]
  27. Nieto A et al. (2007). "Adverse effects of inhaled corticosteroids in funded and nonfunded studies". Arch Intern Med 167: 2047–53. DOI:10.1001/archinte.167.19.2047. PMID 17954797. Research Blogging.
  28. Friedberg M et al. (1999). "Evaluation of conflict of interest in economic analyses of new drugs used in oncology". JAMA 282: 1453–7. PMID 10535436[e]
  29. Als-Nielsen B et al. (2003). "Association of funding and conclusions in randomized drug trials: a reflection of treatment effect or adverse events?". JAMA 290: 921–8. DOI:10.1001/jama.290.7.921. PMID 12928469. Research Blogging.
  30. Glasziou P, Doll H (2007). "Was the study big enough? Two "café" rules (Editorial)". ACP J Club 147: A08. PMID 17975858[e]
  31. Ancker JS, Kaufman D (2007). "Rethinking Health Numeracy: A Multidisciplinary Literature Review". DOI:10.1197/jamia.M2464. PMID 17712082. Research Blogging.
  32. Gross CP et al. (2000). "Relation between prepublication release of clinical trial results and the practice of carotid endarterectomy". JAMA 284: 2886–93. PMID 11147985[e]
  33. Juurlink DN et al. (2004). "Rates of hyperkalemia after publication of the Randomized Aldactone Evaluation Study". N Engl J Med 351: 543–51. DOI:10.1056/NEJMoa040135. PMID 15295047. Research Blogging.
  34. Beohar N et al. (2007). "Outcomes and complications associated with off-label and untested use of drug-eluting stents". JAMA 297: 1992–2000. DOI:10.1001/jama.297.18.1992. PMID 17488964. Research Blogging.
  35. Soumerai SB et al. (1997). "Adverse outcomes of underuse of beta-blockers in elderly survivors of acute myocardial infarction". JAMA 277: 115–21. PMID 8990335[e]
  36. Hemingway H et al. (2001). "Underuse of coronary revascularization procedures in patients considered appropriate candidates for revascularization". N Engl J Med 344: 645–54. PMID 11228280[e]
  37. Turner BJ, Laine C (2001). "Differences between generalists and specialists: knowledge, realism, or primum non nocere?". J Gen Intern Med 16: 422-4. DOI:10.1046/j.1525-1497.2001.016006422.x. PMID 11422641. Research Blogging. PubMed Central
  38. Rawson N et al. (2005). "Factors associated with celecoxib and rofecoxib utilization". Ann Pharmacother 39: 597-602. PMID 15755796.
  39. 39.0 39.1 De Smet BD et al. (2006). "Over and under-utilization of cyclooxygenase-2 selective inhibitors by primary care physicians and specialists: the tortoise and the hare revisited". J Gen Intern Med 21: 694-7. DOI:10.1111/j.1525-1497.2006.00463.x. PMID 16808768. Research Blogging.
  40. Carey T et al. (1995). "The outcomes and costs of care for acute low back pain among patients seen by primary care practitioners, chiropractors, and orthopedic surgeons. The North Carolina Back Pain Project". N Engl J Med 333: 913-7. PMID 7666878.
  41. McKibbon KA et al. (2007). "Which journals do primary care physicians and specialists access from an online service?". JMLA 95: 246-54. DOI:10.3163/1536-5050.95.3.246. PMID 17641754. Research Blogging.
  42. Majumdar S et al. (2001). "Influence of physician specialty on adoption and relinquishment of calcium channel blockers and other treatments for myocardial infarction". J Gen Intern Med 16: 351-9. PMID 11422631.
  43. Fendrick A, Hirth R, Chernew M (1996). "Differences between generalist and specialist physicians regarding Helicobacter pylori and peptic ulcer disease". Am J Gastroenterol 91: 1544-8. PMID 8759658.
  44. Lewis C et al. (1991). "The counseling practices of internists". Ann Intern Med 114: 54-8. PMID 1983933.
  45. Turner B et al.. "Breast cancer screening: effect of physician specialty, practice setting, year of medical school graduation, and sex". Am J Prev Med 8: 78-85. PMID 1599724.
  46. 46.0 46.1 Eddy DM (2005). "Evidence-based medicine: a unified approach". Health affairs (Project Hope) 24: 9-17. DOI:10.1377/hlthaff.24.1.9. PMID 15647211. Research Blogging.
  47. 47.0 47.1 47.2 47.3 47.4 Straus SE, McAlister FA (2000). "Evidence-based medicine: a commentary on common criticisms". CMAJ : Canadian Medical Association Journal 163: 837–41. PMID 11033714[e]
  48. Berwick DM (2003). "Disseminating innovations in health care". JAMA 289: 1969–75. DOI:10.1001/jama.289.15.1969. PMID 12697800. Research Blogging.
  49. 49.0 49.1 49.2 49.3 Green LA, Gorenflo DW, Wyszewianski L (2002). "Validating an instrument for selecting interventions to change physician practice patterns: a Michigan Consortium for Family Practice Research study". Journal of Family Practice 51: 938–42. PMID 12485547[e]
  50. 50.0 50.1 50.2 Montori VM et al. (2002). "A qualitative assessment of 1st-year internal medicine residents' perceptions of evidence-based clinical decision making". Teaching and Learning in Medicine 14: 114–8. PMID 12058546[e]
  51. Akl EA et al. (2006). "EBM user and practitioner models for graduate medical education: what do residents prefer?". Medical Teacher 28: 192–4. DOI:10.1080/01421590500314207. PMID 16707306. Research Blogging.
  52. Guyatt GH et al. (2000). "Practitioners of evidence based care. Not all clinicians need to appraise evidence from scratch, but all need some skills". BMJ 320: 954–5. PMID 10753130[e]
  53. Reid MC et al. (1998). "Academic calculations versus clinical judgments: practicing physicians' use of quantitative measures of test accuracy". Am J Med 104: 374–80. PMID 9576412[e]
  54. Ely JW et al. (1999). "Analysis of questions asked by family doctors regarding patient care". BMJ 319: 358–61. PMID 10435959[e] PubMed Central
  55. Papadakis MA et al. (2005). "Disciplinary action by medical boards and prior behavior in medical school". N Engl J Med 353: 2673–82. DOI:10.1056/NEJMsa052596. PMID 16371633. Research Blogging.
  56. Tengs TO et al (1995). "Five-hundred life-saving interventions and their cost-effectiveness". Risk Anal 15: 369–90. PMID 7604170[e]
  57. Wright JC, Weinstein MC (1998). "Gains in life expectancy from medical interventions--standardizing data on outcomes". N Engl J Med 339: 380–6. PMID 9691106[e]
  58. 58.0 58.1 Goodman SN (1999). "Toward evidence-based medical statistics. 1: The P value fallacy". Ann Intern Med 130: 995–1004. PMID 10383371[e]
  59. Browner WS, Newman TB (1987). "Are all significant P values created equal? The analogy between diagnostic tests and clinical research". JAMA 257: 2459–63. PMID 3573245[e]
  60. Goodman SN (1999). "Toward evidence-based medical statistics. 2: The Bayes factor". Ann Intern Med 130: 1005–13. PMID 10383350[e]
  61. Stanley K (2007). "Design of randomized controlled trials". Circulation 115 (9): 1164–9. DOI:10.1161/CIRCULATIONAHA.105.594945. PMID 17339574. Research Blogging.
  62. Guyatt G, Sackett D, Adachi J, et al (1988). "A clinician's guide for conducting randomized trials in individual patients". CMAJ : Canadian Medical Association journal = journal de l'Association medicale canadienne 139 (6): 497–503. PMID 3409138[e]
  63. Brookes ST, Biddle L, Paterson C, Woolhead G, Dieppe P (2007). ""Me's me and you's you": Exploring patients' perspectives of single patient (n-of-1) trials in the UK". Trials 8: 10. DOI:10.1186/1745-6215-8-10. PMID 17371593. Research Blogging.
  64. Langer JC, Winthrop AL, Issenman RM (1993). "The single-subject randomized trial. A useful clinical tool for assessing therapeutic efficacy in pediatric practice". Clinical pediatrics 32 (11): 654–7. PMID 8299295[e]
  65. Mahon J, Laupacis A, Donner A, Wood T (1996). "Randomised study of n of 1 trials versus standard practice". BMJ 312 (7038): 1069–74. PMID 8616414[e]
  66. National Library of Medicine. Clinical practice guidelines. Retrieved on 2007-10-19.
  67. Mendelson D, Carino TV (2005). "Evidence-based medicine in the United States--de rigueur or dream deferred?". Health Affairs (Project Hope) 24: 133–6. DOI:10.1377/hlthaff.24.1.133. PMID 15647224. Research Blogging.
  68. Hersh W (2002). "Medical informatics education: an alternative pathway for training informationists". JMLA 90: 76–9. PMID 11838463[e]
  69. Shearer BS et al. (2002). "Bringing the best of medical librarianship to the patient team". JMLA 90: 22–31. PMID 11838456[e]
  70. Straus S et al. (2007). "Misunderstandings, misperceptions, and mistakes". Evidence-based medicine 12 (1): 2–3. DOI:10.1136/ebm.12.1.2-a. PMID 17264255. Research Blogging.
  71. 71.0 71.1 Goodman SN (2002). "The mammography dilemma: a crisis for evidence-based medicine?". Ann Intern Med 137: 363–5. PMID 12204023[e]
  72. World Medical Association. Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects. Retrieved on 2007-11-17.
  73. (1997) "World Medical Association declaration of Helsinki. Recommendations guiding physicians in biomedical research involving human subjects". JAMA 277 (11): 925–6. PMID 9062334[e]
  74. Michels KB, Rothman KJ (2003). "Update on unethical use of placebos in randomised trials". Bioethics 17 (2): 188–204. PMID 12812185[e]
  75. Temple R, Ellenberg SS (2000). "Placebo-controlled trials and active-control trials in the evaluation of new treatments. Part 1: ethical and scientific issues". Ann. Intern. Med. 133 (6): 455–63. PMID 10975964[e]
  76. Ellenberg SS, Temple R (2000). "Placebo-controlled trials and active-control trials in the evaluation of new treatments. Part 2: practical issues and specific cases". Ann. Intern. Med. 133 (6): 464–70. PMID 10975965[e]
  77. 77.0 77.1 Kaul S, Diamond GA (2006). "Good enough: a primer on the analysis and interpretation of noninferiority trials". Ann. Intern. Med. 145 (1): 62–9. PMID 16818930[e]
  78. Michaud G et al. (1998). "Are therapeutic decisions supported by evidence from health care research?". Arch Intern Med 158: 1665–8. PMID 9701101[e]
  79. Ellis J et al. (1995). "Inpatient general medicine is evidence based. A-Team, Nuffield Department of Clinical Medicine". Lancet 346: 407–10. PMID 7623571[e]
  80. Booth, A. Percentage of practice that is evidence based?. Retrieved on 2007-11-15.
  81. Haynes RB (2006). "Of studies, syntheses, synopses, summaries, and systems: the "5S" evolution of information services for evidence-based healthcare decisions". Evidence-based Medicine 11: 162–4. DOI:10.1136/ebm.11.6.162-a. PMID 17213159. Research Blogging.
  82. Grahame-Smith D (1995). "Evidence based medicine: Socratic dissent". BMJ 310: 1126–7. PMID 7742683[e]
  83. Formoso G et al. (2001). "Practice guidelines: useful and "participative" method? Survey of Italian physicians by professional setting". Arch Intern Med 161: 2037–42. PMID 11525707[e]
  84. Pear, R. A.M.A. and Insurers Clash Over Restrictions on Doctors - New York Times. Retrieved on 2007-11-14.
  85. Evidence-Based Clinical Guidelines by Milliman Care Guidelines. Retrieved on 2007-11-14.
  86. Nissimov, R (2000). Cost-cutting guide used by HMOs called `dangerous' / Doctor on UT-Houston Medical School staff sues publisher. Houston Chronicle. Retrieved on 2007-11-14.
  87. Nissimov, R (2000). Judge tells firm to explain how pediatric rules derived. Houston Chronicle. Retrieved on 2007-11-14.
  88. Martinez, B (2000). Insurance Health-Care Guidelines Are Assailed for Putting Patients Last. Wall Street Journal.
  89. Colliver, V (1/07/2002). Lawsuit disputes truth of Kaiser Permanente ads. San Francisco Chronicle. Retrieved on 2007-11-14.
  90. Warner, S (2/7/2007). The Scientist : State official subpoenas infectious disease group. Retrieved on 2007-11-14.
  91. Gesensway, D (2007). ACP Observer, January-February 2007 - Experts spar over treatment for 'chronic' Lyme disease. Retrieved on 2007-11-14.
  92. Djulbegovic B, Morris L, Lyman GH (2000). "Evidentiary challenges to evidence-based medicine". Journal of evaluation in clinical practice 6 (2): 99–109. PMID 10970004[e]
  93. Charlton BG (1997). "Book review: Evidence-based medicine: how to practice and teach EBM by Sackett DL, Richardson WS, Rosenberg W, Haynes RB". Journal of Evaluation in Clinical Practice 3: 169–72. http://www.hedweb.com/bgcharlton/journalism/ebm.html
  94. Michelson J (2004). "Critique of (im)pure reason: evidence-based medicine and common sense". Journal of evaluation in clinical practice 10 (2): 157–61. DOI:10.1111/j.1365-2753.2003.00478.x. PMID 15189382. Research Blogging.
  95. Smith GC, Pell JP (2003). "Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials". BMJ 327 (7429): 1459–61. DOI:10.1136/bmj.327.7429.1459. PMID 14684649. Research Blogging.
  96. Upshur RE (2000). "Seven characteristics of medical evidence". Journal of evaluation in clinical practice 6 (2): 93–7. PMID 10970003[e]
  97. Vickers, J (2006). The Problem of Induction (Stanford Encyclopedia of Philosophy). Stanford Encyclopedia of Philosophy. Retrieved on 2007-11-16.
  98. Sweeney, Kieran (2006). Complexity in Primary Care: Understanding Its Value. Abingdon: Radcliffe Medical Press. ISBN 1-85775-724-6. Review
  99. Holt, Tim A (2004). Complexity for Clinicians. Abingdon: Radcliffe Medical Press. ISBN 1-85775-855-2.  Review, ACP Journal Club Review
  100. Anderson RA, Crabtree BF, Steele DJ, McDaniel RR (2005). "Case study research: the view from complexity science". Qualitative health research 15 (5): 669–85. DOI:10.1177/1049732305275208. PMID 15802542. Research Blogging.
  101. Miller WL, McDaniel RR, Crabtree BF, Stange KC (2001). "Practice jazz: understanding variation in family practices using complexity science". The Journal of family practice 50 (10): 872–8. PMID 11674890[e]
  102. Crabtree BF, Miller WL, Stange KC (2001). "Understanding practice from the ground up". The Journal of family practice 50 (10): 881–7. PMID 11674891[e]
  103. Sturmberg JP (2007). "Systems and complexity thinking in general practice: part 1 - clinical application". Australian family physician 36 (3): 170–3. PMID 17339983[e]
  104. 104.0 104.1 Leykum LK, Pugh J, Lawrence V, et al (2007). "Organizational interventions employing principles of complexity science have improved outcomes for patients with Type II diabetes". Implementation science : IS 2: 28. DOI:10.1186/1748-5908-2-28. PMID 17725834. Research Blogging.
  105. Rhydderch M, Elwyn G, Marshall M, Grol R (2004). "Organisational change theory and the use of indicators in general practice". Quality & safety in health care 13 (3): 213–7. DOI:10.1136/qhc.13.3.213. PMID 15175493. Research Blogging.
  106. Goldberger AL (1996). "Non-linear dynamics for clinicians: chaos theory, fractals, and complexity at the bedside". Lancet 347 (9011): 1312–4. PMID 8622511[e] Full text at Ebsco