Guidelines have strayed from their original purpose and value — perhaps it is time for new approaches.
Clinical practice guidelines (CPGs), once somewhat spare and elegant in their creation, dissemination, and application, have become commonplace, tedious, and of questionable clinical relevance.
In a review of all American College of Cardiology and American Heart Association (ACC/AHA) CPGs issued from 1984 to 2008 (53 CPGs with 7196 recommendations on 22 topics), 16 current guidelines provided levels of evidence for their supporting data. Of the 2711 recommendations that carried evidence levels, only 314 were supported by level A evidence (multiple randomized trials or meta-analyses), and nearly half (1246) rested solely on level C evidence (expert opinion, case studies, or standards of care). Only 245 of 1305 class I recommendations (evidence or general agreement that a given procedure or treatment is useful and effective) had level A evidence, and just 30 of 350 class III recommendations (evidence or general agreement that a given procedure or treatment is not useful or effective and might be harmful) were based on level A evidence. Among guidelines that had been revised at least once, revisions contained 48% more recommendations than first versions.
The National Guideline Clearinghouse currently contains 2373 CPGs from 285 organizations. The guideline development process originally was designed to produce evidence-based "rules of thumb" that focus attention on a few core concerns in diagnosing and treating complex conditions. But, for some conditions, guidelines have become plethoric (e.g., at least 10 CPGs are available for management of pharyngitis in adults) and obsolete (relative to the rapidly changing world of biomedical research); they often are used to establish medical (and sometimes legal) standards of care and are influenced unduly by "expert" dogma. When recommendations are supported largely by opinion instead of evidence, we have the very problem CPGs were supposed to solve! The sheer number of guidelines, and their many revisions, cause the most important recommendations to be lost in a blizzard of minor, self-evident ones.
One example relevant to primary care involves the 2002 and 2007 ACC/AHA CPGs on perioperative evaluation of patients scheduled for elective noncardiac surgery. An analysis of these guidelines showed substantial inconsistencies between the cited evidence and the associated recommendations. For example, the guideline writers note mixed support for stress testing and beta-blocker therapy, yet their algorithm suggests these interventions for many patients. The result is a body of unclear recommendations, unsupported by evidence, that might lead clinicians to initiate interventions "just to be safe."
In another recent essay, the authors call for several reforms of the CPG process: substantial changes in leadership and membership from one version of a CPG to the next; posting of guideline drafts for both broad public review and structured scientific review; detailed reporting of conflicts of interest for both panel members and sponsoring professional associations; and centralization of CPG development under a government agency (such as the Agency for Healthcare Research and Quality). All in all, the process of creating and using CPGs needs a major overhaul.
Thomas L. Schwenk, MD
Published in Journal Watch General Medicine March 10, 2009
- Tricoci P et al. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA 2009 Feb 25; 301:831.
- Shaneyfelt TM and Centor RM. Reassessment of clinical practice guidelines: Go gently into that good night. JAMA 2009 Feb 25; 301:868.