UNC Chapel Hill Evaluates CDC Opioid Prescribing Guidelines


While reviewing NIH funding for research into chronic, long-term pain, I found that this grant was the only funded research attempting to evaluate the impact of the 2016 CDC guidelines for chronic pain.

Below are excerpts taken from the study, edited for presentation to the non-medical readers of this blog. The content is still fairly dry for those not interested in facts and figures, but if you want specific details on why the CDC guidelines have done as much harm as good, I encourage you to continue reading.

There is a growing consensus in the U.S. that these guidelines have been adopted too rapidly by state medical boards, hospitals, and clinics, leading to unintended adverse events, e.g. forced tapering and forced abrupt termination, which in turn cause both suffering and suicide in chronic, long-term pain patients. Concerns have also been raised with regard to inappropriate limitations placed on the treatment of acute pain from injuries and elective surgical procedures.

This review of the guidelines also underscores why medical research must be peer reviewed by other experts: to ensure government policy is based on rational conclusions that do not adversely impact target populations with inappropriate regulations.

The reader should note that this is not the first time the CDC has published medical guidelines with a large-scale negative impact on U.S. populations. Even in light of the growing opioid crisis, the decision to introduce guidelines that were inadequately peer reviewed or validated, especially ones that impact otherwise healthy populations, is an offense some would label not only irresponsible but reckless.

October 2, 2018

The NIH awarded a three-year, $2 million research grant to the University of North Carolina at Chapel Hill’s Injury Prevention Research Center. 

Dr. Shabbar Ranapurwala

Shabbar I. Ranapurwala, PhD, a core faculty member and assistant professor, leads the study.


The grant allows examination of the impact of North Carolina mandates on physicians’ prescribing behaviors and patient health outcomes. 


Study Methods

Motivated by the need to strengthen the evidence base behind government policies, this review details limitations of the opioid safety studies cited in the CDC guidelines, with a focus on the methods used.

Study Results

Internal validity concerns related to poor selection of data sources, variable misclassification, selection bias, unaddressed competing risks, and potential competing interventions. External validity concerns arose from the use of limited source populations, outdated or incomplete historical data, and the use of cancer and acute pain data to evaluate chronic pain conditions.

Study Conclusion

A larger population base, i.e. big-data evidence, is needed to aid any future revisions of the CDC guidelines, along with better statistical methods to eliminate disparate groupings of subjects and to better identify cause-and-effect outcomes.

Key Criticisms

  • State prescribing guidelines for chronic noncancer pain have been developed using these guidelines and are being adopted nationwide to mitigate the growing incidence of opioid use disorders and overdose deaths, [ignoring chronic pain outcomes].
  • These guidelines have to rely on the current evidence base for prescription opioid safety, which includes studies with multiple internal and external validity‐related limitations.
    • [Or in other words, there is insufficient data on the use of opiates in treating chronic pain for drawing conclusions on long term risks and benefits].
  • Utilization of “big data” resources, superior computing power, and employment of advanced epidemiological and statistical methods are needed to strengthen the opioid safety evidence base.
    • [Or, larger study populations and better epidemiological and statistical methods are needed to strengthen evidence-based government policy].

Detailed Criticisms

Resolute efforts by policy makers, public health, and law enforcement to stem the opioid overdose epidemic have failed. While prescription overdose deaths have declined, overall opioid overdose deaths continue to rise, with heroin and illicitly manufactured fentanyl as the leading causes.

The CDC guidelines had swift uptake, with state licensing boards adopting them as the standard of care. The guidelines are one of the most prominent initiatives by a federal agency and have strong potential for limiting opioid prescribing while maintaining appropriate pain management. However, the guidelines have been subject to criticism. [1], [2], [3] Notably, it has been argued that the guidelines are based on limited evidence, [1], [2], [3] as acknowledged in the guidelines themselves (grade 3 and 4 evidence). Additionally, the majority of studies cited in the guidelines are limited to the association between opioid prescribing and overdose deaths.

The guidelines note, however, that while preventing overdose death is paramount, guidance aimed at preventing earlier outcomes, like opioid use disorders (OUDs), is equally important. Despite these limitations in the evidence base, the escalating magnitude of the opioid crisis required strong federal action. Therefore, the guidelines were developed by experienced pain medicine physicians and scientists by leveraging pragmatic pain management approaches and the best possible interpretations of the literature available at the time.

The guidelines’ emphasis on chronic noncancer pain reflects concerns that most non-medical use of opioid analgesics occurs in the chronic pain population, and that many primary care providers feel inadequately prepared to manage chronic pain while minimizing OUD (opioid use disorder) and overdose risks.

However, in doing so, the guidelines were forced to omit cancer pain patients and only made a brief note that 3 to 7 days of opioids might suffice for most acute and post-surgical pain. Moreover, the absence of evidence based on specific clinical sub-populations (eg, women, minorities, acute trauma, and elective surgery) or specific opioid formulations, meant that the guidelines adopted a one‐size‐fits‐all approach for the many pain‐inducing conditions, regardless of pain etiology and biologic variation among patient sub-populations.

Notably, there is a lack of data on effective noncancer pain management among African‐Americans, which is concerning given mixed evidence on racial differences in pain and prescribing. These critical research gaps need to be addressed if prescribing behavior is to become more evidence‐based.

Threats to internal validity

A major internal validity issue in research is lack of exchangeability. This refers to the imbalance of potential [cause and effect elements] between exposure groups. Lack of exchangeability gives rise to confounding [factors that distort the apparent relationship between cause and effect] or selection bias. It is typically of minimal concern in large, well-conducted randomized controlled trials (RCTs). However, it is a concern in small-sample RCTs with selective withdrawals, as was seen in 3 of the 6 RCTs.

Lack of exchangeability is a major concern in observational studies and requires the use of statistical methods to be addressed. This poses several challenges for opioid safety studies.
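
A toy example may make “exchangeability” more concrete. The minimal simulation below is my own illustration, not part of the study, and every number in it is invented: a hypothetical, unmeasured “prior substance use history” flag makes both high-dose prescribing and overdose more likely, so a naive comparison of exposed and unexposed patients overstates the harm of prescribing, while stratifying on the flag recovers something close to the assumed true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical unmeasured confounder: prior substance use history.
history = rng.random(n) < 0.20

# The confounder raises both the chance of high-dose prescribing (exposure)
# and the baseline risk of overdose (outcome) -- invented numbers.
exposed = rng.random(n) < np.where(history, 0.50, 0.20)
baseline_risk = np.where(history, 0.020, 0.002)

true_rr = 1.5  # assumed true effect of exposure on overdose risk
overdose = rng.random(n) < baseline_risk * np.where(exposed, true_rr, 1.0)

def risk_ratio(subgroup):
    """Risk ratio of overdose, exposed vs unexposed, within a subgroup."""
    return overdose[subgroup & exposed].mean() / overdose[subgroup & ~exposed].mean()

crude = risk_ratio(np.ones(n, dtype=bool))                       # ignores the confounder
adjusted = np.mean([risk_ratio(history), risk_ratio(~history)])  # simple stratified average

print(f"assumed true risk ratio: {true_rr:.2f}")
print(f"crude (confounded)     : {crude:.2f}")     # biased away from the truth
print(f"history-adjusted       : {adjusted:.2f}")  # close to the truth
```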

  • First, depending on the data source used, there may be a lack of confounder information [information on other factors that could explain both the prescribing and the outcome]. For example, prescription monitoring program (PMP) and death records may not include diagnostic and substance use disorder histories. While linkage of these 2 sources allows for good exposure assessment and for examining associations between opioid dispensing and overdose deaths, it does not allow us to establish causal relationships between prescription opioids and opioid safety outcomes because of inadequate cause and effect control.
  • Second, even when [cause and effect information] is available from sources like VHA, claims, and EHR [electronic health records], failing to identify and control for all appropriate [cause and effect conditions] can lead to biased effect estimates. However, several of the observational studies cited in the guidelines failed to account for some or all of these well‐known [cause and effect elements].
  • Third, in some of the reviewed studies, investigators controlled for multiple [cause and effect elements] and interpreted all coefficient estimates from the single multivariable model as causal effects. This results in the so-called “table 2 fallacy,” a term that refers to improper interpretations of effect estimates and potential selection bias.
  • Fourth, most studies failed to account for time‐varying opioid exposure and [cause and effect] by indication (eg, patient selection for abuse‐deterrent formulations).

In addition to [cause and effect], measurement error due to misclassification of outcome, exposure, or covariates can also lead to a lack of internal validity. For example, researchers frequently express concern that ICD‐9/10 codes for opioid dependence and abuse lack sensitivity and underestimate opioid use disorder. Claims data also do not capture out‐of‐pocket prescriptions; not all prescribed medications are filled, and not all filled medications are consumed (all leading to exposure and outcome misclassification).

Other measurement error issues include treating all opioid medications as equal in morphine equivalents, without consideration of specific formulations. Such information is beneficial to regulators like the US Food and Drug Administration. Similarly, inconsistencies in correctly identifying the substance involved in overdose deaths may lead to outcome misclassification. For example, heroin rapidly metabolizes to morphine, potentially rendering the overdose indistinguishable from prescription opioid overdoses. Although these limitations exist, they are infrequently discussed and not addressed in the cited studies.
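
To make “equal in morphine equivalents” concrete for non-medical readers, here is a minimal sketch of how studies collapse different drugs into a single daily morphine milligram equivalent (MME) number. The conversion factors are the commonly published CDC oral values and the prescriptions are invented; note that the MME total says nothing about formulation (extended- vs immediate-release) or dosing pattern, which is exactly the information the critique says is lost.

```python
# Commonly published CDC oral MME conversion factors (illustrative subset).
MME_FACTOR = {
    "morphine": 1.0,
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
    "hydromorphone": 4.0,
    "codeine": 0.15,
}

def daily_mme(drug: str, mg_per_dose: float, doses_per_day: int) -> float:
    """Convert a prescription to morphine milligram equivalents per day."""
    return mg_per_dose * doses_per_day * MME_FACTOR[drug]

# Two hypothetical prescriptions that land on the same daily MME
# even though the drug, formulation, and dosing pattern differ.
print(daily_mme("oxycodone", 10, 4))  # 60.0 MME/day
print(daily_mme("morphine", 30, 2))   # 60.0 MME/day
```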

An additional source of bias in both observational and randomized studies is selection bias. For example, many observational studies used prevalent opioid user designs (i.e. enrolling patients already on long-term opioid therapy alongside new users), as opposed to study designs in which the study population was limited to opioid-naive patients (new opioid initiators). The use of prevalent opioid user designs is likely to introduce survival bias: the inclusion of individuals who did not have severe adverse events (eg, fatal overdose) from initial opioid exposures. This can lead to under‐ascertainment of adverse events that occur early in opioid therapy. Additionally, selection bias can result from differential withdrawal or dropout from the study.
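
The survival-bias point is easier to see with numbers. The simulation below is my own illustration with invented risks, not data from the study: severe adverse events are assumed to cluster in the first 30 days of therapy, so a design that enrolls only “prevalent” users who have already survived six months on opioids reports a far lower one-year event rate than a new-user design that follows everyone from day 0.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
follow_up_days = 365

# Assumed hazard of a severe adverse event: elevated in the first 30 days
# of therapy, much lower afterwards (purely illustrative numbers).
early_daily_risk, late_daily_risk = 0.0010, 0.0001

# Simulate the day of the first severe event for each new opioid initiator.
early = rng.random((n, 30)) < early_daily_risk
late = rng.random((n, follow_up_days - 30)) < late_daily_risk
days = np.concatenate([early, late], axis=1)
has_event = days.any(axis=1)
event_day = np.where(has_event, days.argmax(axis=1), np.inf)

# New-user design: everyone is followed from the first prescription.
new_user_rate = has_event.mean()

# Prevalent-user design: only people still event-free at day 180 are enrolled,
# and only events after enrollment are counted.
enrolled = event_day >= 180
prevalent_rate = (has_event & enrolled).mean() / enrolled.mean()

print(f"event risk, new-user design      : {new_user_rate:.3%}")
print(f"event risk, prevalent-user design: {prevalent_rate:.3%}")  # misses early events
```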

Lastly, competing interventions and competing outcomes (risk competition) may lead to a lack of internal validity. Competing interventions (e.g. policy changes or naloxone access) may predict both the receipt of opioid prescriptions and opioid safety outcomes, [confusing] the relationship between opioid prescribing and opioid safety outcomes. On the other hand, not accounting for competing risks (i.e. death due to other causes) may introduce immortal person‐time into some studies (time during which the outcome cannot occur), leading to underestimation of overall risk measures.

References

[1] Pergolizzi JV Jr, Raffa RB, LeQuang JA. The Centers for Disease Control and Prevention opioid guidelines: potential for unintended consequences and will they be abused? J Clin Pharm Ther. 2016;41(6):592-593.

[2] Webster LR. Chronic pain and the opioid conundrum. Anesthesiology Clin. 2016;34(2):341-355.

[3] Chou R, Deyo R, Devine B, Hansen R, Sullivan S, Jarvik JG, Blazina I, Dana T, Bougatsos C, Turner J. The effectiveness and risks of long-term opioid treatment of chronic pain. Evidence Report/Technology Assessment No. 218.

Threats to external validity

Lack of generalizability may result from different data generating mechanisms or source populations. For example, 7 studies cited in the guidelines utilized data from 1 specific hospital. Such data are only generalizable to the specific geographic area and the practices of physicians and staff at that hospital. Similarly, data generated from the VHA, a single state, or another country, as used in 21 studies cited in the guidelines, might not generalize to the broader US population. Moreover, studies with small samples, even from nationwide sources, may not represent the source population from which the sample arises.

Outdated or historical data, when not used along with current data, also threaten external validity. Some studies cited in the CDC guidelines included data from the 1980s and 1990s, the majority utilized data up to 2010, and only a few used data after 2010, but none went beyond 2012. As the opioid epidemic is rapidly changing, more recent data would improve generalizability to current conditions, thereby improving the effectiveness of the guidelines.

Finally, exclusion of cancer pain patients, or inclusion of cancer pain patients but reporting 1 summary estimate for the whole population, may reduce generalizability. Cancer pain is generally long lasting and often requires high doses of opioid analgesics to be relieved. The exclusion of these patients means results can only be generalized to noncancer pain patients, eliminating a large and important segment of the opioid‐using population. On the other hand, some studies included both cancer and noncancer pain patients but only presented overall effect estimates. 

Doing so may reduce generalizability to both cancer as well as noncancer pain patients because of potential heterogeneity of effects between the 2 groups. The overall summary estimate, even if unbiased for the entire sample, may be too large or too small for either cancer or noncancer patients. Similarly, pooling estimates from opioid patients suffering with distinct chronic or acute pain conditions may limit generalizability.

Furthermore, the practice of identifying and excluding cancer patients varies between studies. For example, some studies included skin cancer patients but excluded patients with other types of cancer. Previous studies have shown that White race is associated with a higher incidence of skin cancer, pain, receiving opioids, and experiencing overdose. Therefore, selective inclusion of skin cancers (or selective exclusion of other cancers) could lead to selection bias, threatening both internal and external validity.

Discussion and Recommendations

The Centers for Disease Control and Prevention’s opioid prescribing guidelines constitute a logical and timely response by a federal agency to a rapidly escalating public health crisis. However, we identified several internal and external validity concerns in the opioid safety evidence base which, if addressed in subsequent studies, may allow future guidelines to strike an ideal balance between limiting overprescribing and providing appropriate pain management.

Additionally, variable misclassification, selection bias due to prevalent user designs, and competing risks threatened internal validity among these studies. Limited source populations, timeliness of data, and issues in the handling of cancer pain patient data were the most common threats to external validity. These limitations are persistent in the literature. We reviewed 6 additional opioid safety studies that were published after the CDC guidelines (until December 31, 2016) and 1 study previously published but not cited in the guidelines; these studies shared similar limitations. The study offers the following recommendations:

  1. Utilize large data resources from multiple states to increase generalizability.
  2. Link multiple data sources to harness detailed covariate information. 
  3. Use of directed acyclic graphs (DAGs) can help researchers distinguish between causal and non-causal pathways between an exposure (opioid prescribing) and outcome (OUD, overdose, and overdose death), including identification of non-causal pathways due to confounding. 
  4. Utilize longitudinal study designs and modern epidemiologic and analytic methods to examine the causal effect of opioids on OUDs in observational data. 
  5. Consider examining effect measure modification or even biologic interaction due to cancer rather than excluding cancer pain patients or pooling them with noncancer pain patients. This type of research could then inform the need (or lack thereof) for different guidelines for cancer pain and chronic noncancer pain patients. 
  6. Conduct validation studies to quantify the extent of exposure and outcome misclassification in opioid safety studies, especially for claims and EHR data sources. 
  7. Use epidemiologic tools like sensitivity analyses or quantitative bias analyses to examine the level of unmeasured bias involved in the generated evidence and its impact on effect estimates. 
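
As one illustration of recommendation 7, here is a minimal sketch (my own, not from the study) of a simple quantitative bias analysis for outcome misclassification. The counts are made up and the sensitivity and specificity of ICD-coded opioid use disorder are assumed, hypothetical values; the standard back-correction formula recovers the counts that would have been observed with perfect measurement and shows how much an effect estimate can shift.

```python
def corrected_count(observed_cases: float, group_size: float,
                    sensitivity: float, specificity: float) -> float:
    """Back-correct an observed case count for non-differential outcome
    misclassification: A_true = (A_obs - (1 - Sp) * N) / (Se + Sp - 1)."""
    return (observed_cases - (1 - specificity) * group_size) / (sensitivity + specificity - 1)

# Made-up observed data: coded OUD among high-dose vs low-dose opioid recipients.
exposed_cases, exposed_n = 300, 10_000
unexposed_cases, unexposed_n = 150, 10_000

# Assumed (hypothetical) accuracy of ICD-coded OUD: misses many true cases.
se, sp = 0.60, 0.99

observed_rr = (exposed_cases / exposed_n) / (unexposed_cases / unexposed_n)

corr_exposed = corrected_count(exposed_cases, exposed_n, se, sp)
corr_unexposed = corrected_count(unexposed_cases, unexposed_n, se, sp)
corrected_rr = (corr_exposed / exposed_n) / (corr_unexposed / unexposed_n)

print(f"observed risk ratio : {observed_rr:.2f}")   # biased toward the null
print(f"corrected risk ratio: {corrected_rr:.2f}")  # after accounting for misclassification
```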

My conclusions from this review:

  • Lack of NIH funding for research to discover the risks and benefits of using opiates to treat long-term non-cancer pain is a major reason why experts are unable to make rational guidelines for policy makers.
  • Assuming research funding is initiated and outcomes are identified, education for prescribers will still be required. Therefore, regulations should also include mandatory continuing education for prescribers, combined with regulated standards of care.
  • The assumption that the majority of non-medical abuse of prescription opiates occurs in the chronic pain community is a false narrative; the data for it was, and is, inconclusive. While not an impossible assumption prior to 2015, data since then strongly suggest otherwise, leading to the rational conclusion that other explanations must exist.
  • New data since 2015 support a narrative that non-medical use was occurring because of overprescribing for a variety of medical conditions, not just chronic pain.
    • Or that relaxed prescribing sentiment among doctors played a major role regardless of the underlying condition; it’s reasonable to assume both are true.
  • Big data studies using closed insurance claims could help answer the questions around physician prescribing practices.
