Do You Understand that Adherence Statistic?

Measuring adherence is fraught with difficulties, explains Katrina Firlik.



It remains unclear who first said, “There are three kinds of lies: lies, damned lies, and statistics,” but it was apparently Mark Twain who popularized the saying.

Regardless, it serves as a good reminder - albeit perhaps a bit exaggerated - that we have to be careful about numbers. If we always take them at face value, without questioning or even seeking out the methods behind them, we’re bound to be confused or even deceived by them from time to time.

Here’s one simple case in point. The study of medication adherence involves a good deal of numbers, often percentages or ratios, as in medication possession ratio (MPR) or proportion of days covered (PDC). These statistics basically reflect how often, over the course of typically one year, a patient has medication available to take, based on refill records.
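For readers who like to see the arithmetic spelled out, here is a rough sketch of a PDC calculation over made-up refill records. Real studies handle overlapping fills, drug switches, and hospital stays far more carefully; this is only meant to show the basic idea of "days covered divided by days in the window."

```python
# Minimal sketch of proportion of days covered (PDC).
# Refill records are hypothetical (fill_date, days_supply) pairs.
from datetime import date, timedelta

def pdc(refills, start, end):
    """Fraction of days in [start, end] on which medication was available."""
    covered = set()
    for fill_date, days_supply in refills:
        for i in range(days_supply):
            day = fill_date + timedelta(days=i)
            if start <= day <= end:
                covered.add(day)
    total_days = (end - start).days + 1
    return len(covered) / total_days

# Three 30-day fills over a 120-day window: 90 of 120 days covered.
refills = [(date(2023, 1, 1), 30), (date(2023, 2, 15), 30), (date(2023, 4, 1), 30)]
print(round(pdc(refills, date(2023, 1, 1), date(2023, 4, 30)), 2))  # 0.75
```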

Although a single data point may be relatively straightforward on its own, comparing MPR or PDC data across study groups, or pre- vs. post-intervention, presents a common trap in how the numbers are communicated, because it is often a comparison of percentages.

Let’s say that a control group has a PDC of 60% and the intervention group has a PDC of 75%. That represents a difference of 15 percentage points, not 15%. Expressed as a percentage, the difference is actually 25% (15 divided by 60). In comparing percentages, this percentage point vs. percentage terminology is often confused, with the effect of downplaying the differences between groups. The correct 25% difference is more impressive than the incorrect 15% difference.
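The distinction is easy to get right once you see the two calculations side by side. Using the numbers from the example above:

```python
# Percentage-point difference vs. relative (percentage) difference,
# using the 60% vs. 75% PDC figures from the text.
control_pdc = 0.60
intervention_pdc = 0.75

point_diff = (intervention_pdc - control_pdc) * 100                    # percentage points
relative_diff = (intervention_pdc - control_pdc) / control_pdc * 100   # percent

print(f"{point_diff:.0f} percentage points")      # 15 percentage points
print(f"{relative_diff:.0f}% relative increase")  # 25% relative increase
```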


The measurement of medication persistence can present an even more confusing trap. Let’s say you want to understand how long patients tend to stick with a hypertension medication. There are two ways to measure and communicate persistence. For both, the cohort studied is typically patients who are new to therapy, which represents perhaps 10% of the population of patients on chronic medications. The rest of the medication-taking population would be considered more established, and it wouldn’t make sense to lump a patient on month 85 of therapy with a patient starting out on month 1.

One way to measure persistence is to take a population of new patients and gauge at the end of 12 months how many are still refilling (after deciding upon a specific “gap period” reflecting the number of days late to refill, which could be 30, 60, or even 90). Then, you would express the statistic as a percentage, such as “The persistence rate of patients on an anti-hypertensive medication was 60% at 12 months.” In other words, 40% stopped refilling during the course of the year. (Which raises the question: are drug switches counted as nonpersistence, or not? This is important to understand for any study, as a patient who switched from drug A to drug B is still persistent with therapy, but is nonpersistent with drug A.)
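This gap-period logic can be sketched in a few lines. The refill records and the 60-day gap below are made-up assumptions, and this simple version ignores drug switches entirely, which, as noted above, a real study must decide how to handle.

```python
# Sketch of 12-month persistence using a chosen gap period:
# a patient is nonpersistent once any refill is more than gap_days late.
from datetime import date, timedelta

def is_persistent(refills, gap_days, end):
    """refills: date-sorted list of (fill_date, days_supply) pairs."""
    for (fill, supply), (next_fill, _) in zip(refills, refills[1:]):
        if (next_fill - (fill + timedelta(days=supply))).days > gap_days:
            return False  # refill came too late: a gap longer than allowed
    last_fill, last_supply = refills[-1]
    # Still persistent at `end` only if the final supply ran out within the gap window.
    return (end - (last_fill + timedelta(days=last_supply))).days <= gap_days

# Hypothetical patients: one refills monthly all year, one stops after June.
persistent = [(date(2023, m, 1), 30) for m in range(1, 13)]
stopped = [(date(2023, m, 1), 30) for m in range(1, 7)]
print(is_persistent(persistent, 60, date(2023, 12, 31)))  # True
print(is_persistent(stopped, 60, date(2023, 12, 31)))     # False
```

The population-level statistic is then simply the fraction of patients for whom this returns True at 12 months.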

The second way to measure persistence attempts to gauge mean number of months on therapy per patient, also based on pharmacy refill records. This one is particularly fraught with difficulties given the limitations of most databases. Patients often switch pharmacies and insurers (in the U.S. at least) such that it is nearly impossible to track the most adherent patients who do stick with their hypertension medications for decades. We all know people (maybe even ourselves) who have been on therapy for a chronic condition for years or even decades. Such real-world longer-term persistence typically remains unaccounted for in adherence studies.

Instead, it’s methodologically simpler to limit a study to 1 year, or maybe 2 years, and measure mean months on therapy over that limited time period. If the conclusion of such a study then reads something like “The mean persistence for patients on a hypertension medication is 6 months,” it’s critical to understand that the time period studied was only 12 months. It’s not a measure of real-world persistence, per se, because the maximum was artificially capped at 12 months.
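A tiny worked example, with entirely made-up numbers, shows how severely the cap can pull the mean down when some patients stay on therapy for years:

```python
# Made-up months on therapy for five hypothetical patients.
# A 12-month study window counts long-term patients as 12 months at most.
true_months = [3, 8, 24, 60, 120]
capped = [min(m, 12) for m in true_months]

true_mean = sum(true_months) / len(true_months)
capped_mean = sum(capped) / len(capped)
print(true_mean)    # 43.0  (actual mean months on therapy)
print(capped_mean)  # 9.4   (mean as measured within a 12-month window)
```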

And finally, I’ll add another word about terminology. Many factors affect clinical outcomes, medication adherence being just one of them, alongside diet, exercise, avoidance of smoking, family history, demographics, adherence to medical follow-up and testing, and others. It’s therefore usually most accurate to conclude that greater medication adherence is “associated” with better outcomes, as opposed to having “caused” them. I do believe that there typically is a causal relationship, but that’s hard to prove unless you control for those other variables.


If you’re interested in reading more on this topic, download our latest white paper “Medication Adherence: Tips and Traps in Understanding the Literature” here: https://healthprize.com/whitepapers/tips-and-traps-in-understanding-the-literature-2/