In statistics there’s something known as the bell curve. Values that are distributed normally tend to cluster at the center, tapering away to outliers on either end. If you plot them on a graph, the curve assumes a bell shape, sometimes flat and sometimes steep, depending on what you’re measuring and the data you’re using.
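For readers who like to see this rather than take it on faith, here is a minimal sketch using Python's standard library. The mean of 50 and standard deviation of 10 are arbitrary illustrative choices, not real patient data.

```python
import random
from collections import Counter

random.seed(42)

# Draw 10,000 values from a normal distribution (mean 50, std dev 10).
# These numbers are purely illustrative.
values = [random.gauss(50, 10) for _ in range(10_000)]

# Bucket the values into bins of width 10 and count each bin.
bins = Counter(int(v // 10) * 10 for v in values)

# Most values cluster in the two center bins (40-49 and 50-59);
# the far tails (below 20 or at 80 and above) hold only a handful of outliers.
center = bins[40] + bins[50]
tails = sum(n for b, n in bins.items() if b < 20 or b >= 80)
print(center, tails)
```

Roughly two-thirds of the draws land in the two center bins, while the extreme tails collect only a fraction of a percent, which is the clustering-and-tapering shape described above.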
Health care often relies on principles similar to the bell curve for making diagnoses, managing diseases and recommending screenings and vaccinations. A particular set of symptoms, for instance, will usually suggest one or more probabilities for what the diagnosis might be. Some diseases are more likely to be found among children; others, such as heart disease, tend to occur among adults.
Real life, of course, isn’t always this predictable. Common diseases sometimes present themselves in uncommon ways, or proceed down a path that isn’t typical. Not all patients can be treated the same. The challenge for the physician is to be aware of the outliers yet not lose sight of the most likely probabilities.
Figuring out this balance between what’s right for the group and what’s right for individuals has never been easy, a fact that has been hammered home the past couple of weeks in the wake of the U.S. Preventive Services Task Force’s controversial new recommendation to offer fewer mammograms, especially to women in their 40s. The task force’s epidemiology was sound; after all, breast cancer is statistically most common among women in the 50- to 70-year-old age group. But how should we account for women on either end of this particular bell curve – women older than 70 and women in their 40s and younger? Where do they fit into this picture?
I’m not sure this is a question that epidemiology is equipped to answer. The fact that younger women are not the majority when it comes to breast cancer doesn’t mean their needs can be brushed aside. Indeed, breast cancer often is more aggressive in this age group, something that isn’t always reflected when large amounts of data are compiled and analyzed. It’s one of the dangers of statistical analysis: The sheer numbers can obscure critical differences among subgroups and lead to conclusions that are overly broad.
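A toy calculation can make this masking effect concrete. The group sizes and per-group benefit figures below are invented solely for illustration; they are not real breast cancer statistics.

```python
# Hypothetical screening-benefit numbers, invented purely for illustration.
# Format: age group -> (number of patients, benefit per 1,000 screened).
subgroups = {
    "40-49": (1_000, 0.5),
    "50-59": (8_000, 2.0),
    "60-69": (8_000, 2.1),
}

# Pooled benefit across all patients, weighted by group size.
total_patients = sum(n for n, _ in subgroups.values())
pooled = sum(n * benefit for n, benefit in subgroups.values()) / total_patients

# The pooled figure sits near the large groups' values and says almost
# nothing about the small 40-49 subgroup, whose benefit is far lower.
print(f"pooled benefit: {pooled:.2f}")
print(f"40-49 benefit:  {subgroups['40-49'][1]:.2f}")
```

Because the pooled average is dominated by the two large groups, it lands near 2.0 even though the smallest subgroup's figure is a quarter of that, which is exactly how a headline statistic can paper over a subgroup that behaves differently.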
Did the USPSTF fail to account for the bell curve? Plenty of critics think the task force totally missed the boat.
One of the accusations has been that the task force’s analysis was too limited. Its study focused primarily on women considered at average risk of breast cancer. Rather than simply looking at whether screening helps with early detection of breast cancer, the panel examined whether it leads to fewer deaths. The task force also was selective about the existing studies it reviewed, which could well have influenced the conclusions that were reached. If another group designed a slightly different analysis, the results might be different too.
Another criticism is that the recommendations are based on the use of film mammography, an older technology that is increasingly being replaced with digital mammography. If you read the task force’s clinical summary, however, it’s clear that both digital mammography and MRI were reviewed for their effectiveness. The conclusion was that there’s insufficient evidence to show these two technologies are better overall at detecting cancer than film imaging. The task force noted that digital mammography appears to be "somewhat better" for younger women or women with dense breast tissue. MRI appears to be more effective among women at higher risk of getting breast cancer. The downside: Both technologies are more expensive, and they’re more likely to lead to false positive findings and possibly overdiagnosis.
Other organizations are doing their own analysis of how the USPSTF reached its conclusions. No doubt we’ll be hearing more about this issue. None of this is etched in granite, after all, and our perspectives on the benefits of mammography will likely continue to evolve as more data are accumulated.
For what it’s worth, I don’t think the USPSTF deserves the bashing it has received over the new mammography guidelines. This is a nonpartisan group with considerable credibility. Its recommendations have generally been viewed as the gold standard in clinical practice. In many respects the panel is even conservative – careful to weigh the evidence and consider the existing science. Whether you agree with the panel’s conclusions or not, it took guts to ask important and tough questions about the benefits of screening for breast cancer. It’s safe to say we in the United States spend millions of dollars each year on mammograms – more than any other industrialized nation. As politically unpopular as it may sound, we need to be asking ourselves whether it has made us healthier or given us better outcomes.
Maybe this is part of the problem. Pointy-headed academic science has a way of colliding with real life. What we see and what we experience aren’t always explained by the statistics. The bell curve might illustrate the epidemiological probabilities but it doesn’t necessarily tell us how this is supposed to apply to individuals.
There’s a balance somewhere in here between being driven solely by the scientific evidence vs. being ruled by emotion and anecdote. I’m not sure where it is but we do need to find it and bring the discussion back to a more rational plane.