The snack habit

I’ve noticed it for some time, in a vaguely-paying-attention sort of way. But it wasn’t until I saw a news article in the Star Tribune of Minneapolis this weekend that it really hit home: For better or worse, snacking has become a mainstream American way of eating.

From the article:

Two studies from 2010 by University of North Carolina researchers looked at snacking trends between 1977 and 2006 and found that children now eat three snacks a day and adults snack twice a day. That is one additional snack for each group compared with 30 years ago.

“Three square meals don’t exist anymore,” said Larry Finkel, director of food and beverage research at Packaged Facts, a publishing company that focuses on consumer product research. “Meals and snacks have become all blurred together.” The authors of the report concluded that “our children are moving toward constantly eating.”

What is up with all the nibbling between meals? More to the point, how did we get this way?

Finkel attributes it to the American on-the-run lifestyle, which has led to the decline of structured mealtimes. Instead of something families sit down to together, dinner has become what you eat in the car on the way to work or soccer practice, he said.

Once you start noticing, the signs are everywhere that snacking is an eating behavior that’s widely accepted and even expected. Manufacturers have developed hundreds of “snack-sized” products meant to be eaten at your desk or on the run. Restaurants are open late into the night or, increasingly, 24 hours a day. Few kids’ events are complete anymore without a snack of some kind. Even diet plans now include recommendations for morning and afternoon snacks, on the assumption that this is part of people’s daily routine.

In an indication of how entrenched the snack habit has become, a study that appeared in Health Affairs in 2010 found that nearly one-third of the daily calories consumed by American children now come from snacks.

It all raises the question: Is this a trend we should be welcoming?

It’s hard to say, in part because there seems to be no universally agreed-upon definition of what constitutes a snack. Are snacks a replacement for a full meal or are they something that’s consumed in addition to a meal? How many calories can a snack contain before it stops being a snack and becomes a meal?

There’s some limited evidence that “grazing,” or consuming half a dozen small meals a day, may be more effective for losing weight, curbing hunger and controlling blood sugar levels than the traditional three squares a day. But the research findings so far have been somewhat contradictory. At least two studies have found that for those who want to lose weight, what ultimately matters is how many calories they consume, not how often they eat. According to other studies, however, grazing can be beneficial, especially in helping people feel more sated throughout the day and less likely to overeat.

Grazing might also benefit some groups more than others. Athletes and active children often need snacks to replenish the calories they burn. Some studies involving older adults, an age group at risk of malnutrition, have found that large meals can be unappetizing for them and that they fare better on smaller meals throughout the day.

What it seems to come down to is the quality and amount of snacking that takes place. There’s a difference, after all, between snacking on carrot sticks vs. a bag of chips. And there’s a difference between snacks that are part of an overall healthy eating plan vs. snacks that add to one’s daily calorie load. Munching on something a few extra times a day might not seem like much, but the calories can add up in ways that might astonish many people.

Left unanswered in all of this are the social implications of replacing structured mealtimes with grazing and snacks eaten on the run. Are we losing something when we don’t sit down at the table together, or doesn’t this really matter? What happens when the whole world becomes our dining room?

I’m not sure we’ve pondered these questions, and in any case it’s too late. This particular train has already left the station. The challenge, it seems, is how to manage this cultural shift in eating in ways that are healthful rather than toxic.

What’s behind the whooping cough epidemic?

Many people in the western Minnesota communities of Dawson and Boyd must have been surprised when pertussis, or whooping cough, broke out in 1998. I remember it well, especially for the concern it caused. By the time the outbreak was over, more than 30 people, mostly school-aged children, had gotten sick.

From the vantage point of nearly 14 years later, it seems to have been an early warning sign – in our own back yard, no less – of things to come.

An alarming increase in whooping cough cases in the United States has caught everyone’s attention this summer. For those in the public health field, however, there’s nothing new about it. Pertussis incidence has been climbing for several years, and it’s not completely clear why.

What we’re seeing with pertussis seems like a good illustration of how easy it can be to mistakenly assume we’ve eradicated most of the formerly common childhood diseases, such as whooping cough, and how challenging it is to maintain whatever progress has been made.

Many have been quick to blame the anti-vaccine movement for what’s happening. But although this is probably one of the contributing factors, it doesn’t seem to be the whole story.

The real issue could well lie with the vaccine itself. It’s now known that even among those who were vaccinated as children, the protective effects begin to wane by adolescence. Essentially this creates an enormous pool of teens and adults who may have become a reservoir for the pertussis bacteria and are unwittingly aiding in its spread, especially to infants and the very young who aren’t yet fully immunized. (Children need to be at least 2 months old to receive the vaccine, and it takes a series of shots to acquire full protection.)

Questions also have been raised about the vaccine formula. In the late 1990s the formula was changed from a whole-cell preparation of the pertussis bacterium to an acellular form, which contains only purified components of the organism. Did this make it less effective? Did it somehow alter the control of the disease? On the other hand, evidence suggesting the vaccine lacks long-term effectiveness has been around longer than this, so a change in the formula might not adequately explain what’s happening. Although researchers have been exploring these issues for several years, the answers so far have been unclear.

Other research has found that the Bordetella pertussis bacterium may be evolving and perhaps is no longer well matched to the existing vaccine formula.

It was encouraging to hear this week that Affiliated Community Medical Centers here in Willmar has joined a project by the Minnesota Department of Health to increase the surveillance of whooping cough and collect more data. With more information, the public health community is in a better position to address the growing problem of whooping cough.

A couple of other points bear mentioning. First, although teens and adults do get sick from pertussis, their disease tends to be less severe – and as a result, it can go undiagnosed, allowing them to unknowingly spread it to others who might be much more vulnerable. The very young are usually hit the hardest; indeed, half of babies under the age of 1 year who develop pertussis end up being hospitalized.

Second, it’s critical for health providers to keep whooping cough on their radar screen. They may not see it often and they may assume that the vaccine has made it a non-issue. But as we’re learning, whooping cough is still very much present, and a persistent, hacking cough in an adult or severe cough in a child signals the need to look more closely.

(For audio of what whooping cough sounds like, click here. Warning: Some may find it disturbing. Also, keep in mind that the characteristic high-pitched whoop at the end of the cough is not always present.)

Despite the tangle of issues surrounding the effectiveness of the whooping cough vaccine, it’s pretty clear that it’s still far better and safer to get vaccinated – and with the full series – than to skip it. Since the vaccine’s protection wanes by adolescence, teens should get the recommended booster shot. Adults whose last pertussis vaccination was years ago and who’ve never had a booster shot should get one too, especially if they spend any time around young children.

Vaccination may not fully stem the tide, but there’s broad agreement in the public health community that keeping up with the recommended shots will go a long way toward closing some of the gaps in the immunity safety net and starting to bring down those alarming pertussis numbers.

The younger generation: more stressed, less healthy?

They’re in the prime of life, yet younger workers often report feeling more stressed and less healthy than their older counterparts.

And we’re busy worrying about ailing baby boomers swamping the health care system as they age?

This rather interesting finding appears in the recently issued Aflac WorkForces Report, an annual study by the benefits company on trends and attitudes in workplace benefit programs.

It seems to fly in the face of the common belief that younger workers are healthier and more apt to engage in healthy behavior than those who are older. (In the interests of full disclosure, I’m a boomer who’s becoming a teensy bit weary of the generation wars and the implication that my entitled, me-first generation is going to bankrupt the health care system. Says who?)

The survey, conducted earlier this year among more than 6,100 American employees, found that across the board, young workers in their 20s, 30s and early 40s seemed to struggle the most with healthy habits, while those who were older fared the best.

For instance, of all the age groups that participated in the survey, Generation Y employees (ages 18-24) reported exercising the least. About one in three workers over age 65 said they didn’t eat as well as they should. For boomers (ages 45-64), it was 41 percent. But for Gen Y and Generation X (ages 25-44), nearly half confessed to less-than-ideal eating habits.

Here’s another finding: 11 percent of Gen Y’ers and 15 percent of Gen X’ers said they didn’t currently feel healthy, compared with 8 percent of boomers and 9 percent of the oldest workers.

As for stress, Generations X and Y reported the most, at 16 percent each. Eleven percent of boomers and 6 percent of workers over age 65 said they were stressed.

So what does it all mean? Because the survey relied on the participants’ own assessment of their health rather than objective measures of their actual health habits, it might not be entirely accurate. Nor do we know whether the results would be any different if the same survey had been administered a generation or two ago.

Yet this isn’t the first time a piece of research has belied American cultural myths about youth and aging. A survey by the American Heart Association last year found that although many young adults consider themselves healthy, they’re also more apt than middle-aged and older folks to consume fast food and skimp on fresh fruits and vegetables, which could raise their risk for heart disease later in life.

A large study released last year by the U.S. Centers for Disease Control and Prevention found that consumption of sugared soft drinks was most prevalent among younger age groups.

Meanwhile, older adults aren’t necessarily as decrepit and unhappy as they’re often assumed to be. In 2010, Gallup released a poll of more than 340,000 people, ages 18 to 85, who were asked about their health, finances and state of well-being. In a finding that surprised many people (but perhaps shouldn’t have), people reported becoming happier as they grew older, especially at 50 and beyond.

Maybe the real issue here isn’t about which generation is healthier but that we ought not to make assumptions or develop health policies on the basis of age alone.

The facts about drowning

Maybe it’s the early start to the warm season this year, or the hot summer that’s driving everyone to seek relief at lakes and swimming pools. Whatever the reason, Minnesota is seeing a dismaying spike this summer in drowning deaths.

There’s no trend that seems to stand out. The drownings this year have claimed the lives of both children and older adults. They’ve happened in lakes and pools. They’ve involved boaters and swimmers.

There’s been considerable research on the risk factors that contribute to drowning, and it all points to the same conclusion: Many, if not most, drowning deaths can be prevented.

Among the facts:

- Children are the most vulnerable to drowning – and this seems to be true worldwide, not just in the United States. According to the U.S. Centers for Disease Control and Prevention, drowning is the second leading cause of unintentional injury-related death in the U.S. among children ages 1 to 14.

- Among young American children, the majority of drowning deaths happen in swimming pools, not lakes or rivers.

- Most – although not all – drowning victims are male.

- One of the risk factors for drowning is, not surprisingly, exposure to water. Others that most of us are likely aware of include alcohol use, risky behavior and lack of supervision.

- There are less well-known risk factors as well: lower socioeconomic status, less education and rural residency.

- Some demographic groups also seem to be more at risk than others. A study done in Canada a couple of years ago found that new immigrants are more vulnerable to drowning, probably because many of them never learned how to swim. Other studies have found similar issues among African-American children.

An article published this spring in the New England Journal of Medicine concluded that once you adjust for exposure – hours spent swimming vs. hours spent driving – drowning deaths are 200 times more common than deaths from traffic crashes.

Knowing what many of the most influential risk factors are, how can we avoid becoming one of the statistics?

Learning how to swim is at the top of the list. But experts say there are other important steps that can be taken as well.

For parents, keeping an eye on kids in the water and not allowing themselves to be distracted is key, Susan Grundeen recently told the Pioneer Press of St. Paul. “Leave the cellphones, the magazines, the books aside,” said Grundeen, who is the beach safety coordinator for the Three Rivers Park District. “When you’re with that one person, have your eyes on them at all times.”

One of the myths about drowning, abetted by television and the movies, is that people who are drowning will splash around and yell for help. The reality is that drowning oftentimes is silent; indeed, someone who’s going under often isn’t even capable of waving or yelling. Many swimmers who get into trouble aren’t noticed until they’re found submerged in the water.

Water safety experts also point to the importance of swimming where there’s a lifeguard and sticking to beaches and lakes with which you’re familiar.

Finally, don’t assume that good swimmers never drown. Fatigue, cramps and even unexpected injuries or a medical emergency can overwhelm the strongest of swimmers. I’ve heard anecdotally of skilled swimmers who got into trouble because they hyperventilated in order to hold their breath under water and ended up blacking out.

Of all the activities to be enjoyed during the summer, swimming combines some of the best – physical activity, hanging out with friends and family, and a chance to cool off. Rather than being fearful of the water, swimmers can be prudent yet still reap all the benefits.

Of zebras, unicorns and rare events in medicine

By any measure, it’s a horrifying story. Rory Staunton, a 12-year-old from New York, died April 1 after a seemingly minor cut on his arm became infected with Group A streptococcus, leading to septic shock that fatally overwhelmed him.

A New York Times article this week recounts the frightening and ultimately tragic chain of events: how Rory’s parents brought him to the pediatrician, then to the emergency room. Feverish and vomiting, he was thought to be suffering from a stomach virus and dehydration. He was sent home, then brought back to the ER by his parents as his condition steadily worsened.

Abnormal lab results from his first visit to the ER apparently weren’t recognized or acted upon, the Times reported. Nor were they shared with his parents, who were increasingly anxious and worried that no one was hearing their concerns.

This story is about many things: the death of a child. A bereaved family. A health system that seemingly dropped the ball. Lapses in communication. A failure to connect the dots or notice alarming red flags.

I could write about any of these but I’m going to focus instead on something else: the rarities of medicine, those one-in-a-million complications or tragic outcomes for which most of us are never really quite prepared.

There’s a saying in medicine: “When you hear hoofbeats, look for a horse, not a zebra.” In other words, what ails the patient is most likely something common rather than something unusual.

Most of the time this serves the diagnostic process well… except when it doesn’t. Zebras might be uncommon but, unlike unicorns, they aren’t nonexistent – and that means they’re never entirely safe to dismiss. Inevitably there’s the rare patient with an unusual problem or atypical symptoms. These situations challenge clinicians to think outside the box – to consider the common things first but to be aware of the less-common possibilities.

Of all the processes in medicine, diagnosis is among the most complex. It requires the ability to think critically, to consider all the possibilities, to collect and study the evidence, interpret the findings and draw accurate conclusions.

Not surprisingly, there can be missteps along the way. There’s a growing body of research into diagnostic error and why it happens. About two-thirds of these mistakes stem from system vulnerabilities – delays in scheduling procedures, failure to follow up on test results, poor coordination of care, inadequate communication that prevents critical information from being shared, and so on.

But about one-third of diagnostic delays and errors are thought to rest with the clinician’s thought processes – for instance, “anchoring bias,” or locking onto key features of the patient’s symptoms too early in the diagnostic process and failing to adjust when later information suggests something different. To put it another way: convincing yourself that the hoofbeats you’re hearing belong to a horse, despite subsequent evidence to the contrary.

How should health care deal with the fact that sometimes there are zebras in the herd? The New York Times article about the Staunton family has drawn more than 1,000 responses, many of them from physicians whose reactions range from empathy to defensiveness.

“As an ER physician, I can tell you that we see hundreds if not thousands of patients JUST like this every month,” one doctor wrote.

Another doctor commented, “If you look at the medicine involved, I would bet out of thousands of 12-year-olds with this same presentation he would be the only one with this type of outcome. While it is very human to look for blame, sepsis in teens or preteens from a cut without other factors is very very rare. And flu symptoms are very common.”

The tenor of some of the discussion suggests, appallingly, that septic shock was such an unlikely possibility it was reasonable not to consider it.

Therein lies one of the issues. If you tested for every possibility to rule out even the slightest chance of something rare, the system would rapidly be overwhelmed. But if you dismiss rare situations as impossible, you risk missing a diagnosis that could save the patient.

It’s zero comfort for patients and families to be told that, against one-in-a-million odds, they turned out to be the zebra. The standard advice – speak up, ask questions, advocate for yourself – crumbles in the face of unusual diseases or complications that most people couldn’t possibly be expected to know about or anticipate.

There’s a challenge here for health care. Much of the work in patient safety focuses on errors that are common or that represent a trend. It makes sense, since addressing these can have the greatest impact on the largest number of patients. But where does that leave incidents that, by definition, occur too rarely to form a pattern? Should they be viewed as a fluke that couldn’t have been prevented? Or assigned a lower priority on the grounds that they’re simply not very likely to happen? What about the human cost of failing to recognize that sometimes the horse really is a zebra?

It’s not clear what the answer should be. In the meantime, a family is grieving and in emotional shock, and every clinician involved in this case is agonizing over what they could have done differently. The scars are going to last forever.

Update, July 19: The hospital involved in Rory Staunton’s case has announced several major changes in the discharge process for emergency-room patients. A key quote from Dr. Joshua Needleman, a specialist in pediatric pulmonary medicine at Weill Cornell Medical Center, New York: “The big questions are about how to integrate new information that doesn’t fit with the perception you have formed. How to listen to the patient when they are telling you something that doesn’t fit with your internal narrative of the case. Those are the hardest things to do in medicine and yet the most important.”

Rethinking the medical home… again

The medical home model, seen by many as a solution to what ails primary care, continues to receive mixed reviews suggesting it may not be living up to its promise.

The latest dose of reality comes from two new studies, which found that primary care medical homes 1) don’t necessarily save money and 2) don’t increase patient satisfaction.

A little background is in order. The medical home – or, to be more complete, the patient-centered medical home – is a model for improving the delivery of primary care, for which the National Committee for Quality Assurance sets the recognition standards. Its principles are team-based care that’s coordinated, makes effective use of information technology and tracks how the patient is doing over time.

When correctly implemented, it’s supposed to improve patient care, especially for chronic conditions, by ensuring patients don’t fall through the cracks. The model also is designed to make better use of medical resources by assigning responsibility for patient care to a team that includes nurses and mid-level practitioners as well as physicians. At last count, about 4,000 medical practices in the U.S. have adopted this model.

In theory the medical home sounds terrific – the patients win, the staff wins, the practice wins. In reality the picture is less clear.

Take the study that recently appeared in the Journal of the American Medical Association, examining the relationship between quality of care and operating costs at patient-centered medical homes. Researchers at the University of Chicago found that medical homes with higher quality ratings also had higher operating costs – $2.26 more per patient per month, to be exact. That may not sound like much, but it works out to roughly $27 per patient per year; for a health center serving somewhere around 18,000 patients, the extra cost could reach half a million dollars or more.

A couple of caveats are in order. First, this study only involved federally funded health centers, so the results might not apply to a privately owned medical clinic. Second, it focused primarily on cost and so may not have fully captured the relationship between cost and value.

It provides a glimpse, however, of the fiscal dynamics that may underlie the medical home model once it’s implemented. Among policymakers who support the medical home concept, much of the emphasis has been on the cost savings that will result from fewer visits to the emergency room, fewer hospital admissions and so on. While this may save money for the system as a whole, it might not necessarily save money for the primary care clinics that are doing much of the work.

A bigger issue, at least from the public’s point of view – and one I’ve blogged about before – is whether patients like the primary care medical home.

Some of the early results aren’t encouraging. At the handful of demonstration sites that piloted the medical home model, patient satisfaction actually declined. And a new study, published last month in the Health Services Research journal, reached a similar conclusion: When 1,300 patients were surveyed about their experience at practices that had adopted the medical home model, they weren’t more satisfied.

American Medical News offered this take on the findings: Perhaps the medical home model is more about policy wonkery and behind-the-scenes restructuring than about improving the patient’s actual experience of care.

Most of the process “has been focused on talking with researchers and with academics and with clinic executives, and looking to see what makes a clinic effective, what makes the processes efficient and what makes them better able to track patients,” Dr. Robin Clarke, an assistant clinical professor at the University of California, Los Angeles, told American Medical News. “We haven’t spent a lot of time talking to patients about what they perceive to be patient-centered care and what they want to see in a primary care practice.”

Exactly.

Some months ago I was conversing with a health insurance executive when the topic turned to the medical home model. He was enthusiastic about how wonderful it was for patients. But when I pointed out that some surveys showed a decline in patient satisfaction, his response was, “Oh, people just don’t like seeing someone different” – that is, a nurse or physician assistant instead of the doctor.

I can think of many reasons why team care might be problematic for patients, not least because of the potential for poor communication or fragmentation of care. Maybe patients instinctively sense this, or maybe they’ve actually experienced it. Either way, why wouldn’t their perspective matter?

Regardless of whether people know or care about the medical home model, their chances of encountering it are growing. Here in Minnesota, there are now 170 medical practices that are certified as medical homes, providing care for two million people.

If some of the early studies suggest the model isn’t all it’s cracked up to be, it doesn’t necessarily mean the concept is fatally flawed. Perhaps it just takes time to learn from mistakes and allow the model to mature. At the very least, however, we might want to proceed with caution.

Eating gone wild: the unhealthy side of competitive eating

Watching competitive eaters cram dozens of hot dogs down their gullets is both mesmerizing and nausea-inducing.

Apparently few do it better than Joey Chestnut, a 28-year-old from San Jose, Calif. (his nickname is “Jaws”), who won his sixth straight title this week in the Nathan’s Famous Fourth of July International Hot Dog Eating Contest by downing 68 hot dogs in 10 minutes. (The runner-up managed a comparatively paltry 52 hot dogs.)

What is the deal with competitive eating? At one time it was a fringe activity most often found at local fairs and on college campuses. These days it has gone mainstream, with televised contests, cash prizes and competitors who actually train for a shot at stardom.

Somewhere in here is a commentary on America’s disordered cultural attitudes towards food. But that’s fodder for another day. What I want to know is: How on Earth do they do it? And how does the human body tolerate it?

The reaction of most in the health community seems to be: Speed-stuffing yourself with that many hot dogs (or anything else, for that matter; other competitive eating contests involve burritos, oysters, meatballs, jalapeno peppers or pie) is not good for you.

Health experts have been speaking out for some time against the so-called sport of competitive eating. The American Medical Association views it as an unhealthy practice akin to binge eating, with possible long-term consequences.

ABC’s Good Morning America tallied up the excess fat and calories consumed on July Fourth by the Nathan’s Famous hot dog-eating contestants (or “gurgitators,” as they’re often called) and came up with some whopping numbers. For instance, a Nathan’s hot dog contains about 300 calories; by this measure, Chestnut gorged himself on 20,400 calories in just 10 minutes – 10 times the number of daily calories recommended for adult men by the USDA.

Keith-Thomas Ayoob of the Albert Einstein College of Medicine told Good Morning America, “I’m not sure if eating that many hot dogs can damage your blood, but it will probably raise your cholesterol level temporarily. And it puts a strain on your body’s organs to handle that amount of calories, fat and sodium all at once.”

Someone susceptible to high blood pressure who downs such massive amounts of sodium “is really rolling the dice and could end up in the hospital,” Ayoob said.

Here’s what many of us would really like to know: How do the contestants avoid, um, experiencing what’s known as a “reversal of fortune”? Answer: Often they don’t. In most contests, eaters are disqualified if they vomit during the actual competition. Sometimes this is extended through a set time after the contest has ended – say, two minutes. After that, gurgitators may reverse without penalty.

For the record, Chestnut told the media after his chow-down that he felt “good” and was “looking forward to next year already.”

Sonya Thomas, the 100-pound winner of the women’s competition (45 hot dogs), admitted she began feeling sick during the contest but kept going.

For all the criticism of the excess surrounding competitive eating, there’s been surprisingly little actual study of its effects on the human body. In one of the few investigations into how competitive eaters manage to consume so much at one time, a group of researchers at the University of Pennsylvania conducted imaging studies of a speed-eating champion matched with a control subject.

They wrote that the competitive eater’s stomach was extraordinarily adapted to his sport: “Unlike the control subject, the speed eater had markedly altered gastric physiology that enabled his stomach to rapidly accommodate an enormous quantity of ingested food by progressively expanding until it became a giant, flaccid sac occupying most of his upper abdomen.”

Interestingly, he was not overweight – and it seems many of the top-ranked eating contestants are quite trim, despite their sport. He told the researchers, however, that he had lost the ability to feel sated after a meal, and that he followed strict portion control in everyday eating.

It’s unclear what these findings mean for the long-term health of competitive eaters. This was an extremely tiny study, involving just one competitive eater and a single control subject. Nevertheless, the researchers raise some key questions: What happens as competitive eaters get older, perhaps struggle with willpower and begin to engage in chronic binge eating? What’s the long-term impact of having a chronically dilated stomach?

The researchers conclude that speed eating “is a potentially self-destructive form of behavior that over time could lead to morbid obesity, intractable nausea and vomiting, and even the need for gastric surgery.”

So there you have it. Only one hot dog at a time for me, thanks very much.

What price for peace of mind?

The patient is eight years out from treatment for breast cancer and is doing well, but four times a year she insists on a blood test to check her inflammation levels.

The test is pointless and has nothing to do with cancer or its possible recurrence. But what happens when the patient makes the request to the doctor?

“To my shame, I must admit, I order it every time,” writes her oncologist, Dr. James Salwitz (if you haven’t yet discovered his excellent blog, Sunrise Rounds, head over there and check it out).

The test may provide temporary reassurance for the patient. At $18, it isn’t expensive, and it’s considerably less harmful or invasive than other tests. But all the same, it’s useless, and it prompts Dr. Salwitz to ask: What can health care do to stem the practice of tests, procedures and other interventions that have no real benefit to the patient?

He writes:

Medicine is full of better-known examples of useless and wasteful testing. PSA and CA125 cancer markers that fail as screening tests. Analysis indicates they cause more harm than benefit. MRIs for muscular back pain, which will go away by itself. Unneeded EKGs, stress tests and cardiac catheterizations, instead of thoughtful conservative medical management. CT scans often take the place of sound clinical analysis and judgment. A 15-year study of 30.9 million radiology imaging exams published recently shows a tripling in the last 15 years.

These unneeded tests do more than waste dollars. If a test is not necessary and has no medical benefit, it can only cause harm. The test itself can cause problems such as excess radiation exposure, allergic reactions and discomfort. In addition, tests find false positive results, which lead to further useless testing or unneeded treatment.

It’s been rather remarkable to witness the pulling-away from excess tests and treatment that has taken place in American medicine in the past few years. There’s a growing recognition that there’s such a thing as too much intervention and that intervention is not automatically good for patients.

Moreover, we’re becoming more willing to talk openly about the tradeoff of benefit vs. harm. Not all that long ago, it was considered heresy to even suggest that women in their 40s didn’t absolutely need a mammogram every single year. The thinking on this is beginning to change as the evidence accumulates of mammography’s limited benefits for younger, low-risk women, and it’s showing up in patient decisions; a recent study by the Mayo Clinic found a 6 percent decline last year in the number of 40-something women who opted to have a mammogram.

It’s easy to oversimplify this issue, and indeed, it’s not always as straightforward as it seems. Interventions sometimes don’t look useless until they’re viewed afterwards through the retrospectoscope. At the time, in the heat of battle, they may seem necessary and justified. Nor do patients fit neatly into little diagnostic boxes; what may be unnecessary for one might make sense for someone else.

There’s a larger question, though, that we sometimes fail to ask: If something is medically useless, does it still have value if it gives the patient (and perhaps the clinician as well) some peace of mind?

To many patients, this is no small thing. It’s an emotional need that’s not easily met by science-based rational discussion about the studies and the actual evidence for the pros and cons. Unfortunately it’s also often abetted by consumer marketing that plays up the peace-of-mind aspect of certain tests while remaining silent about the limited benefit, the possible risk and the clinical complexity that may be part of the larger picture. The message can be sent that it’s OK as long as it provides the patient with some reassurance, and who’s to say this is entirely wrong?

Should clinicians be tougher about just saying no, then? It may be easier to give in, but does this constitute quality care? An interesting ethics case by the Virtual Mentor program of the American Medical Association explores some of the issues that make this so challenging: the responsibility of the physician to make recommendations and decisions that are clinically appropriate, the importance of respecting the patient’s autonomy and values, the balance between patient preferences and wise use of limited health care resources.

You could argue that patients should be allowed to obtain unnecessary care as long as they pay for it themselves, but does this really address the larger question of resources? Regardless of who’s paying the bill, unnecessary care comes with a cost. The blood test requested by Dr. Salwitz’s patient, for instance, likely would involve the use of equipment such as a disposable needle and lab supplies, staff time to draw the blood, analyze the sample, record the results, report them to the doctor and patient, enter them in the medical record and generate a bill (and I may have skipped a few steps).

Yet it’s not always easy to make this case to patients when what they’re really looking for is that elusive creature known as peace of mind.

Dr. Salwitz writes that he’ll be seeing his patient again soon and will try once again to persuade her that the test she’s asking for has no medical benefit for her. “I will try to replace this test crutch with knowledge, reassurance and hope,” he writes. “Maybe it will help us to understand each other a little better.”