Dinner at home: ideal vs. reality

Common nutritional wisdom holds that families who sit down together for home-cooked meals tend to be both healthier and happier, and research for the most part bears that out.

But when a group of sociologists decided to study what it really takes to prepare a family dinner, they learned that all is not well in the kitchen. In fact, moms reported feeling pressured to live up to unrealistic ideals, and many felt the benefits of home-prepared food weren’t worth the hassle.

Utopia, meet Real Life.

Food gurus may romanticize the love and skill that go into preparing a meal and the appreciation with which it’s eaten, but “they fail to see all of the invisible labor that goes into planning, making and coordinating family meals,” the researchers concluded. “Cooking is at times joyful, but it is also filled with time pressures, tradeoffs designed to save money, and the burden of pleasing others.”

For their study, aptly titled “The Joy of Cooking?”, they spent a year and a half conducting in-depth interviews with a social and economic cross-section of 150 black, white and Latina mothers. They also spent more than 250 hours observing poor and working-class families as they shopped for groceries, cooked and ate meals.

They found mothers were strapped for time, sometimes working two jobs and unpredictable hours to make ends meet, with little energy left over to plan a meal, prepare it and clean up the kitchen afterwards while their children clamored for attention. “If it was up to me, I wouldn’t cook,” one mother bluntly told the researchers.

They discovered that in most of the poorer households they observed, mothers routinely did their own cooking to save money. But these women often were disadvantaged by kitchens that were too small and inadequately equipped – not enough counter space, a shortage of pots and pans, lack of sharp knives and cutting boards, and so on. One family living in a motel room prepared all their meals in a microwave and washed their dishes in the bathroom sink.

A common barrier was the cost of fresh fruit, vegetables, whole grains and lean meats, typical ingredients of a healthy meal. Some of the mothers didn’t have reliable transportation so they only shopped for groceries once a month, which limited the amount of fresh produce they could buy. Even in the middle-class households, moms fretted that the cost of quality food forced them to buy more processed foods and less organic food than they wished.

The final straw: family members who fussed, picked at the food or refused to eat what was served. The researchers rarely observed a family meal during which no one complained. In many of the low-income homes, moms resorted to making the same foods over and over rather than try something new that might be rejected and go to waste. For middle-class moms, the pressure to put healthy, balanced meals on the table often led to considerable anxiety over what they cooked and served.

Despite all this, is it possible for families to consistently prepare good meals at home and actually enjoy the experience instead of viewing it as a chore? Of course it is. But for many households, getting there clearly will be a slog.

When the reality surrounding the home-cooked meal is so often at odds with the ideal, the researchers wonder, why do food system reformers insist that the revolution needs to happen in the household kitchen?

They call the emerging standard for the home-cooked meal “a tasty illusion, one that is moralistic, and rather elitist, instead of a realistic vision of cooking today. Intentionally or not, it places the burden of a healthy, home-cooked meal on women.”

Perhaps we need more options for feeding families, such as introducing healthy food trucks or monthly town suppers, or getting schools and workplaces involved in sharing healthy lunches, they suggest. “Without creative solutions like these, suggesting that we return to the kitchen en masse will do little more than increase the burden so many women already bear.”

DIY blood pressure management

What would happen if patients with high blood pressure were allowed to manage the condition with minimal involvement by a doctor?

A recent study gives support to the notion that when patients are given the opportunity to take charge of their care, many of them will successfully rise to the occasion.

OK, that’s reaching beyond what the study actually addressed. (The research was published a couple of weeks ago in the Journal of the American Medical Association.) But it offers an intriguing look at allowing patients a more active role in managing a chronic condition.

Short summary of the study: Researchers compared two groups of patients, one that received standard care for high blood pressure and one that monitored their blood pressure at home and self-adjusted their medication as necessary. At the end of one year, the self-monitoring patients achieved an average blood pressure reading of 128 over 74, while the group that received the usual care – visiting the doctor’s office at regular intervals to have their blood pressure checked and their medications reviewed and adjusted – had an average of 138 over 76. (High blood pressure is defined as consistent readings of 140/90 or higher.)

The researchers primarily wanted to find out whether the do-it-yourself approach is effective in managing hypertension, a significant risk factor for stroke, cardiovascular disease and kidney disease. In the U.S., about one in three adults has high blood pressure and only about half have it under control.

Based on the study’s results, self-management seems to be feasible, at least for some patients, in reaching the clinical goal of a healthy blood pressure.

There are a ton of questions that still need to be answered. Which patients might be the best candidates for a greater role in managing their blood pressure? What kind of education or support do they need for taking on this responsibility? Can patients continue to self-manage their blood pressure over a period of years or will adherence drop off after a year or two?

What really caught my attention, though, was the evidence that patients don’t invariably fail when they’re given more responsibility. Moreover, it’s possible for them to achieve good results. (It should be noted that the self-managing patients in the JAMA study weren’t entirely on their own; any adjustments they made in their blood pressure medication were part of an agreed-on plan with their doctor.)

Helping patients reach optimal blood pressure is one of the most common tasks carried out by primary care doctors. As you might guess, it consumes a great deal of time and resources. If patients who are capable of self-managing their blood pressure are given more latitude to do so, maybe it would free up the doctor’s time to concentrate on the more challenging cases and everyone would gain something.

Rich, poor, healthy, sick

Take two people, one with a college degree and earning $100,000 a year and the other with a high school diploma earning $15,000, and guess which one is likely to have better health.

Those who study population health have long known that, elitist as it sounds, income and education are two of the strongest predictors of overall health in the United States. Americans who are educated and financially secure tend to live longer. They’re more likely to be physically active and less likely to use tobacco. Often they have better health outcomes and better management of chronic conditions such as diabetes or high blood pressure.

Exactly why this would be the case is not clearly understood. One assumption is that it’s all about access – people with more money are able to afford better food, live in safe neighborhoods and receive adequate medical care. Another assumption is that it has to do with healthy or unhealthy community environments. But an interesting review from a few years back indicates it’s much more complex than this.

The authors identify some influences that are familiar. Having more money, for example, means that people have the resources to join a health club or buy fresh fruits and vegetables. Where people live can shape how they behave – whether they have access to parks and sidewalks or the amount of tobacco and fast-food advertising they’re exposed to.

But the authors also identify several factors that are psychological and social, and subtler to tease out.

- Education and efficacy. One of the functions of education is to teach critical thinking, problem-solving and self-discipline, which can better equip people to process health information and apply it to their lives. These skills can also make them feel more confident about their ability to successfully manage their health.

- Peer group identification. People tend to associate with their own socioeconomic group and usually will adopt similar norms that reinforce their place in the social ecosystem. If everyone else in your educated, professional social circle is a nonsmoker, chances are you won’t smoke either, or will make serious attempts to quit. Likewise, blue-collar individuals may smoke to show independence, toughness and solidarity with their social group.

- Optimism about the future. Lower lifetime earnings and a sense of limited opportunity can make lower-income people feel there’s less reason to invest in their long-term health. Their health decisions can be more focused on the here and now than on the future. The authors of the review also suggest that the higher levels of stress associated with being economically disadvantaged can lead people to use tobacco, alcohol and/or eating as a way of coping.

Who remembers seeing the headlines this past month about a study that found a link between saving for retirement and being in better health? The researchers may have been onto something, namely that planning for one’s retirement could be just one of many markers for the psychosocial factors that influence health – disposable income, self-efficacy, peer group norms, belief in the future and so on.

Money does indeed make a difference but it isn’t just about money, the authors explain in their review. Walking as a form of exercise costs nothing, while smoking can be an expensive habit. What motivates someone to choose one over the other?

This is only scratching the surface, of course. Many of these factors are interrelated – someone at a lower socioeconomic level, for example, may be motivated to adopt healthy habits but have difficulty doing so for lack of means. And it’s hard to outsmart your genetic background regardless of your income or education level or motivation to pursue a healthy lifestyle.

There’s a huge national conversation taking place about being healthy: what it is, how to achieve it and how to reduce some of the obvious income and racial disparities. Do we just keep urging everyone to “make better choices”? Do we legislate? It’s clear from all the research that the social and psychological factors surrounding health-related behavior are complex and not easy to untangle. If ever there was an area to resist simplistic solutions, this is it.

‘Looks older than stated age’

Pity the young, pretty blonde doctor who’s constantly mistaken for being less accomplished than she truly is.

“Sexism is alive and well in medicine,” Dr. Elizabeth Horn lamented in a guest post this week at Kevin, MD, wherein she describes donning glasses and flat heels in an attempt to make people take her more seriously.

As someone who used to be mistaken for a college student well into my mid-20s, I certainly feel her pain. But let’s be fair: Doctors judge patients all the time on the basis of how old they appear to be.

It’s a longstanding practice in medicine to note in the chart whether adult patients appear older than, younger than or consistent with their stated age. Doctors defend it as a necessary piece of information that helps them discern the patient’s health status and the presence of any chronic diseases.

The theory is that patients who look older than their stated age are more likely to be in poorer health, while those who look more youthful than their years are likely in better health. But does it have any basis in reality? Well, only slightly.

An interesting study was published a few years ago that examined this question. The researchers found that patients had to look at least 10 years older than their actual age for this to be a somewhat reliable indication of poor health. Beyond this, it didn’t have much value in helping doctors sort out their healthy patients at a glance. In fact, it turned out to have virtually no value in assessing the health of patients who looked their age.

Other studies – and there are only a few that have explored this issue – have come up with conflicting results but no clear consensus, other than the conclusion that judging someone’s apparent age is a subjective undertaking.

With such limited evidence-based support for the usefulness of noting a patient’s apparent age, why does the habit persist?

I’ve scoured the literature and can’t find a good answer. My best guess is that doctors are trained to constantly be on the lookout for risk factors – which patient is a heart attack waiting to happen, which one can’t safely be given a narcotic, which one is habitually non-adherent – and that assessing apparent age vs. actual age is one more tool they think will help, a tool they may have learned during their training and continued to use without ever questioning its validity.

Appearances can be deceiving, however. A patient who looks their age or younger can still be sick. Someone who looks older can still be relatively hale and hearty.

And beware the eye-of-the-beholder effect. One of the studies that looked at this issue found that younger health care professionals consistently tended to overestimate the age of older adults. When you’re 30, everyone over the age of 60 looks like they’re 80, I guess.

Whether you’re a young physician fighting for the respect your training commands or a patient fighting against assumptions in the exam room, the message is the same: You can’t judge a book by its cover.

When you just can’t sleep

Reservations about the safety of prescription sleeping pills have been around for a long time. But new research has raised fresh concerns about when they’re appropriate and who’s most at risk.

To summarize: A study by the U.S. Centers for Disease Control and Prevention and Johns Hopkins University found that psychiatric medications – a category that includes sedatives – account for thousands of emergency room visits in the U.S. each year. One of the key findings, which may have come as somewhat of a surprise to the public, was that zolpidem, or Ambien, was implicated in 90,000 emergency room visits annually for adverse drug reactions.

The majority of ER visits for drug reactions associated with sedatives and anti-anxiety medications were among adults in their 20s, 30s and 40s. But among older adults who were taking these medications and ended up in the ER, the consequences were often more severe and were more likely to result in hospitalization.

This could be an opportunity to address adverse drug events, or emergency room utilization, or prescription drug use, or medication use by older adults. But I’m not going there, at least this time.

If I ruled the world, we would have a long-overdue national conversation about sleep and insomnia.

We’d open with a discussion of the “sleep is for wimps” mindset. Where does this come from, and who do these people think they’re kidding?

We’d take a look at the science. What do we know about the human body’s need for sleep and the mechanisms of sleep? How many questions still lack good answers?

We’d involve the medical community. How often are patients queried about their sleep? Is there more than one option for helping them, or is the immediate response to hand out (or refuse) a prescription for a hypnotic or to assume the problem is related to stress or lifestyle?

Finally, we’d get real about insomnia. Although sleep difficulties can often be traced to how people live their lives, simply telling them to practice better “sleep hygiene” may not cut it for those whose insomnia is longstanding, complex and more challenging to treat.

Somewhere in the discussion we might talk about shift work and the impact it has on sleep and health. We could talk about sleep apnea and restless legs syndrome as specific causes of poor sleep, while at the same time recognizing that many people with insomnia don’t have either of these conditions.

We could probably talk about the punishing levels of daily stress experienced by many people and how it interferes with their sleep.

And yes, we’d have a serious discussion about where pills fit into this. We would acknowledge that sleep aids are sometimes prescribed to people who don’t really need them or whose safety might be compromised by taking them. But if we’re being fair, we’d also have to recognize that clamping down on sleeping pill prescriptions could consign many people to chronic, intractable insomnia – and as anyone with longstanding insomnia can attest, it’s a miserable and ultimately unhealthy place to be.

Who’s up for the conversation?

When the patient becomes the doctor’s caretaker

In a video interview, the anonymous doctor’s frustration comes through loud and clear. She takes care of complex patients with many health needs, often working 11 or 12 hours a day, sacrificing time with her family. Yet the message she constantly gets from administrators is that she’s “dumb and inefficient” if she can’t crank patients through the system every 15 minutes.

In a word, she’s abused.

And patients ought to care enough about their doctors to ask them if they’re being abused, according to Dr. Pamela Wible, who raised the issue recently on her blog. “The life you save may save you,” wrote Dr. Wible, a primary care doctor on the West Coast who established her own version of the ideal medical practice after becoming burned-out by the corporate model of care.

This is one of those issues that’s like lifting a corner of the forbidden curtain. Many patients probably don’t think too much about their doctor’s challenges and frustrations. After all, physicians are paid more than enough to compensate for any workplace frustration, aren’t they? Isn’t this what they signed up for?

The problem with this kind of thinking is that it ignores reality. Medicine, especially primary care, has become a difficult, high-pressure environment. One study that tracked the daily routine at a private practice, for example, found the physicians saw an average of 18 patients a day, made 23.7 phone calls, received 16.8 emails, processed 12.1 prescription refills and reviewed 19.5 laboratory reports, 11.1 imaging reports and 13.9 consultation reports.

And when physicians are overloaded, unhappy and feel taken advantage of, it tends to be only a matter of time before it spills over into how they interact with their patients.

The million-dollar question here is whether patients can – or should – do anything about it.

Dr. Wible advocates taking a “just ask” approach. Compassion and advocacy by patients for their doctors can accomplish far more than most people think, she says.

One of her blog readers agreed, saying the pressures “must frustrate them beyond endurance. I’m going to start asking.”

Another commenter sounded a note of caution, though: “I feel there is a risk for a patient to ask such a question to a dr. who might be hiding how very fragile he/she is.”

More doubts were voiced at Kevin MD, where Dr. Wible’s blog entry was cross-posted this week. A sample:

- “Abused is a very emotionally loaded word that brings up powerful emotions and feelings like shame. I think if a doc is asked by a patient whether he/she is abused, they might actually end up feeling accused.”

- “I’m having a hard time imagining most docs responding well to their patients asking them if they are abused and I doubt that most docs would respond ‘yes, I am being abused’ to patients who do ask that no matter what was going on in their workplace. Nor do I think most patients want to spend a big chunk of their doctor visit talking about the doctor’s problems and issues.”

- “And what could I do if the answer is ‘yes’?”

I’m not sure what to think. At its core, health care is a transaction between human beings that becomes most healing when all the parties are able to recognize each other’s humanity.

Yet reams have been written about doctor-patient boundaries and the hazards of too much self-disclosure by the physician. Can it ultimately damage the relationship if the doctor shows vulnerability or emotional neediness? What are the ethics of a role reversal that puts the patient in the position of being caretaker to the doctor?

What do readers think? I’d like to know.

“I found my diagnosis on the Internet”

Raise your hand if you’ve ever gone online in search of a diagnosis that fits your symptoms or to read everything you can find about a condition you’re dealing with or a new medication you’re taking. Now raise your hand if you’ve ever talked to your doctor about what you’re reading on the Internet.

The Internet has made a bottomless well of information readily accessible to anyone with an online connection. But it seems we’re still figuring out how to incorporate this fact into the doctor-patient relationship in ways that allow everyone to feel comfortable with it.

Doctors tend to cringe when patients show up for an appointment with a stack of printouts from the Internet.

Patients tend to resent it when the doctor ignores or dismisses their personal research.

Doctors may not mind their patients’ efforts to become more informed but they don’t always trust the patient’s ability to recognize whether a source is reputable.

Patients want to know how to evaluate a source’s credibility but they don’t know where to start.

These are some of the impressions I gathered earlier this week from a health care social media chat on Twitter that focused on, among other things, that pesky pile of printouts in the exam room. (For those who haven’t discovered the weekly tweetchat, it takes place every Sunday from 8 to 9 p.m. central time; follow along at #hcsm and brace yourself for an hour of free-wheeling, fun and insightful discussion.)

So what are we to glean from all of this? Although there doesn’t seem to be a single right way for patients to share health information they’ve found online, some approaches may be more helpful than others.

Most doctors don’t have time to wade through large stacks of printouts, so patients will probably have more success if they stick to summaries and as few pages as possible.

How the topic is introduced seems to matter. Is it an open-minded exchange or is it an argument? Is it respectful of each other’s perspective? Doctors have greater medical knowledge and hands-on experience but patients are the experts when it comes to their own experiences.

One of the conundrums is how to sort the wheat from the chaff. There’s a lot of questionable health information floating around online, but if patients haven’t learned to critically evaluate what they’re reading for accuracy and credibility, they can easily be led astray by misinformation. On the other hand, it’s hard for patients to develop these skills if their doctor is dismissive of their efforts and unwilling to provide coaching or guidance.

Frankly, the train has already left the station on this issue. A 2009 study by the Pew Research Internet Project found that looking for health information online has become a “mainstream activity.” Sixty-one percent of the adults who were surveyed said they used online sources for health information, and one in 10 said that what they learned had a major impact on either their own health care or how they cared for someone else.

But here’s another interesting finding: Patients whose doctors encourage them to search out health information online are, on average, more engaged in their care and more satisfied, regardless of whether they and the doctor agree on the information. In other words, it’s the participation and the open dialogue that really matter.

Quibbles over whether patients should talk to their doctor about health information they’ve found online are no longer the point. It’s time to move on to the bigger issue of how to have these conversations in the most beneficial way possible.

Needs to be seen

You need a refill for a prescription that’s about to run out. You’ve taken the medication for years without any problems and can’t think of any reason why the prescription can’t just be automatically continued. But the doctor won’t order a refill unless you make an appointment and come in to be seen.

Is this an unfair burden on the patient or due diligence by the doctor?

Dr. Lucy Hornstein, a family practice physician who blogs at Musings of a Dinosaur, was up against the wall recently with a patient who needed refills for blood pressure medications but wouldn’t make an appointment, despite repeated reminders.

“What to do?” Dr. Hornstein asked her blog readers.

First round of analysis: What are the harms of going off BP meds? Answer: potentially significant, in that patient is on several meds which are controlling BP well, and has other cardiovascular risk factors.

Next, anticipating the patient’s objections to a visit: Why exactly do I need to see her? We call it “monitoring”: making sure her BP is still controlled, and that there are no side effects or other related (or unrelated) problems emerging. “But you never do anything,” I hear her responding, and it’s hard to argue. It certainly seems that the greater benefit comes from continuing to authorize the refills.

What’s the down side? This: What if something changes, and either the BP is no longer controlled, or something else happens as a result of the meds (kidney failure comes to mind)? I can just hear the lawyer bellowing, “Why were you continuing to prescribe these dangerous medications without monitoring them?” causing the jury to come back and strip me of all my worldly goods.

So what to do? Refuse the refill and risk having her stroke out from uncontrolled blood pressure? Or keep on prescribing without seeing her? If so, how long? Four years? Five? Ten?

It’s a good summary of a dilemma that puts the doctor between a rock and a hard place no matter which course of action she chooses.

Patients don’t necessarily see it this way, though, judging from many of the online conversations about everything from seasonal allergy medications to antidepressants. Sometimes it ends up being a source of conflict, as in “Why are you making me jump through all these ridiculous hoops for a routine refill?” Sometimes they’re running out of medication but can’t get in for an appointment right away – what then? In some cases they’re genuinely worried about the cost of an office visit, especially if they have a large copayment or lack health insurance.

Dr. Hornstein’s readers (most of whom seem to be health care professionals) didn’t mince words on the need to crack down on this patient.

“No more refills until she sees you in person. Period. You are not a refill machine,” was one person’s blunt assessment.

There seems to be room for debate on how often patients should be required to see a doctor for a prescription refill. Is twice a year excessive for someone whose blood pressure is well controlled with medication and minimal side effects? Is once a year enough for someone who has complex health issues and is taking multiple medications? What about prescription narcotics? Should it make a difference if the patient is someone the doctor knows well?

Truth be told, I don’t enjoy the hassle of getting a refill. But after reading Dr. Hornstein’s side of the story, it’s easy to see why doctors don’t like reauthorizing prescriptions willy-nilly. After all, it’s their name on record as the prescriber if something happens to go wrong. Sometimes the patient really does need to be seen.

Providers by any other name

Guilty, guilty, guilty.

That was my reaction after reading Dr. Danielle Ofri’s take at the New York Times Well blog on the use of the word “provider” to refer to people in health care.

Dr. Ofri dislikes the term intensely:

Every time I hear it – and it comes only from administrators, never patients – I cringe. To me it always elicits a vision of the hospital staff as working at Burger King, all of us wearing those paper hats as someone barks: “Two burgers, three Cokes, two statins and a colonoscopy on the side.”

“Provider” is a corporate and impersonal way to describe a relationship that, at its heart, is deeply personal, Dr. Ofri writes. “The ‘consumers’ who fall ill are human beings, and the ‘providers’ to whom they turn for care are human beings also. The ‘transactions’ between them are so much more than packets of ‘health care’.”

I suppose this is as good a time as any to confess I’ve used the p-word – used it more than once, in fact.

It’s not that I don’t know any better. I’ve been aware for quite some time that many people in health care don’t really like to be called providers. I deploy the word rather gingerly, and have increasingly taken to calling them something more specific – doctors, for instance, or nurses, or clinicians.

Frankly, it’s hard to know what the right word should be. Once upon a time it was pretty safe to use the term “doctor” to refer to those who provide (sorry!) care. This is no longer a given; many of those engaged in patient care these days are nurses, nurse practitioners, physician assistants, medical assistants and… you get the picture. (For that matter, “doctor” and “physician” aren’t interchangeable either, but I digress.)

“Clinician” seems a little closer to the mark, and it has the merit of distinguishing those in health care who are engaged in the actual care of patients from those who aren’t. But it’s a catch-all term, not particularly descriptive and somewhat lacking in personal warmth and fuzziness.

Similar minefields lurk in terms such as “health care professional,” “mid-level professional,” “allied health professional,” “health care worker” and the like. They’re lengthy, cumbersome and stilted. It can be inaccurate to call everyone who works in health care a “professional,” because many of those who toil behind the scenes in disciplines such as medical records management and central distribution aren’t professionally licensed, even though their work is just as essential. And many allied health professionals don’t like being called “mid-level” because of the hierarchy the term implies.

Don’t even get me started on the varying levels of “health care organizations.” Hospitals, medical clinics, outpatient surgery centers, community health centers, public health agencies – each occupies a special place in the ecosystem and can’t easily be lumped into generic vagueness.

And if you really want to get technical, shouldn’t we count health insurers, pharmaceutical companies and medical device manufacturers as part of the “health care system” as well?

What’s a writer supposed to do? It’s a constant challenge: trying to be accurate and descriptive yet not allow oneself to become bogged down in multiple syllables.

Language does matter. There’s a case to be made for the importance of being aware of cost, quality and medical necessity – for behaving as a consumer of health care, not solely as a patient. But I view myself first and foremost as a patient, and I’m not sure I like it when patients are urged to “shop around” for good care as if they were kicking the tires at a used-car lot. There’s a relationship aspect to health care that goes beyond pure consumerism and that needs to be recognized and valued.

If we’re debating about the language, perhaps it’s because everyone’s role is shifting to a greater degree than at any other point in history. Patients have more information and are being asked to take more responsibility, as well they should. The people who care for patients are being pulled in more directions than ever before. We don’t know what to call ourselves anymore, and reasonable alternatives don’t seem to have been created yet.

So here’s the deal: When I use the term “providers,” it isn’t because I think of them as burger-makers at a fast food restaurant. It’s generally because the word is short, to the point and encompasses the range of individuals and organizations I’m writing about. As far as I’m concerned, my doctor and nurse are still my doctor and nurse and I’m still their patient. Sometimes I’m a consumer, but only when consumer behavior is what’s called for.

If anyone has suggestions for new terminology other than “provider,” I’m all ears.

This entry was originally published Dec. 30, 2011.

What price for peace of mind?

The patient is eight years out from treatment for breast cancer and is doing well, but four times a year she insists on a blood test to check her inflammation levels.

The test is pointless and has nothing to do with cancer or its possible recurrence. But what happens when the patient makes the request to the doctor?

“To my shame, I must admit, I order it every time,” writes her oncologist, Dr. James Salwitz (if you haven’t yet discovered his excellent blog, Sunrise Rounds, head over there and check it out).

The test may provide temporary reassurance for the patient. At $18, it isn’t expensive, and it’s considerably less harmful or invasive than other tests. But all the same, it’s useless, and it prompts Dr. Salwitz to ask: What can health care do to stem the practice of tests, procedures and other interventions that have no real benefit to the patient?

He writes:

Medicine is full of better-known examples of useless and wasteful testing. PSA and CA125 cancer markers that fail as screening tests. Analysis indicates they cause more harm than benefit. MRIs for muscular back pain, which will go away by itself. Unneeded EKGs, stress tests and cardiac catheterizations, instead of thoughtful conservative medical management. CT scans often take the place of sound clinical analysis and judgment. A 15-year study of 30.9 million radiology imaging exams published recently shows a tripling in the last 15 years.

These unneeded tests do more than waste dollars. If a test is not necessary and has no medical benefit, it can only cause harm. The test itself can cause problems such as excess radiation exposure, allergic reactions and discomfort. In addition, tests find false positive results, which lead to further useless testing or unneeded treatment.

It’s been rather remarkable to witness the pulling-away from excess tests and treatment that has taken place in American medicine in the past few years. There’s a growing recognition that there’s such a thing as too much intervention and that intervention is not automatically good for patients.

Moreover, we’re becoming more willing to talk openly about the tradeoff of benefit vs. harm. Not all that long ago, it was considered heresy to even suggest that women in their 40s didn’t absolutely need a mammogram every single year. The thinking on this is beginning to change as the evidence accumulates of mammography’s limited benefits for younger, low-risk women, and it’s showing up in patient decisions; a recent study by the Mayo Clinic found a 6 percent decline last year in the number of 40-something women who opted to have a mammogram.

It’s easy to oversimplify this issue, and indeed, it’s not always as straightforward as it seems. Interventions sometimes don’t look useless until they’re viewed afterwards through the retrospectoscope. At the time, in the heat of battle, they may seem necessary and justified. Nor do patients fit neatly into little diagnostic boxes; what may be unnecessary for one might make sense for someone else.

There’s a larger question, though, that we sometimes fail to ask: If something is medically useless, does it still have value if it gives the patient (and perhaps the clinician as well) some peace of mind?

To many patients, this is no small thing. It’s an emotional need that’s not easily met by science-based, rational discussion of the studies and the actual evidence for the pros and cons. Unfortunately it’s also often abetted by consumer marketing that plays up the peace-of-mind aspect of certain tests while remaining silent about the limited benefit, the possible risk and the clinical complexity that may be part of the larger picture. The implicit message is that a test is OK as long as it provides the patient with some reassurance – and who’s to say this is entirely wrong?

Should clinicians be tougher about just saying no, then? It may be easier to give in, but does this constitute quality care? An interesting ethics case by the Virtual Mentor program of the American Medical Association explores some of the issues that make this so challenging: the responsibility of the physician to make recommendations and decisions that are clinically appropriate, the importance of respecting the patient’s autonomy and values, the balance between patient preferences and wise use of limited health care resources.

You could argue that patients should be allowed to obtain unnecessary care as long as they pay for it themselves, but does this really address the larger question of resources? Regardless of who’s paying the bill, unnecessary care comes with a cost. The blood test requested by Dr. Salwitz’s patient, for instance, likely would involve the use of equipment such as a disposable needle and lab supplies, staff time to draw the blood, analyze the sample, record the results, report them to the doctor and patient, enter them in the medical record and generate a bill (and I may have skipped a few steps).

Yet it’s not always easy to make this case to patients when what they’re really looking for is that elusive creature known as peace of mind.

Dr. Salwitz writes that he’ll be seeing his patient again soon and will try once again to persuade her that the test she’s asking for has no medical benefit for her. “I will try to replace this test crutch with knowledge, reassurance and hope,” he writes. “Maybe it will help us to understand each other a little better.”

This post was originally published on July 3, 2012.