Rich, poor, healthy, sick

Take two people, one with a college degree earning $100,000 a year and the other with a high school diploma earning $15,000, and guess which one is likely to have better health.

Those who study population health have long known that, elitist as it sounds, income and education are two of the strongest predictors of overall health in the United States. Americans who are educated and financially secure tend to live longer. They’re more likely to be physically active and less likely to use tobacco. Often they have better health outcomes and better management of chronic conditions such as diabetes or high blood pressure.

Exactly why this would be the case is not clearly understood. One assumption is that it’s all about access – people with more money are able to afford better food, live in safe neighborhoods and receive adequate medical care. Another assumption is that it has to do with healthy or unhealthy community environments. But an interesting review from a few years back indicates it’s much more complex than this.

The authors identify some influences that are familiar. Having more money, for example, means that people have the resources to join a health club or buy fresh fruits and vegetables. Where people live can shape how they behave – whether they have access to parks and sidewalks or the amount of tobacco and fast-food advertising they’re exposed to.

But the authors also identify several factors that are psychological, social and harder to tease out.

- Education and efficacy. One of the functions of education is to teach critical thinking, problem-solving and self-discipline, which can better equip people to process health information and apply it to their lives. These skills can also make them feel more confident about their ability to successfully manage their health.

- Peer group identification. People tend to associate with their own socioeconomic group and usually will adopt similar norms that reinforce their place in the social ecosystem. If everyone else in your educated, professional social circle is a nonsmoker, chances are you won’t smoke either, or will make serious attempts to quit. Likewise, blue-collar individuals may smoke to show independence, toughness and solidarity with their social group.

- Optimism about the future. Lower lifetime earnings and a sense of limited opportunity can make lower-income people feel there’s less reason to invest in their long-term health. Their health decisions can be more focused on the here and now than on the future. The authors of the review also suggest that the higher levels of stress associated with being economically disadvantaged can lead people to use tobacco, alcohol and/or eating as a way of coping.

Who remembers seeing the headlines this past month about a study that found a link between saving for retirement and being in better health? The researchers may have been onto something, namely that planning for one’s retirement could be just one of many markers for the psychosocial factors that influence health – disposable income, self-efficacy, peer group norms, belief in the future and so on.

Money does indeed make a difference but it isn’t just about money, the authors explain in their review. Walking as a form of exercise costs nothing, while smoking can be an expensive habit. What motivates someone to choose one over the other?

This is only scratching the surface, of course. Many of these factors are interrelated – for example, someone at a lower socioeconomic level may be motivated to adopt healthy habits but have difficulty achieving them for lack of means. And it’s hard to outsmart your genetic background, regardless of your income or education level or motivation to pursue a healthy lifestyle.

There’s a huge national conversation taking place about being healthy: what it is, how to achieve it and how to reduce some of the obvious income and racial disparities. Do we just keep urging everyone to “make better choices”? Do we legislate? It’s clear from all the research that the social and psychological factors surrounding health-related behavior are complex and not easy to untangle. If ever there was an area to resist simplistic solutions, this is it.

‘Looks older than stated age’

Pity the young, pretty blonde doctor who’s constantly mistaken for being less accomplished than she truly is.

“Sexism is alive and well in medicine,” Dr. Elizabeth Horn lamented in a guest post this week at Kevin MD, in which she described donning glasses and flat heels in an attempt to make people take her more seriously.

As someone who used to be mistaken for a college student well into my mid-20s, I certainly feel her pain. But let’s be fair: Doctors judge patients all the time on the basis of how old they appear to be.

It’s a longstanding practice in medicine to note in the chart whether adult patients appear to be older, younger or consistent with their stated age. Doctors defend it as a necessary piece of information that helps them discern the patient’s health status and the presence of any chronic diseases.

The theory is that patients who look older than their stated age are more likely to be in poor health, while those who look more youthful than their years are healthier. But does this have any basis in reality? Well, only slightly.

An interesting study was published a few years ago that examined this question. The researchers found that patients had to look at least 10 years older than their actual age for this to be a somewhat reliable indication of poor health. Beyond this, it didn’t have much value in helping doctors sort out their healthy patients at a glance. In fact, it turned out to have virtually no value in assessing the health of patients who looked their age.

Other studies – and there are only a few that have explored this issue – have come up with conflicting results but no clear consensus, other than the conclusion that judging someone’s apparent age is a subjective undertaking.

If there’s so little evidence-based support for the usefulness of noting a patient’s apparent age, why does the habit persist?

I’ve scoured the literature and can’t find a good answer. My best guess is that doctors are trained to constantly be on the lookout for risk factors – which patient is a heart attack waiting to happen, which one can’t safely be allowed to take a narcotic, which one is habitually non-adherent – and assessing apparent age vs. actual age is one more tool they think will help, a tool they may have learned during their training and continued to use without ever questioning its validity.

Appearances can be deceiving, however. A patient who looks their age or younger can still be sick. Someone who looks older can still be relatively hale and hearty.

And beware the eye-of-the-beholder effect. One of the studies that looked at this issue found that younger health care professionals consistently tended to overestimate the age of older adults. When you’re 30, everyone over the age of 60 looks like they’re 80, I guess.

Whether you’re a young physician fighting for the respect your training commands or a patient fighting against assumptions in the exam room, the message is the same: You can’t judge a book by its cover.

When you just can’t sleep

Reservations about the safety of prescription sleeping pills have been around for a long time. But new research has raised fresh concerns about when they’re appropriate and who’s most at risk.

To summarize: A study by the U.S. Centers for Disease Control and Prevention and Johns Hopkins University found that psychiatric medications – a category that includes sedatives – account for thousands of emergency room visits in the U.S. each year. One of the key findings, which may have come as somewhat of a surprise to the public, was that zolpidem, or Ambien, was implicated in 90,000 emergency room visits annually for adverse drug reactions.

The majority of ER visits for drug reactions associated with sedatives and anti-anxiety medications were among adults in their 20s, 30s and 40s. But among older adults who were taking these medications and ended up in the ER, the consequences were often more severe and were more likely to result in hospitalization.

This could be an opportunity to address adverse drug events, or emergency room utilization, or prescription drug use, or medication use by older adults. But I’m not going there, at least this time.

If I ruled the world, we would have a long-overdue national conversation about sleep and insomnia.

We’d open with a discussion of the “sleep is for wimps” mindset. Where does this come from, and who do these people think they’re kidding?

We’d take a look at the science. What do we know about the human body’s need for sleep and the mechanisms of sleep? How many questions still lack good answers?

We’d involve the medical community. How often are patients queried about their sleep? Is there more than one option for helping them, or is the immediate response to hand out (or refuse) a prescription for a hypnotic or to assume the problem is related to stress or lifestyle?

Finally, we’d get real about insomnia. Although sleep difficulties can often be traced to how people live their lives, simply telling them to practice better “sleep hygiene” may not cut it for those whose insomnia is longstanding, complex and more challenging to treat.

Somewhere in the discussion we might talk about shift work and the impact it has on sleep and health. We could talk about sleep apnea and restless legs syndrome as specific causes of poor sleep, while at the same time recognizing that many people with insomnia don’t have either of these conditions.

We could probably talk about the punishing levels of daily stress experienced by many people and how it interferes with their sleep.

And yes, we’d have a serious discussion about where pills fit into this. We would acknowledge that sleep aids are sometimes prescribed to people who don’t really need them or whose safety might be compromised by taking them. But if we’re being fair, we’d also have to recognize that clamping down on sleeping pill prescriptions could consign many people to chronic, intractable insomnia – and as anyone with longstanding insomnia can attest, it’s a miserable and ultimately unhealthy place to be.

Who’s up for the conversation?

When the patient becomes the doctor’s caretaker

In a video interview, the anonymous doctor’s frustration comes through loud and clear. She takes care of complex patients with many health needs, often working 11 or 12 hours a day, sacrificing time with her family. Yet the message she constantly gets from administrators is that she’s “dumb and inefficient” if she can’t crank patients through the system every 15 minutes.

In a word, she’s abused.

And patients ought to care enough about their doctors to ask them if they’re being abused, according to Dr. Pamela Wible, who raised the issue recently on her blog. “The life you save may save you,” wrote Dr. Wible, a primary care doctor on the West Coast who established her own version of the ideal medical practice after becoming burned-out by the corporate model of care.

This is one of those issues that’s like lifting a corner of the forbidden curtain. Many patients probably don’t think too much about their doctor’s challenges and frustrations. After all, physicians are paid more than enough to compensate for any workplace frustration, aren’t they? Isn’t this what they signed up for?

The problem with this kind of thinking is that it ignores reality. Medicine, especially primary care, has become a difficult, high-pressure environment to be in. One study that tracked the daily routine at a private practice, for example, found that the physicians saw an average of 18 patients a day, made 23.7 phone calls, received 16.8 emails, processed 12.1 prescription refills and reviewed 19.5 laboratory reports, 11.1 imaging reports and 13.9 consultation reports.

And when physicians are overloaded, unhappy and feel taken advantage of, it tends to be only a matter of time before it spills over into how they interact with their patients.

The million-dollar question here is whether patients can – or should – do anything about it.

Dr. Wible advocates taking a “just ask” approach. Compassion and advocacy by patients for their doctors can accomplish far more than most people think, she says.

One of her blog readers agreed, saying the pressures “must frustrate them beyond endurance. I’m going to start asking.”

Another commenter sounded a note of caution, though: “I feel there is a risk for a patient to ask such a question to a dr. who might be hiding how very fragile he/she is.”

More doubts were voiced at Kevin MD, where Dr. Wible’s blog entry was cross-posted this week. A sample:

- “Abused is a very emotionally loaded word that brings up powerful emotions and feelings like shame. I think if a doc is asked by a patient whether he/she is abused, they might actually end up feeling accused.”

- “I’m having a hard time imagining most docs responding well to their patients asking them if they are abused and I doubt that most docs would respond ‘yes, I am being abused’ to patients who do ask that no matter what was going on in their workplace. Nor do I think most patients want to spend a big chunk of their doctor visit talking about the doctor’s problems and issues.”

- “And what could I do if the answer is ‘yes’?”

I’m not sure what to think. At its core, health care is a transaction between human beings that becomes most healing when all the parties are able to recognize each other’s humanity.

Yet reams have been written about doctor-patient boundaries and the hazards of too much self-disclosure by the physician. Can it ultimately damage the relationship if the doctor shows vulnerability or emotional neediness? What are the ethics of a role reversal that puts the patient in the position of being caretaker to the doctor?

What do readers think? I’d like to know.

“I found my diagnosis on the Internet”

Raise your hand if you’ve ever gone online in search of a diagnosis that fits your symptoms or to read everything you can find about a condition you’re dealing with or a new medication you’re taking. Now raise your hand if you’ve ever talked to your doctor about what you’re reading on the Internet.

The Internet has made a bottomless well of information readily accessible to anyone with an online connection. But it seems we’re still figuring out how to incorporate this fact into the doctor-patient relationship in ways that allow everyone to feel comfortable with it.

Doctors tend to cringe when patients show up for an appointment with a stack of printouts from the Internet.

Patients tend to resent it when the doctor ignores or dismisses their personal research.

Doctors may not mind their patients’ efforts to become more informed but they don’t always trust the patient’s ability to recognize whether a source is reputable.

Patients want to know how to evaluate a source’s credibility but they don’t know where to start.

These are some of the impressions I gathered earlier this week from a health care social media chat on Twitter that focused on, among other things, that pesky pile of printouts in the exam room. (For those who haven’t discovered the weekly tweetchat, it takes place every Sunday from 8 to 9 p.m. central time; follow along at #hcsm and brace yourself for an hour of free-wheeling, fun and insightful discussion.)

So what are we to glean from all of this? Although there doesn’t seem to be a single right way for patients to share health information they’ve found online, some approaches may be more helpful than others.

Most doctors don’t have time to wade through large stacks of printouts, so patients will probably have more success if they stick to summaries and as few pages as possible.

How the topic is introduced seems to matter. Is it an open-minded exchange or is it an argument? Is it respectful of each other’s perspective? Doctors have greater medical knowledge and hands-on experience but patients are the experts when it comes to their own experiences.

One of the conundrums is how to sort the wheat from the chaff. There’s a lot of questionable health information floating around online, but if patients haven’t learned to critically evaluate what they’re reading for accuracy and credibility, they can easily be led astray by misinformation. On the other hand, it’s hard for patients to develop these skills if their doctor is dismissive of their efforts and unwilling to provide coaching or guidance.

Frankly, the train has already left the station on this issue. A 2009 study by the Pew Research Internet Project found that looking for health information online has become a “mainstream activity.” Sixty-one percent of the adults who were surveyed said they used online sources for health information, and one in 10 said that what they learned had a major impact on either their own health care or how they cared for someone else.

But here’s another interesting finding: Patients whose doctors encourage them to search out health information online are, on average, more engaged in their care and more satisfied, regardless of whether they and the doctor agree on the information. In other words, it’s the participation and the open dialogue that really matter.

Quibbles over whether patients should talk to their doctor about health information they’ve found online are no longer the point. It’s time to move on to the bigger issue of how to have these conversations in the most beneficial way possible.

Needs to be seen

You need a refill for a prescription that’s about to run out. You’ve taken the medication for years without any problems and can’t think of any reason why the prescription can’t just be automatically continued. But the doctor won’t order a refill unless you make an appointment and come in to be seen.

Is this an unfair burden on the patient or due diligence by the doctor?

Dr. Lucy Hornstein, a family practice physician who blogs at Musings of a Dinosaur, was up against the wall recently with a patient who needed refills for blood pressure medications but wouldn’t make an appointment, despite repeated reminders.

“What to do?” Dr. Hornstein asked her blog readers, before walking through her reasoning:

First round of analysis: What are the harms of going off BP meds? Answer: potentially significant, in that patient is on several meds which are controlling BP well, and has other cardiovascular risk factors.

Next, anticipating the patient’s objections to a visit: Why exactly do I need to see her? We call it “monitoring”; making sure her BP is still controlled, and that there are no side effects or other related (or unrelated) problems emerging. “But you never do anything,” I hear her responding, and it’s hard to argue. It certainly seems that the greater benefit comes from continuing to authorize the refills.

What’s the down side? This: What if something changes, and either the BP is no longer controlled, or something else happens as a result of the meds (kidney failure comes to mind)? I can just hear the lawyer bellowing, “Why were you continuing to prescribe these dangerous medications without monitoring them?” causing the jury to come back and strip me of all my worldly goods.

So what to do? Refuse the refill and risk having her stroke out from uncontrolled blood pressure? Or keep on prescribing without seeing her? If so, how long? Four years? Five? Ten?

It’s a good summary of a dilemma that puts the doctor between a rock and a hard place no matter which course of action she chooses.

Patients don’t necessarily see it this way, though, judging from many of the online conversations about everything from seasonal allergy medications to antidepressants. Sometimes it ends up being a source of conflict, as in “Why are you making me jump through all these ridiculous hoops for a routine refill?” Sometimes they’re running out of medication but can’t get in for an appointment right away – what then? In some cases they’re genuinely worried about the cost of an office visit, especially if they have a large copayment or lack health insurance.

Dr. Hornstein’s readers (most of whom seem to be health care professionals) didn’t mince words on the need to crack down on this patient.

“No more refills until she sees you in person. Period. You are not a refill machine,” was one person’s blunt assessment.

There seems to be room for debate on how often patients should be required to see a doctor for a prescription refill. Is twice a year excessive for someone whose blood pressure is well controlled with medication and minimal side effects? Is once a year enough for someone who has complex health issues and is taking multiple medications? What about prescription narcotics? Should it make a difference if the patient is someone the doctor knows well?

Truth be told, I don’t enjoy the hassle of getting a refill. But after reading Dr. Hornstein’s side of the story, it’s easy to see why doctors don’t like reauthorizing prescriptions willy-nilly. After all, it’s their name on record as the prescriber if something happens to go wrong. Sometimes the patient really does need to be seen.

Providers by any other name

Guilty, guilty, guilty.

That was my reaction after reading Dr. Danielle Ofri’s take at the New York Times Well blog on the use of the word “provider” to refer to people in health care.

Dr. Ofri dislikes the term intensely:

Every time I hear it – and it comes only from administrators, never patients – I cringe. To me it always elicits a vision of the hospital staff as working at Burger King, all of us wearing those paper hats as someone barks: “Two burgers, three Cokes, two statins and a colonoscopy on the side.”

“Provider” is a corporate and impersonal way to describe a relationship that, at its heart, is deeply personal, Dr. Ofri writes. “The ‘consumers’ who fall ill are human beings, and the ‘providers’ to whom they turn for care are human beings also. The ‘transactions’ between them are so much more than packets of ‘health care’.”

I suppose this is as good a time as any to confess I’ve used the p-word – used it more than once, in fact.

It’s not that I don’t know any better. I’ve been aware for quite some time that many people in health care don’t really like to be called providers. I deploy the word rather gingerly, and have increasingly taken to calling them something more specific – doctors, for instance, or nurses, or clinicians.

Frankly, it’s hard to know what the right word should be. Once upon a time it was pretty safe to use the term “doctor” to refer to those who provide (sorry!) care. This is no longer a given; many of those engaged in patient care these days are nurses, nurse practitioners, physician assistants, medical assistants and… you get the picture. (For that matter, “doctor” and “physician” aren’t interchangeable either, but I digress.)

“Clinician” seems a little closer to the mark, and it has the merit of distinguishing those in health care who are engaged in the actual care of patients from those who aren’t. But it’s a catch-all term, not particularly descriptive and somewhat lacking in personal warmth and fuzziness.

Similar minefields lurk in terms such as “health care professional,” “mid-level professional,” “allied health professional,” “health care worker” and the like. They’re lengthy, cumbersome and stilted. It can be inaccurate to call everyone who works in health care a “professional,” because many of those who toil behind the scenes in disciplines such as medical records management and central distribution aren’t professionally licensed, even though their work is just as essential. On a more politically correct note, many allied health professionals don’t like being called “mid-level” because of the hierarchy it implies.

Don’t even get me started on the varying levels of “health care organizations.” Hospitals, medical clinics, outpatient surgery centers, community health centers, public health agencies – each occupies a special place in the ecosystem and can’t easily be lumped into generic vagueness.

And if you really want to get technical, shouldn’t we consider health insurers, pharmaceutical companies and medical device manufacturers as part of the “health care system” as well?

What’s a writer supposed to do? It’s a constant challenge: trying to be accurate and descriptive without getting bogged down in multiple syllables.

Language does matter. There’s a case to be made for the importance of being aware of cost, quality and medical necessity – for behaving as a consumer of health care, not solely as a patient. But I view myself first and foremost as a patient, and I’m not sure I like it when patients are urged to “shop around” for good care as if they were kicking the tires at a used-car lot. There’s a relationship aspect to health care that goes beyond pure consumerism and that needs to be recognized and valued.

If we’re debating about the language, perhaps it’s because everyone’s role is shifting to a greater degree than at any other point in history. Patients have more information and are being asked to take more responsibility, as well they should. The people who care for patients are being pulled in more directions than ever before. We don’t know what to call ourselves anymore, and reasonable alternatives don’t seem to have been created yet.

So here’s the deal: When I use the term “providers,” it isn’t because I think of them as burger-makers at a fast food restaurant. It’s generally because the word is short, to the point and encompasses the range of individuals and organizations I’m writing about. As far as I’m concerned, my doctor and nurse are still my doctor and nurse and I’m still their patient. Sometimes I’m a consumer, but only when consumer behavior is what’s called for.

If anyone has suggestions for new terminology other than “provider,” I’m all ears.

This entry was originally published Dec. 30, 2011.

What price for peace of mind?

The patient is eight years out from treatment for breast cancer and is doing well, but four times a year she insists on a blood test to check her inflammation levels.

The test is pointless and has nothing to do with cancer or its possible recurrence. But what happens when the patient makes the request to the doctor?

“To my shame, I must admit, I order it every time,” writes her oncologist, Dr. James Salwitz (if you haven’t yet discovered his excellent blog, Sunrise Rounds, head over there and check it out).

The test may provide temporary reassurance for the patient. At $18, it isn’t expensive, and it’s considerably less harmful or invasive than other tests. But all the same, it’s useless, and it prompts Dr. Salwitz to ask: What can health care do to stem the practice of tests, procedures and other interventions that have no real benefit to the patient?

He writes:

Medicine is full of better-known examples of useless and wasteful testing. PSA and CA125 cancer markers that fail as screening tests. Analysis indicates they cause more harm than benefit. MRIs for muscular back pain, which will go away by itself. Unneeded EKGs, stress tests and cardiac catheterizations, instead of thoughtful conservative medical management. CT scans often take the place of sound clinical analysis and judgment. A 15-year study of 30.9 million radiology imaging exams published recently shows a tripling in the last 15 years.

These unneeded tests do more than waste dollars. If a test is not necessary and has no medical benefit, it can only cause harm. The test itself can cause problems such as excess radiation exposure, allergic reactions and discomfort. In addition, tests find false positive results, which lead to further useless testing or unneeded treatment.

It’s been rather remarkable to witness the pulling-away from excess tests and treatment that has taken place in American medicine in the past few years. There’s a growing recognition that there’s such a thing as too much intervention and that intervention is not automatically good for patients.

Moreover, we’re becoming more willing to talk openly about the tradeoff of benefit vs. harm. Not all that long ago, it was considered heresy to even suggest that women in their 40s didn’t absolutely need a mammogram every single year. The thinking on this is beginning to change as the evidence accumulates of mammography’s limited benefits for younger, low-risk women, and it’s showing up in patient decisions; a recent study by the Mayo Clinic found a 6 percent decline last year in the number of 40-something women who opted to have a mammogram.

It’s easy to oversimplify this issue, and indeed, it’s not always as straightforward as it seems. Interventions sometimes don’t look useless until they’re viewed afterwards through the retrospectoscope. At the time, in the heat of battle, they may seem necessary and justified. Nor do patients fit neatly into little diagnostic boxes; what may be unnecessary for one might make sense for someone else.

There’s a larger question, though, that we sometimes fail to ask: If something is medically useless, does it still have value if it gives the patient (and perhaps the clinician as well) some peace of mind?

To many patients, this is no small thing. It’s an emotional need that’s not easily met by science-based rational discussion about the studies and the actual evidence for the pros and cons. Unfortunately it’s also often abetted by consumer marketing that plays up the peace-of-mind aspect of certain tests while remaining silent about the limited benefit, the possible risk and the clinical complexity that may be part of the larger picture. The message can be sent that it’s OK as long as it provides the patient with some reassurance, and who’s to say this is entirely wrong?

Should clinicians be tougher about just saying no, then? It may be easier to give in, but does this constitute quality care? An interesting ethics case by the Virtual Mentor program of the American Medical Association explores some of the issues that make this so challenging: the responsibility of the physician to make recommendations and decisions that are clinically appropriate, the importance of respecting the patient’s autonomy and values, the balance between patient preferences and wise use of limited health care resources.

You could argue that patients should be allowed to obtain unnecessary care as long as they pay for it themselves, but does this really address the larger question of resources? Regardless of who’s paying the bill, unnecessary care comes with a cost. The blood test requested by Dr. Salwitz’s patient, for instance, likely would involve the use of equipment such as a disposable needle and lab supplies, staff time to draw the blood, analyze the sample, record the results, report them to the doctor and patient, enter them in the medical record and generate a bill (and I may have skipped a few steps).

Yet it’s not always easy to make this case to patients when what they’re really looking for is that elusive creature known as peace of mind.

Dr. Salwitz writes that he’ll be seeing his patient again soon and will try once again to persuade her that the test she’s asking for has no medical benefit for her. “I will try to replace this test crutch with knowledge, reassurance and hope,” he writes. “Maybe it will help us to understand each other a little better.”

This post was originally published on July 3, 2012.

Downsizing all the stuff

If you’ve ever had to downsize your belongings or help a parent or aging relative with downsizing, what you’re about to read will come as no surprise.

Most of us possess an enormous quantity of stuff. Stuff we don’t use and don’t need, yet continue to hang onto for whatever reason.

Although consumer spending looms large in the American economy, there’s been little research on what we actually do when it comes to keeping and disposing of our many possessions, especially as we get past middle age.

A recent study in the Journals of Gerontology: Series B tackles this issue and concludes that although many older adults find comfort and identity in the things they’ve accumulated during their lifetime, the sheer quantity can become burdensome – not just to the owner but also to relatives who might eventually be forced to deal with all the excess belongings.

The study also turned up a finding that could make many of us think twice before procrastinating on cleaning out the attic or the closets: Past the age of 50, we become less and less likely to dispose of our possessions.

The researchers drew their findings from the 2010 Health and Retirement Study, a national survey of Americans 50 and older that for the first time included four questions about what people did with their belongings. What they found: 30 percent of respondents over age 70 had done nothing in the previous year to clean out, donate or dispose of excess belongings, and 80 percent had sold nothing. Even after controlling for factors such as widowhood or a recent move from a house into a small apartment, activity to reduce what the authors call “the material convoy” consistently declined with advancing age.

Yet more than half of the survey respondents in all age categories felt they had too much stuff. Among those in their 50s, 56 percent believed they had more possessions than they needed; for those in their 70s, it was 62 percent.

(To be clear, we’re talking here about the ordinary lifetime accumulation of possessions, not hoarding.)

One need only look at the proliferation of articles and advice about downsizing to recognize this as a growing social issue. What happens to all the belongings we no longer need or have room for as we age – does most of it end up in landfills? What about older adults who have a large accumulation of stuff they haven’t disposed of – does it become a hindrance to moving into a living situation that might be safer or more manageable? Who makes decisions about the stuff if the owner is unable to do it himself or herself?

Perhaps this is a problem unique to the current older generation, folks who came of age during the Depression and who tend to save and reuse things instead of throwing them away. Then again, the increasing square footage of the average American house (1,740 square feet for the average new single-family home in 1980; 2,392 square feet in 2010) suggests that with more space, younger families may fill it with more belongings than their parents ever had, thus perpetuating the problem when it’s eventually their turn to downsize.

Perhaps declining health and reduced stamina simply make it harder to deal with excess stuff as we get older, regardless of which generation we belong to. There’s an emotional aspect as well to parting with items that we’ve lived with for many years.

How we accumulate and dispose of our possessions might seem like a purely personal issue but bigger issues are at stake, the study’s authors wrote. While we may treasure our belongings, the stuff can also become burdensome, especially as we age and our vulnerability increases. “To the extent that possessions – not just single, cherished items but in their totality – create emotional and environmental drag, individuals will be less adaptive should they need to make changes in place or even change place,” the researchers wrote. “The material convoy is not innocuous, mere stuff; its disposition must be undertaken sooner or later by someone. The realization of this is why the material convoy – personal property – becomes an intergenerational or collective matter.”

Additional reading on downsizing an aging household:

- 20 tips to help you get rid of junk
- Family things: Attending the household disbandment of older adults
- Tips to help you get a grip on downsizing possessions

Learning to give bad news: the patient’s perspective

After Suzanne Leigh’s daughter, Natasha, died from a brain tumor, Leigh wrote an essay about “What I wish I’d said to my child’s doctor.”

One of her pieces of advice was to ban junior doctors from sitting in while sensitive, emotional conversations took place. She wrote, “When emotions are high and we are at our most vulnerable, we don’t want to be politely scrutinized as if we were lab rats – even if it means that those in training might lose out on a one-of-a-kind lesson in patient-doctor communications.”

My last two blog posts were about training doctors to deliver bad news and the benefits (or not) of scripting to help them through a difficult conversation. Where are patients supposed to fit into this? Suzanne raises the human side of the medical teaching process – that patients and families are often drafted into being practice cases for students, many times willingly but sometimes with awkwardness, reluctance or downright resentment.

It’s not an easy balance, though, between respecting the needs of patients and families at a supremely vulnerable time and creating opportunities for students to learn and gain skills. And it’s this balancing act that touched off a contentious online debate after Suzanne’s essay was reposted late last week on Kevin MD.

It started with a medical student who wanted to make it known that although the patient’s and family’s wishes should come first, observation is one of the ways that students learn the difficult art of giving bad news.

It is not about scrutinizing the parents during the conversation; rather it is about seeing our mentors perform an essential skill that we will very shortly have to put into practice on our own. When patients and families are comfortable allowing us in on those conversations, we are immensely grateful, as it is really the only way we can learn an immensely difficult skill.

A blanket ban on junior doctors? One commenter called it “tantamount to saying ‘I want my child to benefit from the superior care that is available in a teaching hospital, but I think an important aspect of that teaching should be abolished, since it made me personally feel uncomfortable.’”

As you might guess, the discussion started to get lively.

Knowing when to refrain from judging and graciously leave the room is a learning experience too, wrote one commenter.

Should the need for training ever outweigh a family’s discomfort with having an audience during a difficult conversation? wondered someone else. Is it fair to lay guilt on people for saying no to the presence of a doctor-in-training in the room?

Another commenter, apparently a doctor, suggested that a family who requests privacy during one of these conversations “probably has a lower level of coping skills.” (What? Maybe privacy is just their preference.)

I’m not sure there can ever be a consensus on the right way to do this, other than that it’s a tough balance between the needs of the individual and the needs of society – in this case, the value of a doctor who has learned to communicate skillfully when delivering bad news.

What’s consistently missing is the voice of patients and families themselves.

Doctors in training may learn from observing more experienced clinicians during a difficult conversation but this is only one side of the transaction. How was it perceived by the patient and family? After all, what seems empathetic to an observer might not meet the patient’s definition of empathetic. On the other hand, there’s no one-size-fits-all approach to the art and skill of delivering bad news, so how do doctors learn to tailor what they say and how they say it without unintentionally giving offense?

Medical students might learn a lot from patients and families about giving bad news – what makes the situation worse, what makes it better. If we need a better balance between real-life training opportunities for students and allowing difficult conversations to belong to the patient and family, why not invite patients into the education process instead of debating whether they’re obligated to take part?