Providers by any other name

Guilty, guilty, guilty.

That was my reaction after reading Dr. Danielle Ofri’s take at the New York Times Well blog on the use of the word “provider” to refer to people in health care.

Dr. Ofri dislikes the term intensely:

Every time I hear it – and it comes only from administrators, never patients – I cringe. To me it always elicits a vision of the hospital staff as working at Burger King, all of us wearing those paper hats as someone barks: “Two burgers, three Cokes, two statins and a colonoscopy on the side.”

“Provider” is a corporate and impersonal way to describe a relationship that, at its heart, is deeply personal, Dr. Ofri writes. “The ‘consumers’ who fall ill are human beings, and the ‘providers’ to whom they turn for care are human beings also. The ‘transactions’ between them are so much more than packets of ‘health care’.”

I suppose this is as good a time as any to confess I’ve used the p-word – used it more than once, in fact.

It’s not that I don’t know any better. I’ve been aware for quite some time that many people in health care don’t really like to be called providers. I deploy the word rather gingerly, and have increasingly taken to calling them something more specific – doctors, for instance, or nurses, or clinicians.

Frankly, it’s hard to know what the right word should be. Once upon a time it was pretty safe to use the term “doctor” to refer to those who provide (sorry!) care. This is no longer a given; many of those engaged in patient care these days are nurses, nurse practitioners, physician assistants, medical assistants and… you get the picture. (For that matter, “doctor” and “physician” aren’t interchangeable either, but I digress.)

“Clinician” seems a little closer to the mark, and it has the merit of distinguishing those in health care who are engaged in the actual care of patients from those who aren’t. But it’s a catch-all term, not particularly descriptive and somewhat lacking in personal warmth and fuzziness.

Similar minefields lurk in terms such as “health care professional,” “mid-level professional,” “allied health professional,” “health care worker” and the like. They’re lengthy, cumbersome and stilted. It can be inaccurate to call everyone who works in health care a “professional,” because many of those who toil behind the scenes in disciplines such as medical records management and central distribution aren’t professionally licensed, even though their work is just as essential. On a more politically correct note, many allied health professionals don’t like being called “mid-level” because of the hierarchy this implies.

Don’t even get me started on the varying levels of “health care organizations.” Hospitals, medical clinics, outpatient surgery centers, community health centers, public health agencies – each occupies a special place in the ecosystem and can’t easily be lumped into generic vagueness.

And if you really want to get technical, shouldn’t we consider health insurance, pharmaceutical companies and medical device manufacturers as part of the “health care system” as well?

What’s a writer supposed to do? It’s a constant challenge: trying to be accurate and descriptive yet not allowing oneself to become bogged down in multiple syllables.

Language does matter. There’s a case to be made for the importance of being aware of cost, quality and medical necessity – for behaving as a consumer of health care, not solely as a patient. But I view myself first and foremost as a patient, and I’m not sure I like it when patients are urged to “shop around” for good care as if they were kicking the tires at a used-car lot. There’s a relationship aspect to health care that goes beyond pure consumerism and that needs to be recognized and valued.

If we’re debating the language, perhaps it’s because everyone’s role is shifting to a greater degree than at any other point in history. Patients have more information and are being asked to take more responsibility, as well they should. The people who care for patients are being pulled in more directions than ever before. We don’t know what to call ourselves anymore, and reasonable alternatives don’t seem to have emerged yet.

So here’s the deal: When I use the term “providers,” it isn’t because I think of them as burger-makers at a fast food restaurant. It’s generally because the word is short, to the point and encompasses the range of individuals and organizations I’m writing about. As far as I’m concerned, my doctor and nurse are still my doctor and nurse and I’m still their patient. Sometimes I’m a consumer, but only when consumer behavior is what’s called for.

If anyone has suggestions for new terminology other than “provider,” I’m all ears.

This entry was originally published Dec. 30, 2011.

What price for peace of mind?

The patient is eight years out from treatment for breast cancer and is doing well, but four times a year she insists on a blood test to check her inflammation levels.

The test is pointless and has nothing to do with cancer or its possible recurrence. But what happens when the patient makes the request to the doctor?

“To my shame, I must admit, I order it every time,” writes her oncologist, Dr. James Salwitz (if you haven’t yet discovered his excellent blog, Sunrise Rounds, head over there and check it out).

The test may provide temporary reassurance for the patient. At $18, it isn’t expensive, and it’s considerably less harmful or invasive than other tests. But all the same, it’s useless, and it prompts Dr. Salwitz to ask: What can health care do to stem the practice of tests, procedures and other interventions that have no real benefit to the patient?

He writes:

Medicine is full of better-known examples of useless and wasteful testing. PSA and CA125 cancer markers that fail as screening tests. Analysis indicates they cause more harm than benefit. MRIs for muscular back pain, which will go away by itself. Unneeded EKGs, stress tests and cardiac catheterizations, instead of thoughtful conservative medical management. CT scans often take the place of sound clinical analysis and judgment. A 15-year study of 30.9 million radiology imaging exams published recently shows a tripling in the last 15 years.

These unneeded tests do more than waste dollars. If a test is not necessary and has no medical benefit, it can only cause harm. The test itself can cause problems such as excess radiation exposure, allergic reactions and discomfort. In addition, tests find false positive results, which lead to further useless testing or unneeded treatment.

It’s been rather remarkable to witness the pulling-away from excess tests and treatment that has taken place in American medicine in the past few years. There’s a growing recognition that there’s such a thing as too much intervention and that intervention is not automatically good for patients.

Moreover, we’re becoming more willing to talk openly about the tradeoff of benefit vs. harm. Not all that long ago, it was considered heresy to even suggest that women in their 40s didn’t absolutely need a mammogram every single year. The thinking on this is beginning to change as the evidence accumulates of mammography’s limited benefits for younger, low-risk women, and it’s showing up in patient decisions; a recent study by the Mayo Clinic found a 6 percent decline last year in the number of 40-something women who opted to have a mammogram.

It’s easy to oversimplify this issue, and indeed, it’s not always as straightforward as it seems. Interventions sometimes don’t look useless until they’re viewed afterwards through the retrospectoscope. At the time, in the heat of battle, they may seem necessary and justified. Nor do patients fit neatly into little diagnostic boxes; what may be unnecessary for one might make sense for someone else.

There’s a larger question, though, that we sometimes fail to ask: If something is medically useless, does it still have value if it gives the patient (and perhaps the clinician as well) some peace of mind?

To many patients, this is no small thing. It’s an emotional need that’s not easily met by science-based rational discussion about the studies and the actual evidence for the pros and cons. Unfortunately it’s also often abetted by consumer marketing that plays up the peace-of-mind aspect of certain tests while remaining silent about the limited benefit, the possible risk and the clinical complexity that may be part of the larger picture. The message can be sent that it’s OK as long as it provides the patient with some reassurance, and who’s to say this is entirely wrong?

Should clinicians be tougher about just saying no, then? It may be easier to give in, but does this constitute quality care? An interesting ethics case by the Virtual Mentor program of the American Medical Association explores some of the issues that make this so challenging: the responsibility of the physician to make recommendations and decisions that are clinically appropriate, the importance of respecting the patient’s autonomy and values, the balance between patient preferences and wise use of limited health care resources.

You could argue that patients should be allowed to obtain unnecessary care as long as they pay for it themselves, but does this really address the larger question of resources? Regardless of who’s paying the bill, unnecessary care comes with a cost. The blood test requested by Dr. Salwitz’s patient, for instance, likely would involve the use of equipment such as a disposable needle and lab supplies, staff time to draw the blood, analyze the sample, record the results, report them to the doctor and patient, enter them in the medical record and generate a bill (and I may have skipped a few steps).

Yet it’s not always easy to make this case to patients when what they’re really looking for is that elusive creature known as peace of mind.

Dr. Salwitz writes that he’ll be seeing his patient again soon and will try once again to persuade her that the test she’s asking for has no medical benefit for her. “I will try to replace this test crutch with knowledge, reassurance and hope,” he writes. “Maybe it will help us to understand each other a little better.”

This post was originally published on July 3, 2012.

Downsizing all the stuff

If you’ve ever had to downsize your belongings or help a parent or aging relative with downsizing, what you’re about to read will come as no surprise.

Most of us possess an enormous quantity of stuff. Stuff we don’t use and don’t need, yet continue to hang onto for whatever reason.

Although consumer spending looms large in the American economy, there’s been little research on what we actually do when it comes to keeping and disposing of our many possessions, especially as we get past middle age.

A recent study in the Journals of Gerontology: Series B tackles this issue and concludes that although many older adults find comfort and identity in the things they’ve accumulated during their lifetime, the sheer quantity can become burdensome – not just to the owner but also to relatives who might eventually be forced to deal with all the excess belongings.

The study also turned up a finding that could make many of us think twice before procrastinating on cleaning out the attic or the closets: Past the age of 50, we become less and less likely to dispose of our possessions.

The researchers drew their findings from the 2010 wave of the Health and Retirement Study, a long-running survey of Americans 50 and older that for the first time included four questions about what people did with their belongings. What they found: 30 percent of respondents over age 70 had done nothing in the previous year to clean out, donate or dispose of excess belongings, and 80 percent had sold nothing. Even after controlling for various factors such as widowhood or a recent move from a house into a small apartment, activity to reduce what the authors call “the material convoy” consistently declined with advancing age.

Yet more than half of the survey respondents in all age categories felt they had too much stuff. Among those in their 50s, 56 percent believed they had more possessions than they needed; for those in their 70s, it was 62 percent.

(To be clear, we’re talking here about the ordinary lifetime accumulation of possessions, not hoarding.)

One need only look at the proliferation of articles and advice about downsizing to recognize this as a growing social issue. What happens to all the belongings we no longer need or have room for as we age – does most of it end up in landfills? What about older adults who have a large accumulation of stuff they haven’t disposed of – does it become a hindrance to moving into a living situation that might be safer or more manageable? Who makes decisions about the stuff if the owner is unable to do it himself or herself?

Perhaps this is a problem unique to the current older generation, folks who came of age during the Depression and who tend to save and reuse things instead of throwing them away. Then again, the increasing square footage of the average American house (1,740 square feet for the average new single-family home in 1980; 2,392 square feet in 2010) suggests that with more space, younger families may fill it with more belongings than their parents ever had, thus perpetuating the problem when it’s eventually their turn to downsize.

Perhaps declining health and reduced stamina simply make it harder to deal with excess stuff as we get older, regardless of which generation we belong to. There’s an emotional aspect as well to parting with items that we’ve lived with for many years.

How we accumulate and dispose of our possessions might seem like a purely personal matter, but bigger issues are at stake, the study’s authors wrote. While we may treasure our belongings, the stuff can also become burdensome, especially as we age and our vulnerability increases. “To the extent that possessions – not just single, cherished items but in their totality – create emotional and environmental drag, individuals will be less adaptive should they need to make changes in place or even change place,” the researchers wrote. “The material convoy is not innocuous, mere stuff; its disposition must be undertaken sooner or later by someone. The realization of this is why the material convoy – personal property – becomes an intergenerational or collective matter.”

Additional reading on downsizing an aging household:

20 tips to help you get rid of junk

Family things: Attending the household disbandment of older adults

Tips to help you get a grip on downsizing possessions

Learning to give bad news: the patient’s perspective

After Suzanne Leigh’s daughter, Natasha, died from a brain tumor, Leigh wrote an essay about “What I wish I’d said to my child’s doctor.”

One of her pieces of advice was to ban junior doctors from sitting in while sensitive, emotional conversations took place. She wrote, “When emotions are high and we are at our most vulnerable, we don’t want to be politely scrutinized as if we were lab rats – even if it means that those in training might lose out on a one-of-a-kind lesson in patient-doctor communications.”

My last two blog posts were about training doctors to deliver bad news and the benefits (or not) of scripting to help them through a difficult conversation. Where are patients supposed to fit into this? Suzanne raises the human side of the medical teaching process – that patients and families are often drafted into being practice cases for students, many times willingly but sometimes with awkwardness, reluctance or downright resentment.

It’s not an easy balance, though, between respecting the needs of patients and families at a supremely vulnerable time and creating opportunities for students to learn and gain skills. And it’s this balancing act that touched off a contentious online debate after Suzanne’s essay was reposted late last week on KevinMD.

It started with a medical student who wanted to make it known that although the patient’s and family’s wishes should come first, observation is one of the ways that students learn the difficult art of giving bad news.

It is not about scrutinizing the parents during the conversation; rather it is about seeing our mentors perform an essential skill that we will very shortly have to put into practice on our own. When patients and families are comfortable allowing us in on those conversations, we are immensely grateful, as it is really the only way we can learn an immensely difficult skill.

A blanket ban on junior doctors? One commenter called it “tantamount to saying ‘I want my child to benefit from the superior care that is available in a teaching hospital, but I think an important aspect of that teaching should be abolished, since it made me personally feel uncomfortable.’”

As you might guess, the discussion started to get lively.

Knowing when to refrain from judging and graciously leave the room is a learning experience too, wrote one commenter.

Should the need for training ever outweigh a family’s discomfort with having an audience during a difficult conversation? wondered someone else. Is it fair to lay guilt on people for saying no to the presence of a doctor-in-training in the room?

Another commenter, apparently a doctor, suggested that a family who requests privacy during one of these conversations “probably has a lower level of coping skills.” (What? Maybe privacy is just their preference.)

I’m not sure there ever can be a consensus on the right way to do this, other than that it’s a tough balance between the needs of the individual and the needs of society, in this case the value of a doctor who has learned to communicate skillfully when bad news is involved.

What’s consistently missing is the voice of patients and families themselves.

Doctors in training may learn from observing more experienced clinicians during a difficult conversation, but this is only one side of the transaction. How was it perceived by the patient and family? After all, what seems empathetic to an observer might not meet the patient’s definition of empathetic. On the other hand, there’s no one-size-fits-all approach to the art and skill of delivering bad news, so how do doctors learn to tailor what they say and how they say it without unintentionally giving offense?

Medical students might learn a lot from patients and families about giving bad news – what makes the situation worse, what makes it better. If we need a better balance between real-life training opportunities for students and allowing difficult conversations to belong to the patient and family, why not invite patients into the education process instead of debating whether they’re obligated to take part?

Emotions by rote: when empathy is scripted

Something bothered me last week while blogging about teaching medical students to deliver bad news to patients.

Although training and practice can help develop and reinforce effective, empathetic communication skills in medical students as well as doctors, I kept having the nagging thought that this wasn’t the whole story. Is delivering bad news merely about following the six steps of the SPIKES protocol? Would I want a conversation involving bad news to be packaged as a carefully learned formula? Don’t patients sense the difference between rote platitudes and genuine caring?

Then… eureka! An article appearing in the New York Times the same day captured it exactly.

Dr. Timothy Gilligan, co-director of the Center for Excellence in Healthcare Communication, and Dr. Mikkael Sekeres, director of the leukemia program, both at the Cleveland Clinic, nailed it: Doctor-patient communication isn’t something you can readily force or script or reduce to “10 Easy Techniques For Demonstrating Empathy.” Although good communication is an essential skill for any health care practitioner, it should be real, not faked.

From their article:

No communications course will magically transform lifelong introverts to hand-holders and huggers. At the same time, we must ensure that we are not converting people who genuinely care about their patients into people who only sound as if they care. Having physicians sound like customer service representatives is not the goal.

For those doctors who are emotionally challenged, communications courses can provide the basics of relating to other human beings in ways that, at the very least, won’t be offensive. But for the rest of us, we should take care to ensure the techniques and words we learn in such courses don’t end up creating a barrier to authentic human contact that, like the white coats we wear, make it even harder to truly touch another person.

As one of those “lifelong introverts,” I take exception to the implication that introverted clinicians need fixing so they can give more hugs. Introversion and good communication are not mutually exclusive, and introverted doctors may in fact be better than many extroverts at listening to their patients.

But Drs. Gilligan and Sekeres raise important questions about the degree to which empathy can be taught and the unintended consequences of trying to program clinicians into better communicators.

Reactions to their article were almost as fascinating as the article itself. A sample:

- “Patients usually come to us because they hurt. They’re suffering, and most of their suffering isn’t from tumors or low platelet counts; it’s from their own normal emotions in response to being sick: depression, anger, confusion, loneliness, anxiety, and so on. We might be able to fix their lesions, but we can address their emotions only by listening to them, and that’s a more sublime skill than repeating, ‘Go on’ or ‘How does that make you feel?’”

- “Providing education and skill sets to improve communication will only go so far. Addressing issues around emotional intelligence is the core of the problem.”

- “As a physician, I find it amazing that anybody could write about problematic interactions between doctors and patients without noting the 2,000 pound gorilla in the room: what our healthcare system pays doctors for. The less time doctors spend with each patient, the more patients they are able to see and the more money they can make. If you pay doctors the same whether they talk to patients for five minutes or 45 minutes, guess which they are most likely to do.”

- “Most of these technically brilliant but interpersonally stunted doctors are either married, have friends, and successful careers, meaning that they must have some social skills. I’m not sure the issue has anything to do with being introverted or geeky. I think it might have a lot to do with arrogance and not seeing the patient as a person.”

I’ve heard some health care workers say scripts help them stay focused, especially in difficult situations. Others find scripts restrictive, even a bit silly, and would rather allow the conversation to flow naturally.

What do you suppose patients and families prefer – empathy that’s clumsily expressed but sincere, or all the right words with no feeling behind them?

Learning to give bad news

Bad news awaited the patient as she recovered from surgery… but the first person to enter her hospital room while she anxiously waited for the pathology report wasn’t the doctor, it was a medical student.

Could this situation have been handled better? Kyle Amber, a fourth-year student at the University of Miami Miller School of Medicine in Miami, Fla., writes about the experience in a recent issue of Academic Medicine, the journal of the Association of American Medical Colleges:

“Is it cancer?” she asked, as if completely unfazed by the three hours she waited to be seen at this busy clinic that treats patients with poor access to care. All I could reply was, “The resident will come into the room in a few minutes to discuss the findings with you.” Let down, she nodded and told me she understood. Even as I attempted to comfort my patient, I knew it would take quite some time before a resident would be able to see her in this busy teaching clinic. An hour after I had left the room, the resident walked into the room, introduced himself to the patient, and told her that the pathology demonstrated a widely disseminated Stage IV cancer.

If the patient is about to receive bad news, the first person in the room should not be a medical student, he concludes. “This is no way to deliver bad news.”

Breaking bad news to patients and families is one of the most difficult tasks for a doctor to undertake. It’s stressful, and few physicians ever look forward to it or truly become comfortable with it. Yet at some point during their training, they have to confront it.

How do you teach this skill to medical students? There’s a fair amount of debate in medical academia over whether empathy can be taught. Students can learn and practice communication skills, however, that help increase their comfort level as well as their competency. The evidence also suggests that training can help instill awareness and behaviors that make it less likely for bad news to be delivered in ways that are insensitive or devoid of compassion.

At one time, doctors received little if any formal training in how to communicate bad news. They mostly learned by watching more seasoned mentors carry out this task. Hopefully they saw good examples, but maybe they didn’t. These days, medical schools are recognizing that it’s an important skill for students to learn, and most training programs have incorporated it into their curricula.

But there seems to be an unwritten rule that medical students should never, ever be the bearer of bad news in a real-life clinical setting. That responsibility lies with more senior colleagues who are directly involved in caring for the patient.

It’s tough, then, for a medical student to be placed in the position of being with a patient who has been waiting for hours to hear the outcome of her surgery, yet be unable to do anything about that patient’s most immediate need – the need for information. Sometimes a student’s most memorable lesson is about how to handle things differently next time.

Beyond the red dress

Think “pink” in relation to health issues, and breast cancer immediately springs to mind. Think “red” and… well, many people probably will recognize it as the symbol for women’s heart health, especially if it happens to be February, but plenty of folks might come up blank.

Are we so conditioned to fear breast cancer that we can’t fully recognize or appreciate the risk that heart disease poses to women’s health? It’s an intriguing question posed by cardiologist Dr. Lisa Rosenbaum in the New England Journal of Medicine last week.

In the past decade the American Heart Association’s Go Red for Women campaign and similar educational efforts have greatly increased women’s knowledge about heart disease, Dr. Rosenbaum writes. More recently, though, the gains have mostly leveled off. Gaps in knowledge also persist among minority women, who often are at higher risk of heart disease.

Dr. Rosenbaum wonders: What’s needed here – more facts or a greater effort to address women’s emotions?

There seems to be something visceral about breast cancer that taps into women’s fears in ways that don’t happen with heart disease, Dr. Rosenbaum writes. She speculates that it may be connected with female identity and thus resonates with women very deeply. Heart disease, she suggests, may be harder for women to engage with on that level.

Dr. Rosenbaum writes:

Have pink ribbons and Races for the Cure so permeated our culture that the resulting female solidarity lends mammography a sacred status? Is the issue that breast cancer attacks a body part that is so fundamental to female identity that, to be a woman, one must join the war on this disease? In an era when women’s reproductive rights remain under assault, is reduced screening inevitably viewed as an attempt to take something away? Or is the issue one of a tragic story we have all heard — a young woman’s life destroyed, the children who watch her suffer and are then left behind?

On the other hand, what is it about being at risk for heart disease that is emotionally dissonant for women? Might we view heart disease as the consequence of having done something bad, whereas to get breast cancer is to have something bad happen to you? In a culture obsessed with the “natural,” are risk-reducing medications anathema to our vision of healthy living? Or are we held up by our ideal of beauty? We can each summon the images of beautiful young women with breast cancer. Where are all the beautiful women with heart disease?

There’s certainly food for thought here. One way of encouraging women to rethink their perception of heart disease risk might be more emphasis on the message that heart disease isn’t always caused by unhealthy lifestyles, Dr. Rosenbaum suggests. Maybe women need to be reminded that a “natural” approach isn’t necessarily better than taking medication to reduce their cholesterol or blood pressure.

And maybe, she suggests, “we can try to move beyond disease wars toward the creation of communities of women in which stories about living with heart disease are as celebrated as stories of surviving breast cancer.”

Here’s a place to start: a collection of survivor stories from WomenHeart, a national coalition for women with heart disease. And from the excellent Heart Sisters blog by heart attack survivor Carolyn Thomas, here are several first-person stories from women who describe openly and frankly what their heart attack was like.

Competing our way to better health

Reactions to the finale of the latest season of “The Biggest Loser” seem to fall into two general categories:

1) Rachel Frederickson, who shed more than half of her body weight to win the competition and take home $250,000, looks fabulous and deserved the prize for her hard work.

2) Rachel lost too much weight too rapidly and is a prime example of everything that’s toxic about how Americans view food behavior and entertainment.

Amid all the chatter, not to mention the snark, that has swirled around the show for the past several days, there’s one point that seems worthy of extended, thoughtful public discussion: What are the ethics of creating competition around people’s health?

Treating individual health as fodder for a contest is by no means confined to “The Biggest Loser.” Corporate and community initiatives abound that pit individuals and teams against each other to see who can lose the most weight, walk the most steps, exercise the most hours and so on. The popularity of “The Biggest Loser” has spawned imitation contests in workplaces, neighborhoods and even churches.

What’s the harm, as long as it helps motivate people to change their behavior? Well, not so fast.

The competitive approach may be successful for some, and it may inspire many on-the-fencers to become more engaged. For a story last year on corporate wellness trends, the Washington Post talked to an office worker from Maryland who joined a corporate fitness challenge and lost 42 pounds. “There’s sort of like a peer pressure and a competitiveness to it,” Sal Alvarez told the Post, adding that he found it very motivating.

Not everyone responds well to competition, however, and some people may even find it alienating, especially if they feel coerced to participate.

Although there are plenty of positive stories about how wellness challenges have helped people change their lives, there’s surprisingly little solid evidence of competition’s overall effectiveness as a health improvement strategy. More importantly, it’s not clear whether competition leads to behavior change that’s sustained after the contest is over.

Then there’s the matter of prizes. Are people motivated by the prospect of T-shirts, water bottles and gift cards? Do they need carrots to entice them to take action or is it better to emphasize good health as the reward? What if the carrot is really, really significant, such as the quarter of a million dollars dangled before the Biggest Loser contestants?

With that amount of money at stake, along with a chance to be on national TV and impress their family and friends, it shouldn’t be surprising that the participants in the show would become uber-competitive, possibly to the point of going overboard.

In a recent interview with People magazine, Rachel herself conceded that she “maybe” was “a little too enthusiastic in my training to get to the finale.”

Maybe competition is OK, as long as it helps people accomplish their health goals. Then again, maybe it exploits people’s vulnerabilities and propels them into doing something perhaps the wrong way or for the wrong reasons. Does the end justify the means? That’s the moral question that hasn’t really been answered.

The ‘shiver’ diet? Don’t we wish

We hardy inhabitants of the Snow Belt have joked for years about the calories we (theoretically) burn from shivering our way through winter.

But wait – could there be scientific evidence supporting shivering as a hot new form of winter exercise?

Apparently so, according to the New York Times, which reported today on a study suggesting that shivering boosts the metabolism in the same way that exercise does.

Study participants were brought into a lab on three different occasions. During the first two sessions, they were told to exercise on a stationary bike at an indoor temperature of 65 degrees, and samples of their blood, fat and skin cells were obtained. For the last session, the participants were instructed to lie down, lightly clad, for half an hour while the indoor temperature was reduced from 75 degrees to 53. Their skin and muscle reactions were measured and samples were taken again to see what happened.

Lo and behold, the study subjects produced the same amount of irisin, a hormone involved in the conversion of white fat cells to more desirable brown fat, from shivering as they did from exercise.

From the article:

What seemed to matter, the researchers concluded, was not the exertion of the exercise, but the contraction of various muscles, which occurred during shivering as well as cycling.

In view of the fact that the temperature this morning on my way to work was 0 and most of Minnesota is under its umpteenth wind chill advisory of the season, readers will have to excuse me for not climbing enthusiastically onto the shivering-as-a-form-of-exercise bandwagon.

A variety of studies have shown that we do indeed burn more calories when we’re trying to stay warm. But whether this is an appropriate substitute for exercise is debatable, especially since the study described by the New York Times involved only 10 people – hardly enough to build a strong scientific case. And the article does in fact offer an important caveat: There seems to be no evidence that working out in the cold revs up the production of irisin any more than exercising in warmer temperatures does.

In a sentence that could only have come from someone blithely unaware of the dangers of frostbite, the author concludes that if you can’t get to the gym, “at least consider lingering outside at the bus stop and shivering.”

No, thanks. I’ll take indoor exercise over lingering at the curb with a minus 25 breeze in my face any day.


Dry skin blues

There’s a bottle of fish oil capsules sitting on our kitchen counter. From the veterinary clinic. For the cat, who’s been experiencing the same misery as a lot of humans during this long winter: dry, itchy, flaky skin.

Dry skin doesn’t get much respect. After all, it’s not as serious as, say, heart disease. But as anyone who’s ever had it can attest, it’s certainly uncomfortable to live with.

Dry skin (the medical name for it is xerosis) can have many causes, some more difficult to address than others. Most of the time and for most people, though, the cause is environmental – and if it happens to be wintertime, you can usually blame it on constant exposure to dry air, both indoors and out.

The consensus seems to be that we function best with an indoor relative humidity of 40 to 60 percent. But during the winter, indoor humidity is usually much lower – around 20 percent in most buildings (and in fact it shouldn’t be much higher than this if you want to avoid problems with condensation). Unfortunately, the lower the outdoor temperature drops, the lower the relative humidity inside your home, workplace or school and the greater the chance that your hide will feel dry and uncomfortable.

Signs of dry skin can range from tightening, roughness and flakiness to fine lines and cracks. In more severe cases, deep painful fissures can develop that may become infected if bacteria are introduced – no small matter for health care workers and others who deal with chronically dry hands from washing them many, many times in a single day.

What to do? Here’s some advice from the Mayo Clinic:

- Choose an emollient-rich moisturizer and apply it several times a day. Products with petroleum jelly also can help but they tend to be greasy so you might want to limit them to nighttime use.

- Avoid harsh soaps. Cleansing creams, bath gels and soaps containing added moisture or fat are better choices.

- A long hot bath or shower may sound tempting after being outdoors in subzero weather but beware – hot water can strip valuable moisture from your skin. Stick to warm water and limit your time in the shower. After bathing, lightly pat your skin dry, then immediately apply a moisturizer; doing so will help trap water in the surface cells of your skin.

- Use a home humidifier during the dry winter months (but be sure to follow the manufacturer’s instructions for proper cleaning and maintenance).

So what about fish oil? Although consumption of omega-3 fatty acids has some benefit for heart health, there’s little proof it does much for dry skin, at least in the human species. There’s one small study that found fish oil supplements helped reduce itching among patients undergoing hemodialysis. Another rather interesting study on orange roughy oil found that it lessened dry skin – but again, this was a very small study and the fish oil was applied directly to the skin, not consumed in a capsule.

Thankfully the cat has stopped biting at herself and pulling out tufts of fur to try to relieve the itching, so we’re taking that as a sign the omega-3 supplement is helping. But we’ll let her keep the fish oil all to herself.