When words matter, and why

Call it a teachable moment. While training at Memorial Sloan Kettering Cancer Center in New York City, Dr. Don Dizon was asked to evaluate a woman in her 60s with recurrent ovarian cancer and then present her case to a physician mentor.

He summed up the situation for his colleague: “Well, since she failed this regimen, I think she needs to start on a new salvage treatment. What about a combination?”

In a recent blog entry at ASCO Connection, Dr. Dizon describes what happened next:

He looked at me kindly but with a degree of exasperation.

“Don – if there’s one thing I’ve learned, it’s that people do not fail chemotherapy. The chemotherapy didn’t work, but no one failed; she didn’t and I didn’t. And, we don’t salvage people. Salvage is what you do with scrap metal and trash.”

The lesson stuck. Ever since, he has been “sensitive to words and phrases, particularly when they are used in reference to patients, treatment, and circumstances surrounding recurrent disease,” Dr. Dizon writes. “I cringe when I hear someone referred for ‘salvage treatment’ or how it’s ‘too bad she failed therapy.’”

Do words matter? To patients and families, they often do because of the attitudes and expectations they convey, consciously or not.

Take the battleground metaphors so frequently used when speaking about cancer. British radio host Jenni Murray recently ranted about how “infuriating” it was to see and hear, over and over, the expression “lost his battle with cancer” in news reports of the death of Robin Gibb of the Bee Gees. She writes:

I’m at a loss to know why, despite a number of us who’ve been through the dread diagnosis and subsequent treatment pointing out that such pugilistic terminology is entirely inappropriate, we continue to be given the impression that death from cancer is somehow an indication of failure to have the moral fibre to fight and defeat it.

Not everyone who read the piece agreed with her, of course. “Self-indulgent, politically correct survivor’s guilt” was the judgment from one commenter.

Indeed, it’s hard to know at what point the line might be crossed. What’s offensive or even hurtful to one person can be a non-issue for someone else. But the fact remains that people do pay attention to language and often feel alienated or blamed by the words that are used.

Does it matter more when the words are those used by clinicians talking about their patients? Perhaps it does.

Anyone who’s ever been a patient has probably been taken a little aback when terms such as “problem list” and “chief complaint” are used to describe their visit. Like systems anywhere, the world of clinical care has its own language but the vocabulary is not necessarily understood the same way by patients and families. Take the phrase “comfort measures only,” for example. Those who work with dying patients know what this means, but what about patients and families? Do they focus on the “comfort” aspect or do they mostly hear the word “only” and feel the medical team has given up or abandoned them? When someone has “failed chemotherapy,” does this objectify or subtly shift the blame to the patient and allow the oncology team to distance themselves?

Dr. Dizon says his goal in writing about the language of medicine, especially in cancer care, is to encourage his colleagues to become more sensitive to the words they use and how they might be perceived by patients.

“Ultimately, our words should provide hope, comfort, and honesty,” he writes. “When they don’t, each of us has a personal responsibility to make things right.”

The Hatfield-McCoy feud: What made them do it?

I was glued to the History Channel last night, watching the first two hours of the “Hatfields and McCoys” original miniseries.

It’s a grim story, based on two actual clans in 1800s Appalachia who collided over economic rivalry, social change and plain old cussedness. Last night we saw the beginnings of the feud; look for the body count to pile up even more in the next two installments tonight and Thursday.

Where does that kind of rage come from? Did the fighting represent how most mountain families in the post-Civil War era settled their differences, or were the Hatfields and McCoys a departure from the norm?

Plenty of scholars have asked these same questions.

One of the more intriguing possibilities has emerged only in recent years: The McCoy family carries a rare genetic disorder that causes adrenal tumors, which release adrenaline and other compounds that might predispose someone to anger, outbursts and perhaps violence. Left untreated, the disorder can be fatal.

Several modern-day descendants of the McCoys have the disorder, known as von Hippel-Lindau disease, lending credence to this theory. Vanderbilt Magazine published an article about it in 2007 after physicians at Vanderbilt University Medical Center treated an 11-year-old girl descended from the McCoys.

Dr. Revi Mathew, associate professor of pediatrics at Vanderbilt University Medical Center, described the symptoms: “It does produce hypertension, headache and sweating intermittently depending on when the surge of these compounds occurs in the bloodstream. I suppose these compounds could possibly make somebody very angry and upset for no good reason.”

The Coal Valley News of Madison, W. Va., explored the theory further in an interview with Dr. Coleman Hatfield of Stollings, W. Va., a great-grandson of patriarch William Anderson “Devil Anse” Hatfield and the family’s historian.

Dr. Hatfield told the Coal Valley News that he had long been puzzled about the feud. “I always thought it odd how Ran’l McCoy could so easily go into a rage over seemingly inconsequential incidents. Perhaps his temper concerning ‘that damnable pig’ could better be explained partly due to a disorder or disease. His volatile mannerisms, and his inability to let go of his anger, didn’t always seem rational or reasonable.”

Genetic researchers apparently have known for years about the McCoy family’s susceptibility to von Hippel-Lindau disease; one researcher reportedly traced it through four generations of the clan. It wasn’t until 2007, however, that the information was made public, mostly to alert other McCoy relatives to their risk.

Whether genes can be blamed for a feud that lasted nearly three decades (1863-1891) and claimed 13 lives is highly debatable, of course. The Hatfields participated in the violence too, and there’s no evidence of von Hippel-Lindau disease or any similar disorder in their family tree.

What about post-traumatic stress among the clan’s two leaders, Devil Anse Hatfield and Randall McCoy, who were both Civil War veterans? Kevin Costner, who plays Anse Hatfield and also co-produced the miniseries, offers this as one of the possibilities to help explain their behavior. “I think both men suffered from post-traumatic stress syndrome,” he told the Fresno Bee last week. “Both guys came back to their families with millions of images in their heads.”

The role of moonshine in the region’s culture, its availability and the potential for alcohol abuse probably didn’t help either.

Altina Waller, a historian and professor at the University of Connecticut who spent 10 years researching the Hatfield-McCoy feud, told the Wall Street Journal that the real force underlying the conflict was likely a complicated brew of economics, social change, industrialization and politics. Americans back then may not have known about the economic changes taking place in the valleys of eastern Kentucky and West Virginia in the wake of the Civil War, but their imaginations were captured because they “saw in the feud their own anxieties about family cohesion and family violence,” Waller explained.

Then as now, the roots of violence and conflict seem to be a tangle of physical, mental and social factors that aren’t easy to pin down.

Photo: Kevin Costner as William Anderson “Devil Anse” Hatfield, courtesy of History Channel.

Young adults in the emergency room

Who goes to the emergency room – and why?

A new report from the U.S. Centers for Disease Control and Prevention sheds some light on emergency room use last year among a subset of folks not typically associated with needing much emergency care – young adults ages 19 to 25.

It’s a demographic worth studying for two reasons: First, these individuals are often working entry-level or temporary jobs or attending school and are vulnerable to being uninsured. And second, the report provides some idea of how young adults are faring under the Affordable Care Act, which allows them to remain covered by their parents’ health insurance until age 26.

The statistics, collected from January through September of last year via the National Health Interview Survey, don’t contain any big surprises. Young adults who were poor were more likely to lack a usual place for receiving health care and more likely to have visited the emergency room than those who weren’t poor. Young adults with public health coverage also were more likely to have gone to an emergency room than their counterparts who had private insurance or were uninsured.

The report confirms something else that’s echoed in other recent studies about health care utilization: The young adults who were surveyed were much more likely to delay needed care if they were uninsured. Nearly one in three who were uninsured said they had skipped seeing a doctor in the past year because of the cost. For those with public coverage, it was 10.1 percent; for the privately insured, it was 7.6 percent.

In fact, the statistics more or less mirror what’s happening with the rest of the adult population under age 65. Contrary to popular belief, most studies indicate that emergency rooms are not being unduly clogged with the uninsured. Indeed, this group seems to be the least likely to visit the ER except as a last resort – probably because they fear being saddled with a large bill they won’t be able to pay.

Regardless of age, the highest ER utilization tends to be among those on Medicaid. Various studies have pointed to a number of reasons: These folks might have more difficulty finding a primary care doctor and more difficulty making a timely appointment, forcing them to turn to ER care instead. Problems with transportation and lack of medical clinics close to where they live have also been identified as barriers.

The persistence of these patterns, even among a fairly narrow subset of young adults who belong to one of the healthiest age groups, is troubling. It suggests deeper issues involving socioeconomics, physician supply and demand, and overall access – issues that can’t be cured simply by providing more young adults with health insurance.

Eating better, for less money

Does it really cost more to buy healthful foods? Maybe not.

According to a new study by the U.S. Department of Agriculture, most fruits, vegetables, grains and other foods we think of as healthful are actually cheaper per portion than foods with higher amounts of sugar, salt and fat.

The report contradicts a rather long-standing belief among many folks that it’s more expensive to eat well – a belief bolstered by a number of previous studies that have found higher costs for healthful food choices.

So who’s right?

The answer seems to lie in how you calculate it. Most studies that examine food costs have used a standard metric: the cost per calorie. The USDA researchers decided to look at this a couple of different ways. They estimated the cost for each of more than 4,000 food items – 4,439, to be exact – and then crunched the numbers to come up with the price per calorie, price per edible weight and price per average portion consumed. They also calculated the cost of meeting the government’s daily ChooseMyPlate recommendations.

Here’s what they found:

- When the price is measured by the calorie, lower-calorie healthful foods do cost more – but this is mainly because of the math. It takes far more broccoli, for instance, to equal 500 calories than does a cinnamon roll.

- When the cost is measured by weight or serving size, healthful foods are generally the better bargain.

Our perception that the U.S. nutritional guidelines are unaffordable for many households may simply be a matter of the metrics, the study’s authors suggest. Price per calorie may be “one way, but not the only way, to measure the cost of a healthy diet,” they wrote.
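The arithmetic behind this flip is simple enough to sketch. Here’s a minimal example in Python – with made-up prices and calorie counts chosen purely for illustration, not the USDA’s actual figures – showing how the same two foods can trade places depending on which denominator you pick:

```python
# Illustrative numbers only - not the USDA's data.
# Each entry: (price per portion in dollars, calories per portion)
foods = {
    "broccoli": (0.60, 30),
    "cinnamon roll": (1.00, 500),
}

for name, (price, calories) in foods.items():
    price_per_calorie = price / calories  # low-calorie foods fare poorly by this metric
    print(f"{name}: ${price_per_calorie:.4f} per calorie, ${price:.2f} per portion")
```

In this toy example, broccoli looks roughly 10 times as expensive as the cinnamon roll per calorie, yet it’s the cheaper item to put on a plate per portion – which is exactly the distinction the USDA researchers are drawing.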

Andrea Carlson, an economist and co-author of the study, told USA Today, “We have all heard that eating a healthy diet is expensive, and people have used that as an excuse for not eating a healthy diet… but healthy foods do not necessarily cost more than less healthy foods.”

She notes, “The price of potato chips is nearly twice as expensive as the price of carrots by portion size.”

There are many other factors, of course, that go into the consumer’s decisions in the grocery aisle. Some fresh fruits and vegetables genuinely are expensive, especially when they’re purchased out of season. For many people, the time and cost of preparation are a major consideration; it’s more work, after all, to rinse, trim, cut up and steam a head of broccoli than it is to pick up the phone and order a pizza. The metrics of cost per calorie vs. cost per serving or cost per edible weight also don’t address food availability, which is an entirely separate – and important – issue.

But there’s clearly more than one way of looking at food costs. Carlson suggests the best way to think of it is by portion size. “How much do you have to pay to put something on your plate?” she asks.

Understanding the patient experience

I have a strong feeling I’ll be glued to Twitter for the next couple of days, following the chatter about the Cleveland Clinic’s third annual patient experience summit (the hashtag is #pesummit, for anyone who’d like to follow along).

This event is a pretty big deal, drawing big-name speakers and a substantial audience. That an organization the size of the Cleveland Clinic would even host something like this says something about health care’s growing realization of the importance of patient-centered care.

By the time the summit ends on Tuesday, attendees will have participated in sessions ranging from the use of social media in health care to the role of empathy in patient-centered care.

So does this all mean that health care is finally getting it when it comes to the patient experience? Well, yes and no.

For the words “patient experience” to enter the language of health care providers is a huge step forward. Likewise the seriousness that many organizations are now bringing to their efforts to be more patient-centered. It’s being talked about in ways we didn’t see even as recently as five years ago.

But we still have a long way to go – or, to be more accurate, some health organizations and some individuals seem to be much farther ahead than others.

In a timely coincidence, the Robert Wood Johnson Foundation, the Harvard School of Public Health and National Public Radio just released a new study on the experience of sick Americans – “sick” being defined as anyone who had a serious illness, medical condition, injury or disability requiring considerable medical care or an overnight hospitalization in the past 12 months.

Most were actually quite satisfied with their care. But fully one-fourth of them said they felt their condition wasn’t being well managed. Three in 10 said their doctor, nurse or other health professional didn’t spend enough time with them, and 25 percent felt they didn’t get enough information about a treatment or prescription drug. They also reported mistakes in their care – wrong diagnoses, wrong treatments and tests being ordered, hospital-acquired infections and more.

If health care is still struggling to deliver safe, appropriate care, how much harder will it be to ensure the care is also patient-centered?

Becoming patient-centered is a culture change, and like all culture changes, it’s much easier said than done.

A microcosm of these perspectives is evident in a post that appeared a few months ago at the Hospital Impact blog, and in the online reaction to it. Guest writer Steve Wilkins describes an unsatisfying experience of having blood drawn at a hospital lab: There’s a long wait, the staff is perfunctory and no one really seems to value the patient’s convenience. “I began to feel like a rat in an experiment in which people were being socialized to accept poor service and like it,” Wilkins wrote.

Most of the response was supportive of the need for health organizations to really mean it when they claim to be patient-centered. “This is a great example of the total misalignment of what companies say and what companies do,” lamented one of the commenters.

But look at the reaction from a couple of other folks. “We don’t do the right thing anymore,” wrote one person. “We’re too busy allowing patients and families [to] make medical decisions they are not qualified to make, almost always with disastrous results. But, it’s all about ‘customer service’ these days!”

Someone else chastised Wilkins for going to a hospital for a routine blood draw: “We really would like everyone to have a truely wonderful Disney World experience while they are being treated for a very serious illness and even for those of you who are using our Emergency room as your primary care office but then there is reality.”

I also note with reluctance that while the roster of speakers for the Cleveland Clinic’s patient experience summit is lengthy and impressive, the majority of them are not… you know, patients, with an ordinary person’s perspective on what it’s like to be a patient.

It all raises the question: Is the patient experience actually being defined by patients themselves, or is it (still) primarily being defined by those in health care? Until health organizations can answer this question honestly, they’re unlikely to make the progress they really want in becoming more patient-centered.

The dreaded ‘D’ label

Raise your hand if you’ve ever been too meek at the doctor’s office because you didn’t want to be labeled as difficult.

Yup, me too.

A recent study found that many patients wanted to be more involved in making treatment-related decisions, but often held back from doing so because they were afraid it would strain the relationship with the doctor or result in poorer care.

It’s an interesting finding, yet it probably comes as no surprise to the many folks who are trying to navigate the shifting tides of the doctor-patient relationship.

The San Francisco Chronicle, which reported earlier this month on the study, spoke to Hugo Campos of Oakland, who has been an outspoken advocate of gaining access to the raw data from the defibrillator implanted in his chest – data that’s currently only available to doctors and device manufacturers.

Campos said the issue is too important for him not to speak up, but acknowledged that “whenever I go in, I’m walking on eggshells. I’m thinking, should I say this? There’s no real partnership when you feel like you have to hold back. But I think health care is not quite ready for a true partnership.”

“We have some work to do to educate patients,” agreed Julia Hallisy, a San Francisco dentist and author of “The Empowered Patient.” “We’ve had this paternalist model where we tell the patient to sit back and let us take care of you. Now all the rules have changed.”

This study wasn’t large. From a demographic standpoint, it may not even have been very representative of the typical American patient.

The researchers recruited 48 patients from three primary care practices in Palo Alto, Calif., to participate in focus groups. Most of the patients were older than 50, educated and affluent – in short, the type of patients who would be expected to be relatively confident about asking questions. But if these folks are hesitant to challenge the doctor, you can’t help wondering about patients who are farther down the socioeconomic ladder.

What were some of the issues that emerged from the focus groups? For one thing, many of the participants said they felt some pressure to play the “good patient” role so they could avoid giving the impression they were challenging the doctor’s expertise. Although they respected the doctor’s training and knowledge, they often saw doctors as authoritarian and felt doctors themselves perpetuated this image.

When they turned to online information about their health, it was frequently because they didn’t want to rock the boat with the physician – or because there simply wasn’t enough time during the appointment to get their questions answered.

The focus group participants reported feeling “vulnerable and dependent on the good will of their physicians,” the study’s authors wrote. “Thus, deference to authority instead of genuine partnership appeared to be the participants’ mode of working.”

Previous research doesn’t seem to have explored this aspect of doctor-patient communication in much depth. There’s a lot of talk these days about patient-centered care and shared decision-making, but how much attention has really been paid to the voices and experiences of patients themselves? More to the point, why not? That patients fear being labeled with a scarlet letter “D” for speaking up – or, worse yet, fear that the label will be recorded in their chart and somehow influence the care they receive – ought to concern the health care community.

If this study is any indication, many patients genuinely do care about the quality of their relationship with the doctor and are hesitant to do anything that might jeopardize it. Doctors, for their part, may not even be aware of their role in creating an environment that discourages open communication.

There doesn’t seem to be an easy path toward a more equal partnership between doctors and patients, but listening to each other and attempting to lessen some of the barriers is as good a place as any to start.

Taking a risk in the tanning booth

Here’s a story that recently made headlines: A mom in New Jersey is facing a criminal charge of child endangerment after allegedly allowing her 5-year-old daughter to use a tanning booth, leading to a rash and sunburn.

Patricia Krentcil, 44, whom some of the news accounts describe as “bronzed” or “deeply tanned,” has denied doing so, and it remains to be seen how the issue plays out in a courtroom. In the meantime, it has triggered a new round of interest and debate over indoor tanning safety.

Is it reckless for a mother to allow a child or teen to use a tanning booth? Is indoor tanning ever safe?

The cumulative weight of evidence says indoor tanning is more damaging to the skin than sunlight. A number of recent studies also have confirmed a higher risk of skin cancer, including melanoma, among tanning-booth users, especially those who began indoor tanning before the age of 35.

The Journal of Clinical Oncology recently focused on the issue, summing up much of the research and its implications for public health. The journal also published a study last month that tracked nearly 73,500 women from the well-known Nurses’ Health Study II cohort. The researchers found that women who engaged in indoor tanning were more likely to develop basal cell skin cancer later in life, and the risk was higher for those exposed to tanning beds in their teens and 20s.

How often the women used a tanning bed seemed to matter as well. Those who were exposed more than six times a year, especially at a younger age, had a higher likelihood of later developing skin cancer.

These findings are “not exactly headline news,” writes Dr. Mary S. Brady, of Memorial Sloan Kettering Cancer Center, in an accompanying commentary. Yet “approximately 30 million Americans use tanning salons at least once a year,” she wrote. “Women and young people are the most frequent users.” She also notes that indoor tanning “is one of the most rapidly growing industries nationally, with a three-fold increase in the number of indoor tanners in the United States between 1986 and 1996.”

So why, in spite of everything that’s known about indoor tanning and skin cancer risk, is the message not resonating very well? According to a report released late last week by the U.S. Centers for Disease Control and Prevention, half of U.S. adults under the age of 30 reported being sunburned at least once in the previous year – an incidence unchanged from a decade ago. Women in their 20s also reported going to a tanning salon twice a week on average.

The summary in the Journal of Clinical Oncology offers a possible explanation for why it’s proving so difficult to get young people to change their tanning behavior: Perhaps they do it because it makes them feel good. In a small but fascinating study, a group of frequent tanners used both ultraviolet and non-ultraviolet tanning beds twice a week for six weeks and was offered an optional third session at the end of each week. Nearly all of them opted for the additional session, and most chose the UV bed, suggesting that the UV exposure somehow reinforced their tanning behavior. The participants also reported feeling more relaxed after UV exposure.

A number of other studies have found an addictive element to indoor tanning. Researchers also have identified an association between frequent indoor tanning, higher anxiety levels and use of alcohol, marijuana and other substances, although it’s not entirely clear how all of these are related.

Given the facts, it’s probably no wonder that a growing number of states are restricting the use of indoor tanning by underage users. It has become difficult to ignore the science about ultraviolet light exposure and skin damage. The real question is whether people’s habits will ever catch up with the evidence.

Why I won’t be watching ‘Weight of the Nation’

Count me out of the audience tonight when “Weight of the Nation,” the much-anticipated HBO documentary about obesity in the United States, makes its television debut.

If the trailers and pre-show publicity are any indication, “Weight of the Nation” stands a good chance of further fueling the stigma, stereotypes and alarmism about being fat, and ultimately doing little to solve the crisis it purports to address (although I hope I’m wrong about this).

Others have articulated this far better than I can. Here’s Fall Ferguson, a health educator with a law degree, with a list of 10 reasons to be concerned about the potential messages behind the HBO documentary.

Some excerpts from the list:

10. The misguided focus on obesity. The series identifies weight as “the problem” when the focus of our public health efforts should be health promotion and the prevention of chronic disease.

9. The appeal to fear. The publicity for the series (and I am guessing the actual documentary itself too) uses fear as a means of persuasion and motivation for change. Few things are as destructive to health and well-being as fear. I also question whether health professionals who use fear to influence people are behaving ethically.

8. Disservice to thin people. Thinner people may get the message that their lower weight means they don’t need to take care of their health or be concerned about preventing chronic diseases.

There’s more. Ferguson questions the message the documentary may send to children. She’s concerned it’ll add to the stigma that already surrounds being overweight or obese. She worries that the documentary won’t include alternative points of view or recognize the issues of body shame and eating disorders in the U.S.

The final point on her list? That “Weight of the Nation” will serve to escalate the cultural war on obesity while sidestepping a more critical look at how we frame this national conversation. “As a fat person, I am tired of being engaged in a war that I didn’t start and that uses my body as cannon fodder,” Ferguson writes. “As a health educator, I deplore the damage done to people’s health and self-esteem by our cultural war on obesity and I deplore the misinformation about health that masquerades as ‘public health messaging.’”

None of this is to say we shouldn’t be concerned about the quality and amount of food we eat or the environment we live in. We ought to be concerned because all of these things do matter to health. It’s frustrating to note, however, the way the message is so frequently shaped: that it’s all about the numbers on the scale and that if people could only lose weight, they would be healthy and their problems would be solved.

Here’s something else to think about: Has the war on obesity become so shrill that it’s counterproductive? A provocative new article by science writer Sharon Begley suggests it has. She writes that as long as we continue to stigmatize fat people and blame them for their weight, we’ll make little progress in more substantive changes such as altering the environment – and we’ll probably make the situation worse for fat children and adults.

Is it time to reframe the conversation? Perhaps so, because the current conversation doesn’t appear to be resulting in much progress. Maybe it’s time to recognize there’s more than one way of talking about obesity and that there are alternate points of view that ought to be heard.

Update: Read more from reviewers who posted today about “Weight of the Nation”: Mary McNamara, television critic for the Los Angeles Times (“‘Weight of the Nation’ pounds away at obesity”); Michele Simon at the Huffington Post (“More empty recommendations on junk food marketing to children”); Tom Conroy at Media Life Magazine (“‘The Weight of the Nation,’ weighty”); and NPR (“Pounding away at America’s obesity epidemic”).

Making the investment in maternal health

In between shopping for a Mother’s Day card for my mom and browsing my recipe collection for a nice dessert to make for her this weekend, I came across Save the Children’s annual State of the World’s Mothers report.

The report delivers a cold, hard dose of reality about what motherhood means for many women worldwide: difficult, risky and uncertain in outcome. Health, education and economic indicators were analyzed for 165 countries with the following results: Norway was ranked the best nation in the world to be a mother; Niger was the worst. The United States? It ranked 25th among the 43 developed nations that were included in the analysis.

This year’s report zeroes in on nutrition during the critical 1,000 days at the beginning of life – from pregnancy through a child’s second birthday – and the global prevalence of malnutrition among mothers and babies.

Worldwide, the main nutritional threat to mothers and children isn’t obesity; it’s too few calories and the consequences this has on health. The report found, for instance, that more than half of the world’s children do not have access to the “Lifesaving Six”: iron folate supplementation during pregnancy, breastfeeding during the first six months, complementary feeding, vitamin A supplementation, zinc treatment for diarrhea, and adequate access to water, sanitation and hygiene.

The report explains why this is so critical:

Alarming numbers of mothers and children in developing countries are not getting the nutrition they need. For mothers, this means less strength and energy for the vitally important activities of daily life. It also means increased risk of death or giving birth to a pre-term, underweight or malnourished infant. For young children, poor nutrition in the early years also means irreversible damage to bodies and minds during the time when both are developing rapidly. And for 2.6 million children each year, hunger kills, with malnutrition leading to death.

The disappointing ranking for the United States was based on its poor showing at creating an environment supportive of mothers who breastfeed. This includes maternity leave laws and workplace policies that give women time to nurse.

Pregnancy, childbirth and infancy certainly are safer, at least in the industrialized world, than they used to be. At the start of the 20th century, for every 1,000 live births in the United States there were six to nine women who died of pregnancy-related complications and 100 infants who died before their first birthday. By 1999, thanks to improved education, public health measures and a higher standard of living, the mortality rate had fallen drastically – by 99 percent for mothers and 90 percent for infants.

These figures obscure some troubling facts, however. Maternal mortality actually has been increasing in the U.S., and at a faster rate than in any other developed nation. Nationally, deaths attributed to obstetrical causes within one year of giving birth rose from 7.6 per 100,000 to 13.3 per 100,000 – this in a country that spends more per capita on maternal health than anywhere else. It’s in fact safer to give birth in Bosnia or Kuwait than it is in the United States.

The reasons are unclear. Part of the apparent increase may be due to improvements in data collection. But other factors may be involved as well: mothers who give birth later in life, when pregnancy- and childbirth-related complications are more likely; higher rates of caesarean deliveries and repeat caesareans that increase the risk of placental complications; and higher rates of diabetes, high blood pressure and other health risks among pregnant women. Some observers also believe the medical world has failed to adapt its practices to a changing patient population – from young, mostly healthy women to mothers whose age, health status and background are more diverse and complex.

A report issued by Amnesty International in 2010 points to yet another cause: economic disparities and lack of access to health care for many U.S. women of childbearing age. It notes that among African-American women, the death rate from pregnancy-related complications is nearly four times that of white women, and that this disparity hasn’t budged in 20 years.

The most shameful fact about maternal and infant mortality? That the majority of these deaths, whether in developing nations or in the wealthy industrialized world, are considered preventable. The most common causes of death during delivery – uncontrollable bleeding, infection, high blood pressure, obstructed labor – can all be addressed with access to appropriate medical care. Improved nutrition for both moms and babies can make a big difference. So can addressing alcohol use and tobacco exposure during pregnancy – and the list goes on and on.

It seems there’s still a long way to go before Mother’s Day is truly healthier for mothers and infants worldwide.

Photo: Wikimedia Commons

The reactance factor, or why patients don’t always cooperate

Blogger Steve Wilkins was supposed to have a colonoscopy recently. Understandably, he wasn’t thrilled about the preparation for the procedure. He knew why it was necessary but, as he explained, “the whole ritual made me feel really imposed upon by everyone – the doctor, hospital where I had the procedure, and the makers of the ‘stuff’ I had to drink.”

It’s an illustration of what’s known as reactance, or the way we respond when our behavioral freedom is threatened.

It starts with the perception that what we’re being asked to do is unfair or unreasonably restrictive. This is typically followed by an emotional and cognitive response such as “This isn’t worth doing” or “I’m not going to take it anymore.” It often culminates in action – for example, resisting the doctor’s recommendation, refusing to adhere or scoring the doctor poorly on a patient satisfaction survey.

I suspect reactance is far more common than many in health care would like to believe, and more often than not it probably goes unrecognized and unaddressed.

Wilkins writes:

If you think about it, reactance is an inadvertent by-product of the way much of health care is organized and delivered. Who hasn’t felt that waiting 45 minutes to see their doctor isn’t an unfair restriction on their time and behavior? Or who hasn’t felt that the hospital admitting process is all about protecting the hospital and does nothing for the patient other than hold them captive as some clerk reads through 30 minutes of legal mumbo jumbo.

Commenters had no trouble coming up with examples of health care situations that tend to trigger reactance. Surgery patients are almost always told they can’t have any food after midnight the day before their surgery, even when the surgery isn’t scheduled until the following afternoon and fasting from midnight isn’t necessary, one person pointed out. Someone else noted how long-term care facilities often have restrictive food choices for their residents or don’t allow them to be in a room with the door closed.

Carolyn Thomas, a heart attack survivor who blogs at Heart Sisters, offered yet more insight into why a medical visit can provoke reactance in the patient. She calls it “the one damned thing after another phenomenon”:

People with chronic disease diagnoses put up with a lot (not to even mention the actual disease!) so that yet another appointment for yet another test, treatment, specialist’s consult or lab procedure can loom larger than it actually is – yet another interminable sentence in yet another waiting room, yet another anonymous new face who knows or cares little about us as human beings other than we’re just their 2 o’clock echocardiogram or consult or scan or EKG or (fill in the blank here).

Why should reactance matter in health care? Because when it’s overlooked, it can influence patient outcomes – and not always in a good way, Wilkins says.

The Healthy Influence – Persuasion blog puts it another way: “Virtually everyone has some power to create and enforce rules, procedures, events, etc. You decide what will happen, when it will happen, in what order, and by whom. You need to realize that if you use that power in a way that is perceived as an unfair restriction on your target’s freedom, they will be unhappy campers.”

There’s been some research on the role of reactance in health care and how it can affect the patient’s motivation. It’s worth noting that reactance doesn’t only apply to patients – for instance, efforts to change physician behavior in areas such as hand hygiene or prescribing habits can be an uphill battle and sometimes result in even less compliance than before.

What can be done about reactance so the desired results can be achieved? It may not be possible to eliminate it, but researchers suggest it can be minimized, as long as there’s “a reasonable balance between what providers ask a patient to do (take a medication, get a colonoscopy, or wait 45 minutes) and the reasonableness and fairness of the request as perceived by the patient,” Wilkins writes.

Research also suggests that when rules and policies are made, the target audience – in this case, patients – is more likely to be OK with the rules if they’ve been allowed to participate in the process. As fuzzy as it might sound, the foundation of patient engagement and shared decision-making seems to be built at least as much on psychology and emotions as on clinical outcomes and quality indicators.