Salad days in school

I don’t remember much about school lunches back when I was in grade school and junior high, other than that the fare generally consisted of canned vegetables, some form of meat and lots of carbohydrates.

Students often complained about the monotony, but did anyone really care about the nutritional value? Probably not very much.

Fast-forward… oh, never mind how many years. The quality of what’s served in the school cafeteria has moved front and center as an issue of importance to children’s health, and all too often it falls lower than it should on the nutritional yardstick.

Schools have been slow to make the switch to more healthful menus, though many of them are trying. Awareness is higher than it was even a decade ago, and school lunch programs have more and better options for incorporating fruit, vegetables and whole grains in what they serve.

But on many measures, progress has been slow. A 2008 government analysis of school lunch quality found that although most children were getting adequate vitamins and minerals, school lunches were still too low in fruit, vegetables and fiber and too high in sodium and saturated fat.

So why aren’t school lunch programs doing better?

What isn’t always recognized, frankly, is how many barriers they face. This past summer the USDA released its first-ever evaluation of the Farm to School program, which connects school lunch programs with local food producers. When a government team visited 15 school districts participating in the Farm to School program, they found that both farmers and school officials were enthusiastic about the program – but they also identified a number of practical challenges.

For one thing, smaller local producers often aren’t equipped to supply the volume required by school cafeterias. For another, fresh fruit and vegetables are seasonal, generally peaking during the summer months when school isn’t in session. Although farmers could build greenhouses to extend the availability of fresh produce, it would cost them money to do so.

School food service directors told the USDA team they were especially concerned about safe growing and handling practices. The report noted that standards aren’t always clear for growers, nor are food service directors necessarily trained to determine whether the fresh apples they’re purchasing from a local grower have been safely produced.

Schools that want to make the transition to more fresh food also face infrastructure problems. Many school kitchens, accustomed to reheating food that’s already precooked, simply aren’t set up to prepare meat or vegetables from scratch. They might not have adequate refrigerator space. Some even lack decent knives or other essential cooking equipment, and many don’t have staff trained to work with fresh food.

All of this takes money, which many school districts don’t have. In this context, it’s encouraging to see initiatives such as Let’s Move! Salad Bars 2 Schools, which provides grants to help schools install properly cooled salad bars in the cafeteria. For cash-poor districts, however, the expense of retrofitting a kitchen to serve more nutritious meals is likely to remain a significant obstacle.

Finally, there’s the political aspect to consider: Because school lunch programs are subsidized by the federal government, Congress gets a say in the standards – and last week lawmakers in Washington voted to block or delay efforts at reducing sodium and potatoes in school lunches and increasing the amount of whole grains served to school children.

None of these problems is necessarily insurmountable. But they’re not trivial either. Schools can serve better lunches, but it won’t happen overnight, and it’s probably safe to say that most of them will need help getting there.

Update, Nov. 30: Read more here about how students at the middle school in Willmar, Minn., are being introduced to whole grains through the federal Chefs Move to Schools program.

Frazzled for the holidays

I enjoy the holiday season as much as the next person, but guess what? It’s always a relief when Jan. 2 rolls around and life returns to normal.

Talk about irony: This is supposed to be a happy time of year, yet for many people it often ends up being incredibly stressful.

When the American Psychological Association conducted a survey some years ago, money problems led the list of top reasons for holiday-related stress. About 60 percent of the 1,000 people who took the survey said a lack of money caused the most stress for them during the holidays. The pressures of gift-giving, lack of time and credit card debt were close behind.

The survey also found that people worried about how holiday stress was affecting their health; about one in three reported turning to food or alcohol to help them cope.

List My 5, a website devoted to top-five lists of everything from must-read books to remedies for migraines, highlights the emotional baggage that often contributes to the Christmas crunch: families who fight or don’t get along with each other, pressure to pull off the perfect Christmas, and nostalgia for happier Christmases of the past.

The effect of all of this on people’s well-being is very real. They might develop headaches, difficulty sleeping or even full-blown depression, especially if they’re already susceptible to seasonal affective disorder, which is triggered by the shortened hours of daylight during the winter.

Here’s an eye-opening fact: Several years ago, researchers decided to track heart disease-related deaths across the year to determine whether there was a seasonal pattern. They were surprised to find that deaths from heart attacks and strokes were more frequent during the early winter months than the rest of the year – and cold weather, which is known to affect the circulatory system, did not seem to be an important factor.

In an article that appeared in the journal Circulation in 2004, Dr. Robert A. Kloner described the findings:

When we plotted daily rates of death from ischemic heart disease in Los Angeles County during November, December, and January, we were struck by an increase in deaths starting around Thanksgiving, climbing through Christmas, peaking on New Year’s Day, and then falling, whereas daily minimum temperatures remained relatively flat during December and January.

The mechanism for this isn’t entirely clear, but there are several suspects: holiday foods that tend to be higher in salt and fat, excess alcohol consumption, and even increased exposure to potentially harmful particulates from wood-burning fireplaces.

Stress is an obvious culprit, Kloner writes. “During the holiday season, patients may feel stress from having to interact with relatives whom they may or may not want to encounter; having to absorb financial pressures such as purchasing gifts, traveling expenses, entertaining, and decorating; and having to travel, especially in the post-September 11, 2001, era.”

Hospitals that are more lightly staffed during the holiday season – and therefore not geared up as usual to care for cardiac patients – might be a factor as well, Kloner suggests.

The phenomenon of seasonal heart emergencies is recognized well enough to have earned its own name: the “Merry Christmas coronary” or the “Happy New Year heart attack.” (Related to this is “holiday heart syndrome,” or episodes of atrial fibrillation that are caused by alcohol consumption and that can lead to stroke.)

The message in all of this can be summed up in two words: Slow down. Kloner advises those who already have heart disease, or who have risk factors for it, to watch their intake of salt and alcohol, avoid overeating, get enough sleep and be prudent about physical exertion.

Curbing holiday stress is easier said than done, but the holiday season will probably be a lot happier for many folks if they can enjoy it without the burden of being overly frazzled.

Photo: Wikimedia Commons

Trend-spotting in health care

We’re still a few weeks away from 2012, but apparently it’s not too soon to take a look at some of the trends shaping what happens in health care next year.

PricewaterhouseCoopers issued a report last week on the top 12 trends to watch in 2012. Its summary of the findings:

In 2012, health industry organizations will connect in new ways with each other and their consumers as they wade through economic, regulatory, and political uncertainty. Some are stepping forward in cooperation; others are rewriting the rules of competition.

Among the key issues, we’ll see value move from theory to reality, investments ramp up in informatics, drug shortages take their toll, insurers gear up to compete in a new insurance exchange marketplace, pharma companies slim down and health care organizations increase their social media presence.

That’s the view from 90,000 feet. But what does it all mean for the ordinary person? Here’s how some of these trends might show up at the level of the patient/consumer experience:

- Expect to see information technology, i.e. computers, become much more widespread, from the hospital room to the exam room.

- Out-of-pocket costs for health care will continue to rise, forcing many people to think harder about whether to seek care at the doctor’s office or the emergency room. In a survey by PricewaterhouseCoopers, nearly half of the respondents said they had decided to skip a doctor visit or prescription drug in the past year because of the cost, and one in 10 had done so at least five times.

- Value will be important. Health organizations are under growing pressure to deliver quality, affordable care that keeps their patients satisfied; those that fall short could wind up being financially penalized. Patients can expect to see more surveys asking them for their opinion. The real test, of course, will be whether patient preferences lead to meaningful changes in how care is provided.

- Look for care delivery systems to become more integrated. There are many incentives to do so: efficiency, savings and better coordination of care. According to PricewaterhouseCoopers, health insurers invested $2 billion last year to acquire or align with physician groups, clinics and hospitals. For patients, there could be more confusion in the short run as the players reposition themselves on the chessboard. In the long run, though, it’s believed that care will be better when it’s less fragmented. Whether this actually will become the case remains debatable; the concept of “accountable care organizations,” contained within the 2010 health care reform bill, has plenty of critics.

- A retail spin is coming to the health insurance market. States are expected to start certifying health plans in October 2012 for participation in health insurance exchanges, allowing consumers to shop for their own health care coverage. This could be a real benefit for people who aren’t otherwise able to obtain affordable health insurance. But the option could be underutilized if consumers don’t have enough information or education to compare plans and make good decisions.

- Is your hospital or clinic on Facebook? If not, it risks getting left behind. Social media are emerging as an important way of reaching out to patients and the community. One-third of the respondents in the PricewaterhouseCoopers survey – and half of the respondents under age 35 – said they had used social media channels to connect with health care organizations or with other people with shared health interests.

Euro RSCG, a global marketing and communications agency (and the same company that forecast the concepts we now know as corporate social responsibility and helicopter parenting), last week issued its own trend outlook.

The top five trends identified: a rising use of medical tourism, telemedicine and personalized medicine; an increasing worldwide incidence of diabetes; and a rise in cyberchondria, or the tendency of people to Google their symptoms and misdiagnose themselves. According to Euro RSCG, all five of these trends signal a cultural shift in the attitudes, beliefs and values that shape how we think and talk about health care.

Photo: Wikimedia Commons

Thanksgiving indulgence

Here’s a quick quiz: When you hear the word “Thanksgiving,” what are the three words that most immediately come to mind?

For me, it’s “family,” “turkey” and “stuffed.”

The pervasive stereotype of Thanksgiving is of the clan gathered around the table, ready to sink their forks into a perfectly browned turkey and eat until they explode.

But if you look closely, you can see signs here and there suggesting this picture isn’t entirely accurate. LiveStrong.com released the findings today from the site’s 2010 user data about people’s holiday habits, and they’re rather surprising.

For starters, people are skipping the gravy and the whipped cream; more than three-fourths of the LiveStrong members opted against gravy and only 6 percent added whipped cream to their pumpkin pie.

Potatoes were still their favorite side dish, but about 25 percent chose sweet potatoes over traditional mashed potatoes.

Nine out of 10 preferred white turkey meat over dark. Their other holiday favorite: ham. Although most continued to roast their Thanksgiving turkey the traditional way, 5 percent had smoked or deep-fried turkey or Tofurky last year.

Pumpkin pie represented 70 percent of all pie consumed, but stand aside for clementines; LiveStrong members ate four times as many of these little citrus fruits during the 2010 holiday season as they did pumpkin pie.

Granted, this may be a rather skewed perspective, since the data reflect a sample group that was health-conscious to begin with. But it raises the question: Does Thanksgiving really deserve its reputation as a national occasion for overeating?

An interesting article at Helium.com suggests popular culture is at least as much to blame as individual behavior for the perpetuation of the “get stuffed” concept – for instance, Norman Rockwell’s famous 1943 painting “Freedom from Want,” published in The Saturday Evening Post, depicting a happy family seated around the Thanksgiving table.

“The associations aren’t new,” writer Adele Gregory points out. “Throughout history feasting has been associated with friendship, generosity and success.”

But in 21st-century America, with its abundance of cheap food, the messages are often mixed, she writes. “Despite awareness of the obesity epidemic, today’s media contains just as many inducements to overeat as warnings.”

How many people genuinely overeat at the Thanksgiving table? The Book of Odds puts the chances at 1 in 2.17 that this will be you – but also notes that Thanksgiving stems from a harvest feast tradition and that humans may simply be programmed to eat more in this particular setting.

The best estimate I could find was that about 47 percent of Americans will overdo it on Thanksgiving. Another way of looking at this is that 53 percent – still the majority, although just barely – will eat with some restraint. It seems there’s plenty of room at the table to enjoy Thanksgiving dinner in good company, without the guilt or turkey hangover that comes with eating too much.

For tips and resources on how to downsize the Thanksgiving meal without sacrificing the enjoyment, read more here and here.

You’re sick and it’s all your fault

When it comes to attitudes about disease, there seem to be two schools of thought:

1. You get sick because of bad lifestyle choices and a failure to take care of yourself.

2. You get sick because bad things sometimes just happen and can’t always be prevented.

It’s a pretty fierce debate, and not one that’s going to be settled any time soon. But who, if anyone, has the better argument?

An online discussion this week at Kevin MD suggests how hard it is to sort out all of this in a way that’s thoughtful rather than judgmental. A guest entry cross-posted from the excellent Heart Sisters blog, which is hosted and written by heart attack survivor Carolyn Thomas, explores what people with heart disease can learn from people with cancer.

Individuals undergoing treatment for cancer, she writes, usually focus all their energy on getting well again in ways that heart patients do not.

Yes, many people recovering from a heart attack do successfully change their lives to reduce the chances of a second heart attack, she writes.

Yet alas I’ve seen far more of those who seem to utterly lack that “change it all now” attitude. These are the heart patients who don’t exercise as instructed, or don’t change the way they eat, or stop taking their medications, or don’t bother showing up at their Cardiac Rehabilitation programs, or keep smoking – and even start smoking again after being terrified into quitting by their initial heart attack.

The harsh view would be that these people deserve the consequences for failing to do enough to modify their risks. Heck, their bad habits probably caused their heart attack in the first place.

There may be some truth to this, but it’s not the whole picture. Thomas herself was a runner before having a heart attack, she explains. She knows many other people with heart disease who led active lives, didn’t smoke and had no obvious risk factors – but had a heart attack anyway.

Researchers have tried mightily to link cancer with health-related behaviors – for example, a recent study that linked alcohol consumption with increased risk of breast cancer. Yet most of these associations are tenuous and don’t fully explain why some people get cancer and others don’t. The link between tobacco use and lung cancer is probably one of the strongest and most convincingly demonstrated – but many of us know individuals who smoked for years, never got cancer and died of something else. Were they just lucky, or was there something else going on?

There’s been a lot of study about the many factors wrapped up in good health vs. bad health. The best estimates break it down this way: Health-related behavior, such as being physically active and avoiding tobacco and excess alcohol use, accounts for 30 to 40 percent of overall health. Genetic factors – a family history of heart disease, for instance – account for about 30 percent. Socioeconomics, including income and education, make up about 30 percent. The rest is divided between environmental factors and the clinical care we receive.

(Various estimates assign slightly different percentages to each category but the proportions are generally similar. Here’s a breakdown from the annual County Health Rankings project that provides a helpful visual perspective; note that it’s primarily focused on health outcomes and does not include genetics as a factor.)

There are several messages here. First, socioeconomics matter much more than many of us realize, and this particular collection of risk factors is not easy to change. Second, it’s hard to escape the forces of DNA. If your parents and siblings have high blood pressure, the odds are pretty strong that you’ll develop it too, and lifestyle changes may not be enough to prevent it.

None of this means health-related behaviors don’t make a difference. Even when we’re battling a genetic predisposition toward type 2 diabetes or live in a neighborhood without sidewalks or a nearby grocery store that sells fresh produce, there are things we can do to help modify our existing risks. We might have to work harder at it than someone who doesn’t have the same social barriers or family health history, but it’s not impossible.

Whether we can achieve total, 100 percent prevention simply by taking good care of ourselves is questionable, though. There are too many unknowns and outside factors that can neither be predicted nor controlled.

In any case, it seems counterproductive to blame people for their own illnesses. An interesting new study carried out at Ithaca College in upstate New York examined the attitudes of people who had diabetes and how responsible they felt for their disease. Those who blamed themselves had higher levels of anger and also reported more difficulty in managing their diabetes. (The study apparently didn’t break down whether there was a difference between participants with type 1 diabetes and those with type 2 diabetes.)

Perhaps it’s just easier for people to accept a new diagnosis when they aren’t second-guessing themselves or beating themselves up over habits and health decisions of the past. By all means, we should try to control what we can – but recognize we cannot control everything.

Photo: Wikimedia Commons

Two simple questions

In the medical setting, asking kids two simple questions about alcohol use can do a lot to help reduce the rate of underage drinking.

Of the many professionals who work with teens, clinicians are in one of the best positions to spot when a young patient’s drinking behavior might be headed in an unsafe direction. Up until now, however, time constraints often have kept pediatricians and family doctors from asking kids about their alcohol use.

Look for this to start changing. The National Institute on Alcohol Abuse and Alcoholism recently published a guide for clinicians on how to identify and intervene with teens at risk of problem drinking. It isn’t every day that a screening tool comes along that’s both quick and effective, but this seems to be one of them.

The two questions:

1. Do you have any friends who drank beer, wine or any beverage containing alcohol in the past year?

2. How about you – in the past year, on how many days have you had more than a few sips of beer, wine or any beverage containing alcohol?

How kids respond can indicate their level of risk for problems related to alcohol. Hanging out with friends who drink is strongly associated with risk of future drinking, according to the NIAAA. Among teens who already drink, the amount of alcohol they consume and how often they consume it can help predict how likely they are to get into trouble because of their alcohol use.

Drinking tends to be seen as a rite of passage for American adolescents. But it’s a path that also can send them in an ultimately harmful direction.

There’s been a fair amount of research on underage drinking, and the statistics aren’t encouraging. The teen years are when many kids start experimenting with alcohol; nationally, the number of youths who have more than one sip goes up dramatically with age, from 7 percent of 12-year-olds to almost three-fourths of 17-year-olds.

Evidence suggests that when kids start drinking at age 15 or younger, they may have a higher risk of developing alcohol-related problems in adulthood than kids who waited until 21 to start drinking. Underage alcohol use also is frequently a marker for other issues such as poor performance in school, social difficulties, and risk of injury or death.

Some factors appear to make adolescents more susceptible to underage alcohol use. Kids who smoke cigarettes, for instance, are also more likely to drink. Conditions such as depression, anxiety, ADD/ADHD or conduct problems are associated as well with a higher risk of underage drinking.

The NIAAA’s guide for clinicians was developed with the help of the American Academy of Pediatrics, researchers and health professionals with expertise in substance abuse and adolescent health. It’s designed to be used in any number of clinical settings – routine doctor visits, an urgent care clinic, even the emergency room.

What makes it especially innovative is a “youth risk estimator” – a chart that helps classify young patients into low, moderate or high risk of alcohol-related problems. The guide also gives clinicians some tools for how to talk to kids in ways that are sympathetic and nonjudgmental yet nudge them toward quitting their alcohol use, or at least reducing the riskiest drinking behavior.

I don’t think underage drinking has ever been considered OK. But these days we know so much more about the impact of alcohol on kids, including what it can do to their still-developing brains. There seem to be fewer and fewer reasons to tolerate it as “kids will be kids” and more reasons than ever to address teen drinking sooner rather than later.

Photo: Wikimedia Commons

Taking our medicine

Apparently even the promise of no co-payments isn’t enough to entice some patients to take their medication.

For those who missed it yesterday, a new study reported that when a group of heart attack survivors was offered prescription medication fully covered by their health plan, fewer than half took the drugs. In fact, the researchers struggled to even get enough people signed up to take part in the study.

Coverage by the Associated Press produced what surely must be the health care quote of the month, from one of the researchers who was involved in the study: “My God, we gave these people the medicines for free and only half took it.”

A quick summary: The study involved 5,855 adults who had recently been hospitalized with a heart attack and who had been prescribed one or more medications afterwards to lower their risk of another cardiac event. About half of them were prescribed preventive medications with no out-of-pocket costs; the rest were given prescriptions with the usual co-pay. At the end of one year, fewer than half of the patients overall were actually filling their prescriptions – and there were only minor differences in the adherence rate between those who had a co-pay and those who didn’t.

So what is up with that? Are patients so unwilling to comply with taking their prescribed medication that they can’t even be motivated by free drugs?

That’s certainly one way to interpret this study. But I suspect there’s more going on here than a straightforward case of collective noncompliance.

Adhering to a medication regimen involves multiple, interrelated steps. For starters, patients have to buy into the notion that they need medication and that it will somehow benefit them. They have to fill the prescription at the pharmacy. They have to remember to take the drugs each day, and take them correctly. They have to remember to get refills. They might have to deal with unwanted side effects. Adherence can go off the rails at any one of these critical points.

Although it’s often assumed that cost is a major influence on whether patients get their prescriptions filled, the NEJM study suggests that it perhaps isn’t as important as other factors – and that if clinicians want to devise effective strategies to encourage adherence, they need to do more than address the money angle.

I’m aware of at least one study that found a surprisingly basic reason for why some patients don’t take their medication: They simply don’t like the idea of taking a lot of pills each day.

It would be interesting to know the extent to which psychology contributes to medication non-adherence. The patients in the NEJM study had all recently had a heart attack; on average, they were 53 years old. At least some of them may still have seen themselves as mostly healthy, not yet emotionally ready to accept that their health had changed or that they would need medication for the rest of their lives.

On top of this, there are strong American cultural attitudes about aging and infirmity. We tend to regard disease as a burden on society and often blame the sick for “not taking better care of themselves.” Should we be surprised when people resist taking prescription medication because, consciously or not, they don’t want to be perceived as one of those sick, costly individuals?

The study in the New England Journal of Medicine did reinforce that when patients stuck with their medication regimen, they were less likely to have a second cardiovascular event. Overall, health care costs for these people also were somewhat lower. The savings weren’t huge but then again, it often can take years to see a measurable payoff from this kind of health intervention, and the study wasn’t designed to track long-term results.

Did the elimination of co-pays help some patients more than others? It’s probably safe to assume that it did, at least among those for whom cost was the main barrier. For other folks, though, it’s clearly going to take more than this to raise the adherence rate. A better understanding of both the practical and emotional issues involved might be a good place to start.

Complications of surgery

A patient advocate who blogs at 2centsdujour has come right out and asked the same question that’s been bothering me ever since the death of Andy Rooney last week: Unexpected and fatal complications from surgery? What happened?

The curmudgeonly CBS commentator must be wondering what happened too. Pat Mastors, who lost her own father to “complications of surgery” five years ago, channels Rooney’s unique brand of bemused crankiness to speculate about the whole situation:

“I died last week, just a month after I said goodbye to you all from this very desk. I had a long and happy life – well, as happy as a cranky old guy could ever be. 92. Not bad. But then I read what killed me: ‘serious complications following minor surgery’.

 “Now what the heck is that?

 “Nobody gets run over by a ‘serious complication’. You don’t hear about a guy getting shot in the chest with a ‘serious complication’. Sure, I didn’t expect to live forever (well, maybe only a little bit), but I was sorta going for passing out some Saturday night into my strip steak at that great restaurant on Broadway. Maybe nodding off in my favorite chair, settling into a good dream of reeling in a 40-pound striper. You know, not waking up. This whole ‘death by complication’ thing is just so, I don’t know…vague and annoying. It bothers me.”

Somewhat surprisingly, most news reports have glossed over the apparent sequence of events: 92-year-old patient undergoes what reportedly was a minor surgical procedure, unexpectedly develops serious complications and dies in the hospital.

No other information is available, so it’s impossible to know what actually happened. But it’s a blunt reminder of the risks patients face whenever they undergo surgery – even when the procedure is minor or routine.

There’s no accurate estimate for how many patients die in the U.S. from complications of surgery. The most widely quoted figure of health care-related fatalities comes from the now-famous Institute of Medicine report, “To Err Is Human,” but the number – 44,000 to 98,000 – is purely an estimate and includes all deaths related to medical error, not just those involving surgical complications. The best available statistic is that about one in six patients undergoing surgery will develop potentially life-threatening complications.

Surgery is risky, period. Complications can range from a bad anesthesia reaction to hospital-acquired infection or technical error by the surgeon. Many of these are considered preventable, and hospitals are under growing pressure to develop safer processes and lower the rate of complications and deaths.

Over the past decade, a considerable body of research has been built on how to reduce surgery-related complications. One intriguing study, published in 2009 in the New England Journal of Medicine, found that lowering the incidence of complications might be only half the picture. Hospitals also need to get better at managing complications when they do arise, the researchers concluded after examining outcomes among 84,000 patients at 186 U.S. hospitals. Other studies have demonstrated the benefit of checklists and protocols to prevent complications such as infection or post-surgery blood clots.

It’s not easy to tease apart the types of complications that are preventable from those that aren’t. The rate of fatal surgery-related complications undoubtedly could be lower than it is, however. Whether the patient is 9 or 90, death is not the outcome anyone would wish for, and it’s not an outcome that should be seen as acceptable.

Photo: Wikimedia Commons

Getting personal about the cost of care

Even being a doctor doesn’t necessarily make you immune to hassles about health care costs, as Dr. Jeffrey Rice, chief executive of Health Care Blue Book, described recently at the Costs of Care blog.

Dr. Rice’s 12-year-old son needed to undergo relatively minor surgery on his leg. The family had a high-deductible health plan and wanted to know in advance what it would cost.

Dr. Rice explains:

I called the hospital to request a price for the surgery and they said they couldn’t really tell me. They offered to send the procedure codes to an external reviewer who would provide a general idea of the anticipated charges. Three days later the answer came back at $37,000. I reiterated that I had high deductible insurance and needed to know the actual price they would bill me after an insurance adjustment to the network fee schedule.

The hospital next referred me to my insurance company. The insurance company referred me to their PPO network. The PPO network said that they could not reveal the prices until after the case was performed.

Health care consumers are often criticized for not thinking about the cost before seeking care. But when doing your homework involves negotiating an obstacle course worthy of the Green Berets, it’s not hard to see why many people give up after encountering the first few barriers – or worse yet, don’t even try.

Luckily Dr. Rice was persistent. He decided to ask the surgeon if the procedure could be done at an independent ambulatory surgical center. The answer was yes. “One phone call and 10 minutes later I have the exact price for his surgery – $1,515,” Dr. Rice wrote. “My son had his surgery and is doing well. We got a fair price because we demanded more of the system.”

Stories like this are real-life examples of how cost intersects with the care patients receive and how challenging it can be to link the two in a meaningful way.

Costs of Care, a nonprofit organization based in Massachusetts, is looking for stories about the cost of care for its second annual essay contest.

Consumers and providers are invited to submit personal anecdotes illustrating how cost awareness led to high-value care and/or cost savings, or how a lack of cost awareness led to an unexpectedly high bill or difficulty figuring out what a test or treatment would cost.

Four $1,000 prizes will be awarded to the winners. The contest deadline is Tuesday, Nov. 15. Finalists will be announced Dec. 15; the winners will be announced Jan. 15. Click here to read more about the contest rules and how to submit an entry. And watch this space for contest updates and links to the winning entries.

For what it’s worth, I don’t buy the argument that it’s too difficult for health organizations to give patients an estimate ahead of time for what their care will cost. Here in Willmar, Minn., Rice Memorial Hospital has been doing this for the past year with the help of a software tool called CarePricer. By all accounts, it’s quite accurate, even taking into account how much of the patient’s deductible has already been met for the year. I’ve heard anecdotally from hospital staff that in a few cases, patients have decided to delay elective surgery after learning what their out-of-pocket costs would be. It takes work to put this kind of process into place but clearly it can be done, and done successfully.

The diabetes wars

Even if I didn’t already know November is American Diabetes Month, the flood of news releases into my email inbox would tip me off.

And I can’t help noticing that while most of them contain helpful statistics and information, all too often there’s something missing. Amid all the general references to diabetes, there’s seldom a mention of the fact that diabetes comes in two forms, type 1 and type 2, or an attempt to differentiate the two.

Should it matter? After all, they’re both diabetes. If you talk to any of the 26 million American children and adults with diabetes, however, it soon becomes clear that while they have much in common, there also are distinct differences in their experience of living with and managing type 1 vs. type 2 diabetes.

A quick overview: Type 1 diabetes develops when islet cells in the pancreas stop producing insulin, typically due to autoimmune, environmental or genetic factors. It develops rapidly and requires daily, lifelong insulin injections.

The mechanism behind type 2 diabetes is insulin resistance. These individuals might still be producing insulin in their pancreas but their body can’t use it effectively. Type 2 diabetes often develops slowly and can be delayed and/or managed with nutrition, physical activity and weight loss.

Type 1 diabetes sometimes is referred to as juvenile diabetes because it most often develops in children and teens. It’s not an entirely accurate name, though, because this form of diabetes can occur at any age. Likewise, type 2 diabetes, which has traditionally been seen as a disease of adults, is increasingly being found among high-risk children.

No matter what you call it, there’s no arguing with the numbers. People with type 2 diabetes vastly outnumber those with type 1; for every person with type 1, there are nine or 10 with type 2.

It’s a minority status that can rankle the type 1s, who often feel marginalized and misunderstood. Meanwhile, the type 2s have to deal with the all-too-common public attitude that if they have diabetes, it’s their fault for not taking better care of themselves.

For many of those living with this disease, the differences between the two forms of diabetes – and the question of who has it worse – are very real, and sometimes divisive.

It was eye-opening to encounter an essay last year in the Huffington Post that examined the so-called diabetes wars and the arguments on both sides of the fence. The essay’s author, Riva Greenberg, has had type 1 diabetes since childhood and is a writer, diabetes advocate and health coach.

“I’m truly tired of how little people know about the intense, exhausting and daily effort of managing type 1 diabetes,” she writes. With so much attention focused on type 2, “people with type 1 diabetes feel invisible, overlooked and are often blamed by an unknowing public for causing their own condition – eating ourselves into our disease. This isn’t the case for type 1, and frankly, it isn’t always the case for type 2.”

It’s time to find a new name for type 1 diabetes, Greenberg writes. “Recognition of type 1 as a significantly different disease than type 2 will help the general public understand that our diabetes can only be cured by more research while type 2 prevention and reversal requires a lifestyle change.”

The debate raged even more intensely at Diabetes Daily, where an online discussion last year debated whether it’s better to have type 1 or type 2.

One person wrote:

 What is so frustrating is even being compared to a type 2. I think it should be called a totally different disease and should not be linked… Anyone that is a “controlled type 2 diabetic” likely does not have a problem. There are countless of people that have totally erased type 2 diabetes from their lives because they have changed their eating habits.

Type 1 does not have this choice. It is a fatal disease, they must check themselves before every meal, after every meal, upon getting up, upon going to bed, and injecting a needle for every little bit of food they put into their body.

Not so fast, wrote someone else, whose father died of type 2 diabetes at age 63. It was a “very painful, terrible, slow death,” she wrote. “It is so awful and horrible hearing all these insensitive comments from people who think type 2 is a piece of cake.”

I think the lesson here isn’t which form of diabetes is worse. Both are challenging to live with, wrote David Edelman, cofounder of Diabetes Daily along with his wife, Elizabeth.

There are substantial differences between treating type 1 and type 2 diabetes. There are major overlaps, too. We all need to:

- Understand what’s in our food.

- Figure out what’s going on in our bodies.

- Use our insulin and medications effectively.

- Have positive relationships with our medical teams.

- Stay motivated.

Type 1s and type 2s need to “appreciate [each] other’s struggles,” someone else wrote. “The reality is that the needs and issues of the type 1 community are not the same needs and issues that affect the type 2 community, hence why some of us are so vocal. I don’t think one type wants to ‘elevate’ themselves above the other, we just want a cure for our particular disease.”

A good start would be for the public to become better educated about diabetes and more understanding of the daily challenge of living with it – type 1 and type 2 alike.

Photo: Wikimedia Commons