Health Beat

The Dangers of Do-It-Yourself DNA Testing

This article originally appeared on Health Beat.

Recently, Time magazine listed the retail DNA test as its best invention of 2008 (thanks to Kevin M.D. for the tip). The best?  Maybe one of the most worrisome.

Time specifically highlights the do-it-yourself DNA testing kit from 23andMe, a California-based corporation named after the 23 pairs of chromosomes in each human cell. The company sells $399 DNA kits consisting of a test tube that you spit into and send to the company's lab. There, over the next four to six weeks, researchers extract DNA from your saliva and map your genome, putting the results online. You can access the results through the web and navigate a guide to your genes that estimates "[genetic] predisposition for more than 90 traits and conditions ranging from baldness to blindness."

Admittedly, this sounds pretty cool. As Time gushes, "in the past, only élite researchers had access to their genetic fingerprints, but now personal genotyping is available to anyone who orders the service online..." But look closer at the commoditization of DNA testing and the novelty wears off pretty quickly.

By pinpointing specific genes associated with certain diseases, a 23andMe gene read-out can inform a user of his or her susceptibility to those conditions. It turns out this is a lot less useful than it might seem. For example, Time reports that one test showed that the husband of 23andMe's founder has a rare mutation that gives him an estimated 20 percent to 80 percent chance of getting Parkinson's disease. The couple's child, due later this year, has a 50 percent chance of inheriting this mutation, and thus his dad's risk of Parkinson's.


At this point, the parents-to-be have to worry that their kid will have a mutation associated with an incurable disease. If he has it, they also have to fret that he has anywhere from a one in five to a four in five chance of actually contracting the disease. Really, how helpful are these numbers? That's a big range of probabilities. I wager it doesn't feel terribly good to be tracking the genetic lottery of your son's health, disease by disease.  In fact, I imagine that it's downright harrowing.

Dr. Alan Guttmacher, acting director of the National Human Genome Research Institute of the National Institutes of Health, agrees. In September, he told the New York Times that "[DNA testing] can be neat and fun, but it can also have deep psychological implications" because it can profoundly influence the way we view ourselves, our loved ones, and our relationship to the world. As Guttmacher told Time, "a little knowledge is a dangerous thing."

Here Guttmacher isn't just talking about the strange helplessness of knowing the ever-so-approximate probability of your child getting sick. He's also speaking to the fact that DNA tests themselves only provide a little knowledge -- just one small piece of the complex puzzle that is our health. Unfortunately, DNA tests often promise much more than this. One company, Navigenics, is actually dedicated to reading your DNA and diagnosing you with a set of medical risk factors that you then discuss with an appointed "genetic counselor." The idea is that genetic tests reveal some sort of fundamental physiological truth; a complete and comprehensive assessment of our health.

It's true that some conditions, like cystic fibrosis and Huntington's disease, have been scientifically proven to be associated with particular genetic mutations. But many other conditions have not been shown to have a genetic origin -- particularly when a gene is detected without an intimate understanding of the environmental factors surrounding a patient, as is the case when researchers on the other side of the country analyze your spit.

In light of this fact, the Genetics and Public Policy Center, a project of Johns Hopkins University and the Pew Charitable Trusts, warns that many consumers "might have difficulty distinguishing between tests widely used and accepted by medical professionals...and those whose validity is unproven in the scientific literature." Customers will see their genetic print-out, with risk assessments for particular illnesses tagged on each gene. But they won't have a sense of how the DNA testing company calculated that number -- i.e., how much it reflects established medical research versus a best guess from a company trying to convince you of its product's predictive possibilities.

Unfortunately, the latter seems more plausible. In 2006, the Government Accountability Office (GAO) purchased 14 DNA tests from four different websites and sent in samples. The office found that "the results from all the tests GAO purchased mislead consumers by making predictions that are medically unproven and so ambiguous that they do not provide meaningful information to consumers."

From GAO's saliva samples, the companies sent back risk predictions for conditions like diabetes and osteoporosis with little qualification, even though "scientists have very limited understanding about the functional significance of any particular gene, how it interacts with other genes, and the role of environmental factors in causing disease." In other words, the tests were spitting out numbers and warnings even though the genetic causality of these conditions "cannot be medically proven."

Further, many other results were all but "meaningless. For example, [the companies reported that] many people 'may' be 'at increased risk' for developing heart disease." But this is true for pretty much everyone, "so such an ambiguous statement could relate to any human that submitted DNA." The laughable superficiality of the companies' test results carried over into the lifestyle information that GAO provided: when the office told a company that the patient from whom the sample derived smoked, the DNA company recommended that he stop smoking. When the patient reported that he had quit, the company "gave recommendations to continue to avoid smoking." Gee, thanks -- is that really worth $400?

Ultimately, in the words of Dr. Muin Khoury, director of the National Office of Public Health Genomics at the Centers for Disease Control and Prevention, "the uncertainty [of medicine] is too great" to view DNA testing as a sort of medical crystal ball. Even within the context of our genes, the possibilities are endless. To its credit, Time points out that "many diseases stem from several different genes and are triggered by environmental factors. Since less than a tenth of our 20,000 genes have been correlated with any condition, it's impossible to nail down exactly what component is genetic."

In fact, even when doctors do know that there's a genetic component to a given condition, they're not always sure which genes to look at. For example, in 2006 the Boston Globe noted that "there are hundreds of mutations in two well-known breast cancer genes, BRCA1 and BRCA2, for which reliable commercial tests exist. A woman could be told that she didn't have the common mutations but might still be at high risk from less common mutations or a different gene altogether..." Translation: even though we know there's a genetic component to breast cancer, it's very difficult to pinpoint which gene is the problem -- particularly if the only way of communicating with a patient about the issue is watered-down risk probability.

Meanwhile, the vagueness of DNA test results works in the favor of testing companies. If they keep things simple and superficial, they can make cross-promotion easier. In the GAO study, for example, the DNA test results were synched with expanded product offers such as dietary supplements, which had only a tangential relationship to the patients' test results.

This sort of aggressive marketing is direct-to-consumer medicine at its most profitable. Companies often want to convince patients that they have a certain condition and then sell them on the cure. In prescription drugs, this "disease mongering" has usually been about listing symptoms to get people scared. But DNA testing kicks things to another level: convince people that they are actually hard-wired to contract a particular disease, and your cure becomes that much harder to resist.

It's no wonder that experts at Johns Hopkins are worried that "advertisements may...underemphasize the uncertainty of genetic testing results, or exaggerate the risk and severity of a condition for which testing is available, thus increasing consumer anxiety and promoting unnecessary testing." Given what we've seen in direct-to-consumer medicine up until now, this is a very reasonable fear.

Another plausible concern is that DNA tests, in their superficiality and over-simplification of medicine, will be routinely misinterpreted by patients. Time cites the case of Nate Guy, a 19-year-old in Warrenton, Va., who "was relieved that though his uncle had died of prostate cancer, his own risk for the disease was about average," according to his 23andMe test. This sounds uplifting until you realize that, by the age of 70, the vast majority of men have prostate cancer. Almost all of them will die with prostate cancer, not from it. (Something else will kill them before this very common, but usually slow-growing, cancer catches up with them.)  An "average risk" of prostate cancer means you'll probably get prostate cancer and live with it for years, just as do nearly all older men.

Presumably, Guy doesn't know this. One gets the sense that he thought his uncle died of prostate cancer because he died with prostate cancer, and that this fact meant that his uncle had been uniquely susceptible to the disease. Now Nate finds out he has an average level of vulnerability and thinks that he won't get prostate cancer. Statistically speaking, none of this is probably true -- but this is the sort of reasoning that happens when patients are confronted with misleading, sparse data about their health, devoid of a broader medical context.

One can imagine that, had Nate been disheartened by the results of his test, he would have similarly embraced their definitiveness and undergone unnecessary prostate screenings throughout his life -- screenings which have never been shown to actually improve survival rates. Either way, Nate's taking the wrong message from his genome. Indeed, the likelihood that patients will go for more screenings -- just to be safe -- combined with the fact that people are paying $400 a pop for a test that vaguely suggests whether they may or may not contract a disease, makes DNA testing a profoundly cost-ineffective health care option.

Genomics is a field that's new and exciting; scientists will and should pursue it. But it's probably not something you should try at home. From what we've seen so far, do-it-yourself DNA testing risks exacerbating many of our most pressing health care problems: the deceitfulness of for-profit medicine, the dangers of direct-to-consumer health care, the glut of wasteful, potentially harmful, screenings, and the general misconception that -- if our gizmos are fancy enough -- we can all live forever.

How John McCain Would Dismantle Medicare

This article originally appeared on Health Beat.


No doubt you've seen the ads. Barack Obama claims that John McCain plans to hollow out Medicare, arguably the most popular social program in America.

McCain says that just isn't so.

The controversy began early this month when the Wall Street Journal reported that McCain plans major reductions in Medicare and Medicaid spending totaling $1.3 trillion over the next ten years. This would help pay for his health care plan. Douglas Holtz-Eakin, McCain's senior policy adviser, told the Journal that "the savings would come from eliminating Medicare fraud and by reforming payment policies to lower the overall cost of care."

Without question, there is money to be saved if Washington cracks down on Medicare and Medicaid fraud. But before reaping any savings, the government first would have to spend money to ferret out the fraudulent claims. And no one believes that Washington could recover anything close to $1.3 trillion. Meanwhile, "reforming payment policies" seems to suggest that McCain plans to pay doctors and hospitals less, at a time when many Medicare patients are having a hard time finding a primary care physician -- precisely because Medicare's fees are already so low. This could reduce access to care.

Barack Obama quickly went on the attack with ads warning that McCain would take "Eight hundred and eighty-two billion from Medicare alone...requiring cuts in benefits, eligibility, or both."

McCain's camp fired back, arguing that McCain had no intention of slashing benefits. The "savings" would come from eliminating fraud, accelerating the computerization of health records, speeding the use of generic drugs, eliminating government subsidies for private Medicare Advantage plans, and requiring high-income beneficiaries to pay more for pharmaceuticals.

Let's look at this list, item by item. While electronic medical records could reduce waste in the long run, experience has shown that it takes at least ten years for healthcare IT to begin to pay off. In the meantime, where would the Senator find the money to install the technology and train doctors and hospital staff to use it? Electronic medical records would be a fine investment, but this is not a way that Medicare can save billions over the next decade. Speeding the use of generics should reap some savings -- though if you buy generics, you have probably noticed that prices are spiraling. As for eliminating the bonus that Medicare now lavishes on private insurers that offer Medicare Advantage, that would trim spending by $16 billion. But that's still far from the $1.3 trillion that McCain aims to save.

Finally, what about the last item: "requiring high-income beneficiaries to pay more for pharmaceuticals"?  Let me suggest that this gets to the heart of the matter. For the goal here is not so much to raise revenues for Medicare as to shrink the size of the program. 

When it comes to McCain's plans for Medicare, the Wall Street Journal story is only the tip of the iceberg. If you want to understand McCain's intentions toward Medicare you need to realize that his objections to the program are firmly grounded in a conservative ideology that can be traced back to Ronald Reagan. From a conservative point of view, the problem with Medicare is that it covers everyone.

This explains why both President George Bush and Senator McCain have supported "means-testing" benefits, charging some seniors more than others. It also sheds light on why Sarah Palin quoted Ronald Reagan in her closing statement during the vice-presidential debate earlier this month.


McCain's Advisers Send a Signal

Begin with Palin. During the October 5 debate (the day before the Wall Street Journal story about McCain's new plan to fund his healthcare plan appeared), Sarah Palin turned to Reagan, as she reminded her audience that if we want to protect our liberties, we must be vigilant: "It was Ronald Reagan who said that freedom is always just one generation away from extinction. We don't pass it to our children in the bloodstream; we have to fight for it and protect it, and then hand it to them so that they shall do the same, or we're going to find ourselves spending our sunset years telling our children and our children's children about a time in America, back in the day, when men and women were free."

Some observers took this moment as an example of Palin's naiveté. Apparently she didn't know that these famous lines came from a speech that Ronald Reagan had recorded for the American Medical Association (AMA) in 1961, railing against the evils of that dark plot to undermine our liberties...Medicare.

Palin probably didn't know the source of the quotation. But the McCain advisers who prepped her for the debate most certainly did. Those lines are treasured by Reagan's many admirers.  By quoting that particular speech, the McCain camp was signaling how it views Medicare -- the day before the Wall Street Journal would announce that McCain planned to radically cut Medicare's funding. To understand the message, it's worth going back to 1961.


Reagan Leads a Secret Operation Aimed at Housewives

At the time, the Democrats were proposing the "King-Anderson" bill, a proposal backed by President John F. Kennedy that would create a program much like modern Medicare, covering all Americans over 65. The AMA vigorously opposed the legislation. The physicians' guild saw Medicare as one step forward on the slippery slope toward "universal coverage," which the AMA called "socialized medicine."

Enter the Woman's [sic] Auxiliary of the AMA, an organization composed primarily of the wives of member physicians. In an essay titled "Operation CoffeeCup: Ronald Reagan's Effort to Prevent the Enactment of Medicare," Larry DeWitt, a public historian for the Social Security Administration, describes how the Woman's Auxiliary was asked in 1961 to launch a special high-priority initiative under the title of WHAM, Women Help American Medicine.

"The avowed aim of WHAM was bluntly stated," DeWitt reports: "This campaign is aimed at the defeat of the King-Anderson bill of the 87th Congress, a bill which would provide a system of socialized medicine for our senior citizens and seriously curtail the quality of medical care in the United States." (Thanks to The New Republic's Jonathan Chait and Dr. SteveB on Daily Kos for calling attention to DeWitt's excellent essay.)

"The AMA's campaign against the King-Anderson version of Medicare was a complex, extensive, and well-financed lobbying tour-de-force," DeWitt continues. "Many aspects of the WHAM campaign were very public and visible. The AMA placed advertisements in major newspapers and funded radio and television spots, all deploying the usual red-brush of 'socialism,' and even the specter of jack-booted federal bureaucrats violating 'the privacy of the examination room.'"

But, DeWitt reveals, "there was also a more stealthy component to the campaign," one that depended for its success on its sponsorship and origins being hidden from the members of Congress who would be lobbied under its aegis. This was Operation CoffeeCup, and Ronald Reagan was its star.

Operation CoffeeCup arranged a series of coffee-klatches hosted by the members of the Woman's Auxiliary. "The Auxiliary members were instructed to downplay the purpose of the get-togethers," DeWitt explains, "depicting them as sort of spontaneous neighborhood events: 'Drop a note -- just say "Come for coffee at 10 a.m. on Wednesday. I want to play the Ronald Reagan record for you."'"

In 1961, Reagan's film career had faded and he was contemplating a move into politics. With that in mind, he agreed to become the AMA's spokesperson, recording a 19-minute vinyl LP entitled "Ronald Reagan Speaks Out Against Socialized Medicine." (You can hear Reagan's silky voice on YouTube.)

Reagan's impassioned address was followed by an 8-minute speech by an unnamed announcer. Reagan's work on behalf of the AMA was, listeners were assured, unpaid (although there was no mention of the fact that Reagan's father-in-law was a top official of the AMA) and was motivated only by his own strong political convictions on the issue.

Meanwhile, "the attendees at these coffees were trained and encouraged in writing apparently spontaneous letters to members of Congress expressing their strong opposition to the pending King-Anderson bill. It was essential, the attendees were instructed, that their letters appear to be the uncoordinated, spontaneous, expressions of a rising tide of public sentiment."


Reagan's speech was "a determined and in-depth attack on the principles of Medicare (and Social Security)," DeWitt points out, "going well beyond opposition to King-Anderson or any other particular piece of legislation."

In the recording, Reagan described "the idea that all people of Social Security age should be brought under a program of compulsory health insurance" as an "imminent threat." He emphasized the breadth of the plan, explaining that it would cover "not only our senior citizens" but "those who are disabled" (just as Medicare does today).

Reagan urged his listeners to write to their Congressmen telling them that "We do not want socialized medicine...[we] demand the continuation of our traditional free enterprise system."

"Call your friends, and tell them to write... If you don't," Reagan warned, "this program, I promise you, will pass just as surely as the sun will come up tomorrow. And behind it will come other federal programs that will invade every area of freedom as we have known it in this country, until, one day... we will awake to find that we have socialism. And if you don't do this, and if I don't do it, one of these days, you and I are going to spend our sunset years telling our children and our children's children, what it once was like in America when men were free."

Despite efforts to keep Operation CoffeeCup under the radar, Reagan's role in the AMA campaign was revealed in a scoop by Drew Pearson in his Washington Merry-Go-Round column, "Star vs. JFK":
Pearson wrote: "Ronald Reagan of Hollywood has pitted his mellifluous voice against President Kennedy in the battle for medical aid for the elderly. As a result it looks as if the old folks will lose out. He has caused such a deluge of mail to swamp Congress that Congressmen want to postpone action on the medical bill until 1962. What they don't know, of course, is that Ron Reagan is behind the mail; also that the American Medical Association is paying for it.

"Reagan is the handsome TV star for General Electric...Just how this background qualifies him as an expert on medical care for the elderly remains a mystery."

Nevertheless, Reagan and the AMA carried the day. Medicare legislation would not pass until 1965, after JFK had been assassinated. 

DeWitt stresses that Reagan objected to Medicare because it was universal. As an alternative, both Reagan and the AMA preferred the Kerr-Mills bill, which offered an early version of Medicaid, paying medical bills for those on welfare, or those who could qualify as indigent. "By restricting federal programs to the 'truly needy' these programs could be kept small," DeWitt explains, "involving few if any middle-class or upper-class Americans..."

Medicare, by contrast, was a program that included all seniors as well as the disabled. Based on a collective vision of society, it was a program that might create social solidarity -- as indeed it would. And conservatives knew that there was a danger that when younger Americans saw how well Medicare worked, they might say "we want that too." Some might begin talking about "Medicare for All."

So Reagan argued against universality: "Now what reason could the other people [i.e. the Democrats] have for backing a bill which says we insist on compulsory health insurance for senior citizens on a basis of age alone regardless of whether they are worth millions of dollars, whether they have an income. . . whether they have savings?" he asked. "I think we could be excused for believing that . . . this was simply an excuse to bring about what they wanted all the time: socialized medicine."

When Medicare legislation finally passed in 1965, some in Congress continued to argue that more affluent Americans should not be eligible. But the program's supporters rejected that idea. They did not want Medicare to become "a poor program for the poor." They realized that what makes Medicare special -- and so popular -- is the fact that it treats all Americans over 65 equally.


McCain Echoes Reagan's Position

Conservatives do not understand this. Or perhaps they do. In 2003, as part of a veiled attempt to privatize Medicare, the Bush administration opened the door to means-testing, hiking premiums on Part B of the program for Americans with incomes over $80,000 ($160,000 for couples).

Seniors already had seen serious hikes in Part B premiums. "Between 2000 and 2007, all Medicare beneficiaries faced average annual increases in Part B premiums of nearly 11 percent," the Medicare Payment Advisory Commission (MedPAC) pointed out in its March 2008 report. "Over the same period, Social Security benefits grew by just 3 percent a year."

The new increase for wealthier seniors was slipped into the Medicare Modernization Act of 2003, SeniorJournal.com reports: "a Republican-dominated committee quietly added a provision to the Act, which was not included in the versions passed by the House or Senate, that would add a surcharge to the Part B Medicare premium for more affluent seniors. The 13 percent surcharge will begin in 2007 and be phased in over three years. According to the Congressional Budget Office, the means test will affect about 1.2 million beneficiaries in 2007 and 2.8 million by 2013. Medicare has made no public mention of this change, not even in the July fact sheet on Part B costs, which estimated the Part B premium for 2007 would be less than $100 per month."

Today, the surcharge has kicked in, driving a wedge into the Medicare program. For the first time since Medicare's creation 43 years ago, seniors are no longer paying the same amount for the same services. By January 2009, higher-income beneficiaries will be paying 1.4 to 3.2 times the standard Part B premium, depending on their incomes.  The standard premium for individuals earning less than $85,000 will be $96.40.  By contrast, more affluent seniors will pay premiums that range from $134.90 to $308.30.

And now John McCain has proposed more means-testing, arguing that Medicare beneficiaries with incomes over $82,000 should also pay more for Medicare Part D, the prescription drug benefit. Echoing Reagan's objection to covering the wealthy under Medicare, McCain has called the drug benefit a "new and costly entitlement" that included many people "who could buy insurance on their own without government help -- people like Warren Buffett and Bill Gates. By making them pay more for their medicines than some worker who spent his career in the coal mines, the country could save billions of dollars that could be returned to taxpayers or put to better use." According to McCain adviser Douglas Holtz-Eakin, the proposal would affect the richest 5 percent of Medicare beneficiaries and save the system about $2 billion a year.

On the face of it, this sounds fair. Bill Gates doesn't need my Medicare dollars. But McCain isn't talking about only excluding billionaires. He proposes making Medicare less attractive to a large swathe of the upper-class and upper-middle-class -- everyone earning over $82,000 ($164,000 for couples).

And why stop there? As Trudy Lieberman pointed out in the Columbia Journalism Review, in April, McCain adviser Douglas Holtz-Eakin tipped his hand when he told the Washington Post:  "You could make this as aggressive as you want to get more savings."  In other words, Lieberman added, "if the government saves $2 billion by making couples with incomes greater than $164,000 pay higher premiums, it could save $6 billion by moving down the income ladder to, say, $100,000 or even less."

"Many health care advocates see McCain's proposal as just another opening to privatize and destroy Medicare as a social insurance program, under which everyone who has paid into the system is entitled to equal benefits as a matter of right," Lieberman observed. "If drug benefits, [like part B premiums] are based on income, critics fear that support for the program will eventually erode as those with more choices and more money will opt out of the program and buy coverage from private insurers."

After all, at a certain point some upper-middle class and many upper-class seniors may well decide that they could get better care at a lower cost if they dropped out of Medicare Part B (which, like Part D, is voluntary) and used the $308 monthly premiums to purchase a plan from a private insurer. One can easily imagine insurers offering seniors relatively low-cost high-deductible plans that cover what Parts B and D cover -- physicians' visits, out-patient care, and prescription drugs. (Meanwhile, Medicare Part A would continue to shield seniors from the charges that even the wealthy fear: hospital bills.)

Switching to private insurance to cover doctors' bills would offer well-heeled beneficiaries some advantages.  Today, many doctors are refusing to take Medicare patients because the fees Medicare pays many physicians are low.

Wealthier seniors might prefer a private plan that lets them choose from a wider range of physicians. If they switched to private insurance, they could pay down their deductible while giving their doctors say, 15 to 20 percent more than Medicare would allow.

Of course, this would mean that even more doctors would refuse Medicare, leaving middle-class and upper-middle class seniors who earn less than $82,000 (and pay only $96 for Part B) with many fewer physicians to choose from. In this way, Medicare would become a two-tier program.

"Those left in Medicare will likely be the poorest and the sickest with few options," Lieberman points out. Experience tells us that private insurers will shun sicker seniors, leaving them on Medicare, where they would push the program's premiums ever higher. Meanwhile, healthier, wealthier seniors who opt out of Part B and Part D will be opting out of what Lieberman rightly describes as "a compact among generations" that has "made it possible for people to have health care when they are old and need it."

Under this scenario, that compact would be threatened. After all, if some seniors begin opting out of Part B, younger, affluent Americans might well ask, "Why should I continue to pay the full payroll tax for Medicare? I only plan to use Part A. I'll let the government know I'm not interested in Part B and they can cut my contribution accordingly." This leaves just one question: how would Medicare stay afloat if only lower-income employees are shelling out the full tax?

Finally, how confident do you feel that private insurers would reimburse for all of the benefits that Medicare now covers? What would happen to upper-middle class seniors who purchased a high deductible plan if their retirement savings suddenly swooned? Would they find themselves putting off needed care because they couldn't afford the deductible? Would Medicare still be there if they decided to switch back? For decades, Medicare has served as a safety net that all Americans could count on -- rich or poor, sick or well.  This is why we call it a "social safety net."

The Conservative Agenda

Keep in mind that when conservatives talk about means-testing, their goal is not to put Medicare on a firmer financial footing by raising co-pays for wealthier beneficiaries. Their aim is simply to drive more affluent seniors out of Medicare and into the arms of private insurers. At that point, health care advocates warn, Medicare would become welfare for low-income and middle-class seniors. And just how much political support would it have then?

Finally: even if McCain is not elected, keep an eye on Congress. There are more than a few legislators who would like to gradually shrink Medicare, killing the program by inches.

One cannot help but remember what House Speaker Newt Gingrich said in 1995, when calling for Medicare cuts: "We don't want to get rid of [Medicare] in round one because we don't think it's politically smart...but we believe that it's going to wither on the vine because we think [seniors] are going to leave it voluntarily."




On the face of it, "means-testing" Medicare sounds so reasonable.  By contrast, reforming Medicare to raise quality and contain costs would be a tough job. As I've discussed in the past, if done right, Medicare reform could serve as a model for national health care reform that included a public sector plan open to everyone. This is just what conservatives fear. 




It would be so much easier, they say, to just raise co-pays on more affluent seniors -- until finally Medicare becomes a model for nothing. At that point, those who oppose "Medicare for All" can breathe a sigh of relief.


Will the Economic Meltdown Undermine Interest in Health Care Reform?

This post originally appeared on Health Beat.

Writing on The Health Care Blog, D.C. insider Bob Laszewski puts the chances of health care reform -- at least in the form envisioned by the presidential candidates and ambitious activists -- at about zero in the wake of Wall Street's meltdown. It's easy to see why Laszewski is so pessimistic:

"On top of the $500 billion deficit [that the government faces] in 2009 ... and the cost of the Freddie and Fannie bailout ... the Congress is now being told it must take on a total of almost $1 trillion in government long-term costs to try to turn the financial system around."

That's a problem. McCain claims his reform plan will cost $10 billion; Senator Obama says his will cost $65 billion. Both are no doubt low-ball estimates. Obama's plan, for example, is more likely to cost $86 billion in 2009 and $160 billion in 2013, after it's expanded, according to the Urban Institute. Given these numbers, Laszewski says that the candidates have to "get...real" about how they're "really going to deal with health care reform in the face of all of these challenges."

In an upcoming post, Maggie will dig deeper into just how health care reformers can and should 'get real' in post-meltdown America. But instead of talking about what reformers should do, I want to discuss another important question we have to pose in the upcoming age of austerity: will the public even care about health care reform anymore, now that the economy has gone south?

On September 30, the Partnership to Fight Chronic Disease (PFCD) held a conference call with reporters. On the call were Ken Thorpe, PFCD's Executive Director, and former U.S. Secretary of Health and Human Service Tommy Thompson. Though I've never been a fan of Thompson, he had some interesting things to say.

Thompson opened by laying out the numbers behind U.S. health care expenditures, noting that "16 percent of the [U.S.] gross national product goes into healthcare [every year], and [that proportion is] on its way to 21 percent." He also pointed out that "we're spending $2.4 trillion, on the way to $4.6 trillion, and 75 to 80 percent of that cost is over chronic illnesses" like cardiovascular disease, strokes, cancer, diabetes, and obesity.

While these statistics are hardly new to health care wonks, they're worth reconsidering in light of Congress' bailout plan. Seventy-five percent of $2.4 trillion is $1.8 trillion -- meaning that, annually, chronic diseases cost us almost three times as much as the current bailout bill. The nation's total health care bill is the equivalent of passing a bailout, saving Bear Stearns, nationalizing Fannie and Freddie, and propping up AIG twice every year.
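To sanity-check that arithmetic, here is a quick back-of-the-envelope calculation. One caveat: the article never states the bailout's price tag, so the $700 billion figure below is an assumption.

```python
# Back-of-the-envelope check of the chronic-disease figures above.
# The $700 billion bailout price tag is an assumed figure; the other
# numbers come from the article.
total_health_spending = 2.4e12  # annual U.S. health spending, dollars
chronic_share = 0.75            # low end of the 75-80% tied to chronic illness
bailout = 0.7e12                # assumed cost of the bailout bill, dollars

chronic_cost = chronic_share * total_health_spending
print(f"Chronic disease cost: ${chronic_cost / 1e12:.1f} trillion per year")
print(f"Multiple of the bailout: {chronic_cost / bailout:.1f}x")
```

At roughly 2.6 times the assumed bailout, "almost three times" is in the right ballpark.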

If nothing else, the Wall Street implosion puts the sheer scale of America's health care woes in perspective. As such, Thompson and Thorpe agree that the economic meltdown is a powerful wake-up call to the American public. During the call, Thompson said that he thinks that citizens are "absolutely frustrated with Congress and Washington avoiding problems," and are thus likely to begin demanding action on long-term crises like health care. The need for reform "is hung around the neck of Democrats, Republicans, George Bush and everybody else, and Wall Street," he said, and the American public wants to "find an answer." Thorpe agreed, saying that outrage surrounding the economic crisis has "stirred a bee's nest" of dissatisfaction that will "elevate the interest and desire to do something on healthcare reform in 2009."

In other words, our economic crisis highlights the danger of senseless spending and lays bare the catastrophic danger that comes with ignoring the rumbling of a financial crisis. As Thompson and Thorpe see it, voters are deciding that they're mad as hell -- and health care is another area triggering their wrath.

Dr. David Kibbe of the American Academy of Family Physicians agrees. Also writing on The Health Care Blog, Kibbe argues that Americans' feelings of betrayal over Wall Street's greed will spill over into health care. Kibbe notes: "[A]ny sentient observer of this [economic] trickery on such a massive and systematic scale will start to ask questions about who else among our highest paid and most trusted professionals might be lying to us about the well being we place in their hands. Who else [besides financiers,] they will ask, is making money off our trust in them? Who else, they will ask, is skimming money off the top of an inflated and ultimately doomed -- because unsustainable -- market for complex services? Where is the next bubble that privatizes profits but socializes risk?"

It's health care, says Kibbe -- a sector where "fifty million people are without health insurance, and at least that many are under insured, while revenues going into the industry continue to increase at double digit rates of increase year after year." Then he asks: "How can this go on much longer?"

Under normal circumstances, the answer might be a good, long time. After all, our health care system has been dysfunctional for decades. But today Americans aren't just disappointed with the way our institutions work -- they're outraged and scared. In a Gallup poll released yesterday, 53 percent of Americans said they felt "angry" about the financial crisis, and 41 percent said they felt "afraid." Americans feel that the system has failed them -- and, as perverse as this might sound, it's that sort of disillusionment with institutions that is needed to fuel changes as far-reaching as health care reform.

Interestingly, the Gallup poll shows that more affluent Americans are the angriest. Sixty-three percent of college graduates say they have felt anger over the recent events in the financial world, compared with 50 percent of those with some college and only 43 percent of those who never attended college. Sixty-two percent of respondents in upper-income households with annual incomes of $60,000 or more have been angry, compared with 50 percent of those in lower-income households. This is important: it's always hardest to convince the "haves" that the system is broken, because the system is built to work best for them. But if Americans of higher socioeconomic status begin to acknowledge that an unsustainable system threatens the entire economy, institutional overhaul becomes a much more plausible political proposition.

Granted, all of these numbers refer to the financial crisis and not health care. But the assumption that worries about the economy will fuel outrage over health care isn't as far-fetched as it may sound. Polls show that concerns over the economy and health care do in fact trend together. Check out the graph below, from the Kaiser Family Foundation, which I originally posted back in January to illustrate this very point.

Moreover, public interest in health care is still high: the September 2008 Kaiser Family Foundation election tracking poll puts health care as the number three priority of all voters, and health care remains the second most commonly reported economic hardship (after paying for gas). What will happen when our long-time interest in health care is mixed with a new appreciation for government oversight and regulation, smart spending, and building a system that works? Maybe, just maybe, a renewed political will for health care reform.

Admittedly, in post-meltdown America, resources will be limited. It's also true that the sort of done-in-one reform packages that reformers like to trumpet -- Cover everybody! Cut costs! Improve quality! -- will probably have to be unpacked into separate initiatives. (This isn't necessarily a bad thing -- as Maggie said last week, "we shouldn't rush into providing health insurance for everyone until we're sure that we're offering Americans health care" anyway).

In the meantime, don't be too quick to assume that Americans have forgotten about health care because the economy has taken a nosedive. It may be that, as people feel increasingly insecure -- and get wise to the danger of governmental inaction -- they will want health care reform now more than ever.

Most Results of Drug Studies Never Published

Last week, the Guardian UK published a story that should be shocking -- but isn't: "More than Half of U.S. Drug Studies Never See the Light of Day." This serves as further proof -- if we needed it -- that pharmaceutical companies should not be allowed to control what doctors and patients know, and don't know, about new drugs.

The story follows below.

More than half of US drug safety studies never see the light of day
Only 43 percent of the evidence of safety and efficacy that the US Food and Drug Administration uses to approve drugs is published in scientific journals. The authors of the survey say this amounts to "scientific misconduct."

James Randerson, guardian.co.uk, Tuesday September 23 2008 10:46 BST

The results of more than half of all clinical trials that demonstrate the safety and effectiveness of new drugs are not published within five years of the drug going on the market, according to an analysis of 90 drugs approved by US regulators between 1998 and 2000.

The researchers, who traced the publication or otherwise of 909 separate clinical trials in the scientific literature, wrote that the failure of drug companies to publish the evidence relating to new medicines amounted to "scientific misconduct". They said it "harms the public good" by preventing informed decisions by doctors and patients about new medicines and by hampering future scientific work.

Sir Iain Chalmers, who is director of the James Lind Library in Oxford and a founder of the Cochrane Collaboration, a respected organization that reviews medical evidence, said that it was vital that all data on new medicines be made public.

"Patients may otherwise suffer or die unnecessarily," said Chalmers, who was not involved in the work. "The people who participate in a trial have a right to expect that their participation and their data will be made available publicly so that people can take whatever decisions seem appropriate in the light of that information."

The US researchers who carried out the study searched the academic literature for publication of the trials that drug companies relied on to convince the US Food and Drug Administration that their new products were safe and effective and so worthy of market approval.

Information that is used to convince the regulators is not necessarily subsequently published for public and scientific scrutiny, but the scale of the missing information was found to be vast.

Five years after each of the 90 drugs was first available for patients, only 43 percent of the studies supporting the drugs' use had been published, with most publication happening in the first one or two years. In the case of one product -- an antibiotic -- the researchers could not find a single supporting trial in the scientific literature, while five trials were published twice and one was published three times.

The team also found evidence for a "publication bias." Trials with statistically significant results were more likely to be published than those with non-significant results, as were those with larger sample sizes.

"In the years immediately following FDA approval that are most relevant to public health, there exists incomplete and selective publication of trials supporting approved new drugs," Prof Ida Sim and her colleagues at the University of California, San Francisco, wrote in the journal PLoS Medicine.

One possible explanation for the scientific data not being published is that drug companies hold back publication of the results that are least flattering to their new drugs. Another possibility is that academic journal editors are less inclined to publish papers on trials that have negative or ambiguous results.

"Regardless of the cause, publication bias harms the public good by impairing the ability of clinicians and patients to make informed clinical decisions, and the ability of scientists to design safer and more efficient trials based on past findings," the authors wrote. "Publication bias can thus be considered a form of scientific misconduct."

The reporting of clinical trial results should have improved since the period analysed by the researchers, because the 2007 FDA Amendments Act mandated basic results reporting for all trials supporting FDA-approved drugs and devices. However, the researchers said it remained to be seen whether clinical reporting would improve.

The new law could even have the opposite effect. "Might sponsors feel less compelled to publish equivocal trials because the basic results will already be in the public domain?" they speculated.

Liked this story? Find more health news at Health Beat.

Universal Health Coverage Is No Silver Bullet

The Massachusetts experiment in health care reform is all about expanding access. But it doesn't try to control costs. This, in a nutshell, is why it's running into trouble.

The plan didn't reform health care delivery, just coverage. Granted, in terms of bringing more people in under the tent, it's been a success: Since the plan went into effect in 2006, 439,000 people have signed up for insurance -- a number that represents more than two-thirds of the estimated 600,000 people uninsured in the state two years ago. This surge in coverage has reduced use of emergency rooms for routine care by 37 percent, which has saved the state about $68 million. (Going to the ER for routine care drives up health care costs by creating longer wait times and tying up resources that can be used to help patients who are critically ill).

But even with these savings, Massachusetts is having trouble funding its plan. Earlier this month the Boston Globe reported that the governor's office is planning to shift more responsibility for funding to employers. Currently, the Massachusetts health care law requires most employers with more than 10 full-time employees to offer health coverage or to pay an annual 'fair share' penalty of $295 per worker. This is the 'pay or play' rule: an employer either provides coverage ('plays') or pays a fee toward the system for not doing so.

To "play" rather than "pay," employers must show either that they are paying at least 33 percent of their full-time workers' premiums within the first 90 days of employment, or that they are making sure that at least 25 percent of their full-time workers are covered on the company's plan. (In other words, they must be paying a large enough share of the premiums so that 25 percent of their employees can afford the plan they offer.)

Now, instead of giving employers this either/or option, the new proposal requires that employers do both -- or fork over the penalty fee. In a sense, this is an admirable move by the government, since its intention is to push toward truly universal coverage. But there's also a game of scrounging-for-dollars going on here: The state wants employers to pay more -- to, in the words of Mass. Governor Deval Patrick, "step up" and embrace "shared responsibility" -- either by covering a greater share of health care costs or paying more in penalty fees.
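The shift from an either/or test to a both-tests rule can be sketched as a simple decision function. The thresholds come from the article; the sample employer below is entirely hypothetical.

```python
# Sketch of the Massachusetts 'pay or play' test described above.
# Thresholds are from the article; the sample employer is hypothetical.
PENALTY_PER_WORKER = 295  # annual 'fair share' fee, in dollars

def plays_current(premium_share, enrollment_rate):
    """Original rule: meeting EITHER test avoids the penalty."""
    return premium_share >= 0.33 or enrollment_rate >= 0.25

def plays_proposed(premium_share, enrollment_rate):
    """Proposed rule: an employer must meet BOTH tests."""
    return premium_share >= 0.33 and enrollment_rate >= 0.25

# Hypothetical employer: pays 40% of premiums, but only 20% of its
# 50 full-time workers are enrolled in the company plan.
share, enrolled, workers = 0.40, 0.20, 50
print(plays_current(share, enrolled))    # no penalty under the old rule
print(plays_proposed(share, enrolled))   # owes the penalty under the proposal
print(workers * PENALTY_PER_WORKER)      # annual fee, in dollars
```

An employer that passes one test but fails the other is exactly who the proposed change targets: compliant today, penalized tomorrow.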

As you might expect, businesses are putting up a fight. They say that Patrick's proposal "ignores the obvious," which is the fact that "employers definitely are doing their part." While it's tempting to vilify "Big Employer" as stingy and selfish, the truth is that there's only so much you can ask businesses to do without harming citizens.

As I pointed out in a March blog post, research shows that there's a big trade-off between health care costs and workers' wages: when employers have to pay a lot for health care, they take the cost out of employees' paychecks. Or, as a 2004 study from the International Journal of Health Care Finance and Economics put it, "the amount of earnings a worker must give up for gaining health insurance is roughly equal to the amount an employer must pay for such coverage."

In other words, you can't bleed employers dry without also screwing workers. True, big corporations might have deep enough pockets to pay more for health care without adjusting workers' wages. But they're still bottom-line driven enterprises, which means that they're going to try and break even wherever possible. Unless you want to see laws that strictly regulate the correlation between business health care costs and workers' wages, the working Joe's income is going to take a hit as employers shoulder more health care costs.


The other choice employers have is to opt out of coverage and instead pay the penalty. Many have pointed out that, for employers, this is an attractive option -- particularly in Massachusetts, which, as of 2007, had the highest annual health care costs per employee in the country: $9,304. That's a lot more than $295 a year. We can be certain that this fee will rise in the future. And once it gets high enough for employers to choose health coverage over the fee, they're still going to take the cost of that coverage out of workers' wages.

On the one hand, one might argue that this is fair -- that workers are better off with health insurance rather than higher salaries. Massachusetts is a relatively wealthy state: in 2008, median income for a family of four stood at $85,420 (meaning that half of all four-person families earned more than that). Insofar as some of Massachusetts' wealthier citizens don't have health insurance, it probably would make sense for them to earn a little less, and be covered. Some would insist that this should be an individual choice, but the truth is, if an individual decides to go without insurance, we all wind up paying the cost when he or she becomes seriously ill. On the other hand, many households could not afford to take a pay cut -- especially when you consider the cost of housing in Massachusetts.

Ultimately, the only way to make a universal health care system sustainable both for employers and employees is to tackle the high cost of health care in Massachusetts. As noted, in the Commonwealth, the average annual premium for an employee's health care is $9,304 -- significantly more than the national average of $6,881.

As Maggie has pointed out in the past, this is not because insurers in Massachusetts are profiteering. Insurance is expensive in the Commonwealth because its citizens consume more health care than people in many other states. They undergo more tests and procedures than most of us, and they see more specialists. Look at a graph of average health care expenditures per person in Massachusetts compared to average health care expenditures in the rest of the U.S., and you find that in Massachusetts, individuals receive an average of nearly $10,000 worth of care each year -- compared to just a little over $7,000 per capita nationwide.

High consumption of care is driven, Maggie explained, "by the fact that the state is a medical Mecca, crowded with academic medical centers, specialists and the equipment needed to perform any test the human mind is capable of inventing."

As she originally explained in this article, in states where there are more hospital beds and more specialists, the population receives more aggressive, more intensive, and more expensive care. Even after adjusting for local prices, race, age, and the underlying health of the population, supply drives demand. And it turns out that the Bay State has one doctor for every 267 citizens -- versus one doctor for every 425 people in the nation as a whole. Meanwhile, the state boasts an abundance of specialists, while suffering a critical shortage of primary care physicians.

For reform to work in Massachusetts, the state needs to make care more cost-effective, not just more accessible. That means encouraging providers to emphasize proven treatments that do the most good for the most people, while avoiding overpriced, not-fully-proven, bleeding-edge services and products.

This won't be easy. As Ezra Klein recently pointed out on his blog at The American Prospect: "people generally like to equate better health with awesomer technology, but developing a slightly better drug for late-stage cancer -- a good, profitable innovation -- will do much less for health than getting the flu vaccine to everyone who needs it, or creating systems so everyone with elevated cardiac risk is on statins. These interventions are low innovation, but actually extremely effective ... Saving some level of money on innovation in order to rechannel it to access and basic interventions would probably make the country a whole lot healthier. But I don't think you're allowed to say that." Yet if health care is to improve -- in both Massachusetts and the U.S. -- this is exactly what we need to say.

In fact, Massachusetts is a classic example of the technology-cost problem. According to a Boston Globe report, between 1997 and 2004, the number of MRI scanners in Mass. tripled to 145, about the total for all of Canada. From 1998 to 2002, the number of patient MRI scans in the state increased by 80 percent, to almost 500,000 a year. With insurers paying between $500 and $1,400 to cover a scan, the numbers add up: in 2003, Harvard Pilgrim, a non-profit insurer, shelled out $73 million on MRI scans, even as other costs increased as well. It should come as no surprise that a state with so much medical technology also has more medical and clinical lab technicians than any state in the union: 184.06 per 1,000 of the population, almost twice the national average of 101.32.
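Multiplying the scan volume by the per-scan fees gives a rough statewide range -- a sketch using only the round numbers quoted above:

```python
# Rough statewide MRI spending range implied by the figures above.
scans_per_year = 500_000         # approximate annual MRI scans in Massachusetts
low_fee, high_fee = 500, 1_400   # insurer payment per scan, dollars

low_total = scans_per_year * low_fee
high_total = scans_per_year * high_fee
print(f"${low_total / 1e6:.0f}M to ${high_total / 1e6:.0f}M per year statewide")
```

On those assumptions the statewide MRI bill falls somewhere between $250 million and $700 million a year, which makes a single insurer's $73 million entirely plausible.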

Massachusetts also reports an abundance of hospital beds -- enough to allow patients to spend an average of 11.8 days in the hospital during their last six months of life -- compared to 9.5 days in Maine. None of this is consciously planned. It's just that if the beds are available, it's easier to hospitalize the patient. And once they are in the hospital, it's easy to refer them to a dozen specialists, assuming that enough specialists are available.

Bottom line, health care in Massachusetts is extremely expensive, thanks to supply-side factors -- which means expanding and sustaining full coverage is, fiscally speaking, a tough proposition. Luckily, steps are being taken to address cost issues: in June, the state's biggest insurer and the state itself said that they would stop reimbursing doctors and hospitals for 28 medical errors.

Certainly punishing doctors isn't the key to sustainable health reform -- and some errors are more "preventable" than others. But changing health care delivery -- that is, changing patterns in the types and volume of treatments and procedures made available to patients -- is the key to making health care run smoothly in the long-term. Universal coverage is wonderful and necessary. But it's only one piece of the health care puzzle.

Want High-Quality Universal Health Coverage? Fix Medicare First and Use It as a Model

This article originally appeared on Health Beat.

Thanks to the unbridled rise in health care prices, Medicare is going broke. As I mentioned in a recent post, four years ago the Medicare trust fund that pays for hospital stays started to run out of money. In 2004 the fund began paying out more than it takes in through payroll taxes.

"Since then, the balance in the fund, combined with interest income on that balance, has kept the fund solvent. But in just 11 years, it will be exhausted," the Medicare Payment Advisory Commission (MedPac) reported in March. "Revenues from payroll taxes collected in that year will cover only 79 percent of projected benefit expenditures." And each year after 2019, the shortfall will grow larger.

Make no mistake: This is not an example of an inefficient government program spending hand over fist without caring whether it is getting a bang for the taxpayer's buck. As I discussed in that earlier post, health care prices have been climbing -- without a concomitant improvement in patient outcomes or patient satisfaction -- in the private sector as well.

Medicare Reform Could Pave the Way for National Reform

Before trying to roll out national health insurance, the next administration needs to address the structural problems that undermine the laissez-faire chaos that we euphemistically refer to as our health care "system." Otherwise, we run the risk of winding up with a larger version of the dysfunctional, unsustainable system that we have today. Ideally, the administration should make Medicare reform a demonstration project for high-quality, affordable universal coverage.

Let me be clear: Medicare reform does not preclude national health reform. To the contrary, by starting with Medicare and showing what can be done, reformers enhance their chances of winning the larger war.

Nor does focusing on Medicare first mean that the suffering of uninsured and underinsured Americans must be ignored. As part of Medicare reform, Medicaid and the State Children's Health Insurance Program should be folded into Medicare, turning these "poor programs for the poor" into federal programs, expanding coverage and raising the fees for doctors who treat Medicaid patients.

Medicare presents reformers with a promising starting point. Politically, Medicare reform will be easier than a nationwide overhaul, simply because Congress recognizes that when it comes to Medicare, it has no choice. Legislators know that Medicare has reached a tipping point: They must do something.

This is why, earlier this month, the Senate flirted with political suicide by considering letting a scheduled cut in the fees Medicare pays physicians go into effect. On July 1, Medicare was supposed to slash physicians' fees, across the board, by an average of 10.6 percent. Physicians threatened that they would close their doors to Medicare patients. At the eleventh hour, legislators stepped back from the edge of the cliff.

But Congress knows that the problem has not been solved. Legislators will have to revisit the problem of Medicare spending early in 2009. Physicians' fees are scheduled for another across-the-board cut on Jan. 1, 2009. Everyone knows that this is a crude solution. Medicare pays primary care physicians too little, which is why, MedPac explains, 30 percent of Medicare patients looking for a new primary care doctor report difficulty finding one. Meanwhile, Medicare overpays other physicians for certain services. Dollars need to be redistributed; the net result could easily be a wash.

How Medicare Can Save Dollars and Lift the Quality of Care

But Medicare cannot contain runaway inflation simply by looking at how much it pays doctors. The program overpays drugmakers; it overpays device makers; it overpays private insurers who offer Medicare Advantage. In some cases, it overpays hospitals -- or squanders dollars on services that don't benefit patients, without giving hospitals the right financial incentives to provide more effective, safer care.

In its March and June reports, the Medicare Payment Advisory Commission has made a number of shrewd and imaginative suggestions for cutting waste in the system. First, Medicare should eliminate the windfall bonuses that it now pays Medicare Advantage insurers (see "The Problems With Medicare Advantage").

Secondly, MedPac details ways that Medicare can begin to pay doctors and hospitals for the quality of the care they provide rather than the quantity. I'll be writing more about this in future posts.

MedPac also calls for an independent, unbiased, "comparative effectiveness institute" that would compare new, cutting-edge drugs, devices and procedures to the less expensive products and treatments that they are trying to replace. Medicare would then make coverage decisions based not on the cost of these treatments, but on their effectiveness, paying for the treatments that are most effective for a particular set of patients. (This would not be one-size-fits-all medicine).

The U.K. offers a model: The National Institute for Health and Clinical Excellence (NICE) reviews medical research, decides which treatments to cover and issues guidelines to doctors. (Note: Since the U.K. has a much, much smaller health care budget, NICE must look at the cost-effectiveness of treatments, i.e., it must find out if a product that is likely to give the patient only another six months of life is worth the price tag. But as I have written, because we have so much low-hanging fruit in our bloated system, we can save billions just by concentrating on clinical effectiveness, without worrying about cost-effectiveness.)

In the U.K., NICE uses its research to issue guidelines for "best practice" to physicians. Note that these are not "rules," but guidelines. Everyone understands that in individual cases, physicians might want to deviate from the recommendations. But in the U.K., physicians now follow the guidelines 89 percent of the time.

Medicare also could reap huge savings by repealing that part of the Medicare bill -- negotiated behind closed doors at the end of 2003 -- that specifically prohibits Medicare from negotiating with drugmakers for the bulk discounts that the Veterans Administration wins. As a result, Medicare now forks over at least 50 percent more than the VA pays for half of the top 20 brand-name prescription drugs sold to seniors.

Won't industry lobbyists fight tooth and nail against such changes?

Absolutely. The for-profit health care industry reaps a handsome profit from sky-high spending.

Yet, despite the opposition, when it comes to Medicare reform, legislators have a compelling argument: Neither the elderly nor the taxpayers who support Medicare can afford to continue to be gouged by drugmakers, device makers and others in the health care industry who are the "price makers," while patients and taxpayers have become what Wall Street calls the "price takers."

Today, the health care industry says: "We will charge whatever the market will bear." That sounds fair enough. But when "the market" is composed of seriously ill patients (and roughly 80 percent of our health care dollars are spent on those who are seriously ill), it turns out that there is almost no limit to what "the market" will bear. This is why free market competition in health care doesn't work the way it does in other sectors of the economy: When it comes to health care, the customer has too little leverage.

Meanwhile, in the unregulated Wild West that we call a health care system, no one pushes back to protect the patient. Virtually every other developed country in the world negotiates for bulk discounts on drugs. We don't. Other countries insist on unbiased evidence that a new, more expensive product or procedure is, in fact, more effective than less expensive treatments already available. We don't. (The FDA only requires that a sponsor provide evidence that the new entry is more effective than a placebo -- in other words, better than nothing.)

Most Americans would agree: The government needs to defend both the elderly and taxpayers. We simply cannot afford to keep the pharmaceutical industry's shareholders in the style to which they have become accustomed. (Until recently, drugmakers have been far more profitable than virtually any other Fortune 500 businesses. From 1995 to 2002 they took first prize, reporting earnings that ranged from 13 percent to 18.6 percent of sales each and every year. Over the same span, the average Fortune 500 firm posted earnings that averaged just 3.1 percent to 5 percent of revenues. Granted, in 2004, drugmakers fell to third place, trailing mining, crude oil production and commercial banks, but even then, earnings equaled 16 percent of sales.)

Medicare Reform Is a More Manageable Project

A final reason to begin Medicare reform before tackling national health care reform is that, when compared to a fragmented national health care system that involves a kaleidoscopic cast of payers and insurers, Medicare is a single and fairly coherent program. This is a more manageable project. It will give reformers a chance to show that they can, in fact, trim spending while lifting quality. This should provide a blueprint for tackling the larger, messier job of overhauling care nationwide.

If done right, national health care reform will take time. Those fighting for universal coverage should not delude themselves: Reform will be extraordinarily difficult, not only because the system is so fragmented, but because conservatives and progressives are so polarized on this issue. The political climate is even more fraught than it was when the Clintons attempted reform in 1993.

In 1993, compromise seemed possible. Keep in mind, 23 Republicans, including then-Minority Leader Robert Dole, co-sponsored a bill introduced by Republican Sen. John Chafee that sought to achieve universal coverage through a mandate that would require that all individuals buy insurance.

Today, where would you find 23 Republicans willing to support progressive reform that doesn't simply hand the keys to the kingdom to the private insurance industry?

In 2008, progressives and conservatives are so polarized on this issue that anyone who talks about "bipartisan compromise" is talking about health care "reform" in name only. Conservatives believe that free market competition can solve our health care problems. Progressives understand that this is a sector where market competition doesn't work to ensure high quality and reasonable costs. Government oversight is needed to ensure that insurers do not discriminate against those who are sick, and that everyone receives an equally comprehensive package of benefits at a price they can afford.

Conceivably, if Congress is in a hurry, it could cobble together compromise legislation that gives every American access to a piece of paper called "health insurance." But this would not mean that middle-class Americans would have access to health care. If the insurance industry is not regulated, insurers will continue to sell health care policies that are filled with holes, while shifting costs to the sickest patients.

Finally, the most important reason why the next administration should put Medicare first on its health care list is this: If Washington tackles Medicare reform in 2009, it can show Americans that it is possible to reduce costs and improve care. In fact, lower costs and higher quality go hand in hand.

This is counter-intuitive. The notion that more care -- and more expensive care -- is not better goes against some of our most deeply held assumptions about medicine and medical technology. But as I have explained, decades of medical research show that less aggressive, less intensive care leads to better outcomes.

Until most Americans understand this, it will be supremely difficult to create an efficient, affordable and sustainable system that offers high-quality care to all. This is why starting with Medicare reform could lead to the health care reform that Americans need.

How Long Will Your Doctor Continue Accepting Private Insurance?

This article originally appeared on Health Beat.

More and more doctors are fed up with private insurers. It's not just a question of how stingy they are, but how difficult it is to get reimbursed. Paperwork, phone calls, insurers who play games by deliberately making reimbursement forms difficult to interpret ...

Some physicians have just said "no" to insurers.

What does this mean for patients? Business models vary. Some doctors charge by the minute. I recently read about a physician who punches a time clock when the appointment begins. She has calculated that her time is worth $2 per minute. Fifty-nine minutes = $118. Will you be paying cash, or by charge today? Somehow, I think the meter would make me nervous. I suspect I might begin talking very quickly. But this is only one model.

Rather than charging by the minute, some doctors bill on a fee-for-service basis. In those cases, many physicians mark up their fees well beyond what an insurer would pay. But, they point out, they also spend more time with their patients. No one feels rushed.

A story in a New Jersey newspaper describes how physicians in Northern Jersey have begun following in the footsteps of "elite Manhattan doctors and are withdrawing from all insurance plans." The article compares fees with and without insurance. On the left, the fees that Jersey doctors who don't take insurance charge; on the right, the fees that insurers typically pay for the same services:

  • Mastectomy: $5,000 / $900

  • Ruptured abdominal aneurysm: $8,000 / $1,800

  • Routine screening mammogram: $350 / $100

  • Initial neurological consultation: $400 / $100


Some Doctors Share Savings with Patients

Other physicians find that if they don't take insurance, they can cut their overhead and actually charge patients less.

Over at Revolution Health, "Dr. Val and the Voice of Reason" tells how Dr. Alan Dappen has set up his practice:

"He is available to his patients 24 hours a day, 7 days a week, by phone, e-mail and in person. Visits may be scheduled on the same day if needed, prescriptions may be refilled any time without an office visit, he makes house calls, and all records are kept private and digital on a hard drive in his office.

"How much do you think this costs? Would you believe only about $300/year?"

Dappen has streamlined his practice. It's not just that he doesn't need an assistant to keep up with stacks of insurance paperwork. In general, he keeps his overhead low, offers full price transparency, has "physician extenders" who work with him, and "charges people for his time, not for a complex menu of tests and procedures."

The key is that Dappen practices very conservative medicine.

"I believe in doing what is necessary and not doing what is not necessary," he says. "The health care system is broken because it has perverse incentives, complicated reimbursement strategies, and cuts the patient out of the billing process. When patients don't care what something costs, and believe that everything should be free, doctors will charge as much as they can. Third-party payers use medical records to deny coverage to patients, collectively bargain for lower reimbursement, and set arbitrary fees that reward tests and procedures. This creates a bizarre positive feedback loop that results in a feeding frenzy of billing and unnecessary charges, tests and procedures. Unlike any other sector, more competition actually drives up costs."

Dappen has it right about competition in the health care marketplace. Studies show that in areas where there are more hospitals competing with each other, hospital bills are higher. This is in part because hospitals jousting for market share all invest in the same cutting-edge equipment. The only way to pay for it is to use it. So they do more tests and more procedures, driving hospital bills higher.

Dappen, who practices in Fairfax, Va., told Dr. Val Jones that "after building a successful traditional family medicine practice he felt morally compelled to cease accepting insurance so that he could be free to practice good medicine without having to figure out how to get paid for it. He noticed that at least 50 percent of office visits were not necessary -- and issues could be handled by phone in those cases. Phone interviews, of course, were not reimbursable by insurance."

Dappen also casts a skeptical eye on the pricey annual physical: "The physical exam is a straw man for reimbursement. Doctors require people to appear in person at their offices so that they can bill for the time spent caring for them. But for long-standing adult patients, the physical exam rarely changes medical management of their condition. It simply allows physicians to be reimbursed for their time."

Again, Dappen is spot on.

"Cutting the middleman (health insurance) out of the equation allows me to give patients what they need without wasting their time in unnecessary in-person visits," Dappen explains. "This also frees up my schedule so that I can spend more time with those who really do need an in-person visit."

How many readers have found themselves sitting in a doctor's waiting room, not because they were sick, but because they needed to renew a prescription? Since insurers don't pay doctors for the time it takes to read an e-mail or to take a phone call and then write a new prescription, many doctors insist that patients come in whenever they need a renewal -- that way, the doctor can bill the insurer. This makes sense if the doctor needs to check your blood pressure to see whether the medication is working. But if he's simply going to chat for a few minutes and write the script, the visit is a waste of time.

"Health insurance is certainly necessary to guard against financially catastrophic illness. And the poor need a safety net beyond what Dr. Dappen can provide," Jones observes. "But for routine care," a practice like Dappen's "can make health care affordable to the middle class, and reduces costs by at least 50 percent while dramatically increasing convenience."

Concierge Medicine

Dr. Val calls Dappen's practice "concierge medicine for the masses." Other physicians practice more traditional "concierge medicine:" customized, round-the-clock care for the elite.

In California, the Ventura County Star reports that local doctors opting out of insurance "spend more time with patients -- and make more money."

Some doctors charge patients an annual "membership fee" -- rather like the fee you might pay to belong to a country club.

"I wish I had done it a long time ago," says Dr. Edward Portnoy. An internist, Portnoy once had a practice of about 2,800 patients. Now he sees roughly 380 people but takes home "about the same profit" thanks to the $1,800 membership fee that each patient pays yearly.

Portnoy spends roughly twice as much time with each patient as he did when he accepted insurance. He explains that he has "more time to do intensive physicals and help patients stay healthy, rather than running from one crisis to the next like a war surgeon doing meatball surgery."

At Dr. Stanley Frochtzwajg's family practice in Ventura, patients don't face annual fees but pay "at the office for whatever services they receive," the paper observes. "A routine office visit is about $80." Patients are then given the paperwork to submit to their insurance companies themselves. "One patient said she ends up paying about 30 percent of the bill but is happy with her care and willing to pay for it."

The paper reports that doctors "don't really like the term 'concierge' or 'boutique' medicine. They prefer labels like personalized, preventive care."

That's understandable; they don't want to sound snobbish. But in truth: "There's not a lot of people who can afford it," says Anthony Wright, executive director of the consumer advocacy group Health Access California. "The reason some people call it boutique medicine is that this is for a well-to-do clientele."

Wright is concerned: "I don't think systems that shift more burden onto the patients are the answer to our broken system or will evolve into more than an isolated alternative. ... The trend of boutique medicine sends the consumers in the direction of you're on your own. Everyone for themselves."

On the other hand, the paper notes, "Carol Miller of Thousand Oaks thinks the $3,600 she and her husband pay in annual fees to see Portnoy is worth it because it brings peace of mind. The money covers an annual physical and a battery of screenings for everything from Alzheimer's to sleep apnea. The fee also covers follow-up that focuses on preventive care.

"There are other perks. People in Portnoy's waiting room find a basket filled with Cliff bars, crunchy peanut butter and chocolate chip bars. Tea and Snapple is served."

Crunchy peanut butter and chocolate chip bars? Is this part of the emphasis on preventive care?

Some worry about what the larger trend means. Are the Millers, who receive an annual "battery of screenings," being overtreated? If insurers reimburse for even 70 percent of unnecessary treatment, are we all paying for boutique medicine?

Dr. Bob Gonzalez, medical director at Ventura County Medical Center, also talked to the reporter and confided that he worries "that less reliance on insurance means fewer people getting health care. They won't be able to afford it."

The specter of more doctors downsizing their practices and seeing fewer patients also alarms Gonzalez. It means patients won't be able to find any doctors, or they could be dumped on an already overburdened doctor.

So yes, he said, more money, less insurance and more time with patients may be good for individual doctors.

"But whether it's good for society or good for patients is the overall question."

Congress Passes Key Medicare Bill

This article originally appeared on Health Beat.

When Ted Kennedy came onto the Senate floor, his colleagues cheered.

He was there to vote on the bill that would prevent a 10.6 percent cut to physicians who treat Medicare patients.

Just before Congress broke for the July 4 holiday, the bill fell just one vote short of the 60 needed to pass.

Today, Kennedy, who is battling a brain tumor, brought that vote to the Senate floor. "Aye," the 76-year-old Kennedy said, grinning and making a thumbs-up gesture as he registered his vote.

Meanwhile, it appeared that Republican members of the Senate had been released to vote as they wished after it became apparent that the 60-vote threshold would be met. Pressure from seniors, the AARP, and the AMA had been mounting on members who voted against the bill June 26.

Republicans resisted voting for the legislation because while it spares physicians, it would reduce the fat subsidy that Congress has been giving private insurers who offer Medicare Advantage. President Bush and Senate Republicans had been strongly against any cut in the Advantage program.

In the end, the vote was 69 to 30 in favor of the bill. President Bush had threatened to veto the bill if it passed the Senate, but the 69-vote margin clears the 67 votes needed to make it veto-proof. And since the House has already voted 355-59 in favor of the bill, Congress appears able to override any veto.

According to Roll Call, 18 Republicans broke with their party to pass the House-backed bill.

Sighs of relief could be heard on the Democratic side as Republicans, beginning with Sen. Kay Bailey Hutchison (R-Texas), chairwoman of the Republican Steering Committee, joined with Democrats to pass the bill. Hutchison's Texas colleague, Sen. John Cornyn (R), who was on the receiving end of an American Medical Association ad blitz slamming his pre-recess position, also ended up voting for the bill.

Other Republicans who voted to proceed to debate on the politically charged bill included Sens. Elizabeth Dole (N.C.), George Voinovich (Ohio), Susan Collins (Maine), Norm Coleman (Minn.), Pat Roberts (Kan.), Gordon Smith (Ore.), Lisa Murkowski (Alaska), Bob Corker (Tenn.), Johnny Isakson (Ga.), Arlen Specter (Pa.) and Mel Martinez (Fla.).

McCain did not appear.

McCain's Predicament

Today, Bloomberg News did an excellent job of explaining why McCain might not show up:

"Senator John McCain will be on the spot, in person or by his absence, when the Senate takes up a measure today to halt a cut in Medicare payments to doctors.

"Republicans have stalled Democratic-backed legislation to reverse the 10.6 percent cut in doctors' fees by reducing payments to insurance companies instead. Democrats on June 26 fell one senator short of the 60 they will need to force a floor vote. Two senators were absent: Edward Kennedy, a Democrat from Massachusetts who is being treated for brain cancer, and McCain of Arizona, the presumptive Republican presidential nominee.

"For McCain, whose schedule indicates he will campaign today in Pennsylvania and Ohio and whose office won't say whether he'll show up in the Senate, the vote is a political dilemma."

"'In one case McCain could be voting against his party and in the other he could be voting against an issue framed as pro-senior and pro-physician,' Robert Blendon, a health policy professor at Harvard University's School of Public Health in Boston, said in a telephone interview yesterday."

It is worth noting that McCain is one of few Republican senators who voted against the original legislation that created Medicare Advantage and provided what many view as windfall subsidies for private insurers.

The Startling Truth About Doctors and Diagnostic Errors

This article originally appeared on Health Beat.

Despite all of the talk about medical errors and patient safety, almost no one likes to talk about diagnostic errors. Yet doctors misdiagnose patients more often than we would like to think. Sometimes they diagnose patients with illnesses they don't have. Other times, the true condition is missed. All in all, diagnostic errors account for 17 percent of adverse events in hospitals, according to the Harvard Medical Practice Study, a landmark examination of medical errors.

Traditionally, these errors have not received much attention from researchers or the public. This is understandable: Thinking about missed and wrong diagnoses makes everyone -- patients as well as doctors -- queasy, especially because there is no obvious solution. But this past weekend the American Medical Informatics Association (AMIA) made a brave effort to spotlight the problem, holding its first-ever "Diagnostic Error in Medicine" conference.

Hats off to Bob Wachter, associate chairman of the Department of Medicine at the University of California, San Francisco, and the keynote speaker at the conference. Wachter shared some thoughts on diagnostic errors through his blog Wachter's World.

Wachter begins by pointing out that a misdiagnosis lacks the concentrated shock value that is needed to grab the public imagination. Diagnostic mistakes "often have complex causal pathways, take time to play out, and may not kill for hours [i.e., if a doctor misses myocardial infarction in a patient], days (missed meningitis) or even years (missed cancers)." In short, to understand diagnostic errors, you need to pay attention for a longer period of time -- not something that's easy to do in today's sound-bite driven culture.

Diagnostic errors just aren't media-friendly. When someone is prescribed the wrong medication and they die, the sequence of events is usually rapid enough that the story can be told soon after the tragedy occurs. But the consequences of a mistaken diagnosis are too diffuse to make a nice, punchy story. As Wachter puts it: "They don't pack the same visceral wallop as wrong-site surgery."

Finally, Wachter observes, it's hard to measure diagnostic errors. It's easy to get an audience's attention by telling it that "the average hospitalized patient experiences one medication error a day" or that "the average ICU patient has 1.7 errors per day in their care."

But we don't have equally clean numbers on missed diagnoses. As a result, he points out, "it's difficult to convince policy makers and hospital executives, who are now obsessing about lowering the rates of hospital-acquired infections and falls" to focus on a problem that is much more difficult to tabulate.

This is a recurring problem in programs that strive to improve the quality of care: We are mesmerized by the idea of "measuring" everything. Yet, too often, what is most important cannot be easily measured. Wachter recognizes the urgency of the problem: "As quality and safety movements gallop along, the need to [address diagnostic errors] grows more pressing," he writes. "Until we do, we will face a fundamental problem: A hospital can be seen as a high-quality organization -- receiving awards for being a stellar performer and oodles of cash from P4P programs -- if all of its 'pneumonia' patients receive the correct antibiotics, all its 'CHF' patients are prescribed ACE inhibitors, and all its 'MI' patients get aspirin and beta blockers.

"Even if every one of the diagnoses was wrong."

Why So Many Errors?

Medicine is shot through with uncertainty; diseases do not always present neatly, in textbook fashion, and every human body is unique. These are just a few reasons why diagnosis is, perhaps, the most difficult part of medicine.

But misdiagnosis almost always can be traced to cognitive errors in how doctors think. When diagnosis is based on simple observation in specialties like radiology and pathology, which rely heavily on visual interpretation, error rates probably range from 2 percent to 5 percent, according to Drs. Eta S. Berner and Mark L. Graber, writing in the May issue of the American Journal of Medicine.

By contrast, in clinical specialties that rely on "data gathering and synthesis" rather than observation, error rates tend to run as high as 15 percent. After reviewing "an extensive and ever-growing literature" on misdiagnosis, Berner and Graber conclude that "diagnostic errors exist at nontrivial and sometimes alarming rates. These studies span every specialty and virtually every dimension of both inpatient and outpatient care."

As the table below reveals, numerous studies show that the rate of misdiagnosis is "disappointingly high" both "for relatively benign conditions" and "for disorders where rapid and accurate diagnosis is essential, such as myocardial infarction, pulmonary embolism, and dissecting or ruptured aortic aneurysms."

STUDY: Shojania et al (2002)
CONDITION: Tuberculosis of the lungs (bacterial infection)
FINDINGS: Reviewing autopsy studies specifically focused on the diagnosis of lung TB, researchers found that 50 percent of these diagnoses were not suspected by physicians before the patient died.

STUDY: Pidenda et al (2001)
CONDITION: Pulmonary embolism (a blood clot blocks arteries in the lungs)
FINDINGS: This study reviewed diagnosis of fatal dislodged blood clots over a five-year period at a single institution. Of 67 patients who died of pulmonary embolism, clinicians didn't suspect the diagnosis in 37 (55 percent) of them.

STUDY: Lederle et al (1994), von Kodolitsch et al (2000)
CONDITION: Ruptured aortic aneurysm (when a weakened, bulging area in the aorta ruptures)
FINDINGS: These two studies reviewed cases at a single medical center over a seven-year period. Of 23 cases involving these aneurysms in the abdomen, diagnosis of rupture was initially missed in 14 (61 percent); in patients presenting with chest pain, the diagnosis of aortic dissection was initially missed in 35 percent of cases.

STUDY: Edlow (2005)
CONDITION: Subarachnoid hemorrhage (bleeding in a particular region of the brain)
FINDINGS: This study, an updated review of published studies on this particular type of brain bleeding, shows about 30 percent are misdiagnosed on initial evaluation.

STUDY: Burton et al (1998)
CONDITION: Cancer detection
FINDINGS: Autopsy study at a single hospital: of the 250 malignant tumors found at autopsy, 111 were either misdiagnosed or undiagnosed, and in just 57 of the cases, the cause of death was judged to be related to the cancer.

STUDY: Beam et al (1996)
CONDITION: Breast cancer
FINDINGS: Fifty accredited centers agreed to review mammograms of 79 women, 45 of whom had breast cancer. The centers missed the cancer in 21 percent of the patients.

STUDY: McGinnis et al (2002)
CONDITION: Melanoma (skin cancer)
FINDINGS: In this study, a second review of 5,136 biopsy samples found that the diagnosis changed over time in 11 percent of the samples (1.1 percent from benign to malignant, 1.2 percent from malignant to benign, and 8 percent with a change in doctors' ranking of how abnormal the cells were), suggesting a not insignificant initial error rate.

STUDY: Perlis (2005)
CONDITION: Bipolar disorder
FINDINGS: The initial diagnosis was wrong in 69 percent of patients with bipolar disorder and delays in establishing the correct diagnosis were common.

STUDY: Graff et al (2000)
CONDITION: Appendicitis (inflamed appendix)
FINDINGS: Retrospective study at 12 hospitals of patients with abdominal pain and operations for appendicitis. Of 1,026 patients who had surgery, there was no appendicitis in 110 (10.5 percent); of 916 patients with a final diagnosis of appendicitis, the diagnosis was missed or wrong in 170 (18.6 percent).

STUDY: Raab et al (2005)
CONDITION: Cancer pathology (microscopic examination of tissues and cells to detect cancer)
FINDINGS: The frequency of errors in diagnosing cancer was measured at four hospitals over a one-year period. The error rate of pathologic diagnosis was 2 percent to 9 percent for gynecology cases and 5 percent to 12 percent for nongynecology cases; errors ranged from problems with the tissues sampled, to preparation problems, to misinterpretation of tissue anatomy under the microscope.

STUDY: Buchweitz et al (2005)
CONDITION: Endometriosis (tissue similar to the lining of the uterus is found elsewhere in the body)
FINDINGS: Digital videotapes of the inside of patients' bodies were shown to 108 gynecologic surgeons. Surgeons agreed only 18 percent of the time as to how many tissue areas were actually affected by this condition.

STUDY: Gorter et al (2002)
CONDITION: Psoriatic arthritis (red, scaly skin coupled with joint inflammation)
FINDINGS: Two patients with psoriatic arthritis made a total of 23 visits to joint and motor specialists; the diagnosis was missed or wrong in nine visits (39 percent).

STUDY: Bogun et al (2004)
CONDITION: Atrial fibrillation (abnormal heart beat in the upper chambers of the heart)
FINDINGS: A review of physician readings of electrocardiograms [graphical recordings of the heart's electrical activity] that concluded a patient suffered from this abnormal heartbeat found that 35 percent of the patients were misdiagnosed by the machine, and the error was detected by the reviewing clinician only 76 percent of the time.

STUDY: Arnon et al (2006)
CONDITION: Infant botulism (toxic bacterial infection in newborns' intestines)
FINDINGS: Study of 129 infants in California suspected of having botulism during a five-year period; only 50 percent of the cases were suspected at the time of admission.

STUDY: Edelman (2002)
CONDITION: Diabetes (high blood sugar due to insufficient insulin)
FINDINGS: Retrospective review of 1,426 patients with laboratory evidence of diabetes showed that there was no mention of diabetes in the medical record of 18 percent of patients.

STUDY: Russell et al (1988)
CONDITION: Chest x-rays in the emergency department
FINDINGS: One third of x-rays were incorrectly interpreted by the emergency department staff compared with the final readings by radiologists.

Overconfidence

Misdiagnosis rarely springs from a "lack of knowledge per se, such as seeing a patient with a disease that the physician has never encountered before," Berner and Graber explain. "More commonly, cognitive errors reflect problems gathering data, such as failing to elicit complete and accurate information from the patient; failure to recognize the significance of data, such as misinterpreting test results; or most commonly, failure to synthesize or 'put it all together.'"

The breakdown in clinical reasoning often occurs because the physician isn't willing or able to "reflect on [his] own thinking processes and critically examine [his] assumptions, beliefs, and conclusions." In a word, the physician is too "confident."

Indeed, Berner and Graber find an inverse relationship between confidence and skill. In one study they reviewed, the researchers looked at diagnoses made by medical students, residents and physicians, and asked them how certain they were that they were correct. The good news is that while medical students were less accurate, they also were less confident; meanwhile the attending physicians were the most accurate and highly confident. The bad news is that the residents were more confident than the others, but significantly less accurate than the attending physicians. In another study, researchers found that residents often stayed wedded to an incorrect diagnosis even when a diagnostic decision support system suggested the correct diagnosis.

In a third study of 126 patients who died in the ICU and underwent autopsy, physicians were asked to provide the clinical diagnosis and also their level of uncertainty. Level 1 represented complete certainty, level 2 indicated minor uncertainty, and level 3 designated major uncertainty. Here the punch line is alarming: Clinicians who were "completely certain" of the diagnosis before death were wrong 40 percent of the time.

Overconfidence, or the belief that "I know all I need to know," may help explain what the researchers describe as a "pervasive disinterest in any decision support or feedback, regardless of the specific situation." Studies show that "physicians admit to having many questions that could be important at the point of care, but which they do not pursue. Even when information resources are automated and easily accessible at the point of care with a computer, one study found that only a tiny fraction of the resources were actually used."

Research shows that physicians tend to ignore computerized decision-support systems, often in the form of guidelines, alerts and reminders. "For many conditions, consensus exists on the best treatments and the recommended goals," Berner and Graber point out. Nevertheless, a comprehensive review of medical practice in the United States found that the care provided deviated from recommended best practices half of the time. In one study, the researchers suggest that the high rate of noncompliance with clinical guidelines relates to "the sociology of what it means to be a professional" in our health care system: "Being a professional connotes possessing expert knowledge in an area and functioning relatively autonomously." Many physicians have yet to learn that 21st century medicine is too complex for anyone to know everything -- even in a single specialty. Medicine has become a team sport.

But while it's easy to blame medical "arrogance" for the high rate of errors, "there is substantial evidence that overconfidence -- that is, miscalibration of one's own sense of accuracy and actual accuracy -- is ubiquitous and simply part of human nature," Berner and Graber write. "A striking example derives from surveys of academic professionals, 94 percent of whom rate themselves in the top half of their profession. Similarly, only 1 percent of drivers rate their skills below that of the average driver."

In another study published in the same issue of the American Journal of Medicine, Pat Croskerry and Geoff Norman note that such equanimity regarding one's own skills can lead to what's called "confirmation bias." People "anchor" on findings that support their initial assumptions. Given a set of information, it's much easier to pull out the data that proves you right and pat yourself on the back than it is to look at the contradictory evidence and rethink your assumptions. Indeed, Croskerry and Norman observe, "It takes far more mental effort to contemplate disconfirmation -- by considering all the other things it might be -- than confirmation."

Making things all the more difficult is the fact that, at a certain point, the alternative to confirmation bias -- what Croskerry and Norman call "consider the opposite" -- becomes impractical. If a doctor embraces uncertainty, he could easily become paralyzed.

What doctors need to do is to simultaneously make a decision -- and keep an open mind. Often, a doctor must embark on a course of treatment as a way of diagnosing the condition -- all the time knowing that he may be wrong.

Too often, Berner and Graber observe, physicians narrow the diagnostic hypotheses too early in the process, so that the correct diagnosis is never seriously considered. Reliance on advanced diagnostic tests can encourage what they call "premature closure." After all, high-tech diagnostic technologies offer up hard-and-fast data, fostering the illusion that the physician has vanquished medicine's ambiguity.

But in truth, advanced diagnostic tools can miss critical information. The problem is not the technology, but how we use it. Some observers suggest that the newest and most sophisticated tools are more likely to produce false negatives because doctors accept the results so readily.

"In most cases, it wasn't the technology that failed," explains Dr. Atul Gawande in Complications: A Surgeon's Notes on an Imperfect Science. "Rather, the physician did not consider the right diagnosis in the first place. The perfect test or scan may have been available, but the physician never ordered it." Instead, he ordered another test -- and believed it.

"We get this all the time," Bill Pellan of Florida's Penallas-Pasca County Medical Examiner's Office told the New York Times a few years ago. "The doctor will get our report and call and say: 'But there can't be a lacerated aorta. We did a whole set of scans.'

"We have to remind him we held the heart in our hands."

Autopsies

Sometimes physicians are overly confident; sometimes they narrow their hypothesis too early in the diagnostic process. Sometimes they rely too heavily on advanced diagnostic tests and accept the results too quickly. As I explained in part one of this post, these are some of the reasons why physicians misdiagnose their patients up to 15 percent of the time.

"Complacency" (i.e., the attitude that "nobody's perfect") also is a factor, reports Drs. Eta S. Berner and Mark L. Graber in the May issue of the American Journal of Medicine. "Complacency reflects tolerance for errors, and the belief that errors are inevitable," they write, "combined with little understanding of how commonplace diagnostic errors are. Frequently, the complacent physician may think that the problem exists, but not in his own practice ..."

It is crucial to recognize that physicians are not simply deceiving themselves: In our fragmented healthcare system, many honestly don't know when they have misdiagnosed a patient. No one tells them -- including the patient.

Sometimes a patient who isn't getting better simply leaves the doctor and finds someone else. His original doctor may well assume that he was finally cured. Or the patient may be discharged from the hospital, relapse three months later, and go to a different ER where he discovers that his symptoms have returned because he was, in fact, misdiagnosed. The doctors who cared for him at the first hospital have no way of knowing; they think they cured him. In other cases, the patient gets better despite the wrong diagnosis. (It is surprising how often bodies heal themselves.) Meanwhile, both doctor and patient assume that the diagnosis was right and that the treatment "worked."

In still other cases, the patient dies, and because everyone assumes that the diagnosis was correct, it is listed as the "cause of death" -- when in fact, another condition killed the patient.

When giving talks to groups of physicians on diagnostic errors, Graber says that he frequently "asks whether they have made a diagnostic error in the past year. Typically, only 1 percent admit to having made such a mistake."

Here, we reach the heart of the problem: what Berner and Graber call "the remarkable discrepancy between the known prevalence of diagnostic error and physician perception of their own error rate." This gap "has not been formally quantified and is only indirectly discussed in the medical literature," they note, "but [it] lies at the crux of the diagnostic error puzzle and explains in part why so little attention has been devoted to this problem."

One cannot expect doctors to learn from their mistakes unless they have feedback: At one time, autopsies provided physicians with the information they needed. And the results were regularly discussed at "morbidity and mortality" conferences, where doctors became Monday-morning quarterbacks, discussing what they could have done differently.

But today, "autopsies are done in 10 percent of all deaths; many hospitals do none," notes Dr. Atul Gawande in Complications: A Surgeons Notes on an Imperfect Science. "This is a dramatic turnabout. Throughout much of the 20th century, doctors diligently obtained autopsies in the majority of all deaths ... Autopsies have long been viewed as a tool of discovery, one that has been used to identify the cause of tuberculosis, reveal how to treat appendicitis and establish the existence of Alzheimer's disease.

"So what accounts for the decline?" Gawande asks. "In truth, it's not because families refuse -- to judge from recent studies, they still grant their permission up to 80 percent of the time. Instead, doctors once so eager to perform autopsies that they stole bodies [from graves] have simply stopped asking.

"Some people ascribe this to shady motives," Gawande continues. "It has been said that hospitals are trying to save money by avoiding autopsies, since insurers don't pay for them, or that doctors avoid them in order to cover up evidence of malpractice. And yet," he points out, "autopsies lost money and uncovered malpractice when they were popular, too."

Gawande doesn't believe that fear of malpractice has driven the decline in autopsies. "Instead," he writes, "I suspect, what discourages autopsies is medicine's 21st century, tall-in-the-saddle confidence."

This is an important point. Autopsies have fallen out of fashion in recent years: "Between 1972 and 1995, the last year for which statistics are available, the rate fell from 19.1 percent of all deaths to 9.4 percent." A major reason for the decline over this period is that "imaging technologies such as CT scanning and ultrasound have enabled doctors to 'see' such obvious internal causes of death as tumors before the patient dies," says Dr. Patrick Lantz, associate professor of pathology at Wake Forest University Baptist Medical Center. Nowadays, an autopsy can seem a waste of time and resources.

Gawande agrees: "Today we have MRI scans, ultrasound, nuclear medicine, molecular testing and much more. When somebody dies, we already know why. We don't need an autopsy to find out ... Or so I thought ... " Gawande then goes on to tell the story of a autopsy that rocked him. He had completely misdiagnosed a patient.

What autopsies show

The autopsy has been described as "the most powerful tool in the history of medicine" and the "gold standard" for detecting diagnostic errors. Indeed, Gawande points out that three studies done in 1998 and 1999 reveal that autopsies "turn up a major misdiagnosis in roughly 40 percent of all cases."

A large review of autopsy studies concluded that, "in about a third of the misdiagnoses, the patients would have been expected to live if proper treatment had been administered," Gawande reports. "Dr. George Lundberg, a pathologist and former editor of the Journal of the American Medical Association, has done more than anyone to call attention to these figures. He points out the most surprising fact of all: The rate at which misdiagnosis is detected in autopsy studies has not improved since at least 1938."

When Gawande first heard these numbers he couldn't believe them. "With all of the recent advances in imaging and diagnostics ... it's hard to accept that we have failed to improve over time." To see if this really could be true, he and other doctors at Harvard put together a simple study. They went back into their hospital records to see how often autopsies picked up missed diagnoses in 1960 and 1970, before the advent of CT, ultrasound, nuclear scanning and other technologies, and then in 1980, after those technologies became widely used.

Gawande reports the results of the study: "The researchers found no improvement. Regardless of the decade, physicians missed a quarter of fatal infections, a third of heart attacks and almost two-thirds of pulmonary emboli in their patients who died."

But these numbers may exaggerate the rate of error. As Berner and Graber observe, "Autopsy studies only provide the error rate in patients who die." One can assume that the error rate is much lower in patients who survived.

"For example, whereas autopsy studies suggest that fatal pulmonary embolism is misdiagnosed approximately 55 percent of the time, the misdiagnosis rate for all cases of pulmonary embolism is only 4 percent ..." a large discrepancy also exists regarding the misdiagnosis rate for myocardial infarction: although autopsy data suggest roughly 20 percent of these events are missed, data from the clinical setting (patients presenting with chest pain or other relevant symptoms) indicate that only 2 percent to 4 percent are missed."

Still, they acknowledge that the error rate among the living is hardly trivial: in studies in which laymen were trained to pose as patients suffering from specific symptoms, "internists missed the correct diagnosis 13 percent of the time. Other studies have found that physicians can even disagree with themselves when presented again with a case they have previously diagnosed."

On the question of whether the diagnostic error rate has changed over time, Berner and Graber quote researchers who suggest that the near-constant rate of misdiagnosis found at autopsy over the years probably reflects two factors that offset each other:

  1. diagnostic accuracy actually has improved over time (more knowledge, better tests, more skills);
  2. but as the autopsy rate declines, there is a tendency to select only the more challenging clinical cases for autopsy, which then have a higher likelihood of diagnostic error. A long-term study of autopsies in Switzerland (where the autopsy rate has remained constant at 90 percent) supports the theory that the absolute rate of diagnostic errors is, as suggested, decreasing over time.


Nevertheless, nearly everyone agrees, the rate of diagnostic errors remains too high.

We need to revive the autopsy, Gawande argues. For "autopsies not only document the presence of diagnostic errors, they also provide an opportunity to learn from one's errors (errando discimus) if one takes advantage of the information.

"The rate of autopsy in the United States is not measured anymore," he observes, "but is widely assumed to be significantly 10 percent. To the extent that this important feedback mechanism is no longer a realistic option, clinicians have an increasingly distorted view of their own error rates.

"Autopsy literally means "to see for oneself," Gawande observes, and despite our knowledge and technology, when we look we are often unprepared for what we find. Sometimes it turns out that we had missed a clue along the way or made a genuine mistake. Sometimes we turn out wrong despite doing everything right.

"Whether with living patients or dead, we cannot know until we look. ... But doctors are no longer asking such questions. Equally troubling, people seem happy to let us off the hook. In 1995, the United States National Center for Health Statistics stopped collecting autopsy statistics altogether. We can no longer even say how rare autopsies have become."

If they are going to reflect on their mistakes, physicians need to "see for themselves."

How the Mainstream Media Hype Health Care

This article originally appeared on Health Beat.

"False Hopes, Unwarranted Fears: The Trouble with Medical News Stories." If you find the headline alarming, you should read the editorial, published just last week in PLoS Medicine. There, the journal's editors summarize what the Health News Review has discovered over the past two years while evaluating medical stories about new products and procedures throughout the mainstream media.

"It's not a pretty picture," says Gary Schwitzer, the University of Minnesota School of Journalism professor who publishes the online project.

In a video linked to the Health News Review website, Schwitzer points out that "about 65 percent of the time" major news organizations are not telling viewers and readers how "big the potential harms" of new treatments are -- or "how small the potential benefits."

Meanwhile, about three-quarters of the stories about a new product or procedure fail to talk about how much it costs. "At a time when the U.S. is spending 16 percent of GDP on healthcare, I find this unfathomable," says Schwitzer. "No one is asking, 'How are we going to pay for it?' 'Who will have access to these things?' 'Who's to say that we even need some of these things?' This is what we need to discuss."

Ultimately, "these stories are painting a 'kid in the candy-store' picture of U.S. healthcare," Schwitzer charges, "whereby everything is made to look terrific, risk-free, and without a price tag. Nothing could be further from the truth."

Health News Review is supported by a grant from the nonprofit Foundation for Informed Medical Decision Making, which was founded in 1989 by Dr. Jack Wennberg and colleagues. Its mission is to assure "that people understand their choices and have the information they need to make sound decisions affecting their health and well-being."

But rather than helping people understand that they have choices, news stories trumpeting a new product often fail to compare it to existing alternatives. Schwitzer explains: "We expect that a story would put the new approach being discussed into the context of alternatives, with some discussion of the possible advantages or disadvantages of the new approach compared with [treatments already on the market]."

Instead, says Schwitzer, good-news stories about "medical breakthroughs" are "feeding people" who believe there is "a pill for every ill, creating unrealistic expectations and undue demand for unproven ideas. This may help explain why we are spending 16 percent of GDP on healthcare -- and not getting the value for our dollars."

In addition to the editorial, the May issue of PLoS Medicine includes an article by Schwitzer detailing the shortcomings of the 500 medical stories that Health News Review has evaluated over the past two years -- stories published in the top 50 U.S. newspapers and the three major newsweeklies, carried on the Associated Press' wires, and aired on morning and evening news at ABC, NBC and CBS.

Too often, reporters rely on sources that have an axe to grind: "Of 170 stories that cited an expert or a scientific study," Schwitzer observes, "85 (50 percent) cited at least one with a financial tie to the manufacturer of the drug, a tie that was disclosed in only 33 of the 85 stories."

For example, a story that ran on ABC World News in April of 2007 heralding a new test for prostate cancer "did not disclose what was abundantly clear even in a Johns Hopkins news release: the principal investigator receives a share of the royalties received on sales of the test. He is also a paid consultant to the manufacturer of the test. There were no quotes from anyone expressing skepticism about the development."

Stories that hype hope can also spread fear. The reviewers, who gave the ABC piece a "2" on a 10-point scale, criticized it for leading with a dramatic graphic that stated: "Prostate cancer in the U.S.: 1.6 million men undergo prostate biopsies each year."

"That graphic, setting the stage for the story, can be misleading and confusing to viewers," the reviewers noted. "It could easily be inferred that 1.6 million men each year develop prostate cancer. And therefore we rate it as disease-mongering. The American Cancer Society estimates that during 2007 about 218,890 new cases of prostate cancer will be diagnosed in the United States -- a number not provided in the story ... it [also] would have been helpful to simply show the number of men diagnosed and the number of men who actually die to ... help men to understand that this cancer isn't always a killer." Finally, "at the least, the story could have included one line saying that screening is controversial regardless of method chosen, because it isn't yet clear if treatment saves lives."

But it isn't just television news that falls short by relying too heavily on sources who have a financial interest in the product. Top-tier newspapers fall into the same trap. A 2006 New York Times story headlined "Drug Doubles Endurance" also received a "2" from reviewers, in part because it failed to provide "more sources [expressing] healthy skepticism to balance the overwhelming enthusiasm from other sources, several of whom had ties to the drug companies promoting the substance." Then, too, the story failed to note that "there is an important difference between the results from a few research studies in animals and demonstration of efficacy in people." (The old mice vs. men problem.)

Who does the reviews? How do the reporters respond?

Health News Review uses a team of reviewers from around the country. "Some have a master's in public health; some are RNs; some are MDs from places like Duke, UCSF, Harvard and Dartmouth," Schwitzer explains. "There are people trained in evidence-based medicine. In a sense we are trying to promote evidence-based health journalism."

Three reviewers analyze each article. (All reviewers are listed online). As the publisher of the project, Schwitzer is always the third reviewer of each story. "I'll mediate any disagreements between the first two reviewers," he explains, "gaining consensus before publishing the final review."

The rating instrument includes 10 criteria, also used by similar websites in Australia and Canada. All of the criteria -- which range from "Adequately explains and quantifies potential harms" to "Compares the new idea with existing alternatives" -- are addressed in the Association of Health Care Journalists' Statement of Principles. For each of the 10 criteria, the story is given a rating of "satisfactory," "unsatisfactory" or "not applicable."

The goal of the exercise is "not media-bashing" says Schwitzer. "It's outreach. When we evaluate a health news story, we e-mail the evaluation to the journalist who wrote that story. We're saying: 'Come see how we have reviewed your story; learn from it, engage us -- and the public -- in a discussion of where things could be done better.'"

"And their responses have been overwhelmingly positive," he reports. "It's quite sobering to read the reviews," wrote one journalist. "I imagine you've heard all the laments from reporters, but the lack of both space and research time is enormously frustrating (and will probably drive me out of journalism in the end)."

Cutbacks at many newspapers, plus a lack of training, also make it difficult for journalists to do the job that they would like to do. One week they're reporting on crime, the next week they're covering cancer. Yet the public does not understand that, even at our leading newspapers, a reporter may be writing about something that he or she does not fully understand.

Editors and publishers also can get in the way of telling the true story. As Schwitzer observes: "Reporters and writers have been receptive to the feedback; editors and managers must be reached if change is to occur."

As the PLoS editorial points out: "There is also a broader context in which medical stories get exaggerated -- the 24-hour news cycle means that media organizations are battling for audience share, which in turn means that the press has moved toward sensationalism, entertainment and opinion. Headlines are often written by news editors, rather than the article's reporter, and are particularly prone to exaggeration. All of this sensationalism strays far from the reality of biomedical research, a slow process that yields small, incremental results based on long-term studies that always have weaknesses."

I know, from experience, that publishers and editors are sometimes more concerned about ratings and circulation than they are about the facts. While working as a journalist, I was told on more than one occasion: "Our readers don't like negative stories. They want to hear good news."

Headlines about medical miracles sell newspapers. Articles that explain that the breakthrough fizzled do not. Unless the bad news is truly sensational ("400 Women Felled by Botox Treatments in L.A.") readers and viewers may not be terribly interested in tales of side effects, risks and complications. Nevertheless, while some editors worry about what their customers "want to hear," good journalists know that it's their job to inform people -- to tell them what they "need to know."

Schwitzer's project should open up some much-needed dialogue about the difference -- especially when the topic is so important.

The Dangers of Consumer-Driven Medicine

Medical device makers are taking direct-to-consumer (DTC) advertising to a perilous new level. In a piece titled "Crossing the Line in Consumer Education?" that appeared in the May 22 issue of The New England Journal of Medicine (NEJM), Drs. William E. Boden and George A. Diamond tackle the issue, arguing that a new campaign to peddle medical devices directly to patients warrants close scrutiny. Manufacturers are inviting consumers to decide not only what is best for them, but what is best for their surgeons. This is "consumer-driven medicine" at its most dangerous.

Boden and Diamond focus on a 60-second television spot for Johnson & Johnson's drug-eluting coronary stent, "the Cypher," which debuted during last year's Thanksgiving match-up between the Dallas Cowboys and the New York Jets.

The commercial has all of the hallmarks of the drug industry's highly polished DTC advertising: First, we're introduced to "the tough guy" -- a once-powerful man who now is "cornered by chest pains" and sits slumped in his arm chair. Then, we are shown how he can reclaim his life in a montage of joyous physical activity accompanied by upbeat music. Of course, "this product isn't for everyone," we're told. But "life is wide open. It all depends on what you've got inside."

In the campaign to put the health care "consumer" in the driver's seat, where he can have "control" and "choice," J&J is breaking new ground. This ad isn't for a pill that you buy in a pharmacy but rather for a coronary stent, a wire mesh device that is placed in an artery which has been blocked by fatty deposits. Doctors first thread a tiny balloon into the artery and inflate it to clear the blockage; then they insert a stent into the artery, and a second balloon expands the stent to keep the newly cleared blood vessel wide open.

"Unlike a drug," Boden and Diamond point out, "whose use merely requires an office visit to a physician and a prescription the patient can fill at a pharmacy, a specialized medical device such as a stent can be selected and implanted only by someone with a very sophisticated medical understanding that no member of the lay public could realistically expect to gain from a DTCA campaign."

This is an important point. It's bad enough that some patients are now sold drugs via a sound bite, but it is even more pernicious to pretend that the pros and cons of a medical device can be condensed into a 30-second spot. In this case you're not just popping a pill that you can stop taking if you don't like the way it makes you feel. Medical devices are literally installed in our bodies, and they stay there; even when short-term results look promising, complications often do not become evident until well after implantation.

Moreover, it's imperative that a surgeon is comfortable with the device he is using. In Money-Driven Medicine, George Cipoletti, co-founder of Apex Surgical, a company that focuses on joint replacement products, explains that, when it comes to devices, "90 percent of success is determined not by the device itself but by how good the surgeon is at implanting that particular device -- how much experience he has with it."

John Cherf, a Chicago knee surgeon, adds that surgical technique accounts for "80 to 85 percent" of a successful operation. "Think of it this way," said Cherf. "If you gave Tiger Woods 20-year-old golf clubs, and gave me the newest clubs, he'd still kick my butt."

This is another reason why Boden and Diamond find it "almost unimaginable that a patient would challenge an interventional cardiologist's judgment about the use of a particular stent or that a cardiologist would accede to a patient's request for a particular stent on the basis of the information gleaned from a television ad. Indeed, the notion that television viewers, inspired by such an ad, would go to their physicians and request not only a stent but a specific brand and model of stent is frightening, if not utterly absurd."

Yet why else would J&J spend millions on television advertising? The company's goal is to create demand -- a "buzz" that will cause patients to ask about the product, and that will make some hospitals and surgeons feel that they must use it.

This is what happened with another J&J product, a spinal device called "Charité." After being approved by the FDA in 2004, Charité was heavily promoted. By the fall of 2005, more than 3,000 of the spinal discs had been implanted -- even though only two of the nation's eight largest insurers had agreed to pay for the operation. Some surgeons were questioning the safety of the device, but patients who read favorable reports about Charité online or in the press were beginning to demand the operation.

As a result, some hospitals were willing to absorb the cost of the operation even though insurers wouldn't reimburse. Dr. John Brockvar, chief of neurosurgery at Wyckoff Heights Hospital in Brooklyn, told Dow Jones Newswires that his hospital gave him permission to implant the device "because it was important to be on the leading edge."

"Some doctors say they're worried they will lose business if they don't offer the Charité option to patients," The Wall Street Journal reported. "There's a feeling that it isn't adequately proven, but there's anxiety about being left behind." [my emphasis]

In an almost pure example of money-driven, consumer-driven medicine, manufacturers intent on profits pushed consumers to push doctors and hospitals to use a product that they were not convinced was safe. This is not how we want medical decisions to be made.

Today, Charité remains controversial. There are many questions about long-term complications, and last spring, Medicare announced that it would no longer cover Charité for patients over sixty.

J&J's stent, Cypher, also has its critics -- which may be one reason why the company is pumping up promotion via television advertising. In the past, drug-makers have poured money into television ads for the same reason that movie studios resort to expensive television advertising: the critics are panning the product. If a drug-maker is having a hard time selling its new product directly to physicians -- either because the reviews in the medical journals are mixed, or because it is a "me-too" product that appears to offer little benefit over older, less expensive drugs -- it goes directly to the consumer, who is less likely to be aware of what the critics are saying.

This may be what J&J is doing with Cypher, one of the new "drug-eluting stents" that, unlike older, less expensive "bare-metal stents," release drugs that are supposed to prevent arteries from re-clogging.

If you were to judge drug-eluting stents solely by the Cypher advertisement, you might think they're a remarkable sure-fire innovation. After all, as the commercial asserts, "when your arteries narrow, so does your life." Who wouldn't want to lead a better life thanks to a device that -- again, according to the advertisement -- is "studied," "trusted" and "proven"?

Unfortunately, while drug-eluting stents have been studied, they are far from "proven." In fact, there is much debate over whether or not they're good options for folks with clogged arteries. In a 2007 NEJM article, William Maisel of Harvard Medical School asserted that, since the FDA approval of drug-eluting stents in 2003, "concerns about an increased risk of late stent thrombosis [i.e. late onset blood clots] have arisen and have been exacerbated by insufficient and conflicting information in the public domain."

This is putting it diplomatically. According to Maisel, a major 2006 study found "that between 7 and 18 months after implantation, the rates of nonfatal myocardial infarction [i.e. heart attacks], death from cardiac causes, and ... stent thrombosis were higher with drug-eluting stents than with bare-metal stents [i.e. those that don't release drugs]."

Equally disconcerting is a Swedish study cited by the NEJM, in which doctors examined a computer registry of every Swedish stent patient for the years 2003 and 2004. This analysis of almost 20,000 people found that patients who received drug-eluting stents were slightly more likely to die than those who had old-fashioned bare-metal stents. A Columbia University study, meanwhile, reported that the four-year rate of stent thrombosis was 1.2 percent among patients who had received the Cypher, as opposed to 0.6 percent for those who had received bare-metal stents -- double the rate.

On the other hand, a more recent but smaller study of 6,552 cases published in the NEJM comparing bare metal to drug-eluting stents for so-called "off-label" use (use not specifically approved by the FDA), found a lower rate of complications and no increased risk of death or heart attack for the drug-coated stents. But in the same issue of the NEJM, a study suggested that if patients have more than one blocked artery, bypass surgery provides a lower risk of death and heart attacks than do procedures involving any type of stent.

Questions about when to use stents, and what kind of stent to use, are far from resolved. As Boden and Diamond point out, this underlines the absurdity of J&J's effort to sell Cypher "to millions of people who are ill-equipped to make judgments about the many clinically relevant but subtle and complex therapeutic issues that even specialists continue to debate."

But just how likely is it that surgeons really will respond to consumer demands?

Much to their chagrin, doctors are finding that pressure from patients does in fact change their behavior. In one of the most compelling analyses of this dynamic to date, a team led by researchers at the University of California sent trained actors on 298 visits to 152 primary-care physicians, portraying patients with major depression or adjustment disorder. The actors presented doctors with three types of scenarios: requests for specific brands of medications, general requests for medications without naming brands, and no requests for medication.

For major depression, physicians prescribed antidepressants at a rate of 53 percent for brand-specific requests; 76 percent for general requests; and 31 percent for no requests. For adjustment disorder, physicians prescribed antidepressants at a rate of 55 percent for brand-specific requests; 39 percent for general requests; and 10 percent for no requests. In other words, people with identical conditions were prescribed drugs at dramatically different rates depending on what they asked for.

The Wall Street Journal's report that hospitals and doctors feel that they must experiment with J&J's spinal disc, Charité, suggests that this logic can carry over to medical devices. One can envision a spike in procedures and surgeries thanks to patient demand. But do we really want a system where knee-jerk patient response to 30-second commercials trumps medical expertise?

Patients want to be able to ask questions. They want their doctors to take the time to give them detailed answers. But the more DTC ads encourage patients to make demands of their doctors, the more doctors and patients are positioned as antagonists rather than collaborators.

That's a recipe for friction, not patient satisfaction. One last question: even if patients get the Cypher they want, what happens when they develop a blood clot? Who is responsible then?

Why Some Hospitals Are Allowing Unnecessary Suffering

"His heart filled virtually his whole chest," recalls Dr. Diane Meier describing her very first patient, an 89-year-old suffering from end-stage congestive heart failure. 

It was the first day of Meier's internship at a hospital in Portland, Oregon, and after being assigned 23 patients, she was suddenly told that one of them, who had been in the Intensive Care Unit for months, was "coding." She raced to the ICU, where the resident told her to put in a "central line."

"I didn't know how," Meier admits.  "I felt overwhelmed and inadequate. Then, the patient died ...

"Everyone just walked out of the room," she remembers.  I stood there. I still sometimes flash back on that scene: the patient, naked, lying on the table, strips of paper everywhere, the room empty. This was my patient. I felt I was supposed to do something -- but I didn't know what."

Meier left the room and, in the hallway, saw the patient's wife. "I walked right past her," she recalls, nearly shuddering at her own cowardice.  I didn't know what to say. I didn't even say 'I'm sorry.' As a physician, I didn't think that I was supposed to do that. "

I heard Dr. Diane Meier tell this story at a conference for medical students at Manhattan's Mt. Sinai School of Medicine last week. When she finished, she asked her audience, "What is the hidden curriculum here? What does this story tell you?"


"Once the patient dies, he no longer matters," said one student.


"If we can't save the patient, the patient doesn't matter," added another.


Meier drew a third lesson: "Before he died, this patient had spent two months in the ICU. We had done everything possible to prolong the dying process. As a doctor, you have to step back and say, 'What is this experience telling me, and is this right?'"


As a palliative care specialist, Meier spends much of her time with dying patients.  For many, "palliative care" offers a middle road between pulling out all the stops and simply giving up hope. Like traditional "hospice" care, palliative care focuses on "comfort" rather than "cure," emphasizing pain management and easing the emotional trauma of facing death, both for the patient and for the family.  But palliative care also includes procedures aimed at treating the symptoms of the disease.


In the past, Meier explains, physicians have seen caring for a terminally ill patient as an "either/or" situation: "Either we are doing everything possible to try to prolong your life -- or when there is 'nothing more that we can do,' only then do we make the switch to providing comfort measures. This dichotomous notion -- that you can do one thing alone and then the other thing alone later -- has nothing to do with the reality of what patients and their families go through."


In her talk last week, Meier explained that her first patient was one of three who marked turning points in her life as a physician. Originally, she trained to become a geriatrician, a doctor who cares for people over 65. "I think because I was very close to my grandfather," she explained, "and because I'm a 'lumper,' not a 'splitter,'" referring to the distinction between doctors who prefer to treat the whole patient, head to toe, and those who prefer to specialize in a body part: the foot, for example, or the eye.


Her interest in treating the elderly brought her to Mt. Sinai, which, at the time, had the only Department of Geriatrics in the country. But as her career unfolded, she found herself "becoming more and more alienated from medicine. Here, in the hospital, everyone was running around, ostensibly trying to help the patient, but actually often hurting the patient. I thought about quitting. I had a fantasy of opening a bakery/book shop where I could read and eat brownies ..." she told the med students.


"Then I met a patient I will call Mr. Santanaya."


Meier first encountered Santayana when she was walking down a hospital hallway and heard a man screaming and moaning in pain. She looked into his room and there he was, pinioned to his bed, hand and foot, in "four-point restraint."


"I went to the nurse and asked, 'Why is this man in a four-point restraint?"  The nurse called for the intern.


"I'll never forget this kid's face," Meier recalled "To me, he looked about twelve years old. And terrified.


Meier asked the question again,  and the intern explained: "He has lung cancer that spread to his brain and he's delirious. We put a feeding tube up into his nose and down to his stomach, and he pulled it out. So we tied his hands. Then he pulled it out with his knees and feet -- so we tied his knees and feet."


"The feeding tube is very uncomfortable," Meier told the students. "It makes the nose and esophagus raw. I asked the intern, 'Why do we have to do this?'"


"He looked at me with tremendous distress in his eyes: 'Because if we don't, he'll die."


"I realized he didn't know any better," said Meier.  "Neither did the resident or the attending physician. I realized that this was an educational problem.


"They cared about the patient. This wasn't callousness or indifference or venality.  They just didn't know when too much is too little." So Mr. Santayana spent 33 days tied hand and foot to his bed before he died. He spoke no English, but during that time, he kept screaming "Ayudeme! Ayudeme!" (Help Me! Help Me!)


Why didn't Mr. Santayana's physician intervene to do something to help him? "He didn't have a primary care physician because he was on Medicaid," Meier explained. So it was left to the hospital staff, and not knowing what else to do, they simply followed procedure.


"This was the early 1990s, and that is when I decided to shift my career to try to make up for what happened to Mr. Santayana," said Meier.  Then she got lucky.


Dr. Robert Butler, founder of the Department of Geriatrics at Mount Sinai and a friend of George Soros, urged her to apply for funding from Soros's newly formed Project on Death in America. Meier and three colleagues won the funding and in 1995, with help from Soros and the United Hospital Fund, launched the Hertzberg Palliative Care Institute at Mount Sinai School of Medicine. The Robert Wood Johnson Foundation also invested in developing content. In 1999 Meier and Dr. Christine Cassel founded the Center to Advance Palliative Care (CAPC). As a result of CAPC's program, by 2005 the number of hospital-based palliative care programs in the U.S. had roughly doubled, to 1,240, and some 3,100 health care professionals had been taught CAPC's methods and ethics.


The third patient Meier told the students about last week was a 24-year-old she called "Kate." Kate had just graduated from college and had worked and saved enough money to go to Australia. There she developed the worst headache of her life. "She called her mother from Sydney and her mother came to get her," Meier told her audience. "In retrospect, she might have been better off if she had stayed in Australia."


The problem was that Kate had no health insurance.  She was only 24 and she thought she didn't need insurance.


Her mother brought her directly from the airport to Mt. Sinai, "where she was admitted directly to the oncology service, not to a doctor," Meier explains. Like Mr. Santayana, she would be on Medicaid and so wouldn't have her own doctor. Kate was diagnosed with leukemia.


"I met Kate on day 7 when a consult called me to say that they had a manipulative drug-seeking patient with acute myeloid leukemia," Meier recalled. "By then, Kate had earned the contempt and hostility of the house staff because she was constantly screaming for pain-killers.


"It turned out that no one knew the half-life of the opiate they were giving her -- not the attending physician, not the resident, not the intern."


Meier then turned to her audience, made up largely of second-year medical students. "Does anyone here know the half-life?" she asked, naming the pain-killer.


No one did. (The half-life of a drug is the time it takes the body to eliminate half of a dose -- a rough guide to how long it will be before a pain-killer wears off.)


"What they were giving her provided relief for only 90 minutes," said Meier, "and they were giving it to her every six hours."  After 90 minutes , Kate would begin ringing for nurse. Then, after a half hour, when no one came, she would begin ringing more and more frantically, and finally begin screaming. "Between four and six hours, she would just be screaming," said Meier.


This had been going on for seven days.  "The pain specialists wouldn't see her because she had no insurance."


"I doubled the dose and ordered that it be given to Kate every three hours, around the clock," said Meyer. "And before long, she was transformed into the sweet, charming intelligent person she always had been."


"Kate had become the victim of iatrogenic pseudo-addiction," Meier added. She wasn't an addict, but she was behaving like an addict and seemed like an addict -- a pseudo-addiction created by her doctors, which makes it an "iatrogenic disease," an illness caused, inadvertently, by medical care. 


Why hadn't her mother tried to persuade the doctors to give her more pain-killers? "Kate was the middle child in an Irish family of seven kids, and one of her brothers had become addicted to drugs. As a result, the mother was terrified of opiates," said Meier. "The palliative care team had to spend time with the parent, explaining that pain kills."


The only possible hope for Kate was a bone marrow transplant. Because she was on Medicaid, this would be very hard to get. "It took six weeks of begging to get someone to take her," Meier recalled. "And then the transplant failed.


"While she was dying, Kate told us that the worst part of the experience had been those first six days when she was labeled a 'manipulative drug addict.' She was marginalized because her doctors did not know how to administer the opiates.


"Untreated pain is a medical emergency," Meier told the students. "The reason no one here knew the half-life of that opiate is because learning about pain-killers is not a priority in medical curriculums." In fairness, this is the sort of thing that doctors on the ward often look up. But in this case, no one even tried to look it up.


"The relief of suffering is a fundamental part of medicine," Meier concluded. "In this country there is a tremendous amount of stigma associated with opiates. When you are caring for patients, and you leave an order for the  nurse to administer the pain-killers, remember, there's a real chance that she'll think, 'This is dangerous. I don't want something bad to happen on my shift.  Okay, I'll give it to you -- but I won't give you enough.'


"This is why pain is so poorly managed in this country."


In Italy, by contrast, a patient dying of cancer is often sent home, with morphine, to die in his own bed. His wife administers the morphine, and she is given enough to let him have as much as he wants -- when he wants it. In the U.K., where hospice care was invented in the 1960s, there are many more palliative care specialists than in the U.S.


Here, medicine is all about "cure," not about "care." "Defeating death at any cost: that is the priority," Meier told me. "It comes ahead of reducing suffering or considering the quality of the patient's life. If you look at NIH funding," she pointed out, "you see that this is where the money goes -- to cure cancer, to prevent all heart disease and stroke."


This is not to say that Meier favors cutting back on end-of-life care because it is so expensive and so much money is "squandered" during the final year of a patient's life. "The problem is, of course, that we don't know who is in their last year -- or their final three months," Meier observed. "The fact that we spend so much on these patients in their final months of life is not necessarily a bad thing," she added. "These are the sickest people in the hospital, who need the most care. We shouldn't say: 'We're wasting money on the dying.' But," she added, "we should be asking, 'Is this the best care? Is it appropriate care?'"


Clearly, we need more palliative care specialists like Meier. But this is another case where we don't pay enough for "thinking medicine" -- medicine that involves talking to and listening to the patient -- as opposed to cutting or radiating him.


"When a three-person palliative care team made up of a doctor, a nurse and a psychologist spends 90 minutes in a meeting with a family, Medicare would probably pay $130 to $140 -- for all three people," Meier told me. "And Medicare is one of the better payers. This explains why Meier earns $100 for every several thousand dollars that her husband, an invasive cardiologist, takes home. "Though," Meier said mildly, "it would be hard to say that one of us is practicing more sophisticated medicine."

Whatever Happened to American Longevity?

Life expectancy is a pretty simple concept: it's an estimate of how long the average person lives. Anyone can understand that. So how is this for a compelling data point: if you look at life expectancy in nations around the globe, you'll find that over the past 20 years, the U.S. has sunk from No. 11 to No. 42. In other words, a baby born in 2004 in any one of 41 other countries can expect to live longer than his or her American counterpart.

This may come as a surprise. Sure, we all know the health care system in the U.S. is broken, but life expectancy isn't just tied to medicine -- it's also related to quality of life in a larger sense. (I can live in a nation with the best health care system in the world, but if it's in the throes of civil war, my life expectancy will be short). As we all know, the American standard of living is the envy of the world.  After all, we're the richest country on the globe. So what gives?

While some of us are rich, the average American is not.  And while the rich are living longer, the poor are living shorter.  Factor in the profit motive that drives U.S. healthcare, and you will begin to understand why American medicine has done little to heal the gap between rich and poor.  Over the past twenty-five years, we have poured money into healthcare, but have paid relatively little attention to public health.

This may seem a bold claim, but last month the Congressional Budget Office (CBO) issued a report that provides the numbers: "In 1980," the CBO found, "life expectancy at birth was 2.8 years more for the highest socioeconomic group than for the lowest. By 2000, that gap had risen to 4.5 years."

The report notes that "the 1.7-year increase in the gap" between socioeconomic groups "amounts to more than half of the increase in overall average life expectancy at birth between 1980 and 2000." In other words, average life expectancy has continued to rise in the U.S., but the gains have been heavily concentrated among the better-off.

Citizens of countries that don't tolerate as much inequality enjoy longer lives. According to numbers from the Census Bureau and the National Center for Health Statistics, a baby born in the United States in 2004 will live an average of 77.9 years. In the U.K., an '04 baby can expect to live 78.7 years; in Germany, 79 years; in Norway, 79.7 years; in Canada, 80.3 years; in Australia, Sweden, and Switzerland, 80.6 years; and in Japan, a newborn can expect to live 81.4 years.


Somehow or other, when they hear these figures, most Americans just shrug. Indeed, "it is remarkable how complacent the public and the medical profession are in their acceptance of" our low ranking when it comes to life expectancy, "especially in light of trends in national spending on health," Dr. Steven Schroeder, a professor in the Department of Medicine at the University of California, San Francisco, wrote in the New England Journal of Medicine last year.


"One reason for the complacency may be the rationalization that the United States is more ethnically heterogeneous than the nations at the top of the rankings, such as Japan, Switzerland, and Iceland. But," Schroeder pointed out, "even when comparisons are limited to white Americans, our performance is dismal. And even if the health status of white Americans matched that in the leading nations, it would still be incumbent on us to improve the health of the entire nation."

In the OECD countries that outrank us, the gaps between rich and poor are not as great and, not coincidentally, all have universal health insurance. (As Maggie wrote in an earlier post on Health Beat, in countries that are mainly middle-class, there tends to be more social solidarity. People identify with each other, and are more willing to pool their resources to pay for healthcare for everyone.)




But having access to health care is only a small part of health. Schroeder identifies five factors that determine health and longevity: "social circumstances, genetics, environmental exposures, behavioral patterns and health care."  Of these five, when "it comes to reducing early deaths," he points out, "medical care has a relatively minor role."  Indeed, "inadequate health care accounts for only 10% of premature deaths, yet it receives by far the greatest share of resources and attention." 


Socioeconomic status is the strongest predictor of health, above and beyond access to health care. This is because socioeconomic status encompasses access to health care along with a variety of other factors. Even when the poor have insurance, they are less likely to have access to cutting-edge medical discoveries; they're more likely to smoke, more likely to be obese, more likely to live in unsafe or unhealthy environments. They also tend to be less educated, meaning that they are less able to manage chronic diseases.


These facts are reflected in life expectancy. African-Americans are more likely to live in poverty than other Americans: as a result, black men can expect to live six years less than white men, and black women four years less than white women.  Education, another critical component of socioeconomic status, also contributes to the story. The CBO reports that "the gap in life expectancy at age 25 between individuals with a high school education or less and individuals with any college education increased by about 30 percent" from 1990 to 2000. "The gap widened because of increases in life expectancy for the better educated group," the report notes. "Life expectancy for those with less education did not increase over that period."


This trend is clear: since 1980, affluent members of society have made gains while the have-nots have, at best, run in place and, at worst, lost ground. Another recent study, published in PLoS Medicine, takes a broader look at the problem by going all the way back to 1960 to examine how life expectancies have differed across U.S. counties. (Counties were used because they are the smallest geographic units for which death rates are collected, thus allowing for a precise comparison of subgroups.) The authors, who hail from Harvard, UCSF, and the University of Washington, discovered that "beginning in the early 1980s and continuing through 1999, those who were already disadvantaged did not benefit from the gains in life expectancy experienced by the advantaged, and some became even worse off."


1980 was a watershed year. Indeed, the study reports that from 1960 to 1980, life expectancy increased everywhere. But beginning in the early 1980s, the differences in death rates across counties began to increase: "The worst-off counties no longer experienced a fall in death rates, and in a substantial number of counties, mortality actually increased, especially for women..."


So what was so special -- or rather, harmful -- about the 1980s?


1980 was the year that a conservative agenda firmly replaced the "War on Poverty" that LBJ had begun in the 1960s. For the next 28 years, the trend would continue as corporate welfare and tax cuts for the wealthy replaced programs for the poor and middle-class.


As the authors of a 2006 PLoS Medicine study note, "in the 1980s there was a general cutting back of welfare state provisions in America, which included cuts to public health and antipoverty programs, tax relief for the wealthy, and worsening inequity in the access to and quality of health care." By contrast, in the 1960s, "civil rights legislation and the establishment of Medicare set out to reduce socioeconomic and racial/ethnic inequalities and improve access to health care."


But after 1980, the '06 PLoS Medicine study shows, rates of premature mortality across socioeconomic groups began to diverge, rolling back the gains of the 1960s and 1970s. In a stunning conclusion, the study's authors reported that if all people in the U.S. population had experienced the same health gains as the most advantaged group -- whites in the highest income bracket -- "14 percent of the premature deaths among whites and 30 percent of the premature deaths among people of color would have been prevented."


In sum, the stronger social safety net of the 1960s helped to increase longevity for all Americans; its erosion in the 1980s created a discrepancy between the haves and the have-nots. Indeed, given that socioeconomic status is the strongest predictor of health, it's noteworthy that the lowest quintile of earners in the U.S. saw its income fall by 15 percent between 1979 and 1993, while the highest 20 percent saw their income grow by 18 percent over the same period. The poverty rate in the U.S. was cut nearly in half in the 1960s; from 1980 to 1989, it inched down by just one percentage point.


Clearly, the decline of American longevity is related to an increase in American inequality. But it would be short-sighted to stop our analysis here. It's also worth asking, where have we been spending our health care dollars?


"To the extent that the United States has a health strategy, its focus is on the development of new medical technologies and support for basic biomedical research," Schroeder observes. "We already lead the world in the per capita use of most diagnostic and therapeutic medical technologies, and we have recently doubled the budget for the National Institutes of Health. But these popular achievements are unlikely to improve our relative performance" when it comes to longevity.


If we want to cut the number of premature deaths, we might put more emphasis on smoking cessation clinics. "Smoking causes 440,000 deaths a year in the United States," notes Schroeder, who directs the Smoking Cessation Leadership Center at UCSF. "Smoking shortens smokers' lives by 10 to 15 years, and those last few years can be a miserable combination of severe breathlessness and pain."  44.5 million Americans still smoke.  "Smoking among pregnant women is a major contributor to premature births and infant mortality. Smoking is increasingly concentrated in the lower socioeconomic classes and among those with mental illness or problems with substance abuse," Schroeder explains.  "Understanding why they smoke and how to help them quit should be a key national research priority. Given the effects of smoking on health, the relative inattention to tobacco by those federal and state agencies charged with protecting the public health is baffling and disappointing."


Kaiser Permanente of northern California has shown that it can be done. When Kaiser implemented a multisystem approach to help smokers quit, Schroeder reports that "the smoking rate dropped from 12.2% to 9.2% in just 3 years. Of the current 44.5 million smokers, 70% claim they would like to quit. Assuming that one half of those 31 million potential nonsmokers will die because of smoking, that translates into 15.5 million potentially preventable premature deaths. Merely increasing the baseline quit rate from the current 2.5% of smokers to 10% -- a rate seen in placebo groups in most published trials of the new cessation drugs -- would prevent 1,170,000 premature deaths. No other medical or public health intervention approaches this degree of impact. And we already have the tools to accomplish it."
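Schroeder's figures hang together; a quick back-of-envelope check, treating the percentages he quotes as exact, reproduces each of his headline numbers:

```python
# Back-of-envelope check of Schroeder's smoking-cessation arithmetic,
# using only the figures quoted in the passage above.

smokers = 44.5e6                      # current U.S. smokers
want_to_quit = 0.70 * smokers         # "70% claim they would like to quit"
preventable = 0.5 * want_to_quit      # "one half ... will die because of smoking"

# Raising the annual quit rate from 2.5% to 10% among would-be quitters,
# with half of each extra quitter's deaths thereby averted:
extra_quitters = (0.10 - 0.025) * want_to_quit
deaths_prevented = 0.5 * extra_quitters

assert abs(want_to_quit - 31.15e6) < 1e4        # his "31 million"
assert abs(preventable - 15.575e6) < 1e4        # his "15.5 million"
assert abs(deaths_prevented - 1_170_000) < 5e3  # his "1,170,000"
```

The exact product comes to about 1,168,000, which Schroeder evidently rounds to 1,170,000 -- a scale no single medical intervention approaches.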


The poor also are more likely to be obese, "in part because of inadequate local food choices and recreational opportunities," says Schroeder.  Fattening foods are cheaper than fresh fruit, vegetables and fish, particularly if you are shopping in inner cities. Gyms are too expensive for low-income families; exercising outdoors can be dangerous, and in inner cities, public schools often lack playgrounds and gymnasiums.


"Psychosocial stress" also leads poorer Americans to engage in "other behaviors that reduce life expectancy such as drug use and alcoholism," Schroeder notes. And even when they avoid these behaviors, " people in lower classes are less healthy and die earlier than others." A polluted environment, combined with uncertainty and worry, takes a toll.


Rather than focusing solely on medicine and medical care, Schroeder is committed to strategies that would improve public health. In the U.S. there is a sharp division between the two, with public health always the poor relation.


"It's harder, because there's stigma attached to it," Schroeder explains. "There's a sense among some that if a large portion of the nation's population is obese or sedentary, drinks or smokes too much, or uses illegal drugs, that's their own fault or their own business."


"We often get a double-standard question,"  he continues.  "Critics who object to investing more in programs that could help drug addicts and alcoholics, ask: Well, don't many of these people relapse?


"Yes, of course," Schroeder responds. "But is it worth treating pancreatic cancer, which has a 5 percent survival rate, at most? Yes. So the odds of successfully treating drug abuse or alcoholism are actually better than in many of the serious illnesses that society, without question, wants us to treat."

Schroeder is right: When allocating health care dollars, we eagerly spend far more on cutting-edge drugs that might give a cancer patient an extra five months than on drug rehab clinics that could make the difference between dying at 28 and living to 68. 


Again, 1980 marks a turning point, notes Marcia Angell, a Senior Lecturer at Harvard Medical School and the former editor-in-chief of NEJM. "Between 1960 and 1980, prescription drug sales were fairly static as a percent of US gross domestic product, but from 1980 to 2000, they tripled."


This wasn't just happenstance, says Angell. A major catalyst of the pharma boom was the Bayh-Dole Act of 1980, a law that "enabled universities and small businesses to patent discoveries emanating from research sponsored by the National Institutes of Health, the major distributor of tax dollars for medical research, and then to grant exclusive licenses to drug companies." In other words, the Bayh-Dole Act commoditized medical research.


Before 1980, "taxpayer-financed discoveries were in the public domain, available to any company that wanted to use them," says Angell. As a result, long-term, collaborative tinkering could help to create new and effective medications. But Bayh-Dole made research proprietary and profitable.


After Bayh-Dole, drug research seemed to be less about making real medical progress, and more about doing the bare minimum to create a patentable product. And so began the age of me-too drugs, which do little to promote health and instead exist to increase market share. In a Boston Globe op-ed last year, Angell observed that, "according to FDA classifications, fully 80 percent of drugs that entered the market during this decade are unlikely to be better than existing ones for the same condition."


Why are we willing to devote 13 or 14 percent of our $2.2 trillion health care budget to prescription drugs, while refusing to help the quarter of the population that still smokes?


"It is arguable that the status quo is an accurate expression of the national political will --
a relentless search for better health among the middle and upper classes," Schroeder acknowledges. [our emphasis]  "This pursuit is also evident in how we consistently outspend all other countries in the use of alternative medicines and cosmetic surgeries and in how frequently health 'cures' and 'scares' are featured in the popular media. The result is that only when the middle class feels threatened by external menaces (e.g., secondhand tobacco smoke, bioterrorism, and airplane exposure to multidrug-resistant tuberculosis) will it embrace public health measures. In contrast, our investment in improving population health -- whether judged on the basis of support for research, insurance coverage, or government-sponsored public health activities -- is anemic."


We're hopeful that this will change. In going to medical conferences over the past year, Maggie has met an impressive number of very, very bright 20-somethings who are devoting their careers to public health. And they understand that "medicine" and "public health" are not separate disciplines.

Just How Secure Is Your Employer-Based Health Insurance?

Last week, the Economic Policy Institute released a disturbing report revealing just how many white-collar workers have lost their employer-based health insurance in recent years -- even though they didn't change jobs.

Many workers believe that if they hold onto their job, their insurance is safe. Professionals with jobs near the top of the occupational ladder are especially likely to assume that their employer is not going to cut their coverage. That may well have been true in the 1990s, when the job market was tight -- but not today.

The EPI report shows that in just the first six years of this century, the share of U.S. workers with employer-provided health insurance (EPHI) fell from 51.1 percent to 48.8 percent. Moreover, workers in white-collar occupations -- including executives, managers and workers in professional specialties -- were just as likely as blue-collar workers to lose their safety net.

Perhaps this shouldn't come as a surprise, since employers typically pay a much larger share of premiums for higher-income employees. So as insurance premiums soar (up 78 percent since 2001), employers are beginning to chafe under the very costly burden of providing first-class benefits to white-collar employees. (Insurance premiums rose "only" 6.1 percent in 2007, but going forward, experts expect sharper increases because the cost of medical technology continues to skyrocket).

Most employers will just shift more costs to employees in the form of higher co-pays and deductibles. But some will decide that they cannot continue to offer insurance.

"No one is immune to the slow unraveling of the employer-based health insurance system," warns Heidi Shierholz, EPI economist and co-author, with Jared Bernstein, of the report "A Decade of Decline: The Erosion of Employer-Provided Health Care in the United States and California, 1995-2006."

"This dramatic loss of employer-provided health insurance since 2000 is not simply driven by the loss of high-quality jobs, such as those in the manufacturing sector," the report observes. "Rather, it is caused by the significant decline in employers providing coverage within existing jobs across the board. The burden of these employer cuts is not carried by part-time or marginal workers. Rather, the most dramatic loss is among workers with the strongest connection to the labor force."

Note, for example, the startling declines, from 2000 to 2006, in the share of workers covered by EPHI, shown in the bottom half of the table below. At the top of the job ladder, in the first three occupations listed, the percentage of executives, professionals and technicians with employer-based coverage fell by between 3 and 5.6 percentage points.

The top half of the table below shows what percentage of workers are employed in various occupations; the bottom half reveals what percentage in each occupation have employer-provided health insurance.

[Table: share of workers in each occupation, and share with employer-provided coverage, 1995-2006]

Drilling a little deeper, the bottom half of the table below tells you more about the people who lost their insurance. For example, from 1995 to 2006, workers with a college degree were just as likely to lose their EPHI as those who didn't have a degree. Meanwhile, from 2000 to 2006 the share of 45- to 54-year-old workers with EPHI -- a group that includes many people who are most likely to need health care -- fell by a full 4 percentage points. (The small dip in the share of those over 55 with employer-based insurance is due to the fact that many people in this age group retire or partially retire, the report explains.)

[Table: EPHI coverage by education and age, 1995-2006]

"EPHI is disappearing across the entire age and education spectrum, including prime-age workers and those with college degrees," the report's authors note. "These findings show that health insecurity is now a broadly shared American experience.

"As a consequence," they say, "the solution requires a broadly shared approach. The erosion of the employer-based system, with losses accumulating in even high-end sectors, along with the critical need to control healthcare costs, indicates that the provision of coverage needs to be at least partly 'taken out of the market.'"

As employers back out of the benefits business, individuals who try to get insurance on their own will discover just how expensive it is. Rates vary by state, but family coverage in a state like Virginia can cost as much as $24,000 a year.

This is why, in the very near future, "we will need 'universal programs' that pool risk across large populations," Bernstein and Shierholz advise. Anyone who doesn't have EPHI (or doesn't like or can't afford the EPHI that they have) could join these groups. In addition, if universal insurance is going to cover everyone at an affordable price -- even if they are sick -- the authors conclude that we will have to "mandate coverage, with subsidies for those unable to meet the mandate."

Although an individual mandate requiring that everyone join an insurance pool is not a popular concept, the report's authors are correct. Note that they are economists -- not politicians. And while economists can be dreary, the nice thing is that they are not worried about whether you will vote for them. And therefore, rather than telling people what they want to hear, they tend to address the reality of the numbers. Even better, these are excellent economists (I know Bernstein). So, not only are they confronting the numbers, they truly understand the numbers.

Unless a mandate requires that everyone have insurance, some young, healthy people would wait until they became sick to join a pool, expecting people who had been paying premiums into that program for years to now cover them. If that happened, ultimately only the sick and the elderly would buy insurance -- and prices would levitate to a point that virtually no one could afford it. If we are going to have a mandate, however, we have to provide adequate subsidies on a sliding scale for those who cannot afford the coverage.
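The adverse-selection logic above can be made concrete with a toy model -- all of the numbers here are illustrative assumptions, not figures from the report. As healthy members drop out of a voluntary pool, the break-even premium ratchets up:

```python
# Toy model of adverse selection in a voluntary insurance pool.
# All dollar figures are invented for illustration.

def premium_spiral(members, years=5):
    """Each year the premium is set to cover the pool's average cost;
    members whose expected cost is well below the premium drop out,
    raising next year's average -- the classic "death spiral"."""
    history = []
    for _ in range(years):
        if not members:
            break
        premium = sum(members) / len(members)  # break-even premium
        history.append((len(members), round(premium)))
        # Assume healthy people leave once the premium exceeds
        # twice their own expected annual cost.
        members = [c for c in members if premium <= 2 * c]
    return history

# A pool of 10 people with expected annual costs from $500 (young,
# healthy) to $9,500 (older, sicker).
pool = [500, 1500, 2500, 3500, 4500, 5500, 6500, 7500, 8500, 9500]
for size, prem in premium_spiral(pool):
    print(f"pool size {size:2d}, premium ${prem:,}")
```

With a mandate, the pool stays at its full size and the premium stays at the all-in average; the spiral only starts once healthy people are free to leave.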

How do we do that? This brings me back to the "basics" of healthcare spending, a series that I've been doing over the past few months, showing where our healthcare dollars are going -- and where we might pare waste, while pushing back against healthcare manufacturers who are gouging America.

The Mythology of Boomers Bankrupting Our Healthcare System

Berlin, March 13, 2008 -- By bringing 600 government and industry leaders together from more than 50 countries, the "World Health Care Congress Europe" (WHCCE) last month offered a splendid window on the wide variety of solutions that countries around the world are using as they struggle toward healthcare reform. One constant theme of the conference: "No One Thing Works."

When the three-day conference ended, it also was apparent that developed countries share many of the same problems. One that stands out is the fact that our populations are aging. Each country faces the same question: How will a shrinking work force possibly pay for the medicine their nations' retirees will need?

This brings me to Princeton economist Uwe Reinhardt's speech on the very first day of the conference. The only American to speak at WHCCE, Reinhardt focused on what he called "the folklore that people bring to the healthcare policy table." By nature an iconoclast, Reinhardt spent the next 20 minutes shattering some of the myths that have become part of the received wisdom among policymakers.

Begin with the notion that an aging population is a major factor driving healthcare inflation. In the United States this is accepted as a justification for why the nation's healthcare bill now equals more than $2 trillion -- and why we must expect it to climb ever higher.

Bad news is often more gripping than good news, and "if you want to be a popular speaker, you need to feed the paranoia of your audience," Reinhardt observed, pointing to the first slide of his PowerPoint presentation -- a chart illustrating just how quickly we can expect a horde of wrinkly boomers to take over the nation. Some stooped and shriveled, others proudly bloated, these former members of the Pepsi generation will be far more demanding, we're told, than the World War II veterans who preceded them.



A second slide is even more distressing, revealing that healthcare spending on patients over 75 averages about five times what we spend on 40-year-olds.




Yet the next graph that Reinhardt offers is a little puzzling.




Here, we see that the United States spends close to $7,000 per person on care -- even though its population is younger than those of most developed countries, including Germany, Italy and Japan. (Because of a slightly higher fertility rate and an annual intake of 900,000 legal immigrants, the median age in the United States will rise by just three years, to 39, over the next quarter-century, before the aging of America really starts to accelerate.) Meanwhile, Japan's population has been graying for some time, yet it spends only $1,000 per person. Could eating fish really make that much difference?

Reinhardt's next graph provides the explanation.




It turns out that when you look at estimates of growth in healthcare spending from 1990 to 2030, a senescent citizenry plays only a minor role in the projected jump from $585 billion (what we laid out for healthcare in 1990) to $14,026 billion (what analysts say we'll ante up in 2030, assuming we continue in our profligate ways).
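As a sanity check on those two figures, the implied average annual growth rate over the 40-year span works out to roughly 8 percent -- a quick sketch:

```python
# Implied compound annual growth rate behind the projection cited above:
# $585 billion (1990) to $14,026 billion (2030).
start, end = 585, 14_026          # $ billions
years = 2030 - 1990

cagr = (end / start) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # about 8.3% a year
```

That steady 8-percent-a-year compounding, not a sudden geriatric surge, is what turns $585 billion into $14 trillion.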

What will be the biggest factor pushing the tab so much higher? Innovation. "The healthcare industry will continue developing new stuff for every age group," Reinhardt explains. Will that "new stuff" -- in the form of new drugs, devices, tests and procedures -- be worth it? Some of it will be. Some won't. Indeed, as this article from Health Affairs reveals, over the past 12 years, rising spending on new medical technologies designed to address heart disease has not meant that more patients have survived. In many areas, we seem to have reached a point of diminishing returns. This also is true in the drug industry, where most new entries are "me-too" drugs -- little different from products already on the market.

As I have often discussed, it is usually suppliers, not "patient demand," that drives healthcare inflation. The big ticket items are not the ones patients ask for; they're the ones companies advertise -- or that doctors and hospitals tell us we need. Few chronically ill patients ask to be hospitalized; not many cry out for dialysis, or the chance to spend thousands on cancer drugs; it's the rare person who asks if he can die in an ICU.

"In truth, the aging of the population is not a big problem," Reinhardt says. We really don't have to worry about greedy geezers suddenly clamoring for more care than we can afford. For one, they won't grow old all at once. They'll grow old just as they were born -- over a period of many years.




As Reinhardt mentioned earlier, a speaker who wants to grab his audience's attention may well scale a chart so that the demographic change looks like a wave that could wipe us out -- but the truth is much less sensational.




This doesn't mean that healthcare spending won't continue to levitate. "But what will drive costs in coming years will come not from the demand side of the equation, but from the supply side," says Reinhardt, repeating his theme. We can be certain that, without some significant reforms, suppliers will continue to invent new products for every age group, charging us more and selling us more -- using whatever methods it takes, from direct-to-consumer advertising to promises of near immortality and perpetual youth (just as 120 can be the new 80, 55 can be the new 35!) -- if we just swallow enough pills and replace enough body parts. (Of course, remembering to swallow the pills could become a problem around 101, but that's another post.)

Moreover, healthcare is labor intensive -- and by 2070, the number of U.S. workers per Medicare beneficiary will have dropped from 3.4 (in 2000) to 1.9. We are already experiencing a shortage of registered nurses -- which has helped raise wages. "Today an RN in California often makes more than a pediatrician," Reinhardt notes. (Though this says more about how niggardly we are when paying our pediatricians than about how extravagant we are when paying nurses. See this post on physicians' pay.)
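A back-of-the-envelope way to see what that worker-to-beneficiary ratio implies -- holding per-beneficiary costs constant, which is of course a simplifying assumption:

```python
# Workers per Medicare beneficiary: 3.4 in 2000, a projected 1.9 in 2070.
workers_per_beneficiary_2000 = 3.4
workers_per_beneficiary_2070 = 1.9

# If each beneficiary costs the same, the cost borne by each worker
# scales with the inverse of the ratio.
burden_multiplier = workers_per_beneficiary_2000 / workers_per_beneficiary_2070
print(f"Per-worker burden grows by a factor of {burden_multiplier:.2f}")

# An obligation that cost each worker $1,000 a year in 2000:
print(f"$1,000 in 2000 -> ${1000 * burden_multiplier:,.0f} in 2070")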



Looking ahead, we'll probably need 50 percent more nurses than we employed in 2000. Given the laws of supply and demand, this all but ensures that nurses' wages will continue to rise.




So between the endless inventiveness of those who would overmedicate us and the unavoidable costs of a labor-intensive industry in an aging society, it is the supply side of medicine that is likely to push prices higher. This, says Reinhardt, is what policymakers should be thinking about.

But, he emphasizes, it doesn't have to happen. "If we begin to purge our healthcare system of Waste, Fraud and Abuse," we could save billions, Reinhardt notes. And when it comes to caring for the elderly, he suggests, "If we develop healthcare information technology, we could use it to monitor seniors in their homes -- instead of in nursing homes."

This is just one example of how the United States could bring costs down on the supply side. In addition, Medicare could use its clout to negotiate for lower drug and device prices -- just as other nations do. We could become more discriminating about what we buy from the healthcare industry's suppliers -- insisting on independent medical evidence that the new product or service really is worth the higher price. And patients could refuse to sign on for an elective procedure like knee replacement or prostate surgery unless they are given a chance to share in weighing risks against benefits. (See my post on "informed choice").

Finally, Sweden offers proof that an aging population doesn't have to spell financial disaster. The second day of the conference I interviewed Mona Heurgren, an economist at Sweden's National Board of Health and Welfare, and she pointed out that "while we have the oldest population in the EU, our healthcare costs haven't been rising. Over the last 15 years or so, the share of our citizens who are older has been growing, yet healthcare spending has stayed level at about 9 percent of GDP."

How has Sweden managed to buck the trend? For one, 95 percent of the country's hospitals and doctors use electronic medical records, which make for fewer errors and much greater efficiency. (As of three years ago, only 15 percent to 20 percent of U.S. doctors' offices and 20 percent to 25 percent of U.S. hospitals had implemented electronic medical records, and adoption continues to move slowly as we try to decide who should pay for healthcare IT.)

Moreover, in Sweden, preventive care is free. So no one is tempted to skip a needed Pap smear. Diabetics go for their eye checkups. In the United States, by contrast, many 50-something patients put off care that they can't afford, waiting until they reach the magic age of 65 and qualify for Medicare. At that point, the catch-up care they need can be very expensive and in some cases, their health has been permanently damaged.

Finally, in Sweden, long-term care is included in the national healthcare package, which is financed almost entirely through income taxes. Heurgren estimates that the share of a family's taxes that is used to fund healthcare equals roughly 10 percent of the average household's income. This is roughly what a median-income family in the United States lays out for health insurance -- if it is lucky enough to have an employer able and willing to pay slightly more than 50 percent of the family's healthcare premiums. (Comprehensive insurance for a family now fetches close to $13,000; if the employer pays $7,000, that leaves a family earning $60,000 with premiums of $6,000. Of course, in the United States that family also would face co-pays and deductibles, making healthcare more expensive, as a percentage of gross income, than in Sweden).
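The comparison in that parenthetical can be checked with a few lines of arithmetic; the $2,000 out-of-pocket figure below is a hypothetical add-on for illustration, not a number from the article:

```python
# The back-of-the-envelope U.S. case from the text: a family earning
# $60,000, a $13,000 policy, the employer paying $7,000 of it.
income = 60_000
total_premium = 13_000
employer_share = 7_000

family_premium = total_premium - employer_share      # $6,000
share_of_income = family_premium / income            # 0.10
print(f"Family pays ${family_premium:,} = {share_of_income:.0%} of income")

# Sweden's tax-financed share is also roughly 10% of household income --
# but with no co-pays or deductibles on top. Add a hypothetical $2,000
# in U.S. out-of-pocket costs and the U.S. share climbs:
out_of_pocket = 2_000
print(f"With out-of-pocket: {(family_premium + out_of_pocket) / income:.1%}")
```

So even before deductibles, the lucky insured U.S. family is paying about what the Swedish family pays in taxes; the extras tip the balance.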

But as Heurgren puts it, with a modest shrug, "We're just a small country in the north." She is suggesting that Sweden is too small to serve as a model for larger nations. It is easier, in many ways, for Sweden to manage the challenges of 21st century medicine in a country where most people are middle-class and social solidarity is part of the culture.

21st Century Medicine Fraught With Miscommunication and Human Error

In the most recent issue of the New England Journal of Medicine, Dr. Thomas Bodenheimer defines the coordination of medical care as "the deliberate integration of patient care activities between two or more participants involved in a patient's care to facilitate the appropriate delivery of healthcare services." Or, to put it in layman's terms: doctors working together to get things right.

The value of this sentiment should be self-evident, but the coordination of medical care is more complex than it initially seems -- even when discussing admittedly uncomplicated concepts. Consider the "hand-off," that transitional moment when a patient is passed from one provider to another (e.g., from primary care physician to specialist, specialist to surgeon, surgeon to nurse, etc.) -- or is discharged. This transition is unavoidable. As Bodenheimer points out, modern healthcare necessitates a "pluralistic delivery system that features large numbers of small providers, [which] magnif[ies] the number of venues such patients need to visit." Twenty-first century medicine is too complex for one-stop shopping.

Inescapable though it may be, the hand-off is fraught with pitfalls. As Quality and Safety in Health Care (QSHC), a publication of the British Medical Journal, noted in January, the simple transition of a patient from one caretaker to another represents a gap that is "considered especially vulnerable to error."

Even the most common hand-off -- your standard referral from primary care physician to specialist -- is not risk-free. As Dr. Bob Wachter recently noted in his blog, "In more than two-thirds of outpatient subspecialty referrals, the specialist received no information from the primary care physician to guide the consultation." Sadly, the radio silence goes both ways: "In one-quarter of the specialty consultations," Wachter says, "the primary care physician received no information back from the consultant within a month."

These missteps are indicative of what can go wrong during the hand-off, such as, according to QSHC, "inaccurate medical documentation and unrecorded clinical data." Such misinformation can lead to extra "work or rework, such as ordering additional or repeat tests" or getting "information from other healthcare providers or the patient" -- a sometimes arduous process that can "result in patient harm (e.g., delay in therapy, incorrect therapy, etc)."

Bodenheimer points out other troubling statistics that speak to the problems with fragmented, discontinuous medical care -- and that extend well beyond the physician-specialist back-and-forth. Indeed, poorly integrated care is evident across the spectrum of medical services. In the nation's emergency rooms, for example, 30 percent of adult patients who underwent emergency procedures reported that their regular physician was not informed about the care they received. Another study "showed that 75 percent of physicians do not routinely contact patients about normal diagnostic test results, and up to 33 percent do not consistently notify patients about abnormal results." And an academic literature review concluded that a measly "3 percent of primary care physicians [are] involved in discussions with hospital physicians about patients' discharge plans."

If you're sensing a pattern here, you should be: Most of the gaps in care are failures of communication involving primary care physicians. That's because, at least in theory, primary care docs are the touchstone for patient care -- the glue that holds it all together.

But primary care has become an increasingly precarious occupation. The problem is that, relative to specialists, PCPs do a lot more for relatively little pay. And they are expected to do more each day. Bodenheimer notes that "it has been estimated that it would take a physician 7.4 hours per working day to provide all recommended preventive services to a typical patient panel, plus 10.6 hours per day to provide high quality long-term care." So it should come as no surprise that "forty-two percent of primary care physicians reported not having sufficient time with their patients."
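Adding up the two estimates Bodenheimer cites makes the impossibility plain:

```python
# The workload estimates cited above, per working day for a typical
# patient panel.
preventive_hours = 7.4   # all recommended preventive services
chronic_hours = 10.6     # high-quality long-term (chronic) care

total = preventive_hours + chronic_hours
print(f"{total} hours/day -- before a single acute-care visit")
```

Eighteen hours a day of recommended care, before anyone walks in with the flu: the arithmetic alone explains why "sufficient time with patients" is the exception.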

With such a heavy time-crunch, it's not surprising that some things can fall through the cracks -- like follow-ups, double-checking, and generally going the extra mile (which really shouldn't be extra at all).

Making things worse is our fee-for-service system, which, as Dr. Kevin Pho (aka blogger KevinMD) notes, pressures "primary care physicians to squeeze in more patients per hour" and thus encourages a short attention span vis-à-vis individual patients. The volume imperative is strongest for PCPs, who make significantly less money than do their specialist peers. As Maggie has pointed out in the past, primary care doctors can expect to pull in -- at the high end -- just under one-third as much as surgeons or radiologists.

Predictably, the all-work, little-reward life of PCPs is increasingly unsexy to newly minted doctors. Kevin notes that "since 1997, newly graduated U.S. medical students who choose primary care as a career have declined by 50 percent."

It's clear that we have a systemic problem that makes hand-off mixups more likely: PCPs are crunched for time, desperate to max out patient volume, and their ranks are dwindling. Is it any wonder that they can't provide the "medical home" that reformers talk about?

This is a recipe for disaster that needs to be addressed. There are options: We can reform the fee-for-service system, perhaps by introducing payments for effective care coordination. We can create financial incentives (such as loan forgiveness) for med students to choose primary care. We also should have primary care physicians work in teams more often, from the very beginning of a patient relationship, thus allowing them to share the load and watch each other's backs.

But for all that these ambitious changes hold promise, the hand-off will always exist -- which means reformers need to dig deeper and develop protocols at the operational level. Luckily, they're doing just that. Kaiser Permanente, for example, has created a procedure meant to formalize communication between healthcare teams when a patient is transitioning from one provider to another. It's called SBAR -- which stands for Situation, Background, Assessment, and Recommendation. QSHC delves deeper into what this means in practice.
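As a rough sketch of what a structured SBAR hand-off might look like in software -- the field names and sample entries here are illustrative assumptions, not Kaiser Permanente's actual implementation:

```python
# Hypothetical sketch of an SBAR hand-off record.
from dataclasses import dataclass

@dataclass
class SBARHandoff:
    situation: str       # who the patient is and what is happening now
    background: str      # relevant history, medications, allergies
    assessment: str      # the sending clinician's read of the problem
    recommendation: str  # what the receiving clinician should do next

    def summary(self) -> str:
        """Render the four fields in the fixed S-B-A-R order, so nothing
        is omitted in the hand-off."""
        parts = [
            ("S", self.situation),
            ("B", self.background),
            ("A", self.assessment),
            ("R", self.recommendation),
        ]
        return "\n".join(f"{label}: {text}" for label, text in parts)

note = SBARHandoff(
    situation="68-year-old admitted with chest pain, now stable",
    background="Type 2 diabetes; on metformin; no known allergies",
    assessment="Likely unstable angina; MI ruled out",
    recommendation="Cardiology follow-up within 48 hours; repeat troponin",
)
print(note.summary())
```

The point of the format is exactly what the structure enforces: the sender cannot skip the assessment or the recommendation, the two items most often lost in an informal hand-off.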


The Bad Science That Created the Cholesterol Con

The widespread belief that "bad cholesterol" (LDL cholesterol) is a major factor driving heart disease -- and that cholesterol-lowering drugs like Lipitor and Crestor can protect us against fatal heart attacks -- is turning out to be a theory filled with holes. These drugs, called "statins," are the most widely prescribed pills in the history of human medicine. In 2007, worldwide sales totaled $33 billion. They are particularly popular in the U.S., where 18 million Americans take them.

We thought we knew how they worked. But last month, when Merck/Schering-Plough finally released the dismal results of a clinical trial of Zetia, a cholesterol-lowering drug prescribed to about 1 million people, the medical world was stunned. Dr. Steven E. Nissen, chairman of cardiology at the Cleveland Clinic, called the findings "shocking." It turns out that while Zetia does lower cholesterol levels, the study failed to show any measurable medical benefit. This announcement caused both doctors and the mainstream media to take a second look at the received wisdom that "bad cholesterol" plays a major role in causing cardiac disease. A Business Week cover story asked the forbidden question: "Do Cholesterol Drugs Do Any Good?"

The answer, says Dr. John Abramson, a clinical instructor at Harvard Medical School and the author of Overdosed America, is that "statins show a clear benefit for one group -- people under 65 who have already had a heart attack or who have diabetes. But," says Abramson, "there are no studies to show that these drugs will protect older patients over 65 -- or younger patients who are not already suffering from diabetes or established heart disease -- from having a fatal heart attack. Nevertheless, 8 or 9 million patients who fall into this category continue to take the drugs, which means that they are exposed to the risks that come with taking statins -- which can include severe muscle pain, memory loss, and sexual dysfunction."

Finally -- and here is the stunner -- it turns out we don't have any clear evidence that statins help the first group by lowering cholesterol levels. It's true that they do lower cholesterol, but many researchers are no longer convinced that this is what helps patients avoid a second heart attack. It now seems likely that they work by reducing inflammation. In other words, these very expensive drugs seem to do the same thing that aspirin does. (Are they more effective than the humble aspirin? We'll need head-to-head studies to find out.)

In the past, some physicians have questioned the connection between high cholesterol and heart disease. After all, as Dr. Ronald M. Krauss, director of atherosclerosis research at the Children's Hospital Oakland Research Institute, told Business Week, "When you look at patients with heart disease, their cholesterol levels are not that [much] higher than those without heart disease ... Compare countries, for example. Spaniards have LDL levels similar to Americans, but less than half the rate of heart disease. The Swiss have even higher cholesterol levels, but their rates of heart disease are also lower. Australian aborigines have low cholesterol but high rates of heart disease."


Why, then, were we all so certain that LDL cholesterol led to fatal heart attacks? The truth is that we were not "all" so sure. Within the medical profession, there have always been skeptics -- particularly in the U.K. But in the U.S., the popes of cardiology -- the American Heart Association and the American College of Cardiology -- each put their imprimatur on the cholesterol story, insisting on its truth, until finally it became dogma.


As science writer Gary Taubes pointed out in a recent New York Times Op-ed: "The idea that cholesterol plays a key role in heart disease is so tightly woven into modern medical thinking that it is no longer considered open to question." Taubes, whose work has appeared in The Best American Science Writing, Science, and the New York Times Magazine, explains that "because medical authorities have always approached the cholesterol hypothesis as a public health issue, rather than as a scientific one, we're repeatedly reminded that it shouldn't be questioned. Heart attacks kill hundreds of thousands of Americans every year, statin therapy can save lives, and skepticism might be perceived as a reason to delay action. So let's just trust our assumptions, get people to change their diets and put high-risk people on statins and other cholesterol-lowering drugs."


Taubes sees things differently. "Science suggests a different approach: Test the hypothesis rigorously and see if it survives." But when it comes to the cholesterol theory, this is what never happened. Go back to 1950, and you will understand why.


As the second half of the twentieth century began, public health experts were flummoxed by the steep rise in heart attacks. Turn-of-the-century records suggest that heart disease caused no more than 10 percent of all deaths -- many more people died of pneumonia or tuberculosis. But by 1950, coronary heart disease, or CHD, was the leading source of mortality in the United States, causing more than 30 percent of all deaths.


One common-sense explanation comes to mind: With improved sanitation, plus new drugs, fewer people were dying of infectious diseases. So they were living long enough to die of a heart attack.


But to many, that didn't seem sufficient. So in 1949, the National Heart Institute introduced the protocol for the Framingham Study. The research, which began in 1950, set out to investigate the factors leading to cardiovascular disease (CVD) and began with these hypotheses:


1. CVD increases with age. It occurs earlier and more frequently in males.
2. Persons with hypertension develop CVD at a greater rate than those who are not hypertensive.
3. Elevated blood cholesterol level is associated with an increased risk of CVD.
4. Tobacco smoking is associated with an increased occurrence of CVD.
5. Habitual use of alcohol is associated with increased incidence of CVD.
6. Increased physical activity is associated with a decrease in the development of CVD.
7. An increase in thyroid function is associated with a decrease in the development of CVD.
8. A high blood hemoglobin or hematocrit level is associated with an increased rate of the development of CVD.
9. An increase in body weight predisposes to CVD.
10. There is an increased rate of the development of CVD in people with diabetes mellitus.
11. There is higher incidence of CVD in people with gout.


Other factors were later added to the list, including HDL and LDL lipid fractions.


Ultimately, "the Framingham study determined that higher total cholesterol levels significantly correlate with an increased risk of death from coronary heart disease only through the age of 60," observes "Evidence for Caution: Women and Statin Use," a well-documented 2007 report from The Canadian Women's Health Network. Moreover, the research showed that cholesterol was only one of many factors leading to CVD for younger patients.


"Tales From the Other Drug Wars," a paper presented at a 1999 health conference in Vancouver, also stresses that "The Framingham Study actually found an association between blood cholesterol and coronary heart disease in young and middle-aged men only. No corresponding association was found in women or in the elderly, and it is in the latter group that most of the cases of heart disease occur." And while the study linked blood cholesterol to heart disease in younger men, the study also found no association between dietary cholesterol (cholesterol that comes from what we eat) and the risk of coronary heart disease, even in young and middle-aged men.


"Dietary saturated fats were not associated with heart disease even after adjusting for other risk factors. Buried deep in the massive number of reports produced from the study is a quote from the investigators saying " ... there is, in short, no suggestion of any relationship between diet and the subsequent development of coronary heart disease in the study group."


Many of the other factors that the Framingham Study investigated -- including lack of physical activity, obesity, stress, smoking and alcoholism -- would prove very important. Yet "for a variety of reasons," the 2007 Canadian report ("Evidence for Caution") notes, the focus shifted to cholesterol, which "has become the most prominent and feared risk factor for both women and men -- perhaps because it is the most easily modifiable. By contrast there is no pill for the effects of air pollution, which is a substantial risk factor for heart disease, especially for women."


Thus began what the report calls the "cholesterolization" of cardiovascular disease -- that is, "emphasis on a single risk factor ... Cholesterol has come to represent a virtual disease state in itself, rather than one risk factor among many, and has distracted from grappling with other risk factors that are strong indicators of cardiovascular disease and cardiovascular risk."


Yet, as Taubes points out in his NYT Op-ed, the Framingham study did not support this conclusion: The researchers concluded that the molecules that carry LDL cholesterol (low-density lipoproteins) were only "a 'marginal risk factor' for heart disease" while the "cholesterol carried by high-density lipoprotein" actually "lowered the risk of heart disease."


"These findings led directly to the notion that low-density lipoproteins carry 'bad' cholesterol and high-density lipoproteins carry 'good' cholesterol," Taubes explains. "And then the precise terminology was jettisoned in favor of the common shorthand. The lipoproteins LDL and HDL became ''good cholesterol' and 'bad cholesterol' and the molecule carrying the cholesterol was now conflated with its cholesterol cargo.


"The truth is, we've always had reason to question the idea that cholesterol is an agent of disease," says Taubes. "Indeed, what the Framingham researchers meant in 1977 when they described LDL cholesterol as a ''marginal risk factor'' is that a large proportion of people who suffer heart attacks have relatively low LDL cholesterol.


"So how did we come to believe strongly that LDL cholesterol is so bad for us?" he asks. "It was partly due to the observation that eating saturated fat raises LDL cholesterol, and we've assumed that saturated fat is bad for us. This logic is circular, though: saturated fat is bad because it raises LDL cholesterol, and LDL cholesterol is bad because it is the thing that saturated fat raises." Yet, he points out, "in clinical trials, researchers have been unable to generate compelling evidence that saturated fat in the diet causes heart disease.


"The other important piece of evidence for the cholesterol hypothesis is that statin drugs like Lipitor lower LDL cholesterol and also prevent heart attacks. The higher the potency of statins, the greater the cholesterol lowering and the fewer the heart attacks. This is perceived as implying cause and effect: Statins reduce LDL cholesterol and prevent heart disease, so reducing LDL cholesterol prevents heart disease. This belief is held with such conviction that the Food and Drug Administration now approves drugs to prevent heart disease, as it did with Zetia, solely on the evidence that they lower LDL cholesterol.



"But the logic is specious because most drugs have multiple actions," Taubes notes. "It's like insisting that aspirin prevents heart disease by getting rid of headaches."


Indeed, as noted above, many researchers now believe that statins help some cardiac patients the way aspirin helps many cardiac patients: not by lowering cholesterol or by easing headaches, but by reducing inflammation.


Nevertheless, in the 1950s, the theory that saturated fat and cholesterol from animal sources raise cholesterol levels in the blood -- leading to deposits of cholesterol and fatty material in the arteries that, in turn, lead to fatal heart disease -- took off. It was called the lipid theory, and before long food manufacturers would recognize just how much money there was to be made by promoting it.


At the time there was relatively little profit to be made by trying to persuade Americans to stop smoking (smoking cessation clinics still don't make anyone rich), and expensive gyms that encourage exercise had not yet become widely popular. But there was a fortune to be made by persuading Americans that if they ate foods low in saturated fats, they could live longer.


"The Oiling of America," a colorful history of the political campaign against animal fat by Mary Enig, a biochemist, nutritionist and former researcher at the University of Maryland, reports that in 1957, the food industry launched a series of ad campaigns that touted the health benefits of products low in fat or made with vegetable oils. A typical ad read: "Wheaties may help you live longer." Wesson recommended its cooking oil "for your heart's sake" and Journal of the American Medical Association ad described Wesson oil as a "cholesterol depressant."


Mazola advertisements assured the public that "science finds corn oil important to your health." Medical journal ads recommended Fleischmann's unsalted margarine for patients with high blood pressure. Dr. Frederick Stare, head of Harvard University's Nutrition Department, encouraged the consumption of corn oil -- up to one cup a day -- in his syndicated column.


In a promotional piece specifically for Procter and Gamble's Puritan oil, he cited two experiments and one clinical trial as showing that high blood cholesterol is associated with CHD. Presumably, he was well paid for his work.


Dr. William Castelli, Director of the Framingham Study, was one of several specialists to endorse Puritan. Dr. Antonio Gotto, Jr., former AHA president, sent a letter promoting Puritan Oil to practicing physicians -- printed on Baylor College of Medicine's DeBakey Heart Center letterhead.


The American Heart Association also pitched in. In 1956, a year before the food manufacturers' advertising blitz, an AHA fund-raiser aired on all three major networks, featuring Irving Page and Jeremiah Stamler of the AHA. Panelists presented the lipid hypothesis as the cause of the heart disease epidemic and launched the Prudent Diet, one in which corn oil, margarine, chicken and cold cereal replaced butter, lard, beef and eggs.


("Stamler would show up again in 1966 as an author of Your Heart Has Nine Lives, a little self-help book advocating the substitution of vegetable oils for butter and other so-called "artery clogging" saturated fats," Enig points out in "The Oiling of America." The book was sponsored by makers of Mazola Corn Oil and Mazola Margarine. Stamler did not believe that lack of evidence should deter Americans from changing their eating habits. The evidence, he stated, "was compelling enough to call for altering some habits even before the final proof is nailed down ... the definitive proof that middle-aged men who reduce their blood cholesterol will actually have far fewer heart attacks waits upon diet studies now in progress." And, of course, we still wait for that definite proof that middle-aged men who do not suffer from established heart disease nevertheless should be on statins.)


"But the television campaign was not an unqualified success," Enig continues, "because one of the panelists, Dr. Dudley White, disputed his colleagues at the AHA. Dr. White noted that heart disease in the form of myocardial infarction was nonexistent in 1900, when egg consumption was three times what it was in 1956 and when corn oil was unavailable.


"But the lipid hypothesis had already gained enough momentum to keep it rolling, in spite of Dr. White's nationally televised plea for common sense in matters of diet and in spite of the contradictory studies that were showing up in the scientific literature."


"The American Medical Association at first opposed the commercialization of the lipid hypothesis," Enig reports, " and warned that "the anti-fat, anti-cholesterol fad is not just foolish and futile ... it also carries some risk." The American Heart Association, however, was committed. In 1961 the AHA published its first dietary guidelines aimed at the public."


No doubt many researchers at the AHA were sincere. But it is worth noting that ultimately the AHA would find a way to turn the War Against Cholesterol into a profitable cottage industry.


You've probably seen the AHA's "heart check" logo on numerous food products. No surprise: they don't give them out for free. Food manufacturers pay a first-year fee of $7,500 per product, with subsequent renewals priced at $4,500, according to Steve Milloy, a biostatistician, lawyer and adjunct scholar at the conservative Cato Institute, who wrote about this on his "Junk Science" site in 2001.


"There's gold in the AHA's credibility," Milloy observed. "Several hundred products now carry the heart-check logo. You do the math. Adding insult to injury, consumers pay up for the more expensive brands that can afford to dance with the AHA. Pricey Tropicana grapefruit juice is 'heart healthy' but supermarket bargain brand grapefruit juice isn't?"


It wasn't until 1987, when Merck produced the first statin, that the pharmaceutical industry began to get in on the action. But when it joined the party, it began to spread the money around, not only by advertising, but by paying well-placed cardiologists "consulting fees."



As I noted in last week's post, the National Cholesterol Education Program (NCEP) published new guidelines in 2004 urging that individual cholesterol levels be monitored from age 20 and that acceptable levels be significantly lower than previously advised for the prevention of cardiovascular disease in both women and men -- whether or not they already suffered from established heart disease. Eight of the nine doctors on the panel making the recommendations had financial ties to drug makers selling statins. They did not disclose this possible conflict of interest at the time. Both the American Heart Association and the American College of Cardiology endorsed the panel's recommendations.


At that point, the 2007 Canadian women's study observes, the "'cholesterization' of heart disease intensified." Meanwhile, the study notes:


"a year before the U.S. panel came out with the new guidelines, the AHRQ, the US agency that reviews the quality of healthcare research produced a report on women and heart disease stating that there was insufficient evidence to determine whether lowering lipid levels by any method reduced the risk of heart attack or stroke in women, because women were under-represented in trials.


"According to US research," the report adds, "high cholesterol in women is not a statistically significant risk factor for sudden cardiac death. On the other hand, smoking is one of the most important predictors of sudden cardiac death in women." Which makes one wonder: Why doesn't the American Heart Association start a television campaign to try to persuade more women and girls to stop smoking?


Finally, despite widespread skepticism about statins and cholesterol, don't expect the controversy to end anytime soon. There is just too much money and too much political muscle supporting the theory that 18 million Americans should be on statins.


Millions have been made not only selling statins, but also testing patients' cholesterol levels on an annual basis. As "The Other Drug Wars" puts it, "the case of cholesterol illustrates well how the demands for testing and drugs interact: testing leads to increased utilization of cholesterol-lowering drugs, which, in turn, leads to even more testing, which, in turn, leads to more drug utilization."


In 1999, the authors of "The Other Drug Wars" were pessimistic that reason would ever trump hype. Quoting T.J. Moore's book, Heart Failure, they noted that "The National Heart, Lung, and Blood Institute's eager partners in promoting cholesterol consciousness are the drug companies, which are understandably very excited that the government is creating their largest new market in decades ... A program that may have truly begun in sincere but somewhat misguided zeal for the public good became very quickly intertwined with greed. The world was learning how much money could actually be made scaring people about cholesterol."


"Crowds of other agencies and companies have joined in the sustained reinforcement of the importance of cholesterol through the advertisement of their respective products," the authors of "The Other Drugs Wars" continued. "One can hardly open a magazine or browse the Internet without seeing offerings of the latest anti-cholesterol miracle drug, new low-cholesterol wonder diet, new life-saving cholesterol treating device or health-conscious cholesterol-lowering food product.


"The voice of evidence questioning the value of directing so many public resources towards cholesterol control was and is still being lost amongst the thousands of advertising messages directed at the public."


Perhaps the time has come for "the voice of evidence" to make itself heard. It's not just that money is being wasted -- or that close to half of the 18 million Americans taking statins may not benefit. All of them are being exposed to risks which range from serious muscle pain to memory loss that can look like Alzheimer's. And too often, well-meaning physicians who have been sold on statins ignore their complaints.

Military Doctors Withholding Treatment from Soldiers with Mental Health Problems

Since 9/11, one Army division has spent more time in Iraq than any other group of soldiers: the 10th Mountain Division, based at Fort Drum, New York.

Over the past six and a half years, their 2nd Brigade Combat Team (BCT) has been the most deployed brigade in the Army. As of this month, the brigade had completed its fourth tour of Iraq. All in all, the soldiers of the 2nd BCT have spent 40 months in Iraq.

At what cost? According to a February 13 report issued by the Veterans for America's (VFA) Wounded Warrior Outreach Program, which is dedicated to strengthening the military mental health system, it is not just their bodies that have been maimed and, in some cases, destroyed. Many of these soldiers are suffering from severe mental health problems that have led to suicide attempts as well as spousal abuse and alcoholism.

Meanwhile, the soldiers of the 2nd BCT have been given too little time off in between deployments: In one case they had only six months to mentally "re-set" following an eight-month tour in Afghanistan -- before beginning a 12-month tour in Iraq.

Then, in April 2007, Secretary of Defense Robert Gates decided to extend Army tours in Iraq from 12 to 15 months -- shortly after the BCT had passed what it assumed was its halfway mark in Iraq.

As the VFA report points out, "Mental health experts have explained that 'shifting the goalposts' on a soldier's deployment period greatly contributes to an increase in mental health problems."

Perhaps it should not come as a surprise that, during its most recent deployment, the 2nd BCT suffered heavy casualties. "Fifty-two members of the 2nd BCT were killed in action (KIA)," the VFA reports, and "270 others were listed as non-fatality casualties, while two members of the unit remain missing in action (MIA)."

This level of losses is unusual. "On their most recent deployment," the VFA report notes, "members of the 2nd BCT were more than five times as likely to be killed as others who have been deployed to OEF and OIF and more than four times as likely to be wounded." One can only wonder to what degree depression and other mental health problems made them more vulnerable to attack.

When they finally returned to Fort Drum, these soldiers faced winter conditions that the report describes as "dreary, with snow piled high and spring still months away. More than a dozen soldiers reported low morale, frequent DUI arrests, and rising AWOL, spousal abuse, and rates of attempted suicide. Soldiers also reported that given the financial realities of the Army, some of their fellow soldiers had to resort to taking second jobs such as delivering pizzas to supplement their family income."

What has the Army done to help the soldiers at Fort Drum? Too little.

In recent months, VFA reports, it has been contacted by a number of soldiers based at Fort Drum who are concerned about their own mental health and the health of other members of their units. In response, VFA launched an investigation of conditions at Fort Drum, and what it found was shocking.

Soldiers told the VFA that "the leader of the mental health treatment clinic at Fort Drum asked soldiers not to discuss their mental health problems with people outside the base. Attempts to keep matters 'in house' foster an atmosphere of secrecy and shame," the report observed, "that is not conducive to proper treatment for combat-related mental health injuries."

The investigators also discovered that "some military mental health providers have argued that a number of soldiers fake mental health injuries to increase the likelihood that they will be deemed unfit for combat and/or for further military service."

The report notes that a "conversation with a leading expert in treating combat psychological wounds" confirmed "that some military commanders at Fort Drum doubt the validity of mental health wounds in some soldiers, thereby undermining treatment prescribed by civilian psychiatrists" at the nearby Samaritan Medical Center in Watertown, NY.

"In the estimation of this expert, military commanders have undue influence in the treatment of soldiers with psychological wounds," the report noted. "Another point of general concern for VFA is that Samaritan also has a strong financial incentive to maintain business ties with Fort Drum -- a dynamic [that] deserves greater scrutiny."

Because some soldiers do not trust Samaritan, the report reveals that a number of "soldiers have sought treatment after normal base business hours at a hospital in Syracuse, more than an hour's drive from Watertown ... because they feared that Samaritan would side with base leadership, which had, in some cases, cast doubt on the legitimacy of combat-related mental health wounds."

"In one case," the report continued, "after a suicidal soldier was taken to a Syracuse hospital, he was treated there for a week, indicating that his mental health concerns were legitimate. Unfortunately, mental health officials at Fort Drum had stated that they did not believe this soldier's problems were bona fide."

According to the VFA, the problem of military doctors refusing to back soldiers with mental health problems is widespread: "VFA's work across the country has confirmed that soldiers often need their doctors to be stronger advocates for improved treatment by their commanders and comrades. For instance, soldiers need doctors who are willing to push back against commanders who doubt the legitimacy of combat-related mental health injuries."

While talking to soldiers at Fort Drum, VFA also discovered "considerable stigma against mental health treatment within the military and pressure within some units to deny mental health problems as a result of combat.

"Some soldiers who had been in the military for more than a decade stated that they lied on mental health questionnaires for fear that if they disclosed problems, it would reduce their likelihood of being promoted."

Soldiers at Fort Drum are not alone. In an earlier report titled "Trends in Treatment of America's Wounded Warriors," VFA disclosed that leaders of the military mental health treatment system have been warning Department of Defense leadership of the magnitude of the mental health crisis that is brewing.

A report by the Army's Mental Health Advisory Team (MHAT) that was released last May found that the percentage of soldiers suffering "severe stress, emotional, alcohol or family problem[s]" had risen more than 85 percent since the beginning of Operation Iraqi Freedom. MHAT also found that 28 percent of soldiers who had experienced high-intensity combat were screening positive for acute stress (i.e., Post-Traumatic Stress Disorder, PTSD).

Finally, MHAT disclosed that soldiers who had been deployed more than once were 60 percent more likely to screen positive for acute stress (i.e., PTSD) when compared to soldiers on their first deployment.

VFA's most recent report points out that, despite these warnings, soldiers at Fort Drum do not have access to the care they need: "More than six years after large-scale military operations began in Afghanistan and, later, in Iraq, a casual observer might assume that programs would have been implemented to ensure access for Soldiers from the 10th Mountain Division to mental health services on base. Unfortunately, an investigation by VFA has revealed that [soldiers] who recently returned from Iraq must wait for up to two months before a single appointment can be scheduled ...

"Given the great amount of public attention that has been focused on the psychological needs of returning service members, a casual observer might also assume that these needs would have been given a higher priority by Army leaders and the National Command Authority -- the two entities with the greatest responsibility for ensuring the strength of our Armed Forces. These needs have long been acknowledged but," the report concludes " there has been insufficient action."

Last month the Army tried putting a band-aid on the problems at Fort Drum by sending three Army psychiatrists from Walter Reed Army Medical Center (WRAMC) to Fort Drum on a temporary basis to treat the large influx of returning soldiers requiring mental health care. But, as the VFA points out, "this is only a temporary fix," as the Walter Reed-based psychiatrists will likely return to Washington, DC, within a few weeks.

"Fort Drum will again be left with the task of treating thousands of soldiers with far too few mental health specialists. In addition, for those service members who were initially treated by psychiatrists from Walter Reed, their care will suffer from discontinuity, as their cases will be assigned a new mental health professional on subsequent visits."

And the war drags on. Earlier this month, the UK Times reported that "the conservative Washington think tank that devised the 'surge' of US forces in Iraq [the American Enterprise Institute] now has come up with a plan to send 12,000 more American troops into southern Afghanistan."

A panel of more than 20 experts convened by the AEI has also urged the administration to get tough with Pakistan. "The US should threaten to attack Taliban and Al-Qaeda fighters in lawless areas on the border with Afghanistan if the Pakistan military did not deal with them itself," the panel concluded.

Where do conservatives expect to find those troops?

More soldiers are likely to suffer the fate of the soldiers at Fort Drum. They will be sent back to combat, again and again -- until finally, they break. Soldiers suffering from post-traumatic stress disorder, depression or a host of other mental problems are not in a good position to protect themselves. Sending them back only guarantees that fatalities will rise.

Forcing Medical Patients To Be Consumers Wreaks Havoc on Our Health System

One of the most common justifications for consumer-driven medicine is reduced health care costs. The reasoning here is two-fold:
