Niko Karvounis



The Dangers of Do-It-Yourself DNA Testing

This article originally appeared on Health Beat.

Recently, Time magazine listed the retail DNA test as its best invention of 2008 (thanks to Kevin M.D. for the tip). The best?  Maybe one of the most worrisome.

Time specifically highlights the do-it-yourself DNA testing kit from 23andMe, a California-based corporation named after the 23 pairs of chromosomes in each human cell. The company sells a $399 DNA kit consisting of a test tube that you spit into and mail to the company's lab. There, over the next 4-6 weeks, researchers extract DNA from your saliva and map your genome, putting the results online. You can access the results through the web and navigate a guide to your genes that estimates "[genetic] predisposition for more than 90 traits and conditions ranging from baldness to blindness." 

Admittedly, this sounds pretty cool. As Time gushes, "in the past, only élite researchers had access to their genetic fingerprints, but now personal genotyping is available to anyone who orders the service online..." But look closer at the commoditization of DNA testing and the novelty wears off pretty quickly.

By pinpointing specific genes associated with certain diseases, a 23andMe gene read-out can inform a user of his or her susceptibility to those conditions. It turns out this is a lot less useful than it might seem. For example, Time reports that one test showed that the husband of 23andMe's founder has a rare mutation that gives him an estimated 20 percent to 80 percent chance of getting Parkinson's disease. The couple's child, due later this year, has a 50 percent chance of inheriting this mutation, and thus his dad's risk of Parkinson's.

At this point, the parents-to-be have to worry that their kid will have a mutation associated with an incurable disease. If he has it, they also have to fret that he has anywhere from a one in five to a four in five chance of actually contracting the disease. Really, how helpful are these numbers? That's a big range of probabilities. I wager it doesn't feel terribly good to be tracking the genetic lottery of your son's health, disease by disease.  In fact, I imagine that it's downright harrowing.
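To see just how wide that window is, consider the compound odds. Here is a quick back-of-envelope sketch (multiplying the inheritance odds by the reported risk range is my simplifying assumption, not a clinical calculation):

```python
# Back-of-envelope: chance the child both inherits the mutation
# and actually develops Parkinson's, using the figures cited above.
p_inherit = 0.5                                # 50% chance of inheriting the mutation
penetrance_low, penetrance_high = 0.20, 0.80   # reported 20-80% risk range

risk_low = p_inherit * penetrance_low
risk_high = p_inherit * penetrance_high

print(f"Child's overall risk: {risk_low:.0%} to {risk_high:.0%}")
# → Child's overall risk: 10% to 40%
```

Even before the child is born, the "answer" spans a fourfold range -- exactly the kind of number that unsettles without informing.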

Dr. Alan Guttmacher, acting director of the National Human Genome Research Institute of the National Institutes of Health, agrees. In September, he told the New York Times that "[DNA testing] can be neat and fun, but it can also have deep psychological implications" because it can profoundly influence the way we view ourselves, our loved ones, and our relationship to the world. As Guttmacher told Time, "a little knowledge is a dangerous thing."

Here Guttmacher isn't just talking about the strange helplessness of knowing the ever-so-approximate probability of your child getting sick. He's also speaking to the fact that DNA tests themselves only provide a little knowledge -- just one small piece of the complex puzzle that is our health. Unfortunately, DNA tests often promise much more than this. One company, Navigenics, is actually dedicated to reading your DNA and diagnosing you with a set of medical risk factors that you then discuss with an appointed "genetic counselor." The idea is that genetic tests reveal some sort of fundamental physiological truth: a complete and comprehensive assessment of our health.

It's true that some conditions, like cystic fibrosis and Huntington's disease, have been scientifically proven to be associated with particular genetic mutations. But many other conditions have not been shown to have a genetic origin -- particularly when a gene is detected without an intimate understanding of the environmental factors surrounding a patient, as is the case when researchers on the other side of the country analyze your spit.

In light of this fact, the Genetics and Public Policy Center, a project of Johns Hopkins University and The Pew Charitable Trusts, warns that many consumers "might have difficulty distinguishing between tests widely used and accepted by medical professionals...and those whose validity is unproven in the scientific literature." Customers will see their genetic print-out, with risk assessments for particular illnesses tagged on each gene. But they won't have a sense of how the DNA testing company calculated that number -- i.e., whether it reflects established medical research or a best guess from a company trying to convince you of its product's predictive possibilities.

Unfortunately, the latter seems to be more plausible. In 2006, the Government Accountability Office (GAO) purchased 14 DNA tests from four different websites and sent in samples. The office found that "the results from all the tests GAO purchased mislead consumers by making predictions that are medically unproven and so ambiguous that they do not provide meaningful information to consumers."

From GAO's saliva samples, the companies sent back risk predictions for conditions like diabetes and osteoporosis with little qualification, even though "scientists have very limited understanding about the functional significance of any particular gene, how it interacts with other genes, and the role of environmental factors in causing disease." In other words, the tests were spitting out numbers and warnings even though the genetic causality of these conditions "cannot be medically proven."

Further, many other results were all but "meaningless. For example, [the companies reported that] many people 'may' be 'at increased risk' for developing heart disease." But this is true for pretty much everyone, "so such an ambiguous statement could relate to any human that submitted DNA." The laughable superficiality of the companies' test results carried over into lifestyle information that GAO provided: when the office told a company that the patient from whom the sample derived smoked, the DNA company recommended that they stop smoking. When the patient reported that he had quit, the company "gave recommendations to continue to avoid smoking." Gee, thanks -- is that really worth $400?

Ultimately, in the words of Dr. Muin Khoury, director of the National Office of Public Health Genomics at the Centers for Disease Control and Prevention, "the uncertainty [of medicine] is too great" to view DNA testing as a sort of medical crystal ball. Even within the context of our genes, the possibilities are endless. To its credit, Time points out that "many diseases stem from several different genes and are triggered by environmental factors. Since less than a tenth of our 20,000 genes have been correlated with any condition, it's impossible to nail down exactly what component is genetic."

In fact, even when doctors do know that there's a genetic component to a given condition, they're not always sure which genes to look at. For example, in 2006 the Boston Globe noted that "there are hundreds of mutations in two well-known breast cancer genes, BRCA1 and BRCA2, for which reliable commercial tests exist. A woman could be told that she didn't have the common mutations but might still be at high risk from less common mutations or a different gene altogether..." Translation: even though we know there's a genetic component to breast cancer, it's very difficult to pinpoint which gene is the problem -- particularly if the only way of communicating with a patient about the issue is watered-down risk probability.

Meanwhile, the vagueness of DNA test results works in the favor of testing companies. If they keep things simple and superficial, they can make cross-promotion easier. In the GAO study, for example, the DNA test results were synced with expanded product offers, such as dietary supplements, which had only a tangential relationship to the patients' test results.

This sort of aggressive marketing is direct-to-consumer medicine at its most profitable. Companies often want to convince patients that they have a certain condition and then sell them on the cure. In prescription drugs, this "disease mongering" has usually been about listing symptoms to get people scared. But DNA testing kicks things to another level: convince people that they are actually hard-wired to contract a particular disease, and your cure becomes that much harder to resist.

It's no wonder that experts at Johns Hopkins are worried that "advertisements may...underemphasize the uncertainty of genetic testing results, or exaggerate the risk and severity of a condition for which testing is available, thus increasing consumer anxiety and promoting unnecessary testing." Given what we've seen in direct-to-consumer medicine up until now, this is a very reasonable fear.

Another plausible concern is that DNA tests, in their superficiality and over-simplification of medicine, will be routinely misinterpreted by patients. Time cites the case of Nate Guy, a 19-year-old in Warrenton, Va., who "was relieved that though his uncle had died of prostate cancer, his own risk for the disease was about average," according to his 23andMe test. This sounds uplifting until you realize that, by the age of 70, the vast majority of men have prostate cancer. Almost all of them will die with prostate cancer, not from it. (Something else will kill them before this very common, but usually slow-growing, cancer catches up with them.)  An "average risk" of prostate cancer means you'll probably get prostate cancer and live with it for years, just as do nearly all older men.

Presumably, Guy doesn't know this. One gets the sense that he thought his uncle died of prostate cancer because he died with prostate cancer, and that this fact meant that his uncle had been uniquely susceptible to the disease. Now Nate finds out he has an average level of vulnerability and thinks that he won't get prostate cancer. Statistically speaking, none of this is probably true -- but this is the sort of reasoning that happens when patients are confronted with misleading, sparse data about their health, devoid of a broader medical context.

One can imagine that, had Nate been disheartened by the results of his test, he would have similarly embraced their definitiveness and undergone unnecessary prostate screenings throughout his life -- screenings which have never been shown to actually improve survival rates. Either way, Nate's taking the wrong message from his genome. Indeed, the likelihood that patients will go for more screenings -- just to be safe -- combined with the $400-a-pop price of a test that only vaguely suggests whether they may or may not contract a disease, makes DNA testing a profoundly cost-ineffective health care option.

Genomics is a field that's new and exciting; scientists will and should pursue it. But it's probably not something you should try at home. From what we've seen so far, do-it-yourself DNA testing risks exacerbating many of our most pressing health care problems: the deceitfulness of for-profit medicine, the dangers of direct-to-consumer health care, the glut of wasteful, potentially harmful, screenings, and the general misconception that -- if our gizmos are fancy enough -- we can all live forever.

Will the Economic Meltdown Undermine Interest in Health Care Reform?

This post originally appeared on Health Beat.

Writing on The Health Care Blog, D.C. insider Bob Laszewski puts the chances of health care reform -- at least in the form envisioned by the presidential candidates and ambitious activists -- at about zero in the wake of Wall Street's meltdown. It's easy to see why Laszewski is so pessimistic:

"On top of the $500 billion deficit [that the government faces] in 2009 ... and the cost of the Freddie and Fannie bailout ... the Congress is now being told it must take on a total of almost $1 trillion in government long-term costs to try to turn the financial system around."

That's a problem. Senator McCain claims his reform plan will cost $10 billion; Senator Obama says his will cost $65 billion. Both are no doubt low-ball estimates. Obama's plan, for example, is more likely to cost $86 billion in 2009 and $160 billion in 2013, after it's expanded, according to the Urban Institute. Given these numbers, Laszewski says that the candidates have to "get...real" about how they're "really going to deal with health care reform in the face of all of these challenges."

In an upcoming post, Maggie will dig deeper into just how health care reformers can and should 'get real' in post-meltdown America. But instead of talking about what reformers should do, I want to discuss another important question we have to pose in the upcoming age of austerity: will the public even care about health care reform anymore, now that the economy has gone south?

On September 30, the Partnership to Fight Chronic Disease (PFCD) held a conference call with reporters. On the call were Ken Thorpe, PFCD's Executive Director, and former U.S. Secretary of Health and Human Services Tommy Thompson. Though I've never been a fan of Thompson, he had some interesting things to say.

Thompson opened by laying out the numbers behind U.S. health care expenditures, noting that "16 percent of the [U.S.] gross national product goes into healthcare [every year], and [that proportion is] on its way to 21 percent." He also pointed out that "we're spending $2.4 trillion, on the way to $4.6 trillion, and 75 to 80 percent of that cost is over chronic illnesses" like cardiovascular disease, strokes, cancer, diabetes, and obesity.

While these statistics are hardly new to health care wonks, they're worth reconsidering in light of Congress' bailout plan. Seventy-five percent of $2.4 trillion is $1.8 trillion -- meaning that, annually, chronic diseases cost us almost three times as much as the current bailout bill. The nation's total health care bill is the equivalent of passing a bailout, saving Bear Stearns, nationalizing Fannie and Freddie, and propping up AIG twice every year.
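The arithmetic behind that comparison is easy to check; here is a minimal sketch (the $700 billion bailout figure is my assumption, based on the widely reported size of the 2008 rescue package):

```python
# Arithmetic behind the chronic-disease vs. bailout comparison above.
total_health_spend = 2.4e12   # annual U.S. health care spending ($2.4 trillion)
chronic_share = 0.75          # low end of the 75-80% range cited
bailout = 0.7e12              # ~$700 billion (assumed 2008 bailout figure)

chronic_cost = chronic_share * total_health_spend
print(f"Chronic disease cost: ${chronic_cost / 1e12:.1f} trillion/year")
print(f"Ratio to bailout: {chronic_cost / bailout:.1f}x")
# → Chronic disease cost: $1.8 trillion/year
# → Ratio to bailout: 2.6x
```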

If nothing else, the Wall Street implosion puts the sheer scale of America's health care woes in perspective. As such, Thompson and Thorpe agree that the economic meltdown is a powerful wake-up call to the American public. During the call, Thompson said that he thinks that citizens are "absolutely frustrated with Congress and Washington avoiding problems," and are thus likely to begin demanding action on long-term crises like health care. The need for reform "is hung around the neck of Democrats, Republicans, George Bush and everybody else, and Wall Street," he said, and the American public wants to "find an answer." Thorpe agreed, saying that outrage surrounding the economic crisis has "stirred a bee's nest" of dissatisfaction that will "elevate the interest and desire to do something on healthcare reform in 2009."

In other words, our economic crisis highlights the danger of senseless spending and lays bare the catastrophic danger that comes with ignoring the rumbling of a financial crisis. As Thompson and Thorpe see it, voters are deciding that they're mad as hell -- and health care is another area triggering their wrath.

Dr. David Kibbe of the American Academy of Family Physicians agrees. Also writing on The Health Care Blog, Kibbe argues that Americans' feelings of betrayal over Wall Street's greed will spill over into health care. Kibbe notes: "[A]ny sentient observer of this [economic] trickery on such a massive and systematic scale will start to ask questions about who else among our highest paid and most trusted professionals might be lying to us about the well being we place in their hands. Who else [besides financiers,] they will ask, is making money off our trust in them? Who else, they will ask, is skimming money off the top of an inflated and ultimately doomed -- because unsustainable -- market for complex services? Where is the next bubble that privatizes profits but socializes risk?"

It's health care, says Kibbe -- a sector where "fifty million people are without health insurance, and at least that many are under insured, while revenues going into the industry continue to increase at double digit rates of increase year after year." Then he asks: "How can this go on much longer?"

Under normal circumstances, the answer might be a good, long time. After all, our health care system has been dysfunctional for decades. But today Americans aren't just disappointed with the way our institutions work -- they're outraged and scared. In a Gallup poll released yesterday, 53 percent of Americans said they felt "angry" about the financial crisis, and 41 percent said they felt "afraid." Americans feel that the system has failed them -- and, as perverse as this might sound, it's that sort of disillusionment with institutions that is needed to fuel changes as far-reaching as health care reform.

Interestingly, the Gallup poll shows that more affluent Americans are the angriest. Sixty-three percent of college graduates say they have felt anger over the recent events in the financial world, compared with 50 percent of those with some college education and only 43 percent of those who have not attended college. Sixty-two percent of respondents in upper-income households with annual incomes of $60,000 or more have been angry, compared with 50 percent of those in lower-income households. This is important: it's always hardest to convince the "haves" that the system is broken, because the system is built to work best for them. But if Americans of higher socioeconomic status begin to acknowledge that an unsustainable system threatens the entire economy, institutional overhaul becomes a much more plausible political proposition.

Granted, all of these numbers refer to the financial crisis and not health care. But the assumption that worries about the economy will fuel outrage over health care isn't as far-fetched as it may sound. Polls show that concerns over the economy and health care do in fact trend together. Check out the graph below, from the Kaiser Family Foundation, which I originally posted back in January to illustrate this very point.

Moreover, public interest in health care is still high: the September 2008 Kaiser Family Foundation election tracking poll puts health care as the number three priority of all voters, and health care remains the second most commonly reported economic hardship (after paying for gas). What will happen when our long-time interest in health care is mixed with a new appreciation for government oversight and regulation, smart spending, and building a system that works? Maybe, just maybe, a renewed political will for health care reform.

Admittedly, in post-meltdown America, resources will be limited. It's also true that the sort of done-in-one reform packages that reformers like to trumpet -- Cover everybody! Cut costs! Improve quality! -- will probably have to be unpacked into separate initiatives. (This isn't necessarily a bad thing -- as Maggie said last week, "we shouldn't rush into providing health insurance for everyone until we're sure that we're offering Americans health care" anyway.)

In the meantime, don't be too quick to assume that Americans have forgotten about health care because the economy has taken a nosedive. It may be that, as people feel increasingly insecure -- and get wise to the danger of governmental inaction -- they will want health care reform now more than ever.

Universal Health Coverage Is No Silver Bullet

The Massachusetts experiment in health care reform is all about expanding access. But it doesn't try to control costs. This, in a nutshell, is why it's running into trouble.

The plan didn't reform health care delivery, just coverage. Granted, in terms of bringing more people in under the tent, it's been a success: Since the plan went into effect in 2006, 439,000 people have signed up for insurance -- a number that represents more than two-thirds of the estimated 600,000 people uninsured in the state two years ago. This surge in coverage has reduced use of emergency rooms for routine care by 37 percent, which has saved the state about $68 million. (Going to the ER for routine care drives up health care costs by creating longer wait times and tying up resources that could otherwise be used to help critically ill patients.)

But even with these savings, Massachusetts is having trouble funding its plan. Earlier this month the Boston Globe reported that the governor's office is planning to shift more responsibility for funding to employers. Currently, the Massachusetts health care law requires most employers with more than 10 full-time employees to offer health coverage or to pay an annual 'fair share' penalty of $295 per worker. This is called 'pay or play': an employer either provides coverage or pays a fee toward the system for not doing so.

To "play" rather than "pay," employers must show either that they are paying at least 33 percent of their full-time workers' premiums within the first 90 days of employment, or that they are making sure that at least 25 percent of their full-time workers are covered on the company's plan. (In other words, they must be paying a large enough share of the premiums so that 25 percent of their employees can afford the plan they offer.)

Now, instead of giving employers this either/or option, the new proposal requires that employers do both -- or fork over the penalty fee. In a sense, this is an admirable move by the government, since its intention is to push toward truly universal coverage. But there's also a game of scrounging-for-dollars going on here: The state wants employers to pay more -- to, in the words of Mass. Governor Deval Patrick, "step up" and embrace "shared responsibility" -- either by covering a greater share of health care costs or paying more in penalty fees.
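The shift is from an either/or test to a both/and test. A minimal sketch of the logic (the function names and the toy example are illustrative assumptions on my part, not statutory language):

```python
# Old rule: an employer "plays" by meeting EITHER test.
def plays_old(premium_share, coverage_rate):
    return premium_share >= 0.33 or coverage_rate >= 0.25

# Proposed rule: the employer must meet BOTH tests to avoid the penalty.
def plays_new(premium_share, coverage_rate):
    return premium_share >= 0.33 and coverage_rate >= 0.25

# An employer paying 40% of premiums but covering only 10% of its
# full-time workers passes the old test but fails the proposed one.
print(plays_old(0.40, 0.10))   # → True
print(plays_new(0.40, 0.10))   # → False
```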

As you might expect, businesses are putting up a fight. They say that Patrick's proposal "ignores the obvious," which is the fact that "employers definitely are doing their part." While it's tempting to vilify "Big Employer" as stingy and selfish, the truth is that there's only so much you can ask businesses to do without harming citizens.

As I pointed out in a March blog post, research shows that there's a big trade-off between health care costs and workers' wages: when employers have to pay a lot for health care, they take the cost out of employees' paychecks. Or, as a 2004 study from the International Journal of Health Care Finance and Economics put it, "the amount of earnings a worker must give up for gaining health insurance is roughly equal to the amount an employer must pay for such coverage."

In other words, you can't bleed employers dry without also screwing workers. True, big corporations might have deep enough pockets to pay more for health care without adjusting workers' wages. But they're still bottom-line driven enterprises, which means that they're going to try and break even wherever possible. Unless you want to see laws that strictly regulate the correlation between business health care costs and workers' wages, the working Joe's income is going to take a hit as employers shoulder more health care costs.

The other choice employers have is to opt out of coverage and instead pay the penalty. Many have pointed out that, for employers, this is an attractive option -- particularly in Massachusetts, which, as of 2007, had the highest annual health care costs per employee in the country: $9,304. That's a lot more than $295 a year. We can be certain that this fee will rise in the future. And once it gets high enough for employers to choose health coverage over the fee, they're still going to take the cost of that coverage out of workers' wages.
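Using the figures above, the employer's incentive to "pay" rather than "play" is easy to quantify (a rough sketch; real costs vary by plan and workforce):

```python
# Rough per-employee economics of "pay or play" in Massachusetts, 2007.
premium_per_employee = 9304   # average annual health cost per employee
penalty_per_employee = 295    # annual 'fair share' penalty

savings = premium_per_employee - penalty_per_employee
print(f"Opting out saves ${savings:,} per employee per year")
# → Opting out saves $9,009 per employee per year

# Even paying only the 33% minimum premium share costs roughly
# ten times the penalty:
minimum_share = 0.33 * premium_per_employee
print(f"Minimum employer premium share: ${minimum_share:,.0f}")
# → Minimum employer premium share: $3,070
```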

On the one hand, one might argue that this is fair -- that workers are better off with health insurance rather than higher salaries. Massachusetts is a relatively wealthy state: in 2008, median income for a family of four stood at $85,420 (so half of such families earned more than that). Insofar as some of Massachusetts' wealthier citizens don't have health insurance, it probably would make sense for them to earn a little less, and be covered. Some would insist that this should be an individual choice, but the truth is, if an individual decides to go without insurance, we all wind up paying the cost when he or she becomes seriously ill. On the other hand, many households could not afford to take a pay cut -- especially when you consider the cost of housing in Massachusetts.

Ultimately, the only way to make a universal health care system sustainable both for employers and employees is to tackle the high cost of health care in Massachusetts. As noted, in the Commonwealth, the average annual premium for an employee's health care is $9,304 -- significantly more than the national average of $6,881.

As Maggie has pointed out in the past, this is not because insurers in Massachusetts are profiteering. Insurance is expensive in the Commonwealth because its citizens consume more health care than people in many other states. They undergo more tests and procedures than most of us, and they see more specialists. Look at a graph of average health care expenditures per person in Massachusetts compared to average health care expenditures in the rest of the U.S., and you find that in Massachusetts, individuals receive an average of nearly $10,000 worth of care each year -- compared to just a little over $7,000 per capita nationwide.

High consumption of care is driven, Maggie explained, "by the fact that the state is a medical Mecca, crowded with academic medical centers, specialists and the equipment needed to perform any test the human mind is capable of inventing."

As she originally explained in this article, in states where there are more hospital beds and more specialists, the population receives more aggressive, more intensive, and more expensive care. Even after adjusting for local prices, race, age, and the underlying health of the population, supply drives demand. And it turns out that the Bay State has one doctor for every 267 citizens -- versus one doctor for every 425 people in the nation as a whole. Meanwhile, the state boasts an abundance of specialists, while suffering a critical shortage of primary care physicians.

For reform to work in Massachusetts, the state needs to make care more cost-effective, not just more accessible. That means encouraging providers to emphasize proven treatments that can do the most good for the most people while avoiding over-priced, not-fully-proven bleeding-edge services and products.

This won't be easy. As Ezra Klein recently pointed out on his blog at The American Prospect: "people generally like to equate better health with awesomer technology, but developing a slightly better drug for late-stage cancer -- a good, profitable innovation -- will do much less for health than getting the flu vaccine to everyone who needs it, or creating systems so everyone with elevated cardiac risk is on statins. These interventions are low innovation, but actually extremely effective ... Saving some level of money on innovation in order to rechannel it to access and basic interventions would probably make the country a whole lot healthier. But I don't think you're allowed to say that." Yet if health care is to improve -- in both Massachusetts and the U.S. -- this is exactly what we need to say.

In fact, Massachusetts is a classic example of the technology-cost problem. According to a Boston Globe report, between 1997 and 2004, the number of MRI scanners in Mass. tripled to 145, about the total for all of Canada. From 1998 to 2002, the number of patient MRI scans in the state increased by 80 percent, to almost 500,000 a year. With insurers paying between $500 and $1,400 to cover a scan, the numbers add up: in 2003, Harvard Pilgrim, a non-profit insurer, shelled out $73 million on MRI scans, even as other costs increased as well. It should come as no surprise that a state with so much medical technology also has more medical and clinical lab technicians than any state in the union: 184.06 per 100,000 of the population, almost twice the national average of 101.32.
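Those scan figures imply a striking statewide spending range; here is a quick sketch (an upper-bound estimate under the simplifying assumption that every scan is reimbursed at the quoted insurer rates):

```python
# Implied statewide MRI spending from the figures cited above.
scans_per_year = 500_000
cost_low, cost_high = 500, 1_400   # insurer payment per scan, in dollars

low_total = scans_per_year * cost_low
high_total = scans_per_year * cost_high
print(f"Implied annual MRI spending: ${low_total / 1e6:.0f}M to ${high_total / 1e6:.0f}M")
# → Implied annual MRI spending: $250M to $700M
```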

Massachusetts also reports an abundance of hospital beds -- enough to allow patients to spend an average of 11.8 days in the hospital during their last six months of life -- compared to 9.5 days in Maine. None of this is consciously planned. It's just that if the beds are available, it's easier to hospitalize the patient. And once they are in the hospital, it's easy to refer them to a dozen specialists, assuming that enough specialists are available.

Bottom line, health care in Massachusetts is extremely expensive, thanks to supply-side factors -- which means expanding and sustaining full coverage is, fiscally speaking, a tough proposition. Luckily, steps are being taken to address cost issues: in June, the state's biggest insurer and the state itself said that they would stop reimbursing doctors and hospitals for 28 medical errors.

Certainly punishing doctors isn't the key to sustainable health reform -- and some errors are more "preventable" than others. But changing health care delivery -- that is, changing patterns in the types and volume of treatments and procedures made available to patients -- is the key to making health care run smoothly in the long-term. Universal coverage is wonderful and necessary. But it's only one piece of the health care puzzle.

The Startling Truth About Doctors and Diagnostic Errors

This article originally appeared on Health Beat.

Despite all of the talk about medical errors and patient safety, almost no one likes to talk about diagnostic errors. Yet doctors misdiagnose patients more often than we would like to think. Sometimes they diagnose patients with illnesses they don't have. Other times, the true condition is missed. All in all, diagnostic errors account for 17 percent of adverse events in hospitals, according to the Harvard Medical Practice Study, a landmark study that looks at medical errors.

Traditionally, these errors have not received much attention from researchers or the public. This is understandable: thinking about missed diagnoses and wrong diagnoses makes everyone -- patients as well as doctors -- queasy, especially because there is no obvious solution. But this past weekend the American Medical Informatics Association (AMIA) made a brave effort to spotlight the problem, holding its first-ever "Diagnostic Error in Medicine" conference.

Hats off to Bob Wachter, associate chairman of the Department of Medicine at the University of California, San Francisco, and the keynote speaker at the conference. Wachter shared some thoughts on diagnostic errors through his blog Wachter's World.

Wachter begins by pointing out that a misdiagnosis lacks the concentrated shock value that is needed to grab the public imagination. Diagnostic mistakes "often have complex causal pathways, take time to play out, and may not kill for hours [i.e., if a doctor misses myocardial infarction in a patient], days (missed meningitis) or even years (missed cancers)." In short, to understand diagnostic errors, you need to pay attention for a longer period of time -- not something that's easy to do in today's sound-bite driven culture.

Diagnostic errors just aren't media-friendly. When someone is prescribed the wrong medication and they die, the sequence of events is usually rapid enough that the story can be told soon after the tragedy occurs. But the consequences of a mistaken diagnosis are too diffuse to make a nice, punchy story. As Wachter puts it: "They don't pack the same visceral wallop as wrong-site surgery."

Finally, Wachter observes, it's hard to measure diagnostic errors. It's easy to get an audience's attention by telling it that "the average hospitalized patient experiences one medication error a day" or that "the average ICU patient has 1.7 errors per day in their care."

But we don't have equally clean numbers on missed diagnoses. As a result, he points out, "it's difficult to convince policy makers and hospital executives, who are now obsessing about lowering the rates of hospital-acquired infections and falls" to focus on a problem that is much more difficult to tabulate.

This is a recurring problem in programs that strive to improve the quality of care: We are mesmerized by the idea of "measuring" everything. Yet, too often, what is most important cannot be easily measured. Wachter recognizes the urgency of the problem: "As quality and safety movements gallop along, the need to address diagnostic errors grows more pressing," he writes. "Until we do, we will face a fundamental problem: A hospital can be seen as a high-quality organization -- receiving awards for being a stellar performer and oodles of cash from P4P programs -- if all of its 'pneumonia' patients receive the correct antibiotics, all its 'CHF' patients are prescribed ACE inhibitors, and all its 'MI' patients get aspirin and beta blockers.

"Even if every one of the diagnoses was wrong."

Why so many errors?

Medicine is shot through with uncertainty; diseases do not always present neatly, in textbook fashion, and every human body is unique. These are just a few reasons why diagnosis is, perhaps, the most difficult part of medicine.

But misdiagnosis almost always can be traced to cognitive errors in how doctors think. When diagnosis is based on simple observation in specialties like radiology and pathology, which rely heavily on visual interpretation, error rates probably range from 2 percent to 5 percent, according to Drs. Eta S. Berner and Mark L. Graber, writing in the May issue of the American Journal of Medicine.

By contrast, in clinical specialties that rely on "data gathering and synthesis" rather than observation, error rates tend to run as high as 15 percent. After reviewing "an extensive and ever-growing literature" on misdiagnosis, Berner and Graber conclude that "diagnostic errors exist at nontrivial and sometimes alarming rates. These studies span every specialty and virtually every dimension of both inpatient and outpatient care."

As the table below reveals, numerous studies show that the rate of misdiagnosis is "disappointingly high" both "for relatively benign conditions" and "for disorders where rapid and accurate diagnosis is essential, such as myocardial infarction, pulmonary embolism, and dissecting or ruptured aortic aneurysms."

STUDY: Shojania et al (2002)
CONDITION: Tuberculosis of the lungs (bacterial infection)
FINDINGS: Reviewing autopsy studies specifically focused on the diagnosis of lung TB, researchers found that 50 percent of these diagnoses were not suspected by physicians before the patient died.

STUDY: Pidenda et al (2001)
CONDITION: Pulmonary embolism (a blood clot that blocks arteries in the lungs)
FINDINGS: This study reviewed diagnosis of fatal dislodged blood clots over a five-year period at a single institution. Of 67 patients who died of pulmonary embolism, clinicians didn't suspect the diagnosis in 37 (55 percent) of them.

STUDY: Lederle et al (1994), von Kodolitsch et al (2000)
CONDITION: Ruptured aortic aneurysm (when a weakened, bulging area in the aorta ruptures)
FINDINGS: These two studies reviewed cases at a single medical center over a seven-year period. Of 23 cases involving abdominal aortic aneurysms, the diagnosis of rupture was initially missed in 14 (61 percent); in patients presenting with chest pain, doctors initially missed the diagnosis of aortic dissection in 35 percent of cases.

STUDY: Edlow (2005)
CONDITION: Subarachnoid hemorrhage (bleeding in a particular region of the brain)
FINDINGS: This study, an updated review of published studies on this particular type of brain bleeding, shows about 30 percent are misdiagnosed on initial evaluation.

STUDY: Burton et al (1998)
CONDITION: Cancer detection
FINDINGS: Autopsy study at a single hospital: of the 250 malignant tumors found at autopsy, 111 were either misdiagnosed or undiagnosed, and in just 57 of the cases, the cause of death was judged to be related to the cancer.

STUDY: Beam et al (1996)
CONDITION: Breast cancer
FINDINGS: Fifty accredited centers agreed to review mammograms of 79 women, 45 of whom had breast cancer. The centers missed the cancer in 21 percent of the patients.

STUDY: McGinnis et al (2002)
CONDITION: Melanoma (skin cancer)
FINDINGS: In this study, a second review of 5,136 biopsy samples, the diagnosis changed in 11 percent of the samples over time: 1.1 percent changed from benign to malignant, 1.2 percent from malignant to benign, and 8 percent had a change in doctors' ranking of how abnormal the cells were -- suggesting a not insignificant initial error rate.

STUDY: Perlis (2005)
CONDITION: Bipolar disorder
FINDINGS: The initial diagnosis was wrong in 69 percent of patients with bipolar disorder and delays in establishing the correct diagnosis were common.

STUDY: Graff et al (2000)
CONDITION: Appendicitis (inflamed appendix)
FINDINGS: Retrospective study at 12 hospitals of patients with abdominal pain and operations for appendicitis. Of 1,026 patients who had surgery, there was no appendicitis in 110 (10.5 percent); of 916 patients with a final diagnosis of appendicitis, the diagnosis was missed or wrong in 170 (18.6 percent).

STUDY: Raab et al (2005)
CONDITION: Cancer pathology (microscopic examination of tissues and cells to detect cancer)
FINDINGS: The frequency of errors in diagnosing cancer was measured at four hospitals over a one-year period. The error rate of pathologic diagnosis was 2 percent to 9 percent for gynecology cases and 5 percent to 12 percent for nongynecology cases; errors ranged from sampling the wrong tissue, to preparation problems, to misinterpretation of tissue anatomy under the microscope.

STUDY: Buchweitz et al (2005)
CONDITION: Endometriosis (tissue similar to the lining of the uterus is found elsewhere in the body)
FINDINGS: Digital videotapes of the inside of patients' bodies were shown to 108 gynecologic surgeons. Surgeons agreed only 18 percent of the time as to how many tissue areas were actually affected by this condition.

STUDY: Gorter et al (2002)
CONDITION: Psoriatic arthritis (red, scaly skin coupled with joint inflammation)
FINDINGS: One of two patients with psoriatic arthritis visited 23 joint and motor specialists; the diagnosis was missed or wrong in nine visits (39 percent).

STUDY: Bogun et al (2004)
CONDITION: Atrial fibrillation (abnormal heart beat in the upper chambers of the heart)
FINDINGS: A review of machine-read electrocardiograms [graphical recordings of the heart's electrical activity] that concluded a patient suffered from this abnormal heart beat found that 35 percent of the patients were misdiagnosed by the machine, and the reviewing clinician caught the error only 76 percent of the time.

STUDY: Arnon et al (2006)
CONDITION: Infant botulism (toxic bacterial infection in newborns' intestines)
FINDINGS: Study of 129 infants in California suspected of having botulism during a five-year period; only 50 percent of the cases were suspected at the time of admission.

STUDY: Edelman (2002)
CONDITION: Diabetes (high blood sugar due to insufficient insulin)
FINDINGS: Retrospective review of 1,426 patients with laboratory evidence of diabetes showed that there was no mention of diabetes in the medical record of 18 percent of patients.

STUDY: Russell et al (1988)
CONDITION: Chest x-rays in the emergency department
FINDINGS: One third of x-rays were incorrectly interpreted by the emergency department staff compared with the final readings by radiologists.


Misdiagnosis rarely springs from a "lack of knowledge per se, such as seeing a patient with a disease that the physician has never encountered before," Berner and Graber explain. "More commonly, cognitive errors reflect problems gathering data, such as failing to elicit complete and accurate information from the patient; failure to recognize the significance of data, such as misinterpreting test results; or most commonly, failure to synthesize or 'put it all together.'"

The breakdown in clinical reasoning often occurs because the physician isn't willing or able to "reflect on [his] own thinking processes and critically examine [his] assumptions, beliefs, and conclusions." In a word, the physician is too "confident."

Indeed, Berner and Graber find an inverse relationship between confidence and skill. In one study they reviewed, the researchers looked at diagnoses made by medical students, residents and physicians, and asked them how certain they were that they were correct. The good news is that while medical students were less accurate, they also were less confident; meanwhile the attending physicians were the most accurate and highly confident. The bad news is that the residents were more confident than the others, but significantly less accurate than the attending physicians. In another study, researchers found that residents often stayed wedded to an incorrect diagnosis even when a diagnostic decision support system suggested the correct diagnosis.

In a third study of 126 patients who died in the ICU and underwent autopsy, physicians were asked to provide the clinical diagnosis and also their level of uncertainty. Level 1 represented complete certainty, level 2 indicated minor uncertainty, and level 3 designated major uncertainty. Here the punch line is alarming: Clinicians who were "completely certain" of the diagnosis before death were wrong 40 percent of the time.

Overconfidence, or the belief that "I know all I need to know," may help explain what the researchers describe as a "pervasive disinterest in any decision support or feedback, regardless of the specific situation." Studies show that "physicians admit to having many questions that could be important at the point of care, but which they do not pursue. Even when information resources are automated and easily accessible at the point of care with a computer, one study found that only a tiny fraction of the resources were actually used."

Research shows that physicians tend to ignore computerized decision-support systems, often in the form of guidelines, alerts and reminders. "For many conditions, consensus exists on the best treatments and the recommended goals," Berner and Graber point out. Nevertheless, a comprehensive review of medical practice in the United States found that the care provided deviated from recommended best practices half of the time. In one study, the researchers suggest that the high rate of noncompliance with clinical guidelines relates to "the sociology of what it means to be a professional" in our health care system: "Being a professional connotes possessing expert knowledge in an area and functioning relatively autonomously." Many physicians have yet to learn that 21st century medicine is too complex for anyone to know everything -- even in a single specialty. Medicine has become a team sport.

But while it's easy to blame medical "arrogance" for the high rate of errors, "there is substantial evidence that overconfidence -- that is, miscalibration of one's own sense of accuracy and actual accuracy -- is ubiquitous and simply part of human nature," Berner and Graber write. "A striking example derives from surveys of academic professionals, 94 percent of whom rate themselves in the top half of their profession. Similarly, only 1 percent of drivers rate their skills below that of the average driver."

In another study published in the same issue of the American Journal of Medicine, Pat Croskerry and Geoff Norman note that such equanimity regarding one's own skills can lead to what's called "confirmation bias." People "anchor" on findings that support their initial assumptions. Given a set of information, it's much easier to pull out the data that proves you right and pat yourself on the back than it is to look at the contradictory evidence and rethink your assumptions. Indeed, Croskerry and Norman observe, "It takes far more mental effort to contemplate disconfirmation -- by considering all the other things it might be -- than confirmation."

Making things all the more difficult is the fact that, at a certain point, the alternative to confirmation bias -- what Croskerry and Norman call "consider the opposite" -- becomes impractical. If a doctor embraces uncertainty, he could easily become paralyzed.

What doctors need to do is to simultaneously make a decision -- and keep an open mind. Often, a doctor must embark on a course of treatment as a way of diagnosing the condition -- all the time knowing that he may be wrong.

Too often, Berner and Graber observe, physicians narrow the diagnostic hypotheses too early in the process, so that the correct diagnosis is never seriously considered. Reliance on advanced diagnostic tests can encourage what they call "premature closure." After all, high-tech diagnostic technologies offer up hard-and-fast data, fostering the illusion that the physician has vanquished medicine's ambiguity.

But in truth, advanced diagnostic tools can miss critical information. The problem is not the technology, but how we use it. Some observers suggest that the newest and most sophisticated tools are more likely to produce false negatives because doctors accept the results so readily.

"In most cases, it wasn't the technology that failed," explains Dr. Atul Gawande in Complications: A Surgeon's Notes on an Imperfect Science. "Rather, the physician did not consider the right diagnosis in the first place. The perfect test or scan may have been available, but the physician never ordered it." Instead, he ordered another test -- and believed it.

"We get this all the time," Bill Pellan of Florida's Pinellas-Pasco County Medical Examiner's Office told the New York Times a few years ago. "The doctor will get our report and call and say: 'But there can't be a lacerated aorta. We did a whole set of scans.'

"We have to remind him we held the heart in our hands."


Sometimes physicians are overly confident; sometimes they narrow their hypothesis too early in the diagnostic process. Sometimes they rely too heavily on advanced diagnostic tests and accept the results too quickly. As I explained in part one of this post, these are some of the reasons why physicians misdiagnose their patients up to 15 percent of the time.

"Complacency" (i.e., the attitude that "nobody's perfect") also is a factor, report Drs. Eta S. Berner and Mark L. Graber in the May issue of the American Journal of Medicine. "Complacency reflects tolerance for errors, and the belief that errors are inevitable," they write, "combined with little understanding of how commonplace diagnostic errors are. Frequently, the complacent physician may think that the problem exists, but not in his own practice ..."

It is crucial to recognize that physicians are not simply deceiving themselves: In our fragmented healthcare system, many honestly don't know when they have misdiagnosed a patient. No one tells them -- including the patient.

Sometimes a patient who isn't getting better simply leaves the doctor and finds someone else. His original doctor may well assume that he was finally cured. Or the patient may be discharged from the hospital, relapse three months later, and go to a different ER where he discovers that his symptoms have returned because he was, in fact, misdiagnosed. The doctors who cared for him at the first hospital have no way of knowing; they think they cured him. In other cases, the patient gets better despite the wrong diagnosis. (It is surprising how often bodies heal themselves.) Meanwhile, both doctor and patient assume that the diagnosis was right and that the treatment "worked."

In still other cases, the patient dies, and because everyone assumes that the diagnosis was correct, it is listed as the "cause of death" -- when in fact, another condition killed the patient.

When giving talks to groups of physicians on diagnostic errors, Graber says that he frequently "asks whether they have made a diagnostic error in the past year. Typically, only 1 percent admit to having made such a mistake."

Here, we reach the heart of the problem: what Berner and Graber call "the remarkable discrepancy between the known prevalence of diagnostic error and physician perception of their own error rate." This gap "has not been formally quantified and is only indirectly discussed in the medical literature," they note, "but [it] lies at the crux of the diagnostic error puzzle and explains in part why so little attention has been devoted to this problem."

One cannot expect doctors to learn from their mistakes unless they have feedback: At one time, autopsies provided physicians with the information they needed. And the results were regularly discussed at "mortality and morbidity" conferences, where doctors became Monday-morning quarterbacks, discussing what they could have done differently.

But today, "autopsies are done in 10 percent of all deaths; many hospitals do none," notes Dr. Atul Gawande in Complications: A Surgeon's Notes on an Imperfect Science. "This is a dramatic turnabout. Throughout much of the 20th century, doctors diligently obtained autopsies in the majority of all deaths ... Autopsies have long been viewed as a tool of discovery, one that has been used to identify the cause of tuberculosis, reveal how to treat appendicitis and establish the existence of Alzheimer's disease.

"So what accounts for the decline?" Gawande asks. "In truth, it's not because families refuse -- to judge from recent studies, they still grant their permission up to 80 percent of the time. Instead, doctors once so eager to perform autopsies that they stole bodies [from graves] have simply stopped asking.

"Some people ascribe this to shady motives," Gawande continues. "It has been said that hospitals are trying to save money by avoiding autopsies, since insurers don't pay for them, or that doctors avoid them in order to cover up evidence of malpractice. And yet," he points out, "autopsies lost money and uncovered malpractice when they were popular, too."

Gawande doesn't believe that fear of malpractice has driven the decline in autopsies. "Instead," he writes, "I suspect, what discourages autopsies is medicine's 21st century, tall-in-the-saddle confidence."

This is an important point. Autopsies have fallen out of fashion in recent years: "Between 1972 and 1995, the last year for which statistics are available, the rate fell from 19.1 percent of all deaths to 9.4 percent. A major reason for the decline over this period is that "imaging technologies such as CT scanning and ultrasound have enabled doctors to 'see' such obvious internal causes of death as tumors before the patient dies," says Dr. Patrick Lantz, associate professor of pathology at Wake Forest University Baptist Medical Center. Nowadays an autopsy seems a waste of time and resources.

Gawande agrees: "Today we have MRI scans, ultrasound, nuclear medicine, molecular testing and much more. When somebody dies, we already know why. We don't need an autopsy to find out ... Or so I thought ... " Gawande then goes on to tell the story of an autopsy that rocked him. He had completely misdiagnosed a patient.

What autopsies show

The autopsy has been described as "the most powerful tool in the history of medicine" and the "gold standard" for detecting diagnostic errors. Indeed, Gawande points out that three studies done in 1998 and 1999 reveal that autopsies "turn up a major misdiagnosis in roughly 40 percent of all cases."

A large review of autopsy studies concluded that, "in about a third of the misdiagnoses, the patients would have been expected to live if proper treatment had been administered," Gawande reports. "Dr. George Lundberg, a pathologist and former editor of the Journal of the American Medical Association, has done more than anyone to call attention to these figures. He points out the most surprising fact of all: The rate at which misdiagnosis is detected in autopsy studies has not improved since at least 1938."

When Gawande first heard these numbers he couldn't believe them. "With all of the recent advances in imaging and diagnostics ... it's hard to accept that we have failed to improve over time." To see if this really could be true, he and other doctors at Harvard put together a simple study. They went back into their hospital records to see how often autopsies picked up missed diagnoses in 1960 and 1970, before the advent of CT, ultrasound, nuclear scanning and other technologies, and then in 1980, after those technologies became widely used.

Gawande reports the results of the study: "The researchers found no improvement. Regardless of the decade, physicians missed a quarter of fatal infections, a third of heart attacks and almost two-thirds of pulmonary emboli in their patients who died."

But these numbers may exaggerate the rate of error. As Berner and Graber observe, "Autopsy studies only provide the error rate in patients who die." One can assume that the error rate is much lower in patients who survived.

"For example, whereas autopsy studies suggest that fatal pulmonary embolism is misdiagnosed approximately 55 percent of the time, the misdiagnosis rate for all cases of pulmonary embolism is only 4 percent ... A large discrepancy also exists regarding the misdiagnosis rate for myocardial infarction: although autopsy data suggest roughly 20 percent of these events are missed, data from the clinical setting (patients presenting with chest pain or other relevant symptoms) indicate that only 2 percent to 4 percent are missed."
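The selection effect behind these mismatched numbers can be made concrete with a back-of-the-envelope Bayes calculation. The 4 percent overall misdiagnosis rate for pulmonary embolism comes from the passage above; the two fatality rates below are hypothetical figures chosen purely for illustration, to show how conditioning on death inflates the error rate an autopsy series will observe:

```python
# Illustrative only: the 4% overall misdiagnosis rate is from the article;
# the two fatality rates are hypothetical assumptions, not published data.
p_misdx = 0.04          # overall misdiagnosis rate for pulmonary embolism
p_death_misdx = 0.30    # hypothetical: chance of death when misdiagnosed
p_death_correct = 0.01  # hypothetical: chance of death when correctly diagnosed

# Bayes' rule: P(misdiagnosed | died)
p_death = p_death_misdx * p_misdx + p_death_correct * (1 - p_misdx)
p_misdx_given_death = p_death_misdx * p_misdx / p_death

print(f"{p_misdx_given_death:.0%}")  # about 56%
```

Under these assumed fatality rates, a 4 percent overall error rate shows up as a roughly 56 percent error rate among the patients who die, in the neighborhood of the ~55 percent autopsy figure cited above.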

Still, they acknowledge that when laymen are trained to pretend to be a patient suffering from specific symptoms, studies show that "internists missed the correct diagnosis 13 percent of the time. Other studies have found that physicians can even disagree with themselves when presented again with a case they have previously diagnosed."

On the question of whether the diagnostic error rate has changed over time, Berner and Graber quote researchers who suggest that the near-constant rate of misdiagnosis found at autopsy over the years probably reflects two factors that offset each other:

  1. diagnostic accuracy actually has improved over time (more knowledge, better tests, more skills);
  2. but as the autopsy rate declines, there is a tendency to select only the more challenging clinical cases for autopsy, which then have a higher likelihood of diagnostic error. A long-term study of autopsies in Switzerland (where the autopsy rate has remained constant at 90 percent) supports the theory that the absolute rate of diagnostic errors is, as suggested, decreasing over time.

Nevertheless, nearly everyone agrees, the rate of diagnostic errors remains too high.

We need to revive the autopsy, Gawande argues. For "autopsies not only document the presence of diagnostic errors, they also provide an opportunity to learn from one's errors (errando discimus) if one takes advantage of the information.

"The rate of autopsy in the United States is not measured anymore," he observes, "but is widely assumed to be significantly below 10 percent. To the extent that this important feedback mechanism is no longer a realistic option, clinicians have an increasingly distorted view of their own error rates.

Autopsy literally means "to see for oneself," Gawande observes, and despite our knowledge and technology, when we look we are often unprepared for what we find. Sometimes it turns out that we had missed a clue along the way or made a genuine mistake. Sometimes we turn out wrong despite doing everything right.

"Whether with living patients or dead, we cannot know until we look. ... But doctors are no longer asking such questions. Equally troubling, people seem happy to let us off the hook. In 1995, the United States National Center for Health Statistics stopped collecting autopsy statistics altogether. We can no longer even say how rare autopsies have become."

If they are going to reflect on their mistakes, physicians need to "see for themselves."

The Dangers of Consumer-Driven Medicine

Medical device makers are taking direct-to-consumer (DTC) advertising to a perilous new level. In a piece titled "Crossing the Line in Consumer Education?" that appeared in the May 22 issue of The New England Journal of Medicine (NEJM), Drs. William E. Boden and George A. Diamond tackle the issue, arguing that a new campaign to peddle medical devices directly to patients warrants close scrutiny. Manufacturers are inviting consumers to decide not only what is best for them, but what is best for their surgeons. This is "consumer-driven medicine" at its most dangerous.

Boden and Diamond focus on a 60-second television spot for Johnson & Johnson's drug-eluting coronary stent, "the Cypher," which debuted during last year's Thanksgiving match-up between the Dallas Cowboys and the New York Jets. (Click here to view the advertisement in question).

The commercial has all of the hallmarks of the drug industry's highly polished DTC advertising: First, we're introduced to "the tough guy" -- a once-powerful man who now is "cornered by chest pains" and sits slumped in his arm chair. Then, we are shown how he can reclaim his life in a montage of joyous physical activity accompanied by upbeat music. Of course, "this product isn't for everyone," we're told. But "life is wide open. It all depends on what you've got inside."

In the campaign to put the health care "consumer" in the driver's seat, where he can have "control" and "choice," J&J is breaking new ground. This ad isn't for a pill that you buy in a pharmacy but rather for a coronary stent, a wire mesh device that is placed in an artery which has been blocked by fatty deposits. Doctors first thread a tiny balloon into the artery and inflate it to clear the blockage; then they insert a stent into the artery, and a second balloon expands the stent to keep the newly cleared blood vessel wide open.

"Unlike a drug," Boden and Diamond point out, "whose use merely requires an office visit to a physician and a prescription the patient can fill at a pharmacy, a specialized medical device such as a stent can be selected and implanted only by someone with a very sophisticated medical understanding that no member of the lay public could realistically expect to gain from a DTCA campaign."

This is an important point. It's bad enough that some patients are now sold drugs via a sound-bite, but it is even more pernicious to pretend that the pros and cons of a medical device can be condensed into a 30-second spot. In this case you're not just popping a pill that you can stop taking if you don't like the way it makes you feel. Medical devices are literally installed in our bodies, and they stay there; even when short-term results look promising, complications often do not become evident until well after installation.

Moreover, it's imperative that a surgeon is comfortable with the device he is using. In Money-Driven Medicine George Cipoletti, co-founder of Apex Surgical, a company that focuses on joint replacement products, explains that, when it comes to devices, "90 percent of success is determined not by the device itself but, by how good the surgeon is at implanting that particular device -- how much experience he has with it."

John Cherf, a Chicago knee surgeon, adds that surgical technique accounts for "80 to 85 percent" of a successful operation. "Think of it this way," said Cherf. "If you gave Tiger Woods 20-year-old golf clubs, and gave me the newest clubs, he'd still kick my butt."

This is another reason why Boden and Diamond find it "almost unimaginable that a patient would challenge an interventional cardiologist's judgment about the use of a particular stent or that a cardiologist would accede to a patient's request for a particular stent on the basis of the information gleaned from a television ad. Indeed, the notion that television viewers, inspired by such an ad, would go to their physicians and request not only a stent but a specific brand and model of stent is frightening, if not utterly absurd."

Yet why else would J&J spend millions on television advertising? The company's goal is to create demand -- a "buzz" that will cause patients to ask about the product, and that will make some hospitals and surgeons feel that they must use it.

This is what happened with another J&J product, a spinal device called "Charité." After being approved by the FDA in 2004, Charité was heavily promoted. By the fall of 2005, more than 3,000 of the spinal discs had been implanted -- even though only two of the nation's eight largest insurers had agreed to pay for the operation. Some surgeons were questioning the safety of the device, but patients who read favorable reports about Charité online or in the press were beginning to demand the operation.

As a result, some hospitals were willing to absorb the cost of the operation even though insurers wouldn't reimburse. Dr. John Brockvar, chief of neurosurgery at Wyckoff Heights Hospital in Brooklyn, told Dow Jones Newswires that his hospital gave him permission to implant the device "because it was important to be on the leading edge."

"Some doctors say they're worried they will lose business if they don't offer the Charité option to patients," The Wall Street Journal reported. "There's a feeling that it isn't adequately proven, but there's anxiety about being left behind." [my emphasis]

In an almost pure example of money-driven, consumer-driven medicine, manufacturers intent on profits pushed consumers to push doctors and hospitals to use a product that they were not convinced was safe. This is not how we want medical decisions to be made.

Today, Charité remains controversial. There are many questions about long-term complications, and last spring, Medicare announced that it would no longer cover Charité for patients over sixty.

J&J's stent, Cypher, also has its critics -- which may be one reason why the company is pumping up promotion via television advertising. In the past, drug-makers have poured money into television ads for the same reason that movie studios resort to expensive television advertising: the critics are panning the product. If a drug-maker is having a hard time selling its new product directly to physicians -- either because the reviews in the medical journals are mixed, or because it is a "me-too" product that appears to offer little benefit over older, less expensive drugs -- manufacturers go directly to the consumer, who is less likely to be aware of what the critics are saying.

This may be what J&J is doing with Cypher, one of the new "drug-eluting stents" that, unlike older, less expensive "bare-metal stents" release drugs which are supposed to prevent arteries from re-clogging.

If you were to judge drug-eluting stents solely by the Cypher advertisement, you might think they're a remarkable sure-fire innovation. After all, as the commercial asserts, "when your arteries narrow, so does your life." Who wouldn't want to lead a better life thanks to a device that -- again, according to the advertisement -- is "studied," "trusted" and "proven"?

Unfortunately, while drug-eluting stents have been studied, they are far from "proven." In fact, there is much debate over whether or not they're good options for folks with clogged arteries. In a 2007 NEJM article, William Maisel of Harvard Medical School asserted that, since the FDA approval of drug-eluting stents in 2003, "concerns about an increased risk of late stent thrombosis [i.e. late onset blood clots] have arisen and have been exacerbated by insufficient and conflicting information in the public domain."

This is putting it diplomatically. According to Maisel, a major 2006 study found "that between 7 and 18 months after implantation, the rates of nonfatal myocardial infarction [i.e. heart attacks], death from cardiac causes, and ... stent thrombosis were higher with drug-eluting stents than with bare-metal stents [i.e. those that don't release drugs]."

Equally disconcerting is a Swedish study cited in NEJM in which doctors examined a computer registry of every Swedish stent patient for the years 2003 and 2004. This analysis of almost 20,000 people found that when drug-eluting stents were implanted, patients were slightly more likely to die than those who had old-fashioned bare-metal stents. And a Columbia University study reported that the four-year rate of stent thrombosis was 1.2 percent among patients who had received the Cypher, as opposed to 0.6 percent for those who had received bare-metal stents -- double the rate.

On the other hand, a more recent but smaller study of 6,552 cases published in the NEJM comparing bare metal to drug-eluting stents for so-called "off-label" use (use not specifically approved by the FDA), found a lower rate of complications and no increased risk of death or heart attack for the drug-coated stents. But in the same issue of the NEJM, a study suggested that if patients have more than one blocked artery, bypass surgery provides a lower risk of death and heart attacks than do procedures involving any type of stent.

Questions about when to use stents, and what kind of stent to use, are far from resolved. As Boden and Diamond point out, this underlines the absurdity of J&J's effort to sell Cypher "to millions of people who are ill-equipped to make judgments about the many clinically relevant but subtle and complex therapeutic issues that even specialists continue to debate."

But just how likely is it that surgeons really will respond to consumer demands?

Much to their chagrin, doctors are finding that pressure from patients does in fact change their behavior. In one of the most compelling analyses of this dynamic to date, a University of California team sent trained actors on 298 visits to 152 primary-care physicians, portraying patients with major depression or adjustment disorder. The actors presented doctors with three types of scenarios: requests for a specific brand of medication, general requests for medication without naming a brand, and no request for medication.

For major depression, physicians prescribed antidepressants at a rate of 53 percent for brand-specific requests; 76 percent for general requests; and 31 percent for no requests. For adjustment disorder, physicians prescribed antidepressants at a rate of 55 percent for brand-specific requests; 39 percent for general requests; and 10 percent for no requests. In other words, people with identical conditions were prescribed drugs at dramatically different rates depending on what they asked for.

The Wall Street Journal's report that hospitals and doctors feel that they must experiment with J&J's spinal disc, Charité, suggests that this logic can carry over to medical devices. One can envision a spike in procedures and surgeries thanks to patient demand. But do we really want a system where knee-jerk patient response to 30-second commercials trumps medical expertise?

Patients want to be able to ask questions. They want their doctors to take the time to give them detailed answers. But the more DTC ads encourage patients to make demands of their doctors, the more doctors and patients are positioned as antagonists rather than collaborators.

That's a recipe for friction, not patient satisfaction. One last question: even if patients get the Cypher they want, what happens when they develop a blood clot? Who is responsible then?

Whatever Happened to American Longevity?

Life expectancy is a pretty simple concept: it's an estimation of how long the average person lives. Anyone can understand that. So how is this for a compelling data point: if you look at life expectancy in nations around the globe, you'll find that over the past 20 years, the U.S. has sunk from No. 11 to No. 42. In other words, a baby born in 2004 in any one of 41 other countries can expect to live longer than his or her American counterpart.

This may come as a surprise. Sure, we all know the health care system in the U.S. is broken, but life expectancy isn't just tied to medicine -- it's also related to quality of life in a larger sense. (I can live in a nation with the best health care system in the world, but if it's in the throes of civil war, my life expectancy will be short). As we all know, the American standard of living is the envy of the world.  After all, we're the richest country on the globe. So what gives?

While some of us are rich, the average American is not.  And while the rich are living longer, the poor are living shorter.  Factor in the profit motive that drives U.S. healthcare, and you will begin to understand why American medicine has done little to heal the gap between rich and poor.  Over the past twenty-five years, we have poured money into healthcare, but have paid relatively little attention to public health.

This may seem a bold claim, but last month the Congressional Budget Office (CBO) issued a report that provides the numbers: "In 1980," the CBO found, "life expectancy at birth was 2.8 years more for the highest socioeconomic group than for the lowest. By 2000, that gap had risen to 4.5 years."

The report notes that "the 1.7-year increase in the gap" between socioeconomic groups "amounts to more than half of the increase in overall average life expectancy at birth between 1980 and 2000." In other words, although average life expectancy in the U.S. has increased, most of that gain has gone to the better-off; widening socioeconomic disparities have left those at the bottom far behind.

Citizens of countries that don't tolerate as much inequality enjoy longer lives. According to numbers from the Census Bureau and the National Center for Health Statistics, a baby born in the United States in 2004 will live an average of 77.9 years. In the U.K., an '04 baby can expect to live 78.7 years; in Germany, 79 years; in Norway, 79.7 years; in Canada, 80.3 years; in Australia, Sweden, and Switzerland, 80.6 years; and in Japan, a newborn can expect to live 81.4 years.

Somehow or other, when they hear these figures, most Americans just shrug. Indeed, "it is remarkable how complacent the public and the medical profession are in their acceptance of" our low ranking when it comes to life expectancy, "especially in light of trends in national spending on health," Dr. Steven Schroeder, a professor in the Department of Medicine at the University of California, San Francisco, wrote in the New England Journal of Medicine last year.

"One reason for the complacency may be the rationalization that the United States is more ethnically heterogeneous than the nations at the top of the rankings, such as Japan, Switzerland, and Iceland. But," Schroeder pointed out, "even when comparisons are limited to white Americans, our performance is dismal. And even if the health status of white Americans matched that in the leading nations, it would still be incumbent on us to improve the health of the entire nation."

In the OECD countries that outrank us, the gaps between rich and poor are not as great and, not coincidentally, all have universal health insurance. (As Maggie wrote in an earlier post on Health Beat, in countries that are mainly middle-class, there tends to be more social solidarity. People identify with each other, and are more willing to pool their resources to pay for healthcare for everyone.)

But having access to health care is only a small part of health. Schroeder identifies five factors that determine health and longevity: "social circumstances, genetics, environmental exposures, behavioral patterns and health care."  Of these five, when "it comes to reducing early deaths," he points out, "medical care has a relatively minor role."  Indeed, "inadequate health care accounts for only 10% of premature deaths, yet it receives by far the greatest share of resources and attention." 

Socioeconomic status is the strongest predictor of health, above and beyond access to health care. This is because socioeconomic status includes access to health care and a variety of other factors. Even when the poor have insurance, they are less likely to have access to cutting-edge medical discoveries; they're more likely to smoke, more likely to be obese, more likely to live in unsafe or unhealthy environments. They also tend to be less educated, meaning that they are less able to manage chronic diseases.

These facts are reflected in life expectancy. African-Americans are more likely to live in poverty than other Americans: as a result, black men can expect to live six years less than white men, and black women four years less than white women.  Education, another critical component of socioeconomic status, also contributes to the story. The CBO reports that "the gap in life expectancy at age 25 between individuals with a high school education or less and individuals with any college education increased by about 30 percent" from 1990 to 2000. "The gap widened because of increases in life expectancy for the better educated group," the report notes. "Life expectancy for those with less education did not increase over that period."

This trend is clear: since 1980, affluent members of society have made gains while the have-nots have, at best, run in place, and, at worst, lost ground. Another recent study published in PLoS Medicine takes a broader look at the problem by going all the way back to 1960 to see how life expectancies have differed across U.S. counties. (Counties were used because they are the smallest geographic units for which death rates are collected, thus allowing for a precise comparison of subgroups.) The authors, who hail from Harvard, UCSF, and the University of Washington, discovered that "beginning in the early 1980s and continuing through 1999, those who were already disadvantaged did not benefit from the gains in life expectancy experienced by the advantaged, and some became even worse off."

1980 was a watershed year. Indeed, the study reports that from 1960 to 1980, life expectancy increased everywhere. But "beginning in the early 1980s the differences in death rates across different counties began to increase. The worst-off counties no longer experienced a fall in death rates, and in a substantial number of counties, mortality actually increased, especially for women..."

So what was so special -- or rather, harmful -- about the 1980s?

1980 was the year that a conservative agenda firmly replaced the "War on Poverty" that LBJ had begun in the 1960s. For the next 28 years, the trend would continue as corporate welfare and tax cuts for the wealthy replaced programs for the poor and middle-class.

As the authors of a 2006 PLoS Medicine study note, "in the 1980s there was a general cutting back of welfare state provisions in America, which included cuts to public health and antipoverty programs, tax relief for the wealthy, and worsening inequity in the access to and quality of health care." By contrast, in the 1960s, "civil rights legislation and the establishment of Medicare set out to reduce socioeconomic and racial/ethnic inequalities and improve access to health care."

But after 1980, the '06 PLoS Medicine study shows, rates of premature mortality across socioeconomic groups began to diverge, rolling back the gains of the 1960s and 1970s. In a stunning conclusion, the study's authors reported that if all Americans had experienced the same health gains as the most advantaged group (whites in the highest income bracket), "14 percent of the premature deaths among whites and 30 percent of the premature deaths among people of color would have been prevented."

In sum, the stronger social safety net of the 1960s helped to increase longevity for all Americans; its erosion in the 1980s created a discrepancy between the haves and the have-nots. Indeed, given that socioeconomic status is the strongest predictor of health, it's noteworthy that the lowest quintile of earners in the U.S. saw its income fall by 15 percent between 1979 and 1993, while the highest 20 percent saw their income grow by 18 percent over the same period. The poverty rate in the U.S. was cut nearly in half in the 1960s; from 1980 to 1989, it inched down by just one percentage point.

Clearly, the decline of American longevity is related to an increase in American inequality. But it would be short-sighted to stop our analysis here. It's also worth asking, where have we been spending our health care dollars?

"To the extent that the United States has a health strategy, its focus is on the development of new medical technologies and support for basic biomedical research," Schroeder observes. "We already lead the world in the per capita use of most diagnostic and therapeutic medical technologies, and we have recently doubled the budget for the National Institutes of Health. But these popular achievements are unlikely to improve our relative performance" when it comes to longevity.

If we want to cut the number of premature deaths, we might put more emphasis on smoking cessation clinics. "Smoking causes 440,000 deaths a year in the United States," notes Schroeder, who directs the Smoking Cessation Leadership Center at UCSF. "Smoking shortens smokers' lives by 10 to 15 years, and those last few years can be a miserable combination of severe breathlessness and pain."  44.5 million Americans still smoke.  "Smoking among pregnant women is a major contributor to premature births and infant mortality. Smoking is increasingly concentrated in the lower socioeconomic classes and among those with mental illness or problems with substance abuse," Schroeder explains.  "Understanding why they smoke and how to help them quit should be a key national research priority. Given the effects of smoking on health, the relative inattention to tobacco by those federal and state agencies charged with protecting the public health is baffling and disappointing."

Kaiser Permanente of northern California has shown that it can be done. When Kaiser implemented a multisystem approach to help smokers quit, Schroeder reports that "the smoking rate dropped from 12.2% to 9.2% in just 3 years. Of the current 44.5 million smokers, 70% claim they would like to quit. Assuming that one half of those 31 million potential nonsmokers will die because of smoking, that translates into 15.5 million potentially preventable premature deaths. Merely increasing the baseline quit rate from the current 2.5% of smokers to 10% -- a rate seen in placebo groups in most published trials of the new cessation drugs -- would prevent 1,170,000 premature deaths. No other medical or public health intervention approaches this degree of impact. And we already have the tools to accomplish it."
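Schroeder's back-of-the-envelope numbers hold up. As a quick sketch (using only the figures quoted above; small differences from his totals reflect rounding of intermediate values):

```python
# Sanity-checking Schroeder's smoking-cessation arithmetic,
# using only the figures quoted in the passage above.
smokers = 44.5e6                          # current U.S. smokers
want_to_quit = 0.70 * smokers             # ~31 million say they want to quit
preventable = 0.5 * want_to_quit          # assume half would otherwise die of smoking

# Raising the annual quit rate from 2.5% to 10% of would-be quitters:
extra_quitters = (0.10 - 0.025) * want_to_quit
deaths_prevented = 0.5 * extra_quitters   # again, half would otherwise die

print(f"potential quitters: {want_to_quit / 1e6:.1f} million")
print(f"potentially preventable premature deaths: {preventable / 1e6:.1f} million")
print(f"deaths prevented by a 10% quit rate: {deaths_prevented / 1e6:.2f} million")
```

The last figure works out to roughly 1.17 million, matching the "1,170,000 premature deaths" Schroeder cites.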

The poor also are more likely to be obese, "in part because of inadequate local food choices and recreational opportunities," says Schroeder.  Fattening foods are cheaper than fresh fruit, vegetables and fish, particularly if you are shopping in inner cities. Gyms are too expensive for low-income families; exercising outdoors can be dangerous, and in inner cities, public schools often lack playgrounds and gymnasiums.

"Psychosocial stress" also leads poorer Americans to engage in "other behaviors that reduce life expectancy such as drug use and alcoholism," Schroeder notes. And even when they avoid these behaviors, "people in lower classes are less healthy and die earlier than others." A polluted environment, combined with uncertainty and worry, takes a toll.

Rather than focusing solely on medicine and medical care, Schroeder is committed to strategies that would improve public health. In the U.S. there is a sharp division between the two, with public health always the poor relation.

"It's harder, because there's stigma attached to it," Schroeder explains. "There's a sense among some that if a large portion of the nation's population is obese or sedentary, drinks or smokes too much, or uses illegal drugs, that's their own fault or their own business."

"We often get a double-standard question," he continues. "Critics who object to investing more in programs that could help drug addicts and alcoholics ask: Well, don't many of these people relapse?"

"Yes, of course," Schroeder responds. "But is it worth treating pancreatic cancer, which has a 5 percent survival rate, at most? Yes. So the odds of successfully treating drug abuse or alcoholism are actually better than in many of the serious illnesses that society, without question, wants us to treat."

Schroeder is right: When allocating health care dollars, we eagerly spend far more on cutting-edge drugs that might give a cancer patient an extra five months than on drug rehab clinics that could make the difference between dying at 28 and living to 68. 

Again, 1980 marks a turning point, notes Marcia Angell, a Senior Lecturer at Harvard Medical School and the former editor-in-chief of NEJM. Between 1960 and 1980, "prescription drug sales were fairly static as a percent of US gross domestic product, but from 1980 to 2000, they tripled."

This wasn't just happenstance, says Angell. A major catalyst of the pharma boom was the Bayh-Dole Act of 1980, a law that "enabled universities and small businesses to patent discoveries emanating from research sponsored by the National Institutes of Health, the major distributor of tax dollars for medical research, and then to grant exclusive licenses to drug companies." In other words, the Bayh-Dole Act commoditized medical research.

Before 1980, "taxpayer-financed discoveries were in the public domain, available to any company that wanted to use them," says Angell. As a result, long-term, collaborative tinkering could help to create new and effective medications. But Bayh-Dole made research proprietary and profitable.

After Bayh-Dole, drug research seemed to be less about making real medical progress, and more about doing the bare minimum to create a patentable product. And so began the age of me-too drugs, which do little to promote health and instead exist to increase market share. In a Boston Globe op-ed last year, Angell observed that, "according to FDA classifications, fully 80 percent of drugs that entered the market during this decade are unlikely to be better than existing ones for the same condition."

Why are we willing to devote 13 or 14 percent of our $2.2 trillion health care budget to prescription drugs, while refusing to help the quarter of the population that still smokes?

"It is arguable that the status quo is an accurate expression of the national political will -- a relentless search for better health among the middle and upper classes," Schroeder acknowledges. [our emphasis] "This pursuit is also evident in how we consistently outspend all other countries in the use of alternative medicines and cosmetic surgeries and in how frequently health 'cures' and 'scares' are featured in the popular media. The result is that only when the middle class feels threatened by external menaces (e.g., secondhand tobacco smoke, bioterrorism, and airplane exposure to multidrug-resistant tuberculosis) will it embrace public health measures. In contrast, our investment in improving population health -- whether judged on the basis of support for research, insurance coverage, or government-sponsored public health activities -- is anemic."

We're hopeful that this will change. In going to medical conferences over the past year, Maggie has met an impressive number of very, very bright 20-somethings who are devoting their careers to public health. And they understand that "medicine" and "public health" are not separate disciplines.

21st Century Medicine Fraught With Miscommunication and Human Error

In the most recent issue of the New England Journal of Medicine, Dr. Thomas Bodenheimer defines the coordination of medical care as "the deliberate integration of patient care activities between two or more participants involved in a patient's care to facilitate the appropriate delivery of healthcare services." Or, to put it in layman's terms: doctors working together to get things right.

The value of this sentiment should be self-evident, but the coordination of medical care is more complex than it initially seems -- even when discussing admittedly uncomplicated concepts. Consider the "hand-off," that transitional moment when a patient is passed from one provider to another (e.g., from primary care physician to specialist, specialist to surgeon, surgeon to nurse, etc.) -- or is discharged. This transition is unavoidable. As Bodenheimer points out, modern healthcare necessitates a "pluralistic delivery system that features large numbers of small providers, [which] magnif[ies] the number of venues such patients need to visit." Twenty-first century medicine is too complex for one-stop shopping.

Inescapable though it may be, the hand-off is fraught with pitfalls. As Quality and Safety in Health Care (QSHC), a publication of the British Medical Journal, noted in January, the simple transition of a patient from one caretaker to another represents a gap that is "considered especially vulnerable to error."

Even the most common hand-off -- your standard referral from primary care physician to specialist -- is not risk-free. As Dr. Bob Wachter recently noted in his blog, "In more than two-thirds of outpatient subspecialty referrals, the specialist received no information from the primary care physician to guide the consultation." Sadly, the radio silence goes both ways: "In one-quarter of the specialty consultations," Wachter says, "the primary care physician received no information back from the consultant within a month."

These missteps are indicative of what can go wrong during the hand-off, such as, according to QSHC, "inaccurate medical documentation and unrecorded clinical data." Such misinformation can lead to extra "work or rework, such as ordering additional or repeat tests" or getting "information from other healthcare providers or the patient" -- a sometimes arduous process that can "result in patient harm (e.g., delay in therapy, incorrect therapy, etc)."

Bodenheimer points out other troubling statistics that speak to the problems with fragmented, discontinuous medical care -- and that extend well beyond the physician-specialist back-and-forth. Indeed, poorly integrated care is evident across the spectrum of medical services. In the nation's emergency rooms, for example, 30 percent of adult patients who underwent emergency procedures reported that their regular physician was not informed about the care they received. Another study "showed that 75 percent of physicians do not routinely contact patients about normal diagnostic test results, and up to 33 percent do not consistently notify patients about abnormal results." And an academic literature review concluded that a measly "3 percent of primary care physicians [are] involved in discussions with hospital physicians about patients' discharge plans."

If you're sensing a pattern here, you should be: Most of the gaps in care are failures of communication involving primary care physicians. That's because, at least in theory, primary care docs are the touchstone for patient care -- the glue that holds it all together.

But primary care has become an increasingly precarious occupation. The problem is that, relative to specialists, PCPs do a lot more for relatively little pay. And they are expected to do more each day. Bodenheimer notes that "it has been estimated that it would take a physician 7.4 hours per working day to provide all recommended preventive services to a typical patient panel, plus 10.6 hours per day to provide high quality long-term care." That adds up to 18 hours a day before a single acute visit. So it should come as no surprise that "forty-two percent of primary care physicians reported not having sufficient time with their patients."

With such a heavy time-crunch, it's not surprising that some things can fall through the cracks -- like follow-ups, double-checking, and generally going the extra mile (which really shouldn't be extra at all).

Making things worse is our fee-for-service system, which, as Dr. Kevin Pho (aka blogger KevinMD) notes, pressures "primary care physicians to squeeze in more patients per hour" and thus encourages a short attention span vis-à-vis individual patients. The volume imperative is strongest for PCPs, who make significantly less money than do their specialist peers. As Maggie has pointed out in the past, primary care doctors can expect to pull in -- at the high end -- just under one-third as much as surgeons or radiologists.

Predictably, the all-work, little-reward life of PCPs is increasingly unsexy to newly minted doctors. Kevin notes that "since 1997, newly graduated U.S. medical students who choose primary care as a career have declined by 50 percent."

It's clear that we have a systemic problem that makes hand-off mixups more likely: PCPs are crunched for time, desperate to max out patient volume, and their ranks are dwindling. Is it any wonder that they can't provide the "medical home" that reformers talk about?

This is a recipe for disaster that needs to be addressed. There are options: We can reform the fee-for-service system, perhaps by introducing payments for effective care coordination. We can create financial incentives (such as loan forgiveness) for med students to choose primary care. We also should have primary care physicians work in teams more often, from the very beginning of a patient relationship, thus allowing them to share the load and watch each other's backs.

But for all that these ambitious changes hold promise, the hand-off will always exist -- which means reformers need to dig deeper and develop protocols at the operational level. Luckily, they're doing just that. Kaiser Permanente, for example, has created a procedure meant to formalize communication between healthcare teams when a patient is transitioning from one provider to another. It's called SBAR -- which stands for Situation, Background, Assessment, and Recommendation. QSHC delves deeper into what this actually means.
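As a rough illustration of the idea (this is a sketch, not Kaiser's actual protocol, and the example hand-off is hypothetical), an SBAR note can be thought of as a structured record with four required fields:

```python
from dataclasses import dataclass


@dataclass
class SBARReport:
    """One SBAR hand-off note, passed from one provider to the next."""
    situation: str       # what is happening with the patient right now
    background: str      # relevant history and context
    assessment: str      # the handing-off provider's read of the problem
    recommendation: str  # what the receiving provider should do next

    def __str__(self) -> str:
        return (f"S: {self.situation}\n"
                f"B: {self.background}\n"
                f"A: {self.assessment}\n"
                f"R: {self.recommendation}")


# Hypothetical example of a discharge hand-off:
note = SBARReport(
    situation="68-year-old admitted for chest pain, now stable",
    background="Stent placed two days ago; on antiplatelet therapy",
    assessment="No signs of complications; ready for discharge",
    recommendation="PCP follow-up within 7 days; continue current medications",
)
print(note)
```

The point of the format is forced completeness: a hand-off cannot be sent with an empty field, so the receiving provider always gets the situation, the history, a judgment, and a next step, rather than whatever the sender happened to remember to mention.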


Forcing Medical Patients To Be Consumers Wreaks Havoc on Our Health System

One of the most common justifications for consumer-driven medicine is reduced health care costs. The reasoning here is two-fold:
