shawnkyna

Saturday, May 06, 2006

Medical studies call for some healthy scepticism

............... Experts' comments by themselves, even if the person is tops in the field, are considered the least reliable. "The big guns are quoted a lot, and what they say is taken at face value," says Kay Dickersin of Brown University. "But that doesn't mean they're right."...............

http://inq.philly.com/content/inquirer/2001/05/14/magazine/STUDY14.htm - Monday, May 14, 2001

Medical studies call for some healthy scepticism

In an era of instant news and pithy sound bites, some results may be little more than hype.

By Linda Marsa
LOS ANGELES TIMES

One week, medical researchers report that beta carotene prevents cancer. Then they say it may cause cancer. Or we hear fiber is good, so we dutifully load up on oatmeal and green vegetables. Then we're told that maybe it's not so good. The list goes on and on - it's enough to give you mental whiplash.

So how can consumers, especially those with serious or chronic ailments, sort through this contradictory data to make informed decisions about their health care?

"It's very difficult," says Michele Rakoff, a breast-cancer survivor who directs a peer-support mentoring program at Long Beach Memorial Hospital in Long Beach, Calif. "Going through having breast cancer is frightening. Then, there's a news flash on TV about a new cancer cure, and desperate women start calling their doctors. When a closer look reveals it's no big deal, everyone feels let down."

It can be disheartening even for people who aren't in the midst of a medical crisis but just want to stay healthy. Part of the problem is that most Americans get their health information secondhand, through television, newspapers and magazines, according to a 2000 survey conducted by the Kaiser Family Foundation. In this era of the instant news cycle and the pithy sound bite, preliminary studies often are inflated into major breakthroughs, and the subtle nuances and caveats that put findings in context are glossed over or forgotten.

No wonder there's so much confusion. Unfortunately, the conflicting results of studies breed "distrust of the traditional channels of communication, making some people vulnerable to quack cures, or X-Files-style theories about diseases," says David W. Murray, director of the Statistical Assessment Service, a nonprofit medical-science think tank in Washington.

With a little effort, though, consumers can separate the hope from the hype. The key is to understand how medical research works, and the different levels of evidence, so that one can determine the significance of studies.

"The public is looking for magic bullets and believes that when a study is done it means we have the answer once and for all," says Linda Rosenstock, dean of the School of Public Health at the University of California-Los Angeles. "But the reality is that science proceeds in a series of steps that don't always go in the same direction. And results can be overstated or oversimplified."

The gold standard in medical research is the randomized, double-blind, placebo-controlled clinical trial. What that means in plain English is that half the people in the study are selected at random to get the new drug or treatment, and the other half get a dummy pill, or placebo. The study is "blinded" - no one - not even the researchers - knows who is getting the real McCoy and who isn't.

Such studies avoid biases that might skew the outcomes. Otherwise, researchers may subconsciously treat subjects differently, or participants might have such a strong belief that a treatment works that they'll feel better even if it's not effective.

Ideally, when the study is finished, it is clear whether the people getting the therapy benefited compared with those in the control group. This is the way virtually all new drugs are tested by pharmaceutical companies to get clearance from the Food and Drug Administration to market them in the United States.
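For readers who like to see the idea in concrete terms, here is a minimal sketch, in Python, of the random-assignment step - the coin flip that puts each volunteer into either the treatment group or the placebo group. The volunteer count, group names and seed are invented for illustration; nothing here reproduces an actual trial.

import random

def assign_groups(participant_ids, seed=None):
    # Shuffle the volunteers and split them down the middle, so each person
    # has an equal chance of ending up in either arm of the trial.
    rng = random.Random(seed)
    shuffled = list(participant_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "placebo": shuffled[half:]}

volunteers = list(range(1, 201))   # 200 hypothetical volunteers, numbered 1-200
groups = assign_groups(volunteers, seed=42)
print(len(groups["treatment"]), "volunteers assigned to the treatment group")
print(len(groups["placebo"]), "volunteers assigned to the placebo group")

Blinding is the other half of the design: in a real trial, neither the volunteers nor the researchers handing out the pills would know which list anyone landed on.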

Still, even seemingly well-executed research can have inherent flaws - and a host of questions needs to be answered before accepting its findings as gospel. How many people were in the study? How long did it last? If there were only 50 people over a six-month period, that's not enough time or study subjects to establish the strength of any effect. Who conducted the research? Research from scientists affiliated with reputable academic institutions is usually more reliable. Where was the study published? Articles in major journals tend to be more rigorously scrutinized.

However, there also is such a thing as publication bias. "The major medical journals, drug companies and scientists themselves tend not to publish negative results," says Kay Dickersin, an associate professor at Brown University School of Medicine in Providence, R.I. So we often only hear about the exciting "breakthroughs," and not the follow-up studies where the treatments turned out to be duds.

Funding sources also can subtly create built-in biases. Scientists bristle at the suggestion that taking money from drug companies can influence their results. Yet research has consistently shown that studies of new treatments or drugs were much more favorable when funded by drug-makers than by other sponsors, such as the federal government.

Last month, for instance, a study in the Journal of the American Medical Association revealed that the herbal supplement St. John's wort was no more effective in treating major depression than some other therapies.

But the fine print at the end of the journal article, where researchers disclose their monetary ties, should have given readers pause. Pfizer, which makes the antidepressant Zoloft, not only underwrote some of this research, but also had financial connections with many of the study's investigators.

"There are lots of vested interests," says UCLA's Rosenstock. "And they're not just economic - they can be emotional, too, where a scientist feels a stake in the outcome. People can use science for or against any point of view they want to promote."

A case in point was a widely publicized 1996 study that suggested that having an abortion increased a woman's risk of developing breast cancer by 30 percent. Critics quickly challenged the validity of the study, noting that the increased risks actually were quite minuscule. They also pointed out that Joel Brind, a biochemist at Baruch College in New York and the lead author of the study, had spoken out against abortion.
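A quick way to see how a "30 percent increased risk" can still be minuscule is to work the arithmetic on an assumed baseline. The sketch below is purely illustrative - the 10 percent baseline risk is a made-up round number, not a figure from the study.

# Illustration only: the baseline figure below is an assumed round number,
# not a statistic from the study discussed above.
baseline_risk = 0.10        # assumed chance of the disease without the exposure
relative_increase = 0.30    # the reported "30 percent increased risk"

elevated_risk = baseline_risk * (1 + relative_increase)
extra_cases_per_100 = (elevated_risk - baseline_risk) * 100

print(f"Risk goes from {baseline_risk:.0%} to {elevated_risk:.0%}")
print(f"That is roughly {extra_cases_per_100:.0f} extra cases per 100 people")

The smaller the baseline risk, the smaller the same percentage bump turns out to be in absolute terms, which is why relative figures alone can sound more alarming than they are.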

Sometimes, it's not possible to do the neatly designed double-blind studies. "We can't do randomized trials for many of the questions that are of the greatest interest to the public," says Warren S. Browner, scientific director of the research institute at California Pacific Medical Center in San Francisco. "You can't assign someone randomly to be obese, or to smoke, or even to take up exercise."

Consequently, scientists tend to rely on epidemiological research, which consists of vast studies that look at large groups or populations - often, 25,000 or more subjects - over long periods, sometimes 30 or 40 years, to see whether they can find a connection between such things as diet, exercise, or personal habits and health.

Epidemiological research, however, is not definitive. It's medical detective work: It generates circumstantial evidence, but not enough to convince a scientific jury. More research usually is needed to establish strong links.

Consequently, if one of these mammoth studies yields an intriguing, statistically significant association, researchers will study it further. The beta carotene controversy is a good example.

Doctors conducting one epidemiological study noticed that people who ate foods rich in beta carotene had lower rates of cancer and heart disease. So they decided to take a closer look. But when the supplement was put under the microscope of more tightly controlled clinical trials, it showed no particular benefit. And in a study of a large group of smokers, it actually seemed to increase cancer risks.

Even when there's a significant link that seems patently obvious, it may take years to unmask the real villain.

Several years ago, for instance, researchers noticed that women who smoked had much higher rates of cervical cancer. "The connection between smoking and cervical cancer seemed like a slam-dunk," says California Pacific's Browner. Later, however, other scientists discovered that the most common factor in cervical cancer was the human papilloma virus, a sexually transmitted microbe. The original researchers had overlooked the fact that the female smokers they studied also were more likely to be sexually active - the real link.

The take-home message is that health-care decisions shouldn't be based on one study, no matter how encouraging it seems. It's only when cumulative evidence points to one inexorable conclusion that medical science feels comfortable making the connection.

Take the link between smoking and lung cancer. When studies first came out in the 1960s that suggested people who smoked were 200 to 800 times as likely to be stricken with lung cancer as nonsmokers, they were greeted with skepticism. But subsequent research showed that laboratory animals got cancer when they were habitually exposed to cigarette smoke, that cancer risks plummeted when people quit smoking, and on and on, until the case became airtight.

The wisest strategy, however, is to take all this information with a grain of salt. "People are hungry for answers, and recognizing the complexity doesn't make it easier for an individual," says UCLA's Rosenstock. "But the wheels of science grind slowly, and a healthy dose of skepticism, along with patience, is a good thing."

The Chain of Evidence

Here are some tips for when - and when not - to get excited about medical studies reported by the news media.

Double-blind trials

The best evidence in medical research is the blinded clinical trial, in which researchers divide up the study group, with half getting the treatment and half receiving a dummy pill. Be sure there's a large enough sample, at least a couple of hundred subjects, and that the study continued for a year or more before you start pursuing your doctor for this new treatment.

Epidemiological studies

 While they may involve large numbers of people, epidemiological studies merely yield clues as to what might be causing an illness, not definitive answers. In media reports, look for wording such as "observed," "noticed" or "followed" a certain number of participants over several years. That's a tip-off that the findings were based on this type of research.

Animal studies

Experiments on laboratory animals are used only as preliminary tests to see whether a treatment is toxic or has an effect on living tissue before research is pursued in humans. New therapies often behave much differently in humans than in lab rats, or in tissue in a petri dish. And it can be decades before "breakthroughs" in the laboratory become available to patients. Of course, if it's a concept that has not even been tested in animals yet, take it with a grain of salt.

Peer review

 Along with the type of study, number of subjects and human vs. animal vs. concept, consider whether the research has been examined by others in the field. Studies in respected journals such as Science, Nature, the New England Journal of Medicine and the Journal of the American Medical Association are reviewed by the researchers' peers before publication. Most journals also report who funded the study. Studies presented at conferences may have been selected by the organizers but not reviewed by peers. Medical research first announced by a university or at a news conference may not have been seen by anyone else at all.

Expert opinion

Experts' comments by themselves, even if the person is tops in the field, are considered the least reliable. "The big guns are quoted a lot, and what they say is taken at face value," says Kay Dickersin of Brown University. "But that doesn't mean they're right."