Several studies indicate that power lines cause cancer, while others indicate they do not. One study links video display terminals to miscarriages, but several others do not. A man claims his wife got her brain cancer from a cellular phone. But some medical authorities say that’s crazy. As Meryl Streep asked during the Alar scare: "What’s a mother to do?"
Well, there is a method which seeks to sort out this madness. It’s called epidemiology.
Epidemiology is a science of association, relying on statistics plus knowledge of how illnesses or accidents come about (which is known as etiology). The purpose is to detect what is causing the problem and how great the problem is, in order ultimately to reduce or eliminate its incidence. Epidemiology is based on observation, and is thus in contrast with laboratory studies, which develop hard cause-and-effect relationships from experimental evidence.
An epidemiological exposure study usually has three parts. First, it isolates a group that has been exposed to a particular substance or other possible cause of illness. Then, it determines if the group has been more prone to a particular illness or injury than the rest of the population. Finally, if there is an excess incidence of illness or injury, it tries to decide, by excluding all other possible factors, whether the excess is a result of exposure to the substance in question.
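The three parts can be sketched as simple arithmetic. Here is a minimal illustration in Python; all the group sizes and case counts are invented for the sake of the example and come from no real study:

```python
# Hypothetical cohort study.  All counts below are invented for
# illustration; they come from no real study.
exposed_n, exposed_cases = 1_000, 30      # step 1: isolate the exposed group
control_n, control_cases = 10_000, 150    # a comparison group from the rest

# Step 2: is the exposed group more prone to the illness?
exposed_rate = exposed_cases / exposed_n
control_rate = control_cases / control_n
relative_risk = exposed_rate / control_rate

print(f"exposed rate:  {exposed_rate:.3f}")
print(f"control rate:  {control_rate:.3f}")
print(f"relative risk: {relative_risk:.1f}")
```

Step three, deciding whether the excess is really due to the exposure rather than to some confounding factor, is the hard part; no arithmetic this simple can settle it.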
Lab experiments don’t always produce expected results.
Most such studies are inconclusive, and it may take many years to establish a cause-and-effect relationship even when the relationship turns out to be a particularly strong one, such as the link between lung cancer and cigarette smoking. Thus, epidemiology can be a crude tool, although when it does work its results are far more reliable than those of studies in test tubes and on lab animals, because factors that cause certain effects in the laboratory or in a rodent or dog will not necessarily produce the same effects in a human.
This article takes a look at the basic rules of epidemiology. Some of those rules look really basic, even simple, but you’d be surprised at the trouble some people have with them.
The first tenet is that everybody dies. That seems simple enough, but many people will, in the heat of the moment, forget it. The chances are that if your local newspaper ran a story saying, "Since the hazardous waste incinerator began operating last May, 186 people have died," there would be panic — if not in the streets then somewhere. A lot of people wouldn’t stop to think about how many people would have died in that same period regardless of the operations of the incinerator.
A point related to this is that while we average a little over 70 years of life apiece, that is indeed simply an average. Some of us survive to 105; others die in infancy. Cancer and heart disease tend to be diseases of the elderly, but often a young person will die of cancer and occasionally one dies of heart disease as well. That is not fair, but then neither is life fair. So not only do we have to keep in mind that everyone dies, we must also remember that a lot of us die prematurely.
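The incinerator headline above has a back-of-envelope check any skeptical reader could run. The town size and crude death rate below are assumed figures, chosen only to make the arithmetic concrete:

```python
# How many deaths would we expect anyway, incinerator or no incinerator?
# Assumptions (illustrative only): a town of 30,000 people and a crude
# death rate of about 0.9% per year.
population = 30_000
crude_death_rate = 0.009      # deaths per person per year
months_elapsed = 8            # since the incinerator began operating

expected_deaths = population * crude_death_rate * months_elapsed / 12
print(f"expected deaths in {months_elapsed} months: {expected_deaths:.0f}")
```

A headline figure of 186 deaths looks a lot less alarming next to an ordinary baseline of about 180.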
About one fourth of us will contract cancer, and about one fifth of us will die of it. Indeed, as the population ages and fewer and fewer people die of other causes, more and more will die of cancer. Why? Because you have to die of something. (See Tenet 1.) Cancer, for the most part, is a disease of old age, and a nation that has reduced the incidence of diseases that kill younger people will find its cancer rates increasing.
Damage to the DNA of cells and improper cell division are thought to be at the root of the formation of cancer, but beyond that, our understanding of the disease begins to get quite foggy. DNA damage to cells and improper cell division don’t always lead to cancer and, further, we often don’t know what causes such damage or improper division.
Most lay people probably don’t understand that in almost all cases it is impossible to say for sure how someone got cancer. If a person who contracts lung cancer smoked three packs of cigarettes a day for sixty years, it’s a good bet that the cancer resulted from cigarette smoking, but that is by no means certain. Consider the number of heavy smokers who die in old age of diseases having nothing to do with smoking; and consider that about 15% of lung cancer victims have never smoked.
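The "good bet, but by no means certain" point can be quantified. Epidemiologists sometimes express it as a probability of causation, (RR - 1) / RR. The twenty-fold relative risk below is an assumed, ballpark figure for heavy smoking, not a number taken from this article:

```python
# If heavy smoking multiplies lung cancer risk about twenty-fold
# (an assumed figure), then for a given heavy smoker with lung cancer
# the probability that smoking caused it is (RR - 1) / RR.
relative_risk = 20.0
prob_of_causation = (relative_risk - 1) / relative_risk
print(f"probability of causation: {prob_of_causation:.0%}")  # high, but never 100%
```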
Tumors do not come stamped with an identification of their cause. A lump in a breast caused by consumption of alcohol looks exactly like a lump caused by excess exposure to radiation. A brain tumor caused by exposure to plutonium would look exactly like a brain tumor that occurs for no reason that can be identified.
One exception to this is mesothelioma, a lung disease which appears to be almost exclusively associated with asbestos exposure, but even here there appear to be some cases where asbestos is not the cause. If you read that a doctor proclaimed that X cancer was caused by Y, you may assume that either the doctor is wrongly treating his opinion as a fact, or that the media have wrongly interpreted what the doctor said.
Thus, when a person says, "I got cancer from working at such and such plant, or because I lived too close to such and such factory, or because I ate such and such," he cannot possibly know for sure, nor can his doctor, nor can the wisest diagnostic physician on the face of the earth. Which leads to Tenet 5.
Having a disease does not give a person expertise in how that disease is contracted. It is a curious phenomenon that one afflicted with a disease is often treated as an expert solely because of that affliction. A man who claims he got cancer from working at a certain job is automatically given great credence.
In 1991, the late football great Lyle Alzado made the news by declaring that his inoperable and ultimately fatal brain cancer was caused by anabolic steroids. "I used a certain steroid that caused me to ruin my immune system," he said. He added, "I just hope that this interview ... will convince the other people — junior high school, high school and college students — that they can do without this stuff."
In fact, it is well documented that taking steroids can do all sorts of nasty things to the body, though brain cancer is not among them. Further, people were dying of brain cancer long before anyone had access to steroids that could be ingested or injected. Thus, neither Alzado nor anyone else could possibly say with authority that his cancer was linked to steroid use.
But the Associated Press, Sports Illustrated, Cable News Network, and at least one nationally syndicated columnist carried this story without any suggestion of this discrepancy.
Alzado said he got his cancer from steroid use, so who are we to argue with him? Besides, his story could scare kids out of using steroids. True, but it is still bad science. In fact, a brain cancer is a brain cancer, whether it was caused by cigarette smoking, air pollution, or by some sort of spontaneous cell mutation of which we understand nothing. The sufferer of that cancer has expertise in what it’s like to suffer from that type of cancer, but nothing more.
The victim’s assertions may be valid in that they give insight into his feelings. That may be of interest to the average newspaper, and to the average reader, but not to the epidemiologist. Yet time and again we read stories in the press or see news shows on television in which a cancer victim is stating that he or she knows that the cancer came from exposure to a nuclear power plant.
Likewise, we will occasionally read the story about the woman who "knows" that her child got cancer from exposure to a toxic waste dump or a pesticide. This is not science; it is superstition.
Treating a disease does not mean the treating physician has any expertise in the cause of the problem when that problem is cancer. If it makes no sense to treat a victim of a disease as an expert in epidemiology, it also does not make much sense automatically to attribute such expertise to the treating physician. The media seem generally to assume that anyone with an "M.D." after his or her name who is willing to speak on a given medical subject is an expert in that subject.
In fact, most doctors who work outside of epidemiologically related areas (which includes the physicians you see when you are ill or hurt) took a couple of epidemiology courses way back when and now know about as much about epidemiology as you know about chemistry because you were required to take it in high school back in 1969. Further, the practice of a treating physician would not put him or her in a position to study epidemiological patterns. That is, looking at individual cases is of little use in getting the big picture.
Thus, a treating physician may make a statement like "I’ve never seen another case of this disease in a man of this age." The doctor may think this has great epidemiological importance, as will a reporter, as will then the reader. In fact, it probably has none. If he suddenly sees five such cases, that might mean something. But that he has practiced for many years and this is his first case means nothing.
Estimates of the rate of miscarriage after a recognized pregnancy vary from about 12.5% to 33.9%. Some of the earlier studies showing the higher end probably suffered from various errors, so a risk of about 12% to 15% seems most likely. As for the total rate of pregnancy loss after fertilization, including losses that a woman couldn’t ordinarily have recognized as even having been a pregnancy, one recent study put the figure at 31%. These are higher rates than some of us probably would have thought. The point is that when the number of miscarriages in an office or a neighborhood seems high to you, that doesn’t mean it really is high relative to the expected number.
Obviously these are generalized numbers. Some categories of women have a much higher risk (those over 35 for example, or those with untreated severe diabetes), while others have a lower one. A good epidemiological study doesn’t compare miscarriages in a given group with the national rate of miscarriages; rather it tries to match up similar women (a control group) who do not have the risk factor being investigated but have much in common with those who do.
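A concrete sense of how often an "alarming" count arises by chance: with the mid-range background risk of about 15%, plain binomial arithmetic gives the probability of seeing three or more miscarriages among ten pregnancies in one office (the office size here is hypothetical):

```python
from math import comb

# Background miscarriage risk of about 15% (the mid-range figure above).
# Probability of 3 or more miscarriages among 10 recognized pregnancies,
# with nothing unusual going on at all:
p, n = 0.15, 10
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3, n + 1))
print(f"P(3+ miscarriages out of 10) = {prob:.2f}")
```

Roughly one office in six with ten pregnancies would show such a "cluster" through chance alone.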
According to one study, between 2% and 3% of all babies born in this country exited the womb with at least one major malformation. Since about four million babies are born annually, that is between 80,000 and 120,000 babies born each year with birth defects.
The reason for these unknowns parallels the explanation of why we don’t know the origin of most cancers. There just isn’t enough knowledge about what causes birth defects in the first place.
Dr. Lewis Holmes, author of the aforementioned birth-defect study, says: "This bedevils us as much as it does the victims. Most people have blamed themselves, neighbors have given them ideas. They don’t understand that even genetic disorders often come as a total surprise."
Further, Holmes notes, even technology that has been around for 30 years and can sometimes determine the causes of defects and miscarriages often goes unused. So when a woman’s family physician or gynecologist tells her, "I just don’t know why you had this problem," it may simply mean that he doesn’t know.
Conducting a sound epidemiological study is extremely difficult even for top professionals. Compare epidemiological studies to quizzes given in school. In any class, there will be some students who do well on all quizzes and some who do poorly on all of them, but most will have a range of scores that, taken as a whole, will probably give a professor a good idea of each student’s ability in that class. Even this assumes that the professor’s quizzes are fair evaluations. In any case, a single quiz or a single epidemiological study does not carry much weight without the support of others.
That’s why it’s not unusual to hear on Monday that coffee has been linked to cancer and on Thursday that there is no link, to hear on Tuesday that birth control pills cause heart disease and to hear on Friday that they do not. There is not necessarily dishonesty or a cover-up involved; there are just so many problems that must be factored out. It can take years, even decades, to do so. That is why it is simply wrong for a scientist or, as is far more often the case, a journalist or other public crusader to build a whole case around a single study, or even around two or three.
What a good journalist can do is poke holes in a bad epidemiological study. That is the essential equivalent of a nonarchitect walking into a house and seeing that it was very poorly designed, or a landlubber noticing that the ship she’s on is listing badly. But the building of those houses and the piloting of those ships are jobs best left to the professionals.
Epidemiology cannot detect all causes of illness. If an illness is fairly common, a slight increase caused by a specific agent may be impossible to detect. Thus, for example, Frederick J. Stare, Robert E. Olson, and Elizabeth M. Whelan, all of the American Council on Science and Health, write in their book, Balanced Nutrition: "Alar has been used since 1967 without a single case of cancer or any other disease attributed to its consumption at approved trace levels in apples."
But of course no such case could ever have been attributed. Almost all of us have at one time or another consumed apples, and one fourth of us will get cancer. Searching among those tens of millions of cancers for any caused by Alar is like trying to determine whether someone threw a brick into the backyard swimming pool by measuring the water line. Alar could cause 5,000 cancers a year, but against the backdrop of a million cancer diagnoses a year among apple eaters, you would never know it.
On the other hand, a brick thrown into a kitchen sink would cause a perceptible increase in the water line. The equivalent to this would be to measure those with extraordinary exposures, for example, workers who were exposed to high levels of Alar. Unfortunately no such study has ever been made.
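The pool-versus-sink contrast is just a matter of relative sizes. Using the million-diagnosis and 5,000-case figures above for the general population, and invented worker counts for the heavily exposed group:

```python
# The "swimming pool": roughly 1,000,000 cancer diagnoses a year among
# apple eaters, with a hypothetical 5,000 of them caused by Alar.
pool_baseline, pool_excess = 1_000_000, 5_000
pool_bump = pool_excess / pool_baseline       # a 0.5% ripple, lost in the noise

# The "kitchen sink": a small, heavily exposed group.  These worker
# numbers are invented purely for illustration.
sink_baseline, sink_excess = 250, 50
sink_bump = sink_excess / sink_baseline       # a 20% jump, hard to miss

print(f"general-population increase: {pool_bump:.1%}")
print(f"exposed-worker increase:     {sink_bump:.1%}")
```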
Epidemiologists can make honest errors that overstate or understate a problem. They may also, very rarely to be sure, intentionally skew their data to fit predetermined conclusions. On the whole, though, epidemiologists are more trustworthy by a factor of at least ten than the journalists who relay their work to the public, or than the regulators and politicians who pass rules on the basis of their studies. Epidemiologists on the whole are also quite careful and conservative in their language. If they observe, for example, twice as many influenza cases one week as the week before, they might say that this development was "significant." And if half the town got wiped out by bubonic plague, that too would be deemed "significant." Not "very significant," or "extremely significant," or "utterly horrifying"; just "significant."
Just because people with some ailment in common have another thing in common doesn’t mean that the other thing caused the ailment. When I was very small, I observed that most folks drinking diet sodas were overweight. I therefore hypothesized that gulping Tab, which was just about the only diet soda there was then, made one fat. Likewise, to sit in the sauna of any American health club is to come to the conclusion that a disproportionate number of heavy people occupy these rooms. It is probably not wise to conclude from this that saunas make you fat, any more than Tab does.
When you read that "John, who worked for twenty years at a radioactive widget factory, was suddenly stricken with an extremely rare form of cancer, that of the little toe," your first thought is probably that the cancer has something to do with the radioactive widgets. But assuming that 100 men and women each year get this very distressing form of cancer of "this little piggee," it must be asked: what of the exposures of the other 99?
In one highly publicized incident, a couple’s twelve-year-old was diagnosed with osteosarcoma (bone cancer), and since they lived near a plutonium-processing plant and osteosarcomas strike only 520 American children a year, the parents assumed the plant caused the cancer. The local media seemed to share that assumption. What they didn’t take into account was that 519 other American children got that cancer in that year alone, none of whom lived anywhere near the plant.
Roughly speaking, a cluster is simply an elevated incidence of a disease or other problem in a given population. Cancer, birth defects, and miscarriages are the three subjects that most often come up in media accounts of clusters.
Consider a group of sixty people of whom we would expect 20%, or 12 in number, to have cataracts. By great coincidence, they turn out to have exactly the correct percentage. We line them up in three rows in alphabetical order. One such arrangement is represented below, with Ns being normal and Cs representing cataracts.

N N C N N N N N N C N N N N N N N N N N
N N N N N N C N N N N N N N C N N N N N
C N C N N C N C N N C N N C N N C N N C
Since they have exactly the percentage of cataracts we would expect in this group, when we look at all three rows together, we find nothing unusual. No clusters. But if we look at them row by row, suddenly we find that cataracts are heavily overrepresented in the third row. Cluster! Of course, we know this means nothing, since the place one’s name occupies in the alphabet is not considered a risk factor for developing cataracts.
There is a virtually infinite number of ways of breaking down groups — sex, race, address, occupation, age categories, and so on, plus combinations of these categories. If you are looking for a cluster, you will always find one, simply by arranging arbitrary categories. You may find that black females living on the north end of town working as clerk-typists who are between the ages of 15 and 30 have no elevation of cancers. But if your sample group of women with cancer includes two such women who happen to be 31, then by tossing them in you may suddenly have doubled the expected rate of cancer. Call in the news crew! Obviously a good epidemiologist tries to avoid such arbitrary breakdowns of groups, but most reporters and citizens in general don’t know the first thing about such methodology.
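The row-by-row cataract example can be checked mechanically. The layout below is one concrete arrangement, invented for illustration, of sixty people carrying exactly the expected twelve cataracts:

```python
# Sixty people, twelve with cataracts (C) -- exactly the expected 20%.
# Slicing the same group by row manufactures a "cluster" in row 3.
rows = [
    "NNCNNNNNNCNNNNNNNNNN",
    "NNNNNNCNNNNNNNCNNNNN",
    "CNCNNCNCNNCNNCNNCNNC",
]
overall_rate = sum(r.count("C") for r in rows) / 60
row_rates = [r.count("C") / len(r) for r in rows]

print(f"overall rate: {overall_rate:.0%}")     # exactly as expected
for i, rate in enumerate(row_rates, start=1):
    print(f"row {i} rate: {rate:.0%}")         # row 3 alone looks alarming
```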
So what good, then, are clusters? In the hands of amateur sleuths and crusaders, such as those whose reports fill our magazines, newspapers, and airwaves, they are no good at all in epidemiological terms, and very harmful in sociological terms — in other words, they serve no purpose other than scaring the hell out of people.
What good are clusters in the hands of trained professionals, then? Not much there either, actually. At the National Conference on Clustering of Health Events, sponsored by the CDC in Atlanta in 1989, keynote speaker Kenneth J. Rothman, editor of the journal Epidemiology, argued that "with few exceptions, there is little scientific or public health reason to investigate individual clusters at all." Such efforts, he said, are increasingly becoming "exercises in public relations," fueled by health-conscious consumers and public misperceptions that research is the answer to every problem.
Rothman said: "If the epidemic of cluster research continues, it will eventually intrude on more productive epidemiological investigation of environmental exposures."
Alan Bender, chief of the Minnesota Department of Health’s Chronic Disease and Environmental Epidemiology section, said at the conference that of 500 reports of suspected clusters in Minnesota, only 5 prompted enough concern for formal studies. Other states reported similar rates. For example, from 1961 to 1983 the CDC investigated 108 cancer clusters from 29 states and five foreign countries. It found no clear cause for any of the clusters.
While clusters are of little use to epidemiologists, they are a wonderful tool for crusaders seeking to indict something as a cause of disease. Tell a layman that a given office building or a given city block has had twice the cancer victims or heart attack victims as the expected rate, and he instantly assumes that something is wrong in that building or on that block.
The concept of epidemiology should be clearer to the reader now, and yet it is to be hoped that something else is clear to the reader as well. While the concepts of epidemiology are basic, the application is fraught with pitfalls. The journalist or other layperson who fancies that he can just look at a cluster of cancers or other disease and say, "Aha! There’s clearly a problem there!" or who goes even further to say, "And I know what’s causing the problem! " is guilty of taking the surgeon’s tool unto himself and cutting away. The journalist who, upon finding that the epidemiologists disagree with him, insists that they must be engaging in a cover-up, is not only grossly ignorant, but arrogant as well.
As with any profession, epidemiologists have developed their own lingo, some of which is good for lay persons to know, some of which is better ignored, being essentially the equivalent of "legalese." Epidemiologists express the mathematical possibility of increased risk by using risk ratios.
A risk ratio of 3.0 for lung cancer means that three times as many lung cancer cases showed up in the exposed group as in the control group; a risk ratio of 4.2 for leukemia means that 4.2 times as many cases appeared. (An odds ratio, often reported in its place, is a closely related measure calculated from odds rather than rates.) A control group is a set of persons carefully matched to the set of persons being observed for the problem. Thus, an epidemiological study of women using video display terminals should have as its control group women who didn’t use VDTs but who did as much sitting and as much smoking as the women who did.
Nevertheless, it is not cut-and-dried that risk ratios above 1.0 mean that something special is causing the cancer or other ailment being looked at. That is because of the laws of chance and probability. Thus, if you flipped a coin four times, you might expect two heads and two tails. In fact, it often doesn’t work out that way. Often you’ll get three of one and one of the other. That would give you a "risk ratio" of 1.5, because you are getting 1.5 times the number of heads or tails that you expected. It doesn’t mean anything is affecting the coin; it’s just chance. Diseases often cluster just by chance... (See above, Tenet 15.)
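The coin-flip point is exact arithmetic, not opinion. Counting the outcomes directly:

```python
from math import comb

# Four fair coin flips.  A 3-1 split (either way) mimics a "risk ratio"
# of 1.5 with nothing but chance at work.  How often does chance alone
# produce a split at least that lopsided?
n = 4
lopsided_outcomes = sum(comb(n, k) for k in range(n + 1) if k <= 1 or k >= 3)
prob_lopsided = lopsided_outcomes / 2**n
print(f"P(a 3-1 split or worse in 4 flips) = {prob_lopsided}")
```

In a sample this small, chance alone produces a lopsided split well over half the time.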
To qualify their risk ratios, epidemiologists use "confidence intervals." Thus, you might see a risk ratio expressed as "2.9 (0.9-3.5)." This means that the best single estimate of the risk ratio is 2.9, but anything between 0.9 and 3.5 is within the range of the study’s results. Even this parameter is not completely solid; confidence intervals themselves may be off. Thus, epidemiologists will say, "This is a 95% confidence interval," meaning that there is a 5% chance that even this broad range misses the true value. A 90% confidence interval means that there is a 10% chance it is wrong, and so on. And if the study was conducted incorrectly, the results are thrown off completely.
At any rate, when not only the risk ratio is above 1.0 (the term "elevated" will often be used to describe this) but so is the bottom of the confidence interval, epidemiologists say the result is "statistically significant." It is very important to grasp this simple concept. A risk ratio of 4.0 may look very serious: it says that four times as many cases of such-and-such are showing up as in the group that wasn’t exposed to the suspect agent. But if few enough people are involved in the study, the confidence interval may be something like 0.8-9.0, indicating that the elevated risk ratio of 4.0 may mean nothing more than that’s how the coin landed. The more people involved in a study, the tighter the confidence interval and the better the chance that an elevated risk ratio actually means something.
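The interplay among study size, risk ratio, and confidence interval can be sketched with the standard log-based approximation for a risk ratio’s 95% confidence interval. The case counts below are invented; both hypothetical studies show the same fourfold risk, but only the larger one is statistically significant:

```python
from math import exp, log, sqrt

def risk_ratio_ci(cases_exp, n_exp, cases_ctl, n_ctl, z=1.96):
    """Risk ratio with an approximate 95% confidence interval (log method)."""
    rr = (cases_exp / n_exp) / (cases_ctl / n_ctl)
    se = sqrt(1/cases_exp - 1/n_exp + 1/cases_ctl - 1/n_ctl)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

# The same fourfold elevation, two very different study sizes (invented data):
small = risk_ratio_ci(4, 50, 1, 50)          # 50 people per group
large = risk_ratio_ci(400, 5000, 100, 5000)  # 5,000 people per group

for label, (rr, low, high) in (("small study", small), ("large study", large)):
    verdict = "statistically significant" if low > 1.0 else "could be pure chance"
    print(f"{label}: RR {rr:.1f} ({low:.1f}-{high:.1f}): {verdict}")
```

The small study’s interval dips below 1.0, so its impressive-looking 4.0 could be the coin landing oddly; the large study’s interval stays well above 1.0.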