Is EPA Blowing Its Own Smoke? How Much Science Is Behind Its Tobacco Finding?

January 28, 1993  ·  Michael Fumento  ·  Investor's Business Daily  ·  Passive smoking

"Taken together, the total weight of evidence is conclusive that environmental tobacco smoke increases the risk of lung cancer in nonsmokers."

So declared Environmental Protection Agency Administrator William Reilly at a news conference earlier this month, announcing the impending release of an EPA report attributing approximately 3,000 deaths a year to passive smoking, or environmental tobacco smoke.

Yet many in the scientific and medical community say the data the EPA cites does not bear out its conclusion.

While virtually all scientists agree that smoking is unhealthful — both for smokers and those around them — it’s the degree to which smoking is unhealthful, and the way the government musters its scientific case, that raises questions.

Some scientists and policy analysts who say they couldn’t care less about tobacco company profits or even the rights of smokers worry that the EPA report is paving the way for new health-based government regulations and programs without any real science behind them.

Said Bonner Cohen, editor of EPA Watch based in Chantilly, Va., "It’s now open season on whatever contaminant the EPA chooses to label the killer contaminant of the week, with the effect that once again, Americans are going to be stampeded into fearing a substance for reasons which upon close inspection are scientifically indefensible."

Yale University epidemiologist Alvan Feinstein, writing in the journal Toxicological Pathology, said he recently heard a prominent leader in epidemiology admit of the EPA’s work on passive smoking: "Yes, it’s rotten science, but it’s in a worthy cause. It will help us to get rid of cigarettes and to become a smoke-free society."

Another critic, Alfred P. Wehner, president of Biomedical and Environmental Consultants Inc., in Richland, Wash., said: "I did work for the EPA in the past and thought reasonably well of them, but when I saw that report, I was really embarrassed. It was a bad document."

One thing both sides agree on is that the direct policy ramifications of the EPA report could be tremendous.

"You can bet your next paycheck that OSHA (the Occupational Safety and Health Administration) will ban all smoking in the workplace," said John Shanahan, the environmental policy analyst at the Heritage Foundation.

Although, in unveiling the report, Reilly expressly referred to cancer in children and in the workplace, the statistical analysis in the EPA report actually ignored the studies that looked for such links.

Rather, the EPA survey is based on 11 American studies of spouses of smokers. The report discussed, but did not put into its statistical analysis, the results of 19 other studies done outside the U.S.

In its analysis of those 11 studies, the EPA found a "statistically significant" difference in lung cancer rates: 119 lung cancers in nonsmoking spouses of smokers for every 100 lung cancers in nonsmoking spouses of non-smokers.

This finding of statistical significance allowed it to rank passive smoking as a Class A carcinogen, the highest risk ranking possible.

Statistical significance, while sounding like arcane academic talk, is actually quite important. It is used to account for the possibility that something happened — in this case the 19 additional lung cancers — by chance.
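The figure at issue is what epidemiologists call a relative risk, and the arithmetic behind it is simple. A minimal sketch, using the article’s numbers (the variable names are ours):

```python
# Relative risk implied by the EPA figures cited above: 119 lung
# cancers among nonsmoking spouses of smokers for every 100 among
# nonsmoking spouses of non-smokers.
exposed_cases = 119
unexposed_cases = 100
relative_risk = exposed_cases / unexposed_cases
print(relative_risk)  # 1.19
```

The statistical question is whether a relative risk of 1.19 — 19 extra cancers per 100 — is too large to be a fluke of the particular people studied.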

But critics say that, using its own previous statistical standards, the EPA report shows no such significance.

"Frankly, I was embarrassed as a scientist with what they came up with. The main problem was the statistical handling of the data," said Wehner, who headed a panel of scientists and doctors that analyzed the draft version of the EPA report for the tobacco industry.


One aspect of this problem, say critics, involves the combination of the 11 studies into one big group, in what the EPA called a "meta-analysis."

The EPA has never before done this. Critics say such combinations may be valid, but if the studies weren’t done in the same way, the results will be like comparing apples and oranges and pears.
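The mechanics of pooling studies can be sketched briefly. The article does not say which pooling method the EPA used; the fixed-effect "inverse-variance" approach below is one standard technique, and the study inputs are made up for illustration:

```python
import math

# Hypothetical study results: (relative risk, standard error of log RR).
# Studies with smaller standard errors get proportionally more weight.
studies = [(1.3, 0.25), (0.9, 0.30), (1.2, 0.20), (1.1, 0.35)]

weights = [1.0 / se ** 2 for _, se in studies]
pooled_log = sum(w * math.log(rr) for (rr, _), w in zip(studies, weights)) / sum(weights)
pooled_rr = math.exp(pooled_log)       # pooled relative risk
pooled_se = math.sqrt(1.0 / sum(weights))  # pooled standard error
print(round(pooled_rr, 2))
```

The critics’ objection is not to this arithmetic but to its preconditions: the method assumes the studies measured the same thing in comparable ways.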

Not everyone agrees:

"Meta-analysis is totally fair," said Stanton Glantz of the Institute of Health Policy Studies at the University of California, San Francisco. "I review reports like that for the State of California, and the work the EPA did is absolutely first rate, one of the best pieces of science I’ve seen about anything."

But Wehner said the study was faulty.

"To get scientifically valid data, there are very strict rules and requirements on how and when you can apply meta-analysis, and virtually all of them were violated in the EPA analysis," he said.

’Confidence Intervals’

Of the 11 studies, 10 showed no statistically significant increase in cancer and only one did. When the EPA says that the weight of 11 studies showed harm from passive smoking, it really means one positive result combined with 10 neutral ones.

More important than the use of the meta-analysis, say critics, is the EPA’s use, also for the first time, of a less rigorous statistical analysis.

Epidemiologists — those who study disease and accident patterns to establish why they occur — calculate "confidence intervals" to express the likelihood that a result could have happened strictly by chance.

A 95% confidence interval means that there is a 95% possibility that the result didn’t happen from chance, or a 5% possibility that it did.

Until the passive smoking report, the EPA had always used a 95% confidence interval, as have most researchers doing epidemiological studies. Indeed, all of the individual ETS studies were published with 95% confidence intervals.

Yet, in its averaging of those ETS studies, the EPA decided to go with a 90% confidence interval.

"That doubles the chance of being wrong," explained James Enstrom, a professor of epidemiology at the University of California, Los Angeles.
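The effect of that change can be sketched numerically. The relative risk below is the article’s 1.19; the standard error is an assumed, illustrative value, not a figure from the EPA report — but it shows how narrowing the interval from 95% to 90% can flip a borderline result into "significance":

```python
import math

rr = 1.19        # relative risk from the article
se_log_rr = 0.09  # ASSUMED standard error of log(RR), for illustration only

def interval(rr, se, z):
    """Normal-approximation confidence interval for a relative risk."""
    return (math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se))

lo95, hi95 = interval(rr, se_log_rr, 1.96)   # 95% two-sided critical value
lo90, hi90 = interval(rr, se_log_rr, 1.645)  # 90% two-sided critical value

# A relative risk is "significant" only if its interval excludes 1.0
# (1.0 meaning no effect at all):
print(lo95 < 1.0)  # True  -> not significant at 95%
print(lo90 > 1.0)  # True  -> significant at 90%
```

The 90% interval is simply narrower, so it can exclude 1.0 in cases where the 95% interval cannot.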

Reilly said simply: "With respect to the confidence interval, we have here a 90% confidence level. And that was, in fact, what was recommended to us by the scientific community as appropriate to this data." Repeated calls to the EPA to find out who in the scientific community had done so went unanswered.

’Hairsplitting’ Factor

Glantz said the criticism of the change in the confidence level is a kind of "hairsplitting that only professors care about."

Many epidemiologists, however, disagree.

"In most cases, a scientist would never do this sort of thing," Enstrom said. "It’s surprising that they would try to get away with it."

The bottom line is that such "hairsplitting" allowed the EPA to come to a totally different conclusion than it would have using its normal method.

It could now declare that the results of the American studies, when lumped together, were "statistically significant," a term of great importance to the medical community. At a 95% confidence interval, the result would not have been statistically significant and the EPA could not have labeled passive smoking a Class A carcinogen.

Only one major newspaper or television news show covering the EPA announcement made any reference to this change in statistical policy.

Critics say this statistical maneuvering amounts to little other than moving the goal posts to ensure that a football that landed on the two-yard line would count as a touchdown.

"They’re using it so they can get an effect," Enstrom said. "They’re going all out to get something they can call significant."

Glantz responds, "There is nothing magical about (the 95%). I know that scientifically it’s widely used, but there is a strong body of thought that people are too slavishly tied to 95%."

But critics say that noting that the original selection of 95% was arbitrary misses the point. It was arbitrary to make a football field 100 yards long, but once that’s the standard, you can’t change the length in the middle of a game.

"You cannot run science with the government changing the rules all the time," said Michael Gough, program manager for biological applications for the congressional Office of Technology Assessment.

’One-Tailed’ Analysis

Glantz said that another statistical reporting change, using what is known as a "one-tailed" analysis as opposed to a two-tailed one, compensates for lowering the statistical confidence.

In fact, it actually reduces the confidence level even further, providing a greater chance of labeling something carcinogenic when it isn’t.

Said Joel Hay, a health economist at the University of Southern California who teaches statistics: "In essence, that’s more like going to an 85%" level, which would triple the chance of a mistake due to chance.

"If they’ve done both, then they’re obviously reaching for results," he said.
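The one-tailed point can also be shown with a few lines of arithmetic: for the same test statistic, the two-tailed p-value is exactly double the one-tailed value, so a one-tailed test is the more permissive standard. The z value below is assumed for illustration:

```python
import math

def normal_cdf(x):
    # Standard normal cumulative distribution, via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

z = 1.645  # an assumed result sitting right at the one-tailed 5% cutoff

one_tailed_p = 1.0 - normal_cdf(z)   # chance of a result this far out, one direction
two_tailed_p = 2.0 * one_tailed_p    # chance counting both directions

print(one_tailed_p < 0.05)  # True: passes a one-tailed 5% test
print(two_tailed_p > 0.05)  # True: fails a two-tailed 5% test
```

A result that just clears the one-tailed bar thus fails the conventional two-tailed one — which is Hay’s point about compounding the relaxed confidence level.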

The tobacco industry charged that the EPA left out of its analysis a recent major study, released in the November American Journal of Public Health, which, if combined with the other 11 American studies, would have resulted in no statistically significant findings even using the moved goalposts.

Reilly responded to the charge by saying that the EPA report was too far along to include these latest findings.

But, "When one new study can throw it from nonsignificant to significant and another can throw it back again, you’re not demonstrating a clear trend," said Alan Gross, a professor of biostatistics at the Medical University of South Carolina in Charleston.

Enstrom notes that substances previously labeled carcinogens normally have been found to have a much greater difference between levels of cancer in those exposed and in those not exposed.

With lung cancer caused by direct or active cigarette smoking, for example, there may be 1,000 cancers compared to 100 for nonsmokers, as against the 119 cancers per 100 that the EPA found for passive smokers.

Enstrom said, "For a heavy smoker exposed to asbestos, you can get up in the range of a relative risk of a hundred or more," meaning that for every 100 unexposed persons with lung cancer you find 10,000 exposed ones.

"With a disease like lung cancer and finding excess risk of only two or less, you really have to think about what you’re doing with the data," he said. "To me, it’s frightening that they could make such a case out of such a small risk factor when you’ve got so many variables."
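Enstrom’s comparison can be laid out side by side. The figures are the article’s rough illustrations, not measured values:

```python
# Relative risks cited in the article, per 100 unexposed cases:
rr_passive = 119 / 100       # EPA's passive-smoking estimate: 1.19
rr_active = 1000 / 100       # active smoking: roughly 10
rr_asbestos = 10000 / 100    # heavy smoker exposed to asbestos: ~100

print(rr_passive, rr_active, rr_asbestos)
```

The gap is the substance of Enstrom’s complaint: a relative risk of 1.19 is tiny compared with the risks behind earlier carcinogen findings, so small confounding errors loom much larger.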

Inexact Science

One problem with slicing the data as thinly as the EPA passive smoking study does is that epidemiology is not an exact science. A single unaccounted-for variable can destroy a whole study.

According to Gary Huber, a doctor with the University of Texas Health Center in Tyler: "At least 20 confounding factors have been identified as important to the development of lung cancer. These include nutrition and dietary prevention, exposure to occupational carcinogens, exposure to various air pollution contaminants, genetic pre-disposition and family prevalence," among other factors.

"You’re going to see huge lifestyle differences between (families with smokers and families with no smokers) generally," said Gross.

One of the 19 non-U.S. epidemiological studies that the EPA did not put into its data base, conducted by American and Chinese researchers in China, actually found a statistically significant decrease in risk.

"When you change just one of the assumptions EPA made," said Wehner, "just one parameter, you can prove ETS saves lives — and, of course, that’s just nonsense. But it demonstrates how easily results can vary when assumptions are changed only slightly."

EPA Watch’s Cohen and other EPA critics think that the passive smoking report is just the latest in a litany of EPA abuses of science to achieve political ends — most prominently that of enlarging its own authority, especially to gain more control over indoor air regulation.

Cohen notes that while the EPA has attributed 5,000 lung cancer deaths a year to radioactive radon gas seeping up from the earth into houses, the epidemiological studies on household radon tend to show that houses with higher levels of the gas have lower levels of lung cancer.

Outside EPA Report’s Warning

"The science of which EPA avails itself is that which happens to fit the political agenda of the moment," Cohen said. "Epidemiology didn’t support its position on radon, so they ignored it."

Cohen notes that an outside report commissioned by the EPA released last year found that there was a wide perception that the agency’s science was "adjusted to fit policy." He says that clearly, the EPA did not heed the report’s warning.

"The EPA was not unaware of the fact that the tobacco industry is an extremely appealing target with few allies in the public arena," Cohen said.

"Further, the tobacco industry has cried wolf so many times that it doesn’t have any credibility anymore."

But Enstrom says that "politically correct" science isn’t science at all, and that regardless of how one feels about smoking and passive smoking, the EPA’s tack is simply wrong.

"I don’t think it bodes well for the field," Enstrom said. "It’s going to make it hard to distinguish a real (problem) from a manufactured one using statistical manipulation."