2
The Unheeded Warning
THE TROY INCIDENT was easily forgotten because, at the time, little was known about the effects of low-level radiation -- either from fallout or from other sources. The subject had hardly even been thought about. Scientists generally assumed that such levels were harmless, since they produced no immediately observable effects. During the next few years, however, tremendously improved radiation measurement techniques, coupled with detailed laboratory studies, revealed many previously unsuspected hazards from fallout. And with these discoveries, the forgotten incident in upstate New York re-emerged and took on great significance.

By 1953, it was already known that many of the radioactive forms of the elements (called isotopes) created by an atomic explosion, once they entered the atmosphere as tiny fallout particles, would contaminate food, water, and air and thus find their way into the human body. What was not widely known, however, was the extent to which these isotopes became concentrated in various body organs. Inside the body, they behaved just like their nonradioactive natural counterparts. Radioactive strontium, for instance, which is chemically similar to calcium, settled in bones and teeth. Radioactive iodine behaved like ordinary iodine, seeking out and concentrating in the thyroid gland, an organ vital in regulating the growth and functioning of the human body.
It was in the case of iodine that some of the most alarming discoveries were made. In the early 1950s researchers found that iodine became concentrated in the milk of cows that grazed on pasture contaminated with fallout. When people drank the milk, the iodine built up rapidly in their thyroid glands. Since the thyroid gland is small, the concentration was very heavy. Measurements revealed that the radiation dose to the adult thyroid could be as much as a hundred times the external dose from the fallout in the surrounding environment. But far more important were the results of extensive studies conducted at the University of Michigan and published in 1960. These showed that the radiation dose to the thyroids of unborn children and infants was ten to one hundred times higher than that to adults, because of the greater concentration in their smaller thyroids. This discovery held serious implications for the health of the children of Troy. It meant that the doses to their thyroids might have been as much as a hundred to a thousand times higher than those estimated by Dr. Clark and the AEC scientists, who had considered only the overall dose from the fallout in the external environment.
However, by the time these discoveries became widely known, a voluntary halt in atmospheric testing had been agreed upon by the Soviet Union, the United States, and Great Britain, and there was considerable hope that incidents of heavy fallout would never occur again. Thus it seemed less urgent to pursue investigations into the problem. But in 1961, during the Berlin crisis, the Soviet Union's detonation of a hydrogen bomb in the 50-megaton range -- the most powerful ever exploded -- high over the Arctic marked the resumption of large-scale atmospheric testing by the nuclear powers, and the levels of radioactivity in air and water once again rose sharply throughout the world. In the weeks that followed, an enormous peak of radioactive iodine was detected in milk throughout the northern hemisphere. As the testing continued, many scientists began to feel it was imperative to find a conclusive answer to the question: Just how harmful was low-level radiation from fallout?
It was in this context that the well-known nuclear physicist Ralph Lapp wrote an article for Science magazine in 1962 which first focused attention on the significance of the Albany-Troy incident. Lapp's article showed that radiation doses far larger than those permitted by federal safety guidelines must have been received by the children of Troy and numerous other cities that had been subjected to similar "rainouts" in the early years of testing. The purpose of the article was to point out that the Troy incident provided an excellent opportunity to find out just what the effects of fallout were. The surrounding area's population of half a million persons was large enough to ensure that any increase in the normally low incidence of such radiation-caused diseases as thyroid cancer or childhood leukemia would show up. (The normal incidence of leukemia among children under ten years old was about two to three cases per year per 100,000 children. Thus, if an area with only a few thousand children were studied, no cases at all might be found in some years, even if the radiation were strong enough to double the normally expected number.) And the detailed radiation measurements taken by Dr. Clark's students and the AEC meant that relatively accurate estimates could be made of the doses involved.
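A rough, back-of-the-envelope calculation shows why so large a population mattered. The child-population figures below are assumed purely for illustration; only the incidence rate of roughly 2 to 3 cases per year per 100,000 children comes from the figures above:

\[
\begin{aligned}
\text{a few thousand children (say } 5{,}000\text{):}\quad & 5{,}000 \times \tfrac{2.5}{100{,}000} \approx 0.13 \text{ expected cases per year} \\
\text{Albany-Troy area (say } 100{,}000 \text{ children):}\quad & 100{,}000 \times \tfrac{2.5}{100{,}000} = 2.5 \text{ expected cases per year}
\end{aligned}
\]

In the small group, even a doubling of the rate would still leave most years with no cases at all, whereas in the larger area a doubling -- from roughly 25 to roughly 50 cases over a decade -- would stand out clearly above normal statistical fluctuation.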
The study that Lapp proposed had enormous potential ramifications. At the time, many people in government, military, and scientific circles still believed that mankind could survive the levels of fallout that would result from a nuclear war, levels thousands of times greater than those from peacetime testing. The United States had embarked on an extensive civil-defense program based on this belief. But if it were shown that peacetime fallout levels led to a significant increase in fatal diseases, then, in the first place, nuclear war would by implication probably mean the end of mankind, and thus the vast nuclear war machinery developed by the United States and the Soviet Union would become useless. In the second place, if it were shown that large numbers of children had already died from the effects of fallout, then tremendous public revulsion would probably be generated against all activities that released more radioactivity into the environment. These would include not just the testing of nuclear weapons, but also the monumental program planned by many governments and industries throughout the world for the peacetime uses of atomic energy. For nuclear power reactors, atomic gas-mining explosions, and other forms of nuclear engineering all normally release low levels of radioactivity and, in the event of an accident, entail the risk of much worse. And, finally, those individuals who had been in positions of responsibility would have a terrible guilt to bear for the damage already done.
The appearance of Lapp's article also served to highlight another extraordinary fact. It was then seventeen years since the atomic bombing of Hiroshima in 1945, yet no large-scale cancer studies such as he proposed had ever been carried out, even though the AEC had long been in possession of detailed fallout data for many areas of the U.S. A great deal of information existed on the effects of high doses of radiation, such as those received by the survivors of the explosions at Hiroshima and Nagasaki, but there was no real evidence regarding low-level effects, either from laboratory animal studies or from direct observations of large human populations. The lack of animal studies was somewhat understandable, since detecting the small increase in a rare disease such as leukemia at the extremely low doses produced by fallout would require hundreds of thousands or even millions of animals and many years of observation. But in the case of humans, such a large study population had already been created by the fallout from years of atomic testing. Yet the AEC had ignored this opportunity to resolve such an important issue. Thus, those who wished to minimize the danger of continued atomic testing could argue, in the absence of data to the contrary, that long-term, low-level exposure such as that from fallout had not been proven to increase fatal diseases.
The absence of such studies was all the more striking because there were already strong indications that such danger existed. It was toward the end of 1955 that Dr. Alice Stewart, head of the Department of Preventive Medicine at Oxford University, first became aware of a sharp rise in leukemia among young children in England. A young statistician in her department, David Hewitt, had discovered that the number of children dying of this cancer of the blood had risen by over 50 percent in only a few years. In the United States an increase about twice as large had occurred. One aspect of this rise was extremely puzzling: the leukemia seemed to strike mostly children older than two or three years of age -- there was little or no increase among younger children. This had not been the situation prior to World War II, when both groups had shown a parallel and much more gradual rise. The question was: What new postwar development could be responsible for the increase in deaths among the older children?
Dr. Stewart undertook a study to find out. With the assistance of health officers throughout England and Wales, she obtained detailed interviews with the mothers of all 1,694 children in those countries who had died of cancer in the years 1953 to 1955, as well as with an equal number of mothers of healthy children. By May 1957, the analysis of 1,299 cases, half of which involved leukemia and the rest mainly brain and kidney tumors, had been completed. The data showed that babies born of mothers who had a series of X-rays of the pelvic region during pregnancy were nearly twice as likely to develop leukemia or another form of cancer as those born of mothers who had not been X-rayed. As Dr. Stewart noted, the chance of finding such a two-to-one ratio purely as a result of statistical accident was in this case less than one in ten million. Thus, in the paper she published in June 1958, Dr. Stewart concluded that the dose from diagnostic X-rays could produce a clearly detectable increase in childhood cancer when given during pregnancy.
This was an extremely low dose. It was roughly comparable to the dose that most people receive in only a few years from natural background radiation. (Mankind has always lived with a "natural background" of radiation, produced by cosmic rays and various naturally occurring radioactive substances. The annual dose from this background averages about 100 millirads.) But still more significant, this dose was comparable to what the pregnant mothers of Albany-Troy must have received from the fallout of the "Simon" test in 1953.
In this connection, there was another finding of Dr. Stewart's study that was even more disturbing. This concerned the timing of the X-rays. Children whose mothers were X-rayed during the first third of pregnancy were found to be some ten times more likely to develop cancer than those whose mothers were X-rayed toward the end of pregnancy. In other words, the earlier the exposure, the worse the effect. This finding had much more serious implications for fallout than for medical X-rays: almost 90 percent of pelvic X-ray examinations are made shortly before delivery, whereas fallout comes down indiscriminately on whole populations and irradiates unborn children at all stages of development, including the earliest. The fallout hazard was further compounded by the tendency of various radioactive elements, such as iodine and strontium, to concentrate in vital body organs. This meant that the doses to the thyroids and bone marrows of unborn children from fallout could be many times higher than the doses received from diagnostic X-rays by the children in Dr. Stewart's study -- doses which had already been enough to nearly double the cancer incidence.
But in order to establish a clear cause-and-effect relationship between the X-rays and the additional cancer deaths, there had to be a direct relationship between the amount of radiation received by the fetus and the chance that the child would develop cancer a few years later. And indeed, when Dr. Stewart and David Hewitt examined the available records for the number of X-ray films taken, they found that there were distinctly fewer cancer cases among the children whose mothers had had only one X-ray than among those whose mothers had had four or more. The number of cases where this information was available was too small to establish a conclusive connection between dose and cancer risk, but there was other evidence that supported this general trend. For example, whenever the X-rays had been taken only of other parts of the body, such as the arms and legs, so that only a small quantity of scattered radiation reached the unborn child in the womb, the increase in cancer risk was only about one-fifth as great as in those cases where the abdominal region itself was X-rayed.
These latter observations were in direct contradiction to a belief that was essential to the continuation of all programs for nuclear testing and the peaceful uses of the atom -- namely, the so-called "threshold" theory. This theory held that there was a certain low level of radiation exposure, a "threshold," below which no damage would be caused. If this threshold were about the same as the yearly dose from background radiation or from exposure to typical diagnostic X-rays, as various supporters of nuclear programs maintained it was, then there would theoretically be no ill effects from past or present weapons tests, from the radioactive releases of nuclear reactors, or even from the radiation persisting after a nuclear war, since this radiation would probably not exceed the threshold if it were averaged out over a lifetime. But Dr. Stewart's study implied that if there were any safe threshold for unborn children and infants it would have to be less than the dose from a single X-ray picture. And her finding that the risk of cancer seemed to be directly related to the size of the dose suggested that there might not be any safe threshold at all, and that any increase in radiation exposure might produce a corresponding increase in the risk. Even if the risk from a certain tiny amount of radiation were extremely small, say, one chance in ten thousand, then if millions of people were exposed to this radiation, hundreds would be likely to get cancer. Fallout had already exposed millions of people to doses comparable to those received by the children in Dr. Stewart's study, and the proliferation of nuclear explosions for peaceful purposes would make this exposure even more extensive.
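Expressed as simple arithmetic -- the one-in-ten-thousand figure is the illustrative number used above, not a measured risk -- the reasoning is:

\[
\underbrace{10^{-4}}_{\text{risk per exposed person}} \times \underbrace{1{,}000{,}000}_{\text{people exposed}} = 100 \text{ expected cancers,}
\]

so an exposure shared by several million people would be expected to produce several hundred cases, even though any single individual's chance of harm remains vanishingly small.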
There was widespread refusal to accept the implications of Dr. Stewart's work. Her findings were regarded as doubtful for such reasons as their dependence on the mothers' memories of how many X-ray exposures they had received. Other studies were cited that showed no effects from X-rays. It was said that her study was inapplicable to fallout because it had been shown that a specified dose of radiation given all at once -- as is the case with a diagnostic X-ray -- is more damaging than the same total dose given gradually over a period of weeks, months, or years -- as is the case with fallout.
This argument opened up another important area of disagreement about radiation dangers. Were the cancer-causing effects of radiation cumulative, or did body cells recover? There was no question that body cells did repair themselves after damage such as radiation burns, which healed with the passage of time. Supporters of the threshold theory hypothesized that the same would hold true for the damage that leads to cancer, and this became another bulwark of their theory: if such recovery did take place, then there would indeed exist a level of radiation low enough that the body's repair mechanisms could keep pace with the damage.
However, evidence was soon forthcoming that would refute the criticisms of Dr. Stewart's study and thereby cast further doubt on the validity of the threshold theory. After the publication of Dr. Stewart's results, Dr. Brian MacMahon of the School of Public Health at Harvard University undertook another study of the relationship between diagnostic X-rays and childhood cancer. He constructed this study so that there would be no question as to the number of X-rays given to the mothers. Using the carefully maintained hospital records of 700,000 mothers who delivered their babies in a series of large hospitals in the northeastern United States between 1947 and 1954, he compared the risk of cancer for the children of the 70,000 mothers who had received one or more X-rays with the risk for the children of the remaining 630,000 mothers who had received no X-rays during pregnancy.
The results of his study, published in 1962, fully confirmed the findings of Dr. Stewart: There was a clear and highly significant increase in the risk of cancer for the children who had been X-rayed before birth, and, most important, the risk did indeed increase with the number of X-rays taken. The overall risk was somewhat smaller than had been found for the British children by Dr. Stewart, but this could easily be explained by the fact that the dose to the mothers in MacMahon's study from each X-ray picture was substantially lower than in Dr. Stewart's, owing to improvements in X-ray technology. As for the studies cited by critics which did not show any increase in cancer risk from prenatal X-rays, these turned out to be based on small study populations, and even then the indications were that, had they been extended to larger numbers, their results would have confirmed those of Stewart and MacMahon.
But there was still one major question that remained unanswered. To what degree were the effects of diagnostic X-rays comparable with those of fallout?
There were already many indications that the effects might be similar. Among these was the fact that had prompted Dr. Stewart to undertake her study in the first place, namely, the evidence that in both the United States and England cancer and leukemia among school-age children had increased sharply beginning a few years after World War II. This was the period when nuclear fallout was first introduced into the atmosphere. And now, Dr. Stewart's and Dr. MacMahon's studies had served to point up the following significant aspects of this increase:
First, the effects of X-rays, although very real, were not strong enough to have caused all of the very large general increase in childhood cancer, which ranged from 50 to 100 percent. Dr. Stewart herself estimated that X-rays could only have accounted for perhaps 5 percent of this increase.
Second, this general increase had taken place only among children older than two or three -- exactly the age group that had suffered the greatest effects from X-rays. This suggested that some other form of radiation might be causing the unexplained portion of the increase, since the characteristic age at death was the same.
Third, other possible factors such as the introduction of new drugs, pesticides, or food additives had been ruled out because these factors had been found to be essentially the same for the healthy and afflicted children alike.
But the main reason why it seemed that fallout was at least as effective as X-rays in producing childhood cancer was the growing evidence for a direct relationship between the number of X-ray pictures taken and the risk of cancer. For if the risk increased with each additional picture, as the studies of Stewart and MacMahon indicated it did, then this clearly implied that there was no significant healing of the damage and thus that the cancer-causing effects of radiation were cumulative. This would mean that the effects of a dose received over a period of time from fallout would be similar to those from an equal dose received all at once from X-rays.
Such a direct connection between the amount of radiation absorbed and the likelihood of cancer could be predicted on the basis of a theory developed by Dr. E. B. Lewis of the California Institute of Technology. According to Dr. Lewis, cancer could be triggered if one particle of radiation scored a single bulletlike hit on a crucial DNA molecule in the chromosomes of a cell. The DNA contains the genetic code that controls the functioning and reproduction of the cell. If it were damaged by a particle of radiation, this might disrupt the governing mechanism and cause the cell to begin the unlimited growth which characterizes cancer.
The significance of this theory was twofold. First, it was already established that such damage to the DNA was one of the ways that radiation produced hereditary or genetic damage in the female ova and male sperm cells -- the type of damage that results in malformations and other harmful mutations in offspring. What Dr. Lewis stated, however, was that it was exactly the same type of damage, but to the DNA of any body cell, that could produce cancer. This was extremely important because it had already been decisively demonstrated that genetic damage was cumulative. In one experiment after another, using fruit flies and large colonies of mice, it was found that it did not matter how slowly or quickly a given dose of radiation was administered -- in every case the number of defective offspring was essentially the same. The resulting effect on offspring was determined only by the total accumulated radiation dose received, regardless of the length of the time period over which it was given. There were some indications of repair in the ova of female mice, but the effect was relatively small at best. Thus there existed clear evidence that radiation effects of the type that produced genetic damage were cumulative, especially in the male sperm cell. But if Dr. Lewis was right, and radiation caused cancer in body cells in exactly the same way as it caused genetic damage in reproductive cells, then this clearly implied that the cancer-causing effect of radiation was also cumulative. And if this was so, then the greater the radiation dose, the greater the risk of cancer. Dr. Lewis's theory therefore supported the findings of Stewart and MacMahon, and simultaneously gave weight to the theory that the cancer-causing effects of protracted radiation from fallout would be the same as for X-rays given all at once.
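In later terminology, this is the assumption of linear, cumulative risk with no threshold. A minimal way to write it down -- the proportionality constant here is introduced purely for illustration and is not a value given anywhere in this chapter -- is:

\[
\text{Risk} \;=\; \alpha \int \dot{D}(t)\,dt \;=\; \alpha\, D_{\text{total}},
\]

where \( \dot{D}(t) \) is the dose rate and \( D_{\text{total}} \) is the accumulated dose. On this picture the risk depends only on the total dose received, not on whether it was delivered in a single X-ray exposure or spread out over months of fallout.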
All of this evidence combined pointed toward a single tragic conclusion: Man, especially during the stage of early embryonic life, was hundreds or thousands of times more sensitive to radiation than anyone had ever suspected.