The original Scylla was completed in 1958 and soon demonstrated the first controlled fusion reactions. The reaction chamber is the center of the white tube on the right.

Theta-pinch, or θ-pinch, is a type of fusion power reactor design. The name refers to the configuration of currents used to confine the plasma fuel in the reactor, arranged to run around a cylinder in the direction normally denoted as theta in polar coordinate diagrams. The name was chosen to differentiate it from machines based on the pinch effect that arranged their currents running down the centre of the cylinder; these became known as z-pinch machines, referring to the Z-axis in cartesian coordinates.

Theta-pinch was developed primarily in the United States, mostly at the Los Alamos National Laboratory (LANL) in a series of machines known as Scylla. In 1958, Scylla I was the first machine to clearly demonstrate thermonuclear fusion reactions of deuterium in a controlled manner. It became one of the major lines of fusion research during the 1960s. General Electric and the Naval Research Laboratory also experimented with the concept, and later many international labs did as well. A series of machines was capped by the Scylla IV, which demonstrated temperatures as high as 80 million K, more than enough to sustain a burning plasma. During these runs, Scylla IV produced billions of fusion reactions.

The Scylla machines also demonstrated very poor confinement times, on the order of a few microseconds. It was believed this was due to losses at the ends of the linear tubes. Scyllac (Scylla-closed) was designed to test a toroidal version that would improve confinement a thousandfold. A design mistake led to Scyllac being unable to come anywhere near its desired performance, and the United States Atomic Energy Commission shut the program down in 1977 to focus on the tokamak and magnetic mirror.

Some of the lack of interest in theta since the 1970s is due to a variation of the design known as the field-reversed configuration, or FRC, which has seen significant exploration. In this version, the induced magnetic fields are coaxed to take on a closed form that gives better confinement. The differences are enough that FRCs are considered to be a separate concept. Likewise, theta-pinch is often seen in magnetized target fusion systems, but these also differ significantly from the original concept.

Fusion basics

Nuclear fusion occurs when nucleons, protons and neutrons, come close enough together for the nuclear force to pull them together into a single larger nucleus. Opposing this action is the electrostatic force, which causes electrically charged particles with like charges, such as protons, to repel each other. To fuse, the particles must be travelling fast enough to overcome this coulomb barrier. The nuclear force increases with the number of nucleons, while the coulomb barrier depends only on the number of protons; the fusion rate is therefore highest for neutron-rich isotopes of the lightest elements, hydrogen and helium.[1]
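
As a rough worked illustration (a standard textbook estimate, not a figure taken from the cited sources), the height of the coulomb barrier between two nuclei with charge numbers Z_1 and Z_2 brought to a separation r is

    E_C = \frac{Z_1 Z_2 e^2}{4 \pi \varepsilon_0 r}, \qquad \frac{e^2}{4 \pi \varepsilon_0} \approx 1.44~\mathrm{MeV\,fm}

For two hydrogen nuclei (Z_1 = Z_2 = 1) brought within a few femtometres, where the nuclear force takes over, the barrier is a few hundred keV; for heavier, more highly charged nuclei it grows rapidly.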

Using classical electromagnetism, the energies required to overcome the coulomb barrier would be enormous. The calculations changed considerably during the 1920s as physicists explored the new science of quantum mechanics. George Gamow's 1928 paper on quantum tunnelling demonstrated that nuclear reactions could take place at much lower energies than classical theory predicted. Using this new theory, in 1929 Fritz Houtermans and Robert Atkinson demonstrated that expected reaction rates in the core of the sun supported Arthur Eddington's 1920 suggestion that the sun is powered by fusion.[1] In 1934, Mark Oliphant, Paul Harteck and Ernest Rutherford were the first to achieve fusion on Earth, using a particle accelerator to shoot deuterium nuclei into a metal foil containing deuterium, lithium and other elements.[2] This allowed them to measure the nuclear cross section of various fusion reactions, and determined that the deuterium-deuterium reaction occurred at the lowest energy, peaking at about 100,000 electronvolts (100 keV).[3]

This energy corresponds to the average energy of particles in a gas heated to about a billion Kelvin (K). Materials heated beyond a few thousand K dissociate into their electrons and nuclei, producing a gas-like state of matter known as plasma. In any gas the particles have a wide range of energies, normally following the Maxwell–Boltzmann distribution. In such a mixture, a small number of particles will have much higher energy than the bulk.[4] This leads to an interesting possibility: even at average temperatures well below 100 keV, some particles within the gas will randomly have enough energy to undergo fusion. Those reactions release huge amounts of energy. If that energy can be captured back into the plasma, it can heat other particles to that energy as well, making the reaction self-sustaining. In 1944, Enrico Fermi calculated this would occur at about 50 million K for a deuterium-tritium fuel.[5][6][note 1]
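
For orientation, the conversion between particle energy and temperature used in this section follows directly from the Boltzmann constant (a standard relation, not taken from the cited sources):

    T = \frac{E}{k_B}, \qquad k_B \approx 8.6 \times 10^{-5}~\mathrm{eV/K}

so a characteristic energy of 100 keV corresponds to roughly a billion kelvin (about 1.2 × 10⁹ K), while Fermi's 50 million K corresponds to a typical thermal energy of only a few keV; the reactions in such a plasma come almost entirely from the small population of particles far out in the high-energy tail of the distribution.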

Taking advantage of this possibility requires the fuel plasma to be held together long enough that these random reactions have time to occur. Like any hot gas, plasma has an internal pressure and thus wants to expand according to the ideal gas law.[4] For a fusion reactor, the problem is keeping the plasma contained against this pressure; any known substance would melt at these temperatures.[7] As it consists of freely moving charged particles, plasma is electrically conductive. This makes it subject to electric and magnetic fields. In a magnetic field, the electrons and nuclei orbit the magnetic field lines.[7][8][9] A simple confinement system is a plasma-filled tube placed inside the open core of a solenoid. The plasma naturally wants to expand outwards to the walls of the tube, as well as move along it, towards the ends. The solenoid creates a magnetic field running down the centre of the tube, which the particles will orbit, preventing their motion towards the sides. Unfortunately, this arrangement does not confine the plasma along the length of the tube, and the plasma is free to flow out the ends. For a purely experimental machine, the losses are not necessarily a major problem, but a production system would have to eliminate these end losses.[10]
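
The scale of this sideways confinement is set by the radius of the orbits the particles make around the field lines, the Larmor (or gyro) radius. As a standard relation, not drawn from the cited sources:

    r_L = \frac{m v_\perp}{|q| B}

where m is the particle mass, v_\perp its speed across the field, q its charge and B the field strength. For deuterium ions at fusion temperatures in fields of a few tesla this is on the order of millimetres to centimetres, far smaller than the tube, while motion along the field, and out the open ends, remains unimpeded.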

Pinch effect

In the early days of the fusion program, three designs quickly emerged that addressed these issues. The stellarator was a somewhat complex device but had some attractive qualities. The magnetic mirror and pinch effect devices were dramatically simpler, the former consisting of a modified solenoid and the latter effectively a high-power version of a fluorescent lamp. Pinch, in particular, seemed like an extremely simple solution to the confinement problem, and was being actively studied at labs in the US, UK and USSR.[11]

As these machines began to be tested at higher confinement levels, a significant problem quickly became obvious. When the current was applied and the plasma began to pinch down into a column, it would become unstable, writhing about and eventually hitting the sides of the tube. It was soon realized this was due to slight differences in the density of the gas; when the discharge was applied, areas where the density was even slightly higher would have higher current and thus more magnetic pressure. This would cause that area to pinch more rapidly, increasing the density further in a chain reaction; the resulting deformation, known as "the kink", forced the plasma out of the confinement area.[11]

In the early 1950s all of these efforts were secret. This ended in 1956 when Igor Kurchatov, director of the Soviet atomic bomb effort, offered to give a talk to his UK counterparts. To everyone's great surprise, Kurchatov outlined the Soviet fusion program, talking mostly about linear pinches and the great problems they were having with stability of the plasma. The British were already aware that the US was having similar problems, and they had their own as well. It now appeared there was no fast route to fusion, and an effort developed to declassify the entire field. All three countries released their research in 1958 at the second Atoms for Peace meeting in Geneva.[12]

Theta pinch

One approach to solving the stability problems seen in pinch machines was the concept of "fast pinch". In this approach, the electrical current that generated the pinch was applied in a single brief burst. The burst was too brief to cause the entire plasma to collapse, instead only the outer layers were compressed, and so rapidly that a shock wave formed. The goal was to use this shock wave to compress the plasma instead of the normal pinch that attempted to collapse the entire plasma column.[13]

The mirror and stellarator did not compress their plasma to any great degree, and did not appear to be suffering from the stability problems. However, these devices had a practical problem. In the pinch system, the collapse of the plasma caused it to heat up, meaning that the current provided both the confinement force as well as the heat needed to start the fusion reactions. With the other devices, some external source of heating would be needed. Richard Post, leader of the US mirror program at Lawrence Livermore National Laboratory (LLNL), produced a series of mirrors that used external magnets to compress the plasma.[14]

At the Naval Research Laboratory (NRL), Alan Kolb saw the mirror compression concept and came up with the idea of combining it with shock compression of the fast pinch approach, gaining the advantages of both. His first concept consisted of a mirror with a metal ring at either end. Once a plasma had been formed in the mirror, a single enormous burst of current was sent into the two rings. The idea was to cause a rapid pinch at either end of the tube, creating shock waves that would move inward and meet at the middle of the mirror.[13]

As they considered this design, an entirely new approach presented itself. In this version, the pinch was induced through a single wide sheet of copper wrapped once around the tube. When energized, the current flowed around the outside of the tube, creating a magnetic field at right angles to it, running down the long axis of the tube. This field, in turn, induced a current flowing around the outside of the plasma, or "boundary zone".[15]

According to Lenz's law, this current would be in the direction that produces a magnetic field in the opposite direction to the one that created it. This had the effect of pushing the original field out of the plasma, toward the one in the copper sheet. It was the interaction between these two fields in the area between the plasma and the container wall that created the inward-driving force that pinched the plasma. Because there was no current in the bulk of the plasma, it would not be subject to the instabilities being seen in the other pinch devices.[13]
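
The inward force can be pictured as a magnetic pressure acting on the plasma's outer boundary. In the usual idealized picture (a textbook balance, not a calculation from the cited sources), confinement requires the pressure of the field between the plasma and the wall to at least match the kinetic pressure of the plasma it encloses:

    \frac{B^2}{2 \mu_0} \ge n k_B T

where B is the field strength in that outer region, \mu_0 the vacuum permeability, n the total particle density (ions plus electrons) and T the plasma temperature. Because the confining field is largely excluded from the current-free interior, theta-pinch plasmas run with kinetic pressure comparable to the magnetic pressure, a property that becomes important in the "high-beta" discussions later in this article.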

When the new design became known within the energy labs, James L. Tuck of Los Alamos christened it theta-pinch[16] to distinguish it from the original pinch approach. The original pinch designs retroactively became known as z-pinch.[13] Others also proved interested in the design; at General Electric (GE) a small team formed to consider the concept as the basis for a power-producing reactor.[17]

Fusion success

The British ZETA came online in August 1957, and by the end of the next month the team was consistently measuring bursts of millions of neutrons. During his visit the year before, Kurchatov had warned against being too hasty in concluding that neutrons in such a system were the result of fusion, pointing out that there were other reactions that could produce them. The ZETA team did not consider this carefully enough, and became convinced they had produced fusion reactions. They released this to the press on January 25, 1958, and it was an immediate worldwide news item.[18] However, further work in April clearly showed that the neutrons were not from fusion but from instabilities in the plasma that could not be seen on their test equipment.[19]

At NRL, Kolb began construction of a new version of his Pharos machine to test the single-ring concept.[note 2] At the same time, at Los Alamos Tuck began construction of a system with two rings, similar to the original Kolb mirror.[20] Fond of mythological names, Tuck called the design Scylla.[16] Scylla I began operation in early 1958 and was soon giving off tens of thousands of neutrons per pulse. It was at this time that Keith Boyer began a modification to use a single-turn coil like Pharos. When the new version was started up, it began giving off tens of millions of neutrons.[15]

The events surrounding the ZETA claims forced the Scylla team to make absolutely sure that the neutrons were from fusion, and the team spent the summer of 1958 making all sorts of independent measurements to this end. By this time, Kolb's Pharos was also producing neutrons. The goal was to have definitive results one way or another in time for the meeting in Geneva.[15] Unfortunately, there was simply not enough time; the team shipped Scylla I to the show in September and mentioned that it was generating about 20 million neutrons per shot,[21] but was careful to make no claim as to their origin.[22]

The final evidence was provided shortly after the show. A wide variety of experiments on the system demonstrated that the ions were thermalizing at about 15 million Kelvin, much hotter than ZETA and hot enough to explain the neutrons if they were from fusion reactions. This was the first clear evidence that thermonuclear fusion reactions of deuterium in the lab were possible.[23][24]

Later devices

Concerned about the ever-rising cost of the fusion program, Paul McDaniel, director of the Division of Research at the United States Atomic Energy Commission (AEC), decided that the FY 1963 budget should cancel one design of the many being developed at the labs. Tuck had maintained that all researchers should focus only on small systems to prove out the physics, arguing that there was no point in scaling up unless the basics could be demonstrated. Thus, Los Alamos had a large number of small machines, leaving them with no single make-or-break concept. McDaniel would suffer the least political fallout if he canceled one of Los Alamos' programs. This taught Tuck an important lesson: the way to avoid cancellation was to be too big to fail. During testimony to Congress in 1964, he stated "We resisted the temptation to build huge machines or hire large staffs. This sounds very virtuous, but I have now come to realize this was suicidal".[25] Tuck, Richard Taschek and Los Alamos' director Norris Bradbury were all convinced the lab needed a major machine.[25]

Meanwhile, the success of Scylla I led to a number of potential development pathways that began to be explored during the early 1960s. In the short term, a set of minor improvements produced Scylla II, which was similar to the original but later upgraded from 35 kJ of capacitor bank energy to 185 kJ. It came online in 1959, but was used only briefly while the much larger Scylla III was built and entered operation in late 1960. Early operations were successful and led quickly to the even larger Scylla IV, which began work in January 1963. Scylla IV produced excellent results, reaching 80 million Kelvin and particle densities of 2 × 10¹⁶[26] – well into the practical reactor region – and was producing billions of reactions per pulse.[23] Unfortunately, the system also demonstrated very low confinement times, on the order of 2 microseconds, far too short for a practical reactor design.[26]
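
To see why the 2 microsecond figure was so discouraging despite the excellent temperature and density, one can form the confinement product that appears in the Lawson criterion. As a rough check, assuming the quoted density is in particles per cubic centimetre (the unit is not stated here, though such densities were typical of theta pinches):

    n \tau \approx (2 \times 10^{22}~\mathrm{m^{-3}}) \times (2 \times 10^{-6}~\mathrm{s}) = 4 \times 10^{16}~\mathrm{s\,m^{-3}}

which falls short of the roughly 10²⁰ s·m⁻³ needed even for the easier deuterium-tritium fuel by more than three orders of magnitude.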

Through the 1960s, theta-pinch emerged as one of the leading programs in the fusion field. New teams were set up at Aldermaston and the recently opened Culham in the UK, Garching and Forschungszentrum Jülich in Germany, Frascati National Laboratories in Italy, and Nagoya University, Osaka University and Nihon University in Japan.[27][28] These experiments demonstrated that the system was subject to a new form of instability, the m = 2 instability, which causes the plasma to thin out from its original cylinder into a barbell-like shape. This led to numerous experiments with different layouts to prevent the rotation of the plasma that caused this instability.[24]

Around this time, General Electric bowed out. As it appeared no breakthrough in performance was possible in the short term, moving forward with their research would require larger machines that they were not willing to build using internal funding alone. A review of the field was published under the direction of Leslie Cook, which concluded "The likelihood of an economically successful fusion electricity station being developed in the foreseeable future is small." GE turned to the AEC for funding, but this was declined as their program seemed to offer nothing new compared to Scylla IV. GE then wound down their program.[29]

Toroidal theta

Fred Ribe describes the Scyllac concept. The original Meyer and Schmidt (M&S) field is on the upper right; the inside path is made longer by the series of corrugations in the field. The magnets needed to do this are on the lower right.

Researchers were convinced that the short confinement times were due to particle losses from the open ends of the reactor. In 1965, Fred Ribe, having replaced Tuck as leader of the Scylla team, began examining practical reactors based on the Scylla layout. They discovered that the system could be improved by using the breeding blanket as a sort of magnetic conductor, which allowed the externally supplied current to be far less intense, since it would be magnified as it travelled through the metallic blanket. To make the design work with the given end-loss rates, it would have to be extremely long – calculations suggested it would have to be 500 metres (1,600 ft) to reach the 3 millisecond confinement time required by the Lawson criterion.[30] This would, in turn, demand an impossibly large power supply.[31]
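
The connection between machine length and confinement time follows from a crude scaling argument (an illustration, not the detailed calculation cited above): plasma streams freely out of each open end at roughly the ion thermal speed, so the end-loss time grows only linearly with length,

    \tau_\mathrm{end} \sim \frac{L}{v_i}

With ion thermal speeds of order 10⁵ to 10⁶ m/s at reactor temperatures, holding the plasma for milliseconds implies machine lengths of hundreds of metres or more, which is the origin of the impractically long design.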

The problem with end-flow is most simply addressed by bending the experimental tube around to form a torus (donut, or ring) shape. In this case, particles flowing along the long axis of the device no longer hit anything, and can circulate forever. However, it was demonstrated from the earliest days of the fusion effort that this configuration is not stable: when a magnetic field is applied to such a container, due purely to geometry, the field on the inside of the curve is stronger than on the outside, leading to uneven forces within the plasma that make the ions and electrons drift away from the center.[32][30] A number of solutions to this problem had been introduced, notably the original pinch machines. In these, the inward force of the pinch current was dramatically more powerful than the drift force so it was not a problem. Another solution was the stellarator, which circulated the particles so they spent time at the inside and outside of the tube to balance out the drift.[33]

In 1958, Meyer and Schmidt at Garching proposed another solution. They noted that the key requirement for stability in the toroid was that the total path length on the inside and outside of the curve was the same. The stellarator provided this by circulating the particles, adding a rotational transform. Meyer and Schmidt proposed doing this instead by modifying the magnets to produce a field that was no longer uniform as one moved around the torus; instead, the field choked down and then widened out, producing a shape not unlike a link of sausages. The field was bent inward more on the inside of the curve, making that path longer, and thus the total path lengths on the inside and outside were the same.[31]

As the theta-pinch machines started pushing into the region where end losses were the limitation to further research, the concept appeared to offer a way to move theta-pinch to a toroidal layout that was still sufficiently different from the stellarator to be interesting. Until then, the solution had not been considered very deeply, given the simplicity of the stellarator concept compared to the more complex magnet layout required for the Meyer and Schmidt corrugated version.[34] Further study revealed additional instabilities, but the predicted drift from these was slow and could be addressed using dynamic stabilization.[31]

With the Los Alamos team desiring a large machine to ensure continued funding, they proposed a large toroidal theta as their next device, not just as a larger experimental system, but as a potential demonstration of a power producing system.[25] By 1965, LANL was proposing such a machine under the name Scylla V.[35]

High-beta stellarator

Amasa Stone Bishop had recently taken over the AEC's fusion management from Arthur Ruark and formed a panel to review the Scylla V proposal, including members of the NRL and GE theta teams. They concluded that there was no convincing evidence that the energy losses being seen were due to end losses, and raised concerns about the effectiveness of dynamic stabilization as well as the possibility that the changing fields it required might simply induce new instabilities. The panel strongly suggested building one more linear machine, 15 metres (49 ft) long, to test the concepts being introduced.[36] Nevertheless, with no other projects, the system was approved, but on the proviso that it be targeted at researching the high-beta regime, not as a prototype power reactor. This marked the beginning of a shift in the management of the program's overall goals to Washington.[37]

One of the panel members, Harold Grad, was well known as an expert on plasma physics and stability. On his return to New York he began reading all of the published materials on the theta-pinch concept and concluded that a dynamic stabilization system would likely not work and would be extremely complex even if it did. In its place he proposed using helical magnets like those being added to recent stellarators, as these appeared to be naturally stable. He referred to the resulting system as a "high-beta stellarator"; beta, the ratio of the plasma pressure to the pressure of the confining magnetic field, would be much higher in a pinch device than in a conventional stellarator.[34]
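
In symbols (standard plasma-physics usage, not drawn specifically from Grad's work):

    \beta = \frac{n k_B T}{B^2 / 2 \mu_0}

Pinches, which use the confining field itself to squeeze the plasma, operate with beta near one, while the classical stellarators of the era ran at beta of a few percent or less.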

Los Alamos proved extremely interested in Grad's work and proposed that he fully develop it with an eye to presenting it at the next triennial fusion research meeting, due to take place in August 1968 in Novosibirsk. As the team continued working, several new and disturbing instabilities were revealed and it became clear the helical magnets were ultimately no more stable than the original Meyer-Schmidt concept. Yet another dynamic damping system had to be added,[38] this time one that had to react within a characteristic time, T.[36]

Theta vs. tokamak

It was at the Novosibirsk meeting that the Soviet delegation presented new results from their tokamak devices, which were demonstrating significant improvements over all previous devices. At first the results were dismissed as an artifact of inadequate instrumentation, and a furious debate broke out over whether they were reliable.[39]

The Soviets came up with a convincing solution to demonstrate whether their design worked. During the 1960s, the UK had developed the technique of directly measuring the temperature of the particles in the plasma using a laser system. Lev Artsimovich invited the team to bring their device to the Kurchatov Institute and independently measure the performance. The system required months of setup and calibration, but by the early summer of 1969 it was clear the tokamak really was working as described.[40]

This put the US in the uncomfortable position of being behind in the fusion race. At first, the labs refused to consider building tokamaks, presenting a laundry list of reasons why they were inferior. In May 1969, AEC fusion division director Taschek wrote to Bishop stating his feeling that the US should respond with its own devices that had the best chance of showing reasonable performance, and that "it is inescapable that they are the Scyllac and 2X! They are better than anything we have in the US."[36][note 3] Still concerned that the Scyllac program was trying to solve too many problems at once, the AEC reiterated its suggestion that a linear device be built first.[36]

By the end of October 1969, with the tokamak results to be released publicly the next month, the US began its own tokamak program. This placed Scyllac in the position of having not only to demonstrate its goals in terms of stability, but also to compete against these machines, which had already demonstrated excellent performance. This renewed interest in the possibility that the linear version might quickly return results that competed with the tokamak. As Taschek put it in mid-1970, "there may be some real tactical and impact merit in noting that a linear theta pinch... would provide a major contribution to the derby which now seems to have arisen on a short time scale."[36]

Scyllac

The full-circle Scyllac fusion reactor during construction.

In spite of what appeared to be agreement on the wisdom of building a 15 m linear version first, Ribe decided it would be better to instead build Scyllac as quickly as possible. To do this, in February 1969 he outlined a plan in which a shorter 10 metres (33 ft) linear device would be built at the same time as a 120 degree sector of Scyllac, which would be used to learn how to build the machine as a whole. By 1970 he had further modified these plans to reduce the linear device to only 5 metres (16 ft) with 2 metres (6 ft 7 in) mirrors on either end to improve confinement time.[36]

In 1972, Robert L. Hirsch took over the AEC's fusion program from Bishop. With the recent advances in tokamak performance pointing to the possibility of a production design, Hirsch began reevaluating the program on the basis of both performance and economics. While the tokamak had excellent performance, the mirrors being developed at Lawrence Livermore would be far less expensive to build and operate, and these two devices became the focus of his plans. To keep their design in the running, Los Alamos decided to rapidly move ahead with the toroidal section to prove their approach was also worthy of consideration.[41]

Experiments on the first sector began in April 1971 and demonstrated that the gross stability was there, prompting a major celebration at the lab. The next step was to add the feedback stability system. By this time, Ken Thomassen of MIT had made additional calculations that showed feedback would not work at the radius of the current design. In late 1972, Ribe decided to address this by enlarging Scyllac from 4.8 metres (16 ft) diameter to 8 metres (26 ft), reducing curvature and thus the required level of feedback. This reduced the critical parameter T to 0.9 microseconds – anything below 1 would work.[42]

Around this time, Robin Gribble, who was primarily responsible for the feedback program, was assigned to another project at Los Alamos. As the program developed, two changes to the layout caused the T parameter to increase. Lacking anyone with direct responsibility over the feedback side of the program, this went unnoticed. Experiments on Scylla IV and the original segment ended as the entire team focused on the new enlarged design, so additional problems were not discovered.[42]

Scyllac was dedicated in April 1974. By October it was clear that the feedback system was not working. It was at this point that they recalculated the value of T and found it to be 1.5 microseconds. Worse, further work on the underlying theory suggested the value of 1 was not good enough, and values closer to 0.5 were required. The final blow was that the gross stability seen in the original segment in 1971 proved to be illusory; in the larger machine the plasma was seen to slowly drift. The stability system was barely able to stop this, let alone correct the faster instabilities.[42]

Linear stops

The failure of Scyllac left the US with only the tokamak program centered at Princeton and the mirror program at Livermore. Los Alamos attempted one more solution to save the system, re-commissioning Scylla IV with physical stoppers of light metals in the ends. This Scylla IV-P improved the confinement time from 9 to 29 microseconds, roughly a three-fold improvement. But this was nowhere near enough to get into the millisecond range required for a production reactor. After two decades of effort, the best results of the theta program were only a marginal improvement over the results of the original Scylla series.[24]

FRCs

During the 1960s, several teams noticed that their theta experiments would sometimes show improved confinement times. This occurred when the magnetic field was reconfiguring as the external pulse was relaxing back to zero. At the time this behaviour was generally considered undesirable, although it did have the advantage of causing the ion temperature to increase as the fields folded, and it was this action that raised the temperatures to the point where fusion took place.[43]

In 1972, John Bryan Taylor published a series of papers on the topic of magnetic field conservation and flux reversals that had been seen on ZETA but not appreciated at the time. This led to the concept of the reversed field pinch, which saw development through the 1970s and 80s. The same basic mechanism was causing the field reversal seen in the theta devices, but the ultimate outcome was a different layout.[43]

In the early 1970s, the Kurchatov Institute had demonstrated stable confinement over lengthy periods by reducing the pinch power and adding additional magnets at the end of the linear tube to aid field reversal. The publication of their work on these field-reversed configuration (FRC) plasmas led to the topic gaining significant interest, with new efforts in the US and Japan. Although these are technically theta pinches due to their arrangement, the concept is considered distinct and a separate approach to fusion power.[43]

Notes

  1. Tritium was unknown when Oliphant's initial experiments on reaction rates were carried out; D-T reactions occur at lower energy levels than the D-D reaction Oliphant experimented with.
  2. NRL later used the name Pharos for an entirely unrelated fusion experiment in the 1970s.
  3. 2X was the latest mirror machine at LLNL.

References

Citations

  1. Clery 2014, p. 24.
  2. Oliphant, Harteck & Rutherford 1934.
  3. McCracken & Stott 2005, p. 35.
  4. Bishop 1958, p. 7.
  5. Asimov 1972, p. 123.
  6. McCracken & Stott 2005, pp. 36–38.
  7. Thomson 1958, p. 12.
  8. Bishop 1958, p. 17.
  9. Clery 2014, p. 25.
  10. Thomson 1958, p. 11.
  11. Phillips 1983, p. 65.
  12. Herman 1990, p. 45.
  13. Braams & Stott 2002, p. 41.
  14. Post, Richard (1987). "The magnetic mirror approach to fusion". Nuclear Fusion. 27 (10): 1579–1739. doi:10.1088/0029-5515/27/10/001. S2CID 120266348.
  15. Bromberg 1982, p. 84.
  16. Dean 2013, p. 227.
  17. Bromberg 1982, p. 136.
  18. Pease, Roland (January 15, 2008). "The story of 'Britain's Sputnik'". BBC. Retrieved May 6, 2017.
  19. Bromberg 1982, p. 86.
  20. Bromberg 1982, p. 84, Figure 5.2.
  21. Braams & Stott 2002, p. 42.
  22. Bromberg 1982, p. 87.
  23. Phillips 1983, p. 66.
  24. Braams & Stott 2002, p. 83.
  25. Bromberg 1982, p. 145.
  26. Bromberg 1982, p. 143.
  27. Braams & Stott 2002, p. 82.
  28. Tuck 1965, p. 28.
  29. Bromberg 1982, p. 137.
  30. Tuck 1965, p. 38.
  31. Bromberg 1982, p. 144.
  32. Bromberg 1982, p. 16.
  33. Bromberg 1982, p. 17.
  34. Bromberg 1982, p. 222.
  35. Tuck 1965, p. 39.
  36. Bromberg 1982, p. 224.
  37. Bromberg 1982, p. 146.
  38. Bromberg 1982, p. 223.
  39. Seife 2008, p. 112.
  40. Forrest, Michael (2016). "Lasers across the cherry orchards: an epic scientific and political coup in Moscow at the height of the Cold War – a nuclear scientist's true story".
  41. Bromberg 1982, p. 225.
  42. Bromberg 1982, p. 226.
  43. Braams & Stott 2002, p. 108.
