Computers solve nuclear problem

Materials World magazine, 3 Oct 2015

Dr Sergei Dudarev, Head of Materials Modelling Group, and Dr Mark Gilbert, Research Scientist, both at the Culham Centre for Fusion Energy, discuss quantification instead of conjecture in models for nuclear materials.

In 1960, EP Wigner, the leader of the group of physicists and engineers who, in 1942–1943, designed the first large-scale nuclear reactor, published a paper with a remarkable title – The Unreasonable Effectiveness of Mathematics in the Natural Sciences. The paper pointed out that the part played by formal quantitative mathematical methods in the applied natural sciences extends well beyond expectation, and that mathematics reveals connections between observed natural phenomena that at first glance appear unrelated.

Fifty-five years later, we are witnessing a quiet but fundamental transformation in a field that many consider fairly conservative and well established – the field of nuclear materials. Clear, verifiable and assertive quantitative predictions, derived using recently developed mathematical modelling methods and complemented by a range of comprehensive and versatile experimental tests, are emerging in an area of materials science where empirical rules have long dominated the scene. Is this significant, and what benefits should we expect from this gradual but transformative development?

Hot hot heat

In fission and fusion power generation, nuclear energy is released as the kinetic energy of subatomic particles, including neutrons and alpha-particles (which are the nuclei of helium atoms), produced either in the bulk of the fission fuel or in the fusion plasma. The kinetic energy of these particles is often fairly high, of the order of millions of electron-volts (MeV). The fact that these energies are high is not surprising, as the energy scale is determined by the strength of bonding between protons and neutrons inside the atomic nuclei. For comparison, chemical reactions, such as the combustion of carbon or hydrogen in oxygen, release energy on a scale a million times smaller – only several electron-volts. Chemical bonds are a million times weaker than nuclear bonds, and hence the energy obtained from a given amount of conventional fuel can be produced by a million times smaller amount of nuclear fuel.
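
This factor-of-a-million scale difference can be checked with back-of-the-envelope arithmetic. The short sketch below compares the energy released per kilogram of fuel for uranium fission and carbon combustion, using rounded textbook values (roughly 200 MeV per U-235 fission, roughly 4 eV per carbon atom burned); the exact figures are illustrative only.

```python
# Back-of-the-envelope comparison of nuclear and chemical energy density.
# Rounded textbook values: one U-235 fission releases roughly 200 MeV;
# burning one carbon atom in oxygen releases roughly 4 eV.

AVOGADRO = 6.022e23   # atoms per mole
EV_TO_J = 1.602e-19   # joules per electron-volt

def energy_per_kg(ev_per_atom, molar_mass_g):
    """Energy released per kilogram of fuel, in joules."""
    atoms_per_kg = AVOGADRO * 1000.0 / molar_mass_g
    return ev_per_atom * EV_TO_J * atoms_per_kg

fission = energy_per_kg(200e6, 235.0)  # U-235 fission
combustion = energy_per_kg(4.0, 12.0)  # carbon combustion (~32 MJ/kg, as expected)

print(f"U-235 fission : {fission:.2e} J/kg")
print(f"C combustion  : {combustion:.2e} J/kg")
print(f"ratio         : {fission / combustion:.1e}")  # roughly a few million
```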

The high-energy subatomic particles released in nuclear reactions propagate through the materials forming the engineering structure, such as the core of a fission reactor or the first wall of a fusion power plant, colliding with atoms and initiating damage cascades. A cascade locally melts the material for approximately 10⁻¹¹ of a second, after which it rapidly re-solidifies. Lattice imperfections formed during this rapid solidification of a cascade are known as radiation defects, which are classified according to whether their constituent elementary point defects are vacancies or self-interstitial atoms. Subsequently, radiation defects diffuse, interact, coalesce, and form clusters that grow or shrink, and as a result change the physical and mechanical properties of materials, including their thermal conductivity and hardness or brittleness.
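
In atomistic simulations, vacancies and self-interstitials are commonly identified by an occupancy (Wigner–Seitz-type) analysis: each atom is assigned to its nearest perfect-lattice site, so that empty sites mark vacancies and multiply occupied sites mark interstitials. The sketch below applies this idea to a toy simple-cubic crystal; the lattice, positions and function names are illustrative, not taken from any particular code.

```python
import numpy as np

def count_defects(positions, lattice_sites):
    """Occupancy analysis: assign each atom to its nearest perfect-lattice
    site; sites with 0 atoms are vacancies, sites holding 2+ atoms mark
    self-interstitial configurations."""
    occupancy = np.zeros(len(lattice_sites), dtype=int)
    for r in positions:
        nearest = np.argmin(np.linalg.norm(lattice_sites - r, axis=1))
        occupancy[nearest] += 1
    vacancies = int(np.sum(occupancy == 0))
    interstitials = int(np.sum(occupancy >= 2))  # multiply occupied sites
    return vacancies, interstitials

# Illustrative example: a small simple-cubic crystal with one atom removed
# (a vacancy) and one extra atom squeezed in (a self-interstitial).
a = 2.87  # lattice parameter in angstroms, roughly that of bcc iron
grid = np.array([[i, j, k] for i in range(4)
                 for j in range(4) for k in range(4)], float) * a
atoms = np.delete(grid, 10, axis=0)                   # remove one atom -> vacancy
atoms = np.vstack([atoms, grid[20] + [a / 2, 0, 0]])  # add one extra atom

print(count_defects(atoms, grid))  # -> (1, 1): one vacancy, one interstitial
```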

In addition to collision cascades, neutrons give rise to transmutation reactions, changing the chemical composition of materials. For example, initially pure tungsten bombarded with neutrons that have been thermalised (slowed down) in either a fission or fusion reactor would, over a relatively short time on the order of a year, transform, via neutron capture and radioactive decay, into an alloy containing several percent of rhenium and osmium. The high-energy (14 MeV) fusion neutrons can even cause charged particles to be emitted, for example transmuting iron into chromium via the emission of an alpha particle. The image on page 34 illustrates the complexity produced by this neutron-induced transmutation, by showing the composition of the steel Eurofer, similar to the P91 and T91 steels widely used in the power industry, which would result from the sequence of nuclear reactions in a fusion environment.
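
Sequential neutron capture and radioactive decay of this kind can be modelled as a chain of coupled first-order rate equations (the Bateman equations). The sketch below integrates a schematic W → Re → Os chain; the effective rate constants are placeholders for illustration, whereas real inventory calculations fold evaluated nuclear cross-sections with the local neutron spectrum.

```python
# Schematic transmutation chain W -> Re -> Os under neutron irradiation,
# treated as coupled first-order rate equations (Bateman equations).
# The rates below are ILLUSTRATIVE placeholders, not evaluated nuclear data.

k_w_re = 0.05   # effective W -> Re transmutation rate, per year (assumed)
k_re_os = 0.10  # effective Re -> Os transmutation rate, per year (assumed)

def step(n_w, n_re, n_os, dt):
    """Advance the atomic fractions by one explicit-Euler time step."""
    d_w = -k_w_re * n_w * dt
    d_re = (k_w_re * n_w - k_re_os * n_re) * dt
    d_os = k_re_os * n_re * dt
    return n_w + d_w, n_re + d_re, n_os + d_os

n_w, n_re, n_os = 1.0, 0.0, 0.0   # start from pure tungsten
dt, years = 0.01, 5.0
for _ in range(int(years / dt)):
    n_w, n_re, n_os = step(n_w, n_re, n_os, dt)

print(f"after {years:.0f} years: W {n_w:.3f}, Re {n_re:.3f}, Os {n_os:.3f}")
```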

Machines over minds

While the effects of high-energy subatomic particles on materials were qualitatively described and classified in the 1950s, predicting what would happen to a material after exposure to the environment of a reactor proved difficult. Partially, this was because of limitations on the accuracy of data and the complexity of models. However, there was an even more fundamental reason – the structure of defects remained largely unknown because nobody had managed to see them. A typical defect in iron is smaller than a nanometre, and it is still very difficult to image such a small object even with the most powerful electron microscope. The fact that a defect does not stay still, but constantly moves because of the thermal vibrations of the atoms that form it, makes imaging it more difficult still.

For decades the problem appeared intractable. It was solved entirely unexpectedly, and not by observation but by computation. The solution can be traced back to the two Nobel prize-winning mathematical theorems proved by Walter Kohn and colleagues in 1964–1965 that led to the development of density functional theory (DFT). DFT is a good example of the ‘unreasonable effectiveness of mathematics’, since, to predict the structure of an alloy or a compound, this method requires as input only the names of the chemical elements forming the material. Domain and Becquart applied DFT to the investigation of radiation defects in iron, and the results proved spectacular – the temperatures of the defect recombination stages predicted using DFT by Fu, Dalla Torre, Willaime and colleagues were within ten degrees of the observed values. The structure of defects in metals, alloys, and even fission fuels can now be predicted by computation, and predictions match observations extremely well where such experiments have been possible.
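
As an illustration of the kind of calculation involved, the sketch below computes an (unrelaxed) vacancy formation energy with the open-source ASE library. It is a minimal sketch only: in place of a real DFT calculator it attaches ASE's toy EMT potential, and it uses copper (which EMT covers) rather than iron, purely so the script runs self-contained; an actual study would couple ASE to a DFT code and relax the defective cell.

```python
# Vacancy-formation-energy workflow, sketched with ASE. EMT here is a
# stand-in for a DFT calculator, and copper a stand-in for iron.
from ase.build import bulk
from ase.calculators.emt import EMT

# Perfect crystal: a 3x3x3 supercell of fcc copper.
perfect = bulk('Cu', 'fcc', a=3.6, cubic=True).repeat((3, 3, 3))
perfect.calc = EMT()
e_perfect = perfect.get_potential_energy()
n = len(perfect)

# Defective crystal: the same cell with one atom removed.
vacancy = perfect.copy()
del vacancy[0]
vacancy.calc = EMT()
e_vacancy = vacancy.get_potential_energy()

# Formation energy: E_f = E(N-1, vacancy) - (N-1)/N * E(N, perfect).
e_f = e_vacancy - (n - 1) / n * e_perfect
print(f"unrelaxed vacancy formation energy: {e_f:.2f} eV")
```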

A particular benefit of the new data garnered from DFT calculations has been the ability to improve the fidelity of the numerical descriptions of atomic interaction energies used for dynamic atomistic simulations at finite temperatures. 

Previously, only a limited amount of experimental data was available to define these so-called interatomic potentials but, with the availability of precise atomic structure predictions and their associated energies from DFT, it is now possible to create numerical functions that faithfully reproduce the fundamental properties of a material’s atomic lattice, including the formation and behaviour of defects. This means that atomistic simulations of collision cascades, such as the one shown in the figure on page 32 – whose development has also required improved computational resources and techniques – are now much more realistic and are directly comparable to experiment.
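
At its simplest, fitting a potential is a least-squares problem: choose a functional form and adjust its parameters until it reproduces the reference energies. The sketch below fits a Morse pair potential to synthetic data standing in for DFT energies; real potentials for metals (embedded-atom and related forms) have many more terms and are fitted to much richer data sets.

```python
import numpy as np
from scipy.optimize import curve_fit

def morse(r, d, alpha, r0):
    """Morse pair potential: a simple functional form for atomic bonding."""
    return d * (np.exp(-2 * alpha * (r - r0)) - 2 * np.exp(-alpha * (r - r0)))

# Synthetic 'reference' energies standing in for DFT data points:
# dimer energy versus separation, with a little noise added.
r_data = np.linspace(1.8, 4.0, 15)
rng = np.random.default_rng(0)
e_data = morse(r_data, 0.8, 1.4, 2.5) + rng.normal(0, 0.005, r_data.size)

# Least-squares fit of the potential parameters to the reference data.
params, _ = curve_fit(morse, r_data, e_data, p0=[1.0, 1.0, 2.0])
d, alpha, r0 = params
print(f"fitted: depth={d:.3f} eV, alpha={alpha:.3f} 1/A, r0={r0:.3f} A")
```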

It is not only on the microscopic scale that modelling methods compete with experiments as providers of reliable information about radiation effects.

At the engineering scale, advances in massively parallel numerical computational methods have provided equally remarkable results. It is now possible to predict, in real space and at a high level of spatial and energy resolution, the neutron fields in complex engineering structures. Direct computation of neutron trajectories can now be performed routinely – enough to make statistically reliable predictions of neutron fluxes and, even more significantly, of the rates of damage at every point in the structure over the entire expected lifetime of the power plant. This allows identification of locations in a reactor design where materials are exposed to particularly demanding operating conditions, enabling engineers to improve the design prior to construction of the nuclear plant.
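
The underlying idea is Monte Carlo sampling of individual neutron histories: each neutron flies a randomly sampled free path, then scatters or is absorbed, until it leaves the system. The toy one-dimensional version below conveys the scheme; the slab thickness, cross-section and absorption probability are made-up illustrative values, and production codes track full 3D geometry with energy-dependent nuclear data.

```python
import math
import random

# Toy Monte Carlo neutron transport through a 1D slab, illustrating the
# trajectory-sampling idea behind production transport codes.
SIGMA_TOTAL = 0.5   # total macroscopic cross-section, 1/cm (assumed)
P_ABSORB = 0.3      # probability a collision is an absorption (assumed)
THICKNESS = 10.0    # slab thickness, cm (assumed)

def history(rng):
    """Follow one neutron: sample free paths and collisions until it is
    absorbed, leaks out the back (transmitted) or out the front (reflected)."""
    x, direction = 0.0, 1.0
    while True:
        x += direction * (-math.log(rng.random()) / SIGMA_TOTAL)  # free flight
        if x >= THICKNESS:
            return 'transmitted'
        if x <= 0.0:
            return 'reflected'
        if rng.random() < P_ABSORB:
            return 'absorbed'
        direction = rng.choice((-1.0, 1.0))  # isotropic scatter in 1D

rng = random.Random(42)
tallies = {'transmitted': 0, 'reflected': 0, 'absorbed': 0}
for _ in range(100_000):
    tallies[history(rng)] += 1
print(tallies)
```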

A new age

The information-abundant environment created by these advances in mathematical and computational methods provides new opportunities for comparison and validation using highly accurate microscopic experimental tests of irradiated materials, which are now possible in the facility 'cluster' called the National Nuclear User Facility (NNUF). The presently available computational tools can even suggest ways of designing new alloys and steels, some of which have already achieved commercial success. Another example, illustrating the impact of computer-assisted data mining (above), shows how the analysis of electron microscope images of irradiated materials, until now performed by visually counting the defects and measuring their sizes, has been replaced by automated image processing. This has increased the amount of usable data by several orders of magnitude and produced new fundamental insights into the nature of defect generation by irradiation.
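
In its simplest form, such automated counting amounts to thresholding a micrograph and labelling the connected bright regions. The sketch below does exactly that on a synthetic image using SciPy; the image and threshold are illustrative, and real pipelines add filtering, background subtraction and shape analysis.

```python
import numpy as np
from scipy import ndimage

# Minimal sketch of automated defect counting in a micrograph: threshold
# the image, label connected bright regions, and measure their sizes.
# The 'micrograph' here is synthetic.
rng = np.random.default_rng(1)
image = rng.normal(0.1, 0.02, (256, 256))  # noisy background
for cy, cx, radius in [(60, 80, 4), (150, 50, 6), (200, 180, 3)]:
    yy, xx = np.ogrid[:256, :256]
    image[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] = 1.0  # bright spots

mask = image > 0.5                    # threshold
labels, count = ndimage.label(mask)   # connected-component labelling
sizes = ndimage.sum(mask, labels, index=list(range(1, count + 1)))

print(f"defects found: {count}")
print("sizes (pixels):", sizes.astype(int))
```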

Powerful computers have, for some time, been at the forefront of scientific advancement, and there are Nobel prize-winning methods for the modelling of materials. More recently, massively parallel computational technologies have been able to tackle complex, integrated engineering problems at the very highest level of accuracy. Supported by large-scale data analysis methodologies applied to intelligently designed experimental tests, this quantification of advanced computational models has produced a step change in the understanding of nuclear materials science, consistent with what Wigner foresaw in the 1960s. There is every reason to expect that the integration of numerical data and mathematical models could produce major commercial benefits, by helping to select better-performing materials and alloys for the power stations of the future, and by improving the understanding of ageing effects in current power stations to guide lifetime-extension decisions.