Status and Future of Nuclear Matrix Elements for Neutrinoless Double-Beta Decay: A Review
Abstract
The nuclear matrix elements that govern the rate of neutrinoless double-beta decay must be accurately calculated if experiments are to reach their full potential. Theorists have been working on the problem for a long time but have recently stepped up their efforts as ton-scale experiments have begun to look feasible. Here we review past and recent work on the matrix elements in a wide variety of nuclear models and discuss work that will be done in the near future. Ab initio nuclear-structure theory, which is developing rapidly, holds out hope of more accurate matrix elements with quantifiable error bars.
I Introduction
Neutrinos are the only neutral fermions we know to exist. They are thus the only known particles that may be Majorana fermions, that is, their own antiparticles. Because neutrinos are so light, the difference in behavior between Majorana neutrinos and Dirac neutrinos, which would be distinct from their antiparticles, is slight. The easiest way to determine which of the two possibilities nature has chosen (and it is far from easy) is to see whether certain nuclei undergo neutrinoless double-beta (0νββ) decay, a second-order weak-interaction process in which the parent nucleus decays into its daughter, with two fewer neutrons and two more protons, while emitting two electrons but, crucially, no (anti)neutrinos.
Experiments to observe 0νββ decay are becoming more and more sensitive, and international teams are trying to push the sensitivity to the point at which they can identify a few decay events per year in a ton of material Henning (2016); Dell'Oro et al. (2016); Cremonesi and Pavan (2014); Gómez-Cadenas et al. (2012). The hope is to be sensitive enough to detect the decay if neutrinos are indeed Majorana particles and their masses are arranged in a pattern known as the "inverted hierarchy" (discussed in Sec. II.1). Because the decay takes place inside nuclei, the amount of material required to fully cover the inverted-hierarchy region depends not only on the masses of the three kinds of neutrinos, but also on the nuclear matrix element (or elements, since present and planned ββ-decay experiments Gando et al. (2016); Albert et al. (2014); Martín-Albo et al. (2016); Agostini et al. (2013); Abgrall et al. (2014); Alfonso et al. (2015); Andringa et al. (2016); Blot et al. (2016); Beeman et al. (2015); Iida et al. (2016); Park (2016); Ebert et al. (2016); Dokania et al.; Fukuda (2016) may consider about a dozen different nuclei) of a subtle two-nucleon operator between the ground states of the decaying nucleus and its decay product. Since 0νββ decay involves not only nuclear physics but also unknown neutrino properties, such as the neutrino mass scale, the matrix elements cannot be measured; they must be calculated. And at present they are not calculated with much accuracy. We need to know them better.
Fortunately, nuclear-structure theory has made rapid progress in the last decade and the community is now in a position to improve calculated matrix elements materially. This review describes work that has already been carried out, from early pioneering studies to more recent and sophisticated efforts, and discusses what is needed to do significantly better. We are optimistic that recent progress in the use of chiral effective field theory (χEFT) to understand nuclear interactions Epelbaum et al. (2009); Machleidt and Entem (2011); Hammer et al. (2013); Machleidt and Sammarruca (2016), and of nonperturbative methods to efficiently solve the nuclear many-body problem from first principles (with controlled errors) Carlson et al. (2015); Hebeler et al. (2015a); Hagen et al. (2014); Hergert et al. (2016); Somà et al. (2014); Meißner (2016), will produce reliable matrix elements with quantified uncertainties over the next five or so years. We will outline the ways in which that might happen.
This review is structured as follows: Section II discusses the significance of 0νββ decay and the nuclear matrix elements that govern it. Section III reviews calculations of the matrix elements and indicates where we stand at present. Section IV is a slight detour into a more general problem, the "renormalization of the axial-vector coupling g_A," that has important consequences for 0νββ matrix elements. Section V is about ways in which matrix-element calculations should improve in the next few years, and ways in which the uncertainty in new calculations can be assessed. Section VI is a conclusion.
II Significance of Double-Beta Decay
II.1 Neutrino Masses and Hierarchy
Before turning to nuclear-structure theory, we very briefly review the neutrino physics that makes it necessary. References Avignone III et al. (2008) and Vergados et al. (2012) contain pedagogical reviews of both the neutrino physics and the nuclear matrix elements that are relevant for ββ decay.
Flavor oscillations of neutrinos from the atmosphere Fukuda et al. (1998), from the sun Ahmad et al. (2002), and from nuclear reactors Eguchi et al. (2003) have revealed neutrino properties that were unknown a few decades ago. Neutrinos have mass, but the three kinds of neutrino with well-defined masses are linear combinations of the kinds with definite flavor that interact in weak processes. We know with reasonable accuracy the differences in squared mass among the three mass eigenstates, with the smaller difference Olive et al. (2014) coming mainly from solar-neutrino experiments and the larger difference Olive et al. (2014) coming mainly from atmospheric-neutrino experiments. We also know, with comparable accuracy, the mixing angles that specify which linear combinations of flavor eigenstates have definite mass Gonzalez-Garcia et al. (2014).
The arrangement of the masses, called the "hierarchy," is still unknown, however. There are two possibilities: either the two mass eigenstates that mix most strongly with electron flavor are lighter than the third (the "normal hierarchy," because it is similar to the hierarchy of quark mass eigenstates) or they are heavier (the "inverted hierarchy"). Long-baseline neutrino-oscillation experiments can eventually determine the hierarchy with a confidence level corresponding to four standard deviations or more, but for now they show just a two-standard-deviation preference for the normal hierarchy Abe et al. (2015); Adamson et al. (2016). Figure 1 shows the present experimental 0νββ-decay limits on the combination m_ββ of neutrino masses [defined by Eq. (5) in Sec. II.2.1], together with the regions corresponding to the normal and inverted hierarchies, as a function of the mass of the lightest neutrino. If the hierarchy is normal and the lightest neutrino is lighter than about 10 meV, then a detection of 0νββ decay is out of reach for the coming generation of experiments unless the decay is driven by the exchange of a heavy particle that has not yet been discovered, or by some other new physics (see Sec. II.2.2). If the hierarchy is inverted, the experiments to take place in the next decade have a good chance to see the decay, provided they have enough material. Indeed, Fig. 1 shows that the current experimental limit almost touches the upper part of the inverted-hierarchy region.
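The hierarchy bands in a plot like Fig. 1 follow from the definition m_ββ = |Σ_k U_ek² m_k|, minimized and maximized over the unknown Majorana phases. A minimal sketch, with typical mid-2010s global-fit oscillation parameters used as illustrative placeholders (not the fits cited above):

```python
import numpy as np

# Illustrative oscillation parameters (placeholders, not the quoted fits).
DM2_SOL = 7.5e-5   # eV^2, solar mass-squared difference
DM2_ATM = 2.4e-3   # eV^2, atmospheric mass-squared difference
S12, S13 = np.sqrt(0.31), np.sqrt(0.022)       # sin(theta_12), sin(theta_13)
C12, C13 = np.sqrt(1 - S12**2), np.sqrt(1 - S13**2)

def mbb_range(m_light, inverted=False, n_phase=201):
    """Min/max of |sum_k U_ek^2 m_k| (in eV) over the Majorana phases."""
    if inverted:
        m3 = m_light
        m1 = np.sqrt(m3**2 + DM2_ATM)
        m2 = np.sqrt(m1**2 + DM2_SOL)
    else:
        m1 = m_light
        m2 = np.sqrt(m1**2 + DM2_SOL)
        m3 = np.sqrt(m1**2 + DM2_ATM)
    a = np.linspace(0.0, 2 * np.pi, n_phase)
    alpha, beta = np.meshgrid(a, a)            # scan both unknown phases
    mbb = np.abs(C12**2 * C13**2 * m1
                 + S12**2 * C13**2 * m2 * np.exp(1j * alpha)
                 + S13**2 * m3 * np.exp(1j * beta))
    return mbb.min(), mbb.max()

lo, hi = mbb_range(1e-4, inverted=True)
print(f"inverted hierarchy, m_light ~ 0: {1e3*lo:.0f}-{1e3*hi:.0f} meV")
```

With these inputs the inverted-hierarchy band comes out at roughly 20-50 meV for a nearly massless lightest neutrino, which is the region next-generation experiments aim to cover.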
How much material will be needed to completely cover the inverted-hierarchy region, so that we can conclude in the absence of a signal that either the neutrino hierarchy is normal or neutrinos are Dirac particles? And in the event of a signal, how will we tell whether the exchange of light neutrinos or some other mechanism is responsible? If it is the latter, what is the underlying new physics? To answer any of these questions, we need accurate nuclear matrix elements.
II.2 Neutrinoless Double-Beta Decay
II.2.1 Light-Neutrino Exchange
The beginning of this section closely follows Ref. Avignone III et al. (2008), which itself is informed by Ref. Bilenky et al. (2003). More detailed derivations of the transition rates can be found in Refs. Doi et al. (1985); Haxton and G. J. Stephenson Jr. (1984); Tomoda (1991).
The rate for 0νββ decay, if we assume that it is mediated by the exchange of the three light Majorana neutrinos and the Standard Model weak interaction as represented in Fig. 2, is
(1) 
where e_1, e_2 and k_1, k_2 are the energies and momenta of the two emitted electrons, E_i and E_f are the energies of the initial and final nuclear states, and the remaining factor is an amplitude proportional to an S-matrix element, up to delta functions that enforce energy and momentum conservation. The S matrix depends on the product of leptonic and hadronic currents in the effective low-energy semileptonic Lagrangian density:
(2)  ℒ = (G_F cos θ_C / √2) [ē γ^μ (1 − γ_5) ν_e] J^†_{Lμ} + h.c.,
with J_{Lμ} the left-handed charge-changing hadronic current density. Because the amplitude is second order in the weak-interaction Lagrangian, it contains a lepton part that depends on two spacetime positions x and y, the neutrino fields at which are contracted and ultimately integrated over:
(3)  Σ_k U_ek² ē(x) γ^μ (1 − γ_5) ν_k(x) ē(y) γ^ν (1 − γ_5) ν_k(y)
Here ν_k is the Majorana mass eigenstate with mass m_k, and U_ek is the element of the neutrino mixing matrix that connects electron flavor with mass eigenstate k. We denote the charge conjugate of a field ψ by ψ^c ≡ C ψ̄^T (in the Pauli-Dirac representation C = iγ^2 γ^0), and because the ν_k are Majorana states we can take ν_k^c = ν_k.
The contraction of ν_k(x) with ν_k(y), which the Majorana condition makes possible, turns out to be the usual fermion propagator, so that the lepton part above becomes
(4)  Σ_k U_ek² ∫ [d⁴q / (2π)⁴] e^{−iq·(x−y)} ē(x) γ^μ (1 − γ_5) [(γ·q + m_k) / (q² − m_k²)] (1 − γ_5) γ^ν e^c(y),
where q is the 4-momentum of the virtual neutrino. The term with γ·q vanishes because the two currents are left-handed, and if we neglect the very small neutrino masses in the denominator, the decay amplitude becomes proportional to
(5)  m_ββ ≡ | Σ_k m_k U_ek² | = | m_1 |U_e1|² + m_2 |U_e2|² e^{iα_1} + m_3 |U_e3|² e^{i(α_2 − 2δ)} |
Here δ is the so-called Dirac phase, and α_1 and α_2 are Majorana phases that vanish if neutrinos are Dirac particles. We have inserted the absolute value in Eq. (5) consistently with the amplitude in Eq. (1), because the expression inside can be complex.
To obtain the full amplitude, one must multiply the lepton part above by the nuclear matrix element of two time-ordered hadronic currents and integrate the product over x and y. Because J_L^μ(x) = e^{iHt} J_L^μ(𝐱) e^{−iHt} (here H is the hadronic Hamiltonian and the current on the right-hand side is evaluated at time t = 0), one can write the matrix element of an ordinary product of hadronic currents between initial (i) and final (f) nuclear states as
(6)  ⟨f| J_L^μ(x) J_L^ν(y) |i⟩ = Σ_n ⟨f| J_L^μ(𝐱) |n⟩ ⟨n| J_L^ν(𝐲) |i⟩ e^{i(E_f − E_n) t_x} e^{i(E_n − E_i) t_y},
where the |n⟩'s are a complete set of intermediate nuclear states, with corresponding energies E_n. Time-ordering the product of currents and combining the phases in Eq. (6) with similar factors from the lepton currents yields the following amplitude, after integration first over t_x, t_y, and the virtual-neutrino energy q^0, then over 𝐱 and 𝐲:
(7) 
where the tiny neutrino masses in the denominator of Eq. (4) and the electron energies and momenta have been neglected, because they are much smaller than a typical momentum transfer q. The energy-conservation condition comes from the definition of the amplitude.
To go further one needs to know the nuclear current operators. At this point, most authors make two important approximations. The first is the "impulse approximation," i.e. the use of the current operator for a collection of free nucleons. The operator is then specified by its one-body matrix elements:
(8)  ⟨p(p′)| J_L^μ(0) |n(p)⟩ = ū(p′) [ g_V(q²) γ^μ + i g_M(q²) (σ^{μν}/2m_N) q_ν − g_A(q²) γ^μ γ_5 − g_P(q²) q^μ γ_5 ] u(p),
where q = p′ − p is the momentum transfer and m_N the nucleon mass. The conservation of the vector current tells us that g_V(0) = 1, while g_M(0) ≈ 4.70 (as given by the proton and neutron anomalous magnetic moments Olive et al. (2014)) and g_A(0) ≈ 1.27 Olive et al. (2014) (from neutron decay measurements Mendenhall et al. (2013)). The Goldberger-Treiman relation g_P(q²) = 2 m_N g_A(q²)/(q² + m_π²), with m_N and m_π the nucleon and pion masses, connects the pseudoscalar and axial terms and is accurate enough for our purposes. The momentum-transfer dependence of the axial and vector terms can be parameterized in several ways by fitting experimental data Dumbrajs et al. (1983); Bernard et al. (2002). A nonrelativistic reduction of the matrix elements in Eq. (8) leads to an operator that acts on the space and spin variables of the nucleon, multiplied by the isospin-raising operator τ⁺ that makes the nucleon a proton if it is initially a neutron.
The second approximation, known as closure, begins with the observation that to contribute significantly to the amplitude, the momentum transfer q must be on the order of an average inverse spacing between nucleons, about 100 MeV. The closure approximation is to neglect the intermediate-state-dependent quantity E_n − (E_i + E_f)/2 (which is generally small compared to q) in the denominator of Eq. (7), so that it can be replaced by a state-independent average value Ē and the contributions of intermediate states can be summed implicitly. This approximation avoids the explicit calculation of excited states of the intermediate odd-odd nucleus up to high energies, a nuclear-structure calculation that is computationally much more involved than obtaining the initial and final states of the decay. Because the momentum transfer in 2νββ decay (limited by the Q value of the transition) is of the same order of magnitude as E_n − (E_i + E_f)/2, the closure approximation cannot be used there. For that reason, some methods that focus on the low-lying states of even-even nuclei can be applied to 0νββ decay but not to 2νββ decay. Approaches that do allow an evaluation of the contributions of each intermediate state suggest that a sensible choice of Ē allows the closure approximation to reproduce the unapproximated matrix element to within 10% Muto (1994); Šimkovic et al. (2011); Sen'kov and Horoi (2013); Sen'kov et al. (2014); Sen'kov and Horoi (2014). It is worth noting, however, that tests of the closure approximation have not included states above a few tens of MeV. Since higher-energy/shorter-range dynamics could be important, future closure tests should include them.
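The reason closure works so well is simply that q is much larger than the spread of the intermediate-state energies, so the state-dependent denominators barely vary. A toy numerical check, with a handful of invented intermediate states (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy intermediate states: excitation energies mu_n = E_n - (E_i + E_f)/2
# of a few MeV, with random transition strengths c_n (all illustrative).
mu = rng.uniform(2.0, 12.0, size=12)     # MeV
c = rng.uniform(0.0, 1.0, size=12)       # strengths
q = 100.0                                # MeV, typical virtual-neutrino momentum

exact = np.sum(c / (q + mu))             # state-by-state denominators
for E_bar in (5.0, 7.0, 9.0):            # candidate closure energies
    closure = np.sum(c) / (q + E_bar)    # single average denominator
    print(f"E_bar = {E_bar:4.1f} MeV: closure/exact = {closure/exact:.3f}")
```

For any Ē within the range of the toy excitation energies, the closure result agrees with the explicit sum to a few percent, consistent with the 10% figure quoted above; the agreement would degrade if states with much larger mu contributed.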
Assuming the closure approximation is accurate, and neglecting terms associated with the emission of P-wave electrons (which are expected to be a few percent of those associated with S-wave electrons) as well as the small electron energies in the denominator of Eq. (7), one has the expression
(9)  [T^{0ν}_{1/2}]⁻¹ = G^{0ν}(Q, Z) |M^{0ν}|² (m_ββ / m_e)²,
where m_e is the electron mass, Z is the proton number, and G^{0ν}(Q, Z) comes from the phase-space integral and has recently been reevaluated with improved precision Kotila and Iachello (2012); Stoica and Mirea (2013). The "nuclear matrix element" M^{0ν} Šimkovic et al. (1999); Rodin et al. (2006); Horoi and Stoica (2010) is given by
(10)  M^{0ν} = M^{0ν}_GT − (g_V² / g_A²) M^{0ν}_F + M^{0ν}_T,
with, in the approximations mentioned above,
(11)  M^{0ν}_GT = ⟨f| Σ_{a,b} H_GT(r_ab) σ_a · σ_b τ⁺_a τ⁺_b |i⟩
      M^{0ν}_F = ⟨f| Σ_{a,b} H_F(r_ab) τ⁺_a τ⁺_b |i⟩
      M^{0ν}_T = ⟨f| Σ_{a,b} H_T(r_ab) [ 3(σ_a · r̂_ab)(σ_b · r̂_ab) − σ_a · σ_b ] τ⁺_a τ⁺_b |i⟩
Here the nucleon coordinates r_a are all operators that, like the spin and isospin operators, act on nuclear states. The nuclear radius, R, is inserted by convention to make the matrix element dimensionless, with a compensating factor in G^{0ν} in Eq. (9). The quantity r_ab ≡ |r_a − r_b| is the magnitude of the inter-nucleon position vector, and r̂_ab is the corresponding unit vector. The objects j_0 and j_2 denote spherical Bessel functions, and the H's, called neutrino potentials, are defined in momentum space by
(12)  H_{GT,F,T}(r) = (2R/π) ∫₀^∞ dq [ q h_{GT,F,T}(q²) j_{0,0,2}(qr) / ( q + Ē − (E_i + E_f)/2 ) ],
where terms of higher order in 1/m_N, coming from the nonrelativistic expansion of Eq. (8), have been neglected.
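The neutrino potentials just described are easy to evaluate numerically. A minimal sketch for the leading Gamow-Teller potential, with assumed illustrative inputs: a simple dipole cutoff standing in for the form-factor dependence (cutoff 850 MeV), a closure energy Ē = 10 MeV, and R ≈ 5.1 fm for A ≈ 76 (all of these numbers are placeholders, not the parameterizations cited above):

```python
import numpy as np

HBARC = 197.327  # MeV fm

def j0(x):
    """Spherical Bessel function j0(x) = sin(x)/x, safe at x = 0."""
    return np.where(x < 1e-8, 1.0, np.sin(x) / np.maximum(x, 1e-8))

def H_GT(r_fm, R_fm=5.1, E_bar=10.0, Lambda=850.0, qmax=3000.0, n=60000):
    """H(r) = (2R/pi) * Integral dq q j0(qr) h(q^2) / (q + E_bar),
    with h(q^2) an assumed dipole cutoff; returns a dimensionless number."""
    q = np.linspace(1e-6, qmax, n)                 # MeV
    h = 1.0 / (1.0 + (q / Lambda)**2)**4           # assumed dipole form factor
    integrand = q * j0(q * r_fm / HBARC) * h / (q + E_bar)
    dq = q[1] - q[0]
    return (2.0 * R_fm / np.pi) * np.sum(integrand) * dq / HBARC

for r in (1.0, 2.0, 4.0):
    print(f"H_GT(r = {r} fm) = {H_GT(r):.3f}")
```

The potential falls off roughly like 1/r, as expected for the exchange of an essentially massless neutrino, and the high-momentum cutoff mainly affects the shortest distances.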
Sometimes the operators inside the matrix elements of Eq. (11) are multiplied by a radial function f(r_ab), designed to take into account short-range correlations that are omitted by Hilbert-space truncation in most many-body calculations. Several parameterizations of f have been proposed; they are based either on a Jastrow ansatz Miller and Spencer (1976), on the unitary correlator operator method Roth et al. (2005), on Brueckner-Goldstone calculations Brueckner (1955); Müther and Polls (2000), or on nuclear-matter correlation functions Benhar et al. (2014). Even though the prescriptions differ from one another, those that preserve isospin symmetry (the Jastrow ansatz does not Engel et al. (2011)) have small effects on matrix elements when the momentum dependence of the transition operator is taken fully into account Kortelainen et al. (2007); Šimkovic et al. (2009).
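A Jastrow-type correlation function of the Miller-Spencer form is a one-liner. The sketch below uses the commonly quoted parameter values a = 1.1 fm⁻², b = 0.68 fm⁻², c = 1; treat them as representative rather than definitive:

```python
import numpy as np

# Miller-Spencer-type Jastrow short-range correlation function,
#   f(r) = 1 - c * exp(-a r^2) * (1 - b r^2),
# with commonly quoted parameters (representative values, an assumption here).
def f_jastrow(r_fm, a=1.1, b=0.68, c=1.0):
    return 1.0 - c * np.exp(-a * r_fm**2) * (1.0 - b * r_fm**2)

for r in (0.0, 0.5, 1.0, 2.0):
    print(f"f({r} fm) = {f_jastrow(r):.3f}")
```

The function vanishes at zero separation, suppressing the unrealistic overlap of pointlike nucleons, and returns to unity (with a slight overshoot) beyond about 1.5 fm, so only the short-range part of the transition operator is modified.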
For completeness, we write down the rate for 2νββ decay, which is permitted in the Standard Model and therefore does not depend on neutrino mass or charge-conjugation properties. This process is sketched in Fig. 3, and its rate can be derived in a similar way as for 0νββ decay, with the result that
(13)  [T^{2ν}_{1/2}]⁻¹ = G^{2ν}(Q, Z) |M^{2ν}|²,
where G^{2ν}(Q, Z) is the corresponding phase-space factor (also calculated to high precision in Refs. Kotila and Iachello (2012); Stoica and Mirea (2013)) and
(14)  M^{2ν} = M^{2ν}_GT − (g_V² / g_A²) M^{2ν}_F,
      M^{2ν}_GT = Σ_n ⟨f‖ Σ_a σ_a τ⁺_a ‖n⟩ ⟨n‖ Σ_b σ_b τ⁺_b ‖i⟩ / ( E_n − (E_i + E_f)/2 ),
with the analogous expression, with the σ's removed, for M^{2ν}_F.
Isospin symmetry forces all the Fermi strength to lie in the isobar analog state in the daughter nucleus, so that M^{2ν}_F for the transition to the daughter ground state is negligibly small.
II.2.2 New-Physics Mechanisms
It is not just light-neutrino exchange that can contribute to 0νββ decay. Although the occurrence of 0νββ decay immediately implies that neutrinos are Majorana particles Schechter and Valle (1982) (once a lepton-number-violating operator appears in the Lagrangian, all possible effective operators violating the symmetry are generated), some other lepton-number-violating mechanism could be the dominant cause of the decay Rodejohann (2011); Deppisch et al. (2012). If that were the case, the detection of 0νββ decay would not give us information about the absolute neutrino mass, but could be used as a low-energy test of new physics that would complement high-energy searches at accelerators such as the Large Hadron Collider de Gouvea and Vogel (2013); Helo et al. (2013); Päs and Rodejohann (2015); Peng et al. (2016).
Several mechanisms for 0νββ decay have been proposed. Besides the exchange of sterile neutrinos via left-handed currents Blennow et al. (2010), the most popular are the exchange of light or heavy neutrinos in left-right symmetric models Mohapatra and Senjanovic (1980); Mohapatra and Vergados (1981), the exchange of supersymmetric particles Mohapatra (1986); Vergados (1987), and the emission of Majorons (bosons that appear in theories with spontaneous breaking of the global baryon-lepton number symmetry Chikashige et al. (1981); Gelmini and Roncadelli (1981); Georgi et al. (1981)). Combining the contributions from all possible mechanisms, one finds that the decay rate takes the form
(15) 
with new-physics parameters that are distinct for every mechanism and mode: a combination of the light neutrino masses (m_ββ) for the usual mechanism, and combinations of heavy-neutrino masses, the right-handed W-boson mass, the left- and right-handed W-boson mixing angles, supersymmetric couplings, couplings of Majorons to neutrinos, etc. for nonstandard mechanisms. It is these parameters that 0νββ-decay experiments can constrain, provided that the associated nuclear matrix elements are known. (Updated phase-space-factor calculations can be found in Ref. Štefánik et al. (2015).) Detailed treatments of the matrix elements governing these new-physics decay modes appear in Refs. Vergados et al. (2012); Doi et al. (1985); Haxton and G. J. Stephenson Jr. (1984); Tomoda (1991). Matrix-element calculations for sterile-neutrino exchange Blennow et al. (2010); Faessler et al. (2014); Barea et al. (2015a); Hyvärinen and Suhonen (2015); Horoi and Neacsu (2016a), left-right symmetric models Retamosa et al. (1995); Caurier et al. (1996); Horoi and Neacsu (2016b); Štefánik et al. (2015), and the exchange of supersymmetric particles Hirsch et al. (1995a, 1996); Horoi (2013); Meroni et al. (2013) are common in the literature.
Most of the new-physics mechanisms involve the exchange of heavy particles. However, the direct exchange between nucleons, represented by the contact operator in the bottom diagram of Fig. 4 in the heavy-particle limit, occurs less often in most models than exchange between pions or between a pion and a nucleon, shown in the top and middle diagrams of the figure. In chiral EFT each pion propagator carries a factor 1/(p² + m_π²), where p is a typical momentum; the expansion is controlled by the chiral-symmetry breaking scale Λ_χ ≈ 1 GeV, at which the effective theory breaks down. Each ordinary two-nucleon-pion vertex comes with a derivative, which results in a factor of p/Λ_χ or m_π/Λ_χ. Because the contact interaction has no derivatives in most models, pion mediation enhances the amplitude Prézeau et al. (2003). The two-pion mode at the top of the figure is thus generally the dominant one. The one-pion graph in the middle is nominally smaller by a factor of p²/Λ_χ², and the four-nucleon graph at the bottom is smaller by another factor of the same quantity. The leading one-pion-exchange contribution to 0νββ decay is forbidden by parity symmetry, however, and so the middle graph ends up contributing at the same order as the contact term Prézeau et al. (2003). The counting is different for nuclear forces, where the contact and one-pion-exchange interactions both appear at leading order Epelbaum et al. (2009); Machleidt and Entem (2011). The usual one-pion-exchange interaction diagram contains a derivative at each vertex; the derivatives counteract the pion propagator, placing the diagram at the same chiral order as the four-nucleon contact diagram. Two-pion exchange occurs at higher order. Computations of matrix elements in supersymmetric models, even when they do not rely explicitly on chiral EFT, support the statement that pion-exchange modes are the most important Faessler et al. (1997, 1998, 2008a).
The EFT counting should be confirmed by explicit calculations, as additional suppression or enhancement may occur Savage (1999). Lattice-QCD studies, which treat the underlying quark and gluon degrees of freedom explicitly, are underway Nicholson et al., and will provide accurate input for the effective-field-theory treatment of these decay modes.
The four-nucleon contact vertex represented at the bottom of Fig. 4 is further affected by short-range physics. In the light-neutrino-exchange decay mode, typical internucleon distances are of the order of a few femtometers. The exchange of heavy particles Prézeau et al. (2003), with masses far above the nuclear scale, requires nucleons to be closer to each other and will thus be suppressed. Pions have a mass of about 138 MeV, corresponding to a range of about 1.4 fm, a distance comparable to the average internucleon spacing, and so the graphs with pions propagating between nucleons will not be suppressed. This behavior is apparent in the potentials associated with the three modes of heavy-particle exchange. In momentum space, they have the form
(16)  
The first of these is clearly more strongly affected at high momentum transfer than the two pion-exchange modes¹. Additional powers of the momentum transfer can be present in subleading contributions to each diagram. [¹ The induced pseudoscalar term discussed in Sec. II.2.1 also involves pion exchange, in combination with the usual exchange of a light neutrino. There the pion brings no enhancement because the light-neutrino exchange is already long range.]
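The range argument above can be made concrete: the reach of an exchanged particle is set by its Compton wavelength ħc/mc², with ħc ≈ 197.3 MeV·fm. A quick check:

```python
HBARC = 197.327  # MeV fm

def range_fm(mass_MeV):
    """Compton wavelength hbar*c / (m c^2): the range of an exchanged particle."""
    return HBARC / mass_MeV

print(f"pion  (m ~ 138 MeV): {range_fm(138.0):.2f} fm")
print(f"heavy (M ~ 1 TeV):   {range_fm(1.0e6):.1e} fm")
```

The pion range of about 1.4 fm matches the average internucleon spacing, while a TeV-scale particle reaches only about 2 × 10⁻⁴ fm, which is why its direct exchange between nucleons is so strongly suppressed.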
The effects of short-range physics must be treated with care when working in chiral EFT, from which that physics has effectively been integrated out, or in many-body approaches with severe truncations. The contact coupling constant must then be renormalized from its naive value by an amount that can be computed with the similarity renormalization group (SRG) Bogner et al. (2010); Furnstahl and Hebeler (2013) or related schemes. Until such methods are perfected (we discuss progress in Sec. V), the strength of the contact term will carry some uncertainty. No such uncertainty plagues the strong contact interaction in the chiral-EFT Hamiltonian, the parameters of which are fit directly to data. It is fortunate that the contact operators are less important than those involving pion exchange.
Within specific models, the exchange of heavy particles with masses of the order of one TeV can compete with light-neutrino exchange Prézeau (2006); Cirigliano et al. (2004). Once 0νββ decay has been observed, therefore, one must cope with the question of its cause. The easiest process to distinguish from light-neutrino exchange is decay with the emission of a Majoron; the additional emitted particle causes the energy spectrum of the electron pairs to be spread out rather than concentrated at a single energy. More challenging for experiments is exploiting the angular correlation between the two electrons, which is different if one of them is emitted in a P wave. P-wave emission is suppressed in light-neutrino exchange (where S waves are always more likely), but turns out to be important in models with right-handed lepton currents Doi et al. (1985); Horoi and Neacsu (2016b), where the parity-odd γ·q term in Eq. (4) contributes to the amplitude instead of the term containing the neutrino masses. Some upcoming 0νββ-decay experiments should be able to measure angular distributions, given enough events Arnold et al. (2010); Martín-Albo et al. (2016).
Unfortunately, other kinds of new physics can produce the same angular electron correlations as does light-neutrino exchange Ali et al. (2007). In those cases, however, one might be able to determine the physics responsible for the decay by combining measurements in different isotopes (such measurements will be required in any event to confirm detection). The matrix elements governing heavy-particle-mediated and light-neutrino-mediated decay can depend differently on the nuclear species in which the decay occurs Deppisch and Päs (2007); Gehman and Elliott (2007); Šimkovic et al. (2010); Meroni et al. (2013); Horoi and Neacsu (2016b). The task will be difficult, however, if the dependence is similar, as Refs. Faessler et al. (2011); Lisi et al. (2015) suggest. Another possibility is to compare the decay rates to the ground state and the first excited state of the daughter nucleus; the ratio of these quantities can also depend on the decay mode Šimkovic et al. (2001). Unfortunately, it is difficult to observe the decay to an excited state, which is suppressed by the small Q value associated with the transition Kotila and Iachello (2012). The suppression is too great to be compensated for by differences in nuclear matrix elements Šimkovic et al. (2001); Menéndez et al. (2009a); Barea et al. (2015b); Horoi and Neacsu (2016a); Hyvärinen and Suhonen (2016).
II.3 Importance of Nuclear Matrix Elements for Experiments
The next generation of 0νββ-decay experiments will aim to fully cover the inverted-hierarchy region of m_ββ in Fig. 1, so that a signal will be detected if neutrinos are Majorana particles and the mass hierarchy is inverted. As Eq. (9) shows, the decay lifetime depends on the square of the nuclear matrix element that we need to calculate. This sensitivity makes matrix-element calculations important in a number of ways.
First, if an experiment is completely background-free, the amount of material needed to be sensitive to any particular neutrino mass in a given time is proportional to the lifetime and thus, from Eq. (9), to the inverse square of the matrix element. An uncertainty of a factor of three in the matrix element thus corresponds to nearly an order of magnitude uncertainty in the amount of material required, e.g., to cover the parameter space corresponding to the inverted hierarchy. If the experiment is background-limited, the uncertainty is even larger Gomez-Cadenas et al. (2011). An informed decision about how much material to use in an expensive experiment will require a more accurate matrix element.
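The scaling in this paragraph is simple enough to state as a few lines of code (the matrix-element values below are hypothetical, chosen only to show the factor-of-three spread):

```python
# Background-free counting experiment: the exposure needed for a fixed
# m_bb sensitivity scales as the half-life, i.e. as 1/|M|^2 (cf. Eq. (9)).
def exposure_ratio(M_low, M_high):
    """How much more material the smaller matrix element would require."""
    return (M_high / M_low) ** 2

print(exposure_ratio(2.0, 6.0))  # a factor-of-three spread in M
```

A factor-of-three spread in M translates into a factor of nine in the required exposure, which for a ton-scale experiment is the difference between feasible and prohibitive.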
Second, the uncertainty affects the choice of material to be used in 0νββ-decay searches, a choice that is a compromise between experimental advantages and the matrix-element value. Figure 5 (top) shows nuclear matrix elements calculated in different approaches; because of the spread of the results (roughly the factor of three above), we can conclude only that the matrix element of ⁴⁸Ca is smaller than those of the other 0νββ-decay candidates. The expected rates, products of the nuclear matrix elements and phase-space factors, are even more similar across candidate isotopes (see Fig. 5 bottom, and Eq. (9)) Robertson (2013). Better calculations would make it easier to select an optimal isotope.
Finally, and perhaps most obviously, we need matrix elements to obtain information about the absolute neutrino masses once a 0νββ-decay lifetime is known. Reducing the uncertainty in the matrix-element calculations will be crucial if we wish to fully exploit an eventual measurement of the decay half-life. Even the interpretation of limits is hindered by matrix-element uncertainty. The blue band in Figure 1 represents the upper limit on m_ββ from the KamLAND-Zen experiment Gando et al. (2016). The uncertainty, again a factor of about three, is due almost entirely to the matrix element. And the real theoretical uncertainty, at this point, must be taken to be larger; the "g_A problem," which we discuss in Sec. IV, has been ignored in this analysis. We really need better calculations. Fortunately, we are now finally in a position to undertake them.
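The interplay between half-life, phase space, and matrix element can be sketched numerically with the standard rate formula of Eq. (9). In the sketch below the phase-space factor is of the order computed for ¹³⁶Xe, and the matrix-element values simply span the factor-of-two-to-three spread discussed above; all inputs are illustrative placeholders:

```python
# Half-life from the rate formula T_1/2 = [G0nu |M0nu|^2 (m_bb/m_e)^2]^(-1).
M_E = 5.11e8           # electron mass in meV (0.511 MeV)

def half_life_yr(G0nu_per_yr, M0nu, m_bb_meV):
    return 1.0 / (G0nu_per_yr * M0nu**2 * (m_bb_meV / M_E) ** 2)

G0nu = 1.5e-14         # yr^-1, order of magnitude for 136Xe (illustrative)
for M in (2.0, 4.0):   # factor-of-two spread in the matrix element
    print(f"M0nu = {M}: T_1/2 ~ {half_life_yr(G0nu, M, 50.0):.1e} yr")
```

For m_ββ near the top of the inverted-hierarchy band, the predicted half-lives come out in the 10²⁶-10²⁷ yr range, and a factor-of-two change in M shifts them by a factor of four, which is exactly why the matrix-element uncertainty dominates the interpretation of limits.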
III Nuclear Matrix Elements at Present
As we have noted, calculated matrix elements at present carry large uncertainties. Matrix elements obtained with different nuclear-structure approaches differ by factors of two or three. Figure 5 compares matrix elements produced by the shell model Menéndez et al. (2009b); Iwata et al. (2016); Horoi and Neacsu (2016a), different variants of the quasiparticle random phase approximation (QRPA) Šimkovic et al. (2013); Fang et al. (2015); Hyvärinen and Suhonen (2015); Mustonen and Engel (2013), the interacting boson model (IBM) Barea et al. (2015b), and energy-density-functional (EDF) theory Yao et al. (2015); Yao and Engel (2016); Vaquero et al. (2013). The strengths and weaknesses of each calculation are discussed in detail later in this section.
Some of these methods can be used to compute single-β and 2νββ decay lifetimes. It is disconcerting to find that predicted lifetimes for these processes are almost always shorter than measured lifetimes, i.e. computed single-β Gamow-Teller and 2νββ matrix elements are too large Caurier et al. (2005); Barea et al. (2013); Pirinen and Suhonen (2015). The problem is usually "cured" by reducing the strength of the spin-isospin Gamow-Teller operator στ, which is equivalent to using an effective value of the axial coupling constant that multiplies this operator in place of its "bare" value of g_A ≈ 1.27. This phenomenological modification is sometimes referred to as the "quenching" or "renormalization" of g_A. In Sec. IV we review possible sources of the renormalization, none of which has yet been shown to fully explain the effect, and their consequences for 0νββ matrix elements.
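Because the Gamow-Teller matrix element carries a factor g_A² and the rate its square, a quenching factor q applied to στ suppresses the calculated 2νββ rate by q⁴. For 0νββ decay, whether the same naive scaling applies is precisely the open question of Sec. IV; the sketch below shows only the naive factor:

```python
# Naive effect of quenching g_A -> q * g_A on a calculated decay rate.
# The 2nubb matrix element scales as g_A^2, so the rate scales as q^4;
# applying the same factor to 0nubb is an assumption, not established physics.
G_A_BARE = 1.27

def rate_suppression(quenching):
    """Factor by which the decay rate shrinks under g_A -> q * g_A."""
    return quenching ** 4

for q in (0.8, 0.7):
    print(f"q = {q}: rate x {rate_suppression(q):.2f}")
```

Even a modest 20%-30% quenching cuts the predicted rate by a factor of two to four, so resolving the g_A problem is essential before matrix elements can be trusted quantitatively.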
III.1 Shell Model
The nuclear shell model is a well-established many-body method, routinely used to describe the properties of medium-mass and heavy nuclei Caurier et al. (2005); Brown (2001); Otsuka et al. (2001), including candidates for 0νββ-decay experiments. The model, also called the "configuration interaction method" (particularly in quantum chemistry Čársky (1998); Sherrill and Schaefer (1999)), is based on the idea that the nucleons near the Fermi level are the most important for low-energy nuclear properties, and that all the correlations between these nucleons are relevant. Thus, instead of solving the Schrödinger equation for the full nuclear interaction in the complete many-body Hilbert space, one restricts the dynamics to a limited configuration space (sometimes called the valence space) containing only a subset of the system's nucleons. In the configuration space one uses an effective nuclear interaction H_eff, defined (ideally) so that the observables of the full-space calculation are reproduced, e.g.
(17)  H |Ψ_i⟩ = E_i |Ψ_i⟩  ⟶  H_eff |Φ_i⟩ = E_i |Φ_i⟩.
The states |Ψ_i⟩ and |Φ_i⟩ are defined in the full space and the configuration space, respectively, and have associated energies E_i.
The configuration space usually comprises only a relatively small number of "active" nucleons outside a core of nucleons that are frozen in the lowest-energy orbitals and not included in the calculation. The active nucleons can occupy only a limited set of single-particle levels around the Fermi surface. Many-body states are linear combinations of orthogonal Slater determinants (usually from a harmonic-oscillator basis) for nucleons in those single-particle states,
(18)  |Φ_i⟩ = Σ_α c^i_α |φ_α⟩,
with the coefficients c^i_α determined by exact diagonalization of H_eff.
The shell model describes ground-state nuclear properties such as masses, separation energies, and charge radii quite well. It also does a good job with low-lying excitation spectra and with electric moments and transitions Caurier et al. (2005); Brown (2001); Otsuka et al. (2001) if appropriate effective charges are used Dufour and Zuker (1996). The wide variety of successes over a broad range of isotopes reflects the shell model's ability to capture both the excitation of a single particle from an orbital below the Fermi surface to one above, in the spirit of the original naive shell model Mayer (1949); Haxel et al. (1949), and the collective correlations that come from the coherent motion of many nucleons in the configuration space. The exact diagonalization of H_eff means that the shell-model states contain all correlations (isovector and isoscalar pairing, quadrupole collectivity, etc.) that can be induced by H_eff.
This careful treatment of correlations, on the other hand, restricts the reach of the shell model to relatively small configuration spaces, at present those for which the Hilbert-space dimension is less than about 10^11 Shimizu et al. (2016); Nowacki et al. (2016). For this reason most shell-model calculations of 0νββ decay have been performed in a single harmonic-oscillator shell, consisting of four or five single-particle orbitals, not counting degeneracies from rotational invariance, for both protons and neutrons Caurier et al. (2008a); Menéndez et al. (2009b, a); Horoi and Stoica (2010); Sen’kov and Horoi (2013); Sen’kov et al. (2014); Sen’kov and Horoi (2014); Neacsu and Horoi (2015); Sen’kov and Horoi (2016); Horoi and Neacsu (2016a). Enlarging the configuration space increases the number of active nucleons, leading to a Hilbert-space dimension that grows combinatorially with the number of active-nucleon configurations in the valence space, quickly making the calculation intractable. (Very recently, however, the authors of Ref. Iwata et al. (2016) performed the first calculation in two oscillator shells; we discuss it in detail in Sec. III.6.)
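The combinatorial growth is easy to quantify. A crude upper bound on the basis size is the number of ways to distribute the valence protons and neutrons over the available single-particle states; the orbital and nucleon counts below are hypothetical round numbers, and real M-scheme dimensions are smaller after symmetry restrictions but grow the same way.

```python
# Rough upper bound on the many-body basis size: choose n_p valence protons
# among d_p proton single-particle states and n_n neutrons among d_n
# neutron states.  All counts below are illustrative, not a real nucleus.
from math import comb

def basis_dim(d_p, n_p, d_n, n_n):
    return comb(d_p, n_p) * comb(d_n, n_n)

# one major shell: ~20 single-particle states each for protons and neutrons
print(basis_dim(20, 4, 20, 4))    # a few mid-shell nucleons
# two major shells: roughly double the states, same nucleon numbers
print(basis_dim(40, 4, 40, 4))    # dimension grows by orders of magnitude
```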
Pairing correlations, which are central in nuclear structure and have large effects on ββ decay, may not be fully captured by calculations within a single oscillator shell Vogel (2012). In addition, because the one-body spin-orbit interaction significantly lowers the energy of orbitals with spin parallel to the orbital angular momentum in heavy nuclei, spin-orbit partners are split by several MeV or more, and the shell-model configuration space contains only one member of some spin-orbit pair (except in ⁴⁸Ca). The omission may have important consequences because the spin-isospin part of the 0νββ decay operator in Eq. (11), στ, strongly connects spin-orbit partners. The omission also affects single-β decay.
Despite these limitations, the shell model reproduces experimental single-β decay rates well if one quenches the strength of the spin-isospin operator στ from its bare value Chou et al. (1993); Wildenthal et al. (1983); Martínez-Pinedo et al. (1996). (Of course, that is a big “if,” which we address in detail in Sec. IV.) The quenching needed to agree with data is about 20%–30%, and this small range is enough to describe Gamow-Teller transitions in nuclei with valence nucleons in the p shell (a configuration space comprising the 0p3/2 and 0p1/2 single-particle orbitals, representing the valence shell for A ≲ 16), the sd shell (the 0d5/2, 1s1/2, and 0d3/2 orbitals, 16 ≲ A ≲ 40), the pf shell (the 0f7/2, 1p3/2, 1p1/2, and 0f5/2 orbitals, 40 ≲ A ≲ 56), and the space spanned by the 1p3/2, 0f5/2, 1p1/2, and 0g9/2 orbitals, for use near A ≈ 76. With roughly the same quenching the model also reproduces the 2νββ decay rate of the only ββ emitter in this mass region, ⁴⁸Ca. One shell-model success, in fact, was the accurate calculation of the 2νββ decay half-life of ⁴⁸Ca Caurier et al. (1990); Poves et al. (1995) before it was measured Balysh et al. (1996). (The NEMO-3 collaboration has since reported a longer half-life than was measured previously, deviating by about two σ from the shell-model prediction Arnold et al. (2016); if this new value is confirmed, the shell model would overestimate the ⁴⁸Ca matrix element.) The shell model also reproduces other 2νββ decay half-lives provided it uses a quenching factor appropriate for the configuration space and interaction Caurier et al. (2012); Horoi and Brown (2013); Horoi and Neacsu (2016a); Sen’kov et al. (2014); Sen’kov and Horoi (2014); Neacsu and Horoi (2015). It requires a similar quenching to reproduce magnetic moments and transitions Towner (1997); Brown and Wildenthal (1987); von Neumann-Cosel et al. (1998), which, like β decay, involve the spin operator.
The reliability of the shell model depends on the effective interaction as well as the configuration space. The starting point for H_eff is usually the bare nucleon-nucleon interaction, which is fit to two-nucleon scattering data (and discussed in a bit more detail in Sec. V.3). Then, many-body perturbation theory is used to obtain a configuration-space-only Hamiltonian that takes the effects of the inert core and neglected single-particle orbitals into account Hjorth-Jensen et al. (1995). Finally, phenomenological corrections, mainly in the monopole part of the interaction (the part responsible for changes in single-particle-like excitations with increasing mass number), are usually made to improve the agreement with experimental data Caurier et al. (2005); Otsuka et al. (2001); Brown (2001). This last step in the construction of H_eff severely restricts our ability to quantify the theoretical uncertainty associated with the matrix elements. The usual fallback — comparing the results of calculations that use equally reasonable effective Hamiltonians — leads to variations in the matrix elements Menéndez et al. (2009a); Horoi and Stoica (2010); Neacsu and Horoi (2015).
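The defining property of Eq. (17), a Hamiltonian acting in a small space that nevertheless reproduces part of the full spectrum, can be illustrated directly. The sketch below builds an effective Hamiltonian by projecting exact eigenvectors of a random "full-space" Hamiltonian, a Bloch-Brandow-style construction; this is not the many-body perturbation theory used in real shell-model work, but it makes the decoupling idea explicit. All matrices are invented.

```python
# Toy effective Hamiltonian: H_eff acts only on the first d components yet
# reproduces the d lowest eigenvalues of the full H exactly.  This projects
# exact eigenvectors (Bloch-Brandow style); real calculations instead use
# many-body perturbation theory.
import numpy as np

rng = np.random.default_rng(0)
N, d = 8, 3                              # full-space and model-space sizes
A = rng.normal(size=(N, N))
Hfull = (A + A.T) / 2                    # random symmetric "full" Hamiltonian

E, psi = np.linalg.eigh(Hfull)
P = psi[:d, :d]                          # model-space parts of the d lowest states
H_eff = P @ np.diag(E[:d]) @ np.linalg.inv(P)    # generally non-Hermitian

E_eff = np.linalg.eigvals(H_eff)
print(np.sort(E_eff.real))               # reproduces E[:d] by construction
```

The non-Hermiticity of the resulting H_eff is a real feature of such constructions, one reason practical effective interactions require further manipulation.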
iii.2 The QRPA and Some of Its Variants
The other major thrust over the past 30 years has been the development of the charge-changing quasiparticle random phase approximation (QRPA). This approach is a generalization of the ordinary random phase approximation (RPA), which has a long history both in nuclear physics and elsewhere Bohm and Pines (1951); Rowe (1968). The method can be derived in a number of ways. One is through an approach we take up in more detail later, the generator coordinate method (GCM). Suppose one has solved the Hartree-Fock equations, yielding a set of occupied orbitals φ_i that when put together form the best possible Slater determinant |Φ₀⟩, and has then constructed another set of “nearby” non-orthogonal Slater determinants |Φ_z⟩, each with occupied orbitals of the form φ_i^z:
φ_i^z = φ_i + Σ_m z_{mi} φ_m,    (19)
where the sum runs over unoccupied orbitals and M is the number of orbitals included in the calculation. The RPA ground state is the (continuous) superposition of the |Φ_z⟩ that minimizes the energy in the limit that the z_{mi} are small. In that limit, the minimization process becomes equivalent to the solution of the Schrödinger equation for a multidimensional harmonic oscillator, in which the z_{mi} play the role of creation operators Jancovici and Schiff (1964). Along with the ground state one finds a set of “one-phonon” normal modes, one-particle one-hole excitations that are the only states that can be connected to the ground state by a one-body operator. The transition strengths to those states automatically satisfy energy-weighted sum rules.
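The small-amplitude limit just described leads to the standard RPA eigenvalue problem, whose block structure can be checked numerically. The A and B matrices below are invented 2×2 examples (particle-hole energies plus interaction, and the coupling that builds ground-state correlations), not derived from any real force; the point is only that the phonon frequencies come in ± pairs.

```python
# Toy RPA eigenvalue problem: the (real) stability matrix has the block form
# [[A, B], [-B, -A]], and its eigenvalues, the phonon frequencies, come in
# +/- pairs.  A and B are invented, not derived from an interaction.
import numpy as np

A = np.array([[2.0, 0.3], [0.3, 2.5]])   # particle-hole energies + interaction
B = np.array([[0.2, 0.1], [0.1, 0.2]])   # ground-state correlation coupling

M = np.block([[A, B], [-B, -A]])         # RPA stability matrix
omega = np.sort(np.linalg.eigvals(M).real)
print(omega)                             # symmetric about zero
```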
For application to single-β and ββ decay, the RPA must be modified in two ways. First, it must be “charge-changing,” that is, if φ_i in Eq. (19) is a neutron orbital then at least some of the φ_m must be proton orbitals. That condition guarantees that the one-phonon excited states have components with one more proton and one fewer neutron than the ground state. Second, it must include the physics of pairing, which one can add by replacing the Hartree-Fock state with the BCS or Hartree-Fock-Bogoliubov (HFB) quasiparticle vacuum, and the nearby Slater determinants with nearby quasiparticle vacua Ring and Schuck (1980). The result is the charge-changing (or proton-neutron) QRPA.
In single-β decay, QRPA excitations of the initial state produce the final states. In ββ decay, they produce the intermediate states in Eqs. (II.2.1) and (14), and one must carry out two separate QRPA calculations, one based on the initial nucleus as the BCS vacuum, and a second based on the final nucleus. (The closure approximation is not required.) In this way one obtains two different sets of states in the intermediate nucleus, one of which must be expressed in terms of the other. The overlaps that are needed to do that require approximations beyond those in the QRPA. We will return to this subject in Sec. V.2.3.
The main advantage of the QRPA in comparison with the shell model is the number of single-particle orbitals that can be included in the calculation. As we mentioned in Sec. III.1, shell-model configuration spaces are usually based on just a few single-particle orbitals, most often in one major oscillator shell. In most QRPA calculations, all the orbitals within one or two oscillator shells of the Fermi surface are treated explicitly, with those farther below assumed to be fully occupied and those farther above completely empty. One QRPA calculation Mustonen and Engel (2013), albeit a particularly demanding one, actually included all levels between 0 and 60 MeV, with no inert core.
Being able to handle many orbitals guarantees, for example, that the β− and β+ strengths obey the “Ikeda sum rule” [see Eq. (20) in Sec. IV]. But the price for such large single-particle spaces in the QRPA is a restricted set of correlations. As a result, one cannot expect a really accurate calculation without compensating for the method’s limitations by modifying the effective nucleon-nucleon interaction used to generate the nuclear states. As in the shell model, the original interaction is typically a realistic nucleon-nucleon potential, adapted to the QRPA configuration space through many-body perturbation theory Hjorth-Jensen et al. (1995). The interaction is modified more severely than in the shell model, however, usually independently in the particle-hole and pairing channels. Practitioners often renormalize the strengths of the interaction in the proton-neutron particle-hole channels by 10% or so to properly reproduce the energies of the Gamow-Teller and spin-dipole giant resonances, altering both the 2νββ and 0νββ matrix elements somewhat. It is the proton-neutron pairing interaction that is most important, however. Its strength, often called g_pp, is also changed by only about 10%, but that modest adjustment usually has a large effect on the matrix elements. Figure 6 shows a typical plot of the 2νββ and 0νββ matrix elements versus g_pp; the strong suppression near the nominally correct value of that parameter, g_pp = 1, is clear not only in the 2νββ matrix element, where it is dramatic, but also in the 0νββ matrix element. The usual procedure in the QRPA is to fix the value of g_pp so that the measured rate of 2νββ decay is correctly reproduced Rodin et al. (2003). Then the same value is used to predict the rate of 0νββ decay. As Fig. 6 shows, this procedure almost eliminates the dependence of the matrix elements on the number of single-particle levels in the configuration space.
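The fitting procedure described in the last few sentences can be sketched as a simple root-finding problem: adjust g_pp until the calculated 2νββ matrix element matches the measured one, then evaluate the 0νββ matrix element at the fitted value. Both matrix-element curves below are invented stand-ins for real QRPA output; only the workflow is illustrated.

```python
# Sketch of the g_pp fitting procedure.  m2n and m0n are hypothetical
# matrix-element curves (real ones come from full QRPA calculations); the
# "measured" 2nbb value is also invented.
def m2n(g_pp):                      # falls steeply near g_pp ~ 1 (toy model)
    return 1.5 * (1.1 - g_pp)

def m0n(g_pp):                      # milder dependence (toy model)
    return 5.0 - 2.0 * g_pp

m2n_exp = 0.10                      # "measured" 2nbb matrix element, invented

lo, hi = 0.8, 1.1                   # bracket where m2n - m2n_exp changes sign
for _ in range(60):                 # plain bisection
    mid = 0.5 * (lo + hi)
    if (m2n(lo) - m2n_exp) * (m2n(mid) - m2n_exp) <= 0:
        hi = mid
    else:
        lo = mid
g_fit = 0.5 * (lo + hi)
print(f"fitted g_pp = {g_fit:.4f}, predicted M0n = {m0n(g_fit):.3f}")
```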
The reason for the sensitivity to the proton-neutron pairing strength has been the subject of many investigations Vogel and Zirnbauer (1986). Some of it is easy to understand: in the isoscalar channel, for example, proton-neutron pairing mixes proton-neutron spin-one pairs, which are like in-medium deuterons, into the usual condensate of like-particle spin-zero pairs. The Gamow-Teller single-β operator, which plays a role in both the 2νββ matrix element in Eq. (14) and the 0νββ matrix element in Eq. (11), connects the two kinds of pairs, altering the matrix elements. (For a simple discussion of why they shrink rather than grow as a result, see Ref. Engel et al. (1988).) But some of the sensitivity can be an artifact of the QRPA. When g_pp is made large enough, the method breaks down because of an impending phase transition from the like-particle pairing condensate to a “deuteron-like” condensate. In the region before this (nonexistent) transition, the sensitivity to g_pp is unrealistically high.
Several groups have tried to modify the QRPA to cure this behavior. The usual approach is the “renormalized QRPA” (RQRPA) Toivanen and Suhonen (1995); Rodin and Faessler (2002). The idea is that the breakdown is due to the violation of the Pauli exclusion principle by the small-amplitude approximation at the heart of the QRPA, and that the violation can be remedied by modifying the BCS vacuum on which the QRPA phonons are based. In the “fully renormalized” version of Ref. Rodin and Faessler (2002), some terms are kept beyond lowest order in the small-amplitude expansion as well. These procedures succeed in avoiding the infinities that mark the breakdown of the ordinary QRPA, but may overcompensate by eliminating all traces of the approximate phase transitions that really take place in solvable models, where the methods have been benchmarked Engel et al. (1997); Hirsch et al. (1997). In realistic calculations, these methods alter the QRPA matrix elements by noticeable but not large amounts Rodin et al. (2006). More comprehensive and systematic modifications to the QRPA have been proposed; we discuss some of them in Sec. V.2.3.
Finally, it is possible to calculate 0νββ matrix elements, in the closure approximation, in the like-particle QRPA rather than the charge-changing version. With closure, any set of intermediate states that form a partition of the identity operator will do. To apply the like-particle QRPA, one chooses these intermediate states to lie in the nucleus with two more protons (or two fewer neutrons) than the initial nucleus. Though such excitations do not conserve particle number, or even charge in the case of two-proton addition, they can be represented as like-particle two-quasiparticle excitations of the initial state. So far, however, only one author Terasaki (2015, 2016) has applied the QRPA in this way.
iii.3 Energy-Density Functional Theory and the Generator-Coordinate Method
The term Energy-Density Functional (EDF) theory refers at its most basic level to the process of minimizing an energy functional with respect to local and semilocal densities such as the number density, the spin density, and the current density Nogueira et al. (2003). The functional is the minimum possible value for the expectation value of the Hamiltonian when the densities are constrained to have particular values. Once the functional is obtained, minimizing it with respect to its arguments provides the exact ground-state energy and densities, as Hohenberg and Kohn originally showed Hohenberg and Kohn (1964). Moreover, the minimization can be formulated so that it looks like mean-field theory with one-body potentials and orbitals, via the Kohn-Sham procedure Kohn and Sham (1965). The independent-particle or quasiparticle wave functions that result have no meaning beyond supplying the correct energy and densities. In nuclear physics, approximations to the functional usually derive from the Hartree-Fock or HFB energy associated with a “density-dependent two-body interaction” of the Skyrme Vautherin and Brink (1972), Gogny Dechargé and Gogny (1980), or relativistic Walecka Serot and Walecka (1986) type, sometimes with additional modifications. The parameters of the interaction or functional are then fit to ground-state properties — masses, radii, etc. — in a variety of nuclei and used without alteration all over the nuclear chart. The method can be extended to the EDF-based RPA or QRPA, which are adiabatic versions of time-dependent Kohn-Sham theory. Indeed, that approach was used profitably in the 0νββ decay calculations of Ref. Mustonen and Engel (2013).
The problem at present is that the Skyrme, Gogny, and relativistic functionals, though they do a good job with collective properties such as binding energies, radii, and transitions, are not close enough to the exact functional to work for all quantities in all nuclei. (Nor should they be, even in principle; the Hohenberg-Kohn theorem does not imply that the functional has the same form in all nuclei.) Something must be added to accurately describe nuclear properties, and the easiest way to add physics is to explicitly modify the Hartree-Fock or HFB wave functions so that they contain explicit correlations Bender et al. (2003). Adding correlations means mixing the wave-function-based and EDF-based approaches, however, opening the door to all kinds of inconsistencies Bender et al. (2009). The only way to remove these is to abandon the focus on functionals and return to a Hamiltonian. It is still an open question whether density-dependent Hamiltonians of the kind used in phenomenological calculations of isotopes along the nuclear chart can ever be made fully consistent.
The use of functionals together with explicit correlations is valuable nonetheless, particularly within the GCM. Although the method can be used in conjunction with the small-amplitude approximation to derive the RPA or QRPA (see Sec. III.2), the need for it is greatest when deviations from a single mean field have large amplitudes and the QRPA breaks down. Large-amplitude fluctuations are often associated with collective correlations. Shapes with significantly different degrees of deformation often must coexist, making the QRPA inapplicable. The same is sometimes true of pairing gaps. And both deformation and pairing involve the simultaneous motion of many nucleons in many orbitals, some of which may be outside shell-model spaces. The large single-particle spaces in EDF-based work and the mixing of mean fields in the GCM make it possible for these collective correlations to be fully captured.
The GCM with EDFs has several steps Bender et al. (2003):

Choose one or more “collective operators” O_i. The most commonly chosen is the axial quadrupole operator Q_20. Operators that reflect nonaxial deformation and pairing gaps are other common choices.

Carry out repeated mean-field calculations with the expectation values of the O_i constrained to many different values (approximating a continuum of values). In other words, find the Slater determinants or HFB vacua that minimize the energy under the constraint that the ⟨O_i⟩ take on particular sets of values.

Project the resulting mean-field states onto states with well-defined angular momentum, particle number, and whatever other conserved quantity the mean-field approximation does not respect.

Use the resulting non-orthogonal states as a basis in which to “diagonalize the Hamiltonian.”
The last phrase is in quotation marks because although the EDF is associated with one or more density-dependent Hamiltonians (often separate ones for the particle-hole and pairing channels), the theory says nothing about how to evaluate matrix elements of the density or of density-dependent operators between different Slater determinants. The most sensible and commonly used procedure is to replace the density by the transition density, but the approach has been shown to produce ill-defined singularities when examined closely Anguiano et al. (2001); Dobaczewski et al. (2007); Bender et al. (2009), and no alternative has gained currency. Some functionals have fewer problems than others, however, and the Gogny and relativistic functionals, which are based strictly on density-dependent Hamiltonians, both allow stable results, provided one does not push numerical accuracy too far.
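The final GCM step, diagonalization in a non-orthogonal basis, amounts to solving a generalized (Hill-Wheeler-like) eigenvalue problem H c = E N c with a Hamiltonian kernel H and an overlap kernel N. The sketch below uses random vectors in place of projected mean-field states and solves the problem by Löwdin orthogonalization; all numbers are invented.

```python
# Diagonalizing a Hamiltonian in a non-orthogonal basis, as in the last GCM
# step.  Random vectors stand in for the projected mean-field states; the
# generalized problem H c = E N c is solved via N^{-1/2}.
import numpy as np

rng = np.random.default_rng(1)
dim, nstates = 10, 4
Hbig = rng.normal(size=(dim, dim))
Hfull = (Hbig + Hbig.T) / 2                 # "exact" Hamiltonian, invented
basis = rng.normal(size=(dim, nstates))     # columns: non-orthogonal "GCM" states

N = basis.T @ basis                         # overlap kernel
H = basis.T @ Hfull @ basis                 # Hamiltonian kernel

w, U = np.linalg.eigh(N)
Nm12 = U @ np.diag(w ** -0.5) @ U.T         # N^{-1/2} (Lowdin orthogonalization)
E, _ = np.linalg.eigh(Nm12 @ H @ Nm12)
print("lowest GCM energy:", E[0])           # variational: >= exact ground state
```

By the Rayleigh-Ritz variational principle, the lowest energy obtained this way can only approach the exact ground-state energy from above as more basis states are added.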
Applications to 0νββ decay have been relatively few so far. Reference Rodríguez and Martínez-Pinedo (2010) contains the first, with the Gogny functional and the axial quadrupole moment as the generator coordinate. In Ref. Vaquero et al. (2013), fluctuations in particle number were added as coordinates, leading to a modest enhancement of the matrix elements from the richer pairing correlations. References Song et al. (2014); Yao et al. (2015) used the axial quadrupole moment as a coordinate in conjunction with a relativistic functional, and Ref. Yao and Engel (2016) added octupole deformation in the decay of the rare-earth nucleus ¹⁵⁰Nd. The most obvious feature of all these results is that the matrix elements are larger than those of the shell model, and usually larger than those of the QRPA as well. The reason seems to be correlations missing from the GCM ansatz. At first these were thought to be noncollective Menéndez et al. (2014), but recent work Menéndez et al. (2016) strongly suggests that most of them come from the isoscalar pairing that we encountered when discussing the QRPA in Sec. III.2. Quite recently, Ref. Hinohara and Engel (2014) included the isoscalar pairing amplitude as a generator coordinate together with a Hamiltonian in one or two oscillator shells rather than an EDF. That coordinate allows the GCM to reproduce shell-model results quite well. The Hamiltonian-based GCM can easily be extended to larger spaces, and we discuss plans to do so in Sec. III.6.
iii.4 The Interacting Boson Model
The strength of the shell model is the inclusion of all correlations around the Fermi surface, and that of the GCM is a careful treatment of collective motion. The interacting boson model (IBM) aims to share both of these strengths, and it describes excitation spectra and electromagnetic transitions among collective states up to heavy nuclei Iachello and Arima (1987). The cost, perhaps, is degrees of freedom that are more abstract than nucleons and a more phenomenological approach to nuclear structure.
The IBM leverages the algebra of boson creation and annihilation operators to provide simple Hamiltonians that generate complex and realistic collective spectra. In its original form (IBM-1) Arima and Iachello (1981), the degrees of freedom are N bosons (where N is usually half the number of nucleons in the shell-model configuration space), each of which can be in six positive-parity states: an angular-momentum-zero state (in which case the boson is labeled “s”) and five angular-momentum-two states (in which case it is labeled “d_μ,” with μ the magnetic projection). The Hamiltonian is usually a combination of one- and two-boson scalar operators, of the form s†s, [d†d̃]^(0), etc., where d̃_μ ≡ (−1)^μ d_{−μ} and the superscripts denote the angular momentum to which the operators inside the corresponding parentheses or square brackets are coupled. In the IBM-2 Iachello and Arima (1987), which is used to study ββ decay, there are separate s and d boson states for neutrons and protons.
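One reason such boson Hamiltonians are tractable is the tiny size of the space: N bosons distributed over the six s and d states span a symmetric representation of dimension (N+5 choose 5). A quick count (the boson number below is a hypothetical example):

```python
# IBM-1 space size: N identical bosons in 6 single-boson states (one s, five
# d) give a symmetric-representation dimension of C(N+5, 5), tiny compared
# with the fermionic shell-model dimensions the bosons approximate.
from math import comb

def ibm1_dim(N):
    return comb(N + 5, 5)

print(ibm1_dim(8))   # e.g. 8 bosons standing in for 16 valence nucleons
```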
The IBM is appealing because it has clear connections to both the shell model and the collective model of Bohr and Mottelson Bohr and Mottelson (1998). On the one hand the bosons represent nucleon pairs, and on the other quadrupole phonons. Because the first correspondence is hard to make precise, Hamiltonians and effective operators are usually determined by fits to data rather than by mappings from the shell model Otsuka et al. (1978). There are no data on 0νββ matrix elements, however, and the associated operators therefore must be derived from the shell model, at least approximately. The mapping for doing so is described in Ref. Barea and Iachello (2009). It is approximate because it involves only two- and four-nucleon states (which are mapped to one- and two-boson states) and a schematic surface-delta shell-model interaction that is inconsistent with the phenomenological boson interaction.
The 0νββ matrix elements produced by the IBM generally lie between those of the EDF/GCM and those of the shell model, and for several isotopes they are close to the QRPA values. This result is perhaps surprising because the IBM is supposed to be a collective approximation to the shell model in one harmonic-oscillator shell. The behavior may be connected to the truncation of the set of all possible bosons to a single set of like-particle s and d objects, but that is just a guess; the subject needs to be investigated more thoroughly.
iii.5 Other Approaches
We will do little more than mention a couple of other methods that have been used to calculate matrix elements. Neither gives results that are startlingly different from those of other more modern and sophisticated approaches.
References Chaturvedi et al. (2008); Chandra et al. (2009); Rath et al. (2010, 2013) describe calculations with projected HFB states in a valence space of between one and two shells, with a schematic pairing-plus-quadrupole Hamiltonian. The method itself is a simpler version of the GCM, which diagonalizes Hamiltonians in a space built from many projected HFB states.
References Hirsch et al. (1994, 1995b) describe calculations in the pseudo-SU(3) model of the matrix elements for ¹⁵⁰Nd, for the very heavy nucleus ²³⁸U, and for a few other deformed isotopes. The model takes advantage of a dynamical symmetry in the deformed Nilsson basis to construct states that should be similar to those obtained from number- and angular-momentum-projected HFB calculations of the kind discussed in the previous paragraph and in Sec. III.3.
iii.6 Tests and Comparisons
Although we have mentioned the strengths and weaknesses of the manybody approaches most commonly used to calculate matrix elements, it is worth trying to compare them more carefully. How important are all the correlations included in the shell model but neglected in the QRPA, GCM, and IBM? What about the large configuration spaces in which the GCM and QRPA work, but which the shell model and IBM cannot handle? A few publications address these matters and we discuss them here.
References Caurier et al. (2008b); Horoi and Brown (2013); Iwata et al. (2016) have studied the effects of enlarging the configuration space in the shell model. The tentative conclusion is that allowing up to one or two particles to occupy a few selected orbitals beyond the original configuration space can have a strong effect on 2νββ matrix elements (in particular if these orbitals are the spin-orbit partners of orbitals in the original configuration space Horoi and Brown (2013)), but that the effects on 0νββ matrix elements are more moderate. The reason is that two distinct kinds of correlations compete, as illustrated in Fig. 7, taken from Ref. Iwata et al. (2016). On the one hand, pairing-like excitations to the additional orbitals enhance the matrix elements Caurier et al. (2008b) (diagrams ii, iii and iv in Fig. 7); on the other hand, particle-hole excitations (1p-1h) to the additional orbitals, which in principle require less energy than pairing-like excitations (2p-2h), tend to reduce the matrix elements (diagram v in Fig. 7). The net result appears to be a moderate increase, generally by less than 50%. A careful calculation of the 0νββ matrix element for ⁴⁸Ca Iwata et al. (2016) that extended the configuration space from four to seven single-particle orbitals found about a 30% enhancement. That agrees reasonably well with the results of calculations that include the effect of extra orbitals perturbatively: a 75% increase in the matrix element of ⁴⁸Ca Kwiatkowski et al. (2014), along with a 20% increase in ⁷⁶Ge and a 30% increase in ⁸²Se Holt and Engel (2013). (In perturbation theory, configuration spaces even larger than those in QRPA and GCM calculations can be included.) Though the enhancement is greater in perturbation theory, the final matrix elements in Refs. Iwata et al. (2016) and Kwiatkowski et al. (2014) differ by only 20%.
Other publications Caurier et al. (2008a); Menéndez et al. (2009a); Escuderos et al. (2010) have reported studies of the correlations included in the shell model but not in the QRPA. Nuclear many-body states can be classified by a “seniority” quantum number that labels the number of nucleons not in correlated neutron-neutron and proton-proton pairs. Fully paired states contribute the major part of the matrix elements Caurier et al. (2008a); Menéndez et al. (2009a), a fact that is consistent with the finding that the additional pairing (2p-2h-like) correlations from extra orbitals enhance the matrix element. The states with broken pairs (nonzero seniority) contribute with the opposite sign, shrinking the matrix elements Caurier et al. (2008a); Menéndez et al. (2009a). An overestimate of pairing correlations, i.e. of the seniority-zero component in the nuclear states, thus leads to an overestimate of the matrix elements themselves. QRPA correlations in spherical nuclei include states with seniority 4, 8, and 12, but not those with seniority 6 and 10, and QRPA matrix elements could be too large for that reason. Although the contributions of states with seniority 6 and 10 are relatively small in shell-model calculations Escuderos et al. (2010), Ref. Menéndez et al. (2011a) noted that when shell-model states are forced to have the same seniority structure as QRPA states, the resulting shell-model matrix elements grow, implying that the shortage of broken pairs in the QRPA makes its matrix elements too large. On the other hand, Ref. Šimkovic et al. (2008) observed that when the QRPA is applied within the small shell-model configuration space (which is not a natural space for the QRPA), the resulting matrix elements are similar to those of the shell model, suggesting that the shell-model matrix elements are about 50% too small. These issues are still unresolved.
The shell model’s ability to incorporate all the correlations induced by the nuclear interaction, at least in a small model space, allows it to benchmark the GCM as well as the QRPA, and to tease out the kinds of correlations most important for ββ decay. Reference Menéndez et al. (2014) compared shell-model and EDF/GCM matrix elements for the decay of a wide range of calcium, titanium, and chromium isotopes. These nuclei are not candidates for a ββ decay experiment, but their 0νββ matrix elements can still be calculated, allowing one to perform a systematic study in medium-mass nuclei. The calculations of Ref. Menéndez et al. (2014) were in the full pf shell, a useful testbed because it includes all spin-orbit partners in the one-shell configuration space, removing one of the shortcomings of the shell model in heavier nuclei (see Sec. III.1). In addition, both the shell model and EDF theory with the GCM describe the spectra of nuclei in this region quite well Caurier et al. (2005); Rodríguez and Egido (2007). The main finding of Ref. Menéndez et al. (2014) is that the EDF-based GCM and the shell model produce similar matrix elements when spherical EDF configurations are kept and shell-model configurations are restricted to those with seniority zero. It is only when higher-seniority components, which include correlations beyond pairing, are permitted that the shell-model matrix elements decrease. The reduction is much larger than any that can be induced by including axial deformation as one of the coordinates in the EDF-based GCM. The inclusion of high-seniority states in the shell model but not in the EDF-based GCM thus explains the results in Fig. 5, where shell-model matrix elements are always smaller than those produced by EDF theory.
What is the nature of the higher-seniority correlations that reduce the matrix elements? Quadrupole correlations, induced by deformation, can play a role, particularly when the initial and final nuclei are deformed by different amounts Menéndez et al. (2011b); Rodríguez and Martínez-Pinedo (2010); Fang et al. (2011). But the QRPA and the shell model agree that isoscalar pairing correlations usually play an even more crucial role Vogel and Zirnbauer (1986); Engel et al. (1988); Menéndez et al. (2016). Figure 8, based on work in Ref. Menéndez et al. (2016), compares shell-model matrix elements with and without isoscalar pairing correlations. Put more precisely, it compares results produced by the full shell-model Hamiltonian with those of an approximate collective Hamiltonian Dufour and Zuker (1996) from which isoscalar pairing can be excluded. The collective Hamiltonian reproduces the full results nearly perfectly. Without isoscalar pairing, however, the collective Hamiltonian greatly overestimates the matrix elements, always by about two units.
After noting the importance of isoscalar pairing, Ref. Menéndez et al. (2016) demonstrated that the correlations missing from the GCM are nearly all of that kind. The paper tested a version of the GCM that works with Hamiltonians rather than EDFs. This shell-model-based GCM and the shell model itself used the same Hamiltonian — the collective approximation — and the same configuration space; the shell model was thus the “exact” solution in the test because it diagonalized the interaction exactly. The result, shown in Fig. 8, was good overall agreement, provided the GCM included the isoscalar pairing amplitude as one of its generalized coordinates Hinohara and Engel (2014). (Some small differences remain for decays involving closed shells, because those isotopes have fewer collective correlations.) That step allowed the method to capture isoscalar pairing correlations quite well. GCM calculations in larger spaces do not yet include the isoscalar pairing amplitude as a coordinate, but should be able to in the near future. The Hamiltonian-based GCM may thus be able to combine the large model spaces of the QRPA with the important shell-model correlations, without the drawbacks of either method.
Comparing the many-body methods suggests that with proper attention, each of them could obtain accurate matrix elements, which might well lie somewhere between the current predictions of the shell model and those of the EDF/GCM and QRPA. We will say more about how to improve and benchmark the different methods in the near future in Sec. V. First, however, we turn to an issue that plagues all calculations in heavy nuclei and vitiates the idea that the matrix elements should lie in the range spanned by current calculations: the overprediction of single-β and 2νββ matrix elements, sometimes referred to as the “g_A problem.”
IV The g_A Problem
IV.1 Systematic Overprediction of Single-β and 2νββ Matrix Elements
For nuclei close to stability, shell-model calculations reproduce ground-state properties, excitation spectra, and electric moments and transitions well (the moments and transitions require effective charges, which one can obtain by treating collective states outside the configuration space in perturbation theory Dufour and Zuker (1996)) Caurier et al. (2005); Otsuka et al. (2001); Brown (2001). They do not do as well with β-decay rates, at least not without a tweak. Figure 9, taken from Ref. Martínez-Pinedo et al. (1996), compares experimental Gamow-Teller strengths — summed over certain low-lying states and scaled as described in that reference — for nuclei with mass number between about 40 and 50 with the theoretical predictions. The calculated strengths are generally larger than the data, but if the Gamow-Teller operator στ is multiplied by 0.74 or, equivalently, if the axial coupling g_A is replaced by an effective value g_A^eff = 0.74 g_A, then the calculations reproduce most of the experimental data quite well. A similar phenomenological correction brings predictions into agreement with data for nuclei with mass number less than 16 Chou et al. (1993). For mass numbers between about 16 and 40, a comparable value Brown and Wildenthal (1988) works well, and a slightly larger phenomenological correction is preferred for mass numbers between 60 and 80 Kumar et al. (2016). That such a simple renormalization can largely eliminate a theoretical problem is a remarkable fact that has resisted attempts at explanation for several decades Brown et al. (1978); Bertsch and Hamamoto (1982); Arima et al. (1987); Caurier et al. (1995); Towner (1997); Park et al. (1997).
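The effect of such a renormalization is easy to state quantitatively. The sketch below (a minimal illustration with a hypothetical calculated strength, not a calculation from the cited work) shows how multiplying the στ operator by q = 0.74 rescales the effective coupling and, because B(GT) is proportional to the squared matrix element, scales every strength by q²:

```python
# Minimal sketch (hypothetical numbers): phenomenological quenching of
# Gamow-Teller strength. Multiplying the sigma-tau operator by q is
# equivalent to using an effective axial coupling gA_eff = q * gA, and
# it scales every B(GT) value by q**2.

GA_FREE = 1.27          # free-nucleon axial coupling
q = 0.74                # pf-shell quenching factor quoted in the text

def quenched_bgt(bgt_calculated, q=q):
    """B(GT) is proportional to the squared matrix element of sigma-tau,
    so quenching the operator by q scales the strength by q**2."""
    return q**2 * bgt_calculated

gA_eff = q * GA_FREE
print(f"gA_eff = {gA_eff:.3f}")
# A calculated strength of, say, 2.0 (hypothetical) becomes:
print(f"quenched B(GT) = {quenched_bgt(2.0):.3f}")
```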
The g_A problem is not confined to β decay or even to electroweak operators. The total Gamow-Teller integrated strength is governed by the Ikeda sum rule Ikeda et al. (1963):
S^(−) − S^(+) = 3(N − Z) ,   (20)
where S^(−) and S^(+) are the total Gamow-Teller strengths generated by the στ_− (neutron-to-proton) and στ_+ (proton-to-neutron) operators, summed over all energies, and N and Z are the initial nucleus's neutron and proton numbers. The sum rule is a simple consequence of commutation relations and must hold to the extent that neutrons and protons (as opposed, e.g., to isobar excitations) are the only important nuclear degrees of freedom. The summed strength can be extracted from charge-exchange experiments in which, for example, a proton is absorbed and a neutron is emitted, or a ³He ion is absorbed and a triton emitted. The weak interaction plays no part in these reactions, but they nevertheless measure Gamow-Teller strength because the cross section at forward angles is determined by the transition matrix elements of the στ operators Ichimura et al. (2006); Fujita et al. (2011); Frekers et al. (2013). Experiments that can determine the sum of strength below about 50 MeV report considerably less than 3(N − Z), typically about half that much Ichimura et al. (2006). (S^(+) is much smaller because stable nuclei usually have more neutrons than protons, and proton-to-neutron transitions are thus Pauli blocked.) That amount would correspond to a "quenching" of the στ operator that is similar to what is needed for single β decay.
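To make the sum rule concrete, here is a toy numerical check (our illustration, not part of the original analysis) for the simplest possible case, a single free neutron, where the rule requires S^(−) − S^(+) = 3(N − Z) = 3:

```python
import numpy as np

# Toy check of the Ikeda sum rule for one free neutron (N = 1, Z = 0),
# treated in the 4-dimensional spin (x) isospin space of a single nucleon.

sigma = [np.array([[0, 1], [1, 0]]),          # Pauli spin matrices
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]

# Isospin basis ordered (proton, neutron):
tau_np = np.array([[0, 1], [0, 0]])           # neutron -> proton
tau_pn = np.array([[0, 0], [1, 0]])           # proton  -> neutron

def total_strength(tau, state):
    """Sum over final states of |<f| sigma_k tau |i>|^2, k = x, y, z,
    computed as <i| (sigma_k tau)^dagger (sigma_k tau) |i> summed over k."""
    return sum(np.vdot(op @ state, op @ state).real
               for op in (np.kron(s, tau) for s in sigma))

neutron_up = np.kron(np.array([1, 0]), np.array([0, 1]))  # spin-up neutron

S_minus = total_strength(tau_np, neutron_up)  # neutron-to-proton strength
S_plus = total_strength(tau_pn, neutron_up)   # Pauli blocked here: zero
print(S_minus - S_plus)  # 3(N - Z) = 3
```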
The agreement is strange. One experiment examines weak decay and the other tests nothing but the strong interaction. In one case nature disagrees with complicated many-body calculations, and in the other with a simple consistency requirement (though measured strength distributions are also smaller than calculations at low energies, demanding a quenching that is consistent with that used for β decay Yako et al. (2009); Fujita et al. (2013); Noji et al. (2014); Iwata et al. (2015)). Some charge-exchange experiments Yako et al. (2005); Sasano et al. (2009) suggest that both discrepancies are due to strength that spreads out above the Gamow-Teller resonance, up to 50 MeV or more of excitation energy. There is still no consensus about this suggestion, however, mainly because it is hard to distinguish spin-isospin strength from background at high energies Ichimura et al. (2006); Frekers et al. (2013).
Given this comprehensive quenching of Gamow-Teller strength, it is not surprising that the "g_A problem" also afflicts calculations of 2νββ matrix elements. Figure 10, taken from Ref. Barea et al. (2015b), shows the values of g_A^eff required to reproduce measured 2νββ matrix elements for a selection of shell-model and IBM calculations (the latter of which are in the not-always-reliable 2νββ closure approximation). In both models g_A^eff decreases with mass, approaching half the free value or less in the heavier nuclei. Although other sets of shell-model calculations show milder and less mass-dependent quenching Caurier et al. (2012); Neacsu and Horoi (2015); Horoi and Neacsu (2016a), and the mass dependence might be due to the fixable omission of spin-orbit partners Caurier et al. (2012); Horoi and Brown (2013), the main message of Fig. 10 is undeniable and its implication stark. 2νββ decay rates, which are proportional to the fourth power of g_A^eff, are much smaller than shell-model and IBM predictions (as we discussed earlier, the QRPA fits the strength of isoscalar pairing so as to reproduce 2νββ decay rates). If 0νββ decay rates, to which the main contribution comes from the Gamow-Teller part of Eq. (10), are smaller than predictions by a similar amount, next-generation experiments will be significantly less sensitive than we currently expect; they will not be able to rule out the inverted hierarchy with a ton of material. Is 0νββ decay really that quenched?
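The fourth-power dependence on the coupling is what makes the question so pressing. A rough arithmetic sketch (illustrative values only) shows how quickly an operator quenching q eats into decay rates:

```python
# Rough arithmetic sketch: double-beta rates scale as gA**4 (the matrix
# element enters squared and is itself proportional to gA**2), so an
# operator quenching q suppresses the rate by q**4 and stretches
# half-lives by q**-4.
def rate_suppression(q):
    return q**4

for q in (1.0, 0.74, 0.5):
    print(f"q = {q:4.2f}: rate x {rate_suppression(q):.4f}, "
          f"half-life x {1 / rate_suppression(q):.1f}")
```

A quenching of q = 0.5, the kind suggested by the heavier nuclei in Fig. 10, would lengthen half-lives by a factor of 16, which is why a fully quenched 0νββ operator would be so damaging to experimental prospects.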
The answer depends on the source of the quenching, which is still unknown. We discuss possibilities and their implications next.
IV.2 Possible Causes and Implications for Neutrinoless Double-Beta Decay
Over the years, theorists have suggested a wide range of sources for the quenching of Gamow-Teller strength. Almost all, in modern language, fall into one of two classes: nuclear many-body correlations that escape calculations Bertsch and Hamamoto (1982); Arima et al. (1987); Towner (1997); Caurier et al. (1995), and many-nucleon weak currents Towner (1997); Park et al. (1997). The former include short-range correlations, multi-phonon states, particle-hole excitations outside shell-model configuration spaces, etc. The latter stand in for non-nucleonic degrees of freedom: isobar excitations, in-medium modification of pion physics, partial restoration of chiral symmetry, etc. The consequences for 0νββ decay depend on which of these two complementary sources is mostly responsible for the renormalization of the στ operator.
The reason that the relative quenching of 2νββ and 0νββ matrix elements can depend on the source is that there are marked differences between the two processes, even though both involve two virtual single-β decays. The virtual neutrino that is emitted and reabsorbed in 0νββ decay makes the average momentum transfer from nucleons to leptons at each vertex much higher than in 2νββ decay. If the neutrinos actually emerge from the decay, the momentum and energy transfer are constrained by the Q value of the transition, which is on the order of 1 MeV. If only electrons are emitted, however, the average momentum transfer is about 100 MeV, a scale set by the average distance between the two decaying neutrons. Figure 11 presents the contribution of different momentum transfers to the nuclear matrix element produced by the shell model and QRPA, for the representative mother nucleus ¹³⁶Xe. The momentum-transfer distribution is similar in the two calculations. Although the QRPA distribution is shifted to higher momenta, probably because of the larger configuration space, it falls off slowly in both cases, so that several hundred MeV are transferred with reasonable probability. The higher momentum transfer means that the first virtual β transition in 0νββ decay can excite virtual intermediate nuclear states with all spins and parities, not just the 0⁺ or 1⁺ intermediate states that contribute to 2νββ decay. If the spin-isospin renormalization depends on the momentum transfer or multipolarity of the intermediate states, the large quenching needed to correctly predict single Gamow-Teller and 2νββ decay rates may not be needed for 0νββ decay. Although experimentalists are trying to test the momentum-transfer and multipolarity dependence of quenching Ejiri and Frekers (2016), the experiments are difficult and the existing data inconclusive.
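The 100-MeV scale quoted above follows from a simple uncertainty-principle estimate, sketched here (with an assumed average internucleon separation of about 2 fm):

```python
# Back-of-envelope estimate of the typical momentum transfer in 0vbb decay:
# the virtual neutrino propagates between the two decaying neutrons, so the
# momentum scale is set by the inverse of their average separation.
HBARC = 197.327   # MeV * fm (hbar * c)
r_avg = 2.0       # fm, rough average distance between the decaying nucleons

q_typical = HBARC / r_avg
print(f"q ~ {q_typical:.0f} MeV")  # about 100 MeV, as quoted in the text
```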
In the search for the cause of quenching, complex correlations that calculations do not capture have long been a suspect. Reference Bertsch and Hamamoto (1982) proposed in 1982 that two-particle–two-hole excitations to orbitals outside shell-model configuration spaces, or beyond QRPA correlations, shift the Gamow-Teller strength to high energies. (The Ikeda sum rule means that strength does not appear or disappear, but rather moves.) Nuclear-structure models miss this effect and therefore need to quench the low-energy strength. The authors of Refs. Arima et al. (1987); Towner (1997) made a similar argument, and Ref. Brown and Wildenthal (1987) proposed that about two thirds of the spin-isospin quenching comes from missing particle-hole configurations. The authors of Ref. Caurier et al. (1995) argued slightly differently, suggesting that because the στ operator acts at all internucleonic distances, its matrix elements should be affected not only by the long-range (low-energy) correlations included, e.g., in shell-model states, but also by short-range (high-energy) correlations, which are not included. They went on to argue that shell-model Gamow-Teller strength should be quenched consistently with the roughly 30% depletion of single-particle occupancies needed to reproduce electron-scattering data Pandharipande et al. (1997), because both kinds of quenching reflect the same inability of the shell model to include short-range correlations.
More recently, two studies have tried to use many-body perturbation theory Hjorth-Jensen et al. (1995) to quantify the effect of missing correlations on the στ operator in the shell model. Reference Siiskonen et al. (2001) reported a 20% reduction of Gamow-Teller strength for nuclei whose valence nucleons are in the sd and pf shells; the result agrees well with phenomenological fits to experimental strength. In heavier systems the authors found a much stronger reduction, as large as 60% in ¹⁰⁰Sn; that result is in reasonable agreement with the trend shown in Fig. 10. The degree of renormalization varies by only a few percent up to momentum transfers of about 100 MeV, suggesting similar quenching of 2νββ and 0νββ matrix elements. Reference Holt and Engel (2013) studied 0νββ decay within a similar perturbative framework. While the method required the closure approximation and so could say relatively little about 2νββ decay, it produced about a 20% enhancement of the 0νββ matrix element in ⁷⁶Ge and a 30% enhancement in ⁸²Se. These results agree with the tendency of the shell model to increase matrix elements when configuration spaces are enlarged slightly Caurier et al. (2008b); Iwata et al. (2016), and argue against any suppression of 0νββ decay. Once more, however, the argument is not conclusive: first-order one-particle–one-hole excitations strongly suppress the matrix element, and it just so happens that higher-order terms tend to counteract the suppression. But it is not at all clear whether or how fast the perturbative expansion converges, and neglected terms could have large effects.
Non-nucleonic degrees of freedom, manifested as many-nucleon currents in models without explicit isobars and pions, are also a long-term suspect in the search to explain quenching (see, e.g., Ref. Towner (1997) and references therein). Reference Brown and Wildenthal (1987) concluded that only about one third of the phenomenological quenching is due to meson-exchange currents, most of which involved the Δ isobar. The currents in use at the time were based on models with no systematic power counting, however, and so the uncertainty in the result is very large. The situation has improved with the advent of chiral effective field theory (χEFT), which relates electroweak currents to nuclear interactions Park et al. (2003); Hoferichter et al. (2015); Krebs et al. (2017). Several studies have exploited the relation in light nuclei, and it is now clear that the two-nucleon currents are necessary in precise calculations of weak Butler et al. (2001); Nakamura et al. (2001); Gazit et al. (2009); Baroni et al. (2016) and electromagnetic Bacca and Pastore (2014); Marcucci et al. (2016) transition rates. The effects in these nuclei range from a few percent to about 40% and depend significantly on nuclear structure.
Reference Menéndez et al. (2011c) applied many-nucleon currents, shown in Fig. 12, to the single-β and 2νββ decay of medium-mass nuclei. The authors reduced the two-body current operators to effective one-body operators by normal ordering with respect to a spin-isospin-symmetric Fermi-gas reference state. They found that the single-β and 2νββ matrix elements were reduced by an amount comparable to the phenomenological quenching. Most of the quenching comes from the long-range one-pion-exchange current (depicted in the left-hand diagram in Fig. 12), which includes contributions from the Δ isobar. Uncertainty in the χEFT couplings that appear in the two-nucleon currents, especially in the coefficient of the contact term (depicted in the right-hand diagram in Fig. 12), leads to a significant uncertainty in the quenching, and even allows a slight enhancement.
Interestingly, the corresponding reduction in 0νββ matrix elements turned out to be only about 30% Menéndez et al. (2011c), much less than one would get by squaring the renormalization factor from single-β decay, as one does to obtain the quenching of 2νββ matrix elements. Later corrections to the currents Klos et al. (2013); Menéndez did not change this fact. The two-body currents have a smaller effect at high momentum transfer, softening the quenching of 0νββ matrix elements. QRPA calculations that used the same currents obtained even less quenching (about 20% Engel et al. (2014)), partly because the average momentum transfer is slightly higher in QRPA calculations and partly because the coefficient of the isoscalar pairing interaction was readjusted after adding the two-body currents, to ensure that 2νββ matrix elements were still correctly reproduced. In any case, these studies suggest that whatever renormalization is due to many-nucleon currents will be milder in 0νββ decay than in single-β and 2νββ decay.
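The arithmetic behind this comparison is worth spelling out. With an illustrative single-β quenching factor of q = 0.74 (the pf-shell value quoted earlier; the 0νββ factor below simply restates the roughly 30% reduction found in the cited work):

```python
# Arithmetic sketch of the comparison above. If single-beta decay needs an
# operator quenching q, the 2vbb matrix element (two Gamow-Teller vertices
# at low momentum transfer) is naively quenched by q**2, while the cited
# calculations find only ~30% reduction of the 0vbb matrix element.
q_single_beta = 0.74                   # illustrative single-beta quenching
naive_2vbb_factor = q_single_beta**2   # what naive squaring would give
computed_0vbb_factor = 0.70            # ~30% reduction found in cited work

print(f"naive q^2        = {naive_2vbb_factor:.2f}")
print(f"computed 0vbb    ~ {computed_0vbb_factor:.2f}")
```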
The smaller quenching at momentum transfers near the pion mass is consistent with studies of muon capture, where the evidence for Gamow-Teller quenching is weaker than in β decay Zinner et al. (2006). Exclusive charged-current electron-neutrino scattering from ¹²C to the ground state of ¹²N also shows little evidence of quenching, at momentum transfers even smaller than the pion mass Hayes and Towner (2000); Volpe et al. (2000); Suzuki et al. (2006). The inclusive cross section, for which additional multipoles are relevant, is harder to calculate, and shell-model calculations disagree with one another by a factor of about two Hayes and Towner (2000); Volpe et al. (2000); Suzuki et al. (2006). Improved calculations, using ab initio methods such as those initiated in Refs. Hayes et al. (2003); Lovato et al. (2016), are clearly needed. Further measurements, preferably in targets that complement ¹²C, would also provide valuable information about quenching at nonzero momentum transfer.
Recently, Ref. Ekström et al. (2014) included two-nucleon currents in coupled-cluster calculations of single-β decay. The authors normal-ordered the operator with respect to a Hartree-Fock reference state rather than the simpler Fermi gas of Ref. Menéndez et al. (2011c). Focusing on carbon and oxygen isotopes, they found that the two-nucleon currents reduce the strength of the στ operator by about 10%. Though this result suggests a very small quenching of 0νββ matrix elements, the potentially coherent contribution of the nuclear core could make many-body currents more effective in the much heavier nuclei that will be used in 0νββ decay experiments. (The normal-ordering approximation that makes the coherence apparent should be carefully explored, however.)
The discussion in this section should make it clear just how important it is to characterize and untangle the sources of the g_A problem. If they lie mainly in complicated many-body effects, the 0νββ matrix elements may be significantly too large (though that conclusion is not uniformly supported by theoretical work in perturbation theory). On the other hand, if the problem is mostly due to many-nucleon currents, then our matrix elements are probably too large by less than a factor of two.
To determine which of the contributions is more important and plumb the consequences for 0νββ decay, we need calculations that treat many-body correlations in a comprehensive way and include many-nucleon currents consistently. It should be possible to do all this in the next five or so years, and we discuss how in the next section. We raise just one more consideration now. As suggested by the discussion of electron-scattering strengths just above, we may be able to learn about the source of quenching by studying other electroweak processes, for instance magnetic moments and magnetic-dipole transitions. The operators for these observables also need phenomenological renormalization to agree with experimental data Towner (1997); Brown and Wildenthal (1987); von Neumann-Cosel et al. (1998). The missing many-body effects are expected to be similar to those in β decay because they emerge from the same nuclear states and similar non-relativistic spin operators. The many-nucleon corrections to the current operators can be quite different, however, because the photon has no axial-vector coupling.
V Improving Matrix Element Calculations in the Next Few Years
V.1 General Ideas
We have seen that the 0νββ matrix elements still carry significant theoretical uncertainty. We believe, however, that recent developments in the shell model, QRPA, GCM, and especially in ab initio nuclear-structure methods and χEFT will finally allow the community of nuclear theorists to produce more accurate matrix elements with real estimates of uncertainty. The approach to that task will have at least three prongs:
1. the improvement of current methods, in particular to accommodate all the collective correlations we know to be important;

2. the development and application of ab initio methods to the initial and final nuclei in ββ decay;

3. a systematic assessment of theoretical uncertainty.
In what follows we address each of these in turn.
V.2 Improving Present Models
V.2.1 Extending Shell-Model Configuration Spaces
The main limitation of the shell model is that it is restricted to small configuration spaces. We have already mentioned that the effects of the neglected single-particle orbitals can be estimated perturbatively Kwiatkowski et al. (2014); Holt and Engel (2013), and that calculations in two oscillator shells are now becoming feasible Iwata et al. (2016). To add more than a few single-particle levels in a nonperturbative way, however, requires new ideas.
One way to mitigate the shell model's shortcomings is to discard many-body configurations that have little effect on the observables one is interested in. The most established approach for doing this is the Monte Carlo shell model (MCSM) Shimizu et al. (2012), which uses statistical sampling to select deformed Slater determinants and then projection to restore angular-momentum symmetry. More recent work uses importance truncation Stumpf et al. (2016) or the density-matrix renormalization group Legeza et al. (2015), though not yet in a systematic way. These schemes allow configuration spaces with dimensions several orders of magnitude larger than those that can be handled with exact diagonalization; MCSM calculations can now work with spaces far beyond the current limits of exact diagonalization Togashi et al. (2016). Such spaces are large enough to allow the extension of shell-model configuration spaces so that they include all spin-orbit partners in ββ-decaying nuclei and all single-particle orbitals found relevant in QRPA or EDF calculations Šimkovic et al. (2009); Rodríguez et al. (2016). The Monte Carlo approach could also facilitate ab initio no-core shell model calculations Abe et al. (2012), which are currently limited to nuclei with fewer than about 20 nucleons Navrátil et al. (2016); Barrett et al. (2013), in isotopes closer to those used in ββ-decay experiments. Calculations in light nuclei are useful for other reasons as well, and we discuss them further in Sec. V.3.
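A naive combinatorial count (our sketch, with hypothetical mid-shell occupations) shows why configuration-space dimensions explode so quickly and why sampling or truncation schemes are needed:

```python
from math import comb

# Rough combinatorial sketch of configuration-space growth. Before symmetry
# restrictions (which reduce the count), the number of Slater determinants
# for n_p valence protons in d_p single-particle states and n_n valence
# neutrons in d_n states is C(d_p, n_p) * C(d_n, n_n).
def naive_dimension(d_p, n_p, d_n, n_n):
    return comb(d_p, n_p) * comb(d_n, n_n)

# The pf shell offers 20 single-particle states per nucleon species;
# a hypothetical mid-shell case already gives an enormous basis:
print(f"{naive_dimension(20, 8, 20, 8):.2e}")  # about 1.6e10 determinants
```

Adding even a few more orbitals per species multiplies this count dramatically, which is the quantitative content of the statement that extending configuration spaces nonperturbatively "requires new ideas."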
Extended shell-model spaces will require suitable nuclear interactions. The phenomenological adjustment that accurate calculations usually require makes it difficult to obtain reliable effective interactions in larger configuration spaces; the number of two-body matrix elements to be constrained phenomenologically increases with the size of the space. The situation would be better if we could dispense with the adjustment, the need for which may be due to the absence of three-nucleon forces in the original interaction Zuker (2003). Recent promising work Otsuka et al. (2010); Jansen et al. (2014); Bogner et al. (2014); Dikmen et al. (2015); Tsunoda et al. (2017) avoids any phenomenological tuning by using χEFT two- and three-body interactions as a starting point to generate effective shell-model Hamiltonians. These ab initio interactions allow a better assessment of uncertainty and appear to be usable in larger configuration spaces. One can obtain transition operators in the same way. We discuss the use of χEFT and nonperturbative many-body methods to create shell-model interactions and operators in Sec. V.3.2.
V.2.2 Adding Correlations to the EDF and the IBM
The EDF matrix elements, as we noted in Sec. III.3, are almost universally larger than matrix elements calculated in any other approach. The reason, as discussed in Refs. Menéndez et al. (2014, 2016), is missing correlations, particularly isoscalar pairing correlations. It is actually not hard to add proton-neutron pairing to the GCM; Ref. Hinohara and Engel (2014) did precisely that within a two-shell calculation that used a multi-separable interaction. There is no reason that the same physics cannot be added to the EDF-based GCM. The main ingredient is a set of HFB calculations that allow proton-neutron mixing, i.e., quasiparticles that are a combination of a neutron and a proton as well as of a particle and a hole. Computer codes to do these calculations are nearly complete Sheikh et al. (2014). Problems with the EDF-based GCM, as noted earlier, arise only when one tries to superpose the projected HFB quasiparticle vacua. Here, with a few limited exceptions, one needs a Hamiltonian rather than a functional. That does not mean, however, that EDF theory can play no role. One might, for example, use EDF-based HFB calculations, with proton-neutron pairing, simply to generate the projected HFB vacua. These states can be constructed so that they include all possible collective correlations, and in particular all correlations relevant to ββ decay. One can then use them to form a basis in which to diagonalize a real Hamiltonian, such as one produced by χEFT, rather than the original density functional. Functionals were designed to work well in mean-field calculations and thus should produce a good correlated basis, with contributions from many more single-particle states than are in the shell model. The Hamiltonian will then determine how these basis states are mixed. The result should combine many of the virtues of shell-model calculations with those of current EDF and QRPA calculations.
Even so, some correlations, e.g. those involving high relative momentum, may still be absent from EDF-based approaches. Within the shell model, as we discuss in Sec. V.3.2, one can construct an effective Hamiltonian and a consistent ββ-decay operator that capture such effects. It has not been as clear how to do that in EDF- or QRPA-based calculations, in which the full many-body Hilbert space does not cleanly separate into included and neglected pieces. Recently, however, techniques to add missing correlations to approximate state vectors, such as those produced by the GCM, have nonetheless been developed; we discuss them in Sec. V.3.3.
Finally, the IBM2 0νββ decay calculations done so far, like those in EDF theory, have no explicit proton-neutron pair degrees of freedom. Bosons that represent isoscalar pairs can certainly be added — a boson model with both isovector and isoscalar proton-neutron pairs is known as IBM4 — and work is in progress to see the effects on ββ decay van Isacker et al. . While it is not obvious that these effects will be large, the new pairs may bring the IBM matrix elements closer to those of the shell model. The IBM2, however, already includes bosons that represent like-particle monopole and quadrupole pairs. The coupling of two spin-1 isoscalar pairs to total angular momentum zero can be represented as a combination of two like-particle monopole or quadrupole pairs, and these are exactly the degrees of freedom the usual IBM2 bosons represent. It may be, then, that present IBM results capture most of the effects of isoscalar pairing without explicit isoscalar bosons. On the other hand, some facts indicate that isoscalar pairing effects are still missing. Figure 13, which is typical of shell-model results, shows that almost the entire matrix element arises from the decay of like-particle monopole pairs or quadrupole pairs, with a sizable cancellation between these two contributions. In the IBM2, by contrast, the d-boson contribution is much smaller than that of the s boson, so that there is very little cancellation Barea and Iachello (2009). The dominance of monopole pairs is much like what one obtains in the shell model when isoscalar pairing is omitted Menéndez et al. (2016).
V.2.3 Higher QRPA and the Overlap Problem
The QRPA can also be improved. It is currently limited by the simplicity of its correlations, which can be represented as one-boson states; the bosons are simplified versions of correlated quasiparticle pairs. Without many-boson states, the Pauli principle is violated and the transition strength to intermediate states is overly concentrated. Accuracy requires that many-boson states be included.
Many studies advance one scheme or another for doing that. The second QRPA and the quasiparticle time-blocking approximation (QTBA) are two representative examples. The second QRPA systematically adds all four-quasiparticle excitations to the two-quasiparticle excitations that enter the usual QRPA. Outside of preliminary investigations in Refs. Raduta et al. (1991); Stoica and Klapdor-Kleingrothaus (2001), however, it has not been systematically developed. Second RPA, without quasiparticles, is further along, both in an ab initio form Papakonstantinou and Roth (2010) and in conjunction with EDF theory Gambacurta et al. (2010, 2015). Though these schemes are computationally intensive, their quasiparticle versions should not be completely intractable and may be both accurate and flexible enough to allow ab initio calculations.
The QTBA is a Green's-function approach to linear response that supplements the iterated ring and ladder diagrams equivalent to the QRPA with a subset of diagrams that contain the emission and reabsorption of QRPA phonons by quasiparticles and the exchange of phonons between quasiparticles Litvinova et al. (2008). When used in conjunction with a relativistic EDF, the QTBA appears to significantly improve predictions of single-β decay strength functions Robin and Litvinova (2016). The method thus has the potential to significantly reduce the amount of important physics omitted by the QRPA.
Any RPA-like method designed to calculate linear response faces a problem when applied to ββ decay, however. As mentioned in Sec. III.2, ββ matrix elements involve separate transition matrix elements to each state of the intermediate nucleus from both the initial and the final ground states. These two contributions must be multiplied and then summed over all intermediate states. But the two QRPA ground states are unrelated, so there is no way to match intermediate states produced by the excitation of one nucleus with those from the excitation of the other. Intermediate-state energies are different in the two cases, and the QRPA provides only transition densities, not full wave functions. Thus even attempts to compute the overlap of two different sets of intermediate-state wave functions must rely on prescriptions.
The only way to solve this problem, at least partially, is to extend the QRPA so that it does provide wave functions. One option is to use wave functions from the simpler quasiparticle Tamm-Dancoff approximation. Another track, taken by Refs. Terasaki (2012) and Terasaki (2013), is to represent the ground states of the initial and final nuclei by quasiparticle coupled-cluster wave functions of the schematic form
|0⟩ ∝ exp[ Σ_{kl,k′l′} (Y X⁻¹)_{kl,k′l′} a†_k a†_l a†_{k′} a†_{l′} ] |HFB⟩ ,   (21)
where the X's and Y's are the usual QRPA amplitudes for the excited states. This form is a natural extension of the quasiboson wave function that the QRPA does provide when pairs of fermionic operators are replaced by bosonic operators (with respect to a particular vacuum). The quality of the approximate wave function in Eq. (21) is not yet known, however. In any event, every extension of the QRPA must face this issue. The method is a small-oscillations approximation, and the initial and final nuclei in ββ decay are often quite different in structure, so they cannot both be only a little different from states in the intermediate nucleus.
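The "prescriptions" for intermediate-state overlaps mentioned above often amount to contracting the amplitudes from the two QRPA calculations. A schematic numerical sketch (our simplification: real amplitudes, random hypothetical values, and no overlap factor between the two HFB vacua, which real implementations include) looks like this:

```python
import numpy as np

# Schematic sketch of a commonly used prescription for matching intermediate
# states built on the initial and final ground states: approximate the
# overlap of two QRPA excitations by contracting their amplitudes,
#   <m|m'> ~ sum_kl ( X_kl X'_kl - Y_kl Y'_kl ).
rng = np.random.default_rng(0)

def qrpa_overlap(X1, Y1, X2, Y2):
    return np.sum(X1 * X2) - np.sum(Y1 * Y2)

def normalized_amplitudes(n):
    """Hypothetical amplitudes obeying the QRPA normalization
    sum(X**2) - sum(Y**2) = 1."""
    X, Y = rng.normal(size=n), 0.1 * rng.normal(size=n)
    norm = np.sqrt(np.sum(X**2) - np.sum(Y**2))
    return X / norm, Y / norm

X1, Y1 = normalized_amplitudes(50)  # a state built on the initial nucleus
X2, Y2 = normalized_amplitudes(50)  # a state built on the final nucleus

print(f"<m|m>  = {qrpa_overlap(X1, Y1, X1, Y1):.3f}")  # 1 by normalization
print(f"<m|m'> = {qrpa_overlap(X1, Y1, X2, Y2):.3f}")  # generally smaller
```

The point of the sketch is only the structure of the contraction; actual calculations must also decide how to pair up states whose energies differ between the two QRPA solutions, which is exactly the ambiguity discussed in the text.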
V.3 Ab Initio Approaches
V.3.1 Kinds of Ab Initio Calculations
The term ab initio, literally "from the beginning," is a little vague in the nuclear-physics context. A truly first-principles calculation of nuclei would involve solving the underlying theory, quantum chromodynamics (QCD), with quark and gluon degrees of freedom. The ambitious framework for doing so is lattice QCD Beane et al. (2011); Aoki et al. (2012); Briceño et al. (2015). In spite of rapid progress, only the lightest nuclei, with two, three, and four nucleons, have been studied thus far, and even these are treated with significant approximations, e.g. at larger-than-physical values of the pion mass. We will not see QCD calculations of the structure of medium-mass and heavy nuclei for some time. Nevertheless, lattice QCD is valuable for the 0νββ program. It can connect the parameters in the EFT diagrams discussed in Sec. II.2.2 with the underlying fundamental beyond-standard-model physics, for example Nicholson et al. . More generally, it provides a way of determining all EFT parameters, which are often poorly constrained by data. Though doing so is computationally demanding and computed parameters are not accurate enough at present Beane et al. (2011); Aoki et al. (2012); Abdel-Rehim et al. (2015), researchers are working to improve their calculations.
In nuclear structure the term ab initio usually refers to calculations that (1) take nucleons — all of them — as the degrees of freedom, and (2) use nuclear interactions and currents obtained from fits to nucleon-nucleon scattering data and properties of the lightest nuclei: the deuteron and isotopes of hydrogen and helium with 3 or 4 nucleons, plus as few (slightly) heavier nuclei as possible. These fits can proceed through a meson-exchange phenomenology that describes the elastic channel of nucleon-nucleon scattering up to and perhaps beyond the pion-production threshold, leading, e.g., to the "Argonne" potentials and other similar interactions Wiringa et al. (1995); Machleidt et al. (1987); Stoks et al. (1994). Phenomenological potentials of this sort work quite well in light nuclei but seem to need improvement for use in medium-mass nuclei Gandolfi et al. (2014).
The fits that determine nuclear interactions can also be based on chiral EFT Meißner (2016); Epelbaum et al. (2009); Hammer et al. (2013); Machleidt and Entem (2011), discussed already in connection with heavy-particle exchange (Sec. II.2.2) and many-nucleon currents (Sec. IV.2). Here we give a very brief description of its use in ab initio nuclear structure. Chiral EFT is an effective theory, based on the symmetries of QCD, that provides a perturbative framework for interactions and currents. It yields a systematic expansion of two- and many-nucleon forces and consistent one-, two-, and many-nucleon currents in powers of the parameters Q/Λ_χ and m_π/Λ_χ, where Q is a typical nucleon momentum, m_π is the pion mass, and Λ_χ is the chiral-symmetry-breaking scale defined in Sec. II.2.2. Once the interactions are fixed, an accurate many-body method is used to calculate nuclear binding energies, radii, excitation spectra, electromagnetic transitions, β-decay rates, and other observables, with error estimates inferred from the power counting and tests of the many-body method.
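A quick numerical sketch conveys what the power counting buys. The breakdown scale below is an assumed round number of the size usually quoted, and the error estimate is the naive one from truncating the expansion at a given chiral order:

```python
# Hedged numerical sketch of the chiral power counting described above.
# With a typical low-momentum scale of order the pion mass and a breakdown
# scale Lambda_chi of several hundred MeV (an assumed round value), the
# expansion parameter and naive truncation errors are:
M_PION = 138.0       # MeV, sets the low-momentum scale
LAMBDA_CHI = 700.0   # MeV, assumed chiral-symmetry-breaking scale

eps = M_PION / LAMBDA_CHI
print(f"expansion parameter ~ {eps:.2f}")
for order in (2, 3, 4):  # NLO, N2LO, N3LO in Weinberg counting (LO is 0)
    print(f"order {order}: naive relative truncation error"
          f" ~ {eps**(order + 1):.2%}")
```

This naive counting is what underlies the claim that χEFT allows "error estimates inferred from the power counting"; in practice the estimates are refined by varying cutoffs and comparing successive orders.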
Chiral EFT is not without its problems. Though they sometimes produce excellent results Hebeler et al. (2015a), the most widely used χEFT nuclear interactions frequently predict binding energies that are too large and radii that are too small, especially in heavy systems Binder et al. (2014); Hergert et al. (2016); Somà et al. (2014). The overbinding can be partly cured by fitting the χEFT couplings to properties of nuclei as heavy as oxygen Ekström et al. (2015), even while omitting interactions beyond the three-nucleon level (and thus limiting the ability to estimate uncertainties). But the theory may still work without that step; some interactions obtained without it predict saturation properties Hebeler et al. (2011) and agree better with experimental radii than others Garcia Ruiz et al. (2016). And fits that include consistent two- and three-nucleon forces at fourth order in the chiral expansion are now underway Hebeler et al. (2015b). Reference Drischler et al. (2016) recently presented initial calculations of neutron matter with these interactions.
Ab