Received: 2020-10-19; Revised: 2020-12-07; Accepted: 2020-12-10; Published online: 2021-02-04
Cite this article: Bastian Kaspschak, Ulf-G. Meißner. How machine learning conquers the unitary limit. Communications in Theoretical Physics, 2021, 73(3): 035101. doi:10.1088/1572-9494/abd84d
1. Introduction
Following the successful use of neural networks in experimental applications such as particle identification, see e.g. [1], much progress has been made in recent years by applying them to various fields of theoretical physics [2–14]. An interesting property of neural networks is that their predictions are achieved in terms of simple mathematical operations. A notable example is given by multilayer perceptrons (MLPs), see section 3. Despite their excellent performance, a major drawback of many neural networks is their lack of interpretability, which is expressed by the term ‘black boxes’. However, there are methods to restore interpretability. A premier example is given in [10]: by investigating patterns in the networks’ weights, it is demonstrated that MLPs develop perturbation theory in terms of Born approximations to predict natural S-wave scattering lengths a0 for shallow potentials. Nevertheless, this approach fails for deeper potentials, especially if they give rise to zero-energy bound states and thereby to the unitary limit ${a}_{0}\to \infty $. The physical reason for this is that the unitary limit is a highly non-perturbative scenario. In addition, there is the technical difficulty of reproducing singularities with neural networks, which requires unconventional architectures.

Note that in its initial formulation, the Bertsch problem for the unitary Fermi gas includes a vanishing effective range ${r}_{0}\to 0$ as an additional requirement for defining the unitary limit, see e.g. [15]. However, in the real physical systems on which we focus in this work, r0 is non-zero and finite for ${a}_{0}\to \infty $, that is, scale invariance is violated. Therefore, the case ${a}_{0}\to \infty $ that we consider as the unitary limit is independent of the effective range and thus less restrictive than the definition in the Bertsch problem. The unitary limit plays an important role in our understanding of bound, strongly interacting fermionic systems [16–21] and can be realized in cold atom experiments, see e.g. [22]. The question therefore arises of how to deal with such a scenario in terms of machine learning. Our idea is to explain the unitary limit as a movable singularity in potential space. This formalism introduces two geometric quantities, f and b0, that are regular for ${a}_{0}\to \infty $ and can therefore be easily approximated by standard MLPs. Finally, natural and unnatural scattering lengths are predicted with sufficient accuracy by composing the respective networks.
The manuscript is organized as follows: in section 2, we introduce discretized potentials and the concept of unitary limit surfaces. In section 3, the scaling factor f is predicted by an ensemble of MLPs. Section 4 addresses the prediction of scattering lengths in the vicinity of Σ1, and section 5 establishes interpretability via Taylor expansions. We close with a discussion and outlook in section 6. Details on the preparation of the data sets and on the training procedure are given in appendices A and B, respectively.
2. Discretized potentials and unitary limit surfaces
As we investigate the unitary limit, we only consider attractive potentials. For simplicity, the following analysis is restricted to non-positive, spherically symmetric potentials V(r)≤0 with finite range ρ. Together with the reduced mass μ, the latter parameterizes all dimensionless quantities. The most relevant for describing low-energy scattering processes turn out to be the dimensionless potential U=−2μρ2V≥0 and the S-wave scattering length a0. An important first step is to discretize potentials, since these can then be treated as vectors ${\boldsymbol{U}}\in {\rm{\Omega }}\subset {{\mathbb{R}}}^{d}$ with non-negative components Un=U(nρ/d)≥0 and become processable by common neural networks. With each such vector we associate the piecewise constant step potential that takes the value Un on the nth radial bin [(n−1)ρ/d, nρ/d).

As a further result of discretization, the potential space is reduced to the first hyperoctant Ω of ${{\mathbb{R}}}^{d}$. Counting bound states naturally splits Ω=${\bigcup }_{i\in {{\mathbb{N}}}_{0}}$ Ωi into pairwise disjoint, half-open regions Ωi, with Ωi containing all potentials with exactly i bound states. All potentials on the (d−1)-dimensional hypersurface ${{\rm{\Sigma }}}_{i}\equiv \partial {{\rm{\Omega }}}_{i-1}\cap {{\rm{\Omega }}}_{i}$ between two neighboring regions, with ${{\rm{\Sigma }}}_{i}\subset {{\rm{\Omega }}}_{i}$, give rise to a zero-energy bound state, see figure 1. Since we observe the unitary limit ${a}_{0}\to \infty $ in this scenario, we refer to Σi as the ith unitary limit surface. Considering the scattering length as a function ${a}_{0}:{\rm{\Omega }}\to {\mathbb{R}}$, this suggests a movable singularity on each unitary limit surface. For simplicity, we focus on the first unitary limit surface Σ1, as this approach easily generalizes to higher order surfaces.

Let ${\boldsymbol{U}}\in {\rm{\Omega }}$ and $f\in {{\mathbb{R}}}^{+}$ be the factor satisfying $f{\boldsymbol{U}}\in {{\rm{\Sigma }}}_{1}$. This means scaling ${\boldsymbol{U}}$ by the unique factor f yields a potential on the first unitary limit surface. While potentials with an empty spectrum must be deepened to obtain a zero-energy bound state, potentials whose spectrum already contains a dimer with finite binding energy E<0 need to be flattened instead. Accordingly, this behavior is reflected in the following inequalities:

$$f\gt 1\ \text{for}\ {\boldsymbol{U}}\in {{\rm{\Omega }}}_{0},\qquad f=1\ \text{for}\ {\boldsymbol{U}}\in {{\rm{\Sigma }}}_{1},\qquad f\lt 1\ \text{otherwise}.$$
Figure 1. Sketch of the regions Ω0 and Ω1 and the first unitary limit surface ${{\rm{\Sigma }}}_{1}\subset {{\rm{\Omega }}}_{1}$ for the degree d=2 of discretization. In this specific case, the potential space Ω is the first quadrant of ${{\mathbb{R}}}^{2}$ and unitary limit surfaces are one-dimensional manifolds.
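To make the discretized setup concrete, the following minimal Python sketch (our own illustration, not code from this work; all names and the propagation scheme are our choices) computes a0 for a step potential by propagating the zero-energy S-wave solution $u^{\prime\prime} (x)=-U(x)\,u(x)$, with x=r/ρ, exactly across the piecewise constant bins and matching to the outer solution $u(x)\propto x-{a}_{0}/\rho $:

```python
import numpy as np

def zero_energy_solution(U):
    """Propagate u''(x) = -U(x) u(x) at zero energy over x in [0, 1]
    (x = r / rho) for a piecewise constant potential with components U.
    Returns u(1) and u'(1)."""
    h = 1.0 / len(U)                     # width of one bin
    u, du = 0.0, 1.0                     # u(0) = 0, arbitrary finite slope
    for Un in U:
        k = np.sqrt(Un)
        if k > 1e-12:                    # exact sin/cos propagation in the bin
            c, s = np.cos(k * h), np.sin(k * h)
            u, du = u * c + du * s / k, -u * k * s + du * c
        else:                            # free propagation for a vanishing bin
            u, du = u + du * h, du
    return u, du

def scattering_length(U):
    """a0 / rho = 1 - u(1) / u'(1), from matching u(x) = c (x - a0 / rho)."""
    u, du = zero_energy_solution(U)
    return 1.0 - u / du

# Square well check: a0 diverges as the depth approaches pi^2 / 4.
print(scattering_length(np.full(64, np.pi**2 / 4 - 0.1)))   # large, negative
```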
3. Predicting f with an ensemble of MLPs
The factor f seems to be a powerful quantity for describing the geometry of the unitary limit surface Σ1: the latter is merely the contour for f=1. It is a simple task to derive f iteratively by scaling a given potential ${\boldsymbol{U}}$ until the scattering length flips sign, see appendix A and the sketch below.

We decide to work with MLPs. These are a widespread and very common class of neural networks and provide an excellent performance for simpler problems. Here, an MLP ${{ \mathcal F }}_{i}$ with L layers is a composition of affine maps ${A}^{(l)}$, each parameterized by a weight matrix and a bias vector, and component-wise activation functions ${\sigma }^{(l)}$,

$${{ \mathcal F }}_{i}={\sigma }^{(L)}\circ {A}^{(L)}\circ \cdots \circ {\sigma }^{(1)}\circ {A}^{(1)}.$$
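Returning to the iterative determination of f: since u′(1) is positive below Σ1, zero on it, and negative just above, the unique scaling factor can be bracketed and bisected. A possible implementation, reusing zero_energy_solution from the sketch in section 2 (again our own illustrative code, valid for potentials below the second unitary limit surface):

```python
import numpy as np

def scaling_factor(U, tol=1e-12):
    """Bisect for the unique f with f * U on the first unitary limit
    surface Sigma_1, where u'(1) changes sign and a0 flips sign."""
    U = np.asarray(U, dtype=float)
    lo = hi = 1.0
    while zero_energy_solution(lo * U)[1] < 0:   # above Sigma_1: flatten
        lo *= 0.5
    while zero_energy_solution(hi * U)[1] > 0:   # below Sigma_1: deepen
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if zero_energy_solution(mid * U)[1] > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(scaling_factor(np.full(64, 1.0)))   # ~ pi^2 / 4 for the flat unit well
```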
Figure 2. Predictions ${ \mathcal F }({\boldsymbol{U}})$ of the scaling factor by the ensemble ${ \mathcal F }$ versus the targets f for all $({\boldsymbol{U}},f)\in {T}_{2}$. The resulting point cloud is very closely distributed around the bisector, which indicates an excellent performance of ${ \mathcal F }$ on the test set T2.
4. Predicting scattering lengths in the vicinity of Σ1
Our key motivation is to predict scattering lengths in the vicinity of Σ1. Being a movable singularity in potential space, the unitary limit itself imposes severe restrictions on MLP architectures and renders training steps unstable. Therefore, we opt for the alternative approach of expressing scattering lengths in terms of regular quantities that can each be easily predicted by MLPs. Given the factor f, we first consider the product ${b}_{0}={a}_{0}(1-f)$. As shown in figure 3, b0 is finite and restricted to a small interval for all considered potentials. This does not imply that f and b0 are globally regular: indeed, f diverges for ${\boldsymbol{U}}\to 0$ and b0 diverges on each higher order unitary limit surface. However, these two scenarios have no impact on our actual analysis.
Figure 3. b0 versus the corresponding factors f for all potentials ${\boldsymbol{U}}$ in the training set T1. Note that b0 is restricted to a small interval. The width of the point cloud suggests that there is no one-to-one relation between b0 and f.
Similar to the ensemble ${ \mathcal F }$, we train a second ensemble ${ \mathcal B }$ of MLPs to predict the regular quantity b0.
Having trained the ensembles ${ \mathcal B }$ and ${ \mathcal F }$ to predict b0 and f precisely (see appendix B for the training procedure), we predict scattering lengths by composing them into the quotient ${ \mathcal A }({\boldsymbol{U}})={ \mathcal B }({\boldsymbol{U}})/(1-{ \mathcal F }({\boldsymbol{U}}))$.
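As an illustration of this composition (a sketch under our own assumptions: each ensemble simply averages its members' outputs and acts on batched potential vectors; none of the naming is from the paper):

```python
import torch
import torch.nn as nn

class Ensemble(nn.Module):
    """Average the predictions of several independently trained MLPs."""
    def __init__(self, members):
        super().__init__()
        self.members = nn.ModuleList(members)

    def forward(self, U):
        return torch.stack([m(U) for m in self.members]).mean(dim=0)

def predict_a0(F_ens, B_ens, U):
    """Quotient A(U) = B(U) / (1 - F(U)): both ensembles predict regular
    quantities; the movable singularity sits at F(U) = 1."""
    return B_ens(U) / (1.0 - F_ens(U))
```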
Figure 4. (a) Predicted scattering lengths ${ \mathcal A }({\boldsymbol{U}})$ for potential wells with depths u, with ${ \mathcal A }$ as given above. (b) Relative errors ${\varepsilon }_{{ \mathcal A }({\boldsymbol{U}})}$ of these predictions, which grow significantly in a small interval around the unitary limit at u=π2/4.
Note that outputs ${ \mathcal A }({\boldsymbol{U}})$ for potentials in the unitary limit f→1 are very sensitive to ${ \mathcal F }({\boldsymbol{U}})$. In this regime, even the smallest errors may cause a large deviation from the target values and thereby corrupt the accuracy of ${ \mathcal A }$. In figure 4(b) we observe significantly larger relative errors ${\varepsilon }_{{ \mathcal A }({\boldsymbol{U}})}=({ \mathcal A }({\boldsymbol{U}})-{a}_{0})/{a}_{0}$ in a small interval around the unitary limit at u=π2/4. Of course, this problem could be mitigated by a more accurate prediction of f, but the underlying difficulty is probably less machine-learning-related and is rather caused by working with large numbers. Nonetheless, the quotient ${ \mathcal A }$ reproduces the basic behavior of a0 sufficiently well for our purposes. Inspecting the prediction-versus-target plot for the test set T2 is another, more general and shape-independent way to convince ourselves of this, see figure 5. Although we notice a broadening of the point cloud for unnaturally large scattering lengths, the point cloud itself remains clearly distributed around the bisector. This implies that ${ \mathcal A }$ predicts natural and unnatural scattering lengths precisely enough, in agreement with its relatively low MAPE of 0.41%, which indicates an overall good performance of ${ \mathcal A }$ on the test set T2.
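The sensitivity near f→1 can be made explicit by a first-order error estimate, which we add here for illustration: writing the predicted scaling factor as f+δf and neglecting the (bounded) error of ${ \mathcal B }$,

$${ \mathcal A }=\frac{{b}_{0}}{1-f-\delta f}\approx \frac{{b}_{0}}{1-f}\left(1+\frac{\delta f}{1-f}\right)={a}_{0}\left(1+\frac{\delta f}{1-f}\right)\quad \Rightarrow \quad {\varepsilon }_{{ \mathcal A }}\approx \frac{\delta f}{1-f},$$

so a fixed absolute error δf in the predicted scaling factor is amplified by the factor 1/(1−f), which diverges on Σ1, in line with figure 4(b).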
Figure 5. Predictions ${ \mathcal A }({\boldsymbol{U}})$ of scattering lengths by the quotient ${ \mathcal A }$ versus the targets a0. The point cloud becomes broader for unnaturally large scattering lengths. Nonetheless, it is still distributed sufficiently close to the bisector, which indicates that ${ \mathcal A }$ generalizes well and reproduces the correct behavior of a0 around Σ1.
5. Taylor expansion for interpretability
Considering the quotient ${ \mathcal A }({\boldsymbol{U}})={ \mathcal B }({\boldsymbol{U}})/(1-{ \mathcal F }({\boldsymbol{U}}))$, we can make reliable predictions of natural and unnatural scattering lengths. We have established a geometrical understanding of the quantities f and b0, predicted by ${ \mathcal F }$ and ${ \mathcal B }$, respectively. However, since both ensembles are ‘black boxes’, their outputs and the outputs of ${ \mathcal A }$ are not interpretable beyond that level. One way to establish interpretability is to consider their Taylor expansions with respect to an appropriate expansion point ${{\boldsymbol{U}}}^{* }$. In the following, we demonstrate this for the ensemble ${ \mathcal F }$ with ${{\boldsymbol{U}}}^{* }$ on the first unitary limit surface: since ${ \mathcal F }$ is regular at ${{\boldsymbol{U}}}^{* }\in {{\rm{\Sigma }}}_{1}$, its Taylor series can be written to first order as

$${ \mathcal F }({\boldsymbol{U}})\approx { \mathcal F }({{\boldsymbol{U}}}^{* })+{\boldsymbol{n}}\cdot ({\boldsymbol{U}}-{{\boldsymbol{U}}}^{* }),\qquad {\boldsymbol{n}}={\rm{\nabla }}{ \mathcal F }({{\boldsymbol{U}}}^{* }).$$

To give an example, we consider the first order Taylor approximation with respect to the potential well ${{\boldsymbol{U}}}^{* }={\pi }^{2}/4\,\left(1,\ldots ,1\right)$ in Σ1. At first we derive ${ \mathcal F }({{\boldsymbol{U}}}^{* })=1-3.19\times {10}^{-5}$, which is indeed very close to the exact value f=1 on Σ1. Due to the rather involved architecture of ${ \mathcal F }$, we calculate the derivatives numerically, that is ${n}_{i}\approx [{ \mathcal F }({{\boldsymbol{U}}}^{* }+{\rm{\Delta }}{{\boldsymbol{e}}}_{i})-{ \mathcal F }({{\boldsymbol{U}}}^{* })]/{\rm{\Delta }}$, with the ith basis vector ${{\boldsymbol{e}}}_{i}$ and the step size Δ=0.01. In figure 6 we can see that ${\boldsymbol{n}}$ is far from collinear to ${{\boldsymbol{U}}}^{* }$, which implies a nontrivial geometry of Σ1. Using this expansion in the quotient ${ \mathcal A }$, the predicted scattering lengths near ${{\boldsymbol{U}}}^{* }$ become interpretable in terms of the scalar product ${\boldsymbol{n}}\cdot ({\boldsymbol{U}}-{{\boldsymbol{U}}}^{* })$.
Figure 6. Components of the normal vector ${\boldsymbol{n}}$ of the unitary limit surface at ${{\boldsymbol{U}}}^{* }={\pi }^{2}/4\,\left(1,\ldots ,1\right)$. As a gradient of ${ \mathcal F }$, this vector points towards the strongest ascent of f, which explains why its components are negative.
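A finite-difference evaluation of the normal vector along these lines might look as follows (illustrative sketch; F stands for any callable surrogate of the trained ensemble and is our own naming):

```python
import numpy as np

def normal_vector(F, U_star, delta=0.01):
    """Forward differences n_i ~ [F(U* + delta e_i) - F(U*)] / delta,
    the numerical gradient of the ensemble F at the expansion point U*."""
    F0 = F(U_star)
    n = np.empty(len(U_star))
    for i in range(len(U_star)):
        U_shift = U_star.copy()
        U_shift[i] += delta        # step along the i-th basis vector
        n[i] = (F(U_shift) - F0) / delta
    return n

# Expansion point on Sigma_1: the potential well of depth pi^2 / 4, e.g.
# n = normal_vector(F, np.full(d, np.pi**2 / 4))
```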
6. Discussion and outlook
The unitary limit ${a}_{0}\to \infty $ is realized by movable singularities in the potential space Ω, each corresponding to a hypersurface ${{\rm{\Sigma }}}_{i}\subset {\rm{\Omega }}$ that we refer to as the ith unitary limit surface. This formalism not only lets one understand the unitary limit in a geometric manner, but also introduces the new quantities f and b0, which are related to the radial distance between the corresponding potential ${\boldsymbol{U}}$ and the first unitary limit surface Σ1. These are regular in the unitary limit and provide an alternative parameterization of low-energy scattering processes. As such, they suffice to derive the S-wave scattering length a0. By training ensembles of MLPs to predict f and b0, respectively, we therefore successfully establish a machine-learning-based description of unnatural as well as natural scattering lengths.

There is one major problem that remains unresolved by the presented approach: predictions ${ \mathcal A }({\boldsymbol{U}})$ of unnaturally large scattering lengths depend sensitively on the precision of the ensemble ${ \mathcal F }$. Minor errors in f cause the predicted first unitary limit surface to deviate slightly from the actual surface Σ1. In the very close neighborhood of the unitary limit, this generates diverging relative errors with respect to the true scattering lengths a0. As the predictions of ${ \mathcal F }$ will always be erroneous to a certain degree, this problem cannot be solved by optimizing the architecture or the training method. Instead, it is less machine-learning-related and rather originates in the handling of large numbers. However, the presented method involving f and b0 is still superior to more conventional and naive approaches like predicting the inverse scattering length 1/a0, which is obviously regular in the unitary limit, and inverting the prediction afterwards. Although the latter would also provide a good estimate of unnatural scattering lengths, it would fail for a0≈0, whereas the divergence of f for extremely shallow potentials ${\boldsymbol{U}}\in {\rm{\Omega }}$ (that is, $\parallel {\boldsymbol{U}}\parallel \ll 1$) can be easily factored out using the identity ${f}_{{\boldsymbol{U}}}=\alpha {f}_{\alpha \cdot {\boldsymbol{U}}}$ for any $\alpha \in {{\mathbb{R}}}^{+}$, such that ${f}_{\alpha \cdot {\boldsymbol{U}}}$ is regular (see the worked example below). For example, when choosing $\alpha =1/\parallel {\boldsymbol{U}}\parallel $, the factors ${f}_{{\boldsymbol{U}}/\parallel {\boldsymbol{U}}\parallel }$ correspond to the radial coordinate of Σ1 in the direction ${\boldsymbol{U}}/\parallel {\boldsymbol{U}}\parallel $ from the origin of Ω, which is clearly finite. Apart from the geometric interpretation of the unitary limit, which simply cannot be provided by classical approaches, we therefore conjecture that the proposed method offers the opportunity of simultaneously determining extremely large and small scattering lengths with sufficient precision. Nevertheless, it does not substitute for a direct solution of the Schrödinger equation: the asymptotics of its solutions are required in order to compute the effective range function and, finally, the scattering lengths. As targets, the latter are an essential part of supervised learning and have to be determined before initiating the training procedure.
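As a quick worked check of the identity ${f}_{{\boldsymbol{U}}}=\alpha {f}_{\alpha \cdot {\boldsymbol{U}}}$ (our own example), consider a potential well of constant depth u. Its first unitary limit surface is reached at depth π2/4, so

$${f}_{u}=\frac{{\pi }^{2}}{4u},\qquad {f}_{\alpha u}=\frac{{\pi }^{2}}{4\alpha u}=\frac{{f}_{u}}{\alpha }\quad \Rightarrow \quad {f}_{u}=\alpha {f}_{\alpha u},$$

and choosing α=1/u isolates the divergence for u→0 in the explicit prefactor, since ${f}_{1}={\pi }^{2}/4$ is finite.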
We recall that both ensembles leave training as ‘black boxes’. By considering their Taylor approximations, we can obtain an interpretable expression for the predicted scattering lengths ${ \mathcal A }({\boldsymbol{U}})$ in terms of a scalar product. This also provides additional geometric insights such as normal vectors on the unitary limit surface.
Note that the presented approach is far more general than the above analysis of Σ1 suggests and, in fact, is a viable option whenever movable singularities come into play. First of all, we could have defined the quantities f and b0 with respect to any higher order unitary limit surface Σi with i>1, since the same geometric considerations as for Σ1 apply. Adapting the training and test sets such that all potentials are distributed around Σi would allow us to train ${ \mathcal F }$ and ${ \mathcal B }$ to predict f and b0, respectively, which finally yields scattering lengths in the vicinity of Σi. This procedure is, however, not even limited to the description of scattering lengths and can be generalized to arbitrary effective range parameters, since these give rise to movable singularities as well. To give an example, we briefly consider the effective range r0, which diverges at the zero-crossings of a0. In analogy to the unitary limit surfaces Σi, we could therefore define the ith zero-crossing surface ${{\rm{\Sigma }}}_{i}^{{\prime} }$ as the (d−1)-dimensional manifold in Ωi that contains all potentials with i bound states and a vanishing scattering length ${a}_{0}\to 0$, such that $| {r}_{0}| \to \infty $. Here, we could define f′ by scaling potentials onto a particular surface, that is ${f}^{{\prime} }{\boldsymbol{U}}\in {{\rm{\Sigma }}}_{i}^{{\prime} }$ for all ${\boldsymbol{U}}\in {\rm{\Omega }}$, and subsequently ${b}_{0}^{{\prime} }={r}_{0}(1-{f}^{{\prime} })$. From this point, all further steps should be clear. Even beyond unitary limit and zero-crossing surfaces, analyzing how the effective range behaves under the presented scaling operation and interpreting the outputs of the corresponding neural networks seems an interesting further step in order to investigate effective range effects and deviations from exact scale invariance.
A downside of the presented method is that, as defined above, it is only capable of approaching one movable singularity Σi. In the case of scattering lengths, this is because b0 diverges at every other unitary limit surface ${{\rm{\Sigma }}}_{{i}^{{\prime} }}$ with ${i}^{{\prime} }\ne i$, due to the divergence of a0 while $f\not\approx 1$. Let us define ${{\rm{\Omega }}}_{i}^{{\prime} }$ as the subset of all potentials ${\boldsymbol{U}}\in {\rm{\Omega }}$ that are surrounded by the above-mentioned zero-crossing surfaces ${{\rm{\Sigma }}}_{i}^{{\prime} }$ and ${{\rm{\Sigma }}}_{i+1}^{{\prime} }$, that is, for all ${\boldsymbol{U}}\in {{\rm{\Omega }}}_{i}^{{\prime} }$ there are α<1 and β≥1 such that $\alpha {\boldsymbol{U}}\in {{\rm{\Sigma }}}_{i}^{{\prime} }$ and $\beta {\boldsymbol{U}}\in {{\rm{\Sigma }}}_{i+1}^{{\prime} }$. The problem can be solved by redefining f to scale potentials between two zero-crossing surfaces onto the enclosed unitary limit surface ${{\rm{\Sigma }}}_{i+1}$, that is $f{\boldsymbol{U}}\in {{\rm{\Sigma }}}_{i+1}$ for all ${\boldsymbol{U}}\in {{\rm{\Omega }}}_{i}^{{\prime} }$. As a consequence, f becomes discontinuous and behaves similarly to an inverted sawtooth function, which accordingly requires discontinuous activations in the MLPs ${{ \mathcal F }}_{i}$. Note that even after the redefinition, b0 remains continuous, as it vanishes on the zero-crossing surfaces due to a0=0, which is exactly where the redefined f has a jump discontinuity.
The idea to study manifolds in potential space need not be restricted to movable singularities of effective range parameters, but can be generalized to arbitrary contours of low-energy variables. To give an example, consider the (d−1)-dimensional hypersurface ${{\rm{\Sigma }}}_{i}^{(B)}$ that consists of all discretized potentials ${\boldsymbol{U}}\in {\rm{\Omega }}$ that give rise to i bound states and whose shallowest bound state has the binding energy B. Note that for B=0, this exactly reproduces the ith unitary limit surface ${{\rm{\Sigma }}}_{i}={{\rm{\Sigma }}}_{i}^{(0)}$; in this case, the shallowest bound state is a zero-energy bound state. Otherwise, that is if $B\ne 0$, the scattering lengths of all points on ${{\rm{\Sigma }}}_{i}^{(B)}$ must be finite and, thus, there cannot be an overlap between ${{\rm{\Sigma }}}_{i}^{(B)}$ and any unitary limit surface Σj, such that ${{\rm{\Sigma }}}_{i}^{(B)}\cap {{\rm{\Sigma }}}_{j}=\varnothing $ for all $i,j\in {\mathbb{N}}$. Here, unitary limit surfaces mark important boundaries between scattering states and bound state spectra: by crossing a unitary limit surface, a scattering state undergoes dramatic changes to join the spectrum as a new, shallowest bound state. When analyzing a given system in a finite periodic box, instead, zero-energy bound states resemble deeper bound states much more closely. In this context we refer to lattice Monte Carlo simulations probing the unitary limit in a finite volume, see [24].
Appendix A. Preparation of data sets
As a consequence of discretization, the lth partial wave is defined piecewise: between the transition points rn−1 and rn it is given as a linear combination of spherical Bessel and Neumann functions,

$${R}_{l}^{(n)}(r)={\alpha }_{n}\,{j}_{l}({k}_{n}r)+{\beta }_{n}\,{n}_{l}({k}_{n}r),$$

with the local momentum kn inside the nth bin and coefficients αn, βn fixed by matching at the transition points. The most general way to derive any effective range expansion parameter ${Q}_{l}^{(j)}(\varkappa )$ for arbitrary expansion points $\varkappa \in {\mathbb{C}}$ in the complex momentum plane is a contour integration along a circular contour γ with radius κγ around ϰ. Applying Cauchy’s integral theorem then yields the jth Taylor coefficient,

$${Q}_{l}^{(j)}(\varkappa )=\frac{1}{2\pi {\rm{i}}}{\oint }_{\gamma }\frac{{Q}_{l}(z)}{{(z-\varkappa )}^{j+1}}\,{\rm{d}}z.$$
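Numerically, such a contour integral can be approximated by a uniform trapezoidal sum over N points on the circle, which converges rapidly for analytic integrands. A small sketch with our own naming, returning the jth Taylor coefficient:

```python
import numpy as np
from math import factorial

def taylor_coefficient(Q, center, j, radius=0.5, N=256):
    """j-th Taylor coefficient of an analytic function Q around `center`,
    1 / (2 pi i) * oint Q(z) / (z - center)^(j+1) dz,
    approximated on N equidistant points of a circle of given radius."""
    theta = 2.0 * np.pi * np.arange(N) / N
    z = center + radius * np.exp(1j * theta)
    dz = 1j * radius * np.exp(1j * theta) * (2.0 * np.pi / N)
    return np.sum(Q(z) / (z - center) ** (j + 1) * dz) / (2.0j * np.pi)

# Sanity check with Q = exp: the coefficient at order j is 1 / j!.
print(taylor_coefficient(np.exp, 0.0, 3).real, 1.0 / factorial(3))
```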
While generating the training and test sets, we must ensure that there are no overrepresented potential shapes in the respective data set. To maintain complexity, this suggests generating potentials with randomized components Un. An intuitive approach is therefore to produce them via Gaussian random walks: given d normally distributed random variables X1, …, Xd, the components Un are built from the partial sums ${\sum }_{m\leq n}{X}_{m}$, reflected and rescaled such that the resulting step potential is non-negative and covers the desired range of average depths, see figure A1(a).
Figure A1. (a) Distribution of the square root $\sqrt{u}$ of the average depth over the training set. By construction, this distribution is uniform. (b) Bimodal distribution of the scattering length a0 over the training set. Note that extremely large scattering lengths are not displayed in this histogram.
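A minimal generator in this spirit (our own sketch: the reflection via the absolute value and the rescaling to a uniformly distributed $\sqrt{u}$, cf. figure A1(a), are assumptions about the published prescription):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_walk_potential(d=64, u_max=10.0):
    """One discretized potential from a reflected Gaussian random walk,
    rescaled such that the square root of the average depth u is uniform
    in [0, sqrt(u_max)], cf. figure A1(a)."""
    walk = np.abs(np.cumsum(rng.normal(size=d)))   # non-negative shape
    u = rng.uniform(0.0, np.sqrt(u_max)) ** 2      # uniform in sqrt(u)
    return walk * u / walk.mean()                  # set the average depth to u

U = random_walk_potential()
```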
Appendix B. Training by gradient descent
Given a data set $D\subseteq {\rm{\Omega }}\times {{\mathbb{R}}}^{n}$, there are several ways to measure the performance of a neural network ${ \mathcal N }:{\rm{\Omega }}\to {{\mathbb{R}}}^{n}$ on D. For this purpose we have already introduced the MAPE, which we evaluate on the test set D=T2 after training; lower MAPEs are associated with better performance. Such a function $L:{\rm{\Gamma }}\to {{\mathbb{R}}}^{+}$ that maps a neural network to a non-negative real number is called a loss function. The weight space Γ is the configuration space of the used neural network architecture and as such is spanned by all internal parameters (e.g. all weights and biases of an MLP). Therefore, we can understand all neural networks ${ \mathcal N }\in {\rm{\Gamma }}$ of the given architecture as points in weight space. The goal all training algorithms have in common is to find the global minimum of a given loss function in weight space. It is important to note that loss functions become highly non-convex for larger data sets and for deeper, more sophisticated architectures. As a consequence, training usually reduces to finding a well-performing local minimum.

A prominent family of training algorithms are gradient descent techniques. These are iterative, with each iteration corresponding to a step the network takes in weight space. The direction of the steepest loss descent at the current position ${ \mathcal N }\in {\rm{\Gamma }}$ is given by the negative gradient of $L({\boldsymbol{t}},{ \mathcal N }({\boldsymbol{U}}))$. Updating the internal parameters along this direction is the name-giving feature of gradient descent techniques. This suggests the update rule

$${ \mathcal N }\ \mapsto \ { \mathcal N }-\eta \,{{\rm{\nabla }}}_{{ \mathcal N }}L({\boldsymbol{t}},{ \mathcal N }({\boldsymbol{U}})),$$

with the learning rate η>0 controlling the step size.
Usually, the order of training samples $({\boldsymbol{U}},{\boldsymbol{t}})$ is randomized to achieve faster learning progress and to make training more robust against badly performing local minima. Therefore, this technique is also called stochastic gradient descent. Important alternatives to mention are mini-batch gradient descent and batch gradient descent, where update steps are not taken with respect to the loss $L({\boldsymbol{t}},{ \mathcal N }({\boldsymbol{U}}))$ of a single sample, but with respect to the batch loss

$${L}_{B}=\frac{1}{| B| }\sum _{({\boldsymbol{U}},{\boldsymbol{t}})\in B}L({\boldsymbol{t}},{ \mathcal N }({\boldsymbol{U}}))$$

of a batch $B\subseteq D$, which is a smaller subset of the data set in the mini-batch case and the entire data set D in the batch case.
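In runnable form, one stochastic gradient descent epoch along these lines reads (a generic sketch, not the authors' implementation; grad_loss is an assumed callable returning the per-sample parameter gradients):

```python
import random

def sgd_epoch(params, grad_loss, data, lr=1e-3):
    """Visit the samples (U, t) in random order and update each parameter
    array in params against the gradient of the per-sample loss."""
    random.shuffle(data)                 # the 'stochastic' ingredient
    for U, t in data:
        grads = grad_loss(params, U, t)  # d L(t, N(U)) / d params
        for p, g in zip(params, grads):
            p -= lr * g                  # step along the steepest descent
    return params
```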
For training the members ${{ \mathcal F }}_{i}$ and ${{ \mathcal B }}_{i}$ of both ensembles ${ \mathcal F }$ and ${ \mathcal B }$, we apply the same training procedure using the machine learning framework provided by PyTorch [28]: weights and biases are initialized via the He initialization [27]. We use the Adamax optimizer with the batch size B=10 to minimize the L1 loss, that is, the absolute error $| {\boldsymbol{t}}-{ \mathcal N }({\boldsymbol{U}})| $ averaged over each batch.
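A corresponding PyTorch setup could look as follows (a minimal sketch under the stated choices; the layer widths, learning rate, epoch count, and the tensors U_train and f_train are placeholders, not the published hyperparameters):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

d = 64
model = nn.Sequential(                   # one ensemble member
    nn.Linear(d, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)
for layer in model:                      # He initialization [27]
    if isinstance(layer, nn.Linear):
        nn.init.kaiming_normal_(layer.weight)
        nn.init.zeros_(layer.bias)

loader = DataLoader(TensorDataset(U_train, f_train), batch_size=10, shuffle=True)
optimizer = torch.optim.Adamax(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()                    # mean absolute error per batch

for epoch in range(200):
    for U, t in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(U), t)      # L1 loss of the batch
        loss.backward()
        optimizer.step()                 # Adamax update
```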
Acknowledgments
We thank Hans-Werner Hammer and Bernard Metsch for useful comments. We acknowledge partial financial support from the Deutsche Forschungsgemeinschaft (Project-ID 196253076 - TRR 110, ‘Symmetries and the Emergence of Structure in QCD’). Further support was provided by the Chinese Academy of Sciences (CAS) President’s International Fellowship Initiative (PIFI) (Grant No. 2018DM0034), by the EU (Strong2020) and by the VolkswagenStiftung (Grant No. 93562).

References
[1] DOI: 10.1038/s41586-018-0361-2
[2] DOI: 10.1088/0004-637X/733/1/10
[3] DOI: 10.1016/j.cpc.2011.12.026
[4] DOI: 10.1093/mnras/stu642
[5] DOI: 10.1126/science.aag2302
[6] DOI: 10.1103/PhysRevA.96.042113
[7] DOI: 10.1103/PhysRevB.96.184410
[8] DOI: 10.1016/j.physletb.2017.10.024
[9] DOI: 10.1103/PhysRevD.98.023019
[10] DOI: 10.1103/PhysRevA.98.010701
[11] DOI: 10.1103/PhysRevC.99.064307
[12] DOI: 10.1103/PhysRevLett.121.111801
[13] DOI: 10.1007/JHEP12(2019)122
[14] DOI: 10.1016/j.physrep.2019.11.001
[15] DOI: 10.1103/PhysRevC.60.054311
[16] DOI: 10.1016/0370-2693(70)90349-7
[17] DOI: 10.1103/PhysRevA.63.043606
[18] DOI: 10.1016/j.physrep.2006.03.001
[19] DOI: 10.1103/PhysRevLett.96.090404
[20] DOI: 10.1103/PhysRevB.73.115112
[21] DOI: 10.1103/PhysRevLett.118.202501
[22] DOI: 10.1038/nature04626
[23]
[24] DOI: 10.1103/PhysRevA.87.023615
[25] DOI: 10.1109/3.62122
[26]
[27]
[28]