Abstract In this paper, we construct the Darboux transformation (DT) for the reverse-time integrable nonlocal nonlinear Schrödinger equation by the loop group method. Then we utilize the DT to derive soliton solutions with zero seed. We investigate the dynamical properties of those solutions and present a sufficient condition for the non-singularity of multi-soliton solutions. Furthermore, the asymptotic analysis of bounded multi-soliton solutions is also established through the determinant formula.
Keywords: nonlocal nonlinear Schrödinger equation; multi-soliton solution; singularity; asymptotic analysis
Cite this article: Wei-Jing Tang, Zhang-nan Hu, Liming Ling. Bounded multi-soliton solutions and their asymptotic analysis for the reversal-time nonlocal nonlinear Schrödinger equation. Communications in Theoretical Physics, 2021, 73(10): 105001. doi:10.1088/1572-9494/ac08fb
1. Introduction
Integrable nonlocal nonlinear Schrödinger equations (nNLSE) play a vital role in mathematical physics [1–8] and have been studied extensively [9–15]. This type of equation is symmetric because it is invariant under the joint transformation x→−x, t→−t and complex conjugation. For an AKNS spectral problem, the reduction $r(x,t)=\sigma {q}^{\ast }(x,t)$, $\sigma =\pm 1$, was thought to be the only interesting one until 2013. However, Ablowitz and Musslimani [9] discovered that there existed another interesting reduction $r(x,t)=\sigma {q}^{* }(-x,t)$ which results in a new form of integrable nNLSE:$\begin{eqnarray*}{\rm{i}}{q}_{t}(x,t)={q}_{{xx}}(x,t)-2\sigma {q}^{2}(x,t){q}^{* }(-x,t).\end{eqnarray*}$It was not long before they found two more reductions of the AKNS scattering problem: $r(x,t)=\sigma q(-x,-t)$ and $r(x,t)\,=\sigma q(x,-t)$, which gave rise to the so-called reverse-space–time NLS and reverse-time NLS equations, respectively, as reported in [16]:$\begin{eqnarray*}\begin{array}{rcl}{\rm{i}}{q}_{t}(x,t) & = & {q}_{{xx}}(x,t)-2\sigma {q}^{2}(x,t)q(-x,-t),\\ {\rm{i}}{q}_{t}(x,t) & = & {q}_{{xx}}(x,t)-2\sigma {q}^{2}(x,t)q(x,-t).\end{array}\end{eqnarray*}$
In this paper, we focus on a nonlocal reverse-time NLS equation [16, 17]$\begin{eqnarray}{\rm{i}}{p}_{t}(x,t)={p}_{{xx}}(x,t)+2\sigma {p}^{2}(x,t)p(x,-t),\qquad \end{eqnarray}$which was generalized by Ma [18] to a multi-component one. Yang [17] has derived general multi-solitons for three types of nNLSE, including the reverse-time, reverse-space and reverse-space–time nNLSE, and also presented a unified Riemann–Hilbert framework for them. Ma utilized the inverse scattering transformation to construct N-soliton solutions of multi-component nNLS equations within the framework of the Riemann–Hilbert problem [18]. Many other scholars have also contributed to the study of the reverse-time nNLSE. Lou [19] has established some new methods to solve nonlinear systems, such as the full reversal invariant method. Very recently, Ye and Zhang [20] constructed the general soliton solutions with zero and non-zero background for a reverse-time nNLSE via a matrix version of the binary Darboux transformation (DT). They derived the formulae for multi-solitons and high-order solitons in determinant form. It is seen that the single soliton could exponentially blow up or decay. Asymptotic analysis has also been established for two-soliton solutions of the nonlocal complex coupled dispersionless equation [21].
In fact, the soliton solutions of integrable equations can be constructed by many methods, such as the inverse scattering method [22–26], the Hirota bilinear method [27–29], the DT [30–32] and so on. As we know, for a physical system it is more interesting to find bounded, non-singular multi-soliton solutions. For a nonlocal integrable model, issues such as singularity and unboundedness arise in the process of searching for soliton solutions, and there are few studies in this area [33, 34]. In this work, we would like to explore a sufficient condition for bounded multi-soliton solutions of the nonlocal time-reversal NLS equation. What is more, the asymptotic analysis of bounded multi-soliton solutions to the reverse-time nNLSE can be established.
In this paper we use the DT method to construct the multi-soliton solutions for the nNLSE (1) with the aid of the loop group method [35]. By the loop group construction, the soliton solutions admit a compact determinant representation, which is beneficial for analyzing the singularity of soliton solutions. Then we present a sufficient condition for a symmetric bounded multi-soliton solution. Furthermore, the asymptotic analysis for the multi-soliton solution is performed by the determinant technique, in which the modulus of the multi-soliton solutions can be approximately decomposed into a sum of single-soliton solutions. The multi-soliton solutions exhibit a structure similar to that of the classical NLS equation, but they cannot be decomposed directly since the phase term has no definite limit. However, we find that the modulus of the multi-soliton solutions can be decomposed as $t\to \pm \infty $. Our proposed method provides a way to implement the singularity and asymptotic analysis for the time-reversal nonlocal integrable system, and it can be extended to other nonlocal NLS equations and vector nonlocal NLS equations [36–38].
The rest of the paper is organized as follows. In section 2, we develop the N-fold DT for equation (1) by the loop group method. In section 3, the soliton solutions for equation (1) are constructed through the DT with zero seed, and we analyze the singularity and asymptotic behavior of the solutions obtained. In the final section, we give some discussions and conclusions.
2. DT for the time-reversal nonlocal nonlinear Schrödinger equation
As the nNLSE (1) can be regarded as a special reduction of the AKNS system, we firstly consider the AKNS system without reduction in order to obtain the DT for nNLSE (1) later. The AKNS system that we are going to explore is as follows:$\begin{eqnarray}\begin{array}{l}{{\boldsymbol{\Psi }}}_{x}={\boldsymbol{U}}{\boldsymbol{\Psi }},\qquad {\boldsymbol{U}}({\boldsymbol{Q}};\lambda )\equiv {\rm{i}}(\lambda {{\boldsymbol{\sigma }}}_{3}+{\boldsymbol{Q}}),\\ {{\boldsymbol{\Psi }}}_{t}={\boldsymbol{V}}{\boldsymbol{\Psi }},\qquad {\boldsymbol{V}}({\boldsymbol{Q}};\lambda )\equiv {\rm{i}}({\lambda }^{2}{{\boldsymbol{\sigma }}}_{3}+\lambda {\boldsymbol{Q}})+{{\boldsymbol{V}}}_{0},\\ {{\boldsymbol{V}}}_{0}({\boldsymbol{Q}})=\displaystyle \frac{1}{2}{{\boldsymbol{\sigma }}}_{3}({{\boldsymbol{Q}}}_{x}-{\rm{i}}{{\boldsymbol{Q}}}^{2}),\end{array}\end{eqnarray}$where$\begin{eqnarray*}{\boldsymbol{Q}}=\left[\begin{array}{cc}0 & \sigma p(x,-t)\\ p(x,t) & 0\end{array}\right],\qquad {{\boldsymbol{\sigma }}}_{3}=\left[\begin{array}{cc}1 & 0\\ 0 & -1\end{array}\right].\end{eqnarray*}$Meanwhile, note that the reverse-time character implies the following symmetric relationship between ${\boldsymbol{U}}$ and ${\boldsymbol{V}}$ matrices [18, 39]:$\begin{eqnarray}\begin{array}{rcl}{{\boldsymbol{U}}}^{\top }(x,-t;-\lambda ) & = & -{\boldsymbol{C}}{\boldsymbol{U}}(x,t;\lambda ){{\boldsymbol{C}}}^{\top },\\ {{\boldsymbol{V}}}^{\top }(x,-t;-\lambda ) & = & {\boldsymbol{C}}{\boldsymbol{V}}(x,t;\lambda ){{\boldsymbol{C}}}^{\top },\end{array}\end{eqnarray}$where$\begin{eqnarray*}{\boldsymbol{C}}=\left[\begin{array}{cc}1 & 0\\ 0 & -\sigma \end{array}\right].\end{eqnarray*}$Based on the symmetric relation (3) and the existence and uniqueness theorem for ordinary differential equations, we have$\begin{eqnarray}{{\boldsymbol{\Psi }}}^{\top }(x,-t;-\lambda ){\boldsymbol{C}}{\boldsymbol{\Psi }}(x,t;\lambda ){{\boldsymbol{C}}}^{-1}={\mathbb{I}}\end{eqnarray}$with the condition ${\boldsymbol{\Psi }}(0,0;\lambda )={\mathbb{I}}.$ Moreover, if the column vector ${{\boldsymbol{\psi }}}_{1}(x,t;{\lambda }_{1})$ is a solution of the Lax pair (2) at $\lambda ={\lambda }_{1}$, then the row vector ${{\boldsymbol{\psi }}}_{1}^{\top }(x,-t;{\lambda }_{1}){\boldsymbol{C}}$ satisfies the following adjoint Lax pair:$\begin{eqnarray}-{{\boldsymbol{\Phi }}}_{x}={\boldsymbol{\Phi }}{\boldsymbol{U}},\,\,-{{\boldsymbol{\Phi }}}_{t}={\boldsymbol{\Phi }}{\boldsymbol{V}}\end{eqnarray}$at $\lambda =-{\lambda }_{1}$.
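The symmetry relation (3) can be checked symbolically. The following is a minimal sympy sketch (not part of the original paper); p1 and p2 stand for p(x,t) and p(x,−t), treated as independent functions of x, with t entering only as a parameter.

```python
import sympy as sp

x, lam, sigma = sp.symbols('x lambda sigma')
p1, p2 = sp.Function('p1')(x), sp.Function('p2')(x)   # p(x,t) and p(x,-t)

sigma3 = sp.Matrix([[1, 0], [0, -1]])
C = sp.Matrix([[1, 0], [0, -sigma]])

def U(q12, q21, lm):
    Q = sp.Matrix([[0, q12], [q21, 0]])
    return sp.I*(lm*sigma3 + Q)

def V(q12, q21, lm):
    Q = sp.Matrix([[0, q12], [q21, 0]])
    V0 = sp.Rational(1, 2)*sigma3*(Q.diff(x) - sp.I*Q**2)
    return sp.I*(lm**2*sigma3 + lm*Q) + V0

# U(x,t;lambda) carries Q = [[0, sigma*p(x,-t)], [p(x,t), 0]];
# U(x,-t;-lambda) carries Q = [[0, sigma*p(x,t)], [p(x,-t), 0]] with lambda -> -lambda.
Ut, Urt = U(sigma*p2, p1, lam), U(sigma*p1, p2, -lam)
Vt, Vrt = V(sigma*p2, p1, lam), V(sigma*p1, p2, -lam)

for s in (1, -1):                                      # sigma = +1 and sigma = -1
    print(sp.simplify((Urt.T + C*Ut*C.T).subs(sigma, s)))   # zero matrix
    print(sp.simplify((Vrt.T - C*Vt*C.T).subs(sigma, s)))   # zero matrix
```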
Combining the loop group method [35] and the above symmetric relationship (3), it follows that the DT for system (2) can be represented as$\begin{eqnarray}\begin{array}{rcl}{{\boldsymbol{T}}}_{1}(\lambda ;x,t) & = & {\mathbb{I}}-\displaystyle \frac{2{\lambda }_{1}}{\lambda +{\lambda }_{1}}{{\boldsymbol{P}}}_{1}(x,t),\\ {{\boldsymbol{P}}}_{1}(x,t) & = & \displaystyle \frac{{{\boldsymbol{\psi }}}_{1}(x,t;{\lambda }_{1}){{\boldsymbol{\psi }}}_{1}^{\top }(x,-t;{\lambda }_{1}){\boldsymbol{C}}}{{{\boldsymbol{\psi }}}_{1}^{\top }(x,-t;{\lambda }_{1}){\boldsymbol{C}}{{\boldsymbol{\psi }}}_{1}(x,t;{\lambda }_{1})},\end{array}\end{eqnarray}$where the ${{\boldsymbol{\psi }}}_{1}(x,t;{\lambda }_{1})$ in the DT is a special solution to system (2) at $\lambda ={\lambda }_{1}$. As ${{\boldsymbol{\psi }}}_{1}(x,t;{\lambda }_{1})$ can be written as $\left(\begin{array}{c}{{\boldsymbol{\psi }}}_{1}^{(1)}(x,t;{\lambda }_{1})\\ {{\boldsymbol{\psi }}}_{1}^{(2)}(x,t;{\lambda }_{1})\end{array}\right)$, the corresponding Bäcklund transformation between the old potential function and the new one is given by$\begin{eqnarray}\begin{array}{rcl}{{\boldsymbol{Q}}}^{[1]} & = & {\boldsymbol{Q}}+2{\lambda }_{1}[{{\boldsymbol{P}}}_{1},{{\boldsymbol{\sigma }}}_{3}],\,\,{\rm{i}}.{\rm{e}}.\,\,{p}^{[1]}(x,t)\\ & = & p(x,t)+4{\lambda }_{1}\displaystyle \frac{{{\boldsymbol{\psi }}}_{1}^{(2)}(x,t;{\lambda }_{1}){{\boldsymbol{\psi }}}_{1}^{(1)}(x,-t;{\lambda }_{1})}{{{\boldsymbol{\psi }}}_{1}^{\top }(x,-t;{\lambda }_{1}){\boldsymbol{C}}{{\boldsymbol{\psi }}}_{1}(x,t;{\lambda }_{1})}.\end{array}\end{eqnarray}$Note that the (2,1) entry of $[{{\boldsymbol{P}}}_{1},{{\boldsymbol{\sigma }}}_{3}]$ equals $2{({{\boldsymbol{P}}}_{1})}_{21}$, which accounts for the factor $4{\lambda }_{1}$ above. In what follows, we verify the validity of the above DT (6) and the corresponding Bäcklund transformation (7). Then, we can establish the following theorem.
Theorem 1. The DT (6) converts the Lax pair (2) into a new one$\begin{eqnarray*}{{\boldsymbol{\Psi }}}_{x}^{[1]}={\boldsymbol{U}}({{\boldsymbol{Q}}}^{[1]};\lambda ){{\boldsymbol{\Psi }}}^{[1]},\qquad {{\boldsymbol{\Psi }}}_{t}^{[1]}={\boldsymbol{V}}({{\boldsymbol{Q}}}^{[1]};\lambda ){{\boldsymbol{\Psi }}}^{[1]},\end{eqnarray*}$where ${{\boldsymbol{\Psi }}}^{[1]}(x,t;\lambda )={{\boldsymbol{T}}}_{1}(x,t;\lambda ){\boldsymbol{\Psi }}(x,t;\lambda )$, and ${{\boldsymbol{Q}}}^{[1]}$ is given by equation (7).
The proof of this theorem is equivalent to verify the following two equations:$\begin{eqnarray*}\begin{array}{rcl}{{\boldsymbol{T}}}_{1,x}{{\boldsymbol{T}}}_{1}^{-1}+{{\boldsymbol{T}}}_{1}{\boldsymbol{U}}({\boldsymbol{Q}};\lambda ){{\boldsymbol{T}}}_{1}^{-1}= {\boldsymbol{U}}({{\boldsymbol{Q}}}^{[1]};\lambda ),\\ {{\boldsymbol{T}}}_{1,t}{{\boldsymbol{T}}}_{1}^{-1}+{{\boldsymbol{T}}}_{1}{\boldsymbol{V}}({\boldsymbol{Q}};\lambda ){{\boldsymbol{T}}}_{1}^{-1}= {\boldsymbol{V}}({{\boldsymbol{Q}}}^{[1]};\lambda ),\end{array}\end{eqnarray*}$and the symmetric relationship (3). Defining$\begin{eqnarray*}{\boldsymbol{F}}(x,t;\lambda )\equiv {{\boldsymbol{T}}}_{1,x}{{\boldsymbol{T}}}_{1}^{-1}+{{\boldsymbol{T}}}_{1}{\boldsymbol{U}}{{\boldsymbol{T}}}_{1}^{-1}-{\boldsymbol{U}}({{\boldsymbol{Q}}}^{[1]};\lambda ),\end{eqnarray*}$through direct calculation, we have$\begin{eqnarray*}\begin{array}{rcl}{{\boldsymbol{T}}}_{1,x}{{\boldsymbol{T}}}_{1}^{-1} & = & -\displaystyle \frac{2{\lambda }_{1}}{\lambda +{\lambda }_{1}}{{\boldsymbol{P}}}_{1,x}\left({\mathbb{I}}+\displaystyle \frac{2{\lambda }_{1}}{\lambda -{\lambda }_{1}}{{\boldsymbol{P}}}_{1}\right),\\ {{\boldsymbol{T}}}_{1}{\boldsymbol{U}}{{\boldsymbol{T}}}_{1}^{-1} & = & \left({\mathbb{I}}-\displaystyle \frac{2{\lambda }_{1}}{\lambda +{\lambda }_{1}}{{\boldsymbol{P}}}_{1}\right){\boldsymbol{U}}({\boldsymbol{Q}};\lambda )\left({\mathbb{I}}+\displaystyle \frac{2{\lambda }_{1}}{\lambda -{\lambda }_{1}}{{\boldsymbol{P}}}_{1}\right).\end{array}\end{eqnarray*}$Based on above equations, we can obtain the residue for function ${\boldsymbol{F}}(x,t;\lambda )$ as$\begin{eqnarray*}\begin{array}{l}\mathop{\mathrm{Res}}\limits_{\lambda ={\lambda }_{1}}{\boldsymbol{F}}(x,t;\lambda )=-2{\lambda }_{1}{{\boldsymbol{P}}}_{1,x}{{\boldsymbol{P}}}_{1}+2{\lambda }_{1}{{\boldsymbol{P}}}_{1}({\mathbb{I}}-{{\boldsymbol{P}}}_{1}){\boldsymbol{U}}({\boldsymbol{Q}};\lambda )\\ =-2{\lambda }_{1}\left[\frac{{{\boldsymbol{\psi }}}_{1}(x,t;{\lambda }_{1})}{{{\boldsymbol{\psi }}}_{1}^{\top }(x,-t;-{\lambda }_{1}){\boldsymbol{C}}{{\boldsymbol{\psi }}}_{1}(x,t;{\lambda }_{1})}{{\boldsymbol{\psi }}}_{1,x}^{\top }(x,-t;-{\lambda }_{1}){\boldsymbol{C}}{{\boldsymbol{P}}}_{1}\right.\\ +{\left(\frac{{{\boldsymbol{\psi }}}_{1}(x,t;{\lambda }_{1})}{{{\boldsymbol{\psi }}}_{1}^{\top }(x,-t;-{\lambda }_{1}){\boldsymbol{C}}{{\boldsymbol{\psi }}}_{1}(x,t;\lambda )}\right)}_{x}{{\boldsymbol{\psi }}}_{1}^{\top }(x,-t;-{\lambda }_{1})\\ \left.{\boldsymbol{C}}{{\boldsymbol{P}}}_{1}-{{\boldsymbol{P}}}_{1}({\mathbb{I}}-{{\boldsymbol{P}}}_{1}){\boldsymbol{U}}({\boldsymbol{Q}};\lambda )\right]\\ =\,0.\end{array}\end{eqnarray*}$Similarly, we have $\mathop{\mathrm{Res}}\limits_{\lambda =-{\lambda }_{1}}{\boldsymbol{F}}(x,t;\lambda )=0$. Thus the function ${\boldsymbol{F}}(x,t;\lambda )$ is an analytic function in the whole complex plane. Due to the Bäcklund transformation (7), the function ${\boldsymbol{F}}(x,t;\lambda )$ will vanish as $\lambda \to \infty $, which implies the function ${\boldsymbol{F}}(x,t;\lambda )=0$ by the Liouville theorem. 
Defining$\begin{eqnarray*}\begin{array}{rcl}{\boldsymbol{G}}(x,t;\lambda ) & \equiv & {{\boldsymbol{T}}}_{1,t}{{\boldsymbol{T}}}_{1}^{-1}+{{\boldsymbol{T}}}_{1}{\boldsymbol{V}}{{\boldsymbol{T}}}_{1}^{-1}-\widehat{{{\boldsymbol{V}}}^{[1]}},\\ \widehat{{{\boldsymbol{V}}}^{[1]}} & = & {\rm{i}}({\lambda }^{2}{{\boldsymbol{\sigma }}}_{3}+\lambda {{\boldsymbol{Q}}}^{[1]})+\widehat{{{\boldsymbol{V}}}_{0}^{[1]}},\end{array}\end{eqnarray*}$where $\widehat{{{\boldsymbol{V}}}_{0}^{[1]}}\equiv {{\boldsymbol{V}}}_{0}+{\rm{i}}{\boldsymbol{S}}{\boldsymbol{Q}}-{\rm{i}}{{\boldsymbol{Q}}}^{[1]}{\boldsymbol{S}},\,{\boldsymbol{S}}={\lambda }_{1}{\mathbb{I}}-2{\lambda }_{1}{{\boldsymbol{P}}}_{1}$, and taking a procedure similar to the above x-part, we can prove that$\begin{eqnarray*}{\boldsymbol{G}}(x,t;\lambda )=0.\end{eqnarray*}$Furthermore, with direct calculation, we can verify that$\begin{eqnarray*}\widehat{{{\boldsymbol{V}}}_{0}^{[1]}}={{\boldsymbol{V}}}_{0}({{\boldsymbol{Q}}}^{[1]}).\end{eqnarray*}$ Now we proceed to prove the symmetric properties of ${{\boldsymbol{U}}}^{[1]}$ and ${{\boldsymbol{V}}}^{[1]}$. Since ${{\boldsymbol{\Psi }}}^{\top }(x,-t;-\lambda ){\boldsymbol{C}}{\boldsymbol{\Psi }}(x,t;\lambda ){{\boldsymbol{C}}}^{-1}={\mathbb{I}}$, then$\begin{eqnarray*}{\boldsymbol{C}}{{\boldsymbol{T}}}_{1}^{\top }(x,-t;-\lambda ){\boldsymbol{C}}{{\boldsymbol{T}}}_{1}(x,t;\lambda )={\mathbb{I}}.\end{eqnarray*}$From ${{\boldsymbol{U}}}^{\top }(x,-t;-\lambda )=-{\boldsymbol{C}}{\boldsymbol{U}}(x,t;\lambda ){{\boldsymbol{C}}}^{\top }$ and ${{\boldsymbol{U}}}^{[1]}(x,t;\lambda )\,={{\boldsymbol{T}}}_{1,x}{{\boldsymbol{T}}}_{1}^{-1}+{{\boldsymbol{T}}}_{1}{\boldsymbol{U}}{{\boldsymbol{T}}}_{1}^{-1}$, it follows that$\begin{eqnarray*}\begin{array}{l}{{{\boldsymbol{U}}}^{[1]}}^{\top }(x,-t;-\lambda )={\left({{\boldsymbol{T}}}_{1,x}(x,-t;-\lambda ){{\boldsymbol{T}}}_{1}^{-1}(x,-t;-\lambda )\right)}^{\top }\\ +\,{\left({{\boldsymbol{T}}}_{1}(x,-t;-\lambda ){\boldsymbol{U}}(x,-t;-\lambda ){{\boldsymbol{T}}}_{1}^{-1}(x,-t;-\lambda )\right)}^{\top }\\ =\,-{\boldsymbol{C}}{{\boldsymbol{U}}}^{[1]}(x,t;\lambda ){{\boldsymbol{C}}}^{\top },\end{array}\end{eqnarray*}$and through ${{\boldsymbol{V}}}^{\top }(x,-t;-\lambda )={\boldsymbol{C}}{\boldsymbol{V}}(x,t;\lambda ){{\boldsymbol{C}}}^{\top }$, we have$\begin{eqnarray*}{{{\boldsymbol{V}}}^{[1]}}^{\top }(x,-t;-\lambda )={\boldsymbol{C}}{{\boldsymbol{V}}}^{[1]}(x,t;\lambda ){{\boldsymbol{C}}}^{\top }.\end{eqnarray*}$This completes the proof.
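A quick numerical sanity check of the elementary DT (not part of the original paper): with generic complex vectors standing in for ψ1(x,t;λ1) and ψ1(x,−t;λ1), the matrix T1 of (6) annihilates ψ1 at λ=λ1 and satisfies T1(x,t;λ)C T1^⊤(x,−t;−λ)C=𝕀, the identity used in the proof above.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = -1.0
C = np.diag([1.0, -sigma])
lam1 = 0.7 + 0.4j
lam = 1.3 - 0.2j                                          # generic spectral parameter

u = rng.standard_normal(2) + 1j*rng.standard_normal(2)    # stands for psi_1(x,t;lam1)
v = rng.standard_normal(2) + 1j*rng.standard_normal(2)    # stands for psi_1(x,-t;lam1)

def T1(lm, u, v):
    P1 = np.outer(u, v @ C) / (v @ C @ u)                  # projector of equation (6)
    return np.eye(2) - 2*lam1/(lm + lam1)*P1

print(np.allclose(T1(lam1, u, v) @ u, 0))                  # T1(lambda_1) psi_1 = 0
# T1(x,-t;-lambda) is built from the swapped pair (v, u):
print(np.allclose(T1(lam, u, v) @ C @ T1(-lam, v, u).T @ C, np.eye(2)))
```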
The above elementary DT can be iterated step by step to yield the N-fold DT. With the knowledge of linear algebra and complex analysis, the iterated N-fold DT can be represented in a compact form. Then the multi-soliton solution can be constructed through the formula of the corresponding Bäcklund transformation. In general, we can establish the following N-fold DT for the nNLSE (1).
Theorem 2. If we have N different solutions to system (2): ${{\boldsymbol{\psi }}}_{1}(x,t;{\lambda }_{1}),{{\boldsymbol{\psi }}}_{2}(x,t;{\lambda }_{2}),\,\ldots ,\,$${{\boldsymbol{\psi }}}_{N}(x,t;{\lambda }_{N})$ with $\lambda ={\lambda }_{i},$ $i=1,2,\,\ldots ,\,N,$ ${\lambda }_{i}\ne {\lambda }_{j}$ $(i\ne j)$, ${\lambda }_{i}\ne 0$, then the N-fold DT can be represented as$\begin{eqnarray*}{\boldsymbol{T}}(x,t;\lambda )={\mathbb{I}}-{\boldsymbol{Y}}{{\boldsymbol{M}}}^{-1}{{\boldsymbol{D}}}^{-1}{\boldsymbol{Z}}{\boldsymbol{C}},\end{eqnarray*}$where$\begin{eqnarray*}\begin{array}{rcl}{\boldsymbol{D}} & = & \mathrm{diag}(\lambda +{\lambda }_{1},\lambda +{\lambda }_{2},\,\ldots ,\,\lambda +{\lambda }_{N}),\\ {\boldsymbol{Y}}(x,t) & = & [| {{\boldsymbol{y}}}_{1}\rangle ,| {{\boldsymbol{y}}}_{2}\rangle ,\,\ldots ,\,| {{\boldsymbol{y}}}_{N}\rangle ],\\ {\boldsymbol{Z}} & = & \left[\begin{array}{c}\langle {{\boldsymbol{y}}}_{1}| \\ \langle {{\boldsymbol{y}}}_{2}| \\ \vdots \\ \langle {{\boldsymbol{y}}}_{N}| \end{array}\right],\qquad {\boldsymbol{M}}={\left(\displaystyle \frac{\langle {{\boldsymbol{y}}}_{i}| {\boldsymbol{C}}| {{\boldsymbol{y}}}_{j}\rangle }{{\lambda }_{j}+{\lambda }_{i}}\right)}_{1\leqslant i,j\leqslant N},\end{array}\end{eqnarray*}$$\langle {{\boldsymbol{y}}}_{j}| \equiv {{\boldsymbol{\psi }}}_{j}{\left(x,-t\right)}^{\top }$, $| {{\boldsymbol{y}}}_{i}\rangle \equiv {{\boldsymbol{\psi }}}_{i}(x,t)$. And the corresponding Bäcklund transformation between the old and new potential functions is given by$\begin{eqnarray}{p}^{[N]}=p-2{{\boldsymbol{Y}}}_{2}{{\boldsymbol{M}}}^{-1}{{\boldsymbol{Z}}}_{1}.\end{eqnarray}$
The recursive DTs between matrix functions are given as$\begin{eqnarray*}\begin{array}{rcl}{{\boldsymbol{\Psi }}}^{[k]}(x,t;\lambda ) & = & {{\boldsymbol{T}}}_{k}(x,t;\lambda ){{\boldsymbol{\Psi }}}^{[k-1]}(x,t;\lambda ),\,\,{{\boldsymbol{Q}}}^{[k]}={{\boldsymbol{Q}}}^{[k-1]}\\ & & +2{\lambda }_{k}[{{\boldsymbol{P}}}_{k},{{\boldsymbol{\sigma }}}_{3}],\,\,k=1,\,\ldots ,\,N,\end{array}\end{eqnarray*}$where$\begin{eqnarray}\begin{array}{rcl}{{\boldsymbol{T}}}_{1} & = & {\mathbb{I}}-\displaystyle \frac{2{\lambda }_{1}}{\lambda +{\lambda }_{1}}{{\boldsymbol{P}}}_{1},\\ {{\boldsymbol{P}}}_{1} & = & \displaystyle \frac{{{\boldsymbol{\psi }}}_{1}(x,t;{\lambda }_{1}){{\boldsymbol{\psi }}}_{1}^{\top }(x,-t;{\lambda }_{1}){\boldsymbol{C}}}{{{\boldsymbol{\psi }}}_{1}^{\top }(x,-t;{\lambda }_{1}){\boldsymbol{C}}{{\boldsymbol{\psi }}}_{1}(x,t;{\lambda }_{1})},\\ {{\boldsymbol{T}}}_{2} & = & {\mathbb{I}}-\displaystyle \frac{2{\lambda }_{2}}{\lambda +{\lambda }_{2}}{{\boldsymbol{P}}}_{2},\\ {{\boldsymbol{P}}}_{2} & = & \displaystyle \frac{{{\boldsymbol{\psi }}}_{2}^{[1]}(x,t;{\lambda }_{2}){\left({{\boldsymbol{\psi }}}_{2}^{[1]}(x,-t;{\lambda }_{2})\right)}^{\top }{\boldsymbol{C}}}{{\left({{\boldsymbol{\psi }}}_{2}^{[1]}(x,-t;{\lambda }_{2})\right)}^{\top }{\boldsymbol{C}}{{\boldsymbol{\psi }}}_{2}^{[1]}(x,t;{\lambda }_{2})},\\ & & \vdots \\ {{\boldsymbol{T}}}_{i} & = & {\mathbb{I}}-\displaystyle \frac{2{\lambda }_{i}}{\lambda +{\lambda }_{i}}{{\boldsymbol{P}}}_{i},\\ {{\boldsymbol{P}}}_{i} & = & \displaystyle \frac{{{\boldsymbol{\psi }}}_{i}^{[i-1]}(x,t;{\lambda }_{i}){\left({{\boldsymbol{\psi }}}_{i}^{[i-1]}(x,-t;{\lambda }_{i})\right)}^{\top }{\boldsymbol{C}}}{{\left({{\boldsymbol{\psi }}}_{i}^{[i-1]}(x,-t;{\lambda }_{i})\right)}^{\top }{\boldsymbol{C}}{{\boldsymbol{\psi }}}_{i}^{[i-1]}(x,t;{\lambda }_{i})},\end{array}\end{eqnarray}$and$\begin{eqnarray*}\begin{array}{rcl}{{\boldsymbol{\psi }}}_{i}^{[i-1]}(x,t;{\lambda }_{i}) & = & ({{\boldsymbol{T}}}_{i-1}{{\boldsymbol{T}}}_{i-2}...{{\boldsymbol{T}}}_{1}){| }_{\lambda ={\lambda }_{i}}{{\boldsymbol{\psi }}}_{i}(x,t;{\lambda }_{i}),\\ i & = & 1,\,\ldots ,\,N.\end{array}\end{eqnarray*}$Defining$\begin{eqnarray}{\boldsymbol{T}}(x,t;\lambda )={{\boldsymbol{T}}}_{N}(x,t;\lambda ){{\boldsymbol{T}}}_{N-1}(x,t;\lambda )...{{\boldsymbol{T}}}_{1}(x,t;\lambda ),\end{eqnarray}$which represents the N-fold DT, and analyzing the form of equation (10), it immediately follows that ${\boldsymbol{T}}(x,t;\lambda )$ is a meromorphic function and $\infty $ is not an essential singularity, so it is a rational function with respect to λ. Then the N-fold DT ${\boldsymbol{T}}(x,t;\lambda )$ can be expressed in the form$\begin{eqnarray}{\boldsymbol{T}}(x,t;\lambda )={\mathbb{I}}-\sum _{i=1}^{N}\displaystyle \frac{{{\boldsymbol{A}}}_{i}(x,t)}{\lambda +{\lambda }_{i}},\end{eqnarray}$where ${{\boldsymbol{A}}}_{i}(x,t)$ is a matrix. By calculating residues on both sides of equation (11), an expression of ${{\boldsymbol{A}}}_{i}(x,t)$ is obtained:$\begin{eqnarray*}\begin{array}{rcl}{{\boldsymbol{A}}}_{i}(x,t) & = & -\mathop{\mathrm{Res}}\limits_{\lambda =-{\lambda }_{i}}{\boldsymbol{T}}(x,t;\lambda )\\ & = & -({{\boldsymbol{T}}}_{N}...{{\boldsymbol{T}}}_{i+1}){| }_{\lambda =-{\lambda }_{i}}2{\lambda }_{i}{{\boldsymbol{P}}}_{i}({{\boldsymbol{T}}}_{i-1}...{{\boldsymbol{T}}}_{1}){| }_{\lambda =-{\lambda }_{i}}.\end{array}\end{eqnarray*}$From equation (9) we know that $\mathrm{rank}({{\boldsymbol{P}}}_{i})=1$, thus $\mathrm{rank}({{\boldsymbol{A}}}_{i}(x,t))=1$. 
According to the knowledge of linear algebra, ${{\boldsymbol{A}}}_{i}(x,t)$ could be rewritten as$\begin{eqnarray*}{{\boldsymbol{A}}}_{i}(x,t)=| {{\boldsymbol{x}}}_{i}\rangle \langle {{\boldsymbol{y}}}_{i}| {\boldsymbol{C}},\end{eqnarray*}$where $| {{\boldsymbol{x}}}_{i}\rangle $ is a column vector, and $\langle {{\boldsymbol{y}}}_{i}| $ is a row vector. For simplicity, we denote that$\begin{eqnarray*}\begin{array}{rcl}{\boldsymbol{R}} & = & \left[\begin{array}{cccc}| {{\boldsymbol{x}}}_{1}\rangle , & | {{\boldsymbol{x}}}_{2}\rangle , & ... & | {{\boldsymbol{x}}}_{N}\rangle \end{array}\right],\\ {\boldsymbol{D}} & = & \mathrm{diag}(\lambda +{\lambda }_{1},\lambda +{\lambda }_{2},\,\ldots ,\,\lambda +{\lambda }_{N}),\\ {\boldsymbol{Z}} & = & \left[\begin{array}{c}\langle {{\boldsymbol{y}}}_{1}| \\ \langle {{\boldsymbol{y}}}_{2}| \\ \vdots \\ \langle {{\boldsymbol{y}}}_{N}| \end{array}\right].\end{array}\end{eqnarray*}$Then ${\boldsymbol{T}}(x,t;\lambda )$ can be rewritten in matrix form as$\begin{eqnarray}{\boldsymbol{T}}(x,t;\lambda )={\mathbb{I}}-{\boldsymbol{R}}{{\boldsymbol{D}}}^{-1}{\boldsymbol{Z}}.\end{eqnarray}$Besides, after observation and inspection, we obtain that $\ker ({\boldsymbol{T}}(x,t;{\lambda }_{i}))={{\boldsymbol{\psi }}}_{i}(x,t;{\lambda }_{i})$, i.e.$\begin{eqnarray}\left({\mathbb{I}}-\sum _{j=1}^{N}\displaystyle \frac{| {{\boldsymbol{x}}}_{j}\rangle \langle {{\boldsymbol{y}}}_{j}| {\boldsymbol{C}}}{{\lambda }_{i}+{\lambda }_{j}}\right){{\boldsymbol{\psi }}}_{i}(x,t;{\lambda }_{i})=0,\end{eqnarray}$which implies that$\begin{eqnarray*}{{\boldsymbol{\psi }}}_{i}=\sum _{j=1}^{N}\displaystyle \frac{\langle {{\boldsymbol{y}}}_{j}| {\boldsymbol{C}}{{\boldsymbol{\psi }}}_{i}}{{\lambda }_{i}+{\lambda }_{j}}| {{\boldsymbol{x}}}_{j}\rangle .\end{eqnarray*}$Thus$\begin{eqnarray}\begin{array}{l}\left[\begin{array}{cccc}{{\boldsymbol{\psi }}}_{1}, & {{\boldsymbol{\psi }}}_{2}, & ... & {{\boldsymbol{\psi }}}_{N}\end{array}\right]\\ ={\boldsymbol{R}}\left(\begin{array}{cccc}\displaystyle \frac{\langle {{\boldsymbol{y}}}_{1}| {\boldsymbol{C}}{{\boldsymbol{\psi }}}_{1}}{{\lambda }_{1}+{\lambda }_{1}} & \displaystyle \frac{\langle {{\boldsymbol{y}}}_{1}| {\boldsymbol{C}}{{\boldsymbol{\psi }}}_{2}}{{\lambda }_{1}+{\lambda }_{2}} & ... & \displaystyle \frac{\langle {{\boldsymbol{y}}}_{1}| {\boldsymbol{C}}{{\boldsymbol{\psi }}}_{N}}{{\lambda }_{1}+{\lambda }_{N}}\\ \displaystyle \frac{\langle {{\boldsymbol{y}}}_{2}| {\boldsymbol{C}}{{\boldsymbol{\psi }}}_{1}}{{\lambda }_{2}+{\lambda }_{1}} & \displaystyle \frac{\langle {{\boldsymbol{y}}}_{2}| {\boldsymbol{C}}{{\boldsymbol{\psi }}}_{2}}{{\lambda }_{2}+{\lambda }_{2}} & ... & \displaystyle \frac{\langle {{\boldsymbol{y}}}_{2}| {\boldsymbol{C}}{{\boldsymbol{\psi }}}_{N}}{{\lambda }_{2}+{\lambda }_{N}}\\ \vdots & \vdots & \ddots & \vdots \\ \displaystyle \frac{\langle {{\boldsymbol{y}}}_{N}| {\boldsymbol{C}}{{\boldsymbol{\psi }}}_{1}}{{\lambda }_{N}+{\lambda }_{1}} & \displaystyle \frac{\langle {{\boldsymbol{y}}}_{N}| {\boldsymbol{C}}{{\boldsymbol{\psi }}}_{2}}{{\lambda }_{N}+{\lambda }_{2}} & ... 
& \displaystyle \frac{\langle {{\boldsymbol{y}}}_{N}| {\boldsymbol{C}}{{\boldsymbol{\psi }}}_{N}}{{\lambda }_{N}+{\lambda }_{N}}\end{array}\right).\end{array}\end{eqnarray}$Furthermore, it is readily verified that$\begin{eqnarray*}{\boldsymbol{T}}(x,t;\lambda ){\boldsymbol{C}}{{\boldsymbol{T}}}^{\top }(x,-t;-\lambda ){\boldsymbol{C}}={\mathbb{I}},\end{eqnarray*}$i.e.$\begin{eqnarray}\left({\mathbb{I}}-\sum _{i=1}^{N}\displaystyle \frac{| {{\boldsymbol{x}}}_{i}\rangle \langle {{\boldsymbol{y}}}_{i}| {\boldsymbol{C}}}{\lambda +{\lambda }_{i}}\right){\boldsymbol{C}}\left({\mathbb{I}}+\sum _{i=1}^{N}\displaystyle \frac{{\boldsymbol{C}}| {{\boldsymbol{y}}}_{i}\rangle \langle {{\boldsymbol{x}}}_{i}| }{\lambda -{\lambda }_{i}}\right){\boldsymbol{C}}={\mathbb{I}}.\end{eqnarray}$Calculating residues on both sides of equation (15), then we obtain that$\begin{eqnarray}\left({\mathbb{I}}-\sum _{j=1}^{n}\displaystyle \frac{| {{\boldsymbol{x}}}_{j}\rangle \langle {{\boldsymbol{y}}}_{j}| {\boldsymbol{C}}}{{\lambda }_{i}+{\lambda }_{j}}\right)| {{\boldsymbol{y}}}_{i}\rangle \langle {{\boldsymbol{x}}}_{i}| =0.\end{eqnarray}$Because $\langle {{\boldsymbol{x}}}_{i}| \,\ne \,{\bf{0}}$, comparing equations (13) and (16), we find that$\begin{eqnarray}| {{\boldsymbol{y}}}_{i}\rangle ={{\boldsymbol{\psi }}}_{i}(x,t;{\lambda }_{i}).\end{eqnarray}$Denote$\begin{eqnarray*}{\boldsymbol{Y}}=[| {{\boldsymbol{y}}}_{1}\rangle ,| {{\boldsymbol{y}}}_{2}\rangle ,\,\ldots ,\,| {{\boldsymbol{y}}}_{N}\rangle ],\qquad {\boldsymbol{M}}={\left(\displaystyle \frac{\langle {{\boldsymbol{y}}}_{i}| {\boldsymbol{C}}| {{\boldsymbol{y}}}_{j}\rangle }{{\lambda }_{j}+{\lambda }_{i}}\right)}_{1\leqslant i,j\leqslant N},\end{eqnarray*}$then equation (14) can be rewritten as$\begin{eqnarray*}{\boldsymbol{Y}}={\boldsymbol{R}}{\boldsymbol{M}},\end{eqnarray*}$thus$\begin{eqnarray}{\boldsymbol{R}}={\boldsymbol{Y}}{{\boldsymbol{M}}}^{-1}.\end{eqnarray}$Finally, by substituting equations (18) into (12), we obtain that$\begin{eqnarray*}{\boldsymbol{T}}={\mathbb{I}}-{\boldsymbol{Y}}{{\boldsymbol{M}}}^{-1}{{\boldsymbol{D}}}^{-1}{\boldsymbol{Z}}.\end{eqnarray*}$The Bäcklund transformation (8) can be obtained through the formula$\begin{eqnarray}{{\boldsymbol{T}}}_{x}+{\boldsymbol{T}}{\boldsymbol{U}}={{\boldsymbol{U}}}^{[N]}{\boldsymbol{T}}\end{eqnarray}$by expanding with respect to λ at the neighborhood of $\infty $.
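The same two structural properties can be spot-checked for the N-fold DT of theorem 2. The sketch below (not from the paper) uses random vectors u_i, v_i in place of ψ_i(x,t) and ψ_i(x,−t), so only the algebraic identities (13) and (15) are being tested, not the analytic construction.

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma = 3, -1.0
C = np.diag([1.0, -sigma])
lams = np.array([0.5 + 0.8j, -0.5 + 0.8j, 0.3 + 1.1j])
U = rng.standard_normal((2, N)) + 1j*rng.standard_normal((2, N))   # columns u_i ~ psi_i(x,t)
V = rng.standard_normal((2, N)) + 1j*rng.standard_normal((2, N))   # columns v_i ~ psi_i(x,-t)

M = (V.T @ C @ U) / (lams[None, :] + lams[:, None])                # M_ij = v_i^T C u_j/(lambda_j+lambda_i)

def T(lm, U, V, M):
    Dinv = np.diag(1.0/(lm + lams))                                 # D^{-1} of theorem 2
    return np.eye(2) - U @ np.linalg.solve(M, Dinv @ (V.T @ C))

for i in range(N):                                                  # kernel property (13)
    print(np.allclose(T(lams[i], U, V, M) @ U[:, i], 0))

lam = 0.9 - 0.4j
Tt  = T(lam, U, V, M)                                               # "T(x,t;lambda)"
Trt = T(-lam, V, U, M.T)                                            # "T(x,-t;-lambda)": u_i <-> v_i
print(np.allclose(Tt @ C @ Trt.T @ C, np.eye(2)))                   # symmetry (15)
```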
So far we have constructed the N-fold DT, and in the next section it will be utilized to derive the N-soliton solutions, based on which the singularity and asymptotic analysis of these solutions will be carried out.
3. Bounded multi-soliton solutions and their asymptotic analysis
In this section, some exact solutions with zero seed solution will be derived. It is worth noting that the parameter σ in the nNLSE has different optical meanings: σ=±1 corresponds to the focusing and defocusing cases, respectively. As mentioned before, Ye and Zhang [20] have studied a reverse-time nNLSE with $\sigma =1$, so we mainly discuss the situation $\sigma =-1$ here, because this situation has never been studied before. When discussing the asymptotics of the solutions, we quote two lemmas proposed by Zhang et al [40] to illustrate the properties of the modulus of the soliton solution. What is more, we utilize a method proposed by Ling et al [15] to calculate the limiting form of the M-matrix. Also, another method proposed by Faddeev and Takhtajan [41] is used to determine the exponential decay term of the remainder.
3.1. The dynamics for the single-soliton solution
Taking ${\lambda }_{1}={a}_{1}+{\rm{i}}{b}_{1},$ ${a}_{1},{b}_{1}\in {\mathbb{R}}$ and the seed solution $p(x,t)=0$, the vector solution of the Lax pair (2) takes the form$\begin{eqnarray*}{{\boldsymbol{\psi }}}_{1}=\left[\begin{array}{c}{{\rm{e}}}^{{\eta }_{1}(x,t)}\\ {c}_{1}{{\rm{e}}}^{-{\eta }_{1}(x,t)}\end{array}\right],\end{eqnarray*}$where ${\eta }_{1}(x,t)={\rm{i}}{\lambda }_{1}(x+{\lambda }_{1}t)$, and c1 is a complex constant. Then, by the formula (7), we derive the following single-soliton solution:$\begin{eqnarray}{p}^{[1]}(x,t)=\displaystyle \frac{4({a}_{1}+{\rm{i}}{b}_{1}){c}_{1}{{\rm{e}}}^{[4{a}_{1}{b}_{1}+2{\rm{i}}({b}_{1}^{2}-{a}_{1}^{2})]t}}{{{\rm{e}}}^{2({\rm{i}}{a}_{1}-{b}_{1})x}-\sigma {c}_{1}^{2}{{\rm{e}}}^{-2({\rm{i}}{a}_{1}-{b}_{1})x}}.\end{eqnarray}$
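The closed form (20) can be cross-checked against the Bäcklund transformation (7). Below is a short numerical sketch (not from the paper; the parameter values are chosen arbitrarily):

```python
import numpy as np

sigma = -1.0
a1, b1, c1 = 1.0, 0.5, 1.0 + 0.3j
lam1 = a1 + 1j*b1
C = np.diag([1.0, -sigma])

def eta1(x, t):
    return 1j*lam1*(x + lam1*t)

def psi1(x, t):
    return np.array([np.exp(eta1(x, t)), c1*np.exp(-eta1(x, t))])

def p1_DT(x, t):
    # p^[1] = 4*lam1 * (P1)_{21}, with P1 as in equation (6)
    u, v = psi1(x, t), psi1(x, -t)
    P1 = np.outer(u, v @ C) / (v @ C @ u)
    return 4*lam1*P1[1, 0]

def p1_closed(x, t):
    # equation (20)
    num = 4*(a1 + 1j*b1)*c1*np.exp((4*a1*b1 + 2j*(b1**2 - a1**2))*t)
    den = np.exp(2*(1j*a1 - b1)*x) - sigma*c1**2*np.exp(-2*(1j*a1 - b1)*x)
    return num/den

for x, t in [(-1.2, 0.3), (0.4, -0.7), (2.0, 1.5)]:
    print(np.isclose(p1_DT(x, t), p1_closed(x, t)))   # True at each sample point
```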
Now we analyze the dynamics for the single-soliton.
3.1.1. Singularity
If $\mathrm{ln}(\sigma {c}_{1}^{2})/({\rm{i}}{\lambda }_{1})\in {\mathbb{R}}$, the above solution becomes singular. Otherwise, the single-soliton solution has no singularity. For instance, by setting ${c}_{1}={\rm{i}}$, we obtain a non-singular single-soliton solution.
3.1.2. Boundedness
When ${a}_{1}=0$, $\sigma =-1$, we can obtain the bounded single-soliton solution with ${\mathfrak{R}}({c}_{1})\ne 0$:$\begin{eqnarray*}{p}^{[1]}(x,t)=\displaystyle \frac{4{\rm{i}}{b}_{1}{c}_{1}{{\rm{e}}}^{2{b}_{1}(x+{\rm{i}}{b}_{1}t)}}{{{\rm{e}}}^{4{b}_{1}x}{c}_{1}^{2}\,+\,1}.\end{eqnarray*}$After calculation, we can deduce that the extreme point of the soliton solution is$\begin{eqnarray*}x=-\displaystyle \frac{\mathrm{ln}\left(| {c}_{1}| \right)}{2\,{b}_{1}},\end{eqnarray*}$and the peak value of $| {p}^{[1]}{| }^{2}$ is$\begin{eqnarray}\displaystyle \frac{4{b}_{1}^{2}| {c}_{1}{| }^{2}}{{\left({\mathfrak{R}}({c}_{1})\right)}^{2}}.\end{eqnarray}$Similarly, we can consider the boundedness of the solution for the case $\sigma =1$. It is seen that boundedness holds for the stationary solitons.
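These two statements can be confirmed numerically. A small sketch (not from the paper) that locates the peak of |p^[1]|² on a grid and compares it with the stated extreme point and with (21):

```python
import numpy as np

b1, c1 = 1.0, 1.5 + 0.8j

def p1(x, t):
    # bounded single soliton for a1 = 0, sigma = -1
    return 4j*b1*c1*np.exp(2*b1*(x + 1j*b1*t))/(np.exp(4*b1*x)*c1**2 + 1)

x = np.linspace(-5, 5, 200001)
mod2 = np.abs(p1(x, 0.0))**2
print(x[np.argmax(mod2)], -np.log(np.abs(c1))/(2*b1))          # peak location vs -ln|c1|/(2 b1)
print(mod2.max(), 4*b1**2*np.abs(c1)**2/np.real(c1)**2)        # peak value of |p|^2 vs (21)
```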
3.1.3. Oscillation effect
It is seen that an oscillation effect appears in the solution when a1 is large enough, which has not been pointed out in previous studies. After calculation, we obtain the bottom envelope of the oscillation (see figure 1(b)), whose expression is as follows:$\begin{eqnarray}b(x)=\displaystyle \frac{16{\mathfrak{R}}{\left(c\right)}^{2}({a}^{2}+{b}^{2})}{{\mathfrak{R}}{\left(c\right)}^{4}{{\rm{e}}}^{4{bx}}+{{\rm{e}}}^{-4{bx}}+2{\mathfrak{R}}{\left(c\right)}^{2}}.\end{eqnarray}$In principle, the expression of the envelope at the top can also be calculated, but the top envelope here is singular and has lost its practical significance, so it is not shown. In order to better represent the oscillation effect of the soliton solution, we define its width.
Figure 1. (a) By choosing the parameters ${a}_{1}=10$, ${b}_{1}=\tfrac{1}{2}$, c=1, $\sigma =-1$, we obtain an unbounded non-singular single-soliton solution. (b) The corresponding sectional view of the single-soliton solution in (a) and its lower envelope. (c) By setting ${a}_{1}=0$, ${b}_{1}=1$, $c=1+{\rm{i}}$, $\sigma =-1$, we obtain a bounded non-singular single-soliton solution.
The distance between the two abscissas at which the lower envelope attains 1/2 of its maximum is defined as the width of the soliton oscillation.
By equation (22) we obtain the expression of the width$\begin{eqnarray}\begin{array}{l}d=| {x}_{+}-{x}_{-}| ,\,\,{x}_{\pm }=\displaystyle \frac{1}{4b}\mathrm{ln}\\ \left|\displaystyle \frac{({\mathfrak{R}}{\left(c\right)}^{4}+{\mathfrak{R}}{\left(c\right)}^{2}+1)\pm ({\mathfrak{R}}{\left(c\right)}^{2}+1)\sqrt{{\mathfrak{R}}{\left(c\right)}^{4}+1}}{{\mathfrak{R}}{\left(c\right)}^{4}}\right|.\end{array}\end{eqnarray}$And it is readily proved that the period of oscillation is$\begin{eqnarray}T=\displaystyle \frac{\pi }{2a},\end{eqnarray}$so combining equation (23) with equation (24), we can deduce that the number of times that the soliton solution oscillates is$\begin{eqnarray}N=\left[\displaystyle \frac{d}{T}\right],\end{eqnarray}$where $[\cdot ]$ denotes the ceiling function. The single-soliton solution with oscillation effect and the sectional view of the soliton solution and the lower envelope are shown in figure 1.
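For the parameters of figure 1(a) (a=10, b=1/2, ℜ(c)=1), the printed formulas (23)–(25) evaluate as follows; a short sketch (not from the paper):

```python
import numpy as np

a, b, rc = 10.0, 0.5, 1.0                     # rc = Re(c)

x_plus  = np.log(abs((rc**4 + rc**2 + 1 + (rc**2 + 1)*np.sqrt(rc**4 + 1))/rc**4))/(4*b)
x_minus = np.log(abs((rc**4 + rc**2 + 1 - (rc**2 + 1)*np.sqrt(rc**4 + 1))/rc**4))/(4*b)
d = abs(x_plus - x_minus)                     # width of the oscillation, equation (23)
T = np.pi/(2*a)                               # period of the oscillation, equation (24)
N = int(np.ceil(d/T))                         # number of oscillations, equation (25)
print(d, T, N)
```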
Unlike the classical NLSE, for which the amplitude of the single soliton is determined solely by the fixed spectral parameter ${\lambda }_{1}$, equation (21) shows that for the nNLSE (1) studied in this paper the amplitude of the soliton solution is jointly determined by the spectral parameter ${\lambda }_{1}$ and the solution parameter c1. What is more, its amplitude can tend to $\infty $. Two examples of single-soliton solutions are shown in figure 1.
Choosing the parameters appropriately, we have drawn graphs of unbounded and bounded single-soliton solutions. From figure 1(a), we can see the single soliton exhibiting periodic oscillation. We also provide the sectional view of the single-soliton solution and its lower envelope.
In fact, it is not easy to find the singularity conditions of the soliton solutions, as the conditions are already rather complicated in the single-soliton case. In physics, we always pay much attention to non-singular and bounded soliton solutions. We have discussed these two features in the single-soliton case. Fortunately, we have found not only the non-singularity and boundedness conditions for single-soliton solutions, but also those for the multi-soliton solutions.
3.2. The non-singular symmetric multi-soliton solution
For the convenience of discussion, we will take the solution parameter ci of each matrix function ${{\boldsymbol{\psi }}}_{i}(x,t)$ as ${{\rm{e}}}^{\tfrac{\pi }{2}{\rm{i}}\tfrac{1+\sigma }{2}}$ below, then the matrix function of multi-soliton can be expressed as follows:$\begin{eqnarray}{{\boldsymbol{\psi }}}_{i}(x,t)=\left(\begin{array}{c}{{\rm{e}}}^{{\eta }_{i}(x,t)}\\ {{\rm{e}}}^{\tfrac{\pi }{2}{\rm{i}}\tfrac{1+\sigma }{2}}{{\rm{e}}}^{-{\eta }_{i}(x,t)}\end{array}\right),\end{eqnarray}$where$\begin{eqnarray*}{\eta }_{i}(x,t)={\rm{i}}{\lambda }_{i}(x+{\lambda }_{i}t),\qquad i=1...n.\end{eqnarray*}$
By the formula (8), we can construct the multi-soliton solution with the setting of ${{\boldsymbol{\psi }}}_{i}(x,t)$ (26):$\begin{eqnarray}{p}^{[N]}(x,t)={{\boldsymbol{Y}}}_{2}(x,t){{\boldsymbol{M}}}^{-1}(x,t){{\boldsymbol{Y}}}_{1}^{\top }(x,-t),\end{eqnarray}$where$\begin{eqnarray*}\begin{array}{rcl}{{\boldsymbol{Y}}}_{1}(x,t) & = & \left(\begin{array}{cccc}{{\rm{e}}}^{{\eta }_{1}(x,t)}, & {{\rm{e}}}^{{\eta }_{2}(x,t)}, & ... & {{\rm{e}}}^{{\eta }_{n}(x,t)}\end{array}\right),\\ {{\boldsymbol{Y}}}_{2}(x,t) & = & \left(\begin{array}{cccc}{{\rm{e}}}^{-{\eta }_{1}(x,t)}, & {{\rm{e}}}^{-{\eta }_{2}(x,t)}, & ... & {{\rm{e}}}^{-{\eta }_{n}(x,t)}\end{array}\right),\\ {\boldsymbol{M}}(x,t) & = & \left(\displaystyle \frac{{{\rm{e}}}^{{\eta }_{i}(x,-t)+{\eta }_{j}(x,t)}+{{\rm{e}}}^{-{\eta }_{i}(x,-t)-{\eta }_{j}(x,t)}}{2({\lambda }_{j}+{\lambda }_{i})}\right).\end{array}\end{eqnarray*}$
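For concreteness, the following sketch (not from the paper) transcribes formula (27) for σ=−1, in which case the c_i of (26) equal 1:

```python
import numpy as np

def p_N(x, t, lams):
    """Multi-soliton p^[N](x,t) of equation (27) for sigma = -1, c_i = 1 (zero seed)."""
    lams = np.asarray(lams, dtype=complex)
    eta = lambda tt: 1j*lams*(x + lams*tt)                     # eta_i(x, tt)
    Y1, Y2 = np.exp(eta(t)), np.exp(-eta(t))                   # rows Y1(x,t), Y2(x,t)
    er = np.exp(eta(-t))                                       # e^{eta_i(x,-t)}
    # M_{ij} = (e^{eta_i(x,-t)+eta_j(x,t)} + e^{-eta_i(x,-t)-eta_j(x,t)}) / (2(lambda_j+lambda_i))
    M = (np.outer(er, Y1) + np.outer(1/er, Y2)) / (2*(lams[:, None] + lams[None, :]))
    return Y2 @ np.linalg.solve(M, er)                         # Y2 M^{-1} Y1^T(x,-t)

# two-soliton with the pairing lambda_2 = -lambda_1^* used in section 3.3
print(p_N(0.5, 0.2, [1.0 + 1.0j, -1.0 + 1.0j]))
```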
With the aim of better illustrating the properties of the soliton solutions, we first invoke two lemmas about the relations among the solution, the reverse-time solution and the M-matrix.
Lemma 2 [40]. $\begin{eqnarray*}\begin{array}{rcl}{\left({p}^{[N]}(x,t)\right)}^{* } & = & -{p}^{[N]}(x,-t),\\ {p}^{[N]}(x,t){p}^{[N]}(x,-t) & = & -| {p}^{[N]}(x,t){| }^{2}.\end{array}\end{eqnarray*}$
From equation (27), we can deduce that$\begin{eqnarray*}\begin{array}{rcl}-{p}^{[N]}(x,-t) & = & -{{\boldsymbol{Y}}}_{2}(x,-t){{\boldsymbol{M}}}^{-1}(x,-t){{\boldsymbol{Y}}}_{1}^{\top }(x,t)\\ & = & {\left({{\boldsymbol{Y}}}_{2}(x,t){{\boldsymbol{M}}}^{-1}(x,t){{\boldsymbol{Y}}}_{1}^{\top }(x,-t)\right)}^{* }\\ & = & {\left({p}^{[N]}(x,t)\right)}^{* }.\end{array}\end{eqnarray*}$Then, naturally we have that$\begin{eqnarray*}\begin{array}{rcl}{p}^{[N]}(x,t){p}^{[N]}(x,-t) & = & -{p}^{[N]}(x,t){\left({p}^{[N]}(x,t)\right)}^{* }\\ & = & -| {p}^{[N]}(x,t){| }^{2}.\end{array}\end{eqnarray*}$
Following the method in [40], the following lemma can be established.
Lemma 3 [40]. $\begin{eqnarray*}{p}^{[N]}(x,t){p}^{[N]}(x,-t)=-{\partial }_{x}^{2}\mathrm{ln}(| {\boldsymbol{M}}| ).\end{eqnarray*}$
From theorem 2 we can perform the N-fold Darboux matrix as follows:$\begin{eqnarray*}{\boldsymbol{T}}(x,t;\lambda )={\mathbb{I}}+\sum _{j=1}^{N}\displaystyle \frac{{{\boldsymbol{K}}}_{j}}{\lambda +{\lambda }_{j}}.\end{eqnarray*}$That Darboux matrix converts the system (2) into the invariant form$\begin{eqnarray*}\left\{\begin{array}{l}{{\boldsymbol{\psi }}}_{x}^{[N]}={{\boldsymbol{U}}}^{[N]}{{\boldsymbol{\psi }}}_{[N]},\qquad {{\boldsymbol{U}}}^{[N]}(x,t;\lambda ):= {\rm{i}}(\lambda {{\boldsymbol{\sigma }}}_{3}+{{\boldsymbol{Q}}}^{[N]}),\\ {{\boldsymbol{\psi }}}_{t}^{[N]}={{\boldsymbol{V}}}^{[N]}{{\boldsymbol{\psi }}}_{[N]},\qquad {{\boldsymbol{V}}}^{[N]}(x,t;\lambda ):= \lambda {{\boldsymbol{U}}}^{[N]}\\ +\displaystyle \frac{1}{2}{{\boldsymbol{\sigma }}}_{3}({{\boldsymbol{Q}}}_{x}^{[N]}-{\rm{i}}{\left({{\boldsymbol{Q}}}^{[N]}\right)}^{2})\end{array}\right.\end{eqnarray*}$with$\begin{eqnarray*}{{\boldsymbol{Q}}}^{[N]}=\left[\begin{array}{cc}0 & \sigma {p}^{[N]}(x,-t)\\ {p}^{[N]}(x,t) & 0\end{array}\right].\end{eqnarray*}$By ${{\boldsymbol{T}}}_{x}+{\boldsymbol{T}}{{\boldsymbol{U}}}^{{bg}}(\lambda )={{\boldsymbol{U}}}^{[N]}{\boldsymbol{T}}$ and matching the term ${ \mathcal O }({\lambda }^{-1})$, one yields$\begin{eqnarray*}\begin{array}{l}{\left(\sum _{j=1}^{N}{{\boldsymbol{K}}}_{j}\right)}_{x}={\rm{i}}({{\boldsymbol{Q}}}^{[N]}\sum _{j=1}^{N}{{\boldsymbol{K}}}_{j}\\ -\sum _{j=1}^{N}{{\boldsymbol{K}}}_{j}{{\boldsymbol{Q}}}^{{bg}}).\end{array}\end{eqnarray*}$By ${{\boldsymbol{T}}}_{t}+{\boldsymbol{T}}{{\boldsymbol{V}}}^{{bg}}(\lambda )={{\boldsymbol{V}}}^{[N]}{\boldsymbol{T}}$ and matching the term ${ \mathcal O }(1)$, one yields$\begin{eqnarray*}\begin{array}{rcl}{\left({{\boldsymbol{Q}}}^{[N]}\right)}^{2} & = & {{\boldsymbol{Q}}}^{{bg}}+{\rm{i}}{\left({{\boldsymbol{Q}}}^{{bg}}-{{\boldsymbol{Q}}}^{[N]}\right)}_{x}+2{\sigma }_{3}({{\boldsymbol{Q}}}^{[N]}\sum _{j=1}^{N}{{\boldsymbol{K}}}_{j}\\ & & -\sum _{j=1}^{N}{{\boldsymbol{K}}}_{j}{{\boldsymbol{Q}}}^{{bg}}).\end{array}\end{eqnarray*}$Combining the above two equations, one deduces$\begin{eqnarray*}{\left({{\boldsymbol{Q}}}^{[N]}\right)}^{2}={{\boldsymbol{Q}}}^{{bg}}+{\rm{i}}{\left({{\boldsymbol{Q}}}^{{bg}}-{{\boldsymbol{Q}}}^{[N]}\right)}_{x}-2{\rm{i}}{{\boldsymbol{\sigma }}}_{3}{\left(\sum _{j=1}^{N}{{\boldsymbol{K}}}_{j}\right)}_{x}.\end{eqnarray*}$Then we derive$\begin{eqnarray*}| {p}^{[N]}{| }^{2}=-2{\rm{i}}\displaystyle \frac{{\partial }^{2}}{\partial x}\left(\sum _{j=1}^{N}{{\boldsymbol{K}}}_{j}\right).\end{eqnarray*}$Note that$\begin{eqnarray*}\sum _{j=1}^{N}{{\boldsymbol{K}}}_{j}=-{\boldsymbol{Y}}{{\boldsymbol{M}}}^{-1}{\boldsymbol{Z}},\qquad {\boldsymbol{Y}}={{\boldsymbol{Z}}}^{\dagger }(x,-t){\boldsymbol{C}}.\end{eqnarray*}$Then one obtains$\begin{eqnarray*}| {p}^{[N]}{| }^{2}=\displaystyle \frac{{\partial }^{2}}{\partial {x}^{2}}\mathrm{ln}(| {\boldsymbol{M}}| ),\end{eqnarray*}$i.e.$\begin{eqnarray*}{p}^{[N]}(x,t){p}^{[N]}(x,-t)=-\displaystyle \frac{{\partial }^{2}}{\partial {x}^{2}}\mathrm{ln}(| {\boldsymbol{M}}| ).\end{eqnarray*}$
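Lemmas 2 and 3 can be spot-checked with a finite-difference approximation. The sketch below (not from the paper) reuses the p_N helper defined after equation (27) and the paired parameters λ2=−λ1* used below:

```python
import numpy as np

lams = [1.0 + 1.0j, -1.0 + 1.0j]
x, t, h = 0.7, 0.4, 1e-3

# Lemma 2:  (p^[N](x,t))^* = -p^[N](x,-t)
print(np.isclose(np.conj(p_N(x, t, lams)), -p_N(x, -t, lams)))

# Lemmas 2 and 3 combined:  |p^[N]|^2 = d^2/dx^2 ln|det M|
def logdetM(xx):
    lam = np.asarray(lams, dtype=complex)
    eta = lambda tt: 1j*lam*(xx + lam*tt)
    M = (np.outer(np.exp(eta(-t)), np.exp(eta(t)))
         + np.outer(np.exp(-eta(-t)), np.exp(-eta(t)))) / (2*(lam[:, None] + lam[None, :]))
    return np.log(abs(np.linalg.det(M)))

d2 = (logdetM(x + h) - 2*logdetM(x) + logdetM(x - h))/h**2     # central difference
print(np.isclose(abs(p_N(x, t, lams))**2, d2, rtol=1e-3))
```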
Then we provide a sufficient condition for a non-singular $2n$-soliton solution.
Proposition. For a 2n-soliton solution, if the parameters satisfy$\begin{eqnarray*}{\lambda }_{2}=-{\lambda }_{1}^{* },\,\ldots ,\,{\lambda }_{2i}=-{\lambda }_{2i-1}^{* },\qquad i=1,\,\ldots ,\,n,\end{eqnarray*}$then the solutions constructed by theorem 2 are non-singular.
Note that throughout the paper we use * to denote complex conjugation.
Firstly, we verify that ${\boldsymbol{M}}$ is non-degenerate. By theorem 2, ${\boldsymbol{M}}$ can be written as ${\boldsymbol{M}}={\left({m}_{{ij}}\right)}_{1\leqslant i,j\leqslant 2n}$, where$\begin{eqnarray*}\begin{array}{rcl}{m}_{{ij}} & = & \displaystyle \frac{{{\boldsymbol{\psi }}}_{i}^{\top }(x,-t){\boldsymbol{C}}{{\boldsymbol{\psi }}}_{j}(x,t)}{2({\lambda }_{j}+{\lambda }_{i})}\\ & = & \displaystyle \frac{{{\rm{e}}}^{{\eta }_{i}(x,-t)+{\eta }_{j}(x,t)}+{{\rm{e}}}^{-{\eta }_{i}(x,-t)-{\eta }_{j}(x,t)}}{2({\lambda }_{j}+{\lambda }_{i})}.\end{array}\end{eqnarray*}$So the matrix ${\boldsymbol{M}}$ can also be expressed in matrix form as$\begin{eqnarray*}{\boldsymbol{M}}={{\boldsymbol{M}}}_{1}{\boldsymbol{U}}{{\boldsymbol{M}}}_{2}+{{\boldsymbol{M}}}_{3}{\boldsymbol{U}}{{\boldsymbol{M}}}_{4},\end{eqnarray*}$where$\begin{eqnarray*}\begin{array}{rcl}{{\boldsymbol{M}}}_{1} & = & \mathrm{diag}({{\rm{e}}}^{{\eta }_{1}(x,-t)},\,\ldots ,\,{{\rm{e}}}^{{\eta }_{2n}(x,-t)}),\\ {{\boldsymbol{M}}}_{2} & = & \mathrm{diag}({{\rm{e}}}^{{\eta }_{1}(x,t)},\,\ldots ,\,{{\rm{e}}}^{{\eta }_{2n}(x,t)}),\\ {{\boldsymbol{M}}}_{3} & = & \mathrm{diag}({{\rm{e}}}^{-{\eta }_{1}(x,-t)},\,\ldots ,\,{{\rm{e}}}^{-{\eta }_{2n}(x,-t)}),\\ {{\boldsymbol{M}}}_{4} & = & \mathrm{diag}({{\rm{e}}}^{-{\eta }_{1}(x,t)},\,\ldots ,\,{{\rm{e}}}^{-{\eta }_{2n}(x,t)}),\\ {\boldsymbol{U}} & = & {\left(\displaystyle \frac{1}{2({\lambda }_{j}+{\lambda }_{i})}\right)}_{1\leqslant i,j\leqslant 2n},\end{array}\end{eqnarray*}$among which obviously ${\boldsymbol{U}}$ is a Cauchy matrix. Moreover, we readily come up with the following parameter relations:$\begin{eqnarray}\begin{array}{rcl}{\eta }_{2k-1}^{* }(x,t) & = & {\eta }_{2k}(x,-t),\\ {\eta }_{2k}^{* }(x,t) & = & {\eta }_{2k-1}(x,-t),\qquad k=1,\,\ldots ,\,n.\end{array}\end{eqnarray}$Besides, by equation (28) we can deduce that$\begin{eqnarray}\begin{array}{l}{{\boldsymbol{M}}}_{1}=\mathrm{diag}({{\rm{e}}}^{{\eta }_{2}^{* }(x,t)},{{\rm{e}}}^{{\eta }_{1}^{* }(x,t)},{{\rm{e}}}^{{\eta }_{4}^{* }(x,t)},{{\rm{e}}}^{{\eta }_{3}^{* }(x,t)},\,\ldots ,\,\\ {{\rm{e}}}^{{\eta }_{2n}^{* }(x,t)},{{\rm{e}}}^{{\eta }_{2n-1}^{* }(x,t)}),\\ {{\boldsymbol{M}}}_{3}=\mathrm{diag}({{\rm{e}}}^{-{\eta }_{2}^{* }(x,t)},{{\rm{e}}}^{-{\eta }_{1}^{* }(x,t)},{{\rm{e}}}^{-{\eta }_{4}^{* }(x,t)},{{\rm{e}}}^{-{\eta }_{3}^{* }(x,t)},\,\ldots ,\,\\ {{\rm{e}}}^{-{\eta }_{2n}^{* }(x,t)},{{\rm{e}}}^{-{\eta }_{2n-1}^{* }(x,t)}).\end{array}\end{eqnarray}$In order to make the proof clearer, we define a block matrix$\begin{eqnarray*}{\boldsymbol{N}}=\left(\begin{array}{cccc}{{\boldsymbol{N}}}_{1} & {\bf{0}} & ... & {\bf{0}}\\ {\bf{0}} & {{\boldsymbol{N}}}_{2} & ... & {\bf{0}}\\ \vdots & \vdots & \ddots & \vdots \\ {\bf{0}} & {\bf{0}} & ... 
& {{\boldsymbol{N}}}_{n}\end{array}\right),\end{eqnarray*}$where$\begin{eqnarray*}{{\boldsymbol{N}}}_{1}={{\boldsymbol{N}}}_{2}=...={{\boldsymbol{N}}}_{n}=\left(\begin{array}{cc}0 & 1\\ 1 & 0\end{array}\right).\end{eqnarray*}$Furthermore, equation (29) implies that$\begin{eqnarray}\begin{array}{l}{\boldsymbol{N}}{{\boldsymbol{M}}}_{1}{{\boldsymbol{N}}}^{-1}=\mathrm{diag}({{\rm{e}}}^{{\eta }_{1}^{* }(x,t)},{{\rm{e}}}^{{\eta }_{2}^{* }(x,t)},{{\rm{e}}}^{{\eta }_{3}^{* }(x,t)},{{\rm{e}}}^{{\eta }_{4}^{* }(x,t)},\,\ldots ,\,\\ {{\rm{e}}}^{{\eta }_{2n-1}^{* }(x,t)},{{\rm{e}}}^{{\eta }_{2n}^{* }(x,t)}),\\ {\boldsymbol{N}}{{\boldsymbol{M}}}_{3}{{\boldsymbol{N}}}^{-1}=\mathrm{diag}({{\rm{e}}}^{-{\eta }_{1}^{* }(x,t)},{{\rm{e}}}^{-{\eta }_{2}^{* }(x,t)},{{\rm{e}}}^{-{\eta }_{3}^{* }(x,t)},{{\rm{e}}}^{-{\eta }_{4}^{* }(x,t)},\,\ldots ,\,\\ {{\rm{e}}}^{-{\eta }_{2n-1}^{* }(x,t)},{{\rm{e}}}^{-{\eta }_{2n}^{* }(x,t)}).\end{array}\end{eqnarray}$For simplicity, we denote that$\begin{eqnarray*}{\tilde{{\boldsymbol{M}}}}_{1}={\boldsymbol{N}}{{\boldsymbol{M}}}_{1}{{\boldsymbol{N}}}^{-1},\quad {\tilde{{\boldsymbol{M}}}}_{3}={\boldsymbol{N}}{{\boldsymbol{M}}}_{3}{{\boldsymbol{N}}}^{-1},\quad \hat{{\boldsymbol{M}}}={\boldsymbol{N}}{\boldsymbol{U}},\end{eqnarray*}$and it is readily verified that ${\hat{{\boldsymbol{M}}}}^{\dagger }=\hat{{\boldsymbol{M}}}$, i.e. $\hat{{\boldsymbol{M}}}$ is a Hermitian matrix. We further derive that$\begin{eqnarray}\begin{array}{rcl}{\boldsymbol{N}}{\boldsymbol{M}} & = & {\boldsymbol{N}}({{\boldsymbol{M}}}_{1}{\boldsymbol{U}}{{\boldsymbol{M}}}_{2}+{{\boldsymbol{M}}}_{3}{\boldsymbol{U}}{{\boldsymbol{M}}}_{4})\\ & = & {\tilde{{\boldsymbol{M}}}}_{1}\hat{{\boldsymbol{M}}}{{\boldsymbol{M}}}_{2}+{\tilde{{\boldsymbol{M}}}}_{3}\hat{{\boldsymbol{M}}}{{\boldsymbol{M}}}_{4}.\end{array}\end{eqnarray}$Setting ${\boldsymbol{H}}$ equal to the right-hand side of equation (31), ${\boldsymbol{H}}$ can be written as ${\boldsymbol{H}}={\left({h}_{{ij}}\right)}_{1\leqslant i,j\leqslant 2n}$, where$\begin{eqnarray*}{h}_{{ij}}=\left(\displaystyle \frac{{{\rm{e}}}^{{\eta }_{i}^{* }(x,t)+{\eta }_{j}(x,t)}+{{\rm{e}}}^{-{\eta }_{i}^{* }(x,t)-{\eta }_{j}(x,t)}}{2({\lambda }_{i}+{\lambda }_{j})}\right).\end{eqnarray*}$Because for any $({\xi }_{1},\ldots ,{\xi }_{2n})\ne {\bf{0}}$,$\begin{eqnarray*}\begin{array}{l}\displaystyle \sum _{i,j}^{2n,2n}{h}_{{ij}}{\xi }_{i}{\xi }_{j}^{* }={\left|\displaystyle \sum _{j=1}^{2n}{\xi }_{j}{\int }_{x}^{+\infty }{{\rm{e}}}^{{\eta }_{j}(s,t)}{ds}\right|}^{2}\\ +{\left|\displaystyle \sum _{j=1}^{2n}{\xi }_{j}{\int }_{-\infty }^{x}{{\rm{e}}}^{-{\eta }_{j}(s,t)}{ds}\right|}^{2}\gt 0,\end{array}\end{eqnarray*}$thus ${\boldsymbol{H}}$ is a positive definite matrix, which implies that ${\boldsymbol{M}}$ is a non-degenerate matrix. By theorem 2, a solution can be constructed in the form of$\begin{eqnarray}{p}^{[N]}(x,t)={{\boldsymbol{X}}}_{2}{{\boldsymbol{M}}}^{-1}{{\boldsymbol{X}}}_{1}=\displaystyle \frac{{{\boldsymbol{X}}}_{2}\,\mathrm{adj}({\boldsymbol{M}}){{\boldsymbol{X}}}_{1}}{\det ({\boldsymbol{M}})},\end{eqnarray}$where ${{\boldsymbol{X}}}_{1}$, ${{\boldsymbol{X}}}_{2}$ are matrices obtained from the transformation of the matrix functions of the system. Because ${\boldsymbol{M}}$ is non-degenerate, i.e. $\det ({\boldsymbol{M}})\ne 0$, the solution ${p}^{[N]}(x,t)$ must be non-singular.
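A numerical illustration of the proposition (not from the paper): for the pairing λ2=−λ1*, det M stays away from zero on a sample (x,t) grid, so the two-soliton of (27) has no poles there. The matrix M is built exactly as in the p_N sketch after equation (27).

```python
import numpy as np

lams = np.array([1.0 + 1.0j, -1.0 + 1.0j])

def detM(x, t):
    eta = lambda tt: 1j*lams*(x + lams*tt)
    M = (np.outer(np.exp(eta(-t)), np.exp(eta(t)))
         + np.outer(np.exp(-eta(-t)), np.exp(-eta(t)))) / (2*(lams[:, None] + lams[None, :]))
    return np.linalg.det(M)

xs, ts = np.linspace(-8, 8, 321), np.linspace(-4, 4, 161)
vals = np.array([[abs(detM(x, t)) for x in xs] for t in ts])
print(vals.min())     # bounded away from zero on this grid, so p^[2] has no poles there
```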
In the above proposition, we give a sufficient condition for the $2n$-soliton solution with paired parameters. Actually, for the case ${\lambda }_{i}\in {\rm{i}}{\mathbb{R}}$, a similar result can be obtained in a similar way. In this case, what we obtain are also bounded non-singular soliton solutions.
3.3. Asymptotic analysis for the multi-soliton solution
In what follows, we illustrate the asymptotic behavior of the multi-soliton solution for $\sigma =-1$. For the case $\sigma =1$, it can be analyzed similarly. In section 3.3, we set ci=1 for convenience. We take ${\lambda }_{1}={a}_{1}+{\rm{i}}{b}_{1},{\lambda }_{2}=-{a}_{1}+{\rm{i}}{b}_{1}$, because for other choices of the parameters the limit along any fixed direction vanishes. The corresponding matrix functions are as follows:$\begin{eqnarray*}{{\boldsymbol{\psi }}}_{1}(x,t)=\left(\begin{array}{c}{{\rm{e}}}^{{\eta }_{1}(x,t)}\\ {{\rm{e}}}^{-{\eta }_{1}(x,t)}\\ \end{array}\right),\qquad {{\boldsymbol{\psi }}}_{2}(x,t)=\left(\begin{array}{c}{{\rm{e}}}^{{\eta }_{2}(x,t)}\\ {{\rm{e}}}^{-{\eta }_{2}(x,t)}\\ \end{array}\right),\end{eqnarray*}$where ${\eta }_{i}(x,t)={\rm{i}}{\lambda }_{i}(x+{\lambda }_{i}t)$, i=1, 2. Then, by theorem 2 we can derive the second-order matrix ${{\boldsymbol{M}}}_{2}$:$\begin{eqnarray*}{{\boldsymbol{M}}}_{2}={\left(\displaystyle \frac{{{\boldsymbol{\psi }}}_{i}^{\top }(x,-t;{\lambda }_{i}){\boldsymbol{C}}{{\boldsymbol{\psi }}}_{j}(x,t;{\lambda }_{j})}{{\lambda }_{j}+{\lambda }_{i}}\right)}_{1\leqslant i,j\leqslant 2}.\end{eqnarray*}$For simplicity of expression, we denote$\begin{eqnarray*}[{{\boldsymbol{\psi }}}_{1}(x,t),{{\boldsymbol{\psi }}}_{2}(x,t)]=\left[\begin{array}{c}{{\boldsymbol{X}}}_{1}(x,t)\\ {{\boldsymbol{X}}}_{2}(x,t)\end{array}\right].\end{eqnarray*}$Then the solution can be written as$\begin{eqnarray}{p}^{[2]}(x,t)={{\boldsymbol{X}}}_{2}(x,t;{\lambda }_{1},{\lambda }_{2}){{\boldsymbol{M}}}^{-1}{{\boldsymbol{X}}}_{1}^{\top }(x,-t;{\lambda }_{1},{\lambda }_{2}).\end{eqnarray}$
Due to the symmetry of the solution with respect to the time variable t, here we only prove the case $t\to +\infty $. If we fix the direction as $x+2{a}_{1}t={\theta }_{1}$, then$\begin{eqnarray*}| {{\boldsymbol{M}}}_{2}| \to {\tilde{{\boldsymbol{M}}}}_{21},\end{eqnarray*}$where$\begin{eqnarray*}{\tilde{{\boldsymbol{M}}}}_{21}=\left|\begin{array}{cc}\frac{{{\rm{e}}}^{2{\eta }_{1}(x,t)}}{2({\lambda }_{1}+{\lambda }_{1})} & \frac{1}{2(-{\lambda }_{1}^{* }+{\lambda }_{1})}\\ \frac{{{\rm{e}}}^{2({\eta }_{1}(x,t)+{\eta }_{1}^{* }(x,t))}+1}{2({\lambda }_{1}-{\lambda }_{1}^{* })} & \frac{{{\rm{e}}}^{2{\eta }_{1}^{* }(x,t)}}{2(-{\lambda }_{1}^{* }-{\lambda }_{1}^{* })}\end{array}\right|.\end{eqnarray*}$Similarly, we can obtain that if the direction is fixed as $x-2{a}_{1}t={\theta }_{2}$,$\begin{eqnarray*}| {\boldsymbol{M}}| \to | {\tilde{{\boldsymbol{M}}}}_{22}| =\left|\begin{array}{cc}\frac{1}{2({\lambda }_{1}+{\lambda }_{1})} & \frac{{{\rm{e}}}^{2({\eta }_{1}(x,-t)+{\eta }_{1}^{* }(x,-t))}+1}{2(-{\lambda }_{1}^{* }+{\lambda }_{1})}\\ \frac{1}{2({\lambda }_{1}-{\lambda }_{1}^{* })} & \frac{1}{2(-{\lambda }_{1}^{* }-{\lambda }_{1}^{* })}\end{array}\right|.\end{eqnarray*}$If we fix the direction to any other value, then we have$\begin{eqnarray*}{\partial }_{x}^{2}\mathrm{ln}(| {\boldsymbol{M}}| )\to 0.\end{eqnarray*}$Therefore, from lemmas 2 and 3, we can derive that$\begin{eqnarray*}| {p}^{[2]}{| }^{2}\,=\,4{b}_{1}^{2}\sum _{i=1}^{2}{{\rm{sech}} }^{2}\left(\displaystyle \frac{{\tilde{w}}_{i}}{2}\right)+{ \mathcal O }({{\rm{e}}}^{-| c| t}),\end{eqnarray*}$where$\begin{eqnarray*}\begin{array}{rcl}c & = & 4{a}_{1}{b}_{1},\\ {\tilde{w}}_{1} & = & -4{b}_{1}{\theta }_{1}+\mathrm{ln}\displaystyle \frac{{a}_{1}^{2}}{{a}_{1}^{2}+{b}_{1}^{2}},\\ {\tilde{w}}_{2} & = & -4{b}_{1}{\theta }_{2}+\mathrm{ln}\displaystyle \frac{{a}_{1}^{2}+{b}_{1}^{2}}{{a}_{1}^{2}}.\end{array}\end{eqnarray*}$So far, the asymptotic form of the two-soliton solution has been obtained.
Choosing the parameters as ${a}_{1}=1,{b}_{1}=1,\sigma =-1$, we construct the asymptotic form of the two-soliton solution. Due to the symmetry of the solution with respect to the time variable t, here we only show the sectional view when $t\to +\infty $. The results are shown in figure 2.
Figure 2. ${a}_{1}=1,{b}_{1}=1$. (a) The red solid lines represent the sectional view of $| {p}^{[2]}{| }^{2}$ when t=3. The green dotted lines represent the sectional view of the sum of the two decomposed single-soliton solutions with t=3. It is shown that the sum matches $| {p}^{[2]}{| }^{2}$ very well. (b) Two-soliton solution: a1=2,b1=1,σ=−1.
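The comparison in figure 2(a) can be reproduced with the p_N helper defined after equation (27). A short sketch (not from the paper) evaluating |p^[2]|² at t=3 against the sum of the two decomposed single solitons:

```python
import numpy as np

a1, b1, t = 1.0, 1.0, 3.0
lams = [a1 + 1j*b1, -a1 + 1j*b1]

x = np.linspace(-12, 12, 2001)
exact = np.array([abs(p_N(xx, t, lams))**2 for xx in x])

w1 = -4*b1*(x + 2*a1*t) + np.log(a1**2/(a1**2 + b1**2))        # w~_1 with theta_1 = x + 2 a1 t
w2 = -4*b1*(x - 2*a1*t) + np.log((a1**2 + b1**2)/a1**2)        # w~_2 with theta_2 = x - 2 a1 t
asymp = 4*b1**2*(1/np.cosh(w1/2)**2 + 1/np.cosh(w2/2)**2)

print(np.max(np.abs(exact - asymp)))   # small for t = 3, as in figure 2(a)
```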
Note that the asymptotic analysis only works for multi-solitons with different velocities. Next, on the premise of the sufficient condition for non-singular $2n$-soliton solutions mentioned above, we give the asymptotic analysis of $2n$-soliton solutions.
Theorem 3. For a $2n$-soliton solution ${p}^{[2n]}$, if the selected parameters satisfy the following conditions$\begin{eqnarray*}{\lambda }_{2i}=-{\lambda }_{2i-1}^{* },\qquad i=1,\,\ldots ,\,n,\end{eqnarray*}$then$\begin{eqnarray*}\begin{array}{rcl}| {p}^{[2n]}{| }^{2} & = & \displaystyle \sum _{i=1}^{n}4{b}_{i}^{2}\left[{{\rm{sech}} }^{2}\left(\displaystyle \frac{{\tilde{w}}_{2i-1}}{2}\right)+{{\rm{sech}} }^{2}\left(\displaystyle \frac{{\tilde{w}}_{2i}}{2}\right)\right]\\ & & +{ \mathcal O }({{\rm{e}}}^{-| {ct}| }),\,\,t\to \pm \infty ,\end{array}\end{eqnarray*}$where ${b}_{i}={\rm{\Im }}({\lambda }_{i})$, and ${\tilde{w}}_{2i-1}$, ${\tilde{w}}_{2i}$, $| c| $ are given by equations (37), (39), (41) respectively. In other words, when $t\to \infty $, the square of the modulus of a $2n$-soliton solution can be written as the sum of $2n$ single-soliton contributions.
Choosing the parameters as ${a}_{1}=2,{b}_{1}=1,{a}_{2}=3,{b}_{2}=1,\sigma =-1$, we construct the asymptotic form of a four-soliton solution to test the result of theorem 3. Due to the symmetry of the solution with respect to the time variable t, here we only show the sectional view when $t\to +\infty $. The results are shown in figure 3, which verifies the asymptotic analysis numerically.
Figure 3. ${a}_{1}=1,{b}_{1}=1,{a}_{2}=3,{b}_{2}=1$. (a) The red solid lines represent the sectional view of $| {p}^{[4]}{| }^{2}$ when t=3. The green dotted lines represent the sectional view of the sum of the four decomposed single-soliton solutions with t=3. It is shown that the sum matches $| {p}^{[4]}{| }^{2}$ very well. (b) Four-soliton solution: ${a}_{1}=2,{b}_{1}=1,{a}_{2}=3,{b}_{2}=1,\sigma =-1$.
For a $(2n+1)$-soliton solution, if the added parameter ${\lambda }_{2n+1}$ satisfies ${\mathfrak{R}}({\lambda }_{2n+1})=0$, then the result still holds. For the $\sigma =1$, by setting the solution parameter ci of each matrix function ${{\boldsymbol{\psi }}}_{i}(x,t)$ as ${\rm{i}}$, we can not only ensure the non-singularity of the soliton solution, but also obtain the same ${\bf{M}}$ matrix as in the case of $\sigma =-1$. So it can be similarly verified that$\begin{eqnarray*}\begin{array}{rcl}| {p}^{[2n]}{| }^{2} & = & \displaystyle \sum _{i=1}^{n}4{b}_{i}^{2}\left[{{\rm{sech}} }^{2}\left(\displaystyle \frac{{\tilde{w}}_{2i-1}}{2}\right)\right.\\ & & +\,\left.{{\rm{sech}} }^{2}\left(\displaystyle \frac{{\tilde{w}}_{2i}}{2}\right)\right]+{ \mathcal O }({{\rm{e}}}^{-| {ct}| }),\end{array}\end{eqnarray*}$where$\begin{eqnarray*}| c| =\mathop{\min }\limits_{j=1,\,\ldots ,\,n}\{2| {b}_{j}| \}\mathop{\min }\limits_{\displaystyle \genfrac{}{}{0em}{}{i\ne k}{i,k=1,\,\ldots ,\,2n}}\{| {a}_{i}-{a}_{k}| \}.\end{eqnarray*}$
Compared to the classical NLSE, the asymptotic decomposition of the multi-soliton solutions of the nNLSE under study is only applicable to the symmetric soliton solutions. In addition, the classical NLSE admits not only an asymptotic expression for the square of the modulus of the solution, but also other forms of asymptotic expressions of the solution itself. What is more, there is no phase-shift feature in our asymptotic analysis, whereas in general it does exist for the classical NLSE.
4. Discussions and conclusions
In this work, we obtain and analyze the bounded multi-soliton solutions for the focusing and defocusing nNLSE (1) in a uniform frame by the method of DT. Through the studies in this work, we find that the features of solitons for the nNLSE differ from those of the classical NLSE in the following aspects. The amplitude of the soliton solution to the nNLSE is jointly determined by the spectral parameter and the solution parameter, whereas for the NLSE the amplitude of the soliton is determined solely by the spectral parameter. The exponentially blowing-up and decaying solutions can admit an oscillating effect. Some special parameter settings will result in singularities of the solutions, which cannot appear for the solitons of the classical NLSE. The bounded multi-soliton solutions also exhibit elastic interaction. These interesting dynamics would enrich the dynamics in the field of nonlinear physics.
We construct the N-fold DT for the nNLSE (1) by the loop group method. Then we use the DT to obtain the determinant representation of multi-soliton solutions from the zero seed solution. Afterwards, the singularity and asymptotic analyses of the multi-soliton solutions are performed by the determinant formula. In fact, we propose a way to carry out the singularity and asymptotic analysis for the nonlocal type of NLS equation. This method can be readily extended to the multi-component equation [26], two-place and four-place nonlocal integrable equations, the multi-place nonlocal KP equation and so on.
As a matter of fact, there is still a lot of work to be done. When constructing the soliton solutions, we only considered the zero seed solution; solitonic solutions could also be constructed from plane-wave or elliptic-function seed solutions. Meanwhile, the high-order or multi-pole solitons of large order also deserve study for these models. In addition, besides singularity and asymptotics, soliton solutions have many other noteworthy properties to be further explored. The above mentioned problems will be studied in the near future.
Acknowledgments
This work is supported by the National Natural Science Foundation of China (Grant No. 11771151), the Guangzhou Science and Technology Program of China (Grant No. 201904010362), the Fundamental Research Funds for the Central Universities of China (Grant No. 2019MS110).