# 8. DMRG-SCF¶

In methods that use an FCI solver, this solver can be replaced by DMRG. DMRG allows for an efficient extraction of the 2-RDM. The 2-RDM of the active space is required in the complete active space self-consistent field (CASSCF) method to compute the gradient and the Hessian with respect to orbital rotations [DMRGSCF1]. It is therefore natural to introduce a CASSCF variant with DMRG as the active-space solver, called DMRG-SCF [DMRGSCF2] [DMRGSCF3] [DMRGSCF4], which allows static correlation to be treated in large active spaces. In CheMPS2, the augmented Hessian Newton-Raphson DMRG-SCF method is implemented with the exact Hessian [DMRGSCF5] [DMRGSCF6].

## 8.1. Augmented Hessian Newton-Raphson¶

The basic idea is to expand the energy in terms of the unitary group generators up to second order:

$\begin{split}\hat{E}_{pq} & = & \sum\limits_{\sigma} \hat{a}^{\dagger}_{p \sigma} \hat{a}_{q \sigma} \\ \left[ \hat{E}_{pq} , \hat{E}_{rs} \right] & = & \delta_{qr} \hat{E}_{ps} - \delta_{ps} \hat{E}_{rq} \\ \hat{E}^{-}_{pq} & = & \hat{E}_{pq} - \hat{E}_{qp} \\ \hat{T}(\vec{x}) & = & \sum\limits_{p<q} x_{pq} \hat{E}^{-}_{pq} \\ E(\vec{x}) & = & \left\langle 0 \mid e^{\hat{T}(\vec{x})} \hat{H} e^{-\hat{T}(\vec{x})} \mid 0 \right\rangle \approx E(0) + \vec{x}^T \vec{g} + \frac{1}{2} \vec{x}^T \mathbf{H} \vec{x}\end{split}$

The vector $$\vec{g}$$ is the gradient and the matrix $$\mathbf{H}$$ is the Hessian for orbital rotations. The stationary point of $$E(\vec{x})$$ is found at $$\vec{x} = - \mathbf{H}^{-1} \vec{g}$$. The variables $$\vec{x}$$ parametrize an additional orbital rotation $$\mathbf{U}_{add} = \exp(\mathbf{T}(\vec{x}))$$, with $$\mathbf{T}(\vec{x}) = -\mathbf{T}^T(\vec{x})$$ a real-valued skew-symmetric matrix. The additional orbital rotation $$\mathbf{U}_{add}$$ transforms the current orbitals $$\mathbf{U}(n)$$ to the new orbitals

$\mathbf{U}(n+1) = \mathbf{U}_{add} \mathbf{U}(n) = \exp(\mathbf{T}(\vec{x}(n))) \mathbf{U}(n).$

This updating scheme is called the Newton-Raphson method. If the Hessian is positive definite, these updates are stable. Near saddle points in the energy landscape, however, the Hessian has negative eigenvalues, and the updates can become unstable. It is therefore better to use the augmented Hessian Newton-Raphson method:
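The orbital update above can be sketched with numpy, using a random, hypothetical parameter vector $$\vec{x}$$ (not from an actual calculation). The parameters $$x_{pq}$$ with $$p < q$$ are packed into a skew-symmetric matrix $$\mathbf{T}(\vec{x})$$, and the rotation is applied via the matrix exponential:

```python
import numpy as np

def expm_skew(T):
    # Matrix exponential of a real skew-symmetric T, computed from the
    # eigendecomposition of the Hermitian matrix 1j*T (numpy-only, so no
    # scipy.linalg.expm is needed).
    w, V = np.linalg.eigh(1j * T)
    return (V @ np.diag(np.exp(-1j * w)) @ V.conj().T).real

def skew_from_x(x, L):
    # Unpack the parameters x_{pq} (p < q) of T(x) = sum_{p<q} x_{pq} E^-_{pq}
    # into a real skew-symmetric L x L matrix.
    T = np.zeros((L, L))
    T[np.triu_indices(L, k=1)] = x
    return T - T.T

rng = np.random.default_rng(7)
L = 4
x = 0.1 * rng.standard_normal(L * (L - 1) // 2)  # hypothetical update vector
U_n = np.eye(L)                                  # current orbitals U(n)
U_add = expm_skew(skew_from_x(x, L))             # additional rotation exp(T(x))
U_np1 = U_add @ U_n                              # new orbitals U(n+1)
```

Because $$\mathbf{T}(\vec{x})$$ is skew-symmetric, $$\exp(\mathbf{T}(\vec{x}))$$ is orthogonal, so the orbitals remain orthonormal after every update.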

$\begin{split}\left[ \begin{array}{cc} \mathbf{H} & \vec{g} \\ \vec{g}^T & 0 \end{array} \right] \left[ \begin{array}{c} \vec{x} \\ 1 \end{array} \right] = \alpha \left[ \begin{array}{c} \vec{x} \\ 1 \end{array} \right].\end{split}$

The eigenvector with the smallest algebraic eigenvalue determines a stable update $$\vec{x}$$, as is well explained in Ref. [DMRGSCF7].
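A minimal numpy sketch of this augmented Hessian step, assuming a random gradient and a random symmetric Hessian (generically indefinite, mimicking a saddle point where plain Newton-Raphson would be unstable): the lowest eigenvector is rescaled so that its last component equals 1, and its first components then give the update $$\vec{x}$$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Hypothetical gradient and Hessian for the orbital rotations.
g = rng.standard_normal(n)
A = rng.standard_normal((n, n))
H = A + A.T

# Build the bordered (augmented Hessian) matrix [[H, g], [g^T, 0]].
AH = np.zeros((n + 1, n + 1))
AH[:n, :n] = H
AH[:n, n] = g
AH[n, :n] = g

w, V = np.linalg.eigh(AH)   # eigh returns eigenvalues in ascending order
alpha = w[0]                # smallest algebraic eigenvalue
v = V[:, 0]
x = v[:n] / v[n]            # rescale so the last component equals 1

# The rescaled eigenvector still satisfies H x + g = alpha x and g^T x = alpha.
residual = np.linalg.norm(H @ x + g - alpha * x)
```

Note that $$\alpha \leq 0$$ always holds, since the Rayleigh quotient of the bordered matrix vanishes for the unit vector along the last component.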

## 8.2. DIIS¶

When the update norm $$\|\vec{x}\|_2$$ is small enough, the convergence can be accelerated by the direct inversion in the iterative subspace (DIIS) [DMRGSCF8] [DMRGSCF9]. For a given set of orbitals $$\mathbf{U}(n)$$, the update $$\vec{x}(n)$$ is calculated with the augmented Hessian Newton-Raphson method. This update defines the next set of orbitals:

$\mathbf{U}(n+1) = \mathbf{U}_{add} \mathbf{U}(n) = \exp(\mathbf{T}(\vec{x}(n))) \mathbf{U}(n).$

In DIIS, the error vector $$\vec{x}(n)$$ and the state vector $$\mathbf{Y}(n+1) = \log(\mathbf{U}(n+1))$$ (the matrix logarithm of the orbital rotation) are added to a list. The error

$e = \left\| \sum\limits_{i = 1}^{\kappa} c_i \vec{x}(n - \kappa + i) \right\|_2$

is minimized under the constraint $$\sum_{i} c_i = 1$$. Here $$\kappa$$ is the size of the list memory, i.e. the number of retained vectors. The minimization of the error $$e$$ can be performed with the method of Lagrange multipliers:

$\begin{split}\left[ \begin{array}{cc} \mathbf{B} & \vec{1} \\ \vec{1}^T & 0 \end{array} \right] \left[ \begin{array}{c} \vec{c} \\ \lambda \end{array} \right] = \left[ \begin{array}{c} \vec{0} \\ 1 \end{array} \right],\end{split}$

where $$2\lambda$$ is the Lagrange multiplier and

$\left[\mathbf{B}\right]_{ij} = \vec{x}^T(n - \kappa + i) \vec{x}(n - \kappa + j) = \left[\mathbf{B}\right]_{ji}.$

The new state vector is then defined as

$\mathbf{Y}_{new} = \sum\limits_{i = 1}^{\kappa} c_i \mathbf{Y}(n+1 - \kappa + i).$

The current orbitals are then set to $$\mathbf{U}(n+1) = \exp(\mathbf{Y}_{new})$$.
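The full DIIS extrapolation can be sketched with numpy, using a hypothetical history of error and state vectors (random data, not from an actual DMRG-SCF run; the error vectors are made to shrink, as they would near convergence):

```python
import numpy as np

rng = np.random.default_rng(1)
kappa = 4   # size of the list memory (number of retained vectors)
m = 10      # length of each error vector x(n)
L = 5       # number of orbitals

def skew(v):
    # The state vectors Y = log(U) are real skew-symmetric matrices.
    T = np.zeros((L, L))
    T[np.triu_indices(L, k=1)] = v
    return T - T.T

# Hypothetical DIIS history: each error vector has a matching state vector Y.
errors = [0.5 ** i * rng.standard_normal(m) for i in range(kappa)]
states = [skew(rng.standard_normal(L * (L - 1) // 2)) for _ in range(kappa)]

# B_ij = x_i^T x_j and the bordered system from the Lagrangian conditions.
B = np.array([[ei @ ej for ej in errors] for ei in errors])
M = np.zeros((kappa + 1, kappa + 1))
M[:kappa, :kappa] = B
M[:kappa, kappa] = 1.0
M[kappa, :kappa] = 1.0
rhs = np.zeros(kappa + 1)
rhs[kappa] = 1.0
c = np.linalg.solve(M, rhs)[:kappa]   # extrapolation coefficients

# Extrapolated state vector; the new orbitals follow as U(n+1) = exp(Y_new).
Y_new = sum(ci * Yi for ci, Yi in zip(c, states))
```

Since the coefficients minimize $$e$$ under the constraint, the extrapolated error is never larger than that of the most recent iterate alone, and $$\mathbf{Y}_{new}$$ is again skew-symmetric, so $$\exp(\mathbf{Y}_{new})$$ is a proper orthogonal matrix.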

[DMRGSCF1] P.E.M. Siegbahn, J. Almlöf, A. Heiberg and B.O. Roos, Journal of Chemical Physics 74, 2384-2396 (1981), doi: 10.1063/1.441359
[DMRGSCF2] D. Ghosh, J. Hachmann, T. Yanai and G.K.-L. Chan, Journal of Chemical Physics 128, 144117 (2008), doi: 10.1063/1.2883976
[DMRGSCF3] D. Zgid and M. Nooijen, Journal of Chemical Physics 128, 144116 (2008), doi: 10.1063/1.2883981
[DMRGSCF4] T. Yanai, Y. Kurashige, D. Ghosh and G.K.-L. Chan, International Journal of Quantum Chemistry 109, 2178-2190 (2009), doi: 10.1002/qua.22099
[DMRGSCF5] S. Wouters, W. Poelmans, P.W. Ayers and D. Van Neck, Computer Physics Communications 185, 1501-1514 (2014), doi: 10.1016/j.cpc.2014.01.019
[DMRGSCF6] S. Wouters, T. Bogaerts, P. Van Der Voort, V. Van Speybroeck and D. Van Neck, Journal of Chemical Physics 140, 241103 (2014), doi: 10.1063/1.4885815
[DMRGSCF7] A. Banerjee, N. Adams, J. Simons and R. Shepard, Journal of Physical Chemistry 89, 52-57 (1985), doi: 10.1021/j100247a015
[DMRGSCF8] P. Pulay, Chemical Physics Letters 73, 393-398 (1980), doi: 10.1016/0009-2614(80)80396-4
[DMRGSCF9] C.D. Sherrill, Programming DIIS, http://vergil.chemistry.gatech.edu/notes/diis/node3.html (2000).