Spectral Norm Bounds for Hermitian Matrices

In the mathematical literature, numerous bounds can be found for the spectral norm of a matrix. Can we improve these bounds if the matrices are known to be Hermitian or even real symmetric semidefinite?

Let A be an n \times n matrix. An induced matrix norm is defined as

 \lVert A \rVert_p = \max_{\lVert x \rVert_p = 1} \lVert Ax \rVert_p,

where x is a vector. The induced matrix norms \lVert\cdot\rVert_1, \lVert\cdot\rVert_2, and \lVert\cdot\rVert_\infty are called column sum norm, spectral norm, and row sum norm, respectively. On a computer, the column sum norm and the row sum norm can be calculated efficiently for dense and sparse matrices, whereas the spectral norm requires the computation of the largest singular value and this is expensive in general: for dense matrices, computing singular values requires \mathcal{O}(n^3) flops, while the column sum norm and the row sum norm can be evaluated in \mathcal{O}(n^2) flops, i.e., in time linear in the number of matrix entries. Consequently, it may be necessary on a computer to replace spectral norms in algorithms with column sum norms, row sum norms, or a combination thereof, but in this case, one would like to have tight bounds on the relative differences between the norms. In this post, we try to find tight bounds on the spectral norm given the column sum norm when a matrix A is known to be Hermitian or real symmetric semidefinite. Note that for Hermitian matrices, the column sum norm and the row sum norm are identical. For uniformity, I will only use the column sum norm \lVert\cdot\rVert_1 in this blog post.
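
As a concrete illustration of the cost difference, here is a small Python sketch (my own addition; the code accompanying this post is written in Matlab) that computes all three induced norms with NumPy:

 import numpy as np

 rng = np.random.default_rng(0)
 A = rng.standard_normal((1000, 1000))

 # Column sum norm: maximum absolute column sum, O(n^2) flops.
 norm_1 = np.linalg.norm(A, ord=1)
 # Row sum norm: maximum absolute row sum, O(n^2) flops.
 norm_inf = np.linalg.norm(A, ord=np.inf)
 # Spectral norm: largest singular value, O(n^3) flops for dense matrices.
 norm_2 = np.linalg.norm(A, ord=2)

 print(norm_1, norm_inf, norm_2)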

Improving the Upper Bounds

For the induced matrix norms of square matrices, we have (among others) the following norm equivalence:

 \frac{1}{\sqrt{n}} \lVert\cdot\rVert_1 \leq \lVert\cdot\rVert_2 \leq \sqrt{n} \lVert\cdot\rVert_1.

For Hermitian matrices, the upper bounds on the spectral norm are easily improved using the inequality

 \lVert\cdot\rVert_2 \leq \sqrt{\lVert\cdot\rVert_1 \lVert\cdot\rVert_\infty}

because for these matrices it holds that \lVert\cdot\rVert_1 = \lVert\cdot\rVert_\infty. Thus

 A = A^*: \lVert A \rVert_2 \leq \lVert A \rVert_1,

where A^* is the complex conjugate transpose of A.
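
A quick way to gain confidence in these inequalities is to test them on random matrices. The following Python sketch (again my own addition) checks the norm equivalence for general square matrices and the improved upper bound for Hermitian ones:

 import numpy as np

 rng = np.random.default_rng(1)
 n = 50

 for _ in range(100):
     # General square matrix: check the norm equivalence.
     B = rng.standard_normal((n, n))
     assert np.linalg.norm(B, 1) / np.sqrt(n) <= np.linalg.norm(B, 2) + 1e-12
     assert np.linalg.norm(B, 2) <= np.sqrt(n) * np.linalg.norm(B, 1) + 1e-12

     # Hermitian matrix: check the improved upper bound.
     C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
     A = C + C.conj().T
     assert np.linalg.norm(A, 2) <= np.linalg.norm(A, 1) + 1e-12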

Improving the Lower Bounds

As a student assistant, I work a lot with the Harwell-Boeing BCS structural engineering matrices, and for these matrices, the norm \lVert\cdot\rVert_1 is usually a good estimate of the spectral norm; therefore I was convinced that the lower bound on the spectral norm can be improved for Hermitian matrices. Since I was unable to come up with an analysis right away, I decided to have the computer generate matrices to test my conjecture. For this purpose, I used direct search optimization: given an initial vector x_0 and a real-valued objective function f, the goal is to maximize f using only function values, without derivatives. Nicholas J. Higham devotes multiple sections of his book Accuracy and Stability of Numerical Algorithms to direct search methods, and he provides Matlab implementations of three algorithms of this kind as part of his Matrix Computation Toolbox; I used the multidirectional search method from this toolbox. In this blog post, the optimization goal is to maximize

 f(A) := \frac{\lVert A \rVert_1}{\lVert A \rVert_2}

for symmetric matrices A \in \mathbb{R}^{n,n} and given n. Accordingly, the search space has dimension m = \frac{1}{2} n (n+1) because we only need to generate the lower or the upper triangular part of the matrix A. To strengthen my conjecture that the lower bound \frac{1}{\sqrt{n}} \lVert A \rVert_1 \leq \lVert A \rVert_2 is not tight for A = A^*, the optimizer needs to fail to find A with f(A) \approx \sqrt{n}; to disprove the conjecture, a single counterexample suffices.
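
The following Python sketch mirrors this setup. It substitutes the Nelder-Mead simplex method from SciPy, another direct search method, for the multidirectional search method used in my Matlab experiments, and it parametrizes a symmetric matrix by the m = n(n+1)/2 entries of its lower triangular part:

 import numpy as np
 from scipy.optimize import minimize

 def unpack_symmetric(x, n):
     """Build a symmetric n-by-n matrix from its n(n+1)/2 lower triangular entries."""
     A = np.zeros((n, n))
     A[np.tril_indices(n)] = x
     return A + np.tril(A, -1).T

 def norm_ratio(x, n):
     """The objective f(A) = ||A||_1 / ||A||_2, to be maximized."""
     A = unpack_symmetric(x, n)
     return np.linalg.norm(A, 1) / np.linalg.norm(A, 2)

 n = 4
 m = n * (n + 1) // 2
 rng = np.random.default_rng(2)

 # Maximize f by minimizing -f; direct search methods use only function values.
 best = max(
     (minimize(lambda x: -norm_ratio(x, n), rng.standard_normal(m),
               method='Nelder-Mead') for _ in range(20)),
     key=lambda result: -result.fun,
 )
 print('largest f(A) found:', -best.fun, 'sqrt(n):', np.sqrt(n))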

The optimizer quickly dealt a deadly blow to my conjecture: for n = 2, 4, 8, 16, the maximum value of f(A) found by the optimizer was close to or equal to \sqrt{n}. Here is a 2 \times 2 matrix that achieves the worst-case norm ratio f(A) = \sqrt{n} = \sqrt{2}:

 \begin{pmatrix} 1 & -1 \\ -1 & -1 \end{pmatrix}.

Here is a 4 \times 4 example found by the optimizer:

 \begin{pmatrix} 1 & 1 & -4 & 4 \\ 1 & 1 & 4 & -4 \\ -4 & 4 & 4 & 4 \\ 4 & -4 & 4 & 4 \end{pmatrix}.

Note that you can replace the number four in the 4 \times 4 matrix with any value t of modulus greater than or equal to one: then \lVert A \rVert_1 = 4 |t| and \lVert A \rVert_2 = 2 |t|, so the ratio f(A) = 2 is preserved. With a bit of thought, this example can probably be used to construct larger matrices with f(A) = \sqrt{n}. Evidently, the existing lower bound \frac{1}{\sqrt{n}} \lVert\cdot\rVert_1 \leq \lVert\cdot\rVert_2 is tight for Hermitian A.
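
One way to make the construction of larger worst-case matrices precise (my own suggestion, not from the optimizer runs) is the Kronecker product: both \lVert\cdot\rVert_1 and \lVert\cdot\rVert_2 are multiplicative with respect to Kronecker products, so squaring the 2 \times 2 example squares the norm ratio. A short Python check:

 import numpy as np

 # The worst-case 2x2 matrix from above with f(H) = sqrt(2).
 H = np.array([[1.0, -1.0], [-1.0, -1.0]])

 # ||kron(A, B)||_1 = ||A||_1 ||B||_1 and ||kron(A, B)||_2 = ||A||_2 ||B||_2,
 # hence each Kronecker product with H multiplies the ratio f by sqrt(2),
 # yielding symmetric matrices with f(A) = sqrt(n) for all n = 2^k.
 A = H
 for _ in range(3):
     A = np.kron(A, H)
     n = A.shape[0]
     print(n, np.linalg.norm(A, 1) / np.linalg.norm(A, 2), np.sqrt(n))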

Improving the Lower Bounds for Semidefinite Matrices

All generated worst-case matrices had two things in common:

  • they were indefinite,
  • the moduli of the smallest and largest eigenvalue were equal or almost equal.

The Harwell-Boeing BCS structural engineering matrices I work with are all positive semidefinite, so I decided to adjust my conjecture: for real symmetric semidefinite matrices, f(A) < \sqrt{n}. Again, I employed direct search optimization to test the hypothesis. To that end, I had the search algorithm generate lower triangular matrices L and set A := L L^T; every matrix of this form is symmetric positive semidefinite (SPSD), so the optimizer was implicitly generating Cholesky factors of SPSD matrices A.
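
Adapting the earlier sketch only requires a different parametrization: the m search parameters now fill the lower triangular matrix L, and the objective evaluates f at A = L L^T, which is SPSD by construction:

 import numpy as np

 def unpack_spsd(x, n):
     """Build A = L L^T from the n(n+1)/2 lower triangular entries of L."""
     L = np.zeros((n, n))
     L[np.tril_indices(n)] = x
     return L @ L.T

 def norm_ratio_spsd(x, n):
     """The objective f(A) = ||A||_1 / ||A||_2 over SPSD matrices A."""
     A = unpack_spsd(x, n)
     return np.linalg.norm(A, 1) / np.linalg.norm(A, 2)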

Let \eta(n) denote the maximum value of f(A) found by the optimizer for a given n over SPSD matrices A \in \mathbb{R}^{n,n}. I did not save the initial set of results, but what I saw was similar to the following values:

  • \eta(2) = 1.2071,
  • \eta(4) = 1.5000,
  • \eta(8) = 1.9142,
  • \eta(16) = 2.4999,
  • \eta(32) = 3.3259.

Clearly, these maxima are smaller than \sqrt{n} and we found some support for the claim that f(A) < \sqrt{n} when A is real symmetric semidefinite.

Now it would be interesting to know which function g: \mathbb{R}_0^+ \rightarrow \mathbb{R} generates these \eta values so that we can compute lower bounds for the spectral norm. Clearly, g is not linear and I did not recognize a familiar number sequence here. Furthermore, if g were a power function then \frac{\eta(2n)}{\eta(n)} would be constant but this is not the case; if g were an exponential function, then \frac{\log(\eta(2n))}{\log(\eta(n))} would be constant but this is also not the case. There are databases of number sequences on the internet and my next idea was to search for some of the computed values. Here is the fourth Google hit for the search term "1.207107 root".

[Screenshot: the fourth Google hit for the search term "1.207107 root" on February 25, 2015.]

We found the answer:

 g(n) = \frac{1}{2} (\sqrt{n} + 1).
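
Indeed, g reproduces the observed maxima up to the accuracy of the optimizer, as a quick Python comparison against the values listed above shows:

 import numpy as np

 # Observed direct search maxima eta(n) from the list above.
 eta = {2: 1.2071, 4: 1.5000, 8: 1.9142, 16: 2.4999, 32: 3.3259}

 for n, observed in eta.items():
     print(n, observed, (np.sqrt(n) + 1) / 2)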

Conclusion

For Hermitian matrices, the existing bounds on the spectral norm are optimal, but for real symmetric semidefinite matrices I conjecture that the following bounds hold:

 \frac{2}{\sqrt{n} + 1} \lVert A \rVert_1 \leq \lVert A \rVert_2 \leq \lVert A \rVert_1.
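
As a sanity check (my own Python, not a proof), the conjectured lower bound holds on random SPSD matrices, and the constant \frac{2}{\sqrt{n} + 1} appears to be attained by rank-one matrices A = x x^T with x = (1, t, \ldots, t)^T and t = \frac{1}{\sqrt{n} + 1}, a witness I constructed for this check:

 import numpy as np

 rng = np.random.default_rng(3)
 for n in (2, 4, 8, 16, 32):
     # Random SPSD matrices satisfy the conjectured lower bound.
     for _ in range(100):
         L = np.tril(rng.standard_normal((n, n)))
         A = L @ L.T
         lower = 2.0 / (np.sqrt(n) + 1) * np.linalg.norm(A, 1)
         assert lower <= np.linalg.norm(A, 2) + 1e-10

     # A rank-one witness attaining the worst-case ratio (sqrt(n) + 1) / 2.
     x = np.full(n, 1.0 / (np.sqrt(n) + 1))
     x[0] = 1.0
     A = np.outer(x, x)
     print(n, np.linalg.norm(A, 1) / np.linalg.norm(A, 2), (np.sqrt(n) + 1) / 2)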

Edit April 15, 2016

The Matlab code for this article is available on gitlab.com.