
Mathursday: Rayleigh-Ritz, Courant-Fischer, and Weyl’s Inequality


Mathursday is back after a very long time. The last year was unusually hectic for all of us and I couldn’t devote enough time to posts. We restart with the study of eigenvalues, which find significant use in many important areas of mathematics and computer science. In this post, we’ll discuss some fundamental results on eigenvalues that are used again and again. These results are well-known, but it can help to revisit their proofs.

Preliminaries

Lemma: (Eigenvalues of symmetric matrices) If A is an n \times n real symmetric matrix, then there is an orthonormal basis of \mathbb{R}^n consisting of eigenvectors of A.
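
This is just the spectral theorem for real symmetric matrices, and we take it as given. As a quick numerical illustration (a minimal NumPy sketch of my own, with an arbitrary random matrix; not part of any proof below), numpy.linalg.eigh returns exactly such an orthonormal eigenbasis:

import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                         # symmetrize to obtain a symmetric matrix

eigvals, U = np.linalg.eigh(A)            # eigenvalues ascending, eigenvectors in columns of U

# The eigenvectors form an orthonormal basis and diagonalize A.
assert np.allclose(U.T @ U, np.eye(5))
assert np.allclose(A, U @ np.diag(eigvals) @ U.T)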

We’ll assume knowledge of the following:

Lemma: (Dimension Formula) Given vector subspaces V, W of \mathbb{R}^n, we have:

\dim(V + W) = \dim(V) + \dim(W) - \dim(V\cap W)
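
For a concrete example (mine, not from the original argument): take V and W to be two distinct planes through the origin in \mathbb{R}^3. They intersect in a line and together span all of \mathbb{R}^3, so

\dim(V + W) = 3 = 2 + 2 - 1 = \dim(V) + \dim(W) - \dim(V \cap W).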

Definition (Rayleigh Quotient): For an n \times n matrix A, its Rayleigh quotient is the function that maps each non-zero x \in \mathbb{R}^n to the real number defined below:

R_A(x) = \frac{x^\top A x}{x^\top x}.
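
To build some intuition (a minimal NumPy sketch of my own; the matrix and vector are arbitrary random choices), the quotient is invariant to rescaling of x and, for a symmetric A, always lies between the smallest and largest eigenvalues, which is exactly what the results below make precise:

import numpy as np

def rayleigh_quotient(A, x):
    # R_A(x) = (x^T A x) / (x^T x) for a square matrix A and non-zero x
    return (x @ A @ x) / (x @ x)

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2                         # random symmetric matrix

eigvals = np.linalg.eigvalsh(A)           # eigenvalues in ascending order
x = rng.standard_normal(6)

r = rayleigh_quotient(A, x)
assert eigvals[0] <= r <= eigvals[-1]     # bounded by the extreme eigenvalues
assert np.isclose(r, rayleigh_quotient(A, 3.7 * x))   # scale-invariant in x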

Rayleigh–Ritz Theorem

Rayleigh–Ritz Theorem: Let A \in M_n be a symmetric matrix with eigenvalues \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n counted with multiplicity and with an orthonormal basis of eigenvectors \{u_i\}_{i=1}^n, where u_i has eigenvalue \lambda_i. Then for all 1 \le k \le n we have:

\displaystyle \lambda_k = \min_{x \ne 0; x \in \{u_1, \cdots, u_{k-1}\}^\perp} R_A(x),

where x \in \{u_1, \cdots, u_{k-1}\}^\perp means that x is orthogonal to \{u_1, \cdots, u_{k-1}\}. For k=1, the only constraint on x is that x \ne 0.

Proof: Since \{u_i\}_{i=1}^n is an orthonormal basis, we can express each x \ne 0 as x = \sum_{i=1}^n \alpha_i u_i for some non-zero \alpha \in \mathbb{R}^n. Then we have:

\begin{aligned} R_A(x) = \frac{\left(\sum_{i=1}^n \alpha_i u_i\right)^\top A\left(\sum_{i=1}^n \alpha_i u_i\right)}{(\sum_{i=1}^n \alpha_i u_i)^\top (\sum_{i=1}^n \alpha_i u_i)} &= \frac{\left(\sum_{i=1}^n \alpha_i u_i\right)^\top \left(\sum_{i=1}^n \alpha_i A u_i\right)}{(\sum_{i, j=1}^n \alpha_i \alpha_j u_i^\top u_j)}  \\ &= \frac{\left(\sum_{i=1}^n \alpha_i u_i\right)^\top \left(\sum_{i=1}^n \lambda_i \alpha_i u_i\right)}{\sum_{i=1}^n \alpha^2_i} \\ &= \frac{\sum_{i=1}^n \lambda_i \alpha^2_i}{\sum_{i=1}^n \alpha^2_i} = \sum_{i=1}^n p_i \lambda_i \end{aligned}

where in the last step we define p_i = \frac{\alpha^2_i}{\sum_{j=1}^n \alpha^2_j}. Observe that p is a probability vector for any non-zero \alpha, and conversely any probability vector arises from some \alpha. Hence, optimizing over x is the same as optimizing over \alpha, which in turn is the same as optimizing over p.

If x \in \{u_1, \cdots, u_{k-1}\}^\perp, then \alpha_i = 0, and hence p_i = 0, for all i \in \{1, 2, \cdots, k-1\}. This implies:

\displaystyle \qquad \min_{x \ne 0; x \in \{u_1, \cdots, u_{k-1}\}^\perp} R_A(x) = \min_{p \in \Delta^n; p_1 = p_2 = \cdots = p_{k-1}=0} \sum_{i=1}^n p_i \lambda_i .

Since the eigenvalues are arranged in ascending order, this minimum is achieved by placing the entire probability mass on \lambda_k, i.e., by taking x = u_k. Hence, proved.
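
Here is a quick numerical spot-check of the theorem (a sketch of my own; the minimum over the orthogonal complement is only probed with random vectors, so the code checks the lower bound and that x = u_k attains it):

import numpy as np

def rayleigh_quotient(A, x):
    return (x @ A @ x) / (x @ x)

rng = np.random.default_rng(1)
n, k = 6, 3                               # check the k-th smallest eigenvalue (1-indexed)
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                         # random symmetric test matrix

eigvals, U = np.linalg.eigh(A)            # ascending eigenvalues, orthonormal eigenvectors

# Project random vectors onto the orthogonal complement of {u_1, ..., u_{k-1}}.
P = np.eye(n) - U[:, :k-1] @ U[:, :k-1].T
for _ in range(1000):
    x = P @ rng.standard_normal(n)
    assert rayleigh_quotient(A, x) >= eigvals[k-1] - 1e-9   # lambda_k is a lower bound

# The minimum is attained at x = u_k.
assert np.isclose(rayleigh_quotient(A, U[:, k-1]), eigvals[k-1])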

Lemma: Let A \in M_n be a symmetric matrix with eigenvalues \{\lambda_i\}_{i=1}^n arranged in ascending order and a corresponding orthonormal basis of eigenvectors \{u_i\}_{i=1}^n. Then for all 1 \le i \le j \le n we have:

\forall x \ne 0, x \in LS\{u_i, \cdots, u_j\}, \qquad R_A(x) \in \left[\lambda_i, \lambda_j \right].

Proof: We can adapt the proof of Rayleigh-Ritz to write:

\displaystyle \min_{x \ne 0; x \in LS\{u_i, \cdots, u_j\}} R_A(x) = \min_{p \in \Delta^n; p_1 = \cdots = p_{i-1}=0,\ p_{j+1} = \cdots = p_n=0} \sum_{l=1}^n p_l \lambda_l .

The minimum is achieved when the entire probability mass is on the smallest available eigenvalue, which is \lambda_i, and similarly the maximum is achieved when the entire probability mass is on the largest available eigenvalue, which is \lambda_j. Hence, proved.
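
The same kind of numerical spot-check works here (again a sketch of my own, sampling random vectors from the span):

import numpy as np

def rayleigh_quotient(A, x):
    return (x @ A @ x) / (x @ x)

rng = np.random.default_rng(2)
n, i, j = 7, 2, 5                         # 1-indexed: use eigenvectors u_2, ..., u_5
M = rng.standard_normal((n, n))
A = (M + M.T) / 2

eigvals, U = np.linalg.eigh(A)
B = U[:, i-1:j]                           # columns form a basis of LS{u_i, ..., u_j}

for _ in range(1000):
    x = B @ rng.standard_normal(j - i + 1)                # random vector in the span
    assert eigvals[i-1] - 1e-9 <= rayleigh_quotient(A, x) <= eigvals[j-1] + 1e-9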

Courant-Fischer Min-Max Theorem

Courant-Fischer Min-Max Theorem: Let A be an n \times n real symmetric matrix with eigenvalues \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n corresponding to an orthonormal basis of eigenvectors u_1, u_2, \cdots, u_n. Then for any k \in \{1, 2, \cdots, n\} we have:

\displaystyle \lambda_k = \min_{U, \dim(U)=k}~~\max_{x; x \in U, x \ne 0} R_A(x),

\displaystyle \lambda_k = \max_{U, \dim(U)=n-k+1}~~\min_{x; x \in U, x \ne 0} R_A(x).

Proof: Let V = LS\{u_k, \cdots, u_n\} and let U be any subspace of dimension k. Then from the dimension formula we have:

\displaystyle \begin{aligned}  \dim(V\cap U) &= \dim(V) + \dim(U) - \dim(V + U) \\ &\ge \dim(V) + \dim(U) - n \\ &= (n - k + 1) + k - n = 1 \end{aligned}.

This allows us to pick a non-zero vector x in V \cap U. For this x, we have x \in LS\{u_k, \cdots, u_n\}, and therefore, from the previous lemma, R_A(x) \ge \lambda_k. This gives us:

\displaystyle \max_{x \in U; x\ne 0} R_A(x) \ge \lambda_k .

Since this holds for any subspace U of dimension k, we can write:

\displaystyle \min_{U; \dim(U)=k}\max_{x \in U; x\ne 0} R_A(x) \ge \lambda_k .

We now need to prove the other direction. Let W = LS\{u_1, \cdots, u_k\}; then \dim(W)=k. Further, for any non-zero x \in W, we have, from the previous lemma, R_A(x) \le \lambda_k. This implies:

\displaystyle \min_{U; \dim(U)=k}\max_{x \in U; x\ne 0} R_A(x) \le \max_{x \in W; x\ne 0} R_A(x) \le \lambda_k .

Combining these two inequalities proves the first min-max equality. The second equality can be proven similarly.
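
The min-max characterization can also be probed numerically (a sketch of my own; the useful observation here, which the post itself does not rely on, is that for an orthonormal basis Q of a subspace U the inner maximum \max_{x \in U, x \ne 0} R_A(x) equals the largest eigenvalue of Q^\top A Q, obtained by substituting x = Qy):

import numpy as np

rng = np.random.default_rng(3)
n, k = 7, 3
M = rng.standard_normal((n, n))
A = (M + M.T) / 2

eigvals, U = np.linalg.eigh(A)

def max_rayleigh_over_subspace(A, Q):
    # For orthonormal columns Q spanning U, the maximum of R_A over non-zero x in U
    # equals the largest eigenvalue of Q^T A Q.
    return np.linalg.eigvalsh(Q.T @ A @ Q)[-1]

# Every k-dimensional subspace gives an inner maximum >= lambda_k ...
for _ in range(200):
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))      # random k-dim subspace
    assert max_rayleigh_over_subspace(A, Q) >= eigvals[k-1] - 1e-9

# ... and the span of the first k eigenvectors attains lambda_k exactly.
assert np.isclose(max_rayleigh_over_subspace(A, U[:, :k]), eigvals[k-1])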

Weyl’s Inequality

Weyl’s Inequality: Let A, B be two n \times n real symmetric matrices. For all i \in \{1, 2, \cdots, n\}, let \lambda_i(A), \lambda_i(B), \lambda_i(A+B) denote the i^{th} eigenvalue of A, B, and A+B respectively, arranged in ascending order. Let \{u_i\}, \{v_i\}, \{w_i\} be corresponding orthonormal bases of eigenvectors of A, B, and A+B respectively, where u_i, v_i, w_i correspond to the i^{th} eigenvalue. Then for all k \in \{1, 2, \cdots, n\} and i \in \{k, k+1, \cdots, n\}, we have:

\displaystyle \lambda_k(A+B) \le \lambda_i(A) + \lambda_{n+k-i}(B)

\displaystyle \lambda_{n-k+1}(A+B) \ge \lambda_{n-i+1}(A) + \lambda_{i-k+1}(B)

Proof: The proof is similar to that of Courant-Fischer, in that we will define a set of subspaces and show that we can pick a non-zero point in their intersection. We will prove the first inequality; the second one is proven in a similar fashion. For fixed values of k and i, we define three subspaces:

\displaystyle \begin{aligned} S_1 &= LS\{w_k, \cdots, w_n\},\\ S_2 &= LS\{u_1, \cdots, u_i\},\\ S_3 &= LS\{v_1, \cdots, v_{n+k-i}\}\\ \end{aligned}

You can guess why these subspaces were defined in this manner by looking at the inequality we are trying to prove: we have k on the left-hand side corresponding to the matrix A + B, and i and n+k-i on the right-hand side corresponding to the matrices A and B respectively. We have \dim(S_1) = n-k+1, \dim(S_2) = i, and \dim(S_3) = n+k-i. Applying the dimension formula twice we get:

\displaystyle \begin{aligned} \dim(S_1 \cap S_2 \cap S_3) &= \dim(S_1) + \dim(S_2 \cap S_3) - \dim(S_1 + (S_2 \cap S_3))\\ &= \dim(S_1) + \dim(S_2) + \dim(S_3) - \dim(S_2 + S_3) - \dim(S_1 + (S_2 \cap S_3))\\ &\ge \dim(S_1) + \dim(S_2) + \dim(S_3) - 2n\\  &= n-k+1 + i + n + k - i - 2n\\ &= 1 \end{aligned}

Let z be a non-zero vector in S_1 \cap S_2 \cap S_3. Since z \in S_1, we have \lambda_k(A + B) \le R_{A+B}(z) = R_A(z) + R_B(z), where the last equality holds because the Rayleigh quotient is linear in the matrix argument. Finally, as z \in S_2 and z \in S_3, we have R_A(z) \le \lambda_i(A) and R_B(z) \le \lambda_{n+k-i}(B). Combining these, we get \lambda_k(A + B) \le \lambda_i(A) + \lambda_{n+k-i}(B), which is what we wanted to prove.
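
A direct numerical check of the first inequality (my own sketch, with arbitrary random symmetric matrices; indices are 1-based in the statement and therefore shifted by one in the code):

import numpy as np

rng = np.random.default_rng(4)
n = 6
MA, MB = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A, B = (MA + MA.T) / 2, (MB + MB.T) / 2   # random symmetric matrices

lam_A = np.linalg.eigvalsh(A)             # ascending eigenvalues
lam_B = np.linalg.eigvalsh(B)
lam_AB = np.linalg.eigvalsh(A + B)

# lambda_k(A + B) <= lambda_i(A) + lambda_{n+k-i}(B) for all 1 <= k <= i <= n
for k in range(1, n + 1):
    for i in range(k, n + 1):
        assert lam_AB[k-1] <= lam_A[i-1] + lam_B[n+k-i-1] + 1e-9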

Equalities and inequalities involving eigenvalues are quite useful and constitute a big part of the linear algebra literature. Interested readers can look at Matrix Algebra & Its Applications to Statistics & Econometrics by C. R. Rao and M. B. Rao, or Matrix Analysis by Rajendra Bhatia. Bhatia also has an extremely interesting article on eigenvalue inequalities which covers some fascinating history: Linear Algebra to Quantum Cohomology: The Story of Alfred Horn’s Inequalities.

