Intuitive Explanation: Why Distinct Eigenvalues Imply Linear Independence of Eigenvectors
Understanding the relationship between distinct eigenvalues and the linear independence of corresponding eigenvectors is fundamental in linear algebra. In this article, we will provide an intuitive explanation of why two eigenvectors corresponding to distinct eigenvalues must be linearly independent. We will define the key concepts and then use a step-by-step approach to demonstrate the logic behind this mathematical fact.
Definitions Recap
Eigenvalue
An eigenvalue, denoted by \(\lambda\), is a scalar such that for some non-zero vector \(\mathbf{v}\), the equation \(A\mathbf{v} = \lambda \mathbf{v}\) holds, where \(A\) is a linear transformation or matrix.
Eigenvector
An eigenvector, denoted by \(\mathbf{v}\), is a non-zero vector satisfying \(A\mathbf{v} = \lambda \mathbf{v}\) for its associated eigenvalue \(\lambda\); the transformation \(A\) only scales it, without changing its direction.
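To make the definitions concrete, here is a minimal NumPy sketch (the matrix is an arbitrary example chosen for this article, not anything prescribed above) that computes eigenpairs and checks the defining equation:

```python
import numpy as np

# Arbitrary 2x2 example; its eigenvalues work out to 5 and 2 (distinct).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)  # columns of `eigenvectors` are the v_i

for lam, v in zip(eigenvalues, eigenvectors.T):
    # Defining equation: A v = lambda v (up to floating-point tolerance).
    print(lam, np.allclose(A @ v, lam * v))   # prints each eigenvalue and True
```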
Linear Dependence and Independence
Two vectors \(\mathbf{v}_1\) and \(\mathbf{v}_2\) are linearly dependent if there exist scalars \(c_1\) and \(c_2\), not both zero, such that \(c_1\mathbf{v}_1 + c_2\mathbf{v}_2 = \mathbf{0}\). They are linearly independent if the only solution to this equation is \(c_1 = 0\) and \(c_2 = 0\).
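Numerically, a convenient way to test this condition is a rank check on the matrix whose columns are the two vectors; the vectors below are illustrative assumptions:

```python
import numpy as np

v1 = np.array([1.0, 2.0])
v2 = np.array([3.0, -1.0])   # not a scalar multiple of v1

# c1*v1 + c2*v2 = 0 has only the trivial solution exactly when
# the matrix [v1 | v2] has full rank.
M = np.column_stack([v1, v2])
print(np.linalg.matrix_rank(M) == 2)  # True -> linearly independent
```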
Key Idea
When we have distinct eigenvalues, say \(\lambda_1\) and \(\lambda_2\), with corresponding eigenvectors \(\mathbf{v}_1\) and \(\mathbf{v}_2\), we aim to show that \(\mathbf{v}_1\) and \(\mathbf{v}_2\) cannot be linearly dependent. In other words, any linear combination of \(\mathbf{v}_1\) and \(\mathbf{v}_2\) that equals the zero vector must have both coefficients equal to zero.
Intuitive Explanation
Transformation Behavior
When a matrix \(A\) acts on an eigenvector \(\mathbf{v}_i\) associated with eigenvalue \(\lambda_i\), it scales the vector by \(\lambda_i\). Mathematically, this can be expressed as:
\(A\mathbf{v}_1 = \lambda_1 \mathbf{v}_1\) and \(A\mathbf{v}_2 = \lambda_2 \mathbf{v}_2\)
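Reusing the illustrative matrix from the earlier sketch, we can watch this pure-scaling behavior directly: dividing \(A\mathbf{v}_i\) by \(\mathbf{v}_i\) componentwise recovers \(\lambda_i\):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])   # same illustrative matrix as before
eigenvalues, eigenvectors = np.linalg.eig(A)

for lam, v in zip(eigenvalues, eigenvectors.T):
    # A only stretches v: every component of (A v) / v equals lambda.
    print(np.round((A @ v) / v, 6), "vs eigenvalue", np.round(lam, 6))
```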
Assume Linear Dependence
Assume that \(\mathbf{v}_1\) and \(\mathbf{v}_2\) are linearly dependent. Then we can express \(\mathbf{v}_2\) as a scalar multiple of \(\mathbf{v}_1\):
\(\mathbf{v}_2 = k \mathbf{v}_1\) for some scalar \(k\)
Apply the Matrix
Now, applying the matrix \(A\) to \(\mathbf{v}_2\):
\(A\mathbf{v}_2 = A(k\mathbf{v}_1) = kA\mathbf{v}_1 = k\lambda_1 \mathbf{v}_1\)
Since \(\mathbf{v}_2\) is also an eigenvector, it satisfies its own eigenvalue equation:
\(A\mathbf{v}_2 = \lambda_2 \mathbf{v}_2 = \lambda_2 (k \mathbf{v}_1) = k\lambda_2 \mathbf{v}_1\)
Equating Expressions
We now have two expressions for \(A\mathbf{v}_2\):
\(k\lambda_1 \mathbf{v}_1 = k\lambda_2 \mathbf{v}_1\)
Note that \(k \neq 0\), because \(k = 0\) would force \(\mathbf{v}_2 = \mathbf{0}\), and eigenvectors are non-zero by definition. Since \(\mathbf{v}_1 \neq \mathbf{0}\) as well, comparing coefficients of \(\mathbf{v}_1\) and cancelling \(k\) yields:
\(\lambda_1 = \lambda_2\)
This is a contradiction because we assumed \(\lambda_1\) and \(\lambda_2\) are distinct.
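The contradiction can also be replayed symbolically; in this SymPy sketch (the symbol names are my own, and \(a\) stands for a non-zero component of \(\mathbf{v}_1\)), the dependence assumption leaves \(\lambda_2 = \lambda_1\) as the only possibility:

```python
import sympy as sp

# k: the dependence scalar (non-zero, else v2 would be the zero vector);
# a: a non-zero component of v1; lam1, lam2: the eigenvalues.
k, lam1, lam2, a = sp.symbols("k lambda1 lambda2 a", nonzero=True)

# Componentwise, the two expressions for A v2 give k*lam1*a = k*lam2*a.
print(sp.solve(sp.Eq(k * lam1 * a, k * lam2 * a), lam2))  # [lambda1]
```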
Conclusion
Thus, if \(\lambda_1\) and \(\lambda_2\) are distinct eigenvalues, their corresponding eigenvectors \(\mathbf{v}_1\) and \(\mathbf{v}_2\) cannot be linearly dependent; they must be linearly independent. The same reasoning extends, by induction, to any number of distinct eigenvalues and their associated eigenvectors.
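As a final sanity check (again with an arbitrary example matrix of my choosing), a matrix whose eigenvalues are all distinct yields an eigenvector matrix of full rank, i.e. the whole set of eigenvectors is linearly independent:

```python
import numpy as np

# Upper-triangular example: its eigenvalues are the diagonal entries 1, 2, 3.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])

eigenvalues, V = np.linalg.eig(A)   # columns of V are the eigenvectors
print(np.linalg.matrix_rank(V))     # 3 -> full rank: the set is independent
```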
Understanding this concept is crucial for applications in physics, engineering, computer science, and many other fields where linear transformations and eigenvalues play a significant role.