Decoupling systems

Aug 30, 2024

Eigenvectors and Eigenvalues

Before we look at decoupling matrices, we should take a second to understand what an eigenvector of some linear transformation $A$ is. By definition, a vector $\mathbf{v} \neq \mathbf{0}$ is an eigenvector of a linear map $A$ if $A\mathbf{v} = \lambda \mathbf{v}$ for some constant $\lambda$. In essence, this means that the direction of $\mathbf{v}$ is unchanged when the map $A$ is applied to it; it is only scaled by $\lambda$. Rearranging our equation, we find another form of this definition:

$$(A - \lambda I)\mathbf{v} = \mathbf{0}$$

Because $\mathbf{v}$ is, by definition, not the zero vector, the above equation means that the map $A - \lambda I$ has a non-trivial nullspace, and therefore is a singular matrix. Its determinant must, by the TFAE (the standard list of equivalent conditions for invertibility), be equal to 0:

$$\det(A - \lambda I) = 0$$

The above determinant yields an equation known as the characteristic polynomial, which will be of order $n$ for an $n \times n$ matrix. This is how eigenvalues (and, through substitution, eigenvectors) are found in practice.
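
As a quick sanity check, here is a minimal sketch (assuming NumPy is available; the $2 \times 2$ matrix below is an arbitrary example) that finds eigenvalues and eigenvectors numerically and verifies the defining property $A\mathbf{v} = \lambda\mathbf{v}$ for each pair:

```python
import numpy as np

# An arbitrary 2x2 example matrix
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

for i in range(len(eigenvalues)):
    lam = eigenvalues[i]
    v = eigenvectors[:, i]
    # Check the defining property A v = lambda v
    print(lam, np.allclose(A @ v, lam * v))
```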

These eigenvectors are somehow special to this mapping $A$. In order to produce a coordinate transformation to simplify calculations with the map, it would intuitively make sense to transform these eigenvectors into the basis unit vectors. We'll revisit this idea later.

Coupled and Decoupled Systems

A linear system of differential equations is a system of the form

$$\dot{\mathbf{x}} = A\mathbf{x}$$

When $A$ is a diagonal matrix, like

$$A = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix}$$

then the system is said to be decoupled, as we'd end up with a system looking like

$$\dot{x}_1 = a x_1, \qquad \dot{x}_2 = b x_2$$

$x_1$ and its derivatives have no dependence on $x_2$, and likewise, $x_2$ and its derivatives have no dependence on $x_1$. Decoupled systems are extremely easy to solve, as we can just guess the proper exponential: here, $x_1 = c_1 e^{at}$ and $x_2 = c_2 e^{bt}$.
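
To see this concretely, here is a small sketch (the coefficients $a$, $b$ and the initial conditions are arbitrary assumptions, and it uses SciPy's `solve_ivp`) that integrates a decoupled system numerically and confirms that each component is just its own exponential:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Arbitrary decoupled system: x1' = a*x1, x2' = b*x2
a, b = -1.0, -3.0
x0 = [2.0, 5.0]                     # arbitrary initial conditions

def decoupled_rhs(t, x):
    # Each equation depends only on its own variable
    return [a * x[0], b * x[1]]

t_eval = np.linspace(0.0, 2.0, 50)
sol = solve_ivp(decoupled_rhs, (0.0, 2.0), x0,
                t_eval=t_eval, rtol=1e-9, atol=1e-12)

# The guessed exponential solutions x_i(t) = x_i(0) * e^{(coefficient) t}
x1_exact = x0[0] * np.exp(a * t_eval)
x2_exact = x0[1] * np.exp(b * t_eval)

print(np.allclose(sol.y[0], x1_exact), np.allclose(sol.y[1], x2_exact))
```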

The goal, then, is to introduce some coordinate system $(u, v)$ that is related to $(x_1, x_2)$ via some decoupling matrix $D$, such that we may obtain "easy" solutions for $(u, v)$ that can be plugged in to find the solution set $(x_1, x_2)$. We define the mappings

$$\begin{bmatrix} u \\ v \end{bmatrix} = D \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \qquad \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = D^{-1} \begin{bmatrix} u \\ v \end{bmatrix}$$

where $D$ stands for "decoupling matrix."
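
Mechanically, the change of coordinates is just a matrix-vector product in each direction. A minimal sketch (the particular $D$ below is an arbitrary invertible matrix, chosen only for illustration):

```python
import numpy as np

# Hypothetical decoupling matrix, chosen only so that it is invertible
D = np.array([[1.0, 1.0],
              [2.0, -1.0]])
D_inv = np.linalg.inv(D)

x = np.array([3.0, 4.0])      # a point in the original (x1, x2) coordinates
uv = D @ x                    # the same point in (u, v) coordinates
x_back = D_inv @ uv           # mapping back recovers (x1, x2)

print(uv, np.allclose(x_back, x))
```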

Revisiting our idea from earlier about mapping the eigenvectors of $A$ to $(u, v)$, suppose that $D^{-1}$ is the eigenvector matrix of $A$:

$$D^{-1} = \begin{bmatrix} a_1 & a_2 \\ b_1 & b_2 \end{bmatrix}$$

where the columns are the eigenvectors of $A$. When $(u, v) = (1, 0)$, the mapping $\mathbf{x} = D^{-1} \begin{bmatrix} u \\ v \end{bmatrix}$ gives the first eigenvector of our original transformation. Likewise, when $(u, v) = (0, 1)$, we get our second eigenvector.
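
Concretely, here is a sketch (reusing the arbitrary example matrix from above) that builds $D^{-1}$ out of the eigenvectors returned by NumPy and checks that the unit coordinate vectors in $(u, v)$ map back to the eigenvectors of $A$:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])       # same arbitrary example matrix as before

eigenvalues, eigenvectors = np.linalg.eig(A)

D_inv = eigenvectors             # columns of D^{-1} are eigenvectors of A
D = np.linalg.inv(D_inv)

# (u, v) = (1, 0) maps to the first eigenvector, (0, 1) to the second
print(np.allclose(D_inv @ np.array([1.0, 0.0]), eigenvectors[:, 0]))
print(np.allclose(D_inv @ np.array([0.0, 1.0]), eigenvectors[:, 1]))
```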

Proof

Why does this method work? First, we consider what the matrix $D^{-1}$ actually does. By definition, when we apply $A$ to the first column of $D^{-1}$ (in other words, its first eigenvector), we get

$$A \begin{bmatrix} a_1 \\ b_1 \end{bmatrix} = \lambda_1 \begin{bmatrix} a_1 \\ b_1 \end{bmatrix}$$

And likewise,

$$A \begin{bmatrix} a_2 \\ b_2 \end{bmatrix} = \lambda_2 \begin{bmatrix} a_2 \\ b_2 \end{bmatrix}$$

We can put these together into a unified matrix representation. Stacking the eigenvectors as columns, each column of the product picks up its own eigenvalue, which is exactly what multiplying on the right by a diagonal matrix does:

$$A \begin{bmatrix} a_1 & a_2 \\ b_1 & b_2 \end{bmatrix} = \begin{bmatrix} \lambda_1 a_1 & \lambda_2 a_2 \\ \lambda_1 b_1 & \lambda_2 b_2 \end{bmatrix} = \begin{bmatrix} a_1 & a_2 \\ b_1 & b_2 \end{bmatrix} \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}$$

$$A D^{-1} = D^{-1} \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}$$
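
A quick numerical check of the identity $A D^{-1} = D^{-1} \Lambda$ (again with the arbitrary example matrix, assuming NumPy):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
D_inv = eigenvectors
Lambda = np.diag(eigenvalues)    # the diagonal matrix of eigenvalues

# A D^{-1} and D^{-1} Lambda should agree entry by entry
print(np.allclose(A @ D_inv, D_inv @ Lambda))
```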

We now substitute $\mathbf{x} = D^{-1} \begin{bmatrix} u \\ v \end{bmatrix}$ into our original linear system $\dot{\mathbf{x}} = A\mathbf{x}$:

$$\frac{d}{dt}\left( D^{-1} \begin{bmatrix} u \\ v \end{bmatrix} \right) = A D^{-1} \begin{bmatrix} u \\ v \end{bmatrix}$$

Since $D^{-1}$ is a constant matrix, this is

$$D^{-1} \begin{bmatrix} \dot{u} \\ \dot{v} \end{bmatrix} = A D^{-1} \begin{bmatrix} u \\ v \end{bmatrix}$$

Recall that

$$A D^{-1} = D^{-1} \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}$$

Substituting this in, we get

$$D^{-1} \begin{bmatrix} \dot{u} \\ \dot{v} \end{bmatrix} = D^{-1} \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix}$$

Now, we may just left-multiply both sides by the matrix $D$, which leaves us with the system

$$\begin{bmatrix} \dot{u} \\ \dot{v} \end{bmatrix} = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix}$$

which is, indeed, a decoupled system. Furthermore, this new set of coordinates $(u, v)$ is called the canonical variables of our system. Since we now have a correspondence between $(u, v)$ and the initial variables in our linear system, we can solve the simplified, decoupled system in $(u, v)$ and find the solutions $(x_1, x_2)$ using this information.
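
Putting the whole procedure together, here is an end-to-end sketch (arbitrary matrix, initial condition, and time, assuming NumPy and SciPy): it solves the decoupled system in $(u, v)$ with plain exponentials, maps back through $D^{-1}$, and compares the result against the matrix-exponential solution of the original system.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])       # arbitrary example matrix
x0 = np.array([1.0, -1.0])       # arbitrary initial condition

eigenvalues, eigenvectors = np.linalg.eig(A)
D_inv = eigenvectors
D = np.linalg.inv(D_inv)

# Initial condition expressed in the canonical variables (u, v)
uv0 = D @ x0

t = 0.7                          # an arbitrary time

# Solve the decoupled system: each canonical variable grows by its own exponential
uv_t = uv0 * np.exp(eigenvalues * t)

# Map back to the original coordinates
x_t = D_inv @ uv_t

# Independent check: the matrix-exponential solution of x' = A x
x_t_reference = expm(A * t) @ x0

print(np.allclose(x_t, x_t_reference))
```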