Before we look at decoupling matrices, we should take a second to understand what an eigenvector of a linear transformation is. By definition, a nonzero vector $\vec{v}$ is an eigenvector of a linear map $A$ if

$$A\vec{v} = \lambda\vec{v}$$

for some constant $\lambda$, called the eigenvalue. In essence, this means that the direction of $\vec{v}$ is unchanged when the map is applied to it. Rearranging our equation, we find another form of this definition:

$$(A - \lambda I)\vec{v} = \vec{0}.$$

Because $\vec{v}$ is, by definition, not the zero vector, the above equation means that the map $A - \lambda I$ has a non-trivial nullspace, and therefore $A - \lambda I$ is a singular matrix. By the invertible matrix theorem (the familiar TFAE list), its determinant must equal 0:

$$\det(A - \lambda I) = 0.$$

The above determinant yields an equation known as the characteristic polynomial, which will be of order $n$ for an $n \times n$ matrix. This is how eigenvalues (and, through substitution back into $(A - \lambda I)\vec{v} = \vec{0}$, eigenvectors) are found in practice.
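As a quick worked example (the specific matrix here is just an illustrative choice), take

$$A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}.$$

Its characteristic polynomial is

$$\det(A - \lambda I) = (2 - \lambda)^2 - 1 = \lambda^2 - 4\lambda + 3 = (\lambda - 1)(\lambda - 3),$$

so $\lambda_1 = 1$ and $\lambda_2 = 3$. Substituting each eigenvalue back into $(A - \lambda I)\vec{v} = \vec{0}$ gives the eigenvectors $\vec{v}_1 = (1, -1)^T$ and $\vec{v}_2 = (1, 1)^T$. We'll reuse this matrix as a running example below.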
These eigenvectors are, in some sense, special to the mapping $A$. In order to produce a coordinate transformation that simplifies calculations with the map, it would intuitively make sense to transform these eigenvectors into the basis unit vectors $\hat{e}_1$ and $\hat{e}_2$. We'll revisit this idea later.
A linear system of differential equations is a system of the form

$$\dot{\vec{x}} = A\vec{x}.$$

When $A$ is a diagonal matrix, like

$$A = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix},$$

then the system is said to be decoupled, as we'd end up with a system looking like

$$\begin{cases} \dot{x}_1 = \lambda_1 x_1 \\ \dot{x}_2 = \lambda_2 x_2. \end{cases}$$

$x_1$ and its derivatives have no dependence on $x_2$, and likewise, $x_2$ and its derivatives have no dependence on $x_1$. Decoupled systems are extremely easy to solve, as we can just guess the proper exponential.
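Concretely, "guessing the proper exponential" means trying $x_i = c_i e^{\lambda_i t}$ in each equation. Since $\frac{d}{dt}\left(c_i e^{\lambda_i t}\right) = \lambda_i c_i e^{\lambda_i t}$, each equation is satisfied independently, and the general solution is

$$x_1 = c_1 e^{\lambda_1 t}, \qquad x_2 = c_2 e^{\lambda_2 t}$$

for arbitrary constants $c_1, c_2$.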
The goal, then, is to introduce some coordinate system $u_1, u_2$ that is related to the $x_1, x_2$ via some decoupling matrix $D$, such that we may obtain "easy" solutions for $\vec{u}$ that can be plugged in to find the solution set $\vec{x}$. We define the mappings

$$\vec{x} = D\vec{u}, \qquad \vec{u} = D^{-1}\vec{x},$$

where $D$ stands for "decoupling matrix."
Revisiting our idea from earlier about mapping the eigenvectors of $A$ to the basis unit vectors, suppose the $D$ is the eigenvector matrix of $A$:

$$D = \begin{pmatrix} \vec{v}_1 & \vec{v}_2 \end{pmatrix}.$$

When $\vec{u} = \hat{e}_1 = (1, 0)^T$, we get $\vec{x} = D\hat{e}_1 = \vec{v}_1$, the first eigenvector of our original transformation. Likewise, when $\vec{u} = \hat{e}_2 = (0, 1)^T$, we get our second eigenvector.
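With our running example, the eigenvector matrix and its inverse are

$$D = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}, \qquad D^{-1} = \frac{1}{2}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix},$$

and indeed $D^{-1}\vec{v}_1 = \hat{e}_1$ and $D^{-1}\vec{v}_2 = \hat{e}_2$: the change of coordinates sends the eigenvectors to the basis unit vectors, exactly as we wanted.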
Why does this method work? First, we consider what the matrix $AD$ actually does. By definition, when we apply $A$ to the first column of $D$ (in other words, its first eigenvector), we get

$$A\vec{v}_1 = \lambda_1\vec{v}_1.$$

And likewise,

$$A\vec{v}_2 = \lambda_2\vec{v}_2.$$

We can put these together into a unified matrix representation:

$$AD = D\Lambda, \qquad \text{where } \Lambda = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}.$$
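We can sanity-check this with our running example:

$$AD = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 3 \\ -1 & 3 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix} = D\Lambda.$$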
We now substitute $\vec{x} = D\vec{u}$ into our original linear system:

$$\dot{\vec{x}} = A\vec{x}$$

$$D\dot{\vec{u}} = AD\vec{u}.$$

(Because $D$ is a constant matrix, $\dot{\vec{x}} = \frac{d}{dt}(D\vec{u}) = D\dot{\vec{u}}$.) Recall that $AD = D\Lambda$. Substituting this in, we get

$$D\dot{\vec{u}} = D\Lambda\vec{u}.$$

Now, we may just left-multiply both sides by the matrix $D^{-1}$, which leaves us with the system

$$\dot{\vec{u}} = \Lambda\vec{u},$$

which is, indeed, a decoupled system. Furthermore, these new coordinates $u_1, u_2$ are called the canonical variables of our system. Since we now have a correspondence $\vec{x} = D\vec{u}$ between $\vec{u}$ and the initial variables in our linear system, we can solve the simplified, decoupled system in $\vec{u}$ and find the solutions $\vec{x}$ utilizing this information.
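Carrying our running example through to the end: the canonical system is $\dot{u}_1 = u_1$, $\dot{u}_2 = 3u_2$, with solutions $u_1 = c_1 e^{t}$ and $u_2 = c_2 e^{3t}$. Mapping back with $\vec{x} = D\vec{u}$ gives

$$\vec{x} = c_1 e^{t}\begin{pmatrix} 1 \\ -1 \end{pmatrix} + c_2 e^{3t}\begin{pmatrix} 1 \\ 1 \end{pmatrix},$$

the general solution of the original coupled system $\dot{x}_1 = 2x_1 + x_2$, $\dot{x}_2 = x_1 + 2x_2$.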