
Imagine you are looking at a piece of graph paper. Now, imagine you grab the corners of that paper and stretch it, twist it, or flip it. In the world of mathematics, this is called a Linear Transformation, and it is the foundation of everything from computer graphics to Artificial Intelligence.
But how do we measure what just happened to that space? That’s where our three "main characters" come in:
1. The Determinant: The Scaling Factor
The Determinant is a single number that tells you how much the area (or volume) of your space expanded or contracted. If the determinant is 2, you’ve doubled the area. If it’s 0, you’ve squashed the entire universe into a single flat line.
2. Eigenvectors: The Fixed Directions
When you stretch that graph paper, most lines will change their direction. However, there are usually a few special paths that stay exactly where they were, only getting longer or shorter. These "unshakeable" directions are the Eigenvectors. They represent the fundamental axes of a transformation.
3. Eigenvalues: The Magnitude of Change
If the eigenvector is the direction, the Eigenvalue ($\lambda$) is the strength. It tells you exactly how much the eigenvector was stretched or squished during the process.
In this guide (and the accompanying Google Colab demo), we aren’t just going to solve these by hand using pen and paper. We are going to go "under the hood" using Python and NumPy. You’ll see exactly how these concepts allow us to compress data and recognize faces.
Affine Transformations
Applying a matrix to a vector linearly transforms the vector, e.g. rotates it or rescales it. This is also known as a matrix-vector transformation. The identity matrix is the exception that proves the rule: it leaves every vector exactly where it was.
Consider a matrix $A$ that flips a vector across the x-axis:

$$A = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$$

For a vector $v = \begin{bmatrix} x \\ y \end{bmatrix}$, the resulting product $Av = \begin{bmatrix} x \\ -y \end{bmatrix}$ is flipped across the x-axis.
The plot below shows the transformation: the blue line is the plot of $v$ and the orange line is the plot of $Av$.
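A minimal NumPy sketch of this reflection (the concrete example vector below is an assumption for illustration):

```python
import numpy as np

# Reflection across the x-axis: keeps x, negates y
A = np.array([[1, 0],
              [0, -1]])

v = np.array([3, 2])   # example vector (an assumption)
flipped = A @ v        # the matrix-vector product applies the transformation

print(flipped)         # [ 3 -2] -- same x, mirrored y
```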

Affine transformations are geometric transformations that modify positions, distances, or angles while preserving parallelism between vectors.
These transformations include:
- Translation
- Rotation
- Scaling
- Flipping/Reflection
- Shearing, etc.
A single matrix can apply multiple affine transformations simultaneously, e.g. a rotation and a scaling.
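As a quick sketch of composing transformations (the angle and scale factor are assumptions for illustration), multiplying a scaling matrix by a rotation matrix yields one matrix that does both:

```python
import numpy as np

theta = np.pi / 2                        # 90-degree rotation (assumed example)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.diag([2.0, 2.0])                  # uniform scaling by 2

M = S @ R                                # one matrix: rotate, then scale
v = np.array([1.0, 0.0])

print(M @ v)                             # [1, 0] rotated to the y-axis, then doubled
```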
Eigen Vectors
Eigen Vectors are vectors that are not knocked off course by a transformation: they retain their span (direction) after the transformation. They can be shrunk or stretched, but not rotated off their original line.
Consider the example transformation of two vectors $v_1$ and $v_2$ by some matrix:

After the transformation, vectors $v_1$ and $v_2$ remain on their span, i.e. they keep the same direction. Although vector $v_2$ is flipped (reflected), it is still on the same span as before the transformation.
Let's take a look at a shearing transformation example:

After the transformation, only vector $v_1$ remains on its span, keeping the same direction. Vector $v_2$ is knocked off its span and points in a different direction. So vector $v_1$ is an Eigen Vector of the transformation, while $v_2$ is not.
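We can sketch this in NumPy. Assuming a horizontal shear matrix (the specific matrix and vectors here are assumptions, chosen to mirror the figure):

```python
import numpy as np

# Horizontal shear: vectors along the x-axis keep their span
shear = np.array([[1, 1],
                  [0, 1]])

v1 = np.array([2, 0])   # lies on the x-axis
v2 = np.array([0, 2])   # lies on the y-axis

print(shear @ v1)       # [2 0] -- unchanged, so v1 is an Eigen Vector
print(shear @ v2)       # [2 2] -- knocked off its span, so v2 is not
```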
Eigen Values
Eigen Values measure the change in the length of the Eigen Vectors after the transformation. In the shearing transformation example (Fig 3), the Eigen value of vector $v_1$ is 1 because it still has a length of 2 boxes even after the transformation.
In the reflection transformation example (Fig 2), the Eigen value of vector $v_2$ is -1 because it still has a length of 2 boxes but is flipped (reflected) in the opposite direction.
If the length of the vector is doubled after the transformation, the Eigen value is 2.
Examples:
| Original Length | Transformed Length | Eigen Value |
|---|---|---|
| 5 | 10 | 2 |
| 5 | 2.5 | 0.5 |
Numpy Practice - Eigen Values and Eigen Vectors
We can use numpy to calculate the Eigen values and Eigen vectors of a matrix using the `np.linalg.eig` function.
```python
import numpy as np

# Create a matrix
A = np.array([[1, 2], [3, 4]])

# Calculate the Eigen values and Eigen vectors
eigenvalues, eigenvectors = np.linalg.eig(A)
print("Eigen values:", eigenvalues)
print("Eigen vectors:", eigenvectors)
```

Determinants
A determinant maps a square matrix to a scalar value.
- It enables us to determine whether a matrix can be inverted.
- The determinant of a matrix $X$ is denoted det($X$).

If det($X$) = 0:
- The inverse of the matrix cannot be computed.
- Matrix $X$ is singular; it contains linearly dependent columns.
The determinant of a 2x2 matrix is calculated as follows:

$$\det\begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc$$
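A quick sketch checking the 2x2 formula against NumPy (the concrete entries are an assumption for illustration):

```python
import numpy as np

a, b, c, d = 1, 2, 3, 4          # example entries (an assumption)
X = np.array([[a, b],
              [c, d]])

manual = a * d - b * c           # ad - bc
print(manual)                    # -2
print(np.linalg.det(X))          # matches, up to floating-point rounding
```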
Generalizing determinants using recursion

For a matrix with 5 rows and 5 columns, there will be 4 rounds of recursion to calculate the determinant, with the positive (+) and negative (-) signs alternating for each column of the expansion.
Let's take a look at an example with a 3x3 matrix (the same one used in the NumPy practice below), expanding along the first row:

$$\det\begin{bmatrix} 1 & 2 & 4 \\ 2 & -1 & 3 \\ 0 & 5 & 1 \end{bmatrix} = 1\begin{vmatrix} -1 & 3 \\ 5 & 1 \end{vmatrix} - 2\begin{vmatrix} 2 & 3 \\ 0 & 1 \end{vmatrix} + 4\begin{vmatrix} 2 & -1 \\ 0 & 5 \end{vmatrix}$$

$$= 1(-1 - 15) - 2(2 - 0) + 4(10 - 0) = -16 - 4 + 40 = 20$$
Numpy Practice - Determinant
```python
import numpy as np

X = np.array([[1, 2, 4], [2, -1, 3], [0, 5, 1]])
det_X = np.linalg.det(X)
print(det_X)  # approximately 20
```

Relationship between determinants and Eigen values
There is a relationship between the geometric transformation of a matrix (the determinant) and its characteristic scaling factors (the Eigen values). The product of the Eigen values of a matrix is equal to the determinant of the matrix.
If we have an $n \times n$ matrix $A$ with Eigen values $\lambda_1, \lambda_2, \dots, \lambda_n$, the determinant is defined as:

$$\det(A) = \prod_{i=1}^{n} \lambda_i$$

$|\det(A)|$ also quantifies the volume of the unit cube after the transformation.
Some characteristics of a determinant:
- If $\det(A) = 0$, then the matrix is singular and the matrix is not invertible.
- If $|\det(A)| = 0$, then $A$ collapses space completely in at least one dimension, thereby eliminating all volume.
- If $0 < |\det(A)| < 1$, then $A$ contracts space, shrinking it but not collapsing it.
- If $|\det(A)| = 1$, then $A$ does not change the volume of space.
- If $|\det(A)| > 1$, then $A$ expands space, increasing its volume.
Numpy Practice - Relationship between determinants and Eigen values
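A short sketch verifying the relationship, reusing the matrix from the earlier eigen example:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])

eigenvalues, _ = np.linalg.eig(A)
product = np.prod(eigenvalues)        # lambda_1 * lambda_2 * ... * lambda_n
determinant = np.linalg.det(A)

print(product)                        # approximately -2
print(determinant)                    # approximately -2 -- they agree
```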

Eigen Decomposition
Eigendecomposition is the factorization of a square matrix into a form that reveals its eigenvalues and eigenvectors. It is essentially a way of "decoupling" a matrix's complex transformations into simple scaling operations along specific directions.
If $A$ is an $n \times n$ matrix with $n$ linearly independent eigenvectors, it can be decomposed as:

$$A = V \Lambda V^{-1}$$

Where:
- $V$: Is a matrix whose columns are the eigenvectors of $A$.
- $\Lambda$ (Capital Lambda): Is a diagonal matrix where the diagonal elements are the corresponding eigenvalues ($\lambda_i$).
- $V^{-1}$: Is the inverse of the eigenvector matrix.
Numpy Practice - Eigen Decomposition
You can find the numpy practice for eigen decomposition here.
It shows how the eigen values and the vectors can recombine to form the original matrix.
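As a minimal sketch of that recombination (assuming the same 2x2 matrix used earlier):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])

eigenvalues, V = np.linalg.eig(A)     # columns of V are the eigenvectors
Lambda = np.diag(eigenvalues)         # diagonal matrix of eigenvalues

# Recombine: A = V @ Lambda @ V^-1
reconstructed = V @ Lambda @ np.linalg.inv(V)
print(np.allclose(reconstructed, A))  # True
```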
Why do we use it?
Eigendecomposition is useful for several reasons. Its primary advantage is that it simplifies complex matrix operations by changing the "basis" of the space to the eigenvectors.
Matrix Powers
Matrix powers are a useful tool in linear algebra that let us compute $A^k$, the $k$-th power of a matrix $A$.
Calculating $A^k$ by repeated matrix multiplication is computationally expensive. However, with eigendecomposition:

$$A^k = V \Lambda^k V^{-1}$$

Since $\Lambda$ is diagonal, $\Lambda^k$ is simply each diagonal element raised to the power of $k$. This turns a massive matrix multiplication problem into a simple arithmetic one.
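A sketch of the shortcut, checked against NumPy's direct matrix power (the matrix and the power $k$ are assumptions for illustration):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
k = 5                                 # example power (an assumption)

eigenvalues, V = np.linalg.eig(A)
Lambda_k = np.diag(eigenvalues ** k)  # just raise the diagonal entries to k

A_k = V @ Lambda_k @ np.linalg.inv(V)
print(np.allclose(A_k, np.linalg.matrix_power(A, k)))  # True
```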

Symmetric Matrices
If matrix $A$ is symmetric (meaning $A = A^T$), the eigendecomposition becomes even cleaner. The eigenvectors are orthogonal, allowing us to use the transpose instead of the inverse:

$$A = Q \Lambda Q^T$$
This is known as the Spectral Theorem. It is the mathematical foundation for Principal Component Analysis (PCA) in data science.
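A sketch of the symmetric case (the example matrix is an assumption), using `np.linalg.eigh`, which is specialized for symmetric matrices:

```python
import numpy as np

S = np.array([[2, 1], [1, 2]])               # symmetric: S == S.T (assumed example)

eigenvalues, Q = np.linalg.eigh(S)           # eigh exploits symmetry
reconstructed = Q @ np.diag(eigenvalues) @ Q.T   # transpose replaces the inverse

print(np.allclose(reconstructed, S))         # True
print(np.allclose(Q @ Q.T, np.eye(2)))       # True -- eigenvectors are orthonormal
```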
Conclusion: Putting it All Together
Through this exploration, we’ve seen that Determinants, Eigenvalues, and Eigenvectors are not just abstract symbols on a page—they are the vital organs of linear algebra. They allow us to take a complex, high-dimensional transformation and strip it down to its most basic, understandable components.
Key Takeaways for your Workflow:
- The Determinant acts as the "health check" for your matrix. It tells you if your space has collapsed ($\det = 0$) and how much it has scaled.
- Eigenvectors and Eigenvalues provide the "skeleton" of the transformation. They show us which directions remain constant and how much force is applied along those axes.
- Eigen decomposition ($A = V \Lambda V^{-1}$) is the ultimate shortcut. By shifting our perspective to the "Eigen-basis," we can perform massive calculations—like raising a matrix to the 100th power—with almost zero computational effort.
What’s Next? If you haven't already, I highly recommend opening the Google Colab notebook here. Try changing the values in the matrix examples and watch how the determinant reacts—seeing the math come to life in code is the best way to make these concepts stick.

