
The Heart of Linear Algebra: Understanding Determinants, Eigenvalues, and Eigenvectors.

Ben Kissi

Fullstack AI Engineer

Linear Algebra · Mathematics · Machine Learning · Data Science

23 Dec 2025

8 min read

Imagine you are looking at a piece of graph paper. Now, imagine you grab the corners of that paper and stretch it, twist it, or flip it. In the world of mathematics, this is called a Linear Transformation, and it is the foundation of everything from computer graphics to Artificial Intelligence.

But how do we measure what just happened to that space? That’s where our three "main characters" come in:

1. The Determinant: The Scaling Factor

The Determinant is a single number that tells you how much the area (or volume) of your space expanded or contracted. If the determinant is 2, you’ve doubled the area. If it’s 0, you’ve squashed the entire universe into a single flat line.

2. Eigenvectors: The Fixed Directions

When you stretch that graph paper, most lines will change their direction. However, there are usually a few special paths that stay exactly where they were, only getting longer or shorter. These "unshakeable" directions are the Eigenvectors. They represent the fundamental axes of a transformation.

3. Eigenvalues: The Magnitude of Change

If the eigenvector is the direction, the Eigenvalue ($\lambda$) is the strength. It tells you exactly how much the eigenvector was stretched or squished during the process.

In this guide (and the accompanying Google Colab demo), we aren’t just going to solve these by hand using pen and paper. We are going to go "under the hood" using Python and NumPy. You’ll see exactly how these concepts allow us to compress data and recognize faces.

Affine Transformations

Applying a matrix to a vector linearly transforms that vector, e.g. rotating or rescaling it. This is also known as a matrix-vector transformation. The identity matrix is the exception: it maps every vector to itself.

An identity matrix is a square matrix in which all the elements of the principal diagonal are ones and all other elements are zeros.

Consider a matrix $A$:

$$A = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$$

that flips a vector $v$ across the x-axis:

$$v = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$$

The resulting product $Av$ is flipped across the x-axis:

$$Av = \begin{bmatrix} 2 \\ -1 \end{bmatrix}$$

The plot below shows the transformation: the blue line is the plot of $v$ and the orange line is the plot of $Av$:

Fig 1: Affine Transformation Plot
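We can check this reflection numerically with NumPy (a quick sketch of the example above):

```python
import numpy as np

# Reflection across the x-axis, as in the example above
A = np.array([[1, 0],
              [0, -1]])
v = np.array([2, 1])

# Matrix-vector multiplication applies the transformation
Av = A @ v
print(Av)  # [ 2 -1]
```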

Affine transformations are geometric transformations that modify positions, distances, or angles while preserving parallel lines.

These transformations include:

  • Translation
  • Rotation
  • Scaling
  • Flipping/Reflection
  • Shearing

A matrix can apply multiple affine transformations simultaneously, e.g. a rotation and a scaling.
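As a sketch with assumed example values, a 90-degree rotation and a uniform scaling by 2 can be composed into a single matrix by multiplying the two matrices:

```python
import numpy as np

# Hypothetical example: compose a 90-degree rotation with a scaling by 2
theta = np.pi / 2
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
scaling = 2 * np.eye(2)

# The product applies the rotation first, then the scaling
M = scaling @ rotation

v = np.array([1.0, 0.0])
print(M @ v)  # rotates [1, 0] onto the y-axis and doubles its length, ~[0, 2]
```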

Eigenvectors

Eigenvectors are vectors that retain their span (direction) after a transformation. They can be shrunk, stretched, or flipped, but not rotated off their line.

Consider the example transformation of some vectors $a$ and $b$ by some matrix:

Fig 2: Reflection Transformation Example

After the transformation, vectors $a$ and $b$ remain on their span, i.e. keep the same direction. Although vector $b$ is flipped (reflected), it still lies on the same span as before the transformation.

Let's take a look at a shearing transformation example:

Fig 3: Shearing Transformation Example

After the transformation, only vector $b$ remains on its span. Vector $a$ is knocked off its span and points in a different direction. So vector $b$ is an eigenvector of the transformation.

An eigenvector is a special vector $v$ such that when it is transformed by a matrix $A$, the product $Av$ lies on the same span as $v$.
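A quick NumPy sketch of this idea, using a horizontal shear as an assumed example matrix (not necessarily the exact matrix behind Fig 3):

```python
import numpy as np

# A horizontal shear: slides points sideways in proportion to their height
S = np.array([[1, 1],
              [0, 1]])

# v lies along the x-axis, which a horizontal shear leaves in place
v = np.array([1, 0])
Sv = S @ v

# Sv is a scalar multiple of v (here the scalar is 1), so v is an eigenvector
print(Sv)  # [1 0]
```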

Eigenvalues

Eigenvalues measure the change in the length of eigenvectors after the transformation. In the shearing transformation example (Fig 3), the eigenvalue of vector $b$ is 1 because it still has a length of 2 boxes after the transformation.

In the reflection transformation example (Fig 2), the eigenvalue of vector $b$ is -1: it still has a length of 2 boxes, but it is flipped into the opposite direction.

If the length of the vector doubled after the transformation, the eigenvalue would be 2.

Examples:

| Original Length | Transformed Length | Eigenvalue |
| --- | --- | --- |
| 5 | 10 | 2 |
| 5 | 2.5 | 0.5 |

An eigenvalue is a scalar $\lambda$ that scales the eigenvector such that the following is satisfied: $Av = \lambda v$.

Numpy Practice - Eigenvalues and Eigenvectors

We can use numpy to calculate the eigenvalues and eigenvectors of a matrix using the np.linalg.eig function. Note that the eigenvectors are returned as the columns (not the rows) of the output matrix.

import numpy as np

# Create a matrix
A = np.array([[1, 2], [3, 4]])

# Calculate the eigenvalues and eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)

print("Eigen values:", eigenvalues)
print("Eigen vectors:", eigenvectors)

# Each column of `eigenvectors` is an eigenvector; verify Av = λv for the first one
v = eigenvectors[:, 0]
print(np.allclose(A @ v, eigenvalues[0] * v))  # True
Fig 5: Numpy Example

Determinants

A determinant maps a square matrix to a scalar value.

  • It enables us to determine whether a matrix can be inverted.
  • The determinant of a matrix $X$ is denoted $\det(X)$. If $\det(X) = 0$:
    • The inverse $X^{-1}$ cannot be computed.
    • Matrix $X$ is singular; it contains linearly dependent columns.

The determinant of a 2x2 matrix $X$:

$$X = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$$

is calculated as:

$$\det(X) = ad - bc$$

Generalizing determinants using recursion

Fig 4: General Matrix

For a matrix $X$ with 5 rows and 5 columns, the calculation recurses down through 4x4, 3x3, and finally 2x2 sub-determinants.

$$\det(X) = x_{1,1}\det(X_{1,1}) - x_{1,2}\det(X_{1,2}) + x_{1,3}\det(X_{1,3}) - x_{1,4}\det(X_{1,4}) + x_{1,5}\det(X_{1,5})$$

Here $X_{1,j}$ is the submatrix obtained by deleting row 1 and column $j$ of $X$, and the positive (+) and negative (-) signs alternate across the columns.

Let's take a look at an example with a 3x3 matrix:

$$X = \begin{bmatrix} 1 & 2 & 4 \\ 2 & -1 & 3 \\ 0 & 5 & 1 \end{bmatrix}$$

$$\det(X) = x_{1,1}\det(X_{1,1}) - x_{1,2}\det(X_{1,2}) + x_{1,3}\det(X_{1,3})$$

$$= 1 \begin{vmatrix} -1 & 3 \\ 5 & 1 \end{vmatrix} - 2 \begin{vmatrix} 2 & 3 \\ 0 & 1 \end{vmatrix} + 4 \begin{vmatrix} 2 & -1 \\ 0 & 5 \end{vmatrix}$$

$$= 1(-1 \cdot 1 - 3 \cdot 5) - 2(2 \cdot 1 - 3 \cdot 0) + 4(2 \cdot 5 - (-1) \cdot 0)$$

$$= 1(-1 - 15) - 2(2 - 0) + 4(10 - 0)$$

$$= 1(-16) - 2(2) + 4(10) = -16 - 4 + 40 = 20$$
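The recursive cofactor expansion can be sketched in plain Python (a minimal illustration, not an optimized implementation):

```python
import numpy as np

def det_recursive(X):
    """Determinant by cofactor expansion along the first row."""
    n = len(X)
    if n == 1:
        return X[0][0]
    if n == 2:
        return X[0][0] * X[1][1] - X[0][1] * X[1][0]
    total = 0
    for j in range(n):
        # Minor X_{1,j}: remove row 1 and column j
        minor = [row[:j] + row[j + 1:] for row in X[1:]]
        # Alternate + and - signs across the columns
        total += (-1) ** j * X[0][j] * det_recursive(minor)
    return total

X = [[1, 2, 4], [2, -1, 3], [0, 5, 1]]
print(det_recursive(X))                   # 20
print(round(np.linalg.det(np.array(X))))  # 20
```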

Numpy Practice - Determinant

import numpy as np

X = np.array([[1, 2, 4], [2, -1, 3], [0, 5, 1]])

# Compute the determinant (a floating-point result, approximately 20)
det_X = np.linalg.det(X)
print(det_X)
Fig 5: Numpy Determinant

Relationship between determinants and eigenvalues

There is a relationship between a matrix's geometric scaling of space (the determinant) and its characteristic scaling factors (the eigenvalues): the product of the eigenvalues of a matrix is equal to its determinant.

If we have an $n \times n$ matrix $A$ with eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_n$, then

$$\det(A) = \prod_{i=1}^{n} \lambda_i = \lambda_1 \cdot \lambda_2 \cdots \lambda_n$$

$\det(A)$ also quantifies the (signed) volume of the unit cube after the transformation.

Some characteristics of a determinant:

  1. If $\det(A) = 0$, then the matrix is singular and not invertible.
  2. If $\det(A) = 0$, then $A$ collapses space completely in at least one dimension, eliminating all volume.
  3. If $0 < |\det(A)| < 1$, then $A$ contracts space, shrinking it but not collapsing it.
  4. If $|\det(A)| = 1$, then $A$ does not change the volume of space.
  5. If $|\det(A)| > 1$, then $A$ expands space, increasing its volume.

A negative determinant additionally flips the orientation of space. In short, the determinant quantifies the volume of the unit cube after the transformation and tells us whether a matrix is invertible.

Numpy Practice - Relationship between determinants and eigenvalues

Fig 6: Numpy Relationship
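A minimal sketch of this relationship in NumPy, reusing the matrix from the earlier eigenvalue example:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])

eigenvalues, _ = np.linalg.eig(A)

# The product of the eigenvalues equals the determinant (up to floating-point error)
print(np.isclose(np.prod(eigenvalues), np.linalg.det(A)))  # True
```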

Eigen Decomposition

Eigendecomposition is the factorization of a square matrix into a form that reveals its eigenvalues and eigenvectors. It is essentially a way of "decoupling" a matrix's complex transformations into simple scaling operations along specific directions.

If $A$ is an $n \times n$ matrix with $n$ linearly independent eigenvectors, it can be decomposed as:

$$A = V \Lambda V^{-1}$$

Where:

  • $V$: a matrix whose columns are the eigenvectors of $A$.
  • $\Lambda$ (capital lambda): a diagonal matrix whose diagonal elements are the corresponding eigenvalues ($\lambda_1, \lambda_2, \dots, \lambda_n$).
  • $V^{-1}$: the inverse of the eigenvector matrix.

Numpy Practice - Eigen Decomposition

You can find the numpy practice for eigen decomposition here.

It shows how the eigenvalues and eigenvectors recombine to form the original matrix.
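A minimal sketch of this recombination in NumPy (the matrix here is an assumed example with distinct eigenvalues):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# V holds the eigenvectors as columns; Lambda is the diagonal eigenvalue matrix
eigenvalues, V = np.linalg.eig(A)
Lambda = np.diag(eigenvalues)

# Recombine: A = V @ Lambda @ V^{-1}
A_rebuilt = V @ Lambda @ np.linalg.inv(V)
print(np.allclose(A, A_rebuilt))  # True
```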

Why do we use it?

Eigendecomposition is useful for several reasons; the primary advantage is that it simplifies complex matrix operations by changing the "basis" of the space to the eigenvectors.

Matrix Powers

Matrix powers raise a matrix $A$ to the $n^{th}$ power by multiplying it by itself repeatedly.

Calculating $A^{100}$ this way is computationally expensive. However, with eigendecomposition:

$$A^k = V \Lambda^k V^{-1}$$

Since Λ\Lambda is diagonal, Λk\Lambda^k is simply each diagonal element raised to the power of kk. This turns a massive matrix multiplication problem into a simple arithmetic one.

Fig 7: Numpy Matrix Powers
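A sketch of this shortcut in NumPy (the matrix and the power are assumed example values):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
k = 10

eigenvalues, V = np.linalg.eig(A)

# Lambda^k is just each diagonal entry raised to the k-th power
A_k = V @ np.diag(eigenvalues ** k) @ np.linalg.inv(V)

# Compare with direct repeated matrix multiplication
print(np.allclose(A_k, np.linalg.matrix_power(A, k)))  # True
```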

Symmetric Matrices

If matrix $A$ is symmetric (meaning $A = A^T$), the eigendecomposition becomes even cleaner. The eigenvectors are orthogonal, allowing us to use the transpose instead of the inverse:

$$A = Q \Lambda Q^T$$

This is known as the Spectral Theorem. It is the mathematical foundation for Principal Component Analysis (PCA) in data science.
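A quick NumPy check of this, using np.linalg.eigh (NumPy's routine for symmetric matrices) on an assumed example matrix:

```python
import numpy as np

# A symmetric matrix (S equals its own transpose)
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, Q = np.linalg.eigh(S)

# The eigenvector matrix is orthogonal: Q @ Q.T is the identity
print(np.allclose(Q @ Q.T, np.eye(2)))                  # True

# So the inverse is just the transpose: S = Q @ Lambda @ Q.T
print(np.allclose(S, Q @ np.diag(eigenvalues) @ Q.T))   # True
```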

Conclusion: Putting it All Together

Through this exploration, we’ve seen that Determinants, Eigenvalues, and Eigenvectors are not just abstract symbols on a page—they are the vital organs of linear algebra. They allow us to take a complex, high-dimensional transformation and strip it down to its most basic, understandable components.

Key Takeaways for your Workflow:

  • The Determinant acts as the "health check" for your matrix. It tells you if your space has collapsed ($\det = 0$) and how much it has scaled.
  • Eigenvectors and Eigenvalues provide the "skeleton" of the transformation. They show which directions remain fixed and how much scaling is applied along those axes.
  • Eigendecomposition ($A = V \Lambda V^{-1}$) is the ultimate shortcut. By shifting our perspective to the "eigen-basis," we can perform massive calculations, like raising a matrix to the 100th power, with far less computational effort.

What’s Next? If you haven't already, I highly recommend opening the Google Colab notebook here. Try changing the values in the matrix examples and watch how the determinant reacts—seeing the math come to life in code is the best way to make these concepts stick.
