Watch the series here: https://www.bilibili.com/video/BV1ys411472E
From a geometric standpoint, vectors in linear algebra are arrows originating from the origin of a coordinate system. Their coordinates represent their components along each axis.
(Since vectors almost always start at the origin, we can sometimes think of a vector simply as a point in space—the point where its tip lands.)
The sum of two scaled vectors is called a linear combination of those vectors.
One way to think about the “linear” part: If you fix one scalar and let the other vary freely, the tips of the resulting vectors will trace a straight line.
Depending on the vectors, their linear combinations can have different outcomes:
If v and w are not collinear, their linear combinations can reach any point in the 2D plane.
If they are collinear, the combinations are confined to the line through the origin they share; and if both are the zero vector, every combination stays stuck at the origin.
The set of all possible vectors you can reach with a linear combination of a given set of vectors is called the span of those vectors.
A basis of a vector space is a set of linearly independent vectors that spans the entire space.
The term “transformation” is essentially another word for “function”; it takes an input and produces an output. Using “transformation” emphasizes the idea of motion, which provides excellent geometric intuition for what happens to vectors.
In linear algebra, a transformation moves all points in a vector space to new locations.
A linear transformation is a special kind of transformation with two properties: additivity, L(v + w) = L(v) + L(w), and homogeneity, L(cv) = cL(v).
Visually, a linear transformation keeps grid lines parallel and evenly spaced, without moving the origin.
The core idea: We only need to track where the basis vectors land. The transformation of any other vector can be described as a linear combination of these transformed basis vectors.
Each column of a matrix represents the coordinates of a transformed basis vector. Multiplying a matrix by a vector (x, y) gives the coordinates of that vector after the transformation.
Whenever you see a matrix, you can interpret it as a specific transformation of space.
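A minimal numpy sketch of this idea (the matrix and vector values are illustrative): transforming a vector is the same as taking the matching linear combination of the matrix's columns.

```python
import numpy as np

A = np.array([[1, 3],
              [-2, 0]])   # column 0 = where i-hat lands, column 1 = where j-hat lands
v = np.array([2, 1])      # v = 2*i-hat + 1*j-hat

# Transforming v = taking the same combination of the transformed basis vectors.
print(2 * A[:, 0] + 1 * A[:, 1])   # [ 5 -4]
print(A @ v)                       # [ 5 -4] -- identical
```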
A composite transformation is the new linear transformation that results from applying several individual transformations one after another.
The matrix of a composite transformation is the product of the individual transformation matrices. The product is calculated from right to left, corresponding to the order of application.
Crucially, matrix multiplication is not commutative. Geometrically, this means that changing the order of transformations will generally result in a different final transformation.
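For instance (a sketch using two common transformations), composing a rotation and a shear in the two possible orders gives two different matrices:

```python
import numpy as np

rotation = np.array([[0, -1],
                     [1,  0]])   # 90-degree counterclockwise rotation
shear = np.array([[1, 1],
                  [0, 1]])       # horizontal shear

# "First shear, then rotate" reads right to left: rotation @ shear.
print(rotation @ shear)   # [[ 0 -1]
                          #  [ 1  1]]
print(shear @ rotation)   # [[ 1 -1]
                          #  [ 1  0]] -- a different transformation
```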
The determinant of a transformation is the factor by which it scales areas (or, in 3D, volumes).
A determinant of zero means the transformation squishes space into a lower dimension (e.g., a plane becomes a line or a point). This is a vital property, as it indicates that the columns of the matrix are “linearly dependent.”
The determinant represents a scaling factor, so why can it be negative? A negative sign indicates that the transformation inverts the orientation of space. The absolute value of the determinant still represents the scaling factor for area.
In 3D, a negative determinant means the transformation changes a “right-hand system” into a “left-hand system.”
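A quick numerical check of all three cases (the matrices are illustrative):

```python
import numpy as np

scale = np.array([[3, 0],
                  [0, 2]])   # stretches x by 3 and y by 2
flip = np.array([[0, 1],
                 [1, 0]])    # swaps i-hat and j-hat, reversing orientation
squish = np.array([[2, 4],
                   [1, 2]])  # linearly dependent columns

print(np.linalg.det(scale))   # 6.0  -- areas grow by a factor of 6
print(np.linalg.det(flip))    # -1.0 -- area preserved, orientation inverted
print(np.linalg.det(squish))  # 0.0 (up to rounding) -- the plane collapses onto a line
```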
A major application of linear algebra is solving systems of linear equations. We can view a system Ax = v geometrically as searching for an unknown vector x that, after being transformed by matrix A, lands on a known vector v.
When det(A) ≠ 0:
The transformation does not reduce the dimensionality of space. This means you can always find a unique x by applying the inverse transformation (A⁻¹) to v.
When det(A) = 0:
The transformation squishes space into a lower dimension, and an inverse transformation does not exist. A solution exists only if the target vector v happens to lie within that lower-dimensional output space. If a solution exists, there will be infinitely many, as multiple input vectors x get mapped to the same output vector.
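Both cases in numpy (with illustrative matrices): np.linalg.solve finds the unique x when det(A) ≠ 0, and raises an error for a singular matrix.

```python
import numpy as np

A = np.array([[2, 1],
              [1, 3]])        # det(A) = 5, nonzero
v = np.array([5, 10])

x = np.linalg.solve(A, v)     # in effect, applying A's inverse to v
print(x)                      # [1. 3.]
print(A @ x)                  # [ 5. 10.] -- x really does land on v

B = np.array([[2, 4],
              [1, 2]])        # det(B) = 0: squishes the plane onto a line
# np.linalg.solve(B, v) raises LinAlgError here: v is off that line, so no solution.
```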
The set of all possible output vectors of a transformation is called its column space.
The rank of a transformation is the number of dimensions in its output. More precisely: the rank is the dimension of the column space.
Since the columns of a matrix tell you where the basis vectors land, the column space is simply the span of the columns of the matrix. The zero vector is always included in the column space, as a linear transformation must keep the origin fixed.
The set of all vectors that land on the origin (the zero vector) after a transformation is called the null space (or kernel).
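One way to see both ideas numerically (a sketch; the 1e-10 threshold is an arbitrary tolerance): the rank comes straight from np.linalg.matrix_rank, and the right-singular vectors whose singular value is zero span the null space.

```python
import numpy as np

B = np.array([[2, 4],
              [1, 2]])                 # dependent columns

print(np.linalg.matrix_rank(B))        # 1 -- the column space is a line

# Right-singular vectors whose singular value is (numerically) zero
# form a basis for the null space.
_, s, vt = np.linalg.svd(B)
kernel = vt[s < 1e-10]
print(kernel)                          # roughly [[-0.894  0.447]] (up to sign): the line x = -2y
print(B @ kernel[0])                   # ~[0. 0.] -- it lands on the origin
```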
The dot product of two vectors of the same dimension: multiply their corresponding components and add the results.
Geometrically, it is the length of the projection of one vector onto the other, multiplied by the magnitude of the other vector.
We can think of the dot product with a specific vector as a transformation from a 2D space to a 1D space (the number line).
Due to symmetry, the 1×2 matrix that defines this 2D-to-1D transformation is simply the coordinates of that vector, laid out as a row. How cool is that!
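In numpy (the vectors are chosen for illustration), both readings give the same number:

```python
import numpy as np

u = np.array([2, 1])
w = np.array([3, 4])

print(np.dot(u, w))         # 10, from 2*3 + 1*4

# Duality: the same number falls out of a 1x2 matrix (a 2D-to-1D linear map)
# whose single row is just u's coordinates.
print(u.reshape(1, 2) @ w)  # [10]
```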
In 2D, the cross product v × w can be treated as a single number: the signed area of the parallelogram the two vectors span, with the sign determined by their relative orientation.
For 3D vectors, the cross product yields a new vector that is perpendicular to both v and w. Its magnitude is the area of the parallelogram they form, and its direction follows the right-hand rule.
The cross product is deeply connected to the concept of duality: v × w can be defined through a 3D-to-1D linear transformation, the function that maps a variable vector (x, y, z) to the determinant of the matrix whose columns are (x, y, z), v, and w. The vector dual to that transformation is exactly v × w.
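A sketch of this dual view with made-up vectors: the determinant-based function agrees with the dot product against v × w for any test vector.

```python
import numpy as np

v = np.array([1, 0, 2])
w = np.array([0, 3, 1])
p = np.cross(v, w)
print(p)                                   # [-6 -1  3]

# The 3D-to-1D map: x -> det of the matrix with columns x, v, w.
def f(x):
    return np.linalg.det(np.column_stack([x, v, w]))

x = np.array([2.0, 5.0, 1.0])              # any test vector
print(f(x), np.dot(p, x))                  # both -14.0 (up to rounding)
```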
Any system that converts a vector into a set of coordinates is a coordinate system. This conversion is defined by the system’s basis vectors. Using a different set of basis vectors changes the mapping between vectors and their coordinates.
A change of basis matrix allows us to translate coordinates from one system to another.
To translate a transformation matrix M from our standard coordinate system to an alternate basis, we use the formula P⁻¹MP, where P is the change of basis matrix whose columns are the alternate basis vectors written in standard coordinates.
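A small sketch of the formula (P and M are illustrative):

```python
import numpy as np

P = np.array([[2, -1],
              [1,  1]])       # columns: the alternate basis, in standard coordinates
M = np.array([[0, -1],
              [1,  0]])       # 90-degree rotation, described in the standard basis

M_alt = np.linalg.inv(P) @ M @ P   # the same rotation, described in the alternate basis

# Check on a vector given in alternate-basis coordinates:
v_alt = np.array([1.0, 2.0])
direct = M_alt @ v_alt                           # rotate within the alternate basis
roundtrip = np.linalg.inv(P) @ M @ (P @ v_alt)   # translate out, rotate, translate back
print(np.allclose(direct, roundtrip))            # True
```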
An eigenvector of a transformation is a nonzero vector that stays on its own span during the transformation, merely getting scaled; the scale factor is its eigenvalue. The defining equation is Av = λv, where A is the transformation matrix, v is an eigenvector, and λ is its eigenvalue.
We can rearrange this to (A - λI)v = 0. For this equation to have a non-zero solution for v, the transformation (A - λI) must squish space into a lower dimension. This means its determinant must be zero.
So, we find eigenvalues by solving det(A - λI) = 0.
If we use eigenvectors as our basis vectors (an “eigenbasis”), calculations involving the transformation matrix become much simpler, as the matrix becomes diagonal with the eigenvalues on the diagonal.
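In numpy (A is an illustrative matrix), np.linalg.eig returns the eigenvalues and eigenvectors, and changing to the eigenbasis diagonalizes A:

```python
import numpy as np

A = np.array([[3., 1.],
              [0., 2.]])

eigenvalues, eigenvectors = np.linalg.eig(A)  # eigenvectors come back as columns
print(eigenvalues)                            # [3. 2.] (order may vary)

v = eigenvectors[:, 0]
print(A @ v, eigenvalues[0] * v)              # A v = lambda v, componentwise

# Change of basis into the eigenbasis: the matrix becomes diagonal.
P = eigenvectors
print(np.linalg.inv(P) @ A @ P)               # [[3. 0.]
                                              #  [0. 2.]] (up to rounding)
```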
The property of linearity is what allows a linear transformation to be fully described by its action on the basis vectors, which is what makes matrix-vector multiplication possible.
(The derivative is a classic example of a linear operator.)
A vector space is an abstract concept. Anything that satisfies the fundamental axioms of vector addition and scalar multiplication can be considered a vector space, allowing us to apply the powerful tools of linear algebra.
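To make the abstraction concrete (a sketch; the degree cap of 3 is arbitrary): treat polynomials as coefficient vectors, and the derivative becomes an ordinary matrix.

```python
import numpy as np

# Coordinates for polynomials up to degree 3, constant term first:
# a0 + a1*x + a2*x^2 + a3*x^3  ->  [a0, a1, a2, a3].
# Differentiation sends x^n to n*x^(n-1), so as a matrix it is:
D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]])

p = np.array([5, 0, 4, 1])   # 5 + 4x^2 + x^3
print(D @ p)                 # [0 8 3 0], i.e. the derivative 8x + 3x^2
```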