How to Perform Matrix Operations with NumPy

Execute lightning-fast matrix operations with NumPy's optimized linear algebra functions - from basic multiplication to advanced decompositions.

Linear Algebra Unleashed

Forget tedious nested loops for matrix operations. NumPy transforms complex linear algebra into simple, readable code that runs at blazing speed.
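
To see what that means in practice, here is a minimal sketch (the loop version and variable names are illustrative, not anything from NumPy itself) comparing a hand-rolled nested-loop multiply with the @ operator, which dispatches to optimized BLAS routines:

import numpy as np

def loop_matmul(X, Y):
    # Naive triple-loop multiply, shown only to illustrate what NumPy replaces
    rows, inner, cols = len(X), len(Y), len(Y[0])
    result = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            for m in range(inner):
                result[i][j] += X[i][m] * Y[m][j]
    return result

X = np.random.rand(50, 50)
Y = np.random.rand(50, 50)

# Same answer either way; the vectorized call is dramatically faster
print(np.allclose(loop_matmul(X, Y), X @ Y))  # True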

Essential Matrix Operations

import numpy as np

# Create matrices
A = np.array([[1, 2], 
              [3, 4]])
B = np.array([[5, 6], 
              [7, 8]])

# Matrix multiplication (dot product)
C = np.dot(A, B)  # or A @ B (Python 3.5+)
print(C)
# [[19 22]
#  [43 50]]

# Element-wise multiplication (NOT matrix multiplication)
element_wise = A * B
print(element_wise)
# [[ 5 12]
#  [21 32]]

# Matrix transpose
A_T = A.T  # or np.transpose(A)
print(A_T)
# [[1 3]
#  [2 4]]

# Matrix inverse
A_inv = np.linalg.inv(A)
print(A_inv)
# [[-2.   1. ]
#  [ 1.5 -0.5]]

# Verify: A @ A_inv = identity
identity = np.dot(A, A_inv)
print(np.round(identity, 10))  # Round away tiny floating-point noise
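
A convenient follow-up check (a common pattern, not part of the snippet above) is np.allclose, which compares the product to the identity within floating-point tolerance:

# Continues from the block above: A and A_inv are already defined
print(np.allclose(A @ A_inv, np.eye(2)))  # True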

Advanced Linear Algebra Operations

# Determinant
det_A = np.linalg.det(A)
print(f"Determinant: {det_A}")  # -2.0

# Eigenvalues and eigenvectors
eigenvals, eigenvecs = np.linalg.eig(A)
print(f"Eigenvalues: {eigenvals}")
print(f"Eigenvectors:\n{eigenvecs}")

# Matrix rank
rank_A = np.linalg.matrix_rank(A)
print(f"Rank: {rank_A}")  # 2

# Singular Value Decomposition (SVD)
U, s, Vt = np.linalg.svd(A)
print(f"Singular values: {s}")

# QR decomposition
Q, R = np.linalg.qr(A)
print(f"Q matrix:\n{Q}")
print(f"R matrix:\n{R}")

Solving Linear Systems

# Solve Ax = b
A = np.array([[3, 1], 
              [1, 2]])
b = np.array([9, 8])

# Solve for x
x = np.linalg.solve(A, b)
print(f"Solution x: {x}")  # [2. 3.]

# Verify solution
verification = np.dot(A, x)
print(f"Ax = {verification}")  # Should equal b

# Least squares solution (for overdetermined systems)
A_over = np.array([[1, 1], 
                   [1, 2], 
                   [1, 3]])
b_over = np.array([6, 8, 10])

x_lstsq, residuals, rank, s = np.linalg.lstsq(A_over, b_over, rcond=None)
print(f"Least squares solution: {x_lstsq}")

Matrix Norms and Distances

# Different matrix norms
matrix = np.array([[3, 4], 
                   [0, 5]])

# Frobenius norm (default)
frobenius = np.linalg.norm(matrix)
print(f"Frobenius norm: {frobenius}")

# Nuclear norm (sum of singular values)
nuclear = np.linalg.norm(matrix, 'nuc')
print(f"Nuclear norm: {nuclear}")

# Condition number (measures numerical stability)
cond_num = np.linalg.cond(matrix)
print(f"Condition number: {cond_num}")

Batch Matrix Operations

# Multiple matrices at once
matrices = np.random.rand(5, 3, 3)  # 5 matrices of shape 3x3

# Batch determinants
batch_dets = np.linalg.det(matrices)
print(f"5 determinants: {batch_dets}")

# Batch matrix multiplication
A_batch = np.random.rand(10, 4, 3)
B_batch = np.random.rand(10, 3, 2)
C_batch = np.matmul(A_batch, B_batch)  # Shape: (10, 4, 2)
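
The @ operator broadcasts over the leading batch dimension in the same way, so each slice of the result matches an individual multiply (a quick sanity check on the arrays above):

# Batch result equals slice-by-slice multiplication
print(np.allclose(C_batch, A_batch @ B_batch))            # True
print(np.allclose(C_batch[0], A_batch[0] @ B_batch[0]))   # True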

Performance Tips

  • Use the @ operator for matrix multiplication (cleaner syntax)
  • np.matmul() handles batch operations automatically
  • np.dot() works for both vectors and matrices
  • Use np.linalg.multi_dot() for chained multiplications (see the sketch below)
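
As a sketch of that last tip (the shapes here are made up), np.linalg.multi_dot picks the cheapest parenthesization of a chain of multiplications automatically:

import numpy as np

# Shapes chosen so the multiplication order matters for cost
M1 = np.random.rand(100, 10)
M2 = np.random.rand(10, 1000)
M3 = np.random.rand(1000, 5)

# Same result; multi_dot chooses the cheapest evaluation order
chained = M1 @ M2 @ M3
optimized = np.linalg.multi_dot([M1, M2, M3])
print(np.allclose(chained, optimized))  # True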

Real-World Applications

# Principal Component Analysis (PCA) basics
data = np.random.rand(100, 5)  # 100 samples, 5 features

# Center the data
centered = data - np.mean(data, axis=0)

# Covariance matrix
cov_matrix = np.cov(centered.T)

# Get principal components
eigenvals, eigenvecs = np.linalg.eig(cov_matrix)  # np.linalg.eigh also works and is preferred for symmetric matrices

# Sort by eigenvalues (descending)
idx = eigenvals.argsort()[::-1]
principal_components = eigenvecs[:, idx]
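
The usual next step, sketched here as a continuation of the snippet above (k is an arbitrary choice), is to project the centered data onto the leading components:

# Reduce 5 features to the top k principal components
k = 2
reduced = centered @ principal_components[:, :k]
print(reduced.shape)  # (100, 2)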

Dive Deeper

Master scientific computing with SciPy, explore machine learning linear algebra, and understand optimization algorithms.
