Andrew Gurung
Matrix Operations


Last updated 6 years ago


Matrix operations appear in many machine learning algorithms, and linear algebra libraries make them fast and easy to compute, especially when training on GPUs.

Transpose

The transpose of a matrix, written A^T, is the matrix formed by turning the rows of the original matrix into columns and vice versa.

A=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix}, \quad A^T=\begin{bmatrix} a_{11} & a_{21} & a_{31} \\ a_{12} & a_{22} & a_{32} \end{bmatrix}
from numpy import array

A = array([[1, 2], [3, 4], [5, 6]])
C = A.T
print(C)

Output:
[[1 3 5]
 [2 4 6]]
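Since transposing turns rows into columns "and vice versa," applying it twice recovers the original matrix. A quick sketch checking this with the same example matrix:

```python
from numpy import array, array_equal

A = array([[1, 2], [3, 4], [5, 6]])
# A is 3x2, so A.T is 2x3; transposing again restores the 3x2 original
print(A.T.shape)
print(array_equal(A.T.T, A))  # True
```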

Inversion

Matrix inversion finds, for a matrix A, another matrix B = A^{-1} such that AB = BA = I, where I is the identity matrix (1's on the main diagonal, zeros everywhere else).

Note: A square matrix that is not invertible is referred to as singular. In practice the inverse is not computed directly; instead it is obtained through forms of matrix decomposition.

B = A^{-1}, \quad I=\begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}
from numpy import array
from numpy import dot
from numpy.linalg import inv

A = array([[4, 3], [3, 2]])
B = inv(A)
print(B)
product = dot(A, B)
print(product)

Output:
[[-2.  3.]
 [ 3. -4.]]
[[1. 0.]
 [0. 1.]]
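The note above says a square matrix with no inverse is singular. As a minimal sketch of what that looks like in NumPy (the example matrix here is my own, chosen so one row is a multiple of another), attempting to invert it raises LinAlgError:

```python
from numpy import array
from numpy.linalg import inv, LinAlgError

# second row is twice the first, so this matrix is singular
S = array([[1, 2], [2, 4]])
try:
    inv(S)
except LinAlgError:
    print("singular: no inverse exists")
```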

Trace

The trace of a square matrix, written tr(A), is the sum of the values on the main diagonal of the matrix.

from numpy import array
from numpy import trace

A = array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
B = trace(A)
print(B)

Output:
15
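Because the trace is just the sum of the main diagonal, it can also be computed by hand from the diagonal entries; a quick check against the same example matrix:

```python
from numpy import array

A = array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# main diagonal is 1, 5, 9, so tr(A) = 1 + 5 + 9 = 15
print(A.diagonal().sum())  # 15, same result as numpy's trace
```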

Determinant

The determinant of a square matrix, written |A| or det(A), is a scalar that represents the volume scaling of the linear transformation described by the matrix.

Note: The determinant is the product of all the eigenvalues of the matrix. Also, a determinant of 0 indicates that the matrix cannot be inverted.

from numpy import array
from numpy.linalg import det

A = array([[2, -3, 1], [2, 0, -1], [1, 4, 5]])
B = det(A)
print(B)

Output:
49.000000000000014
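The note above says the determinant is the product of the eigenvalues. A minimal sketch verifying this for the same example matrix (allclose is used because eigenvalues may come back as complex numbers with floating-point noise):

```python
from numpy import array, prod, allclose
from numpy.linalg import det, eigvals

A = array([[2, -3, 1], [2, 0, -1], [1, 4, 5]])
# the determinant equals the product of all eigenvalues
print(allclose(det(A), prod(eigvals(A))))  # True
```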

Rank

The rank of a matrix is the number of dimensions spanned by all of the vectors within a matrix.

  • Rank of 0: all vectors span a point

  • Rank of 1: all vectors span a line

  • Rank of 2: all vectors span a two-dimensional plane

from numpy import array
from numpy.linalg import matrix_rank

# rank 0
M0 = array([[0, 0], [0, 0]])
mr0 = matrix_rank(M0)
print(mr0)
# rank 1
M1 = array([[1, 2], [1, 2]])
mr1 = matrix_rank(M1)
print(mr1)
# rank 2
M2 = array([[1, 2], [3, 4]])
mr2 = matrix_rank(M2)
print(mr2)

Output:
0
1
2
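The matrix decomposition method mentioned below can be sketched directly: the rank equals the number of non-zero singular values from the SVD (the 1e-10 tolerance here is an arbitrary choice for this sketch):

```python
from numpy import array
from numpy.linalg import svd

M2 = array([[1, 2], [3, 4]])
s = svd(M2, compute_uv=False)  # singular values from the decomposition
rank = int((s > 1e-10).sum())  # rank = count of non-negligible singular values
print(rank)  # 2, matching matrix_rank above
```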
A=\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}, \quad tr(A) = a_{11} + a_{22} + a_{33}

Calculating rank mathematically (matrix decomposition method):
https://www.youtube.com/watch?v=59z6eBynJuw

Calculating determinant of a matrix

Reference:
https://machinelearningmastery.com/matrix-operations-for-machine-learning/