Andrew Gurung
Linear Algebra: Deep Learning Book

Linear algebra is a form of continuous rather than discrete mathematics.

Scalars, Vectors, Matrices and Tensors

Multiplying Matrices and Vectors

Identity and Inverse Matrix

Identity Matrix

  • $I_n \in \mathbb{R}^{n \times n}$; entries on the main diagonal are 1's and all other entries are 0's

  • An identity matrix $I_n$ is a matrix that does not change any vector $x$ when we multiply that vector by the matrix: $I_n x = x$
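A minimal sketch of the identity property in NumPy, using a made-up vector $x$; `np.eye(n)` builds the $n \times n$ identity matrix:

```python
import numpy as np

n = 3
I = np.eye(n)                      # 1's on the main diagonal, 0's elsewhere
x = np.array([2.0, -1.0, 5.0])    # arbitrary example vector

# Multiplying by I_n leaves x unchanged: I_n x = x
assert np.allclose(I @ x, x)
```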

Inverse Matrix

  • Denoted by $A^{-1}$; where $A^{-1}A = I_n$

  • Use case: $A^{-1}$ can be used to solve a system of linear equations:

$Ax = b$

Step 1: Multiply both sides by $A^{-1}$: $A^{-1}Ax = A^{-1}b$

Step 2: Since $A^{-1}A = I_n$: $I_n x = A^{-1}b$

Step 3: Multiplying by $I_n$ does not change the vector: $x = A^{-1}b$
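The steps above can be sketched in NumPy with a small made-up system (the matrix `A` and vector `b` here are illustrative, not from the text):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# Step 1-3: x = A^{-1} b
A_inv = np.linalg.inv(A)
x = A_inv @ b

# Check the solution satisfies the original system Ax = b
assert np.allclose(A @ x, b)
```

In practice, `np.linalg.solve(A, b)` is preferred over forming $A^{-1}$ explicitly: it is faster and more numerically stable, which matters when $A$ is large or nearly singular.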

Linear dependence and Span


Last updated 6 years ago


Link:

http://www.deeplearningbook.org/contents/linear_algebra.html