Vector Norms and Orthogonality

Vector Spaces

A vector space is a set V of vectors on which two operations + and · are defined, called vector addition and scalar multiplication.

The operation + (vector addition) must satisfy the following conditions, where u, v, w are vectors and c, d are scalars:

  1. Commutative law: u + v = v + u

  2. Associative law: u + (v + w) = (u + v) + w

  3. Additive identity: 0 + v = v and v + 0 = v

  4. Additive inverses: u + (-u) = 0

The operation · (scalar multiplication) must satisfy the following conditions:

  1. Distributive law over vector sum: c · (u + v) = c · u + c · v

  2. Distributive law over scalar sum: (c+d) · v = c · v + d · v

  3. Associative law of scalar product: c · (d · v) = (cd) · v

  4. Unitary law: 1 · v = v
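
These axioms can be checked numerically for concrete vectors. Below is a minimal sketch using NumPy; the vectors u, v, w and scalars c, d are arbitrary example values, not taken from the text above.

from numpy import array, allclose, zeros

# Arbitrary example vectors and scalars in R^2
u = array([1.0, 2.0])
v = array([3.0, -1.0])
w = array([0.5, 4.0])
c, d = 2.0, -3.0

# Vector addition axioms
assert allclose(u + v, v + u)                # commutative law
assert allclose(u + (v + w), (u + v) + w)    # associative law
assert allclose(zeros(2) + v, v)             # additive identity
assert allclose(u + (-u), zeros(2))          # additive inverses

# Scalar multiplication axioms
assert allclose(c * (u + v), c * u + c * v)  # distributive over vector sum
assert allclose((c + d) * v, c * v + d * v)  # distributive over scalar sum
assert allclose(c * (d * v), (c * d) * v)    # associative scalar product
assert allclose(1 * v, v)                    # unitary law

print('All eight axioms hold for these example values')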

Vector Norm

A vector norm is a non-negative number that describes the length or extent of a vector in space; it is also known as the vector's length or magnitude. There are different ways to calculate vector norms.

X = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}

L1 Norm

L1 Norm is calculated as the sum of the absolute values of the vector's elements.

  • Calculates the Manhattan distance from the origin of the vector space

L^1 = |x_1| + |x_2| + |x_3|
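
For example, for the vector x = (1, 2, -3) used in the NumPy example below:

L^1 = |1| + |2| + |-3| = 6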

L2 Norm

L2 Norm is calculated as the square root of the sum of the squared vector elements.

  • Calculates the Euclidean distance from the origin of the vector space

  • More commonly used in ML

L^2 = \sqrt{x_1^2 + x_2^2 + x_3^2}
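
For the same x = (1, 2, -3):

L^2 = \sqrt{1^2 + 2^2 + (-3)^2} = \sqrt{14} \approx 3.742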

Max Norm

Max Norm is calculated as the maximum absolute value of the vector's elements.

L^\infty = \max(|x_1|, |x_2|, |x_3|)
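
For the same x = (1, 2, -3):

L^\infty = \max(|1|, |2|, |-3|) = 3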

Calculating Norm using NumPy

from numpy import array, inf
from numpy.linalg import norm

x = array([1, 2, -3])

# Order 1 -> L1 norm, order 2 -> L2 norm, order inf -> max norm
l1 = norm(x, 1)
l2 = norm(x, 2)
lmax = norm(x, inf)

print('L1 norm: ', l1)
print('L2 norm: ', l2)
print('LMax norm: ', lmax)

Output:

L1 norm:  6.0
L2 norm:  3.7416573867739413
LMax norm:  3.0
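
Note: for vectors, calling norm(x) with no order argument defaults to the L2 norm.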

Orthogonality

Orthogonal: Two vectors are orthogonal if they are perpendicular to each other, i.e., their dot product is zero: u · v = 0

Proof: u · v = |u| |v| cos θ (from Cosine Similarity), and cos 90° = 0, so the dot product of perpendicular vectors is zero.

Orthonormal: Two vectors are orthonormal if their dot product is zero and the norm/length of each vector is 1: u · v = 0 and |u| = 1, |v| = 1

In 3-D Euclidean space, using the L2 norm to calculate length, the unit-length condition for orthonormal vectors becomes:

\sqrt{u_1^2 + u_2^2 + u_3^2} = 1 \quad \text{and} \quad \sqrt{v_1^2 + v_2^2 + v_3^2} = 1

Normal: A vector is said to be normal to a surface or curve if it is perpendicular to it.

Finding orthogonality using NumPy

from numpy import array, vdot

u = array([2, 18])
v = array([3/2, -1/6])

# u · v = (2)(3/2) + (18)(-1/6) = 3 - 3 = 0, so u and v are orthogonal
dotProduct = vdot(u, v)
print(dotProduct)

Output:

0.0
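
An orthogonal pair can be turned into an orthonormal pair by dividing each vector by its L2 norm. The following is a minimal sketch building on the example above; the names u_hat and v_hat are introduced here for illustration.

from numpy import array, vdot
from numpy.linalg import norm

u = array([2, 18])
v = array([3/2, -1/6])

# Dividing by the L2 norm scales each vector to unit length;
# scaling by a positive constant does not change orthogonality
u_hat = u / norm(u)
v_hat = v / norm(v)

print(vdot(u_hat, v_hat))        # ~0.0: still orthogonal
print(norm(u_hat), norm(v_hat))  # ~1.0 ~1.0: unit length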