Inner Products

Given a vector space $V$, an inner product is a bilinear mapping $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}$ with the following properties:

  • symmetric: $\langle x, y \rangle = \langle y, x \rangle$ for all $x, y \in V$,
  • positive definite: $\langle x, x \rangle \geq 0$, with $\langle x, x \rangle = 0 \iff x = 0$.

Given an inner product over a vector space $V$ with basis $B = (b_1, \dots, b_n)$, we can write $x = \sum_{i=1}^{n} \lambda_i b_i$ and $y = \sum_{j=1}^{n} \psi_j b_j$, so $\langle x, y \rangle = \sum_{i=1}^{n} \sum_{j=1}^{n} \lambda_i \psi_j \langle b_i, b_j \rangle$. Taking $A_{ij} = \langle b_i, b_j \rangle$, $\hat{x} = (\lambda_1, \dots, \lambda_n)^\top$, and $\hat{y} = (\psi_1, \dots, \psi_n)^\top$, the inner product can be written $\langle x, y \rangle = \hat{x}^\top A \hat{y}$.
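
As a quick numerical illustration (a minimal NumPy sketch; the basis and coordinate vectors here are made up for the example), the Gram matrix $A$ turns coordinate vectors into inner products:

    import numpy as np

    # Hypothetical basis of R^2 (columns are the basis vectors b_1, b_2).
    B = np.array([[1.0, 1.0],
                  [0.0, 1.0]])

    # Gram matrix A_ij = <b_i, b_j> under the standard dot product.
    A = B.T @ B

    # Coordinate vectors of x and y with respect to the basis B.
    x_hat = np.array([2.0, -1.0])
    y_hat = np.array([0.5, 3.0])

    # <x, y> computed two ways: via coordinates and the Gram matrix,
    # and directly on the reconstructed vectors x = B x_hat, y = B y_hat.
    via_gram = x_hat @ A @ y_hat
    direct = (B @ x_hat) @ (B @ y_hat)
    assert np.isclose(via_gram, direct)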

Orthogonal Projection

Given a lower dimensional subspace $U \subseteq \mathbb{R}^n$ with $\dim(U) = m < n$ and basis $(b_1, \dots, b_m)$, collected as the columns of $B \in \mathbb{R}^{n \times m}$, we can project $x \in \mathbb{R}^n$ to $U$ with $\pi_U(x) = B\lambda$, with coordinates $\lambda \in \mathbb{R}^m$ for the basis $B$. That is, $\pi_U(x) = \sum_{i=1}^{m} \lambda_i b_i$. We require $\langle x - \pi_U(x), b_i \rangle = 0$ for every basis vector $b_i$, so $B^\top (x - B\lambda) = 0$, or $\lambda = (B^\top B)^{-1} B^\top x$. Explicitly, the projection is given by

$\pi_U(x) = B (B^\top B)^{-1} B^\top x.$
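
A small NumPy sketch of this formula (the subspace here is a made-up two-dimensional subspace of $\mathbb{R}^3$; the coordinates are solved rather than forming the inverse explicitly):

    import numpy as np

    # Basis of a 2-dimensional subspace U of R^3 (columns of B).
    B = np.array([[1.0, 0.0],
                  [1.0, 1.0],
                  [0.0, 2.0]])

    x = np.array([6.0, 0.0, 0.0])

    # Coordinates lambda = (B^T B)^{-1} B^T x via the normal equations.
    lam = np.linalg.solve(B.T @ B, B.T @ x)

    # Projection pi_U(x) = B lambda; the residual x - pi_U(x) is orthogonal to U.
    proj = B @ lam
    assert np.allclose(B.T @ (x - proj), 0.0)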

Singular Value Decomposition

For all $A \in \mathbb{R}^{m \times n}$ with rank $r$, we can write $A = U \Sigma V^\top$, where the columns of $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ are orthonormal bases of $\mathbb{R}^m$ and $\mathbb{R}^n$, and $\Sigma \in \mathbb{R}^{m \times n}$ is diagonal. The diagonals of $\Sigma$ are the singular values, $\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_r > 0$.

Using the spectral theorem, we know any symmetric matrix has an orthonormal basis of eigenvectors. Then we can compute bases for $V$ and $U$ as follows:

$A^\top A = (U \Sigma V^\top)^\top (U \Sigma V^\top) = V (\Sigma^\top \Sigma) V^\top, \qquad A A^\top = (U \Sigma V^\top)(U \Sigma V^\top)^\top = U (\Sigma \Sigma^\top) U^\top.$

To actually compute these values, we can find the right singular vectors by diagonalizing $A^\top A$ and computing its eigenvectors / eigenvalues (the eigenvalues are the squares of the singular values). Given $V = [v_1, \dots, v_n]$, with $\Sigma$ holding the singular values $\sigma_i = \sqrt{\lambda_i}$, we can compute the left singular vectors with $u_i = \frac{1}{\sigma_i} A v_i$, and $U = [u_1, \dots, u_r]$.
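
A sketch of this recipe in NumPy (using a random full-column-rank matrix; the eigenvalues of $A^\top A$ are sorted in descending order so the singular values come out ordered):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))   # m = 5, n = 3, full column rank

    # Right singular vectors: eigenvectors of A^T A; eigenvalues are sigma_i^2.
    eigvals, V = np.linalg.eigh(A.T @ A)
    order = np.argsort(eigvals)[::-1]          # sort descending
    eigvals, V = eigvals[order], V[:, order]
    sigma = np.sqrt(eigvals)

    # Left singular vectors: u_i = (1 / sigma_i) A v_i, columns of U.
    U = (A @ V) / sigma

    # Check against the thin SVD A = U diag(sigma) V^T.
    assert np.allclose(U @ np.diag(sigma) @ V.T, A)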

Alternatively, given $U$ and $V$, we can compute the singular values directly with $\Sigma = U^\top A V$.
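
A self-contained check of this identity, using NumPy's built-in SVD on a random matrix (note that np.linalg.svd returns $V^\top$ rather than $V$):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 3))

    # Full SVD from NumPy: A = U Sigma V^T, with U 4x4 and V^T 3x3.
    U, sigma, Vt = np.linalg.svd(A)

    # Recover the singular values directly as Sigma = U^T A V.
    Sigma = U.T @ A @ Vt.T
    assert np.allclose(Sigma[:3, :3], np.diag(sigma))
    assert np.allclose(Sigma[3, :], 0.0)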

Differentiation

Given a function $f : \mathbb{R}^n \to \mathbb{R}$, $x \mapsto f(x)$, the partial derivative is given by $\frac{\partial f}{\partial x_i} = \lim_{h \to 0} \frac{f(x_1, \dots, x_i + h, \dots, x_n) - f(x)}{h}$, and the gradient is $\nabla_x f = \left[ \frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n} \right]$. This can be generalized to a multivariate function $f : \mathbb{R}^n \to \mathbb{R}^m$, where $x \in \mathbb{R}^n$ and $f(x) \in \mathbb{R}^m$, with the Jacobian

$J = \frac{\partial f}{\partial x} = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_n} \end{bmatrix}.$
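
As a concrete sketch, the Jacobian of a made-up map $f : \mathbb{R}^2 \to \mathbb{R}^3$ can be checked against a finite-difference approximation (the function and step size are arbitrary choices for illustration):

    import numpy as np

    def f(x):
        # Made-up map f : R^2 -> R^3.
        return np.array([x[0] * x[1], np.sin(x[0]), x[1] ** 2])

    def jacobian_fd(f, x, h=1e-6):
        # Finite-difference Jacobian: J[i, j] approximates d f_i / d x_j.
        fx = f(x)
        J = np.zeros((fx.size, x.size))
        for j in range(x.size):
            step = np.zeros_like(x)
            step[j] = h
            J[:, j] = (f(x + step) - fx) / h
        return J

    x = np.array([1.0, 2.0])
    # Analytic Jacobian of f at x.
    J_true = np.array([[x[1], x[0]],
                       [np.cos(x[0]), 0.0],
                       [0.0, 2.0 * x[1]]])
    assert np.allclose(jacobian_fd(f, x), J_true, atol=1e-4)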

Neural Networks

The $i$th layer of a neural network is represented with a function $x_i = f_i(x_{i-1}) = \sigma(A_i x_{i-1} + b_i)$, where $x_{i-1}$ is the previous layer's output (the input to this layer), $A_i$ is the weight matrix, $b_i$ is the bias vector, and $\sigma$ is a nonlinear activation.
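
A minimal NumPy sketch of a single layer under these definitions (the activation $\sigma$ is taken to be tanh and the layer sizes are arbitrary):

    import numpy as np

    def layer(x_prev, A_i, b_i, sigma=np.tanh):
        # One layer: x_i = sigma(A_i x_{i-1} + b_i).
        return sigma(A_i @ x_prev + b_i)

    rng = np.random.default_rng(0)
    x0 = rng.standard_normal(4)          # input to the layer
    A1 = rng.standard_normal((3, 4))     # weight matrix, maps R^4 -> R^3
    b1 = rng.standard_normal(3)          # bias vector
    x1 = layer(x0, A1, b1)               # layer output, in R^3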

Given $K$ layers and an expected output $y$, define a loss function

$L(\theta) = \| y - f_K(\theta, x) \|^2,$

where $\theta = \{A_1, b_1, \dots, A_K, b_K\}$ collects the weights and biases of all layers.
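
Putting the pieces together, a sketch of the forward pass through $K$ layers and the squared-error loss (layer sizes, activation, and data are made up for illustration):

    import numpy as np

    def forward(x, params, sigma=np.tanh):
        # Apply K layers x_i = sigma(A_i x_{i-1} + b_i) in sequence.
        for A_i, b_i in params:
            x = sigma(A_i @ x + b_i)
        return x

    def loss(params, x, y):
        # L(theta) = || y - f_K(theta, x) ||^2
        return np.sum((y - forward(x, params)) ** 2)

    rng = np.random.default_rng(0)
    sizes = [4, 8, 8, 2]                          # input, two hidden, output
    params = [(rng.standard_normal((m, n)), rng.standard_normal(m))
              for n, m in zip(sizes[:-1], sizes[1:])]
    x, y = rng.standard_normal(4), rng.standard_normal(2)
    print(loss(params, x, y))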