Third task set

  1. I wrote a function that computes the absolute error between two vectors of the same dimensionality using the 2-norm. See my software manual entry.
  2. I wrote a function that computes the absolute error between two vectors of the same dimensionality using the 1-norm. See my software manual entry.
  3. I wrote a function that computes the absolute error between two vectors of the same dimensionality using the infinity-norm. See my software manual entry. (A sketch of these three error routines appears after this list.)
  4. I wrote a function that computes the 1-norm of a matrix. See my software manual entry.
  5. I wrote a function that computes the infinity-norm of a matrix. See my software manual entry. (A sketch of both matrix norms appears after this list.)
  6. I wrote a function that computes the dot product of two vectors of the same dimensionality. See my software manual entry.
  7. I wrote a function that computes the cross product of two 3D vectors. See my software manual entry.
  8. I wrote a function that computes the product of two matrices with matching inner dimensions. See my software manual entry. (A sketch of the dot product, cross product, and matrix product appears after this list.)
  9. I wrote a method that generates a diagonally dominant matrix with random values. See my software manual entry. (A sketch appears after this list.)
    • The Frobenius norm of a matrix is equal to the square root of the sum of all of its squared elements. I read the response to the question “What is the significance of the Frobenius norm?” in the following forum: https://www.quora.com/What-is-the-significance-of-the-Frobenius-norm. Garrett Thomas, a PhD student at Stanford, explains that the Frobenius norm can be useful because it is less computationally expensive to evaluate than the matrix 2-norm. Thomas also explains that because the Frobenius norm is differentiable with respect to every element of the matrix, gradient-based methods can be employed to optimize it. He mentions that the Frobenius norm may have other uses as well, because of the ways it is analogous to the L2 vector norm and is equivalent to the L2 norm of the vectorized version of the matrix. (A short sketch of the Frobenius norm appears after this list.)
    • As described on Wikipedia (https://en.wikipedia.org/wiki/Matrix_norm#Consistent_norms), a matrix norm is consistent with a vector norm if, for every matrix A and vector x, the vector norm of Ax is less than or equal to the product of the matrix norm of A and the vector norm of x (‖Ax‖ ≤ ‖A‖‖x‖). This helped me realize that when talking about the consistency of a matrix norm, we should specify which vector norm it is consistent with (although for an induced norm this is obvious). (A numerical spot-check of this inequality appears after this list.)
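
The real implementations are in the software manual entries referenced above; the blocks below are only minimal Python sketches of the same formulas, and the function names are placeholders of my own choosing rather than the names used in the manual. This first sketch covers the three absolute-error routines from items 1–3, each of which takes a norm of the difference u − v:

```python
import math

def abs_err_2norm(u, v):
    """Absolute error in the 2-norm: the Euclidean length of u - v."""
    return math.sqrt(sum((ui - vi) ** 2 for ui, vi in zip(u, v)))

def abs_err_1norm(u, v):
    """Absolute error in the 1-norm: the sum of |u_i - v_i|."""
    return sum(abs(ui - vi) for ui, vi in zip(u, v))

def abs_err_infnorm(u, v):
    """Absolute error in the infinity-norm: the largest |u_i - v_i|."""
    return max(abs(ui - vi) for ui, vi in zip(u, v))

u, v = [1.0, 2.0, 3.0], [0.0, 0.0, 3.0]
print(abs_err_2norm(u, v))    # sqrt(5), about 2.236
print(abs_err_1norm(u, v))    # 3.0
print(abs_err_infnorm(u, v))  # 2.0
```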
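Items 4 and 5 are the two induced matrix norms with simple closed forms: the 1-norm is the maximum absolute column sum and the infinity-norm is the maximum absolute row sum. A minimal sketch, again with placeholder names:

```python
def matrix_1norm(A):
    """Matrix 1-norm: the maximum absolute column sum."""
    rows, cols = len(A), len(A[0])
    return max(sum(abs(A[i][j]) for i in range(rows)) for j in range(cols))

def matrix_infnorm(A):
    """Matrix infinity-norm: the maximum absolute row sum."""
    return max(sum(abs(x) for x in row) for row in A)

A = [[1.0, -7.0],
     [-2.0, 3.0]]
print(matrix_1norm(A))    # 10.0, from column 2: |-7| + |3|
print(matrix_infnorm(A))  # 8.0,  from row 1:    |1| + |-7|
```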
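Items 6–8 are the standard product routines. The sketch below shows one straightforward way to write them; the nested comprehension for the matrix product mirrors the definition c_ij = sum over k of a_ik * b_kj:

```python
def dot(u, v):
    """Dot product of two vectors of the same dimensionality."""
    return sum(ui * vi for ui, vi in zip(u, v))

def cross(u, v):
    """Cross product of two 3D vectors."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def matmul(A, B):
    """Product of an m x n matrix A and an n x p matrix B."""
    n, p = len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]

print(dot([1, 2, 3], [4, 5, 6]))             # 32
print(cross([1, 0, 0], [0, 1, 0]))           # [0, 0, 1]
print(matmul([[1, 2], [3, 4]], [[5], [6]]))  # [[17], [39]]
```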
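For item 9, the software manual entry has the actual generator; the sketch below is only one common way to build a strictly diagonally dominant matrix: fill the matrix with random values, then overwrite each diagonal entry so it exceeds that row's off-diagonal absolute sum. The value ranges and the use of random.uniform are my own assumptions, not necessarily what the real method does.

```python
import random

def diag_dominant_matrix(n, low=-10.0, high=10.0):
    """Generate an n x n matrix with random entries that is strictly
    diagonally dominant by rows."""
    A = [[random.uniform(low, high) for _ in range(n)] for _ in range(n)]
    for i in range(n):
        off_diag = sum(abs(A[i][j]) for j in range(n) if j != i)
        # Overwrite the diagonal so each row's diagonal entry dominates.
        A[i][i] = off_diag + random.uniform(1.0, 10.0)
    return A

for row in diag_dominant_matrix(4):
    print(row)
```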
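To make the first reading note concrete: the Frobenius norm is the square root of the sum of all squared entries, which is exactly the vector 2-norm of the flattened matrix. A quick sketch that checks the two agree:

```python
import math

def frobenius_norm(A):
    """Frobenius norm: square root of the sum of all squared entries."""
    return math.sqrt(sum(x * x for row in A for x in row))

A = [[1.0, 2.0], [3.0, 4.0]]
flattened = [x for row in A for x in row]
vec_2norm = math.sqrt(sum(x * x for x in flattened))
print(frobenius_norm(A))  # sqrt(30), about 5.477
print(vec_2norm)          # identical: the Frobenius norm is the 2-norm of vec(A)
```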
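And to spot-check the second reading note, this last sketch verifies ‖Ax‖ ≤ ‖A‖‖x‖ on random matrices and vectors using the infinity-norm pair; since the matrix infinity-norm is induced by the vector infinity-norm, the inequality must hold (the small tolerance only guards against floating-point rounding):

```python
import random

def vec_infnorm(x):
    """Vector infinity-norm: the largest |x_i|."""
    return max(abs(xi) for xi in x)

def matrix_infnorm(A):
    """Matrix infinity-norm: the maximum absolute row sum."""
    return max(sum(abs(x) for x in row) for row in A)

def matvec(A, x):
    """Matrix-vector product Ax."""
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

for _ in range(1000):
    A = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(3)]
    x = [random.uniform(-5, 5) for _ in range(3)]
    assert vec_infnorm(matvec(A, x)) <= matrix_infnorm(A) * vec_infnorm(x) + 1e-12
print("consistency inequality held on all random trials")
```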