Some unsolicited advice
- See if you are able to reproduce this list from memory. This will
help you during the exam. Think of this list as an index that will help
you quickly pinpoint the relevant bits of knowledge in order to answer a
question.
- Next, see if you are able to create short summaries for each topic,
again from memory.
- If you are able to do both of the above, there is no reason why you
shouldn’t do very well in the midterm.
Keywords
Background
- Scalars, vectors and matrices
- Dot and cross product
- Orthogonality
- Matrix inversion, transpose, etc.
- Eigenvalues and eigenvectors
- Performing differentiation
- Sums
- Lines and planes
- Polynomials
On imaging geometry
- Pinhole camera
- Projection matrix
- Intrinsic and extrinsic camera matrices
- Camera rotation and translation
- Expressing points in world/camera coordinates
- Homogeneous coordinates
- Consequences of projection
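As a quick self-test for the imaging-geometry topics above, it helps to project one point by hand. A minimal numpy sketch (all numbers here are illustrative, not from the course):

```python
import numpy as np

# Illustrative pinhole camera: focal length f, principal point (cx, cy),
# identity rotation, zero translation.
f, cx, cy = 800.0, 320.0, 240.0
K = np.array([[f, 0, cx],
              [0, f, cy],
              [0, 0, 1.0]])            # intrinsic matrix
R = np.eye(3)                          # extrinsic rotation
t = np.zeros((3, 1))                   # extrinsic translation
P = K @ np.hstack([R, t])              # 3x4 projection matrix

X = np.array([1.0, 2.0, 4.0, 1.0])     # world point in homogeneous coordinates
x = P @ X                              # homogeneous image point
u, v = x[0] / x[2], x[1] / x[2]        # perspective divide -> pixel coordinates
```

Working through the perspective divide by hand (here u = (800·1 + 320·4)/4 = 520) is a good check that you understand the roles of K, R, and t.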
On convolutions
- Cross-correlation (in 1, 2, and higher dimensions)
- Convolution (in 1, 2, and higher dimensions)
- Filters
- Common filters
- Averaging
- Summing
- Differentiation
- Gaussian
- Mean
- Variance
- Multi-variate Gaussian
- Boundary conditions
- Separable filters
- Use of SVD to check for separability
- Computational considerations
- How many multiplications/additions are needed to carry out a convolution?
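The SVD separability check and the multiplication count can be sketched in a few lines of numpy (the Sobel-like filter below is just an illustrative example of a separable filter):

```python
import numpy as np

# A 2-D filter K is separable iff it has rank 1 (one nonzero singular value);
# then K factors as an outer product of a column filter and a row filter.
K = np.outer([1, 2, 1], [1, 0, -1]).astype(float)  # Sobel-like, separable
U, S, Vt = np.linalg.svd(K)
rank_one = S[1] < 1e-10 * S[0]          # True -> separable
col = U[:, 0] * np.sqrt(S[0])           # vertical 1-D filter
row = Vt[0, :] * np.sqrt(S[0])          # horizontal 1-D filter

# Rough cost per output pixel: a full k x k convolution needs k*k
# multiplications, while two separable 1-D passes need only 2*k.
k = K.shape[0]
full_cost, sep_cost = k * k, 2 * k
```

For k = 3 the counts are 9 vs. 6 multiplications per pixel; the gap widens quickly for larger kernels, which is the usual exam-style computational argument for separability.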
On scale-space processing
- Gaussian pyramids
- Laplacian pyramids
- Image blending
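The pyramid constructions can be summarized as "blur, subsample, take differences". A minimal sketch, assuming a binomial filter as the Gaussian approximation (the kernel and the toy image are illustrative choices, not the course's exact ones):

```python
import numpy as np

def blur(img, kernel=np.array([1, 4, 6, 4, 1]) / 16.0):
    """Separable binomial approximation to a Gaussian blur ('same' borders)."""
    img = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, img)
    return img

def pyramids(img, levels=3):
    """Build a Gaussian pyramid and the corresponding Laplacian pyramid."""
    gauss = [img]
    for _ in range(levels - 1):
        gauss.append(blur(gauss[-1])[::2, ::2])   # blur, then subsample by 2
    lap = []
    for fine, coarse in zip(gauss[:-1], gauss[1:]):
        up = np.zeros_like(fine)
        up[::2, ::2] = coarse                     # zero-stuff, then blur to upsample
        lap.append(fine - blur(up) * 4)           # *4 compensates the stuffed zeros
    lap.append(gauss[-1])                         # coarsest Gaussian level kept as-is
    return gauss, lap

img = np.arange(256, dtype=float).reshape(16, 16)  # toy "image"
gauss, lap = pyramids(img, levels=3)               # levels of size 16, 8, 4
```

Image blending then amounts to combining the Laplacian pyramids of two images level by level under a mask and collapsing the result.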
On image gradients
- Filters for computing image gradients
- Gradient magnitude and direction
- Higher order derivatives
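A quick sketch of gradient magnitude and direction using central differences (one common gradient filter; `np.gradient` applies exactly this stencil in the interior, and the ramp image here is a toy example):

```python
import numpy as np

img = np.arange(25, dtype=float).reshape(5, 5)  # toy image: a linear ramp
gy, gx = np.gradient(img)                       # derivatives along y (rows), x (cols)
magnitude = np.hypot(gx, gy)                    # sqrt(gx^2 + gy^2)
direction = np.arctan2(gy, gx)                  # radians, measured from the +x axis
```

On this ramp gx = 1 and gy = 5 everywhere, so the magnitude is √26 at every pixel; being able to predict such values by hand is the point of the exercise.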
On frequency representation
- Uses of Fourier transformation
- Relationship between convolution and the Fourier domain
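The key relationship to remember is the convolution theorem: (circular) convolution in the spatial domain equals pointwise multiplication in the Fourier domain. A minimal 1-D numpy sketch with random toy signals:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(16)
h = rng.standard_normal(16)

# Circular convolution computed directly from the definition
direct = np.array([sum(f[k] * h[(n - k) % 16] for k in range(16))
                   for n in range(16)])

# Same result via the FFT: multiply spectra, transform back
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)).real
```

The two results agree to machine precision, which is also why large convolutions are often carried out in Fourier space.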
On model fitting
- Linear regression
- Models of the form \(y = mx + b\)
- Models of the form \(ax + by + c = 0\)
- Expressing linear regression problems as either \(\mathbf{A} \mathbf{x} = \mathbf{b}\) or \(\mathbf{A} \mathbf{x} = \mathbf{0}\)
- Solving systems of linear equations of the form \(\mathbf{A} \mathbf{x} = \mathbf{b}\) and \(\mathbf{A} \mathbf{x} = \mathbf{0}\)
- Extensions of linear regression to more complex models
- E.g., fitting planes, polynomials, etc.
- MSE loss
- Gradients of MSE loss
- An intuition about why we need gradients
- Under what conditions can we set the gradient to zero to solve for the optimum parameters?
- Under what conditions can we not find the optimum parameters by setting the gradient of the loss equal to \(0\)?
- Model complexity
- Number of model parameters
- Outliers
- What are these?
- How do these affect model fitting?
- Intuition behind losses used in robust least squares
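Both ways of setting up line fitting can be checked on toy data: the \(\mathbf{A}\mathbf{x} = \mathbf{b}\) form via least squares, and the homogeneous \(\mathbf{A}\mathbf{x} = \mathbf{0}\) form via the SVD null-space trick. A minimal sketch (the data points are illustrative):

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = 2.0 * xs + 1.0                            # noiseless toy line y = 2x + 1

# y = m x + b form: each point gives a row [x, 1]; solve A [m, b]^T = y
A = np.column_stack([xs, np.ones_like(xs)])
m, b = np.linalg.lstsq(A, ys, rcond=None)[0]   # recovers m = 2, b = 1

# a x + b y + c = 0 form: rows [x, y, 1]; the best-fit [a, b, c] (with
# unit norm) is the right singular vector of the smallest singular value
M = np.column_stack([xs, ys, np.ones_like(xs)])
abc = np.linalg.svd(M)[2][-1]                  # proportional to [2, -1, 1]
```

Note the homogeneous solution is only defined up to scale and sign, so only ratios like \(-a/b\) (the slope) are meaningful.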
Beyond linear regression
- Classification vs. regression
- Logistic regression
- Cross-entropy loss
- Softmax
- Perceptron and linear models
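Softmax and cross-entropy go together in multi-class classification, and both are short enough to reproduce from memory. A minimal sketch with an illustrative logit vector:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: subtract the max before exponentiating."""
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(p, y):
    """Cross-entropy loss for predicted probabilities p and true class index y."""
    return -np.log(p[y])

z = np.array([2.0, 1.0, 0.1])   # toy logits
p = softmax(z)                  # probabilities, sum to 1
loss = cross_entropy(p, 0)      # penalty for predicting class 0
```

The max-subtraction trick changes nothing mathematically but avoids overflow for large logits, a detail worth remembering.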
Model fitting (training)
- Gradient computation of MSE loss
- Why gradient descent?
- Why might gradient descent fail?
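Gradient descent on the MSE loss for a line fit is a good exercise to code from memory. A minimal sketch (the data, learning rate, and iteration count are illustrative choices):

```python
import numpy as np

# Loss L = mean((m*x + b - y)^2)
# dL/dm = mean(2*(m*x + b - y)*x),  dL/db = mean(2*(m*x + b - y))
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = 2.0 * xs + 1.0                 # toy data on the line y = 2x + 1

m = b = 0.0
lr = 0.05                           # step size; too large diverges, too small crawls
for _ in range(5000):
    r = m * xs + b - ys             # residuals
    m -= lr * np.mean(2 * r * xs)   # gradient step on m
    b -= lr * np.mean(2 * r)        # gradient step on b
```

Failure modes to keep in mind: a step size that is too large makes the iterates diverge, and for non-convex losses the method can stall in a poor local minimum, which is one answer to "why might gradient descent fail?".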
Neural networks
- Multi-Layer Perceptron (MLP) Networks
- Regression
- Binary classification
- Multi-class classification
- Understanding and computing model parameters
- Bias
- Convolutional Neural Networks (CNN)
- Convolutional layer
- Input tensor dimensions
- Output tensor dimensions
- Understanding and computing model parameters
- Computation/memory tradeoff between CNNs and MLPs
- Activation functions and their role
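Parameter counting for convolutional versus fully connected layers is a standard exam computation. A minimal sketch with illustrative layer sizes:

```python
# Conv layer: each of the c_out filters has k*k*c_in weights, plus one bias
# per output channel.
k, c_in, c_out = 3, 64, 128
conv_params = k * k * c_in * c_out + c_out      # 9*64*128 + 128 = 73,856

# A fully connected layer mapping the same h x w x c_in feature map to an
# h x w x c_out one needs one weight per input-output pair, plus biases.
h = w = 32
fc_params = (h * w * c_in) * (h * w * c_out) + (h * w * c_out)
```

The conv count is independent of the spatial size h x w (weight sharing), while the fully connected count grows with it on both sides; that contrast is the usual computation/memory tradeoff argument.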
Optical flow
- Brightness consistency
- Spatial coherence
- Small motions
- How to deal with large motions?
- Setting up a system of linear equations to solve for optical flow
- Computing spatial and temporal gradients
- Uses of optical flow
- Definition of optical flow
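The optical-flow linear system is worth setting up once in code: under brightness constancy and small motion, each pixel in a window contributes one equation \(I_x u + I_y v = -I_t\), and spatial coherence means one \((u, v)\) for the whole window. A minimal sketch with synthetic gradients (the window size and flow values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 25                                      # pixels in a 5x5 window
Ix = rng.standard_normal(n)                 # toy spatial gradients
Iy = rng.standard_normal(n)
true_uv = np.array([0.5, -0.25])            # toy ground-truth flow
It = -(Ix * true_uv[0] + Iy * true_uv[1])   # temporal gradients consistent with it

# Stack one row [Ix, Iy] per pixel and solve A [u, v]^T = -It by least squares
A = np.column_stack([Ix, Iy])
b = -It
uv = np.linalg.lstsq(A, b, rcond=None)[0]   # recovered (u, v)
```

When the window's gradients all point the same way, A^T A becomes (near-)singular and the system cannot pin down the flow, which is the aperture-problem intuition behind the spatial-coherence assumption.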