update notations
@@ -64,7 +64,7 @@ $d = \begin{bmatrix}
 u \\ v
 \end{bmatrix}$
 
-The solution is $d=(A^T A)^{-1} A^T b$
+The solution is $d=(A^\top A)^{-1} A^\top b$
 
 Lucas-Kanade flow:
 
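The closed-form solution in this hunk, $d=(A^\top A)^{-1} A^\top b$, can be sanity-checked numerically. A minimal NumPy sketch with made-up gradient data (all values illustrative, not from any real image):

```python
import numpy as np

# Stand-ins for the stacked spatial gradients A and temporal gradients b
# that appear in the Lucas-Kanade least-squares system.
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])

# Normal-equations form: d = (A^T A)^{-1} A^T b
d_normal = np.linalg.inv(A.T @ A) @ A.T @ b

# Numerically preferable: a least-squares solver gives the same d.
d_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

assert np.allclose(d_normal, d_lstsq)
```

In practice the solver form is preferred because explicitly inverting $A^\top A$ is less stable when the gradient matrix is ill-conditioned.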
@@ -170,7 +170,7 @@ E = \sum_{i=1}^n (a(x_i-\bar{x})+b(y_i-\bar{y}))^2 = \left\|\begin{bmatrix}x_1-\
 $$
 
 We want to find $N$ that minimizes $\|UN\|^2$ subject to $\|N\|^2= 1$
-Solution is given by the eigenvector of $U^T U$ associated with the smallest eigenvalue
+Solution is given by the eigenvector of $U^\top U$ associated with the smallest eigenvalue
 
 Drawbacks:
 
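The constrained minimization in this hunk (total least-squares line fitting) has the stated eigenvector solution; a small sketch with synthetic points (values illustrative):

```python
import numpy as np

# Rows of U are the centered points (x_i - xbar, y_i - ybar).
pts = np.array([[0.0, 0.1], [1.0, 0.9], [2.0, 2.1], [3.0, 2.9]])
U = pts - pts.mean(axis=0)

# Minimize ||U N||^2 subject to ||N|| = 1: take the eigenvector of U^T U
# for the smallest eigenvalue (eigh returns eigenvalues in ascending order).
eigvals, eigvecs = np.linalg.eigh(U.T @ U)
N = eigvecs[:, 0]   # unit normal (a, b) of the fitted line

assert np.isclose(np.linalg.norm(N), 1.0)
assert np.sum((U @ N) ** 2) <= np.sum((U @ eigvecs[:, 1]) ** 2)
```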
@@ -178,7 +178,7 @@ $$
 \begin{pmatrix}a\\b\\c\end{pmatrix} \times \begin{pmatrix}a'\\b'\\c'\end{pmatrix} = \begin{pmatrix}bc'-b'c\\ca'-c'a\\ab'-a'b\end{pmatrix}
 $$
 
-Let $h_1^T, h_2^T, h_3^T$ be the rows of $H$. Then
+Let $h_1^\top, h_2^\top, h_3^\top$ be the rows of $H$. Then
 
 $$
 x_i' × Hx_i=\begin{pmatrix}
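The component formula for the cross product in this hunk can be confirmed against `np.cross` (vectors are arbitrary illustrative values):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])   # (a, b, c)
v = np.array([4.0, 5.0, 6.0])   # (a', b', c')

manual = np.array([
    u[1] * v[2] - v[1] * u[2],  # bc' - b'c
    u[2] * v[0] - v[2] * u[0],  # ca' - c'a
    u[0] * v[1] - v[0] * u[1],  # ab' - a'b
])

assert np.allclose(manual, np.cross(u, v))
```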
@@ -186,15 +186,15 @@ x_i' × Hx_i=\begin{pmatrix}
 y_i' \\
 1
 \end{pmatrix} \times \begin{pmatrix}
-h_1^T x_i \\
-h_2^T x_i \\
-h_3^T x_i
+h_1^\top x_i \\
+h_2^\top x_i \\
+h_3^\top x_i
 \end{pmatrix}
 =
 \begin{pmatrix}
-y_i' h_3^T x_i−h_2^T x_i \\
-h_1^T x_i−x_i' h_3^T x_i \\
-x_i' h_2^T x_i−y_i' h_1^T x_i
+y_i' h_3^\top x_i−h_2^\top x_i \\
+h_1^\top x_i−x_i' h_3^\top x_i \\
+x_i' h_2^\top x_i−y_i' h_1^\top x_i
 \end{pmatrix}
 $$
 
@@ -206,15 +206,15 @@ x_i' × Hx_i=\begin{pmatrix}
 y_i' \\
 1
 \end{pmatrix} \times \begin{pmatrix}
-h_1^T x_i \\
-h_2^T x_i \\
-h_3^T x_i
+h_1^\top x_i \\
+h_2^\top x_i \\
+h_3^\top x_i
 \end{pmatrix}
 =
 \begin{pmatrix}
-y_i' h_3^T x_i−h_2^T x_i \\
-h_1^T x_i−x_i' h_3^T x_i \\
-x_i' h_2^T x_i−y_i' h_1^T x_i
+y_i' h_3^\top x_i−h_2^\top x_i \\
+h_1^\top x_i−x_i' h_3^\top x_i \\
+x_i' h_2^\top x_i−y_i' h_1^\top x_i
 \end{pmatrix}
 $$
 
@@ -222,9 +222,9 @@ Rearranging the terms:
 
 $$
 \begin{bmatrix}
-0^T &-x_i^T &y_i' x_i^T \\
-x_i^T &0^T &-x_i' x_i^T \\
-y_i' x_i^T &x_i' x_i^T &0^T
+0^\top &-x_i^\top &y_i' x_i^\top \\
+x_i^\top &0^\top &-x_i' x_i^\top \\
+-y_i' x_i^\top &x_i' x_i^\top &0^\top
 \end{bmatrix}
 \begin{bmatrix}
 h_1 \\
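The rearranged DLT system can be checked numerically: stacking the rows of $H$ into a vector $h$, the coefficient matrix built from the cross-product expansion annihilates $h$ for any true correspondence. Note the third row carries a minus sign on $y_i' x_i^\top$ (it comes from the term $-y_i' h_1^\top x_i$). Sketch with a random $H$ and an arbitrary point:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 3))          # arbitrary homography
x = np.array([0.5, -1.0, 1.0])           # homogeneous point x_i

xp = H @ x
xp = xp / xp[2]                          # normalize so x_i' = (x', y', 1)
xpr, ypr = xp[0], xp[1]

# Rows of the rearranged cross-product constraint.
r1 = np.concatenate([np.zeros(3), -x,            ypr * x])
r2 = np.concatenate([x,            np.zeros(3), -xpr * x])
r3 = np.concatenate([-ypr * x,     xpr * x,      np.zeros(3)])
A = np.vstack([r1, r2, r3])

h = H.reshape(-1)                        # stacked rows h_1, h_2, h_3
assert np.allclose(A @ h, 0.0)
```

Only two of the three rows are linearly independent, which is why each correspondence contributes two equations to the homography estimate.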
@@ -17,16 +17,16 @@ If we set the config for the first camera as the world origin and $[I|0]\begin{p
 Notice that $x'\cdot [t\times (Ry)]=0$
 
 $$
-x'^T E x_1 = 0
+x'^\top E x_1 = 0
 $$
 
 The matrix $E$ in this constraint is called the **Essential Matrix**.
 
 $E x$ is the epipolar line associated with $x$ ($l'=Ex$)
 
-$E^T x'$ is the epipolar line associated with $x'$ ($l=E^T x'$)
+$E^\top x'$ is the epipolar line associated with $x'$ ($l=E^\top x'$)
 
-$E e=0$ and $E^T e'=0$ ($x$ and $x'$ don't matter)
+$E e=0$ and $E^\top e'=0$ ($x$ and $x'$ don't matter)
 
 $E$ is singular (rank 2) and has five degrees of freedom.
 
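The epipolar constraint in this hunk can be verified with a toy pose: for $E=[t]_\times R$, a point seen as $x$ in camera 1 and $x'=Rx+t$ in camera 2 satisfies $x'^\top E x = 0$, and $E^\top e' = 0$ for the epipole $e' \propto t$. A sketch with made-up pose values:

```python
import numpy as np

# Skew-symmetric matrix so that skew(t) @ v == np.cross(t, v).
def skew(t):
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical relative pose: rotation about z, small translation.
th = 0.3
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.2, 0.0])
E = skew(t) @ R

X = np.array([0.5, -0.2, 2.0])   # 3D point in camera-1 coordinates
x = X                            # camera-1 ray (homogeneous, up to scale)
xp = R @ X + t                   # same point in camera-2 coordinates

assert np.isclose(xp @ E @ x, 0.0)
assert np.allclose(E.T @ t, 0.0)   # E^T e' = 0 with e' proportional to t
```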
@@ -35,13 +35,13 @@ $E$ is singular (rank 2) and has five degrees of freedom.
 If the calibration matrices $K$ and $K'$ are unknown, we can write the epipolar constraint in terms of unknown normalized coordinates:
 
 $$
-x'^T_{norm} E x_{norm} = 0
+x'^\top_{norm} E x_{norm} = 0
 $$
 
 where $x_{norm}=K^{-1} x$, $x'_{norm}=K'^{-1} x'$
 
 $$
-x'^T_{norm} E x_{norm} = 0\implies x'^T_{norm} Fx=0
+x'^\top_{norm} E x_{norm} = 0\implies x'^\top Fx=0
 $$
 
 where $F=K'^{-\top}EK^{-1}$ is the **Fundamental Matrix**.
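Substituting $x_{norm}=K^{-1}x$ and $x'_{norm}=K'^{-1}x'$ into the essential-matrix constraint gives $x'^\top (K'^{-\top} E K^{-1}) x = 0$, i.e. the fundamental-matrix constraint in pixel coordinates. A numeric sketch (intrinsics and pose are invented for illustration):

```python
import numpy as np

def skew(t):
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical intrinsics and relative pose.
K  = np.array([[500.0, 0, 320], [0, 500, 240], [0, 0, 1]])
Kp = np.array([[450.0, 0, 310], [0, 450, 230], [0, 0, 1]])
th = 0.2
R = np.array([[np.cos(th), 0, np.sin(th)],
              [0, 1, 0],
              [-np.sin(th), 0, np.cos(th)]])
t = np.array([0.5, 0.1, 0.0])

E = skew(t) @ R
F = np.linalg.inv(Kp).T @ E @ np.linalg.inv(K)   # F = K'^{-T} E K^{-1}

X  = np.array([0.3, -0.4, 3.0])   # 3D point in camera-1 frame
x  = K @ X                        # pixel coordinates in image 1 (homogeneous)
xp = Kp @ (R @ X + t)             # pixel coordinates in image 2

assert np.isclose(xp @ F @ x, 0.0, atol=1e-6)
```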
@@ -60,17 +60,17 @@ Properties of $F$:
 
 $F x$ is the epipolar line associated with $x$ ($l'=F x$)
 
-$F^T x'$ is the epipolar line associated with $x'$ ($l=F^T x'$)
+$F^\top x'$ is the epipolar line associated with $x'$ ($l=F^\top x'$)
 
-$F e=0$ and $F^T e'=0$
+$F e=0$ and $F^\top e'=0$
 
 $F$ is singular (rank two) and has seven degrees of freedom
 
 #### Estimating the fundamental matrix
 
-Given: correspondences $x=(x,y,1)^T$ and $x'=(x',y',1)^T$
+Given: correspondences $x=(x,y,1)^\top$ and $x'=(x',y',1)^\top$
 
-Constraint: $x'^T F x=0$
+Constraint: $x'^\top F x=0$
 
 $$
 (x',y',1)\begin{bmatrix}
@@ -95,7 +95,7 @@ F=U\begin{bmatrix}
 \sigma_1 & 0 & 0 \\
 0 & \sigma_2 & 0 \\
 0 & 0 & 0
-\end{bmatrix}V^T
+\end{bmatrix}V^\top
 $$
 
 ## Structure from Motion
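The rank-2 constraint in this hunk is enforced in practice by zeroing the smallest singular value of the linear estimate; a short sketch (the "estimated" matrix here is just random data standing in for an eight-point-algorithm output):

```python
import numpy as np

rng = np.random.default_rng(1)
F_est = rng.standard_normal((3, 3))   # stand-in for a linear estimate of F

# F = U diag(sigma1, sigma2, 0) V^T: zero the smallest singular value.
U, s, Vt = np.linalg.svd(F_est)
s[2] = 0.0
F = U @ np.diag(s) @ Vt

assert np.linalg.matrix_rank(F) == 2
```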
@@ -126,7 +126,7 @@ a_{21} & a_{22} & a_{23} & t_2 \\
 0 & 0 & 0 & 1
 \end{bmatrix}=\begin{bmatrix}
 A & t \\
-0^T & 1
+0^\top & 1
 \end{bmatrix}
 $$
 
@@ -160,10 +160,10 @@ The reconstruction is defined up to an arbitrary affine transformation $Q$ (12 d
 $$
 \begin{bmatrix}
 A & t \\
-0^T & 1
+0^\top & 1
 \end{bmatrix}\rightarrow\begin{bmatrix}
 A & t \\
-0^T & 1
+0^\top & 1
 \end{bmatrix}Q^{-1}, \quad \begin{pmatrix}X_j\\1\end{pmatrix}\rightarrow Q\begin{pmatrix}X_j\\1\end{pmatrix}
 $$
 
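The affine ambiguity in this hunk is easy to verify: replacing the camera by $[A\ t;\ 0^\top\ 1]\,Q^{-1}$ and every point by $Q(X_j, 1)^\top$ leaves all image measurements unchanged. A sketch with random (illustrative) camera, transformation, and point:

```python
import numpy as np

rng = np.random.default_rng(3)

# Affine camera in 4x4 form [A t; 0^T 1].
A, t = rng.standard_normal((3, 3)), rng.standard_normal(3)
cam = np.vstack([np.hstack([A, t[:, None]]), [0, 0, 0, 1.0]])

# Arbitrary affine transformation Q (invertible with probability 1).
B, s = rng.standard_normal((3, 3)), rng.standard_normal(3)
Q = np.vstack([np.hstack([B, s[:, None]]), [0, 0, 0, 1.0]])

Xh = np.append(rng.standard_normal(3), 1.0)   # homogeneous point (X_j, 1)

# cam Q^{-1} applied to Q Xh reproduces the original measurement.
assert np.allclose(cam @ Xh, (cam @ np.linalg.inv(Q)) @ (Q @ Xh))
```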
@@ -74,7 +74,7 @@ x\\y
 \end{pmatrix}
 $$
 
-To undo the rotation, we need to rotate the image by $-\theta$. This is equivalent to applying $R^T$ to the image.
+To undo the rotation, we need to rotate the image by $-\theta$. This is equivalent to applying $R^\top$ to the image.
 
 #### Affine transformation
 
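The equivalence in this hunk holds because rotation matrices are orthogonal: $R(\theta)^\top = R(\theta)^{-1} = R(-\theta)$. A two-line check:

```python
import numpy as np

# 2D rotation matrix by angle a.
def rot(a):
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

th = 0.7
assert np.allclose(rot(th).T, rot(-th))          # R^T rotates by -theta
assert np.allclose(rot(th).T @ rot(th), np.eye(2))
```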
@@ -96,7 +96,7 @@ Example: Linear classification models
 Find a linear function that separates the data.
 
 $$
-f(x) = w^T x + b
+f(x) = w^\top x + b
 $$
 
 [Linear classification models](http://cs231n.github.io/linear-classify/)
@@ -144,13 +144,13 @@ This is a convex function, so we can find the global minimum.
 The gradient is:
 
 $$
-\nabla_w||Xw-Y||^2 = 2X^T(Xw-Y)
+\nabla_w||Xw-Y||^2 = 2X^\top(Xw-Y)
 $$
 
 Setting the gradient to 0, we get:
 
 $$
-w = (X^T X)^{-1} X^T Y
+w = (X^\top X)^{-1} X^\top Y
 $$
 
 From the maximum likelihood perspective, we can also derive the same result.
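The normal-equations solution in this hunk can be checked by confirming that the stated gradient vanishes at it (random data for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((20, 3))   # design matrix
Y = rng.standard_normal(20)        # targets

# Closed form: w = (X^T X)^{-1} X^T Y
w = np.linalg.inv(X.T @ X) @ X.T @ Y

# The gradient 2 X^T (Xw - Y) is zero at the minimizer.
grad = 2 * X.T @ (X @ w - Y)
assert np.allclose(grad, 0.0, atol=1e-9)
```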
@@ -59,7 +59,7 @@ Suppose $k=1$, $e=l(f_1(x,w_1),y)$
 
 Example: $e=(f_1(x,w_1)-y)^2$
 
-So $h_1=f_1(x,w_1)=w^T_1x$, $e=l(h_1,y)=(y-h_1)^2$
+So $h_1=f_1(x,w_1)=w^\top_1x$, $e=l(h_1,y)=(y-h_1)^2$
 
 $$
 \frac{\partial e}{\partial w_1}=\frac{\partial e}{\partial h_1}\frac{\partial h_1}{\partial w_1}
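The chain rule in this hunk gives $\partial e/\partial w_1 = -2(y-h_1)\,x$ for $h_1 = w_1^\top x$; a finite-difference check with illustrative values:

```python
import numpy as np

x  = np.array([1.0, 2.0, 3.0])
w1 = np.array([0.5, -1.0, 0.25])
y  = 2.0

h1 = w1 @ x                      # forward pass h1 = w1^T x
grad = -2.0 * (y - h1) * x       # chain rule: (de/dh1) (dh1/dw1)

# Forward finite differences approximate the same gradient.
eps = 1e-6
num = np.array([
    (((y - (w1 + eps * np.eye(3)[i]) @ x) ** 2) - (y - h1) ** 2) / eps
    for i in range(3)
])
assert np.allclose(grad, num, atol=1e-3)
```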
@@ -20,7 +20,7 @@ Suppose $k=1$, $e=l(f_1(x,w_1),y)$
 
 Example: $e=(f_1(x,w_1)-y)^2$
 
-So $h_1=f_1(x,w_1)=w^T_1x$, $e=l(h_1,y)=(y-h_1)^2$
+So $h_1=f_1(x,w_1)=w^\top_1x$, $e=l(h_1,y)=(y-h_1)^2$
 
 $$
 \frac{\partial e}{\partial w_1}=\frac{\partial e}{\partial h_1}\frac{\partial h_1}{\partial w_1}