diff --git a/content/CSE5519/CSE5519_E1.md b/content/CSE5519/CSE5519_E1.md
index 554e574..fd12677 100644
--- a/content/CSE5519/CSE5519_E1.md
+++ b/content/CSE5519/CSE5519_E1.md
@@ -318,7 +318,7 @@
 $$
 p_s\sim K\hat{T}_{t\to s}\hat{D}_t(p_t)K^{-1}p_t
 $$
-The we use spacial transformer network to sample continuous pixel coordinates.
+Then we use a spatial transformer network to sample continuous pixel coordinates.
 
 ### Compensation for PoseNet & DepthNet
 
@@ -336,7 +336,7 @@ Since it's inevitable that there are some moving objects or occlusions between t
 This is a method that use pair of images as Left and Right eye to estimate depth.
 Increased consistency by flipping the right-left relation.
 
-**Intuition:** Given a calibarated pair of binocular cameras, if we cna learn a function that is able to reconstruct one image from the other, then we have learned something about the depth of the scene.
+**Intuition:** Given a calibrated pair of binocular cameras, if we can learn a function that is able to reconstruct one image from the other, then we have learned something about the depth of the scene.
 
 ### DispNet and Scene flow
 
diff --git a/public/CSE5519/Left-right_consistency_Network.png.png b/public/CSE5519/Left-right_consistency_Network.png.png
new file mode 100644
index 0000000..fc8ef2c
Binary files /dev/null and b/public/CSE5519/Left-right_consistency_Network.png.png differ