# CSE5519 Advances in Computer Vision (Topic C: 2024 - 2025: Neural Rendering)

## COLMAP-Free 3D Gaussian Splatting

[link to the paper](https://arxiv.org/pdf/2312.07504)

The paper proposes a novel 3D Gaussian Splatting (3DGS) framework that eliminates the need for COLMAP-based camera pose estimation and bundle adjustment.
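Below is a minimal PyTorch-style sketch of the core idea: estimating a new frame's camera pose photometrically against the current Gaussians, so no COLMAP poses are required. This is not the authors' implementation; CF-3DGS works progressively over video frames and also grows and refines the Gaussians per frame. The `render_fn` placeholder, the SE(3) parameterization, and the hyperparameters here are all assumptions for illustration.

```python
# Illustrative sketch only: shows photometric pose estimation against a
# differentiable renderer, the core idea behind COLMAP-free tracking.
import torch

def se3_exp(xi: torch.Tensor) -> torch.Tensor:
    """Map a 6-vector (omega, v) in the Lie algebra to a 4x4 SE(3) matrix."""
    twist = torch.zeros(4, 4, dtype=xi.dtype, device=xi.device)
    twist[0, 1], twist[0, 2] = -xi[2], xi[1]   # skew-symmetric part from omega
    twist[1, 0], twist[1, 2] = xi[2], -xi[0]
    twist[2, 0], twist[2, 1] = -xi[1], xi[0]
    twist[:3, 3] = xi[3:]                      # translation part
    return torch.matrix_exp(twist)

def track_frame(render_fn, gaussians, frame, prev_pose, steps=200, lr=1e-3):
    """Estimate a new frame's camera pose with the Gaussians held fixed.

    `render_fn(gaussians, pose) -> image` stands in for a differentiable
    Gaussian rasterizer; it is a placeholder, not a real library call.
    """
    xi = torch.zeros(6, requires_grad=True)    # pose update, starts at identity
    optimizer = torch.optim.Adam([xi], lr=lr)
    for _ in range(steps):
        pose = se3_exp(xi) @ prev_pose         # perturb the previous frame's pose
        loss = (render_fn(gaussians, pose) - frame).abs().mean()  # photometric L1
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return se3_exp(xi.detach()) @ prev_pose    # pose estimate for the new frame
```

Roughly, once a frame's pose is tracked this way, the frame can be used to grow and refine the Gaussian set before the next frame is processed, which is how a sequential, COLMAP-free pipeline accumulates the scene.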
> [!TIP]
>
> This paper presents a novel 3D Gaussian Splatting framework that eliminates the need for COLMAP for camera pose estimation and bundle adjustment.
>
> Inspired by point-map construction, the authors use Gaussian splatting to reconstruct the 3D scene directly from unposed images. I wonder how this method might contribute to higher-resolution reconstruction or further improvements. Could the original COLMAP pipeline paired with traditional NeRF methods achieve comparable results?
# CSE5519 Advances in Computer Vision (Topic F: 2025: Representation Learning)

## Can Generative Models Improve Self-Supervised Representation Learning?

[link to the paper](https://arxiv.org/pdf/2403.05966)

### Novelty in SSL with Generative Models

- Use generative models to synthesize training data for self-supervised representation learning models.
- Use generative augmentation: produce new views of the original data with a generative model, e.g., by perturbing it with Gaussian noise or other augmentation techniques (see the sketch after this list).
- Combining standard augmentations such as flipping, cropping, and color jittering with generative augmentation can further improve the performance of self-supervised representation learning models.
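Below is a minimal sketch of how a generative view could serve as one side of a SimCLR-style positive pair. The `encoder_g`/`generator` interface and the latent noise scale are assumptions for illustration; the paper evaluates actual conditional generative models rather than this toy interface.

```python
# Illustrative sketch only: `encoder_g` / `generator` are hypothetical
# stand-ins for a conditional generative model's encoder and decoder.
import torch
import torch.nn.functional as F

def generative_view(encoder_g, generator, image, noise_std=0.1):
    """Make a synthetic positive view: encode, perturb the latent, decode."""
    z = encoder_g(image)                          # latent code of the source image
    z_aug = z + noise_std * torch.randn_like(z)   # Gaussian perturbation in latent space
    return generator(z_aug)                       # decoded image: the generative view

def nt_xent(h1, h2, tau=0.5):
    """SimCLR-style NT-Xent loss; positives are (h1[i], h2[i])."""
    n = h1.size(0)
    h = F.normalize(torch.cat([h1, h2]), dim=1)   # 2n x d unit embeddings
    sim = h @ h.t() / tau                         # scaled cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=h.device)
    sim = sim.masked_fill(mask, float("-inf"))    # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(h.device)
    return F.cross_entropy(sim, targets)

# Usage: pair a standard augmentation with a generative view of the same batch.
# v1 = standard_augment(batch)                      # flips, crops, color jitter
# v2 = generative_view(encoder_g, generator, batch) # generative augmentation
# loss = nt_xent(backbone(v1), backbone(v2))
```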
> [!TIP]
>
> This paper shows that using generative models to synthesize training data can improve the performance of self-supervised representation learning models. The key seems to be generative augmentation: generating new data from the original data with a generative model.
>
> However, both representation learning and generative modeling suffer from hallucinations. I wonder whether these hallucinations would be reinforced, and whether the biases of the generative model would propagate into the representation learning model through generative augmentation.