updates
@@ -55,6 +55,27 @@ Close-loop planning:

- At each state, iteratively build a search tree to evaluate actions, select the best-first action, and then move to the next state.

Use the model as a simulator to evaluate actions.

#### MCTS Algorithm Overview

1. Selection: Select the best-first action from the search tree
2. Expansion: Add a new node to the search tree
3. Simulation: Simulate the next state from the selected action
4. Backpropagation: Update the values of the nodes in the search tree

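The four steps above can be sketched as a minimal, self-contained loop. Everything problem-specific below (the toy "reach exactly 10" game, its `actions`/`step`/`rollout` functions and the reward) is an illustrative assumption, not from the notes:

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = {}   # action -> child Node
        self.visits = 0
        self.value = 0.0     # sum of rollout rewards seen through this node

def actions(state):
    # Toy game: add 1 or 2 per move; the game ends once state >= 10.
    return [1, 2] if state < 10 else []

def step(state, action):
    # Model used as a simulator for state transitions.
    return state + action

def rollout(state):
    # Simulation: play randomly to a terminal state; reward 1 for hitting 10 exactly.
    while actions(state):
        state = step(state, random.choice(actions(state)))
    return 1.0 if state == 10 else 0.0

def select(node, c=1.4):
    # Selection: descend through fully expanded nodes via a UCT-style tree policy.
    while node.children and all(a in node.children for a in actions(node.state)):
        node = max(node.children.values(),
                   key=lambda ch: ch.value / ch.visits
                   + c * math.sqrt(math.log(node.visits) / ch.visits))
    return node

def expand(node):
    # Expansion: attach one untried action as a new child (no-op at terminal nodes).
    untried = [a for a in actions(node.state) if a not in node.children]
    if not untried:
        return node
    a = random.choice(untried)
    node.children[a] = Node(step(node.state, a), parent=node)
    return node.children[a]

def backpropagate(node, reward):
    # Backpropagation: update statistics along the path back to the root.
    while node is not None:
        node.visits += 1
        node.value += reward
        node = node.parent

def mcts(root_state, iterations=500):
    root = Node(root_state)
    for _ in range(iterations):
        leaf = expand(select(root))
        backpropagate(leaf, rollout(leaf.state))
    # "Robust" decision policy: act with the most-visited root action.
    return max(root.children, key=lambda a: root.children[a].visits)
```

From state 9, adding 1 reaches 10 (reward 1) while adding 2 overshoots (reward 0), so the search concentrates its visits on the first action.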
#### Policies in MCTS

Tree policy: how to traverse the tree and select/expand nodes during the search (e.g., UCT).

Decision policy: how to choose the action actually executed at the root once the search budget is exhausted:

- Max (highest weight)
- Robust (most visits)
- Max-Robust (max of the two)

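A sketch of the three decision policies over hypothetical root-child statistics (the `stats` numbers, and reading "weight" as average value, are assumptions):

```python
# Toy root statistics: action -> (total value, visit count); numbers are illustrative.
stats = {"left": (9.0, 10), "right": (6.0, 30)}

def max_action(stats):
    # Max: highest average value (weight).
    return max(stats, key=lambda a: stats[a][0] / stats[a][1])

def robust_action(stats):
    # Robust: most visits.
    return max(stats, key=lambda a: stats[a][1])

def max_robust_action(stats):
    # Max-Robust: an action maximizing both criteria, if one exists;
    # otherwise return None (in practice: continue searching until they agree).
    a_max, a_rob = max_action(stats), robust_action(stats)
    return a_max if a_max == a_rob else None
```

Here the two criteria disagree ("left" has the higher average, "right" the most visits), so Max-Robust has no winner yet.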
#### Upper Confidence Bound on Trees (UCT)
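As standard background (the UCB1-on-trees rule of Kocsis and Szepesvári, stated here since the notes give no formula), the UCT tree policy selects the child $j$ of the current node maximizing

```latex
UCT_j = \bar{X}_j + c \sqrt{\frac{2 \ln n}{n_j}}
```

where $\bar{X}_j$ is the average reward of child $j$, $n_j$ its visit count, $n$ the parent's visit count, and $c > 0$ an exploration constant trading off exploitation (first term) against exploration (second term).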
#### Continuous Case: Trajectory Optimization
@@ -181,7 +181,7 @@ Regenerating codes, Magic #2:

- Both are decreasing functions of $d$.
- $\Rightarrow$ less repair bandwidth by contacting more nodes; minimized at $d = n - 1$.

### Constructing Minimum bandwidth regenerating (MBR) codes from Maximum distance separable (MDS) codes

Observation: For MBR code with parameters $n, k, d$ and $\beta = 1$, one can construct MBR with parameters $n, k, d$ and any $\beta$.
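A sketch of why the observation holds, assuming the usual striping argument: split a file of size $\beta B$ into $\beta$ stripes of size $B$, and apply the $\beta = 1$ code to each stripe independently. Every quantity scales linearly while the node parameters stay fixed:

```latex
B' = \beta B, \qquad \alpha' = \beta \alpha, \qquad \beta' = \beta \cdot 1 = \beta,
\qquad \text{with } n, k, d \text{ unchanged.}
```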

62 content/CSE5313/CSE5313_L16.md Normal file
@@ -0,0 +1,62 @@
# CSE5313 Coding and Information Theory for Data Science (Lecture 16: Exam Review)
## Exam Review
### Information flow graph
Parameters:

- $n$ is the number of nodes in the initial system (before any node leaves/crashes).
- $k$ is the number of nodes required to reconstruct the file.
- $d$ is the number of nodes required to repair a failed node.
- $\alpha$ is the storage at each node.
- $\beta$ is the edge capacity **for repair**.
- $B$ is the file size.

#### Graph construction
Source: System admin.

Sink: Data collector.

Nodes: Storage servers.

Edges: Represent transmission of information. (The number of $\mathbb{F}_q$ elements transmitted is the edge weight.)

Main observation:

- The file consists of $B$ elements of $\mathbb{F}_q$, all of which must "flow" from the source (system admin) to the sink (data collector); the sink contacts $k$ servers to reconstruct it.
- Any cut $(U,\overline{U})$ which separates the source from the sink must have capacity at least $B$.

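The cut observation can be checked by brute force on a tiny example. The graph below is hypothetical (a system read through $k = 2$ storage nodes, each split into an "in"/"out" pair of capacity $\alpha$, with $B = 4$); the claim illustrated is that every source-sink cut has capacity at least $B$:

```python
from itertools import combinations

# Hypothetical toy information-flow graph; the numbers are illustrative.
alpha, B = 2, 4
INF = float("inf")
edges = {
    ("s", "x1_in"): INF, ("s", "x2_in"): INF,                # admin uploads the file
    ("x1_in", "x1_out"): alpha, ("x2_in", "x2_out"): alpha,  # per-node storage limit
    ("x1_out", "t"): INF, ("x2_out", "t"): INF,              # collector reads k=2 nodes
}
nodes = {v for e in edges for v in e} - {"s", "t"}

def min_cut():
    # Enumerate every source side U containing s, and return the smallest
    # total capacity of edges crossing from U to its complement.
    best = INF
    for r in range(len(nodes) + 1):
        for side in combinations(sorted(nodes), r):
            U = {"s"} | set(side)
            cap = sum(c for (u, v), c in edges.items()
                      if u in U and v not in U)
            best = min(best, cap)
    return best
```

Here the minimum cut is exactly $2\alpha = 4 = B$, i.e., the bound is tight for this toy configuration.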
### Bounds for local recoverable codes
#### Turan's Lemma
Let $G$ be a directed graph with $n$ vertices. Then there exists an induced directed acyclic subgraph (DAG) of $G$ on at least $\frac{n}{1+\operatorname{avg}_i(d^{out}_i)}$ vertices, where $d^{out}_i$ is the out-degree of vertex $i$.
#### Bound 2

Consider the induced acyclic subgraph $G_U$ of $G$ on a node set $U$.

By the definition of an $r$-locally recoverable code, each leaf node in $G_U$ must be determined by other nodes in $G\setminus G_U$, so we can safely remove all leaf nodes in $G_U$ and the remaining graph is still a DAG.

Let $N\subseteq [n]\setminus U$ be the set of neighbors of $U$ in $G$.

$|N|\leq r|U|\leq k-1$.

Complete $N$ to a set of size $k-1$ by adding elements not in $U$.

$|C_N|\leq q^{k-1}$

Also $|N\cup U'|=k-1+\lfloor\frac{k-1}{r}\rfloor$

All nodes in $G_U$ can be recovered from nodes in $N$.

So $|C_{N\cup U'}|=|C_N|\leq q^{k-1}$.

Therefore, $\max\{|I|:|C_I|<q^k,\ I\subseteq [n]\}\geq |N\cup U'|=k-1+\lfloor\frac{k-1}{r}\rfloor$.

Using the reduction lemma, we have $d= n-\max\{|I|:|C_I|<q^k,\ I\subseteq [n]\}\leq n-(k-1)-\lfloor\frac{k-1}{r}\rfloor=n-k-\lceil\frac{k}{r}\rceil +2$.
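The final equality in the bound uses the integer identity $\lfloor\frac{k-1}{r}\rfloor = \lceil\frac{k}{r}\rceil - 1$ (valid for integers $k \geq 1$, $r \geq 1$):

```latex
n - (k-1) - \left\lfloor \frac{k-1}{r} \right\rfloor
  = n - k + 1 - \left( \left\lceil \frac{k}{r} \right\rceil - 1 \right)
  = n - k - \left\lceil \frac{k}{r} \right\rceil + 2 .
```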
### Reed-Solomon code
@@ -18,4 +18,5 @@ export default {
CSE5313_L13: "CSE5313 Coding and information theory for data science (Lecture 13)",
CSE5313_L14: "CSE5313 Coding and information theory for data science (Lecture 14)",
CSE5313_L15: "CSE5313 Coding and information theory for data science (Lecture 15)",
CSE5313_L16: "CSE5313 Coding and information theory for data science (Exam Review)",
}
@@ -1,2 +1,21 @@
# CSE5519 Advances in Computer Vision (Topic A: 2023 - 2024: Semantic Segmentation)
## Segment Anything
[link to the paper](https://arxiv.org/pdf/2304.02643)
### Novelty in Segment Anything
Brute-force approach with large-scale training data (400x more than existing segmentation datasets)
#### Dataset construction
- Model-assisted manual annotation
- Semi-automatic annotation
- Automatic annotation (predict masks for a 32x32 grid of point prompts)

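The automatic stage can be pictured as prompting the model at every point of a regular grid over the image; a sketch of generating the 32x32 point prompts (the normalized-coordinate convention is an assumption about the interface):

```python
def point_grid(n=32):
    """Return n*n evenly spaced (x, y) point prompts in normalized [0, 1] coords,
    one at the center of each grid cell."""
    return [((i + 0.5) / n, (j + 0.5) / n)
            for j in range(n) for i in range(n)]
```

Each of the 1024 points is fed to the model as a prompt, and the predicted masks are filtered and deduplicated downstream.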
> [!TIP]
>
> This paper shows a remarkable breakthrough in semantic segmentation via a brute-force approach using large-scale training data. The authors use a transformer-based encoder to produce the final segmentation map.
>
> I'm really interested in the scalability of the model. Is there an approach to reduce the training-data size or the model size with comparable performance via distillation or other techniques?