
Lecture 2

Divide and conquer

Review of CSE 247

  1. Divide the problem into (generally equal) smaller subproblems
  2. Recursively solve the subproblems
  3. Combine the solutions of subproblems to get the solution of the original problem
    • Examples: Merge Sort, Binary Search
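
For instance, merge sort instantiates this template directly; the sketch below is a minimal illustration (not the exact CSE 247 version).

def merge_sort(a):
    # base case: a list of length 0 or 1 is already sorted
    if len(a) <= 1:
        return a
    # 1. divide into two halves
    mid = len(a) // 2
    # 2. recursively solve each half
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # 3. combine: merge the two sorted halves in linear time
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]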

Recurrence

Master Method:


T(n)=aT(\frac{n}{b})+\Theta(f(n))
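
For reference, the standard cases (with a \geq 1, b > 1), which are what solve the recurrences below:

T(n)=\begin{cases}
\Theta(n^{\log_b a}) & \text{if } f(n)=O(n^{\log_b a-\epsilon}) \text{ for some } \epsilon>0\\
\Theta(n^{\log_b a}\log n) & \text{if } f(n)=\Theta(n^{\log_b a})\\
\Theta(f(n)) & \text{if } f(n)=\Omega(n^{\log_b a+\epsilon}) \text{ and } af(n/b)\leq cf(n) \text{ for some } c<1
\end{cases}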

Example 1: Multiplying 2 numbers

Normal Algorithm:

def multiply(x, y):
    # repeated addition: add x to the result y times
    p = 0
    for _ in range(y):
        p += x
    return p

Divide and conquer approach:

def multiply(x, y):
    # n: bit length of the larger operand
    n = max(x.bit_length(), y.bit_length())
    if n <= 1:
        return x * y
    half = n // 2
    # split each operand into a high part and a low part of `half` bits
    xh, xl = x >> half, x & ((1 << half) - 1)
    yh, yl = y >> half, y & ((1 << half) - 1)
    # x*y = xh*yh*2^(2*half) + (xh*yl + yh*xl)*2^half + xl*yl
    return ((multiply(xh, yh) << (2 * half))
            + ((multiply(xh, yl) + multiply(yh, xl)) << half)
            + multiply(xl, yl))

T(n)=4T(n/2)+\Theta(n)=\Theta(n^2)

Not a useful optimization

But,


multiply(xh,yl)+multiply(yh,xl)=multiply(xh,yh)+multiply(xl,yl)-multiply(xh-xl,yh-yl)
def multiply(x, y):
    n = max(x.bit_length(), y.bit_length())
    if n <= 1:
        return x * y
    half = n // 2
    xh, xl = x >> half, x & ((1 << half) - 1)
    yh, yl = y >> half, y & ((1 << half) - 1)
    zhh = multiply(xh, yh)
    zll = multiply(xl, yl)
    # middle term xh*yl + yh*xl = zhh + zll - (xh-xl)*(yh-yl);
    # recurse on absolute values and restore the sign so arguments stay non-negative
    sign = 1 if (xh >= xl) == (yh >= yl) else -1
    zm = zhh + zll - sign * multiply(abs(xh - xl), abs(yh - yl))
    return (zhh << (2 * half)) + (zm << half) + zll

T(n)=3T(n/2)+\Theta(n)=\Theta(n^{\log_2 3})\approx \Theta(n^{1.58})
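
A quick sanity check of the sketch above against Python's built-in multiplication (the 2^64 bound is arbitrary, chosen here just for illustration):

import random
for _ in range(100):
    a, b = random.randrange(1 << 64), random.randrange(1 << 64)
    assert multiply(a, b) == a * b  # matches the built-in product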

Example 2: Closest Pairs

Input: P is a set of n points in the plane. p_i=(x_i,y_i)


d(p_i,p_j)=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2}

Goal: Find the distance between the closest pair of points.

Naive algorithm: iterate over all pairs (\Theta(n^2)).

Divide and conquer algorithm:

Preprocessing: Sort P by x coordinate to get P_x.

Base case:

  • 1 point: closest d = inf
  • 2 points: closest d = d(p_1,p_2)

Divide Step:

Compute the midpoint of P_x by x coordinate and split into the left half Q and the right half R.

Recursive step:

  • d_l: the closest-pair distance in Q
  • d_r: the closest-pair distance in R

Combine step:

Compute d_c, the smallest distance between a pair with one point in Q and the other in R.

return min(d_c,d_l,d_r)

Total runtime:


T(n)=2T(n/2)+\Theta(n^2)=\Theta(n^2)

Still no change.

Important insight: we can reduce the number of cross-pair checks in the combine step.

Lemma: Let \delta=\min\{d_l,d_r\}. Any \delta\times\delta square contains at most 4 points that are pairwise at least \delta apart. (Split the square into four \delta/2\times\delta/2 cells; each cell has diameter \delta/\sqrt{2}<\delta, so it can hold at most one point.)

A better algorithm:

  1. Divide P_x into 2 halves using the mid point
  2. Recursively compute d_l and d_r, and take \delta=min(d_l,d_r).
  3. Filter points into the y-strip: points whose x coordinate lies within (mid_x-\delta,mid_x+\delta)
  4. Sort the y-strip by y coordinate. For every point p, scan the strip in sorted order starting at p and stop at the first point whose y coordinate exceeds p_y+\delta
# d is the distance function
def closestP(P, d):
    Px = sorted(P, key=lambda p: p[0])
    def closestPRec(P):
        n = len(P)
        if n == 1:
            return float('inf')
        if n == 2:
            return d(P[0], P[1])
        Q, R = P[:n//2], P[n//2:]
        midx = R[0][0]
        dl, dr = closestPRec(Q), closestPRec(R)
        dc = min(dl, dr)
        # y-strip: points within dc of the dividing line x = midx
        ys = [p for p in P if midx - dc < p[0] < midx + dc]
        ys.sort(key=lambda p: p[1])  # sort the strip by y coordinate
        yn = len(ys)
        # each point is compared with only a constant number of later points,
        # so this double loop is O(n) overall
        for i in range(yn):
            for j in range(i + 1, yn):
                if ys[j][1] - ys[i][1] >= dc:
                    break
                dc = min(dc, d(ys[i], ys[j]))
        return dc
    return closestPRec(Px)

Runtime analysis:


T(n)=2T(n/2)+\Theta(n\log n)=\Theta(n\log^2 n)

We can do even better by presorting Y

  1. Divide P_x into 2 halves using the mid point
  2. Recursively compute d_l and d_r, and take \delta=min(d_l,d_r).
  3. Build the y-strip (points whose x coordinate lies within (mid_x-\delta,mid_x+\delta)) with a single linear pass over the presorted P_y, splitting P_y as we recurse so that no sorting is needed in the combine step
# d is the distance function
def closestP(P, d):
    Px = sorted(P, key=lambda p: p[0])
    Py = sorted(P, key=lambda p: p[1])
    def closestPRec(Px, Py):
        n = len(Px)
        if n == 1:
            return float('inf')
        if n == 2:
            return d(Px[0], Px[1])
        Q, R = Px[:n//2], Px[n//2:]
        midx = R[0][0]
        # split the y-sorted list to match Q and R (assumes points are distinct tuples),
        # so both halves stay sorted by y without re-sorting
        Qset = set(Q)
        Qy = [p for p in Py if p in Qset]
        Ry = [p for p in Py if p not in Qset]
        dl, dr = closestPRec(Q, Qy), closestPRec(R, Ry)
        dc = min(dl, dr)
        # y-strip: already in y order, built by one linear pass over Py
        ys = [p for p in Py if midx - dc < p[0] < midx + dc]
        yn = len(ys)
        # each point is compared with only a constant number of later points,
        # so this double loop is O(n) overall
        for i in range(yn):
            for j in range(i + 1, yn):
                if ys[j][1] - ys[i][1] >= dc:
                    break
                dc = min(dc, d(ys[i], ys[j]))
        return dc
    return closestPRec(Px, Py)

Runtime analysis:


T(n)=2T(n/2)+\Theta(n)=\Theta(n\log n)
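
A small usage example, supplying the Euclidean distance from the definition above (the point set is made up for illustration):

import math

def euclid(p, q):
    # d(p_i, p_j) = sqrt((x_i - x_j)^2 + (y_i - y_j)^2)
    return math.hypot(p[0] - q[0], p[1] - q[1])

points = [(0, 0), (5, 4), (3, 1), (9, 6), (3.5, 1.5)]
print(closestP(points, euclid))  # ~0.707, from the pair (3, 1) and (3.5, 1.5)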

In-person lectures


T(n)=aT(n/b)+f(n)

a is the number of subproblems, n/b is the size of each subproblem, and f(n) is the cost of the divide and combine steps.

Example 3: Max Contiguous Subsequence Sum (MCSS)

Given: array of integers (positive or negative), S=[s_1,s_2,...,s_n]

Return: max\{\sum_{k=i}^{j} s_k \mid 1\leq i\leq j\leq n\}

Trivial solution:

Brute force over every (i,j) pair, summing each candidate subsequence: O(n^3).

A bit better solution:

O(n^2): use prefix sums so each subsequence sum costs O(1) instead of O(n); a sketch follows.
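
A sketch of that O(n^2) idea (the name MCSS_quadratic is just for illustration): precompute prefix sums once, so each candidate subsequence sum becomes an O(1) subtraction.

def MCSS_quadratic(S):
    # pfx[i] = sum(S[:i]), so sum(S[i:j]) = pfx[j] - pfx[i]
    pfx = [0]
    for x in S:
        pfx.append(pfx[-1] + x)
    best = S[0]
    # try all Theta(n^2) half-open ranges [i, j)
    for i in range(len(S)):
        for j in range(i + 1, len(S) + 1):
            best = max(best, pfx[j] - pfx[i])
    return best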

Divide and conquer solution.

def MCSS(S):
    def MCSSMid(S, i, j, mid):
        # trivial combine: try every subsequence that starts at some l < mid
        # and ends at some r >= mid (i.e. crosses the midpoint)
        res = float('-inf')
        for l in range(i, mid):
            curS = 0
            for r in range(l, j):
                curS += S[r]
                if r >= mid:
                    res = max(res, curS)
        return res
    def MCSSRec(i, j):
        # half-open range S[i:j]
        if j - i == 1:
            return S[i]
        mid = (i + j) // 2
        L, R = MCSSRec(i, mid), MCSSRec(mid, j)
        C = MCSSMid(S, i, j, mid)
        return max([L, C, R])
    return MCSSRec(0, len(S))

If MCSSMid(S,i,j,mid) uses the trivial (brute-force) solution above, the running time is:


T(n)=2T(n/2)+O(n^2)=\Theta(n^2)

and we have gained nothing.

Observation: any contiguous subsequence that starts in the left half and ends in the right half can be split at the midpoint as sum(S[i:j])=sum(S[i:mid])+sum(S[mid:j]).

Let LS be the largest sum of a subsequence of the left half that ends at its last element (index mid-1), and RS be the largest sum of a subsequence of the right half that starts at its first element (index mid).

Lemma: The largest sum of a subsequence that crosses the midpoint is LS+RS.

Proof:

By contradiction. Let x=LS+RS, and assume for the sake of contradiction that some subsequence crossing the midpoint has sum y=L'+R'>x, where L' is the sum of its left part and R' the sum of its right part.

Let z=LS+R'. Since LS\geq L' by the definition of LS, z\geq y. Similarly, RS\geq R', so x=LS+RS\geq LS+R'=z\geq y, which contradicts y>x.

The optimized MCSSMid, which computes LS and RS in a single linear pass, is as follows:

def MCSS(S):
    def MCSSMid(S, i, j, mid):
        # LS: largest sum of a subsequence ending at index mid-1
        cl, LS = 0, float('-inf')
        for l in range(mid - 1, i - 1, -1):
            cl += S[l]
            LS = max(LS, cl)
        # RS: largest sum of a subsequence starting at index mid
        cr, RS = 0, float('-inf')
        for r in range(mid, j):
            cr += S[r]
            RS = max(RS, cr)
        return LS + RS
    def MCSSRec(i, j):
        if j - i == 1:
            return S[i]
        mid = (i + j) // 2
        L, R = MCSSRec(i, mid), MCSSRec(mid, j)
        C = MCSSMid(S, i, j, mid)
        return max([L, C, R])
    return MCSSRec(0, len(S))

The running time is:


T(n)=2T(n/2)+O(n)=\Theta(n\log n)

Strengthening the recursion (each call returns extra information so the combine step is O(1)):

def MCSS(S):
    def MCSSRec(i, j):
        # returns (best, best prefix sum, best suffix sum, total sum) of S[i:j]
        if j - i == 1:
            return S[i], S[i], S[i], S[i]
        mid = (i + j) // 2
        L, lp, ls, sl = MCSSRec(i, mid)
        R, rp, rs, sr = MCSSRec(mid, j)
        return max([L, R, ls + rp]), max(lp, sl + rp), max(rs, sr + ls), sl + sr
    return MCSSRec(0, len(S))[0]

Precomputed version (range sums come from a suffix-sum array computed once up front):

def MCSS(S):
    n = len(S)
    # sfx[i] = sum(S[i:]), so sum(S[i:j]) = sfx[i] - sfx[j]
    sfx = [0] * (n + 1)
    for i in range(n - 1, -1, -1):
        sfx[i] = sfx[i + 1] + S[i]
    def MCSSRec(i, j):
        # returns (best, best prefix sum, best suffix sum) of S[i:j]
        if j - i == 1:
            return S[i], S[i], S[i]
        mid = (i + j) // 2
        L, lp, ls = MCSSRec(i, mid)
        R, rp, rs = MCSSRec(mid, j)
        return (max([L, R, ls + rp]),
                max(lp, sfx[i] - sfx[mid] + rp),   # prefix may cover the whole left half
                max(rs, sfx[mid] - sfx[j] + ls))   # suffix may cover the whole right half
    return MCSSRec(0, n)[0]

T(n)=2T(n/2)+O(1)=\Theta(n)
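
A quick check of the final version against a brute-force answer (the example array is made up):

S = [3, -5, 4, -1, 2, -3, 6, -4]
brute = max(sum(S[i:j]) for i in range(len(S)) for j in range(i + 1, len(S) + 1))
print(MCSS(S), brute)  # both print 8, from the subsequence [4, -1, 2, -3, 6]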