# Introduction to Algorithms, Third Edition: Selected Solutions

## Selected Solutions for Chapter 2: Getting Started

### Solution to Exercise 2.2-2

```
SELECTION-SORT(A)
n = A.length
for j = 1 to n − 1
    smallest = j
    for i = j + 1 to n
        if A[i] < A[smallest]
            smallest = i
    exchange A[j] with A[smallest]
```

### Solution to Exercise 2.3-5

```
ITERATIVE-BINARY-SEARCH(A, v, low, high)
while low ≤ high
    mid = ⌊(low + high)/2⌋
    if v == A[mid]
        return mid
    elseif v > A[mid]
        low = mid + 1
    else high = mid − 1
return NIL

RECURSIVE-BINARY-SEARCH(A, v, low, high)
if low > high
    return NIL
mid = ⌊(low + high)/2⌋
if v == A[mid]
    return mid
elseif v > A[mid]
    return RECURSIVE-BINARY-SEARCH(A, v, mid + 1, high)
else return RECURSIVE-BINARY-SEARCH(A, v, low, mid − 1)
```

Both procedures terminate the search unsuccessfully when the range is empty (i.e., low > high) and terminate it successfully if the value v has been found. Based on the comparison of v to the middle element in the searched range, the search continues with the range halved. The recurrence for these procedures is therefore T(n) = T(n/2) + Θ(1), whose solution is T(n) = Θ(lg n).

### Solution to Problem 2-4

a. The inversions are (1, 5), (2, 5), (3, 4), (3, 5), (4, 5). (Remember that inversions are specified by indices rather than by the values in the array.)

b. The array with elements from {1, 2, ..., n} with the most inversions is ⟨n, n−1, n−2, ..., 2, 1⟩. For all 1 ≤ i < j ≤ n, the pair (i, j) is an inversion, and so the number of inversions is n(n−1)/2.

d. We follow the hint and modify merge sort to count inversions. Call a pair (x, y) a merge-inversion if x appears in the left subarray L, y appears in the right subarray R, and x > y. Since x is in L and y is in R, x must be within a subarray preceding the subarray containing y. Therefore x started out in a position i preceding y's original position j, and so (i, j) is an inversion.

Having shown a one-to-one correspondence between inversions and merge-inversions, it suffices for us to count merge-inversions. Consider a merge-inversion involving y in R. Let z be the smallest value in L that is greater than y. At some point during the merging process, z and y will be the "exposed" values in L and R, i.e., we will have z = L[i] and y = R[j] in line 13 of MERGE. At that time, there will be merge-inversions involving y and L[i], L[i+1], L[i+2], ..., L[n₁], and these n₁ − i + 1 merge-inversions will be the only ones involving y. Therefore, we need to detect the first time that z and y become exposed during the MERGE procedure and add the value of n₁ − i + 1 at that time to our total count of merge-inversions.

The following pseudocode, modeled on merge sort, works as we have just described. It also sorts the array A.

```
COUNT-INVERSIONS(A, p, r)
inversions = 0
if p < r
    q = ⌊(p + r)/2⌋
    inversions = inversions + COUNT-INVERSIONS(A, p, q)
    inversions = inversions + COUNT-INVERSIONS(A, q + 1, r)
    inversions = inversions + MERGE-INVERSIONS(A, p, q, r)
return inversions
```

Here MERGE-INVERSIONS is the MERGE procedure augmented, per the scheme above, to add n₁ − i + 1 to its count the first time each value of R becomes exposed opposite a larger exposed value of L, and to return that count. Since we add only a constant amount of work to each call and to each loop iteration of merging, the total running time is the same as for merge sort: Θ(n lg n).

## Selected Solutions for Chapter 3: Growth of Functions

### Solution to Exercise 3.1-2

To show that (n + a)ᵇ = Θ(nᵇ), we want to find constants c₁, c₂, n₀ > 0 such that 0 ≤ c₁nᵇ ≤ (n + a)ᵇ ≤ c₂nᵇ for all n ≥ n₀. Note that n + a ≤ 2n when |a| ≤ n, and n + a ≥ n/2 when |a| ≤ n/2. Thus, when n ≥ 2|a|,

0 ≤ n/2 ≤ n + a ≤ 2n.

Since b > 0, the inequality still holds when all parts are raised to the power b:

0 ≤ (n/2)ᵇ ≤ (n + a)ᵇ ≤ (2n)ᵇ,
0 ≤ (1/2)ᵇ nᵇ ≤ (n + a)ᵇ ≤ 2ᵇ nᵇ.

Thus, c₁ = (1/2)ᵇ, c₂ = 2ᵇ, and n₀ = 2|a| satisfy the definition.

### Solution to Exercise 3.1-3

Let the running time be T(n). T(n) ≥ O(n²) means that T(n) ≥ f(n) for some function f(n) in the set O(n²). This statement holds for any running time T(n), since the function g(n) = 0 for all n is in O(n²), and running times are always nonnegative. Thus, the statement tells us nothing about the running time.

### Solution to Exercise 3.1-4

2ⁿ⁺¹ = O(2ⁿ), but 2²ⁿ ≠ O(2ⁿ).

To show that 2ⁿ⁺¹ = O(2ⁿ), we must find constants c, n₀ > 0 such that 0 ≤ 2ⁿ⁺¹ ≤ c · 2ⁿ for all n ≥ n₀. Since 2ⁿ⁺¹ = 2 · 2ⁿ for all n, we can satisfy the definition with c = 2 and n₀ = 1.

To show that 2²ⁿ ≠ O(2ⁿ), assume there exist constants c, n₀ > 0 such that 0 ≤ 2²ⁿ ≤ c · 2ⁿ for all n ≥ n₀. Then 2²ⁿ = 2ⁿ · 2ⁿ ≤ c · 2ⁿ implies 2ⁿ ≤ c. But no constant is greater than all 2ⁿ, and so the assumption leads to a contradiction.

### Solution to Exercise 3.2-4

⌈lg n⌉! is not polynomially bounded, but ⌈lg lg n⌉! is.

Proving that a function f(n) is polynomially bounded is equivalent to proving that lg(f(n)) = O(lg n), for the following reasons. If f is polynomially bounded, then there exist constants c, k, n₀ such that for all n ≥ n₀, f(n) ≤ cnᵏ. Hence, lg(f(n)) ≤ lg c + k lg n, which, since c and k are constants, means that lg(f(n)) = O(lg n). Similarly, if lg(f(n)) = O(lg n), then f is polynomially bounded.

In the following proofs, we will make use of the following two facts:

1. lg(n!) = Θ(n lg n) (by equation (3.19)).
2. ⌈lg n⌉ = Θ(lg n), because ⌈lg n⌉ ≥ lg n and ⌈lg n⌉ < lg n + 1 ≤ 2 lg n for all n ≥ 2.

Now,

lg(⌈lg n⌉!) = Θ(⌈lg n⌉ lg ⌈lg n⌉) = Θ(lg n lg lg n) = ω(lg n).

Therefore, lg(⌈lg n⌉!) ≠ O(lg n), and so ⌈lg n⌉! is not polynomially bounded.

Similarly,

lg(⌈lg lg n⌉!) = Θ(⌈lg lg n⌉ lg ⌈lg lg n⌉) = Θ(lg lg n · lg lg lg n) = o((lg lg n)²) = o(lg²(lg n)) = o(lg n).

The last step above follows from the property that any polylogarithmic function grows more slowly than any positive polynomial function, i.e., for constants a, b > 0, we have lgᵇ n = o(nᵃ). Substitute lg n for n, 2 for b, and 1 for a, giving lg²(lg n) = o(lg n). Therefore, lg(⌈lg lg n⌉!) = O(lg n), and so ⌈lg lg n⌉! is polynomially bounded.

## Selected Solutions for Chapter 4: Divide-and-Conquer

### Solution to Exercise 4.2-4

If you can multiply 3×3 matrices using k multiplications, then you can multiply n×n matrices by recursively multiplying n/3 × n/3 matrices, in time T(n) = kT(n/3) + Θ(n²). Using the master method to solve this recurrence, consider the ratio of n^(log₃ k) and n²:

- If log₃ k = 2, case 2 applies and T(n) = Θ(n² lg n). In this case, k = 9 and T(n) = o(n^lg 7).
- If log₃ k < 2, case 3 applies and T(n) = Θ(n²). In this case, k < 9 and T(n) = o(n^lg 7).
- If log₃ k > 2, case 1 applies and T(n) = Θ(n^(log₃ k)). In this case, k > 9, and T(n) = o(n^lg 7) when log₃ k < lg 7, i.e., when k < 3^lg 7 ≈ 21.85. The largest such integer k is 21.

Thus, k = 21, and the running time is Θ(n^(log₃ k)) = Θ(n^(log₃ 21)) = O(n^2.80).

## Selected Solutions for Chapter 8: Sorting in Linear Time

### Solution to Exercise 8.3-3

The proof is by induction on the number of digits d. The basis, d = 1, is trivial: sorting on the one digit sorts the array. For the inductive step, assume that radix sort correctly sorts on the low-order d − 1 digits, and consider the final sort, on digit d. Take two elements a and b with dth digits a_d and b_d:

- If a_d < b_d, the sort will put a before b, which is correct, since a < b regardless of the low-order digits.
- If a_d > b_d, the sort will put a after b, which is correct, since a > b regardless of the low-order digits.
- If a_d = b_d, the sort will leave a and b in the same order they were in, because it is stable. But that order is already correct, since the correct order of a and b is determined by the low-order d − 1 digits when their dth digits are equal, and the elements are already sorted by their low-order d − 1 digits.

If the intermediate sort were not stable, it might rearrange elements whose dth digits were equal—elements that were in the right order after the sort on their lower-order digits.

### Solution to Exercise 8.3-4

Treat the numbers as 3-digit numbers in radix n. Each digit ranges from 0 to n − 1. Sort these 3-digit numbers with radix sort. There are 3 calls to counting sort, each taking Θ(n + n) = Θ(n) time, so that the total time is Θ(n).

### Solution to Problem 8-1

a. For a comparison algorithm A to sort, no two input permutations can reach the same leaf of the decision tree, so there must be at least n! leaves reached in T_A, one for each possible input permutation. Since A is a deterministic algorithm, it must always reach the same leaf when given a particular permutation as input, so at most n! leaves are reached (one for each permutation). Therefore exactly n! leaves are reached, one for each input permutation.

These n! leaves will each have probability 1/n!, since each of the n! possible permutations is the input with probability 1/n!.
Any remaining leaves will have probability 0, since they are not reached for any input. Without loss of generality, we can assume for the rest of this problem that paths leading only to 0-probability leaves aren't in the tree, since they cannot affect the running time of the sort. That is, we can assume that T_A consists of only the n! leaves labeled 1/n! and their ancestors.

b. If k > 1, then the root of T is not a leaf. This implies that all of T's leaves are leaves in LT and RT. Since every leaf at depth h in LT or RT has depth h + 1 in T, D(T) must be the sum of D(LT), D(RT), and k, the total number of leaves. To prove this last assertion, let d_T(x) = depth of node x in tree T. Then,

D(T) = Σ_{x ∈ leaves(T)} d_T(x)
     = Σ_{x ∈ leaves(LT)} d_T(x) + Σ_{x ∈ leaves(RT)} d_T(x)
     = Σ_{x ∈ leaves(LT)} (d_LT(x) + 1) + Σ_{x ∈ leaves(RT)} (d_RT(x) + 1)
     = Σ_{x ∈ leaves(LT)} d_LT(x) + Σ_{x ∈ leaves(RT)} d_RT(x) + Σ_{x ∈ leaves(T)} 1
     = D(LT) + D(RT) + k.

c. To show that d(k) = min_{1≤i≤k−1} {d(i) + d(k−i) + k}, we will show separately that

d(k) ≤ min_{1≤i≤k−1} {d(i) + d(k−i) + k}

and

d(k) ≥ min_{1≤i≤k−1} {d(i) + d(k−i) + k}.

To show the first inequality, we need only show that d(k) ≤ d(i) + d(k−i) + k, for i = 1, 2, ..., k−1. For any i from 1 to k−1 we can find trees RT with i leaves and LT with k−i leaves such that D(RT) = d(i) and D(LT) = d(k−i). Construct T such that RT and LT are the right and left subtrees of T's root, respectively. Then

d(k) ≤ D(T)  (by definition of d as the minimum D(T) value)
     = D(RT) + D(LT) + k  (by part (b))
     = d(i) + d(k−i) + k  (by choice of RT and LT).

To show the second inequality, we need only show that d(k) ≥ d(i) + d(k−i) + k, for some i in {1, 2, ..., k−1}. Take the tree T with k leaves such that D(T) = d(k), let RT and LT be T's right and left subtrees, respectively, and let i be the number of leaves in RT. Then k−i is the number of leaves in LT and

d(k) = D(T)  (by choice of T)
     = D(RT) + D(LT) + k  (by part (b))
     ≥ d(i) + d(k−i) + k  (by definition of d as the minimum D(T) value).
Neither i nor k−i can be 0 (and hence 1 ≤ i ≤ k−1), since if one of these were 0, either RT or LT would contain all k leaves of T, and that k-leaf subtree would have a D equal to D(T) − k (by part (b)), contradicting the choice of T as the k-leaf tree with the minimum D.

d. Let f_k(i) = i lg i + (k−i) lg(k−i). To find the value of i that minimizes f_k, find the i for which the derivative of f_k with respect to i is 0:

f′_k(i) = d/di [ (i ln i + (k−i) ln(k−i)) / ln 2 ]
        = (ln i + 1 − ln(k−i) − 1) / ln 2
        = (ln i − ln(k−i)) / ln 2,

which is 0 at i = k/2. To verify that this is indeed a minimum (not a maximum), check that the second derivative of f_k is positive at i = k/2:

f″_k(i) = d/di [ (ln i − ln(k−i)) / ln 2 ] = (1/ln 2)(1/i + 1/(k−i)),
f″_k(k/2) = (1/ln 2)(2/k + 2/k) = (1/ln 2)(4/k) > 0  since k > 1.

Now we use substitution to prove d(k) = Ω(k lg k). The base case of the induction is satisfied because d(1) ≥ 0 = c · 1 · lg 1 for any constant c. For the inductive step we assume that d(i) ≥ ci lg i for 1 ≤ i ≤ k−1, where c is some constant to be determined.

d(k) = min_{1≤i≤k−1} {d(i) + d(k−i) + k}
     ≥ min_{1≤i≤k−1} {c(i lg i + (k−i) lg(k−i)) + k}
     = min_{1≤i≤k−1} {c f_k(i) + k}
     = c (k/2) lg(k/2) + c (k − k/2) lg(k − k/2) + k
     = ck lg(k/2) + k
     = c(k lg k − k) + k
     = ck lg k + (k − ck)
     ≥ ck lg k  if c ≤ 1,

and so d(k) = Ω(k lg k).

e. Using the result of part (d) and the fact that T_A (as modified in our solution to part (a)) has n! leaves, we can conclude that

D(T_A) ≥ d(n!) = Ω(n! lg(n!)).

D(T_A) is the sum of the decision-tree path lengths for sorting all input permutations, and the path lengths are proportional to the run time. Since the n! permutations have equal probability 1/n!, the expected time to sort n random elements (1 input permutation) is the total time for all permutations divided by n!:

Ω(n! lg(n!)) / n! = Ω(lg(n!)) = Ω(n lg n).

f.
We will show how to modify a randomized decision tree (algorithm) to define a deterministic decision tree (algorithm) that is at least as good as the randomized one in terms of the average number of comparisons.

At each randomized node, pick the child with the smallest subtree (the subtree with the smallest average number of comparisons on a path to a leaf). Delete all the other children of the randomized node and splice out the randomized node itself.

The deterministic algorithm corresponding to this modified tree still works, because the randomized algorithm worked no matter which path was taken from each randomized node.

The average number of comparisons for the modified algorithm is no larger than the average number for the original randomized tree, since we discarded the higher-average subtrees in each case. In particular, each time we splice out a randomized node, we leave the overall average less than or equal to what it was, because the same set of input permutations reaches the modified subtree as before, but those inputs are handled in less than or equal to average time than before, and the rest of the tree is unmodified.

The randomized algorithm thus takes at least as much time on average as the corresponding deterministic one. (We've shown that the expected running time for a deterministic comparison sort is Ω(n lg n), hence the expected time for a randomized comparison sort is also Ω(n lg n).)

## Selected Solutions for Chapter 9: Medians and Order Statistics

### Solution to Exercise 9.3-1

For groups of 7, the algorithm still works in linear time. The number of elements greater than x (and similarly, the number less than x) is at least

4 (⌈(1/2)⌈n/7⌉⌉ − 2) ≥ 2n/7 − 8,

and the recurrence becomes

T(n) ≤ T(⌈n/7⌉) + T(5n/7 + 8) + O(n),

which can be shown to be O(n) by substitution, as for the groups of 5 case in the text.

For groups of 3, however, the algorithm no longer works in linear time. The number of elements greater than x, and the number of elements less than x, is at least

2 (⌈(1/2)⌈n/3⌉⌉ − 2) ≥ n/3 − 4,

and the recurrence becomes

T(n) ≤ T(⌈n/3⌉) + T(2n/3 + 4) + O(n),

which does not have a linear solution.

We can prove that the worst-case time for groups of 3 is Ω(n lg n). We do so by deriving a recurrence for a particular case that takes Ω(n lg n) time.

In counting up the number of elements greater than x (and similarly, the number less than x), consider the particular case in which there are exactly ⌈(1/2)⌈n/3⌉⌉ groups with medians ≥ x and in which the "leftover" group does contribute 2 elements greater than x. Then the number of elements greater than x is exactly 2(⌈(1/2)⌈n/3⌉⌉ − 1) + 1 (the −1 discounts x's group, as usual, and the +1 is contributed by x's group) = 2⌈n/6⌉ − 1, and the recursive step for elements ≤ x has n − (2⌈n/6⌉ − 1) ≥ n − (2(n/6 + 1) − 1) = 2n/3 − 1 elements. Observe also that the O(n) term in the recurrence is really Θ(n), since the partitioning in step 4 takes Θ(n) (not just O(n)) time. Thus, we get the recurrence

T(n) ≥ T(⌈n/3⌉) + T(2n/3 − 1) + Θ(n) ≥ T(n/3) + T(2n/3 − 1) + Θ(n),

from which you can show that T(n) ≥ cn lg n by substitution. You can also see that T(n) is nonlinear by noticing that each level of the recursion tree sums to n.

In fact, any odd group size ≥ 5 works in linear time.

### Solution to Exercise 9.3-3

A modification to quicksort that allows it to run in O(n lg n) time in the worst case uses the deterministic PARTITION algorithm that was modified to take an element to partition around as an input parameter.

SELECT takes an array A, the bounds p and r of the subarray in A, and the rank i of an order statistic, and in time linear in the size of the subarray A[p..r] it returns the ith smallest element in A[p..r].

```
BEST-CASE-QUICKSORT(A, p, r)
if p < r
    i = ⌊(r − p + 1)/2⌋
    x = SELECT(A, p, r, i)
    q = PARTITION-AROUND(A, p, r, x)
    BEST-CASE-QUICKSORT(A, p, q − 1)
    BEST-CASE-QUICKSORT(A, q + 1, r)
```

For an n-element subarray, the larger of the two recursive calls is on at most n/2 elements, so the worst-case running time satisfies T(n) ≤ 2T(n/2) + Θ(n) = O(n lg n).

## Selected Solutions for Chapter 11: Hash Tables

### Solution to Problem 11-2

Let M be the maximum number of keys in any slot, let P_k = Pr{M = k}, and let k₀ = c lg n / lg lg n, where c is the constant chosen in the earlier parts of the problem so that P_k < 1/n² for k > k₀. Then

Pr{M > k₀} = Σ_{k=k₀+1}^{n} Pr{M = k} = Σ_{k=k₀+1}^{n} P_k < n · (1/n²) = 1/n,

and so

E[M] ≤ k₀ · Pr{M ≤ k₀} + n · Pr{M > k₀} ≤ k₀ · 1 + n · (1/n) = k₀ + 1 = O(lg n / lg lg n).
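As an illustration of the idea in Exercise 9.3-3 above, the recursion can be sketched in Python. Here a sort-based median stands in for the linear-time SELECT of the text, so this sketch shows only the recursion shape, not the O(n lg n) guarantee; all names are illustrative.

```python
# Sketch of Exercise 9.3-3: quicksort that always partitions around the
# median. A sort-based median replaces the linear-time SELECT of the text.

def quicksort_around_median(a):
    if len(a) <= 1:
        return a
    a = list(a)
    median = sorted(a)[(len(a) - 1) // 2]   # stand-in for SELECT
    a.remove(median)                        # partition around the median
    less = [x for x in a if x < median]
    greater_eq = [x for x in a if x >= median]
    return (quicksort_around_median(less) + [median]
            + quicksort_around_median(greater_eq))
```

Because the pivot is the median, neither recursive call receives more than half of the elements, mirroring the T(n) ≤ 2T(n/2) + Θ(n) recurrence.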
## Selected Solutions for Chapter 15: Dynamic Programming

Running RECURSIVE-MATRIX-CHAIN takes O(n 3ⁿ⁻¹) time, and enumerating all parenthesizations takes Ω(4ⁿ/n^(3/2)) time, and so RECURSIVE-MATRIX-CHAIN is more efficient than enumeration.

Note: The above substitution uses the following fact:

Σ_{i=1}^{n−1} i xⁱ⁻¹ = n xⁿ⁻¹/(x − 1) + (1 − xⁿ)/(x − 1)².

This equation can be derived from equation (A.5) by taking the derivative. Let

f(x) = Σ_{i=1}^{n−1} xⁱ = (xⁿ − 1)/(x − 1) − 1.

Then

Σ_{i=1}^{n−1} i xⁱ⁻¹ = f′(x) = n xⁿ⁻¹/(x − 1) + (1 − xⁿ)/(x − 1)².

### Solution to Exercise 15.4-4

When computing a particular row of the c table, no rows before the previous row are needed. Thus only two rows—2 · Y.length entries—need to be kept in memory at a time. (Note: Each row of c actually has Y.length + 1 entries, but we don't need to store the column of 0's—instead we can make the program "know" that those entries are 0.) With this idea, we need only 2 · min(m, n) entries if we always call LCS-LENGTH with the shorter sequence as the Y argument.

We can thus do away with the c table as follows: Use two arrays of length min(m, n), previous-row and current-row, to hold the appropriate rows of c. Initialize previous-row to all 0 and compute current-row from left to right. When current-row is filled, if there are still more rows to compute, copy current-row into previous-row and compute the new current-row.

Actually only a little more than one row's worth of c entries—min(m, n) + 1 entries—are needed during the computation. The only entries needed in the table when it is time to compute c[i, j] are c[i, k] for k ≤ j − 1 (i.e., earlier entries in the current row, which will be needed to compute the next row) and c[i−1, k] for k ≥ j − 1 (i.e., entries in the previous row that are still needed to compute the rest of the current row). This is one entry for each k from 1 to min(m, n), except that there are two entries with k = j − 1, hence the additional entry needed besides the one row's worth of entries.
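The two-row scheme above can be sketched in Python. For clarity this sketch keeps the column of 0's that the text notes you can omit, so each row has min(m, n) + 1 entries; function and variable names are illustrative.

```python
# Sketch of the two-row scheme from Exercise 15.4-4: LCS length using only
# two rows of the c table, with the shorter sequence as y.

def lcs_length_two_rows(x, y):
    if len(y) > len(x):                  # keep the shorter sequence as y
        x, y = y, x
    previous_row = [0] * (len(y) + 1)    # row i-1 of c (column 0 kept explicit)
    for xi in x:
        current_row = [0] * (len(y) + 1)
        for j, yj in enumerate(y, start=1):
            if xi == yj:
                current_row[j] = previous_row[j - 1] + 1
            else:
                current_row[j] = max(previous_row[j], current_row[j - 1])
        previous_row = current_row       # the old current row becomes previous
    return previous_row[len(y)]
```

Only the length is recovered this way; reconstructing an actual LCS needs the full table (or a divide-and-conquer variant).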
We can thus do away with the c table as follows: Use an array a of length min(m, n) + 1 to hold the appropriate entries of c. At the time c[i, j] is to be computed, a will hold the following entries:

- a[k] = c[i, k] for 1 ≤ k < j − 1 (i.e., earlier entries in the current row),
- a[k] = c[i−1, k] for k ≥ j − 1 (i.e., entries in the previous row),
- a[0] = c[i, j−1] (i.e., the previous entry computed, which couldn't be put into its "right" place in a without erasing the still-needed c[i−1, j−1]).

The three entries needed to compute c[i, j] for j > 1 are a[0] = c[i, j−1], a[j−1] = c[i−1, j−1], and a[j] = c[i−1, j]. When c[i, j] has been computed, move a[0] (c[i, j−1]) to its "correct" place, a[j−1], and put c[i, j] in a[0].

### Solution to Problem 15-4

Note: We assume that no word is longer than will fit into a line, i.e., lᵢ ≤ M for all i.

First, we'll make some definitions so that we can state the problem more uniformly. Special cases about the last line and worries about whether a sequence of words fits in a line will be handled in these definitions, so that we can forget about them when framing our overall strategy.

Define extras[i, j] = M − j + i − Σ_{k=i}^{j} lₖ to be the number of extra spaces at the end of a line containing words i through j. Note that extras may be negative.

Now define the cost of including a line containing words i through j in the sum we want to minimize:

lc[i, j] = ∞             if extras[i, j] < 0 (i.e., words i, ..., j don't fit on a line),
         = 0             if j = n and extras[i, j] ≥ 0 (last line costs 0),
         = (extras[i, j])³  otherwise.

Note that the way we defined lc ensures that all choices made will fit on the line (since an arrangement with lc = ∞ cannot be chosen as the minimum), and the cost of putting words i, ..., j on the last line will not be 0 unless this really is the last line of the paragraph (j = n) or words i, ..., j fill the entire line.

We want to minimize the sum of lc over all lines of the paragraph. Let c[j] be the cost of an optimal arrangement of words 1, ..., j, where the last line of that arrangement contains words i through j for some i. Then

c[j] = 0  if j = 0,
c[j] = min_{1≤i≤j} (c[i−1] + lc[i, j])  if j > 0.

We can compute a table of c values from left to right, since each value depends only on earlier values. To keep track of what words go on what lines, we can keep a parallel p table that points to where each c value came from. When c[j] is computed, if c[j] is based on the value of c[k−1], set p[j] = k. Then after c[n] is computed, we can trace the pointers to see where to break the lines. The last line starts at word p[n] and goes through word n. The previous line starts at word p[p[n]] and goes through word p[n] − 1, etc.
In pseudocode, here's how we construct the tables:

```
PRINT-NEATLY(l, n, M)
let extras[1..n, 1..n], lc[1..n, 1..n], c[0..n], and p[1..n] be new arrays
// Compute extras[i, j] for 1 ≤ i ≤ j ≤ n.
for i = 1 to n
    extras[i, i] = M − l_i
    for j = i + 1 to n
        extras[i, j] = extras[i, j − 1] − l_j − 1
// Compute lc[i, j] for 1 ≤ i ≤ j ≤ n.
for i = 1 to n
    for j = i to n
        if extras[i, j] < 0
            lc[i, j] = ∞
        elseif j == n and extras[i, j] ≥ 0
            lc[i, j] = 0
        else lc[i, j] = (extras[i, j])³
// Compute c[j] and p[j] for 1 ≤ j ≤ n.
c[0] = 0
for j = 1 to n
    c[j] = ∞
    for i = 1 to j
        if c[i − 1] + lc[i, j] < c[j]
            c[j] = c[i − 1] + lc[i, j]
            p[j] = i
return c and p
```

This procedure runs in Θ(n²) time and uses Θ(n²) space.

## Selected Solutions for Chapter 16: Greedy Algorithms

### Solution to Exercise 16.1-4

Process the activities' start and finish times in increasing order, maintaining two lists of lecture halls: halls that are busy at the current time t (because they have been assigned some activity i with sᵢ ≤ t < fᵢ) and halls that are free at time t. (As in the activity-selection problem in Section 16.1, we are assuming that activity time intervals are half open—i.e., that if sᵢ ≥ f_j, then activities i and j are compatible.)

When t is the start time of some activity, assign that activity to a free hall and move the hall from the free list to the busy list. When t is the finish time of some activity, move the activity's hall from the busy list to the free list. (The activity is certainly in some hall, because the event times are processed in order and the activity must have started before its finish time t, hence must have been assigned to a hall.)

To avoid using more halls than necessary, always pick a hall that has already had an activity assigned to it, if possible, before picking a never-used hall. (This can be done by always working at the front of the free-halls list—putting freed halls onto the front of the list and taking halls from the front of the list—so that a new hall doesn't come to the front and get chosen if there are previously-used halls.)

This guarantees that the algorithm uses as few lecture halls as possible: The algorithm will terminate with a schedule requiring m ≤ n lecture halls. Let activity i be the first activity scheduled in lecture hall m. The reason that i was put in the mth lecture hall is that the first m − 1 lecture halls were busy at time sᵢ. So at this time there are m activities occurring simultaneously. Therefore any schedule must use at least m lecture halls, so the schedule returned by the algorithm is optimal.
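The event-driven hall assignment described above can be sketched in Python. The event encoding, the use of plain lists for the free-hall queue, and all names are illustrative choices; finish events sort before start events at equal times, matching the half-open intervals.

```python
# Sketch of the hall-assignment greedy: sort start/finish events, keep a
# free-hall list worked at the front so previously used halls are preferred.

def assign_halls(activities):
    """activities: list of (start, finish) pairs. Returns a hall per activity."""
    events = []
    for idx, (s, f) in enumerate(activities):
        events.append((s, 1, idx))       # 1 = start event
        events.append((f, 0, idx))       # 0 = finish event, sorts first on ties
    events.sort()
    free, hall_of, next_hall = [], {}, 0
    for _, kind, idx in events:
        if kind == 1:                    # activity starts: grab a hall
            if not free:
                free.append(next_hall)   # open a never-used hall
                next_hall += 1
            hall_of[idx] = free.pop(0)   # front of free list = previously used
        else:                            # activity finishes: free its hall
            free.insert(0, hall_of[idx])
    return [hall_of[i] for i in range(len(activities))]
```

The number of distinct halls returned equals the maximum number of simultaneously running activities, which is the optimality argument above.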
Run time:

- Sort the 2n activity-start/activity-end events. (In the sorted order, an activity-ending event should precede an activity-starting event that is at the same time.) O(n lg n) time for arbitrary times, possibly O(n) if the times are restricted (e.g., to small integers).
- Process the events in O(n) time: scan the 2n events, doing O(1) work for each (moving a hall from one list to the other and possibly associating an activity with it).

Total: O(n + time to sort).

### Solution to Exercise 16.2-2

The solution is based on the optimal-substructure observation in the text: Let i be the highest-numbered item in an optimal solution S for W pounds and items 1, ..., n. Then S′ = S − {i} must be an optimal solution for W − wᵢ pounds and items 1, ..., i−1, and the value of the solution S is vᵢ plus the value of the subproblem solution S′.

We can express this relationship in the following formula: Define c[i, w] to be the value of the solution for items 1, ..., i and maximum weight w. Then

c[i, w] = 0                                         if i = 0 or w = 0,
        = c[i−1, w]                                 if wᵢ > w,
        = max(vᵢ + c[i−1, w − wᵢ], c[i−1, w])       if i > 0 and w ≥ wᵢ.

The last case says that the value of a solution for i items either includes item i, in which case it is vᵢ plus a subproblem solution for i−1 items and the weight excluding wᵢ, or doesn't include item i, in which case it is a subproblem solution for i−1 items and the same weight. That is, if the thief picks item i, he takes vᵢ value, and he can choose from items 1, ..., i−1 up to the weight limit w − wᵢ, and get c[i−1, w − wᵢ] additional value. On the other hand, if he decides not to take item i, he can choose from items 1, ..., i−1 up to the weight limit w, and get c[i−1, w] value. The better of these two choices should be made.

The algorithm takes as inputs the maximum weight W, the number of items n, and the two sequences v = ⟨v₁, v₂, ..., vₙ⟩ and w = ⟨w₁, w₂, ..., wₙ⟩. It stores the c[i, j] values in a table c[0..n, 0..W] whose entries are computed in row-major order.
(That is, the first row of c is filled in from left to right, then the second row, and so on.) At the end of the computation, c[n, W] contains the maximum value the thief can take.

```
DYNAMIC-0-1-KNAPSACK(v, w, n, W)
let c[0..n, 0..W] be a new array
for w = 0 to W
    c[0, w] = 0
for i = 1 to n
    c[i, 0] = 0
    for w = 1 to W
        if w_i ≤ w
            if v_i + c[i − 1, w − w_i] > c[i − 1, w]
                c[i, w] = v_i + c[i − 1, w − w_i]
            else c[i, w] = c[i − 1, w]
        else c[i, w] = c[i − 1, w]
```

We can use the c table to deduce the set of items to take by starting at c[n, W] and tracing where the optimal values came from. If c[i, w] = c[i−1, w], then item i is not part of the solution, and we continue tracing with c[i−1, w]. Otherwise item i is part of the solution, and we continue tracing with c[i−1, w − wᵢ].

The above algorithm takes Θ(nW) time total:

- Θ(nW) to fill in the c table: (n+1)(W+1) entries, each requiring Θ(1) time to compute.
- O(n) time to trace the solution (since it starts in row n of the table and moves up one row at each step).

### Solution to Exercise 16.2-7

Sort A and B into monotonically decreasing order.

Here's a proof that this method yields an optimal solution. Consider any indices i and j such that i < j, and consider the terms aᵢ^bᵢ and a_j^b_j. We want to show that it is no worse to include these terms in the payoff than to include aᵢ^b_j and a_j^bᵢ, i.e., that aᵢ^bᵢ a_j^b_j ≥ aᵢ^b_j a_j^bᵢ. Since A and B are sorted into monotonically decreasing order and i < j, we have aᵢ ≥ a_j and bᵢ ≥ b_j. Since aᵢ and a_j are positive and bᵢ − b_j is nonnegative, we have aᵢ^(bᵢ−b_j) ≥ a_j^(bᵢ−b_j). Multiplying both sides by aᵢ^b_j a_j^b_j yields aᵢ^bᵢ a_j^b_j ≥ aᵢ^b_j a_j^bᵢ.

Since the order of multiplication doesn't matter, sorting A and B into monotonically increasing order works as well.

## Selected Solutions for Chapter 22: Elementary Graph Algorithms

### Solution to Problem 22-1

3. For a cross edge (u, v), we cannot have v.d > u.d + 1, since we visit v at the latest when we explore edge (u, v). Thus, v.d ≤ u.d + 1.

4. Clearly, v.d ≥ 0 for all vertices v. For a back edge (u, v), v is an ancestor of u in the breadth-first tree, which means that v.d ≤ u.d. (Note that since self-loops are considered to be back edges, we could have u = v.)

## Selected Solutions for Chapter 23: Minimum Spanning Trees

### Solution to Exercise 23.1-1

Theorem 23.1 shows this. Let A be the empty set and S be any set containing u but not v.

### Solution to Exercise 23.1-4

A triangle whose edge weights are all equal is a graph in which every edge is a light edge crossing some cut. But the triangle is cyclic, so it is not a minimum spanning tree.

### Solution to Exercise 23.1-6

Suppose that for every cut of G, there is a unique light edge crossing the cut.
Let us consider two minimum spanning trees, T and T′, of G. We will show that every edge of T is also in T′, which means that T and T′ are the same tree and hence there is a unique minimum spanning tree.

Consider any edge (u, v) ∈ T. If we remove (u, v) from T, then T becomes disconnected, resulting in a cut (S, V − S). The edge (u, v) is a light edge crossing the cut (S, V − S) (by Exercise 23.1-3). Now consider the edge (x, y) ∈ T′ that crosses (S, V − S). It, too, is a light edge crossing this cut. Since the light edge crossing (S, V − S) is unique, the edges (u, v) and (x, y) are the same edge. Thus, (u, v) ∈ T′. Since we chose (u, v) arbitrarily, every edge in T is also in T′.

Here's a counterexample for the converse: a graph with vertices x, y, and z, an edge (x, y) of weight 1, and an edge (x, z) of weight 1. Here, the graph is its own minimum spanning tree, and so the minimum spanning tree is unique. But consider the cut ({x}, {y, z}). Both of the edges (x, y) and (x, z) are light edges crossing the cut.

## Selected Solutions for Chapter 24: Single-Source Shortest Paths

### Solution to Exercise 24.1-3

If the greatest number of edges on any shortest path from the source is m, then the path-relaxation property tells us that after m iterations of BELLMAN-FORD, every vertex v has achieved its shortest-path weight in v.d. By the upper-bound property, after m iterations, no d values will ever change. Therefore, no d values will change in the (m+1)st iteration. Because we do not know m in advance, we cannot make the algorithm iterate exactly m times and then terminate. But if we just make the algorithm stop when nothing changes any more, it will stop after m + 1 iterations.
```
BELLMAN-FORD-(M+1)(G, w, s)
INITIALIZE-SINGLE-SOURCE(G, s)
changes = TRUE
while changes == TRUE
    changes = FALSE
    for each edge (u, v) ∈ G.E
        RELAX-M(u, v, w)

RELAX-M(u, v, w)
if v.d > u.d + w(u, v)
    v.d = u.d + w(u, v)
    v.π = u
    changes = TRUE
```

The test for a negative-weight cycle (based on there being a d value that would change if another relaxation step were done) has been removed above, because this version of the algorithm will never get out of the while loop unless all d values stop changing.

### Solution to Exercise 24.3-3

Yes, the algorithm still works. Let u be the leftover vertex that does not get extracted from the priority queue Q. If u is not reachable from s, then u.d = δ(s, u) = ∞. If u is reachable from s, then there is a shortest path p = s ⇝ x → u. When the vertex x was extracted, x.d = δ(s, x), and then the edge (x, u) was relaxed; thus, u.d = δ(s, u).

### Solution to Exercise 24.3-6

To find the most reliable path between s and t, run Dijkstra's algorithm with edge weights w(u, v) = −lg r(u, v) to find shortest paths from s in O(E + V lg V) time. The most reliable path is the shortest path from s to t, and that path's reliability is the product of the reliabilities of its edges.

Here's why this works. Because the probabilities are independent, the probability that a path will not fail is the product of the probabilities that its edges will not fail. We want to find a path p from s to t such that Π_{(u,v)∈p} r(u, v) is maximized. This is equivalent to maximizing lg(Π_{(u,v)∈p} r(u, v)) = Σ_{(u,v)∈p} lg r(u, v), which is in turn equivalent to minimizing Σ_{(u,v)∈p} −lg r(u, v). (Note: r(u, v) can be 0, and lg 0 is undefined. So in this algorithm, define lg 0 = −∞.) Thus if we assign weights w(u, v) = −lg r(u, v), we have a shortest-path problem. Since lg 1 = 0 and lg x < 0 for 0 < x < 1, all the weights w are nonnegative, and we can use Dijkstra's algorithm.

## Selected Solutions for Chapter 26: Maximum Flow

Suppose that the capacity c(u, v) of a single edge is reduced by 1 and that the maximum flow f, which is integer-valued, saturates this edge. Then f(u, v) = c(u, v) > 0, which in turn implies that f(u, v) ≥ 1. Define f′(x, y) = f(x, y) for all x, y ∈ V, except that f′(u, v) = f(u, v) − 1.
Although f′ obeys all capacity constraints, even after c(u, v) has been reduced, it is not a legal flow, as it violates flow conservation at u (unless u = s) and at v (unless v = t). f′ has one more unit of flow entering u than leaving u, and it has one more unit of flow leaving v than entering v.

The idea is to try to reroute this unit of flow so that it goes out of u and into v via some other path. If that is not possible, we must reduce the flow from s to u and from v to t by one unit.

Look for an augmenting path from u to v (note: not from s to t). If there is such a path, augment the flow along that path. If there is no such path, reduce the flow from s to u by augmenting the flow from u to s. That is, find an augmenting path u ⇝ s and augment the flow along that path. (There definitely is such a path, because there is flow from s to u.) Similarly, reduce the flow from v to t by finding an augmenting path t ⇝ v and augmenting the flow along that path.

Time: O(V + E) = O(E) if we find the paths with either DFS or BFS.
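The reduction in Exercise 24.3-6 above (shortest paths under weights w(u, v) = −lg r(u, v)) can be sketched in Python. The adjacency representation and all names are illustrative, and a binary heap stands in for the Fibonacci heap assumed by the O(E + V lg V) bound.

```python
# Sketch of Exercise 24.3-6: maximize the product of edge reliabilities by
# running Dijkstra with weights -lg r(u, v), which are nonnegative for r <= 1.

import heapq
from math import log2, inf

def most_reliable_path(graph, s, t):
    """graph: {u: [(v, r), ...]} with reliabilities 0 < r <= 1.
    Returns the reliability of the most reliable s-to-t path."""
    dist = {u: inf for u in graph}
    dist[s] = 0.0
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                    # stale queue entry
        for v, r in graph[u]:
            nd = d + (-log2(r))         # -lg r >= 0, so Dijkstra applies
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return 2.0 ** (-dist[t])            # undo the log to recover the product
```

For example, with two routes from s to t of reliabilities 0.9 · 0.9 = 0.81 and 0.8, the function prefers the two-edge route.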