
Data Structures

and Algorithm Analysis in C

Second Edition

Solutions Manual

Mark Allen Weiss

Florida International University

Preface

Included in this manual are answers to most of the exercises in the textbook Data Structures and Algorithm Analysis in C, second edition, published by Addison-Wesley. These answers reflect the state of the book in the first printing.

Specifically omitted are likely programming assignments and any question whose solution is pointed to by a reference at the end of the chapter. Solutions vary in degree of completeness; generally, minor details are left to the reader. For clarity, programs are meant to be pseudo-C rather than completely perfect code.

Errors can be reported to weiss@fiu.edu. Thanks to Grigori Schwarz and Brian Harvey for pointing out errors in previous incarnations of this manual.

Table of Contents

1. Chapter 1: Introduction

2. Chapter 2: Algorithm Analysis

3. Chapter 3: Lists, Stacks, and Queues

4. Chapter 4: Trees

5. Chapter 5: Hashing

6. Chapter 6: Priority Queues (Heaps)

7. Chapter 7: Sorting

8. Chapter 8: The Disjoint Set ADT

9. Chapter 9: Graph Algorithms

10. Chapter 10: Algorithm Design Techniques

11. Chapter 11: Amortized Analysis

12. Chapter 12: Advanced Data Structures and Implementation


Chapter 1: Introduction

1.3 Because of round-off errors, it is customary to specify the number of decimal places that should be included in the output and round up accordingly. Otherwise, numbers come out looking strange. We assume error checks have already been performed; the routine Separate is left to the reader. Code is shown in Fig. 1.1.

1.4

The general way to do this is to write a procedure with heading

void ProcessFile(const char *FileName );

which opens FileName, does whatever processing is needed, and then closes it. If a line of the form

#include SomeFile

is detected, then the call

ProcessFile(SomeFile);

is made recursively. Self-referential includes can be detected by keeping a list of files for which a call to ProcessFile has not yet terminated, and checking this list before making a new call to ProcessFile.
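A real-C sketch of this scheme (an illustrative addition, not part of the original solution; the fixed depth limit, the IsActive helper, and treating "processing" as merely echoing lines are all assumptions made for the example):

```c
#include <stdio.h>
#include <string.h>

#define MAX_DEPTH 64          /* assumed limit on include nesting */

static const char *Active[MAX_DEPTH];  /* files whose ProcessFile call is still open */
static int Depth = 0;

/* Returns 1 if FileName is currently being processed (a self-referential include). */
static int IsActive(const char *FileName)
{
    int i;
    for (i = 0; i < Depth; i++)
        if (strcmp(Active[i], FileName) == 0)
            return 1;
    return 0;
}

void ProcessFile(const char *FileName)
{
    FILE *Fp;
    char Line[512], Included[256];

    if (IsActive(FileName) || Depth >= MAX_DEPTH)
        return;                          /* break the include cycle */
    Active[Depth++] = FileName;

    if ((Fp = fopen(FileName, "r")) != NULL)
    {
        while (fgets(Line, sizeof Line, Fp) != NULL)
            if (sscanf(Line, "#include %255s", Included) == 1)
                ProcessFile(Included);   /* recurse on the named file */
            else
                fputs(Line, stdout);     /* "processing" here is just echoing */
        fclose(Fp);
    }
    Depth--;
}
```

The active-file list doubles as the recursion stack, so a self-referential include is detected in time proportional to the current nesting depth.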

1.5 (a) The proof is by induction. The theorem is clearly true for 0 < X ≤ 1, since it is true for X = 1, and for X < 1, log X is negative. It is also easy to see that the theorem holds for 1 < X ≤ 2, since it is true for X = 2, and for X < 2, log X is at most 1. Suppose the theorem is true for p < X ≤ 2p (where p is a positive integer), and consider any 2p < Y ≤ 4p (p ≥ 1). Then log Y = 1 + log(Y/2) < 1 + Y/2 < Y/2 + Y/2 ≤ Y, where the first inequality follows by the inductive hypothesis.

(b) Let 2^X = A. Then A^B = (2^X)^B = 2^(XB). Thus log A^B = XB. Since X = log A, the theorem is proved.

1.6 (a) The sum is 4/3 and follows directly from the formula.

(b) S = 1/4 + 2/4^2 + 3/4^3 + ... . 4S = 1 + 2/4 + 3/4^2 + ... . Subtracting the first equation from the second gives 3S = 1 + 1/4 + 1/4^2 + ... . By part (a), 3S = 4/3, so S = 4/9.

(c) S = 1/4 + 4/4^2 + 9/4^3 + ... . 4S = 1 + 4/4 + 9/4^2 + 16/4^3 + ... . Subtracting the first equation from the second gives 3S = 1 + 3/4 + 5/4^2 + 7/4^3 + ... . Rewriting, we get 3S = 2 Σ_{i=0}^{∞} i/4^i + Σ_{i=0}^{∞} 1/4^i. Thus 3S = 2(4/9) + 4/3 = 20/9, so S = 20/27.

(d) Let S_N = Σ_{i=0}^{∞} i^N/4^i. Follow the same method as in parts (a)-(c) to obtain a formula for S_N in terms of S_{N−1}, S_{N−2}, ..., S_0 and solve the recurrence. Solving the recurrence is very difficult.
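The closed forms in parts (b) and (c) are easy to check numerically. The routines below are an illustrative addition (the names are invented); their partial sums should approach 4/9 and 20/27.

```c
/* Partial sums of sum_{i=1}^{n} i/4^i and sum_{i=1}^{n} i^2/4^i. */
double SumLinear(int n)
{
    double s = 0.0, p = 1.0;
    int i;
    for (i = 1; i <= n; i++)
    {
        p /= 4.0;                 /* p = 4^{-i} */
        s += i * p;
    }
    return s;                     /* approaches 4/9 */
}

double SumSquares(int n)
{
    double s = 0.0, p = 1.0;
    int i;
    for (i = 1; i <= n; i++)
    {
        p /= 4.0;
        s += (double)i * i * p;
    }
    return s;                     /* approaches 20/27 */
}
```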

_______________________________________________________________________________

double RoundUp(double N, int DecPlaces)
{
    int i;
    double AmountToAdd = 0.5;

    for(i = 0; i < DecPlaces; i++)
        AmountToAdd /= 10;
    return N + AmountToAdd;
}

void PrintFractionPart(double FractionPart, int DecPlaces)
{
    int i, ADigit;

    for(i = 0; i < DecPlaces; i++)
    {
        FractionPart *= 10;
        ADigit = IntPart(FractionPart);
        PrintDigit(ADigit);
        FractionPart = DecPart(FractionPart);
    }
}

void PrintReal(double N, int DecPlaces)
{
    int IntegerPart;
    double FractionPart;

    if(N < 0)
    {
        putchar('-');
        N = -N;
    }
    N = RoundUp(N, DecPlaces);
    IntegerPart = IntPart(N);
    FractionPart = DecPart(N);
    PrintOut(IntegerPart);  /* Using routine in text */
    if(DecPlaces > 0)
        putchar('.');
    PrintFractionPart(FractionPart, DecPlaces);
}

Fig. 1.1.

_______________________________________________________________________________

1.7 Σ_{i=⌈N/2⌉}^{N} 1/i = Σ_{i=1}^{N} 1/i − Σ_{i=1}^{⌈N/2⌉−1} 1/i ≈ ln N − ln(N/2) ≈ ln 2.
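As an illustrative check (the routine name is invented), summing the reciprocals of the upper half of 1..N does indeed approach ln 2 ≈ 0.693:

```c
/* Computes sum_{i = ceil(N/2)}^{N} 1/i, which approaches ln 2 as N grows. */
double UpperHalfHarmonic(int N)
{
    double s = 0.0;
    int i;
    for (i = (N + 1) / 2; i <= N; i++)   /* (N+1)/2 is ceil(N/2) for ints */
        s += 1.0 / i;
    return s;
}
```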

1.8 2^4 = 16 ≡ 1 (mod 5). (2^4)^25 ≡ 1^25 (mod 5). Thus 2^100 ≡ 1 (mod 5).

1.9 (a) Proof is by induction. The statement is clearly true for N = 1 and N = 2. Assume true for N = 1, 2, ..., k. Then Σ_{i=1}^{k+1} F_i = Σ_{i=1}^{k} F_i + F_{k+1}. By the induction hypothesis, the value of the sum on the right is F_{k+2} − 2 + F_{k+1} = F_{k+3} − 2, where the latter equality follows from the definition of the Fibonacci numbers. This proves the claim for N = k + 1, and hence for all N.

(b) As in the text, the proof is by induction. Observe that φ + 1 = φ^2. This implies that φ^(−1) + φ^(−2) = 1. For N = 1 and N = 2, the statement is true. Assume the claim is true for N = 1, 2, ..., k. Then

F_{k+1} = F_k + F_{k−1}

by the definition, and we can use the inductive hypothesis on the right-hand side, obtaining

F_{k+1} < φ^k + φ^(k−1) = φ^(−1)φ^(k+1) + φ^(−2)φ^(k+1)

so F_{k+1} < (φ^(−1) + φ^(−2))φ^(k+1) = φ^(k+1),

proving the theorem.

(c) See any of the advanced math references at the end of the chapter. The derivation involves the use of generating functions.

1.10 (a) Σ_{i=1}^{N} (2i − 1) = 2 Σ_{i=1}^{N} i − Σ_{i=1}^{N} 1 = N(N+1) − N = N^2.

(b) The easiest way to prove this is by induction. The case N = 1 is trivial. Otherwise,

Σ_{i=1}^{N+1} i^3 = (N+1)^3 + Σ_{i=1}^{N} i^3 = (N+1)^3 + N^2(N+1)^2/4 = (N+1)^2 [N^2/4 + (N+1)] = (N+1)^2 (N^2 + 4N + 4)/4 = (N+1)^2 (N+2)^2 / 2^2 = [(N+1)(N+2)/2]^2 = [Σ_{i=1}^{N+1} i]^2.

Chapter 2: Algorithm Analysis

2.1

2/N, 37, √N, N, N log log N, N log N, N log(N^2), N log^2 N, N^1.5, N^2, N^2 log N, N^3, 2^(N/2), 2^N. N log N and N log(N^2) grow at the same rate.

2.2 (a) True.

(b) False. A counterexample is T_1(N) = 2N, T_2(N) = N, and f(N) = N.

(c) False. A counterexample is T_1(N) = N^2, T_2(N) = N, and f(N) = N^2.

(d) False. The same counterexample as in part (c) applies.

2.3 We claim that N log N is the slower growing function. To see this, suppose otherwise. Then N^(ε/√(log N)) would grow slower than log N. Taking logs of both sides, we find that, under this assumption, (ε/√(log N)) log N grows slower than log log N. But the first expression simplifies to ε√(log N). If L = log N, then we are claiming that ε√L grows slower than log L, or, equivalently, that ε^2 L grows slower than log^2 L. But we know that log^2 L = o(L), so the original assumption is false, proving the claim.

2.4 Clearly, log^(k1) N = o(log^(k2) N) if k1 < k2, so we need to worry only about positive integers. The claim is clearly true for k = 0 and k = 1. Suppose it is true for k < i. Then, by L'Hôpital's rule,

lim_{N→∞} (log^i N)/N = lim_{N→∞} (i log^(i−1) N)/N

The second limit is zero by the inductive hypothesis, proving the claim.

2.5

Let f(N) = 1 when N is even, and N when N is odd. Likewise, let g(N) = 1 when N is odd, and N when N is even. Then the ratio f(N)/g(N) oscillates between 0 and ∞.

2.6 For all these programs, the following analysis will agree with a simulation:

(I) The running time is O(N).

(II) The running time is O(N^2).

(III) The running time is O(N^3).

(IV) The running time is O(N^2).

(V) j can be as large as i^2, which could be as large as N^2. k can be as large as j, which is N^2. The running time is thus proportional to N · N^2 · N^2, which is O(N^5).

(VI) The if statement is executed at most N^3 times, by previous arguments, but it is true only O(N^2) times (because it is true exactly i times for each i). Thus the innermost loop is only executed O(N^2) times. Each time through, it takes O(j^2) = O(N^2) time, for a total of O(N^4). This is an example where multiplying loop sizes can occasionally give an overestimate.

2.7 (a) It should be clear that all algorithms generate only legal permutations. The first two algorithms have tests to guarantee no duplicates; the third algorithm works by shuffling an array that initially has no duplicates, so none can occur. It is also clear that the first two algorithms are completely random, and that each permutation is equally likely. The third algorithm, due to R. Floyd, is not as obvious; the correctness can be proved by induction. See

J. Bentley, "Programming Pearls," Communications of the ACM 30 (1987), 754-757.

Note that if the second line of algorithm 3 is replaced with the statement

Swap(A[i], A[RandInt(0,N-1)]);

then not all permutations are equally likely. To see this, notice that for N = 3, there are 27 equally likely ways of performing the three swaps, depending on the three random integers. Since there are only 6 permutations, and 6 does not evenly divide 27, each permutation cannot possibly be equally represented.
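This counting argument can be confirmed by brute force. The sketch below (an illustrative addition, not from the manual) enumerates all 27 equally likely outcomes of the biased three-swap shuffle for N = 3 and tallies the 6 permutations; since 6 does not divide 27, the tallies cannot all be equal.

```c
/* For N = 3, enumerate all 27 outcomes of Swap(A[i], A[RandInt(0,N-1)])
   for i = 0, 1, 2 and count how often each permutation appears. */
void CountBiasedShuffle(int Counts[6])
{
    int r0, r1, r2, i;
    for (i = 0; i < 6; i++)
        Counts[i] = 0;
    for (r0 = 0; r0 < 3; r0++)
      for (r1 = 0; r1 < 3; r1++)
        for (r2 = 0; r2 < 3; r2++)
        {
            int A[3] = { 0, 1, 2 }, r[3], t, j, Index;
            r[0] = r0; r[1] = r1; r[2] = r2;
            for (j = 0; j < 3; j++)    /* the biased shuffle */
            {
                t = A[j];
                A[j] = A[r[j]];
                A[r[j]] = t;
            }
            /* A[0]*2 + (A[1] > A[2]) maps each permutation to a unique 0..5 */
            Index = A[0] * 2 + (A[1] > A[2]);
            Counts[Index]++;
        }
}
```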

(b) For the first algorithm, the time to decide if a random number to be placed in A[i] has not been used earlier is O(i). The expected number of random numbers that need to be tried is N/(N − i). This is obtained as follows: i of the N numbers would be duplicates. Thus the probability of success is (N − i)/N. Thus the expected number of independent trials is N/(N − i). The time bound is thus

Σ_{i=0}^{N−1} Ni/(N − i) < Σ_{i=0}^{N−1} N^2/(N − i) < N^2 Σ_{i=0}^{N−1} 1/(N − i) = N^2 Σ_{j=1}^{N} 1/j = O(N^2 log N)

The second algorithm saves a factor of i for each random number, and thus reduces the time bound to O(N log N) on average. The third algorithm is clearly linear.

(c, d) The running times should agree with the preceding analysis if the machine has enough memory. If not, the third algorithm will not seem linear because of a drastic increase for large N.

(e) The worst-case running time of algorithms I and II cannot be bounded because there is always a finite probability that the program will not terminate by some given time T. The algorithm does, however, terminate with probability 1. The worst-case running time of the third algorithm is linear - its running time does not depend on the sequence of random numbers.

2.8 Algorithm 1 would take about 5 days for N = 10,000, 14.2 years for N = 100,000, and 140 centuries for N = 1,000,000. Algorithm 2 would take about 3 hours for N = 100,000 and about 2 weeks for N = 1,000,000. Algorithm 3 would use 1½ minutes for N = 1,000,000. These calculations assume a machine with enough memory to hold the array. Algorithm 4 solves a problem of size 1,000,000 in 3 seconds.

2.9

(a) O(N^2).

(b) O(N log N).

2.10 (c) The algorithm is linear.

2.11 Use a variation of binary search to get an O(log N) solution (assuming the array is preread).

2.13 (a) Test to see if N is an odd number (or 2) and is not divisible by 3, 5, 7, ..., √N.

(b) O(√N), assuming that all divisions count for one unit of time.

(c) B = O(log N).

(d) O(2^(B/2)).

(e) If a 20-bit number can be tested in time T, then a 40-bit number would require about T^2 time.

(f) B is the better measure because it more accurately represents the size of the input.

-5-

2.14 The running time is proportional to N times the sum of the reciprocals of the primes less than N. This is O(N log log N). See Knuth, Volume 2, page 394.

2.15 Compute X^2, X^4, X^8, X^10, X^20, X^40, X^60, and X^62.

2.16 Maintain an array PowersOfX that can be filled in a for loop. The array will contain X, X^2, X^4, ..., up to X^(2^⌊log N⌋). The binary representation of N (which can be obtained by testing even or odd and then dividing by 2, until all bits are examined) can be used to multiply the appropriate entries of the array.
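A sketch of this scheme in real C (an illustrative addition: the function name is invented, and a modulus is taken at each step so the test values stay small; drop the modulus for true exponentiation):

```c
/* Builds Powers[i] = X^(2^i) (mod Mod) in a loop, then multiplies together
   the entries selected by the bits of N. */
long long PowerUsingTable(long long X, unsigned long N, long long Mod)
{
    long long Powers[64];                     /* Powers[i] = X^(2^i) mod Mod */
    long long Result = 1 % Mod;
    unsigned long n;
    int i, Bits = 0;

    Powers[0] = X % Mod;
    for (n = N; n > 0; n >>= 1)
        Bits++;                               /* number of bits in N */
    for (i = 1; i < Bits; i++)
        Powers[i] = (Powers[i-1] * Powers[i-1]) % Mod;

    for (i = 0, n = N; n > 0; n >>= 1, i++)   /* scan N's binary representation */
        if (n & 1)
            Result = (Result * Powers[i]) % Mod;
    return Result;
}
```

Note that PowerUsingTable(2, 100, 5) returning 1 matches Exercise 1.8.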

2.17 For N = 0 or N = 1, the number of multiplies is zero. If b(N) is the number of ones in the binary representation of N, then if N > 1, the number of multiplies used is

⌊log N⌋ + b(N) − 1

2.18 (a) A.

(b) B.

(c) The information given is not sufficient to determine an answer. We have only worst-case bounds.

(d) Yes.

2.19 (a) Recursion is unnecessary if there are two or fewer elements.

(b) One way to do this is to note that if the first N − 1 elements have a majority, then the last element cannot change this. Otherwise, the last element could be a majority. Thus if N is odd, ignore the last element. Run the algorithm as before. If no majority element emerges, then return the Nth element as a candidate.

(c) The running time is O(N), and satisfies T(N) = T(N/2) + O(N).
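The pairing strategy of parts (a)-(c) can be sketched in real C as follows (an illustrative addition; the helper names are invented, and element values are assumed nonnegative so that -1 can signal "no majority"):

```c
#include <stdlib.h>

/* Candidate selection by pairing: keep one copy of each equal adjacent pair,
   recurse on the smaller array; per part (b), an odd leftover element becomes
   the candidate when the recursion produces none. */
static int Candidate(const int *A, int N, int *Found)
{
    int *B, i, Nb = 0, Cand, Sub;

    if (N == 0) { *Found = 0; return 0; }
    if (N == 1) { *Found = 1; return A[0]; }

    B = malloc((N / 2) * sizeof(int));
    for (i = 0; i + 1 < N; i += 2)
        if (A[i] == A[i + 1])
            B[Nb++] = A[i];
    Cand = Candidate(B, Nb, &Sub);
    free(B);
    if (!Sub && (N % 2 == 1))   /* part (b): odd last element steps in */
    {
        Sub = 1;
        Cand = A[N - 1];
    }
    *Found = Sub;
    return Cand;
}

/* Returns the majority element of A[0..N-1], or -1 if there is none. */
int Majority(const int *A, int N)
{
    int Found, Cand, i, Count = 0;

    Cand = Candidate(A, N, &Found);
    if (!Found)
        return -1;
    for (i = 0; i < N; i++)     /* verification pass */
        if (A[i] == Cand)
            Count++;
    return (2 * Count > N) ? Cand : -1;
}
```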

(d) One copy of the original needs to be saved. After this, the B array, and indeed the recursion, can be avoided by placing each B_i in the A array. The difference is that the original recursive strategy implies that O(log N) arrays are used; this guarantees only two copies.

2.20 Otherwise, we could perform operations in parallel by cleverly encoding several integers into one. For instance, if A = 001, B = 101, C = 111, D = 100, we could add A and B at the same time as C and D by adding 00A00C + 00B00D. We could extend this to add N pairs of numbers at once in unit cost.
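The 00A00C + 00B00D encoding can be demonstrated directly. In the sketch below (an illustrative addition; the function name is invented), each 3-bit operand gets two zero pad bits, so a single machine addition computes A+B and C+D at once, the pad bits absorbing any carry out of the low field.

```c
/* Packs A over C with two pad bits each (fields 5 bits wide), adds once,
   and unpacks both sums. */
unsigned AddTwoPairs(unsigned A, unsigned B, unsigned C, unsigned D,
                     unsigned *SumAB, unsigned *SumCD)
{
    unsigned X = (A << 5) | C;      /* 00A00C */
    unsigned Y = (B << 5) | D;      /* 00B00D */
    unsigned Z = X + Y;             /* one add performs both additions */

    *SumCD = Z & 0x1F;              /* low 5-bit field  */
    *SumAB = (Z >> 5) & 0x1F;       /* high 5-bit field */
    return Z;
}
```

With the text's values A = 001, B = 101, C = 111, D = 100, the single addition yields A+B = 6 and C+D = 11.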

2.22 No. If Low = 1, High = 2, then Mid = 1, and the recursive call does not make progress.

2.24 No. As in Exercise 2.22, no progress is made.

-6-

Chapter 3: Lists, Stacks, and Queues

3.2 The comments for Exercise 3.4 regarding the amount of abstractness used apply here. The running time of the procedure in Fig. 3.1 is O(L + P).

_______________________________________________________________________________

void
PrintLots(List L, List P)
{
    int Counter;
    Position Lpos, Ppos;

    Lpos = First(L);
    Ppos = First(P);
    Counter = 1;
    while(Lpos != NULL && Ppos != NULL)
    {
        if(Ppos->Element == Counter++)
        {
            printf("%?", Lpos->Element);  /* "%?": format depends on ElementType */
            Ppos = Next(Ppos, P);
        }
        Lpos = Next(Lpos, L);
    }
}

Fig.3.1.

_______________________________________________________________________________

3.3 (a) For singly linked lists, the code is shown in Fig. 3.2.

-7-

_______________________________________________________________________________

/*BeforeP is the cell before the two adjacent cells that are to be swapped.*/

/*Error checks are omitted for clarity.*/

void
SwapWithNext(Position BeforeP, List L)
{
    Position P, AfterP;

    P = BeforeP->Next;
    AfterP = P->Next;  /* Both P and AfterP assumed not NULL. */

    P->Next = AfterP->Next;
    BeforeP->Next = AfterP;
    AfterP->Next = P;
}

Fig.3.2.

_______________________________________________________________________________

(b)For doubly linked lists,the code is shown in Fig. 3.3.

_______________________________________________________________________________

/* P and AfterP are cells to be switched. Error checks as before. */

void
SwapWithNext(Position P, List L)
{
    Position BeforeP, AfterP;

    BeforeP = P->Prev;
    AfterP = P->Next;

    P->Next = AfterP->Next;
    BeforeP->Next = AfterP;
    AfterP->Next = P;
    P->Next->Prev = P;
    P->Prev = AfterP;
    AfterP->Prev = BeforeP;
}

Fig.3.3.

_______________________________________________________________________________

3.4 Intersect is shown below.

-8-

_______________________________________________________________________________

/* This code can be made more abstract by using operations such as */
/* Retrieve and IsPastEnd to replace L1Pos->Element and L1Pos != NULL. */
/* We have avoided this because these operations were not rigorously defined. */

List
Intersect(List L1, List L2)
{
    List Result;
    Position L1Pos, L2Pos, ResultPos;

    L1Pos = First(L1); L2Pos = First(L2);
    Result = MakeEmpty(NULL);
    ResultPos = First(Result);
    while(L1Pos != NULL && L2Pos != NULL)
    {
        if(L1Pos->Element < L2Pos->Element)
            L1Pos = Next(L1Pos, L1);
        else if(L1Pos->Element > L2Pos->Element)
            L2Pos = Next(L2Pos, L2);
        else
        {
            Insert(L1Pos->Element, Result, ResultPos);
            L1Pos = Next(L1Pos, L1); L2Pos = Next(L2Pos, L2);
            ResultPos = Next(ResultPos, Result);
        }
    }
    return Result;
}

_______________________________________________________________________________

3.5 Fig. 3.4 contains the code for Union.

3.7 (a) One algorithm is to keep the result in a sorted (by exponent) linked list. Each of the MN multiplies requires a search of the linked list for duplicates. Since the size of the linked list is O(MN), the total running time is O(M^2 N^2).

(b) The bound can be improved by multiplying one term by the entire other polynomial, and then using the equivalent of the procedure in Exercise 3.2 to insert the entire sequence. Then each sequence takes O(MN), but there are only M of them, giving a time bound of O(M^2 N).

(c) An O(MN log MN) solution is possible by computing all MN pairs and then sorting by exponent using any algorithm in Chapter 7. It is then easy to merge duplicates afterward.

(d) The choice of algorithm depends on the relative values of M and N. If they are close, then the solution in part (c) is better. If one polynomial is very small, then the solution in part (b) is better.
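Part (c) can be sketched in real C with the library qsort standing in for a Chapter 7 sort (an illustrative addition; the type and function names are invented, and terms whose coefficients cancel to zero are not removed):

```c
#include <stdlib.h>

typedef struct { int Coeff; int Exponent; } Term;

static int CompareExponent(const void *a, const void *b)
{
    return ((const Term *)a)->Exponent - ((const Term *)b)->Exponent;
}

/* Forms all MN product terms, sorts them by exponent, then merges duplicates.
   Result must have room for M*N terms; returns the number of terms written. */
int MultiplyPoly(const Term *P, int M, const Term *Q, int N, Term *Result)
{
    Term *Pairs = malloc((size_t)M * N * sizeof(Term));
    int i, j, k = 0, Out = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
        {
            Pairs[k].Coeff = P[i].Coeff * Q[j].Coeff;
            Pairs[k].Exponent = P[i].Exponent + Q[j].Exponent;
            k++;
        }
    qsort(Pairs, (size_t)(M * N), sizeof(Term), CompareExponent);

    for (i = 0; i < M * N; i++)          /* equal exponents are now adjacent */
        if (Out > 0 && Result[Out - 1].Exponent == Pairs[i].Exponent)
            Result[Out - 1].Coeff += Pairs[i].Coeff;
        else
            Result[Out++] = Pairs[i];
    free(Pairs);
    return Out;
}
```

For example, squaring 1 + x gives the three terms 1, 2x, x^2.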

-9-

_______________________________________________________________________________

List

Union(List L1,List L2)

{

List Result;

ElementType InsertElement;

Position L1Pos,L2Pos,ResultPos;

L1Pos=First(L1);L2Pos=First(L2);

Result=MakeEmpty(NULL);

ResultPos=First(Result);

while(L1Pos!=NULL&&L2Pos!=NULL){

if(L1Pos->Element < L2Pos->Element){

InsertElement=L1Pos->Element;

L1Pos=Next(L1Pos,L1);

}

else if(L1Pos->Element>L2Pos->Element){

InsertElement=L2Pos->Element;

L2Pos=Next(L2Pos,L2);

}

else{

InsertElement=L1Pos->Element;

L1Pos=Next(L1Pos,L1);L2Pos=Next(L2Pos,L2);

}

Insert(InsertElement,Result,ResultPos);

ResultPos=Next(ResultPos,Result);

}

/*Flush out remaining list*/

while(L1Pos!=NULL){

Insert(L1Pos->Element,Result,ResultPos);

L1Pos=Next(L1Pos,L1);ResultPos=Next(ResultPos,Result);

}

while(L2Pos!=NULL){

Insert(L2Pos->Element,Result,ResultPos);

L2Pos=Next(L2Pos,L2);ResultPos=Next(ResultPos,Result);

}

return Result;

}

Fig.3.4.

_______________________________________________________________________________

3.8 One can use the Pow function in Chapter 2, adapted for polynomial multiplication. If P is small, a standard method that uses O(P) multiplies instead of O(log P) might be better because the multiplies would involve a large number with a small number, which is good for the multiplication routine in part (b).

3.10 This is a standard programming project. The algorithm can be sped up by setting M′ = M mod N, so that the hot potato never goes around the circle more than once, and then, if M′ > N/2, passing the potato appropriately in the alternative direction. This requires a doubly linked list. The worst-case running time is clearly O(N min(M, N)), although when these heuristics are used, and M and N are comparable, the algorithm might be significantly faster. If M = 1, the algorithm is clearly linear. The VAX/VMS C compiler's memory management routines do poorly with the particular pattern of frees in this case, causing O(N log N) behavior.

3.12 Reversal of a singly linked list can be done nonrecursively by using a stack, but this requires O(N) extra space. The solution in Fig. 3.5 is similar to strategies employed in garbage collection algorithms. At the top of the while loop, the list from the start to PreviousPos is already reversed, whereas the rest of the list, from CurrentPos to the end, is normal. This algorithm uses only constant extra space.

_______________________________________________________________________________ /*Assuming no header and L is not empty.*/

List

ReverseList(List L)

{

Position CurrentPos,NextPos,PreviousPos;

PreviousPos=NULL;

CurrentPos=L;

NextPos=L->Next;

while(NextPos!=NULL)

{

CurrentPos->Next=PreviousPos;

PreviousPos=CurrentPos;

CurrentPos=NextPos;

NextPos=NextPos->Next;

}

CurrentPos->Next=PreviousPos;

return CurrentPos;

}

Fig.3.5.

_______________________________________________________________________________

3.15 (a) The code is shown in Fig. 3.6.

(b) See Fig. 3.7.

(c) This follows from well-known statistical theorems. See Sleator and Tarjan's paper in the Chapter 11 references.

3.16 (c) Delete takes O(N) and is in two nested for loops each of size N, giving an obvious O(N^3) bound. A better bound of O(N^2) is obtained by noting that only N elements can be deleted from a list of size N, hence O(N^2) is spent performing deletes. The remainder of the routine is O(N^2), so the bound follows.

(d) O(N^2).

-11-

_______________________________________________________________________________

/* Array implementation, starting at slot 1 */

Position
Find(ElementType X, List L)
{
    int i, Where;

    Where = 0;
    for(i = 1; i <= N; i++)  /* N is assumed to be the number of elements in L */
        if(X == L[i].Element)
        {
            Where = i;
            break;
        }

    if(Where)  /* Move to front. */
    {
        for(i = Where; i > 1; i--)
            L[i].Element = L[i-1].Element;
        L[1].Element = X;
        return 1;
    }
    else
        return 0;  /* Not found. */
}

Fig.3.6.

_______________________________________________________________________________

(e) Sort the list, and make a scan to remove duplicates (which must now be adjacent).

3.17 (a) The advantages are that it is simpler to code, and there is a possible savings if deleted keys are subsequently reinserted (in the same place). The disadvantage is that it uses more space, because each cell needs an extra bit (which is typically a byte), and unused cells are not freed.

3.21 Two stacks can be implemented in an array by having one grow from the low end of the array up, and the other from the high end down.
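A sketch of this arrangement (an illustrative addition; the names and the fixed capacity are invented):

```c
/* Two stacks share one array: stack 0 grows up from the low end, stack 1
   grows down from the high end; the array is full only when they meet. */
#define CAPACITY 100

typedef struct
{
    int Data[CAPACITY];
    int Top0;              /* next free slot for stack 0 (grows upward)   */
    int Top1;              /* next free slot for stack 1 (grows downward) */
} TwoStacks;

void Initialize(TwoStacks *S)
{
    S->Top0 = 0;
    S->Top1 = CAPACITY - 1;
}

/* Returns 1 on success, 0 if the stacks have collided (array full). */
int Push(TwoStacks *S, int Which, int X)
{
    if (S->Top0 > S->Top1)
        return 0;
    if (Which == 0)
        S->Data[S->Top0++] = X;
    else
        S->Data[S->Top1--] = X;
    return 1;
}

/* Returns 1 and sets *X on success, 0 if the chosen stack is empty. */
int Pop(TwoStacks *S, int Which, int *X)
{
    if (Which == 0)
    {
        if (S->Top0 == 0) return 0;
        *X = S->Data[--S->Top0];
    }
    else
    {
        if (S->Top1 == CAPACITY - 1) return 0;
        *X = S->Data[++S->Top1];
    }
    return 1;
}
```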

3.22 (a) Let E be our extended stack. We will implement E with two stacks. One stack, which we'll call S, is used to keep track of the Push and Pop operations, and the other, M, keeps track of the minimum. To implement Push(X,E), we perform Push(X,S). If X is smaller than or equal to the top element in stack M, then we also perform Push(X,M). To implement Pop(E), we perform Pop(S). If X is equal to the top element in stack M, then we also Pop(M). FindMin(E) is performed by examining the top of M. All these operations are clearly O(1).
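A sketch of this design with array-based stacks (an illustrative addition; the names and fixed capacity are invented, and error checks are omitted):

```c
/* Extended stack: S holds the pushed values, M mirrors the running minimum,
   so FindMin is a peek at M's top. */
#define MAXN 100

typedef struct
{
    int S[MAXN]; int STop;     /* the ordinary stack */
    int M[MAXN]; int MTop;     /* stack of minimums  */
} MinStack;

void MakeEmptyMin(MinStack *E)
{
    E->STop = E->MTop = 0;
}

void PushMin(MinStack *E, int X)
{
    E->S[E->STop++] = X;
    if (E->MTop == 0 || X <= E->M[E->MTop - 1])
        E->M[E->MTop++] = X;           /* new (or tied) minimum */
}

int PopMin(MinStack *E)
{
    int X = E->S[--E->STop];
    if (E->MTop > 0 && X == E->M[E->MTop - 1])
        E->MTop--;                     /* the minimum leaves with X */
    return X;
}

int FindMin(const MinStack *E)
{
    return E->M[E->MTop - 1];
}
```

Pushing equal keys onto M as well (the `<=` test) is what keeps Pop correct when the minimum occurs more than once.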

(b) This result follows from a theorem in Chapter 7 that shows that sorting must take Ω(N log N) time. O(N) operations in the repertoire, including DeleteMin, would be sufficient to sort.

-12-

_______________________________________________________________________________

/*Assuming a header.*/

Position

Find(ElementType X,List L)

{

Position PrevPos,XPos;

PrevPos=FindPrevious(X,L);

if(PrevPos->Next!=NULL)/*Found.*/

{

XPos=PrevPos->Next;

PrevPos->Next=XPos->Next;

XPos->Next=L->Next;

L->Next=XPos;

return XPos;

}

else

return NULL;

}

Fig.3.7.

_______________________________________________________________________________

3.23 Three stacks can be implemented by having one grow from the bottom up, another from the top down, and a third somewhere in the middle growing in some (arbitrary) direction. If the third stack collides with either of the other two, it needs to be moved. A reasonable strategy is to move it so that its center (at the time of the move) is halfway between the tops of the other two stacks.

3.24 Stack space will not run out because only 49 calls will be stacked. However, the running time is exponential, as shown in Chapter 2, and thus the routine will not terminate in a reasonable amount of time.

3.25 The queue data structure consists of pointers Q->Front and Q->Rear, which point to the beginning and end of a linked list. The programming details are left as an exercise because it is a likely programming assignment.

3.26 (a) This is a straightforward modification of the queue routines. It is also a likely programming assignment, so we do not provide a solution.

-13-

Chapter 4: Trees

4.1 (a) A.

(b) G, H, I, L, M, and K.

4.2 For node B:

(a) A.

(b) D and E.

(c) C.

(d) 1.

(e) 3.

4.3 4.

4.4 There are N nodes. Each node has two pointers, so there are 2N pointers. Each node but the root has one incoming pointer from its parent, which accounts for N − 1 pointers. The rest are NULL.

4.5 Proof is by induction. The theorem is trivially true for H = 0. Assume true for H = 1, 2, ..., k. A tree of height k + 1 can have two subtrees of height at most k. These can have at most 2^(k+1) − 1 nodes each by the induction hypothesis. These 2^(k+2) − 2 nodes plus the root prove the theorem for height k + 1 and hence for all heights.

4.6 This can be shown by induction. Alternatively, let N = number of nodes, F = number of full nodes, L = number of leaves, and H = number of half nodes (nodes with one child). Clearly, N = F + H + L. Further, 2F + H = N − 1 (see Exercise 4.4). Subtracting yields L − F = 1.

4.7 This can be shown by induction. In a tree with no nodes, the sum is zero, and in a one-node tree, the root is a leaf at depth zero, so the claim is true. Suppose the theorem is true for all trees with at most k nodes. Consider any tree with k + 1 nodes. Such a tree consists of an i-node left subtree and a (k − i)-node right subtree. By the inductive hypothesis, the sum for the left subtree leaves is at most one with respect to the left tree root. Because all leaves are one deeper with respect to the original tree than with respect to the subtree, the sum is at most ½ with respect to the root. Similar logic implies that the sum for leaves in the right subtree is at most ½, proving the theorem. The equality is true if and only if there are no nodes with one child. If there is a node with one child, the equality cannot be true because adding the second child would increase the sum to higher than 1. If no nodes have one child, then we can find and remove two sibling leaves, creating a new tree. It is easy to see that this new tree has the same sum as the old. Applying this step repeatedly, we arrive at a single node, whose sum is 1. Thus the original tree had sum 1.

4.8 (a) - * * a b + c d e.

(b) ((a*b)*(c+d))-e.

(c) a b * c d + * e -.

-14-

4.9 (The answer consists of two binary search tree diagrams, which cannot be reproduced here; the first contains the keys 1, 2, 3, 4, 5, 6, 7, 9, and the second the keys 1, 2, 4, 5, 6, 7, 9.)

4.11 This problem is not much different from the linked list cursor implementation. We maintain an array of records consisting of an element field, and two integers, left and right. The free list can be maintained by linking through the left field. It is easy to write the CursorNew and CursorDispose routines, and substitute them for malloc and free.

4.12 (a) Keep a bit array B. If i is in the tree, then B[i] is true; otherwise, it is false. Repeatedly generate random integers until an unused one is found. If there are N elements already in the tree, then M − N are not, and the probability of finding one of these is (M − N)/M. Thus the expected number of trials is M/(M − N) = α/(α − 1).

(b) To find an element that is in the tree, repeatedly generate random integers until an already-used integer is found. The probability of finding one is N/M, so the expected number of trials is M/N = α.

(c) The total cost for one insert and one delete is α/(α − 1) + α = 1 + α + 1/(α − 1). Setting α = 2 minimizes this cost.

4.15 (a) N(0) = 1, N(1) = 2, N(H) = N(H − 1) + N(H − 2) + 1.

(b) The heights are one less than the Fibonacci numbers.

4.16 (The answer is a binary search tree diagram containing the keys 1, 2, 3, 4, 5, 6, 7, 9; it cannot be reproduced here.)

4.17 It is easy to verify by hand that the claim is true for 1 ≤ k ≤ 3. Suppose it is true for k = 1, 2, 3, ..., H. Then after the first 2^H − 1 insertions, 2^(H−1) is at the root, and the right subtree is a balanced tree containing 2^(H−1) + 1 through 2^H − 1. Each of the next 2^(H−1) insertions, namely 2^H through 2^H + 2^(H−1) − 1, inserts a new maximum and gets placed in the right subtree, eventually forming a perfectly balanced right subtree of height H − 1. This follows by the induction hypothesis because the right subtree may be viewed as being formed from the successive insertion of 2^(H−1) + 1 through 2^H + 2^(H−1) − 1. The next insertion forces an imbalance at the root, and thus a single rotation. It is easy to check that this brings 2^H to the root and creates a perfectly balanced left subtree of height H − 1. The new key is attached to a perfectly balanced right subtree of height H − 2 as the last node in the right path. Thus the right subtree is exactly as if the nodes 2^H + 1 through 2^H + 2^(H−1) were inserted in order. By the inductive hypothesis, the subsequent successive insertion of 2^H + 2^(H−1) + 1 through 2^(H+1) − 1 will create a perfectly balanced right subtree of height H − 1. Thus after the last insertion, both the left and the right subtrees are perfectly balanced, and of the same height, so the entire tree of 2^(H+1) − 1 nodes is perfectly balanced (and has height H).

4.18 The two remaining functions are mirror images of the text procedures. Just switch Right and Left everywhere.

4.20 After applying the standard binary search tree deletion algorithm, nodes on the deletion path need to have their balance changed, and rotations may need to be performed. Unlike insertion, more than one node may need rotation.

4.21 (a) O(log log N).

(b) The minimum AVL tree of height 255 (a huge tree).

4.22

_______________________________________________________________________________

Position
DoubleRotateWithLeft(Position K3)
{
    Position K1, K2;

    K1 = K3->Left;
    K2 = K1->Right;
    K1->Right = K2->Left;
    K3->Left = K2->Right;
    K2->Left = K1;
    K2->Right = K3;
    K1->Height = Max(Height(K1->Left), Height(K1->Right)) + 1;
    K3->Height = Max(Height(K3->Left), Height(K3->Right)) + 1;
    K2->Height = Max(K1->Height, K3->Height) + 1;
    return K2;  /* K2 is the new root of this subtree */
}

_______________________________________________________________________________

-16-

4.23 (The answer consists of four splay tree diagrams over the keys 1 through 13, showing the tree after accessing 3, then 9, then 1, then 5; the diagrams cannot be reproduced here.)
