Of course, this is also the last data structure experiment. The previous nine experiments prompted a lot of deep thinking, and my programming and problem-solving skills have improved greatly as a result; I feel my programming ability has made a qualitative leap. Thanks to this data structure course for teaching me so much! The follow-up course "Algorithm Design and Analysis" is also very important (many majors do not offer it; it seems to be offered only in software engineering, so it may require self-study). It is another breakthrough for programming ability, and I hope students who finish this course will go on to study algorithms as well. Recommended textbook: Li Chunbao, *Algorithm Design and Analysis (2nd Edition)*, Tsinghua University Press.

Finally, a recap of the previous data structure experiments!

Experiment 1: Application of linked lists (city information)

Experiment 2: Application of stacks (purchase information)

Experiment 3: Binary tree traversal algorithms (recursive and non-recursive)

Experiment 4: Huffman coding

Experiment 5: Graph traversal (adjacency matrix and adjacency list)

Experiment 6: AOE critical path

Experiment 7: Search application (name lookup)

Experiment 8: Search algorithms (balanced binary tree + hash)

This data structure experiment covers seven sorting algorithms. I recommend quick sort and heap sort; these are also among the algorithms behind the C++ STL `sort` function, which is typically an introsort (quicksort that falls back to heap sort on deep recursion and insertion sort on small ranges). Below is a performance comparison of the sorting methods. So far six of the algorithms are written up; radix sort will be added later. Let me slow down a bit, I have the CET-6 exam tomorrow!
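For comparison, the STL does all of this in one call. A minimal sketch (the helper name `stlSort` is my own, just for illustration):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// sort a vector of keys ascending with the STL
vector<int> stlSort(vector<int> v) {
    sort(v.begin(), v.end()); // introsort: quicksort + heap sort + insertion sort
    return v;
}
```

Writing the algorithms by hand, as below, is still the best way to understand what such a library call is doing internally.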


Sorting method | Average time complexity | Auxiliary space
---|---|---
Simple sorts (insertion, selection, bubble) | O(n²) | O(1)
Quick sort | O(n log n) | O(log n)
Heap sort | O(n log n) | O(1)
Merge sort | O(n log n) | O(n)
Radix sort | O(d(n + rd)) | O(rd)
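Radix sort is the one still pending. As a placeholder, here is a minimal hedged sketch of LSD radix sort for non-negative decimal integer keys, matching the O(d(n + rd)) row with r = 10 buckets and d digit passes (the 1-indexed `RecordList` convention follows the code below):

```cpp
#include <cassert>
#include <vector>
using namespace std;

struct RecordList { int key; };

// LSD radix sort on non-negative integer keys stored in r[1..n].
// Each of the d passes is a stable counting sort on one decimal digit.
void RadixSort(RecordList r[], int n, int d) {
    vector<RecordList> buf(n + 1);
    int exp = 1;                                   // 1, 10, 100, ... selects the current digit
    for (int pass = 0; pass < d; pass++, exp *= 10) {
        int cnt[10] = {0};
        for (int i = 1; i <= n; i++) cnt[(r[i].key / exp) % 10]++;
        for (int b = 1; b < 10; b++) cnt[b] += cnt[b - 1]; // prefix sums = last position per bucket
        for (int i = n; i >= 1; i--)               // backwards scan keeps the sort stable
            buf[cnt[(r[i].key / exp) % 10]--] = r[i];
        for (int i = 1; i <= n; i++) r[i] = buf[i];
    }
}
```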

First, the record structure shared by all the sorts:

```cpp
typedef struct {
    int key; // sort key; a real record could carry more fields
} RecordList;
```

The code below is commented in detail, so the prose explanations are kept as brief as possible!

Your likes, favorites, and follows are the biggest motivation for my writing!

# Direct insertion sort

Direct insertion sort works like this: starting from the second element, compare each element against the already-sorted prefix and shift it into its proper position.

```cpp
void InsertSort(RecordList r[], int n) {
    for (int i = 2; i <= n; i++) {    // start at 2: a single element (i = 1) is already sorted
        count++;                      // count each pass
        r[0] = r[i];                  // sentinel: save the element to insert in r[0]
        int j = i - 1;                // j scans the sorted prefix r[1..i-1]
        while (r[0].key < r[j].key) { // sort ascending
            r[j + 1] = r[j];          // shift larger elements right; the sentinel guarantees termination
            j--;
        }
        r[j + 1] = r[0];              // drop the saved element into position j+1
        // print the array after each pass
        cout << "Pass " << count << ": ";
        for (int k = 1; k <= n; k++) cout << r[k].key << " ";
        cout << endl;
    }
}
```

# Bubble sort

Bubble sort is very simple: each pass compares adjacent elements and swaps them when out of order, so the largest remaining element "bubbles" to the end of the unsorted part.

```cpp
void BubbleSort(RecordList r[], int n) {
    bool change = true;                    // flag to skip useless passes once sorted
    for (int i = 1; i <= n - 1 && change; i++) {
        count++;
        change = false;                    // stays false if no swap happens, ending the loop early
        for (int j = 1; j <= n - i; j++) { // the last i-1 elements are already in place
            if (r[j].key > r[j + 1].key) {
                swap(r[j], r[j + 1]);
                change = true;
            }
        }
        // print the array after each pass
        cout << "Pass " << count << ": ";
        for (int k = 1; k <= n; k++) cout << r[k].key << " ";
        cout << endl;
    }
}
```

# Quick sort

How does quick sort work? First pick a pivot, usually the first element. Scan from the right for an element smaller than the pivot and move it to the left side; then scan from the left for an element larger than the pivot and move it to the right side; alternate until the two pointers meet in the middle. The pivot is then in its final position, and the two halves are sorted recursively by divide and conquer.

```cpp
// one partition pass of quick sort
int QKPass(RecordList r[], int low, int high) {
    count++;
    // partition r[low..high] around a pivot and return the pivot's position
    RecordList x = r[low]; // pivot: smaller keys end up on its left, larger on its right
    while (low < high) {
        // scan from the right for a key smaller than the pivot
        while (low < high && r[high].key >= x.key) high--;
        // the low < high check repeats because each move may close the gap
        if (low < high) { r[low] = r[high]; low++; }
        // scan from the left for a key larger than the pivot
        while (low < high && r[low].key < x.key) low++;
        if (low < high) { r[high] = r[low]; high--; }
    }
    r[low] = x; // the pivot lands in its final position
    // print the array after each pass
    cout << "Pass " << count << ": ";
    for (int k = 1; k <= N; k++) cout << r[k].key << " ";
    cout << endl;
    return low;
}

void QuickSort(RecordList r[], int low, int high) {
    if (low < high) {
        int pos = QKPass(r, low, high);
        QuickSort(r, low, pos - 1);  // divide and conquer on both halves
        QuickSort(r, pos + 1, high);
    }
}
```

# Selection sort

How does selection sort work? In short, each pass selects the smallest element of the unsorted part and swaps it into position i. The idea resembles bubble sort, but bubble sort repeatedly swaps adjacent elements, while selection sort makes at most one swap per pass.

```cpp
void SelectSort(RecordList r[], int n) {
    for (int i = 1; i <= n - 1; i++) {
        // unlike insertion sort, i starts at 1: position 1 also needs the minimum chosen for it
        count++;
        int k = i;                         // k tracks the position of the current minimum
        for (int j = i + 1; j <= n; j++) { // r[1..i-1] is sorted, so scan only r[i+1..n]
            if (r[j].key < r[k].key) {
                k = j;                     // remember the smaller element's position
            }
        }
        if (k != i) {
            swap(r[i], r[k]);              // at most one swap per pass
        }
        // print the array after each pass
        cout << "Pass " << count << ": ";
        for (int t = 1; t <= N; t++) cout << r[t].key << " ";
        cout << endl;
    }
}
```

# Heap sort

How does heap sort work? You first need the basics of binary trees, especially the level-order array representation: node k has children 2k and 2k+1. Build a max-heap by sifting down each internal node from right to left; then repeatedly swap the root (the maximum) with the last element of the heap, shrink the heap, and sift the new root down.

```cpp
// sift-down: restore the heap property for the subtree rooted at k, with heap bottom m
void sift(RecordList r[], int k, int m) {
    // array-stored complete binary tree: r[k] is the root, r[2k] the left child, r[2k+1] the right
    RecordList t = r[k];          // save the root
    int x = r[k].key;
    int i = k;                    // i follows the hole left by the root
    int j = 2 * i;                // start at the left child
    bool finished = false;        // flag, like bubble sort's, to avoid redundant loops
    while (j <= m && !finished) { // m is the bottom of the heap
        // if both children exist, move j to the larger one
        if (j < m && r[j].key < r[j + 1].key) j++;
        if (x >= r[j].key) {
            finished = true;      // parent already >= larger child: heap property holds
        } else {
            r[i] = r[j];          // move the child up and descend
            i = j;
            j = 2 * i;
        }
    }
    r[i] = t;                     // place the saved root at its final position
}

// build a max-heap
void crt_heap(RecordList r[], int n) {
    for (int i = n / 2; i >= 1; --i) { // n/2 is the last internal node; sift each one down
        sift(r, i, n);
    }
}

void HeapSort(RecordList r[], int n) {
    crt_heap(r, n);                  // the build step does the preliminary work
    for (int i = n; i >= 2; i--) {   // swap the max (root) to the end, shrink the heap, re-sift
        count++;
        RecordList b = r[1];
        r[1] = r[i];
        r[i] = b;
        sift(r, 1, i - 1);
        // print the array after each pass
        cout << "Pass " << count << ": ";
        for (int k = 1; k <= N; k++) cout << r[k].key << " ";
        cout << endl;
    }
}
```

# Merge sort

How does merge sort work? The repeated halving may remind you of Shell sort's shrinking increments, but the essence here is to split from the middle, sort each half recursively, and merge the sorted halves back together.

```cpp
// merge the sorted runs r1[low..mid] and r1[mid+1..high] into r[low..high]
void Merge(RecordList r1[], int low, int mid, int high, RecordList r[]) {
    count++;
    int i = low, j = mid + 1, k = low;    // k writes into r[]
    while ((i <= mid) && (j <= high)) {   // take the smaller head of the two runs
        if (r1[i].key <= r1[j].key) {
            r[k] = r1[i];
            i++;
        } else {
            r[k] = r1[j];
            j++;
        }
        k++;
    }
    while (i <= mid)  { r[k] = r1[i]; k++; i++; } // copy whichever run has elements left
    while (j <= high) { r[k] = r1[j]; k++; j++; }
}

void MergeSort(RecordList r1[], int low, int high, RecordList r[]) {
    // sorts r1[low..high] into r[low..high]; r2 is the auxiliary space
    count++;
    if (low == high) {
        r[low] = r1[low];
        return;
    }
    RecordList *r2 = new RecordList[high + 1]; // indexed low..high, so allocate high+1 slots
    int mid = (low + high) / 2;
    MergeSort(r1, low, mid, r2);
    MergeSort(r1, mid + 1, high, r2);
    Merge(r2, low, mid, high, r);
    delete[] r2;                               // delete[] matches new[]
}
```

# Summary

The sorts above are the most basic algorithms. Of course, each can be optimized further, and I may post optimized versions when I have time. These algorithms are fundamental skills for data structures: easy to understand, but still tricky to implement, especially quick sort, merge sort, and heap sort. For understanding sorting, I think the key is to track what the pointers (positions) are doing, and to be clear about when elements are shifted versus swapped.
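To round things off, here is a minimal self-contained sketch of a harness for exercising the sorts, shown with insertion sort only. The per-pass printing is dropped for brevity, and the global pass counter is named `count_` here because a file-scope `count` collides with `std::count` under `using namespace std`; the helper `runSort` is my own addition for illustration.

```cpp
#include <cassert>
#include <iostream>
using namespace std;

typedef struct { int key; } RecordList;

int count_ = 0; // pass counter (the article's global `count`, renamed to avoid std::count)

// insertion sort as in the article, minus the per-pass printing
void InsertSort(RecordList r[], int n) {
    for (int i = 2; i <= n; i++) {
        count_++;
        r[0] = r[i];                  // sentinel
        int j = i - 1;
        while (r[0].key < r[j].key) { // shift larger elements right
            r[j + 1] = r[j];
            j--;
        }
        r[j + 1] = r[0];
    }
}

// fill r[1..n] from a plain 0-indexed array and sort it
void runSort(RecordList r[], const int data[], int n) {
    for (int i = 1; i <= n; i++) r[i].key = data[i - 1];
    InsertSort(r, n);
}
```

The same harness can drive the other sorts by swapping out the call to `InsertSort` (remembering that `QuickSort` and `MergeSort` take `low`/`high` bounds rather than a length).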