Section 13.7 Key Concepts

  1. Algorithms are systematic ways of solving certain problems.
  2. Two categories of algorithms include searching for an item and sorting a linear sequence of values.
  3. Algorithms for searching include sequential search and binary search.
  4. Sequential search looks for an item in a linear sequence by testing the elements one at a time, in order, until the element is found or all elements of the sequence have been tested.
  5. Binary search must start with a sorted sequence. It repeatedly halves the range of elements to search by comparing the middle element with the item sought and eliminating one of the two halves. This repeats until the item is found or the search range reduces to 0.
  6. Three algorithms for sorting include selection sort, insertion sort, and merge sort.
  7. Selection sort repeatedly selects the smallest (or largest) remaining element from an unsorted partition of a linear sequence and moves it to the end of a sorted partition.
  8. Insertion sort picks elements from an unsorted partition of a linear sequence and places them at the correct position in a sorted partition.
  9. Merge sort works by recursively dividing a linear sequence into smaller subsequences, sorting each subsequence and merging sorted subsequences together to form a final sorted sequence.
  10. The cost of an algorithm may be determined by a mathematical analysis called complexity analysis. This analysis results in a mathematical expression that describes how cost grows with respect to some input, such as the size of the data structure being processed (\(N\)).
  11. Complexity analysis typically looks at cost in terms of the time it takes to complete the execution of an algorithm (time complexity) or the amount of memory required (space complexity).
  12. Complexity classes categorize growth rate as a function of the problem size (\(N\)). Common complexity classes are listed in Table 13.1.2. Knowing the ordering of these classes makes it possible to compare algorithms.
  13. Complexity analysis typically derives one or more of three measures: worst-case complexity (\(O(…)\) notation), best-case complexity (\(\Omega(…)\) notation), and average-case complexity (\(\Theta(…)\) notation). Worst-case complexity tends to be the most interesting of the three because it reports a theoretical upper bound on how resource requirements grow with problem size.
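The sequential search described in concept 4 can be sketched in Python as follows (a minimal sketch; the function name and the -1 "not found" sentinel are illustrative choices, not from the text):

```python
def sequential_search(seq, target):
    """Return the index of target in seq, or -1 if it is absent."""
    for i, value in enumerate(seq):
        # Test each element one at a time, in order.
        if value == target:
            return i
    # All elements were tested without finding target.
    return -1
```

Because every element may need to be examined, this search works on unsorted sequences but its cost grows linearly with the length of the sequence.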
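The binary search of concept 5 (repeatedly halving the search range of a sorted sequence) can be sketched as (again, the name and -1 sentinel are illustrative):

```python
def binary_search(sorted_seq, target):
    """Return the index of target in sorted_seq, or -1 if it is absent.

    Requires sorted_seq to be in ascending order.
    """
    low, high = 0, len(sorted_seq) - 1
    while low <= high:                    # search range not yet reduced to 0
        mid = (low + high) // 2
        if sorted_seq[mid] == target:
            return mid
        elif sorted_seq[mid] < target:
            low = mid + 1                 # eliminate the lower half
        else:
            high = mid - 1                # eliminate the upper half
    return -1
```

Note the precondition from the text: the sequence must already be sorted, or the halving step may eliminate the half that contains the item.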
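Selection sort (concept 7) can be sketched as follows, here sorting in place in ascending order (an illustrative choice; descending order would select the largest remaining element instead):

```python
def selection_sort(seq):
    """Sort seq in place in ascending order and return it."""
    for end in range(len(seq)):
        # seq[:end] is the sorted partition; find the smallest
        # element remaining in the unsorted partition seq[end:].
        smallest = end
        for i in range(end + 1, len(seq)):
            if seq[i] < seq[smallest]:
                smallest = i
        # Move it to the end of the sorted partition.
        seq[end], seq[smallest] = seq[smallest], seq[end]
    return seq
```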
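Insertion sort (concept 8) can be sketched in the same style; here the sorted partition grows from the left, and each picked element is shifted into its correct position:

```python
def insertion_sort(seq):
    """Sort seq in place in ascending order and return it."""
    for i in range(1, len(seq)):
        item = seq[i]          # next element from the unsorted partition
        j = i - 1
        # Shift larger sorted elements right to open the correct slot.
        while j >= 0 and seq[j] > item:
            seq[j + 1] = seq[j]
            j -= 1
        seq[j + 1] = item      # place item at its correct position
    return seq
```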
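Merge sort (concept 9) can be sketched recursively: divide, sort each half, then merge the two sorted halves (this version returns a new list rather than sorting in place, an illustrative choice):

```python
def merge_sort(seq):
    """Return a new ascending-order list of the elements of seq."""
    if len(seq) <= 1:
        return list(seq)       # a sequence of 0 or 1 elements is sorted
    mid = len(seq) // 2
    left = merge_sort(seq[:mid])    # recursively sort each subsequence
    right = merge_sort(seq[mid:])
    # Merge the two sorted subsequences into one sorted sequence.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])    # one of these is empty; append the rest
    merged.extend(right[j:])
    return merged
```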
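To make the growth rates in concepts 10–12 concrete, the snippet below compares roughly how many element comparisons the two searches need in the worst case: sequential search examines all \(N\) elements, while binary search halves the range each step, needing on the order of \(\log_2 N\) comparisons (the exact counts here are approximations for illustration):

```python
import math

# Approximate worst-case comparison counts for each search.
for n in [8, 1024, 1_000_000]:
    sequential = n                          # every element tested
    binary = math.ceil(math.log2(n)) + 1    # range halved each step
    print(f"N={n:>9}  sequential≈{sequential:>9}  binary≈{binary}")
```

For a million elements, the linear cost grows to a million comparisons while the logarithmic cost stays around twenty, which is why the ordering of complexity classes matters when comparing algorithms.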