Algorithmic Efficiency
Help Questions
A smartwatch app computes Fibonacci numbers for animations; it may request n up to 45, and each computation must finish under 0.1 seconds. Two approaches are given:
Pseudocode (Naive Recursion):
FIB_RECURSIVE(n)
    IF n <= 1
        RETURN n
    RETURN FIB_RECURSIVE(n-1) + FIB_RECURSIVE(n-2)
Time complexity: $O(2^n)$
Pseudocode (Dynamic Programming):
FIB_DP(n)
    IF n <= 1
        RETURN n
    prev <- 0
    curr <- 1
    FOR i <- 2 TO n
        next <- prev + curr
        prev <- curr
        curr <- next
    RETURN curr
Time complexity: $O(n)$
Considering the time complexities mentioned, which algorithm has a lower time complexity for large inputs?
A. Dynamic programming, because $O(n)$ grows far slower than $O(2^n)$ as $n$ increases.
B. They are equivalent, because both compute the same Fibonacci values in the same steps.
C. Naive recursion, because it uses fewer variables and therefore runs in $O(n)$.
D. Naive recursion, because $O(2^n)$ is smaller than $O(n)$ for large $n$.
Explanation
This question tests AP Computer Science Principles on algorithmic efficiency, specifically understanding exponential versus linear time complexity in recursive algorithms. Algorithmic efficiency refers to how effectively an algorithm performs in terms of time and space, often expressed using Big O notation to represent time complexity. In this passage, the comparison between naive recursive Fibonacci and bottom-up dynamic programming highlights how storing and reusing intermediate results dramatically improves efficiency, with naive recursion having exponential O(2ⁿ) complexity while dynamic programming achieves linear O(n) complexity. Choice A is correct because it accurately identifies dynamic programming as more efficient, recognizing that O(n) grows far slower than O(2ⁿ): for n=45, this is the difference between about 45 loop iterations and billions of recursive calls (the loose 2⁴⁵ bound is over 35 trillion). This demonstrates understanding of exponential growth's severity. Choice D is incorrect because it claims O(2ⁿ) is smaller than O(n) for large n, which is backwards: exponential functions grow explosively compared to linear functions. This error often occurs when students misread the notation or don't understand exponential growth. To help students: Calculate actual values for small n to show exponential explosion (e.g., 2¹⁰=1024 vs 10). Trace through recursive calls to visualize repeated subproblems. Watch for: confusion about exponential notation or underestimating how quickly exponential functions grow.
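To make the recursion-tree explosion concrete, the two approaches from the passage can be sketched in Python (an illustrative translation, not AP pseudocode; the call counter is added purely for demonstration):

```python
def fib_recursive(n, counter=None):
    """Naive recursion: recomputes the same subproblems repeatedly (O(2^n))."""
    if counter is not None:
        counter[0] += 1          # count every call to expose the blow-up
    if n <= 1:
        return n
    return fib_recursive(n - 1, counter) + fib_recursive(n - 2, counter)

def fib_dp(n):
    """Bottom-up dynamic programming: one pass over 2..n (O(n))."""
    if n <= 1:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr

calls = [0]
assert fib_recursive(20, calls) == fib_dp(20) == 6765
print(calls[0])   # over 20,000 calls just for n = 20; fib_dp(20) loops 19 times
```

Even at n = 20 the naive version makes tens of thousands of calls, which is why it cannot meet a 0.1-second budget at n = 45 while the loop version trivially can.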
A library kiosk searches for a book ID in a sortedList with up to 10,000,000 IDs; each query must finish in under 50 ms. Two search methods are considered:
Pseudocode (Linear Search):
procedure linearSearch(sortedList, targetID)
    for index <- 1 to length(sortedList)
        if sortedList[index] = targetID
            return index
    return -1
Time complexity: $O(n)$.
Pseudocode (Binary Search):
procedure binarySearch(sortedList, targetID)
    low <- 1
    high <- length(sortedList)
    while low <= high
        mid <- FLOOR((low + high) / 2)
        if sortedList[mid] = targetID
            return mid
        else if sortedList[mid] < targetID
            low <- mid + 1
        else
            high <- mid - 1
    return -1
Time complexity: $O(\log n)$.
Based on the algorithms described, which algorithm has a lower time complexity for large inputs?
A. Linear search, because it checks items in order and avoids dividing the range.
B. Binary search, because $O(\log n)$ grows slower than $O(n)$ as $n$ increases.
C. Binary search, because it works best even when the list is not sorted.
D. Linear search, because $O(n)$ is the same as $O(\log n)$ for large lists.
Explanation
This question tests AP Computer Science Principles on algorithmic efficiency, specifically comparing search algorithm time complexities on sorted data. Algorithmic efficiency refers to how effectively an algorithm performs in terms of time and space, often expressed using Big O notation to represent time complexity. In this passage, the comparison between linear search and binary search on a sorted list highlights how binary search's O(log n) time complexity provides a massive advantage over linear search's O(n) when searching through up to 10,000,000 IDs. Choice B is correct because it accurately identifies binary search as more efficient for large inputs, due to its O(log n) time complexity growing much slower than O(n) as n increases. This demonstrates understanding that logarithmic growth is dramatically better than linear growth for large datasets. Choice C is incorrect because it claims binary search works best even on unsorted lists, which is false: binary search requires a sorted list to function correctly. This error often occurs when students memorize algorithm names without understanding their prerequisites. To help students: Calculate actual comparisons needed for both algorithms with n=10,000,000 (linear: up to 10 million; binary: at most 24). Emphasize that binary search only works on sorted data. Watch for: students who forget the sorted list requirement or who don't grasp the dramatic difference between O(n) and O(log n) for large n.
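A comparison-counting sketch in Python makes the gap visible (the list size is scaled down to 1,000,000 so it runs quickly; the 10,000,000 case behaves the same way):

```python
def linear_search(sorted_list, target):
    """Scan left to right, counting comparisons (worst case: every element)."""
    comparisons = 0
    for index, value in enumerate(sorted_list):
        comparisons += 1
        if value == target:
            return index, comparisons
    return -1, comparisons

def binary_search(sorted_list, target):
    """Halve the remaining range each step, counting comparisons."""
    low, high = 0, len(sorted_list) - 1
    comparisons = 0
    while low <= high:
        mid = (low + high) // 2   # integer division keeps mid a valid index
        comparisons += 1
        if sorted_list[mid] == target:
            return mid, comparisons
        elif sorted_list[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, comparisons

ids = list(range(1_000_000))
_, linear_steps = linear_search(ids, 999_999)   # worst case: last ID
_, binary_steps = binary_search(ids, 999_999)
print(linear_steps, binary_steps)               # 1000000 vs 20
```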
A club leader sorts nameList for printing. The list is usually small (around 20 names), but can reach 5,000. Two algorithms are available:
Pseudocode (Bubble Sort):
procedure bubbleSort(nameList)
    n <- length(nameList)
    for i <- 1 to n - 1
        for j <- 1 to n - i
            if nameList[j] > nameList[j + 1]
                swap(nameList[j], nameList[j + 1])
Time complexity: $O(n^2)$.
Pseudocode (Merge Sort):
procedure mergeSort(nameList)
    if length(nameList) <= 1
        return nameList
    mid <- FLOOR(length(nameList) / 2)
    left <- mergeSort(nameList[1..mid])
    right <- mergeSort(nameList[mid+1..end])
    return merge(left, right)
Time complexity: $O(n \log n)$.
Considering the time complexities mentioned, under what conditions does mergeSort outperform bubbleSort?
A. When the list is unsorted, because merge sort becomes $O(n^2)$ in the worst case.
B. When nameList is large, because $O(n \log n)$ scales better than $O(n^2)$.
C. When input size is irrelevant, because Big O ignores $n$ and focuses only on swaps.
D. When nameList is tiny, because $O(n^2)$ grows slower than $O(n \log n)$.
Explanation
This question tests AP Computer Science Principles on algorithmic efficiency, specifically understanding when different time complexities matter in practice. Algorithmic efficiency refers to how effectively an algorithm performs in terms of time and space, often expressed using Big O notation to represent time complexity. In this passage, the comparison between bubble sort (O(n²)) and merge sort (O(n log n)) highlights how the advantage of better time complexity becomes significant as input size grows, particularly when the list can reach 5,000 names. Choice B is correct because it accurately identifies that merge sort outperforms bubble sort when nameList is large, due to O(n log n) scaling better than O(n²) as n increases. This demonstrates understanding of how algorithmic efficiency impacts real-world performance. Choice D is incorrect because it reverses the relationship, claiming O(n²) grows slower than O(n log n), which is mathematically false. This error often occurs when students don't understand how to compare growth rates of different functions. To help students: Create tables showing actual values of n² vs n log n for various n values. Discuss that for small inputs (like 20 names), the difference may be negligible, but for larger inputs (like 5,000), it becomes significant. Watch for: students who think 'smaller' Big O always means faster regardless of input size, or who don't understand that both algorithms work correctly but differ in speed.
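The suggested table of n² versus n·log₂(n) can be generated in a few lines of Python (sizes taken from the scenario; math.log2 stands in for an idealized comparison count):

```python
import math

for n in (20, 5_000):
    quadratic = n * n
    linearithmic = n * math.log2(n)
    print(f"n={n}: n^2 = {quadratic}, n*log2(n) = {linearithmic:.0f}")
```

For n = 20 the values are 400 versus about 86, a negligible gap; for n = 5,000 they are 25,000,000 versus about 61,000, which is where merge sort's advantage shows.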
A budgeting app computes Fibonacci-based projections; users can set n up to 50, and calculations must complete within 0.2 seconds on a phone. Two versions are shown:
Pseudocode (Naive Recursion):
FIB_RECURSIVE(n)
    IF n <= 1
        RETURN n
    RETURN FIB_RECURSIVE(n-1) + FIB_RECURSIVE(n-2)
Time complexity: $O(2^n)$
Pseudocode (Dynamic Programming):
FIB_DP(n)
    IF n <= 1
        RETURN n
    prev <- 0
    curr <- 1
    FOR i <- 2 TO n
        next <- prev + curr
        prev <- curr
        curr <- next
    RETURN curr
Time complexity: $O(n)$
Based on the algorithms described, under what conditions does the dynamic programming approach outperform naive recursion?
A. When n is small, because $O(2^n)$ decreases as n increases.
B. Only when n is even, because odd values force extra recursive calls.
C. When n is large, because it avoids repeated subproblems and runs in $O(n)$.
D. Never, because loops are slower than recursion and make the runtime $O(n^2)$.
Explanation
This question tests AP Computer Science Principles on algorithmic efficiency, specifically understanding when dynamic programming outperforms naive recursion. Algorithmic efficiency refers to how effectively an algorithm performs in terms of time and space, often expressed using Big O notation to represent time complexity. In this passage, the comparison between naive recursive Fibonacci and dynamic programming shows how avoiding repeated calculations transforms exponential O(2ⁿ) complexity into linear O(n) complexity. Choice C is correct because it accurately identifies that dynamic programming outperforms naive recursion when n is large, specifically because it avoids recalculating the same Fibonacci values repeatedly and achieves O(n) time: for n=50, this means about 50 loop iterations versus an exponential number of recursive calls (the loose 2⁵⁰ bound is roughly 1.1 quadrillion). This demonstrates understanding of the benefits of reusing results. Choice A is incorrect because it claims O(2ⁿ) decreases as n increases, which is backwards: exponential functions increase extremely rapidly with n. This error often occurs when students misread exponential notation or don't understand that larger exponents mean faster growth, not slower. To help students: Draw the recursion tree for small n values to visualize repeated calculations. Show how dynamic programming stores and reuses results. Watch for: confusion about exponential growth direction or underestimating the impact of avoiding repeated work.
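The passage's FIB_DP is bottom-up, but the same "store and reuse results" idea can be shown top-down with memoization. Here is a sketch using Python's standard-library lru_cache (an alternative formulation for teaching, not the passage's pseudocode):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    # Each distinct n is computed once; repeated calls are cache hits,
    # so only n + 1 distinct subproblems are ever evaluated.
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(50))   # 12586269025, computed well within a 0.2-second budget
```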
A music app searches for a song title in a sortedList of titles. The list can be as small as 30 titles or as large as 3,000,000, and the app has a strict time limit per search. Two algorithms are available:
Pseudocode (Linear Search):
procedure linearSearch(sortedList, targetTitle)
    for index <- 1 to length(sortedList)
        if sortedList[index] = targetTitle
            return index
    return -1
Time complexity: $O(n)$.
Pseudocode (Binary Search):
procedure binarySearch(sortedList, targetTitle)
    low <- 1
    high <- length(sortedList)
    while low <= high
        mid <- FLOOR((low + high) / 2)
        if sortedList[mid] = targetTitle
            return mid
        else if sortedList[mid] < targetTitle
            low <- mid + 1
        else
            high <- mid - 1
    return -1
Time complexity: $O(\log n)$.
Considering the time complexities mentioned, which is more efficient for small datasets?
A. Linear search, because $O(n)$ becomes $O(1)$ whenever the target is near the end.
B. Binary search, because $O(\log n)$ is always smaller than $O(n)$ for any $n$.
C. Linear search, because small $n$ can make the difference between $O(n)$ and $O(\log n)$ minor.
D. Binary search, because it runs in $O(n)$ time when the list is already sorted.
Explanation
This question tests AP Computer Science Principles on algorithmic efficiency, specifically understanding when simpler algorithms might be preferable despite worse Big O complexity. Algorithmic efficiency refers to how effectively an algorithm performs in terms of time and space, often expressed using Big O notation to represent time complexity. In this passage, the comparison between linear search (O(n)) and binary search (O(log n)) on small datasets (as small as 30 titles) highlights that Big O notation describes growth rates, not absolute performance for all input sizes. Choice C is correct because it accurately identifies that for small datasets, the difference between O(n) and O(log n) can be minor, and linear search's simpler implementation might even perform better due to lower overhead. This demonstrates understanding that Big O analysis is most relevant for large inputs. Choice B is incorrect because it claims O(log n) is always smaller than O(n) for any n, ignoring that for small values, implementation overhead and constant factors matter. This error often occurs when students treat Big O as an absolute measure rather than an asymptotic one. To help students: Show actual runtime comparisons for n=30 where linear search might complete in microseconds. Discuss how binary search's overhead (calculating midpoints, maintaining bounds) can outweigh benefits for small n. Watch for: students who think better Big O always means faster execution regardless of input size or implementation details.
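Worst-case comparison counts at the small end of the range can be checked directly (a sketch; ceil(log₂(n+1)) is the standard worst case for binary search on n elements):

```python
import math

n = 30
linear_worst = n                              # scan every title
binary_worst = math.ceil(math.log2(n + 1))    # halve the range each step
print(linear_worst, binary_worst)             # 30 vs 5
```

Both counts are tiny at n = 30, so constant factors and implementation overhead can dominate; the asymptotic gap only matters as n approaches the 3,000,000 end of the range.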
A school app must sort scoreList of up to 200,000 integers in under 2 seconds. Two options are tested:
Pseudocode (Bubble Sort):
procedure bubbleSort(scoreList)
    n <- length(scoreList)
    for i <- 1 to n - 1
        for j <- 1 to n - i
            if scoreList[j] > scoreList[j + 1]
                swap(scoreList[j], scoreList[j + 1])
Time complexity: $O(n^2)$.
Pseudocode (Merge Sort):
procedure mergeSort(scoreList)
    if length(scoreList) <= 1
        return scoreList
    mid <- FLOOR(length(scoreList) / 2)
    left <- mergeSort(scoreList[1..mid])
    right <- mergeSort(scoreList[mid+1..end])
    return merge(left, right)
Time complexity: $O(n \log n)$.
Based on the algorithms described, which algorithm has a lower time complexity for large inputs?
A. Merge sort, because it is always faster regardless of input size or constraints.
B. Merge sort, because $O(n \log n)$ grows slower than $O(n^2)$ for large $n$.
C. Bubble sort, because $O(n^2)$ is smaller than $O(n \log n)$ for large $n$.
D. Bubble sort, because nested loops reduce comparisons as the list grows.
Explanation
This question tests AP Computer Science Principles on algorithmic efficiency, specifically understanding and comparing algorithm time complexities. Algorithmic efficiency refers to how effectively an algorithm performs in terms of time and space, often expressed using Big O notation to represent time complexity. In this passage, the comparison between bubble sort and merge sort highlights how input size affects their time complexities, with bubble sort having a time complexity of O(n²) while merge sort has a time complexity of O(n log n). Choice B is correct because it accurately identifies merge sort as more efficient for large inputs, due to its lower time complexity of O(n log n), which grows much slower than O(n²) as n increases to 200,000. This demonstrates understanding of efficiency analysis. Choice C is incorrect because it misunderstands the relationship between O(n²) and O(n log n), incorrectly claiming that O(n²) is smaller for large n. This error often occurs when students misinterpret Big O notation or fail to understand how different functions grow. To help students: Use graphs to visualize how n², n log n, and other functions grow as n increases. Emphasize that for large values like 200,000, the difference between O(n²) and O(n log n) becomes dramatic. Watch for: students who memorize algorithm names without understanding their complexities or who confuse the meaning of 'smaller' in Big O context.
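A back-of-envelope feasibility check against the 2-second limit can also help. The 10⁸ simple operations per second figure below is an assumed machine speed for illustration, not something stated in the scenario:

```python
import math

n = 200_000
ops_per_second = 1e8          # assumption: ~10^8 simple ops/second
bubble_seconds = (n * n) / ops_per_second            # 400 seconds
merge_seconds = (n * math.log2(n)) / ops_per_second  # ~0.035 seconds
print(bubble_seconds, merge_seconds)
```

Under that assumption bubble sort misses the 2-second budget by a factor of 200, while merge sort finishes with a wide margin.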
A data team must sort sensorReadings each minute. Input size can spike from 1,000 to 100,000 readings, and each minute’s sort must finish before the next batch arrives. Two algorithms are compared:
Pseudocode (Bubble Sort):
procedure bubbleSort(sensorReadings)
    n <- length(sensorReadings)
    for i <- 1 to n - 1
        for j <- 1 to n - i
            if sensorReadings[j] > sensorReadings[j + 1]
                swap(sensorReadings[j], sensorReadings[j + 1])
Time complexity: $O(n^2)$.
Pseudocode (Merge Sort):
procedure mergeSort(sensorReadings)
    if length(sensorReadings) <= 1
        return sensorReadings
    mid <- FLOOR(length(sensorReadings) / 2)
    left <- mergeSort(sensorReadings[1..mid])
    right <- mergeSort(sensorReadings[mid+1..end])
    return merge(left, right)
Time complexity: $O(n \log n)$.
Considering the time complexities mentioned, which algorithm has a lower time complexity for large inputs?
A. Bubble sort, because Big O ignores input limits and focuses on constant-time swaps.
B. Bubble sort, because swapping adjacent values makes it $O(\log n)$ in practice.
C. Merge sort, because $O(n \log n)$ grows slower than $O(n^2)$ as input increases.
D. Merge sort, because $O(n \log n)$ is the same as $O(n^2)$ when $n$ is large.
Explanation
This question tests AP Computer Science Principles on algorithmic efficiency, specifically comparing quadratic and linearithmic sorting algorithms. Algorithmic efficiency refers to how effectively an algorithm performs in terms of time and space, often expressed using Big O notation to represent time complexity. In this passage, the comparison between bubble sort (O(n²)) and merge sort (O(n log n)) for sorting sensor readings that can spike to 100,000 items highlights how algorithm choice becomes critical for large, time-sensitive datasets. Choice C is correct because it accurately identifies merge sort as more efficient for large inputs, with O(n log n) growing much slower than O(n²): for n=100,000, this means roughly 1.7 million operations versus 10 billion operations. This demonstrates understanding of how different growth rates impact real-world performance. Choice B is incorrect because it claims bubble sort's adjacent swapping makes it O(log n), which is false: the nested loops clearly show O(n²) behavior regardless of how swaps are performed. This error often occurs when students focus on implementation details rather than loop structure. To help students: Calculate actual operation counts for n=100,000 to show the dramatic difference. Explain that meeting time constraints often requires choosing algorithms with better asymptotic complexity. Watch for: students who think implementation tricks can change fundamental time complexity or who underestimate the impact of quadratic growth on large datasets.
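The "1.7 million versus 10 billion" figures come straight from the complexity formulas; a short Python check (idealized counts, ignoring constant factors):

```python
import math

n = 100_000
merge_ops = n * math.log2(n)   # about 1.7 million
bubble_ops = n * n             # 10 billion
print(f"{merge_ops:.2e} vs {bubble_ops:.2e}")
```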
A game server repeatedly searches a sortedList of player scores (up to 1,000,000 entries) and must respond within 2 milliseconds. Two search methods are provided:
Pseudocode (Linear Search):
LINEAR_SEARCH(sortedList, target)
    FOR i <- 1 TO LENGTH(sortedList)
        IF sortedList[i] = target
            RETURN i
    RETURN -1
Time complexity: $O(n)$
Pseudocode (Binary Search):
BINARY_SEARCH(sortedList, target)
    low <- 1
    high <- LENGTH(sortedList)
    WHILE low <= high
        mid <- FLOOR((low + high) / 2)
        IF sortedList[mid] = target
            RETURN mid
        ELSE IF sortedList[mid] < target
            low <- mid + 1
        ELSE
            high <- mid - 1
    RETURN -1
Time complexity: $O(\log n)$
Based on the algorithms described, what is the primary advantage of using binary search over linear search?
A. It guarantees a match exists, because it checks the middle element first.
B. It has $O(n)$ time but uses fewer comparisons than linear search for large $n$.
C. It works best on unsorted lists, because halving ignores element order.
D. It reduces checks by halving the remaining range each step, giving $O(\log n)$ time.
Explanation
This question tests AP Computer Science Principles on algorithmic efficiency, specifically understanding the fundamental advantage of binary search over linear search. Algorithmic efficiency refers to how effectively an algorithm performs in terms of time and space, often expressed using Big O notation to represent time complexity. In this passage, the comparison between linear search and binary search on sorted data shows how binary search achieves logarithmic O(log n) complexity by eliminating half the remaining elements with each comparison, while linear search maintains O(n) complexity. Choice D is correct because it accurately identifies the key mechanism: binary search reduces checks by halving the search space at each step, resulting in O(log n) time complexity, which means searching 1,000,000 scores requires only about 20 comparisons versus potentially 1,000,000. This demonstrates understanding of divide-and-conquer efficiency. Choice B is incorrect because it claims binary search has O(n) time complexity, which is false: the entire advantage of binary search is its O(log n) complexity. This error often occurs when students recognize binary search is better but misunderstand why, incorrectly attributing it to fewer comparisons within the same complexity class. To help students: Trace through binary search step-by-step showing the halving process. Calculate log₂ values for various list sizes to show dramatic savings. Watch for: confusion between 'fewer operations' and 'better complexity class' or misunderstanding that binary search requires sorted data.
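Tracing the halving directly shows why about 20 comparisons suffice for 1,000,000 entries; this sketch prints the shrinking range (a 32-element list keeps the output short):

```python
def binary_search_trace(sorted_list, target):
    """Binary search that prints the size of the remaining range each step."""
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2
        print(f"range size {high - low + 1}, checking index {mid}")
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

scores = list(range(32))
binary_search_trace(scores, 31)   # range sizes: 32, 16, 8, 4, 2, 1
```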
A help-desk tool searches a sortedList of ticket numbers that can grow to 2,000,000 entries; each lookup must be fast. Two algorithms are implemented:
Pseudocode (Linear Search):
procedure linearSearch(sortedList, target)
    for index <- 1 to length(sortedList)
        if sortedList[index] = target
            return index
    return -1
Time complexity: $O(n)$.
Pseudocode (Binary Search):
procedure binarySearch(sortedList, target)
    low <- 1
    high <- length(sortedList)
    while low <= high
        mid <- FLOOR((low + high) / 2)
        if sortedList[mid] = target
            return mid
        else if sortedList[mid] < target
            low <- mid + 1
        else
            high <- mid - 1
    return -1
Time complexity: $O(\log n)$.
Based on the algorithms described, under what conditions does binarySearch outperform linearSearch?
A. When the target is missing, because binary search changes to $O(n)$ in the worst case.
B. When the list is large, because linear search becomes $O(\log n)$ after enough queries.
C. When the list is large and sorted, because $O(\log n)$ grows slower than $O(n)$.
D. When the list is unsorted, because binary search does not depend on ordering.
Explanation
This question tests AP Computer Science Principles on algorithmic efficiency, specifically understanding when binary search provides advantages over linear search. Algorithmic efficiency refers to how effectively an algorithm performs in terms of time and space, often expressed using Big O notation to represent time complexity. In this passage, the comparison between linear search (O(n)) and binary search (O(log n)) on a sorted list of up to 2,000,000 ticket numbers highlights how logarithmic algorithms excel on large, sorted datasets. Choice C is correct because it accurately identifies that binary search outperforms linear search when the list is large and sorted, as O(log n) grows much slower than O(n): for 2 million entries, binary search needs at most 21 comparisons versus potentially 2 million. This demonstrates understanding of both algorithm requirements and efficiency benefits. Choice D is incorrect because it claims binary search doesn't depend on ordering, when in fact binary search absolutely requires a sorted list to function correctly. This error often occurs when students memorize algorithm names without understanding their fundamental requirements. To help students: Emphasize that binary search's divide-and-conquer approach only works with sorted data. Show that log₂(2,000,000) ≈ 21, making the efficiency gain dramatic. Watch for: students who forget binary search's sorted list requirement or who don't appreciate the massive difference between O(n) and O(log n) at scale.
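The "at most 21 comparisons" figure is just ceil(log₂(n+1)) for n = 2,000,000, which is easy to verify:

```python
import math

n = 2_000_000
worst_case = math.ceil(math.log2(n + 1))   # worst-case binary search comparisons
print(worst_case, "vs", n)                 # 21 vs 2000000
```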
A music app searches a sortedList of up to 2,000,000 song IDs during playback, with a strict 1 millisecond limit. Two algorithms are used:
Pseudocode (Linear Search):
LINEAR_SEARCH(sortedList, target)
    FOR i <- 1 TO LENGTH(sortedList)
        IF sortedList[i] = target
            RETURN i
    RETURN -1
Time complexity: $O(n)$
Pseudocode (Binary Search):
BINARY_SEARCH(sortedList, target)
    low <- 1
    high <- LENGTH(sortedList)
    WHILE low <= high
        mid <- FLOOR((low + high) / 2)
        IF sortedList[mid] = target
            RETURN mid
        ELSE IF sortedList[mid] < target
            low <- mid + 1
        ELSE
            high <- mid - 1
    RETURN -1
Time complexity: $O(\log n)$
Based on the algorithms described, under what conditions does binary search outperform linear search?
A. When the list is unsorted, because binary search does not rely on element order.
B. When the target is missing, because linear search becomes $O(\log n)$ in that case.
C. When the list is small, because $O(n)$ is always worse than $O(2^n)$.
D. When the list is sorted and large, because $O(\log n)$ grows much slower than $O(n)$.
Explanation
This question tests AP Computer Science Principles on algorithmic efficiency, specifically understanding the conditions required for binary search to outperform linear search. Algorithmic efficiency refers to how effectively an algorithm performs in terms of time and space, often expressed using Big O notation to represent time complexity. In this passage, the comparison shows binary search with O(log n) complexity versus linear search with O(n) complexity, but binary search requires the list to be sorted to function correctly. Choice D is correct because it accurately identifies both conditions for binary search superiority: the list must be sorted (a prerequisite for binary search to work) and large (where the difference between O(log n) and O(n) becomes significant): for 2,000,000 songs, binary search needs about 21 comparisons versus potentially 2,000,000. This demonstrates understanding of both algorithm requirements and scaling. Choice A is incorrect because it claims binary search works on unsorted lists, which is false: binary search's divide-and-conquer approach relies on sorted order to eliminate half the search space. This error often occurs when students focus only on complexity without understanding algorithm prerequisites. To help students: Demonstrate why binary search fails on unsorted data with examples. Show the dramatic difference in comparisons for large sorted lists. Watch for: forgetting that binary search requires sorted input or assuming all search algorithms work on any data.
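The suggested demonstration that binary search fails on unsorted data takes only a few lines of Python (a sketch; the list values are arbitrary):

```python
def binary_search(lst, target):
    """Standard binary search; correct ONLY if lst is sorted."""
    low, high = 0, len(lst) - 1
    while low <= high:
        mid = (low + high) // 2
        if lst[mid] == target:
            return mid
        elif lst[mid] < target:
            low = mid + 1      # assumes everything left of mid is smaller
        else:
            high = mid - 1     # assumes everything right of mid is larger
    return -1

unsorted = [7, 2, 9, 4, 1]
print(binary_search(unsorted, 2))           # -1: the 2 at index 1 is never found
print(binary_search(sorted(unsorted), 2))   # 1: found once the list is sorted
```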