The previous article on binary search ended with a claim that probably raised an eyebrow: you can find the k-th smallest element in an unsorted array in O(n) average time, without sorting. If sorting is O(n log n), how does selection get away with O(n)? The answer is quickselect, and understanding it will permanently change how you think about the divide-and-conquer family.
1. The problem sorting solves that you didn't actually need to solve
When you sort an array to find the k-th smallest element, you're answering a much bigger question than the one you asked. Sorting establishes the complete order of every element. Finding the k-th smallest only requires establishing the relative order of one element.
That's the key insight. You don't need everything in place. You just need to know that the element at position k (counting from 1) has exactly k - 1 elements smaller than it and n - k elements larger. Quickselect exploits this.
2. Partition: the primitive that powers both quicksort and quickselect
Both algorithms are built on the same subroutine: partition. Given an array and a pivot element, partition rearranges the array so that:
- Every element to the left of the pivot is less than or equal to it
- Every element to the right is greater
- The pivot is at its final sorted position
Crucially, and this is the part that enables O(n) selection, the pivot's final index is now known. You don't need to recurse into both halves. You only recurse into the half that contains k.
```typescript
// Partition using the Lomuto scheme
// Moves the pivot to its correct sorted position and returns that index
function partition<T>(
  arr: T[],
  lo: number,
  hi: number,
  compare: (a: T, b: T) => number
): number {
  const pivot = arr[hi];
  let i = lo - 1;
  for (let j = lo; j < hi; j++) {
    if (compare(arr[j], pivot) <= 0) {
      i++;
      [arr[i], arr[j]] = [arr[j], arr[i]];
    }
  }
  [arr[i + 1], arr[hi]] = [arr[hi], arr[i + 1]];
  return i + 1; // pivot is now at its final sorted index
}
```
Notice the comparator parameter: it makes the function generic. The same partition works for numbers, strings, objects sorted by a field, anything.
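For instance, a comparator over an object field is all it takes to partition records. A quick sketch (the Task type and byPriority comparator are illustrative, not part of any library; partition is repeated from above so the snippet runs on its own):

```typescript
interface Task { name: string; priority: number; }

// Same Lomuto partition as above, repeated so this snippet is standalone
function partition<T>(
  arr: T[],
  lo: number,
  hi: number,
  compare: (a: T, b: T) => number
): number {
  const pivot = arr[hi];
  let i = lo - 1;
  for (let j = lo; j < hi; j++) {
    if (compare(arr[j], pivot) <= 0) {
      i++;
      [arr[i], arr[j]] = [arr[j], arr[i]];
    }
  }
  [arr[i + 1], arr[hi]] = [arr[hi], arr[i + 1]];
  return i + 1;
}

// Compare tasks by their priority field
const byPriority = (a: Task, b: Task) => a.priority - b.priority;

const tasks: Task[] = [
  { name: "deploy", priority: 3 },
  { name: "triage", priority: 1 },
  { name: "review", priority: 2 },
];
const idx = partition(tasks, 0, tasks.length - 1, byPriority);
// tasks[idx] is now at its final position in priority order
console.log(idx, tasks[idx].name); // 1 review
```

The comparator is the only part that knows anything about Task; partition itself never changes.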
3. Quickselect: the full implementation
With partition in hand, quickselect is almost trivially simple:
```typescript
// Returns the k-th smallest element (1-indexed) in O(n) average time
// Mutates the input array; pass a copy if the original must be preserved
function quickselect<T>(
  arr: T[],
  k: number,
  compare: (a: T, b: T) => number = defaultCompare
): T {
  if (k < 1 || k > arr.length) {
    throw new RangeError(`k must be between 1 and ${arr.length}, received ${k}`);
  }
  return select(arr, 0, arr.length - 1, k - 1, compare); // convert to 0-indexed
}

function select<T>(
  arr: T[],
  lo: number,
  hi: number,
  targetIdx: number,
  compare: (a: T, b: T) => number
): T {
  if (lo === hi) return arr[lo];
  const pivotIdx = partition(arr, lo, hi, compare);
  if (pivotIdx === targetIdx) {
    return arr[pivotIdx]; // pivot landed exactly at k; done
  } else if (pivotIdx < targetIdx) {
    return select(arr, pivotIdx + 1, hi, targetIdx, compare); // k is in the right half
  } else {
    return select(arr, lo, pivotIdx - 1, targetIdx, compare); // k is in the left half
  }
}

// `any` because TypeScript rejects < and > on an unconstrained generic
// type; callers with custom types should supply their own comparator
function defaultCompare(a: any, b: any): number {
  if (a < b) return -1;
  if (a > b) return 1;
  return 0;
}
```
Walk through what happens on each recursive call. After partitioning, the pivot sits at pivotIdx. You compare that to targetIdx, the 0-indexed position of the element you want. Three outcomes:
- Match: return immediately. Done.
- Pivot too low: k is in the right subarray, discard everything left of the pivot.
- Pivot too high: k is in the left subarray, discard everything right of the pivot.
You never touch the discarded half again. That's the entire source of the efficiency gain.
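To make the walk-through concrete, here's a sketch that traces those three outcomes on a six-element array. It's a condensed, number-only variant of the implementation above (iterative rather than recursive, with a log of each partition step, purely for illustration):

```typescript
// Condensed number-only quickselect that logs each partition,
// so you can watch halves being discarded
function tracedQuickselect(arr: number[], k: number): number {
  let lo = 0;
  let hi = arr.length - 1;
  const target = k - 1; // convert to 0-indexed
  while (lo < hi) {
    // Lomuto partition with arr[hi] as pivot
    const pivot = arr[hi];
    let i = lo - 1;
    for (let j = lo; j < hi; j++) {
      if (arr[j] <= pivot) {
        i++;
        [arr[i], arr[j]] = [arr[j], arr[i]];
      }
    }
    [arr[i + 1], arr[hi]] = [arr[hi], arr[i + 1]];
    const p = i + 1;
    console.log(`pivot ${pivot} landed at index ${p}, window was [${lo}, ${hi}]`);
    if (p === target) return arr[p];
    if (p < target) lo = p + 1; // discard the left half
    else hi = p - 1;            // discard the right half
  }
  return arr[lo];
}

console.log(tracedQuickselect([9, 1, 8, 2, 7, 3], 2)); // 2nd smallest: 2
```

On this input the first partition places pivot 3 at index 2, so the entire right half is discarded in one step; the second partition lands exactly on the target.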
4. Why the average case is O(n) and the worst case is O(n²)
This is where most explanations wave their hands. Let's be precise.
Each call to partition scans every element between lo and hi: a linear pass. The question is how many elements remain after each partition.
In the average case, a random pivot lands somewhere near the middle. The search space roughly halves each time:
n + n/2 + n/4 + n/8 + ... = 2n → O(n)
A geometric series that converges to 2n. Linear.
In the worst case, the pivot is always the minimum or maximum element; the search space shrinks by only one element per call:
n + (n-1) + (n-2) + ... + 1 = n(n+1)/2 → O(n²)
This worst case occurs on already-sorted input when you always pick the last element as pivot: exactly the implementation above. Here's how to defend against it:
```typescript
// Median-of-three pivot selection
// Picks the median of arr[lo], arr[mid], and arr[hi], which makes the
// worst case far less likely on sorted or nearly-sorted input
function medianOfThreePivot<T>(
  arr: T[],
  lo: number,
  hi: number,
  compare: (a: T, b: T) => number
): void {
  const mid = lo + Math.floor((hi - lo) / 2);
  // Sort lo, mid, hi in place so arr[mid] is the median
  if (compare(arr[lo], arr[mid]) > 0) [arr[lo], arr[mid]] = [arr[mid], arr[lo]];
  if (compare(arr[lo], arr[hi]) > 0) [arr[lo], arr[hi]] = [arr[hi], arr[lo]];
  if (compare(arr[mid], arr[hi]) > 0) [arr[mid], arr[hi]] = [arr[hi], arr[mid]];
  // Move the median to hi, where the Lomuto partition expects its pivot
  [arr[mid], arr[hi]] = [arr[hi], arr[mid]];
}
```
An alternative is random pivot selection: swap a random element into the hi position before partitioning. This gives O(n) expected time regardless of input order, which is the safer default for production use.
```typescript
function partitionWithRandomPivot<T>(
  arr: T[],
  lo: number,
  hi: number,
  compare: (a: T, b: T) => number
): number {
  const pivotIdx = lo + Math.floor(Math.random() * (hi - lo + 1));
  [arr[pivotIdx], arr[hi]] = [arr[hi], arr[pivotIdx]]; // move pivot to hi
  return partition(arr, lo, hi, compare);
}
```
One extra swap before partitioning. The rest of the algorithm is identical.
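To see the asymptotic difference concretely, here's a sketch that counts comparator calls on sorted input, the naive worst case. The select variant below folds the random-pivot swap behind a `randomized` flag purely for instrumentation; it repeats the article's partition so it runs standalone:

```typescript
// Lomuto partition, as in the article
function partition<T>(
  arr: T[],
  lo: number,
  hi: number,
  compare: (a: T, b: T) => number
): number {
  const pivot = arr[hi];
  let i = lo - 1;
  for (let j = lo; j < hi; j++) {
    if (compare(arr[j], pivot) <= 0) {
      i++;
      [arr[i], arr[j]] = [arr[j], arr[i]];
    }
  }
  [arr[i + 1], arr[hi]] = [arr[hi], arr[i + 1]];
  return i + 1;
}

// The article's select, with an optional random-pivot swap before partitioning
function select<T>(
  arr: T[],
  lo: number,
  hi: number,
  targetIdx: number,
  compare: (a: T, b: T) => number,
  randomized: boolean
): T {
  if (lo === hi) return arr[lo];
  if (randomized) {
    const p = lo + Math.floor(Math.random() * (hi - lo + 1));
    [arr[p], arr[hi]] = [arr[hi], arr[p]];
  }
  const pivotIdx = partition(arr, lo, hi, compare);
  if (pivotIdx === targetIdx) return arr[pivotIdx];
  if (pivotIdx < targetIdx) return select(arr, pivotIdx + 1, hi, targetIdx, compare, randomized);
  return select(arr, lo, pivotIdx - 1, targetIdx, compare, randomized);
}

// Find the minimum of [0, 1, ..., n-1] and report how many comparisons it took
function countComparisons(n: number, randomized: boolean): number {
  let count = 0;
  const counting = (a: number, b: number) => { count++; return a - b; };
  const sorted = Array.from({ length: n }, (_, i) => i); // sorted: naive worst case
  select(sorted, 0, n - 1, 0, counting, randomized);
  return count;
}

const naive = countComparisons(500, false);  // exactly 499 + 498 + ... + 1 = 124,750
const random = countComparisons(500, true);  // typically a few thousand at most
console.log({ naive, random });
```

The naive count hits the full triangular sum every time; the randomized count varies run to run but stays in O(n) territory.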
5. Real-world usage: it's more common than you think
The median is a k-th smallest element problem where k = Math.ceil(n / 2). Computing the median via sort is O(n log n). Via quickselect it's O(n) average. For large datasets or streaming analytics pipelines, that difference compounds significantly.
```typescript
// Compute the median of an array in O(n) average time
function median(data: number[]): number {
  if (data.length === 0) throw new RangeError("Cannot compute median of empty array");
  const mid = Math.floor(data.length / 2);
  if (data.length % 2 === 1) {
    // Odd length: the exact middle element (copy, since quickselect mutates)
    return quickselect([...data], mid + 1);
  }
  // Even length: average of the two middle elements
  // Two separate quickselect calls, each O(n), each on its own copy
  const lo = quickselect([...data], mid);
  const hi = quickselect([...data], mid + 1);
  return (lo + hi) / 2;
}
```
Other practical applications: percentile calculations in monitoring systems (p95, p99 latency), top-K recommendations, finding outliers in datasets, and priority scheduling where you need the highest-priority item without maintaining a full sorted structure.
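As a sketch of the percentile use case: a nearest-rank percentile function sits directly on top of quickselect. The `percentile` helper and its nearest-rank convention are an illustration, not a standard API; the condensed quickselect below is the article's algorithm specialized to numbers:

```typescript
// Condensed number-only quickselect (Lomuto partition, 1-indexed k),
// equivalent to the generic implementation earlier in the article
function quickselect(arr: number[], k: number): number {
  const sel = (lo: number, hi: number, target: number): number => {
    if (lo === hi) return arr[lo];
    const pivot = arr[hi];
    let i = lo - 1;
    for (let j = lo; j < hi; j++) {
      if (arr[j] <= pivot) {
        i++;
        [arr[i], arr[j]] = [arr[j], arr[i]];
      }
    }
    [arr[i + 1], arr[hi]] = [arr[hi], arr[i + 1]];
    const p = i + 1;
    if (p === target) return arr[p];
    return p < target ? sel(p + 1, hi, target) : sel(lo, p - 1, target);
  };
  return sel(0, arr.length - 1, k - 1);
}

// Nearest-rank percentile: the element at rank ceil(p/100 * n), p in (0, 100]
function percentile(data: number[], p: number): number {
  if (data.length === 0) throw new RangeError("empty dataset");
  const k = Math.max(1, Math.ceil((p / 100) * data.length));
  return quickselect([...data], k); // copy: quickselect mutates
}

const latencies = [120, 35, 48, 250, 61, 88, 73, 95, 430, 52];
console.log(percentile(latencies, 95)); // 430
console.log(percentile(latencies, 50)); // 73
```

Note that monitoring systems often interpolate between ranks instead of using nearest-rank; the O(n) selection underneath is the same either way.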
6. The gotcha: quickselect mutates your array
This is the most common production bug with this algorithm. After quickselect runs, your array is partially sorted around the pivot, not in its original order, not fully sorted. If you need the original preserved, copy before calling.
```typescript
const scores = [85, 42, 91, 37, 66, 78];

// ❌ Mutates: the array is left partially ordered around the pivots
const thirdSmallest = quickselect(scores, 3);
console.log(scores); // [42, 37, 66, 78, 91, 85] with the naive pivot above, not the original order

// ✅ Preserve the original by passing a copy
const fresh = [85, 42, 91, 37, 66, 78];
const thirdAgain = quickselect([...fresh], 3);
console.log(fresh); // [85, 42, 91, 37, 66, 78], untouched
```
TL;DR
- Quickselect finds the k-th smallest element in O(n) average time by exploiting the fact that partition places the pivot at its final sorted index; you only recurse into the half containing k.
- Worst case is O(n²) with a naive pivot. Randomized pivot selection makes this astronomically unlikely in practice.
- For guaranteed O(n) worst-case, use introselect or median-of-medians, at the cost of a larger constant.
- Quickselect mutates the input array. Always pass a copy if the original order matters.
- Real use cases: median computation, percentile statistics, top-K selection, outlier detection.