Big O notation is a way to describe the time or space complexity of an algorithm. It’s commonly used to characterize worst, best, and average cases when comparing one algorithm to another. Since algorithms with lower complexity finish faster than those with higher complexity, they are more desirable.

The expression O(x), pronounced “big-O of x”, symbolically expresses the asymptotic behavior of a given function, typically as an integer input size n tends to infinity or as a continuous variable x tends to some limit.

Calculating Big O Complexity

  1. Break your algorithm into individual operations
  2. Calculate the Big O of each operation
  3. Add up the Big O of each operation
  4. Remove constants
  5. Focus on the highest order term
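
To see these steps in action, consider the following hypothetical function (sum_and_max is an illustrative name, not from any library), with the cost of each operation annotated:

def sum_and_max(nums):
    total = 0                  # O(1)
    for num in nums:           # O(n) - visits every element once
        total += num
    largest = nums[0]          # O(1) - assumes nums is non-empty
    for num in nums:           # O(n) - a second full pass
        if num > largest:
            largest = num
    return total, largest      # O(1)

Adding the operations gives O(1) + O(n) + O(1) + O(n) + O(1) = O(2n + 3); removing the constants and focusing on the highest-order term leaves O(n).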

Complexity Classes

The table below contains a number of common Big O complexity classes. It illustrates how a relatively small number of inputs can produce an enormous number of operations, and it shows the dramatic differences between complexity classes.

n     O(log n)  O(n log n)  O(n^2)    O(n!)        O(n^n)
1     0         0           1         1            1
4     2         8           16        24           256
16    4         64          256       2 × 10^13    1 × 10^19
64    6         384         4096      1 × 10^89    3 × 10^115
256   8         2048        65536     8 × 10^506   3 × 10^616
1024  10        10240       1048576   5 × 10^2639  3 × 10^3082

The Algorithm Time Complexity graph below compares several complexity classes as the input size grows. A logarithmic scale is used so that all of the classes fit in a single image.

[Figure: Algorithm Time Complexity — operations versus input size, plotted on a logarithmic scale]

Examples of Big O Complexity

O(1) - Constant

O(1) is constant complexity, meaning that the cost of the operation is independent of the number of inputs. Operations with constant complexity include:

  • array access
  • array push
  • array pop

Functions that perform a fixed number of primitive operations like multiplication, addition, or subtraction also have constant complexity. Below is an example of such a constant-time function:

def isEvenOrOdd(n):
    return "even" if n % 2 == 0 else "odd"

O(log n) - Logarithmic - Divide and Conquer

Algorithms with logarithmic complexity narrow down the search space by repeatedly halving n until there is nothing left to halve. The classic logarithmic algorithm is binary search.

from typing import List

def binary_search(nums: List[int], target: int) -> int:
    left = 0
    right = len(nums) - 1
    while left <= right:
        # Check the midpoint of the remaining range.
        pivot = left + (right - left) // 2
        if nums[pivot] == target:
            return pivot
        # Discard the half that cannot contain the target.
        if target < nums[pivot]:
            right = pivot - 1
        else:
            left = pivot + 1
    return -1
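
For example, binary_search([1, 3, 5, 7, 9], 7) returns index 3. A sorted array of roughly one million elements takes at most about 20 comparisons, since log2(1,000,000) ≈ 20.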

O(n) - Linear Complexity

Linearly complex algorithms scale in proportion to the number of inputs: when there are 100 inputs, there are on the order of 100 operations. The following operations all have linear complexity.

  • Traversing an array
  • Traversing a linked list
  • Linear Search

In the following example, we’re tasked with finding two numbers in an array that add up to a target value. Our solution iterates over the array twice, once to build a hash map and once to look up each value’s complement, which is 2n operations. After removing the constant ‘2’ we’re left with linear complexity.

from typing import List

def twoSum(nums: List[int], target: int) -> List[int]:
    # First pass: map each value to its index.
    hashmap = {}
    for i in range(len(nums)):
        hashmap[nums[i]] = i
    # Second pass: look up each value's complement.
    for i in range(len(nums)):
        complement = target - nums[i]
        if complement in hashmap and hashmap[complement] != i:
            return [i, hashmap[complement]]
    return []  # no pair sums to the target
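
For example, twoSum([2, 7, 11, 15], 9) returns [0, 1], because nums[0] + nums[1] equals the target.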

O(n log n) - Log Linear

The best way to understand log-linear complexity is to think of it as linear complexity multiplied by logarithmic: the algorithm performs O(log n) work for each of n elements. Most of the common sorting algorithms have log-linear complexity.

  • merge sort
  • heap sort
  • quick sort

Here we have a quicksort implementation: each level of recursion iterates over all n values of the array to partition it, and the partitions are repeatedly halved (on average), giving log n levels.

def quick_sort(arr):
    len_arr = len(arr)

    if len_arr < 2:
        return arr

    # Partition around the first element as the pivot.
    current_position = 0
    for i in range(1, len_arr):
        if arr[i] <= arr[0]:
            current_position += 1
            arr[i], arr[current_position] = arr[current_position], arr[i]

    # Move the pivot into its final position.
    arr[0], arr[current_position] = arr[current_position], arr[0]

    # Recursively sort the partitions on either side of the pivot.
    left = quick_sort(arr[0:current_position])
    right = quick_sort(arr[current_position + 1:len_arr])

    return left + [arr[current_position]] + right
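
Keep in mind that O(n log n) is quicksort’s average case; consistently poor pivot choices (for example, always picking the first element of an already sorted array) degrade it to O(n^2), as the sorting table at the end of this article shows.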

O(n^2) - Quadratic Complexity

O(n^2) is also known as quadratic complexity because performance is proportional to the square of the input size. Another way of thinking about it is that it’s linear complexity squared, hence quadratic. This level of complexity often occurs when the data is iterated over in nested loops. There are also several sorting algorithms that have quadratic complexity.

  • bubble sort
  • insertion sort
  • selection sort

A naive solution to the classic Two Sum coding challenge illustrates quadratic complexity. For each element, we try to find its complement by iterating through the rest of the array, which takes O(n) time per element and O(n^2) time overall.
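
A minimal sketch of that naive approach, assuming the same inputs as the hash-map version above (two_sum_naive is an illustrative name):

def two_sum_naive(nums, target):
    # For each element, scan the rest of the array for its complement.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return [i, j]
    return []  # no pair sums to the target

Bubble sort is another classic quadratic algorithm; each pass through the array bubbles the largest remaining element to the end: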

def bubblesort(nums):
    # Each pass shrinks the unsorted portion by one element.
    for n in range(len(nums) - 1, 0, -1):
        for i in range(n):
            # Swap adjacent elements that are out of order.
            if nums[i] > nums[i + 1]:
                nums[i], nums[i + 1] = nums[i + 1], nums[i]

O(2^n) - Doubling - Fibonacci

O(2^n) represents a function whose work roughly doubles for every additional element in the input. Our example is a function which recursively generates the numbers in the Fibonacci sequence. The function calls itself twice for each input number until the number is less than or equal to one.

def fib(target) -> int:
    if target <= 1:
        return target
    return fib(target - 1) + fib(target - 2)
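
For contrast, caching previously computed values (memoization) collapses this to linear time. A minimal sketch using Python’s built-in functools.lru_cache (fib_cached is an illustrative name):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib_cached(target: int) -> int:
    # Each distinct value is computed only once, so complexity drops to O(n).
    if target <= 1:
        return target
    return fib_cached(target - 1) + fib_cached(target - 2)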

O(n!) - Factorial

Algorithms with O(n!) complexity typically generate every possible ordering of the n values in the data set. Brute-force solutions to the Traveling Salesman Problem and finding all permutations of a string have O(n!) complexity.

Below is an O(n!) complexity algorithm that constructs all permutations of a list of values.

def get_permutations(nums):
    def backtrack(first=0):
        # All positions are fixed; record a copy of this permutation.
        if first == n:
            output.append(nums[:])
        # Try every remaining value in position `first`.
        for i in range(first, n):
            nums[first], nums[i] = nums[i], nums[first]
            backtrack(first + 1)
            # Undo the swap before trying the next candidate.
            nums[first], nums[i] = nums[i], nums[first]

    n = len(nums)
    output = []
    backtrack()
    return output
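
For example, get_permutations([1, 2, 3]) returns 3! = 6 permutations; adding a fourth element raises that to 4! = 24.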

O(2^n) - Exponential

Finding the exact solution to the Traveling Salesman Problem using dynamic programming has exponential complexity. That solution is non-trivial and won’t be demonstrated here.
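
A simpler example of O(2^n) growth is enumerating every subset of a collection: each element is either in or out of any given subset, so there are 2^n subsets in total. A minimal sketch (all_subsets is an illustrative name):

def all_subsets(nums):
    subsets = [[]]
    for num in nums:
        # Every existing subset spawns a copy with the new element added,
        # doubling the total count for each element processed.
        subsets += [subset + [num] for subset in subsets]
    return subsets

For example, all_subsets([1, 2, 3]) returns 2^3 = 8 subsets.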

Established Values

Every commonly used data structure and sorting algorithm has a well-known complexity. Those complexities are detailed in the tables below.

Common Data Structure Operations

Data Structure      Access (Avg)  Search (Avg)  Insertion (Avg)  Deletion (Avg)
Array               O(1)          O(n)          O(n)             O(n)
Stack               O(n)          O(n)          O(1)             O(1)
Queue               O(n)          O(n)          O(1)             O(1)
Singly-Linked List  O(n)          O(n)          O(1)             O(1)
Doubly-Linked List  O(n)          O(n)          O(1)             O(1)
Hash Table          O(1)          O(1)          O(1)             O(1)
Binary Search Tree  O(log(n))     O(log(n))     O(log(n))        O(log(n))
B-Tree              O(log(n))     O(log(n))     O(log(n))        O(log(n))
Red-Black Tree      O(log(n))     O(log(n))     O(log(n))        O(log(n))
Splay Tree          O(log(n))     O(log(n))     O(log(n))        O(log(n))
AVL Tree            O(log(n))     O(log(n))     O(log(n))        O(log(n))
KD Tree             O(log(n))     O(log(n))     O(log(n))        O(log(n))

Common Sorting Algorithms

Algorithm       Best         Average         Worst           Space (Worst)
Quicksort       O(n log(n))  Θ(n log(n))     O(n^2)          O(log(n))
Mergesort       O(n log(n))  Θ(n log(n))     O(n log(n))     O(n)
Timsort         O(n)         Θ(n log(n))     O(n log(n))     O(n)
Heapsort        O(n log(n))  Θ(n log(n))     O(n log(n))     O(1)
Bubble Sort     O(n)         Θ(n^2)          O(n^2)          O(1)
Insertion Sort  O(n)         Θ(n^2)          O(n^2)          O(1)
Selection Sort  O(n^2)       Θ(n^2)          O(n^2)          O(1)
Tree Sort       O(n log(n))  Θ(n log(n))     O(n^2)          O(n)
Shell Sort      O(n log(n))  Θ(n(log(n))^2)  O(n(log(n))^2)  O(1)
Bucket Sort     O(n+k)       Θ(n+k)          O(n^2)          O(n)
Radix Sort      O(nk)        Θ(nk)           O(nk)           O(n+k)
Counting Sort   O(n+k)       Θ(n+k)          O(n+k)          O(k)
Cubesort        O(n)         Θ(n log(n))     O(n log(n))     O(n)