Understanding Big O Notation: A Simplified Guide for Programmers

Big O Notation is like the GPS for programmers. It tells you how fast you’ll reach your coding destination.

Learning Big O Notation becomes particularly valuable once you’ve become proficient in your chosen programming language(s). It’s a concept that typically comes into focus when you start exploring data structures and algorithms (DSA), often as part of the interview process. Honestly, that’s the only reason I ever decided to look into it.

This blog post aims to provide you with a basic understanding of Big O Notation without delving too deep into the mathematical intricacies.

Basics of Big O Notation

Big O Notation serves as a way to understand the relationship between the size of input data and the efficiency of algorithm execution (the number of operations/steps needed).

For instance, consider an array with three values, [3, 4, 2], passed as an argument to a function requiring only one operation for completion. This scenario showcases ideal efficiency, where the data size has no significant impact on the number of operations required. In contrast, when a function’s operations grow in proportion to the array’s size, it signals less desirable efficiency.
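The contrast above can be sketched with a small (hypothetical) operation counter — `constant_ops` and `linear_ops` are made-up names for illustration, not standard functions:

```python
from typing import List

def constant_ops(data: List[int]) -> int:
    # Accessing a single element is one operation, regardless of list size.
    _ = data[0]
    return 1

def linear_ops(data: List[int]) -> int:
    # Touching every element costs one operation per item.
    count = 0
    for _ in data:
        count += 1
    return count

print(constant_ops([3, 4, 2]))       # 1 operation for 3 items
print(linear_ops([3, 4, 2]))         # 3 operations for 3 items
print(linear_ops(list(range(100))))  # 100 operations for 100 items
```

However long the list gets, `constant_ops` stays at one operation, while `linear_ops` grows in step with the input.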

Constant Time – O(1)

This level of efficiency represents the programmer’s dream. It implies that, regardless of input size, the algorithm performs the same number of operations.


from typing import List

def my_function(input_data: List[int]):
    return input_data[1]

input_data = [3, 4, 2]
print(my_function(input_data))  # Output: 4

Linear Time – O(n)

Linear time complexity means that the number of operations increases linearly with the size of the input data. For example, if a function processes an array with one item, it takes one operation; with ten items, it takes ten operations.


from typing import List

def my_function(input_data: List[int]):
    for item in input_data:
        print(item)

input_data = [3, 4, 2]
my_function(input_data)

The function above loops through every item in the array and prints its value.

Quadratic Time – O(n^2)

Quadratic time complexity arises in situations involving nested loops or dependencies. As the data size grows, the number of operations grows quadratically: doubling the input roughly quadruples the work.


from typing import List

def my_function(input_data: List[int]):
    for x in range(len(input_data)):
        for y in range(len(input_data)):
            print(input_data[x], input_data[y])

input_data = [3, 4, 2]
my_function(input_data)

The above example loops through every item in the array, and for each of those items it loops through the entire array again.
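To make the growth concrete, here is a small sketch (`count_pairs` is a made-up helper, not part of any library) that tallies how many times the inner loop body runs:

```python
from typing import List

def count_pairs(data: List[int]) -> int:
    # Count one "operation" per (x, y) pair the nested loops visit.
    count = 0
    for _ in data:
        for _ in data:
            count += 1
    return count

print(count_pairs([3, 4, 2]))        # 3 items -> 9 operations
print(count_pairs(list(range(10))))  # 10 items -> 100 operations
```

For n items the inner body runs n × n times, which is exactly what O(n^2) describes.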

Logarithmic Time – O(log n)

This is quite a good level of time complexity: the data that is passed gets split into smaller pieces at each step. You often see this with binary search.

I won’t delve too deeply into the intricacies of how binary search works, but in essence, it’s a method used to find a specific value within a sorted array, such as [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. Binary search begins by examining the middle element of the array. If that element isn’t the target, the algorithm discards the half that cannot contain the target and repeats the process on the remaining half, until the desired value is located (or the search space is exhausted).

If you’re familiar with fault finding in electronics, this is essentially the same as the ‘half-split’ method.

The following Python example is the same as described above.

from typing import List

def binary_search(input_data: List[int], target: int) -> int:
    low_end = 0
    high_end = len(input_data) - 1

    while low_end <= high_end:
        middle = (low_end + high_end) // 2
        if input_data[middle] == target:
            return middle
        elif input_data[middle] < target:
            low_end = middle + 1
        else:
            high_end = middle - 1

    return -1

input_data = [1, 2, 3, 4, 5, 6, 7, 8, 9]
target_element = 9

result = binary_search(input_data, target_element)

if result != -1:
    print(f"Value {target_element} found at index {result}.")
else:
    print(f"Value {target_element} not found in the array.")

Exponential Time – O(2^n)

This is often considered the worst of the common time complexities, a point of reference when evaluating algorithms. Much like in life, preparing for the worst ensures you won’t be disappointed. As the data size grows, the number of operations increases extremely rapidly, roughly doubling with each additional input item.

A good example of this is working out the nth Fibonacci number using a naive recursive approach, which essentially keeps calling itself over and over again.

def fibonacci_recursive(target: int):
    if target <= 1:
        return target
    return fibonacci_recursive(target - 1) + fibonacci_recursive(target - 2)

target = 5
result = fibonacci_recursive(target)
print(f"The {target}-th Fibonacci number is {result}.")
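To see just how fast the work piles up, here is a hedged sketch of the same recursion instrumented with a call counter (`fibonacci_counted` is a made-up name for illustration):

```python
def fibonacci_counted(target: int, counter: dict) -> int:
    # Identical recursion, but tally every call made.
    counter["calls"] += 1
    if target <= 1:
        return target
    return fibonacci_counted(target - 1, counter) + fibonacci_counted(target - 2, counter)

for n in (5, 10, 20):
    counter = {"calls": 0}
    fibonacci_counted(n, counter)
    print(f"n={n}: {counter['calls']} calls")
# n=5 makes 15 calls, n=10 makes 177, n=20 makes 21891 —
# the call count roughly doubles every couple of steps.
```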

Good luck!

If you’ve read this post to gain a foundational understanding of Big O Notation for interviews, best of luck to you! I’ve been reading Cracking the Coding Interview, 6th Edition which has been fantastic at explaining subjects around DSA and I highly recommend it. While I haven’t personally reached the interview stage, I sincerely hope that this information proves valuable in your pursuit of your dream job.
