Big O
Big O notation is a way of describing the performance of a function without measuring wall-clock time. Rather than timing a function from start to finish, big O describes how the running time grows as the input size increases. It helps us understand how programs will perform across a range of inputs.

In this post I'm going to cover 4 frequently-used categories of big O notation: constant, logarithmic, linear, and quadratic. Don't worry if these words mean nothing to you right now. I'm going to talk about them in detail, as well as visualise them, throughout this post.

Before you scroll! This post has been sponsored by the wonderful folks at ittybit, and their API for working with videos, images, and audio. If you need to store, encode, or get intelligence from the media files in your app, check them…
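Before diving in, here is a rough sketch (in Python, with made-up function names) of what code in each of the four categories might look like. These are illustrative examples, not taken from the post itself:

```python
def constant(items):
    # O(1): one step, no matter how long the list is
    return items[0]

def logarithmic(sorted_items, target):
    # O(log n): binary search halves the remaining range each step
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def linear(items, target):
    # O(n): may need to look at every element once
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

def quadratic(items):
    # O(n^2): considers every pair of elements
    pairs = []
    for a in items:
        for b in items:
            pairs.append((a, b))
    return pairs
```

Doubling the input roughly doubles the work for `linear`, quadruples it for `quadratic`, adds only one extra step for `logarithmic`, and changes nothing for `constant`.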