Measuring algorithmic complexity with big-O notation

Efficiency

Algorithmic efficiency

An algorithm takes time and memory to run. Analysing these characteristics lets us choose effectively between competing algorithms.

Complexity is described using big-O notation, so an algorithm with parameters \(\theta\) would have a time efficiency of \(O(f(\theta))\), where \(f(\theta)\) is a function of \(\theta\).
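
For reference, the standard formal definition: \(f(\theta) = O(g(\theta))\) means that \(g\) eventually bounds \(f\) from above, up to a constant factor:

\[
\exists\, c > 0,\ \exists\, \theta_0 \ \text{such that}\ f(\theta) \le c\, g(\theta) \ \text{for all}\ \theta \ge \theta_0 .
\]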

Generally we expect \(f(\theta)\) to be weakly increasing in \(\theta\): adding further inputs should not decrease the time or space requirements of the algorithm.

An algorithm whose cost did not change with its inputs would have a constant as the largest term, so we would write \(O(c)\) (conventionally normalised to \(O(1)\)).

An algorithm whose cost increases linearly with its inputs could be written \(O(\theta)\).

An algorithm whose cost increases polynomially could be written \(O(\theta^k)\) for some constant \(k\).

An algorithm whose cost increases exponentially could be written \(O(e^\theta)\).
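
A minimal Python sketch of these growth classes (the function names and bodies are illustrative, not taken from any particular source):

```python
def constant(xs):
    """O(1): cost independent of input size (assumes xs is non-empty)."""
    return xs[0]

def linear(xs):
    """O(n): one pass over the input."""
    total = 0
    for x in xs:
        total += x
    return total

def quadratic(xs):
    """O(n^2): a polynomial example with k = 2."""
    pairs = 0
    for x in xs:
        for y in xs:
            pairs += 1
    return pairs

def exponential(n):
    """O(2^n): the work doubles with each extra unit of input."""
    if n == 0:
        return 1
    return exponential(n - 1) + exponential(n - 1)
```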

Complexity can differ between worst-case, best-case and average-case scenarios.
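
Linear search is a standard illustration of this gap; a small sketch:

```python
def linear_search(xs, target):
    """Return the index of target in xs, or -1 if it is absent.

    Best case: target is xs[0], one comparison -> O(1).
    Worst case: target is missing, len(xs) comparisons -> O(n).
    Average case: about half the list is scanned, still O(n).
    """
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1
```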

We can describe logical systems by completeness (all true statements are theorems) and soundness (all theorems are true). We have similar definitions for algorithms.

An algorithm which returns an output for every possible input is complete. An algorithm which never returns an incorrect output is sound.

Big-O and little-o recap

Time efficiency

Space efficiency

Verifying answers

NP, NP-hard and NP-complete

Decision problems

A decision problem returns yes or no.
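
A minimal sketch using primality, a classic decision problem (the trial-division method here is purely illustrative):

```python
def is_prime(n):
    """Decision problem: is n prime? The output is only yes (True) or no (False)."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False  # a divisor exists: answer no
    return True           # no divisor found: answer yes
```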

Calculating the cost of an algorithm

Instruction costs

Efficiency of loops

The cost of a loop depends on the number of times each instruction is called.
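
A sketch of this style of counting, assuming a simple cost model in which every executed statement costs one unit:

```python
def count_pairs(xs):
    """Count pairs i < j, annotated with how often each instruction runs."""
    n = len(xs)   # executed once
    count = 0     # executed once
    for i in range(n):             # loop header evaluated n + 1 times
        for j in range(i + 1, n):  # inner body runs n(n-1)/2 times in total
            count += 1             # dominant term, so the loop is O(n^2)
    return count  # executed once
```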

Big-O recap (take from maths)

Efficiency of functions with arguments

Best case, worst case