Time Complexity

So what do we mean when we talk about "the time to solve"?
For a given instance of a problem, we have a concept of its "size," usually expressed as the number of bits required to choose this instance from among all possible instances.
As the size of the instance grows, the algorithm can (quite reasonably) be expected to take longer, and the time complexity tells you what this function looks like.
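To make the "number of bits" measure of size concrete, here is a minimal sketch (the function name and sample values are my own, not from the text) showing how slowly an instance's size grows compared with its value:

```python
def instance_size_bits(n: int) -> int:
    """Number of bits needed to write down the positive integer n --
    one common measure of an instance's 'size'."""
    return max(1, n.bit_length())

# Doubling the value of the input adds only one bit to its size,
# so an algorithm whose running time grows with the *value* of n
# is exponential in the *size* of n.
print(instance_size_bits(1_000_000))  # 20
print(instance_size_bits(2_000_000))  # 21
```

This distinction matters later: complexity is conventionally measured against the size in bits, not the numeric value.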

This function tells us the maximum time we need to wait for a solution.
Described like that, we can see that the time complexity is specific to a particular algorithm, and not necessarily to the problem it solves. However, we can say that the time complexity of a particular problem is "bounded above" by a function because we know of an algorithm that achieves that bound.

For example, we know that the time complexity of multiplying two numbers (of size n and m respectively) is bounded above by a·n·m, where a is some constant, because we know that the traditional tried and tested long multiplication algorithm takes (at most) that many steps.
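The long multiplication bound can be checked directly by instrumenting the schoolbook algorithm to count its digit-by-digit steps. This is a sketch under my own assumptions (function name, step-counting convention); it works digit by digit in base 10, mirroring pen-and-paper multiplication:

```python
def long_multiply_steps(x: int, y: int, base: int = 10) -> tuple[int, int]:
    """Schoolbook long multiplication of x and y.
    Returns (product, steps), where steps counts digit-by-digit
    multiply-adds: exactly n*m for inputs with n and m digits,
    so the running time is bounded above by a*n*m for a constant a."""
    xs = [int(d) for d in str(x)][::-1]   # least-significant digit first
    ys = [int(d) for d in str(y)][::-1]
    result = [0] * (len(xs) + len(ys))
    steps = 0
    for i, dx in enumerate(xs):
        carry = 0
        for j, dy in enumerate(ys):
            steps += 1                    # one digit-by-digit multiply-add
            total = result[i + j] + dx * dy + carry
            result[i + j] = total % base
            carry = total // base
        result[i + len(ys)] += carry      # final carry for this row
    product = int("".join(map(str, result[::-1])))
    return product, steps

product, steps = long_multiply_steps(1234, 5678)
print(product, steps)  # 7006652 16
```

Note that the step count depends only on the number of digits (4 × 4 = 16 here), not on the digits themselves, which is why the bound can be stated for all instances of a given size.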

Proven lower bounds, on the other hand, can be hard to come by.
Of course, sometimes we don't care about the details and we just want a broad indication of how bad things get. That's where we start talking about classes of algorithms, and, by implication, classes of problems.
One such class is called P.