# Introduction


In August 2010 the blogosphere and twitterverse exploded into life with the leaking of a paper that appeared at first glance to be a serious contender in the ongoing quest to decide the "P vs NP" question.

As the dust settles it seems that the proof is flawed and the question remains unanswered. The attempt introduced new ideas that may yet prove useful, but for now, the question remains open.

So what is this question, and why did it get so many people so interested so quickly?

My intention here is to give an overview of the subject, without going into more than a hint of the details. I'm largely going to concentrate on explaining all this at the conceptual level, with enough detail to help you get a feel for what's going on. I'm not an expert, so if you want to get into the really deep dark details, these posts should give you a general map of the terrain, but don't rely on them too heavily!

Broadly speaking, the "P vs NP" question is concerned with algorithms and their efficiency - what is called their "Time Complexity". (Feel free to skip ahead and read about that, but we will get to it in time.)

Some tasks are easy to complete and, for a given input, take a predictable amount of time - multiplying two numbers, for example - while others seem to take forever. This depends, of course, on how big the particular task is - bigger numbers take longer to multiply - so along the way we need to make precise what we mean by "easy", "hard", "quick", "slow", "size", and exactly what we mean by "how long".
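To make "how long" a little more concrete, here is a minimal Python sketch (mine, not from the original article) that performs schoolbook long multiplication while counting the single-digit multiplications it uses. Every digit of one number meets every digit of the other, so two n-digit numbers cost n × n steps - the count grows with the square of the size of the input.

```python
def multiply_counting_steps(a: int, b: int):
    """Schoolbook long multiplication, counting single-digit multiplications.

    Illustrative sketch only: every digit of one number is multiplied by
    every digit of the other, so two n-digit numbers cost n * n steps.
    """
    digits_a = [int(d) for d in str(a)]
    digits_b = [int(d) for d in str(b)]
    total, steps = 0, 0
    for i, x in enumerate(reversed(digits_a)):
        for j, y in enumerate(reversed(digits_b)):
            total += x * y * 10 ** (i + j)  # shift by place value, accumulate
            steps += 1
    return total, steps

product, steps = multiply_counting_steps(123, 456)
# 3 digits times 3 digits: 9 single-digit multiplications
```

Doubling the number of digits quadruples the step count - which is exactly the kind of growth rate "Time Complexity" is designed to describe.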

## Why is it interesting or important?

It's pretty obvious that understanding how long algorithms take is important when dealing with large amounts of data. Using a bubble sort is fine when you only have a few tens of elements, but sorting a few million elements requires more sophisticated techniques. Similarly, long multiplication is suitable when dealing with a few tens of digits, but the numbers used in cryptography routinely have hundreds or thousands of digits. Faster multiplication techniques are critical.
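As an illustration (again mine, not the article's), here is bubble sort instrumented to count its comparisons. It always gives the same answer as Python's built-in sort, but it makes n(n-1)/2 comparisons for n items - fine for tens of elements, hopeless for millions.

```python
import random

def bubble_sort(items):
    """Bubble sort, returning the sorted list and the number of comparisons.

    Illustrative sketch: for n items it makes n*(n-1)/2 comparisons,
    which is why it becomes impractical for millions of elements.
    """
    items = list(items)
    n = len(items)
    comparisons = 0
    for i in range(n):
        for j in range(n - 1 - i):  # each pass bubbles the largest item up
            comparisons += 1
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items, comparisons

data = [random.randrange(1000) for _ in range(100)]
result, comparisons = bubble_sort(data)
assert result == sorted(data)          # same answer as the built-in sort
assert comparisons == 100 * 99 // 2    # quadratic growth: 4950 comparisons
```

At a million elements that formula gives roughly half a trillion comparisons, while an n log n sort needs on the order of twenty million - the difference between seconds and days.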

But even more importantly, modern cryptography relies on certain problems being hard to solve. The RSA public-key cryptosystem depends on the currently accepted belief that multiplying numbers together is easy (for some definition of "easy") but factoring some numbers is hard (for some definition of "hard").
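The asymmetry can be sketched in a few lines of Python. This is an illustration of the idea, not of real RSA: the two primes are chosen by me for the example, and trial division is the crudest possible attack, but the shape of the problem is the same - one direction is a single multiplication, the other is a search.

```python
def smallest_factor(n: int) -> int:
    """Find the smallest prime factor of n by trial division.

    Illustrative only: real factoring attacks are far more sophisticated,
    but the point stands - the work grows rapidly with the factor size.
    """
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # no divisor found, so n itself is prime

p, q = 104723, 104729           # two primes, chosen here for illustration
n = p * q                       # "easy": a single multiplication
assert smallest_factor(n) == p  # "hard": about a hundred thousand divisions
```

With five-digit primes the search is still instant, but the work roughly multiplies by ten for every digit added to the factors - and cryptographic primes have hundreds of digits.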

But we don't actually know if this is true. Until quite recently some people thought that checking whether a number is prime might sometimes be quite hard. It turns out not to be. Perhaps there's a similar nasty surprise waiting in the wings to render all our modern cryptographic techniques (relatively) useless.
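For the curious, here is a sketch of why primality turned out to be easy: the Miller-Rabin test below needs only a handful of modular exponentiations, and with this particular fixed set of witness bases it is known to be exact for all 64-bit integers. (The test shown is probabilistic in general; the 2002 AKS algorithm later settled the question with a fully deterministic polynomial-time test.)

```python
def is_prime(n: int) -> bool:
    """Miller-Rabin primality test.

    Sketch of why testing primality is cheap compared with factoring.
    With these fixed witness bases the test is exact for 64-bit integers.
    """
    if n < 2:
        return False
    witnesses = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in witnesses:
        if n % p == 0:
            return n == p
    # write n - 1 as 2^r * d with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in witnesses:
        x = pow(a, d, n)               # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False               # a is a witness: n is composite
    return True

assert is_prime(2**61 - 1)             # a large Mersenne prime, checked instantly
assert not is_prime(2**61 + 1)
```

Notice the contrast with the factoring sketch above: the same 19-digit number that would take an age to factor by trial division is certified prime or composite in a few dozen multiplications.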

That's why it's important to understand exactly what we mean by "hard" and "easy".

## A word of warning

I've tried to make this an easy read, provided you ignore the side-boxes and don't try to work through the detail. If you simply "read like a novel" then you should get a pretty good sense of what's going on.

However, in places there are some tricky details, and I haven't ignored them. In those cases you might need to "read like math". In other words, there are places where you need to read a sentence, then pause, and figure out what it really means and what its implications are. In later versions of this document I'll mark them to give you due warning, but currently I don't know where they are!

You see, I wrote this, so I can't easily predict what you'll find tricky. I'll make those adjustments as I get feedback.