To explain what an integer overflow is, you need a basic idea of how many programming languages work. When you declare a variable (something you can temporarily store information in, and read from and write to within the program's memory), you typically need to give it a type (Integer, Long, String, Boolean, Byte, etc.) so the program knows what kind of information you're going to store there -- in this case, integers. (From here on I'll assume you're working with a language like this, since most behave this way.)

Integers are interesting because languages handle them differently; the size tends to vary from language to language. For instance, the range of a basic (signed) int in C is typically -2,147,483,648 to 2,147,483,647 (32 bits on most modern platforms), whereas the range of an Integer in Visual Basic 6 is only -32,768 to 32,767 (16 bits).

But what happens if you go over the limit of what an integer can hold? Some high-level languages (such as Python) handle it for you by switching to arbitrary-precision integers, but in other languages the value silently wraps around, the behavior is undefined (as with signed overflow in C), or a runtime overflow error is raised (as in Visual Basic 6) -- unless the enterprising young programmer thought ahead, looked up the language's limits, and built in checks before blindly accepting user-provided input.
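As a minimal sketch of the kind of check I mean (the name `safe_add` is just illustrative, not something from the standard library), here's roughly how you might guard an addition in C before it happens, since signed overflow in C is undefined behavior and you can't reliably test for it after the fact:

```c
#include <limits.h>
#include <stdio.h>

/* Add two ints, reporting failure instead of overflowing.
   Returns 1 on success (result stored in *result), 0 if the
   addition would go outside the range of int. */
int safe_add(int a, int b, int *result)
{
    if ((b > 0 && a > INT_MAX - b) ||
        (b < 0 && a < INT_MIN - b)) {
        return 0; /* would overflow, refuse */
    }
    *result = a + b;
    return 1;
}

int main(void)
{
    int sum;
    if (safe_add(INT_MAX, 1, &sum))
        printf("sum = %d\n", sum);
    else
        printf("refused: adding 1 to %d would overflow\n", INT_MAX);
    return 0;
}
```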
In other words, an integer overflow occurs when a number too large for that language's definition of 'integer' is put into an integer variable.
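If you want to actually watch the value "go over the limit" without invoking undefined behavior, unsigned overflow in C is well-defined (it wraps around modulo 2^N for an N-bit unsigned type), so a small demo might look like this:

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Unsigned overflow wraps around to zero, which makes the
       "too big for the type" behavior easy to observe safely. */
    unsigned int u = UINT_MAX;
    printf("UINT_MAX     = %u\n", u);
    printf("UINT_MAX + 1 = %u\n", u + 1); /* wraps to 0 */
    return 0;
}
```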