Long post with some EE theory...I apologize if this goes over people's heads. I'm trying to explain this as best I can without busting out complicated math and formal circuit analysis techniques...
Most of the LEDs I've had fail due to abuse have failed as opens. If you look inside, you can sometimes see that the bond wire going to the die has melted. If it fails as a short, it'll usually promptly fry itself further into an open.
The reason it's not recommended to use diodes without resistors is the non-linear nature of a diode. With a resistor or something that looks like one, such as a light bulb, if you increase the voltage across it, the current goes up in proportion with that voltage change. That's in fact the definition of the resistance parameter R. That isn't how diodes behave. Once a diode is "on", increasing the voltage across it slightly causes a disproportionately large increase in current flow.
Think of a diode as a switch in series with a very small resistor. The switch is open until you get some voltage across it, about 0.6V for a silicon diode, and about 1.6V for a red LED. After this, the switch closes and the only thing limiting the current is that very small resistor. It isn't actually a resistor, but for small voltages and currents this works out as a good model (an engineer would call this a "first order approximation" or a "linearization" of the diode). This very small equivalent resistance is in fact so small that many times it's valid to omit it from the model entirely. Thus, the diode doesn't conduct any current until you reach its "on" voltage, then it will conduct as much current as you want, until it blows up, of course.
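If you want to play with that model, here's a quick Python sketch of the switch-plus-small-resistor idea (the 1.6V knee and the 1-ohm slope resistance are made-up illustrative numbers, not from any datasheet):

    def diode_current(v_applied, v_on=1.6, r_slope=1.0):
        """Piecewise-linear diode model: open switch below v_on,
        then a very small series resistance above it."""
        if v_applied < v_on:
            return 0.0  # switch open: no current flows
        return (v_applied - v_on) / r_slope  # switch closed: only the tiny R limits current

    # A tiny increase in applied voltage causes a huge increase in current:
    for v in (1.5, 1.6, 1.7, 1.8):
        print(f"{v:.1f} V -> {diode_current(v) * 1000:.0f} mA")

Going from 1.6V to 1.8V, a 12% change in voltage, takes the model from 0mA to 200mA. That's the whole problem in a nutshell.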
Thus, what you do is insert a series resistor. The series resistor lets you have a little play in how much voltage to apply in order to get the current right. If your voltage is too low, the diode will barely conduct at all. If it's even just a little bit too high, it will start to conduct way too much current (until it blows up or is limited by the power source). With the series resistor, the resistor drops the additional voltage and limits the current, which you can solve for as I=V/R, where V is the voltage across the resistor, which is in turn the supply voltage minus the drop across the diode.
You can solve that for R and get R = V/I. This in fact is the derivation of the formula used to bias an LED for DC operation. You take your power supply voltage, subtract the predicted drop across the LED, divide by the current you want to flow through the LED, and find R.
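As a concrete worked example (the 5V supply, 1.8V red LED, and 20mA target here are just illustrative numbers):

    supply = 5.0   # supply voltage, V
    v_led = 1.8    # predicted forward drop of the LED, V
    i_led = 0.020  # desired LED current, A

    r = (supply - v_led) / i_led
    print(f"R = {r:.0f} ohms")  # -> 160 ohms

160 ohms happens to be a standard E24 value; when the math doesn't land on one, round up so you err on the side of less current.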
The thing to remember is that, due to the nature of a diode, seemingly insignificant changes in applied voltage can cause catastrophically large changes in current. Power is voltage times current, and too much power dissipation can kill the LED. This is why, when you design a circuit for powering an LED, your major concern is current, not the voltage across it. If you have a small change in current, the change in voltage is really, really small, and you're still OK.
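To put numbers on that, using the same illustrative 5V / 1.8V / 20mA circuit as above, the power check is just a couple of multiplications:

    i = 0.020                # LED current, A
    v_led, v_res = 1.8, 3.2  # drops across the LED and the resistor, V

    p_led = v_led * i  # 36 mW dissipated in the LED
    p_res = v_res * i  # 64 mW dissipated in the resistor
    print(f"LED: {p_led * 1000:.0f} mW, resistor: {p_res * 1000:.0f} mW")

Both numbers are tiny here; even a 1/8W resistor has plenty of margin.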
If you had a method for directly controlling the current through an LED, letting the voltage be whatever it turns out to be, you could do that. This is actually possible: you can build a so-called "current source" using a couple of transistors and a resistor, and such things are in fact common pieces of equipment on an EE lab bench. However, it's a lot cheaper and easier to just use a resistor, since the way you choose the resistor in the current source is essentially the same way you choose it in an LED power application.
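For the curious, the sizing math for the classic two-transistor current limiter looks like this (just a sketch, assuming the usual topology where a second transistor steals base drive once the sense resistor develops one base-emitter drop; the ~0.65V figure is approximate and shifts with temperature):

    V_BE = 0.65  # approximate base-emitter turn-on voltage, V (temperature-dependent)

    def sense_resistor(i_target):
        """Sense resistor for a two-BJT current limiter: limiting kicks in
        once R * i_target reaches one base-emitter drop."""
        return V_BE / i_target

    print(f"{sense_resistor(0.020):.1f} ohms for 20 mA")  # -> 32.5 ohms

Note that it's still R = V/I, just with V_be standing in for the voltage across the resistor.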
All this simplification is done because the actual formula relating current through a diode and the voltage across it is rather nasty and can in fact be impossible to solve analytically in some cases: you have to solve it graphically or numerically through iterative methods. If you're curious, the standard model for a diode or LED is the Shockley Diode Equation, I_d = I_s(e^(V_d/(n*V_t)) - 1), where I_s is a parameter called the saturation current, V_d is the voltage across the LED, n (actually a Greek letter "eta") is the emission coefficient, also called the ideality factor (it amounts to a fudge factor, though it does have physical meaning), and V_t is the "thermal voltage", which is 26mV at room temperature (it depends on Boltzmann's constant, the elementary charge, and temperature). In other words, the current through a diode depends on: two parameters of the diode itself, the voltage across the diode, the temperature of the diode, and two fundamental constants of physics. Yeah, there's a reason to simplify this stuff.
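If you want to see the numerical approach in action, here's a small Python sketch that finds the operating point of a supply + resistor + diode circuit by bisection (the I_s and n values are invented for illustration; real parts vary wildly, so don't read anything into the exact numbers):

    import math

    I_S = 1e-18  # saturation current, A (illustrative value)
    N   = 2.0    # emission coefficient / ideality factor (illustrative value)
    V_T = 0.026  # thermal voltage at room temperature, V

    def shockley(v_d):
        """Shockley Diode Equation: diode current for a given diode voltage."""
        return I_S * (math.exp(v_d / (N * V_T)) - 1)

    def operating_point(v_supply, r, iters=60):
        """Bisect on the diode voltage until the diode current matches
        the resistor current, (v_supply - v_d) / r."""
        lo, hi = 0.0, v_supply
        for _ in range(iters):
            v_d = (lo + hi) / 2
            if shockley(v_d) > (v_supply - v_d) / r:
                hi = v_d  # diode wants more current than the resistor allows: back off
            else:
                lo = v_d
        return v_d, shockley(v_d)

    v_d, i_d = operating_point(5.0, 160.0)
    print(f"Vd = {v_d:.3f} V, Id = {i_d * 1000:.1f} mA")

The diode current rises with V_d while the resistor current falls, so the two curves cross exactly once and the bisection is guaranteed to converge.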

It's legit to string LEDs in series (in parallel, there are some serious problems that come up), but you have to realize that any error in the forward voltage drop adds up. So if your datasheet says to expect about 1.8V, but your LEDs all exhibit only 1.6V, and you string 10 of them together, you'll have an error of 2V from your predicted value, which could be enough to cause problems. The solution is to assume the worst case (lowest drop) and choose your resistor for that. That'll make sure they're always safe, but will result in significantly lower brightness in the average case (and possibly non-uniform brightness if they really aren't matched well).
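A quick sanity check of that scenario in Python (10 LEDs, 1.8V predicted vs 1.6V actual drop, plus a made-up 24V supply and 20mA target):

    n, v_supply, i_target = 10, 24.0, 0.020

    v_nominal = 1.8 * n  # 18 V predicted string drop
    v_actual  = 1.6 * n  # 16 V if every LED runs on the low side

    # Resistor sized naively from the datasheet value:
    r = (v_supply - v_nominal) / i_target  # 300 ohms
    i_real = (v_supply - v_actual) / r     # what actually flows

    # Resistor sized for the worst case (lowest drop) instead:
    r_safe = (v_supply - v_actual) / i_target  # 400 ohms
    i_typ  = (v_supply - v_nominal) / r_safe   # current in the average case

    print(f"naive design: {i_real * 1000:.1f} mA")      # ~26.7 mA, 33% over target
    print(f"worst-case design: {i_typ * 1000:.1f} mA")  # 15.0 mA, dimmer but safe

That 2V error turns into a 33% overcurrent with the naive resistor, while the worst-case design trades it for a dimmer 15mA typical string.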
EDITED to make a small clarification.