In general, you should have one resistor per "string" of LEDs in series. The reason is that the current/voltage characteristic of an LED has process variance, and the relationship is exponential, so small process differences can make a big difference at the operating point. The idea behind the resistor is to swamp this exponential dependency with the linear behavior of the resistor, but if you put the LEDs in parallel, they each see the same voltage, and the exponential behavior still applies to a great extent (I can explain this in more detail, but the math starts to get a bit complicated - just trust me).
The extreme case is that, despite your best efforts, the LEDs are so mismatched that one ends up taking the entire set current, which would be several times its rated current (since you intended it to be split between all your LEDs). Poof.
Will it work? Probably. Then again, you might pop a few LEDs, too. You'll also have to adjust the resistor value compared to a single LED: since you'll have more current (rated LED current x number of parallel LEDs), you'll have to size the resistor to drop the right voltage at that multiplied current. You'll also want to round the value up some (and accept slightly dimmer LEDs) to avoid popping LEDs due to mismatch. Mismatch will be especially evident if the LEDs aren't the same color or type.
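For a rough idea of the arithmetic for the parallel case, here's a quick sketch. The supply voltage, forward voltage, LED current, and LED count are just assumed example numbers, not values from your circuit:

```python
# Sizing a single shared resistor for N parallel LEDs.
# All values below are assumed examples; substitute your own.

V_SUPPLY = 5.0      # supply voltage (V)
V_FORWARD = 2.0     # LED forward voltage at rated current (V)
I_LED = 0.020       # rated current per LED (A)
N_LEDS = 4          # number of LEDs in parallel

# Total current through the shared resistor is N times the per-LED current.
i_total = N_LEDS * I_LED

# The resistor must drop the remaining voltage at that multiplied current.
r_exact = (V_SUPPLY - V_FORWARD) / i_total
print(f"exact value: {r_exact:.1f} ohms")   # 37.5 ohms with these numbers

# Round *up* to the next standard value (e.g. 39 ohms) so any mismatch
# pushes the LEDs slightly dimmer rather than over their rating.
```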
If you're trying to control each LED as per your diagram, this won't work at all. Don't bother trying.
If you want to light several LEDs from the same control line, it's better to wire them in series. You then add up all their forward voltages (at the desired current) and set the resistor based on that (the math is as though it were a single LED with all the forward voltages added up). This greatly lessens the problem of mismatch. As long as you're close, you won't pop anything, but you may get slightly uneven brightness.
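As a sketch of that series calculation (again, the supply, forward voltages, and current are just example numbers):

```python
# Sizing the resistor for a series string of LEDs driven from one line.
# Example values only; substitute your supply, forward voltages, and current.

V_SUPPLY = 12.0                 # supply voltage (V)
V_FORWARD = [2.0, 2.1, 1.9]     # forward voltage of each LED at the desired current (V)
I_LED = 0.020                   # desired string current (A)

# Treat the string as one LED whose forward voltage is the sum of all of them.
v_total = sum(V_FORWARD)        # 6.0 V with these numbers

r = (V_SUPPLY - v_total) / I_LED
print(f"series resistor: {r:.0f} ohms")   # 300 ohms with these numbers
```

One constraint to keep in mind: the supply voltage has to exceed the summed forward voltages with some margin, otherwise there's nothing left for the resistor to drop and the string won't light properly.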
As for where you put the resistor, it doesn't matter. As long as it's in series with the LED, it can go anywhere.