But those failure modes that leak current past the failed LED definitely send that extra current to the remaining LEDs.
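As a back-of-the-envelope sketch of that point (my own illustrative numbers, not from this thread): in a series string fed from a fixed supply through one ballast resistor, a LED that fails shorted loses its forward drop, so the survivors carry more current.

```python
# Hypothetical series LED string, ideal-diode model: fixed supply,
# one ballast resistor, n LEDs each dropping v_forward volts.
def string_current(v_supply, v_forward, n_leds, r_ballast):
    """Current (A) through the string; clamps at zero if the LEDs can't turn on."""
    return max(0.0, (v_supply - n_leds * v_forward) / r_ballast)

# Assumed example values: 5 V supply, 1.5 V per IR LED, 68 ohm ballast.
i_normal = string_current(5.0, 1.5, 2, 68.0)  # both LEDs intact: ~29 mA
i_failed = string_current(5.0, 1.5, 1, 68.0)  # one LED shorted: ~51 mA
```

The surviving LED sees roughly 75% more current in this toy case, which is exactly the "extra sent to the remaining LEDs" scenario.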
I totally understand your point there, but that's an extremely rare and unlikely case, and it isn't really relevant to our case of LEDs in series vs. in parallel.
Let me ask you: if it were so much of an issue, why would all the engineers on the planet use IR LEDs in series in their IR pointer designs?
What would lead you to that conclusion? Is it simply the lack of a good driver or have you concluded that there is something inherently bad/slow about the processor in the Wii remotes? From what I have been able to gather, all of the important sensor information is available from the Wii controller and would therefore be accessible by code running on a much faster and more powerful processor. Namely, the one in the PC.
The extra processing happening in the wiimote itself, the way it's handled, and the BT protocol are the bottleneck here, and the sensor wasn't used to its maximum specs simply because it wouldn't have been useful.
That means with a wiimote you'll never reach the 4ms latency I achieve with my gun system, especially without the Wii's proprietary low-latency BT protocol (which can't really be used for anything other than the Wii itself or Wii emulation).
Sadly, no driver, however good, can fix that.
If one could, I don't think I would have bothered making a full hardware solution.
You might think latency isn't an issue, but lightguns are way more sensitive to latency than normal controllers if you are playing without a crosshair (as you always should).
You want the hit to always land on time and exactly where you are aiming, not where you were aiming 2 or more frames ago.
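To put rough numbers on that (assuming a 60 Hz display, which is my assumption, not something stated above): a 4ms input path resolves well within the current frame, while a path in the tens of milliseconds lands the shot where you were aiming a couple of frames ago.

```python
# Quick arithmetic: how many display frames behind is a given input latency?
FRAME_MS = 1000.0 / 60.0  # ~16.7 ms per frame at an assumed 60 Hz

def frames_behind(latency_ms, frame_ms=FRAME_MS):
    return latency_ms / frame_ms

lag_fast = frames_behind(4.0)   # ~0.24 frames: sub-frame, effectively on time
lag_slow = frames_behind(33.4)  # ~2 frames: the hit lands where you WERE aiming
```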
Heh. Actually, it might be. If the camera has an integrated filter, which I assume it does, then the plastic in front is just a window the camera can see through but you can't, there to hide the inner components, i.e. just for show. However, if the window further filters what gets in beyond the integrated filter, then that's a different story. What does your testing show?
I'll buy that they didn't want the LEDs in the "sensor bar" to be seen, but oh the cost.
What might be an interesting experiment to improve detection would be to remove any internal filter (assuming that's possible, and that it is indeed tuned for 940nm), replace it with an external one meant for 850nm, and use 850nm LEDs. Spectral response of virtually every silicon-based camera, even expensive ones intended for IR, drops by about 50% between 850nm and 940nm, and I'm honestly very skeptical that these are much different, especially given the price.
Not "might". The front plastic is an IR pass filter: the camera doesn't have a proper integrated filter, so it relies on that extra IR pass filter, the same way it relies on an external oscillator to drive it (later models need neither the extra hardware nor the extra filter). Please just try it yourself before making assumptions; you'll see how badly the cam behaves without the front filter, no matter the type of LED used or the cam sensitivity setting.
It can catch some LEDs that aren't in the IR spectrum, and will catch tons of noise and reflections that cannot be filtered out by software or cam settings.
That's just the way it was designed.
Having an IR pass filter that also hides the internals was most likely more cost effective than adding a proper filter in the cam itself.
The cost? I'm not sure I'm following you here: how would it have been more expensive or less efficient (for that particular use) to use simple 940nm LEDs rather than 850nm ones? The "sensor bar" (even if that's a weird name for a thing that doesn't have sensors) doesn't need an expensive IR pass filter or anything, and uses very basic low-power LEDs (in series).
I already tried what you are suggesting, and the results weren't good enough to be worth the switch to 850nm. And the whole point of using black LEDs is making them very discreet; using a wavelength we can see with the naked eye would be completely counterproductive.
Again, I have no idea why you think I am guesstimating here.
I have the feeling you think I don't know what I'm talking about here haha.
You think you'd get more knowledge from a quick online search than I have from years of study on the subject?

It's possible. But it's probably not necessary to store bitmapped image data, outside of the standard frame buffer, to get good blob coordinates; i.e. I would think the memory requirement for the calculations would remain constant, regardless of blob size. But once the sensor is driven to the point that there are no real borders to be found (everything above the low-level cutoff), then I'm sure it will have a bad time.
It does store its data in fixed registers that have a very limited size. Of course the computation of the blobs has its own limits, but the way the data is stored/prepared also limits it (newer, more powerful sensors have far fewer limits in that respect). And like I said, there are many other factors at play here, but I won't go into detail; this isn't the place to do so.
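For what it's worth, the constant-memory point can be sketched like this (my own illustration, not the actual camera firmware; the pixel stream and blob labels are assumed inputs): a centroid only needs running sums per blob, so memory per blob stays fixed no matter how large the blob grows, and the hard cap comes from the fixed number of register slots, not from blob area.

```python
MAX_BLOBS = 4  # the Wii remote camera reports at most 4 points

def track_blobs(pixels, threshold):
    """pixels: iterable of (x, y, intensity, blob_id) tuples.
    Returns {blob_id: (centroid_x, centroid_y)} using constant state per blob."""
    state = {}  # blob_id -> [sum_x, sum_y, count]: fixed size regardless of blob area
    for x, y, val, blob_id in pixels:
        if val < threshold:
            continue  # below the low-level cutoff
        if blob_id not in state:
            if len(state) >= MAX_BLOBS:
                continue  # register slots exhausted: extra blobs are dropped
            state[blob_id] = [0, 0, 0]
        s = state[blob_id]
        s[0] += x
        s[1] += y
        s[2] += 1
    return {b: (s[0] / s[2], s[1] / s[2]) for b, s in state.items()}

# Toy stream: one three-pixel blob plus one single-pixel blob.
stream = [(0, 0, 255, 0), (2, 0, 255, 0), (1, 2, 255, 0), (5, 5, 100, 1)]
centroids = track_blobs(stream, threshold=50)
```

The "bad time" above corresponds to everything clearing the threshold: the blobs merge, the slots fill, and the centroids stop meaning anything useful.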