Quick prototype demo for possible Sinden light gun improvement
TapeWormInYourGut:
Yeah, that's the point. The only difference between the two images is that one has a black screen and the other has a white screen. It's pretty easy to compare the images to find the screen; you just have to scan for that difference. OK, imagine it this way: place the images on top of each other. Now, what parts are the same on both images? The parts that aren't are the screen. For example, you know his hand is not part of the screen because his hand is the same color in both images. Only the screen will have polar opposite colors when comparing the images.
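In pixel terms, that overlay comparison boils down to something like this (a minimal sketch in Python; the pixel values and threshold are purely illustrative):

--- Code: ---
# Hypothetical per-pixel test: a pixel belongs to the screen if its
# brightness changed a lot between the black-screen and white-screen
# captures, and to the room if it barely changed.
def is_screen_pixel(black_px, white_px, threshold=80):
    return abs(int(white_px) - int(black_px)) > threshold

# The hand reads about the same brightness in both shots...
print(is_screen_pixel(112, 118))  # False: not part of the screen
# ...while a screen pixel swings from near-black to near-white.
print(is_screen_pixel(10, 245))   # True: part of the screen
--- End code ---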
RandyT:
--- Quote from: TapeWormInYourGut on September 28, 2022, 05:11:09 pm ---The parts that aren't are the screen. For example, you know his hand is not part of the screen because his hand is the same color in both images. Only the screen will have polar opposite colors when comparing the images.
--- End quote ---

It's not "polar opposite colors"; I'm honestly not sure what that even means. Colors are just frequencies of light on the visible spectrum. Through the polarizer, the screen is simply missing as much light as the polarizer can block, which is exactly what black looks like, so nothing is solved. To make matters worse, now think about what happens in a room with subdued lighting. It just won't do what you think it will. But feel free to try it and prove me wrong ;)
TapeWormInYourGut:
Nothing is measuring wavelengths of light. These are regular cameras taking pictures. It is comparing RGB hex values between the pixels of two images. If the first image has 0x000000 for a given pixel, and the second image has 0xFFFFFF for that same pixel, then you can assume that the difference is caused by the polarizing lens. If the lens weren't blocking the screen's light, they'd both be 0xFFFFFF. Since we know the lens is only blocking light from the screen, we'd know that this pixel must be part of the screen. Do this for the entire image and you'd have the screen mapped out. There is no other way that a pixel would be nearly black in one image but white in the other, unless it were blocked by the polarizing lens. And since only the screen is being blocked, you'd know it's part of the screen.

I was also exaggerating about being polar opposites. It's not going to be 100% black and 100% white between the two images. The point is that they aren't even close in value. You can implement thresholds as well, since the brightness will vary slightly across the entire image. There is no point in talking about implementation when trying to provide a simplified view of how it would identify the screen.
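To make the whole-image version concrete, here is a minimal sketch, assuming Python with OpenCV, two aligned grayscale captures taken through the polarizer, and illustrative filenames and threshold:

--- Code: ---
import cv2

def screen_mask(black_frame, white_frame, threshold=80):
    # Screen pixels change dramatically between the two captures;
    # the rest of the room barely changes at all.
    diff = cv2.absdiff(white_frame, black_frame)
    # Thresholding absorbs camera noise and uneven screen brightness.
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return mask

# Hypothetical captures: one taken while the screen shows black,
# one taken while it shows white.
black = cv2.imread("capture_black.png", cv2.IMREAD_GRAYSCALE)
white = cv2.imread("capture_white.png", cv2.IMREAD_GRAYSCALE)
cv2.imwrite("screen_mask.png", screen_mask(black, white))
--- End code ---

The threshold is where the practical trade-off lives: too low and room flicker or sensor noise leaks into the mask, too high and dim regions of the screen drop out.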
RandyT:
--- Quote from: TapeWormInYourGut on September 28, 2022, 08:52:15 pm ---Nothing is measuring wavelengths of light. These are regular cameras taking pictures. It is comparing RGB hex values between the pixels of two images. [...]
--- End quote ---

It's all theory. It assumes perfection in the function of all components and ignores how things actually work in practice. If a camera could pick up subtle shade and color changes with pixel precision, then the border of the standard system wouldn't need to be a bright color. But it does. Cameras also get very noisy in low-light conditions, and noise means inaccuracy. In fact, the troubleshooting wiki calls out a dim screen as a possible cause of jitter. In a perfect world, the camera would be able to pick up the subtle illumination of the leaked backlight in the black areas to define the screen. But this is not a perfect world.

Another thing standing in the way is that only LCDs use this type of polarization. What about OLED, CRT, and plasma screens? This suggestion simply adds more hardware and complexity, and hobbles display compatibility while, IMHO, delivering zero gain.
greymatr:
Yes, this is basically the idea that I had; sorry I didn't explain it more fully.

As I understand it, the Sinden system uses a white border because the distortion of that border tells it how the camera on the end of the gun is viewing the screen, which lets it work out where the gun is pointing on the screen. If it were looking dead-on at the screen, the border would be a rectangle; at any other angle it becomes a skewed quadrilateral. So what I was showing is that if we can work out where the screen is by taking the difference between the two images, then we can work out the same distortion, and we no longer need the white border showing on the screen. That is what would be gained by doing this. I just don't like the white border idea.

I think Gun4IR may be the better system, but then you have to mount and power the infrared LEDs. And you are right: I have an LCD screen, that is what I was working with, and it wouldn't work for other display types.
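For what it's worth, here is a rough sketch of that second step, again assuming Python with OpenCV and the mask from the differencing sketch above; the screen resolution and the corner handling are simplified:

--- Code: ---
import cv2
import numpy as np

def aim_point(mask, screen_w=1920, screen_h=1080):
    # The largest blob in the difference mask should be the screen.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    screen = max(contours, key=cv2.contourArea)
    # Approximate the outline with four corners; viewed at an angle,
    # the rectangular screen appears as a skewed quadrilateral.
    quad = cv2.approxPolyDP(screen, 0.02 * cv2.arcLength(screen, True), True)
    if len(quad) != 4:
        return None  # screen not cleanly detected in this frame
    # NOTE: a real implementation must first sort the corners into a
    # consistent order (e.g. clockwise from top-left); omitted here.
    src = quad.reshape(4, 2).astype(np.float32)
    dst = np.float32([[0, 0], [screen_w, 0],
                      [screen_w, screen_h], [0, screen_h]])
    # The homography undoes the perspective distortion, which is the
    # same job the white border does in the stock Sinden approach.
    h = cv2.getPerspectiveTransform(src, dst)
    # With the camera bore-sighted down the barrel, the image centre
    # is the aim point; map it into screen coordinates.
    cx, cy = mask.shape[1] / 2.0, mask.shape[0] / 2.0
    pt = cv2.perspectiveTransform(np.float32([[[cx, cy]]]), h)
    return pt[0][0]  # (x, y) in screen pixels
--- End code ---

This would have to run continuously as the gun moves, which is where the camera-noise and display-type objections above really bite.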