Guessing from reading this that you could probably trick the radeon driver in Windows into taking the lrmc-generated modelines from my new switchres script, and so use your method in Windows instead of xrandr in Linux. This sounds quite interesting. Do you think other Windows graphics drivers can be tricked this way, or have they been already? How exactly would I need to go about adapting switchres to do this when run in Windows? Given the amazing functionality I'm getting from switchres in Linux now, generating pretty good modelines dynamically without relying on .ini files, it seems this would be great combined with your ability to push modelines into the radeon driver in Windows. If only this could be done for even more video cards in Windows too, then we'd really have something extra interesting. If we can make switchres work exactly the same in both Windows and Linux, that would be great, and at least with the radeon driver this seems very possible.
I keep following this thread; it's getting really interesting, though I've just become a father this week and it's really hard to catch up.
Now that you bring back that post about the 'loader' thing I had in mind, I see that it's exactly what you have succeeded in implementing for Linux, it's fantastic. Think of the endless possibilities of using dynamic modelines: for instance, you could write a new 'advv' clone that lets you find your monitor's ranges and center/tweak modes, and feed the results back to lrmc so it can create even better modelines for your hardware. This is why I made the Arcade_OSD program, to test this functionality, though it will only work with my hacked Catalyst (CRT_Emudriver). I understand you've used a video mode DB to get rid of the inis, good.
The same scheme would work for Windows, that's for sure, but it's complicated to make a general method for all cards, as I'll explain. Windows video drivers only parse the registry for custom modelines at startup. That's why you need to restart the system all the time to test changes... annoying. If only we could reset the driver, unloading and reloading it so it went through its initialization routines again and re-read the registry keys after we modify them, there would be a chance to get it. As it happens, there is a documented way of doing this! Here it is: http://msdn.microsoft.com/en-us/library/ff568527%28VS.85%29.aspx
Basically, it works by setting 640x480 at 16 colors (4 bits) and immediately restoring the original mode. This works because 640x480x16 is usually implemented by Windows' default video driver, so by invoking it the card-specific video driver gets unloaded from memory. After that, to get it working I need to ask Windows again for the available video modes. It's stable, works really well and is reasonably fast. Unfortunately, I've only got it working with my hacked Catalyst 6.5, and (very strangely) only when the number of defined modelines is big enough (I still have to find the reason for this). No luck with Catalyst 8.x on my office computer, nor with ForceWare on my laptop, though I haven't tested those much. I believe, as the article says, it's because these drivers have native support for 640x480x16 colors, so they never get unloaded.
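For anyone who wants to experiment, here's a minimal sketch of the trick in plain Win32 C (my actual code differs; as said above, whether the 4-bpp mode change really unloads the driver depends on the card and driver):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DEVMODE dm;

    /* Step 1: switch to 640x480 at 4 bpp (16 colors). This mode is
       normally served by Windows' built-in VGA driver, which is what
       forces the vendor driver to be unloaded from memory.
       CDS_FULLSCREEN makes the change temporary (not saved). */
    ZeroMemory(&dm, sizeof(dm));
    dm.dmSize       = sizeof(dm);
    dm.dmPelsWidth  = 640;
    dm.dmPelsHeight = 480;
    dm.dmBitsPerPel = 4;
    dm.dmFields     = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL;

    if (ChangeDisplaySettings(&dm, CDS_FULLSCREEN) != DISP_CHANGE_SUCCESSFUL) {
        fprintf(stderr, "couldn't switch to 640x480x16 colors\n");
        return 1;
    }

    /* Step 2: restore the mode stored in the registry. The vendor
       driver gets reloaded and runs its initialization again, reading
       whatever modeline keys we wrote beforehand. */
    ChangeDisplaySettings(NULL, 0);

    /* Step 3: re-enumerate the modes the driver now exposes. */
    ZeroMemory(&dm, sizeof(dm));
    dm.dmSize = sizeof(dm);
    for (int i = 0; EnumDisplaySettings(NULL, i, &dm); i++)
        printf("%lux%lux%lu @ %lu Hz\n", dm.dmPelsWidth, dm.dmPelsHeight,
               dm.dmBitsPerPel, dm.dmDisplayFrequency);

    return 0;
}
```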
However, it's just a matter of testing and investigating; unfortunately I have so little time for this that I hope someone else will pick this stuff up and do it.
There's a limitation to this method: you can modify existing modes, but not create new ones on the fly (for that you need a restart). The reason, I believe, is that Windows only requests the list of available video modes from the driver during the startup sequence, so it won't update its internal mode table until we restart. But this limitation can easily be overcome by preparing a general mode table with the needed resolutions (no vfreq defined) and tweaking the chosen modeline before calling the emulator, following the loader-wrapper scheme (sketched below).
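To illustrate the loader-wrapper idea, something along these lines. The two helpers are placeholders: writing the timing values is completely driver-specific (for Catalyst they'd be the registry values VMMaker writes), and reset_driver() would be the 640x480x16 trick above. The game, resolution and vfreq are made up for the example:

```c
#include <windows.h>

/* Placeholder: poke the driver-specific registry values so that the
   pre-installed <w>x<h> placeholder mode carries the exact timings
   (vfreq) this game needs. Entirely driver-dependent. */
static BOOL write_modeline_to_registry(int w, int h, double vfreq)
{
    (void)w; (void)h; (void)vfreq;
    return TRUE; /* stub */
}

/* Placeholder: the 640x480x16-colors unload/reload trick from above. */
static void reset_driver(void) { }

int main(void)
{
    STARTUPINFOA si;
    PROCESS_INFORMATION pi;
    char cmdline[] = "mame.exe mslug -nothrottle";

    /* 1. Retune the placeholder mode to the game's exact refresh. */
    if (!write_modeline_to_registry(320, 224, 59.19))
        return 1;

    /* 2. Make the driver reload and pick up the new timings. */
    reset_driver();

    /* 3. Run the emulator on the freshly tweaked mode, then wait. */
    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    if (!CreateProcessA(NULL, cmdline, NULL, NULL, FALSE, 0,
                        NULL, NULL, &si, &pi))
        return 1;
    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}
```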
At this point, I'd really consider having a look at lrmc's method for calculating modelines. It's funny, because I wrote VMMaker from scratch, figuring out all the calculations myself, and lrmc is still a black box to me; if only I had more time to study it. I'm convinced the approach I use in VMMaker is better. However, this is a secondary matter.
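Just to show what I mean by the calculations, the common skeleton (whether in lrmc, VMMaker or any other tool) is: fix the horizontal total, choose vtotal so that hfreq = vfreq x vtotal falls inside the monitor's range, then derive the pixel clock. The blanking fractions below are illustrative round numbers, not VMMaker's real values:

```c
#include <stdio.h>

/* Rough modeline skeleton for a CRT: the blanking fractions here are
   illustrative only; a real tool derives them from the monitor specs. */
static void calc_modeline(int hactive, int vactive, double vfreq,
                          double hfreq_min, double hfreq_max)
{
    /* Horizontal blanking ~20% of the scanline, rounded to 8 pixels. */
    int htotal = ((int)(hactive / 0.80) + 7) & ~7;

    /* Total lines: active lines plus typical vertical blanking; clamp
       so hfreq = vfreq * vtotal stays inside the monitor's range. */
    int vtotal = vactive + (int)(vactive * 0.05) + 3;
    double hfreq = vfreq * vtotal;
    if (hfreq < hfreq_min) vtotal = (int)(hfreq_min / vfreq + 0.5);
    hfreq = vfreq * vtotal;
    if (hfreq > hfreq_max) { printf("out of range\n"); return; }

    /* Split the blanking into front porch / sync pulse / back porch. */
    int hblank = htotal - hactive;
    int hss = hactive + hblank / 5;        /* front porch ~20% of blank */
    int hse = hss + (hblank * 2) / 5;      /* sync pulse  ~40% of blank */
    int vblank = vtotal - vactive;
    int vss = vactive + vblank / 3;
    int vse = vss + 3;                     /* 3-line vsync, typical CRT */

    double dotclock = hfreq * htotal / 1e6;  /* MHz */

    printf("Modeline \"%dx%d@%.2f\" %.3f %d %d %d %d %d %d %d %d "
           "-hsync -vsync\n",
           hactive, vactive, vfreq, dotclock,
           hactive, hss, hse, htotal, vactive, vss, vse, vtotal);
}

int main(void)
{
    /* e.g. a standard-resolution arcade monitor, hfreq 15250-15750 Hz */
    calc_modeline(320, 240, 60.0, 15250.0, 15750.0);
    return 0;
}
```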
Definitely, your D9800 is a fantastic monitor; it's incredible that it has a continuous range. But it's hard for me to imagine how that works on the hardware side. At the end of the day the intervals must exist, because porches and sync pulses need to get smaller as hfreq increases, so there should be jumps somewhere (maybe that's why you experience centering shifts at certain points). It seems to work like an automatic car: you just put your foot on the accelerator, but the car still changes gears inside.
I'm also concerned about the vsync stuff in Linux. Now that I've tested SDLMame for Windows, which I believe shares the same code with the Linux version, I think you should be able to turn throttle off as I do, and if you still get full speed it's because vsync is not really working. This also happened to me when I used '-video soft' instead of opengl. Keep in mind that vsync is a must: even if you can get really accurate vfreqs (you'll normally be a couple of hundredths of a Hz above or below in the best cases), if throttle is on, Mame will keep using its internal clock, and that produces regular hiccups in scrolls. We don't want Mame to do that; we want it to hang off our vfreq so it's as smooth as possible.
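For reference, the combination I mean would be something like the line below (switch names as in the SDLMame builds I've tried; check mame -showusage on your version):

```
mame mslug -video opengl -nothrottle -waitvsync -syncrefresh
```

If I understand the switches right, -nothrottle stops Mame's internal clock from pacing the game, while -waitvsync and -syncrefresh make it follow the monitor's retrace instead.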
There's a lot I should check. First of all, I'm not sure what version of SDL I have, or whether SDLMame is using SDL in any way, or what role opengl plays in all this, so I need to clarify some concepts for myself. Also, how to run perl scripts in Windows, etc.