Software Support > GroovyMAME
Switchres: modeline generator engine
bitbytebit:
--- Quote from: elvis on October 20, 2010, 11:34:11 pm ---
--- Quote from: bitbytebit on October 20, 2010, 11:10:12 pm --- What version is your xorg-server? Mine is 1.7.7
--- End quote ---
root@lowboy:~# dpkg -l xorg xserver-xorg-core
ii xorg 1:7.5+5ubuntu1 X.Org X Window System
ii xserver-xorg-core 2:1.7.6-2ubuntu7.3 Xorg X server - core server
1.7.6 according to that.
The supplied version of nv_drv was 2.1.15. I've just tried 2.1.18 (latest stable release, dated July 2010) and it gave the same error. I tried to build from git, but it said a required tool (specifically xorg-macros) was too old.
Next stop might be dist-upgrading to Maverick (10.10) and trying again. According to packages.ubuntu.com it uses xserver-xorg-core 1.9.0.
--- End quote ---
Yeah, I can't see any direct xrandr support in the drivers; it looks like xrandr talks to the xorg-server directly, which calls the driver's get_modelines function from there, so the xrandr stuff is handled completely within the xorg-server, I guess. Hopefully it's just that xrandr didn't work as well in that version as it does in 1.7.7; I think my xorg-server from the default Gentoo install possibly didn't work until I updated to the vanilla 1.7.7 one. Nice that you're getting version 1.9.0, I'd be interested to hear whether that fixes it. I really want to try the newest version, but I'm not sure when Gentoo is going to move to a version that high, and I've spent a lot of time building this box with Gentoo, though it's tempting seeing that Ubuntu is using the newest one.
elvis:
Upgrading to Maverick as I type.
There are also the "nouveau" drivers now too, which do support my TNT2 (and apparently work fine in 2D/"soft"). However, they use this new-fangled kernel modesetting stuff, so I'm not sure where to hack the minimum pclocks (or even if I can!). But that will be my next step if Maverick doesn't play ball.
[edit]
Maverick upgrade is in. Same issues as before with the nv_drv.so (even the latest from git). It definitely looks like that driver can't understand XRandR requests.
nouveau certainly understands them (I can see the resize requests coming through in the log file), but the pixel clock limits must be too high, because it's throwing errors.
So, now I'm off to hack the nouveau drivers (and hopefully just the userspace stuff, and not kernel stuff too).
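For reference, the reason a driver rejects these modes is easy to compute: the dot clock implied by a modeline is htotal × vtotal × vfreq, and 15 kHz arcade modes land well below typical desktop minimums. A quick sketch (the 25 MHz floor and the exact totals are illustrative assumptions, not values from any particular driver):

```python
def dotclock_mhz(htotal, vtotal, vfreq):
    """Pixel clock in MHz implied by a modeline's totals and refresh."""
    return htotal * vtotal * vfreq / 1e6

# A standard 640x480@60 mode (totals 800x525) needs ~25.2 MHz...
std = dotclock_mhz(800, 525, 60.0)
# ...but a 15 kHz mode like 640x240@60 (totals ~832x262) only ~13.1 MHz,
low = dotclock_mhz(832, 262, 60.0)
MIN_PCLOCK_MHZ = 25.0  # illustrative: many desktop drivers reject clocks this low
print(round(std, 1), round(low, 1), low < MIN_PCLOCK_MHZ)
```

This is why hacking the driver's minimum-pclock check is necessary at all: the geometry of low-resolution CRT modes forces the clock under the floor.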
Maybe it's time to order an old ATI card off eBay? :)
Calamity:
--- Quote from: bitbytebit on October 20, 2010, 02:09:45 am ---Guessing from reading this, that you could probably trick the radeon driver in Windows to take the lrmc generated modeline of my new switchres script and use your method in Windows instead of xrandr in Linux. This sounds quite interesting, do you think other Windows graphics drivers can be tricked this way, or have they, how exactly would I need to go about adapting switchres to do this when run in windows? From the amazing functionality I'm having in Linux with switchres now, being able to generate modelines dynamically and get pretty good modelines and not rely on .ini files, seems this would be great now combined with your ability to push modlines into the radeon driver in Windows. If only this could be done for even more video cards in Windows too, then we'd really have something extra interesting. If we can make switchres work exactly the same in both Windows and Linux, that would be great, and I think at least with the radeon driver this seems very possible.
--- End quote ---
Hi bitbytebit,
I keep following this thread, it's getting really interesting, though I've just become a father this week and it's really hard to catch up ;D
Now that you bring back that post about the 'loader' thing I had in mind, I see that it's exactly what you have succeeded in implementing for Linux; it's fantastic. Think of the endless possibilities of using dynamic modelines: for instance, you could write a new 'advv' clone to allow you to find your monitor ranges, center/tweak modes, and use the results as feedback for lrmc so it can create even better modelines for your hardware. This is why I made this Arcade_OSD program, to test this functionality, though it will only work with my hacked Catalyst (CRT_Emudriver). I understand you've used a video mode DB to get rid of inis, good.
The same scheme would work for Windows, that's for sure, but it's complicated to make a general method for all cards, as I'll explain. Windows video drivers just parse the registry for custom modelines at startup. That's why you need to restart the system all the time to test changes... annoying. If only we could reset the driver by unloading and reloading it, so it went through its initialization routines again and read the registry keys after we modify them, there would be a chance to get it. Incidentally, there is a documented way of doing this! Here it is:
http://msdn.microsoft.com/en-us/library/ff568527%28VS.85%29.aspx
Basically, it works by setting 640x480x16 colors (4 bits) and immediately restoring the original mode. This is because 640x480x16 is usually implemented by Windows' default video driver, so by calling it, the specific video driver is unloaded from memory. To get it working, after that I need to ask Windows for the available video modes. It's stable, works really well and is reasonably fast. Unfortunately, I only got it working with my hacked Catalyst 6.5, and (very strangely) only when the number of defined modelines is big enough (I still have to find the reason for this). No luck with Catalyst 8.x on my office computer, nor with ForceWare on my laptop, but I haven't tested it that much. I believe, as the article says, it's because these drivers have native support for 640x480x16 colors, so they never get unloaded :( However, it's a matter of testing and investigating it; I unfortunately have so little time to do this, I hope someone will use this stuff to do it.
There's a limitation to this method: you can modify existing modes, but not create new ones on the fly (you need a restart). The reason, I believe, is that Windows internally only asks the driver for its available video modes during the startup sequence, so it won't modify its internal mode table until we restart. But this limitation can easily be overcome by preparing a general mode table of the needed resolutions (no vfreq defined) and tweaking the chosen modeline before calling the emulator, following the loader-wrapper scheme.
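That loader-wrapper idea (pick a pre-registered resolution, then retune it for the game's refresh just before launch) can be sketched minimally. Everything here is illustrative: the table entries, the nearest-match heuristic, and the function name are assumptions, not the actual VMMaker or switchres code:

```python
def pick_and_tweak(mode_table, want_w, want_h, want_vfreq):
    """Loader-wrapper sketch: pick the closest pre-registered resolution,
    then recompute its pixel clock so the refresh matches the game.
    mode_table entries: (width, height, htotal, vtotal, pclock_hz)."""
    # nearest resolution by simple distance (illustrative heuristic)
    w, h, ht, vt, _ = min(mode_table,
                          key=lambda m: abs(m[0] - want_w) + abs(m[1] - want_h))
    # keep the geometry, retune only the dot clock for the desired vfreq
    new_pclock = ht * vt * want_vfreq
    return (w, h, ht, vt, new_pclock)

table = [(640, 480, 800, 525, 31_500_000), (320, 240, 400, 262, 6_500_000)]
print(pick_and_tweak(table, 320, 224, 57.55))
```

The key point from the post is that only the clock (and hence vfreq) changes; the mode's resolution entry already exists in Windows' internal table, so no restart is needed.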
At this point, I'd really consider having a look at lrmc's method for calculating modelines. It's funny, because I wrote VMMaker from scratch, figuring out all the calculations, and for me lrmc is still a black box; if only I had more time to study it. I'm convinced the method I use in VMMaker is better. However, this is a secondary matter.
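Neither lrmc's nor VMMaker's actual algorithm is spelled out in this thread, but the basic CRT timing arithmetic any such tool must perform can be sketched. The blanking fractions and the 25-line vertical blank below are illustrative assumptions only:

```python
def simple_modeline(hactive, vactive, vfreq,
                    front=0.02, sync=0.10, back=0.06, vblank_lines=25):
    """Generic CRT modeline math (NOT lrmc's or VMMaker's real algorithm).
    Horizontal blanking fractions and vertical blanking lines are
    illustrative assumptions."""
    vtotal = vactive + vblank_lines
    hfreq = vtotal * vfreq                 # scanlines per second
    # pad the active width with blanking, rounded to an 8-pixel character clock
    htotal = int(round(hactive / (1.0 - front - sync - back) / 8)) * 8
    pclock = htotal * hfreq
    return htotal, vtotal, hfreq, pclock

ht, vt, hf, pc = simple_modeline(640, 240, 60.0)
print(ht, vt, round(hf), round(pc / 1e6, 2))   # a ~15.9 kHz, ~12.5 MHz mode
```

The real tools differ precisely in how they choose those blanking values against the monitor's limits, which is the "black box" being discussed.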
Definitely, your D9800 is a fantastic monitor; it's incredible that it has a continuous range. But it's hard for me to imagine how this works on the hardware side. At the end of the day the intervals must exist, because porches and sync pulses need to get smaller as hfreq increases, and there should be jumps somewhere (maybe that's why you experience centering shifts at some points). It seems to work like an automatic car: you just put your foot on the accelerator, but the car does change gears inside.
I'm also concerned about the vsync stuff in Linux. Now that I've tested SDLMame for Windows, which I believe is the same code as for Linux, I think that you should be able to turn throttle off as I do, and if it runs at full speed, it's because vsync is not really working. This also happened to me when I used 'video software' instead of opengl. Bear in mind that vsync is a must: even if you can get really accurate vfreqs (you'll normally be a couple of hundredths of a Hz above or below in the best cases), if throttle is on, Mame will keep its internal clock on, and that will produce regular hiccups in scrolls. We don't want Mame to do that; we want it to hang off our vfreq to be as smooth as possible.
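The hiccup argument can be made concrete: if MAME throttles to the game's original refresh while the monitor actually runs at a slightly different rate, one frame gets dropped or repeated roughly every 1/|Δf| seconds. A small illustration (the 0.02 Hz error matches the accuracy figure mentioned above):

```python
def hiccup_period_s(game_vfreq, monitor_vfreq):
    """Seconds between dropped/repeated frames when the emulator throttles
    to the game's clock while the monitor refreshes at a slightly
    different rate."""
    delta = abs(game_vfreq - monitor_vfreq)
    return float('inf') if delta == 0 else 1.0 / delta

# 0.02 Hz of error -> a visible scroll hiccup roughly every 50 seconds
print(hiccup_period_s(60.00, 60.02))
```

With vsync driving the frame pacing instead, the emulator locks to the monitor's real vfreq and the periodic hiccup disappears entirely, which is the point being made.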
There's a lot I should check. First of all, I'm not sure what version of SDL I have, or whether SDLMame is using SDL in any way, or what the role of opengl is in all this, so I need to clarify some concepts for myself. Also, how to run perl scripts in Windows, etc.
bitbytebit:
--- Quote from: Calamity on October 21, 2010, 05:12:13 am ---Hi bitbytebit,
I keep following this thread, it's getting really interesting, though I've just become a father this week and it's really hard to catch up ;D
[...]
--- End quote ---
Congratulations on becoming a father :).
Yeah, it's nice now without using inis. It's not really a video DB, actually; I'm just quickly doing a mame -listdev game and grabbing the display section. I've been able to start testing multiple monitor types much more easily, just changing the command line to cga or ega, so it has been interesting to compare each one's limitations and what happens when it's more restricted. That Windows stuff sounds promising; glad there is a possible way to somewhat do it there.
I suspect your methods are probably better than lrmc's, and I can see what you're saying about the gear changing as you climb higher in horizontal frequency. I think there needs to be some sort of dynamic altering of the porches as the values get higher, instead of what it does now with multiple static display sets with fixed values for each. I've been finding that, basically, as it moves higher in frequency, the divisor it uses has to be larger for each of the porches. I'm still learning how that works, but from what I can tell, if it dynamically generated the display values for a D9800 depending on the screen size and refresh rate you asked for, it could very likely always center the screen. I actually have it pretty much doing this for most games with my current set, which is chunked into ranges like 15.250-17.499, 17.500-20.000, 20.001-23.899, 23.900-25.500, 27-30.99, 31-32, 32.001-40. There are some places in there which need perfecting, but that's more for really odd games; most really are centered and full without borders at the right refresh rate. So if that could be adjusted dynamically, without all those preset areas, then possibly the oddball games would work better too.
I've been running into one issue with a few games that are widescreen, like the DBZ game, which seems to be a 1.5 aspect ratio. I have some odd recalculating of the framesize which kind of makes it work, but I haven't been satisfied with any method to automatically handle games that go above the normal 1.3333333 monitor aspect ratio.
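Selecting which of those preset hfreq chunks a game's mode falls into can be sketched like this. The ranges are copied from the list above; the lookup logic and the treatment of gaps (e.g. 25.5-27 kHz) as "no match" are assumptions for illustration:

```python
# bitbytebit's D9800 hfreq chunks (kHz), as listed above
RANGES = [(15.250, 17.499), (17.500, 20.000), (20.001, 23.899),
          (23.900, 25.500), (27.000, 30.990), (31.000, 32.000),
          (32.001, 40.000)]

def pick_range(vtotal, vfreq, ranges=RANGES):
    """Return the preset hfreq chunk that a mode's line rate lands in,
    or None if it falls into a gap between chunks."""
    hfreq_khz = vtotal * vfreq / 1000.0
    for lo, hi in ranges:
        if lo <= hfreq_khz <= hi:
            return (lo, hi)
    return None

print(pick_range(265, 60.0))   # ~15.9 kHz -> the first chunk
```

Replacing this table lookup with a function that computes porch values continuously from hfreq is exactly the "dynamic altering" being proposed.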
I found something odd about the waitvsync setting, though; I'm starting to think my video card can't handle it properly, or the X drivers for my video card/OpenGL implementation can't. I got vsync working on another computer using an nvidia Quadro FX 3400; my arcade system uses the newer ArcadeVGA 3000, which is just a Radeon HD 2600 I guess, and has the RV630 on it. When I start mame on the Radeon it always says "OpenGL: FBO not supported" and also something about IRQs not being enabled, falling back to busy waits: 2 0. I am guessing this is the problem: the card isn't able to do interrupts with OpenGL, so it can't do the vsync stuff through OpenGL correctly. I'm thinking of using an nvidia Quadro and testing whether it can do the low pixel clocks, which I'm guessing it can? It's sad if it's really true that the ArcadeVGA doesn't support waitvsync in Linux, when it's supposed to be specifically for arcade systems, and yet this older nvidia card can do it just fine. Although that's also with the proprietary nvidia drivers, so I'm not sure if it's the same for the open ones, which I'd have to use for the lower dotclock stuff. I'm now planning on possibly testing these nvidia cards and seeing if they are at all better than the Radeon; it just doesn't seem to make sense that they would be, but if the vsync-to-refresh-rate works on them in Linux and the dotclocks can go low, then I guess they pretty much are more suited for this.
In Windows, just get ActivePerl; it installs from the http://www.activestate.com/activeperl website pretty easily, and then you can run perl scripts in Windows just like any other program.
Calamity:
--- Quote from: bitbytebit on October 21, 2010, 10:18:54 am ---I found something odd about the waitvsync setting though, I'm starting to think it's my video card can't handle it properly or the X drivers for my video card/opengl implementation. I got the vsync working on another computer using an nvidia quadro fx 3400, my arcade system uses the newer arcadeVGA3000 which is just a radeon 2600HD I guess and has the RV630 on it.
--- End quote ---
I'm afraid the problem is the Radeon driver not implementing waitvsync properly, as the folks in the Spanish forum thought; with nVidia they were able to make waitvsync work without problems.
Regarding porches and sync pulses, I am positive they stay constant through the range covered by my monitor, 15.625 - 16.670 kHz, so in order to keep my modes centered I have to use the same values all the time. This stuff is related to the speed of the electron beam, which is constant in my monitor regardless of hfreq. There's a visible effect due to this: when you increase hfreq, the picture becomes narrower, as the extra lines, since the beam speed is constant, need to come from somewhere: the sides of the picture! Your monitor must have more complex electronics, able to drive the electron beam at different speeds according to some previously defined hfreq intervals (you would notice the explained narrowing effect somewhere), or... have a really progressive beam speed, which would allow interpolating the porch values for the whole range.
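This constant-beam-speed argument can be put in numbers: if horizontal blanking (porches plus sync) takes a fixed amount of time per line, then as hfreq rises the line period shrinks while the blanking does not, so the fraction of each line left for the picture shrinks too. A sketch (the 12 µs blanking figure is an illustrative assumption for a 15 kHz CRT):

```python
def active_width_fraction(hfreq_khz, blank_us=12.0):
    """Fraction of each scanline that is picture, assuming horizontal
    blanking (porches + sync) takes a FIXED time because the beam
    speed is constant. blank_us=12.0 is an illustrative figure."""
    line_period_us = 1000.0 / hfreq_khz
    return 1.0 - blank_us / line_period_us

# over the 15.625 - 16.670 kHz range quoted above, the active fraction
# (and so the picture width) shrinks slightly as hfreq rises
print(round(active_width_fraction(15.625), 4),
      round(active_width_fraction(16.670), 4))
```

On a monitor with interval-switched beam speeds, blank_us would jump at each interval boundary instead of staying fixed, which is the proposed explanation for the centering shifts.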