Switchres: modeline generator engine
Calamity:
Hi there,
I've been out for some days, so I'll try to catch up with the new info. I'd like to thank bitbytebit and the others again.
These days I've been thinking about the vfreq-ignoring issue with SDL 1.2. I believe there is a way to overcome it; maybe you've already tested this and I didn't follow. We can use a resolution table for xorg.conf without explicit vfreq values. This table would be created by condensing the whole Mame video mode list and omitting vfreq values, so the Mame inis would just include WxH values. For each of these resolutions we would create a sample modeline (say, 60 Hz) to be included in xorg.conf. Mame already knows the valid vfreq values for each game, so we just need to patch Mame so it invokes lrmc with the right values before it tries to switch video mode. A new "throw-away" modeline would be created, which should be made available to the Radeon driver by any means (could it overwrite the sample one in xorg.conf?). Now Mame would invoke the video mode switch without pushing vfreq. As there is just one modeline defined for this resolution, the one we just created, the proper video mode should be selected.
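To make the idea concrete, the condensed table might look something like this in xorg.conf, with one placeholder modeline per WxH (the identifier and the timing numbers below are only illustrative 15 kHz examples, not values from anyone's working setup):

--- Code: ---Section "Monitor"
    Identifier  "ArcadeMonitor"
    # One sample ~60 Hz modeline per resolution; Mame would invoke lrmc
    # to regenerate the timings per game before switching to the mode.
    Modeline "320x240" 6.70 320 336 368 426 240 244 247 262 -hsync -vsync
    Modeline "640x480" 13.10 640 664 728 840 480 481 484 521 interlace -hsync -vsync
EndSection
--- End code ---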
I'm not sure how this might collide with the built-in video mode list, but the general idea of having a fixed resolution table with default vfreq values, and tuning the existing mode to the requested refresh before switching video mode, is the approach I'd like to implement for Windows in the near future.
Calamity:
--- Quote from: bitbytebit on October 16, 2010, 12:38:09 pm ---I'm still trying to figure out, though, how to make sure the refresh rates are good enough for people with just arcade monitors, yet get them exact for people with multisync WG-type monitors that can do 15-40 kHz.
--- End quote ---
I've never touched a multisync monitor; I need to get one of these WG ones. I wonder if the 15-40 kHz sync range is continuous or has some gaps (I think it does). If you have several separate intervals, I think the best approach would be to consider each one as a single monitor with its own HfreqMin-HfreqMax valid interval, then choose the best one for the mode we want to produce, or the only one we have if it's a non-multisync monitor. Otherwise you'll end up with a lot of hardcoded conditionals for each particular case. The definitive approach is to produce our modeline for all of those virtual monitors: for some of them it will necessarily be degraded (doublescanned, interlaced-stretched, bad vfreq, bad res), depending on the valid hsync interval and the general conditions we set in the algorithm, so we'll get a score that accounts for this degradation. This score will be used to choose the best modeline and discard the others. I'd like to go deeper into this when I have some time. If you need any help with modeline calculation, just let me know.
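A minimal sketch of that scoring idea in C, assuming hypothetical names throughout (monitor_range, modeline, generate_modeline and the penalty weights are all made up for illustration, not existing lrmc code):

--- Code: ---#include <float.h>
#include <math.h>
#include <stddef.h>

/* One valid hsync interval, treated as its own "virtual monitor". */
typedef struct {
    double hfreq_min;   /* kHz */
    double hfreq_max;   /* kHz */
} monitor_range;

/* A candidate mode plus the compromises made to fit its range. */
typedef struct {
    int    hres, vres;
    double vfreq;
    int    doublescan, interlace;
    double score;       /* accumulated degradation, lower is better */
} modeline;

/* Hypothetical: build the best mode this range allows for the game,
 * setting doublescan/interlace/vfreq as the range forces them. */
extern modeline generate_modeline(int hres, int vres, double vfreq,
                                  const monitor_range *range);

/* Generate one candidate per range, penalize each degradation,
 * and keep the cheapest result. */
modeline best_modeline(int hres, int vres, double vfreq,
                       const monitor_range *ranges, size_t count)
{
    modeline best = {0};
    best.score = DBL_MAX;
    for (size_t i = 0; i < count; i++) {
        modeline m = generate_modeline(hres, vres, vfreq, &ranges[i]);
        m.score  = 0.0;
        m.score += m.doublescan ? 10.0 : 0.0;       /* softer picture  */
        m.score += m.interlace  ? 20.0 : 0.0;       /* flicker/stretch */
        m.score += 100.0 * fabs(m.vfreq - vfreq);   /* off-speed games */
        if (m.score < best.score)
            best = m;
    }
    return best;
}
--- End code ---

The nice property is that a plain CGA monitor is just the degenerate case of a single range, so the same code path handles both without special-casing.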
bitbytebit:
--- Quote from: Calamity on October 16, 2010, 07:10:52 pm ---
--- Quote from: bitbytebit on October 16, 2010, 12:38:09 pm ---I'm still trying to figure out, though, how to make sure the refresh rates are good enough for people with just arcade monitors, yet get them exact for people with multisync WG-type monitors that can do 15-40 kHz.
--- End quote ---
I've never touched a multisync monitor; I need to get one of these WG ones. I wonder if the 15-40 kHz sync range is continuous or has some gaps (I think it does). If you have several separate intervals, I think the best approach would be to consider each one as a single monitor with its own HfreqMin-HfreqMax valid interval, then choose the best one for the mode we want to produce, or the only one we have if it's a non-multisync monitor. Otherwise you'll end up with a lot of hardcoded conditionals for each particular case. The definitive approach is to produce our modeline for all of those virtual monitors: for some of them it will necessarily be degraded (doublescanned, interlaced-stretched, bad vfreq, bad res), depending on the valid hsync interval and the general conditions we set in the algorithm, so we'll get a score that accounts for this degradation. This score will be used to choose the best modeline and discard the others. I'd like to go deeper into this when I have some time. If you need any help with modeline calculation, just let me know.
--- End quote ---
I've asked the Wells Gardner support guy about the range: whether it's continuous and, if not, what the tolerance levels are. The lrmc author seems to have gone through this with the 9200, but that was different; the 9400/9800 seem to have extended the range to include SVGA and also seem more robust about handling that entire range.

lrmc is actually built to do this: you can define however many ranges you want, and it scores each one and usually seems to pick the right one. That's a good thing about lrmc; the catch is that you can't really define the ranges in the config file, unlike the hardcoded linked-list structures used for the -cga/-ega/-d9200 etc. presets. I actually think the way lrmc is set up should in theory work well for basic CGA monitors where you know the values, as long as a user can input the complete setup in the config or on the command line.

There are details in the calculations for the 9200/9800 I'm still trying to work out, and the Wells Gardner guy will hopefully help with that by confirming the true limits. Although I can make it do anything in that range and it seems to work fine, I just want to make sure they approve. I think the main issues with lrmc's calculations are the centering and front/back porch calculations, and that may only show up with these Wells Gardner type monitors, since they are tricky to pin down: they are all types of monitors in one.
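As a rough illustration of making those ranges user-definable instead of hardcoded, a parser along these lines could turn a config or command-line string like "15.25-16.5,23.9-31.5" into the range list used in the sketch above (parse_ranges is hypothetical, not an existing lrmc function, and it reuses the monitor_range struct from before):

--- Code: ---#include <stdio.h>
#include <string.h>

/* Parse "15.25-16.5,23.9-31.5" into hfreq ranges (kHz).
 * Returns the number of ranges parsed, or -1 on a malformed entry. */
int parse_ranges(const char *spec, monitor_range *out, int max)
{
    int n = 0;
    char buf[256], *tok;
    strncpy(buf, spec, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    for (tok = strtok(buf, ","); tok && n < max; tok = strtok(NULL, ",")) {
        if (sscanf(tok, "%lf-%lf", &out[n].hfreq_min, &out[n].hfreq_max) != 2
            || out[n].hfreq_min > out[n].hfreq_max)
            return -1;
        n++;
    }
    return n;
}
--- End code ---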
The X windows idea is interesting: having mame create the modelines itself. The tricky part, at least from what I've seen, is that the machinery for dynamically adding modelines to X isn't stable; it's kind of slow and often crashes X. I've seen this using xrandr, and I've also read threads of people saying that xrandr will often either not work or crash X randomly on them.

The other issue is that window centering and the general management of SDL onto the X display doesn't work as well outside of letting SDL do it. I can use xrandr, but then I have to really work to get the games positioned inside the visible screen area, since X likes to use a virtual window when changing modes to smaller display sizes. I'm guessing SDL usually takes care of all this, and without it doing that part things just seem much less stable and smooth.

SDL 1.3 looks really exciting because of that, and because of the vsync support it seems to have, although I'm not sure if it's 100% perfect or as good as triplebuffer (though I've read that even triplebuffer seems slightly imperfect at times for some people). I also saw logs in the Soft 15Khz threads which seem to indicate that in Windows it can actually use the refresh rate of modelines; at least it prints out that information, which it also does for SDL 1.3 but not for SDL 1.2. So I assumed it was able to at least pick the refresh somewhat, which is much better than SDL 1.2 ignoring it entirely and X just picking the first modeline that matches the width and height.

The best solution does sound like creating the modelines dynamically for X, if only X were stable in that area. According to the X documentation the right way is to use the xrandr protocol and add/choose modes that way, and you would think SDL could at least do that for us, so we wouldn't need modelines or have to mess with them in mame at all. But SDL currently does it the most stable way, and that seems to require modelines. I've looked at the idea of porting SDL 1.3 code into 1.2 just to get vsync, and/or taking code from SDL 1.3 and including it in mame. Both options look really complicated, would require a lot of heavy lifting, and would end up being hard to maintain through releases and outdated as soon as SDL 1.3 is done. It really seems like SDL 1.3 is close, with just the mouse hiding left to do, but then again I'm not sure; from the mailing lists it looks like it may settle down sooner or later into a stable, fully working release.

Is using SDL on Windows a bad thing compared to the other options there? You would think that SDL 1.3 would fix both Linux and Windows, if it also works well in Windows, possibly with some fixes added to behave like triplebuffer or proper vsync. I've also read that the vsync issues are really an X server problem, and that a future release will have some option for vsync in the X server. I'm not sure if that's in newer versions; distributions currently ship 1.7.7 and upstream is on 1.9.0, so I'm not sure if it's been done yet (and 1.9.0 doesn't compile the radeon driver, so I'm guessing it's not stable and not all parts have caught up to it yet).
Calamity:
--- Quote from: bitbytebit on October 16, 2010, 10:27:55 pm ---Is using SDL on Windows a bad thing compared to the other options there? You would think that SDL 1.3 would fix both Linux and Windows, if it also works well in Windows, possibly with some fixes added to behave like triplebuffer or proper vsync. I've also read that the vsync issues are really an X server problem, and that a future release will have some option for vsync in the X server.
--- End quote ---
I'm right now testing Win32 SDLMame on my laptop; after some tweaking it's running perfectly smooth and vsynced. The right settings in Windows seem to be: video opengl, keepaspect 0, unevenstretch 0, waitvsync 1, throttle 0, switchres 1. If 'video soft' is used, I can't get vsync working. I still have to test it using custom video modes in my cabinet, but it's quite probable it can replicate the DirectX functionality for managing modes. However, there's still the inherent limitation of integer values for refresh rates, imposed by the system's and drivers' internal calls. In Windows, any API we use for going fullscreen will be conditioned by that.
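For reference, here are those settings as a mame.ini fragment (just the options listed above; everything else stays at its default):

--- Code: ---video            opengl
keepaspect       0
unevenstretch    0
waitvsync        1
throttle         0
switchres        1
--- End code ---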
From a Windows point of view, the Linux xrandr method seems strange: it looks like resizing the desktop before maximizing the window, affecting all applications. Although we have Win32 APIs to resize the desktop resolution, they are not used in this context; the DirectX API (and SDL too, I suppose) is used instead to switch video modes and take the screen in exclusive mode, so you can access the advanced video features. The method I suggested would just use plain SDL to go fullscreen, i.e. "320x224", but we'd make sure the proper refresh is used by making a unique modeline available to the driver for the 320x224 resolution, so it should not fail... anyway, this is just theory.
bitbytebit:
--- Quote from: Calamity on October 17, 2010, 05:53:53 pm ---
--- Quote from: bitbytebit on October 16, 2010, 10:27:55 pm ---Is using SDL on Windows a bad thing compared to the other options there? You would think that SDL 1.3 would fix both Linux and Windows, if it also works well in Windows, possibly with some fixes added to behave like triplebuffer or proper vsync. I've also read that the vsync issues are really an X server problem, and that a future release will have some option for vsync in the X server.
--- End quote ---
I'm right now testing Win32 SDLMame on my laptop; after some tweaking it's running perfectly smooth and vsynced. The right settings in Windows seem to be: video opengl, keepaspect 0, unevenstretch 0, waitvsync 1, throttle 0, switchres 1. If 'video soft' is used, I can't get vsync working. I still have to test it using custom video modes in my cabinet, but it's quite probable it can replicate the DirectX functionality for managing modes. However, there's still the inherent limitation of integer values for refresh rates, imposed by the system's and drivers' internal calls. In Windows, any API we use for going fullscreen will be conditioned by that.
From a Windows point of view, the Linux xrandr method seems strange: it looks like resizing the desktop before maximizing the window, affecting all applications. Although we have Win32 APIs to resize the desktop resolution, they are not used in this context; the DirectX API (and SDL too, I suppose) is used instead to switch video modes and take the screen in exclusive mode, so you can access the advanced video features. The method I suggested would just use plain SDL to go fullscreen, i.e. "320x224", but we'd make sure the proper refresh is used by making a unique modeline available to the driver for the 320x224 resolution, so it should not fail... anyway, this is just theory.
--- End quote ---
Are you testing SDL 1.3 or 1.2 in Windows? I can't get the throttle setting to work at 0; it just runs full speed in Linux. I'm using 1.2 with opengl, and I've also tried 1.3, but there I've been using -video sdl13.
I've found quite a few bugs in my method of incrementing the width, but I've fixed them somewhat, and it now more or less works as a workaround; I'm definitely hopeful it can be avoided, though.
I also found a big bug/memory leak in the radeon driver that I had introduced by the way I was getting the modelines. I think that was the instability I was seeing in xrandr, since that function gets called on each and every request to re-add the modelines. I'm pretty sure it should work smoothly now without issue.

It also sounds interesting to just add the modeline for each game on start and remove it after finish. That would be easy to do with my perl script: use lrmc to calculate the modeline, add it and switch to it, then switch back to the default and delete the modeline on exit. This seems like the cleanest way to do things to me, and it avoids needing to patch mame as much as patching the radeon driver, and possibly the X server, to avoid annoying extra modelines that override custom ones. It would be nice to have mame call the xrandr stuff itself, but I get the feeling that's really tied into SDL, and we'd get the same functionality through the perl script method while avoiding having to fold lrmc and xrandr into the mame code and maintain them there. Plus SDL 1.3 will pretty much do the same, and should be able to use refresh rates as doubles (I think my patch allows this, but I haven't inspected it to check whether SDL itself is limited to integers). Again, it comes back to the xrandr perl script with an external lrmc being the most universal solution, if SDL 1.3 still doesn't let us call the xrandr functions perfectly.
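That per-game workflow maps onto a handful of xrandr calls; here's a sketch of what the perl script would effectively run, assuming an output named VGA-0 and placeholder timings standing in for lrmc's output:

--- Code: ---# add the lrmc-generated mode and switch to it before launching the game
xrandr --newmode "320x240_game" 6.70 320 336 368 426 240 244 247 262 -hsync -vsync
xrandr --addmode VGA-0 "320x240_game"
xrandr --output VGA-0 --mode "320x240_game"

# ... run the game ...

# switch back to the default mode and throw the modeline away
xrandr --output VGA-0 --mode "640x480"
xrandr --delmode VGA-0 "320x240_game"
xrandr --rmmode "320x240_game"
--- End code ---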
Hopefully tomorrow I'll have an updated genres with the radeon driver memory leak fixed, the lrmc that increments horizontal size for SDL 1.2, and possibly the xrandr perl script to dynamically generate and force modelines from the ini files (it should even be able to avoid the ini files, maybe by building a DB of resolutions from the mame.xml so it knows each game's needs). I have also possibly gotten lrmc to output much better aspect ratios for vertical games; before, I think the calculation for that was kind of strange. It never matched the modelines I saw others generate, and that made it hard to match refresh rates without getting odd sizes. For some reason it divided the horizontal size of a vertical game by 0.5625, and now I'm multiplying the vertical size by 1.33333 for a 4:3 screen (which seems to work better; I'm really curious, though, since a lot of people seem to use 1.22222 and I'm not sure why). Here's basically how that's done in the code...
--- Code: ---if (mode->aspect3x4 == 1) {
    /* Vertical games: derive the width from the height to keep
     * a 4:3 screen aspect, aligned to 8 pixels. */
    /* mode->hres = mode->hres / 0.5625;  -- old calculation */
    mode->hres = align(mode->vres * 1.33333, 8);
} else if (!mode->aspect3x4 && (mode->hres < mode->vres)) {
    /* Odd games: taller than wide but not flagged as 3:4 */
    if (mode->refreshrate <= 30.5) {
        /* Low refresh: double it rather than halving the height. */
        mode->refreshrate = mode->refreshrate * 2.0;
        /* mode->vres = align(mode->vres / 2.0, 8); */
    } else {
        /* mode->hres = mode->hres / 0.5625;  -- old calculation */
        mode->hres = align(mode->vres * 1.33333, 8);
    }
}
--- End code ---
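A quick check on those two formulas: they coincide exactly when the rotated game is a true 3:4, e.g. a 240x320 mode gives 240 / 0.5625 = 426.7 and 320 * 1.33333 = 426.7. For any other shape they diverge: 224x256 gives 398 with the old division but 341 with the new multiplication, and only the latter keeps the displayed picture at 4:3 regardless of the game's own proportions, since it derives the width purely from the height.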