Software Support > GroovyMAME

Switchres: modeline generator engine


bitbytebit:
Yeah I am wondering now about the refresh rate and how it would be called for a game even, not sure if the SDL calls really can be that precise.  I did alter the SDL code in mame to store it as doubles and pass it to SDL, but can see how the Windows code doesn't look so inviting to such a change.  Right now I'm trying to figure out if all my odd refresh rates of the same resolution really are even being used.  Actually I can't tell for sure if the vertical frequency is being applied for sure, oddly even with -verbose on mame or the X windows output logs it doesn't say if it used it.  From what I can tell I'm just getting all of them with a range of 57-60 Hz and mostly 59 Hz it seems.  I'm digging into see if something is limiting that in the X server for the Radeon chip which the ArcadeVGA card is using.

Basically, modelines in X Windows are just plain labels; you then list them in a Modes section, in order from first choice to last.  The calls to change resolution seem to go through this list and pick the first one that matches.  It looks like this...


    Modeline "288x240x56.00"  5.278336  288 296 328 344  240 244 248 274  -HSync -VSync
    Modeline "280x247x54.08"  5.124000  280 288 320 336  247 251 255 282  -HSync -VSync
    Modeline "280x244x60.04"  5.628000  280 288 320 336  244 248 252 279  -HSync -VSync
    Modeline "280x240x60.00"  5.523840  280 288 320 336  240 244 248 274  -HSync -VSync
    Modeline "280x240x58.00"  5.339712  280 288 320 336  240 244 248 274  -HSync -VSync
    Modeline "272x267x50.00"  5.002000  272 280 304 328  267 272 276 305  -HSync -VSync
    Modeline "272x251x53.14"  5.002000  272 280 304 328  251 256 260 287  -HSync -VSync
    Modeline "272x240x60.00"  5.392320  272 280 304 328  240 244 248 274  -HSync -VSync
    Modeline "272x240x57.00"  5.122704  272 280 304 328  240 244 248 274  -HSync -VSync
    Modeline "272x240x56.00"  5.032832  272 280 304 328  240 244 248 274  -HSync -VSync
    Modeline "264x256x55.00"  5.139200  264 272 296 320  256 260 264 292  -HSync -VSync
    Modeline "264x240x60.00"  5.260800  264 272 296 320  240 244 248 274  -HSync -VSync
    Modeline "264x240x59.00"  5.173120  264 272 296 320  240 244 248 274  -HSync -VSync
    Modeline "264x240x58.00"  5.085440  264 272 296 320  240 244 248 274  -HSync -VSync
    Modeline "256x256x57.36"  5.092000  256 264 288 304  256 260 264 292  -HSync -VSync
 
And then in another section it has them listed as...
    Modes "288x267x50.00" "288x244x60.04" "288x243x55.05" "288x240x60.00" "288x240x59.00" "288x240x58.00" "288x240x57.00" "288x240x56.00" "280x247x54.08" "280x244x60.04" "280x240x60.00" "280x240x58.00" "272x267x50.00" "272x251x53.14" "272x240x60.00" "272x240x57.00" "272x240x56.00" "264x256x55.00" "264x240x60.00" "264x240x59.00" "264x240x58.00" "256x256x57.36"

Where the first entry in the list is the first one tried.

From what I can tell, in SDL at least we are just giving it the height/width.  The code I saw in there that uses the refresh rate seems to be for SDL 1.3, which I need to try again; my first compile with it didn't work at all.  So it seems SDL 1.2 doesn't even pass along the refresh rate it wants.

Definitely interesting.  I'm looking deeper into this; the code I'm looking at is probably also tied up with the waitvsync stuff, which seems broken when throttle is turned off, although maybe it's doing its job, since some code calls SDL functions using it.


Update: When I run as root in SDL mode at the Linux console, outside of X Windows, using software rendering, it seems to honor my vertical refresh rates, and the throttle setting can be 0 too.  Hmmm, so something is odd: it can do it, but X Windows may be getting in the way, or not being root (oddly, as a normal user I can at least change modes, and it will still play).

OK, it's even weirder than that.  I can use the waitvsync 1 and throttle 0 options and get it to show the proper vertical refresh on loading, but I have to run the default MAME SDL menu and launch games from there.  Once that's done, it works for those games outside the menu too, because it writes a new .mame/game.ini file for each game using the values from the custom ini/game.ini file?   It seems really strange that it works this way, and it doesn't pay attention to those settings otherwise.  Well, I've got something really odd to explore now, though it does seem to be really driving the monitor, getting the resolution to actually use the modelines.  I'm definitely confused about what exactly it's doing: it only does the switchres properly from the MAME SDL menu and/or when there's a .ini file in ~/.mame/ and not ~/ini/.

bitbytebit:
I figured out the issue in Linux with the Xorg server and at least the ATI Radeon driver (which the ArcadeVGA card uses).  I suspect other X drivers do this too: basically it ignores your entire Modeline except the very first part, the name, parses that for the width/height values, and ignores everything else.  It seems quite odd to write all these modelines only to have the X driver throw them away.  So far I have hacked it to actually take the refresh value from the name and calculate with it; the driver was just using a hardwired 60.0.  This has already improved things, with the actual refresh rates now coming out correct.  I still need to figure out how to make it honor the whole modeline, though: it ignores the -VSync value for some reason and always uses +VSync, and it also alters the numbers slightly.  It would be nice if we could either choose all of that, or have an option to tell the driver exactly what kind of arcade monitor it is, so it can do the calculations right like lrmc does.

So there may have to be an lrmc/Radeon driver patch for Linux to let the ArcadeVGA, or any other Radeon card, use lrmc to calculate the modelines that MAME will mostly use via the ini files; probably also a patch for MAME to use doubles for the resolution (which I do have).  We still have to figure out how to force the X driver into the proper mode with the refresh rate we want.  I'm pretty sure this is possible, and hopefully it doesn't need SDL 1.3, because from what I can tell that is nowhere near ready for general use.  There's also the RandR method of changing the X server resolution, which MAME might be able to call directly and tell it what to do; maybe that's what xmame did, or it's similar to how AdvanceMAME worked, but officially through the X driver API.
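For the RandR route, the xrandr command-line tool already exposes this: a custom modeline can be registered and selected at runtime without touching xorg.conf. A rough sketch, reusing the first modeline from the list above (the output name VGA-0 is a placeholder; it depends on the driver, so check `xrandr -q` for the real one):

```shell
# Register a custom low-res mode with the running X server (RandR 1.2+)
xrandr --newmode "288x240x56.00" 5.278336 288 296 328 344 240 244 248 274 -hsync -vsync
xrandr --addmode VGA-0 "288x240x56.00"
xrandr --output VGA-0 --mode "288x240x56.00"
```

Whether the driver then honors the full timings, rather than recalculating them, is exactly the problem described above.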

It seems like when all this is put together it should be pretty good, and we'll see how the vsync method helps.  I can already tell that with the change to use the refresh rate, games play way smoother, and I had a lot of fun; either I'm sleep deprived or it made quite a difference already.

Calamity:
Hi bitbytebit,

This is great news! I suspected there was something odd with Linux modeline management. And yes, I also found some strange behaviour when invoking games from the SDLMAME menu: the ini options weren't read or used as expected, or were only applied properly for the first one... though I did those tests some time ago and can't remember exactly what was wrong...

This is a patch I used on top of the CabMame diffs; it's related to the Emuspeed patch, and I use it to display the real emulation speed in Hz, with several decimals, plus the average speed. It is useful in combination with vsync to check whether the intended vfreq is actually in effect:


--- Code: ---
Go to \src\emu\video.c

/*-------------------------------------------------
    Patch for prompting emulation speed in Hz
    This was done for v0.131, it may need updating
    - Calamity -

    video_get_speed_text - print the text to
    be displayed in the upper-right corner
-------------------------------------------------*/

const char *video_get_speed_text(running_machine *machine)
{
    int paused = mame_is_paused(machine);
    static char buffer[1024];
    char *dest = buffer;
    float rate;
    screen_state *state = NULL;

    /* validate */
    assert(machine != NULL);

    /* if we're paused, just display Paused */
    if (paused)
        dest += sprintf(dest, "paused");

    /* if we're fast forwarding, just display Fast-forward */
    else if (global.fastforward)
        dest += sprintf(dest, "fast ");

    /* if we're auto frameskipping, display that plus the level */
    else if (effective_autoframeskip(machine))
        dest += sprintf(dest, "auto%2d/%d", effective_frameskip(), MAX_FRAMESKIP);

    /* otherwise, just display the frameskip plus the level */
    else
        dest += sprintf(dest, "skip %d/%d", effective_frameskip(), MAX_FRAMESKIP);

    /* append the speed for all cases except paused */
    if (machine->primary_screen != NULL)
        state = get_safe_token(machine->primary_screen);

    rate = (state != NULL) ? ATTOSECONDS_TO_HZ(state->frame_period) : DEFAULT_FRAME_RATE;

    if (!paused && global.overall_emutime.seconds >= 2)
    {
        osd_ticks_t tps = osd_ticks_per_second();
        double average_real_time = (double)global.overall_real_seconds + (double)global.overall_real_ticks / (double)tps;
        double average_emu_time = attotime_to_double(global.overall_emutime);

        dest += sprintf(dest, " %4.2f%% %4.6f/%4.6f Hz [%4.6f]", 100 * global.speed_percent, global.speed_percent * rate, rate, average_emu_time / average_real_time * rate);
    }

    /* display the number of partial updates as well */
    if (global.partial_updates_this_frame > 1)
        dest += sprintf(dest, "\n%d partial updates", global.partial_updates_this_frame);

    /* return a pointer to the static buffer */
    return buffer;
}

--- End code ---

If you are going to dig into the modeline calculations for the ATI Radeon, this will be of interest to you: in the package from the link I pasted in my first post, you'll find a file named Ati9250.txt. Inside is the list of the real dotclocks used by the ATI Radeon 9250 hardware, measured one by one by me. If you use that data in your modeline calculations, you can predict the resulting vfreq with extreme accuracy.

bitbytebit:

--- Quote from: Calamity on October 09, 2010, 06:21:43 am ---Hi bitbytebit,

These are great news! I suspected there was something odd with Linux modeline management. [...]

If you use that data for your modeline calculations, you can predict the resulting vfreq with extreme accuracy.
--- End quote ---

Thanks, I'm sure those will help.  I've been able to hack the Radeon driver and basically replace its modeline calculations with lrmc's.  So essentially I put lrmc into the Radeon driver, and it no longer uses the default Xorg modeline calculator.  I figured out what is happening: X Windows basically does all the modeline calculations itself, no matter what modelines you supply, overriding the user's input, and they usually aren't even done in the driver but in the X server itself.  They aren't really Radeon-specific calculations either: it just takes the list of mode names, assumes they are widthxheight values, and uses 60 Hz for each one, essentially what the cvt program does ("Calculates VESA CVT (Coordinated Video Timing) modelines for use with X"), which is not a good thing to use for arcade monitors or anything low-resolution.

I'm now actually getting the exact modelines running, and it looks like I was working uphill before: a lot of the workarounds I was doing were probably fighting the forced 60 Hz vertical frequency I couldn't escape, and now I can get anywhere from 40 to 100 Hz, like the specs of the D9800 say.  I'm going to play around with this, figure out more, and hopefully soon will have a modified Radeon driver that could be used with any arcade monitor or TV, with lrmc doing the calculations inside it.  You should also be able to specify the monitor type in the xorg.conf file; lrmc also lets you have an lrmc.conf file in /etc/lrmc.conf or the current working directory for custom settings.  I think I can merge the X Windows vertical/horizontal config into that for lrmc, and we should have a full-on X server for arcade monitors on Radeon cards.

I'm still learning about the timing stuff, so I'll look into the timing data you sent too and try to figure out how to improve things with it.  Definitely interesting; I'm surprised I was able to get it working.  Hopefully, if MAME isn't already able to push a vertical frequency through, we can add xRandR support to it in the future and do it that way.  At least it seems that way; supposedly SDL can do it, and maybe it already does and I just didn't connect the code.

It's funny when you search Google for this issue with the X drivers: it turns out there are tons of people finding they can't set modelines anymore and no one knows why, and the developers are ignoring them, saying the monitors should do the EDID stuff instead.  In the code it looks like, in theory, it should be reading the full modeline settings, but when you trace it, that part seems to get skipped now.

Calamity:

--- Quote from: bitbytebit on October 09, 2010, 12:59:51 pm ---I've been able to hack the Radeon driver and basically replace its modeline calculations with lrmc's.  So essentially I put lrmc into the Radeon driver, and it no longer uses the default Xorg modeline calculator.
--- End quote ---

That's a big achievement! So you have direct control over the Radeon driver, and can have the code calculate a modeline on the fly with the parameters you define... I wish I had that on Windows ;)


--- Quote from: bitbytebit on October 09, 2010, 12:59:51 pm ---I figured out what is happening: X Windows basically does all the modeline calculations itself, no matter what modelines you supply, overriding the user's input, and they usually aren't even done in the driver but in the X server itself.  They aren't really Radeon-specific calculations either: it just takes the list of mode names, assumes they are widthxheight values, and uses 60 Hz for each one, essentially what the cvt program does ("Calculates VESA CVT (Coordinated Video Timing) modelines for use with X"), which is not a good thing to use for arcade monitors or anything low-resolution.
--- End quote ---

Amazing. Now I understand why the modelines we used didn't seem to work as expected. What sounds strange to me is that if CVT is being used, you shouldn't be getting a stable picture on an arcade monitor, as the sync pulses are way too different... at least it wouldn't work on my Hantarex MTC9110 (lowres only). Maybe yours is more tolerant?


--- Quote from: bitbytebit on October 09, 2010, 12:59:51 pm ---It's funny when you search Google for this issue with the X drivers: it turns out there are tons of people finding they can't set modelines anymore and no one knows why, and the developers are ignoring them, saying the monitors should do the EDID stuff instead.  In the code it looks like, in theory, it should be reading the full modeline settings, but when you trace it, that part seems to get skipped now.

--- End quote ---

It's incredible that all the modeline stuff is bypassed in practice, really amazing. That is a great discovery.
If you end up getting this to work, it is going to be perfect support for emulation; I will consider moving to Linux  ;D

I haven't looked at the lrmc code and don't know how it calculates modelines. Most modeline calculators I've seen use percentages to work out the border sizes and sync pulses, which I find not very accurate. I think it's a better approach to work with characters (integer 8-pixel blocks) instead of pixels for the inner calculations, define the porches and sync pulses as time values, then work out the number of characters needed to achieve those timings. This is the method I implemented, though I hardcoded it for lowres, with no multisync support. Anyway, this is not important now.

When I have the time I need to make a list of MAME games whose resolution is badly reported by the XML; I've come across many. I've been considering doing a patch for this.

Thanks for all the effort!
