
Author Topic: Theoretical versus real refresh rates  (Read 1802 times)


Dr.Venom

  • Trade Count: (0)
  • Full Member
  • ***
  • Offline
  • Posts: 270
  • Last login:May 08, 2018, 05:06:54 am
  • I want to build my own arcade controls!
Theoretical versus real refresh rates
« on: January 08, 2013, 05:09:05 am »
Hi Calamity,

While testing I noticed two possible "issues" with refresh rates that might negatively impact GM's (sound synchronization) accuracy. The first is about the refresh rate that GM seems to calculate from the dynamically created modelines and the second is about the modelines' theoretical refresh rates versus their real refresh rates.

1 - Refresh rate calculation

When running Snowbros, Genesis and SNES, GM generates the following 3 modelines 'on the fly' (it's dynamically creating them from the installed 'static' soft15khz modelines, which are at 62Hz and 63Hz refresh rates):

Modeline "256x240_57 15.64KHz 57.50Hz" 5.26 256 272 296 336 240 247 250 272 -hsync -vsync
Modeline "320x240_60 15.78KHz 60.00Hz" 6.56 320 336 368 416 240 241 244 263 -hsync -vsync
Modeline "512x240_60 15.63KHz 60.10Hz" 10.38 512 536 584 664 240 241 244 260 -hsync -vsync

According to GM the refresh rates of these dynamically created modelines are respectively 57.50Hz, 60.00Hz and 60.10Hz. But when I calculate the refresh rates from the above modelines myself (given the pixel clock and the x/y timing values), they should respectively be (truncated to 4 decimals) 57.5543Hz, 59.9591Hz and 60.1251Hz. Any idea what causes this difference?
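For reference, the refresh implied by a modeline is just dotclock / (htotal × vtotal). A quick sketch using the numbers from the three modelines above:

```python
# Refresh rate implied by a modeline: dotclock / (htotal * vtotal).
# Values taken from the three modelines quoted above.
modelines = {
    "256x240": (5.26e6, 336, 272),   # dotclock (Hz), htotal, vtotal
    "320x240": (6.56e6, 416, 263),
    "512x240": (10.38e6, 664, 260),
}

for name, (dotclock, htotal, vtotal) in modelines.items():
    refresh = dotclock / (htotal * vtotal)
    print(f"{name}: {refresh:.4f} Hz")
# -> 256x240: 57.5543 Hz, 320x240: 59.9591 Hz, 512x240: 60.1251 Hz
```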


2 - Theoretical versus real refresh rate

On my HD 4850, actual refresh rates differ by about 0.01 to 0.05Hz from their modelines' theoretical values. I'm quite certain you're already aware of this issue, but I thought it would be good to bring it up nonetheless.

I've taken the three modelines generated by GM 'on the fly', as mentioned above, and installed them as 'static' modelines with soft15Khz, just so that I can measure their real refresh rate.

To test the refresh rates I'm using the tool "Freqtest.exe" version 1.6 (not 2.1), which is available from here: http://www.mediafire.com/?lycrjcm55j37n.

Code:
Resolution     Theoretical      Real
                 refresh       refresh
                   (Hz)         (Hz)
256x240          57.5543       57.5821
320x240          59.9591       59.9753
512x240          60.1251       60.1075

Interestingly, the real refresh rates differ by about 0.02 to 0.03Hz in either direction from their theoretical modeline values. I've tested other screenmodes as well, and the deviations appear random, positive and negative, by about 0.01 to 0.05Hz.

I'm guessing these differences between the theoretical modeline and real refresh rates might negatively impact the (sound) sync timing in GM? If that's the case, would it be an idea to give GM the possibility to use a custom, user-specified refresh rate (with high granularity; FreqTest provides 6 decimals) instead of the theoretical modeline value?


*Edit: clarified that I'm talking about the 'on the fly' created modelines under point 1.
« Last Edit: January 08, 2013, 11:42:20 am by Dr.Venom »

matrigs

  • Trade Count: (0)
  • Jr. Member
  • **
  • Offline
  • Posts: 8
  • Last login:August 01, 2013, 06:07:13 pm
  • I want to build my own arcade controls!
Re: Theoretical versus real refresh rates
« Reply #1 on: January 08, 2013, 09:21:52 am »
I think that Calamity might have answered this question in my topic below:

But unless you're using the 'static' mode list method, you won't get the 57.55 Hz refresh, because everything is normalized to 60 Hz in order to reduce the list as much as possible, and GM recalculates the right refresh later.

So as long as I have any 320x240 resolution in modeline.txt, no matter the refresh rate, GM will still show the game at the right refresh rate later on, right?

Yes.

So I understand that no matter what the modelines say about the refresh rate, GM calculates the correct refresh rates on the fly.

That would also explain the differences that you pointed out using Soft15kHz. Soft15kHz uses a static modeline, and you will never be able to get an exact refresh rate, as at some point some rounding of the numbers is involved.

As far as I understand GM, it tries to generate the nearest possible modeline, and later uses throttle to speed the game up or down to make it perfectly smooth.

Please someone correct me if I have given any false information.

Dr.Venom

  • Trade Count: (0)
  • Full Member
  • ***
  • Offline
  • Posts: 270
  • Last login:May 08, 2018, 05:06:54 am
  • I want to build my own arcade controls!
Re: Theoretical versus real refresh rates
« Reply #2 on: January 08, 2013, 11:14:12 am »
I think that Calamity might have answered this question in my topic below:

No, sorry, you're talking about a different topic. My post is about the -dynamically- created nearest possible modeline (refresh rate) that GM generates 'on the fly', and whether or not the two issues I raised negatively affect the sound synchronization, however small the effect.

So as long as i have any 320x240 resolution in modeline.txt, no matter the refresh rate, GM will still show the game at the right refresh rate later on right?

Yes.

So to be clear, the "right refresh rate" in the above quote might be more of a "nearly right" refresh rate, which would possibly result in nearly accurate (but not 100%) sound synchronization, because of the issues I mentioned.

But, let's see what Calamity has to say about it.

Calamity

  • Moderator
  • Trade Count: (0)
  • Full Member
  • *****
  • Offline
  • Posts: 6553
  • Last login:Today at 06:55:05 am
Re: Theoretical versus real refresh rates
« Reply #3 on: January 08, 2013, 01:39:49 pm »
Hi Dr.Venom,

There are two different issues going on here, each with its own explanation. I'm going to focus on one of your modelines as an example; the same applies to all of them.

Quote
1 - Refresh rate calculation

Modeline "256x240_57 15.64KHz 57.50Hz" 5.26 256 272 296 336 240 247 250 272 -hsync -vsync

So if you calculate the refresh out of the above values, you get: 5.26 * 1000000 / 336 / 272 = 57.5543 Hz (diff = 0.0543 Hz)

The key here is that the dotclock is calculated as a real number, but in order to pass it to the ATI drivers we need to round it to just two decimal figures, because that's the best precision the drivers accept. So that's where the 5.26 value comes from.

If you tried the next lower possible value, you'd get: 5.25 * 1000000 / 336 / 272 = 57.4449 Hz (diff = -0.0551 Hz)
So the absolute difference is bigger in this case, and 5.26 MHz is selected as the best possible dotclock.
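The selection described here can be sketched as follows — a minimal illustration of the two-decimal rounding, not GM's actual code:

```python
import math

# Pick the two-decimal (MHz) dotclock whose implied refresh is closest
# to the target, mirroring the 0.01 MHz granularity the drivers accept.
def best_dotclock_mhz(target_hz, htotal, vtotal):
    ideal_mhz = target_hz * htotal * vtotal / 1e6  # exact dotclock needed
    lo = math.floor(ideal_mhz * 100) / 100         # rounded down to 0.01 MHz
    hi = math.ceil(ideal_mhz * 100) / 100          # rounded up to 0.01 MHz
    # keep the candidate whose implied refresh is closest to the target
    return min((lo, hi),
               key=lambda c: abs(c * 1e6 / (htotal * vtotal) - target_hz))

print(best_dotclock_mhz(57.50, 336, 272))  # -> 5.26, as in the modeline above
```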

Now this is the theoretical refresh due to the limitations imposed by the drivers. What about the real refresh obtained?


Quote
2 - Theoretical versus real refresh rate

256x240          57.5543       57.5821

So let's assume the refresh returned by the Freqtest program is accurate enough (I believe so). We do the opposite operation:

real_dotclock = 57.5821 * 336 * 272 Hz = 5.2625432832 MHz (vs the requested 5.26 MHz)

This value is the best approximation that the drivers can obtain for our requested value of 5.26 MHz.
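The reverse calculation, sketched:

```python
# Working backwards: the dotclock the card actually generated, recovered
# from the measured refresh (FreqTest's 57.5821 Hz for 256x240 above).
htotal, vtotal = 336, 272
measured_hz = 57.5821
real_dotclock = measured_hz * htotal * vtotal      # in Hz
print(f"{real_dotclock / 1e6:.7f} MHz")            # ~5.2625433 MHz vs requested 5.26
```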

You can easily see where I want to go: it's impossible to get an *exact* match for a given refresh for these two reasons:

- Dotclock granularity
- Dotclock uncertainty

While we can't do anything about the granularity (unless you write your own drivers), there's a way to reduce or eliminate the uncertainty: we can measure ALL possible real dotclocks and create a dotclock look-up table instead of assuming theoretical values. I did implement this for VMMaker and the Radeon 9250; there's a file named Ati9250.txt that contains all the measured dotclocks.

However, this is just one part of the problem: it lets you predict how far off you're going to be, but doesn't help you get any closer. So what else can we do to gain precision? Going back to:

real_dotclock = 57.5821 * 336 * 272

We shouldn't play with the horizontal total because that would ruin our geometry. On the other hand, adding extra lines to the vertical total doesn't change the geometry; it only raises the horizontal frequency a bit. So we can try again with 273, 274, 275, etc. lines, in case the *real* precalculated dotclock required for that combination happens to produce a value closer to the target refresh. This is implemented in VMMaker as an option named 'Iterations'. By doing this, you can get values that are just 0.02 Hz off or less, most of the time.
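A rough sketch of the 'Iterations' idea — note the measured dotclock values below are made-up placeholders, not real Ati9250.txt data; a real table would be measured per card and driver:

```python
# Sketch of VMMaker's 'Iterations' idea: keep htotal fixed (geometry),
# try extra lines on vtotal, and use *measured* dotclocks rather than
# theoretical ones. The measured values here are hypothetical.
measured_dotclock = {          # requested MHz -> measured Hz (made up)
    5.25: 5_251_100.0,
    5.26: 5_262_543.0,
    5.27: 5_270_800.0,
}

def best_mode(target_hz, htotal, base_vtotal, extra_lines=8):
    best = None
    for vtotal in range(base_vtotal, base_vtotal + extra_lines + 1):
        for requested, real in measured_dotclock.items():
            refresh = real / (htotal * vtotal)
            err = abs(refresh - target_hz)
            if best is None or err < best[0]:
                best = (err, requested, vtotal, refresh)
    return best

err, clock, vtotal, refresh = best_mode(57.50, 336, 272)
print(f"dotclock {clock} MHz, vtotal {vtotal}: {refresh:.4f} Hz (off {err:.4f})")
```

With these placeholder numbers the extra-lines scan finds a combination closer to 57.50 Hz than any dotclock at the base vtotal, which is the whole point of the option.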

Before you ask: GroovyMAME does not implement this feature. The reason is that you'd need to create a unique dotclock look-up table for your specific card and drivers, and I'd have to write software to assist people with this. Considering that most users find it hard to figure out the porch values of their monitors with Arcade_OSD, let alone understand how these affect the picture (which is the ABC of the whole thing), I honestly don't believe it's worth the effort to create and support that software when the only benefit is a refresh that's 0.03 Hz closer (http://en.wikipedia.org/wiki/Diminishing_returns ;)). I consider myself very obsessed with this stuff and I can't truly notice the difference.

Anyway, Arcade_OSD lets you play with the involved values and measure the real refresh rate (just like FreqTest), so you can come up with a modeline that's more accurate. GroovyMAME will allow you to enter a raw modeline in the near future, so you'll be able to use your own tweaked modelines.

Finally, regarding how this affects sound accuracy: GM doesn't need to know the exact refresh in order to adjust the sound. It just uses the core speed percentage that's recalculated on the fly, which has a precision of 1/1000 (1000 = 100%, 999 = 99.9%), so for a typical 60 Hz refresh the accuracy is only 0.06 Hz. This factor is applied to the sound buffer during the final mix. However, I need to clarify that this is NOT applied when the new -frame_delay option is enabled. The reason is that modifying the core speed made the -frame_delay code go crazy and produce an erratic speed.
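The effect of that 1/1000 granularity can be illustrated like this (a sketch of the rounding, not the actual MAME code):

```python
# Effect of a 1/1000 speed-factor granularity: the correction that can
# be applied is the speed ratio rounded to three decimals, so small
# refresh mismatches fall under the radar entirely.
def quantized_speed(real_hz, nominal_hz, steps=1000):
    return round(real_hz / nominal_hz * steps) / steps

nominal, real = 60.10, 60.13
factor = quantized_speed(real, nominal)     # 60.13/60.10 = 1.000499 -> 1.000
residual_hz = real - nominal * factor       # mismatch left uncorrected
print(factor, round(residual_hz, 4))        # -> 1.0 0.03
```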
Important note: posts reporting GM issues without a log will be IGNORED.
Steps to create a log:
 - From command line, run: groovymame.exe -v romname >romname.txt
 - Attach resulting romname.txt file to your post, instead or pasting it.

CRT Emudriver, VMMaker & Arcade OSD downloads, documentation and discussion:  Eiusdemmodi

Dr.Venom

  • Trade Count: (0)
  • Full Member
  • ***
  • Offline
  • Posts: 270
  • Last login:May 08, 2018, 05:06:54 am
  • I want to build my own arcade controls!
Re: Theoretical versus real refresh rates
« Reply #4 on: January 09, 2013, 01:49:36 pm »
Hi Calamity,

Many thanks for your thorough explanation, that's truly appreciated and made for a good read also.

Quote
Anyway, Arcade_OSD lets you play with the involved values and measure the real refresh rate (just like FreqTest), so you can come up with a modeline that's more accurate. GroovyMAME will allow you to enter a raw modeline in the near future, so you'll be able to use your own tweaked modelines.

That sounds like a valuable option. I'll also give Arcade_OSD a go, but from what I remember it would somehow not work fully on my Win 7 / dual gfx-card setup. I'll let you know.

Quote
Finally, regarding how this affects sound accuracy: GM doesn't need to know the exact refresh in order to adjust the sound. It just uses the core speed percentage that's recalculated on the fly, which has a precision of 1/1000 (1000 = 100%, 999 = 99.9%), so for a typical 60 Hz refresh the accuracy is only 0.06 Hz. This factor is applied to the sound buffer during the final mix. However, I need to clarify that this is NOT applied when the new -frame_delay option is enabled. The reason is that modifying the core speed made the -frame_delay code go crazy and produce an erratic speed.

The low accuracy (1/1000) of the sound buffer adjustment is what worries me. I agree that where the perception of *video* speed is concerned, a 0.03Hz difference will most probably go unnoticed. "Unfortunately" in MAME we have to deal with video + audio sync and the mechanisms MAME/UME uses for that. It seems to me that *without* a proper audio rate adjustment (below the 0.06Hz threshold), a mismatch between the video and audio rate will make the emulation either regularly discard video frames or drop samples from the audio buffer to keep audio and video in sync.

I'm almost certain the following isn't correct in the context of MAME/UME, and I'm not even fully sure it would work this way, but for discussion's sake it will do.

Suppose we're running the SNES driver in UME. The SNES runs at (rounded) 60.10Hz. Because of the Dotclock granularity and uncertainty we've come up with a 512x240 resolution for the SNES running at 60.13Hz (+0.03Hz versus SNES core). This is below the +/- 0.06Hz threshold for the sound speed adjustment that's built into UME.

At 100% the SNES audio runs at 32040 samples per second, or about 533 samples per frame. But in the above case the video runs faster by a factor of 60.13/60.10 = 1.000499. This means the emulation should be providing 32040*1.000499 = 32056 samples per second, but it's producing 32040, so we're underrunning by 16 samples per second. Since one frame is 533 samples, the sound will lag by a full frame after 533/16 = 33 seconds. This is most probably where UME decides to discard a *video* frame to keep video and audio in lockstep, and this cycle of discarding a video frame then repeats every 33 seconds! It can easily get worse: a +0.05Hz difference between targeted and real refresh rate still sits under the radar of the audio adjustment, and would mean UME needs to discard a frame every 20 seconds!
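The arithmetic above, spelled out:

```python
# Reproducing the SNES drift arithmetic from the paragraph above.
snes_hz, screen_hz = 60.10, 60.13
sample_rate = 32040                                  # SNES audio samples/second

samples_per_frame = sample_rate / snes_hz            # ~533 samples per frame
needed = sample_rate * (screen_hz / snes_hz)         # ~32056 samples/s required
underrun_per_sec = needed - sample_rate              # ~16 samples/s short
secs_per_frame_of_lag = samples_per_frame / underrun_per_sec
print(round(secs_per_frame_of_lag))                  # -> 33 (seconds)
```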

I've been looking in the documentation, where I found: "By default MAME tries to keep the DirectSound audio buffer between 1/5 and 2/5 full." I'm not sure how this impacts the hypothetical case above, but my best guess is that a difference of +/-0.03Hz between targeted and real refresh rate causes a serious loss of emulation accuracy, basically because you can't get around either discarding video frames or (in the opposite case, where the real refresh is lower than the target?) dropping samples from the buffer as the main video/audio sync mechanism. Of course this is purely because the audio isn't adjusted for refresh rate differences below the 0.06Hz threshold (for a typical 60Hz refresh), and the "simple" solution would be to increase the granularity of the 'core speed percentage that is calculated on the fly'. The big question then is: how simple would that granularity adjustment be?
 
Of course the above is hypothesizing mainly. Hopefully you have some more insight in how the adjustment mechanism for the video-audio sync actually works in MAME, especially in the special case where the target and real refresh differ by less than 0.06Hz (or more precisely the difference stays under the 1/1000 granularity radar).

Calamity

  • Moderator
  • Trade Count: (0)
  • Full Member
  • *****
  • Offline Offline
  • Posts: 6553
  • Last login:Today at 06:55:05 am
Re: Theoretical versus real refresh rates
« Reply #5 on: January 10, 2013, 08:18:10 am »
Well, you shouldn't be worried about GroovyMAME dropping video frames; that definitely does NOT happen, as long as you don't enable triple buffering. Your maths are correct, on the other hand, so the audio is indeed affected by this slight difference, whatever it is.

In fact, the sound code evaluates the current situation each time it updates its buffer, decides whether the buffer has overflowed or underrun, and restores the synchronization. You can see a count of these events in the logs. This re-synchronization happens often enough that the sound mismatch doesn't accumulate to a noticeable level (at least in theory), but the mismatch is there, obviously.

Whether it is noticeable or not is another question. Not just the mismatch, but the fact that some samples are dropped or missed, should affect the sound to some extent. I swear I can't notice it; some people are known to have very fine hearing and might possibly notice it. We're only translating the video problem to the audio side, that's an inconvenient truth. Most of the synchronization work is done via resampling (the audio equivalent of stretching), but there's some information loss too, that's a fact.

Is there room for improvement? Sure. We might add finer granularity to the speed factor; that's quite easy. The problem is not actually that, but calculating the speed factor with enough accuracy in real time. You need several seconds to obtain a good enough figure, so this would produce unbearable audio glitches whenever you pause the emulation and the like.
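The measurement problem can be illustrated with a quick simulation — the 0.2 ms per-frame jitter figure here is an assumption for illustration, not a measured value:

```python
import random

# Why the speed factor can't be measured instantly: simulate vblank
# intervals for a 60.13 Hz mode with ~0.2 ms of per-frame timestamp
# jitter (hypothetical), and watch the refresh estimate converge only
# as more frames are averaged.
random.seed(1)
true_period = 1 / 60.13
frame_times = [true_period + random.gauss(0, 0.0002) for _ in range(600)]

estimates = {n: n / sum(frame_times[:n]) for n in (10, 60, 600)}
for n, est in estimates.items():
    print(f"{n:4d} frames (~{n/60:.1f} s): {est:.4f} Hz")
```

A 10-frame estimate can be off by a sizeable fraction of a hertz; only the multi-second average gets within the few-hundredths range this thread cares about.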


Dr.Venom

  • Trade Count: (0)
  • Full Member
  • ***
  • Offline
  • Posts: 270
  • Last login:May 08, 2018, 05:06:54 am
  • I want to build my own arcade controls!
Re: Theoretical versus real refresh rates
« Reply #6 on: January 10, 2013, 11:26:44 am »
Well, you shouldn't be worried about GroovyMAME dropping video frames, that definitely does NOT happen, as long as you don't enable triple buffering.

That's good to hear. 

Quote
Your maths are correct on the other hand, so the audio is indeed affected by this slight difference, whatever it is. As a fact, the sound code evaluates the current situation each time it updates its buffer, and decides whether the buffer has overflown or underrun, and restores the synchronization. You can see a count of these events in the logs.

What does "restore the synchronization" mean in this case? So what does it do exactly when it encounters a buffer over- or underrun? I don't fully understand (yet) how it can adjust/restore the sync in all cases without affecting video when it's not able to adjust the core speed of the audio (i.e. in case of a <0.06 Hz difference).

Quote
Is there room for improvement? Sure. We might add finer granularity to the speed factor, that's quite easy. The problem is not actually that, but to calculate the speed factor with enough accuracy on real time. You need several seconds to obtain a good enough figure, so this will produce unbearable audio glitches whenever you pause the emulation and things like that.

I guess there's an easy test case to see whether adding -only- the finer granularity (the easy part, if I understand correctly) is of benefit or not.

Let's pick a game, say snowbros, set its audio_latency to 1 (the sound buffer is kept between 1/5 and 2/5 full) and run it on a screenmode that has a +0.05Hz difference with the core speed. Without the fine granularity adjustment, you should be able to hear tiny sound "plops" at regular intervals, and you should see the number of over-/underruns of the sound buffer in the log increase steadily (which is probably the more objective thing to check). With the fine granularity adjustment, on the other hand, in case that enhancement is indeed effective, the sound should be smooth(er) and the number of over-/underruns in the log should be far lower.

Not sure how you feel about it, but IMHO it would be worthwhile to at least give the easy part of the fine granularity adjustment a try (if time and energy permit, of course).

Dr.Venom

  • Trade Count: (0)
  • Full Member
  • ***
  • Offline
  • Posts: 270
  • Last login:May 08, 2018, 05:06:54 am
  • I want to build my own arcade controls!
Re: Theoretical versus real refresh rates
« Reply #7 on: February 01, 2013, 07:13:40 am »
Hi Calamity,

Not giving up easily, I've been thinking of a way to get mainline MAME/UME video and audio sync to a level of perfection in GM. I think there's a real possibility to achieve this, at least for the Windows version, so hopefully the following provides a starting point for further thoughts.

Is there room for improvement? Sure. We might add finer granularity to the speed factor, that's quite easy. The problem is not actually that, but to calculate the speed factor with enough accuracy on real time. You need several seconds to obtain a good enough figure, so this will produce unbearable audio glitches whenever you pause the emulation and things like that.

I'll start with our basis, which is that in GM the target modeline and its refresh get -very close- to the target refresh rate, but because of dotclock granularity and uncertainty the reality will very probably be a deviation somewhere between 0.01Hz and 0.03Hz. This doesn't seem like much, but as our math showed earlier, a 0.03Hz difference will cause imperfect video and audio sync, negatively impacting either the audio or the video at regular intervals. We'll call that the "inconvenient truth" of any mismatch, however small, between real refresh and emulated refresh.

My understanding is that the common auto-adjustment mechanisms in emulators either discard/skip a video frame, pause a frame (until the next vblank), or drop audio samples from the buffer. There may be mixes of these mechanics, or slightly different ones, but in the end they all boil down to re-adjusting either audio or video (to get them in sync again) in a way that loses source emulated frames of video or audio.

So on to a possible method in which we can create audio and video sync perfection. I'll call this method "dynamic vsync", which when implemented in the following way, would create the best possible synchronization for emulation without (visible) artifacts.

I'll refer to our earlier SNES example, where the emulation runs at (rounded) 60.10Hz and our real screen refresh is 60.13Hz. The 0.03Hz difference causes a video frame of lag every 33 seconds, which creates the inconvenient truth that every 33 seconds we get an adjustment of a frame in either video (with a possible input latency problem) or audio. Again, my assumption is that in mainline and Groovy MAME/UME these adjustments happen on a per-frame level. So somewhere (possibly every 33 seconds) either a frame of video or audio is lost to keep them in sync. Basically this happens because the emulation is either vsynced or not; it cannot be a mix of the two.

Now to the solution. For the SNES emulation in vsync mode, every 33 seconds the real refresh will be ahead of the emulation by a full frame, so we can either skip a frame (not display it, also negatively impacting input response) and sync the next frame to vblank, *or* choose to *temporarily* disable vsync in an intelligent way. The intelligent way would be to make the emulation catch up a full frame without losing a frame. Some math shows that if we disable vsync at this point and flip each of the next 16 frames 1 millisecond before vblank, then we would achieve perfect sync again, *without the loss of a video frame*, AND, because flipping 1 ms before vblank is still in the border blanking area, there will be no visible screen tearing! So, concluding, we would have perfect resynchronization of audio and video:
- no loss of either source emulated video or audio (i.e. 100% equivalent of a real machine: all video frames displayed, no loss of audio samples)
- no artifacts

I guess this method would need work on the following:
  • Vsync needs to become a "Dynamic Vsync". This should be possible by means of choosing dynamically between D3DPRESENT_INTERVAL_ONE (vsync) and  D3DPRESENT_INTERVAL_IMMEDIATE, depending on where real time is versus emulated time.
  • The "dynamic vsync" needs to be intelligent in the sense that it should not brutally present 1 or 2 frames consecutively during mid frame, causing visible screen tearing, but instead should recalculate how many frames need to be presented 1 ms before vblank and execute those, before syncing to vblank again, to keep any possible tearing in the border blanking area, making it for the user as if full vsync has been retained (no visible artifacts, no loss of audio or video).
  • To be able to present frames 1 ms before vblank one needs to know ahead when this will happen. This would be possible by means of a loop that counts the time between vblanks, making use of the D3DRASTER_STATUS routine.

Of course there are a lot of blanks that need to be filled in before it could be made of practical use, but hopefully there's some basis in above thinking that we can build upon. If only to bring that last bit of perfection to (Groovy)UME :)


EDIT: Hmmm... While giving it a second thought, the "flip 1 ms before vblank" isn't going to solve anything :/   The only way the dynamic vsync would solve anything is if you did two flips per frame, otherwise you don't gain any of the lost ground, which of course means there'll be a screen tear mid-frame. So in the end it's a choice between two evils: either dismiss a whole frame, or do dynamic vsync and, when readjusting, do two consecutive flips per frame (halfway through the frame and at vsync) for two frames. That way the sync would be regained at the cost of two frames tearing half-screen. Ah well... it already sounded too good to be true :(.

I guess really the only way to get perfect sync despite the dotclock granularity and uncertainty mismatch is to do an ad-hoc patch of the audio, cpu and video crystals in the drivers... So you could measure the exact refresh rates of your real screen modes and patch the crystal speeds in the drivers for the micro misalignments. Not sure whether that would cause entirely new issues though... At least I know one thing for sure: that would be hard to automate for all drivers...
« Last Edit: February 01, 2013, 09:22:41 am by Dr.Venom »

tris_d

  • -driverman-
  • Trade Count: (0)
  • Full Member
  • ***
  • Offline
  • Posts: 51
  • Last login:February 24, 2013, 03:23:13 am
  • I want to build my own arcade controls!
Re: Theoretical versus real refresh rates
« Reply #8 on: February 01, 2013, 09:47:39 am »
I'll start with our basis, which is that in GM the target modeline and its refresh get -very close- to the target refresh rate, but because of dotclock granularity and uncertainty the reality will very probably be a deviation somewhere between 0.01Hz and 0.03Hz. This doesn't seem like much, but as our math showed earlier, a 0.03Hz difference will cause imperfect video and audio sync, negatively impacting either the audio or the video at regular intervals. We'll call that the "inconvenient truth" of any mismatch, however small, between real refresh and emulated refresh.

My understanding is that the common auto-adjustment mechanisms in emulators either discard/skip a video frame, pause a frame (until the next vblank), or drop audio samples from the buffer. There may be mixes of these mechanics, or slightly different ones, but in the end they all boil down to re-adjusting either audio or video (to get them in sync again) in a way that loses source emulated frames of video or audio.

You don't skip audio, and it shouldn't need to be adjusted regardless of any frame-skip.


Quote
So on to a possible method in which we can create audio and video sync perfection. I'll call this method "dynamic vsync", which when implemented in the following way, would create the best possible synchronization for emulation without (visible) artifacts.   

I'll refer to our earlier SNES example, where the emulation runs at (rounded) 60.10Hz and our real screen refresh is 60.13Hz. The 0.03Hz difference causes a video frame of lag every 33 seconds, which creates the inconvenient truth that every 33 seconds we get an adjustment of a frame in either video (with a possible input latency problem) or audio. Again, my assumption is that in mainline and Groovy MAME/UME these adjustments happen on a per-frame level. So somewhere (possibly every 33 seconds) either a frame of video or audio is lost to keep them in sync. Basically this happens because the emulation is either vsynced or not; it cannot be a mix of the two.

Now to the solution. For the SNES emulation in vsync mode, every 33 seconds the real refresh will be ahead of the emulation by a full frame, so we can either skip a frame (not display it, also negatively impacting input response) and sync the next frame to vblank, *or* choose to *temporarily* disable vsync in an intelligent way. The intelligent way would be to make the emulation catch up a full frame without losing a frame. Some math shows that if we disable vsync at this point and flip each of the next 16 frames 1 millisecond before vblank, then we would achieve perfect sync again, *without the loss of a video frame*, AND, because flipping 1 ms before vblank is still in the border blanking area, there will be no visible screen tearing! So, concluding, we would have perfect resynchronization of audio and video:
- no loss of either source emulated video or audio (i.e. 100% equivalent of a real machine: all video frames displayed, no loss of audio samples)
- no artifacts

I guess this method would need work on the following:
  • Vsync needs to become a "Dynamic Vsync". This should be possible by means of choosing dynamically between D3DPRESENT_INTERVAL_ONE (vsync) and  D3DPRESENT_INTERVAL_IMMEDIATE, depending on where real time is versus emulated time.
  • The "dynamic vsync" needs to be intelligent in the sense that it should not brutally present 1 or 2 frames consecutively during mid frame, causing visible screen tearing, but instead should recalculate how many frames need to be presented 1 ms before vblank and execute those, before syncing to vblank again, to keep any possible tearing in the border blanking area, making it for the user as if full vsync has been retained (no visible artifacts, no loss of audio or video).
  • To be able to present frames 1 ms before vblank one needs to know ahead when this will happen. This would be possible by means of a loop that counts the time between vblanks, making use of the D3DRASTER_STATUS routine.

Of course there are a lot of blanks that need to be filled in before it could be made of practical use, but hopefully there's some basis in above thinking that we can build upon. If only to bring that last bit of perfection to (Groovy)UME :)

If a given display cannot match the game's refresh rate, the only thing you can do to get perfect animation is to change the game speed to match the screen refresh rate. Anything else will result in glitches or tearing.
This user is driverman. Permanent probationary status contingent upon following forum rules of decorum. --- saint

Calamity

  • Moderator
  • Trade Count: (0)
  • Full Member
  • *****
  • Offline
  • Posts: 6553
  • Last login:Today at 06:55:05 am
Re: Theoretical versus real refresh rates
« Reply #9 on: February 01, 2013, 01:10:31 pm »
I guess really the only way to get perfect sync despite the dotclock granularity and uncertainty mismatch is to do an ad-hoc patch of the audio, cpu and video crystals in the drivers... So you could measure the exact refresh rates of your real screen modes and patch the crystal speeds in the drivers for the micro misalignments. Not sure whether that would cause entirely new issues though... At least I know one thing for sure: that would be hard to automate for all drivers...

Hi Dr.Venom,

I was about to answer to your post when I read your update, so you've already figured out why your suggested method wouldn't work the way you intended.

Anyway, as discussed in previous posts, the scenario you're depicting does not occur in real GM operation. I mean, you never get a whole frame of mismatch after 33 seconds, because the re-synchronization is continuous. OK, if you have the real hardware side by side to compare, then yes, after 33 seconds there would be a frame of lag between the two systems.

But even so, if you had the proper equipment to measure the real refresh of your SNES, I believe you'd find it's a bit different from the nominal refresh the emulators assume (I've never done this, I must admit). Crystal oscillators suffer frequency variations due to temperature and age.

In order to achieve better refresh granularity, there are more aggressive techniques that involve dynamic reprogramming of the video timings and require Powerstrip, I think you'll find this reading interesting:

http://forum.doom9.org/showthread.php?t=73874

As you see, this problem is an old one.

Dr.Venom

  • Trade Count: (0)
  • Full Member
  • ***
  • Offline
  • Posts: 270
  • Last login:May 08, 2018, 05:06:54 am
  • I want to build my own arcade controls!
Re: Theoretical versus real refresh rates
« Reply #10 on: February 02, 2013, 12:45:28 pm »
In order to achieve better refresh granularity, there are more aggressive techniques that involve dynamic reprogramming of the video timings and require Powerstrip, I think you'll find this reading interesting:

http://forum.doom9.org/showthread.php?t=73874

Thanks for the pointer, I'll give it a read through.

Quote
As you see, this problem is an old one.

Ah yes, I see. Of course that's not keeping us from trying to come up with different/better solutions ;)  But in this case I think I'll leave it at this for now, and just enjoy GM/UME in all its current glory :)