
Author Topic: Modeline calculation and GroovyMAME discussion  (Read 12603 times)

0 Members and 1 Guest are viewing this topic.

Calamity

  • Moderator
  • Trade Count: (0)
  • Full Member
  • *****
  • Offline Offline
  • Posts: 7461
  • Last login:May 23, 2025, 06:07:25 am
  • Quote me with care
Modeline calculation and GroovyMAME discussion
« on: March 29, 2013, 06:26:40 pm »
So, the different monitor presets shipped with GroovyMAME tend to be pretty narrow. That means modelines get generated with pixel clocks above the MAME specifications, and the risk of not matching the exact frame duration. For example, the CPS-III pixel clock is 7.5 MHz (Hfreq = 15432 Hz), but when the lowest Hfreq in the preset is 15450 Hz, you must go for a higher pixel clock, and higher pixel/line totals.

Yeah, with GroovyMAME we don't attempt to reproduce the actual PCB video signal, because that would force the user to be constantly adjusting his monitor. Instead, we try to get the closest possible vertical frequency, but we play with the horizontal frequency to our convenience.

For instance, the Hantarex MTC 9110 preset is defined over the range 15.625-16.670 kHz. This is because the Hantarex has a 1 kHz wide working range. You can move this range up or down by means of a potentiometer on the chassis. For instance, you can set it to admit a range of 15.00-16.00 kHz. This is useful for a PCB that outputs 15.25 kHz. But if you're running a MAME cab, then it makes no sense to lower this adjustment. Instead, what you want is to adjust it so you benefit from the highest frequency the chassis admits (16.67 kHz), so you can use higher resolutions/refresh rates. However, this limits the lower value to 15.625 kHz. So when using this preset, we need to promote all video modes used by MAME to at least 15.625 kHz, by adding the required lines.
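That "promotion" step is just arithmetic: grow the line count until the resulting horizontal frequency clears the preset's floor. A minimal sketch in Python (a hypothetical helper for illustration, not GroovyMAME's actual code):

```python
import math

def promote_vtotal(vfreq_hz, vtotal, hfreq_min_hz):
    """Smallest total line count whose hfreq (vfreq * vtotal) reaches the preset floor."""
    needed = math.ceil(hfreq_min_hz / vfreq_hz)
    return max(vtotal, needed)

# CPS-III: 259 total lines at ~59.58 Hz gives hfreq ~15.43 kHz.
# To clear the Hantarex preset's 15.625 kHz floor, lines must be added:
vtotal = promote_vtotal(59.58339, 259, 15625.0)
print(vtotal, vtotal * 59.58339)  # 263 lines, hfreq now above 15625 Hz
```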

I agree that the PAL/NTSC presets in GroovyMAME are very narrow and thus I don't recommend using them unless you're using a TV with very strict requirements (we have found some of those). Usually a better preset for TVs is any of the 15 kHz arcade presets that have a wider frequency range.

Quote
Since we deal with integer numbers and a granularity of 0.01 MHz for pixel clocks, there's no chance to get a signal that matches MAME's frame duration. It can be close, but not close enough to avoid using triple buffering...

Indeed. But we are not using triple buffering in GroovyMAME, unless the user wants to. V-sync is enough to have *perfect* arcade animation. Now, there's a common misconception here: even if you achieved a refresh that's *exact* to the original, you would still need some sort of vertical synchronization. So we are not v-syncing because of the limitations of modeline accuracy, we're v-syncing because we must do it. And in this regard the dotclock granularity affects us the same whether we attempt to keep the original modeline or we "port" it to the frequency range we prefer.

« Last Edit: April 01, 2013, 10:36:53 am by Calamity »
Important note: posts reporting GM issues without a log will be IGNORED.
Steps to create a log:
 - From command line, run: groovymame.exe -v romname >romname.txt
 - Attach resulting romname.txt file to your post, instead of pasting it.

CRT Emudriver, VMMaker & Arcade OSD downloads, documentation and discussion:  Eiusdemmodi

eboshidori

  • Trade Count: (0)
  • Full Member
  • ***
  • Offline Offline
  • Posts: 13
  • Last login:May 20, 2013, 06:56:18 pm
  • Le CRT vaincra ! ^.^
Re: Re: Pentranic CGA Monitor. Custom CRT_Range. GroovyArcade.
« Reply #1 on: March 30, 2013, 11:38:29 am »
Yeah, with GroovyMAME we don't attempt to reproduce the actual PCB video signal, because that would force the user to be constantly adjusting his monitor.

That's the point I wanted to discuss. ^^
The frame sizes of arcade PCBs are designed to match the safe area of the NTSC standard, in most cases. They don't care of the 59.94 Hz refresh rate (nor 60 Hz), because monitors are feeded with RGB signals (so there's no need to work with color sub-carrier in mind), but they care about centering a picture which will be visible in every monitor, even by knowing these monitors won't have the same geometry settings (more or less overscan, slighty off centered), and the fact that most arcade operators don't take time to adjust the picture position, even if it's easier than in a consumer TV.

For example, the CPS-III (384x224): the total frame size is 486x259. Why such numbers, which aren't convenient multiples of 8?
Because for this frame size and this pixel clock (7.5 MHz), the size of the active display matches the 5% overscan area of the NTSC standard.
The refresh rate is around 59.58 Hz (and 15.432 kHz for Hfreq), numbers which respect neither the NTSC frequencies nor a 15.72 kHz @ 60 Hz arcade "standard"... but that's not the point. The goal is to match the geometry and position of the picture that correspond to NTSC, because arcade monitors are adjusted according to that.
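These figures follow directly from the totals; a quick sanity check (plain arithmetic, not the output of any particular tool):

```python
# CPS-III: derive line rate, frame rate and frame duration from the totals.
pclock_hz = 7_500_000      # 7.5 MHz pixel clock
htotal, vtotal = 486, 259  # total frame size around the 384x224 active area

hfreq_hz = pclock_hz / htotal   # ~15432.1 Hz line rate
vfreq_hz = hfreq_hz / vtotal    # ~59.58339 Hz frame rate
frame_us = 1e6 / vfreq_hz       # 16783.2 us per frame

print(f"{hfreq_hz:.1f} Hz  {vfreq_hz:.5f} Hz  {frame_us:.1f} us")
```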

The same goes for arcade boards that display 320x240. The frame size is over the regular 262 lines in order to be sure of displaying the 240 lines within the NTSC safe area (because in NTSC you do have 240 lines of active display, but the overscan is set in such a way that you end up seeing only 224). So most arcade boards designed with 240 active lines display them inside a frame of roughly 270 lines. The refresh rate then goes below 60 Hz, because there's no need to raise the Hfreq (and use a higher pixel clock) just to stay close to a 60 Hz or 59.94 Hz standard. Typically, those boards run around 57-58 Hz, with an Hfreq around 15625 Hz, a very convenient number that seems to be the real standard rather than 15720 Hz. These boards are designed with frames of 270 lines (and more), not with the operator adjusting his screen in mind...

Arcade boards work with pixel clocks such as 4, 6, 7, 8 or 10 MHz, which come from dividing common oscillator values. They usually work at 15.625 kHz, and often below that. The active display of the game is always within the active display of NTSC, with overscan taken into account.

So, when you specify modelines close to those parameters, you have a better chance of matching the exact frame duration of the original game, and there's no trouble getting a centered picture with a size close to full screen (with small adjustments of H and V size on your monitor).


Quote
Instead, we try to get the closest possible vertical frequency, but we play with the horizontal frequency to our convenience.

OK, but the same can be done for modelines that respect the original games' specifications. For example, Bubble Bobble (256x224) has a frame of 384x264 (same as Neo Geo). That means wide porches, i.e. a narrow picture with black borders (because there are 384-256 = 128 px for the horizontal blanking interval). Normally, you just adjust the H size on the monitor to get a full-screen picture, but you can tweak the modeline to do it instead. Instead of following the 6 MHz pixel clock + 384 pixels per line of the original board (and MAME), you can choose 5.25 MHz and 336 pixels for the modeline. The active display remains the same, and so does the Hfreq (6 MHz/384 = 5.25 MHz/336 = 15625 Hz), but the active display will appear wider on screen (so no need to modify the H size, and no need to re-center the picture if you keep the same porch distribution). If you keep the same number of lines (264), you keep the exact same refresh rate.
For this case (and everything based on 15625 Hz), it's easy to modify the geometry, but for other systems, when you change the number of lines or of total pixels, it becomes difficult to keep the same refresh rate. E.g. the CPS-III frame duration is 16783.2 µs (486*259/7.5); it's impossible to get this exact duration with a higher number of pixels and/or lines, given the 0.01 MHz pixel clock granularity.
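The Bubble Bobble tweak above is easy to verify: scaling htotal and the pixel clock by the same ratio leaves both the line rate and the refresh untouched (illustrative numbers from the post):

```python
# Original board timing vs. the 'wider picture' tweak: same hfreq, same refresh.
orig  = dict(pclock=6_000_000, htotal=384, vtotal=264)  # Bubble Bobble / Neo Geo frame
tweak = dict(pclock=5_250_000, htotal=336, vtotal=264)  # blanking shrunk by 48 px

for m in (orig, tweak):
    m["hfreq"] = m["pclock"] / m["htotal"]  # both 15625.0 Hz
    m["vfreq"] = m["hfreq"] / m["vtotal"]   # identical refresh (~59.186 Hz)

print(orig["hfreq"], tweak["hfreq"], orig["vfreq"], tweak["vfreq"])
```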


Quote
For instance, the Hantarex MTC 9110 preset is defined over the range 15.625-16.670 kHz. This is because the Hantarex has a 1 kHz wide working range. You can move this range up or down by means of a potentiometer on the chassis. For instance, you can set it to admit a range of 15.00-16.00 kHz. This is useful for a PCB that outputs 15.25 kHz. But if you're running a MAME cab, then it makes no sense to lower this adjustment.

Of course it makes sense to lower it: most arcade boards tend to have a low H frequency, and it's easier to match the frame duration when your modelines are written according to a low Hfreq. Even if your cab runs only emulators and will never run any real PCB, it makes sense to lower this setting.

Quote
Instead, what you want is to adjust it so you benefit from the highest frequency the chassis admits (16.67 kHz), so you can use higher resolutions/refresh rates. However, this limits the lower value to 15.625 kHz. So when using this preset, we need to promote all video modes used by MAME to at least 15.625 kHz, by adding the required lines.

OK, I understand, but really there aren't many boards that need a higher Hfreq (close to 16.5 kHz), and by promoting all the other video modes to this range (15.625-16.67 kHz), you need to add lines or pixels to the original frame size, and the little freedom you have with the pixel clock values is not enough to keep the exact frame duration.

If you don't want to constantly adjust the picture size for every different system, you can achieve that by staying close to the original specifications, because they are designed precisely to do so: they are designed to match the NTSC safe area, something that *never* changes, and something that is used to set the default geometry of arcade monitors.

Quote
we are not using triple buffering in GroovyMAME, unless the user wants to. V-sync is enough to have *perfect* arcade animation. Now, there's a common misconception here: even if you achieved a refresh that's *exact* to the original, you would still need some sort of vertical synchronization. So we are not v-syncing because of the limitations of modeline accuracy, we're v-syncing because we must do it.

Triple buffering is used because the timing of MAME (sending the pre-rendered frame to the graphics card's buffer) and the video mode don't match. When MAME defines a duration of exactly 16896 µs between two emulated frames, if the modeline that drives the timings of the graphics card doesn't produce exactly this duration, then you get artifacts (tearing).
Triple buffering doesn't compensate for the barely visible 0.002473 Hz difference of your setting (which is nevertheless very close...); it stores several frames and then displays them according to the timing of the graphics card, in order to get a smooth display. And there's no way to ask MAME to produce those few frames faster than it's supposed to... When you need 2 additional frames for buffering, you need to wait the whole duration of those frames. Here you need to wait 2 x 16896 = 33792 µs. So a barely perceptible difference in the vertical rate leads to a noticeable input lag. If you don't want it, then don't activate triple buffering... but deal with tearing or jerky scrolling. I understand you must v-sync even with the same frame duration, but I don't understand how this option alone can produce a perfectly smooth display when the frame durations don't match... That's why not only is v-sync necessary, but achieving the perfect frame duration is even more desirable.
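The latency figure quoted here is just frame periods added by the queue:

```python
# Each frame queued ahead of display adds one full frame period of lag.
frame_us = 384 * 264 / 6.0  # Neo Geo frame duration: 101376 pixel clocks at 6 MHz
extra_frames = 2            # a triple buffer can hold 2 frames ahead of the screen
lag_us = extra_frames * frame_us
print(lag_us)  # 33792.0 us of worst-case added lag
```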

Quote
the dotclock granularity affects us the same whether we attempt to keep the original modeline or we "port" it to the frequency range we prefer.

To produce the desired low-res pixel clocks, the PLLs of graphics cards work with a reference input clock of 27 MHz (sometimes 13.5, 14.318 or 25 MHz). The fact that the onboard oscillator isn't a perfect 27000000 Hz isn't a problem. As long as the programmed value of the PLL matches what comes from MAME, everything can work in sync. If MAME says 16896 µs, and your modeline also says it, and the PLL can be programmed to do so (because the pixel clock you specified is manageable), everything works fine. At the very end you will certainly measure a slight difference at the VGA output, because the 27 MHz oscillator isn't exactly 27 MHz, but internally the maths correspond.
If you specify a modeline with a 7.8505859 MHz pixel clock to get the exact desired refresh rate, the maths can be OK too, but when it comes to PLL generation you're out of luck... Even before considering the variation of the 27 MHz input clock, there's no way a PLL can produce such a pixel clock. Every digit below the kHz range is discarded, so your 7.8505859 MHz becomes 7851 kHz. And the PLL might only produce 7850 or 7852 kHz as the closest values... So the frame duration doesn't match anymore (before even measuring what comes out of the VGA port). That's why it's safe to work with pixel clocks of 0.01 MHz granularity: just about every PLL is able to produce them. In this regard, arcade boards are even safer :D , because they work with very easy pixel clocks: it's easier to produce 7 MHz than 7.8505859, and it's safer to produce 7.50 rather than 7.851. Considering this constraint, it's safer to aim for the lowest Hfreq to get the exact frame duration, because you'll tend to work with easy pixel clocks.
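The rounding step described above can be sketched like this (the real granularity varies per driver and PLL; kHz rounding is used here as in the post):

```python
# Why 'exotic' pixel clocks get mangled: digits below the kHz range are lost.
def quantize_khz(pclock_hz):
    """Round a requested pixel clock to 1 kHz, as the post describes."""
    return round(pclock_hz / 1_000) * 1_000

print(quantize_khz(7_850_585.9))  # 7851000: the 'exact' clock is gone already
print(quantize_khz(7_500_000))    # 7500000: a convenient arcade clock survives
```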

As you increase the frequency, you increase the need for finer granularity, which you can't obtain. As you increase the frequency, you increase the gap between the frame durations produced by modelines and the MAME specifications. If the only goal of increasing the Hfreq is to get rid of picture size adjustments, you can also achieve that with a lower frequency. And, by the way, you get a perfect frame duration as a bonus.  :)

« Last Edit: March 30, 2013, 12:27:19 pm by eboshidori »

Calamity

  • Moderator
  • Trade Count: (0)
  • Full Member
  • *****
  • Offline Offline
  • Posts: 7461
  • Last login:May 23, 2025, 06:07:25 am
  • Quote me with care
Re: Modeline calculation and GroovyMAME discussion
« Reply #2 on: April 01, 2013, 01:05:52 pm »
Hi Eboshidori,

First, please excuse my late answer: you're touching on many interesting topics and I haven't had much time lately for an elaborate answer. I decided to split this topic so we don't hijack someone else's thread.

The problem when you master a certain subject, as you do, is that you become sceptical about how other people are doing things.

Some time ago I took the time to *measure* all the real pixel clock values that could be achieved with an ATI 9250, from 3.75 to 25.10 MHz. I've attached the list below. This allowed me to predict with great accuracy what the real refresh of a modeline would be. In the left column you have the 10 kHz-precision value which the ATI drivers take, and in the right column you have the real value obtained, in Hz:

 375  3749904
 376  3764327
 377  3771963
 378  3779904
 379  3796780
 380  3808831
 etc.

This list is only valid for that chipset. So one would actually need to work out his own list for any card.
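Given such a measured table, predicting the real refresh reduces to a nearest-entry lookup. A sketch using only the few entries shown above (a full table would cover the whole 3.75-25.10 MHz range):

```python
import bisect

# Requested step (10 kHz units, left column) -> measured real clock in Hz.
measured = {375: 3749904, 376: 3764327, 377: 3771963,
            378: 3779904, 379: 3796780, 380: 3808831}

def real_dotclock(requested_hz):
    """Measured clock for the nearest requestable 10 kHz step in the table."""
    keys = sorted(measured)
    step = round(requested_hz / 10_000)
    i = bisect.bisect_left(keys, step)
    candidates = keys[max(0, i - 1):i + 1] or [keys[-1]]
    best = min(candidates, key=lambda k: abs(k - step))
    return measured[best]

print(real_dotclock(3_760_000))  # 3764327 Hz: what the card really outputs
```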

Let's consider the CPS-III case you pointed out. This would be the theoretical modeline that preserves the original frame time (the totals are what matter; the porches are guessed):

Modeline "384x224 15,432 KHz 59,58339 Hz" 7.500000 384 400 432 486 224 236 239 259  -hsync -vsync

I'm going to keep the original number of lines. Now, when porting this to normal PC hardware, the first thing we have to do is normalize the htotal to the next multiple of 8, otherwise the video card will silently do it for us, ruining our timings. With this in mind, we can no longer use the original pixel clock. For this new htotal, I'll take the closest real pixel clocks from my look-up table, and this is what I get:

Modeline "384x224 15,427 KHz 59,56581 Hz" 7.528643 384 400 432 488 224 236 239 259  -hsync -vsync (-0.0175 Hz)
Modeline "384x224 15,445 KHz 59,63435 Hz" 7.537305 384 400 432 488 224 236 239 259  -hsync -vsync (+0.0509 Hz)

As you see, we come close, but not enough to get a perfect 59,58339 Hz figure.

Now, let's try adding a line. Again, I take the closest pixel clock from my look-up table, and this is what I get:

Modeline "384x224 15,491 KHz 59,58231 Hz" 7.559804 384 400 432 488 224 236 239 260  -hsync -vsync (-0.0011 Hz)

Just -0.0011 Hz off, not bad!

What I'm trying to prove is that keeping the original vertical total lines and horizontal frequency doesn't necessarily mean better refresh accuracy, as you are suggesting, due to the seemingly random nature of the real dotclocks that results from their granularity.

The modeline generator I wrote for VMMaker does account for this. It has an option named "iterations" that lets it keep adding lines, one by one, to the strictly required ones, calculating the real refresh for each "iteration". It turns out that you get much better approximations when you let it calculate 4 or 5 iterations. This means searching the range between 15.6 and 16.0 kHz. So your suggestion of higher frequency -> worse granularity may be true, but it certainly doesn't affect us when we only consider this narrow range (15.6-16.0 kHz).
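The "iterations" search can be sketched as follows (using 10 kHz rounding as a stand-in for a real per-card lookup table, so this is an approximation of the method, not VMMaker's actual code):

```python
# Try a few vtotal values; keep the one whose achievable refresh lands closest
# to the target once the pixel clock is forced onto the 10 kHz grid.
def best_mode(target_vfreq, htotal, base_vtotal, iterations=5):
    best = None
    for extra in range(iterations + 1):
        vtotal = base_vtotal + extra
        ideal_clock = target_vfreq * htotal * vtotal
        clock = round(ideal_clock / 10_000) * 10_000  # driver granularity
        vfreq = clock / (htotal * vtotal)
        err = abs(vfreq - target_vfreq)
        if best is None or err < best[0]:
            best = (err, vtotal, clock, vfreq)
    return best

err, vtotal, clock, vfreq = best_mode(59.58339, 488, 259)
print(vtotal, clock, f"{vfreq:.5f} Hz (off by {err:.5f} Hz)")
# Adding one line (vtotal 260) beats the strict 259-line mode, as in the post.
```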

For GroovyMAME, we didn't port the iterations method because it needs a pixel clock look-up table that is specific to each particular card, and we still don't have a tool to calculate that in an easy way. And, mainly, because the average user can't notice the difference. Yes, GroovyMAME just assumes the calculated pixel clock is going to be rounded to 10 kHz, but as explained, you wouldn't necessarily get better accuracy by using 10 kHz-aligned values unless you knew what the resulting real value is, which can only be determined by direct measurement.

I bet there may be bigger refresh deviations from the nominal values in original hardware, due to temperature or component age, than what we're achieving here.

However, one feature that GroovyMAME will implement at some point is the possibility to reproduce the exact original video signal when this is fully documented by MAME.

On the other hand, the modeline generator is flexible enough to do what you want, as long as you use custom settings. This is what I just obtained for the CPS-III system, by editing the current generic_15 preset frequency range:

monitor       custom
crt_range0  15400.00-15720.00,49.50-65.00,2.000,4.700,8.000,0.064,0.192,1.024,0,0,192,288,448,576

Modeline "384x224_60 15.43KHz 59.58Hz" 7.78 384 400 440 504 224 233 236 259   -hsync -vsync

As for the convenience of having 16+ kHz frequencies available, it's just required for vertical games on a horizontal monitor. It allows you to have real 256p@60Hz resolutions; otherwise you can't achieve this.

Calamity

  • Moderator
  • Trade Count: (0)
  • Full Member
  • *****
  • Offline Offline
  • Posts: 7461
  • Last login:May 23, 2025, 06:07:25 am
  • Quote me with care
Re: Re: Pentranic CGA Monitor. Custom CRT_Range. GroovyArcade.
« Reply #3 on: April 01, 2013, 01:42:11 pm »
This deserves a separate post :)

Triple buffering is used because the timing of MAME (sending the pre-rendered frame to the graphics card's buffer) and the video mode don't match.

Yes, but I'm telling you we only use triple buffering in GroovyMAME in extreme cases (things like a 1 Hz difference). We're just using v-sync (syncrefresh) most of the time. They are NOT the same thing.

Quote
When MAME defines a duration of exactly 16896 µs between two emulated frames, if the modeline that drives the timings of the graphics card doesn't produce exactly this duration, then you get artifacts (tearing).

Nope :)

This is the classic misconception. Tearing just happens because the video card's vertical retrace and the emulation loop are out of phase. This would be true even if the modeline's refresh matched the emulated frame duration up to the 10000000000000000000 decimal position. This is why we need to v-sync.

To be fair, your sentence is partially true; it's what we tell novice users to explain why they're getting tearing, but it's not the technical explanation.

Quote
Triple buffering doesn't compensate for the barely visible 0.002473 Hz difference of your setting (which is nevertheless very close...); it stores several frames and then displays them according to the timing of the graphics card, in order to get a smooth display. And there's no way to ask MAME to produce those few frames faster than it's supposed to... When you need 2 additional frames for buffering, you need to wait the whole duration of those frames. Here you need to wait 2 x 16896 = 33792 µs. So a barely perceptible difference in the vertical rate leads to a noticeable input lag. If you don't want it, then don't activate triple buffering... but deal with tearing or jerky scrolling. I understand you must v-sync even with the same frame duration, but I don't understand how this option alone can produce a perfectly smooth display when the frame durations don't match... That's why not only is v-sync necessary, but achieving the perfect frame duration is even more desirable.

Triple buffering in MAME is not real triple buffering, it's only a circular queue. This is due to how DirectX manages flipping with more than two buffers. This has been explained in detail in other threads in this subforum. This explains the lag. It's even worse: triple buffering based on DirectX is not asynchronous, as it is supposed to be.

GroovyMAME implements the -triplebuffer option in a different way: it performs an asynchronous double buffering. This minimizes the lag while allowing the game loop to actually run freely at its supposed speed, keeping the screen update routines in a separate thread.

This technique eliminates tearing while keeping the game at its required speed, but, by design, it produces scroll stuttering. That's why we only use it when there's no better possibility. For instance, when running puckman rotated on a horizontal monitor, we wouldn't achieve 288p@60.61 Hz on a standard monitor.

For the rest of the cases, we just use -syncrefresh. What this option does in GroovyMAME is tell the emulator to throttle the game to the video card's refresh. This is because smooth video emulation is just impossible when you try to keep two clocks synchronized on a PC system: the video card's clock and the CPU clock.

So in this scenario it doesn't matter if you are 0.1 Hz off, because the emulation just gets synchronized with the video card's speed. Of course, the more accurate the refresh, the better for us. But you reach a point where human beings can't actually tell the difference, so getting more accurate only makes sense from an academic point of view and doesn't affect gameplay at all.
« Last Edit: April 01, 2013, 03:43:39 pm by Calamity »

rCadeGaming

  • Trade Count: (0)
  • Full Member
  • ***
  • Offline Offline
  • Posts: 1256
  • Last login:April 13, 2025, 12:14:40 pm
  • Just call me Rob!
Re: Modeline calculation and GroovyMAME discussion
« Reply #4 on: April 01, 2013, 06:04:50 pm »
Calamity, how close do we need to get to the native vertical scan rate to avoid audio stuttering at the lowest audio latency setting?  Does this vary among different games (is it just me or is CHD audio stricter)?  If so, what should we shoot for to be safe in the worst case scenario?

When tweaking modelines manually, I noticed that you can define the pixel clock in at least 0.001 MHz increments in a text modeline, but only in 0.01 MHz increments in ArcadeOSD. Wouldn't it be useful to have the smaller adjustment increment there? I know it may not actually be that accurate in terms of what you'd measure on the physical video output anyhow, but wouldn't it be helpful just for the internal math, in terms of audio synchronization?

What's more important for optimum audio synchronization, accuracy in the actual output or accuracy in the internal mathematical value?

Will the new GM/switchres which accepts text modelines accept pixel clocks defined in 0.001 MHz increments, or will GM/CRT_Emudriver round them off? If it accepts them to 0.001 MHz, it would be really nice to be able to tweak that in real time in ArcadeOSD instead of having to calculate things mathematically and make the final change in the text modeline.

eboshidori

  • Trade Count: (0)
  • Full Member
  • ***
  • Offline Offline
  • Posts: 13
  • Last login:May 20, 2013, 06:56:18 pm
  • Le CRT vaincra ! ^.^
Re: Modeline calculation and GroovyMAME discussion
« Reply #5 on: April 02, 2013, 01:09:36 pm »
Calamity: thanks for taking the time to make essential clarifications on this delicate subject.  :)


The problem when you master a certain subject, as you do, is that you become sceptical about how other people are doing things.

I came back to emulation recently, and this time I want it done correctly  :D, I can't stand input lag and jerky scrolling anymore. I discovered GroovyMAME a few weeks ago, but when I read various users' logs, with modelines that don't match the MAME specifications (different frame sizes, though you can only see that when MAME actually defines a frame size, higher pixel clocks, different frame durations...), yeah, I became a little bit sceptical ^^... Given the fact that many users continue to use triple buffering, that's not something to reassure me. :P


Quote
Some time ago I took the time to *measure* all the real pixel clock values that could be achieved with an ATI 9250, from 3.75 to 25.10 MHz. I've attached the list below.


That's a good job. What software did you use to read those values? PowerStrip, or something else (your own code)?

Quote
This allowed me to predict with great accuracy what the real refresh of a modeline would be.

Exactly, because in the end, the actual output of the PLL is something very important to know. But the 9250 is an old card, nearly 10 years old, and even back then Matrox released a graphics card that allowed better control over timings. I was thinking other cards would follow...

Quote
This list is only valid for that chipset. So one would actually need to work out his own list for any card.

This is valid for this PLL, with this input clock (was it 14.318 or 27 MHz, by the way?)

It would seem a headache to take the time to list each exact value from the PLL for every possible clock, but when you consider the oscillators of arcade boards, the list is greatly reduced. Sure, you have multiple resolutions, total frame sizes and refresh rates, but the set of distinct pixel clock values is much smaller: 4, 6, 6.25, 7, 7.5, 8, 8.4672, 10, 12.27 MHz... With only these few numbers, you cover many systems.


Quote
Now, when porting this to normal PC hardware, the first thing we have to do is normalize the htotal to the next multiple of 8, otherwise the video card will silently do it for us, ruining our timings. With this in mind, we can no longer use the original pixel clock.

Nearly 10 years ago, Matrox was one of the first graphics card manufacturers to allow designing video modes with pixel precision (no longer multiples of 8). So what's the point of using PowerStrip, Soft15kHz, etc. if in the end we're still stuck with narrow VESA specifications?

If my card isn't able to produce anything outside multiples of 8, OK, I'll deal with it (and write modelines accordingly). But I thought all those tweaking utilities were made to bypass those limitations (kept for compatibility with old standards from the late 80's), assuming that most cards dropped the strict VESA design and would allow pixel granularity for video modes when properly asked to...
It's no use increasing the math precision in software if the hardware doesn't follow. On the contrary, you increase the error margin...


Quote
Now, let's try adding a line. Again, I take the closest pixel clock from my look-up table, and this is what I get:

Modeline "384x224 15,491 KHz 59,58231 Hz" 7.559804 384 400 432 488 224 236 239 260  -hsync -vsync (-0.0011 Hz)

Just -0.0011 Hz off, not bad!

In this case, you get a duration of ~16783.503911 µs, as opposed to the original 16783.2. Knowing that MAME speaks in attoseconds internally, the slight 0.0011 Hz (which is barely perceptible at all) becomes a much more significant difference in the calculations...


Let's go back to the Neo Geo example: if my card is not able to produce the nice 6.000 MHz pixel clock I requested, I'd rather write the corresponding value directly into the MAME source and compile a dedicated build... Or better: write a build that allows the user to set the corresponding value for his card, in Hz.

For the same number of cycles (internally, this remains the same, and it must, because it's the most important thing for emulation accuracy), I would write 23.999384 MHz as the master clock value instead of 24.000000. Then the M68000 would run at 11.999692 MHz (Mclk/2), and the pixel clock would be 5.999846 MHz (Mclk/4), matching the output of the PLL and the corresponding modeline. The frame duration of the emulator and of the modeline would be the same:
384x264 / 5.999846 = 16896.433675 (etc.) µs (converted to attoseconds) instead of the initial 16896.
In Hz: 59.184087 instead of 59.185606.

After all, SNK made this sort of change during the life of the Neo Geo. Early MVS boards have 24 MHz oscillators, and late home consoles have a 24.167829 MHz one. But the number of cycles is exactly the same; everything still works the same way. The two pieces of hardware still handle 384x264x4 = 405504 cycles per frame, but with the late consoles you get a 59.599484 Hz refresh rate. And nobody ever noticed or complained. As long as everything is perfectly smooth, there's no reason to. The slight rise in frequency is barely noticeable, even for the audio part.
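The arithmetic of that rescaling, checking the post's numbers:

```python
# Rescale the emulated master clock so the emulated frame matches what the
# card's PLL actually produces (Neo Geo numbers from the post).
mclk_hz = 23_999_384          # instead of the nominal 24 MHz
pclock_hz = mclk_hz / 4       # pixel clock = Mclk/4 -> 5.999846 MHz
cycles_per_frame = 384 * 264  # the totals never change: 101376 pixel clocks

frame_us = cycles_per_frame / (pclock_hz / 1e6)  # ~16896.434 us, not 16896
vfreq_hz = 1e6 / frame_us                        # ~59.184087 Hz
print(f"{frame_us:.6f} us  {vfreq_hz:.6f} Hz")
```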

Quote
What I'm trying to prove is that keeping the original vertical total lines and horizontal frequency doesn't necessarily mean better refresh accuracy, as you are suggesting, due to the seemingly random nature of the real dotclocks that results from their granularity.

This is the output of the PLL, something that is consistent across all the other video modes (but is not consistent among the different cards on the market, even from the same brand...). But the PLL formula isn't linear, so you don't see the same granularity between two values, depending on whether you choose low or high clocks.

« Last Edit: April 02, 2013, 02:28:05 pm by eboshidori »

eboshidori

  • Trade Count: (0)
  • Full Member
  • ***
  • Offline Offline
  • Posts: 13
  • Last login:May 20, 2013, 06:56:18 pm
  • Le CRT vaincra ! ^.^
Re: Re: Pentranic CGA Monitor. Custom CRT_Range. GroovyArcade.
« Reply #6 on: April 02, 2013, 03:03:53 pm »
Quote
When MAME defines a duration of exactly 16896 µs between two emulated frames, if the modeline that drives the timings of the graphics card doesn't produce exactly this duration, then you get artifacts (tearing).

Nope :)

This is the classic misconception. Tearing just happens because the video card's vertical retrace and the emulation loop are out of phase. This would be true even if the modeline's refresh matched the emulated frame duration to the 10000000000000000000th decimal place. This is why we need to v-sync.

I understand this. If the frame duration matches but you are out of phase, you still get tearing. But it's a stable tearing, located at the same place on screen. That's precisely what indicates you've matched the frame duration, because when you are out of phase and don't have the same frame duration, the tearing moves all across the screen.
So, in the first case, you just have to get in phase, i.e. use V-sync. No more tearing. :cheers:
In the second case, you also use V-sync, but the slight difference in duration tends to cause stuttering... It can be very light, but to get rid of it you need triple buffering...


Quote
Triple buffering in MAME is not real triple buffering, it's only a circular queue. This is due to how DirectX manages flipping with more than two buffers. This has been explained in detail in other threads in this subforum. This explains the lag. It's even worse: triple buffering based on DirectX is not asynchronous, as it is supposed to be.

GroovyMAME implements the -triplebuffer option in a different way: it performs an asynchronous double buffering. This minimizes the lag while allowing the game loop to actually run freely at its supposed speed, keeping the screen update routines in a separate thread.

That's good news! ^^

Synchronous buffering is used to stay close to the signal. That's what you get in LCD TVs when you connect a 240p source. The TV treats it as a 480i signal that must be deinterlaced before being upscaled. So it waits for the first field, then the second (to get a full frame), then performs deinterlacing (interpolation, more or less crude), then upscales the result to the native resolution of the panel. But it must display the final picture in sync with the signal, so it often needs to wait another frame. That's what causes the big input lag, along with a nasty result that ruins the pixel art. :P

Quote
This technique eliminates tearing while keeping the game at its required speed, but, by design, it produces scroll stuttering.


Buffering gives you lag + smoothness (synchronous), or less lag + stuttering (asynchronous). In every case, you want to escape from it.


Quote
For the rest of cases, we just use -syncrefresh. What this option does in GroovyMAME is telling the emulator to just throttle the game at the video card's refresh.

Is GroovyMAME collecting the value from the modeline, the value of the actual PLL, or just computing the duration between two Vsync events?

Quote
So in this scenario it doesn't matter if you are 0.1 Hz off, because the emulation just gets synchronized with the video card's speed. Of course, the more accurate the refresh is, the better for us. But you reach a point where human beings can't actually tell the difference, so trying to get more accurate only makes sense from an academic point of view and doesn't affect gameplay at all.

The throttling in MAME is based on internal values. This affects video as well as audio. Even if in the end you can avoid video trouble by synchronizing to the graphics card (even if it is slightly off), I think the audio issues come from this. It's very difficult to achieve a 1/5 audio latency in MAME without getting artifacts. Even if your video is smooth without triple buffering, it's safer to stay at 2/5.
I still think the durations need to match perfectly, whatever those durations are (close to the original hardware, or close to what is actually achievable for each setup, when all the real parameters are carefully taken into account).


« Last Edit: April 02, 2013, 03:11:23 pm by eboshidori »

Calamity

  • Moderator
  • Trade Count: (0)
  • Full Member
  • *****
  • Offline Offline
  • Posts: 7461
  • Last login:May 23, 2025, 06:07:25 am
  • Quote me with care
Re: Modeline calculation and GroovyMAME discussion
« Reply #7 on: April 02, 2013, 05:23:48 pm »
Yes, in theory, if you got a perfect match for the refresh you would get static tearing. But what I'm suggesting is that this is impossible in practice. On the one hand, it's impossible to get a perfect match for the refresh, due to the pixel clock uncertainty, unless you are incredibly lucky. But besides that, you have the CPU clock issue: remember we're running on a multi-tasking system, so any frame timing based on counting CPU cycles is doomed to be uneven. In practice, only the video card's clock is reliable, and this is only true because the vertical retrace is a relatively long period of time rather than a discrete event, which allows for some tolerance on when it is actually reported.

Even if MAME works with attoseconds, this is just to organize the hardware emulation events with extreme precision. Each frame is emulated as fast as possible. It's only in the last step, once the video frame is ready, that the throttling to real time is applied. And this throttling is just a sophisticated old-skool wait loop.

This means that if you immediately exit this wait loop (which is based on cpu cycles and is, by definition, inaccurate), and you just perform a simple wait for vertical retrace, the emulation speed "magically" becomes the one dictated by the video card. 100% smooth video, no tearing, no scroll hiccups, nothing, smooth as a PCB. This is what -syncrefresh does. (BTW: have you really tested GroovyMAME with CRT Emudriver on an ATI card? I find your lack of faith disturbing :D).

So the key idea here is: MAME does not need to know what the real refresh is in order to produce smooth video.

Now, what happens to the audio? OK, here is where the trick is applied. As you know, audio is still emulated at the theoretical native speed. However, MAME's audio buffer supports "stretching" the final mix by a custom factor with 3 decimal figures. It's not great precision, but it gets the job done. This factor is used officially by the -speed option. What we do is tweak this factor internally in real time, based on the calculated speed percentage (not the integer-rounded value that MAME prints on screen). This eliminates audio stuttering. Perhaps a dog can notice the trick; I personally can't.
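A minimal sketch of the idea, with hypothetical names (this is not GroovyMAME's actual code): the stretch factor simply follows the measured speed percentage, rounded to the 3 decimal places the buffer factor supports:

```python
# Hypothetical sketch (illustrative names, not GroovyMAME's internals):
# the audio "stretch" factor just follows the measured emulation speed,
# rounded to the 3 decimal places that MAME's factor supports.
def audio_stretch_factor(measured_speed_percent):
    return round(measured_speed_percent / 100.0, 3)

print(audio_stretch_factor(99.37))   # 0.994
```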

As for the audio latency, Dr.Venom on this forum has suggested a patch allowing fractional factors like 1.5, which works best for him; this will be added to GroovyMAME. But the key to removing most of the latency seems to be, according to him, moving from DirectSound to a different sound API.
Important note: posts reporting GM issues without a log will be IGNORED.
Steps to create a log:
 - From command line, run: groovymame.exe -v romname >romname.txt
 - Attach resulting romname.txt file to your post, instead of pasting it.

CRT Emudriver, VMMaker & Arcade OSD downloads, documentation and discussion:  Eiusdemmodi

Calamity

  • Moderator
  • Trade Count: (0)
  • Full Member
  • *****
  • Offline Offline
  • Posts: 7461
  • Last login:May 23, 2025, 06:07:25 am
  • Quote me with care
Re: Modeline calculation and GroovyMAME discussion
« Reply #8 on: April 02, 2013, 05:49:13 pm »
Calamity, how close do we need to get to the native vertical scan rate to avoid audio stuttering at the lowest audio latency setting?  Does this vary among different games (is it just me or is CHD audio stricter)?  If so, what should we shoot for to be safe in the worst case scenario?

I'm not sure, but I think this depends on the game, especially on whether the original board used samples or analog sound. But with GroovyMAME the sound is re-synchronized by default, so you shouldn't notice stuttering even with 2 Hz of difference or more (obviously you'll notice the different pitch).

Quote
When tweaking modelines manually, I noticed that you can define the pixel clock in at least 0.001 MHz increments in a text modeline, but only 0.01 MHz increments in ArcadeOSD. Wouldn't it be useful to have the smaller increment of adjustment there? I know it may not actually be that accurate in terms of what you'd measure on the physical video output anyway, but wouldn't it be helpful just for the internal math, in terms of audio synchronization?

What's more important for optimum audio synchronization, accuracy in the actual output or accuracy in the internal mathematical value?

The internal maths for audio synchronization are based on the refresh measured in real time while in game, not on the modeline values.

ArcadeOSD uses 10 kHz-aligned dotclocks because it was written for ATI cards, and this is the precision that the ATI drivers use internally. Anyway, this is just an approximation of the real dotclock, which can only be found by direct measurement.

Quote
Will the new GM/switchres which accepts text modelines accept pixel clocks defined to 0.001 MHz increments, or will GM/CRT_Emudriver round them off?  If it will accept them to 0.001 MHz, it would be really nice to be able to tweak that in real time in ArcadeOSD instead of having to calculate things mathematically and make the final change the text modeline.

The text modeline will handle whatever you input, but before sending the modeline to the driver we still need to round the dotclock to 10 kHz, so I believe it's better to work with the real values.
Important note: posts reporting GM issues without a log will be IGNORED.
Steps to create a log:
 - From command line, run: groovymame.exe -v romname >romname.txt
 - Attach resulting romname.txt file to your post, instead of pasting it.

CRT Emudriver, VMMaker & Arcade OSD downloads, documentation and discussion:  Eiusdemmodi

rCadeGaming

  • Trade Count: (0)
  • Full Member
  • ***
  • Offline Offline
  • Posts: 1256
  • Last login:April 13, 2025, 12:14:40 pm
  • Just call me Rob!
Re: Modeline calculation and GroovyMAME discussion
« Reply #9 on: April 02, 2013, 06:42:30 pm »
The internal maths for audio synchronization are based on the refresh measured in real time while in game, not on the modeline values.

ArcadeOSD uses 10 kHz-aligned dotclocks because it was written for ATI cards, and this is the precision that the ATI drivers use internally. Anyway, this is just an approximation of the real dotclock, which can only be found by direct measurement.

Ok. Do you have to measure that with an oscilloscope on the v-sync line, or can you do it in software?

Good to know that ArcadeOSD is already giving me as much control as possible, though. I'm glad I saw this, or I'd have started wasting time tweaking the text modeline in 1 kHz increments.

I can usually get vertical scan rate within 0.01Hz or so in ArcadeOSD, so I'll see how that works with 1/5 audio latency.  If I don't hear any stuttering I won't worry about it, and if I do in picky games I'll bump it to 2/5.

Thanks for the quick reply.

Calamity

  • Moderator
  • Trade Count: (0)
  • Full Member
  • *****
  • Offline Offline
  • Posts: 7461
  • Last login:May 23, 2025, 06:07:25 am
  • Quote me with care
Re: Modeline calculation and GroovyMAME discussion
« Reply #10 on: April 03, 2013, 07:00:08 pm »
@Eboshidori & rCadeGaming,

The method I used for measuring the pixel clocks was by means of the ArcadeOSD program, well, an old custom version of it. What I really measured was the vertical refresh, with enough accuracy, and then I'd obtain the real pixel clock as:

real_dotclock = real_vertical_refresh * h_total * v_total

It had a routine that automatically created an array of modelines for a series of dotclocks within the given range, in steps of 10 kHz. Then the program would set each mode, measure its refresh for a while, compute the dotclock, write it to a file, etc. As the capacity of the drivers was 200 custom modes, it took several sessions to cover the whole working range, each session lasting several hours.
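The relation used by that routine, as a sketch (the dotclock is the measured refresh multiplied by the total pixels per frame; the numbers below are illustrative, not a real measurement):

```python
# The real dotclock follows from a precisely measured vertical refresh,
# since every frame lasts h_total * v_total pixel periods.
def real_dotclock(measured_refresh_hz, h_total, v_total):
    return measured_refresh_hz * h_total * v_total

# Illustrative numbers (384x264 total frame measured at ~59.1856 Hz):
print(real_dotclock(59.1856, 384, 264) / 1e6, "MHz")   # ~6.0 MHz
```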

At the time I was obsessed with obtaining deterministic results, and it was finally possible by following these 3 rules:

- Use 8-multiples for horizontal values
- Use precalculated pixel clocks
- Use an odd number of lines for interlaced modes

The need for an odd number of lines for interlaced modes is pretty obvious; the problem comes if you don't account for this: the drivers then do it for you, and if you've based your timings on an even number of lines, they are ruined.

The 8-multiple limitation has its roots, I believe, in the design of the old VGA CRTC (http://www.stanford.edu/class/cs140/projects/pintos/specs/freevga/vga/crtcreg.htm), where the horizontal registers hold characters rather than pixels, and for some reason ATI still respects this. I remember reading that Powerstrip allows pixel-based values because some cards do, which now makes sense if Matrox supported it; the problem is that other brands didn't, and Powerstrip still allowed it, leading to inconsistent timings.

Finally, the pixel clock only accepts an input value with 10 kHz precision. The problem is, you can't base your calculations on your input value. The actual output pixel clock is going to be a good approximation of your requested value, but never exact. The good news is that this value is deterministic, so you can actually build a look-up table. It's a brute-force approach, but it works.

A look-up table allows you to make decisions on which value to pick, because sometimes the immediately higher or lower value ends up being closer to your initially requested value (the real one, not the rounded one you use as an index into the look-up table).

This method works, in the sense that you can take a precalculated dotclock, put it in a totally different modeline with a different resolution, and predict the resulting refresh. This was in the pre-GroovyMAME days. I modified a build of MAME so that it calculated the vertical refresh by averaging the speed over 15 minutes or so, with many decimal figures, and you could see how the speed would slowly converge on the precalculated one with great precision.

Of course the proper method would be knowing the actual algorithm used by the drivers to calculate the PLL dividers. The formula is:

desired_pixel_clock = (reference_clock * feedback_divider) / (reference_divider * post_divider)

Where the reference_clock is the card's own, and the other three values need to be calculated by the algorithm as integers. There are implementations of this algorithm in the open-source ATI drivers for Linux; it could be an interesting experiment to check whether their results match the measured ones, but I know these algorithms have changed over time, so chances are the ones we're using are different.
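For illustration, a brute-force search over integer dividers based on the formula above; the 27 MHz reference clock and the divider ranges are assumptions for the sketch, not the actual (unknown) ATI driver algorithm:

```python
# Brute-force search for PLL dividers per the formula above. The reference
# clock and divider ranges are illustrative assumptions only.
def best_pll(target_hz, ref_hz=27_000_000):
    best = None
    for fb in range(2, 256):             # feedback_divider
        for rd in range(1, 17):          # reference_divider
            for pd in (1, 2, 4, 8, 16):  # post_divider
                out = ref_hz * fb / (rd * pd)
                err = abs(out - target_hz)
                if best is None or err < best[0]:
                    best = (err, out, fb, rd, pd)
    return best

err, out, fb, rd, pd = best_pll(7_500_000)
print(f"closest: {out / 1e6:.6f} MHz with fb={fb}, ref_div={rd}, post={pd}")
```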

These days I'm not so obsessed with µHz precision and, indeed, GroovyMAME doesn't use precalculated pixel clocks.

PS: In case this wasn't clear, with ArcadeOSD you can measure the vertical refresh of a modeline by pressing "5"; this computes the real speed, rather than the precalculated one based on the 10 kHz alignment. The value next to the dotclock, in parentheses, used to be the real dotclock as taken from the look-up table in Ati9250.txt. I've noticed that for some reason this value appears as 0 in the latest versions, so the look-up table is not considered any more; I'll need to have a look at this. Of course, when this worked, you could manually edit the look-up table with the real values for your hardware.
« Last Edit: April 03, 2013, 07:08:03 pm by Calamity »
Important note: posts reporting GM issues without a log will be IGNORED.
Steps to create a log:
 - From command line, run: groovymame.exe -v romname >romname.txt
 - Attach resulting romname.txt file to your post, instead of pasting it.

CRT Emudriver, VMMaker & Arcade OSD downloads, documentation and discussion:  Eiusdemmodi

rCadeGaming

  • Trade Count: (0)
  • Full Member
  • ***
  • Offline Offline
  • Posts: 1256
  • Last login:April 13, 2025, 12:14:40 pm
  • Just call me Rob!
Re: Modeline calculation and GroovyMAME discussion
« Reply #11 on: April 03, 2013, 07:37:19 pm »
The need for an odd number of lines for interlaced modes is pretty obvious

Ah, because of the half-line offset.  You're right, that should be obvious but I didn't even think about it while editing interlaced modes...  :banghead:

These days I'm not so obsessed with µHz precision and, indeed, GroovyMAME doesn't use precalculated pixel clocks.

I wouldn't be either; as long as you can get it close enough that audio pitch changes and stuttering aren't noticeable, it's impossible to tell the difference during gameplay. I'm interested in trying Dr. Venom's fractional-latency idea, but we're already in pretty darn good shape in this area.

In case this wasn't clear, with ArcadeOSD you can measure the vertical refresh of modeline by pressing "5"

Actually, I don't think I've seen this before.  That's awesome.  I'll be sure to go by this during final tweaking of every modeline.

Are there any other "hidden" functions, meaning anything that's not clearly selectable on the OSD?

eboshidori

  • Trade Count: (0)
  • Full Member
  • ***
  • Offline Offline
  • Posts: 13
  • Last login:May 20, 2013, 06:56:18 pm
  • Le CRT vaincra ! ^.^
Re: Modeline calculation and GroovyMAME discussion
« Reply #12 on: April 05, 2013, 06:40:59 pm »
The need for an odd number of lines for interlaced modes is pretty obvious

Ah, because of the half-line offset.  You're right, that should be obvious but I didn't even think about it while editing interlaced modes...  :banghead:

Of course, the very fact of having an odd number of lines is what causes interlacing. The trick relies on a half scanline: the beam goes back to the top of the screen but doesn't start at the same position as in the previous frame. So, to display in progressive mode, this half scanline must be avoided! That's what 240p hardware does (it usually displays 262 lines, not 262.5).

Knowing that, it's been more than 10 years, and to this day nobody has found anything to say about this:

Code: [Select]
/* video hardware */
MCFG_SCREEN_ADD("screen", RASTER)
MCFG_SCREEN_REFRESH_RATE(15625/271.5)

It's from the Cave.c driver, in MAME sources...

Dude! 271.5 lines!... :o

Sure, the actual games might run at ~57.5 Hz (because in arcades there's no need to stay close to 60 Hz), but I highly doubt they chose to display 271 and a half scanlines. So the Hfreq may be something other than 15625 Hz too (and the frame size would be different, hence the total number of cycles).

Again, see the Psikyo SH driver:

Code: [Select]
VSync - 60Hz
HSync - 15.27kHz

Ok... 15270 / 60 = 254.5 :dizzy:

Later :

Code: [Select]
MCFG_SCREEN_ADD("screen", RASTER)
MCFG_SCREEN_REFRESH_RATE(60)
MCFG_SCREEN_VBLANK_TIME(ATTOSECONDS_IN_USEC(0))
MCFG_SCREEN_SIZE(64*8, 32*8)
MCFG_SCREEN_VISIBLE_AREA(0, 40*8-1, 0, 28*8-1)

Ok, we still have 60 Hz, but for a 512x256 frame... (the game is 320x224; 512 is pretty big to display only 320 px).

If the Hfreq is ~15270 Hz, the pixel clock must be 7.81824 MHz.
Looking at the PCB, there's no crystal that can provide this frequency with integer dividers. The only one is 57.2727 MHz, so the possible Dclk could be 7.1590875 MHz (Mclk/8). And indeed, there are other boards that use this Dclk.
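The arithmetic above, as a quick check:

```python
# What dotclock a 512-pixel total line needs at ~15.27 kHz, versus what
# the board's only crystal can actually provide with an integer divider.
h_freq = 15270                 # Hz, from the driver comment
h_total = 64 * 8               # 512 total pixels per line
needed = h_freq * h_total
print(needed)                  # 7818240 -> 7.81824 MHz needed

xtal = 57_272_700              # the board's only crystal, 57.2727 MHz
candidate = xtal / 8           # Mclk/8
print(candidate)               # 7159087.5 -> 7.1590875 MHz
```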

So the refresh rate of the Psikyo games is not 60 Hz, for sure!
And come to think of it, 60 Hz is not used that often at all...


These days I'm not so obsessed about μHz precision and, indeed, GroovyMAME doen't use precalculated pixel clocks.

Sure, you're right. And it makes even more sense considering that putting so much effort into matching incorrect refresh rates is absurd. ::)


Quote from: Calamity
(BTW: have you really tested GroovyMAME with CRT Emudriver on an ATI card? I find your lack of faith disturbing :D).

As I said earlier, this time I want it to be done the right way 8).
So, first, there's a need to write correct values for the video parameters in many drivers. Nothing could be better, because this improves emulation accuracy.
Then there's a need to closely match those specifications on a 15 kHz display, i.e. to write modelines that can do so. Sure, it's not about seeing a difference of 0.00243 Hz (which GroovyMAME will handle well, even on the audio side, as you explained to me), but by following the original specifications you generally get a nicely centered picture that matches either the 5% or the 10% overscan of the NTSC standard. All those "weird frequencies" aren't weird at all once you know they were designed for this purpose (even if some manufacturers screw the thing up ^^'). Remember there's no "arcade standard" (no need for 60 Hz, no need for frame sizes in multiples of 8, not even integer pixel values). The main constraint is the overscan that comes from the NTSC specifications, which affects TV sets as well as arcade monitors.


I want to use a nice Trinitron for my dedicated cab, and I don't want to have to pick up the remote, put the TV in standby, type the code, go into the service-mode menus, adjust the picture size/position, validate, exit the menus...

To illustrate this, let's go back to the CPS example:

Quote
This is what I just obtained for the CPS-III system, by editing the current generic_15 preset frequency range:

monitor       custom
crt_range0  15400.00-15720.00,49.50-65.00,2.000,4.700,8.000,0.064,0.192,1.024,0,0,192,288,448,576

Modeline "384x224_60 15.43KHz 59.58Hz" 7.78 384 400 440 504 224 233 236 259   -hsync -vsync

Not bad !  ;D
But .... with 7.78 MHz instead of something close to 7.50, you get : 504*259/ 7.78 = 16778,41 µs (59.60 Hz).

The 504*259 frame is close to the original one (and you can obtain it by lowering the Hfreq in presets  ::) , otherwise the difference would have been greater), but by staying closer to the signal, you can get :
 
488x259 /7.53 = 16785,13 µs (59.57 Hz) -> closer to the specs (16778.2 µs, 59.58 Hz).

Simply because 7.53 is closer to 7.50 than 7.78. The greater the pixel clock, the greater the total frame (the smaller the active display) , and the greater the difference.
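The frame-duration comparison for the two candidate modelines, as a quick calculation (the target duration is simply 1/59.58 Hz):

```python
# Frame duration of the two candidate modelines, versus the 59.58 Hz target.
def frame_us(h_total, v_total, dotclock_mhz):
    # pixels per frame / (Mpixels per second) = microseconds
    return h_total * v_total / dotclock_mhz

print(round(frame_us(504, 259, 7.78), 2))   # 16778.41 us
print(round(frame_us(488, 259, 7.53), 2))   # 16785.13 us
print(round(1e6 / 59.58, 2))                # target: 16784.16 us
```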


But in the end, what's the point of staying close if I'm stuck with multiples of 8, even for the centering of the picture? :-\
Choosing between 8, 16 or 24 px for the porch value won't give me a perfectly centered picture on my screen, even though I carefully adjusted it according to the NTSC standard (better than it was set at first) and fed the corresponding values to the modeline generator. Simply because the 858/720 ratio doesn't match 488/384 (nor 504/384 or 486/384).

See the thing:



This is the 488x259 frame. The timings of the NTSC standard have been mapped onto this frame size. Black is blanking, the red border is 5% overscan, yellow is 10%, and the blue lines are the 720x480 active display and show the center of the picture (which you set according to the center of your own screen, whatever its size, flat or curved, Trinitron or regular). Choosing between 16 and 24 (the closest values allowed) won't let me reach the center. The picture will be off-center anyway, and the active display of the game will have some lines outside the 5% area I had set, and a black border on the other side.
So, back to service mode... :-[

And this case would not be the only one; just about everything suffers from the 8 px limitation of the CRTC registers, which, as I understand it now, we still can't get rid of (even with cards that can do TV-out and are supposed to handle pixel granularity...).

Well, at this point, you can think of getting a Matrox card ^.^, or simply double the horizontal timings in order to get a finer granularity, equivalent to 4 px. By displaying a 976x259 frame (with a 7.53x2 = 15.06 MHz Dclk) I can get something close to the original duration, and manage an almost perfectly centered picture that evenly matches the 5% overscan I set on my TV (because every screen is supposed to have at best 5% overscan):


(Here the Vsize is doubled too, for better visualization.)

See how the active display nicely matches the 5% overscan area (something that explains the 486x259 frame size), and note that even if the game is displayed on an old monitor with larger overscan (10%), all the HUD information is still visible. That's the way to do it.


So, in the end, there's no need to spend much time finding that sweet graphics card that can produce very low pixel clocks (matching the original specifications), because the 8 px constraint ruins all the nice things about modeline control. On the other hand, it means that cards which can't go under 10-12 MHz aren't without interest anymore...

But if I have to deal with those cards, I'm not sure how to set up GroovyMAME to produce the results I expect, because it is written to aim for the highest frequencies (instead of the lowest), and the very fact of letting users define porch durations doesn't lead to a perfectly centered picture. It can lead to Mr. Lettuce screwing things up with impossible back porch durations, but that's another story. ^^'

To get a centered picture, porch information isn't necessary. Once you've defined the (correct) frame size, you just have to aim at the corresponding center of your setup (which is not supposed to change, and which you don't want to change). In NTSC, the standard that concerns all of us (because the games we play come from NTSC countries), the center of the picture never changes. Even through modifications of the standard, engineers aim to keep the same center, because the center of the screen itself never changes: the beam reaches the exact middle of the screen when the magnetic field of the yoke is at zero. Knowing that for a CRT the video signal provides not only the visible information but also the timings for the current that moves the beam, this makes perfect sense. That's why the old 12.27 MHz and the new 13.5 MHz standards have the same center, even with different frame sizes and active displays.

On the 13.5 MHz standard, the horizontal middle is at the 485th pixel (Hsync + Bporch = 125, and you add 720/2). That means the part before it is ~77% of the signal. With a 486x259 frame, the middle is at about the 274th or 275th pixel (because we deal with integer numbers, it's not possible to set 274.5, something an arcade board can actually do). Then you set a value for Hsync according to your pixel clock, close to the standard, which is 4.7 µs for NTSC:
7.5 * 4.7 = 35.25, rounded to 35 px.
Then you get the Bporch value: 274 - (384/2) - 35 = 47 px. (The last value defines the Fporch: 486 - 384 - 35 - 47 = 20.)
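The centering recipe above can be sketched as follows (numbers from the 486x259 / 7.5 MHz example in this post):

```python
# Centering sketch: put the middle of the active area on the frame's
# standard center, then derive the porches (486x259 / 7.5 MHz case).
def centered_porches(h_total, active, center_px, dotclock_mhz, sync_us=4.7):
    hsync = round(dotclock_mhz * sync_us)       # 7.5 * 4.7 = 35.25 -> 35 px
    back = center_px - active // 2 - hsync      # 274 - 192 - 35 = 47 px
    front = h_total - active - hsync - back     # 486 - 384 - 35 - 47 = 20 px
    return hsync, back, front

print(centered_porches(486, 384, 274, 7.5))    # (35, 47, 20)
```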

So, there's no need to let the user define those parameters (especially considering that they will tend to screw things up :P). What's needed is to define correct frame sizes, with pixel clock values as close as possible to the games'.

But that only works with pixel granularity; when you deal with values rounded to multiples of 8, you can't achieve this anymore. The principle is still the same, but the result is an off-center picture, unless you're lucky (unless the original specifications happen to use multiples of 8, or come close to it).


Quote
As for the convenience of having 16+ kHz frequencies available, it's just required for vertical games on a horizontal monitor. It allows you to have real 256p@60Hz resolutions; otherwise you can't achieve this.

Yes, I understand... But having reached the lowest possible Vblank values (without getting artifacts), I would want the lowest Hfreq that can give me the desired Vfreq. If I want to play Bomb Jack on my Trinitron without rotating the TV, I want 256*1.04 = 266 lines for the frame size. Then, to display the game at ~60 Hz (though I don't think the original game ran at that frequency :laugh:):
266*60 = 15960 Hz.
Because my Trinitron can display the sickest scanlines out there, but if I send it anything above 16 kHz, it will tell me: "sorry dude... I can't".
And if I'm using an arcade monitor that can do 16.67 kHz, I want the maximum number of lines for the frame size, in order to keep the active display inside the safe area I defined. On my Trinitron, even if I get a picture for Bomb Jack at 266 lines, some parts will fall outside the screen and I will have to reduce the Vsize. But a simple potentiometer nicely placed before the vertical amplification will let me reduce the size without going into service mode (though it won't let me adjust the delay, i.e. move the center of the picture).
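The Bomb Jack numbers above, as a quick check (the ~4% blanking factor is the one used in this post):

```python
# Total lines with ~4% vertical blanking over 256 visible lines, and the
# horizontal frequency a 60 Hz refresh then requires.
def lines_and_hfreq(v_visible, v_refresh_hz, blank_factor=1.04):
    v_total = round(v_visible * blank_factor)
    return v_total, v_total * v_refresh_hz

print(lines_and_hfreq(256, 60))   # (266, 15960)
```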



OK, after all of this, don't get me wrong:
GroovyMAME and its features are great, but for some crucial things I end up going back to the old way, getting out my pen and writing by hand the specific stuff I want, because even though the thing is very flexible, it is designed the opposite way from my viewpoint.

I hope you understand all of this, and I hope you'll incorporate the updated drivers I'll work on. Because here too, to specify correct video parameters (which won't write themselves), I'll take my pen and some time to carefully inspect PCBs, trying to match what could realistically have been done, according to overscan and the NTSC constraints.
Since I'm not a developer (just a random guy on the net, but with solid knowledge of CRTs), I don't think the MAME team will bother with it, especially considering that many drivers haven't had these parameters updated in 10 years, and that anyway everybody plays on crappy LCD screens (even the MAME guys...), is used to triple buffering, and would rather stay around 60 Hz because their screens can't sync to lower (and correct) refresh rates. :P


« Last Edit: April 05, 2013, 07:22:45 pm by eboshidori »

rCadeGaming

  • Trade Count: (0)
  • Full Member
  • ***
  • Offline Offline
  • Posts: 1256
  • Last login:April 13, 2025, 12:14:40 pm
  • Just call me Rob!
Re: Modeline calculation and GroovyMAME discussion
« Reply #13 on: April 10, 2013, 06:10:09 pm »
I think you're going a little overboard. You can get the h-size and v-centering (and v-size to some degree) where you need them in ArcadeOSD (or by editing the modeline manually, though that might take longer). As for h-centering, as long as you can get at least one pixel into overscan on each side, you can fine-tune it with single-pixel granularity in MAME using the slider controls, which, correct me if I'm wrong, don't cause any problems as long as you don't touch the stretch controls.

I'm also extremely picky about perfect sizing and centering, but no changes to what we have are needed to achieve this, beyond letting GM accept custom modelines at launch in the ini, which Calamity is already working on.

I would be more worried about input lag and geometry.

What kind of Trinitron are you using? I'm guessing it's a flat Sony Wega Trinitron if it has a digital service menu. I'm using a KV-27FS120. The big problem with all the flat Sonys is the horizontal bowing in the center of the screen. Supposedly this is fixable by adjusting the angle of the deflection yoke, but I haven't tried it yet.

Also, a tip on these: if you raise the total vertical lines above a certain value (288 I think; I haven't played with it in a month), the picture will snap to a much smaller overall size (more of the screen visible). This is really useful for using the larger size for 224p-and-under games and the smaller size for 256p games, yoko, etc. [EDIT: I have this working on some KD-27FS170s I was testing with; can't get it working with my KV-27FS120s.]

I'm really interested in being able to add a vertical size pot to a digital-chassis TV. Could you elaborate on that?
« Last Edit: May 09, 2013, 02:19:32 pm by rCadeGaming »