The caching issues (both the OS write cache and the cache on the hard drive itself) are solved by inserting "barriers" into the stream of writes. Re-ordering writes across a barrier is not permitted. I know Linux does this with ext3 and ext4; Windows should by now, as it's not a new technique. Hardware support is spottier, but it can be emulated (at a performance cost) if the drive doesn't support it. Linux even maintains a blacklist of drives that claim to honor cache barriers but silently ignore them.
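For a feel of what a barrier buys you, here's a minimal sketch of the same ordering discipline expressed at the application level, with fsync() as the flush point. The two-file layout and filenames are invented for illustration:

```python
# Sketch of the ordering discipline barriers enforce, done by hand with
# fsync(). Filenames and the data/commit split are made up for this example.
import os

def ordered_write(data_path, commit_path, payload):
    # Step 1: write the data itself.
    fd = os.open(data_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    os.write(fd, payload)
    os.fsync(fd)   # "barrier": data must be durable before step 2 starts
    os.close(fd)

    # Step 2: only now write the record saying the data is valid.
    fd = os.open(commit_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    os.write(fd, b"committed\n")
    os.fsync(fd)   # second barrier: the commit record is durable too
    os.close(fd)

# Note: a drive that lies about flushing its cache can defeat even this,
# which is exactly why that blacklist exists.
```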
If the OS journals the filesystem and uses write barriers correctly (and the drive actually implements them, or the OS knows it doesn't and compensates), it should not be possible to hose the entire filesystem. You could still lose data (unless the data is journaled too, which is uncommon because of the severe performance hit) in the form of truncated or missing files, but the filesystem metadata should always be in a consistent state, meaning you shouldn't lose everything.
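The trick that makes this work is replay: on the next mount, only transactions with a complete commit record get applied, and a torn tail is thrown away. Here's a toy sketch of that idea; the journal format is invented purely for illustration:

```python
# Toy sketch of why journal replay keeps metadata consistent: only fully
# committed transactions are applied; a torn tail (the crash case) is
# simply discarded. The record format here is invented.
import json

def replay(journal_lines):
    applied = []
    pending = None
    for line in journal_lines:
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            break                      # torn write at the tail: stop here
        if rec["type"] == "begin":
            pending = rec["ops"]
        elif rec["type"] == "commit" and pending is not None:
            applied.extend(pending)    # transaction fully logged: apply it
            pending = None
    return applied                     # a "begin" without "commit" is dropped

# A crash between "begin" and "commit" loses that transaction's data,
# but never leaves the metadata half-applied.
journal = [
    '{"type": "begin", "ops": ["extend inode 12"]}',
    '{"type": "commit"}',
    '{"type": "begin", "ops": ["truncate inode 7"]}',  # no commit: dropped
]
print(replay(journal))   # ['extend inode 12']
```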
Now, that's a big "if," since this stuff is notoriously tough to get right, but it's at least guaranteed by the design of a journaled filesystem. I'd like to think they've worked most of the bugs out of NTFS at this point, given that it's well over a decade old...
I don't know whether the Windows defrag program honors any of these guarantees. It damn well should, since ignoring them would leave a glaring hole in an otherwise reasonably well designed system, but there's no real way to inspect it and find out.
Also, as you note, if data loss or corruption occurs in a critical system file, the OS could be rendered unusable. It's considered good practice to make the OS as tolerant of this as possible, and Windows is surprisingly good at it.
My suggestion that "the risk is low" is experience-driven, though. I've cut power to both Windows XP and Linux ext3 systems without shutting them down, both intentionally and unintentionally, for basically as long as those systems have existed, and I don't think I've ever lost an entire filesystem. In one case (a one-off test arcade system), this happened daily for over a year. I don't think I've ever even ended up with an unbootable OS.
I *have* lost entire FAT and ext2 filesystems (neither is journaled), despite having done far less with them. I *have* also lost data due to hard power-downs, even on NTFS and ext3/4, and that's definitely bad; hence my suggestion that you should always shut down properly, or use another means of ensuring things don't break, like COW to a ramdisk or scratch drive. But I've never lost an entire filesystem. The design criteria of a journaled filesystem are supposed to guarantee exactly this behavior, so I'd tend to believe the implementors have gotten it at least mostly right.
All the commercial PC arcades I've torn down use some form of ramdisk to contain writes and discard them on reboot, always bringing the system up in the "factory" state. High scores and operator settings are stored elsewhere, with an integrity check that blows them away and restores the defaults if they fail. On Windows, this is called EWF: the Enhanced Write Filter.
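That "check it or blow it away" scheme is simple to roll yourself. Here's a hypothetical sketch (the file format, field names, and defaults are all invented) using a CRC over the settings blob:

```python
# Hypothetical sketch of the settings scheme: high scores / operator
# settings live in a small file prefixed with a CRC; if the CRC doesn't
# match after a dirty power-off, fall back to factory defaults.
import json, os, zlib

DEFAULTS = {"high_scores": [], "free_play": False}   # invented defaults

def save_settings(path, settings):
    body = json.dumps(settings).encode()
    crc = zlib.crc32(body)
    with open(path, "wb") as f:
        f.write(crc.to_bytes(4, "big") + body)
        f.flush()
        os.fsync(f.fileno())    # push it past the OS cache before returning

def load_settings(path):
    try:
        with open(path, "rb") as f:
            raw = f.read()
        crc, body = int.from_bytes(raw[:4], "big"), raw[4:]
        if zlib.crc32(body) != crc:
            raise ValueError("bad checksum")
        return json.loads(body)
    except (OSError, ValueError, json.JSONDecodeError):
        return dict(DEFAULTS)   # corrupt or missing: back to factory state
```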
Unfortunately for a typical MAME builder, these aren't the easiest things in the world to set up. EWF doesn't ship with XP Pro (it comes with the XP Embedded toolkit), though it can be installed by hand, and the only way to manage it is a somewhat obscure command line tool (ewfmgr). Doing a UnionFS/tmpfs overlay on Linux is also rather involved (probably more so, but also more flexible); there are other options on Linux that are easier and more commonly deployed, the most common being the system in an initrd plus data on a read-only partition.
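For the curious, here's roughly what the Linux overlay approach looks like on a modern kernel, using overlayfs (the descendant of UnionFS). The paths are invented, and this has to run as root, typically from an init script:

```python
# Rough sketch of a tmpfs-backed overlay root on a modern Linux
# (overlayfs rather than the UnionFS of that era). Paths are invented;
# this must run as root.
import subprocess

def sh(*args):
    subprocess.run(args, check=True)

# Scratch space in RAM; everything written here evaporates on power-off.
sh("mount", "-t", "tmpfs", "tmpfs", "/overlay")
sh("mkdir", "-p", "/overlay/upper", "/overlay/work")

# Merge the read-only base install with the RAM-backed upper layer.
sh("mount", "-t", "overlay", "overlay",
   "-o", "lowerdir=/ro-root,upperdir=/overlay/upper,workdir=/overlay/work",
   "/merged")
```

Anything the system writes lands in the tmpfs upper layer and vanishes at power-off, which is exactly the behavior you want in a cab.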
I guess what I'm saying is that it comes down to whether or not you feel lucky, punk. I'd wager that most users will never have a problem, especially given the largely static nature of a typical MAME setup, but like I said, I'd never ship a production system like this without addressing this problem.