It sounds like you might be basing your development examples more in the present day than some 20-25 years ago - this is the crux of the argument.
Things were very different back then: very limited workspace, processing power, and storage, combined with the immaturity of compilers of the time.
To revisit the "one can be a LOT more productive in a high level language" statement: back then it would have held true only up to the point where resources started running out - then the assembly option starts looking a lot more sensible.
So you take the path of "write it in C, then start optimizing the sensitive parts in assembly as it becomes necessary". The computer (and I don't just mean PC) guys had been doing it for years, basically since the dawn of usable compilers. I also think you underestimate the maturity of compilers by the mid 80s. UNIX was quite popular and was written in C at that point, and the language was essentially a decade old. In general, compilers of the era did no worse than a typical "first shot" by someone writing by hand in assembly, though I'll admit they would often do no better. C is not a complicated language to write a well-behaved non-optimizing compiler for - such a task is often given to sophomore or junior university computer science students.
"a mediocre compiler will often beat the pants off of a naive first pass at doing something in assembly, especially if the programmer is hurried" - today I am sure this holds true for the most part. Back then - maybe, if the programmer was working on his first project on a new platform. If the programmer had already finished a project or two on the current platform, a lot of intimate processor and hardware knowledge, plus resources from the last project, would have been available to him. The new development cycle would offer time to further refine already-efficient working code; that principle does not usually apply to higher level development.
I'm thinking about things like self-modifying code, dynamic stack relocation, efficient zero page usage, subroutine register return values instead of stacked return values, unrolling loops, dynamic routine entry - low-level coding that you rarely see outside of native assembly.
Some of these are indeed "rarely seen outside native assembly", like self-modifying code (though the IOCCC people do pull some crazy stunts), but register returns are standard on many archs, with alternate calling conventions available from many compilers back when stuff like that really mattered, especially old ones (hell, Microsoft STILL has fastcall, stdcall, and cdecl - for grins, guess which one is actually the industry standard). I've seen people pull crazy stack tricks in C (perhaps with a little inline assembly), and almost all compilers (or, usually, linkers) have supported forcing certain things into certain locations since the beginning, allowing you to tune your code/data locality. People also used to use setjmp/longjmp (you'd be crucified for that at most companies these days). Most compilers supported pinning globals into a register, and people often used such tricks for multiple register return values. You can, and many people did, treat C as a form of "architecture independent assembly" that let you express complex things more easily when you wanted to, while still sticking close to the hardware and keeping control over the generated output.
Loop unrolling in C was in fact so popular that compiler makers are STILL counseling people NOT to do it, since modern compilers usually do a better job on the loop than on the unrolled code. People would frequently re-use local variables because the old compilers weren't smart enough to re-use registers or stack space until something fell out of scope. The register keyword actually had relevance, and people used it. Old code is littered with macros to ensure inlining where modern code would use an inline function (not available back then). Heck, look up Duff's device. People used to do that crap in C all the time. They don't any more because 1) the compiler figures it out, and 2) there's little reason - but back then, heck yeah.
Especially back then, using C was not a substitute for the programmer knowing what they were doing. The compilers weren't that good - just good enough to let you do what needed to be done. The difference is that where C lets you get done what needs doing, assembly FORCES you to think about every nitty-gritty little thing along the way. I'm well aware that there were (and still are) some great assembly hot shots around, but even the best of them generally admit they're much more productive in a high level language, with usually little downside in terms of either performance or code size.
Also, keep in mind most arcade games used "cleaner" architectures like the 68000, not x86, so they didn't have to deal with all the weird memory segmentation tricks (the 68000 has a flat address space) that made C a pain on DOS and the like.
There are plenty of ways to be smarter than the compiler without dropping all the way to assembly. Look up the fast inverse square root trick from Quake III (commonly attributed to Carmack) for a nifty example (note: totally obsoleted by SSE's rsqrtss on x86).
I do doubt that you'll find any arcade game from the 80s or even early 90s where EVERYTHING was written in a high level language and just run through a compiler without any further thought. They were asking too much of the hardware for that. Carmack and friends were still hand-optimizing things well into the late 90s (and probably still do on some occasions).
I just think it stands to reason that, given the common availability of reasonably robust compilers (again, UNIX was written in C and very popular, as I think was VMS, whose designers later went on to build Windows NT), especially for the popular M68k, it would make sense to concentrate optimization efforts on the parts of the program where it matters. Arcade games were usually high budget for their time, but they also had tight timetables and somewhat limited programming staff. It just doesn't make sense to have a bunch of people hanging around banging out assembly all day for weeks on end when they could hammer it out in a few days in C, test and debug it, and optimize the parts that matter. Especially as the games got more complex, writing the bulk of the game in a faster-to-develop, more maintainable language would make sense. It might even let you re-use some boilerplate code across multiple games, even as the hardware changed.
I'm actually really kinda curious when the switch occurred (I know for a fact that almost all arcade games are nearly pure C/C++ now - or sometimes even Java, .NET, or Flash; I've seen a fair bit of the source, and the disassembly is often quite telling with C++). My guess would be mid-to-late 80s (some time around 1988-1989), with perhaps some early forays in the early-to-mid 80s (like 1983-1984) and a near total conversion (all but very "hot" sections of code) by 1995 or so. Your guess is apparently later (maybe early to mid 90s?).
I should try to get back in touch with the former Midway guys I know and see what the answer to that is. I'm sure it varied with game house (e.g. Midway was one of the first to go to PCs while Konami was one of the last).
timeframe + hardware of the day + high level language = 60fps ? Still not convinced, sorry.
I don't see why you couldn't do it on a lot of things like side-scrolling beat 'em ups or platformers that had simpler scenes and behaviors. Graphics features like tile/sprite addressing and priority were often at least partially accelerated in hardware by then, and separate sound processors were common, too. Honestly, I kinda wonder what percentage of its time the main CPU in your average arcade game from 1988 spent doing nothing (just waiting on the vertical sync interrupt to run the next iteration).