Has there ever been a documented reverse engineering of an arcade game?
MonMotha:
I can't name any specific examples from the era you're probably thinking of, but some notable late-80s games ran at even lower rates (e.g. Hard Drivin').  Having talked to some ex-Midway people, I know it was at least reasonably common.  Most games were still locked to the vertical refresh; they'd just evenly divide it down if they couldn't keep up.  I was told (by a guy who worked on it) that this is one reason MK2 runs at such a weird refresh rate.  The hardware couldn't quite keep up with 60Hz, but they didn't want to drop all the way to 30 since that's a bit slow for something as action-sensitive as a fighter, so they just increased the total video resolution until they hit ~53Hz, which everything could keep up with.
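(For anyone curious about the mechanism there: refresh rate = dot clock / (total pixels per line x total lines). With made-up numbers, not MK2's real timings: a fixed 8 MHz dot clock over 512 x 288 total pixels gives 8,000,000 / 147,456 = ~54 Hz, so padding out the totals pulls the refresh down without touching the pixel clock.)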

Doing this does potentially require more RAM, though.  You can't generate the graphics and spit them out on the fly if you can't draw them in real-time to start with.  You have to have not just one but TWO framebuffers (one to operate on and one to scan out to the monitor).  This would have made the practice somewhat uncommon until the mid to late 80s when RAM started getting cheap-ish.  Some really old games didn't have a framebuffer at all: they really did generate the graphics right as they were about to be spit out to the monitor.
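A minimal sketch of the two-framebuffer idea in C (the register write and all names are hypothetical, not from any real board):

--- Code: ---
#define W 320
#define H 240

static unsigned char fb[2][W * H];           /* two full frames of RAM     */
static unsigned char *draw = fb[0];          /* buffer the game renders to */
static unsigned char *scan = fb[1];          /* buffer the monitor reads   */
static volatile int frame_done;              /* set when rendering ends    */

extern void video_set_base(unsigned char *); /* hypothetical hw register   */

void vblank_irq(void)            /* fires at the monitor's refresh rate */
{
    if (frame_done) {            /* only swap a *finished* frame...     */
        unsigned char *t = draw;
        draw = scan;
        scan = t;
        video_set_base(scan);
        frame_done = 0;
    }
    /* ...otherwise the monitor just re-scans the old frame, and the
       effective frame rate divides down evenly (60 -> 30 -> 20...) */
}
--- End code ---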

Of course, this doesn't stop you from running the game's event loop slower (and just drawing the graphics twice from the old state data), but the event loop isn't normally the blocking factor in terms of complexity for most (but not all) games.  And the event loop (and surrounding code) is what we're talking about reverse engineering here.

I wouldn't be surprised if some games that ran at a lower graphical framerate actually ran their event loop at full speed and just didn't draw graphics for the interim states, but that's conjecture.
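In loop form, that decoupling might look something like this (hypothetical C; every function name here is invented):

--- Code: ---
extern void wait_for_vblank(void);   /* lock to the monitor's refresh */
extern void read_inputs(void);
extern void update_game_state(void); /* the "event loop" work         */
extern void render_frame(void);

void main_loop(void)
{
    int tick = 0;
    for (;;) {
        wait_for_vblank();
        read_inputs();
        update_game_state();    /* logic runs every tick...           */
        if ((tick & 1) == 0)
            render_frame();     /* ...graphics only every second tick */
        tick++;
    }
}
--- End code ---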

I remember reading somewhere that the design guides for getting your game licensed on one popular console (I think it was either the original Xbox or one of the PlayStations, but this sounds like a very Nintendo thing) required that you always run the game at the same framerate, and that it evenly divide the output rate.  That is, you couldn't let your game momentarily degrade from 60fps to 30fps (NTSC land) - it had to run at a constant 30fps unless it could consistently manage 60.

If you do want to reverse engineer a game, the laserdisc stuff is probably some of the simplest: not much there besides a timer, a cue sheet for the laserdisc, a list of "correct" moves and their timing, and some input handling.  If you have a target for what you want to reverse engineer (e.g. "How do the ghosts behave?" or "How does this IO board work?"), things are usually a lot easier.  I took the question to be "Has anybody ever completely reverse engineered a game to the point where one can compile, from reverse engineered source exactly mimicking the structure and functionality of the original, a total workalike using the original graphic, sound, etc. assets?", and the answer seems to be "it's pretty unlikely", since there's relatively little reason to do so.  You might as well just make your own similar game: it's generally a lot easier.
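Those laserdisc "engines" really are tiny.  A hedged sketch in C of the cue-table-plus-input-check idea (the struct and the player API are invented for illustration, not taken from any actual ROM):

--- Code: ---
/* one entry per decision point on the disc */
struct cue {
    long prompt_frame;   /* disc frame where the prompt starts   */
    long window_frames;  /* how long the player has to respond   */
    int  correct_input;  /* e.g. JOY_LEFT, BTN_SWORD             */
    long success_frame;  /* seek here if the move was right      */
    long death_frame;    /* seek here (death scene) if it wasn't */
};

extern void ld_play_from(long frame);       /* hypothetical player API */
extern int  poll_input_until(long frames);  /* hypothetical            */

void run_cue(const struct cue *c)
{
    ld_play_from(c->prompt_frame);
    int pressed = poll_input_until(c->window_frames);
    ld_play_from(pressed == c->correct_input ? c->success_frame
                                             : c->death_frame);
}
--- End code ---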
jimmy2x2x:
Thanks for the wordy reply, I appreciate it.

Polygon-based games of course ran at much lower frame rates back in that era, and I wouldn't be surprised at all if they were written entirely in C or another non-assembly language. Very interesting background information on the Mortal Kombat games.

Yes, the original question was about a complete reconstruction at source level, and this has been partially answered by the Pac-Man Dossier - lots of fascinating information there.  The reason for the question was more curiosity than anything else: I just wondered how they did things and how much support the games had from the hardware - collision detection, hardware limitations dictating the game's design, the underlying logic for when events are triggered, that kind of thing.
lilshawn:

--- Quote from: jimmy2x2x on March 02, 2012, 06:40:05 pm ---I'm still not really convinced that arcade game code would have been written in C or any other high-level language in the mid to late 80s. Given most games running at around 60 frames a second, binary code size and chip prices at the time, the speed of the CPUs in many arcade games, and the immature compilers back then... it seems like a stretch to me.


--- End quote ---

one has to realize that when it came to early videogames, you didn't write the code for the computer. you built the computer for the code.

they wrote a program, built some computer to support it... coded some more... added to the computer. when it was all said and done, you had a very specialized computer running tailored software on tailored hardware.

sloppy coding aside, you don't need to drop in a V8 engine to win the race. lots of amazing things could be done with a Z80, mainly because a lot of needless repetition can be eliminated from the code.


[EXAMPLE]

my friend and I both had laptop computers running BASIC. we both wrote the same program: pick 2 random numbers between 1 and 1,000,000 and compare them... if they were different, it picked 2 new ones. if they matched, end program. easy peasy.
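For illustration, the same toy program in C (the originals were BASIC; this is just the idea, not either of our actual listings):

--- Code: ---
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* build a 1..1,000,000 value from two draws, so the range
   doesn't depend on RAND_MAX being large */
static long pick(void)
{
    return (long)(rand() % 1000) * 1000 + rand() % 1000 + 1;
}

int main(void)
{
    unsigned long tries = 0;
    srand((unsigned)time(NULL));
    for (;;) {
        tries++;
        if (pick() == pick())  /* matched: end program           */
            break;             /* different: loop picks 2 more   */
    }
    printf("matched after %lu tries\n", tries);
    return 0;
}
--- End code ---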

here's the kicker: my laptop was a Toshiba T1200 with an 80C86 running at ~10 MHz. his was a Macintosh PowerBook with a 25 MHz 68030, so his computer was approximately 2.5 times faster, right?

my computer crunched the numbers WAY faster. I had several hundred thousand results before his even hit 3,000. How could this be?

we found one reason (among many) was that the 80C86 is capable of automatically repeating certain operations - it receives the instruction once and spits out multiple results... whereas the 68030 had to fetch and re-execute the same instruction over and over and over... wasting valuable clock cycles doing the same thing again and again.

of course his PowerBook could do lots of things way better and faster than my Toshiba... but it shows that code optimized for the hardware is capable of many feats.

[/EXAMPLE]
jimmy2x2x:
A point well made. Do you think the kind of optimization you're referring to indicates that assembly language would, or would not, be best suited to this kind of exacting machine-code flow?

EDIT: I am basing my thoughts on the gaming computers of the time - 8-bit and 16-bit machines such as the Amiga and ST. If you wanted optimal speed and size, you had no viable choice other than assembly language.

If you are writing something where frame rates are irrelevant, such as adventure or management games, that is a different argument altogether.

I feel there is a cultural divide here, with American programmers adopting high-level languages much sooner than Euro coders. IMO that's the main reason Euro-coded games of that era (on home machines at least) run rings around the US offerings, performance- and storage-wise.

For example: how many silky-smooth platform games or shmups (50 or 60 fps) came out of the States on the Amiga vs. how many came out of Europe? Different ideas about what makes a great game.
lilshawn:
don't forget that arcade games are VERY short, often only a few levels at most. the difficulty increases greatly by level - in some cases game makers made the higher levels nearly impossible.

you have to remember too that the Z80 (the most popular processor throughout the '70s and '80s) has a very intuitive instruction set. a lot of things that would otherwise take several steps of code could be executed in a single instruction on the Z80.

in fact the 8080-series processors were predecessors of the Z80 and use a very similar instruction set (hence the great increase in random number generation and comparison shown in my last post).

As well, a lot of complex memory operations (moving bytes, changing bits) can be executed with a single instruction (or far fewer) without having to walk the processor through them step by step. this eliminated separate memory-control hardware. when it comes to computers, the more you can get the processor to do without waiting on stuff outside the CPU, the faster things happen. memory at this time was very slow (by today's standards) - eliminating even one extra clock cycle spent waiting on a RAM refresh could potentially DOUBLE processing speed.
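a concrete example of those block operations: the Z80's LDIR instruction copies BC bytes from address HL to address DE all by itself. spelled out in C, that one opcode is doing roughly this (a sketch, not cycle-accurate):

--- Code: ---
/* what Z80 "LDIR" does internally: copy bc bytes from hl to de,
   advancing both pointers - the CPU repeats this with no explicit
   loop appearing anywhere in the program */
void ldir(unsigned char *de, const unsigned char *hl, unsigned bc)
{
    while (bc--)
        *de++ = *hl++;
}
--- End code ---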

to reduce program size, LOTS of operations were abbreviated to short mnemonics: instead of "MOV byte ptr" it was simply "LD". moving a piece of information in memory used to be "MOV byte ptr [DI+01],02"; now it was "LD (IY+01),02".

remember LOGO? probably not.  :oldman for a square it was "forward 10 right 90 forward 10 right 90 forward 10 right 90 forward 10 right 90", or just "repeat 4 [forward 10 right 90]" - we reduced our code by a factor of 4 by using a shortcut. how about a circle? "forward 1 right 1 forward 1 right 1..." (repeated 360 times), or instead "repeat 360 [forward 1 right 1]" - now we have reduced our program size by a factor of several hundred.

just think of it more like stock car racing - the cars are very specialized machines built for one purpose: turning left. ever seen those cars try to turn right or go straight? it's not very effective, as the machine has been built around only having to turn left (suspension, tire alignment, etc.).

NA designers sort of got stuck in the thinking of having the processor do everything. designers in Asia realized they didn't need to waste main-CPU time dealing with audio, so they added a chip to do that. that way the main CPU uses a few clock cycles to send it a command, then goes and does other stuff while the support chip pulls the sound out of RAM and sends it to the D/A converter... by the time the graphics are ready to be refreshed to the screen, the audio is on its way. and with the PRC and Hong Kong at the forefront of chip manufacturing, making/finding chips to do unusual things was a lot easier for them. need a chip to take bits, invert them, and spit them out in hexadecimal? no problem! </racism>
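that hand-off is often nothing more than a one-byte latch between the two CPUs. a hedged sketch of the main-CPU side in C (the address and names are invented for illustration):

--- Code: ---
/* memory-mapped sound command latch (made-up address) */
#define SOUND_LATCH (*(volatile unsigned char *)0xC800)

void play_sfx(unsigned char sound_id)
{
    SOUND_LATCH = sound_id;  /* a few cycles to write the command...   */
}                            /* ...then straight back to game logic,
                                while the sound CPU takes an interrupt,
                                reads the latch, and feeds the DAC     */
--- End code ---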

as a side note, you'll notice desktop computers in the 2000s started the same kind of movement... designers realized that even though 3.5 GHz was fast, there were still issues with wasting clock cycles waiting for things like RAM. RAM was fast, so what's the problem? the memory controller sat on a bus running 10x, 20x, 30x slower than the processor. by moving the memory controller onto the CPU, they gained a huge speed increase without really making anything faster. the same kinds of things were happening with game boards.

game makers started installing chips that solely deal with video (the first GPUs, if you will). it involved a bit of relearning how to code, but it took a lot of load off the CPU because it no longer had to deal with the video. same with using another Z80 to deal with the audio. it went on and on. the multiple automated instructions the Z80 could execute made it very quick and efficient. hell, they used versions of the Z80 in the Game Boy, the ColecoVision, Sega and Neo Geo hardware, everything... for years, because you didn't need a huge complex instruction set to do complex things.