Both the specifics of your video timing (number of blanked lines, length of vsync, relative number of blanked lines before vs. after vsync, etc.) and the way you generate the composite sync can affect how well interlaced video works. Digital sync separation methods are usually a bit pickier than the analog timers of yore: they need to "detect" that the signal is interlaced, otherwise they treat the video as progressive and you get a VERY jittery display. The upside is that they're a fair bit more accurate and can handle much shorter sync pulses to start with. It's also generally very important with modern designs to get the sync format right (specifically, the half-scanline offset of vsync relative to hsync on alternate fields), since that's what signals interlaced video to the monitor.
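To make the half-line business concrete, here's a tiny C sketch of 525-line (NTSC-style) interlaced field timing. The constant names and values are my own for illustration, not pulled from any standard document:

```c
/* Sketch of 525-line (NTSC-style) interlaced field timing.
 * The cue a sync separator looks for is that vsync in one
 * field starts half a scanline later than in the other. */
#include <stdio.h>

#define LINES_PER_FRAME 525                      /* odd total => interlace */
#define LINES_PER_FIELD (LINES_PER_FRAME / 2)    /* 262, plus a half line  */
#define LINE_US         63.5556                  /* one scanline, microseconds */

int main(void)
{
    /* Field 1: vsync starts aligned with the start of a line.
     * Field 2: vsync starts half a line later. That offset is
     * what a digital separator uses to detect interlace. */
    double field1_vsync_offset_us = 0.0;
    double field2_vsync_offset_us = LINE_US / 2.0;

    printf("lines per field: %d.5\n", LINES_PER_FIELD);
    printf("field 1 vsync offset: %.2f us\n", field1_vsync_offset_us);
    printf("field 2 vsync offset: %.2f us (half a line)\n",
           field2_vsync_offset_us);
    return 0;
}
```

If your timing generator can't place vsync on a half-line boundary, the separator never sees that offset and falls back to treating the signal as progressive.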
Generally, the "correct" thing to do is to output the exact timings specified by your old regional analog TV standard and XOR the two sync signals (assuming they have the same polarity) together. If you want negative sync, invert one of the incoming signals or invert the result. While this may not get the equalization/serration pulses quite right, it seems to make most monitors happy.
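The XOR trick is simple enough that a truth table in C covers it. This is a minimal sketch, assuming active-high hsync/vsync sampled as 0/1; the function names are my own:

```c
#include <stdint.h>
#include <stdio.h>

/* Active-high hsync/vsync in, active-high composite sync out.
 * During vsync the hsync pulses invert, which crudely mimics
 * the standard's serration pulses. */
static uint8_t csync_pos(uint8_t h, uint8_t v) { return h ^ v; }

/* For negative (active-low) composite sync, invert the result
 * (equivalently, invert one of the inputs first). */
static uint8_t csync_neg(uint8_t h, uint8_t v) { return 1u ^ (h ^ v); }

int main(void)
{
    for (int h = 0; h <= 1; h++)
        for (int v = 0; v <= 1; v++)
            printf("h=%d v=%d -> pos=%u neg=%u\n",
                   h, v, csync_pos((uint8_t)h, (uint8_t)v),
                   csync_neg((uint8_t)h, (uint8_t)v));
    return 0;
}
```

In hardware this is typically just one gate of a quad XOR like the 74HC86, which is part of why it's such a popular approach.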
Some of the "passive sync combiners" that use only resistors, or diodes and resistors, can exhibit somewhat weird behavior. The "resistors only" method (or just tying your two sync outputs together) will actually give you a tri-level output that most purely analog designs are OK with but that can upset later digital designs. Some of the diode-based designs work well enough, while others omit serration/equalization pulses during vsync and can be problematic on analog (and some digital) designs, especially with interlaced video.
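For intuition on where the tri-level output comes from, here's a toy calculation in C. This is my own simplified model, assuming 5 V logic, equal-value resistors from each sync output to a common node, and a high-impedance load, so the node sits at the average of the two outputs:

```c
/* Toy model of a resistors-only sync combiner: two equal
 * resistors from hsync and vsync outputs into a common node,
 * high-impedance load assumed, so Vout = (Vh + Vv) / 2. */
#include <stdio.h>

#define VCC 5.0   /* assumed logic-high level, volts */

int main(void)
{
    for (int h = 0; h <= 1; h++)
        for (int v = 0; v <= 1; v++)
            printf("hsync=%d vsync=%d -> %.1f V\n",
                   h, v, (h * VCC + v * VCC) / 2.0);
    /* Output levels come out as 0.0, 2.5, and 5.0 V, i.e. a
     * tri-level signal. An analog slicer with a mid threshold
     * usually copes; a digital separator expecting two clean
     * logic levels may not. */
    return 0;
}
```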
The more you stray from "standard" TV timings, the more likely you are to need to play with things, and YMMV, of course. Sometimes the thing to do seems to be to throw all the electronics out the window and just twist your horizontal and vertical sync wires together (preferably with a couple of low-value resistors, just in case your video source doesn't have any series resistance!).