I find that, with an SSD on one end but just a single 7200RPM consumer-grade revolving-metal drive on the other, the best I can do is about 70-80MB/sec, and that is indeed limited by the spinning metal. Using scp or sftp sometimes imposes other limits, depending on the CPUs at both ends, which may not be able to run the crypto any faster. It should be noted that transferring something TO the machine with the spinning metal isn't necessarily limited by the speed of the drive, since the OS is free to dump the data into the filesystem cache for later flushing, assuming a write-back cache policy, which is the default on most UNIXy OSes as well as modern Windows for fixed (non-removable) disks.
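You can see the write-back effect for yourself. This is a rough, hypothetical sketch (sizes and filenames are arbitrary, and absolute numbers depend entirely on your hardware): the same write usually completes much faster when the OS only has to buffer it in the page cache than when fsync() forces it all the way to the medium.

```python
import os
import tempfile
import time

SIZE = 64 * 1024 * 1024        # 64 MiB total, an arbitrary test size
CHUNK = b"\0" * (1024 * 1024)  # write in 1 MiB chunks

def timed_write(path, sync):
    """Write SIZE bytes to path; optionally fsync before closing."""
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(SIZE // len(CHUNK)):
            f.write(CHUNK)
        if sync:
            f.flush()
            os.fsync(f.fileno())  # don't return until the data is on the disk
    return time.time() - start

tmp = tempfile.gettempdir()
buffered = timed_write(os.path.join(tmp, "wb_buffered.bin"), sync=False)
synced = timed_write(os.path.join(tmp, "wb_synced.bin"), sync=True)
print(f"buffered: {SIZE / buffered / 1e6:.0f} MB/s")
print(f"fsync'd:  {SIZE / synced / 1e6:.0f} MB/s")
```

On a spinning disk the fsync'd number will be close to the drive's real sequential write speed, while the buffered number can be far higher, which is exactly why an incoming network transfer can briefly outrun the disk.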
Obviously an SSD can go quite a bit faster. The Intel 520 series drive in my laptop is limited first and foremost by the speed of the SATA interface, which is only 3Gbps (the laptop is just a hair too old for 6Gbps SATA), less overhead of course. I typically see ~250MB/sec to/from it.
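A quick back-of-the-envelope check shows why ~250MB/sec is about right for a 3Gbps link: SATA uses 8b/10b line coding, so 20% of the signaling rate is encoding overhead before any protocol overhead is counted.

```python
# 3 Gbps SATA ceiling, before ATA/AHCI protocol overhead.
line_rate_bps = 3_000_000_000          # SATA II signaling rate
payload_bps = line_rate_bps * 8 // 10  # 8b/10b coding: 10 line bits per 8 data bits
max_mb_per_sec = payload_bps / 8 / 1e6
print(max_mb_per_sec)  # 300.0
```

That leaves a hard 300MB/sec ceiling, and ~250MB/sec observed is a perfectly plausible real-world fraction of it.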
Gigabit ethernet is cheap as hell these days. Almost all PCs have NICs that'll do it, and reasonable switches can be had for about $10/port. Heck, you can even get a "mostly managed" switch for just a hair more (the Netgear GS108Tv2 is going for ~$95 these days). 10GbE is quite a bit more expensive. NICs are a couple hundred used for SFP+ (add optics) or CAT6a PHYs, pushing $500-700 new. You can get some old 10GBASE-CX4 stuff for <$100 on the secondary market. The switches will really nail you, though; figure a minimum of $200/port. The cost of 10Gb, combined with the fact that most of my systems lack the disk throughput to back it up, has somewhat made me shy away from 10Gb.
Regardless, don't expect to hit 10Gbps right now, even if you've got the network for it, without relatively large disk arrays, striped SSDs, or transfers of predominantly in-memory data. 1Gb link aggregates are still somewhat attractive for that reason, too. If all you need is 2-4Gbps and you can afford to burn the extra ports and deal with the administrative overhead of the LAG, it's usually cheaper.
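One caveat worth remembering with LAGs: they raise aggregate throughput, not single-flow throughput. Switches and bonding drivers typically hash a flow's addresses/ports to pick one member link and keep the flow there (to preserve packet ordering). The sketch below illustrates the idea; the 4-link count and the CRC32 hash are illustrative assumptions, not any particular switch's algorithm.

```python
import zlib

LINKS = 4  # hypothetical 4 x 1 Gb aggregate

def member_link(src_ip, dst_ip, src_port, dst_port):
    """Pin a flow to one member link by hashing its 4-tuple (illustrative)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % LINKS

# One scp session is one TCP flow, so it lands on a single link: still ~1Gbps.
one_flow = member_link("10.0.0.1", "10.0.0.2", 51515, 22)

# Many parallel flows hash across the members and can fill all ~4Gbps.
spread = {member_link("10.0.0.1", "10.0.0.2", 40000 + p, 22) for p in range(64)}
print(one_flow, sorted(spread))
```

So a LAG helps a file server with many clients much more than it helps a single big copy between two machines.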
There are circumstances where disk throughput doesn't matter. Distributed filesystems where the server does in-memory caching are a great example. You can have multiple systems with a coherent view of some files, and have it all run at very usable, essentially local speeds using commodity 10GbE hardware. It generally (but not always, if you consider used gear) works out cheaper than InfiniBand. These applications are somewhat esoteric, though, and I doubt you'd ever encounter them in a home-use scenario.