It all depends on how you define "performance". Software designers tend to be fixated on the amount of time they can use before the computer has to give a response (i.e. "the screen needs to be redrawn every frame" or "the database retrieval needs to finish within a maximum of 2 seconds"). So yeah, that timed performance is a constant, but the complexity of what gets returned within that fixed time has advanced tremendously.
It's like a store replacing a tiny van with a truck and then complaining that the old van moved a tray of beer quicker than the truck moves a week's supply of beer.
I think it's more like a big-ass truck that has an overnight sleeper, satellite radio, and a mobile bathroom to move that week's supply of beer. It's not the data that's the problem, it's what's used to manipulate that data.
Look at it this way. Let's take program X, which is slim; program Y, which is a little chubby; and program Z, which is the slimmest of all. The reason I might like X is the extensibility inherent in its design. If I wanted to, I could add some plugins to achieve additional features. Y, on the other hand, tries to include every conceivable function a user might want, even if 90% of the end users never actually use those functions. Program Z does one thing and nothing else, but does that one thing very, very well. They all process the same data.
The problem with modern programmers is that many want to create program Y and very few create programs like X or Z. I can live with a program sucking down the necessary resources while digesting 9 GB of data. It's data I want to have processed. I can't deal with a program that sucks down 200 MB or more of RAM loading a giant monolithic block of code capable of turning on my sink, purchasing gas, and balancing my budget, when all I want it to do is grab and display a 200k block of data from a file.
With a plug-in architecture, the software might end up eating more resources if I added every plug-in needed to match the monolithic version's functionality. But the catch is, I don't need all the functionality of the monolithic program, therefore I still come out ahead in the resource game: I get a smaller program, and one that does exactly what I need it to do.
And if I really had to worry about overhead and needed to eke out every scrap of resources, then switching over to Z would be the thing to do.
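To make the X-style idea concrete, here's a minimal sketch of a plug-in registry in Python. Everything here (names, the "uppercase" feature, the file path) is hypothetical and just for illustration: the core only grabs and displays a block of data, and every extra feature is an optional plug-in that costs nothing unless you actually enable it.

```python
from typing import Callable, Dict, Sequence

# Registry of optional features; nothing is pulled in unless it registers itself.
PLUGINS: Dict[str, Callable[[bytes], bytes]] = {}

def register(name: str):
    """Decorator that adds a feature to the registry without touching the core."""
    def wrap(fn: Callable[[bytes], bytes]) -> Callable[[bytes], bytes]:
        PLUGINS[name] = fn
        return fn
    return wrap

def grab_and_display(path: str, enabled: Sequence[str] = ()) -> bytes:
    """The core program: read a block of data from a file and run only the
    plug-ins the user actually asked for."""
    with open(path, "rb") as f:
        data = f.read()
    for name in enabled:
        data = PLUGINS[name](data)  # you only pay for what you enabled
    return data

# An optional feature; in practice it would live in its own module or package.
@register("uppercase")
def uppercase(data: bytes) -> bytes:
    return data.upper()

# Usage: displaying a 200k file costs roughly 200k worth of work, not the
# weight of every feature the monolithic program Y happens to ship with.
# grab_and_display("notes.txt", enabled=["uppercase"])
```

The point of the sketch is the design choice, not the specific code: the core stays tiny and feature-complete for its one job, and the resource cost of everything else is opt-in.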
Most programmers don't necessarily need to focus on eking out 10% performance increases in their refactors. I'm just saying there needs to be a dramatic change in the current school of thought that says "hardware is cheap," so it's fine to write monolithic software that fills up limited hardware resources.