Why would you keep 50 pages of a document in main memory at once? It’s not as if 75 is some magic limit that’s enough and 50 isn’t. No, if you stood any chance of getting anywhere near such a limit, you would certainly design your data structures so you don’t need all the content in memory at once, and then the difference is not so ferociously significant.
It wasn’t free and limitless, but it wasn’t scarce either—you probably had 100–1000× more disk space than RAM, which is close enough to unlimited for most text purposes. (https://en.wikipedia.org/wiki/History_of_hard_disk_drives suggests 1GB was typical in the mid-1990s.)
Consider also that at the very time we’re talking about (the early 1990s), the industry was shifting away from largely 8-bit code pages to 16-bit UCS-2, which is an even more extreme cost than UTF-8: it doubled space requirements for most people, rather than the mere 50% increase yongjik speaks of for certain languages. Yet the change was made anyway (more’s the pity).
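To make that arithmetic concrete, here’s a minimal sketch (the sample strings and the choice of latin-1/euc_kr as stand-ins for the legacy code pages are my own illustration, not anything from the thread):

    # Byte counts per encoding; UCS-2 is approximated with UTF-16-LE,
    # which is identical for BMP-only text like this.
    samples = {
        "English": ("The quick brown fox jumps over the lazy dog.", "latin-1"),
        "Korean":  ("다람쥐 헌 쳇바퀴에 타고파", "euc_kr"),
    }

    for label, (text, legacy_codec) in samples.items():
        legacy = len(text.encode(legacy_codec))   # era-appropriate code page
        ucs2   = len(text.encode("utf-16-le"))    # 2 bytes per BMP character
        utf8   = len(text.encode("utf-8"))        # 1 byte ASCII, 3 bytes Hangul
        print(f"{label:8} legacy={legacy:3}  UCS-2={ucs2:3}  UTF-8={utf8:3}")

For the English sample, UCS-2 doubles the legacy byte count while UTF-8 matches it; for the Korean sample, UTF-8 costs 3 bytes per Hangul syllable against 2 in EUC-KR or UCS-2, which is where the roughly 50% figure comes from (the ASCII spaces in the sample dilute it slightly).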
Concerning the scarcity of bytes, yongjik’s point would certainly be valid if it referred to the 1970s, was probably valid for the 1980s, but is not valid for the 1990s. (But the point about keeping the full document in RAM is an unrealistic strawman anyway.)