The most serious problem with Knuth's TAOCP books is that they are severely out of date. For example, Volume III (searching and sorting) contains only two paragraphs that so much as mention virtual memory and memory caches. That is a huge issue--if your sorting problem is bad enough that you need to break out Knuth, you probably also need to implement the algorithms in a cache-oblivious manner. Further, you need to know when you can rely on the VMM (mmap and swap) and when you must explicitly manage disk-based data separately from RAM-based data.
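To make that last point concrete, here is a toy sketch of the "explicitly manage disk-based data" case: a classic external merge sort that writes sorted runs to temporary files and k-way merges them. All names and the tiny run size are illustrative, not from any of the books under discussion.

```python
import heapq
import tempfile

def external_sort(values, run_size=4):
    """Sort an iterable that is 'too big for RAM' by writing sorted
    runs to temporary files, then k-way merging them with heapq.merge.
    run_size stands in for available memory; a real implementation
    would use millions of items per run and binary record formats."""
    run_files = []
    buf = []

    def flush():
        # Sort one in-RAM run and spill it to disk as a text file.
        if not buf:
            return
        buf.sort()
        f = tempfile.TemporaryFile(mode="w+")
        f.writelines(f"{v}\n" for v in buf)
        f.seek(0)
        run_files.append(f)
        buf.clear()

    for v in values:
        buf.append(v)
        if len(buf) >= run_size:
            flush()
    flush()  # spill the final partial run

    # heapq.merge streams the runs; only one line per run is in RAM.
    runs = ((int(line) for line in f) for f in run_files)
    merged = list(heapq.merge(*runs))
    for f in run_files:
        f.close()
    return merged
```

The point is structural: the in-RAM sort and the on-disk merge are separate phases with separate cost models, which is exactly the distinction a purely RAM-model analysis glosses over.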
I'm not sure I agree. Knuth covers the fundamental, distilled algorithms very comprehensively. I really don't think covering VM, cache issues and cache-line widths, or the virtues of pinning pages in memory is within his scope. If he did that, he'd also have to talk about architectural issues such as out-of-order execution and pipeline depth, which he obviously has no interest in, since he developed his own 'ideal' (M)MIX architecture specifically for the books.
I feel that the best way to approach an actual implementation is to see what Knuth has to say about the relative merits of the algorithms, and then to study the environment the code will run on and tweak the general algorithm accordingly.
I have to agree with Brian... Caching is not a purely practical concern; it is important to the analysis of algorithms as well. It looks like Knuth agrees:
Donald: I mentioned earlier that MMIX provides a test bed for many varieties of cache. And it’s a software-implemented machine, so we can perform experiments that will be repeatable even a hundred years from now. Certainly the next editions of Volumes 1-3 will discuss the behavior of various basic algorithms with respect to different cache parameters.
In Volume 4 so far, I count about a dozen references to cache memory and cache-friendly approaches (not to mention a "memo cache," which is a different but related idea in software).
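As a small illustration of why cache behavior belongs in the analysis, here is a sketch that sums the same matrix with a stride-1 (row-major, cache-friendly) scan and a stride-n (column-major) scan. The function names are my own. In CPython the interpreter overhead mutes the timing gap; in C the strided version can be several times slower, even though both do identical arithmetic.

```python
import array
import time

def sum_by_stride(a, n, stride):
    """Sum all n*n elements of flat array a, visiting them with the
    given stride: stride 1 is a sequential scan, stride n mimics
    column-major traversal of a row-major matrix."""
    total = 0.0
    for start in range(stride):
        for i in range(start, n * n, stride):
            total += a[i]
    return total

if __name__ == "__main__":
    n = 512
    a = array.array("d", range(n * n))
    for stride in (1, n):
        t0 = time.perf_counter()
        s = sum_by_stride(a, n, stride)
        dt = time.perf_counter() - t0
        # Same sum either way; only the memory access pattern differs.
        print(f"stride {stride:4d}: sum={s:.0f} time={dt:.4f}s")
```

In the RAM model these two loops are the same algorithm; with a cache parameter in the model, they are not, which is presumably what the promised MMIX cache experiments are meant to capture.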
I remember reading in Volume 1 that he wasn't happy with the architectures of the day, which is why he introduced MIX. I guess my earlier comment came from how I look at Knuth, perhaps not how it was intended. I haven't seen Volume 4 yet, though.
You were not misremembering. Volume 4 is in beta, and a planned rewrite of Volumes 1 to 3 (after Volume 5! ETA: 2015) to take advantage of MMIX (MIX's successor architecture) is not even on the horizon yet.
At the risk of being labeled a heretic: Knuth is great and all, but in a rapidly expanding field you can't just sit down and "describe the whole thing", or even all of its algorithms. Knowledge in the field grows superlinearly, while the capacity of one aging person to describe it grows at best linearly. That's a recipe for failure, at least relative to the stated goals. There is something... vaguely dubious to me about how Knuth has approached the whole project, including stopping for N years to work on TeX just to typeset it. I guess you can give him credit for trying, and, in trying, producing some excellent work, but perhaps a different approach might have yielded other benefits.
I agree to a certain extent. All sciences specialize as they mature. Computer scientists looking to make their mark are inevitably driven out from the core into security, bioinformatics, robotics, data mining, distributed systems, what have you.
I don't know Knuth's mind exactly, but it seems to me he decided to set up shop at the core, at theory and algorithms, to set the science on a solid foundation going forward. So it's fundamental algorithms, arithmetic, searching, sorting, and lately graphs and combinations.
There are enough people writing in their specialties, and there will always be incentives to go baroque and novel. That's how we get stupid stuff like my thesis, or 8% faster neural network training, or the paper on implementing a Turing machine in C++ templates.
It takes a special kind of person to forge ahead slowly, for decades, on the field-defining work that he does. The other kinds of scientists (and software engineers, for that matter) are all too easy to find.