When did CPU cycles become expensive?
Most available CPU cycles are wasted by the processor being idle.
I know these numbers come from VM people who are talking their book, but the oft-quoted figure, even for servers, is 5% CPU utilization.
Due to the scoping rules in most imperative languages, a lookup table implemented this way will generally reduce a method's cyclomatic complexity, which is a major source of bugs. You also see this pattern in most functional languages as a match statement.
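A minimal sketch of the pattern being described, in Python (the names `HANDLERS` and `handle_event` are illustrative, not from the thread): each branch of a conditional chain becomes one table entry, so the function body itself has a single code path regardless of how many cases exist.

```python
# if/elif version: cyclomatic complexity grows by one with every case.
def handle_event_branchy(kind: str) -> str:
    if kind == "open":
        return "opening"
    elif kind == "close":
        return "closing"
    elif kind == "save":
        return "saving"
    else:
        return "unknown"

# Lookup-table version: the cases are data; the function is one lookup
# with a default, so its complexity stays constant as cases are added.
HANDLERS = {
    "open": "opening",
    "close": "closing",
    "save": "saving",
}

def handle_event(kind: str) -> str:
    return HANDLERS.get(kind, "unknown")
```

The table version also keeps the case data out of the method's scope entirely, which is the scoping-rules point above: adding a case touches the dict, not the control flow.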
Most statements about performance are by people talking their own book.
As for how this is different from a switch statement: you're missing the importance of higher-level concepts. Do you write your logic using only NAND gates (or whichever gate you prefer that can implement the others)?
I said "potentially expensive". Surely you must agree that, while CPU cycles are indeed cheap most of the time, in some applications (e.g. embedded systems or performance-critical loops) performance counts.
How is this any higher level than a switch statement? The syntax is slightly different, but how exactly does it hide complexity behind abstraction?