I don't disagree with what you actually said, but your choice of analogy suggests that you believe that questions of morality are settled and have obvious, objective answers.
In some cases, yes. If you accept utilitarianism as the reductive explanation of morality, and assume some non-controversial terminal values, then all of morality reduces to straightforward calculations.
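A toy sketch of what "straightforward calculations" means under total utilitarianism. Everything here (the policy names and the per-person utility numbers) is made up purely for illustration:

```python
# Toy sketch: total utilitarianism compares outcomes by summing
# each affected person's utility. All names/numbers are invented.

def total_utility(outcome):
    """Sum the utility assigned to each person in an outcome."""
    return sum(outcome.values())

# Two hypothetical policies, each mapping person -> utility change
policy_a = {"alice": 5, "bob": -2, "carol": 4}  # totals 7
policy_b = {"alice": 1, "bob": 3, "carol": 2}   # totals 6

# "Let us calculate": the preferred policy is the one with the
# larger utility total.
best = max([policy_a, policy_b], key=total_utility)
```

The hard part, of course, is not the sum but where the numbers come from.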
"The only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate, without further ado, to see who is right." -Leibniz
Unfortunately we remain somewhat ignorant about the correct form of utility functions (finite? time-preference adjusted? etc.), and terminal values for humans are demonstrably arbitrary.
>If you accept utilitarianism as the reductive explanation of morality
... then LW ends up with Roko's Basilisk.
Really? You're using that as your answer to "I don't disagree with what you actually said, but your choice of analogy suggests that you believe that questions of morality are settled and have obvious, objective answers"? You can prove anything if you first make it an axiom.
You can't seriously claim that utilitarianism accurately captures human moral intuitions. Variations on the Repugnant Conclusion occur immediately to anyone told about utilitarianism, and are discussed in first-year philosophy right there when utilitarianism is introduced.
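The Repugnant Conclusion is, at bottom, a piece of arithmetic. With invented numbers (both populations and per-person utilities are purely illustrative), the problem for total utilitarianism looks like this:

```python
# Toy numbers for the Repugnant Conclusion: a vast population of
# lives barely worth living outscores a small, very happy one.
# All figures are invented for illustration.

def total_utility(population, utility_per_person):
    """Total utilitarianism scores a world as population * average utility."""
    return population * utility_per_person

world_a = total_utility(1_000, 100)    # small and very happy: 100,000
world_z = total_utility(1_000_000, 1)  # huge, barely worth living: 1,000,000

# Total utilitarianism ranks world Z above world A.
z_preferred = world_z > world_a
```

Most people's intuitions go the other way, which is exactly why this comes up in the first week utilitarianism is taught.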
LessWrong routinely has discussion articles showing some ridiculous or horrible consequence of utilitarianism. The usual failure mode is to go "look, this circumstance leads to a weird conclusion and that's very important!" and not "gosh, perhaps naive utilitarianism taken to an extreme misses something important."
For more or less exactly the same reason you accept general relativity over Aristotelian motion: it is derived from first principles using maths, can be shown to match experience even if somewhat intuitive to people, and works pretty well in practice.
> can be shown to match experience even if somewhat [un]intuitive to people, and works pretty well in practice.
I think these are exactly the two points that those skeptical of utilitarianism have trouble with: it is precisely the failure to match experience that started this thread. And it doesn't actually seem to work well in practice either: http://econlog.econlib.org/archives/2014/07/the_argument_fr_...