
I’m skeptical of any consequentialist approach that doesn’t just boil down to virtue ethics.

Aiming directly at consequentialist ways of operating always seems to either become impractical in a hurry, or get fucked up and kinda evil. Like, it's so consistent that anyone who thinks they've figured it out needs to have a good hard think about it for several years before tentatively attempting action based on it, I'd say.



I partly agree with you but my instinct is that Parfit Was Right(TM) that they were climbing the same mountain from different sides. Like a glove that can be turned inside out and worn on either hand.

I may be missing something, but I've never understood the punch of the "down the road" problem with consequentialism. I consider myself fairly neutral on it, but if you treat moral agency as extending only as far as the consequences you can reasonably estimate, you get a limit on moral responsibility that's basically in line with what any other school of moral thought would attest to.

You still have cause-and-effect responsibility: if you leave a coffee cup on the wrong table and the wrong Bosnian assassinates the wrong Archduke, you were causally involved, but the nature of your moral responsibility is different.


This sounds like what philosophers call "indirect consequentialism" or the related "two-level utilitarianism". The idea is what you say: aim for good outcomes, but use rules or virtues as heuristics, because direct consequentialist reasoning is impractical and it's easy to go wrong with it. If you're interested, take a look at https://plato.stanford.edu/entries/consequentialism/ and https://en.wikipedia.org/wiki/Two-level_utilitarianism.


After a couple of decades I've concluded that you need both. Virtue ethics gives you things like the War on Drugs and abortion bans: justification for having enforcement inflict real and significant harm in the name of virtue.

Virtue ethics is open-loop: the actions and virtues get evaluated without checking whether reality has veered off course.

Consequentialism is closed-loop, but you have to watch out for people lying to themselves and others about the future.
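
To make the control metaphor concrete, here's a toy sketch (my illustration, not anything from the thread; all names and numbers are made up): both controllers chase the same target, but only the closed-loop one notices when reality veers off course.

    # Toy model: hold a value at TARGET over 10 steps; a disturbance
    # knocks reality off course at step 5.
    TARGET = 20.0

    def simulate(controller, steps=10):
        value = 15.0
        for t in range(steps):
            disturbance = -3.0 if t == 5 else 0.0  # reality veers off course
            value += controller(t, value) + disturbance
        return value

    def open_loop(t, value):
        # "Virtue ethics": a plan fixed up front, never checked against outcomes.
        return 0.5

    def closed_loop(t, value):
        # "Consequentialism": each step corrects toward the target from the
        # observed state -- but bad observations (self-deception about the
        # future) feed straight back into the correction.
        return 0.5 * (TARGET - value)

    print(simulate(open_loop))    # 17.0 -- never recovers from the disturbance
    print(simulate(closed_loop))  # ~19.8 -- feedback pulls it back on target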


What does "virtue ethics" mean?


The best statement of virtue ethics is contained in Alasdair MacIntyre's _After Virtue_. It's a metaethical foundation that argues that both deontology and utilitarianism are incoherent, having failed to explain what any unitary "the good" is, and that ancient notions of "virtues" (some of which have filtered down to the present day) can capture facets of that good better.

The big advantage of virtue ethics from my point of view is that humans have unarguably evolved cognitive mechanisms for evaluating some virtues (“loyalty”, “friendship”, “moderation”, etc.) but nobody seriously argues that we have a similarly built-in notion of “utility”.


Probably a topic for a different day, but it's rare to get someone's nutshell version of ethics so concise and clear. My concern would be letting the evolutionary tail wag the dog, so to speak. Utility has the advantage of sustaining moral care toward people far away from you, which may not confer an obvious evolutionary advantage.

And I think the best that can be said of evolution is that it mixes moral, amoral and immoral thinking in whatever combinations it finds optimal.


MacIntyre doesn't really involve himself with the evolutionary parts; he tends toward historical/social/cultural explanations instead. But yes, this is an issue that any virtue ethics needs to handle.

> Utility has the advantage of sustaining moral care toward people far away from you

Well, in some formulations. There are well-defined and internally consistent choices of utility function that discount or redefine “personhood” in anti-humanist ways. That was more or less Rawls’ criticism of utilitarianism.
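
For a concrete (entirely made-up) illustration of that point: both utility functions below are well-defined and internally consistent, but the second discounts welfare by distance, so it ranks the same population differently. The welfare values and discount rate are arbitrary assumptions of mine.

    # Two internally consistent utility functions over the same population.
    people = [
        {"welfare": 1.0, "distance_km": 1},      # a neighbor
        {"welfare": 1.0, "distance_km": 8000},   # a stranger far away
    ]

    def impartial_utility(pop):
        # Classical total utilitarianism: everyone counts equally.
        return sum(p["welfare"] for p in pop)

    def discounted_utility(pop, rate=1e-4):
        # Equally well-defined and consistent, but welfare is discounted by
        # distance, so distant people systematically count for less.
        return sum(p["welfare"] / (1 + rate * p["distance_km"]) for p in pop)

    print(impartial_utility(people))   # 2.0
    print(discounted_utility(people))  # ~1.56 -- the distant person counts for about half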


One of the three traditional approaches to ethics in European philosophy:

https://en.wikipedia.org/wiki/Virtue_ethics

EA (effective altruism) being a prime example of consequentialism.


… and I tend to think of it as the safest route to doing OK at consequentialism, too. The point is still basically good outcomes, but it short-circuits the problems that tend to come up when one starts trying to maximize utility/good, by saying "that shit's too complicated, just be a good person" (to oversimplify, and to omit the "draw the rest of the fucking owl" parts).

Like you’re probably not going to start with any halfway-mainstream virtue ethics text and find yourself pondering how much you’d have to be paid to donate enough to make it net-good to be a low-level worker at an extermination camp. No dude, don’t work at extermination camps, who cares how many mosquito nets you buy? Don’t do that.



