5 Comments
Humam

Thank you for the shoutout, Theo! I loved the article. Sorry, but I read past the section I was allowed to, and I didn’t like it. I am scared to dive into the insect moral worth rabbit hole for fear of a world-ending conclusion. When I was 11, I tormented the anthills on my lawn with water and the occasional lawnmower. Even though 11-year-old me might have prevented millions of ant generations, this was wrong. I concede that insects have moral worth -- but I don't think it's much at all. Ants outnumber humans 2.5 million to 1; however, humans outneuron ants 344 thousand to 1. I think an ant has a vanishingly low ability to understand suffering or feel pain compared to a human, cow, or chicken. So, for the time being, I am okay with insect farming as an alternative to livestock farming.
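(A rough sanity check of those two ratios, assuming the commonly cited figures of roughly 86 billion neurons per human brain, about 250,000 per ant, ~8 billion humans, and ~20 quadrillion ants; the comment doesn't give its sources, so treat these numbers as assumptions:)

```python
# Rough sanity check of the ratios in the comment above, using assumed figures:
# ~86 billion neurons per human brain, ~250,000 per ant brain,
# ~8 billion humans, ~20 quadrillion ants worldwide.
human_neurons = 86e9   # neurons per human brain (assumed)
ant_neurons = 250e3    # neurons per ant brain (assumed)
humans = 8e9           # global human population (assumed)
ants = 20e15           # global ant population estimate (assumed)

print(f"ants per human:            {ants / humans:,.0f}")               # ~2,500,000 to 1
print(f"human-to-ant neuron ratio: {human_neurons / ant_neurons:,.0f}")  # ~344,000 to 1
```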

Theodore Yohalem Shouse 🔸

Thank you Humam. But I think you're very wrong about insects.

1. We should not be scared of world-ending conclusions. First, because that's motivated reasoning. And second, because ending the world would probably be the most terrible thing ever. There's so much potential positive value in the future, and with technological advancement I see no reason to believe that we won't be able to manipulate our environment to make it net-positive for all sentient creatures. At the very least, I think the EV of the future is very high, even if the value of the most likely outcome is negative (though I bet it's positive). Third, even if we have strong reason to believe the EV calculation says we should destroy the world, I think we should refrain from hitting apocalypse buttons for this reason: https://www.goodthoughts.blog/p/naive-instrumentalism-vs-principled?utm_campaign=post&utm_medium=web

2. I think it's unclear whether your tormenting the ants was good or bad. I don't know whether ants have net-positive lives or whether spraying them with water prevents millions of ants from being born. I think your intuition about the badness of your action mostly tracks virtue signals and not an EV calculation: https://www.goodthoughts.blog/p/moral-intuitions-track-virtue-signals

3. Counting neurons is not a good way of calculating moral worth: https://benthams.substack.com/i/151605056/intensely-what-about-neuron-count

Best,

Theo 🧡

Theodore Yohalem Shouse 🔸

😭😭😭

Pete McCutchen

Where do these sentience numbers come from? You have pigs at 1, which is presumably where humans are. So in a trolley-problem situation where you had to choose between killing five pigs or one human infant, you'd kill the baby? That would seem to be the implication.

Theodore Yohalem Shouse 🔸

The sentience numbers come from Brian Tomasik. You can read his research and methodology here: https://reducing-suffering.org/how-much-direct-suffering-is-caused-by-various-animal-foods/#Results_table

Also, I disagree about the supposed implication. I don't think a being's worth is entirely determined by its level of sentience. What's far more important is whether it leads a good life or not. For example, I'd rather save one human than an arbitrarily large number of wild animals because I think most humans lead lives of net-positive experiential welfare, while I think most wild animals don't.

We also probably shouldn't kill babies because it's illegal and would make tons of people extremely distressed. I think many putative counterexamples to consequentialist ethical ideas come from conflating ethical theory and practice, as Richard Yetter Chappell has put it.

See: Ethical Theory and Practice https://www.goodthoughts.blog/p/ethical-theory-and-practice?utm_campaign=post&utm_medium=web

And: Naive Instrumentalism vs. Principled Proceduralism https://www.goodthoughts.blog/p/naive-instrumentalism-vs-principled
