[HN Gopher] The shrimp welfare project
___________________________________________________________________
The shrimp welfare project
Author : 0xDEAFBEAD
Score : 63 points
Date : 2024-11-18 14:44 UTC (8 hours ago)
(HTM) web link (benthams.substack.com)
(TXT) w3m dump (benthams.substack.com)
| sodality2 wrote:
| I read this article after it was linked on Amos Wollen's substack
| this weekend. Thoroughly convinced. I have been preaching shrimp
| rights to everyone in my life. 15,000 shrimp will be given a
| death with dignity by my donation.
| thinkingtoilet wrote:
| I love articles like this that challenge the status quo of
| morality. I find myself asking the question, "Would I rather
| save a billion shrimp or one human?" and I honestly think I'm
| siding on the human. I'm not saying that answer is "correct".
| It's always good to think about these things and the point the
| author makes about shrimp being a test of our morality because
| they're so different is a good one.
| barefoot wrote:
| How about one million kittens or one human?
| dartos wrote:
| Where did the kittens come from?
|
| If they were spawned into existence for this thought
| experiment, then the human, probably.
|
| But if even one of those kittens were mine, entire cities
| could be leveled before I let anyone hurt my kitten.
| some_random wrote:
| This brings up an interesting point, our view of morality
| is heavily skewed. If you made me choose between something
| bad happening to my partner or 10 random people, I would
| save my partner every time and I expect every normal person
| in the world to choose the same.
| dartos wrote:
| Well humans aren't perfectly rational.
|
| I wouldn't think it moral to save my kitten over a random
| non-evil person, but I'd still do it.
| Iulioh wrote:
| It is a rational choice tho.
|
| It wouldn't just hurt your partner, it would hurt you.
|
| We know that following an "objective morality" the 10
| people would be a better choice but it would hurt
| (indirectly) you.
| dartos wrote:
| You're right. Maybe rational was the wrong word.
|
| Humans aren't perfectly objective.
| hansvm wrote:
| Also, where did the human come from? Are they already on
| their deathbed, prolonged in this thought experiment for
| only a few fleeting moments? Were they themselves a
| murderer?
| dartos wrote:
| None of that matters if my kitten is in danger!
| saalweachter wrote:
| Collectively we kill and eat around a billion rabbits a year,
| around 8 million in the US. They aren't kittens, but they do
| have a similar level of fluffy cuteness.
|
| It's not quite "one million to one"; the meat from 1 million
| rabbits meets the caloric needs of around 2750 people for 1
| year.
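|
| Back-of-the-envelope check, with assumed placeholder numbers
| (~1.2 kg of edible meat per rabbit at ~1,700 kcal/kg, ~2,000
| kcal/day per person; only the ~2750 figure comes from the claim
| above):
|
|     rabbits = 1_000_000
|     kcal_per_rabbit = 1.2 * 1_700        # ~2,040 kcal of meat (assumed)
|     kcal_per_person_year = 2_000 * 365   # one person's yearly intake
|     print(round(rabbits * kcal_per_rabbit / kcal_per_person_year))
|     # -> 2795, in the same ballpark as the ~2750 quoted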
| mihaic wrote:
| In that case I actually ask "Who's the human?", and in about
| 80% of the time I'd pick the human.
| HansardExpert wrote:
| Still the human.
|
| How about one million humans or one kitten?
|
| Where is the cut-off point for you?
| JumpCrisscross wrote:
| One cat versus many humans. My spending on my cat makes the
| answer clear.
| himinlomax wrote:
| The best way to save even more shrimps would be to campaign for
| and subsidize whaling. They are shrimp-mass-murdering machines.
| What's a few whales versus billions of shrimps?
| xipho wrote:
| You didn't read the article nor follow the argument, just
| jumped in. It's about _suffering_ shrimps, not _saving_
| shrimps.
| some_random wrote:
| Do shrimp not suffer from being consumed by whales?
| eightysixfour wrote:
| In comparison to having their eyeballs crushed but left
| alive, or slowly frozen to death?
| some_random wrote:
| The trade off here is eliminating mutilation, going from
| Pain + Death to just Death, or in the case of the whales,
| going from Death to normal, beautiful shrimp Life. I
| don't really have any interest in doing shrimp quality of
| life math this morning but there's clearly something
| there.
| himinlomax wrote:
| I did. It was a whole lot of nothing.
|
| In any case, I just wanted to point out that if you care
| about the welfare of damn arthropods, you're going nowhere
| really fast.
|
| Consider this: the quickest, surest, most efficient, and
| ONLY way to reduce all suffering on earth to nothing
| forever and ever is a good ole nuclear holocaust.
| 0xDEAFBEAD wrote:
| I'm almost certain this organization is focused on farmed
| shrimp. https://forum.effectivealtruism.org/posts/z79ycP5jCDk
| s4LPxA/...
| DangitBobby wrote:
| You see this type of argument used against animal welfare all
| the time. At the end of the day, I dismiss them all as "we
| can't be perfect so we might as well do nothing".
|
| As the article suggests, imagine you must live the lifetimes
| of 1 million factory-farmed shrimps. Would you then rather
| people quibble over whether we should hunt whales to
| extinction and ultimately do nothing (including never
| actually hunting whales to extinction to save you because
| they don't actually care about you), or would you rather they
| attempt to reduce your suffering in those million deaths
| as much as possible?
| anothername12 wrote:
| It's the trolley problem https://neal.fun/absurd-trolley-
| problems/
| BenthamsBulldog wrote:
| Thanks for the kind words! I agree lots of people would value a
| human more than any number of shrimp. Now, in the article, I'm
| talking about which is worse--extreme suffering for one human
| or extreme suffering for millions of shrimp. So then the
| question is: can the common sense verdict be defended? What
| about shrimp is it that makes it so that their pain is of
| negligible importance compared to humans? Sure they aren't
| smart, but being dumb doesn't seem to make your pain less bad
| (hurting babies and mentally disabled people is still very very
| bad).
| theonething wrote:
| For babies and mentally disabled people, we absolutely know
| beyond any doubt that they are capable of feeling pain,
| intense, blood-curdling pain.
|
| I don't think we can say the same of shrimp.
|
| That's why humane killing of cattle (with piston guns to the
| head) is widely practiced, but nothing of the sort for crabs,
| oysters, etc. We know for sure cattle feel pain so we do
| something about it.
| n4r9 wrote:
| Apologies for focusing on just one sentence of this article, but
| I feel like it's crucial to the overall argument:
|
| > ... if [shrimp] suffer only 3% as intensely as we do ...
|
| Does this proposition make sense? It's not obvious to me that we
| can assign percentage values to suffering, or compare it to human
| suffering, or treat the values in a linear fashion.
|
| It reminds me of that vaguely absurd thought experiment where you
| compare one person undergoing a lifetime of intense torture vs
| billions upon billions of humans getting a fleck of dust in their
| eyes. I just cannot square choosing the former with my
| conscience. Maybe I'm too unimaginative to comprehend so many
| billions of bits of dust.
| sodality2 wrote:
| Have you read the linked paper by Norcross? "Great harms from
| small benefits grow: how death can be outweighed by headaches"
| [0].
|
| [0]: https://www.jstor.org/stable/3328486
| n4r9 wrote:
| No; thanks for bringing it to my attention. The first page is
| intriguing... I'll see if I can locate a free copy somewhere.
| sodality2 wrote:
| Here's a copy I found: https://philosophysmith.com/wp-
| content/uploads/2018/07/alist...
|
| It's pretty short, I liked it. Was surprised to find myself
| agreeing with it at the end of my first read.
| probably_wrong wrote:
| I read the paper and I believe the same objection applies:
| the reasoning only works if you assume "pain" to be a
| constant number subject to the additive property.
|
| If we have to use math, I'd say: the headaches are temporal -
| the effect of all the good you've done today is effectively
| gone tomorrow one way or another. But killing a person means,
| to quote "Unforgiven", that "you take away everything he's
| got and everything he's ever gonna have". So the calculation
| needs at least a temporal discount factor.
|
| I also believe that the examples are too contrived to be
| actually useful. Comparing a room with one person to another
| with five million is like comparing the fine for a person
| traveling at twice the speed limit with that of someone
| traveling at 10% the speed of light - the results of such an
| analysis are entertaining to think about, but not actually
| useful.
| BenthamsBulldog wrote:
| No, that isn't true. We can consider some metric like being
| at some temperature for an hour. Start with some truly
| torturous pain like being at 500 degrees for an hour (you'd
| die quickly, ofc). One person being at 500 degrees is less
| bad than 10 at 499 degrees which is less bad than 100 at
| 498 degrees...which is less bad than some number at 85
| degrees (not torture, just a bit unpleasant).
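|
| A minimal sketch of that chain (the 10x multiplier and the
| one-degree steps are schematic placeholders, just to make the
| structure explicit):
|
|     people, temp = 1, 500
|     while temp > 85:
|         people *= 10      # ten times as many people...
|         temp -= 1         # ...at a temperature one degree lower
|     print(people, temp)   # 10**415 people at 85 degrees
|
| If each link in the chain is judged less bad than the one before
| it, transitivity walks you from outright torture down to mild
| unpleasantness.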
| n4r9 wrote:
| I think OP's objection is that - even granting that a
| "badness value" can be assigned to headaches and that 3
| people with headaches is worse than 2 - there's no clear
| reason to suppose that 3 is exactly _half again_ as bad
| as 2. It may be that the function mapping headaches to
| badness is logarithmic, or even that it asymptotes
| towards some limit. In mathematical terms it can be both
| monotonic and bounded.
|
| Thus, when comparing headaches to a man being tortured,
| there's no clear reason to suppose that there _is_ a
| number of headaches that is worse than the torture.
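|
| A toy shape for that objection (the constants are invented for
| illustration, not from the paper): badness that keeps rising
| with each headache yet saturates below the badness of torture.
|
|     import math
|
|     TORTURE = 100.0
|
|     def headache_badness(n, cap=50.0, k=1e-9):
|         # monotonic in n, but bounded above by cap < TORTURE
|         return cap * (1 - math.exp(-k * n))
|
|     assert headache_badness(10**12) > headache_badness(10**9)
|     assert headache_badness(10**30) < TORTURE  # never reaches torture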
| sdwr wrote:
| That's reversed. The number of people can be mapped
| linearly, but not the intensity of the pain.
|
| (Intuitively, it's hard to say saving 100 people is 100x
| as good as saving 1, because we can't have 100 best
| friends, but it doesn't affect the math at all)
| InsideOutSanta wrote:
| The article mentions that issue in passing ("I reject the claim
| that no number of mild bads can add up to be as bad as a single
| thing that's very bad, as do many philosophers"), but I don't
| understand the actual argument behind this assertion.
|
| Personally, I believe that you _can't_ just add up mildly bad
| things and create a very bad thing. For example, I'd rather get
| my finger pricked by a needle once a day for the rest of my
| life than have somebody amputate my legs without anesthesia
| just once, even though the "cumulative pain" of the former
| choice might be higher than that of the latter.
|
| Having said that, I also believe that there is sufficient
| evidence that shrimp suffer greatly when they are killed in the
| manner described in the article, and that it is worthwhile to
| prevent that suffering.
| aithrowawaycomm wrote:
| Their point isn't that it's merely "worthwhile," but that
| donating to Sudanese refugees is a waste of money because 1
| starving child = 80 starving shrimp, or whatever their
| ghoulish and horrific math says.
| 0xDEAFBEAD wrote:
| >donating to Sudanese refugees is a waste of money
|
| Donating to Sudanese refugees sounds like a great use of
| money. Certainly not a waste.
|
| Suboptimal isn't the same as wasteful. Suppose you sit down
| to eat a great meal at a restaurant. As you walk out, you
| realize that you could have gotten an even better meal for
| the same price at the restaurant next door. That doesn't
| mean you just wasted your money.
|
| >ghoulish and horrific math
|
| It's not the math that's horrific, it's the world we live
| in that's horrific. The math just helps us alleviate the
| horror better.
|
| Researcher: "Here's my study which shows that a new
| medication reduces the incidence of incredibly painful
| kidney stones by 50%." Journal editorial board: "We refuse
| to publish this ghoulish and horrific math."
| BenthamsBulldog wrote:
| It's not a waste as another commenter noted, just probably
| not the best use of money.
|
| I agree this is unintuitive, but I submit that's because of
| speciesism. What about shrimp makes it so that tens of
| millions of them painfully dying is less bad than a single
| human death? It doesn't seem like the fact that they aren't
| smart makes their extreme agony less bad (the badness of a
| headache doesn't depend on how smart you are).
| Vecr wrote:
| How much of your posting is sophistry? I assume this
| isn't (I doubt this _increases_ the positivity of the
| perception of EA), but the God stuff makes very close to
| no sense at all.
|
| If it's sophistry anyway, can't you take Eliezer's
| position and say God doesn't exist, and some CEV-like
| system is better than Bentham-style utilitarianism
| because there's not an objective morality?
|
| I don't think CEV makes much sense, but I think you're
| scoring far fewer points than you think you are, even
| relative to something like that.
| dfedbeef wrote:
| It is regular absurd.
| aithrowawaycomm wrote:
| Yeah this (along with the "billion headaches" inanity) rests on
| a fallacy: insisting an abstraction can be measured as a
| quantity when it clearly cannot. This trick is usually done by
| blindly averaging together some concrete quantities and
| claiming it represents the abstraction. The illusion is
| fostered by "local continuity" of these abstractions - if
| pulling your earlobe causes suffering, pulling harder causes
| more suffering. And of course the "mathiness" gives an aura of
| rigor and rationality. But a terrible error in quantitative
| reasoning occurs when you break locality: going from pulled
| earlobe to emotional loss, or pulled earlobe to pulled
| antennae, etc. The very nature of the abstraction -
| "suffering," "badness," - changes between entities and
| situations, so the one formula cannot possibly apply.
|
| ETA: see also the McNamara fallacy
| https://en.wikipedia.org/wiki/McNamara_fallacy
| sdwr wrote:
| It's not about the numbers. The core argument is:
|
| - they suffer
|
| - we are good people who care about reducing suffering
|
| - so we spend our resources to reduce their suffering
|
| And some (most!) people balk at one of those steps
|
| But seriously, pain is the abstraction already. It's damage
| to the body represented as a feeling.
| mistercow wrote:
| I don't really doubt that it's _in principle_ possible to
| assign percentage values to suffering intensity, but the 3%
| value (which the source admits is a "placeholder") seems
| completely unhinged for an animal with 0.05% as many neurons as
| a chicken, and the source's justification for largely
| discounting neuron counts seems pretty arbitrary, at least as
| presented in their FAQ.
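|
| (For the arithmetic: the 0.05% figure and the ~100k shrimp
| neuron count cited elsewhere in the thread jointly imply a
| chicken count on the order of 2e8:
|
|     shrimp_neurons = 100_000
|     print(shrimp_neurons / 0.0005)   # 2e8 implied chicken neurons
|
| which is the scale gap being pointed at.)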
| adrian_b wrote:
| The ratio of the neuron numbers may be somewhat meaningful
| when comparing vertebrates with vertebrates and arthropods
| with arthropods, but it is almost completely meaningless when
| comparing vertebrates with arthropods.
|
| The reason is that the structure of the nervous systems of
| arthropods is quite different from that of the vertebrates.
| Comparing them is like comparing analog circuits and digital
| circuits that implement the same function, e.g. a number
| multiplier. The analog circuit may have a dozen transistors
| and the digital circuit may have hundreds of transistors, but
| they do the same thing (with different performance
| characteristics).
|
| The analogy with comparing analog and digital circuits is
| quite appropriate, because parts of the nervous systems that
| have the same function, e.g. controlling a leg muscle, may
| have hundreds or thousands of neurons in a vertebrate, which
| function in an all-or-nothing manner, while in an arthropod
| the equivalent part may have only a few neurons that function
| in a much more complex manner in order to achieve fine
| control of the leg movement.
|
| So typically one arthropod neuron is equivalent to many
| vertebrate neurons, e.g. hundreds or even thousands.
|
| This does not mean that the nervous system of arthropods is
| better than that of vertebrates. They are optimized for
| different criteria. A vertebrate cannot become as small as
| the smallest arthropods, nor can an arthropod become as big
| as the bigger vertebrates: the systems that integrate the
| organs of a body into a single living organism, i.e. the
| nervous system and the circulatory and respiratory systems,
| are optimized for a small size in arthropods and for a big
| size in vertebrates.
| 0xDEAFBEAD wrote:
| Interesting.
|
| I'm fairly puzzled by sensation/qualia. The idea that
| there's some chemical reaction in my brain which produces
| sensation as a side effect is very weird. In principle it
| seems like you ought to be able to pare things down in
| order to produce a "minimal chemical reaction" for
| suffering, and do "suffering chemistry" in a beaker (if you
| were feeling unethical). That's really trippy.
|
| People often talk about suffering in conjunction with
| consciousness, but in my mind information processing and
| suffering are just different phenomena:
|
| * Children aren't as good at information processing, but
| they are even more capable of suffering.
|
| * I wouldn't like to be kicked if I was sleeping, or
| blackout drunk, even if I was incapable of information
| processing at the time, and had no memory of the event.
|
| So intuitively it seems like more neurons = more "suffering
| chemistry" = greater moral weight. However, I imagine that
| perhaps the amount of "suffering chemistry" required to
| motivate an organism is actually fairly constant regardless
| of its size. Same way a gigantic cargo ship and a small
| children's toy could in principle be controlled by the same
| tiny microchip. That could explain the moral weight result.
|
| Interested to hear any thoughts.
| adrian_b wrote:
| While in animals with complex nervous systems like humans
| and also many mammals and birds there may be
| psychological reasons for suffering, like the absence or
| death of someone beloved, suffering from physical pain is
| present in most, if not all animals.
|
| The sensation of pain is provided by dedicated sensory
| neurons, like other sensory neurons are specialized for
| sensing light, sound, smell, taste, temperature, tactile
| pressure, gravity, force in the muscles/tendons, electric
| currents, magnetic fields, radiant heat a.k.a. infrared
| light and so on (some of these sensors exist only in some
| non-human animals).
|
| The pain-sensing neurons, a.k.a. nociceptors, can be
| identified anatomically in some of the better studied
| animals, including humans, but it is likely that they
| also exist in most other animals, with the possible
| exception of some parasitic or sedentary animals, where
| all the sense organs are strongly reduced.
|
| So all animals with such sensory neurons that cause pain
| are certain to suffer.
|
| The nociceptors are activated by various stimuli: either
| by otherwise normal stimuli that exceed some pain
| threshold, e.g. too intense light or noise, or by
| substances generated by damaged cells in their
| neighborhood.
| 0xDEAFBEAD wrote:
| Interesting. So how about counting nociceptors for moral
| weight?
|
| What specifically makes it so the pain neurons cause pain
| and the pleasure neurons cause pleasure? Supposing I
| invented a sort of hybrid neuron, with some features of a
| pain neuron and some features of a pleasure neuron -- is
| there any way a neuroscientist could look at its
| structure+chemistry and predict whether it will produce
| pleasures vs pain?
| adrian_b wrote:
| Even if this is not well understood, it is likely that
| any differences between the pain neurons and any other
| sensory neurons are not essential.
|
| It is likely that it only matters where they are
| connected in the sensory paths that carry the information
| about sensations towards the central nervous system.
| Probably any signal coming into the central nervous
| system on those paths dedicated for pain is interpreted
| as pain, like a signal coming through the optical nerves
| would be interpreted as light, even when it would be
| caused by an impact on the head.
|
| https://en.wikipedia.org/wiki/Nociception
| mistercow wrote:
| I totally agree that you can't just do a 1:1 comparison. My
| point is not to say that a shrimp suffers .05% as much as a
| chicken, but to use a chicken as a point of reference to
| illustrate just how simple the nervous system of a shrimp
| is.
|
| We're talking about a scale here where we have to question
| whether the notion of suffering is applicable _at all_
| before we try to put it on any kind of spectrum.
| sodality2 wrote:
| > "Shouldn't you give neuron counts more weight in your
| estimates?"
|
| Rethink Priorities [0] has a FAQ entry on this [1].
|
| [0]: https://rethinkpriorities.org/research-area/welfare-
| range-es...
|
| [1]: https://forum.effectivealtruism.org/posts/Mfq7KxQRvkeLnJ
| voB/...
| mistercow wrote:
| Which I referenced and called arbitrary.
| sodality2 wrote:
| Your claim that it's arbitrary doesn't really have much
| weight without further reasoning.
| mistercow wrote:
| The problem is that the reasoning they give is so vague
| that there isn't really anything to argue against. At
| best, they convincingly argue, in an extremely non-
| information-dense way, that neuron count isn't
| everything, which is obviously true. They do not manage
| to argue convincingly that a 100k neuron system is
| something that we can even apply the word "suffering" to
| meaningfully.
| NoMoreNicksLeft wrote:
| > I don't really doubt that it's in principle possible to
| assign percentage values to suffering intensity, but the 3%
| value (which the source admits is a "placeholder") seems
| completely unhinged for an animal with 0.05% as many neurons
| as a chicken,
|
| There is a simple explanation for the confusion that this
| causes you and the other people in this thread: _suffering's
| not real_. It's a dumb gobbledygook term that in the most
| generous interpretation refers to a completely subjective
| experience that is not empirical or measurable.
|
| The author uses the word "imagine" three times in the first
| two paragraphs for a reason. Then he follows up with a fake
| picture of anthropomorphic shrimp. This is some sort of con
| game. And you're all falling for it. He's not scamming money
| out of you, instead he wants to convert you to his religious-
| dietary-code-that-is-trying-to-become-a-religion.
|
| Shrimp are food. They have zero moral weight.
| mistercow wrote:
| Denying the existence of something that you and everyone
| else has experienced is certainly an approach.
|
| Look, I'm not going to defend the author here. The linked
| report reads to me like the output of a group of people who
| have become so insulated in their thinking on this subject
| that they've totally lost perspective. They give an 11%
| prior probability of earthworm sentience based on proxies
| like "avoiding noxious stimuli", which is... really
| something.
|
| But I'm not so confused by a bad set of arguments that I
| think suffering doesn't exist.
| NoMoreNicksLeft wrote:
| > Denying the existence of something that you and
| everyone else has experienced is certainly an approach.
|
| You've experienced this mystical thing, and so you know
| it's true?
|
| > They give an 11% prior probability of earthworm
| sentience
|
| I'm having trouble holding in the laughter. But you don't
| seem to understand how dangerously deranged these people
| are. They'll convert you to their religion by hook or
| crook.
| abemiller wrote:
| Using some italics with an edgy claim doesn't allow you to
| cut through centuries of philosophy. It's almost as if,
| when philosophers coin a term like "subjective experience"
| and thousands of people use it coherently in discussion,
| it actually has semantic value.
| It exists in the intersubjective space between people who
| communicate with shared concepts.
|
| I don't have much to say about the shrimp, but I find it
| deeply sad when people convince themselves that they don't
| really exist as a thinking, feeling thing. It's
| self-repression to the maximum, and carries the implication that
| yourself and all humans have no value.
|
| If you don't have certain measurable proof either way, why
| would you choose to align with the most grim possible
| skeptical beliefs? Listen to some music or something -
| don't you hear the sounds?
| NoMoreNicksLeft wrote:
| > Using some italics with an edgy claim
|
| There is nothing edgy about it. You can't detect it, you
| can't measure it, and if the word had any applicability
| (to say, humans), then you're also misapplying it. If it
| is your contention that suffering is something-other-
| than-subjective, then you're the one trying to be edgy.
| Not I.
|
| The way sane, reasonable people describe subjective
| phenomena that we can't detect or measure is "not real".
| When we're talking about decapods, it can't even be self-
| reported.
|
| > but I find it deeply sad when people convince
| themselves that they don't really exist as a thinking,
| feeling thing. It's self repression to the maximum,
|
| Says the guy agreeing with a faction that seeks to
| convince people shrimp are anything other than food. That
| if for some reason we need to euthanize them, that they
| must be laid down on a velvet pillow to listen to
| symphonic music and watch films of the beautiful Swiss
| mountain countryside until their last gasp.
|
| "Sad" is letting yourself be manipulated so that some
| other religion can enforce its noodle-brained dietary
| laws on you.
|
| > If you don't have certain measurable proof either way
|
| I'm not obligated to prove the negative.
| 0xDEAFBEAD wrote:
| The way I think about it is that we're already making decisions
| like this in our own lives. Imagine a teenager who gets a
| summer job so they can save for a PS5. The teenager is making
| an implicit moral judgement, with themselves as the only moral
| patient. They're judging that the negative utility from working
| the job is lower in magnitude than the positive utility that
| the PS5 would generate.
|
| If the teenager gets a job offer, but the job only pays minimum
| wage, they may judge that the disutility for so many hours of
| work actually exceeds the positive utility from the PS5. There
| seems to be a capability to estimate the disutility from a
| single hour of work, and multiply it across all the hours which
| will be required to save enough.
|
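| A sketch of that implicit calculation (every utility number
| below is an invented placeholder; the structure is the point):
|
|     wage = 7.25                # dollars/hour, minimum wage
|     ps5_price = 500.0
|     ps5_utility = 300.0        # made-up "utils" from owning the PS5
|     disutility_per_hour = 5.0  # made-up "utils" lost per hour worked
|
|     hours = ps5_price / wage   # ~69 hours of work needed
|     take_job = ps5_utility > hours * disutility_per_hour
|     print(take_job)            # False: at this wage, the job isn't worth it
|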
| It would be plausible for the teenager to argue that the
| disutility from the job exceeds the utility from the PS5, or
| vice versa. But I doubt many teenagers would tell you "I can't
| figure out if I want to get a job, because the utilities simply
| aren't comparable!" Incomparability just doesn't seem to be an
| issue in practice for people making decisions about their own
| lives.
|
| Here's another thought experiment. Imagine you get laid off
| from your job. Times are tough, and your budget is tight.
| Christmas is coming up. You have two children and a pet. You
| could get a fancy present for Child A, or a fancy present for
| Child B, but not both. If you _do_ buy a fancy present, the
| only way to make room in the budget is to switch to a less
| tasty food brand for your pet.
|
| This might be a tough decision if the utilities are really
| close. But if you think your children will mostly ignore their
| presents in order to play on their phones, and your pet gets
| incredibly excited every time you feed them the more expensive
| food brand, I doubt you'll hesitate on the basis of cross-
| species incomparability.
|
| I would argue that the shrimp situation sits closer to these
| sorts of everyday "common sense" utility judgments than an
| exotic limiting case such as torture vs dust specks. I'm not
| sure dust specks have any negative utility at all, actually.
| Maybe they're even positive utility, if they trigger a blink
| which is infinitesimally pleasant. If I change it from specks
| to bee stings, it seems more intuitive that there's some
| astronomically large number of bee stings such that torture
| would be preferable.
|
| It's also not clear to me what I should do when my intuitions
| and mathematical common sense come into conflict. As you
| suggest, maybe if I spent more time really trying to wrap my
| head around how astronomically large a number can get, my
| intuition would line up better with math.
|
| Here's a question on the incomparability of excruciating pain.
| Back to the "moral judgements for oneself" theme... How many
| people would agree to get branded with a hot branding iron in
| exchange for a billion dollars? I'll bet at least a few would
| agree.
| hansvm wrote:
| > How many people would agree to get branded with a hot
| branding iron in exchange for a billion dollars?
|
| Temporary pain without any meaningful lasting injuries? I do
| worse long-term damage than that at my actual job just in
| neck and wrist damage and not being sufficiently active (on a
| good day I get 1-2hrs, but that doesn't leave much time for
| other things), and I'm definitely not getting paid a billion
| for it.
| 0xDEAFBEAD wrote:
| Sorry to hear about your neck and wrist. I like this site:
|
| https://www.painscience.com/
|
| This article was especially helpful:
|
| https://www.painscience.com/tutorials/trigger-points.php
|
| I suspect the damage you're concerned about is reversible,
| if you're sufficiently persistent with research and
| experimentation. That's been my experience with chronic
| pain.
| sixo wrote:
| > The teenager is making an implicit moral judgement, with
| themselves as the only moral patient.
|
| No they're not! You have made a claim of the form "these
| things are the same thing"--but it only seems that way if you
| can't think of a single plausible alternative. Here's one:
|
| * Humans are motivated by two competing drives. The first
| drive we can call "fear", which aims to avoid suffering,
| either personally or in people you care about or identify
| with. This derives from our natural empathic instinct, but it
| can be extended by a social construction of group identity.
| So, the shrimp argument is saying "your avoiding-suffering
| instinct can and should be applied to crustaceans too", which
| is contrary to how most people feel. Fear also includes "fear
| of ostracization", this being equivalent to death in a
| prehistoric context.
|
| * The second drive is "thriving" or "growing" or "becoming
| yourself", and leads you to glimpse the person you could be,
| things you could do, identities you could hold, etc, and to
| strive to transform yourself into those things. The teenager
| ultimately wants the PS5 because they've identified with it
| in some way--they see it as a way to express themself. Their
| "utilitarian" actions in this context are _instrumental_ ,
| not _moral_ --towards the attainment of what-they-want. I
| think, in this simple model, I'd also broader this drive to
| include "eating meat"--you don't do this for the animal or to
| abate suffering, you do it because you want to: your body's
| hungry, you desire the pleasure of satiation, and you act to
| realize that desire.
|
| * The two drives are _not the same_, and in the case of
| eating meat are directly opposed. (You could perhaps devise a
| way to see either as, ultimately, an expression of the
| other.) Human nature, then, basically undertakes the
| "thriving" drive except when there's a threat of suffering,
| in which case we switch gears to "fear" until it's handled.
|
| * Much utilitarian discourse seems to exist in a universe
| where the apparently-selfish "thriving" drive doesn't exist,
| or has been moralized out of existence--because it doesn't
| look good on paper. But, however it sounds, it in fact
| _exists_, and you will find that almost all living humans
| will defend their right to express themselves, sometimes to
| the death. This is at some level the essence of life, and the
| rejection of it leads many people to view EA-type
| utilitarianism as antithetical to life itself.
|
| * One reason for this is that "fear-mode thinking" is
| cognitively expensive, and while people will maintain it for
| a while, they will eventually balk against it, no matter how
| reasonable it seems (probably this explains the last decade
| of American politics).
| 0xDEAFBEAD wrote:
| I find myself motivated to alleviate suffering in other
| beings. It feels good that a _quarter million_ shrimp are
| better off because I donated a few hundred bucks. It makes
| me feel like my existence on this planet is worthwhile. I
| did my good deed for the day.
|
| There was a time when my good deeds were more motivated by
| fear. I found that fear wasn't a good motivator. This has
| become the consensus view in the EA community. EAs
| generally think it's important to avoid burnout. After
| reworking my motivations, doing good now feels like a way
| to thrive, not a way to avoid fear. The part of me which
| was afraid feels good about this development, because my
| new motivational structure is more sustainable.
|
| If you're not motivated to alleviate suffering in other
| beings, it is what it is. I'm not going to insult you or
| anything. However, if I notice you insulting _others_ over
| moral trifles, I might privately think to myself that you
| are being hyperbolic. When I put my EA-type utilitarian
| hat on, almost all internet fighting seems to lack
| perspective.
|
| I support your ability to express yourself. (I'm a _little_
| skeptical that's the main driver of the typical PS5
| purchase, but that's beside the point.) I want you to
| thrive! I consume meat, so I can't condemn you for
| consuming meat. I did try going vegan for a bit, but a
| vegan diet was causing fatigue. I now make a mild effort to
| eat a low-suffering diet. I also donate to https://gfi.org/
| to support research into alternative meats. (I think it's
| plausible that the utilitarian impact of my diet+donations
| is net positive, since the invention of viable alternative
| meats could have such a large impact.) And whenever I get
| the chance, I rant about the state of vegan nutrition
| online, in the hope that vegans will notice my rants and
| improve things.
|
| (Note that I'm not a member of the EA community, but I
| agree with aspects of the philosophy. My issues with the
| community can go in another thread.)
|
| (I appreciate you writing this reply. Specifically, I find
| myself wondering if utilitarian advocacy would be more
| effective if what I just wrote, about the value of
| rejecting fear-style motivation, was made explicit from the
| beginning. It could make utilitarianism both more appealing
| and more sustainable.)
| BenthamsBulldog wrote:
| Seems possible in principle. Experiences can cause one to feel
| more or less pain--what's wrong with quantifying it? Sure it
| will be a bit handwavy and vague, but the alternative of doing
| no comparisons and just going based on vibes is worse
| https://www.goodthoughts.blog/p/refusing-to-quantify-is-
| refu.... But as I argue, given high uncertainty, you don't need
| any fine grained estimates to think giving to shrimp welfare is
| valuable. Like, if there was a dollar in front of you and you
| could use it to save 16,000 shrimp, seems like that's a good
| use of it.
| kaashif wrote:
| > Like, if there was a dollar in front of you and you could
| use it to save 16,000 shrimp, seems like that's a good use of
| it.
|
| Uhh, that's totally unintuitive and surely almost all people
| would disagree, right?
|
| If not in words, people disagree in actions. Even within
| effective altruism there are a lot of people only giving to
| human centred causes.
| jjcm wrote:
| No the proposition doesn't make sense. The 3% number comes from
| this: https://rethinkpriorities.org/research-area/welfare-
| range-es...
|
| The page gives 3% to shrimp because their lifespan is 3% that
| of humans. It's a terrible avenue for this estimate. By the
| same estimate, giant tortoises are less ethical to kill than
| humans. The heavens alone can judge you for the war crimes
| you'd be committing by killing a Turritopsis dohrnii.
|
| Number of neurons is the least-bad objective measurement in my
| eyes. Arthropods famously have very few neurons, <100k compared
| to 86b in humans. That's a 1:1000000 neuron ratio, which feels
| like a more appropriate ratio for suffering than a lifespan-
| based ratio, though both are terrible.
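|
| For what it's worth, the raw division (using the ~100k and 86b
| figures above; published shrimp neuron counts vary):
|
|     shrimp_neurons = 100_000
|     human_neurons = 86_000_000_000
|     print(human_neurons // shrimp_neurons)   # 860,000
|
| So "1:1000000" is a round-number approximation of roughly
| 1:860,000.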
| aziaziazi wrote:
| Not only lifespan. From the link you quote:
|
| > Capacity for welfare = welfare range x lifespan. An
| individual's welfare range is the difference between the best
| and worst welfare states the individual can realize.
|
| > we rely on indirect measures even in humans: behavior,
| physiological changes, and verbal reports. We can observe
| behavior and physiological changes in nonhumans, but most of
| them aren't verbal. So, we have to rely on other indirect
| proxies, piecing together an understanding from animals'
| cognitive and affective traits or capabilities.
|
| First time I see this "welfare range" notion and it seems
| quite clever to me.
|
| Also the original article says 3.1% is the median while the
| mean is 19%. I guess that may be caused by individuals having
| different experiences from each other.
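|
| Taken literally, the quoted definition is just a product (the
| inputs below are hypothetical, purely to show the shape):
|
|     def welfare_capacity(welfare_range, lifespan_years):
|         # "Capacity for welfare = welfare range x lifespan"
|         return welfare_range * lifespan_years
|
|     print(welfare_capacity(welfare_range=0.031, lifespan_years=2.0))
|
| And a median (3.1%) far below the mean (19%) indicates a heavily
| right-skewed distribution of estimates: a minority of scenarios
| in which shrimp welfare ranges are large pulls the mean up.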
| hazbot wrote:
| I'm willing to run with the 3% figure... But I take issue with
| the linearity assumption that torturing 34 shrimp is thus worse
| than torturing a human!
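|
| (The 34 is the linear arithmetic made explicit:
|
|     print(1 / 0.03)   # 33.33...: the 34th shrimp tips the scale
|
| under exactly the linearity assumption being questioned.)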
| dfedbeef wrote:
| Waiting for the funny reveal that this is a prank
| niek_pas wrote:
| Why do you think this is a prank?
| dfedbeef wrote:
| Because it's funny enough and seems like an absolute S-tier
| performance artist critique of the effective altruism
| movement. Like who gives a shit about whether shrimp freeze
| to death or are electrocuted and then freeze to death.
|
| But this blog post uses a little BS math (.3 seconds IS
| shorter than 20 minutes! By orders of magnitude! Take my
| money!)
|
| and some hand wavey citations (Did you know shrimp MIGHT be
| conscious based on a very loose definition of consciousness?
| Now you too are very smart! You can talk about this with your
| sort-of friends (coworkers) from the job where you spend 80
| hours a week now!)
|
| to convince some people that this is indeed an important and
| worthy thing. Because people who can be talked into this
| don't really interact with the real world, for the most part.
| So they don't know that lots of actual people need actual
| help that doesn't involve them dying anyway and being eaten
| en masse afterwards.
| dfedbeef wrote:
| and it stimulates interesting conversations like this.
| Watch this comment section, it's going to be great.
| dfedbeef wrote:
| It's also just such a perfect half-measure. You're not
| asking people to not eat these little guys. They're not
| even confirmed to be fully conscious. This is a speculative
| fix for a theoretical problem. Plus like, there's some
| company making shrimp zappers. So by donating you're also
| kind of paying two people to kill the shrimp?
| sodality2 wrote:
| > They're not even confirmed to be fully conscious
|
| Please read the cited Rethink Priorities research:
| https://rethinkpriorities.org/research-area/welfare-
| range-es...
|
| Notably the FAQ and responses.
| AlexandrB wrote:
| I think Dennis Prager is a hack, but this quote looms
| larger in my mind as I get older.
|
| > The foolishness of that comment is so deep, I can only
| ascribe it to higher education. You have to have gone to
| college to say something that stupid.
|
| The entire effort to quantify morality rests on the
| shakiest of foundations but makes confident claims about
| its own validity based on layers and layers of
| mathematical obfuscation and abstraction.
| dfedbeef wrote:
| I guess I've spent my whole life waiting for the funny reveal
| that this whole thing is a funny prank. I like when things are
| funny.
| dfedbeef wrote:
| I guess it is not a prank, maybe just a perfect encapsulation
| of life in tech in the 2020's.
| VyseofArcadia wrote:
| This is an intensely weird read. I kept waiting for the satire to
| become more obvious. Maybe throw in a reference or two to the
| Futurama episode "The Problem with Popplers". But by the end I
| can only conclude that it is sincere.
|
| I guess what strikes me as most odd is that not eating shrimp is
| never suggested as an alternative. It starts from the premise
| that, well, we're going to eat shrimp anyway, so the least we
| could do is give them a painless death first. If you follow this
| logic to its extremes, you get things like, "well, it's expensive
| to actually feed these starving children, but for just pennies a
| day you can make sure they at least die painlessly".
| InsideOutSanta wrote:
| You have no control over other people's eating habits, but you
| do have control over your own charitable spending.
|
| If you're considering how to best spend your money, it doesn't
| matter that not eating shrimp would be an even better solution
| than preventing pain when they are killed. It only matters what
| the most effective way of spending your money is.
| AlexandrB wrote:
| If we're talking ethical giving, I'd rather give that money
| to a panhandler where there's a chance it relieves even a
| little bit of human suffering.
| DangitBobby wrote:
| TFA addresses this. Many humans believe that no amount of
| animal suffering is as bad as any amount of human
| suffering, which is just a failure of humans to
| empathize. Human suffering is not all that matters, and
| people who can't be convinced otherwise probably aren't the
| target audience.
| erostrate wrote:
| Did the author factor in the impact of this kind of article on
| the external perception of the rationalist / utilitarian / EA
| community when weighing the utility of publishing this?
|
| Should you push arguments that seem ridiculously unacceptable to
| the vast majority of people, thereby reducing the weight of more
| acceptable arguments you could possibly make?
| sodality2 wrote:
| Should we stop making logically sound but unpalatable
| arguments?
| erostrate wrote:
| How palatable an argument is determines its actual impact.
| It's not logical to spend effort making arguments that are so
| unpalatable that they will just make people ignore you.
| sodality2 wrote:
| Deontologically, maybe it's better on principle to make an
| argument you know you can't refute. Maybe even just to try
| to convince yourself otherwise.
|
| I know the person making this argument isn't necessarily
| aligned with deontology. Maybe that was your original
| point.
| slothtrop wrote:
| That may be part of the intent.
| 0xDEAFBEAD wrote:
| The article seems to have been well-received:
| https://benthams.substack.com/p/you-all-helped-hundreds-of-m...
|
| I think this is a tough call in general. Current morality would
| be considered "ridiculously unacceptable" by 1800s standards,
| but I see it as a good thing that we've moved away from 1800s
| morality. I'm glad people were willing to challenge the 1800s
| status quo. At the same time, my sense is that the
| environmentalists who are ruining art in museums are probably
| challenging the status quo in a way that's unproductive.
|
| To some degree, I suspect the rationalist / EA crowd has
| decided that weird contrarians tend to be the people who have
| the greatest impact in the long run, so it's OK to filter for
| those people.
| freejazz wrote:
| It remains to be seen whether anyone besides the EA community
| takes the EA community seriously; I wouldn't worry about it.
| delichon wrote:
| By this logic, if someone kills a person and lets them decay in a
| swamp such that billions or trillions of microbes benefit, we
| should hail them as a paragon of charity. I hope this point of
| view doesn't catch on.
| sodality2 wrote:
| To draw this parallel, you're stipulating a few
| things: microbes feel pain, X good is as good as X bad is bad,
| and that actively bringing about a good thing is equivalent to
| avoiding a harmful thing. I don't think any of these are true,
| so I disagree.
| bhelkey wrote:
| This two-thousand-word article boils down to: 1) every dollar
| donated saves ~1,500 shrimp per year from agony in perpetuity, and
| 2) saving 32 shrimp from agony is morally equivalent to saving 1
| human from agony.
|
| Neither of these points is well supported by the article. Nor
| are they well supported by the copious links scattered through
| the blog post.
|
| For example, "they worked with Tesco to get an extra 1.6 billion
| shrimp stunned before slaughter every year" links to a summary
| about the charity NOT to any source for 1.6 billion shrimp saved.
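|
| Taking the two claims at face value, the implied exchange rate
| shows why the support question matters so much:
|
|     shrimp_per_dollar_year = 1_500   # claim 1
|     shrimp_per_human = 32            # claim 2
|     print(shrimp_per_dollar_year / shrimp_per_human)   # ~46.9
|
| That is, roughly 47 "human agony equivalents" averted per dollar
| per year -- a conclusion only as strong as those two inputs.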
| sodality2 wrote:
| > For example, "they worked with Tesco to get an extra 1.6
| billion shrimp stunned before slaughter every year" links to a
| summary about the charity NOT to any source for 1.6 billion
| shrimp saved.
|
| It's in the exact webpage linked there. You just didn't scroll
| down enough.
|
| > Tesco and Sainsbury's published shrimp welfare commitments,
| citing collaboration with SWP (among others), and signed 9
| further memoranda of understanding with producers, in total
| committing to stunning a further ~1.6B shrimps per annum.
|
| https://animalcharityevaluators.org/charity-review/shrimp-we...
| bhelkey wrote:
| The purpose of a citation is to provide further evidence
| supporting the claim. This instead links to a ~thousand-word
| article, only a single sentence of which is relevant. Instead
| of supporting the claim, it merely restates it.
| sodality2 wrote:
| It's a primary source from the organization doing the
| partnership with Tesco. Why would they cite anything? Who
| would they cite?
|
| https://www.globenewswire.com/en/news-
| release/2024/08/17/293...
| bhelkey wrote:
| > It's a primary source
|
| It's not a primary source. It's a one sentence summary of
| a secondary source. This[1] is the primary source of the
| Tesco commitment.
|
| [1] https://www.tescoplc.com/sustainability/documents/pol
| icies/t...
| sodality2 wrote:
| Given that two organizations make an agreement, I'd say a
| statement by either organization is considered a primary
| source of said agreement.
| tengbretson wrote:
| Gotta hand it to the author: no one here is arguing over whether
| ChatGPT wrote the article.
| vasco wrote:
| It seems weird to genetically engineer, force reproduce and grow
| other species just for us to eat them but worry about the
| specific aspect of how they die. Their whole existence is for our
| exploitation. I know it's still a good thing to not cause extra
| suffering if we can avoid it for cheap, and I support this kinda
| thing, but it's always weird.
| BenthamsBulldog wrote:
| I'm also against exploiting them.
| goda90 wrote:
| A purely selfish, human-centric argument for this initiative
| might be that electrical stunning before freezing could improve
| the taste and texture of the shrimp. I know some animals
| reportedly have worse tasting meat if their death was stressful.
| ajkjk wrote:
| This person seems to think they have engaged with the
| counterarguments against their way of looking at the world, but
| to my eye they are clueless; they just have no idea how most
| other people think at all. They've rationalized their morality
| into some kind of pseudo-quantitative ethical maximization
| problem and then failed to notice that most people's moralities
| don't and aren't going to work like that.
|
| Indeed, people will resist being "tricked" into this framework:
| debating on these terms will feel like having their morals
| twisted into justifying things they don't believe in. And
| although they may not have the patience or rhetorical skill to
| put into words exactly why they resist it, their intuitions won't
| lead them astray, and they'll react according to their true-but-
| hard-to-verbalize beliefs (usually by gradually getting
| frustrated and angry with you).
|
| A person who believes in rationalizing everything will then think
| that someone who resists this kind of argument is just too
| dumb, or irrational, or stubborn, or actually-evil, to see
| that they are wrong.
| rationalize morality, that you can compute the right thing to do
| at a personal-ethics level, is itself a moral belief, which those
| people simply do not agree with, and their resistance is in
| accordance with that: you'd be trying to convince them to replace
| their moral beliefs with yours in order to win an argument by
| tricking them with logic. No wonder they resist! People do not
| release control over their moral beliefs lightly. Rather I think
| it's the people who are very _insecure_ in their own beliefs who
| are susceptible to giving them up to someone who runs rhetorical
| circles around them.
|
| I've come to think that a lot of 21st-century discord (cf.
| American political polarization) is due to this basic conflict.
| People who believe in rationalizing everything think they can't
| be wrong because the only way to evaluate anything is rationally
| --a lens through which, _of course_ rationality looks better than
| anything else. Meanwhile everyone who trusts in their own moral
| intuitions feels tricked and betrayed and exploited and sold out
| when it happens. Sure, they can't always find the words to
| defend themselves. But it's the rationalizers who are in the
| wrong: pressuring someone into changing their mind is not okay;
| it's a basic act of disrespect. Getting someone on your side for
| real means appealing to _their_ moral intuition, not making them
| doubt theirs until they give up and reluctantly agree with yours.
| Anyway it's a temporary and false victory: theirs will re-emerge
| years later, twisted and deformed from years of imprisonment, and
| often set on vengeance. At that point they may well be "wrong",
| but there's no convincing them otherwise: their moral goal has
| been replaced with a singular need to get to make their own
| decisions instead of being subjugated by yours.
|
| Anyway.
|
| IMO to justify animal welfare utilitarianism to people who don't
| care about it at all, you need to take one of two stances:
|
| 1. We (the animal-empathizers) live in a society with you, and we
| care a lot about this, but you don't. But we're in community with
| each other, so we ought to support each other's causes even if
| they're not personally relevant to us. So how about we support
| what you care about and you support what we care about, so
| everyone benefits? In this case it's very cheap to help.
|
| 2. We all live in a society together which should, by now, have
| largely solved for our basic needs (except for our basic
| incompetence at it, which, yeah, we need to keep working on). The
| basic job of morality is to guarantee the safety of everyone in
| our community. As we start checking off basic needs at the local
| scale we naturally start expanding our definition of "community"
| to more and more beings that we can empathize with: other nations
| and peoples, the natural world around us, people in the far
| future who suffer from our carelessness, pets, and then, yes,
| animals that we use for food. Even though we're still working on
| the "nearby" hard stuff, like protecting our local ecosystems, we
| can also start with the low-hanging-fruit on the far-away stuff,
| including alleviating the needless suffering of shrimp. Long-term
| we hope to live in harmony with everything on earth in a way that
| has us all looking out for each other, and this is a small step
| towards that.
|
| "(suffering per death) * (discount rate for shrimp being 3% of a
| human) * (dollar to alleviate) = best charity" just doesn't work
| at all. I notice that the natural human moral intuition (the non-
| rational version) is necessarily _local_: it's focused on
| protecting whatever you regard as your community. So to get
| someone to extend it to far-away less-sentient creatures, you
| have to convince the person to change their definition of the
| "community"--and I think that's what happens naturally when they
| feel like their local community is safe enough that they can
| start extending protection at a wider radius.
| sodality2 wrote:
| > They've rationalized their morality into some kind of pseudo-
| quantitative ethical maximization problem and then failed to
| notice that most people's moralities don't and aren't going to
| work like that.
|
| To me, the point of this argument (along with similar ones) is
| to expose these deeper asymmetries that exist in most people's
| moral systems - to make people question their moral beliefs
| instead of accepting their instinct. Not to say "You're all
| wrong, terrible people for not donating your money to this
| shrimp charity which I have calculated to be a moral
| imperative".
| sixo wrote:
| > to make people question their moral beliefs instead of
| accepting their instinct
|
| Yes every genius 20 year old wants to break down other
| people's moral beliefs, because it's the most validating
| feeling in the world to change someone's mind. From the other
| side, this looks like, quoting OP:
|
| > you'd be trying to convince them to replace their moral
| beliefs with yours in order to win an argument by tricking
| them with logic.
|
| And feels like:
|
| > pressuring someone into changing their mind is not okay;
| it's a basic act of disrespect.
|
| And doesn't work, instead:
|
| > Anyway it's a temporary and false victory: theirs will re-
| emerge years later, twisted and deformed from years of
| imprisonment, and often set on vengeance.
| sodality2 wrote:
| > Yes every genius 20 year old wants to break down other
| people's moral beliefs, because it's the most validating
| feeling in the world to change someone's mind
|
| I may be putting my hands up in surrender, as a 20 year old
| (decidedly not genius though). But I'm instead defending
| this belief, not trying to convince others. Also, I don't
| think it's the worst thing in the world to have people
| question their preconceived moral notions. I've taken
| ethics classes in college and I personally loved having
| them challenged.
| sixo wrote:
| ha, got one. Yes it is pretty fun _if you're in the
| right mental state for it_, I've just seen so many EA-
| type rationalists out on the internet proliferating this
| worldview, and often pushing it on people who a) don't
| enjoy it, b) are threatened by it, c) are underequipped
| to defend themselves rationally against it, that I find
| myself jumping to defend against it. EA-type
| utilitarianism, I think, proliferates widely on the
| internet specifically by "survival bias"--it is easily
| argued in text; it looks good on paper. Whereas the
| "innate" morality of most humans is more based on ground-
| truth emotional reality; see my other comment for the
| character of that
| https://news.ycombinator.com/item?id=42174022
| sodality2 wrote:
| I see, and I wholly agree. I'm looking at this from
| essentially the academic perspective (aka, when I was
| required to at least question my innate morality). When I
| saw this blog post, I looked at it in the same way. If
| you read it as "this charity is more useful than every
| other charity, we should stop offering soup kitchens, and
| redirect the funding to the SWP", then I disagree with
| that interpretation. I don't need or want to rationalize
| that decision to an EA. But it is a fun thought
| experiment to discuss.
| ajkjk wrote:
| IMO: the idea that "this kind of argument exposes deeper
| asymmetries..." is itself fallacious for the same reason: it
| presupposes that a person's morality answers to logic.
|
| Were morality a logical system, then yes, finding apparent
| contradictions would seem to invalidate it. But somehow
| that's backwards. At some level moral intuitions _can't_ be
| wrong: they're moral intuitions, not logic. They obey
| different rules; they operate at the level of emotion,
| safety, and power. A person basically cannot be convinced
| with logic to no longer care about the safety of
| someone/something that they care about the safety of. Even if
| they submit to an argument of that form, they're doing it
| because they're conceding power to the arguer, not because
| they've changed their mind (although they may actually say
| that they changed their opinion as part of their concession).
|
| This isn't cut-and-dried; I think I have seen people genuinely
| change their moral stances on something from a logical
| argument. But I suspect that it's incredibly rare, and when
| it happens it feels genuinely surprising and bizarre. Most of
| the time when it seems like it's happening, there's actually
| something else going on. A common one is a person changing
| their professed moral stance because they realize they win
| some social cachet for doing so. But that's a switch at the
| level of power, not morality.
|
| Anyway it's easy to claim to hold a moral stance when it
| takes very little investment to do so. To identify a person's
| actual moral opinions you have to see how they act when
| pressure is put on them (for instance, do they resist someone
| trying to change their mind on an issue like the one in the
| OP?). People are _incredibly_ good at extrapolating from a
| moral claim to its moral implications that affect them (if
| you claim that we should prioritize saving the lives of
| shrimp, what else does that argument justify? And what things
| that I care about does that argument then invalidate? Can I
| still justify spending money on the things I care about in a
| world where I'm supposed to spend it on saving animals?),
| and they will treat an argument as a threat if it seems to
| _imply_ things that would upset their personal morality.
|
| The sorts of arguments that _do_ regularly change a person's
| opinion on the level of moral intuitions are of the form:
|
| * information that you were hurting, or failing to help,
| someone without noticing it
|
| * or, information that you thought you were helping or
| avoiding hurting someone, but you were wrong.
|
| * corrective actions like shame from someone they respect or
| depend on ("you hurt this person and you're wrong to not
| care")
|
| * other one-on-one emotional actions, like a person genuinely
| apologizing, or acting selfless towards you, or asserting a
| boundary
|
| (Granted, this stance seems to invalidate the entire subject
| of ethics. And it kinda does: what I'm describing is
| phenomenological, not ethical; I'm claiming that this is how
| people actually work, even if you would like them to follow
| ethics. It seems like ethics is what you get when you try to
| extend ground-level moralities to an institutional level.
| When you abstract morality from individuals to collectives,
| you have to distill it into actual rules that obey some
| internal logic, and that's where ethics comes in.)
| dfedbeef wrote:
| This is a good comment.
| sys32768 wrote:
| How does the author know that shrimp experience "extreme agony"
| the way humans experience it?
|
| Trees and bushes and vegetables might experience extreme agony
| too when dying.
| sodality2 wrote:
| > How does the author know that shrimp experience "extreme
| agony" the way humans experience it?
|
| https://rethinkpriorities.org/research-area/welfare-range-es...
| addicted wrote:
| Or more simply, don't kill animals when we don't need to.
| burnt-resistor wrote:
| Okay, plausible, I guess. The problem boils (no pun intended)
| down to an anthropocentric one; that is, it's impossible to
| ask a shrimp how much it hurts. Perhaps it hurts a lot, or
| only a little, or it varies based on other factors. It's not
| necessarily unknowable, but it's unknowable in human-relatable
| terms, because (human) intelligence and theory-of-mind
| frame-taking presuppose linguistic understanding and
| compatibility. (Hasn't almost every conquering civilization
| deemed the indigenous groups it encountered "dumb" or
| "subhuman" simply because it couldn't converse with them? And
| I'll take it one further: "intelligence" is a purely
| qualitative property inferred from performative interaction,
| judged by strategy signals, complexity of response, or
| academic fashion... all of which require a shared language.
| This leaves out all other species, because humans haven't yet
| evolved the intelligence or tools to communicate with them.)
|
| Also, why not endeavor to replace meat grown by slaughtering
| animals with other alternatives? Optimizing those alternatives
| would reduce the energy use, costs, biothreats, and suffering
| that eating other living beings creates.
| IncreasePosts wrote:
| Yes, if individual suffering is the metric, then we should be
| shutting down shrimp farms and only eating large animals that
| provide a whole lot of calories per individual - like cows or
| elephants.
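|
| As a back-of-the-envelope sketch (rough, hypothetical round
| numbers): if one cow yields ~500,000 kcal of meat and one
| shrimp ~20 kcal, then a million calories costs ~2 cow deaths
| versus ~50,000 shrimp deaths -- roughly four orders of
| magnitude fewer individuals killed.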
| 0xDEAFBEAD wrote:
| >why not endeavor to replace meat grown by slaughtering animals
| with other alternatives?
|
| Utilitarians tend to be very interested in this, too. I've been
| giving to this group: https://gfi.org/
| sixo wrote:
| This is one of those arguments that you reach when you go in a
| certain direction for long enough, and you can divide people into
| two camps at this point by whether:
|
| * they triumphantly declare victory--ethics is solved! We can
| finally Do The Most Good!
|
| * or, it's so ridiculous that it occurs to them that they're
| missing something--must have taken a wrong turn somewhere earlier
| on.
|
| By my tone you can probably tell I take the latter position,
| roughly because "suffering", or "moral value", is not rightly
| seen as measurable, calculable, or commensurable, even between
| humans. It's occasionally a useful view for institutions to
| hold, but imo not the right one for a human.
| 0xDEAFBEAD wrote:
| You can read and respond to my reply here if you like:
| https://news.ycombinator.com/item?id=42173441
| RodgerTheGreat wrote:
| If you find this line of argument compelling, consider another
| alternative: engineering an organism which is much smaller and
| consumes far fewer resources than shrimp but which exists in a
| neurologically stable state of perpetual bliss. The survival and
| replication of this species of biological prayer-wheels would
| rapidly become a far stronger moral imperative (within the logic
| of the article) than _any_ consideration for shrimp, or indeed
| humans.
| sodality2 wrote:
| I don't think this is the same at all. Creation of good is not
| necessarily as good as avoidance of harm is bad.
| VyseofArcadia wrote:
| It depends on your goals, right?
|
| If your goal is "maximize total happiness", then engineering
| blisshrimp is obviously the winning play. If your goal is
| "minimize total suffering", than the play is to engineer
| something that 1. experiences no suffering, 2. is delicious,
| and 3. outcompetes existing shrimp so we don't have to worry
| about their suffering anymore.
|
| Ideally we'd engineer something that is in a state of
| perpetual bliss and _wants_ to be eaten, not unlike the cows
| in _The Restaurant at the End of the Universe_.
| sodality2 wrote:
| > If your goal is "minimize total suffering", then the play
| is to engineer something that 1. experiences no suffering,
| 2. is delicious, and 3. outcompetes existing shrimp so we
| don't have to worry about their suffering anymore.
|
| Eh, only if you're minimizing suffering per living being.
| Not total suffering. Having more happy creatures doesn't
| cancel out the sad ones. But I see what you mean.
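|
| Concretely (toy numbers, purely for illustration): if 10
| creatures each suffer 1 unit, total suffering is 10 whether
| or not you also create 990 blissful ones; only the average
| moves, from 1 to 0.01 per being.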
| Vecr wrote:
| > Eh, only if you're minimizing suffering per living
| being. Not total suffering. Having more happy creatures
| doesn't cancel out the sad ones. But I see what you mean.
|
| According to this guy it does.
| 0xDEAFBEAD wrote:
| If you start that charity, I think there's a decent chance the
| substack author will write a blog post about it. Seems like a
| great idea to me.
| VyseofArcadia wrote:
| Prayer wheels is an _excellent_ example of where this kind of
| logic leads.
|
| Kudos to you for making the connection.
| Melonotromo wrote:
| Shrimps don't suffer. No one 'suffers'.
|
| Suffering is an expression/concept we humans have because we
| gave a certain state that name. Suffering is something an
| organism presents when it can't survive or struggles with
| survival.
|
| Now I'm a human, and my empathy is a lot stronger for humans,
| a lot, than for shrimps.
|
| Btw, I do believe that if we really cared and made sure fellow
| humans did not need to suffer (they suffer because of
| capitalism), a lot of other suffering would stop too.
|
| We would be able to actually think about shrimps and other
| animals.
| KevinMS wrote:
| > They were going to be thrown onto ice where slowly,
| agonizingly, over the course of 20 minutes, they'd suffocate and
| freeze to death at the same time, a bit like suffocating in a
| suitcase in the middle of Antarctica. Imagine them struggling,
| gasping, without enough air, fighting for their lives, but it's
| no use.
|
| How do we know this isn't just fiction?
| qwertygnu wrote:
| Responses to animal welfare articles are sad. There are mountains
| of evidence[0] that many animals experience emotions (including
| suffering) in much the same way that we do. It's tough seeing
| people say, without hesitation, they'd kill millions of animals
| over a single human.
|
| > Shrimp are a test of our empathy. Shrimp don't look normal,
| caring about them isn't popular, but basic ethical principles
| entail that they matter.
|
| I think we'll be looking back in the not-so-far future with
| disgust about how we treated animals.
|
| [0] RTFA
| theonething wrote:
| I find it not only sad but horrifying that there are people
| who would actually consider sacrificing a human over animals.
| leephillips wrote:
| Shrimp do not have experience. There is no place within the
| shrimp, no anatomical structure, where experience can reside. The
| article, and many of the articles it links to, confuse the
| existence of pain receptors, complexity of behavior, memory,
| aversion, intelligence, learning capacity, and other measures
| with experience itself.
|
| Since they don't have experience, they can't suffer, in the
| morally relevant sense for this argument.
| telharmonium wrote:
| I'm reminded of the novel "Venomous Lumpsucker" by Ned Beauman, a
| deeply weird satire about the perverse incentives and behaviors
| engendered by the union of industrial consumption, market-based
| conservation, and the abstract calculus of ethics at scale.
|
| In particular, one portion features an autonomous bioreactor
| which produces enormous clouds of "yayflies": mayflies whose
| nervous systems have been engineered to experience constant,
| maximal pleasure. The system's designer asserts that, given the
| sheer volume of yayflies produced, they have done more than
| anyone in history to increase the absolute quantity of happiness
| in the universe.
| c0detrafficker wrote:
| What's next? Telling hyenas they should stun their prey first?
| Palestinians to stop schechting?
| AlexandrB wrote:
| Oh, boy. Site links to some "Effective Altruism" math on the
| topic[1]. This both reinforces my existing (negative) opinion of
| EA and makes me question the validity of this whole thing.
|
| [1]
| https://forum.effectivealtruism.org/posts/EbQysXxofbSqkbAiT/...
| l1n wrote:
| I made these, proceeds go to the SWF folks
| https://www.etsy.com/listing/1371574690/shrimp-want-me-unali...
| RandallBrown wrote:
| I wonder if these donations would be better spent on lobbying for
| shrimp-stunning regulations rather than just buying shrimp
| stunners for the farms.
| mrguyorama wrote:
| Pulling completely unsubstantiated numbers out of your ass is not
| an argument. No, calling it "an estimate" does not actually make
| a number you've pulled out of your ass an actual estimate. No,
| composing a bunch of """estimates""" doesn't make an argument,
| and it doesn't matter what kind of "error ranges" you give
| your made-up numbers: the error range of the composed value
| is basically infinite.
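|
| To see how fast composed "estimates" blow up, here is a toy
| sketch in Python (every number below is invented purely for
| illustration, not taken from any report):
|
|     # three hypothetical lo/hi ranges, each "known" only to
|     # within a factor of 100
|     factors = [(1e-3, 1e-1), (1e2, 1e4), (1e-2, 1e0)]
|     lo = hi = 1.0
|     for f_lo, f_hi in factors:
|         lo *= f_lo
|         hi *= f_hi
|     print(f"{hi / lo:.0e}")  # 1e+06: the product spans six
|                              # orders of magnitude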
|
| >The way it works is simple and common sense
|
| Claiming "common sense" in any argument is red flag number 1 that
| you don't actually have a self-supporting argument. Common sense
| doesn't actually exist, and anyone leaning on it is just trying
| to compel you through embarrassment to support their cause
| without argument. There's a reason _proving_ 1+1=2 takes a
| hundred pages.
|
| Randomly inserting numbers that "seem" right so that you can
| pretend to be a rigorous field is cargo cultism and
| pseudoscience. Numbers without data and justification are not
| rigor.
| BenthamsBulldog wrote:
| The numbers didn't come from me; they came from the RP report.
___________________________________________________________________
(page generated 2024-11-18 23:01 UTC)