[HN Gopher] Moral Competence
___________________________________________________________________
Moral Competence
Author : flaque
Score : 188 points
Date : 2021-01-05 17:37 UTC (5 hours ago)
(HTM) web link (evanjconrad.com)
(TXT) w3m dump (evanjconrad.com)
| hodgesrm wrote:
| Great essay. The key is to judge yourself on visible outcomes
| that make the world better for others rather than progress
| metrics like delivering releases, passing laws, holding
| conferences, etc.
| antonzabirko wrote:
| This article is wrong. Moral good comes from moral incompetence
| as much as competence (as per the author's definitions) because
| morality is intention, not result.
| draw_down wrote:
| This is really an amazing point, though unfortunately couched in
| the world of startups. Thanks for posting!
| Arete314159 wrote:
| I'd summarize his essay by saying that true service is about
| doing what's best for the situation, regardless of ego, while
| shiny Silicon Valley "service" is about getting paid in praise
| and adulation rather than the usual currency, money.
|
| When you're doing real service it's usually largely thankless,
| anonymous for a good long while, and you work really hard and
| maybe move things 1 millimeter towards the goal. If you're
| traveling around doing photo-ops, then except in rare cases
| (like, you're a celebrity bringing attention to an under-
| recognized problem), you're not doing the right work.
| gnarbarian wrote:
| With the exception of regressive products like drugs and junk
| food, making a profit from a product or service is a good sign
| that you are helping people and improving their lives. Otherwise
| they wouldn't pay you for it. You've hit on the reason why
| capitalism has successfully raised billions out of poverty over
| the last century.
| yrimaxi wrote:
| What about bananas? Referring to banana republics.
| gnarbarian wrote:
| Without competition it's easy to end up in a situation where
| a monopoly can accrue too much power and exploit their
| customers or their workforce. With sufficient economic
| freedom, competition can arise naturally. Unfortunately, when a
| company becomes very powerful, you will start to see behaviors
| like rent-seeking, regulatory capture, and cronyism. These
| artificially raise the barriers to entry for plucky startups.
| This is more a reflection of poor or corrupt governance than
| capitalism though.
| [deleted]
| currymj wrote:
| i tend to think that the approach here described as "incompetent"
| isn't necessarily bad as long as you maintain honesty about it.
|
| for instance, it might be more effective to just donate money but
| there's nothing wrong with wanting to volunteer hands on, as long
| as you don't fool yourself.
|
| of course from a very, very strict utilitarian perspective,
| making a somewhat less effective choice is in fact morally evil.
| but I don't think that viewpoint actually holds up to scrutiny --
| few people would agree that volunteering at a food bank rather
| than donating money is morally equivalent to taking food away
| from hungry children.
| gregw2 wrote:
| The author casts this in moral terms which is useful in grabbing
| attention (moral competence vs moral incompetence!), but I'm not
| sure how those two labels and concepts as he describes them are
| any different than the neither-moral-nor-immoral standard
| management practice of trying to focus oneself and others on
| outcomes rather than effort.
|
| I am not sure "moral" is truly the right word for what the author
| is discussing... morality is not just about the outcome, it's
| about what you are aiming for. If you aim for the wrong thing,
| that is a moral failure, moral incompetence if you will. But that
| is not what the author is talking about. True, it is good to be
| effective and obtain a desired outcome, particularly one that you
| or others think is good, but I'd point out that this "moral
| competence" and what another early poster mentioned, "effective
| altruism", are not that dissimilar from earlier concepts and
| discussions of ... "wisdom".
| protomyth wrote:
| _If you want to do good, you actually have to help people._
|
| Yep. It's a hard thing to look at the actual data coming out of
| something and see your good idea didn't translate into actually
| fixing a problem. Sadly, a lot of people, particularly in
| government, would rather die on that hill than move on.
|
| There is a fine line to walk, though. Just because you didn't fix
| everything doesn't mean you should chuck it all in the trash. Being
| able to
| see the positives is important and will help with refining your
| strategy.
| tunesmith wrote:
| I've thought about this in terms of heroes and helpers, how we
| often have a hero worship problem in society and jobs, and how
| it's usually better to try to be a helper rather than a hero.
| rriepe wrote:
| Markets force you to this conclusion, which is why governments
| are so morally incompetent.
| hectorlorenzo wrote:
| I'm unsure I've understood the premise: does it all boil down to
| "the morally incompetent considers problems as personal
| challenges, while the morally competent considers them as a
| societal responsibility" (which of course it includes
| him/herself)?
| richardwhiuk wrote:
| My reading was:
|
| - the "morally incompetent" views working on a societal problem
| as "good thing"
|
| - the "morally competent" views _solving_ a societal problem as
| "good thing"
|
| i.e. success at solving the problem is important.
| legerdemain wrote:
| If I can read between the lines:
|
| 1. Start a "socially-good mental health product."
|
| 2. Pivot to a completely unrelated, non-"socially good" product.
|
| 3. Write an essay to defend your decision to pivot by describing
| it as the act of a "morally competent person."
| jan_Inkepa wrote:
| I wish he had related it to the product(s) in question, how it
| changed from before to after, and maybe said whether there's 'good'
| being done with the new, more morally-neutral, product.
|
| Reading their documentation of both projects, it's not clear how
| they relate to each other. It went from an app that doesn't seem
| like it'd need multiplayer infrastructure (some kind of CBT
| self-help app) to a library for multiplayer apps? It seems less
| like a pivot and more like a full startover.
| flaque wrote:
| > It seems less like a pivot and more like a full startover.
|
| This is a pretty good description of what we did. The path went
| something like CBT Journal => Regular Journal => Notion Clone
| => Notion Clone with an API => API for "Notion Features" =>
| Real time collaboration as a service, within the course of a
| month or so.
| spIrr wrote:
| This is really insightful - thanks for sharing that path! And
| good for you for not succumbing to the sunk cost fallacy
| through the pivots.
| strofcon wrote:
| I think they're using the "3P" version of "pivot", as in
| "Pause, Pivot, or Persevere".
|
| Pivot definitely seems too soft a word to describe what is,
| generally, exactly what you said - a full startover. At least,
| in the context of a startup.
|
| But then I dunno what other "P" word would work, and heaven
| forbid we break the catchy alliteration... :-)
| peter_l_downs wrote:
| Agreed - my guess is probably making money and donating it, as
| 80k hours recommends.
| a-dub wrote:
| i believe this is not specific to social good, but instead is an
| implicit trait in human nature. an excellent piece was run by the
| wapo recently that examined this very issue within the us cdc in
| the early days of the pandemic. other countries turned around
| testing programs in short order based on the specific testing
| methodology distributed by the who, while the us cdc wasted 41
| precious days trying to improve on it.
|
| where it gets interesting is trying to tease out where it went
| wrong. is it simple vanity? do people feel obligated to live up
| to some standard created by the environment they operate within?
| is good enough ignored in light of exceptionalism? where does
| that come from and how do you (and should you) dispel it?
|
| https://www.washingtonpost.com/investigations/cdc-covid/2020...
| gnusty_gnurc wrote:
| > is good enough ignored in light of exceptionalism?
|
| I think this is the larger issue.
|
| The FDA/CDC standards seem way too high, hindering provably
| useful solutions, and are actively harmful especially in the
| context of a fast-moving crisis.
|
| Watch Lex Fridman interview Michael Mina about cheap rapid
| testing that's not available because of regulatory roadblocks.
| It's an infuriating listen.
|
| But not only is it a very real substantive issue, it's a
| characteristic issue of the academic/expert institutional
| class. An obsession with the theoretical that ignores pragmatic
| reality. These are the experts that suggest a plan that
| _requires_ 100% compliance and works against every human
| instinct. That's intelligent?
|
| It's a group of people divorced from the experience of average
| people. E.g. all these advisors forcing everyone into isolation
| but breaking the rules for themselves and their own family
| (look at Birx, Newsom, etc., the list is horrendously long).
| a-dub wrote:
| i think the ivory tower argument is related but somewhat
| separate. i think in the cdc case, you have scientists of
| status who feel compelled to justify their status and in
| doing so they skipped over simpler solutions. in the OP case,
| you have people chasing status, perhaps to the detriment of
| their stated goal.
|
| i think there may also be a flavor of this in the vaccine
| race. there are tried and proven vaccine technologies but it
| seems most of the attention is going to the more experimental
| (and difficult to deliver!) approach that has potential nobel
| prizes and wall st upside tied to it.
| gipp wrote:
| The author is making a fantastically insightful point that more
| people need to understand, but the whole thing is ruined by the
| strange choice to use a massively loaded and judgmental term like
| "morally incompetent". I know it's cast as self analysis and so
| is supposed to be somewhat self-effacing, but it also means that
| folks who really need to hear it won't.
| musingsole wrote:
| Personally, I love the judgmental tone of 'morally
| incompetent'. It's what the term needs.
|
| Those behaving in morally incompetent systems deserve some
| judgment. They may deserve a pat on the back for their efforts,
| but just because you're pursuing a feel-good dream doesn't mean
| you get to ignore the reality around you. I've seen nonprofits
| ruined by such mindsets. Working on the mission at all is the
| moral component. Being effective, neutral or making things
| worse through naivety is the competency component that needs to
| be weighed in any effort -- regardless of the effort's moral
| imperative.
| piazz wrote:
| Jeez, this strikes me as such an unnecessarily harsh and critical
| self analysis.
|
| The biggest issue I have with this take is that the author's
| distinction between moral "competence" and "incompetence", from
| what I've read, doesn't really have anything to do with a moral
| or ethical system. It seems like his morally incompetent have
| admirable moral intentions (help depressed people with CBT), but
| suffer basically from implementation failures. They don't know
| how to help effectively, and are taking all of the wrong lessons
| from working in tech (searching for direct & measurable impact,
| searching for technological solutions etc) and could probably be
| gently taught how to help more effectively. Dividing the world of
| people who want to help into binary 'competent' and 'incompetent'
| groups will just serve to discourage people who want to help but
| don't know how.
|
| I also don't feel like I have a sense of what a 'morally
| competent person' actually does, or how they design strategies to
| help effectively. The article feels like a lot of unfortunate
| self-flagellation for being 'morally incompetent' and not as much
| constructive dialog about how to actually help better.
|
| Edit: To OP - maybe there is a perspective where you don't have
| to be so down on yourself here? Maybe this idea 'failed' from a
| cashflow & user acquisition perspective, but if it helped all of
| the people who downloaded and used the app, didn't you do good in
| the world? Maybe you didn't do good at billion dollar scale, but
| does that matter if you improved people's lives?
| flaque wrote:
| This really wasn't supposed to come across as harsh, and I'm
| really not down on myself, though thank you for the concern! I
| think the context of the word "incompetent" is what's doing it
| here. Maybe "effective" and "ineffective" would be better.
| erosenbe0 wrote:
| "So in order to continue Quirk, Quirk needed to make people
| feel worse for longer."
|
| You really ought to say something else. I know you don't mean
| it, but some mental health professionals read that a little
| sideways. I get it -- the folks who have a need for sustained
| engagement with therapy were difficult to reach or required
| more intensive efforts or something. So why not say that?
| flaque wrote:
| Hmm I agree, this isn't really what I meant. I was
| concerned we had a business model that treated success
| stories as failures (folks feeling better unsubscribing)
| and failures as successes. While we had some early success
| with folks, the overall direction we were going was getting
| worse, not better. It felt naive to assume that the forces
| of the business model wouldn't drive future us (or
| potentially people we hired) towards doing more bad than
| good, even though that wasn't what we wanted to do then.
| And so while we still had absolute control of the company
| and weren't blinded by a survival instinct, we pivoted.
| gopalv wrote:
| > The article feels like a lot of unfortunate self-flagellation
| for being 'morally incompetent'
|
| That's an acceptable thing to do for catharsis, but not so much
| fun when your diary entry lands on hacker news.
|
| The problem is that the person reading the blog (like me)
| reads it in their own voice, and the division between "working to
| cure cancer" vs "curing cancer" seems to be one of consequence
| not intention or effort (merely "wanting to" is weird).
|
| Because being congratulated for trying-so-hard sucks, when you
| know you are failing instead. So much harder on your ego to
| tell people "I can't do this", while they're patting your back
| for almost doing it. Or maybe, I read it like that because as a
| parent of a newborn + working from home, I'm going through this
| somewhat.
|
| I read the "morally incompetent" as just short hand for "not
| being honest about current state of mind".
|
| But y'know what, that sort of honesty can kill things because
| everyone who succeeded knows the dark times where everything
| seems hopeless until something out of your control drops in to
| give you a push forward. To broadcast "I don't have it in me"
| usually prevents such serendipity.
| mlthoughts2018 wrote:
| > " I read the "morally incompetent" as just short hand for
| "not being honest about current state of mind"."
|
| That's a great way to put it. I think "incompetent" is the
| correct term because, aside from not being honest with oneself
| on these "do good" issues, people can also just be very
| ignorant about it.
|
| Being ignorant about the actual, measurable, demonstrable
| humanitarian side-effect of your beliefs and actions is
| rightfully seen as moral incompetence and should be harshly
| criticized.
|
| I like a quote from Robin Hanson about this (paraphrased):
|
| > "I wish people felt more of a social obligation to believe
| accurately and less of a social entitlement to believe
| whatever they want."
| lallysingh wrote:
| Nope, have a look at the NGO space.
|
| Many, many in this space care more about developing careers,
| brands, technologies, etc than helping people. Helping people
| is a side effect that's published as a purpose.
|
| It's not really a fault. It's actually quite like open source.
| Most open source contributions are ultimately for selfish-but-
| harmless reasons, and that's completely ok.
| free_rms wrote:
| > doesn't really have anything to do with a moral or ethical
| system
|
| Philosophical ethics typically divide into utility-based, rule-
| based and virtue-based ethics. OP is clearly lamenting some
| utility-based failures and claiming that virtue without any
| utility isn't a whole lot of good (but also no harm, so hey).
| eeZah7Ux wrote:
| > unnecessarily harsh and critical self analysis
|
| Where? The article starts with "we pivoted our YC startup" and
| then claims that some behaviors are "morally incompetent"...
|
| ...to justify the pivot.
| stakkur wrote:
| A bit harsh, but the explanations of the 'morally incompetent'
| remind me of a great article I read a few years ago, titled _The
| Reductive Seduction of Other People's Problems_:
| https://brightthemag.com/the-reductive-seduction-of-other-pe...
| nerdponx wrote:
| A related concept/tool is Theory of Change:
| https://en.m.wikipedia.org/wiki/Theory_of_change
|
| An organization that intends to "help people" or "make a
| difference" needs to specify how it plans to do so, how it
| defines change in the first place, and how it plans to measure
| change.
| tylermenezes wrote:
| > The signature move of the morally incompetent is to be told
| about existing solutions that they were previously unaware of and
| then soldier on without any critical examination of any added
| value they're providing. Others working on the problem are
| ignored entirely or seen as a threat to their own solution.
|
| Really hit the nail on the head for the non-profit industry (of
| which I am a part). A lot of non-profit leaders are totally
| insufferable because they take anything less than fawning over
| them to be an attack on their identity as savior.
|
| You get some of these people everywhere, of course, but there's a
| way higher concentration in non-profits.
| jzemeocala wrote:
| If you do something right, no one can tell if you've done
| anything at all -
|
| Futurama
| inakarmacoma wrote:
| If you do things right, people won't be sure you've done
| anything at all.
| xyzelement wrote:
| Competence and practicality are severely underrated among many
| people who believe they are doing good. This belief seems to
| often short circuit critical thinking and the person ends up
| diminishing their impact or having an adverse effect.
|
| Examples: Habitat for Humanity - for the cost of sending one
| incompetent American to build a house in the third world, an army
| of local builders can be hired. If you really want to make
| impact, donate rather than going in person.
|
| Talented and capable people who end up working low impact non
| profit jobs at low pay. If you had a real job and donated half
| your income to the cause, both you and the cause would be better
| off.
|
| Stupid advocacy. People who get excited about feel-good slogans
| like "cancel rent" without considering the impact on the medium and
| long term supply of housing, and thus end up amplifying the
| problem they believe they are fixing.
|
| These are just a few examples where not doing the seemingly
| good/moral thing can be much better than doing it.
| watwut wrote:
| > Talented and capable people who end up working low impact non
| profit jobs at low pay. If you had a real job
|
| These are real jobs. As real as any other job. It is absurd to
| consider them not real. It is also not like people are exactly
| lining up for those positions.
|
| And third, the people who make that choice do it because this
| is what they want to do. They might or might not be successful
| or happy in a corporation. If only stupid people worked in those
| positions, those non-profits would never be a good place to
| send money to.
| jvanderbot wrote:
| This is very true. I just want to point out one under-
| appreciated side effect of hands-on altruism: it does often
| change the person for the better.
|
| It's often better to give money, and there's no better way to
| see how little you can do with your hands vs how much you can
| do with a dollar than to go and see it yourself. Even raising
| awareness about the problems with a visceral experience of
| being there can make a person more prone to donate money,
| advocate for causes, and be contemplative in their policy
| choices.
|
| But, money is not a sure-fire solution. Even when donating
| money, there is a cognitive barrier that must be overcome. In
| major philanthropic work, a huge problem is matching the
| donor's expectations of targeted "This person saved 1000
| people" investments vs a "general fund" investment to, say, a
| global vaccine initiative. There's no traceability when you
| dump your dollars into a pool to buy a billion vaccines, but
| there is enormous positive impact. Contrast that with, say,
| building a well in a village (something that is usually poorly
| done, poorly maintained, and not very helpful compared with a
| municipal water project or sanitation education program). But
| the well has immediate impact that can be photographed and they
| probably got their name etched into it.
|
| These aren't made-up examples; I've heard these stories from
| policy / fundraising people. They just shake their heads and agree
| that the world is slightly better, even if it could have been
| even more slightly better.
| KineticLensman wrote:
| > Talented and capable people who end up working low impact non
| profit jobs at low pay
|
| I currently volunteer at a raptor conservation charity. For no
| pay at all. They cannot afford to pay anyone but their trained
| bird team and a small office staff. Volunteers like myself are
| a force multiplier for the trained permanent staff. By doing
| mundane jobs for them, we free them to apply their expertise
| where it matters. Pre-COVID, these mundane tasks included basic
| animal husbandry, grounds work and infrastructure maintenance.
| Good skill fits for these roles include carpentry, building,
| 'roady' type knowledge (how to build and wire-up outdoor
| structures) and many others. Post-COVID, new tasks include
| cleaning, sanitization and crowd management (facilitating
| social distancing).
|
| > If you had a real job and donated half your income to the
| cause, both you and the cause would be better off.
|
| I used to provide financial support to them before I retired.
| Now I only work intermittently, and I'm using my income to fund
| my own time working for them, for free.
|
| This is genuinely the best way I can help them. Especially now
| that their income has been decimated by COVID lockdowns.
|
| [Edit] The flaw in the parent's line of reasoning is assuming
| that all a non-profit organisation needs is money. Sometimes
| they need actual stuff doing (such as building a new aviary)
| and volunteers let them reduce or even eliminate the not-
| insignificant labour and contracting costs.
| carbonguy wrote:
| I think what you're describing is "moral competence" as the
| article frames it. That is to say: your efforts are not
| directed to the end of "how can I _be seen as working_ to
| support raptor conservation " but rather "how can I support
| raptor conservation?"
|
| Or, in other words: how can you genuinely further the cause,
| rather than simply seem to?
| CPLX wrote:
| > If you had a real job and donated half your income to the
| cause, both you and the cause would be better off.
|
| This argument takes as an unstated axiom that the correct way
| to organize society is by having everyone in a role that
| strictly maximizes that system's assessment of their economic
| output. It then concludes that the correct way to effect change
| is to embrace that system and after the results are in,
| redirect those economic outputs to desired goals.
|
| As the reader may have noticed in examining that unstated
| premise, this presents quite a dilemma when it becomes clear
| that this way of organizing society is in fact itself the main
| cause of the human misery you're trying to eliminate.
| xyzelement wrote:
| Just so I am clear, you propose "complete reorganization of
| society" as the practical approach for saving the puppies?
|
| The puppies are gonna die, CPLX.
| [deleted]
| samvher wrote:
| This is such an underrated point. A lot of the lucrative
| "real" jobs are quite destructive and counter to the problems
| that many charities and NGOs try to solve. Also, if everyone
| thinks this way it's going to be very hard for these
| charities and NGOs to find the skills they need at rates that
| they can manage to fund. A skilled worker can potentially
| generate a lot of value that might not directly flow back to
| the organization they work for but to other parts of society
| - that means the organization has no business case for hiring
| them (the economics don't exactly work out) but in the bigger
| picture it can still be a net benefit.
| pdonis wrote:
| _> This argument takes as an unstated axiom that the correct
| way to organize society is by having everyone in a role that
| strictly maximizes that system's assessment of their
| economic output._
|
| No, that's not the axiom. The axiom is that scarce resources
| should be allocated to their most productive uses. If the
| scarce resource of your time and energy can be more
| productive at building houses if you have a real job and
| donate a portion of your income to house building, than if
| you built houses directly, then that's how that scarce
| resource should be allocated.
|
| Note that it is _you_ doing the resource allocation here, not
| "the system". It is true that "the system" is very
| inefficient, but what it's inefficient at is _providing
| people with opportunities to choose from to be more
| productive with the scarce resource of their time and
| energy_. The way to make that more efficient is _more_ free
| markets and _less_ government regulation; but our society
| tends to be the other way around.
| eevilspock wrote:
| > this way of organizing society is in fact itself the main
| cause of the human misery you're trying to eliminate.
|
| looks like this went completely over your head
| pdonis wrote:
| _> looks like this went completely over your head_
|
| The fact that I disagree with the statement you quoted
| doesn't mean I didn't read it.
| mdorazio wrote:
| See "Effective Altruism" as a topic and the book "Doing Good
| Better" as a more in-depth exploration of what's being said
| here [1]. It's really interesting to see how psychology and
| outcomes are interrelated.
|
| [1] https://www.effectivealtruism.org/doing-good-better/
| jsmith99 wrote:
| The perfect is the enemy of the good. Many of these charities
| are technically irrational but realistically people are likely
| not to get involved at all otherwise. For example, many people
| donate food to collection bins for food banks, despite the food
| banks being able to buy food wholesale for a tiny fraction of
| the price. Being aware of this, I smugly refrain from donating
| overpriced retail food, but I haven't yet gotten round to
| actually sending them some cash instead, and I suppose not
| many others have either.
| thimkerbell wrote:
| What % of the calories that food banks buy are junk food?
| nmca wrote:
| I think this analysis is a bit oversimplified. If your goal is
| to build houses, then local labour is clearly better. But if
| it's to recruit lifetime donors, then perhaps shipping out
| incompetent (but fabulously wealthy) Americans is the way
| forward.
|
| Similarly for advocacy, our model should perhaps be that
| seemingly extreme proposals serve to stretch the Overton window
| before the middle-ground wins out.
|
| Naively disregarding approaches that don't myopically optimise
| for first-order impact seems likely to have harmful
| repercussions.
|
| That said, lots of donations are deeply ineffective, and if you
| want to prioritize I'd suggest the excellent guides from
| https://www.givewell.org/ , who reach many of the same
| conclusions as you do, after more detailed reasoning.
| pdonis wrote:
| _> If your goal is to build houses_
|
| Which is the goal of Habitat for Humanity, correct?
|
| _> if it's to recruit lifetime donors_
|
| Which is _not_ the goal of Habitat for Humanity, right? It's
| only a means to an end. The end is building homes for people.
| If that end can be accomplished by a means more efficient
| than "recruit lifetime donors", shouldn't it be done that
| way?
| protomyth wrote:
| _Which is not the goal of Habitat for Humanity, right?_
|
| To build more houses they do really need to recruit the
| lifetime donors. The whole point of the donors doing some of
| of the work is to get the donations in the first place. It
| generally is pitched as a team building exercise.
| crispyambulance wrote:
| > [building houses] is the goal of Habitat for Humanity,
| correct?
|
| I think that's PART of their goal. They're a Christian
| organization. "Service" is also a part of their mission,
| which means having people physically work for/with the needy
| (and not just give money).
|
| Moreover, much of their work is done domestically where
| construction labor is NOT cheap. Yes, some things are going
| to be shoddy, but it's really hard to get a house built and
| renovated for low cost in the USA. There are challenges not
| only in finding labor but also general contractors and
| materials.
|
| Could all the "building of houses" be done more efficiently
| if Habitat for Humanity was all just finance and management
| operations? Perhaps, but the service aspect would be
| missing.
| pdonis wrote:
| _> "Service" is also a part of their mission_
|
| The point of the article under discussion is that
| "service", if it just means "doing stuff yourself"
| without regard to how productive you actually are--how
| much you _actually_ help--is a morally incompetent goal.
| If you have a feeling of service after _productively_
| helping someone, that's great. But if you have a feeling
| of service while the actual impact of what you did was
| negligible (or even negative), that's bad. So "service"
| can't be a goal just by itself; it has to be something
| like "service that actually does help".
| PeterisP wrote:
| "service" definitely _can_ be a goal just by itself -
| perhaps you can argue that it _shouldn't_ be a goal by
| itself, but I think that it's undeniable that in certain
| cases it is a truthful, accurate description of a goal
| that some people have.
|
| And IMHO the "should" discussion is even not that
| relevant there because it does not change the values that
| people actually have. As people say regarding decision
| theory, "the utility function is not up for grabs";
| people's goals are what they are, and if it turns out
| that someone's goal actually is more about personal
| service than good results, well, you can't force them to
| change, so let's just help them get good results
| while still achieving their core goal, to get a win-win
| outcome.
|
| It's plausible that their goals might also be achieved
| just as well without actually productively helping
| someone - so you might want to nudge them away from that
| and towards more productive means; but these nudges won't
| be effective without taking into account what their
| actual goals are and what types of "service that actually
| does help" are not valid options because they don't
| satisfy the goals of the potential helper.
| awillen wrote:
| But if sending the incompetent American gets you a lifetime
| donor who pays for many houses to be built (assuming he
| would not have without the feeling of personal connection
| he got from his experience incompetently homebuilding),
| then isn't that the right thing to do if your goal is to
| build as many houses as possible?
|
| You mentioned efficiency, but the goal was never efficiency
| - it's to build houses. Would it be better if people were
| entirely rational and just gave money instead of
| incompetently homebuilding and then also became lifetime
| donors? Of course. That's not reality, though.
| pdonis wrote:
| _> if sending the incompetent American gets you a
| lifetime donor who pays for many houses to be built
| (assuming he would not have without the feeling of
| personal connection he got from his experience
| incompetently homebuilding)_
|
| _If_ that is the case, yes. Basically you are saying
| that, because of a quirk in human psychology, the only
| way to get people to donate, over the long term, a
| portion of their income, derived from them doing jobs
| they are much more productive at than building houses, to
| building houses, is to engage them by having them build a
| house themselves first.
|
| The question then becomes, does this actually happen? Do
| people who volunteer to build homes end up becoming
| lifetime donors? Or are those two _separate_ sets of
| people?
| macintux wrote:
| Speaking from personal experience, getting involved with
| volunteer work ensures I always consider those
| organizations when donating.
|
| Certainly I volunteered because I had strong feelings
| towards those needs anyway, but those connections make a
| big difference.
| fhrow4484 wrote:
| > Basically you are saying that, because of a quirk in
| human psychology, the only way to get people to donate,
| over the long term, a portion of their income, derived
| from them doing jobs they are much more productive at
| than building houses, to building houses, is to engage
| them by having them build a house themselves first.
|
| Having people engage in a process isn't just for personal
| fulfillment of "morally incompetent" people: it also
| serves as due diligence.
|
| Trust is a big factor in deciding where you want to
| donate your money. Having walked through the process as a
| volunteer, you have a much clearer picture of how your
| donations are spent, and yes you are more likely to
| donate.
|
| Trust has to be earned one way or another.
|
| Piggybacking on this home building example with an
| extreme example: how would you feel if you donated for
| years to some non profit which builds houses, but later
| learned that child labor and cheap, structurally unsound
| materials were used to build them, with the non
| profit CEO pocketing millions?
| yters wrote:
| I used to donate a lot, but have lost trust when I've
| seen reports on how much money is wasted by groups. Now I
| tend to donate directly to someone or group that I
| personally know and/or have worked with.
| pdonis wrote:
| _> it also serves as due diligence_
|
| Yes, this is a good point. But note that it's a
| _different_ point from the one that I was responding to.
| If the purpose of having volunteers build the houses
| before becoming lifetime donors is due diligence--them
| gaining trust in the process--then it doesn't matter how
| efficient they are at building the houses, because the
| tradeoff is not them building houses vs. whatever more
| productive work they could have been doing with that time
| and energy. Nor is it about having to overcome any
| irrational quirks of human psychology, giving the donors
| "personal connection", etc. The tradeoff is the donors
| having trust in the process vs. not--i.e., them gaining
| the information they need to _rationally_ believe that
| their donations will accomplish the end goal, vs. not. In
| short, it's a perfectly _rational_ investment of time
| and energy for the donors, even if they are terrible
| house builders.
| thaeli wrote:
| Donor engagement is a primary driver of donor loyalty and
| retention. This is pretty fundamental stuff in the
| nonprofit world. If you can build a deep personal and
| emotional connection, such as "helping build a house with
| their own hands", that is a big boost to average lifetime
| donor value. (Nonprofit donor development can run on
| decades-long timeframes - engage a high schooler and
| you're more likely to be able to solicit funds from them
| in their 50s.)
|
| On a smaller level, this is why many charities will get
| an initial $5 donation and then spend more than $5 on
| further solicitation of the same donor. The best
| indicator that you're going to give money to a charity
| (or political campaign) is that you've already done so.
| Lots of people feel this doesn't apply to them
| personally, but data-driven fundraising bears the point
| out.
| throwaway201103 wrote:
| > The best indicator that you're going to give money to a
| charity (or political campaign) is that you've already
| done so.
|
| This ignores why you gave the money though. For example, I
| donated to a particular children's hospital because it
| was the dying wish of a friend that donations be made to
| that specific charity instead of sending flowers to the
| funeral. I don't care about or have a connection to that
| charity in any other way and I'm not going to donate to
| them again but they hound me incessantly for more money.
| mjburgess wrote:
| > seemingly extreme proposals serve to stretch the Overton
| window before the middle-ground wins out
|
| I've not seen any evidence that this is true, nor any
| evidence for "Overton"-thinking as such.
|
| OP's comment is precisely that backreactions to this kind of
| thinking are often larger than the alleged shifts that are
| assumed to occur.
|
| NB. The "overton window" is a _description_ of a consensus,
| not a model; i.e., there is nothing _of_ the consensus which
| is like a dial to be moved.
|
| The reason that people view X as possible is the latent
| preferences around X, incentives and institutional practices,
| and their (essentially rational) judgement about the
| practicalities and possibilities of X.
|
| E.g., Marxism isn't "outside the overton window" by some weird
| dialogue exclusion. Ironically, it is only in the
| conversation because of "overton thinking" ie., hype.
|
| It is outside the window because people's preferences,
| incentives, and institutional practices are _overwhelmingly_
| oriented away from Marxism; which has no practical or
| evidentiary basis. No social mechanism of implementation. No
| group of people has the incentives, _reasons_, and power to
| implement Marxism.
|
| Activism cannot "move the overton window". It provides no
| reasons, incentives, or power. It changes no institutional
| practices; it offers no evidence.
|
| "Hype activism" frequently does more harm than good, by
| reinforcing the status quo.
| dragonwriter wrote:
| > Activism cannot "move the overton window".
|
| The "Overton Window" is a concept developed initially and
| further elaborated subsequently by a policy think tank,
| largely as a tool for activists (specifically, that think
| tank itself) to understand how they can move the social
| consensus to drive policy.
|
| https://www.mackinac.org/OvertonWindow
| mjburgess wrote:
| Yes, and it's a false model of "policy change".
|
| Your link even, largely, makes this point. To quote:
|
| > Sometimes politicians can move the Overton Window
| > themselves by courageously endorsing a policy lying
| > outside the window, but this is rare. More often, the
| > window moves based on a much more complex and dynamic
| > phenomenon, one that is not easily controlled from on
| > high: the slow evolution of societal values and norms.
|
| It's a description of why politicians will advocate for
| some policies; it's not a theory of change. (Such a
| theory would be unevidenced and obviously false).
|
| The quote says "rare", but i'd say, "essentially never".
| The "slow evolution" is also not really of norms. The
| public do not "Accept" ideas within some "Window" that
| you move by "Advocacy".
|
| Almost all "beliefs" are grounded in technological,
| incentive, preference and practice structures. They are
| symptoms of much larger mechanistic forces that regulate
| behaviour (and belief, etc.).
|
| The significant majority of "belief change" is caused by
| technological and economic change.
| dragonwriter wrote:
| If you read more carefully, you will recognize that it's
| saying it's rare _for elected politicians_ because of the
| short-term constraints of needing to win elections, and
| that change in the Window is driven by actors who aren't
| concerned with losing their job if they advocate outside
| of it.
|
| > It's a description of why politicians will advocate for
| some policies; it's not a theory of change.
|
| It's both.
|
| > (Such a theory would be unevidenced and obviously
| false).
|
| This seems to be based on your own ("unevidenced and
| obviously false", IMO) naive rational choice theory of
| change. Yes, broad forces are responsible for what would
| be the long-term equilibrium if conditions were static.
| Conditions change fast enough that the conditions which
| set the long-term equilibrium don't determine the actual
| Overton Window, or even consistently the short-term
| direction of change, though they are part of the context
| which influences change.
| yters wrote:
| Overton window is pretty effective in some ways. Just
| look at how much of what is taken for granted nowadays as
| the moral high ground was considered grossly immoral and
| unthinkable just a generation ago, e.g. gay marriage.
| This was largely achieved intentionally through media and
| other forms of social programming. Seems like a win for
| the Overton window to me.
| mjburgess wrote:
| Attitudes towards same-sex coupling have much to do with
| birth rate, health and population shocks.
|
| Much of the regulation of sex comes down to population &
| birth rate fears (this is the history of abortion in
| America: initially only discouraged because immigrants
| were outbreeding white people).
|
| The religious origin of anti-same-sex prohibitions is
| _just_ a part of techniques to keep birthrates up (since,
| historically, 20% of children died before 10).
|
| With massive death rates up until the 1940s, it is only in
| the very modern era that we have the pill & tiny
| childhood mortality rates.
|
| In this environment our sensitivity to "reporductive
| freedom" has decreased dramatically, drastically tapering
| off these prohibitions.
|
| If we were in a pre-40s world of high infant mortality,
| etc., _MANY_ would regard homosexuality as immoral,
| precisely because it would seem to limit the size of the
| future generation (seen, here, as a "misuse of the body
| to serve the needs of pleasure vs. society").
|
| Activists haven't "moved the window", they are precisely
| a symptom of forces they are profoundly ignorant of;
| here, largely technological. Once these have taken hold,
| activists have only to prod at the now meaningless
| tradition.
| yters wrote:
| You don't consider the pill part of the Overton window?
| It was introduced largely due to an ulterior agenda of
| population control and eugenics, and seems to have
| furthered that goal effectively.
| dabbledash wrote:
| "Similarly for advocacy, our model should perhaps be that
| seemingly extreme proposals serve to stretch the Overton
| window before the middle-ground wins out."
|
| This seems like an argument in favor of openly advocating
| positions that are good but too far from the current
| consensus to be adopted. Dragging the Overton window toward
| stupid policies or bad behavior is a different issue.
| wisty wrote:
| There does need to be a middle ground.
|
| But there are definitely dangers when charities try to
| maximise revenue:
|
| 1. They are (to some extent) competing for donors. If it's
| zero sum, then more marketing spending means the least
| efficient charities "win".
|
| 2. It might even be negative sum, as donors turn away from
| charity in general.
|
| 3. A focus on revenue changes the way charities operate, and
| might change the type of people who work there.
|
| I think there's a sweet spot to engagement. You don't have to
| send the donor out there, but sending them photos does help.
| They get the personal connection and the feeling that their
| money was well spent without needing to waste a lot of money.
| Obviously it's not 100% efficient, but it brings a lot to the
| table - a certain level of accountability, engagement, and
| not too much inefficiency.
| yowlingcat wrote:
| A colleague in my extended network had a pithy way of
| describing this: "Many distribution problems look like moral
| problems from far away." Not saying I agree with it completely,
| but it did change how I started looking at framings of societal
| problems.
| nicbou wrote:
| I often think of this SMBC comic [1]. If Superman wanted to
| maximise good, he could just turn a crank really fast.
|
| However, maximizing utility is not the only goal. People want
| to feel good about their actions, and dumping money in a black
| hole just isn't that satisfying.
|
| [1] https://www.smbc-comics.com/comic/2011-07-13
| yboris wrote:
| "dumping money in a black hole" is a very misleading word
| choice. The analogy with a black hole suggests that money
| disappears without doing any good.
|
| Donating to cost-effective charities directly and
| significantly improves the quality of life for numerous
| individuals. Just because you don't "solve the whole problem"
| with your donations doesn't mean it shouldn't be done. Just
| like you don't choose to stop eating because eating a single
| meal won't satisfy your hunger for the next 30 years.
| nicbou wrote:
| It's a deliberate choice of words. Yes, your money is going
| towards a good cause, but if you accidentally sent the
| money to the wrong place, you couldn't possibly tell. Your
| donation is never mapped to a specific result. It's just
| added to a much larger pool. You will never get a message
| saying "your donation paid Paul's first month of rent" or
| something to that effect.
|
| By comparison, volunteering at a soup kitchen feels _real_,
| regardless of effectiveness. You see the people you help,
| and I suspect that's why people prefer being charitable in
| less efficient ways.
| neilparikh wrote:
| This isn't true. There are charities that do exactly what
| you claim they don't do. For example,
| https://www.againstmalaria.com/Default.aspx maps each
| donation to a specific bed net distribution, then gives
| you a status page (that you can share with others!),
| which tells you exactly what stage your distribution is
| at.
|
| For example, I can see that one of my donations is
| currently in the manufacturing stage, one is on a boat
| travelling to the country, and one is being distributed
| to households right now.
|
| After the distribution is complete, they'll post pictures
| from that distribution, as well as conduct follow up
| surveys in a few months, to ensure effective use. All
| this information is shared with the donor directly.
| yboris wrote:
| Evolution fucked us again. As a result, we feel drawn
| towards performing acts where we can be seen as doing
| good, not towards acts that do good. Being in a soup
| kitchen gives you witnesses, writing a check does not
| (especially with our shitty culture that often
| discourages people from talking about their philanthropic
| deeds).
| yboris wrote:
| There are charities with extreme transparency. For
| example, all the money sent to the highly-rated (by
| GiveWell) charity the Against Malaria Foundation (AMF)
| gets tied to specific distributions (specific villages).
| You get to see the photos of your money at work and
| updates about the quality of the malaria-protecting
| bednets years after they have been distributed.
|
| [0] https://www.givewell.org/charities/amf
| thimkerbell wrote:
| If anyone has not seen this SMBC comic...really, you must.
| JoshTko wrote:
| Travelling to Haiti to build homes may, on the surface, seem
| like a very inefficient way to build homes; however, perhaps the
| experience helps them truly understand the local needs and come
| up with a much more efficient solution. At the very least they
| could become a lifetime advocate that donates to more efficient
| programs.
| erosenbe0 wrote:
| You are entirely correct that this is helpful. But you don't
| need to go to Haiti.
|
| 3000 W. Madison in Chicago is two miles from the nation's
| second most important financial district and it is still burned
| out exactly as it was three days after Dr. King was
| assassinated.
|
| OP's use of the term incompetence is correct. It certainly
| rises to that level when kids in Cambridge, MA and their
| senator think 50k student loan forgiveness is somehow
| progressive and moral. I mean compare the per capita benefit in
| Cambridge to that block of Chicago or to rural WV, MS, or some
| Reservations.
| curiousllama wrote:
| I love the idea, but competence isn't the right word here.
| Competence doesn't have an intrinsic ethical dimension. It's not
| unethical to sincerely try and fail, while the author does
| ascribe a pretty clear ethical dimension to moral incompetence.
|
| I think humility is the better idea here. The examples of moral
| incompetence all center around egoism. Moral competence seems to
| follow from removing oneself and focusing on outcomes over
| personal validation. It's a call for helpers to serve and be
| humble.
|
| I've had this thought before many times. Having gone down this
| path a bit, I would caution the author: deciding "I'm going to
| solve a big problem" is at once incredibly important (to get to
| the solution) and deeply arrogant (what, _you're_ gonna solve
| poverty?). Don't let the necessity for arrogance get in the way
| of your willingness to do good.
| dgb23 wrote:
| > Don't let the necessity for arrogance get in the way of your
| willingness to do good.
|
| I like this! Though there are other ways to describe the state
| of mind necessary. For example "crazy" and "naive" are used
| this way often.
|
| Typically, to advance in drastic ways, one has to break out of
| the mold, think outside of the box, dream big... Essentially
| act abnormally in some way.
|
| Most of these attempts fail; the few that succeed are worth it
| all, because without outliers things can't change and
| ultimately progress.
| scrozier wrote:
| I think you're on to something, but "humility" doesn't seem to
| quite get at it. Maybe "focus"? It seems that the author is
| saying that to be moral and morally effective, one needs to be
| ends-focused, as opposed to means-focused. IOW, if a morally
| focused person hears of a way to provide the same results that
| they are, at half the cost, they should rush to implement that
| method, or join that effort. A means-focused person might tend
| to recruit more people or money to _their_ cause, seeing the
| better method as competition.
| flaque wrote:
| Focus is exactly the word and tone I was looking for.
| imgabe wrote:
| > It's not unethical to sincerely try and fail, while the
| author does ascribe a pretty clear ethical dimension to moral
| incompetence.
|
| It doesn't matter if it's ethical or not if it doesn't work.
| eeZah7Ux wrote:
| > Last year, we pivoted our YC startup from a socially-good
| mental health product (Quirk), towards a socially-neutral
| software infrastructure product
|
| > It's more important [...] to improve mental health than to be
| working on improving mental health.
|
| How is this pivot effective? How is it improving mental health?
| yters wrote:
| One reason may be that there is sort of an existential question
| hanging at the edge of moral actions, namely does anything really
| matter? Moral action is the epitome of the belief that at least
| something matters, and matters a whole lot. And perhaps it is
| precisely through moral action that we get an experiential
| understanding of moral value. So, if through 'moral competence'
| we disengage from moral action, then we may lose the very reason
| we seek to make a moral difference in the first place.
| anonyfox wrote:
| Isn't this just the typical oscillation from virtue ethics
| (mindset is most important) to consequentialism (results are most
| important) and sometimes deontological ethics (acting itself is
| most important)? This seems to swing like a pendulum into the
| "mainstream" every few decades. Depending on how/where/when you
| grew up, one of these is your "moral compass" (which leads to
| preferential world views like individualism, utilitarianism, ...).
| This "moral compass" often changes over time, and your own kids
| will probably have a different starting point.
|
| ,,Moral" is just an ordered set of values that feels ,,obvious"
| or ,,innate" to you, but other people (especially from different
| ethical axioms, see the three above) have other ordered sets.
|
| All of them have their downsides and have been in vogue since
| Aristotle; still, societies (or political parties) have basically
| argued over the exact same stuff for millennia without any progress
| whatsoever.
|
| Why the long intro? I basically consider the concept of, or even
| the discussion about, "morals", especially "which would be better",
| not only pointless but actually harmful.
|
| (not a perfect analogy: asking "what was before the (original)
| big bang" makes no sense assuming our concept of time started
| with the big bang. But this question probably killed/tortured
| fewer people in history than confrontations resulting from
| conflicting sets of moral values).
|
| Escape hatch from here? Nietzsche actually saw that coming; don't
| just stay at nihilism/absurdism but maybe read "Beyond Good and
| Evil"... :)
| russnewcomer wrote:
| I have been thinking about this basic idea a bit, trying to
| reason about how to design durable institutions that can function
| as crisis mitigation organizations while also working on problems
| in a community, or put in terms of the article, how to structure
| an institution that is morally competent.
|
| One problem with this is that many institutions that have a morally
| competent start soon struggle with many of the problems that
| plague institutions: things like nepotism, political players
| attempting to co-opt the organization for their own good,
| institutional survival over problem solving, and the like. While
| these problems can be mitigated by strong boards of directors
| who adhere to a larger mission, I think the better idea is to
| have many small organizations that promote problem solving while
| being part of a larger network that has the goal of solving the
| problem, and to hope that the morally competent people can be
| spread about the network enough to encourage solving the problem,
| instead of merely working on the problem.
| DoreenMichele wrote:
| I like this a lot more than I expected to.
|
| A lot of social ills are rooted in "The world doesn't work." And
| then we try to find the right feel-good pill or the right talk
| therapy when what we really need is a jobs program or a bridge or a
| store that sells a thing that works. And if you come up with a
| real solution, no one will connect the dots and say "Homelessness
| is down because someone built a better mousetrap." Instead they
| will turn their baleful eye to the latest earthquake or the
| latest revolt or the latest political drama and continue to
| complain that the world is broken.
|
| Doing things super well is hard. Heroics make headlines and
| headlines have something of a tendency to actively interfere with
| problem solving. Problem solving tends to be done quietly at your
| desk, in your lab, in some back room and people who love good
| press tend to be better at playing to the crowd than at actually
| solving anything.
|
| If you want to make the world a better place, make a business
| that solves a real problem and be decent in how you deal with
| people. Don't be a hero. Don't focus so much on the social and
| emotional stuff. Build a better widget instead and build it with
| an awareness of the social and emotional stuff and a sensitivity
| to the current state of the world, which is always high drama and
| lots of pain points.
|
| Doing anything well is really hard. There are lots of ways to do
| things badly while getting lauded for it in certain circles.
| aerovistae wrote:
| Fantastic essay. Articulates a thought I've long (in some oblique
| sense) had but never found the words for.
|
| I hope this term enters the mainstream, because I find the mere
| fact of there being a term for something helps to anchor the
| concept in society's mind and helps people be mindful of it.
| Because we have a term for the Dunning-Kruger effect, we know to
| be aware of it, etc.
|
| Of course only a small slice of society (the sort that likes
| reading and digesting concepts) ever learns these terms, but that
| slice of society is very high-impact and arguably the most
| important slice for where such concepts need to take hold.
| yrimaxi wrote:
| > Because we have a term for the Dunning-Kruger effect, we know
| to be aware of it, etc.
|
| Or rather, we are more acutely aware of how _everyone else_
| might suffer from it.
| yboris wrote:
| If helping people is something you're interested in, please read
| about _Effective Altruism_ and consider joining in!
|
| [0] https://www.effectivealtruism.org/
| yrimaxi wrote:
| No wonder Singer didn't end up liking Marx. EA is the most
| alienating code of ethics that I've seen.
| yboris wrote:
| What's alienating about EA for you?
| rossnordby wrote:
| I'm not yrimaxi, and I'm very much in the EA camp, but I
| suspect a lot of people bounce off the implication that
| there isn't really such a thing as supererogation.
|
| A lot of EA messaging tries to work around this by trying
| to avoid guilt trips or _demanding_ that you do everything
| you possibly can, since asking too much is a good way to
| end up with nothing. See Giving What We Can and similar.
|
| But a lot of the justifications for this kind of approach
| can very easily imply a much higher bar than anyone can
| meet. Taking global health as an example: millions of
| preventable deaths a year, and if you work yourself to the
| bone for years, you can only hope to _partially_ mitigate
| it.
|
| Even if the messaging says, "hey, just do what you can
| sustainably and don't worry about being perfect, because
| that's way better than nothing," the logic behind it is
| based on nothing more than observations of actions and
| their consequences. There's not a specific threshold where
| your job is _actually_ done, and 'failure' to meet the
| impossible bar generally means mass death.
|
| For someone outside EA, it is very easy to go from that
| observation to "these people are telling me I am basically
| a mass murderer for not helping, and even if I help, I'll
| still in practice be a mass murderer because I'll help
| suboptimally, gee thanks."
|
| Even if very few identifying with the EA community would
| endorse that phrasing or the implied moral judgment, I can
| see how people could end up feeling that way. Not sure what
| to do about it.
| yrimaxi wrote:
| Maybe you are pursuing a different point than mine, but
| my own point was about alienation, not supererogation. I
| don't put EA proponents on such a lofty pedestal.
| rossnordby wrote:
| Yup, started writing my response before I saw yours,
| definitely a different point.
|
| I do think that the implied burden is still alienating in
| a sense and that's what motivated my post (the philosophy
| can just feel really bad in some ways when you get into
| the nitty gritty, and many people don't respond well to
| that), but you were clearly talking about a different
| kind of alienation.
| yrimaxi wrote:
| I guess it's affirming to think that people reject one's
| ethical outlook because it would necessitate such an awe-
| inspiring commitment or burden, rather than for more
| analytic reasons.
| rossnordby wrote:
| It doesn't actually require awe-inspiring commitment and
| dramatic personal sacrifice, and such grand gestures
| would likely end up unsustainable. I just suspect that
| the ideas themselves will tend to sound really hoity-toity
| and holier-than-thou, which isn't helpful in persuasion.
| Case in point: I was deliberately trying to avoid that, and
| it appears I failed badly.
|
| (Edit: reading my first post, it's definitely the case
| that I did not make the connection here explicit: the
| rejection isn't merely 'oh no I can't handle the TRUTH',
| but more like, 'ugh, these guys'. A big part of it is
| social in nature, stacked on top of the underlying
| problem of asking too much.)
|
| A huge part of EA is driven by analysis of effectiveness.
| If I want to be effective, I have to figure out how to be
| effective. If someone provides a valuable insight on how
| to be more effective, it's not a rejection of my ethical
| outlook, but rather useful information.
| yrimaxi wrote:
| A scenario: you want to help people. You have the
| opportunity to get a very well-paying job on Wall Street
| (finance or whatever). You reckon that you would optimize
| your good-output by working more rather than helping
| directly (you don't have the skills or know-how to help
| directly, you also reckon). So you get a well-paying job,
| work 60 hours a week in order to earn more money and
| advance your career, live frugally, and donate 70% of your
| income to charity. At this point you are spent: you cannot
| do any more good beyond those hours without burning out and
| thus hurting your career long-term.
|
| Now you are alienated from your charity: your work in
| finance indirectly helps other people, but you have no way
| of experiencing or connecting to that other than looking at
| statistics and numbers; the lifeworld of your charitable
| work is just some numbers that you get every quarter from
| the various charities that you donate to. You have no
| direct involvement in them.
| yboris wrote:
| Thank you for a concrete example. I think you will agree,
| though, that the outcome you describe isn't inevitable or
| even most likely.
|
| Over time you will likely figure out a good life-work-
| philanthropy balance. Much of the conversation within EA
| is about self-care and long-term planning. If giving 70%
| of your income to charity works for you, that's
| magnificent. But if you realize you need to give less and
| take more care of yourself so you don't burn out, that's
| the appropriate decision.
|
| I gave 50% one year but it didn't work out long term.
| I've been giving 10% for almost a decade and intend to
| ramp up to 20% in the future. My wife does 10% too - it
| works out well for us.
| yrimaxi wrote:
| Burnout wasn't the point. I explicitly disregarded that
| by building into the premise that the hypothetical person
| is not and will not get burned out on that philanthropy
| schedule.
|
| I could just as well have written that they made a
| million a year and only donated 5%--that's beside
| (my/the) point.
|
| I guess we are all so alienated these days that we don't
| find it the least bit _weird_ to work on Wall Street--or
| Main Street--in order to indirectly help other people with
| our money instead of helping _directly_. Or to help people
| by working as a programmer _and donating a lot of your
| salary_ to some school instead of just working there as a
| teacher, helping people directly (to use another example).
| (Oh, that reminds me. I need to take my vitamin D
| supplements right about now. I calculated that being in the
| Sun is not worth my time, so I have to compensate a bit,
| you see.)
| UnFleshedOne wrote:
| Well, in the end, do you want to avoid feeling weird, or
| to help people? Both of those are valid desires, but they
| don't always go together. Sometimes you can find
| something that does both and that's great, but not
| everyone can.
| yrimaxi wrote:
| Describing non-alienation as "feeling good" (as another
| commenter did) or "not feeling weird" is a great way to
| pathologize my observation of how weird it is to work a
| six-figure, ad/surveillance-optimizing job at Google in
| order for there to be more malaria nets in Africa. Trust
| me: I don't believe that EA is sound in any way (or
| "effective", if you like), but in this thread I chose to
| focus on just one aspect of it, indeed its most bizarre
| feature, which apparently isn't bizarre at all to all of
| the Soylent-drinking life-optimizers out there, so my point
| has fallen, as they say, like seeds on barren ground.
| PeterisP wrote:
| What I read in this is essentially an observation that "A
| scenario: you want to help people" does not necessarily
| (and possibly not even in most cases) imply "you want to
| maximize your good-output".
|
| In many (perhaps most?) cases the feeling and motivation
| people have is the intuitive desire to feel good about
| helping others in their society personally, which is in
| many ways different from "true altruism". And it makes
| some sense (from e.g. an evolutionary-psychology
| perspective) to consider that our innate desire to "do
| good" is closely related to building social ties and
| status, and to helping people close to us in various ways
| (kinship, common "tribe-in-the-wide-sense-of-that-word",
| previous relationship history, expected future
| interactions), as opposed to simply helping abstract
| people as much as you can. True altruism is the exception
| (IMHO) rather than the norm; it certainly exists in some
| cases, but most helping of others and charity and doing
| good is driven by "ordinary goodwill" that includes a mix
| of other motivations.
|
| And, of course, if someone's goal is not really altruism,
| then doing more effective altruism won't help them
| achieve their goals. If we ("abstract we") want to
| optimize effective altruism, then instead we should help
| optimize people's efforts within the area where true
| altruism overlaps with their actual goals to "do good"
| (whatever they mean by "do good" since that likely isn't
| exactly the same thing as true altruism); since once we
| get to the proposals which are more altruistic but
| contrary to their true personal goals/values, people are
| just going to reject the whole thing (as most do).
| yrimaxi wrote:
| I keep getting impressed by EA proponents' ability and
| eagerness to frame EA as "true altruism" (an actual quote
| in this case!), as if EA were some purely rational approach
| with no philosophical baggage or assumptions. How utterly
| self-congratulatory.
|
| Another problem I have with EA is how incredibly fragile
| it is: because it is so reductive and narrowly focused,
| you are likely to optimize for the wrong thing (the map
| is never the territory) and might even do more harm than
| good. In the best-case scenario you might do a lot of
| good, though.
| PeterisP wrote:
| Ah, I'm not really an EA proponent, so please don't use
| my arguments as bad examples of their position; that
| wouldn't be fair. My use of "true altruism" isn't a
| proper term; I just needed to somehow contrast two
| different aspects of "altruism-as-understood-in-common-
| language", to differentiate the theoretical concept of
| fully unselfish concern for the welfare of others from
| the (IMHO more popular/realistic) concept that's somewhat
| like "a general habit/desire/concern/action of doing good
| for others with limited (but still some) selfishness,
| because of a mix of motivations only part of which is
| actual altruism".
|
| I mean, IMHO framing "effective altruism" as "true
| altruism" isn't that inaccurate as far as philosophy and
| definitions are concerned - my main criticism of EA is
| that actual altruism (according to a strict definition of
| altruism) is quite rare, so for most people effective
| altruism isn't personally relevant because most people
| (including me) simply aren't truly altruistic; it
| provides a guide on how to maximize something that most
| people (myself included) don't really want to maximize;
| they want to maximize other things, which may have some
| overlap with altruism but diverge from it as you leave
| commonly accepted charity practices and approach various
| maximums.
|
| I do concede that it's definitely good according to most
| value systems (including purely selfish ones) to have
| everyone _else_ in your society be a bit more
| altruistic; everything just works better that way, so
| facilitating various nudges towards altruism is generally
| a Good Thing no matter how altruistic you or I personally
| are.
| IHLayman wrote:
| That is the point though... would you rather "feel good"
| doing charity, or have a positive effect on society?
| Donating cash instead of physically participating in
| charity can be alienating, sure... but certainly more
| effective. Note that many food charities, for decades now
| and especially in the past year, have said: don't send cans
| of food, send cash, because it has the most effect.
| yrimaxi wrote:
| > Donating cash instead of physically participating in
| charity can be alienating, sure... but certainly more
| effective.
|
| Do you have proof? Because whether it "feels good" (or
| rather, feels like you are actually doing something,
| rather than trying to motivate yourself by convincing
| yourself that _the numbers_ are correct and you _are_
| (indirectly) doing something) will influence how much
| good you can do.
|
| Some things--like only eating Soylent or maximizing your
| goodiness-output by working on Wall Street (or wherever
| else)-- _might_ only work well on paper.
|
| In any case _my_ point all along was the alienation
| factor. That's _the_ point, to me. Raise whatever other
| point _you_ would like. (Of course an EA-enthusiast would
| only care about the supposed numbers.)
| notahacker wrote:
| > Donating cash instead of physically participating in
| charity can be alienating, sure... but certainly more
| effective.
|
| It _can_ be more effective sure, especially if the
| physical participation is flying long haul to offer
| considerably less skilled construction labour than
| locally-based hungry people. But a division of labour
| between smart, driven people working outside the third
| sector to fund it and only people incapable of securing
| well paid jobs left inside the sector to spend it is also
| unlikely to lead to better resource allocation. There are
| certainly initiatives that could use a good software lead
| more than they could use a generous portion of a software
| lead's FAANG salary to hire some mediocre contractors,
| for example.
| [deleted]
| yboris wrote:
| I wonder what alternative you would propose. Presumably,
| not doing philanthropy is not an alternative, because
| that would be the most alienating approach.
|
| If it's connection to people you want, why not "Purchase
| Fuzzes and Utilons Separately"? [0] Give enough to
| charities that make you not feel alienated, and then
| donate the rest to charities that are making a greater
| positive impact on the lives of others.
|
| [0] https://www.lesswrong.com/posts/3p3CYauiX8oLjmwRF/pur
| chase-f...
| yrimaxi wrote:
| What alternative? How is that even a question? The answer
| is obvious: do good directly, with your own mind and
| hands, not indirectly. That's the obvious alternative.
| Maybe not everyone has the opportunity to do that, just
| like not everyone has the opportunity to get a non-
| alienating job.
|
| > Give enough to charities that make you not feel
| alienated, and then donate the rest to charities that are
| making a greater positive impact on the lives of others.
|
| You see? Both of these things are still indirect do-
| gooding. Donating to charity? How about _being_ good,
| doing good? Or are you only able to assess the moral
| weight of something if you can read about it in some
| spreadsheet?
| rcoveson wrote:
| I don't see how this is any different. Where does the
| proposition, "do good directly, with your own mind and
| hands", lead? Should the first step not be "consider who
| is most in need"? Should the next step not be "consider
| how best to help them"? And should the last step not be
| "help them in that way"?
|
| It almost seems like you're arguing against examining the
| problem at all. What would the world look like if
| everybody just quit their jobs so they could do good
| "directly"? They'd realize pretty quick they needs planes
| and ships to move people and things, and farmers to grow
| food. Not to mention doctors and chemists to develop and
| administer medicine. If they were smart about it, they'd
| end up allocating their own time to the things they were
| best at, and then liquidate and donate any excess they
| produce.
|
| If you just follow your heart, in the most basal sense,
| you will probably do some good and you will likely feel
| very good about it. Which is great for you, and good for
| those you help. There's nothing wrong with that. But that
| approach will never help those afflicted with malaria,
| because your heart doesn't know about them. Your head has
| to hear about them. And then your head has to tell you
| not to fly down there yourself, because if everybody did
| that then there'd be nobody back here running air traffic
| control or formulating medicine.
| yrimaxi wrote:
| You've set up a convenient dichotomy where one, through
| pure reason alone, arrives at the inevitable conclusion
| that one should "liquidate and donate any excess they
| produce", or else one is merely being driven by pure
| sentiment/feel-goodiness. For some reason, though, it is
| only these tunnel-vision engineer types that seem to be
| sentimentally drawn towards this oh-so-obvious
| conclusion.
|
| (As an example: a socialist will probably not think that
| making the most money possible and then giving a lot of
| it away is the most ethical thing to do.)
|
| And yes, of course my obvious point is that everyone
| should just quit their jobs and travel to Africa.
| rcoveson wrote:
| I'm just trying to give examples, not set up a dichotomy.
| There's certainly a very wide range of ways to be
| charitable. For example, you could be charitable in this
| discussion by not lampooning those who disagree with you as
| "tunnel-vision engineer types."
|
| I'd like to hear more specifically how you think people
| should approach charity. Surely some people actually
| should travel to Africa? And some should not.
|
| I'm not sure why you're being sarcastic about quitting
| one's job and traveling to Africa in a conversation about
| charity. It's not an insane thing to do. The point I was
| trying to make was that not everybody can do it, and that
| some people can actually do more good by just being
| excellent at their current job.
| yboris wrote:
| There are many ways to look at the problem we're
| discussing. One is to think about what the effect on the
| world would be if people follow one strategy over
| another.
|
| The strategy of "help those around you" results in rich
| people who live in rich areas with multi-million-dollar
| homes helping those that live in merely million-dollar
| homes (because that's what's around). And if they venture
| too far geographically, they end up feeling alienated,
| apparently. Furthermore, these people, rather than using
| the tremendous power of their wealth, do something "with
| their own minds and hands" - which is presumably serving
| some soup in a soup kitchen.
|
| My observation is that it's really unfortunate that many
| people feel the need for a personal connection, and
| therefore do less good than they could otherwise. My
| response is to ignore the ill-fitting kluge that is the
| evolution-installed software I have.
|
| When you know that a $3 donation protects 2 people from
| malaria for about 3 years, can you really think you can
| do more good with your hands and mind than to just
| protect those people with the $3 you have?
|
| You can get your warm feeling of having done good through
| doing something, and then use your money to give to cost-
| effective charities regardless of how "alienated" that
| makes you feel.
| PeterisP wrote:
| Perhaps the disconnect here is between fundamentally
| different ways of measuring "good" or "doing good".
|
| Your argument here and EA arguments in general are based
| on an axiomatic assumption that good done to anyone is
| equally valuable, that all people worldwide now (and in
| some analyses, all hypothetical future people) have an
| equal claim on your help.
|
| IMHO this axiom does not match the "built-in moral
| system" of most people. To start with an illustrative
| example (obviously you can imagine many less extreme
| comparisons), for most people, the welfare of their child
| is unquestionably much, much more important than the
| welfare of some other child across the globe. For most
| people saving the life of another child across the globe
| at the cost of the life of their child would not be a
| neutral exchange of things of equal value, it would be a
| horrifically unbalanced "trade". This is completely
| understandable even for undeniably good people doing lots
| of good. So this extreme establishes a baseline that the
| axiom of "saving every life is equally valuable" can not
| be accepted by most people (and accepting that axiom is
| not a requirement for "being good" or "doing good"),
| there is _some_ difference, and the only question is
| about the scale and of that difference, what factors
| apply, etc.
|
| And coming from an (incompatible) axiomatic assumption
| that it's plausible that helping someone in your
| community can be more valuable than protecting two people
| with no connection to you, all these other strategies
| start making some sense.
|
| Looking at this from a Kantian 'moral duty' perspective,
| some people (perhaps including you) have an implied moral
| duty to care about everyone equally. And some people
| have an implied moral duty to care about their community
| _more_ than "strangers". Obviously those two approaches
| are incompatible, but IMHO both are frequently
| encountered, and I don't believe that "good people" and
| "people who do lots of good" always subscribe to the
| first concept of moral duty; there seems to be a lot of
| good works done based on the latter understanding.
| neilparikh wrote:
| > do good directly, with your own mind and hands, not
| indirectly.
|
| How? I live in North America. The people who need the most
| help in the world don't live in North America. How do I
| help them directly "with my own mind and hands"?
|
| I could help those in my city/state/country, which isn't
| unreasonable by any means, and is definitely commendable.
| But this will leave the global poor in just as bad a
| state as they are right now. Who will do good directly
| for them?
|
| As a side note, maybe this is an issue of framing. We've
| been calling EA charity, but another way to view it is as
| wealth distribution from rich countries to poor
| countries. I think from that lens, it becomes obvious (to
| me) that this is not only good, but necessary, because I
| don't think the current global inequality in wealth is
| fair at all. Telling people to stop donating to EA
| charities is effectively telling them to keep the wealth
| in rich countries, rather than having it flow to poor
| countries (who really need it).
| IHLayman wrote:
| Thank you for posting this! This is something I have been
| mulling over in the back of my head for years now and I
| didn't know how to express it, but others have already
| fleshed it out. I have a lot of reading to do.
| ajb wrote:
| What is most interesting to me is that the business model he
| rejected[1] is not just the one of his app, but essentially the
| one used by almost all therapists.
|
| [1] https://github.com/Flaque/quirk: "Unfortunately, in order for
| the business to work and for us to pay ourselves, we needed folks
| to be subscribed for a fair amount of time. But that wasn't the
| case and we honestly should have predicted it given my own
| experience: as people did better, they unsubscribed.
| Unfortunately, the opposite was true as well, if folks weren't
| doing better, but were giving it a good shot, they would stay
| subscribed longer.
|
| So in order to continue Quirk, a future Quirk would need to make
| people feel worse for longer, or otherwise not help the people we
| signed up to help. If the incentives of the business weren't
| aligned with the people, it would have been naive to assume that
| we could easily fix it as the organization grew. We didn't want
| to go down that path, so we pivoted the company."
| [deleted]
| flaque wrote:
| Not at all! Therapists make more in a single session than a
| consumer subscription app would in an entire year, and they're
| limited by what a single therapist can accomplish. Most
| therapists have an entirely booked schedule; they don't have
| nearly the incentive to keep people longer that mobile apps do.
| They literally cannot handle more clients, so from a business
| perspective, they're much better off helping people, getting
| good reviews, and then charging more for the limited time they
| have.
| ajb wrote:
| Although you are probably right that the situation is worse
| for apps, in my experience the incentive of actual therapists
| is still sufficient to prevent them from admitting that their
| client would be better served elsewhere. Limited sample size,
| of course.
| jimkleiber wrote:
| I really appreciate you openly talking about these struggles.
| I built an app back in 2012 called iFeelio for emotional
| micro-journaling and one of the goals I set for it was for me
| (and then others) to get better at expressing how I felt so I
| didn't have to use the app anymore. As you expressed on
| Quirk's Github readme, that doesn't jive with a subscription-
| based model: if people improve, they stop paying. I also
| didn't want to make the app addictive, another thing that
| seems to clash with the subscription model.
|
| One thing I contemplated but didn't have the courage to do
| was to charge a very high initial price to use the app. It
| was on Android at the time and I wanted to charge the max
| price, which I think was like ~$200, to download the app, one
| time.
|
| Did you think about doing something similar? What
| alternatives did you contemplate to the subscription model?
| Is there a way to price it with an anticipated 3-month
| retention or other time-limited retention?
|
| Would love to hear any thoughts you have on this!
| themacguffinman wrote:
| However sound your epistemology may be, this feels like unhelpful
| gatekeeping. A one-line disclaimer ("That doesn't mean that
| _trying_ to help people is bad") falls flat when you spend the
| entire article constructing a special category of _personal
| failure_ for people who fail to effect change: "moral
| incompetence".
|
| This doesn't help anyone learn anything except that apparently if
| you fail, it might be because you didn't actually care about
| succeeding in the first place! Your failure likely involved
| concrete issues that can be learned from and changed; chalking it
| up to "oh well, it turns out I was the problem" is an unproductive
| takeaway and only serves to discourage people who can't or won't
| have the right intentions as you define them. Let's leave the
| mental purity tests aside: modern society works because people
| have the space & incentive to do good regardless of their
| intention.
| sweetheart wrote:
| Why such an extreme dichotomy between the morally competent and
| morally incompetent? The article makes it seem like having an ego
| _at all_ makes one morally incompetent. We can genuinely strive
| to help, and we can also want to glean an egotistical sense of
| self-worth/importance at the same time.
|
| I think that one is morally incompetent when they _only_ strive
| to advance themselves through acts of kindness/helping, or when
| their shallow act of do-gooding harms those who need help more
| than it helps them. But certainly we should be allowed to like
| helping because we feel good about ourselves, right?
| motohagiography wrote:
| Thought this was going to be about how competence is a moral
| virtue, which is a more common use of the phrase. However, if you
| aren't helping anyone, you aren't doing any good, so it's
| tangentially related.
| strofcon wrote:
| This is a weird one for me...
|
| It's odd that we're proxying competency by way of intent.
|
| Would it make more sense to consider the actual actions of
| individuals?
|
| If I work on a problem with the intention to solve it, I've done
| both, and thus (by this essay) am competent for my intent to
| solve the problem, but also incompetent for my intent to _work_
| on solving the problem.
|
| I don't know that I entirely disagree with the author's core
| points, but I don't think this was a very effective piece. Mostly
| because I have to take a wild guess at what those core points
| might be, and then try to tease them out myself.
| waynesonfire wrote:
| this was hard to read; nothing of substance. the author is
| defining terms to attempt some philosophical justification for
| their pivot and falls flat. good for you, one of many.
| neilk wrote:
| I'm glad you recognized that your business model (VC-backed
| startup) wasn't compatible with your goals. For-profit healthcare
| has a lot of the same issues, from top to bottom.
|
| But it seems like you built something that did good for some
| users. I see you've open-sourced it, but did you consider
| pivoting the business model instead, to a non-profit or
| delivering the app through therapists and health providers?
|
| Therapists have the same incentives, and yet they seem to make a
| business of it. They are happy when clients don't need them any
| more.
| sb1752 wrote:
| The term "competence" seems to be throwing people off, which I
| can understand, but I think the point here is very insightful.
| It's important to separate those with a hero complex from those
| who are more morally sincere. Greater moral sincerity means you're
| more concerned with solving the problem than being the one to
| solve the problem. It's not about you, it's about the problem and
| seeing it solved. I think this is an especially relevant point in
| today's culture.
___________________________________________________________________
(page generated 2021-01-05 23:00 UTC)