[HN Gopher] EA and the Current Funding Situation
       ___________________________________________________________________
        
       EA and the Current Funding Situation
        
       Author : kvee
       Score  : 49 points
       Date   : 2022-05-10 16:45 UTC (6 hours ago)
        
 (HTM) web link (forum.effectivealtruism.org)
 (TXT) w3m dump (forum.effectivealtruism.org)
        
       | syngrog66 wrote:
        | EA stands for Electronic Arts. In my planet's software field. For
        | decades. It was a weird shock to click through on that article
        | link.
       | ;-)
       | 
       | "Kids... get off my lawn!"
        
       | quirkot wrote:
       | Very disappointed at missing an AMAZING opportunity to say "shut
       | up and multiply" about the funding
        
       | drumhead wrote:
       | I thought this was an article about Electronic Arts and a scoop
        | about liquidity issues. Unfortunately not the case...
        
         | benbristow wrote:
         | Especially as they've just lost the FIFA licence, renaming the
         | next football (soccer) games to "EA FC" as FIFA were asking for
         | a billion for the licence to use their name.
         | 
         | https://www.theguardian.com/games/2022/may/10/electronic-art...
        
         | Arrath wrote:
         | I figured it was either that or Early Access and its impact on
         | development schedules, milestones, feedback, etc.
        
         | mdtrooper wrote:
          | Yes, hahaha me too. I clicked happily because I thought "yes,
          | EA, a big devil company who has never released a game for Linux
          | in their history, thanks karma".
          | 
          | But no.
        
           | DonHopkins wrote:
            | Actually, EA released the source code and binary Linux
            | version of the original SimCity under GPL-3, which I ported
            | to various Unix platforms including Linux and the OLPC.
           | 
            | https://medium.com/@donhopkins/open-sourcing-simcity-58470a2...
           | 
           | >Open Sourcing SimCity, by Chaim Gingold. Excerpt from page
           | 289-293 of "Play Design", a dissertation submitted in partial
           | satisfaction of the requirements for the degree of Doctor in
           | Philosophy in Computer Science by Chaim Gingold.
           | 
           | Granted, EA's QA folks had never QA'ed a Linux game before,
           | so I had to walk them through installing VMWare on Windows
           | and gave them a Linux operating system image to test it with,
           | but it did pass QA, and they released it in binary on the
           | OLPC as well as in source code form.
           | 
           | https://github.com/SimHacker/micropolis
           | 
           | Here's the contract with all the legal details:
           | 
           | https://donhopkins.com/home/olpc-ea-contract.pdf
        
       | Recursing wrote:
       | @dang I would suggest editing the title, changing EA to
       | "Effective Altruism"
       | 
        | I'm also really curious about HN's view of Effective Altruism,
        | and how individuals here handle their charitable giving.
        
       | PragmaticPulp wrote:
       | I really hope this serves as an inflection point for the
       | Effective Altruism crowd to pivot away from some of the weirder
       | things they embraced in the past.
       | 
       | I love the idea of Effective Altruism and what they're trying to
       | accomplish, but in practice it frequently turns into funding for
       | intellectuals to sit around and pontificate about AI risk and
       | other things, as opposed to actually going out into the world and
       | doing altruistic acts.
       | 
       | Some of their previous grants are downright laughable, like
       | spending $30K to distribute copies of rationalist Harry Potter
       | fanfiction ( _Harry Potter and the Methods of Rationality_ )
       | despite the fact that it's freely available online. (Source:
       | https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/...
       | Ctrl+F "Harry Potter")
       | 
       | The EA movement has become a bit of a joke in many circles after
       | making too many moves like this, so hopefully this serves as a
       | wake-up call for them to start getting more practical.
        
         | SilasX wrote:
         | You're saying that the existence of a book being freely
         | available online means that it's always pointless to give out
         | physical copies? Do you make the same criticism of eg Dolly
         | Parton's program to give books to families for free, in cases
         | where the copyright on the book has expired?
        
         | casebash wrote:
         | I guess the universe is allowed to be crazy.
         | 
          | The universe is allowed to decide that Donald Trump will be
          | president in 2016 despite all the reasons to think it was crazy
          | that he would become president. The universe is allowed to
          | decide that Volodymyr Zelenskyy will be elected president of
          | Ukraine on the basis of having starred as the president in a
          | comedy.
         | 
         | And I guess the universe is allowed to decide that a piece of
         | fanfic that is desperately in need of an editor will be
         | successful at recruiting talent for the rationalist or AI
         | Safety communities.
         | 
         | I expect you are probably skeptical of AI Safety, but then your
         | criticism would be a criticism of the final objective, not the
         | method (distribution of HPMoR) used to achieve the objective.
        
         | kvee wrote:
         | For whatever it's worth, HPMOR introduced me to Effective
         | Altruism and lesswrong, etc.
         | 
          | Though I found it online rather than through the $30K
          | distribution, as a result of being introduced to Effective
          | Altruism there I have since donated much more than $30K to
          | highly effective charities and led hundreds of other people to
          | start donating to highly effective charities too. I also
          | started a company inspired by the EA concept of working on
          | "neglected problems" -
          | https://80000hours.org/articles/problem-framework/#definitio...
         | 
          | I might be wrong, but I suspect there's a decent chance there
          | was at least 1 person like me among the recipients of that $30k
          | distribution.
        
         | vorpalhex wrote:
         | > despite the fact that it's freely available online.
         | 
         | The poor frequently do not have printers or significant access
         | to consistently functional digital devices.
        
           | defen wrote:
           | They gave copies to the 650 people who advanced far enough in
           | IMO and EGMO (International Math Olympiad / European Girls'
           | Mathematical Olympiad) to be considered "medalists". It's
           | very unlikely that those people don't have internet access.
        
           | PragmaticPulp wrote:
           | > The poor frequently do not have printers or significant
           | access to consistently functional digital devices.
           | 
           | If a group of people is so poor that they don't have access
           | to digital devices, maybe printing out rationalist Harry
           | Potter fan fiction shouldn't be at the top of the priority
           | list for a charity trying to spend money to improve their
           | lives.
        
             | vorpalhex wrote:
             | I think teaching rationalism with approachable and
             | interesting materials is fine and reasonable.
             | 
              | I suppose we could be equally upset that Sesame Street
              | uses puppets?
        
           | stonogo wrote:
            | In the US and most civilized nations they have public
            | library access, which can easily be used to read Harry
            | Potter fanfiction. It's not clear how they benefit from a
            | printed copy.
        
             | vorpalhex wrote:
             | And how do you get to the library?
        
         | justinpombrio wrote:
         | There are multiple EA groups, but I think the most common place
         | to donate is GiveWell? Here's what its donation page looks
         | like:
         | 
          | https://secure.givewell.org/
          | 
          |   Let GiveWell direct your donation
          |     GiveWell's Maximum Impact Fund
          |   Support GiveWell's top charities
          |     Malaria Consortium's seasonal malaria chemoprevention
          |     program
          |     Against Malaria Foundation
          |     Helen Keller International's vitamin A supplementation
          |     program
          |     SCI Foundation (Schistosomiasis Control Initiative)
          |     Sightsavers' deworming program
          |     New Incentives
          |     Evidence Action's Deworm the World Initiative
          |     END Fund's deworming program
          |     GiveDirectly
         | 
         | The recent grants by their Maximum Impact Fund are listed here:
         | 
         | https://www.givewell.org/maximum-impact-fund
         | 
         | the largest of which was $7.8 million to "Sightsavers --
         | Deworming in Nigeria and Cameroon (February 2022)"
         | 
         | Please don't let the fact that some people are concerned about
         | AI safety detract from EA's overarching goal of doing the most
         | good per dollar donated.
        
           | PragmaticPulp wrote:
            | Right! My point is that there _are_ good aspects to the
            | Effective Altruism movement, but historically they've done a
            | disservice by also embracing some fringe stuff under the
            | Effective Altruism banner.
           | 
           | Like I said in my comment, I'm hoping this blog post is a
           | signal that they're going to start maturing the organization
           | a bit more and hopefully distance themselves from some of the
           | weirdness that tries to ride the coattails of the Effective
           | Altruism movement.
        
             | justinpombrio wrote:
             | Gotta distance themselves from those weirdos!
             | 
             | Why aren't EA organizations allowed to be concerned about
              | AI safety? Sure, there's nothing dangerous _yet_, but
             | surely the trend towards more capable AI has become fairly
             | clear these past few years, and at some point it will get
             | concerning even if only in the wrong hands, and it would be
             | better to be prepared for that beforehand?
        
         | chrischen wrote:
         | > Some of their previous grants are downright laughable, like
         | spending $30K to distribute copies of rationalist Harry Potter
         | fanfiction (Harry Potter and the Methods of Rationality)
         | despite the fact that it's freely available online. (Source:
         | https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/...
         | Ctrl+F "Harry Potter")
         | 
         | Is this how money is laundered in non profits?
        
           | [deleted]
        
           | DrRobinson wrote:
            | IIRC they're not allowed to make money from Harry Potter
            | fan-fiction, which this is (one that teaches effective
            | altruism concepts and rationality), so I don't see how this
            | would be a way to launder money.
           | 
            | It's also worth noting that this was 28k out of 923k USD, so
            | not too big a part of it (~3%). Of course the money should be
           | used for other things if it's estimated to be more effective
           | though.
           | 
            | The reasoning is in the post you linked, which I think gives
            | a pretty good explanation. The copies were given to 650 math
            | medalists (so ~43 USD/person), whom they deemed a good
            | target audience for this.
        
         | Veedrac wrote:
         | Apologies if this is curt, but if you think one of the most
         | impactful altruism movements in the world with major focuses in
         | global health and long term outcomes is a joke because they do
         | outreach in odd (but clearly effective) ways, you have missed
         | the point of effective altruism altogether.
         | 
         | You don't get brownie points for how steadfastly you stick to
         | your Overton window. You get brownie points for saving lives.
        
           | PragmaticPulp wrote:
           | > You don't get brownie points for how steadfastly you stick
           | to your Overton window. You get brownie points for saving
           | lives.
           | 
           | ...by distributing copies of rationalist Harry Potter fan
           | fiction?
           | 
           | I agree that charities that save lives are doing good things.
           | That's why I brought up the fact that some of their spending
           | has historically been kind of ludicrously off base.
        
             | Veedrac wrote:
             | It sounds less stupid when you count the fraction of EA
             | funding and activity that HPMOR was already in the causal
             | chain of. It's not how I'd do outreach, but it's not like
             | the choice was arbitrary.
        
               | Recursing wrote:
               | Honestly curious, how would you do outreach?
        
               | Veedrac wrote:
               | I don't know, sorry.
        
             | rcoveson wrote:
              | You know Maus[0]? It's a _comic book_ about _Disney
              | animals_. Isn't that just laughable? Wouldn't teaching
              | such a thing in schools--with the aim of imparting serious
              | lessons--be "ludicrously off base"?
             | 
             | Derision for fan fiction and/or Harry Potter can't be the
             | beginning and end of your argument for why HPMOR isn't a
             | good book to introduce nerdy school kids to a handful of
             | valuable ideas.
             | 
             | 0. https://en.wikipedia.org/wiki/Maus
        
             | lucaparodi wrote:
             | So you're literally basing your entire argument to
             | disqualify a movement with clear and public data about the
             | positive impact that has created on a single $30k grant?
        
         | ineptech wrote:
         | You left out that they were distributing the book to
         | participants in math and science competitions, and that their
         | reasoning was that they think the book is effective at teaching
         | scientific reasoning. Donating science books to promising
         | students sounds a lot better than distributing fanfiction.
         | 
         | You may disagree that HPMOR is effective at teaching science,
         | but at least they did it openly. That page has a detailed
         | discussion of the pros and cons of this grant, including the
         | possibility that it could cause "reputational risk" (i.e.
         | people like you making fun of them for it). AFAICT they took
         | that seriously, and decided to do it anyway because they really
         | do believe that it's effective at teaching science.
        
           | zozbot234 wrote:
           | Ah, but is it _the most effective_ book? In the absence of
           | very real countervailing evidence, choosing that book would
           | be a pretty clear-cut case of systematic bias.
        
         | basedgod wrote:
         | His blog post talks about the importance of optics and
         | effective messaging. But the only thing that stood out to me
         | that they're actually doing is researching the dangers of AI
         | and other neckbeard concerns that have nothing to do with the
         | problems and sufferings of actual human beings today.
         | 
          | I have only a vague notion of what EA does (wasn't one of
          | their initial proposals that deworming medications should be
          | sent to rural Africa, for the disproportionate impact it could
          | have?) and thought their org was centered around unsexy issues
          | like this.
         | 
          | My takeaway is that their main success is raising huge
          | amounts of money, with nothing to show for it - at least,
          | nothing they've even briefly mentioned in the blog post.
         | 
         | from wikipedia: "One of the key events of the Google conference
         | was a moderated panel on existential risk from artificial
         | general intelligence.[6] Panel member Stuart Russell stated
         | that AI research should be about "building intelligent systems
         | that benefit the human race".[5] Vox writer Dylan Matthews,
         | while praising some aspects of the conference, criticized its
         | perceived focus on existential risk, potentially at the expense
         | of more mainstream causes like fighting extreme poverty."
         | 
         | And there's no mention of anything this org has actually done
         | to help anyone, other than getting people to donate
        
           | [deleted]
        
           | [deleted]
        
           | PoignardAzur wrote:
            |  _> they're actually doing is researching the dangers of AI
            | and other neckbeard concerns that have nothing to do with
            | the problems and sufferings of actual human beings today._
           | 
           | This is really the kind of comment I'd like not to see on HN.
           | 
           | "Neckbeard concern" implies that... what, people worrying
           | about AI risk are worrying for fun? Because they're socially
           | awkward nerds disconnected from reality?
           | 
           | This reminds me of https://xkcd.com/743/
           | 
           | Worrying about the future consequences of incoming
           | technological developments always looks like "neckbeard
           | concerns" until it's too late to do anything about it.
           | 
            | And thus here we are, worrying about climate change and
            | supply chain sovereignty and corporate control of
            | information and wondering why nobody took sensible measures
            | 20 years ago.
           | 
            |  _> And there's no mention of anything this org has actually
            | done to help anyone, other than getting people to donate_
           | 
           | Note that this is an internal post from an individual
           | suggesting organizational changes, not an official public-
           | facing communication.
           | 
           | As sibling comments pointed out, they do have posts listing
           | success stories.
        
           | lkbm wrote:
           | > wasn't one of their initial proposals that deworming
           | medications should be sent to rural Africa
           | 
            | Yes, and it's heavily funded by EA givers (yours truly
            | included). People like to harp on the weird edge cases, but
            | Givewell.org is directing hundreds of millions of dollars
            | per year[0], and it's pretty much entirely things like
            | deworming, malaria prevention, etc.[1]
           | 
           | If you visit effectivealtruism.org you'll see it's not a
           | bunch of articles about AI research. It's about convincing
           | people to donate in ways that have the most positive impact,
           | and includes a big "What has the effective altruism community
           | done?" section that's mostly the standard "ensure everyone
           | has food, water, basic healthcare, etc." The key is that
           | there's an emphasis on making sure you're _actually
           | improving_ those metrics as much as possible, rather than
           | whatever has the flashiest marketing.
           | 
           | There are a bunch of people talking about AI safety and other
           | weirder things, but we mostly just look at research to figure
           | out what improves lives and do that.
           | 
           | [0] https://www.givewell.org/about/impact
           | 
           | [1] https://www.givewell.org/charities/top-charities
        
           | DrRobinson wrote:
           | > other than getting people to donate
           | 
            | If that's the most effective way of helping people, why
            | would they not do that? They do more than that, though: they
            | fund projects of many different kinds. They also coach
            | people to go into fields where they can have a positive
            | impact, for example.
           | 
           | > wasn't one of their initial proposals that deworming
           | medications should be sent to rural Africa, for its
           | disproportionate impact it could have?
           | 
           | They have different funds:
           | 
           | - Global Health and Development Fund [0]
           | 
           | - Animal Welfare Fund [1]
           | 
           | - Long-Term Future Fund[2]
           | 
           | - Effective Altruism Infrastructure Fund[3]
           | 
           | Deworming would go under "Global health and development" (if
           | it's estimated to be the most effective) and AI under "Long-
           | term future fund".
           | 
           | ---
           | 
            | 0. https://funds.effectivealtruism.org/funds/global-development
           | 
           | 1. https://funds.effectivealtruism.org/funds/animal-welfare
           | 
           | 2. https://funds.effectivealtruism.org/funds/far-future
           | 
           | 3. https://funds.effectivealtruism.org/funds/ea-community
        
         | zozbot234 wrote:
          | It's amusing that people still complain about AI ethics and
          | debiasing (aka "alignment") being an early focus of EA, long
          | after it's become an increasingly relevant research field,
          | even with controversy regularly making the tech news. If
          | anything, that AI focus counts as a success story for
          | effective altruism, as much as the similar case of pandemic
          | preparedness.
        
           | btrettel wrote:
           | One thing I've wondered is when Effective Altruists are going
           | to stop calling AI ethics _neglected_. AI ethics as a field
           | is now mainstream, in my view, yet people still keep calling
           | it neglected. I personally think Effective Altruists should
           | focus less on AI ethics now as it has gained mainstream
           | attention and focus more on other currently neglected topics.
        
             | zozbot234 wrote:
              | By all available evidence it's horribly neglected at Big
              | Tech firms: we keep seeing examples of AI researchers
              | there not being taken seriously despite the quality of
              | their work.
        
               | btrettel wrote:
               | > we keep seeing examples of AI researchers there not
               | being taken seriously despite the quality of their work
               | 
                | This phenomenon is not unique to AI safety. It is often
                | driven by management incentives.
               | 
                | I agree that AI safety is not a solved problem in
                | practice, but that doesn't mean it's _neglected_. AI
                | safety gets a lot of attention, and it is important, but
                | it's not "neglected" in the sense that _too few_ people
                | work on it or that there isn't enough money in it [0]. I
               | think the marginal impact of a new person in AI safety is
               | roughly zero, all else equal. AI safety folks would
               | probably do best to change their priorities away from
               | "ivory tower" sort of issues towards the practical issues
               | you bring up.
               | 
                | [0] EAs typically use people or money to measure
                | neglectedness.
                | https://80000hours.org/articles/problem-framework/#how-to-as...
        
             | Recursing wrote:
             | What do you think are currently neglected topics?
        
               | btrettel wrote:
               | To give just one example, I think Effective Altruists
               | focus far too little on meta-science. Much of what they
               | want to do depends on meta-science, and in science it's
               | quite difficult to fund that sort of work, so it seems
               | odd to me that it's not considered a top priority on
               | 80,000 Hours.
               | 
               | I'm sure there are other areas, but I haven't put an
               | effort into listing them. Global priorities work is
               | pretty hard.
               | 
                | Also, apparently 80,000 Hours now considers AI safety
                | only "somewhat neglected", so perhaps the EAs agree with
                | me more than I thought.
                | https://80000hours.org/problem-profiles/positively-shaping-a...
        
         | [deleted]
        
       ___________________________________________________________________
       (page generated 2022-05-10 23:01 UTC)